WorldWideScience

Sample records for benchmark dose modeling

  1. EPA's Benchmark Dose Modeling Software

    Science.gov (United States)

    The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods in EPA's human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...

  2. A unified framework for benchmark dose estimation applied to mixed models and model averaging

    DEFF Research Database (Denmark)

    Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.

    2013-01-01

    This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both ... for hierarchical data structures, reflecting increasingly common types of assay data. We illustrate the usefulness of the methodology by means of a cytotoxicology example where the sensitivity of two types of assays is evaluated and compared. By means of a simulation study, we show that the proposed framework ...

  3. Current modeling practice may lead to falsely high benchmark dose estimates.

    Science.gov (United States)

    Ringblom, Joakim; Johanson, Gunnar; Öberg, Mattias

    2014-07-01

    Benchmark dose (BMD) modeling is increasingly used as the preferred approach to define the point of departure for health risk assessment of chemicals. As data are inherently variable, there is always a risk of selecting a model whose lower confidence bound of the BMD (BMDL), contrary to expectation, exceeds the true BMD. The aim of this study was to investigate how often and under what circumstances such anomalies occur under current modeling practice. Continuous data were generated from a realistic dose-effect curve by Monte Carlo simulations using four dose groups and a set of five different dose placement scenarios, group sizes between 5 and 50 animals and coefficients of variation of 5-15%. The BMD calculations were conducted using nested exponential models, as most BMD software use nested approaches. "Non-protective" BMDLs (higher than the true BMD) were frequently observed, in some scenarios reaching 80%. The phenomenon was mainly related to the selection of the non-sigmoidal exponential model (Effect = a·e^(b·dose)). In conclusion, non-sigmoidal models should be used with caution as they may underestimate the risk, illustrating that awareness of the model selection process and sound identification of the point of departure is vital for health risk assessment. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
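
The simulation idea in this abstract can be reproduced in miniature. The sketch below (an illustration only, not the authors' protocol: dose placement, group size, and seed are arbitrary, and it tracks the fitted BMD point estimate rather than the BMDL for brevity) fits the non-sigmoidal exponential model Effect = a·e^(b·dose) to simulated continuous data and counts how often the estimate lands above the true BMD:

```python
import numpy as np

def exponential(dose, a, b):
    # Non-sigmoidal exponential model: Effect = a * exp(b * dose)
    return a * np.exp(b * dose)

def bmd_exponential(b, bmr=0.10):
    # BMD for a relative-change benchmark response:
    # a*exp(b*BMD) = a*(1 + BMR)  =>  BMD = ln(1 + BMR) / b
    return np.log1p(bmr) / b

rng = np.random.default_rng(1)
true_a, true_b, bmr = 10.0, 0.05, 0.10
true_bmd = bmd_exponential(true_b, bmr)

doses = np.array([0.0, 1.0, 3.0, 10.0])    # four dose groups (assumed placement)
n_per_group, cv, n_sim = 10, 0.10, 200     # 10% coefficient of variation

higher_than_true = 0
for _ in range(n_sim):
    d = np.repeat(doses, n_per_group)
    mu = exponential(d, true_a, true_b)
    y = rng.normal(mu, cv * mu)
    b_hat, _log_a = np.polyfit(d, np.log(y), 1)   # ln(Effect) = ln a + b*dose
    if bmd_exponential(b_hat, bmr) > true_bmd:
        higher_than_true += 1

print(f"true BMD = {true_bmd:.3f}")
print(f"fraction of fitted BMDs above the true BMD: {higher_than_true / n_sim:.2f}")
```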

  4. Nonparametric estimation of benchmark doses in environmental risk assessment

    Science.gov (United States)

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits' small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
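
A minimal version of this isotonic approach can be sketched as follows. The pool-adjacent-violators fit, the interpolation rule for the BMD, the percentile bootstrap, and the data are all illustrative choices; the authors' estimator and confidence-limit construction differ in detail:

```python
import numpy as np

def pava(y, w):
    # Pool-adjacent-violators: weighted least-squares isotonic
    # (non-decreasing) regression.
    blocks = [[float(yi), float(wi), 1] for yi, wi in zip(y, w)]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:
            y0, w0, c0 = blocks[i]
            y1, w1, c1 = blocks[i + 1]
            blocks[i] = [(y0 * w0 + y1 * w1) / (w0 + w1), w0 + w1, c0 + c1]
            del blocks[i + 1]
            i = max(i - 1, 0)
        else:
            i += 1
    return np.array([v for v, _, c in blocks for _ in range(c)])

def bmd_isotonic(doses, events, n, bmr=0.10):
    p = pava(events / n, n)                 # monotone response probabilities
    extra = (p - p[0]) / (1.0 - p[0])       # extra risk over background
    if extra[-1] < bmr:
        return np.nan                       # BMR never reached in dose range
    # Linear interpolation to the dose where extra risk first reaches BMR
    # (extra is non-decreasing; ties are tolerated by np.interp).
    return float(np.interp(bmr, extra, doses))

# Illustrative quantal data: dose, group size, number of responders.
doses  = np.array([0.0, 10.0, 50.0, 150.0, 400.0])
n      = np.array([50, 50, 50, 50, 50])
events = np.array([2, 5, 14, 25, 40])

bmd = bmd_isotonic(doses, events, n)

# Percentile bootstrap lower confidence limit on the BMD.
rng = np.random.default_rng(0)
boot = [bmd_isotonic(doses, rng.binomial(n, events / n), n) for _ in range(500)]
bmdl = float(np.percentile([b for b in boot if not np.isnan(b)], 5))
print(f"BMD ~ {bmd:.1f}, BMDL ~ {bmdl:.1f}")
```

The appeal of the nonparametric route is visible here: no functional form is assumed beyond monotonicity of the dose-response.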

  5. Estimate of safe human exposure levels for lunar dust based on comparative benchmark dose modeling.

    Science.gov (United States)

    James, John T; Lam, Chiu-Wing; Santana, Patricia A; Scully, Robert R

    2013-04-01

    Brief exposures of Apollo astronauts to lunar dust occasionally elicited upper respiratory irritation; however, no limits were ever set for prolonged exposure to lunar dust. The United States and other spacefaring nations intend to return to the moon for extensive exploration within a few decades. In the meantime, habitats for that exploration, whether mobile or fixed, must be designed to limit human exposure to lunar dust to safe levels. Herein we estimate safe exposure limits for lunar dust collected during the Apollo 14 mission. We instilled three respirable-sized (∼2 μm mass median diameter) lunar dusts (two ground and one unground) and two standard dusts of widely different toxicities (quartz and TiO₂) into the respiratory system of rats. Rats in groups of six were given 0, 1, 2.5 or 7.5 mg of the test dust in a saline-Survanta® vehicle, and biochemical and cellular biomarkers of toxicity in lung lavage fluid were assayed 1 week and 1 month after instillation. By comparing the dose-response curves of sensitive biomarkers, we estimated safe exposure levels for astronauts and concluded that unground lunar dust and dust ground by two different methods were not toxicologically distinguishable. The safe exposure estimates were 1.3 ± 0.4 mg/m³ (jet-milled dust), 1.0 ± 0.5 mg/m³ (ball-milled dust) and 0.9 ± 0.3 mg/m³ (unground, natural dust). We estimate that 0.5-1 mg/m³ of lunar dust is safe for periodic human exposures during long stays in habitats on the lunar surface.

  6. Benchmark Dose Modeling Estimates of the Concentrations of Inorganic Arsenic That Induce Changes to the Neonatal Transcriptome, Proteome, and Epigenome in a Pregnancy Cohort.

    Science.gov (United States)

    Rager, Julia E; Auerbach, Scott S; Chappell, Grace A; Martin, Elizabeth; Thompson, Chad M; Fry, Rebecca C

    2017-10-16

    Prenatal inorganic arsenic (iAs) exposure influences the expression of critical genes and proteins associated with adverse outcomes in newborns, in part through epigenetic mediators. The doses at which these genomic and epigenomic changes occur have yet to be evaluated in the context of dose-response modeling. The goal of the present study was to estimate iAs doses that correspond to changes in transcriptomic, proteomic, epigenomic, and integrated multi-omic signatures in human cord blood through benchmark dose (BMD) modeling. Genome-wide DNA methylation, microRNA expression, mRNA expression, and protein expression levels in cord blood were modeled against total urinary arsenic (U-tAs) levels from pregnant women exposed to varying levels of iAs. Dose-response relationships were modeled in BMDExpress, and BMDs representing 10% response levels were estimated. Overall, DNA methylation changes were estimated to occur at lower exposure concentrations in comparison to other molecular endpoints. Multi-omic module eigengenes were derived through weighted gene co-expression network analysis, representing co-modulated signatures across transcriptomic, proteomic, and epigenomic profiles. One module eigengene was associated with decreased gestational age occurring alongside increased iAs exposure. Genes/proteins within this module eigengene showed enrichment for organismal development, including potassium voltage-gated channel subfamily Q member 1 (KCNQ1), an imprinted gene showing differential methylation and expression in response to iAs. Modeling of this prioritized multi-omic module eigengene resulted in a BMD(BMDL) of 58(45) μg/L U-tAs, which was estimated to correspond to drinking water arsenic concentrations of 51(40) μg/L. Results are in line with epidemiological evidence supporting effects of prenatal iAs at comparable exposure levels and demonstrate that iAs exposure influences neonatal outcome-relevant transcriptomic, proteomic, and epigenomic profiles.

  7. A Web-Based System for Bayesian Benchmark Dose Estimation.

    Science.gov (United States)

    Shao, Kan; Shapiro, Andrew J

    2018-01-11

    Benchmark dose (BMD) modeling is an important step in human health risk assessment and is used as the default approach to identify the point of departure for risk assessment. A probabilistic framework for dose-response assessment has been proposed and advocated by various institutions and organizations; therefore, a reliable tool is needed to provide distributional estimates for BMD and other important quantities in dose-response assessment. We developed an online system for Bayesian BMD (BBMD) estimation and compared results from this software with U.S. Environmental Protection Agency's (EPA's) Benchmark Dose Software (BMDS). The system is built on a Bayesian framework featuring the application of Markov chain Monte Carlo (MCMC) sampling for model parameter estimation and BMD calculation, which makes the BBMD system fundamentally different from the currently prevailing BMD software packages. In addition to estimating the traditional BMDs for dichotomous and continuous data, the developed system is also capable of computing model-averaged BMD estimates. A total of 518 dichotomous and 108 continuous data sets extracted from the U.S. EPA's Integrated Risk Information System (IRIS) database (and similar databases) were used as testing data to compare the estimates from the BBMD and BMDS programs. The results suggest that the BBMD system may outperform the BMDS program in a number of aspects, including fewer failed BMD and BMDL calculations and estimates. The BBMD system is a useful alternative tool for estimating BMD with additional functionalities for BMD analysis based on the most recent research. Most importantly, the BBMD has the potential to incorporate prior information to make dose-response modeling more reliable and can provide distributional estimates for important quantities in dose-response assessment, which greatly facilitates the current trend for probabilistic risk assessment. https://doi.org/10.1289/EHP1289.
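
The core idea, MCMC sampling of model parameters followed by a posterior distribution for the BMD, can be illustrated with a hand-written random-walk Metropolis sampler for a quantal-linear model. The data, priors, and tuning below are assumptions for illustration; the BBMD system itself uses more sophisticated machinery:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative dichotomous data: dose, group size, number responding.
dose   = np.array([0.0, 5.0, 25.0, 100.0])
n      = np.array([20, 20, 20, 20])
events = np.array([1, 3, 8, 17])

def log_post(theta):
    # Quantal-linear model P(d) = g + (1-g)*(1 - exp(-b*d)), with the
    # background g on the logit scale and the slope b on the log scale,
    # plus a vague normal prior on the unconstrained parameters.
    g = 1.0 / (1.0 + np.exp(-theta[0]))
    b = np.exp(theta[1])
    p = np.clip(g + (1.0 - g) * (1.0 - np.exp(-b * dose)), 1e-12, 1 - 1e-12)
    loglik = np.sum(events * np.log(p) + (n - events) * np.log(1.0 - p))
    return loglik - 0.5 * np.sum(theta**2) / 10.0**2

# Random-walk Metropolis sampling of the posterior.
theta = np.array([-2.0, -3.0])
lp = log_post(theta)
draws = []
for i in range(20000):
    prop = theta + rng.normal(0.0, 0.15, size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if i >= 5000 and i % 10 == 0:        # discard burn-in, then thin
        draws.append(theta[1])

# Posterior distribution of the BMD for 10% extra risk:
# 1 - exp(-b*BMD) = BMR  =>  BMD = -ln(1 - BMR) / b
betas = np.exp(np.array(draws))
bmd_post = -np.log(1.0 - 0.10) / betas
print(f"posterior median BMD: {np.median(bmd_post):.1f}")
print(f"BMDL (5th percentile): {np.percentile(bmd_post, 5):.1f}")
```

A distributional BMD estimate like `bmd_post` is exactly what the probabilistic framework calls for: the BMDL is simply a lower quantile of the posterior rather than a separately derived confidence bound.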

  8. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  9. Application of benchmark dose modeling to protein expression data in the development and analysis of mode of action/adverse outcome pathways for testicular toxicity.

    Science.gov (United States)

    Chepelev, Nikolai L; Meek, M E Bette; Yauk, Carole Lyn

    2014-11-01

    Reliable quantification of gene and protein expression has potential to contribute significantly to the characterization of hypothesized modes of action (MOA) or adverse outcome pathways for critical effects of toxicants. Quantitative analysis of gene expression by benchmark dose (BMD) modeling has been facilitated by the development of effective software tools. In contrast, protein expression is still generally quantified by a less robust effect level (no or lowest [adverse] effect levels) approach, which minimizes its potential utility in the consideration of dose-response and temporal concordance for key events in hypothesized MOAs. BMD modeling is applied here to toxicological data on testicular toxicity to investigate its potential utility in analyzing protein expression relevant to the proposed MOA to inform human health risk assessment. The results illustrate how the BMD analysis of protein expression in animal tissues in response to toxicant exposure: (1) complements other toxicity data, and (2) contributes to consideration of the empirical concordance of dose-response relationships, as part of the weight of evidence for hypothesized MOAs to facilitate consideration and application in regulatory risk assessment. Lack of BMD analysis in proteomics has likely limited its use for these purposes. This paper illustrates the added value of BMD modeling to support and strengthen hypothetical MOAs as a basis to facilitate the translation and uptake of the results of proteomic research into risk assessment. Copyright © 2014 Her Majesty the Queen in Right of Canada. Journal of Applied Toxicology © 2014 John Wiley & Sons, Ltd.

  10. Effects of exposure imprecision on estimation of the benchmark dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    2004-01-01

    In regression analysis failure to adjust for imprecision in the exposure variable is likely to lead to underestimation of the exposure effect. However, the consequences of exposure error for determination of safe doses of toxic substances have so far not received much attention. The benchmark...... approach is one of the most widely used methods for development of exposure limits. An important advantage of this approach is that it can be applied to observational data. However, in this type of data, exposure markers are seldom measured without error. It is shown that, if the exposure error is ignored......, then the benchmark approach produces results that are biased toward higher and less protective levels. It is therefore important to take exposure measurement error into account when calculating benchmark doses. Methods that allow this adjustment are described and illustrated in data from an epidemiological study...

  11. Categorical Regression and Benchmark Dose Software 3.0

    Science.gov (United States)

    The objective of this full-day course is to provide participants with interactive training on the use of the U.S. Environmental Protection Agency’s (EPA) Benchmark Dose software (BMDS, version 3.0, released fall 2018) and Categorical Regression software (CatReg, version 3.1...

  12. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in the literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html

  13. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  14. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...

  15. Benchmarking

    OpenAIRE

    Meylianti S., Brigita

    1999-01-01

    Benchmarking has different meanings to different people. There are five types of benchmarking, namely internal benchmarking, competitive benchmarking, industry/functional benchmarking, process/generic benchmarking and collaborative benchmarking. Each type of benchmarking has its own advantages as well as disadvantages. Therefore it is important to know what kind of benchmarking is suitable for a specific application. This paper will discuss those five types of benchmarking in detail, includ...

  16. Introduction to benchmark dose methods and U.S. EPA's benchmark dose software (BMDS) version 2.1.1

    International Nuclear Information System (INIS)

    Davis, J. Allen; Gift, Jeffrey S.; Zhao, Q. Jay

    2011-01-01

    Traditionally, the No-Observed-Adverse-Effect-Level (NOAEL) approach has been used to determine the point of departure (POD) from animal toxicology data for use in human health risk assessments. However, this approach is subject to substantial limitations that have been well defined, such as strict dependence on the dose selection, dose spacing, and sample size of the study from which the critical effect has been identified. Also, the NOAEL approach fails to take into consideration the shape of the dose-response curve and other related information. The benchmark dose (BMD) method, originally proposed as an alternative to the NOAEL methodology in the 1980s, addresses many of the limitations of the NOAEL method. It is less dependent on dose selection and spacing, and it takes into account the shape of the dose-response curve. In addition, the estimation of a BMD 95% lower bound confidence limit (BMDL) results in a POD that appropriately accounts for study quality (i.e., sample size). With the recent advent of user-friendly BMD software programs, including the U.S. Environmental Protection Agency's (U.S. EPA) Benchmark Dose Software (BMDS), BMD has become the method of choice for many health organizations world-wide. This paper discusses the BMD methods and corresponding software (i.e., BMDS version 2.1.1) that have been developed by the U.S. EPA, and includes a comparison with recently released European Food Safety Authority (EFSA) BMD guidance.

  17. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties ...
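
A toy version of such a scoring system might look like this. The exp(-NRMSE) skill measure and the plain average across variables are illustrative choices, not the metric proposed in the paper:

```python
import numpy as np

def nrmse(model, obs):
    # Data-model mismatch normalized by the observations' variability.
    model, obs = np.asarray(model), np.asarray(obs)
    return float(np.sqrt(np.mean((model - obs) ** 2)) / np.std(obs))

def benchmark_score(results):
    # results maps a variable name to (model_series, obs_series).
    # Skill s = exp(-NRMSE) lies in (0, 1]; 1.0 is a perfect match.
    # Averaging skills across variables is one simple way to combine
    # mismatches into a single score.
    skills = {v: float(np.exp(-nrmse(m, o))) for v, (m, o) in results.items()}
    return skills, float(np.mean(list(skills.values())))

obs = np.sin(np.linspace(0.0, 6.0, 50))
skills, score = benchmark_score({
    "latent_heat_flux": (obs + 0.1, obs),   # model with a constant bias
    "gpp": (obs, obs),                      # model matching observations
})
print(skills, round(score, 3))
```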

  18. Quality Assurance Testing of Version 1.3 of U.S. EPA Benchmark Dose Software (Presentation)

    Science.gov (United States)

    EPA benchmark dose software (BMDS) is used to evaluate chemical dose-response data in support of Agency risk assessments, and must therefore be dependable. Quality assurance testing methods developed for BMDS were designed to assess model dependability with respect to curve-fitt...

  19. BENCHMARK DOSES FOR CHEMICAL MIXTURES: EVALUATION OF A MIXTURE OF 18 PHAHS.

    Science.gov (United States)

    Benchmark doses (BMDs), defined as doses of a substance that are expected to result in a pre-specified level of "benchmark" response (BMR), have been used for quantifying the risk associated with exposure to environmental hazards. The lower confidence limit of the BMD is used as...

  20. SPOC Benchmark Case: SNRE Model

    Energy Technology Data Exchange (ETDEWEB)

    Vishal Patel; Michael Eades; Claude Russel Joyner II

    2016-02-01

    The Small Nuclear Rocket Engine (SNRE) was modeled in the Center for Space Nuclear Research’s (CSNR) Space Propulsion Optimization Code (SPOC). SPOC aims to create nuclear thermal propulsion (NTP) geometries quickly to perform parametric studies on design spaces of historic and new NTP designs. The SNRE geometry was modeled in SPOC and a critical core with a reasonable amount of criticality margin was found. The fuel, tie-tubes, reflector, and control drum masses were predicted rather well. These are all very important for neutronics calculations so the active reactor geometries created with SPOC can continue to be trusted. Thermal calculations of the average and hot fuel channels agreed very well. The specific impulse calculations used historically and in SPOC disagree so mass flow rates and impulses differed. Modeling peripheral and power balance components that do not affect nuclear characteristics of the core is not a feature of SPOC and as such, these components should continue to be designed using other tools. A full paper detailing the available SNRE data and comparisons with SPOC outputs will be submitted as a follow-up to this abstract.

  1. Model based energy benchmarking for glass furnace

    International Nuclear Information System (INIS)

    Sardeshpande, Vishal; Gaitonde, U.N.; Banerjee, Rangan

    2007-01-01

    Energy benchmarking of processes is important for setting energy efficiency targets and planning energy management strategies. Most approaches used for energy benchmarking are based on statistical methods by comparing with a sample of existing plants. This paper presents a model based approach for benchmarking of energy intensive industrial processes and illustrates this approach for industrial glass furnaces. A simulation model for a glass furnace is developed using mass and energy balances, and heat loss equations for the different zones and empirical equations based on operating practices. The model is checked with field data from end-fired industrial glass furnaces in India. The simulation model enables calculation of the energy performance of a given furnace design. The model results show the potential for improvement and the impact of different operating and design preferences on specific energy consumption. A case study for a 100 TPD end-fired furnace is presented. An achievable minimum energy consumption of about 3830 kJ/kg is estimated for this furnace. The useful heat carried by glass is about 53% of the heat supplied by the fuel. Actual furnaces operating at these production scales have a potential for reduction in energy consumption of about 20-25%
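
The balance underlying such a model reduces, at its simplest, to fuel input equals useful heat to the glass plus losses. The sketch below uses the abstract's figures (useful heat ≈ 53% of the fuel heat, minimum ≈ 3830 kJ/kg); the split of the remaining 47% between flue-gas and structural losses is assumed purely for illustration:

```python
def specific_energy_kj_per_kg(useful_heat_kj_per_kg, loss_fractions):
    # Steady-state furnace energy balance per kg of glass:
    #   fuel input = useful heat to glass + losses,
    # with each loss expressed as a fraction of the fuel input, so
    #   E = useful / (1 - sum(loss_fractions)).
    f = sum(loss_fractions)
    assert 0.0 <= f < 1.0, "loss fractions must sum to less than 1"
    return useful_heat_kj_per_kg / (1.0 - f)

# Useful heat ~53% of fuel heat => ~0.53 * 3830 ≈ 2030 kJ/kg carried by
# the glass; the 0.32 flue-gas and 0.15 structural loss fractions below
# are assumed for illustration.
E = specific_energy_kj_per_kg(2030.0, [0.32, 0.15])
print(f"specific energy consumption ≈ {E:.0f} kJ/kg")
```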

  2. Dose Rate Experiment at JET for Benchmarking the Calculation Direct One Step Method

    International Nuclear Information System (INIS)

    Angelone, M.; Petrizzi, L.; Pillon, M.; Villari, R.; Popovichev, S.

    2006-01-01

    Neutrons produced by D-D and D-T plasmas induce the activation of tokamak materials and components. The development of reliable methods to assess dose rates is a key issue for maintaining and operating nuclear machines, in normal and off-normal conditions. In the frame of the EFDA Fusion Technology work programme, a computational tool based upon the MCNP Monte Carlo code has been developed to predict the dose rate after shutdown: it is called the Direct One Step Method (D1S). The D1S is an innovative approach in which the decay gammas are coupled to the neutrons as in the prompt case and are transported in one single step in the same run. Benchmarking of this new tool with experimental data taken in a complex geometry like that of a tokamak is a fundamental step to test the reliability of the D1S method. A dedicated benchmark experiment was proposed for the 2005-2006 experimental campaign of JET. Two irradiation positions were selected for the benchmark: one inner position inside the vessel, not far from the plasma, called the 2 upper irradiation end (IE2), where the neutron fluence is relatively high, and a second position just outside a vertical port in an external position (EX), where the neutron flux is lower and the dose rate to be measured is not far from the residual background. Passive detectors are used for in-vessel measurements: high-sensitivity thermoluminescent dosimeters (TLDs), GR-200A (natural LiF), which ensure measurements down to environmental dose levels. An active detector of Geiger-Muller (GM) type is used for out-of-vessel dose rate measurement. Before their use, the detectors were calibrated in a secondary gamma-ray standard (Cs-137 and Co-60) facility in terms of air-kerma. The background measurement was carried out in the period July-September 2005 in the outside position EX using the GM tube, and in September 2005 inside the vacuum vessel using TLD detectors located in the 2 upper irradiation end (IE2). In the present work ...

  3. Benchmarking

    OpenAIRE

    Beretta Sergio; Dossi Andrea; Grove Hugh

    2000-01-01

    Due to their particular nature, the benchmarking methodologies tend to exceed the boundaries of management techniques, and to enter the territories of managerial culture. A culture that is also destined to break into the accounting area not only strongly supporting the possibility of fixing targets, and measuring and comparing the performance (an aspect that is already innovative and that is worthy of attention), but also questioning one of the principles (or taboos) of the accounting or...

  4. FRIB driver linac vacuum model and benchmarks

    CERN Document Server

    Durickovic, Bojan; Kersevan, Roberto; Machicoane, Guillaume

    2014-01-01

    The Facility for Rare Isotope Beams (FRIB) is a superconducting heavy-ion linear accelerator that is to produce rare isotopes far from stability for low energy nuclear science. In order to achieve this, its driver linac needs to achieve a very high beam current (up to 400 kW beam power), and this requirement makes vacuum levels of critical importance. Vacuum calculations have been carried out to verify that the vacuum system design meets the requirements. The modeling procedure was benchmarked by comparing models of an existing facility against measurements. In this paper, we present an overview of the methods used for FRIB vacuum calculations and simulation results for some interesting sections of the accelerator. (C) 2013 Elsevier Ltd. All rights reserved.

  5. Benchmarking of MCNP for calculating dose rates at an interim storage facility for nuclear waste.

    Science.gov (United States)

    Heuel-Fabianek, Burkhard; Hille, Ralf

    2005-01-01

    During the operation of research facilities at Research Centre Jülich, Germany, nuclear waste is stored in drums and other vessels in an interim storage building on-site, which has concrete shielding at the side walls. Owing to the lack of a well-defined source, measured gamma spectra were unfolded to determine the photon flux on the surface of the containers. The dose rate simulation, including the effects of skyshine, using the Monte Carlo transport code MCNP is compared with the measured dosimetric data at some locations in the vicinity of the interim storage building. The MCNP data for direct radiation confirm the data calculated using a point-kernel method. However, a comparison of the modelled dose rates for direct radiation and skyshine with the measured data demonstrates the need for a more precise definition of the source. Both the measured and the modelled dose rates verified the fact that the legal limits (<1 mSv a⁻¹) are met in the area outside the perimeter fence of the storage building to which members of the public have access. Using container surface data (gamma spectra) to define the source may be a useful tool for practical calculations and additionally for benchmarking of computer codes, if the discussed critical aspects with respect to the source can be addressed adequately.
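
The point-kernel method mentioned in this abstract estimates the direct dose rate from a point source behind a shield as dose ∝ S·B·exp(-μt)/(4πr²). A minimal sketch follows; the linear buildup factor B = 1 + μt and all numbers are illustrative simplifications, whereas production codes use tabulated buildup factors and energy-dependent flux-to-dose conversion:

```python
import math

def point_kernel_dose_rate(S, mu_t, r, k=1.0):
    # Point-kernel estimate of the photon dose rate behind a slab shield:
    #   dose rate = k * S * B * exp(-mu*t) / (4*pi*r^2)
    # where S is the source strength (photons/s), mu_t the shield thickness
    # in mean free paths, r the distance (cm), and k a flux-to-dose
    # conversion factor. B = 1 + mu_t is a crude stand-in for tabulated
    # buildup factors.
    B = 1.0 + mu_t
    return k * S * B * math.exp(-mu_t) / (4.0 * math.pi * r**2)

# Attenuation achieved by two mean free paths of shielding (same geometry):
d0 = point_kernel_dose_rate(S=1e9, mu_t=0.0, r=500.0)
d2 = point_kernel_dose_rate(S=1e9, mu_t=2.0, r=500.0)
print(f"attenuation factor ≈ {d0 / d2:.2f}")
```

Note that buildup partly offsets exponential attenuation, which is why neglecting B makes a shield look better than it is.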

  6. Benchmark problems for numerical implementations of phase field models

    International Nuclear Information System (INIS)

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; Warren, J.; Heinonen, O. G.

    2016-01-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
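
A spinodal decomposition benchmark of the kind described can be sketched in one dimension with explicit-Euler Cahn-Hilliard dynamics on a periodic grid. The parameters and discretization below are illustrative, not the CHiMaD/NIST problem specification; mass conservation and the growth of composition fluctuations are the kind of quantitative checks such benchmarks exercise:

```python
import numpy as np

# 1-D Cahn-Hilliard spinodal decomposition on a periodic grid, explicit
# Euler time stepping. Bulk free energy f(c) = W*c^2*(1-c)^2 plus gradient
# energy (kappa/2)*|grad c|^2; conserved dynamics dc/dt = M * lap(mu).
N, dx, dt, steps = 128, 1.0, 0.005, 16000
W, kappa, M = 1.0, 2.0, 1.0

def lap(a):
    # Periodic second-order central difference.
    return (np.roll(a, 1) + np.roll(a, -1) - 2.0 * a) / dx**2

rng = np.random.default_rng(0)
c = 0.5 + 0.01 * rng.standard_normal(N)   # small noise about the spinodal point
mass0 = c.mean()

for _ in range(steps):
    dfdc = 2.0 * W * c * (1.0 - c) * (1.0 - 2.0 * c)   # bulk df/dc
    mu = dfdc - kappa * lap(c)                          # chemical potential
    c = c + dt * M * lap(mu)                            # conservative update

print(f"mass drift: {abs(c.mean() - mass0):.2e}")       # should be ~0
print(f"composition std grew from ~0.01 to {c.std():.2f}")
```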

  7. The current state of knowledge on the use of the benchmark dose concept in risk assessment.

    Science.gov (United States)

    Sand, Salomon; Victorin, Katarina; Filipsson, Agneta Falk

    2008-05-01

    This review deals with the current state of knowledge on the use of the benchmark dose (BMD) concept in health risk assessment of chemicals. The BMD method is an alternative to the traditional no-observed-adverse-effect level (NOAEL) and has been presented as a methodological improvement in the field of risk assessment. The BMD method has mostly been employed in the USA but is now receiving increased attention in Europe as well. The review presents a number of arguments in favor of the BMD, relative to the NOAEL. In addition, it gives a detailed overview of the various procedures that have been suggested and applied for BMD analysis, for quantal as well as continuous data. For quantal data, the BMD is generally defined as corresponding to an additional or extra risk of 5% or 10%. For continuous endpoints, it is suggested that the BMD be defined as corresponding to a percentage change in response relative to background or relative to the dynamic range of response. Under such definitions, a 5% or 10% change can be considered as default. Besides how to define the BMD and its lower bound, the BMDL, the question of how to select the dose-response model used in the BMD and BMDL determination is highlighted. Issues of study design and comparison of dose-response curves and BMDs are also covered. Copyright (c) 2007 John Wiley & Sons, Ltd.
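    The quantal-data definition above (BMD as the dose producing 5% or 10% extra risk over background) can be made concrete with a short sketch. It assumes an already-fitted two-parameter logistic dose-response model; the parameter values a and b are hypothetical, not taken from the review:

```python
import math

def logistic(a, b, d):
    """Logistic dose-response model: probability of response at dose d."""
    return 1.0 / (1.0 + math.exp(-(a + b * d)))

def bmd_extra_risk(a, b, bmr=0.10):
    """Invert the extra-risk definition ER(d) = (P(d) - P(0)) / (1 - P(0))
    for the dose at which the extra risk equals the benchmark response (BMR)."""
    p0 = logistic(a, b, 0.0)                       # background response rate
    p_target = p0 + bmr * (1.0 - p0)               # P(BMD) implied by the BMR
    logit = math.log(p_target / (1.0 - p_target))  # closed-form inverse of the logistic
    return (logit - a) / b

# Hypothetical fitted parameters: a is the background logit, b the slope per unit dose
bmd10 = bmd_extra_risk(a=-3.0, b=0.5, bmr=0.10)
bmd05 = bmd_extra_risk(a=-3.0, b=0.5, bmr=0.05)
```

In practice the BMDL is then obtained from the profile likelihood or a bootstrap of the fitted model, which tools such as EPA's BMDS do automatically.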

  8. Benchmarking pediatric cranial CT protocols using a dose tracking software system: a multicenter study.

    Science.gov (United States)

    De Bondt, Timo; Mulkens, Tom; Zanca, Federica; Pyfferoen, Lotte; Casselman, Jan W; Parizel, Paul M

    2017-02-01

    To benchmark regional standard practice for paediatric cranial CT-procedures in terms of radiation dose and acquisition parameters. Paediatric cranial CT-data were retrospectively collected during a 1-year period, in 3 different hospitals of the same country. A dose tracking system was used to automatically gather information. Dose (CTDI and DLP), scan length, amount of retakes and demographic data were stratified by age and clinical indication; appropriate use of child-specific protocols was assessed. In total, 296 paediatric cranial CT-procedures were collected. Although the median dose of each hospital was below national and international diagnostic reference level (DRL) for all age categories, statistically significant (p-value < 0.001) dose differences among hospitals were observed. The hospital with the lowest dose levels showed the smallest dose variability and used age-stratified protocols for standardizing paediatric head exams. Erroneous selection of adult protocols for children still occurred, mostly in the oldest age-group. Even though all hospitals complied with national and international DRLs, dose tracking and benchmarking showed that further dose optimization and standardization is possible by using age-stratified protocols for paediatric cranial CT. Moreover, having a dose tracking system revealed that adult protocols are still applied for paediatric CT, a practice that must be avoided. • Significant differences were observed in the delivered dose between age-groups and hospitals. • Using age-adapted scanning protocols gives a nearly linear dose increase. • Sharing dose-data can be a trigger for hospitals to reduce dose levels.

  9. Benchmarking pediatric cranial CT protocols using a dose tracking software system: a multicenter study

    Energy Technology Data Exchange (ETDEWEB)

    Bondt, Timo de; Parizel, Paul M. [Antwerp University Hospital and University of Antwerp, Department of Radiology, Antwerp (Belgium); Mulkens, Tom [H. Hart Hospital, Department of Radiology, Lier (Belgium); Zanca, Federica [GE Healthcare, DoseWatch, Buc (France); KU Leuven, Imaging and Pathology Department, Leuven (Belgium); Pyfferoen, Lotte; Casselman, Jan W. [AZ St. Jan Brugge-Oostende AV Hospital, Department of Radiology, Brugge (Belgium)

    2017-02-15

    To benchmark regional standard practice for paediatric cranial CT-procedures in terms of radiation dose and acquisition parameters. Paediatric cranial CT-data were retrospectively collected during a 1-year period, in 3 different hospitals of the same country. A dose tracking system was used to automatically gather information. Dose (CTDI and DLP), scan length, amount of retakes and demographic data were stratified by age and clinical indication; appropriate use of child-specific protocols was assessed. In total, 296 paediatric cranial CT-procedures were collected. Although the median dose of each hospital was below national and international diagnostic reference level (DRL) for all age categories, statistically significant (p-value < 0.001) dose differences among hospitals were observed. The hospital with lowest dose levels showed smallest dose variability and used age-stratified protocols for standardizing paediatric head exams. Erroneous selection of adult protocols for children still occurred, mostly in the oldest age-group. Even though all hospitals complied with national and international DRLs, dose tracking and benchmarking showed that further dose optimization and standardization is possible by using age-stratified protocols for paediatric cranial CT. Moreover, having a dose tracking system revealed that adult protocols are still applied for paediatric CT, a practice that must be avoided. (orig.)

  10. Benchmark problems for repository siting models

    International Nuclear Information System (INIS)

    Ross, B.; Mercer, J.W.; Thomas, S.D.; Lester, B.H.

    1982-12-01

    This report describes benchmark problems to test computer codes used in siting nuclear waste repositories. Analytical solutions, field problems, and hypothetical problems are included. Problems are included for the following types of codes: ground-water flow in saturated porous media, heat transport in saturated media, ground-water flow in saturated fractured media, heat and solute transport in saturated porous media, solute transport in saturated porous media, solute transport in saturated fractured media, and solute transport in unsaturated porous media

  11. Towards benchmarking an in-stream water quality model

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available A method of model evaluation is presented which utilises a comparison with a benchmark model. The proposed benchmarking concept is one that can be applied to many hydrological models but, in this instance, is implemented in the context of an in-stream water quality model. The benchmark model is defined in such a way that it is easily implemented within the framework of the test model, i.e. the approach relies on two applications of the same model code rather than the application of two separate model codes. This is illustrated using two case studies from the UK, the Rivers Aire and Ouse, with the objective of simulating a water quality classification, general quality assessment (GQA, which is based on dissolved oxygen, biochemical oxygen demand and ammonium. Comparisons between the benchmark and test models are made based on GQA, as well as a step-wise assessment against the components required in its derivation. The benchmarking process yields a great deal of important information about the performance of the test model and raises issues about a priori definition of the assessment criteria.

  12. Conceptual Models, Choices, and Benchmarks for Building Quality Work Cultures.

    Science.gov (United States)

    Acker-Hocevar, Michele

    1996-01-01

    The two models in Florida's Educational Quality Benchmark System represent a new way of thinking about developing schools' work culture. The Quality Performance System Model identifies nine dimensions of work within a quality system. The Change Process Model provides a theoretical framework for changing existing beliefs, attitudes, and behaviors…

  13. Does Your Terrestrial Model Capture Key Arctic-Boreal Relationships?: Functional Benchmarks in the ABoVE Model Benchmarking System

    Science.gov (United States)

    Stofferahn, E.; Fisher, J. B.; Hayes, D. J.; Schwalm, C. R.; Huntzinger, D. N.; Hantson, W.

    2017-12-01

    The Arctic-Boreal Region (ABR) is a major source of uncertainties for terrestrial biosphere model (TBM) simulations. These uncertainties are precipitated by a lack of observational data from the region, affecting the parameterizations of cold environment processes in the models. Addressing these uncertainties requires a coordinated effort of data collection and integration of the following key indicators of the ABR ecosystem: disturbance, vegetation / ecosystem structure and function, carbon pools and biogeochemistry, permafrost, and hydrology. We are continuing to develop the model-data integration framework for NASA's Arctic Boreal Vulnerability Experiment (ABoVE), wherein data collection is driven by matching observations and model outputs to the ABoVE indicators via the ABoVE Grid and Projection. The data are used as reference datasets for a benchmarking system which evaluates TBM performance with respect to ABR processes. The benchmarking system utilizes two types of performance metrics to identify model strengths and weaknesses: standard metrics, based on the International Land Model Benchmarking (ILaMB) system, which relate a single observed variable to a single model output variable, and functional benchmarks, wherein the relationship of one variable to one or more variables (e.g. the dependence of vegetation structure on snow cover, the dependence of active layer thickness (ALT) on air temperature and snow cover) is ascertained in both observations and model outputs. This in turn provides guidance to model development teams for reducing uncertainties in TBM simulations of the ABR.

  14. Benchmark studies of induced radioactivity produced in LHC materials, Part II: Remanent dose rates.

    Science.gov (United States)

    Brugger, M; Khater, H; Mayer, S; Prinz, A; Roesler, S; Ulrici, L; Vincke, H

    2005-01-01

    A new method to estimate remanent dose rates, to be used with the Monte Carlo code FLUKA, was benchmarked against measurements from an experiment performed at the CERN-EU high-energy reference field facility. An extensive collection of samples of different materials was placed downstream of, and laterally to, a copper target intercepting a positively charged mixed hadron beam with a momentum of 120 GeV c(-1). Emphasis was put on the reduction of uncertainties by taking measures such as careful monitoring of the irradiation parameters, using different instruments to measure dose rates, adopting detailed elemental analyses of the irradiated materials and making detailed simulations of the irradiation experiment. The measured and calculated dose rates are in good agreement.

  15. An international pooled analysis for obtaining a benchmark dose for environmental lead exposure in children

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Bellinger, David; Lanphear, Bruce

    2013-01-01

    Lead is a recognized neurotoxicant, but estimating effects at the lowest measurable levels is difficult. An international pooled analysis of data from seven cohort studies reported an inverse and supra-linear relationship between blood lead concentrations and IQ scores in children. The lack of a clear threshold presents a challenge to the identification of an acceptable level of exposure. The benchmark dose (BMD) is defined as the dose that leads to a specific known loss. As an alternative to elusive thresholds, the BMD is being used increasingly by regulatory authorities. Using the pooled data … yielding lower confidence limits (BMDLs) of about 0.1-1.0 μg/dL for the dose leading to a loss of one IQ point. We conclude that current allowable blood lead concentrations need to be lowered and further prevention efforts are needed to protect children from lead toxicity.
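    For a continuous endpoint like IQ, the generic BMD recipe is to fit a dose-response curve, fix a benchmark response (here a loss of one IQ point), and read off the dose and its one-sided lower bound. The pooled analysis used more flexible supra-linear models; the sketch below shows only the straight-line version of the recipe on entirely made-up data, with a normal-approximation BMDL:

```python
import numpy as np

def bmd_linear(dose, response, bmrf=1.0, z=1.645):
    """BMD and BMDL for a straight-line fit when the benchmark response is an
    absolute change of `bmrf` in the endpoint (e.g. loss of one IQ point)."""
    dose, response = np.asarray(dose, float), np.asarray(response, float)
    n = dose.size
    sxx = np.sum((dose - dose.mean()) ** 2)
    slope = np.sum((dose - dose.mean()) * (response - response.mean())) / sxx
    intercept = response.mean() - slope * dose.mean()
    resid = response - (intercept + slope * dose)
    se_slope = np.sqrt(resid @ resid / (n - 2) / sxx)
    bmd = bmrf / abs(slope)
    # one-sided 95% lower bound: the steepest plausible slope gives the smallest dose
    bmdl = bmrf / (abs(slope) + z * se_slope)
    return bmd, bmdl

# Hypothetical IQ vs. blood-lead data (points vs. ug/dL), for illustration only
bmd, bmdl = bmd_linear([0, 2, 4, 6, 8, 10],
                       [100.0, 99.2, 97.9, 97.1, 96.2, 94.8])
```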

  16. Discussion of OECD LWR Uncertainty Analysis in Modelling Benchmark

    International Nuclear Information System (INIS)

    Ivanov, K.; Avramova, M.; Royer, E.; Gillford, J.

    2013-01-01

    The demand for best estimate calculations in nuclear reactor design and safety evaluations has increased in recent years. Uncertainty quantification has been highlighted as part of the best estimate calculations. The modelling aspects of uncertainty and sensitivity analysis are to be further developed and validated on scientific grounds in support of their performance and application to multi-physics reactor simulations. The Organization for Economic Co-operation and Development (OECD) / Nuclear Energy Agency (NEA) Nuclear Science Committee (NSC) has endorsed the creation of an Expert Group on Uncertainty Analysis in Modelling (EGUAM). Within the framework of activities of EGUAM/NSC, the OECD/NEA initiated the Benchmark for Uncertainty Analysis in Modelling for Design, Operation, and Safety Analysis of Light Water Reactors (OECD LWR UAM benchmark). The general objective of the benchmark is to propagate the predictive uncertainties of code results through complex coupled multi-physics and multi-scale simulations. The benchmark is divided into three phases, with Phase I highlighting the uncertainty propagation in stand-alone neutronics calculations, while Phases II and III are focused on uncertainty analysis of the reactor core and system, respectively. This paper discusses the progress made in Phase I calculations, the specifications for Phase II and the upcoming challenges in defining Phase III exercises. The challenges of applying uncertainty quantification to complex code systems, in particular time-dependent coupled physics models, are the large computational burden and the utilization of non-linear models (expected due to the physics coupling). (authors)

  17. Results of the benchmark for blade structural models, part A

    DEFF Research Database (Denmark)

    Lekou, D.J.; Chortis, D.; Belen Fariñas, A.

    2013-01-01

    A benchmark on structural design methods for blades was performed within the InnWind.Eu project under WP2 “Lightweight Rotor” Task 2.2 “Lightweight structural design”. This document describes the results of the comparison simulation runs that were performed by the partners involved in Task 2.2 of the InnWind.Eu project. The benchmark is based on the reference wind turbine and the reference blade provided by DTU [1]. "Structural Concept developers/modelers" of WP2 were provided with the necessary input for a comparison numerical simulation run, upon definition of the reference blade …

  18. Benchmark Simulation Model No 2 in Matlab-Simulink

    DEFF Research Database (Denmark)

    Vrecko, Darko; Gernaey, Krist; Rosen, Christian

    2006-01-01

    In this paper, implementation of the Benchmark Simulation Model No 2 (BSM2) within Matlab-Simulink is presented. The BSM2 is developed for plant-wide WWTP control strategy evaluation on a long-term basis. It consists of a pre-treatment process, an activated sludge process and sludge treatment...

  19. Immunotoxicity of perfluorinated alkylates: calculation of benchmark doses based on serum concentrations in children

    DEFF Research Database (Denmark)

    Grandjean, Philippe; Budtz-Joergensen, Esben

    2013-01-01

    BACKGROUND: Immune suppression may be a critical effect associated with exposure to perfluorinated compounds (PFCs), as indicated by recent data on vaccine antibody responses in children. Therefore, this information may be crucial when deciding on exposure limits. METHODS: Results obtained from follow-up of a Faroese birth cohort were used. Serum-PFC concentrations were measured at age 5 years, and serum antibody concentrations against tetanus and diphtheria toxoids were obtained at age 7 years. Benchmark dose results were calculated in terms of serum concentrations for 431 children …

  20. Results from the IAEA benchmark of spallation models

    International Nuclear Information System (INIS)

    Leray, S.; David, J.C.; Khandaker, M.; Mank, G.; Mengoni, A.; Otsuka, N.; Filges, D.; Gallmeier, F.; Konobeyev, A.; Michel, R.

    2011-01-01

    Spallation reactions play an important role in a wide domain of applications. In the simulation codes used in this field, the nuclear interaction cross-sections and characteristics are computed by spallation models. The International Atomic Energy Agency (IAEA) has recently organised a benchmark of the spallation models that are used, or could be used in the future, in high-energy transport codes. The objectives were, first, to assess the prediction capabilities of the different spallation models for the different mass and energy regions and the different exit channels and, second, to understand the reasons for the success or deficiency of the models. Results of the benchmark concerning both the analysis of the prediction capabilities of the models and the first conclusions on the physics of spallation models are presented. (authors)

  1. Modelling the benchmark spot curve for the Serbian market

    Directory of Open Access Journals (Sweden)

    Drenovak Mikica

    2010-01-01

    Full Text Available The objective of this paper is to estimate Serbian benchmark spot curves using the Svensson parametric model. The main challenges that we tackle are: sparse data, different currency denominations of short- and longer-term maturities, and infrequent transactions in the short-term market segment versus the daily traded medium- and long-term market segments. We find that the model is flexible enough to account for most of the data variability. The model parameters are interpreted in economic terms.

  2. Benchmark data set for wheat growth models

    DEFF Research Database (Denmark)

    Asseng, S; Ewert, F.; Martre, P

    2015-01-01

    The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments including Australia, The Netherlands, India and Argentina. Measurements include local daily climate data (solar radiation, max … analysis with 26 models and 30 years (1981-2010) for each location, for elevated atmospheric CO2 and temperature changes, a heat stress sensitivity analysis at anthesis, and a sensitivity analysis with soil and crop management variations and a Global Climate Model end-century scenario.

  3. Using the benchmark dose (BMD) methodology to determine an appropriate reduction of certain ingredients in food products.

    Science.gov (United States)

    Bi, Jian

    2010-01-01

    As the desire to promote health increases, reductions of certain ingredients, for example sodium, sugar, and fat, in food products are widely requested. However, the reduction is not risk free in sensory and marketing aspects. Over-reduction may change the taste and influence the flavor of a product and lead to a decrease in consumers' overall liking or purchase intent for the product. This article uses the benchmark dose (BMD) methodology to determine an appropriate reduction. Calculations of the BMD and the one-sided lower confidence limit of the BMD (BMDL) are illustrated. The article also discusses how to calculate the BMD and BMDL for overdispersed binary data in replicated testing, based on a corrected beta-binomial model. The USEPA Benchmark Dose Software (BMDS) was used and S-Plus programs were developed. The method discussed in the article can be used to determine an appropriate reduction of certain ingredients, for example sodium, sugar, and fat, in food products, considering both health reasons and sensory or marketing risk.
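    Replicated consumer tests produce overdispersed binary data because panelists' responses are correlated within a panel. One common moment-based correction, sketched here as an assumed textbook approach rather than the article's exact beta-binomial procedure, estimates the intra-panel correlation from the between-panel variance and inflates the standard error of the pooled proportion by the resulting design effect:

```python
def betabin_adjusted_se(successes, n_trials):
    """Moment estimate of the intra-panel correlation rho and the
    overdispersion-adjusted standard error of a pooled proportion,
    given per-panel success counts from m panels of n_trials each."""
    m = len(successes)
    p = sum(successes) / (m * n_trials)              # pooled proportion
    panel_p = [x / n_trials for x in successes]
    var_between = sum((pi - p) ** 2 for pi in panel_p) / (m - 1)
    rho = (n_trials * var_between / (p * (1 - p)) - 1.0) / (n_trials - 1)
    rho = max(0.0, min(rho, 1.0))                    # clamp to a valid range
    deff = 1.0 + (n_trials - 1) * rho                # design effect
    se = (p * (1 - p) * deff / (m * n_trials)) ** 0.5
    return p, rho, se

# Hypothetical replicated test: 4 panels, 10 trials each
p, rho, se = betabin_adjusted_se([3, 5, 2, 6], n_trials=10)
```

The inflated standard error then widens the confidence bounds in the BMD/BMDL calculation, so ignoring overdispersion would make the BMDL misleadingly optimistic.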

  4. Benchmarking Deep Learning Models on Large Healthcare Datasets.

    Science.gov (United States)

    Purushotham, Sanjay; Meng, Chuizheng; Che, Zhengping; Liu, Yan

    2018-06-04

    Deep learning models (aka deep neural networks) have revolutionized many fields including computer vision, natural language processing, and speech recognition, and are increasingly being used in clinical healthcare applications. However, few works have benchmarked the performance of deep learning models with respect to state-of-the-art machine learning models and prognostic scoring systems on publicly available healthcare datasets. In this paper, we present benchmarking results for several clinical prediction tasks such as mortality prediction, length-of-stay prediction, and ICD-9 code group prediction using deep learning models, an ensemble of machine learning models (the Super Learner algorithm), and the SAPS II and SOFA scores. We used the Medical Information Mart for Intensive Care III (MIMIC-III) (v1.4) publicly available dataset, which includes all patients admitted to an ICU at the Beth Israel Deaconess Medical Center from 2001 to 2012, for the benchmarking tasks. Our results show that deep learning models consistently outperform all the other approaches, especially when the 'raw' clinical time series data is used as input features to the models. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Developing and modeling of the 'Laguna Verde' BWR CRDA benchmark

    International Nuclear Information System (INIS)

    Solis-Rodarte, J.; Fu, H.; Ivanov, K.N.; Matsui, Y.; Hotta, A.

    2002-01-01

    Reactivity initiated accidents (RIA) and design basis transients are among the most important aspects of nuclear power reactor safety. These events are re-evaluated whenever core alterations (modifications) are made as part of the nuclear safety analysis performed for a new design. These modifications usually include, but are not limited to, power upgrades, longer cycles, new fuel assembly and control rod designs, etc. The results obtained are compared with pre-established bounding analysis values to see if the new core design fulfills the requirements of safety constraints imposed on the design. The control rod drop accident (CRDA) is the design basis transient for the reactivity events of BWR technology. The CRDA is a very localized event that depends on the insertion position of the dropped control rod and on the fuel assemblies surrounding it. A numerical benchmark was developed based on the CRDA RIA design basis accident to further assess the performance of coupled 3D neutron kinetics/thermal-hydraulics codes. The CRDA in a BWR is a mostly neutronics-driven event. This benchmark is based on a real operating nuclear power plant - unit 1 of the Laguna Verde (LV1) nuclear power plant (NPP). The definition of the benchmark is presented briefly together with the benchmark specifications. Some of the cross-sections were modified in order to make the maximum control rod worth greater than one dollar. The transient is initiated at steady state by dropping the control rod with maximum worth at full speed. The 'Laguna Verde' (LV1) BWR CRDA transient benchmark is calculated using two coupled codes: TRAC-BF1/NEM and TRAC-BF1/ENTREE. Neutron kinetics and thermal hydraulics models were developed for both codes. Comparison of the obtained results is presented along with some discussion of the sensitivity of results to some modeling assumptions.

  6. Benchmark validation of statistical models: Application to mediation analysis of imagery and memory.

    Science.gov (United States)

    MacKinnon, David P; Valente, Matthew J; Wurpts, Ingrid C

    2018-03-29

    This article describes benchmark validation, an approach to validating a statistical model. According to benchmark validation, a valid model generates estimates and research conclusions consistent with a known substantive effect. Three types of benchmark validation-(a) benchmark value, (b) benchmark estimate, and (c) benchmark effect-are described and illustrated with examples. Benchmark validation methods are especially useful for statistical models with assumptions that are untestable or very difficult to test. Benchmark effect validation methods were applied to evaluate statistical mediation analysis in eight studies using the established effect that increasing mental imagery improves recall of words. Statistical mediation analysis led to conclusions about mediation that were consistent with established theory that increased imagery leads to increased word recall. Benchmark validation based on established substantive theory is discussed as a general way to investigate characteristics of statistical models and a complement to mathematical proof and statistical simulation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  7. Benchmarking the minimum Electron Beam (eBeam) dose required for the sterilization of space foods

    Science.gov (United States)

    Bhatia, Sohini S.; Wall, Kayley R.; Kerth, Chris R.; Pillai, Suresh D.

    2018-02-01

    As manned space missions extend in length, the safety, nutrition, acceptability, and shelf life of space foods are of paramount importance to NASA. Since food and mealtimes play a key role in reducing the stress and boredom of prolonged missions, the quality of food in terms of appearance, flavor, texture, and aroma can have significant psychological ramifications for astronaut performance. The FDA, which oversees space foods, currently requires a minimum dose of 44 kGy for irradiated space foods. The underlying hypothesis was that commercial sterility of space foods could be achieved at a significantly lower dose, and this lowered dose would positively affect the shelf life of the product. Electron beam processed beef fajitas were used as an example NASA space food to benchmark the minimum eBeam dose required for sterility. A 15 kGy dose was able to achieve an approximately 10 log reduction in Shiga-toxin-producing Escherichia coli bacteria, and a 5 log reduction in Clostridium sporogenes spores. Furthermore, accelerated shelf life testing (ASLT) to determine sensory and quality characteristics under various conditions was conducted. Using multidimensional gas chromatography-olfactometry-mass spectrometry (MDGC-O-MS), numerous volatiles were shown to be dependent on the dose applied to the product. Furthermore, concentrations of off-flavor aroma compounds such as dimethyl sulfide were decreased at the reduced 15 kGy dose. The results suggest that the combination of conventional cooking and eBeam processing (15 kGy) can achieve the safety and shelf-life objectives needed for long-duration space foods.
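    Under the common simplifying assumption of first-order (log-linear) inactivation kinetics, the reported reductions translate directly into decimal-reduction (D10) doses, i.e. the dose needed per 1-log (90%) kill; the 12-D target below is a hypothetical sterility margin, not a figure from the study:

```python
def d10_value(dose_kgy, log_reduction):
    """Decimal-reduction dose: kGy required per 1-log (90%) reduction,
    assuming first-order inactivation kinetics."""
    return dose_kgy / log_reduction

def dose_for_logs(d10, target_logs):
    """Dose needed to reach a target log reduction for a given D10."""
    return d10 * target_logs

d10_ecoli  = d10_value(15.0, 10)   # ~10-log kill of STEC observed at 15 kGy
d10_spores = d10_value(15.0, 5)    # ~5-log kill of C. sporogenes spores at 15 kGy
dose_12d   = dose_for_logs(d10_spores, 12)   # hypothetical 12-D spore target
```

This back-of-the-envelope view shows why spores, not vegetative bacteria, drive the minimum sterilization dose: their implied D10 is twice as large.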

  8. Experimental Benchmarking of Fire Modeling Simulations. Final Report

    International Nuclear Information System (INIS)

    Greiner, Miles; Lopez, Carlos

    2003-01-01

    A series of large-scale fire tests were performed at Sandia National Laboratories to simulate a nuclear waste transport package under severe accident conditions. The test data were used to benchmark and adjust the Container Analysis Fire Environment (CAFE) computer code. CAFE is a computational fluid dynamics fire model that accurately calculates the heat transfer from a large fire to a massive engulfed transport package. CAFE will be used in transport package design studies and risk analyses

  9. Project W-320 thermal hydraulic model benchmarking and baselining

    International Nuclear Information System (INIS)

    Sathyanarayana, K.

    1998-01-01

    Project W-320 will be retrieving waste from Tank 241-C-106 and transferring the waste to Tank 241-AY-102. Waste in both tanks must be maintained below applicable thermal limits during and following the waste transfer. Thermal hydraulic process control models will be used for process control of the thermal limits. This report documents the process control models and presents a benchmarking of the models with data from Tanks 241-C-106 and 241-AY-102. Revision 1 of this report will provide a baselining of the models in preparation for the initiation of sluicing

  10. Benchmarking and validation of a Geant4-SHADOW Monte Carlo simulation for dose calculations in microbeam radiation therapy.

    Science.gov (United States)

    Cornelius, Iwan; Guatelli, Susanna; Fournier, Pauline; Crosbie, Jeffrey C; Sanchez Del Rio, Manuel; Bräuer-Krisch, Elke; Rosenfeld, Anatoly; Lerch, Michael

    2014-05-01

    Microbeam radiation therapy (MRT) is a synchrotron-based radiotherapy modality that uses high-intensity beams of spatially fractionated radiation to treat tumours. The rapid evolution of MRT towards clinical trials demands accurate treatment planning systems (TPS), as well as independent tools for the verification of TPS calculated dose distributions in order to ensure patient safety and treatment efficacy. Monte Carlo computer simulation represents the most accurate method of dose calculation in patient geometries and is best suited for the purpose of TPS verification. A Monte Carlo model of the ID17 biomedical beamline at the European Synchrotron Radiation Facility has been developed, including recent modifications, using the Geant4 Monte Carlo toolkit interfaced with the SHADOW X-ray optics and ray-tracing libraries. The code was benchmarked by simulating dose profiles in water-equivalent phantoms subject to irradiation by broad-beam (without spatial fractionation) and microbeam (with spatial fractionation) fields, and comparing against those calculated with a previous model of the beamline developed using the PENELOPE code. Validation against additional experimental dose profiles in water-equivalent phantoms subject to broad-beam irradiation was also performed. Good agreement between codes was observed, with the exception of out-of-field doses and toward the field edge for larger field sizes. Microbeam results showed good agreement between both codes and experimental results within uncertainties. Results of the experimental validation showed agreement for different beamline configurations. The asymmetry in the out-of-field dose profiles due to polarization effects was also investigated, yielding important information for the treatment planning process in MRT. This work represents an important step in the development of a Monte Carlo-based independent verification tool for treatment planning in MRT.

  11. Netherlands contribution to the EC project: Benchmark exercise on dose estimation in a regulatory context

    International Nuclear Information System (INIS)

    Stolk, D.J.

    1987-04-01

    On request of the Netherlands government, FEL-TNO is developing a decision support system with the acronym RAMBOS for the assessment of the off-site consequences of an accident with hazardous materials. This is a user-friendly interactive computer program, which uses very sophisticated graphical means. RAMBOS supports the emergency planning organization in two ways. Firstly, the risk to the residents in the surroundings of the accident is quantified in terms of severity and magnitude (number of casualties, etc.). Secondly, the consequences of countermeasures, such as sheltering and evacuation, are predicted. By evaluating several countermeasures the user can determine an optimum policy to reduce the impact of the accident. Within the framework of the EC project 'Benchmark exercise on dose estimation in a regulatory context', calculations were carried out with the RAMBOS system on request of the Ministry of Housing, Physical Planning and Environment. This report contains the results of these calculations. 3 refs.; 2 figs.; 10 tabs

  12. Benchmarking novel approaches for modelling species range dynamics.

    Science.gov (United States)

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E

    2016-08-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches …

  13. Nutrient cycle benchmarks for earth system land model

    Science.gov (United States)

    Zhu, Q.; Riley, W. J.; Tang, J.; Zhao, L.

    2017-12-01

    Projecting future biosphere-climate feedbacks using Earth system models (ESMs) relies heavily on robust modeling of land surface carbon dynamics. More importantly, soil nutrient (particularly nitrogen (N) and phosphorus (P)) dynamics strongly modulate carbon dynamics, such as plant sequestration of atmospheric CO2. Prevailing ESM land models all consider nitrogen as a potentially limiting nutrient, and several consider phosphorus. However, including nutrient cycle processes in ESM land models potentially introduces large uncertainties that could be identified and addressed by improved observational constraints. We describe the development of two nutrient cycle benchmarks for ESM land models: (1) nutrient partitioning between plants and soil microbes inferred from 15N and 33P tracer studies and (2) nutrient limitation effects on the carbon cycle informed by long-term fertilization experiments. We used these benchmarks to evaluate critical hypotheses regarding nutrient cycling and their representation in ESMs. We found that a mechanistic representation of plant-microbe nutrient competition based on relevant functional traits best reproduced observed plant-microbe nutrient partitioning. We also found that for multiple-nutrient models (i.e., N and P), application of Liebig's law of the minimum is often inaccurate. Rather, the Multiple Nutrient Limitation (MNL) concept better reproduces observed carbon-nutrient interactions.
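    The contrast between Liebig's law of the minimum and the Multiple Nutrient Limitation (MNL) concept can be sketched in a few lines. This is an illustrative toy, not code from any ESM; the function names and scalar values are hypothetical.

    ```python
    # Toy sketch of two nutrient-limitation schemes for carbon uptake.
    # f_n and f_p are hypothetical scalars in [0, 1] expressing how fully
    # nitrogen and phosphorus demand is met (1 = unlimited).

    def liebig_uptake(potential_uptake, f_n, f_p):
        """Liebig's law of the minimum: only the scarcest nutrient limits uptake."""
        return potential_uptake * min(f_n, f_p)

    def mnl_uptake(potential_uptake, f_n, f_p):
        """Multiple Nutrient Limitation: both nutrients co-limit uptake."""
        return potential_uptake * f_n * f_p

    # With N mildly limiting (0.5) and P strongly limiting (0.25), Liebig
    # ignores the N constraint entirely, while MNL compounds both limits.
    print(liebig_uptake(100.0, 0.5, 0.25))  # 25.0
    print(mnl_uptake(100.0, 0.5, 0.25))     # 12.5
    ```

    The MNL estimate is always less than or equal to the Liebig estimate for the same limitation scalars, which is one reason the two schemes diverge in fertilization-experiment comparisons.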

  14. A Proposed Benchmark Problem for Scatter Calculations in Radiographic Modelling

    Science.gov (United States)

    Jaenisch, G.-R.; Bellon, C.; Schumm, A.; Tabary, J.; Duvauchelle, Ph.

    2009-03-01

    Code validation is a permanent concern in computer modelling, and has been addressed repeatedly in eddy current and ultrasonic modelling. A good benchmark problem is sufficiently simple to be taken into account by various codes without strong requirements on geometry representation capabilities, focuses on few or even a single aspect of the problem at hand to facilitate interpretation and to avoid compound errors that compensate for one another, yields a quantitative result, and is experimentally accessible. In this paper we attempt to address code validation for one aspect of radiographic modelling, the prediction of scattered radiation. Many NDT applications cannot neglect scattered radiation, so the scatter calculation is important to faithfully simulate the inspection situation. Our benchmark problem covers the wall thickness range of 10 to 50 mm for single wall inspections, with energies ranging from 100 to 500 keV in the first stage, and up to 1 MeV with wall thicknesses up to 70 mm in the extended stage. A simple plate geometry is sufficient for this purpose, and the scatter data are compared at the photon level, without a film model, which allows for comparisons with reference codes like MCNP. We compare results of three Monte Carlo codes (McRay, Sindbad and Moderato) as well as an analytical first-order scattering code (VXI), and compare them against results obtained with MCNP. The comparison with an analytical scatter model provides insights into the application domain where this kind of approach can successfully replace Monte Carlo calculations.
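    As a rough orientation for the quantities such a benchmark compares: the primary (unscattered) beam through a plate follows the Beer-Lambert law, and the scattered contribution is often summarized by a build-up factor B, with scatter-to-primary ratio B - 1. The sketch below is a hedged illustration using an approximate attenuation coefficient for steel near 500 keV; it is not one of the codes compared in the paper.

    ```python
    import math

    # Hedged illustration: primary transmission through a steel plate and the
    # scatter-to-primary ratio implied by a build-up factor. mu_steel is an
    # approximate linear attenuation coefficient near 500 keV, not exact data.

    def primary_transmission(mu_per_cm, thickness_cm):
        """Fraction of primary photons surviving a slab (Beer-Lambert law)."""
        return math.exp(-mu_per_cm * thickness_cm)

    def scatter_to_primary(buildup_factor):
        """If total = B * primary, the scatter-to-primary ratio is B - 1."""
        return buildup_factor - 1.0

    mu_steel = 0.66                 # 1/cm, approximate for steel near 500 keV
    for wall_mm in (10, 30, 50):    # wall thickness range of the benchmark
        p = primary_transmission(mu_steel, wall_mm / 10.0)
        print(f"{wall_mm} mm wall: primary fraction {p:.3f}")
    ```

    At the thick end of the benchmark range the primary fraction drops to a few percent, which is why neglecting the scattered component is not an option in these inspections.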

  15. Benchmarking analysis of three multimedia models: RESRAD, MMSOILS, and MEPAS

    International Nuclear Information System (INIS)

    Cheng, J.J.; Faillace, E.R.; Gnanapragasam, E.K.

    1995-11-01

    Multimedia modelers from the United States Environmental Protection Agency (EPA) and the United States Department of Energy (DOE) collaborated to conduct a comprehensive and quantitative benchmarking analysis of three multimedia models. The three models, RESRAD (DOE), MMSOILS (EPA), and MEPAS (DOE), represent analytically based tools that are used by the respective agencies for performing human exposure and health risk assessments. The study is performed by individuals who participate directly in the ongoing design, development, and application of the models. A list of physical/chemical/biological processes related to multimedia-based exposure and risk assessment is first presented as a basis for comparing the overall capabilities of RESRAD, MMSOILS, and MEPAS. Model design, formulation, and function are then examined by applying the models to a series of hypothetical problems. Major components of the models (e.g., atmospheric, surface water, groundwater) are evaluated separately and then studied as part of an integrated system for the assessment of a multimedia release scenario to determine effects due to linking components of the models. Seven modeling scenarios are used in the conduct of this benchmarking study: (1) direct biosphere exposure, (2) direct release to the air, (3) direct release to the vadose zone, (4) direct release to the saturated zone, (5) direct release to surface water, (6) surface water hydrology, and (7) multimedia release. Study results show that the models differ with respect to (1) environmental processes included (i.e., model features) and (2) the mathematical formulation and assumptions related to the implementation of solutions (i.e., parameterization).

  16. TU Electric reactor physics model verification: Power reactor benchmark

    International Nuclear Information System (INIS)

    Willingham, C.E.; Killgore, M.R.

    1988-01-01

    Power reactor benchmark calculations using the advanced code package CASMO-3/SIMULATE-3 have been performed for six cycles of Prairie Island Unit 1. The reload fuel designs for the selected cycles included gadolinia as a burnable absorber, natural uranium axial blankets and increased water-to-fuel ratio. The calculated results for both startup reactor physics tests (boron endpoints, control rod worths, and isothermal temperature coefficients) and full power depletion results were compared to measured plant data. These comparisons show that the TU Electric reactor physics models accurately predict important measured parameters for power reactors.

  17. Pescara benchmark: overview of modelling, testing and identification

    International Nuclear Information System (INIS)

    Bellino, A; Garibaldi, L; Marchesiello, S; Brancaleoni, F; Gabriele, S; Spina, D; Bregant, L; Carminelli, A; Catania, G; Sorrentino, S; Di Evangelista, A; Valente, C; Zuccarino, L

    2011-01-01

    The 'Pescara benchmark' is part of the national research project 'BriViDi' (BRIdge VIbrations and DIagnosis) supported by the Italian Ministero dell'Universita e Ricerca. The project is aimed at developing an integrated methodology for the structural health evaluation of railway r/c, p/c bridges. The methodology should provide for applicability in operating conditions, easy data acquisition through common industrial instrumentation, and robustness and reliability against structural and environmental uncertainties. The Pescara benchmark consisted of lab tests to build a consistent and large experimental database and subsequent data processing. Special tests were devised to simulate the effects of train transit in actual field conditions. Prestressed concrete beams of current industrial production, both sound and damaged at various corrosion severity levels, were tested. The results were collected both in a deterministic setting and in a form suitable to deal with experimental uncertainties. Damage identification was split into two approaches: with or without a reference model. In the first case, finite element models were used in conjunction with non-conventional updating techniques. In the second case, specialized output-only identification techniques capable of dealing with time-variant and possibly nonlinear systems were developed. The lab tests allowed validating the above approaches and the performances of classical modal-based damage indicators.

  18. Model-Based Engineering and Manufacturing CAD/CAM Benchmark

    International Nuclear Information System (INIS)

    Domm, T.D.; Underwood, R.S.

    1999-01-01

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) and Work For Others into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The method for obtaining the desired information in these areas centered on the creation of a benchmark questionnaire. The questionnaire was used throughout each of the visits as the basis for information gathering. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were using both 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system

  19. International collaborative fire modeling project (ICFMP). Summary of benchmark

    International Nuclear Information System (INIS)

    Roewekamp, Marina; Klein-Hessling, Walter; Dreisbach, Jason; McGrattan, Kevin; Miles, Stewart; Plys, Martin; Riese, Olaf

    2008-09-01

    This document was developed in the frame of the 'International Collaborative Project to Evaluate Fire Models for Nuclear Power Plant Applications' (ICFMP). The objective of this collaborative project is to share the knowledge and resources of various organizations to evaluate and improve the state of the art of fire models for use in nuclear power plant fire safety, fire hazard analysis and fire risk assessment. The project is divided into two phases. The objective of the first phase is to evaluate the capabilities of current fire models for fire safety analysis in nuclear power plants. The second phase will extend the validation database of those models and implement beneficial improvements to the models that are identified in the first phase of ICFMP. In the first phase, more than 20 expert institutions from six countries were represented in the collaborative project. This Summary Report gives an overview of the results of the first phase of the international collaborative project. The main objective of the project was to evaluate the capability of fire models to analyze a variety of fire scenarios typical of nuclear power plants (NPP). The evaluation of the capability of fire models to analyze these scenarios was conducted through a series of five international Benchmark Exercises. Different types of models were used by the participating expert institutions from five countries. The technical information that will be useful for fire model users, developers and further experts is summarized in this document. More detailed information is provided in the corresponding technical reference documents for the ICFMP Benchmark Exercises No. 1 to 5. The objective of these exercises was not to compare the capabilities and strengths of specific models, to address issues specific to a model, or to recommend specific models over others. This document is not intended to provide guidance to users of fire models. Guidance on the use of fire models is currently being

  20. Benchmark measurements and simulations of dose perturbations due to metallic spheres in proton beams

    International Nuclear Information System (INIS)

    Newhauser, Wayne D.; Rechner, Laura; Mirkovic, Dragan; Yepes, Pablo; Koch, Nicholas C.; Titt, Uwe; Fontenot, Jonas D.; Zhang, Rui

    2013-01-01

    Monte Carlo simulations are increasingly used for dose calculations in proton therapy due to their inherent accuracy. However, dosimetric deviations have been found using Monte Carlo codes when high-density materials are present in the proton beamline. The purpose of this work was to quantify the magnitude of dose perturbation caused by metal objects. We did this by comparing measurements and Monte Carlo predictions of dose perturbations caused by the presence of small metal spheres in several clinical proton therapy beams as functions of proton beam range and drift space. The Monte Carlo codes MCNPX, GEANT4 and Fast Dose Calculator (FDC) were used. Generally good agreement was found between measurements and Monte Carlo predictions, with the average difference within 5% and the maximum difference within 17%. A modification of the multiple Coulomb scattering model in the MCNPX code improved accuracy and provided the best overall agreement with measurements. Our results confirmed that Monte Carlo codes are well suited for predicting multiple Coulomb scattering in proton therapy beams when short drift spaces are involved. - Highlights: • We compared measurements and Monte Carlo predictions of dose perturbations caused by metal objects in proton beams. • Different Monte Carlo codes were used, including MCNPX, GEANT4 and Fast Dose Calculator. • Good agreement was found between measurements and Monte Carlo simulations. • The modification of the multiple Coulomb scattering model in the MCNPX code yielded improved accuracy. • Our results confirmed that Monte Carlo codes are well suited for predicting multiple Coulomb scattering in proton therapy
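    For orientation on what "multiple Coulomb scattering" prediction involves, the standard Highland (PDG) parameterization gives the flavor of what such codes compute. The sketch below is that generic textbook formula, not the modified MCNPX model evaluated in the paper, and the example numbers are illustrative.

    ```python
    import math

    # Highland (PDG) parameterization of the RMS plane-projected multiple
    # Coulomb scattering angle. This is the generic textbook formula, not the
    # modified MCNPX model from the study; the inputs below are illustrative.

    def highland_theta0(beta, momentum_mev_c, charge, x_over_x0):
        """RMS scattering angle (rad) after traversing x/X0 radiation lengths."""
        return (13.6 / (beta * momentum_mev_c)) * abs(charge) \
            * math.sqrt(x_over_x0) * (1.0 + 0.038 * math.log(x_over_x0))

    # Roughly a 160 MeV/c proton (beta ~ 0.5) through 1% of a radiation length
    theta0 = highland_theta0(beta=0.52, momentum_mev_c=160.0, charge=1,
                             x_over_x0=0.01)
    print(f"theta0 ~ {theta0 * 1e3:.1f} mrad")
    ```

    Milliradian-scale angles like this, propagated over a drift space, set the lateral spread behind a small metal sphere, which is why agreement is sensitive to the scattering model and the drift distance.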

  1. Benchmarking of LOFT LRTS-COBRA-FRAP safety analysis model

    International Nuclear Information System (INIS)

    Hanson, G.H.; Atkinson, S.A.; Wadkins, R.P.

    1982-05-01

    The purpose of this work was to check out the LOFT LRTS/COBRA-IV/FRAP-T5 safety-analysis models against test data obtained during a LOFT operational transient in which there was a power and fuel-temperature rise. LOFT Experiment L6-3 was an excessive-load-increase anticipated transient test in which the main steam-flow-control valve was driven from its operational position to full-open in seven seconds. The resulting cooldown and reactivity-increase transients provide a good benchmark for the reactivity-and-power-prediction capability of the LRTS calculations, and for the fuel-bundle and fuel-rod temperature-response analysis capability of the LOFT COBRA-IV and FRAP-T5 models.

  2. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    International Nuclear Information System (INIS)

    Shao, Kan; Gift, Jeffrey S.; Setzer, R. Woodrow

    2013-01-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD), which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available, as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and the relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption influences BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within-dose-group variance is small, while the lognormality assumption is a better choice for the relative deviation method when data are more skewed, because of its appropriateness in describing the relationship between the mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using the hybrid method are more

  3. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Kan, E-mail: Shao.Kan@epa.gov [ORISE Postdoctoral Fellow, National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Gift, Jeffrey S. [National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Setzer, R. Woodrow [National Center for Computational Toxicology, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States)

    2013-11-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD), which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available, as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and the relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption influences BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within-dose-group variance is small, while the lognormality assumption is a better choice for the relative deviation method when data are more skewed, because of its appropriateness in describing the relationship between the mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using the hybrid method are more
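    The two benchmark-response conventions this study compares can be illustrated for a simple linear mean response with constant standard deviation under normality. The sketch below is a hypothetical toy (made-up slope, SD, tail cutoff and risk values), not the paper's analysis or the EPA BMDS implementation.

    ```python
    from statistics import NormalDist

    # Toy illustration of two benchmark-response (BMR) definitions for
    # continuous data, assuming a linear mean m(d) = m0 + b*d, constant SD,
    # and normally distributed responses. All numbers are hypothetical.

    def bmd_relative_deviation(m0, b, rel_change=0.10):
        """Dose at which the mean shifts by rel_change relative to control."""
        return rel_change * m0 / b

    def bmd_hybrid(m0, b, sd, p0=0.01, bmr=0.10):
        """Hybrid method: dose at which the extra risk of exceeding the p0
        upper-tail cutoff of the control distribution reaches bmr."""
        cutoff = m0 + NormalDist().inv_cdf(1.0 - p0) * sd
        target = p0 + bmr * (1.0 - p0)          # extra-risk definition
        z = NormalDist().inv_cdf(1.0 - target)  # (cutoff - m(d))/sd at the BMD
        return (cutoff - m0 - z * sd) / b

    # Control mean 100, slope 2 per dose unit, SD 10 (hypothetical):
    print(bmd_relative_deviation(100.0, 2.0))  # 10% shift in the mean
    print(bmd_hybrid(100.0, 2.0, 10.0))        # depends on SD, unlike above
    ```

    Because the hybrid BMD scales with the response variance while the relative-deviation BMD does not, the hybrid method inherits a stronger dependence on the assumed response distribution, in line with the study's finding.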

  4. Benchmarking of a Markov multizone model of contaminant transport.

    Science.gov (United States)

    Jones, Rachael M; Nicas, Mark

    2014-10-01

    A Markov chain model previously applied to the simulation of advection and diffusion of gaseous contaminants is extended to three-dimensional transport of particulates in indoor environments. The model framework and assumptions are described. The performance of the Markov model is benchmarked against simple conventional models of contaminant transport. The Markov model is able to replicate elutriation predictions of particle deposition with distance from a point source, and the stirred settling of respirable particles. Comparisons with turbulent eddy diffusion models indicate that the Markov model exhibits numerical diffusion in the first seconds after release, but over time accurately predicts mean lateral dispersion. The Markov model exhibits some instability with grid length aspect ratio when turbulence is incorporated by way of the turbulent diffusion coefficient and advection is present. However, the magnitude of prediction error may be tolerable for some applications and can be avoided by incorporating turbulence by way of fluctuating velocity (e.g. turbulence intensity). © The Author 2014. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
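    The core of a Markov multizone scheme is repeated application of a row-stochastic transition matrix to a vector of zone-occupancy probabilities. The sketch below is a toy three-zone example with made-up transition probabilities and an absorbing "deposited" state; it is not the paper's 3-D particulate model.

    ```python
    # Toy Markov multizone transport: the probability mass of contaminant moves
    # between zones according to a row-stochastic transition matrix P, with
    # deposition modeled as an absorbing state. All probabilities are made up.

    def step(state, P):
        """Advance zone-occupancy probabilities one Markov time step."""
        n = len(state)
        return [sum(state[i] * P[i][j] for i in range(n)) for j in range(n)]

    P = [
        [0.80, 0.15, 0.05],  # near zone: mixing outward plus 5% deposition
        [0.10, 0.85, 0.05],  # far zone: back-mixing plus 5% deposition
        [0.00, 0.00, 1.00],  # deposited mass stays deposited (absorbing)
    ]

    state = [1.0, 0.0, 0.0]  # all mass released in the near zone
    for _ in range(50):
        state = step(state, P)
    # Mass is conserved, and most of it ends up in the absorbing state.
    print([round(x, 3) for x in state])
    ```

    Because each row of P sums to one, total mass is conserved at every step; refining the spatial grid (more zones) and the time step is where the aspect-ratio sensitivity noted in the abstract enters.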

  5. Benchmarking nuclear models for Gamow–Teller response

    International Nuclear Information System (INIS)

    Litvinova, E.; Brown, B.A.; Fang, D.-L.; Marketin, T.; Zegers, R.G.T.

    2014-01-01

    A comparative study of the nuclear Gamow–Teller response (GTR) within conceptually different state-of-the-art approaches is presented. Three nuclear microscopic models are considered: (i) the recently developed charge-exchange relativistic time blocking approximation (RTBA) based on the covariant density functional theory, (ii) the shell model (SM) with an extended “jj77” model space and (iii) the non-relativistic quasiparticle random-phase approximation (QRPA) with a Brueckner G-matrix effective interaction. We study the physics cases where two or all three of these models can be applied. The Gamow–Teller response functions are calculated for 208 Pb, 132 Sn and 78 Ni within both RTBA and QRPA. The strengths obtained for 208 Pb are compared to data that enable a firm model benchmarking. For the nucleus 132 Sn, also SM calculations are performed within the model space truncated at the level of a particle–hole (ph) coupled to vibration configurations. This allows a consistent comparison to the RTBA where ph⊗phonon coupling is responsible for the spreading width and considerable quenching of the GTR. Differences between the models and perspectives of their future developments are discussed.

  6. Benchmarking nuclear models for Gamow–Teller response

    Energy Technology Data Exchange (ETDEWEB)

    Litvinova, E., E-mail: elena.litvinova@wmich.edu [Department of Physics, Western Michigan University, Kalamazoo, MI 49008-5252 (United States); National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824-1321 (United States); Brown, B.A. [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824-1321 (United States); National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824-1321 (United States); Fang, D.-L. [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824-1321 (United States); Joint Institute for Nuclear Astrophysics, Michigan State University, East Lansing, MI 48824-1321 (United States); Marketin, T. [Physics Department, Faculty of Science, University of Zagreb (Croatia); Zegers, R.G.T. [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824-1321 (United States); National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824-1321 (United States); Joint Institute for Nuclear Astrophysics, Michigan State University, East Lansing, MI 48824-1321 (United States)

    2014-03-07

    A comparative study of the nuclear Gamow–Teller response (GTR) within conceptually different state-of-the-art approaches is presented. Three nuclear microscopic models are considered: (i) the recently developed charge-exchange relativistic time blocking approximation (RTBA) based on the covariant density functional theory, (ii) the shell model (SM) with an extended “jj77” model space and (iii) the non-relativistic quasiparticle random-phase approximation (QRPA) with a Brueckner G-matrix effective interaction. We study the physics cases where two or all three of these models can be applied. The Gamow–Teller response functions are calculated for {sup 208}Pb, {sup 132}Sn and {sup 78}Ni within both RTBA and QRPA. The strengths obtained for {sup 208}Pb are compared to data that enable a firm model benchmarking. For the nucleus {sup 132}Sn, also SM calculations are performed within the model space truncated at the level of a particle–hole (ph) coupled to vibration configurations. This allows a consistent comparison to the RTBA where ph⊗phonon coupling is responsible for the spreading width and considerable quenching of the GTR. Differences between the models and perspectives of their future developments are discussed.

  7. Analogue experiments as benchmarks for models of lava flow emplacement

    Science.gov (United States)

    Garel, F.; Kaminski, E. C.; Tait, S.; Limare, A.

    2013-12-01

    During an effusive volcanic eruption, crisis management relies mainly on predicting the advance and velocity of lava flows. The spreading of a lava flow, seen as a gravity current, depends on its "effective rheology" and on the effusion rate. Fast-computing models have arisen in the past decade to predict lava flow path and rate of advance in near real time. This type of model, crucial to mitigating volcanic hazards and organizing potential evacuations, has mainly been compared a posteriori to real cases of emplaced lava flows. The input parameters of such simulations applied to natural eruptions, especially effusion rate and topography, are often not known precisely, and are difficult to evaluate after the eruption. It is therefore not straightforward to identify the causes of discrepancies between model outputs and observed lava emplacement, whereas comparison of models with controlled laboratory experiments is easier. The challenge for numerical simulations of lava flow emplacement is to model the simultaneous advance and thermal structure of viscous lava flows. To provide original constraints later to be used in benchmark numerical simulations, we have performed lab-scale experiments investigating the cooling of isoviscous gravity currents. The simplest experimental set-up is as follows: silicone oil, whose viscosity (around 5 Pa.s) varies by less than a factor of 2 in the temperature range studied, is injected from a point source onto a horizontal plate and spreads axisymmetrically. The oil is injected hot and progressively cools down to ambient temperature away from the source. Once the flow is developed, it presents a stationary radial thermal structure whose characteristics depend on the input flow rate. In addition to the experimental observations, we developed a theoretical model (Garel et al., JGR, 2012) confirming the relationship between supply rate, flow advance and stationary surface thermal structure. We also provide
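    The spreading regime these experiments probe has a classical similarity solution for a constant-flux, axisymmetric, isoviscous gravity current (Huppert, 1982), in which the front radius grows as the square root of time. The sketch below encodes that textbook scaling with illustrative parameter values; it is not the theoretical model of Garel et al. (2012).

    ```python
    # Classical similarity solution (Huppert 1982) for the front radius of a
    # constant-flux, axisymmetric, isoviscous gravity current on a horizontal
    # plate: R(t) = 0.715 * (g * Q**3 / (3 * nu))**(1/8) * sqrt(t).
    # Parameter values below are illustrative, not the experimental ones.

    def front_radius(t_s, flux_m3_s, g=9.81, kin_visc_m2_s=5.2e-3):
        """Front radius (m) at time t for volume flux Q, kinematic viscosity nu."""
        return 0.715 * (g * flux_m3_s**3 / (3.0 * kin_visc_m2_s)) ** 0.125 \
            * t_s ** 0.5

    # The hallmark of the regime: quadrupling the time doubles the radius.
    r1 = front_radius(100.0, 1.0e-6)
    r2 = front_radius(400.0, 1.0e-6)
    print(r2 / r1)  # 2.0
    ```

    Checking a numerical simulation against this isothermal t^(1/2) law is a natural first benchmark before adding the coupled thermal structure that the experiments constrain.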

  8. A resource for benchmarking the usefulness of protein structure models.

    KAUST Repository

    Carbajo, Daniel

    2012-08-02

    BACKGROUND: Increasingly, biologists and biochemists use computational tools to design experiments to probe the function of proteins and/or to engineer them for a variety of different purposes. The most effective strategies rely on the knowledge of the three-dimensional structure of the protein of interest. However it is often the case that an experimental structure is not available and that models of different quality are used instead. On the other hand, the relationship between the quality of a model and its appropriate use is not easy to derive in general, and so far it has been analyzed in detail only for specific application. RESULTS: This paper describes a database and related software tools that allow testing of a given structure based method on models of a protein representing different levels of accuracy. The comparison of the results of a computational experiment on the experimental structure and on a set of its decoy models will allow developers and users to assess which is the specific threshold of accuracy required to perform the task effectively. CONCLUSIONS: The ModelDB server automatically builds decoy models of different accuracy for a given protein of known structure and provides a set of useful tools for their analysis. Pre-computed data for a non-redundant set of deposited protein structures are available for analysis and download in the ModelDB database. IMPLEMENTATION, AVAILABILITY AND REQUIREMENTS: Project name: A resource for benchmarking the usefulness of protein structure models. Project home page: http://bl210.caspur.it/MODEL-DB/MODEL-DB_web/MODindex.php.Operating system(s): Platform independent. Programming language: Perl-BioPerl (program); mySQL, Perl DBI and DBD modules (database); php, JavaScript, Jmol scripting (web server). Other requirements: Java Runtime Environment v1.4 or later, Perl, BioPerl, CPAN modules, HHsearch, Modeller, LGA, NCBI Blast package, DSSP, Speedfill (Surfnet) and PSAIA. License: Free. 
Any restrictions to use by

  9. A resource for benchmarking the usefulness of protein structure models.

    Science.gov (United States)

    Carbajo, Daniel; Tramontano, Anna

    2012-08-02

    Increasingly, biologists and biochemists use computational tools to design experiments to probe the function of proteins and/or to engineer them for a variety of different purposes. The most effective strategies rely on knowledge of the three-dimensional structure of the protein of interest. However, it is often the case that an experimental structure is not available and that models of different quality are used instead. On the other hand, the relationship between the quality of a model and its appropriate use is not easy to derive in general, and so far it has been analyzed in detail only for specific applications. This paper describes a database and related software tools that allow testing of a given structure-based method on models of a protein representing different levels of accuracy. The comparison of the results of a computational experiment on the experimental structure and on a set of its decoy models will allow developers and users to assess the specific threshold of accuracy required to perform the task effectively. The ModelDB server automatically builds decoy models of different accuracy for a given protein of known structure and provides a set of useful tools for their analysis. Pre-computed data for a non-redundant set of deposited protein structures are available for analysis and download in the ModelDB database. IMPLEMENTATION, AVAILABILITY AND REQUIREMENTS: Project name: A resource for benchmarking the usefulness of protein structure models. Project home page: http://bl210.caspur.it/MODEL-DB/MODEL-DB_web/MODindex.php. Operating system(s): Platform independent. Programming language: Perl-BioPerl (program); mySQL, Perl DBI and DBD modules (database); php, JavaScript, Jmol scripting (web server). Other requirements: Java Runtime Environment v1.4 or later, Perl, BioPerl, CPAN modules, HHsearch, Modeller, LGA, NCBI Blast package, DSSP, Speedfill (Surfnet) and PSAIA. License: Free. Any restrictions to use by non-academics: No.

  10. A resource for benchmarking the usefulness of protein structure models.

    KAUST Repository

    Carbajo, Daniel; Tramontano, Anna

    2012-01-01

    BACKGROUND: Increasingly, biologists and biochemists use computational tools to design experiments to probe the function of proteins and/or to engineer them for a variety of different purposes. The most effective strategies rely on knowledge of the three-dimensional structure of the protein of interest. However, it is often the case that an experimental structure is not available and that models of different quality are used instead. On the other hand, the relationship between the quality of a model and its appropriate use is not easy to derive in general, and so far it has been analyzed in detail only for specific applications. RESULTS: This paper describes a database and related software tools that allow testing of a given structure-based method on models of a protein representing different levels of accuracy. The comparison of the results of a computational experiment on the experimental structure and on a set of its decoy models will allow developers and users to assess which is the specific threshold of accuracy required to perform the task effectively. CONCLUSIONS: The ModelDB server automatically builds decoy models of different accuracy for a given protein of known structure and provides a set of useful tools for their analysis. Pre-computed data for a non-redundant set of deposited protein structures are available for analysis and download in the ModelDB database. IMPLEMENTATION, AVAILABILITY AND REQUIREMENTS: Project name: A resource for benchmarking the usefulness of protein structure models. Project home page: http://bl210.caspur.it/MODEL-DB/MODEL-DB_web/MODindex.php. Operating system(s): Platform independent. Programming language: Perl-BioPerl (program); mySQL, Perl DBI and DBD modules (database); php, JavaScript, Jmol scripting (web server). Other requirements: Java Runtime Environment v1.4 or later, Perl, BioPerl, CPAN modules, HHsearch, Modeller, LGA, NCBI Blast package, DSSP, Speedfill (Surfnet) and PSAIA. License: Free. Any restrictions to use by non-academics: No.

  11. Development of common user data model for APOLLO3 and MARBLE and application to benchmark problems

    International Nuclear Information System (INIS)

    Yokoyama, Kenji

    2009-07-01

    A Common User Data Model, CUDM, has been developed for the purpose of benchmark calculations between the APOLLO3 and MARBLE code systems. The current version of CUDM was designed for core calculation benchmark problems with 3-dimensional Cartesian (3-D XYZ) geometry. CUDM is able to manage all input/output data such as 3-D XYZ geometry, effective macroscopic cross sections, the effective multiplication factor and neutron flux. In addition, visualization tools for geometry and neutron flux were included. CUDM was designed with object-oriented techniques and implemented in the Python programming language. Based on CUDM, a prototype system for benchmark calculations, CUDM-benchmark, was also developed. CUDM-benchmark supports input/output data conversion for the IDT solver in APOLLO3, and the TRITAC and SNT solvers in MARBLE. In order to evaluate the pertinence of CUDM, CUDM-benchmark was applied to benchmark problems proposed by T. Takeda, G. Chiba and I. Zmijarevic. It was verified that CUDM-benchmark successfully reproduced the results calculated with reference input data files, and provided consistent results among all the solvers by using one common input data set defined by CUDM. In addition, a detailed calculation for the Chiba benchmark was performed using CUDM-benchmark. The Chiba benchmark is a neutron transport benchmark problem for a fast criticality assembly without homogenization. It consists of 4 core configurations with different sodium void regions, each defined by more than 5,000 fuel/material cells. In this application, it was found that the results of the IDT and SNT solvers agreed well with the reference Monte Carlo results. In addition, model effects such as the quadrature set, Sn order and mesh size were systematically evaluated and summarized in this report. (author)
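As a rough illustration of what such a common data model involves, the sketch below defines a minimal Python class holding a 3-D XYZ mesh, an effective multiplication factor, and a cell-wise flux map. The class and field names are hypothetical, not CUDM's actual interface.

```python
from dataclasses import dataclass, field

# Minimal sketch of a common user data model for 3-D XYZ core calculations.
# Names (CoreModel3D, set_flux, ...) are illustrative, not CUDM's real API.
@dataclass
class CoreModel3D:
    nx: int                      # number of cells along x
    ny: int                      # number of cells along y
    nz: int                      # number of cells along z
    keff: float = 0.0            # effective multiplication factor
    flux: dict = field(default_factory=dict)  # (ix, iy, iz) -> scalar flux

    def set_flux(self, ix: int, iy: int, iz: int, value: float) -> None:
        """Store the flux of one cell, rejecting indices outside the mesh."""
        if not (0 <= ix < self.nx and 0 <= iy < self.ny and 0 <= iz < self.nz):
            raise IndexError("cell index outside the XYZ mesh")
        self.flux[(ix, iy, iz)] = value

# One common in-memory object could then be exported to solver-specific
# input formats (IDT, TRITAC, SNT) by separate converter routines.
core = CoreModel3D(nx=2, ny=2, nz=2, keff=1.0023)
core.set_flux(0, 1, 1, 3.5e14)
```

The point of such a design is that each solver needs only one converter to and from the shared object, instead of pairwise format translations.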

  12. Benchmark on residual stress modeling in fracture mechanics assessment

    International Nuclear Information System (INIS)

    Marie, S.; Deschanels, H.; Chapuliot, S.; Le Delliou, P.

    2014-01-01

    In the frame of developments in analytical defect assessment methods for the RSE-M and RCC-MRx codes, new work on the consideration of residual stresses has been initiated by AREVA, CEA and EDF. The first step of this work is the creation of a database of F.E. reference cases. To validate assumptions and develop a good-practice guideline for the consideration of residual stresses in finite element calculations, a benchmark between AREVA, CEA and EDF is ongoing. A first application presented in this paper focuses on the analysis of crack initiation in aged duplex stainless steel pipes submitted to an increasing pressure loading. Residual stresses are related to the pipe fabrication process and act as a shell bending condition. Two tests were performed: the first with an internal longitudinal semi-elliptical crack and the second with an external crack. The analysis first focuses on the ability to accurately estimate the measured pressure at crack initiation in the two tests; for that purpose, the results obtained with different methods of taking the residual stresses into account (i.e. thermal fields or an initial strain field) are compared. It then validates post-treatment procedures for J or G determination, and finally compares the results obtained by the different partners. It is shown that the numerical models can properly integrate the impact of residual stresses on the crack initiation pressure. An excellent agreement is obtained between the different numerical evaluations of G provided by the participants in the benchmark, so that best practice and reference F.E. solutions for residual stress consideration can be provided based on this work. (authors)

  13. A framework for implementation of organ effect models in TOPAS with benchmarks extended to proton therapy

    International Nuclear Information System (INIS)

    Ramos-Méndez, J; Faddegon, B; Perl, J; Schümann, J; Paganetti, H; Shin, J

    2015-01-01

    The aim of this work was to develop a framework for modeling organ effects within TOPAS (TOol for PArticle Simulation), a wrapper of the Geant4 Monte Carlo toolkit that facilitates particle therapy simulation. The DICOM interface for TOPAS was extended to permit contour input, used to assign voxels to organs. The following dose response models were implemented: the Lyman–Kutcher–Burman model, the critical element model, the population-based critical volume model, the parallel-serial model, a sigmoid-based model of Niemierko for normal tissue complication probability and tumor control probability (TCP), and a Poisson-based model for TCP. The framework allows easy manipulation of the parameters of these models and the implementation of other models. As part of the verification, results for the parallel-serial and Poisson models for x-ray irradiation of a water phantom were compared to data from the AAPM Task Group 166. When using the task group dose-volume histograms (DVHs), results were found to be sensitive to the number of points in the DVH, with differences up to 2.4%, some of which are attributable to differences between the implemented models. New results are given with the point spacing specified. When using Monte Carlo calculations with TOPAS, despite the relatively good match to the published DVHs, differences up to 9% were found for the parallel-serial model (for a maximum DVH difference of 2%) and up to 0.5% for the Poisson model (for a maximum DVH difference of 0.5%). However, differences of 74.5% (in Rectangle1), 34.8% (in PTV) and 52.1% (in Triangle) were found for the critical element, critical volume and sigmoid-based models, respectively. We propose a new benchmark for verification of organ effect models in proton therapy. The benchmark consists of customized structures in the spread-out Bragg peak plateau, normal tissue, tumor, penumbra and distal region. The DVHs, DVH point spacing, and results of the organ effect models are
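As an indication of what these organ-effect models compute, the sketch below evaluates the Lyman-Kutcher-Burman NTCP from a differential dose-volume histogram via the generalized equivalent uniform dose. The parameter values (td50, m, n) are illustrative only, not TOPAS defaults.

```python
import math

# Sketch of the Lyman-Kutcher-Burman (LKB) NTCP model. The td50, m and n
# values used below are illustrative, not recommended clinical parameters.
def geud(dose_bins, vol_fracs, n):
    """Generalized equivalent uniform dose from a differential DVH."""
    return sum(v * d ** (1.0 / n) for d, v in zip(dose_bins, vol_fracs)) ** n

def lkb_ntcp(dose_bins, vol_fracs, td50, m, n):
    """NTCP = normal CDF evaluated at t = (gEUD - TD50) / (m * TD50)."""
    t = (geud(dose_bins, vol_fracs, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Sanity check: a uniform organ dose equal to TD50 gives NTCP = 0.5
# by construction, a convenient verification point for any implementation.
p = lkb_ntcp([50.0], [1.0], td50=50.0, m=0.15, n=0.1)
```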

  14. Indoor Modelling Benchmark for 3D Geometry Extraction

    Science.gov (United States)

    Thomson, C.; Boehm, J.

    2014-06-01

    A combination of faster, cheaper and more accurate hardware, more sophisticated software, and greater industry acceptance has laid the foundations for an increased desire for accurate 3D parametric models of buildings. Pointclouds are currently the data source of choice, with static terrestrial laser scanning the predominant tool for large, dense volume measurement. The current importance of pointclouds as the primary source of real-world representation is endorsed by CAD software vendor acquisitions of pointcloud engines in 2011. Both the capture and modelling of indoor environments require great effort in time by the operator (and therefore cost). Automation is seen as a way to aid this by reducing the workload of the user, and some commercial packages have appeared that provide automation to some degree. In the data capture phase, advances in indoor mobile mapping systems are speeding up the process, albeit currently with a reduction in accuracy. As a result, this paper presents freely accessible pointcloud datasets of two typical areas of a building, each captured with two different capture methods and each with an accurate, wholly manually created model. These datasets are provided as a benchmark for the research community to gauge the performance and improvements of various techniques for indoor geometry extraction. With this in mind, non-proprietary, interoperable formats are provided, such as E57 for the scans and IFC for the reference model. The datasets can be found at: http://indoor-bench.github.io/indoor-bench.

  15. Dose mapping simulation using the MCNP code for the Syrian gamma irradiation facility and benchmarking

    International Nuclear Information System (INIS)

    Khattab, K.; Boush, M.; Alkassiri, H.

    2013-01-01

    Highlights: • MCNP-4C was used to calculate the gamma ray dose rate spatial distribution for the SGIF. • Measurement of the gamma ray dose rate spatial distribution using the chlorobenzene dosimeter was conducted as well. • Good agreement was noticed between the calculated and measured results. • The maximum relative differences were less than 7%, 4% and 4% in the x, y and z directions, respectively. - Abstract: A three-dimensional model of the Syrian gamma irradiation facility (SGIF) is developed in this paper to calculate the gamma ray dose rate spatial distribution in the irradiation room at the 60Co source board using the MCNP-4C code. Measurement of the gamma ray dose rate spatial distribution using the chlorobenzene dosimeter is conducted as well to compare the calculated and measured results. Good agreement is noticed between the calculated and measured results, with maximum relative differences less than 7%, 4% and 4% in the x, y and z directions, respectively. This agreement indicates that the established model is an accurate representation of the SGIF and can be used in the future to make the calculation design for a new irradiation facility.
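The benchmarking step above reduces to comparing two spatial dose-rate maps point by point. A minimal sketch of that comparison (with made-up numbers, not the SGIF data):

```python
# Point-by-point comparison of calculated vs. measured dose rates, the kind
# of check behind the quoted 7%/4%/4% maxima; all values here are made up.
def max_relative_difference(calculated, measured):
    """Largest |calc - meas| / meas over paired points, in percent."""
    return max(abs(c - m) / m for c, m in zip(calculated, measured)) * 100.0

calc = [10.2, 8.1, 5.05]   # e.g. MCNP tallies along one axis
meas = [10.0, 8.0, 5.0]    # e.g. chlorobenzene dosimeter readings
worst = max_relative_difference(calc, meas)  # about 2 percent here
```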

  16. Study of the MCNP6 Benchmark Model in HTR-10 Control Rod Reactivity Calculations

    OpenAIRE

    Jupiter S. Pane; Zuhair; Suwoto; Putranto Ilham Yazid

    2016-01-01

    STUDY OF THE MCNP6 BENCHMARK MODEL IN HTR-10 CONTROL ROD REACTIVITY CALCULATIONS. In nuclear reactor operation, the control rod system plays a very important role because it is designed to control the core reactivity and to shut down the reactor. The control rod reactivity value must be predicted accurately through experiments and calculations. This paper discusses the benchmark model for calculating the control rod reactivity of the HTR-10 reactor. The calculations were carried out with the transpo...

  17. Correlation of In Vivo Versus In Vitro Benchmark Doses (BMDs) Derived From Micronucleus Test Data: A Proof of Concept Study.

    Science.gov (United States)

    Soeteman-Hernández, Lya G; Fellows, Mick D; Johnson, George E; Slob, Wout

    2015-12-01

    In this study, we explored the applicability of using in vitro micronucleus (MN) data from human lymphoblastoid TK6 cells to derive in vivo genotoxicity potency information. Nineteen chemicals covering a broad spectrum of genotoxic modes of action were tested in an in vitro MN test in TK6 cells using the same study protocol. Several of these chemicals were considered to need metabolic activation, and these were administered in the presence of S9. The benchmark dose (BMD) approach was applied using the dose-response modeling program PROAST to estimate the genotoxic potency from the in vitro data. The resulting in vitro BMDs were compared with previously derived BMDs from in vivo MN and carcinogenicity studies. A proportional correlation was observed between the BMDs from the in vitro MN and the BMDs from the in vivo MN assays. Further, a clear correlation was found between the BMDs from in vitro MN and the associated BMDs for malignant tumors. Although these results are based on only 19 compounds, they show that genotoxicity potencies estimated from in vitro tests may result in useful information regarding in vivo genotoxic potency, as well as expected cancer potency. Extension of the number of compounds and further investigation of metabolic activation (S9) and of other toxicokinetic factors would be needed to validate our initial conclusions. However, this initial work suggests that this approach could be used for in vitro to in vivo extrapolations which would support the reduction of animals used in research (3Rs: replacement, reduction, and refinement). © The Author 2015. Published by Oxford University Press on behalf of the Society of Toxicology.
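For context, the BMD for continuous data is the dose at which the fitted mean response departs from background by a chosen benchmark response (BMR). The sketch below solves this analytically for a simple exponential dose-response f(d) = a·exp(b·d), one member of the nested exponential family used by BMD software such as PROAST; the slope and BMR values are illustrative.

```python
import math

# Benchmark-dose sketch for continuous data, assuming an exponential
# dose-response f(d) = a * exp(b * d). The background level a cancels,
# so only the slope b and the benchmark response (BMR) matter here.
def bmd_exponential(b, bmr=0.10):
    """Dose at which the mean response exceeds background by a fraction bmr."""
    # Solve a * exp(b * BMD) = a * (1 + bmr)  =>  BMD = ln(1 + bmr) / b
    return math.log(1.0 + bmr) / b

# Illustrative slope of 2% response increase per unit dose, 10% BMR.
bmd = bmd_exponential(b=0.02, bmr=0.10)
```

In practice the model parameters are estimated from the dose-response data and a lower confidence bound (BMDL) on this quantity is reported, but the defining equation is the one solved above.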

  18. Benchmarking the MCNP code for Monte Carlo modelling of an in vivo neutron activation analysis system.

    Science.gov (United States)

    Natto, S A; Lewis, D G; Ryde, S J

    1998-01-01

    The Monte Carlo computer code MCNP (version 4A) has been used to develop a personal computer-based model of the Swansea in vivo neutron activation analysis (IVNAA) system. The model included specification of the neutron source (252Cf), collimators, reflectors and shielding. The MCNP model was 'benchmarked' against fast neutron and thermal neutron fluence data obtained experimentally from the IVNAA system. The Swansea system allows two irradiation geometries using 'short' and 'long' collimators, which provide alternative dose rates for IVNAA. The data presented here relate to the short collimator, although results of similar accuracy were obtained using the long collimator. The fast neutron fluence was measured in air at a series of depths inside the collimator. The measurements agreed with the MCNP simulation within the statistical uncertainty (5-10%) of the calculations. The thermal neutron fluence was measured and calculated inside the cuboidal water phantom. The depth of maximum thermal fluence was 3.2 cm (measured) and 3.0 cm (calculated). The width of the 50% thermal fluence level across the phantom at its mid-depth was found to be the same by both MCNP and experiment. This benchmarking exercise has given us a high degree of confidence in MCNP as a tool for the design of IVNAA systems.

  19. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Directory of Open Access Journals (Sweden)

    Shane Ó Conchúir

    Full Text Available The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  20. Fitting and benchmarking of Monte Carlo output parameters for iridium-192 high dose rate brachytherapy source

    International Nuclear Information System (INIS)

    Acquah, F.G.

    2011-01-01

    Brachytherapy, the use of radioactive sources for the treatment of tumours, is an important tool in radiation oncology. Accurate calculation of the dose delivered to malignant and normal tissues is a main responsibility of the medical physics staff. With the use of treatment planning system (TPS) computers now becoming standard practice in radiation oncology departments, independent calculations to certify the results of these commercial TPSs are an important part of a good quality management system for brachytherapy implants. There are inherent errors in the dose distributions produced by these TPSs due to their failure to account for heterogeneity in the calculation algorithms, and the Monte Carlo (MC) method seems to be the panacea for these corrections. In this study, a fit to a functional form using MC output parameters was performed to reduce dose calculation uncertainty using the Matlab curve fitting applications. This includes the modification of the AAPM TG-43 parameters to accommodate new developments for rapid brachytherapy dose rate calculation. Analytical computations were performed to hybridize the anisotropy function, F(r,θ), and radial dose function, g(r), into a single new function f(r,θ) for the Nucletron microSelectron High Dose Rate 'new or v2' (mHDRv2) 192Ir brachytherapy source. In order to minimize computation time and to improve the accuracy of manual calculations, the dosimetry function f(r,θ) used fewer parameters and formulas for the fit. Using MC outputs as the standard, the percentage errors of the fits were calculated and used to evaluate the average and maximum uncertainties. Dose rate deviations between the MC data and the fit were also quantified as errors (E), which showed minimal values. These results showed that the dosimetry parameters from this study were in good agreement with the MC output parameters and better than results obtained from the literature. The work confirms a lot of promise in building robust
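For orientation, the TG-43 dose rate at a point is the product of the air-kerma strength Sk, the dose rate constant Λ, a geometry-factor ratio, and the functions g(r) and F(r,θ) that the study merges into a single f(r,θ). The sketch below uses the point-source geometry factor G(r) = 1/r² for simplicity; the numeric values are illustrative, not the mHDRv2 consensus data.

```python
# TG-43 style dose-rate sketch: dose_rate = Sk * Lambda * geometry * f(r, theta),
# where f(r, theta) = g(r) * F(r, theta) is the hybrid dosimetry function the
# study describes. A point-source geometry factor G(r) = 1/r^2 is assumed,
# normalized to the reference distance r0 = 1 cm; all numbers are illustrative.
def dose_rate(sk, dose_rate_constant, r, f_rtheta, r0=1.0):
    """Dose rate at radius r (units follow sk * dose_rate_constant)."""
    geometry_ratio = (r0 / r) ** 2   # G(r)/G(r0) for a point source
    return sk * dose_rate_constant * geometry_ratio * f_rtheta

# Illustrative call: air-kerma strength 40000 U, Lambda 1.108 cGy/(h*U),
# evaluated at r = 2 cm with the hybrid dosimetry function set to 1.
rate = dose_rate(sk=40000.0, dose_rate_constant=1.108, r=2.0, f_rtheta=1.0)
```

Collapsing g(r)·F(r,θ) into one tabulated f(r,θ) is what removes a multiplication and a second table lookup from manual checks.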

  1. A resource for benchmarking the usefulness of protein structure models

    Directory of Open Access Journals (Sweden)

    Carbajo Daniel

    2012-08-01

    Full Text Available Background: Increasingly, biologists and biochemists use computational tools to design experiments to probe the function of proteins and/or to engineer them for a variety of different purposes. The most effective strategies rely on knowledge of the three-dimensional structure of the protein of interest. However, it is often the case that an experimental structure is not available and that models of different quality are used instead. On the other hand, the relationship between the quality of a model and its appropriate use is not easy to derive in general, and so far it has been analyzed in detail only for specific applications. Results: This paper describes a database and related software tools that allow testing of a given structure-based method on models of a protein representing different levels of accuracy. The comparison of the results of a computational experiment on the experimental structure and on a set of its decoy models will allow developers and users to assess which is the specific threshold of accuracy required to perform the task effectively. Conclusions: The ModelDB server automatically builds decoy models of different accuracy for a given protein of known structure and provides a set of useful tools for their analysis. Pre-computed data for a non-redundant set of deposited protein structures are available for analysis and download in the ModelDB database. Implementation, availability and requirements: Project name: A resource for benchmarking the usefulness of protein structure models. Project home page: http://bl210.caspur.it/MODEL-DB/MODEL-DB_web/MODindex.php. Operating system(s): Platform independent. Programming language: Perl-BioPerl (program); mySQL, Perl DBI and DBD modules (database); php, JavaScript, Jmol scripting (web server). Other requirements: Java Runtime Environment v1.4 or later, Perl, BioPerl, CPAN modules, HHsearch, Modeller, LGA, NCBI Blast package, DSSP, Speedfill (Surfnet) and PSAIA. License: Free. Any restrictions to use by non-academics: No.

  2. Benchmarking Continuum Solvent Models for Keto-Enol Tautomerizations.

    Science.gov (United States)

    McCann, Billy W; McFarland, Stuart; Acevedo, Orlando

    2015-08-13

    Experimental free energies of tautomerization, ΔGT, were used to benchmark the gas-phase predictions of 17 different quantum mechanical methods and eight basis sets for seven keto-enol tautomer pairs dominated by their enolic form. The G4 method and M06/6-31+G(d,p) yielded the most accurate results, with mean absolute errors (MAEs) of 0.95 and 0.71 kcal/mol, respectively. Using these two theory levels, the solution-phase ΔGT values for 23 unique tautomer pairs composed of aliphatic ketones, β-dicarbonyls, and heterocycles were computed in multiple protic and aprotic solvents. The continuum solvation models, namely, the polarizable continuum model (PCM), the conductor-like polarizable continuum model (CPCM), and the universal solvation model (SMD), gave relatively similar MAEs of ∼1.6-1.7 kcal/mol for G4 and ∼1.9-2.0 kcal/mol with M06/6-31+G(d,p). Partitioning the tautomer pairs into their respective molecular types, that is, aliphatic ketones, β-dicarbonyls, and heterocycles, and separating out the aqueous versus nonaqueous results finds G4/PCM utilizing the UA0 cavity to be the overall most accurate combination. Free energies of activation, ΔG(‡), for the base-catalyzed keto-enol interconversion of 2-nitrocyclohexanone were also computed using six bases and five solvents. The M06/6-31+G(d,p) reproduced the ΔG(‡) with MAEs of 1.5 and 1.8 kcal/mol using CPCM and SMD, respectively, for all combinations of base and solvent. That specific enolization was previously proposed to proceed via a concerted mechanism in less polar solvents but shift to a stepwise mechanism in more polar solvents. However, the current calculations suggest that the stepwise mechanism operates in all solvents.
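As a reminder of how ΔGT translates into observable populations, the short sketch below converts a free energy of tautomerization into an equilibrium constant and an enol fraction at 298 K:

```python
import math

R_KCAL = 1.987204e-3  # gas constant in kcal/(mol*K)

def enol_fraction(dg_t, temp=298.15):
    """Equilibrium enol fraction from the keto->enol Delta G_T (kcal/mol)."""
    k_t = math.exp(-dg_t / (R_KCAL * temp))  # K_T = [enol] / [keto]
    return k_t / (1.0 + k_t)

# Delta G_T = 0 kcal/mol corresponds to a 50:50 keto/enol mixture; a value
# of about -2 kcal/mol already pushes the equilibrium strongly to the enol.
f_half = enol_fraction(0.0)
```

This is why the quoted MAEs of roughly 1-2 kcal/mol matter: an error of that size changes the predicted tautomer population by more than an order of magnitude in K.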

  3. Dose assessment models. Annex A

    International Nuclear Information System (INIS)

    1982-01-01

    The models presented in this chapter have been separated into two general categories: environmental transport models, which describe the movement of radioactive materials through all sectors of the environment after their release, and dosimetric models, which calculate the absorbed dose following an intake of radioactive materials or exposure to external irradiation. Various sections of this chapter also deal with atmospheric transport models, terrestrial models, and aquatic models.

  4. SeSBench - An initiative to benchmark reactive transport models for environmental subsurface processes

    Science.gov (United States)

    Jacques, Diederik

    2017-04-01

    As soil functions are governed by a multitude of interacting hydrological, geochemical and biological processes, simulation tools coupling mathematical models of these interacting processes are needed. Coupled reactive transport models are a typical example of such coupled tools, mainly focusing on hydrological and geochemical coupling (see e.g. Steefel et al., 2015). The mathematical and numerical complexity of both the tool itself and the specific conceptual model can increase rapidly. Therefore, numerical verification of such models is a prerequisite for guaranteeing reliability and confidence and for qualifying simulation tools and approaches for any further model application. In 2011, a first SeSBench (Subsurface Environmental Simulation Benchmarking) workshop was held in Berkeley (USA), followed by four more. The objective is to benchmark subsurface environmental simulation models and methods, with a current focus on reactive transport processes. The final outcome was a special issue in Computational Geosciences (2015, issue 3 - Reactive transport benchmarks for subsurface environmental simulation) with a collection of 11 benchmarks. Benchmarks, proposed by the participants of the workshops, should be relevant for environmental or geo-engineering applications; the latter were mostly related to radioactive waste disposal issues - excluding benchmarks defined for purely mathematical reasons. Another important feature is the tiered approach within a benchmark, with the definition of a single principal problem and different subproblems. The latter typically benchmark individual or simplified processes (e.g. inert solute transport, a simplified geochemical conceptual model) or geometries (e.g. batch or one-dimensional, homogeneous). Finally, three codes should be involved in a benchmark. The SeSBench initiative contributes to confidence building for applying reactive transport codes. Furthermore, it illustrates the use of these types of models for different
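The inert-solute subproblems mentioned above typically have closed-form solutions against which codes can be verified. A minimal sketch, assuming 1-D transport with continuous injection at x = 0 and neglecting the small second term of the Ogata-Banks solution:

```python
import math

# Analytical reference for an inert-solute benchmark tier: 1-D
# advection-dispersion with continuous injection at x = 0 (Ogata-Banks
# solution, small second term neglected, valid at larger Peclet numbers).
# v = pore-water velocity, disp = longitudinal dispersion coefficient.
def inert_solute_conc(x, t, v, disp, c0=1.0):
    """Relative concentration C/C0 at position x and time t."""
    return 0.5 * c0 * math.erfc((x - v * t) / (2.0 * math.sqrt(disp * t)))

# At the center of the advancing front (x = v * t) the solution is C/C0 = 0.5,
# a standard check when verifying a reactive transport code's inert tier.
c_front = inert_solute_conc(x=1.0, t=2.0, v=0.5, disp=0.01)
```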

  5. Theory-motivated benchmark models and superpartners at the Fermilab Tevatron

    International Nuclear Information System (INIS)

    Kane, G.L.; Nelson, Brent D.; Wang Liantao; Wang, Ting T.; Lykken, J.; Mrenna, Stephen

    2003-01-01

    Recently published benchmark models have contained rather heavy superpartners. To test the robustness of this result, several benchmark models have been constructed based on theoretically well-motivated approaches, particularly string-based ones. These include variations on anomaly- and gauge-mediated models, as well as gravity mediation. The resulting spectra often have light gauginos that are produced in significant quantities at the Fermilab Tevatron collider, or will be at a 500 GeV linear collider. The signatures also provide interesting challenges for the CERN LHC. In addition, these models are capable of accounting for electroweak symmetry breaking with less severe cancellations among soft supersymmetry breaking parameters than previous benchmark models

  6. Benchmark models, planes lines and points for future SUSY searches at the LHC

    International Nuclear Information System (INIS)

    AbdusSalam, S.S.; Allanach, B.C.; Dreiner, H.K.

    2012-03-01

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  7. Benchmark Simulation Model No 2 – finalisation of plant layout and default control strategy

    DEFF Research Database (Denmark)

    Nopens, I.; Benedetti, L.; Jeppsson, U.

    2010-01-01

    The COST/IWA Benchmark Simulation Model No 1 (BSM1) has been available for almost a decade. Its primary purpose has been to create a platform for control strategy benchmarking of activated sludge processes. The fact that the research work related to the benchmark simulation models has resulted in more than 300 publications worldwide demonstrates the interest in and need of such tools within the research community. Recent efforts within the IWA Task Group on “Benchmarking of control strategies for WWTPs” have focused on an extension of the benchmark simulation model. This extension aims ... be evaluated in a realistic fashion in the one-week BSM1 evaluation period. In this paper, the finalised plant layout is summarised and, as was done for BSM1, a default control strategy is proposed. A demonstration of how BSM2 can be used to evaluate control strategies is also given.

  8. Benchmark models, planes lines and points for future SUSY searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)

    2012-03-15

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  9. Benchmark Models, Planes, Lines and Points for Future SUSY Searches at the LHC

    CERN Document Server

    AbdusSalam, S S; Dreiner, H K; Ellis, J; Ellwanger, U; Gunion, J; Heinemeyer, S; Krämer, M; Mangano, M L; Olive, K A; Rogerson, S; Roszkowski, L; Schlaffer, M; Weiglein, G

    2011-01-01

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  10. Benchmarking of computer codes and approaches for modeling exposure scenarios

    International Nuclear Information System (INIS)

    Seitz, R.R.; Rittmann, P.D.; Wood, M.I.; Cook, J.R.

    1994-08-01

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.

  11. Bone and marrow dose modeling

    International Nuclear Information System (INIS)

    Stabin, Michael G.

    2004-01-01

    Nuclear medicine therapy is being used increasingly in the treatment of cancer (thyroid, leukemia/lymphoma with RIT, primary and secondary bone malignancies, and neuroblastomas). In all cases it is marrow toxicity that limits the amount of treatment that can be administered safely. Marrow dose calculations are more difficult than for many major organs because of the intricate association of bone and soft tissue elements. In RIT, there appears to be no consensus on how to calculate marrow dose accurately, or on how to predict individual patients' ability to tolerate planned therapy. Available dose models are designed after an idealized average, healthy individual. Patient-specific methods are applied in the evaluation of biokinetic data, and need to be developed for the treatment of the physical data (dose conversion factors) as well: age, prior patient therapy, disease status. Contributors to marrow dose: electrons and photons

  12. Refined hazard characterization of 3-MCPD using benchmark dose modeling

    NARCIS (Netherlands)

    Rietjens, I.M.C.M.; Scholz, G.; Berg, van den I.; Schilter, B.; Slob, W.

    2012-01-01

    3-Monochloropropane-1,2-diol (3-MCPD)-esters represent a newly identified class of food-borne process contaminants of possible health concern. Due to hydrolysis 3-MCPD esters constitute a potentially significant source of free 3-MCPD exposure and their preliminary risk assessment was based on

  13. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency....

  14. RUNE benchmarks

    DEFF Research Database (Denmark)

    Peña, Alfredo

    This report contains the description of a number of benchmarks with the purpose of evaluating flow models for near-shore wind resource estimation. The benchmarks are designed based on the comprehensive database of observations that the RUNE coastal experiment established from onshore lidar...

  15. Comparison of Nordic dose models

    International Nuclear Information System (INIS)

    Thykier-Nielsen, S.

    1978-04-01

    A comparison is made between the models used in the four Nordic countries, Finland, Norway, Sweden and Denmark, for calculation of concentrations and doses from releases of radioactive material to the atmosphere. The comparison is limited to the near-zone models, i.e. the models for calculation of concentrations and doses within 50 km from the release point, and it comprises the following types of calculation: a. Concentrations of airborne material, b. External gamma doses from a plume, c. External gamma doses from radioactive material deposited on the ground. All models are based on the Gaussian dispersion model (the Gaussian plume model). Unit releases of specific isotopes under specific meteorological conditions are assumed. On the basis of the calculation results from the models, it is concluded that there are no essential differences. The difference between the calculation results only exceeds a factor of 3 in special cases. It thus lies within the known limits of uncertainty for the Gaussian plume model. (author)
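The Gaussian plume model underlying all four Nordic near-zone codes can be sketched in a few lines. The following Python illustration uses invented release and dispersion parameters (in practice the sigma values come from stability-class tables at a given downwind distance); it is a minimal sketch, not any of the national codes:

```python
import math

def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Gaussian plume concentration (e.g. Bq/m^3) with ground reflection.

    Q: release rate (Bq/s), u: wind speed (m/s), H: effective release
    height (m), y/z: crosswind and vertical receptor coordinates (m),
    sigma_y/sigma_z: dispersion parameters (m) at the downwind distance
    of interest.
    """
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2.0 * sigma_z**2)))  # mirror term: ground reflection
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centreline concentration for a unit release (illustrative sigmas)
c = plume_concentration(Q=1.0, u=5.0, y=0.0, z=0.0, H=30.0,
                        sigma_y=80.0, sigma_z=40.0)
```

External gamma dose from the plume or from deposited activity would then be obtained by folding such concentrations with dose-rate conversion factors, which is where the compared models differ in detail.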

  16. Benchmarking residual dose rates in a NuMI-like environment

    Energy Technology Data Exchange (ETDEWEB)

    Igor L. Rakhno et al.

    2001-11-02

    Activation of various structural and shielding materials is an important issue for many applications. A model developed recently to calculate residual activity of arbitrary composite materials for arbitrary irradiation and cooling times is presented in the paper. Measurements have been performed at the Fermi National Accelerator Laboratory using a 120 GeV proton beam to study induced radioactivation of materials used for beam line components and shielding. The calculated residual dose rates for the samples studied behind the target and outside of the thick shielding are presented and compared with the measured ones. Effects of energy spectra, sample material and dimensions, their distance from the shielding, and gaps between the shielding modules and walls as well as between the modules themselves were studied in detail.

  17. Gamma irradiator dose mapping simulation using the MCNP code and benchmarking with dosimetry

    International Nuclear Information System (INIS)

    Sohrabpour, M.; Hassanzadeh, M.; Shahriari, M.; Sharifzadeh, M.

    2002-01-01

    The Monte Carlo transport code, MCNP, has been applied in simulating dose rate distribution in the IR-136 gamma irradiator system. Isodose curves, cumulative dose values, and system design data such as throughputs, over-dose-ratios, and efficiencies have been simulated as functions of product density. Simulated isodose curves and cumulative dose values were compared with dosimetry values obtained using polymethyl methacrylate, Fricke, ethanol-chlorobenzene, and potassium dichromate dosimeters. The produced system design data were also found to agree quite favorably with the system manufacturer's data. MCNP has thus been found to be an effective transport code for handling various dose mapping exercises for gamma irradiators

  18. Benchmarking Multilayer-HySEA model for landslide generated tsunami. NTHMP validation process.

    Science.gov (United States)

    Macias, J.; Escalante, C.; Castro, M. J.

    2017-12-01

    Landslide tsunami hazard may be dominant along significant parts of the coastline around the world, in particular in the USA, as compared to hazards from other tsunamigenic sources. This fact motivated the NTHMP to benchmark models for landslide-generated tsunamis, following the same methodology already used for standard tsunami models when the source is seismic. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list, for a total of seven benchmarks. The Multilayer-HySEA model, including non-hydrostatic effects, has been used to perform all the benchmark problems dealing with laboratory experiments proposed in the workshop organized at Texas A&M University - Galveston on January 9-11, 2017 by the NTHMP. The aim of this presentation is to show some of the latest numerical results obtained with the Multilayer-HySEA (non-hydrostatic) model in the framework of this validation effort. Acknowledgements: This research has been partially supported by the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and University of Malaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  19. What is a food and what is a medicinal product in the European Union? Use of the benchmark dose (BMD) methodology to define a threshold for "pharmacological action".

    Science.gov (United States)

    Lachenmeier, Dirk W; Steffen, Christian; el-Atma, Oliver; Maixner, Sibylle; Löbell-Behrends, Sigrid; Kohl-Himmelseher, Matthias

    2012-11-01

    The decision criterion for the demarcation between foods and medicinal products in the EU is the significant "pharmacological action". Based on six examples of substances with ambivalent status, the benchmark dose (BMD) method is evaluated to provide a threshold for pharmacological action. Using significant dose-response models from clinical trial data or epidemiology in the literature, the BMD values were 63 mg/day for caffeine, 5 g/day for alcohol, 6 mg/day for lovastatin, 769 mg/day for glucosamine sulfate, 151 mg/day for Ginkgo biloba extract, and 0.4 mg/day for melatonin. The examples of caffeine and alcohol validate the approach, because intake above the BMD clearly exhibits pharmacological action. Nevertheless, due to uncertainties in dose-response modelling, as well as the need for additional uncertainty factors to account for differences in sensitivity within the human population, a "borderline range" on the dose-response curve remains. "Pharmacological action" has proven to be not well suited as a binary decision criterion between foods and medicinal products. The European legislator should rethink the definition of medicinal products, as the current situation, based on complicated case-by-case decisions on pharmacological action, leads to an unregulated market flooded with potentially illegal food supplements. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. Dose modeling in ultraviolet phototherapy

    International Nuclear Information System (INIS)

    Grimes, David Robert; Robbins, Chris; O'Hare, Neil John

    2010-01-01

    Purpose: Ultraviolet phototherapy is widely used in the treatment of numerous skin conditions. This treatment is well established and largely beneficial to patients on both physical and psychological levels; however, overexposure to ultraviolet radiation (UVR) can have detrimental effects, such as erythemal responses and ocular damage in addition to the potentially carcinogenic nature of UVR. For these reasons, it is essential to control and quantify the radiation dose incident upon the patient to ensure that it is both biologically effective and has the minimal possible impact on the surrounding unaffected tissue. Methods: To date, there has been little work on dose modeling, and the output of artificial UVR sources is an area where research has been recommended. This work characterizes these sources by formalizing an approach from first principles and experimentally examining this model. Results: An implementation of a line source model is found to give impressive accuracy and quantifies the output radiation well. Conclusions: This method could potentially serve as a basis for a full computational dose model for quantifying patient dose.
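The line-source characterization described in this record amounts to integrating inverse-square point-source contributions along the lamp axis. The sketch below is not the authors' implementation; lamp power, length, and distance are illustrative, and the numeric sum is checked against the closed-form result of the same integral:

```python
import math

def line_source_irradiance(P, L, d, n=10000):
    """Fluence rate (W/m^2) at perpendicular distance d from the midpoint
    of a line source of length L radiating total power P isotropically,
    summed over point-source elements (inverse-square law, midpoint rule)."""
    dl = L / n
    total = 0.0
    for i in range(n):
        l = -L / 2 + (i + 0.5) * dl       # element position along the lamp
        r2 = d * d + l * l                # squared distance element -> detector
        total += (P * dl / L) / (4.0 * math.pi * r2)
    return total

def line_source_analytic(P, L, d):
    """Closed form of the same integral: P/(2*pi*L*d) * atan(L/(2d))."""
    return P / (2.0 * math.pi * L * d) * math.atan(L / (2.0 * d))

e_num = line_source_irradiance(P=100.0, L=1.8, d=0.3)
e_ana = line_source_analytic(100.0, 1.8, 0.3)
```

Multiplying such an irradiance by exposure time and an erythemal weighting would give the biologically effective dose that the paper's model aims to control.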

  1. The ACCENT-protocol: a framework for benchmarking and model evaluation

    Directory of Open Access Journals (Sweden)

    V. Grewe

    2012-05-01

    We summarise results from a workshop on "Model Benchmarking and Quality Assurance" of the EU-Network of Excellence ACCENT, including results from other activities (e.g. COST Action 732) and publications. A formalised evaluation protocol is presented, i.e. a generic formalism describing the procedure of how to perform a model evaluation. This includes eight steps; examples from global model applications are given for illustration. The first and most important step concerns the purpose of the model application, i.e. the addressed underlying scientific or political question. We give examples to demonstrate that there is no model evaluation per se, i.e. without a focused purpose. Model evaluation is testing whether a model is fit for its purpose. The following steps are deduced from the purpose and include model requirements, input data, key processes and quantities, benchmark data, quality indicators, sensitivities, as well as benchmarking and grading. We define "benchmarking" as the process of comparing the model output against either observational data or high-fidelity model data, i.e. benchmark data. Special focus is given to the uncertainties, e.g. in observational data, which have the potential to lead to wrong conclusions in the model evaluation if not considered carefully.

  2. The ACCENT-protocol: a framework for benchmarking and model evaluation

    Science.gov (United States)

    Grewe, V.; Moussiopoulos, N.; Builtjes, P.; Borrego, C.; Isaksen, I. S. A.; Volz-Thomas, A.

    2012-05-01

    We summarise results from a workshop on "Model Benchmarking and Quality Assurance" of the EU-Network of Excellence ACCENT, including results from other activities (e.g. COST Action 732) and publications. A formalised evaluation protocol is presented, i.e. a generic formalism describing the procedure of how to perform a model evaluation. This includes eight steps; examples from global model applications are given for illustration. The first and most important step concerns the purpose of the model application, i.e. the addressed underlying scientific or political question. We give examples to demonstrate that there is no model evaluation per se, i.e. without a focused purpose. Model evaluation is testing whether a model is fit for its purpose. The following steps are deduced from the purpose and include model requirements, input data, key processes and quantities, benchmark data, quality indicators, sensitivities, as well as benchmarking and grading. We define "benchmarking" as the process of comparing the model output against either observational data or high-fidelity model data, i.e. benchmark data. Special focus is given to the uncertainties, e.g. in observational data, which have the potential to lead to wrong conclusions in the model evaluation if not considered carefully.

  3. Towards a benchmark simulation model for plant-wide control strategy performance evaluation of WWTPs

    DEFF Research Database (Denmark)

    Jeppsson, Ulf; Rosen, Christian; Alex, Jens

    2006-01-01

    The COST/IWA benchmark simulation model has been available for seven years. Its primary purpose has been to create a platform for control strategy benchmarking of activated sludge processes. The fact that the benchmark has resulted in more than 100 publications, not only in Europe but also...... worldwide, demonstrates the interest in such a tool within the research community. In this paper, an extension of the benchmark simulation model no. 1 (BSM1) is proposed. This extension aims at facilitating control strategy development and performance evaluation at a plant-wide level and, consequently...... the changes, the evaluation period has been extended to one year. A prolonged evaluation period allows for long-term control strategies to be assessed and enables the use of control handles that cannot be evaluated in a realistic fashion in the one-week BSM1 evaluation period. In the paper, the extended plant...

  4. Altered operant responding for motor reinforcement and the determination of benchmark doses following perinatal exposure to low-level 2,3,7,8-tetrachlorodibenzo-p-dioxin.

    Science.gov (United States)

    Markowski, V P; Zareba, G; Stern, S; Cox, C; Weiss, B

    2001-06-01

    Pregnant Holtzman rats were exposed to a single oral dose of 0, 20, 60, or 180 ng/kg 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) on the 18th day of gestation. Their adult female offspring were trained to respond on a lever for brief opportunities to run in specially designed running wheels. Once they had begun responding on a fixed-ratio 1 (FR1) schedule of reinforcement, the fixed-ratio requirement for lever pressing was increased at five-session intervals to values of FR2, FR5, FR10, FR20, and FR30. We examined vaginal cytology after each behavior session to track estrous cyclicity. Under each of the FR values, perinatal TCDD exposure produced a significant dose-related reduction in the number of earned opportunities to run, the lever response rate, and the total number of revolutions in the wheel. Estrous cyclicity was not affected. Because of the consistent dose-response relationship at all FR values, we used the behavioral data to calculate benchmark doses based on displacements from modeled zero-dose performance of 1% (ED(01)) and 10% (ED(10)), as determined by a quadratic fit to the dose-response function. The mean ED(10) benchmark dose for earned run opportunities was 10.13 ng/kg with a 95% lower bound of 5.77 ng/kg. The corresponding ED(01) was 0.98 ng/kg with a 95% lower bound of 0.83 ng/kg. The mean ED(10) for total wheel revolutions was calculated as 7.32 ng/kg with a 95% lower bound of 5.41 ng/kg. The corresponding ED(01) was 0.71 ng/kg with a 95% lower bound of 0.60. These values should be viewed from the perspective of current human body burdens, whose average value, based on TCDD toxic equivalents, has been calculated as 13 ng/kg.
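The benchmark-dose calculation described in this abstract (a quadratic fit to the dose-response function, then solving for the dose giving a 1% or 10% displacement from the modeled zero-dose performance) can be sketched as follows. The response values below are invented for illustration (they are not the study's data), and the 95% lower bounds quoted in the abstract would additionally require a bootstrap or profile-likelihood step not shown here:

```python
import numpy as np

# Hypothetical group-mean data: dose in ng/kg, response = earned run
# opportunities per session (illustrative values only).
doses = np.array([0.0, 20.0, 60.0, 180.0])
response = np.array([40.0, 33.0, 24.0, 10.0])

# Quadratic fit, as in the abstract: r(d) = a + b*d + c*d^2
c, b, a = np.polyfit(doses, response, 2)

def bmd(frac):
    """Dose at which the fitted response drops by `frac` (e.g. 0.10 for
    the ED10) relative to the modeled zero-dose response a."""
    target = a * (1.0 - frac)
    roots = np.roots([c, b, a - target])          # solve a + b*d + c*d^2 = target
    real = roots.real[np.abs(roots.imag) < 1e-9]  # keep (numerically) real roots
    positive = real[real > 0]
    return positive.min()                         # smallest positive crossing

ed10 = bmd(0.10)
ed01 = bmd(0.01)
```

By construction the ED01 falls below the ED10, mirroring the ordering of the benchmark doses reported in the study.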

  5. Towards a public, standardized, diagnostic benchmarking system for land surface models

    Directory of Open Access Journals (Sweden)

    G. Abramowitz

    2012-06-01

    This work examines different conceptions of land surface model benchmarking and the importance of internationally standardized evaluation experiments that specify data sets, variables, metrics and model resolutions. It additionally demonstrates how essential the definition of a priori expectations of model performance can be, based on the complexity of a model and the amount of information being provided to it, and gives an example of how these expectations might be quantified. Finally, the Protocol for the Analysis of Land Surface models (PALS) is introduced – a free, online land surface model benchmarking application that is structured to meet both of these goals.

  6. Urban contamination and dose model

    International Nuclear Information System (INIS)

    Robertson, E.; Barry, P.J.

    1995-10-01

    Nuclear power reactors and other nuclear facilities are being built near or even within urban centres. Accidental releases of radionuclides to the atmosphere in built-up areas result in radiological exposure pathways that differ from those caused by releases in rural environments. Other than inhalation, exposure pathways involve external radiation from the plume while it passes and from radioactivity deposited onto the many and varied surfaces after it has passed. Radiation fields inside buildings are attenuated, but many people are potentially exposed, so while individual doses may be relatively low, population-integrated doses may be high enough to cause concern. It is important, therefore, to assess the potential exposures and to estimate the cost-effectiveness of dose reduction measures in urban environments. This report describes a model developed to carry out such assessments. The model draws heavily on experience gained in European cities after their contamination by fallout from the Chernobyl accident. Input is time-integrated concentrations of specific radionuclides in urban air, obtained either by direct measurement or by prediction using an atmospheric dispersion model. The code includes default values for site-specific variables and transfer parameters, but the user may enter other values from the keyboard if desired. Output is the time-integrated dose rates for individuals selected for their characteristic living, working and recreational habits. An accompanying manual documents the technical background on which the model is based and leads a first-time user through the various steps and operations encountered while the model is running. (author). 60 refs., 10 tabs., 1 fig

  7. Model of organ dose combination

    International Nuclear Information System (INIS)

    Valley, J.-F.; Lerch, P.

    1977-01-01

    The ICRP recommendations are based on the limitation of the dose to each organ. In applications involving a single source, the critical organ concept limits the calculation required and represents the irradiation status of an individual. When several sources of radiation are involved, the dose contribution of each source to each organ must be derived, and a new parameter is needed to represent the irradiation status. Propositions have been made by some authors, in particular by Jacobi, who introduced biological parameters such as the incidence rate of detriment and its severity. The new concept is certainly richer than a simple dose notion. However, given the current state of knowledge about radiation effects, an intermediate parameter using only physical concepts and the maximum permissible doses to the organs seems more appropriate. The model, which is a generalization of the critical organ concept and shall be extended in the future to take biological effects into account, will be presented [fr

  8. Normal tissue dose-effect models in biological dose optimisation

    International Nuclear Information System (INIS)

    Alber, M.

    2008-01-01

    Sophisticated radiotherapy techniques like intensity modulated radiotherapy with photons and protons rely on numerical dose optimisation. The evaluation of normal tissue dose distributions that deviate significantly from the common clinical routine and also the mathematical expression of desirable properties of a dose distribution is difficult. In essence, a dose evaluation model for normal tissues has to express the tissue specific volume effect. A formalism of local dose effect measures is presented, which can be applied to serial and parallel responding tissues as well as target volumes and physical dose penalties. These models allow a transparent description of the volume effect and an efficient control over the optimum dose distribution. They can be linked to normal tissue complication probability models and the equivalent uniform dose concept. In clinical applications, they provide a means to standardize normal tissue doses in the face of inevitable anatomical differences between patients and a vastly increased freedom to shape the dose, without being overly limiting like sets of dose-volume constraints. (orig.)

  9. A New Performance Improvement Model: Adding Benchmarking to the Analysis of Performance Indicator Data.

    Science.gov (United States)

    Al-Kuwaiti, Ahmed; Homa, Karen; Maruthamuthu, Thennarasu

    2016-01-01

    A performance improvement model was developed that focuses on the analysis and interpretation of performance indicator (PI) data using statistical process control and benchmarking. PIs are suitable for comparison with benchmarks only if the data fall within the statistically accepted limits, that is, show only random variation. Specifically, if there is no significant special-cause variation over a period of time, then the data are ready to be benchmarked. The proposed Define, Measure, Control, Internal Threshold, and Benchmark model is adapted from the Define, Measure, Analyze, Improve, Control (DMAIC) model. The model consists of the following five steps: Step 1. Define the process; Step 2. Monitor and measure the variation over the period of time; Step 3. Check the variation of the process; if stable (no significant variation), go to Step 4; otherwise, control variation with the help of an action plan; Step 4. Develop an internal threshold and compare the process with it; Step 5.1. Compare the process with an internal benchmark; and Step 5.2. Compare the process with an external benchmark. The steps are illustrated through the use of health care-associated infection (HAI) data collected for 2013 and 2014 from the Infection Control Unit, King Fahd Hospital, University of Dammam, Saudi Arabia. Monitoring variation is an important strategy in understanding and learning about a process. In the example, HAI was monitored for variation in 2013, and the need for a more predictable process prompted controlling variation with an action plan. The action plan was successful, as noted by the shift in the 2014 data compared to the historical average; in addition, the variation was reduced. The model is subject to limitations: for example, it cannot be used without benchmarks, which need to be calculated the same way with similar patient populations, and it focuses only on the "Analyze" part of the DMAIC model.
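The stability-before-benchmarking logic of Steps 3-5 can be sketched with a crude control-limit check. The infection rates and benchmark value below are invented, and a real individuals chart would estimate sigma from the moving range rather than the sample standard deviation; this is only an illustration of the decision flow:

```python
from statistics import mean, stdev

def is_stable(rates, sigma_mult=3.0):
    """Crude stability check: no point outside mean +/- 3*sigma.
    (An actual I-chart derives sigma from the average moving range.)"""
    m, s = mean(rates), stdev(rates)
    lcl, ucl = m - sigma_mult * s, m + sigma_mult * s
    return all(lcl <= r <= ucl for r in rates)

def compare_to_benchmark(rates, benchmark):
    """Steps 3-5 of the model: benchmark only a stable process."""
    if not is_stable(rates):
        return "control variation first (special-cause present)"
    return "below benchmark" if mean(rates) <= benchmark else "above benchmark"

# Hypothetical monthly infection rates per 1000 device-days
rates_2014 = [2.1, 1.9, 2.3, 2.0, 1.8, 2.2, 2.1, 1.9, 2.0, 2.2, 1.8, 2.1]
status = compare_to_benchmark(rates_2014, benchmark=2.5)
```

A process with an outlying month would instead be routed back to the action-plan branch before any benchmark comparison is attempted.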

  10. 77 FR 36533 - Notice of Availability of the Benchmark Dose Technical Guidance

    Science.gov (United States)

    2012-06-19

    ... environment, the EPA routinely conducts risk assessments on chemical agents that may be toxic to humans. A key component of the risk assessment process involves evaluating the dose-response relationship between exposure... BMD methodology for human health risk assessments. The document discusses computation of BMD values...

  11. Benchmark studies of induced radioactivity and remanent dose rates produced in LHC materials

    International Nuclear Information System (INIS)

    Brugger, M.; Mayer, S.; Roesler, S.; Ulrici, L.; Khater, H.; Prinz, A.; Vincke, H.

    2005-01-01

    Samples of materials that will be used for elements of the LHC machine as well as for shielding and construction components were irradiated in the stray radiation field of the CERN-EU high-energy Reference Field facility. The materials included various types of steel, copper, titanium, concrete and marble as well as light materials such as carbon composites and boron nitride. Emphasis was put on an accurate recording of the irradiation conditions, such as irradiation profile and intensity, and on a detailed determination of the elemental composition of the samples. After the irradiation, the specific activity induced in the samples as well as the remanent dose rate were measured at different cooling times ranging from about 20 minutes to two months. Furthermore, the irradiation experiment was simulated using the FLUKA Monte Carlo code and specific activities. In addition, dose rates were calculated. The latter was based on a new method simulating the production of various isotopes and the electromagnetic cascade induced by radioactive decay at a certain cooling time. In general, solid agreement was found, which engenders confidence in the predictive power of the applied codes and tools for the estimation of the radioactive nuclide inventory of the LHC machine as well as the calculation of remanent doses to personnel during interventions. (authors)

  12. Impact of Genomics Platform and Statistical Filtering on Transcriptional Benchmark Doses (BMD) and Multiple Approaches for Selection of Chemical Point of Departure (PoD).

    Directory of Open Access Journals (Sweden)

    A Francina Webster

    Many regulatory agencies are exploring ways to integrate toxicogenomic data into their chemical risk assessments. The major challenge lies in determining how to distill the complex data produced by high-content, multi-dose gene expression studies into quantitative information. It has been proposed that benchmark dose (BMD) values derived from toxicogenomics data be used as point of departure (PoD) values in chemical risk assessments. However, there is limited information regarding which genomics platforms are most suitable and how to select appropriate PoD values. In this study, we compared BMD values modeled from RNA sequencing-, microarray-, and qPCR-derived gene expression data from a single study, and explored multiple approaches for selecting a single PoD from these data. The strategies evaluated include several that do not require prior mechanistic knowledge of the compound for selection of the PoD, thus providing approaches for assessing data-poor chemicals. We used RNA extracted from the livers of female mice exposed to non-carcinogenic (0, 2 mg/kg/day; mkd) and carcinogenic (4, 8 mkd) doses of furan for 21 days. We show that transcriptional BMD values were consistent across technologies and highly predictive of the two-year cancer bioassay-based PoD. We also demonstrate that filtering data based on statistically significant changes in gene expression prior to BMD modeling creates more conservative BMD values. Taken together, this case study on mice exposed to furan demonstrates that high-content toxicogenomics studies produce robust data for BMD modelling that are minimally affected by inter-technology variability and highly predictive of cancer-based PoD doses.

  13. Mechanism-based risk assessment strategy for drug-induced cholestasis using the transcriptional benchmark dose derived by toxicogenomics.

    Science.gov (United States)

    Kawamoto, Taisuke; Ito, Yuichi; Morita, Osamu; Honda, Hiroshi

    2017-01-01

    Cholestasis is one of the major causes of drug-induced liver injury (DILI), which can result in withdrawal of approved drugs from the market. Early identification of cholestatic drugs is difficult due to the complex mechanisms involved. In order to develop a strategy for mechanism-based risk assessment of cholestatic drugs, we analyzed gene expression data obtained from the livers of rats that had been orally administered with 12 known cholestatic compounds repeatedly for 28 days at three dose levels. Qualitative analyses were performed using two statistical approaches (hierarchical clustering and principal component analysis), in addition to pathway analysis. The transcriptional benchmark dose (tBMD) and tBMD 95% lower limit (tBMDL) were used for quantitative analyses, which revealed three compound sub-groups that produced different types of differential gene expression; these groups of genes were mainly involved in inflammation, cholesterol biosynthesis, and oxidative stress. Furthermore, the tBMDL values for each test compound were in good agreement with the relevant no observed adverse effect level. These results indicate that our novel strategy for drug safety evaluation using mechanism-based classification and tBMDL would facilitate the application of toxicogenomics for risk assessment of cholestatic DILI.

  14. Benchmarking biological nutrient removal in wastewater treatment plants: influence of mathematical model assumptions

    DEFF Research Database (Denmark)

    Flores-Alsina, Xavier; Gernaey, Krist V.; Jeppsson, Ulf

    2012-01-01

    This paper examines the effect of different model assumptions when describing biological nutrient removal (BNR) by the activated sludge models (ASM) 1, 2d & 3. The performance of a nitrogen removal (WWTP1) and a combined nitrogen and phosphorus removal (WWTP2) benchmark wastewater treatment plant...

  15. Depletion benchmarks calculation of random media using explicit modeling approach of RMC

    International Nuclear Information System (INIS)

    Liu, Shichang; She, Ding; Liang, Jin-gang; Wang, Kan

    2016-01-01

    Highlights: • Explicit modeling of RMC is applied to depletion benchmark for HTGR fuel element. • Explicit modeling can provide detailed burnup distribution and burnup heterogeneity. • The results would serve as a supplement for the HTGR fuel depletion benchmark. • The method of adjacent burnup regions combination is proposed for full-core problems. • The combination method can reduce memory footprint, keeping the computing accuracy. - Abstract: Monte Carlo method plays an important role in accurate simulation of random media, owing to its advantages of the flexible geometry modeling and the use of continuous-energy nuclear cross sections. Three stochastic geometry modeling methods including Random Lattice Method, Chord Length Sampling and explicit modeling approach with mesh acceleration technique, have been implemented in RMC to simulate the particle transport in the dispersed fuels, in which the explicit modeling method is regarded as the best choice. In this paper, the explicit modeling method is applied to the depletion benchmark for HTGR fuel element, and the method of combination of adjacent burnup regions has been proposed and investigated. The results show that the explicit modeling can provide detailed burnup distribution of individual TRISO particles, and this work would serve as a supplement for the HTGR fuel depletion benchmark calculations. The combination of adjacent burnup regions can effectively reduce the memory footprint while keeping the computational accuracy.

  16. Benchmark of PENELOPE code for low-energy photon transport: dose comparisons with MCNP4 and EGS4

    International Nuclear Information System (INIS)

    Ye, Sung-Joon; Brezovich, Ivan A; Pareek, Prem; Naqvi, Shahid A

    2004-01-01

    The expanding clinical use of low-energy photon emitting ¹²⁵I and ¹⁰³Pd seeds in recent years has led to renewed interest in their dosimetric properties. Numerous papers have pointed out that higher accuracy can be obtained in Monte Carlo simulations by utilizing newer libraries for the low-energy photon cross-sections, such as XCOM and EPDL97. The recently developed PENELOPE 2001 Monte Carlo code is user friendly and incorporates photon cross-section data from EPDL97. The code has been verified for clinical dosimetry of high-energy electron and photon beams, but had not yet been tested at low energies. In the present work, we have benchmarked the PENELOPE code for 10-150 keV photons. We computed radial dose distributions from 0 to 10 cm in water at photon energies of 10-150 keV using both PENELOPE and MCNP4C with either the DLC-146 or DLC-200 cross-section library, assuming a point source located at the centre of a cylinder 30 cm in diameter and 20 cm long. Throughout the energy range of simulated photons (except for 10 keV), PENELOPE agreed within statistical uncertainties (at worst ±5%) with MCNP/DLC-146 in the entire region of 1-10 cm and with published EGS4 data up to 5 cm. The dose at 1 cm (or dose rate constant) of PENELOPE agreed with MCNP/DLC-146 and EGS4 data within approximately ±2% in the range of 20-150 keV, while MCNP/DLC-200 produced values up to 9% lower in the range of 20-100 keV than PENELOPE or the other codes. However, the differences among the four datasets became negligible above 100 keV.

  17. Benchmark of PENELOPE code for low-energy photon transport: dose comparisons with MCNP4 and EGS4.

    Science.gov (United States)

    Ye, Sung-Joon; Brezovich, Ivan A; Pareek, Prem; Naqvi, Shahid A

    2004-02-07

    The expanding clinical use of low-energy photon emitting 125I and 103Pd seeds in recent years has led to renewed interest in their dosimetric properties. Numerous papers have pointed out that higher accuracy can be obtained in Monte Carlo simulations by utilizing newer libraries for the low-energy photon cross-sections, such as XCOM and EPDL97. The recently developed PENELOPE 2001 Monte Carlo code is user friendly and incorporates photon cross-section data from EPDL97. The code has been verified for clinical dosimetry of high-energy electron and photon beams, but had not yet been tested at low energies. In the present work, we have benchmarked the PENELOPE code for 10-150 keV photons. We computed radial dose distributions from 0 to 10 cm in water at photon energies of 10-150 keV using both PENELOPE and MCNP4C with either the DLC-146 or DLC-200 cross-section library, assuming a point source located at the centre of a cylinder 30 cm in diameter and 20 cm long. Throughout the energy range of simulated photons (except for 10 keV), PENELOPE agreed within statistical uncertainties (at worst ±5%) with MCNP/DLC-146 in the entire region of 1-10 cm and with published EGS4 data up to 5 cm. The dose at 1 cm (or dose rate constant) of PENELOPE agreed with MCNP/DLC-146 and EGS4 data within approximately ±2% in the range of 20-150 keV, while MCNP/DLC-200 produced values up to 9% lower in the range of 20-100 keV than PENELOPE or the other codes. However, the differences among the four datasets became negligible above 100 keV.
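The code-to-code agreement quoted in benchmarks like this one reduces to a per-point percentage deviation between radial dose curves. A minimal sketch of that comparison, with invented dose values rather than data from the paper:

```python
# Sketch: per-radius percentage deviation between two codes' dose curves.

def percent_deviation(dose_a, dose_b):
    """Return (dose_a - dose_b) / dose_b * 100 at each radial point."""
    return [100.0 * (a - b) / b for a, b in zip(dose_a, dose_b)]

penelope = [1.00, 0.45, 0.20, 0.090]   # hypothetical normalised doses
mcnp     = [1.02, 0.44, 0.20, 0.088]
devs = percent_deviation(penelope, mcnp)
within_5pct = all(abs(d) <= 5.0 for d in devs)
```

In practice the statistical uncertainty of each Monte Carlo tally would be carried alongside the deviation, since "agreed within statistical uncertainties" is the actual acceptance criterion.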

  18. Benchmarking in pathology: development of an activity-based costing model.

    Science.gov (United States)

    Burnett, Leslie; Wilson, Roger; Pfeffer, Sally; Lowry, John

    2012-12-01

    Benchmarking in Pathology (BiP) allows pathology laboratories to determine the unit cost of all laboratory tests and procedures, and also provides organisational productivity indices allowing comparisons of performance with other BiP participants. We describe 14 years of progressive enhancement to a BiP program, including the implementation of 'avoidable costs' as the accounting basis for allocation of costs, rather than previous approaches using 'total costs'. A hierarchical tree-structured activity-based costing model distributes the 'avoidable costs' attributable to the pathology activities component of a pathology laboratory operation. The hierarchical tree model permits costs to be allocated across multiple laboratory sites and organisational structures. This has enabled benchmarking on a number of levels, including test profiles and non-testing related workload activities. Methods for dealing with variable cost inputs, allocation of indirect costs using imputation techniques, panels of tests, and blood-bank record keeping have been successfully integrated into the costing model. A variety of laboratory management reports are produced, including the 'cost per test' of each pathology 'test' output. Benchmarking comparisons may be undertaken at the 'cost per test' and 'cost per Benchmarking Complexity Unit' level, the 'discipline/department' (sub-specialty) level, or the overall laboratory/site and organisational levels. We have completed development of a national BiP program. An activity-based costing methodology based on avoidable costs overcomes many problems of previous benchmarking studies based on total costs. The benchmarking complexity adjustment permits correction for varying test-mix and diagnostic complexity between laboratories. Use of iterative communication strategies with program participants can overcome many obstacles and lead to innovations.
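The hierarchical tree-structured allocation described above can be sketched as a recursive pass that pushes avoidable costs down the organisational tree and converts leaf-level cost pools into a cost per test. The tree structure, weights and figures below are hypothetical illustrations, not BiP's actual model:

```python
# Sketch of a tree-structured activity-based costing pass (all data invented).

def allocate(node, inherited=0.0):
    """Distribute the node's avoidable cost plus inherited cost downwards;
    return {test_name: unit_cost} for the leaf-level tests."""
    pool = node.get("cost", 0.0) + inherited
    if "tests" in node:
        # leaf department: share the pool by workload = volume * complexity,
        # then convert each test's share into a cost per single test
        workload = {name: vol * cx for name, vol, cx in node["tests"]}
        total = sum(workload.values())
        return {name: pool * workload[name] / total / vol
                for name, vol, cx in node["tests"]}
    total_w = sum(child["weight"] for child in node["children"])
    out = {}
    for child in node["children"]:
        out.update(allocate(child, pool * child["weight"] / total_w))
    return out

lab = {"cost": 1000.0, "children": [                 # laboratory overhead
    {"weight": 3, "cost": 200.0,                     # department A
     "tests": [("FBC", 500, 1.0), ("EUC", 300, 2.0)]},
    {"weight": 1, "cost": 100.0,                     # department B
     "tests": [("HbA1c", 100, 3.0)]},
]}
unit_costs = allocate(lab)
```

The complexity factor plays the role of the 'Benchmarking Complexity Unit' adjustment: a test twice as complex attracts twice the unit cost within its department.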

  19. Patient-specific IMRT verification using independent fluence-based dose calculation software: experimental benchmarking and initial clinical experience

    International Nuclear Information System (INIS)

    Georg, Dietmar; Stock, Markus; Kroupa, Bernhard; Olofsson, Joergen; Nyholm, Tufve; Ahnesjoe, Anders; Karlsson, Mikael

    2007-01-01

    Experimental methods are commonly used for patient-specific intensity-modulated radiotherapy (IMRT) verification. The purpose of this study was to investigate the accuracy and performance of independent dose calculation software (denoted 'MUV', for monitor unit verification) for patient-specific quality assurance (QA). 52 patients receiving step-and-shoot IMRT were considered. IMRT plans were recalculated by the treatment planning systems (TPS) in a dedicated QA phantom, in which an experimental 1D and 2D verification (0.3 cm³ ionization chamber; films) was performed. Additionally, an independent dose calculation was performed. The fluence-based algorithm of MUV accounts for collimator transmission, rounded leaf ends, the tongue-and-groove effect, backscatter to the monitor chamber and scatter from the flattening filter. The dose calculation utilizes a pencil beam model based on a beam quality index. DICOM RT files from patient plans, exported from the TPS, were directly used as patient-specific input data in MUV. For composite IMRT plans, average deviations in the high dose region between ionization chamber measurements and point dose calculations performed with the TPS and MUV were 1.6 ± 1.2% and 0.5 ± 1.1% (1 S.D.). The dose deviations between MUV and TPS depended slightly on the distance from the isocentre position. For individual intensity-modulated beams (367 in total), an average deviation of 1.1 ± 2.9% was determined between calculations performed with the TPS and with MUV, with maximum deviations up to 14%. However, absolute dose deviations were mostly less than 3 cGy. Based on the current results, we aim to apply a confidence limit of 3% (with respect to the prescribed dose) or 6 cGy for routine IMRT verification. For off-axis points at distances larger than 5 cm and for low dose regions, we consider a 5% dose deviation or 10 cGy acceptable. The time needed for an independent calculation compares very favourably with the net time for an experimental approach.
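The confidence limits proposed above amount to a simple acceptance rule: a point-dose deviation passes if it is within 3% of the prescribed dose or within 6 cGy absolute (5%/10 cGy for off-axis or low-dose points). A sketch of that rule; the function name and example doses are ours, the thresholds are taken from the abstract:

```python
# Sketch of the per-point IMRT verification tolerance check.

def passes(measured, independent, prescribed, off_axis=False):
    """True if |measured - independent| is within the percentage limit
    (relative to prescribed dose) OR the absolute limit in cGy."""
    pct_lim, abs_lim = (5.0, 10.0) if off_axis else (3.0, 6.0)
    diff = abs(measured - independent)
    return (100.0 * diff / prescribed <= pct_lim) or (diff <= abs_lim)

ok = passes(measured=198.0, independent=203.0, prescribed=200.0)  # 2.5% -> pass
```

The OR-combination is deliberate: the absolute criterion keeps low-dose points from failing on large but clinically irrelevant percentage deviations.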

  20. 2016 International Land Model Benchmarking (ILAMB) Workshop Report

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, Forrest M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Koven, Charles D. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Keppel-Aleks, Gretchen [Univ. of Michigan, Ann Arbor, MI (United States); Lawrence, David M. [National Center for Atmospheric Research, Boulder, CO (United States); Riley, William J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Randerson, James T. [Univ. of California, Irvine, CA (United States); Ahlström, Anders [Stanford Univ., Stanford, CA (United States); Lund Univ., Lund (Sweden); Abramowitz, Gabriel [Univ. of New South Wales, Sydney, NSW (Australia); Baldocchi, Dennis D. [Univ. of California, Berkeley, CA (United States); Best, Martin J. [UK Met Office, Exeter, EX1 3PB (United Kingdom); Bond-Lamberty, Benjamin [Joint Global Change Research Institute, Pacific Northwest National Lab. (PNNL), College Park, MD (United States); De Kauwe, Martin G. [Macquarie Univ., NSW (Australia); Denning, A. Scott [Colorado State Univ., Fort Collins, CO (United States); Desai, Ankur R. [Univ. of Wisconsin, Madison, WI (United States); Eyring, Veronika [Deutsches Zentrum fuer Luft- und Raumfahrt (DLR), Oberpfaffenhofen (Germany); Fisher, Joshua B. [California Inst. of Technology (CalTech), Pasadena, CA (United States). Jet Propulsion Lab.; Fisher, Rosie A. [National Center for Atmospheric Research, Boulder, CO (United States); Gleckler, Peter J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Huang, Maoyi [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hugelius, Gustaf [Stockholm Univ. (Sweden); Jain, Atul K. [Univ. of Illinois, Urbana, IL (United States); Kiang, Nancy Y. [NASA Goddard Institute for Space Studies, Columbia Univ., New York, NY (United States); Kim, Hyungjum [University of Tokyo, Bunkyo-ku, Tokyo (Japan); Koster, Randal D. [NASA Goddard Space Flight Center (GSFC), Greenbelt, MD (United States); Kumar, Sujay V. [NASA Goddard Space Flight Center (GSFC), Greenbelt, MD (United States); Li, Hongyi [Tsinghua Univ., Beijing (China). Dept. of Hydraulic Engineering; Luo, Yiqi [Univ. of Oklahoma, Norman, OK (United States); Mao, Jiafu [Univ. of Illinois at Urbana-Champaign, Urbana, IL (United States); McDowell, Nathan G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Mishra, Umakant [Argonne National Lab. (ANL), Argonne, IL (United States); Moorcroft, Paul R. [Harvard Univ., Cambridge, MA (United States); Pau, George S.H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ricciuto, Daniel M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Schaefer, Kevin [Univ. of Colorado, Boulder, CO (United States). National Snow and Ice Data Center, Cooperative Institute for Research in Environmental Sciences; Schwalm, Christopher R. [Woods Hole Research Center, Falmouth, MA (United States); Serbin, Shawn P. [Brookhaven National Lab. (BNL), Upton, NY (United States); Shevliakova, Elena [Geophysical Fluid Dynamics Laboratory, Princeton Univ., Princeton, NJ (United States); Slater, Andrew G. [Univ. of Colorado, Boulder, CO (United States). National Snow and Ice Data Center, Cooperative Institute for Research in Environmental Sciences; Tang, Jinyun [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Mathew [Univ. of Edinburgh, Scotland (United Kingdom). School of GeoSciences and NERC National Centre for Earth Observation; Xia, Jianyang [Univ. of Oklahoma, Norman, OK (United States); East China Normal Univ. (ECNU), Shanghai (China). Tiantong National Forest Ecosystem Observation and Research Station, School of Ecological and Environmental Sciences; Xu, Chonggang [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Joseph, Renu [US Department of Energy, Germantown, MD (United States); Koch, Dorothy [US Department of Energy, Germantown, MD (United States)

    2017-04-01

    As Earth system models become increasingly complex, there is a growing need for comprehensive and multi-faceted evaluation of model projections. To advance understanding of biogeochemical processes and their interactions with hydrology and climate under conditions of increasing atmospheric carbon dioxide, new analysis methods are required that use observations to constrain model predictions, inform model development, and identify needed measurements and field experiments. Better representations of biogeochemistry–climate feedbacks and ecosystem processes in these models are essential for reducing uncertainties associated with projections of climate change during the remainder of the 21st century.

  1. 2016 International Land Model Benchmarking (ILAMB) Workshop Report

    Science.gov (United States)

    Hoffman, Forrest M.; Koven, Charles D.; Keppel-Aleks, Gretchen; Lawrence, David M.; Riley, William J.; Randerson, James T.; Ahlstrom, Anders; Abramowitz, Gabriel; Baldocchi, Dennis D.; Best, Martin J.

    2016-01-01

    As Earth system models (ESMs) become increasingly complex, there is a growing need for comprehensive and multi-faceted evaluation of model projections. To advance understanding of terrestrial biogeochemical processes and their interactions with hydrology and climate under conditions of increasing atmospheric carbon dioxide, new analysis methods are required that use observations to constrain model predictions, inform model development, and identify needed measurements and field experiments. Better representations of biogeochemistry-climate feedbacks and ecosystem processes in these models are essential for reducing the acknowledged substantial uncertainties in 21st century climate change projections.

  2. Creating a benchmark of vertical axis wind turbines in dynamic stall for validating numerical models

    DEFF Research Database (Denmark)

    Castelein, D.; Ragni, D.; Tescione, G.

    2015-01-01

    An experimental campaign using Particle Image Velocimetry (2C-PIV) technique has been conducted on a H-type Vertical Axis Wind Turbine (VAWT) to create a benchmark for validating and comparing numerical models. The turbine is operated at tip speed ratios (TSR) of 4.5 and 2, at an average chord...

  3. Structural modeling and fuzzy-logic based diagnosis of a ship propulsion benchmark

    DEFF Research Database (Denmark)

    Izadi-Zamanabadi, Roozbeh; Blanke, M.; Katebi, S.D.

    2000-01-01

    An analysis of the structural model of a ship propulsion benchmark leads to identifying the subsystems with inherent redundant information. For the nonlinear part of the system, a fuzzy-logic based FD algorithm with an adaptive threshold is employed. The results illustrate the applicability of structural...

  4. A model library for simulation and benchmarking of integrated urban wastewater systems

    DEFF Research Database (Denmark)

    Saagi, R.; Flores Alsina, Xavier; Kroll, J. S.

    2017-01-01

    This paper presents a freely distributed, open-source toolbox to predict the behaviour of urban wastewater systems (UWS). The proposed library is used to develop a system-wide Benchmark Simulation Model (BSM-UWS) for evaluating (local/global) control strategies in urban wastewater systems (UWS...

  5. The Accent-protocol: a framework for benchmarking and model evaluation

    NARCIS (Netherlands)

    Builtjes, P.J.H.; Grewe, V.; Moussiopoulos, N.; Borrego, C.; Isaksen, I.S.A.; Volz-Thomas, A.

    2011-01-01

    We summarise results from a workshop on “Model Benchmarking and Quality Assurance” of the EU-Network of Excellence ACCENT, including results from other activities (e.g. COST Action 732) and publications. A formalised evaluation protocol is presented, i.e. a generic formalism describing the procedure

  6. Uncertainty and sensitivity analysis of control strategies using the benchmark simulation model No1 (BSM1)

    DEFF Research Database (Denmark)

    Flores-Alsina, Xavier; Rodriguez-Roda, Ignasi; Sin, Gürkan

    2009-01-01

    The objective of this paper is to perform an uncertainty and sensitivity analysis of the predictions of the Benchmark Simulation Model (BSM) No. 1, when comparing four activated sludge control strategies. The Monte Carlo simulation technique is used to evaluate the uncertainty in the BSM1 predict...
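The Monte Carlo technique referred to above can be sketched generically: sample the uncertain model inputs, propagate each sample through the model, and summarise the spread of the prediction. The "plant model" below is a deliberately trivial surrogate, not BSM1, and the parameter ranges are invented:

```python
# Generic Monte Carlo uncertainty propagation sketch (toy model, not BSM1).
import random
import statistics

random.seed(1)  # reproducible sampling

def effluent_nitrogen(mu_max, k_o):
    """Toy surrogate for a plant prediction as a function of two
    uncertain kinetic parameters (purely illustrative)."""
    return 10.0 / mu_max + 2.0 * k_o

samples = []
for _ in range(1000):
    mu_max = random.uniform(0.4, 0.8)   # uncertain maximum growth rate
    k_o = random.uniform(0.1, 0.3)      # uncertain half-saturation constant
    samples.append(effluent_nitrogen(mu_max, k_o))

mean = statistics.mean(samples)
q = statistics.quantiles(samples, n=10)
p10, p90 = q[0], q[-1]                  # 10th/90th percentile band
```

Sensitivity analysis then attributes the output spread to individual inputs, e.g. by correlating each sampled input with the output or by standardised regression coefficients.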

  7. Developing of Indicators of an E-Learning Benchmarking Model for Higher Education Institutions

    Science.gov (United States)

    Sae-Khow, Jirasak

    2014-01-01

    This study was the development of e-learning indicators used as an e-learning benchmarking model for higher education institutes. Specifically, it aimed to: 1) synthesize the e-learning indicators; 2) examine content validity by specialists; and 3) explore appropriateness of the e-learning indicators. Review of related literature included…

  8. An Analysis of Academic Research Libraries Assessment Data: A Look at Professional Models and Benchmarking Data

    Science.gov (United States)

    Lewin, Heather S.; Passonneau, Sarah M.

    2012-01-01

    This research provides the first review of publicly available assessment information found on Association of Research Libraries (ARL) members' websites. After providing an overarching review of benchmarking assessment data, and of professionally recommended assessment models, this paper examines if libraries contextualized their assessment…

  9. Uncertainty in Earth System Models: Benchmarks for Ocean Model Performance and Validation

    Science.gov (United States)

    Ogunro, O. O.; Elliott, S.; Collier, N.; Wingenter, O. W.; Deal, C.; Fu, W.; Hoffman, F. M.

    2017-12-01

    The mean ocean CO2 sink is a major component of the global carbon budget, with marine reservoirs holding about fifty times more carbon than the atmosphere. Phytoplankton play a significant role in the net carbon sink through photosynthesis and drawdown, such that about a quarter of anthropogenic CO2 emissions end up in the ocean. Biology greatly increases the efficiency of marine environments in CO2 uptake and ultimately reduces the impact of the persistent rise in atmospheric concentrations. However, a number of challenges remain in appropriate representation of marine biogeochemical processes in Earth System Models (ESM). These threaten to undermine the community effort to quantify seasonal to multidecadal variability in ocean uptake of atmospheric CO2. In a bid to improve analyses of marine contributions to climate-carbon cycle feedbacks, we have developed new analysis methods and biogeochemistry metrics as part of the International Ocean Model Benchmarking (IOMB) effort. Our intent is to meet the growing diagnostic and benchmarking needs of ocean biogeochemistry models. The resulting software package has been employed to validate DOE ocean biogeochemistry results by comparison with observational datasets. Several other international ocean models contributing results to the fifth phase of the Coupled Model Intercomparison Project (CMIP5) were analyzed simultaneously. Our comparisons suggest that the biogeochemical processes determining CO2 entry into the global ocean are not well represented in most ESMs. Polar regions continue to show notable biases in many critical biogeochemical and physical oceanographic variables. Some of these disparities could have first order impacts on the conversion of atmospheric CO2 to organic carbon. In addition, single forcing simulations show that the current ocean state can be partly explained by the uptake of anthropogenic emissions. Combined effects of two or more of these forcings on ocean biogeochemical cycles and ecosystems
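Benchmarking packages of this kind score model output against observations with normalised metrics. The sketch below uses a common exponential bias score that maps model-observation mismatch onto [0, 1] (1 = perfect); it illustrates the idea and is not necessarily the metric IOMB implements:

```python
# Sketch of an ILAMB/IOMB-style bias score (formula is a common choice,
# not taken from the package; data values are invented).
import math

def bias_score(model, obs):
    """Mean of exp(-|relative bias|) over cells with non-zero observations."""
    scores = [math.exp(-abs(m - o) / abs(o))
              for m, o in zip(model, obs) if o != 0]
    return sum(scores) / len(scores)

# hypothetical gridded surface field, model vs. observations
score = bias_score(model=[2.1, 1.8, 0.9], obs=[2.0, 2.0, 1.0])
```

Because each cell's contribution is bounded, a single badly biased region (e.g. the polar biases noted above) lowers the score without dominating it.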

  10. DEVELOPING A MODEL TO ENHANCE LABOR PRODUCTIVITY USING BRIDGE CONSTRUCTION BENCHMARK DATA

    Directory of Open Access Journals (Sweden)

    Seonghoon Kim

    2013-07-01

    The Labor Working Status Monitoring (LWSM) Model, which incorporates the WRITE and industry benchmark data, was developed in five steps to enhance labor productivity in bridge construction operations. The first step of the development process was to conduct a literature review, followed by the second step, which was to develop the WRITE. During the development, the authors identified the necessary hardware and software for the WRITE and outlined a schematic to show the connection of the major hardware components. The third step was to develop the LWSM Model for monitoring on-site construction labor working status by comparing data from the WRITE with the industry benchmark data. A survey methodology was used to acquire industry benchmark data from bridge construction experts. The fourth step was to demonstrate the implementation of the LWSM Model at a bridge construction site. During this phase, labor working status data collected using the WRITE were compared with the benchmark data to form the basis for project managers and engineers to make efficiency improvement decisions. Finally, research findings and recommendations for future research were outlined. This research makes several contributions to the advancement of bridge construction. First, it advances the application of wireless technology in construction management. Second, it provides an advanced technology for project managers and engineers to share labor working status information among project participants. Finally, using the developed technology, project managers and engineers can quickly identify labor efficiency problems and take action to address them.
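The comparison step of such a monitoring model can be sketched as a check of observed labor-status fractions against benchmark values, flagging categories that fall short. The categories, benchmark numbers and tolerance below are invented for illustration, not survey results from the study:

```python
# Sketch: flag labor-status categories that deviate from benchmark values.

BENCHMARK = {"direct_work": 0.50, "support": 0.30, "idle": 0.20}  # invented

def flag_inefficiencies(observed, tolerance=0.05):
    """Flag categories where productive time falls more than `tolerance`
    below benchmark, or idle time exceeds benchmark by more than it."""
    flags = []
    for cat, bench in BENCHMARK.items():
        obs = observed[cat]
        if cat == "idle":
            if obs > bench + tolerance:
                flags.append(cat)
        elif obs < bench - tolerance:
            flags.append(cat)
    return flags

# a crew observed spending too little time on direct work, too much idle
flags = flag_inefficiencies({"direct_work": 0.40, "support": 0.32, "idle": 0.28})
```

A live system would evaluate this continuously against streaming data from the monitoring hardware rather than a single snapshot.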

  11. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  12. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  13. An integrated control-oriented modelling for HVAC performance benchmarking

    NARCIS (Netherlands)

    Satyavada, Harish; Baldi, S.

    2016-01-01

    Energy efficiency in building heating, ventilating and air conditioning (HVAC) equipment requires the development of accurate models for testing HVAC control strategies and corresponding energy consumption. In order to make the HVAC control synthesis computationally affordable, such

  14. Example Plant Model for an International Benchmark Study on DI and C PSA

    International Nuclear Information System (INIS)

    Shin, Sung Min; Park, Jinkyun; Jang, Wondea; Kang, Hyun Gook

    2016-01-01

    In this context, risk quantification for these digitalized safety systems has become more important. Although there are many challenges to address, many countries agree on the necessity of research on reliability quantification of DI and C systems. Based on the agreement of several countries, an international research association is planning a benchmark study on this issue: an example digitalized plant model will be shared, and each participating member will develop its own probabilistic safety assessment (PSA) model of the digital I and C systems. Although DI and C systems are already being applied to NPPs, the modeling method for quantifying their reliability is still ambiguous; the benchmark study is intended to address this gap.
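The reliability quantification at stake in such a PSA benchmark ultimately reduces to combining basic-event probabilities through fault-tree gates. A minimal sketch for a hypothetical two-channel digital system with a software common-cause failure; the gate structure and all probabilities are invented for illustration:

```python
# Minimal fault-tree quantification sketch (independent basic events assumed).

def and_gate(*probs):
    """Probability that all independent inputs fail."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs):
    """Probability that at least one independent input fails
    (exact, without the rare-event approximation)."""
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

cpu_fail = 1e-4   # hypothetical processor-module failure on demand
sw_ccf = 1e-3     # hypothetical software common-cause failure

hw_both = and_gate(cpu_fail, cpu_fail)     # both redundant channels fail
system_fail = or_gate(hw_both, sw_ccf)     # hardware path OR software CCF
```

The example also shows why digital systems complicate PSA: the software common-cause term dominates the redundant hardware term by orders of magnitude, so its quantification drives the result.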

  15. Irrigation in dose assessments models

    Energy Technology Data Exchange (ETDEWEB)

    Bergstroem, Ulla; Barkefors, Catarina [Studsvik RadWaste AB, Nykoeping (Sweden)

    2004-05-01

    SKB has carried out several safety analyses for repositories for radioactive waste, one of which was SR 97, a multi-site study concerned with a future deep bedrock repository for high-level waste. In case of future releases due to unforeseen failure of the protective multiple barrier system, radionuclides may be transported with groundwater and may reach the biosphere. Assessments of doses have to be carried out with a long-term perspective. Specific models are therefore employed to estimate consequences to man. It has been determined that the main pathway for nuclides from groundwater or surface water to soil is via irrigation. Irrigation may cause contamination of crops directly by e.g. interception or rain-splash, and indirectly via root-uptake from contaminated soil. The exposed people are in many safety assessments assumed to be self-sufficient, i.e. their food is produced locally where the concentration of radionuclides may be the highest. Irrigation therefore plays an important role when estimating consequences. The present study is therefore concerned with a more extensive analysis of the role of irrigation for possible future doses to people living in the area surrounding a repository. Current irrigation practices in Sweden are summarised, showing that vegetables and potatoes are the most common crops for irrigation. In general, however, irrigation is not so common in Sweden. The irrigation model used in the latest assessments is described. A sensitivity analysis is performed showing that, as expected, interception of irrigation water and retention on vegetation surfaces are important parameters. The parameters used to describe this are discussed. A summary is also given how irrigation is proposed to be handled in the international BIOMASS (BIOsphere Modelling and ASSessment) project and in models like TAME and BIOTRAC. Similarities and differences are pointed out. Some numerical results are presented showing that surface contamination in general gives the
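The two irrigation pathways discussed above (direct interception onto vegetation and root uptake from contaminated soil) can be sketched as a two-term crop-concentration model. The formula structure is the generic one used in biosphere assessment models; the parameter names and all values below are purely illustrative, not SKB's:

```python
# Sketch: crop activity concentration from irrigation (illustrative values).

def crop_concentration(c_water, irrigation, f_int, crop_yield, cr, c_soil):
    """
    c_water     activity concentration in irrigation water [Bq/m3]
    irrigation  water applied over the growing season [m3/m2]
    f_int       fraction intercepted and retained by vegetation [-]
    crop_yield  fresh crop yield [kg/m2]
    cr          soil-to-plant concentration ratio [Bq/kg plant per Bq/kg soil]
    c_soil      activity concentration in soil [Bq/kg], built up by irrigation
    Returns crop concentration [Bq/kg].
    """
    external = c_water * irrigation * f_int / crop_yield  # interception term
    internal = cr * c_soil                                # root-uptake term
    return external + internal

c = crop_concentration(c_water=100.0, irrigation=0.15, f_int=0.3,
                       crop_yield=2.0, cr=0.05, c_soil=20.0)
```

The sensitivity result reported above maps directly onto the first term: the interception fraction and retention enter linearly, so they propagate one-to-one into the crop (and ultimately dose) estimate.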

  16. Irrigation in dose assessments models

    International Nuclear Information System (INIS)

    Bergstroem, Ulla; Barkefors, Catarina

    2004-05-01

    SKB has carried out several safety analyses for repositories for radioactive waste, one of which was SR 97, a multi-site study concerned with a future deep bedrock repository for high-level waste. In case of future releases due to unforeseen failure of the protective multiple barrier system, radionuclides may be transported with groundwater and may reach the biosphere. Assessments of doses have to be carried out with a long-term perspective. Specific models are therefore employed to estimate consequences to man. It has been determined that the main pathway for nuclides from groundwater or surface water to soil is via irrigation. Irrigation may cause contamination of crops directly by e.g. interception or rain-splash, and indirectly via root-uptake from contaminated soil. The exposed people are in many safety assessments assumed to be self-sufficient, i.e. their food is produced locally where the concentration of radionuclides may be the highest. Irrigation therefore plays an important role when estimating consequences. The present study is therefore concerned with a more extensive analysis of the role of irrigation for possible future doses to people living in the area surrounding a repository. Current irrigation practices in Sweden are summarised, showing that vegetables and potatoes are the most common crops for irrigation. In general, however, irrigation is not so common in Sweden. The irrigation model used in the latest assessments is described. A sensitivity analysis is performed showing that, as expected, interception of irrigation water and retention on vegetation surfaces are important parameters. The parameters used to describe this are discussed. A summary is also given how irrigation is proposed to be handled in the international BIOMASS (BIOsphere Modelling and ASSessment) project and in models like TAME and BIOTRAC. Similarities and differences are pointed out. Some numerical results are presented showing that surface contamination in general gives the

  17. Modelling solute dispersion in periodic heterogeneous porous media: Model benchmarking against intermediate scale experiments

    Science.gov (United States)

    Majdalani, Samer; Guinot, Vincent; Delenne, Carole; Gebran, Hicham

    2018-06-01

    This paper is devoted to theoretical and experimental investigations of solute dispersion in heterogeneous porous media. Dispersion in heterogeneous porous media has been reported to be scale-dependent, a likely indication that the proposed dispersion models are incompletely formulated. A high quality experimental data set of breakthrough curves in periodic model heterogeneous porous media is presented. In contrast with most previously published experiments, the present experiments involve numerous replicates. This allows the statistical variability of the experimental data to be accounted for. Several models are benchmarked against the data set: the Fickian-based advection-dispersion, mobile-immobile, multirate, and multiple region advection dispersion models, and a newly proposed transport model based on pure advection. A salient property of the latter model is that its solutions exhibit ballistic behaviour for small times, while tending to the Fickian behaviour for large time scales. Model performance is assessed using a novel objective function accounting for the statistical variability of the experimental data set, while putting equal emphasis on both small and large time scale behaviours. Besides being as accurate as the other models, the new purely advective model has the advantages that (i) it does not exhibit the undesirable effects associated with the usual Fickian operator (namely the infinite solute front propagation speed), and (ii) it allows dispersive transport to be simulated on every heterogeneity scale using scale-independent parameters.
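The Fickian baseline that the study benchmarks against is the classical 1-D advection-dispersion equation, whose continuous-injection solution for a semi-infinite column (the Ogata-Banks solution) gives the breakthrough curve in closed form. The parameter values below are illustrative, not fitted to the paper's data:

```python
# Ogata-Banks breakthrough curve for 1-D advection-dispersion
# (continuous injection, semi-infinite domain; illustrative parameters).
import math

def breakthrough(x, t, v, D, c0=1.0):
    """Relative concentration c/c0 at distance x [m] and time t [s],
    for pore velocity v [m/s] and dispersion coefficient D [m2/s]."""
    s = 2.0 * math.sqrt(D * t)
    term1 = math.erfc((x - v * t) / s)
    term2 = math.exp(v * x / D) * math.erfc((x + v * t) / s)
    return 0.5 * c0 * (term1 + term2)

c_early = breakthrough(x=0.5, t=1.0, v=0.01, D=1e-4)     # front not yet arrived
c_late = breakthrough(x=0.5, t=1000.0, v=0.01, D=1e-4)   # front fully passed
```

The erfc term is exactly where the "infinite propagation speed" criticism in the abstract comes from: c is strictly positive for any t > 0, however small, unlike the ballistic small-time behaviour of the proposed advective model.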

  18. Benchmarking GW against exact diagonalization for semiempirical models

    DEFF Research Database (Denmark)

    Kaasbjerg, Kristen; Thygesen, Kristian Sommer

    2010-01-01

    We calculate ground-state total energies and single-particle excitation energies of seven pi-conjugated molecules described with the semiempirical Pariser-Parr-Pople model using self-consistent many-body perturbation theory at the GW level and exact diagonalization. For the total energies GW capt...... (Hubbard models) where correlation effects dominate over screening/relaxation effects. Finally we illustrate the important role of the derivative discontinuity of the true exchange-correlation functional by computing the exact Kohn-Sham levels of benzene....

  19. Modeling E-learning quality assurance benchmarking in higher education

    NARCIS (Netherlands)

    Alsaif, Fatimah; Clementking, Arockisamy

    2014-01-01

    Online education programs have been growing rapidly. While it is somehow difficult to specifically quantify quality, many recommendations have been suggested to specify and demonstrate quality of online education touching on common areas of program enhancement and administration. To design a model

  20. RANS Modeling of Benchmark Shockwave / Boundary Layer Interaction Experiments

    Science.gov (United States)

    Georgiadis, Nick; Vyas, Manan; Yoder, Dennis

    2010-01-01

    This presentation summarizes the computations of a set of shock wave / turbulent boundary layer interaction (SWTBLI) test cases using the Wind-US code, as part of the 2010 American Institute of Aeronautics and Astronautics (AIAA) shock / boundary layer interaction workshop. The experiments involve supersonic flows in wind tunnels with a shock generator that directs an oblique shock wave toward the boundary layer along one of the walls of the wind tunnel. The Wind-US calculations utilized structured grid computations performed in Reynolds-averaged Navier-Stokes mode. Three turbulence models were investigated: the Spalart-Allmaras one-equation model, the Menter Shear Stress Transport (SST) k-ω two-equation model, and an explicit algebraic stress k-ω formulation. Effects of grid resolution and upwinding scheme were also considered. The results from the CFD calculations are compared to particle image velocimetry (PIV) data from the experiments. As expected, turbulence model effects dominated the accuracy of the solutions, with upwinding scheme selection having minimal effect.

  1. Models of asthma: density-equalizing mapping and output benchmarking

    Directory of Open Access Journals (Sweden)

    Fischer Tanja C

    2008-02-01

    Despite the large number of experimental studies already conducted on bronchial asthma, further insights into the molecular basics of the disease are required to establish new therapeutic approaches. As a basis for this research, different animal models of asthma have been developed in past years. However, precise bibliometric data on the use of the different models do not exist so far. The present study was therefore conducted to establish a database of the existing experimental approaches. Density-equalizing algorithms were used and data were retrieved from a Thomson Institute for Scientific Information database. During the period from 1900 to 2006, a total of 3489 filed items were connected to animal models of asthma, the first being published in 1968. The studies were published by 52 countries, with the US, Japan and the UK being the most productive suppliers, participating in 55.8% of all published items. Analyzing the average citation per item as an indicator of research quality, Switzerland ranked first (30.54/item) and New Zealand ranked second among countries with more than 10 published studies. The 10 most productive journals included 4 with a main focus on allergy and immunology and 4 with a main focus on the respiratory system. Two journals focused on pharmacology or pharmacy. In all assigned subject categories examined for a relation to animal models of asthma, immunology ranked first. Assessing numbers of published items in relation to animal species, it was found that mice were the preferred species, followed by guinea pigs. In summary, it can be concluded from density-equalizing calculations that the use of animal models of asthma is restricted to a relatively small number of countries. There are also differences in the use of species. These differences are based on variations in the research focus as assessed by subject category analysis.

  2. Testing of the PELSHIE shielding code using Benchmark problems and other special shielding models

    International Nuclear Information System (INIS)

    Language, A.E.; Sartori, D.E.; De Beer, G.P.

    1981-08-01

    The PELSHIE shielding code for gamma rays from point and extended sources was written in 1971, and a revised version was published in October 1979. At Pelindaba the program is used extensively due to its flexibility and ease of use for a wide range of problems. Testing PELSHIE results against a range of models and so-called benchmark problems is desirable to identify possible weaknesses in PELSHIE. Benchmark problems, experimental data, and shielding models, some of which were solved by the discrete-ordinates method with the ANISN and DOT 3.5 codes, were used for the efficiency test. The description of the models followed the pattern of a classical shielding problem. After the intercomparison with six different models, the usefulness of the PELSHIE code was quantitatively determined.

  3. Benchmarking Measures of Network Controllability on Canonical Graph Models

    Science.gov (United States)

    Wu-Yan, Elena; Betzel, Richard F.; Tang, Evelyn; Gu, Shi; Pasqualetti, Fabio; Bassett, Danielle S.

    2018-03-01

    The control of networked dynamical systems opens the possibility for new discoveries and therapies in systems biology and neuroscience. Recent theoretical advances provide candidate mechanisms by which a system can be driven from one pre-specified state to another, and computational approaches provide tools to test those mechanisms in real-world systems. Despite already having been applied to study network systems in biology and neuroscience, the practical performance of these tools and associated measures on simple networks with pre-specified structure has yet to be assessed. Here, we study the behavior of four control metrics (global, average, modal, and boundary controllability) on eight canonical graphs (including Erdős-Rényi, regular, small-world, random geometric, Barabási-Albert preferential attachment, and several modular networks) with different edge weighting schemes (Gaussian, power-law, and two nonparametric distributions from brain networks, as examples of real-world systems). We observe that differences in global controllability across graph models are more salient when edge weight distributions are heavy-tailed as opposed to normal. In contrast, differences in average, modal, and boundary controllability across graph models (as well as across nodes in the graph) are more salient when edge weight distributions are less heavy-tailed. Across graph models and edge weighting schemes, average and modal controllability are negatively correlated with one another across nodes; yet, across graph instances, the relation between average and modal controllability can be positive, negative, or nonsignificant. Collectively, these findings demonstrate that controllability statistics (and their relations) differ across graphs with different topologies and that these differences can be muted or accentuated by differences in the edge weight distributions. More generally, our numerical studies motivate future analytical efforts to better understand the mathematical
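
    Of the four metrics studied above, average controllability is commonly computed as the trace of the controllability Gramian for a single-node input. A sketch under assumptions common in the network-control literature (discrete-time linear dynamics, adjacency matrix scaled by 1 + |λ_max| for stability); the toy graph and the normalization are illustrative assumptions, not necessarily the exact recipe of this study:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def average_controllability(A):
    """Per-node average controllability for x(t+1) = A x(t) + b_i u(t),
    taken as trace of the controllability Gramian
    W_i = sum_k A^k b_i b_i^T (A^T)^k for each single-node input b_i."""
    n = A.shape[0]
    # Scale so the spectral radius is < 1 (a common normalization).
    A = A / (1.0 + np.max(np.abs(np.linalg.eigvals(A))))
    scores = np.empty(n)
    for i in range(n):
        B = np.zeros((n, 1))
        B[i, 0] = 1.0
        # Gramian solves W = A W A^T + B B^T (discrete Lyapunov equation).
        W = solve_discrete_lyapunov(A, B @ B.T)
        scores[i] = np.trace(W)
    return scores

# Toy symmetric weighted graph, made up for illustration.
rng = np.random.default_rng(0)
A = rng.random((8, 8))
A = (A + A.T) / 2.0
np.fill_diagonal(A, 0.0)
scores = average_controllability(A)
```

    Each score is at least 1 (the k = 0 term of the Gramian), and ranking nodes by these scores is how node-level comparisons like those in the record are made.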

  4. MCNP HPGe detector benchmark with previously validated Cyltran model.

    Science.gov (United States)

    Hau, I D; Russ, W R; Bronson, F

    2009-05-01

    An exact copy of the detector model generated for Cyltran was reproduced as an MCNP input file, and the detection efficiency was calculated with the same methodology used in previous experimental measurements and simulations of a 280 cm³ HPGe detector. Below 1000 keV the MCNP data correlated with the Cyltran results within 0.5%, while above this energy the difference between MCNP and Cyltran increased to about 6% at 4800 keV, depending on the electron cut-off energy.

  5. Benchmark of the neutronic model used in Maanshan compact simulator

    International Nuclear Information System (INIS)

    Hu, C.-H.; Gone, J.-K.; Ko, H.-T.

    2004-01-01

    The Maanshan compact simulator has adopted a three-dimensional kinetic model, CONcERT, which was developed by GP International Inc. (GPI) in 1991 for real-time neutronic analysis. Maanshan Nuclear Power Plant utilizes a Westinghouse nuclear steam supply system with a three-loop pressurized water reactor. There are 157 fuel assemblies and 52 full-length Rod Cluster Control Assemblies in the reactor core. The control of excess reactivity and power peaking is provided by soluble boron in the moderator and burnable absorber rods in the fuel assemblies. The neutronic model of CONcERT is based on solving modified time-dependent two-group diffusion equations coupled to equations for six-group delayed neutron precursor concentrations. The validation of CONcERT for the Maanshan plant is separated into two groups. The first group compared (1) boron endpoints for different control bank inserted conditions, (2) control rod differential and integral worths, and (3) temperature coefficients with the measurements in the Low Power Physical Test (LPPT). The second group compared critical boron concentration and power distribution at high power conditions with the measurements. In addition, xenon and samarium equilibrium worths at different power levels, as well as the time-dependent changes of their worths after a reactor scram, are illustrated. (author)
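
    The record's neutronic model couples two-group diffusion equations to six-group delayed neutron precursors. A spatially lumped relative of that formulation is the point-kinetics model with six precursor groups, sketched below with typical thermal-reactor delayed-neutron constants (illustrative values, not Maanshan core data):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Six-group delayed-neutron data (typical thermal-reactor values, illustrative).
beta_i = np.array([2.11e-4, 1.40e-3, 1.26e-3, 2.52e-3, 7.40e-4, 2.70e-4])
lam_i = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])   # decay constants, 1/s
beta = beta_i.sum()       # total delayed-neutron fraction
Lam = 2.0e-5              # prompt-neutron generation time, s

def point_kinetics(t, y, rho):
    """Point-kinetics equations with six delayed-neutron precursor groups."""
    n, C = y[0], y[1:]
    dn = (rho - beta) / Lam * n + np.sum(lam_i * C)
    dC = beta_i / Lam * n - lam_i * C
    return np.concatenate(([dn], dC))

# Start critical (rho = 0) from the equilibrium precursor concentrations
# C_i = beta_i * n / (Lam * lam_i); the solution should then stay flat.
n0 = 1.0
C0 = beta_i * n0 / (Lam * lam_i)
sol = solve_ivp(point_kinetics, (0.0, 10.0), np.concatenate(([n0], C0)),
                args=(0.0,), method="BDF", rtol=1e-8, atol=1e-10)
```

    Inserting a small positive rho instead reproduces the familiar prompt jump followed by a slow delayed-neutron-governed rise, the behaviour a real-time simulator model must capture.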

  6. TSD-DOSE : a radiological dose assessment model for treatment, storage, and disposal facilities

    International Nuclear Information System (INIS)

    Pfingston, M.

    1998-01-01

    In May 1991, the U.S. Department of Energy (DOE), Office of Waste Operations, issued a nationwide moratorium on shipping slightly radioactive mixed waste from DOE facilities to commercial treatment, storage, and disposal (TSD) facilities. Studies were subsequently conducted to evaluate the radiological impacts associated with DOE's prior shipments through DOE's authorized release process under DOE Order 5400.5. To support this endeavor, a radiological assessment computer code--TSD-DOSE (Version 1.1)--was developed and issued by DOE in 1997. The code was developed on the basis of detailed radiological assessments performed for eight commercial hazardous waste TSD facilities. It was designed to utilize waste-specific and site-specific data to estimate potential radiological doses to on-site workers and the off-site public from waste handling operations at a TSD facility. The code has since been released for use by DOE field offices and was recently used by DOE to evaluate the release of septic waste containing residual radioactive material to a TSD facility licensed under the Resource Conservation and Recovery Act. Revisions to the code were initiated in 1997 to incorporate comments received from users and to increase TSD-DOSE's capability, accuracy, and flexibility. These updates included incorporation of the method used to estimate external radiation doses from DOE's RESRAD model and expansion of the source term to include 85 radionuclides. In addition, a detailed verification and benchmarking analysis was performed

  7. Benchmarking of Computational Models for NDE and SHM of Composites

    Science.gov (United States)

    Wheeler, Kevin; Leckey, Cara; Hafiychuk, Vasyl; Juarez, Peter; Timucin, Dogan; Schuet, Stefan; Hafiychuk, Halyna

    2016-01-01

    Ultrasonic wave phenomena constitute the leading physical mechanism for nondestructive evaluation (NDE) and structural health monitoring (SHM) of solid composite materials such as carbon-fiber-reinforced polymer (CFRP) laminates. Computational models of ultrasonic guided-wave excitation, propagation, scattering, and detection in quasi-isotropic laminates can be extremely valuable in designing practically realizable NDE and SHM hardware and software with desired accuracy, reliability, efficiency, and coverage. This paper presents comparisons of guided-wave simulations for CFRP composites implemented using three different simulation codes: two commercial finite-element analysis packages, COMSOL and ABAQUS, and a custom code implementing the Elastodynamic Finite Integration Technique (EFIT). Comparisons are also made to experimental laser Doppler vibrometry data and theoretical dispersion curves.

  8. Interactions of model biomolecules. Benchmark CC calculations within MOLCAS

    Energy Technology Data Exchange (ETDEWEB)

    Urban, Miroslav [Slovak University of Technology in Bratislava, Faculty of Materials Science and Technology in Trnava, Institute of Materials Science, Bottova 25, SK-917 24 Trnava, Slovakia and Department of Physical and Theoretical Chemistry, Faculty of Natural Scie (Slovakia); Pitoňák, Michal; Neogrády, Pavel; Dedíková, Pavlína [Department of Physical and Theoretical Chemistry, Faculty of Natural Sciences, Comenius University, Mlynská dolina, SK-842 15 Bratislava (Slovakia); Hobza, Pavel [Institute of Organic Chemistry and Biochemistry and Center for Complex Molecular Systems and biomolecules, Academy of Sciences of the Czech Republic, Prague (Czech Republic)

    2015-01-22

    We present results using the OVOS (Optimized Virtual Orbital Space) approach aimed at enhancing the effectiveness of Coupled Cluster calculations. This approach allows the total computer time required for large-scale CCSD(T) calculations to be reduced about tenfold when the original full virtual space is reduced to about 50% of its original size, without affecting the accuracy. The method is implemented in the MOLCAS computer program. When combined with the Cholesky decomposition of the two-electron integrals and suitable parallelization, it allows calculations which were formerly prohibitively demanding. We focus on accurate calculations of the hydrogen-bonded and stacking interactions of model biomolecules. Interaction energies of the formaldehyde, formamide, benzene, and uracil dimers and the three-body contributions in the cytosine-guanine tetramer are presented. Other applications, such as the electron affinity of uracil affected by solvation, are also briefly mentioned.

  9. Experimental data and dose-response models

    International Nuclear Information System (INIS)

    Ullrich, R.L.

    1985-01-01

    Dose-response relationships for radiation carcinogenesis have been of interest to biologists, modelers, and statisticians for many years. Despite this interest, there are few instances in which there are sufficient experimental data to allow the fitting of various dose-response models. In those experimental systems for which data are available, the dose-response curves for tumor induction cannot be described by a single model. Dose-response models which have been observed following acute exposures to gamma rays include threshold, quadratic, and linear models. Data on sex, age, and environmental influences suggest a strong role of host factors in the dose response. With decreasing dose rate, the effectiveness of gamma-ray irradiation tends to decrease in essentially every instance. In those cases in which the high-dose-rate dose response could be described by a quadratic model, the effect of dose rate is consistent with predictions based on radiation effects on the induction of initial events. Whether the observed dose-rate effect results from effects on the induction of initial events or from effects on subsequent steps in the carcinogenic process is unknown. Information on the dose response for tumor induction by high-LET (linear energy transfer) radiations such as neutrons is even more limited. The observed dose and dose-rate data for tumor induction following neutron exposure are complex and do not appear to be consistent with predictions based on models for the induction of initial events.
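
    The linear and quadratic dose-response forms mentioned above can be fit with ordinary least squares. The sketch below fits a linear-quadratic model, E(D) = c + a*D + b*D^2, to synthetic incidence data (entirely made up for illustration, not data from the record):

```python
import numpy as np
from scipy.optimize import curve_fit

def linear_quadratic(dose, c, a, b):
    """Linear-quadratic dose-response: background + a*D + b*D^2."""
    return c + a * dose + b * dose**2

# Synthetic tumor-incidence data at six dose points (illustrative only).
dose = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0])          # Gy
true_c, true_a, true_b = 0.02, 0.01, 0.015
rng = np.random.default_rng(1)
incidence = linear_quadratic(dose, true_c, true_a, true_b) \
            + rng.normal(0.0, 5e-4, dose.size)

popt, pcov = curve_fit(linear_quadratic, dose, incidence, p0=[0.01, 0.01, 0.01])
```

    Comparing the fitted quadratic coefficient with zero is one simple way of asking whether the data support a quadratic over a purely linear model, the kind of model discrimination the record discusses.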

  10. Creation of a simplified benchmark model for the neptunium sphere experiment

    International Nuclear Information System (INIS)

    Mosteller, Russell D.; Loaiza, David J.; Sanchez, Rene G.

    2004-01-01

    Although neptunium is produced in significant amounts by nuclear power reactors, its critical mass is not well known. In addition, sizeable uncertainties exist for its cross sections. As an important step toward resolution of these issues, a critical experiment was conducted in 2002 at the Los Alamos Critical Experiments Facility. In the experiment, a 6-kg sphere of 237Np was surrounded by nested hemispherical shells of highly enriched uranium. The shells were required in order to reach a critical condition. Subsequently, a detailed model of the experiment was developed. This model faithfully reproduces the components of the experiment, but it is geometrically complex. Furthermore, the isotopics analysis upon which that model is based omits nearly 1% of the mass of the sphere. A simplified benchmark model has been constructed that retains all of the neutronically important aspects of the detailed model and substantially reduces the computer resources required for the calculation. The reactivity impact of each of the simplifications is quantified, including the effect of the missing mass. A complete set of specifications for the benchmark is included in the full paper. Both the detailed and simplified benchmark models underpredict k-eff by more than 1% Δk. This discrepancy supports the suspicion that better cross sections are needed for 237Np.

  11. A Review of Flood Loss Models as Basis for Harmonization and Benchmarking.

    Directory of Open Access Journals (Sweden)

    Tina Gerl

    Risk-based approaches have been increasingly accepted and operationalized in flood risk management during recent decades. For instance, commercial flood risk models are used by the insurance industry to assess potential losses, establish the pricing of policies and determine reinsurance needs. Despite considerable progress in the development of loss estimation tools since the 1980s, loss estimates still reflect high uncertainties and disparities that often lead to questioning their quality. This requires an assessment of the validity and robustness of loss models as it affects prioritization and investment decision in flood risk management as well as regulatory requirements and business decisions in the insurance industry. Hence, more effort is needed to quantify uncertainties and undertake validations. Due to a lack of detailed and reliable flood loss data, first order validations are difficult to accomplish, so that model comparisons in terms of benchmarking are essential. It is checked if the models are informed by existing data and knowledge and if the assumptions made in the models are aligned with the existing knowledge. When this alignment is confirmed through validation or benchmarking exercises, the user gains confidence in the models. Before these benchmarking exercises are feasible, however, a cohesive survey of existing knowledge needs to be undertaken. With that aim, this work presents a review of flood loss (or flood vulnerability) relationships collected from the public domain and some professional sources. Our survey analyses 61 sources consisting of publications or software packages, of which 47 are reviewed in detail. This exercise results in probably the most complete review of flood loss models to date containing nearly a thousand vulnerability functions. These functions are highly heterogeneous and only about half of the loss models are found to be accompanied by explicit validation at the time of their proposal. This paper

  12. A Review of Flood Loss Models as Basis for Harmonization and Benchmarking.

    Science.gov (United States)

    Gerl, Tina; Kreibich, Heidi; Franco, Guillermo; Marechal, David; Schröter, Kai

    2016-01-01

    Risk-based approaches have been increasingly accepted and operationalized in flood risk management during recent decades. For instance, commercial flood risk models are used by the insurance industry to assess potential losses, establish the pricing of policies and determine reinsurance needs. Despite considerable progress in the development of loss estimation tools since the 1980s, loss estimates still reflect high uncertainties and disparities that often lead to questioning their quality. This requires an assessment of the validity and robustness of loss models as it affects prioritization and investment decision in flood risk management as well as regulatory requirements and business decisions in the insurance industry. Hence, more effort is needed to quantify uncertainties and undertake validations. Due to a lack of detailed and reliable flood loss data, first order validations are difficult to accomplish, so that model comparisons in terms of benchmarking are essential. It is checked if the models are informed by existing data and knowledge and if the assumptions made in the models are aligned with the existing knowledge. When this alignment is confirmed through validation or benchmarking exercises, the user gains confidence in the models. Before these benchmarking exercises are feasible, however, a cohesive survey of existing knowledge needs to be undertaken. With that aim, this work presents a review of flood loss (or flood vulnerability) relationships collected from the public domain and some professional sources. Our survey analyses 61 sources consisting of publications or software packages, of which 47 are reviewed in detail. This exercise results in probably the most complete review of flood loss models to date containing nearly a thousand vulnerability functions. These functions are highly heterogeneous and only about half of the loss models are found to be accompanied by explicit validation at the time of their proposal. This paper exemplarily presents

  13. A Review of Flood Loss Models as Basis for Harmonization and Benchmarking

    Science.gov (United States)

    Kreibich, Heidi; Franco, Guillermo; Marechal, David

    2016-01-01

    Risk-based approaches have been increasingly accepted and operationalized in flood risk management during recent decades. For instance, commercial flood risk models are used by the insurance industry to assess potential losses, establish the pricing of policies and determine reinsurance needs. Despite considerable progress in the development of loss estimation tools since the 1980s, loss estimates still reflect high uncertainties and disparities that often lead to questioning their quality. This requires an assessment of the validity and robustness of loss models as it affects prioritization and investment decision in flood risk management as well as regulatory requirements and business decisions in the insurance industry. Hence, more effort is needed to quantify uncertainties and undertake validations. Due to a lack of detailed and reliable flood loss data, first order validations are difficult to accomplish, so that model comparisons in terms of benchmarking are essential. It is checked if the models are informed by existing data and knowledge and if the assumptions made in the models are aligned with the existing knowledge. When this alignment is confirmed through validation or benchmarking exercises, the user gains confidence in the models. Before these benchmarking exercises are feasible, however, a cohesive survey of existing knowledge needs to be undertaken. With that aim, this work presents a review of flood loss (or flood vulnerability) relationships collected from the public domain and some professional sources. Our survey analyses 61 sources consisting of publications or software packages, of which 47 are reviewed in detail. This exercise results in probably the most complete review of flood loss models to date containing nearly a thousand vulnerability functions. These functions are highly heterogeneous and only about half of the loss models are found to be accompanied by explicit validation at the time of their proposal. This paper exemplarily

  14. Algorithm comparison and benchmarking using a parallel spectral transform shallow water model

    Energy Technology Data Exchange (ETDEWEB)

    Worley, P.H. [Oak Ridge National Lab., TN (United States); Foster, I.T.; Toonen, B. [Argonne National Lab., IL (United States)

    1995-04-01

    In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPs, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer, and how do the most efficient algorithms compare on different computers. In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.

  15. Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.

    Science.gov (United States)

    Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick

    2017-10-01

    In this paper, we investigate the visual attention modeling for stereoscopic video from the following two aspects. First, we build one large-scale eye tracking database as the benchmark of visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract the low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated by the motion contrast from the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms the state-of-the-art stereoscopic video saliency detection models on our built large-scale eye tracking database and one other database (DML-ITRACK-3D).
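
    The fusion step described above combines spatial and temporal saliency with uncertainty-derived weights. The paper derives its weights from Gestalt laws; the sketch below is only a generic inverse-uncertainty fusion stand-in, with made-up maps and uncertainty values:

```python
import numpy as np

def fuse_saliency(s_spatial, s_temporal, u_spatial, u_temporal):
    """Uncertainty-weighted fusion of spatial and temporal saliency maps:
    each map's weight is inversely proportional to its uncertainty,
    and the two weights are normalized to sum to one."""
    w_s = 1.0 / (u_spatial + 1e-12)
    w_t = 1.0 / (u_temporal + 1e-12)
    total = w_s + w_t
    return (w_s * s_spatial + w_t * s_temporal) / total

# Toy 32x32 saliency maps in [0, 1], with the spatial map deemed more
# reliable (lower uncertainty) than the temporal one.
rng = np.random.default_rng(0)
s_sp = rng.random((32, 32))
s_tm = rng.random((32, 32))
fused = fuse_saliency(s_sp, s_tm, u_spatial=0.2, u_temporal=0.8)
```

    With uncertainties 0.2 and 0.8 the spatial map receives weight 0.8 and the temporal map 0.2, so the fused map stays within the input range while leaning toward the more reliable cue.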

  16. Comparison of three-dimensional ocean general circulation models on a benchmark problem

    International Nuclear Information System (INIS)

    Chartier, M.

    1990-12-01

    French and American ocean general circulation models for deep-sea disposal of radioactive wastes are compared on a benchmark test problem. Both models are three-dimensional. They solve the hydrostatic primitive equations of the ocean with two different finite difference techniques. Results show that the dynamics simulated by both models are consistent. Several methods for starting the model from a known state are tested in the French model: the diagnostic method, the prognostic method, the acceleration of convergence, and the robust-diagnostic method.

  17. Benchmarking of MCAM 4.0 with the ITER 3D Model

    International Nuclear Information System (INIS)

    Ying Li; Lei Lu; Aiping Ding; Haimin Hu; Qin Zeng; Shanliang Zheng; Yican Wu

    2006-01-01

    Monte Carlo particle transport simulations are widely employed in fields such as nuclear engineering, radiotherapy and space science. Describing and verifying the 3D geometry of fusion devices, however, are among the most complex tasks of MCNP calculation problems in nuclear analysis. The manual modeling of a complex geometry for the MCNP code, though a common practice, is an extensive, time-consuming, and error-prone task. An efficient solution is to shift the geometric modeling into Computer Aided Design (CAD) systems and to use an interface for MCNP to convert the CAD model to an MCNP file. The advantage of this approach lies in the fact that it allows access to the full features of modern CAD systems, facilitating the geometric modeling and utilizing existing CAD models. MCAM (MCNP Automatic Modeling System) is an integrated tool for CAD model preprocessing, accurate bi-directional conversion between CAD/MCNP models, neutronics property processing and geometric modeling, developed by the FDS team at ASIPP and Hefei University of Technology. MCAM 4.0 has been extended and enhanced to support various CAD file formats and the preprocessing of CAD models, such as healing, automatic model reconstruction, overlap detection and correction, and automatic void modeling. The ITER international benchmark model is provided by the ITER international team to compare the CAD/MCNP programs being developed by the ITER participant teams. It is created in CATIA/V5, which has been chosen as the CAD system for ITER design, and includes all the important parts and components of the ITER device. The benchmark model contains numerous curved surfaces, which fully test the capability of CAD/MCNP codes. The whole processing procedure for this model is presented in this paper, including geometric model processing, neutronics property processing, conversion to an MCNP input file, calculation with MCNP, and analysis. The nuclear analysis results for the model are given at the end. Although these preliminary

  18. Model-Based Engineering and Manufacturing CAD/CAM Benchmark (Final Report)

    International Nuclear Information System (INIS)

    Domm, T.C.; Underwood, R.S.

    1999-01-01

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more modern, responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies were trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were somewhere between 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system than a single computer-aided manufacturing (CAM) system. The Internet was a technology that all companies were looking to use, either to transport information more easily throughout the corporation or as a conduit for

  19. The PAC-MAN model: Benchmark case for linear acoustics in computational physics

    Science.gov (United States)

    Ziegelwanger, Harald; Reiter, Paul

    2017-10-01

    Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well-known example of such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which analytically describes the three-dimensional sound field radiated from a sphere with a missing octant. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut-out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.
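
    As an illustration of the kind of modal (Fourier-Bessel) series such 2D formulations rest on, the sketch below reconstructs a plane wave from cylindrical modes via the Jacobi-Anger expansion. This is a generic textbook identity, not code from the PAC-MAN paper.

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind

# Jacobi-Anger expansion: exp(i*k*r*cos(theta)) = sum_n i^n J_n(kr) e^(i*n*theta).
# The PAC-MAN model uses a modal series of this general kind, which is why its
# 2D sound field can be evaluated at roughly the cost of a 1D problem.
def plane_wave_modal(k, r, theta, n_max=40):
    n = np.arange(-n_max, n_max + 1)
    return np.sum((1j ** n) * jv(n, k * r) * np.exp(1j * n * theta))

k, r, theta = 2.0, 1.5, 0.7
approx = plane_wave_modal(k, r, theta)
exact = np.exp(1j * k * r * np.cos(theta))
err = abs(approx - exact)  # the truncated series converges rapidly for moderate kr
```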

  20. Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system

    Science.gov (United States)

    Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2017-05-01

    We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced and damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely based on simulation and/or experimental measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the non-linear element in the problem with a priori knowledge about its position.

  1. A rod-airfoil experiment as a benchmark for broadband noise modeling

    Energy Technology Data Exchange (ETDEWEB)

    Jacob, M.C. [Ecole Centrale de Lyon, Laboratoire de Mecanique des Fluides et d' Acoustique, Ecully Cedex (France); Universite Claude Bernard/Lyon I, Villeurbanne Cedex (France); Boudet, J.; Michard, M. [Ecole Centrale de Lyon, Laboratoire de Mecanique des Fluides et d' Acoustique, Ecully Cedex (France); Casalino, D. [Ecole Centrale de Lyon, Laboratoire de Mecanique des Fluides et d' Acoustique, Ecully Cedex (France); Fluorem SAS, Ecully Cedex (France)

    2005-07-01

    A low Mach number rod-airfoil experiment is shown to be a good benchmark for numerical and theoretical broadband noise modeling. The benchmarking approach is applied to a sound computation from a 2D unsteady Reynolds-averaged Navier-Stokes (U-RANS) flow field, where 3D effects are partially compensated for by a spanwise statistical model, and to a 3D large eddy simulation. The experiment was conducted in the large anechoic wind tunnel of the Ecole Centrale de Lyon. Measurements taken included particle image velocimetry (PIV) around the airfoil, single hot-wire measurements, wall pressure coherence, and far-field pressure. These measurements highlight the strong 3D effects responsible for spectral broadening around the rod vortex shedding frequency in the subcritical regime, and the dominance of the noise generated around the airfoil leading edge. The benchmarking approach is illustrated by two examples: the validation of a stochastic noise generation model applied to a 2D U-RANS computation, and the assessment of a 3D LES computation using a new subgrid-scale (SGS) model coupled to an advanced-time Ffowcs Williams and Hawkings sound computation. (orig.)

  2. Looking Past Primary Productivity: Benchmarking System Processes that Drive Ecosystem Level Responses in Models

    Science.gov (United States)

    Cowdery, E.; Dietze, M.

    2017-12-01

    As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentration are highly variable and contain a considerable amount of uncertainty. Benchmarking model predictions against data is necessary not only to assess their ability to replicate observed patterns, but also to identify and evaluate the assumptions causing inter-model differences. We have implemented a novel benchmarking workflow as part of the Predictive Ecosystem Analyzer (PEcAn) that is automated, repeatable, and generalized to incorporate different sites and ecological models. Building on the recent Free-Air CO2 Enrichment Model Data Synthesis (FACE-MDS) project, we used observational data from the FACE experiments to test this flexible, extensible benchmarking approach aimed at providing repeatable tests of model process representation that can be performed quickly and frequently. Model performance assessments are often limited to traditional residual error analysis; however, this can result in a loss of critical information. Models that fail tests of relative measures of fit may still perform well under measures of absolute fit and mathematical similarity. This implies that models that are discounted as poor predictors of ecological productivity may still be capturing important patterns. Conversely, models that have been found to be good predictors of productivity may be hiding errors in their sub-processes that result in the right answers for the wrong reasons. Our suite of tests has not only highlighted process-based sources of uncertainty in model productivity calculations, it has also quantified the patterns and scale of this error. Combining these findings with PEcAn's model sensitivity analysis and variance decomposition strengthens our ability to identify which processes
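
    The distinction drawn above between relative and absolute measures of fit can be made concrete with a small sketch (generic metrics, not the PEcAn implementation): a prediction with a constant offset fails an absolute-error test while still capturing the observed pattern perfectly.

```python
import numpy as np

def fit_metrics(obs, pred):
    """Absolute and relative goodness-of-fit measures (illustrative)."""
    resid = pred - obs
    return {
        "rmse": float(np.sqrt(np.mean(resid ** 2))),  # absolute fit
        "bias": float(np.mean(resid)),                # systematic offset
        "corr": float(np.corrcoef(obs, pred)[0, 1]),  # pattern similarity
    }

obs = np.array([1.0, 2.0, 3.0, 4.0])
biased = obs + 2.0  # right pattern, wrong magnitude
m = fit_metrics(obs, biased)
# perfect correlation despite a large bias: the pattern is captured,
# the magnitude is not
```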

  3. Peculiarity by Modeling of the Control Rod Movement by the Kalinin-3 Benchmark

    International Nuclear Information System (INIS)

    Nikonov, S. P.; Velkov, K.; Pautz, A.

    2010-01-01

    The paper presents an important part of the results of the OECD/NEA benchmark transient 'Switching off one main circulation pump at nominal power', analyzed as a boundary condition problem by the coupled system code ATHLET-BIPR-VVER. Some observations and comparisons with measured data for integral reactor parameters are discussed. Special attention is paid to the modeling and the comparisons performed for the control rod movement and the reactor power history. (Authors)

  4. Benchmark analysis of MCNP™ ENDF/B-VI iron

    International Nuclear Information System (INIS)

    Court, J.D.; Hendricks, J.S.

    1994-12-01

    The MCNP ENDF/B-VI iron cross-section data was subjected to four benchmark studies as part of the Hiroshima/Nagasaki dose re-evaluation for the National Academy of Science and the Defense Nuclear Agency. The four benchmark studies were: (1) the iron sphere benchmarks from the Lawrence Livermore Pulsed Spheres; (2) the Oak Ridge National Laboratory Fusion Reactor Shielding Benchmark; (3) a 76-cm diameter iron sphere benchmark done at the University of Illinois; and (4) the Oak Ridge National Laboratory Benchmark for Neutron Transport through Iron. MCNP4A was used to model each benchmark, and computational results from the ENDF/B-VI iron evaluations were compared to ENDF/B-IV, ENDF/B-V, the MCNP Recommended Data Set (which includes Los Alamos National Laboratory Group T-2 evaluations), and experimental data. The results show that the ENDF/B-VI iron evaluations are as good as, or better than, previous data sets.

  5. Adaptive unified continuum FEM modeling of a 3D FSI benchmark problem.

    Science.gov (United States)

    Jansson, Johan; Degirmenci, Niyazi Cem; Hoffman, Johan

    2017-09-01

    In this paper, we address a 3D fluid-structure interaction benchmark problem that represents important characteristics of biomedical modeling. We present a goal-oriented adaptive finite element methodology for incompressible fluid-structure interaction based on a streamline diffusion-type stabilization of the balance equations for mass and momentum for the entire continuum in the domain, which is implemented in the Unicorn/FEniCS software framework. A phase marker function and its corresponding transport equation are introduced to select the constitutive law, where the mesh tracks the discontinuous fluid-structure interface. This results in a unified simulation method for fluids and structures. We present detailed results for the benchmark problem compared with experiments, together with a mesh convergence study. Copyright © 2016 John Wiley & Sons, Ltd.

  6. LHC benchmark scenarios for the real Higgs singlet extension of the standard model

    International Nuclear Information System (INIS)

    Robens, Tania; Stefaniak, Tim

    2016-01-01

    We present benchmark scenarios for searches for an additional Higgs state in the real Higgs singlet extension of the Standard Model in Run 2 of the LHC. The scenarios are selected such that they fulfill all relevant current theoretical and experimental constraints, but can potentially be discovered at the current LHC run. We take into account the results presented in earlier work and update the experimental constraints from relevant LHC Higgs searches and signal rate measurements. The benchmark scenarios are given separately for the low-mass and high-mass region, i.e. the mass range where the additional Higgs state is lighter or heavier than the discovered Higgs state at around 125 GeV. They have also been presented in the framework of the LHC Higgs Cross Section Working Group. (orig.)

  7. Dose reconstruction modeling for medical radiation workers

    International Nuclear Information System (INIS)

    Choi, Yeong Chull; Cha, Eun Shil; Lee, Won Jin

    2017-01-01

    Exposure information is a crucial element for the assessment of health risks due to radiation. Radiation doses received by medical radiation workers have been collected and maintained in a public registry since 1996. Since exposure levels in the remote past are of greater concern, it is essential to reconstruct unmeasured past doses using known information. We developed retrodiction models for different groups of medical radiation workers and estimated individual past doses before 1996. Using these estimates, organ doses should be calculated which, in turn, will be used to explore a wide range of health risks of medical occupational radiation exposure.
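
    A heavily simplified sketch of retrodiction: fit a log-linear time trend to registry-era doses and extrapolate backwards. All numbers are hypothetical, and the actual models are group-specific and use more predictors than calendar year.

```python
import numpy as np

# Illustrative retrodiction: annual doses measured after 1996 (made-up values)
years = np.array([1996, 2000, 2004, 2008])
mean_dose = np.array([4.0, 3.0, 2.2, 1.7])  # mSv/yr, hypothetical

# Fit ln(dose) = slope * year + intercept, then extrapolate to the
# pre-registry era, where doses were presumably higher.
slope, intercept = np.polyfit(years, np.log(mean_dose), 1)
est_1990 = np.exp(slope * 1990 + intercept)  # reconstructed past dose
```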

  8. Dose reconstruction modeling for medical radiation workers

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Yeong Chull; Cha, Eun Shil; Lee, Won Jin [Dept. of Preventive Medicine, Korea University, Seoul (Korea, Republic of)

    2017-04-15

    Exposure information is a crucial element for the assessment of health risks due to radiation. Radiation doses received by medical radiation workers have been collected and maintained in a public registry since 1996. Since exposure levels in the remote past are of greater concern, it is essential to reconstruct unmeasured past doses using known information. We developed retrodiction models for different groups of medical radiation workers and estimated individual past doses before 1996. Using these estimates, organ doses should be calculated which, in turn, will be used to explore a wide range of health risks of medical occupational radiation exposure.

  9. Benchmark of the local drift-kinetic models for neoclassical transport simulation in helical plasmas

    Science.gov (United States)

    Huang, B.; Satake, S.; Kanno, R.; Sugama, H.; Matsuoka, S.

    2017-02-01

    The benchmarks of the neoclassical transport codes based on several local drift-kinetic models are reported here. The drift-kinetic models are zero orbit width (ZOW), zero magnetic drift, DKES-like, and global, as classified in Matsuoka et al. [Phys. Plasmas 22, 072511 (2015)]. The magnetic geometries of the Helically Symmetric Experiment, the Large Helical Device (LHD), and Wendelstein 7-X are employed in the benchmarks. It is found that the assumption of E×B incompressibility causes discrepancies in the neoclassical radial flux and parallel flow among the models when E×B is sufficiently large compared to the magnetic drift velocities, for example when Mp ≤ 0.4, where Mp is the poloidal Mach number. On the other hand, when E×B and the magnetic drift velocities are comparable, the tangential magnetic drift, which is included in both the global and ZOW models, fills the role of suppressing the unphysical peaking of neoclassical radial fluxes found in the other local models at Er ≃ 0. In low-collisionality plasmas, in particular, the tangential drift effect works well to suppress such unphysical behavior of the radial transport in the simulations. It is demonstrated that the ZOW model has the advantage of mitigating the unphysical behavior in the several magnetic geometries, and that it also enables the evaluation of the bootstrap current in LHD at low computational cost compared to the global model.

  10. Modelling simple helically delivered dose distributions

    International Nuclear Information System (INIS)

    Fenwick, John D; Tome, Wolfgang A; Kissick, Michael W; Mackie, T Rock

    2005-01-01

    In a previous paper, we described quality assurance procedures for Hi-Art helical tomotherapy machines. Here, we develop further some ideas discussed briefly in that paper. Simple helically generated dose distributions are modelled, and relationships between these dose distributions and underlying characteristics of Hi-Art treatment systems are elucidated. In particular, we describe the dependence of dose levels along the central axis of a cylinder aligned coaxially with a Hi-Art machine on fan beam width, couch velocity and helical delivery lengths. The impact on these dose levels of angular variations in gantry speed or output per linear accelerator pulse is also explored.
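
    As a rough illustration of the central-axis dependence described above (a simplified sketch, not the authors' model): in an idealized helical delivery, a point on the axis accumulates dose for as long as the fan beam sweeps over it, so dose scales linearly with fan beam width and inversely with couch velocity.

```python
# Idealized central-axis dose for a helical delivery. A point on the axis is
# irradiated while the fan beam passes over it; real models also account for
# delivery length and gantry-speed/output variations.
def central_axis_dose(dose_rate, fan_width_cm, couch_speed_cm_s):
    beam_on_time = fan_width_cm / couch_speed_cm_s  # seconds spent in the beam
    return dose_rate * beam_on_time

d1 = central_axis_dose(dose_rate=2.0, fan_width_cm=2.5, couch_speed_cm_s=0.05)
d2 = central_axis_dose(dose_rate=2.0, fan_width_cm=5.0, couch_speed_cm_s=0.05)
# doubling the fan width doubles the central-axis dose in this idealization
```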

  11. Catchment & sewer network simulation model to benchmark control strategies within urban wastewater systems

    DEFF Research Database (Denmark)

    Saagi, Ramesh; Flores Alsina, Xavier; Fu, Guangtao

    2016-01-01

    This paper aims at developing a benchmark simulation model to evaluate control strategies for the urban catchment and sewer network. Various modules describing wastewater generation in the catchment, its subsequent transport and storage in the sewer system are presented. Global/local overflow-based evaluation criteria describing the cumulative and acute effects are presented. Simulation results show that the proposed set of models is capable of generating daily, weekly and seasonal variations as well as describing the effect of rain events on wastewater characteristics. Two sets of case studies...

  12. Springback study in aluminum alloys based on the Demeri Benchmark Test : influence of material model

    International Nuclear Information System (INIS)

    Greze, R.; Laurent, H.; Manach, P. Y.

    2007-01-01

    Springback is a serious problem in sheet metal forming. Its origin lies in the elastic recovery of materials after a deep drawing operation. Springback modifies the final shape of the part when it is removed from the die after forming. This study deals with springback in an Al5754-O aluminum alloy. An experimental test similar to the Demeri Benchmark Test has been developed. The experimentally measured springback is compared to springback predicted by simulation using Abaqus software. Several material models are analyzed, all using isotropic hardening of Voce type and plasticity criteria such as the von Mises and Hill48 yield criteria.

  13. Comparison of the results of the fifth dynamic AER benchmark-a benchmark for coupled thermohydraulic system/three-dimensional hexagonal kinetic core models

    International Nuclear Information System (INIS)

    Kliem, S.

    1998-01-01

    The fifth dynamic benchmark was defined at the seventh AER Symposium, held in Hoernitz, Germany, in 1997. It is the first benchmark for coupled thermohydraulic system/three-dimensional hexagonal neutron kinetic core models. In this benchmark the interaction between the components of a WWER-440 NPP and the reactor core has been investigated. The initiating event is a symmetrical break of the main steam header at the end of the first fuel cycle under hot shutdown conditions, with one control rod group stuck. This break causes an overcooling of the primary circuit. During this overcooling the scram reactivity is compensated and the scrammed reactor becomes recritical. The calculation was continued until the highly borated water from the high-pressure injection system terminated the power excursion. Each participant used their own best-estimate nuclear cross-section data; only the initial subcriticality at the beginning of the transient was given. Solutions were received from the Kurchatov Institute (Russia) with the code BIPR8/ATHLET, VTT Energy (Finland) with HEXTRAN/SMABRE, NRI Rez (Czech Republic) with DYN3/ATHLET, KFKI Budapest (Hungary) with KIKO3D/ATHLET, and from FZR (Germany) with the code DYN3D/ATHLET. In this paper the results are compared. Besides the comparison of global results, the behaviour of several thermohydraulic and neutron kinetic parameters is presented to discuss the differences revealed between the solutions. (Authors)

  14. Modelling of macrosegregation in steel ingots: benchmark validation and industrial application

    International Nuclear Information System (INIS)

    Li Wensheng; Shen Houfa; Liu Baicheng; Shen Bingzhen

    2012-01-01

    The paper presents the recent progress made by the authors on the modelling of macrosegregation in steel ingots. A two-phase macrosegregation model was developed that incorporates descriptions of heat transfer, melt convection, solute transport, and solid movement on the process scale with microscopic relations for grain nucleation and growth. The formation of pipe shrinkage at the ingot top is also taken into account in the model. Firstly, a recently proposed numerical benchmark test of macrosegregation was used to verify the model. Then, the model was applied to predict the macrosegregation in a benchmark industrial-scale steel ingot. The predictions were validated against experimental data from the literature. Furthermore, a macrosegregation experiment on an industrial 53-t steel ingot was performed and the simulation results were compared with the measurements. It is indicated that the typical macrosegregation patterns encountered in steel ingots, including a positively segregated zone in the hot top and a negative segregation in the bottom part of the ingot, are well reproduced by the model.

  15. Benchmarking of wind farm scale wake models in the EERA - DTOC project

    DEFF Research Database (Denmark)

    Réthoré, Pierre-Elouan; Hansen, Kurt Schaldemose; Barthelmie, R.J.

    2013-01-01

    Designing offshore wind farms next to existing or planned wind farm clusters has recently become a common practice in the North Sea. These types of projects face unprecedented challenges in terms of wind energy siting. The currently ongoing European project FP7 EERA - DTOC (Design Tool for Offshore wind farm Clusters) is aiming at providing a new type of model work-flow to address this issue. The wake modeling part of the EERA - DTOC project is to improve the fundamental understanding of wind turbine wakes and modeling. One of these challenges is to create a new kind of wake modeling work-flow to combine wind farm (micro) and cluster (meso) scale wake models. For this purpose, a benchmark campaign is organized on the existing wind farm wake models available within the project, in order to identify which model would be the most appropriate for this coupling. A number of standardized wake cases...
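
    For context, engineering wake models of the kind compared in such benchmarks are often variants of the classic Jensen (Park) model, sketched below. This is a generic textbook formulation, not necessarily one of the EERA - DTOC participant models.

```python
import math

def jensen_deficit(ct, k, x, rotor_d):
    """Fractional velocity deficit at downstream distance x (Jensen/Park model).

    ct: turbine thrust coefficient, k: wake decay constant,
    rotor_d: rotor diameter. Illustrative engineering wake model only.
    """
    return (1.0 - math.sqrt(1.0 - ct)) / (1.0 + 2.0 * k * x / rotor_d) ** 2

# The deficit decays with downstream distance as the wake expands.
near = jensen_deficit(ct=0.8, k=0.05, x=200.0, rotor_d=80.0)
far = jensen_deficit(ct=0.8, k=0.05, x=800.0, rotor_d=80.0)
```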

  16. Benchmarking density functional tight binding models for barrier heights and reaction energetics of organic molecules.

    Science.gov (United States)

    Gruden, Maja; Andjeklović, Ljubica; Jissy, Akkarapattiakal Kuriappan; Stepanović, Stepan; Zlatar, Matija; Cui, Qiang; Elstner, Marcus

    2017-09-30

    Density Functional Tight Binding (DFTB) models are two to three orders of magnitude faster than ab initio and Density Functional Theory (DFT) methods and therefore are particularly attractive in applications to large molecules and condensed phase systems. To establish the applicability of DFTB models to general chemical reactions, we conduct benchmark calculations for barrier heights and reaction energetics of organic molecules using existing databases and several new ones compiled in this study. Structures for the transition states and stable species have been fully optimized at the DFTB level, making it possible to characterize the reliability of DFTB models in a more thorough fashion compared to conducting single point energy calculations as done in previous benchmark studies. The encouraging results for the diverse sets of reactions studied here suggest that DFTB models, especially the most recent third-order version (DFTB3/3OB augmented with dispersion correction), in most cases provide satisfactory description of organic chemical reactions with accuracy almost comparable to popular DFT methods with large basis sets, although larger errors are also seen for certain cases. Therefore, DFTB models can be effective for mechanistic analysis (e.g., transition state search) of large (bio)molecules, especially when coupled with single point energy calculations at higher levels of theory. © 2017 Wiley Periodicals, Inc.
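
    Benchmark statistics of the sort reported in such studies reduce to simple error summaries against reference values; the sketch below uses made-up numbers purely to show the bookkeeping.

```python
import numpy as np

# Benchmark-style error statistics: approximate barrier heights compared
# against reference values. All numbers are hypothetical, for illustration.
ref = np.array([10.2, 25.1, 14.8, 31.0])   # kcal/mol, hypothetical reference
approx = np.array([11.0, 23.9, 15.5, 29.2])  # hypothetical fast-method results

errors = approx - ref
mae = np.mean(np.abs(errors))     # mean absolute error over the set
max_err = np.max(np.abs(errors))  # worst-case deviation
```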

  17. Reviews and syntheses: Field data to benchmark the carbon cycle models for tropical forests

    Science.gov (United States)

    Clark, Deborah A.; Asao, Shinichi; Fisher, Rosie; Reed, Sasha; Reich, Peter B.; Ryan, Michael G.; Wood, Tana E.; Yang, Xiaojuan

    2017-10-01

    For more accurate projections of both the global carbon (C) cycle and the changing climate, a critical current need is to improve the representation of tropical forests in Earth system models. Tropical forests exchange more C, energy, and water with the atmosphere than any other class of land ecosystems. Further, tropical-forest C cycling is likely responding to the rapid global warming, intensifying water stress, and increasing atmospheric CO2 levels. Projections of the future C balance of the tropics vary widely among global models. A current effort of the modeling community, the ILAMB (International Land Model Benchmarking) project, is to compile robust observations that can be used to improve the accuracy and realism of the land models for all major biomes. Our goal with this paper is to identify field observations of tropical-forest ecosystem C stocks and fluxes, and of their long-term trends and climatic and CO2 sensitivities, that can serve this effort. We propose criteria for reference-level field data from this biome and present a set of documented examples from old-growth lowland tropical forests. We offer these as a starting point towards the goal of a regularly updated consensus set of benchmark field observations of C cycling in tropical forests.

  18. Refinement, Validation and Benchmarking of a Model for E-Government Service Quality

    Science.gov (United States)

    Magoutas, Babis; Mentzas, Gregoris

    This paper presents the refinement and validation of a model for the Quality of e-Government Services (QeGS). We built upon our previous work, where a conceptualized model was identified, and focused on the confirmatory phase of the model development process in order to come up with a valid and reliable QeGS model. The validated model, which was benchmarked against similar models found in the literature with very positive results, can be used for measuring the QeGS in a reliable and valid manner. This will form the basis for a continuous quality improvement process, unleashing the full potential of e-government services for both citizens and public administrations.

  19. Hand rub dose needed for a single disinfection varies according to product: A bias in benchmarking using indirect hand hygiene indicator

    Directory of Open Access Journals (Sweden)

    Raphaële Girard

    2012-12-01

    Results: Data from 27 products and 1706 tests were analyzed. Depending on the product, the dose needed to ensure a 30-s contact duration in 75% of tests ranged from 2 ml to more than 3 ml, and the dose needed to ensure a contact duration exceeding the EN 1500 times in 75% of tests ranged from 1.5 ml to more than 3 ml. The interpretation is as follows: if different products are used, the volume used does not give an unbiased estimate of hand hygiene (HH) compliance. Other compliance evaluation methods remain necessary for efficient benchmarking.
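
    The product-specific quantity reported above (the smallest volume for which at least 75% of tests reach a 30-s contact time) can be computed as in the sketch below, using synthetic numbers for one hypothetical product.

```python
import numpy as np

# Synthetic test data for one hypothetical product: contact times (seconds)
# observed at each trial volume (ml). Real data vary strongly between products.
doses = [1.5, 2.0, 2.5, 3.0]
times = {1.5: [18, 22, 25, 31], 2.0: [24, 28, 33, 35],
         2.5: [29, 33, 36, 40], 3.0: [32, 36, 41, 44]}

def min_adequate_dose(threshold_s=30.0, target=0.75):
    """Smallest volume where >= target fraction of tests reach threshold_s."""
    for d in doses:
        frac = np.mean(np.array(times[d]) >= threshold_s)
        if frac >= target:
            return d
    return None  # no tested volume was sufficient

dose_needed = min_adequate_dose()
```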

  20. Bayesian Dose-Response Modeling in Sparse Data

    Science.gov (United States)

    Kim, Steven B.

    This book discusses Bayesian dose-response modeling in small samples applied to two different settings. The first setting is early phase clinical trials, and the second setting is toxicology studies in cancer risk assessment. In early phase clinical trials, experimental units are humans who are actual patients. Prior to a clinical trial, opinions from multiple subject area experts are generally more informative than the opinion of a single expert, but we may face a dilemma when they have disagreeing prior opinions. In this regard, we consider how to reconcile the disagreement and compare two different approaches for making a decision. In addition to combining multiple opinions, we also address balancing two levels of ethics in early phase clinical trials. The first level is individual-level ethics, which reflects the perspective of trial participants. The second level is population-level ethics, which reflects the perspective of future patients. We extensively compare two existing statistical methods which focus on each perspective and propose a new method which balances the two conflicting perspectives. In toxicology studies, experimental units are living animals. Here we focus on a potential non-monotonic dose-response relationship known as hormesis. Briefly, hormesis is a phenomenon characterized by a beneficial effect at low doses and a harmful effect at high doses. In cancer risk assessments, the estimation of a parameter known as a benchmark dose can be highly sensitive to a class of assumptions, monotonicity or hormesis. In this regard, we propose a robust approach which considers both monotonicity and hormesis as possibilities. In addition, we discuss statistical hypothesis testing for hormesis and consider various experimental designs for detecting hormesis based on Bayesian decision theory. Past experiments have not been optimally designed for testing for hormesis, and some Bayesian optimal designs may not be optimal under a
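
    For the simplest continuous-response case appearing in this collection, the benchmark dose has a closed form: with the exponential model Effect = a·exp(b·dose) and a relative benchmark response BMR, the BMD solves a·exp(b·BMD) = a·(1 + BMR). A minimal sketch follows; real BMD software additionally reports the lower confidence bound (BMDL) from the fitted model's uncertainty.

```python
import math

# Benchmark dose for Effect = a * exp(b * dose): the parameter a cancels,
# leaving BMD = ln(1 + BMR) / b. Here b is the fitted slope of the
# dose-response model and BMR the chosen relative benchmark response.
def bmd_exponential(b, bmr=0.1):
    return math.log(1.0 + bmr) / b

bmd = bmd_exponential(b=0.02, bmr=0.1)  # in the dose units of the fitted model
```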

  1. An improved benchmark model for the Big Ten critical assembly - 021

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    2010-01-01

    A new benchmark specification is developed for the BIG TEN uranium critical assembly. The assembly has a fast spectrum, and its core contains approximately 10 wt.% enriched uranium. Detailed specifications for the benchmark are provided, and results from the MCNP5 Monte Carlo code using a variety of nuclear-data libraries are given for this benchmark and two others. (authors)

  2. Development of Multivariable Models to Predict and Benchmark Transfusion in Elective Surgery Supporting Patient Blood Management.

    Science.gov (United States)

    Hayn, Dieter; Kreiner, Karl; Ebner, Hubert; Kastner, Peter; Breznik, Nada; Rzepka, Angelika; Hofmann, Axel; Gombotz, Hans; Schreier, Günter

    2017-06-14

    Blood transfusion is a highly prevalent procedure in hospitalized patients, and in some clinical scenarios it has lifesaving potential. However, in most cases transfusion is administered to hemodynamically stable patients with no benefit, but increased odds of adverse patient outcomes and substantial direct and indirect cost. Therefore, the concept of Patient Blood Management has increasingly gained importance to pre-empt and reduce transfusion and to identify the optimal transfusion volume for an individual patient when transfusion is indicated. It was our aim to describe how predictive modeling and machine learning tools applied to pre-operative data can be used to predict the amount of red blood cells to be transfused during surgery and to prospectively optimize blood ordering schedules. In addition, the data derived from the predictive models should be used to benchmark different hospitals concerning their blood transfusion patterns. 6,530 case records obtained for elective surgeries from 16 centers taking part in two studies conducted in 2004-2005 and 2009-2010 were analyzed. Transfused red blood cell volume was predicted using random forests. Separate models were trained for the overall data, for each center and for each of the two studies. Important characteristics of the different models were compared with one another. Our results indicate that predictive modeling applied prior to surgery can predict the transfused volume of red blood cells more accurately (correlation coefficient cc = 0.61) than state-of-the-art algorithms (cc = 0.39). We found significantly different patterns of feature importance a) in different hospitals and b) between study 1 and study 2. We conclude that predictive modeling can be used to benchmark the importance of different features in the models derived with data from different hospitals. This might help to optimize crucial processes in a specific hospital, even in scenarios beyond Patient Blood Management.
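
    A minimal sketch of the modeling approach described (a random forest regressing transfused volume on pre-operative features), using synthetic data and a hypothetical hemoglobin-driven rule rather than the study's 6,530 case records:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Synthetic pre-operative features: [hemoglobin, weight, surgery-type code]
X = rng.normal(size=(300, 3))
# Hypothetical rule: lower hemoglobin -> more red cells transfused (plus noise)
y = 2.0 - 1.5 * X[:, 0] + 0.2 * rng.normal(size=300)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
# Feature importances of the kind compared across hospitals/studies above;
# hemoglobin dominates by construction in this toy example.
importances = model.feature_importances_
```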

  3. Benchmarking computational fluid dynamics models of lava flow simulation for hazard assessment, forecasting, and risk management

    Science.gov (United States)

    Dietterich, Hannah; Lev, Einat; Chen, Jiangzhi; Richardson, Jacob A.; Cashman, Katharine V.

    2017-01-01

    Numerical simulations of lava flow emplacement are valuable for assessing lava flow hazards, forecasting active flows, designing flow mitigation measures, interpreting past eruptions, and understanding the controls on lava flow behavior. Existing lava flow models vary in simplifying assumptions, physics, dimensionality, and the degree to which they have been validated against analytical solutions, experiments, and natural observations. In order to assess existing models and guide the development of new codes, we conduct a benchmarking study of computational fluid dynamics (CFD) models for lava flow emplacement, including VolcFlow, OpenFOAM, FLOW-3D, COMSOL, and MOLASSES. We model viscous, cooling, and solidifying flows over horizontal planes, sloping surfaces, and into topographic obstacles. We compare model results to physical observations made during well-controlled analogue and molten basalt experiments, and to analytical theory when available. Overall, the models accurately simulate viscous flow with some variability in flow thickness where flows intersect obstacles. OpenFOAM, COMSOL, and FLOW-3D can each reproduce experimental measurements of cooling viscous flows, and OpenFOAM and FLOW-3D simulations with temperature-dependent rheology match results from molten basalt experiments. We assess the goodness-of-fit of the simulation results and the computational cost. Our results guide the selection of numerical simulation codes for different applications, including inferring emplacement conditions of past lava flows, modeling the temporal evolution of ongoing flows during eruption, and probabilistic assessment of lava flow hazard prior to eruption. Finally, we outline potential experiments and desired key observational data from future flows that would extend existing benchmarking data sets.

  4. Derivation of the critical effect size/benchmark response for the dose-response analysis of the uptake of radioactive iodine in the human thyroid.

    Science.gov (United States)

    Weterings, Peter J J M; Loftus, Christine; Lewandowski, Thomas A

    2016-08-22

Potential adverse effects of chemical substances on thyroid function are usually examined by measuring serum levels of thyroid-related hormones. Instead, recent risk assessments for thyroid-active chemicals have focussed on iodine uptake inhibition, an upstream event that by itself is not necessarily adverse. Establishing the extent of uptake inhibition that can be considered de minimis, the chosen benchmark response (BMR), is therefore critical. The BMR values selected by two international advisory bodies were 5% and 50%, a difference that had correspondingly large impacts on the estimated risks and health-based guidance values that were established. Potential treatment-related inhibition of thyroidal iodine uptake is usually determined by comparing thyroidal uptake of radioactive iodine (RAIU) during treatment with a single pre-treatment RAIU value. In the present study it is demonstrated that the physiological intra-individual variation in iodine uptake is much larger than 5%. Consequently, in-treatment RAIU values, expressed as a percentage of the pre-treatment value, have an inherent variation that needs to be considered when conducting dose-response analyses. Based on statistical and biological considerations, a BMR of 20% is proposed for benchmark dose analysis of human thyroidal iodine uptake data, to take the inherent variation in relative RAIU data into account. Implications for the tolerated daily intakes for perchlorate and chlorate, recently established by the European Food Safety Authority (EFSA), are discussed. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd. All rights reserved.

  5. Development on Dose Assessment Model of Northeast Asia Nuclear Accident Simulator

    Energy Technology Data Exchange (ETDEWEB)

Kim, Ju Yub; Kim, Ju Youl; Kim, Suk Hoon; Lee, Seung Hee; Yoon, Tae Bin [FNC Technology, Yongin (Korea, Republic of)

    2016-05-15

In order to support the emergency response system, a simulator for overseas nuclear accidents is under development, including source-term estimation, atmospheric dispersion modeling and dose assessment. The simulator is named NANAS (Northeast Asia Nuclear Accident Simulator). For the source-term estimation, the design characteristics of each reactor type should be reflected in the model. Since there are many reactor types in neighboring countries, representative reactors of China, Japan and Taiwan have been selected and source-term estimation models for each reactor have been developed. For the atmospheric dispersion modeling, a Lagrangian particle model will be integrated into the simulator for long-range dispersion modeling in the Northeast Asia region. In this study, the dose assessment model has been developed considering external and internal exposure, as a part of the overseas nuclear accident simulator NANAS. It addresses external and internal exposure pathways, including cloudshine, groundshine and inhalation. Also, it uses the output of the atmospheric dispersion model (i.e. the average concentrations of radionuclides in air and on the ground) and various coefficients (e.g. dose conversion factors and breathing rate) as input. Effective dose and thyroid dose for each grid cell in the Korean Peninsula region are output in the form of map projections and charts. Verification and validation of the dose assessment model will be conducted in a further study by benchmarking against measured data from the Fukushima Daiichi nuclear accident.
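The pathway combination described above (cloudshine, groundshine, inhalation) can be sketched as follows. This is an illustrative outline, not the NANAS implementation; all coefficient values are hypothetical placeholders, not actual dose conversion factors.

```python
# Illustrative sketch of combining exposure pathways into an effective dose.
# Coefficient values below are hypothetical placeholders for illustration.

CLOUDSHINE_DCF = 1.0e-13   # (Sv/s) per (Bq/m^3), hypothetical
GROUNDSHINE_DCF = 5.0e-16  # (Sv/s) per (Bq/m^2), hypothetical
INHALATION_DCF = 2.0e-8    # Sv per Bq inhaled, hypothetical
BREATHING_RATE = 2.57e-4   # m^3/s, adult light activity

def effective_dose(air_conc, ground_conc, duration_s):
    """Effective dose (Sv) for one grid cell over an exposure interval.

    air_conc    -- average air concentration, Bq/m^3
    ground_conc -- average ground deposition, Bq/m^2
    duration_s  -- exposure duration, s
    """
    cloud = air_conc * CLOUDSHINE_DCF * duration_s
    ground = ground_conc * GROUNDSHINE_DCF * duration_s
    inhaled = air_conc * BREATHING_RATE * duration_s * INHALATION_DCF
    return cloud + ground + inhaled
```

In a grid-based simulator of this kind, a function like this would be evaluated per cell using the dispersion model's concentration output.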

  6. Nonlinear model of high-dose implantation

    International Nuclear Information System (INIS)

    Danilyuk, A.

    2001-01-01

The models of high-dose implantation based on distribution functions are relatively simple. However, they must take into account the variation of the distribution function of the implanted ions with increasing dose [1-4]. This variation takes place because the increase in the concentration of the implanted ions changes the properties of the target. High-dose implantation is accompanied by sputtering, volume growth, diffusion, generation of defects, formation of new phases, etc. The variation of the distribution function is determined by many factors and is not known in advance. Within the framework of the models [1-4], this variation is accounted for in advance by introducing intuitive assumptions on the basis of implicit considerations; such approaches should therefore be regarded as incorrect. The model presented here makes it possible to take into account sputtering of the target, volume growth, and the accumulation of the implanted ions, without any assumptions regarding the variation of the distribution function with increasing dose. In our model it is assumed that the type of distribution function for small doses in a pure target substance is the same as in substances with implanted ions. A second assumption relates to the type of the distribution function valid for small doses in the given substances. These functions have been determined in a large number of theoretical and experimental investigations and are well known at present. They include the symmetric and asymmetric Gaussian distributions, the Pearson distribution, and others. We examine implantation with small doses of up to 10¹⁴-10¹⁵ cm⁻², when the accurately known distribution is valid
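For illustration, the symmetric Gaussian distribution mentioned above gives a simple low-dose depth profile N(x) = Φ/(√(2π)·ΔRp)·exp(−(x−Rp)²/(2ΔRp²)). A minimal sketch, with the projected range Rp and straggle ΔRp as assumed inputs (the model in the record itself is more general):

```python
import math

def gaussian_profile(x_nm, dose_cm2, rp_nm, drp_nm):
    """Implanted-ion concentration (cm^-3) at depth x for a low-dose implant,
    assuming the symmetric Gaussian range distribution.

    x_nm     -- depth, nm
    dose_cm2 -- implanted dose, ions/cm^2
    rp_nm    -- projected range, nm
    drp_nm   -- range straggle, nm
    """
    drp_cm = drp_nm * 1e-7                      # nm -> cm
    peak = dose_cm2 / (math.sqrt(2 * math.pi) * drp_cm)
    return peak * math.exp(-((x_nm - rp_nm) ** 2) / (2 * drp_nm ** 2))
```

Integrating the profile over depth recovers the implanted dose, which is the normalization the low-dose approximation relies on.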

  7. How to Use Benchmark and Cross-section Studies to Improve Data Libraries and Models

    Science.gov (United States)

    Wagner, V.; Suchopár, M.; Vrzalová, J.; Chudoba, P.; Svoboda, O.; Tichý, P.; Krása, A.; Majerle, M.; Kugler, A.; Adam, J.; Baldin, A.; Furman, W.; Kadykov, M.; Solnyshkin, A.; Tsoupko-Sitnikov, S.; Tyutyunikov, S.; Vladimirovna, N.; Závorka, L.

    2016-06-01

Improvements of the Monte Carlo transport codes and cross-section libraries are very important steps towards the use of accelerator-driven transmutation systems. We have conducted many benchmark experiments with different set-ups consisting of lead, natural uranium and moderator irradiated by relativistic protons and deuterons within the framework of the collaboration “Energy and Transmutation of Radioactive Waste”. Unfortunately, the knowledge of the total or partial cross-sections of important reactions is insufficient. For this reason we have started extensive studies of different reaction cross-sections. We measure cross-sections of important neutron reactions by means of the quasi-monoenergetic neutron sources based on the cyclotrons at the Nuclear Physics Institute in Řež and at The Svedberg Laboratory in Uppsala. Measurements of partial cross-sections of relativistic deuteron reactions were the second direction of our studies. The new results obtained during recent years will be shown. Possible use of these data for improvement of libraries, models and benchmark studies will be discussed.

  8. Single toxin dose-response models revisited

    Energy Technology Data Exchange (ETDEWEB)

Demidenko, Eugene, E-mail: eugened@dartmouth.edu [Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH 03756 (United States); Glaholt, SP, E-mail: sglaholt@indiana.edu [Indiana University, School of Public & Environmental Affairs, Bloomington, IN 47405 (United States); Department of Biological Sciences, Dartmouth College, Hanover, NH 03755 (United States); Kyker-Snowman, E, E-mail: ek2002@wildcats.unh.edu [Department of Natural Resources and the Environment, University of New Hampshire, Durham, NH 03824 (United States); Shaw, JR, E-mail: joeshaw@indiana.edu [Indiana University, School of Public & Environmental Affairs, Bloomington, IN 47405 (United States); Chen, CY, E-mail: Celia.Y.Chen@dartmouth.edu [Department of Biological Sciences, Dartmouth College, Hanover, NH 03755 (United States)

    2017-01-01

The goal of this paper is to offer a rigorous analysis of the sigmoid-shape single toxin dose-response relationship. The toxin efficacy function is introduced and four special points, including maximum toxin efficacy and inflection points, on the dose-response curve are defined. The special points define three phases of the toxin effect on mortality: (1) toxin concentrations smaller than the first inflection point or (2) larger than the second inflection point imply a low mortality rate, and (3) concentrations between the first and the second inflection points imply a high mortality rate. Probabilistic interpretation and mathematical analysis for each of the four models (Hill, logit, probit, and Weibull) are provided. Two general model extensions are introduced: (1) the multi-target hit model that accounts for the existence of several vital receptors affected by the toxin, and (2) a model with nonzero mortality at zero concentration to account for natural mortality. Special attention is given to statistical estimation in the framework of the generalized linear model with the binomial dependent variable as the mortality count in each experiment, contrary to the widespread nonlinear regression treating the mortality rate as a continuous variable. The models are illustrated using standard EPA Daphnia acute (48 h) toxicity tests with mortality as a function of NiCl or CuSO₄ toxin. - Highlights: • The paper offers a rigorous study of a sigmoid dose-response relationship. • The concentration with the highest mortality rate is rigorously defined. • A table with four special points for five mortality curves is presented. • Two new sigmoid dose-response models have been introduced. • The generalized linear model is advocated for estimation of the sigmoid dose-response relationship.
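The estimation approach advocated above, a generalized linear model with binomial mortality counts, can be sketched with a small Newton fit of a logit dose-response curve on log-dose. This is an illustrative reimplementation, not the authors' code; the synthetic data and starting values in the usage below are assumptions.

```python
import math

def fit_logit_dose_response(doses, n, deaths, iters=50):
    """Maximum-likelihood fit of mortality ~ logit(b0 + b1*ln(dose))
    for binomial counts (deaths out of n at each dose), via Newton's method."""
    b0, b1 = 0.0, 1.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for d, ni, yi in zip(doses, n, deaths):
            x = math.log(d)
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            w = ni * p * (1.0 - p)          # Fisher information weight
            r = yi - ni * p                 # score residual
            g0 += r; g1 += r * x
            h00 += w; h01 += w * x; h11 += w * x * x
        det = h00 * h11 - h01 * h01
        if det == 0.0:
            break
        db0 = (h11 * g0 - h01 * g1) / det   # solve 2x2 Newton system
        db1 = (h00 * g1 - h01 * g0) / det
        b0 += db0; b1 += db1
        if abs(db0) + abs(db1) < 1e-10:
            break
    return b0, b1
```

For a fitted logit model, the median lethal concentration is LC50 = exp(−b0/b1), the first special point on the curve that such an analysis would report.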

  9. An integer programming model and benchmark suite for liner shipping network design

    DEFF Research Database (Denmark)

    Løfstedt, Berit; Alvarez, Jose Fernando; Plum, Christian Edinger Munk

Maritime transportation is accountable for 2.7% of the world's CO2 emissions and the liner shipping industry is committed to a slow steaming policy to provide low cost and environmentally conscious global transport of goods without compromising the level of service. The potential for making cost-effective and energy efficient liner shipping networks using operations research is huge and neglected. The implementation of logistic planning tools based upon operations research has enhanced performance of both airlines, railways and general transportation companies, but within the field of liner shipping... along with a rich integer programming model based on the services that constitute the fixed schedule of a liner shipping company. The model may be relaxed as well as decomposed. The design of a benchmark suite of data instances to reflect the business structure of a global liner shipping network...

  10. A benchmark for coupled thermohydraulics system/three-dimensional neutron kinetics core models

    International Nuclear Information System (INIS)

    Kliem, S.

    1999-01-01

    During the last years 3D neutron kinetics core models have been coupled to advanced thermohydraulics system codes. These coupled codes can be used for the analysis of the whole reactor system. Although the stand-alone versions of the 3D neutron kinetics core models and of the thermohydraulics system codes generally have a good verification and validation basis, there is a need for additional validation work. This especially concerns the interaction between the reactor core and the other components of a nuclear power plant (NPP). In the framework of the international 'Atomic Energy Research' (AER) association on VVER Reactor Physics and Reactor Safety, a benchmark for these code systems was defined. (orig.)

  11. Benchmarking of the advanced hygrothermal model-hygIRC with mid scale experiments

    Energy Technology Data Exchange (ETDEWEB)

    Maref, W.; Lacasse, M.; Kumaran, K.; Swinton, M.C. [National Research Council of Canada, Ottawa, ON (Canada). Inst. for Research in Construction

    2002-07-01

    An experimental study has been conducted to benchmark an advanced hygrothermal model entitled hygIRC which can be used to estimate the drying response of oriented strand board (OSB) used in timber-frame construction. Three specimens of OSB boards were immersed in water for 5 days and then allowed to stabilise in a sealed tank. A comparison of results from the computer model simulations to those obtained from experimental tests and laboratory measurements showed good agreement in terms of the shape of the drying curve and time taken to reach equilibrium moisture content. In general, it was determined that the drying process is controlled by the vapour permeability of the membrane. The higher the vapour permeability, the faster the rate of drying in a given condition. 11 refs., 1 tab., 9 figs.
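The reported relationship, faster drying with higher vapour permeability, can be illustrated with a first-order drying-curve sketch. This is not the hygIRC model; the linear dependence of the rate constant on permeability and all parameter values are assumptions for illustration only.

```python
import math

def moisture_content(t_h, m0, m_eq, permeability, k_per_perm=0.05):
    """Moisture content (kg/kg) after t_h hours, assuming first-order drying
    toward the equilibrium moisture content m_eq, with a rate constant taken
    (as an assumption) to be proportional to membrane vapour permeability."""
    k = k_per_perm * permeability   # 1/h, illustrative linear relation
    return m_eq + (m0 - m_eq) * math.exp(-k * t_h)
```

Such a curve reproduces the qualitative shape reported above: an exponential approach to equilibrium whose speed scales with the membrane's vapour permeability.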

  12. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology, and atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models

  13. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology, and atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  14. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.

    1991-01-01

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems

  15. Hand rub dose needed for a single disinfection varies according to product: a bias in benchmarking using indirect hand hygiene indicator.

    Science.gov (United States)

    Girard, Raphaële; Aupee, Martine; Erb, Martine; Bettinger, Anne; Jouve, Alice

    2012-12-01

The 3 ml volume currently used as the hand hygiene (HH) measure has been explored as the pertinent dose for an indirect indicator of HH compliance. A multicenter study was conducted in order to ascertain the required dose for different products. The average contact duration before drying was measured and compared with references. Effective hand coverage had to include the whole hand and the wrist. Two durations were chosen as points of reference: 30 s, as given by guidelines, and the duration validated by the European standard EN 1500. Each product was to be tested, using standardized procedures, by three nosocomial infection prevention teams, for three different doses (3, 2 and 1.5 ml). Data from 27 products and 1706 tests were analyzed. Depending on the product, the dose needed to ensure a 30-s contact duration in 75% of tests ranged from 2 ml to more than 3 ml, and the dose needed to ensure a contact duration exceeding the EN 1500 times in 75% of tests ranged from 1.5 ml to more than 3 ml. The interpretation is as follows: if different products are used, the volume utilized does not give an unbiased estimation of HH compliance. Other compliance evaluation methods remain necessary for efficient benchmarking. Copyright © 2012 Ministry of Health, Saudi Arabia. Published by Elsevier Ltd. All rights reserved.

  16. Benchmark experiments of dose distributions in phantom placed behind iron and concrete shields at the TIARA facility

    International Nuclear Information System (INIS)

    Nakane, Yoshihiro; Sakamoto, Yukio; Tsuda, Shuichi

    2004-01-01

To verify the calculation methods used for the evaluation of neutron dose in the radiation shielding design of the high-intensity proton accelerator facility (J-PARC), dose distributions in a 30×30×30 cm³ plastic phantom slab placed behind iron and concrete test shields were measured by using a tissue equivalent proportional counter for 65-MeV quasi-monoenergetic neutrons generated from the ⁷Li(p,n) reaction with 68-MeV protons at the TIARA facility. Dose distributions in the phantom were calculated by using the MCNPX and the NMTC/JAM-MCNP codes with the flux-to-dose conversion coefficients prepared for the shielding design of the facility. The comparison shows that the calculated results were in good agreement with the measured ones within 20%. (author)

  17. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  18. Estimation of benchmark dose as the threshold levels of urinary cadmium, based on excretion of total protein, β₂-microglobulin, and N-acetyl-β-D-glucosaminidase in cadmium nonpolluted regions in Japan

    International Nuclear Information System (INIS)

    Kobayashi, Etsuko; Suwazono, Yasushi; Uetani, Mirei; Inaba, Takeya; Oishi, Mitsuhiro; Kido, Teruhiko; Nishijo, Muneko; Nakagawa, Hideaki; Nogawa, Koji

    2006-01-01

Previously, we investigated the association between urinary cadmium (Cd) concentration and indicators of renal dysfunction, including total protein, β₂-microglobulin (β₂-MG), and N-acetyl-β-D-glucosaminidase (NAG). In 2778 inhabitants ≥50 years of age (1114 men, 1664 women) in three different Cd nonpolluted areas in Japan, we showed that a dose-response relationship existed between renal effects and Cd exposure in the general environment without any known Cd pollution. However, we could not estimate the threshold levels of urinary Cd at that time. In the present study, we estimated the threshold levels of urinary Cd as the lower confidence bound of the benchmark dose (BMDL) using the benchmark dose (BMD) approach. Urinary Cd excretion was divided into 10 categories, and an abnormality rate was calculated for each. Cut-off values for urinary substances were defined as corresponding to the 84% and 95% upper limit values of the non-smoking target population. Then we calculated the BMD and BMDL using a log-logistic model. The values of BMD and BMDL for all urinary substances could be calculated. The BMDL for the 84% cut-off value of β₂-MG, setting the benchmark response at 5%, was 2.4 μg/g creatinine (cr) in men and 3.3 μg/g cr in women. In conclusion, the present study demonstrated that the threshold level of urinary Cd could be estimated in people living in the general environment without any known Cd pollution in Japan, and the value was inferred to be almost the same as that in Belgium, Sweden, and China
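The log-logistic BMD step described above has a closed form for quantal data under the extra-risk definition of the benchmark response. A minimal sketch; the fitted parameters used in the test are hypothetical, not the study's estimates.

```python
import math

def log_logistic_extra_risk_bmd(a, b, bmr):
    """Benchmark dose for a quantal log-logistic model
        P(d) = g + (1 - g) / (1 + exp(-(a + b*ln d))),
    using the extra-risk definition (P(BMD) - P(0)) / (1 - P(0)) = BMR.
    For this model the background g cancels, so the extra risk reduces to
    F(BMD) = BMR and the BMD has a closed form in the fitted a, b."""
    return math.exp((math.log(bmr / (1.0 - bmr)) - a) / b)
```

In practice a, b (and the background g) come from a maximum-likelihood fit to the categorized abnormality rates, and the BMDL is the lower confidence bound on this BMD.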

  19. Physical Model Development and Benchmarking for MHD Flows in Blanket Design

    Energy Technology Data Exchange (ETDEWEB)

Ramakanth Munipalli; P.-Y. Huang; C. Chandler; C. Rowell; M.-J. Ni; N. Morley; S. Smolentsev; M. Abdou

    2008-06-05

An advanced simulation environment to model incompressible MHD flows relevant to blanket conditions in fusion reactors has been developed at HyPerComp in research collaboration with TEXCEL. The goals of this phase-II project are two-fold: The first is the incorporation of crucial physical phenomena such as induced magnetic field modeling, and extending the capabilities beyond fluid flow prediction to model heat transfer with natural convection and mass transfer including tritium transport and permeation. The second is the design of a sequence of benchmark tests to establish code competence for several classes of physical phenomena in isolation as well as in select combinations (termed here “canonical”). No previous attempts to develop such a comprehensive MHD modeling capability exist in the literature, and this study represents essentially uncharted territory. During the course of this Phase-II project, a significant breakthrough was achieved in modeling liquid metal flows at high Hartmann numbers. We developed a unique mathematical technique to accurately compute the fluid flow in complex geometries at extremely high Hartmann numbers (10,000 and greater), thus extending the state of the art of liquid metal MHD modeling relevant to fusion reactors at the present time. These developments have been published in noted international journals. A sequence of theoretical and experimental results was used to verify and validate the results obtained. The code was applied to a complete DCLL module simulation study with promising results.

  20. Physical Model Development and Benchmarking for MHD Flows in Blanket Design

    International Nuclear Information System (INIS)

    Munipalli, Ramakanth; Huang, P.-Y.; Chandler, C.; Rowell, C.; Ni, M.-J.; Morley, N.; Smolentsev, S.; Abdou, M.

    2008-01-01

An advanced simulation environment to model incompressible MHD flows relevant to blanket conditions in fusion reactors has been developed at HyPerComp in research collaboration with TEXCEL. The goals of this phase-II project are two-fold: The first is the incorporation of crucial physical phenomena such as induced magnetic field modeling, and extending the capabilities beyond fluid flow prediction to model heat transfer with natural convection and mass transfer including tritium transport and permeation. The second is the design of a sequence of benchmark tests to establish code competence for several classes of physical phenomena in isolation as well as in select combinations (termed here 'canonical'). No previous attempts to develop such a comprehensive MHD modeling capability exist in the literature, and this study represents essentially uncharted territory. During the course of this Phase-II project, a significant breakthrough was achieved in modeling liquid metal flows at high Hartmann numbers. We developed a unique mathematical technique to accurately compute the fluid flow in complex geometries at extremely high Hartmann numbers (10,000 and greater), thus extending the state of the art of liquid metal MHD modeling relevant to fusion reactors at the present time. These developments have been published in noted international journals. A sequence of theoretical and experimental results was used to verify and validate the results obtained. The code was applied to a complete DCLL module simulation study with promising results.
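For context, the Hartmann number cited above (10,000 and greater) measures the ratio of electromagnetic to viscous forces in a conducting duct flow. A minimal sketch, with lead-lithium-like property values in the test chosen purely for illustration:

```python
import math

def hartmann_number(b_tesla, length_m, sigma, mu):
    """Hartmann number Ha = B * L * sqrt(sigma / mu).

    b_tesla  -- magnetic field strength, T
    length_m -- characteristic length (e.g. duct half-width), m
    sigma    -- electrical conductivity, S/m
    mu       -- dynamic viscosity, Pa*s
    """
    return b_tesla * length_m * math.sqrt(sigma / mu)
```

With field strengths of several tesla and decimeter-scale ducts, liquid metal blankets readily reach Ha on the order of 10^4, which is why the high-Hartmann-number regime discussed above is so demanding numerically.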

  1. The MESORAD dose assessment model: Computer code

    International Nuclear Information System (INIS)

    Ramsdell, J.V.; Athey, G.F.; Bander, T.J.; Scherpelz, R.I.

    1988-10-01

MESORAD is a dose equivalent model for emergency response applications that is designed to be run on minicomputers. It has been developed by the Pacific Northwest Laboratory for use as part of the Intermediate Dose Assessment System in the US Nuclear Regulatory Commission Operations Center in Washington, DC, and the Emergency Management System in the US Department of Energy Unified Dose Assessment Center in Richland, Washington. This volume describes the MESORAD computer code and contains a listing of the code. The technical basis for MESORAD is described in the first volume of this report (Scherpelz et al. 1986). A third volume of the documentation is planned. That volume will contain utility programs and input and output files that can be used to check the implementation of MESORAD. 18 figs., 4 tabs

  2. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

Full Text Available The term benchmarking is encountered in the implementation of total quality management (TQM), in Indonesian termed holistic quality management, because benchmarking is a tool for finding ideas and learning from other libraries. Benchmarking is a process of systematic and continuous measurement: measuring and comparing an organization's business processes in order to obtain information that can help the organization improve its performance.

  3. Two NEA sensitivity, 1-D benchmark calculations. Part I: Sensitivity of the dose rate at the outside of a PWR configuration and of the vessel damage

    International Nuclear Information System (INIS)

    Canali, U.; Gonano, G.; Nicks, R.

    1978-01-01

    Within the framework of the coordinated programme of sensitivity analysis studies, the reactor shielding benchmark calculation concerning the shield of a typical Pressurized Water Reactor, as proposed by I.K.E. (Stuttgart) and K.W.U. (Erlangen) has been performed. The direct and adjoint fluxes were calculated using ANISN, the cross-section sensitivity using SWANLAKE. The cross-section library used was EL4, 100 neutron + 19 gamma groups. The following quantities were of interest: neutron damage in the pressure vessel; dose rate outside the concrete shield. SWANLAKE was used to calculate the sensitivity of the above mentioned results to variations in the density of each nuclide present. The contributions of the different cross-section Legendre components are also given. Sensitivity profiles indicate the energy ranges in which a cross-section variation has a greater influence on the results. (author)

  4. Benchmarking of protein descriptor sets in proteochemometric modeling (part 2): modeling performance of 13 amino acid descriptor sets

    Science.gov (United States)

    2013-01-01

Background: While a large body of work exists on comparing and benchmarking descriptors of molecular structures, a similar comparison of protein descriptor sets is lacking. Hence, in the current work a total of 13 amino acid descriptor sets have been benchmarked with respect to their ability to establish bioactivity models. The descriptor sets included in the study are Z-scales (3 variants), VHSE, T-scales, ST-scales, MS-WHIM, FASGAI, BLOSUM, a novel protein descriptor set (termed ProtFP (4 variants)), and in addition three pairs of descriptor combinations were created and benchmarked. Prediction performance was evaluated in seven structure-activity benchmarks which comprise Angiotensin Converting Enzyme (ACE) dipeptidic inhibitor data and three proteochemometric data sets, namely (1) GPCR ligands modeled against a GPCR panel, (2) enzyme inhibitors (NNRTIs) with associated bioactivities against a set of HIV enzyme mutants, and (3) enzyme inhibitors (PIs) with associated bioactivities on a large set of HIV enzyme mutants. Results: The amino acid descriptor sets compared here show similar performance (<0.3 log units RMSE difference and <0.7 difference in MCC). Combining different descriptor sets generally leads to better modeling performance than utilizing individual sets. The best performers were Z-scales (3) combined with ProtFP (Feature), or Z-scales (3) combined with an average Z-scale value for each target, while ProtFP (PCA8), ST-scales, and ProtFP (Feature) rank last. Conclusions: While amino acid descriptor sets capture different aspects of amino acids, their ability to be used for bioactivity modeling is still, on average, surprisingly similar. Still, combining sets describing complementary information consistently leads to small but consistent improvements in modeling performance (average MCC 0.01 better, average RMSE 0.01 log units lower). Finally, performance differences exist between the targets compared, thereby underlining that
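The two performance metrics used throughout this comparison, RMSE and the Matthews correlation coefficient (MCC), can be computed as follows (a generic sketch, independent of the descriptor sets themselves):

```python
import math

def rmse(pred, obs):
    """Root-mean-square error between predicted and observed activities."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from a 2x2 confusion matrix:
    +1 = perfect classification, 0 = random, -1 = total disagreement."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

Differences of 0.01 in MCC or 0.01 log units in RMSE, as reported above, are therefore small on both scales, which is the basis for the "surprisingly similar" conclusion.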

  5. Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model

    Energy Technology Data Exchange (ETDEWEB)

    Baudron, Anne-Marie, E-mail: anne-marie.baudron@cea.fr [Laboratoire de Recherche Conventionné MANON, CEA/DEN/DANS/DM2S and UPMC-CNRS/LJLL (France); CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex (France); Lautard, Jean-Jacques, E-mail: jean-jacques.lautard@cea.fr [Laboratoire de Recherche Conventionné MANON, CEA/DEN/DANS/DM2S and UPMC-CNRS/LJLL (France); CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex (France); Maday, Yvon, E-mail: maday@ann.jussieu.fr [Sorbonne Universités, UPMC Univ Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions and Institut Universitaire de France, F-75005, Paris (France); Laboratoire de Recherche Conventionné MANON, CEA/DEN/DANS/DM2S and UPMC-CNRS/LJLL (France); Brown Univ, Division of Applied Maths, Providence, RI (United States); Riahi, Mohamed Kamel, E-mail: riahi@cmap.polytechnique.fr [Laboratoire de Recherche Conventionné MANON, CEA/DEN/DANS/DM2S and UPMC-CNRS/LJLL (France); CMAP, Inria-Saclay and X-Ecole Polytechnique, Route de Saclay, 91128 Palaiseau Cedex (France); Salomon, Julien, E-mail: salomon@ceremade.dauphine.fr [CEREMADE, Univ Paris-Dauphine, Pl. du Mal. de Lattre de Tassigny, F-75016, Paris (France)

    2014-12-15

In this paper we present a time-parallel algorithm for the 3D neutron calculation of a transient model in a nuclear reactor core. The neutron calculation consists in numerically solving the time-dependent diffusion approximation equation, which is a simplified transport equation. The numerical resolution is done with the finite element method based on a tetrahedral meshing of the computational domain, representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model presents moving control rods during the time of the reaction. Therefore, cross-sections (piecewise constant) are taken into account by interpolation with respect to the velocity of the control rods. The parallelism across time is achieved by an adequate application of the parareal-in-time algorithm to the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines the use of two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with a large time step and a fixed-position control rod model, while the fine propagator is assumed to be a high-order numerical approximation of the full model. The parallel implementation of our method provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch–Maurer–Werner benchmark.
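The predictor-corrector iteration described above can be sketched on a scalar model problem (y' = -y), with a single Euler step as the coarse propagator and many Euler steps as the fine one. This is a generic parareal sketch, not the paper's neutron diffusion solver:

```python
def fine(y, t0, t1, m=100):
    """High-accuracy propagator: m explicit Euler steps on y' = -y."""
    dt = (t1 - t0) / m
    for _ in range(m):
        y += dt * (-y)
    return y

def coarse(y, t0, t1):
    """Cheap propagator: a single Euler step over the whole slice."""
    return y + (t1 - t0) * (-y)

def parareal(y0, T, N, K):
    """K parareal iterations over N time slices on [0, T]; returns y(T)."""
    ts = [T * n / N for n in range(N + 1)]
    U = [y0]
    for n in range(N):                                   # initial coarse sweep
        U.append(coarse(U[n], ts[n], ts[n + 1]))
    for _ in range(K):
        F = [fine(U[n], ts[n], ts[n + 1]) for n in range(N)]    # parallelizable
        G = [coarse(U[n], ts[n], ts[n + 1]) for n in range(N)]
        V = [y0]
        for n in range(N):                               # sequential correction
            V.append(coarse(V[n], ts[n], ts[n + 1]) + F[n] - G[n])
        U = V
    return U[-1]
```

The fine solves within each iteration are independent across slices, which is where the parallel speedup comes from; after at most N iterations the scheme reproduces the sequential fine solution.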

  6. Quo Vadis Benchmark Simulation Models? 8th IWA Symposium on Systems Analysis and Integrated Assessment

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J.; Batstone, D.

    2011-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for WWTPs is coming towards an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, hi...

  7. Volume-Targeted Ventilation in the Neonate: Benchmarking Ventilators on an Active Lung Model.

    Science.gov (United States)

    Krieger, Tobias J; Wald, Martin

    2017-03-01

    Mechanically ventilated neonates have been observed to receive substantially different ventilation after switching ventilator models, despite identical ventilator settings. This study aims at establishing the range of output variability among 10 neonatal ventilators under various breathing conditions. Relative benchmarking test of 10 neonatal ventilators on an active neonatal lung model. Neonatal ICU. Ten current neonatal ventilators. Ventilators were set identically to flow-triggered, synchronized, volume-targeted, pressure-controlled, continuous mandatory ventilation and connected to a neonatal lung model. The latter was configured to simulate three patients (500, 1,500, and 3,500 g) in three breathing modes each (passive breathing, constant active breathing, and variable active breathing). Averaged across all weight conditions, the included ventilators delivered between 86% and 110% of the target tidal volume in the passive mode, between 88% and 126% during constant active breathing, and between 86% and 120% under variable active breathing. The largest relative deviation occurred during the 500 g constant active condition, where the highest output machine produced 147% of the tidal volume of the lowest output machine. All machines deviate significantly in volume output and ventilation regulation. These differences depend on ventilation type, respiratory force, and patient behavior, preventing the creation of a simple conversion table between ventilator models. Universal neonatal tidal volume targets for mechanical ventilation cannot be transferred from one ventilator to another without considering necessary adjustments.

  8. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
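    As an illustration of the manufactured-solutions idea recommended above, one can verify a second-order finite-difference Poisson solver by choosing u(x) = sin(πx), deriving the source term analytically, and checking that the observed convergence order approaches 2. This is a generic sketch, not code from the report.

```python
import math

def poisson_error(n):
    """Solve -u'' = f on (0, 1) with u(0) = u(1) = 0 by central differences.

    Manufactured solution u(x) = sin(pi x)  =>  f(x) = pi^2 sin(pi x).
    Returns the max-norm discretization error against the exact solution.
    """
    h = 1.0 / n
    x = [i * h for i in range(1, n)]                         # interior nodes
    d = [h * h * math.pi ** 2 * math.sin(math.pi * xi) for xi in x]
    m = n - 1
    # Thomas algorithm for the tridiagonal system (-1, 2, -1) u = d
    cp, dp = [0.0] * m, [0.0] * m
    cp[0], dp[0] = -0.5, d[0] / 2.0
    for i in range(1, m):
        denom = 2.0 + cp[i - 1]
        cp[i] = -1.0 / denom
        dp[i] = (d[i] + dp[i - 1]) / denom
    u = [0.0] * m
    u[-1] = dp[-1]
    for i in range(m - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return max(abs(u[i] - math.sin(math.pi * x[i])) for i in range(m))

# Observed order of accuracy from two grid levels: ~2 for a correct code.
order = math.log2(poisson_error(32) / poisson_error(64))
```

    A bug in the stencil or boundary handling typically shows up as an observed order well below 2, which is exactly the kind of code-verification evidence the benchmark discussion calls for.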

  9. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  10. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  11. Predictive uncertainty reduction in coupled neutron-kinetics/thermal hydraulics modeling of the BWR-TT2 benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Badea, Aurelian F., E-mail: aurelian.badea@kit.edu [Karlsruhe Institute of Technology, Vincenz-Prießnitz-Str. 3, 76131 Karlsruhe (Germany); Cacuci, Dan G. [Center for Nuclear Science and Energy/Dept. of ME, University of South Carolina, 300 Main Street, Columbia, SC 29208 (United States)

    2017-03-15

    Highlights: • BWR Turbine Trip 2 (BWR-TT2) benchmark. • Substantial (up to 50%) reduction of uncertainties in the predicted transient power. • 6660 uncertain model parameters were calibrated. - Abstract: By applying a comprehensive predictive modeling methodology, this work demonstrates a substantial (up to 50%) reduction of uncertainties in the predicted total transient power in the BWR Turbine Trip 2 (BWR-TT2) benchmark while calibrating the numerical simulation of this benchmark, comprising 6090 macroscopic cross sections and 570 thermal-hydraulics parameters involved in modeling the phase-slip correlation, transient outlet pressure, and total mass flow. The BWR-TT2 benchmark is based on an experiment that was carried out in 1977 in the NPP Peach Bottom 2, involving the closure of the turbine stop valve which caused a pressure wave that propagated with attenuation into the reactor core. The condensation of the steam in the reactor core caused by the pressure increase led to a positive reactivity insertion. The subsequent rise of power was limited by the feedback and the insertion of the control rods. The BWR-TT2 benchmark was modeled with the three-dimensional reactor physics code system DYN3D, by coupling neutron kinetics with two-phase thermal-hydraulics. All 6660 DYN3D model parameters were calibrated by applying a predictive modeling methodology that combines experimental and computational information to produce optimally predicted best-estimate results with reduced predicted uncertainties. Simultaneously, the predictive modeling methodology yields optimally predicted values for the BWR total transient power while reducing significantly the accompanying predicted standard deviations.
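    The principle behind such calibration, combining a computational prediction with measurements so that the posterior uncertainty is provably smaller than the prior one, can be illustrated with a minimal scalar Bayesian update. All numbers here are illustrative; the benchmark itself calibrates 6660 parameters with a far more elaborate methodology.

```python
def calibrate(theta_prior, var_prior, H, y_obs, var_obs):
    """Scalar Bayesian update for a linear model y = H * theta.

    Returns the calibrated (posterior) parameter value and its variance,
    which is always <= the prior variance.
    """
    gain = var_prior * H / (H * H * var_prior + var_obs)
    theta_post = theta_prior + gain * (y_obs - H * theta_prior)
    var_post = (1.0 - gain * H) * var_prior
    return theta_post, var_post

# Prior theta ~ N(1.0, 0.04); observe y = 2.2 with measurement variance 0.01.
theta, var = calibrate(theta_prior=1.0, var_prior=0.04, H=2.0, y_obs=2.2, var_obs=0.01)
```

    In this toy setting the predicted-output variance shrinks from H² · 0.04 = 0.16 to about 0.0094, i.e. the measurement sharply reduces the predicted standard deviation, analogous in spirit (though not in scale or method) to the reported uncertainty reduction in transient power.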

  12. Predictive uncertainty reduction in coupled neutron-kinetics/thermal hydraulics modeling of the BWR-TT2 benchmark

    International Nuclear Information System (INIS)

    Badea, Aurelian F.; Cacuci, Dan G.

    2017-01-01

    Highlights: • BWR Turbine Trip 2 (BWR-TT2) benchmark. • Substantial (up to 50%) reduction of uncertainties in the predicted transient power. • 6660 uncertain model parameters were calibrated. - Abstract: By applying a comprehensive predictive modeling methodology, this work demonstrates a substantial (up to 50%) reduction of uncertainties in the predicted total transient power in the BWR Turbine Trip 2 (BWR-TT2) benchmark while calibrating the numerical simulation of this benchmark, comprising 6090 macroscopic cross sections, and 570 thermal-hydraulics parameters involved in modeling the phase-slip correlation, transient outlet pressure, and total mass flow. The BWR-TT2 benchmark is based on an experiment that was carried out in 1977 in the NPP Peach Bottom 2, involving the closure of the turbine stop valve which caused a pressure wave that propagated with attenuation into the reactor core. The condensation of the steam in the reactor core caused by the pressure increase led to a positive reactivity insertion. The subsequent rise of power was limited by the feedback and the insertion of the control rods. The BWR-TT2 benchmark was modeled with the three-dimensional reactor physics code system DYN3D, by coupling neutron kinetics with two-phase thermal-hydraulics. All 6660 DYN3D model parameters were calibrated by applying a predictive modeling methodology that combines experimental and computational information to produce optimally predicted best-estimate results with reduced predicted uncertainties. Simultaneously, the predictive modeling methodology yields optimally predicted values for the BWR total transient power while reducing significantly the accompanying predicted standard deviations.

  13. PHOTOCHEMISTRY IN TERRESTRIAL EXOPLANET ATMOSPHERES. I. PHOTOCHEMISTRY MODEL AND BENCHMARK CASES

    Energy Technology Data Exchange (ETDEWEB)

    Hu Renyu; Seager, Sara; Bains, William, E-mail: hury@mit.edu [Department of Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)

    2012-12-20

    We present a comprehensive photochemistry model for exploration of the chemical composition of terrestrial exoplanet atmospheres. The photochemistry model is designed from the ground up to have the capacity to treat all types of terrestrial planet atmospheres, ranging from oxidizing through reducing, which makes the code suitable for the wide range of anticipated terrestrial exoplanet compositions. The one-dimensional chemical transport model treats up to 800 chemical reactions, photochemical processes, dry and wet deposition, surface emission, and thermal escape of O-, H-, C-, N-, and S-bearing species, as well as formation and deposition of elemental sulfur and sulfuric acid aerosols. We validate the model by computing the atmospheric composition of current Earth and Mars and find agreement with observations of major trace gases in Earth's and Mars' atmospheres. We simulate several plausible atmospheric scenarios of terrestrial exoplanets and choose three benchmark cases for atmospheres ranging from reducing to oxidizing. The most interesting finding is that atomic hydrogen is always a more abundant reactive radical than the hydroxyl radical in anoxic atmospheres. Whether atomic hydrogen is the most important removal path for a molecule of interest also depends on the relevant reaction rates. We also find that volcanic carbon compounds (i.e., CH{sub 4} and CO{sub 2}) are chemically long-lived and tend to be well mixed in both reducing and oxidizing atmospheres, and their dry deposition velocities to the surface control the atmospheric oxidation states. Furthermore, we revisit whether photochemically produced oxygen can cause false positives for detecting oxygenic photosynthesis, and find that in 1 bar CO{sub 2}-rich atmospheres oxygen and ozone may build up to levels that have conventionally been accepted as signatures of life, if there is no surface emission of reducing gases. The atmospheric scenarios presented in this paper can serve as the
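    The role of dry deposition as a control on atmospheric abundance, noted in the abstract, can be caricatured with a single-box trace-gas budget in which surface emission balances photolysis plus deposition loss. The rate values below are arbitrary placeholders, not taken from the paper.

```python
def box_model(emission, j_photo, v_dep, h_box, t_end, dt=10.0):
    """One-box trace gas: dC/dt = E - (J + v_dep/H) * C, explicit Euler.

    emission : source strength (concentration units per second)
    j_photo  : photolysis frequency (1/s)
    v_dep    : dry deposition velocity (m/s); h_box: box height (m)
    Returns (numerical C at t_end, analytic steady state E / k).
    """
    k = j_photo + v_dep / h_box          # total first-order loss rate (1/s)
    c = 0.0
    t = 0.0
    while t < t_end:
        c += dt * (emission - k * c)
        t += dt
    return c, emission / k

# Spin up for ~7 loss lifetimes so the box reaches its steady state.
k_tot = 1.0e-5 + 0.01 / 1000.0
c_num, c_ss = box_model(emission=1.0e6, j_photo=1.0e-5, v_dep=0.01,
                        h_box=1000.0, t_end=7.0 / k_tot)
```

    With these placeholder rates, doubling the deposition velocity lowers the steady-state abundance, mirroring the paper's point that dry deposition velocities control the atmospheric budget of long-lived species.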

  14. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms; efficiency and comprehensive monotonicity characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...... in order to obtain a unique selection...

  15. An analytical model for the study of a small LFR core dynamics: development and benchmark

    International Nuclear Information System (INIS)

    Bortot, S.; Cammi, A.; Lorenzi, S.; Moisseytsev, A.

    2011-01-01

    An analytical model for the study of the control-oriented dynamics of a small Lead-cooled Fast Reactor (LFR) has been developed, aimed at providing a useful, very flexible and straightforward, yet accurate, tool allowing relatively quick transient design-basis and stability analyses. A simplified lumped-parameter approach has been adopted to couple neutronics and thermal-hydraulics: the point-kinetics approximation has been employed and an average-temperature heat-exchange model has been implemented. The reactor transient responses following postulated accident initiators such as Unprotected Control Rod Withdrawal (UTOP), Loss of Heat Sink (ULOHS) and Loss of Flow (ULOF) have been studied for a MOX and a metal-fuelled core at the Beginning of Cycle (BoC) and End of Cycle (EoC) configurations. A benchmark analysis has then been performed by means of the SAS4A/SASSYS-1 Liquid Metal Reactor Code System, in which a core model based on three representative channels has been built with the purpose of providing verification for the analytical outcomes and indicating how the latter relate to more realistic one-dimensional calculations. As a general result, responses concerning the main core characteristics (namely, power, reactivity, etc.) have turned out to be mutually consistent in terms of both steady-state absolute figures and transient developments, showing discrepancies of the order of only a few percent, thus confirming a very satisfactory agreement. (author)
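    The point-kinetics approximation used in such lumped-parameter models can be sketched with a single delayed-neutron group; the kinetic constants below are generic illustrative values, not those of the LFR core in the paper.

```python
def point_kinetics(rho, t_end, dt=1.0e-5, beta=0.0065, lam_gen=1.0e-4, lam_d=0.08):
    """One-delayed-group point kinetics, integrated with explicit Euler.

    dn/dt = ((rho - beta) / Lambda) * n + lambda_d * C
    dC/dt = (beta / Lambda) * n - lambda_d * C

    Starts from equilibrium at unit power; returns relative power n(t_end).
    """
    n = 1.0
    c = beta * n / (lam_gen * lam_d)     # equilibrium precursor level
    steps = int(round(t_end / dt))
    for _ in range(steps):
        dn = ((rho - beta) / lam_gen) * n + lam_d * c
        dc = (beta / lam_gen) * n - lam_d * c
        n += dt * dn
        c += dt * dc
    return n

# A +0.1 $ step insertion (rho = 0.1 * beta): prompt jump, then a slow rise.
n1 = point_kinetics(rho=0.1 * 0.0065, t_end=1.0)
```

    For a sub-dollar step insertion the power first jumps promptly to roughly beta / (beta - rho) and then rises on the slow delayed-neutron time scale, which is the behaviour a UTOP-type transient probes.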

  16. Antibiotic reimbursement in a model delinked from sales: a benchmark-based worldwide approach.

    Science.gov (United States)

    Rex, John H; Outterson, Kevin

    2016-04-01

    Despite the life-saving ability of antibiotics and their importance as a key enabler of all of modern health care, their effectiveness is now threatened by a rising tide of resistance. Unfortunately, the antibiotic pipeline does not match health needs because of challenges in discovery and development, as well as the poor economics of antibiotics. Discovery and development are being addressed by a range of public-private partnerships; however, correcting the poor economics of antibiotics will need an overhaul of the present business model on a worldwide scale. Discussions are now converging on delinking reward from antibiotic sales through prizes, milestone payments, or insurance-like models in which innovation is rewarded with a fixed series of payments of a predictable size. Rewarding all drugs with the same payments could create perverse incentives to produce drugs that provide the least possible innovation. Thus, we propose a payment model using a graded array of benchmarked rewards designed to encourage the development of antibiotics with the greatest societal value, together with appropriate worldwide access to antibiotics to maximise human health. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Benchmarking Model Variants in Development of a Hardware-in-the-Loop Simulation System

    Science.gov (United States)

    Aretskin-Hariton, Eliot D.; Zinnecker, Alicia M.; Kratz, Jonathan L.; Culley, Dennis E.; Thomas, George L.

    2016-01-01

    Distributed engine control architecture presents a significant increase in complexity over traditional implementations when viewed from the perspective of system simulation and hardware design and test. Even if the overall function of the control scheme remains the same, the hardware implementation can have a significant effect on the overall system performance due to differences in the creation and flow of data between control elements. A Hardware-in-the-Loop (HIL) simulation system is under development at NASA Glenn Research Center that enables the exploration of these hardware dependent issues. The system is based on, but not limited to, the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k). This paper describes the step-by-step conversion from the self-contained baseline model to the hardware-in-the-loop model, and the validation of each step. As the control model hardware fidelity was improved during HIL system development, benchmarking simulations were performed to verify that engine system performance characteristics remained the same. The results demonstrate the goal of the effort: the new HIL configurations have similar functionality and performance compared to the baseline C-MAPSS40k system.

  18. Benchmarking sensitivity of biophysical processes to leaf area changes in land surface models

    Science.gov (United States)

    Forzieri, Giovanni; Duveiller, Gregory; Georgievski, Goran; Li, Wei; Robertson, Eddy; Kautz, Markus; Lawrence, Peter; Ciais, Philippe; Pongratz, Julia; Sitch, Stephen; Wiltshire, Andy; Arneth, Almut; Cescatti, Alessandro

    2017-04-01

    Land surface models (LSM) are widely applied as supporting tools for policy-relevant assessment of climate change and its impact on terrestrial ecosystems, yet knowledge of their performance skills in representing the sensitivity of biophysical processes to changes in vegetation density is still limited. This is particularly relevant in light of the substantial impacts on regional climate associated with the changes in leaf area index (LAI) following the observed global greening. Benchmarking LSMs on the sensitivity of the simulated processes to vegetation density is essential to reduce their uncertainty and improve the representation of these effects. Here we present a novel benchmark system to assess model capacity in reproducing land surface-atmosphere energy exchanges modulated by vegetation density. Through a collaborative effort of different modeling groups, a consistent set of land surface energy fluxes and LAI dynamics has been generated from multiple LSMs, including JSBACH, JULES, ORCHIDEE, CLM4.5 and LPJ-GUESS. Relationships of interannual variations of modeled surface fluxes to LAI changes have been analyzed at global scale across different climatological gradients and compared with satellite-based products. A set of scoring metrics has been used to assess the overall model performances and a detailed analysis in the climate space has been provided to diagnose possible model errors associated to background conditions. Results have enabled us to identify model-specific strengths and deficiencies. An overall best performing model does not emerge from the analyses. However, the comparison with other models that work better under certain metrics and conditions indicates that improvements are expected to be potentially achievable. A general amplification of the biophysical processes mediated by vegetation is found across the different land surface schemes. Grasslands are characterized by an underestimated year-to-year variability of LAI in cold climates

  19. Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) Benchmark Phase II: Identification of Influential Parameters

    International Nuclear Information System (INIS)

    Kovtonyuk, A.; Petruzzi, A.; D'Auria, F.

    2015-01-01

    The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) benchmark is to progress on the issue of quantifying the uncertainty of the physical models in system thermal-hydraulic codes by considering a concrete case: the physical models involved in the prediction of core reflooding. The PREMIUM benchmark consists of five phases. This report presents the results of Phase II, dedicated to the identification of the uncertain code parameters associated with physical models used in the simulation of reflooding conditions. This identification is made on the basis of Test 216 of the FEBA/SEFLEX programme according to the following steps: - identification of influential phenomena; - identification of the associated physical models and parameters, depending on the code used; - quantification of the variation range of identified input parameters through a series of sensitivity calculations. A procedure for the identification of potentially influential code input parameters has been set up in the Specifications of Phase II of the PREMIUM benchmark, together with a proposed set of quantitative criteria for identifying influential input parameters and their respective variation ranges. Thirteen participating organisations, using 8 different codes (7 system thermal-hydraulic codes and 1 sub-channel module of a system thermal-hydraulic code), submitted Phase II results. The base case calculations show a spread in predicted cladding temperatures and quench front propagation that has been characterized. All the participants except one predict too fast a quench front progression. Besides, the cladding temperature time trends obtained by almost all the participants show oscillatory behaviour which may have numerical origins. The criteria adopted for identifying influential input parameters differ between the participants: some organisations used the set of criteria proposed in the Specifications 'as is', some modified the quantitative thresholds
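    The identification step described above, perturbing each candidate input parameter over its range and retaining those whose effect on the output exceeds a threshold, can be sketched generically. The model, parameter names, and threshold below are hypothetical, not taken from the benchmark specifications.

```python
def screen_influential(model, nominal, ranges, threshold):
    """One-at-a-time screening of input parameters.

    For each parameter, run the model at the low and high end of its
    variation range and record the relative output swing; parameters
    whose swing exceeds `threshold` are flagged as influential.
    """
    y_ref = model(nominal)
    influential = {}
    for name, (low, high) in ranges.items():
        p = dict(nominal)
        p[name] = low
        y_low = model(p)
        p[name] = high
        y_high = model(p)
        swing = abs(y_high - y_low) / abs(y_ref)
        if swing > threshold:
            influential[name] = swing
    return influential

# Toy model whose output is dominated by a heat-transfer-like parameter 'htc'.
def toy(p):
    return 10.0 * p["htc"] + 0.1 * p["drag"]

found = screen_influential(toy, {"htc": 1.0, "drag": 1.0},
                           {"htc": (0.9, 1.1), "drag": (0.9, 1.1)}, threshold=0.05)
```

    Real benchmark procedures refine this with multiple responses and quantitative criteria per phenomenon, but the ranking-and-threshold logic is the same.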

  20. 3-D core modelling of RIA transient: the TMI-1 benchmark

    International Nuclear Information System (INIS)

    Ferraresi, P.; Studer, E.; Avvakumov, A.; Malofeev, V.; Diamond, D.; Bromley, B.

    2001-01-01

    The increase of fuel burn-up in core management now poses the problem of evaluating the energy deposited during Reactivity Insertion Accidents (RIA). In order to evaluate this energy precisely, 3-D approaches are used more and more frequently in core calculations. This 'best-estimate' approach requires the evaluation of code uncertainties. To contribute to this evaluation, a code benchmark has been launched. A 3-D modelling of the TMI-1 central Ejected Rod Accident with zero and intermediate initial powers was carried out with three different methods of calculation for an inserted reactivity fixed at 1.2 $ and 1.26 $, respectively. The studies implemented with the neutronics codes PARCS (BNL) and CRONOS (IPSN/CEA) describe a homogeneous assembly, whereas the BARS (KI) code allows a pin-by-pin representation (CRONOS has both possibilities). All the calculations are consistent, the variation in figures resulting mainly from the method used to build cross sections and reflector constants. The maximum rise in enthalpy for the intermediate initial power (33% P_N) calculation is, for this academic calculation, about 30 cal/g. This work will be completed in a next step by an evaluation of the uncertainty induced by the uncertainty on model parameters, and a sensitivity study of the key parameters for a peripheral Rod Ejection Accident. (authors)

  1. Electron-helium S-wave model benchmark calculations. I. Single ionization and single excitation

    Science.gov (United States)

    Bartlett, Philip L.; Stelbovics, Andris T.

    2010-02-01

    A full four-body implementation of the propagating exterior complex scaling (PECS) method [J. Phys. B 37, L69 (2004)] is developed and applied to the electron-impact of helium in an S-wave model. Time-independent solutions to the Schrödinger equation are found numerically in coordinate space over a wide range of energies and used to evaluate total and differential cross sections for a complete set of three- and four-body processes with benchmark precision. With this model we demonstrate the suitability of the PECS method for the complete solution of the full electron-helium system. Here we detail the theoretical and computational development of the four-body PECS method and present results for three-body channels: single excitation and single ionization. Four-body cross sections are presented in the sequel to this article [Phys. Rev. A 81, 022716 (2010)]. The calculations reveal structure in the total and energy-differential single-ionization cross sections for excited-state targets that is due to interference from autoionization channels and is evident over a wide range of incident electron energies.

  2. 3-D core modelling of RIA transient: the TMI-1 benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Ferraresi, P. [CEA Cadarache, Institut de Protection et de Surete Nucleaire, Dept. de Recherches en Securite, 13 - Saint Paul Lez Durance (France); Studer, E. [CEA Saclay, Dept. Modelisation de Systemes et Structures, 91 - Gif sur Yvette (France); Avvakumov, A.; Malofeev, V. [Nuclear Safety Institute of Russian Research Center, Kurchatov Institute, Moscow (Russian Federation); Diamond, D.; Bromley, B. [Nuclear Energy and Infrastructure Systems Div., Brookhaven National Lab., BNL, Upton, NY (United States)

    2001-07-01

    The increase of fuel burn-up in core management now poses the problem of evaluating the energy deposited during Reactivity Insertion Accidents (RIA). In order to evaluate this energy precisely, 3-D approaches are used more and more frequently in core calculations. This 'best-estimate' approach requires the evaluation of code uncertainties. To contribute to this evaluation, a code benchmark has been launched. A 3-D modelling of the TMI-1 central Ejected Rod Accident with zero and intermediate initial powers was carried out with three different methods of calculation for an inserted reactivity fixed at 1.2 $ and 1.26 $, respectively. The studies implemented with the neutronics codes PARCS (BNL) and CRONOS (IPSN/CEA) describe a homogeneous assembly, whereas the BARS (KI) code allows a pin-by-pin representation (CRONOS has both possibilities). All the calculations are consistent, the variation in figures resulting mainly from the method used to build cross sections and reflector constants. The maximum rise in enthalpy for the intermediate initial power (33 % P{sub N}) calculation is, for this academic calculation, about 30 cal/g. This work will be completed in a next step by an evaluation of the uncertainty induced by the uncertainty on model parameters, and a sensitivity study of the key parameters for a peripheral Rod Ejection Accident. (authors)

  3. BSM-MBR: a benchmark simulation model to compare control and operational strategies for membrane bioreactors.

    Science.gov (United States)

    Maere, Thomas; Verrecht, Bart; Moerenhout, Stefanie; Judd, Simon; Nopens, Ingmar

    2011-03-01

    A benchmark simulation model for membrane bioreactors (BSM-MBR) was developed to evaluate operational and control strategies in terms of effluent quality and operational costs. The configuration of the existing BSM1 for conventional wastewater treatment plants was adapted using reactor volumes, pumped sludge flows and membrane filtration for the water-sludge separation. The BSM1 performance criteria were extended for an MBR, taking into account additional pumping requirements for permeate production and aeration requirements for membrane fouling prevention. To incorporate the effects of elevated sludge concentrations on aeration efficiency and costs, a dedicated aeration model was adopted. Steady-state and dynamic simulations revealed BSM-MBR, as expected, to outperform BSM1 for effluent quality, mainly due to complete retention of solids and improved ammonium removal from extensive aeration combined with higher biomass levels. However, this came at the expense of significantly higher operational costs. A comparison with three large-scale MBRs showed BSM-MBR energy costs to be realistic. The membrane aeration costs for the open loop simulations were rather high, attributed to the non-optimization of BSM-MBR. As proof of concept, two closed loop simulations were run to demonstrate the usefulness of BSM-MBR for identifying control strategies that lower operational costs without compromising effluent quality. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Benchmark of the FLUKA model of crystal channeling against the UA9-H8 experiment

    Energy Technology Data Exchange (ETDEWEB)

    Schoofs, P.; Cerutti, F.; Ferrari, A. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Smirnov, G. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Joint Institute for Nuclear Research (JINR), Dubna (Russian Federation)

    2015-07-15

    Channeling in bent crystals is increasingly considered as an option for the collimation of high-energy particle beams. The installation of crystals in the LHC has taken place during this past year and aims at demonstrating the feasibility of crystal collimation and a possible cleaning efficiency improvement. The performance of CERN collimation insertions is evaluated with the Monte Carlo code FLUKA, which is capable of simulating energy deposition in collimators as well as beam loss monitor signals. A new model of crystal channeling was developed specifically so that similar simulations can be conducted in the case of crystal-assisted collimation. In this paper, the most recent results of this model are brought forward in the framework of a joint activity inside the UA9 collaboration to benchmark the different simulation tools available. The performance of crystal STF 45, produced at INFN Ferrara, was measured at the H8 beamline at CERN in 2010 and serves as the basis for the comparison. Distributions of deflected particles are shown to be in very good agreement with experimental data. Calculated dechanneling lengths and crystal performance in the transition region between the amorphous regime and volume reflection are also close to the measured ones.

  5. Uncertainty and sensitivity analysis of control strategies using the benchmark simulation model No1 (BSM1).

    Science.gov (United States)

    Flores-Alsina, Xavier; Rodriguez-Roda, Ignasi; Sin, Gürkan; Gernaey, Krist V

    2009-01-01

    The objective of this paper is to perform an uncertainty and sensitivity analysis of the predictions of the Benchmark Simulation Model (BSM) No. 1, when comparing four activated sludge control strategies. The Monte Carlo simulation technique is used to evaluate the uncertainty in the BSM1 predictions, considering the ASM1 bio-kinetic parameters and influent fractions as input uncertainties, while the Effluent Quality Index (EQI) and the Operating Cost Index (OCI) are focused on as model outputs. The resulting Monte Carlo simulations are presented using descriptive statistics indicating the degree of uncertainty in the predicted EQI and OCI. Next, the Standard Regression Coefficients (SRC) method is used for sensitivity analysis to identify which input parameters influence the uncertainty in the EQI predictions the most. The results show that control strategies including an ammonium (S(NH)) controller reduce uncertainty in both overall pollution removal and effluent total Kjeldahl nitrogen. Also, control strategies with an external carbon source reduce the effluent nitrate (S(NO)) uncertainty, increasing both their economic cost and variability as a trade-off. Finally, the maximum specific autotrophic growth rate (μ(A)) causes most of the variance in the effluent for all the evaluated control strategies. The influence of denitrification-related parameters, e.g. η(g) (anoxic growth rate correction factor) and η(h) (anoxic hydrolysis rate correction factor), becomes less important when a S(NO) controller manipulating an external carbon source addition is implemented.
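
    The Monte Carlo plus Standardized Regression Coefficient (SRC) workflow described above can be sketched with a toy effluent model standing in for a BSM1/ASM1 run. The parameter names mirror the abstract, but the model, the sampling ranges and the coefficients are invented for illustration:

```python
import random

# Sketch of the Monte Carlo + SRC sensitivity workflow: sample uncertain inputs,
# propagate them through the model, then standardize regression coefficients.
# toy_effluent_model is an invented stand-in for BSM1, not the real model.

random.seed(1)

def toy_effluent_model(mu_A, eta_g):
    # Effluent nitrogen made deliberately more sensitive to mu_A than to eta_g.
    return 10.0 - 6.0 * mu_A + 1.5 * eta_g

# 1) Monte Carlo sampling of uncertain inputs (uniform ranges are illustrative).
n = 500
X = [(random.uniform(0.4, 0.6), random.uniform(0.6, 0.9)) for _ in range(n)]
Y = [toy_effluent_model(mu, eta) for mu, eta in X]

def mean(v): return sum(v) / len(v)
def std(v):
    m = mean(v)
    return (sum((x - m) ** 2 for x in v) / (len(v) - 1)) ** 0.5

# 2) SRC_i = b_i * std(x_i) / std(y) for an additive linear model. Here the
# slopes b_i are known from the toy model; in practice they are estimated
# by regressing the sampled outputs on the sampled inputs.
def src(xs, ys, coeff):
    return coeff * std(xs) / std(ys)

src_mu = src([x[0] for x in X], Y, -6.0)
src_eta = src([x[1] for x in X], Y, 1.5)

# mu_A should dominate the output variance, as found in the paper.
dominant = abs(src_mu) > abs(src_eta)
```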

  6. Accident tolerant clad material modeling by MELCOR: Benchmark for SURRY Short Term Station Black Out

    International Nuclear Information System (INIS)

    Wang, Jun; Mccabe, Mckinleigh; Wu, Lei; Dong, Xiaomeng; Wang, Xianmao; Haskin, Troy Christopher; Corradini, Michael L.

    2017-01-01

    Highlights: • Thermo-physical and oxidation kinetics properties calculation and analysis of FeCrAl. • Properties modelling of FeCrAl in MELCOR. • Benchmark calculation of Surry nuclear power plant. - Abstract: Accident tolerant fuel and cladding materials are being investigated to provide a greater resistance to fuel degradation, oxidation and melting if long-term cooling is lost in a Light Water Reactor (LWR) following an accident such as a Station Blackout (SBO) or Loss of Coolant Accident (LOCA). Researchers at UW-Madison are analyzing an SBO sequence and examining the effect of a loss of auxiliary feedwater (AFW) with the MELCOR systems code. Our research work considers accident tolerant cladding materials (e.g., FeCrAl alloy) and their effect on the accident behavior. We first gathered the physical properties of this alternative cladding material via literature review and compared it to the usual zirconium alloys used in LWRs. We then developed a model for the Surry reactor for a Short-term SBO sequence and examined the effect of replacing FeCrAl for Zircaloy cladding. The analysis uses MELCOR, Version 1.8.6 YR, which is developed by Idaho National Laboratory in collaboration with MELCOR developers at Sandia National Laboratories. This version allows the user to alter the cladding material considered, and our study examines the behavior of the FeCrAl alloy as a substitute for Zircaloy. Our benchmark comparisons with the Sandia National Laboratory’s analysis of Surry using MELCOR 1.8.6 and the more recent MELCOR 2.1 indicate good overall agreement through the early phases of the accident progression. When FeCrAl is substituted for Zircaloy to examine its performance, we confirmed that FeCrAl slows the accident progression and reduces the amount of hydrogen generated. Our analyses also show that this special version of MELCOR can be used to evaluate other potential ATF cladding materials, e.g., SiC, as well as innovative coatings on zirconium cladding.

  7. Accident tolerant clad material modeling by MELCOR: Benchmark for SURRY Short Term Station Black Out

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Jun, E-mail: jwang564@wisc.edu [College of Engineering, The University of Wisconsin-Madison, Madison 53706 (United States); Mccabe, Mckinleigh [College of Engineering, The University of Wisconsin-Madison, Madison 53706 (United States); Wu, Lei [Institute of Nuclear and New Energy Technology, Tsinghua University, Beijing 100084 (China); Dong, Xiaomeng [College of Engineering, The University of Wisconsin-Madison, Madison 53706 (United States); Wang, Xianmao [Institute of Nuclear and New Energy Technology, Tsinghua University, Beijing 100084 (China); Haskin, Troy Christopher [College of Engineering, The University of Wisconsin-Madison, Madison 53706 (United States); Corradini, Michael L., E-mail: corradini@engr.wisc.edu [College of Engineering, The University of Wisconsin-Madison, Madison 53706 (United States)

    2017-03-15

    Highlights: • Thermo-physical and oxidation kinetics properties calculation and analysis of FeCrAl. • Properties modelling of FeCrAl in MELCOR. • Benchmark calculation of Surry nuclear power plant. - Abstract: Accident tolerant fuel and cladding materials are being investigated to provide a greater resistance to fuel degradation, oxidation and melting if long-term cooling is lost in a Light Water Reactor (LWR) following an accident such as a Station Blackout (SBO) or Loss of Coolant Accident (LOCA). Researchers at UW-Madison are analyzing an SBO sequence and examining the effect of a loss of auxiliary feedwater (AFW) with the MELCOR systems code. Our research work considers accident tolerant cladding materials (e.g., FeCrAl alloy) and their effect on the accident behavior. We first gathered the physical properties of this alternative cladding material via literature review and compared it to the usual zirconium alloys used in LWRs. We then developed a model for the Surry reactor for a Short-term SBO sequence and examined the effect of replacing FeCrAl for Zircaloy cladding. The analysis uses MELCOR, Version 1.8.6 YR, which is developed by Idaho National Laboratory in collaboration with MELCOR developers at Sandia National Laboratories. This version allows the user to alter the cladding material considered, and our study examines the behavior of the FeCrAl alloy as a substitute for Zircaloy. Our benchmark comparisons with the Sandia National Laboratory’s analysis of Surry using MELCOR 1.8.6 and the more recent MELCOR 2.1 indicate good overall agreement through the early phases of the accident progression. When FeCrAl is substituted for Zircaloy to examine its performance, we confirmed that FeCrAl slows the accident progression and reduces the amount of hydrogen generated. Our analyses also show that this special version of MELCOR can be used to evaluate other potential ATF cladding materials, e.g., SiC, as well as innovative coatings on zirconium cladding.

  8. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    . The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  9. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    -tailed hawk, osprey) (scientific names for both the mammalian and avian species are presented in Appendix B). [In this document, NOAEL refers to both dose (mg contaminant per kg animal body weight per day) and concentration (mg contaminant per kg of food or L of drinking water)]. The 20 wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at U.S. Department of Energy (DOE) waste sites. The NOAEL-based benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species; LOAEL-based benchmarks represent threshold levels at which adverse effects are likely to become evident. These benchmarks consider contaminant exposure through oral ingestion of contaminated media only. Exposure through inhalation and/or direct dermal contact is not considered in this report.
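
    Wildlife NOAEL benchmarks of this kind are commonly derived from laboratory test-species data by body-weight scaling. A hedged sketch of that calculation, with an assumed allometric exponent and illustrative body weights rather than values from the report:

```python
# Hedged sketch of interspecies body-weight scaling often used to derive
# wildlife NOAEL benchmarks from laboratory test-species data:
#   NOAEL_wildlife = NOAEL_test * (BW_test / BW_wildlife) ** exponent
# The exponent (1/4) and the body weights below are illustrative assumptions;
# consult the report itself for its species, chemicals and derivation rules.

def scale_noael(noael_test_mg_kg_d, bw_test_kg, bw_wildlife_kg, exponent=0.25):
    """Scale a test-species NOAEL (mg/kg body weight/day) to a wildlife species."""
    return noael_test_mg_kg_d * (bw_test_kg / bw_wildlife_kg) ** exponent

# Example: scale a 10 mg/kg/d rat NOAEL (0.35 kg rat) to a 1.0 kg wildlife species.
benchmark = scale_noael(10.0, 0.35, 1.0)
```

Larger-bodied receptors thus get lower (more protective) per-kilogram benchmarks than the test species.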

  10. Ultraviolet radiation therapy and UVR dose models

    Energy Technology Data Exchange (ETDEWEB)

    Grimes, David Robert, E-mail: davidrobert.grimes@oncology.ox.ac.uk [School of Physical Sciences, Dublin City University, Glasnevin, Dublin 9, Ireland and Cancer Research UK/MRC Oxford Institute for Radiation Oncology, Gray Laboratory, University of Oxford, Old Road Campus Research Building, Oxford OX3 7DQ (United Kingdom)

    2015-01-15

    Ultraviolet radiation (UVR) has been an effective treatment for a number of chronic skin disorders, and its ability to alleviate these conditions has been well documented. Although nonionizing, exposure to ultraviolet (UV) radiation is still damaging to deoxyribonucleic acid integrity, and has a number of unpleasant side effects ranging from erythema (sunburn) to carcinogenesis. As the conditions treated with this therapy tend to be chronic, exposures are repeated and can be high, increasing the lifetime probability of an adverse event or mutagenic effect. Despite the potential detrimental effects, quantitative ultraviolet dosimetry for phototherapy is an underdeveloped area and better dosimetry would allow clinicians to maximize biological effect whilst minimizing the repercussions of overexposure. This review gives a history and insight into the current state of UVR phototherapy, including an overview of biological effects of UVR, a discussion of UVR production, illness treated by this modality, cabin design and the clinical implementation of phototherapy, as well as clinical dose estimation techniques. Several dose models for ultraviolet phototherapy are also examined, and the need for an accurate computational dose estimation method in ultraviolet phototherapy is discussed.

  11. Ultraviolet radiation therapy and UVR dose models

    International Nuclear Information System (INIS)

    Grimes, David Robert

    2015-01-01

    Ultraviolet radiation (UVR) has been an effective treatment for a number of chronic skin disorders, and its ability to alleviate these conditions has been well documented. Although nonionizing, exposure to ultraviolet (UV) radiation is still damaging to deoxyribonucleic acid integrity, and has a number of unpleasant side effects ranging from erythema (sunburn) to carcinogenesis. As the conditions treated with this therapy tend to be chronic, exposures are repeated and can be high, increasing the lifetime probability of an adverse event or mutagenic effect. Despite the potential detrimental effects, quantitative ultraviolet dosimetry for phototherapy is an underdeveloped area and better dosimetry would allow clinicians to maximize biological effect whilst minimizing the repercussions of overexposure. This review gives a history and insight into the current state of UVR phototherapy, including an overview of biological effects of UVR, a discussion of UVR production, illness treated by this modality, cabin design and the clinical implementation of phototherapy, as well as clinical dose estimation techniques. Several dose models for ultraviolet phototherapy are also examined, and the need for an accurate computational dose estimation method in ultraviolet phototherapy is discussed.

  12. Proton Exchange Membrane Fuel Cell Engineering Model Powerplant. Test Report: Benchmark Tests in Three Spatial Orientations

    Science.gov (United States)

    Loyselle, Patricia; Prokopius, Kevin

    2011-01-01

    Proton exchange membrane (PEM) fuel cell technology is the leading candidate to replace the aging alkaline fuel cell technology, currently used on the Shuttle, for future space missions. This test effort marks the final phase of a 5-yr development program that began under the Second Generation Reusable Launch Vehicle (RLV) Program, transitioned into the Next Generation Launch Technologies (NGLT) Program, and continued under Constellation Systems in the Exploration Technology Development Program. Initially, the engineering model (EM) powerplant was evaluated with respect to its performance as compared to acceptance tests carried out at the manufacturer. This was to determine the sensitivity of the powerplant performance to changes in test environment. In addition, a series of tests were performed with the powerplant in the original standard orientation. This report details the continuing EM benchmark test results in three spatial orientations as well as extended duration testing in the mission profile test. The results from these tests verify the applicability of PEM fuel cells for future NASA missions. The specifics of these different tests are described in the following sections.

  13. WLUP benchmarks

    International Nuclear Information System (INIS)

    Leszczynski, Francisco

    2002-01-01

    The IAEA-WIMS Library Update Project (WLUP) is in its final stage; the final library will be released in 2002. It is the result of research and development by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, naming conventions, contents and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for analysis and plotting of results is described, and some examples are given. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)

  14. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans.

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-07

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.
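
    The spectrum-derivation step can be illustrated as a small unfolding problem: the measured PDD is modeled as a weighted sum of mono-energetic depth-dose curves, and the weights are fitted. The paper used a Levenberg-Marquardt algorithm; the sketch below substitutes a simple projected-gradient least-squares loop and entirely invented basis data:

```python
# Sketch of spectrum unfolding from a PDD: fit nonnegative weights w so that
# sum_j w_j * basis_j matches the measured depth-dose curve. The basis curves
# and "measured" data are invented; the optimizer is a didactic stand-in for
# the Levenberg-Marquardt algorithm used in the paper.

# Mono-energetic depth-dose basis (rows: energy bins, cols: depths) -- invented.
basis = [
    [1.00, 0.60, 0.35, 0.20],   # low-energy component, attenuates quickly
    [1.00, 0.85, 0.70, 0.58],   # high-energy component, attenuates slowly
]
true_w = [0.3, 0.7]
measured = [sum(w * b[i] for w, b in zip(true_w, basis)) for i in range(4)]

w = [0.5, 0.5]
lr = 0.05
for _ in range(2000):
    resid = [sum(wj * basis[j][i] for j, wj in enumerate(w)) - measured[i]
             for i in range(4)]
    grad = [sum(2 * resid[i] * basis[j][i] for i in range(4)) for j in range(2)]
    w = [max(0.0, wj - lr * g) for wj, g in zip(w, grad)]  # keep weights >= 0

err = max(abs(a - b) for a, b in zip(w, true_w))
```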

  15. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-01

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.

  16. Benchmarking the CEMDATA07 database to model chemical degradation of concrete using GEMS and PHREEQC

    International Nuclear Information System (INIS)

    Jacques, Diederik; Wang, Lian; Martens, Evelien; Mallants, Dirk

    2012-01-01

    Thermodynamic equilibrium modelling of degradation of cement and concrete systems by chemically detrimental reactions such as carbonation, sulphate attack and decalcification or leaching processes requires a consistent thermodynamic database with the relevant aqueous species, cement minerals and hydrates. The recent and consistent database CEMDATA07 is used as the basis in the studies of the Belgian near-surface disposal concept being developed by ONDRAF/NIRAS. The database is consistent with the thermodynamic data in the Nagra/PSI Thermodynamic Database. When used with the GEMS thermodynamic code, thermodynamic modelling can be performed at temperatures different from the standard temperature of 25 °C. GEMS calculates thermodynamic equilibrium by minimizing the Gibbs free energy of the system. Alternatively, thermodynamic equilibrium can also be calculated by solving a nonlinear system of mass balance equations and mass action equations, as is done in PHREEQC. A PHREEQC database for the cement systems at temperatures different from 25 °C is derived from the thermodynamic parameters and models from GEMS. A number of benchmark simulations using PHREEQC and GEM-Selektor were done to verify the implementation of the CEMDATA07 database in PHREEQC databases. Simulations address a series of reactions that are relevant to the assessment of long-term cement and concrete durability. Verification calculations were performed for different systems with increasing complexity: CaO-SiO2-CO2, CaO-Al2O3-SO3-CO2, and CaO-SiO2-Al2O3-Fe2O3-MgO-SO3-CO2. Three types of chemical degradation processes were simulated: (1) carbonation by adding CO2 to the bulk composition, (2) sulphate attack by adding SO3 to the bulk composition, and (3) decalcification/leaching by putting the cement solid phase sequentially in contact with pure water. An excellent agreement between the simulations with GEMS and PHREEQC was obtained.
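
    The equivalence being verified above, Gibbs-energy minimization (GEMS-style) versus mass-action equations (PHREEQC-style), can be demonstrated on a toy single-reaction system. All numbers below are invented; this is not cement chemistry:

```python
import math

# Toy illustration: Gibbs-energy minimization and the mass-action formulation
# give the same equilibrium. A single isomerization A <-> B in an ideal mixture.
# Standard potentials and the temperature are illustrative, not CEMDATA07 values.

R, T = 8.314, 298.15
mu0_A, mu0_B = 0.0, -2000.0        # standard chemical potentials, J/mol
total = 1.0                         # mol of A + B (mass balance constraint)

def gibbs(nA):
    nB = total - nA
    g = 0.0
    for n, mu0 in ((nA, mu0_A), (nB, mu0_B)):
        if n > 0:
            g += n * (mu0 + R * T * math.log(n / total))  # ideal mixing term
    return g

# GEMS-style: minimize G over the mass-balance simplex (simple ternary search,
# valid because G is convex in nA for an ideal mixture).
lo, hi = 1e-9, total - 1e-9
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if gibbs(m1) < gibbs(m2):
        hi = m2
    else:
        lo = m1
nA_min = (lo + hi) / 2

# PHREEQC-style: solve the mass-action equation K = nB/nA directly.
K = math.exp(-(mu0_B - mu0_A) / (R * T))
nA_ma = total / (1.0 + K)

agree = abs(nA_min - nA_ma) < 1e-6
```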

  17. DECOVALEX I - Bench-Mark Test 3: Thermo-hydro-mechanical modelling

    International Nuclear Information System (INIS)

    Israelsson, J.

    1995-12-01

    The bench-mark test concerns the excavation of a tunnel, located 500 m below the ground surface, and the establishment of mechanical equilibrium and steady-state fluid flow. Following this, a thermal heating due to the nuclear waste, stored in a borehole below the tunnel, was simulated. The results are reported at (1) 30 days after tunnel excavation, (2) steady state, (3) one year after thermal loading, and (4) at the time of maximum temperature. The problem specification included the excavation and waste geometry, material properties for intact rock and joints, location of more than 6500 joints observed in the 50 by 50 m area, and calculated hydraulic conductivities. However, due to the large number of joints and the lack of dominating orientations, it was decided to treat the problem as a continuum using the computer code FLAC. The problem was modeled using a vertical symmetry plane through the tunnel and the borehole. Flow equilibrium was obtained approx. 40 days after the opening of the tunnel. Since the hydraulic conductivity was set to be stress dependent, a noticeable difference in the horizontal and vertical conductivity and flow was observed. After 40 days, an oedometer-type consolidation of the model was observed. Approx. 4 years after the initiation of the heat source, a maximum temperature of 171 °C was obtained. The stress-dependent hydraulic conductivity and the temperature-dependent dynamic viscosity caused minor changes to the flow pattern. The specified mechanical boundary conditions imply that the tunnel is part of a system of parallel tunnels. However, the fixed temperature at the top boundary maintains the temperature below the temperature anticipated for an equivalent repository. The combination of mechanical and hydraulic boundary conditions causes the model to behave like an oedometer test in which the consolidation rate goes asymptotically to zero. 17 refs, 55 figs, 22 tabs
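
    The stress-dependent hydraulic conductivity noted above is often represented by an exponential decay with effective normal stress, which is what makes horizontal and vertical conductivities diverge under an anisotropic stress state. A sketch with invented parameters, not BMT3 values:

```python
import math

# Illustration of a stress-dependent hydraulic conductivity law of the form
#   k(sigma') = k0 * exp(-alpha * sigma')
# Flow in a given direction is controlled by the normal stress acting across
# the conducting joints, so anisotropic stress yields anisotropic conductivity.
# k0, alpha and the stresses below are illustrative assumptions.

def conductivity(k0, sigma_eff_mpa, alpha=0.2):
    """k0: zero-stress conductivity (m/s); alpha: stress sensitivity (1/MPa)."""
    return k0 * math.exp(-alpha * sigma_eff_mpa)

k0 = 1e-9
k_h = conductivity(k0, 10.0)   # horizontal flow, closed by ~10 MPa normal stress
k_v = conductivity(k0, 15.0)   # vertical flow, closed by a higher normal stress
anisotropic = k_h > k_v
```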

  18. Benchmarking Brown Dwarf Models With a Non-irradiated Transiting Brown Dwarf in Praesepe

    Science.gov (United States)

    Beatty, Thomas; Marley, Mark; Line, Michael; Gizis, John

    2018-05-01

    We wish to use 9.4 hours of Spitzer time to observe two eclipses, one each at 3.6um and 4.5um, of the transiting brown dwarf AD 3116b. AD 3116b is a 54.2+/-4.3 MJ, 1.08+/-0.07 RJ object on a 1.98 day orbit about a 3200K M-dwarf. Uniquely, AD 3116 and its host star are both members of Praesepe, a 690+/-60 Myr old open cluster. AD 3116b is thus one of two transiting brown dwarfs for which we have a robust isochronal age that is not dependent upon brown dwarf evolutionary models, and the youngest brown dwarf for which this is the case. Importantly, the flux AD 3116b receives from its host star is only 0.7% of its predicted internal luminosity (Saumon & Marley 2008). This makes AD 3116b the first known transiting brown dwarf that simultaneously has a well-defined age and receives a negligible amount of external irradiation, and thus a unique laboratory to test radius and luminosity predictions from brown dwarf evolutionary models. Our goal is to measure the emission from the brown dwarf. AD 3116b should have large, 25 mmag, eclipse depths in the Spitzer bandpasses, and we expect to measure them with a precision of +/-0.50 mmag at 3.6um and +/-0.54 mmag at 4.5um. This will allow us to measure AD 3116b's internal effective temperature to +/-40 K. We will also use the upcoming Gaia DR2 parallaxes to measure AD 3116b's absolute IRAC magnitudes and color, and hence determine the cloud properties of the atmosphere. As the only known brown dwarf with an independently measured mass, radius, and age, Spitzer measurements of AD 3116b's luminosity and clouds will provide a critical benchmark for brown dwarf observation and theory.

  19. Optimized dose distribution of a high dose rate vaginal cylinder

    International Nuclear Information System (INIS)

    Li Zuofeng; Liu, Chihray; Palta, Jatinder R.

    1998-01-01

    Purpose: To present a comparison of optimized dose distributions for a set of high-dose-rate (HDR) vaginal cylinders calculated by a commercial treatment-planning system with benchmark calculations using Monte-Carlo-calculated dosimetry data. Methods and Materials: Optimized dose distributions using both an isotropic and an anisotropic dose calculation model were obtained for a set of HDR vaginal cylinders. Mathematical optimization techniques available in the computer treatment-planning system were used to calculate dwell times and positions. These dose distributions were compared with benchmark calculations with the TG-43 formalism and using Monte-Carlo-calculated data. The same dwell times and positions were used for a quantitative comparison of dose calculated with the three dose models. Results: The isotropic dose calculation model can result in discrepancies as high as 50%. The anisotropic dose calculation model compared better with benchmark calculations. The differences were more significant at the apex of the vaginal cylinder, which is typically used as the prescription point. Conclusion: Dose calculation models available in a computer treatment-planning system must be evaluated carefully to ensure their correct application. It should also be noted that when the optimized dose distribution at a distance from the cylinder surface is calculated using an accurate dose calculation model, the vaginal mucosa dose becomes significantly higher, and therefore should be carefully monitored.
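
    For context, benchmark calculations with the TG-43 formalism reduce, in the point-source approximation, to a product of air-kerma strength, dose-rate constant, inverse-square geometry factor, radial dose function and anisotropy factor. The sketch below uses invented source data, not parameters of any real HDR source:

```python
# Hedged sketch of a TG-43 point-source dose-rate calculation:
#   dose_rate(r) = Sk * Lambda * (r0/r)^2 * g(r) * phi_an(r)
# The dose-rate constant, radial dose function table and anisotropy factor
# below are invented placeholders, not data for any clinical source.

Sk = 40000.0        # air-kerma strength, U (cGy cm^2 / h) -- example value
Lambda = 1.11       # dose-rate constant, cGy/(h*U) -- example value
r0 = 1.0            # TG-43 reference distance, cm

g_table = [(0.5, 1.04), (1.0, 1.00), (2.0, 0.95), (3.0, 0.89)]  # invented g(r)

def interp(table, r):
    # Linear interpolation within the tabulated radial range.
    for (r1, v1), (r2, v2) in zip(table, table[1:]):
        if r1 <= r <= r2:
            return v1 + (v2 - v1) * (r - r1) / (r2 - r1)
    raise ValueError("r outside table")

def dose_rate(r, phi_an=0.98):
    """Dose rate in cGy/h at distance r (cm) from a point source."""
    return Sk * Lambda * (r0 / r) ** 2 * interp(g_table, r) * phi_an

d1 = dose_rate(1.0)   # at the reference distance
d2 = dose_rate(2.0)
```

The isotropic model discussed above amounts to setting the anisotropy factor to 1 everywhere, which is what produces the large discrepancies near the cylinder apex.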

  20. An Evaluation of Fault Tolerant Wind Turbine Control Schemes applied to a Benchmark Model

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob

    2014-01-01

    Reliability and availability of modern wind turbines increases in importance as the ratio in the world's power supply increases. This is important in order to increase the energy generated per unit and their lowering cost of energy and as well to ensure availability of generated power, which helps...... on this benchmark and is especially good accommodating sensors faults. The two other evaluated solutions do also well accommodating sensors faults, but have some issues which should be worked on, before they can be considered as a full solution to the benchmark problem....

  1. A novel dose uncertainty model and its application for dose verification

    International Nuclear Information System (INIS)

    Jin Hosang; Chung Heetaek; Liu Chihray; Palta, Jatinder; Suh, Tae-Suk; Kim, Siyong

    2005-01-01

    Based on a statistical approach, a novel dose uncertainty model was introduced considering both nonspatial and spatial dose deviations. Non-space-oriented uncertainty is mainly caused by dosimetric uncertainties, and space-oriented dose uncertainty is the uncertainty caused by all spatial displacements. Assuming these two parts are independent, the dose difference between measurement and calculation is a linear combination of nonspatial and spatial dose uncertainties. Two assumptions were made: (1) the relative standard deviation of nonspatial dose uncertainty is inversely proportional to the dose standard deviation σ, and (2) the spatial dose uncertainty is proportional to the gradient of dose. The total dose uncertainty is a quadratic sum of the nonspatial and spatial uncertainties. The uncertainty model provides the tolerance dose bound for comparison between calculation and measurement. In the statistical uncertainty model based on a Gaussian distribution, a confidence level of 3σ theoretically confines 99.74% of measurements within the bound. By setting the confidence limit, the tolerance bound for dose comparison can be made analogous to that of existing dose comparison methods (e.g., a composite distribution analysis, a γ test, a χ evaluation, and a normalized agreement test method). However, the model considers the inherent dose uncertainty characteristics of the test points by taking into account the space-specific history of dose accumulation, while the previous methods apply a single tolerance criterion to the points, although dose uncertainty at each point is significantly different from others. Three types of one-dimensional test dose distributions (a single large field, a composite flat field made by two identical beams, and three-beam intensity-modulated fields) were made to verify the robustness of the model. For each test distribution, the dose bound predicted by the uncertainty model was compared with simulated measurements. The simulated
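
    A minimal numeric sketch of the idea: at each point, the tolerance bound is a confidence multiple of the quadratic sum of a nonspatial term and a spatial term proportional to the local dose gradient. The coefficients below are assumptions for illustration, and the nonspatial term is simplified to be proportional to local dose rather than following the paper's exact form:

```python
# Sketch of a gradient-aware tolerance bound, in the spirit of the model above:
#   bound(x) = n_sigma * sqrt(nonspatial(x)^2 + spatial(x)^2)
# with nonspatial ~ local dose and spatial ~ local dose gradient. All
# coefficients (sigma_d, sigma_r_mm, n_sigma) are illustrative assumptions.

def tolerance_bound(dose, grid_mm, sigma_d=0.02, sigma_r_mm=1.0, n_sigma=3.0):
    """dose: 1-D dose profile; returns per-point tolerance bounds (same units)."""
    bounds = []
    for i in range(len(dose)):
        lo, hi = max(i - 1, 0), min(i + 1, len(dose) - 1)
        grad = abs(dose[hi] - dose[lo]) / ((hi - lo) * grid_mm)  # finite difference
        nonspatial = sigma_d * dose[i]          # dosimetric component
        spatial = sigma_r_mm * grad             # displacement component
        bounds.append(n_sigma * (nonspatial ** 2 + spatial ** 2) ** 0.5)
    return bounds

profile = [100.0, 100.0, 90.0, 50.0, 10.0, 5.0]   # steep penumbra in the middle
b = tolerance_bound(profile, grid_mm=2.0)
```

Unlike a single global criterion, the bound automatically loosens in the penumbra (large gradient) and tightens in flat regions.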

  2. Benchmark Comparison of Dual- and Quad-Core Processor Linux Clusters with Two Global Climate Modeling Workloads

    Science.gov (United States)

    McGalliard, James

    2008-01-01

    This viewgraph presentation details the science and systems environments that NASA High End computing program serves. Included is a discussion of the workload that is involved in the processing for the Global Climate Modeling. The Goddard Earth Observing System Model, Version 5 (GEOS-5) is a system of models integrated using the Earth System Modeling Framework (ESMF). The GEOS-5 system was used for the Benchmark tests, and the results of the tests are shown and discussed. Tests were also run for the Cubed Sphere system, results for these test are also shown.

  3. Neutronics analysis of the International Thermonuclear Experimental Reactor (ITER) MCNP ''Benchmark CAD Model'' with the ATTILA discrete ordinates code

    International Nuclear Information System (INIS)

    Youssef, M.Z.; Feder, R.; Davis, I.

    2007-01-01

    The ITER IT has adopted the newly developed FEM, 3-D, and CAD-based Discrete Ordinates code ATTILA for the neutronics studies, contingent on its success in predicting key neutronics parameters and nuclear field according to the stringent QA requirements set forth by the Management and Quality Program (MQP). ATTILA has the advantage of providing a full flux and response functions mapping everywhere in one run where components subjected to excessive radiation level and strong streaming paths can be identified. The ITER neutronics community had agreed to use a standard CAD model of ITER (40 degree sector, denoted ''Benchmark CAD Model'') to compare results for several responses selected for calculation benchmarking purposes to test the efficiency and accuracy of the CAD-MCNP approach developed by each party. Since ATTILA seems to lend itself as a powerful design tool with minimal turnaround time, it was decided to benchmark this model with ATTILA as well and compare the results to those obtained with the CAD MCNP calculations. In this paper we report such comparison for five responses, namely: (1) Neutron wall load on the surface of the 18 shield blanket modules (SBM), (2) Neutron flux and nuclear heating rate in the divertor cassette, (3) nuclear heating rate in the winding pack of the inner leg of the TF coil, (4) Radial flux profile across dummy port plug and shield plug placed in the equatorial port, and (5) Flux at seven point locations situated behind the equatorial port plug. (orig.)

  4. Benchmarking of thermalhydraulic loop models for lead-alloy-cooled advanced nuclear energy systems. Phase I: Isothermal forced convection case

    International Nuclear Information System (INIS)

    2012-06-01

    Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Scientific Issues of the Fuel Cycle (WPFC) has been established to co-ordinate scientific activities regarding various existing and advanced nuclear fuel cycles, including advanced reactor systems, associated chemistry and flowsheets, development and performance of fuel and materials, and accelerators and spallation targets. The WPFC has different expert groups to cover a wide range of scientific issues in the field of the nuclear fuel cycle. The Task Force on Lead-Alloy-Cooled Advanced Nuclear Energy Systems (LACANES) was created in 2006 to study the thermal-hydraulic characteristics of heavy liquid metal coolant loops. The objectives of the task force are to (1) validate thermal-hydraulic loop models for application to LACANES design analysis in participating organisations, by benchmarking with a set of well-characterised lead-alloy coolant loop test data, (2) establish guidelines for quantifying thermal-hydraulic modelling parameters related to friction and heat transfer by lead-alloy coolant and (3) identify specific issues, either in modelling and/or in loop testing, which need to be addressed via possible future work. Nine participants from seven different institutes took part in the first phase of the benchmark. This report provides details of the benchmark specifications, method and code characteristics, and results of the preliminary study (pressure loss coefficients) and Phase I. A comparison and analysis of the results will be performed together with Phase II.
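
The isothermal forced-convection phase of such a loop benchmark comes down to predicting pressure losses around the loop. The following Python sketch shows a Darcy-Weisbach loop-segment pressure loss; all coefficient values are illustrative and not taken from the LACANES specification:

```python
def pressure_loss_pa(friction_factor, length_m, hyd_diam_m, k_sum,
                     density_kg_m3, velocity_m_s):
    """Darcy-Weisbach pressure loss over a loop segment:
    dp = (f*L/D + sum of form-loss coefficients K) * rho * v^2 / 2."""
    return (friction_factor * length_m / hyd_diam_m + k_sum) \
        * 0.5 * density_kg_m3 * velocity_m_s ** 2

# Illustrative lead-bismuth segment: f = 0.02, L = 10 m, D = 0.1 m,
# sum of form-loss coefficients K = 2, rho ~ 10000 kg/m3, v = 1 m/s
dp = pressure_loss_pa(0.02, 10.0, 0.1, 2.0, 10000.0, 1.0)
```

Benchmark participants differ mainly in how they correlate the friction factor and form-loss coefficients for heavy liquid metals, which is exactly what such a comparison is meant to expose.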

  5. A deterministic partial differential equation model for dose calculation in electron radiotherapy.

    Science.gov (United States)

    Duclous, R; Dubroca, B; Frank, M

    2010-07-07

    High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. If more detailed effects such as coupled electron-photon transport, bremsstrahlung

  7. Comparative Modeling and Benchmarking Data Sets for Human Histone Deacetylases and Sirtuin Families

    Science.gov (United States)

    Xia, Jie; Tilahun, Ermias Lemma; Kebede, Eyob Hailu; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2015-01-01

    Histone Deacetylases (HDACs) are an important class of drug targets for the treatment of cancers, neurodegenerative diseases and other types of diseases. Virtual screening (VS) has become a fairly effective approach for drug discovery of novel and highly selective Histone Deacetylases Inhibitors (HDACIs). To facilitate the process, we constructed the Maximal Unbiased Benchmarking Data Sets for HDACs (MUBD-HDACs) using our recently published methods that were originally developed for building unbiased benchmarking sets for ligand-based virtual screening (LBVS). The MUBD-HDACs covers all 4 Classes including Class III (Sirtuins family) and 14 HDACs isoforms, composed of 631 inhibitors and 24,609 unbiased decoys. Its ligand sets have been validated extensively as chemically diverse, while the decoy sets were shown to be property-matching with ligands and maximal unbiased in terms of “artificial enrichment” and “analogue bias”. We also conducted comparative studies with DUD-E and DEKOIS 2.0 sets against HDAC2 and HDAC8 targets, and demonstrate that our MUBD-HDACs is unique in that it can be applied unbiasedly to both LBVS and SBVS approaches. In addition, we defined a novel metric, i.e. NLBScore, to detect the “2D bias” and “LBVS favorable” effect within the benchmarking sets. In summary, MUBD-HDACs is the only comprehensive and maximal-unbiased benchmark data sets for HDACs (including Sirtuins) that is available so far. MUBD-HDACs is freely available at http://www.xswlab.org/. PMID:25633490

  8. Benchmarking in European Higher Education: A Step beyond Current Quality Models

    Science.gov (United States)

    Burquel, Nadine; van Vught, Frans

    2010-01-01

    This paper presents the findings of a two-year EU-funded project (DG Education and Culture) "Benchmarking in European Higher Education", carried out from 2006 to 2008 by a consortium led by the European Centre for Strategic Management of Universities (ESMU), with the Centre for Higher Education Development, UNESCO-CEPES, and the…

  9. Introduction of new road pavement response modelling software by means of benchmarking

    CSIR Research Space (South Africa)

    Maina, JW

    2008-07-01

    Full Text Available. Newly developed Finite Element Method for Pavement Analysis (FEMPA) software, which is currently only available for use in a research environment, is also benchmarked against these other packages. The results show that both the GAMES and FEMPA packages...

  10. A benchmark simulation model to describe plant-wide phosphorus transformations in WWTPs

    DEFF Research Database (Denmark)

    Flores-Alsina, Xavier; Ikumi, D.; Kazadi-Mbamba, C.

    It is more than 10 years since the publication of the BSM1 technical report (Copp, 2002). The main objective of BSM1 was to create a platform for benchmarking C and N removal strategies in activated sludge systems. The initial platform evolved into BSM1_LT and BSM2, which allowed for the evaluati...

  11. Comparative modeling and benchmarking data sets for human histone deacetylases and sirtuin families.

    Science.gov (United States)

    Xia, Jie; Tilahun, Ermias Lemma; Kebede, Eyob Hailu; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2015-02-23

    Histone deacetylases (HDACs) are an important class of drug targets for the treatment of cancers, neurodegenerative diseases, and other types of diseases. Virtual screening (VS) has become a fairly effective approach for drug discovery of novel and highly selective histone deacetylase inhibitors (HDACIs). To facilitate the process, we constructed maximal unbiased benchmarking data sets for HDACs (MUBD-HDACs) using our recently published methods that were originally developed for building unbiased benchmarking sets for ligand-based virtual screening (LBVS). The MUBD-HDACs cover all four classes including Class III (Sirtuins family) and 14 HDAC isoforms, composed of 631 inhibitors and 24609 unbiased decoys. Its ligand sets have been validated extensively as chemically diverse, while the decoy sets were shown to be property-matching with ligands and maximal unbiased in terms of "artificial enrichment" and "analogue bias". We also conducted comparative studies with DUD-E and DEKOIS 2.0 sets against HDAC2 and HDAC8 targets and demonstrate that our MUBD-HDACs are unique in that they can be applied unbiasedly to both LBVS and SBVS approaches. In addition, we defined a novel metric, i.e. NLBScore, to detect the "2D bias" and "LBVS favorable" effect within the benchmarking sets. In summary, MUBD-HDACs are the only comprehensive and maximal-unbiased benchmark data sets for HDACs (including Sirtuins) that are available so far. MUBD-HDACs are freely available at http://www.xswlab.org/ .
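
The "artificial enrichment" that MUBD-HDACs is designed to avoid is commonly diagnosed with an enrichment factor over the early part of a score-ranked ligand/decoy list. A small Python sketch of that metric (the ranking itself would come from whatever VS method is under test; the example numbers are illustrative):

```python
def enrichment_factor(ranked_is_active, fraction=0.01):
    """Enrichment factor at a given fraction of a score-ranked list:
    the hit rate among the top x% of compounds divided by the hit rate
    expected from random ranking.  ranked_is_active is a list of
    1 (active) / 0 (decoy) flags, best-scored first."""
    n = len(ranked_is_active)
    top = max(1, int(n * fraction))
    hits_top = sum(ranked_is_active[:top])
    hits_all = sum(ranked_is_active)
    return (hits_top / top) / (hits_all / n)

# Perfect ranking of 5 actives among 100 compounds, read at the 5% cutoff:
ef = enrichment_factor([1] * 5 + [0] * 95, fraction=0.05)
```

An unbiased benchmark set should keep such enrichment close to 1 when the "ranking" is a trivial physicochemical-property sort rather than a real VS score.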

  12. Benchmarking Using Basic DBMS Operations

    Science.gov (United States)

    Crolotte, Alain; Ghazal, Ahmad

    The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used these benchmarks to show the superiority and competitive edge of their products. Over time, however, TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally, we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
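
The kind of per-operation measurement XMarq formalizes can be sketched in a few lines. Here is a hedged Python example using SQLite as a stand-in engine; XMarq itself targets full TPC-H-scale systems, and the table and queries below are illustrative:

```python
import sqlite3
import time

def time_query(conn, sql, repeats=3):
    """Best-of-N wall-clock time for one query, to damp timing noise."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        conn.execute(sql).fetchall()
        best = min(best, time.perf_counter() - t0)
    return best

# Tiny stand-in table (XMarq uses the TPC-H data model instead):
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lineitem (orderkey INTEGER, qty REAL)")
conn.executemany("INSERT INTO lineitem VALUES (?, ?)",
                 [(i % 100, float(i)) for i in range(10000)])
conn.execute("CREATE INDEX idx_orderkey ON lineitem(orderkey)")

scan_s = time_query(conn, "SELECT COUNT(*) FROM lineitem WHERE qty > 5000")
agg_s = time_query(conn, "SELECT orderkey, SUM(qty) FROM lineitem GROUP BY orderkey")
```

Comparing such per-operation timings (scan, aggregation, join, index access) across two systems is the paper's proposed alternative to a single composite TPC-H number.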

  13. Isobio software: biological dose distribution and biological dose volume histogram from physical dose conversion using linear-quadratic-linear model.

    Science.gov (United States)

    Jaikuna, Tanwiwat; Khadsiri, Phatchareewan; Chawapun, Nisa; Saekho, Suwit; Tharavichitkul, Ekkasit

    2017-02-01

    To develop an in-house software program able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in the treatment plan was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified from the difference between the dose volume histogram from CERR and that from the treatment planning system (TPS). An equivalent dose in 2 Gy fractions (EQD2) was calculated using the biologically effective dose (BED) based on the LQL model. The software calculation and a manual calculation were compared for EQD2 verification with paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Differences in physical dose were found between CERR and the TPS in Oncentra, with 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum determined by D2cc, and less than 1% in Pinnacle. The differences in EQD2 between the software calculation and the manual calculation were not statistically significant (0.00%, with p-values of 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in the HR-CTV, bladder, and rectum, respectively). The Isobio software is a feasible tool for generating the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
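
For reference, the EQD2 conversion underlying such a tool can be sketched with the plain LQ model; the Isobio software uses the linear-quadratic-linear (LQL) variant, which departs from this above a transition dose per fraction, and all numbers below are illustrative only:

```python
def eqd2(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy):
    """Equivalent dose in 2 Gy fractions from the standard LQ model:
    EQD2 = D * (d + alpha/beta) / (2 + alpha/beta).
    The LQL model used by Isobio instead transitions to a linear
    response at high dose per fraction."""
    return total_dose_gy * (dose_per_fraction_gy + alpha_beta_gy) \
        / (2.0 + alpha_beta_gy)

# Illustrative HDR brachytherapy course: 4 fractions of 7 Gy, alpha/beta = 10 Gy
# EQD2 = 28 * (7 + 10) / (2 + 10) ~ 39.7 Gy
course_eqd2 = eqd2(28.0, 7.0, 10.0)
```

Applying such a conversion voxel by voxel to the extracted physical dose grid is what produces the biological dose distribution described in the abstract.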

  14. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations

    Energy Technology Data Exchange (ETDEWEB)

    Davidson, Scott E., E-mail: sedavids@utmb.edu [Radiation Oncology, The University of Texas Medical Branch, Galveston, Texas 77555 (United States); Cui, Jing [Radiation Oncology, University of Southern California, Los Angeles, California 90033 (United States); Kry, Stephen; Ibbott, Geoffrey S.; Followill, David S. [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Deasy, Joseph O. [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Vicic, Milos [Department of Applied Physics, University of Belgrade, Belgrade 11000 (Serbia); White, R. Allen [Bioinformatics and Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States)

    2016-08-15

    points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLD). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied, with a pass rate of at least 85% in the high dose, high gradient, and low dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. Conclusions: A unique analytical source model coupled to the dose planning method Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurements. While this tool can be applied in general use for a particular linac model, it was specifically developed to provide a singular methodology to independently assess treatment plan dose distributions from clinical institutions participating in National Cancer Institute trials.

  15. Towards a plant-wide Benchmark Simulation Model with simultaneous nitrogen and phosphorus removal wastewater treatment processes

    DEFF Research Database (Denmark)

    Flores-Alsina, Xavier; Ikumi, David; Batstone, Damien

    It is more than 10 years since the publication of the Benchmark Simulation Model No 1 (BSM1) manual (Copp, 2002). The main objective of BSM1 was to create a platform for benchmarking carbon and nitrogen removal strategies in activated sludge systems. The initial platform evolved into BSM1_LT and BSM.... This extension aims at facilitating simultaneous carbon, nitrogen and phosphorus (P) removal process development and performance evaluation at a plant-wide level. The main motivation of the work is that numerous wastewater treatment plants (WWTPs) pursue biological phosphorus removal as an alternative...... to chemical P removal based on precipitation using metal salts, such as Fe or Al. This paper identifies and discusses important issues that need to be addressed to upgrade the BSM2 to BSM2-P, for example: 1) new influent wastewater characteristics; 2) new (bio) chemical processes to account for; 3

  16. VALIDATION OF FULL CORE GEOMETRY MODEL OF THE NODAL3 CODE IN THE PWR TRANSIENT BENCHMARK PROBLEMS

    Directory of Open Access Journals (Sweden)

    Tagor Malem Sembiring

    2015-10-01

    Full Text Available The coupled neutronics and thermal-hydraulics (T/H) code NODAL3 has been validated against several PWR static benchmarks and the NEACRP PWR transient benchmark cases. However, the NODAL3 code had not yet been validated for the transient benchmark cases of a control rod (CR) assembly ejection at the core periphery using a full-core geometry model, the C1 and C2 cases. This work validates the accuracy of the NODAL3 code for a single CR ejection or an unsymmetrical group of CR ejections. The NODAL3 calculations were carried out with the adiabatic method (AM) and the improved quasistatic method (IQS). All calculated transient parameters were compared with reference results from the PANTHER code. The maximum relative difference, 16%, occurs in the calculated time of maximum power with the IQS method, while the relative difference of the AM method is 4% for the C2 case. The calculation results show no systematic differences, indicating that the neutronics and T/H modules adopted in the code are correct. Overall, the NODAL3 results are in very good agreement with the reference results. Keywords: nodal method, coupled neutronic and thermal-hydraulic code, PWR, transient case, control rod ejection.

  17. PREMIUM - Benchmark on the quantification of the uncertainty of the physical models in the system thermal-hydraulic codes

    International Nuclear Information System (INIS)

    Skorek, Tomasz; Crecy, Agnes de

    2013-01-01

    PREMIUM (Post-BEMUSE Reflood Models Input Uncertainty Methods) is an activity launched with the aim of pushing forward the methods for quantifying physical model uncertainties in thermal-hydraulic codes. It is endorsed by OECD/NEA/CSNI/WGAMA. The PREMIUM benchmark is addressed to all who apply uncertainty evaluation methods based on input uncertainty quantification and propagation. The benchmark is based on a selected case of uncertainty analysis applied to the simulation of quench front propagation in an experimental test facility. Application to an experiment enables evaluation and confirmation of the quantified probability distribution functions on the basis of experimental data. The scope of the benchmark comprises a review of the existing methods, selection of potentially important uncertain input parameters, preliminary quantification of the ranges and distributions of the identified parameters, evaluation of the probability density functions using experimental results of tests performed on the FEBA test facility, and confirmation/validation of the performed quantification on the basis of a blind calculation of the Reflood 2-D PERICLES experiment. (authors)
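
The propagation step of such an exercise is conceptually simple Monte Carlo sampling; below is a minimal Python sketch with a toy model standing in for the thermal-hydraulic code. The parameter names, ranges, and the model itself are illustrative, not from the PREMIUM specification:

```python
import random

def propagate(model, param_ranges, n=2000, seed=1):
    """Sample each uncertain input uniformly from its range, run the
    model for each sample, and return the 5th and 95th percentiles of
    the output distribution."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n):
        params = {name: rng.uniform(lo, hi)
                  for name, (lo, hi) in param_ranges.items()}
        outputs.append(model(**params))
    outputs.sort()
    return outputs[int(0.05 * n)], outputs[int(0.95 * n)]

# Toy stand-in for a quench-front-velocity calculation (illustrative):
toy_model = lambda htc_mult, subcooling_k: 0.01 * htc_mult * (1.0 + 0.02 * subcooling_k)
band_lo, band_hi = propagate(toy_model,
                             {"htc_mult": (0.8, 1.2),
                              "subcooling_k": (5.0, 15.0)})
```

PREMIUM's harder problem is the step before this one: quantifying defensible ranges and distributions for the inputs from experiments such as FEBA, so that the propagated band actually covers the data.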

  18. Dark Matter Benchmark Models for Early LHC Run-2 Searches: Report of the ATLAS/CMS Dark Matter Forum

    CERN Document Server

    Abercrombie, Daniel; Akilli, Ece; Alcaraz Maestre, Juan; Allen, Brandon; Alvarez Gonzalez, Barbara; Andrea, Jeremy; Arbey, Alexandre; Azuelos, Georges; Azzi, Patrizia; Backovic, Mihailo; Bai, Yang; Banerjee, Swagato; Beacham, James; Belyaev, Alexander; Boveia, Antonio; Brennan, Amelia Jean; Buchmueller, Oliver; Buckley, Matthew R.; Busoni, Giorgio; Buttignol, Michael; Cacciapaglia, Giacomo; Caputo, Regina; Carpenter, Linda; Filipe Castro, Nuno; Gomez Ceballos, Guillelmo; Cheng, Yangyang; Chou, John Paul; Cortes Gonzalez, Arely; Cowden, Chris; D'Eramo, Francesco; De Cosa, Annapaola; De Gruttola, Michele; De Roeck, Albert; De Simone, Andrea; Deandrea, Aldo; Demiragli, Zeynep; DiFranzo, Anthony; Doglioni, Caterina; du Pree, Tristan; Erbacher, Robin; Erdmann, Johannes; Fischer, Cora; Flaecher, Henning; Fox, Patrick J.; Fuks, Benjamin; Genest, Marie-Helene; Gomber, Bhawna; Goudelis, Andreas; Gramling, Johanna; Gunion, John; Hahn, Kristian; Haisch, Ulrich; Harnik, Roni; Harris, Philip C.; Hoepfner, Kerstin; Hoh, Siew Yan; Hsu, Dylan George; Hsu, Shih-Chieh; Iiyama, Yutaro; Ippolito, Valerio; Jacques, Thomas; Ju, Xiangyang; Kahlhoefer, Felix; Kalogeropoulos, Alexis; Kaplan, Laser Seymour; Kashif, Lashkar; Khoze, Valentin V.; Khurana, Raman; Kotov, Khristian; Kovalskyi, Dmytro; Kulkarni, Suchita; Kunori, Shuichi; Kutzner, Viktor; Lee, Hyun Min; Lee, Sung-Won; Liew, Seng Pei; Lin, Tongyan; Lowette, Steven; Madar, Romain; Malik, Sarah; Maltoni, Fabio; Martinez Perez, Mario; Mattelaer, Olivier; Mawatari, Kentarou; McCabe, Christopher; Megy, Theo; Morgante, Enrico; Mrenna, Stephen; Narayanan, Siddharth M.; Nelson, Andy; Novaes, Sergio F.; Padeken, Klaas Ole; Pani, Priscilla; Papucci, Michele; Paulini, Manfred; Paus, Christoph; Pazzini, Jacopo; Penning, Bjorn; Peskin, Michael E.; Pinna, Deborah; Procura, Massimiliano; Qazi, Shamona F.; Racco, Davide; Re, Emanuele; Riotto, Antonio; Rizzo, Thomas G.; Roehrig, Rainer; Salek, David; Sanchez Pineda, Arturo; Sarkar, Subir; 
Schmidt, Alexander; Schramm, Steven Randolph; Shepherd, William; Singh, Gurpreet; Soffi, Livia; Srimanobhas, Norraphat; Sung, Kevin; Tait, Tim M.P.; Theveneaux-Pelzer, Timothee; Thomas, Marc; Tosi, Mia; Trocino, Daniele; Undleeb, Sonaina; Vichi, Alessandro; Wang, Fuquan; Wang, Lian-Tao; Wang, Ren-Jie; Whallon, Nikola; Worm, Steven; Wu, Mengqing; Wu, Sau Lan; Yang, Hongtao; Yang, Yong; Yu, Shin-Shan; Zaldivar, Bryan; Zanetti, Marco; Zhang, Zhiqing; Zucchetta, Alberto

    2015-01-01

    This document is the final report of the ATLAS-CMS Dark Matter Forum, a forum organized by the ATLAS and CMS collaborations with the participation of experts on theories of Dark Matter, to select a minimal basis set of dark matter simplified models that should support the design of the early LHC Run-2 searches. A prioritized, compact set of benchmark models is proposed, accompanied by studies of the parameter space of these models and a repository of generator implementations. This report also addresses how to apply the Effective Field Theory formalism for collider searches and presents the results of such interpretations.

  19. Should the ICD-9 Trauma Mortality Prediction Model become the new paradigm for benchmarking trauma outcomes?

    Science.gov (United States)

    Haider, Adil H; Villegas, Cassandra V; Saleem, Taimur; Efron, David T; Stevens, Kent A; Oyetunji, Tolulope A; Cornwell, Edward E; Bowman, Stephen; Haack, Sara; Baker, Susan P; Schneider, Eric B

    2012-06-01

    Optimum quantification of injury severity remains an imprecise science with a need for improvement. The accuracy of the criterion standard Injury Severity Score (ISS) worsens as a patient's injury severity increases, especially among patients with penetrating trauma. The objective of this study was to comprehensively compare the mortality prediction ability of three anatomic injury severity indices: the ISS, the New ISS (NISS), and the DRG International Classification of Diseases-9th Rev.-Trauma Mortality Prediction Model (TMPM-ICD-9), a recently developed contemporary injury assessment model. Retrospective analysis of patients in the National Trauma Data Bank from 2007 to 2008. The TMPM-ICD-9 values were computed and compared with the ISS and NISS for each patient using in-hospital mortality after trauma as the outcome measure. Discrimination and calibration were compared using the area under the receiver operator characteristic curve. Subgroup analysis was performed to compare each score across varying ranges of injury severity and across different types of injury. A total of 533,898 patients were identified with a crude mortality rate of 4.7%. The ISS and NISS performed equally in the groups with minor (ISS, 1-8) and moderate (ISS, 9-15) injuries, regardless of the injury type. However, in the populations with severe (ISS, 16-24) and very severe (ISS, ≥ 25) injuries for all injury types, the NISS predicted mortality better than the ISS did. The TMPM-ICD-9 outperformed both the NISS and ISS almost consistently. The NISS and TMPM-ICD-9 are both superior predictors of mortality as compared with the ISS. The immediate adoption of NISS for evaluating trauma outcomes using trauma registry data is recommended. The TMPM-ICD-9 may be an even better measure of human injury, and its use in administrative or nonregistry data is suggested. Further research on its attributes is recommended because it has the potential to become the basis for benchmarking trauma outcomes
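
The scores being compared have simple definitions: the ISS sums the squares of the highest AIS severity in the three most severely injured of the six body regions, while the NISS sums the squares of the three highest AIS severities regardless of region. A Python sketch (AIS coding conventions are simplified; the AIS-6 rule shown is the common convention of capping the score at 75):

```python
def iss(ais_by_region):
    """Injury Severity Score: sum of squares of the single worst AIS
    value in each of the three most severely injured body regions."""
    worst = sorted((max(scores) for scores in ais_by_region.values() if scores),
                   reverse=True)
    if any(a == 6 for a in worst):
        return 75  # any unsurvivable (AIS 6) injury caps the score
    return sum(a * a for a in worst[:3])

def niss(all_ais):
    """New ISS: sum of squares of the three worst AIS values, even if
    several fall in the same body region."""
    worst = sorted(all_ais, reverse=True)[:3]
    if any(a == 6 for a in worst):
        return 75
    return sum(a * a for a in worst)

# Two serious head injuries: ISS counts only one of them, NISS counts both.
regions = {"head": [4, 3], "chest": [3], "extremity": [2]}
```

The abstract's finding that NISS outperforms ISS at high severity is consistent with this structural difference: ISS discards additional severe injuries within an already-counted region.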

  20. Benchmarking nuclear models of FLUKA and GEANT4 for carbon ion therapy

    CERN Document Server

    Bohlen, TT; Quesada, J M; Bohlen, T T; Cerutti, F; Gudowska, I; Ferrari, A; Mairani, A

    2010-01-01

    As carbon ions, at therapeutic energies, penetrate tissue, they undergo inelastic nuclear reactions and give rise to significant yields of secondary fragment fluences. Therefore, an accurate prediction of these fluences resulting from the primary carbon interactions is necessary in the patient's body in order to precisely simulate the spatial dose distribution and the resulting biological effect. In this paper, the performance of nuclear fragmentation models of the Monte Carlo transport codes, FLUKA and GEANT4, in tissue-like media and for an energy regime relevant for therapeutic carbon ions is investigated. The ability of these Monte Carlo codes to reproduce experimental data of charge-changing cross sections and integral and differential yields of secondary charged fragments is evaluated. For the fragment yields, the main focus is on the consideration of experimental approximations and uncertainties such as the energy measurement by time-of-flight. For GEANT4, the hadronic models G4BinaryLightIonReaction a...

  1. Modeling of radiation doses from chronic aqueous releases

    International Nuclear Information System (INIS)

    Watts, J.R.

    1976-01-01

    A general model and corresponding computer code were developed to calculate personnel dose estimates from chronic releases via aqueous pathways. The potential internal dose pathways are consumption of water, fish, crustaceans, and mollusks. Dose prediction for consumption of fish, crustaceans, or mollusks is based on the calculated radionuclide content of the water and the applicable bioaccumulation factor. Seventy-year dose commitments are calculated for whole body, bone, the lower large intestine of the gastrointestinal tract, and six internal organs. In addition, the code identifies the largest dose contributor and the dose percentages for each organ-radionuclide combination in the source term. The 1974 radionuclide release data from the Savannah River Plant were used to evaluate the dose models. The dose predicted by the model was compared to the dose calculated from radiometric analysis of water and fish samples. The whole body dose from water consumption was 0.45 mrem calculated from monitoring data and 0.61 mrem predicted by the model; tritium contributed 99 percent of this dose. The whole body dose from fish consumption was 0.20 mrem calculated from monitoring data and 0.14 mrem from the model. Cesium-134,137 was the principal contributor to the 70-year whole body dose from fish consumption
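
The food-chain part of such a model reduces to a chain of multiplications per nuclide and pathway. A hedged Python sketch follows; the function name and all numerical values are illustrative, not taken from the Savannah River report:

```python
def pathway_dose_mrem(water_conc_pci_per_l, bioaccumulation_l_per_kg,
                      annual_intake_kg, dose_factor_mrem_per_pci):
    """Dose commitment for one nuclide via one aquatic food pathway:
    water concentration (pCi/L) times a bioaccumulation factor (L/kg)
    gives the food concentration (pCi/kg); multiplying by annual
    intake (kg) and an ingestion dose factor (mrem per pCi ingested)
    gives the dose (mrem)."""
    food_conc_pci_per_kg = water_conc_pci_per_l * bioaccumulation_l_per_kg
    return food_conc_pci_per_kg * annual_intake_kg * dose_factor_mrem_per_pci

# Illustrative fish pathway: 1 pCi/L in water, bioaccumulation factor
# 2000 L/kg, 10 kg/yr consumption, 1e-5 mrem per pCi ingested
fish_dose = pathway_dose_mrem(1.0, 2000.0, 10.0, 1e-5)
```

Summing such terms over nuclides and pathways, and tracking the largest term per organ, reproduces the "largest dose contributor" bookkeeping the abstract describes.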

  2. Recommendations on dose buildup factors used in models for calculating gamma doses for a plume

    International Nuclear Information System (INIS)

    Hedemann Jensen, P.; Thykier-Nielsen, S.

    1980-09-01

    Calculations of external γ-doses from radioactivity released to the atmosphere have been made using different dose buildup factor formulas. Some of the dose buildup factor formulas are used by the Nordic countries in their respective γ-dose models. A comparison of calculated γ-doses using these dose buildup factors shows that the γ-doses can be significantly dependent on the buildup factor formula used in the calculation. Increasing differences occur for increasing plume height, crosswind distance, and atmospheric stability and also for decreasing downwind distance. It is concluded that the most accurate γ-dose can be calculated by use of Capo's polynomial buildup factor formula. Capo-coefficients have been calculated and shown in this report for γ-energies below the original lower limit given by Capo. (author)
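
In a point-kernel γ-dose model, the buildup factor multiplies the uncollided flux. Below is a minimal Python sketch using the Berger form; note that Capo's formula, recommended in this report, is instead a polynomial in the number of mean free paths with energy-dependent coefficients, and the a and b values here are illustrative:

```python
import math

def berger_buildup(mu_r, a, b):
    """Berger-form dose buildup factor: B = 1 + a * mu_r * exp(b * mu_r),
    where mu_r is the number of mean free paths to the receptor and
    a, b are energy-dependent fitted coefficients."""
    return 1.0 + a * mu_r * math.exp(b * mu_r)

def point_kernel_flux(source_per_s, mu_per_cm, r_cm, a, b):
    """Built-up point-kernel flux: S * B * exp(-mu*r) / (4 * pi * r^2)."""
    mu_r = mu_per_cm * r_cm
    return source_per_s * berger_buildup(mu_r, a, b) \
        * math.exp(-mu_r) / (4.0 * math.pi * r_cm ** 2)
```

The report's point is that the choice of buildup formula (Berger, Taylor, Capo's polynomial, ...) can change the computed plume γ-dose significantly, especially at large plume heights and crosswind distances.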

  3. Benchmarking of the saturated-zone module associated with three risk assessment models: RESRAD, MMSOILS, and MEPAS

    International Nuclear Information System (INIS)

    Whelan, Gene; Mcdonald, J P.; Gnanapragasam, Emmanuel K.; Laniak, Gerard F.; Lew, Christine S.; Mills, William B.; Yu, C

    1998-01-01

    A comprehensive benchmarking is being performed between three multimedia risk assessment models: RESRAD, MMSOILS, and MEPAS. Each multimedia model is comprised of a suite of modules (e.g., groundwater, air, surface water, exposure, and risk/hazard), all of which can impact the estimation of human-health risk. As a component of the comprehensive benchmarking exercise, the saturated-zone modules of each model were applied to an environmental release scenario in which uranium-234 was released from the waste site to a saturated zone. Uranium-234 time-varying emission rates exiting the source and concentrations at three downgradient locations (0 m, 150 m, and 1500 m) are compared for each multimedia model. Time-varying concentrations of uranium-234 decay products (i.e., thorium-230, radium-226, and lead-210) at the 1500-m location are also presented. Different results are reported for RESRAD, MMSOILS, and MEPAS, which are due solely to the assumptions and mathematical constructs inherently built into each model, thereby impacting the potential risks predicted by each model. Although many differences were identified between the models, the differences that impacted these benchmarking results the most are as follows: (1) RESRAD transports its contaminants by pure translation, while MMSOILS and MEPAS solve the one-dimensional advective, three-dimensional dispersive equation. (2) Due to the manner in which the retardation factor is defined, RESRAD contaminant velocities will always be faster than those of MMSOILS or MEPAS. (3) RESRAD uses a dilution factor to account for a withdrawal well; MMSOILS and MEPAS were designed to calculate in-situ concentrations at a receptor location. (4) RESRAD allows decay products to travel at different velocities, while MEPAS assumes the decay products travel at the same speed as their parents. MMSOILS does not account for decay products and assumes degradation/decay only in the aqueous phase
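
Difference (2) above comes from the retardation factor, which in the conventional linear-sorption definition scales pore velocity into contaminant velocity. A short Python sketch of that relationship under RESRAD-style pure translation (all parameter values are illustrative):

```python
def retardation_factor(bulk_density_g_cm3, kd_ml_per_g, porosity):
    """Conventional linear-sorption retardation: R = 1 + rho_b * Kd / theta."""
    return 1.0 + bulk_density_g_cm3 * kd_ml_per_g / porosity

def plug_flow_arrival_yr(distance_m, darcy_velocity_m_yr, porosity, r_factor):
    """Pure-translation (plug-flow) arrival time, as in RESRAD:
    t = distance * R / pore velocity.  MMSOILS and MEPAS instead solve
    the advection-dispersion equation, which spreads the arriving front
    in time rather than moving it as a sharp pulse."""
    pore_velocity = darcy_velocity_m_yr / porosity
    return distance_m * r_factor / pore_velocity

# Illustrative sandy aquifer: rho_b = 1.6 g/cm3, Kd = 1 mL/g, theta = 0.4,
# Darcy velocity 10 m/yr, receptor at the 150-m benchmark location
r = retardation_factor(1.6, 1.0, 0.4)
t_150m = plug_flow_arrival_yr(150.0, 10.0, 0.4, r)
```

With dispersion included, part of the plume arrives earlier and part later than this single translation time, which is one reason the three models report different downgradient concentration histories.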

  4. Benchmarking electricity distribution

    Energy Technology Data Exchange (ETDEWEB)

    Watts, K. [Department of Justice and Attorney-General, QLD (Australia)]

    1995-12-31

    Benchmarking has been described as a method of continuous improvement that involves an ongoing and systematic evaluation and incorporation of external products, services and processes recognised as representing best practice. It is a management tool similar to total quality management (TQM) and business process re-engineering (BPR), and is best used as part of a total package. This paper discusses benchmarking models and approaches and suggests a few key performance indicators that could be applied to benchmarking electricity distribution utilities. Some recent benchmarking studies are used as examples and briefly discussed. It is concluded that benchmarking is a strong tool to be added to the range of techniques that can be used by electricity distribution utilities and other organizations in search of continuous improvement, and that there is now a high level of interest in Australia. Benchmarking represents an opportunity for organizations to approach learning from others in a disciplined and highly productive way, which will complement the other micro-economic reforms being implemented in Australia. (author). 26 refs.

  5. Shielding Benchmark Computational Analysis

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-01-01

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for developing more accurate cross-section libraries, improving radiation transport codes, and building accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper, benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC).

  6. Validation of full core geometry model of the NODAL3 code in the PWR transient Benchmark problems

    International Nuclear Information System (INIS)

    Sembiring, T.M.; Pinem, S.; Liem, P.H.

    2015-01-01

    The coupled neutronic and thermal-hydraulic (T/H) code NODAL3 has been validated against several PWR static benchmarks and the NEACRP PWR transient benchmark cases. However, the code had not yet been validated against the transient benchmark cases of a control rod (CR) assembly ejection at the core periphery using a full-core geometry model, the C1 and C2 cases. This work validates the accuracy of the NODAL3 code for single-CR ejection and unsymmetrical-group CR ejection cases. The NODAL3 calculations were carried out with both the adiabatic method (AM) and the improved quasistatic method (IQS). All calculated transient parameters were compared with the reference results of the PANTHER code. The maximum relative difference, 16%, occurs in the calculated time of maximum power when using the IQS method, while the corresponding difference for the AM method is 4% for the C2 case. The results show no systematic differences, indicating that the neutronic and T/H modules adopted in the code are correct. Overall, the NODAL3 results are in very good agreement with the reference results. (author)

  7. Total dose and dose rate models for bipolar transistors in circuit simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, Phillip Montgomery; Wix, Steven D.

    2013-05-01

    The objective of this work is to develop a model for total dose effects in bipolar junction transistors for use in circuit simulation. The components of the model are an electrical model of device performance that includes the effects of trapped charge on device behavior, and a model that calculates the trapped charge densities in a specific device structure as a function of radiation dose and dose rate. Simulations based on this model are found to agree well with measurements on a number of devices for which data are available.

  8. Update on the Code Intercomparison and Benchmark for Muon Fluence and Absorbed Dose Induced by an 18 GeV Electron Beam After Massive Iron Shielding

    Energy Technology Data Exchange (ETDEWEB)

    Fasso, A. [SLAC]; Ferrari, A. [CERN]; Ferrari, A. [HZDR, Dresden]; Mokhov, N. V. [Fermilab]; Mueller, S. E. [HZDR, Dresden]; Nelson, W. R. [SLAC]; Roesler, S. [CERN]; Sanami, T.; Striganov, S. I. [Fermilab]; Versaci, R. [Unlisted, CZ]

    2016-12-01

    In 1974, Nelson, Kase and Svensson published an experimental investigation on muon shielding around SLAC high-energy electron accelerators [1]. They measured muon fluence and absorbed dose induced by 14 and 18 GeV electron beams hitting a copper/water beamdump and attenuated in a thick steel shielding. In their paper, they compared the results with the theoretical models available at that time. In order to compare their experimental results with present model calculations, we use the modern transport Monte Carlo codes MARS15, FLUKA2011 and GEANT4 to model the experimental setup and run simulations. The results are then compared between the codes, and with the SLAC data.

  9. Assessment of model chemistries for hydrofluoropolyethers: A DFT/M08-HX benchmark study

    DEFF Research Database (Denmark)

    da Franca E S C Viegas, Luis Pedro

    2017-01-01

    In this work, we report the first detailed theoretical comparative conformational investigation between two different classes of hydrofluoropolyethers: dihydro- and dimethoxyfluoropolyethers. The main objective was to determine a cost-effective computational methodology that could accurately ... a good accuracy and considerable reduction in computational cost with respect to the benchmark, being more than three times faster than M08-HX/aug-pcseg-2//M08-HX/aug-pcseg-1. This cost-effective approach will be essential in future work when studying larger hydrofluoropolyethers, where the computational ...

  10. Extending the benchmark simulation model no2 with processes for nitrous oxide production and side-stream nitrogen removal

    DEFF Research Database (Denmark)

    Boiocchi, Riccardo; Sin, Gürkan; Gernaey, Krist V.

    2015-01-01

    In this work the Benchmark Simulation Model No. 2 is extended with processes for nitrous oxide production and for side-stream partial nitritation/Anammox (PN/A) treatment. For these extensions the Activated Sludge Model for Greenhouse gases No. 1 was used to describe the main waterline, whereas the Complete Autotrophic Nitrogen Removal (CANR) model was used to describe the side-stream (PN/A) treatment. Comprehensive simulations were performed to assess the extended model. Steady-state simulation results revealed the following: (i) the implementation of a continuous CANR side-stream reactor increased the total nitrogen removal by 10%; (ii) it reduced the aeration demand by 16% compared to the base case; and (iii) the activity of ammonia-oxidizing bacteria most influences nitrous oxide emissions. The extended model provides a simulation platform to generate, test and compare novel control ...

  11. Development of the model MAAP5-DOSE for dose analysis in Cofrentes NPP

    International Nuclear Information System (INIS)

    Gonzalez, C.; Diaz, P.; Ibanez, L.; Lamela, B.; Serrano, C.

    2013-01-01

    Iberdrola Ingenieria y Construccion has developed a model of Cofrentes NPP with the MAAP5-DOSE code in order to assess, under realistic conditions, the expected doses at points of local action and the radiological consequences of a severe accident.

  12. True dose from incorporated activities. Models for internal dosimetry

    International Nuclear Information System (INIS)

    Breustedt, B.; Eschner, W.; Nosske, D.

    2012-01-01

    The assessment of doses after incorporation of radionuclides cannot use direct measurements of the doses, as for example dosimetry in external radiation fields. The only observables are activities in the body or in excretions. Models are used to calculate the doses based on the measured activities. The incorporated activities and the resulting doses can vary by more than seven orders of magnitude between occupational and medical exposures. Nevertheless the models and calculations applied in both cases are similar. Since the models for the different applications have been developed independently by ICRP and MIRD different terminologies have been used. A unified terminology is being developed. (orig.)

  13. Mesorad dose assessment model. Volume 1. Technical basis

    International Nuclear Information System (INIS)

    Scherpelz, R.I.; Bander, T.J.; Athey, G.F.; Ramsdell, J.V.

    1986-03-01

    MESORAD is a dose assessment model for emergency response applications. Using release data for as many as 50 radionuclides, the model calculates: (1) external doses resulting from exposure to radiation emitted by radionuclides contained in elevated or deposited material; (2) internal dose commitment resulting from inhalation; and (3) total whole-body doses. External doses from airborne material are calculated using semi-infinite and finite cloud approximations. At each stage in model execution, the appropriate approximation is selected after considering the cloud dimensions. Atmospheric processes are represented in MESORAD by a combination of Lagrangian puff and Gaussian plume dispersion models, a source depletion (deposition velocity) dry deposition model, and a wet deposition model using washout coefficients based on precipitation rates
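As a rough illustration of the Gaussian plume component mentioned above (a textbook formula, not MESORAD's actual implementation), the ground-level centerline concentration from a continuous elevated point source can be sketched as:

```python
import math

def plume_centerline_concentration(q_g_s, u_m_s, sigma_y_m, sigma_z_m, h_m):
    """Ground-level centerline concentration (g/m^3) of a Gaussian plume:
    C = Q / (pi * sigma_y * sigma_z * u) * exp(-H^2 / (2 * sigma_z^2)),
    where Q is the emission rate, u the wind speed, H the effective release
    height, and sigma_y/sigma_z the dispersion coefficients at the receptor.
    """
    return (q_g_s / (math.pi * sigma_y_m * sigma_z_m * u_m_s)
            * math.exp(-h_m ** 2 / (2 * sigma_z_m ** 2)))

c = plume_centerline_concentration(1.0, 5.0, 30.0, 15.0, 0.0)  # g/m^3
```

The dispersion coefficients grow with downwind distance (via stability class), which is what the puff and plume formulations parameterize differently.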

  14. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    ... compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless not public. The survey is a cooperative project, "Benchmarking Danish Industries", with CIP/Aalborg University, the Danish Technological University, the Danish Technological Institute and Copenhagen Business School as consortia partners. The project has been funded by the Danish Agency for Trade and Industry ...

  15. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm ... , founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible ...

  16. Sparticle mass hierarchies, simplified models from SUGRA unification, and benchmarks for LHC Run-II SUSY searches

    International Nuclear Information System (INIS)

    Francescone, David; Akula, Sujeet; Altunkaynak, Baris; Nath, Pran

    2015-01-01

    Sparticle mass hierarchies contain significant information regarding the origin and nature of supersymmetry breaking. The hierarchical patterns are severely constrained by electroweak symmetry breaking as well as by the astrophysical and particle physics data. They are further constrained by the Higgs boson mass measurement. The sparticle mass hierarchies can be used to generate simplified models consistent with the high scale models. In this work we consider supergravity models with universal boundary conditions for soft parameters at the unification scale as well as supergravity models with nonuniversalities and delineate the list of sparticle mass hierarchies for the five lightest sparticles. Simplified models can be obtained by a truncation of these, retaining a smaller set of lightest particles. The mass hierarchies and their truncated versions enlarge significantly the list of simplified models currently being used in the literature. Benchmarks for a variety of supergravity unified models appropriate for SUSY searches at future colliders are also presented. The signature analysis of two benchmark models has been carried out and a discussion of the searches needed for their discovery at LHC Run-II is given. An analysis of the spin-independent neutralino-proton cross section exhibiting the Higgs boson mass dependence and the hierarchical patterns is also carried out. It is seen that a knowledge of the spin-independent neutralino-proton cross section and the neutralino mass will narrow down the list of the allowed sparticle mass hierarchies. Thus dark matter experiments along with analyses for the LHC Run-II will provide strong clues to the nature of symmetry breaking at the unification scale.

  17. Development of a Monte Carlo multiple source model for inclusion in a dose calculation auditing tool.

    Science.gov (United States)

    Faught, Austin M; Davidson, Scott E; Fontenot, Jonas; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S

    2017-09-01

    The Imaging and Radiation Oncology Core Houston (IROC-H) (formerly the Radiological Physics Center) has reported varying levels of agreement in their anthropomorphic phantom audits. There is reason to believe one source of error in this observed disagreement is the accuracy of the dose calculation algorithms and heterogeneity corrections used. To audit this component of the radiotherapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Elekta 6 MV and 10 MV therapeutic x-ray beams were commissioned based on measurement of central axis depth dose data for a 10 × 10 cm² field size and dose profiles for a 40 × 40 cm² field size. The models were validated against open field measurements consisting of depth dose data and dose profiles for field sizes ranging from 3 × 3 cm² to 30 × 30 cm². The models were then benchmarked against measurements in IROC-H's anthropomorphic head and neck and lung phantoms. Validation results showed 97.9% and 96.8% of depth dose data passed a ±2% Van Dyk criterion for 6 MV and 10 MV models respectively. Dose profile comparisons showed an average agreement using a ±2%/2 mm criterion of 98.0% and 99.0% for 6 MV and 10 MV models respectively. Phantom plan comparisons were evaluated using a ±3%/2 mm gamma criterion, and averaged passing rates between Monte Carlo and measurements were 87.4% and 89.9% for 6 MV and 10 MV models respectively. Accurate multiple source models for Elekta 6 MV and 10 MV x-ray beams have been developed for inclusion in an independent dose calculation tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.
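The ±%/mm gamma comparisons quoted in this and the following record can be illustrated with a simplified one-dimensional, local-dose-normalized gamma calculation. This is a sketch only; clinical tools work on 3D dose grids and make careful normalization and interpolation choices:

```python
import math

def gamma_1d(x_mm, dose, ref_points, dose_tol=0.02, dist_tol_mm=2.0):
    """Gamma value for one evaluation point (x_mm, dose) against a list of
    (position_mm, dose) reference samples. The dose difference is normalized
    by the local reference dose (assumed nonzero); a result <= 1.0 means the
    point passes the criterion (e.g. 2%/2 mm with the defaults)."""
    return min(
        math.sqrt(((x_mm - xr) / dist_tol_mm) ** 2
                  + ((dose - dr) / (dose_tol * dr)) ** 2)
        for xr, dr in ref_points
    )

# A 2% local dose error at the correct position sits exactly on the boundary:
g = gamma_1d(0.0, 102.0, [(0.0, 100.0)])  # -> 1.0
```

Passing rates like the 98.0%/99.0% figures above are simply the fraction of evaluation points with gamma ≤ 1.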

  18. Development of a flattening filter free multiple source model for use as an independent, Monte Carlo, dose calculation, quality assurance tool for clinical trials.

    Science.gov (United States)

    Faught, Austin M; Davidson, Scott E; Popple, Richard; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S

    2017-09-01

    The Imaging and Radiation Oncology Core-Houston (IROC-H) Quality Assurance Center (formerly the Radiological Physics Center) has reported varying levels of compliance from their anthropomorphic phantom auditing program. IROC-H studies have suggested that one source of disagreement between institution-submitted calculated doses and measurement is the accuracy of the institution's treatment planning system dose calculations and heterogeneity corrections used. In order to audit this step of the radiation therapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Varian flattening filter free (FFF) 6 MV and FFF 10 MV therapeutic x-ray beams were commissioned based on central axis depth dose data from a 10 × 10 cm² field size and dose profiles for a 40 × 40 cm² field size. The models were validated against open-field measurements in a water tank for field sizes ranging from 3 × 3 cm² to 40 × 40 cm². The models were then benchmarked against IROC-H's anthropomorphic head and neck phantom and lung phantom measurements. Validation results, assessed with a ±2%/2 mm gamma criterion, showed average agreement of 99.9% and 99.0% for central axis depth dose data for FFF 6 MV and FFF 10 MV models, respectively. Dose profile agreement using the same evaluation technique averaged 97.8% and 97.9% for the respective models. Phantom benchmarking comparisons were evaluated with a ±3%/2 mm gamma criterion, and agreement averaged 90.1% and 90.8% for the respective models. Multiple source models for Varian FFF 6 MV and FFF 10 MV beams have been developed, validated, and benchmarked for inclusion in an independent dose calculation quality assurance tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.

  19. Proposal of a probabilistic dose-response model

    International Nuclear Information System (INIS)

    Barrachina, M.

    1997-01-01

    A biologically updated dose-response model is presented as an alternative to the linear-quadratic model currently in use for cancer risk assessment. The new model is based on the probability functions for misrepair and/or unrepair of DNA lesions, in terms of the radiation damage production rate in the cell (supposedly, a stem cell) and its repair-rate constant. The model makes use, interpreting it on the basis of misrepair probabilities, of the ''dose and dose-rate effectiveness factor'' of ICRP, and provides a way for a continuous extrapolation between the high and low dose-rate regions, ratifying the ''linear non-threshold hypothesis'' as the main option. However, the model casts some doubt on the additive property of dose. (author)
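For reference, the linear-quadratic model against which this probabilistic model is positioned reduces to a one-line surviving-fraction calculation. The α and β values below are purely illustrative (α/β = 10 Gy), not taken from the paper:

```python
import math

def lq_surviving_fraction(dose_gy, alpha=0.3, beta=0.03):
    """Linear-quadratic cell survival: S = exp(-(alpha*D + beta*D^2)),
    with alpha in 1/Gy and beta in 1/Gy^2. Default values are illustrative."""
    return math.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

s = lq_surviving_fraction(2.0)  # survival after a single 2 Gy fraction
```

The quadratic term is what makes effectiveness per unit dose rise with fraction size, the behavior the dose and dose-rate effectiveness factor corrects for.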

  20. Use of benchmark dose-volume histograms for selection of the optimal technique between three-dimensional conformal radiation therapy and intensity-modulated radiation therapy in prostate cancer

    International Nuclear Information System (INIS)

    Luo Chunhui; Yang, Claus Chunli; Narayan, Samir; Stern, Robin L.; Perks, Julian; Goldberg, Zelanna; Ryu, Janice; Purdy, James A.; Vijayakumar, Srinivasan

    2006-01-01

    Purpose: The aim of this study was to develop and validate our own benchmark dose-volume histograms (DVHs) of bladder and rectum for both conventional three-dimensional conformal radiation therapy (3D-CRT) and intensity-modulated radiation therapy (IMRT), and to evaluate quantitatively the benefits of using IMRT vs. 3D-CRT in treating localized prostate cancer. Methods and Materials: During the implementation of IMRT for prostate cancer, our policy was to plan each patient with both 3D-CRT and IMRT. This study included 31 patients with T1b to T2c localized prostate cancer, for whom we completed double-planning using both 3D-CRT and IMRT techniques. The target volumes included prostate, either with or without proximal seminal vesicles. Bladder and rectum DVH data were summarized to obtain an average DVH for each technique and then compared using two-tailed paired t test analysis. Results: For 3D-CRT our bladder doses were as follows: mean 28.8 Gy, v60 16.4%, v70 10.9%; rectal doses were: mean 39.3 Gy, v60 21.8%, v70 13.6%. IMRT plans resulted in similar mean dose values: bladder 26.4 Gy, rectum 34.9 Gy, but lower values of v70 for the bladder (7.8%) and rectum (9.3%). These benchmark DVHs have resulted in a critical evaluation of our 3D-CRT techniques over time. Conclusion: Our institution has developed benchmark DVHs for bladder and rectum based on our clinical experience with 3D-CRT and IMRT. We use these standards as well as differences in individual cases to make decisions on whether patients may benefit from IMRT treatment rather than 3D-CRT
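The bladder and rectum statistics reported above (mean dose, v60, v70) can be computed from per-voxel dose samples as sketched below; the function and the dose values are invented for illustration and are not the study's software:

```python
def dvh_metrics(voxel_doses_gy, thresholds=(60.0, 70.0)):
    """Mean dose and V_x (percent of structure volume receiving >= x Gy),
    assuming equal-volume voxels."""
    n = len(voxel_doses_gy)
    mean_dose = sum(voxel_doses_gy) / n
    vx = {int(t): 100.0 * sum(1 for d in voxel_doses_gy if d >= t) / n
          for t in thresholds}
    return mean_dose, vx

mean_dose, vx = dvh_metrics([12.0, 35.0, 61.0, 72.0])
# mean_dose = 45.0 Gy; vx = {60: 50.0, 70: 25.0} (percent)
```

Averaging such per-patient metrics over a cohort yields exactly the kind of benchmark DVH values the study uses to compare 3D-CRT and IMRT plans.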

  1. Electron-helium S-wave model benchmark calculations. II. Double ionization, single ionization with excitation, and double excitation

    Science.gov (United States)

    Bartlett, Philip L.; Stelbovics, Andris T.

    2010-02-01

    The propagating exterior complex scaling (PECS) method is extended to all four-body processes in electron impact on helium in an S-wave model. Total and energy-differential cross sections are presented with benchmark accuracy for double ionization, single ionization with excitation, and double excitation (to autoionizing states) for incident-electron energies from threshold to 500 eV. While the PECS three-body cross sections for this model given in the preceding article [Phys. Rev. A 81, 022715 (2010)] are in good agreement with other methods, there are considerable discrepancies for these four-body processes. With this model we demonstrate the suitability of the PECS method for the complete solution of the electron-helium system.

  2. Comparison between linear quadratic and early time dose models

    International Nuclear Information System (INIS)

    Chougule, A.A.; Supe, S.J.

    1993-01-01

    During the 70s, much interest was focused on fractionation in radiotherapy with the aim of improving tumor control rate without producing unacceptable normal tissue damage. To compare the radiobiological effectiveness of various fractionation schedules, empirical formulae such as Nominal Standard Dose, Time Dose Factor, Cumulative Radiation Effect and Tumour Significant Dose were introduced and were used despite many shortcomings. It has been claimed that a recent linear quadratic model is able to predict the radiobiological responses of tumours as well as normal tissues more accurately. We compared Time Dose Factor and Tumour Significant Dose models with the linear quadratic model for tumour regression in patients with carcinomas of the cervix. It was observed that the prediction of tumour regression estimated by the Tumour Significant Dose and Time Dose Factor concepts varied by 1.6% from that of the linear quadratic model prediction. In view of the lack of knowledge of the precise values of the parameters of the linear quadratic model, it should be applied with caution. One can continue to use the Time Dose Factor concept which has been in use for more than a decade as its results are within ±2% as compared to that predicted by the linear quadratic model. (author). 11 refs., 3 figs., 4 tabs
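A common linear-quadratic quantity for comparing fractionation schedules like those above is the biologically effective dose (BED). The sketch below uses an illustrative α/β value and is a generic textbook formula, not the authors' calculation:

```python
def biologically_effective_dose(n_fractions, dose_per_fraction_gy, alpha_beta_gy):
    """LQ-model BED = n * d * (1 + d / (alpha/beta)), in Gy.
    alpha_beta_gy is the tissue's alpha/beta ratio (e.g. ~10 Gy for many
    tumors, ~3 Gy for late-responding normal tissue; illustrative values)."""
    return n_fractions * dose_per_fraction_gy * (
        1.0 + dose_per_fraction_gy / alpha_beta_gy)

bed = biologically_effective_dose(30, 2.0, 10.0)  # conventional 60 Gy in 30 fx
```

Two schedules with equal BED are predicted to be isoeffective for the tissue whose α/β was used, which is the role the older empirical formulae played.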

  3. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is as a structured continuous collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Multilaboratory particle image velocimetry analysis of the FDA benchmark nozzle model to support validation of computational fluid dynamics simulations.

    Science.gov (United States)

    Hariharan, Prasanna; Giarra, Matthew; Reddy, Varun; Day, Steven W; Manning, Keefe B; Deutsch, Steven; Stewart, Sandy F C; Myers, Matthew R; Berman, Michael R; Burgreen, Greg W; Paterson, Eric G; Malinauskas, Richard A

    2011-04-01

    This study is part of a FDA-sponsored project to evaluate the use and limitations of computational fluid dynamics (CFD) in assessing blood flow parameters related to medical device safety. In an interlaboratory study, fluid velocities and pressures were measured in a nozzle model to provide experimental validation for a companion round-robin CFD study. The simple benchmark nozzle model, which mimicked the flow fields in several medical devices, consisted of a gradual flow constriction, a narrow throat region, and a sudden expansion region where a fluid jet exited the center of the nozzle with recirculation zones near the model walls. Measurements of mean velocity and turbulent flow quantities were made in the benchmark device at three independent laboratories using particle image velocimetry (PIV). Flow measurements were performed over a range of nozzle throat Reynolds numbers (Re(throat)) from 500 to 6500, covering the laminar, transitional, and turbulent flow regimes. A standard operating procedure was developed for performing experiments under controlled temperature and flow conditions and for minimizing systematic errors during PIV image acquisition and processing. For laminar (Re(throat)=500) and turbulent flow conditions (Re(throat)≥3500), the velocities measured by the three laboratories were similar with an interlaboratory uncertainty of ∼10% at most of the locations. However, for the transitional flow case (Re(throat)=2000), the uncertainty in the size and the velocity of the jet at the nozzle exit increased to ∼60% and was very sensitive to the flow conditions. An error analysis showed that by minimizing the variability in the experimental parameters such as flow rate and fluid viscosity to less than 5% and by matching the inlet turbulence level between the laboratories, the uncertainties in the velocities of the transitional flow case could be reduced to ∼15%. The experimental procedure and flow results from this interlaboratory study (available
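The throat Reynolds numbers used above to classify the laminar, transitional, and turbulent regimes follow the standard definition. The fluid properties and throat diameter below are invented for illustration, not the FDA model's specification:

```python
def reynolds_number(density_kg_m3, velocity_m_s, diameter_m, viscosity_pa_s):
    """Re = rho * V * D / mu (dimensionless)."""
    return density_kg_m3 * velocity_m_s * diameter_m / viscosity_pa_s

# Hypothetical blood-analog fluid in a 12 mm throat at 0.5 m/s:
re_throat = reynolds_number(1000.0, 0.5, 0.012, 0.001)  # -> 6000.0
```

The sensitivity reported for Re_throat = 2000 reflects that this value sits in the transitional band between the laminar and turbulent regimes.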

  5. Robust fuzzy output feedback controller for affine nonlinear systems via T-S fuzzy bilinear model: CSTR benchmark.

    Science.gov (United States)

    Hamdy, M; Hamdan, I

    2015-07-01

    In this paper, a robust H∞ fuzzy output feedback controller is designed for a class of affine nonlinear systems with disturbance via Takagi-Sugeno (T-S) fuzzy bilinear model. The parallel distributed compensation (PDC) technique is utilized to design a fuzzy controller. The stability conditions of the overall closed loop T-S fuzzy bilinear model are formulated in terms of Lyapunov function via linear matrix inequality (LMI). The control law is robustified by H∞ sense to attenuate external disturbance. Moreover, the desired controller gains can be obtained by solving a set of LMI. A continuous stirred tank reactor (CSTR), which is a benchmark problem in nonlinear process control, is discussed in detail to verify the effectiveness of the proposed approach with a comparative study. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Benchmark Modeling of the Near-Field and Far-Field Wave Effects of Wave Energy Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Rhinefrank, Kenneth E; Haller, Merrick C; Ozkan-Haller, H Tuba

    2013-01-26

    This project is an industry-led partnership between Columbia Power Technologies and Oregon State University that will perform benchmark laboratory experiments and numerical modeling of the near-field and far-field impacts of wave scattering from an array of wave energy devices. These benchmark experimental observations will help to fill a gaping hole in our present knowledge of the near-field effects of multiple, floating wave energy converters and are a critical requirement for estimating the potential far-field environmental effects of wave energy arrays. The experiments will be performed at the Hinsdale Wave Research Laboratory (Oregon State University) and will utilize an array of newly developed buoys that are realistic, lab-scale floating power converters. The array of buoys will be subjected to realistic, directional wave forcing (1:33 scale) that will approximate the expected conditions (waves and water depths) to be found off the Central Oregon Coast. Experimental observations will include comprehensive in-situ wave and current measurements as well as a suite of novel optical measurements. These new optical capabilities will include imaging of the 3D wave scattering using a binocular stereo camera system, as well as 3D device motion tracking using a newly acquired LED system. These observing systems will capture the 3D motion history of individual buoys as well as resolve the 3D scattered wave field, thus resolving the constructive and destructive wave interference patterns produced by the array at high resolution. These data, combined with the device motion tracking, will provide the information necessary for array design in order to balance array performance with the mitigation of far-field impacts. As a benchmark data set, these data will be an important resource for testing models of wave/buoy interactions, buoy performance, and far-field effects on wave and current patterns due to the presence of arrays. Under the proposed project we will initiate ...

  7. A comparative evaluation of risk-adjustment models for benchmarking amputation-free survival after lower extremity bypass.

    Science.gov (United States)

    Simons, Jessica P; Goodney, Philip P; Flahive, Julie; Hoel, Andrew W; Hallett, John W; Kraiss, Larry W; Schanzer, Andres

    2016-04-01

    Providing patients and payers with publicly reported risk-adjusted quality metrics for the purpose of benchmarking physicians and institutions has become a national priority. Several prediction models have been developed to estimate outcomes after lower extremity revascularization for critical limb ischemia, but the optimal model to use in contemporary practice has not been defined. We sought to identify the highest-performing risk-adjustment model for amputation-free survival (AFS) at 1 year after lower extremity bypass (LEB). We used the national Society for Vascular Surgery Vascular Quality Initiative (VQI) database (2003-2012) to assess the performance of three previously validated risk-adjustment models for AFS. The Bypass versus Angioplasty in Severe Ischaemia of the Leg (BASIL), Finland National Vascular (FINNVASC) registry, and the modified Project of Ex-vivo vein graft Engineering via Transfection III (PREVENT III [mPIII]) risk scores were applied to the VQI cohort. A novel model for 1-year AFS was also derived using the VQI data set and externally validated using the PIII data set. The relative discrimination (Harrell c-index) and calibration (Hosmer-May goodness-of-fit test) of each model were compared. Among 7754 patients in the VQI who underwent LEB for critical limb ischemia, the AFS was 74% at 1 year. Each of the previously published models for AFS demonstrated similar discriminative performance: c-indices for BASIL, FINNVASC, mPIII were 0.66, 0.60, and 0.64, respectively. The novel VQI-derived model had improved discriminative ability with a c-index of 0.71 and appropriate generalizability on external validation with a c-index of 0.68. The model was well calibrated in both the VQI and PIII data sets (goodness of fit P = not significant). Currently available prediction models for AFS after LEB perform modestly when applied to national contemporary VQI data. Moreover, the performance of each model was inferior to that of the novel VQI-derived model
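    The discrimination metric compared above, Harrell's c-index, can be sketched in a few lines. The following is an illustrative Python implementation (not code from the study), applied to toy data; variable names and values are invented for demonstration.

```python
# Illustrative sketch: Harrell's concordance index (c-index) for right-censored
# survival data, the discrimination measure used to compare risk models such as
# BASIL, FINNVASC, and mPIII.
def harrell_c_index(times, events, risk_scores):
    """times: observed time; events: 1 if event observed, 0 if censored;
    risk_scores: higher score = higher predicted risk (shorter survival)."""
    concordant = 0.0
    comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if the subject with the shorter
            # observed time actually experienced the event.
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5  # ties in predicted risk count half
    return concordant / comparable

# Toy data: the subject with the shortest time to event carries the highest risk.
times = [2.0, 5.0, 8.0, 10.0]
events = [1, 1, 0, 1]
scores = [0.9, 0.6, 0.4, 0.2]
print(harrell_c_index(times, events, scores))  # 1.0 for perfectly concordant scores
```

    Censored subjects (event = 0) contribute only as the longer-surviving member of a pair, which is why c-index values of 0.60-0.71 as reported above reflect modest but real discrimination.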

  8. Integration of models for the Hanford Environmental Dose Reconstruction Project

    International Nuclear Information System (INIS)

    Napier, B.A.

    1991-01-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation dose that individuals could have received as a result of emissions from nuclear operations at Hanford since 1944. The objective of phase 1 of the project was to demonstrate through calculations that adequate models and support data exist or could be developed to allow realistic estimations of doses to individuals from releases of radionuclides to the environment that occurred as long as 45 years ago. Much of the data used in phase 1 was preliminary; therefore, the doses calculated must be considered preliminary approximations. This paper describes the integration of various models that was implemented for initial computer calculations. Models were required for estimating the quantity of radioactive material released, for evaluating its transport through the environment, for estimating human exposure, and for evaluating resultant doses

  9. Simplification of an MCNP model designed for dose rate estimation

    Science.gov (United States)

    Laptev, Alexander; Perry, Robert

    2017-09-01

    A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.

  10. Simplification of an MCNP model designed for dose rate estimation

    Directory of Open Access Journals (Sweden)

    Laptev Alexander

    2017-01-01

    A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.

  11. NAIRAS aircraft radiation model development, dose climatology, and initial validation

    Science.gov (United States)

    Mertens, Christopher J.; Meier, Matthias M.; Brown, Steven; Norman, Ryan B.; Xu, Xiaojing

    2013-10-01

    The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) is a real-time, global, physics-based model used to assess radiation exposure to commercial aircrews and passengers. The model is a free-running physics-based model in the sense that there are no adjustment factors applied to nudge the model into agreement with measurements. The model predicts dosimetric quantities in the atmosphere from both galactic cosmic rays (GCR) and solar energetic particles, including the response of the geomagnetic field to interplanetary dynamical processes and its subsequent influence on atmospheric dose. The focus of this paper is on atmospheric GCR exposure during geomagnetically quiet conditions, with three main objectives. First, provide detailed descriptions of the NAIRAS GCR transport and dosimetry methodologies. Second, present a climatology of effective dose and ambient dose equivalent rates at typical commercial airline altitudes representative of solar cycle maximum and solar cycle minimum conditions and spanning the full range of geomagnetic cutoff rigidities. Third, conduct an initial validation of the NAIRAS model by comparing predictions of ambient dose equivalent rates with tabulated reference measurement data and recent aircraft radiation measurements taken in 2008 during the minimum between solar cycle 23 and solar cycle 24. By applying the criterion of the International Commission on Radiation Units and Measurements (ICRU) on acceptable levels of aircraft radiation dose uncertainty for ambient dose equivalent greater than or equal to an annual dose of 1 mSv, the NAIRAS model is within 25% of the measured data, which fall within the ICRU acceptable uncertainty limit of 30%. The NAIRAS model predictions of ambient dose equivalent rate are generally within 50% of the measured data for any single-point comparison. The largest differences occur at low latitudes and high cutoffs, where the radiation dose level is low. Nevertheless, analysis suggests

  12. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    The relevance of the chosen topic follows from the concept of firm efficiency: revealed performance, i.e., how well the firm performs in its actual market environment given the basic characteristics of the firm and its market that are expected to drive profitability (firm size, market power, etc.). This complex and relative performance may stem from product innovation, management quality, or work organization; other factors can play a role even if they are not directly observed by the researcher. Because managers must continuously improve their firm's efficiency and effectiveness, and must know the success factors and determinants of competitiveness, they need to identify which performance measures are most critical to their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking of firm-level performance are critical, interdependent activities. Firm-level variables used to infer performance are often interdependent for operational reasons; hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.

  13. A BENCHMARKING ANALYSIS FOR FIVE RADIONUCLIDE VADOSE ZONE MODELS (CHAIN, MULTIMED_DP, FECTUZ, HYDRUS, AND CHAIN 2D) IN SOIL SCREENING LEVEL CALCULATIONS

    Science.gov (United States)

    Five radionuclide vadose zone models with different degrees of complexity (CHAIN, MULTIMED_DP, FECTUZ, HYDRUS, and CHAIN 2D) were selected for use in soil screening level (SSL) calculations. A benchmarking analysis between the models was conducted for a radionuclide (99Tc) rele...

  14. A Generalized QMRA Beta-Poisson Dose-Response Model.

    Science.gov (United States)

    Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

    2016-10-01

    Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single-hit dose-response models are the most commonly used dose-response models in QMRA. Denoting PI(d) as the probability of infection at a given mean dose d, a three-parameter generalized QMRA beta-Poisson dose-response model, PI(d|α,β,r*), is proposed in which the minimum number of organisms required for causing infection, K_min, is not fixed, but a random variable following a geometric distribution with parameter 0 < r* ≤ 1. The standard beta-Poisson model, PI(d|α,β), is a special case of the generalized model with K_min = 1 (which implies r* = 1). The generalized beta-Poisson model is based on a conceptual model with greater detail in the dose-response mechanism. Since a maximum likelihood solution is not easily available, a likelihood-free approximate Bayesian computation (ABC) algorithm is employed for parameter estimation. By fitting the generalized model to four experimental data sets from the literature, this study reveals that the posterior median r* estimates produced fall short of meeting the required condition of r* = 1 for the single-hit assumption. However, three out of four data sets fitted by the generalized models could not achieve an improvement in goodness of fit. These combined results imply that, at least in some cases, a single-hit assumption for characterizing the dose-response process may not be appropriate, but that the more complex models may be difficult to support, especially if the sample size is small. The three-parameter generalized model provides a possibility to investigate the mechanism of a dose-response process in greater detail than is possible under a single-hit model. © 2016 Society for Risk Analysis.
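    As a point of reference, the two-parameter beta-Poisson model that the generalized PI(d|α,β,r*) model extends is commonly evaluated in its approximate closed form, P(d) = 1 − (1 + d/β)^(−α). The sketch below uses arbitrary illustrative parameter values, not estimates from the four data sets.

```python
# Illustrative sketch: the approximate two-parameter beta-Poisson dose-response
# model. The abstract's generalized model PI(d | alpha, beta, r*) reduces to the
# single-hit case when r* = 1. Parameter values are arbitrary, for illustration.
def beta_poisson(dose, alpha, beta):
    """Probability of infection at mean dose `dose` (organisms)."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

alpha, beta = 0.25, 40.0
for d in (1.0, 10.0, 100.0, 1000.0):
    print(f"dose={d:7.1f}  P(infection)={beta_poisson(d, alpha, beta):.3f}")
```

    The curve rises monotonically from 0 toward 1 with dose, with α controlling the slope and β the dose scale.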

  15. Nernst-Planck Based Description of Transport, Coulombic Interactions and Geochemical Reactions in Porous Media: Modeling Approach and Benchmark Experiments

    DEFF Research Database (Denmark)

    Rolle, Massimo; Sprocati, Riccardo; Masi, Matteo

    2018-01-01

    …but also under advection-dominated flow regimes. To accurately describe charge effects in flow-through systems, we propose a multidimensional modeling approach based on the Nernst-Planck formulation of diffusive/dispersive fluxes. The approach is implemented with a COMSOL-PhreeqcRM coupling allowing the quantification and visualization of the specific contributions to the diffusive/dispersive Nernst-Planck fluxes, including the Fickian component, the term arising from the activity coefficient gradients, and the contribution due to electromigration. … and high-resolution experimental datasets. The latter include flow-through experiments that have been carried out in this study to explore the effects of electrostatic interactions in fully three-dimensional setups. The results of the simulations show excellent agreement for all the benchmark problems…

  16. Model for dose-response with alternative change of sign

    International Nuclear Information System (INIS)

    Osovets, S.V.

    1998-01-01

    A new mathematical model of dose-response relationships is proposed, suitable for calculating stochastic effects of low level exposure. The corresponding differential equations are presented as well as their solution. (A.K.)

  17. Benchmarking of fast-running software tools used to model releases during nuclear accidents

    Energy Technology Data Exchange (ETDEWEB)

    Devitt, P.; Viktorov, A., E-mail: Peter.Devitt@cnsc-ccsn.gc.ca, E-mail: Alex.Viktorov@cnsc-ccsn.gc.ca [Canadian Nuclear Safety Commission, Ottawa, ON (Canada)

    2015-07-01

    Fukushima highlighted the importance of effective nuclear accident response. However, its complexity greatly impacted the ability to provide timely and accurate information to national and international stakeholders. Safety recommendations provided by different national and international organizations varied notably. Such differences can partially be attributed to the different methods used in the initial assessment of accident progression and the amount of radioactivity released. Therefore, a comparison of methodologies was undertaken by the NEA/CSNI, and its highlights are presented here. For this project, the prediction tools used by various emergency response organizations for estimating source terms and public doses were examined. Those organizations that have the capability to use such tools responded to a questionnaire describing each code's capabilities and main algorithms. The project's participants then analyzed five accident scenarios to predict the source term, dispersion of releases, and public doses. (author)

  18. Dose Assessment Model for Chronic Atmospheric Releases of Tritium

    International Nuclear Information System (INIS)

    Shen Huifang; Yao Rentai

    2010-01-01

    An improved dose assessment model for chronic atmospheric releases of tritium was proposed. The model explicitly considers two chemical forms of tritium. It is based on a conservative assumption for the transfer of tritiated water (HTO) from air to the concentrations of HTO and organically bound tritium (OBT) in vegetable and animal products. The concentration of tritium in plant products is calculated by treating leafy and non-leafy plants separately, and the contribution to plant tritium concentrations from tritium in soil is taken into account. In calculating the concentration of HTO in animal products, the average water fraction of the animal products and the weighted average tritium concentration of ingested water are considered, based on the fraction of water supplied by each source, including skin absorption, inhalation, drinking water, and food. In calculating annual doses, ingestion doses are considered, along with the contributions of inhalation and skin absorption. Concentrations in foodstuffs and annual adult doses calculated with the specific activity model, the NEWTRI model, and the model proposed in this paper were compared. The results indicate that the proposed model can accurately predict tritium doses through the food chain from chronic atmospheric releases. (authors)
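    The ingestion-pathway bookkeeping that such a model performs can be sketched as a sum over pathways of concentration × intake × dose coefficient. The sketch below is an assumption-laden illustration: the dose coefficients are rounded adult ingestion values of roughly 1.8×10⁻¹¹ Sv/Bq for HTO and 4.2×10⁻¹¹ Sv/Bq for OBT, and the concentrations and intakes are hypothetical, not values from the paper.

```python
# Illustrative sketch (assumed, rounded dose coefficients and hypothetical
# intakes): annual ingestion dose from tritium as the sum over pathways of
# activity concentration x annual intake x dose coefficient, treating HTO and
# OBT separately as the abstract's model does.
DOSE_COEFF = {"HTO": 1.8e-11, "OBT": 4.2e-11}  # Sv/Bq, adult ingestion (approx.)

def annual_dose_sv(pathways):
    """pathways: list of (form, concentration in Bq/kg, annual intake in kg)."""
    return sum(DOSE_COEFF[form] * conc * intake for form, conc, intake in pathways)

pathways = [
    ("HTO", 100.0, 600.0),   # drinking water (Bq/L, L/a)
    ("HTO", 50.0, 80.0),     # water fraction of vegetables
    ("OBT", 20.0, 80.0),     # organically bound tritium in food
]
print(f"annual ingestion dose: {annual_dose_sv(pathways):.2e} Sv")
```

    Because OBT carries a larger dose coefficient than HTO, models that lump both forms together can misestimate the food-chain contribution, which is the motivation for treating them separately.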

  19. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    The objective of this study was to identify usage of foodservice performance measures, important activities in foodservice benchmarking, and benchmarking attitudes, beliefs, and practices by foodservice directors...

  20. Clinical implications of alternative TCP models for nonuniform dose distributions

    International Nuclear Information System (INIS)

    Deasy, J. O.

    1995-01-01

    Several tumor control probability (TCP) models for nonuniform dose distributions were compared, including: (a) a logistic/inter-patient-heterogeneity model, (b) a probit/inter-patient-heterogeneity model, (c) a Poisson/radioresistant-strain/identical-patients model, (d) a Poisson/inter-patient-heterogeneity model and (e) a Poisson/intra-tumor- and inter-patient-heterogeneity model. The models were analyzed in terms of the probability of controlling a single tumor voxel (the voxel control probability, or VCP), as a function of voxel volume and dose. Alternatively, the VCP surface can be thought of as the effect of a small cold spot. The models based on the Poisson equation which include inter-patient heterogeneity ((d) and (e)) have VCP surfaces (VCP as a function of dose and volume) which have a threshold 'waterfall' shape: below the waterfall (in dose), VCP is nearly zero. The threshold dose decreases with decreasing voxel volume. However, models (a), (b), and (c) all show a high probability of controlling a voxel (VCP>50%) with very low dose (e.g., 1 Gy) if the voxel is small (smaller than about 10^-3 of the tumor volume). Model (c) does not have the waterfall shape at low volumes due to the assumption of patient uniformity and a neglect of the effect of the clonogens which are more radiosensitive (and more numerous). Models (a) and (b) deviate from the waterfall shape at low volumes due to numerical differences between the functions used and the Poisson function. Hence, the Poisson models which include inter-patient heterogeneities ((d) and (e)) are more sensitive to the effects of small cold spots than the other models considered
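    The Poisson voxel-control calculation underlying models (c)-(e) can be sketched as follows. Clonogen numbers, the radiosensitivity α, and the doses are illustrative assumptions, and inter-patient heterogeneity is omitted for brevity.

```python
import math

# Illustrative sketch (assumed parameters): Poisson TCP for a nonuniform dose
# distribution. Each voxel holds n clonogens receiving dose d; the voxel control
# probability is VCP = exp(-n * SF(d)), and the tumor TCP is the product over
# voxels -- which is why a single small cold spot can dominate the result.
def surviving_fraction(dose_gy, alpha=0.3):
    # Simple log-linear cell kill; a linear-quadratic term could be added.
    return math.exp(-alpha * dose_gy)

def tcp(voxel_doses, clonogens_per_voxel):
    p = 1.0
    for d, n in zip(voxel_doses, clonogens_per_voxel):
        p *= math.exp(-n * surviving_fraction(d))  # product of VCPs
    return p

uniform = [60.0] * 10            # 60 Gy everywhere
cold_spot = [60.0] * 9 + [40.0]  # one voxel underdosed to 40 Gy
n = [1e6] * 10                   # clonogens per voxel
print(f"uniform:   TCP = {tcp(uniform, n):.4f}")
print(f"cold spot: TCP = {tcp(cold_spot, n):.4f}")
```

    Running this shows the 'waterfall' behavior described above: the single underdosed voxel drives the TCP of the whole tumor toward zero even though nine tenths of the volume is fully dosed.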

  1. Some hybrid models applicable to dose-response relationships

    International Nuclear Information System (INIS)

    Kumazawa, Shigeru

    1992-01-01

    A new type of model of dose-response relationships has been studied as an initial stage to explore a reliable extrapolation of relationships determined by high-dose data to the range of low doses covered by radiation protection. The approach is to use a 'hybrid scale' of linear and logarithmic scales; the first model is that the normalized surviving fraction (ρ_S > 0) in a hybrid scale decreases linearly with dose in a linear scale, and the second is that the induction in a log scale increases linearly with the normalized dose (τ_D > 0) in a hybrid scale. The hybrid scale may reflect an overall effectiveness of a complex system against adverse events caused by various agents. Some data on leukemia in the atomic bomb survivors and from rodent experiments were used to show the applicability of hybrid-scale models. The results proved that the proposed models fit these data no less well than the popular linear-quadratic models, providing a possible interpretation of the shapes of dose-response curves, e.g. shouldered survival curves varied by recovery time. (author)

  2. Embracing model-based designs for dose-finding trials.

    Science.gov (United States)

    Love, Sharon B; Brown, Sarah; Weir, Christopher J; Harbron, Chris; Yap, Christina; Gaschler-Markefski, Birgit; Matcham, James; Caffrey, Louise; McKevitt, Christopher; Clive, Sally; Craddock, Charlie; Spicer, James; Cornelius, Victoria

    2017-07-25

    Dose-finding trials are essential to drug development as they establish recommended doses for later-phase testing. We aim to motivate wider use of model-based designs for dose finding, such as the continual reassessment method (CRM). We carried out a literature review of dose-finding designs and conducted a survey to identify perceived barriers to their implementation. We describe the benefits of model-based designs (flexibility, superior operating characteristics, extended scope), their current uptake, and existing resources. The most prominent barriers to implementation of a model-based design were lack of suitable training, chief investigators' preference for algorithm-based designs (e.g., 3+3), and limited resources for study design before funding. We use a real-world example to illustrate how these barriers can be overcome. There is overwhelming evidence for the benefits of CRM. Many leading pharmaceutical companies routinely implement model-based designs. Our analysis identified barriers for academic statisticians and clinical academics in mirroring the progress industry has made in trial design. Unified support from funders, regulators, and journal editors could result in more accurate doses for later-phase testing, and increase the efficiency and success of clinical drug development. We give recommendations for increasing the uptake of model-based designs for dose-finding trials in academia.

  3. Choline PET based dose-painting in prostate cancer - Modelling of dose effects

    International Nuclear Information System (INIS)

    Niyazi, Maximilian; Bartenstein, Peter; Belka, Claus; Ganswindt, Ute

    2010-01-01

    Several randomized trials have documented the value of radiation dose escalation in patients with prostate cancer, especially in patients with an intermediate risk profile. Up to now, dose escalation has usually been applied to the whole prostate. IMRT and related techniques currently allow for dose escalation in sub-volumes of the organ. However, the sensitivity of the imaging modality and the fact that small islands of cancer are often dispersed within the whole organ may limit these approaches with regard to a clear clinical benefit. In order to assess the potential effects of a dose escalation in certain sub-volumes based on choline PET imaging, a mathematical dose-response model was developed. Based on different assumptions for α/β, γ50, sensitivity and specificity of choline PET, the influence of the whole-prostate and simultaneous integrated boost (SIB) doses on tumor control probability (TCP) was calculated. Given the heterogeneity of all potential variables, certain representative permutations of the parameters were chosen and, subsequently, the influence on TCP was assessed. Using schedules with 74 Gy within the whole prostate and a SIB dose of 90 Gy, the TCP increase ranged from 23.1% (high detection rate of choline PET, low whole-prostate dose, high γ50, ASTRO definition for tumor control) to a 1.4% TCP gain (low sensitivity of PET, high whole-prostate dose, CN + 2 definition for tumor control) or even 0% in selected cases. The corresponding initial TCP values without an integrated boost ranged from 67.3% to 100%. According to a large data set of intermediate-risk prostate cancer patients, the resulting TCP gains ranged from 22.2% to 10.1% (ASTRO definition) or from 13.2% to 6.0% (CN + 2 definition). Although a simplified mathematical model was employed, the presented model allows one to estimate to what extent given schedules are relevant for clinical practice. However, the benefit of a SIB based on choline PET seems less than intuitively expected. Only under the
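    One common way to realize a dose-response model of the kind described is a logistic TCP curve parameterized by TCD50 (the dose giving 50% control) and the normalized slope γ50. The values below are illustrative assumptions, not the study's fitted parameters, and the calculation ignores the PET sensitivity/specificity terms the abstract describes.

```python
# Illustrative sketch (assumed TCD50 and gamma50): a logistic TCP curve, one
# standard parameterization for modeling the effect of dose escalation.
def tcp_logistic(dose_gy, tcd50=67.5, gamma50=2.0):
    """TCP = 1 / (1 + (TCD50/D)^(4*gamma50))."""
    return 1.0 / (1.0 + (tcd50 / dose_gy) ** (4.0 * gamma50))

baseline = tcp_logistic(74.0)  # whole-prostate dose
boosted = tcp_logistic(90.0)   # simultaneous integrated boost dose
print(f"TCP at 74 Gy: {baseline:.3f}")
print(f"TCP at 90 Gy: {boosted:.3f}")
print(f"TCP gain:     {boosted - baseline:.3f}")
```

    The gain from boosting depends strongly on where the baseline dose sits on the curve: when 74 Gy already places the TCP near the plateau, escalating sub-volumes to 90 Gy adds little, which is consistent with the small gains reported in some scenarios above.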

  4. Categorical regression dose-response modeling

    Science.gov (United States)

    The goal of this training is to instruct participants in the use of the U.S. EPA's Categorical Regression software (CatReg) and its application to risk assessment. Categorical regression fits mathematical models to toxicity data that have been assigned ord...

  5. Calculations of the IAEA-CRP-6 Benchmark Cases by Using the ABAQUS FE Model for a Comparison with the COPA Results

    International Nuclear Information System (INIS)

    Cho, Moon-Sung; Kim, Y. M.; Lee, Y. W.; Jeong, K. C.; Kim, Y. K.; Oh, S. C.

    2006-01-01

    The fundamental design for a gas-cooled reactor relies on an understanding of the behavior of a coated particle fuel. KAERI, which has been carrying out the Korean VHTR (Very High Temperature modular gas cooled Reactor) Project since 2004, is developing a fuel performance analysis code for a VHTR named COPA (COated Particle fuel Analysis). COPA predicts temperatures, stresses, fission gas release, and failure probabilities of a coated particle fuel in normal operating conditions. Validation of COPA in the process of its development is realized partly by participating in the benchmark section of the international CRP-6 program led by the IAEA, which provides comprehensive benchmark problems and analysis results obtained from the CRP-6 member countries. Apart from the validation effort through the CRP-6, a validation of COPA was attempted by comparing its benchmark results with the visco-elastic solutions obtained from ABAQUS code calculations for the same CRP-6 TRISO coated particle benchmark problems involving creep, swelling, and pressure. The study shows the calculation results of the IAEA-CRP-6 benchmark cases 5 through 7 using the ABAQUS FE model for a comparison with the COPA results

  6. Noise and dose modeling for pediatric CT optimization: preliminary results

    International Nuclear Information System (INIS)

    Miller Clemente, Rafael A.; Perez Diaz, Marlen; Mora Reyes, Yudel; Rodriguez Garlobo, Maikel; Castillo Salazar, Rafael

    2008-01-01

    A multiple linear regression model was developed to predict noise and dose in pediatric computed tomography imaging for head and abdominal examinations. Relative noise values and the volumetric computed tomography dose index (CTDIvol) were used to fit the noise and dose models, respectively. Fifty-four images of physical phantoms were acquired. Independent variables considered included phantom diameter, tube current, tube voltage (kV), x-ray beam collimation, reconstruction diameter, and the equipment's post-processing filters. Predicted values show good agreement with measurements, with the noise model (adjusted R² = 0.953) performing better than the dose model (adjusted R² = 0.744). Tube current, object diameter, beam collimation, and reconstruction filter were identified as the most influential factors in the models. (author)
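    The model-fitting step can be sketched with ordinary least squares on synthetic data standing in for the 54 phantom images. The predictor names, coefficients, and noise level below are invented for illustration, not values from the study.

```python
import numpy as np

# Illustrative sketch (synthetic data): fitting a multiple linear regression of
# relative image noise on acquisition factors, in the spirit of the abstract's
# model (phantom diameter, tube current, tube voltage, ...).
rng = np.random.default_rng(0)
n = 54
diameter = rng.uniform(10, 30, n)   # phantom diameter, cm
ma = rng.uniform(50, 300, n)        # tube current, mA
kv = rng.uniform(80, 140, n)        # tube voltage, kV
# Synthetic "true" relationship plus measurement noise:
noise = 2.0 + 0.15 * diameter - 0.004 * ma - 0.01 * kv + rng.normal(0, 0.1, n)

X = np.column_stack([np.ones(n), diameter, ma, kv])  # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, noise, rcond=None)
pred = X @ coef
ss_res = np.sum((noise - pred) ** 2)
ss_tot = np.sum((noise - noise.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print("coefficients:", np.round(coef, 4))
print(f"R^2 = {r2:.3f}")
```

    With well-behaved data, the recovered coefficients track the generating values, and R² plays the role of the adjusted-R² figures of merit reported above.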

  7. Comparison of the Predictive Performance and Interpretability of Random Forest and Linear Models on Benchmark Data Sets.

    Science.gov (United States)

    Marchese Robinson, Richard L; Palczewska, Anna; Palczewski, Jan; Kidley, Nathan

    2017-08-28

    The ability to interpret the predictions made by quantitative structure-activity relationships (QSARs) offers a number of advantages. While QSARs built using nonlinear modeling approaches, such as the popular Random Forest algorithm, might sometimes be more predictive than those built using linear modeling approaches, their predictions have been perceived as difficult to interpret. However, a growing number of approaches have been proposed for interpreting nonlinear QSAR models in general and Random Forest in particular. In the current work, we compare the performance of Random Forest to those of two widely used linear modeling approaches: linear Support Vector Machines (SVMs) (or Support Vector Regression (SVR)) and partial least-squares (PLS). We compare their performance in terms of their predictivity as well as the chemical interpretability of the predictions using novel scoring schemes for assessing heat map images of substructural contributions. We critically assess different approaches for interpreting Random Forest models as well as for obtaining predictions from the forest. We assess the models on a large number of widely employed public-domain benchmark data sets corresponding to regression and binary classification problems of relevance to hit identification and toxicology. We conclude that Random Forest typically yields comparable or possibly better predictive performance than the linear modeling approaches and that its predictions may also be interpreted in a chemically and biologically meaningful way. In contrast to earlier work looking at interpretation of nonlinear QSAR models, we directly compare two methodologically distinct approaches for interpreting Random Forest models. The approaches for interpreting Random Forest assessed in our article were implemented using open-source programs that we have made available to the community. These programs are the rfFC package ( https://r-forge.r-project.org/R/?group_id=1725 ) for the R statistical
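    A minimal version of such a predictivity comparison can be sketched with scikit-learn on a synthetic nonlinear benchmark. Note the assumptions: the study itself used R, public cheminformatics data sets, and SVM/SVR and PLS as the linear baselines, whereas this sketch uses plain linear regression on a generic regression task.

```python
from sklearn.datasets import make_friedman1
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Illustrative sketch: comparing Random Forest against a linear baseline on a
# held-out test set of a nonlinear synthetic regression problem.
X, y = make_friedman1(n_samples=500, n_features=10, noise=0.5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
lm = LinearRegression().fit(X_tr, y_tr)

print(f"Random Forest R^2: {r2_score(y_te, rf.predict(X_te)):.3f}")
print(f"Linear model  R^2: {r2_score(y_te, lm.predict(X_te)):.3f}")
```

    On this deliberately nonlinear task the forest should outperform the linear fit; on near-linear data sets the gap narrows or reverses, mirroring the paper's finding that Random Forest is comparable or better rather than uniformly superior.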

  8. The CEC benchmark interclay on rheological models for clays results of pilot phase (January-June 1989) about the boom clay at Mol (B)

    International Nuclear Information System (INIS)

    Come, B.

    1990-01-01

    A pilot phase of a benchmark exercise for rheological models for boom clay, called interclay, was launched by the CEC in January 1989. The purpose of the benchmark is to compare predictions of calculations made about well-defined rock-mechanical problems, similar to real cases at the Mol facilities, using existing data from laboratory tests on samples. Basically, two approaches were to be compared: one considering clay as an elasto-visco-plastic medium (rock-mechanics approach), and one isolating the role of pore-pressure dissipation (soil-mechanics approach)

  9. Biosphere model for assessing doses from nuclear waste disposal

    International Nuclear Information System (INIS)

    Zach, R.; Amiro, B.D.; Davis, P.A.; Sheppard, S.C.; Szekeley, J.G.

    1994-01-01

    The biosphere model, BIOTRAC, for predicting long-term nuclide concentrations and radiological doses from Canada's nuclear fuel waste disposal concept of a vault deep in plutonic rock of the Canadian Shield is presented. This generic, boreal-zone biosphere model is based on scenario analysis and systems variability analysis using Monte Carlo simulation techniques. Conservatism is used to bridge uncertainties, even though this creates a small amount of extra nuclide mass. Environmental change over the very long assessment period is mainly handled through distributed parameter values. The dose receptors are a critical group of humans and four generic non-human target organisms. BIOTRAC includes six integrated submodels, and it interfaces smoothly with a geosphere model. This interface includes a bedrock well. The geosphere model defines the discharge zones of deep groundwater where nuclides released from the vault enter the biosphere occupied by the dose receptors. The size of one of these zones is reduced when water is withdrawn from the bedrock well. Sensitivity analysis indicates ¹²⁹I is by far the most important radionuclide. Results also show bedrock-well water leads to higher doses to man than lake water, but the former doses decrease with the size of the critical group. Under comparable circumstances, doses to the non-human biota are greater than those for man

  10. International Land Model Benchmarking (ILAMB) Workshop Report, Technical Report DOE/SC-0186

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, Forrest M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Koven, Charles D.; Kappel-Aleks, Gretchen [Univ. of Michigan, Ann Arbor, MI (United States); Lawrence, David M. [National Center for Atmospheric Research, Boulder, CO (United States); Riley, William [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Randerson, James T. [Univ. of California, Irvine, CA (United States); Ahlstrom, Anders; Abramowitz, G.; Baldocchi, Dennis; Bond-Lamberty, Benjamin; De Kauwe, Martin G.; Denning, Scott; Desai, Ankur R.; Eyring, Veronika; Fisher, Joshua B.; Fisher, R.; Gleckler, Peter J.; Huang, Maoyi; Hugelius, Gustaf; Jain, Atul K.; Kiang, Nancy Y.; Kim, Hyungjun; Koster, Randy; Kumar, Sujay V.; Li, Hongyi; Luo, Yiqi; Mao, Jiafu; McDowell, Nate G.; Mishra, Umakant; Moorcroft, Paul; Pau, George; Ricciuto, Daniel M.; Schaefer, Kevin; Schwalm, C.; Serbin, Shawn; Shevliakova, Elena; Slater, Andrew G.; Tang, Jinyun; Williams, Mathew; Xia, Jianyang; Xu, Chonggang; Joseph, Renu; Koch, Dorothy

    2016-11-01

    As Earth system models become increasingly complex, there is a growing need for comprehensive and multi-faceted evaluation of model projections. To advance understanding of biogeochemical processes and their interactions with hydrology and climate under conditions of increasing atmospheric carbon dioxide, new analysis methods are required that use observations to constrain model predictions, inform model development, and identify needed measurements and field experiments. Better representations of biogeochemistry–climate feedbacks and ecosystem processes in these models are essential for reducing uncertainties associated with projections of climate change during the remainder of the 21st century.

  11. Industrial and ecological cumulative exergy consumption of the United States via the 1997 input-output benchmark model

    International Nuclear Information System (INIS)

    Ukidwe, Nandan U.; Bakshi, Bhavik R.

    2007-01-01

    This paper develops a thermodynamic input-output (TIO) model of the 1997 United States economy that accounts for the flow of cumulative exergy in the 488-sector benchmark economic input-output model in two different ways. Industrial cumulative exergy consumption (ICEC) captures the exergy of all natural resources consumed directly and indirectly by each economic sector, while ecological cumulative exergy consumption (ECEC) also accounts for the exergy consumed in ecological systems for producing each natural resource. Information about exergy consumed in nature is obtained from the thermodynamics of biogeochemical cycles. As used in this work, ECEC is analogous to the concept of emergy, but does not rely on any of its controversial claims. The TIO model can also account for emissions from each sector, their impact, and the role of labor. The use of consistent exergetic units permits the combination of various streams to define aggregate metrics that may provide insight into aspects related to the impact of economic sectors on the environment. Accounting for the contribution of natural capital by ECEC has been claimed to permit better representation of the quality of ecosystem goods and services than ICEC. The results of this work are expected to permit evaluation of these claims. If validated, this work is expected to lay the foundation for thermodynamic life cycle assessment, particularly of emerging technologies and with limited information.
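    The cumulative accounting rests on the standard input-output identity: each sector's cumulative intensity equals its direct resource input plus the intensities embodied in its purchased inputs. A minimal sketch with a hypothetical 3-sector technology matrix (fixed-point iteration in place of an explicit matrix inverse; the 488-sector benchmark model works the same way at scale):

```python
# A[i][j]: input from sector i required per unit output of sector j (hypothetical)
A = [[0.1, 0.2, 0.0],
     [0.0, 0.1, 0.3],
     [0.2, 0.0, 0.1]]
r = [5.0, 0.0, 2.0]  # direct exergy input per unit output, e.g. MJ/$ (hypothetical)

def cumulative_intensity(A, r, iters=500):
    """Solve eps_j = r_j + sum_i eps_i * A[i][j], i.e. eps = r (I - A)^-1.
    Fixed-point iteration converges here because every column sum of A is < 1."""
    n = len(r)
    eps = list(r)
    for _ in range(iters):
        eps = [r[j] + sum(eps[i] * A[i][j] for i in range(n)) for j in range(n)]
    return eps

eps = cumulative_intensity(A, r)
```

    Each eps[j] exceeds the direct input r[j] because supply-chain consumption is folded in; ECEC differs from ICEC only in what the direct-input vector r counts (it adds exergy consumed in ecological systems).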

  12. Nanotechnology convergence and modeling paradigm of sustainable energy system using polymer electrolyte membrane fuel cell as a benchmark example

    International Nuclear Information System (INIS)

    Chung, Pil Seung; So, Dae Sup; Biegler, Lorenz T.; Jhon, Myung S.

    2012-01-01

    Developments in nanotechnology have led to innovative progress and converging technologies in engineering and science. These demand novel methodologies that enable efficient communications from the nanoscale all the way to decision-making criteria for actual production systems. In this paper, we discuss the convergence of nanotechnology and novel multi-scale modeling paradigms by using the fuel cell system as a benchmark example. This approach includes complex multi-phenomena at different time and length scales along with the introduction of an optimization framework for application-driven nanotechnology research trends. The modeling paradigm introduced here covers the novel holistic integration from atomistic/molecular phenomena to meso/continuum scales. System optimization is also discussed with respect to the reduced order parameters for a coarse-graining procedure in multi-scale model integration as well as system design. The development of a hierarchical multi-scale paradigm consolidates the theoretical analysis and enables large-scale decision-making of process level design, based on first-principles, and therefore promotes the convergence of nanotechnology to sustainable energy technologies.

  13. Nanotechnology convergence and modeling paradigm of sustainable energy system using polymer electrolyte membrane fuel cell as a benchmark example

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Pil Seung; So, Dae Sup; Biegler, Lorenz T.; Jhon, Myung S., E-mail: mj3a@andrew.cmu.edu [Carnegie Mellon University, Department of Chemical Engineering (United States)

    2012-08-15

    Developments in nanotechnology have led to innovative progress and converging technologies in engineering and science. These demand novel methodologies that enable efficient communications from the nanoscale all the way to decision-making criteria for actual production systems. In this paper, we discuss the convergence of nanotechnology and novel multi-scale modeling paradigms by using the fuel cell system as a benchmark example. This approach includes complex multi-phenomena at different time and length scales along with the introduction of an optimization framework for application-driven nanotechnology research trends. The modeling paradigm introduced here covers the novel holistic integration from atomistic/molecular phenomena to meso/continuum scales. System optimization is also discussed with respect to the reduced order parameters for a coarse-graining procedure in multi-scale model integration as well as system design. The development of a hierarchical multi-scale paradigm consolidates the theoretical analysis and enables large-scale decision-making of process level design, based on first-principles, and therefore promotes the convergence of nanotechnology to sustainable energy technologies.

  14. Benchmarking in Czech Higher Education

    Directory of Open Access Journals (Sweden)

    Plaček Michal

    2015-12-01

    The first part of this article surveys current experience with benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used at this level today, but most actors show some interest in its introduction. The expressed need for it, and the importance of benchmarking as a very suitable performance-management tool in less developed countries, motivate the second part of the article. Based on an analysis of the current situation and existing needs in the Czech Republic, as well as a comparison with international experience, recommendations for public policy are made in the form of a collaborative benchmarking model for Czech economics and management higher-education programs. Because the fully complex model cannot be implemented immediately (a constraint also confirmed by structured interviews with academics who have practical experience with benchmarking), the final model is designed as a multi-stage model. This approach helps eliminate major barriers to the implementation of benchmarking.

  15. Interactive Rapid Dose Assessment Model (IRDAM): user's guide

    International Nuclear Information System (INIS)

    Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.; Desrosiers, A.E.

    1983-05-01

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a microcomputer-based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This User's Guide provides instruction in the setup and operation of the equipment necessary to run IRDAM. Instructions are also given on how to load the magnetic disks and access the interactive part of the program. Two other companion volumes to this one provide additional information on IRDAM. Reactor Accident Assessment Methods (NUREG/CR-3012, Volume 2) describes the technical bases for IRDAM, including methods, models and assumptions used in calculations. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.

  16. Dose rate modelled for the outdoors of a gamma irradiation

    International Nuclear Information System (INIS)

    Mangussi, J

    2012-01-01

    A model for calculating the absorbed dose rate in the surroundings of a gamma irradiation plant is developed. In such plants, part of the radiation emitted upwards reaches the outdoors. The Compton scatterings on the walls of the exhaust pipes through the plant roof and in the outdoor air are modelled. The absorbed dose rate generated by the scattered radiation is calculated out to 200 m. The results of the models, intended for irradiation plant design and for environmental studies, are shown in graphs (author)

  17. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  18. System-wide Benchmark Simulation Model for integrated analysis of urban wastewater systems

    DEFF Research Database (Denmark)

    Saagi, R.; Flores-Alsina, X.; Gernaey, K. V.

    Interactions between different components (sewer, wastewater treatment plant (WWTP) and river) of an urban wastewater system (UWS) are widely recognized (Benedetti et al., 2013). This has resulted in an increasing interest in the modelling of the UWS. System-wide models take into account the inte...

  19. Verification of an effective dose equivalent model for neutrons

    International Nuclear Information System (INIS)

    Tanner, J.E.; Piper, R.K.; Leonowich, J.A.; Faust, L.G.

    1992-01-01

    Since the effective dose equivalent, based on the weighted sum of organ dose equivalents, is not a directly measurable quantity, it must be estimated with the assistance of computer modelling techniques and a knowledge of the incident radiation field. Although extreme accuracy is not necessary for radiation protection purposes, a few well chosen measurements are required to confirm the theoretical models. Neutron doses and dose equivalents were measured in a RANDO phantom at specific locations using thermoluminescence dosemeters, etched track dosemeters, and a 1.27 cm (1/2 in) tissue-equivalent proportional counter. The phantom was exposed to a bare and a D2O-moderated 252Cf neutron source at the Pacific Northwest Laboratory's Low Scatter Facility. The Monte Carlo code MCNP with the MIRD-V mathematical phantom was used to model the human body and to calculate the organ doses and dose equivalents. The experimental methods are described and the results of the measurements are compared with the calculations. (author)
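    The quantity being verified is a weighted sum over tissues, H_E = Σ_T w_T H_T. A minimal sketch using ICRP-26-style tissue weighting factors (the values are shown for illustration with the remainder tissues lumped into one entry; check the current ICRP recommendation before reuse):

```python
# ICRP-26 style tissue weighting factors (illustrative; remainder lumped)
W_T = {
    "gonads": 0.25, "breast": 0.15, "red_marrow": 0.12,
    "lung": 0.12, "thyroid": 0.03, "bone_surface": 0.03, "remainder": 0.30,
}

def effective_dose_equivalent(organ_h_sv):
    """H_E = sum over tissues of w_T * H_T (Sv); missing organs count as zero."""
    return sum(w * organ_h_sv.get(tissue, 0.0) for tissue, w in W_T.items())

# Uniform whole-body irradiation: H_E equals the common organ dose equivalent,
# since the weights sum to 1
h_uniform = effective_dose_equivalent({t: 1.0e-3 for t in W_T})
```

    Measurements like those in the record supply the per-organ H_T values at phantom locations; the weighted sum itself is the trivial final step.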

  20. Verification of an effective dose equivalent model for neutrons

    International Nuclear Information System (INIS)

    Tanner, J.E.; Piper, R.K.; Leonowich, J.A.; Faust, L.G.

    1991-10-01

    Since the effective dose equivalent, based on the weighted sum of organ dose equivalents, is not a directly measurable quantity, it must be estimated with the assistance of computer modeling techniques and a knowledge of the radiation field. Although extreme accuracy is not necessary for radiation protection purposes, a few well-chosen measurements are required to confirm the theoretical models. Neutron measurements were performed in a RANDO phantom using thermoluminescent dosemeters, track etch dosemeters, and a 1/2-in. (1.27-cm) tissue equivalent proportional counter in order to estimate neutron doses and dose equivalents within the phantom at specific locations. The phantom was exposed to bare and D2O-moderated 252Cf neutrons at the Pacific Northwest Laboratory's Low Scatter Facility. The Monte Carlo code MCNP with the MIRD-V mathematical phantom was used to model the human body and calculate organ doses and dose equivalents. The experimental methods are described and the results of the measurements are compared to the calculations. 8 refs., 3 figs., 3 tabs

  1. Benchmarking Exercises To Validate The Updated ELLWF GoldSim Slit Trench Model

    International Nuclear Information System (INIS)

    Taylor, G. A.; Hiergesell, R. A.

    2013-01-01

    The Savannah River National Laboratory (SRNL) results of the 2008 Performance Assessment (PA) (WSRC, 2008) sensitivity/uncertainty analyses conducted for the trenches located in the E-Area Low-Level Waste Facility (ELLWF) were subject to review by the United States Department of Energy (U.S. DOE) Low-Level Waste Disposal Facility Federal Review Group (LFRG) (LFRG, 2008). LFRG comments were generally approving of the use of probabilistic modeling in GoldSim to support the quantitative sensitivity analysis. A recommendation was made, however, that the probabilistic models be revised and updated to bolster their defensibility. SRS committed to addressing those comments and, in response, contracted with Neptune and Company to rewrite the three GoldSim models. The initial portion of this work, development of Slit Trench (ST), Engineered Trench (ET) and Components-in-Grout (CIG) trench GoldSim models, has been completed. The work described in this report utilizes these revised models to test and evaluate the results against the 2008 PORFLOW model results. This was accomplished by first performing a rigorous code-to-code comparison of the PORFLOW and GoldSim codes and then performing a deterministic comparison of the two-dimensional (2D) unsaturated zone and three-dimensional (3D) saturated zone PORFLOW Slit Trench models against results from the one-dimensional (1D) GoldSim Slit Trench model. The results of the code-to-code comparison indicate that when the mechanisms of radioactive decay, partitioning of contaminants between solid and fluid, implementation of specific boundary conditions and the imposition of solubility controls were all tested using identical flow fields, GoldSim and PORFLOW produce nearly identical results. It is also noted that GoldSim has an advantage over PORFLOW in that it simulates all radionuclides simultaneously - thus avoiding a potential problem as demonstrated in the Case Study (see Section 2.6). Hence, it was concluded that the follow

  2. Benchmarking Exercises To Validate The Updated ELLWF GoldSim Slit Trench Model

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, G. A.; Hiergesell, R. A.

    2013-11-12

    The Savannah River National Laboratory (SRNL) results of the 2008 Performance Assessment (PA) (WSRC, 2008) sensitivity/uncertainty analyses conducted for the trenches located in the E-Area Low-Level Waste Facility (ELLWF) were subject to review by the United States Department of Energy (U.S. DOE) Low-Level Waste Disposal Facility Federal Review Group (LFRG) (LFRG, 2008). LFRG comments were generally approving of the use of probabilistic modeling in GoldSim to support the quantitative sensitivity analysis. A recommendation was made, however, that the probabilistic models be revised and updated to bolster their defensibility. SRS committed to addressing those comments and, in response, contracted with Neptune and Company to rewrite the three GoldSim models. The initial portion of this work, development of Slit Trench (ST), Engineered Trench (ET) and Components-in-Grout (CIG) trench GoldSim models, has been completed. The work described in this report utilizes these revised models to test and evaluate the results against the 2008 PORFLOW model results. This was accomplished by first performing a rigorous code-to-code comparison of the PORFLOW and GoldSim codes and then performing a deterministic comparison of the two-dimensional (2D) unsaturated zone and three-dimensional (3D) saturated zone PORFLOW Slit Trench models against results from the one-dimensional (1D) GoldSim Slit Trench model. The results of the code-to-code comparison indicate that when the mechanisms of radioactive decay, partitioning of contaminants between solid and fluid, implementation of specific boundary conditions and the imposition of solubility controls were all tested using identical flow fields, GoldSim and PORFLOW produce nearly identical results. It is also noted that GoldSim has an advantage over PORFLOW in that it simulates all radionuclides simultaneously - thus avoiding a potential problem as demonstrated in the Case Study (see Section 2.6). Hence, it was concluded that the follow

  3. Parameter Sensitivity and Laboratory Benchmarking of a Biogeochemical Process Model for Enhanced Anaerobic Dechlorination

    Science.gov (United States)

    Kouznetsova, I.; Gerhard, J. I.; Mao, X.; Barry, D. A.; Robinson, C.; Brovelli, A.; Harkness, M.; Fisher, A.; Mack, E. E.; Payne, J. A.; Dworatzek, S.; Roberts, J.

    2008-12-01

    A detailed model to simulate trichloroethene (TCE) dechlorination in anaerobic groundwater systems has been developed and implemented through PHAST, a robust and flexible geochemical modeling platform. The approach is comprehensive but retains flexibility such that models of varying complexity can be used to simulate TCE biodegradation in the vicinity of nonaqueous phase liquid (NAPL) source zones. The complete model considers a full suite of biological (e.g., dechlorination, fermentation, sulfate and iron reduction, electron donor competition, toxic inhibition, pH inhibition), physical (e.g., flow and mass transfer) and geochemical processes (e.g., pH modulation, gas formation, mineral interactions). Example simulations with the model demonstrated that the feedback between biological, physical, and geochemical processes is critical. Successful simulation of a thirty-two-month column experiment with site soil, complex groundwater chemistry, and exhibiting both anaerobic dechlorination and endogenous respiration, provided confidence in the modeling approach. A comprehensive suite of batch simulations was then conducted to estimate the sensitivity of predicted TCE degradation to the 36 model input parameters. A local sensitivity analysis was first employed to rank the importance of parameters, revealing that 5 parameters consistently dominated model predictions across a range of performance metrics. A global sensitivity analysis was then performed to evaluate the influence of a variety of full parameter data sets available in the literature. The modeling study was performed as part of the SABRE (Source Area BioREmediation) project, a public/private consortium whose charter is to determine if enhanced anaerobic bioremediation can result in effective and quantifiable treatment of chlorinated solvent DNAPL source areas. The modelling conducted has provided valuable insight into the complex interactions between processes in the evolving biogeochemical systems
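    A local (one-at-a-time) sensitivity ranking of the kind described can be sketched as follows. The degradation metric and all parameter values here are hypothetical stand-ins, not the SABRE model or its 36 parameters:

```python
import math

def remaining_fraction(p):
    """Hypothetical TCE metric: Monod-type biomass factor times first-order decay."""
    return math.exp(-p["kmax"] * p["t"]) * p["biomass"] / (p["Ks"] + p["biomass"])

def local_sensitivities(model, params, delta=0.01):
    """Normalized sensitivities S_k = (dY/Y) / (dp_k/p_k) via forward differences."""
    base = model(params)
    out = {}
    for k, v in params.items():
        perturbed = dict(params, **{k: v * (1.0 + delta)})
        out[k] = (model(perturbed) - base) / base / delta
    return out

params = {"kmax": 0.5, "t": 2.0, "biomass": 1.2, "Ks": 0.3}
ranked = sorted(local_sensitivities(remaining_fraction, params).items(),
                key=lambda kv: abs(kv[1]), reverse=True)
```

    Ranking by |S_k| identifies the locally dominant parameters; a global method (sampling full parameter sets, as the record describes) is still needed to probe interactions far from the base point.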

  4. Dose loading mathematical modelling of moving through heterogeneous radiation fields

    International Nuclear Information System (INIS)

    Batyij, Je.V.; Kotlyarov, V.T.

    2006-01-01

    A software component for managing data on the spatial distribution of gamma exposure dose was created in the framework of the Ukryttya information model development. The suitability of state-of-the-art programming technologies (.NET, ObjectARX) for integrating different models of radiation-hazardous conditions into a digital engineering documentation system (AutoCAD) was demonstrated using this component as an example

  5. A model to accumulate fractionated dose in a deforming organ

    International Nuclear Information System (INIS)

    Yan Di; Jaffray, D.A.; Wong, J.W.

    1999-01-01

    Purpose: Measurements of internal organ motion have demonstrated that daily organ deformation exists throughout the course of radiation treatment. However, a method of constructing the resultant dose delivered to the organ volume remains a difficult challenge. In this study, a model to quantify internal organ motion and a method to construct a cumulative dose in a deforming organ are introduced. Methods and Materials: A biomechanical model of an elastic body is used to quantify patient organ motion in the process of radiation therapy. Intertreatment displacements of volume elements in an organ of interest are calculated by applying a finite element method with boundary conditions obtained from multiple daily computed tomography (CT) measurements. Therefore, by also incorporating the measurements of daily setup error, daily dose delivered to a deforming organ can be accumulated by tracking the position of volume elements in the organ. Furthermore, distribution of patient-specific organ motion is also predicted during the early phase of treatment delivery using the daily measurements, and the cumulative dose distribution in the organ can then be estimated. This dose distribution will be updated whenever a new measurement becomes available, and used to reoptimize the ongoing treatment. Results: An integrated process to accumulate dosage in a daily deforming organ was implemented. In this process, intertreatment organ motion and setup error were systematically quantified, and incorporated in the calculation of the cumulative dose. An example of the rectal wall motion in a prostate treatment was applied to test the model. The displacements of volume elements in the rectal wall, as well as the resultant doses, were calculated. Conclusion: This study is intended to provide a systematic framework to incorporate daily patient-specific organ motion and setup error in the reconstruction of the cumulative dose distribution in an organ of interest. The realistic dose
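    The bookkeeping idea (follow each volume element through its daily displacement and sum the dose it actually receives) can be sketched in one dimension. The dose profile and displacement values below are hypothetical, not the finite-element results of the record:

```python
import math

def planned_dose(x):
    """Hypothetical planned dose per fraction (Gy) along one axis, peaked at x = 0."""
    return 2.0 * math.exp(-(x ** 2))

def accumulate(positions_per_fraction):
    """positions_per_fraction[f][e]: position of volume element e on fraction f
    (in practice supplied by a finite-element deformation model plus setup error).
    Returns the cumulative dose per element, following each element as it moves."""
    n_elem = len(positions_per_fraction[0])
    total = [0.0] * n_elem
    for positions in positions_per_fraction:
        for e, x in enumerate(positions):
            total[e] += planned_dose(x)
    return total

# Element 0 stays at the dose peak; element 1 drifts away over 5 fractions
fractions = [[0.0, 0.1 * f] for f in range(5)]
cumulative = accumulate(fractions)
```

    The drifting element accumulates less dose than the static one, which is exactly the effect a static (planning-CT-only) dose summation would miss.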

  6. Analysis of Benchmark 2 results

    International Nuclear Information System (INIS)

    Bacha, F.; Lefievre, B.; Maillard, J.; Silva, J.

    1994-01-01

    The code GEANT315 has been compared to different codes in two benchmarks. We analyze its performances through our results, especially in the thick target case. In spite of gaps in nucleus-nucleus interaction theories at intermediate energies, benchmarks allow possible improvements of physical models used in our codes. Thereafter, a scheme of radioactive waste burning system is studied. (authors). 4 refs., 7 figs., 1 tab

  7. Comparison of electromagnetic and hadronic models generated using Geant 4 with antiproton dose measured in CERN.

    Science.gov (United States)

    Tavakoli, Mohammad Bagher; Reiazi, Reza; Mohammadi, Mohammad Mehdi; Jabbari, Keyvan

    2015-01-01

    After proposing the idea of antiproton cancer treatment in 1984 many experiments were launched to investigate different aspects of physical and radiobiological properties of antiproton, which came from its annihilation reactions. One of these experiments has been done at the European Organization for Nuclear Research known as CERN using the antiproton decelerator. The ultimate goal of this experiment was to assess the dosimetric and radiobiological properties of beams of antiprotons in order to estimate the suitability of antiprotons for radiotherapy. One difficulty on this way was the unavailability of antiproton beam in CERN for a long time, so the verification of Monte Carlo codes to simulate antiproton depth dose could be useful. Among available simulation codes, Geant4 provides acceptable flexibility and extensibility, which progressively lead to the development of novel Geant4 applications in research domains, especially modeling the biological effects of ionizing radiation at the sub-cellular scale. In this study, the depth dose corresponding to CERN antiproton beam energy by Geant4 recruiting all the standard physics lists currently available and benchmarked for other use cases were calculated. Overall, none of the standard physics lists was able to draw the antiproton percentage depth dose. Although, with some models our results were promising, the Bragg peak level remained as the point of concern for our study. It is concluded that the Bertini model with high precision neutron tracking (QGSP_BERT_HP) is the best to match the experimental data though it is also the slowest model to simulate events among the physics lists.

  8. NAIRAS aircraft radiation model development, dose climatology, and initial validation.

    Science.gov (United States)

    Mertens, Christopher J; Meier, Matthias M; Brown, Steven; Norman, Ryan B; Xu, Xiaojing

    2013-10-01

    The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) is a real-time, global, physics-based model used to assess radiation exposure to commercial aircrews and passengers. The model is a free-running physics-based model in the sense that there are no adjustment factors applied to nudge the model into agreement with measurements. The model predicts dosimetric quantities in the atmosphere from both galactic cosmic rays (GCR) and solar energetic particles, including the response of the geomagnetic field to interplanetary dynamical processes and its subsequent influence on atmospheric dose. The focus of this paper is on atmospheric GCR exposure during geomagnetically quiet conditions, with three main objectives. First, provide detailed descriptions of the NAIRAS GCR transport and dosimetry methodologies. Second, present a climatology of effective dose and ambient dose equivalent rates at typical commercial airline altitudes representative of solar cycle maximum and solar cycle minimum conditions and spanning the full range of geomagnetic cutoff rigidities. Third, conduct an initial validation of the NAIRAS model by comparing predictions of ambient dose equivalent rates with tabulated reference measurement data and recent aircraft radiation measurements taken in 2008 during the minimum between solar cycle 23 and solar cycle 24. By applying the criterion of the International Commission on Radiation Units and Measurements (ICRU) on acceptable levels of aircraft radiation dose uncertainty for ambient dose equivalent greater than or equal to an annual dose of 1 mSv, the NAIRAS model is within 25% of the measured data, which fall within the ICRU acceptable uncertainty limit of 30%. The NAIRAS model predictions of ambient dose equivalent rate are generally within 50% of the measured data for any single-point comparison. The largest differences occur at low latitudes and high cutoffs, where the radiation dose level is low. Nevertheless, analysis
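    The acceptance test quoted above (model within 25% of measurements, against a 30% limit) is a simple relative-difference screen. A sketch with made-up numbers; the comparison data in the record are tabulated reference measurements, not these:

```python
def fraction_within(predicted, measured, tol=0.25):
    """Fraction of comparison points with |pred - meas| / meas <= tol."""
    hits = sum(1 for p, m in zip(predicted, measured) if abs(p - m) <= tol * abs(m))
    return hits / len(measured)

# Hypothetical ambient dose equivalent rates, uSv/h, at matched flight conditions
measured = [3.1, 4.8, 5.5, 2.2, 6.0]
predicted = [3.4, 4.1, 5.2, 3.1, 5.6]

score = fraction_within(predicted, measured)  # 4 of 5 points pass the 25% screen
```

    Note that a single aggregate pass-rate hides where the failures cluster; the record's observation that the largest differences occur at low latitudes and high cutoffs is exactly the kind of structure a stratified comparison would surface.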

  9. A dust spectral energy distribution model with hierarchical Bayesian inference - I. Formalism and benchmarking

    Science.gov (United States)

    Galliano, Frédéric

    2018-05-01

    This article presents a new dust spectral energy distribution (SED) model, named HerBIE, aimed at eliminating the noise-induced correlations and large scatter obtained when performing least-squares fits. The originality of this code is to apply the hierarchical Bayesian approach to full dust models, including realistic optical properties, stochastic heating, and the mixing of physical conditions in the observed regions. We test the performances of our model by applying it to synthetic observations. We explore the impact on the recovered parameters of several effects: signal-to-noise ratio, SED shape, sample size, the presence of intrinsic correlations, the wavelength coverage, and the use of different SED model components. We show that this method is very efficient: the recovered parameters are consistently distributed around their true values. We do not find any clear bias, even for the most degenerate parameters, or with extreme signal-to-noise ratios.

  10. Benchmarking of Generation and Distribution Units in Nepal Using Modified DEA Models

    Science.gov (United States)

    Jha, Deependra Kumar; Yorino, Naoto; Zoka, Yoshifumi

    This paper analyzes the performance of the Nepalese Electricity Supply Industry (ESI) by investigating the relative operational efficiencies of the generating stations as well as the Distribution Centers (DCs) of the Integrated Nepal Power System (INPS). Nepal Electricity Authority (NEA), a state-owned utility, owns and operates the INPS. Performance evaluation of both generation and distribution systems is carried out by formulating suitable weight-restriction-type Data Envelopment Analysis (DEA) models. The models include a wide range of inputs and outputs representing the essence of the respective processes. Decision makers' preferences as well as available quantitative information associated with the operation of the Decision Making Units (DMUs) are judiciously incorporated in the DEA models. The proposed models are realized through execution of computer programs written in General Algebraic Modeling Systems (GAMS) and the results obtained are then compared against those from the conventional DEA models. Sensitivity analysis is performed in order to check the robustness of the results as well as to identify the improvement directions for DMUs. Ranking of the DMUs has been presented based on their average overall efficiency scores.

  11. Benchmarking in the Netherlands

    International Nuclear Information System (INIS)

    1999-01-01

    In two articles an overview is given of the activities in the Dutch industry and energy sector with respect to benchmarking. In benchmarking, operational processes of different competing businesses are compared in order to improve one's own performance. Benchmark covenants for energy efficiency between the Dutch government and industrial sectors have contributed to growth in the number of benchmark surveys in the energy-intensive industry in the Netherlands. However, some doubt the effectiveness of the benchmark studies

  12. Implementation of Extended Statistical Entropy Analysis to the Effluent Quality Index of the Benchmarking Simulation Model No. 2

    Directory of Open Access Journals (Sweden)

    Alicja P. Sobańtka

    2014-01-01

    Extended statistical entropy analysis (eSEA) is used to assess the nitrogen (N) removal performance of the wastewater treatment (WWT) simulation software, the Benchmarking Simulation Model No. 2 (BSM No. 2). Six simulations with three different types of wastewater are carried out, which vary in the dissolved oxygen concentration (O2,diss) during the aerobic treatment. N2O emissions generated during denitrification are included in the model. The N-removal performance is expressed as reduction in statistical entropy, ΔH, compared to the hypothetical reference situation of direct discharge of the wastewater into the river. The parameters chemical and biological oxygen demand (COD, BOD) and suspended solids (SS) are analogously expressed in terms of reduction of COD, BOD, and SS compared to a direct discharge of the wastewater to the river (ΔEQrest). The cleaning performance is expressed as ΔEQnew, the weighted average of ΔH and ΔEQrest. The results show that ΔEQnew is a more comprehensive indicator of the cleaning performance because, in contrast to the traditional effluent quality index (EQ), it considers the characteristics of the wastewater and includes all N-compounds and their distribution in the effluent, the off-gas, and the sludge. Furthermore, it is demonstrated that realistically expectable N2O emissions have only a moderate impact on ΔEQnew.
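    The core of an entropy-based indicator is that treatment is "good" when it concentrates a substance into few flows rather than diluting it into the environment. Full eSEA additionally weights by flow masses and concentrations; the sketch below only illustrates the entropy-reduction idea on hypothetical nitrogen mass fractions:

```python
import math

def shannon_entropy(fractions):
    """H = -sum f_i * log2(f_i) over the mass fractions of a substance across flows."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return -sum(f * math.log2(f) for f in fractions if f > 0)

# Hypothetical N partitioning: direct discharge spreads N evenly over 4 receiving
# flows; treatment concentrates most N into sludge and off-gas, little to effluent.
h_direct = shannon_entropy([0.25, 0.25, 0.25, 0.25])
h_treated = shannon_entropy([0.70, 0.20, 0.10])
delta_h = (h_direct - h_treated) / h_direct  # relative entropy reduction
```

    A larger relative reduction means the plant leaves the nitrogen in a more concentrated, hence more manageable, state than direct discharge would.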

  13. The benchmark halo giant HD 122563: CNO abundances revisited with three-dimensional hydrodynamic model stellar atmospheres

    DEFF Research Database (Denmark)

    Collet, R.; Nordlund, Å.; Asplund, M.

    2018-01-01

    We present an abundance analysis of the low-metallicity benchmark red giant star HD 122563 based on realistic, state-of-the-art, high-resolution, three-dimensional (3D) model stellar atmospheres including non-grey radiative transfer through opacity binning with 4, 12, and 48 bins. The 48-bin 3D...... simulation reaches temperatures lower by ˜300-500 K than the corresponding 1D model in the upper atmosphere. Small variations in the opacity binning, adopted line opacities, or chemical mixture can cool the photospheric layers by a further ˜100-300 K and alter the effective temperature by ˜100 K. A 3D local...... molecular bands and lines in the ultraviolet, visible, and infrared. We find a small positive 3D-1D abundance correction for carbon (+0.03 dex) and negative ones for nitrogen (-0.07 dex) and oxygen (-0.34 dex). From the analysis of the [O I] line at 6300.3 Å, we derive a significantly higher oxygen...

  14. Benchmarks for GADRAS performance validation

    International Nuclear Information System (INIS)

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L. Jr.

    2009-01-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  15. Radiation dose from Chernobyl forests: assessment using the 'forestpath' model

    International Nuclear Information System (INIS)

    Schell, W.R.; Linkov, I.; Belinkaia, E.; Rimkevich, V.; Zmushko, Yu.; Lutsko, A.; Fifield, F.W.; Flowers, A.G.; Wells, G.

    1996-01-01

    Contaminated forests can contribute significantly to human radiation dose for a few decades after initial contamination. Exposure occurs through harvesting the trees, manufacture and use of forest products for construction materials and paper production, and the consumption of food harvested from forests. Certain groups of the population, such as wild animal hunters and harvesters of berries, herbs and mushrooms, can have particularly large intakes of radionuclides from natural food products. Forestry workers have been found to receive radiation doses several times higher than other groups in the same area. The generic radionuclide cycling model 'forestpath' is being applied to evaluate the human radiation dose and risks to population groups resulting from living and working near the contaminated forests. The model enables calculations to be made to predict the internal and external radiation doses at specific times following the accident. The model can be easily adjusted for dose calculations from other contamination scenarios (such as radionuclide deposition at a low and constant rate as well as complex deposition patterns). Experimental data collected in the forests of Southern Belarus are presented. These data, together with the results of epidemiological studies, are used for model calibration and validation
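
    The 'forestpath' model cycles radionuclides among forest compartments and feeds the resulting time-dependent activities into dose calculations. As a minimal sketch of this class of model (a generic two-compartment, first-order transfer system with hypothetical rate constants, not the actual forestpath structure or parameters):

```python
import math

def simulate(a_tree0, a_soil0, k_ts, k_st, half_life_y, years, dt=0.01):
    """Two-compartment radionuclide cycling with first-order transfer
    and radioactive decay (explicit Euler).  Returns final activities."""
    lam = math.log(2) / half_life_y          # decay constant (1/y)
    tree, soil = a_tree0, a_soil0
    for _ in range(int(years / dt)):
        flow_ts = k_ts * tree                # tree -> soil (e.g. litterfall)
        flow_st = k_st * soil                # soil -> tree (e.g. root uptake)
        tree += dt * (flow_st - flow_ts - lam * tree)
        soil += dt * (flow_ts - flow_st - lam * soil)
    return tree, soil
```

    Transfers between compartments conserve activity, so the total declines only through radioactive decay; calibration against field data, as described above, is what would fix the transfer rates in practice.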

  16. Theory of thermoluminescence gamma dose response: The unified interaction model

    International Nuclear Information System (INIS)

    Horowitz, Y.S.

    2001-01-01

    We describe the development of a comprehensive theory of thermoluminescence (TL) dose response, the unified interaction model (UNIM). The UNIM is based on both radiation-absorption-stage and recombination-stage mechanisms and can describe dose response for heavy charged particles (in the framework of the extended track interaction model - ETIM) as well as for isotropically ionising gamma rays and electrons (in the framework of the TC/LC geminate recombination model) in a unified and self-consistent conceptual and mathematical formalism. A theory of optical absorption dose response is also incorporated in the UNIM to describe the radiation absorption stage. The UNIM is applied to the dose response supralinearity characteristics of LiF:Mg,Ti and is especially and uniquely successful in explaining the ionisation density dependence of the supralinearity of composite peak 5 in TLD-100. The UNIM is demonstrated to be capable of explaining, either qualitatively or quantitatively, all of the major features of TL dose response, with many of the variable parameters of the model strongly constrained by ancillary optical absorption and sensitisation measurements

  17. Benchmarking the Sandbox: Quantitative Comparisons of Numerical and Analogue Models of Brittle Wedge Dynamics (Invited)

    Science.gov (United States)

    Buiter, S.; Schreurs, G.; Geomod2008 Team

    2010-12-01

    When numerical and analogue models are used to investigate the evolution of deformation processes in crust and lithosphere, they face specific challenges related to, among others, large contrasts in material properties, the heterogeneous character of continental lithosphere, the presence of a free surface, the occurrence of large deformations including viscous flow and offset on shear zones, and the observation that several deformation mechanisms may be active simultaneously. These pose specific demands on numerical software and laboratory models. By combining the two techniques, we can utilize the strengths of each individual method and test the model-independence of our results. We can perhaps even consider our findings to be more robust if we find similar-to-same results irrespective of the modeling method that was used. To assess the role of modeling method and to quantify the variability among models with identical setups, we have performed a direct comparison of results of 11 numerical codes and 15 analogue experiments. We present three experiments that describe shortening of brittle wedges and that resemble setups frequently used by especially analogue modelers. Our first experiment translates a non-accreting wedge with a stable surface slope. In agreement with critical wedge theory, all models maintain their surface slope and do not show internal deformation. This experiment serves as a reference that allows for testing against analytical solutions for taper angle, root-mean-square velocity and gravitational rate of work. The next two experiments investigate an unstable wedge, which deforms by inward translation of a mobile wall. The models accommodate shortening by formation of forward and backward shear zones. We compare surface slope, rate of dissipation of energy, root-mean-square velocity, and the location, dip angle and spacing of shear zones. All models show similar cross-sectional evolutions that demonstrate reproducibility to first order. 

  18. Intravascular brachytherapy: a model for the calculation of the dose

    International Nuclear Information System (INIS)

    Pirchio, Rosana; Martin, Gabriela; Rivera, Elena; Cricco, Graciela; Cocca, Claudia; Gutierrez, Alicia; Nunez, Mariel; Bergoc, Rosa; Guzman, Luis; Belardi, Diego

    2002-01-01

    In this study we present the radiation dose distribution for a theoretical model with Monte Carlo simulation, based on an experimental model developed for the study of the prevention of restenosis post-angioplasty employing intravascular brachytherapy. In the experimental in vivo model, atherosclerotic plaques were induced in the femoral arteries of male New Zealand rabbits through surgical intervention and later administration of a cholesterol-enriched diet. For the intravascular irradiation we employed a 32P source contained within the balloon used for the angioplasty. The radiation dose distributions were calculated using the Monte Carlo code MCNP4B for a segment of a simulated artery. We studied the radiation dose distribution in the axial and radial directions for different thicknesses of the atherosclerotic plaques. The results will be correlated with the biological effects observed by means of histological analysis of the irradiated arteries (author)

  19. A model for radiological dose assessment in an urban environment

    International Nuclear Information System (INIS)

    Hwang, Won Tae; Kim, Eun Han; Jeong, Hyo Joon; Suh, Kyung Suk; Han, Moon Hee

    2007-01-01

    A model for radiological dose assessment in an urban environment, METRO-K, has been developed. Characteristics of the model are as follows: 1) the mathematical structures are simple (i.e. simplified input parameters) and easy to understand, since the results are obtained by analytical methods using experimental and empirical data; 2) a complex urban environment can easily be composed using only 5 types of basic surfaces; 3) various remediation measures can be applied to different surfaces by evaluating the exposure doses contributed by each contaminated surface. Exposure doses contributed by each contaminated surface at a particular receptor location were evaluated using a data library of kerma values as a function of gamma energy and contaminated surface. The kerma data library was prepared for 7 representative types of Korean urban building by extending the data given for 4 representative types of European urban buildings. Initial input data are the daily radionuclide concentration in air and precipitation, and the fraction of each chemical form. Final outputs are the absorbed dose rate in air contributed by the basic surfaces as a function of time following radionuclide deposition, and the exposure dose rate contributed by the various surfaces constituting the urban environment at a particular receptor location. In the results for a contamination scenario in an apartment built-up area, exposure dose rates differ distinctly with the surrounding environment as well as with the location of the receptor
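
    The surface-by-surface bookkeeping described above can be sketched as a sum of (surface deposit) x (kerma factor) contributions; the factors below are purely illustrative stand-ins for the energy- and surface-dependent kerma library that METRO-K actually tabulates:

```python
# Hypothetical kerma factors, (nGy/h) per (kBq/m^2); the real METRO-K
# library tabulates these per gamma energy and per surface type.
KERMA_FACTOR = {
    "roof": 0.8, "wall": 0.5, "road": 1.2, "soil": 1.5, "tree": 0.3,
}

def dose_rate(deposits):
    """Air kerma rate at the receptor: sum of contributions from
    each contaminated basic surface (deposits in kBq/m^2)."""
    return sum(KERMA_FACTOR[s] * a for s, a in deposits.items())
```

    Remediation of a single surface then maps directly onto removing (or scaling down) one term of the sum.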

  20. A demonstration of dose modeling at Yucca Mountain

    International Nuclear Information System (INIS)

    Miley, T.B.; Eslinger, P.W.

    1992-11-01

    The U.S. Environmental Protection Agency is currently revising the regulatory guidance for high-level nuclear waste disposal. In its draft form, the guidelines contain dose limits. Since this is likely to be the case in the final regulations, it is essential that the U.S. Department of Energy be prepared to calculate site-specific doses for any potential repository location. This year, Pacific Northwest Laboratory (PNL) has made a first attempt to estimate doses for the potential geologic repository at Yucca Mountain, Nevada, as part of a preliminary total-systems performance assessment. A set of transport scenarios was defined to assess the cumulative release of radionuclides over 10,000 years under undisturbed and disturbed conditions at Yucca Mountain. Dose estimates were provided for several of the transport scenarios modeled. The exposure scenarios used to estimate dose in this total-systems exercise should not, however, be considered a definitive set of scenarios for determining the risk of the potential repository. Exposure scenarios were defined for waterborne and surface contamination that result from both undisturbed and disturbed performance of the potential repository. The exposure scenarios used for this analysis were designed for the Hanford Site in Washington. The undisturbed performance scenarios for which exposures were modeled are gas-phase release of 14C to the surface and natural breakdown of the waste containers with waterborne release. The disturbed performance scenario for which doses were estimated is exploratory drilling. Both surface and waterborne contamination were considered for the drilling intrusion scenario

  1. Preliminary assessment of Geant4 HP models and cross section libraries by reactor criticality benchmark calculations

    DEFF Research Database (Denmark)

    Cai, Xiao-Xiao; Llamas-Jansa, Isabel; Mullet, Steven

    2013-01-01

    Geant4 is an open source general purpose simulation toolkit for particle transportation in matter. Since the extension of the thermal scattering model in Geant4.9.5 and the availability of the IAEA HP model cross section libraries, it is now possible to extend the application area of Geant4......, U and O in uranium dioxide, Al metal, Be metal, and Fe metal. The native HP cross section library G4NDL does not include data for elements with atomic number larger than 92. Therefore, transuranic elements, which matter for a realistic reactor, cannot be simulated by the combination of the HP...... models and the G4NDL library. However, cross sections of those missing isotopes were made available recently through the IAEA project “new evaluated neutron cross section libraries for Geant4”....

  2. Benchmarking of numerical models describing the dispersion of radionuclides in the Arctic Seas

    DEFF Research Database (Denmark)

    Scott, E.M.; Gurbutt, P.; Harms, I.

    1997-01-01

    As part of the International Arctic Seas Assessment Project (IASAP) of the International Atomic Energy Agency (IAEA), a working group was created to model the dispersal and transfer of radionuclides released from radioactive waste disposed of in the Kara Sea. The objectives of this group are: (1......) development of realistic and reliable assessment models for the dispersal of radioactive contaminants both within, and from, the Arctic ocean; and (2) evaluation of the contributions of different transfer mechanisms to contaminant dispersal and hence, ultimately, to the risks to human health and environment...

  3. Application of a Novel Dose-Uncertainty Model for Dose-Uncertainty Analysis in Prostate Intensity-Modulated Radiotherapy

    International Nuclear Information System (INIS)

    Jin Hosang; Palta, Jatinder R.; Kim, You-Hyun; Kim, Siyong

    2010-01-01

    Purpose: To analyze dose uncertainty using a previously published dose-uncertainty model, and to assess potential dosimetric risks in prostate intensity-modulated radiotherapy (IMRT). Methods and Materials: The dose-uncertainty model provides a three-dimensional (3D) dose-uncertainty distribution at a given confidence level (CL). For 8 retrospectively selected patients, dose-uncertainty maps were constructed using the dose-uncertainty model at the 95% CL. In addition to uncertainties inherent to the radiation treatment planning system, four scenarios of spatial errors were considered: machine only (S1), S1 + intrafraction, S1 + interfraction, and S1 + both intrafraction and interfraction errors. To evaluate the potential risks of the IMRT plans, three dose-uncertainty-based plan evaluation tools were introduced: the confidence-weighted dose-volume histogram, the confidence-weighted dose distribution, and the dose-uncertainty-volume histogram. Results: Dose uncertainty caused by interfraction setup error was more significant than that caused by intrafraction motion error. The maximum dose uncertainty (95% confidence) of the clinical target volume (CTV) was smaller than 5% of the prescribed dose in all but two cases (13.9% and 10.2%). The dose uncertainty for 95% of the CTV volume ranged from 1.3% to 2.9% of the prescribed dose. Conclusions: The dose uncertainty in prostate IMRT could be evaluated using the dose-uncertainty model. Prostate IMRT plans satisfying the same plan objectives could generate significantly different dose uncertainties because of a complex interplay of many uncertainty sources. The uncertainty-based plan evaluation contributes to generating reliable and error-resistant treatment plans.
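
    Of the three evaluation tools, the dose-uncertainty-volume histogram is the simplest to sketch: for each uncertainty threshold it reports the fraction of structure voxels whose dose uncertainty (at the chosen confidence level) exceeds that threshold. A minimal version, with the per-voxel uncertainties assumed to be given:

```python
def dose_uncertainty_volume(uncertainties, thresholds):
    """Fraction of voxels whose dose uncertainty meets or exceeds each
    threshold (a cumulative histogram over uncertainty, by analogy with
    a dose-volume histogram over dose)."""
    n = len(uncertainties)
    return [sum(1 for u in uncertainties if u >= t) / n for t in thresholds]
```
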

  4. Shale gas technology innovation rate impact on economic Base Case – Scenario model benchmarks

    International Nuclear Information System (INIS)

    Weijermars, Ruud

    2015-01-01

    Highlights: • Cash flow models control which technology is affordable in emerging shale gas plays. • Impact of technology innovation on IRR can be as important as wellhead price hikes. • Cash flow models are useful for technology decisions that make shale gas plays economic. • The economic gap can be closed by appropriate technology innovation. - Abstract: Low gas wellhead prices in North America have put its shale gas industry under high competitive pressure. Rapid technology innovation can help companies to improve the economic performance of shale gas fields. Cash flow models are paramount for setting effective production and technology innovation targets to achieve positive returns on investment in all global shale gas plays. The future cash flow of a well (or cluster of wells) may either improve further or deteriorate, depending on: (1) the regional volatility in gas prices at the wellhead, which must pay for the gas resource extraction, and (2) the cost and effectiveness of the well technology used. Gas price is an externality and cannot be controlled by individual companies, but well technology cost can be reduced while improving production output. We assume two plausible scenarios for well technology innovation and model the return on investment while checking sensitivity to gas price volatility. It appears that well technology innovation, if paced fast enough, can fully redeem the negative impact of gas price decline on shale well profits, and the required rates are quantified in our sensitivity analysis
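
    The return-on-investment screening described above rests on standard discounted cash flow arithmetic. A minimal sketch (bisection-based IRR; the cash flow figures in the usage note are hypothetical, not values from the study):

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-8):
    """Internal rate of return by bisection (assumes one sign change
    of npv over [lo, hi], i.e. a conventional investment profile)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2
```

    For example, a well costing 100 that returns 110 one year later has an IRR of 10%; lower wellhead prices shrink the positive cash flows and push the IRR down, which is exactly the sensitivity the two scenarios probe.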

  5. Simulation with Different Turbulence Models in an Annex 20 Benchmark Test using Star-CCM+

    DEFF Research Database (Denmark)

    Le Dreau, Jerome; Heiselberg, Per; Nielsen, Peter V.

    The purpose of this investigation is to compare the different flow patterns obtained for the 2D isothermal test case defined in Annex 20 (1990) using different turbulence models. The different results are compared with the existing experimental data. Similar study has already been performed by Rong...

  6. Benchmark simulation model no 2: general protocol and exploratory case studies

    DEFF Research Database (Denmark)

    Jeppsson, U.; Pons, M.N.; Nopens, I.

    2007-01-01

    and digester models, the included temperature dependencies and the reject water storage. BSM2-implementations are now available in a wide range of simulation platforms and a ring test has verified their proper implementation, consistent with the BSM2 definition. This guarantees that users can focus...

  7. A model for automation of radioactive dose control

    International Nuclear Information System (INIS)

    Ribeiro, Carlos Henrique Calazans; Zambon, Jose Waldir; Bitelli, Ricardo; Honaiser, Eduardo Henrique Rangel

    2009-01-01

    The paper presents a proposal for automation of the personnel dose control system to be used in nuclear medicine environments. The model considers the standards and rules of the National Commission of Nuclear Energy (CNEN) and of the Health Ministry. The advantage of the model is robust management of the integrated dose and of technician qualification status. The software platform selected was Lotus Notes, and an analysis of the advantages and disadvantages of using this platform is also presented. (author)

  8. A model for automation of radioactive dose control

    Energy Technology Data Exchange (ETDEWEB)

    Ribeiro, Carlos Henrique Calazans; Zambon, Jose Waldir; Bitelli, Ricardo; Honaiser, Eduardo Henrique Rangel [Centro Tecnologico da Marinha em Sao Paulo (CTMSP), Sao Paulo, SP (Brazil)], e-mail: calazans@ctmsp.mar.mil.br, e-mail: zambon@ctmsp.mar.mil.br, e-mail: bitelli@ctmsp.mar.mil.br, e-mail: honaiser@ctmsp.mar.mil.br

    2009-07-01

    The paper presents a proposal for automation of the personnel dose control system to be used in nuclear medicine environments. The model considers the standards and rules of the National Commission of Nuclear Energy (CNEN) and of the Health Ministry. The advantage of the model is robust management of the integrated dose and of technician qualification status. The software platform selected was Lotus Notes, and an analysis of the advantages and disadvantages of using this platform is also presented. (author)

  9. Business Models and Sharing Economy: Benchmarking Best Practices in Finland and Russia

    OpenAIRE

    Martynova, Tatiana

    2017-01-01

    The thesis studies the best practices in sharing economy across various industries in Russia and Finland, based on case studies of business models. It researches current legal status of the phenomenon as well as legislative changes that are to be expected in the field of sharing economy. The thesis project was commissioned in November 2015 by Association of Finnish Travel Agents (AFTA), an organization formed by travel agents, tour operators and incoming agents to promote the mutual inter...

  10. Object-oriented process dose modeling for glovebox operations

    International Nuclear Information System (INIS)

    Boerigter, S.T.; Fasel, J.H.; Kornreich, D.E.

    1999-01-01

    The Plutonium Facility at Los Alamos National Laboratory supports several defense and nondefense-related missions for the country by performing fabrication, surveillance, and research and development for materials and components that contain plutonium. Most operations occur in rooms with one or more arrays of gloveboxes connected to each other via trolley gloveboxes. Minimizing the effective dose equivalent (EDE) is a growing concern as a result of steadily declining allowable dose limits and a growing general awareness of safety in the workplace. In general, the authors distinguish three components of a worker's total EDE: the primary EDE, the secondary EDE, and the background EDE. A particular background source of interest is the nuclear materials vault. The distinction between sources inside and outside of a particular room is arbitrary, with the underlying assumption that building walls and floors provide significant shielding to justify including sources in other rooms in the background category. Los Alamos has developed the Process Modeling System (ProMoS) primarily for performing process analyses of nuclear operations. ProMoS is an object-oriented, discrete-event simulation package that has been used to analyze operations at Los Alamos and proposed facilities such as the new fabrication facilities for the Complex-21 effort. In the past, crude estimates of the process dose (the EDE received when a particular process occurred), room dose (the EDE received when a particular process occurred in a given room), and facility dose (the EDE received when a particular process occurred in the facility) were used to obtain an integrated EDE for a given process. Modifications were made to the ProMoS package to utilize secondary dose information, using dose modeling to enhance the process modeling efforts
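
    The integrated EDE for a process is, in essence, a time-weighted sum of the primary, secondary, and background dose-rate components over the process steps. A minimal accumulator in that spirit (rates and durations hypothetical; ProMoS additionally handles discrete-event scheduling and room geometry):

```python
def integrated_ede(steps, background_rate):
    """Accumulate a worker's effective dose equivalent over process steps.
    Each step is (duration_h, primary_rate, secondary_rate), rates in
    mrem/h; background_rate applies throughout (e.g. the vault)."""
    total = 0.0
    for duration, primary, secondary in steps:
        total += duration * (primary + secondary + background_rate)
    return total
```
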

  11. Analytical solutions for benchmarking cold regions subsurface water flow and energy transport models: one-dimensional soil thaw with conduction and advection

    Science.gov (United States)

    Kurylyk, Barret L.; McKenzie, Jeffrey M; MacQuarrie, Kerry T. B.; Voss, Clifford I.

    2014-01-01

    Numerous cold regions water flow and energy transport models have emerged in recent years. Dissimilarities often exist in their mathematical formulations and/or numerical solution techniques, but few analytical solutions exist for benchmarking flow and energy transport models that include pore water phase change. This paper presents a detailed derivation of the Lunardini solution, an approximate analytical solution for predicting soil thawing subject to conduction, advection, and phase change. Fifteen thawing scenarios are examined by considering differences in porosity, surface temperature, Darcy velocity, and initial temperature. The accuracy of the Lunardini solution is shown to be proportional to the Stefan number. The analytical solution results obtained for soil thawing scenarios with water flow and advection are compared to those obtained from the finite element model SUTRA. Three problems, two involving the Lunardini solution and one involving the classic Neumann solution, are recommended as standard benchmarks for future model development and testing.
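
    As a point of orientation, the conduction-only limit of such problems is the classic Stefan problem; in the quasi-steady (small-Stefan-number) approximation the thaw front advances as X(t) = sqrt(2 k T_s t / (rho_w L phi)). The sketch below implements only this simplified limit, not the Lunardini solution itself, which additionally accounts for advection; the parameter names and default values are assumptions for illustration:

```python
import math

def thaw_depth(t_s, k_thawed, porosity, time_s,
               latent_heat=3.34e5, rho_water=1000.0):
    """Quasi-steady (small-Stefan-number) thaw depth in metres for soil
    held at surface temperature t_s (deg C); conduction only, no advection.
    k_thawed: thermal conductivity of thawed soil (W/m/K)."""
    return math.sqrt(2.0 * k_thawed * t_s * time_s
                     / (rho_water * latent_heat * porosity))
```

    The square-root-of-time front is the signature of conduction-dominated thaw; advection, as in the Lunardini solution, perturbs this behaviour, which is what makes it a discriminating benchmark.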

  12. Solutions of the Two-Dimensional Hubbard Model: Benchmarks and Results from a Wide Range of Numerical Algorithms

    Directory of Open Access Journals (Sweden)

    2015-12-01

    Full Text Available Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  13. Mathematical model for evaluation of dose-rate effect on biological responses to low dose γ-radiation

    International Nuclear Information System (INIS)

    Ogata, H.; Kawakami, Y.; Magae, J.

    2003-01-01

    Full text: To evaluate the quantitative dose-response relationship of biological responses to radiation, it is necessary to consider a model that includes cumulative dose, dose-rate and irradiation time. In this study, we measured micronucleus formation and [3H]thymidine uptake in human cells as indices of biological response to gamma radiation, and analyzed the data mathematically and statistically for quantitative evaluation of radiation risk at low dose/low dose-rate. The effective dose (EDx) was estimated mathematically by fitting a general function of the logistic model to the dose-response relationship. Assuming that biological response depends not only on cumulative dose but also on dose-rate and irradiation time, a multiple logistic function was applied to express the relationship of the three variables. Moreover, to estimate the effect of radiation at very low dose, we proposed a modified exponential model. From the results of fitting curves to the inhibition of [3H]thymidine uptake and micronucleus formation, it was evident that the ED50 for inhibition of [3H]thymidine uptake increased with longer irradiation time. For the micronuclei, the ED30 also increased with longer irradiation times. These results suggest that the biological response depends not only on total dose but also on irradiation time. The estimated response surface using the three variables showed that the biological response declined sharply when the dose-rate was less than 0.01 Gy/h. These results suggest that the response does not depend on total cumulative dose at very low dose-rates. Further, to investigate the effect of dose-rate within a wider range, we analyzed the relationship between EDx and dose-rate. Fitted curves indicated that EDx increased sharply when the dose-rate was less than 10^-2 Gy/h. The increase of EDx signifies a decline of the response, or the risk, and suggests that the risk approaches 0 at infinitely low dose-rate
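
    For a log-logistic dose-response p(d) = 1 / (1 + (d/ED50)^(-slope)), the effective dose EDx has a closed form, which is how EDx values like those above can be read off a fitted curve. A minimal sketch (the parameter values below are hypothetical, not the study's fitted values):

```python
def ed_x(ed50, slope, x):
    """Dose producing x% of the maximal response under a log-logistic
    model p(d) = 1 / (1 + (d / ed50) ** (-slope)).
    Closed form: EDx = ED50 * (x / (100 - x)) ** (1 / slope)."""
    return ed50 * (x / (100.0 - x)) ** (1.0 / slope)
```

    By construction ed_x(ed50, slope, 50) returns ED50 itself, and shallower slopes spread the EDx values over a wider dose range.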

  14. Benchmarking the invariant embedding method against analytical solutions in model transport problems

    International Nuclear Information System (INIS)

    Wahlberg, Malin; Pazsit, Imre

    2005-01-01

    The purpose of this paper is to demonstrate the use of the invariant embedding method in a series of model transport problems, for which it is also possible to obtain an analytical solution. Due to the non-linear character of the embedding equations, their solution can only be obtained numerically. However, this can be done via a robust and effective iteration scheme. In return, the domain of applicability is far wider than the model problems investigated in this paper. The use of the invariant embedding method is demonstrated in three different areas. The first is the calculation of the energy spectrum of reflected (sputtered) particles from a multiplying medium, where the multiplication arises from recoil production. Both constant and energy dependent cross sections with a power law dependence were used in the calculations. The second application concerns the calculation of the path length distribution of reflected particles from a medium without multiplication. This is a relatively novel and unexpected application, since the embedding equations do not resolve the depth variable. The third application concerns the demonstration that solutions in an infinite medium and a half-space are interrelated through embedding-like integral equations, by the solution of which the reflected flux from a half-space can be reconstructed from solutions in an infinite medium or vice versa. In all cases the invariant embedding method proved to be robust, fast and monotonically converging to the exact solutions. (authors)
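
    A classic example of the kind of nonlinear half-space equation that must be iterated numerically, though not one of the paper's own embedding equations, is Chandrasekhar's H-equation for isotropic scattering, H(mu) = 1 + (omega/2) mu H(mu) * integral_0^1 H(m)/(mu + m) dm. A simple fixed-point iteration converges for single-scattering albedo omega < 1:

```python
def h_function(omega, n=40, iters=300):
    """Fixed-point iteration for Chandrasekhar's H-equation on an
    n-point midpoint grid over mu in (0, 1]; omega is the
    single-scattering albedo (must be < 1 for this simple scheme)."""
    mus = [(i + 0.5) / n for i in range(n)]   # quadrature nodes
    w = 1.0 / n                                # quadrature weight
    h = [1.0] * n                              # initial guess H = 1
    for _ in range(iters):
        h = [1.0 + 0.5 * omega * mu * h[i]
             * sum(w * hj / (mu + mj) for hj, mj in zip(h, mus))
             for i, mu in enumerate(mus)]
    return mus, h
```

    As with the embedding equations in the paper, the iteration is robust and needs no depth discretization: the unknown lives only on the boundary.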

  15. Magnetic Design and Code Benchmarking of the SMC (Short Model Coil) Dipole Magnet

    CERN Document Server

    Manil, P; Rochford, J; Fessia, P; Canfer, S; Baynham, E; Nunio, F; de Rijk, G; Védrine, P

    2010-01-01

    The Short Model Coil (SMC) working group was set up in February 2007 to complement the Next European Dipole (NED) program, in order to develop a short-scale model of a Nb$_{3}$Sn dipole magnet. In 2009, the EuCARD/HFM (High Field Magnets) program took over these programs. The SMC group comprises four laboratories: CERN/TE-MSC group (CH), CEA/IRFU (FR), RAL (UK) and LBNL (US). The SMC magnet is designed to reach a peak field of about 13 Tesla (T) on conductor, using a 2500 A/mm2 Powder-In-Tube (PIT) strand. The aim of this magnet is to study the degradation of the magnetic properties of the Nb$_{3}$Sn cable by applying different levels of pre-stress. To fully satisfy this purpose, a versatile and easy-to-assemble structure has been realized. The design of the SMC magnet has been developed from an existing dipole magnet, the SD01, designed, built and tested at LBNL with support from CEA. The goal of the magnetic design presented in this paper is to match the high field region with the high stress region, l...

  16. Use of nonlinear dose-effect models to predict consequences

    International Nuclear Information System (INIS)

    Seiler, F.A.; Alvarez, J.L.

    1996-01-01

    The linear dose-effect relationship was introduced as a model for the induction of cancer from exposure to nuclear radiation. Subsequently, it has also been used, by analogy, to assess the risk of chemical carcinogens. Recently, however, the model for radiation carcinogenesis has come increasingly under attack because its calculations contradict the epidemiological data, such as cancer incidence in atomic bomb survivors. Even so, its proponents vigorously defend it, often using arguments that are not purely scientific but a mix of scientific, societal, and often political considerations. At least in part, the resilience of the linear model is due to two convenient properties that are exclusive to linearity: First, the risk of an event is determined solely by the event dose; second, the total risk of a population group depends only on the total population dose. In reality, the linear model has been conclusively falsified; i.e., it has been shown to make wrong predictions, and once this fact is generally realized, the scientific method calls for a new paradigm model. As all alternative models are by necessity nonlinear, all the convenient properties of the linear model are invalid, and calculational procedures have to be used that are appropriate for nonlinear models
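
    The second "convenient property", that total population risk depends only on the total (collective) population dose, is easy to check numerically: it holds for a linear model and fails for any nonlinear one. A sketch with purely illustrative risk coefficients:

```python
def population_risk(doses, risk_model):
    """Total expected number of excess cases in a population."""
    return sum(risk_model(d) for d in doses)

def linear(dose):                 # illustrative linear risk coefficient
    return 0.05 * dose

def quadratic(dose):              # one possible nonlinear alternative
    return 0.05 * dose + 0.01 * dose ** 2

uniform = [0.1] * 10              # 10 people at 0.1 Sv: 1 Sv collective
skewed = [1.0]                    # 1 person at 1.0 Sv:  1 Sv collective
```

    Ten people at 0.1 Sv and one person at 1.0 Sv carry the same collective dose, so the linear model assigns them identical total risk, while the quadratic alternative does not; this is why nonlinear models require knowing the dose distribution, not just its total.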

  17. A Base Integer Programming Model and Benchmark Suite for Liner-Shipping Network Design

    DEFF Research Database (Denmark)

    Brouer, Berit Dangaard; Alvarez, Fernando; Plum, Christian Edinger Munk

    2014-01-01

    . The potential for making cost-effective and energy-efficient liner-shipping networks using operations research (OR) is huge and neglected. The implementation of logistic planning tools based upon OR has enhanced performance of airlines, railways, and general transportation companies, but within the field......The liner-shipping network design problem is to create a set of nonsimple cyclic sailing routes for a designated fleet of container vessels that jointly transports multiple commodities. The objective is to maximize the revenue of cargo transport while minimizing the costs of operation...... sources of liner shipping for OR researchers in general. We describe and analyze the liner-shipping domain applied to network design and present a rich integer programming model based on services that constitute the fixed schedule of a liner shipping company. We prove the liner-shipping network design...

  18. International Criticality Safety Benchmark Evaluation Project (ICSBEP) - ICSBEP 2015 Handbook

    International Nuclear Information System (INIS)

    Bess, John D.

    2015-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy (DOE). The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Nuclear Energy Agency (NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculation techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross-section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span approximately 69000 pages and contain 567 evaluations with benchmark specifications for 4874 critical, near-critical or subcritical configurations, 31 criticality alarm placement/shielding configurations with multiple dose points for each, and 207 configurations that have been categorised as fundamental physics measurements that are relevant to criticality safety applications. New to the handbook are benchmark specifications for neutron activation foil and thermoluminescent dosimeter measurements performed at the SILENE critical assembly in Valduc, France as part of a joint venture in 2010 between the US DOE and the French Alternative Energies and Atomic Energy Commission (CEA). A photograph of this experiment is shown on the front cover. Experiments that are found unacceptable for use as criticality safety benchmark experiments are discussed in these

  19. Radiation dose modeling using IGRIP and Deneb/ERGO

    International Nuclear Information System (INIS)

    Vickers, D.S.; Davis, K.R.; Breazeal, N.L.; Watson, R.A.; Ford, M.S.

    1995-01-01

    The Radiological Environment Modeling System (REMS) quantifies dose to humans in radiation environments using the IGRIP (Interactive Graphical Robot Instruction Program) and Deneb/ERGO (Ergonomics) simulation software products. These commercially available products are augmented with custom C code to provide the radiation exposure information to and collect the radiation dose information from the workcell simulations. The emphasis of this paper is on the IGRIP and Deneb/ERGO parts of REMS, since that represents the extension to existing capabilities developed by the authors. Through the use of any radiation transport code or measured data, a radiation exposure input database may be formulated. User-specified IGRIP simulations utilize these database files to compute and accumulate dose to human devices (Deneb's ERGO human) during simulated operations around radiation sources. Timing, distances, shielding, and human activity may be modeled accurately in the simulations. The accumulated dose is recorded in output files, and the user is able to process and view this output. REMS was developed because the proposed reduction in the yearly radiation exposure limit will preclude or require changes in many of the manual operations currently being utilized in the Weapons Complex. This is particularly relevant in the area of dismantlement activities at the Pantex Plant in Amarillo, TX. Therefore, a capability was needed to be able to quantify the dose associated with certain manual processes so that the benefits of automation could be identified and understood

  20. Analysis and modeling of electronic portal imaging exit dose measurements

    International Nuclear Information System (INIS)

    Pistorius, S.; Yeboah, C.

    1995-01-01

In spite of the technical advances in treatment planning and delivery in recent years, it is still unclear whether the recommended accuracy in dose delivery is being achieved. Electronic portal imaging devices, now in routine use in many centres, have the potential for quantitative dosimetry. As part of a project which aims to develop an expert-system based On-line Dosimetric Verification (ODV) system, we have investigated and modelled the dose deposited in the detector of a video-based portal imaging system. Monte Carlo techniques were used to simulate gamma and x-ray beams in homogeneous slab phantom geometries. Exit doses and energy spectra were scored as a function of (i) slab thickness, (ii) field size and (iii) the air gap between the exit surface and the detector. The results confirm that, in order to accurately calculate the dose in the high-atomic-number Gd2O2S detector for a range of air gaps, field sizes and slab thicknesses, the magnitudes of both the primary and scattered components and their effective energies need to be considered. An analytic, convolution-based model which attempts to do this is proposed. The results of the simulation and the ability of the model to represent these data will be presented and discussed. This model is used to show that, after training, a back-propagation feed-forward cascade correlation neural network has the ability to identify, and recognise the cause of, significant dosimetric errors
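The analytic convolution idea in the abstract, a directly transmitted primary plus a smoothed scatter component, can be sketched in one dimension. The attenuation coefficient, scatter fraction, and kernel width below are illustrative placeholders, not values fitted in the paper:

```python
import numpy as np

def exit_dose_profile(entrance, thickness_cm, mu=0.05, scatter_frac=0.15, sigma_px=5.0):
    """Toy exit-dose model: exponentially attenuated primary fluence plus
    a scattered component modelled as a Gaussian convolution of the primary."""
    primary = entrance * np.exp(-mu * thickness_cm)
    x = np.arange(-30, 31, dtype=float)
    kernel = np.exp(-0.5 * (x / sigma_px) ** 2)
    kernel /= kernel.sum()  # normalise so scatter_frac sets the scatter share
    scatter = scatter_frac * np.convolve(primary, kernel, mode="same")
    return primary + scatter

# A 10-cm slab attenuates the primary by exp(-0.5), and the scatter
# tails extend the dose slightly beyond the geometric field edges.
field = np.zeros(101)
field[40:61] = 1.0  # open field, 21 pixels wide
dose = exit_dose_profile(field, 10.0)
```

A real model, as the abstract notes, would also make the kernel depend on air gap, field size, and the effective energy of each component.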

  1. Low dose CT simulation using experimental noise model

    Energy Technology Data Exchange (ETDEWEB)

    Nakanishi, Satori; Zamyatin, Alexander A. [Toshiba Medical Systems Corporation, Tochigi, Otawarashi (Japan); Silver, Michael D. [Toshiba Medical Research Institute, Vernon Hills, IL (United States)

    2011-07-01

We suggest a method to obtain a system noise model experimentally, without relying on assumptions about the statistical distribution of the noise; in addition, knowledge of the DAS gain and electronic noise level is not required. Evaluation with ultra-low-dose CT data (5 mAs) shows a good match between simulated and real data noise. (orig.)

  2. Modern multicore and manycore architectures: Modelling, optimisation and benchmarking a multiblock CFD code

    Science.gov (United States)

    Hadade, Ioan; di Mare, Luca

    2016-08-01

    Modern multicore and manycore processors exhibit multiple levels of parallelism through a wide range of architectural features such as SIMD for data parallel execution or threads for core parallelism. The exploitation of multi-level parallelism is therefore crucial for achieving superior performance on current and future processors. This paper presents the performance tuning of a multiblock CFD solver on Intel SandyBridge and Haswell multicore CPUs and the Intel Xeon Phi Knights Corner coprocessor. Code optimisations have been applied on two computational kernels exhibiting different computational patterns: the update of flow variables and the evaluation of the Roe numerical fluxes. We discuss at great length the code transformations required for achieving efficient SIMD computations for both kernels across the selected devices including SIMD shuffles and transpositions for flux stencil computations and global memory transformations. Core parallelism is expressed through threading based on a number of domain decomposition techniques together with optimisations pertaining to alleviating NUMA effects found in multi-socket compute nodes. Results are correlated with the Roofline performance model in order to assert their efficiency for each distinct architecture. We report significant speedups for single thread execution across both kernels: 2-5X on the multicore CPUs and 14-23X on the Xeon Phi coprocessor. Computations at full node and chip concurrency deliver a factor of three speedup on the multicore processors and up to 24X on the Xeon Phi manycore coprocessor.
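The Roofline model used above to assess kernel efficiency can be stated in a few lines: attainable throughput is the lesser of peak compute and memory bandwidth times arithmetic intensity. The machine numbers below are illustrative, not those of the SandyBridge, Haswell, or Xeon Phi parts studied in the paper:

```python
def roofline(peak_gflops, peak_bw_gbs, intensity_flops_per_byte):
    """Attainable performance (GFLOP/s) under the Roofline model:
    min(peak compute, memory bandwidth * arithmetic intensity)."""
    return min(peak_gflops, peak_bw_gbs * intensity_flops_per_byte)

# Illustrative machine: 500 GFLOP/s peak, 50 GB/s memory bandwidth.
# The ridge point sits at 500/50 = 10 FLOP/byte.
print(roofline(500.0, 50.0, 2.0))   # memory-bound: 100.0
print(roofline(500.0, 50.0, 16.0))  # compute-bound: 500.0
```

Plotting measured kernel throughput against this bound, as the authors do, shows whether a kernel is limited by bandwidth or by compute on each device.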

  3. Weight restrictions on geography variables in the DEA benchmarking model for Norwegian electricity distribution companies

    Energy Technology Data Exchange (ETDEWEB)

    Bjoerndal, Endre; Bjoerndal, Mette; Camanho, Ana

    2008-07-01

    The DEA model for the distribution networks is designed to take into account the diverse operating conditions of the companies through so-called 'geography' variables. Our analyses show that companies with difficult operating conditions tend to be rewarded with relatively high efficiency scores, and this is the reason for introducing weight restrictions. We discuss the relative price restrictions suggested for geography and high voltage variables by NVE (2008), and we compare these to an alternative approach by which the total (virtual) weight of the geography variables is restricted. The main difference between the two approaches is that the former tends to affect more companies, but to a lesser extent, than the latter. We also discuss how to set the restriction limits. Since the virtual restrictions are at a more aggregated level than the relative ones, it may be easier to establish the limits with this approach. Finally, we discuss implementation issues, and give a short overview of available software. (Author). 18 refs., figs
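The underlying DEA efficiency score is the optimum of a small linear program, and weight restrictions of either kind enter as extra inequality rows. Below is a minimal sketch of the input-oriented CCR multiplier model on toy data; the geography and high-voltage variables of the NVE model are not represented, and a virtual weight restriction would simply be one more row in `A_ub`:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (multiplier form).

    X: (m, n) inputs, Y: (s, n) outputs; columns are DMUs.
    Solves: max u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0,  u, v >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    # Variables z = [u (s output weights), v (m input weights)].
    # linprog minimises, so negate the objective max u.y_o.
    c = np.concatenate([-Y[:, o], np.zeros(m)])
    A_ub = np.hstack([Y.T, -X.T])          # u.y_j - v.x_j <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[:, o]]).reshape(1, -1)  # v.x_o = 1
    b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m))
    return -res.fun

X = np.array([[2.0, 4.0]])   # one input, two companies
Y = np.array([[2.0, 2.0]])   # one output
print(dea_ccr_efficiency(X, Y, 0))  # best output/input ratio -> 1.0
print(dea_ccr_efficiency(X, Y, 1))  # half the best ratio -> 0.5
```

With only one input and output the score reduces to the ratio test, which makes the toy result easy to verify by hand.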

  4. Enabling benchmarking and improving operational efficiency at nuclear power plants through adoption of a common process model: SNPM (standard nuclear performance model)

    International Nuclear Information System (INIS)

    Pete Karns

    2006-01-01

To support the projected increase in base-load electricity demand, nuclear operating companies must maintain or improve upon current generation rates, all while their assets continue to age. Certainly new plants are and will be built; however, the bulk of the world's nuclear generation comes from plants constructed in the 1970s and 1980s. The nuclear energy industry in the United States has dramatically increased its electricity production over the past decade, from 75.1% in 1994 to 91.9% by 2002 (source: NEI, US Nuclear Industry Net Capacity Factors, 1980 to 2003). This increase, coupled with lowered production costs, from $2.43 in 1994 to $1.71 in 2002 (adjusted for inflation; source: NEI, US Nuclear Industry Net Production Costs, 1980 to 2002), is due in large part to a focus on operational excellence that is driven by an industry effort to develop and share best practices for the purposes of benchmarking and improving overall performance. These best-practice processes, known as the standard nuclear performance model (SNPM), present an opportunity for European nuclear power generators who are looking to improve current production rates. In essence, the SNPM is a model for safe, reliable, and economically competitive nuclear power generation. The SNPM has been a joint effort of several industry bodies: the Nuclear Energy Institute, the Electric Utility Cost Group, and the Institute of Nuclear Power Operations (INPO). The standard nuclear performance model (see figure 1) comprises eight primary processes, supported by forty-four sub-processes and a number of company-specific activities and tasks. The processes were originally envisioned by INPO in 1994 and evolved into the SNPM that was originally launched in 1998.
Since that time communities of practice (CoPs) have emerged via workshops to further improve the processes and their inter-operability, CoP representatives include people from: nuclear power operating companies, policy bodies, industry suppliers and consultants, and

  5. TSD-DOSE: A radiological dose assessment model for treatment, storage, and disposal facilities

    International Nuclear Information System (INIS)

    Pfingston, M.; Arnish, J.; LePoire, D.; Chen, S.-Y.

    1998-01-01

    Past practices at US Department of Energy (DOE) field facilities resulted in the presence of trace amounts of radioactive materials in some hazardous chemical wastes shipped from these facilities. In May 1991, the DOE Office of Waste Operations issued a nationwide moratorium on shipping all hazardous waste until procedures could be established to ensure that only nonradioactive hazardous waste would be shipped from DOE facilities to commercial treatment, storage, and disposal (TSD) facilities. To aid in assessing the potential impacts of shipments of mixed radioactive and chemically hazardous wastes, a radiological assessment computer model (or code) was developed on the basis of detailed assessments of potential radiological exposures and doses for eight commercial hazardous waste TSD facilities. The model, called TSD-DOSE, is designed to incorporate waste-specific and site-specific data to estimate potential radiological doses to on-site workers and the off-site public from waste-handling operations at a TSD facility. The code is intended to provide both DOE and commercial TSD facilities with a rapid and cost-effective method for assessing potential human radiation exposures from the processing of chemical wastes contaminated with trace amounts of radionuclides

  6. The benchmark halo giant HD 122563: CNO abundances revisited with three-dimensional hydrodynamic model stellar atmospheres

    Science.gov (United States)

    Collet, R.; Nordlund, Å.; Asplund, M.; Hayek, W.; Trampedach, R.

    2018-04-01

    We present an abundance analysis of the low-metallicity benchmark red giant star HD 122563 based on realistic, state-of-the-art, high-resolution, three-dimensional (3D) model stellar atmospheres including non-grey radiative transfer through opacity binning with 4, 12, and 48 bins. The 48-bin 3D simulation reaches temperatures lower by ˜300-500 K than the corresponding 1D model in the upper atmosphere. Small variations in the opacity binning, adopted line opacities, or chemical mixture can cool the photospheric layers by a further ˜100-300 K and alter the effective temperature by ˜100 K. A 3D local thermodynamic equilibrium (LTE) spectroscopic analysis of Fe I and Fe II lines gives discrepant results in terms of derived Fe abundance, which we ascribe to non-LTE effects and systematic errors on the stellar parameters. We also determine C, N, and O abundances by simultaneously fitting CH, OH, NH, and CN molecular bands and lines in the ultraviolet, visible, and infrared. We find a small positive 3D-1D abundance correction for carbon (+0.03 dex) and negative ones for nitrogen (-0.07 dex) and oxygen (-0.34 dex). From the analysis of the [O I] line at 6300.3 Å, we derive a significantly higher oxygen abundance than from molecular lines (+0.46 dex in 3D and +0.15 dex in 1D). We rule out important OH photodissociation effects as possible explanation for the discrepancy and note that lowering the surface gravity would reduce the oxygen abundance difference between molecular and atomic indicators.

  7. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

Computer simulation of nuclear power plant response can be a full-scope control room simulator, an engineering simulator to represent the general behavior of the plant under normal and abnormal conditions, or the modeling of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training of vendor/utility personnel, etc. is how well they represent what has been known from industrial experience, large integral experiments and separate effects tests. Typically, simulation codes are benchmarked against some of these, with the necessary level of agreement depending on the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result, their capabilities are continually enhanced, errors are corrected, new situations are imposed on the code that are outside of the original design basis, etc. Consequently, there is a continual need to assure that the benchmarks with important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes, this capability is accomplished through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks is included in the source code, and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients, large integral experiments, and separate effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence; i.e. the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer
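The dynamic-benchmarking idea, re-running a fixed suite of reference cases on every release and flagging drift from stored results, can be sketched generically. The harness below is illustrative, not MAAP4's actual mechanism, and all case and metric names are hypothetical:

```python
import math

def run_dynamic_benchmarks(simulate, reference, rel_tol=0.02):
    """Re-run an archived benchmark suite and flag drift from stored results.

    simulate:  callable(params) -> dict of scalar figures of merit
    reference: {case_name: {"params": ..., "expected": {metric: value}}}
    Returns (case, metric, expected, got) tuples for any drifted metric.
    """
    failures = []
    for name, case in reference.items():
        result = simulate(case["params"])
        for metric, expected in case["expected"].items():
            if not math.isclose(result[metric], expected, rel_tol=rel_tol):
                failures.append((name, metric, expected, result[metric]))
    return failures

# Hypothetical usage: the "code" stands in for a plant transient model.
reference = {
    "loss_of_feedwater": {"params": {"power": 1.0},
                          "expected": {"peak_temp": 900.0}},
}
code_v2 = lambda p: {"peak_temp": 900.0 * p["power"]}
print(run_dynamic_benchmarks(code_v2, reference))  # [] -> benchmarks preserved
```

Embedding such a suite in the release process is what preserves trust in the code as it evolves.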

  8. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  9. Evaluation and comparison of benchmark QSAR models to predict a relevant REACH endpoint: The bioconcentration factor (BCF)

    Energy Technology Data Exchange (ETDEWEB)

    Gissi, Andrea [Laboratory of Environmental Chemistry and Toxicology, IRCCS – Istituto di Ricerche Farmacologiche Mario Negri, Via La Masa 19, 20156 Milano (Italy); Dipartimento di Farmacia – Scienze del Farmaco, Università degli Studi di Bari “Aldo Moro”, Via E. Orabona 4, 70125 Bari (Italy); Lombardo, Anna; Roncaglioni, Alessandra [Laboratory of Environmental Chemistry and Toxicology, IRCCS – Istituto di Ricerche Farmacologiche Mario Negri, Via La Masa 19, 20156 Milano (Italy); Gadaleta, Domenico [Laboratory of Environmental Chemistry and Toxicology, IRCCS – Istituto di Ricerche Farmacologiche Mario Negri, Via La Masa 19, 20156 Milano (Italy); Dipartimento di Farmacia – Scienze del Farmaco, Università degli Studi di Bari “Aldo Moro”, Via E. Orabona 4, 70125 Bari (Italy); Mangiatordi, Giuseppe Felice; Nicolotti, Orazio [Dipartimento di Farmacia – Scienze del Farmaco, Università degli Studi di Bari “Aldo Moro”, Via E. Orabona 4, 70125 Bari (Italy); Benfenati, Emilio, E-mail: emilio.benfenati@marionegri.it [Laboratory of Environmental Chemistry and Toxicology, IRCCS – Istituto di Ricerche Farmacologiche Mario Negri, Via La Masa 19, 20156 Milano (Italy)

    2015-02-15

regression (R^2 = 0.85) and sensitivity (average > 0.70) for new compounds in the AD but not present in the training set. However, no single optimal model exists and, thus, a case-by-case assessment would be wise. Yet, integrating the wealth of information from multiple models remains the winning approach. - Highlights: • REACH encourages the use of in silico methods in the assessment of chemicals safety. • The performances of nine BCF models were evaluated on a benchmark database of 851 chemicals. • We compared the models on the basis of both regression and classification performance. • Statistics on chemicals out of the training set and/or within the applicability domain were compiled. • The results show that QSAR models are useful as weight-of-evidence in support of other methods.
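The two figures of merit used in the comparison, R^2 for regression and sensitivity for classification, can be computed directly. A minimal sketch; the log BCF threshold of 3.3 used here to define the "bioaccumulative" class is an assumption for illustration, not necessarily the cut-off used by the authors:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def sensitivity(y_true, y_pred, threshold=3.3):
    """Fraction of truly 'bioaccumulative' chemicals (log BCF >= threshold)
    that the model also predicts above the threshold."""
    tp = sum(1 for t, p in zip(y_true, y_pred)
             if t >= threshold and p >= threshold)
    pos = sum(1 for t in y_true if t >= threshold)
    return tp / pos if pos else float("nan")
```

Reporting both metrics, as the study does, matters because a model can regress well on average yet still miss the few compounds above the regulatory threshold.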

  10. Evaluation and comparison of benchmark QSAR models to predict a relevant REACH endpoint: The bioconcentration factor (BCF)

    International Nuclear Information System (INIS)

    Gissi, Andrea; Lombardo, Anna; Roncaglioni, Alessandra; Gadaleta, Domenico; Mangiatordi, Giuseppe Felice; Nicolotti, Orazio; Benfenati, Emilio

    2015-01-01

sensitivity (average > 0.70) for new compounds in the AD but not present in the training set. However, no single optimal model exists and, thus, a case-by-case assessment would be wise. Yet, integrating the wealth of information from multiple models remains the winning approach. - Highlights: • REACH encourages the use of in silico methods in the assessment of chemicals safety. • The performances of nine BCF models were evaluated on a benchmark database of 851 chemicals. • We compared the models on the basis of both regression and classification performance. • Statistics on chemicals out of the training set and/or within the applicability domain were compiled. • The results show that QSAR models are useful as weight-of-evidence in support of other methods

  11. Absorbed dose in fibrotic microenvironment models employing Monte Carlo simulation

    International Nuclear Information System (INIS)

    Zambrano Ramírez, O.D.; Rojas Calderón, E.L.; Azorín Vega, E.P.; Ferro Flores, G.; Martínez Caballero, E.

    2015-01-01

The presence or absence of fibrosis and, even more, the multimeric and multivalent nature of the radiopharmaceutical have recently been reported to have an effect on the radiation absorbed dose in tumor microenvironment models. Fibroblast and myofibroblast cells produce the extracellular matrix by the secretion of proteins which provide structural and biochemical support to cells. The reactive and reparative mechanisms triggered during the inflammatory process cause the production and deposition of extracellular matrix proteins; the abnormal, excessive growth of the connective tissue leads to fibrosis. In this work, microenvironment models (either fibrotic or not) composed of seven spheres representing cancer cells of 10 μm in diameter, each with a 5 μm diameter inner sphere (cell nucleus), were created in two distinct radiation transport codes (PENELOPE and MCNP). The purpose of creating these models was to determine the radiation absorbed dose in the nucleus of cancer cells, based on previously reported percentages of the 177Lu-Tyr3-octreotate (monomeric) and 177Lu-Tyr3-octreotate-AuNP (multimeric) radiopharmaceuticals retained by HeLa cells. The results of the two codes were compared, and good agreement was found. The percent difference between the increase percentages of the absorbed dose in the non-fibrotic model with respect to the fibrotic model was found to be under 1% for both radiopharmaceuticals in both PENELOPE and MCNP. (authors)

  12. Benchmarking for Higher Education.

    Science.gov (United States)

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  13. Strong-coupling expansion for the momentum distribution of the Bose-Hubbard model with benchmarking against exact numerical results

    International Nuclear Information System (INIS)

    Freericks, J. K.; Krishnamurthy, H. R.; Kato, Yasuyuki; Kawashima, Naoki; Trivedi, Nandini

    2009-01-01

A strong-coupling expansion for the Green's functions, self-energies, and correlation functions of the Bose-Hubbard model is developed. We illustrate the general formalism, which includes all possible (normal-phase) inhomogeneous effects, such as disorder or a trap potential, as well as effects of thermal excitations. The expansion is then employed to calculate the momentum distribution of the bosons in the Mott phase for an infinite homogeneous periodic system at zero temperature through third order in the hopping. By using scaling theory for the critical behavior at zero momentum and at the critical value of the hopping for the Mott insulator-to-superfluid transition along with a generalization of the random-phase-approximation-like form for the momentum distribution, we are able to extrapolate the series to infinite order and produce very accurate quantitative results for the momentum distribution in a simple functional form for one, two, and three dimensions. The accuracy is better in higher dimensions and is on the order of a few percent relative error everywhere except close to the critical value of the hopping divided by the on-site repulsion. In addition, we find simple phenomenological expressions for the Mott-phase lobes in two and three dimensions which are much more accurate than the truncated strong-coupling expansions and any other analytic approximation we are aware of. The strong-coupling expansions and scaling-theory results are benchmarked against numerically exact quantum Monte Carlo simulations in two and three dimensions and against density-matrix renormalization-group calculations in one dimension. These analytic expressions will be useful for quick comparison of experimental results to theory and in many cases can bypass the need for expensive numerical simulations.

  14. The role of dose inhomogeneity in biological models of dose response

    International Nuclear Information System (INIS)

    Crawford-Brown, D.J.

    1989-01-01

The paper focuses on the semi-empirical functions proposed by NAS (1980) and ICRP (1977), in which terms for initiation and cell killing appear. The intent is not to produce a new model of carcinogenesis, or to reanalyse existing epidemiological data, but to explore whether an existing extrapolation function (proposed by the NAS) can be shown to have coherent theoretical support, while at the same time reproducing (however reasonably) the features of epidemiological data. Attention is restricted to irradiation by high-LET radiations such as alpha particles, which may produce large inhomogeneities in both emission density and dose in cellular populations. Particular interest is directed towards epidemiological studies of uranium miners (Hornung and Meinhardt, 1987) and persons injected with 224Ra (Spiess and Mays, 1970), although the results of the radium dial studies are included since they are discussed in the NAS report. Both populations are characterized by large uncertainties in dose estimation (mean organ dose) and by highly inhomogeneous patterns of irradiation within a single organ (Arnold and Jee, 1959; Diel, 1978; Singh, Bennettee and Wrenn, 1987; Rowland and Marshall, 1959). (author)

  15. Comprehensive fluence model for absolute portal dose image prediction

    International Nuclear Information System (INIS)

    Chytyk, K.; McCurdy, B. M. C.

    2009-01-01

Amorphous silicon (a-Si) electronic portal imaging devices (EPIDs) continue to be investigated as treatment verification tools, with a particular focus on intensity modulated radiation therapy (IMRT). This verification could be accomplished through a comparison of measured portal images to predicted portal dose images. A general fluence determination tailored to portal dose image prediction would be a great asset in order to model the complex modulation of IMRT. A proposed physics-based parameter fluence model was commissioned by matching predicted EPID images to corresponding measured EPID images of multileaf collimator (MLC) defined fields. The two-source fluence model was composed of a focal Gaussian and an extrafocal Gaussian-like source. Specific aspects of the MLC and secondary collimators were also modeled (e.g., jaw and MLC transmission factors, MLC rounded leaf tips, tongue and groove effect, interleaf leakage, and leaf offsets). Several unique aspects of the model were developed based on the results of detailed Monte Carlo simulations of the linear accelerator including (1) use of a non-Gaussian extrafocal fluence source function, (2) separate energy spectra used for focal and extrafocal fluence, and (3) different off-axis energy spectra softening used for focal and extrafocal fluences. The predicted energy fluence was then convolved with Monte Carlo generated, EPID-specific dose kernels to convert incident fluence to dose delivered to the EPID. Measured EPID data were obtained with an a-Si EPID for various MLC-defined fields (from 1x1 to 20x20 cm²) over a range of source-to-detector distances. These measured profiles were used to determine the fluence model parameters in a process analogous to the commissioning of a treatment planning system. The resulting model was tested on 20 clinical IMRT plans, including ten prostate and ten oropharyngeal cases.
The model predicted the open-field profiles within 2%, 2 mm, while a mean of 96.6% of pixels over all

  16. 3-D neutron transport benchmarks

    International Nuclear Information System (INIS)

    Takeda, T.; Ikeda, H.

    1991-03-01

A set of 3-D neutron transport benchmark problems proposed by Osaka University to NEACRP in 1988 has been calculated by many participants, and the corresponding results are summarized in this report. The results of k_eff, control rod worth and region-averaged fluxes for the four proposed core models, calculated by using various 3-D transport codes, are compared and discussed. The calculational methods used were: Monte Carlo, Discrete Ordinates (Sn), Spherical Harmonics (Pn), Nodal Transport and others. The solutions of the four core models are quite useful as benchmarks for checking the validity of 3-D neutron transport codes

  17. Analytical dose modeling for preclinical proton irradiation of millimetric targets.

    Science.gov (United States)

    Vanstalle, Marie; Constanzo, Julie; Karakaya, Yusuf; Finck, Christian; Rousseau, Marc; Brasse, David

    2018-01-01

    Due to the considerable development of proton radiotherapy, several proton platforms have emerged to irradiate small animals in order to study the biological effectiveness of proton radiation. A dedicated analytical treatment planning tool was developed in this study to accurately calculate the delivered dose given the specific constraints imposed by the small dimensions of the irradiated areas. The treatment planning system (TPS) developed in this study is based on an analytical formulation of the Bragg peak and uses experimental range values of protons. The method was validated after comparison with experimental data from the literature and then compared to Monte Carlo simulations conducted using Geant4. Three examples of treatment planning, performed with phantoms made of water targets and bone-slab insert, were generated with the analytical formulation and Geant4. Each treatment planning was evaluated using dose-volume histograms and gamma index maps. We demonstrate the value of the analytical function for mouse irradiation, which requires a targeting accuracy of 0.1 mm. Using the appropriate database, the analytical modeling limits the errors caused by misestimating the stopping power. For example, 99% of a 1-mm tumor irradiated with a 24-MeV beam receives the prescribed dose. The analytical dose deviations from the prescribed dose remain within the dose tolerances stated by report 62 of the International Commission on Radiation Units and Measurements for all tested configurations. In addition, the gamma index maps show that the highly constrained targeting accuracy of 0.1 mm for mouse irradiation leads to a significant disagreement between Geant4 and the reference. This simulated treatment planning is nevertheless compatible with a targeting accuracy exceeding 0.2 mm, corresponding to rat and rabbit irradiations. Good dose accuracy for millimetric tumors is achieved with the analytical calculation used in this work. These volume sizes are typical in mouse
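The analytical treatment described above rests on an experimental range-energy relation; a common choice for protons in water is the Bragg-Kleeman power-law rule, from which an un-straggled depth dose follows as the stopping power. The sketch below illustrates that idea only and is not the paper's TPS: straggling, beam optics, and nuclear interactions are ignored, and the coefficients are textbook approximations:

```python
def proton_range_cm(E_MeV, alpha=0.0022, p=1.77):
    """CSDA range in water from the Bragg-Kleeman rule R = alpha * E^p
    (alpha and p are common literature values for protons in water)."""
    return alpha * E_MeV ** p

def depth_dose(z_cm, E_MeV, alpha=0.0022, p=1.77):
    """Un-straggled depth dose ~ stopping power -dE/dz of the power-law
    range-energy relation. It rises toward the end of range (the Bragg
    peak) and diverges at z = R; real peaks are smoothed by straggling."""
    R = proton_range_cm(E_MeV, alpha, p)
    if z_cm >= R:
        return 0.0
    return (1.0 / (p * alpha ** (1.0 / p))) * (R - z_cm) ** (1.0 / p - 1.0)

# A 24-MeV beam (as in the mouse example above) stops after ~6 mm,
# which is why sub-0.1-mm targeting accuracy becomes demanding.
print(proton_range_cm(24.0))
```

The ~6 mm range of a 24-MeV beam makes clear why millimetric tumors and 0.1-mm targeting tolerances dominate the error budget in small-animal planning.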

  18. Modeling of Phenix End-of-Life control rod withdrawal benchmark with DYN3D SFR version

    Energy Technology Data Exchange (ETDEWEB)

    Nikitin, Evgeny; Fridman, Emil [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Reactor Safety

    2017-06-01

    The reactor dynamics code DYN3D is currently under extension for Sodium cooled Fast Reactor applications. The control rod withdrawal benchmark from the Phenix End-of-Life experiments was selected for verification and validation purposes. This report presents some selected results to demonstrate the feasibility of using DYN3D for steady-state Sodium cooled Fast Reactor analyses.

  19. Low Dose Radiation Cancer Risks: Epidemiological and Toxicological Models

    Energy Technology Data Exchange (ETDEWEB)

    David G. Hoel, PhD

    2012-04-19

The basic purpose of this one-year research grant was to extend the two-stage clonal expansion model (TSCE) of carcinogenesis to exposures other than the usual single acute exposure. The two-stage clonal expansion model of carcinogenesis incorporates the biological process of carcinogenesis, which involves two mutations and the clonal proliferation of the intermediate cells, in a stochastic, mathematical way. The current TSCE model serves a general purpose of acute exposure models but requires numerical computation of both the survival and hazard functions. The primary objective of this research project was to develop the analytical expressions for the survival function and the hazard function of the occurrence of the first cancer cell for acute, continuous and multiple exposure cases within the framework of the piecewise-constant-parameter two-stage clonal expansion model of carcinogenesis. For acute exposure and multiple exposures of acute series, either only the first mutation rate is allowed to vary with the dose, or all the parameters may be dose dependent; for multiple continuous exposures, all the parameters are allowed to vary with the dose. With these analytical functions, it becomes easy to evaluate the risks of cancer, and one can deal with the various exposure patterns in cancer risk assessment. A second objective was to apply the TSCE model with varying continuous exposures to the cancer studies of inhaled plutonium in beagle dogs. Using step functions to estimate the retention functions of the pulmonary exposure to plutonium, the multiple-exposure versions of the TSCE model were to be used to estimate the beagle dog lung cancer risks. The mathematical equations of the multiple-exposure versions of the TSCE model were developed. A draft manuscript, which is attached, provides the results of this mathematical work. The application work using the beagle dog data from plutonium exposure has not been completed due to the fact
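Because the TSCE model is a simple stochastic process, the constant-parameter case can also be estimated directly with a Gillespie-style simulation; the analytical survival and hazard functions developed in the project are the closed-form counterpart of this Monte Carlo estimate. All parameter values below are illustrative, and dose dependence would enter through the rates (e.g. the first mutation rate nu), as in the abstract:

```python
import random

def first_malignant_time(nu, alpha, beta, mu, T, rng):
    """One realisation of the two-stage clonal expansion model with
    constant parameters. Events on the intermediate-cell count n:
    initiation (rate nu), division (n*alpha), death (n*beta), and
    malignant conversion (n*mu). Returns the time of the first
    malignant cell, or None if none occurs by time T."""
    t, n = 0.0, 0
    while True:
        total = nu + n * (alpha + beta + mu)
        t += rng.expovariate(total)
        if t >= T:
            return None
        u = rng.random() * total
        if u < nu:
            n += 1                      # initiation: normal -> intermediate
        elif u < nu + n * alpha:
            n += 1                      # clonal expansion (division)
        elif u < nu + n * (alpha + beta):
            n -= 1                      # intermediate cell death
        else:
            return t                    # second mutation: first cancer cell

rng = random.Random(0)
trials = 500
hits = sum(first_malignant_time(0.1, 1.0, 1.0, 0.01, 50.0, rng) is not None
           for _ in range(trials))
print(hits / trials)  # empirical P(first cancer cell by t = 50)
```

The survival function S(T) is one minus this hit fraction; the grant's analytical expressions make the same quantity available without simulation.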

  20. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web technologies: first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies, using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  1. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    Full Text Available In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution’s competitive position and to learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature and the experience of the author from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  2. Modeling gamma radiation dose in dwellings due to building materials.

    Science.gov (United States)

    de Jong, Peter; van Dijk, Willem

    2008-01-01

    A model is presented that calculates the absorbed dose rate in air from gamma radiation emitted by building materials in a rectangular construction. The basis for these calculations is formed by a fixed set of specific absorbed dose rates (the dose rate per Bq kg(-1) of 238U, 232Th, and 40K), as determined for a standard geometry with the dimensions 4 x 5 x 2.8 m3. Using the computer codes Marmer and MicroShield, correction factors are assessed that quantify the influence of several room- and material-related parameters on the specific absorbed dose rates. The investigated parameters are the position in the construction; the thickness, density, and dimensions of the construction parts; the contribution from the outer leaf; the presence of doors and windows; the attenuation by internal partition walls; the contribution from building materials present in adjacent rooms; and the effect of non-equilibrium due to 222Rn exhalation. To verify the precision, the proposed method is applied to three Dutch reference dwellings, i.e., a row house, a coupled house, and a gallery apartment. The averaged difference with MCNP calculations is found to be 4%.
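    The calculation scheme outlined here (fixed specific absorbed dose rates scaled by activity concentrations and by multiplicative correction factors) can be sketched as follows. All coefficients and factor names are invented placeholders, not the paper's actual data:

```python
# Hypothetical specific absorbed dose rates (nGy/h per Bq/kg) for the
# standard 4 x 5 x 2.8 m3 room; placeholder values, not the paper's.
SPECIFIC_DOSE_RATE = {"U-238": 0.90, "Th-232": 1.10, "K-40": 0.08}

def absorbed_dose_rate(activity, corrections):
    """Dose rate in air (nGy/h): specific rates times activity
    concentrations (Bq/kg), scaled by multiplicative correction
    factors for wall thickness, density, doors/windows, etc."""
    f = 1.0
    for c in corrections.values():
        f *= c
    return f * sum(SPECIFIC_DOSE_RATE[n] * a for n, a in activity.items())

rate = absorbed_dose_rate(
    activity={"U-238": 40.0, "Th-232": 30.0, "K-40": 400.0},
    corrections={"thickness": 0.95, "density": 1.05, "windows": 0.90},
)
print(round(rate, 1))
```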

  3. Mathematical modelling for dose deposition in photon-therapy

    International Nuclear Information System (INIS)

    Pichard, Teddy

    2016-01-01

    Radiotherapy treatments consist in irradiating the patient with beams of energetic particles (typically photons) targeting the tumor. These particles are transported through the medium and deposit energy in it. This deposited energy is the so-called dose, responsible for the biological effect of the radiation. The present work aims to develop numerical methods for dose computation and optimization that are competitive, in terms of computational cost and accuracy, with reference methods. The motion of particles is first studied through a system of linear transport equations at the kinetic level. However, solving such systems directly is numerically too costly for medical applications. Instead, the moment method is used, with a special focus on the Mn models. These moment equations are non-linear and valid under a condition called realizability. Standard numerical schemes for moment equations are constrained by stability conditions, which happen to be very restrictive when the medium contains low-density regions. Unconditionally stable numerical schemes adapted to moment equations (preserving the realizability property) are developed. These schemes are shown to be competitive in terms of computational cost compared to reference approaches. Finally, they are applied in an optimization procedure aiming to maximize the dose in the tumor and to minimize the dose in healthy tissues. (author) [fr]

  4. A comprehensive set of benchmark tests for a land surface model of simultaneous fluxes of water and carbon at both the global and seasonal scale

    Directory of Open Access Journals (Sweden)

    E. Blyth

    2011-04-01

    Full Text Available Evaluating the models we use in prediction is important as it allows us to identify uncertainties in prediction as well as guiding the priorities for model development. This paper describes a set of benchmark tests that is designed to quantify the performance of the land surface model that is used in the UK Hadley Centre General Circulation Model (JULES: Joint UK Land Environment Simulator. The tests are designed to assess the ability of the model to reproduce the observed fluxes of water and carbon at the global and regional spatial scale, and on a seasonal basis. Five datasets are used to test the model: water and carbon dioxide fluxes from ten FLUXNET sites covering the major global biomes, atmospheric carbon dioxide concentrations at four representative stations from the global network, river flow from seven catchments, the seasonal mean NDVI over the seven catchments and the potential land cover of the globe (after the estimated anthropogenic changes have been removed. The model is run in various configurations and results are compared with the data.

    A few examples are chosen to demonstrate that the combined use of observations of carbon and water fluxes is essential in order to understand the causes of model errors. The benchmarking approach is suitable for application to other global models.

  5. Biologically based modelling and simulation of carcinogenesis at low doses

    International Nuclear Information System (INIS)

    Ouchi, Noriyuki B.

    2003-01-01

    The process of carcinogenesis is studied by computer simulation. In general, a large number of experimental samples is needed to detect mutations at low doses, but in practice it is difficult to obtain that much data. To meet the requirements of the low-dose situation, it is useful to study the process of carcinogenesis using a biologically based mathematical model. We have mainly studied it using the so-called 'multi-stage model'; this model becomes complicated as the recent findings of molecular biology experiments are adopted. Moreover, because the basic idea of the multi-stage model rests on the epidemiological log-log variation of cancer incidence with age, it is difficult to compare it with experimental data from irradiated cell culture systems, which have been increasing in recent years. Taking the above into consideration, we concluded that a new model should have the following features: 1) the unit of the target system is a cell, 2) new information from molecular biology can easily be introduced, 3) it has spatial coordinates for checking colony formation or tumorigenesis. In this presentation, we show the details of the model and some simulation results for carcinogenesis. (author)

  6. OECD/NEA BENCHMARK FOR UNCERTAINTY ANALYSIS IN MODELING (UAM FOR LWRS – SUMMARY AND DISCUSSION OF NEUTRONICS CASES (PHASE I

    Directory of Open Access Journals (Sweden)

    RYAN N. BRATTON

    2014-06-01

    Full Text Available A Nuclear Energy Agency (NEA), Organization for Economic Co-operation and Development (OECD) benchmark for Uncertainty Analysis in Modeling (UAM) is defined in order to facilitate the development and validation of available uncertainty analysis and sensitivity analysis methods for best-estimate Light Water Reactor (LWR) design and safety calculations. The benchmark has been named the OECD/NEA UAM-LWR benchmark, and has been divided into three phases, each of which focuses on a different portion of the uncertainty propagation in LWR multi-physics and multi-scale analysis. Several different reactor cases are modeled at various phases of a reactor calculation. This paper discusses Phase I, known as the “Neutronics Phase”, which is devoted mostly to the propagation of nuclear data (cross-section) uncertainty throughout steady-state stand-alone neutronics core calculations. Three reactor systems (for which design, operation and measured data are available) are rigorously studied in this benchmark: Peach Bottom Unit 2 BWR, Three Mile Island Unit 1 PWR, and VVER-1000 Kozloduy-6/Kalinin-3. Additional measured data are analyzed, such as the KRITZ LEU criticality experiments and the SNEAK-7A and 7B experiments of the Karlsruhe Fast Critical Facility. Analyzed results include the top five neutron-nuclide reactions, which contribute the most to the prediction uncertainty in keff, as well as the uncertainty in key parameters of neutronics analysis such as microscopic and macroscopic cross-sections, six-group decay constants, assembly discontinuity factors, and axial and radial core power distributions. Conclusions are drawn regarding where further studies should be done to reduce uncertainties in key nuclide reactions (i.e., 238U radiative capture and inelastic scattering (n, n’)) as well as in the average number of neutrons released per fission event of 239Pu.

  7. A generic high-dose rate {sup 192}Ir brachytherapy source for evaluation of model-based dose calculations beyond the TG-43 formalism

    Energy Technology Data Exchange (ETDEWEB)

    Ballester, Facundo, E-mail: Facundo.Ballester@uv.es [Department of Atomic, Molecular and Nuclear Physics, University of Valencia, Burjassot 46100 (Spain); Carlsson Tedgren, Åsa [Department of Medical and Health Sciences (IMH), Radiation Physics, Faculty of Health Sciences, Linköping University, Linköping SE-581 85, Sweden and Department of Medical Physics, Karolinska University Hospital, Stockholm SE-171 76 (Sweden); Granero, Domingo [Department of Radiation Physics, ERESA, Hospital General Universitario, Valencia E-46014 (Spain); Haworth, Annette [Department of Physical Sciences, Peter MacCallum Cancer Centre and Royal Melbourne Institute of Technology, Melbourne, Victoria 3000 (Australia); Mourtada, Firas [Department of Radiation Oncology, Helen F. Graham Cancer Center, Christiana Care Health System, Newark, Delaware 19713 (United States); Fonseca, Gabriel Paiva [Instituto de Pesquisas Energéticas e Nucleares – IPEN-CNEN/SP, São Paulo 05508-000, Brazil and Department of Radiation Oncology (MAASTRO), GROW, School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Zourari, Kyveli; Papagiannis, Panagiotis [Medical Physics Laboratory, Medical School, University of Athens, 75 MikrasAsias, Athens 115 27 (Greece); Rivard, Mark J. [Department of Radiation Oncology, Tufts University School of Medicine, Boston, Massachusetts 02111 (United States); Siebert, Frank-André [Clinic of Radiotherapy, University Hospital of Schleswig-Holstein, Campus Kiel, Kiel 24105 (Germany); Sloboda, Ron S. [Department of Medical Physics, Cross Cancer Institute, Edmonton, Alberta T6G 1Z2, Canada and Department of Oncology, University of Alberta, Edmonton, Alberta T6G 2R3 (Canada); and others

    2015-06-15

    Purpose: In order to facilitate a smooth transition for brachytherapy dose calculations from the American Association of Physicists in Medicine (AAPM) Task Group No. 43 (TG-43) formalism to model-based dose calculation algorithms (MBDCAs), treatment planning systems (TPSs) using a MBDCA require a set of well-defined test case plans characterized by Monte Carlo (MC) methods. This also permits direct dose comparison to TG-43 reference data. Such test case plans should be made available for use in the software commissioning process performed by clinical end users. To this end, a hypothetical, generic high-dose rate (HDR) {sup 192}Ir source and a virtual water phantom were designed, which can be imported into a TPS. Methods: A hypothetical, generic HDR {sup 192}Ir source was designed based on commercially available sources as well as a virtual, cubic water phantom that can be imported into any TPS in DICOM format. The dose distribution of the generic {sup 192}Ir source when placed at the center of the cubic phantom, and away from the center under altered scatter conditions, was evaluated using two commercial MBDCAs [Oncentra{sup ®} Brachy with advanced collapsed-cone engine (ACE) and BrachyVision ACUROS{sup TM}]. Dose comparisons were performed using state-of-the-art MC codes for radiation transport, including ALGEBRA, BrachyDose, GEANT4, MCNP5, MCNP6, and PENELOPE2008. The methodologies adhered to recommendations in the AAPM TG-229 report on high-energy brachytherapy source dosimetry. TG-43 dosimetry parameters, an along-away dose-rate table, and primary and scatter separated (PSS) data were obtained. The virtual water phantom of (201){sup 3} voxels (1 mm sides) was used to evaluate the calculated dose distributions. Two test case plans involving a single position of the generic HDR {sup 192}Ir source in this phantom were prepared: (i) source centered in the phantom and (ii) source displaced 7 cm laterally from the center. Datasets were independently produced by

  8. Testing the metacognitive model against the benchmark CBT model of social anxiety disorder: Is it time to move beyond cognition?

    Directory of Open Access Journals (Sweden)

    Henrik Nordahl

    Full Text Available The recommended treatment for Social Phobia is individual Cognitive-Behavioural Therapy (CBT). CBT-treatments emphasize social self-beliefs (schemas) as the core underlying factor for maladaptive self-processing and social anxiety symptoms. However, the need for such beliefs in models of psychopathology has recently been questioned. Specifically, the metacognitive model of psychological disorders asserts that particular beliefs about thinking (metacognitive beliefs) are involved in most disorders, including social anxiety, and are a more important factor underlying pathology. Comparing the relative importance of these disparate underlying belief systems has the potential to advance conceptualization and treatment for SAD. In the cognitive model, unhelpful self-regulatory processes (self-attention and safety behaviours) arise from (e.g. correlate with) cognitive beliefs (schemas), whilst the metacognitive model proposes that such processes arise from metacognitive beliefs. In the present study we therefore set out to evaluate the absolute and relative fit of the cognitive and metacognitive models in a longitudinal data-set, using structural equation modelling. Five hundred and five (505) participants completed a battery of self-report questionnaires at two time points approximately 8 weeks apart. We found that both models fitted the data, but that the metacognitive model was a better fit to the data than the cognitive model. Further, a specified metacognitive model, emphasising negative metacognitive beliefs about the uncontrollability and danger of thoughts and cognitive confidence, improved the model fit further and was significantly better than the cognitive model. It would seem that advances in understanding and treating social anxiety could benefit from moving to a full metacognitive theory that includes negative metacognitive beliefs about the uncontrollability and danger of thoughts, and judgements of cognitive confidence. These findings challenge

  9. Testing the metacognitive model against the benchmark CBT model of social anxiety disorder: Is it time to move beyond cognition?

    Science.gov (United States)

    Nordahl, Henrik; Wells, Adrian

    2017-01-01

    The recommended treatment for Social Phobia is individual Cognitive-Behavioural Therapy (CBT). CBT-treatments emphasize social self-beliefs (schemas) as the core underlying factor for maladaptive self-processing and social anxiety symptoms. However, the need for such beliefs in models of psychopathology has recently been questioned. Specifically, the metacognitive model of psychological disorders asserts that particular beliefs about thinking (metacognitive beliefs) are involved in most disorders, including social anxiety, and are a more important factor underlying pathology. Comparing the relative importance of these disparate underlying belief systems has the potential to advance conceptualization and treatment for SAD. In the cognitive model, unhelpful self-regulatory processes (self-attention and safety behaviours) arise from (e.g. correlate with) cognitive beliefs (schemas) whilst the metacognitive model proposes that such processes arise from metacognitive beliefs. In the present study we therefore set out to evaluate the absolute and relative fit of the cognitive and metacognitive models in a longitudinal data-set, using structural equation modelling. Five-hundred and five (505) participants completed a battery of self-report questionnaires at two time points approximately 8 weeks apart. We found that both models fitted the data, but that the metacognitive model was a better fit to the data than the cognitive model. Further, a specified metacognitive model, emphasising negative metacognitive beliefs about the uncontrollability and danger of thoughts and cognitive confidence improved the model fit further and was significantly better than the cognitive model. It would seem that advances in understanding and treating social anxiety could benefit from moving to a full metacognitive theory that includes negative metacognitive beliefs about the uncontrollability and danger of thoughts, and judgements of cognitive confidence. These findings challenge a core

  10. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessment of the operational performance of radiation detection systems. This can, however, result in large and complex scenarios which are time-consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine the benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations, with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty, to include gamma transport, neutron transport, or both, and to represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations was assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  11. Radionuclide transport and dose assessment modelling in biosphere assessment 2009

    International Nuclear Information System (INIS)

    Hjerpe, T.; Broed, R.

    2010-11-01

    Following the guidelines set forth by the Ministry of Trade and Industry (now Ministry of Employment and Economy), Posiva is preparing to submit a construction license application for the final disposal of spent nuclear fuel at the Olkiluoto site, Finland, by the end of the year 2012. Disposal will take place in a geological repository implemented according to the KBS-3 method. The long-term safety section supporting the license application will be based on a safety case that, according to the internationally adopted definition, will be a compilation of the evidence, analyses and arguments that quantify and substantiate the safety and the level of expert confidence in the safety of the planned repository. This report documents in detail the conceptual and mathematical models and key data used in the landscape model set-up, radionuclide transport modelling, and radiological consequences analysis applied in the 2009 biosphere assessment. The resulting environmental activity concentrations in the landscape model due to constant unit geosphere release rates, and the corresponding annual doses, are also calculated and presented in this report. This provides the basis for understanding the behaviour of the applied landscape model and the subsequent dose calculations. (orig.)

  12. Biological profiling and dose-response modeling tools ...

    Science.gov (United States)

    Through its ToxCast project, the U.S. EPA has developed a battery of in vitro high throughput screening (HTS) assays designed to assess the potential toxicity of environmental chemicals. At present, over 1800 chemicals have been tested in up to 600 assays, yielding a large number of concentration-response data sets. Standard processing of these data sets involves finding a best-fitting mathematical model and the set of model parameters that specify this model. The model parameters include quantities such as the half-maximal activity concentration (or “AC50”) that have biological significance and can be used to inform the efficacy or potency of a given chemical with respect to a given assay. All of these data are processed and stored in an online-accessible database and website: http://actor.epa.gov/dashboard2. Results from these in vitro assays are used in a multitude of ways. New pathways and targets can be identified and incorporated into new or existing adverse outcome pathways (AOPs). Pharmacokinetic models, such as those implemented in EPA’s HTTK R package, can be used to translate an in vitro concentration into an in vivo dose; i.e., one can predict the oral equivalent dose that might be expected to activate a specific biological pathway. Such predicted values can then be compared with estimated actual human exposures to prioritize chemicals for further testing. Any quantitative examination should be accompanied by an estimation of uncertainty. We are developing met
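    The concentration-response fitting described above (finding a best-fitting model and an AC50) can be illustrated with a Hill function and a crude grid-search least-squares fit on synthetic data. This is only a stand-in for the actual ToxCast processing pipeline, and all data values are made up:

```python
def hill(conc, top, ac50, n=1.0):
    """Hill concentration-response curve: activity at concentration conc."""
    return top * conc**n / (ac50**n + conc**n)

# Synthetic concentration-response data (hypothetical assay readout).
concs = [0.1, 0.3, 1.0, 3.0, 10.0, 30.0]
true_top, true_ac50 = 80.0, 2.0
obs = [hill(c, true_top, true_ac50) for c in concs]

def fit_ac50(concs, obs):
    """Crude grid-search least-squares fit of (top, AC50); real pipelines
    use proper optimizers and compare several candidate models."""
    best = None
    for top in [t / 2 for t in range(2, 241)]:               # 1.0 .. 120.0
        for log_ac50 in [l / 50 for l in range(-100, 101)]:  # 10**-2 .. 10**2
            ac50 = 10.0**log_ac50
            sse = sum((hill(c, top, ac50) - y)**2 for c, y in zip(concs, obs))
            if best is None or sse < best[0]:
                best = (sse, top, ac50)
    return best[1], best[2]

top_hat, ac50_hat = fit_ac50(concs, obs)
print(round(top_hat, 1), round(ac50_hat, 2))
```

    The fitted AC50 is the quantity that downstream pharmacokinetic models (e.g. reverse dosimetry) would translate into an oral equivalent dose.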

  13. On the influence of the exposure model on organ doses

    International Nuclear Information System (INIS)

    Drexler, G.; Eckerl, H.

    1988-01-01

    Based on the design characteristics of the MIRD-V phantom, two sex-specific adult phantoms, ADAM and EVA, were introduced especially for the calculation of organ doses resulting from external irradiation. Although the body characteristics of all the phantoms are in good agreement with those of the reference man and woman, they have some disadvantages related to the location and shape of the organs and the form of the whole body. To overcome these disadvantages and to obtain more realistic phantoms, a technique based on computer tomographic data (voxel phantoms) was developed. This technique allows any physical phantom or real body to be converted into computer files. The improvements are of special importance with regard to the skeleton, because a better modeling of the bone surfaces and a separation of hard bone and bone marrow can be achieved. For photon irradiation, the sensitivity of the organ doses, or of the effective dose equivalent, to the choice of model is important for operational radiation protection

  14. Benchmarking Academic Anatomic Pathologists

    Directory of Open Access Journals (Sweden)

    Barbara S. Ducatman MD

    2016-10-01

    Full Text Available The most common benchmarks for faculty productivity are derived from the Medical Group Management Association (MGMA) or Vizient-AAMC Faculty Practice Solutions Center® (FPSC) databases. The Association of Pathology Chairs has also collected similar survey data for several years. We examined the Association of Pathology Chairs annual faculty productivity data and compared it with MGMA and FPSC data to understand the value, inherent flaws, and limitations of benchmarking data. We hypothesized that the variability in calculated faculty productivity is due to the type of practice model and clinical effort allocation. Data from the Association of Pathology Chairs survey on 629 surgical pathologists and/or anatomic pathologists from 51 programs were analyzed. From a review of service assignments, we were able to assign each pathologist to a specific practice model: general anatomic pathologists/surgical pathologists, 1 or more subspecialties, or a hybrid of the 2 models. There were statistically significant differences among academic ranks and practice types. When we analyzed our data using each organization’s methods, the median results for the anatomic pathologists/surgical pathologists general practice model compared to MGMA and FPSC results for anatomic and/or surgical pathology were quite close. Both MGMA and FPSC data exclude a significant proportion of academic pathologists with clinical duties. We used the more inclusive FPSC definition of clinical “full-time faculty” (0.60 clinical full-time equivalent and above). The correlation between clinical full-time equivalent effort allocation, annual days on service, and annual work relative value unit productivity was poor. This study demonstrates that effort allocations are variable across academic departments of pathology and do not correlate well with either work relative value unit effort or reported days on service. Although the Association of Pathology Chairs–reported median work relative

  15. Benchmarking time-dependent renormalized natural orbital theory with exact solutions for a laser-driven model helium atom

    Energy Technology Data Exchange (ETDEWEB)

    Brics, Martins

    2016-12-09

    -called renormalized natural orbitals (RNOs), TDRNOT is benchmarked with the help of a numerically exactly solvable model helium atom in laser fields. In the special case of time-dependent two-electron systems the two-particle density matrix in terms of ONs and NOs is known exactly. Hence, in this case TDRNOT is exact, apart from the unavoidable truncation of the number of RNOs per particle taken into account in the simulation. It is shown that, unlike TDDFT, TDRNOT is able to describe doubly-excited states, Fano profiles in electron and absorption spectra, auto-ionization, Rabi oscillations, high harmonic generation, non-sequential ionization, and single-photon double ionization in excellent agreement with the corresponding TDSE results.

  16. Benchmarking time-dependent renormalized natural orbital theory with exact solutions for a laser-driven model helium atom

    International Nuclear Information System (INIS)

    Brics, Martins

    2016-01-01

    -called renormalized natural orbitals (RNOs), TDRNOT is benchmarked with the help of a numerically exactly solvable model helium atom in laser fields. In the special case of time-dependent two-electron systems the two-particle density matrix in terms of ONs and NOs is known exactly. Hence, in this case TDRNOT is exact, apart from the unavoidable truncation of the number of RNOs per particle taken into account in the simulation. It is shown that, unlike TDDFT, TDRNOT is able to describe doubly-excited states, Fano profiles in electron and absorption spectra, auto-ionization, Rabi oscillations, high harmonic generation, non-sequential ionization, and single-photon double ionization in excellent agreement with the corresponding TDSE results.

  17. THE APPLICATION OF DATA ENVELOPMENT ANALYSIS METHODOLOGY TO IMPROVE THE BENCHMARKING PROCESS IN THE EFQM BUSINESS MODEL (CASE STUDY: AUTOMOTIVE INDUSTRY OF IRAN

    Directory of Open Access Journals (Sweden)

    K. Shahroudi

    2009-10-01

    Full Text Available This paper reports survey and case study research outcomes on the application of Data Envelopment Analysis (DEA) to the ranking method of the European Foundation for Quality Management (EFQM) Business Excellence Model in Iran’s automotive industry, and on improving the benchmarking process after assessment. Following the global trend, the Iranian industry leaders have introduced the EFQM practice to their supply chain in order to improve the supply base competitiveness during the last four years. A question which arises is whether the EFQM model can be combined with a mathematical model such as DEA in order to generate a new ranking method and to develop or facilitate the benchmarking process. The model developed in this paper is simple; however, it provides some new and interesting insights. The paper assesses the usefulness and capability of the DEA technique to derive a new scoring system in order to compare the classical ranking method with the EFQM business model. We used this method to identify meaningful exemplar companies for each criterion of the EFQM model, and then designed a road map based on realistic targets for the criteria that have already been achieved by exemplar companies. The research indicates that the DEA approach is a reliable tool to analyze the latent knowledge of scores generated by conducting self-assessments. The Wilcoxon Rank Sum Test is used to compare the two sets of scores, and the test of hypothesis reveals a meaningful relation between the EFQM and DEA ranking methods. Finally, we drew a road map based on the benchmarking concept using the research results.
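    A minimal sketch of the DEA building block used in this kind of study: the input-oriented CCR efficiency of one decision-making unit (DMU), written as a linear program in multiplier form. The data below are toy values, not the study's automotive-industry scores:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0 (multiplier form).

    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs).
    Solves: max u.y0  s.t.  v.x0 = 1,  u.y_j - v.x_j <= 0,  u, v >= 0.
    """
    n, m = X.shape
    _, s = Y.shape
    c = np.concatenate([-Y[j0], np.zeros(m)])        # minimize -u.y0
    A_ub = np.hstack([Y, -X])                        # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[j0]]).reshape(1, -1)  # v.x0 = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun

# Toy data: 3 units, one input, one output (illustrative only).
X = np.array([[2.0], [4.0], [8.0]])
Y = np.array([[2.0], [3.0], [4.0]])
effs = [round(dea_ccr_efficiency(X, Y, j), 3) for j in range(3)]
print(effs)  # → [1.0, 0.75, 0.5]
```

    Units scoring 1.0 lie on the efficient frontier; in the paper's setting these would be the exemplar companies against which road-map targets are set.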

  18. Uranium systems to enhance benchmarks for use in the verification of criticality safety computer models. Final report, February 16, 1990--December 31, 1994

    International Nuclear Information System (INIS)

    Busch, R.D.

    1995-01-01

    Dr. Robert Busch of the Department of Chemical and Nuclear Engineering was the principal investigator on this project, with technical direction provided by the staff of the Nuclear Criticality Safety Group at Los Alamos. During the period of the contract, he had a number of graduate and undergraduate students working on subtasks. The objective of this work was to develop information on uranium systems to enhance benchmarks for use in the verification of criticality safety computer models. During the first year of the project, most of the work was focused on setting up the SUN SPARC-1 workstation and acquiring the literature describing the critical experiments. By August 1990, the workstation was operational with the current version of TWODANT loaded on the system. The MCNP version 4 tape was made available from Los Alamos late in 1990. Various documents were acquired which provide the initial descriptions of the critical experiments under consideration as benchmarks. The next four years were spent working on various benchmark projects. A number of publications and presentations were made on this material; these are briefly discussed in this report.

  19. Inverse modeling of FIB milling by dose profile optimization

    International Nuclear Information System (INIS)

    Lindsey, S.; Waid, S.; Hobler, G.; Wanzenböck, H.D.; Bertagnolli, E.

    2014-01-01

    FIB technologies possess a unique ability to form topographies that are difficult or impossible to generate with binary etching through typical photo-lithography. The ability to arbitrarily vary the spatial dose distribution, and therefore the amount of milling, opens possibilities for the production of a wide range of functional structures with applications in biology, chemistry, and optics. In practice, however, the realization of these goals is made difficult by the angular dependence of the sputtering yield and by redeposition effects that vary as the topography evolves. An inverse modeling algorithm is presented that optimizes dose profiles, defined as the superposition of time-invariant pixel dose profiles (determined from the beam parameters and pixel dwell times). The response of the target to a set of pixel dwell times is modeled by numerical continuum simulations utilizing 1st and 2nd order sputtering and redeposition; the resulting surfaces are evaluated with respect to a target topography in an error minimization routine. Two algorithms for the parameterization of pixel dwell times are presented: a direct pixel dwell time method, and an abstracted method that uses a refineable piecewise linear cage function to generate pixel dwell times from a minimal number of parameters. The cage function method demonstrates great flexibility and efficiency, with performance enhancements exceeding ∼10× compared to direct fitting for medium to large simulation sets. Furthermore, the refineable nature of the cage function enables solutions to adapt to the desired target function. The optimization algorithm, although working with stationary dose profiles, is demonstrated to be applicable also outside the quasi-static approximation. Experimental data confirm the viability of the solutions for 5 × 7 μm deep lens-like structures defined by 90 pixel dwell times.

  20. Effects of existing evaluated nuclear data files on neutronics characteristics of the BFS-62-3A critical assembly benchmark model

    International Nuclear Information System (INIS)

    Semenov, Mikhail

    2002-11-01

    This report is a continuation of the study of the experiments performed on the BFS-62-3A critical assembly in Russia. The objective of this work is to define the effect of cross-section uncertainties on reactor neutronics parameters as applied to the hybrid core of the BN-600 reactor of Beloyarskaya NPP. A two-dimensional benchmark model of BFS-62-3A was created specially for these purposes, and the experimental values were reduced to it. The benchmark characteristics for this assembly are 1) criticality; 2) central fission rate ratios (spectral indices); and 3) fission rate distributions in the stainless steel reflector. The effects of nuclear data libraries have been studied by comparing the results calculated using the available modern data libraries - ENDF/B-V, ENDF/B-VI, ENDF/B-VI-PT, JENDL-3.2 and ABBN-93. All results were computed by the Monte Carlo method with continuous-energy cross sections. The cross sections of the major isotopes were checked against a wide collection of criticality benchmarks. It was shown that ENDF/B-V data underestimate the criticality of fast reactor systems by up to 2% Δk. For the remaining libraries, the difference between each other in criticality for BFS-62-3A is around 0.6% Δk. However, taking into account the results obtained for other fast reactor benchmarks (including steel-reflected ones), it may be concluded that the difference in criticality calculation results can reach 1% Δk. This value is in good agreement with the cross-section uncertainty evaluated for the BN-600 hybrid core (±0.6% Δk). This work is related to the JNC-IPPE Collaboration on Experimental Investigation of Excess Weapons Grade Pu Disposition in the BN-600 Reactor Using the BFS-2 Facility. (author)

  1. Benchmark Data Set for Wheat Growth Models: Field Experiments and AgMIP Multi-Model Simulations.

    Science.gov (United States)

    Asseng, S.; Ewert, F.; Martre, P.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P.J.; Rotter, R. P.

    2015-01-01

    The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments in Australia, The Netherlands, India and Argentina. Measurements include local daily climate data (solar radiation, maximum and minimum temperature, precipitation, surface wind, dew point temperature, relative humidity, and vapor pressure), soil characteristics, frequent measurements of growth, nitrogen in crop and soil, crop and soil water, and yield components. Simulations include results from 27 wheat models and a sensitivity analysis with 26 models and 30 years (1981-2010) for each location, for elevated atmospheric CO2 and temperature changes, a heat stress sensitivity analysis at anthesis, and a sensitivity analysis with soil and crop management variations and a Global Climate Model end-century scenario.

  2. Benchmarks for Uncertainty Analysis in Modelling (UAM) for the Design, Operation and Safety Analysis of LWRs - Volume I: Specification and Support Data for Neutronics Cases (Phase I)

    International Nuclear Information System (INIS)

    Ivanov, K.; Avramova, M.; Kamerow, S.; Kodeli, I.; Sartori, E.; Ivanov, E.; Cabellos, O.

    2013-01-01

    The objective of the OECD LWR UAM activity is to establish an internationally accepted benchmark framework to compare, assess and further develop different uncertainty analysis methods associated with the design, operation and safety of LWRs. As a result, the LWR UAM benchmark will help to address current nuclear power generation industry and regulation needs and issues related to the practical implementation of risk-informed regulation. The realistic evaluation of consequences must be made with best-estimate coupled codes, but to be meaningful, such results should be supplemented by an uncertainty analysis. The use of coupled codes allows us to avoid unnecessary penalties due to incoherent approximations in the traditional decoupled calculations, and to obtain a more accurate evaluation of margins regarding the licensing limit. This becomes important for licensing power upgrades, improved fuel assembly and control rod designs, higher burn-up and other issues related to operating LWRs, as well as to the new Generation 3+ designs being licensed now (ESBWR, AP-1000, EPR-1600, etc.). Establishing an internationally accepted LWR UAM benchmark framework offers the possibility to accelerate the licensing process when using best-estimate methods. The proposed technical approach is to establish a benchmark for uncertainty analysis in best-estimate modelling and coupled multi-physics and multi-scale LWR analysis, using as a basis a series of well-defined problems with complete sets of input specifications and reference experimental data. The objective is to determine the uncertainty in LWR system calculations at all stages of coupled reactor physics/thermal hydraulics calculations. The full chain of uncertainty propagation from basic data, engineering uncertainties, across different scales (multi-scale), and physics phenomena (multi-physics) will be tested on a number of benchmark exercises for which experimental data are available and for which the power plant details have been

  3. Calculations of dose distributions using a neural network model

    International Nuclear Information System (INIS)

    Mathieu, R; Martin, E; Gschwind, R; Makovicka, L; Contassot-Vivier, S; Bahi, J

    2005-01-01

    The main goal of external beam radiotherapy is the treatment of tumours while sparing, as much as possible, surrounding healthy tissues. In order to master and optimize the dose distribution within the patient, dosimetric planning has to be carried out. Thus, for determining the most accurate dose distribution during treatment planning, a compromise must be found between the precision and the speed of calculation. Current techniques, using analytic methods, models and databases, are rapid but lack precision. Enhanced precision can be achieved by using calculation codes based, for example, on Monte Carlo methods; however, in spite of all efforts to optimize speed (methods and computer improvements), Monte Carlo based methods remain painfully slow. A newer way to handle these problems is to employ neural networks for dosimetric calculation. Neural networks (Wu and Zhu 2000 Phys. Med. Biol. 45 913-22) provide the advantages of the various approaches above while avoiding their main inconvenience, i.e., time-consuming calculations. This permits us to obtain quick and accurate results during clinical treatment planning. Currently, results obtained for a single depth-dose calculation using a Monte Carlo based code (such as BEAM (Rogers et al 2003 NRCC Report PIRS-0509(A) rev G)) require hours of computing. By contrast, the practical use of neural networks (Mathieu et al 2003 Proceedings Journees Scientifiques Francophones, SFRP) provides almost instant results with quite low errors (less than 2%) for a two-dimensional dosimetric map.

  4. Estimating adolescent sleep need using dose-response modeling.

    Science.gov (United States)

    Short, Michelle A; Weber, Nathan; Reynolds, Chelsea; Coussens, Scott; Carskadon, Mary A

    2018-04-01

    This study will (1) estimate the nightly sleep need of human adolescents, (2) determine the time course and severity of sleep-related deficits when sleep is reduced below this optimal quantity, and (3) determine whether sleep restriction perturbs the circadian system as well as the sleep homeostat. Thirty-four adolescents aged 15 to 17 years spent 10 days and nine nights in the sleep laboratory. Between two baseline nights and two recovery nights with 10 hours' time in bed (TIB) per night, participants experienced either severe sleep restriction (5-hour TIB), moderate sleep restriction (7.5-hour TIB), or no sleep restriction (10-hour TIB) for five nights. A 10-minute psychomotor vigilance task (PVT; lapse = response after 500 ms) and the Karolinska Sleepiness Scale were administered every 3 hours during wake. Salivary dim-light melatonin onset was calculated at baseline and after four nights of each sleep dose to estimate circadian phase. Dose-dependent deficits to sleep duration, circadian phase timing, lapses of attention, and subjective sleepiness occurred. Less TIB resulted in less sleep, more lapses of attention, greater subjective sleepiness, and larger circadian phase delays. Sleep need estimated from 10-hour TIB sleep opportunities was approximately 9 hours, while modeling PVT lapse data suggested that 9.35 hours of sleep is needed to maintain optimal sustained attention performance. Sleep restriction perturbs homeostatic and circadian systems, leading to dose-dependent deficits to sustained attention and sleepiness. Adolescents require more sleep for optimal functioning than typically obtained.
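A dose-response fit of this kind can be sketched by regressing a performance outcome on sleep dose and reading off where the curve flattens; the exponential form, the starting values and the lapse counts below are invented for illustration and are not the study's data or model:

```python
import numpy as np
from scipy.optimize import curve_fit

# hypothetical mean PVT lapse counts at three time-in-bed (TIB) doses, two subjects each
tib    = np.array([5.0, 5.0, 7.5, 7.5, 10.0, 10.0])   # hours
lapses = np.array([10.1, 10.2, 5.0, 5.1, 3.1, 3.0])

def decay(t, a, k, c):
    # lapses fall exponentially toward an asymptote c as the sleep dose grows
    return a * np.exp(-k * t) + c

(a, k, c), _ = curve_fit(decay, tib, lapses, p0=(60.0, 0.4, 2.0))

# "sleep need" read off as the dose where the curve comes within 1 lapse of its asymptote
need = np.log(a / 1.0) / k
print(round(need, 2))
```

The estimate depends on the chosen closeness threshold, which mirrors the study's point that modeled sleep need (about 9.35 h from lapse data) can differ from sleep need inferred from long sleep opportunities.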

  5. The models of internal dose calculation in ICRP

    International Nuclear Information System (INIS)

    Nakano, Takashi

    1995-01-01

    There has been much discussion of internal dose calculation in ICRP, and many efforts are devoted to improving the models and parameters. In this report, we discuss what kinds of models and parameters are used in ICRP. The models are divided into two parts: the dosimetric model and the biokinetic model. The former is a mathematical phantom model, developed mainly at ORNL, whose results are used by many researchers. The latter is a compartment model, whose parameter values are difficult to determine because of their age dependency. ICRP officially sets values at the ages of 3 months, 1 year, 5 years, 10 years, 15 years and adult, and recommends obtaining values between these ages by linear interpolation. The basic equation is difficult to solve with these values, so it is calculated by computer; however, the scheme is complex and needs long CPU time, so approximated equations should be developed. The parameter values include much uncertainty because of limited experimental data, especially for children. Moreover, these models and parameter values are for Caucasians, and it should be asked whether they correctly describe other populations: body size affects the calculated SAF values, and differences in metabolism change the biokinetic pattern. (author)

  6. Dose estimation with the help of food chain compartment models

    International Nuclear Information System (INIS)

    Murzin, N.V.

    1987-01-01

    Food chain compartment models for the calculation of human irradiation doses are considered. According to the type of interaction between compartments, these models are divided into steady-state compartment models (SSCM) and dynamic compartment models (DCM). SSCM are built on the postulate of steady-state equilibrium within the organism-environment system. DCM are based on two main assumptions: 1) the food chain may be divided into several interacting compartments between which radionuclide exchange occurs, with the radionuclide specific activity identical in all parts of a compartment at any instant of time; 2) radionuclide losses from a compartment are proportional to the radionuclide specific activity in that compartment. The construction principles for an economical compartment model are considered.
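A minimal dynamic compartment model of the kind described can be sketched as a set of first-order transfer equations integrated in time; the soil-plant-human chain, rate constants and initial activity below are assumed purely for illustration:

```python
import numpy as np

# soil -> plant -> human chain with first-order transfers and radioactive decay
k_sp, k_ph, lam = 0.05, 0.02, 0.001   # transfer and decay constants, 1/day
A = np.array([1000.0, 0.0, 0.0])      # initial activity (Bq) per compartment
dt, days = 0.1, 365.0

# explicit Euler integration of the compartment equations
for _ in range(int(days / dt)):
    flow_sp = k_sp * A[0]             # losses proportional to compartment activity
    flow_ph = k_ph * A[1]
    dA = np.array([-flow_sp, flow_sp - flow_ph, flow_ph]) - lam * A
    A += dA * dt

print(A.round(1))  # activity in soil, plant and human after one year
```

Total activity is conserved apart from radioactive decay, which gives a simple sanity check on the integration.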

  7. Residential radon in Finland: sources, variation, modelling and dose comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Arvela, H

    1995-09-01

    The study deals with sources of indoor radon in Finland, seasonal variations in radon concentration, the effect of house construction and ventilation and also with the radiation dose from indoor radon and terrestrial gamma radiation. The results are based on radon measurements in approximately 4000 dwellings and on air exchange measurements in 250 dwellings as well as on model calculations. The results confirm that convective soil air flow is by far the most important source of indoor radon in Finnish low-rise residential housing. (97 refs., 61 figs., 30 tabs.).

  8. Residential radon in Finland: sources, variation, modelling and dose comparisons

    International Nuclear Information System (INIS)

    Arvela, H.

    1995-09-01

    The study deals with sources of indoor radon in Finland, seasonal variations in radon concentration, the effect of house construction and ventilation and also with the radiation dose from indoor radon and terrestrial gamma radiation. The results are based on radon measurements in approximately 4000 dwellings and on air exchange measurements in 250 dwellings as well as on model calculations. The results confirm that convective soil air flow is by far the most important source of indoor radon in Finnish low-rise residential housing. (97 refs., 61 figs., 30 tabs.)

  9. Comparison of two dose and three dose human papillomavirus vaccine schedules: cost effectiveness analysis based on transmission model.

    Science.gov (United States)

    Jit, Mark; Brisson, Marc; Laprise, Jean-François; Choi, Yoon Hong

    2015-01-06

    To investigate the incremental cost effectiveness of two dose human papillomavirus vaccination and of additionally giving a third dose. Cost effectiveness study based on a transmission dynamic model of human papillomavirus vaccination. Two dose schedules for bivalent or quadrivalent human papillomavirus vaccines were assumed to provide 10, 20, or 30 years' vaccine type protection and cross protection or lifelong vaccine type protection without cross protection. Three dose schedules were assumed to give lifelong vaccine type and cross protection. United Kingdom. Males and females aged 12-74 years. No, two, or three doses of human papillomavirus vaccine given routinely to 12 year old girls, with an initial catch-up campaign to 18 years. Costs (from the healthcare provider's perspective), health related utilities, and incremental cost effectiveness ratios. Giving at least two doses of vaccine seems to be highly cost effective across the entire range of scenarios considered at the quadrivalent vaccine list price of £86.50 (€109.23; $136.00) per dose. If two doses give only 10 years' protection but adding a third dose extends this to lifetime protection, then the third dose also seems to be cost effective at £86.50 per dose (median incremental cost effectiveness ratio £17,000, interquartile range £11,700-£25,800). If two doses protect for more than 20 years, then the third dose will have to be priced substantially lower (median threshold price £31, interquartile range £28-£35) to be cost effective. Results are similar for a bivalent vaccine priced at £80.50 per dose and when the same scenarios are explored by parameterising a Canadian model (HPV-ADVISE) with economic data from the United Kingdom. Two dose human papillomavirus vaccine schedules are likely to be the most cost effective option provided protection lasts for at least 20 years. As the precise duration of two dose schedules may not be known for decades, cohorts given two doses should be closely
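The headline quantity in such an analysis is the incremental cost-effectiveness ratio (ICER): extra cost divided by extra health gain. The cohort costs and QALY figures below are invented to show the arithmetic and are not the paper's inputs:

```python
# two-dose vs three-dose schedule for one vaccinated cohort (illustrative numbers)
cost_two,   qaly_two   = 260_000.0, 1_000.0
cost_three, qaly_three = 345_000.0, 1_005.0

# ICER = incremental cost per incremental QALY gained
icer = (cost_three - cost_two) / (qaly_three - qaly_two)
print(icer)  # -> 17000.0, compared against a willingness-to-pay threshold per QALY
```

A strategy is deemed cost effective when its ICER falls below the decision maker's willingness-to-pay threshold, which is how the paper derives threshold prices for the third dose.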

  10. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question we, in a total of 1728 benchmarking experiments, rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
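Forward selection of the kind benchmarked here can be sketched with ordinary least squares and a greedy search; the synthetic data and the `forward_select` helper are illustrative, and the study itself used random forest models rather than OLS:

```python
import numpy as np

def forward_select(X, y, max_vars=2):
    """Greedily add the descriptor that most reduces the residual sum of squares."""
    n, p = X.shape
    chosen = []
    for _ in range(max_vars):
        best_j, best_rss = None, np.inf
        for j in range(p):
            if j in chosen:
                continue
            A = np.column_stack([np.ones(n), X[:, chosen + [j]]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ beta
            rss = float(resid @ resid)
            if rss < best_rss:
                best_j, best_rss = j, rss
        chosen.append(best_j)
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))                       # 8 candidate descriptors
y = 3 * X[:, 2] - 2 * X[:, 5] + rng.normal(scale=0.1, size=100)
print(forward_select(X, y))  # recovers the two informative descriptors, 2 and 5
```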

  11. Application of the hybrid approach to the benchmark dose of urinary cadmium as the reference level for renal effects in cadmium polluted and non-polluted areas in Japan

    International Nuclear Information System (INIS)

    Suwazono, Yasushi; Nogawa, Kazuhiro; Uetani, Mirei; Nakada, Satoru; Kido, Teruhiko; Nakagawa, Hideaki

    2011-01-01

    Objectives: The aim of this study was to evaluate the reference level of urinary cadmium (Cd) that caused renal effects. An updated hybrid approach was used to estimate the benchmark doses (BMDs) and their 95% lower confidence limits (BMDL) in subjects with a wide range of exposure to Cd. Methods: The total number of subjects was 1509 (650 men and 859 women) in non-polluted areas and 3103 (1397 men and 1706 women) in the environmentally exposed Kakehashi river basin. We measured urinary cadmium (U-Cd) as a marker of long-term exposure, and β2-microglobulin (β2-MG) as a marker of renal effects. The BMD and BMDL that corresponded to an additional risk (BMR) of 5% were calculated with background risk at zero exposure set at 5%. Results: The U-Cd BMDL for β2-MG was 3.5 μg/g creatinine in men and 3.7 μg/g creatinine in women. Conclusions: The BMDL values for a wide range of U-Cd were generally within the range of values measured in non-polluted areas in Japan. This indicated that the hybrid approach is a robust method for different ranges of cadmium exposure. The present results may contribute further to recent discussions on health risk assessment of Cd exposure.
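The hybrid approach can be sketched as follows: assume the (log-transformed) marker is normally distributed around a dose-dependent mean, fix the cutoff so that 5% of unexposed subjects exceed it, then solve for the dose at which the exceedance probability rises by the benchmark response. The linear mean model and parameter values below are assumptions for illustration, not the study's fitted values:

```python
from scipy.stats import norm

# log(beta2-MG) ~ Normal(a + b * dose, sigma); illustrative parameters
a, b, sigma = 2.0, 0.30, 0.50
P0, BMR = 0.05, 0.05          # background risk and additional (benchmark) risk

cutoff = a + norm.ppf(1 - P0) * sigma                # 5% of unexposed exceed this
target_mean = cutoff - norm.ppf(1 - (P0 + BMR)) * sigma
bmd = (target_mean - a) / b                          # dose where risk reaches 10%
print(round(bmd, 3))  # -> 0.606 on this model's dose scale
```

The BMDL then follows by repeating the same calculation at the lower confidence bound of the fitted dose-response parameters.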

  12. Application of the hybrid approach to the benchmark dose of urinary cadmium as the reference level for renal effects in cadmium polluted and non-polluted areas in Japan

    Energy Technology Data Exchange (ETDEWEB)

    Suwazono, Yasushi, E-mail: suwa@faculty.chiba-u.jp [Department of Occupational and Environmental Medicine, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuoku, Chiba 260-8670 (Japan); Nogawa, Kazuhiro; Uetani, Mirei [Department of Occupational and Environmental Medicine, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuoku, Chiba 260-8670 (Japan); Nakada, Satoru [Safety and Health Organization, Chiba University, 1-33 Yayoicho, Inageku, Chiba 263-8522 (Japan); Kido, Teruhiko [Department of Community Health Nursing, Kanazawa University School of Health Sciences, 5-11-80 Kodatsuno, Kanazawa, Ishikawa 920-0942 (Japan); Nakagawa, Hideaki [Department of Epidemiology and Public Health, Kanazawa Medical University, 1-1 Daigaku, Uchnada, Ishikawa 920-0293 (Japan)

    2011-02-15

    Objectives: The aim of this study was to evaluate the reference level of urinary cadmium (Cd) that caused renal effects. An updated hybrid approach was used to estimate the benchmark doses (BMDs) and their 95% lower confidence limits (BMDL) in subjects with a wide range of exposure to Cd. Methods: The total number of subjects was 1509 (650 men and 859 women) in non-polluted areas and 3103 (1397 men and 1706 women) in the environmentally exposed Kakehashi river basin. We measured urinary cadmium (U-Cd) as a marker of long-term exposure, and β2-microglobulin (β2-MG) as a marker of renal effects. The BMD and BMDL that corresponded to an additional risk (BMR) of 5% were calculated with background risk at zero exposure set at 5%. Results: The U-Cd BMDL for β2-MG was 3.5 μg/g creatinine in men and 3.7 μg/g creatinine in women. Conclusions: The BMDL values for a wide range of U-Cd were generally within the range of values measured in non-polluted areas in Japan. This indicated that the hybrid approach is a robust method for different ranges of cadmium exposure. The present results may contribute further to recent discussions on health risk assessment of Cd exposure.

  13. ICSBEP-2007, International Criticality Safety Benchmark Experiment Handbook

    International Nuclear Information System (INIS)

    Blair Briggs, J.

    2007-01-01

    1 - Description: The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) is now an official activity of the Organisation for Economic Co-operation and Development - Nuclear Energy Agency (OECD-NEA). This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculational techniques used to establish minimum subcritical margins for operations with fissile material. The example calculations presented do not constitute a validation of the codes or cross section data. The work of the ICSBEP is documented as an International Handbook of Evaluated Criticality Safety Benchmark Experiments. Currently, the handbook spans over 42,000 pages and contains 464 evaluations representing 4,092 critical, near-critical, or subcritical configurations, 21 criticality alarm placement/shielding configurations with multiple dose points for each, and 46 configurations that have been categorized as fundamental physics measurements relevant to criticality safety applications. The handbook is intended for use by criticality safety analysts to perform necessary validations of their calculational techniques and is expected to be a valuable tool for decades to come. The ICSBEP Handbook is available on DVD. You may request a DVD by completing the DVD Request Form on the internet. Access to the Handbook on the Internet requires a password. You may request a password by completing the Password Request Form.
The Web address is: http://icsbep.inel.gov/handbook.shtml 2 - Method of solution: Experiments that are found

  14. Computational Modeling of Micrometastatic Breast Cancer Radiation Dose Response

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Daniel L.; Debeb, Bisrat G. [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Morgan Welch Inflammatory Breast Cancer Research Program and Clinic, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Thames, Howard D. [Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Woodward, Wendy A., E-mail: wwoodward@mdanderson.org [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Morgan Welch Inflammatory Breast Cancer Research Program and Clinic, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States)

    2016-09-01

    Purpose: Prophylactic cranial irradiation (PCI) involves giving radiation to the entire brain with the goals of reducing the incidence of brain metastasis and improving overall survival. Experimentally, we have demonstrated that PCI prevents brain metastases in a breast cancer mouse model. We developed a computational model to expand on and aid in the interpretation of our experimental results. Methods and Materials: MATLAB was used to develop a computational model of brain metastasis and PCI in mice. Model input parameters were optimized such that the model output would match the experimental number of metastases per mouse from the unirradiated group. An independent in vivo limiting-dilution experiment was performed to validate the model. The effect of whole brain irradiation at different measurement points after tumor cells were injected was evaluated in terms of the incidence, number of metastases, and tumor burden and was then compared with the corresponding experimental data. Results: In the optimized model, the correlation between the number of metastases per mouse and the experimental fits was >95%. Our attempt to validate the model with a limiting dilution assay produced a 99.9% correlation with respect to the incidence of metastases. The model accurately predicted the effect of whole-brain irradiation given 3 weeks after cell injection but substantially underestimated its effect when delivered 5 days after cell injection. The model further demonstrated that delaying whole-brain irradiation until the development of gross disease introduces a dose threshold that must be reached before a reduction in incidence can be realized. Conclusions: Our computational model of mouse brain metastasis and PCI correlated strongly with our experiments with unirradiated mice. The results further suggest that early treatment of subclinical disease is more effective than irradiating established disease.

  15. Development of a Model Protein Interaction Pair as a Benchmarking Tool for the Quantitative Analysis of 2-Site Protein-Protein Interactions.

    Science.gov (United States)

    Yamniuk, Aaron P; Newitt, John A; Doyle, Michael L; Arisaka, Fumio; Giannetti, Anthony M; Hensley, Preston; Myszka, David G; Schwarz, Fred P; Thomson, James A; Eisenstein, Edward

    2015-12-01

    A significant challenge in the molecular interaction field is to accurately determine the stoichiometry and stepwise binding affinity constants for macromolecules having >1 binding site. The mission of the Molecular Interactions Research Group (MIRG) of the Association of Biomolecular Resource Facilities (ABRF) is to show how biophysical technologies are used to quantitatively characterize molecular interactions, and to educate the ABRF members and scientific community on the utility and limitations of core technologies [such as biosensor, microcalorimetry, or analytical ultracentrifugation (AUC)]. In the present work, the MIRG has developed a robust model protein interaction pair consisting of a bivalent variant of the Bacillus amyloliquefaciens extracellular RNase barnase and a variant of its natural monovalent intracellular inhibitor protein barstar. It is demonstrated that this system can serve as a benchmarking tool for the quantitative analysis of 2-site protein-protein interactions. The protein interaction pair enables determination of precise binding constants for the barstar protein binding to 2 distinct sites on the bivalent barnase binding partner (termed binase), where the 2 binding sites were engineered to possess affinities that differed by 2 orders of magnitude. Multiple MIRG laboratories characterized the interaction using isothermal titration calorimetry (ITC), AUC, and surface plasmon resonance (SPR) methods to evaluate the feasibility of the system as a benchmarking model. Although general agreement was seen for the binding constants measured using solution-based ITC and AUC approaches, weaker affinity was seen for the surface-based SPR method, with protein immobilization likely affecting affinity. An analysis of the results from multiple MIRG laboratories suggests that the bivalent barnase-barstar system is a suitable model for benchmarking new approaches for the quantitative characterization of complex biomolecular interactions.

  16. Benchmarking the energy efficiency of commercial buildings

    International Nuclear Information System (INIS)

    Chung, William; Hui, Y.V.; Lam, Y. Miu

    2006-01-01

    Benchmarking energy-efficiency is an important tool to promote the efficient use of energy in commercial buildings. Benchmarking models are mostly constructed in a simple benchmark table (percentile table) of energy use, which is normalized with floor area and temperature. This paper describes a benchmarking process for energy efficiency by means of multiple regression analysis, where the relationship between energy-use intensities (EUIs) and the explanatory factors (e.g., operating hours) is developed. Using the resulting regression model, these EUIs are then normalized by removing the effect of deviance in the significant explanatory factors. The empirical cumulative distribution of the normalized EUI gives a benchmark table (or percentile table of EUI) for benchmarking an observed EUI. The advantage of this approach is that the benchmark table represents a normalized distribution of EUI, taking into account all the significant explanatory factors that affect energy consumption. An application to supermarkets is presented to illustrate the development and the use of the benchmarking method
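The normalization step can be sketched in a few lines: regress EUI on the significant explanatory factors, remove their estimated effect, and tabulate percentiles of the adjusted values. The single factor, synthetic data and re-centering convention below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
hours = rng.uniform(40, 120, size=200)                   # weekly operating hours
eui = 150 + 1.2 * (hours - 80) + rng.normal(scale=20, size=200)

# regress EUI on the explanatory factor ...
A = np.column_stack([np.ones_like(hours), hours])
beta, *_ = np.linalg.lstsq(A, eui, rcond=None)

# ... then remove its effect, re-centering on the sample mean conditions
normalised = eui - A @ beta + eui.mean()

# empirical percentile (benchmark) table of normalised EUI
print(np.percentile(normalised, [25, 50, 75]).round(1))
```

An observed building's normalised EUI is then benchmarked by locating it in this percentile table, with the effect of deviant operating hours already stripped out.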

  17. MOx Depletion Calculation Benchmark

    International Nuclear Information System (INIS)

    San Felice, Laurence; Eschbach, Romain; Dewi Syarifah, Ratna; Maryam, Seif-Eddine; Hesketh, Kevin

    2016-01-01

    Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Scientific Issues of Reactor Systems (WPRS) has been established to study the reactor physics, fuel performance, radiation transport and shielding, and the uncertainties associated with modelling of these phenomena in present and future nuclear power systems. The WPRS has different expert groups to cover a wide range of scientific issues in these fields. The Expert Group on Reactor Physics and Advanced Nuclear Systems (EGRPANS) was created in 2011 to perform specific tasks associated with reactor physics aspects of present and future nuclear power systems. EGRPANS provides expert advice to the WPRS and the nuclear community on the development needs (data and methods, validation experiments, scenario studies) for different reactor systems and also provides specific technical information regarding: core reactivity characteristics, including fuel depletion effects; core power/flux distributions; core dynamics and reactivity control. In 2013 EGRPANS published a report that investigated fuel depletion effects in a Pressurised Water Reactor (PWR). This was entitled 'International Comparison of a Depletion Calculation Benchmark on Fuel Cycle Issues' NEA/NSC/DOC(2013) that documented a benchmark exercise for UO2 fuel rods. This report documents a complementary benchmark exercise that focused on PuO2/UO2 Mixed Oxide (MOX) fuel rods. The results are especially relevant to the back-end of the fuel cycle, including irradiated fuel transport, reprocessing, interim storage and waste repository. Saint-Laurent B1 (SLB1) was the first French reactor to use MOX assemblies. SLB1 is a 900 MWe PWR, with 30% MOX fuel loading. The standard MOX assemblies, used in the Saint-Laurent B1 reactor, include three zones with different plutonium enrichments: high Pu content (5.64%) in the center zone, medium Pu content (4.42%) in the intermediate zone and low Pu content (2.91%) in the peripheral zone

  18. Comparative evaluation of 1D and quasi-2D hydraulic models based on benchmark and real-world applications for uncertainty assessment in flood mapping

    Science.gov (United States)

    Dimitriadis, Panayiotis; Tegos, Aristoteles; Oikonomou, Athanasios; Pagana, Vassiliki; Koukouvinos, Antonios; Mamassis, Nikos; Koutsoyiannis, Demetris; Efstratiadis, Andreas

    2016-03-01

    One-dimensional and quasi-two-dimensional hydraulic freeware models (HEC-RAS, LISFLOOD-FP and FLO-2D) are widely used for flood inundation mapping. These models are tested on a benchmark test with a mixed rectangular-triangular channel cross section. Using a Monte-Carlo approach, we employ extended sensitivity analysis by simultaneously varying the input discharge, longitudinal and lateral gradients and roughness coefficients, as well as the grid cell size. Based on statistical analysis of three output variables of interest, i.e. water depths at the inflow and outflow locations and total flood volume, we investigate the uncertainty enclosed in different model configurations and flow conditions, without the influence of errors and other assumptions on topography, channel geometry and boundary conditions. Moreover, we estimate the uncertainty associated with each input variable and compare it with the overall uncertainty. The outcomes of the benchmark analysis are further highlighted by applying the three models to real-world flood propagation problems, in the context of two challenging case studies in Greece.
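The Monte-Carlo sensitivity idea described in this record can be sketched in a few lines of Python. The rectangular channel, the Manning's-equation inversion and the parameter ranges below are illustrative assumptions for the sketch, not the models, cross section or settings used by the authors:

```python
import math
import random

def manning_discharge(depth, width, slope, n):
    """Discharge (m^3/s) of a rectangular channel from Manning's equation."""
    area = width * depth
    perimeter = width + 2.0 * depth
    radius = area / perimeter  # hydraulic radius
    return (1.0 / n) * area * radius ** (2.0 / 3.0) * math.sqrt(slope)

def normal_depth(q, width, slope, n, tol=1e-6):
    """Invert Manning's equation for flow depth by bisection."""
    lo, hi = tol, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if manning_discharge(mid, width, slope, n) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(42)
depths = []
for _ in range(1000):
    # vary input discharge, longitudinal gradient and roughness simultaneously
    q = random.uniform(80.0, 120.0)    # discharge, m^3/s (illustrative range)
    s = random.uniform(0.0005, 0.002)  # longitudinal slope
    n = random.uniform(0.03, 0.05)     # Manning roughness coefficient
    depths.append(normal_depth(q, width=20.0, slope=s, n=n))

mean_d = sum(depths) / len(depths)
spread = (sum((d - mean_d) ** 2 for d in depths) / len(depths)) ** 0.5
print(f"water depth: mean {mean_d:.2f} m, std {spread:.2f} m")
```

The spread of the output depths, relative to the spread obtained when only one input is varied at a time, is what separates the per-variable uncertainty from the overall one.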

  19. Imidazole derivatives as angiotensin II AT1 receptor blockers: Benchmarks, drug-like calculations and quantitative structure-activity relationships modeling

    Science.gov (United States)

    Alloui, Mebarka; Belaidi, Salah; Othmani, Hasna; Jaidane, Nejm-Eddine; Hochlaf, Majdi

    2018-03-01

    We performed benchmark studies on the molecular geometry, electronic properties and vibrational analysis of imidazole using semi-empirical, density functional theory and post-Hartree-Fock methods. These studies validated the use of AM1 for the treatment of larger systems. We then treated the structural, physical and chemical relationships for a series of imidazole derivatives acting as angiotensin II AT1 receptor blockers using AM1. QSAR studies were performed for these imidazole derivatives using a combination of various physicochemical descriptors. A multiple linear regression procedure was used to model the relationships between the molecular descriptors and the activity of the imidazole derivatives. The results validate the derived QSAR model.

  20. Numisheet2005 Benchmark Analysis on Forming of an Automotive Underbody Cross Member: Benchmark 2

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    This report presents an international cooperative benchmark effort focusing on simulations of a sheet metal stamping process. A forming process of an automotive underbody cross member using steel and aluminum blanks is used as the benchmark. Simulation predictions from each submission are analyzed via comparison with the experimental results. A brief summary of the various models submitted for this benchmark study is discussed. Prediction accuracy of each parameter of interest is discussed through the evaluation of cumulative errors from each submission.

  1. Mathematical models for calculating radiation dose to the fetus

    International Nuclear Information System (INIS)

    Watson, E.E.

    1992-01-01

    Estimates of radiation dose from radionuclides inside the body are calculated on the basis of energy deposition in mathematical models representing the organs and tissues of the human body. Complex models may be used with radiation transport codes to calculate the fraction of emitted energy that is absorbed in a target tissue even at a distance from the source. Other models may be simple geometric shapes for which absorbed fractions of energy have already been calculated. Models of Reference Man, the 15-year-old (Reference Woman), the 10-year-old, the five-year-old, the one-year-old, and the newborn have been developed and used for calculating specific absorbed fractions (absorbed fractions of energy per unit mass) for several different photon energies and many different source-target combinations. The Reference Woman model is adequate for calculating energy deposition in the uterus during the first few weeks of pregnancy. During the course of pregnancy, the embryo/fetus increases rapidly in size and thus requires several models for calculating absorbed fractions. In addition, the increases in size and changes in shape of the uterus and fetus result in the repositioning of the maternal organs and in different geometric relationships among the organs and the fetus. This is especially true of the excretory organs such as the urinary bladder and the various sections of the gastrointestinal tract. Several models have been developed for calculating absorbed fractions of energy in the fetus, including models of the uterus and fetus for each month of pregnancy and complete models of the pregnant woman at the end of each trimester. In this paper, the available models and the appropriate use of each will be discussed. (Author) 19 refs., 7 figs
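The specific-absorbed-fraction bookkeeping described in this record follows the standard MIRD-style schema: target dose = cumulated activity x sum over emissions of (yield x energy x specific absorbed fraction). A minimal sketch with purely illustrative numbers, not values from any phantom tabulation:

```python
MEV_TO_J = 1.602176634e-13  # MeV -> joules

def s_value(emissions, saf):
    """Dose per decay (Gy): sum over emissions of yield * energy * SAF,
    where SAF is the specific absorbed fraction (kg^-1) for one
    source -> target pair."""
    return sum(y * e_mev * MEV_TO_J for y, e_mev in emissions) * saf

def absorbed_dose(cumulated_activity, emissions, saf):
    """Mean absorbed dose (Gy) to the target: cumulated activity
    (total decays, i.e. Bq*s) times the dose per decay."""
    return cumulated_activity * s_value(emissions, saf)

# Purely illustrative inputs: one 0.364 MeV photon per decay,
# SAF = 1e-3 kg^-1, 1e9 decays in the source organ.
dose = absorbed_dose(1.0e9, [(1.0, 0.364)], 1.0e-3)
print(f"target dose = {dose:.3e} Gy")
```

Fetal dose estimation then reduces to choosing the SAF table appropriate to the stage of pregnancy, which is exactly why the month-by-month uterus/fetus models matter.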

  2. Modified Exponential (MOE) Models: statistical Models for Risk Estimation of Low dose Rate Radiation

    International Nuclear Information System (INIS)

    Ogata, H.; Furukawa, C.; Kawakami, Y.; Magae, J.

    2004-01-01

    Simultaneous inclusion of dose and dose-rate is required to evaluate the risk of long-term irradiation at low dose-rates, since biological responses to radiation are complex processes that depend on both irradiation time and total dose. Consequently, it is necessary to consider a model including cumulative dose, dose-rate and irradiation time to estimate the quantitative dose-response relationship for the biological response to radiation. In this study, we measured micronucleus formation and [3H]thymidine uptake in U2OS, a human osteosarcoma cell line, as indicators of biological response to gamma radiation. Cells were exposed to gamma rays in an irradiation room bearing a 50,000 Ci 60Co source. After irradiation, they were cultured for 24 h in the presence of cytochalasin B to block cytokinesis, and the cytoplasm and nucleus were stained with DAPI and propidium iodide. The number of binuclear cells bearing a micronucleus was counted under a fluorescence microscope. For proliferation inhibition, cells were cultured for 48 h after the irradiation and [3H]thymidine was pulsed for 4 h before harvesting. We statistically analyzed the data for quantitative evaluation of radiation risk at low dose/dose-rate. (Author)

  3. Benchmarking af kommunernes sagsbehandling

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007 onwards, Ankestyrelsen (the Danish National Social Appeals Board) is to carry out benchmarking of the quality of municipal casework. The purpose of the benchmarking is to develop the design of the practice reviews with a view to better follow-up, and to improve municipal casework. This working paper discusses methods for benchmarking...

  4. The Drill Down Benchmark

    NARCIS (Netherlands)

    P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel

    1998-01-01

    Data Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications. It

  5. Benchmarking Tool Kit.

    Science.gov (United States)

    Canadian Health Libraries Association.

    Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…

  6. Benchmarking monthly homogenization algorithms

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
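The first performance metric mentioned in this record, the centered root mean square error, can be sketched as follows (pure Python; the series are invented for illustration and are not benchmark data):

```python
def centered_rmse(estimate, truth):
    """Centered root mean square error: RMS difference after subtracting
    each series' own mean, so a constant offset contributes no error."""
    me = sum(estimate) / len(estimate)
    mt = sum(truth) / len(truth)
    return (sum(((e - me) - (t - mt)) ** 2
                for e, t in zip(estimate, truth)) / len(truth)) ** 0.5

truth = [10.0, 11.0, 12.0, 11.5, 10.5]   # "true" homogeneous series
shifted = [x + 0.4 for x in truth]       # pure break-style offset
noisy = [10.2, 10.7, 12.3, 11.4, 10.4]   # imperfectly homogenized series
print(centered_rmse(shifted, truth))     # ~0: constant offsets do not count
print(round(centered_rmse(noisy, truth), 3))
```

Centering is what makes the metric appropriate for homogenization: a homogenized series that differs from the truth only by a uniform shift is, for climate-trend purposes, a perfect reconstruction.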

  7. International handbook of evaluated criticality safety benchmark experiments

    International Nuclear Information System (INIS)

    2010-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Organization for Economic Cooperation and Development - Nuclear Energy Agency (OECD-NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculational techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirements and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span over 55,000 pages and contain 516 evaluations with benchmark specifications for 4,405 critical, near critical, or subcritical configurations, 24 criticality alarm placement/shielding configurations with multiple dose points for each, and 200 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications. Experiments that are found unacceptable for use as criticality safety benchmark experiments are discussed in these evaluations; however, benchmark specifications are not derived for such experiments (in some cases models are provided in an appendix). Approximately 770 experimental configurations are categorized as unacceptable for use as criticality safety benchmark experiments. 
Additional evaluations are in progress and will be

  8. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views...... are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  9. EGS4 benchmark program

    International Nuclear Information System (INIS)

    Yasu, Y.; Hirayama, H.; Namito, Y.; Yashiro, S.

    1995-01-01

    This paper proposes the EGS4 Benchmark Suite, which consists of three programs called UCSAMPL4, UCSAMPL4I and XYZDOS. This paper also evaluates optimization methods of recent RISC/UNIX systems, such as IBM, HP, DEC, Hitachi and Fujitsu, for the benchmark suite. When particular compiler options and math libraries were included in the evaluation process, the systems performed significantly better. The observed performance of some of the RISC/UNIX systems was beyond that of some so-called mainframes from IBM, Hitachi or Fujitsu. The computer performance of the EGS4 Code System on an HP9000/735 (99 MHz) was defined as one EGS4 Unit. The EGS4 Benchmark Suite was also run on various PCs, such as Pentium, i486 and DEC Alpha machines. The performance of recent fast PCs reaches that of recent RISC/UNIX systems. The benchmark programs have also been evaluated for correlation with industry benchmark programs, namely SPECmark. (author)

  10. Dose Response Model of Biological Reaction to Low Dose Rate Gamma Radiation

    International Nuclear Information System (INIS)

    Magae, J.; Furikawa, C.; Hoshi, Y.; Kawakami, Y.; Ogata, H.

    2004-01-01

    It is necessary to use reproducible and stable indicators to evaluate biological responses to long-term irradiation at low dose-rate. They should be simple and quantitative enough to produce statistically accurate results, because we have to analyze the subtle changes of biological responses around background level at low dose. For these purposes we chose micronucleus formation in U2OS, a human osteosarcoma cell line, as an indicator of biological response. Cells were exposed to gamma rays in an irradiation room bearing a 50,000 Ci 60Co source. After irradiation, they were cultured for 24 h in the presence of cytochalasin B to block cytokinesis, and the cytoplasm and nucleus were stained with DAPI and propidium iodide, respectively. The number of binuclear cells bearing micronuclei was counted under a fluorescence microscope. Dose rate in the irradiation room was measured with PLD. The dose response of PLD is linear from 1 mGy to 10 Gy, and the standard deviation of triplicate counts was several percent of the mean value. We statistically fitted dose-response curves to the data, plotted on linearly scaled response and dose coordinates. The results followed a straight line passing through the origin between 0.1 and 5 Gy, and the dose and dose-rate effectiveness factor (DDREF) was less than 2 when cells were irradiated for 1-10 min. The difference in the percentage of binuclear cells bearing micronuclei between irradiated and control cells was statistically significant at doses above 0.1 Gy when 5,000 binuclear cells were analyzed. In contrast, dose response curves never followed LNT when cells were irradiated for 7 to 124 days. The difference in the percentage of binuclear cells bearing micronuclei between irradiated and control cells was not statistically significant at doses below 6 Gy when cells were continuously irradiated for 124 days. 
These results suggest that the dose response curve of biological reaction is remarkably affected by exposure
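The straight-line-through-the-origin fit described in this record is an ordinary least-squares fit with the intercept constrained to zero, whose slope has the closed form b = sum(x*y) / sum(x*x). A minimal sketch with invented illustrative data, not the study's measurements:

```python
def fit_through_origin(doses, responses):
    """Least-squares slope b of response = b * dose (line through the origin):
    b = sum(x*y) / sum(x*x)."""
    return (sum(x * y for x, y in zip(doses, responses))
            / sum(x * x for x in doses))

# Invented illustrative data: % binuclear cells bearing micronuclei vs dose (Gy).
doses = [0.1, 0.5, 1.0, 2.0, 5.0]
responses = [0.21, 1.02, 1.98, 4.05, 9.95]
b = fit_through_origin(doses, responses)
print(f"fitted slope: {b:.3f} % per Gy")
```

Forcing the line through the origin encodes the linear no-threshold assumption; comparing such a fit against an unconstrained one is one simple way to test whether the data depart from LNT.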

  11. CEC thermal-hydraulic benchmark exercise on Fiploc verification experiment F2 in Battelle model containment. Experimental phases 2, 3 and 4. Results of comparisons

    International Nuclear Information System (INIS)

    Fischer, K.; Schall, M.; Wolf, L.

    1993-01-01

    The present final report comprises the major results of Phase II of the CEC thermal-hydraulic benchmark exercise on Fiploc verification experiment F2 in the Battelle model containment, experimental phases 2, 3 and 4, which was organized and sponsored by the Commission of the European Communities for the purpose of furthering the understanding and analysis of long-term thermal-hydraulic phenomena inside containments during and after severe core accidents. This benchmark exercise received high European attention, with eight organizations from six countries participating with eight computer codes during phase 2. Altogether 18 results from computer code runs were supplied by the participants and constitute the basis for the comparisons with the experimental data contained in this publication. This reflects both the high technical interest in and the complexity of this CEC exercise. Major comparison results between computations and data are reported for all important quantities relevant to containment analyses during long-term transients. These comparisons comprise pressure, steam and air content, velocities and their directions, heat transfer coefficients and saturation ratios. Agreements and disagreements are discussed for each participating code/institution, conclusions drawn and recommendations provided. The phase 2 CEC benchmark exercise provided an up-to-date state-of-the-art review of the thermal-hydraulic capabilities of present computer codes for containment analyses. This exercise has shown that all of the participating codes can correctly simulate the important global features of the experiment, such as temperature stratification, pressure and leakage, heat transfer to structures, relative humidity, and collection of sump water. Several weaknesses of individual codes were identified, and this may help to promote their development. As a general conclusion it may be said that while there is still a wide area of necessary extensions and improvements, the

  12. Radiobiological modelling of dose-gradient effects in low dose rate, high dose rate and pulsed brachytherapy

    International Nuclear Information System (INIS)

    Armpilia, C; Dale, R G; Sandilos, P; Vlachos, L

    2006-01-01

    This paper presents a generalization of a previously published methodology which quantified the radiobiological consequences of dose-gradient effects in brachytherapy applications. The methodology uses the linear-quadratic (LQ) formulation to identify an equivalent biologically effective dose (BEDeq) which, if applied uniformly to a specified tissue volume, would produce the same net cell survival as that achieved by a given non-uniform brachytherapy application. Multiplying factors (MFs), which enable the equivalent BED for an enclosed volume to be estimated from the BED calculated at the dose reference surface, have been calculated and tabulated for both spherical and cylindrical geometries. The main types of brachytherapy (high dose rate (HDR), low dose rate (LDR) and pulsed brachytherapy (PB)) have been examined for a range of radiobiological parameters/dimensions. Equivalent BEDs are consistently higher than the BEDs calculated at the reference surface by an amount which depends on the treatment prescription (magnitude of the prescribed dose) at the reference point. MFs are closely related to the numerical BED values, irrespective of how the original BED was attained (e.g., via HDR, LDR or PB). Thus, an average MF can be used for a given prescribed BED, as it will be largely independent of the assumed radiobiological parameters (radiosensitivity and α/β), and standardized look-up tables may be applicable to all types of brachytherapy treatment. This analysis opens the way to more systematic approaches for correlating physical and biological effects in several types of brachytherapy and for the improved quantitative assessment and ranking of clinical treatments which involve a brachytherapy component.
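The underlying LQ bookkeeping can be sketched for the simplest (fractionated HDR) case, where BED = n*d*(1 + d/(alpha/beta)), with the equivalent volume BED obtained by scaling the surface BED with a multiplying factor. The MF value below is a placeholder for illustration, not one of the paper's tabulated factors:

```python
def bed_hdr(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose (Gy) for n HDR fractions of size d (Gy):
    BED = n * d * (1 + d / (alpha/beta)), the standard LQ form."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

def equivalent_bed(surface_bed, multiplying_factor):
    """Equivalent BED for the enclosed volume, obtained by scaling the BED
    at the dose reference surface with a geometry-specific MF."""
    return surface_bed * multiplying_factor

surface = bed_hdr(n_fractions=4, dose_per_fraction=7.0, alpha_beta=10.0)
volume = equivalent_bed(surface, 1.15)  # MF = 1.15 is a placeholder value
print(f"surface BED = {surface:.1f} Gy, equivalent volume BED = {volume:.1f} Gy")
```

An MF above 1 reflects the paper's finding that the equivalent BED for the enclosed volume consistently exceeds the BED computed at the reference surface, because dose rises steeply inside the prescription isodose.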

  13. Code assessment and modelling for Design Basis Accident Analysis of the European sodium fast reactor design. Part I: System description, modelling and benchmarking

    International Nuclear Information System (INIS)

    Lázaro, A.; Ammirabile, L.; Bandini, G.; Darmet, G.; Massara, S.; Dufour, Ph.; Tosello, A.; Gallego, E.; Jimenez, G.; Mikityuk, K.; Schikorr, M.; Bubelis, E.; Ponomarev, A.; Kruessmann, R.; Stempniewicz, M.

    2014-01-01

    Highlights: • Ten system-code models of the ESFR were developed in the frame of the CP-ESFR project. • Eight different thermohydraulic system codes adapted to sodium fast reactor technology. • Benchmarking exercise established to check the consistency of the calculations. • Upgraded system codes able to simulate the reactivity feedback and key safety parameters. -- Abstract: The new reactor concepts proposed in the Generation IV International Forum (GIF) are conceived to improve the use of natural resources, reduce the amount of high-level radioactive waste and excel in their reliability and safe operation. Among these novel designs sodium fast reactors (SFRs) stand out due to their technological feasibility, as demonstrated in several countries during the last decades. As part of the contribution of EURATOM to GIF, the CP-ESFR is a collaborative project with the objective, among others, to perform extensive analysis on safety issues involving renewed SFR demonstrator designs. The verification of computational tools able to simulate the plant behaviour under postulated accidental conditions by code-to-code comparison was identified as a key point to ensure reactor safety. In this line, several organizations employed coupled neutronic and thermal-hydraulic system codes able to simulate complex and specific phenomena involving multi-physics studies adapted to this particular fast reactor technology. The “Introduction” of this paper discusses the framework of this study; the second section describes the envisaged plant design and the commonly agreed upon modelling guidelines. The third section presents a comparative analysis of the calculations performed by each organisation applying their models and codes to a commonly agreed transient, with the objective of harmonizing the models as well as validating the implementation of all relevant physical phenomena in the different system codes

  14. Code assessment and modelling for Design Basis Accident Analysis of the European sodium fast reactor design. Part I: System description, modelling and benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Lázaro, A., E-mail: aurelio.lazaro-chueca@ec.europa.eu [JRC-IET European Commission—Westerduinweg 3, PO Box-2, 1755 ZG Petten (Netherlands); UPV—Universidad Politecnica de Valencia, Cami de vera s/n-46002, Valencia (Spain); Ammirabile, L. [JRC-IET European Commission—Westerduinweg 3, PO Box-2, 1755 ZG Petten (Netherlands); Bandini, G. [ENEA, Via Martiri di Monte Sole 4, 40129 Bologna (Italy); Darmet, G.; Massara, S. [EDF, 1 avenue du Général de Gaulle, 92141 Clamart (France); Dufour, Ph.; Tosello, A. [CEA, St Paul lez Durance, 13108 Cadarache (France); Gallego, E.; Jimenez, G. [UPM, José Gutiérrez Abascal, 2-28006 Madrid (Spain); Mikityuk, K. [PSI—Paul Scherrer Institut, 5232 Villigen Switzerland (Switzerland); Schikorr, M.; Bubelis, E.; Ponomarev, A.; Kruessmann, R. [KIT—Institute for Neutron Physics and Reactor Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen Germany (Germany); Stempniewicz, M. [NRG, Utrechtseweg 310, PO Box 9034 6800 ES, Arnhem (Netherlands)

    2014-01-15

    Highlights: • Ten system-code models of the ESFR were developed in the frame of the CP-ESFR project. • Eight different thermohydraulic system codes adapted to sodium fast reactor technology. • Benchmarking exercise established to check the consistency of the calculations. • Upgraded system codes able to simulate the reactivity feedback and key safety parameters. -- Abstract: The new reactor concepts proposed in the Generation IV International Forum (GIF) are conceived to improve the use of natural resources, reduce the amount of high-level radioactive waste and excel in their reliability and safe operation. Among these novel designs sodium fast reactors (SFRs) stand out due to their technological feasibility, as demonstrated in several countries during the last decades. As part of the contribution of EURATOM to GIF, the CP-ESFR is a collaborative project with the objective, among others, to perform extensive analysis on safety issues involving renewed SFR demonstrator designs. The verification of computational tools able to simulate the plant behaviour under postulated accidental conditions by code-to-code comparison was identified as a key point to ensure reactor safety. In this line, several organizations employed coupled neutronic and thermal-hydraulic system codes able to simulate complex and specific phenomena involving multi-physics studies adapted to this particular fast reactor technology. The “Introduction” of this paper discusses the framework of this study; the second section describes the envisaged plant design and the commonly agreed upon modelling guidelines. The third section presents a comparative analysis of the calculations performed by each organisation applying their models and codes to a commonly agreed transient, with the objective of harmonizing the models as well as validating the implementation of all relevant physical phenomena in the different system codes.

  15. Uncertainty and sensitivity analysis in reactivity-initiated accident fuel modeling: synthesis of organisation for economic co-operation and development (OECD/nuclear energy agency (NEA benchmark on reactivity-initiated accident codes phase-II

    Directory of Open Access Journals (Sweden)

    Olivier Marchand

    2018-03-01

    In the framework of the OECD/NEA Working Group on Fuel Safety, a RIA fuel-rod-code Benchmark Phase I was organized in 2010–2013. It consisted of four experiments on highly irradiated fuel rodlets tested under different experimental conditions. This benchmark revealed the need to better understand the basic models incorporated in each code for realistic simulation of the complicated integral RIA tests with high burnup fuel rods. A second phase of the benchmark (Phase II) was thus launched early in 2014, which has been organized in two complementary activities: (1) comparison of the results of different simulations on simplified cases in order to provide additional bases for understanding the differences in modelling of the concerned phenomena; (2) assessment of the uncertainty of the results. The present paper provides a summary and conclusions of the second activity of the Benchmark Phase II, which is based on the input uncertainty propagation methodology. The main conclusion is that uncertainties cannot fully explain the differences between the code predictions. Finally, based on the RIA benchmark Phase I and Phase II conclusions, some recommendations are made. Keywords: RIA, Codes Benchmarking, Fuel Modelling, OECD

  16. Analytical models for total dose ionization effects in MOS devices.

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, Phillip Montgomery; Bogdan, Carolyn W.

    2008-08-01

    MOS devices are susceptible to damage by ionizing radiation due to charge buildup in gate, field and SOI buried oxides. Under positive bias, holes created in the gate oxide will transport to the Si/SiO2 interface, creating oxide-trapped charge. As a result of hole transport and trapping, hydrogen is liberated in the oxide, which can create interface-trapped charge. The trapped charge will affect the threshold voltage and degrade the channel mobility. Neutralization of oxide-trapped charge by electron tunneling from the silicon and by thermal emission can take place over long periods of time. Neutralization of interface-trapped charge is not observed at room temperature. Analytical models are developed that account for the principal effects of total dose in MOS devices under different gate bias. The intent is to obtain closed-form solutions that can be used in circuit simulation. Expressions are derived for the aging effects of very low dose rate radiation over long time periods.
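A textbook first-order relation for the oxide-trapped-charge contribution (not the report's closed-form aging model) can be sketched as follows; the trap density and oxide thickness are illustrative assumptions:

```python
Q_E = 1.602176634e-19   # elementary charge, C
EPS_OX = 3.45e-11       # permittivity of SiO2, F/m (about 3.9 * eps0)

def threshold_shift(trapped_density_cm2, tox_nm):
    """First-order threshold-voltage shift (V) from oxide-trapped holes near
    the Si/SiO2 interface: dVth = -q * Not / Cox, with Cox = eps_ox / tox
    (per unit area). Positive trapped charge shifts the threshold negative."""
    n_ot = trapped_density_cm2 * 1.0e4      # cm^-2 -> m^-2
    cox = EPS_OX / (tox_nm * 1.0e-9)        # oxide capacitance, F/m^2
    return -Q_E * n_ot / cox

# Illustrative: 5e11 trapped holes/cm^2 in a 10 nm gate oxide.
dv = threshold_shift(5.0e11, 10.0)
print(f"dVth = {dv * 1000:.0f} mV")
```

Because Cox scales as 1/tox, the same areal trap density produces a shift proportional to oxide thickness, which is one reason thin modern gate oxides are comparatively tolerant of total dose.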

  17. Lyssavirus infection: 'low dose, multiple exposure' in the mouse model.

    Science.gov (United States)

    Banyard, Ashley C; Healy, Derek M; Brookes, Sharon M; Voller, Katja; Hicks, Daniel J; Núñez, Alejandro; Fooks, Anthony R

    2014-03-06

    The European bat lyssaviruses (EBLV-1 and EBLV-2) are zoonotic pathogens present within bat populations across Europe. The maintenance and transmission of lyssaviruses within bat colonies is poorly understood. Cases of repeated isolation of lyssaviruses from bat roosts have raised questions regarding the maintenance and intraspecies transmissibility of these viruses within colonies. Furthermore, the significance of seropositive bats in colonies remains unclear. Due to the protected nature of European bat species, and hence restrictions to working with the natural host for lyssaviruses, this study analysed the outcome following repeat inoculation of low doses of lyssaviruses in a murine model. A standardized dose of virus, EBLV-1, EBLV-2 or a 'street strain' of rabies (RABV), was administered via a peripheral route to attempt to mimic what is hypothesized as natural infection. Each mouse (n=10/virus/group/dilution) received four inoculations, two doses in each footpad over a period of four months, alternating footpad with each inoculation. Mice were tail bled between inoculations to evaluate antibody responses to infection. Mice succumbed to infection after each inoculation, with 26.6% of mice developing clinical disease following the initial exposure across all dilutions (RABV, 32.5% (n=13/40); EBLV-1, 35% (n=13/40); EBLV-2, 12.5% (n=5/40)). Interestingly, the lowest dose caused clinical disease in some mice upon first exposure (RABV, 20% (n=2/10) after first inoculation; RABV, 12.5% (n=1/8) after second inoculation; EBLV-2, 10% (n=1/10) after primary inoculation). Furthermore, five mice developed clinical disease following the second exposure to live virus (RABV, n=1; EBLV-1, n=1; EBLV-2, n=3), although histopathological examination indicated that the primary inoculation was the most probable cause of death due to the levels of inflammation and virus antigen distribution observed. 
All the remaining mice (RABV, n=26; EBLV-1, n=26; EBLV-2, n=29) survived the tertiary and

  18. Ameliorative effects of low dose/low dose-rate irradiation on reactive oxygen species-related diseases model mice

    International Nuclear Information System (INIS)

    Nomura, Takaharu

    2008-01-01

    Living organisms have developed complex biological systems that protect them against environmental radiation, and irradiation with the proper dose, dose rate and irradiation time can stimulate their biological responses against the oxidative stress evoked by the irradiation. Because reactive oxygen species are involved in various human diseases, non-toxic low dose/low dose-rate radiation can be utilized for the amelioration of such diseases. In this study, we used mouse experimental models of fatty liver, nephritis, diabetes and ageing to elucidate the ameliorative effect of low dose/low dose-rate radiation in relation to endogenous antioxidant activity. Single irradiation at 0.5 Gy ameliorates carbon tetrachloride-induced fatty liver. The irradiation increases the hepatic anti-oxidative system involving glutathione and glutathione peroxidase, suggesting that endogenous radical scavengers are essential for the ameliorative effect of low dose radiation on carbon tetrachloride-induced fatty liver. Single irradiation at 0.5 Gy ameliorates ferric nitrilotriacetate-induced nephritis. The irradiation increases catalase and decreases superoxide dismutase in the kidney. The result suggests that low dose radiation reduces hydroxyl radical generation by reducing the cellular hydroperoxide level. Single irradiation at 0.5 Gy at 12 weeks of age reduces the incidence of type I diabetes in non-obese diabetic (NOD) mice through the suppression of inflammatory activity of splenocytes, and the resultant apoptosis of β-cells in the pancreas. The irradiation increases the activities of superoxide dismutase and catalase, which coordinately diminish intracellular reactive oxygen species. Continuous irradiation at 0.70 mGy/hr from 10 weeks of age prolongs life span and suppresses alopecia in type II diabetes mice. The irradiation improved glucose clearance without affecting insulin resistance, and increased pancreatic catalase activity.
The results suggest that continuous low dose-rate irradiation protect

  19. Benchmarking and the laboratory

    Science.gov (United States)

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112

  20. Evaluation of the applicability of the Benchmark approach to existing toxicological data. Framework: Chemical compounds in the working place

    NARCIS (Netherlands)

    Appel MJ; Bouman HGM; Pieters MN; Slob W; Adviescentrum voor chemische; CSR

    2001-01-01

    Five chemicals used in the workplace, for which a risk assessment had already been carried out, were selected and the relevant critical studies re-analyzed by the Benchmark approach. The endpoints involved included continuous and ordinal data. Dose-response modeling could be reasonably applied to the

  1. Models for dose assessments. Modules for various biosphere types

    Energy Technology Data Exchange (ETDEWEB)

    Bergstroem, U.; Nordlinder, S.; Aggeryd, I. [Studsvik Eco and Safety AB, Nykoeping (Sweden)

    1999-12-01

    The main objective of this study was to provide a basis for illustrations of yearly dose rates to the most exposed individual from hypothetical leakages of radionuclides from a deep bedrock repository for spent nuclear fuel and other radioactive waste. The results of this study will be used in the safety assessment SR 97 and in a study on the design and long-term safety for a repository planned to contain long-lived low and intermediate level waste. The repositories will be designed to isolate the radionuclides for several hundred thousand years. In the SR 97 study, however, hypothetical scenarios for leakage are postulated. Radionuclides are hence assumed to be transported in the geosphere by groundwater, and eventually discharge into the biosphere. This may occur in several types of ecosystems. A number of categories of such ecosystems were identified, and turnover of radionuclides was modelled separately for each ecosystem. Previous studies had focused on generic models for wells, lakes and coastal areas. These models were, in this study, developed further to use site-specific data. In addition, flows of groundwater, containing radionuclides, to agricultural land and peat bogs were considered. All these categories are referred to as modules in this report. The forest ecosystems were not included, due to a general lack of knowledge of biospheric processes in connection with discharge of groundwater in forested areas. Examples of each type of module were run with the assumption of a continuous annual release into the biosphere of 1 Bq for each radionuclide during 10 000 years. The results are presented as ecosystem specific dose conversion factors (EDFs) for each nuclide at the year 10 000, assuming stationary ecosystems and prevailing living conditions and habits. All calculations were performed with uncertainty analyses included. Simplifications and assumptions in the modelling of biospheric processes are discussed. The use of modules may be seen as a step

  2. Models for dose assessments. Modules for various biosphere types

    International Nuclear Information System (INIS)

    Bergstroem, U.; Nordlinder, S.; Aggeryd, I.

    1999-12-01

    The main objective of this study was to provide a basis for illustrations of yearly dose rates to the most exposed individual from hypothetical leakages of radionuclides from a deep bedrock repository for spent nuclear fuel and other radioactive waste. The results of this study will be used in the safety assessment SR 97 and in a study on the design and long-term safety for a repository planned to contain long-lived low and intermediate level waste. The repositories will be designed to isolate the radionuclides for several hundred thousand years. In the SR 97 study, however, hypothetical scenarios for leakage are postulated. Radionuclides are hence assumed to be transported in the geosphere by groundwater, and eventually discharge into the biosphere. This may occur in several types of ecosystems. A number of categories of such ecosystems were identified, and turnover of radionuclides was modelled separately for each ecosystem. Previous studies had focused on generic models for wells, lakes and coastal areas. These models were, in this study, developed further to use site-specific data. In addition, flows of groundwater, containing radionuclides, to agricultural land and peat bogs were considered. All these categories are referred to as modules in this report. The forest ecosystems were not included, due to a general lack of knowledge of biospheric processes in connection with discharge of groundwater in forested areas. Examples of each type of module were run with the assumption of a continuous annual release into the biosphere of 1 Bq for each radionuclide during 10 000 years. The results are presented as ecosystem specific dose conversion factors (EDFs) for each nuclide at the year 10 000, assuming stationary ecosystems and prevailing living conditions and habits. All calculations were performed with uncertainty analyses included. Simplifications and assumptions in the modelling of biospheric processes are discussed. The use of modules may be seen as a step
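The EDF calculation described above can be sketched numerically. Below is a minimal single-compartment illustration of an ecosystem-specific dose conversion factor for a lake module under a constant unit release; every parameter value (lake volume, turnover rate, half-life, intake, dose coefficient) is a hypothetical placeholder, not the site-specific data used in the SR 97 study.

```python
# Minimal sketch of an EDF-style calculation: constant 1 Bq/yr release into a
# single well-mixed lake compartment, removal by water turnover plus decay,
# dose to an individual via drinking water. All parameters are hypothetical.

import math

def edf_lake(release_bq_per_yr=1.0,       # continuous unit release, as in the study
             years=10_000,
             lake_volume_m3=1.0e6,        # hypothetical lake volume
             turnover_per_yr=1.0,         # hypothetical water renewal rate
             half_life_yr=30.0,           # e.g. roughly Cs-137
             water_intake_m3_per_yr=0.6,  # hypothetical individual intake
             dose_coeff_sv_per_bq=1.3e-8):  # hypothetical ingestion coefficient
    """Dose rate (Sv/yr) after `years` of constant release into the compartment."""
    lam = math.log(2) / half_life_yr + turnover_per_yr  # total removal rate (1/yr)
    # activity in the lake approaches release/lam as the compartment equilibrates
    activity_bq = (release_bq_per_yr / lam) * (1 - math.exp(-lam * years))
    concentration = activity_bq / lake_volume_m3        # Bq/m^3
    return concentration * water_intake_m3_per_yr * dose_coeff_sv_per_bq

print(f"EDF ~ {edf_lake():.2e} Sv/yr per Bq/yr released")
```

The real modules add food-chain pathways, sediment interaction and uncertainty analysis; this sketch only shows the structure of a unit-release dose conversion factor.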

  3. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

    Shielding benchmark problems prepared by the Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by the Shielding Laboratory of the Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are newly presented, in addition to the twenty-one problems already proposed, for evaluating the calculational algorithms and accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method, and for evaluating the nuclear data used in the codes. The present benchmark problems principally address the backscattering and streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  4. A Seafloor Benchmark for 3-dimensional Geodesy

    Science.gov (United States)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. 
More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  5. An improved analytical model for CT dose simulation with a new look at the theory of CT dose

    International Nuclear Information System (INIS)

    Dixon, Robert L.; Munley, Michael T.; Bayram, Ersin

    2005-01-01

    Gagne [Med. Phys. 16, 29-37 (1989)] has previously described a model for predicting the sensitivity and dose profiles in the slice-width (z) direction for CT scanners. The model, developed prior to the advent of multidetector CT scanners, is still widely used; however, it does not account for the effect of anode tilt on the penumbra or include the heel effect, both of which are increasingly important for the wider beams (up to 40 mm) of contemporary, multidetector scanners. Additionally, it applied only on (or near) the axis of rotation, and did not incorporate the photon energy spectrum. The improved model described herein transcends all of the aforementioned limitations of the Gagne model, including extension to the peripheral phantom axes. Comparison of simulated and measured dose data provides experimental validation of the model, including verification of the superior match to the penumbra provided by the tilted-anode model, as well as the observable effects on the cumulative dose distribution. The initial motivation for the model was to simulate the quasiperiodic dose distribution on the peripheral, phantom axes resulting from a helical scan series in order to facilitate the implementation of an improved method of CT dose measurement utilizing a short ion chamber, as proposed by Dixon [Med. Phys. 30, 1272-1280 (2003)]. A more detailed set of guidelines for implementing such measurements is also presented in this paper. In addition, some fundamental principles governing CT dose which have not previously been clearly enunciated follow from the model, and a fundamental (energy-based) quantity dubbed 'CTDI-aperture' is introduced
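The quasiperiodic dose distribution mentioned above arises from summing a single-rotation dose profile shifted by the table increment for each rotation of a helical scan. A minimal numerical sketch of that superposition follows, using a Gaussian as a stand-in profile; the profile width, table increment and rotation count are hypothetical illustrations, not parameters from the cited model.

```python
# Cumulative dose along z from a helical series, modeled as the sum of one
# single-rotation dose profile f(z) shifted by the table increment b per
# rotation. Gaussian stand-in profile; all numbers are hypothetical.

import math

def cumulative_dose(z_mm, profile_fwhm_mm=12.0, table_increment_mm=10.0,
                    rotations=15):
    """Accumulated dose at position z_mm from `rotations` shifted profiles."""
    sigma = profile_fwhm_mm / 2.355  # convert FWHM to Gaussian sigma
    centers = [(k - rotations // 2) * table_increment_mm for k in range(rotations)]
    return sum(math.exp(-((z_mm - c) ** 2) / (2 * sigma ** 2)) for c in centers)

peak = cumulative_dose(0.0)    # on a rotation centre
valley = cumulative_dose(5.0)  # midway between two adjacent rotations
print(f"dose ripple over one period: {peak / valley:.3f}")
```

The ripple ratio exceeds 1 and repeats with the period of the table increment, which is the quasiperiodic behavior on the peripheral axes that motivates the short-ion-chamber measurement method.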

  6. The development of code benchmarks

    International Nuclear Information System (INIS)

    Glass, R.E.

    1986-01-01

    Sandia National Laboratories has undertaken a code benchmarking effort to define a series of cask-like problems having both numerical solutions and experimental data. The development of the benchmarks includes: (1) model problem definition, (2) code intercomparison, and (3) experimental verification. The first two steps are complete and a series of experiments are planned. The experiments will examine the elastic/plastic behavior of cylinders for both the end and side impacts resulting from a nine meter drop. The cylinders will be made from stainless steel and aluminum to give a range of plastic deformations. This paper presents the results of analyses simulating the model's behavior using materials properties for stainless steel and aluminum

  7. Investible benchmarks & hedge fund liquidity

    OpenAIRE

    Freed, Marc S; McMillan, Ben

    2011-01-01

    A lack of commonly accepted benchmarks for hedge fund performance has permitted hedge fund managers to attribute to skill returns that may actually accrue from market risk factors and illiquidity. Recent innovations in hedge fund replication permit us to estimate the extent of this misattribution. Using an option-based model, we find evidence that the value of liquidity options that investors implicitly grant managers when they invest may account for part or even all of hedge fund returns. C...

  8. Benchmarking specialty hospitals, a scoping review on theory and practice.

    Science.gov (United States)

    Wind, A; van Harten, W H

    2017-04-04

    Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category; or those dealing with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into: pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was merely described or whether quality improvement as a consequence of the benchmark was reported. Most of the studies that described a benchmark model described the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design, and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking to improve quality in specialty hospitals, robust and structured designs are needed, including follow-up to check whether the benchmark study has led to improvements.

  9. Comparison of the PHISICS/RELAP5-3D Ring and Block Model Results for Phase I of the OECD MHTGR-350 Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom

    2014-04-01

    The INL PHISICS code system consists of three modules providing improved core simulation capability: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been finalized, and as part of the code verification and validation program the exercises defined for Phase I of the OECD/NEA MHTGR 350 MW Benchmark were completed. This paper provides an overview of the MHTGR Benchmark, and presents selected results of the three steady state exercises 1-3 defined for Phase I. For Exercise 1, a stand-alone steady-state neutronics solution for an End of Equilibrium Cycle Modular High Temperature Reactor (MHTGR) was calculated with INSTANT, using the provided geometry, material descriptions, and detailed cross-section libraries. Exercise 2 required the modeling of a stand-alone thermal fluids solution. The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 combined the first two exercises in a coupled neutronics and thermal fluids solution, and the coupled code suite PHISICS/RELAP5-3D was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of the traditional RELAP5-3D “ring” model approach vs. a much more detailed model that includes kinetics feedback at the individual block level and thermal feedback on a triangular sub-mesh. The higher fidelity of the block model is illustrated with comparison results on the temperature, power density and flux distributions, and the typical under-predictions produced by the ring model approach are highlighted.

  10. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  11. Benchmarking Swiss electricity grids

    International Nuclear Information System (INIS)

    Walti, N.O.; Weber, Ch.

    2001-01-01

    This extensive article describes a pilot benchmarking project initiated by the Swiss Association of Electricity Enterprises that assessed 37 Swiss utilities. The data collected from these utilities on a voluntary basis included data on technical infrastructure, investments and operating costs. These various factors are listed and discussed in detail. The assessment methods and rating mechanisms that provided the benchmarks are discussed, and the results of the pilot study are presented; these are to form the basis of benchmarking procedures for the grid regulation authorities under Switzerland's planned electricity market law. Examples of the practical use of the benchmarking methods are given and cost-efficiency questions still open in the area of investment and operating costs are listed. Prefaces by the Swiss Association of Electricity Enterprises and the Swiss Federal Office of Energy complete the article

  12. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This data compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings....

  13. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    .... The design of this study included two parts: (1) eleven expert panelists involved in a Delphi technique to identify and rate importance of foodservice performance measures and rate the importance of benchmarking activities, and (2...

  14. Estimating dose painting effects in radiotherapy: a mathematical model.

    Directory of Open Access Journals (Sweden)

    Juan Carlos López Alfonso

    Full Text Available Tumor heterogeneity is widely considered to be a determinant factor in tumor progression and, in particular, in its recurrence after therapy. Unfortunately, current medical techniques are unable to deduce clinically relevant information about tumor heterogeneity by means of non-invasive methods. As a consequence, when radiotherapy is used as a treatment of choice, radiation dosimetries are prescribed under the assumption that the malignancy targeted is of a homogeneous nature. In this work we discuss the effects of different radiation dose distributions on heterogeneous tumors by means of an individual cell-based model. To that end, a case is considered where two tumor cell phenotypes are present, which we assume to differ strongly in their respective cell cycle duration and radiosensitivity properties. We show herein that, as a result of such differences, the spatial distribution of the corresponding phenotypes, and hence the resulting tumor heterogeneity, can be predicted as growth proceeds. In particular, we show that if we start from a situation where a majority of ordinary cancer cells (CCs) and a minority of cancer stem cells (CSCs) are randomly distributed, and we assume that the CSC cycle is significantly longer than that of CCs, then CSCs become concentrated in an inner region as the tumor grows. As a consequence we obtain that if CSCs are assumed to be more resistant to radiation than CCs, heterogeneous dosimetries can be selected to enhance tumor control by boosting radiation in the region occupied by the more radioresistant tumor cell phenotype. It is also shown that, when compared with homogeneous dose distributions such as those currently delivered in clinical practice, such heterogeneous radiation dosimetries always fare better than their homogeneous counterparts. Finally, limitations to our assumptions and their resulting clinical implications are discussed.
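The intuition behind boosting dose on the radioresistant phenotype can be illustrated with the standard linear-quadratic (LQ) survival model, rather than the full cell-based model of the paper. All cell counts, alpha/beta values, fraction numbers and doses below are hypothetical illustrations chosen only to show the effect of redistributing a fixed cell-weighted mean dose.

```python
# LQ-model sketch: with the same cell-weighted mean dose per fraction, boosting
# the radioresistant CSC region leaves far fewer surviving cells after a full
# fractionated course than a homogeneous plan. All parameters hypothetical.

import math

def survivors(n_cells, alpha, beta, dose_per_fraction, fractions=30):
    """Cells remaining after fractionated irradiation under the LQ model."""
    sf = math.exp(-alpha * dose_per_fraction - beta * dose_per_fraction ** 2)
    return n_cells * sf ** fractions

# Radiosensitive ordinary cancer cells (CCs) vs radioresistant stem cells (CSCs)
CC_N, CC_A, CC_B = 1e6, 0.35, 0.035      # hypothetical
CSC_N, CSC_A, CSC_B = 1e4, 0.15, 0.015   # hypothetical, more resistant

# Homogeneous plan: 2 Gy per fraction everywhere.
homog = survivors(CC_N, CC_A, CC_B, 2.0) + survivors(CSC_N, CSC_A, CSC_B, 2.0)

# "Dose painted" plan with the same cell-weighted mean dose of 2 Gy/fraction:
# the CSC region is boosted to 5 Gy while the CC region drops to 1.97 Gy.
boost = survivors(CC_N, CC_A, CC_B, 1.97) + survivors(CSC_N, CSC_A, CSC_B, 5.0)

print(f"surviving cells, homogeneous: {homog:.2e}, painted: {boost:.2e}")
```

Over a fractionated course the resistant CSC survival compounds, so the painted plan dominates even though the CC region receives slightly less dose.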

  15. MFTF TOTAL benchmark

    International Nuclear Information System (INIS)

    Choy, J.H.

    1979-06-01

    A benchmark of the TOTAL data base management system as applied to the Mirror Fusion Test Facility (MFTF) data base was implemented and run in February and March of 1979. The benchmark was run on an Interdata 8/32 and involved the following tasks: (1) data base design, (2) data base generation, (3) data base load, and (4) develop and implement programs to simulate MFTF usage of the data base

  16. Accelerator shielding benchmark problems

    International Nuclear Information System (INIS)

    Hirayama, H.; Ban, S.; Nakamura, T.

    1993-01-01

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author)

  17. Shielding benchmark problems

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Kawai, Masayoshi; Nakazawa, Masaharu.

    1978-09-01

    Shielding benchmark problems were prepared by the Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design of the Atomic Energy Society of Japan, and compiled by the Shielding Laboratory of Japan Atomic Energy Research Institute. Twenty-one kinds of shielding benchmark problems are presented for evaluating the calculational algorithm and the accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method and for evaluating the nuclear data used in the codes. (author)

  18. Absorbed dose modeled for a liquid circulating around a Co-60 irradiator

    International Nuclear Information System (INIS)

    Mangussi, J.

    2013-01-01

    A model for the distribution of the absorbed dose in a volume of liquid circulating into an active tank containing a Co-60 irradiator is presented. The absorbed dose, the stir process and the liquid recirculation into the active tank are modeled. The absorbed dose for different fractions of the volume is calculated. The necessary irradiation times for the achievement of the required absorbed dose are evaluated. (author)
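The irradiation-time evaluation mentioned above can be sketched with a simple duty-cycle argument: a well-stirred liquid parcel alternates between a residence time inside the active tank (where it receives dose) and time in the external recirculation loop (where it does not). The dose rate and residence times below are hypothetical placeholders, not values from the cited model.

```python
# Sketch of required irradiation time for a recirculating liquid around a
# Co-60 irradiator, assuming good stirring so every volume fraction sees the
# same mean dose rate. All numerical values are hypothetical.

def time_for_target_dose(target_dose_kgy,
                         tank_dose_rate_kgy_per_h=5.0,  # hypothetical in-tank rate
                         residence_in_tank_h=0.2,
                         residence_in_loop_h=0.3):
    """Total irradiation time (h) to reach target_dose_kgy in the liquid."""
    # fraction of each circulation cycle spent inside the active tank
    duty_cycle = residence_in_tank_h / (residence_in_tank_h + residence_in_loop_h)
    mean_dose_rate = tank_dose_rate_kgy_per_h * duty_cycle
    return target_dose_kgy / mean_dose_rate

print(f"{time_for_target_dose(25.0):.1f} h to reach 25 kGy")  # → 12.5 h to reach 25 kGy
```

Imperfect stirring spreads dose across volume fractions, so the full model evaluates the dose distribution rather than a single mean; this sketch covers only the well-mixed limit.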

  19. Model for assessing alpha doses for a Reference Japanese Man

    International Nuclear Information System (INIS)

    Kawamura, Hisao

    1993-01-01

    In view of the development of the nuclear fuel cycle in this country, it is urgently important to establish dose assessment models and related human and environmental parameters for long-lived radionuclides. In the current program, intake and body content of actinides (Pu, Th, U) and related alpha-emitting nuclides (Ra and daughters) have been studied, as well as physiological aspects of Reference Japanese Man as the basic model of man for dosimetry. The ultimate object is to examine the applicability of the existing models, particularly those recommended by the ICRP for workers, to members of the public. The result of an interlaboratory intercomparison of 239Pu + 240Pu determination, including our result, was published. Alpha-spectrometric determinations of 226Ra in bone yielded a representative bone concentration level in Tokyo and a Ra-Ca O.R. (bone-diet) which appear consistent with the literature values for Sapporo and Kyoto by Ohno using a Rn emanation method. Specific effective energies for alpha radiation from 226Ra and daughters were calculated using the ICRP dosimetric model for bone, incorporating masses of source and target organs of Reference Japanese Man. Reference Japanese data, including the adult, adolescent, child and infant of both sexes, were extensively and intensively studied by Tanaka as part of the activities of the ICRP Task Group on Reference Man Revision. Normal data for the physical measurements, mass and dimension of internal organs and body surfaces and some of the body composition were analysed in view of the nutritional data in the Japanese population. Some of the above works are to be continued. (author)

  20. IAEA sodium void reactivity benchmark calculations

    International Nuclear Information System (INIS)

    Hill, R.N.; Finck, P.J.

    1992-01-01

    In this paper, the IAEA 1992 ''Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core'' problem is evaluated. The proposed design is a large axially heterogeneous oxide-fueled fast reactor as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculation methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented; and in Section 5, the influence of steel and interstitial sodium heterogeneity effects is estimated

  1. Benchmark Imagery FY11 Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pope, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-06-14

    This report details the work performed in FY11 under project LL11-GS-PD06, “Benchmark Imagery for Assessing Geospatial Semantic Extraction Algorithms.” The original LCP for the Benchmark Imagery project called for creating a set of benchmark imagery for verifying and validating algorithms that extract semantic content from imagery. More specifically, the first year was slated to deliver real imagery that had been annotated, the second year to deliver real imagery that had composited features, and the final year was to deliver synthetic imagery modeled after the real imagery.

  2. New model for assessing dose and dose rate sensitivity of Gamma ray radiation loss in polarization maintaining optical fibers

    International Nuclear Information System (INIS)

    Zhang Hongchen; Liu Hai; Qiao Wenqiang; Xue Huijie; He Shiyu

    2012-01-01

    Highlights: ► A new phenomenological model is built to investigate the relation between irradiation-induced loss and irradiation dose and dose rate. ► The Gamma ray irradiation induced loss of the “Capsule” type and “Panda” type polarization maintaining optical fibers at 1310 nm wavelength is investigated. ► The anti-irradiation performance of the “Panda” type polarization maintaining optical fiber is better than that of the “Capsule” type polarization maintaining optical fiber, owing to the GeO2 doping of the stress region. - Abstract: The Gamma ray irradiation induced loss of the “Capsule” type and “Panda” type polarization maintaining optical fibers at 1310 nm wavelength is investigated. A phenomenological model is introduced and the influence of irradiation dose and dose rate on the irradiation-induced loss is discussed. The phenomenological theoretical results are consistent with the experimental irradiation-induced loss for the two types of polarization maintaining optical fibers. The anti-irradiation performance of the “Panda” type polarization maintaining optical fiber is better than that of the “Capsule” type polarization maintaining optical fiber, owing to the GeO2 doping of the stress region. Meanwhile, the irradiation-induced loss of both polarization maintaining optical fibers increases with increasing irradiation dose. At the same dose, high dose-rate Gamma ray irradiation induces higher optical fiber losses than low dose-rate irradiation.
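The abstract does not reproduce the model equations, so as an illustration here is the power-law fit commonly used for radiation-induced attenuation (RIA) in optical fibers, with an additional dose-rate factor reproducing the trend reported above (higher loss at higher dose rate and at higher total dose). The functional form and all coefficient values are hypothetical stand-ins, not the paper's phenomenological model.

```python
# Hypothetical power-law sketch of radiation-induced loss at 1310 nm:
# loss = c * (dose_rate ** g) * (dose ** f), with f, g > 0 so the loss grows
# with both total dose and dose rate. All fit constants are invented.

def induced_loss_db_per_km(dose_gy, dose_rate_gy_per_h,
                           c=0.05, f=0.8, g=0.1):  # hypothetical fit constants
    """Radiation-induced loss (dB/km) as a power law in dose and dose rate."""
    return c * dose_rate_gy_per_h ** g * dose_gy ** f

low = induced_loss_db_per_km(1000.0, dose_rate_gy_per_h=1.0)
high = induced_loss_db_per_km(1000.0, dose_rate_gy_per_h=100.0)
print(f"loss at low rate: {low:.2f} dB/km, at high rate: {high:.2f} dB/km")
```

Fitting c, f and g separately for each fiber type would express the reported difference between the “Panda” and “Capsule” fibers as different coefficient sets.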

  3. Effects of Secondary Circuit Modeling on Results of Pressurized Water Reactor Main Steam Line Break Benchmark Calculations with New Coupled Code TRAB-3D/SMABRE

    International Nuclear Information System (INIS)

    Daavittila, Antti; Haemaelaeinen, Anitta; Kyrki-Rajamaeki, Riitta

    2003-01-01

    All of the three exercises of the Organization for Economic Cooperation and Development/Nuclear Regulatory Commission pressurized water reactor main steam line break (PWR MSLB) benchmark were calculated at VTT, the Technical Research Centre of Finland. For the first exercise, the plant simulation with point-kinetic neutronics, the thermal-hydraulics code SMABRE was used. The second exercise was calculated with the three-dimensional reactor dynamics code TRAB-3D, and the third exercise with the combination TRAB-3D/SMABRE. VTT has over ten years' experience of coupling neutronic and thermal-hydraulic codes, but this benchmark was the first time these two codes, both developed at VTT, were coupled together. The coupled code system is fast and efficient; the total computation time of the 100-s transient in the third exercise was 16 min on a modern UNIX workstation. The results of all the exercises are similar to those of the other participants. In order to demonstrate the effect of secondary circuit modeling on the results, three different cases were calculated. In case 1 there is no phase separation in the steam lines and no flow reversal in the aspirator. In case 2 the flow reversal in the aspirator is allowed, but there is no phase separation in the steam lines. Finally, in case 3 the drift-flux model is used for the phase separation in the steam lines, but the aspirator flow reversal is not allowed. With these two modeling variations, it is possible to cover a remarkably broad range of results. The maximum power level reached after the reactor trip varies from 534 to 90