WorldWideScience

Sample records for benchmark dose method

  1. Benchmark Dose Modeling

    Science.gov (United States)

    Finite doses are employed in experimental toxicology studies. Under the traditional methodology, the point of departure (POD) value for low dose extrapolation is identified as one of these doses. Dose spacing necessarily precludes a more accurate description of the POD value. ...

  2. The reference dose for subchronic exposure of pigs to cadmium leading to early renal damage by benchmark dose method.

    Science.gov (United States)

    Wu, Xiaosheng; Wei, Shuai; Wei, Yimin; Guo, Boli; Yang, Mingqi; Zhao, Duoyong; Liu, Xiaoling; Cai, Xianfeng

    2012-08-01

    Pigs were exposed to cadmium (Cd) (in the form of CdCl(2)) at concentrations ranging from 0 to 32 mg Cd/kg feed for 100 days. Urinary cadmium (U-Cd) and blood cadmium (B-Cd) levels were determined as indicators of Cd exposure. Urinary levels of β(2)-microglobulin (β(2)-MG), α(1)-microglobulin (α(1)-MG), N-acetyl-β-D-glucosaminidase (NAG), cadmium-metallothionein (Cd-MT), and retinol binding protein (RBP) were determined as biomarkers of tubular dysfunction. U-Cd concentrations increased linearly with time and dose, whereas B-Cd reached two peaks, at 40 days and 100 days, in the group exposed to 32 mg Cd/kg. Hyper-metallothionein-urinary (hyperMTuria) and hyper-N-acetyl-β-D-glucosaminidase-urinary (hyperNAGuria) emerged from 80 days onwards in the group exposed to 32 mg Cd/kg feed, followed by hyper-β2-microglobulin-urinary (hyperβ2-MGuria) and hyper-retinol-binding-protein-urinary (hyperRBPuria) from 100 days onwards. The relationships between the Cd exposure dose and biomarkers of exposure (as well as the biomarkers of effect) were examined, and significant correlations were found between them (except for α(1)-MG). Dose-response relationships between Cd exposure dose and biomarkers of tubular dysfunction were studied. The critical Cd exposure dose was calculated by the benchmark dose (BMD) method. The BMD(10)/BMDL(10) was estimated to be 1.34/0.67, 1.21/0.88, 2.75/1.00, and 3.73/3.08 mg Cd/kg feed based on urinary RBP, NAG, Cd-MT, and β(2)-MG, respectively. The calculated tolerable weekly intake of Cd for humans was 1.4 μg/kg body weight, based on a safety factor of 100. This value is lower than the values currently set by several different countries, indicating a need for further studies on the effects of Cd and a re-evaluation of the human health risk assessment for the metal. PMID:22610606
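
The BMD(10) figures above come from fitted dose-response models. As a minimal sketch of the final step only (not the authors' actual model or data), a one-stage quantal-linear model gives a closed-form BMD for 10% extra risk; the slope value below is hypothetical, chosen so the result lands near the paper's RBP-based estimate:

```python
import math

def bmd_one_stage(b, bmr=0.10):
    """BMD under a one-stage (quantal-linear) model
    P(d) = p0 + (1 - p0) * (1 - exp(-b * d)):
    extra risk (P(d) - p0) / (1 - p0) = 1 - exp(-b * d),
    which equals BMR exactly at d = -ln(1 - BMR) / b."""
    return -math.log(1.0 - bmr) / b

# Hypothetical slope, chosen so BMD10 lands near the paper's
# RBP-based estimate of 1.34 mg Cd/kg feed; illustrative only.
b = 0.0786  # per (mg Cd/kg feed)
bmd10 = bmd_one_stage(b)
print(round(bmd10, 2))
```

The BMDL then comes from a lower confidence bound on the fitted slope rather than its point estimate, which is why each BMDL(10) above sits below its BMD(10).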

  3. Simple benchmark for complex dose finding studies.

    Science.gov (United States)

    Cheung, Ying Kuen

    2014-06-01

    While a general goal of early phase clinical studies is to identify an acceptable dose for further investigation, modern dose finding studies and designs are highly specific to individual clinical settings. In addition, as outcome-adaptive dose finding methods often involve complex algorithms, it is crucial to have diagnostic tools to evaluate the plausibility of a method's simulated performance and the adequacy of the algorithm. In this article, we propose a simple technique that provides an upper limit, or a benchmark, of accuracy for dose finding methods for a given design objective. The proposed benchmark is nonparametric optimal in the sense of O'Quigley et al. (2002, Biostatistics 3, 51-56), and is demonstrated by examples to be a practical accuracy upper bound for model-based dose finding methods. We illustrate the implementation of the technique in the context of phase I trials that consider multiple toxicities and phase I/II trials where dosing decisions are based on both toxicity and efficacy, and apply the benchmark to several clinical examples considered in the literature. By comparing the operating characteristics of a dose finding method to that of the benchmark, we can form quick initial assessments of whether the method is adequately calibrated and evaluate its sensitivity to the dose-outcome relationships.
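
The nonparametric benchmark idea can be sketched in a few lines. Under an assumed true dose-toxicity curve and target rate (both hypothetical below), each simulated patient gets a latent tolerance, so their outcome at every dose is known; the accuracy of dose selection under this complete information is an upper bound for any design:

```python
import random

def benchmark_selection(true_tox, target, n_patients, n_sims, seed=1):
    """Complete-information benchmark sketch: patient i's latent tolerance
    u_i ~ Uniform(0,1) determines their toxicity outcome at EVERY dose k
    (toxic iff u_i < true_tox[k]).  The benchmark picks the dose whose
    complete-information rate estimate is closest to the target."""
    rng = random.Random(seed)
    best = min(range(len(true_tox)), key=lambda k: abs(true_tox[k] - target))
    correct = 0
    for _ in range(n_sims):
        u = [rng.random() for _ in range(n_patients)]
        rates = [sum(ui < p for ui in u) / n_patients for p in true_tox]
        pick = min(range(len(true_tox)), key=lambda k: abs(rates[k] - target))
        correct += (pick == best)
    return correct / n_sims

# Hypothetical five-dose toxicity curve, target toxicity rate 0.25.
acc = benchmark_selection([0.05, 0.12, 0.25, 0.40, 0.55], 0.25, 30, 2000)
print(acc)
```

A model-based design whose simulated accuracy approaches this bound is well calibrated; one far below it may have an algorithmic problem.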

  4. Effects of exposure imprecision on estimation of the benchmark dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    2004-01-01

    In regression analysis failure to adjust for imprecision in the exposure variable is likely to lead to underestimation of the exposure effect. However, the consequences of exposure error for determination of safe doses of toxic substances have so far not received much attention. The benchmark......, then the benchmark approach produces results that are biased toward higher and less protective levels. It is therefore important to take exposure measurement error into account when calculating benchmark doses. Methods that allow this adjustment are described and illustrated in data from an epidemiological study...
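
A small simulation (pure Python, illustrative parameters throughout) shows the mechanism the abstract describes: classical measurement error attenuates the estimated slope, and any benchmark dose derived from that slope is pushed toward higher, less protective levels:

```python
import random

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

rng = random.Random(42)
n = 5000
true_slope = 2.0
x = [rng.gauss(0.0, 1.0) for _ in range(n)]              # true exposure
y = [true_slope * xi + rng.gauss(0.0, 1.0) for xi in x]  # response
w = [xi + rng.gauss(0.0, 1.0) for xi in x]               # error-prone measurement

b_true = ols_slope(x, y)  # close to 2.0
b_obs = ols_slope(w, y)   # attenuated by the reliability ratio, here ~0.5

# Dose producing a fixed critical response change delta: d = delta / slope,
# so an attenuated slope inflates the apparent "safe" dose.
delta = 1.0
bmd_true, bmd_obs = delta / b_true, delta / b_obs
print(b_true, b_obs, bmd_true, bmd_obs)
```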

  5. Effects of Exposure Imprecision on Estimation of the Benchmark Dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    Environmental epidemiology; exposure measurement error; effect of prenatal mercury exposure; exposure standards; benchmark dose...

  6. Method and system for benchmarking computers

    Science.gov (United States)

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
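
The fixed-time idea inverts the usual benchmark: instead of timing a fixed workload, it allots a fixed interval and counts progress through a scalable task set. A toy sketch (the task set here, terms of the Leibniz series for pi, and the interval length are stand-ins, not the patented system's workload):

```python
import time

def fixed_time_benchmark(interval_s=0.2):
    """Allot a fixed benchmarking interval and count how many tasks of a
    scalable set complete; each task adds 10,000 terms of the Leibniz
    series, so more completed tasks means a higher-resolution pi estimate."""
    deadline = time.perf_counter() + interval_s
    total, k, tasks_done = 0.0, 0, 0
    while time.perf_counter() < deadline:
        for _ in range(10_000):  # one task = 10,000 more series terms
            total += (-1.0) ** k / (2 * k + 1)
            k += 1
        tasks_done += 1
    return tasks_done, 4.0 * total

tasks, pi_estimate = fixed_time_benchmark()
print(tasks, pi_estimate)
```

The benchmark rating is then derived from `tasks_done`: a faster machine completes more tasks, and every machine produces a valid (if coarser) solution.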

  7. 77 FR 36533 - Notice of Availability of the Benchmark Dose Technical Guidance

    Science.gov (United States)

    2012-06-19

    ... From the Federal Register Online via the Government Publishing Office ENVIRONMENTAL PROTECTION AGENCY Notice of Availability of the Benchmark Dose Technical Guidance AGENCY: Environmental Protection... announcing the availability of Benchmark Dose Technical Guidance (BMD). This document was developed as...

  8. Benchmarking analytical calculations of proton doses in heterogeneous matter.

    Science.gov (United States)

    Ciangaru, George; Polf, Jerimy C; Bues, Martin; Smith, Alfred R

    2005-12-01

    A proton dose computational algorithm, performing an analytical superposition of infinitely narrow proton beamlets (ASPB), is introduced. The algorithm uses the standard pencil beam technique of laterally distributing the central axis broad beam doses according to the Moliere scattering theory extended to slablike varying density media. The purpose of this study was to determine the accuracy of our computational tool by comparing it with experimental and Monte Carlo (MC) simulation data as benchmarks. In the tests, parallel wide beams of protons were scattered in water phantoms containing embedded air and bone materials with simple geometrical forms and spatial dimensions of a few centimeters. For homogeneous water and bone phantoms, the proton doses we calculated with the ASPB algorithm were found to be very comparable to experimental and MC data. For layered bone slab inhomogeneity in water, the comparison between our analytical calculation and the MC simulation showed reasonable agreement, even when the inhomogeneity was placed at the Bragg peak depth. There was also reasonable agreement for the parallelepiped bone block inhomogeneity placed at various depths, except for cases in which the bone was located in the region of the Bragg peak, where discrepancies exceeded 10%. When the inhomogeneity was in the form of abutting air-bone slabs, discrepancies of as much as 8% occurred in the lateral dose profiles on the air cavity side of the phantom. Additionally, the analytical depth-dose calculations disagreed with the MC calculations within 3% of the Bragg peak dose, at the entry and midway depths in the phantom. The distal depth-dose 20%-80% fall-off widths and ranges calculated with our algorithm and the MC simulation were generally within 0.1 cm of agreement. The analytical lateral-dose profile calculations showed smaller (by less than 0.1 cm) 20%-80% penumbra widths and shorter fall-off tails than did those calculated by the MC simulations. Overall
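
The core of any pencil-beam superposition can be sketched simply. Below, a broad beam's lateral profile is built by summing narrow beamlets, each spread by a Gaussian standing in for Moliere multiple-scattering theory; the beam width, beamlet spacing, and sigma are hypothetical, not the ASPB algorithm's actual parameters:

```python
import math

def lateral_profile(xs, beam_centers, sigma):
    """Superpose narrow beamlets: each beamlet deposits a unit-area
    Gaussian of lateral width sigma; the broad-beam profile at each
    position x is the sum over all beamlet centers."""
    out = []
    norm = sigma * math.sqrt(2.0 * math.pi)
    for x in xs:
        out.append(sum(math.exp(-0.5 * ((x - c) / sigma) ** 2) / norm
                       for c in beam_centers))
    return out

# Hypothetical 10 cm wide beam sampled by beamlets every 1 mm, sigma = 5 mm.
centers = [i * 0.1 for i in range(-50, 51)]   # beamlet positions, cm
xs = [-8.0, -5.0, 0.0, 5.0, 8.0]              # lateral sample points, cm
profile = lateral_profile(xs, centers, sigma=0.5)
print(profile)
```

The resulting profile is flat in the field center, rolls off through a penumbra at the field edges (±5 cm), and is essentially zero well outside, which is the qualitative shape compared against MC penumbra widths above.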

  9. Dose-response modeling : Evaluation, application, and development of procedures for benchmark dose analysis in health risk assessment of chemical substances

    OpenAIRE

    Sand, Salomon

    2005-01-01

    In this thesis, dose-response modeling and procedures for benchmark dose (BMD) analysis in health risk assessment of chemical substances have been investigated. The BMD method has been proposed as an alternative to the NOAEL (no-observed-adverse-effect-level) approach in health risk assessment of non-genotoxic agents. According to the BMD concept, a dose-response model is fitted to data and the BMD is defined as the dose causing a predetermined change in response. A lowe...
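
The definition in the abstract translates directly into a root-finding problem: fit a model, then solve for the dose at which the mean response has shifted by the predetermined amount. A sketch using a hypothetical exponential dose-response model and a 5% relative change (the model form and parameters are illustrative, not the thesis's):

```python
import math

def bmd_continuous(f, background, change=0.05, hi=100.0, tol=1e-9):
    """BMD for a continuous endpoint: the dose at which the fitted mean
    response f(d) has shifted by a predetermined fraction (here 5%) from
    the background mean f(0).  Solved by bisection, assuming f is
    monotone increasing on [0, hi]."""
    target = background * (1.0 + change)
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical exponential model m(d) = a * exp(b * d); for this form the
# analytic answer is ln(1 + change) / b, which the bisection should match.
a, b = 10.0, 0.03
bmd05 = bmd_continuous(lambda d: a * math.exp(b * d), a)
print(round(bmd05, 4))
```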

  10. Quality Assurance Testing of Version 1.3 of U.S. EPA Benchmark Dose Software (Presentation)

    Science.gov (United States)

    EPA benchmark dose software (BMDS) is used to evaluate chemical dose-response data in support of Agency risk assessments, and must therefore be dependable. Quality assurance testing methods developed for BMDS were designed to assess model dependability with respect to curve-fitt...

  11. SPICE benchmark for global tomographic methods

    Science.gov (United States)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrization, scale resolution and theoretical approach. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model is carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed one complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal anisotropy), attenuation, density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high fidelity anisotropic modelling was performed by using a state-of-the-art anisotropic anelastic modelling code, that is, the coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events, and three-component seismograms are recorded by 256 stations. Because of the limitation of the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10 500 s. The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal correction to recover the shallow structures, the loss of resolution with depth, the smearing effect, both horizontal and vertical, the inaccuracy of amplitude of isotropic S-wave velocity variation, the difficulty of retrieving the magnitude of azimuthal

  12. Benchmark dose profiles for joint-action continuous data in quantitative risk assessment.

    Science.gov (United States)

    Deutsch, Roland C; Piegorsch, Walter W

    2013-09-01

    Benchmark analysis is a widely used tool in biomedical and environmental risk assessment. Therein, estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a prespecified benchmark response (BMR) is well understood for the case of an adverse response to a single stimulus. For cases where two agents are studied in tandem, however, the benchmark approach is far less developed. This paper demonstrates how the benchmark modeling paradigm can be expanded from the single-agent setting to joint-action, two-agent studies. Focus is on continuous response outcomes. Extending the single-exposure setting, representations of risk are based on a joint-action dose-response model involving both agents. Based on such a model, the concept of a benchmark profile-a two-dimensional analog of the single-dose BMD at which both agents achieve the specified BMR-is defined for use in quantitative risk characterization and assessment.
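
The benchmark-profile idea can be made concrete with a toy joint-action model. Below, a hypothetical bilinear mean-response surface mu(d1, d2) = a + b1*d1 + b2*d2 + c*d1*d2 (not the paper's model class) is solved for the curve of dose pairs at which the mean has shifted by the BMR:

```python
def bmp_curve(a, b1, b2, c, bmr_change, d1_grid):
    """Benchmark-profile sketch: for each dose d1 of agent 1, solve
    mu(d1, d2) - mu(0, 0) = bmr_change for d2 under the bilinear model
    mu(d1, d2) = a + b1*d1 + b2*d2 + c*d1*d2, tracing the
    two-dimensional analog of a single-dose BMD."""
    profile = []
    for d1 in d1_grid:
        denom = b2 + c * d1
        if denom > 0:
            d2 = (bmr_change - b1 * d1) / denom
            if d2 >= 0:
                profile.append((d1, d2))
    return profile

# Hypothetical coefficients; BMR = absolute change of 1.0 in the mean.
pts = bmp_curve(a=5.0, b1=0.4, b2=0.8, c=0.1, bmr_change=1.0,
                d1_grid=[0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
print(pts)
```

Each point on the curve is a dose pair that jointly achieves the BMR, so the profile plays the role the single BMD plays in one-agent analysis.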

  13. Evaluation of the benchmark dose for point of departure determination for a variety of chemical classes in applied regulatory settings.

    Science.gov (United States)

    Izadi, Hoda; Grundy, Jean E; Bose, Ranjan

    2012-05-01

    Repeated-dose studies received by the New Substances Assessment and Control Bureau (NSACB) of Health Canada are used to provide hazard information toward risk calculation. These studies provide a point of departure (POD), traditionally the NOAEL or LOAEL, which is used to extrapolate the quantity of substance above which adverse effects can be expected in humans. This project explored the use of benchmark dose (BMD) modeling as an alternative to this approach for studies with few dose groups. Continuous data from oral repeated-dose studies for chemicals previously assessed by NSACB were reanalyzed using U.S. EPA benchmark dose software (BMDS) to determine the BMD and BMD 95% lower confidence limit (BMDL(05)) for each endpoint critical to NOAEL or LOAEL determination for each chemical. Endpoint-specific benchmark dose-response levels, indicative of adversity, were consistently applied. An overall BMD and BMDL(05) were calculated for each chemical using the geometric mean. The POD obtained from benchmark analysis was then compared with the traditional toxicity thresholds originally used for risk assessment. The BMD and BMDL(05) generally were higher than the NOAEL, but lower than the LOAEL. BMDL(05) was generally constant at 57% of the BMD. The BMD approach provided a clear advantage in health risk assessment when a LOAEL was the only POD identified, or when dose groups were widely distributed. Although the benchmark method cannot always be applied, in the selected studies with few dose groups it provided a more accurate estimate of the real no-adverse-effect level of a substance.
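
Summarizing per-endpoint results by the geometric mean, as described above, is a one-liner with the standard library; the endpoint BMD and BMDL values below are hypothetical placeholders, not the study's data:

```python
from statistics import geometric_mean

# Hypothetical per-endpoint BMDs and BMDLs (mg/kg-day) for one chemical.
endpoint_bmds = [2.1, 3.4, 1.8, 2.9]
endpoint_bmdls = [1.2, 1.9, 1.0, 1.7]

overall_bmd = geometric_mean(endpoint_bmds)
overall_bmdl = geometric_mean(endpoint_bmdls)

# Ratio comparable to the ~57% BMDL/BMD figure reported in the abstract.
print(round(overall_bmd, 3), round(overall_bmdl, 3),
      round(overall_bmdl / overall_bmd, 2))
```

The geometric mean is preferred over the arithmetic mean here because BMDs are ratio-scale quantities spanning a multiplicative range, so averaging on the log scale avoids domination by the largest endpoint.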

  14. Immunotoxicity of perfluorinated alkylates: calculation of benchmark doses based on serum concentrations in children

    DEFF Research Database (Denmark)

    Grandjean, Philippe; Budtz-Joergensen, Esben

    2013-01-01

    BACKGROUND: Immune suppression may be a critical effect associated with exposure to perfluorinated compounds (PFCs), as indicated by recent data on vaccine antibody responses in children. Therefore, this information may be crucial when deciding on exposure limits. METHODS: Results obtained from...... follow-up of a Faroese birth cohort were used. Serum-PFC concentrations were measured at age 5 years, and serum antibody concentrations against tetanus and diphtheria toxoids were obtained at age 7 years. Benchmark dose results were calculated in terms of serum concentrations for 431 children...

  15. A unified framework for benchmark dose estimation applied to mixed models and model averaging

    DEFF Research Database (Denmark)

    Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.

    2013-01-01

    This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both...

  16. Performance Benchmarking of Fast Multipole Methods

    KAUST Repository

    Al-Harthi, Noha A.

    2013-06-01

    The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity – vector length, core count, and MPI process. Intel’s Xeon Phi coprocessor, NVIDIA’s Kepler GPU, and IBM’s BlueGene/Q all have a byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing such as FFT are continuously evolving to keep up with this trend in hardware. In the meantime it is also necessary to invest in novel algorithms that are more suitable for computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex-based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm while having an operational intensity that is higher than a matrix-matrix multiplication. In fact, the FMM can reduce the byte/flop requirement to around 0.01, which means that it will remain compute bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q. This study aims to provide a comprehensive benchmark of a state-of-the-art FMM code “exaFMM” on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning for certain problem size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than the asymptotics themselves, and therefore are strongly implementation and hardware
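
The byte/flop argument above is the standard roofline-style comparison and can be reduced to a single inequality: a kernel is compute bound when it needs fewer bytes of memory traffic per flop than the machine can supply. The kernel intensities below are the figures quoted in the abstract plus one illustrative memory-bound example:

```python
def bound(kernel_bytes_per_flop, machine_bytes_per_flop=0.2):
    """Roofline-style check: compute bound when the kernel's required
    memory traffic per flop is below the machine balance (0.2 byte/flop,
    as quoted for Xeon Phi, Kepler, and BlueGene/Q)."""
    if kernel_bytes_per_flop < machine_bytes_per_flop:
        return "compute bound"
    return "memory bound"

print(bound(0.01))  # FMM, ~0.01 byte/flop per the abstract
print(bound(12.0))  # an illustrative stream-like vector update
```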

  17. Benchmark dose profiles for joint-action quantal data in quantitative risk assessment.

    Science.gov (United States)

    Deutsch, Roland C; Piegorsch, Walter W

    2012-12-01

    Benchmark analysis is a widely used tool in public health risk analysis. Therein, estimation of minimum exposure levels, called Benchmark Doses (BMDs), that induce a prespecified Benchmark Response (BMR) is well understood for the case of an adverse response to a single stimulus. For cases where two agents are studied in tandem, however, the benchmark approach is far less developed. This article demonstrates how the benchmark modeling paradigm can be expanded from the single-dose setting to joint-action, two-agent studies. Focus is on response outcomes expressed as proportions. Extending the single-exposure setting, representations of risk are based on a joint-action dose-response model involving both agents. Based on such a model, the concept of a benchmark profile (BMP) - a two-dimensional analog of the single-dose BMD at which both agents achieve the specified BMR - is defined for use in quantitative risk characterization and assessment. The resulting, joint, low-dose guidelines can improve public health planning and risk regulation when dealing with low-level exposures to combinations of hazardous agents.

  18. Issues in benchmarking human reliability analysis methods : a literature review.

    Energy Technology Data Exchange (ETDEWEB)

    Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)

    2008-04-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  19. Application of Benchmark Dose (BMD) in Estimating Biological Exposure Limit (BEL) to Cadmium

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Objective To estimate the biological exposure limit (BEL) using the benchmark dose (BMD) approach, based on two sets of data from occupational epidemiology. Methods Cadmium-exposed workers were selected from a cadmium smelting factory and a zinc product factory. Doctors, nurses or shop assistants living in the same area served as a control group. Urinary cadmium (UCd) was used as an exposure biomarker, and urinary β2-microglobulin (B2M), N-acetyl-β-D-glucosaminidase (NAG) and albumin (ALB) as effect biomarkers. All urine parameters were adjusted by urinary creatinine. The BMDS software (Version 1.3.2, U.S. EPA) was used to calculate BMDs. Results The cut-off point (abnormal value) was determined as the upper 95% limit of each effect biomarker in the control group. There was a significant dose-response relationship between the effect biomarkers (urinary B2M, NAG, and ALB) and the exposure biomarker (UCd). The BEL was 5 μg/g creatinine with UB2M as the effect biomarker, consistent with the WHO recommendation, and 3 μg/g creatinine with UNAG as the effect biomarker. The more sensitive the biomarker used, the larger the occupational population protected. Conclusion BMD can be used to estimate the biological exposure limit (BEL). UNAG is a sensitive biomarker for estimating the BEL after cadmium exposure.

  20. BENCHMARKING UPGRADED HOTSPOT DOSE CALCULATIONS AGAINST MACCS2 RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Brotherton, Kevin

    2009-04-30

    The radiological consequence of interest for a documented safety analysis (DSA) is the centerline Total Effective Dose Equivalent (TEDE) incurred by the Maximally Exposed Offsite Individual (MOI) evaluated at the 95th percentile consequence level. An upgraded version of HotSpot (Version 2.07) has been developed with the capabilities to read site meteorological data and perform the necessary statistical calculations to determine the 95th percentile consequence result. These capabilities should allow HotSpot to join MACCS2 (Version 1.13.1) and GENII (Version 1.485) as radiological consequence toolbox codes in the Department of Energy (DOE) Safety Software Central Registry. Using the same meteorological data file, scenarios involving a one curie release of ²³⁹Pu were modeled in both HotSpot and MACCS2. Several sets of release conditions were modeled, and the results compared. In each case, input parameter specifications for each code were chosen to match one another as much as the codes would allow. The results from the two codes are in excellent agreement. Slight differences observed in results are explained by algorithm differences.

  1. Benchmarking: a method for continuous quality improvement in health.

    Science.gov (United States)

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-05-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted. PMID:23634166

  2. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    Science.gov (United States)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  3. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns. W

  4. A biosegmentation benchmark for evaluation of bioimage analysis methods

    Directory of Open Access Journals (Sweden)

    Kvilekval Kristian

    2009-11-01

    Abstract Background We present a biosegmentation benchmark that includes infrastructure, datasets with associated ground truth, and validation methods for biological image analysis. The primary motivation for creating this resource comes from the fact that it is very difficult, if not impossible, for an end-user to choose from the wide range of segmentation methods available in the literature for a particular bioimaging problem. No single algorithm is likely to be equally effective on a diverse set of images, and each method has its own strengths and limitations. We hope that our benchmark resource will be of considerable help both to bioimaging researchers looking for novel image processing methods and to image processing researchers exploring application of their methods to biology. Results Our benchmark consists of different classes of images and ground truth data, ranging in scale from subcellular and cellular to tissue level, each of which poses its own set of challenges to image analysis. The associated ground truth data can be used to evaluate the effectiveness of different methods, to improve methods and to compare results. Standard evaluation methods and some analysis tools are integrated into a database framework that is available online at http://bioimage.ucsb.edu/biosegmentation/. Conclusion This online benchmark will facilitate integration and comparison of image analysis methods for bioimages. While the primary focus is on biological images, we believe that the dataset and infrastructure will be of interest to researchers and developers working with biological image analysis, image segmentation and object tracking in general.
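
Comparing a segmentation against ground truth, as such benchmarks do, usually reduces to overlap statistics. A minimal sketch of the two standard ones (the benchmark in the article ships its own evaluation tooling; this only illustrates the metrics):

```python
def jaccard(pred, truth):
    """Jaccard index (intersection over union) of two binary masks,
    given as equal-length sequences of 0/1 values."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

def dice(pred, truth):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

truth = [0, 1, 1, 1, 0, 0, 1, 0]
pred  = [0, 1, 1, 0, 0, 1, 1, 0]
print(jaccard(pred, truth), dice(pred, truth))
```

Both metrics reward the same overlap but Dice weights it more generously, which is why benchmark reports often quote both.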

  5. Generic Hockey-Stick Model for Estimating Benchmark Dose and Potency: Performance Relative to BMDS and Application to Anthraquinone

    OpenAIRE

    Kenneth T. Bogen

    2010-01-01

    Benchmark dose software (BMDS), developed by the U.S. Environmental Protection Agency, involves a growing suite of models and decision rules now widely applied to assess noncancer and cancer risk, yet its statistical performance has never been examined systematically. As typically applied, BMDS also ignores the possibility of reduced risk at low doses (“hormesis”). A simpler, proposed Generic Hockey-Stick (GHS) model also estimates benchmark dose and potency, and additionally characteri...

  6. Benchmarking of methods for genomic taxonomy

    DEFF Research Database (Denmark)

    Larsen, Mette Voldby; Cosentino, Salvatore; Lukjancenko, Oksana;

    2014-01-01

    One of the first issues that emerges when a prokaryotic organism of interest is encountered is the question of what it is--that is, which species it is. The 16S rRNA gene formed the basis of the first method for sequence-based taxonomy and has had a tremendous impact on the field of microbiology......; (ii) Reads2Type that searches for species-specific 50-mers in either the 16S rRNA gene or the gyrB gene (for the Enterobacteriaceae family); (iii) the ribosomal multilocus sequence typing (rMLST) method that samples up to 53 ribosomal genes; (iv) TaxonomyFinder, which is based on species...
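
The species-specific k-mer idea behind tools like Reads2Type can be sketched with sets: collect the k-mers of a target species' marker sequence and subtract those seen in any other species, so a single surviving k-mer hit identifies the species. Real tools use 50-mers over the 16S rRNA or gyrB gene; k=8 and the toy sequences below are purely illustrative:

```python
def specific_kmers(target, others, k=8):
    """Return the k-mers present in the target sequence but absent from
    every other sequence; any hit against this set is species-specific."""
    kmers = lambda s: {s[i:i + k] for i in range(len(s) - k + 1)}
    target_set = kmers(target)
    for other in others:
        target_set -= kmers(other)
    return target_set

# Hypothetical marker-gene fragments for three species.
seqs = {
    "sp_A": "ACGTACGGTTACGATCGATCGGAT",
    "sp_B": "ACGTACGGTTACGTTCGATCGGAT",
    "sp_C": "TTGTACGGTTACGATCGATAGGAT",
}
markers = specific_kmers(seqs["sp_A"], [seqs["sp_B"], seqs["sp_C"]])
print(sorted(markers))
```

Matching reads against such a precomputed set is fast (one hash lookup per read position), which is the appeal of the approach for raw sequencing reads.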

  7. Modeling the emetic potencies of food-borne trichothecenes by benchmark dose methodology.

    Science.gov (United States)

    Male, Denis; Wu, Wenda; Mitchell, Nicole J; Bursian, Steven; Pestka, James J; Wu, Felicia

    2016-08-01

    Trichothecene mycotoxins commonly co-contaminate cereal products. They cause immunosuppression, anorexia, and emesis in multiple species. Dietary exposure to such toxins often occurs in mixtures. Hence, if it were possible to determine their relative toxicities and assign toxic equivalency factors (TEFs) to each trichothecene, risk management and regulation of these mycotoxins could become more comprehensive and simple. We used a mink emesis model to compare the toxicities of deoxynivalenol (DON), 3-acetyldeoxynivalenol, 15-acetyldeoxynivalenol, nivalenol (NIV), fusarenon-X (FX), HT-2 toxin, and T-2 toxin. These toxins were administered to mink via gavage and intraperitoneal (IP) injection. The United States Environmental Protection Agency (EPA) benchmark dose software was used to determine benchmark doses for each trichothecene. The relative potency of each toxin was calculated as the ratio of its benchmark dose to that of DON. Our results showed that mink were more sensitive to orally administered toxins than to toxins administered by IP injection. T-2 and HT-2 toxins caused the greatest emetic responses, followed by FX, and then by DON, its acetylated derivatives, and NIV. Although these results provide key information on comparative toxicities, there is still a need for more animal-based studies focusing on various endpoints and combined effects of trichothecenes before TEFs can be established. PMID:27292944
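
Once per-toxin benchmark doses exist, relative potencies are simple ratios. One common convention (reference BMD divided by each toxin's BMD, so a more potent toxin, with a smaller BMD, gets a larger factor) is sketched below; the BMD values are hypothetical placeholders, not the paper's results:

```python
def relative_potencies(bmds, reference="DON"):
    """Relative potency of each toxin as BMD(reference) / BMD(toxin):
    a smaller benchmark dose means higher potency, hence a larger factor."""
    ref = bmds[reference]
    return {tox: ref / bmd for tox, bmd in bmds.items()}

# Hypothetical emetic BMDs (mg/kg bw), for illustration only.
bmds = {"DON": 0.1, "T-2": 0.02, "HT-2": 0.025, "FX": 0.05, "NIV": 0.2}
tefs = relative_potencies(bmds)
print(tefs)
```

A mixture's DON-equivalent exposure would then be the sum of each component's concentration times its factor, which is what makes TEFs attractive for regulating co-contaminated cereals.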

  8. Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark

    International Nuclear Information System (INIS)

    There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without involved normalization which may cause some quantities to be cancelled. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity which are known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from literature. The significant uncertainty contributions are identified as

  9. Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark.

    Science.gov (United States)

    Renner, F; Wulff, J; Kapsch, R-P; Zink, K

    2015-10-01

    There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without involved normalization which may cause some quantities to be cancelled. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity which are known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from literature. The significant uncertainty contributions are identified as
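The uncertainty budget described in this record follows the GUM: each input quantity contributes its sensitivity coefficient times its standard uncertainty, and the contributions are combined in quadrature. A minimal sketch with invented contribution values (not the PTB experiment's numbers):

```python
import math

# (sensitivity coefficient, standard uncertainty) pairs for a few input
# quantities -- the names and values here are illustrative placeholders,
# not those determined in the actual experiment.
contributions = {
    "chamber position": (1.0, 0.002),
    "photon cross sections": (0.8, 0.004),
    "electron stopping-power I-value": (0.5, 0.006),
}

# GUM combined standard uncertainty: quadrature sum of c_i * u(x_i),
# assuming uncorrelated input quantities.
u_combined = math.sqrt(sum((c * u) ** 2 for c, u in contributions.values()))
```

Ranking the individual terms c_i * u(x_i) is what identifies the "significant uncertainty contributions" the abstract refers to.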

  10. Concordance of transcriptional and apical benchmark dose levels for conazole-induced liver effects in mice.

    Science.gov (United States)

    Bhat, Virunya S; Hester, Susan D; Nesnow, Stephen; Eastmond, David A

    2013-11-01

    The ability to anchor chemical class-based gene expression changes to phenotypic lesions and to describe these changes as a function of dose and time informs mode-of-action determinations and improves quantitative risk assessments. Previous global expression profiling identified a 330-probe cluster differentially expressed and commonly responsive to 3 hepatotumorigenic conazoles (cyproconazole, epoxiconazole, and propiconazole) at 30 days. Extended to 2 more conazoles (triadimefon and myclobutanil), the present assessment encompasses 4 tumorigenic and 1 nontumorigenic conazole. Transcriptional benchmark dose levels (BMDL(T)) were estimated for a subset of the cluster with dose-responsive behavior and a ≥ 5-fold increase or decrease in signal intensity at the highest dose. These genes primarily encompassed CAR/RXR activation, P450 metabolism, liver hypertrophy/glutathione depletion, LPS/IL-1-mediated inhibition of RXR, and NRF2-mediated oxidative stress pathways. Median BMDL(T) estimates from the subset were concordant (within a factor of 2.4) with apical benchmark doses (BMDL(A)) for increased liver weight at 30 days for the 5 conazoles. The 30-day median BMDL(T) estimates were within one-half order of magnitude of the chronic BMDL(A) for hepatocellular tumors. Potency differences seen in the dose-responsive transcription of certain phase II metabolism, bile acid detoxification, and lipid oxidation genes mirrored each conazole's tumorigenic potency. The 30-day BMDL(T) corresponded to tumorigenic potency on a milligram per kilogram day basis with cyproconazole > epoxiconazole > propiconazole > triadimefon > myclobutanil (nontumorigenic). These results support the utility of measuring short-term gene expression changes to inform quantitative risk assessments from long-term exposures.

  11. Development of a chronic noncancer oral reference dose and drinking water screening level for sulfolane using benchmark dose modeling.

    Science.gov (United States)

    Thompson, Chad M; Gaylor, David W; Tachovsky, J Andrew; Perry, Camarie; Carakostas, Michael C; Haws, Laurie C

    2013-12-01

    Sulfolane is a widely used industrial solvent that is often used for gas treatment (sour gas sweetening; hydrogen sulfide removal from shale and coal processes, etc.), and in the manufacture of polymers and electronics, and may be found in pharmaceuticals as a residual solvent used in the manufacturing processes. Sulfolane is considered a high production volume chemical with worldwide production around 18 000-36 000 tons per year. Given that sulfolane has been detected as a contaminant in groundwater, an important potential route of exposure is tap water ingestion. Because there are currently no federal drinking water standards for sulfolane in the USA, we developed a noncancer oral reference dose (RfD) based on benchmark dose modeling, as well as a tap water screening value that is protective of ingestion. Review of the available literature suggests that sulfolane is not likely to be mutagenic, clastogenic or carcinogenic, or pose reproductive or developmental health risks except perhaps at very high exposure concentrations. RfD values derived using benchmark dose modeling were 0.01-0.04 mg kg(-1) per day, although modeling of developmental endpoints resulted in higher values, approximately 0.4 mg kg(-1) per day. The lowest, most conservative, RfD of 0.01 mg kg(-1) per day was based on reduced white blood cell counts in female rats. This RfD was used to develop a tap water screening level that is protective of ingestion, viz. 365 µg l(-1). It is anticipated that these values, along with the hazard identification and dose-response modeling described herein, should be informative for risk assessors and regulators interested in setting health-protective drinking water guideline values for sulfolane.
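The step from the RfD to a tap-water screening level is simple arithmetic once exposure factors are chosen. A back-of-envelope sketch with common default assumptions (70 kg adult body weight, 2 L/day drinking-water intake; the paper's own derivation evidently uses somewhat different factors, since it reports 365 µg/L):

```python
# Screening level (ug/L) = RfD (mg/kg-day) * body weight (kg)
#                          / drinking-water intake (L/day) * 1000 (ug/mg)
rfd = 0.01            # mg/kg-day, the paper's most conservative RfD
body_weight = 70.0    # kg, assumed default (not stated in the abstract)
intake = 2.0          # L/day, assumed default (not stated in the abstract)

screening_level_ug_per_l = rfd * body_weight / intake * 1000.0
# ~350 ug/L with these defaults, the same order as the reported 365 ug/L
```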

  12. Determination of damage in bone metabolism caused by co-exposure to fluoride and arsenic using benchmark dose method in Chinese population

    Institute of Scientific and Technical Information of China (English)

    曾奇兵; 刘云; 洪峰; 杨鋆; 喻仙

    2012-01-01

    Objective: To explore the biological exposure limitation for bone metabolism injury with the benchmark dose method for the determination of potential risk associated with chronic co-exposure to fluoride and arsenic in a Chinese population. Methods: The benchmark dose (BMD) and the lower confidence limit of the benchmark dose (BMDL) of urinary fluoride and arsenic in the exposed population were calculated using BMDS Version 2.1.2. Results: The BMD and BMDL of urinary fluoride were 1.96 mg/g creatinine and 1.32 mg/g creatinine; the BMD and BMDL of urinary arsenic were 120.11 μg/g creatinine and 94.83 μg/g creatinine. Conclusion: The estimated biological exposure limitations of urinary fluoride and arsenic in chronic co-exposure to fluoride and arsenic are 1.32 mg/g creatinine and 94.83 μg/g creatinine, respectively.

  13. Current modeling practice may lead to falsely high benchmark dose estimates.

    Science.gov (United States)

    Ringblom, Joakim; Johanson, Gunnar; Öberg, Mattias

    2014-07-01

    Benchmark dose (BMD) modeling is increasingly used as the preferred approach to define the point-of-departure for health risk assessment of chemicals. As data are inherently variable, there is always a risk of selecting a model that defines a lower confidence bound of the BMD (BMDL) that, contrary to expectation, exceeds the true BMD. The aim of this study was to investigate how often and under what circumstances such anomalies occur under current modeling practice. Continuous data were generated from a realistic dose-effect curve by Monte Carlo simulations using four dose groups and a set of five different dose placement scenarios, group sizes between 5 and 50 animals and coefficients of variation of 5-15%. The BMD calculations were conducted using nested exponential models, as most BMD software use nested approaches. "Non-protective" BMDLs (higher than the true BMD) were frequently observed, in some scenarios reaching 80%. The phenomenon was mainly related to the selection of the non-sigmoidal exponential model (Effect = a·e^(b·dose)). In conclusion, non-sigmoid models should be used with caution as they may underestimate the risk, illustrating that awareness of the model selection process and sound identification of the point-of-departure is vital for health risk assessment.
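The kind of simulation described here can be sketched in a few lines: draw continuous responses from a sigmoidal "true" dose-effect curve, then fit the non-sigmoid exponential model. The curve shape, noise level, and log-linear fitting shortcut below are our assumptions for illustration, not the authors' exact design:

```python
import math
import random

random.seed(42)

# Four dose groups; the "true" dose-effect curve is sigmoidal in dose
# (an assumed Hill-type shape, chosen only for the demonstration).
doses = [0.0, 1.0, 3.0, 10.0]
def true_effect(d):
    return 1.0 + 2.0 * d**2 / (4.0**2 + d**2)

# 20 animals per group with roughly a 10% coefficient of variation.
means = []
for d in doses:
    group = [true_effect(d) * (1 + random.gauss(0, 0.10)) for _ in range(20)]
    means.append(sum(group) / len(group))

# Fit the non-sigmoid exponential model Effect = a * exp(b * dose) by
# ordinary least squares on log(mean) vs dose -- a crude stand-in for the
# maximum-likelihood fitting done by BMD software.
ys = [math.log(m) for m in means]
n = len(doses)
xbar, ybar = sum(doses) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(doses, ys)) / \
    sum((x - xbar) ** 2 for x in doses)
a = math.exp(ybar - b * xbar)

# BMD for a 5% increase over background under the fitted model.
bmd = math.log(1.05) / b
```

Repeating this loop many times and comparing each run's BMDL with the true BMD is how the "non-protective BMDL" frequency in the abstract would be tallied.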

  14. A Consumer's Guide to Benchmark Dose Models: Results of U.S. EPA Testing of 14 Dichotomous, 8 Continuous, and 6 Developmental Models (Presentation)

    Science.gov (United States)

    Benchmark dose risk assessment software (BMDS) was designed by EPA to generate dose-response curves and facilitate the analysis, interpretation and synthesis of toxicological data. Partial results of QA/QC testing of the EPA benchmark dose software (BMDS) are presented. BMDS pr...

  15. An adaptive nonparametric method in benchmark analysis for bioassay and environmental studies.

    Science.gov (United States)

    Bhattacharya, Rabi; Lin, Lizhen

    2010-12-01

    We present a novel nonparametric method for bioassay and benchmark analysis in risk assessment, which averages isotonic MLEs based on disjoint subgroups of dosages. The asymptotic theory for the methodology is derived, showing that the MISEs (mean integrated squared errors) of the estimates of both the dose-response curve F and its inverse F^(-1) achieve the optimal rate O(N^(-4/5)). Also, we compute the asymptotic distribution of the estimate ζ̂_p of the effective dosage ζ(p) = F^(-1)(p), which is shown to have an optimally small asymptotic variance.
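Isotonic MLEs of a dose-response curve rest on the pool-adjacent-violators algorithm (PAVA), which the method above applies within disjoint dosage subgroups before averaging. A minimal, equal-weight PAVA sketch (response rates are invented for illustration):

```python
# Minimal pool-adjacent-violators (PAVA) step: the building block of
# isotonic regression. Equal observation weights are assumed.
def pava(y):
    """Return the non-decreasing sequence closest to y in least squares."""
    blocks = [[v, 1] for v in y]          # [block mean, block size]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:      # violator: pool the blocks
            m0, n0 = blocks[i]
            m1, n1 = blocks[i + 1]
            blocks[i:i + 2] = [[(m0 * n0 + m1 * n1) / (n0 + n1), n0 + n1]]
            i = max(i - 1, 0)                    # re-check the previous block
        else:
            i += 1
    out = []
    for mean, size in blocks:
        out.extend([mean] * size)
    return out

# Illustrative raw response rates at five increasing dosages.
fitted = pava([0.05, 0.02, 0.10, 0.30, 0.25])
```

Here the two decreasing pairs are pooled, giving a monotone fit; the paper's estimator averages such fits over disjoint dosage subgroups to attain the optimal convergence rate.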

  16. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    Science.gov (United States)

    Shao, Kan; Gift, Jeffrey S; Setzer, R Woodrow

    2013-11-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose-response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean±standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the "hybrid" method and relative deviation approach, we first evaluate six representative continuous dose-response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates.

  17. Netherlands contribution to the EC project: Benchmark exercise on dose estimation in a regulatory context

    International Nuclear Information System (INIS)

    On request of the Netherlands government, FEL-TNO is developing a decision support system, with the acronym RAMBOS, for the assessment of the off-site consequences of an accident with hazardous materials. This is a user-friendly interactive computer program which uses very sophisticated graphical means. RAMBOS supports the emergency planning organization in two ways. Firstly, the risk to residents in the surroundings of the accident is quantified in terms of severity and magnitude (number of casualties, etc.). Secondly, the consequences of countermeasures, such as sheltering and evacuation, are predicted. By evaluating several countermeasures the user can determine an optimum policy to reduce the impact of the accident. Within the framework of the EC project 'Benchmark exercise on dose estimation in a regulatory context', calculations were carried out with the RAMBOS system on request of the Ministry of Housing, Physical Planning and Environment. This report contains the results of these calculations. 3 refs.; 2 figs.; 10 tabs

  18. SU-E-I-32: Benchmarking Head CT Doses: A Pooled Vs. Protocol Specific Analysis of Radiation Doses in Adult Head CT Examinations

    Energy Technology Data Exchange (ETDEWEB)

    Fujii, K [Graduate School of Medicine, Nagoya University, Nagoya, JP (Japan); UCLA School of Medicine, Los Angeles, CA (United States); Bostani, M; Cagnon, C; McNitt-Gray, M [UCLA School of Medicine, Los Angeles, CA (United States)

    2015-06-15

    Purpose: The aim of this study was to collect CT dose index data from adult head exams to establish benchmarks based on either: (a) values pooled from all head exams or (b) values for specific protocols. One part of this was to investigate differences in scan frequency and CT dose index data for inpatients versus outpatients. Methods: We collected CT dose index data (CTDIvol) from adult head CT examinations performed at our medical facilities from Jan 1st to Dec 31st, 2014. Four of these scanners were used for inpatients, the other five were used for outpatients. All scanners used Tube Current Modulation. We used X-ray dose management software to mine dose index data and evaluate CTDIvol for 15807 inpatients and 4263 outpatients undergoing Routine Brain, Sinus, Facial/Mandible, Temporal Bone, CTA Brain and CTA Brain-Neck protocols, and combined across all protocols. Results: For inpatients, Routine Brain series represented 84% of total scans performed. For outpatients, Sinus scans represented the largest fraction (36%). The CTDIvol (mean ± SD) across all head protocols was 39 ± 30 mGy (min-max: 3.3–540 mGy). The CTDIvol for Routine Brain was 51 ± 6.2 mGy (min-max: 36–84 mGy). The values for Sinus were 24 ± 3.2 mGy (min-max: 13–44 mGy) and for Facial/Mandible were 22 ± 4.3 mGy (min-max: 14–46 mGy). The mean CTDIvol for inpatients and outpatients was similar across protocols with one exception (CTA Brain-Neck). Conclusion: There is substantial dose variation when results from all protocols are pooled together; this is primarily a function of the differences in technical factors of the protocols themselves. When protocols are analyzed separately, there is much less variability. While analyzing pooled data affords some utility, reviewing protocols segregated by clinical indication provides greater opportunity for optimization and establishing useful benchmarks.
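The pooled-versus-per-protocol comparison made here is easy to reproduce: pooling mixes protocols with very different technique factors, which inflates the spread. A toy sketch with invented CTDIvol values (mGy), not the study's data:

```python
import statistics

# Invented CTDIvol samples (mGy) for three head protocols -- chosen to
# mimic the cluster structure the abstract describes, not the real data.
ctdi = {
    "Routine Brain": [51, 49, 53, 50, 52],
    "Sinus": [24, 23, 25, 24, 26],
    "Facial/Mandible": [22, 21, 23, 22, 24],
}

# Pooled statistics mix all protocols together...
pooled = [v for values in ctdi.values() for v in values]
pooled_sd = statistics.stdev(pooled)

# ...while per-protocol statistics are far tighter.
per_protocol_sd = {name: statistics.stdev(values)
                   for name, values in ctdi.items()}
```

The pooled standard deviation is dominated by between-protocol differences, which is exactly why the authors recommend benchmarking by clinical indication.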

  19. SU-E-I-32: Benchmarking Head CT Doses: A Pooled Vs. Protocol Specific Analysis of Radiation Doses in Adult Head CT Examinations

    International Nuclear Information System (INIS)

    Purpose: The aim of this study was to collect CT dose index data from adult head exams to establish benchmarks based on either: (a) values pooled from all head exams or (b) values for specific protocols. One part of this was to investigate differences in scan frequency and CT dose index data for inpatients versus outpatients. Methods: We collected CT dose index data (CTDIvol) from adult head CT examinations performed at our medical facilities from Jan 1st to Dec 31st, 2014. Four of these scanners were used for inpatients, the other five were used for outpatients. All scanners used Tube Current Modulation. We used X-ray dose management software to mine dose index data and evaluate CTDIvol for 15807 inpatients and 4263 outpatients undergoing Routine Brain, Sinus, Facial/Mandible, Temporal Bone, CTA Brain and CTA Brain-Neck protocols, and combined across all protocols. Results: For inpatients, Routine Brain series represented 84% of total scans performed. For outpatients, Sinus scans represented the largest fraction (36%). The CTDIvol (mean ± SD) across all head protocols was 39 ± 30 mGy (min-max: 3.3–540 mGy). The CTDIvol for Routine Brain was 51 ± 6.2 mGy (min-max: 36–84 mGy). The values for Sinus were 24 ± 3.2 mGy (min-max: 13–44 mGy) and for Facial/Mandible were 22 ± 4.3 mGy (min-max: 14–46 mGy). The mean CTDIvol for inpatients and outpatients was similar across protocols with one exception (CTA Brain-Neck). Conclusion: There is substantial dose variation when results from all protocols are pooled together; this is primarily a function of the differences in technical factors of the protocols themselves. When protocols are analyzed separately, there is much less variability. While analyzing pooled data affords some utility, reviewing protocols segregated by clinical indication provides greater opportunity for optimization and establishing useful benchmarks

  20. Avoiding Pitfalls in the Use of the Benchmark Dose Approach to Chemical Risk Assessments; Some Illustrative Case Studies (Presentation)

    Science.gov (United States)

    The USEPA's benchmark dose software (BMDS) version 1.2 has been available over the Internet since April, 2000 (epa.gov/ncea/bmds.htm), and has already been used in risk assessments of some significant environmental pollutants (e.g., diesel exhaust, dichloropropene, hexachlorocycl...

  1. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    International Nuclear Information System (INIS)

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using hybrid method are more

  2. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction

    OpenAIRE

    Puton, T.; Kozlowski, L. P.; Rother, K. M.; Bujnicki, J. M.

    2013-01-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative perfor...

  3. Benchmark Experiment of Dose Rate Distributions Around the Gamma Knife Medical Apparatus

    Science.gov (United States)

    Oishi, K.; Kosako, K.; Kobayashi, Y.; Sonoki, I.

    2014-06-01

    Dose rate measurements around a gamma knife apparatus were performed by using an ionization chamber. Analyses have been performed by using the Monte Carlo code MCNP-5. The nuclear library used for the dose rate distribution of 60Co was MCPLIB04. The calculation model was prepared with a high degree of fidelity, such as the position of each Cobalt source and shielding materials. Comparisons between measured results and calculated ones were performed, and a very good agreement was observed. It is concluded that the Monte Carlo calculation method with its related nuclear data library is very effective for such a complicated radiation oncology apparatus.

  4. Benchmark Experiment of Dose Rate Distributions Around the Gamma Knife Medical Apparatus

    Energy Technology Data Exchange (ETDEWEB)

    Oishi, K., E-mail: koji_oishi@shimz.co.jp [Institute of Technology, Shimizu Corporation, Tokyo (Japan); Kosako, K. [Institute of Technology, Shimizu Corporation, Tokyo (Japan); Kobayashi, Y.; Sonoki, I. [Giken Kogyo Co., Ltd., Tokyo (Japan)

    2014-06-15

    Dose rate measurements around a gamma knife apparatus were performed by using an ionization chamber. Analyses have been performed by using the Monte Carlo code MCNP-5. The nuclear library used for the dose rate distribution of {sup 60}Co was MCPLIB04. The calculation model was prepared with a high degree of fidelity, such as the position of each Cobalt source and shielding materials. Comparisons between measured results and calculated ones were performed, and a very good agreement was observed. It is concluded that the Monte Carlo calculation method with its related nuclear data library is very effective for such a complicated radiation oncology apparatus.

  5. Using the fuzzy linear regression method to benchmark the energy efficiency of commercial buildings

    International Nuclear Information System (INIS)

    Highlights: ► Fuzzy linear regression method is used for developing benchmarking systems. ► The systems can be used to benchmark energy efficiency of commercial buildings. ► The resulting benchmarking model can be used by public users. ► The resulting benchmarking model can capture the fuzzy nature of input–output data. -- Abstract: Benchmarking systems from a sample of reference buildings need to be developed to conduct benchmarking processes for the energy efficiency of commercial buildings. However, not all benchmarking systems can be adopted by public users (i.e., other non-reference building owners) because of the different methods in developing such systems. An approach for benchmarking the energy efficiency of commercial buildings using statistical regression analysis to normalize other factors, such as management performance, was developed in a previous work. However, the field data given by experts can be regarded as a distribution of possibility. Thus, the previous work may not be adequate to handle such fuzzy input–output data. Consequently, a number of fuzzy structures cannot be fully captured by statistical regression analysis. This present paper proposes the use of fuzzy linear regression analysis to develop a benchmarking process, the resulting model of which can be used by public users. An illustrative example is given as well.

  6. Comparative Benchmark Dose Modeling as a Tool to Make the First Estimate of Safe Human Exposure Levels to Lunar Dust

    Science.gov (United States)

    James, John T.; Lam, Chiu-wing; Scully, Robert R.

    2013-01-01

    Brief exposures of Apollo astronauts to lunar dust occasionally elicited upper respiratory irritation; however, no limits were ever set for prolonged exposure to lunar dust. Habitats for exploration, whether mobile or fixed, must be designed to limit human exposure to lunar dust to safe levels. We have used a new technique we call Comparative Benchmark Dose Modeling to estimate safe exposure limits for lunar dust collected during the Apollo 14 mission.

  7. SMORN-III benchmark test on reactor noise analysis methods

    International Nuclear Information System (INIS)

    A computational benchmark test was performed in conjunction with the Third Specialists Meeting on Reactor Noise (SMORN-III) which was held in Tokyo, Japan in October 1981. This report summarizes the results of the test as well as the works made for preparation of the test. (author)

  8. Benchmark dose approach for low-level lead induced haematogenesis inhibition and associations of childhood intelligences with ALAD activity and ALA levels.

    Science.gov (United States)

    Wang, Q; Ye, L X; Zhao, H H; Chen, J W; Zhou, Y K

    2011-04-15

    Lead (Pb) levels, delta-aminolevulinic acid dehydratase (ALAD) activities, and zinc protoporphyrin (ZPP) levels in blood, as well as urinary delta-aminolevulinic acid (ALA) and coproporphyrin (CP) concentrations, were measured for 318 environmentally Pb-exposed children recruited from an area of southeast China. The mean blood lead (PbB) level was 75.0 μg/L among all subjects. The benchmark dose (BMD) method yielded a lower PbB BMD of 32.4 μg/L (lower bound of the BMD: 22.7 μg/L) based on ALAD activity than those based on the other three haematological indices, corresponding to a benchmark response of 1%. Childhood intelligence degrees were not significantly associated with ALAD activities or ALA levels. It was concluded that blood ALAD activity is a sensitive indicator of early haematological damage due to low-level Pb exposure in children.

  9. Methodical aspects of benchmarking using in Consumer Cooperatives trade enterprises activity

    Directory of Open Access Journals (Sweden)

    Yu.V. Dvirko

    2013-03-01

    Full Text Available The aim of the article. The aim of this article is to substantiate the main types of benchmarking in the activity of Consumer Cooperatives trade enterprises; to highlight the main advantages and drawbacks of using benchmarking; and to present the authors' view on the expediency of the described forms of benchmarking organization in the activity of Consumer Cooperatives trade enterprises in Ukraine. The results of the analysis. Under modern conditions of developing economic relations and business globalization, large companies, enterprises, and organizations recognize the necessity of thorough and profound research into the best achievements of market actors, with their further use in their own activity. Benchmarking is the process of borrowing competitive advantages and increasing the competitiveness of Consumer Cooperatives trade enterprises by studying and adapting the best methods of realizing business processes, with the purpose of increasing their operational effectiveness and better satisfying societal needs. The main goals of using benchmarking in Consumer Cooperatives are the following: increasing the level of satisfaction of needs through higher product quality, shorter goods transportation terms, and better service quality; strengthening enterprise potential and competitiveness and improving image; and generating new ideas and implementing innovative decisions in trade enterprise activity. The advantages of using benchmarking in the activity of Consumer Cooperatives trade enterprises are the following: adapting the parameters of enterprise functioning to market demands; gradually identifying and removing inadequacies that obstruct enterprise development; borrowing the best methods for further enterprise development; gaining competitive advantages; technological innovation; and employee motivation. The authors' classification of benchmarking includes the following components: by cycle duration, strategic and operative

  10. Benchmarking Gas Path Diagnostic Methods: A Public Approach

    Science.gov (United States)

    Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

    2008-01-01

    Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.

  11. Generic Hockey-Stick Model for Estimating Benchmark Dose and Potency: Performance Relative to BMDS and Application to Anthraquinone.

    Science.gov (United States)

    Bogen, Kenneth T

    2011-01-01

    Benchmark Dose Model software (BMDS), developed by the U.S. Environmental Protection Agency, involves a growing suite of models and decision rules now widely applied to assess noncancer and cancer risk, yet its statistical performance has never been examined systematically. As typically applied, BMDS also ignores the possibility of reduced risk at low doses ("hormesis"). A simpler, proposed Generic Hockey-Stick (GHS) model also estimates benchmark dose and potency, and additionally characterizes and tests objectively for hormetic trend. Using 100 simulated dichotomous-data sets (5 dose groups, 50 animals/group), sampled from each of seven risk functions, GHS estimators performed about as well as or better than BMDS estimators, and a surprising observation was that BMDS mis-specified all six non-hormetic sampled risk functions most or all of the time. When applied to data on rodent tumors induced by the genotoxic chemical carcinogen anthraquinone (AQ), the GHS model yielded significantly negative estimates of net potency exhibited by the combined rodent data, suggesting that, consistent with the anti-leukemogenic properties of AQ and structurally similar quinones, environmental AQ exposures do not likely increase net cancer risk. In addition to its simplicity and flexibility, the GHS approach offers a unified, consistent approach to quantifying environmental chemical risk. PMID:21731536
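A minimal illustration of a hockey-stick (threshold-linear) benchmark dose fit, assuming the simple risk model risk(d) = b + s*max(0, d - t) and a coarse grid search; the published GHS model's parameterization and statistics are more elaborate than this sketch:

```python
import numpy as np

def fit_hockey_stick(doses, incidence, bmr=0.10):
    """Least-squares fit of a threshold-linear ("hockey-stick") risk model
    risk(d) = b + s * max(0, d - t), then the benchmark dose at extra
    risk `bmr`, i.e. the dose solving (risk(d) - b) / (1 - b) = bmr.
    Parameters are found by a coarse grid search (illustration only)."""
    doses = np.asarray(doses, float)
    incidence = np.asarray(incidence, float)
    best = None
    for t in np.linspace(0.0, doses.max(), 101):   # candidate thresholds
        x = np.maximum(0.0, doses - t)
        # for fixed t, background b and slope s come from linear least squares
        A = np.column_stack([np.ones_like(x), x])
        (b, s), *_ = np.linalg.lstsq(A, incidence, rcond=None)
        err = np.sum((A @ [b, s] - incidence) ** 2)
        if best is None or err < best[0]:
            best = (err, t, b, s)
    _, t, b, s = best
    bmd = t + bmr * (1.0 - b) / s if s > 0 else float("inf")
    return {"threshold": t, "background": b, "slope": s, "bmd": bmd}
```

With data generated from a known threshold model, the recovered BMD sits at the threshold plus the extra-risk increment, which is the defining feature of the hockey-stick form.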

  12. Biological dosimetry - Dose estimation method using biomakers

    International Nuclear Information System (INIS)

    The estimation of individual radiation dose is an important step in radiation risk assessment. In the case of a radiation incident or accident, physical dosimetry sometimes cannot be used to calculate the individual radiation dose, and a complementary method such as biological dosimetry becomes necessary. This method is based on quantifying specific biomarkers induced by ionizing radiation, such as dicentric chromosomes, translocations and micronuclei, in human peripheral blood lymphocytes. The basis of the biological dosimetry method is the close relationship between these biomarkers and the absorbed dose or dose rate; because in vitro and in vivo effects are similar, a calibration dose-effect curve generated in vitro can be used for in vivo assessment. Possibilities and perspectives for applying the biological dosimetry method in the radiation protection area are presented in this report. (author)
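The calibration-curve inversion at the heart of dicentric dose estimation can be sketched as follows; the linear-quadratic form Y = c + alpha*D + beta*D^2 is standard for low-LET radiation, but the coefficient values used here are illustrative placeholders, not a real laboratory calibration:

```python
import math

def estimate_dose(yield_per_cell, c=0.001, alpha=0.03, beta=0.06):
    """Invert a linear-quadratic calibration curve Y = c + alpha*D + beta*D**2
    to estimate absorbed dose D (Gy) from an observed dicentric yield.
    Coefficients are hypothetical; each laboratory fits its own curve."""
    # Solve beta*D^2 + alpha*D + (c - Y) = 0 for the positive root.
    disc = alpha ** 2 - 4.0 * beta * (c - yield_per_cell)
    if disc < 0:
        raise ValueError("yield below background; no positive dose solution")
    return (-alpha + math.sqrt(disc)) / (2.0 * beta)
```

For example, a yield of 0.301 dicentrics per cell under these placeholder coefficients corresponds to an estimated dose of 2 Gy.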

  13. Review of California and National Methods for Energy Performance Benchmarking of Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Matson, Nance E.; Piette, Mary Ann

    2005-09-05

    This benchmarking review has been developed to support benchmarking planning and tool development under discussion by the California Energy Commission (CEC), Lawrence Berkeley National Laboratory (LBNL) and others in response to the Governor's Executive Order S-20-04 (2004). The Executive Order sets a goal of benchmarking and improving the energy efficiency of California's existing commercial building stock. The Executive Order requires the CEC to propose "a simple building efficiency benchmarking system for all commercial buildings in the state". This report summarizes and compares two currently available commercial building energy-benchmarking tools. One tool is the U.S. Environmental Protection Agency's Energy Star National Energy Performance Rating System, which is a national regression-based benchmarking model (referred to in this report as Energy Star). The second is Lawrence Berkeley National Laboratory's Cal-Arch, which is a California-based distributional model (referred to as Cal-Arch). Prior to the time Cal-Arch was developed in 2002, there were several other benchmarking tools available to California consumers, but none that were based solely on California data. The Energy Star and Cal-Arch benchmarking tools both provide California with unique and useful methods to benchmark the energy performance of California's buildings. Rather than determine which model is "better", the purpose of this report is to understand and compare the underlying data, information systems, assumptions, and outcomes of each model.
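A distributional benchmark of the Cal-Arch kind reduces to a percentile lookup against a peer group; a minimal sketch (variable names are assumed, and Energy Star's regression-based rating is not reproduced here):

```python
def eui_percentile(building_eui, peer_euis):
    """Distributional benchmark in the spirit of Cal-Arch: report where a
    building's energy use intensity (EUI, e.g. kBtu/ft2-yr) falls within
    the distribution of a peer group. A lower EUI means better performance,
    so a lower percentile is the better score here."""
    below = sum(1 for e in peer_euis if e < building_eui)
    return 100.0 * below / len(peer_euis)
```

A regression-based model such as Energy Star instead predicts an expected EUI from building characteristics and scores the ratio of actual to predicted use; the percentile lookup above needs only the observed peer distribution.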

  14. Methods of bone marrow dose calculation

    International Nuclear Information System (INIS)

    Several methods of bone marrow dose calculation for photon irradiation were analysed. After a critical analysis, the author proposes that the Instituto de Radioprotecao e Dosimetria/CNEN adopt Rosenstein's method for dose calculations in radiodiagnostic examinations and Kramer's method in cases of occupational irradiation. Eckerman and Simpson verified that, for monoenergetic gamma emitters uniformly distributed within the bone mineral of the skeleton, the dose at the bone surface can be several times higher than the dose in the skeleton. Accordingly, the calculation of tissue-air ratios for bone surfaces, for some irradiation geometries and photon energies, is also proposed for inclusion in Rosenstein's method for organ dose calculation in radiodiagnostic examinations. (Author)

  15. Framework for benchmarking online retailing performance using fuzzy AHP and TOPSIS method

    Directory of Open Access Journals (Sweden)

    M. Ahsan Akhtar Hasin

    2012-08-01

    Full Text Available Due to the increasing penetration of internet connectivity, on-line retail is growing from the pioneer phase to increasing integration within people's lives and companies' normal business practices. In this increasingly competitive environment, on-line retail service providers require a systematic and structured approach to gain a cutting edge over rivals. Thus, the use of benchmarking has become indispensable for on-line retail service providers to accomplish superior performance. This paper uses the fuzzy analytic hierarchy process (FAHP) approach to support a generic on-line retail benchmarking process. Critical success factors for on-line retail service have been identified from a structured questionnaire and the literature, and prioritized using fuzzy AHP. Using these critical success factors, the performance levels of ORENET, an on-line retail service provider, are benchmarked along with four other on-line service providers using the TOPSIS method. Based on the benchmark, their relative ranking has also been illustrated.
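The TOPSIS ranking step described above can be sketched as follows; the fuzzy AHP weighting is not reproduced, so the criterion weights are treated as given inputs:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix:  alternatives x criteria performance scores
    weights: criterion weights (here they would come from fuzzy AHP)
    benefit: True for criteria to maximize, False for those to minimize
    Returns closeness coefficients; higher = closer to the ideal solution."""
    m = np.asarray(matrix, float)
    w = np.asarray(weights, float)
    norm = m / np.sqrt((m ** 2).sum(axis=0))       # vector normalization
    v = norm * w                                   # weighted normalized matrix
    benefit = np.asarray(benefit, bool)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))  # distance to ideal
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))   # distance to anti-ideal
    return d_neg / (d_pos + d_neg)
```

An alternative that dominates on every benefit criterion coincides with the ideal solution and receives a closeness coefficient of 1.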

  16. Simplified CCSD(T)-F12 methods: theory and benchmarks.

    Science.gov (United States)

    Knizia, Gerald; Adler, Thomas B; Werner, Hans-Joachim

    2009-02-01

    The simple and efficient CCSD(T)-F12x approximations (x = a,b) we proposed in a recent communication [T. B. Adler, G. Knizia, and H.-J. Werner, J. Chem. Phys. 127, 221106 (2007)] are explained in more detail and extended to open-shell systems. Extensive benchmark calculations are presented, which demonstrate great improvements in basis set convergence for a wide variety of applications. These include reaction energies of both open- and closed-shell reactions, atomization energies, electron affinities, ionization potentials, equilibrium geometries, and harmonic vibrational frequencies. For all these quantities, results better than the AV5Z quality are obtained already with AVTZ basis sets, and usually AVDZ treatments reach at least the conventional AVQZ quality. For larger molecules, the additional cost for these improvements is only a few percent of the time for a standard CCSD(T) calculation. For the first time ever, total reaction energies with chemical accuracy are obtained using valence-double-zeta basis sets. PMID:19206955

  17. Adaptive Cruise Control for a SMART Car: A Comparison Benchmark for MPC-PWA Control Methods

    NARCIS (Netherlands)

    Corona, D.; De Schutter, B.

    2008-01-01

    The design of an adaptive cruise controller for a SMART car, a type of small car, is proposed as a benchmark setup for several model predictive control methods for nonlinear and piecewise affine systems. Each of these methods has already been applied to specific case studies, different from method to method.

  18. Radiation transport benchmarks for simple geometries with void regions using the spherical harmonics method

    International Nuclear Information System (INIS)

    In 2001, an international cooperation on 3D radiation transport benchmarks for simple geometries with void regions was performed under the leadership of E. Sartori of OECD/NEA. There were contributions from eight institutions, of which six used the discrete ordinates method and only two the spherical harmonics method. The 3D spherical harmonics program FFT3, based on the finite Fourier transformation method, has been improved for this presentation, and benchmark solutions for 2D and 3D simple geometries with void regions, computed by FFT2 and FFT3, are given, showing fairly good accuracy. (authors)

  19. Solution of the neutronics code dynamic benchmark by finite element method

    Science.gov (United States)

    Avvakumov, A. V.; Vabishchevich, P. N.; Vasilev, A. O.; Strizhov, V. F.

    2016-10-01

    The objective is to analyze the dynamic benchmark developed by Atomic Energy Research for the verification of best-estimate neutronics codes. The benchmark scenario includes the asymmetrical ejection of a control rod in a water-type hexagonal reactor at hot zero power. A simple Doppler feedback mechanism assuming adiabatic fuel temperature heating is proposed. The finite element method on triangular calculation grids is used to solve the three-dimensional neutron kinetics problem. The software has been developed using the engineering and scientific calculation library FEniCS. The matrix spectral problem is solved using the scalable and flexible toolkit SLEPc. The solution accuracy of the dynamic benchmark is analyzed by refining the calculation grid and varying the degree of the finite elements.

  20. Benchmarking the inelastic neutron scattering soil carbon method

    Science.gov (United States)

    The herein described inelastic neutron scattering (INS) method of measuring soil carbon was based on a new procedure for extracting the net carbon signal (NCS) from the measured gamma spectra and determination of the average carbon weight percent (AvgCw%) in the upper soil layer (~8 cm). The NCS ext...

  1. Continuum discretization methods in a composite-particle scattering off a nucleus: the benchmark calculations

    CERN Document Server

    Rubtsova, O A; Moro, A M

    2008-01-01

    The direct comparison of two different continuum discretization methods towards the solution of a composite particle scattering off a nucleus is presented. The first approach -- the Continuum-Discretized Coupled Channel method -- is based on the differential equation formalism, while the second one -- the Wave-Packet Continuum Discretization method -- uses the integral equation formulation for the composite-particle scattering problem. As benchmark calculations we have chosen the deuteron off

  2. Correlation of In Vivo Versus In Vitro Benchmark Doses (BMDs) Derived From Micronucleus Test Data: A Proof of Concept Study.

    Science.gov (United States)

    Soeteman-Hernández, Lya G; Fellows, Mick D; Johnson, George E; Slob, Wout

    2015-12-01

    In this study, we explored the applicability of using in vitro micronucleus (MN) data from human lymphoblastoid TK6 cells to derive in vivo genotoxicity potency information. Nineteen chemicals covering a broad spectrum of genotoxic modes of action were tested in an in vitro MN test using TK6 cells using the same study protocol. Several of these chemicals were considered to need metabolic activation, and these were administered in the presence of S9. The Benchmark dose (BMD) approach was applied using the dose-response modeling program PROAST to estimate the genotoxic potency from the in vitro data. The resulting in vitro BMDs were compared with previously derived BMDs from in vivo MN and carcinogenicity studies. A proportional correlation was observed between the BMDs from the in vitro MN and the BMDs from the in vivo MN assays. Further, a clear correlation was found between the BMDs from in vitro MN and the associated BMDs for malignant tumors. Although these results are based on only 19 compounds, they show that genotoxicity potencies estimated from in vitro tests may result in useful information regarding in vivo genotoxic potency, as well as expected cancer potency. Extension of the number of compounds and further investigation of metabolic activation (S9) and of other toxicokinetic factors would be needed to validate our initial conclusions. However, this initial work suggests that this approach could be used for in vitro to in vivo extrapolations which would support the reduction of animals used in research (3Rs: replacement, reduction, and refinement).

  3. Semiempirical Quantum-Chemical Orthogonalization-Corrected Methods: Benchmarks for Ground-State Properties.

    Science.gov (United States)

    Dral, Pavlo O; Wu, Xin; Spörkel, Lasse; Koslowski, Axel; Thiel, Walter

    2016-03-01

    The semiempirical orthogonalization-corrected OMx methods (OM1, OM2, and OM3) go beyond the standard MNDO model by including additional interactions in the electronic structure calculation. When augmented with empirical dispersion corrections, the resulting OMx-Dn approaches offer a fast and robust treatment of noncovalent interactions. Here we evaluate the performance of the OMx and OMx-Dn methods for a variety of ground-state properties using a large and diverse collection of benchmark sets from the literature, with a total of 13035 original and derived reference data. Extensive comparisons are made with the results from established semiempirical methods (MNDO, AM1, PM3, PM6, and PM7) that also use the NDDO (neglect of diatomic differential overlap) integral approximation. Statistical evaluations show that the OMx and OMx-Dn methods outperform the other methods for most of the benchmark sets. PMID:26771261

  4. Validation and benchmarking of calculation methods for photon and neutron transport at cask configurations

    International Nuclear Information System (INIS)

    The reliability of calculation tools to evaluate and calculate dose rates appearing behind multi-layered shields is important with regard to the certification of transport and storage casks. Current benchmark databases such as SINBAD do not offer such configurations because they were developed for reactor and accelerator purposes. For this reason, a benchmark suite has been developed to validate Monte Carlo transport codes, based on the authors' own experiments, containing dose rates measured at different distances and levels from a transport and storage cask, and on a public benchmark. The analysed and summarised experiments include a 60Co point source located in a cylindrical cask, a 252Cf line source shielded by iron and polyethylene (PE), and a bare 252Cf source moderated by PE in a concrete labyrinth with different inserted shielding materials, to quantify neutron streaming effects on measured dose rates. In detail, not only MCNP (version 5.1.6) but also MAVRIC, included in the SCALE 6.1 package, have been compared for photon and neutron transport. Aiming at low deviations between calculation and measurement requires precise source term specification and exact measurement of the dose rates, which have been evaluated carefully, including known uncertainties. In MAVRIC, different source descriptions with respect to the group structure of the nuclear data library are analysed for the calculation of gamma dose rates, because the energy lines of 60Co can only be modelled in groups. In total, the comparison shows that MCNP fits very well to the measurements, within up to two standard deviations, and that MAVRIC behaves similarly, provided that the source model is optimized. (author)

  5. Two prospective dosing methods for nortriptyline.

    Science.gov (United States)

    Perry, P J; Browne, J L; Alexander, B; Tsuang, M T; Sherman, A D; Dunner, F J

    1984-01-01

    This study compared two prospective pharmacokinetic dosing methods to predict steady-state concentrations of nortriptyline. One method required multiple determinations of the nortriptyline plasma concentration to estimate the drug's steady-state concentration. The second method required a single nortriptyline concentration drawn at a fixed time, preferably 36 hours, following a nortriptyline test dose. The 36-hour nortriptyline plasma concentrations (NTP 36h) were substituted into the straight-line equation of Cssav = 17.2 + 3.74 (NTP 36h), where Cssav is the average steady-state concentration for a 100 mg/day dose of nortriptyline. No differences were noted between the observed steady-state nortriptyline concentration of 121 +/- 19 ng/ml, the 36-hour single-point prediction mean concentration of 121 +/- 21 ng/ml, or the multiple-point prediction mean concentration of 122 +/- 19 ng/ml. Because of the similar findings between the two methods, the clinical advantages and disadvantages of each kinetic approach are discussed to put these prospective dosing protocols into their proper perspective.
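The single-point method reduces to the published straight-line equation; a small sketch follows (the dose-scaling helper assumes dose-proportional, linear kinetics, which is an illustrative assumption, not part of the paper's equation):

```python
def predict_css_100mg(ntp_36h):
    """Single-point prediction from the study's regression: average
    steady-state nortriptyline concentration (ng/ml) on 100 mg/day,
    given the plasma level (ng/ml) drawn 36 h after a test dose:
    Cssav = 17.2 + 3.74 * NTP36h."""
    return 17.2 + 3.74 * ntp_36h

def dose_for_target(ntp_36h, target_css):
    """Scale the 100 mg/day prediction to hit a target steady-state level,
    assuming linear kinetics (an illustrative assumption only)."""
    return 100.0 * target_css / predict_css_100mg(ntp_36h)
```

For example, a 36-hour level of 10 ng/ml predicts a steady-state concentration of 54.6 ng/ml on 100 mg/day under this regression.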

  6. Comparison of in vitro and in vivo clastogenic potency based on benchmark dose analysis of flow cytometric micronucleus data.

    Science.gov (United States)

    Bemis, Jeffrey C; Wills, John W; Bryce, Steven M; Torous, Dorothea K; Dertinger, Stephen D; Slob, Wout

    2016-05-01

    The application of flow cytometry as a scoring platform for both in vivo and in vitro micronucleus (MN) studies has enabled the efficient generation of high quality datasets suitable for comprehensive assessment of dose-response. Using this information, it is possible to obtain precise estimates of the clastogenic potency of chemicals. We illustrate this by estimating the in vivo and the in vitro potencies of seven model clastogenic agents (melphalan, chlorambucil, thiotepa, 1,3-propane sultone, hydroxyurea, azathioprine and methyl methanesulfonate) by deriving BMDs using freely available BMD software (PROAST). After exposing male rats for 3 days with up to nine dose levels of each individual chemical, peripheral blood samples were collected on Day 4. These chemicals were also evaluated for in vitro MN induction by treating TK6 cells with up to 20 concentrations in quadruplicate. In vitro MN frequencies were determined via flow cytometry using a 96-well plate autosampler. The estimated in vitro and in vivo BMDs were found to correlate to each other. The correlation showed considerable scatter, as may be expected given the complexity of the whole animal model versus the simplicity of the cell culture system. Even so, the existence of the correlation suggests that information on the clastogenic potency of a compound can be derived from either whole animal studies or cell culture-based models of chromosomal damage. We also show that the choice of the benchmark response, i.e. the effect size associated with the BMD, is not essential in establishing the correlation between both systems. Our results support the concept that datasets derived from comprehensive genotoxicity studies can provide quantitative dose-response metrics. Such investigational studies, when supported by additional data, might then contribute directly to product safety investigations, regulatory decision-making and human risk assessment. PMID:26049158
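The paper's observation that the choice of benchmark response is not essential can be illustrated with a toy continuous dose-response model (the exponential form and all parameter values are assumptions; PROAST fits richer model families): for a shared model shape, the ratio of two chemicals' BMDs is the same at any benchmark response.

```python
import math

def bmd_exponential(a, b, bmr_frac):
    """Benchmark dose for a fitted exponential dose-response
    f(d) = a * exp(d / b): the dose giving a `bmr_frac` relative
    increase over background, i.e. BMD = b * ln(1 + bmr_frac).
    Model choice is illustrative only."""
    return b * math.log(1.0 + bmr_frac)
```

Here the BMD ratio between two chemicals with scale parameters b1 and b2 is b1/b2 regardless of the chosen benchmark response, so the relative potency, and hence a correlation built on it, is unaffected by that choice.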

  7. A novel and well-defined benchmarking method for second generation read mapping

    Directory of Open Access Journals (Sweden)

    Weese David

    2011-05-01

    Full Text Available Abstract Background Second generation sequencing technologies yield DNA sequence data at ultra high-throughput. Common to most biological applications is a mapping of the reads to an almost identical or highly similar reference genome. The assessment of the quality of read mapping results is not straightforward and has not been formalized so far. Hence, it has not been easy to compare different read mapping approaches in a unified way and to determine which program is the best for what task. Results We present a new benchmark method, called Rabema (Read Alignment BEnchMArk), for read mappers. It consists of a strict definition of the read mapping problem and of tools to evaluate the result of arbitrary read mappers supporting the SAM output format. Conclusions We show the usefulness of the benchmark program by performing a comparison of popular read mappers. The tools supporting the benchmark are licensed under the GPL and available from http://www.seqan.de/projects/rabema.html.

  8. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created a R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  9. A time-implicit numerical method and benchmarks for the relativistic Vlasov–Ampere equations

    Energy Technology Data Exchange (ETDEWEB)

    Carrié, Michael, E-mail: mcarrie2@unl.edu; Shadwick, B. A., E-mail: shadwick@mailaps.org [Department of Physics and Astronomy, University of Nebraska-Lincoln, Lincoln, Nebraska 68588 (United States)

    2016-01-15

    We present a time-implicit numerical method to solve the relativistic Vlasov–Ampere system of equations on a two dimensional phase space grid. The time-splitting algorithm we use allows the generalization of the work presented here to higher dimensions keeping the linear aspect of the resulting discrete set of equations. The implicit method is benchmarked against linear theory results for the relativistic Landau damping for which analytical expressions using the Maxwell-Jüttner distribution function are derived. We note that, independently from the shape of the distribution function, the relativistic treatment features collective behaviours that do not exist in the nonrelativistic case. The numerical study of the relativistic two-stream instability completes the set of benchmarking tests.

  10. Multiple exposures to indoor contaminants: Derivation of benchmark doses and relative potency factors based on male reprotoxic effects.

    Science.gov (United States)

    Fournier, K; Tebby, C; Zeman, F; Glorennec, P; Zmirou-Navier, D; Bonvallot, N

    2016-02-01

    Semi-Volatile Organic Compounds (SVOCs) are commonly present in dwellings and several are suspected of having effects on male reproductive function mediated by an endocrine disruption mode of action. To improve knowledge of the health impact of these compounds, cumulative toxicity indicators are needed. This work derives Benchmark Doses (BMD) and Relative Potency Factors (RPF) for SVOCs acting on the male reproductive system through the same mode of action. We included SVOCs fulfilling the following conditions: detection frequency (>10%) in French dwellings, availability of data on the mechanism/mode of action for male reproductive toxicity, and availability of comparable dose-response relationships. Of 58 SVOCs selected, 18 induce a decrease in serum testosterone levels. Six have sufficient and comparable data to derive BMDs based on 10 or 50% of the response. The SVOCs inducing the largest decrease in serum testosterone concentration are: for 10%, bisphenol A (BMD10 = 7.72E-07 mg/kg bw/d; RPF10 = 7,033,679); for 50%, benzo[a]pyrene (BMD50 = 0.030 mg/kg bw/d; RPF50 = 1630), and the one inducing the smallest one is benzyl butyl phthalate (RPF10 and RPF50 = 0.095). This approach encompasses contaminants from diverse chemical families acting through similar modes of action, and makes possible a cumulative risk assessment in indoor environments. The main limitation remains the lack of comparable toxicological data.
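A sketch of how BMDs translate into relative potency factors and a cumulative, index-compound-equivalent exposure (function names are assumptions; no values from the paper are embedded in the code):

```python
def relative_potency_factors(bmds, index_compound):
    """RPF_i = BMD(index) / BMD(i): compounds with smaller BMDs are more
    potent and therefore receive larger RPFs.

    bmds: dict of compound name -> BMD (mg/kg bw/d)."""
    ref = bmds[index_compound]
    return {name: ref / bmd for name, bmd in bmds.items()}

def index_equivalent_exposure(exposures, rpfs):
    """Cumulative exposure expressed in index-compound equivalents
    (mg/kg bw/d) -- the quantity compared against the index compound's
    BMD in a cumulative risk assessment."""
    return sum(exposures[name] * rpfs[name] for name in exposures)
```

By construction the index compound's RPF is 1, and every other compound's exposure is rescaled by its potency relative to the index before summation.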

  11. Physical methods for dose determinations in mammography

    International Nuclear Information System (INIS)

    There is a small but significant risk of radiation-induced carcinogenesis associated with mammography. High-quality mammography is the best method of early breast cancer detection. Since image quality is a basic requirement for an effective diagnosis, radiation protection principles require the radiation dose to the imaged tissue to be as low as is compatible with the required image quality. Glandular tissue is the most radiosensitive, thus the evaluation of the Mean Glandular Dose (MGD) is the most relevant factor for the estimation of radiation risk, as well as for the comparison of the performance of different mammographic machines. MGD was estimated using the Entrance Surface Air KERMA at the breast surface, Kf, measured free in air, and appropriate conversion factors. Under evaluation were eight mammographic machines at the Institute of Radiology, Skopje, and the mammographic machines at the health centres in Vevchani, Bitola, Prilep, Negotino and Shtip. Estimated values of MGD do not exceed the European reference level (<2 mGy), but this cannot be concluded generally for all mammography units in Macedonia until they are examined. In the near future all mammography units will be subject to QC tests and dose measurements. (Author)
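MGD estimation from entrance surface air kerma is commonly done with multiplicative conversion factors, for example in the widely used Dance formalism; a sketch with placeholder factor values (the real g, c and s coefficients are tabulated against beam quality and breast composition, and are not reproduced here):

```python
def mean_glandular_dose(entrance_air_kerma_mgy, g, c, s):
    """MGD estimate following the Dance formalism: MGD = K * g * c * s,
    where K is the entrance surface air kerma (without backscatter),
    g converts kerma to glandular dose for a 50% glandular breast,
    c corrects for the actual glandularity and s for the X-ray spectrum.
    The factor values used below are illustrative, not tabulated ones."""
    return entrance_air_kerma_mgy * g * c * s

# Illustrative check against the European reference level of 2 mGy:
mgd = mean_glandular_dose(5.0, g=0.38, c=1.0, s=1.0)
```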

  12. Determining the sensitivity of Data Envelopment Analysis method used in airport benchmarking

    Directory of Open Access Journals (Sweden)

    Mircea BOSCOIANU

    2013-03-01

    Full Text Available In the last decade there were some important changes in the airport industry, caused by the liberalization of the air transportation market. Until recently airports were considered infrastructure elements, and they were evaluated only by traffic values or their maximum capacity. A gradual orientation towards commercial operation led to the need to find other, more efficiency-oriented ways of evaluation. The existing methods for assessing efficiency used for other production units were not suitable for airports due to the specific features and high complexity of airport operations. In recent years several papers have proposed Data Envelopment Analysis as a method for assessing operational efficiency in order to conduct benchmarking. This method offers the possibility of dealing with a large number of variables of different types, which represents its main advantage and also recommends it as a good benchmarking tool for airport management. The goal of this paper is to determine the sensitivity of this method in relation to its inputs and outputs. A Data Envelopment Analysis is conducted for 128 airports worldwide, in both input- and output-oriented measures, and the results are analysed against some input and output variations. Possible weaknesses of using DEA for assessing airport performance are revealed and analysed against the method's advantages.
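The DEA efficiency score whose sensitivity the paper studies comes from solving a small linear program per airport; a sketch of the input-oriented CCR envelopment model (illustrative only, using SciPy's linprog; the paper's actual input and output variables are not reproduced):

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(inputs, outputs, unit):
    """Input-oriented CCR efficiency of one decision-making unit (airport).

    inputs:  n_units x n_inputs matrix (e.g. runways, staff)
    outputs: n_units x n_outputs matrix (e.g. passengers, movements)
    Solves  min theta  s.t.  sum_j lam_j*x_j <= theta*x0,
                             sum_j lam_j*y_j >= y0,  lam >= 0."""
    X = np.asarray(inputs, float)
    Y = np.asarray(outputs, float)
    n = X.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                    # minimize theta
    # input constraints: sum_j lam_j X_ji - theta*x0_i <= 0
    A_in = np.hstack([-X[unit].reshape(-1, 1), X.T])
    # output constraints: -sum_j lam_j Y_jr <= -y0_r
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(X.shape[1]), -Y[unit]])
    bounds = [(None, None)] + [(0, None)] * n     # theta free, lambdas >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]
```

An efficiency of 1 places the airport on the efficient frontier; sensitivity analysis of the kind the paper performs amounts to re-solving this program under perturbed input and output columns.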

  13. Benchmarking of methods for identification of antimicrobial resistance genes in bacterial whole genome data

    DEFF Research Database (Denmark)

    Clausen, Philip T. L. C.; Zankari, Ea; Aarestrup, Frank Møller;

    2016-01-01

    Next generation sequencing (NGS) may be an alternative to phenotypic susceptibility testing for surveillance and clinical diagnosis. However, current bioinformatics methods may be associated with false positives and negatives. In this study, a novel mapping method was developed and benchmarked against two different methods in current use for identification of antibiotic resistance genes in bacterial WGS data. The novel method, KmerResistance, examines the co-occurrence of k-mers between the WGS data and a database of resistance genes. The performance of this method was compared with the observed phenotypes for all isolates. To challenge further the sensitivity of the in silico methods, the datasets were also down-sampled to 1% of the reads and reanalysed. The best results were obtained by identification of resistance genes by mapping directly against the raw reads.
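The k-mer co-occurrence idea behind KmerResistance can be sketched in a few lines (a toy version only; the real tool additionally scores template coverage, sequencing depth and host k-mers):

```python
def kmers(seq, k):
    """Set of all k-length substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def kmer_coverage(reads, gene, k=16):
    """Fraction of a resistance gene's k-mers that co-occur in the read
    set -- a toy version of mapping raw reads against a resistance gene
    database via shared k-mers."""
    read_kmers = set()
    for r in reads:
        read_kmers |= kmers(r, k)
    gene_kmers = kmers(gene, k)
    return len(gene_kmers & read_kmers) / len(gene_kmers)
```

Working on raw reads this way avoids the information loss of assembling first, which is consistent with the abstract's finding that mapping directly against the raw reads performed best.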

  14. The MIRD method of estimating absorbed dose

    Energy Technology Data Exchange (ETDEWEB)

    Weber, D.A.

    1991-01-01

    The estimate of absorbed radiation dose from internal emitters provides the information required to assess the radiation risk associated with the administration of radiopharmaceuticals for medical applications. The MIRD (Medical Internal Radiation Dose) system of dose calculation provides a systematic approach to combining the biologic distribution data and clearance data of radiopharmaceuticals and the physical properties of radionuclides to obtain dose estimates. This tutorial presents a review of the MIRD schema, the derivation of the equations used to calculate absorbed dose, and shows how the MIRD schema can be applied to estimate dose from radiopharmaceuticals used in nuclear medicine.
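The core of the MIRD schema is the product of cumulated activity and the S value; a minimal sketch (the S value passed in is a placeholder, not a tabulated MIRD value):

```python
import math

def cumulated_activity(a0_mbq, t_eff_hours):
    """Cumulated activity (MBq*h) for mono-exponential clearance:
    A_tilde = A0 * t_eff / ln(2), i.e. about 1.443 * A0 * t_eff,
    combining the biologic clearance and physical decay into an
    effective half-life t_eff."""
    return a0_mbq * t_eff_hours / math.log(2.0)

def mird_dose(a0_mbq, t_eff_hours, s_value_mgy_per_mbq_h):
    """MIRD schema in one line: D(target <- source) = A_tilde * S, where
    the S value (mGy per MBq*h) bundles the radionuclide's physical
    properties and the source/target geometry."""
    return cumulated_activity(a0_mbq, t_eff_hours) * s_value_mgy_per_mbq_h
```

In a full MIRD calculation the target dose sums such contributions over all source organs, each with its own cumulated activity and S value.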

  15. Piping benchmark problems. Volume 1. Dynamic analysis uniform support motion response spectrum method

    Energy Technology Data Exchange (ETDEWEB)

    Bezler, P.; Hartzman, M.; Reich, M.

    1980-08-01

    A set of benchmark problems and solutions have been developed for verifying the adequacy of computer programs used for dynamic analysis and design of nuclear piping systems by the Response Spectrum Method. The problems range from simple to complex configurations which are assumed to experience linear elastic behavior. The dynamic loading is represented by uniform support motion, assumed to be induced by seismic excitation in three spatial directions. The solutions consist of frequencies, participation factors, nodal displacement components and internal force and moment components. Solutions to associated anchor point motion static problems are not included.
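The modal combination at the heart of the response spectrum method can be sketched as follows (illustrative formulas for a single-direction, uncoupled case; SRSS is one of several combination rules used in practice):

```python
import math

def modal_peak(participation, mode_shape_value, spectral_accel, omega):
    """Peak displacement contribution of one mode at one node:
    u = Gamma * phi * Sa / omega**2, with Sa read from the design
    response spectrum at the mode's frequency."""
    return participation * mode_shape_value * spectral_accel / omega ** 2

def srss_response(modal_responses):
    """Combine per-mode peak responses by the square root of the sum of
    squares (SRSS), as in uniform support motion response spectrum
    analysis with well-separated modes."""
    return math.sqrt(sum(r * r for r in modal_responses))
```

Internal forces and moments are combined the same way, mode by mode, which is why the benchmark solutions report frequencies and participation factors alongside the response quantities.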

  16. Deflection-based method for seismic response analysis of concrete walls: Benchmarking of CAMUS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Basu, Prabir C. [Civil and Structural Engineering Division, Atomic Energy Regulatory Board (India)]. E-mail: pcb@aerb.gov.in; Roshan, A.D. [Civil and Structural Engineering Division, Atomic Energy Regulatory Board (India)

    2007-07-15

    A number of shake table tests were conducted on a scaled-down model of a concrete wall as part of the CAMUS experiment. The experiments were conducted between 1996 and 1998 in the CEA facilities in Saclay, France. Benchmarking of the CAMUS experiments was undertaken as part of the coordinated research program on 'Safety Significance of Near-Field Earthquakes' organised by the International Atomic Energy Agency (IAEA). The deflection-based method was adopted for the benchmarking exercise. The non-linear static procedure of the deflection-based method has two basic steps: pushover analysis, and determination of the target displacement or performance point. Pushover analysis is an analytical procedure to assess the capacity to withstand seismic loading that a structural system can offer, considering its redundancies and inelastic deformation. The outcome of a pushover analysis is the force-displacement (base shear versus top/roof displacement) curve of the structure. This is obtained by step-by-step non-linear static analysis of the structure with increasing load. The second step is to determine the target displacement, also known as the performance point. The target displacement is the likely maximum displacement of the structure due to a specified seismic input motion. Established procedures, FEMA-273 and ATC-40, are available to determine this maximum deflection. The responses of the CAMUS test specimen are determined by the deflection-based method, and the analytically calculated values compare well with the test results.
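The target displacement step can be sketched with the FEMA-273 coefficient method (the formula shape is standard; any coefficient values supplied must come from the code tables, so the ones used in any example are placeholders):

```python
import math

def target_displacement(c0, c1, c2, c3, sa_g, te_s, g=9.81):
    """FEMA-273 coefficient method estimate of the target displacement
    (performance point), in metres:
        delta_t = C0*C1*C2*C3 * Sa * (Te / (2*pi))**2 * g,
    with Sa the spectral acceleration in units of g at the effective
    period Te (s), and C0..C3 the tabulated modification coefficients."""
    return c0 * c1 * c2 * c3 * sa_g * g * (te_s / (2.0 * math.pi)) ** 2
```

The performance point is then read off the pushover curve at this roof displacement, which is how the deflection-based benchmark compares analysis against the CAMUS measurements.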

  17. A simple method for solar energetic particle event dose forecasting

    International Nuclear Information System (INIS)

    Bayesian, non-linear regression models or artificial neural networks are used to make predictions of dose and dose rate time profiles using calculated dose and/or dose rates soon after event onset. Both methods match a new event to similar historical events before making predictions for the new event. The currently developed Bayesian method categorizes a new event based on calculated dose rates up to 5 h (categorization window) after event onset. Categories are determined using ranges of dose rates from previously observed SEP events. These categories provide a range of predicted asymptotic dose for the new event. The model then goes on to make predictions of dose and dose rate time profiles out to 120 h beyond event onset. We know of no physical significance to our 5 h categorization window. In this paper, we focus on the efficacy of a simple method for SEP event asymptotic dose forecasting. Instead of making temporal predictions of dose and dose rate, we investigate making predictions of ranges of asymptotic dose using only dose rates at times prior to 5 h after event onset. A range of doses may provide sufficient information to make operational decisions such as taking emergency shelter or commencing/canceling extra-vehicular operations. Specifically, predicted ranges of doses that are found to be insignificant for the effect of interest would be ignored or put on a watch list, while predicted ranges of greater significance would be used in the operational decision-making process.
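The categorization idea — map the dose rate observed 5 h after onset to a historical category, and report that category's asymptotic dose range — can be sketched as a simple lookup. The category boundaries and dose ranges below are placeholders for illustration, not values from the paper:

```python
# Illustrative category table: (upper bound on the 5 h dose rate,
# predicted asymptotic dose range).  All numbers are placeholders.
CATEGORIES = [
    (0.01, (0.0, 0.05)),
    (0.1, (0.05, 0.5)),
    (1.0, (0.5, 5.0)),
    (float("inf"), (5.0, 50.0)),
]

def predict_dose_range(dose_rate_5h):
    """Return the (low, high) asymptotic dose range of the first category
    whose dose-rate window contains the dose rate observed 5 h after onset."""
    for upper, dose_range in CATEGORIES:
        if dose_rate_5h <= upper:
            return dose_range
```

An operational rule could then ignore ranges below an effect threshold and escalate the rest, without ever predicting a full time profile.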

  18. Methodical aspects of using benchmarking in the activity of Consumer Cooperatives trade enterprises

    OpenAIRE

    Yu.V. Dvirko

    2013-01-01

    The aim of the article. The aim of this article is to substantiate the main types of benchmarking used by Consumer Cooperatives trade enterprises; to highlight the main advantages and drawbacks of using benchmarking; and to present the authors' view on the expediency of applying the highlighted forms of benchmarking in the trade enterprises of Consumer Cooperatives in Ukraine. The results of the analysis. Under modern conditions of economic relations development and business globalizatio...

  19. Methods of dose planning for WWER-1000 power units

    International Nuclear Information System (INIS)

    Methods of minimizing dose loads for Zaporozhe NPP personnel were studied. They aim to keep reactor personnel doses below the 20 mSv/year limit through organizational and technical improvements and application of the ALARA principle.

  20. Reliable B cell epitope predictions: impacts of method development and improved benchmarking

    DEFF Research Database (Denmark)

    Kringelum, Jens Vindahl; Lundegaard, Claus; Lund, Ole;

    2012-01-01

    The interaction between antibodies and antigens is one of the most important immune system mechanisms for clearing infectious organisms from the host. Antibodies bind to antigens at sites referred to as B-cell epitopes. Identification of the exact location of B-cell epitopes is essential in several... of B-cell epitopes has been moderate. Several issues regarding the evaluation data sets may however have led to the performance values being underestimated: rarely have all potential epitopes been mapped on an antigen, and antibodies are generally raised against the antigen in a given biological... evaluation data set improved from 0.712 to 0.727. Our results thus demonstrate that given proper benchmark definitions, B-cell epitope prediction methods achieve highly significant predictive performances, suggesting these tools to be a powerful asset in rational epitope discovery. The updated version...

  1. Molecular Line Emission from Multifluid Shock Waves. I. Numerical Methods and Benchmark Tests

    CERN Document Server

    Ciolek, Glenn E

    2013-01-01

    We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are << magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests ...

  2. MOLECULAR LINE EMISSION FROM MULTIFLUID SHOCK WAVES. I. NUMERICAL METHODS AND BENCHMARK TESTS

    International Nuclear Information System (INIS)

    We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are << magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.

  3. Molecular Line Emission from Multifluid Shock Waves. I. Numerical Methods and Benchmark Tests

    Science.gov (United States)

    Ciolek, Glenn E.; Roberge, Wayne G.

    2013-05-01

    We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are << magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.

  4. MOLECULAR LINE EMISSION FROM MULTIFLUID SHOCK WAVES. I. NUMERICAL METHODS AND BENCHMARK TESTS

    Energy Technology Data Exchange (ETDEWEB)

    Ciolek, Glenn E.; Roberge, Wayne G., E-mail: cioleg@rpi.edu, E-mail: roberw@rpi.edu [New York Center for Astrobiology (United States); Department of Physics, Applied Physics, and Astronomy, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180 (United States)

    2013-05-01

    We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are << magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.
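The operator-splitting structure described in the abstract — a homogeneous Godunov transport step per fluid, followed by integration of the inter-fluid source terms — can be illustrated on a scalar toy problem. This is a minimal sketch, not the paper's multifluid MHD scheme; the linear drag coupling `alpha` and periodic boundaries are assumptions:

```python
import numpy as np

def godunov_advect(u, a, dx, dt):
    """Upwind/Godunov update for du/dt + a*du/dx = 0 with a > 0 and
    periodic boundaries; the interface Riemann solution is the upwind state."""
    return u - a * dt / dx * (u - np.roll(u, 1))

def relax_sources(un, ui, alpha, dt):
    """Exact integration of a linear momentum-exchange source term:
    dun/dt = alpha*(ui - un), dui/dt = alpha*(un - ui)."""
    mean = 0.5 * (un + ui)
    decay = np.exp(-2.0 * alpha * dt)
    return mean + (un - mean) * decay, mean + (ui - mean) * decay

def split_step(un, ui, a_n, a_i, alpha, dx, dt):
    """One operator-split step: homogeneous transport of each fluid,
    then the coupling source terms."""
    return relax_sources(godunov_advect(un, a_n, dx, dt),
                         godunov_advect(ui, a_i, dx, dt), alpha, dt)
```

For a linear source term the relaxation step can be integrated exactly, which is one attraction of splitting the (often stiff) coupling away from the transport sweep.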

  5. Derivation of the critical effect size/benchmark response for the dose-response analysis of the uptake of radioactive iodine in the human thyroid.

    Science.gov (United States)

    Weterings, Peter J J M; Loftus, Christine; Lewandowski, Thomas A

    2016-08-22

    Potential adverse effects of chemical substances on thyroid function are usually examined by measuring serum levels of thyroid-related hormones. Instead, recent risk assessments for thyroid-active chemicals have focussed on iodine uptake inhibition, an upstream event that by itself is not necessarily adverse. Establishing the extent of uptake inhibition that can be considered de minimis, the chosen benchmark response (BMR), is therefore critical. The BMR values selected by two international advisory bodies were 5% and 50%, a difference that had correspondingly large impacts on the estimated risks and health-based guidance values that were established. Potential treatment-related inhibition of thyroidal iodine uptake is usually determined by comparing thyroidal uptake of radioactive iodine (RAIU) during treatment with a single pre-treatment RAIU value. In the present study it is demonstrated that the physiological intra-individual variation in iodine uptake is much larger than 5%. Consequently, in-treatment RAIU values, expressed as a percentage of the pre-treatment value, have an inherent variation, that needs to be considered when conducting dose-response analyses. Based on statistical and biological considerations, a BMR of 20% is proposed for benchmark dose analysis of human thyroidal iodine uptake data, to take the inherent variation in relative RAIU data into account. Implications for the tolerated daily intakes for perchlorate and chlorate, recently established by the European Food Safety Authority (EFSA), are discussed. PMID:27268963
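To see how strongly the choice of BMR drives the resulting benchmark dose, consider a hypothetical exponential inhibition model for relative iodine uptake. The model form is an assumption for illustration only; the paper argues about the BMR choice, not a specific dose-response model:

```python
import math

def bmd_exponential(k, bmr=0.20):
    """Benchmark dose for a relative decrease `bmr` in uptake under a
    hypothetical exponential inhibition model u(d) = u0 * exp(-k * d):
    solve exp(-k * d) = 1 - bmr for the dose d."""
    return -math.log(1.0 - bmr) / k
```

With k = 0.1 per unit dose, a 20% BMR gives a BMD of about 2.23 dose units versus about 0.51 for a 5% BMR, more than a four-fold difference in the point of departure from the BMR choice alone.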

  6. An unbiased method to build benchmarking sets for ligand-based virtual screening and its application to GPCRs.

    Science.gov (United States)

    Xia, Jie; Jin, Hongwei; Liu, Zhenming; Zhang, Liangren; Wang, Xiang Simon

    2014-05-27

    Benchmarking data sets have become common in recent years for the purpose of virtual screening, though the main focus has been placed on structure-based virtual screening (SBVS) approaches. Due to the lack of crystal structures, there is a great need for unbiased benchmarking sets to evaluate various ligand-based virtual screening (LBVS) methods for important drug targets such as G protein-coupled receptors (GPCRs). To date, these ready-to-apply data sets for LBVS are fairly limited, and the direct usage of benchmarking sets designed for SBVS could introduce biases into the evaluation of LBVS. Herein, we propose an unbiased method to build benchmarking sets for LBVS and validate it on a multitude of GPCR targets. To be more specific, our method can (1) ensure chemical diversity of ligands, (2) maintain the physicochemical similarity between ligands and decoys, (3) make the decoys dissimilar in chemical topology to all ligands to avoid false negatives, and (4) maximize spatial random distribution of ligands and decoys. We evaluated the quality of our Unbiased Ligand Set (ULS) and Unbiased Decoy Set (UDS) using three common LBVS approaches, with Leave-One-Out (LOO) Cross-Validation (CV) and a metric of average AUC of the ROC curves. Our method has greatly reduced the "artificial enrichment" and "analogue bias" of a published GPCRs benchmarking set, i.e., GPCR Ligand Library (GLL)/GPCR Decoy Database (GDD). In addition, we addressed an important issue about the ratio of decoys per ligand and found that for a range of 30 to 100 it does not affect the quality of the benchmarking set, so we kept the original ratio of 39 from the GLL/GDD.
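The decoy-selection criteria can be caricatured in a few lines: keep candidate decoys that match some ligand on a physicochemical property while being topologically dissimilar to every ligand. Here molecular weight stands in for the property profile, small feature sets stand in for fingerprints, and the thresholds are assumptions, not the paper's:

```python
def tanimoto(a, b):
    """Tanimoto similarity of two feature sets (stand-ins for fingerprints)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def pick_decoys(ligands, candidates, mw_tol=25.0, topo_cut=0.35):
    """Keep candidates whose molecular weight matches some ligand (property
    similarity) but whose topology is dissimilar to *all* ligands."""
    decoys = []
    for cand in candidates:
        mw_ok = any(abs(cand["mw"] - lig["mw"]) <= mw_tol for lig in ligands)
        topo_ok = all(tanimoto(cand["fp"], lig["fp"]) < topo_cut for lig in ligands)
        if mw_ok and topo_ok:
            decoys.append(cand)
    return decoys
```

A full implementation would match several properties at once and enforce the spatial-distribution criterion, but the accept/reject logic has this shape.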

  7. A comprehensive benchmark of kernel methods to extract protein-protein interactions from literature.

    Directory of Open Access Journals (Sweden)

    Domonkos Tikk

    Full Text Available The most important way of conveying new findings in biomedical research is scientific publication. Extraction of protein-protein interactions (PPIs reported in scientific publications is one of the core topics of text mining in the life sciences. Recently, a new class of such methods has been proposed - convolution kernels that identify PPIs using deep parses of sentences. However, comparing published results of different PPI extraction methods is impossible due to the use of different evaluation corpora, different evaluation metrics, different tuning procedures, etc. In this paper, we study whether the reported performance metrics are robust across different corpora and learning settings and whether the use of deep parsing actually leads to an increase in extraction quality. Our ultimate goal is to identify the one method that performs best in real-life scenarios, where information extraction is performed on unseen text and not on specifically prepared evaluation data. We performed a comprehensive benchmarking of nine different methods for PPI extraction that use convolution kernels on rich linguistic information. Methods were evaluated on five different public corpora using cross-validation, cross-learning, and cross-corpus evaluation. Our study confirms that kernels using dependency trees generally outperform kernels based on syntax trees. However, our study also shows that only the best kernel methods can compete with a simple rule-based approach when the evaluation prevents information leakage between training and test corpora. Our results further reveal that the F-score of many approaches drops significantly if no corpus-specific parameter optimization is applied and that methods reaching a good AUC score often perform much worse in terms of F-score. We conclude that for most kernels no sensible estimation of PPI extraction performance on new text is possible, given the current heterogeneity in evaluation data. Nevertheless, our study

  8. A Benchmark of Lidar-Based Single Tree Detection Methods Using Heterogeneous Forest Data from the Alpine Space

    Directory of Open Access Journals (Sweden)

    Lothar Eysn

    2015-05-01

    Full Text Available In this study, eight airborne laser scanning (ALS-based single tree detection methods are benchmarked and investigated. The methods were applied to a unique dataset originating from different regions of the Alpine Space covering different study areas, forest types, and structures. This is the first benchmark ever performed for different forests within the Alps. The evaluation of the detection results was carried out in a reproducible way by automatically matching them to precise in situ forest inventory data using a restricted nearest neighbor detection approach. Quantitative statistical parameters such as percentages of correctly matched trees and omission and commission errors are presented. The proposed automated matching procedure presented herein shows an overall accuracy of 97%. Method based analysis, investigations per forest type, and an overall benchmark performance are presented. The best matching rate was obtained for single-layered coniferous forests. Dominated trees were challenging for all methods. The overall performance shows a matching rate of 47%, which is comparable to results of other benchmarks performed in the past. The study provides new insight regarding the potential and limits of tree detection with ALS and underlines some key aspects regarding the choice of method when performing single tree detection for the various forest types encountered in alpine regions.
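A restricted nearest-neighbour matching of detected trees to inventory trees can be sketched as a greedy pairing within a distance limit. The greedy strategy and the 2 m limit are illustrative assumptions; the study's matching procedure is more elaborate:

```python
import math

def match_trees(detected, reference, max_dist=2.0):
    """Greedy restricted nearest-neighbour matching: each detected tree may
    claim at most one unused reference tree within max_dist (metres)."""
    pairs = []
    used = set()
    for i, (xd, yd) in enumerate(detected):
        best, best_d = None, max_dist
        for j, (xr, yr) in enumerate(reference):
            if j in used:
                continue
            d = math.hypot(xd - xr, yd - yr)
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            pairs.append((i, best))
    return pairs
```

Matched pairs give the detection rate; unmatched detections and unmatched reference trees give the commission and omission errors, respectively.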

  9. A two-dimensional method of manufactured solutions benchmark suite based on variations of Larsen's benchmark with escalating order of smoothness of the exact solution

    Energy Technology Data Exchange (ETDEWEB)

    Schunert, Sebastian; Azmy, Yousry Y., E-mail: snschune@ncsu.edu, E-mail: yyazmy@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC (United States)

    2011-07-01

    The quantification of the discretization error associated with the spatial discretization of the Discrete Ordinates (DO) equations in multidimensional Cartesian geometries is the central problem in error estimation of spatial discretization schemes for transport theory as well as computer code verification. Traditionally fine mesh solutions are employed as reference, because analytical solutions only exist in the absence of scattering. This approach, however, is inadequate when the discretization error associated with the reference solution is not small compared to the discretization error associated with the mesh under scrutiny. Typically this situation occurs if the mesh of interest is only a couple of refinement levels away from the reference solution or if the order of accuracy of the numerical method (and hence the reference as well) is lower than expected. In this work we present a Method of Manufactured Solutions (MMS) benchmark suite with variable order of smoothness of the underlying exact solution for two-dimensional Cartesian geometries which provides analytical solutions averaged over arbitrary orthogonal meshes for scattering and non-scattering media. It should be emphasized that the developed MMS benchmark suite first eliminates the aforementioned limitation of fine mesh reference solutions, since it secures knowledge of the underlying true solution, and second that it allows for an arbitrary order of smoothness of the underlying exact solution. The latter is of importance because even for smooth parameters and boundary conditions the DO equations can feature exact solutions with limited smoothness. Moreover, the degree of smoothness is crucial for both the order of accuracy and the magnitude of the discretization error for any spatial discretization scheme. (author)

  10. A TWO-DIMENSIONAL METHOD OF MANUFACTURED SOLUTIONS BENCHMARK SUITE BASED ON VARIATIONS OF LARSEN'S BENCHMARK WITH ESCALATING ORDER OF SMOOTHNESS OF THE EXACT SOLUTION

    Energy Technology Data Exchange (ETDEWEB)

    Sebastian Schunert; Yousry Y. Azmy

    2011-05-01

    The quantification of the discretization error associated with the spatial discretization of the Discrete Ordinates (DO) equations in multidimensional Cartesian geometries is the central problem in error estimation of spatial discretization schemes for transport theory as well as computer code verification. Traditionally fine mesh solutions are employed as reference, because analytical solutions only exist in the absence of scattering. This approach, however, is inadequate when the discretization error associated with the reference solution is not small compared to the discretization error associated with the mesh under scrutiny. Typically this situation occurs if the mesh of interest is only a couple of refinement levels away from the reference solution or if the order of accuracy of the numerical method (and hence the reference as well) is lower than expected. In this work we present a Method of Manufactured Solutions (MMS) benchmark suite with variable order of smoothness of the underlying exact solution for two-dimensional Cartesian geometries which provides analytical solutions averaged over arbitrary orthogonal meshes for scattering and non-scattering media. It should be emphasized that the developed MMS benchmark suite first eliminates the aforementioned limitation of fine mesh reference solutions, since it secures knowledge of the underlying true solution, and second that it allows for an arbitrary order of smoothness of the underlying exact solution. The latter is of importance because even for smooth parameters and boundary conditions the DO equations can feature exact solutions with limited smoothness. Moreover, the degree of smoothness is crucial for both the order of accuracy and the magnitude of the discretization error for any spatial discretization scheme.

  11. A two-dimensional method of manufactured solutions benchmark suite based on variations of Larsen's benchmark with escalating order of smoothness of the exact solution

    International Nuclear Information System (INIS)

    The quantification of the discretization error associated with the spatial discretization of the Discrete Ordinates (DO) equations in multidimensional Cartesian geometries is the central problem in error estimation of spatial discretization schemes for transport theory as well as computer code verification. Traditionally fine mesh solutions are employed as reference, because analytical solutions only exist in the absence of scattering. This approach, however, is inadequate when the discretization error associated with the reference solution is not small compared to the discretization error associated with the mesh under scrutiny. Typically this situation occurs if the mesh of interest is only a couple of refinement levels away from the reference solution or if the order of accuracy of the numerical method (and hence the reference as well) is lower than expected. In this work we present a Method of Manufactured Solutions (MMS) benchmark suite with variable order of smoothness of the underlying exact solution for two-dimensional Cartesian geometries which provides analytical solutions averaged over arbitrary orthogonal meshes for scattering and non-scattering media. It should be emphasized that the developed MMS benchmark suite first eliminates the aforementioned limitation of fine mesh reference solutions, since it secures knowledge of the underlying true solution, and second that it allows for an arbitrary order of smoothness of the underlying exact solution. The latter is of importance because even for smooth parameters and boundary conditions the DO equations can feature exact solutions with limited smoothness. Moreover, the degree of smoothness is crucial for both the order of accuracy and the magnitude of the discretization error for any spatial discretization scheme. (author)
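The MMS workflow is independent of dimension: choose an exact solution, substitute it into the governing operator to manufacture a matching source term, and then verify a code against the now-known solution. A one-dimensional, one-group caricature of this workflow (a toy, not the paper's 2-D DO suite) looks like:

```python
import math

def manufactured_source(x, mu=0.7, sigma=1.0):
    """Source q(x) that makes psi(x) = sin(pi*x) the exact solution of the
    1-D, one-group transport equation mu*dpsi/dx + sigma*psi = q."""
    return mu * math.pi * math.cos(math.pi * x) + sigma * math.sin(math.pi * x)

def residual(x, mu=0.7, sigma=1.0, h=1e-6):
    """Sanity check of the manufactured pair: the PDE residual at x
    should vanish (up to finite-difference error)."""
    psi = lambda s: math.sin(math.pi * s)
    dpsi = (psi(x + h) - psi(x - h)) / (2.0 * h)
    return mu * dpsi + sigma * psi(x) - manufactured_source(x, mu, sigma)
```

Running a transport code with this source and comparing against sin(pi*x) on successively refined meshes then measures the observed order of accuracy directly, with no fine-mesh reference needed.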

  12. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  13. Simulation Methods for High-Cycle Fatigue-Driven Delamination using Cohesive Zone Models - Fundamental Behavior and Benchmark Studies

    DEFF Research Database (Denmark)

    Bak, Brian Lau Verndal; Lindgaard, Esben; Turon, A.;

    2015-01-01

    A novel computational method for simulating fatigue-driven delamination cracks in composite laminated structures under cyclic loading based on a cohesive zone model [2], and new benchmark studies with four other comparable methods [3-6], are presented. The benchmark studies describe and compare the traction-separation response in the cohesive zone and the transition phase from quasistatic to fatigue loading for each method. Furthermore, the accuracy of the predicted crack growth rate is studied and compared for each method. It is shown that the method described in [2] is significantly more accurate than the other methods [3-6]. Finally, studies are presented of the dependency and sensitivity to changes in different quasi-static material parameters and model-specific fitting parameters. It is shown that all the methods except [2] rely on different parameters which are not possible to determine...

  14. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    Energy Technology Data Exchange (ETDEWEB)

    Norris, Edward T.; Liu, Xin, E-mail: xinliu@mst.edu [Nuclear Engineering, Missouri University of Science and Technology, Rolla, Missouri 65409 (United States); Hsieh, Jiang [GE Healthcare, Waukesha, Wisconsin 53188 (United States)

    2015-07-15

    Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered gold-standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating an absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference in the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., low dose region). Simulations of the quadrature set 8 and the first order of the Legendre polynomial expansions proved to be the most efficient computation method in the authors' study. The single-thread computation time of the deterministic simulation of the quadrature set 8 and the first order of the Legendre polynomial expansions was 21 min on a personal computer.
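The quoted 2.4% figure is a root-mean-square relative difference between the deterministic and Monte Carlo dose estimates. Assuming point-wise normalization by the Monte Carlo reference (the paper does not spell out the normalization), it can be computed as:

```python
import math

def rms_percent_diff(det, mc):
    """Root-mean-square relative difference (in %) between a deterministic
    dose estimate and a Monte Carlo reference at matching points."""
    terms = [((d - m) / m) ** 2 for d, m in zip(det, mc)]
    return 100.0 * math.sqrt(sum(terms) / len(terms))
```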

  15. Survey of methods used to assess human reliability in the human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim to assess the state-of-the-art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participate in the HF-RBE, which is organised around two study cases: (1) analysis of routine functional test and maintenance procedures, with the aim to assess the probability of test-induced failures, the probability of failures to remain unrevealed, and the potential to initiate transients because of errors performed in the test; and (2) analysis of human actions during an operational transient, with the aim to assess the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. The paper briefly reports how the HF-RBE was structured and gives an overview of the methods that have been used for predicting human reliability in both study cases. The experience in applying these methods is discussed and the results obtained are compared. (author)

  16. Re-analysis of Alaskan benchmark glacier mass-balance data using the index method

    Science.gov (United States)

    Van Beusekom, Ashley E.; O'Neel, Shad R.; March, Rod S.; Sass, Louis C.; Cox, Leif H.

    2010-01-01

    At Gulkana and Wolverine Glaciers, designated the Alaskan benchmark glaciers, we re-analyzed and re-computed the mass balance time series from 1966 to 2009 with the goal of producing more robust time series. Each glacier's data record was analyzed with the same methods. For surface processes, we estimated missing information with an improved degree-day model. Degree-day models predict ablation from the sum of daily mean temperatures and an empirical degree-day factor. We modernized the traditional degree-day model and derived new degree-day factors in an effort to match the balance time series more closely. We estimated missing yearly site data with a new balance gradient method. These efforts showed that an additional step needed to be taken at Wolverine Glacier to adjust for non-representative index sites. As with the previously calculated mass balances, the re-analyzed balances showed a continuing trend of mass loss. We noted that the time series, and thus our estimate of the cumulative mass loss over the period of record, was very sensitive to the data input, and suggest the need to add data-collection sites and modernize our weather stations.
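The degree-day model mentioned above is, at its core, a one-line calculation: ablation equals an empirical degree-day factor times the positive degree-day sum. The factor value below is an assumed placeholder, not one of the derived Alaskan factors:

```python
def degree_day_ablation(daily_mean_temps_c, ddf=4.5):
    """Ablation (mm w.e.) as the empirical degree-day factor `ddf`
    (mm w.e. per degC per day; the default is an assumed placeholder)
    times the positive degree-day sum of the daily mean temperatures."""
    pdd = sum(t for t in daily_mean_temps_c if t > 0.0)
    return ddf * pdd
```

Re-deriving `ddf` per site and season is what "derived new degree-day factors" amounts to: choosing the factor that best reproduces the measured balance series.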

  17. Application of a heterogeneous coarse mesh transport method to a MOX benchmark problem

    International Nuclear Information System (INIS)

    Recently, a coarse mesh transport method was extended to 2-D geometry by coupling Monte Carlo response function calculations to deterministic sweeps for converging the partial currents on the coarse mesh boundaries. More extensive testing of the new method has been performed with the previously published continuous energy benchmark problem, as well as the multigroup C5G7 MOX problem. The effect of the partial current representation in space, for the MOX problem, and in space and energy, for the smaller problem, on the accuracy of the results is the focus of this paper. For the MOX problem, accurate results were obtained with the assumption that the partial currents are piecewise-constant on four spatial segments per coarse mesh interface. Specifically, the errors in the system multiplication factor and the average absolute pin power were 0.12% and 0.68%, respectively. The root mean square and the mean relative pin power errors were 1.15% and 0.56%, respectively. (authors)

  18. Benchmarking as a method of assessment of region’s intellectual potential

    Directory of Open Access Journals (Sweden)

    P.G. Pererva

    2015-12-01

    innovative development of regions. It is proposed to assess the intellectual potential of a region using benchmarking technology. Evaluating the intellectual potential of regions and its impact on development remains meaningful and necessary for building diagrams, algorithms, models and methods of analysis, forecasting and design that are more adequate to the real practice of economic-system development. Among the recognized global market leaders, the largest international companies constantly and consistently maintain an active innovation process in which an operational policy of evolutionary upgrades runs in parallel with a strategic policy of radical innovations, the latter with a significant lead time from idea to realization. It is this second direction that uses benchmarking as a research methodology with useful results. Conclusions and directions for further research. A scientific review of the nature and use of intellectual capital at the current stage of Ukraine's economic development faces its own challenges, among which are the structure of this capital, capacity assessment at the level of enterprises, regions and the country as a whole, identification of the key factors influencing intellectual capital, and evaluation of the efficiency of investments in its support. Further studies relate to such unsolved issues as the relationship of intellectual potential to the mechanisms of managing innovative development, the efficiency of investments in intellectual potential, and the assessment of the intellectual potential of subjects of economic activity.

  19. Development of Benchmarks for Operating Costs and Resources Consumption to be Used in Healthcare Building Sustainability Assessment Methods

    Directory of Open Access Journals (Sweden)

    Maria de Fátima Castro

    2015-09-01

    Full Text Available Since the last decade of the twentieth century, the healthcare industry has been paying attention to the environmental impact of its buildings, and therefore new regulations, policy goals, and Healthcare Building Sustainability Assessment (HBSA) methods are being developed and implemented. At present, healthcare is one of the most regulated industries and it is also one of the largest consumers of energy per net floor area. To assess the sustainability of healthcare buildings it is necessary to establish a set of benchmarks related to their life-cycle performance. These are essential both to rate the sustainability of a project and to support designers and other stakeholders in the process of designing and operating a sustainable building, by allowing comparison between a project and the conventional and best market practices. This research is focused on the methodology used to set the benchmarks for resources consumption, waste production, operation costs and potential environmental impacts related to the operational phase of healthcare buildings. It aims at contributing to the reduction of the subjectivity found in the definition of the benchmarks used in Building Sustainability Assessment (BSA) methods, and it is applied in the Portuguese context. These benchmarks will be used in the development of a Portuguese HBSA method.

  20. Combining and benchmarking methods of foetal ECG extraction without maternal or scalp electrode data

    International Nuclear Information System (INIS)

    Despite significant advances in adult clinical electrocardiography (ECG) signal processing techniques and the power of digital processors, the analysis of non-invasive foetal ECG (NI-FECG) is still in its infancy. The Physionet/Computing in Cardiology Challenge 2013 addresses some of these limitations by making a set of FECG data publicly available to the scientific community for evaluation of signal processing techniques. The abdominal ECG signals were first preprocessed with a band-pass filter in order to remove higher frequencies and baseline wander. A notch filter to remove power interferences at 50 Hz or 60 Hz was applied if required. The signals were then normalized before applying various source separation techniques to cancel the maternal ECG. These techniques included: template subtraction, principal/independent component analysis, extended Kalman filter and a combination of a subset of these methods (FUSE method). Foetal QRS detection was performed on all residuals using a Pan and Tompkins QRS detector and the residual channel with the smoothest foetal heart rate time series was selected. The FUSE algorithm performed better than all the individual methods on the training data set. On the validation and test sets, the best Challenge scores obtained were E1 = 179.44, E2 = 20.79, E3 = 153.07, E4 = 29.62 and E5 = 4.67 for events 1–5 respectively using the FUSE method. These were the best Challenge scores for E1 and E2 and third and second best Challenge scores for E3, E4 and E5 out of the 53 international teams that entered the Challenge. The results demonstrated that existing standard approaches for foetal heart rate estimation can be improved by fusing estimators together. We provide open source code to enable benchmarking for each of the standard approaches described. (paper)

  1. Combining and benchmarking methods of foetal ECG extraction without maternal or scalp electrode data.

    Science.gov (United States)

    Behar, Joachim; Oster, Julien; Clifford, Gari D

    2014-08-01

    Despite significant advances in adult clinical electrocardiography (ECG) signal processing techniques and the power of digital processors, the analysis of non-invasive foetal ECG (NI-FECG) is still in its infancy. The Physionet/Computing in Cardiology Challenge 2013 addresses some of these limitations by making a set of FECG data publicly available to the scientific community for evaluation of signal processing techniques. The abdominal ECG signals were first preprocessed with a band-pass filter in order to remove higher frequencies and baseline wander. A notch filter to remove power interferences at 50 Hz or 60 Hz was applied if required. The signals were then normalized before applying various source separation techniques to cancel the maternal ECG. These techniques included: template subtraction, principal/independent component analysis, extended Kalman filter and a combination of a subset of these methods (FUSE method). Foetal QRS detection was performed on all residuals using a Pan and Tompkins QRS detector and the residual channel with the smoothest foetal heart rate time series was selected. The FUSE algorithm performed better than all the individual methods on the training data set. On the validation and test sets, the best Challenge scores obtained were E1 = 179.44, E2 = 20.79, E3 = 153.07, E4 = 29.62 and E5 = 4.67 for events 1-5 respectively using the FUSE method. These were the best Challenge scores for E1 and E2 and third and second best Challenge scores for E3, E4 and E5 out of the 53 international teams that entered the Challenge. The results demonstrated that existing standard approaches for foetal heart rate estimation can be improved by fusing estimators together. We provide open source code to enable benchmarking for each of the standard approaches described. PMID:25069410
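
The maternal-ECG cancellation step can be illustrated with the simplest of the listed techniques, template subtraction. This is a minimal sketch that assumes the maternal R-peak sample indices are already known; real NI-FECG pipelines also align and scale the template per beat.

```python
def template_subtract(signal, peak_indices, half_window):
    """Subtract an averaged maternal-beat template at each given R peak.

    Windows that would fall off the signal edges are ignored when building
    the template. A minimal sketch, not the Challenge implementation.
    """
    windows = [signal[p - half_window:p + half_window + 1]
               for p in peak_indices
               if p - half_window >= 0 and p + half_window + 1 <= len(signal)]
    template = [sum(col) / len(windows) for col in zip(*windows)]
    residual = list(signal)
    for p in peak_indices:
        if p - half_window >= 0 and p + half_window + 1 <= len(signal):
            for k in range(2 * half_window + 1):
                residual[p - half_window + k] -= template[k]
    return residual

# Synthetic abdominal channel: two identical "maternal" beats and no noise,
# so the residual after subtraction should be numerically zero.
sig = [0.0] * 20
for centre in (5, 15):
    sig[centre - 1:centre + 2] = [1.0, 2.0, 1.0]
residual = template_subtract(sig, [5, 15], 1)
```

On real data the residual retains the foetal ECG plus noise, which is why QRS detection is then run on it.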

  2. Benchmarking DFT and semiempirical methods on structures and lattice energies for ten ice polymorphs

    Science.gov (United States)

    Brandenburg, Jan Gerit; Maas, Tilo; Grimme, Stefan

    2015-03-01

    Water in different phases under various external conditions is very important in biochemical systems and for material science at surfaces. Density functional theory methods and approximations thereof have to be tested system-specifically to benchmark their accuracy regarding computed structures and interaction energies. In this study, we present and test a set of ten ice polymorphs in comparison to experimental data with mass densities ranging from 0.9 to 1.5 g/cm3 and including explicit corrections for zero-point vibrational and thermal effects. London dispersion inclusive density functionals at the generalized gradient approximation (GGA), meta-GGA, and hybrid level as well as alternative low-cost molecular orbital methods are considered. The widely used functional of Perdew, Burke and Ernzerhof (PBE) systematically overbinds and overall provides inconsistent results. All other tested methods yield reasonable to very good accuracy. BLYP-D3atm gives excellent results with mean absolute errors for the lattice energy below 1 kcal/mol (7% relative deviation). The corresponding optimized structures are very accurate with mean absolute relative deviations (MARDs) from the reference unit cell volume below 1%. The impact of Axilrod-Teller-Muto (atm) type three-body dispersion and of non-local Fock exchange is small but on average their inclusion improves the results. While the density functional tight-binding model DFTB3-D3 performs well for low density phases, it does not yield good high density structures. As low-cost alternative for structure related problems, we recommend the recently introduced minimal basis Hartree-Fock method HF-3c with a MARD of about 3%.
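
The two error statistics quoted in this record (MAE for lattice energies, MARD for unit-cell volumes) are straightforward to compute; a minimal sketch with illustrative numbers, not the benchmarked DFT results:

```python
def mean_absolute_error(computed, reference):
    """MAE between computed and reference values (e.g. lattice energies in kcal/mol)."""
    return sum(abs(c - r) for c, r in zip(computed, reference)) / len(reference)

def mard_percent(computed, reference):
    """Mean absolute relative deviation in percent (e.g. unit-cell volumes)."""
    return (100.0 / len(reference)) * sum(abs(c - r) / abs(r)
                                          for c, r in zip(computed, reference))

# Illustrative two-point example only
mae = mean_absolute_error([1.0, 2.0], [1.5, 1.5])     # kcal/mol
mard = mard_percent([99.0, 102.0], [100.0, 100.0])    # percent
```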

  3. Evaluation of anode (electro)catalytic materials for the direct borohydride fuel cell: Methods and benchmarks

    Science.gov (United States)

    Olu, Pierre-Yves; Job, Nathalie; Chatenet, Marian

    2016-09-01

    In this paper, different methods are discussed for evaluating the potential of a given catalyst for application as a direct borohydride fuel cell (DBFC) anode material. Characterization results in the DBFC configuration are notably analyzed in the light of important experimental variables which influence the performance of the DBFC. However, in many practical DBFC-oriented studies, these various experimental variables prevent one from isolating the influence of the anode catalyst on the cell performance. Thus, the electrochemical three-electrode cell is a widely employed and useful tool to isolate the DBFC anode catalyst and to investigate its electrocatalytic activity towards the borohydride oxidation reaction (BOR) in the absence of other limitations. This article reviews selected results for different types of catalysts in electrochemical cells containing a sodium borohydride alkaline electrolyte. In particular, common experimental conditions and benchmarks are proposed for practical evaluation of the electrocatalytic activity towards the BOR in the three-electrode cell configuration. The major issue of gaseous hydrogen generation and escape during DBFC operation is also addressed through a comprehensive review of various results depending on the anode composition. Lastly, preliminary concerns are raised about the stability of potential anode catalysts during DBFC operation.

  4. Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) Benchmark Phase II: Identification of Influential Parameters

    International Nuclear Information System (INIS)

    The objective of the Post-BEMUSE Reflood Model Input Uncertainty Methods (PREMIUM) benchmark is to progress on the issue of quantifying the uncertainty of the physical models in system thermal-hydraulic codes by considering a concrete case: the physical models involved in the prediction of core reflooding. The PREMIUM benchmark consists of five phases. This report presents the results of Phase II, dedicated to the identification of the uncertain code parameters associated with physical models used in the simulation of reflooding conditions. This identification is made on the basis of Test 216 of the FEBA/SEFLEX programme according to the following steps: - identification of influential phenomena; - identification of the associated physical models and parameters, depending on the code used; - quantification of the variation range of the identified input parameters through a series of sensitivity calculations. A procedure for the identification of potentially influential code input parameters was set up in the Specifications of Phase II of the PREMIUM benchmark, and a set of quantitative criteria was likewise proposed for the identification of influential input parameters (IP) and their respective variation ranges. Thirteen participating organisations, using 8 different codes (7 system thermal-hydraulic codes and 1 sub-channel module of a system thermal-hydraulic code), submitted Phase II results. The base-case calculations show a spread in predicted cladding temperatures and quench front propagation that has been characterized. All the participants except one predict a too-fast quench front progression. Besides, the cladding temperature time trends obtained by almost all the participants show oscillatory behaviour which may have numerical origins. The criteria adopted for identification of influential input parameters differ between the participants: some organisations used the set of criteria proposed in the Specifications 'as is', some modified the quantitative thresholds

  5. Benchmark Dose Analysis from Multiple Datasets: The Cumulative Risk Assessment for the N-Methyl Carbamate Pesticides

    Science.gov (United States)

    The US EPA’s N-Methyl Carbamate (NMC) Cumulative Risk assessment was based on the effect on acetylcholine esterase (AChE) activity of exposure to 10 NMC pesticides through dietary, drinking water, and residential exposures, assuming the effects of joint exposure to NMCs is dose-...

  6. Neutron Cross Section Processing Methods for Improved Integral Benchmarking of Unresolved Resonance Region Evaluations

    Science.gov (United States)

    Walsh, Jonathan A.; Forget, Benoit; Smith, Kord S.; Brown, Forrest B.

    2016-03-01

    In this work we describe the development and application of computational methods for processing neutron cross section data in the unresolved resonance region (URR). These methods are integrated with a continuous-energy Monte Carlo neutron transport code, thereby enabling their use in high-fidelity analyses. Enhanced understanding of the effects of URR evaluation representations on calculated results is then obtained through utilization of the methods in Monte Carlo integral benchmark simulations of fast spectrum critical assemblies. First, we present a so-called on-the-fly (OTF) method for calculating and Doppler broadening URR cross sections. This method proceeds directly from ENDF-6 average unresolved resonance parameters and, thus, eliminates any need for a probability table generation pre-processing step in which tables are constructed at several energies for all desired temperatures. Significant memory reduction may be realized with the OTF method relative to a probability table treatment if many temperatures are needed. Next, we examine the effects of using a multi-level resonance formalism for resonance reconstruction in the URR. A comparison of results obtained by using the same stochastically-generated realization of resonance parameters in both the single-level Breit-Wigner (SLBW) and multi-level Breit-Wigner (MLBW) formalisms allows for the quantification of level-level interference effects on integrated tallies such as keff and energy group reaction rates. Though, as is well-known, cross section values at any given incident energy may differ significantly between single-level and multi-level formulations, the observed effects on integral results are minimal in this investigation. Finally, we demonstrate the calculation of true expected values, and the statistical spread of those values, through independent Monte Carlo simulations, each using an independent realization of URR cross section structure throughout. 
It is observed that both probability table
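
For contrast with the on-the-fly method, the conventional probability-table lookup it replaces amounts to sampling a cross-section band from tabulated band probabilities. A minimal sketch with illustrative table values, not evaluated nuclear data:

```python
import bisect

def sample_probability_table(band_probs, band_xs, u):
    """Sample a cross-section value from one URR probability table.

    band_probs: band probabilities (summing to 1); band_xs: representative
    cross-section value of each band; u: a uniform random number in [0, 1).
    In a real code one such table exists per energy and temperature, which
    is the pre-processing burden the on-the-fly method avoids.
    """
    cumulative, total = [], 0.0
    for p in band_probs:
        total += p
        cumulative.append(total)
    i = bisect.bisect_right(cumulative, u)
    return band_xs[min(i, len(band_xs) - 1)]
```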

  7. Benchmark dose estimation for cadmium-induced renal tubular damage among environmental cadmium-exposed women aged 35-54 years in two counties of China.

    Directory of Open Access Journals (Sweden)

    Jia Hu

    Full Text Available A number of factors, including gender, age, smoking habits, and occupational exposure, affect the levels of urinary cadmium. Few studies have considered these influences when calculating the benchmark dose (BMD) of cadmium. In the present study, we aimed to calculate BMDs and their 95% lower confidence bounds (BMDLs) for cadmium-induced renal tubular effects in an age-specific population in south-central China. In this study, urinary cadmium, β2-microglobulin, and N-acetyl-β-D-glucosaminidase levels were measured in morning urine samples from 490 randomly selected non-smoking women aged 35-54 years. Participants were selected using stratified cluster sampling in two counties (counties A and B) in China. Multiple regression and logistic regression analyses were used to investigate the dose-response relationship between urinary cadmium levels and tubular effects. BMDs/BMDLs corresponding to an additional risk (benchmark response) of 5% and 10% were calculated with assumed cut-off values of the 84th and 90th percentile of urinary β2-microglobulin and N-acetyl-β-D-glucosaminidase levels of the controls. Urinary levels of β2-microglobulin and N-acetyl-β-D-glucosaminidase increased significantly with increasing levels of urinary cadmium. Age was not associated with urinary cadmium levels, possibly because of the narrow age range included in this study. Based on urinary β2-microglobulin and N-acetyl-β-D-glucosaminidase, BMDs and BMDLs of urinary cadmium ranged from 2.08 to 3.80 (1.41-2.18) µg/g cr for subjects in county A and from 0.99 to 3.34 (0.74-1.91) µg/g cr for those in county B. The predetermined benchmark response of 0.05 and the 90th percentiles of urinary β2-microglobulin and N-acetyl-β-D-glucosaminidase levels of the subjects not exposed to cadmium (i.e., the control group) served as cut-off values. The obtained BMDs of urinary cadmium were similar to the reference point of 1 µg/g cr, as suggested by the European Food Safety Authority.
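
The benchmark-dose idea itself can be sketched with a simple quantal model. The study fits logistic regressions to its own data; the one-hit model and parameter values below are illustrative assumptions chosen because the BMD is then analytic (a BMDL would additionally require a confidence bound on the fitted slope).

```python
import math

def one_hit_prob(dose, p0, b):
    """Quantal one-hit dose-response: P(d) = 1 - (1 - p0) * exp(-b * d)."""
    return 1.0 - (1.0 - p0) * math.exp(-b * dose)

def benchmark_dose(bmr, b):
    """Dose at which the extra risk (P(d) - p0) / (1 - p0) equals the
    benchmark response bmr; analytic for the one-hit model."""
    return -math.log(1.0 - bmr) / b

# Illustrative slope and background response, not fitted to the study data
b, p0 = 0.5, 0.10
bmd05 = benchmark_dose(0.05, b)                                 # BMD at BMR = 5%
extra_risk = (one_hit_prob(bmd05, p0, b) - p0) / (1.0 - p0)     # recovers 0.05
```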

  8. Classification of criticality calculations with correlation coefficient method and its application to OECD/NEA burnup credit benchmarks phase III-A and II-A

    International Nuclear Information System (INIS)

    A method for classifying benchmark results of criticality calculations according to similarity was proposed in this paper. After formulation of the method utilizing correlation coefficients, it was applied to the burnup credit criticality benchmarks Phase III-A and II-A, which were conducted by the Expert Group on Burnup Credit Criticality Safety under the auspices of the Nuclear Energy Agency of the Organisation for Economic Cooperation and Development (OECD/NEA). The Phase III-A benchmark was a series of criticality calculations for irradiated Boiling Water Reactor (BWR) fuel assemblies, whereas the Phase II-A benchmark was a suite of criticality calculations for irradiated Pressurized Water Reactor (PWR) fuel pins. These benchmark problems and their results were summarized. The correlation coefficients were calculated and the sets of benchmark calculation results were classified according to the criterion that the values of the correlation coefficients were no less than 0.15 for the Phase III-A and 0.10 for the Phase II-A benchmarks. When a couple of benchmark calculation results belonged to the same group, one calculation result was found to be predictable from the other. An example is shown for each of the benchmarks. While the evaluated nuclear data seemed to be the main factor for the classification, further investigations are required to find other factors. (author)
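
The grouping criterion can be sketched directly: compute the correlation coefficient between two vectors of benchmark results and compare it with the phase-specific threshold. The vectors below are illustrative, not the published benchmark values.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length result vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def same_group(x, y, threshold):
    """Group two benchmark cases when their correlation reaches the threshold
    (0.15 for Phase III-A, 0.10 for Phase II-A in the paper)."""
    return pearson(x, y) >= threshold

r = pearson([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])   # perfectly correlated toy case
```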

  9. BENCHMARK OF MACHINE LEARNING METHODS FOR CLASSIFICATION OF A SENTINEL-2 IMAGE

    Directory of Open Access Journals (Sweden)

    F. Pirotti

    2016-06-01

    Full Text Available Thanks to mainly ESA and USGS, a large bulk of free images of the Earth is readily available nowadays. One of the main goals of remote sensing is to label images according to a set of semantic categories, i.e. image classification. This is a very challenging issue since land cover of a specific class may present a large spatial and spectral variability and objects may appear at different scales and orientations. In this study, we report the results of benchmarking 9 machine learning algorithms tested for accuracy and speed in training and classification of land-cover classes in a Sentinel-2 dataset. The following machine learning methods (MLM) have been tested: linear discriminant analysis, k-nearest neighbour, random forests, support vector machines, multi layered perceptron, multi layered perceptron ensemble, ctree, boosting, logarithmic regression. The validation is carried out using a control dataset which consists of an independent classification in 11 land-cover classes of an area about 60 km2, obtained by manual visual interpretation of high resolution images (20 cm ground sampling distance) by experts. In this study five out of the eleven classes are used since the others have too few samples (pixels) for testing and validating subsets. The classes used are the following: (i) urban (ii) sowable areas (iii) water (iv) tree plantations (v) grasslands. Validation is carried out using three different approaches: (i) using pixels from the training dataset (train), (ii) using pixels from the training dataset and applying cross-validation with the k-fold method (kfold) and (iii) using all pixels from the control dataset. Five accuracy indices are calculated for the comparison between the values predicted with each model and control values over three sets of data: the training dataset (train), the whole control dataset (full) and with k-fold cross-validation (kfold) with ten folds. Results from validation of predictions of the whole dataset (full) show the

  10. Benchmark of Machine Learning Methods for Classification of a SENTINEL-2 Image

    Science.gov (United States)

    Pirotti, F.; Sunar, F.; Piragnolo, M.

    2016-06-01

    Thanks to mainly ESA and USGS, a large bulk of free images of the Earth is readily available nowadays. One of the main goals of remote sensing is to label images according to a set of semantic categories, i.e. image classification. This is a very challenging issue since land cover of a specific class may present a large spatial and spectral variability and objects may appear at different scales and orientations. In this study, we report the results of benchmarking 9 machine learning algorithms tested for accuracy and speed in training and classification of land-cover classes in a Sentinel-2 dataset. The following machine learning methods (MLM) have been tested: linear discriminant analysis, k-nearest neighbour, random forests, support vector machines, multi layered perceptron, multi layered perceptron ensemble, ctree, boosting, logarithmic regression. The validation is carried out using a control dataset which consists of an independent classification in 11 land-cover classes of an area about 60 km2, obtained by manual visual interpretation of high resolution images (20 cm ground sampling distance) by experts. In this study five out of the eleven classes are used since the others have too few samples (pixels) for testing and validating subsets. The classes used are the following: (i) urban (ii) sowable areas (iii) water (iv) tree plantations (v) grasslands. Validation is carried out using three different approaches: (i) using pixels from the training dataset (train), (ii) using pixels from the training dataset and applying cross-validation with the k-fold method (kfold) and (iii) using all pixels from the control dataset. Five accuracy indices are calculated for the comparison between the values predicted with each model and control values over three sets of data: the training dataset (train), the whole control dataset (full) and with k-fold cross-validation (kfold) with ten folds. 
Results from validation of predictions of the whole dataset (full) show the random
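
The k-fold validation approach (ii) can be sketched with a toy classifier. A nearest-centroid rule stands in for the nine benchmarked MLMs, and the data are illustrative stand-ins for labelled Sentinel-2 pixels.

```python
def kfold_indices(n, k):
    """Split range(n) into k contiguous folds (the study uses ten folds)."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def fit_centroids(X, y):
    """Per-class mean feature vectors (a toy stand-in classifier)."""
    groups = {}
    for features, label in zip(X, y):
        groups.setdefault(label, []).append(features)
    return {label: tuple(sum(col) / len(rows) for col in zip(*rows))
            for label, rows in groups.items()}

def predict(centroids, features):
    """Label of the nearest centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda lab: sum((f - c) ** 2
                                   for f, c in zip(features, centroids[lab])))

def kfold_accuracy(X, y, k):
    """Overall accuracy under k-fold cross-validation."""
    n, correct = len(X), 0
    for fold in kfold_indices(n, k):
        held_out = set(fold)
        Xtr = [X[i] for i in range(n) if i not in held_out]
        ytr = [y[i] for i in range(n) if i not in held_out]
        centroids = fit_centroids(Xtr, ytr)
        correct += sum(1 for i in fold if predict(centroids, X[i]) == y[i])
    return correct / n

# Two well-separated toy classes, interleaved so every fold contains both
X = [(0, 0), (10, 10), (0, 1), (10, 11), (1, 0), (11, 10), (1, 1), (11, 11)]
y = ['urban', 'water', 'urban', 'water', 'urban', 'water', 'urban', 'water']
acc = kfold_accuracy(X, y, 4)
```

Real validation would additionally compute the five accuracy indices over the train, full and kfold sets, as described above.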

  11. Assessment and benchmarking of the impact to gamma dose rate employing different photon-to-dose conversion factors using MCNPX code at the decommissioning stage of Ignalina Nuclear Power Plant.

    Science.gov (United States)

    Stankunas, Gediminas; Pabarcius, Raimondas; Tonkunas, Aurimas

    2014-11-01

    A comparative study was performed to reveal the impact of several photon-to-dose conversion factors on gamma dose rate calculations when applied to a heterogeneous environment in the case of the decommissioning stage of the Ignalina Nuclear Power Plant. The following sets of conversion factors, derived from the recommendations given in ICRP-21, ICRP-74 and the ANSI/ANS-6.1.1 standards (1977 and 1991), were investigated by employing the Monte Carlo N-particle transport (MCNPX) code, based on experiments performed for gamma radiation dose rate measurements inside the Emergency Core Cooling System tank being dismantled. Precise MCNPX simulation and benchmarking of the conversion coefficients highlighted their impact on the results for the selected case of this investigation. The results revealed that the data from the ANSI/ANS-6.1.1 1991 publication are reliable for various dose and shielding calculations in the case of decontamination of radioactive equipment and similar applications, since they showed statistically satisfactory agreement between the simulation results and experimental data. These tendencies suggest that the radiological protection system currently adopted in the NPP during the decommissioning stage can be characterised using the ANSI/ANS-6.1.1 1991 standard with respect to gamma dosimetry. PMID:24990828
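
Applying a set of photon-to-dose conversion factors reduces, in the multigroup picture, to folding the calculated flux with the factors group by group. A minimal sketch with illustrative numbers, not tabulated ICRP or ANSI/ANS values:

```python
def gamma_dose_rate(group_fluxes, conversion_factors):
    """Fold a multigroup photon flux with flux-to-dose conversion factors.

    group_fluxes: photon flux per energy group (photons cm^-2 s^-1);
    conversion_factors: dose rate per unit flux for each group, taken in
    practice from a standard such as ANSI/ANS-6.1.1-1991. The numbers used
    below are illustrative, not values from any standard.
    """
    return sum(phi * f for phi, f in zip(group_fluxes, conversion_factors))

rate = gamma_dose_rate([1.0e3, 2.0e3], [2.0e-6, 1.0e-6])
```

Swapping the factor set (ICRP-21, ICRP-74, ANSI/ANS-6.1.1) while keeping the flux fixed is exactly the comparison the study performs.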

  12. Benchmarking passive seismic methods of estimating the depth of velocity interfaces down to ~300 m

    Science.gov (United States)

    Czarnota, Karol; Gorbatov, Alexei

    2016-04-01

    In shallow passive seismology it is generally accepted that the spatial autocorrelation (SPAC) method is more robust than the horizontal-over-vertical spectral ratio (HVSR) method at resolving the depth to surface-wave velocity (Vs) interfaces. Here we present results of a field test of these two methods over ten drill sites in western Victoria, Australia. The target interface is the base of Cenozoic unconsolidated to semi-consolidated clastic and/or carbonate sediments of the Murray Basin, which overlie Paleozoic crystalline rocks. Depths of this interface intersected in drill holes are between ~27 m and ~300 m. Seismometers were deployed in a three-arm spiral array, with a radius of 250 m, consisting of 13 Trillium Compact 120 s broadband instruments. Data were acquired at each site for 7-21 hours. The Vs architecture beneath each site was determined through nonlinear inversion of HVSR and SPAC data using the neighbourhood algorithm, implemented in the geopsy modelling package (Wathelet, 2005, GRL v35). The HVSR technique yielded depth estimates of the target interface (Vs > 1000 m/s) generally within ±20% error. Successful estimates were even obtained at a site with an inverted velocity profile, where Quaternary basalts overlie Neogene sediments which in turn overlie the target basement. Half of the SPAC estimates showed significantly higher errors than were obtained using HVSR. Joint inversion provided the most reliable estimates but was unstable at three sites. We attribute the surprising success of HVSR over SPAC to a low content of transient signals within the seismic record caused by low levels of anthropogenic noise at the benchmark sites. At a few sites SPAC waveform curves showed clear overtones suggesting that more reliable SPAC estimates may be obtained utilizing a multi-modal inversion. Nevertheless, our study indicates that reliable basin thickness estimates in the Australian conditions tested can be obtained utilizing HVSR data from a single
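
The HVSR observable itself is simple to compute once amplitude spectra are available. A minimal sketch: the quadratic-mean combination of the two horizontal components is a common (though not unique) convention, and real processing adds windowing and smoothing before the ratio.

```python
import math

def hvsr(north_spec, east_spec, vert_spec):
    """Horizontal-over-vertical spectral ratio per frequency bin."""
    return [math.sqrt((n * n + e * e) / 2.0) / v
            for n, e, v in zip(north_spec, east_spec, vert_spec)]

def peak_frequency(freqs, ratios):
    """Frequency of the largest H/V ratio, the usual resonance estimate."""
    return max(zip(ratios, freqs))[1]

# Illustrative two-bin amplitude spectra, not data from the survey
ratios = hvsr([3.0, 4.0], [3.0, 4.0], [1.0, 2.0])
f_peak = peak_frequency([1.0, 2.0], ratios)
```

Inverting such curves for a layered Vs model (as done here with the neighbourhood algorithm in geopsy) is the nonlinear step that yields the interface depths.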

  13. Methods for monitoring patient dose in dental radiology

    International Nuclear Information System (INIS)

    Different types of X-ray equipment are used in dental radiology, such as intra-oral, panoramic, cephalometric, cone-beam computed tomography (CBCT) and multi-slice computed tomography (MSCT) units. Digital receptors have replaced film and screen-film systems and other technical developments have been made. The radiation doses arising from different types of examination are sparsely documented and often expressed in different radiation quantities. In order to allow the comparison of radiation doses using conventional techniques, i.e. intra-oral, panoramic and cephalometric units, with those obtained using CBCT or MSCT techniques, the same quantities and units of dose must be used. Dose determination should be straightforward and reproducible, and data should be stored for each image and clinical examination. It is shown here that air kerma-area product (PKA) values can be used to monitor the radiation doses used in all types of dental examinations including CBCT and MSCT. However, for the CBCT and MSCT techniques, the methods for the estimation of dose must be more thoroughly investigated. The values recorded can be used to determine the diagnostic standard doses and to set diagnostic reference levels for each type of clinical examination and equipment used. It should also be possible to use these values for the estimation and documentation of organ or effective doses. (authors)

  14. Benchmarked Empirical Bayes Methods in Multiplicative Area-level Models with Risk Evaluation

    OpenAIRE

    Ghosh, Malay; Kubokawa, Tatsuya; Kawakubo, Yuki

    2014-01-01

    The paper develops empirical Bayes and benchmarked empirical Bayes estimators of positive small area means under multiplicative models. A simple example will be estimation of per capita income for small areas. It is now well-understood that small area estimation needs explicit, or at least implicit use of models. One potential difficulty with model-based estimators is that the overall estimator for a larger geographical area based on (weighted) sum of the model-based estimators is not necessa...

  15. Inchworm Monte Carlo for exact non-adiabatic dynamics. II. Benchmarks and comparison with established methods

    CERN Document Server

    Chen, Hsing-Ta; Reichman, David R

    2016-01-01

    In this second paper of a two part series, we present extensive benchmark results for two different inchworm Monte Carlo expansions for the spin-boson model. Our results are compared to previously developed numerically exact approaches for this problem. A detailed discussion of convergence and error propagation is presented. Our results and analysis allow for an understanding of the benefits and drawbacks of inchworm Monte Carlo compared to other approaches for exact real-time non-adiabatic quantum dynamics.

  16. A method of estimating fetal dose during brain radiation therapy

    International Nuclear Information System (INIS)

    Purpose: To develop a simple method of estimating fetal dose during brain radiation therapy. Methods and Materials: An anthropomorphic phantom was modified to simulate pregnancy at 12 and 24 weeks of gestation. Fetal dose measurements were carried out using thermoluminescent dosimeters. Brain radiation therapy was performed with two lateral and opposed fields using 6 MV photons. Three sheets of lead, 5.1-cm-thick, were positioned over the phantom's abdomen to reduce fetal exposure. Linear and nonlinear regression analysis was used to investigate the dependence of radiation dose to an unshielded and/or shielded fetus upon field size and distance from field isocenter. Results: Formulas describing the exponential decrease of radiation dose to an unshielded and/or shielded fetus with distance from the field isocenter are presented. All fitted parameters of the above formulas can be easily derived using a set of graphs showing their correlation with field size. Conclusion: This study describes a method of estimating fetal dose during brain radiotherapy, accounting for the effects of gestational age, field size and distance from field isocenter. Accurate knowledge of absorbed dose to the fetus before treatment course allows for the selection of the proper irradiation technique in order to achieve the maximum patient benefit with the least risk to the fetus.
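
The exponential fall-off of fetal dose with distance from the isocenter can be fitted by log-linear least squares. A minimal sketch with synthetic data; the fitted parameters in the paper depend on field size and shielding, so the numbers below are illustrative only.

```python
import math

def fit_exponential(distances, doses):
    """Fit dose = d0 * exp(-mu * distance) by log-linear least squares."""
    n = len(distances)
    logs = [math.log(d) for d in doses]
    mx, my = sum(distances) / n, sum(logs) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(distances, logs))
             / sum((x - mx) ** 2 for x in distances))
    return math.exp(my - slope * mx), -slope   # (d0, mu)

# Synthetic measurements lying on an exact exponential: d0 = 10, mu = 0.1
dists = [10.0, 20.0, 30.0, 40.0]
doses = [10.0 * math.exp(-0.1 * x) for x in dists]
d0, mu = fit_exponential(dists, doses)
```

With TLD measurements as input, the recovered (d0, mu) pair is what the paper's graphs express as functions of field size.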

  17. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...

  18. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence the technical efficiency....

  19. Calculation method for gamma-dose rates from spherical puffs

    International Nuclear Information System (INIS)

Lagrangian puff models are widely used for calculating the dispersion of atmospheric releases. The basic outputs of such models are concentrations of material in the air and on the ground. The simplest method for calculating the gamma dose from the concentration of airborne activity is based on the semi-infinite cloud model. This method is, however, only applicable for points far away from the release point. The exact calculation of the cloud dose using the volume integral requires significant computer time. The volume integral for the gamma dose can be approximated by using the semi-infinite cloud model combined with correction factors. This type of calculation procedure is very fast, but usually the accuracy is poor because the same correction factors are used for all isotopes. The authors describe a more elaborate correction method. This method uses precalculated values of the gamma-dose rate as a function of the puff dispersion parameter (δp) and the distance from the puff centre for four energy groups. The release of energy for each radionuclide in each energy group has been calculated and tabulated. Based on these tables and a suitable interpolation procedure, the calculation of gamma doses takes very little time and is almost independent of the number of radionuclides. (au) (7 tabs., 7 ills., 12 refs.)
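A table-plus-interpolation scheme of this kind might look like the following sketch. The grids, table values, units, and function names are invented for illustration; the paper's actual tables cover four energy groups and tabulated energy release per radionuclide.

```python
import bisect

# Hypothetical precalculated dose-rate factors on a grid of puff dispersion
# parameter sigma_p [m] (rows) x distance from puff centre [m] (columns),
# one such table per energy group.
SIGMAS = [10.0, 100.0, 1000.0]
DISTANCES = [0.0, 100.0, 1000.0]

def interp1(grid, values, x):
    """Piecewise-linear interpolation, clamped at the grid ends."""
    if x <= grid[0]:
        return values[0]
    if x >= grid[-1]:
        return values[-1]
    i = bisect.bisect_right(grid, x) - 1
    t = (x - grid[i]) / (grid[i + 1] - grid[i])
    return values[i] * (1 - t) + values[i + 1] * t

def dose_rate(table, sigma_p, distance):
    """Bilinear lookup: interpolate along distance at each tabulated
    sigma_p, then along sigma_p."""
    rows = [interp1(DISTANCES, row, distance) for row in table]
    return interp1(SIGMAS, rows, sigma_p)

# One toy energy-group table (rows follow SIGMAS, columns follow DISTANCES)
group_table = [
    [4.0, 1.0, 0.01],
    [2.0, 0.8, 0.02],
    [0.5, 0.3, 0.05],
]
r = dose_rate(group_table, 50.0, 100.0)
```

The total dose rate would then be the sum over the four energy groups, each table lookup weighted by the tabulated energy release of the radionuclides present, which is why the cost barely grows with the number of nuclides.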

  20. A unique manual method for emergency offsite dose calculations

    International Nuclear Information System (INIS)

This paper describes a manual method developed for performing emergency offsite dose calculations for PP&L's Susquehanna Steam Electric Station. The method is based on a three-part carbonless form. The front page guides the user through selection of the appropriate accident case and inclusion of meteorological and effluent data. By circling the applicable accident descriptors, the user selects the dose factors on pages 2 and 3, which are then simply multiplied to yield the whole-body and thyroid dose rates at the plant boundary and at two, five, and ten miles. The process used to generate the worksheet is discussed, including the method used to incorporate the observed terrain effects on airflow patterns caused by the Susquehanna River Valley topography

  1. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

-tailed hawk, osprey) (scientific names for both the mammalian and avian species are presented in Appendix B). [In this document, NOAEL refers to both dose (mg contaminant per kg animal body weight per day) and concentration (mg contaminant per kg of food or L of drinking water)]. The 20 wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at U.S. Department of Energy (DOE) waste sites. The NOAEL-based benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species; LOAEL-based benchmarks represent threshold levels at which adverse effects are likely to become evident. These benchmarks consider contaminant exposure through oral ingestion of contaminated media only. Exposure through inhalation and/or direct dermal contact is not considered in this report.
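Benchmarks of this kind rest on extrapolating a test-species NOAEL to a wildlife species of a different body size. The following is a hedged illustration of one common allometric form; the exponent, body weights, and NOAEL are assumptions for the example, not values from the report.

```python
def scale_noael(noael_test, bw_test_kg, bw_wildlife_kg, exponent=0.25):
    """Allometric cross-species scaling of a NOAEL (mg/kg body weight/day).
    The default exponent is an assumption for illustration; published
    benchmark reports use species-group-specific values."""
    return noael_test * (bw_test_kg / bw_wildlife_kg) ** exponent

# e.g. scale a hypothetical 10 mg/kg/d rat NOAEL (0.35 kg body weight)
# to a 1.0 kg wildlife species
v = scale_noael(10.0, 0.35, 1.0)
```

Because the wildlife species here is heavier than the test species, the scaled benchmark comes out lower than the test NOAEL, reflecting the usual conservatism of body-weight scaling.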

  2. Fully Automated Treatment Planning for Head and Neck Radiotherapy using a Voxel-Based Dose Prediction and Dose Mimicking Method

    CERN Document Server

    McIntosh, Chris; McNiven, Andrea; Jaffray, David A; Purdie, Thomas G

    2016-01-01

    Recent works in automated radiotherapy treatment planning have used machine learning based on historical treatment plans to infer the spatial dose distribution for a novel patient directly from the planning image. We present an atlas-based approach which learns a dose prediction model for each patient (atlas) in a training database, and then learns to match novel patients to the most relevant atlases. The method creates a spatial dose objective, which specifies the desired dose-per-voxel, and therefore replaces any requirement for specifying dose-volume objectives for conveying the goals of treatment planning. A probabilistic dose distribution is inferred from the most relevant atlases, and is scalarized using a conditional random field to determine the most likely spatial distribution of dose to yield a specific dose prior (histogram) for relevant regions of interest. Voxel-based dose mimicking then converts the predicted dose distribution to a deliverable treatment plan dose distribution. In this study, we ...
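The final dose-mimicking step can be sketched in isolation. This assumes a simple linear dose model `D = A @ w` over beamlet weights `w`, which is a stand-in: the paper's pipeline (atlas selection, CRF scalarization) is far richer, and all arrays here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((200, 10))      # dose-influence matrix: voxels x beamlets
w_true = rng.random(10)
d_pred = A @ w_true            # stand-in for the predicted per-voxel dose

# Voxel-based dose mimicking: fit nonnegative beamlet weights so the
# deliverable dose A @ w matches the predicted dose, by projected
# gradient descent on the squared voxel-wise error.
w = np.zeros(10)
lr = 1e-3
for _ in range(5000):
    grad = A.T @ (A @ w - d_pred)
    w = np.maximum(w - lr * grad, 0.0)   # enforce w >= 0

mimic_error = np.linalg.norm(A @ w - d_pred) / np.linalg.norm(d_pred)
```

The per-voxel objective is the point of the method: no dose-volume objectives need to be specified, because the target is a full spatial dose distribution.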

  3. Comparison of the dose evaluation methods for criticality accident

    International Nuclear Information System (INIS)

The improvement of dose evaluation methods for criticality accidents is important for rationalizing the design of nuclear fuel cycle facilities. The source spectra of neutrons and gamma rays from a criticality accident depend on the condition of the source: its materials, moderation, density and so on. A comparison of dose evaluation methods for a criticality accident is made. Several methods, which are combinations of criticality calculation and shielding calculation, are proposed. Prompt neutron and gamma-ray doses from nuclear criticality of some uranium systems have been evaluated in the Nuclear Criticality Slide Rule. The uranium metal source (unmoderated system) and the uranyl nitrate solution source (moderated system) in the rule are evaluated by several calculation methods, which are combinations of code and cross-section library, as follows: (a) SAS1X (ENDF/B-IV), (b) MCNP4C (ENDF/B-VI)-ANISN (DLC23E or JSD120), (c) MCNP4C-MCNP4C (ENDF/B-VI). Each consists of a criticality calculation and a shielding calculation. These calculation methods are compared with respect to the tissue absorbed dose and the spectra at 2 m from the source. (author)

  4. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
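Code verification against a manufactured (or classical analytical) solution, as recommended above, can be illustrated with a minimal example: check that a second-order finite-difference Poisson solver actually exhibits second-order convergence against u(x) = sin(πx). The solver and problem are generic textbook material, not drawn from the paper.

```python
import math

def solve_poisson(f, n):
    """Second-order central-difference solve of u''(x) = f(x) on (0,1),
    u(0) = u(1) = 0, via the Thomas algorithm."""
    h = 1.0 / n
    xs = [i * h for i in range(1, n)]
    a = [1.0] * (n - 1)            # sub-diagonal
    b = [-2.0] * (n - 1)           # diagonal
    c = [1.0] * (n - 1)            # super-diagonal
    d = [f(x) * h * h for x in xs]
    for i in range(1, n - 1):      # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * (n - 1)            # back substitution
    u[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return xs, u

# Manufactured solution u(x) = sin(pi x)  =>  f(x) = -pi^2 sin(pi x)
def err(n):
    xs, u = solve_poisson(lambda x: -math.pi ** 2 * math.sin(math.pi * x), n)
    return max(abs(ui - math.sin(math.pi * x)) for x, ui in zip(xs, u))

# Observed order of accuracy from two grid refinements; expect ~2
order = math.log(err(20) / err(40)) / math.log(2.0)
```

This is exactly the kind of quantitative pass/fail evidence a code verification benchmark provides: if the observed order falls below the formal order, the implementation is suspect.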

  5. The lead cooled fast reactor benchmark Brest-300: analysis with sensitivity method

    International Nuclear Information System (INIS)

The lead-cooled fast neutron reactor is one of the most interesting candidates for the development of atomic energy. BREST-300 is a 300 MWe lead-cooled fast reactor developed by NIKIET (Russia) with a deterministic safety approach which aims to exclude reactivity margins greater than the delayed neutron fraction. The development of innovative reactors (lead coolant, nitride fuel...) and fuel cycles with new constraints such as cycle closure or actinide burning requires new technologies and new nuclear data. In this connection, the tools and neutron data used for the calculational analysis of reactor characteristics require thorough validation. NIKIET developed a reactor benchmark fitting design-type calculational tools (including neutron data). In the frame of technical exchanges between NIKIET and EDF (France), the results of this benchmark calculation concerning the principal parameters of fuel evolution and safety parameters have been inter-compared, in order to estimate the uncertainties and validate the codes for calculations of this new kind of reactor. Different codes and cross-section data have been used, and sensitivity studies have been performed to understand and quantify the sources of uncertainty. The comparison of results shows that the difference in keff value between the ERANOS code with the ERALIB1 library and the reference is of the same order of magnitude as the delayed neutron fraction. On the other hand, the discrepancy is more than twice as large if the JEF2.2 library is used with ERANOS. Analysis of the discrepancies in calculation results reveals that the main effect comes from differences in nuclear data, namely the U238 and Pu239 fission and capture cross sections and the lead inelastic cross sections

  6. Computerized simulation methods for dose reduction, in radiodiagnosis

    International Nuclear Information System (INIS)

This work presents computational methods that allow the simulation of any situation encountered in diagnostic radiology. Parameters of radiographic techniques that yield a previously chosen standard radiographic image are studied, so that the radiation doses absorbed by the patient can be compared. Initially the method was tested on a simple system composed of 5.0 cm of water and 1.0 mm of aluminium and, after verifying its validity experimentally, it was applied to breast and arm-fracture radiographs. It was observed that the choice of the filter material is not an important factor, because aluminium, iron, copper, gadolinium, and other filters presented analogous behaviour. A method of comparing materials based on spectral matching is shown. Both the results given by this simulation method and the experimental measurements indicate an equivalence of brass and copper, both more efficient than aluminium in terms of exposure time, but not of dose. (author)

  7. Dosing method of physical activity in aerobics classes for students

    OpenAIRE

    Beliak Yu. I.; Zinchenko N.M.

    2014-01-01

Purpose: to substantiate a method of dosing physical activity in aerobics classes for students. The method is based on evaluating the metabolic cost of the exercises used. Material: the experiment assessed students' heart-rate response to complexes of classical and step aerobics (n = 47, age 20-23 years). The complexes used various factors regulating intensity: performing combinations of basic steps, involving arm movements, holding in hands dumb...

  8. A field-based method to derive macroinvertebrate benchmark for specific conductivity adapted for small data sets and demonstrated in the Hun-Tai River Basin, Northeast China.

    Science.gov (United States)

    Zhao, Qian; Jia, Xiaobo; Xia, Rui; Lin, Jianing; Zhang, Yuan

    2016-09-01

Ionic mixtures, measured as specific conductivity, are of increasing concern because of their toxicity to aquatic organisms. However, identifying protective values of specific conductivity for aquatic organisms is challenging, given that laboratory test systems can examine neither the most salt-intolerant species nor effects occurring in streams. Large data sets used for deriving field-based benchmarks are rarely available. In this study, a field-based method for small data sets was used to derive a specific conductivity benchmark, which is expected to prevent the extirpation of 95% of local taxa from circum-neutral to alkaline waters dominated by a mixture of SO4(2-) and HCO3(-) anions and other dissolved ions. To compensate for the smaller sample size, species-level analyses were combined with genus-level analyses. The benchmark is based on extirpation concentration (XC95) values of specific conductivity for 60 macroinvertebrate genera estimated from 296 sampling sites in the Hun-Tai River Basin. We derived the specific conductivity benchmark using a 2-point interpolation method, which yielded a benchmark of 249 μS/cm. Our study tailored the method developed by USEPA to derive an aquatic-life benchmark for specific conductivity for basin-scale application, and may provide useful information for water pollution control and management.
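The 2-point interpolation step can be sketched as follows: rank the genus XC95 values and linearly interpolate to the 5th percentile (the concentration below which 95% of genera are expected to persist). The plotting position `i / n` and the toy values are assumptions for illustration, not the USEPA method's exact conventions.

```python
def hc05(xc95_values):
    """Benchmark as the 5th percentile of ranked XC95 values, via 2-point
    linear interpolation between the bracketing order statistics."""
    xs = sorted(xc95_values)
    n = len(xs)
    target = 0.05
    for i in range(1, n):
        p_lo, p_hi = i / n, (i + 1) / n   # assumed plotting positions
        if p_lo <= target <= p_hi:
            t = (target - p_lo) / (p_hi - p_lo)
            return xs[i - 1] * (1 - t) + xs[i] * t
    return xs[0]

# Toy example with 20 genera: the 5% point falls exactly on the most
# salt-sensitive genus's XC95
vals = [100.0 + 25.0 * i for i in range(20)]
b = hc05(vals)
```

With 60 genera, as in the study, the 5% point falls between the third and fourth most sensitive genera, which is where the 2-point interpolation does real work.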

  9. Comparison of dose calculation methods for brachytherapy of intraocular tumors

    Energy Technology Data Exchange (ETDEWEB)

    Rivard, Mark J.; Chiu-Tsao, Sou-Tung; Finger, Paul T.; Meigooni, Ali S.; Melhus, Christopher S.; Mourtada, Firas; Napolitano, Mary E.; Rogers, D. W. O.; Thomson, Rowan M.; Nath, Ravinder [Department of Radiation Oncology, Tufts University School of Medicine, Boston, Massachusetts 02111 (United States); Quality MediPhys LLC, Denville, New Jersey 07834 (United States); New York Eye Cancer Center, New York, New York 10065 (United States); Department of Radiation Oncology, Comprehensive Cancer Center of Nevada, Las Vegas, Nevada 89169 (United States); Department of Radiation Oncology, Tufts University School of Medicine, Boston, Massachusetts 02111 (United States); Department of Radiation Physics, University of Texas, M.D. Anderson Cancer Center, Houston, Texas 77030 (United States) and Department of Experimental Diagnostic Imaging, University of Texas, M.D. Anderson Cancer Center, Houston, Texas 77030 (United States); Physics, Elekta Inc., Norcross, Georgia 30092 (United States); Department of Physics, Carleton University, Ottawa, Ontario K1S 5B6 (Canada); Department of Therapeutic Radiology, Yale University School of Medicine, New Haven, Connecticut 06520 (United States)

    2011-01-15

Purpose: To investigate dosimetric differences among several clinical treatment planning systems (TPS) and Monte Carlo (MC) codes for brachytherapy of intraocular tumors using ¹²⁵I or ¹⁰³Pd plaques, and to evaluate the impact on the prescription dose of the adoption of MC codes and certain versions of a TPS (Plaque Simulator with optional modules). Methods: Three clinical brachytherapy TPS capable of intraocular brachytherapy treatment planning and two MC codes were compared. The TPS investigated were Pinnacle v8.0dp1, BrachyVision v8.1, and Plaque Simulator v5.3.9, all of which use the AAPM TG-43 formalism in water. The Plaque Simulator software can also handle some correction factors from MC simulations. The MC codes used are MCNP5 v1.40 and BrachyDose/EGSnrc. Using these TPS and MC codes, three types of calculations were performed: homogeneous medium with point sources (for the TPS only, using the 1D TG-43 dose calculation formalism); homogeneous medium with line sources (TPS with 2D TG-43 dose calculation formalism and MC codes); and plaque heterogeneity-corrected line sources (Plaque Simulator with modified 2D TG-43 dose calculation formalism and MC codes). Comparisons were made of doses calculated at points-of-interest on the plaque central-axis and at off-axis points of clinical interest within a standardized model of the right eye. Results: For the homogeneous water medium case, agreement was within ≈2% for the point- and line-source models when comparing between TPS and between TPS and MC codes, respectively. For the heterogeneous medium case, dose differences (as calculated using the MC codes and Plaque Simulator) differ by up to 37% on the central-axis in comparison to the homogeneous water calculations. A prescription dose of 85 Gy at 5 mm depth based on calculations in a homogeneous medium delivers 76 Gy and 67 Gy for specific ¹²⁵I and ¹⁰³Pd sources, respectively, when accounting for COMS-plaque heterogeneities. For off...

  10. Application of the hybrid approach to the benchmark dose of urinary cadmium as the reference level for renal effects in cadmium polluted and non-polluted areas in Japan

    Energy Technology Data Exchange (ETDEWEB)

    Suwazono, Yasushi, E-mail: suwa@faculty.chiba-u.jp [Department of Occupational and Environmental Medicine, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuoku, Chiba 260-8670 (Japan); Nogawa, Kazuhiro; Uetani, Mirei [Department of Occupational and Environmental Medicine, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuoku, Chiba 260-8670 (Japan); Nakada, Satoru [Safety and Health Organization, Chiba University, 1-33 Yayoicho, Inageku, Chiba 263-8522 (Japan); Kido, Teruhiko [Department of Community Health Nursing, Kanazawa University School of Health Sciences, 5-11-80 Kodatsuno, Kanazawa, Ishikawa 920-0942 (Japan); Nakagawa, Hideaki [Department of Epidemiology and Public Health, Kanazawa Medical University, 1-1 Daigaku, Uchnada, Ishikawa 920-0293 (Japan)

    2011-02-15

Objectives: The aim of this study was to evaluate the reference level of urinary cadmium (Cd) that caused renal effects. An updated hybrid approach was used to estimate the benchmark doses (BMDs) and their 95% lower confidence limits (BMDL) in subjects with a wide range of exposure to Cd. Methods: The total number of subjects was 1509 (650 men and 859 women) in non-polluted areas and 3103 (1397 men and 1706 women) in the environmentally exposed Kakehashi river basin. We measured urinary cadmium (U-Cd) as a marker of long-term exposure, and β2-microglobulin (β2-MG) as a marker of renal effects. The BMD and BMDL that corresponded to an additional risk (BMR) of 5% were calculated with background risk at zero exposure set at 5%. Results: The U-Cd BMDL for β2-MG was 3.5 μg/g creatinine in men and 3.7 μg/g creatinine in women. Conclusions: The BMDL values for a wide range of U-Cd were generally within the range of values measured in non-polluted areas in Japan. This indicated that the hybrid approach is a robust method for different ranges of cadmium exposure. The present results may contribute further to recent discussions on health risk assessment of Cd exposure.
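The hybrid-approach calculation can be sketched with a toy model. The assumption here is a log-transformed biomarker that is normally distributed with a mean linear in dose and constant SD; the parameter values are illustrative only, not the study's fitted estimates.

```python
from statistics import NormalDist

# Illustrative parameters: intercept, slope per unit dose, residual SD
b0, b1, sigma = 1.0, 0.15, 0.8
p0, bmr = 0.05, 0.05     # background risk at zero dose, additional risk

nd = NormalDist()
# Cutoff for an "abnormal" biomarker value, chosen so that the risk
# at zero exposure equals the background risk p0 (the hybrid idea)
cutoff = b0 + sigma * nd.inv_cdf(1 - p0)

def risk(dose):
    """P(log biomarker > cutoff) at a given dose."""
    return 1 - nd.cdf((cutoff - (b0 + b1 * dose)) / sigma)

# BMD solves risk(BMD) = p0 + bmr; closed form for this linear-normal model
bmd = (cutoff - sigma * nd.inv_cdf(1 - p0 - bmr) - b0) / b1
```

The BMDL would then be the lower 95% confidence limit on this BMD, obtained from the sampling uncertainty of the fitted parameters; that step is omitted in this sketch.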

  11. Application of the hybrid approach to the benchmark dose of urinary cadmium as the reference level for renal effects in cadmium polluted and non-polluted areas in Japan

    International Nuclear Information System (INIS)

    Objectives: The aim of this study was to evaluate the reference level of urinary cadmium (Cd) that caused renal effects. An updated hybrid approach was used to estimate the benchmark doses (BMDs) and their 95% lower confidence limits (BMDL) in subjects with a wide range of exposure to Cd. Methods: The total number of subjects was 1509 (650 men and 859 women) in non-polluted areas and 3103 (1397 men and 1706 women) in the environmentally exposed Kakehashi river basin. We measured urinary cadmium (U-Cd) as a marker of long-term exposure, and β2-microglobulin (β2-MG) as a marker of renal effects. The BMD and BMDL that corresponded to an additional risk (BMR) of 5% were calculated with background risk at zero exposure set at 5%. Results: The U-Cd BMDL for β2-MG was 3.5 μg/g creatinine in men and 3.7 μg/g creatinine in women. Conclusions: The BMDL values for a wide range of U-Cd were generally within the range of values measured in non-polluted areas in Japan. This indicated that the hybrid approach is a robust method for different ranges of cadmium exposure. The present results may contribute further to recent discussions on health risk assessment of Cd exposure.

  12. Toward an organ based dose prescription method for the improved accuracy of murine dose in orthovoltage x-ray irradiators

    Energy Technology Data Exchange (ETDEWEB)

    Belley, Matthew D.; Wang, Chu [Medical Physics Graduate Program, Duke University Medical Center, Durham, North Carolina 27705 (United States); Nguyen, Giao; Gunasingha, Rathnayaka [Duke Radiation Dosimetry Laboratory, Duke University Medical Center, Durham, North Carolina 27710 (United States); Chao, Nelson J. [Department of Medicine, Duke University Medical Center, Durham, North Carolina 27710 and Department of Immunology, Duke University Medical Center, Durham, North Carolina 27710 (United States); Chen, Benny J. [Department of Medicine, Duke University Medical Center, Durham, North Carolina 27710 (United States); Dewhirst, Mark W. [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 (United States); Yoshizumi, Terry T., E-mail: terry.yoshizumi@duke.edu [Duke Radiation Dosimetry Laboratory, Duke University Medical Center, Durham, North Carolina 27710 (United States); Department of Radiology, Duke University Medical Center, Durham, North Carolina 27710 (United States); Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 (United States)

    2014-03-15

Purpose: Accurate dosimetry is essential when irradiating mice to ensure that functional and molecular endpoints are well understood for the radiation dose delivered. Conventional methods of prescribing dose in mice involve the use of a single dose rate measurement and assume a uniform average dose throughout all organs of the entire mouse. Here, the authors report the individual average organ dose values for the irradiation of a 12, 23, and 33 g mouse on a 320 kVp x-ray irradiator and calculate the resulting error from using conventional dose prescription methods. Methods: Organ doses were simulated in the Geant4 application for tomographic emission toolkit using the MOBY mouse whole-body phantom. Dosimetry was performed for three beams utilizing filters A (1.65 mm Al), B (2.0 mm Al), and C (0.1 mm Cu + 2.5 mm Al), respectively. In addition, simulated x-ray spectra were validated with physical half-value layer measurements. Results: Average doses in soft-tissue organs were found to vary by as much as 23%–32% depending on the filter. Compared to filters A and B, filter C provided the hardest beam and had the lowest variation in soft-tissue average organ doses across all mouse sizes, with a difference of 23% for the median mouse size of 23 g. Conclusions: This work suggests a new dose prescription method in small animal dosimetry: it presents a departure from the conventional approach of assigning a single dose value for irradiation of mice to a more comprehensive approach of characterizing individual organ doses to minimize the error and uncertainty. In human radiation therapy, clinical treatment planning establishes the target dose as well as the dose distribution; however, this has generally not been done in small animal research. These results suggest that organ dose errors will be minimized by calibrating the dose rates for all filters, and using different dose rates for different organs.
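The error from prescribing with a single dose rate can be sketched directly. All dose-rate values below are invented placeholders, not the paper's simulated organ doses; the point is only the arithmetic of the comparison.

```python
# Hypothetical simulated organ dose rates (cGy/min)
organ_rates = {
    "liver": 1.00,
    "lung": 1.15,
    "kidney": 0.90,
    "marrow": 1.25,
}
single_rate = 1.05         # conventional single-point calibration (cGy/min)

target_dose = 400.0        # cGy prescribed to the whole mouse
time_min = target_dose / single_rate   # exposure time set by the single rate

# Percent error in the dose each organ actually receives
errors = {organ: 100.0 * (rate * time_min - target_dose) / target_dose
          for organ, rate in organ_rates.items()}
worst = max(abs(e) for e in errors.values())
```

Prescribing per organ, i.e. choosing `time_min` from each organ's own dose rate, drives each of these errors to zero, which is the paper's proposed departure from the single-rate convention.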

13. Method of simulation of low dose rate for total dose effect in 0.18 μm CMOS technology

    Energy Technology Data Exchange (ETDEWEB)

He Baoping; Yao Zhibin; Guo Hongxia; Luo Yinhong; Zhang Fengqi; Wang Yuanming; Zhang Keying, E-mail: baopinghe@126.co [Northwest Institute of Nuclear Technology, Xi'an 710613 (China)]

    2009-07-15

Three methods for simulating low dose rate irradiation are presented and experimentally verified using 0.18 μm CMOS transistors. The results show that the best approach is to use a series of high dose rate irradiations, with 100 °C annealing steps in between, to simulate a continuous low dose rate irradiation. This approach can reduce the low dose rate testing time by as much as a factor of 45 with respect to an actual 0.5 rad(Si)/s irradiation. The procedure also provides detailed information on the behavior of the test devices in a low dose rate environment.
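The arithmetic behind such a time saving can be sketched as follows. All numbers here are placeholders, not the paper's schedule; the quoted factor of ~45 depends on the actual total dose, step dose rate, and anneal durations used.

```python
# Time for the true low-dose-rate exposure versus an accelerated
# step-stress schedule (high-dose-rate steps + anneals between them).
total_dose = 100e3              # target total ionizing dose, rad(Si)
low_rate = 0.5                  # rad(Si)/s, the environment being simulated
t_low = total_dose / low_rate   # true low-dose-rate test time, s

high_rate = 50.0                # rad(Si)/s per high-dose-rate step (assumed)
n_steps = 5
anneal_s = 10 * 3600            # assumed 100 °C anneal between steps, s
t_high = total_dose / high_rate + n_steps * anneal_s

speedup = t_low / t_high
```

With these placeholder numbers the anneals dominate the accelerated schedule, so the speed-up is modest; the reported factor of 45 implies a schedule whose anneal time is short relative to the avoided low-dose-rate exposure time.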

  14. Method of simulation of low dose rate for total dose effect in 0.18 μm CMOS technology

    International Nuclear Information System (INIS)

Three methods for simulating low dose rate irradiation are presented and experimentally verified using 0.18 μm CMOS transistors. The results show that the best approach is to use a series of high dose rate irradiations, with 100 °C annealing steps in between, to simulate a continuous low dose rate irradiation. This approach can reduce the low dose rate testing time by as much as a factor of 45 with respect to an actual 0.5 rad(Si)/s irradiation. The procedure also provides detailed information on the behavior of the test devices in a low dose rate environment.

15. Radiological environmental dose assessment methods and compliance dose results for 2015 operations at the Savannah River Site

    Energy Technology Data Exchange (ETDEWEB)

    Jannik, G. T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Dixon, K. L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-09-01

    This report presents the environmental dose assessment methods and the estimated potential doses to the offsite public from 2015 Savannah River Site (SRS) atmospheric and liquid radioactive releases. Also documented are potential doses from special-case exposure scenarios - such as the consumption of deer meat, fish, and goat milk.

  17. Comparison between calculation methods of dose rates in gynecologic brachytherapy

    International Nuclear Information System (INIS)

In radiation treatments for gynecologic tumors it is necessary to evaluate the quality of the results obtained by different methods of calculating the dose rates at the points of clinical interest (A, rectal, vesical). The present work compares the results obtained by two methods: the three-dimensional Manual Calibration Method (MCM) (Vianello E., et al. 1998), using orthogonal radiographs for each patient in treatment, and the Theraplan/TP-11 planning system (Theratronics International Limited 1990), the latter verified experimentally (Vianello et al. 1996). The results show that MCM can be used in physical-clinical practice with percentage differences comparable to those of the computerized programs. (Author)

  18. TORT solutions to the NEA suite of benchmarks for 3D transport methods and codes over a range in parameter space

    Energy Technology Data Exchange (ETDEWEB)

    Bekar, Kursat B. [Department of Mechanical and Nuclear Engineering, Penn State University, University Park, PA 16802 (United States)], E-mail: bekarkb@ornl.gov; Azmy, Yousry Y. [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States)], E-mail: yyazmy@ncsu.edu

    2009-04-15

We present the TORT solutions to the 3D transport codes' suite of benchmarks exercise. An overview of benchmark configurations is provided, followed by a description of the TORT computational model we developed to solve the cases comprising the benchmark suite. In the numerical experiments reported in this paper, we chose to refine the spatial and angular discretizations simultaneously, from the coarsest model (40 x 40 x 40, 200 angles) to the finest model (160 x 160 x 160, 800 angles). The MCNP reference solution is used for evaluating the effect of model-refinement on the accuracy of the TORT solutions. The presented results show that the majority of benchmark quantities are computed with good accuracy by TORT, and that the accuracy improves with model refinement. However, this deliberately severe test has exposed some deficiencies in both deterministic and stochastic solution approaches. Specifically, TORT fails to converge the inner iterations in some benchmark configurations while MCNP produces zero tallies, or drastically poor statistics for some benchmark quantities. We conjecture that TORT's failure to converge is driven by ray effects in configurations with low scattering ratio and/or highly skewed computational cells, i.e. aspect ratio far from unity. The failure of MCNP occurs in quantities tallied over a very small area or volume in physical space, or quantities tallied many (~25) mean free paths away from the source. Hence automated, robust, and reliable variance reduction techniques are essential for obtaining high quality reference values of the benchmark quantities. Preliminary results of the benchmark exercise indicate that the occasionally poor performance of TORT is shared with other deterministic codes. Armed with this information, method developers can now direct their attention to regions in parameter space where such failures occur and design alternative solution approaches for such instances.

  19. Iterative methods for dose reduction and image enhancement in tomography

    Science.gov (United States)

    Miao, Jianwei; Fahimian, Benjamin Pooya

    2012-09-18

A system and method for creating a three-dimensional cross-sectional image of an object by the reconstruction of its projections that have been iteratively refined through modification in object space and Fourier space is disclosed. The invention provides systems and methods for use with any tomographic imaging system that reconstructs an object from its projections. In one embodiment, the invention presents a method to eliminate interpolations present in conventional tomography. The method has been experimentally shown to provide higher resolution and improved image quality parameters over existing approaches. A primary benefit of the method is radiation dose reduction, since the invention can produce an image of a desired quality with fewer projections than conventional methods require.
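Iterating between object space and Fourier space can be shown with a toy 1D Gerchberg-Papoulis-style reconstruction: alternately reinsert the measured Fourier samples and enforce real-space constraints (nonnegativity and known support). This is a generic illustration of the alternating-projection idea, not the patent's specific algorithm; the missing-frequency band and signal are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
x_true = np.zeros(n)
x_true[20:30] = rng.random(10) + 0.5   # nonnegative, known support

F = np.fft.fft(x_true)
measured = np.ones(n, dtype=bool)
measured[28:37] = False                # pretend this band was not acquired

x = np.zeros(n)
for _ in range(3000):
    X = np.fft.fft(x)
    X[measured] = F[measured]          # Fourier space: reinsert the data
    x = np.fft.ifft(X).real
    x[x < 0] = 0.0                     # object space: nonnegativity
    x[:20] = 0.0                       # object space: known support
    x[30:] = 0.0

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The unmeasured frequencies are filled in by the object-space constraints rather than by interpolation, which is the mechanism that lets such methods reach a target image quality from fewer measurements.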

  20. Benchmark Calculations on the Atomization Enthalpy, Geometry and Vibrational Frequencies of UF6 with Relativistic DFT Methods

    Institute of Scientific and Technical Information of China (English)

    XIAO Hai; LI Jun

    2008-01-01

    Benchmark calculations on the molar atomization enthalpy, geometry, and vibrational frequencies of uranium hexafluoride (UF6) have been performed by using relativistic density functional theory (DFT) with various levels of relativistic effects, different types of basis sets, and exchange-correlation functionals. Scalar relativistic effects are shown to be critical for the structural properties. The spin-orbit coupling effects are important for the calculated energies, but are much less important for other calculated ground-state properties of closed-shell UF6. We conclude through systematic investigations that ZORA- and RECP-based relativistic DFT methods are both appropriate for incorporating relativistic effects. Comparisons of different types of basis sets (Slater, Gaussian, and plane-wave types) and various levels of theoretical approximation of the exchange-correlation functionals were also made.

  1. Dosing method of physical activity in aerobics classes for students

    Directory of Open Access Journals (Sweden)

    Beliak Yu.I.

    2014-10-01

    Full Text Available Purpose: to substantiate a method of dosing physical activity in aerobics classes for students. The method is based on evaluating the metabolic cost of the exercises used in the classes. Material: the experiment assessed the pulse response of students (n = 47, age 20-23 years) to classical and step aerobics complexes. The complexes used various factors to regulate intensity: combinations of basic steps, involvement of arm movements, holding 1 kg dumbbells, increases in the tempo of the musical accompaniment, and varying step-platform heights. Results: on the basis of the relationship between heart rate and oxygen consumption, the energy cost of each intensity-regulating technique was determined. This indicator was used to justify the intensity, duration, and frequency of aerobics sessions corresponding to the students' level of physical condition and motor activity deficit. Conclusions: the computational component of this load-dosing method makes it convenient for use in automated computer programs, and it can easily be modified to dose loads in other types of recreational fitness.

  2. Model Averaging Software for Dichotomous Dose Response Risk Estimation

    Directory of Open Access Journals (Sweden)

    Matthew W. Wheeler

    2008-02-01

    Full Text Available Model averaging has been shown to be a useful method for incorporating model uncertainty in quantitative risk estimation. In certain circumstances this technique is computationally complex, requiring sophisticated software to carry out the computation. We introduce software that implements model averaging for risk assessment based upon dichotomous dose-response data. This software, which we call Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD), fits the quantal response models which are also used in the US Environmental Protection Agency benchmark dose software suite, and generates a model-averaged dose-response model from which benchmark dose and benchmark dose lower bound estimates are obtained. The software fulfills a need for risk assessors, allowing them to go beyond a single model in risk assessments based on quantal data by focusing on a set of models that describes the experimental data.
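The weighting step at the heart of such model averaging can be sketched as follows. This is a minimal illustration using Akaike weights, not the MADr-BMD implementation; the fitted log-likelihoods, parameter counts, and per-model BMD estimates are assumed inputs from models fitted elsewhere:

```python
import math

def model_average_bmd(fits):
    """Combine per-model benchmark dose (BMD) estimates with Akaike
    weights.  `fits` is a list of (log_likelihood, n_params, bmd)
    tuples for quantal models already fitted to the same data set."""
    aics = [2 * k - 2 * ll for ll, k, _ in fits]
    best = min(aics)
    # Relative likelihoods exp(-delta_AIC / 2), normalised to weights.
    rel = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(rel)
    weights = [r / total for r in rel]
    bmd = sum(w * f[2] for w, f in zip(weights, fits))
    return bmd, weights
```

Two equally good fits (same likelihood, same parameter count) receive equal weight, so the averaged BMD is then simply the mean of the two per-model estimates.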

  3. A mathematical approach to optimal selection of dose values in the additive dose method of EPR dosimetry

    International Nuclear Information System (INIS)

    Additive dose methods commonly used in electron paramagnetic resonance (EPR) dosimetry are time consuming and labor intensive. We have developed a mathematical approach for determining optimal spacing of applied doses and the number of spectra which should be taken at each dose level. Expected uncertainties in the data points are assumed to be normally distributed with a fixed standard deviation and linearity of dose response is also assumed. The optimum spacing and number of points necessary for the minimal error can be estimated, as can the likely error in the resulting estimate. When low doses are being estimated for tooth enamel samples the optimal spacing is shown to be a concentration of points near the zero dose value with fewer spectra taken at a single high dose value within the range of known linearity. Optimization of the analytical process results in increased accuracy and sample throughput

  4. A mathematical approach to optimal selection of dose values in the additive dose method of EPR dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Hayes, R.B.; Haskell, E.H.; Kenner, G.H. [Utah Univ., Salt Lake City, UT (United States)

    1996-01-01

    Additive dose methods commonly used in electron paramagnetic resonance (EPR) dosimetry are time consuming and labor intensive. We have developed a mathematical approach for determining optimal spacing of applied doses and the number of spectra which should be taken at each dose level. Expected uncertainties in the data points are assumed to be normally distributed with a fixed standard deviation and linearity of dose response is also assumed. The optimum spacing and number of points necessary for the minimal error can be estimated, as can the likely error in the resulting estimate. When low doses are being estimated for tooth enamel samples the optimal spacing is shown to be a concentration of points near the zero dose value with fewer spectra taken at a single high dose value within the range of known linearity. Optimization of the analytical process results in increased accuracy and sample throughput.
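The core of the additive dose method — fitting a straight line to signal versus added dose and extrapolating back to zero signal — can be sketched as below. The optimal design described in the abstract only dictates where the `added_doses` are placed (replicates at zero plus a single high dose); the numbers in the usage note are hypothetical:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = intercept + slope * x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

def additive_dose_estimate(added_doses, signals):
    """Extrapolate the dose-response line back to zero signal; the
    magnitude of the x-intercept estimates the accrued (unknown) dose."""
    slope, intercept = fit_line(added_doses, signals)
    return intercept / slope
```

For instance, three replicate spectra at zero added dose and one at 10 Gy, with signals 10, 10, 10 and 30 (arbitrary units), yield a slope of 2 and an intercept of 10, i.e. an estimated accrued dose of 5 Gy.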

  5. Computer–based method of bite mark analysis: A benchmark in forensic dentistry?

    Science.gov (United States)

    Pallam, Nandita Kottieth; Boaz, Karen; Natrajan, Srikant; Raj, Minu; Manaktala, Nidhi; Lewis, Amitha J.

    2016-01-01

    Aim: The study aimed to determine the technique with maximum accuracy in the production of bite mark overlays. Materials and Methods: Thirty subjects (10 males and 20 females; all aged 20–30 years) with a complete set of natural upper and lower anterior teeth were selected for this study after obtaining approval from the Institutional Ethical Committee. Upper and lower alginate impressions were taken and die stone models were obtained from each impression; overlays were produced from the biting surfaces of the six upper and six lower anterior teeth by hand tracing from study casts, hand tracing from wax impressions of the bite surface, the radiopaque wax impression method, and the xerographic method. These were compared with the original overlay produced digitally. Results: The xerographic method was the most accurate of the four techniques, with the highest reproducibility for bite mark analysis. The wax impression methods were better for producing overlays of teeth lying away from the occlusal plane. Conclusions: Various techniques are used in bite mark analysis and the choice of technique depends largely on personal preference. No single technique has been shown to be better than the others and very little research has been carried out to compare different methods. This study evaluated the accuracy of direct comparisons between suspects' models and bite marks against indirect comparisons in the form of conventional traced overlays of suspects, and found the xerographic technique to be the best. PMID:27051221

  6. Benchmarking the performance of fixed-image receptor digital radiographic systems part 1: a novel method for image quality analysis.

    Science.gov (United States)

    Lee, Kam L; Ireland, Timothy A; Bernardo, Michael

    2016-06-01

    This is the first part of a two-part study in benchmarking the performance of fixed digital radiographic general X-ray systems. This paper concentrates on reporting findings related to quantitative analysis techniques used to establish comparative image quality metrics. A systematic technical comparison of the evaluated systems is presented in part two of this study. A novel quantitative image quality analysis method is presented with technical considerations addressed for peer review. The novel method was applied to seven general radiographic systems with four different makes of radiographic image receptor (12 image receptors in total). For the System Modulation Transfer Function (sMTF), the use of a grid was found to reduce veiling glare and decrease roll-off. The major contributor to sMTF degradation was found to be focal spot blurring. For the System Normalised Noise Power Spectrum (sNNPS), it was found that all systems examined had similar sNNPS responses. A mathematical model is presented to explain how the use of a stationary grid may cause a difference between the horizontal and vertical sNNPS responses.
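One common estimator of a normalised noise power spectrum from a uniformly exposed (flat-field) image is sketched below. The study's exact region-of-interest handling, detrending, and ensemble averaging are not specified here, so this is only the textbook form under those simplifying assumptions:

```python
import numpy as np

def nnps(flat_image, pixel_mm):
    """Normalised noise power spectrum (NNPS) of a flat-field image:
    squared FFT magnitude of the zero-mean noise, scaled by the pixel
    area over the number of samples, divided by the mean signal squared."""
    noise = flat_image - flat_image.mean()
    ny, nx = noise.shape
    nps = np.abs(np.fft.fft2(noise)) ** 2 * (pixel_mm ** 2) / (nx * ny)
    return nps / flat_image.mean() ** 2
```

A perfectly uniform image has zero noise and hence a zero NNPS at all spatial frequencies; in practice many flat-field realisations are averaged to reduce the variance of the estimate.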

  7. Computer–based method of bite mark analysis: A benchmark in forensic dentistry?

    OpenAIRE

    Nandita Kottieth Pallam; Karen Boaz; Srikant Natrajan; Minu Raj; Nidhi Manaktala; Lewis, Amitha J

    2016-01-01

    Aim: The study aimed to determine the technique with maximum accuracy in production of bite mark overlay. Materials and Methods: Thirty subjects (10 males and 20 females; all aged 20–30 years) with complete set of natural upper and lower anterior teeth were selected for this study after obtaining approval from the Institutional Ethical Committee. The upper and lower alginate impressions were taken and die stone models were obtained from each impression; overlays were produced from the biting ...

  8. Benchmarking the invariant embedding method against analytical solutions in model transport problems

    Directory of Open Access Journals (Sweden)

    Wahlberg Malin

    2006-01-01

    Full Text Available The purpose of this paper is to demonstrate the use of the invariant embedding method in a few model transport problems for which it is also possible to obtain an analytical solution. The use of the method is demonstrated in three different areas. The first is the calculation of the energy spectrum of sputtered particles from a scattering medium without absorption, where the multiplication (particle cascade) is generated by recoil production. Both constant and energy dependent cross-sections with a power law dependence were treated. The second application concerns the calculation of the path length distribution of reflected particles from a medium without multiplication. This is a relatively novel application, since the embedding equations do not resolve the depth variable. The third application concerns the demonstration that solutions in an infinite medium and in a half-space are interrelated through embedding-like integral equations, by the solution of which the flux reflected from a half-space can be reconstructed from solutions in an infinite medium or vice versa. In all cases, the invariant embedding method proved to be robust, fast, and monotonically converging to the exact solutions.

  9. Absorbed dose determination in photon fields using the tandem method

    International Nuclear Information System (INIS)

    The purpose of this work is to develop an alternative method to determine the absorbed dose and effective energy of photons with unknown spectral distributions. It employs a 'tandem' system consisting of two thermoluminescent dosemeters with different energy dependence. LiF:Mg,Ti and CaF2:Dy thermoluminescent dosemeters and a Harshaw 3500 reading system are used. The dosemeters are characterized with 90Sr-90Y, calibrated with the energy of 60Co, and irradiated with seven different qualities of x-ray beams, as suggested by ANSI No. 13 and ISO 4037. The responses of each type of dosemeter are fitted to a function that depends on the effective energy of the photons. The fit is carried out by means of the Rosenbrock minimization algorithm. The mathematical model used for this function includes five parameters and comprises a Gaussian plus a straight line. Results show that the analytical functions reproduce the experimental response data with a margin of error of less than 5%. The ratio of the responses of the CaF2:Dy and LiF:Mg,Ti dosemeters as a function of radiation energy allows us to establish the effective energy of the photons and the absorbed dose, with margins of error of less than 10% and 20%, respectively.
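The final step of such a tandem system — inverting the ratio of the two dosemeter responses to recover the effective energy — can be sketched as follows. The calibration curve here is a made-up monotone function standing in for the fitted five-parameter model, and the bisection bounds are assumptions:

```python
def response_ratio(energy_kev):
    # Hypothetical monotone calibration curve for the CaF2:Dy / LiF
    # response ratio; the real curve is fitted to measured data.
    return 1.0 + 60.0 / energy_kev

def effective_energy(measured_ratio, lo=10.0, hi=1000.0, tol=1e-6):
    """Invert the calibration curve by bisection.  The assumed curve
    decreases with energy, so a ratio that is too high means the
    trial energy is too low."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if response_ratio(mid) > measured_ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Once the effective energy is known, the energy-dependent calibration factor of either dosemeter can be applied to obtain the absorbed dose.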

  10. A method of transferring G.T.S. benchmark value to survey area using electronic total station

    Digital Repository Service at National Institute of Oceanography (India)

    Ganesan, P.

    A G.T.S. (Great Trigonometrical Survey) benchmark is a permanently fixed reference survey station (or point), having known elevation with respect to a standard datum (mean sea level). These are established all over India by Survey of India...

  11. A track length estimator method for dose calculations in low-energy X-ray irradiations. Implementation, properties and performance

    Energy Technology Data Exchange (ETDEWEB)

    Baldacci, F.; Delaire, F.; Letang, J.M.; Sarrut, D.; Smekens, F.; Freud, N. [Lyon-1 Univ. - CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Centre Leon Berard (France); Mittone, A.; Coan, P. [LMU Munich (Germany). Dept. of Physics; LMU Munich (Germany). Faculty of Medicine; Bravin, A.; Ferrero, C. [European Synchrotron Radiation Facility, Grenoble (France); Gasilov, S. [LMU Munich (Germany). Dept. of Physics

    2015-05-01

    The track length estimator (TLE) method, an 'on-the-fly' fluence tally in Monte Carlo (MC) simulations, recently implemented in GATE 6.2, is known as a powerful tool to accelerate dose calculations in the domain of low-energy X-ray irradiations using the kerma approximation. Overall efficiency gains of the TLE with respect to analogous MC were reported in the literature for regions of interest in various applications (photon beam radiation therapy, X-ray imaging). The behaviour of the TLE method in terms of statistical properties, dose deposition patterns, and computational efficiency compared to analogous MC simulations was investigated. The statistical properties of the dose deposition were first assessed. Derivations of the variance reduction factor of TLE versus analogous MC were carried out, starting from the expression of the dose estimate variance in the TLE and analogous MC schemes. Two test cases were chosen to benchmark the TLE performance in comparison with analogous MC: (i) a small animal irradiation under stereotactic synchrotron radiation therapy conditions and (ii) the irradiation of a human pelvis during a cone beam computed tomography acquisition. Dose distribution patterns and efficiency gain maps were analysed. The efficiency gain exhibits strong variations within a given irradiation case, depending on the geometrical (voxel size, ballistics) and physical (material and beam properties) parameters on the voxel scale. Typical values lie between 10 and 10³, with lower levels in dense regions (bone) outside the irradiated channels (scattered dose only), and higher levels in soft tissues directly exposed to the beams.

  12. A track length estimator method for dose calculations in low-energy X-ray irradiations: implementation, properties and performance.

    Science.gov (United States)

    Baldacci, F; Mittone, A; Bravin, A; Coan, P; Delaire, F; Ferrero, C; Gasilov, S; Létang, J M; Sarrut, D; Smekens, F; Freud, N

    2015-03-01

    The track length estimator (TLE) method, an "on-the-fly" fluence tally in Monte Carlo (MC) simulations, recently implemented in GATE 6.2, is known as a powerful tool to accelerate dose calculations in the domain of low-energy X-ray irradiations using the kerma approximation. Overall efficiency gains of the TLE with respect to analogous MC were reported in the literature for regions of interest in various applications (photon beam radiation therapy, X-ray imaging). The behaviour of the TLE method in terms of statistical properties, dose deposition patterns, and computational efficiency compared to analogous MC simulations was investigated. The statistical properties of the dose deposition were first assessed. Derivations of the variance reduction factor of TLE versus analogous MC were carried out, starting from the expression of the dose estimate variance in the TLE and analogous MC schemes. Two test cases were chosen to benchmark the TLE performance in comparison with analogous MC: (i) a small animal irradiation under stereotactic synchrotron radiation therapy conditions and (ii) the irradiation of a human pelvis during a cone beam computed tomography acquisition. Dose distribution patterns and efficiency gain maps were analysed. The efficiency gain exhibits strong variations within a given irradiation case, depending on the geometrical (voxel size, ballistics) and physical (material and beam properties) parameters on the voxel scale. Typical values lie between 10 and 10³, with lower levels in dense regions (bone) outside the irradiated channels (scattered dose only), and higher levels in soft tissues directly exposed to the beams. PMID:24973309
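The kerma-approximation tally underlying a track length estimator can be sketched in a few lines. Voxel geometry, units, and the mass energy-absorption coefficient lookup are all simplified assumptions here; a real implementation such as GATE's tracks photons through a voxelised phantom:

```python
def tle_dose(segments, mu_en_rho, voxel_volume_cm3):
    """Track-length-estimator tally under the kerma approximation.

    Each segment is (voxel_id, track_length_cm, energy_MeV).  A photon's
    fluence contribution to a voxel is its track length divided by the
    voxel volume; multiplying by the mass energy-absorption coefficient
    (cm^2/g) and the photon energy gives collision kerma, taken here as
    the dose in MeV/g per source photon."""
    dose = {}
    for voxel, length, energy in segments:
        fluence = length / voxel_volume_cm3
        dose[voxel] = dose.get(voxel, 0.0) + fluence * mu_en_rho(energy) * energy
    return dose
```

Because every traversing photon scores (not only those that interact in the voxel), the variance per history is much lower than in an analogous MC energy-deposition tally, which is the source of the efficiency gains reported above.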

  13. Benchmarking for plant maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Komonen, K.; Ahonen, T.; Kunttu, S. (VTT Technical Research Centre of Finland, Espoo (Finland))

    2010-05-15

    The product of the project, e-Famemain, is a new kind of benchmarking tool based on many years of research within Finnish industry. It helps to evaluate plants' performance in operations and maintenance by making industrial plants comparable with the aid of statistical methods. The system is updated continually and automatically. When data are entered into the system, it automatically carries out multivariate statistical analysis and many other statistical operations. Many studies within Finnish industry during the last ten years have revealed clear causalities between various performance indicators. In addition, these causalities should be taken into account when utilising benchmarking or forecasting indicator values, e.g. for new investments. The benchmarking system consists of five sections: a data input section, a positioning section, a locating-differences section, a best practices and planning section, and finally statistical tables. (orig.)

  14. Results of a survey on accident and safety analysis codes, benchmarks, verification and validation methods

    International Nuclear Information System (INIS)

    This report is a compilation of the information submitted by AECL, CIAE, JAERI, ORNL and Siemens in response to a need identified at the 'Workshop on R and D Needs' at the IGORR-3 meeting. The survey compiled information on the national standards applied to the Safety Quality Assurance (SQA) programs undertaken by the participants. Information was assembled for the computer codes and nuclear data libraries used in accident and safety analyses for research reactors and the methods used to verify and validate the codes and libraries. Although the survey was not comprehensive, it provides a basis for exchanging information of common interest to the research reactor community

  15. Proton dose distribution measurements using a MOSFET detector with a simple dose-weighted correction method for LET effects.

    Science.gov (United States)

    Kohno, Ryosuke; Hotta, Kenji; Matsuura, Taeko; Matsubara, Kana; Nishioka, Shie; Nishio, Teiji; Kawashima, Mitsuhiko; Ogino, Takashi

    2011-01-01

    We experimentally evaluated the proton beam dose reproducibility, sensitivity, angular dependence and depth-dose relationships for a new Metal Oxide Semiconductor Field Effect Transistor (MOSFET) detector. The detector was fabricated with a thinner oxide layer and was operated at high-bias voltages. In order to accurately measure dose distributions, we developed a practical method for correcting the MOSFET response to proton beams. The detector was tested by examining lateral dose profiles formed by protons passing through an L-shaped bolus. The dose reproducibility, angular dependence and depth-dose response were evaluated using a 190 MeV proton beam. Depth-output curves produced using the MOSFET detectors were compared with results obtained using an ionization chamber (IC). Since accurate measurements of proton dose distribution require correction for LET effects, we developed a simple dose-weighted correction method. The correction factors were determined as a function of proton penetration depth, or residual range. The residual proton range at each measurement point was calculated using the pencil beam algorithm. Lateral measurements in a phantom were obtained for pristine and SOBP beams. The reproducibility of the MOSFET detector was within 2%, and the angular dependence was less than 9%. The detector exhibited a good response at the Bragg peak (0.74 relative to the IC detector). For dose distributions resulting from protons passing through an L-shaped bolus, the corrected MOSFET dose agreed well with the IC results. Absolute proton dosimetry can be performed using MOSFET detectors to a precision of about 3% (1 sigma). A thinner oxide layer thickness improved the LET in proton dosimetry. By employing correction methods for LET dependence, it is possible to measure absolute proton dose using MOSFET detectors. PMID:21587191
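The dose-weighted LET correction described above amounts to multiplying each raw MOSFET reading by a factor looked up from the residual proton range at the measurement point (which the authors compute with a pencil beam algorithm). A sketch with a hypothetical correction table:

```python
def interp(x, table):
    """Piecewise-linear interpolation in a sorted (x, y) table,
    clamped at the ends."""
    if x <= table[0][0]:
        return table[0][1]
    if x >= table[-1][0]:
        return table[-1][1]
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

# Hypothetical correction-factor table indexed by residual proton range
# (cm): near the Bragg peak (small residual range, high LET) the raw
# MOSFET reading under-responds, so the factor rises.
CORRECTION = [(0.0, 1.35), (1.0, 1.10), (5.0, 1.02), (20.0, 1.00)]

def corrected_dose(raw_reading, residual_range_cm):
    return raw_reading * interp(residual_range_cm, CORRECTION)
```

The table values here are invented for illustration; in practice they would be measured against an ionization chamber, as in the study.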

  16. Benchmark problem proposal

    International Nuclear Information System (INIS)

    The meeting of the Radiation Energy Spectra Unfolding Workshop organized by the Radiation Shielding Information Center is discussed. The plans of the unfolding code benchmarking effort to establish methods of standardization for both the few channel neutron and many channel gamma-ray and neutron spectroscopy problems are presented

  17. Statistical methods used for code-to-code comparisons in the OECD/NRC PWR MSLB benchmark

    International Nuclear Information System (INIS)

    The ongoing pressurized water reactor (PWR) main steam line break (MSLB) benchmark problem, sponsored by the Office for Economic Cooperation and Development (OECD), the United States Nuclear Regulatory Commission (US NRC), and the Pennsylvania State University (PSU) consists of three exercises, whose combined purpose is to verify the capability of system codes to analyze complex transients with coupled core/plant interactions; to test fully the 3D neutronics/thermal-hydraulic coupling; and to evaluate discrepancies between the predictions of coupled codes in best-estimate transient simulations. Exercise two is intended to test core response to imposed system thermal-hydraulic conditions. For this exercise, the participants are provided with transient boundary conditions and two cross-section libraries. Results are submitted for six steady-state cases and two transient scenarios. The boundary conditions, the details for each case, and the output requested are described in the final specifications for the benchmark problem. To fully analyze the data for comparison in the final report, a suite of statistical methods has been developed, to serve as a reference in the absence of experimental data. A corrected arithmetical mean and standard deviation are calculated for all data types: single-value parameters, 1D axial distributions, 2D radial distributions, and time histories. Each participant's deviation from the mean and a corresponding figure-of-merit are reported for the purposes of comparison and discussion. Selected mean values and standard deviations are presented in this paper for several parameters at specific points of interest: for the initial steady-state 2, at hot-full power, radial and axial power distributions are presented, along with effective multiplication factor, power peaking factors, and axial offset. For the snapshot taken at the time of highest return-to-power in transient Scenario 2, parameters presented include axial and radial power
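In the absence of experimental data, the reference statistics reduce to a mean, a standard deviation, and each participant's deviation expressed as a figure of merit. A minimal sketch (using a plain sample mean rather than the benchmark's "corrected" mean, whose outlier treatment is not detailed here):

```python
import math

def benchmark_stats(values):
    """Mean, sample standard deviation, and per-participant figure of
    merit (|deviation from the mean| in units of the standard
    deviation) for one benchmark quantity."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    std = math.sqrt(var)
    fom = [abs(v - mean) / std for v in values]
    return mean, std, fom
```

The same reduction is applied element-wise to 1D axial distributions, 2D radial maps, and time histories, with each participant's figure of merit reported alongside the ensemble mean.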

  18. Finite Element Method Modeling of Sensible Heat Thermal Energy Storage with Innovative Concretes and Comparative Analysis with Literature Benchmarks

    Directory of Open Access Journals (Sweden)

    Claudio Ferone

    2014-08-01

    Full Text Available Efficient systems for high performance buildings are required to improve the integration of renewable energy sources and to reduce primary energy consumption from fossil fuels. This paper is focused on sensible heat thermal energy storage (SHTES) systems using solid media and numerical simulation of their transient behavior using the finite element method (FEM). Unlike other papers in the literature, the numerical model and simulation approach has simultaneously taken into consideration various aspects: thermal properties at high temperature, the actual geometry of the repeated storage element and the actual storage cycle adopted. High-performance thermal storage materials from the literature have been tested and used here as reference benchmarks. Other materials tested are lightweight concretes with recycled aggregates and a geopolymer concrete. Their thermal properties have been measured and used as inputs in the numerical model to preliminarily evaluate their application in thermal storage. The analysis carried out can also be used to optimize the storage system, in terms of the thermal properties required of the storage material. The results showed a significant influence of the thermal properties on the performances of the storage elements. Simulation results have provided information for further scale-up from a single differential storage element to the entire module as a function of material thermal properties.
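The transient conduction problem at the core of such a storage simulation can be illustrated with a one-dimensional explicit finite-difference step — a much cruder stand-in for the paper's FEM model, with fixed-temperature boundaries assumed:

```python
def heat_step(T, alpha, dx, dt):
    """One explicit time step of the 1-D heat equation
    dT/dt = alpha * d2T/dx2 on a uniform grid, with the two boundary
    temperatures held fixed (Dirichlet conditions)."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme is unstable for r > 1/2"
    new = T[:]
    for i in range(1, len(T) - 1):
        new[i] = T[i] + r * (T[i + 1] - 2.0 * T[i] + T[i - 1])
    return new
```

A linear temperature profile is a steady state of pure conduction, so it passes through the step unchanged; charging and discharging cycles would be modelled by time-varying boundary conditions and temperature-dependent material properties, as in the paper.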

  19. Technical Review of SRS Dose Reconstruction Methods Used By CDC

    Energy Technology Data Exchange (ETDEWEB)

    Simpkins, Ali, A

    2005-07-20

    At the request of the Centers for Disease Control and Prevention (CDC), a subcontractor, Advanced Technologies and Laboratories International, Inc. (ATL), issued a draft report estimating offsite dose as a result of Savannah River Site (SRS) operations for the period 1954-1992 in support of Phase III of the SRS Dose Reconstruction Project. The doses reported by ATL differed from those previously estimated by SRS dose modelers for a variety of reasons, but primarily because (1) ATL used different source terms, (2) ATL considered trespasser/poacher scenarios, and (3) ATL did not consistently use site-specific parameters or correct usage parameters. For the receptors with the highest doses from the atmospheric and liquid pathways, the values were within about a factor of four of those previously reported by SRS. A complete set of technical comments has also been included.

  20. Dose conversion factors for radiation doses at normal operation discharges. F. Methods report

    International Nuclear Information System (INIS)

    A study has been performed in order to develop and extend existing models for dose estimation for emissions of radioactive substances from nuclear facilities in Sweden. This report gives a review of the different exposure pathways that have been considered in the study. The radioecological data used in the dose calculations are based on the actual situation at the nuclear sites. Dose factors for children have been split into different age groups. The exposure pathways, like the radioecological data, have been carefully re-examined, leading to some new pathways (e.g. doses from consumption of forest berries, mushrooms and game) for cesium and strontium. Carbon-14 was given special treatment by using a model for the uptake of carbon by growing plants. For exposure from aquatic emissions, a simplification was made by focussing on the territory of fish species, since consumption of fish is the most important pathway.

  1. Manual method for dose calculation in gynecologic brachytherapy; Metodo manual para o calculo de doses em braquiterapia ginecologica

    Energy Technology Data Exchange (ETDEWEB)

    Vianello, Elizabeth A.; Almeida, Carlos E. de [Instituto Nacional do Cancer, Rio de Janeiro, RJ (Brazil); Biaggio, Maria F. de [Universidade do Estado, Rio de Janeiro, RJ (Brazil)

    1998-09-01

    This paper describes a manual method for dose calculation in brachytherapy of gynecological tumors, which allows the calculation of doses at any plane or point of clinical interest. The method uses basic principles of vector algebra and the orthogonal simulation films taken of the patient with the applicators and dummy sources in place. The results obtained with the method were compared with the values calculated with the treatment planning system model Theraplan, and the agreement was better than 5% in most cases. The critical points associated with the final accuracy of the proposed method are related to the quality of the image and the appropriate selection of the magnification factors. This method is strongly recommended for radiation oncology centers where no treatment planning systems are available and dose calculations are done manually. (author) 10 refs., 5 figs.
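The vector-algebra step of such a manual method — recovering a 3-D source position from two orthogonal films and applying the inverse-square law — can be sketched as below. The film coordinate conventions, the shared magnification per film, and the point-source approximation are all assumptions of this sketch:

```python
def source_position(ap_xy, lat_xy, mag_ap, mag_lat):
    """Reconstruct a 3-D source position from two orthogonal films.

    The anterior-posterior film is assumed to project (x, z) and the
    lateral film (y, z), each scaled by its own magnification factor;
    z is averaged between the two views."""
    x = ap_xy[0] / mag_ap
    y = lat_xy[0] / mag_lat
    z = 0.5 * (ap_xy[1] / mag_ap + lat_xy[1] / mag_lat)
    return (x, y, z)

def point_dose_rate(source, point, ref_rate_at_1cm):
    """Inverse-square dose rate at `point` from a point source;
    anisotropy and tissue attenuation are ignored in this sketch."""
    d2 = sum((s - p) ** 2 for s, p in zip(source, point))
    return ref_rate_at_1cm / d2
```

Summing `point_dose_rate` over all reconstructed dummy-source positions gives the dose rate at any point of clinical interest.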

  2. Absorbed dose determination in photon fields using the tandem method

    CERN Document Server

    Marques-Pachas, J F

    1999-01-01

    The purpose of this work is to develop an alternative method to determine the absorbed dose and effective energy of photons with unknown spectral distributions. It includes a 'tandem' system that consists of two thermoluminescent dosemeters with different energetic dependence. LiF: Mg, Ti, CaF sub 2 : Dy thermoluminescent dosemeters and a Harshaw 3500 reading system are employed. Dosemeters are characterized with sup 9 sup 0 Sr- sup 9 sup 0 Y, calibrated with the energy of sup 6 sup 0 Co and irradiated with seven different qualities of x-ray beams, suggested by ANSI No. 13 and ISO 4037. The answers of each type of dosemeter are adjusted to a function that depends on the effective energy of photons. The adjustment is carried out by means of the Rosenbrock minimization algorithm. The mathematical model used for this function includes five parameters and has a gauss and a straight line. Results show that the analytical functions reproduce the experimental data of the answers, with a margin of error of less than ...

  3. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means revealed performance (how well the firm performs in the actual market environment) given the basic characteristics of the firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality, or work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for management to continuously improve the firm's efficiency and effectiveness, and the need for managers to know the success factors and competitiveness determinants, consequently determine what performance measures are most critical in assessing the firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent due to operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other types of profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.

  4. Estimation of benchmark dose as the threshold levels of urinary cadmium, based on excretion of total protein, β2-microglobulin, and N-acetyl-β-D-glucosaminidase in cadmium nonpolluted regions in Japan

    International Nuclear Information System (INIS)

    Previously, we investigated the association between urinary cadmium (Cd) concentration and indicators of renal dysfunction, including total protein, β2-microglobulin (β2-MG), and N-acetyl-β-D-glucosaminidase (NAG). In 2778 inhabitants ≥50 years of age (1114 men, 1664 women) in three different Cd-nonpolluted areas in Japan, we showed that a dose-response relationship existed between renal effects and Cd exposure in the general environment without any known Cd pollution. However, we could not estimate the threshold levels of urinary Cd at that time. In the present study, we estimated the threshold levels of urinary Cd as the benchmark dose lower confidence limit (BMDL) using the benchmark dose (BMD) approach. Urinary Cd excretion was divided into 10 categories, and an abnormality rate was calculated for each. Cut-off values for the urinary substances were defined as the 84% and 95% upper limit values of the nonsmoking target population. We then calculated the BMD and BMDL using a log-logistic model. The BMD and BMDL values could be calculated for all urinary substances. The BMDL for the 84% cut-off value of β2-MG, with the abnormality rate set at 5%, was 2.4 μg/g creatinine (cr) in men and 3.3 μg/g cr in women. In conclusion, the present study demonstrated that the threshold level of urinary Cd could be estimated in people living in the general environment without any known Cd pollution in Japan, and the value was inferred to be almost the same as those in Belgium, Sweden, and China.
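For the log-logistic model named above, the BMD has a closed form once the model is fitted: setting the extra risk over background equal to the benchmark response (BMR) and solving for dose. The sketch below uses hypothetical parameter values, not the study's fit; the BMDL additionally requires a confidence-limit calculation (e.g. profile likelihood), which is not shown.

```python
import math

def log_logistic(d, gamma, a, b):
    """P(abnormal | dose d) = gamma + (1 - gamma) / (1 + exp(-(a + b*ln d)))."""
    return gamma + (1.0 - gamma) / (1.0 + math.exp(-(a + b * math.log(d))))

def benchmark_dose(gamma, a, b, bmr=0.05):
    """Dose where the extra risk (P(d) - P(0)) / (1 - P(0)) equals the BMR.
    Solving the model gives ln(BMD) = (logit(BMR) - a) / b."""
    return math.exp((math.log(bmr / (1.0 - bmr)) - a) / b)

# Hypothetical fitted parameters, for illustration only
# (dose axis: urinary Cd in ug/g creatinine):
gamma, a, b = 0.03, -5.0, 2.0
bmd = benchmark_dose(gamma, a, b, bmr=0.05)   # ~2.8 ug/g cr with these values
```

By construction, evaluating the fitted curve at the returned dose gives exactly the requested extra risk.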

  5. Intercomparison of the finite difference and nodal discrete ordinates and surface flux transport methods for a LWR pool-reactor benchmark problem in X-Y geometry

    International Nuclear Information System (INIS)

    The aim of the present work is to compare and discuss three of the most advanced two-dimensional transport methods, the finite difference and nodal discrete ordinates methods and the surface flux method, as incorporated into the transport codes TWODANT, TWOTRAN-NODAL, MULTIMEDIUM and SURCU. For the intercomparison, the eigenvalue and the neutron flux distribution are calculated using these codes for the LWR pool-reactor benchmark problem. Additionally, the results are compared with results obtained by the French collision probability transport codes MARSYAS and TRIDENT. Because the transport solution of this benchmark problem is close to its diffusion solution, some results obtained by the finite element diffusion code FINELM and the finite difference diffusion code DIFF-2D are included

  6. Benchmark calculations of power distribution within fuel assemblies. Phase 2: comparison of data reduction and power reconstruction methods in production codes

    International Nuclear Information System (INIS)

    Systems loaded with plutonium in the form of mixed-oxide (MOX) fuel show somewhat different neutronic characteristics compared with those using conventional uranium fuels. In order to maintain adequate safety standards, it is essential to accurately predict the characteristics of MOX-fuelled systems and to further validate both the nuclear data and the computation methods used. A computation benchmark on power distribution within fuel assemblies to compare different techniques used in production codes for fine flux prediction in systems partially loaded with MOX fuel was carried out at an international level. It addressed first the numerical schemes for pin power reconstruction, then investigated the global performance including cross-section data reduction methods. This report provides the detailed results of this second phase of the benchmark. The analysis of the results revealed that basic data still need to be improved, primarily for higher plutonium isotopes and minor actinides. (author)

  7. Practical methods of dose reduction to the bladder wall

    International Nuclear Information System (INIS)

    The radiation dose to the bladder wall following the administration of radionuclides to patients can be reduced by a factor between 25 percent and 75 percent when the effective half-life for the radioactivity entering the urine is two hours or less. A significant but smaller reduction in dose to the gonads may also be achieved in situations where the major fraction of the administered activity is rapidly excreted in the urine. This reduction in dose is achieved by ensuring that the patient has between 50 and 150 ml of urine in his bladder when the radioactivity is injected, and is encouraged to void between one and two hours after the activity has been administered. The interrelationship of voiding schedule, effective half-life, initial urine volume, and demand urination has been analyzed in these studies. In addition, the significance of the rate of urine production and volume of urine in the bladder on the radiation dose to the bladder is demonstrated
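The interplay of voiding schedule and effective half-life analyzed above can be illustrated with a deliberately simplified bookkeeping model: a fraction of the administered activity enters the bladder with the effective half-life, each void empties the bladder completely, and the bladder-wall dose is taken as proportional to the time-integrated bladder activity. The model and all numbers are illustrative assumptions, not the paper's dosimetry (decay inside the bladder and residual urine are ignored).

```python
import math

def cumulated_bladder_activity(t_eff_h, void_times_h, f=0.5, a0=1.0,
                               t_end_h=24.0, dt=0.001):
    """Time-integrated bladder activity (activity-hours per unit administered):
    a fraction f of the administered activity a0 enters the bladder with
    effective half-life t_eff_h; each void empties the bladder completely."""
    lam = math.log(2.0) / t_eff_h
    entered = lambda t: f * a0 * (1.0 - math.exp(-lam * t))
    voids = sorted(void_times_h)
    total, last_void, t, vi = 0.0, 0.0, 0.0, 0
    while t < t_end_h:
        if vi < len(voids) and t >= voids[vi]:
            last_void = voids[vi]          # bladder emptied at this void
            vi += 1
        total += (entered(t) - entered(last_void)) * dt
        t += dt
    return total

# Early, frequent voiding versus a lazier schedule (2 h effective half-life):
early = cumulated_bladder_activity(2.0, void_times_h=[1, 2, 3, 4, 6, 12])
late  = cumulated_bladder_activity(2.0, void_times_h=[4, 8, 12])
```

Even in this crude model, voiding one to two hours after administration markedly reduces the time-integrated bladder activity, consistent with the trend the record describes.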

  8. Quantitative benchmark - Production companies

    DEFF Research Database (Denmark)

    Sørensen, Ole H.; Andersen, Vibeke

    Report presenting the results of the quantitative benchmark of the production companies in the VIPS project.

  9. The PRISM Benchmark Suite

    OpenAIRE

    Kwiatkowska, Marta; Norman, Gethin; Parker, David

    2012-01-01

    We present the PRISM benchmark suite: a collection of probabilistic models and property specifications, designed to facilitate testing, benchmarking and comparisons of probabilistic verification tools and implementations.

  10. Digital Breast Tomosynthesis: Comparison of Different Methods to Calculate Patient Doses

    International Nuclear Information System (INIS)

    Different methods have been proposed in the literature to calculate the dose to the patient's breast in 3-D mammography. The methods described by Dance et al. and by Sechopoulos et al. have been compared in this study using the two tomosynthesis systems available in the authors' hospitals (Siemens and Hologic). There is a significant difference of 23% for the first X-ray system and 13% for the second between dose calculations performed with Dance's method and with Sechopoulos' method. These differences are mainly due to the fact that the two sets of authors used different breast models for their Monte Carlo calculations. For each system, the calculated breast doses were compared with the dose values indicated on the system console. Good agreement was found when the method of Dance et al. was used with a breast glandularity based on the patient's age. For the Siemens system, the calculated doses were 5% lower than the indicated dose, and for the Hologic system, the calculated doses were 12% higher. Finally, the 3-D dose values were compared with the doses found in a large 2-D dosimetry study. The dose values for tomosynthesis on the Siemens system were almost double the doses of one-view 2-D digital mammography. For a typical breast of thickness 45 mm, the dose of one 2-D view was 0.83 mGy and that of one 3-D view 1.79 mGy. (author)
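Dance's formalism referred to above reduces, per view, to a product of the incident air kerma and tabulated conversion factors (with an extra T factor for tomosynthesis geometry). A hedged sketch follows; the factor values are placeholders for illustration, not values interpolated from Dance's published tables.

```python
def mean_glandular_dose(K, g, c, s, T=1.0):
    """Dance formalism: MGD = K * g * c * s (* T for tomosynthesis), where
    K is the incident air kerma at the breast surface, g converts kerma to
    glandular dose for 50% glandularity, c corrects to the actual
    glandularity, s corrects for the X-ray spectrum, and T accounts for
    the tomosynthesis scan geometry."""
    return K * g * c * s * T

# Placeholder factor values (illustration only; the real g, c, s, T are
# interpolated from the lookup tables published by Dance et al.):
mgd = mean_glandular_dose(K=7.5, g=0.21, c=1.04, s=1.042, T=0.97)
```

The comparison in the record then amounts to evaluating this product with each group's factor set (derived from different Monte Carlo breast models) for the same exposure.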

  11. Size-specific dose estimate (SSDE) provides a simple method to calculate organ dose for pediatric CT examinations

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Bria M.; Brady, Samuel L., E-mail: samuel.brady@stjude.org; Kaufman, Robert A. [Department of Radiological Sciences, St Jude Children's Research Hospital, Memphis, Tennessee 38105 (United States); Mirro, Amy E. [Department of Biomedical Engineering, Washington University, St Louis, Missouri 63130 (United States)

    2014-07-15

    Purpose: To investigate the correlation of size-specific dose estimate (SSDE) with absorbed organ dose, and to develop a simple methodology for estimating patient organ dose in a pediatric population (5–55 kg). Methods: Four physical anthropomorphic phantoms representing a range of pediatric body habitus were scanned with metal oxide semiconductor field effect transistor (MOSFET) dosimeters placed at 23 organ locations to determine absolute organ dose. Phantom absolute organ dose was divided by phantom SSDE to determine the correlation between organ dose and SSDE. Organ dose correlation factors (CF_SSDE^organ) were then multiplied by patient-specific SSDE to estimate patient organ dose. The CF_SSDE^organ were used to retrospectively estimate individual organ doses from 352 chest and 241 abdominopelvic pediatric CT examinations, where mean patient weight was 22 kg ± 15 (range 5–55 kg) and mean patient age was 6 yr ± 5 (range 4 months to 23 yr). Patient organ dose estimates were compared to published pediatric Monte Carlo study results. Results: Phantom effective diameters were matched with patient population effective diameters to within 4 cm, showing appropriate scalability of the phantoms across the entire pediatric population in this study. Individual CF_SSDE^organ were determined for a total of 23 organs in the chest and abdominopelvic region across nine weight subcategories. For organs fully covered by the scan volume, correlation in the chest (average 1.1; range 0.7–1.4) and abdominopelvic region (average 0.9; range 0.7–1.3) was near unity. For organs/tissues that extended beyond the scan volume (i.e., skin, bone marrow, and bone surface), correlation was determined to be poor (average 0.3; range 0.1–0.4) for both the chest and abdominopelvic regions. A means to estimate patient organ dose was demonstrated. Calculated patient organ dose, using patient SSDE and CF_SSDE^organ, was compared to
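The pipeline this record describes reduces to two multiplications: SSDE = f(effective diameter) × CTDIvol, then organ dose ≈ CF_organ × SSDE. A minimal sketch is below; the exponential coefficients for f are the approximate 32 cm phantom fit from AAPM Report 204, and the organ correlation factor is a hypothetical value, not one of the study's fitted CF_SSDE^organ.

```python
import math

def ssde_from_ctdivol(ctdivol_mgy, eff_diam_cm):
    """SSDE = f(d) * CTDIvol; f(d) from the exponential fit to the 32 cm
    phantom data in AAPM Report 204 (approximate coefficients)."""
    f = 3.704369 * math.exp(-0.03671937 * eff_diam_cm)
    return f * ctdivol_mgy

def organ_dose(ctdivol_mgy, eff_diam_cm, cf_organ):
    """Organ dose estimate = CF_organ * SSDE, the correlation-factor method
    the record describes; cf_organ here is a hypothetical value."""
    return cf_organ * ssde_from_ctdivol(ctdivol_mgy, eff_diam_cm)

# Example: small pediatric patient (effective diameter 15 cm), CTDIvol 4 mGy,
# and a near-unity correlation factor as found for fully covered organs:
dose = organ_dose(ctdivol_mgy=4.0, eff_diam_cm=15.0, cf_organ=1.1)
```

Note the correlation-factor approach breaks down for organs extending beyond the scan volume, where the study found CF values far from unity.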

  12. CALIBRATION METHODS OF A CONSTITUTIVE MODEL FOR PARTIALLY SATURATED SOILS: A BENCHMARKING EXERCISE WITHIN THE MUSE NETWORK

    OpenAIRE

    D'Onza, Francesca

    2008-01-01

    The paper presents a benchmarking exercise comparing different procedures, adopted by seven different teams of constitutive modellers, for the determination of parameter values in the Barcelona Basic Model, which is an elasto-plastic model for unsaturated soils. Each team is asked to determine a set of parameter values based on the same laboratory test data. The different set of parameters are then employed to simulate soil behaviour along a variety of stress paths. The results are finally co...

  13. Benchmark study on fine-mode aerosol in a big urban area and relevant doses deposited in the human respiratory tract.

    Science.gov (United States)

    Avino, Pasquale; Protano, Carmela; Vitali, Matteo; Manigrasso, Maurizio

    2016-09-01

    It is well known that the health effects of PM increase as particle size decreases: in particular, great concern has arisen over the role of ultrafine particles (UFPs). Starting from the knowledge that the main fraction of atmospheric aerosol in Rome is characterized by significant levels of PM2.5 (almost 75% of the PM10 fraction is PM2.5), the paper focuses on submicron particles in this large urban area. The daytime/nighttime, workday/weekend and cold/hot seasonal trends of submicron particles are investigated and discussed along with NOx and total PAH trends, demonstrating the primary origin of UFPs from combustion processes. Furthermore, starting from these data, the total dose of submicron particles deposited in the respiratory system (i.e., the head, tracheobronchial and alveolar regions in the different lung lobes) has been estimated. Dose estimates were performed with the Multiple-Path Particle Dosimetry model (MPPD v.2.1). The paper discusses the aerosol doses deposited in the respiratory system of individuals exposed in proximity to traffic. During traffic peak hours, about 6.6 × 10^10 particles are deposited in the respiratory system. This dose is almost entirely made up of UFPs. According to the greater estimated dose, the right lung lobes are expected to be more susceptible to respiratory pathologies than the left lobes. PMID:27325547
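The dose metric quoted above (a particle number deposited in the respiratory tract) is, at its core, airborne number concentration × inhaled air volume × deposition fraction; MPPD supplies the size- and region-resolved deposition fractions. A minimal sketch with invented exposure values and a single lumped deposition fraction:

```python
def deposited_particles(conc_per_cm3, minutes, deposition_fraction,
                        minute_ventilation_l=9.0):
    """Deposited particle number = airborne number concentration (cm^-3)
    x inhaled air volume x regional deposition fraction.  The single
    deposition fraction here is an illustrative lumped value; MPPD v2.1
    resolves it by particle size and airway region."""
    inhaled_cm3 = minute_ventilation_l * 1000.0 * minutes
    return conc_per_cm3 * inhaled_cm3 * deposition_fraction

# Invented example: 2e4 UFP/cm^3 for a 2 h roadside exposure, lumped
# deposition fraction ~0.3:
n_dep = deposited_particles(2.0e4, minutes=120.0, deposition_fraction=0.3)
```

With these invented inputs the estimate lands in the 10^9-10^10 range, the same order of bookkeeping behind the study's traffic-peak figure.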

  14. Experience with a new simple method for the determination of doses in computed tomography

    International Nuclear Information System (INIS)

    A previously published method for estimating patient doses in computed tomography which utilizes the concept of a centimeter section dose (CSD) and integral scatter factors (ISF's) has been extended by obtaining the CSD and ISF data from a simple series of phantom measurements. These measurements and the various stages required to arrive at the relevant CSDs and ISF data are discussed. In addition, a series of dose measurements have been performed on patients for a range of examination protocols. These measured doses at various positions within and outside the scanned area are compared with predicted doses obtained using the CSD method

  15. Benchmark Energetic Data in a Model System for Grubbs II Metathesis Catalysis and Their Use for the Development, Assessment, and Validation of Electronic Structure Methods

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Yan; Truhlar, Donald G.

    2009-01-31

    We present benchmark relative energetics in the catalytic cycle of a model system for Grubbs second-generation olefin metathesis catalysts. The benchmark data were determined by a composite approach based on CCSD(T) calculations, and they were used as a training set to develop a new spin-component-scaled MP2 method optimized for catalysis, which is called SCSC-MP2. The SCSC-MP2 method has improved performance for modeling Grubbs II olefin metathesis catalysts as compared to canonical MP2 or SCS-MP2. We also employed the benchmark data to test 17 WFT methods and 39 density functionals. Among the tested density functionals, M06 is the best performing functional. M06/TZQS gives an MUE of only 1.06 kcal/mol, and it is a much more affordable method than the SCSC-MP2 method or any other correlated WFT methods. The best performing meta-GGA is M06-L, and M06-L/DZQ gives an MUE of 1.77 kcal/mol. PBEh is the best performing hybrid GGA, with an MUE of 3.01 kcal/mol; however, it does not perform well for the larger, real Grubbs II catalyst. B3LYP and many other functionals containing the LYP correlation functional perform poorly, and B3LYP underestimates the stability of stationary points for the cis-pathway of the model system by a large margin. From the assessments, we recommend the M06, M06-L, and MPW1B95 functionals for modeling Grubbs II olefin metathesis catalysts. The local M06-L method is especially efficient for calculations on large systems.

  16. SU-E-T-280: Reconstructed Rectal Wall Dose Map-Based Verification of Rectal Dose Sparing Effect According to Rectum Definition Methods and Dose Perturbation by Air Cavity in Endo-Rectal Balloon

    International Nuclear Information System (INIS)

    Purpose: Dosimetric effects and discrepancies arising from the rectum definition method, and dose perturbation by the air cavity in an endo-rectal balloon (ERB), were verified using rectal-wall (Rwall) dose maps, considering systematic errors in dose optimization and dose calculation accuracy in intensity-modulated radiation treatment (IMRT) for prostate cancer patients. Methods: With an inflated ERB of average diameter 4.5 cm and air volume 100 cc in place, Rwall doses were predicted by the pencil-beam convolution (PBC), anisotropic analytic algorithm (AAA), and AcurosXB (AXB) with material assignment function. The errors of dose optimization and calculation introduced by separating the air cavity from the whole rectum (Rwhole) were verified against measured rectal doses. The Rwall doses affected by the dose perturbation of the air cavity were evaluated using a featured rectal phantom allowing insertion of rolled-up gafchromic films and glass rod detectors placed along the rectum perimeter. Inner and outer Rwall doses were verified with reconstructed predicted rectal-wall dose maps. Dose errors and their extent at various dose levels were evaluated with estimated rectal toxicity. Results: While AXB showed no significant difference in target dose coverage, Rwall doses were underestimated by up to 20% when dose optimization was performed for the Rwhole rather than the Rwall, over the whole dose range except the maximum dose. When dose optimization was performed for the Rwall, the Rwall doses agreed to within 3% between dose calculation algorithms, except for an overestimation of the maximum rectal dose of up to 5% with PBC. Dose optimization for the Rwhole caused dose differences in the Rwall, especially at intermediate doses. Conclusion: Dose optimization for the Rwall is suggested for more accurate prediction of the rectal wall dose and of the dose perturbation effect of the air cavity in IMRT for prostate cancer. This research was supported by the Leading Foreign Research Institute Recruitment Program through the National Research Foundation of Korea

  17. A new finite cloud method for calculating external exposure dose in a nuclear emergency

    International Nuclear Information System (INIS)

    A new finite cloud method (the 5/μ method) for calculating external exposure dose in a nuclear emergency is presented in this paper. The method calculates the external exposure dose over a specially constructed three-dimensional cylindrical space whose underside center is the location of the receptor and whose underside radius and height are both five times the mean free path of a gamma photon. The space is then divided into many grid cells over which the external exposure dose (or dose rate) is numerically integrated. The air external exposure dose rate conversion factors and air-absorbed dose rate conversion factors calculated by the 5/μ method are consistent with the values presented in the related references. Compared with the discrete point approximation (DPA) method [USNRC, The MESORAD Dose Assessment Model. NUREG/CR-4000 Vol. 1, 1986] and the Nomogram method [USNRC, Nomogram for Evaluation of Doses from Finite Noble Gas Clouds, NUREG-0851, 1983], the two traditional finite cloud methods for calculating external exposure dose, the 5/μ method has the distinct advantage of much faster calculation, which is very important in a nuclear emergency. Moreover, the 5/μ method can be applied together with three-dimensional atmospheric dispersion models

  18. A new finite cloud method for calculating external exposure dose in a nuclear emergency

    Energy Technology Data Exchange (ETDEWEB)

    Wang, X.Y.; Ling, Y.S. E-mail: lingyongsheng00@mails.tsinghua.edu.cn; Shi, Z.Q

    2004-06-01

    A new finite cloud method (the 5/μ method) for calculating external exposure dose in a nuclear emergency is presented in this paper. The method calculates the external exposure dose over a specially constructed three-dimensional cylindrical space whose underside center is the location of the receptor and whose underside radius and height are both five times the mean free path of a gamma photon. The space is then divided into many grid cells over which the external exposure dose (or dose rate) is numerically integrated. The air external exposure dose rate conversion factors and air-absorbed dose rate conversion factors calculated by the 5/μ method are consistent with the values presented in the related references. Compared with the discrete point approximation (DPA) method [USNRC, The MESORAD Dose Assessment Model. NUREG/CR-4000 Vol. 1, 1986] and the Nomogram method [USNRC, Nomogram for Evaluation of Doses from Finite Noble Gas Clouds, NUREG-0851, 1983], the two traditional finite cloud methods for calculating external exposure dose, the 5/μ method has the distinct advantage of much faster calculation, which is very important in a nuclear emergency. Moreover, the 5/μ method can be applied together with three-dimensional atmospheric dispersion models.
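The integration described in this record can be sketched as a point-kernel sum over grid cells of the 5/μ cylinder above the receptor. Everything below is an illustrative skeleton: the concentration field, the dose-conversion constant and the linear buildup factor are placeholders, not the paper's parameterisation.

```python
import math

def external_dose_rate(conc_fn, mu_per_m, k_conv,
                       r_grid=20, z_grid=20, n_theta=16):
    """Point-kernel sum over a cylinder of radius and height 5/mu centred
    above the receptor at the origin.  conc_fn(x, y, z) is the air
    concentration (Bq/m^3); k_conv converts the attenuated photon kernel to
    dose rate; the buildup factor is a simple linear placeholder B = 1 + mu*d."""
    R = H = 5.0 / mu_per_m
    dose = 0.0
    for i in range(r_grid):
        r = (i + 0.5) * R / r_grid
        for j in range(z_grid):
            z = (j + 0.5) * H / z_grid
            for t in range(n_theta):
                th = (t + 0.5) * 2.0 * math.pi / n_theta
                x, y = r * math.cos(th), r * math.sin(th)
                d = math.sqrt(x * x + y * y + z * z)
                # cylindrical volume element r*dr*dtheta*dz
                vol = r * (R / r_grid) * (H / z_grid) * (2.0 * math.pi / n_theta)
                buildup = 1.0 + mu_per_m * d              # placeholder B(mu*d)
                kernel = buildup * math.exp(-mu_per_m * d) / (4.0 * math.pi * d * d)
                dose += conc_fn(x, y, z) * vol * kernel
    return k_conv * dose

# Rough check with a uniform cloud (constant concentration everywhere):
rate = external_dose_rate(lambda x, y, z: 1.0e3, mu_per_m=0.01, k_conv=1.0)
```

Truncating the integration at 5 mean free paths is what bounds the work per receptor and gives the method its speed advantage over summing an unbounded cloud.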

  19. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the

  20. METHODS AND HARDWARE OF DOSE OUTPUT VERIFICATION FOR DYNAMIC RADIOTHERAPY

    OpenAIRE

    Y. V. Tsitovich; A. I. Hmyrak; A. I. Tarutin; M. G. Kiselev

    2013-01-01

    The design of a special verification phantom for checking dynamic radiotherapy is described. The phantom makes it possible to perform a cross-calibration of the dose distribution before each day's patient irradiations on a linac with RapidArc. The cross-calibration factor is defined by fitting a large number of correction factors measured in the phantom at different gantry rotation angles and calculating their mean. The long-term stability of all correction factors was evaluated during checking of se...

  1. Recommended environmental dose calculation methods and Hanford-specific parameters

    Energy Technology Data Exchange (ETDEWEB)

    Schreckhise, R.G.; Rhoads, K.; Napier, B.A.; Ramsdell, J.V. (Pacific Northwest Lab., Richland, WA (United States)); Davis, J.S. (Westinghouse Hanford Co., Richland, WA (United States))

    1993-03-01

    This document was developed to support the Hanford Environmental Dose Overview Panel (HEDOP). The Panel is responsible for reviewing all assessments of potential doses received by humans and other biota resulting from the actual or possible environmental releases of radioactive and other hazardous materials from facilities and/or operations belonging to the US Department of Energy on the Hanford Site in south-central Washington. This document serves as a guide to be used for developing estimates of potential radiation doses, or other measures of risk or health impacts, to people and other biota in the environs on and around the Hanford Site. It provides information to develop technically sound estimates of exposure (i.e., potential or actual) to humans or other biotic receptors that could result from the environmental transport of potentially harmful materials that have been, or could be, released from Hanford operations or facilities. Parameter values and information that are specific to the Hanford environs, as well as other supporting material, are included in this document.

  2. The effects of anatomic resolution, respiratory variations and dose calculation methods on lung dosimetry

    Science.gov (United States)

    Babcock, Kerry Kent Ronald

    2009-04-01

    The goal of this thesis was to explore the effects of dose resolution, respiratory variation and dose calculation method on dose accuracy. To achieve this, two models of lung were created. The first model, called TISSUE, approximated the connective alveolar tissues of the lung. The second model, called BRANCH, approximated the lung's bronchial, arterial and venous branching networks. Both models were varied to represent the full inhalation, full exhalation and midbreath phases of the respiration cycle. To explore the effects of dose resolution and respiratory variation on dose accuracy, each model was converted into a CT dataset and imported into a Monte Carlo simulation. The resulting dose distributions were compared and contrasted against dose distributions from Monte Carlo simulations which included the explicit model geometries. It was concluded that, regardless of respiratory phase, the exclusion of the connective tissue structures in the CT representation did not significantly affect the accuracy of dose calculations. However, the exclusion of the BRANCH structures resulted in dose underestimations as high as 14% local to the branching structures. As lung density decreased, the overall dose accuracy marginally decreased. To explore the effects of dose calculation method on dose accuracy, CT representations of the lung models were imported into the Pinnacle 3 treatment planning system. Dose distributions were calculated using the collapsed cone convolution (CCC) method and compared to those derived using the Monte Carlo method. For both lung models, it was concluded that the accuracy of the collapsed cone algorithm decreased with decreasing density. At full inhalation lung density, the collapsed cone algorithm underestimated dose by as much as 15%. Also, the accuracy of the CCC method decreased with decreasing field size. Further work is needed to determine the source of the discrepancy.

  3. Benchmarking a new closed-form thermal analysis technique against a traditional lumped parameter, finite-difference method

    Energy Technology Data Exchange (ETDEWEB)

    Huff, K. D.; Bauer, T. H. (Nuclear Engineering Division)

    2012-08-20

    A benchmarking effort was conducted to determine the accuracy of a new analytic generic geology thermal repository model developed at LLNL relative to a more traditional, numerical, lumped parameter technique. The fast-running analytical thermal transport model assumes uniform thermal properties throughout a homogenous storage medium. Arrays of time-dependent heat sources are included geometrically as arrays of line segments and points. The solver uses a source-based linear superposition of closed form analytical functions from each contributing point or line to arrive at an estimate of the thermal evolution of a generic geologic repository. Temperature rise throughout the storage medium is computed as a linear superposition of temperature rises. It is modeled using the MathCAD mathematical engine and is parameterized to allow myriad gridded repository geometries and geologic characteristics [4]. It was anticipated that the accuracy and utility of the temperature field calculated with the LLNL analytical model would provide an accurate 'birds-eye' view in regions that are many tunnel radii away from actual storage units; i.e., at distances where tunnels and individual storage units could realistically be approximated as physical lines or points. However, geometrically explicit storage units, waste packages, tunnel walls and close-in rock are not included in the MathCAD model. The present benchmarking effort therefore focuses on the ability of the analytical model to accurately represent the close-in temperature field. Specifically, close-in temperatures computed with the LLNL MathCAD model were benchmarked against temperatures computed using geometrically-explicit lumped-parameter, repository thermal modeling technique developed over several years at ANL using the SINDAG thermal modeling code [5]. Application of this numerical modeling technique to underground storage of heat generating nuclear waste streams within the proposed YMR Site has been widely
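The superposition step at the heart of the analytical model can be illustrated with the classical closed-form solution for a constant point source in an infinite homogeneous medium. The geometry and material values below are illustrative only, not the LLNL model, which also handles line-segment sources and time-dependent source strengths.

```python
import math

def point_source_dT(q_w, k, alpha, r_m, t_s):
    """Temperature rise from a constant point source q_w (W), switched on at
    t = 0 in an infinite homogeneous medium with conductivity k (W/m/K) and
    diffusivity alpha (m^2/s):
        dT = q / (4*pi*k*r) * erfc(r / (2*sqrt(alpha*t)))."""
    return q_w / (4.0 * math.pi * k * r_m) * math.erfc(
        r_m / (2.0 * math.sqrt(alpha * t_s)))

def superposed_dT(sources, k, alpha, point, t_s):
    """Linear superposition over an array of (x, y, z, q) heat sources."""
    x0, y0, z0 = point
    return sum(point_source_dT(q, k, alpha, math.dist((x, y, z), (x0, y0, z0)), t_s)
               for (x, y, z, q) in sources)

# Illustrative 3x3 array of 1 kW canisters on a 10 m pitch in granite-like
# rock (k ~ 2.5 W/m/K, alpha ~ 1e-6 m^2/s), evaluated 5 m off the array
# after ~10 years:
sources = [(10.0 * i, 10.0 * j, 0.0, 1000.0) for i in range(3) for j in range(3)]
dT = superposed_dT(sources, k=2.5, alpha=1.0e-6,
                   point=(10.0, 5.0, 0.0), t_s=10 * 3.1557e7)
```

Because heat conduction is linear, the field of the whole array is exactly the sum of the single-source fields, which is the property the fast analytic model exploits.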

  4. Different intensity extension methods and their impact on entrance dose in breast radiotherapy: A study

    Directory of Open Access Journals (Sweden)

    Sankar A

    2009-01-01

    Full Text Available In breast radiotherapy, skin flashing of treatment fields is important to account for intrafraction movements and setup errors. This study compares two different intensity extension methods, namely the Virtual Bolus method and the skin flash tool method, for providing skin flashing in intensity-modulated treatment fields. The impact of these two intensity extension methods on skin dose was studied by measuring the entrance dose of the treatment fields using semiconductor diode detectors. We found no significant difference in entrance dose between the two intensity extension methods. However, in the skin flash tool method, selection of appropriate parameters is important to obtain optimum fluence extension.

  5. Code intercomparison and benchmark for muon fluence and absorbed dose induced by an 18-GeV electron beam after massive iron shielding

    CERN Document Server

    Fasso, Alberto; Ferrari, Anna; Mokhov, Nikolai V; Mueller, Stefan E; Nelson, Walter Ralph; Roesler, Stefan; Sanami, Toshiya; Striganov, Sergei I; Versaci, Roberto

    2015-01-01

    In 1974, Nelson, Kase, and Svenson published an experimental investigation on muon shielding using the SLAC high-energy LINAC. They measured the muon fluence and absorbed dose induced by an 18 GeV electron beam hitting a copper/water beam dump and attenuated in a thick steel shield. In their paper, they compared the results with the theoretical models available at the time. In order to compare their experimental results with present model calculations, we use the modern Monte Carlo transport codes MARS15, FLUKA2011 and GEANT4 to model the experimental setup and run simulations. The results are then compared between the codes, and with the SLAC data.

  6. Modeling of tube current modulation methods in computed tomography dose calculations for adult and pregnant patients

    International Nuclear Information System (INIS)

    The comparatively high dose and increasing frequency of computed tomography (CT) examinations have spurred the development of techniques for reducing the radiation dose to imaging patients. Among these is tube current modulation (TCM), which can be applied longitudinally along the body, rotationally around the body, or both. Existing computational models for calculating dose from CT examinations do not include TCM techniques. Dose calculations using Monte Carlo methods have previously been prepared for constant-current rotational exposures at various positions along the body, and for the principal exposure projections, for several sets of computational phantoms, including adult male and female and pregnant patients. Dose calculations for CT scans with TCM are prepared by appropriately weighting the existing dose data. Longitudinal TCM doses can be obtained by weighting the dose at each z-axis scan position by the relative tube current at that position. Rotational TCM doses are weighted using the relative organ doses from the principal projections as a function of the current at the rotational angle. Significant dose reductions of 15% to 25% to fetal tissues are found from simulations of longitudinal TCM schemes for pregnant patients of different gestational ages. Weighting factors for each organ in rotational TCM schemes applied to adult male and female patients have also been found. As the application of TCM techniques becomes more prevalent, the need to include TCM in CT dose estimates will necessarily increase. (author)
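The longitudinal weighting rule stated above (each precomputed constant-current slice dose contribution scaled by the relative tube current at that z position, then summed) can be sketched directly. The slice doses and mA profile below are invented illustration values, not the study's data.

```python
def tcm_weighted_dose(slice_doses_const, tube_current_ma, ref_current_ma):
    """Longitudinal TCM organ dose: each constant-current slice dose
    contribution is weighted by the relative tube current at its z position
    and the weighted contributions are summed."""
    return sum(d * (i / ref_current_ma)
               for d, i in zip(slice_doses_const, tube_current_ma))

# Invented constant-current contributions (mGy) along z and a modulated
# mA profile, relative to a 250 mA reference:
d_const = [0.10, 0.30, 0.50, 0.30, 0.10]
ma      = [120,  180,  250,  180,  120]
dose_tcm   = tcm_weighted_dose(d_const, ma, ref_current_ma=250)
dose_const = sum(d_const)   # the unmodulated reference dose
```

Any modulation that lowers the current over part of the scan range yields a weighted dose below the constant-current sum, which is the mechanism behind the fetal dose reductions reported above.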

  7. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  8. Study on Updating Methods for Benchmark Land Prices and Their Development Trends

    Institute of Scientific and Technical Information of China (English)

    张磊; 王阳

    2016-01-01

    The urban benchmark land price system is a land management institution that China established in the early development stage of the urban land market. As the land market has developed, the original methods for updating benchmark land prices can no longer fully meet the needs of land administration, and localities are exploring new updating methods suited to local land-use management. By comparing the various updating methods for benchmark land prices and drawing on land price systems abroad, this paper discusses the development trends of methods for updating urban benchmark land prices.

  9. Design study on dose evaluation method for employees at severe accident

    International Nuclear Information System (INIS)

    When a severe accident is assumed in a nuclear power plant, rescue activities in the plant, accident management, repair of failed components and the protection of employees require a radiation dose rate distribution or map of the plant and estimated dose values for these tasks. However, it may be difficult to obtain these accurately as the accident progresses, because radiation monitors are not always installed in the areas where accident management or repair work on safety-related equipment is planned. In this work, we analyzed the diffusion of radioactive materials during a severe accident in a pressurized water reactor plant, investigated a method to obtain the radiation dose rate in the plant from the estimated radioactive sources, built a prototype analysis system by modeling a specific part of the components and buildings in the plant, and then evaluated its availability. As a result, we obtained the following: (1) a new dose evaluation method was established to predict the radiation dose rate at any point in the plant during a severe accident scenario; (2) the evaluation of total dose, including the access route and time needed for accident management and repair work, is useful for estimating radiation dose limits for these employee actions; (3) the radiation dose rate map is effective for identifying high-radiation areas and for choosing a route with a lower radiation dose rate. (author)

  10. Detection system built from commercial integrated circuits for real-time measurement of radiation dose and quality using the variance method

    International Nuclear Information System (INIS)

    A small, specialised amplifier using commercial integrated circuits (ICs) was developed to measure radiation dose and quality in real time using a microdosimetric ion chamber and the variance method. The charges from a microdosimetric ion chamber, operated in the current mode, were repeatedly collected for a fixed period of time for 20 cycles of 100 integrations, and processed by this specialised amplifier to produce signal pulse heights between 0 and 10 V. These signals were recorded by a multi-channel analyser coupled to a computer. FORTRAN programs were written to calculate the dose and dose variance. The dose variance produced in the ion chamber is a microdosimetric measure of radiation quality. Benchmark measurements of different brands of ICs were conducted. Results demonstrate that this specialised amplifier is capable of distinguishing differences of radiation quality in various high-dose-rate radiation fields including X rays, gamma rays and mixed neutron-gamma radiation from the research reactor at Texas A and M Univ. (authors)

  11. A method of estimating conceptus doses resulting from multidetector CT examinations during all stages of gestation

    International Nuclear Information System (INIS)

    Purpose: Current methods for the estimation of conceptus dose from multidetector CT (MDCT) examinations performed on the mother provide dose data for typical protocols with a fixed scan length. However, modified low-dose imaging protocols are frequently used during pregnancy. The purpose of the current study was to develop a method for the estimation of conceptus dose from any MDCT examination of the trunk performed during all stages of gestation. Methods: The Monte Carlo N-Particle (MCNP) radiation transport code was employed in this study to model the Siemens Sensation 16 and Sensation 64 MDCT scanners. Four mathematical phantoms were used, simulating women at 0, 3, 6, and 9 months of gestation. The contribution to the conceptus dose from single simulated scans was obtained at various positions across the phantoms. To investigate the effect of maternal body size and conceptus depth on conceptus dose, phantoms of different sizes were produced by adding layers of adipose tissue around the trunk of the mathematical phantoms. To verify MCNP results, conceptus dose measurements were carried out by means of three physical anthropomorphic phantoms, simulating pregnancy at 0, 3, and 6 months of gestation and thermoluminescence dosimetry (TLD) crystals. Results: The results consist of Monte Carlo-generated normalized conceptus dose coefficients for single scans across the four mathematical phantoms. These coefficients were defined as the conceptus dose contribution from a single scan divided by the CTDI free-in-air measured with identical scanning parameters. Data have been produced to take into account the effect of maternal body size and conceptus position variations on conceptus dose. Conceptus doses measured with TLD crystals showed a difference of up to 19% compared to those estimated by mathematical simulations. Conclusions: Estimation of conceptus doses from MDCT examinations of the trunk performed on pregnant patients during all stages of gestation can be made
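    The coefficient definition above (conceptus dose per scan divided by CTDI free-in-air) lends itself to a short sketch; the coefficient values and CTDI below are invented for illustration, not the study's data.

```python
# Applying normalized conceptus dose coefficients (hypothetical values).

def conceptus_dose(ctdi_free_in_air_mGy, coefficients):
    """Total conceptus dose for a scan covering several table positions.

    Each coefficient is defined as in the abstract: the conceptus dose
    contribution from a single scan at that position divided by the CTDI
    free-in-air measured with identical scanning parameters.
    """
    return ctdi_free_in_air_mGy * sum(coefficients)

# Hypothetical per-rotation coefficients over the scanned range.
cf = [0.02, 0.05, 0.12, 0.05, 0.02]
print(conceptus_dose(10.0, cf))  # with a CTDI free-in-air of 10 mGy
```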

  12. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in...

  13. Two new computational methods for universal DNA barcoding: a benchmark using barcode sequences of bacteria, archaea, animals, fungi, and land plants.

    Science.gov (United States)

    Tanabe, Akifumi S; Toju, Hirokazu

    2013-01-01

    Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used "1-nearest-neighbor" (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need to accelerate
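    The 1-NN assignment step described above can be illustrated with a toy similarity measure; real pipelines use BLAST scores, and the sequences and taxa here are invented.

```python
# Minimal illustration of 1-nearest-neighbor (1-NN) taxonomic assignment:
# give the query the taxonomy of its most similar reference sequence.

def similarity(a, b):
    # Fraction of positions with identical bases (equal-length sequences).
    return sum(x == y for x, y in zip(a, b)) / len(a)

def one_nn(query, reference_db):
    # reference_db maps sequence -> taxon; return the taxon of the best hit.
    best = max(reference_db, key=lambda seq: similarity(query, seq))
    return reference_db[best]

db = {"ACGTACGT": "Taxon A", "ACGTTTTT": "Taxon B"}
print(one_nn("ACGTACGA", db))  # closest to the first reference -> "Taxon A"
```

    If the query's true species is absent from `db`, this scheme still returns some taxon, which is exactly the misidentification mode the benchmark highlights.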

  14. Two new computational methods for universal DNA barcoding: a benchmark using barcode sequences of bacteria, archaea, animals, fungi, and land plants.

    Directory of Open Access Journals (Sweden)

    Akifumi S Tanabe

    Full Text Available Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used "1-nearest-neighbor" (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need

  15. Two New Computational Methods for Universal DNA Barcoding: A Benchmark Using Barcode Sequences of Bacteria, Archaea, Animals, Fungi, and Land Plants

    Science.gov (United States)

    Tanabe, Akifumi S.; Toju, Hirokazu

    2013-01-01

    Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used “1-nearest-neighbor” (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need to

  16. X-ray tube output based calculation of patient entrance surface dose: validation of the method

    Energy Technology Data Exchange (ETDEWEB)

    Harju, O.; Toivonen, M.; Tapiovaara, M.; Parviainen, T. [Radiation and Nuclear Safety Authority, Helsinki (Finland)

    2003-06-01

    X-ray departments need methods to monitor the doses delivered to patients in order to compare their dose levels with established reference levels. For this purpose, the patient dose per radiograph is described in terms of the entrance surface dose (ESD) or the dose-area product (DAP). The actual measurement is often made using a DAP meter or thermoluminescent dosimeters (TLD). The third possibility, calculating the ESD from the examination technique factors, is likely to be a common method for x-ray departments that do not have the other methods at their disposal, or for examinations where the dose may be too low to be measured by other means (e.g. chest radiography). We have developed a program for determining the ESD by this calculation method and analysed the accuracy that can be achieved with this indirect method. The program calculates the ESD from the current-time product, x-ray tube voltage, beam filtration and focus-to-skin distance (FSD). Additionally, for calibrating the dose calculation method and thereby improving the accuracy of the calculation, the x-ray tube output should be measured for at least one x-ray tube voltage value in each x-ray unit. The aim of the present work is to point out the restrictions of the method and the details of its practical application. The first experiences from the use of the method are summarised. (orig.)
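    A minimal sketch of such an indirect ESD calculation is shown below. The abstract does not give the formula, so this assumes the common form: measured tube output at a reference distance, scaled by the current-time product, an inverse-square correction to the skin, and a backscatter factor. The output value and backscatter factor are illustrative, not calibration data.

```python
# Indirect ESD estimate from tube output (assumed formula, example numbers).

def entrance_surface_dose(output_uGy_per_mAs_at_1m, mAs, fsd_cm, bsf=1.35):
    """ESD in uGy from tube output measured at 1 m for the given kV/filtration.

    Inverse-square correction from the 1 m reference point to the skin,
    then scale by the current-time product and a backscatter factor.
    """
    return output_uGy_per_mAs_at_1m * mAs * (100.0 / fsd_cm) ** 2 * bsf

# Example: output 50 uGy/mAs at 1 m, 20 mAs, focus-to-skin distance 80 cm.
print(entrance_surface_dose(50.0, 20.0, 80.0))  # ESD in uGy
```

    In practice the output term depends on tube voltage and filtration, which is why the abstract recommends measuring the output of each x-ray unit for calibration.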

  17. Benchmarking von Krankenhausinformationssystemen – eine vergleichende Analyse deutschsprachiger Benchmarkingcluster

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. In recent years, several benchmarking clusters have been established in the German-speaking countries. They support hospitals in comparing and positioning the costs, performance and efficiency of their information systems and information management against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme covers both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries in recent years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their forms of cooperation. The benchmarking clusters also deal with different benchmarking subjects. Assessing the costs and quality of application systems, physical data-processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance, and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  18. Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy

    International Nuclear Information System (INIS)

    Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. The treatment time for a prescribed dose is calculated manually. A Monte Carlo method Python library written at Madagascar INSTN is used experimentally to calculate the dose distribution in the tumour and around it. A first validation of the code was done by comparing the library's curves with the Nucletron company's curves. To reduce the duration of the calculation, a grid of PCs was set up with a listener patch running on each PC. The library will be used to model the dose distribution in the patient's CT scan images for individualized and more accurate treatment time calculation for a prescribed dose.

  19. Benchmark calculations for EGS5

    International Nuclear Information System (INIS)

    In the past few years, EGS4 has undergone an extensive upgrade to EGS5, particularly in the areas of low-energy electron physics, low-energy photon physics, PEGS cross-section generation, and the conversion of the coding from Mortran to Fortran. Benchmark calculations have been made to assure the accuracy, reliability and high quality of the EGS5 code system. This study reports three benchmark examples that show the successful upgrade from EGS4 to EGS5, based on the excellent agreement among EGS4, EGS5 and measurements. The first benchmark example is the 1969 Crannell experiment measuring the three-dimensional distribution of energy deposition for 1-GeV electron showers in water and aluminum tanks. The second example is the 1995 measurement of Compton-scattered spectra for 20-40 keV linearly polarized photons by Namito et al. at KEK, which was a main part of the low-energy photon expansion work for both EGS4 and EGS5. The third example is the 1986 heterogeneity benchmark experiment by Shortt et al., who used a monoenergetic 20-MeV electron beam hitting the front face of a water tank containing air and aluminum cylinders and measured the spatial depth-dose distribution using a small solid-state detector. (author)

  20. Beta-ray dose assessment from skin contamination using a point kernel method

    International Nuclear Information System (INIS)

    In this study, a point kernel method to calculate the beta-ray dose rate from skin contamination was introduced. The beta-ray dose rates were computed by numerical integration of the radial dose distribution around an isotropic point source of monoenergetic electrons, called the point kernel. An in-house code based on MATLAB version 7.0.4 was developed to perform the numerical integration. The code generated dose distributions for beta-ray emitters from the interpolated point kernel, and beta-ray dose rates from skin contamination were calculated by numerical integration. The generated dose distributions for selected beta-ray emitters agreed with those calculated by Cross et al. within 20%, except at longer distances, where the differences reached more than 100%. For a point source, the calculated beta-ray doses agreed well with those derived from Monte Carlo simulation. For a disk source, the differences were up to 17% in the deep region; the point kernel method underestimated beta-ray doses relative to the Monte Carlo simulation. The code will be improved to deal with three-dimensional sources, shielding by cover material, air gaps, and the contribution of photons to skin dose. For the user's convenience, the code will also be equipped with a graphical user interface. (author)
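    The integration step described above can be sketched as follows: sum a point kernel over a uniformly contaminated disk to get the dose rate at a point below its center. The kernel here is a toy inverse-square falloff with exponential attenuation, not a real beta-ray kernel, and all numbers are illustrative.

```python
# Numerical integration of a point kernel over a disk source (toy kernel).
import math

def kernel(r_cm):
    # Toy point kernel: dose rate per unit activity at distance r (arb. units).
    return math.exp(-r_cm / 0.5) / (4 * math.pi * r_cm**2)

def disk_dose_rate(radius_cm, depth_cm, activity_per_cm2, n=2000):
    """Dose rate at depth below the center of a uniform disk source,
    summing the kernel over concentric rings (midpoint rule)."""
    dr = radius_cm / n
    total = 0.0
    for i in range(n):
        rho = (i + 0.5) * dr                # ring radius
        ring_area = 2 * math.pi * rho * dr  # area of the thin ring
        dist = math.hypot(rho, depth_cm)    # source-to-point distance
        total += kernel(dist) * ring_area
    return activity_per_cm2 * total

print(disk_dose_rate(1.0, 0.007, 1.0))  # basal-layer depth ~70 um, arb. units
```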

  1. Design study on dose evaluation method for employees at severe accident

    International Nuclear Information System (INIS)

    If a severe accident occurs in a pressurized water reactor plant, it is necessary to estimate the doses to operators engaged in emergency activities such as accident management and repair of failed components. However, it might be difficult to measure the radiation dose rate during the progress of an accident, because radiation monitors are not always installed in the areas where the emergency activities are required. In this study, we analyzed the transport of radioactive materials in case of a severe accident, investigated a method to obtain the radiation dose rate in the plant from estimated radioactive sources, built a prototype analysis system from this design study, and then evaluated its availability. As a result, we obtained the following: (1) a new dose evaluation method was established to predict the radiation dose rate at any point in the plant during a severe accident scenario; (2) the evaluation of total dose, including the access route and time for emergency activities, is useful for estimating radiation dose limits for these employee actions; (3) the radiation dose rate map is effective for identifying high-radiation areas and for choosing a route with a lower radiation dose rate. (author)

  2. Method for recovery of thyroidal radiation dose due to 131I incorporation

    International Nuclear Information System (INIS)

    A method for the retrospective recovery of the radiation dose to the thyroid of humans of different age groups due to 131I incorporation is developed. The method is based on analysis of the density of 137Cs fallout, the dose from the mixture of sedimented gamma sources, and the measured radiation dose in the thyroid. The technique is developed using the available data on the Chernobyl accident. Correlations were found between the examined parameters over a wide range of sedimented radionuclide concentrations. The resulting estimated dose distribution in the thyroid virtually does not differ from that measured in the known settlements. Thyroid radiation doses at similar 137Cs fallout densities and the same external gamma-radiation doses vary by factors of tens; the share of subjects with the maximal radiation doses makes up 0.01-0.005%. The highest correlation between the thyroid dose and the external gamma-radiation dose was found within the first 8 months after the accident

  3. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    Purpose: The purpose of this investigation was to develop methods and software for computing realistic digitally reconstructed electronic portal images with known setup errors for use as benchmark test cases for evaluation and intercomparison of computer-based methods for image matching and detecting setup errors in electronic portal images. Methods and Materials: An existing software tool for computing digitally reconstructed radiographs was modified to compute simulated megavoltage images. An interface was added to allow the user to specify which setup parameter(s) will contain computer-induced random and systematic errors in a reference beam created during virtual simulation. Other software features include options for adding random and structured noise, Gaussian blurring to simulate geometric unsharpness, histogram matching with a 'typical' electronic portal image, specifying individual preferences for the appearance of the 'gold standard' image, and specifying the number of images generated. The visible male computed tomography data set from the National Library of Medicine was used as the planning image. Results: Digitally reconstructed electronic portal images with known setup errors have been generated and used to evaluate our methods for automatic image matching and error detection. Any number of different sets of test cases can be generated to investigate setup errors involving selected setup parameters and anatomic volumes. This approach has proved to be invaluable for determination of error detection sensitivity under ideal (rigid body) conditions and for guiding further development of image matching and error detection methods. Example images have been successfully exported for similar use at other sites. Conclusions: Because absolute truth is known, digitally reconstructed electronic portal images with known setup errors are well suited for evaluation of computer-aided image matching and error detection methods. 
High-quality planning images, such as

  4. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web technologies: first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies, using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  5. Accuracy of effective dose estimation in personal dosimetry: a comparison between single-badge and double-badge methods and the MOSFET method.

    Science.gov (United States)

    Januzis, Natalie; Belley, Matthew D; Nguyen, Giao; Toncheva, Greta; Lowry, Carolyn; Miller, Michael J; Smith, Tony P; Yoshizumi, Terry T

    2014-05-01

    The purpose of this study was three-fold: (1) to measure the transmission properties of various lead shielding materials, (2) to benchmark the accuracy of commercial film badge readings, and (3) to compare the accuracy of effective dose (ED) conversion factors (CF) of the U.S. Nuclear Regulatory Commission methods to the MOSFET method. The transmission properties of lead aprons and the accuracy of film badges were studied using an ion chamber and monitor. ED was determined using an adult male anthropomorphic phantom that was loaded with 20 diagnostic MOSFET detectors and scanned with a whole body CT protocol at 80, 100, and 120 kVp. One commercial film badge was placed at the collar and one at the waist. Individual organ doses and waist badge readings were corrected for lead apron attenuation. ED was computed using ICRP 103 tissue weighting factors, and ED CFs were calculated by taking the ratio of ED and badge reading. The measured single badge CFs were 0.01 (±14.9%), 0.02 (±9.49%), and 0.04 (±15.7%) for 80, 100, and 120 kVp, respectively. Current regulatory ED CF for the single badge method is 0.3; for the double-badge system, they are 0.04 (collar) and 1.5 (under lead apron at the waist). The double-badge system provides a better coefficient for the collar at 0.04; however, exposure readings under the apron are usually negligible to zero. Based on these findings, the authors recommend the use of ED CF of 0.01 for the single badge system from 80 kVp (effective energy 50.4 keV) data. PMID:24670903
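    The conversion-factor arithmetic described above (ED as the tissue-weighted sum of organ doses, CF as ED divided by the badge reading) can be sketched as follows. The organ doses, the badge reading, and the use of only a four-tissue subset of the ICRP 103 weights are illustrative stand-ins, not the study's measurements.

```python
# Sketch of ED and conversion-factor (CF) computation, illustrative numbers.

# Subset of ICRP 103 tissue weighting factors (a full ED uses all tissues).
weights = {"lung": 0.12, "stomach": 0.12, "liver": 0.04, "thyroid": 0.04}
# Hypothetical MOSFET-measured organ equivalent doses, mSv.
organ_dose_mSv = {"lung": 1.0, "stomach": 0.9, "liver": 0.8, "thyroid": 1.2}

# Effective dose: tissue-weighted sum of organ doses.
effective_dose = sum(weights[t] * organ_dose_mSv[t] for t in weights)

badge_reading = 30.0  # hypothetical collar badge reading, mSv
cf = effective_dose / badge_reading  # CF = ED / badge reading

print(f"ED = {effective_dose:.3f} mSv, CF = {cf:.4f}")
```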

  6. Method for inserting noise in digital mammography to simulate reduction in radiation dose

    Science.gov (United States)

    Borges, Lucas R.; de Oliveira, Helder C. R.; Nunes, Polyana F.; Vieira, Marcelo A. C.

    2015-03-01

    The quality of clinical x-ray images is closely related to the radiation dose used in the imaging study. The general principle for selecting the radiation is ALARA ("as low as reasonably achievable"). The practical optimization, however, remains challenging. It is well known that reducing the radiation dose increases the quantum noise, which could compromise the image quality. In order to conduct studies about dose reduction in mammography, it would be necessary to acquire repeated clinical images, from the same patient, with different dose levels. However, such practice would be unethical due to radiation related risks. One solution is to simulate the effects of dose reduction in clinical images. This work proposes a new method, based on the Anscombe transformation, which simulates dose reduction in digital mammography by inserting quantum noise into clinical mammograms acquired with the standard radiation dose. Thus, it is possible to simulate different levels of radiation doses without exposing the patient to new levels of radiation. Results showed that the achieved quality of simulated images generated with our method is the same as when using other methods found in the literature, with the novelty of using the Anscombe transformation for converting signal-independent Gaussian noise into signal-dependent quantum noise.
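    A minimal sketch of this idea is shown below: scale the image to the target dose, move to the Anscombe domain (where Poisson noise has roughly unit variance), add the missing noise as signal-independent Gaussian noise, and invert. The variance bookkeeping is our reading of the approach, and pixel values are assumed to be quanta counts (unit detector gain); a clinical implementation must use the detector's calibrated gain.

```python
# Simulating dose reduction via the Anscombe transformation (sketch).
import numpy as np

def simulate_dose_reduction(counts, f, rng=None):
    """Return an image emulating acquisition at fraction f of the dose."""
    rng = np.random.default_rng(rng)
    scaled = f * counts                    # mean signal of the low-dose image
    a = 2.0 * np.sqrt(scaled + 3.0 / 8.0)  # Anscombe transform
    # The Poisson noise inherited from the full-dose image has variance
    # ~f in the Anscombe domain; a true Poisson image at dose f would
    # have variance ~1, so inject the missing (1 - f) as Gaussian noise.
    a = a + rng.normal(0.0, np.sqrt(1.0 - f), size=a.shape)
    return np.maximum(a, 0.0) ** 2 / 4.0 - 3.0 / 8.0  # inverse transform

# A flat 1000-count region at half dose should come back with mean and
# variance both near 500, as expected for Poisson statistics.
full = np.random.default_rng(0).poisson(1000, size=100_000).astype(float)
low = simulate_dose_reduction(full, 0.5, rng=1)
print(round(low.mean()), round(low.var()))
```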

  7. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    Full Text Available In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution’s competitive position and learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled developing a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, scientific literature and the author's experience from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  8. The simple exposure dose calculation method in interventional radiology and one case of radiation injury (alopecia)

    International Nuclear Information System (INIS)

    Interventional radiology (IVR) is less invasive than surgery and has rapidly become widespread owing to advances in instruments and x-ray apparatus. However, the radiation exposure from long fluoroscopy times carries a risk of radiation injury. We estimated the exposure dose in a patient who underwent IVR therapy and developed a radiation injury (alopecia). The patient's outcome and the method of estimating the exposure dose are reported. The exposure dose was roughly estimated from the real-time exposure dose during the examination; this is a useful indicator for the operator to know the exposure dose during IVR. Radiological technologists must be aware of the exposure dose and call attention to it during IVR. (author)

  9. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and the NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  10. Digital radiography of scoliosis with a scanning method: radiation dose optimization

    Energy Technology Data Exchange (ETDEWEB)

    Geijer, Haakan; Andersson, Torbjoern [Department of Radiology, Oerebro University Hospital, 701 85 Oerebro (Sweden); Verdonck, Bert [Philips Medical Systems, P.O. Box 10,000, 5680 Best (Netherlands); Beckman, Karl-Wilhelm; Persliden, Jan [Department of Medical Physics, Oerebro University Hospital, 701 85 Oerebro (Sweden)

    2003-03-01

    The aim of this study was optimization of the radiation dose-image quality relationship for a digital scanning method of scoliosis radiography. The examination is performed as a digital multi-image translation scan that is reconstructed to a single image in a workstation. Entrance dose was recorded with thermoluminescent dosimeters placed dorsally on an Alderson phantom. At the same time, kerma area product (KAP) values were recorded. A Monte Carlo calculation of effective dose was also made. Image quality was evaluated with a contrast-detail phantom and Visual Grading. The radiation dose was reduced by lowering the image intensifier entrance dose request, adjusting pulse frequency and scan speed, and by raising tube voltage. The calculated effective dose was reduced from 0.15 to 0.05 mSv with reduction of KAP from 1.07 to 0.25 Gy cm{sup 2} and entrance dose from 0.90 to 0.21 mGy. The image quality was reduced with the Image Quality Figure going from 52 to 62 and a corresponding reduction in image quality as assessed with Visual Grading. The optimization resulted in a dose reduction to 31% of the original effective dose with an acceptable reduction in image quality considering the intended use of the images for angle measurements. (orig.)

  11. Liver tumour segmentation using contrast-enhanced multi-detector CT data: performance benchmarking of three semiautomated methods

    International Nuclear Information System (INIS)

    Automatic tumour segmentation and volumetry is useful in cancer staging and treatment outcome assessment. This paper presents a performance benchmarking study on liver tumour segmentation for three semiautomatic algorithms: 2D region growing with knowledge-based constraints (A1), 2D voxel classification with propagational learning (A2) and Bayesian rule-based 3D region growing (A3). CT data from 30 patients were studied, and 47 liver tumours were isolated and manually segmented by experts to obtain the reference standard. Four datasets with ten tumours were used for algorithm training and the remaining 37 tumours for testing. Three evaluation metrics, relative absolute volume difference (RAVD), volumetric overlap error (VOE) and average symmetric surface distance (ASSD), were computed based on computerised and reference segmentations. A1, A2 and A3 obtained mean/median RAVD scores of 17.93/10.53%, 17.92/9.61% and 34.74/28.75%, mean/median VOEs of 30.47/26.79%, 25.70/22.64% and 39.95/38.54%, and mean/median ASSDs of 2.05/1.41 mm, 1.57/1.15 mm and 4.12/3.41 mm, respectively. For each metric, we obtained significantly lower values for A1 and A2 than for A3 (P < 0.01), suggesting that A1 and A2 outperformed A3. Compared with the reference standard, the overall performance of A1 and A2 is promising. Further development and validation are necessary before reliable tumour segmentation and volumetry can be widely used clinically. (orig.)
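
    The first two metrics have simple set-based definitions. A minimal sketch (not the authors' implementation) of RAVD and VOE computed from boolean segmentation masks, with toy 2D masks invented for illustration:

```python
import numpy as np

def ravd(seg, ref):
    """Relative absolute volume difference, in percent."""
    return abs(int(seg.sum()) - int(ref.sum())) / ref.sum() * 100.0

def voe(seg, ref):
    """Volumetric overlap error, in percent (1 minus the Jaccard index)."""
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    return (1.0 - inter / union) * 100.0

# Toy 2D masks: reference is a 4x4 square, segmentation is one column wider
ref = np.zeros((8, 8), dtype=bool); ref[2:6, 2:6] = True
seg = np.zeros((8, 8), dtype=bool); seg[2:6, 2:7] = True
print(ravd(seg, ref))   # 25.0: |20 - 16| / 16
print(voe(seg, ref))    # ~20.0: overlap 16, union 20
```

    ASSD additionally requires extracting the surfaces of both masks, which is omitted here for brevity.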

  12. A CT-based analytical dose calculation method for HDR 192Ir brachytherapy

    International Nuclear Information System (INIS)

    Purpose: This article presents an analytical dose calculation method for high-dose-rate 192Ir brachytherapy, taking into account the effects of inhomogeneities and reduced photon backscatter near the skin. The adequacy of the Task Group 43 (TG-43) two-dimensional formalism for treatment planning is also assessed. Methods: The proposed method uses material composition and density data derived from computed tomography images. The primary and scatter dose distributions for each dwell position are calculated first as if the patient is an infinite water phantom. This is done using either TG-43 or a database of Monte Carlo (MC) dose distributions. The latter can be used to account for the effects of shielding in water. Subsequently, corrections for photon attenuation, scatter, and spectral variations along medium- or low-Z inhomogeneities are made according to the radiological paths determined by ray tracing. The scatter dose is then scaled by a correction factor that depends on the distances between the point of interest, the body contour, and the source position. Dose calculations are done for phantoms with tissue and lead inserts, as well as patient plans for head-and-neck, esophagus, and MammoSite balloon breast brachytherapy treatments. Gamma indices are evaluated using a dose-difference criterion of 3% and a distance-to-agreement criterion of 2 mm. PTRANCT MC calculations are used as the reference dose distributions. Results: For the phantom with tissue and lead inserts, the percentages of the voxels of interest passing the gamma criteria (Pγ≥1) are 100% for the analytical calculation and 91% for TG-43. For the breast patient plan, TG-43 overestimates the target volume receiving the prescribed dose by 4% and the dose to the hottest 0.1 cm3 of the skin by 9%, whereas the analytical and MC results agree within 0.4%. Pγ≥1 are 100% and 48% for the analytical and TG-43 calculations, respectively. For the head-and-neck and esophagus patient plans, Pγ≥1 are ≥99
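
    The gamma evaluation combines the 3% dose-difference and 2 mm distance-to-agreement criteria into a single pass/fail value per voxel. A simplified 1D illustration of a global gamma index, not the PTRANCT-based workflow of the paper; the depth-dose curves below are invented for the example:

```python
import numpy as np

def gamma_1d(x, ref_dose, eval_dose, dd=0.03, dta=2.0):
    """Simplified 1D global gamma index: dd is the dose-difference criterion
    as a fraction of the reference maximum, dta the distance-to-agreement
    criterion in mm. A point passes when its gamma value is <= 1."""
    d_norm = dd * ref_dose.max()
    gammas = np.empty_like(ref_dose)
    for i, (xi, di) in enumerate(zip(x, ref_dose)):
        # capital-Gamma surface evaluated against every evaluated point
        cap = np.sqrt(((x - xi) / dta) ** 2 + ((eval_dose - di) / d_norm) ** 2)
        gammas[i] = cap.min()
    return gammas

x = np.arange(0.0, 50.0, 0.5)        # positions in mm
ref = 100.0 * np.exp(-x / 30.0)      # invented reference depth-dose curve
ev = ref * 1.02                      # evaluated dose, 2% hot everywhere
g = gamma_1d(x, ref, ev)
print((g <= 1.0).mean() * 100.0)     # pass rate in percent
```

    A uniform 2% offset stays within the 3% global criterion, so every point passes in this toy case.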

  13. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  14. A Novel Method for the Evaluation of Uncertainty in Dose-Volume Histogram Computation

    International Nuclear Information System (INIS)

    Purpose: Dose-volume histograms (DVHs) are a useful tool in state-of-the-art radiotherapy treatment planning, and it is essential to recognize their limitations. Even after a specific dose-calculation model is optimized, dose distributions computed by using treatment-planning systems are affected by several sources of uncertainty, such as algorithm limitations, measurement uncertainty in the data used to model the beam, and residual differences between measured and computed dose. This report presents a novel method to take them into account. Methods and Materials: To take into account the effect of associated uncertainties, a probabilistic approach using a new kind of histogram, a dose-expected volume histogram, is introduced. The expected value of the volume in the region of interest receiving an absorbed dose equal to or greater than a certain value is found by using the probability distribution of the dose at each point. A rectangular probability distribution is assumed for this point dose, and a formulation that accounts for uncertainties associated with point dose is presented for practical computations. Results: This method is applied to a set of DVHs for different regions of interest, including 6 brain patients, 8 lung patients, 8 pelvis patients, and 6 prostate patients planned for intensity-modulated radiation therapy. Conclusions: Results show a greater effect on planning target volume coverage than in organs at risk. In cases of steep DVH gradients, such as planning target volumes, this new method shows the largest differences with the corresponding DVH; thus, the effect of the uncertainty is larger
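
    The rectangular (uniform) point-dose distribution makes the expected-volume computation straightforward: for each threshold, the probability that a point receives at least that dose is a clipped linear function of the computed dose, and the expected volume fraction is the mean of those probabilities. A sketch with invented dose values and an assumed uncertainty half-width:

```python
import numpy as np

def expected_volume_histogram(point_doses, half_width, thresholds):
    """Expected fraction of the region of interest receiving at least each
    threshold dose, assuming each computed point dose is uniformly
    distributed on [d - half_width, d + half_width]."""
    d = np.asarray(point_doses)[:, None]   # one row per point
    t = np.asarray(thresholds)[None, :]
    # P(dose >= t) for a rectangular distribution of total width 2*half_width
    p = np.clip((d + half_width - t) / (2.0 * half_width), 0.0, 1.0)
    return p.mean(axis=0)

doses = np.array([58.0, 59.0, 60.0, 61.0, 62.0])   # Gy, invented point doses
t = np.array([55.0, 60.0, 65.0])                   # Gy, dose thresholds
print(expected_volume_histogram(doses, half_width=1.5, thresholds=t))  # ~[1.0, 0.5, 0.0]
```

    With zero uncertainty the result reduces to the conventional cumulative DVH, which is why the differences are largest where the DVH gradient is steep.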

  15. Monte Carlo simulation methods of determining red bone marrow dose from external radiation

    International Nuclear Information System (INIS)

    Objective: To provide evidence for a more reasonable method of determining red bone marrow dose by analyzing and comparing existing simulation methods. Methods: Using the Monte Carlo simulation software MCNPX, the absorbed dose to the red bone marrow of the Rensselaer Polytechnic Institute (RPI) adult female voxel phantom was calculated by 4 different methods: direct energy deposition, dose response function (DRF), the King-Spiers factor method and the mass-energy absorption coefficient (MEAC) method. The radiation sources were defined as infinite plate sources with energies ranging from 20 keV to 10 MeV, and 23 sources with different energies were simulated in total. The source was placed right next to the front of the RPI model to achieve a homogeneous anteroposterior radiation scenario. The results of the different methods were compared across the simulated photon energies. Results: When the photon energy was lower than 100 keV, the direct energy deposition method gave the highest result, while the MEAC and King-Spiers factor methods showed more reasonable results. When the photon energy was higher than 150 keV, taking into account the higher absorption ability of red bone marrow at higher photon energies, the result of the King-Spiers factor method was larger than those of the other methods. Conclusions: The King-Spiers factor method might be the most reasonable method to estimate the red bone marrow dose from external radiation. (authors)

  16. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    Energy Technology Data Exchange (ETDEWEB)

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  17. SU-E-J-96: Multi-Axis Dose Accumulation of Noninvasive Image-Guided Breast Brachytherapy Through Biomechanical Modeling of Tissue Deformation Using the Finite Element Method

    Energy Technology Data Exchange (ETDEWEB)

    Rivard, MJ [Tufts University School of Medicine, Boston, MA (United States); Ghadyani, HR [SUNY Farmingdale State College, Farmingdale, NY (United States); Bastien, AD; Lutz, NN [Univeristy Massachusetts Lowell, Lowell, MA (United States); Hepel, JT [Rhode Island Hospital, Providence, RI (United States)

    2015-06-15

    Purpose: Noninvasive image-guided breast brachytherapy delivers conformal HDR Ir-192 brachytherapy treatments with the breast compressed and treated in the cranial-caudal and medial-lateral directions. This technique subjects breast tissue to extreme deformations not observed for other disease sites. Given that commercially available software for deformable image registration cannot accurately co-register image sets obtained in these two states, a finite element analysis based on a biomechanical model was developed to deform dose distributions for each compression circumstance for dose summation. Methods: The model assumed the breast was under planar stress with values of 30 kPa for Young’s modulus and 0.3 for Poisson’s ratio. Dose distributions from round and skin-dose optimized applicators in cranial-caudal and medial-lateral compressions were deformed using 0.1 cm planar resolution. Dose distributions, skin doses, and dose-volume histograms were generated. Results were examined as a function of breast thickness, applicator size, target size, and offset distance from the center. Results: Over the range of examined thicknesses, target size increased several millimeters as compression thickness decreased. This trend increased with increasing offset distances. Applicator size minimally affected target coverage, until applicator size was less than the compressed target size. In all cases with an applicator larger than or equal to the compressed target size, > 90% of the target was covered by > 90% of the prescription dose. In all cases, dose coverage became less uniform as offset distance increased and average dose increased. This effect was more pronounced for smaller target-applicator combinations. Conclusions: The model exhibited skin dose trends that matched MC-generated benchmarking results and clinical measurements within 2% over a similar range of breast thicknesses and target sizes. The model provided quantitative insight on dosimetric treatment variables over
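
    Under the stated assumptions (plane stress, Young's modulus 30 kPa, Poisson's ratio 0.3), the constitutive matrix at the core of such a finite element model takes the standard plane-stress form. A sketch, with the strain values invented for illustration:

```python
import numpy as np

def plane_stress_matrix(E, nu):
    """Constitutive matrix for plane stress, mapping engineering strain
    [eps_x, eps_y, gamma_xy] to stress [sig_x, sig_y, tau_xy]."""
    return (E / (1.0 - nu**2)) * np.array([
        [1.0, nu,  0.0],
        [nu,  1.0, 0.0],
        [0.0, 0.0, (1.0 - nu) / 2.0],
    ])

D = plane_stress_matrix(E=30e3, nu=0.3)   # 30 kPa (in Pa), nu = 0.3
strain = np.array([0.10, -0.03, 0.0])     # invented compression-type strain
stress = D @ strain                       # resulting stresses in Pa
print(stress)                             # ~[3000, 0, 0] Pa
```

    In the full model this matrix enters the element stiffness assembly; the example only shows the material law the stated parameters imply.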

  18. A method of dose reconstruction for moving targets compatible with dynamic treatments

    Energy Technology Data Exchange (ETDEWEB)

    Rugaard Poulsen, Per; Lykkegaard Schmidt, Mai; Keall, Paul; Schjodt Worm, Esben; Fledelius, Walther; Hoffmann, Lone [Department of Oncology, Aarhus University Hospital, Norrebrogade 44, 8000 Aarhus C, Institute of Clinical Medicine, Aarhus University, Brendstrupgaardsvej 100, 8200 Aarhus N (Denmark); Department of Physics and Astronomy, Aarhus University, Ny Munkegade 120, 8000 Aarhus C (Denmark); Radiation Physics Laboratory, Sydney Medical School, University of Sydney, NSW 2006 (Australia); Department of Oncology, Aarhus University Hospital, Norrebrogade 44, 8000 Aarhus C, Department of Medical Physics, Aarhus University Hospital, Norrebrogade 44, 8000 Aarhus C (Denmark); Department of Oncology, Aarhus University Hospital, Norrebrogade 44, 8000 Aarhus C (Denmark); Department of Medical Physics, Aarhus University Hospital, Norrebrogade 44, 8000 Aarhus C (Denmark)

    2012-10-15

    Purpose: To develop a method that allows a commercial treatment planning system (TPS) to perform accurate dose reconstruction for rigidly moving targets and to validate the method in phantom measurements for a range of treatments including intensity modulated radiation therapy (IMRT), volumetric arc therapy (VMAT), and dynamic multileaf collimator (DMLC) tracking. Methods: An in-house computer program was developed to manipulate Dicom treatment plans exported from a TPS (Eclipse, Varian Medical Systems) such that target motion during treatment delivery was incorporated into the plans. For each treatment, a motion-including plan was generated by dividing the intratreatment target motion into 1 mm position bins and constructing sub-beams that represented the parts of the treatment that were delivered while the target was located within each position bin. For each sub-beam, the target shift was modeled by a corresponding isocenter shift. The motion-including Dicom plans were reimported into the TPS, where dose calculation resulted in motion-including target dose distributions. For experimental validation of the dose reconstruction, a thorax phantom with a moveable lung-equivalent rod with a tumor insert of solid water was first CT scanned. The tumor insert was delineated as a gross tumor volume (GTV), and a planning target volume (PTV) was formed by adding margins. A conformal plan, two IMRT plans (step-and-shoot and sliding windows), and a VMAT plan were generated giving minimum target doses of 95% (GTV) and 67% (PTV) of the prescription dose (3 Gy). Two conformal fields with MLC leaves perpendicular and parallel to the tumor motion, respectively, were generated for DMLC tracking. All treatment plans were delivered to the thorax phantom without tumor motion and with a sinusoidal tumor motion. The two conformal fields were delivered with and without portal image guided DMLC tracking based on an embedded gold marker. The target dose distribution was measured with a
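
    The time-binning step can be illustrated with a toy motion trace: the fraction of delivery time spent in each 1 mm position bin becomes the weight of the corresponding isocenter-shifted sub-beam. The sampling rate, amplitude, and period below are invented; only the 1 mm bin width follows the paper's description:

```python
import numpy as np

def motion_bin_weights(positions, bin_width=1.0):
    """Fraction of the delivery time spent in each position bin (bin_width
    in mm); these fractions weight the isocenter-shifted sub-beams."""
    bins = np.floor(np.asarray(positions) / bin_width).astype(int)
    idx, counts = np.unique(bins, return_counts=True)
    centers = (idx + 0.5) * bin_width
    return centers, counts / len(positions)

t = np.linspace(0.0, 20.0, 2001)            # 20 s of delivery sampled at 100 Hz
pos = 5.0 * np.sin(2.0 * np.pi * t / 4.0)   # sinusoidal motion, +/-5 mm, 4 s period
centers, w = motion_bin_weights(pos)
print(w.sum())                              # weights sum to ~1
```

    In the actual method each weight would scale the monitor units of a sub-beam whose isocenter is shifted to the corresponding bin center.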

  19. A novel method for the evaluation of uncertainty in dose volume histogram computation

    CERN Document Server

    Cutanda-Henriquez, Francisco

    2007-01-01

    Dose volume histograms are a useful tool in state-of-the-art radiotherapy planning, and it is essential to be aware of their limitations. Dose distributions computed by treatment planning systems are affected by several sources of uncertainty such as algorithm limitations, measurement uncertainty in the data used to model the beam and residual differences between measured and computed dose, once the model is optimized. In order to take into account the effect of uncertainty, a probabilistic approach is proposed and a new kind of histogram, a dose-expected volume histogram, is introduced. The expected value of the volume in the region of interest receiving an absorbed dose equal or greater than a certain value is found using the probability distribution of the dose at each point. A rectangular probability distribution is assumed for this point dose, and a relationship is given for practical computations. This method is applied to a set of dose volume histograms for different regions of interest for 6 brain pat...

  20. Benchmark Database on Isolated Small Peptides Containing an Aromatic Side Chain: Comparison Between Wave Function and Density Functional Theory Methods and Empirical Force Field

    Energy Technology Data Exchange (ETDEWEB)

    Valdes, Haydee; Pluhackova, Kristyna; Pitonak, Michal; Rezac, Jan; Hobza, Pavel

    2008-03-13

    A detailed quantum chemical study on five peptides (WG, WGG, FGG, GGF and GFA) containing the residues phenylalanyl (F), glycyl (G), tryptophyl (W) and alanyl (A)—where F and W are of aromatic character—is presented. When investigating isolated small peptides, the dispersion interaction is the dominant attractive force in the peptide backbone–aromatic side chain intramolecular interaction. Consequently, an accurate theoretical study of these systems requires the use of a methodology covering properly the London dispersion forces. For this reason we have assessed the performance of the MP2, SCS-MP2, MP3, TPSS-D, PBE-D, M06-2X, BH&H, TPSS, B3LYP, tight-binding DFT-D methods and the ff99 empirical force field against CCSD(T)/complete basis set (CBS) limit benchmark data. All the DFT techniques with a ‘-D’ suffix have been augmented by an empirical dispersion energy, while the M06-2X functional was parameterized to cover the London dispersion energy. For the systems studied here we have concluded that the use of the ff99 force field is not recommended, mainly due to problems concerning the assignment of reliable atomic charges. Tight-binding DFT-D is efficient as a screening tool providing reliable geometries. Among the DFT functionals, the M06-2X and TPSS-D show the best performance, which is explained by the fact that both procedures cover the dispersion energy. The B3LYP and TPSS functionals—not covering this energy—fail systematically. Both electronic energies and geometries obtained by means of the wave-function theory methods compare satisfactorily with the CCSD(T)/CBS benchmark data.

  1. A method to acquire CT organ dose map using OSL dosimeters and ATOM anthropomorphic phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Da; Li, Xinhua; Liu, Bob [Division of Diagnostic Imaging Physics and Webster Center for Advanced Research and Education in Radiation, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts 02114 (United States); Gao, Yiming; Xu, X. George [Nuclear Engineering Program, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States)

    2013-08-15

    Purpose: To present the design and procedure of an experimental method for acquiring densely sampled organ dose map for CT applications, based on optically stimulated luminescence (OSL) dosimeters “nanoDots” and standard ATOM anthropomorphic phantoms; and to provide the results of applying the method—a dose data set with good statistics for the comparison with Monte Carlo simulation result in the future.Methods: A standard ATOM phantom has densely located holes (in 3 × 3 cm or 1.5 × 1.5 cm grids), which are too small (5 mm in diameter) to host many types of dosimeters, including the nanoDots. The authors modified the conventional way in which nanoDots are used, by removing the OSL disks from the holders before inserting them inside a standard ATOM phantom for dose measurements. The authors solved three technical difficulties introduced by this modification: (1) energy dependent dose calibration for raw OSL readings; (2) influence of the brief background exposure of OSL disks to dimmed room light; (3) correct pairing between the dose readings and measurement locations. The authors acquired 100 dose measurements at various positions in the phantom, which was scanned using a clinical chest protocol with both angular and z-axis tube current modulations.Results: Dose calibration was performed according to the beam qualities inside the phantom as determined from an established Monte Carlo model of the scanner. The influence of the brief exposure to dimmed room light was evaluated and deemed negligible. Pairing between the OSL readings and measurement locations was ensured by the experimental design. The organ doses measured for a routine adult chest scan protocol ranged from 9.4 to 18.8 mGy, depending on the composition, location, and surrounding anatomy of the organs. The dose distribution across different slices of the phantom strongly depended on the z-axis mA modulation. In the same slice, doses to the soft tissues other than the spinal cord demonstrated
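
    The calibration chain described (raw OSL reading, reader sensitivity, then a beam-quality correction derived from the Monte Carlo scanner model) can be sketched as below. The function and every number are hypothetical, not the authors' calibration data:

```python
def calibrated_dose(raw_counts, sensitivity, energy_correction):
    """Convert a raw nanoDot OSL reading to absorbed dose in mGy.
    sensitivity: reader counts per mGy at the calibration beam quality.
    energy_correction: factor for the beam quality at the measurement
    position inside the phantom (hypothetical value; the paper derives
    it from a Monte Carlo model of the scanner)."""
    return raw_counts / sensitivity * energy_correction

# All numbers invented: 9400 counts, 1000 counts/mGy, 5% quality correction
print(calibrated_dose(9400.0, 1000.0, 1.05))   # ~9.87 mGy
```

    The pairing of readings to measurement locations, handled experimentally in the paper, is outside the scope of this sketch.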

  2. The Conic Benchmark Format

    DEFF Research Database (Denmark)

    Friberg, Henrik A.

    This document constitutes the technical reference manual of the Conic Benchmark Format with file extension .cbf or .CBF. It unifies linear, second-order cone (also known as conic quadratic) and semidefinite optimization with mixed-integer variables. The format has been designed with benchmark libraries in mind, and therefore focuses on compact and easily parsable representations. The problem structure is separated from the problem data, and the format moreover facilitates benchmarking of hotstart capability through sequences of changes.

  3. A simple method for conversion of airborne gamma-ray spectra to ground level doses

    DEFF Research Database (Denmark)

    Korsbech, Uffe C C; Bargholz, Kim

    1996-01-01

    A new and simple method for conversion of airborne NaI(Tl) gamma-ray spectra to dose rates at ground level has been developed. By weighting the channel count rates with the channel numbers, a spectrum dose index (SDI) is calculated for each spectrum. Ground-level dose rates are then determined by multiplying the SDI by an altitude-dependent conversion factor. The conversion factors are determined from spectra based on Monte Carlo calculations. The results are compared with measurements in a laboratory calibration set-up. IT-NT-27. June 1996. 27 p.
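
    The SDI weighting and the conversion step can be sketched directly from the description; the channel count rates and the conversion factor below are invented for illustration (the real factors come from the Monte Carlo-based altitude tables):

```python
import numpy as np

def spectrum_dose_index(channel_count_rates):
    """Spectrum dose index: channel count rates weighted by channel number."""
    channels = np.arange(1, len(channel_count_rates) + 1)
    return float(np.sum(channels * channel_count_rates))

def ground_level_dose_rate(sdi, conversion_factor):
    """Ground-level dose rate: SDI times an altitude-dependent factor."""
    return sdi * conversion_factor

rates = np.array([12.0, 30.0, 7.0, 1.0])    # invented 4-channel count rates
sdi = spectrum_dose_index(rates)            # 1*12 + 2*30 + 3*7 + 4*1 = 97
print(ground_level_dose_rate(sdi, 2.0))     # with a hypothetical factor of 2
```

    A real airborne spectrum has hundreds of channels; the four-channel example only shows the weighting scheme.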

  4. Aeroelastic Benchmark Experiments Project

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to conduct canonical aeroelastic benchmark experiments. These experiments will augment existing sources for aeroelastic data in the...

  5. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  6. A method for converting dose-to-medium to dose-to-tissue in Monte Carlo studies of gold nanoparticle-enhanced radiotherapy.

    Science.gov (United States)

    Koger, B; Kirkby, C

    2016-03-01

    Gold nanoparticles (GNPs) have shown potential in recent years as a means of therapeutic dose enhancement in radiation therapy. However, a major challenge in moving towards clinical implementation is the exact characterisation of the dose enhancement they provide. Monte Carlo studies attempt to explore this property, but they often face computational limitations when examining macroscopic scenarios. In this study, a method of converting dose from macroscopic simulations, where the medium is defined as a mixture containing both gold and tissue components, to a mean dose-to-tissue on a microscopic scale was established. Monte Carlo simulations were run for both explicitly-modeled GNPs in tissue and a homogeneous mixture of tissue and gold. A dose ratio was obtained for the conversion of dose scored in a mixture medium to dose-to-tissue in each case. Dose ratios varied from 0.69 to 1.04 for photon sources and 0.97 to 1.03 for electron sources. The dose ratio is highly dependent on the source energy as well as GNP diameter and concentration, though this effect is less pronounced for electron sources. By appropriately weighting the monoenergetic dose ratios obtained, the dose ratio for any arbitrary spectrum can be determined. This allows complex scenarios to be modeled accurately without explicitly simulating each individual GNP. PMID:26895030
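
    The spectrum-weighting step can be sketched as a fluence-weighted average of monoenergetic dose ratios. The monoenergetic values below are hypothetical numbers spanning the 0.69-1.04 range reported in the abstract, and the spectrum is invented:

```python
import numpy as np

def spectrum_weighted_ratio(energies, fluence, mono_energies, mono_ratios):
    """Fluence-weighted average of monoenergetic dose ratios, interpolated
    to the energies of an arbitrary photon spectrum."""
    r = np.interp(energies, mono_energies, mono_ratios)
    w = fluence / fluence.sum()
    return float(np.sum(w * r))

# Hypothetical monoenergetic ratios spanning the reported 0.69-1.04 range
mono_e = np.array([0.02, 0.1, 0.5, 1.0, 6.0])     # MeV
mono_r = np.array([0.69, 0.80, 0.98, 1.02, 1.04])
spec_e = np.array([0.05, 0.1, 0.3])               # invented spectrum energies, MeV
spec_f = np.array([1.0, 2.0, 1.0])                # invented relative fluence
print(spectrum_weighted_ratio(spec_e, spec_f, mono_e, mono_r))
```

    In practice the energy dependence also varies with GNP diameter and concentration, so a separate set of monoenergetic ratios would be needed per configuration.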

  7. A method for converting dose-to-medium to dose-to-tissue in Monte Carlo studies of gold nanoparticle-enhanced radiotherapy

    Science.gov (United States)

    Koger, B.; Kirkby, C.

    2016-03-01

    Gold nanoparticles (GNPs) have shown potential in recent years as a means of therapeutic dose enhancement in radiation therapy. However, a major challenge in moving towards clinical implementation is the exact characterisation of the dose enhancement they provide. Monte Carlo studies attempt to explore this property, but they often face computational limitations when examining macroscopic scenarios. In this study, a method of converting dose from macroscopic simulations, where the medium is defined as a mixture containing both gold and tissue components, to a mean dose-to-tissue on a microscopic scale was established. Monte Carlo simulations were run for both explicitly-modeled GNPs in tissue and a homogeneous mixture of tissue and gold. A dose ratio was obtained for the conversion of dose scored in a mixture medium to dose-to-tissue in each case. Dose ratios varied from 0.69 to 1.04 for photon sources and 0.97 to 1.03 for electron sources. The dose ratio is highly dependent on the source energy as well as GNP diameter and concentration, though this effect is less pronounced for electron sources. By appropriately weighting the monoenergetic dose ratios obtained, the dose ratio for any arbitrary spectrum can be determined. This allows complex scenarios to be modeled accurately without explicitly simulating each individual GNP.

  8. Radiation dose to children in diagnostic radiology. Measurements and methods for clinical optimisation studies

    International Nuclear Information System (INIS)

    A method for estimating mean absorbed dose to different organs and tissues was developed for paediatric patients undergoing X-ray investigations. The absorbed dose distribution in water was measured for the specific X-ray beam used. Clinical images were studied to determine X-ray beam positions and field sizes. Size and position of organs in the patient were estimated using ORNL phantoms and complementary clinical information. Conversion factors between the mean absorbed dose to various organs and entrance surface dose for five different body sizes were calculated. Direct measurements on patients estimating entrance surface dose and energy imparted for common X-ray investigations were performed. The examination technique for a number of paediatric X-ray investigations used in 19 Swedish hospitals was studied. For a simulated pelvis investigation of a 1-year old child the entrance surface dose was measured and image quality was estimated using a contrast-detail phantom. Mean absorbed doses to organs and tissues in urography, lung, pelvis, thoracic spine, lumbar spine and scoliosis investigations were calculated. Calculations of effective dose were supplemented with risk calculations for specific organs, e.g. the female breast. The work shows that the examination technique in paediatric radiology is not yet optimised, and that the non-optimised procedures contribute to a considerable variation in radiation dose. In order to optimise paediatric radiology there is a need for more standardised methods in patient dosimetry. It is especially important to relate measured quantities to the size of the patient, using e.g. the patient weight and length. 91 refs, 17 figs, 8 tabs

  9. Radiation dose to children in diagnostic radiology. Measurements and methods for clinical optimisation studies

    Energy Technology Data Exchange (ETDEWEB)

    Almen, A.J.

    1995-09-01

    A method for estimating the mean absorbed dose to different organs and tissues was developed for paediatric patients undergoing X-ray investigations. The absorbed dose distribution in water was measured for the specific X-ray beam used. Clinical images were studied to determine X-ray beam positions and field sizes. The size and position of organs in the patient were estimated using ORNL phantoms and complementary clinical information. Conversion factors between the mean absorbed dose to various organs and the entrance surface dose were calculated for five different body sizes. Direct measurements on patients, estimating entrance surface dose and energy imparted, were performed for common X-ray investigations. The examination technique for a number of paediatric X-ray investigations used in 19 Swedish hospitals was studied. For a simulated pelvis investigation of a 1-year-old child, the entrance surface dose was measured and image quality was estimated using a contrast-detail phantom. Mean absorbed doses to organs and tissues in urography, lung, pelvis, thoracic spine, lumbar spine and scoliosis investigations were calculated. Calculations of effective dose were supplemented with risk calculations for specific organs, e.g. the female breast. The work shows that the examination technique in paediatric radiology is not yet optimised, and that the non-optimised procedures contribute to a considerable variation in radiation dose. In order to optimise paediatric radiology there is a need for more standardised methods in patient dosimetry. It is especially important to relate measured quantities to the size of the patient, using e.g. the patient's weight and length. 91 refs, 17 figs, 8 tabs.

  10. Optimization in radiotherapy treatment planning thanks to a fast dose calculation method

    International Nuclear Information System (INIS)

    This thesis deals with radiotherapy treatment planning, which requires a fast and reliable treatment planning system (TPS). The TPS is composed of a dose calculation algorithm and an optimization method. The objective is to design a plan that delivers the dose to the tumor while preserving the surrounding healthy and sensitive tissues. Treatment planning aims to determine the radiation parameters best suited to each patient's treatment. In this thesis, the parameters of treatment with IMRT (intensity-modulated radiation therapy) are the beam angles and the beam intensities. The objective function is multi-criteria with linear constraints. The main objective of this thesis is to demonstrate the feasibility of a treatment planning optimization method based on a fast dose-calculation technique developed by (Blanpain, 2009). This technique computes the dose by segmenting the patient's phantom into homogeneous meshes. The dose computation is divided into two steps. The first step concerns the meshes: projections and weights are set according to physical and geometrical criteria. The second step concerns the voxels: the dose is computed by evaluating the functions previously associated with their mesh. A reformulation of this technique makes it possible to solve the optimization problem by a gradient descent algorithm. The main advantage of this method is that the beam angle parameters can be optimized continuously in 3 dimensions. The results obtained in this thesis offer many opportunities in the field of radiotherapy treatment planning optimization. (author)
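
    A gradient-descent optimization of treatment parameters can be sketched on a toy problem. The dose matrix, prescription and step size below are illustrative assumptions, and only beam weights (not beam angles) are optimized here.

```python
# Sketch: gradient descent on a toy IMRT-like objective. Beam weights x
# are adjusted so the delivered dose D @ x matches a prescription, with
# non-negativity enforced by projection. Tiny illustrative numbers only.

def optimize_weights(D, d_presc, steps=2000, lr=0.01):
    n_beams = len(D[0])
    x = [0.0] * n_beams
    for _ in range(steps):
        # residual r = D @ x - d_presc
        r = [sum(D[i][j] * x[j] for j in range(n_beams)) - d_presc[i]
             for i in range(len(D))]
        # gradient of 0.5*||r||^2 is D^T @ r
        g = [sum(D[i][j] * r[i] for i in range(len(D)))
             for j in range(n_beams)]
        x = [max(0.0, xj - lr * gj) for xj, gj in zip(x, g)]  # project x >= 0
    return x

D = [[1.0, 0.2],   # dose to voxel 0 per unit weight of beams 0 and 1
     [0.3, 1.0]]   # dose to voxel 1
d_presc = [1.0, 1.0]
x = optimize_weights(D, d_presc)
print([round(v, 3) for v in x])
```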

  11. Method for pulse to pulse dose reproducibility applied to electron linear accelerators

    International Nuclear Information System (INIS)

    An original method for obtaining programmed single beam shots and pulse trains with a programmed pulse number, pulse repetition frequency, pulse duration and pulse dose is presented. It is particularly useful for automatic control of the absorbed dose rate level and irradiation process control, as well as in pulse radiolysis studies, single-pulse dose measurement, or research experiments where pulse-to-pulse dose reproducibility is required. The method is applied to the electron linear accelerators ALIN-10 (6.23 MeV, 82 W) and ALID-7 (5.5 MeV, 670 W), built at NILPRP. To implement this method, the accelerator triggering system (ATS) consists of two branches: the gun branch and the magnetron branch. The ATS, which synchronizes all the system units, delivers trigger pulses at a programmed repetition rate (up to 250 pulses/s) to the gun (80 kV, 10 A, 4 ms) and magnetron (45 kV, 100 A, 4 ms). The existence of the accelerated electron beam is determined by the overlapping of the electron gun and magnetron pulses. The method consists in controlling the overlapping of these pulses in order to deliver the beam in the desired sequence. This control is implemented by discrete pulse-position modulation of the gun and/or magnetron pulses. The instabilities of the gun and magnetron transient regimes are avoided by operating the accelerator with no accelerated beam for a certain time. At the operator's 'beam start' command, the ATS makes the electron gun and magnetron pulses overlap and the linac beam is generated. The pulse-to-pulse absorbed dose variation is thus considerably reduced. A programmed absorbed dose, irradiation time, beam pulse number or other external event may interrupt the coincidence between the gun and magnetron pulses. Slow absorbed dose variation is compensated by control of the pulse duration and repetition frequency. Two methods are reported in the electron linear accelerators' development for obtaining pulse-to-pulse dose reproducibility: the method
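
    The interrupt logic described above (a programmed dose, pulse count or irradiation time ending the gun/magnetron pulse coincidence) can be sketched as a simple control loop. The per-pulse dose and limits are illustrative numbers, not parameters of the ALIN-10 or ALID-7 machines.

```python
# Sketch: pulse-train delivery that stops when the programmed absorbed
# dose, pulse count or irradiation time would be exceeded, mirroring the
# interrupt conditions in the abstract. Numbers are illustrative.

def deliver(dose_per_pulse, prf_hz, target_dose=None,
            max_pulses=None, max_time_s=None):
    dose, pulses, t = 0.0, 0, 0.0
    period = 1.0 / prf_hz
    while True:
        if target_dose is not None and dose + dose_per_pulse > target_dose:
            break  # next pulse would overshoot the programmed dose
        if max_pulses is not None and pulses >= max_pulses:
            break
        if max_time_s is not None and t + period > max_time_s:
            break
        dose += dose_per_pulse   # gun and magnetron pulses overlap: beam on
        pulses += 1
        t += period
    return dose, pulses, t

dose, pulses, t = deliver(dose_per_pulse=0.5, prf_hz=250, target_dose=100.0)
print(pulses, dose)
```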

  12. An Effective Approach for Benchmarking Implementation

    Directory of Open Access Journals (Sweden)

    B. M. Deros

    2011-01-01

    Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually to increase their competitiveness in the market. The study begins with a literature review on benchmarking definitions, barriers to and advantages of implementation, and benchmarking frameworks. Approach: Thirty respondents were involved in the case study. They comprised industrial practitioners who assessed the usability and practicability of the guideline, conceptual framework and computerized mini program. Results: A guideline and template were proposed to simplify the adoption of benchmarking techniques. A conceptual framework was proposed by integrating Deming's PDCA and Six Sigma DMAIC theory. It provided a step-by-step method to simplify implementation and to optimize the benchmarking results. A computerized mini program was suggested to assist users in adopting the technique as part of an improvement project. In the assessment test, the respondents found that the implementation method gave companies an idea of how to initiate benchmarking implementation and guided them towards achieving the desired goal as set in a benchmarking project. Conclusion: The results obtained and discussed in this study can be applied to implement benchmarking in a more systematic way and to ensure its success.

  13. SU-E-T-91: Correction Method to Determine Surface Dose for OSL Detectors

    International Nuclear Information System (INIS)

    Purpose: OSL detectors are commonly used in the clinic for verification of doses beyond dmax, owing to their numerous advantages, such as a linear response and negligible energy, angle and temperature dependence in the clinical range. However, due to their bulky shielding envelope, this type of detector fails to measure skin dose, which is an important predictor of a patient's ability to finish treatment on time and of the possibility of acute side effects. This study aims to optimize the methodology for determining skin dose for conventional accelerators and a flattening-filter-free Tomotherapy unit. Methods: Measurements were made for two x-ray beams: 6 MV (Varian Clinac 2300, 10×10 cm2 open field, SSD = 100 cm) and 5.5 MV (Tomotherapy, 15×40 cm2 field, SAD = 85 cm). The detectors were placed at the surface of a solid water phantom and at the reference depth (dref = 1.7 cm for the Varian 2300, dref = 1.0 cm for Tomotherapy). The OSL readings were related to readings from externally exposed OSLs, and were further corrected to surface dose using an extrapolation method indexed to baseline Attix ion chamber measurements. A consistent use of the extrapolation method involved: 1) irradiation of three OSLs stacked on top of each other on the surface of the phantom; 2) measurement of the relative dose value for each layer; and 3) extrapolation of these values to zero thickness. Results: OSL measurements overestimated surface doses by a factor of 2.31 for the Varian 2300 and 2.65 for Tomotherapy. The relationships SD_2300 = 0.68 × M_2300 − 12.7 and SD_Tomo = 0.73 × M_Tomo − 13.1 were found to correct single OSL measurements to surface doses in agreement with the Attix measurements to within 0.1% for both machines. Conclusion: This work provides simple empirical relationships for surface dose measurements using single OSL detectors
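
    The three-layer extrapolation step can be sketched as a least-squares straight line extrapolated to zero detector thickness. The depths and relative readings below are hypothetical, not the measured data from the study.

```python
# Sketch: extrapolating a three-layer OSL stack reading to zero detector
# thickness with a linear least-squares fit. Values are hypothetical.

def extrapolate_to_zero(depths, readings):
    """Fit reading(depth) with a straight line; return the intercept."""
    n = len(depths)
    mx = sum(depths) / n
    my = sum(readings) / n
    sxx = sum((x - mx) ** 2 for x in depths)
    sxy = sum((x - mx) * (y - my) for x, y in zip(depths, readings))
    slope = sxy / sxx
    return my - slope * mx  # intercept = zero-thickness (surface) reading

# effective depths of the 3 stacked OSLs (mm) and their relative doses (%)
depths   = [0.2, 0.6, 1.0]     # hypothetical effective depths
readings = [58.0, 66.0, 74.0]  # hypothetical relative dose per layer

print(extrapolate_to_zero(depths, readings))
```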

  14. A study on the indirect urea dosing method in the Selective Catalytic Reduction system

    Science.gov (United States)

    Brzeżański, M.; Sala, R.

    2016-09-01

    This article presents the results of studies on a concept solution for dosing urea in the gas phase in a selective catalytic reduction system. The idea of the concept was to heat up and evaporate the water-urea solution before introducing it into the exhaust gas stream. The aim was to enhance the conversion of urea into ammonia, which is the target reductant for nitrogen oxide treatment. The study was conducted on a medium-duty Euro 5 diesel engine with an exhaust line consisting of a DOC catalyst, a DPF filter and an SCR system, with a changeable setup allowing urea to be dosed either in the liquid phase (regular solution) or in the gas phase (concept solution). The main criterion was to assess the effect of the physical state of the dosed urea on the NOx conversion ratio in the SCR catalyst. To compare the two urea dosing methods, a special test procedure was developed consisting of six test steps covering a wide exhaust gas temperature range at steady-state engine operating conditions. Tests were conducted for different urea dosing quantities defined by the equivalence ratio. Based on the obtained results, a remarkable improvement in NOx reduction was found for the gas-phase urea application in comparison with standard liquid urea dosing. The measured results indicate a high potential to increase the efficiency of the SCR catalyst by using gas-phase urea and provide the basis for further research on this type of concept.
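
    The NOx conversion ratio used as the comparison criterion is a simple quantity; a sketch with illustrative concentrations (not the study's measured data):

```python
# Sketch: NOx conversion ratio across the SCR catalyst,
# (NOx_in - NOx_out) / NOx_in. Concentrations are illustrative.

def nox_conversion(nox_in_ppm, nox_out_ppm):
    return (nox_in_ppm - nox_out_ppm) / nox_in_ppm

# hypothetical engine-out and tailpipe NOx for the two dosing methods
liquid = nox_conversion(600.0, 180.0)   # regular liquid urea dosing
gas    = nox_conversion(600.0, 90.0)    # concept gas-phase urea dosing

print(round(liquid, 2), round(gas, 2))
```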

  15. Benchmark of the non-parametric Bayesian deconvolution method implemented in the SINBAD code for X/γ rays spectra processing

    Science.gov (United States)

    Rohée, E.; Coulon, R.; Carrel, F.; Dautremer, T.; Barat, E.; Montagu, T.; Normand, S.; Jammes, C.

    2016-11-01

    Radionuclide identification and quantification are a serious concern in many applications, such as in situ monitoring at nuclear facilities, laboratory analysis, special nuclear materials detection, environmental monitoring, and waste measurements. High-resolution gamma-ray spectrometry based on high-purity germanium diode detectors is the best solution available for isotopic identification. Over the last decades, methods have been developed to improve gamma spectra analysis. However, difficulties remain in the analysis when full-energy peaks are folded together with a high ratio between their amplitudes, and when the Compton background is much larger than the signal of a single peak. In this context, this study compares a conventional analysis based on the "iterative peak fitting deconvolution" method with a "nonparametric Bayesian deconvolution" approach developed by the CEA LIST and implemented in the SINBAD code. The iterative peak fit deconvolution is used in this study as a reference method, largely validated by industrial standards, to unfold complex spectra from HPGe detectors. Complex spectra from IAEA benchmark protocol tests and from measurements are studied. The SINBAD code shows promising deconvolution capabilities compared with the conventional method, without any expert parameter fine-tuning.

  16. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and thereby to explore...

  17. Handleiding benchmark VO

    NARCIS (Netherlands)

    Blank, j.l.t.

    2008-01-01

    25 November 2008, by IPSE Studies. By J.L.T. Blank. A guide to reading the i

  18. Benchmark af erhvervsuddannelserne

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    In this working paper we discuss how the Danish vocational schools can be benchmarked, and we present the results of a number of calculation models. Benchmarking the vocational schools is conceptually complicated. The schools offer a wide range of different programmes. This makes it difficult...

  19. Computational benchmark problem for deep penetration in iron

    International Nuclear Information System (INIS)

    A calculational benchmark problem which is simple to model and easy to interpret is described. The benchmark consists of monoenergetic 2-, 4-, or 40-MeV neutrons normally incident upon a 3-m-thick pure iron slab. Currents, fluxes, and radiation doses are tabulated throughout the slab
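
    For orientation, the uncollided (unscattered) component of the flux in such a slab follows simple exponential attenuation; the full benchmark of course requires a transport calculation, and the cross section below is an illustrative value, not an evaluated iron cross section.

```python
import math

# Sketch: uncollided flux through a thick slab, phi(x) = phi0 * exp(-Sigma_t * x).
# This reproduces only the unscattered beam; scattered contributions need
# a transport code. The macroscopic cross section is an illustrative value.

def uncollided_flux(phi0, sigma_t_per_cm, depth_cm):
    return phi0 * math.exp(-sigma_t_per_cm * depth_cm)

sigma_t = 0.25   # hypothetical total macroscopic cross section (1/cm)
phi0 = 1.0       # normalized incident current
for depth in (0.0, 10.0, 50.0):
    print(depth, uncollided_flux(phi0, sigma_t, depth))
```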

  20. Radioactivity in food and the environment: calculations of UK radiation doses using integrated assessment methods

    International Nuclear Information System (INIS)

    A new method for estimating radiation doses to UK critical groups is proposed for discussion. Amongst others, the Food Standards Agency (FSA) and the Scottish Environment Protection Agency (SEPA) undertake surveillance of UK food and the environment as a check on the effect of discharges of radioactive wastes. Discharges in gaseous and liquid form are made under authorisation by the Environment Agency and SEPA under powers in the Radioactive Substances Act. Results of surveillance by the FSA and SEPA are published in the Radioactivity in Food and the Environment (RIFE) report series. In these reports, doses to critical groups are normally estimated separately for gaseous and liquid discharge pathways. Simple summation of these doses would tend to overestimate the doses actually received. Three different methods of combining the effects of both types of discharge in an integrated assessment are considered and ranked according to their ease of application, transparency, scientific rigour and presentational issues. A single integrated assessment method is then chosen for further study. Doses are calculated from surveillance data for the calendar year 2000 and compared with those from the existing RIFE method

  1. Fluoxetine Dose and Administration Method Differentially Affect Hippocampal Plasticity in Adult Female Rats

    Directory of Open Access Journals (Sweden)

    Jodi L. Pawluski

    2014-01-01

    Selective serotonin reuptake inhibitor (SSRI) medications are one of the most common treatments for mood disorders. In humans, these medications are taken orally, usually once per day. Unfortunately, administration of antidepressant medications in rodent models is often through injection, oral gavage, or minipump implant, all relatively stressful procedures. The aim of the present study was to investigate how administration of the commonly used SSRI fluoxetine via a wafer cookie compares to fluoxetine administration using an osmotic minipump, with regard to serum drug levels and hippocampal plasticity. For this experiment, adult female Sprague-Dawley rats were divided across the two administration methods, (1) cookie and (2) osmotic minipump, and three fluoxetine treatment doses: 0, 5, or 10 mg/kg/day. Results show that a fluoxetine dose of 5 mg/kg/day, but not 10 mg/kg/day, results in comparable serum levels of fluoxetine and its active metabolite norfluoxetine between the two administration methods. Furthermore, minipump administration of fluoxetine resulted in higher levels of cell proliferation in the granule cell layer (GCL) at a 5 mg dose compared to a 10 mg dose. Synaptophysin expression in the GCL, but not CA3, was significantly lower after fluoxetine treatment, regardless of administration method. These data suggest that the administration method and dose of fluoxetine can differentially affect hippocampal plasticity in the adult female rat.

  2. Environmental dose rate assessment of ITER using the Monte Carlo method

    Directory of Open Access Journals (Sweden)

    Karimian Alireza

    2014-01-01

    Exposure to radiation is one of the main sources of risk to staff employed in reactor facilities. The staff of a tokamak are exposed to a wide range of neutron and photon fields around the tokamak hall. The International Thermonuclear Experimental Reactor (ITER) is a nuclear fusion engineering project and the most advanced experimental tokamak in the world. From the radiobiological point of view, assessment of ITER dose rates is particularly important. The aim of this study is to assess the amount of radiation in ITER during its normal operation, in a radial direction from the plasma chamber to the tokamak hall. To achieve this goal, the ITER system and its components were simulated by the Monte Carlo method using the MCNPX 2.6.0 code. Furthermore, the equivalent dose rates of some radiosensitive organs of the human body were calculated using the Medical Internal Radiation Dose (MIRD) phantom. Our study is based on deuterium-tritium plasma burning with 14.1 MeV neutron production, as well as photon radiation due to neutron activation. As our results show, the total equivalent dose rate outside the bioshield wall of the tokamak hall is about 1 mSv per year, which is less than the annual occupational dose limit during normal operation of ITER. The equivalent dose rates of the radiosensitive organs show that the kidney receives the maximum dose rate. The data may help calculate how long staff can stay in such an environment before the equivalent dose rates reach the whole-body dose limits.

  3. Benchmarking monthly homogenization algorithms

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study, as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous values at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
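
    The centered root mean square error used as the first performance metric can be sketched as follows, with toy series standing in for the benchmark data.

```python
import math

# Sketch: centered RMSE between a homogenized series and the true
# homogeneous series. Means are removed first, so only the shape of the
# series is compared. The data below are toy values.

def centered_rmse(estimate, truth):
    me = sum(estimate) / len(estimate)
    mt = sum(truth) / len(truth)
    return math.sqrt(sum(((e - me) - (t - mt)) ** 2
                         for e, t in zip(estimate, truth)) / len(truth))

truth    = [10.0, 10.5, 11.0, 11.5]
estimate = [10.2, 10.4, 11.2, 11.4]   # homogenized series with small errors

print(round(centered_rmse(estimate, truth), 4))
```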

  4. SSI and structural benchmarks

    International Nuclear Information System (INIS)

    This paper presents the latest results of the ongoing program entitled, Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focussed on three tasks, namely, (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it has been concluded that the pore water can influence significantly the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was further extended to include cases associated with variable water table depths. In this paper, results related to cut-off depths beyond which the pore water effects can be ignored in seismic calculations, are addressed. Comprehensive numerical data are given for soil configurations typical to those encountered in nuclear plant sites. These data were generated by using a modified version of the SLAM code which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054) which was held in Bethesda, MD on June 1, 1986. Finally, recent efforts related to the task on the structural benchmarks are described

  5. In-situ gamma spectroscopy; An alternative method to evaluate external effective radiation dose

    International Nuclear Information System (INIS)

    Two types of approach are possible for estimating radiation doses from environmental radiation: (1) measure radiation fields in the place of interest and presume that people are exposed to the same field; (2) make actual measurements on individual members of the population studied, using thermoluminescent dosimeters (TLD). The latter approach, though difficult, is ideal. The objective of the present study was to investigate the possibility of using the first approach, with in-situ gamma spectrometry as an alternative method to evaluate the external effective dose. The results obtained in this way provide a means of evaluating both approaches. Six houses were selected for this study from an area where an average radiation dose rate of 5.0 microSv per hour was measured using a hand-held survey meter. At all study sites both TLD and in-situ measurements with a portable HPGe detector were carried out. The detector was calibrated for field measurements and the activity concentrations of the radionuclides identified in the gamma spectra were calculated. The calculated detector efficiency values for field measurements at 1461, 1764 and 2615 keV were 2.40, 2.03 and 1.44, respectively. The external effective dose was calculated using the corresponding kerma rates for the analysed radionuclides. Evaluations of the effective dose by the two approaches are reasonably correlated (r² = 0.87) for dose measurements between 2.0 and 6.0 mSv. In-situ measurements gave higher values than the TL readings because in-situ data are more representative of the surroundings. This study suggests that in-situ gamma spectrometry permits rapid and efficient identification and quantification of gamma-emitting radionuclides in surface and subsurface soil, and can be used as an alternative rapid method to determine population doses from environmental radiation, particularly in an event such as radioactive contamination. TL measurements provide only an integrated dose and would require an extended time period
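
    The conversion from a net full-energy peak count rate to an activity, implied by the field calibration above, can be sketched as A = R/(ε·p_γ). The efficiency below is a hypothetical stand-in, not one of the study's calibration values.

```python
# Sketch: converting a net full-energy peak count rate from an in-situ
# HPGe spectrum to an activity using a field-calibrated efficiency:
# A = R / (eps * p_gamma). Efficiency and rate values are illustrative.

def activity(net_count_rate_cps, efficiency, gamma_yield):
    return net_count_rate_cps / (efficiency * gamma_yield)

# e.g. the 1461 keV line of K-40 (gamma yield ~0.107); the efficiency here
# is a hypothetical field-geometry value, not the study's calibration
rate = 1.5   # net counts per second in the 1461 keV peak
print(round(activity(rate, 0.024, 0.107), 1))
```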

  6. Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Tao, Yinghua [Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States); Chen, Guang-Hong [Department of Medical Physics and Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States); Hacker, Timothy A.; Raval, Amish N. [Department of Medicine, University of Wisconsin-Madison, Madison, Wisconsin 53792 (United States); Van Lysel, Michael S.; Speidel, Michael A., E-mail: speidel@wisc.edu [Department of Medical Physics and Department of Medicine, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States)

    2014-07-15

    Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, radiation dose can be potentially high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high (500 mA) and low (25 mA) dose dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to data sets acquired at low dose. Image voxel noise was matched between the low dose SIR and high dose FBP reconstructions. CT perfusion maps were compared among the low dose FBP, low dose SIR and high dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: For in vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937
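
    The percent flow change used above to compare occluded and open-coronary scans is 100·(occluded − open)/open; a sketch with illustrative flow values, not the study's measurements:

```python
# Sketch: percent flow change between occluded and open-coronary scans,
# 100 * (occluded - open) / open. Flow values are illustrative.

def percent_change(occluded_flow, open_flow):
    return 100.0 * (occluded_flow - open_flow) / open_flow

print(round(percent_change(12.0, 100.0), 1))  # prints -88.0
```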

  7. The role of benchmarking for yardstick competition

    International Nuclear Information System (INIS)

    With the increasing interest in yardstick regulation, there is a need to understand the most appropriate method for realigning tariffs at the outset. Benchmarking is the tool used for such realignment and is therefore a necessary first-step in the implementation of yardstick competition. A number of concerns have been raised about the application of benchmarking, making some practitioners reluctant to move towards yardstick based regimes. We assess five of the key concerns often discussed and find that, in general, these are not as great as perceived. The assessment is based on economic principles and experiences with applying benchmarking to regulated sectors, e.g. in the electricity and water industries in the UK, The Netherlands, Austria and Germany in recent years. The aim is to demonstrate that clarity on the role of benchmarking reduces the concern about its application in different regulatory regimes. We find that benchmarking can be used in regulatory settlements, although the range of possible benchmarking approaches that are appropriate will be small for any individual regulatory question. Benchmarking is feasible as total cost measures and environmental factors are better defined in practice than is commonly appreciated and collusion is unlikely to occur in environments with more than 2 or 3 firms (where shareholders have a role in monitoring and rewarding performance). Furthermore, any concern about companies under-recovering costs is a matter to be determined through the regulatory settlement and does not affect the case for using benchmarking as part of that settlement. (author)

  8. Comparing the accuracy of high-dimensional neural network potentials and the systematic molecular fragmentation method: A benchmark study for all-trans alkanes

    Science.gov (United States)

    Gastegger, Michael; Kauffmann, Clemens; Behler, Jörg; Marquetand, Philipp

    2016-05-01

    Many approaches developed to express the potential energy of large systems exploit the locality of the atomic interactions. A prominent example is fragmentation methods, in which the quantum chemical calculations are carried out for overlapping small fragments of a given molecule and are then combined in a second step to yield the system's total energy. Here we compare the accuracy of the systematic molecular fragmentation approach with the performance of high-dimensional neural network (HDNN) potentials introduced by Behler and Parrinello. HDNN potentials are similar in spirit to the fragmentation approach in that the total energy is constructed as a sum of environment-dependent atomic energies, which are derived indirectly from electronic structure calculations. As a benchmark set, we use all-trans alkanes containing up to eleven carbon atoms at the coupled cluster level of theory. These molecules were chosen because they allow reliable reference energies to be extrapolated for very long chains, enabling an assessment of the energies obtained by both methods for alkanes containing up to 10 000 carbon atoms. We find that both methods predict high-quality energies, with the HDNN potentials yielding smaller errors with respect to the coupled cluster reference.
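
    The construction shared by both methods, a total energy built as a sum of environment-dependent atomic energies, can be sketched with a stand-in atomic energy function. In a real HDNN potential this function is an element-specific neural network evaluated on symmetry-function descriptors; here it is a toy model.

```python
# Sketch: total energy as a sum of environment-dependent atomic
# contributions. Each atomic energy comes from a stand-in function of a
# local "environment descriptor"; the descriptor and energy model below
# are toys, not a trained HDNN.

def total_energy(environments, atomic_energy_fn):
    return sum(atomic_energy_fn(env) for env in environments)

# toy descriptor: carbon coordination number; toy atomic energy model
def toy_atomic_energy(coordination):
    return -1.0 - 0.1 * coordination   # hypothetical, arbitrary units

chain = [1, 2, 2, 2, 1]  # coordination numbers along a short alkane-like chain
print(total_energy(chain, toy_atomic_energy))
```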

  9. Benchmarking of corporate social responsibility: Methodological problems and robustness.

    OpenAIRE

    Graafland, J.J.; Eijffinger, S.C.W.; Smid, H.

    2004-01-01

    This paper investigates the possibilities and problems of benchmarking Corporate Social Responsibility (CSR). After a methodological analysis of the advantages and problems of benchmarking, we develop a benchmark method that includes economic, social and environmental aspects as well as national and international aspects of CSR. The overall benchmark is based on a weighted average of these aspects. The weights are based on the opinions of companies and NGOs. Using different me...

  10. Determination of gelation dose of poly(vinyl acetate) by a spectrophotometric method

    Energy Technology Data Exchange (ETDEWEB)

    Guven, Olgun; Yigit, Fatma

    1986-01-01

    The gelation point is an important property of polymers undergoing crosslinking when subjected to high energy radiation. This point is generally determined by viscometric and solubility methods or by mechanical measurements. When crosslinking and discoloration take place simultaneously, gelation doses can be determined spectrophotometrically. In this work it is demonstrated that the gelation dose of poly(vinyl acetate) can be determined by simply recording the UV-vis spectra of solutions of the γ-irradiated polymer. The reliability of the method is verified by viscometric and solubility measurements.

  11. Simple Evaluation Method of Atmospheric Plasma Irradiation Dose using pH of Water

    Science.gov (United States)

    Koga, Kazunori; Sarinont, Thapanut; Amano, Takaaki; Seo, Hyunwoong; Itagaki, Naho; Nakatsu, Yoshimichi; Tanaka, Akiyo; Shiratani, Masaharu

    2015-09-01

    Atmospheric discharge plasmas are promising for agricultural productivity improvements and novel medical therapies, because plasma provides a high flux of short-lifetime reactive species at low temperature, leading to low damage to the living body. For plasma-bio applications, various kinds of plasma systems are employed, so common evaluation methods are needed to compare plasma irradiation dose quantitatively among the systems. Here we offer a simple evaluation method of plasma irradiation dose using the pH of water. Experiments were carried out with a scalable DBD device. 300 μl of deionized water was placed in a quartz 96-microwell plate 3 mm below the electrode. The pH value was measured immediately after 10 minutes of irradiation and evaluated as a function of plasma irradiation dose. Atmospheric air plasma irradiation decreases the pH of water with increasing dose. We also measured concentrations of chemical species such as nitrites, nitrates and H2O2. The results indicate our method is promising for evaluating plasma irradiation dose quantitatively.

  12. Pan-specific MHC class I predictors: A benchmark of HLA class I pan-specific prediction methods

    DEFF Research Database (Denmark)

    Zhang, Hao; Lundegaard, Claus; Nielsen, Morten

    2009-01-01

    emerging pathogens. Methods have recently been published that are able to predict peptide binding to any human MHC class I molecule. In contrast to conventional allele-specific methods, these methods do allow for extrapolation to un-characterized MHC molecules. These pan-specific HLA predictors have not previously been compared using independent evaluation sets. Results: A diverse set of quantitative peptide binding affinity measurements was collected from IEDB, together with a large set of HLA class I ligands from the SYFPEITHI database. Based on these data sets, three different pan-specific HLA web-accessible predictors NetMHCpan, Adaptive-Double-Threading (ADT), and KISS were evaluated. The performance of the pan-specific predictors was also compared to a well performing allele-specific MHC class I predictor, NetMHC, as well as a consensus approach integrating the predictions from the NetMHC and Net...

  13. Application of combined TLD and CR-39 PNTD method for measurement of total dose and dose equivalent on ISS

    Energy Technology Data Exchange (ETDEWEB)

    Benton, E.R. [Eril Research, Inc., Stillwater, Oklahoma (United States); Deme, S.; Apathy, I. [KFKI Atomic Energy Research Institute, Budapest (Hungary)

    2006-07-01

    To date, no single passive detector has been found that measures dose equivalent from ionizing radiation exposure in low-Earth orbit. We have developed the I.S.S. Passive Dosimetry System (P.D.S.), utilizing a combination of TLD in the form of the self-contained Pille TLD system and stacks of CR-39 plastic nuclear track detector (P.N.T.D.) oriented in three mutually orthogonal directions, to measure total dose and dose equivalent aboard the International Space Station (I.S.S.). The Pille TLD system, consisting of an on-board reader and a large number of CaSO{sub 4}:Dy TLD cells, is used to measure absorbed dose. The Pille TLD cells are read out and annealed by the I.S.S. crew on orbit, such that dose information for any time period or condition, e.g. for E.V.A. or following a solar particle event, is immediately available. Near-tissue-equivalent CR-39 P.N.T.D. provides LET spectrum, dose, and dose equivalent from charged particles of LET{sub {infinity}}H{sub 2}O {>=} 10 keV/{mu}m, including the secondaries produced in interactions with high-energy neutrons. Dose information from CR-39 P.N.T.D. is used to correct the absorbed dose component {>=} 10 keV/{mu}m measured in TLD to obtain total dose. Dose equivalent from CR-39 P.N.T.D. is combined with the dose component <10 keV/{mu}m measured in TLD to obtain total dose equivalent. Dose rates ranging from 165 to 250 {mu}Gy/day and dose equivalent rates ranging from 340 to 450 {mu}Sv/day were measured aboard I.S.S. during the Expedition 2 mission in 2001. Results from the P.D.S. are consistent with those from other passive detectors tested as part of the ground-based I.C.C.H.I.B.A.N. intercomparison of space radiation dosimeters. (authors)
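The TLD/CR-39 combination rule described above can be sketched as follows. The relative TLD efficiency for high-LET tracks and the daily values are illustrative assumptions, not the calibration actually used for the P.D.S.:

```python
def total_dose(d_tld, d_cr39_high, tld_rel_eff=0.6):
    # Total absorbed dose: subtract the TLD's under-responding high-LET
    # signal and substitute the CR-39 measurement of dose >= 10 keV/um.
    # tld_rel_eff (assumed) is the TLD efficiency for high-LET tracks.
    d_low = d_tld - tld_rel_eff * d_cr39_high  # low-LET component from TLD
    return d_low + d_cr39_high

def total_dose_equivalent(d_tld, d_cr39_high, h_cr39_high, tld_rel_eff=0.6):
    # Total dose equivalent: low-LET component (quality factor 1) from TLD
    # plus the high-LET dose equivalent measured by CR-39.
    d_low = d_tld - tld_rel_eff * d_cr39_high
    return d_low + h_cr39_high

# Illustrative daily values (uGy, uSv); the numbers are examples only.
d_total = total_dose(190.0, 25.0)                       # 190 - 15 + 25
h_total = total_dose_equivalent(190.0, 25.0, 200.0)     # 175 + 200
```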

  14. Evaluation of Deformable Image Registration Methods for Dose Monitoring in Head and Neck Radiotherapy

    Directory of Open Access Journals (Sweden)

    Bastien Rigaud

    2015-01-01

    Full Text Available In the context of head and neck cancer (HNC) adaptive radiation therapy (ART), the two purposes of the study were to compare the performance of multiple deformable image registration (DIR) methods and to quantify their impact on dose accumulation in healthy structures. Fifteen HNC patients had a planning computed tomography (CT0) and weekly CTs during the 7 weeks of intensity-modulated radiation therapy (IMRT). Ten DIR approaches using different registration methods (demons or B-spline free form deformation (FFD)), preprocessing, and similarity metrics were tested. Two observers identified 14 landmarks (LM) on each CT scan to compute the LM registration error. The cumulated doses estimated by each method were compared. The two most effective DIR methods were the demons and the FFD, both with the mutual information (MI) metric and the filtered CTs. The corresponding LM registration accuracy (precision) was 2.44 mm (1.30 mm) and 2.54 mm (1.33 mm), respectively. The corresponding LM estimated cumulated dose accuracy (dose precision) was 0.85 Gy (0.93 Gy) and 0.88 Gy (0.95 Gy), respectively. The mean uncertainty (difference between maximal and minimal dose considering all 10 methods) in estimating the cumulated mean dose to the parotid gland (PG) was 4.03 Gy (SD = 2.27 Gy, range: 1.06–8.91 Gy).

  15. Verification of the method of average angular response for dose measurement on different detectors

    International Nuclear Information System (INIS)

    At present, most radiation dose meters have serious problems with energy response and angular response. In order to improve the accuracy of dose measurements, a method of average angular response has been proposed, which corrects not only the energy response but also the angular response. The method has been verified on NaI(Tl) (50 mm × 50 mm) scintillation detectors, but not on other types and sizes of detectors. In this paper the method is also verified for LaBr3(Ce) scintillation detectors and an HPGe detector. To apply the method, five detectors are first simulated with Geant4 and their average angular response values are calculated. Experiments are then performed to obtain the full-energy-peak count rates with standard point sources of 137Cs, 60Co and 152Eu, after which the dose values of the five detectors are calculated with the method of average angular response and compared with experiment. The results are divided into two groups to analyze the influence of detector type and size. The first group shows that the method is appropriate for dose measurement with different types of detector, with deviations of less than 5% from theoretical values; moreover, the better the detector's energy resolution and the more precisely the full-energy-peak count rate is determined, the more precisely the dose can be measured. The second group illustrates that the method is also suited to different detector sizes, with deviations of less than 8% from theoretical values

  16. Simulation of sound waves using the Lattice Boltzmann Method for fluid flow: Benchmark cases for outdoor sound propagation

    NARCIS (Netherlands)

    Salomons, E.M.; Lohman, W.J.A.; Zhou, H.

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-fi

  17. A method to efficiently simulate absorbed dose in radio-sensitive instrumentation components

    International Nuclear Information System (INIS)

    Components installed in the tunnels of high-power accelerators are prone to radiation-induced damage and malfunction. Such machines are usually modeled in detail and the radiation cascades are transported through the three-dimensional models in Monte Carlo codes. Very often those codes are used to compute energy deposition in beam components or radiation fields to the public and the environment. However, sensitive components such as electronic boards or insulated cables are less easily simulated, as their small size makes dose scoring a (statistically) inefficient process. Moreover, the process of deciding their location is iterative: to define where they can safely be installed, the dose needs to be computed, but to compute it the location needs to be known. This note presents a different approach to indirectly assess the potential absorbed dose of components installed within a given radiation field. The method consists in first finding the energy- and particle-dependent absorbed-dose-to-fluence response function, and then programming it into a radiation transport Monte Carlo code, so that fluences in vacuum/air can be automatically converted in real time into potential absorbed doses and mapped in the same way as fluence or dose equivalent quantities
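The fluence-to-dose folding described in the note can be sketched as below. The energy grid and response values are invented placeholders standing in for the precomputed absorbed-dose-to-fluence response function:

```python
import numpy as np

# Hypothetical energy grid (MeV) and absorbed-dose-to-fluence response
# function R(E) (Gy.cm^2), as would be precomputed for one component type.
energies = np.array([0.1, 0.5, 1.0, 5.0, 10.0])
response = np.array([2e-12, 5e-12, 8e-12, 2e-11, 3e-11])

def dose_from_fluence(fluence_spectrum):
    # Fold a binned particle fluence (cm^-2) with the response function
    # to obtain the potential absorbed dose (Gy) at that location.
    return float(np.sum(fluence_spectrum * response))

phi = np.array([1e6, 5e5, 2e5, 1e4, 1e3])  # fluence per energy bin
dose = dose_from_fluence(phi)
```

In the approach described above, this folding is done inside the transport code itself, so a map of potential absorbed dose is produced in the same pass that scores fluence.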

  18. Methods used to estimate the collective dose in Denmark from diagnostic radiology

    International Nuclear Information System (INIS)

    According to EU directive 97/43/Euratom, all member states must estimate doses to the public from diagnostic radiology. In Denmark the National Institute of Radiation Hygiene (NIRH) is about to finish a project with the purpose of estimating the collective dose in Denmark from diagnostic radiology. In this paper methods, problems and preliminary results are presented. Patient doses were obtained from x-ray departments, dentists and chiropractors. Information about examination frequencies was collected from each of the Danish hospitals or counties, and it was possible to collect information for nearly all of the hospitals. The measurements were made by means of dose-area-product meters in x-ray departments, by thermoluminescent dosimetry at chiropractors, and by solid-state detectors at dentists. Twenty hospitals, 3,200 patients and 23,000 radiographs were included in this study. All data were stored in a database for quick retrieval. The DAP (dose-area product) measurements were performed 'automatically' under the control of PC-based software; these recordings could later be analysed with specially designed software and transferred to the database. Data from the chiropractors were obtained by mail: NIRH sent each chiropractor TLDs and a registration form, and the chiropractor performed the measurements himself and afterwards returned the TLDs and registration forms. On the registration form the height, weight, age etc. of the patient were noted, along with the applied tube voltage, current-time product and projection. The effective dose was calculated from the DAP values and the entrance surface dose by Monte Carlo techniques. For each radiograph, two pictures of the mathematical phantom were generated to ensure that the x-ray field was properly placed. The program 'diagnostic dose', developed by NIRH, performed the Monte Carlo calculations. (author)
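Schematically, the DAP-to-effective-dose step and the collective-dose sum look like this. The conversion coefficients and examination names are hypothetical placeholders, not the Monte Carlo results of the 'diagnostic dose' program:

```python
# Hypothetical DAP-to-effective-dose conversion coefficients (mSv per Gy.cm2),
# of the kind a Monte Carlo phantom calculation yields per projection.
CONVERSION = {"chest_PA": 0.18, "abdomen_AP": 0.26, "pelvis_AP": 0.29}

def effective_dose(dap_gy_cm2, projection):
    # Effective dose (mSv) for one radiograph from its measured DAP.
    return dap_gy_cm2 * CONVERSION[projection]

def collective_dose(annual_frequency, mean_effective_dose_msv):
    # Collective dose (man Sv) = sum over examination types of
    # (annual number of examinations) x (mean effective dose in mSv) / 1000.
    return sum(annual_frequency[k] * mean_effective_dose_msv[k] / 1000.0
               for k in annual_frequency)

e = effective_dose(10.0, "chest_PA")
s = collective_dose({"chest_PA": 500_000}, {"chest_PA": 0.1})
```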

  19. Finite Element Method Modeling of Sensible Heat Thermal Energy Storage with Innovative Concretes and Comparative Analysis with Literature Benchmarks

    OpenAIRE

    Claudio Ferone; Francesco Colangelo; Domenico Frattini; Giuseppina Roviello; Raffaele Cioffi; Rosa di Maggio

    2014-01-01

    Efficient systems for high performance buildings are required to improve the integration of renewable energy sources and to reduce primary energy consumption from fossil fuels. This paper is focused on sensible heat thermal energy storage (SHTES) systems using solid media and numerical simulation of their transient behavior using the finite element method (FEM). Unlike other papers in the literature, the numerical model and simulation approach has simultaneously taken into consideration vario...

  20. [Al2O4](-), a Benchmark Gas-Phase Class II Mixed-Valence Radical Anion for the Evaluation of Quantum-Chemical Methods.

    Science.gov (United States)

    Kaupp, Martin; Karton, Amir; Bischoff, Florian A

    2016-08-01

    The radical anion [Al2O4](-) has been identified as a rare example of a small gas-phase mixed-valence system with partially localized, weakly coupled class II character in the Robin/Day classification. It exhibits a low-lying C2v minimum with one terminal oxyl radical ligand and a high-lying D2h minimum at about 70 kJ/mol relative energy with predominantly bridge-localized-hole character. Two identical C2v minima and the D2h minimum are connected by two C2v-symmetrical transition states, which are only ca. 6-10 kJ/mol above the D2h local minimum. The small size of the system and the absence of environmental effects have for the first time enabled the computation of accurate ab initio benchmark energies, at the CCSDT(Q)/CBS level using W3-F12 theory, for a class II mixed-valence system. These energies have been used to evaluate wave function-based methods [CCSD(T), CCSD, SCS-MP2, MP2, UHF] and density functionals ranging from semilocal (e.g., BLYP, PBE, M06L, M11L, N12) via global hybrids (B3LYP, PBE0, BLYP35, BMK, M06, M062X, M06HF, PW6B95) and range-separated hybrids (CAM-B3LYP, ωB97, ωB97X-D, LC-BLYP, LC-ωPBE, M11, N12SX), the B2PLYP double hybrid, and some local hybrid functionals. Global hybrids with about 35-43% exact-exchange (EXX) admixture (e.g., BLYP35, BMK), several range-separated hybrids (CAM-B3LYP, ωB97X-D, ωB97), and a local hybrid provide good to excellent agreement with the benchmark energetics. In contrast, too low an EXX admixture leads to an incorrect delocalized class III picture, while too large an EXX admixture overlocalizes and gives too large energy differences. These results provide support for previous method choices for mixed-valence systems in solution and for the treatment of oxyl defect sites in alumosilicates and SiO2. Vibrational gas-phase spectra at various computational levels have been compared directly to experiment and to CCSD(T)/aug-cc-pV(T+d)Z data. PMID:27434425

  1. TORT Solutions to the NEA Suite of Benchmarks for 3D Transport Methods and Codes over a Range in Parameter Space

    Energy Technology Data Exchange (ETDEWEB)

    Bekar, Kursat B.; Azmy, Yousry Y. [Department of Mechanical and Nuclear Engineering, Penn State University, University Park, PA 16802 (United States)

    2008-07-01

    We present the TORT solutions to the 3-D transport codes' suite of benchmarks exercise. An overview of benchmark configurations is provided, followed by a description of the TORT computational model we developed to solve the cases comprising the benchmark suite. In the numerical experiments reported in this paper, we chose to refine the spatial and angular discretizations simultaneously, from the coarsest model (40x40x40, 200 angles) to the finest model (160x160x160, 800 angles), and employed the results of the finest computational model as reference values for evaluating the mesh-refinement effects. The presented results show that the solutions for most cases in the suite of benchmarks as computed by TORT are in the asymptotic regime. (authors)

  2. Determination of the delivered hemodialysis dose using standard methods and on-line clearance monitoring

    Directory of Open Access Journals (Sweden)

    Vlatković Vlastimir

    2006-01-01

    Full Text Available Background/aim: The delivered dialysis dose has a cumulative effect and a significant influence on the adequacy of dialysis, the quality of life and the development of co-morbidity in patients on dialysis; thus, great attention is given to the optimization of dialysis treatment. On-line clearance monitoring (OCM) allows a precise and continuous measurement of the delivered dialysis dose. The Kt/V index (K = dialyzer clearance of urea; t = dialysis time; V = patient's total body water), measured in real time, is used as a unit for expressing the dialysis dose. The aim of this research was to compare the delivered dialysis dose assessed by the standard measurement methods with that obtained from a module for continuous clearance monitoring. Methods. The study encompassed 105 patients who had been on the chronic hemodialysis program, three times a week, for more than three months. One treatment per patient was selected at random; all treatments were bicarbonate dialysis. The delivered dialysis dose was determined by the calculation of mathematical models: the urea reduction ratio (URR) and the single-pool Kt/V index (spKt/V), and by the application of OCM. Results. The urea reduction ratio was the most sensitive parameter for the assessment and, at the same time, was most strongly correlated with the other two indexes, spKt/V and OCM. Its values indicated an adequate dialysis dose, and URR values were significantly higher in women than in men, p < 0.05. The second model applied, the Kt/V index, also showed that the dialysis dose was adequate and that, by this parameter, women received significantly better dialysis than men, p < 0.05. According to OCM, the average value was slightly lower than the adequate one; women had a satisfactory dialysis by this index as well, while the delivered dialysis dose was insufficient in men. The difference
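The two standard indices above can be computed directly from pre- and post-dialysis urea. The spKt/V expression below is the widely used Daugirdas second-generation formula, which may not be exactly the model applied in the study, and the treatment values are illustrative:

```python
import math

def urr(pre_urea, post_urea):
    # Urea reduction ratio (fraction); >= 0.65 is commonly taken as adequate.
    return (pre_urea - post_urea) / pre_urea

def sp_kt_v(pre_urea, post_urea, t_hours, ultrafiltration_l, weight_kg):
    # Single-pool Kt/V, Daugirdas second-generation formula:
    # spKt/V = -ln(R - 0.008*t) + (4 - 3.5*R) * UF / W, with R = post/pre.
    r = post_urea / pre_urea
    return (-math.log(r - 0.008 * t_hours)
            + (4.0 - 3.5 * r) * ultrafiltration_l / weight_kg)

# Illustrative treatment: urea 25 -> 8 mmol/l, 4 h, 2.5 l UF, 70 kg patient.
u = urr(25.0, 8.0)                          # 0.68
k = sp_kt_v(25.0, 8.0, 4.0, 2.5, 70.0)      # about 1.35
```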

  3. Dose calculation method with 60-cobalt gamma rays in total body irradiation

    CERN Document Server

    Scaff, L A M

    2001-01-01

    Physical factors associated with total body irradiation using 60Co gamma-ray beams were studied in order to develop a calculation method for the dose distribution that could be reproduced in any radiotherapy center with good precision. The method is based on treating total body irradiation as a large and irregular field with heterogeneities. To calculate doses, or dose rates, for each area of interest (head, thorax, thigh, etc.), the scattered radiation is determined. It was observed that if demagnified fields were used to calculate the scattered radiation, the resulting values could be projected back to the real size to obtain the values for dose rate calculations. In a parallel work, the variation of the dose rate in air was determined for the treatment distance and for points off the central axis, confirming that the use of the inverse square law is not valid. An attenuation curve for a broad beam was also determined in order to allow the use of absorbers. In this wo...

  4. Revisiting the TORT Solutions to the NEA Suite of Benchmarks for 3D Transport Methods and Codes Over a Range in Parameter Space

    Energy Technology Data Exchange (ETDEWEB)

    Bekar, Kursat B [ORNL; Azmy, Yousry [North Carolina State University

    2009-01-01

    Improved TORT solutions to the 3D transport codes' suite of benchmarks exercise are presented in this study. Preliminary TORT solutions to this benchmark indicate that the majority of benchmark quantities for most benchmark cases are computed with good accuracy, and that accuracy improves with model refinement. However, TORT fails to compute accurate results for some benchmark cases with aspect ratios drastically different from 1, possibly due to ray effects. In this work, we employ the standard approach of splitting the solution to the transport equation into an uncollided flux and a fully collided flux via the code sequence GRTUNCL3D and TORT to mitigate ray effects. The results of this code sequence presented in this paper show that the accuracy of most benchmark cases improved substantially. Furthermore, the iterative convergence problems reported for the preliminary TORT solutions have been resolved by bringing the computational cells' aspect ratio closer to unity and, more importantly, by using 64-bit arithmetic precision in the calculation sequence. Results of this study are also reported.

  5. A benchmark study of the two-dimensional Hubbard model with auxiliary-field quantum Monte Carlo method

    CERN Document Server

    Qin, Mingpu; Zhang, Shiwei

    2016-01-01

    Ground state properties of the Hubbard model on a two-dimensional square lattice are studied by the auxiliary-field quantum Monte Carlo method. Accurate results for energy, double occupancy, effective hopping, magnetization, and momentum distribution are calculated for interaction strengths of U/t from 2 to 8, for a range of densities including half-filling and n = 0.3, 0.5, 0.6, 0.75, and 0.875. At half-filling, the results are numerically exact. Away from half-filling, the constrained path Monte Carlo method is employed to control the sign problem. Our results are obtained with several advances in the computational algorithm, which are described in detail. We discuss the advantages of generalized Hartree-Fock trial wave functions and its connection to pairing wave functions, as well as the interplay with different forms of Hubbard-Stratonovich decompositions. We study the use of different twist angle sets when applying the twist averaged boundary conditions. We propose the use of quasi-random sequences, whi...

  6. A New System For Recording The Radiological Effective Doses For Patients Investigated by Imaging Methods

    CERN Document Server

    Stanciu, Silviu

    2014-01-01

    In this paper the project of an integrated system for radiation safety and security of patients investigated by radiological imaging methods is presented. The new system is based on smart cards and a Public Key Infrastructure; it allows storage of radiation effective dose data and more accurate reporting.

  7. A novel dose-based positioning method for CT image-guided proton therapy

    OpenAIRE

    Cheung, Joey P.; Park, Peter C.; Court, Laurence E.; Ronald Zhu, X.; Kudchadker, Rajat J.; Frank, Steven J.; Dong, Lei

    2013-01-01

    Purpose: Proton dose distributions can potentially be altered by anatomical changes in the beam path despite perfect target alignment using traditional image guidance methods. In this simulation study, the authors explored the use of dosimetric factors instead of only anatomy to set up patients for proton therapy using in-room volumetric computed tomographic (CT) images.

  8. Research on Benchmarking Evaluation Method for Green Construction Based on Individual Advantage Identification

    Institute of Scientific and Technical Information of China (English)

    邵必林; 臧朋; 赵欢欢

    2012-01-01

    Green construction in China is still at an early stage of development, and benchmark projects are needed as references for construction enterprises. In this paper, using the method of individual advantage identification and starting from the perspective of the individual project, evaluation methods for individual benchmarks and group benchmarks are presented based on the objectively identified strengths and weaknesses of green construction projects, in an attempt to provide an innovative and more objective benchmark evaluation method for green construction in China.

  9. A Blind Test Experiment in Volcano Geodesy: a Benchmark for Inverse Methods of Ground Deformation and Gravity Data

    Science.gov (United States)

    D'Auria, Luca; Fernandez, Jose; Puglisi, Giuseppe; Rivalta, Eleonora; Camacho, Antonio; Nikkhoo, Mehdi; Walter, Thomas

    2016-04-01

    The inversion of ground deformation and gravity data is affected by an intrinsic ambiguity because of the mathematical formulation of the inverse problem. Current methods for the inversion of geodetic data rely on both parametric (i.e. assuming a source geometry) and non-parametric approaches. The former are able to capture the fundamental features of the ground deformation source but, if the assumptions are wrong or oversimplified, they can provide misleading results. On the other hand, the latter class of methods, even if not relying on stringent assumptions, can suffer from artifacts, especially when dealing with poor datasets. In the framework of the EC-FP7 MED-SUV project we aim at comparing different inverse approaches to verify how they cope with the basic goals of volcano geodesy: determining the source depth, the source shape (size and geometry), the nature of the source (magmatic/hydrothermal), and hinting at the complexity of the source. Other aspects that are important in volcano monitoring are: volume/mass transfer toward shallow depths, propagation of dikes/sills, and forecasting the opening of eruptive vents. On the basis of similar experiments already done in the fields of seismic tomography and geophysical imaging, we have devised a blind test experiment. Our group was divided into one model design team and several inversion teams. The model design team devised two physical models representing volcanic events at two distinct volcanoes (one stratovolcano and one caldera). They provided the inversion teams with the topographic reliefs, the calculated deformation field (on a set of simulated GPS stations and as InSAR interferograms) and the gravity change (on a set of simulated campaign stations). The nature of the volcanic events remained unknown to the inversion teams until after the submission of the inversion results. 
Here we present the preliminary results of this comparison in order to determine which features of the ground deformation and gravity source

  10. Respiratory triggered 4D cone-beam computed tomography: A novel method to reduce imaging dose

    Science.gov (United States)

    Cooper, Benjamin J.; O’Brien, Ricky T.; Balik, Salim; Hugo, Geoffrey D.; Keall, Paul J.

    2013-01-01

    Purpose: A novel method called respiratory triggered 4D cone-beam computed tomography (RT 4D CBCT) is described whereby imaging dose can be reduced without degrading image quality. RT 4D CBCT utilizes a respiratory signal to trigger projections such that only a single projection is assigned to a given respiratory bin for each breathing cycle. In contrast, commercial 4D CBCT does not actively use the respiratory signal to minimize imaging dose. Methods: To compare RT 4D CBCT with conventional 4D CBCT, 3600 CBCT projections of a thorax phantom were gathered and reconstructed to generate a ground truth CBCT dataset. Simulation pairs of conventional 4D CBCT acquisitions and RT 4D CBCT acquisitions were developed assuming a sinusoidal respiratory signal which governs the selection of projections from the pool of 3600 original projections. The RT 4D CBCT acquisition triggers a single projection when the respiratory signal enters a desired acquisition bin; the conventional acquisition does not use a respiratory trigger and projections are acquired at a constant frequency. Acquisition parameters studied were breathing period, acquisition time, and imager frequency. The performance of RT 4D CBCT using phase based and displacement based sorting was also studied. Image quality was quantified by calculating difference images of the test dataset from the ground truth dataset. Imaging dose was calculated by counting projections. Results: Using phase based sorting, RT 4D CBCT results in 47% less imaging dose on average compared to conventional 4D CBCT. Image quality differences were less than 4% at worst. Using displacement based sorting, RT 4D CBCT results in 57% less imaging dose on average than conventional 4D CBCT; however, image quality was 26% worse with RT 4D CBCT. Conclusions: Simulation studies have shown that RT 4D CBCT reduces imaging dose while maintaining comparable image quality for phase based 4D CBCT; image quality is degraded for displacement based RT 4D
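The dose-saving mechanism above comes down to a projection count: one triggered projection per respiratory bin per breathing cycle versus a constant imager frequency. The breathing period, scan time, and imager frequency below are illustrative values, not the paper's exact acquisition parameters:

```python
def conventional_projections(acq_time_s, imager_hz):
    # Conventional 4D CBCT: projections acquired at a constant frequency.
    return int(acq_time_s * imager_hz)

def rt_projections(acq_time_s, breathing_period_s, n_bins):
    # RT 4D CBCT: one projection per respiratory bin per breathing cycle.
    n_cycles = int(acq_time_s / breathing_period_s)
    return n_cycles * n_bins

conv = conventional_projections(240, 5.5)  # 240 s scan, 5.5 Hz imager
rt = rt_projections(240, 4.0, 10)          # 4 s breathing period, 10 bins
saving = 1 - rt / conv                     # fractional dose reduction
```

With these (assumed) numbers the triggered scheme acquires roughly half the projections, in the same ballpark as the 47-57% reductions reported above.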

  11. Full CI benchmark calculations on N2, NO, and O2 - A comparison of methods for describing multiple bonds

    Science.gov (United States)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.

    1987-01-01

    Full configuration interaction (CI) calculations on the ground states of N2, NO, and O2 using a DZP Gaussian basis are compared with single-reference SDCI and coupled pair functional (CPF) approaches, as well as with CASSCF multireference CI approaches. The CASSCF/MRCI technique is found to describe multiple bonds as well as it does single bonds. Although the coupled pair functional approach gave chemical accuracy (1 kcal/mol) for bonds involving hydrogen, larger errors occur in the CPF approach for the multiple-bonded systems considered here. CI studies on the 1Sigma(g+) state of N2, including all single, double, triple, and quadruple excitations, show that triple excitations are very important in the multiple-bond case and account for most of the deficiency of the coupled pair functional methods.

  12. Application of Monte Carlo method for dose calculation in thyroid follicle

    International Nuclear Information System (INIS)

    The Monte Carlo method is an important tool for simulating the interaction of radioactive particles with biological media. Its principal advantage over deterministic methods is the ability to handle complex geometries. Several computational codes use the Monte Carlo method to simulate particle transport, and they can simulate energy deposition in models of organs and tissues as well as in models of the cells of the human body. The calculation of the absorbed dose to thyroid follicles (composed of colloid and follicular cells) is therefore of fundamental importance for dosimetry, because these cells are radiosensitive to ionizing radiation exposure, in particular exposure to radioisotopes of iodine, since a great amount of radioiodine may be released into the environment in the case of a nuclear accident. The goal of this work was to use the particle transport code MCNP4C to calculate absorbed doses in models of thyroid follicles with diameters varying from 30 to 500 μm, for the Auger electrons, internal conversion electrons and beta particles of iodine-131 and the short-lived iodines (131, 132, 133, 134 and 135). The simulations with the MCNP4C code showed that, on average, 25% of the total dose absorbed by the colloid was due to iodine-131 and 75% to the short-lived iodines; for the follicular cells, these percentages were 13% and 87%, respectively. The contributions from low-energy particles, such as Auger and internal conversion electrons, should not be neglected when assessing the absorbed dose at the cellular level. Agglomerative hierarchical clustering was used to compare the doses obtained by the codes MCNP4C, EPOTRAN and EGS4 and by deterministic methods. (author)

  13. Restaurant Energy Use Benchmarking Guideline

    Energy Technology Data Exchange (ETDEWEB)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
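
    A minimal sketch of the approach, with hypothetical store names and utility figures: normalize each store's consumption by an operating metric so stores are comparable, then flag stores far above the portfolio median.

```python
import statistics

# hypothetical monthly utility records: (store_id, kWh, operating_hours)
records = [
    ("store-01", 52_000, 430), ("store-02", 60_000, 455),
    ("store-03", 49_500, 410), ("store-04", 88_000, 440),
    ("store-05", 55_250, 445),
]

# normalize to kWh per operating hour so stores of different schedules compare
intensity = {sid: kwh / hrs for sid, kwh, hrs in records}
vals = list(intensity.values())
median = statistics.median(vals)
mad = statistics.median(abs(v - median) for v in vals)

# flag stores whose intensity sits far above the portfolio median
outliers = [sid for sid, v in intensity.items() if v > median + 3 * mad]
print(outliers)  # store-04 stands out in this synthetic data
```

    A median/MAD rule is used instead of mean/SD so that a single badly performing store does not inflate the benchmark it is judged against.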

  14. GeodeticBenchmark_GEOMON

    Data.gov (United States)

    Vermont Center for Geographic Information — The GeodeticBenchmark_GEOMON data layer consists of geodetic control monuments (points) that have a known position or spatial reference. The locations of these...

  15. A robustness analysis method with fast estimation of dose uncertainty distributions for carbon-ion therapy treatment planning

    Science.gov (United States)

    Sakama, Makoto; Kanematsu, Nobuyuki; Inaniwa, Taku

    2016-08-01

    A simple and efficient approach is needed for robustness evaluation and optimization of treatment planning in routine clinical particle therapy. Here we propose a robustness analysis method using dose standard deviation (SD) in possible scenarios such as the robustness indicator and a fast dose warping method, i.e. deformation of dose distributions, taking into account the setup and range errors in carbon-ion therapy. The dose warping method is based on the nominal dose distribution and the water-equivalent path length obtained from planning computed tomography data with a clinically commissioned treatment planning system (TPS). We compared, in a limited number of scenarios at the extreme boundaries of the assumed error, the dose SD distributions obtained by the warping method with those obtained using the TPS dose recalculations. The accuracy of the warping method was examined by the standard-deviation-volume histograms (SDVHs) for varying degrees of setup and range errors for three different tumor sites. Furthermore, the influence of dose fractionation on the combined dose uncertainty, taking into consideration the correlation of setup and range errors between fractions, was evaluated with simple equations using the SDVHs and the mean value of SDs in the defined volume of interest. The results of the proposed method agreed well with those obtained with the dose recalculations in these comparisons, and the effectiveness of dose SD evaluations at the extreme boundaries of given errors was confirmed from the responsivity and DVH analysis of relative SD values for each error. The combined dose uncertainties depended heavily on the number of fractions, assumed errors and tumor sites. The typical computation time of the warping method is approximately 60 times less than that of the full dose calculation method using the TPS. 
The dose SD distributions and SDVHs with the fractionation effect will be useful indicators for robustness analysis in treatment planning, and the
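
    The SD-map and SDVH bookkeeping can be sketched as follows. Synthetic per-voxel doses stand in for the TPS scenario recalculations; the warping step itself is not reproduced here.

```python
import numpy as np

# hypothetical per-voxel doses (Gy) recalculated under 8 error scenarios
rng = np.random.default_rng(1)
n_scen, n_vox = 8, 5000
nominal = np.full(n_vox, 2.0)                       # uniform nominal dose
scenarios = nominal + rng.normal(0.0, 0.05, (n_scen, n_vox))

sd_map = scenarios.std(axis=0, ddof=1)              # per-voxel dose SD

def sdvh(sd, edges):
    """Standard-deviation-volume histogram: fraction of the volume whose
    dose SD meets or exceeds each threshold (analogous to a DVH)."""
    return np.array([(sd >= e).mean() for e in edges])

edges = np.array([0.0, 0.02, 0.05, 0.10])
curve = sdvh(sd_map, edges)
# curve[0] == 1.0 by construction; the curve decreases with the threshold
```

    The mean of `sd_map` over a volume of interest gives the scalar robustness indicator used when combining fractions.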

  17. Simulation of Sound Waves Using the Lattice Boltzmann Method for Fluid Flow: Benchmark Cases for Outdoor Sound Propagation.

    Directory of Open Access Journals (Sweden)

    Erik M Salomons

    Full Text Available Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing.

  18. Simulation of Sound Waves Using the Lattice Boltzmann Method for Fluid Flow: Benchmark Cases for Outdoor Sound Propagation.

    Science.gov (United States)

    Salomons, Erik M; Lohman, Walter J A; Zhou, Han

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing.
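
    A minimal 1-D lattice Boltzmann (D1Q3, BGK) sketch illustrates the underlying idea: a small density pulse splits into two acoustic waves travelling at the lattice sound speed c_s = 1/sqrt(3). This is an illustration only, not the authors' 2-D solver; the grid size, relaxation time and pulse shape are arbitrary choices.

```python
import numpy as np

N, T, tau = 1024, 300, 1.0
w = np.array([2/3, 1/6, 1/6])       # weights for lattice velocities 0, +1, -1
c = np.array([0, 1, -1])
cs2 = 1/3                           # squared lattice sound speed

x = np.arange(N)
rho = 1.0 + 0.01 * np.exp(-((x - N//2) ** 2) / (2 * 10.0**2))  # density pulse
u = np.zeros(N)

def feq(rho, u):
    """Second-order equilibrium distribution of the isothermal LBM."""
    cu = np.outer(c, u)
    return w[:, None] * rho * (1 + cu/cs2 + cu**2/(2*cs2**2) - u**2/(2*cs2))

f = feq(rho, u)
m0 = f.sum()                        # initial mass, conserved by the scheme
for _ in range(T):
    rho = f.sum(axis=0)
    u = (f * c[:, None]).sum(axis=0) / rho
    f += (feq(rho, u) - f) / tau    # BGK collision
    for i in (1, 2):                # streaming step (periodic boundaries)
        f[i] = np.roll(f[i], c[i])

rho = f.sum(axis=0)
peak = N//2 + np.argmax(rho[N//2:])  # right-going wave front
# the peak should sit near N/2 + c_s*T = N/2 + T/sqrt(3), i.e. around x = 685
```

    With tau = 1 the kinematic viscosity is cs2*(tau - 0.5), so the pulse also attenuates as it travels, the dissipation issue the abstract discusses.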

  19. Dose conversion factors for radiation doses at normal operation discharges. F. Methods report; Dosomraekningsfaktorer foer normaldriftutslaepp. F. Metodrapport

    Energy Technology Data Exchange (ETDEWEB)

    Bergstroem, Ulla; Hallberg, Bengt; Karlsson, Sara

    2001-10-01

    A study has been performed in order to develop and extend existing models for dose estimation for emissions of radioactive substances from nuclear facilities in Sweden. This report reviews the different exposure pathways that have been considered in the study. Radioecological data to be used in calculations of radiation doses are based on the actual situation at the nuclear sites. Dose factors for children have been split into different age groups. The exposure pathways, like the radioecological data, have been carefully re-examined, leading to some new pathways (e.g. doses from consumption of forest berries, mushrooms and game) for cesium and strontium. Carbon-14 was given special treatment using a model for uptake of carbon by growing plants. For exposure from aquatic emissions, a simplification was made by focusing on the territory of fish species, since consumption of fish is the most important pathway.

  20. Radiation doses in diagnostic radiology and methods for dose reduction. Report of a co-ordinated research programme (1991-1993)

    International Nuclear Information System (INIS)

    It is well recognized that diagnostic radiology is the largest contributor to the collective dose from all man-made sources of radiation. Large differences in radiation doses from the same procedures among different X ray rooms have led to the conclusion that there is a potential for dose reduction. A Co-ordinated Research Programme on Radiation Doses in Diagnostic Radiology and Methods for Dose Reduction, involving Member States with different degrees of development, was launched by the IAEA in co-operation with the CEC. This report summarizes the results of the second and final Research Co-ordination Meeting held in Vienna from 4 to 8 October 1993. 22 refs, 6 figs and tabs

  1. Characteristics of radiation dose accumulation and methods of dose calculation for internal inflow of 137Cs into experimental rats body

    International Nuclear Information System (INIS)

    The formation of doses after peroral intake of 137Cs by laboratory rats is considered. For the first time, retention functions and values of the biokinetic constants have been determined for different organs and tissues. A multicompartment model describing the biokinetics of radionuclides in the organism is proposed, and its advantages over existing models for estimating absorbed doses are discussed
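
    A multicompartment biokinetic model of this kind reduces to a linear system dq/dt = A q. A two-compartment sketch with illustrative rate constants (the paper's fitted values are not given here) can be solved with a matrix exponential via eigendecomposition:

```python
import numpy as np

# illustrative rate constants (1/day); not the paper's fitted values
lam_12 = 0.8     # transfer compartment -> organ
lam_1e = 1.2     # transfer compartment -> excretion
lam_2e = 0.05    # organ -> excretion
lam_r = np.log(2) / (30.17 * 365.0)   # 137Cs physical decay

# rate matrix of the linear compartment system dq/dt = A q
A = np.array([[-(lam_12 + lam_1e + lam_r), 0.0],
              [lam_12, -(lam_2e + lam_r)]])

vals, vecs = np.linalg.eig(A)
vinv = np.linalg.inv(vecs)

def retention(t, q0=(1.0, 0.0)):
    """Fraction of a unit intake in (transfer, organ) at time t (days),
    using expm(A*t) = V diag(exp(lambda*t)) V^-1."""
    return (vecs * np.exp(vals * t)) @ (vinv @ np.asarray(q0))
```

    The organ retention function is the second component of `retention(t)`; absorbed dose follows by integrating the activity-time curve against an S-value.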

  2. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as impo...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  3. A study on the dose analysis of pottery shards by thermoluminescence dating method

    International Nuclear Information System (INIS)

    A method for measuring the archaeological dose of Packjae pottery shards using thermoluminescence dosimetry (TLD) has been studied. TL measurements were made on quartz crystals 90 to 125 μm in diameter extracted from the pottery shards. The stable temperature region of the TL glow curve, which is devoid of anomalous fading components, was identified by the plateau test and found to extend from 265 to 300 °C. The archaeological dose of the pottery shards was estimated to be 7.43 Gy using dose calibration curves obtained by sequentially irradiating the samples with a 137Cs gamma source, together with TL measurements of the natural samples
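
    The calibration step can be sketched as a simple regression: TL signal versus added gamma dose is fit with a line, and the archaeological dose is read off at the natural TL level. All numbers below are synthetic, chosen only to echo the ~7.4 Gy result.

```python
import numpy as np

# hypothetical calibration data: TL signal after known 137Cs gamma doses
added_dose = np.array([0.0, 2.0, 4.0, 8.0, 12.0])      # Gy
tl_signal = np.array([0.4, 2.3, 4.1, 7.9, 11.6])       # arbitrary units
natural_tl = 7.35                                      # as-excavated TL signal

# linear dose-response fit, then invert at the natural signal level
slope, intercept = np.polyfit(added_dose, tl_signal, 1)
archaeological_dose = (natural_tl - intercept) / slope
```

    Real evaluations restrict the fit to the plateau region of the glow curve and propagate the fit uncertainty into the dose estimate.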

  4. TH-A-19A-03: Impact of Proton Dose Calculation Method On Delivered Dose to Lung Tumors: Experiments in Thorax Phantom and Planning Study in Patient Cohort

    Energy Technology Data Exchange (ETDEWEB)

    Grassberger, C; Daartz, J; Dowdell, S; Ruggieri, T; Sharp, G; Paganetti, H [Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States)

    2014-06-15

    Purpose: Evaluate Monte Carlo (MC) dose calculation and the prediction of the treatment planning system (TPS) in a lung phantom and compare them in a cohort of 20 lung patients treated with protons. Methods: A 2-dimensional array of ionization chambers was used to evaluate the dose across the target in a lung phantom. 20 lung cancer patients on clinical trials were re-simulated using a validated Monte Carlo toolkit (TOPAS) and compared to the TPS. Results: MC significantly increases dose calculation accuracy in lung compared to the clinical TPS and predicts the dose to the target in the phantom within ±2%: the average difference between measured and predicted dose in a plane through the center of the target is 5.6% for the TPS and 1.6% for MC. MC recalculations in patients show a mean dose to the clinical target volume on average 3.4% lower than the TPS, exceeding 5% for small fields. The lower dose correlates significantly with aperture size and the distance of the tumor to the chest wall (Spearman's p=0.0002/0.004). For large tumors MC also predicts consistently higher V5 and V10 to the normal lung, due to a wider lateral penumbra, which was also observed experimentally. Critical structures located distal to the target can show large deviations, though this effect is very patient-specific. Conclusion: Advanced dose calculation techniques, such as MC, would improve treatment quality in proton therapy for lung cancer by avoiding systematic overestimation of target dose and underestimation of dose to normal lung. This would increase the accuracy of the relationships between dose and effect, concerning tumor control as well as normal tissue toxicity. As the role of proton therapy in the treatment of lung cancer continues to be evaluated in clinical trials, this is of ever-increasing importance. This work was supported by National Cancer Institute Grant R01CA111590.

  5. Radiation dose determines the method for quantification of DNA double strand breaks

    Energy Technology Data Exchange (ETDEWEB)

    Bulat, Tanja; Keta, Olitija; Korićanac, Lela; Žakula, Jelena; Petrović, Ivan; Ristić-Fira, Aleksandra [University of Belgrade, Vinča Institute of Nuclear Sciences, Belgrade (Serbia); Todorović, Danijela, E-mail: dtodorovic@medf.kg.ac.rs [University of Kragujevac, Faculty of Medical Sciences, Kragujevac (Serbia)

    2016-03-15

    Ionizing radiation induces DNA double strand breaks (DSBs) that trigger phosphorylation of the histone protein H2AX (γH2AX). Immunofluorescent staining visualizes the formation of γH2AX foci, allowing their quantification. As opposed to Western blot assay and flow cytometry, this method provides a more accurate analysis by showing the exact position and intensity of the fluorescent signal in each individual cell. In practice, however, quantification of γH2AX presents difficulties. This paper addresses two issues: which technique should be applied for a given radiation dose, and how to analyze fluorescence microscopy images obtained with different microscopes. HTB140 melanoma cells were exposed to γ-rays in the dose range from 1 to 16 Gy. Radiation effects at the DNA level were analyzed at different time intervals after irradiation by Western blot analysis and immunofluorescence microscopy. Immunochemically stained cells were visualized with two types of microscopes: an AxioVision microscope (Zeiss, Germany) equipped with ApoTome software, and an AxioImagerA1 microscope (Zeiss, Germany). The results show that the level of γH2AX is time and dose dependent. Immunofluorescence microscopy provided better detection of DSBs for lower irradiation doses, while Western blot analysis was more reliable for higher irradiation doses. The AxioVision microscope with ApoTome software was more suitable for the detection of γH2AX foci. (author)
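
    Focus counting from a fluorescence image can be sketched as thresholding plus connected-component labelling. The image here is synthetic; real pipelines also segment nuclei and filter foci by size and intensity.

```python
import numpy as np
from scipy import ndimage

# synthetic fluorescence image: noisy background plus five bright foci
rng = np.random.default_rng(0)
img = rng.normal(10.0, 1.0, (128, 128))          # background noise
yy, xx = np.mgrid[0:128, 0:128]
for cy, cx in [(20, 30), (40, 90), (70, 50), (100, 100), (110, 15)]:
    img += 60.0 * np.exp(-((yy - cy)**2 + (xx - cx)**2) / (2 * 2.0**2))

mask = img > img.mean() + 5 * img.std()          # simple global threshold
labels, n_foci = ndimage.label(mask)             # count connected bright regions
print(n_foci)  # -> 5
```

    The labelled array also gives per-focus pixel lists, from which focus size and integrated intensity can be measured per cell.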

  6. Benchmarking Passive Seismic Methods of Imaging Surface Wave Velocity Interfaces Down to 300 m — Mapping Murray Basin Thickness in Southeastern Australia

    Science.gov (United States)

    Gorbatov, A.; Czarnota, K.

    2015-12-01

    In shallow passive seismology it is generally thought that the spatial autocorrelation (SPAC) method is more robust than the horizontal over vertical spectral ratio (HVSR) method at resolving the depth to surface-wave velocity (Vs) interfaces. Here we present results of a field test of these two methods over ten drill sites in Victoria, Australia. The target interface is the base of Cenozoic unconsolidated to semi-consolidated clastic and/or carbonate sediments of the Murray Basin, which overlie Paleozoic crystalline rocks. Drilled depths of this interface are between 27 and 300 m. A three-arm spiral array, with a radius of 250 m, consisting of 13 Trillium compact broadband seismometers was deployed at each site for 7-21 hours. The Vs architecture beneath each site was determined through nonlinear inversion of HVSR and SPAC data using the neighborhood algorithm of Sambridge (1999) implemented in geopsy by Wathelet et al (2005). The HVSR technique yielded depth estimates of the target interface (Vs > 1000 m/s) generally within 20% error. Successful estimates were even obtained at a site with an inverted velocity profile, where Quaternary basalts overlie Neogene sediments. Half of the SPAC estimates showed significantly higher errors than those obtained using HVSR. Joint inversion provided the most reliable estimates but was unstable at three sites. We attribute the surprising success of HVSR over SPAC to a low content of transient signals within the seismic record, caused by low degrees of anthropogenic noise at the benchmark sites. At a few sites SPAC curves showed clear overtones, suggesting that more reliable SPAC estimates may be obtained utilizing a multi-modal inversion. Nevertheless, our study seems to indicate that reliable basin thickness estimates in remote Australia can be obtained utilizing HVSR data from a single seismometer, without a priori knowledge of the surface-wave velocity of the basin material, thereby negating the need to deploy cumbersome arrays.

  7. A method for calculating Bayesian uncertainties on internal doses resulting from complex occupational exposures

    International Nuclear Information System (INIS)

    Estimating uncertainties on doses from bioassay data is of interest in epidemiology studies that estimate cancer risk from occupational exposures to radionuclides. Bayesian methods provide a logical framework to calculate these uncertainties. However, occupational exposures often consist of many intakes, and this can make the Bayesian calculation computationally intractable. This paper describes a novel strategy for increasing the computational speed of the calculation by simplifying the intake pattern to a single composite intake, termed the complex intake regime (CIR). In order to assess whether this approximation is accurate and fast enough for practical purposes, the method is implemented by the Weighted Likelihood Monte Carlo Sampling (WeLMoS) method and evaluated by comparing its performance with a Markov Chain Monte Carlo (MCMC) method. The MCMC method gives the full solution (all intakes are independent), but is very computationally intensive to apply routinely. Posterior distributions of model parameter values, intakes and doses are calculated for a representative sample of plutonium workers from the United Kingdom Atomic Energy cohort using the WeLMoS method with the CIR and the MCMC method. The distributions are in good agreement: posterior means and Q0.025 and Q0.975 quantiles are typically within 20%. Furthermore, the WeLMoS method using the CIR converges quickly: a typical case history takes around 10-20 min on a fast workstation, whereas the MCMC method took around 12 hours. The advantages and disadvantages of the method are discussed. (authors)
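
    The weighted-likelihood idea can be sketched as importance sampling: draw parameters from the prior, weight each draw by the likelihood of the bioassay data, and form weighted posterior summaries. The one-parameter model below (a single effective intake with lognormal measurement error) is a stand-in for illustration, not the paper's biokinetic model.

```python
import numpy as np

rng = np.random.default_rng(2)
true_intake = 50.0                       # Bq, used only to simulate the data
measured = true_intake * np.exp(rng.normal(0.0, 0.3, size=4))  # bioassay data

n = 100_000
intake = rng.lognormal(mean=np.log(30.0), sigma=1.5, size=n)   # prior draws

# lognormal likelihood of the measurements given each candidate intake
resid = (np.log(measured)[:, None] - np.log(intake)[None, :]) / 0.3
logw = -0.5 * (resid ** 2).sum(axis=0)
w = np.exp(logw - logw.max())            # subtract max for numerical stability
w /= w.sum()

posterior_mean = (w * intake).sum()      # weighted posterior summary
```

    Quantiles such as Q0.025 and Q0.975 follow from the weighted empirical distribution of the same draws; the CIR approximation in the paper collapses many intakes into one such composite parameter.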

  8. ARN Training on Advance Methods for Internal Dose Assessment: Application of Ideas Guidelines

    International Nuclear Information System (INIS)

    Dose assessment in the case of internal exposure involves the estimation of committed effective dose based on the interpretation of bioassay measurements and on assumptions about the characteristics of the radioactive material and the time pattern and pathway of intake. The IDEAS Guidelines provide a method to harmonize dose evaluations using criteria and flow-chart procedures to be followed step by step. The EURADOS Working Group 7 'Internal Dosimetry', in collaboration with the IAEA and the Czech Technical University (CTU) in Prague, promoted the 'EURADOS/IAEA Regional Training Course on Advanced Methods for Internal Dose Assessment: Application of IDEAS Guidelines' to broaden and encourage the use of the IDEAS Guidelines; it took place in Prague (Czech Republic) from 2-6 February 2009. The ARN recognized the relevance of this training and requested a place in this activity. Subsequently, the first training course in Argentina took place from 24-28 August to train local internal dosimetry experts. This paper summarizes the main characteristics of this activity. (authors)

  9. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    Science.gov (United States)

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when considering such a benchmark. The GCC center for infection control has taken some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons.
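
    Indirect standardization, one of the risk-adjustment methods mentioned, can be sketched in a few lines: apply benchmark infection rates to the local device-day mix to get an expected count, then report the ratio of observed to expected (a standardized infection ratio). All rates and counts below are illustrative.

```python
# benchmark infection rates per 1000 device-days, by unit type (illustrative)
benchmark_rates = {"ICU": 2.1, "ward": 0.9}
# this hospital's device-day mix and observed infection count (illustrative)
local_device_days = {"ICU": 4200, "ward": 11000}
observed_infections = 16

# expected infections if the hospital performed at benchmark rates
expected = sum(benchmark_rates[u] * local_device_days[u] / 1000
               for u in benchmark_rates)
sir = observed_infections / expected   # standardized infection ratio
print(round(sir, 2))                   # -> 0.85 (below 1: better than benchmark)
```

    Because the expected count reflects the hospital's own case mix, the ratio adjusts for the confounding that a crude overall rate would hide.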

  10. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Full Text Available Tourism development has an irreplaceable role in the regional policy of almost all countries. This is due to its undeniable benefits for the local population with regard to the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and consequently enter a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and final strategies is a key factor of competitiveness. Even though the tourism sector is not a typical field where benchmarking methods are widely used, such approaches can be successfully applied. The paper focuses on a key phase of the benchmarking process which lies in the search for suitable referencing partners. The partners are selected to meet general requirements that ensure the quality of strategies. Following from this, some specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies of regions in the Czech Republic, Slovakia and Great Britain. In this way it validates the selected criteria in the frame of the international environment. Hence, it makes it possible to find strengths and weaknesses of the selected strategies and at the same time facilitates the discovery of suitable benchmarking partners.

  11. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth. 
Throughout

  12. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Full Text Available Benchmarking is considered one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how the project's systematic implementation led to success.

  13. Paleodose evaluation of porcelain: a practical regression method of saturation exponential in pre-dose technique

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A practical regression method for the saturation exponential in the pre-dose technique is proposed. The method is mainly applied to porcelain dating. To test the method, simulated paleodoses of imitations of ancient porcelain were used. The measured results are in good agreement with the simulated values of the paleodoses; the average ratios of the two values obtained by the two approaches are 1.05 and 0.99, with standard deviations (±1σ) of 19% and 15% respectively. Such errors are acceptable in porcelain dating.
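
    The regression can be sketched with a saturating-exponential fit S(D) = S_max(1 - exp(-D/D0)), inverting the fitted curve at the natural signal level to recover the paleodose. The data and parameter values below are synthetic and purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def sat_exp(D, s_max, d0):
    """Saturating dose response: S(D) = S_max * (1 - exp(-D / d0))."""
    return s_max * (1.0 - np.exp(-D / d0))

# synthetic calibration data with 1% multiplicative noise
dose = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])          # Gy
true_smax, true_d0 = 100.0, 5.0
rng = np.random.default_rng(3)
signal = sat_exp(dose, true_smax, true_d0) * (1 + rng.normal(0, 0.01, dose.size))

(s_max, d0), _ = curve_fit(sat_exp, dose, signal, p0=(80.0, 3.0))
natural = 40.0                                            # measured natural signal
paleodose = -d0 * np.log(1.0 - natural / s_max)           # invert fitted curve
```

    Fitting the saturation curve rather than a straight line matters precisely in the high-dose regime where the pre-dose response flattens.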

  14. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2

    Energy Technology Data Exchange (ETDEWEB)

    Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.; Desrosiers, A.E.

    1983-05-01

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM) is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM including methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input) in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The user's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.

  15. A point dose method for in vivo range verification in proton therapy

    International Nuclear Information System (INIS)

    Range uncertainty in proton therapy is a recognized concern. For certain treatment sites, less optimal beam directions are used to avoid the potential risk, but with reduced benefit. In vivo dosimetry, with implanted or intra-cavity dosimeters, has been widely used for treatment verification in photon/electron therapy. The method cannot, however, verify the beam range for proton treatment, unless the treatment is delivered in a different manner. Specifically, we split the spread-out Bragg peaks in a proton field into two separate fields, each delivering a 'sloped' depth-dose distribution rather than the usual plateau of a typical proton field. The two fields are 'sloped' in opposite directions so that the total depth-dose distribution retains the constant dose plateau covering the target volume. By measuring the doses received from both fields and calculating their ratio, the water-equivalent path length to the location of the implanted dosimeter can be verified, thus limiting range uncertainty to only the remaining part of the beam path. Production of such subfields was tested experimentally with a passive scattering beam delivery system. Phantom measurements have been performed to illustrate the application to in vivo beam range verification. (note)
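
    The dose-ratio idea can be sketched as follows: two sub-fields with oppositely sloped depth-dose curves sum to a flat plateau, and the ratio of the two measured doses pins down the dosimeter's water-equivalent depth. Linear slopes over the plateau region are an assumption made here for illustration.

```python
d_lo, d_hi = 10.0, 16.0           # plateau extent (cm water-equivalent), illustrative

def field_a(d):
    """Sub-field rising linearly across the plateau (relative dose)."""
    return (d - d_lo) / (d_hi - d_lo)

def field_b(d):
    """Sub-field falling across the plateau; field_a + field_b == 1 (flat sum)."""
    return 1.0 - field_a(d)

def depth_from_ratio(r):
    """Invert r = field_a / field_b to the water-equivalent depth:
    a = r/(1+r), so d = d_lo + (d_hi - d_lo) * r / (1 + r)."""
    return d_lo + (d_hi - d_lo) * r / (1.0 + r)

true_depth = 13.5
r = field_a(true_depth) / field_b(true_depth)   # the two measured doses' ratio
print(round(depth_from_ratio(r), 3))            # -> 13.5, recovering the depth
```

    Because the ratio is monotonic in depth, a single pair of dose readings suffices; measurement noise in either reading propagates through the inversion as a depth uncertainty.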

  16. Radiation dose reduction in computed tomography perfusion using spatial-temporal Bayesian methods

    Science.gov (United States)

    Fang, Ruogu; Raj, Ashish; Chen, Tsuhan; Sanelli, Pina C.

    2012-03-01

    In current computed tomography (CT) examinations, the associated X-ray radiation dose is of significant concern to patients and operators, especially in CT perfusion (CTP) imaging, which has a higher radiation dose due to its cine scanning technique. A simple and cost-effective means to perform the examinations is to lower the milliampere-seconds (mAs) parameter as low as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise and greatly degrade CT perfusion maps if no adequate noise control is applied during image reconstruction. To capture the essential dynamics of CT perfusion, a simple spatial-temporal Bayesian method that uses a piecewise parametric model of the residual function is used, and the model parameters are then estimated from a Bayesian formulation with prior smoothness constraints on the perfusion parameters. From the fitted residual function, reliable CTP parameter maps are obtained from low dose CT data. The merit of this scheme lies in the combination of an analytical piecewise residual function with a Bayesian framework using a simple spatial prior constraint for the CT perfusion application. On a dataset of 22 patients, this dynamic spatial-temporal Bayesian model yielded an increase in signal-to-noise ratio (SNR) of 78% and a decrease in mean squared error (MSE) of 40% at a low radiation dose of 43 mA.
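    The core idea, a Bayesian (MAP) estimate that trades data fidelity against a smoothness prior, can be shown in a toy form far simpler than the paper's spatial-temporal model of the residual function. Every array, weight, and size below is invented for illustration.

```python
import numpy as np

# MAP estimate with a smoothness prior: argmin ||x - y||^2 + lam * ||D x||^2,
# where D is a first-difference operator. This is a 1-D stand-in for the
# spatial regularization described above, not the authors' model.
rng = np.random.default_rng(0)
true_signal = np.sin(np.linspace(0, np.pi, 50))      # stand-in perfusion curve
noisy = true_signal + 0.3 * rng.standard_normal(50)  # low-mAs (low-dose) noise

lam = 5.0                               # prior weight (smoothness strength)
n = noisy.size
D = np.diff(np.eye(n), axis=0)          # (n-1, n) first-difference operator
x_map = np.linalg.solve(np.eye(n) + lam * D.T @ D, noisy)  # closed-form MAP

mse_raw = np.mean((noisy - true_signal) ** 2)
mse_map = np.mean((x_map - true_signal) ** 2)
print(mse_map < mse_raw)
```

    The closed-form solve works because both the likelihood and the prior are Gaussian; the paper's piecewise residual-function model adds temporal structure on top of this.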

  17. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol.2

    International Nuclear Information System (INIS)

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM, including the methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input), in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.

  18. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  19. Calibration and intercomparison methods of dose calibrators used in nuclear medicine facilities; Metodos de calibracao e de intercomparacao de calibradores de dose utilizados em servicos de medicina nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Alessandro Martins da

    1999-07-01

    Dose calibrators are used in most nuclear medicine facilities to determine the amount of radioactivity administered to a patient in a particular investigation or therapeutic procedure. It is therefore of vital importance that the equipment performs well and is regularly calibrated at an authorized laboratory, which occurs only if adequate quality assurance procedures are carried out. Some quality control tests should be performed daily, others biannually or yearly, testing, for example, accuracy and precision, reproducibility and response linearity. In this work a commercial dose calibrator was calibrated with solutions of radionuclides used in nuclear medicine. Simple instrument tests, such as response linearity and the variation of the response as the source volume increases at a constant activity concentration, were performed. This instrument can now be used as a working standard for the calibration of other dose calibrators. An intercomparison procedure was proposed as a method of quality control of dose calibrators used in nuclear medicine facilities. (author)
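    The response-linearity test mentioned above is commonly run by following a decaying source and comparing readings against pure exponential decay. A minimal sketch; the half-life is Tc-99m-like and the readings are invented, and the 5% acceptance band is an assumed tolerance, not a cited standard.

```python
import math

# Dose-calibrator linearity check: each reading of a decaying source is
# compared to the activity predicted by exponential decay from t = 0.
half_life_h = 6.0067                 # Tc-99m half-life (hours), assumed source
readings = [(0.0, 400.0), (6.0, 200.9), (12.0, 100.1), (24.0, 25.2)]  # (h, mCi)

a0 = readings[0][1]
max_dev = 0.0
for t, measured in readings:
    expected = a0 * math.exp(-math.log(2) * t / half_life_h)
    dev = abs(measured - expected) / expected   # fractional deviation
    max_dev = max(max_dev, dev)

print(max_dev < 0.05)   # assumed acceptance: within +/-5% of decay prediction
```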

  20. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm......, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  1. An in vivo dose verification method for SBRT–VMAT delivery using the EPID

    Energy Technology Data Exchange (ETDEWEB)

    McCowan, P. M., E-mail: peter.mccowan@cancercare.mb.ca [Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba R3T 2N2 (Canada); Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9 (Canada); Van Uytven, E.; Van Beek, T.; Asuni, G. [Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9 (Canada); McCurdy, B. M. C. [Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba R3T 2N2 (Canada); Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9 (Canada); Department of Radiology, University of Manitoba, 820 Sherbrook Street, Winnipeg, Manitoba R3A 1R9 (Canada)

    2015-12-15

    Purpose: Radiation treatments have become increasingly more complex with the development of volumetric modulated arc therapy (VMAT) and the use of stereotactic body radiation therapy (SBRT). SBRT involves the delivery of substantially larger doses over fewer fractions than conventional therapy. SBRT–VMAT treatments will strongly benefit from in vivo patient dose verification, as any errors in delivery can be more detrimental to the radiobiology of the patient as compared to conventional therapy. Electronic portal imaging devices (EPIDs) are available on most commercial linear accelerators (Linacs) and their documented use for dosimetry makes them valuable tools for patient dose verification. In this work, the authors customize and validate a physics-based model which utilizes on-treatment EPID images to reconstruct the 3D dose delivered to the patient during SBRT–VMAT delivery. Methods: The SBRT Linac head, including jaws, multileaf collimators, and flattening filter, was modeled using Monte Carlo methods and verified with measured data. The simulation provides energy spectrum data that are used by their “forward” model to accurately predict the fluence generated by a SBRT beam at a plane above the patient. This fluence is then transported through the patient, and the dose to the phosphor layer in the EPID is calculated. Their “inverse” model back-projects the EPID-measured focal fluence to a plane upstream of the patient and recombines it with the extra-focal fluence predicted by the forward model. This estimate of total delivered fluence is then forward projected onto the patient’s density matrix and a collapsed cone convolution algorithm calculates the dose delivered to the patient. The model was tested by reconstructing the dose for two prostate, three lung, and two spine SBRT–VMAT treatment fractions delivered to an anthropomorphic phantom. It was further validated against actual patient data for a lung and spine SBRT–VMAT plan. The

  2. The MCNP6 Analytic Criticality Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Codes Group

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
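    Verification against an analytic benchmark, as described above, ultimately reduces to a tolerance check: does the exact solution lie within the Monte Carlo result's statistical uncertainty? A sketch with illustrative numbers (not values from the MCNP suite):

```python
# Compare a Monte Carlo k-eff estimate (with 1-sigma statistical
# uncertainty) against an exact analytical benchmark value.
def verify_keff(k_mc, sigma, k_exact, n_sigma=3.0):
    """Pass if the analytic k-eff lies within n_sigma of the MC estimate."""
    return abs(k_mc - k_exact) <= n_sigma * sigma

print(verify_keff(1.00042, 0.00020, 1.0))   # within 3 sigma
print(verify_keff(1.01000, 0.00020, 1.0))   # 50 sigma away
```

    The same check applies in multigroup or continuous-energy mode; only the computed estimate and its uncertainty change.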

  3. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  4. Prediction of imipramine serum levels in enuretic children by a Bayesian method: comparison with two other conventional dosing methods.

    Science.gov (United States)

    Fernández de Gatta, M M; Tamayo, M; García, M J; Amador, D; Rey, F; Gutiérrez, J R; Domínguez-Gil Hurlé, A

    1989-11-01

    The aim of the present study was to characterize the kinetic behavior of imipramine (IMI) and desipramine in enuretic children and to evaluate the performance of different methods for dosage prediction based on individual and/or population data. The study was carried out in 135 enuretic children (93 boys) ranging in age between 5 and 13 years undergoing treatment with IMI in variable single doses (25-75 mg/day) administered at night. Sampling time was one-half the dosage interval at steady state. The number of data available for each patient varied (1-4) and was essentially limited by clinical criteria. Pharmacokinetic calculations were performed using a simple proportional relationship (method 1) and a multiple nonlinear regression program (MULTI 2 BAYES) with two different options: the ordinary least-squares method (method 2) and the least-squares method based on the Bayesian algorithm (method 3). The results obtained point to a coefficient of variation for the level/dose ratio of the drug (58%) that is significantly lower than that of the metabolite (101.4%). The forecasting capacity of method 1 is deficient both in accuracy [mean prediction error (MPE) = -5.48 +/- 69.15] and in precision (root mean squared error = 46.42 +/- 51.39). The standard deviation of the MPE (69) makes the method unacceptable from the clinical point of view. The more information that is available concerning the serum levels, the greater the accuracy and precision of methods 2 and 3. With the Bayesian method, less information on drug serum levels is needed to achieve clinically acceptable predictions. PMID:2595743
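    The accuracy and precision metrics quoted above are the standard prediction-error statistics: mean prediction error (MPE, a measure of bias) and root mean squared error (RMSE, a measure of precision). A small worked example with invented serum levels:

```python
import math

# MPE and RMSE of dosage-prediction methods; observed and predicted
# serum levels here are made-up illustrations, not study data.
observed = [120.0, 85.0, 150.0, 60.0]    # measured serum levels (ng/mL)
predicted = [110.0, 95.0, 140.0, 70.0]   # model forecasts (ng/mL)

errors = [p - o for p, o in zip(predicted, observed)]
mpe = sum(errors) / len(errors)                              # bias
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))   # precision

print(mpe, rmse)
```

    Here the errors (-10, +10, -10, +10) cancel, so MPE is 0 (no bias) while RMSE is 10: a method can be accurate on average yet imprecise, which is exactly the distinction the abstract draws for method 1.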

  5. Radiation Dose Reduction Methods For Use With Fluoroscopic Imaging, Computers And Implications For Image Quality

    Science.gov (United States)

    Edmonds, E. W.; Hynes, D. M.; Rowlands, J. A.; Toth, B. D.; Porter, A. J.

    1988-06-01

    The use of a beam splitting device for medical gastro-intestinal fluoroscopy has demonstrated that clinical images obtained with a 100 mm photofluorographic camera, and a 1024 × 1024 digital matrix with pulsed progressive readout acquisition techniques, are identical. In addition, it has been found that clinical images can be obtained with digital systems at dose levels lower than those possible with film. The use of pulsed fluoroscopy with intermittent storage of the fluoroscopic image has also been demonstrated to reduce the fluoroscopy part of the examination to very low dose levels, particularly when low repetition rates of about 2 frames per second (fps) are used. The use of digital methods reduces the amount of radiation required and also the heat generated by the x-ray tube. Images can therefore be produced using a very small focal spot on the x-ray tube, which can further improve the resolution of the clinical images.

  6. Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method

    Science.gov (United States)

    Chen, Chaobin; Huang, Qunying; Wu, Yican

    2005-04-01

    A few CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of x-ray beam and electron beam to the proportions of elements and the mass densities of the materials used to express the patient's anatomical structure. The human body can be well outlined by air, lung, adipose, muscle, soft bone and hard bone to calculate the dose distribution with Monte Carlo method. The effects of the calibration curves established by using various CT scanners are not clinically significant based on our investigation. The deviation from the values of cumulative dose volume histogram derived from CT-based voxel phantoms is less than 1% for the given target.

  7. Use of rank sum method in identifying high occupational dose jobs for ALARA implementation

    International Nuclear Information System (INIS)

    The cost-effective reduction of occupational radiation exposure (ORE) dose at a nuclear power plant could not be achieved without going through an extensive analysis of accumulated ORE dose data of existing plants. It is necessary to identify what are high ORE jobs for ALARA implementation. In this study, the Rank Sum Method (RSM) is used in identifying high ORE jobs. As a case study, the database of ORE-related maintenance and repair jobs for Kori Units 3 and 4 is used for assessment, and top twenty high ORE jobs are identified. The results are also verified and validated using the Friedman test, and RSM is found to be a very efficient way of analyzing the data. (author)
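    The abstract does not spell out the RSM scoring, so the following is a generic rank-sum sketch under assumed criteria: each job is ranked on several ORE-related measures, ranks are summed, and the smallest totals flag candidate high-dose jobs. Jobs, criteria, and values are all invented.

```python
# Rank Sum Method (RSM) screen: rank jobs on each criterion (rank 1 = worst,
# i.e. highest exposure burden), then sum ranks across criteria.
jobs = {
    # job: (collective dose person-mSv, frequency per outage, workers involved)
    "steam generator work": (120.0, 4, 25),
    "valve maintenance":    (35.0, 10, 8),
    "refueling support":    (60.0, 2, 30),
    "instrument calib.":    (5.0, 12, 3),
}

def rank(values):
    """Rank descending: the largest value gets rank 1 (no ties in this toy data)."""
    order = sorted(values, reverse=True)
    return {v: order.index(v) + 1 for v in values}

criteria = list(zip(*jobs.values()))           # one tuple of values per criterion
ranks_per_criterion = [rank(c) for c in criteria]
rank_sums = {
    job: sum(r[v] for r, v in zip(ranks_per_criterion, vals))
    for job, vals in jobs.items()
}
top = min(rank_sums, key=rank_sums.get)        # smallest sum = consistently high ORE
print(top)
```

    In the study, the resulting top-twenty list was cross-checked with the Friedman test; here a tie-handling rule would be needed for real data.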

  8. Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method

    Institute of Scientific and Technical Information of China (English)

    Chen Chaobin; Huang Qunying; Wu Yican

    2005-01-01

    A few CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of X-ray beam and electron beam to the proportions of elements and the mass densities of the materials used to express the patient's anatomical structure. The human body can be well outlined by air, lung, adipose, muscle, soft bone and hard bone to calculate the dose distribution with Monte Carlo method. The effects of the calibration curves established by using various CT scanners are not clinically significant based on our investigation. The deviation from the values of cumulative dose volume histogram derived from CT-based voxel phantoms is less than 1% for the given target.

  9. DOSE MEASUREMENT IN ULTRAVIOLET DISINFECTION OF WATER AND WASTE WATER BY CHEMICAL METHOD

    Directory of Open Access Journals (Sweden)

    F.Vaezi

    1995-06-01

    Full Text Available Chemical methods (actinometry) depend on the measurement of the extent to which a chemical reaction occurs under the influence of UV light. Two chemical actinometers were used in this research. In one method, mixtures of potassium peroxydisulphate and butanol solutions were irradiated for various time intervals, and pH changes were determined. A linear relationship was observed between these changes and the UV dose applied. In the other method, acidic solutions of ammonium molybdate and ethyl alcohol were irradiated, and the intensity of the blue color developed was determined by titration with potassium permanganate solutions. The volumes of titrant used were then plotted against the UV doses, which showed a linear relationship that could be used for dosimetry. Both of these actinometers proved to be reliable. The first is the method of choice where high accuracy is required; the second is preferred for its feasibility, requiring neither special equipment nor hard-to-obtain raw materials.
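    Either linear relationship turns the actinometer into a dosimeter: fit a line to (UV dose, response) calibration points, then invert it for unknown samples. A least-squares sketch; the calibration numbers are invented, not the study's data.

```python
# Linear calibration for the permanganate-titration actinometer:
# fit response = slope * dose + intercept, then invert for unknown doses.
doses = [0.0, 10.0, 20.0, 30.0]        # UV dose (mJ/cm^2), assumed
titrant_ml = [0.1, 2.1, 4.1, 6.1]      # KMnO4 titrant volume (mL), assumed

n = len(doses)
mean_x = sum(doses) / n
mean_y = sum(titrant_ml) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(doses, titrant_ml)) \
        / sum((x - mean_x) ** 2 for x in doses)
intercept = mean_y - slope * mean_x

def dose_from_titrant(v_ml):
    """Invert the fitted line to estimate UV dose from titrant volume."""
    return (v_ml - intercept) / slope

print(round(dose_from_titrant(3.1), 6))   # 15.0
```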

  10. Using MCNP and Monte Carlo method for Investigation of dose field of Irradiation facility at Hanoi Irradiation Center

    International Nuclear Information System (INIS)

    The MCNP code and the Monte Carlo method were used to calculate the dose rate in the air space of the irradiation room at the Hanoi Irradiation Center. Experimental measurements were also carried out to investigate the real distribution of the dose field in the air of the irradiator, as well as the distribution of absorbed dose in sample product containers. The results show a deviation between the calculated data given by MCNP and the measurements. The MCNP data give a symmetric distribution of the dose field about the axes passing through the center of the source rack, whereas the experimental data show that the dose rate takes higher values in the lower part of the space: the closer to the floor, the higher the dose rate. The same phenomenon was observed in the measurements of absorbed dose in the sample product containers. (author)

  11. Results of the IAEA-CEC coordinated research programme on radiation doses in diagnostic radiology and methods for reduction

    International Nuclear Information System (INIS)

    In 1991, a Coordinated Research Programme on assessment of radiation doses in diagnostic radiology and studying methods for reduction was started in IAEA Member States in cooperation with the CEC Radiation Protection Research Action. It was agreed to carry out a pilot exercise consisting of assessing patients' Entrance Surface Doses, followed by: analysis of the relevant parameters; quality control and corrections, and reassessment of doses where applicable. The results show that dose reduction was achieved without deterioration of the diagnostic information of the images, by applying simple and inexpensive methods. (Author)

  12. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  13. Clinically meaningful performance benchmarks in MS

    Science.gov (United States)

    Motl, Robert W.; Scagnelli, John; Pula, John H.; Sosnoff, Jacob J.; Cadavid, Diego

    2013-01-01

    Objective: Identify and validate clinically meaningful Timed 25-Foot Walk (T25FW) performance benchmarks in individuals living with multiple sclerosis (MS). Methods: Cross-sectional study of 159 MS patients first identified candidate T25FW benchmarks. To characterize the clinical meaningfulness of T25FW benchmarks, we ascertained their relationships to real-life anchors, functional independence, and physiologic measurements of gait and disease progression. Candidate T25FW benchmarks were then prospectively validated in 95 subjects using 13 measures of ambulation and cognition, patient-reported outcomes, and optical coherence tomography. Results: T25FW of 6 to 7.99 seconds was associated with a change in occupation due to MS, occupational disability, walking with a cane, and needing “some help” with instrumental activities of daily living; T25FW ≥8 seconds was associated with collecting Supplemental Security Income and government health care, walking with a walker, and inability to do instrumental activities of daily living. During prospective benchmark validation, we trichotomized data by T25FW benchmarks (10 seconds) ranges of performance. PMID:24174581
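    Read as a classifier, the validated benchmarks trichotomize walk times. A sketch using only the cutoffs stated in the abstract (<6 s, 6-7.99 s, ≥8 s); the example time is invented and the band labels are shorthand, not clinical categories.

```python
# Trichotomize Timed 25-Foot Walk (T25FW) times by the benchmark cutoffs
# reported above.
def t25fw_band(seconds):
    if seconds < 6.0:
        return "<6 s"
    if seconds < 8.0:
        return "6-7.99 s"   # associated with cane use, occupational change
    return ">=8 s"          # associated with walker use, disability benefits

print(t25fw_band(7.2))
```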

  14. Action-Oriented Benchmarking: Concepts and Tools

    Energy Technology Data Exchange (ETDEWEB)

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

    Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article--Part 1 of a two-part series--we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and associated software tool, EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.

  15. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...

  16. Benchmarks: WICHE Region 2012

    Science.gov (United States)

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  17. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  18. Benchmarking Public Procurement 2016

    OpenAIRE

    World Bank Group

    2015-01-01

    Benchmarking Public Procurement 2016 Report aims to develop actionable indicators which will help countries identify and monitor policies and regulations that impact how private sector companies do business with the government. The project builds on the Doing Business methodology and was initiated at the request of the G20 Anti-Corruption Working Group.

  19. Comparison of passive and active radon measurement methods for personal occupational dose assessment

    Directory of Open Access Journals (Sweden)

    Hasanzadeh Elham

    2016-01-01

    Full Text Available To compare the performance of active short-term and passive long-term radon measurement methods, a study was carried out in several closed spaces, including a uranium mine in Iran. For the passive method, solid-state nuclear track detectors based on Lexan polycarbonate were utilized; for the active method, an AlphaGUARD monitor. The study focused on the correlation between the results obtained for estimating the average indoor radon concentrations and the consequent personal occupational doses in various working places. The repeatability of each method was investigated, too. In addition, it was shown that the radon concentrations in different stations of the continually ventilated uranium mine were comparable to those in ground-floor laboratories or storage rooms (without continual ventilation) and lower than those in underground laboratories.

  20. Standardized benchmarking in the quest for orthologs

    DEFF Research Database (Denmark)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador;

    2016-01-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision......-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods...... and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods....

  1. Standardized benchmarking in the quest for orthologs.

    Science.gov (United States)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P; Schreiber, Fabian; da Silva, Alan Sousa; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Jensen, Lars Juhl; Martin, Maria J; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E; Thomas, Paul D; Sonnhammer, Erik; Dessimoz, Christophe

    2016-05-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882
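    Benchmarking orthology predictions of this kind ultimately rests on precision-recall computations against a reference set, which is the trade-off the abstract highlights. A toy version; the gene pairs are invented, not from the benchmark service.

```python
# Precision and recall of a predicted ortholog-pair set against a
# reference ("true") set, the basic currency of orthology benchmarks.
truth = {("g1", "h1"), ("g2", "h2"), ("g3", "h3"), ("g4", "h4")}
predicted = {("g1", "h1"), ("g2", "h2"), ("g3", "h5")}

tp = len(truth & predicted)        # correctly predicted pairs
precision = tp / len(predicted)    # how much of the prediction is right
recall = tp / len(truth)           # how much of the truth is recovered

print(precision, recall)
```

    Different applications sit at different points on this curve: phylogenomics may favor precision, while broad functional annotation may favor recall, hence the need for standardized, multi-benchmark comparison.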

  2. Experimental method for calculation of effective doses in interventional radiology; Metodo experimental para calculo de dosis efectivas en radiologia intervencionista

    Energy Technology Data Exchange (ETDEWEB)

    Herraiz Lblanca, M. D.; Diaz Romero, F.; Casares Magaz, O.; Garrido Breton, C.; Catalan Acosta, A.; Hernandez Armas, J.

    2013-07-01

    This paper proposes a method for calculating the effective dose in any interventional radiology procedure using an Alderson RANDO anthropomorphic phantom and TLD-100 chip dosimeters. The method has been applied to an angioradiology procedure: biliary drainage. The objectives were: (a) to assemble a method that, on an experimental basis, determines organ doses in order to calculate effective doses in complex procedures, and (b) to apply the method to the calculation of the effective dose of biliary drainage. (Author)

  3. A method for comparison of animal and human alveolar dose and toxic effect of inhaled ozone

    International Nuclear Information System (INIS)

    Present models for predicting the pulmonary toxicity of O3 in humans from the toxic effects observed in animals rely on dosimetric measurements of O3 mass balance and species comparisons of mechanisms that protect tissue against O3. The goal of the study described was to identify a method to directly compare O3 dose and effect in animals and humans using bronchoalveolar lavage fluid markers. The feasibility of estimating O3 dose to alveoli of animals and humans was demonstrated through assay of reaction products of 18O-labeled O3 in lung surfactant and macrophage pellets of rabbits. The feasibility of using lung lavage fluid protein measurements to quantify the O3 toxic response in humans was demonstrated by the finding of significantly increased lung lavage protein in 10 subjects exposed to 0.4 ppm O3 for 2 h with intermittent periods of heavy exercise. The validity of using the lavage protein marker to quantify the response in animals has already been established. The positive results obtained in both the 18O3 and the lavage protein studies reported here suggest that it should be possible to obtain a direct comparison of both alveolar dose and toxic effect of O3 to alveoli of animals or humans

  4. Regulatory guide relating to the determination of whole-body doses due to internal radiation exposure (principles and methods)

    International Nuclear Information System (INIS)

This compilation defines the principles and methods to be applied for determining the doses emanating from internal radiation exposure in persons with dose levels exceeding the critical levels defined in the "Regulatory guide for health physics controls". The obligatory procedure is intended to guarantee that measurements and interpretations of personnel doses and intakes are done on a standardized basis by a standardized procedure, so as to obtain comparable results. (orig.)

  5. Neutron equivalent dose-rate measuring according to the single-sphere albedo method

    International Nuclear Information System (INIS)

This report presents the results of calibration irradiations using the single-sphere albedo measuring method. The work was carried out to optimise the arrangement of detectors on the surface of the sphere, to reduce the diameter of the moderator sphere from the previous 30 cm, and in addition to determine the energy and direction dependence of a neutron equivalent dose-rate meter with He-3 detectors. Optimisation of the detector arrangement on the sphere's surface resulted in a corresponding boron-plastic encapsulation, with detector depths inside and outside the moderator of di = -6 mm and da = 5 mm for albedo neutron detectors and thermal neutron detectors, respectively. (orig./DG)

  6. Gaia FGK benchmark stars: Metallicity

    Science.gov (United States)

    Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Worley, C. C.; Pancino, E.; Cantat-Gaudin, T.; Magrini, L.; Bergemann, M.; González Hernández, J. I.; Hill, V.; Lardo, C.; de Laverny, P.; Lind, K.; Masseron, T.; Montes, D.; Mucciarelli, A.; Nordlander, T.; Recio Blanco, A.; Sobeck, J.; Sordo, R.; Sousa, S. G.; Tabernero, H.; Vallenari, A.; Van Eck, S.

    2014-04-01

    Context. To calibrate automatic pipelines that determine atmospheric parameters of stars, one needs a sample of stars, or "benchmark stars", with well-defined parameters to be used as a reference. Aims: We provide detailed documentation of the iron abundance determination of the 34 FGK-type benchmark stars that are selected to be the pillars for calibration of the one billion Gaia stars. They cover a wide range of temperatures, surface gravities, and metallicities. Methods: Up to seven different methods were used to analyze an observed spectral library of high resolution and high signal-to-noise ratio. The metallicity was determined by assuming a value of effective temperature and surface gravity obtained from fundamental relations; that is, these parameters were known a priori and independently from the spectra. Results: We present a set of metallicity values obtained in a homogeneous way for our sample of benchmark stars. In addition to this value, we provide detailed documentation of the associated uncertainties. Finally, we report a value of the metallicity of the cool giant ψ Phe for the first time. Based on NARVAL and HARPS data obtained within the Gaia DPAC (Data Processing and Analysis Consortium) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group and on data retrieved from the ESO-ADP database. Tables 6-76 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A133
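One standard way to merge per-method determinations into a single homogeneous value with an uncertainty is an inverse-variance weighted mean. The sketch below uses invented [Fe/H] values and errors, not the paper's results:

```python
import math

# Sketch: combining [Fe/H] determinations from several analysis methods
# into one value with an uncertainty via an inverse-variance weighted
# mean. The per-method values and sigmas are illustrative only.

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its standard error."""
    weights = [1.0 / s**2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, math.sqrt(1.0 / wsum)

feh_values = [-0.52, -0.48, -0.50]   # hypothetical per-method [Fe/H]
feh_sigmas = [0.05, 0.05, 0.10]      # hypothetical per-method errors
feh, err = weighted_mean(feh_values, feh_sigmas)
```

Methods with smaller quoted errors dominate the combined value, which is one reason the paper documents the per-method uncertainties so carefully.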

  7. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, taking four different applications of benchmarking as our starting point. The regulation of utility companies is discussed, after which...

  8. Experimental validation of a kV source model and dose computation method for CBCT imaging in an anthropomorphic phantom.

    Science.gov (United States)

    Poirier, Yannick; Tambasco, Mauro

    2016-01-01

    We present an experimental validation of a kilovoltage (kV) X-ray source characterization model in an anthropomorphic phantom to estimate patient-specific absorbed dose from kV cone-beam computed tomography (CBCT) imaging procedures and compare these doses to nominal weighted CT-dose index (CTDIw) dose estimates. We simulated the four default Varian On-Board Imager 1.4 (OBI) CBCT imaging protocols (i.e., standard-dose head, low-dose thorax, pelvis, and pelvis spotlight) using our previously developed and easy-to-implement X-ray point-source model and source characterization approach. We used this characterized source model to compute absorbed dose in homogeneous and anthropomorphic phantoms using our previously validated in-house kV dose computation software (kVDoseCalc). We compared these computed absorbed doses to doses derived from ionization chamber measurements acquired at several points in a homogeneous cylindrical phantom and from thermoluminescent detectors (TLDs) placed in the anthropomorphic phantom. In the homogeneous cylindrical phantom, computed values of absorbed dose relative to the center of the phantom agreed with measured values within ≤2% of local dose, except in regions of high-dose gradient where the distance to agreement (DTA) was 2 mm. The computed absorbed dose in the anthropomorphic phantom generally agreed with TLD measurements, with an average percent dose difference ranging from 2.4% ± 6.0% to 5.7% ± 10.3%, depending on the characterized CBCT imaging protocol. The low-dose thorax and the standard-dose scans showed the best and worst agreement, respectively. Our results also broadly agree with published values, which are approximately twice as high as the nominal CTDIw would suggest. The results demonstrate that our previously developed method for modeling and characterizing a kV X-ray source could be used to accurately assess patient-specific absorbed dose from kV CBCT procedures within reasonable accuracy, and serve as further

  9. Reconstruction of high-resolution 3D dose from matrix measurements : error detection capability of the COMPASS correction kernel method

    NARCIS (Netherlands)

    Godart, J.; Korevaar, E. W.; Visser, R.; Wauben, D. J. L.; van t Veld, Aart

    2011-01-01

    The COMPASS system (IBA Dosimetry) is a quality assurance (QA) tool which reconstructs 3D doses inside a phantom or a patient CT. The dose is predicted according to the RT plan with a correction derived from 2D measurements of a matrix detector. This correction method is necessary since a direct recon

  10. Establishment and validation of a method for multi-dose irradiation of cells in 96-well microplates

    International Nuclear Information System (INIS)

    Highlights: ► We established a method for multi-dose irradiation of cell cultures within a 96-well plate. ► Equations for adjusting to the preferred dose levels are produced and provided. ► Up to eight different dose levels can be tested in one microplate. ► This method results in fast and reliable estimation of radiation dose–response curves. -- Abstract: Microplates are useful tools in chemistry, biotechnology and molecular biology. In radiobiology research, they can also be applied to assess the effect of a certain radiation dose delivered to the whole microplate, to test radio-sensitivity, radio-sensitization or radio-protection. Whether different radiation doses can be accurately applied to a single 96-well plate, to further facilitate and accelerate research on the one hand and spare funds on the other, is a question dealt with in the current paper. Following repeated ion-chamber, TLD and radiotherapy planning dosimetry, we established a method for multi-dose irradiation of cell cultures within a 96-well plate, which allows an accurate delivery of desired doses in sequential columns of the microplate. Up to eight different dose levels can be tested in one microplate. This method results in fast and reliable estimation of radiation dose–response curves
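Once each column of the plate has received its own dose, a dose-response curve can be fitted by least squares. The sketch below assumes one dose level per column and a linear-quadratic survival model; the survival values are synthetic, not the paper's data:

```python
import numpy as np

# Sketch: estimating a radiation dose-response curve from a multi-dose
# microplate, one dose level per column. Survival values are synthetic,
# generated from the linear-quadratic model ln S = -(alpha*D + beta*D^2).

doses = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])      # Gy, per column
survival = np.exp(-(0.2 * doses + 0.03 * doses**2))    # synthetic data

# Least-squares fit of -ln S on [D, D^2]; no intercept since S(0) = 1.
X = np.column_stack([doses, doses**2])
y = -np.log(survival)
(alpha, beta), *_ = np.linalg.lstsq(X, y, rcond=None)
```

With noise-free synthetic data the fit recovers the generating parameters exactly; with real per-well survival measurements the same two-column design matrix applies.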

  11. Remarks on a benchmark nonlinear constrained optimization problem

    Institute of Scientific and Technical Information of China (English)

    Luo Yazhong; Lei Yongjun; Tang Guojin

    2006-01-01

    Remarks on a benchmark nonlinear constrained optimization problem are made. Due to a citation error, two entirely different results for the benchmark problem have been obtained by independent researchers. In our study, parallel simulated annealing using the simplex method is employed to solve the benchmark nonlinear constrained problem with the mistaken formula; the best-known solution is obtained, and its optimality is verified by the Kuhn-Tucker conditions.

  12. A novel method of estimating dose responses for polymer gels using texture analysis of scanning electron microscopy images.

    Directory of Open Access Journals (Sweden)

    Cheng-Ting Shih

    Full Text Available Polymer gels are regarded as a potential dosimeter for independent validation of absorbed doses in clinical radiotherapy. Several imaging modalities have been used to convert radiation-induced polymerization to absorbed doses from a macro-scale viewpoint. This study developed a novel dose conversion mechanism by texture analysis of scanning electron microscopy (SEM) images. The modified N-isopropyl-acrylamide (NIPAM) gels were prepared under normoxic conditions, and were administered radiation doses from 5 to 20 Gy. After freeze drying, the gel samples were sliced for SEM scanning with 50×, 500×, and 3500× magnifications. Four texture indices were calculated based on the gray level co-occurrence matrix (GLCM). The results showed that entropy and homogeneity were more suitable than contrast and energy as dose indices for higher linearity and sensitivity of the dose response curves. After parameter optimization, an R² value of 0.993 can be achieved for homogeneity using 500× magnified SEM images with 27 pixel offsets and no outlier exclusion. For dose verification, the percentage errors between the prescribed dose and the measured dose for 5, 10, 15, and 20 Gy were -7.60%, 5.80%, 2.53%, and -0.95%, respectively. We conclude that texture analysis can be applied to the SEM images of gel dosimeters to accurately convert micro-scale structural features to absorbed doses. The proposed method may extend the feasibility of applying gel dosimeters in the fields of diagnostic radiology and radiation protection.
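The GLCM texture indices used here (entropy, homogeneity, contrast, energy) are standard quantities. The sketch below computes a symmetric, normalized GLCM and two of the indices in plain NumPy on a toy image standing in for an SEM scan; real analyses would use full gray-level depth and the paper's 27-pixel offset:

```python
import numpy as np

# Sketch: gray level co-occurrence matrix (GLCM) texture indices for a
# small grayscale image with a horizontal pixel offset. The 4x4 toy
# image is a stand-in for an SEM scan of a gel sample.

def glcm(image, levels, dx=1):
    """Symmetric, normalized GLCM for horizontal offset dx."""
    P = np.zeros((levels, levels))
    left, right = image[:, :-dx], image[:, dx:]
    for i, j in zip(left.ravel(), right.ravel()):
        P[i, j] += 1
        P[j, i] += 1          # symmetrize
    return P / P.sum()

def homogeneity(P):
    i, j = np.indices(P.shape)
    return np.sum(P / (1.0 + np.abs(i - j)))

def entropy(P):
    nz = P[P > 0]
    return -np.sum(nz * np.log2(nz))

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
P = glcm(img, levels=4)
h, e = homogeneity(P), entropy(P)
```

Homogeneity rises when co-occurring gray levels are similar (mass near the GLCM diagonal), while entropy rises with textural disorder; the study found these two indices track absorbed dose best.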

  13. Radiography benchmark 2014

    Science.gov (United States)

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-01

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  14. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    An infrastructure is emerging that enables the positioning of populations of on-line, mobile service users. In step with this, research in the management of moving objects has attracted substantial attention. In particular, quite a few proposals now exist for the indexing of moving objects, and m...... of the benchmark to three spatio-temporal indexes - the TPR-, TPR*-, and Bx-trees. Representative experimental results and consequent guidelines for the usage of these indexes are reported....

  15. Accounting method for radiation doses due to long-lived natural radionuclides

    International Nuclear Information System (INIS)

    A method to evaluate radiation doses occurring in the very far future from current nuclear fuel production and waste management practices has been developed. The method may be applied to compare possible nuclear fuel and radioactive waste management schemes; here, it has been used mainly to evaluate the additional radiological impact from a global nuclear power production programme as compared to exposure to natural radionuclides in undisturbed formations. The main results of this study are that the highest possible increase of the radiological impact of natural uranium and decay products through anthropogenic activity is around 1%. This additional impact is of the same order of magnitude as the non-uranium-related natural background exposure

  16. Features and technology of enterprise internal benchmarking

    Directory of Open Access Journals (Sweden)

    A.V. Dubodelova

    2013-06-01

    Full Text Available The aim of the article. The aim of the article is to generalize the characteristics, objectives, and advantages of internal benchmarking. A sequence of stages for internal benchmarking technology is formed, focused on continuous improvement of enterprise processes by implementing existing best practices. The results of the analysis. The business activity of domestic enterprises in a crisis business environment has to focus on the best success factors of their structural units, using standard research assessment of their performance and their innovative experience in practice. A modern method of satisfying those needs is internal benchmarking; according to Bain & Co., internal benchmarking is one of the three most common methods of business management. The features and benefits of benchmarking are defined in the article, and the sequence and methodology of implementing the individual stages of benchmarking technology projects are formulated. The authors define benchmarking as a strategic orientation toward best achievement by comparing performance and working methods against a standard. It covers the processes of research, the organization of production and distribution, and management and marketing methods relative to reference objects, in order to identify innovative practices and implement them in a particular business. Developing benchmarking at domestic enterprises requires analysis of theoretical bases and practical experience. Selecting the best experience helps to develop recommendations for its application in practice. It is also essential to classify its types, identify its characteristics, study appropriate areas of use, and develop a methodology of implementation. The structure of internal benchmarking objectives includes: promoting research and establishing minimum acceptable levels of efficiency for the processes and activities available at the enterprise; and identifying current problems and areas that need improvement without involving foreign experience

  17. MO-E-17A-04: Size-Specific Dose Estimate (SSDE) Provides a Simple Method to Calculate Organ Dose for Pediatric CT Examinations

    Energy Technology Data Exchange (ETDEWEB)

    Moore, B; Brady, S; Kaufman, R [St Jude Children's Research Hospital, Memphis, TN (United States); Mirro, A [Washington University, St. Louis, MO (United States)

    2014-06-15

    Purpose: Investigate the correlation of SSDE with organ dose in a pediatric population. Methods: Four anthropomorphic phantoms, representing a range of pediatric body habitus, were scanned with MOSFET dosimeters placed at 23 organ locations to determine absolute organ dosimetry. Phantom organ dosimetry was divided by phantom SSDE to determine correlation between organ dose and SSDE. Correlation factors were then multiplied by patient SSDE to estimate patient organ dose. Patient demographics consisted of 352 chest and 241 abdominopelvic CT examinations, 22 ± 15 kg (range 5−55 kg) mean weight, and 6 ± 5 years (range 4 mon to 23 years) mean age. Patient organ dose estimates were compared to published pediatric Monte Carlo study results. Results: Phantom effective diameters were matched with patient population effective diameters to within 4 cm. 23 organ correlation factors were determined in the chest and abdominopelvic region across nine pediatric weight subcategories. For organs fully covered by the scan volume, correlation in the chest (average 1.1; range 0.7−1.4) and abdominopelvic (average 0.9; range 0.7−1.3) was near unity. For organs that extended beyond the scan volume (i.e., skin, bone marrow, and bone surface), correlation was determined to be poor (average 0.3; range: 0.1−0.4) for both the chest and abdominopelvic regions, respectively. Pediatric organ dosimetry was compared to published values and was found to agree in the chest to better than an average of 5% (27.6/26.2) and in the abdominopelvic region to better than 2% (73.4/75.0). Conclusion: Average correlation of SSDE and organ dosimetry was found to be better than ± 10% for fully covered organs within the scan volume. This study provides a list of organ dose correlation factors for the chest and abdominopelvic regions, and describes a simple methodology to estimate individual pediatric patient organ dose based on patient SSDE.
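The study's methodology reduces to a per-organ multiplication: organ dose ≈ correlation factor × patient SSDE. The sketch below illustrates that step with placeholder factors, not the published table:

```python
# Sketch of the study's idea: organ dose is estimated as a per-organ
# correlation factor times the patient's SSDE. The factors below are
# illustrative placeholders, not the published values; the real table
# is per-organ, per-region, and per-weight-group.

chest_factors = {
    "lung": 1.1,
    "heart": 1.2,
    "bone_marrow": 0.3,   # extends beyond scan volume: poor correlation
}

def estimate_organ_doses(ssde_mgy, factors):
    """Scale SSDE (mGy) by each organ's correlation factor."""
    return {organ: f * ssde_mgy for organ, f in factors.items()}

doses = estimate_organ_doses(ssde_mgy=5.0, factors=chest_factors)
```

For fully covered organs the factors cluster near unity, which is why SSDE alone is already a reasonable organ-dose surrogate; organs partly outside the scan volume need their own, much smaller factors.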

  18. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.
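The core operation of such a benchmarking database is placing one building's energy use intensity (EUI) within the distribution of its peer group. A minimal sketch, with invented survey values standing in for CEUS data:

```python
# Sketch: percentile-rank a building's energy use intensity (EUI)
# against a peer-group distribution. The peer EUIs are invented
# stand-ins for regional survey data such as CEUS.

peer_euis = [45, 52, 58, 61, 67, 73, 80, 95, 110, 130]  # kBtu/ft2/year

def percentile_rank(value, peers):
    """Percentage of peers with lower EUI than the given building."""
    below = sum(1 for p in peers if p < value)
    return 100.0 * below / len(peers)

rank = percentile_rank(70, peer_euis)   # five of ten peers use less
```

A building near the 50th percentile is typical of its peers; one far above it is a candidate for an energy audit, which is exactly the "starting point" role the abstract describes.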

  19. A simplified method of four-dimensional dose accumulation using the mean patient density representation

    OpenAIRE

    Glide-Hurst, Carri K.; Hugo, Geoffrey D.; Liang, Jian; Yan, Di

    2008-01-01

    The purpose of this work was to demonstrate, both in phantom and patient, the feasibility of using an average 4DCT image set (AVG-CT) for 4D cumulative dose estimation. A series of 4DCT numerical phantoms and corresponding AVG-CTs were generated. For full 4D dose summation, static dose was calculated on each phase and cumulative dose was determined by combining each phase’s static dose distribution with known tumor displacement. The AVG-CT cumulative dose was calculated similarly, although th...
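The full 4D summation described above, mapping each phase's static dose back to a reference geometry by its known displacement and then combining, can be sketched in one dimension. Integer-voxel shifts via `np.roll` keep the toy example simple; this is not the authors' implementation, which would use full deformable mapping:

```python
import numpy as np

# Sketch: 4D cumulative dose as the displacement-mapped average of
# per-phase static doses. Integer-voxel shifts (np.roll) stand in for
# deformable registration in this 1D toy example.

def accumulate(phase_doses, shifts):
    """Average per-phase doses after shifting each phase back to the
    reference geometry by its known displacement (in voxels)."""
    mapped = [np.roll(d, -s) for d, s in zip(phase_doses, shifts)]
    return np.mean(mapped, axis=0)

# Two respiratory phases of a 1D dose profile; in phase 1 the target
# (and its dose) is displaced by 2 voxels.
d0 = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
d1 = np.roll(d0, 2)
cum = accumulate([d0, d1], shifts=[0, 2])
```

Because each phase is mapped back before averaging, a rigidly moving target accumulates the same dose it would receive if static; the paper's AVG-CT approach approximates this summation with a single calculation on the mean-density image.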

  20. The NAS Parallel Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental

  1. A new method for synthesizing radiation dose-response data from multiple trials applied to prostate cancer

    DEFF Research Database (Denmark)

    Diez, Patricia; Vogelius, Ivan S; Bentzen, Søren M

    2010-01-01

    A new method is presented for synthesizing dose-response data for biochemical control of prostate cancer according to study design (randomized vs. nonrandomized) and risk group (low vs. intermediate-high)....

  2. Effect of Radiation Monitoring Method and Formula Differences on Estimated Physician Dose during Percutaneous Coronary Intervention

    Energy Technology Data Exchange (ETDEWEB)

    Chida, K.; Morishima, Y.; Masuyama, H.; Chiba, H.; Katahira, Y.; Inaba, Y.; Mori, I.; Maruoka, S.; Takahashi, S.; Kohzuki, M.; Zuguchi, M. (Dept. of Radiological Technology, School of Health Sciences, Faculty of Medicine, Tohoku Univ., Sendai (Japan))

    2009-02-15

    Background: Currently, one or two dosimeters are used to monitor radiation exposure in most cardiac laboratories. In addition, several different formulas are used to convert exposure data into an effective dose (ED). Purpose: To clarify the effect of monitoring methods and formula selection on the estimated ED for physicians during percutaneous coronary interventions (PCIs). Material and Methods: The ED of physicians during cardiac catheterization was determined using an optically stimulated luminescence dosimeter (Luxel badge). Two Luxel badges were worn: one beneath a personal lead apron (0.35-mm lead equivalent) at the chest and one outside of the apron at the neck. Results: The difference in the average ED of seven physicians was approximately fivefold (range 1.13-5.43 mSv/year) using the six different formulas in the clinical evaluation. The estimated physician ED differed markedly according to both the monitoring method and formula selected. Conclusion: ED estimation is dependent on both the monitoring method and the formula used. Therefore, it is important that comparisons among laboratories are based on the same monitoring method and same formula for calculating the ED
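The study's central point, that formula choice alone changes the estimated effective dose (ED), is easy to demonstrate. Two-dosimeter algorithms typically take the generic form E = a·H_under + b·H_over; the coefficient pairs below are illustrative stand-ins, not the six formulas evaluated in the study:

```python
# Sketch: how formula choice changes the estimated effective dose (ED)
# from the same two badge readings. H_under is the badge beneath the
# lead apron at the chest; H_over is the neck badge outside the apron.
# The coefficient pairs are illustrative, not the study's six formulas.

H_under = 1.0    # mSv/year, under-apron reading
H_over = 20.0    # mSv/year, over-apron (neck) reading

formulas = {
    "A": (0.5, 0.025),
    "B": (1.0, 0.07),
    "C": (0.5, 0.1),
}
estimates = {name: a * H_under + b * H_over
             for name, (a, b) in formulas.items()}
spread = max(estimates.values()) / min(estimates.values())
```

With these inputs the estimates range from 1.0 to 2.5 mSv/year, a 2.5-fold spread from formula choice alone, mirroring the roughly fivefold spread the study reports across its six formulas.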

  3. BiodosEPR-2006 consensus committee report on biodosimetric methods to evaluate radiation doses at long times after exposure

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Steven L. [Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Bethesda, MD (United States)], E-mail: ssimon@mail.nih.gov; Bailiff, Ian [Luminescence Dating and Dosimetry Laboratory, Durham University, Durham (United Kingdom); Bouville, Andre [Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Bethesda, MD (United States); Fattibene, Paola [Istituto Superiore di Sanita and Istituto Nazionale di Fisica Nucleare, Rome (Italy); Kleinerman, Ruth A. [Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Bethesda, MD (United States); Lloyd, David C. [Health Protection Agency, Radiation Protection Division, Chilton, Didcot, Oxfordshire (United Kingdom); McKeever, Stephen W.S. [Office of the Vice President for Research and Technology Transfer, Oklahoma State University, Stillwater, OK (United States); Romanyukha, Alexander [Department of Radiology, Uniformed Services, University of the Health Sciences, Bethesda, MD (United States); Sevan' kaev, Alexander V. [Medical Radiological Research Centre, Obninsk (Russian Federation); Tucker, James D. [Department of Biological Sciences, Wayne State University, Detroit, MI (United States); Wieser, Albrecht [GSF National Research Center, Institute of Radiation Protection, Neuherberg (Germany)

    2007-07-15

    The requirements for biodosimetric techniques used at long times after exposure, i.e., 6 months to more than 50 years, are unique compared to the requirements for methods used for immediate dose estimation. In addition to the fundamental requirement that the assay measures a physical or biologic change that is proportional to the energy absorbed, the signal must be highly stable over time to enable reasonably precise determinations of the absorbed dose decades later. The primary uses of these biodosimetric methods have been to support long-term health risk (epidemiologic) studies or to support compensation (damage) claims. For these reasons, the methods must be capable of estimating individual doses, rather than group mean doses. Even when individual dose estimates can be obtained, inter-individual variability remains as one of the most difficult problems in using biodosimetry measurements to rigorously quantify individual exposures. Other important criteria for biodosimetry methods include obtaining samples with minimal invasiveness, low detection limits, and high precision. Cost and other practical limitations generally prohibit biodosimetry measurements on a large enough sample to replace analytical dose reconstruction in epidemiologic investigations. However, these measurements can be extremely valuable as a means to corroborate analytical or model-based dose estimates, to help reduce uncertainty in individual doses estimated by other methods and techniques, and to assess bias in dose reconstruction models. There has been extensive use of three biodosimetric techniques in irradiated populations: EPR (using tooth enamel), FISH (using blood lymphocytes), and GPA (also using blood); these methods have been supplemented with luminescent methods applied to building materials and artifacts. A large number of investigations have used biodosimetric methods many years after external and, to a lesser extent, internal exposure to reconstruct doses received from accidents.

  4. BNCT dose calculation in irregular fields using the sector integration method

    Energy Technology Data Exchange (ETDEWEB)

    Blaumann, H.R. E-mail: blaumann@cab.cnea.gov.ar; Sanz, D.E.; Longhino, J.M.; Larrieu, O.A. Calzetta

    2004-11-01

    Irregular fields for boron neutron capture therapy (BNCT) have already been proposed to spare normal tissue in the treatment of superficial tumors. This added dependence would require custom measurements and/or a secondary calculation system. As a first step, we implemented the sector-integration method for irregular field calculation in a homogeneous medium and on the central beam axis. The dosimetric responses (fast neutron and photon dose and thermal neutron flux) are calculated by sector-integrating the measured responses of circular fields over the field boundary. The measurements were carried out at our BNCT facility, the RA-6 reactor (Argentina). The input data were dosimetric responses for circular fields measured at different depths in a water phantom using ionisation and activation techniques. Circular fields were formed by shielding the beam with two plates: borated polyethylene plus lead. As a test, the dosimetric responses of a 7×4 cm² rectangular field were measured and compared to calculations, yielding differences of less than 3% in equivalent dose at any depth, indicating that the tool is suitable for redundant calculations.

  5. BNCT dose calculation in irregular fields using the sector integration method.

    Science.gov (United States)

    Blaumann, H R; Sanz, D E; Longhino, J M; Larrieu, O A Calzetta

    2004-11-01

    Irregular fields for boron neutron capture therapy (BNCT) have already been proposed to spare normal tissue in the treatment of superficial tumors. This added dependence would require custom measurements and/or a secondary calculation system. As a first step, we implemented the sector-integration method for irregular field calculation in a homogeneous medium and on the central beam axis. The dosimetric responses (fast neutron and photon dose and thermal neutron flux) are calculated by sector-integrating the measured responses of circular fields over the field boundary. The measurements were carried out at our BNCT facility, the RA-6 reactor (Argentina). The input data were dosimetric responses for circular fields measured at different depths in a water phantom using ionisation and activation techniques. Circular fields were formed by shielding the beam with two plates: borated polyethylene plus lead. As a test, the dosimetric responses of a 7×4 cm² rectangular field were measured and compared to calculations, yielding differences of less than 3% in equivalent dose at any depth, indicating that the tool is suitable for redundant calculations.
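The sector-integration idea can be sketched numerically: the irregular-field response is the average, over equal angular sectors, of the circular-field response at each sector's boundary radius. Here `response_of_radius` is a made-up smooth stand-in for the measured circular-field data (not the RA-6 measurements), evaluated for a centered 7×4 cm rectangle with 10° sectors:

```python
import numpy as np

# Sketch of Clarkson-style sector integration: average the measured
# circular-field response over the boundary radius of each angular
# sector. response_of_radius is an illustrative saturation curve, not
# measured data.

def response_of_radius(r):
    """Illustrative circular-field response vs. field radius (cm)."""
    return 1.0 - np.exp(-np.asarray(r, dtype=float) / 3.0)

def sector_integrate(boundary_radii):
    """Average the circular-field response over all sector radii."""
    return float(np.mean(response_of_radius(boundary_radii)))

# Sanity check: a circular field must reproduce the circular response.
val_circle = sector_integrate([2.0] * 36)

# Boundary radii of a centered 7x4 cm rectangle, one radius per
# 10-degree sector (sector centers at 5, 15, ..., 355 deg avoid axes).
angles = np.deg2rad(np.arange(5, 360, 10))
half_w, half_h = 3.5, 2.0
radii = np.minimum(np.abs(half_w / np.cos(angles)),
                   np.abs(half_h / np.sin(angles)))
val_rect = sector_integrate(radii)
```

The rectangle's value necessarily lies between the responses of its inscribed and circumscribed circles, which is the property that makes circular-field measurements sufficient input for irregular fields.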

  6. Evaluation of the stepwise collimation method for the reduction of the patient dose in full spine radiography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Boram [Korea University, Seoul (Korea, Republic of); Sun Medical Center, Daejeon (Korea, Republic of); Lee, Sunyoung [Sun Medical Center, Daejeon (Korea, Republic of); Yang, Injeong [Seoul National University Hospital Medical Center, Seoul (Korea, Republic of); Yoon, Myeonggeun [Korea University, Seoul (Korea, Republic of)

    2014-05-15

    The purpose of this study is to evaluate the dose reduction when using the stepwise collimation method for scoliosis patients undergoing full spine radiography. A Monte Carlo simulation was carried out to acquire dose vs. volume data for organs at risk (OAR) in the human body. While the effective doses in full spine radiography were reduced by 8, 15, 27 and 44% by using four different sizes of the collimation, the doses to the skin were reduced by 31, 44, 55 and 66%, indicating that the reduction of the dose to the skin is higher than that to organs inside the body. Although the reduction rates were low for the gonad, being 9, 14, 18 and 23%, there was more than a 30% reduction in the dose to the heart, suggesting that the dose reduction depends significantly on the location of the OARs in the human body. The reduction rate of the secondary cancer risk based on the excess absolute risk (EAR) varied from 0.6 to 3.4 per 10,000 persons, depending on the size of the collimation. Our results suggest that the stepwise collimation method in full spine radiography can effectively reduce the patient dose and the radiation-induced secondary cancer risk.

  7. Investigation of the HU-density conversion method and comparison of dose distribution for dose calculation on MV cone beam CT images

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Min Joo; Lee, Seu Ran; Suh, Tae Suk [Dept. of Biomedical Engineering, The Catholic University of Korea, Bucheon (Korea, Republic of)

    2011-11-15

    Modern radiation therapy techniques such as image-guided radiation therapy (IGRT) and adaptive radiation therapy (ART) have become routine clinical practice on linear accelerators, increasing tumor dose conformity while improving normal tissue sparing at the same time. For these highly developed techniques, a megavoltage cone-beam computed tomography (MV CBCT) system produces volumetric images in just one rotation of the x-ray source and detector mounted on a conventional linear accelerator, allowing the patient's current condition to be incorporated into treatment planning in near real time. MV CBCT images can be directly registered to a reference CT data set, usually kilovoltage fan-beam computed tomography (kV FBCT), on the treatment planning system, and the registered images can be used to adjust patient set-up errors. However, to use MV CBCT images in radiotherapy, a reliable electron density (ED) distribution is required. Patient scatter and the beam hardening and softening effects caused by the different energies of kV FBCT and MV CBCT can cause cupping artifacts in MV CBCT images and distort the Hounsfield unit (HU) to ED conversion. The goal of this study was to enable reliable use of MV CBCT images in dose calculation by correcting the HU-to-ED distortion using the relationship between HU and ED derived from kV FBCT and MV CBCT images. The HU-density conversion was applied to the MV CBCT image set, and dose calculation and comparison of the dose distributions from MV CBCT image sets with and without the HU-density conversion were performed; the resulting dose difference map is shown in Figure 1. Percentage differences above 3% were reduced by applying the density calibration method, so that the total error could be brought under 3%. The present study demonstrates that dose calculation accuracy using an MV CBCT image set can be improved by applying the HU-density conversion method. An advantage of this study compared to other approaches is that HU
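
An HU-to-ED correction of this kind is commonly implemented as a piecewise-linear calibration curve; the sketch below is a generic illustration with made-up calibration points, not the study's measured curve:

```python
import numpy as np

# Hypothetical HU / relative-electron-density calibration pairs, e.g. from a
# density phantom scanned on both kV FBCT and MV CBCT (values illustrative).
hu_points = np.array([-1000.0, -500.0, 0.0, 500.0, 1500.0])
ed_points = np.array([0.0, 0.5, 1.0, 1.3, 1.8])  # relative electron density

def hu_to_ed(hu):
    """Piecewise-linear interpolation between calibration points;
    np.interp clamps values outside the calibrated HU range."""
    return np.interp(hu, hu_points, ed_points)

water_ed = hu_to_ed(0.0)  # water-equivalent voxel maps to ED = 1.0
```

Applying `hu_to_ed` voxel-wise to the MV CBCT volume yields the density grid a dose engine needs.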

  8. IMRT dose delivery effects in radiotherapy treatment planning using Monte Carlo methods

    Science.gov (United States)

    Tyagi, Neelam

    Inter- and intra-leaf transmission and head scatter can play significant roles in Intensity Modulated Radiation Therapy (IMRT)-based treatment deliveries. In order to accurately calculate the dose in the IMRT planning process, it is therefore important that the detailed geometry of the multi-leaf collimator (MLC), in addition to other components in the accelerator treatment head, be accurately modeled. In this thesis Monte Carlo (MC) methods have been used to model the treatment head of a Varian linear accelerator. A comprehensive model of the Varian 120-leaf MLC has been developed within the DPM MC code and has been verified against measurements in homogeneous and heterogeneous phantom geometries under different IMRT delivery circumstances. The accuracy of the MLC model in simulating details of the leaf geometry has been established over a range of arbitrarily shaped fields and IMRT fields. A sensitivity analysis of the effect of the electron-on-target parameters and the structure of the flattening filter on the accuracy of calculated dose distributions has been conducted. Adjustment of the electron-on-target parameters for optimal agreement with measurements was an iterative process, with the final parameters representing a tradeoff between small (3×3 cm²) and large (40×40 cm²) field sizes. A novel method based on adaptive kernel density estimation in the phase-space simulation process is also presented as an alternative to particle recycling. Using this model, dosimetric differences between MLC-based static (SMLC) and dynamic (DMLC) deliveries have been investigated. Differences between SMLC and DMLC, possibly related to fluence and/or spectral changes, appear to vary systematically with the density of the medium. The effect of fluence modulation due to leaf sequencing shows differences of up to 10% between plans developed with 1% and 10% fluence intervals for both SMLC- and DMLC-delivered sequences.
Dose differences between planned and delivered leaf sequences

  9. Thoron-in-breath as a method of internal dose assessment

    International Nuclear Information System (INIS)

    The most promising bioassay methodology for measuring the internal exposure from the chronic inhalation of thorium ore dusts is thoron-in-breath. The current detection limits are of the order of 2.5 to 3 Bq of thorium lung burden, and it is predicted that this can be readily reduced to better than 1 Bq. An extensive field study of the technique has been undertaken, involving tests at six-monthly intervals over a twelve-month or, in one case, eighteen-month period. In excess of 350 tests on 115 workers have been made. Thorium lung burdens have been detected over the range -1 over 5 years. The technique is under review as an approved method for assessing the internal dose arising from the chronic inhalation of thorium ore dusts

  10. Preliminary Study on the Quantitative Value Transfer Method of Absorbed Dose to Water in 60Co γ Radiation

    Directory of Open Access Journals (Sweden)

    SONG Ming-zhe

    2015-01-01

    Full Text Available Absorbed dose to water in 60Co γ radiation is the basic physical quantity in the quantitative value system of radiation therapy and is essential to it. Study of the quantity value transfer method for absorbed dose to water in 60Co γ radiation provides important technical support for the establishment of the Chinese absorbed-dose-to-water quantity system. Based on a PTW-30013 ionization chamber, a PMMA water phantom and a 3D mobile platform, a quantity value transfer standard instrument was established and, following the requirements of IAEA TRS-398, a preliminary study of the 60Co absorbed-dose-to-water quantity value transfer method was carried out. After the quantity value transfer, the expanded uncertainty of the absorbed-dose-to-water calibration factor of the PTW-30013 chamber was 0.90% (k=2), and the expanded uncertainty of absorbed dose to water in the 60Co γ reference radiation at the Radiation Metrology Center (an IAEA SSDL) was 1.4% (k=2). The results showed that this value transfer method can effectively reduce the uncertainty of 60Co absorbed dose to water in a Secondary Standard Dosimetry Laboratory.
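
At its core, the transfer divides the known absorbed dose to water by the corrected chamber reading to obtain the calibration factor, and combines the uncertainty components in quadrature with coverage factor k = 2; the sketch below uses hypothetical stand-in numbers, not the laboratory's data:

```python
import math

# Illustrative TRS-398-style calibration-factor transfer. All numbers below
# are assumptions for the sketch, not measured values from the study.
dose_to_water_gy = 1.000      # delivered D_w at the reference point (assumed)
chamber_reading_c = 1.858e-8  # fully corrected chamber reading M, in C (assumed)

# calibration factor N_D,w = D_w / M, in Gy/C
n_dw = dose_to_water_gy / chamber_reading_c

# assumed relative standard uncertainty components (%):
# standard dose, chamber reading, positioning
components = [0.35, 0.20, 0.15]
u_combined = math.sqrt(sum(u ** 2 for u in components))
u_expanded = 2.0 * u_combined  # expanded uncertainty, coverage factor k = 2
```

With these stand-in components the expanded uncertainty comes out near 0.9%, the same order as the figure quoted for the PTW-30013 factor.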

  11. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    The purpose of this article is to benchmark different optimization solvers when applied to various finite element based structural topology optimization problems. An extensive and representative library of minimum compliance, minimum volume, and mechanism design problem instances for different...... sizes is developed for this benchmarking. The problems are based on a material interpolation scheme combined with a density filter. Different optimization solvers including Optimality Criteria (OC), the Method of Moving Asymptotes (MMA) and its globally convergent version GCMMA, the interior point...... profiles conclude that general solvers are as efficient and reliable as classical structural topology optimization solvers. Moreover, the use of the exact Hessians in SAND formulations generally produces designs with better objective function values. However, with the benchmarked implementations solving...

  12. Calculation of organ doses from environmental gamma rays using human phantoms and Monte Carlo methods. Pt. 1

    International Nuclear Information System (INIS)

    Organ doses from environmental γ-rays (U-238, Th-232, K-40) were calculated using Monte Carlo methods for three typical sources: a semi-infinite volume source in the air, an infinite plane source in the ground and a volume source in the ground. γ-ray fields in the natural environment were simulated rigorously, without approximations or simplifications in the intermediate steps, except that the disturbance of the radiation field by the human body was neglected. Organ doses were calculated for four anthropomorphic phantoms representing a baby, a child, an adult female and an adult male. The dose of a fetus is given by the dose to the uterus of the adult female. Air kerma and dose conversion factors normalised to air kerma and to source intensity are given for monoenergetic sources and for the natural radionuclides. (orig./HP)

  13. Simple Method to Estimate Mean Heart Dose From Hodgkin Lymphoma Radiation Therapy According to Simulation X-Rays

    Energy Technology Data Exchange (ETDEWEB)

    Nimwegen, Frederika A. van [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Cutter, David J. [Clinical Trial Service Unit, University of Oxford, Oxford (United Kingdom); Oxford Cancer Centre, Oxford University Hospitals NHS Trust, Oxford (United Kingdom); Schaapveld, Michael [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Rutten, Annemarieke [Department of Radiology, The Netherlands Cancer Institute, Amsterdam (Netherlands); Kooijman, Karen [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Krol, Augustinus D.G. [Department of Radiation Oncology, Leiden University Medical Center, Leiden (Netherlands); Janus, Cécile P.M. [Department of Radiation Oncology, Erasmus MC Cancer Center, Rotterdam (Netherlands); Darby, Sarah C. [Clinical Trial Service Unit, University of Oxford, Oxford (United Kingdom); Leeuwen, Flora E. van [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Aleman, Berthe M.P., E-mail: b.aleman@nki.nl [Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam (Netherlands)

    2015-05-01

    Purpose: To describe a new method to estimate the mean heart dose for Hodgkin lymphoma patients treated several decades ago, using delineation of the heart on radiation therapy simulation X-rays. Mean heart dose is an important predictor for late cardiovascular complications after Hodgkin lymphoma (HL) treatment. For patients treated before the era of computed tomography (CT)-based radiotherapy planning, retrospective estimation of radiation dose to the heart can be labor intensive. Methods and Materials: Patients for whom cardiac radiation doses had previously been estimated by reconstruction of individual treatments on representative CT data sets were selected at random from a case–control study of 5-year Hodgkin lymphoma survivors (n=289). For 42 patients, cardiac contours were outlined on each patient's simulation X-ray by 4 different raters, and the mean heart dose was estimated as the percentage of the cardiac contour within the radiation field multiplied by the prescribed mediastinal dose and divided by a correction factor obtained by comparison with individual CT-based dosimetry. Results: According to the simulation X-ray method, the medians of the mean heart doses obtained from the cardiac contours outlined by the 4 raters were 30 Gy, 30 Gy, 31 Gy, and 31 Gy, respectively, following prescribed mediastinal doses of 25-42 Gy. The absolute-agreement intraclass correlation coefficient was 0.93 (95% confidence interval 0.85-0.97), indicating excellent agreement. Mean heart dose was 30.4 Gy with the simulation X-ray method, versus 30.2 Gy with the representative CT-based dosimetry, and the between-method absolute-agreement intraclass correlation coefficient was 0.87 (95% confidence interval 0.80-0.95), indicating good agreement between the two methods. Conclusion: Estimating mean heart dose from radiation therapy simulation X-rays is reproducible and fast, takes individual anatomy into account, and yields results comparable to the labor
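
The estimation formula described above reduces to a one-liner; the sketch below uses illustrative inputs (85% contour coverage, a 36 Gy prescription, correction factor 1.0), not data from the study:

```python
def mean_heart_dose(fraction_in_field, prescribed_dose_gy, correction_factor):
    """Simulation X-ray estimate: the fraction of the delineated cardiac
    contour lying inside the radiation field, times the prescribed
    mediastinal dose, divided by the CT-dosimetry-derived correction."""
    return fraction_in_field * prescribed_dose_gy / correction_factor

# hypothetical patient: 85% of the heart contour inside a 36 Gy field,
# with an assumed correction factor of 1.0  ->  about 30.6 Gy
mhd = mean_heart_dose(0.85, 36.0, 1.0)
```

The output lands in the same range as the ~30 Gy medians reported for the four raters.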

  14. Simple Method to Estimate Mean Heart Dose From Hodgkin Lymphoma Radiation Therapy According to Simulation X-Rays

    International Nuclear Information System (INIS)

    Purpose: To describe a new method to estimate the mean heart dose for Hodgkin lymphoma patients treated several decades ago, using delineation of the heart on radiation therapy simulation X-rays. Mean heart dose is an important predictor for late cardiovascular complications after Hodgkin lymphoma (HL) treatment. For patients treated before the era of computed tomography (CT)-based radiotherapy planning, retrospective estimation of radiation dose to the heart can be labor intensive. Methods and Materials: Patients for whom cardiac radiation doses had previously been estimated by reconstruction of individual treatments on representative CT data sets were selected at random from a case–control study of 5-year Hodgkin lymphoma survivors (n=289). For 42 patients, cardiac contours were outlined on each patient's simulation X-ray by 4 different raters, and the mean heart dose was estimated as the percentage of the cardiac contour within the radiation field multiplied by the prescribed mediastinal dose and divided by a correction factor obtained by comparison with individual CT-based dosimetry. Results: According to the simulation X-ray method, the medians of the mean heart doses obtained from the cardiac contours outlined by the 4 raters were 30 Gy, 30 Gy, 31 Gy, and 31 Gy, respectively, following prescribed mediastinal doses of 25-42 Gy. The absolute-agreement intraclass correlation coefficient was 0.93 (95% confidence interval 0.85-0.97), indicating excellent agreement. Mean heart dose was 30.4 Gy with the simulation X-ray method, versus 30.2 Gy with the representative CT-based dosimetry, and the between-method absolute-agreement intraclass correlation coefficient was 0.87 (95% confidence interval 0.80-0.95), indicating good agreement between the two methods. Conclusion: Estimating mean heart dose from radiation therapy simulation X-rays is reproducible and fast, takes individual anatomy into account, and yields results comparable to the labor

  15. 2001 benchmarking guide.

    Science.gov (United States)

    Hoppszallern, S

    2001-01-01

    Our fifth annual guide to benchmarking under managed care presents data that is a study in market dynamics and adaptation. New this year are financial indicators on HMOs exiting the market and those remaining. Hospital financial ratios and details on department performance are included. The physician group practice numbers show why physicians are scrutinizing capitated payments. Overall, hospitals in markets with high managed care penetration are more successful in managing labor costs and show productivity gains in imaging services, physical therapy and materials management.

  16. Supermarket Refrigeration System - Benchmark for Hybrid System Control

    DEFF Research Database (Denmark)

    Sloth, Lars Finn; Izadi-Zamanabadi, Roozbeh; Wisniewski, Rafal

    2007-01-01

    This paper presents a supermarket refrigeration system as a benchmark for development of new ideas and a comparison of methods for hybrid systems' modeling and control. The benchmark features switch dynamics and discrete valued input making it a hybrid system, furthermore the outputs are subjected...

  17. EU and OECD benchmarking and peer review compared

    NARCIS (Netherlands)

    Groenendijk, Nico

    2009-01-01

    Benchmarking and peer review are essential elements of the so-called EU open method of coordination (OMC) which has been contested in the literature for lack of effectiveness. In this paper we compare benchmarking and peer review procedures as used by the EU with those used by the OECD. Different ty

  18. Benchmarking concentrating photovoltaic systems

    Science.gov (United States)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

    Integral to photovoltaics is the need to provide improved economic viability. To achieve this goal, photovoltaic technology has to be able to harness more light at less cost. A large variety of concentrating photovoltaic concepts has provided cause for pursuit. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. To save time and capital, a way to estimate the cost-performance of a complete solar energy system is to use computer aided modeling. In this work a benchmark tool is introduced based on a modular programming concept. The overall implementation is done in MATLAB whereas Advanced Systems Analysis Program (ASAP) is used for ray tracing calculations. This allows for a flexible and extendable structuring of all important modules, namely an advanced source modeling including time and local dependence, and an advanced optical system analysis of various optical designs to obtain an evaluation of the figure of merit. An important figure of merit: the energy yield for a given photovoltaic system at a geographical position over a specific period, can be calculated.

  19. Fluence map optimization (FMO) with dose-volume constraints in IMRT using the geometric distance sorting method

    Science.gov (United States)

    Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang

    2012-10-01

    A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving the fluence map optimization problem with dose-volume constraints, one of the most essential tasks in inverse planning for IMRT. The framework of the proposed method is an iterative process which begins with a simple linearly constrained quadratic optimization model without any dose-volume constraints; dose constraints for the voxels violating the dose-volume constraints are then gradually added into the quadratic optimization model, step by step, until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. For choosing proper candidate voxels for the current round of dose-constraint adding, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of the voxels. The new geometric distance sorting technique mostly suppresses the unexpected increase of the objective function value that constraint adding inevitably causes. It can be regarded as an upgrade of the traditional dose sorting technique. The geometric explanation for the proposed method is also given, and a proposition is proved to support the heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases, including a head-and-neck, a prostate, a lung and an oropharyngeal case, and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes. It is a more efficient optimization technique to some extent for choosing constraints than the dose
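
The iterative constraint-adding loop can be sketched on a toy problem (all matrices synthetic; the paper solves each subproblem with an interior point method and ranks candidate voxels by the geometric distance, whereas this sketch uses SciPy's non-negative least squares and ranks violators simply by dose):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Synthetic dose-deposition matrices: dose = A @ fluence.
A_tgt = rng.uniform(0.5, 1.0, (8, 4))  # 8 target voxels, 4 beamlets
A_oar = rng.uniform(0.0, 0.6, (6, 4))  # 6 OAR voxels
LIMIT, ALLOWED = 0.3, 2  # dose-volume constraint: <= 2 OAR voxels above 0.3

capped, x = [], None
for _ in range(A_oar.shape[0] + 1):
    # Solve the current subproblem: match unit target dose, and pin each
    # capped OAR voxel to the limit via a strongly weighted extra row,
    # subject to non-negative fluence.
    rows = [A_tgt] + [30.0 * A_oar[[i]] for i in capped]
    rhs = np.concatenate([np.ones(8)] + [[30.0 * LIMIT]] * len(capped))
    x, _ = nnls(np.vstack(rows), rhs)
    oar_dose = A_oar @ x
    worst = [i for i in np.argsort(-oar_dose)
             if i not in capped and oar_dose[i] > LIMIT]
    if np.sum(oar_dose > LIMIT) <= ALLOWED or not worst:
        break  # dose-volume constraint satisfied (or nothing left to cap)
    capped.append(worst[0])  # add a constraint for the worst violator
```

Each pass re-solves the subproblem with one more voxel constrained, mirroring the step-by-step constraint-adding framework described in the abstract.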

  20. A method for verification of treatment times for high-dose-rate intraluminal brachytherapy treatment

    Directory of Open Access Journals (Sweden)

    Muhammad Asghar Gadhi

    2016-06-01

    Full Text Available Purpose: This study aimed to increase the quality of high-dose-rate (HDR) intraluminal brachytherapy treatment. For this purpose, an easy, fast and accurate patient-specific quality assurance (QA) tool has been developed. This tool has been implemented at the Bahawalpur Institute of Nuclear Medicine and Oncology (BINO), Bahawalpur, Pakistan. Methods: The ABACUS 3.1 treatment planning system (TPS) was used for treatment planning and calculation of the total dwell time, and the results were compared with the time calculated using the proposed method. The method was used to verify the total dwell time for different rectum applicators over the relevant treatment lengths (2-7 cm) and depths (1.5-2.5 cm), different oesophagus applicators over the relevant treatment lengths (6-10 cm) and depths (0.9 and 1.0 cm), and a bronchus applicator over the relevant treatment lengths (4-7.5 cm) and depth (0.5 cm). Results: The average percentage difference between the treatment time calculated manually and that calculated by the TPS is 0.32% (standard deviation 1.32%) for the rectum, 0.24% (standard deviation 2.36%) for the oesophagus and 1.96% (standard deviation 0.55%) for the bronchus. These results indicate that the proposed method is valuable for independent verification in patient-specific treatment planning QA. Conclusion: The technique illustrated in the current study is an easy, simple and quick means of independently verifying the total dwell time for HDR intraluminal brachytherapy. The method is able to identify human-error-related planning mistakes and to evaluate the quality of treatment planning. It enhances the quality of brachytherapy treatment and the reliability of the system.
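
An independent dwell-time check of this kind can be sketched with a point-source, inverse-square model (illustrative only: the air-kerma strength, dose-rate constant and equal-dwell-weight assumption below are stand-ins, not the paper's formalism):

```python
# Treat each dwell position along the applicator as a point source and
# require the summed dose at the prescription depth (beside the centre of
# the treated length) to reach the prescribed dose.
SK = 40820.0    # air-kerma strength in U (~10 Ci Ir-192 source, assumed)
LAMBDA = 1.108  # dose-rate constant in cGy/(h*U), typical Ir-192 value

def total_dwell_time_s(prescribed_cgy, length_cm, depth_cm, step_cm=0.5):
    n = int(round(length_cm / step_cm)) + 1
    zs = [i * step_cm - length_cm / 2.0 for i in range(n)]
    # mean dose rate (cGy/h) at the prescription point, assuming the total
    # time is shared equally over the n dwell positions (1/r^2, no anisotropy)
    rate = sum(SK * LAMBDA / (depth_cm ** 2 + z ** 2) for z in zs) / n
    return prescribed_cgy / rate * 3600.0  # seconds

# hypothetical oesophagus-like case: 500 cGy over 5 cm at 1.0 cm depth
t = total_dwell_time_s(prescribed_cgy=500.0, length_cm=5.0, depth_cm=1.0)
```

A hand calculation like this gives a plausibility band for the TPS total dwell time; a gross disagreement would flag a planning error.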

  1. Determination of surface dose rate of indigenous 32P patch brachytherapy source by experimental and Monte Carlo methods

    International Nuclear Information System (INIS)

    The Isotope Production and Application Division of Bhabha Atomic Research Centre developed 32P patch sources for the treatment of superficial tumors. The surface dose rate of a newly developed 32P patch source of nominal diameter 25 mm was measured experimentally using a standard extrapolation ionization chamber and Gafchromic EBT film. A Monte Carlo model of the 32P patch source along with the extrapolation chamber was also developed to estimate the surface dose rates from these sources. The surface dose rates to tissue (cGy/min) measured using the extrapolation chamber and radiochromic films are 82.03±4.18 (k=2) and 79.13±2.53 (k=2) respectively. The two values of the surface dose rate measured using the two independent experimental methods are in good agreement with each other, within a variation of 3.5%. The surface dose rate to tissue (cGy/min) estimated using the MCNP Monte Carlo code works out to be 77.78±1.16 (k=2). The maximum deviation between the surface dose rates to tissue obtained by the Monte Carlo and extrapolation chamber methods is 5.2%, whereas the difference between the surface dose rates obtained by radiochromic film measurement and Monte Carlo simulation is 1.7%. The three values of the surface dose rate of the 32P patch source obtained by the three independent methods are in good agreement with one another within the uncertainties associated with their measurement and calculation. This work has demonstrated that MCNP-based electron transport simulations are accurate enough for determining the dosimetry parameters of the indigenously developed 32P patch sources for contact brachytherapy applications. - Highlights: • Surface dose rates of 25 mm nominal diameter newly developed 32P patch sources were measured experimentally using extrapolation chamber and Gafchromic EBT2 film. Monte Carlo model of the 32P patch source along with the extrapolation chamber was also developed. • The surface dose rates to tissue (cGy/min) measured using extrapolation chamber and
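
The quoted agreement figures can be reproduced directly from the three surface dose rates:

```python
# Arithmetic cross-check of the agreement percentages quoted above for the
# 32P patch source: surface dose rates to tissue in cGy/min.
extrap_chamber = 82.03  # extrapolation ionization chamber
ebt_film = 79.13        # Gafchromic EBT film
monte_carlo = 77.78     # MCNP Monte Carlo estimate

def dev(a, b):
    """Percentage deviation of b from a, relative to a."""
    return abs(a - b) / a * 100.0

film_vs_chamber = round(dev(extrap_chamber, ebt_film), 1)   # 3.5 %
mc_vs_chamber = round(dev(extrap_chamber, monte_carlo), 1)  # 5.2 %
mc_vs_film = round(dev(ebt_film, monte_carlo), 1)           # 1.7 %
```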

  2. A new method to estimate doses to the normal tissues after past extended and involved field radiotherapy for Hodgkin lymphoma

    DEFF Research Database (Denmark)

    Maraldo, Maja V; Lundemann, Michael; Vogelius, Ivan R;

    2015-01-01

    with Hodgkin lymphoma was used. MATERIALS AND METHODS: For 46 model patients, 29 organs at risk (OARs) were contoured and seven treatment fields reconstructed (mantle, mediastinal, right/left neck, right/left axillary, and spleen field). Extended and involved field RT were simulated by generating RT plans...... by superpositions of the seven individual fields. The mean (standard deviation) of the 46 individual mean organ doses were extracted as percent of prescribed dose for each field superposition. RESULTS: The estimated mean doses to the OARs from 17 field combinations were presented. The inter-patient variability...

  3. Benchmark analysis of MCNP™ ENDF/B-VI iron

    Energy Technology Data Exchange (ETDEWEB)

    Court, J.D.; Hendricks, J.S.

    1994-12-01

    The MCNP ENDF/B-VI iron cross-section data was subjected to four benchmark studies as part of the Hiroshima/Nagasaki dose re-evaluation for the National Academy of Science and the Defense Nuclear Agency. The four benchmark studies were: (1) the iron sphere benchmarks from the Lawrence Livermore Pulsed Spheres; (2) the Oak Ridge National Laboratory Fusion Reactor Shielding Benchmark; (3) a 76-cm diameter iron sphere benchmark done at the University of Illinois; (4) the Oak Ridge National Laboratory Benchmark for Neutron Transport through Iron. MCNP4A was used to model each benchmark and computational results from the ENDF/B-VI iron evaluations were compared to ENDF/B-IV, ENDF/B-V, the MCNP Recommended Data Set (which includes Los Alamos National Laboratory Group T-2 evaluations), and experimental data. The results show that the ENDF/B-VI iron evaluations are as good as, or better than, previous data sets.

  4. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres

    OpenAIRE

    van Lent Wineke AM; de Beer Relinde D; van Harten Wim H

    2010-01-01

    Abstract Background Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Methods Three independent international benchmarking studies on operations managem...

  5. HPC Benchmark Suite NMx Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  6. Characterization of an absorbed dose standard in water through ionometric methods

    International Nuclear Information System (INIS)

    In this work the unit of absorbed dose at the Secondary Standard Dosimetry Laboratory (SSDL) of Mexico is characterized by means of the development of a primary standard of absorbed dose to water, D_agua. The main purpose is to reduce the uncertainty of the dosimetric calibration service for ionization chambers (used in external-beam radiotherapy) that this laboratory offers. This thesis is composed of seven chapters. In Chapter 1 the statement and justification of the problem are described, as well as the general and specific objectives. In Chapter 2, a presentation of the main quantities and units used in dosimetry is made, in accordance with the recommendations of the International Commission on Radiation Units and Measurements (ICRU), which establish the necessity of a system coherent with the international system of units and dosimetric quantities. The concepts of charged-particle equilibrium and transient charged-particle equilibrium (TCPE) are also presented, which are used later in the quantitative determination of D_agua. Finally, since the proposed standard of D_agua is of ionometric type, an explanation of the Bragg-Gray and Spencer-Attix cavity theories is given. These theories are the foundation of this type of standard. On the other hand, to guarantee the complete validity of the conditions demanded by these theories it is necessary to introduce correction factors. These factors are determined in Chapters 5 and 6. Since the Monte Carlo (MC) method is used extensively in the calculation of the correction factors, the fundamental concepts of this method are presented in Chapter 3; in particular, the principles of the code MCNP4C [Briesmeister 2000] are detailed, with emphasis on the basis of electron transport and the variance reduction techniques used in this thesis. Because a phenomenological approach is taken in the development of the standard of D_agua, Chapter 4 describes the characteristics of the Picker C/9 unit, the ionization chamber type CC01
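
The ionometric determination the thesis builds toward can be summarized by the standard Spencer-Attix chain (generic textbook form, not the thesis' exact notation; the k_i are the correction factors referred to above):

```latex
D_{\text{air}} = \frac{Q}{m_{\text{air}}}\,\frac{\overline{W}}{e},
\qquad
D_{\text{agua}} = D_{\text{air}}\; s_{\text{agua,air}}\;\prod_i k_i
```

where Q is the charge collected in the cavity of air mass m_air, W̄/e is the mean energy expended in air per unit charge, s_agua,air is the Spencer-Attix water-to-air stopping-power ratio, and the k_i correct for departures from ideal cavity conditions.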

  7. Coincidence in the dose estimation in an OEP by different methods

    International Nuclear Information System (INIS)

    A case is presented of an apparent overexposure to radiation, indicated by a thermoluminescent dosemeter (TLD) reading of 81.59 mSv for an occupationally exposed worker (OEP), for whom a biological dosimetry study was performed. The dose estimated by biological dosimetry was 0.12 Gy, which corroborated the dose registered by the TLD dosemeter. It was concluded that both doses are consistent. (Author)

  8. A new method of real-time skin dose visualization. Clinical evaluation of fluoroscopically guided interventions

    International Nuclear Information System (INIS)

    We have conducted a prospective study to clinically evaluate a new radiation dose observation tool that displays a patient's peak skin dose (PSD) map in real time. The skin dose map (SDM) prototype quantifies the air kerma based on exposure parameters from the X-ray system. The accuracy of this prototype was evaluated with radiochromic films, which were used as a means of PSD measurement. The SDM is a reliable tool that provides accurate PSD estimation and localization. The SDM also has many advantages over radiochromic films, such as real-time dose evaluation and easy access to critical operational parameters for physicians and technicians. (orig.)

  9. New method of gamma dose-rate measurement using energy-sensitive counters

    International Nuclear Information System (INIS)

    A new concept of charge quantization and pulse-rate measurement was developed to monitor low-level gamma dose rates using energy-sensitive, air-equivalent counters. Applying this concept, the charge from each detected photon is quantized by level-sensitive comparators so that the resulting total output pulse rate is proportional to dose rate. The concept was tested with a proportional counter and a solid-state detector for wide-range dose-rate monitoring applications. The prototype monitors cover a dose-rate range from background radiation levels (about 10 μR/h) to 10 R/h
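
The charge-quantization idea can be sketched numerically (all values illustrative): converting each photon's charge into a number of pulses proportional to that charge makes the pulse rate track deposited energy, i.e. dose rate, rather than the photon count rate.

```python
import random

random.seed(1)

# Toy model: level-sensitive comparators convert each detected photon's
# charge q into floor(q / Q0) pulses, so the total pulse count follows the
# total collected charge rather than the number of photons.
Q0 = 1.0  # comparator quantization step, arbitrary charge units

def pulses(charges):
    return sum(int(q // Q0) for q in charges)

# two simulated photon populations with equal count rate but a 4x difference
# in mean charge per photon (i.e. a 4x difference in dose rate)
low = [random.uniform(10, 30) for _ in range(1000)]   # mean charge ~20 Q0
high = [random.uniform(40, 120) for _ in range(1000)] # mean charge ~80 Q0

ratio = pulses(high) / pulses(low)  # close to the 4x charge (dose) ratio
```

The pulse-count ratio lands near 4 despite identical photon counts, which is exactly the proportionality the dose-rate monitor exploits.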

  10. General benchmarks for quantum repeaters

    CERN Document Server

    Pirandola, Stefano

    2015-01-01

    Using a technique based on quantum teleportation, we simplify the most general adaptive protocols for key distribution, entanglement distillation and quantum communication over a wide class of quantum channels in arbitrary dimension. Thanks to this method, we bound the ultimate rates for secret key generation and quantum communication through single-mode Gaussian channels and several discrete-variable channels. In particular, we derive exact formulas for the two-way assisted capacities of the bosonic quantum-limited amplifier and the dephasing channel in arbitrary dimension, as well as the secret key capacity of the qubit erasure channel. Our results establish the limits of quantum communication with arbitrary systems and set the most general and precise benchmarks for testing quantum repeaters in both discrete- and continuous-variable settings.

  11. A Novel Method to Incorporate the Spatial Location of the Lung Dose Distribution into Predictive Radiation Pneumonitis Modeling

    International Nuclear Information System (INIS)

    Purpose: Studies have proposed that patients who receive radiation therapy to the base of the lung are more susceptible to radiation pneumonitis than patients who receive therapy to the apex of the lung. The primary purpose of the present study was to develop a novel method to incorporate the lung dose spatial information into a predictive radiation pneumonitis model. A secondary goal was to apply the method to a 547 lung cancer patient database to determine whether including the spatial information could improve the fit of our model. Methods and Materials: The three-dimensional dose distribution of each patient was mapped onto one common coordinate system. The boundaries of the coordinate system were defined by the extreme points of each individual patient lung. Once all dose distributions were mapped onto the common coordinate system, the spatial information was incorporated into a Lyman-Kutcher-Burman predictive radiation pneumonitis model. Specifically, the lung dose voxels were weighted using a user-defined spatial weighting matrix. We investigated spatial weighting matrices that linearly scaled each dose voxel according to the following orientations: superior-inferior, anterior-posterior, medial–lateral, left–right, and radial. The model parameters were fit to our patient cohort with the endpoint of severe radiation pneumonitis. The spatial dose model was compared against a conventional dose–volume model to determine whether adding a spatial component improved the fit of the model. Results: Of the 547 patients analyzed, 111 (20.3%) experienced severe radiation pneumonitis. Adding in a spatial parameter did not significantly increase the accuracy of the model for any of the weighting schemes. Conclusions: A novel method was developed to investigate the relationship between the location of the deposited lung dose and pneumonitis rate. The method was applied to a patient database, and we found that for our patient cohort, the spatial location of the deposited dose does not significantly improve the fit of the predictive model.
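
The spatial-weighting idea can be illustrated with a small sketch: voxel doses are scaled by a position-dependent weight before computing a generalized EUD, which then feeds the usual Lyman-Kutcher-Burman probit form. This is not the authors' code; the LKB parameters (n, TD50, m), the linear superior-inferior weighting, and the toy voxel data are all assumptions.

```python
# Sketch of a spatially weighted LKB pipeline, with illustrative parameters.
import math

def spatially_weighted_gEUD(doses, weights, n=1.0):
    """gEUD of weighted voxel doses; a = 1/n in the LKB convention."""
    a = 1.0 / n
    return (sum(w * d ** a for d, w in zip(doses, weights))
            / sum(weights)) ** (1.0 / a)

def lkb_ntcp(gEUD, TD50=30.0, m=0.35):
    """LKB NTCP via the probit (error-function) form."""
    t = (gEUD - TD50) / (m * TD50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def si_weights(z_coords, w_min=0.2):
    """Linear superior-inferior weighting: base of lung weight 1, apex w_min."""
    z_lo, z_hi = min(z_coords), max(z_coords)
    return [w_min + (1 - w_min) * (z_hi - z) / (z_hi - z_lo) for z in z_coords]

doses = [5.0, 20.0, 40.0, 60.0]   # Gy, toy voxel doses
z = [0.0, 5.0, 10.0, 15.0]        # cm, inferior to superior
w = si_weights(z)
print(lkb_ntcp(spatially_weighted_gEUD(doses, w)))
```

With uniform weights this reduces to the conventional LKB model, so the weighting matrix is a strict generalization of the dose-volume approach the paper compares against.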

  12. Novel iterative reconstruction method for optimal dose usage in redundant CT - acquisitions

    Science.gov (United States)

    Bruder, H.; Raupach, R.; Allmendinger, T.; Kappler, S.; Sunnegardh, J.; Stierstorfer, K.; Flohr, T.

    2014-03-01

    In CT imaging, a variety of applications exist where reconstructions are SNR and/or resolution limited. However, if the measured data provide redundant information, composite image data with high SNR can be computed. Generally, these composite image volumes will compromise spectral information and/or spatial resolution and/or temporal resolution. This brings us to the idea of transferring the high SNR of the composite image data to low SNR (but high resolution) 'source' image data. It was shown that the SNR of CT image data can be improved using iterative reconstruction [1]. We present a novel iterative reconstruction method enabling optimal dose usage of redundant CT measurements of the same body region. The generalized update equation is formulated in image space without further referring to raw data after initial reconstruction of source and composite image data. The update equation consists of a linear combination of the previous update, a correction term constrained by the source data, and a regularization prior initialized by the composite data. The efficiency of the method is demonstrated for different applications: (i) Spectral imaging: we analysed material decomposition data from dual-energy data of our photon-counting prototype scanner; the material images can be significantly improved by transferring the good noise statistics of the 20 keV threshold image data to each of the material images. (ii) Multi-phase liver imaging: reconstructions of multi-phase liver data can be optimized by utilizing the noise statistics of combined data from all measured phases. (iii) Helical reconstruction with optimized temporal resolution: splitting the reconstruction of redundant helical acquisition data into a short-scan reconstruction with a Tam window optimizes the temporal resolution; the reconstruction of the full helical data is then used to optimize the SNR. (iv) Cardiac imaging: the optimal phase image ('best phase') can be improved by transferring all applied over
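
A toy, image-space version of the described update might look as follows: each iteration combines the previous update, a correction pulling toward the high-resolution "source" image, and a regularization pulling toward the high-SNR "composite" image. The coefficients and the exact functional form are assumptions based on the abstract's description, not the authors' published equation.

```python
# Illustrative image-space iteration in the spirit of the described method.
# alpha weights the previous update, beta the source-data correction,
# gamma the regularization toward the composite image (assumed values).
import numpy as np

def iterate(source, composite, n_iter=50, alpha=0.5, beta=0.4, gamma=0.2):
    x = composite.copy()                  # initialize from the high-SNR composite
    update = np.zeros_like(x)
    for _ in range(n_iter):
        correction = source - x           # constraint from the source data
        prior = composite - x             # regularization toward the composite
        update = alpha * update + beta * correction + gamma * prior
        x = x + update
    return x

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, np.pi, 64))
source = truth + rng.normal(0, 0.3, 64)       # high resolution, low SNR
composite = truth + rng.normal(0, 0.05, 64)   # high SNR (toy stand-in)
recon = iterate(source, composite)
print(float(np.mean((recon - truth) ** 2)))
```

For these coefficients the iteration converges to a weighted blend of source and composite, so the result retains source fidelity while inheriting part of the composite's noise statistics; the real method applies this per voxel on full volumes.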

  13. Benchmarking DFT and semi-empirical methods for a reliable and cost-efficient computational screening of benzofulvene derivatives as donor materials for small-molecule organic solar cells

    International Nuclear Information System (INIS)

    A systematic computational investigation on the optical properties of a group of novel benzofulvene derivatives (Martinelli 2014 Org. Lett. 16 3424–7), proposed as possible donor materials in small molecule organic photovoltaic (smOPV) devices, is presented. A benchmark evaluation against experimental results on the accuracy of different exchange and correlation functionals and semi-empirical methods in predicting both reliable ground state equilibrium geometries and electronic absorption spectra is carried out. The benchmark of the geometry optimization level indicated that the best agreement with x-ray data is achieved by using the B3LYP functional. Concerning the optical gap prediction, we found that, among the employed functionals, MPW1K provides the most accurate excitation energies over the entire set of benzofulvenes. Similarly reliable results were also obtained for range-separated hybrid functionals (CAM-B3LYP and wB97XD) and for global hybrid methods incorporating a large amount of non-local exchange (M06-2X and M06-HF). Density functional theory (DFT) hybrids with a moderate (about 20–30%) extent of Hartree–Fock exchange (HFexc) (PBE0, B3LYP and M06) were also found to deliver HOMO–LUMO energy gaps which compare well with the experimental absorption maxima, thus representing a valuable alternative for a prompt and predictive estimation of the optical gap. The possibility of using completely semi-empirical approaches (AM1/ZINDO) is also discussed. (paper)

  14. Benchmarking DFT and semi-empirical methods for a reliable and cost-efficient computational screening of benzofulvene derivatives as donor materials for small-molecule organic solar cells.

    Science.gov (United States)

    Tortorella, Sara; Talamo, Maurizio Mastropasqua; Cardone, Antonio; Pastore, Mariachiara; De Angelis, Filippo

    2016-02-24

    A systematic computational investigation on the optical properties of a group of novel benzofulvene derivatives (Martinelli 2014 Org. Lett. 16 3424-7), proposed as possible donor materials in small molecule organic photovoltaic (smOPV) devices, is presented. A benchmark evaluation against experimental results on the accuracy of different exchange and correlation functionals and semi-empirical methods in predicting both reliable ground state equilibrium geometries and electronic absorption spectra is carried out. The benchmark of the geometry optimization level indicated that the best agreement with x-ray data is achieved by using the B3LYP functional. Concerning the optical gap prediction, we found that, among the employed functionals, MPW1K provides the most accurate excitation energies over the entire set of benzofulvenes. Similarly reliable results were also obtained for range-separated hybrid functionals (CAM-B3LYP and wB97XD) and for global hybrid methods incorporating a large amount of non-local exchange (M06-2X and M06-HF). Density functional theory (DFT) hybrids with a moderate (about 20-30%) extent of Hartree-Fock exchange (HFexc) (PBE0, B3LYP and M06) were also found to deliver HOMO-LUMO energy gaps which compare well with the experimental absorption maxima, thus representing a valuable alternative for a prompt and predictive estimation of the optical gap. The possibility of using completely semi-empirical approaches (AM1/ZINDO) is also discussed. PMID:26808717

  15. Thermoluminescence dating of Chinese porcelain using a regression method of saturating exponential in pre-dose technique

    International Nuclear Information System (INIS)

    Thermoluminescence (TL) dating using a regression method of saturating exponential in the pre-dose technique is described. 23 porcelain samples from past dynasties of China were dated by this method. The results show that the TL ages are in reasonable agreement with archaeological dates within a standard deviation of 27%. Such an error is acceptable in porcelain dating.
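
A saturating-exponential regression of this general kind can be sketched as follows: an additive-dose growth curve S(D) = A(1 - exp(-(D + P)/D0)) is fitted to the natural-plus-added-dose signals, and the paleodose is read off as P. The fitting strategy (grid search plus linear least squares) and the synthetic data are illustrative assumptions, not the paper's procedure.

```python
# Fit S(D) = A * (1 - exp(-(D + P) / D0)) to additive-dose TL data and
# recover the paleodose P; grid search over (P, D0), linear LSQ for A.
import numpy as np

def fit_saturating_exponential(added_dose, signal, P_grid, D0_grid):
    best = None
    for P in P_grid:
        for D0 in D0_grid:
            basis = 1.0 - np.exp(-(added_dose + P) / D0)
            A = np.dot(basis, signal) / np.dot(basis, basis)  # linear LSQ for A
            sse = np.sum((signal - A * basis) ** 2)
            if best is None or sse < best[0]:
                best = (sse, A, P, D0)
    return best[1], best[2], best[3]  # A, paleodose P, characteristic dose D0

# Synthetic noiseless growth curve with a known paleodose of 4.0 Gy.
added = np.array([0.0, 2.0, 4.0, 8.0, 16.0])
signal = 100.0 * (1.0 - np.exp(-(added + 4.0) / 12.0))
A, P, D0 = fit_saturating_exponential(
    added, signal,
    P_grid=np.linspace(1.0, 8.0, 141), D0_grid=np.linspace(6.0, 20.0, 141))
print(round(P, 2))  # recovered paleodose, Gy
```

The saturating form matters because high-fired porcelain signals approach saturation; a straight-line extrapolation would systematically overestimate the paleodose in that regime.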

  16. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  17. Evaluation of a New Method for Calculation of Cumulative Doses in the Rectum Wall using Repeat CT Scans

    International Nuclear Information System (INIS)

    The rectum wall is an important organ at risk during irradiation of the prostate, the bladder, and other organs in the pelvis. It is therefore of great interest to be able to reliably predict normal tissue complication probabilities (NTCPs) for this organ. Because the rectum wall is a hollow organ capable of large deformations between fractions, dose estimates from a single CT are unreliable, and so are NTCP estimates. In this study two methods for calculating cumulative dose distributions from repetitive CT scans are compared. The first, presented in this article, uses tracking of volume elements for a direct summation of the doses delivered in the treatment fractions. The other, presented earlier, is based on information from dose-volume histograms. The comparisons were made in terms of equivalent uniform doses (EUDs) and NTCPs. The methods were also compared with mean values of EUD and NTCP from individual CT scans. The study showed that with the relatively symmetric beam arrangements normally used for treatment of prostate and bladder cancer, it is not necessary to use the more laborious method of element tracking. However, introducing artificial lateral rectum movements revealed that element tracking is necessary in less symmetric situations.
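
The difference between the two accumulation schemes can be shown with a toy contrast (an illustration, not the paper's implementation): element tracking sums the dose that the same tissue element receives in each fraction, while a DVH-style combination adds per-fraction dose distributions rank to rank, losing element identity. The voxel doses and the EUD volume parameter are assumptions, and real element tracking uses deformable registration rather than the fixed arrays shown here.

```python
# Contrast element-tracked vs DVH-style dose accumulation over two fractions
# for six toy rectum-wall elements, compared via generalized EUD.
import numpy as np

def eud(doses, a=8.0):
    """Generalized EUD; a > 1 gives serial-like weighting (assumed value)."""
    return float(np.mean(doses ** a) ** (1.0 / a))

dose_f1 = np.array([10.0, 30.0, 38.0, 40.0, 12.0, 5.0])
dose_f2 = np.array([38.0, 5.0, 40.0, 12.0, 30.0, 10.0])  # organ shifted between fractions

tracked = dose_f1 + dose_f2                      # same element, summed directly
dvh_style = np.sort(dose_f1) + np.sort(dose_f2)  # rank-wise sum, no tracking

print(eud(tracked), eud(dvh_style))
```

Here the DVH-style sum pairs each fraction's hottest doses with each other, so it yields a higher EUD than the tracked sum; with symmetric beams the two largely coincide, which matches the paper's conclusion that tracking matters mainly in asymmetric situations.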

  18. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  19. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J.; Batstone, D. J.;

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...

  20. Possibilities of the electric resistance method in the study of metals and alloys irradiated to high doses

    International Nuclear Information System (INIS)

    The possibilities of the electric resistance method are demonstrated for metals, alloys, and reactor core materials that have served their term after high-dose neutron and charged-particle irradiation. It is found that in pure BCC metals (α-Fe and Mo) and in the TiAl alloy, point defects, their fine clusters, and defect areas are generated mostly at the sites of atomic displacement cascades, partly relaxing into dislocation loops. The contribution of the loops to the increase in electric resistance does not exceed 3-4% for the BCC metals and 1.6% for the TiAl alloy. The addition of impurities leads to the formation of impurity atom - radiation defect complexes. In the hardened U-7 steels, martensite decay proceeds under high-power proton damage, accompanied by carbon separation from the solid solution and the formation of dispersed carbide phase particles. In the materials of the WWR-K reactor control rod (12Cr18Ni9Ti, SAV-1) and in EhP-172 steel, radiation-induced redistribution of impurities takes place, with their separation from the solid solution as fine phases on the grain boundaries or at other preferred precipitation sites.

  1. Quantum benchmarks for Gaussian states

    CERN Document Server

    Chiribella, Giulio

    2014-01-01

    Teleportation and storage of continuous variable states of light and atoms are essential building blocks for the realization of large scale quantum networks. Rigorous validation of these implementations requires identifying, and surpassing, benchmarks set by the most effective strategies attainable without the use of quantum resources. Such benchmarks have been established for special families of input states, like coherent states and particular subclasses of squeezed states. Here we solve the longstanding problem of defining quantum benchmarks for general pure Gaussian states with arbitrary phase, displacement, and squeezing, randomly sampled according to a realistic prior distribution. As a special case, we show that the fidelity benchmark for teleporting squeezed states with totally random phase and squeezing degree is 1/2, equal to the corresponding one for coherent states. We discuss the use of entangled resources to beat the benchmarks in experiments.

  2. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts...... to support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One...... way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent...

  3. Intra and inter-organizational learning from benchmarking IS services

    DEFF Research Database (Denmark)

    Mengiste, Shegaw Anagaw; Kræmmergaard, Pernille; Hansen, Bettina

    2016-01-01

    This paper reports a case study of benchmarking IS services in Danish municipalities. Drawing on Holmqvist’s (2004) organizational learning model of exploration and exploitation, the paper explores intra and inter-organizational learning dynamics among Danish municipalities that are involved...... in benchmarking their IS services and functions since 2006. Particularly, this research tackled existing IS benchmarking approaches and methods by turning to a learning-oriented perspective and by empirically exploring the dynamic process of intra and inter-organizational learning from benchmarking IS/IT services....... The paper also makes a contribution by emphasizing the importance of informal cross-municipality consortiums to facilitate learning and experience sharing across municipalities. The findings of the case study demonstrated that the IS benchmarking scheme is relatively successful in sharing good practices...

  4. Standard Guide for Benchmark Testing of Light Water Reactor Calculations

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...

  5. The Nature and Predictive Validity of a Benchmark Assessment Program in an American Indian School District

    Science.gov (United States)

    Payne, Beverly J. R.

    2013-01-01

    This mixed methods study explored the nature of a benchmark assessment program and how well the benchmark assessments predicted End-of-Grade (EOG) and End-of-Course (EOC) test scores in an American Indian school district. Five major themes were identified and used to develop a Dimensions of Benchmark Assessment Program Effectiveness model:…

  6. DETECTORS AND EXPERIMENTAL METHODS: ELDRS and dose-rate dependence of vertical NPN transistor

    Science.gov (United States)

    Zheng, Yu-Zhan; Lu, Wu; Ren, Di-Yuan; Wang, Gai-Li; Yu, Xue-Feng; Guo, Qi

    2009-01-01

    The enhanced low-dose-rate sensitivity (ELDRS) and dose-rate dependence of vertical NPN transistors are investigated in this article. The results show that the vertical NPN transistors exhibit more degradation at low dose rate, and that this degradation is attributed to the increase in base current. The oxide-trapped positive charge near the SiO2-Si interface and interface traps at the interface can contribute to the increase in base current, and the two-stage hydrogen mechanism associated with the space charge effect explains the experimental results well.

  7. Methods for calculating dose conversion coefficients for terrestrial and aquatic biota

    International Nuclear Information System (INIS)

    Plants and animals may be exposed to ionizing radiation from radionuclides in the environment. This paper describes the underlying data and assumptions used to assess doses to biota from internal and external exposure, for organisms covering a wide range of masses and shapes living in various habitats. A dosimetric module is implemented that provides a user-friendly and flexible means of assessing dose conversion coefficients for aquatic and terrestrial biota. The dose conversion coefficients have been derived for internal and various external exposure scenarios. The dosimetric model is linked to a radionuclide decay and emission database compatible with ICRP Publication 38, thus providing the capability to compute dose conversion coefficients for any nuclide from the database and its daughter nuclides. The dosimetric module has been integrated into the ERICA Tool, but it can also be used as a stand-alone version.

  8. Distinguishing of artificial irradiation by α dose: a method of discriminating imitations of ancient pottery

    International Nuclear Information System (INIS)

    If modern pottery is artificially irradiated with γ-rays from a 60Co source, it will appear ancient when dated by the thermoluminescence technique. A study was made to distinguish such artificial irradiation. The 'fine-grain' and 'pre-dose' techniques were used to measure the paleodose in samples from the same pottery. If the paleodose measured by the fine-grain technique is greater than that measured by the pre-dose technique, the difference between the two paleodoses is due to the α dose; a paleodose containing an α component results from natural radiation, and the pottery is therefore ancient. If the two paleodoses are approximately equal, i.e. no α dose is included in the paleodose, the paleodose comes from artificial γ irradiation and the pottery is an imitation.
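
The discrimination logic above reduces to a simple decision rule, sketched here with an assumed tolerance; the function name and the threshold value are hypothetical, and a real analysis would propagate measurement uncertainties rather than use a fixed relative tolerance.

```python
# Decision rule: a fine-grain paleodose clearly exceeding the pre-dose
# paleodose implies a natural alpha-dose component (ancient); approximate
# equality implies artificial gamma irradiation (imitation).

def classify(paleodose_fine_grain_gy: float, paleodose_predose_gy: float,
             rel_tolerance: float = 0.15) -> str:
    excess = paleodose_fine_grain_gy - paleodose_predose_gy
    if excess > rel_tolerance * paleodose_predose_gy:
        return "ancient: paleodose contains an alpha-dose component"
    return "imitation: paleodose consistent with artificial gamma irradiation"

print(classify(8.0, 5.0))   # fine-grain clearly exceeds pre-dose
print(classify(5.1, 5.0))   # the two paleodoses are approximately equal
```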

  9. Active vibration control of nonlinear benchmark buildings

    Institute of Scientific and Technical Information of China (English)

    ZHOU Xing-de; CHEN Dao-zheng

    2007-01-01

    Existing nonlinear model reduction methods are unsuitable for nonlinear benchmark buildings because their vibration equations constitute a non-affine system. Moreover, controllers designed directly by nonlinear control strategies have high order and are difficult to apply in practice. Therefore, a new active vibration control approach suited to nonlinear buildings is proposed. The idea of the proposed approach is based on model identification and structural model linearization, exerting the control force on the built model according to the force action principle. The proposed approach is practical because the built model can be reduced by the balanced reduction method based on the empirical Grammian matrix. A three-story benchmark structure is presented, and the simulation results illustrate that the proposed method is viable for civil engineering structures.

  10. Benchmarking Dosimetric Quality Assessment of Prostate Intensity-Modulated Radiotherapy

    International Nuclear Information System (INIS)

    Purpose: To benchmark the dosimetric quality assessment of prostate intensity-modulated radiotherapy and determine whether the quality is influenced by disease or treatment factors. Patients and Methods: We retrospectively analyzed the data from 155 consecutive men treated radically for prostate cancer using intensity-modulated radiotherapy to 78 Gy between January 2007 and March 2009 across six radiotherapy treatment centers. The plan quality was determined by the measures of coverage, homogeneity, and conformity. Tumor coverage was measured using the planning target volume (PTV) receiving 95% and 100% of the prescribed dose (V95% and V100%, respectively) and the clinical target volume (CTV) receiving 95% and 100% of the prescribed dose. Homogeneity was measured using the sigma index of the PTV and CTV. Conformity was measured using the lesion coverage factor, healthy tissue conformity index, and the conformity number. Multivariate regression models were created to determine the relationship between these and T stage, risk status, androgen deprivation therapy use, treatment center, planning system, and treatment date. Results: The largest discriminatory measurements of coverage, homogeneity, and conformity were the PTV V95%, PTV sigma index, and conformity number. The mean PTV V95% was 92.5% (95% confidence interval, 91.3–93.7%). The mean PTV sigma index was 2.10 Gy (95% confidence interval, 1.90–2.20). The mean conformity number was 0.78 (95% confidence interval, 0.76–0.79). The treatment center independently influenced the coverage, homogeneity, and conformity (all p < .0001). The planning system independently influenced homogeneity (p = .038) and conformity (p = .021). The treatment date independently influenced the PTV V95% only, with it being better at the start (p = .013). Risk status, T stage, and the use of androgen deprivation therapy did not influence any aspect of plan quality. Conclusion: Our study has benchmarked measures of coverage, homogeneity, and conformity for the treatment of prostate cancer using IMRT. The differences seen between centers and planning systems and the coverage deterioration

  11. Benchmarking Dosimetric Quality Assessment of Prostate Intensity-Modulated Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Senthi, Sashendra, E-mail: sasha.senthi@petermac.org [Division of Radiation Oncology, Peter MacCallum Cancer Center, East Melbourne, VIC (Australia); Gill, Suki S. [Division of Radiation Oncology, Peter MacCallum Cancer Center, East Melbourne, VIC (Australia); Haworth, Annette; Kron, Tomas; Cramb, Jim [Department of Physical Sciences, Peter MacCallum Cancer Center, East Melbourne, VIC (Australia); Rolfo, Aldo [Radiation Therapy Services, Peter MacCallum Cancer Center, East Melbourne, VIC (Australia); Thomas, Jessica [Biostatistics and Clinical Trials, Peter MacCallum Cancer Center, East Melbourne, VIC (Australia); Duchesne, Gillian M. [Division of Radiation Oncology, Peter MacCallum Cancer Center, East Melbourne, VIC (Australia); Hamilton, Christopher H.; Joon, Daryl Lim [Radiation Oncology Department, Austin Repatriation Hospital, Heidelberg, VIC (Australia); Bowden, Patrick [Radiation Oncology Department, Tattersall' s Cancer Center, East Melbourne, VIC (Australia); Foroudi, Farshad [Division of Radiation Oncology, Peter MacCallum Cancer Center, East Melbourne, VIC (Australia)

    2012-02-01

    Purpose: To benchmark the dosimetric quality assessment of prostate intensity-modulated radiotherapy and determine whether the quality is influenced by disease or treatment factors. Patients and Methods: We retrospectively analyzed the data from 155 consecutive men treated radically for prostate cancer using intensity-modulated radiotherapy to 78 Gy between January 2007 and March 2009 across six radiotherapy treatment centers. The plan quality was determined by the measures of coverage, homogeneity, and conformity. Tumor coverage was measured using the planning target volume (PTV) receiving 95% and 100% of the prescribed dose (V{sub 95%} and V{sub 100%}, respectively) and the clinical target volume (CTV) receiving 95% and 100% of the prescribed dose. Homogeneity was measured using the sigma index of the PTV and CTV. Conformity was measured using the lesion coverage factor, healthy tissue conformity index, and the conformity number. Multivariate regression models were created to determine the relationship between these and T stage, risk status, androgen deprivation therapy use, treatment center, planning system, and treatment date. Results: The largest discriminatory measurements of coverage, homogeneity, and conformity were the PTV V{sub 95%}, PTV sigma index, and conformity number. The mean PTV V{sub 95%} was 92.5% (95% confidence interval, 91.3-93.7%). The mean PTV sigma index was 2.10 Gy (95% confidence interval, 1.90-2.20). The mean conformity number was 0.78 (95% confidence interval, 0.76-0.79). The treatment center independently influenced the coverage, homogeneity, and conformity (all p < .0001). The planning system independently influenced homogeneity (p = .038) and conformity (p = .021). The treatment date independently influenced the PTV V{sub 95%} only, with it being better at the start (p = .013). Risk status, T stage, and the use of androgen deprivation therapy did not influence any aspect of plan quality. Conclusion: Our study has benchmarked measures

  12. Quantitative consistency testing of thermal benchmark lattice experiments

    International Nuclear Information System (INIS)

    The paper sets forth a general method to demonstrate the quantitative consistency (or inconsistency) of results of thermal reactor lattice experiments. The method is of particular importance in selecting standard ''benchmark'' experiments for comparison testing of lattice analysis codes and neutron cross sections. ''Benchmark'' thermal lattice experiments are currently selected by consensus, which usually means the experiment is geometrically simple, well-documented, reasonably complete, and qualitatively consistent. A literature search has not revealed any general quantitative test that has been applied to experimental results to demonstrate consistency, although some experiments must have been subjected to some form or other of quantitative test. The consistency method is based on a two-group neutron balance condition that is capable of revealing the quantitative consistency (or inconsistency) of reported thermal benchmark lattice integral parameters. This equation is used in conjunction with a second equation in the following discussion to assess the consistency (or inconsistency) of: (1) several Cross Section Evaluation Working Group (CSEWG) defined thermal benchmark lattices, (2) SRL experiments on the Mark 5R and Mark 15 lattices, and (3) several D2O lattices encountered as proposed thermal benchmark lattices. Nineteen thermal benchmark lattice experiments were subjected to a quantitative test of consistency between the reported experimental integral parameters. Results of this testing showed only two lattice experiments to be generally useful as ''benchmarks,'' three lattice experiments to be of limited usefulness, three lattice experiments to be potentially useful, and 11 lattice experiments to be not useful. These results are tabulated with the lattices identified

  13. SU-C-207-02: A Method to Estimate the Average Planar Dose From a C-Arm CBCT Acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Supanich, MP [Rush University Medical Center, Chicago, IL (United States)

    2015-06-15

    Purpose: The planar average dose in a C-arm Cone Beam CT (CBCT) acquisition has in the past been estimated by averaging the four peripheral dose measurements in a CTDI phantom and then using the standard two-thirds peripheral and one-third central CTDIw method (hereafter referred to as Dw). The accuracy of this assumption has not been investigated, and the purpose of this work is to test the presumed relationship. Methods: Dose measurements were made in the central plane of two consecutively placed 16 cm CTDI phantoms using a 0.6 cc ionization chamber at each of the 4 peripheral dose bores and in the central dose bore for a C-arm CBCT protocol. The same setup was scanned with a circular cut-out of radiosensitive Gafchromic film positioned between the two phantoms to capture the planar dose distribution. Calibration curves for color pixel value after scanning were generated from film strips irradiated at different known dose levels. The planar average dose for red and green pixel values was calculated by summing the dose values in the irradiated circular film cut-out. Dw was calculated using the ionization chamber measurements and film dose values at the location of each of the dose bores. Results: The planar average dose using both the red and green pixel color calibration curves was within 10% agreement of the planar average dose estimated using the Dw method of film dose values at the bore locations. Additionally, an average of the planar average doses calculated using the red and green calibration curves differed from the ionization chamber Dw estimate by only 5%. Conclusion: The method of calculating the planar average dose at the central plane of a non-360° C-arm CBCT rotation by calculating Dw from peripheral and central dose bore measurements is a reasonable approach to estimating the planar average dose. Research Grant, Siemens AG.
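
The Dw estimate tested above is the standard CTDIw combination of one-third central plus two-thirds the mean of the four peripheral bore readings, which can be stated directly in code; the example readings are hypothetical.

```python
# Standard CTDIw-style weighting applied to point dose measurements:
# Dw = (1/3) * center + (2/3) * mean(peripheral bores).

def dw(center_mgy: float, peripheral_mgy) -> float:
    """Weighted planar dose estimate from central and peripheral bore doses."""
    return center_mgy / 3.0 + 2.0 * (sum(peripheral_mgy) / len(peripheral_mgy)) / 3.0

# A partial (non-360°) C-arm rotation gives asymmetric peripheral readings.
print(round(dw(10.0, [14.0, 9.0, 6.0, 11.0]), 2))  # -> 10.0
```

The asymmetry of the peripheral readings is exactly why the paper checks this weighting against film: the formula assumes the four-bore mean represents the periphery even when the rotation does not cover the full circle.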

  14. Thermoluminescence dating of the ancient Chinese porcelain using a regression method of saturation exponential in pre-dose technique

    Institute of Scientific and Technical Information of China (English)

    WANG; Weida; XIA; Junding; ZHOU; Zhixin

    2006-01-01

This paper studies the thermoluminescence (TL) dating of ancient porcelain using a saturation-exponential regression method within the pre-dose technique. The experimental results show measurement errors of 15% (±1σ) for the paleodose and 17% (±1σ) for the annual dose, giving a TL age error of 23% (±1σ) for this method. Large Chinese porcelains from museums and nationwide collectors have been dated by this method. The results show that the certainty of the authenticity testing exceeds 95%, and that the measurable pieces make up about 95% of the porcelains submitted for dating. The method is very successful in identifying imitations of ancient Chinese porcelain. This paper describes the measurement principle and the method for determining the paleodose of porcelain. TL ages obtained by this method for 39 shards and porcelains from past Chinese dynasties are reported together with the detailed measurement data.
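The TL age is the quotient of paleodose and annual dose, so its relative error follows from the component errors by quadrature addition; the sketch below (function names are illustrative) reproduces the abstract's 23% age error from the quoted 15% and 17%:

```python
import math

def tl_age(paleodose_gy, annual_dose_gy_per_year):
    """TL age in years: accumulated paleodose divided by the annual dose."""
    return paleodose_gy / annual_dose_gy_per_year

def tl_age_rel_error(rel_err_paleodose, rel_err_annual_dose):
    """Relative error of a quotient of independent quantities:
    the relative errors add in quadrature."""
    return math.hypot(rel_err_paleodose, rel_err_annual_dose)
```

With the abstract's values, sqrt(0.15² + 0.17²) ≈ 0.227, i.e. the quoted 23% (±1σ).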

  15. The Type of Container and Filling Method Have Consequences on Semen Quality in Swine AI Doses

    Directory of Open Access Journals (Sweden)

    Iulian Ibanescu

    2016-05-01

Full Text Available The automatic filling of semen doses for artificial insemination in swine shows economic advantages over old-style manual filling. However, no data could be found regarding the impact, if any, of this packing method on semen quality. This study aimed to compare two types of containers for boar semen, namely the automatically filled tube and the manually filled bottle, in terms of preserving boar semen quality. Five ejaculates from five different boars were diluted with the same extender and then divided into two aliquots. The first aliquot was loaded into tubes filled by an automatic machine, while the second was loaded manually into special plastic bottles. The semen was stored in liquid state at 17°C, regardless of the type of container, and examined daily over five days of storage by means of a computer-assisted sperm analyzer. Both types of containers maintained the semen within acceptable values, but after five days of storage significant differences (p<0.05) between the container types were observed in all selected kinetic parameters. The tube showed better values for sperm motility and velocity, while the bottle showed superior values for straightness and linearity of sperm movement. The automatically filled tubes offered better sperm motility on every day of the study. Given that sperm motility is still the main criterion for assessing semen quality in semen production centers, the main conclusion of this study is that automatic loading in tubes is superior to, and recommended over, old-style manual loading in bottles.

  16. Estimation of effective doses to adult and pediatric patients from multislice computed tomography: A method based on energy imparted

    International Nuclear Information System (INIS)

The purpose of this study is to provide a method and the required data for estimating effective dose (E) values to adult and pediatric patients from computed tomography (CT) scans of the head, chest, abdomen, and pelvis performed on multislice scanners. The mean section radiation dose (dm) to cylindrical water phantoms of varying radius, normalized to the CT dose index free-in-air (CTDIF), was calculated for the head and body scanning modes of a multislice scanner with the use of Monte Carlo techniques. Patients were modeled as equivalent water phantoms, and the energy imparted (ε) to simulated pediatric and adult patients was calculated on the basis of measured CTDIF values. Body-region-specific energy imparted to effective dose conversion coefficients (E/ε) for adult male and female patients were generated from previous data. Effective doses to patients aged newborn to adult were derived for all available helical and axial beam collimations, taking into account age-specific patient mass and scanning length. Depending on high voltage, body region, and patient sex, E/ε values ranged from 0.008 mSv/mJ for head scans to 0.024 mSv/mJ for chest scans. When scanned with the same technique factors as adults, pediatric patients absorb as little as 5% of the energy imparted to adults, but the corresponding effective dose values are up to a factor of 1.6 higher. On average, pediatric patients absorb 44% less energy per examination but receive a 24% higher effective dose compared with adults. In clinical practice, effective dose values to pediatric patients are 2.5 to 10 times lower than in adults due to the adaptation of tube current. A method is provided for the calculation of effective dose to adult and pediatric patients on the basis of individual patient characteristics such as sex, mass, dimensions, and density of the imaged anatomy, and the technical features of modern multislice scanners. It allows the optimum selection of scanning parameters with regard to patient doses in CT.
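The core of the method above is a single multiplication: effective dose equals the energy imparted times a body-region- and sex-specific E/ε coefficient. A minimal sketch, where the coefficient values are order-of-magnitude placeholders taken from the range quoted in the abstract, not the paper's tabulated data:

```python
def effective_dose_mSv(energy_imparted_mJ, e_per_eps_mSv_per_mJ):
    """Effective dose E = (E/epsilon) * epsilon, with a body-region-specific
    conversion coefficient E/epsilon in mSv/mJ."""
    return energy_imparted_mJ * e_per_eps_mSv_per_mJ

# Illustrative coefficients only (the abstract quotes a 0.008-0.024 mSv/mJ range):
E_PER_EPS = {"head": 0.008, "chest": 0.024}
```

For example, 100 mJ of energy imparted during a chest scan would correspond to roughly 100 × 0.024 = 2.4 mSv under these placeholder coefficients.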

  17. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption.

  18. Benchmarking in water project analysis

    Science.gov (United States)

    Griffin, Ronald C.

    2008-11-01

    The with/without principle of cost-benefit analysis is examined for the possible bias that it brings to water resource planning. Theory and examples for this question are established. Because benchmarking against the demonstrably low without-project hurdle can detract from economic welfare and can fail to promote efficient policy, improvement opportunities are investigated. In lieu of the traditional, without-project benchmark, a second-best-based "difference-making benchmark" is proposed. The project authorizations and modified review processes instituted by the U.S. Water Resources Development Act of 2007 may provide for renewed interest in these findings.

  19. A method to reduce patient's eye lens dose in neuro-interventional radiology procedures

    Science.gov (United States)

    Safari, M. J.; Wong, J. H. D.; Kadir, K. A. A.; Sani, F. M.; Ng, K. H.

    2016-08-01

Complex and prolonged neuro-interventional radiology procedures using the biplane angiography system increase the patient's risk of radiation-induced cataract. Physical collimation is the most effective way of reducing the radiation dose to the patient's eye lens, but in instances where collimation is not possible, an attenuator may be useful in protecting the eyes. In this study, an eye lens protector was designed and fabricated to reduce the radiation dose to the patient's eye lens during neuro-interventional procedures. The eye protector was characterised before being tested for its effectiveness in a simulated aneurysm procedure on an anthropomorphic phantom. Effects on the automatic dose rate control (ADRC) and image quality were also evaluated. The eye protector reduced the radiation dose by up to 62.1% at the eye lens. It is faintly visible in the fluoroscopy images and increased the tube current by a maximum of 3.7%. It is completely invisible in the acquisition mode and does not interfere with the clinical procedure. Placed within the radiation field of view, the eye protector was able to reduce the radiation dose delivered to the eye lens by the direct beam of the lateral x-ray tube, with minimal effect on the ADRC system.

  20. Track 3: growth of nuclear technology and research numerical and computational aspects of the coupled three-dimensional core/plant simulations: organization for economic cooperation and development/U.S. nuclear regulatory commission pressurized water reactor main-steam-line-break benchmark-I. 4. Methods and Results for the MSLB NEA Benchmark Using SIMTRAN and RELAP-5

    International Nuclear Information System (INIS)

The purpose of this work is to discuss the methods developed in our three-dimensional (3-D) pressurized water reactor (PWR) SIMTRAN core dynamics code and its coupling to the RELAP-5 system code for general transient and safety analysis, as well as its demonstration application to the Nuclear Energy Agency/Organization for Economic Cooperation and Development (NEA/OECD) Benchmark on Main Steam Line Break (MSLB), cosponsored by the U.S. Nuclear Regulatory Commission (NRC) and other regulatory institutions. In particular, our work has been supported by the Spanish Consejo de Seguridad Nuclear (CSN) under a CSN research project. SIMTRAN is our 3-D PWR core dynamics code, which has been under development and validation for ∼10 yr (Refs. 1, 2, and 3). It was developed as a single-code merge, with data sharing through standard FORTRAN commons, of our SIMULA 3-D neutronics nodal code and the COBRA-IIIC/MIT-2 multichannel, with cross-flows, thermal-hydraulics (T-H) code. Both codes solve the 3-D neutronic and T-H fields with maximum implicitness, using direct and iterative methods for the inversion of the linearized systems. SIMULA uses synthetic coarse-mesh discontinuity factors, in the XY directions, pre-calculated by two-dimensional (2-D) pin-by-pin two-group diffusion calculations of whole core planes, and embedded iterative one-dimensional (1-D) fine-mesh two-group diffusion solutions in the axial direction. COBRA uses direct inversion at each plane of the axial flow equations, with cross-flows updated over an outer iteration loop, for the homogeneous-model single-phase coolant, and finite element direct solution of the fuel rod radial temperatures.
The 3-D core N-T-H coupling is done internally by a semi-implicit scheme, using a staggered alternate time mesh, where the T-H solution is done at the half of the neutronic time step (thus conserving energy by taking the power centered in the time step) and extrapolating the 3-D T-H variables over a half of the time step

  1. Experimental Research of High-Energy Capabilities of Material Recognition by Dual-Energy Method for the Low- Dose Radiation

    Science.gov (United States)

    Abashkin, A.; Osipov, S.; Chakhlov, S.; Shteyn, A.

    2016-06-01

The algorithm for producing primary radiographs, transforming them by the dual-energy method, and recognizing the object materials was enhanced based on the analysis of experimental results. The experiments were carried out at the inspection complex with a high-energy X-ray source (betatron MIB 4/9) at Tomsk Polytechnic University. For a reduced X-ray dose rate, the possibility of recognizing object materials with mass thickness from 20 to 120 g/cm2 was demonstrated, under the condition that as the dose rate is reduced by a given factor, the segment of the image fragment over which the material is reliably identified increases by the same factor.

  2. Reanalysis of cancer mortality in Japanese A-bomb survivors exposed to low doses of radiation: bootstrap and simulation methods

    Directory of Open Access Journals (Sweden)

    Dropkin Greg

    2009-12-01

Full Text Available Abstract Background The International Commission on Radiological Protection (ICRP) recommended annual occupational dose limit is 20 mSv. Cancer mortality in Japanese A-bomb survivors exposed to less than 20 mSv external radiation in 1945 was analysed previously, using a latency model with non-linear dose response. Questions were raised regarding statistical inference with this model. Methods Cancers with over 100 deaths in the 0 - 20 mSv subcohort of the 1950-1990 Life Span Study are analysed with Poisson regression models incorporating latency, allowing linear and non-linear dose response. Bootstrap percentile and bias-corrected accelerated (BCa) methods and simulation of the Likelihood Ratio Test lead to Confidence Intervals for Excess Relative Risk (ERR) and tests against the linear model. Results The linear model shows significant large, positive values of ERR for liver and urinary cancers at latencies from 37 - 43 years. Dose response below 20 mSv is strongly non-linear at the optimal latencies for the stomach (11.89 years), liver (36.9), lung (13.6), leukaemia (23.66), and pancreas (11.86), and across broad latency ranges. Confidence Intervals for ERR are comparable using Bootstrap and Likelihood Ratio Test methods, and BCa 95% Confidence Intervals are strictly positive across latency ranges for all 5 cancers. Similar risk estimates for 10 mSv (lagged dose) are obtained from the 0 - 20 mSv and 5 - 500 mSv data for the stomach, liver, lung and leukaemia. Dose response for the latter 3 cancers is significantly non-linear in the 5 - 500 mSv range. Conclusion Liver and urinary cancer mortality risk is significantly raised using a latency model with linear dose response. A non-linear model is strongly superior for the stomach, liver, lung, pancreas and leukaemia. Bootstrap and Likelihood-based confidence intervals are broadly comparable and ERR is strictly positive by bootstrap methods for all 5 cancers. Except for the pancreas, similar estimates of
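The percentile bootstrap used in the Methods above can be sketched generically: resample the data with replacement, recompute the statistic on each resample, and read the confidence limits off the empirical quantiles. This is a minimal illustration of the percentile method only (the study's ERR estimates come from Poisson regression, and the BCa correction is not shown):

```python
import random

def bootstrap_percentile_ci(data, stat, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap CI: resample with replacement, recompute the
    statistic, and take the alpha/2 and 1-alpha/2 empirical quantiles."""
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(
        stat([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    lo_idx = int((alpha / 2) * n_boot)
    hi_idx = int((1 - alpha / 2) * n_boot) - 1
    return stats[lo_idx], stats[hi_idx]

def mean(xs):
    return sum(xs) / len(xs)
```

A statement like "the BCa 95% CI for ERR is strictly positive" then corresponds to the returned lower limit being greater than zero (after the BCa adjustment, which shifts these quantile indices).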

  3. Characteristics and performance of the Sunna high dose dosemeter using green photoluminescence and UV absorption readout methods

    Energy Technology Data Exchange (ETDEWEB)

    Miller, S.D.; Murphy, M.K.; Tinker, M.R.; Kovacs, A.; McLaughlin, W

    2002-07-01

Growth in the use of ionising radiation for medical sterilisation and the potential for wide-scale international food irradiation have created the need for robust, mass-producible, inexpensive, and highly accurate radiation dosemeters. The Sunna dosemeter, lithium fluoride injection-moulded in a polyethylene matrix, can be read out using either green photoluminescence or ultraviolet (UV) absorption. The Sunna dosemeter can be mass-produced inexpensively with high precision. Both the photoluminescence and the UV absorption readers are simple and inexpensive. Both methods of analysis display negligible humidity effects, minimal dose rate dependence, and acceptable post-irradiation effects, and permit measurements with a precision of nearly 1% (1σ). The UV method shows negligible irradiation temperature effects from -30 deg. C to +60 deg. C. The photoluminescence method shows negligible irradiation temperature effects above room temperature for sterilisation dose levels and above. The dosimetry characteristics of these two readout methods are presented along with performance data from commercial sterilisation facilities. (author)

  4. The evaluation of radiation dose by exposure method in digital magnification mammography

    International Nuclear Information System (INIS)

In digital mammography, exposure factors are chosen automatically based on the measured breast thickness and mammary gland density, which may increase the average glandular dose (AGD). The purpose of this study was to identify settings that preserve image quality in digital magnification mammography while decreasing the patient's radiation exposure. Auto mode gave the best image quality, but at the cost of a higher AGD. The image quality of manual mode at 55% of the auto-mode mAs, as commonly used in digital magnification mammography, passed the phantom test and SNR criteria while also reducing the AGD. According to these results, manual mode may reduce unnecessary radiation exposure in digital magnification mammography

  5. The evaluation of radiation dose by exposure method in digital magnification mammography

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Mi Young [Dankook Univ. Hospital, Cheonan (Korea, Republic of); Kim, Hwa Sun [Ansan Univ., Ansan (Korea, Republic of)

    2012-12-15

In digital mammography, exposure factors are chosen automatically based on the measured breast thickness and mammary gland density, which may increase the average glandular dose (AGD). The purpose of this study was to identify settings that preserve image quality in digital magnification mammography while decreasing the patient's radiation exposure. Auto mode gave the best image quality, but at the cost of a higher AGD. The image quality of manual mode at 55% of the auto-mode mAs, as commonly used in digital magnification mammography, passed the phantom test and SNR criteria while also reducing the AGD. According to these results, manual mode may reduce unnecessary radiation exposure in digital magnification mammography.

  6. SU-E-T-465: Dose Calculation Method for Dynamic Tumor Tracking Using a Gimbal-Mounted Linac

    Energy Technology Data Exchange (ETDEWEB)

    Sugimoto, S; Inoue, T; Kurokawa, C; Usui, K; Sasai, K [Juntendo University, Bunkyo, Tokyo, JP (Japan); Utsunomiya, S [Niigata University, Niigata, Nigata, JP (Japan); Ebe, K [Joetsu General Hospital, Joetsu, Niigata, JP (Japan)

    2014-06-01

Purpose: Dynamic tumor tracking using the gimbal-mounted linac (Vero4DRT, Mitsubishi Heavy Industries, Ltd., Japan) has been available for cases where respiratory motion is significant. The irradiation accuracy of dynamic tumor tracking has been reported to be excellent. In addition to irradiation accuracy, a fast and accurate dose calculation algorithm is needed to validate the dose distribution in the presence of respiratory motion, because its multiple phases have to be considered. A modification of the dose calculation algorithm is necessary for the gimbal-mounted linac due to the degrees of freedom of the gimbal swing. The dose calculation algorithm for the gimbal motion was implemented using linear transformations between coordinate systems. Methods: The linear transformation matrices between the coordinate systems with and without gimbal swings were constructed as combinations of translation and rotation matrices. A coordinate system with the radiation source at the origin and the beam axis along the z axis was adopted. The transformation can be divided into a translation from the radiation source to the gimbal rotation center, two rotations around the center corresponding to the gimbal swings, and a translation from the gimbal center back to the radiation source. After applying the transformation matrix to the phantom or patient image, the dose calculation can be performed as in the case without gimbal swing. The algorithm was implemented in the treatment planning system PlanUNC (University of North Carolina, NC), using the convolution/superposition algorithm. Dose calculations with and without gimbal swings were performed for a 3 × 3 cm² field with a grid size of 5 mm. Results: The calculation time was about 3 minutes per beam. No significant additional time due to the gimbal swing was observed. Conclusions: A dose calculation algorithm for finite gimbal swing was implemented. The calculation time was moderate.
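The composition described in the Methods (translate to the gimbal center, apply the two swing rotations, translate back) can be sketched with homogeneous 4×4 matrices. This is a generic illustration of that composition, not the Vero4DRT implementation; the axis assignments and function names are assumptions:

```python
import numpy as np

def translation(t):
    """Homogeneous 4x4 translation by vector t."""
    M = np.eye(4)
    M[:3, 3] = t
    return M

def rotation_x(a):
    """Homogeneous rotation by angle a (rad) about the x axis."""
    c, s = np.cos(a), np.sin(a)
    M = np.eye(4)
    M[1:3, 1:3] = [[c, -s], [s, c]]
    return M

def rotation_y(a):
    """Homogeneous rotation by angle a (rad) about the y axis."""
    c, s = np.cos(a), np.sin(a)
    M = np.eye(4)
    M[0, 0], M[0, 2], M[2, 0], M[2, 2] = c, s, -s, c
    return M

def gimbal_transform(center, pan, tilt):
    """Compose: translate the gimbal rotation center to the origin, apply the
    two gimbal-swing rotations, translate back (source stays at the origin
    of the beam coordinate system, beam axis along z)."""
    c = np.asarray(center, dtype=float)
    return translation(c) @ rotation_x(tilt) @ rotation_y(pan) @ translation(-c)
```

By construction, zero swing angles give the identity and the gimbal rotation center is a fixed point of the transform, which is a quick sanity check on the matrix composition.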

  7. Benchmarking Nature Tourism between Zhangjiajie and Repovesi

    OpenAIRE

    Wu, Zhou

    2014-01-01

Since nature tourism became a booming business in modern society, more and more tourists choose nature-based tourism destinations for their holidays. Finding ways to promote Repovesi national park is quite significant, in a bid to reinforce its competitiveness. The topic of this thesis is twofold: to identify, via benchmarking, good marketing strategies used by Zhangjiajie national park, and to provide suggestions to Repovesi national park. The Method used in t...

  8. Integration method of 3D MR spectroscopy into treatment planning system for glioblastoma IMRT dose painting with integrated simultaneous boost

    Directory of Open Access Journals (Sweden)

    Ken Soléakhéna

    2013-01-01

Full Text Available Abstract Background To integrate 3D MR spectroscopy imaging (MRSI) in the treatment planning system (TPS) for glioblastoma dose painting to guide simultaneous integrated boost (SIB) in intensity-modulated radiation therapy (IMRT). Methods For sixteen glioblastoma patients, we have simulated three types of dosimetry plans: one conventional plan of 60-Gy in 3D conformational radiotherapy (3D-CRT), one 60-Gy plan in IMRT and one 72-Gy plan in SIB-IMRT. All sixteen MRSI metabolic maps were integrated into the TPS, using normalization with color-space conversion and threshold-based segmentation. The fusion between the metabolic maps and the planning CT scans was assessed. Dosimetry comparisons were performed between the different plans of 60-Gy 3D-CRT, 60-Gy IMRT and 72-Gy SIB-IMRT; the last plan was targeted on MRSI abnormalities and contrast enhancement (CE). Results Fusion assessment was performed for 160 transformations. It resulted in maximum differences <1.00 mm for translation parameters and ≤1.15° for rotation, and the 72-Gy SIB-IMRT and 60-Gy IMRT plans significantly decreased dose to organs at risk compared to 60-Gy 3D-CRT (p < 0.05). Conclusions Delivering standard doses to the conventional target and higher doses to new target volumes characterized by MRSI and CE is now possible and does not increase dose to organs at risk. MRSI and CE abnormalities are now integrated for glioblastoma SIB-IMRT, concomitant with temozolomide, in an ongoing multi-institutional phase-III clinical trial. Our method of integrating MR spectroscopy maps into the TPS is robust and reliable; integration into neuronavigation systems with this method could also improve glioblastoma resection or guide biopsies.

  9. Water Level Superseded Benchmark Sheets

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations....

  10. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

In order to learn from the best, the European Commission in 2000 initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable...... tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly ‘sustainable transport......’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly policies are not directly comparable across space and context. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which

  11. The effect of different lung densities on the accuracy of various radiotherapy dose calculation methods: implications for tumour coverage

    DEFF Research Database (Denmark)

    Aarup, Lasse Rye; Nahum, Alan E; Zacharatou, Christina;

    2009-01-01

    PURPOSE: To evaluate against Monte-Carlo the performance of various dose calculations algorithms regarding lung tumour coverage in stereotactic body radiotherapy (SBRT) conditions. MATERIALS AND METHODS: Dose distributions in virtual lung phantoms have been calculated using four commercial...... Treatment Planning System (TPS) algorithms and one Monte Carlo (MC) system (EGSnrc). We compared the performance of the algorithms in calculating the target dose for different degrees of lung inflation. The phantoms had a cubic 'body' and 'lung' and a central 2-cm diameter spherical 'tumour' (the body...... and tumour have unit density). The lung tissue was assigned five densities (rho(lung)): 0.01, 0.1, 0.2, 0.4 and 1g/cm(3). Four-field treatment plans were calculated with 6- and 18 MV narrow beams for each value of rho(lung). We considered the Pencil Beam Convolution (PBC(Ecl)) and the Analytical Anisotropic...

  12. DEEP code to calculate dose equivalents in human phantom for external photon exposure by Monte Carlo method

    International Nuclear Information System (INIS)

The present report describes a computer code, DEEP, which calculates the organ dose equivalents and the effective dose equivalent for external photon exposure by the Monte Carlo method. MORSE-CG, a Monte Carlo radiation transport code, is incorporated into DEEP to simulate photon transport phenomena in and around a human body. The code treats an anthropomorphic phantom represented by mathematical formulae, and the user has a choice of phantom sex: male, female, or unisex. The phantom can wear personal dosimeters, whose location and dimensions the user can specify. This document includes instructions and a sample problem for the code as well as a general description of the dose calculation, the human phantom, and the computer code. (author)

  13. A calculational method of photon dose equivalent based on the revised technical standards of radiological protection law

    International Nuclear Information System (INIS)

The effective conversion factors for photons from 0.03 to 10 MeV were calculated to convert the absorbed dose in air to the 1 cm, 3 mm, and 70 μm depth dose equivalents behind iron, lead, concrete, and water shields up to 30 mfp thickness. The effective conversion factor changes slightly with shield thickness and becomes nearly constant at 5 to 10 mfp. The difference in the effective conversion factor between plane-normal and point-isotropic geometries was less than 2%. It is suggested that the present method, which makes the existing database of exposure buildup factors usable, would be very effective compared with a new evaluation of the dose equivalent buildup factors. 5 refs., 7 figs., 22 tabs
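In practice the method above amounts to multiplying the absorbed dose in air by an effective conversion factor looked up (or interpolated) for the given shield thickness. A minimal sketch; the table values below are illustrative placeholders, not the paper's tabulated factors:

```python
def interp_factor(table, mfp):
    """Linearly interpolate the effective conversion factor (Sv/Gy) over
    shield thickness in mean free paths; clamp outside the table range."""
    pts = sorted(table.items())
    if mfp <= pts[0][0]:
        return pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if mfp <= x1:
            return y0 + (y1 - y0) * (mfp - x0) / (x1 - x0)
    return pts[-1][1]

def dose_equivalent(absorbed_dose_air_gy, factor_sv_per_gy):
    """Depth dose equivalent = absorbed dose in air x effective factor."""
    return absorbed_dose_air_gy * factor_sv_per_gy

# Hypothetical factor table: nearly constant beyond 5-10 mfp, as the
# abstract reports.
FACTORS = {0.0: 1.10, 5.0: 1.20, 10.0: 1.20, 30.0: 1.20}
```

The flat tail of the table reflects the abstract's observation that the factor becomes nearly constant at 5 to 10 mfp.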

  14. Simulating diffusion processes in discontinuous media: Benchmark tests

    Science.gov (United States)

    Lejay, Antoine; Pichot, Géraldine

    2016-06-01

    We present several benchmark tests for Monte Carlo methods simulating diffusion in one-dimensional discontinuous media. These benchmark tests aim at studying the potential bias of the schemes and their impact on the estimation of micro- or macroscopic quantities (repartition of masses, fluxes, mean residence time, …). These benchmark tests are backed by a statistical analysis to filter out the bias from the unavoidable Monte Carlo error. We apply them on four different algorithms. The results of the numerical tests give a valuable insight into the fine behavior of these schemes, as well as rules to choose between them.
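A naive scheme of the kind such benchmarks are designed to probe can be sketched as follows: an Euler discretization of dX = sqrt(2 D(X)) dW that simply evaluates the discontinuous diffusivity at the current position, followed by an estimate of the mass repartition across the interface. This is a generic illustration (not one of the four algorithms tested in the paper), and all parameter values are placeholders:

```python
import math
import random

def naive_euler_mass_split(d_left, d_right, n_particles=20000,
                           n_steps=200, dt=1e-3, seed=1):
    """Simulate particles started at the interface x = 0 with a naive Euler
    scheme for dX = sqrt(2 D(X)) dW, D discontinuous at 0; return the
    fraction of particles ending on the right half-line. Benchmark tests
    like those in the paper quantify the bias of such schemes."""
    rng = random.Random(seed)
    right = 0
    for _ in range(n_particles):
        x = 0.0  # all mass starts at the discontinuity
        for _ in range(n_steps):
            d = d_right if x >= 0 else d_left  # diffusivity at current side
            x += math.sqrt(2.0 * d * dt) * rng.gauss(0.0, 1.0)
        if x >= 0:
            right += 1
    return right / n_particles
```

In the continuous case (d_left == d_right) the split must be 1/2 by symmetry; comparing the simulated split against the known analytical repartition in the discontinuous case is exactly the kind of bias check the benchmark tests formalize, with a statistical analysis to separate scheme bias from Monte Carlo error.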

  15. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano;

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we...... compare numerical predictions of the concrete sample final shape for these two benchmark flows obtained by various research teams around the world using various numerical techniques. Our results show that all numerical techniques compared here give very similar results suggesting that numerical...

  16. Fault detection of a benchmark wind turbine using interval analysis

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Odgaard, Peter Fogh; Bak, Thomas

    2012-01-01

    This paper investigates a state estimation set- membership approach for fault detection of a benchmark wind turbine. The main challenges in the benchmark are high noise on the wind speed measurement and the nonlinearities in the aerodynamic torque such that the overall model of the turbine...... of the measurement with a closed set that is computed based on the past measurements and a model of the system. If the measurement is not consistent with this set, a fault is detected. The result demonstrates effectiveness of the method for fault detection of the benchmark wind turbine....

  17. Dosimetry in radiotherapy using a-Si EPIDs: Systems, methods, and applications focusing on 3D patient dose estimation

    Science.gov (United States)

    McCurdy, B. M. C.

    2013-06-01

An overview is provided of the use of amorphous silicon electronic portal imaging devices (EPIDs) for dosimetric purposes in radiation therapy, focusing on 3D patient dose estimation. EPIDs were originally developed to provide on-treatment radiological imaging to assist with patient setup, but there has also been a natural interest in using them as dosimeters since they use the megavoltage therapy beam to form images. The current generation of clinically available EPID technology, amorphous-silicon (a-Si) flat panel imagers, possesses many characteristics that make it much better suited to dosimetric applications than earlier EPID technologies. Features such as linearity with dose/dose rate, high spatial resolution, real-time capability, minimal optical glare, and digital operation combine with the convenience of a compact, retractable detector system directly mounted on the linear accelerator to provide a system that is well-suited to dosimetric applications. This review will discuss clinically available a-Si EPID systems, highlighting dosimetric characteristics and remaining limitations. Methods for using EPIDs in dosimetry applications will be discussed. Dosimetric applications using a-Si EPIDs to estimate three-dimensional dose in the patient during treatment will be overviewed. Clinics throughout the world are implementing increasingly complex treatments such as dynamic intensity modulated radiation therapy and volumetric modulated arc therapy, as well as specialized treatment techniques using large doses per fraction and short treatment courses (i.e. hypofractionation and stereotactic radiosurgery). These factors drive the continued strong interest in using EPIDs as dosimeters for patient treatment verification.

  18. Integration method of 3D MR spectroscopy into treatment planning system for glioblastoma IMRT dose painting with integrated simultaneous boost

    International Nuclear Information System (INIS)

    To integrate 3D MR spectroscopy imaging (MRSI) in the treatment planning system (TPS) for glioblastoma dose painting to guide simultaneous integrated boost (SIB) in intensity-modulated radiation therapy (IMRT). For sixteen glioblastoma patients, we have simulated three types of dosimetry plans, one conventional plan of 60-Gy in 3D conformational radiotherapy (3D-CRT), one 60-Gy plan in IMRT and one 72-Gy plan in SIB-IMRT. All sixteen MRSI metabolic maps were integrated into TPS, using normalization with color-space conversion and threshold-based segmentation. The fusion between the metabolic maps and the planning CT scans were assessed. Dosimetry comparisons were performed between the different plans of 60-Gy 3D-CRT, 60-Gy IMRT and 72-Gy SIB-IMRT, the last plan was targeted on MRSI abnormalities and contrast enhancement (CE). Fusion assessment was performed for 160 transformations. It resulted in maximum differences <1.00 mm for translation parameters and ≤1.15° for rotation. Dosimetry plans of 72-Gy SIB-IMRT and 60-Gy IMRT showed a significantly decreased maximum dose to the brainstem (44.00 and 44.30 vs. 57.01 Gy) and decreased high dose-volumes to normal brain (19 and 20 vs. 23% and 7 and 7 vs. 12%) compared to 60-Gy 3D-CRT (p < 0.05). Delivering standard doses to conventional target and higher doses to new target volumes characterized by MRSI and CE is now possible and does not increase dose to organs at risk. MRSI and CE abnormalities are now integrated for glioblastoma SIB-IMRT, concomitant with temozolomide, in an ongoing multi-institutional phase-III clinical trial. Our method of MR spectroscopy maps integration to TPS is robust and reliable; integration to neuronavigation systems with this method could also improve glioblastoma resection or guide biopsies

  19. Review of radionuclides released from the nuclear fuel cycle and methods of assessing dose to man

    International Nuclear Information System (INIS)

    There are two broad subject areas associated with releases of radionuclides from nuclear fuel cycle installations to the environment in which there are biological implications. One concerns interpretation of doses to man in terms of their radiological significance; the other concerns estimation of environmental transfer of radionuclides and of associated radiation doses to man. The radiation protection philosophy on which past practice regarding effluent releases of radionuclides to the environment was based is illustrated by drawing upon estimates of the associated radiation doses to man given in the 1977 report of the United Nations Scientific Committee on the Effects of Atomic Radiation. The present emphasis in radiation protection philosophy is illustrated by summarizing a review of environmental models relevant to estimation of radiation doses to population groups with reference to effluent releases of 3H, 14C, 85Kr and 129I; the author carried out the review as a contribution to a current study by an expert group set up by the Nuclear Energy Agency of OECD. Radionuclides of significance in the future may differ from those currently released to the environment because of possible developments in nuclear fuel cycles and options which may be exercised for disposal of high-level radioactive wastes, already in storage or postulated to be produced in the future. (author)

  20. Dose estimation of the radiation workers in the SK cyclotron center using dual-TLD method

    International Nuclear Information System (INIS)

    The Cyclotron Center in Shin Kong Wu Ho-Su Memorial Hospital (SK Cyclotron Center) produces the 18F-FDG compound and provides it to the Positron Emission Tomography (PET) center for diagnostic services. Work in the SK Cyclotron Center is divided into three procedures: production, dispensing, and transport of the compound. As the medical cyclotron is operated to produce the radioactive compounds, secondary radiations such as neutrons and γ-rays are also induced. To estimate the exposure of the staff working in the SK Cyclotron Center, dual-TLD (TLD-600/700) chips were used to measure the doses contributed by photons and neutrons during operation of the cyclotron, and the doses contributed by photons during dispensing and transport of the compounds. The mean Hp(10) and finger Hp(0.07) for a worker were 2.11 mSv y-1 and 96.19 mSv y-1, respectively. These were compared with the results from the regular personal chest badges and finger-ring dosimeters, which account only for photon doses. From these results, the doses contributed by the different working procedures and by the different types of radiation to workers in the SK Cyclotron Center were characterized.
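    The pairing principle behind the dual-TLD technique can be sketched numerically: TLD-600 (6LiF) responds to both photons and thermal neutrons, while TLD-700 (7LiF) responds essentially to photons only, so the neutron component follows from the difference of the paired readings. The calibration factors and readings below are illustrative assumptions, not values from this work.

```python
# Sketch of the dual-TLD pairing principle. The calibration factors
# k_gamma and k_neutron are hypothetical placeholders, not measured values.

def dual_tld_dose(r600_nC, r700_nC, k_gamma=0.05, k_neutron=0.02):
    """Return (photon_dose_mSv, neutron_dose_mSv) from paired TLD readings.

    k_gamma   -- photon calibration factor, mSv per nC (hypothetical)
    k_neutron -- neutron calibration factor applied to the net
                 TLD-600 signal, mSv per nC (hypothetical)
    """
    photon = k_gamma * r700_nC                 # TLD-700: photon-only signal
    net_neutron_signal = r600_nC - r700_nC     # excess TLD-600 response
    neutron = k_neutron * max(net_neutron_signal, 0.0)
    return photon, neutron

photon, neutron = dual_tld_dose(120.0, 80.0)
```

    In practice each factor would come from calibrating the chips in reference photon and thermal-neutron fields.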

  1. A dose optimization method for electron radiotherapy using randomized aperture beams.

    Science.gov (United States)

    Engel, Konrad; Gauer, Tobias

    2009-09-01

    The present paper describes the entire optimization process of creating a radiotherapy treatment plan for advanced electron irradiation. Special emphasis is devoted to the selection of beam incidence angles and beam energies as well as to the choice of appropriate subfields generated by a refined version of intensity segmentation and a novel random aperture approach. The algorithms have been implemented in a stand-alone programme using dose calculations from a commercial treatment planning system. For this study, the treatment planning system Pinnacle from Philips has been used and connected to the optimization programme through an ASCII interface. Dose calculations in Pinnacle were performed by Monte Carlo simulations for a remote-controlled electron multileaf collimator (MLC) from Euromechanics. As a result, treatment plans for breast cancer patients could be significantly improved when using randomly generated aperture beams. The combination of beams generated through segmentation and randomization achieved the best results in terms of target coverage and sparing of critical organs. The treatment plans could be further improved by use of a field reduction algorithm. Without a relevant loss in dose distribution quality, the total number of MLC fields and monitor units could be reduced by up to 20%. In conclusion, using randomized aperture beams is a promising new approach in radiotherapy and exhibits potential for further improvements in dose optimization through a combination of randomized electron and photon aperture beams.
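    The random-aperture idea can be illustrated with a toy least-squares sketch (not the authors' implementation; the dose-influence matrix and prescription below are synthetic): candidate apertures are drawn at random and kept only when adding them with an optimal non-negative weight lowers the objective.

```python
import numpy as np

# Toy illustration of random-aperture selection. D maps open bixels to voxel
# dose; the matrix, prescription, and counts are synthetic placeholders.
rng = np.random.default_rng(0)
n_vox, n_bixels = 50, 20
D = rng.random((n_vox, n_bixels))            # dose-influence matrix (toy)
prescription = np.full(n_vox, 10.0)          # prescribed voxel doses

dose = np.zeros(n_vox)
kept = []
for _ in range(200):
    aperture = rng.integers(0, 2, n_bixels)  # random open/closed leaf pattern
    a_dose = D @ aperture                    # dose delivered by this aperture
    residual = prescription - dose
    # optimal non-negative weight for this aperture (1-D least squares)
    w = max(0.0, float(a_dose @ residual) / float(a_dose @ a_dose + 1e-12))
    if w > 0.0 and np.linalg.norm(residual - w * a_dose) < np.linalg.norm(residual):
        dose += w * a_dose                   # keep apertures that help
        kept.append((aperture, w))
```

    A field-reduction pass, as in the paper, would then discard kept apertures whose removal barely changes the objective.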

  2. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance, examining the performance impact of optimization in the context of our methodology for CPU performance characterization based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
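    The abstract-machine idea reduces to a small illustration: characterize the machine by a cost per abstract operation, characterize the program by how many of each operation it executes, and predict runtime as their dot product. The operation names and timings below are invented for illustration, not measured data from the grant.

```python
# Minimal sketch of abstract-machine performance prediction.
# Costs (seconds per operation) and counts are illustrative placeholders.

machine = {"add": 2e-9, "mul": 4e-9, "load": 6e-9, "branch": 3e-9}   # s/op
program = {"add": 5_000_000, "mul": 1_000_000,
           "load": 2_000_000, "branch": 500_000}                     # op counts

# Predicted runtime: sum over operations of (count * per-op cost).
predicted_s = sum(machine[op] * n for op, n in program.items())
```

    Measuring the `machine` vector once per system and the `program` vector once per benchmark lets execution time be estimated for any machine/program pairing, which is the core of the reported methodology.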

  3. Benchmark Data Through The International Reactor Physics Experiment Evaluation Project (IRPHEP)

    Energy Technology Data Exchange (ETDEWEB)

    J. Blair Briggs; Dr. Enrico Sartori

    2005-09-01

    The International Reactor Physics Experiments Evaluation Project (IRPhEP) was initiated by the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency’s (NEA) Nuclear Science Committee (NSC) in June of 2002. The IRPhEP focus is on the derivation of internationally peer-reviewed benchmark models for several types of integral measurements, in addition to the critical configuration. While the benchmarks produced by the IRPhEP are of primary interest to the Reactor Physics Community, many of the benchmarks can be of significant value to the Criticality Safety and Nuclear Data Communities. Benchmarks that support the Next Generation Nuclear Plant (NGNP), for example, also support fuel manufacture, handling, transportation, and storage activities and could challenge current analytical methods. The IRPhEP is patterned after the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and is closely coordinated with the ICSBEP. This paper highlights the benchmarks currently being prepared by the IRPhEP that are also of interest to the Criticality Safety Community. The different types of measurements and associated benchmarks that can be expected in the first publication and beyond are described. The protocol for inclusion of IRPhEP benchmarks as ICSBEP benchmarks and for inclusion of ICSBEP benchmarks as IRPhEP benchmarks is detailed. The format for IRPhEP benchmark evaluations is described as an extension of the ICSBEP format. Benchmarks produced by the IRPhEP add a new dimension to criticality safety benchmarking efforts and expand the collection of available integral benchmarks for nuclear data testing. The first publication of the "International Handbook of Evaluated Reactor Physics Benchmark Experiments" is scheduled for January of 2006.

  4. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    International Nuclear Information System (INIS)

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. 
The prospective companies represent large scale shippers and have vast experience in

  5. Studies of the dose distribution for patients undergoing various examinations in x-ray diagnosis and methods optimization

    International Nuclear Information System (INIS)

    The analysis of the status of x-ray diagnosis in Ghana revealed that Ghana is in health care Category III, since there are about 4,200 people to each physician. X-ray departments have no quality management and quality control system in place for monitoring the quality of diagnostic images. Education and training in radiation protection and the cost-effective use of x-rays are needed as part of the educational programme for radiologists, radiographers, x-ray technical officers and darkroom attendants. The dose and dose distribution for adult patients undergoing chest PA, lumbar spine AP, pelvis/abdomen AP, and skull AP examinations were determined using thermoluminescence dosemeters and compared with Commission of the European Communities (CEC) guideline values. Analysis of the data shows that 86%, 58% and 50% of the radiographic rooms delivered doses to patients comparable with the CEC values for chest PA, lumbar spine AP, pelvis/abdomen AP and skull AP respectively. Radiographic departments should therefore review their radiographic procedures to bring their doses to optimum levels. Three methods were investigated for use as dose-reduction optimization options. With the establishment of administrative procedures for the control of indiscriminate requests and referral criteria for x-ray examinations, patient dose can be averted; it is estimated that about 10 man·Sv can be averted annually. Exposures can be minimized by standardizing the parameters which have a significant influence on patient dose, taking into account the screen-film system and film processing. By optimizing the technique factors, entrance surface dose and effective dose can be reduced. For the chest PA examination the reduction factors are 4 and 3 respectively. Corresponding values for lumbar spine AP, pelvis/abdomen AP and skull AP are 2 and 1.8, 1.4 and 1.4, and 2.0 and 1.8 respectively. Three local materials, Ghanaian Anum Serpentine (SGA), Ghanaian Peki-Dzake Serpentine (SGP) and Ghanaian Golokwati Serpentine (SGG

  6. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional test and maintenance (T and M) procedures, with the aim of assessing the probability of test-induced failures, the probability of failures remaining unrevealed, and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient, with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used, to obtain insight into the causes and magnitude of the variability observed in the results, to try to identify preferred human reliability assessment approaches, and to get an understanding of the current state of the art in the field, identifying the limitations that are still inherent to the different approaches.

  7. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    CASL-X-2015-1014-000. The CASL neutronics simulator MPACT is under development for neutronics and thermal-hydraulics (T-H) coupled simulation of pressurized water reactors. MPACT includes the ORIGEN API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is a challenge because of insufficient measured data. One alternative method of validation is a code-to-code comparison for benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  8. A Method of Benchmark Selection Based on the Choice Preference of Competitive Strategies

    Institute of Scientific and Technical Information of China (English)

    葛虹; 张艳霞

    2013-01-01

    This study uses a self-organizing map to identify potential benchmarks based on the similarity of input use. Super-efficiency DEA scores are used to identify the industry leaders. Different from other research, the appropriate target is finally determined based not only on the DEA efficiency scores but also on the decision-making units' choice preference among competitive strategies such as cost leadership, product differentiation and focus. An empirical study of 50 Chinese banks using 2011 annual report data shows that the proposed method is very practical for selecting competitive-strategy-oriented benchmarks for inefficient units.
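    A minimal sketch of the super-efficiency DEA scoring the abstract relies on (input-oriented CCR form, solved as a linear program): a score above 1 marks an industry leader, below 1 a dominated unit. The three toy units below are illustrative, not the paper's bank data.

```python
import numpy as np
from scipy.optimize import linprog

def super_efficiency(X, Y, k):
    """Input-oriented CCR super-efficiency of unit k.

    X: (n_units, n_inputs), Y: (n_units, n_outputs). Unit k is excluded
    from its own reference set, so efficient units can score above 1.
    """
    n, m = X.shape
    s = Y.shape[1]
    others = [j for j in range(n) if j != k]
    # decision variables: [theta, lambda_j for j != k]; minimize theta
    c = np.zeros(1 + len(others)); c[0] = 1.0
    A_ub, b_ub = [], []
    for i in range(m):   # inputs: sum_j lambda_j * x_ij <= theta * x_ik
        A_ub.append([-X[k, i]] + [X[j, i] for j in others]); b_ub.append(0.0)
    for r in range(s):   # outputs: sum_j lambda_j * y_rj >= y_rk
        A_ub.append([0.0] + [-Y[j, r] for j in others]); b_ub.append(-Y[k, r])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + len(others)))
    return float(res.fun)

X = np.array([[1.0], [1.0], [2.0]])   # one input per unit (toy data)
Y = np.array([[2.0], [1.0], [1.0]])   # one output per unit (toy data)
scores = [super_efficiency(X, Y, k) for k in range(3)]
```

    Unit 0 dominates and scores 2.0; units 1 and 2 are dominated and score below 1, which is the distinction the selection method exploits.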

  9. Recommended environmental dose calculation methods and Hanford-specific parameters. Revision 2

    Energy Technology Data Exchange (ETDEWEB)

    Schreckhise, R.G.; Rhoads, K.; Napier, B.A.; Ramsdell, J.V. [Pacific Northwest Lab., Richland, WA (United States); Davis, J.S. [Westinghouse Hanford Co., Richland, WA (United States)

    1993-03-01

    This document was developed to support the Hanford Environmental Dose Overview Panel (HEDOP). The Panel is responsible for reviewing all assessments of potential doses received by humans and other biota resulting from the actual or possible environmental releases of radioactive and other hazardous materials from facilities and/or operations belonging to the US Department of Energy on the Hanford Site in south-central Washington. This document serves as a guide to be used for developing estimates of potential radiation doses, or other measures of risk or health impacts, to people and other biota in the environs on and around the Hanford Site. It provides information to develop technically sound estimates of exposure (i.e., potential or actual) to humans or other biotic receptors that could result from the environmental transport of potentially harmful materials that have been, or could be, released from Hanford operations or facilities. Parameter values and information that are specific to the Hanford environs, as well as other supporting material, are included in this document.

  10. Determination of dose distributions for clinical linear accelerators using Monte Carlo method in water phantom

    International Nuclear Information System (INIS)

    Different codes have been used for Monte Carlo calculations in radiation therapy. In this study, a new Monte Carlo Simulation Program (MCSP) was developed to study the effects of the physical parameters of photons emitted from a Siemens Primus clinical linear accelerator (LINAC) on the dose distribution in water. MCSP was written to model the interactions of photons with matter, taking into account mainly two interactions: Compton (incoherent) scattering and the photoelectric effect. The photons arriving at the water phantom surface from a point source are bremsstrahlung photons, whose energy distribution must be known in order to track them. Bremsstrahlung photons with a maximum energy of 6 MeV (6 MV photon mode) were considered; in the 6 MV photon mode, the photon energies were sampled from Mohan's experimental energy spectrum (Mohan et al 1985). In order to investigate the performance and accuracy of the simulation, measured and calculated (MCSP) percentage depth dose curves and dose profiles were compared. The Monte Carlo results showed good agreement with experimental measurements.
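    A drastically simplified sketch of the kind of photon transport such a program performs: free path lengths are sampled from an exponential law, and each interaction is classified as photoelectric (full absorption) or Compton (partial energy loss). The attenuation coefficients and the Compton energy-loss model are crude placeholders, not the MCSP physics or Mohan's spectrum.

```python
import math
import random

# Toy photon-transport sketch along the beam axis in water, illustrating
# the two interactions named in the abstract. Coefficients are illustrative.
random.seed(1)
MU_COMPTON, MU_PHOTO = 0.048, 0.002   # 1/cm, placeholder values
MU_TOTAL = MU_COMPTON + MU_PHOTO

def track_photon(energy_mev=6.0, depth_cm=0.0):
    """Follow one photon; return a list of (depth_cm, deposited_MeV)."""
    deposits = []
    while energy_mev > 0.05:                      # low-energy cutoff
        # sample free path from exponential attenuation
        depth_cm += -math.log(1.0 - random.random()) / MU_TOTAL
        if random.random() < MU_PHOTO / MU_TOTAL:
            deposits.append((depth_cm, energy_mev))   # photoelectric: absorb
            break
        lost = energy_mev * random.uniform(0.1, 0.8)  # crude Compton loss
        deposits.append((depth_cm, lost))
        energy_mev -= lost
    return deposits

histories = [track_photon() for _ in range(1000)]
```

    Binning the deposits by depth would yield a toy depth-dose curve; a real code additionally tracks scattering angles, secondary electrons, and a full energy spectrum.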

  11. For Clinical Linear Accelerators, Obtaining of Dose Distributions in Water Phantoms by Using Monte Carlo Methods

    International Nuclear Information System (INIS)

    Different codes have been used for Monte Carlo calculations in radiation therapy. In this study, a new Monte Carlo Simulation Program (MCSP) was developed to study the effects of the physical parameters of photons emitted from a Siemens Primus clinical linear accelerator (LINAC) on the dose distribution in water. MCSP was written to model the interactions of photons with matter, taking into account mainly two interactions: Compton (incoherent) scattering and the photoelectric effect. The photons arriving at the water phantom surface from a point source are bremsstrahlung photons, whose energy distribution must be known in order to track them. Bremsstrahlung photons with a maximum energy of 6 MeV (6 MV photon mode) were considered; in the 6 MV photon mode, the photon energies were sampled from Mohan's experimental energy spectrum (Mohan et al 1985). In order to investigate the performance and accuracy of the simulation, measured and calculated (MCSP) percentage depth dose curves and dose profiles were compared. The Monte Carlo results showed good agreement with experimental measurements.

  12. Neutron and photon doses in high energy radiotherapy facilities and evaluation of shielding performance by Monte Carlo method

    International Nuclear Information System (INIS)

    Highlights: → The MCNP5 code has been used to model a radiotherapy room of an 18 MV linear accelerator. → The neutron and secondary gamma ray dose equivalents were evaluated at various points inside the treatment room and along the maze. → To reduce the neutron and gamma ray doses, we have also investigated the radiotherapy room shielding performance. → The use of paraffin wax containing boron carbide indicates much better shielding effects. - Abstract: Medical accelerators operating above 10 MV are a source of undesirable neutron radiation which contaminates the therapeutic photon beam. These photoneutrons can also generate secondary gamma rays, which increase the undesirable dose to the patient body and to personnel and the general public. In this study, the Monte Carlo N-Particle MCNP5 code has been used to model the radiotherapy room of a medical linear accelerator operating at 18 MV and to calculate the neutron and secondary gamma ray energy spectra and the dose equivalents at various points inside the treatment room and along the maze. To validate our Monte Carlo simulation we compared our results with those evaluated by the recommended analytical methods of IAEA Report No. 47, and with experimental and simulated values published in the literature. After validation, the Monte Carlo simulation was used to evaluate the shielding performance of the radiotherapy room. The obtained results showed that the use of paraffin wax containing boron carbide, in the lining of the radiotherapy room walls, is effective enough to reduce both neutron and gamma ray doses inside the treatment room and at the maze entrance. Such an evaluation cannot be performed by the analytical methods, since room material and wall surface lining are not taken into consideration.
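    For comparison with such Monte Carlo room models, a barrier-transmission estimate in the style of the analytical tenth-value-layer (TVL) methods can be written in a few lines; the thickness, TVL, and unshielded dose below are illustrative assumptions, not values from the study.

```python
# Tenth-value-layer estimate of barrier transmission: each TVL of shielding
# reduces the dose tenfold. All numeric inputs here are illustrative.

def transmitted_dose(dose_unshielded, thickness_cm, tvl_cm):
    """Dose behind a barrier of the given thickness."""
    return dose_unshielded * 10.0 ** (-thickness_cm / tvl_cm)

# e.g. a 40 cm barrier with a 10 cm TVL gives 4 TVLs -> attenuation 1e-4
d = transmitted_dose(100.0, 40.0, 10.0)
```

    The abstract's point is precisely that such analytical estimates cannot capture wall-lining materials like borated paraffin, which is why the Monte Carlo room model is needed.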

  13. Effects of high-dose gamma irradiation on tensile properties of human cortical bone: Comparison of different radioprotective treatment methods.

    Science.gov (United States)

    Allaveisi, Farzaneh; Mirzaei, Majid

    2016-08-01

    There is growing interest in radioprotective methods that can reduce the damaging effects of ionizing radiation on sterilized bone allografts. The aim of this study was to investigate the effects of 50 kGy (single-dose and fractionated) gamma irradiation, in the presence and absence of the l-Cysteine (LC) free-radical scavenger, on the tensile properties of human femoral cortical bone. A total of 48 standard tensile test specimens was prepared from the diaphyses of femurs of three male cadavers (age: 52, 52, and 54 years). The specimens were assigned to six groups (n=8) according to different irradiation schemes: control (non-irradiated), LC-treated control, a single dose of 50 kGy (sole irradiation), a single dose of 50 kGy in the presence of LC, 10 fractions of 5 kGy (sole irradiation), and 10 fractions of 5 kGy in the presence of LC. Uniaxial tensile tests were carried out to evaluate the variations in tensile properties of the specimens. Fractographic analysis was performed to examine the microstructural features of the fracture surfaces. The results of multivariate analysis showed that fractionation of the radiation dose, as well as the LC treatment of the 50 kGy irradiated specimens, significantly reduced the radiation-induced impairment of the tensile properties of the specimens (P < 0.05). In summary, this study showed that the detrimental effects of gamma sterilization on the tensile properties of human cortical bone can be substantially reduced by free-radical scavenger treatment, dose fractionation, and the combination of these two methods. PMID:27124804

  14. Towards dose reduction for dual-energy CT: A non-local image improvement method and its application

    International Nuclear Information System (INIS)

    Dual-energy CT (DECT) has better material discrimination capability than single-energy CT. In this paper, we propose a low-dose DECT scanning strategy with double scans under different energy spectra. In this strategy a low-energy scan is performed with a complete number of views, while the high-energy scan uses much sparser views to reduce radiation dose. Under such conditions, the high-energy reconstruction suffers from severe streak artifacts due to under-sampling. To address this problem a novel non-local image restoration method is proposed. Because the low- and high-energy scans are performed on the same object, the reconstructed images should have identical object structures. Therefore the low-energy reconstruction, which comes from the complete scan, may serve as a reference image to help improve the quality of the high-energy reconstruction. The method is implemented in the following steps. First, the structure information is obtained by a non-local pixel similarity measurement on the low-energy CT image; second, after a registration between the high- and low-energy reconstructions, the high-energy image is restored by a normalized weighted average using the calculated similarity relationship. Compared with previous methods, the new method achieves better image quality in both structure preservation and artifact reduction. Moreover, the computation is much cheaper than for iterative reconstruction methods, which makes the method of practical value. Numerical and pre-clinical experiments have been performed to illustrate the effectiveness of the proposed method. With the novel DECT scanning configuration and non-local image restoration method, the total dose is significantly reduced while maintaining high reconstruction quality.
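    The reference-guided restoration step can be sketched as a minimal single-scale version, assuming registration has already been performed: similarity weights are computed on the full-view low-energy image and then used to average the under-sampled high-energy image. The window size and bandwidth below are illustrative choices, not the paper's parameters.

```python
import numpy as np

# Minimal sketch of reference-guided non-local averaging. Weights come from
# pixel similarity in the clean low-energy image and are applied to the
# artifact-ridden high-energy image. Parameters are illustrative.

def nonlocal_restore(high, low, search=3, h=0.1):
    pad = search
    low_p = np.pad(low, pad, mode="reflect")
    high_p = np.pad(high, pad, mode="reflect")
    out = np.zeros_like(high, dtype=float)
    rows, cols = high.shape
    for i in range(rows):
        for j in range(cols):
            ic, jc = i + pad, j + pad
            nb_low = low_p[ic-search:ic+search+1, jc-search:jc+search+1]
            nb_high = high_p[ic-search:ic+search+1, jc-search:jc+search+1]
            # Gaussian similarity weights measured on the reference image
            w = np.exp(-((nb_low - low_p[ic, jc]) ** 2) / (h ** 2))
            out[i, j] = (w * nb_high).sum() / w.sum()
    return out
```

    Production versions would compare patches rather than single pixels and restrict averaging to a search window around each voxel, but the weighting principle is the same.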

  15. Monte Carlo method studies and a comparative between GEANT4 tool kit and MCNPX to depth dose in medical physics

    International Nuclear Information System (INIS)

    Knowing the depth dose along the central axis is fundamental for the accurate planning of medical treatment systems involving ionizing radiation. With the evolution of informatics, it is possible to use various computational tools, such as GEANT4 and MCNPX, which apply the Monte Carlo method to simulate such situations. This paper presents a comparison between the two tools for this type of application.

  16. Comparing the effects of low-dose contraceptive pills to control dysfunctional uterine bleeding by oral and vaginal methods

    OpenAIRE

    Mehrabian, Ferdous; Abbassi, Fariba

    2013-01-01

    Background and Objective: Contraceptive pills are generally taken orally and can cause side effects such as nausea, vomiting and hypertension. The vaginal use of these pills can reduce such complications. Our objective was to compare the efficacy and side effects of low-dose contraceptive pills administered by the oral and vaginal routes in the management of dysfunctional uterine bleeding (DUB). Methods: This comparative observational study was conducted at the Beheshti and Alzahra (SA) teaching hospitals, affilia...

  17. Functional Data Analysis in NTCP Modeling: A New Method to Explore the Radiation Dose-Volume Effects

    International Nuclear Information System (INIS)

    Purpose/Objective(s): To describe a novel method to explore radiation dose-volume effects. Functional data analysis is used to investigate the information contained in differential dose-volume histograms. The method is applied to the normal tissue complication probability modeling of rectal bleeding (RB) for patients irradiated in the prostatic bed by 3-dimensional conformal radiation therapy. Methods and Materials: Kernel density estimation was used to estimate the individual probability density functions from each of the 141 rectum differential dose-volume histograms. Functional principal component analysis was performed on the estimated probability density functions to explore the variation modes in the dose distribution. The functional principal components were then tested for association with RB using logistic regression adapted to functional covariates (FLR). For comparison, 3 other normal tissue complication probability models were considered: the Lyman-Kutcher-Burman model, logistic model based on standard dosimetric parameters (LM), and logistic model based on multivariate principal component analysis (PCA). Results: The incidence rate of grade ≥2 RB was 14%. V65Gy was the most predictive factor for the LM (P=.058). The best fit for the Lyman-Kutcher-Burman model was obtained with n=0.12, m = 0.17, and TD50 = 72.6 Gy. In PCA and FLR, the components that describe the interdependence between the relative volumes exposed at intermediate and high doses were the most correlated to the complication. The FLR parameter function leads to a better understanding of the volume effect by including the treatment specificity in the delivered mechanistic information. For RB grade ≥2, patients with advanced age are significantly at risk (odds ratio, 1.123; 95% confidence interval, 1.03-1.22), and the fits of the LM, PCA, and functional principal component analysis models are significantly improved by including this clinical factor. Conclusion: Functional data
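    The first two stages of the pipeline the abstract describes (kernel density estimation of each patient's dose distribution, followed by principal component analysis across the estimated densities) can be sketched on synthetic data; the dose samples, grid, and bandwidth below are illustrative, and the functional logistic-regression stage is omitted.

```python
import numpy as np

# Sketch of KDE + functional PCA on per-patient dose distributions.
# Patients, dose samples, and the bandwidth are synthetic illustrations.
rng = np.random.default_rng(42)
grid = np.linspace(0.0, 80.0, 200)                 # dose axis, Gy
patients = [rng.normal(loc=rng.uniform(40, 60), scale=8.0, size=500)
            for _ in range(30)]                    # per-voxel doses (toy)

def kde(samples, grid, bw=2.0):
    """Gaussian kernel density estimate of a dose distribution on `grid`."""
    diffs = (grid[None, :] - samples[:, None]) / bw
    return np.exp(-0.5 * diffs**2).sum(axis=0) / (len(samples) * bw * np.sqrt(2 * np.pi))

densities = np.array([kde(p, grid) for p in patients])
# principal modes of variation across the estimated densities (via SVD)
centered = densities - densities.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:2].T      # first two functional PC scores per patient
```

    In the paper's setting, such scores would then enter a logistic regression with functional covariates to test association with rectal bleeding.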

  18. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-12-01

    This paper reviews the role of human resource management (HRM) which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  19. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  20. Work management practices that reduce dose and improve efficiency

    International Nuclear Information System (INIS)

    Work management practices at nuclear power plants can dramatically affect the outcome of annual site dose goals and outage costs. This presentation discusses global work management practices that contribute to dose reduction including work philosophy, work selection, work planning, work scheduling, worker training, work implementation and worker feedback. The presentation is based on a two-year international effort (sponsored by NEA/IAEA ISOE) to study effective work practices that reduce dose. Experts in this area believe that effective work selection and planning practices can substantially reduce occupational dose during refueling outages. For example, some plants represented in the expert group complete refueling outages in 12-18 days (Finland) with doses below 0.90 person-Sv. Other plants typically have 50-75 day outages with substantially higher site doses. The fundamental reasons for longer outages and higher occupational doses are examined. Good work management principles that have a proven track record of reducing occupational dose are summarized. Practical methods to reduce work duration and dose are explained. For example, scheduling at nuclear power plants can be improved by not only sequencing jobs on a time line but also including zone and resource-based considerations to avoid zone congestion and manpower delays. An ongoing, global, benchmarking effort is described which provides current duration and dose information for repetitive jobs to participating utilities world-wide. (author)

  1. Dose estimation by biological methods; Estimacion de dosis por metodos biologicos

    Energy Technology Data Exchange (ETDEWEB)

    Guerrero C, C.; David C, L.; Serment G, J.; Brena V, M. [Instituto Nacional de Investigaciones Nucleares, A.P. 18-1027, 11801 Mexico D.F. (Mexico)

    1997-07-01

    Humans are exposed to artificial radiation sources mainly in two ways: the first concerns occupationally exposed personnel (POE, from the Spanish acronym), and the second concerns patients who require radiological treatment. A third, less common route is accidents. In all these situations it is very important to estimate the absorbed dose. Classical biological dosimetry is based on dicentric analysis. The present work is part of research to validate the fluorescence in situ hybridization (FISH) technique, which allows chromosome aberrations to be analysed. (Author)

  2. Randomized benchmarking in measurement-based quantum computing

    Science.gov (United States)

    Alexander, Rafael N.; Turner, Peter S.; Bartlett, Stephen D.

    2016-09-01

    Randomized benchmarking is routinely used as an efficient method for characterizing the performance of sets of elementary logic gates in small quantum devices. In the measurement-based model of quantum computation, logic gates are implemented via single-site measurements on a fixed universal resource state. Here we adapt the randomized benchmarking protocol for a single qubit to a linear cluster state computation, which provides partial, yet efficient characterization of the noise associated with the target gate set. Applying randomized benchmarking to measurement-based quantum computation exhibits an interesting interplay between the inherent randomness associated with logic gates in the measurement-based model and the random gate sequences used in benchmarking. We consider two different approaches: the first makes use of the standard single-qubit Clifford group, while the second uses recently introduced (non-Clifford) measurement-based 2-designs, which harness inherent randomness to implement gate sequences.

  3. BN-600 full MOX core benchmark analysis

    International Nuclear Information System (INIS)

    As a follow-up of the BN-600 hybrid core benchmark, a full MOX core benchmark was performed within the framework of the IAEA co-ordinated research project. Discrepancies between the values of main reactivity coefficients obtained by the participants for the BN-600 full MOX core benchmark appear to be larger than those in the previous hybrid core benchmarks on traditional core configurations. This arises due to uncertainties in the proper modelling of the axial sodium plenum above the core. It was recognized that the sodium density coefficient strongly depends on the core model configuration of interest (hybrid core vs. fully MOX fuelled core with sodium plenum above the core) in conjunction with the calculation method (diffusion vs. transport theory). The effects of the discrepancies revealed between the participants results on the ULOF and UTOP transient behaviours of the BN-600 full MOX core were investigated in simplified transient analyses. Generally the diffusion approximation predicts more benign consequences for the ULOF accident but more hazardous ones for the UTOP accident when compared with the transport theory results. The heterogeneity effect does not have any significant effect on the simulation of the transient. The comparison of the transient analyses results concluded that the fuel Doppler coefficient and the sodium density coefficient are the two most important coefficients in understanding the ULOF transient behaviour. In particular, the uncertainty in evaluating the sodium density coefficient distribution has the largest impact on the description of reactor dynamics. This is because the maximum sodium temperature rise takes place at the top of the core and in the sodium plenum.

  4. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advances. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards... the 'inside' costs of the sub-component, technical specifications of the product, opportunistic behavior from the suppliers and cognitive limitation. These are all aspects that easily can dismantle the market mechanism and make it counter-productive in the organization. Thus, by directing more attention...

  5. A numerical method to optimise the spatial dose distribution in carbon ion radiotherapy planning.

    Science.gov (United States)

    Grzanka, L; Korcyl, M; Olko, P; Waligorski, M P R

    2015-09-01

    The authors describe a numerical algorithm to optimise the entrance spectra of a composition of pristine carbon ion beams which delivers a pre-assumed dose-depth profile over a given depth range within the spread-out Bragg peak. The physical beam transport model is based on tabularised data generated using the SHIELD-HIT10A Monte-Carlo code. Depth-dose profile optimisation is achieved by minimising the deviation from the pre-assumed profile evaluated on a regular grid of points over a given depth range. This multi-dimensional minimisation problem is solved using the L-BFGS-B algorithm, with parallel processing support. Another multi-dimensional interpolation algorithm is used to calculate at given beam depths the cumulative energy-fluence spectra for primary and secondary ions in the optimised beam composition. Knowledge of such energy-fluence spectra for each ion is required by the mixed-field calculation of Katz's cellular Track Structure Theory (TST) that predicts the resulting depth-survival profile. The optimisation algorithm and the TST mixed-field calculation are essential tools in the development of a one-dimensional kernel of a carbon ion therapy planning system. All codes used in the work are generally accessible within the libamtrack open source platform.
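    The optimisation step described above can be sketched numerically. The toy example below substitutes made-up Gaussian "pristine Bragg curves" for the SHIELD-HIT10A tables and minimises the squared deviation from a flat target profile over the SOBP region with the L-BFGS-B algorithm, as the abstract describes; all names and numbers are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-ins for tabulated pristine-beam depth-dose curves (Gaussian peaks
# on a small entrance plateau); the real data come from SHIELD-HIT10A tables.
depths = np.linspace(0.0, 15.0, 150)                  # depth grid, cm
peaks = np.linspace(8.0, 12.0, 20)                    # pristine-beam peak depths
curves = np.array([0.3 + np.exp(-(depths - p) ** 2 / 0.1) for p in peaks])

# Pre-assumed profile: flat unit dose across the spread-out Bragg peak region.
sobp = (depths >= 8.0) & (depths <= 12.0)

def objective(w):
    # Squared deviation of the composite depth-dose from the target profile,
    # evaluated on a regular grid of points inside the SOBP depth range.
    return np.sum((w @ curves - 1.0)[sobp] ** 2)

res = minimize(objective, x0=np.full(len(peaks), 0.05),
               method="L-BFGS-B", bounds=[(0.0, None)] * len(peaks))
weights = res.x                                       # optimised beam composition
```

The non-negativity bounds stand in for the physical constraint that pristine-beam fluences cannot be negative; the real planning kernel would also propagate the optimised fluence spectra into the TST survival calculation.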

  6. Fluoroscopy without the grid: a method of reducing the radiation dose

    International Nuclear Information System (INIS)

    The anti-scatter grid has been removed from the fluoroscopic set during the course of over 80 contrast examinations performed routinely during the ordinary workload of a busy paediatric radiology department. This manoeuvre approximately halves the radiation dose to the patient during both fluoroscopy and radiography. Experience suggests that the degree of loss of contrast consequent on the abandonment of the grid is diagnostically acceptable during many examinations performed on children (of all ages), when balanced against the lower radiation dose received. In addition, an assessment has been made of the contrast improvement factor of the grids in two fluoroscopic sets in common use, using tissue-equivalent phantoms of various thicknesses. Although the contrast was significantly improved by the use of the grid, to a degree dependent on various factors, the relevance of this improvement in clinical radiology depends on exactly what information is being sought. It is recommended that radiologists should use the grid with discretion when performing fluoroscopic examinations on children and that the apparatus for such examinations should have the capability for easy removal and reintroduction of the grid. (author)

  7. The fixed-point iteration method for IMRT optimization with truncated dose deposition coefficient matrix

    CERN Document Server

    Tian, Zhen; Jia, Xun; Jiang, Steve B

    2013-01-01

    In the treatment plan optimization for intensity modulated radiation therapy (IMRT), the dose-deposition coefficient (DDC) matrix is often pre-computed to parameterize the dose contribution to each voxel in the volume of interest from each beamlet of unit intensity. However, due to the limitation of computer memory and the requirement on computational efficiency, in practice matrix elements of small values are usually truncated, which inevitably compromises the quality of the resulting plan. A fixed-point iteration scheme has been applied in IMRT optimization to solve this problem, and has been reported to be effective and efficient based on the observations of the numerical experiments. In this paper, we aim to point out the mathematics behind this scheme and to answer the following three questions: 1) does the fixed-point iteration algorithm converge? 2) when it converges, is the fixed-point solution the same as the original solution obtained with the complete DDC matrix? 3) if not the same, wh...
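    Although the paper's exact formulation is not reproduced in this abstract, the general idea of such a scheme can be sketched: optimise with the truncated matrix, fold the dose contributed by the truncated elements back in as a fixed correction term, and repeat until the intensities stop changing. Everything below (matrix sizes, truncation threshold, the plain least-squares stand-in for the real IMRT objective) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_beam = 50, 10
D = rng.random((n_vox, n_beam))            # "complete" DDC matrix
Dt = np.where(D > 0.05, D, 0.0)            # truncated matrix: small elements dropped
d_target = np.full(n_vox, 1.0)             # prescribed dose per voxel

x = np.zeros(n_beam)
for _ in range(200):
    c = (D - Dt) @ x                       # dose contributed by truncated elements
    # Re-optimise beamlet intensities against the correction-adjusted target
    # (non-negative least squares stands in for the real IMRT objective).
    x_new, *_ = np.linalg.lstsq(Dt, d_target - c, rcond=None)
    x_new = np.clip(x_new, 0.0, None)
    if np.linalg.norm(x_new - x) < 1e-10:  # fixed point reached
        x = x_new
        break
    x = x_new
```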

  8. A simple method to retrospectively estimate patient dose-area product for chest tomosynthesis examinations performed using VolumeRAD

    Energy Technology Data Exchange (ETDEWEB)

    Båth, Magnus, E-mail: magnus.bath@vgregion.se; Svalkvist, Angelica [Department of Radiation Physics, Institute of Clinical Sciences, The Sahlgrenska Academy at University of Gothenburg, Gothenburg SE-413 45, Sweden and Department of Medical Physics and Biomedical Engineering, Sahlgrenska University Hospital, Gothenburg SE-413 45 (Sweden); Söderman, Christina [Department of Radiation Physics, Institute of Clinical Sciences, The Sahlgrenska Academy at University of Gothenburg, Gothenburg SE-413 45 (Sweden)

    2014-10-15

    Purpose: The purpose of the present work was to develop and validate a method of retrospectively estimating the dose-area product (DAP) of a chest tomosynthesis examination performed using the VolumeRAD system (GE Healthcare, Chalfont St. Giles, UK) from digital imaging and communications in medicine (DICOM) data available in the scout image. Methods: DICOM data were retrieved for 20 patients undergoing chest tomosynthesis using VolumeRAD. Using information about how the exposure parameters for the tomosynthesis examination are determined by the scout image, a correction factor for the adjustment in field size with projection angle was determined. The correction factor was used to estimate the DAP for 20 additional chest tomosynthesis examinations from DICOM data available in the scout images, which was compared with the actual DAP registered for the projection radiographs acquired during the tomosynthesis examination. Results: A field size correction factor of 0.935 was determined. Applying the developed method using this factor, the average difference between the estimated DAP and the actual DAP was 0.2%, with a standard deviation of 0.8%. However, the difference was not normally distributed and the maximum error was only 1.0%. The validity and reliability of the presented method were thus very high. Conclusions: A method to estimate the DAP of a chest tomosynthesis examination performed using the VolumeRAD system from DICOM data in the scout image was developed and validated. As the scout image normally is the only image connected to the tomosynthesis examination stored in the picture archiving and communication system (PACS) containing dose data, the method may be of value for retrospectively estimating patient dose in clinical use of chest tomosynthesis.
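    As a rough illustration of the arithmetic (the paper's exact formula is not given in this abstract), the estimate amounts to scaling scout-derived per-projection DAP values by the reported field-size correction factor and summing over the projection radiographs; the projection count and DAP values below are hypothetical.

```python
FIELD_SIZE_CORRECTION = 0.935   # correction factor reported in the abstract

def estimate_tomosynthesis_dap(scout_dap_per_projection):
    """Estimated total DAP of the tomosynthesis sweep (same units as input)."""
    return sum(dap * FIELD_SIZE_CORRECTION for dap in scout_dap_per_projection)

# Hypothetical example: 60 projections, each with a scout-derived DAP value.
est = estimate_tomosynthesis_dap([0.02] * 60)
```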

  9. Perceptual hashing algorithms benchmark suite

    Institute of Scientific and Technical Information of China (English)

    Zhang Hui; Schmucker Martin; Niu Xiamu

    2007-01-01

    Numerous perceptual hashing algorithms have been developed for identification and verification of multimedia objects in recent years. Many application schemes have been adopted for various commercial objects. Developers and users are looking for a benchmark tool to compare and evaluate their current algorithms or technologies. In this paper, a novel benchmark platform is presented. PHABS provides an open framework and lets its users define their own test strategy, perform tests, collect and analyze test data. With PHABS, various performance parameters of algorithms can be tested, and different algorithms or algorithms with different parameters can be evaluated and compared easily.

  10. Summary of results for the uranium benchmark problem of the ANS Ad Hoc Committee on Reactor Physics Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Parish, T.A. [Texas A and M Univ., College Station, TX (United States). Nuclear Engineering Dept.; Mosteller, R.D. [Los Alamos National Lab., NM (United States); Diamond, D.J. [Brookhaven National Lab., Upton, NY (United States); Gehin, J.C. [Oak Ridge National Lab., TN (United States)

    1998-12-31

    This paper presents a summary of the results obtained by all of the contributors to the Uranium Benchmark Problem of the ANS Ad hoc Committee on Reactor Physics Benchmarks. The benchmark problem was based on critical experiments which mocked-up lattices typical of PWRs. Three separate cases constituted the benchmark problem. These included a uniform lattice, an assembly-type lattice with water holes and an assembly-type lattice with pyrex rods. Calculated results were obtained from eighteen separate organizations from all over the world. Some organizations submitted more than one set of results based on different calculational methods and cross section data. Many of the most widely used assembly physics and core analysis computer codes and neutron cross section data libraries were applied by the contributors.

  11. Hidden drivers of low-dose pharmaceutical pollutant mixtures revealed by the novel GSA-QHTS screening method.

    Science.gov (United States)

    Rodea-Palomares, Ismael; Gonzalez-Pleiter, Miguel; Gonzalo, Soledad; Rosal, Roberto; Leganes, Francisco; Sabater, Sergi; Casellas, Maria; Muñoz-Carpena, Rafael; Fernández-Piñas, Francisca

    2016-09-01

    The ecological impacts of emerging pollutants such as pharmaceuticals are not well understood. The lack of experimental approaches for the identification of pollutant effects in realistic settings (that is, low doses, complex mixtures, and variable environmental conditions) supports the widespread perception that these effects are often unpredictable. To address this, we developed a novel screening method (GSA-QHTS) that couples the computational power of global sensitivity analysis (GSA) with the experimental efficiency of quantitative high-throughput screening (QHTS). We present a case study where GSA-QHTS allowed for the identification of the main pharmaceutical pollutants (and their interactions), driving biological effects of low-dose complex mixtures at the microbial population level. The QHTS experiments involved the integrated analysis of nearly 2700 observations from an array of 180 unique low-dose mixtures, representing the most complex and data-rich experimental mixture effect assessment of main pharmaceutical pollutants to date. An ecological scaling-up experiment confirmed that this subset of pollutants also affects typical freshwater microbial community assemblages. Contrary to our expectations and challenging established scientific opinion, the bioactivity of the mixtures was not predicted by the null mixture models, and the main drivers that were identified by GSA-QHTS were overlooked by the current effect assessment scheme. Our results suggest that current chemical effect assessment methods overlook a substantial number of ecologically dangerous chemical pollutants and introduce a new operational framework for their systematic identification. PMID:27617294
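    The abstract does not specify which GSA variant underlies GSA-QHTS, but the screening idea can be illustrated with Morris-style elementary effects on a toy mixture-response model; the model, factor count and number of repetitions here are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

def response(x):
    # Toy mixture-response model: factor 0 dominates, factor 1 is nearly inert,
    # and factors 0 and 2 interact -- purely illustrative.
    return 2.0 * x[0] + 0.1 * x[1] + x[0] * x[2]

k, r, delta = 3, 50, 0.1                  # factors, repetitions, perturbation
effects = np.zeros((r, k))
for i in range(r):
    x = rng.random(k)
    base = response(x)
    for j in range(k):
        xp = x.copy()
        xp[j] += delta                    # perturb one factor at a time
        effects[i, j] = (response(xp) - base) / delta

mu_star = np.abs(effects).mean(axis=0)    # mean |elementary effect| per factor
ranking = np.argsort(mu_star)[::-1]       # most influential factor first
```

A screening statistic like mu_star is what lets a GSA coupled to high-throughput dose-response data flag the few mixture components (and interactions) that actually drive the observed effects.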


  13. Mechanisms of Fatal Cardiotoxicity following High-Dose Cyclophosphamide Therapy and a Method for Its Prevention.

    Directory of Open Access Journals (Sweden)

    Takuro Nishikawa

    Full Text Available Observed only after administration of high doses, cardiotoxicity is the dose-limiting effect of cyclophosphamide (CY). We investigated the poorly understood cardiotoxic mechanisms of high-dose CY. A rat cardiac myocardial cell line, H9c2, was exposed to CY metabolized by the S9 fraction of rat liver homogenate mixed with co-factors (CYS9). Cytotoxicity was then evaluated by 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) assay, lactate dehydrogenase release, production of reactive oxygen species (ROS), and incidence of apoptosis. We also investigated how the myocardial cellular effects of CYS9 were modified by the acrolein scavenger N-acetylcysteine (NAC), the antioxidant isorhamnetin (ISO), and the CYP inhibitor β-ionone (BIO). Quantifying CY and CY metabolites by means of liquid chromatography coupled with electrospray tandem mass spectrometry, we assayed culture supernatants of CYS9 with and without candidate cardioprotectant agents. Assay results for MTT showed that treatment with CY (125-500 μM) did not induce cytotoxicity. CYS9, however, exhibited myocardial cytotoxicity when the CY concentration was 250 μM or more. After 250 μM of CY was metabolized in S9 mix for 2 h, the concentration of CY was 73.6 ± 8.0 μM, 4-hydroxy-cyclophosphamide (HCY) 17.6 ± 4.3 μM, o-carboxyethyl-phosphoramide (CEPM) 26.6 ± 5.3 μM, and acrolein 26.7 ± 2.5 μM. Inhibition of CYS9-induced cytotoxicity occurred with NAC, ISO, and BIO. When treated with ISO or BIO, metabolism of CY was significantly inhibited. Pre-treatment with NAC, however, did not inhibit the metabolism of CY: compared to control samples, we observed no difference in HCY, a significant increase of CEPM, and a significant decrease of acrolein. Furthermore, NAC pre-treatment did not affect intracellular amounts of ROS produced by CYS9. Since acrolein seems to be heavily implicated in the onset of cardiotoxicity, any competitive metabolic processing of CY that reduces its transformation to acrolein

  14. Novel iterative reconstruction method with optimal dose usage for partially redundant CT-acquisition

    Science.gov (United States)

    Bruder, H.; Raupach, R.; Sunnegardh, J.; Allmendinger, T.; Klotz, E.; Stierstorfer, K.; Flohr, T.

    2015-11-01

    In CT imaging, a variety of applications exist which are strongly SNR limited. However, in some cases redundant data of the same body region provide additional quanta. Examples: in dual energy CT, the spatial resolution has to be compromised to provide good SNR for material decomposition. However, the respective spectral dataset of the same body region provides additional quanta which might be utilized to improve SNR of each spectral component. Perfusion CT is a high dose application, and dose reduction is highly desirable. However, a meaningful evaluation of perfusion parameters might be impaired by noisy time frames. On the other hand, the SNR of the average of all time frames is extremely high. In redundant CT acquisitions, multiple image datasets can be reconstructed and averaged to composite image data. These composite image data, however, might be compromised with respect to contrast resolution and/or spatial resolution and/or temporal resolution. These observations bring us to the idea of transferring high SNR of composite image data to low SNR ‘source’ image data, while maintaining their resolution. It has been shown that the noise characteristics of CT image data can be improved by iterative reconstruction (Popescu et al 2012 Book of Abstracts, 2nd CT Meeting (Salt Lake City, UT) p 148). In case of data dependent Gaussian noise it can be modelled with image-based iterative reconstruction at least in an approximate manner (Bruder et al 2011 Proc. SPIE 7961 79610J). We present a generalized update equation in image space, consisting of a linear combination of the previous update, a correction term which is constrained by the source image data, and a regularization prior, which is initialized by the composite image data. This iterative reconstruction approach we call bimodal reconstruction (BMR). Based on simulation data it is shown that BMR can improve low contrast detectability, substantially reduces the noise power and has the potential to recover
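    The update equation itself is not reproduced in this abstract, but its stated ingredients (the previous estimate, a correction term constrained by the low-SNR source data, and a regularization prior initialized by the high-SNR composite) can be mimicked in a minimal image-space iteration; the weights and images below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.zeros((32, 32))
truth[8:24, 8:24] = 1.0
source = truth + rng.normal(0.0, 0.5, truth.shape)      # low-SNR source image
composite = truth + rng.normal(0.0, 0.05, truth.shape)  # high-SNR composite image

x = composite.copy()              # regularization prior initialized by composite
alpha, beta = 0.5, 0.3            # assumed weights for the two pull terms
for _ in range(50):
    # Linear combination of the previous estimate, a data-fidelity pull toward
    # the source image, and a regularizing pull toward the composite image.
    x = x + alpha * (source - x) + beta * (composite - x)
```

At convergence this toy iteration settles on a weighted average of source and composite, which is enough to show the intended SNR transfer; BMR's actual correction term is constrained by the measured source data rather than a plain image difference.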

  15. RESRAD benchmarking against six radiation exposure pathway models

    Energy Technology Data Exchange (ETDEWEB)

    Faillace, E.R.; Cheng, J.J.; Yu, C.

    1994-10-01

    A series of benchmarking runs were conducted so that results obtained with the RESRAD code could be compared against those obtained with six pathway analysis models used to determine the radiation dose to an individual living on a radiologically contaminated site. The RESRAD computer code was benchmarked against five other computer codes - GENII-S, GENII, DECOM, PRESTO-EPA-CPG, and PATHRAE-EPA - and the uncodified methodology presented in the NUREG/CR-5512 report. Estimated doses for the external gamma pathway; the dust inhalation pathway; and the soil, food, and water ingestion pathways were calculated for each methodology by matching, to the extent possible, input parameters such as occupancy, shielding, and consumption factors.

  16. Absorbed dose measurements in mammography using Monte Carlo method and ZrO{sub 2}+PTFE dosemeters

    Energy Technology Data Exchange (ETDEWEB)

    Duran M, H. A.; Hernandez O, M. [Departamento de Investigacion en Polimeros y Materiales, Universidad de Sonora, Blvd. Luis Encinas y Rosales s/n, Col. Centro, 83190 Hermosillo, Sonora (Mexico); Salas L, M. A.; Hernandez D, V. M.; Vega C, H. R. [Unidad Academica de Estudios Nucleares, Universidad Autonoma de Zacatecas, Cipres 10, Fracc. La Penuela, 98068 Zacatecas (Mexico); Pinedo S, A.; Ventura M, J.; Chacon, F. [Hospital General de Zona No. 1, IMSS, Interior Alameda 45, 98000 Zacatecas (Mexico); Rivera M, T. [Centro de Investigacion en Ciencia Aplicada y Tecnologia Avanzada, IPN, Av. Legaria 694, Col. Irrigacion, 11500 Mexico D. F.(Mexico)], e-mail: hduran20_1@hotmail.com

    2009-10-15

    Mammography is a central tool for breast cancer diagnosis. In addition, screening programs are conducted periodically to detect asymptomatic women in certain age groups; these programs have shown a reduction in breast cancer mortality. Early detection of breast cancer is achieved through mammography, which contrasts the glandular and adipose tissue with a probable calcification. The parameters used for mammography are based on the thickness and density of the breast; their values depend on the voltage, current, focal spot and anode-filter combination. To achieve a clear image at minimum dose, appropriate irradiation conditions must be chosen. The risk associated with mammography should not be ignored. This study was performed in the General Hospital No. 1 IMSS in Zacatecas. A glucose phantom was used, and the air kerma at the entrance of the breast was measured with ZrO{sub 2}+PTFE thermoluminescent dosemeters and calculated using Monte Carlo methods; from these values the absorbed dose was then calculated. (author)

  17. Distribution of gamma-ray dose rate in Fukushima prefecture by a car-borne survey method

    International Nuclear Information System (INIS)

    The Tohoku Pacific Earthquake and Tsunami on March 11, 2011, caused severe damage to the TEPCO Fukushima Dai-ichi NPP. This was followed by a nuclear accident at an unprecedented scale, and huge amounts of radioactive material were released into the environment. The distributions of the gamma-ray dose rate in Fukushima prefecture were measured using a NaI(Tl) scintillation survey meter as part of a car-borne survey method on April 18-21, June 20-22, October 18-21, 2011, and on April 9-11 and July 30 - August 1, 2012. The dose rate near TEPCO Fukushima Dai-ichi NPP and at Iitate-mura, Fukushima-city was high (1 to >30 μSv/h). (author)

  18. Effect of intensity modulated radiation therapy according to equivalent uniform dose optimization method on patients with lung cancer

    Institute of Scientific and Technical Information of China (English)

    Yu-Fu Zhou; Qian Sun; Ya-Jun Zhang; Geng-Ming Wang; Bin He; Tao Qi; An Zhou

    2016-01-01

    Objective: To analyze the effect of intensity modulated radiation therapy based on the equivalent uniform dose optimization method on patients with lung cancer. Methods: A total of 82 cases of non-small cell lung cancer were divided into an observation group and a control group according to the random number table method. Patients in the control group received conventional radiotherapy while the observation group received intensity modulated radiotherapy based on the equivalent uniform dose optimization method. The treatment effects, survival times, blood vessel-related factors, blood coagulation function and the levels of inflammatory factors were compared between the two groups of patients. Results: The effective rate of the observation group after treatment was higher than that of the control group. Progression-free survival and median overall survival times were longer than those of patients in the control group (P<0.05). The serum VEGF and HIF-α levels as well as D-D, TT, PT, APTT and FIB levels were lower in observation group patients after treatment than those in the control group (P<0.05). At the same time point, serum TNF-α, CRP and PCT levels in the observation group after treatment were lower than those in the control group (P<0.05). Serum M2-PK, CA125, CEA and SCC values of patients in the observation group after treatment were all significantly lower than those in the control group (P<0.05). Conclusions: Intensity modulated radiation therapy based on the equivalent uniform dose optimization method can improve the treatment effect, prolong the survival time, optimize the micro-inflammatory environment and inhibit tumor biological behavior.
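    The equivalent uniform dose used in such optimization is commonly computed with Niemierko's generalized EUD (gEUD) formula, gEUD = (mean of d_i^a)^(1/a); the sketch below and its parameter values are conventional illustrations, not taken from the paper.

```python
import numpy as np

def geud(voxel_doses, a):
    """Generalized equivalent uniform dose: (mean(d_i ** a)) ** (1 / a)."""
    d = np.asarray(voxel_doses, dtype=float)
    return float(np.mean(d ** a) ** (1.0 / a))

# Illustrative voxel doses (Gy). Typical conventions for 'a': large negative
# for targets (penalizes cold spots), large positive for serial organs
# (penalizes hot spots), and a = 1 reduces to the mean dose.
dvh = np.array([60.0, 62.0, 58.0, 61.0])
target_eud = geud(dvh, a=-10.0)   # pulled toward the minimum dose
mean_dose = geud(dvh, a=1.0)      # equals the arithmetic mean
```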

  19. SU-E-T-561: Development of Depth Dose Measurement Technique Using the Multilayer Ionization Chamber for Spot Scanning Method

    Energy Technology Data Exchange (ETDEWEB)

    Takayanagi, T; Fujitaka, S; Umezawa, M [Hitachi, Ltd., Hitachi Research Laboratory, Hitachi-shi, Ibaraki-ken (Japan); Ito, Y; Nakashima, C; Matsuda, K [Hitachi, Ltd., Hitachi Works, Hitachi-shi, Ibaraki-ken (Japan)

    2014-06-01

    Purpose: To develop a measurement technique which suppresses the difference between profiles obtained with a multilayer ionization chamber (MLIC) and with a water phantom. Methods: The developed technique multiplies the raw MLIC data by a correction factor that depends on the initial beam range and water equivalent depth. The correction factor is derived based on a Bragg curve calculation formula considering range straggling and fluence loss caused by nuclear reactions. Furthermore, the correction factor is adjusted based on several integrated depth doses measured with a water phantom and the MLIC. The measured depth dose profiles along the central axis of the proton field with a nominal field size of 10 by 10 cm were compared between the MLIC using the new technique and the water phantom. The spread out Bragg peak was 20 cm for fields with a range of 30.6 cm and 6.9 cm. Raw MLIC data were obtained with each energy layer, and integrated after multiplying by the correction factor. The measurements were performed by a spot scanning nozzle at Nagoya Proton Therapy Center, Japan. Results: The profile measured with the MLIC using the new technique is consistent with that of the water phantom. Moreover, 97% of the points passed the 1% dose /1mm distance agreement criterion of the gamma index. Conclusion: We have demonstrated that the new technique suppresses the difference between profiles obtained with the MLIC and with the water phantom. It was concluded that this technique is useful for depth dose measurement in proton spot scanning method.
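    The correction step described above amounts to scaling each raw per-layer MLIC reading by a factor that depends on the beam's initial range and the water-equivalent depth, then integrating over energy layers; the sketch below uses placeholder factors rather than the calibrated ones from the paper.

```python
import numpy as np

def corrected_depth_dose(raw_layers, correction_factors):
    """Integrate per-energy-layer MLIC readings after applying corrections.

    raw_layers, correction_factors: arrays of shape (n_layers, n_depths);
    in practice each factor would depend on the layer's initial range and
    the water-equivalent depth of the channel.
    """
    raw = np.asarray(raw_layers, dtype=float)
    corr = np.asarray(correction_factors, dtype=float)
    return (raw * corr).sum(axis=0)   # sum over energy layers at each depth

# Placeholder example: 3 energy layers, 5 depth channels, uniform 0.97 factor.
dose = corrected_depth_dose(np.ones((3, 5)), np.full((3, 5), 0.97))
```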

  20. μ-synthesis for the coupled mass benchmark problem

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, J.; Tøffner-Clausen, S.;

    1997-01-01

    A robust controller design for the coupled mass benchmark problem is presented in this paper. The applied design method is based on a modified D-K iteration, i.e. μ-synthesis, which takes care of mixed real and complex perturbation sets. This μ-synthesis method for mixed perturbation sets is a str...

  1. Monitoring 3D dose distributions in proton therapy by reconstruction using an iterative method.

    Science.gov (United States)

    Kim, Young-Hak; Yoon, Changyeon; Lee, Wonho

    2016-08-01

    The Bragg peak of protons can be determined by measuring prompt γ-rays. In this study, prompt γ-rays detected by single-photon emission computed tomography with a geometrically optimized collimation system were reconstructed by an iterative method. The falloff position obtained with the iterative method (52.48 mm) was closest to the Bragg peak (52 mm) of an 80 MeV proton, compared with those of the back-projection (54.11 mm) and filtered back-projection (54.91 mm) methods. The iterative method also showed better image performance than the other methods. PMID:27179145
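    Iterative reconstruction in emission tomography is commonly an ML-EM-type update. The sketch below shows that generic update, not the authors' specific implementation; the system matrix `A` and measured counts `y` are placeholders for the collimator geometry and prompt-γ data.

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Basic ML-EM iterative reconstruction.

    A : (n_detectors, n_voxels) system matrix mapping source activity
        to expected detector counts
    y : (n_detectors,) measured counts
    """
    x = np.ones(A.shape[1])          # flat initial estimate
    sens = A.sum(axis=0)             # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                             # forward projection
        ratio = np.where(proj > 0, y / proj, 0)  # measured / estimated
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
    return x
```

    The falloff position would then be read from the reconstructed activity profile, e.g. as the distal point where it drops to half its peak value.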

  2. Comparing measurement-derived (3DVH) and machine log file-derived dose reconstruction methods for VMAT QA in patient geometries.

    Science.gov (United States)

    Tyagi, Neelam; Yang, Kai; Yan, Di

    2014-07-08

    The purpose of this study was to compare the measurement-derived (3DVH) dose reconstruction method with the machine log file-derived dose reconstruction method in patient geometries for VMAT delivery. A total of ten patient plans were selected, ranging from regular fractionation plans to complex SBRT plans. Treatment sites in the lung and abdomen were chosen to explore the effects of tissue heterogeneity on the respective dose reconstruction algorithms. Single- and multiple-arc VMAT plans were generated to achieve the desired target objectives. The delivered plan in the patient geometry was reconstructed by using ArcCHECK Planned Dose Perturbation (ACPDP) within 3DVH software, and by converting the machine log file to Pinnacle3 9.0 treatment plan format and recalculating dose with the CVSP algorithm. In addition, delivered gantry angles from the machine log file and the 3DVH 4D measurement were compared to evaluate the accuracy of the virtual inclinometer within 3DVH. Measured ion chamber and 3DVH-derived isocenter doses agreed with the planned dose within 0.4% ± 1.2% and -1.0% ± 1.6%, respectively. 3D gamma analysis showed greater than 98% agreement between log file and 3DVH reconstructed doses. Machine log file reconstructed doses and TPS doses agreed to within 2% in the PTV and OARs over the entire treatment. The 3DVH reconstructed dose showed an average maximum dose difference of 3% ± 1.2% in the PTV, and an average mean difference of -4.5% ± 10.5% in OAR doses. The average virtual inclinometer error (VIE) was -0.65° ± 1.6° for all patients, with a maximum error of -5.16° ± 4.54° for an SRS case. The time-averaged VIE was within 1°-2°, and did not have a large impact on the overall accuracy of the patient dose estimated by the ACPDP algorithm. In this study, we have compared two independent dose reconstruction methods for VMAT QA. Both methods are capable of taking into account the measurement and delivery parameter discrepancy, and display the delivered dose in the CT patient geometry rather than

  3. A new method to explore the spectral impact of the piriform fossae on the singing voice:Benchmarking using MRI-based 3D-printed vocal tracts

    OpenAIRE

    Bertrand Delvaux; David Howard

    2014-01-01

    The piriform fossae are the 2 pear-shaped cavities lateral to the laryngeal vestibule at the lower end of the vocal tract. They act acoustically as side-branches to the main tract, resulting in a spectral zero in the output of the human voice. This study investigates their spectral role by comparing numerical and experimental results of MRI-based 3D printed Vocal Tracts, for which a new experimental method (based on room acoustics) is introduced. The findings support results in the literature...

  4. A simple method for estimating the effective dose in dental CT. Conversion factors and calculation for a clinical low-dose protocol; Eine einfache Methode zur Abschaetzung der effektiven Dosis bei Dental-CT. Konversionsfaktoren und exemplarische Berechnung fuer ein klinisches Low-Dose-Protokoll

    Energy Technology Data Exchange (ETDEWEB)

    Homolka, P.; Kudler, H.; Nowotny, R. [Inst. fuer Biomedizinische Technik und Physik, Univ. Wien (Austria); Gahleitner, A. [Wien Univ. (Austria). Abt. fuer Osteologie; Wien Univ. (Austria). Zahn-, Mund- und Kieferheilkunde

    2001-06-01

    An easily applicable method for estimating the effective dose from dental computed tomography, including in its definition the high radiosensitivity of the salivary glands, is presented. Effective doses were calculated for a markedly dose-reduced dental CT protocol as well as for standard settings. The data are compared with effective doses from the literature obtained with other modalities frequently used in dental care. Methods: Conversion factors based on the weighted Computed Tomography Dose Index were derived from published data to calculate effective dose values for various CT exposure settings. Results: The conversion factors determined can be used for clinically used kVp settings and prefiltrations. With reduced tube current, an effective dose of 22 μSv can be achieved for a CT examination of the maxilla, which compares to values typically obtained with panoramic radiography (26 μSv). A CT scan of the mandible gives 123 μSv, comparable to a full mouth survey with intraoral films (150 μSv). Conclusion: For standard CT scan protocols of the mandible, effective doses exceed 600 μSv. Hence, low-dose protocols for dental CT should be considered whenever feasible, especially for paediatric patients. If hard tissue diagnosis is performed, the potential for dose reduction is significant despite the higher image noise levels, as readability remains adequate. (orig.) [Translated from the German original:] A method is described that allows a simple determination of the effective dose in dental CT, taking into account the radiosensitivity of the parotid and submandibular glands, for both standard and dose-reduced protocols. Furthermore, the effective dose of a clinically used low-dose protocol is estimated and compared with the most common dental radiological examination procedures.
Methods: From published effective doses for maxilla and mandible scans, conversion factors were derived, with whose help for deviating
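    The conversion-factor approach can be illustrated with a toy calculation: effective dose as dose-length product times a region-specific factor. The factor value in the example is invented for illustration and is not one of the paper's published conversion factors.

```python
def effective_dose_uSv(ctdi_w_mGy, scan_length_cm, k_uSv_per_mGy_cm):
    """Estimate effective dose from the weighted CTDI via a
    region-specific conversion factor (all values illustrative)."""
    dlp = ctdi_w_mGy * scan_length_cm      # dose-length product [mGy*cm]
    return dlp * k_uSv_per_mGy_cm          # effective dose [uSv]
```

    For example, a hypothetical low-dose maxilla scan with CTDIw of 2.0 mGy over 5 cm and a factor of 2.2 μSv per mGy·cm would give 22 μSv.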

  5. Benchmarking Density Functional Theory Based Methods To Model NiOOH Material Properties: Hubbard and van der Waals Corrections vs Hybrid Functionals.

    Science.gov (United States)

    Zaffran, Jeremie; Caspary Toroker, Maytal

    2016-08-01

    NiOOH has recently been used to catalyze water oxidation by way of electrochemical water splitting. Few experimental data are available to rationalize the successful catalytic capability of NiOOH. Thus, theory has a distinctive role for studying its properties. However, the unique layered structure of NiOOH is associated with the presence of essential dispersion forces within the lattice. Hence, the choice of an appropriate exchange-correlation functional within Density Functional Theory (DFT) is not straightforward. In this work, we will show that standard DFT is sufficient to evaluate the geometry, but DFT+U and hybrid functionals are required to calculate the oxidation states. Notably, the benefit of DFT with van der Waals correction is marginal. Furthermore, only hybrid functionals succeed in opening a bandgap, and such methods are necessary to study NiOOH electronic structure. In this work, we expect to give guidelines to theoreticians dealing with this material and to present a rational approach in the choice of the DFT method of calculation. PMID:27420033

  6. Nominal GDP: Target or Benchmark?

    OpenAIRE

    Hetzel, Robert L.

    2015-01-01

    Some observers have argued that the Federal Reserve would best fulfill its mandate by adopting a target for nominal gross domestic product (GDP). Insights from the monetarist tradition suggest that nominal GDP targeting could be destabilizing. However, adopting benchmarks for both nominal and real GDP could offer useful information about when monetary policy is too tight or too loose.

  7. Benchmarking biodiversity performances of farmers

    NARCIS (Netherlands)

    Snoo, de G.R.; Lokhorst, A.M.; Dijk, van J.; Staats, H.; Musters, C.J.M.

    2010-01-01

    Farmers are the key players when it comes to the enhancement of farmland biodiversity. In this study, a benchmark system that focuses on improving farmers’ nature conservation was developed and tested among Dutch arable farmers in different social settings. The results show that especially tailored

  8. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes a comparison of these websites against a list of criteria and presents a list of services that are most commonly deployed by the selected websites. In addition, the investigators proposed a list of services that could be provided via the KAUST library website.

  9. Benchmarking Universiteitsvastgoed: Managementinformatie bij vastgoedbeslissingen

    NARCIS (Netherlands)

    Den Heijer, A.C.; De Vries, J.C.

    2004-01-01

    [Translated from Dutch:] This is the final report of the study "Benchmarking university real estate". The report merges two sub-products: the theory report (published in December 2003) and the practice report (published in January 2004). Topics in the theory part are the analysis of other

  10. Methods for Estimation of Radiation Risk in Epidemiological Studies Accounting for Classical and Berkson Errors in Doses

    KAUST Repository

    Kukush, Alexander

    2011-01-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor with pr(Y=1 | D) = R/(1+R), R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^mes = f_i Q_i^mes / M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t^mes, M_i^mes is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr V_i^Q (a classical measurement error model) and M_i^tr = M_i^mes V_i^M (a Berkson measurement error model). Here, Q_i^tr is the true content of radioactivity in the thyroid gland, and M_i^tr is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. A simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were given the values from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.
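    The mixed classical/Berkson error structure above can be made concrete with a small simulation. All numeric parameters below are illustrative placeholders, not the study's values; note the direction of each error model in the comments.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
lam0, ear = 0.05, 0.5                  # illustrative baseline rate and EAR per Gy

# classical error: the measurement Q_mes scatters around the truth Q_tr
Q_tr = rng.lognormal(0.0, 0.5, n)
Q_mes = Q_tr * rng.lognormal(0.0, 0.3, n)

# Berkson error: the truth M_tr scatters around the measurement M_mes
M_mes = rng.lognormal(0.0, 0.3, n)
M_tr = M_mes * rng.lognormal(0.0, 0.2, n)

f = 1.0                                # normalizing multiplier, error ignored
D_tr = f * Q_tr / M_tr                 # true dose drives the response
D_mes = f * Q_mes / M_mes              # reconstructed ("calculated") dose

# linear-odds response model: pr(Y=1 | D) = R / (1 + R), R = lam0 + EAR * D
R = lam0 + ear * D_tr
Y = rng.binomial(1, R / (1.0 + R))
```

    A naive fit of the dose-response model to `(D_mes, Y)` would then exhibit the attenuation of the EAR estimate that the calibration and SIMEX methods are designed to correct.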

  11. A new method to explore the spectral impact of the piriform fossae on the singing voice: benchmarking using MRI-based 3D-printed vocal tracts.

    Science.gov (United States)

    Delvaux, Bertrand; Howard, David

    2014-01-01

    The piriform fossae are the 2 pear-shaped cavities lateral to the laryngeal vestibule at the lower end of the vocal tract. They act acoustically as side-branches to the main tract, resulting in a spectral zero in the output of the human voice. This study investigates their spectral role by comparing numerical and experimental results of MRI-based 3D printed Vocal Tracts, for which a new experimental method (based on room acoustics) is introduced. The findings support results in the literature: the piriform fossae create a spectral trough in the region 4-5 kHz and act as formants repellents. Moreover, this study extends those results by demonstrating numerically and perceptually the impact of having large piriform fossae on the sung output. PMID:25048199

  12. A new method to explore the spectral impact of the piriform fossae on the singing voice: benchmarking using MRI-based 3D-printed vocal tracts.

    Directory of Open Access Journals (Sweden)

    Bertrand Delvaux

    Full Text Available The piriform fossae are the 2 pear-shaped cavities lateral to the laryngeal vestibule at the lower end of the vocal tract. They act acoustically as side-branches to the main tract, resulting in a spectral zero in the output of the human voice. This study investigates their spectral role by comparing numerical and experimental results of MRI-based 3D printed Vocal Tracts, for which a new experimental method (based on room acoustics) is introduced. The findings support results in the literature: the piriform fossae create a spectral trough in the region 4-5 kHz and act as formants repellents. Moreover, this study extends those results by demonstrating numerically and perceptually the impact of having large piriform fossae on the sung output.

  13. Cross-validation of two commercial methods for volumetric high-resolution dose reconstruction on a phantom for non-coplanar VMAT beams

    International Nuclear Information System (INIS)

    Background and purpose: Delta4 (ScandiDos AB, Uppsala, Sweden) and ArcCHECK with 3DVH software (Sun Nuclear Corp., Melbourne, FL, USA) are commercial quasi-three-dimensional diode dosimetry arrays capable of volumetric measurement-guided dose reconstruction. A method to reconstruct dose for non-coplanar VMAT beams with 3DVH is described. The Delta4 3D dose reconstruction on its own phantom for VMAT delivery has not been thoroughly evaluated previously, and we do so by comparison with 3DVH. Materials and methods: Reconstructed volumetric doses for VMAT plans delivered with different table angles were compared between the Delta4 and 3DVH using gamma analysis. Results: The average γ (2% local dose-error normalization / 2 mm) passing rate comparing the directly measured Delta4 diode dose with 3DVH was 98.2 ± 1.6% (1 SD). The average passing rate for the full volumetric comparison of the reconstructed doses on a homogeneous cylindrical phantom was 95.6 ± 1.5%. No dependence on the table angle was observed. Conclusions: The modified 3DVH algorithm is capable of 3D VMAT dose reconstruction on an arbitrary volume for the full range of table angles. Our comparison results between different dosimeters make a compelling case for the use of electronic arrays with high-resolution 3D dose reconstruction as the primary means of evaluating spatial dose distributions during IMRT/VMAT verification
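    A gamma comparison like the one quoted can be sketched in one dimension as follows. This is the textbook global-normalization form, simplified relative to the local dose-error normalization used in the paper; it serves only to show how the dose-difference and distance-to-agreement criteria combine.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, positions, dta_mm=2.0, dd_frac=0.02):
    """1D global gamma analysis: for each reference point, take the
    minimum over evaluated points of the combined dose-difference /
    distance-to-agreement metric, and count points with gamma <= 1."""
    passed = 0
    for xr, dr in zip(positions, dose_ref):
        dd = (dose_eval - dr) / (dd_frac * dose_ref.max())  # dose axis
        dx = (positions - xr) / dta_mm                      # distance axis
        gamma = np.sqrt(dd**2 + dx**2).min()
        passed += gamma <= 1.0
    return passed / len(dose_ref)
```

    A local-normalization variant would divide the dose difference by the local reference dose `dr` instead of the global maximum, which is stricter in low-dose regions.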

  14. Standardization of the Fricke gel dosimetry method and tridimensional dose evaluation using the magnetic resonance imaging technique

    International Nuclear Information System (INIS)

    This study standardized the method for obtaining the Fricke gel solution developed at IPEN. The results for different gel qualities used in the preparation of solutions and the influence of the gelatin concentration on the response of the dosimetric solutions were compared. Type tests such as dose response dependence, minimum and maximum detection limits, and response reproducibility, among others, were carried out using different radiation types and the Optical Absorption (OA) spectrophotometry and Magnetic Resonance (MR) techniques. The useful dose ranges for Co-60 gamma radiation and 6 MeV photons are 0.4 to 30.0 Gy and 0.5 to 100.0 Gy, using the OA and MR techniques, respectively. A study of ferric ion diffusion in solution was performed to determine the optimum time interval between irradiation and sample evaluation: up to 2.5 hours after irradiation to obtain sharp MR images. A spherical phantom consisting of Fricke gel solution prepared with 5% by weight 270 Bloom gelatine (national quality) was developed for three-dimensional dose assessment using the Magnetic Resonance Imaging (MRI) technique. The Fricke gel solution prepared with 270 Bloom gelatine, which, in addition to low cost, can be easily acquired on the national market, presents satisfactory results regarding ease of handling, sensitivity, response reproducibility and consistency. The results confirm its applicability in three-dimensional dosimetry using the MRI technique. (author)
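    The dose-response characterization behind such type tests reduces, for the OA readout, to a linear calibration that is later inverted to read dose. The sketch below illustrates this with invented absorbance values; the slope and intercept are not the study's measured calibration.

```python
import numpy as np

# illustrative calibration points: optical absorbance vs delivered dose
doses = np.array([0.4, 5.0, 10.0, 20.0, 30.0])      # Gy, within the OA range
absorbance = 0.02 + 0.011 * doses                    # made-up linear response

# least-squares linear fit of the calibration curve
slope, intercept = np.polyfit(doses, absorbance, 1)

def dose_from_absorbance(a):
    """Invert the linear calibration to read dose from a measured OA value."""
    return (a - intercept) / slope
```

    The same pattern applies to the MR readout, with the relaxation-rate signal replacing absorbance over its wider useful dose range.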

  15. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Science.gov (United States)

    2010-10-01

    42 Public Health, Part 440 (2010-10-01): GENERAL PROVISIONS, Benchmark Benefit and Benchmark-Equivalent Coverage. § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  16. An investigation of methods for neutron dose measurement in high temperature irradiation fields

    Energy Technology Data Exchange (ETDEWEB)

    Kosako, Toshisou; Sugiura, Nobuyuki [Tokyo Univ. (Japan); Kudo, Kazuhiko [Kyushu Univ., Fukuoka (Japan)] [and others

    2000-10-01

    The Japan Atomic Energy Research Institute (JAERI) has been conducting innovative basic research on high temperature since 1994, a series of high temperature irradiation studies using the High Temperature Engineering Test Reactor (HTTR). 'The Task Group for Evaluation of Irradiation Dose under High Temperature Radiation' was founded in the HTTR Utilization Research Committee, the promoting body of the innovative basic research. The present report summarizes the Task Group's investigation of the present status and subjects of research and development of neutron detectors in high temperature irradiation fields, in view of contributing to high temperature irradiation research using the HTTR. The detectors investigated in the domestic survey are the following five kinds of in-core detectors: 1) small fission counter, 2) small fission chamber, 3) self-powered detector, 4) activation detector, and 5) optical fiber. In addition, the research and development status in Russia has been investigated. The present report will also be useful for the nuclear instrumentation of high temperature gas-cooled reactors. (author)

  17. An investigation of methods for neutron dose measurement in high temperature irradiation fields

    International Nuclear Information System (INIS)

    The Japan Atomic Energy Research Institute (JAERI) has been conducting innovative basic research on high temperature since 1994, a series of high temperature irradiation studies using the High Temperature Engineering Test Reactor (HTTR). 'The Task Group for Evaluation of Irradiation Dose under High Temperature Radiation' was founded in the HTTR Utilization Research Committee, the promoting body of the innovative basic research. The present report summarizes the Task Group's investigation of the present status and subjects of research and development of neutron detectors in high temperature irradiation fields, in view of contributing to high temperature irradiation research using the HTTR. The detectors investigated in the domestic survey are the following five kinds of in-core detectors: 1) small fission counter, 2) small fission chamber, 3) self-powered detector, 4) activation detector, and 5) optical fiber. In addition, the research and development status in Russia has been investigated. The present report will also be useful for the nuclear instrumentation of high temperature gas-cooled reactors. (author)

  18. Benchmark Dose Software Development and Maintenance Ten Berge Cxt Models

    Science.gov (United States)

    This report is intended to provide an overview of beta version 1.0 of the implementation of a concentration-time (CxT) model originally programmed and provided by Wil ten Berge (referred to hereafter as the ten Berge model). The recoding and development described here represent ...

  19. Study and comparison of analytical methods for dosing molybdenum in uranium-molybdenum alloys

    International Nuclear Information System (INIS)

    Methods to determine molybdenum in uranium-molybdenum alloys are developed using various techniques: molecular absorption spectrophotometry, emission spectroscopy, X-ray fluorescence, and atomic absorption spectrophotometry. After a comparison on samples in which the molybdenum content lies between 1 and 10 per cent by weight, conclusions are drawn on the suitability of some of the presented methods for routine analysis. (author)

  20. Electronically Excited States of Vitamin B12: Benchmark Calculations Including Time-Dependent Density Functional Theory and Correlated Ab Initio Methods

    CERN Document Server

    Kornobis, Karina; Wong, Bryan M; Lodowski, Piotr; Jaworska, Maria; Andruniów, Tadeusz; Rudd, Kenneth; Kozlowski, Pawel M; 10.1021/jp110914y

    2011-01-01

    Time-dependent density functional theory (TD-DFT) and correlated ab initio methods have been applied to the electronically excited states of vitamin B12 (cyanocobalamin or CNCbl). Different experimental techniques have been used to probe the excited states of CNCbl, revealing many issues that remain poorly understood from an electronic structure point of view. Due to its efficient scaling with size, TD-DFT emerges as one of the most practical tools that can be used to predict the electronic properties of these fairly complex molecules. However, the description of excited states is strongly dependent on the type of functional used in the calculations. In the present contribution, the choice of a proper functional for vitamin B12 was evaluated in terms of its agreement with both experimental results and correlated ab initio calculations. Three different functionals, i.e. B3LYP, BP86, and LC-BLYP, were tested. In addition, the effect of relative contributions of DFT and HF to the exchange-correlation functional ...

  1. Use of Non-Parametric Statistical Method in Identifying Repetitive High Dose Jobs in a Nuclear Power Plant

    International Nuclear Information System (INIS)

    The cost-effective reduction of occupational radiation dose (ORD) at a nuclear power plant cannot be achieved without an extensive analysis of the accumulated ORD data of existing plants. Through this data analysis, it is necessary to identify which jobs at the nuclear power plant repeatedly incur high ORD. In this study, the Percentile Rank Sum Method (PRSM), based on non-parametric statistical theory, is proposed to identify repetitive high-ORD jobs. As a case study, the method is applied to ORD data of maintenance and repair jobs at Kori units 3 and 4, pressurized water reactors with 950 MWe capacity that have been operated in Korea since 1986 and 1987, respectively. The results were verified and validated, and PRSM has been demonstrated to be an efficient method of analyzing the data.
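    The abstract does not spell out the PRSM algorithm, so the following is only one plausible reading of "percentile rank sum": score each job by summing its percentile ranks across repeated dose records, so that repeatedly high-dose jobs accumulate the largest sums. The `metrics` layout is an assumption for the sketch.

```python
import numpy as np

def percentile_rank(col):
    """Percentile rank (0-100) of each value within one metric column."""
    order = col.argsort().argsort()          # rank of each entry, 0-based
    return 100.0 * order / (len(col) - 1)

def prsm_scores(metrics):
    """Rows = jobs, columns = occupational dose recorded in successive
    outages (an assumed layout); the rank sum flags jobs that are high
    dose repeatedly rather than in a single outage."""
    ranks = np.column_stack([percentile_rank(metrics[:, j])
                             for j in range((metrics.shape[1]))])
    return ranks.sum(axis=1)
```

    Ranks, unlike raw doses, make outage campaigns of different overall magnitude comparable, which is the usual motivation for a non-parametric treatment.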

  2. Introducing a method to derive a multiple virtual point source model of linac for photon beam dose calculation using Monte Carlo method

    International Nuclear Information System (INIS)

    In this study a simple but very effective method is introduced for beam modeling of the invariant part of a medical linear accelerator. In this method, instead of segmentation of the scoring plane and analysis of a phase space file, the mirror image of a virtual point source, the energy and angular distributions, and the dependencies between them are derived directly. The method was then used for beam modeling of the 6 MeV photon beam of a Siemens ONCOR Impression accelerator, using the TALLYX capability of MCNP4C. Consequently, a multiple point source model with angular-dependent photon energy spectra was obtained. The percentage depth dose curves and the lateral dose distributions in a water phantom were calculated using the present model for three field sizes, 4 cm x 4 cm, 10 cm x 10 cm and 40 cm x 40 cm, and the results were compared to those of full Monte Carlo simulations. The results showed excellent agreement for all field sizes. The benefits of the present method compared with the phase space file analysis method were verified, including ease of application and removal of the errors caused by spatial segmentation of the phase space data.

  3. Benchmarking HIV health care

    DEFF Research Database (Denmark)

    Podlekareva, Daria; Reekie, Joanne; Mocroft, Amanda;

    2012-01-01

    ABSTRACT: BACKGROUND: State-of-the-art care involving the utilisation of multiple health care interventions is the basis for an optimal long-term clinical prognosis for HIV-patients. We evaluated health care for HIV-patients based on four key indicators. METHODS: Four indicators of health care were assessed: compliance with current guidelines on initiation of 1) combination antiretroviral therapy (cART), 2) chemoprophylaxis, 3) frequency of laboratory monitoring, and 4) virological response to cART (proportion of patients with HIV-RNA 90% of time on cART). RESULTS: 7097 Euro... Compared to North, patients from other regions had significantly lower odds of virological response; the difference was most pronounced for East and Argentina (adjusted OR 0.16 [95% CI 0.11-0.23], p ... HIV health care utilization...

  4. MCNP: Photon benchmark problems

    Energy Technology Data Exchange (ETDEWEB)

    Whalen, D.J.; Hollowell, D.E.; Hendricks, J.S.

    1991-09-01

    The recent widespread, markedly increased use of radiation transport codes has produced greater user and institutional demand for assurance that such codes give correct results. Responding to these pressing requirements for code validation, the general purpose Monte Carlo transport code MCNP has been tested on six different photon problem families. MCNP was used to simulate these six sets numerically. Results for each were compared to the set's analytical or experimental data. MCNP successfully predicted the analytical or experimental results of all six families within the statistical uncertainty inherent in the Monte Carlo method. From this we conclude that MCNP can accurately model a broad spectrum of photon transport problems. 8 refs., 30 figs., 5 tabs.

  5. Benchmarking ETL Workflows

    Science.gov (United States)

    Simitsis, Alkis; Vassiliadis, Panos; Dayal, Umeshwar; Karagiannis, Anastasios; Tziovara, Vasiliki

    Extraction-Transform-Load (ETL) processes comprise complex data workflows, which are responsible for the maintenance of a Data Warehouse. A plethora of ETL tools is currently available constituting a multi-million dollar market. Each ETL tool uses its own technique for the design and implementation of an ETL workflow, making the task of assessing ETL tools extremely difficult. In this paper, we identify common characteristics of ETL workflows in an effort of proposing a unified evaluation method for ETL. We also identify the main points of interest in designing, implementing, and maintaining ETL workflows. Finally, we propose a principled organization of test suites based on the TPC-H schema for the problem of experimenting with ETL workflows.

  6. The LDBC Social Network Benchmark: Interactive Workload

    NARCIS (Netherlands)

    Erling, O.; Averbuch, A.; Larriba-Pey, J.; Chafi, H.; Gubichev, A.; Prat, A.; Pham, M.D.; Boncz, P.A.

    2015-01-01

    The Linked Data Benchmark Council (LDBC) is now two years underway and has gathered strong industrial participation for its mission to establish benchmarks, and benchmarking practices for evaluating graph data management systems. The LDBC introduced a new choke-point driven methodology for developing

  7. Benchmark results in vector atmospheric radiative transfer

    International Nuclear Information System (INIS)

    In this paper seven vector radiative transfer codes are inter-compared for the case of underlying black surface. They include three techniques based on the discrete ordinate method (DOM), two Monte-Carlo methods, the successive orders scattering method, and a modified doubling-adding technique. It was found that all codes give very similar results. Therefore, we were able to produce benchmark results for the Stokes parameters both for reflected and transmitted light in the cases of molecular, aerosol and cloudy multiply scattering media. It was assumed that the single scattering albedo is equal to one. Benchmark results have been provided by several studies before, including Coulson et al., Garcia and Siewert, Wauben and Hovenier, and Natraj et al. among others. However, the case of the elongated phase functions such as for a cloud and with a high angular resolution is presented here for the first time. Also in difference with other studies, we make inter-comparisons using several codes for the same input dataset, which enables us to quantify the corresponding errors more accurately.

  8. The European Union benchmarking experience. From euphoria to fatigue?

    Directory of Open Access Journals (Sweden)

    Michael Zängle

    2004-06-01

    Full Text Available Even if one may agree with the possible criticism of the Lisbon process as being too vague in commitment or as lacking appropriate statistical techniques and indicators, the benchmarking system provided by EUROSTAT seems to be sufficiently effective in warning against imminent failure. The Lisbon objectives are very demanding. This holds true even if each of the objectives is looked at in isolation. But 'Lisbon' is more demanding than that, requiring a combination of several objectives to be achieved simultaneously (GDP growth, labour productivity, job-content of growth, higher quality of jobs and greater social cohesion). Even to countries like Ireland, showing exceptionally high performance in GDP growth and employment promotion during the period under investigation, achieving potentially conflicting objectives simultaneously seems to be beyond feasibility. The European Union benchmarking exercise is embedded in the context of the Open Method(s) of Co-ordination (OMC). This context makes the benchmarking approach part and parcel of an overarching philosophy, which relates the benchmarking indicators to each other and assigns to them their role in corroborating the increasingly dominating project of the 'embedded neo-liberalism'. Against this background, the present paper is focussed on the following point. With the EU benchmarking system being effective enough to make the imminent under-achievement visible, there is a danger of disillusionment and 'benchmarking fatigue', which may provoke an ideological crisis. The dominant project being so deeply rooted, however, chances are high that this crisis will be solved immanently in terms of embedded neo-liberalism by strengthening the neo-liberal branch of the European project. Confining itself to the Europe of Fifteen, the analysis draws on EUROSTAT's database of Structural Indicators. ...

  9. Climate Benchmark Missions: CLARREO

    Science.gov (United States)

    Wielicki, Bruce A.; Young, David F.

    2010-01-01

    CLARREO (Climate Absolute Radiance and Refractivity Observatory) is one of the four Tier 1 missions recommended by the recent NRC decadal survey report on Earth Science and Applications from Space (NRC, 2007). The CLARREO mission addresses the need to rigorously observe climate change on decade time scales and to use decadal change observations as the most critical method to determine the accuracy of climate change projections such as those used in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR4). A rigorously known accuracy of both decadal change observations as well as climate projections is critical in order to enable sound policy decisions. The CLARREO mission accomplishes this critical objective through highly accurate and SI traceable decadal change observations sensitive to many of the key uncertainties in climate radiative forcings, responses, and feedbacks that in turn drive uncertainty in current climate model projections. The same uncertainties also lead to uncertainty in attribution of climate change to anthropogenic forcing. The CLARREO breakthrough in decadal climate change observations is to achieve the required levels of accuracy and traceability to SI standards for a set of observations sensitive to a wide range of key decadal change variables. These accuracy levels are determined both by the projected decadal changes as well as by the background natural variability that such signals must be detected against. The accuracy for decadal change traceability to SI standards includes uncertainties of calibration, sampling, and analysis methods. Unlike most other missions, all of the CLARREO requirements are judged not by instantaneous accuracy, but instead by accuracy in large time/space scale average decadal changes. Given the focus on decadal climate change, the NRC Decadal Survey concluded that the single most critical issue for decadal change observations was their lack of accuracy and low confidence in

  10. Benchmarking labour market performance and labour market policies : theoretical foundations and applications

    OpenAIRE

    Schütz, Holger; Speckesser, Stefan; Schmid, Günther

    1998-01-01

    "Over the last few years, 'benchmarking' has advanced to become a key word in organisational development and change management. Originally, benchmarking was a tool in business studies for comparing one's own organisational unit with a similar unit (mostly a competitor) in order to improve one's competitive position. Benchmarking must be distinguished from purely analytical methods of comparison: First, performance indicators must be developed which differ from traditional design. Second...

  11. Ship Propulsion System as a Benchmark for Fault-Tolerant Control

    DEFF Research Database (Denmark)

    Izadi-Zamanabadi, Roozbeh; Blanke, M.

    1998-01-01

    Fault-tolerant control is a fairly new area. The paper presents a ship propulsion system as a benchmark that should be useful as a platform for development of new ideas and comparison of methods. The benchmark has two main elements: one is development of efficient FDI algorithms, the other is analysis and implementation of autonomous fault accommodation. A benchmark kit can be obtained from the authors.

  12. Thermal Performance Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Xuhui; Moreno, Gilbert; Bennion, Kevin

    2016-06-07

    The goal for this project is to thoroughly characterize the thermal performance of state-of-the-art (SOA) in-production automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The thermal performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY16, the 2012 Nissan LEAF and 2014 Honda Accord Hybrid power electronics thermal management systems were characterized. A comparison of the two systems was also conducted to provide insight into the various cooling strategies and into the current SOA in thermal management for automotive power electronics and electric motors.

  13. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Peiyuan [Univ. of Colorado, Boulder, CO (United States); Brown, Timothy [Univ. of Colorado, Boulder, CO (United States); Fullmer, William D. [Univ. of Colorado, Boulder, CO (United States); Hauser, Thomas [Univ. of Colorado, Boulder, CO (United States); Hrenya, Christine [Univ. of Colorado, Boulder, CO (United States); Grout, Ray [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sitaraman, Hariswaran [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-29

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations, and cover a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approx. 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is spent on particle-particle force calculations, drag force calculations, and interpolation between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
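
    A weak-scaling analysis of the kind described above reduces to one efficiency number per core count. The sketch below uses hypothetical timings, not actual MFiX measurements: in a weak-scaling study the problem size grows with the core count, so the ideal is constant wall time, and efficiency is t(1)/t(N).

```python
# Weak-scaling efficiency from wall-clock timings (hypothetical values,
# not taken from the MFiX benchmark runs).
timings = {  # cores -> wall time in seconds; problem size grows with cores
    1: 120.0,
    8: 126.0,
    64: 138.0,
    512: 171.0,
    1024: 240.0,
}

base = timings[1]
efficiency = {cores: base / t for cores, t in timings.items()}

for cores, eff in sorted(efficiency.items()):
    print(f"{cores:>5} cores: weak-scaling efficiency {eff:.2f}")
```

    An efficiency staying near 1.0 up to roughly 10^3 cores and dropping beyond that would match the scalability behaviour the authors report.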

  14. A dosimetry method for low dose rate brachytherapy by EGS5 combined with regression to reflect source strength shortage

    Science.gov (United States)

    Tanaka, Kenichi; Tateoka, Kunihiko; Asanuma, Osamu; Kamo, Ken-ichi; Sato, Kaori; Takeda, Hiromitsu; Takagi, Masaru; Hareyama, Masato; Takada, Jun

    2014-01-01

    The validity of post-implantation dosimetry for brachytherapy using Monte Carlo calculation with the EGS5 code, combined with source strength regression, was investigated. In this method, the source strength used in the EGS5 calculation is adjusted by regression so that the calculation reproduces the dose monitored with glass rod dosimeters (GRDs) on a water phantom. Experiments were performed simulating the case where one of two 125I sources of Oncoseed 6711 lacked strength by 4–48%. The calculation without regression agreed with the GRD measurement only within 26–62%, because the shortage in strength of one source was neglected. With the regression applied to reflect the strength shortage, the agreement improved to within 17–24%. This agreement is also comparable with the accuracy of the dose calculation for single-source geometry reported previously. These results suggest the validity of the dosimetry method proposed in this study. PMID:24449715
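
    The regression step described above can be pictured as an ordinary linear least-squares fit of per-seed strength factors. The sketch below is a generic illustration under that reading; the dose matrix, readings, and the 30% shortage are invented for the example, and this is not the authors' EGS5 workflow.

```python
import numpy as np

# Illustrative sketch (not the authors' code): recover per-seed strength
# factors by linear least squares. Model: measured_i = sum_j s_j * D_ij,
# where D_ij is the Monte Carlo dose at dosimeter position i from source j
# computed at nominal (unit) strength. All numbers below are hypothetical.
D = np.array([
    [1.00, 0.20],   # dose per unit strength, 4 GRD positions x 2 seeds
    [0.60, 0.55],
    [0.25, 0.95],
    [0.10, 1.10],
])
true_s = np.array([1.0, 0.7])   # second seed assumed 30% short
measured = D @ true_s           # idealised, noise-free GRD readings

s_fit, *_ = np.linalg.lstsq(D, measured, rcond=None)
print(s_fit)  # the fitted strengths reveal the shortage of seed 2
```

    Rescaling the Monte Carlo dose distribution with the fitted strengths is what lets the calculation reflect a source that is weaker than its nominal certificate value.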

  15. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump (GHP) programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  16. Methodology for Benchmarking IPsec Gateways

    Directory of Open Access Journals (Sweden)

    Adam Tisovský

    2012-08-01

    The paper analyses the forwarding performance of an IPsec gateway over the range of offered loads. It focuses on the forwarding rate and packet loss, particularly at the gateway’s performance peak and in the state of gateway overload. It explains possible performance degradation when the gateway is overloaded by excessive offered load. The paper further evaluates different approaches for obtaining forwarding performance parameters: the widely used throughput described in RFC 1242, the maximum forwarding rate with zero packet loss, and our proposed equilibrium throughput. According to our observations, equilibrium throughput may be the most universal parameter for benchmarking security gateways, as the others may depend on the duration of test trials. Employing equilibrium throughput would also greatly shorten the time required for benchmarking. Lastly, the paper presents a methodology and a hybrid step/binary search algorithm for obtaining the value of equilibrium throughput.
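
    A hybrid step/binary search of the kind the abstract mentions can be sketched generically: step the offered load coarsely upward until loss appears, then binary-search the resulting bracket. The `measure_loss` probe below is a hypothetical stand-in for a real trial against the device under test; the paper's actual algorithm may differ in detail.

```python
# Hybrid step/binary search for the highest offered load a gateway forwards
# without packet loss (a generic sketch, not the paper's implementation).
def find_equilibrium(measure_loss, max_load, coarse_steps=10, tol=1.0):
    # Phase 1: coarse upward steps until the first lossy load is found.
    step = max_load / coarse_steps
    lo = 0.0
    load = step
    while load <= max_load and not measure_loss(load):
        lo = load           # highest loss-free load seen so far
        load += step
    hi = min(load, max_load)
    # Phase 2: binary search between the loss-free and lossy bounds.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if measure_loss(mid):
            hi = mid
        else:
            lo = mid
    return lo

# Toy device under test that starts dropping packets above 437 Mbit/s:
print(find_equilibrium(lambda mbps: mbps > 437.0, max_load=1000.0))
```

    The coarse phase keeps the number of long trials small, while the binary phase refines the estimate to the chosen tolerance; this two-phase structure is what shortens the benchmarking time relative to a pure linear sweep.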

  17. How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction

    Science.gov (United States)

    Pappenberger, F.; Ramos, M. H.; Cloke, H. L.; Wetterhall, F.; Alfieri, L.; Bogner, K.; Mueller, A.; Salamon, P.

    2015-03-01

    The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment, but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can best be used to evaluate their probabilistic forecasts. In this study, it is shown that the calculated forecast skill can vary depending on the benchmark selected, and that the selection of a benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system and a suite of 23 different methods to derive benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are 'toughest to beat' and so give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon. Evaluating against an observed discharge proxy, the benchmark that has the most utility for EFAS and avoids the most naïve skill across different hydrological situations is found to be meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long-term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system and their use produces much naïve skill. When decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, have the most skill discrimination. They are also good at discriminating skill in low flows and for all
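
    A common way to express "beating a benchmark" with the CRPS is the skill score CRPSS = 1 - CRPS_forecast / CRPS_benchmark, positive when the forecast beats the benchmark. The sketch below illustrates this with the standard empirical ensemble CRPS and invented numbers; it is not the EFAS implementation.

```python
import numpy as np

# Empirical CRPS of an ensemble forecast, energy form:
# CRPS = E|X - y| - 0.5 * E|X - X'|, estimated over ensemble members.
def crps_ensemble(members, obs):
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

def skill_score(forecast_crps, benchmark_crps):
    # CRPSS relative to a benchmark; > 0 means the forecast is better.
    return 1.0 - forecast_crps / benchmark_crps

obs = 3.0
ensemble = [2.7, 3.1, 3.4, 2.9]       # sharp, well-centred forecast
climatology = [0.5, 2.0, 4.0, 6.5]    # broad benchmark distribution
f = crps_ensemble(ensemble, obs)
b = crps_ensemble(climatology, obs)
print(skill_score(f, b))              # positive: forecast beats benchmark
```

    The study's point follows directly from this construction: swapping in a tougher benchmark (larger denominator becomes smaller) shrinks the skill score, so the choice of benchmark controls how skilful a system appears.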

  18. TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    Experimental results of pulse parameters and control rod worth measurements at TRIGA Mark 2 reactor in Ljubljana are presented. The measurements were performed with a completely fresh, uniform, and compact core. Only standard fuel elements with 12 wt% uranium were used. Special efforts were made to get reliable and accurate results at well-defined experimental conditions, and it is proposed to use the results as a benchmark test case for TRIGA reactors

  19. Radon in natural waters : Analytical Methods; Correlation to Environmental Parameters; Radiation Dose Estimation; and GIS Applications

    OpenAIRE

    Salih, Isam M. Musa

    2003-01-01

    Investigations of radon in natural water and its relation to physical and chemical parameters are outlined in this thesis. In particular, a method for measuring 222Rn in water at low concentrations (~20 mBq/l) is described, followed by discussions concerning the design and its application to study both radon and parameters influencing radon levels in natural waters. A topic considered is the impact of fluoride and other aquatic parameters on radon in water. Moreover, variables such as urani...

  20. Adapting benchmarking to project management : an analysis of project management processes, metrics, and benchmarking process models

    OpenAIRE

    Emhjellen, Kjetil

    1997-01-01

    Since the first publication on benchmarking in 1989 by Robert C. Camp, “Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance”, the improvement technique benchmarking has been established as an important tool in the process-focused manufacturing or production environment. The use of benchmarking has expanded to other types of industry. Benchmarking has passed the doorstep and is now in early trials in the project and construction environment....