WorldWideScience

Sample records for benchmark dose method

  1. Benchmark Dose Modeling

    Science.gov (United States)

    Finite doses are employed in experimental toxicology studies. Under the traditional methodology, the point of departure (POD) value for low dose extrapolation is identified as one of these doses. Dose spacing necessarily precludes a more accurate description of the POD value. ...

  2. Simple benchmark for complex dose finding studies.

    Science.gov (United States)

    Cheung, Ying Kuen

    2014-06-01

    While a general goal of early phase clinical studies is to identify an acceptable dose for further investigation, modern dose finding studies and designs are highly specific to individual clinical settings. In addition, as outcome-adaptive dose finding methods often involve complex algorithms, it is crucial to have diagnostic tools to evaluate the plausibility of a method's simulated performance and the adequacy of the algorithm. In this article, we propose a simple technique that provides an upper limit, or a benchmark, of accuracy for dose finding methods for a given design objective. The proposed benchmark is nonparametric optimal in the sense of O'Quigley et al. (2002, Biostatistics 3, 51-56), and is demonstrated by examples to be a practical accuracy upper bound for model-based dose finding methods. We illustrate the implementation of the technique in the context of phase I trials that consider multiple toxicities and phase I/II trials where dosing decisions are based on both toxicity and efficacy, and apply the benchmark to several clinical examples considered in the literature. By comparing the operating characteristics of a dose finding method to those of the benchmark, we can form quick initial assessments of whether the method is adequately calibrated and evaluate its sensitivity to the dose-outcome relationships.
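
    The complete-information idea behind such a benchmark is easy to simulate. The sketch below is a minimal single-toxicity version (the article also treats multiple toxicities and efficacy); the function name and all numbers are illustrative assumptions, not the paper's:

```python
import numpy as np

def nonparametric_benchmark(true_tox, target, n_patients, n_sims=10_000, seed=0):
    """Accuracy upper bound for dose selection: each simulated patient carries
    a latent tolerance, so their toxicity outcome is known at EVERY dose."""
    rng = np.random.default_rng(seed)
    true_tox = np.asarray(true_tox, float)
    best = np.argmin(np.abs(true_tox - target))        # the correct dose
    hits = 0
    for _ in range(n_sims):
        u = rng.uniform(size=n_patients)               # latent tolerances
        tox = u[:, None] < true_tox[None, :]           # outcomes at all doses
        est = tox.mean(axis=0)                         # empirical rate per dose
        hits += int(np.argmin(np.abs(est - target)) == best)
    return hits / n_sims

# e.g. five dose levels, 25% target toxicity rate, 20 patients per trial
print(nonparametric_benchmark([0.05, 0.12, 0.25, 0.40, 0.55], 0.25, 20))
```

    Under these assumptions the returned selection rate is the yardstick against which a model-based design's accuracy would be compared.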

  3. Entropy-based benchmarking methods

    OpenAIRE

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth preservation method of Causey and Trager (1981) may violate this principle, while its requirements are explicitly taken into account in the proposed entropy-based benchmarking methods. Our illustrati...
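
    Movement preservation is easy to make concrete. The sketch below implements the classic additive first-difference Denton variant that the abstract compares against (not the entropy-based method it proposes): adjust a quarterly indicator so annual sums hit published totals while distorting period-to-period movement as little as possible. All names and numbers are illustrative:

```python
import numpy as np

def denton_additive(indicator, annual_totals, periods_per_year=4):
    """Additive first-difference Denton benchmarking:
    minimize sum_t [ (x_t - s_t) - (x_{t-1} - s_{t-1}) ]^2
    subject to the annual sums of x matching annual_totals."""
    s = np.asarray(indicator, float)
    n, m = s.size, len(annual_totals)
    assert n == m * periods_per_year  # simplifying assumption of this sketch
    # first-difference matrix D, shape (n-1, n)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]
    # aggregation matrix A, shape (m, n): each year sums its periods
    A = np.zeros((m, n))
    for y in range(m):
        A[y, y * periods_per_year:(y + 1) * periods_per_year] = 1.0
    # KKT system of the equality-constrained least-squares problem
    Q = D.T @ D
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([Q @ s, np.asarray(annual_totals, float)])
    return np.linalg.lstsq(K, rhs, rcond=None)[0][:n]

quarters = [98, 100, 102, 100, 99, 101, 103, 101]
print(denton_additive(quarters, annual_totals=[410, 420]))
```

    A sign-volatile series is exactly where such movement-only criteria can fail, which motivates the sign-preservation requirement argued for above.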

  4. Effects of Exposure Imprecision on Estimation of the Benchmark Dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    Environmental epidemiology; exposure measurement error; effect of prenatal mercury exposure; exposure standards; benchmark dose

  5. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth pre...

  6. Effects of exposure imprecision on estimation of the benchmark dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    2004-01-01

    ...approach is one of the most widely used methods for development of exposure limits. An important advantage of this approach is that it can be applied to observational data. However, in this type of data, exposure markers are seldom measured without error. It is shown that, if the exposure error is ignored, then the benchmark approach produces results that are biased toward higher and less protective levels. It is therefore important to take exposure measurement error into account when calculating benchmark doses. Methods that allow this adjustment are described and illustrated in data from an epidemiological study...
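
    The direction of this bias can be reproduced in a few lines. A minimal sketch (all numbers illustrative) in which classical measurement error attenuates the fitted slope of a linear dose-response, so the dose producing a fixed change in the mean, a simple BMD, comes out too high:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
beta0, beta1 = 10.0, -2.0               # true dose-response: y = beta0 + beta1*x
x = rng.normal(1.0, 0.5, n)             # true exposure
y = beta0 + beta1 * x + rng.normal(0.0, 1.0, n)
w = x + rng.normal(0.0, 0.5, n)         # observed exposure with classical error

delta = 1.0                             # benchmark response: change in the mean
bmd_true = delta / abs(beta1)
slope_naive = np.polyfit(w, y, 1)[0]    # attenuated toward zero
bmd_naive = delta / abs(slope_naive)
print(bmd_true, bmd_naive)              # bmd_naive > bmd_true: less protective
```

    With a reliability ratio of 0.5, as here, the naive BMD is roughly double the true one, i.e. biased toward a higher and less protective level, which is the effect the abstract describes.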

  7. Benchmarking of the dose planning method (DPM) Monte Carlo code using electron beams from a racetrack microtron.

    Science.gov (United States)

    Chetty, Indrin J; Moran, Jean M; McShan, Daniel L; Fraass, Benedick A; Wilderman, Scott J; Bielajew, Alex F

    2002-06-01

    A comprehensive set of measurements and calculations has been conducted to investigate the accuracy of the Dose Planning Method (DPM) Monte Carlo code for dose calculations from 10 and 50 MeV scanned electron beams produced from a racetrack microtron. Central axis depth dose measurements and a series of profile scans at various depths were acquired in a water phantom using a Scanditronix type RK ion chamber. Source spatial distributions for the Monte Carlo calculations were reconstructed from in-air ion chamber measurements carried out across the two-dimensional beam profile at 100 cm downstream from the source. The in-air spatial distributions were found to have full width at half maximum of 4.7 and 1.3 cm, at 100 cm from the source, for the 10 and 50 MeV beams, respectively. Energy spectra for the 10 and 50 MeV beams were determined by simulating the components of the microtron treatment head using the code MCNP4B. DPM calculations are on average within +/- 2% agreement with measurement for all depth dose and profile comparisons conducted in this study. The accuracy of the DPM code illustrated in this work suggests that DPM may be used as a valuable tool for electron beam dose calculations.
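
    A comparison like the ±2% agreement quoted above is typically made pointwise on a common depth grid. A minimal sketch of one such check; the normalization to the measured maximum is a common convention and my assumption here, not a detail taken from the paper:

```python
import numpy as np

def percent_dose_difference(calc, meas):
    """Pointwise % difference of a calculated depth-dose curve against
    measurement, normalized to the measured maximum dose."""
    calc, meas = np.asarray(calc, float), np.asarray(meas, float)
    return 100.0 * (calc - meas) / meas.max()

# toy depth-dose curves on a common grid (cm)
depth = np.linspace(0.0, 5.0, 11)
meas = np.exp(-0.5 * (depth - 2.0) ** 2)
calc = meas * 1.01
print(np.abs(percent_dose_difference(calc, meas)).max())  # within +/- 2%?
```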

  8. The contextual benchmark method: benchmarking e-government services

    NARCIS (Netherlands)

    Jansen, Jurjen; Vries, de Sjoerd; Schaik, van Paul

    2010-01-01

    This paper offers a new method for benchmarking e-Government services. Government organizations no longer doubt the need to deliver their services on line. Instead, the question that is more relevant is how well the electronic services offered by a particular organization perform in comparison with...

  9. Benchmarking Learning and Teaching: Developing a Method

    Science.gov (United States)

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  10. 77 FR 36533 - Notice of Availability of the Benchmark Dose Technical Guidance

    Science.gov (United States)

    2012-06-19

    ...From the Federal Register Online via the Government Publishing Office. ENVIRONMENTAL PROTECTION AGENCY. Notice of Availability of the Benchmark Dose Technical Guidance. AGENCY: Environmental Protection... announcing the availability of the Benchmark Dose Technical Guidance (BMD). This document was developed as...

  11. Benchmark dose of lead inducing anemia at the workplace.

    Science.gov (United States)

    Karita, Kanae; Yano, Eiji; Dakeishi, Miwako; Iwata, Toyoto; Murata, Katsuyuki

    2005-08-01

    To estimate the critical dose of lead inducing anemia in humans, the effects of lead on hemoglobin (Hb) and hematocrit (Hct) levels and red blood cell (RBC) count were examined in 388 male lead-exposed workers with blood lead (BPb) levels of 0.05-5.5 (mean 1.3) micromol/L by using the benchmark dose (BMD) approach. The BPb level was significantly related to Hb (regression coefficient beta=-0.276), RBC (beta=-11.35), and Hct (beta=-0.563) among the workers, and the mean BPb level was higher in workers with anemia (1.85 micromol/L), based on the WHO criteria, than in those without anemia (1.26 micromol/L). The benchmark dose levels of BPb (i.e., lower 95% confidence limits of BMD), calculated from the K-power model set at an abnormal probability of 5% in unexposed workers and an excess risk of 5% in exposed workers, were estimated to be 0.94 micromol/L (19.5 microg/dl) for Hb, 0.94 micromol/L (19.4 microg/dl) for RBC, and 1.43 micromol/L (29.6 microg/dl) for Hct. These findings suggest that reduction in hematopoietic indicators may be initiated at BPbs below the level currently considered without effect.

  12. Benchmarking of methods for genomic taxonomy

    DEFF Research Database (Denmark)

    Larsen, Mette Voldby; Cosentino, Salvatore; Lukjancenko, Oksana;

    2014-01-01

    ...Nevertheless, the method has been found to have a number of shortcomings. In the current study, we trained and benchmarked five methods for whole-genome sequence-based prokaryotic species identification on a common data set of complete genomes: (i) SpeciesFinder, which is based on the complete 16S rRNA gene...

  13. Benchmarking analytical calculations of proton doses in heterogeneous matter.

    Science.gov (United States)

    Ciangaru, George; Polf, Jerimy C; Bues, Martin; Smith, Alfred R

    2005-12-01

    A proton dose computational algorithm performing an analytical superposition of infinitely narrow proton beamlets (ASPB) is introduced. The algorithm uses the standard pencil beam technique of laterally distributing the central axis broad beam doses according to the Moliere scattering theory extended to slablike varying density media. The purpose of this study was to determine the accuracy of our computational tool by comparing it with experimental and Monte Carlo (MC) simulation data as benchmarks. In the tests, parallel wide beams of protons were scattered in water phantoms containing embedded air and bone materials with simple geometrical forms and spatial dimensions of a few centimeters. For homogeneous water and bone phantoms, the proton doses we calculated with the ASPB algorithm were found very comparable to experimental and MC data. For layered bone slab inhomogeneity in water, the comparison between our analytical calculation and the MC simulation showed reasonable agreement, even when the inhomogeneity was placed at the Bragg peak depth. There also was reasonable agreement for the parallelepiped bone block inhomogeneity placed at various depths, except for cases in which the bone was located in the region of the Bragg peak, when discrepancies were as large as more than 10%. When the inhomogeneity was in the form of abutting air-bone slabs, discrepancies of as much as 8% occurred in the lateral dose profiles on the air cavity side of the phantom. Additionally, the analytical depth-dose calculations disagreed with the MC calculations within 3% of the Bragg peak dose, at the entry and midway depths in the phantom. The distal depth-dose 20%-80% fall-off widths and ranges calculated with our algorithm and the MC simulation were generally within 0.1 cm of agreement. The analytical lateral-dose profile calculations showed smaller (by less than 0.1 cm) 20%-80% penumbra widths and shorter fall-off tails than did those calculated by the MC simulations. Overall...

  14. Dose-response modeling : Evaluation, application, and development of procedures for benchmark dose analysis in health risk assessment of chemical substances

    OpenAIRE

    Sand, Salomon

    2005-01-01

    In this thesis, dose-response modeling and procedures for benchmark dose (BMD) analysis in health risk assessment of chemical substances have been investigated. The BMD method has been proposed as an alternative to the NOAEL (no-observed-adverse-effect-level) approach in health risk assessment of non-genotoxic agents. According to the BMD concept, a dose-response model is fitted to data and the BMD is defined as the dose causing a predetermined change in response. A lowe...

  15. Quality Assurance Testing of Version 1.3 of U.S. EPA Benchmark Dose Software (Presentation)

    Science.gov (United States)

    EPA benchmark dose software (BMDS) is used to evaluate chemical dose-response data in support of Agency risk assessments, and must therefore be dependable. Quality assurance testing methods developed for BMDS were designed to assess model dependability with respect to curve-fitt...

  16. SPICE benchmark for global tomographic methods

    Science.gov (United States)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrization, scale resolution and theoretical approach. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model is carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed one complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal anisotropy), attenuation, density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high-fidelity anisotropic modelling was performed by using a state-of-the-art anisotropic anelastic modelling code, the coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events, with three-component seismograms recorded by 256 stations. Because of the limitation of the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10 500 s. The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal correction to recover the shallow structures, the loss of resolution with depth, the smearing effect, both horizontal and vertical, the inaccuracy of amplitude of isotropic S-wave velocity variation, the difficulty of retrieving the magnitude of azimuthal...

  17. Benchmark dose profiles for joint-action continuous data in quantitative risk assessment.

    Science.gov (United States)

    Deutsch, Roland C; Piegorsch, Walter W

    2013-09-01

    Benchmark analysis is a widely used tool in biomedical and environmental risk assessment. Therein, estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a prespecified benchmark response (BMR) is well understood for the case of an adverse response to a single stimulus. For cases where two agents are studied in tandem, however, the benchmark approach is far less developed. This paper demonstrates how the benchmark modeling paradigm can be expanded from the single-agent setting to joint-action, two-agent studies. Focus is on continuous response outcomes. Extending the single-exposure setting, representations of risk are based on a joint-action dose-response model involving both agents. Based on such a model, the concept of a benchmark profile, a two-dimensional analog of the single-dose BMD at which both agents achieve the specified BMR, is defined for use in quantitative risk characterization and assessment.
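
    For a concrete picture, a benchmark profile can be traced by fixing one agent's dose and solving for the other's. The sketch below assumes a simple linear joint-action mean model with an interaction term; this model form and all coefficients are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def benchmark_profile(beta1, beta2, beta12, delta, d1_grid):
    """Curve of (d1, d2) pairs at which the mean change
    beta1*d1 + beta2*d2 + beta12*d1*d2 equals the benchmark response delta."""
    d1 = np.asarray(d1_grid, float)
    d2 = (delta - beta1 * d1) / (beta2 + beta12 * d1)  # denominator > 0 here
    d2[d2 < 0] = np.nan                                # outside the dose region
    return d1, d2

d1, d2 = benchmark_profile(0.5, 0.8, 0.1, delta=1.0, d1_grid=np.linspace(0, 2, 9))
print(np.c_[d1, d2])
```

    Every (d1, d2) pair on the returned curve attains the same benchmark response, which is what makes the profile a two-dimensional analog of a single BMD.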

  18. Evaluation of the benchmark dose for point of departure determination for a variety of chemical classes in applied regulatory settings.

    Science.gov (United States)

    Izadi, Hoda; Grundy, Jean E; Bose, Ranjan

    2012-05-01

    Repeated-dose studies received by the New Substances Assessment and Control Bureau (NSACB) of Health Canada are used to provide hazard information toward risk calculation. These studies provide a point of departure (POD), traditionally the NOAEL or LOAEL, which is used to extrapolate the quantity of substance above which adverse effects can be expected in humans. This project explored the use of benchmark dose (BMD) modeling as an alternative to this approach for studies with few dose groups. Continuous data from oral repeated-dose studies for chemicals previously assessed by NSACB were reanalyzed using U.S. EPA benchmark dose software (BMDS) to determine the BMD and BMD 95% lower confidence limit (BMDL(05)) for each endpoint critical to NOAEL or LOAEL determination for each chemical. Endpoint-specific benchmark dose-response levels, indicative of adversity, were consistently applied. An overall BMD and BMDL(05) were calculated for each chemical using the geometric mean. The POD obtained from benchmark analysis was then compared with the traditional toxicity thresholds originally used for risk assessment. The BMD and BMDL(05) generally were higher than the NOAEL, but lower than the LOAEL. BMDL(05) was generally constant at 57% of the BMD. Benchmark provided a clear advantage in health risk assessment when a LOAEL was the only POD identified, or when dose groups were widely distributed. Although the benchmark method cannot always be applied, in the selected studies with few dose groups it provided a more accurate estimate of the real no-adverse-effect level of a substance.
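
    The pooling step described above is a one-liner. A minimal sketch with made-up per-endpoint values:

```python
import numpy as np

# hypothetical BMDL(05) values for a chemical's critical endpoints (mg/kg bw/day)
endpoint_bmdl05 = np.array([12.0, 18.0, 25.0])
overall_bmdl05 = np.exp(np.log(endpoint_bmdl05).mean())  # geometric mean
print(round(overall_bmdl05, 1))                          # ~17.5
```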

  19. A unified framework for benchmark dose estimation applied to mixed models and model averaging

    DEFF Research Database (Denmark)

    Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.

    2013-01-01

    This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both...

  20. Benchmarking of methods for genomic taxonomy.

    Science.gov (United States)

    Larsen, Mette V; Cosentino, Salvatore; Lukjancenko, Oksana; Saputra, Dhany; Rasmussen, Simon; Hasman, Henrik; Sicheritz-Pontén, Thomas; Aarestrup, Frank M; Ussery, David W; Lund, Ole

    2014-05-01

    One of the first issues that emerges when a prokaryotic organism of interest is encountered is the question of what it is--that is, which species it is. The 16S rRNA gene formed the basis of the first method for sequence-based taxonomy and has had a tremendous impact on the field of microbiology. Nevertheless, the method has been found to have a number of shortcomings. In the current study, we trained and benchmarked five methods for whole-genome sequence-based prokaryotic species identification on a common data set of complete genomes: (i) SpeciesFinder, which is based on the complete 16S rRNA gene; (ii) Reads2Type that searches for species-specific 50-mers in either the 16S rRNA gene or the gyrB gene (for the Enterobacteriaceae family); (iii) the ribosomal multilocus sequence typing (rMLST) method that samples up to 53 ribosomal genes; (iv) TaxonomyFinder, which is based on species-specific functional protein domain profiles; and finally (v) KmerFinder, which examines the number of cooccurring k-mers (substrings of k nucleotides in DNA sequence data). The performances of the methods were subsequently evaluated on three data sets of short sequence reads or draft genomes from public databases. In total, the evaluation sets constituted sequence data from more than 11,000 isolates covering 159 genera and 243 species. Our results indicate that methods that sample only chromosomal, core genes have difficulties in distinguishing closely related species which only recently diverged. The KmerFinder method had the overall highest accuracy and correctly identified from 93% to 97% of the isolates in the evaluation sets.
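
    As a toy illustration of the co-occurring k-mer idea behind KmerFinder (the real tool's k, scoring scheme, and databases differ), species assignment can be as simple as counting shared k-mers:

```python
def kmers(seq, k):
    """All length-k substrings of a DNA string."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def identify(query, references, k=8):
    """Score each reference genome by k-mers shared with the query."""
    q = kmers(query, k)
    scores = {name: len(q & kmers(seq, k)) for name, seq in references.items()}
    return max(scores, key=scores.get), scores

refs = {  # toy "genomes"
    "species_A": "ACGTACGTGGTTAACCGGTTACGTACGT",
    "species_B": "TTTTCCCCGGGGAAAATTTTCCCCGGGG",
}
print(identify("ACGTACGTGGTTAACC", refs))
```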

  1. Performance Benchmarking of Fast Multipole Methods

    KAUST Repository

    Al-Harthi, Noha A.

    2013-06-01

    The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity – vector length, core count, and MPI process. Intel’s Xeon Phi coprocessor, NVIDIA’s Kepler GPU, and IBM’s BlueGene/Q all have a byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing such as FFT are continuously evolving to keep up with this trend in hardware. In the meantime it is also necessary to invest in novel algorithms that are more suitable for computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex-based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm while having an operational intensity that is higher than a matrix-matrix multiplication. In fact, the FMM can reduce the byte/flop requirement to around 0.01, which means that it will remain compute bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q. This study aims to provide a comprehensive benchmark of a state-of-the-art FMM code “exaFMM” on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning for certain problem-size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than the asymptotics themselves, and therefore are strongly implementation and hardware...

  2. Benchmark dose profiles for joint-action quantal data in quantitative risk assessment.

    Science.gov (United States)

    Deutsch, Roland C; Piegorsch, Walter W

    2012-12-01

    Benchmark analysis is a widely used tool in public health risk analysis. Therein, estimation of minimum exposure levels, called Benchmark Doses (BMDs), that induce a prespecified Benchmark Response (BMR) is well understood for the case of an adverse response to a single stimulus. For cases where two agents are studied in tandem, however, the benchmark approach is far less developed. This article demonstrates how the benchmark modeling paradigm can be expanded from the single-dose setting to joint-action, two-agent studies. Focus is on response outcomes expressed as proportions. Extending the single-exposure setting, representations of risk are based on a joint-action dose-response model involving both agents. Based on such a model, the concept of a benchmark profile (BMP) - a two-dimensional analog of the single-dose BMD at which both agents achieve the specified BMR - is defined for use in quantitative risk characterization and assessment. The resulting, joint, low-dose guidelines can improve public health planning and risk regulation when dealing with low-level exposures to combinations of hazardous agents.

  3. Standardizing Benchmark Dose Calculations to Improve Science-Based Decisions in Human Health Assessments

    Science.gov (United States)

    Wignall, Jessica A.; Shapiro, Andrew J.; Wright, Fred A.; Woodruff, Tracey J.; Chiu, Weihsueh A.; Guyton, Kathryn Z.

    2014-01-01

    Background: Benchmark dose (BMD) modeling computes the dose associated with a prespecified response level. While offering advantages over traditional points of departure (PODs), such as no-observed-adverse-effect-levels (NOAELs), BMD methods have lacked consistency and transparency in application, interpretation, and reporting in human health assessments of chemicals. Objectives: We aimed to apply a standardized process for conducting BMD modeling to reduce inconsistencies in model fitting and selection. Methods: We evaluated 880 dose–response data sets for 352 environmental chemicals with existing human health assessments. We calculated benchmark doses and their lower limits [10% extra risk, or change in the mean equal to 1 SD (BMD/L10/1SD)] for each chemical in a standardized way with prespecified criteria for model fit acceptance. We identified study design features associated with acceptable model fits. Results: We derived values for 255 (72%) of the chemicals. Batch-calculated BMD/L10/1SD values were significantly and highly correlated (R2 of 0.95 and 0.83, respectively, n = 42) with PODs previously used in human health assessments, with values similar to reported NOAELs. Specifically, the median ratio of BMDs10/1SD:NOAELs was 1.96, and the median ratio of BMDLs10/1SD:NOAELs was 0.89. We also observed a significant trend of increasing model viability with increasing number of dose groups. Conclusions: BMD/L10/1SD values can be calculated in a standardized way for use in health assessments on a large number of chemicals and critical effects. This facilitates the exploration of health effects across multiple studies of a given chemical or, when chemicals need to be compared, providing greater transparency and efficiency than current approaches. Citation: Wignall JA, Shapiro AJ, Wright FA, Woodruff TJ, Chiu WA, Guyton KZ, Rusyn I. 2014. Standardizing benchmark dose calculations to improve science-based decisions in human health assessments. Environ Health...

  4. Measurement Methods in the field of benchmarking

    Directory of Open Access Journals (Sweden)

    István Szűts

    2004-05-01

    In benchmarking we often come across parameters that are difficult to measure when executing comparisons or analyzing performance, yet they have to be compared and measured so as to be able to choose the best practices. The situation is similar in the case of complex, multidimensional evaluation, when the relative importance and order of the different dimensions and parameters to be evaluated have to be determined, or when the range of similar performance indicators has to be reduced for simpler comparisons. In such cases we can use the ordinal or interval scales of measurement elaborated by S. S. Stevens.

  5. Issues in benchmarking human reliability analysis methods : a literature review.

    Energy Technology Data Exchange (ETDEWEB)

    Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)

    2008-04-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  6. Application of Benchmark Dose (BMD) in Estimating Biological Exposure Limit (BEL) to Cadmium

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Objective To estimate the biological exposure limit (BEL) using benchmark dose (BMD) based on two sets of data from occupational epidemiology. Methods Cadmium-exposed workers were selected from a cadmium smelting factory and a zinc product factory. Doctors, nurses or shop assistants living in the same area served as a control group. Urinary cadmium (UCd) was used as an exposure biomarker and urinary β2-microglobulin (B2M), N-acetyl-β-D-glucosaminidase (NAG) and albumin (ALB) as effect biomarkers. All urine parameters were adjusted by urinary creatinine. The BMDS software (Version 1.3.2, U.S. EPA) was used to calculate the BMD. Results The cut-off point (abnormal value) was determined based on the upper limit of the 95% range of each effect biomarker in the control group. There was a significant dose-response relationship between the effect biomarkers (urinary B2M, NAG, and ALB) and the exposure biomarker (UCd). The BEL value was 5 μg/g creatinine with UB2M as the effect biomarker, consistent with the recommendation of WHO, and 3 μg/g creatinine with UNAG as the effect biomarker, showing that a BEL can be estimated using the BMD method. The more sensitive the biomarker used, the larger the part of the occupational population that will be protected. Conclusion BMD can be used in estimating the biological exposure limit (BEL). UNAG is a sensitive biomarker for estimating the BEL after cadmium exposure.

  7. BENCHMARKING UPGRADED HOTSPOT DOSE CALCULATIONS AGAINST MACCS2 RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Brotherton, Kevin

    2009-04-30

    The radiological consequence of interest for a documented safety analysis (DSA) is the centerline Total Effective Dose Equivalent (TEDE) incurred by the Maximally Exposed Offsite Individual (MOI) evaluated at the 95th percentile consequence level. An upgraded version of HotSpot (Version 2.07) has been developed with the capabilities to read site meteorological data and perform the necessary statistical calculations to determine the 95th percentile consequence result. These capabilities should allow HotSpot to join MACCS2 (Version 1.13.1) and GENII (Version 1.485) as radiological consequence toolbox codes in the Department of Energy (DOE) Safety Software Central Registry. Using the same meteorological data file, scenarios involving a one curie release of 239Pu were modeled in both HotSpot and MACCS2. Several sets of release conditions were modeled, and the results compared. In each case, input parameter specifications for each code were chosen to match one another as much as the codes would allow. The results from the two codes are in excellent agreement. Slight differences observed in results are explained by algorithm differences.

  8. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns...

  9. Benchmarking: a method for continuous quality improvement in health.

    Science.gov (United States)

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-05-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields of application and examples of how it has been used in the healthcare sector. Benchmarking is often thought to consist simply of comparing indicators and is not perceived in its entirety, that is, as a tool based on voluntary and active collaboration among several organizations to create a spirit of competition and to apply best practices. The key feature of benchmarking is its integration within a comprehensive and participatory policy of continuous quality improvement (CQI). Conditions for successful benchmarking focus essentially on careful preparation of the process, monitoring of the relevant indicators, staff involvement and inter-organizational visits. Compared to methods previously implemented in France (CQI and collaborative projects), benchmarking has specific features that set it apart as a healthcare innovation. This is especially true for healthcare or medical-social organizations, as the principle of inter-organizational visiting is not part of their culture. Thus, this approach will need to be assessed for feasibility and acceptability before it is more widely promoted.

  10. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    Science.gov (United States)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  11. A biosegmentation benchmark for evaluation of bioimage analysis methods

    Directory of Open Access Journals (Sweden)

    Kvilekval Kristian

    2009-11-01

    Background: We present a biosegmentation benchmark that includes infrastructure, datasets with associated ground truth, and validation methods for biological image analysis. The primary motivation for creating this resource comes from the fact that it is very difficult, if not impossible, for an end-user to choose from the wide range of segmentation methods available in the literature for a particular bioimaging problem. No single algorithm is likely to be equally effective on a diverse set of images, and each method has its own strengths and limitations. We hope that our benchmark resource will be of considerable help both to bioimaging researchers looking for novel image processing methods and to image processing researchers exploring application of their methods to biology. Results: Our benchmark consists of different classes of images and ground truth data, ranging in scale from subcellular and cellular to tissue level, each of which poses its own set of challenges to image analysis. The associated ground truth data can be used to evaluate the effectiveness of different methods, to improve methods and to compare results. Standard evaluation methods and some analysis tools are integrated into a database framework that is available online at http://bioimage.ucsb.edu/biosegmentation/. Conclusion: This online benchmark will facilitate integration and comparison of image analysis methods for bioimages. While the primary focus is on biological images, we believe that the dataset and infrastructure will be of interest to researchers and developers working with biological image analysis, image segmentation and object tracking in general.

  12. Image analysis benchmarking methods for high-content screen design.

    Science.gov (United States)

    Fuller, C J; Straight, A F

    2010-05-01

    The recent development of complex chemical and small interfering RNA (siRNA) collections has enabled large-scale cell-based phenotypic screening. High-content and high-throughput imaging are widely used methods to record phenotypic data after chemical and small interfering RNA treatment, and numerous image processing and analysis methods have been used to quantify these phenotypes. Currently, there are no standardized methods for evaluating the effectiveness of new and existing image processing and analysis tools for an arbitrary screening problem. We generated a series of benchmarking images that represent commonly encountered variation in high-throughput screening data and used these image standards to evaluate the robustness of five different image analysis methods to changes in signal-to-noise ratio, focal plane, cell density and phenotype strength. The analysis methods that were most reliable, in the presence of experimental variation, required few cells to accurately distinguish phenotypic changes between control and experimental data sets. We conclude that by applying these simple benchmarking principles an a priori estimate of the image acquisition requirements for phenotypic analysis can be made before initiating an image-based screen. Application of this benchmarking methodology provides a mechanism to significantly reduce data acquisition and analysis burdens and to improve data quality and information content.

  13. Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark.

    Science.gov (United States)

    Renner, F; Wulff, J; Kapsch, R-P; Zink, K

    2015-10-01

    There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without normalization, which may cause some quantities to cancel. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per electron incident on the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the Expression of Uncertainty in Measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity which are known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from literature. The significant uncertainty contributions are identified as...
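
    The sensitivity-coefficient step described above follows the standard first-order GUM recipe, u_c^2 = sum_i (df/dx_i)^2 u_i^2. Below is a generic sketch with finite-difference sensitivities; the model function and all numbers are placeholders, not quantities from this experiment:

```python
import numpy as np

def gum_uncertainty(f, x, u, rel_step=1e-6):
    """First-order GUM propagation: combined standard uncertainty of f(x)
    from standard uncertainties u, using central-difference sensitivities."""
    x, u = np.asarray(x, float), np.asarray(u, float)
    c = np.empty_like(x)
    for i in range(x.size):
        h = rel_step * max(abs(x[i]), 1.0)
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        c[i] = (f(xp) - f(xm)) / (2.0 * h)   # sensitivity coefficient df/dx_i
    return float(np.sqrt(np.sum((c * u) ** 2)))

# e.g. a result formed as a product/quotient of three measured factors
f = lambda v: v[0] * v[1] / v[2]
print(gum_uncertainty(f, x=[1.0, 2.0, 4.0], u=[0.01, 0.04, 0.02]))
```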

  14. Concordance of transcriptional and apical benchmark dose levels for conazole-induced liver effects in mice.

    Science.gov (United States)

    Bhat, Virunya S; Hester, Susan D; Nesnow, Stephen; Eastmond, David A

    2013-11-01

    The ability to anchor chemical class-based gene expression changes to phenotypic lesions and to describe these changes as a function of dose and time informs mode-of-action determinations and improves quantitative risk assessments. Previous global expression profiling identified a 330-probe cluster differentially expressed and commonly responsive to 3 hepatotumorigenic conazoles (cyproconazole, epoxiconazole, and propiconazole) at 30 days. Extended to 2 more conazoles (triadimefon and myclobutanil), the present assessment encompasses 4 tumorigenic and 1 nontumorigenic conazole. Transcriptional benchmark dose levels (BMDL(T)) were estimated for a subset of the cluster with dose-responsive behavior and a ≥ 5-fold increase or decrease in signal intensity at the highest dose. These genes primarily encompassed CAR/RXR activation, P450 metabolism, liver hypertrophy/glutathione depletion, LPS/IL-1-mediated inhibition of RXR, and NRF2-mediated oxidative stress pathways. Median BMDL(T) estimates from the subset were concordant (within a factor of 2.4) with apical benchmark doses (BMDL(A)) for increased liver weight at 30 days for the 5 conazoles. The 30-day median BMDL(T) estimates were within one-half order of magnitude of the chronic BMDL(A) for hepatocellular tumors. Potency differences seen in the dose-responsive transcription of certain phase II metabolism, bile acid detoxification, and lipid oxidation genes mirrored each conazole's tumorigenic potency. The 30-day BMDL(T) corresponded to tumorigenic potency on a milligram per kilogram day basis with cyproconazole > epoxiconazole > propiconazole > triadimefon > myclobutanil (nontumorigenic). These results support the utility of measuring short-term gene expression changes to inform quantitative risk assessments from long-term exposures.

  15. Development of a chronic noncancer oral reference dose and drinking water screening level for sulfolane using benchmark dose modeling.

    Science.gov (United States)

    Thompson, Chad M; Gaylor, David W; Tachovsky, J Andrew; Perry, Camarie; Carakostas, Michael C; Haws, Laurie C

    2013-12-01

    Sulfolane is a widely used industrial solvent that is often used for gas treatment (sour gas sweetening, hydrogen sulfide removal from shale and coal processes, etc.) and in the manufacture of polymers and electronics, and may be found in pharmaceuticals as a residual solvent used in the manufacturing processes. Sulfolane is considered a high production volume chemical with worldwide production around 18 000-36 000 tons per year. Given that sulfolane has been detected as a contaminant in groundwater, an important potential route of exposure is tap water ingestion. Because there are currently no federal drinking water standards for sulfolane in the USA, we developed a noncancer oral reference dose (RfD) based on benchmark dose modeling, as well as a tap water screening value that is protective of ingestion. Review of the available literature suggests that sulfolane is not likely to be mutagenic, clastogenic or carcinogenic, or pose reproductive or developmental health risks except perhaps at very high exposure concentrations. RfD values derived using benchmark dose modeling were 0.01-0.04 mg/kg per day, although modeling of developmental endpoints resulted in higher values, approximately 0.4 mg/kg per day. The lowest, most conservative, RfD of 0.01 mg/kg per day was based on reduced white blood cell counts in female rats. This RfD was used to develop a tap water screening level that is protective of ingestion, viz. 365 µg/L. It is anticipated that these values, along with the hazard identification and dose-response modeling described herein, should be informative for risk assessors and regulators interested in setting health-protective drinking water guideline values for sulfolane.
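
    The final step from an RfD to a tap water screening level is a standard exposure calculation. The sketch below uses hypothetical adult exposure factors (the abstract does not state the inputs behind its 365 µg/L figure), so it reproduces only the order of magnitude:

```python
# Hypothetical exposure factors: adult body weight, tap water intake, and
# relative source contribution are assumptions, not values from the paper.
rfd = 0.01     # mg/kg per day, from the abstract
bw = 70.0      # kg (assumption)
intake = 2.0   # L/day (assumption)
rsc = 1.0      # relative source contribution (assumption)

screening_ug_per_L = rfd * bw / intake * rsc * 1000.0
print(screening_ug_per_L)  # ~350 ug/L with these inputs; the paper reports 365
```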

  16. Determination of damage in bone metabolism caused by co-exposure to fluoride and arsenic using benchmark dose method in Chinese population

    Institute of Scientific and Technical Information of China (English)

    曾奇兵; 刘云; 洪峰; 杨鋆; 喻仙

    2012-01-01

    Objective To explore the biological exposure limits for bone metabolism injury with the benchmark dose method, for the determination of potential risk associated with chronic co-exposure to fluoride and arsenic from coal burning in a Chinese population, and to provide a bone-damage reference for preventing harm to human health from fluoride and arsenic exposure. Methods The benchmark dose (BMD) and the lower confidence limit of the benchmark dose (BMDL) of urinary fluoride and urinary arsenic in the exposed population were calculated using BMDS Version 2.1.2. Results For bone metabolism injury induced by combined fluoride and arsenic exposure, the BMD and BMDL of urinary fluoride were 1.96 mg/g creatinine and 1.32 mg/g creatinine, and the BMD and BMDL of urinary arsenic were 120.11 μg/g creatinine and 94.83 μg/g creatinine. Conclusion The suggested biological exposure limits of urinary fluoride and urinary arsenic for bone metabolism injury under chronic co-exposure to fluoride and arsenic are 1.32 mg/g creatinine and 94.83 μg/g creatinine, respectively.

  17. Dose-response assessment using the benchmark dose approach of changes in hepatic EROD activity for individual polychlorinated biphenyl congeners

    Energy Technology Data Exchange (ETDEWEB)

    Fattore, E.; Fanelli, R. ['Mario Negri' Institute for Pharmacological Research, Milan (Italy)]; Chu, I. [Safe Environments Programme, Healthy Environments and Consumer Safety Branch, Tunney's Pasture, Ottawa, ON (Canada)]; Sand, S.; Haakansson, H. [Institute of Environmental Medicine, Karolinska Institutet, Stockholm (Sweden)]; Falk-Filippson, A. [Swedish Chemicals Inspectorate, Sundbyberg (Sweden)]

    2004-09-15

    The benchmark dose (BMD) approach was proposed as an alternative to the no-observed-adverse-effect-level (NOAEL) or the lowest-observed-adverse-effect-level (LOAEL) as the point of departure (POD) for extrapolation of data from animal studies to the low-dose human exposure situation. In the risk assessment process using the NOAEL/LOAEL parameter, the reference dose (RfD) or the acceptable daily intake (ADI) is obtained by dividing the NOAEL/LOAEL value by uncertainty factors. The uncertainty factors are incorporated in order to take into account variability in the sensitivity of different species, inter-individual differences in sensitivity within the human population, and variability in experimental data. In the BMD approach a dose-response curve is fitted to experimental data (Figure 1) and the BMD is calculated from the equation of the curve as the dose corresponding to a predetermined change in the response, defined as the benchmark response (BMR). The 95% lower confidence bound of the BMD, usually referred to as BMDL, can be used as the POD in the extrapolation process to derive an RfD or an ADI. The advantages of using the BMD approach are many. First, all the experimental data are utilized to construct the dose-response curve; second, variability and uncertainty are taken into account by incorporating standard deviations of means; and third, it represents a single methodology for cancer and noncancer endpoints. In this study the BMD methodology was applied to evaluate dose-response data for seven chlorinated biphenyl (CB) congeners (Table 1), some of which are dioxin-like while others are not. The data were obtained from subchronic dietary exposure studies in male and female Sprague Dawley rats. Elevation in ethoxyresorufin-O-deethylase (EROD) activity was selected as the biological response because it is known to be an endpoint sensitive to exposure to dioxin-like PCBs. Since this response is not an adverse effect per se, in this paper we will refer to the no...

  18. Current modeling practice may lead to falsely high benchmark dose estimates.

    Science.gov (United States)

    Ringblom, Joakim; Johanson, Gunnar; Öberg, Mattias

    2014-07-01

    Benchmark dose (BMD) modeling is increasingly used as the preferred approach to define the point-of-departure for health risk assessment of chemicals. As data are inherently variable, there is always a risk of selecting a model that defines a lower confidence bound of the BMD (BMDL) that, contrary to expectation, exceeds the true BMD. The aim of this study was to investigate how often and under what circumstances such anomalies occur under current modeling practice. Continuous data were generated from a realistic dose-effect curve by Monte Carlo simulations using four dose groups and a set of five different dose placement scenarios, group sizes between 5 and 50 animals and coefficients of variation of 5-15%. The BMD calculations were conducted using nested exponential models, as most BMD software use nested approaches. "Non-protective" BMDLs (higher than the true BMD) were frequently observed, in some scenarios reaching 80%. The phenomenon was mainly related to the selection of the non-sigmoidal exponential model (Effect = a·e^(b·dose)). In conclusion, non-sigmoid models should be used with caution as they may underestimate the risk, illustrating that awareness of the model selection process and sound identification of the point-of-departure is vital for health risk assessment.
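
    To make the model dependence concrete, here is a sketch that fits the non-sigmoidal exponential model named above and attaches a simple lower bound. Note that BMD software typically uses nested model families and profile-likelihood BMDLs, not this residual bootstrap; the data and all settings below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_bmd(dose, y, bmr=0.1):
    """Fit Effect = a*exp(b*dose) (increasing effect assumed); a relative
    change of bmr in the mean then occurs at BMD = ln(1 + bmr)/b."""
    (a, b), _ = curve_fit(lambda x, a, b: a * np.exp(b * x), dose, y, p0=(1.0, 0.1))
    return a, b, np.log1p(bmr) / b

def bootstrap_bmdl(dose, y, bmr=0.1, n_boot=1000, seed=0):
    """Percentile-bootstrap lower bound on the BMD (5th percentile,
    roughly a one-sided 95% bound)."""
    rng = np.random.default_rng(seed)
    a, b, bmd = fit_bmd(dose, y, bmr)
    fitted = a * np.exp(b * dose)
    resid = y - fitted
    bmds = []
    for _ in range(n_boot):
        y_star = fitted + rng.choice(resid, size=resid.size, replace=True)
        try:
            bmds.append(fit_bmd(dose, y_star, bmr)[2])
        except RuntimeError:          # curve_fit may fail on a resample
            continue
    return bmd, np.percentile(bmds, 5)

rng = np.random.default_rng(1)
dose = np.repeat([0.0, 1.0, 3.0, 10.0], 10)      # four dose groups, n=10 each
y = 2.0 * np.exp(0.05 * dose) + rng.normal(0.0, 0.1, dose.size)
print(bootstrap_bmdl(dose, y))                   # (BMD estimate, lower bound)
```

    Rerunning this on data simulated from a truly sigmoidal curve is the kind of exercise by which one can observe the lower bound landing above the true BMD, the anomaly the study quantifies.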

  19. Benchmarking quantum control methods on a 12-qubit system

    CERN Document Server

    Negrevergne, C; Ryan, C A; Ditty, M; Cyr-Racine, F; Power, W; Boulant, N; Havel, T; Cory, D G; Laflamme, R

    2006-01-01

    In this letter, we present an experimental benchmark of operational control methods in quantum information processors extended up to 12 qubits. We implement universal control of this large Hilbert space using two complementary approaches and discuss their accuracy and scalability. Despite decoherence, we were able to reach a 12-coherence state (or 12-qubit pseudo-pure cat state), and decode it into an 11-qubit plus one-qutrit labeled observable pseudo-pure state using liquid state nuclear magnetic resonance quantum information processors.

  20. A Consumer's Guide to Benchmark Dose Models: Results of U.S. EPA Testing of 14 Dichotomous, 8 Continuous, and 6 Developmental Models (Presentation)

    Science.gov (United States)

    Benchmark dose risk assessment software (BMDS) was designed by EPA to generate dose-response curves and facilitate the analysis, interpretation and synthesis of toxicological data. Partial results of QA/QC testing of the EPA benchmark dose software (BMDS) are presented. BMDS pr...

  1. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    Science.gov (United States)

    Shao, Kan; Gift, Jeffrey S; Setzer, R Woodrow

    2013-11-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose-response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean±standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the "hybrid" method and relative deviation approach, we first evaluate six representative continuous dose-response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates.
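
    The two BMD definitions compared in this study can be written down side by side for a simple decreasing linear model with constant variance; everything below (model, coefficients, cutoffs) is an illustrative assumption, not the paper's data:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

# assumed dose-response for a continuous endpoint: mean(d) = mu0 + beta*d
mu0, beta, sd = 100.0, -2.0, 8.0

# relative-deviation BMD: dose where the mean shifts by 5% of background
bmd_rel = 0.05 * mu0 / abs(beta)

# "hybrid" BMD: dose where P(response below an "abnormal" cutoff) rises
# from p0 = 1% in controls by an extra risk of 10%
p0, extra = 0.01, 0.10
cutoff = mu0 + norm.ppf(p0) * sd                     # 1st percentile of controls
risk = lambda d: norm.cdf((cutoff - (mu0 + beta * d)) / sd)
bmd_hyb = brentq(lambda d: risk(d) - (p0 + extra), 0.0, 100.0)
print(bmd_rel, bmd_hyb)
```

    Because the hybrid definition works through tail probabilities, it inherits the shape of the assumed response distribution, which is one way to see why the study finds it more sensitive to the normal-versus-lognormal choice.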

  2. An adaptive nonparametric method in benchmark analysis for bioassay and environmental studies.

    Science.gov (United States)

    Bhattacharya, Rabi; Lin, Lizhen

    2010-12-01

    We present a novel nonparametric method for bioassay and benchmark analysis in risk assessment, which averages isotonic MLEs based on disjoint subgroups of dosages. The asymptotic theory for the methodology is derived, showing that the MISEs (mean integrated squared errors) of the estimates of both the dose-response curve F and its inverse F^(-1) achieve the optimal rate O(N^(-4/5)). Also, we compute the asymptotic distribution of the estimate ζ̂_p of the effective dosage ζ_p = F^(-1)(p), which is shown to have an optimally small asymptotic variance.
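
    The isotonic MLE at the core of this method is computable with the classic pool-adjacent-violators algorithm. The sketch below fits a single monotone dose-response to quantal rates and inverts it for an effective dosage; the paper's refinement, averaging isotonic MLEs over disjoint dosage subgroups, is omitted, and the data are made up:

```python
import numpy as np

def pava(y, w):
    """Pool-adjacent-violators: weighted isotonic (nondecreasing) fit,
    which is also the MLE for monotone binomial response rates."""
    vals, wts, sizes = [], [], []
    for yi, wi in zip(map(float, y), map(float, w)):
        vals.append(yi); wts.append(wi); sizes.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:   # merge violating blocks
            pooled = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / (wts[-2] + wts[-1])
            wts[-2] += wts[-1]; sizes[-2] += sizes[-1]; vals[-2] = pooled
            vals.pop(); wts.pop(); sizes.pop()
    out = []
    for v, s in zip(vals, sizes):
        out.extend([v] * s)
    return np.array(out)

# toy quantal bioassay with a monotonicity violation at the third dose
dose = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
rate = np.array([0.05, 0.20, 0.15, 0.45, 0.60])
n = np.array([20, 20, 20, 20, 20])
F_hat = pava(rate, n)

# effective dosage ED_p = F^(-1)(p) by linear interpolation of the fit
ed_30 = np.interp(0.30, F_hat, dose)
print(F_hat, ed_30)
```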

  3. Polychlorinated biphenyls as oxidative stress inducers in liver of subacutely exposed rats: implication for dose-dependence toxicity and benchmark dose concept.

    Science.gov (United States)

    Buha, Aleksandra; Antonijević, Biljana; Milovanović, Vesna; Janković, Saša; Bulat, Zorica; Matović, Vesna

    2015-01-01

    Hepatotoxicity is one of the well-documented adverse health effects of polychlorinated biphenyls (PCBs), persistent organic pollutants widely present in the environment. Although previous studies suggest a possible role of oxidative stress, the precise mechanisms of PCB-induced ROS production in liver still remain to be fully assessed. The aim of this study was to evaluate the effects of different doses of PCBs on the parameters of oxidative stress and to investigate whether these effects are dose dependent. Furthermore, a comparison between calculated benchmark doses (BMD) and estimated NOAEL values for the investigated parameters was made. Six groups of male albino Wistar rats (7 animals per group) received Aroclor 1254 dissolved in corn oil at doses of 0.5, 1, 2, 4, 8, and 16 mg PCBs/kg b.w./day by oral gavage for 28 days, while control animals received corn oil only. The following parameters of oxidative stress were analyzed in liver homogenates: superoxide dismutase activity, glutathione, malondialdehyde (MDA) and total protein thiol levels. Hepatic enzymes AST, ALT, ALP and the protein albumin were also determined in serum as clinical parameters of liver function. Collected data on the investigated parameters were analyzed by the BMD method. The results of this study demonstrate that subacute exposure to PCBs causes induction of oxidative stress in liver with dose-dependent changes of the investigated parameters, although more pronounced adverse effects were observed on enzymatic than on non-enzymatic components of antioxidant protection. The obtained values for BMD and NOAEL support the use of the BMD concept in the prediction of health risks associated with PCB exposure. Furthermore, our results support the possible use of MDA in PCB risk assessment, since MDA was the most sensitive investigated parameter, with a calculated low critical effect dose of 0.07 mg/kg b.w.

  4. Avoiding Pitfalls in the Use of the Benchmark Dose Approach to Chemical Risk Assessments; Some Illustrative Case Studies (Presentation)

    Science.gov (United States)

    The USEPA's benchmark dose software (BMDS) version 1.2 has been available over the Internet since April, 2000 (epa.gov/ncea/bmds.htm), and has already been used in risk assessments of some significant environmental pollutants (e.g., diesel exhaust, dichloropropene, hexachlorocycl...

  5. Is the assumption of normality or log-normality for continuous response data critical for benchmark dose estimation?

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Kan, E-mail: Shao.Kan@epa.gov [ORISE Postdoctoral Fellow, National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Gift, Jeffrey S. [National Center for Environmental Assessment, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States); Setzer, R. Woodrow [National Center for Computational Toxicology, U.S. Environmental Protection Agency, Research Triangle Park, NC (United States)

    2013-11-01

    Continuous responses (e.g. body weight) are widely used in risk assessment for determining the benchmark dose (BMD) which is used to derive a U.S. EPA reference dose. One critical question that is not often addressed in dose–response assessments is whether to model the continuous data as normally or log-normally distributed. Additionally, if lognormality is assumed, and only summarized response data (i.e., mean ± standard deviation) are available as is usual in the peer-reviewed literature, the BMD can only be approximated. In this study, using the “hybrid” method and relative deviation approach, we first evaluate six representative continuous dose–response datasets reporting individual animal responses to investigate the impact on BMD/BMDL estimates of (1) the distribution assumption and (2) the use of summarized versus individual animal data when a log-normal distribution is assumed. We also conduct simulation studies evaluating model fits to various known distributions to investigate whether the distribution assumption has influence on BMD/BMDL estimates. Our results indicate that BMDs estimated using the hybrid method are more sensitive to the distribution assumption than counterpart BMDs estimated using the relative deviation approach. The choice of distribution assumption has limited impact on the BMD/BMDL estimates when the within dose-group variance is small, while the lognormality assumption is a better choice for relative deviation method when data are more skewed because of its appropriateness in describing the relationship between mean and standard deviation. Additionally, the results suggest that the use of summarized data versus individual response data to characterize log-normal distributions has minimal impact on BMD estimates. - Highlights: • We investigate to what extent the distribution assumption can affect BMD estimates. • Both real data analysis and simulation study are conducted. • BMDs estimated using hybrid method are more...

  7. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    Science.gov (United States)

    Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen

    2016-01-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be done either via cloud-based file systems or via cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets, which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  8. Benchmarking: A Method for Continuous Quality Improvement in Health

    OpenAIRE

    Ettorchi-Tardy, Amina; Levif, Marie; Michel, Philippe

    2012-01-01

    Benchmarking, a management approach for implementing best practices at best cost, is a recent concept in the healthcare system. The objectives of this paper are to better understand the concept and its evolution in the healthcare sector, to propose an operational definition, and to describe some French and international experiences of benchmarking in the healthcare sector. To this end, we reviewed the literature on this approach's emergence in the industrial sector, its evolution, its fields ...

  9. Benchmarking Data Sets for the Evaluation of Virtual Ligand Screening Methods: Review and Perspectives.

    Science.gov (United States)

    Lagarde, Nathalie; Zagury, Jean-François; Montes, Matthieu

    2015-07-27

    Virtual screening methods are commonly used nowadays in drug discovery processes. However, to ensure their reliability, they have to be carefully evaluated. The evaluation of these methods is often realized in a retrospective way, notably by studying the enrichment of benchmarking data sets. To this purpose, numerous benchmarking data sets were developed over the years, and the resulting improvements led to the availability of high quality benchmarking data sets. However, some points still have to be considered in the selection of the active compounds, decoys, and protein structures to obtain optimal benchmarking data sets.

  10. Piloting a Process Maturity Model as an e-Learning Benchmarking Method

    Science.gov (United States)

    Petch, Jim; Calverley, Gayle; Dexter, Hilary; Cappelli, Tim

    2007-01-01

    As part of a national e-learning benchmarking initiative of the UK Higher Education Academy, the University of Manchester is carrying out a pilot study of a method to benchmark e-learning in an institution. The pilot was designed to evaluate the operational viability of a method based on the e-Learning Maturity Model developed at the University of…

  11. Benchmarking methods and data sets for ligand enrichment assessment in virtual screening.

    Science.gov (United States)

    Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2015-01-01

    Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in the prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets to the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. "analogue bias", "artificial enrichment" and "false negative". In addition, we introduce our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementations to three important human histone deacetylases (HDACs) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The leave-one-out cross-validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased as measured by property matching, ROC curves and AUCs.
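
    The following is a minimal sketch of how such a benchmarking set is scored, assuming scikit-learn is available; the active/decoy labels and docking-style scores are invented for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical VS output: 1 = active, 0 = decoy (illustrative only).
labels = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
scores = np.array([0.91, 0.83, 0.45, 0.62, 0.38, 0.30, 0.55, 0.21, 0.12, 0.05])

auc = roc_auc_score(labels, scores)
print(f"AUC = {auc:.3f}")   # 0.5 = random ranking, 1.0 = perfect

# Early-recognition check: actives recovered in the top 20% of the ranking.
top = np.argsort(scores)[::-1][: len(scores) // 5]
print("actives in top 20%:", labels[top].sum(), "/", labels.sum())
```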

  12. Comparative Benchmark Dose Modeling as a Tool to Make the First Estimate of Safe Human Exposure Levels to Lunar Dust

    Science.gov (United States)

    James, John T.; Lam, Chiu-wing; Scully, Robert R.

    2013-01-01

    Brief exposures of Apollo astronauts to lunar dust occasionally elicited upper respiratory irritation; however, no limits were ever set for prolonged exposure to lunar dust. Habitats for exploration, whether mobile or fixed, must be designed to limit human exposure to lunar dust to safe levels. We have used a new technique we call Comparative Benchmark Dose Modeling to estimate safe exposure limits for lunar dust collected during the Apollo 14 mission.

  13. Benchmark dose approach for low-level lead induced haematogenesis inhibition and associations of childhood intelligences with ALAD activity and ALA levels.

    Science.gov (United States)

    Wang, Q; Ye, L X; Zhao, H H; Chen, J W; Zhou, Y K

    2011-04-15

    Lead (Pb) levels, delta-aminolevulinic acid dehydratase (ALAD) activities and zinc protoporphyrin (ZPP) levels in blood, and urinary delta-aminolevulinic acid (ALA) and coproporphyrin (CP) concentrations were measured for 318 environmentally Pb-exposed children recruited from an area of southeast China. The mean blood lead (PbB) level was 75.0 μg/L among all subjects. The benchmark dose (BMD) method yielded a lower PbB BMD (and lower bound, BMDL) of 32.4 (22.7) μg/L based on ALAD activity than those based on the other three haematological indices, corresponding to a benchmark response of 1%. Childhood intelligence degrees were not associated significantly with ALAD activities or ALA levels. It was concluded that blood ALAD activity is a sensitive indicator of early haematological damage due to low-level Pb exposure in children.
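
    A minimal sketch of the BMD/BMDL idea applied to a continuous haematological marker is given below. The synthetic PbB-ALAD data, the linear model and the bootstrap-percentile BMDL are illustrative assumptions and do not reproduce the study's actual calculation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic PbB (ug/L) and ALAD activity data -- illustrative only.
n = 318
pbb = rng.gamma(shape=3.0, scale=25.0, size=n)
alad = 60.0 - 0.12 * pbb + rng.normal(0.0, 6.0, size=n)

def bmd_linear(x, y, bmr=0.01):
    """Dose at which the fitted mean declines by `bmr` from its intercept."""
    slope, intercept = np.polyfit(x, y, 1)
    return -bmr * intercept / slope

bmd = bmd_linear(pbb, alad)

# BMDL taken here as the 5th percentile of bootstrap BMD estimates.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    boot.append(bmd_linear(pbb[idx], alad[idx]))
bmdl = np.percentile(boot, 5)

print(f"BMD = {bmd:.1f} ug/L, BMDL = {bmdl:.1f} ug/L")
```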

  14. NRC-BNL Benchmark Program on Evaluation of Methods for Seismic Analysis of Coupled Systems

    Energy Technology Data Exchange (ETDEWEB)

    Chokshi, N.; DeGrassi, G.; Xu, J.

    1999-03-24

    A NRC-BNL benchmark program for evaluation of state-of-the-art analysis methods and computer programs for seismic analysis of coupled structures with non-classical damping is described. The program includes a series of benchmarking problems designed to investigate various aspects of complexities, applications and limitations associated with methods for analysis of non-classically damped structures. Discussions are provided on the benchmarking process, benchmark structural models, and the evaluation approach, as well as benchmarking ground rules. It is expected that the findings and insights, as well as recommendations from this program will be useful in developing new acceptance criteria and providing guidance for future regulatory activities involving licensing applications of these alternate methods to coupled systems.

  16. Methodical aspects of benchmarking using in Consumer Cooperatives trade enterprises activity

    Directory of Open Access Journals (Sweden)

    Yu.V. Dvirko

    2013-03-01

    The aim of this article is to substantiate the main types of benchmarking in the activity of Consumer Cooperatives trade enterprises; to highlight the main advantages and drawbacks of using benchmarking; and to present the authors' view on the expediency of the highlighted forms of benchmarking organization in the activity of Consumer Cooperatives trade enterprises in Ukraine. The results of the analysis. Under modern conditions of developing economic relations and business globalization, big companies, enterprises and organizations realize the necessity of thorough and profound research into the best achievements of market players, with their further use in their own activity. Benchmarking is the process of borrowing competitive advantages and increasing the competitiveness of Consumer Cooperatives trade enterprises by researching, learning and adapting the best methods of realizing business processes, with the purpose of increasing their operating effectiveness and better satisfying societal needs. The main goals of using benchmarking in Consumer Cooperatives are the following: increasing the level of needs satisfaction through higher product quality, shorter goods transportation terms and better service quality; strengthening enterprise potential and competitiveness and improving image; and generating and implementing new ideas and innovative decisions in trade enterprise activity. The advantages of using benchmarking in the activity of Consumer Cooperatives trade enterprises are the following: adapting the parameters of enterprise functioning to market demands; gradually defining and removing inadequacies that obstruct enterprise development; borrowing the best methods of further enterprise development; gaining competitive advantages; technological innovations; and employee motivation. The authors' classification of benchmarking is represented by the following components: by cycle durability strategic, operative

  17. Benchmarking Gas Path Diagnostic Methods: A Public Approach

    Science.gov (United States)

    Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

    2008-01-01

    Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.

  18. Review of California and National Methods for Energy Performance Benchmarking of Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Matson, Nance E.; Piette, Mary Ann

    2005-09-05

    This benchmarking review has been developed to support benchmarking planning and tool development under discussion by the California Energy Commission (CEC), Lawrence Berkeley National Laboratory (LBNL) and others in response to the Governor's Executive Order S-20-04 (2004). The Executive Order sets a goal of benchmarking and improving the energy efficiency of California's existing commercial building stock. The Executive Order requires the CEC to propose "a simple building efficiency benchmarking system for all commercial buildings in the state". This report summarizes and compares two currently available commercial building energy-benchmarking tools. One tool is the U.S. Environmental Protection Agency's Energy Star National Energy Performance Rating System, which is a national regression-based benchmarking model (referred to in this report as Energy Star). The second is Lawrence Berkeley National Laboratory's Cal-Arch, which is a California-based distributional model (referred to as Cal-Arch). Prior to 2002, when Cal-Arch was developed, there were several other benchmarking tools available to California consumers, but none that were based solely on California data. The Energy Star and Cal-Arch benchmarking tools both provide California with unique and useful methods to benchmark the energy performance of California's buildings. Rather than determine which model is "better", the purpose of this report is to understand and compare the underlying data, information systems, assumptions, and outcomes of each model.
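
    The two modeling philosophies compared in the report can be sketched in a few lines: a distributional (Cal-Arch-style) score places a building within the stock's energy use intensity (EUI) distribution, while a regression-based (Energy-Star-style) score compares actual use to use predicted from building characteristics. The stock data and the single-building values below are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative stock of commercial-building EUIs (kWh/m^2/yr) and floor
# areas; not CEUS/CBECS data.
eui = rng.lognormal(mean=5.0, sigma=0.4, size=500)
area = rng.uniform(500, 20000, size=500)

my_eui, my_area = 180.0, 4000.0

# Distributional benchmark: percentile of the building within the stock.
pct = stats.percentileofscore(eui, my_eui)
print(f"uses less energy than {100 - pct:.0f}% of the stock")

# Regression benchmark: actual use versus use predicted from
# characteristics (here, floor area only, for brevity).
slope, intercept, *_ = stats.linregress(area, eui)
ratio = my_eui / (intercept + slope * my_area)
print(f"actual / predicted EUI ratio: {ratio:.2f}")  # <1 beats the model
```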

  19. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction.

    Science.gov (United States)

    Puton, Tomasz; Kozlowski, Lukasz P; Rother, Kristian M; Bujnicki, Janusz M

    2013-04-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks.

  20. Benchmark measurements and simulations of dose perturbations due to metallic spheres in proton beams

    Science.gov (United States)

    Newhauser, Wayne D.; Rechner, Laura; Mirkovic, Dragan; Yepes, Pablo; Koch, Nicholas C.; Titt, Uwe; Fontenot, Jonas D.; Zhang, Rui

    2014-01-01

    Monte Carlo simulations are increasingly used for dose calculations in proton therapy due to their inherent accuracy. However, dosimetric deviations have been found using Monte Carlo codes when high-density materials are present in the proton beam line. The purpose of this work was to quantify the magnitude of dose perturbation caused by metal objects. We did this by comparing measurements and Monte Carlo predictions of dose perturbations caused by the presence of small metal spheres in several clinical proton therapy beams, as functions of proton beam range, spread-out Bragg peak width and drift space. The Monte Carlo codes MCNPX, GEANT4 and Fast Dose Calculator (FDC) were used. Generally good agreement was found between measurements and Monte Carlo predictions, with the average difference within 5% and the maximum difference within 17%. Modification of the multiple Coulomb scattering model in the MCNPX code improved accuracy and provided the best overall agreement with measurements. Our results confirmed that Monte Carlo codes are well suited for predicting multiple Coulomb scattering in proton therapy beams when short drift spaces are involved. PMID:25147474

  1. Solution of the neutronics code dynamic benchmark by finite element method

    Science.gov (United States)

    Avvakumov, A. V.; Vabishchevich, P. N.; Vasilev, A. O.; Strizhov, V. F.

    2016-10-01

    The objective is to analyze the dynamic benchmark developed by Atomic Energy Research for the verification of best-estimate neutronics codes. The benchmark scenario includes the asymmetrical ejection of a control rod in a water-type hexagonal reactor at hot zero power. A simple Doppler feedback mechanism assuming adiabatic fuel temperature heating is proposed. The finite element method on triangular calculation grids is used to solve the three-dimensional neutron kinetics problem. The software has been developed using the engineering and scientific calculation library FEniCS. The matrix spectral problem is solved using the scalable and flexible toolkit SLEPc. The solution accuracy of the dynamic benchmark is analyzed by refining the calculation grid and varying the degree of the finite elements.

  2. Benchmarks of the ab initio FCI, MCSM and NCFC methods

    CERN Document Server

    Abe, T; Otsuka, T; Shimizu, N; Utsuno, Y; Vary, J P

    2012-01-01

    We report ab initio no-core solutions for properties of light nuclei with three different approaches in order to assess the accuracy and convergence rates of each method. Full Configuration Interaction (FCI), Monte Carlo Shell Model (MCSM) and No Core Full Configuration (NCFC) approaches are solved separately for the ground state energy and other properties of seven light nuclei using the realistic JISP16 nucleon-nucleon interaction. The results are consistent among the different approaches. The methods differ significantly in how the required computational resources scale with increasing particle number for a given accuracy.

  3. Benchmarking the inelastic neutron scattering soil carbon method

    Science.gov (United States)

    The herein described inelastic neutron scattering (INS) method of measuring soil carbon was based on a new procedure for extracting the net carbon signal (NCS) from the measured gamma spectra and determination of the average carbon weight percent (AvgCw%) in the upper soil layer (~8 cm). The NCS ext...

  4. Benchmarking ortholog identification methods using functional genomics data.

    NARCIS (Netherlands)

    Hulsen, T.; Huynen, M.A.; Vlieg, J. de; Groenen, P.M.

    2006-01-01

    BACKGROUND: The transfer of functional annotations from model organism proteins to human proteins is one of the main applications of comparative genomics. Various methods are used to analyze cross-species orthologous relationships according to an operational definition of orthology. Often the defini

  5. Benchmarking Transcriptome Quantification Methods for Duplicated Genes in Xenopus laevis.

    Science.gov (United States)

    Kwon, Taejoon

    2015-01-01

    Xenopus is an important model organism for the study of genome duplication in vertebrates. With the full genome sequence of diploid Xenopus tropicalis available, and that of allotetraploid X. laevis close to being finished, we will be able to expand our understanding of how duplicated genes have evolved. One of the key features in the study of the functional consequence of gene duplication is how their expression patterns vary across different conditions, and RNA-seq seems to have enough resolution to discriminate the expression of highly similar duplicated genes. However, most of the current RNA-seq analysis methods were not designed to study samples with duplicate genes such as in X. laevis. Here, various computational methods to quantify gene expression in RNA-seq data were evaluated, using 2 independent X. laevis egg RNA-seq datasets and 2 reference databases for duplicated genes. The fact that RNA-seq can measure expression levels of similar duplicated genes was confirmed, but long paired-end reads are more informative than short single-end reads to discriminate duplicated genes. Also, it was found that bowtie, one of the most popular mappers in RNA-seq analysis, reports significantly smaller numbers of unique hits according to a mapping quality score compared to other mappers tested (BWA, GSNAP, STAR). Calculated from unique hits based on a mapping quality score, both expression levels and the expression ratio of duplicated genes can be estimated consistently among biological replicates, demonstrating that this method can successfully discriminate the expression of each copy of a duplicated gene pair. This comprehensive evaluation will be a useful guideline for studying gene expression of organisms with genome duplication using RNA-seq in the future.
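
    A minimal sketch of the mapping-quality filtering step described above, assuming the pysam library and an indexed, coordinate-sorted BAM file; the file name, gene names and coordinates are hypothetical.

```python
import pysam  # assumes pysam is installed and the BAM file is indexed

MIN_MAPQ = 30  # treat reads at or above this mapping quality as unique hits

def unique_hit_counts(bam_path, genes):
    """Count MAPQ-filtered reads per gene region.

    `genes` maps a gene name to a (reference, start, end) tuple.
    """
    counts = {}
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for name, (ref, start, end) in genes.items():
            counts[name] = sum(
                1
                for read in bam.fetch(ref, start, end)
                if not read.is_unmapped and read.mapping_quality >= MIN_MAPQ
            )
    return counts

# Expression ratio of a hypothetical duplicated (homeologous) gene pair:
counts = unique_hit_counts(
    "eggs_rep1.bam",  # hypothetical file
    {"geneX.L": ("chr1L", 100000, 105000), "geneX.S": ("chr1S", 98000, 103000)},
)
ratio = counts["geneX.L"] / max(counts["geneX.S"], 1)
print(counts, f"L/S ratio = {ratio:.2f}")
```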

  6. An International Pooled Analysis for Obtaining a Benchmark Dose for Environmental Lead Exposure in Children

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Bellinger, David; Lanphear, Bruce;

    2013-01-01

    Lead is a recognized neurotoxicant, but estimating effects at the lowest measurable levels is difficult. An international pooled analysis of data from seven cohort studies reported an inverse and supra-linear relationship between blood lead concentrations and IQ scores in children. The lack...... yielding lower confidence limits (BMDLs) of about 0.1-1.0 for the dose leading to a loss of one IQ point. We conclude that current allowable blood lead concentrations need to be lowered and further prevention efforts are needed to protect children from lead toxicity....

  7. Correlation of In Vivo Versus In Vitro Benchmark Doses (BMDs) Derived From Micronucleus Test Data: A Proof of Concept Study.

    Science.gov (United States)

    Soeteman-Hernández, Lya G; Fellows, Mick D; Johnson, George E; Slob, Wout

    2015-12-01

    In this study, we explored the applicability of using in vitro micronucleus (MN) data from human lymphoblastoid TK6 cells to derive in vivo genotoxicity potency information. Nineteen chemicals covering a broad spectrum of genotoxic modes of action were tested in an in vitro MN test using TK6 cells using the same study protocol. Several of these chemicals were considered to need metabolic activation, and these were administered in the presence of S9. The Benchmark dose (BMD) approach was applied using the dose-response modeling program PROAST to estimate the genotoxic potency from the in vitro data. The resulting in vitro BMDs were compared with previously derived BMDs from in vivo MN and carcinogenicity studies. A proportional correlation was observed between the BMDs from the in vitro MN and the BMDs from the in vivo MN assays. Further, a clear correlation was found between the BMDs from in vitro MN and the associated BMDs for malignant tumors. Although these results are based on only 19 compounds, they show that genotoxicity potencies estimated from in vitro tests may result in useful information regarding in vivo genotoxic potency, as well as expected cancer potency. Extension of the number of compounds and further investigation of metabolic activation (S9) and of other toxicokinetic factors would be needed to validate our initial conclusions. However, this initial work suggests that this approach could be used for in vitro to in vivo extrapolations which would support the reduction of animals used in research (3Rs: replacement, reduction, and refinement).

  9. Methods of calculating radiation absorbed dose.

    Science.gov (United States)

    Wegst, A V

    1987-01-01

    The new tumoricidal radioactive agents being developed will require a careful estimate of the radiation absorbed dose to tumor and critical organs for each patient. Clinical methods will need to be developed using standard imaging or counting instruments to determine cumulated organ activities with tracer amounts before the therapeutic administration of the material. Standard MIRD dosimetry methods can then be applied.

  10. A novel and well-defined benchmarking method for second generation read mapping

    Directory of Open Access Journals (Sweden)

    Weese David

    2011-05-01

    Background: Second generation sequencing technologies yield DNA sequence data at ultra-high throughput. Common to most biological applications is a mapping of the reads to an almost identical or highly similar reference genome. The assessment of the quality of read mapping results is not straightforward and has not been formalized so far. Hence, it has not been easy to compare different read mapping approaches in a unified way and to determine which program is the best for what task. Results: We present a new benchmark method, called Rabema (Read Alignment BEnchMArk), for read mappers. It consists of a strict definition of the read mapping problem and of tools to evaluate the result of arbitrary read mappers supporting the SAM output format. Conclusions: We show the usefulness of the benchmark program by performing a comparison of popular read mappers. The tools supporting the benchmark are licensed under the GPL and available from http://www.seqan.de/projects/rabema.html.

  11. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e., across samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created an R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  12. TextGen: a realistic text data content generation method for modern storage system benchmarks

    Institute of Scientific and Technical Information of China (English)

    Long-xiang WANG; Xiao-she DONG; Xing-jun ZHANG; Yin-feng WANG; Tao JU; Guo-fu FENG

    2016-01-01

    Modern storage systems incorporate data compressors to improve their performance and capacity. As a result, data content can significantly influence the result of a storage system benchmark. Because real-world proprietary datasets are too large to be copied onto a test storage system, and most data cannot be shared due to privacy issues, a benchmark needs to generate data synthetically. To ensure that the result is accurate, it is necessary to generate data content based on the characterization of real-world data properties that influence the storage system performance during the execution of a benchmark. The existing approach, called SDGen, cannot guarantee that the benchmark result is accurate in storage systems that have built-in word-based compressors. The reason is that SDGen characterizes the properties that influence compression performance only at the byte level, and no properties are characterized at the word level. To address this problem, we present TextGen, a realistic text data content generation method for modern storage system benchmarks. TextGen builds the word corpus by segmenting real-world text datasets, and creates a word-frequency distribution by counting each word in the corpus. To improve data generation performance, the word-frequency distribution is fitted to a lognormal distribution by maximum likelihood estimation. The Monte Carlo approach is used to generate synthetic data. The running time of TextGen generation depends only on the expected data size, which means that the time complexity of TextGen is O(n). To evaluate TextGen, four real-world datasets were used to perform an experiment. The experimental results show that, compared with SDGen, the compression performance and compression ratio of the datasets generated by TextGen deviate less from real-world datasets when end-tagged dense code, a representative of word-based compressors, is evaluated.
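
    The generation pipeline described above (segment a corpus into a word-frequency distribution, fit a log-normal by maximum likelihood, then sample words Monte Carlo-style) can be sketched as follows; corpus.txt is a hypothetical input file and the snippet is not TextGen's actual implementation.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

corpus = open("corpus.txt").read().split()   # hypothetical real-world text
freq = Counter(corpus)
words = np.array(list(freq.keys()))
counts = np.array(list(freq.values()), dtype=float)

# MLE fit of a log-normal to the word-frequency distribution: for
# log-normal data, mu and sigma are the mean/SD of the log counts.
logc = np.log(counts)
mu, sigma = logc.mean(), logc.std()

# Monte Carlo generation: draw synthetic frequencies from the fitted
# distribution and emit words proportionally -- linear in the output size.
synth = rng.lognormal(mu, sigma, size=len(words))
probs = synth / synth.sum()
sample = rng.choice(words, size=10_000, p=probs)
print(" ".join(sample[:20]))
```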

  13. Estimation of benchmark dose as the threshold amount of alcohol consumption for blood pressure in Japanese workers.

    Science.gov (United States)

    Suwazono, Yasushi; Sakata, Kouichi; Oishi, Mitsuhiro; Okubo, Yasushi; Dochi, Mirei; Kobayashi, Etsuko; Kido, Teruhiko; Nogawa, Koji

    2007-12-01

    In order to determine the threshold amount of alcohol consumption for blood pressure, we calculated the benchmark dose (BMD) of alcohol consumption and its 95% lower confidence limit (BMDL) in Japanese workers. The subjects consisted of 4,383 males and 387 females in a Japanese steel company. The target variables were systolic, diastolic, and mean arterial pressures. The effects of other potential covariates such as age and body mass index were adjusted for by including these covariates in the multiple linear regression models. In male workers, the BMD/BMDL for alcohol consumption (g/week), at which the probability of an adverse response was estimated to increase by 5% relative to no alcohol consumption, were 396/315 (systolic blood pressure), 321/265 (diastolic blood pressure), and 326/269 (mean arterial pressure). These values were based on significant regression coefficients of alcohol consumption. In female workers, the BMD/BMDL for alcohol consumption, based on insignificant regression coefficients, were 693/134 (systolic blood pressure), 199/90 (diastolic blood pressure), and 267/77 (mean arterial pressure). The BMDs/BMDLs in males were therefore more informative than those in females, as there was no significant relationship between alcohol and blood pressure in females. The threshold amount of alcohol consumption determined in this study provides valuable information for preventing alcohol-induced hypertension.

  14. Multiple exposures to indoor contaminants: Derivation of benchmark doses and relative potency factors based on male reprotoxic effects.

    Science.gov (United States)

    Fournier, K; Tebby, C; Zeman, F; Glorennec, P; Zmirou-Navier, D; Bonvallot, N

    2016-02-01

    Semi-Volatile Organic Compounds (SVOCs) are commonly present in dwellings and several are suspected of having effects on male reproductive function mediated by an endocrine disruption mode of action. To improve knowledge of the health impact of these compounds, cumulative toxicity indicators are needed. This work derives Benchmark Doses (BMD) and Relative Potency Factors (RPF) for SVOCs acting on the male reproductive system through the same mode of action. We included SVOCs fulfilling the following conditions: detection frequency (>10%) in French dwellings, availability of data on the mechanism/mode of action for male reproductive toxicity, and availability of comparable dose-response relationships. Of 58 SVOCs selected, 18 induce a decrease in serum testosterone levels. Six have sufficient and comparable data to derive BMDs based on 10 or 50% of the response. The SVOCs inducing the largest decrease in serum testosterone concentration are: for 10%, bisphenol A (BMD10 = 7.72E-07 mg/kg bw/d; RPF10 = 7,033,679); for 50%, benzo[a]pyrene (BMD50 = 0.030 mg/kg bw/d; RPF50 = 1630), and the one inducing the smallest one is benzyl butyl phthalate (RPF10 and RPF50 = 0.095). This approach encompasses contaminants from diverse chemical families acting through similar modes of action, and makes possible a cumulative risk assessment in indoor environments. The main limitation remains the lack of comparable toxicological data.
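
    Given the RPF definition (the BMD of the index compound divided by the BMD of compound i), a cumulative exposure converts each dose into index-compound equivalents. The sketch below uses the two RPF50 values quoted in the abstract with hypothetical daily doses; the choice of doses is purely illustrative.

```python
# RPF_i = BMD_index / BMD_i, so each dose converts to index-compound
# equivalents and the equivalents add up across the mixture.
rpf50 = {
    "benzo[a]pyrene": 1630.0,         # RPF50 quoted in the abstract
    "benzyl butyl phthalate": 0.095,  # RPF50 quoted in the abstract
}
daily_dose = {                         # hypothetical exposures, mg/kg bw/d
    "benzo[a]pyrene": 1.0e-6,
    "benzyl butyl phthalate": 2.0e-3,
}

equivalent = sum(daily_dose[c] * rpf50[c] for c in rpf50)
print(f"cumulative index-equivalent dose: {equivalent:.2e} mg/kg bw/d")
```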

  15. A time-implicit numerical method and benchmarks for the relativistic Vlasov–Ampere equations

    Energy Technology Data Exchange (ETDEWEB)

    Carrié, Michael, E-mail: mcarrie2@unl.edu; Shadwick, B. A., E-mail: shadwick@mailaps.org [Department of Physics and Astronomy, University of Nebraska-Lincoln, Lincoln, Nebraska 68588 (United States)

    2016-01-15

    We present a time-implicit numerical method to solve the relativistic Vlasov–Ampere system of equations on a two dimensional phase space grid. The time-splitting algorithm we use allows the generalization of the work presented here to higher dimensions keeping the linear aspect of the resulting discrete set of equations. The implicit method is benchmarked against linear theory results for the relativistic Landau damping for which analytical expressions using the Maxwell-Jüttner distribution function are derived. We note that, independently from the shape of the distribution function, the relativistic treatment features collective behaviours that do not exist in the nonrelativistic case. The numerical study of the relativistic two-stream instability completes the set of benchmarking tests.

  16. Two prospective dosing methods for nortriptyline.

    Science.gov (United States)

    Perry, P J; Browne, J L; Alexander, B; Tsuang, M T; Sherman, A D; Dunner, F J

    1984-01-01

    This study compared two prospective pharmacokinetic dosing methods to predict steady-state concentrations of nortriptyline. One method required multiple determinations of the nortriptyline plasma concentration to estimate the drug's steady-state concentration. The second method required a single nortriptyline concentration drawn at a fixed time, preferably 36 hours, following a nortriptyline test dose. The 36-hour nortriptyline plasma concentrations (NTP 36h) were substituted into the straight-line equation of Cssav = 17.2 + 3.74 (NTP 36h), where Cssav is the average steady-state concentration for a 100 mg/day dose of nortriptyline. No differences were noted between the observed steady-state nortriptyline concentration of 121 +/- 19 ng/ml, the 36-hour single-point prediction mean concentration of 121 +/- 21 ng/ml, or the multiple-point prediction mean concentration of 122 +/- 19 ng/ml. Because of the similar findings between the two methods, the clinical advantages and disadvantages of each kinetic approach are discussed to put these prospective dosing protocols into their proper perspective.
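
    The single-point method reduces to a direct evaluation of the reported straight-line equation; a small worked example follows. The dose-proportional scaling to regimens other than 100 mg/day assumes linear kinetics, which goes beyond what the abstract states.

```python
def predicted_css(ntp_36h_ng_ml: float, dose_mg_day: float = 100.0) -> float:
    """Steady-state nortriptyline concentration from the 36-h single point.

    Cssav = 17.2 + 3.74 * NTP36h applies to a 100 mg/day regimen; other
    regimens are scaled proportionally (a linear-kinetics assumption).
    """
    css_100 = 17.2 + 3.74 * ntp_36h_ng_ml
    return css_100 * dose_mg_day / 100.0

# A 36-hour post-test-dose level of 28 ng/ml predicts:
print(f"{predicted_css(28.0):.0f} ng/ml at 100 mg/day")   # ~122 ng/ml
```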

  17. Consortial benchmarking: a method of academic-practitioner collaborative research and its application in a b2b environment

    NARCIS (Netherlands)

    Schiele, Holger; Krummaker, Stefan

    2010-01-01

    Purpose of the paper and literature addressed: Development of a new method for academic-practitioner collaboration, addressing the literature on collaborative research. Research method: Model elaboration and test with an in-depth case study. Research findings: In consortial benchmarking, practitioner

  18. Determining the sensitivity of Data Envelopment Analysis method used in airport benchmarking

    Directory of Open Access Journals (Sweden)

    Mircea BOSCOIANU

    2013-03-01

    In the last decade there have been some important changes in the airport industry, caused by the liberalization of the air transportation market. Until recently airports were considered infrastructure elements, and they were evaluated only by traffic values or their maximum capacity. A gradual orientation towards commercial operation led to the need to find other, more efficiency-oriented ways of evaluation. The existing methods for assessing efficiency used for other production units were not suitable for airports due to the specific features and high complexity of airport operations. In recent years several papers have proposed Data Envelopment Analysis as a method for assessing operational efficiency in order to conduct benchmarking. This method offers the possibility of dealing with a large number of variables of different types, which represents its main advantage and also recommends it as a good benchmarking tool for airport management. The goal of this paper is to determine the sensitivity of this method in relation to its inputs and outputs. A Data Envelopment Analysis is conducted for 128 airports worldwide, in both input- and output-oriented measures, and the results are analysed against variations of some inputs and outputs. Possible weaknesses of using DEA for assessing airport performance are revealed and analysed against the method's advantages.
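
    For readers unfamiliar with DEA, an input-oriented CCR efficiency can be computed as a small linear program; the sketch below, using scipy.optimize.linprog, evaluates a toy set of four airports and then re-evaluates after perturbing one airport's inputs to probe sensitivity. The formulation is standard, but the data are invented and unrelated to the paper's 128-airport sample.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j):
    """Input-oriented CCR efficiency of DMU j.

    X: (m inputs x n DMUs), Y: (s outputs x n DMUs).
    min theta  s.t.  X @ lam <= theta * X[:, j],  Y @ lam >= Y[:, j],  lam >= 0
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                          # minimize theta (variable 0)
    A_ub = np.zeros((m + s, n + 1))
    A_ub[:m, 0] = -X[:, j]              # X lam - theta x_j <= 0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                   # -Y lam <= -y_j
    b_ub = np.concatenate([np.zeros(m), -Y[:, j]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Toy airports: inputs = [runways, terminal area]; output = [passengers].
X = np.array([[2, 1, 3, 2], [40, 25, 90, 60]], dtype=float)
Y = np.array([[10, 6, 22, 12]], dtype=float)
base = [dea_ccr_input(X, Y, j) for j in range(4)]

X_pert = X.copy()
X_pert[:, 0] *= 1.05                    # inflate airport 0's inputs by 5%
pert = [dea_ccr_input(X_pert, Y, j) for j in range(4)]
print(np.round(base, 3), np.round(pert, 3))
```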

  19. Benchmarking of methods for identification of antimicrobial resistance genes in bacterial whole genome data

    DEFF Research Database (Denmark)

    Clausen, Philip T. L. C.; Zankari, Ea; Aarestrup, Frank Møller;

    2016-01-01

    Next generation sequencing (NGS) may be an alternative to phenotypic susceptibility testing for surveillance and clinical diagnosis. However, current bioinformatics methods may be associated with false positives and negatives. In this study, a novel mapping method was developed and benchmarked...... to two different methods in current use for identification of antibiotic resistance genes in bacterial WGS data. A novel method, KmerResistance, which examines the co-occurrence of k-mers between the WGS data and a database of resistance genes, was developed. The performance of this method was compared...... with two previously described methods; ResFinder and SRST2, which use an assembly/BLAST method and BWA, respectively, using two datasets with a total of 339 isolates, covering five species, originating from the Oxford University Hospitals NHS Trust and Danish pig farms. The predicted resistance...
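
    The k-mer co-occurrence idea can be illustrated with a toy coverage score: the fraction of a resistance gene's k-mers that also occur in the read set. This is a stand-in sketch, not KmerResistance's actual scoring or mapping scheme, and the sequences are fabricated.

```python
def kmers(seq: str, k: int = 16):
    """All k-mers of a sequence, as a set."""
    return {seq[i : i + k] for i in range(len(seq) - k + 1)}

def kmer_coverage(reads, gene, k=16):
    """Fraction of the gene's k-mers seen anywhere in the read set."""
    read_kmers = set()
    for r in reads:
        read_kmers |= kmers(r, k)
    gene_kmers = kmers(gene, k)
    return len(gene_kmers & read_kmers) / len(gene_kmers)

gene = "ATGGCTAAAGTTCTGACCGATCGATCGGGCTAGCTAGGCTAAC"  # hypothetical gene
reads = [gene[i : i + 25] for i in range(0, len(gene) - 25, 5)]  # fake reads
print(f"coverage: {kmer_coverage(reads, gene):.2%}")
```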

  20. Reliable B cell epitope predictions: impacts of method development and improved benchmarking.

    Science.gov (United States)

    Kringelum, Jens Vindahl; Lundegaard, Claus; Lund, Ole; Nielsen, Morten

    2012-01-01

    The interaction between antibodies and antigens is one of the most important immune system mechanisms for clearing infectious organisms from the host. Antibodies bind to antigens at sites referred to as B-cell epitopes. Identification of the exact location of B-cell epitopes is essential in several biomedical applications such as: rational vaccine design, development of disease diagnostics and immunotherapeutics. However, experimental mapping of epitopes is resource intensive, making in silico methods an appealing complementary approach. To date, the reported performance of methods for in silico mapping of B-cell epitopes has been moderate. Several issues regarding the evaluation data sets may however have led to the performance values being underestimated: rarely have all potential epitopes been mapped on an antigen, and antibodies are generally raised against the antigen in a given biological context, not against the antigen monomer. Improper dealing with these aspects leads to many artificial false positive predictions and hence to incorrectly low performance values. To demonstrate the impact of proper benchmark definitions, we here present an updated version of the DiscoTope method incorporating a novel spatial neighborhood definition and half-sphere exposure as surface measure. Compared to other state-of-the-art prediction methods, DiscoTope-2.0 displayed improved performance both in cross-validation and in independent evaluations. Using DiscoTope-2.0, we assessed the impact on performance when using proper benchmark definitions. For 13 proteins in the training data set where sufficient biological information was available to make a proper benchmark redefinition, the average AUC performance was improved from 0.791 to 0.824. Similarly, the average AUC performance on an independent evaluation data set improved from 0.712 to 0.727. Our results thus demonstrate that given proper benchmark definitions, B-cell epitope prediction methods achieve highly significant

  2. Control design for the nonlinear benchmark problem via the output regulation method

    Institute of Scientific and Technical Information of China (English)

    Jie HUANG; Guoqiang HU

    2004-01-01

    The problem of designing a feedback controller to achieve asymptotic disturbance rejection/attenuation while maintaining good transient response in the RTAC system is known as a benchmark nonlinear control problem, which has been an intensive research subject since 1995. In this paper, we further investigate the solvability of the robust disturbance rejection problem of the RTAC system by measurement output feedback control based on the robust output regulation method. We have obtained a design by overcoming two major obstacles: finding a closed-form solution of the regulator equations, and devising a nonlinear internal model to account for non-polynomial nonlinearities.

  3. Piping benchmark problems. Volume 1. Dynamic analysis uniform support motion response spectrum method

    Energy Technology Data Exchange (ETDEWEB)

    Bezler, P.; Hartzman, M.; Reich, M.

    1980-08-01

    A set of benchmark problems and solutions has been developed for verifying the adequacy of computer programs used for dynamic analysis and design of nuclear piping systems by the response spectrum method. The problems range from simple to complex configurations which are assumed to experience linear elastic behavior. The dynamic loading is represented by uniform support motion, assumed to be induced by seismic excitation in three spatial directions. The solutions consist of frequencies, participation factors, nodal displacement components and internal force and moment components. Solutions to associated anchor point motion static problems are not included.
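
    As a reminder of the arithmetic underlying the response spectrum method, the sketch below combines peak modal responses by SRSS for a single node; the modal data and design spectrum are invented and unrelated to the benchmark problems themselves.

```python
import numpy as np

# Illustrative modal data for a piping model (not one of the BNL problems):
freqs = np.array([4.2, 11.7, 23.5])          # modal frequencies, Hz
gamma = np.array([1.35, 0.62, 0.18])         # participation factors
phi_node = np.array([0.021, -0.008, 0.003])  # mode shapes at a node

def spectral_accel(f_hz):
    """Toy design response spectrum (g), standing in for the real input."""
    return np.where(f_hz < 8.0, 0.6, 0.6 * (8.0 / f_hz))

# Peak modal displacements: u_i = Gamma_i * phi_i * Sa_i / omega_i^2
omega = 2.0 * np.pi * freqs
sa = spectral_accel(freqs) * 9.81            # m/s^2
u_modal = gamma * phi_node * sa / omega**2

# SRSS combination of (well-separated) modal maxima:
u_peak = np.sqrt(np.sum(u_modal**2))
print(f"SRSS peak displacement: {u_peak * 1000:.2f} mm")
```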

  4. Deflection-based method for seismic response analysis of concrete walls: Benchmarking of CAMUS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Basu, Prabir C. [Civil and Structural Engineering Division, Atomic Energy Regulatory Board (India)]. E-mail: pcb@aerb.gov.in; Roshan, A.D. [Civil and Structural Engineering Division, Atomic Energy Regulatory Board (India)

    2007-07-15

    A number of shake table tests were conducted on a scaled-down model of a concrete wall as part of the CAMUS experiment. The experiments were conducted between 1996 and 1998 in the CEA facilities in Saclay, France. Benchmarking of the CAMUS experiments was undertaken as part of the coordinated research program on 'Safety Significance of Near-Field Earthquakes' organised by the International Atomic Energy Agency (IAEA). The deflection-based method was adopted for the benchmarking exercise. The non-linear static procedure of the deflection-based method has two basic steps: pushover analysis, and determination of the target displacement or performance point. Pushover analysis is an analytical procedure to assess the capacity of a structural system to withstand seismic loading, considering redundancies and inelastic deformation. The outcome of a pushover analysis is the force-displacement (base shear versus top/roof displacement) curve of the structure. This is obtained by step-by-step non-linear static analysis of the structure with increasing load. The second step is to determine the target displacement, also known as the performance point: the likely maximum displacement of the structure under a specified seismic input motion. Established procedures, FEMA-273 and ATC-40, are available to determine this maximum deflection. The responses of the CAMUS test specimen were determined by the deflection-based method, and the analytically calculated values compare well with the test results.

  5. Benchmark Study of 3D Pore-scale Flow and Solute Transport Simulation Methods

    Science.gov (United States)

    Scheibe, T. D.; Yang, X.; Mehmani, Y.; Perkins, W. A.; Pasquali, A.; Schoenherr, M.; Kim, K.; Perego, M.; Parks, M. L.; Trask, N.; Balhoff, M.; Richmond, M. C.; Geier, M.; Krafczyk, M.; Luo, L. S.; Tartakovsky, A. M.

    2015-12-01

    Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that benchmark study to include additional models of the first type based on the immersed-boundary method (IMB), lattice Boltzmann method (LBM), and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries in the manner of PNMs has not been fully determined. We apply all five approaches (FVM-based CFD, IMB, LBM, SPH and PNM) to simulate pore-scale velocity distributions and nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The benchmark study was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This study provides support for confidence in a variety of pore-scale modeling methods, and motivates further development and application of pore-scale simulation methods.

  10. Continuum discretization methods in a composite particle scattering off a nucleus: Benchmark calculations

    Science.gov (United States)

    Rubtsova, O. A.; Kukulin, V. I.; Moro, A. M.

    2008-09-01

    The direct comparison of two different continuum discretization methods toward the solution of a composite particle scattering off a nucleus is presented. The first approach—the continuum-discretized coupled-channel method—is based on the differential equation formalism, while the second one—the wave-packet continuum discretization method—uses the integral equation formulation for the composite-particle scattering problem. As benchmark calculations, we have chosen deuteron scattering off a 58Ni target (as a realistic illustrative example) at three different incident energies: high, middle, and low. Clear nonvanishing effects of closed inelastic channels at small and intermediate energies are established. The elastic cross sections found in both approaches are very close to each other for all three considered energies.

  11. Chiral purity assay for Flindokalner using tandem mass spectrometry: method development, validation, and benchmarking.

    Science.gov (United States)

    Young, Brandy L; Cooks, R G; Madden, Michelle C; Bair, Michael; Jia, Jingpin; Aubry, Anne-Françoise; Miller, Scott A

    2007-04-11

    The present work demonstrates the application and validation of a mass spectrometry method for quantitative chiral purity determination. The particular compound analyzed is Flindokalner, a Bristol-Myers Squibb drug candidate for post-stroke neuroprotection. Chiral quantification of Flindokalner was achieved using tandem mass spectrometry (MS/MS) and the kinetic method, a gas phase method used for thermochemical and chiral determinations. The MS/MS method was validated and benchmarked against two separate chromatographic techniques, chiral high performance liquid chromatography with ultra-violet detection (LC/UV) and achiral high performance liquid chromatography with circular dichroism detection (LC/CD). The chiral purity determination of Flindokalner using MS/MS proved to be rapid (3 min run time for each sample) and to have accuracy and precision comparable to the chiral LC/UV and achiral LC/CD methods. This method represents an alternative to commonly used chromatographic techniques as a means of chiral purity determination and is particularly useful in rapid screening experiments.

  12. [Benchmarking projects examining patient care in Germany: methods of analysis, survey results, and best practice].

    Science.gov (United States)

    Blumenstock, Gunnar; Fischer, Imma; de Cruppé, Werner; Geraedts, Max; Selbmann, Hans-Konrad

    2011-01-01

    A survey among 232 German health care organisations addressed benchmarking projects in patient care. Fifty-three projects were reported and analysed using a benchmarking development scheme and a list of criteria. None of the projects satisfied all the criteria; rather, examples of best practice for individual aspects were identified.

  13. Evaluating the Resilience of the Bottom-up Method used to Detect and Benchmark the Smartness of University Campuses

    DEFF Research Database (Denmark)

    Giovannella, Carlo; Andone, Diana; Dascalu, Mihai

    2016-01-01

    A new method to perform a bottom-up extraction and benchmark of the perceived multilevel smartness of complex ecosystems has been recently described and applied to territories and learning ecosystems like university campuses and schools. In this paper we study the resilience of our method...

  14. Evaluating the Resilience of the Bottom-up Method used to Detect and Benchmark the Smartness of University Campuses

    NARCIS (Netherlands)

    Giovannella, Carlo; Andone, Diana; Dascalu, Mihai; Popescu, Elvira; Rehm, Matthias; Mealha, Oscar

    2017-01-01

    A new method to perform a bottom-up extraction and benchmark of the perceived multilevel smartness of complex ecosystems has been recently described and applied to territories and learning ecosystems like university campuses and schools. In this paper we study the resilience of our method by co

  15. Molecular Line Emission from Multifluid Shock Waves. I. Numerical Methods and Benchmark Tests

    Science.gov (United States)

    Ciolek, Glenn E.; Roberge, Wayne G.

    2013-05-01

    We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are << magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.

  16. MOLECULAR LINE EMISSION FROM MULTIFLUID SHOCK WAVES. I. NUMERICAL METHODS AND BENCHMARK TESTS

    Energy Technology Data Exchange (ETDEWEB)

    Ciolek, Glenn E.; Roberge, Wayne G., E-mail: cioleg@rpi.edu, E-mail: roberw@rpi.edu [New York Center for Astrobiology (United States); Department of Physics, Applied Physics, and Astronomy, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180 (United States)

    2013-05-01

    We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are << magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.
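
    The operator-splitting scheme described in the two records above can be illustrated on a toy problem. The sketch below (a minimal Python illustration, not the paper's multifluid MHD code) advances a 1D linear advection equation with a relaxation source term by Strang splitting: a Godunov (upwind) step for the homogeneous hyperbolic part and an exact ODE update for the source part. All parameter values and the model equation itself are invented stand-ins.

        import numpy as np

        # Toy model: u_t + a u_x = -k (u - u_eq), split into transport + source parts.
        a, k, u_eq = 1.0, 5.0, 0.5          # advection speed, relaxation rate, equilibrium value
        nx, L, cfl = 200, 1.0, 0.9
        dx = L / nx
        dt = cfl * dx / abs(a)              # CFL-limited time step

        x = (np.arange(nx) + 0.5) * dx
        u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)   # square pulse initial data

        def godunov_step(u, dt):
            # Upwind (Godunov) update of the homogeneous advection part, periodic BCs.
            return u - a * dt / dx * (u - np.roll(u, 1))

        def source_step(u, dt):
            # Exact integration of u' = -k (u - u_eq) over dt.
            return u_eq + (u - u_eq) * np.exp(-k * dt)

        for _ in range(100):
            # Strang splitting: half source step, full transport step, half source step.
            u = source_step(u, dt / 2)
            u = godunov_step(u, dt)
            u = source_step(u, dt / 2)

        print("mean(u) after 100 steps:", round(float(u.mean()), 4))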

  17. Brief Analysis of the Benchmarking Product Research Method

    Institute of Scientific and Technical Information of China (English)

    项壮华; 李祥松

    2015-01-01

    Benchmarking is one of the most important methods in modern enterprise management, helping enterprises to improve continuously and gain competitive advantage. The paper briefly discusses the basic concept of benchmarking and its applications, analyses its commonly used methods and theoretical basis, and sets up a complete flow chart for benchmarking studies.

  18. An unbiased method to build benchmarking sets for ligand-based virtual screening and its application to GPCRs.

    Science.gov (United States)

    Xia, Jie; Jin, Hongwei; Liu, Zhenming; Zhang, Liangren; Wang, Xiang Simon

    2014-05-27

    Benchmarking data sets have become common in recent years for the purpose of virtual screening, though the main focus has been placed on structure-based virtual screening (SBVS) approaches. Due to the lack of crystal structures, there is great need for unbiased benchmarking sets to evaluate various ligand-based virtual screening (LBVS) methods for important drug targets such as G protein-coupled receptors (GPCRs). To date, ready-to-apply data sets for LBVS are fairly limited, and the direct usage of benchmarking sets designed for SBVS could introduce bias into the evaluation of LBVS. Herein, we propose an unbiased method to build benchmarking sets for LBVS and validate it on a multitude of GPCR targets. To be more specific, our method can (1) ensure chemical diversity of ligands, (2) maintain the physicochemical similarity between ligands and decoys, (3) make the decoys dissimilar in chemical topology to all ligands to avoid false negatives, and (4) maximize spatial random distribution of ligands and decoys. We evaluated the quality of our Unbiased Ligand Set (ULS) and Unbiased Decoy Set (UDS) using three common LBVS approaches, with Leave-One-Out (LOO) Cross-Validation (CV) and a metric of average AUC of the ROC curves. Our method has greatly reduced the "artificial enrichment" and "analogue bias" of a published GPCR benchmarking set, i.e., the GPCR Ligand Library (GLL)/GPCR Decoy Database (GDD). In addition, we addressed an important issue about the ratio of decoys per ligand and found that for a range of 30 to 100 it does not affect the quality of the benchmarking set, so we kept the original ratio of 39 from the GLL/GDD.
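
    Since the benchmark quality metric in this record is the average AUC of ROC curves, a minimal sketch of the AUC computation itself may be useful. The scores below are random stand-ins for LBVS similarity scores; only the 39:1 decoy-to-ligand ratio is taken from the abstract.

        import numpy as np

        # AUC as the Mann-Whitney statistic: the probability that a randomly chosen
        # ligand outscores a randomly chosen decoy (ties count one half).
        rng = np.random.default_rng(0)
        ligand_scores = rng.normal(1.0, 1.0, size=50)        # hypothetical actives
        decoy_scores = rng.normal(0.0, 1.0, size=50 * 39)    # 39 decoys per ligand, as in GLL/GDD

        def roc_auc(pos, neg):
            wins = (pos[:, None] > neg[None, :]).sum()
            ties = (pos[:, None] == neg[None, :]).sum()
            return (wins + 0.5 * ties) / (len(pos) * len(neg))

        print(f"AUC = {roc_auc(ligand_scores, decoy_scores):.3f}")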

  19. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  20. A two-dimensional method of manufactured solutions benchmark suite based on variations of Larsen's benchmark with escalating order of smoothness of the exact solution

    Energy Technology Data Exchange (ETDEWEB)

    Schunert, Sebastian; Azmy, Yousry Y., E-mail: snschune@ncsu.edu, E-mail: yyazmy@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC (United States)

    2011-07-01

    The quantification of the discretization error associated with the spatial discretization of the Discrete Ordinates (DO) equations in multidimensional Cartesian geometries is the central problem in error estimation of spatial discretization schemes for transport theory as well as computer code verification. Traditionally, fine mesh solutions are employed as reference, because analytical solutions only exist in the absence of scattering. This approach, however, is inadequate when the discretization error associated with the reference solution is not small compared to the discretization error associated with the mesh under scrutiny. Typically this situation occurs if the mesh of interest is only a couple of refinement levels away from the reference solution or if the order of accuracy of the numerical method (and hence the reference as well) is lower than expected. In this work we present a Method of Manufactured Solutions (MMS) benchmark suite with variable order of smoothness of the underlying exact solution for two-dimensional Cartesian geometries which provides analytical solutions averaged over arbitrary orthogonal meshes for scattering and non-scattering media. It should be emphasized that the developed MMS benchmark suite first eliminates the aforementioned limitation of fine mesh reference solutions, since it secures knowledge of the underlying true solution, and second that it allows for an arbitrary order of smoothness of the underlying exact solution. The latter is of importance because even for smooth parameters and boundary conditions the DO equations can feature exact solutions with limited smoothness. Moreover, the degree of smoothness is crucial for both the order of accuracy and the magnitude of the discretization error for any spatial discretization scheme. (author)

  1. A TWO-DIMENSIONAL METHOD OF MANUFACTURED SOLUTIONS BENCHMARK SUITE BASED ON VARIATIONS OF LARSEN'S BENCHMARK WITH ESCALATING ORDER OF SMOOTHNESS OF THE EXACT SOLUTION

    Energy Technology Data Exchange (ETDEWEB)

    Sebastian Schunert; Yousry Y. Azmy

    2011-05-01

    The quantification of the discretization error associated with the spatial discretization of the Discrete Ordinates (DO) equations in multidimensional Cartesian geometries is the central problem in error estimation of spatial discretization schemes for transport theory as well as computer code verification. Traditionally, fine mesh solutions are employed as reference, because analytical solutions only exist in the absence of scattering. This approach, however, is inadequate when the discretization error associated with the reference solution is not small compared to the discretization error associated with the mesh under scrutiny. Typically this situation occurs if the mesh of interest is only a couple of refinement levels away from the reference solution or if the order of accuracy of the numerical method (and hence the reference as well) is lower than expected. In this work we present a Method of Manufactured Solutions (MMS) benchmark suite with variable order of smoothness of the underlying exact solution for two-dimensional Cartesian geometries which provides analytical solutions averaged over arbitrary orthogonal meshes for scattering and non-scattering media. It should be emphasized that the developed MMS benchmark suite first eliminates the aforementioned limitation of fine mesh reference solutions, since it secures knowledge of the underlying true solution, and second that it allows for an arbitrary order of smoothness of the underlying exact solution. The latter is of importance because even for smooth parameters and boundary conditions the DO equations can feature exact solutions with limited smoothness. Moreover, the degree of smoothness is crucial for both the order of accuracy and the magnitude of the discretization error for any spatial discretization scheme.

  2. A comprehensive benchmark of kernel methods to extract protein-protein interactions from literature.

    Directory of Open Access Journals (Sweden)

    Domonkos Tikk

    Full Text Available The most important way of conveying new findings in biomedical research is scientific publication. Extraction of protein-protein interactions (PPIs) reported in scientific publications is one of the core topics of text mining in the life sciences. Recently, a new class of such methods has been proposed - convolution kernels that identify PPIs using deep parses of sentences. However, comparing published results of different PPI extraction methods is impossible due to the use of different evaluation corpora, different evaluation metrics, different tuning procedures, etc. In this paper, we study whether the reported performance metrics are robust across different corpora and learning settings and whether the use of deep parsing actually leads to an increase in extraction quality. Our ultimate goal is to identify the one method that performs best in real-life scenarios, where information extraction is performed on unseen text and not on specifically prepared evaluation data. We performed a comprehensive benchmarking of nine different methods for PPI extraction that use convolution kernels on rich linguistic information. Methods were evaluated on five different public corpora using cross-validation, cross-learning, and cross-corpus evaluation. Our study confirms that kernels using dependency trees generally outperform kernels based on syntax trees. However, our study also shows that only the best kernel methods can compete with a simple rule-based approach when the evaluation prevents information leakage between training and test corpora. Our results further reveal that the F-score of many approaches drops significantly if no corpus-specific parameter optimization is applied and that methods reaching a good AUC score often perform much worse in terms of F-score. We conclude that for most kernels no sensible estimation of PPI extraction performance on new text is possible, given the current heterogeneity in evaluation data. Nevertheless, our study

  3. A comprehensive benchmark of kernel methods to extract protein-protein interactions from literature.

    Science.gov (United States)

    Tikk, Domonkos; Thomas, Philippe; Palaga, Peter; Hakenberg, Jörg; Leser, Ulf

    2010-07-01

    The most important way of conveying new findings in biomedical research is scientific publication. Extraction of protein-protein interactions (PPIs) reported in scientific publications is one of the core topics of text mining in the life sciences. Recently, a new class of such methods has been proposed - convolution kernels that identify PPIs using deep parses of sentences. However, comparing published results of different PPI extraction methods is impossible due to the use of different evaluation corpora, different evaluation metrics, different tuning procedures, etc. In this paper, we study whether the reported performance metrics are robust across different corpora and learning settings and whether the use of deep parsing actually leads to an increase in extraction quality. Our ultimate goal is to identify the one method that performs best in real-life scenarios, where information extraction is performed on unseen text and not on specifically prepared evaluation data. We performed a comprehensive benchmarking of nine different methods for PPI extraction that use convolution kernels on rich linguistic information. Methods were evaluated on five different public corpora using cross-validation, cross-learning, and cross-corpus evaluation. Our study confirms that kernels using dependency trees generally outperform kernels based on syntax trees. However, our study also shows that only the best kernel methods can compete with a simple rule-based approach when the evaluation prevents information leakage between training and test corpora. Our results further reveal that the F-score of many approaches drops significantly if no corpus-specific parameter optimization is applied and that methods reaching a good AUC score often perform much worse in terms of F-score. We conclude that for most kernels no sensible estimation of PPI extraction performance on new text is possible, given the current heterogeneity in evaluation data. Nevertheless, our study shows that three
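
    The divergence between AUC and F-score reported in the two records above is easy to reproduce on synthetic data. The sketch below is a toy illustration (not the benchmark's code): it builds classifier scores that rank positives perfectly, giving AUC = 1, yet never cross the default 0.5 decision threshold, giving an F-score of zero.

        import numpy as np
        from sklearn.metrics import f1_score, roc_auc_score

        rng = np.random.default_rng(1)
        y_true = rng.integers(0, 2, size=1000)
        # Positives score in [0.2, 0.3], negatives in [0.0, 0.1]: perfect ranking,
        # but every score falls below the default decision threshold of 0.5.
        scores = 0.2 * y_true + 0.1 * rng.random(1000)

        print("AUC    :", roc_auc_score(y_true, scores))                    # 1.0
        print("F-score:", f1_score(y_true, scores > 0.5, zero_division=0))  # 0.0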

  4. A Benchmark of Lidar-Based Single Tree Detection Methods Using Heterogeneous Forest Data from the Alpine Space

    Directory of Open Access Journals (Sweden)

    Lothar Eysn

    2015-05-01

    Full Text Available In this study, eight airborne laser scanning (ALS)-based single tree detection methods are benchmarked and investigated. The methods were applied to a unique dataset originating from different regions of the Alpine Space covering different study areas, forest types, and structures. This is the first benchmark ever performed for different forests within the Alps. The evaluation of the detection results was carried out in a reproducible way by automatically matching them to precise in situ forest inventory data using a restricted nearest neighbor detection approach. Quantitative statistical parameters such as percentages of correctly matched trees and omission and commission errors are presented. The proposed automated matching procedure presented herein shows an overall accuracy of 97%. Method-based analysis, investigations per forest type, and an overall benchmark performance are presented. The best matching rate was obtained for single-layered coniferous forests. Dominated trees were challenging for all methods. The overall performance shows a matching rate of 47%, which is comparable to results of other benchmarks performed in the past. The study provides new insight regarding the potential and limits of tree detection with ALS and underlines some key aspects regarding the choice of method when performing single tree detection for the various forest types encountered in alpine regions.

  5. Bacterial whole genome-based phylogeny: construction of a new benchmarking dataset and assessment of some existing methods

    DEFF Research Database (Denmark)

    Ahrenfeldt, Johanne; Skaarup, Carina; Hasman, Henrik;

    2017-01-01

    , consensus whole-genome sequences, as well as descriptions of the known phylogeny in a variety of formats) publicly available, with the hope that other groups may find this data useful for benchmarking and exploring the performance of epidemiological methods. All data is freely available at: https://cge.cbs.dtu.dk/services/evolution_data.php....

  6. Re-analysis of Alaskan benchmark glacier mass-balance data using the index method

    Science.gov (United States)

    Van Beusekom, Ashley E.; O'Neel, Shad R.; March, Rod S.; Sass, Louis C.; Cox, Leif H.

    2010-01-01

    At Gulkana and Wolverine Glaciers, designated the Alaskan benchmark glaciers, we re-analyzed and re-computed the mass balance time series from 1966 to 2009 to accomplish our goal of making more robust time series. Each glacier's data record was analyzed with the same methods. For surface processes, we estimated missing information with an improved degree-day model. Degree-day models predict ablation from the sum of daily mean temperatures and an empirical degree-day factor. We modernized the traditional degree-day model and derived new degree-day factors in an effort to match the balance time series more closely. We estimated missing yearly-site data with a new balance gradient method. These efforts showed that an additional step needed to be taken at Wolverine Glacier to adjust for non-representative index sites. As with the previously calculated mass balances, the re-analyzed balances showed a continuing trend of mass loss. We noted that the time series, and thus our estimate of the cumulative mass loss over the period of record, was very sensitive to the data input, and suggest the need to add data-collection sites and modernize our weather stations.
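
    As the record notes, a degree-day model predicts ablation from the sum of positive daily mean temperatures scaled by an empirical degree-day factor. The sketch below shows that relation; the temperatures and the factor are invented for illustration, not the USGS calibration.

        import numpy as np

        daily_mean_temp_C = np.array([-3.0, 1.5, 4.0, 6.2, 2.1, -0.5, 5.8])  # hypothetical record
        ddf_ice = 0.008   # assumed degree-day factor, m w.e. per positive degree-day

        pdd = np.clip(daily_mean_temp_C, 0.0, None).sum()   # positive degree-day sum
        ablation = ddf_ice * pdd
        print(f"PDD = {pdd:.1f} degC day, ablation = {ablation:.3f} m w.e.")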

  7. Simulation Methods for High-Cycle Fatigue-Driven Delamination using Cohesive Zone Models - Fundamental Behavior and Benchmark Studies

    DEFF Research Database (Denmark)

    Bak, Brian Lau Verndal; Lindgaard, Esben; Turon, A.;

    2015-01-01

    A novel computational method for simulating fatigue-driven delamination cracks in composite laminated structures under cyclic loading based on a cohesive zone model [2] and new benchmark studies with four other comparable methods [3-6] are presented. The benchmark studies describe and compare...... the traction-separation response in the cohesive zone and the transition phase from quasistatic to fatigue loading for each method. Furthermore, the accuracy of the predicted crack growth rate is studied and compared for each method. It is shown that the method described in [2] is significantly more accurate...... than the other methods [3-6]. Finally, studies are presented of the dependency and sensitivity to the change in different quasi-static material parameters and model specific fitting parameters. It is shown that all the methods except [2] rely on different parameters which are not possible to determine...

  8. Development of Benchmarks for Operating Costs and Resources Consumption to be Used in Healthcare Building Sustainability Assessment Methods

    Directory of Open Access Journals (Sweden)

    Maria de Fátima Castro

    2015-09-01

    Full Text Available Since the last decade of the twentieth century, the healthcare industry has been paying attention to the environmental impact of its buildings, and therefore new regulations, policy goals, and Healthcare Building Sustainability Assessment (HBSA) methods are being developed and implemented. At present, healthcare is one of the most regulated industries and it is also one of the largest consumers of energy per net floor area. To assess the sustainability of healthcare buildings it is necessary to establish a set of benchmarks related to their life-cycle performance. They are essential both to rate the sustainability of a project and to support designers and other stakeholders in the process of designing and operating a sustainable building, by allowing a comparison to be made between a project and the conventional and best market practices. This research is focused on the methodology to set the benchmarks for resources consumption, waste production, operation costs and potential environmental impacts related to the operational phase of healthcare buildings. It aims at contributing to the reduction of the subjectivity found in the definition of the benchmarks used in Building Sustainability Assessment (BSA) methods, and it is applied in the Portuguese context. These benchmarks will be used in the development of a Portuguese HBSA method.

  9. Benchmarking DFT and semiempirical methods on structures and lattice energies for ten ice polymorphs

    Science.gov (United States)

    Brandenburg, Jan Gerit; Maas, Tilo; Grimme, Stefan

    2015-03-01

    Water in different phases under various external conditions is very important in bio-chemical systems and for material science at surfaces. Density functional theory methods and approximations thereof have to be tested system specifically to benchmark their accuracy regarding computed structures and interaction energies. In this study, we present and test a set of ten ice polymorphs in comparison to experimental data with mass densities ranging from 0.9 to 1.5 g/cm3 and including explicit corrections for zero-point vibrational and thermal effects. London dispersion inclusive density functionals at the generalized gradient approximation (GGA), meta-GGA, and hybrid level as well as alternative low-cost molecular orbital methods are considered. The widely used functional of Perdew, Burke and Ernzerhof (PBE) systematically overbinds and overall provides inconsistent results. All other tested methods yield reasonable to very good accuracy. BLYP-D3atm gives excellent results with mean absolute errors for the lattice energy below 1 kcal/mol (7% relative deviation). The corresponding optimized structures are very accurate with mean absolute relative deviations (MARDs) from the reference unit cell volume below 1%. The impact of Axilrod-Teller-Muto (atm) type three-body dispersion and of non-local Fock exchange is small but on average their inclusion improves the results. While the density functional tight-binding model DFTB3-D3 performs well for low density phases, it does not yield good high density structures. As low-cost alternative for structure related problems, we recommend the recently introduced minimal basis Hartree-Fock method HF-3c with a MARD of about 3%.
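
    The two error measures quoted in this record, mean absolute error (MAE) for lattice energies and mean absolute relative deviation (MARD) for unit-cell volumes, are straightforward to compute; the numbers in this sketch are placeholders, not the paper's data.

        import numpy as np

        e_ref = np.array([-14.1, -13.4, -14.7])   # kcal/mol, hypothetical reference lattice energies
        e_dft = np.array([-13.5, -13.0, -14.0])   # hypothetical computed values
        v_ref = np.array([128.0, 139.0, 105.0])   # A^3, hypothetical reference cell volumes
        v_dft = np.array([127.1, 140.4, 104.2])

        mae = np.mean(np.abs(e_dft - e_ref))
        mard = 100.0 * np.mean(np.abs(v_dft - v_ref) / v_ref)
        print(f"MAE = {mae:.2f} kcal/mol, MARD = {mard:.1f} %")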

  10. Combining and benchmarking methods of foetal ECG extraction without maternal or scalp electrode data.

    Science.gov (United States)

    Behar, Joachim; Oster, Julien; Clifford, Gari D

    2014-08-01

    Despite significant advances in adult clinical electrocardiography (ECG) signal processing techniques and the power of digital processors, the analysis of non-invasive foetal ECG (NI-FECG) is still in its infancy. The Physionet/Computing in Cardiology Challenge 2013 addresses some of these limitations by making a set of FECG data publicly available to the scientific community for evaluation of signal processing techniques. The abdominal ECG signals were first preprocessed with a band-pass filter in order to remove higher frequencies and baseline wander. A notch filter to remove power interferences at 50 Hz or 60 Hz was applied if required. The signals were then normalized before applying various source separation techniques to cancel the maternal ECG. These techniques included: template subtraction, principal/independent component analysis, extended Kalman filter and a combination of a subset of these methods (FUSE method). Foetal QRS detection was performed on all residuals using a Pan and Tompkins QRS detector and the residual channel with the smoothest foetal heart rate time series was selected. The FUSE algorithm performed better than all the individual methods on the training data set. On the validation and test sets, the best Challenge scores obtained were E1 = 179.44, E2 = 20.79, E3 = 153.07, E4 = 29.62 and E5 = 4.67 for events 1-5 respectively using the FUSE method. These were the best Challenge scores for E1 and E2 and third and second best Challenge scores for E3, E4 and E5 out of the 53 international teams that entered the Challenge. The results demonstrated that existing standard approaches for foetal heart rate estimation can be improved by fusing estimators together. We provide open source code to enable benchmarking for each of the standard approaches described.
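
    The preprocessing chain in this record (band-pass against baseline wander and high frequencies, then a mains notch) can be sketched with SciPy. The cutoff choices and the synthetic signal below are typical illustrative assumptions, not values taken from the Challenge entry.

        import numpy as np
        from scipy.signal import butter, filtfilt, iirnotch

        fs = 1000.0                                  # assumed sampling rate, Hz
        t = np.arange(0, 10, 1 / fs)
        ecg_like = np.sin(2 * np.pi * 10 * t)        # stand-in for ECG content
        drift = 0.8 * np.sin(2 * np.pi * 0.3 * t)    # baseline wander
        mains = 0.5 * np.sin(2 * np.pi * 50 * t)     # power-line interference
        raw = ecg_like + drift + mains

        b, a = butter(4, [3.0, 100.0], btype="bandpass", fs=fs)
        x = filtfilt(b, a, raw)                      # zero-phase band-pass

        bn, an = iirnotch(50.0, Q=30.0, fs=fs)
        x = filtfilt(bn, an, x)                      # suppress 50 Hz mains

        print("power kept relative to the clean component:",
              round(float(np.var(x) / np.var(ecg_like)), 3))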

  11. Evaluation of anode (electro)catalytic materials for the direct borohydride fuel cell: Methods and benchmarks

    Science.gov (United States)

    Olu, Pierre-Yves; Job, Nathalie; Chatenet, Marian

    2016-09-01

    In this paper, different methods are discussed for evaluating the potential of a given catalyst in view of an application as a direct borohydride fuel cell (DBFC) anode material. Characterization results in the DBFC configuration are notably analyzed in the light of important experimental variables which influence the performance of the DBFC. However, in many practical DBFC-oriented studies, these various experimental variables prevent one from isolating the influence of the anode catalyst on the cell performance. Thus, the electrochemical three-electrode cell is a widely employed and useful tool to isolate the DBFC anode catalyst and to investigate its electrocatalytic activity towards the borohydride oxidation reaction (BOR) in the absence of other limitations. This article reviews selected results for different types of catalysts in an electrochemical cell containing a sodium borohydride alkaline electrolyte. In particular, common experimental conditions and benchmarks are proposed for practical evaluation of the electrocatalytic activity towards the BOR in the three-electrode cell configuration. The major issue of gaseous hydrogen generation and escape during DBFC operation is also addressed through a comprehensive review of various results depending on the anode composition. Finally, preliminary concerns are raised about the stability of potential anode catalysts during DBFC operation.

  12. APPLICATION OF PARAMETRIC AND NON-PARAMETRIC BENCHMARKING METHODS IN COST EFFICIENCY ANALYSIS OF THE ELECTRICITY DISTRIBUTION SECTOR

    Directory of Open Access Journals (Sweden)

    Andrea Furková

    2007-06-01

    Full Text Available This paper explores the application of parametric and non-parametric benchmarking methods in measuring the cost efficiency of Slovak and Czech electricity distribution companies. We compare the relative cost efficiency of Slovak and Czech distribution companies using two benchmarking methods: the non-parametric Data Envelopment Analysis (DEA) and the Stochastic Frontier Analysis (SFA) as the parametric approach. The first part of the analysis was based on DEA models. Traditional cross-section CCR and BCC models were modified for cost efficiency estimation. In further analysis we focus on two versions of the stochastic frontier cost function using panel data: the MLE model and the GLS model. These models have been applied to an unbalanced panel of 11 regional electricity distribution utilities (3 Slovak and 8 Czech) over the period from 2000 to 2004. The differences in estimated scores, parameters and rankings of utilities were analyzed. We observed significant differences between the parametric methods and the DEA approach.

  13. Comparative assessment of scoring functions on an updated benchmark: 2. Evaluation methods and general results.

    Science.gov (United States)

    Li, Yan; Han, Li; Liu, Zhihai; Wang, Renxiao

    2014-06-23

    Our comparative assessment of scoring functions (CASF) benchmark is created to provide an objective evaluation of current scoring functions. The key idea of CASF is to compare the general performance of scoring functions on a diverse set of protein-ligand complexes. In order to avoid testing scoring functions in the context of molecular docking, the scoring process is separated from the docking (or sampling) process by using ensembles of ligand binding poses that are generated beforehand. Here, we describe the technical methods and evaluation results of the latest CASF-2013 study. The PDBbind core set (version 2013) was employed as the primary test set in this study, which consists of 195 protein-ligand complexes with high-quality three-dimensional structures and reliable binding constants. A panel of 20 scoring functions, most of which are implemented in mainstream commercial software, were evaluated in terms of "scoring power" (binding affinity prediction), "ranking power" (relative ranking prediction), "docking power" (binding pose prediction), and "screening power" (discrimination of true binders from random molecules). Our results reveal that the performance of these scoring functions is generally more promising in the docking/screening power tests than in the scoring/ranking power tests. Top-ranked scoring functions in the scoring power test, such as X-Score(HM), ChemScore@SYBYL, ChemPLP@GOLD, and PLP@DS, are also top-ranked in the ranking power test. Top-ranked scoring functions in the docking power test, such as ChemPLP@GOLD, ChemScore@GOLD, GlideScore-SP, LigScore@DS, and PLP@DS, are also top-ranked in the screening power test. Our results obtained on the entire test set and its subsets suggest that the real challenge in protein-ligand binding affinity prediction lies in polar interactions and the associated desolvation effect. Nonadditive features observed among high-affinity protein-ligand complexes also need attention.
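
    Of the four tests, the "scoring power" metric is the simplest to state: the Pearson correlation between predicted scores and experimental binding data. The sketch below uses synthetic stand-ins for the 195 PDBbind core set complexes; the numbers are illustrative, not CASF results.

        import numpy as np

        rng = np.random.default_rng(2)
        log_ka_exp = rng.uniform(2, 12, size=195)                 # 195 complexes, as in CASF-2013
        predicted = 0.8 * log_ka_exp + rng.normal(0, 1.5, 195)    # hypothetical scoring function

        r = np.corrcoef(log_ka_exp, predicted)[0, 1]
        print(f"scoring power (Pearson r) = {r:.3f}")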

  14. Benchmark of Machine Learning Methods for Classification of a SENTINEL-2 Image

    Science.gov (United States)

    Pirotti, F.; Sunar, F.; Piragnolo, M.

    2016-06-01

    Thanks to mainly ESA and USGS, a large bulk of free images of the Earth is readily available nowadays. One of the main goals of remote sensing is to label images according to a set of semantic categories, i.e. image classification. This is a very challenging issue since land cover of a specific class may present a large spatial and spectral variability and objects may appear at different scales and orientations. In this study, we report the results of benchmarking 9 machine learning algorithms tested for accuracy and speed in training and classification of land-cover classes in a Sentinel-2 dataset. The following machine learning methods (MLM) have been tested: linear discriminant analysis, k-nearest neighbour, random forests, support vector machines, multi layered perceptron, multi layered perceptron ensemble, ctree, boosting, logarithmic regression. The validation is carried out using a control dataset which consists of an independent classification in 11 land-cover classes of an area about 60 km2, obtained by manual visual interpretation of high resolution images (20 cm ground sampling distance) by experts. In this study five out of the eleven classes are used since the others have too few samples (pixels) for testing and validating subsets. The classes used are the following: (i) urban (ii) sowable areas (iii) water (iv) tree plantations (v) grasslands. Validation is carried out using three different approaches: (i) using pixels from the training dataset (train), (ii) using pixels from the training dataset and applying cross-validation with the k-fold method (kfold) and (iii) using all pixels from the control dataset. Five accuracy indices are calculated for the comparison between the values predicted with each model and control values over three sets of data: the training dataset (train), the whole control dataset (full) and with k-fold cross-validation (kfold) with ten folds. Results from validation of predictions of the whole dataset (full) show the random

  15. BENCHMARK OF MACHINE LEARNING METHODS FOR CLASSIFICATION OF A SENTINEL-2 IMAGE

    Directory of Open Access Journals (Sweden)

    F. Pirotti

    2016-06-01

    Full Text Available Thanks to mainly ESA and USGS, a large bulk of free images of the Earth is readily available nowadays. One of the main goals of remote sensing is to label images according to a set of semantic categories, i.e. image classification. This is a very challenging issue since land cover of a specific class may present a large spatial and spectral variability and objects may appear at different scales and orientations. In this study, we report the results of benchmarking 9 machine learning algorithms tested for accuracy and speed in training and classification of land-cover classes in a Sentinel-2 dataset. The following machine learning methods (MLM) have been tested: linear discriminant analysis, k-nearest neighbour, random forests, support vector machines, multi layered perceptron, multi layered perceptron ensemble, ctree, boosting, logarithmic regression. The validation is carried out using a control dataset which consists of an independent classification in 11 land-cover classes of an area about 60 km2, obtained by manual visual interpretation of high resolution images (20 cm ground sampling distance) by experts. In this study five out of the eleven classes are used since the others have too few samples (pixels) for testing and validating subsets. The classes used are the following: (i) urban (ii) sowable areas (iii) water (iv) tree plantations (v) grasslands. Validation is carried out using three different approaches: (i) using pixels from the training dataset (train), (ii) using pixels from the training dataset and applying cross-validation with the k-fold method (kfold) and (iii) using all pixels from the control dataset. Five accuracy indices are calculated for the comparison between the values predicted with each model and control values over three sets of data: the training dataset (train), the whole control dataset (full) and with k-fold cross-validation (kfold) with ten folds. Results from validation of predictions of the whole dataset (full) show the
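
    The ten-fold cross-validation protocol used in these two records can be sketched with scikit-learn. The synthetic feature matrix below merely stands in for Sentinel-2 band values; only the fold count and the accuracy metric follow the abstract.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        # Synthetic "pixels" with five classes, mimicking the five land-cover classes.
        X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                                   n_classes=5, random_state=0)
        for name, clf in [("k-NN", KNeighborsClassifier()),
                          ("random forest", RandomForestClassifier(random_state=0))]:
            acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
            print(f"{name}: {acc.mean():.3f} +/- {acc.std():.3f}")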

  16. Benchmarking passive seismic methods of estimating the depth of velocity interfaces down to ~300 m

    Science.gov (United States)

    Czarnota, Karol; Gorbatov, Alexei

    2016-04-01

    In shallow passive seismology it is generally accepted that the spatial autocorrelation (SPAC) method is more robust than the horizontal-over-vertical spectral ratio (HVSR) method at resolving the depth to surface-wave velocity (Vs) interfaces. Here we present results of a field test of these two methods over ten drill sites in western Victoria, Australia. The target interface is the base of Cenozoic unconsolidated to semi-consolidated clastic and/or carbonate sediments of the Murray Basin, which overlie Paleozoic crystalline rocks. Depths of this interface intersected in drill holes are between ~27 m and ~300 m. Seismometers were deployed in a three-arm spiral array, with a radius of 250 m, consisting of 13 Trillium Compact 120 s broadband instruments. Data were acquired at each site for 7-21 hours. The Vs architecture beneath each site was determined through nonlinear inversion of HVSR and SPAC data using the neighbourhood algorithm, implemented in the geopsy modelling package (Wathelet, 2005, GRL v35). The HVSR technique yielded depth estimates of the target interface (Vs > 1000 m/s) generally within ±20% error. Successful estimates were even obtained at a site with an inverted velocity profile, where Quaternary basalts overlie Neogene sediments which in turn overlie the target basement. Half of the SPAC estimates showed significantly higher errors than were obtained using HVSR. Joint inversion provided the most reliable estimates but was unstable at three sites. We attribute the surprising success of HVSR over SPAC to a low content of transient signals within the seismic record caused by low levels of anthropogenic noise at the benchmark sites. At a few sites SPAC waveform curves showed clear overtones suggesting that more reliable SPAC estimates may be obtained utilizing a multi-modal inversion. Nevertheless, our study indicates that reliable basin thickness estimates in the Australian conditions tested can be obtained utilizing HVSR data from a single
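
    For readers unfamiliar with the HVSR technique compared in this record, the sketch below computes a bare-bones horizontal-over-vertical spectral ratio from three-component records. The traces here are synthetic white noise, so the ratio is essentially flat; field data would show a resonance peak whose frequency relates to interface depth. Windowing, smoothing, and window averaging are reduced to a minimum for illustration.

        import numpy as np

        fs = 100.0                                             # assumed sampling rate, Hz
        rng = np.random.default_rng(3)
        n, e, z = (rng.normal(size=2**14) for _ in range(3))   # stand-in noise records

        def amp_spectrum(x):
            # Amplitude spectrum of a Hann-tapered trace.
            return np.abs(np.fft.rfft(x * np.hanning(len(x))))

        h = np.sqrt((amp_spectrum(n) ** 2 + amp_spectrum(e) ** 2) / 2)  # quadratic-mean horizontal
        hvsr = h / amp_spectrum(z)
        freqs = np.fft.rfftfreq(2**14, 1 / fs)
        print("HVSR peak near", round(float(freqs[np.argmax(hvsr[1:]) + 1]), 2), "Hz")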

  17. Financial Benchmarking

    OpenAIRE

    2012-01-01

    This bachelor's thesis focuses on the financial benchmarking of TULIPA PRAHA s.r.o. The aim of the work is to evaluate the financial situation of the company, identify its strengths and weaknesses, and find out how efficiently the company performs in comparison with top companies in the same field, using the INFA benchmarking diagnostic system of financial indicators. The theoretical part includes the characteristics of financial analysis, which financial benchmarking is based on a...

  18. An efficient dose-compensation method for proximity effect correction

    Energy Technology Data Exchange (ETDEWEB)

    Wang Ying; Han Weihua; Yang Xiang; Zhang Yang; Yang Fuhua [Research Center of Semiconductor Integrated Technology, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083 (China); Zhang Renping, E-mail: wangying@semi.ac.c [State Key Laboratory for Superlattices and Microstructures, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083 (China)

    2010-08-15

    A novel simple dose-compensation method is developed for proximity effect correction in electron-beam lithography. The sizes of exposed patterns depend on dose factors while other exposure parameters (including accelerating voltage, resist thickness, exposure step size, substrate material, and so on) remain constant. This method is based on two reasonable assumptions in the evaluation of the compensated dose factor: one is that the relation between dose factors and circle diameters is linear in the range under consideration; the other is that, for simplicity, the compensated dose factor is only affected by the nearest neighbors. Four-layer-hexagon photonic crystal structures were fabricated as test patterns to demonstrate this method. Compared to the uncorrected structures, the homogeneity of the corrected hole size in the photonic crystal structures was clearly improved. (semiconductor technology)
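
    A small numeric sketch of the nearest-neighbor compensation idea, under the record's two assumptions (linear diameter-versus-dose relation; only nearest neighbors contribute). The slope, intercept, coupling fraction, and target diameter are invented for illustration, not taken from the paper.

        # Assumed linear fit: diameter(nm) = c0 + c1 * effective_dose_factor.
        c0, c1 = 40.0, 120.0
        eta = 0.05        # assumed extra dose fraction contributed per nearest neighbor
        target_d = 160.0  # desired hole diameter, nm

        def compensated_dose(n_neighbors):
            # Solve c0 + c1 * dose * (1 + n * eta) = target_d for the applied dose.
            nominal = (target_d - c0) / c1        # dose factor ignoring proximity
            return nominal / (1.0 + n_neighbors * eta)

        for n in (3, 4, 6):   # e.g. edge, corner, and bulk sites of a hexagonal lattice
            print(f"{n} neighbors -> dose factor {compensated_dose(n):.3f}")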

  19. Inchworm Monte Carlo for exact non-adiabatic dynamics. II. Benchmarks and comparison with established methods

    Science.gov (United States)

    Chen, Hsing-Ta; Cohen, Guy; Reichman, David R.

    2017-02-01

    In this second paper of a two part series, we present extensive benchmark results for two different inchworm Monte Carlo expansions for the spin-boson model. Our results are compared to previously developed numerically exact approaches for this problem. A detailed discussion of convergence and error propagation is presented. Our results and analysis allow for an understanding of the benefits and drawbacks of inchworm Monte Carlo compared to other approaches for exact real-time non-adiabatic quantum dynamics.

  20. Inchworm Monte Carlo for exact non-adiabatic dynamics. II. Benchmarks and comparison with established methods

    CERN Document Server

    Chen, Hsing-Ta; Reichman, David R

    2016-01-01

    In this second paper of a two part series, we present extensive benchmark results for two different inchworm Monte Carlo expansions for the spin-boson model. Our results are compared to previously developed numerically exact approaches for this problem. A detailed discussion of convergence and error propagation is presented. Our results and analysis allow for an understanding of the benefits and drawbacks of inchworm Monte Carlo compared to other approaches for exact real-time non-adiabatic quantum dynamics.

  1. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms; efficiency and comprehensive monotonicity characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...

  2. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different characteristics on dairy farms influences the technical efficiency....

  3. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    -tailed hawk, osprey) (scientific names for both the mammalian and avian species are presented in Appendix B). [In this document, NOAEL refers to both dose (mg contaminant per kg animal body weight per day) and concentration (mg contaminant per kg of food or L of drinking water)]. The 20 wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at U.S. Department of Energy (DOE) waste sites. The NOAEL-based benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species; LOAEL-based benchmarks represent threshold levels at which adverse effects are likely to become evident. These benchmarks consider contaminant exposure through oral ingestion of contaminated media only. Exposure through inhalation and/or direct dermal exposure are not considered in this report.

  4. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  5. A coupled deterministic/stochastic method for computing neutron capture therapy dose rates

    Science.gov (United States)

    Hubbard, Thomas Richard

    new method was validated by comparing results to experimental measurements and benchmark data in a series of test cases chosen to demonstrate the strengths and weaknesses of the method. Experimental cases included the SAINT gold foil irradiations at the UVAR and detailed phantom dosimetry measurements at the Brookhaven Medical Research Reactor (BMRR). Results of the validation studies showed that the method provides values that are, in most cases, within one fractional standard deviation (FSD) of accepted experimental and benchmark values. A sample brain tumor treatment case was modeled for the conceptual UVAR NCT facility in order to determine the effect of body orientation, size, position, and shielding on the neutron dose rate at a variety of body parts. S{sub n} "ray effects" were apparent and caused inaccuracies toward the back of the coupling surface; these can be avoided. The method provides treatment planners the ability to calculate dose rates throughout a patient's body and in the treatment room for various treatment configurations in order to minimize the dose to healthy tissue. The thermal neutrons provide the major contribution to neutron dose, but their effect can be minimized by applying localized shielding and by orienting the patient in order to maximize self-shielding. The method may also be used for facility design studies, and such studies of the UVAR have confirmed its suitability as an NCT facility.

  6. Fully Automated Treatment Planning for Head and Neck Radiotherapy using a Voxel-Based Dose Prediction and Dose Mimicking Method

    CERN Document Server

    McIntosh, Chris; McNiven, Andrea; Jaffray, David A; Purdie, Thomas G

    2016-01-01

    Recent works in automated radiotherapy treatment planning have used machine learning based on historical treatment plans to infer the spatial dose distribution for a novel patient directly from the planning image. We present an atlas-based approach which learns a dose prediction model for each patient (atlas) in a training database, and then learns to match novel patients to the most relevant atlases. The method creates a spatial dose objective, which specifies the desired dose-per-voxel, and therefore replaces any requirement for specifying dose-volume objectives for conveying the goals of treatment planning. A probabilistic dose distribution is inferred from the most relevant atlases, and is scalarized using a conditional random field to determine the most likely spatial distribution of dose to yield a specific dose prior (histogram) for relevant regions of interest. Voxel-based dose mimicking then converts the predicted dose distribution to a deliverable treatment plan dose distribution. In this study, we ...

  7. A field-based method to derive macroinvertebrate benchmark for specific conductivity adapted for small data sets and demonstrated in the Hun-Tai River Basin, Northeast China.

    Science.gov (United States)

    Zhao, Qian; Jia, Xiaobo; Xia, Rui; Lin, Jianing; Zhang, Yuan

    2016-09-01

    Ionic mixtures, measured as specific conductivity, have drawn increasing concern because of their toxicity to aquatic organisms. However, identifying protective values of specific conductivity for aquatic organisms is challenging, given that laboratory test systems can examine neither the more salt-intolerant species nor effects occurring in streams. Large data sets of the kind used for deriving field-based benchmarks are rarely available. In this study, a field-based method for small data sets was used to derive a specific conductivity benchmark, which is expected to prevent the extirpation of 95% of local taxa from circum-neutral to alkaline waters dominated by a mixture of SO4(2-) and HCO3(-) anions and other dissolved ions. To compensate for the smaller sample size, species-level analyses were combined with genus-level analyses. The benchmark is based on extirpation concentration (XC95) values of specific conductivity for 60 macroinvertebrate genera estimated from 296 sampling sites in the Hun-Tai River Basin. We derived the specific conductivity benchmark by using a 2-point interpolation method, which yielded a benchmark of 249 μS/cm. Our study tailored the method developed by USEPA to derive the aquatic life benchmark for specific conductivity for basin-scale application, and may provide useful information for water pollution control and management.
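
    The 2-point interpolation step can be made concrete: the benchmark is the 5th percentile of the sorted genus XC95 values, obtained by interpolating between the two bracketing values. The XC95 numbers below are invented, not the Hun-Tai data; NumPy's percentile with linear interpolation gives the same answer.

        import numpy as np

        xc95 = np.sort(np.array([180., 210., 230., 250., 260., 300., 320., 340.,
                                 380., 400., 450., 500., 560., 610., 700., 760.,
                                 800., 900., 1000., 1200.]))   # 20 hypothetical genera

        rank = 0.05 * (len(xc95) - 1)          # fractional position of the 5th percentile
        lo = int(np.floor(rank))
        frac = rank - lo
        hc05 = xc95[lo] + frac * (xc95[lo + 1] - xc95[lo])   # 2-point interpolation
        print(f"HC05 benchmark = {hc05:.1f} uS/cm")
        print("check:", np.percentile(xc95, 5))               # same value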

  8. TORT solutions to the NEA suite of benchmarks for 3D transport methods and codes over a range in parameter space

    Energy Technology Data Exchange (ETDEWEB)

    Bekar, Kursat B. [Department of Mechanical and Nuclear Engineering, Penn State University, University Park, PA 16802 (United States)], E-mail: bekarkb@ornl.gov; Azmy, Yousry Y. [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States)], E-mail: yyazmy@ncsu.edu

    2009-04-15

    We present the TORT solutions to the 3D transport codes' suite of benchmarks exercise. An overview of benchmark configurations is provided, followed by a description of the TORT computational model we developed to solve the cases comprising the benchmark suite. In the numerical experiments reported in this paper, we chose to refine the spatial and angular discretizations simultaneously, from the coarsest model (40 x 40 x 40, 200 angles) to the finest model (160 x 160 x 160, 800 angles). The MCNP reference solution is used for evaluating the effect of model-refinement on the accuracy of the TORT solutions. The presented results show that the majority of benchmark quantities are computed with good accuracy by TORT, and that the accuracy improves with model refinement. However, this deliberately severe test has exposed some deficiencies in both deterministic and stochastic solution approaches. Specifically, TORT fails to converge the inner iterations in some benchmark configurations while MCNP produces zero tallies, or drastically poor statistics for some benchmark quantities. We conjecture that TORT's failure to converge is driven by ray effects in configurations with low scattering ratio and/or highly skewed computational cells, i.e. aspect ratio far from unity. The failure of MCNP occurs in quantities tallied over a very small area or volume in physical space, or quantities tallied many ({approx}25) mean free paths away from the source. Hence automated, robust, and reliable variance reduction techniques are essential for obtaining high quality reference values of the benchmark quantities. Preliminary results of the benchmark exercise indicate that the occasionally poor performance of TORT is shared with other deterministic codes. Armed with this information, method developers can now direct their attention to regions in parameter space where such failures occur and design alternative solution approaches for such instances.

  9. Shutdown dose rate assessment with the Advanced D1S method: Development, applications and validation

    Energy Technology Data Exchange (ETDEWEB)

    Villari, R., E-mail: rosaria.villari@enea.it [Associazione EURATOM-ENEA sulla Fusione, Via Enrico Fermi 45, 00044 Frascati, Rome (Italy); Fischer, U. [Karlsruhe Institute of Technology KIT, Institute for Neutron Physics and Reactor Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Moro, F. [Associazione EURATOM-ENEA sulla Fusione, Via Enrico Fermi 45, 00044 Frascati, Rome (Italy); Pereslavtsev, P. [Karlsruhe Institute of Technology KIT, Institute for Neutron Physics and Reactor Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Petrizzi, L. [European Commission, DG Research and Innovation K5, CDMA 00/030, B-1049 Brussels (Belgium); Podda, S. [Associazione EURATOM-ENEA sulla Fusione, Via Enrico Fermi 45, 00044 Frascati, Rome (Italy); Serikov, A. [Karlsruhe Institute of Technology KIT, Institute for Neutron Physics and Reactor Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany)

    2014-10-15

    Highlights: Development of Advanced-D1S for shutdown dose rate calculations; Recent applications of the tool to tokamaks; Summary of the results of benchmarking with measurements and R2S calculations; Limitations and further development. Abstract: The present paper addresses the recent developments and applications of Advanced-D1S to the calculations of shutdown dose rate in tokamak devices. Results of benchmarking with measurements and Rigorous 2-Step (R2S) calculations are summarized and discussed as well as limitations and further developments. The outcomes confirm the essential role of the Advanced-D1S methodology and the evidence for its complementary use with the R2Smesh approach for the reliable assessment of shutdown dose rates and related statistical uncertainties in present and future fusion devices.

  10. Toward an organ based dose prescription method for the improved accuracy of murine dose in orthovoltage x-ray irradiators

    Energy Technology Data Exchange (ETDEWEB)

    Belley, Matthew D.; Wang, Chu [Medical Physics Graduate Program, Duke University Medical Center, Durham, North Carolina 27705 (United States); Nguyen, Giao; Gunasingha, Rathnayaka [Duke Radiation Dosimetry Laboratory, Duke University Medical Center, Durham, North Carolina 27710 (United States); Chao, Nelson J. [Department of Medicine, Duke University Medical Center, Durham, North Carolina 27710 and Department of Immunology, Duke University Medical Center, Durham, North Carolina 27710 (United States); Chen, Benny J. [Department of Medicine, Duke University Medical Center, Durham, North Carolina 27710 (United States); Dewhirst, Mark W. [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 (United States); Yoshizumi, Terry T., E-mail: terry.yoshizumi@duke.edu [Duke Radiation Dosimetry Laboratory, Duke University Medical Center, Durham, North Carolina 27710 (United States); Department of Radiology, Duke University Medical Center, Durham, North Carolina 27710 (United States); Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 (United States)

    2014-03-15

    Purpose: Accurate dosimetry is essential when irradiating mice to ensure that functional and molecular endpoints are well understood for the radiation dose delivered. Conventional methods of prescribing dose in mice involve the use of a single dose rate measurement and assume a uniform average dose throughout all organs of the entire mouse. Here, the authors report the individual average organ dose values for the irradiation of 12, 23, and 33 g mice on a 320 kVp x-ray irradiator and calculate the resulting error from using conventional dose prescription methods. Methods: Organ doses were simulated in the Geant4 Application for Tomographic Emission (GATE) toolkit using the MOBY mouse whole-body phantom. Dosimetry was performed for three beams utilizing filters A (1.65 mm Al), B (2.0 mm Al), and C (0.1 mm Cu + 2.5 mm Al), respectively. In addition, simulated x-ray spectra were validated with physical half-value layer measurements. Results: Average doses in soft-tissue organs were found to vary by as much as 23%–32%, depending on the filter. Compared to filters A and B, filter C provided the hardest beam and had the lowest variation in soft-tissue average organ doses across all mouse sizes, with a difference of 23% for the median mouse size of 23 g. Conclusions: This work suggests a new dose prescription method in small animal dosimetry: it presents a departure from the conventional approach of assigning a single dose value for irradiation of mice to a more comprehensive approach of characterizing individual organ doses to minimize error and uncertainty. In human radiation therapy, clinical treatment planning establishes the target dose as well as the dose distribution; however, this has generally not been done in small animal research. These results suggest that organ dose errors will be minimized by calibrating the dose rates for all filters and using different dose rates for different organs.
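
    A minimal sketch of the error this record quantifies, with hypothetical dose rates (not the paper's values): prescribing beam-on time from a single whole-body dose rate delivers a different dose to each organ than the organ-specific rates imply.

        # Sketch: error from prescribing with one whole-body dose rate instead
        # of organ-specific dose rates. All rates are hypothetical placeholders.
        target_dose = 4.0         # Gy, intended prescription
        whole_body_rate = 2.0     # Gy/min, single calibration measurement
        organ_rates = {"liver": 1.8, "lung": 2.3, "kidney": 2.1}  # Gy/min

        beam_on_time = target_dose / whole_body_rate  # min, conventional method

        for organ, rate in organ_rates.items():
            delivered = rate * beam_on_time
            error = (delivered - target_dose) / target_dose
            print(f"{organ}: {delivered:.2f} Gy delivered ({error:+.0%} vs target)")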

  11. Benchmark Calculations on the Atomization Enthalpy,Geometry and Vibrational Frequencies of UF6 with Relativistic DFT Methods

    Institute of Scientific and Technical Information of China (English)

    XIAO Hai; LI Jun

    2008-01-01

    Benchmark calculations on the molar atomization enthalpy, geometry, and vibrational frequencies of uranium hexafluoride (UF6) have been performed by using relativistic density functional theory (DFT) with various levels of relativistic effects, different types of basis sets, and exchange-correlation functionals. Scalar relativistic effects are shown to be critical for the structural properties. The spin-orbit coupling effects are important for the calculated energies, but are much less important for other calculated ground-state properties of closed-shell UF6. We conclude through systematic investigations that ZORA- and RECP-based relativistic DFT methods are both appropriate for incorporating relativistic effects. Comparisons of different types of basis sets (Slater, Gaussian, and plane-wave types) and various levels of theoretical approximation of the exchange-correlation functionals were also made.

  12. Method of simulation of low dose rate for total dose effect in 0.18 μm CMOS technology

    Energy Technology Data Exchange (ETDEWEB)

    He Baoping; Yao Zhibin; Guo Hongxia; Luo Yinhong; Zhang Fengqi; Wang Yuanming; Zhang Keying, E-mail: baopinghe@126.co [Northwest Institute of Nuclear Technology, Xi'an 710613 (China)

    2009-07-15

    Three methods for simulating low dose rate irradiation are presented and experimentally verified using 0.18 μm CMOS transistors. The results show that the best approach is to use a series of high dose rate irradiations, with 100 °C annealing steps between the irradiation steps, to simulate a continuous low dose rate irradiation. This approach can reduce the low dose rate testing time by as much as a factor of 45 with respect to an actual 0.5 rad(Si)/s dose rate irradiation. The procedure also provides detailed information on the behavior of the test devices in a low dose rate environment.
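
    The quoted factor of 45 translates directly into test time; a small worked example, assuming a total dose (the dose below is an arbitrary illustration, not from the paper):

        # Sketch: time saved by the accelerated irradiate/anneal sequence versus
        # a true low-dose-rate run. The total dose is an assumed example; the
        # 0.5 rad(Si)/s rate and the factor of ~45 are from the abstract.
        total_dose = 100e3                     # rad(Si), assumed total dose
        low_rate_time = total_dose / 0.5       # s, at the actual low dose rate
        accelerated_time = low_rate_time / 45  # s, per the reported speed-up

        print(f"true low-dose-rate test: {low_rate_time / 86400:.1f} days")
        print(f"accelerated method     : {accelerated_time / 3600:.1f} h")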

  13. Radiological environmental dose assessment methods and compliance dose results for 2015 operations at the Savannah River Site

    Energy Technology Data Exchange (ETDEWEB)

    Jannik, G. T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Dixon, K. L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-09-01

    This report presents the environmental dose assessment methods and the estimated potential doses to the offsite public from 2015 Savannah River Site (SRS) atmospheric and liquid radioactive releases. Also documented are potential doses from special-case exposure scenarios - such as the consumption of deer meat, fish, and goat milk.

  15. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons of selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations for moving the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be the most effective and sustainable starting point, although the regional model would be the ideal goal.

  16. A novel method of estimating effective dose from the point dose method: a case study—parathyroid CT scans

    Science.gov (United States)

    Januzis, Natalie; Nguyen, Giao; Hoang, Jenny K.; Lowry, Carolyn; Yoshizumi, Terry T.

    2015-02-01

    The purpose of this study was to validate a novel approach of applying a partial volume correction factor (PVCF) using a limited number of MOSFET detectors in the effective dose (E) calculation. The results of the proposed PVCF method were compared to the results from both the point dose (PD) method and a commercial CT dose estimation software (CT-Expo). To measure organ doses, an adult female anthropomorphic phantom was loaded with 20 MOSFET detectors and was scanned using the non-contrast and 2-phase contrast-enhanced parathyroid imaging protocols on a 64-slice multi-detector computed tomography scanner. E was computed by three methods: the PD method, the PVCF method, and the CT-Expo method. The E (in mSv) for the PD, PVCF, and CT-Expo methods was 2.6  ±  0.2, 1.3  ±  0.1, and 1.1 for the non-contrast scan; 21.9  ±  0.4, 13.9  ±  0.2, and 14.6 for the 1st phase of the contrast-enhanced scan; and 15.5  ±  0.3, 9.8  ±  0.1, and 10.4 for the 2nd phase of the contrast-enhanced scan, respectively. The E with the PD method differed from the PVCF method by 66.7% for the non-contrast scan, and by 44.9% and 45.5%, respectively, for the 1st and 2nd phases of the contrast-enhanced scan. The E with PVCF was comparable to the results from the CT-Expo method, with percent differences of 15.8%, 5.0%, and 6.3% for the non-contrast scan and the 1st and 2nd phases of the contrast-enhanced scan, respectively. To conclude, the PVCF method estimated E to within a 16% difference, compared to 50-70% for the PD method. In addition, the results demonstrate that E can be estimated accurately from a limited number of detectors.
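
    The effective dose in all three methods is a tissue-weighted sum over organ doses; the PVCF method additionally scales organs that are only partially in the scan volume. A sketch under assumed numbers (the ICRP 103 weighting factors are real; the doses and correction factors are hypothetical, and the paper's full organ list is not reproduced here):

        # Sketch: E = sum_T w_T * D_T from organ point-dose readings, with and
        # without a partial volume correction factor (PVCF). Doses and PVCFs
        # are hypothetical; w_T values follow ICRP Publication 103.
        tissue_weights = {"thyroid": 0.04, "lung": 0.12, "esophagus": 0.04}
        point_doses = {"thyroid": 45.0, "lung": 10.0, "esophagus": 30.0}  # mGy
        pvcf = {"thyroid": 1.0, "lung": 0.15, "esophagus": 0.6}  # fraction in scan

        E_point = sum(tissue_weights[t] * point_doses[t] for t in point_doses)
        E_pvcf = sum(tissue_weights[t] * point_doses[t] * pvcf[t] for t in point_doses)

        print(f"E, point dose method: {E_point:.2f} mSv")
        print(f"E, PVCF method      : {E_pvcf:.2f} mSv")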

  17. Model Averaging Software for Dichotomous Dose Response Risk Estimation

    Directory of Open Access Journals (Sweden)

    Matthew W. Wheeler

    2008-02-01

    Model averaging has been shown to be a useful method for incorporating model uncertainty in quantitative risk estimation. In certain circumstances this technique is computationally complex, requiring sophisticated software to carry out the computation. We introduce software that implements model averaging for risk assessment based upon dichotomous dose-response data. This software, which we call Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD), fits the quantal response models that are also used in the US Environmental Protection Agency benchmark dose software suite, and generates a model-averaged dose-response model from which benchmark dose and benchmark dose lower bound estimates are derived. The software fulfills a need for risk assessors, allowing them to go beyond a single model in risk assessments based on quantal data by focusing on a set of models that describes the experimental data.
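
    A minimal sketch of the averaging idea, assuming two quantal models have already been fitted (parameters and AIC values below are invented, and MADr-BMD's actual fitting and weighting details differ): weight each model's extra-risk curve by its Akaike weight and read the benchmark dose off the averaged curve.

        # Sketch: model-averaged BMD for dichotomous dose-response data.
        # Two fitted models are combined with Akaike weights; the BMD is the
        # dose where model-averaged extra risk reaches a 10% benchmark response.
        import numpy as np

        def log_logistic_extra_risk(d, a, b):
            p0 = 1 / (1 + np.exp(-a))               # background response
            p = 1 / (1 + np.exp(-a - b * d))
            return (p - p0) / (1 - p0)

        def weibull_extra_risk(d, g, k):
            return 1 - np.exp(-g * d ** k)

        fits = [(log_logistic_extra_risk, (-3.0, 0.05), 120.4),  # (f, params, AIC)
                (weibull_extra_risk, (0.002, 1.2), 121.9)]

        aic = np.array([fit[2] for fit in fits])
        w = np.exp(-0.5 * (aic - aic.min()))
        w /= w.sum()                                 # Akaike weights

        doses = np.linspace(0.0, 100.0, 10001)
        avg_risk = sum(wi * f(doses, *p) for wi, (f, p, _) in zip(w, fits))
        bmd = doses[np.searchsorted(avg_risk, 0.10)]
        print(f"model-averaged BMD(10%) = {bmd:.1f} (dose units)")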

  18. Benchmarking DFT methods with small basis sets for the calculation of halogen-bond strengths.

    Science.gov (United States)

    Siiskonen, Antti; Priimagi, Arri

    2017-02-01

    In recent years, halogen bonding has become an important design tool in crystal engineering, supramolecular chemistry and biosciences. The fundamentals of halogen bonding have been studied extensively with high-accuracy computational methods. Due to its non-covalency, the use of triple-zeta (or larger) basis sets is often recommended when studying halogen bonding. However, in the large systems often encountered in supramolecular chemistry and biosciences, large basis sets can make the calculations far too slow. Therefore, small basis sets, which would combine high computational speed and high accuracy, are in great demand. This study focuses on comparing how well density functional theory (DFT) methods employing small, double-zeta basis sets can estimate halogen-bond strengths. Several methods with triple-zeta basis sets are included for comparison. Altogether, 46 DFT methods were tested using two data sets of 18 and 33 halogen-bonded complexes for which the complexation energies have been previously calculated with the high-accuracy CCSD(T)/CBS method. The DGDZVP basis set performed far better than other double-zeta basis sets, and it even outperformed the triple-zeta basis sets. Due to its small size, it is well-suited to studying halogen bonding in large systems.

  19. Large-scale benchmarking reveals false discoveries and count transformation sensitivity in 16S rRNA gene amplicon data analysis methods used in microbiome studies

    DEFF Research Database (Denmark)

    Thorsen, Jonathan; Brejnrod, Asker Daniel; Mortensen, Martin Steen

    2016-01-01

    BACKGROUND: There is an immense scientific interest in the human microbiome and its effects on human physiology, health, and disease. A common approach for examining bacterial communities is high-throughput sequencing of 16S rRNA gene hypervariable regions, aggregating sequence-similar amplicons...... analysis and (2) beta-diversity-based sample separation, using a rigorous benchmarking framework based on large clinical 16S microbiome datasets from different sources. RESULTS: Running more than 380,000 full differential relative abundance tests on real datasets with permuted case/control assignments...... should be interpreted with caution. We provide an easily extensible framework for benchmarking of new methods and future microbiome datasets....

  20. Robust H-infinity control synthesis method and its application to benchmark problems

    Science.gov (United States)

    Wie, Bong; Liu, Qiang; Byun, Kuk-Whan

    1992-01-01

    This paper presents a robust H-infinity control synthesis method for structured parameter uncertainty. The robust H-infinity control design methodology is also incorporated with the so-called internal model principle for persistent-disturbance rejection. A noncollocated control problem of flexible space structures subject to parameter variations is used to illustrate the design methodology. It is shown that the proposed design method invariably makes use of nonminimum-phase compensation and that it achieves the desired asymptotic disturbance rejection by having a disturbance rejection 'dipole'.

  1. Benchmarking the DFT+U method for thermochemical calculations of uranium molecular compounds and solids.

    Science.gov (United States)

    Beridze, George; Kowalski, Piotr M

    2014-12-18

    The ability to perform a feasible and reliable computation of the thermochemical properties of chemically complex actinide-bearing materials would be of great importance for nuclear engineering. Unfortunately, density functional theory (DFT), which in many instances is the only affordable ab initio method, often fails for actinides. Among various shortcomings, it leads to wrong estimates of the enthalpies of reactions between actinide-bearing compounds, putting the applicability of the DFT approach to the modeling of thermochemical properties of actinide-bearing materials into question. Here we test the performance of the DFT+U method, a computationally affordable extension of DFT that explicitly accounts for the correlations between f-electrons, for the prediction of the thermochemical properties of simple uranium-bearing molecular compounds and solids. We demonstrate that the DFT+U approach significantly improves the description of reaction enthalpies for the uranium-bearing gas-phase molecular compounds and solids, and that the deviations from the experimental values are comparable to those obtained with much more computationally demanding methods. Good results are obtained with Hubbard U parameter values derived using the linear response method of Cococcioni and de Gironcoli. We found that the value of the Coulomb on-site repulsion, represented by the Hubbard U parameter, strongly depends on the oxidation state of the uranium atom. Last but not least, we demonstrate that thermochemistry data can be successfully used to estimate the value of the Hubbard U parameter needed for DFT+U calculations.

  2. Total molecular photoionization cross-sections by algebraic diagrammatic construction-Stieltjes-Lanczos method: Benchmark calculations

    Science.gov (United States)

    Ruberti, M.; Yun, R.; Gokhberg, K.; Kopelke, S.; Cederbaum, L. S.; Tarantelli, F.; Averbukh, V.

    2013-10-01

    In [K. Gokhberg, V. Vysotskiy, L. S. Cederbaum, L. Storchi, F. Tarantelli, and V. Averbukh, J. Chem. Phys. 130, 064104 (2009)] we introduced a new L² ab initio method for the calculation of total molecular photoionization cross-sections. The method is based on the ab initio description of discretized photoionized molecular states within the many-electron Green's function approach, known as algebraic diagrammatic construction (ADC), and on the application of Stieltjes-Chebyshev moment theory to Lanczos pseudospectra of the ADC electronic Hamiltonian. Here we establish the accuracy of the new technique by comparing the ADC-Lanczos-Stieltjes cross-sections in the valence ionization region to the experimental ones for a series of eight molecules of first-row elements: HF, NH3, H2O, CO2, H2CO, CH4, C2H2, and C2H4. We find that the use of the second-order ADC technique [ADC(2)] that includes double electronic excitations leads to a substantial systematic improvement over the first-order method [ADC(1)] and to a good agreement with experiment for photon energies below 80 eV. The use of extended second-order ADC theory [ADC(2)x] leads to a smaller further improvement. Above 80 eV photon energy all three methods lead to significant deviations from the experimental values, which we attribute to the use of Gaussian single-electron bases. Our calculations show that the ADC(2)-Lanczos-Stieltjes technique is a reliable and efficient ab initio tool for theoretical prediction of total molecular photoionization cross-sections in the valence region.

  3. Computer–based method of bite mark analysis: A benchmark in forensic dentistry?

    OpenAIRE

    Nandita Kottieth Pallam; Karen Boaz; Srikant Natrajan; Minu Raj; Nidhi Manaktala; Lewis, Amitha J

    2016-01-01

    Aim: The study aimed to determine the technique with maximum accuracy in production of bite mark overlay. Materials and Methods: Thirty subjects (10 males and 20 females; all aged 20–30 years) with complete set of natural upper and lower anterior teeth were selected for this study after obtaining approval from the Institutional Ethical Committee. The upper and lower alginate impressions were taken and die stone models were obtained from each impression; overlays were produced from the biting ...

  4. Benchmarking the invariant embedding method against analytical solutions in model transport problems

    Directory of Open Access Journals (Sweden)

    Wahlberg Malin

    2006-01-01

    The purpose of this paper is to demonstrate the use of the invariant embedding method in a few model transport problems for which it is also possible to obtain an analytical solution. The use of the method is demonstrated in three different areas. The first is the calculation of the energy spectrum of sputtered particles from a scattering medium without absorption, where the multiplication (particle cascade) is generated by recoil production. Both constant and energy-dependent cross-sections with a power-law dependence were treated. The second application concerns the calculation of the path length distribution of reflected particles from a medium without multiplication. This is a relatively novel application, since the embedding equations do not resolve the depth variable. The third application concerns the demonstration that solutions in an infinite medium and in a half-space are interrelated through embedding-like integral equations, by the solution of which the flux reflected from a half-space can be reconstructed from solutions in an infinite medium, or vice versa. In all cases, the invariant embedding method proved to be robust, fast, and monotonically converging to the exact solutions.

  5. Benchmarking the performance of fixed-image receptor digital radiographic systems part 1: a novel method for image quality analysis.

    Science.gov (United States)

    Lee, Kam L; Ireland, Timothy A; Bernardo, Michael

    2016-06-01

    This is the first part of a two-part study benchmarking the performance of fixed digital radiographic general X-ray systems. This paper concentrates on reporting findings related to the quantitative analysis techniques used to establish comparative image quality metrics. A systematic technical comparison of the evaluated systems is presented in part two of this study. A novel quantitative image quality analysis method is presented, with technical considerations addressed for peer review. The novel method was applied to seven general radiographic systems with four different makes of radiographic image receptor (12 image receptors in total). For the System Modulation Transfer Function (sMTF), the use of a grid was found to reduce veiling glare and decrease roll-off. The major contributor to sMTF degradation was found to be focal spot blurring. For the System Normalised Noise Power Spectrum (sNNPS), it was found that all systems examined had similar sNNPS responses. A mathematical model is presented to explain how the use of a stationary grid may cause a difference between horizontal and vertical sNNPS responses.

  6. Method of dosing electrolyte in a sealed storage battery

    Energy Technology Data Exchange (ETDEWEB)

    Boldin, R.V.; Akbulatova, A.D.; Mel'nikova, T.A.; Perugina, T.P.

    1981-01-01

    A method is proposed for dosing the electrolyte in a sealed storage battery: the battery is weighed before the electrolyte is poured in; the electrolyte is poured in; the battery is formed; surplus electrolyte is removed; the battery is weighed again; the quantity of remaining electrolyte is calculated from the weight difference; and the electrolyte quantity is corrected against the theoretically calculated weight. To improve accuracy, after the repeated weighing the volume of the free gas space of the battery is measured and electrolyte is added until the degree of filling of the pores included in the gas space volume reaches 90-95%.

  7. A mathematical approach to optimal selection of dose values in the additive dose method of EPR dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Hayes, R.B.; Haskell, E.H.; Kenner, G.H. [Utah Univ., Salt Lake City, UT (United States)

    1996-01-01

    Additive dose methods commonly used in electron paramagnetic resonance (EPR) dosimetry are time consuming and labor intensive. We have developed a mathematical approach for determining the optimal spacing of applied doses and the number of spectra which should be taken at each dose level. Expected uncertainties in the data points are assumed to be normally distributed with a fixed standard deviation, and linearity of the dose response is also assumed. The optimum spacing and number of points necessary for minimal error can be estimated, as can the likely error in the resulting estimate. When low doses are being estimated for tooth enamel samples, the optimal design is shown to be a concentration of points near the zero added-dose value, with fewer spectra taken at a single high dose value within the range of known linearity. Optimization of the analytical process results in increased accuracy and sample throughput.
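
    The optimization can be illustrated by simulation under exactly the stated assumptions (linear response, normally distributed errors with fixed standard deviation). The sketch below compares the standard error of the estimated accumulated dose, i.e. the extrapolated intercept ratio a/b of the fitted line y = a + b*dose, for an evenly spaced design versus a design concentrated near zero added dose; designs, noise level, and true parameters are illustrative.

        # Sketch: Monte Carlo comparison of additive-dose designs under a
        # linear model y = a + b*dose. The recovered accumulated dose is the
        # intercept ratio a/b; a lower spread means a better design.
        import numpy as np

        def dose_estimate_se(doses, a=5.0, b=1.0, sigma=1.0, n_mc=20000):
            rng = np.random.default_rng(0)
            doses = np.asarray(doses, dtype=float)
            X = np.column_stack([np.ones_like(doses), doses])
            estimates = []
            for _ in range(n_mc):
                y = a + b * doses + rng.normal(0.0, sigma, doses.size)
                ahat, bhat = np.linalg.lstsq(X, y, rcond=None)[0]
                estimates.append(ahat / bhat)   # estimated accumulated dose
            return np.std(estimates)

        even = [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
        concentrated = [0] * 9 + [18]           # points near zero + one high dose

        print("SE, evenly spaced design:", round(dose_estimate_se(even), 3))
        print("SE, concentrated design :", round(dose_estimate_se(concentrated), 3))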

  8. Benchmarking the External Surrogate Ratio Method using the (α,α′f) reaction at STARS

    Energy Technology Data Exchange (ETDEWEB)

    Lesher, S R; Bernstein, L A; Ai, H; Beausang, C W; Bleuel, D; Burke, J T; Clark, R M; Fallon, P; Gibelin, J; Lee, I Y; Lyles, B F; Macchiavelli, A O; McMahan, M A; Moody, K J; Norman, E B; Phair, L; Rodriguez-Vieitez, E; Wiedeking, M

    2008-01-09

    We measured the ratio of the fission probabilities of ²³⁴U* relative to ²³⁶U* formed via (α,α′) direct reactions using the STARS array at the 88-inch cyclotron at the Lawrence Berkeley National Laboratory. This ratio has a shape similar to the ratio of neutron capture probabilities from ²³³U(n,f) and ²³⁵U(n,f), indicating that the alpha reactions likely formed a compound nucleus. This result indicates that the ratios of fission exit channel probabilities for two actinide nuclei populated via (α,α′) can be used to determine an unknown fission cross section relative to a known one. The validity of the External Surrogate Ratio Method (ESRM) is tested, and the results support the conclusions of Burke et al. [1].

  9. Finite Element Method Modeling of Sensible Heat Thermal Energy Storage with Innovative Concretes and Comparative Analysis with Literature Benchmarks

    Directory of Open Access Journals (Sweden)

    Claudio Ferone

    2014-08-01

    Efficient systems for high-performance buildings are required to improve the integration of renewable energy sources and to reduce primary energy consumption from fossil fuels. This paper is focused on sensible heat thermal energy storage (SHTES) systems using solid media and on numerical simulation of their transient behavior using the finite element method (FEM). Unlike other papers in the literature, the numerical model and simulation approach simultaneously takes into consideration various aspects: thermal properties at high temperature, the actual geometry of the repeated storage element, and the actual storage cycle adopted. High-performance thermal storage materials from the literature have been tested and used here as reference benchmarks. The other materials tested are lightweight concretes with recycled aggregates and a geopolymer concrete. Their thermal properties have been measured and used as inputs in the numerical model to preliminarily evaluate their application in thermal storage. The analysis carried out can also be used to optimize the storage system in terms of the thermal properties required of the storage material. The results showed a significant influence of the thermal properties on the performance of the storage elements. Simulation results have provided information for further scale-up from a single differential storage element to the entire module as a function of material thermal properties.

  10. Variable selection in near-infrared spectroscopy: Benchmarking of feature selection methods on biodiesel data

    Energy Technology Data Exchange (ETDEWEB)

    Balabin, Roman M., E-mail: balabin@org.chem.ethz.ch [Department of Chemistry and Applied Biosciences, ETH Zurich, 8093 Zurich (Switzerland); Smirnov, Sergey V. [Unimilk Joint Stock Co., 143421 Moscow Region (Russian Federation)

    2011-04-29

    During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from the petroleum to the biomedical sector. The NIR spectrum (above 4000 cm⁻¹) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of other spectroscopic
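
    As an illustration of one of the simpler strategies in this comparison, a sketch of interval PLS (iPLS) on synthetic spectra: each contiguous wavelength window gets its own PLS model, and the window with the lowest cross-validated error is kept. scikit-learn is assumed to be available; real iPLS implementations differ in detail.

        # Sketch of iPLS-style variable (wavelength) selection: score each
        # non-overlapping spectral interval by cross-validated RMSE with a
        # small PLS model and keep the best interval. Data are synthetic.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(1)
        X = rng.normal(size=(60, 200))   # 60 spectra, 200 "wavelengths"
        y = X[:, 80:90].sum(axis=1) + rng.normal(scale=0.1, size=60)

        best = None
        for start in range(0, 200, 10):  # twenty 10-variable intervals
            Xi = X[:, start:start + 10]
            pred = cross_val_predict(PLSRegression(n_components=2), Xi, y, cv=5)
            rmse = float(np.sqrt(np.mean((pred.ravel() - y) ** 2)))
            if best is None or rmse < best[0]:
                best = (rmse, start)

        print(f"selected interval: variables {best[1]}-{best[1] + 9} "
              f"(CV RMSE = {best[0]:.3f})")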

  11. Benchmark dose of saliva fluoride concentration in adolescents and its relationship to the prevalence of dental fluorosis

    Institute of Scientific and Technical Information of China (English)

    于阳阳; 王连芳; 赵伟; 邹冬荣; 郭蕊

    2016-01-01

    Objective: To study the benchmark dose (BMD) of the fluoride concentration in saliva, and to evaluate the significance of saliva fluoride for the control and prevention of endemic fluorosis. Methods: In September 2014, middle school students in endemic fluorosis areas and non-endemic fluorosis areas of North China Petroleum were selected as subjects. The contents of fluoride in water, urine, and saliva were determined, and the correlations between the fluoride contents in water, urine, and saliva were analyzed. According to the saliva fluoride concentration, the children were divided into 11 groups: < 1.00, 1.00-, 2.00-, 3.00-, 4.00-, 5.00-, 6.00-, 7.00-, 8.00-, 9.00- and ≥ 10.00 mg/L. The prevalence of dental fluorosis and defective dental fluorosis was investigated, and the BMD of the saliva fluoride concentration was calculated with the Benchmark Dose Software. Results: The fluoride contents in water, urine, and saliva in endemic areas [(2.13 ± 0.13), (1.29 ± 0.73), (4.01 ± 3.61) mg/L] were higher than those in non-endemic areas [(0.67 ± 0.13), (0.38 ± 0.08), (0.75 ± 0.12) mg/L; t = 158.730, 24.780, 18.114, all P < 0.01]. The fluoride concentration in saliva was positively correlated with the fluoride contents in water and urine in endemic areas (r = 0.626, 0.945, all P < 0.01). The BMDs and benchmark dose lower bounds (BMDLs) were 0.91, 0.54, 3.72, and 3.32 mg/L, respectively, as calculated with the Benchmark Dose Software. With increasing saliva fluoride concentration, the prevalence of dental fluorosis and defective dental fluorosis also increased, especially when the fluoride content in saliva exceeded 4 mg/L. There were significant dose-response relationships between urine fluoride and the prevalence of dental fluorosis and defective dental fluorosis. Conclusion: The fluoride concentration in saliva can be used as one of the evaluation indexes of fluorosis, and a BMD of 0.91 mg/L is suggested for the saliva fluoride concentration in endemic fluorosis areas.

  12. A track length estimator method for dose calculations in low-energy X-ray irradiations. Implementation, properties and performance

    Energy Technology Data Exchange (ETDEWEB)

    Baldacci, F.; Delaire, F.; Letang, J.M.; Sarrut, D.; Smekens, F.; Freud, N. [Lyon-1 Univ. - CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Centre Leon Berard (France); Mittone, A.; Coan, P. [LMU Munich (Germany). Dept. of Physics; LMU Munich (Germany). Faculty of Medicine; Bravin, A.; Ferrero, C. [European Synchrotron Radiation Facility, Grenoble (France); Gasilov, S. [LMU Munich (Germany). Dept. of Physics

    2015-05-01

    The track length estimator (TLE) method, an 'on-the-fly' fluence tally in Monte Carlo (MC) simulations recently implemented in GATE 6.2, is known as a powerful tool to accelerate dose calculations in the domain of low-energy X-ray irradiations using the kerma approximation. Overall efficiency gains of the TLE with respect to analogous MC were reported in the literature for regions of interest in various applications (photon beam radiation therapy, X-ray imaging). The behaviour of the TLE method in terms of statistical properties, dose deposition patterns, and computational efficiency compared to analogous MC simulations was investigated. The statistical properties of the dose deposition were first assessed. Derivations of the variance reduction factor of TLE versus analogous MC were carried out, starting from the expression of the dose estimate variance in the TLE and analogous MC schemes. Two test cases were chosen to benchmark the TLE performance in comparison with analogous MC: (i) a small animal irradiation under stereotactic synchrotron radiation therapy conditions and (ii) the irradiation of a human pelvis during a cone beam computed tomography acquisition. Dose distribution patterns and efficiency gain maps were analysed. The efficiency gain exhibits strong variations within a given irradiation case, depending on the geometrical (voxel size, ballistics) and physical (material and beam properties) parameters on the voxel scale. Typical values lie between 10 and 10³, with lower levels in dense regions (bone) outside the irradiated channels (scattered dose only), and higher levels in soft tissues directly exposed to the beams.
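
    A minimal sketch of the estimator itself, assuming precomputed mass energy-absorption coefficients: under the kerma approximation, every photon step deposits fluence (track length divided by voxel volume) times energy times mu_en/rho in the voxel it crosses, rather than scoring only at discrete interaction sites. All inputs are toy values.

        # Sketch of a track length estimator (TLE) kerma/dose tally.
        # Each step contributes (track_length / voxel_volume) * E * (mu_en/rho),
        # which has units of J/kg = Gy. Inputs below are toy values.
        def tle_tally(steps, mu_en_over_rho, voxel_volume_m3, n_primaries):
            """steps: (voxel_id, photon_energy_J, track_length_m) per step."""
            dose = {}
            for voxel, energy, length in steps:
                fluence = length / voxel_volume_m3[voxel]         # 1/m^2
                kerma = fluence * energy * mu_en_over_rho[voxel]  # Gy
                dose[voxel] = dose.get(voxel, 0.0) + kerma
            return {v: d / n_primaries for v, d in dose.items()}  # Gy/primary

        steps = [(0, 8e-15, 0.004), (0, 8e-15, 0.002), (1, 6e-15, 0.005)]
        print(tle_tally(steps, mu_en_over_rho={0: 3.0, 1: 2.5},
                        voxel_volume_m3={0: 1e-9, 1: 1e-9}, n_primaries=1))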

  13. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means revealed performance (how well the firm performs in the actual market environment), given the basic characteristics of the firm and its market that are expected to drive profitability (firm size, market power, etc.). This complex and relative performance can be due to such things as product innovation, management quality, and work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for management to continuously improve the firm's efficiency and effectiveness, and the need for managers to know the success factors and competitiveness determinants, determine which performance measures are most critical to the firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking of firm-level performance are critical interdependent activities. Firm-level variables used to infer performance are often interdependent for operational reasons; hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.

  14. Energy efficiency benchmarking methods in natural gas purification business

    Institute of Scientific and Technical Information of China (English)

    苟小静; 黄朝齐; 龚毅然; 陈世明; 王灵军

    2014-01-01

    According to the requirements of the PetroChina Exploration and Production Branch, Southwest Oil & Gasfield Company carried out an energy efficiency benchmarking pilot for its natural gas purification business. A method of subdividing units and comparing by grade was used, which largely overcomes the differences between purification plants and eliminates the influence of non-comparable factors, providing a feasible approach for energy efficiency benchmarking in upstream oil and gas field businesses. The technical difficulties of energy efficiency benchmarking in the gas purification business, and ideas for resolving them, are summarized.

  15. Benchmarking of energy time series

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, M.A.

    1990-04-01

    Benchmarking consists of the adjustment of time series data from one source in order to achieve agreement with similar data from a second source. The data from the latter source are referred to as the benchmark(s), and often differ in that they are observed at a lower frequency, represent a higher level of temporal aggregation, and/or are considered to be of greater accuracy. This report provides an extensive survey of benchmarking procedures which have appeared in the statistical literature, and reviews specific benchmarking procedures currently used by the Energy Information Administration (EIA). The literature survey includes a technical summary of the major benchmarking methods and their statistical properties. Factors influencing the choice and application of particular techniques are described and the impact of benchmark accuracy is discussed. EIA applications and procedures are reviewed and evaluated for residential natural gas deliveries series and coal production series. It is found that the current method of adjusting the natural gas series is consistent with the behavior of the series and the methods used in obtaining the initial data. As a result, no change is recommended. For the coal production series, a staged approach based on a first differencing technique is recommended over the current procedure. A comparison of the adjustments produced by the two methods is made for the 1987 Indiana coal production series. 32 refs., 5 figs., 1 tab.
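
    As a minimal illustration of what benchmarking a series means in this setting, a sketch of the simplest possible adjustment, pro-rata scaling to an annual benchmark (the staged first-difference approach the report recommends is more elaborate; all numbers are toy values):

        # Sketch: pro-rata benchmarking of a monthly series to an annual total.
        # The monthly series and the benchmark below are toy values.
        monthly = [80, 75, 90, 85, 70, 60, 55, 58, 65, 78, 88, 96]
        benchmark_total = 950          # annual figure from the benchmark source

        factor = benchmark_total / sum(monthly)
        adjusted = [m * factor for m in monthly]

        print(f"scale factor = {factor:.4f}")
        print(f"adjusted annual sum = {sum(adjusted):.1f}")  # matches benchmark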

  16. Quantitative benchmark - Production companies

    DEFF Research Database (Denmark)

    Sørensen, Ole H.; Andersen, Vibeke

    Report with the results of the quantitative benchmark of the production companies in the VIPS project.

  17. Benchmarking in Student Affairs.

    Science.gov (United States)

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  18. Effect of subchronic 2,3,7,8-tetrachlorodibenzo-p-dioxin exposure on immune system and target gene responses in mice: calculation of benchmark doses for CYP1A1 and CYP1A2 related enzyme activities

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, C. [Medical Institute of Environmental Hygiene at the Heinrich Heine University of Duesseldorf, Division of Toxicology, Auf'm Hennekamp 50, D-40225 Duesseldorf (Germany); Donat, S. [Medical Institute of Environmental Hygiene at the Heinrich Heine University of Duesseldorf, Division of Toxicology, Auf'm Hennekamp 50, D-40225 Duesseldorf (Germany); Doehr, O. [Medical Institute of Environmental Hygiene at the Heinrich Heine University of Duesseldorf, Division of Toxicology, Auf'm Hennekamp 50, D-40225 Duesseldorf (Germany); Kremer, J. [Medical Institute of Environmental Hygiene at the Heinrich Heine University of Duesseldorf, Division of Immunology, Auf'm Hennekamp 50, D-40225 Duesseldorf (Germany); Esser, C. [Medical Institute of Environmental Hygiene at the Heinrich Heine University of Duesseldorf, Division of Immunology, Auf'm Hennekamp 50, D-40225 Duesseldorf (Germany); Roller, M. [Medical Institute of Environmental Hygiene at the Heinrich Heine University of Duesseldorf, Division of Experimental Hygiene, Auf'm Hennekamp 50, D-40225 Duesseldorf (Germany); Abel, J. [Medical Institute of Environmental Hygiene at the Heinrich Heine University of Duesseldorf, Division of Toxicology, Auf'm Hennekamp 50, D-40225 Duesseldorf (Germany)

    1997-04-01

    The dose-effect relationships were analysed for several noncarcinogenic endpoints, such as immunological and biochemical responses, at subchronic, low-dose exposure of female C57BL/6 mice to 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). The animals were treated i.p. with TCDD according to the initial- and maintenance-dose principle for a period of 135 days. The initial doses were 1, 10 and 100 ng TCDD/kg; the weekly maintenance doses were 0.2, 2 and 20 ng TCDD/kg, respectively. At days 23, 79 and 135 of TCDD treatment, 10 animals of each dose group were killed. As immunological parameters, the number of thymocytes and the pattern of thymocyte subpopulations were determined. In liver, lung and thymus, the mRNA expression of TGF-α, TGF-β1, TGF-β2, TGF-β3, TNF-α, IL-1β and different CYP1 isoforms (CYP1A1, CYP1A2, CYP1B1) was analysed. In the livers, the activities of 7-ethoxyresorufin-O-deethylase (EROD) and 7-methoxyresorufin-O-demethylase (MROD) were measured. The TCDD content in the liver was determined. The main results are summarized as follows: (1) The TCDD doses were not sufficient to elicit dose-dependent changes in the pattern of thymocyte subpopulations. (2) TCDD failed to change the mRNA expression of TGF-α, TGF-β and TNF-α, but led to an increase of IL-1β mRNA expression in liver, lung and thymus. The results show that the TCDD-induced IL-1β mRNA increase is at least as sensitive a marker as the induction of CYP1A isoforms. (3) The expression of CYP1B1 mRNA remained unchanged at the doses tested, while CYP1A1 and CYP1A2 mRNA expression was dose-dependently enhanced. EROD and MROD activities in the liver paralleled the increases of CYP1A1 and CYP1A2 mRNA expression. (4) Regression analysis of the data showed that most of the parameters tested fit a linear model. (5) From the data, a benchmark dose for EROD/MROD activities in the livers of female C57BL/6 mice of about 0.03 ng TCDD/kg per day was derived.

  19. Technical Review of SRS Dose Reconstruction Methods Used By CDC

    Energy Technology Data Exchange (ETDEWEB)

    Simpkins, Ali, A

    2005-07-20

    At the request of the Centers for Disease Control and Prevention (CDC), a subcontractor, Advanced Technologies and Laboratories International, Inc. (ATL), issued a draft report estimating offsite doses resulting from Savannah River Site (SRS) operations for the period 1954-1992, in support of Phase III of the SRS Dose Reconstruction Project. The doses reported by ATL differed from those previously estimated by SRS dose modelers for a variety of reasons, primarily because (1) ATL used different source terms, (2) ATL considered trespasser/poacher scenarios, and (3) ATL did not consistently use site-specific parameters or correct usage parameters. The highest receptor doses from the atmospheric and liquid pathways were within about a factor of four of the dose values previously reported by SRS. A complete set of technical comments has also been included.

  20. Manual method for dose calculation in gynecologic brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Vianello, Elizabeth A.; Almeida, Carlos E. de [Instituto Nacional do Cancer, Rio de Janeiro, RJ (Brazil); Biaggio, Maria F. de [Universidade do Estado, Rio de Janeiro, RJ (Brazil)

    1998-09-01

    This paper describes a manual method for dose calculation in brachytherapy of gynecological tumors, which allows the calculation of the dose at any plane or point of clinical interest. The method uses basic principles of vector algebra and the simulation orthogonal films taken from the patient with the applicators and dummy sources in place. The results obtained with the method were compared with the values calculated by the Theraplan treatment planning system, and the agreement was better than 5% in most cases. The critical points associated with the final accuracy of the proposed method are related to the quality of the image and the appropriate selection of the magnification factors. This method is strongly recommended to radiation oncology centers where no treatment planning systems are available and dose calculations are done manually. (author) 10 refs., 5 figs.
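
    A sketch of the vector-algebra core of such a calculation, under a bare point-source, inverse-square approximation: source coordinates (as would be reconstructed from the orthogonal films, with magnification already corrected) and a point of interest are all that is needed. Real calculations also apply source strength calibration, anisotropy, scatter, and attenuation factors; all numbers here are made up.

        # Sketch: summed inverse-square contributions of a line of sources at
        # a point of clinical interest. Point-source approximation, toy values.
        import math

        sources = [           # (x, y, z) in cm, strength in cGy*cm^2/h (toy)
            ((0.0, 0.0, 0.0), 30.0),
            ((0.0, 1.5, 0.0), 30.0),
            ((0.0, 3.0, 0.0), 30.0),
        ]
        point = (2.0, 1.5, 0.0)   # point of interest

        rate = 0.0
        for (x, y, z), strength in sources:
            r2 = (point[0] - x) ** 2 + (point[1] - y) ** 2 + (point[2] - z) ** 2
            rate += strength / r2  # inverse-square fall-off

        print(f"dose rate at point: {rate:.2f} cGy/h")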

  1. Absorbed dose determination in photon fields using the tandem method

    CERN Document Server

    Marques-Pachas, J F

    1999-01-01

    The purpose of this work is to develop an alternative method to determine the absorbed dose and effective energy of photons with unknown spectral distributions. It includes a 'tandem' system that consists of two thermoluminescent dosemeters with different energy dependence. LiF:Mg,Ti and CaF₂:Dy thermoluminescent dosemeters and a Harshaw 3500 reading system are employed. The dosemeters are characterized with ⁹⁰Sr-⁹⁰Y, calibrated with the energy of ⁶⁰Co, and irradiated with seven different qualities of x-ray beams, as suggested by ANSI No. 13 and ISO 4037. The responses of each type of dosemeter are fitted to a function that depends on the effective energy of the photons. The fit is carried out by means of the Rosenbrock minimization algorithm. The mathematical model used for this function includes five parameters and comprises a Gaussian and a straight line. Results show that the analytical functions reproduce the experimental response data with a margin of error of less than ...
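
    A sketch of the tandem logic with invented response parameters (the paper's fitted five-parameter functions are not reproduced here): the ratio of the two dosemeter readings pins down the effective energy, and the dose then follows from the calibrated response of either dosemeter at that energy.

        # Sketch: tandem method with two TL dosemeter types whose relative
        # energy responses are modeled as a Gaussian plus a line (toy values).
        import numpy as np

        def response(E, A, E0, w, m, c):   # relative response vs energy in keV
            return A * np.exp(-((E - E0) / w) ** 2) + m * E + c

        p_lif = (0.3, 35.0, 20.0, 0.0, 1.0)    # nearly flat response (toy)
        p_caf2 = (12.0, 40.0, 25.0, 0.0, 1.0)  # strong over-response (toy)

        reading_lif, reading_caf2 = 2.1, 16.8  # measured TL signals (toy)

        E_grid = np.linspace(15.0, 1250.0, 5000)
        ratio = response(E_grid, *p_caf2) / response(E_grid, *p_lif)
        E_eff = E_grid[np.argmin(np.abs(ratio - reading_caf2 / reading_lif))]
        dose = reading_lif / response(E_eff, *p_lif)

        print(f"effective energy = {E_eff:.0f} keV, dose = {dose:.2f} (arb.)")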

  2. Benchmarking v ICT

    OpenAIRE

    Blecher, Jan

    2009-01-01

    The aim of this paper is to describe the benefits of benchmarking IT in a wider context and the scope of benchmarking in general. I specify benchmarking as a process and mention basic rules and guidelines. Further, I define IT benchmarking domains and describe the possibilities of their use. The best-known type of IT benchmark is the cost benchmark, which represents only a subset of benchmarking opportunities. In this paper, the cost benchmark is rather an imaginary first step towards benchmarking's contribution to the company. IT benchmark...

  3. A Method to Evaluate Hormesis in Nanoparticle Dose-Responses

    OpenAIRE

    Nascarella, Marc A.; Calabrese, Edward J.

    2012-01-01

    The term hormesis describes a dose-response relationship that is characterized by a response that is opposite above and below the toxicological or pharmacological threshold. Previous reports have shown that this relationship is ubiquitous in the response of pharmaceuticals, metals, organic chemicals, radiation, and physical stressor agents. Recent reports have also indicated that certain nanoparticles (NPs) may also exhibit a hormetic dose-response. We describe the application of three previo...

  4. DSP Platform Benchmarking : DSP Platform Benchmarking

    OpenAIRE

    Xinyuan, Luo

    2009-01-01

    Benchmarking of DSP kernel algorithms was conducted in this thesis on a DSP processor used for teaching in the course TESA26 in the Department of Electrical Engineering. It includes benchmarking of cycle count and memory usage. The goal of the thesis is to evaluate the quality of a single-MAC DSP instruction set and provide suggestions for further improvement of the instruction set architecture accordingly. The scope of the thesis is limited to benchmarking the processor based on assembly coding only. The...

  5. Benchmark Energetic Data in a Model System for Grubbs II Metathesis Catalysis and Their Use for the Development, Assessment, and Validation of Electronic Structure Methods

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Yan; Truhlar, Donald G.

    2009-01-31

    We present benchmark relative energetics in the catalytic cycle of a model system for Grubbs second-generation olefin metathesis catalysts. The benchmark data were determined by a composite approach based on CCSD(T) calculations, and they were used as a training set to develop a new spin-component-scaled MP2 method optimized for catalysis, which is called SCSC-MP2. The SCSC-MP2 method has improved performance for modeling Grubbs II olefin metathesis catalysts as compared to canonical MP2 or SCS-MP2. We also employed the benchmark data to test 17 WFT methods and 39 density functionals. Among the tested density functionals, M06 is the best performing functional. M06/TZQS gives an MUE of only 1.06 kcal/mol, and it is a much more affordable method than the SCSC-MP2 method or any other correlated WFT methods. The best performing meta-GGA is M06-L, and M06-L/DZQ gives an MUE of 1.77 kcal/mol. PBEh is the best performing hybrid GGA, with an MUE of 3.01 kcal/mol; however, it does not perform well for the larger, real Grubbs II catalyst. B3LYP and many other functionals containing the LYP correlation functional perform poorly, and B3LYP underestimates the stability of stationary points for the cis-pathway of the model system by a large margin. From the assessments, we recommend the M06, M06-L, and MPW1B95 functionals for modeling Grubbs II olefin metathesis catalysts. The local M06-L method is especially efficient for calculations on large systems.

  6. Accurate reaction barrier heights of pericyclic reactions: Surprisingly large deviations for the CBS-QB3 composite method and their consequences in DFT benchmark studies.

    Science.gov (United States)

    Karton, Amir; Goerigk, Lars

    2015-04-05

    Accurate barrier heights are obtained for the 26 pericyclic reactions in the BHPERI dataset by means of the high-level Wn-F12 thermochemical protocols. Very often, the complete basis set (CBS)-type composite methods are used in similar situations, but herein it is shown that they in fact result in surprisingly large errors with root mean square deviations (RMSDs) of about 2.5 kcal mol(-1). In comparison, other composite methods, particularly G4-type and estimated coupled cluster with singles, doubles, and quasiperturbative triple excitations [CCSD(T)/CBS] approaches, show deviations well below the chemical-accuracy threshold of 1 kcal mol(-1). With the exception of SCS-MP2 and the herein newly introduced MP3.5 approach, all other tested Møller-Plesset perturbative procedures give poor performance with RMSDs of up to 8.0 kcal mol(-1). The finding that CBS-type methods fail for barrier heights of these reactions is unexpected and it is particularly troublesome given that they are often used to obtain reference values for benchmark studies. Significant differences are identified in the interpretation and final ranking of density functional theory (DFT) methods when using the original CBS-QB3 rather than the new Wn-F12 reference values for BHPERI. In particular, it is observed that the more accurate Wn-F12 benchmark results in lower statistical errors for those methods that are generally considered to be robust and accurate. Two examples are the PW6B95-D3(BJ) hybrid-meta-general-gradient approximation and the PWPB95-D3(BJ) double-hybrid functionals, which result in the lowest RMSDs of the entire DFT study (1.3 and 1.0 kcal mol(-1), respectively). These results indicate that CBS-QB3 should be applied with caution in computational modeling and benchmark studies involving related systems.

  7. Code intercomparison and benchmark for muon fluence and absorbed dose induced by an 18-GeV electron beam after massive iron shielding

    CERN Document Server

    Fasso, Alberto; Ferrari, Anna; Mokhov, Nikolai V; Mueller, Stefan E; Nelson, Walter Ralph; Roesler, Stefan; Sanami, Toshiya; Striganov, Sergei I; Versaci, Roberto

    2015-01-01

    In 1974, Nelson, Kase, and Svensson published an experimental investigation on muon shielding using the SLAC high-energy LINAC. They measured the muon fluence and absorbed dose induced by an 18 GeV electron beam hitting a copper/water beam dump and attenuated in a thick steel shielding. In their paper, they compared the results with the theoretical models available at the time. In order to compare their experimental results with present model calculations, we use the modern transport Monte Carlo codes MARS15, FLUKA2011 and GEANT4 to model the experimental setup and run simulations. The results will then be compared between the codes, and with the SLAC data.

  8. Update on the Code Intercomparison and Benchmark for Muon Fluence and Absorbed Dose Induced by an 18 GeV Electron Beam After Massive Iron Shielding

    Energy Technology Data Exchange (ETDEWEB)

    Fasso, A. [SLAC; Ferrari, A. [CERN; Ferrari, A. [HZDR, Dresden; Mokhov, N. V. [Fermilab; Mueller, S. E. [HZDR, Dresden; Nelson, W. R. [SLAC; Roesler, S. [CERN; Sanami, T.; Striganov, S. I. [Fermilab; Versaci, R. [Unlisted, CZ

    2016-12-01

    In 1974, Nelson, Kase and Svensson published an experimental investigation on muon shielding around SLAC high-energy electron accelerators [1]. They measured muon fluence and absorbed dose induced by 14 and 18 GeV electron beams hitting a copper/water beam dump and attenuated in a thick steel shielding. In their paper, they compared the results with the theoretical models available at that time. In order to compare their experimental results with present model calculations, we use the modern transport Monte Carlo codes MARS15, FLUKA2011 and GEANT4 to model the experimental setup and run simulations. The results are then compared between the codes, and with the SLAC data.

  9. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  10. Leveraging long read sequencing from a single individual to provide a comprehensive resource for benchmarking variant calling methods.

    Science.gov (United States)

    Mu, John C; Tootoonchi Afshar, Pegah; Mohiyuddin, Marghoob; Chen, Xi; Li, Jian; Bani Asadi, Narges; Gerstein, Mark B; Wong, Wing H; Lam, Hugo Y K

    2015-09-28

    A high-confidence, comprehensive human variant set is critical in assessing accuracy of sequencing algorithms, which are crucial in precision medicine based on high-throughput sequencing. Although recent works have attempted to provide such a resource, they still do not encompass all major types of variants including structural variants (SVs). Thus, we leveraged the massive high-quality Sanger sequences from the HuRef genome to construct by far the most comprehensive gold set of a single individual, which was cross validated with deep Illumina sequencing, population datasets, and well-established algorithms. It was a necessary effort to completely reanalyze the HuRef genome as its previously published variants were mostly reported five years ago, suffering from compatibility, organization, and accuracy issues that prevent their direct use in benchmarking. Our extensive analysis and validation resulted in a gold set with high specificity and sensitivity. In contrast to the current gold sets of the NA12878 or HS1011 genomes, our gold set is the first that includes small variants, deletion SVs and insertion SVs up to a hundred thousand base-pairs. We demonstrate the utility of our HuRef gold set to benchmark several published SV detection tools.

  11. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  12. Study on Updating Methods of Benchmark Land Prices and Development Trends

    Institute of Scientific and Technical Information of China (English)

    张磊; 王阳

    2016-01-01

    The urban benchmark land price system is a land management system that China established in the early stage of the development of the urban land market. As the land market has developed, the original methods for updating benchmark land prices can no longer fully meet the needs of land management, and localities are exploring new updating methods adapted to local land-use management. By comparing various methods for updating benchmark land prices and drawing on foreign land price systems, this paper discusses the development trend of updating methods for urban benchmark land prices.

  13. Benchmarking a DSP processor

    OpenAIRE

    Lennartsson, Per; Nordlander, Lars

    2002-01-01

    This Master thesis describes the benchmarking of a DSP processor. Benchmarking means measuring the performance in some way. In this report, we have focused on the number of instruction cycles needed to execute certain algorithms. The algorithms we have used in the benchmark are all very common in signal processing today. The results we have reached in this thesis have been compared to benchmarks for other processors, performed by Berkeley Design Technology, Inc. The algorithms were programm...

  14. Stepwise Method Based on Confidence Bound and Information Incorporation for Identifying the Maximum Tolerable Dose

    Institute of Scientific and Technical Information of China (English)

    王雪丽; 陶剑; 史宁中

    2005-01-01

    The primary goal of a phase I clinical trial is to find the maximum tolerable dose of a treatment. In this paper, we propose a new stepwise method based on confidence bounds and information incorporation to determine the maximum tolerable dose among given dose levels. On the one hand, to avoid severe or even fatal toxicity and to reduce the number of experimental subjects, the new method starts from the lowest dose level and then proceeds in a stepwise fashion. On the other hand, to improve the accuracy of the recommendation, the final recommendation of the maximum tolerable dose is made by incorporating the information from an additional experimental cohort at the same dose level. Furthermore, empirical simulation results show that the new method has some real advantages in comparison with the modified continual reassessment method.
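
    The paper's actual confidence bound and information-incorporation rule are not given in the abstract, so the following is only a minimal sketch of the general idea, assuming a one-sided Clopper-Pearson bound, a hypothetical toxicity target of 30%, and made-up cohort data: escalation proceeds from the lowest level and stops once the lower confidence bound on the toxicity rate exceeds the target.

        from scipy.stats import beta

        def toxicity_lcb(n_tox, n_treated, conf=0.90):
            # One-sided Clopper-Pearson lower bound for the toxicity probability.
            if n_tox == 0:
                return 0.0
            return beta.ppf(1.0 - conf, n_tox, n_treated - n_tox + 1)

        def stepwise_mtd(cohorts, target=0.30, conf=0.90):
            # cohorts: (toxicities, subjects) per ascending dose level.
            # Walk up from the lowest level; stop when a level is clearly
            # too toxic and recommend the level below it.
            mtd = None
            for level, (n_tox, n) in enumerate(cohorts):
                if toxicity_lcb(n_tox, n, conf) > target:
                    break
                mtd = level
            return mtd

        print(stepwise_mtd([(0, 3), (1, 3), (3, 3)]))  # -> 1 (second level)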

  15. Benchmarking von Krankenhausinformationssystemen – eine vergleichende Analyse deutschsprachiger Benchmarkingcluster

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system's and information management's costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme observes both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. Assessing the costs and quality of application systems, physical data processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance, and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  16. SU-E-T-86: A Systematic Method for GammaKnife SRS Fetal Dose Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Geneser, S; Paulsson, A; Sneed, P; Braunstein, S; Ma, L [UCSF Comprehensive Cancer Center, San Francisco, CA (United States)

    2015-06-15

    Purpose: Estimating fetal dose is critical to the decision-making process when radiation treatment is indicated during pregnancy. Fetal doses less than 5cGy confer no measurable non-cancer developmental risks but can produce a threefold increase in the risk of developing childhood cancer. In this study, we estimate fetal dose for a patient receiving Gamma Knife stereotactic radiosurgery (GKSRS) treatment and develop a method to estimate dose directly from plan details. Methods: A patient underwent GKSRS on a Perfexion unit for eight brain metastases (two infratentorial and one brainstem). Dose measurements were performed using a CC13, head phantom, and solid water. Superficial doses to the thyroid, sternum, and pelvis were measured using MOSFETs during treatment. Because the fetal dose was too low to measure accurately, we obtained measurements proximal to the isocenter, fitted them to an exponential function, and extrapolated the dose to the fundus of the uterus, the uterine midpoint, and the pubic symphysis for both the preliminary and delivered plans. Results: The R-squared value of the fit to the delivered doses was 0.995. The estimated fetal doses for the 72-minute preliminary and 138-minute delivered plans range from 0.0014 to 0.028cGy and 0.07 to 0.38cGy, respectively. MOSFET readings during treatment were just above background for the thyroid and negligible for all inferior positions. The method for estimating fetal dose from plan shot information was within 0.2cGy of the measured values at 14cm cranial to the fetal location. Conclusion: Estimated fetal doses for both the preliminary and delivered plan were well below the 5cGy recommended limit. Due to Perfexion shielding, internal dose is primarily governed by attenuation and drops off exponentially. This is the first work that reports fetal dose for a GK Perfexion unit. Although multiple lesions were treated and the duration of treatment was long, the estimated fetal dose remained very low.
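
    As an illustration of the extrapolation step described above, the sketch below fits an exponential fall-off to doses measured at increasing distance from the isocenter and evaluates it at fetal landmark distances. All distances and dose values are invented placeholders, not the study's measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        distance_cm = np.array([10.0, 15.0, 20.0, 25.0, 30.0])   # from isocenter
        dose_cgy    = np.array([5.1, 1.9, 0.75, 0.31, 0.12])     # measured dose

        def expo(x, a, b):
            return a * np.exp(-b * x)

        (a, b), _ = curve_fit(expo, distance_cm, dose_cgy, p0=(10.0, 0.2))
        # Landmark distances below are assumptions for the illustration only.
        for label, d in [("uterine fundus", 40.0), ("uterine midpoint", 45.0),
                         ("pubic symphysis", 50.0)]:
            print(f"{label}: {expo(d, a, b):.4f} cGy")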

  17. Two new computational methods for universal DNA barcoding: a benchmark using barcode sequences of bacteria, archaea, animals, fungi, and land plants.

    Directory of Open Access Journals (Sweden)

    Akifumi S Tanabe

    Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used "1-nearest-neighbor" (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need

  18. Two new computational methods for universal DNA barcoding: a benchmark using barcode sequences of bacteria, archaea, animals, fungi, and land plants.

    Science.gov (United States)

    Tanabe, Akifumi S; Toju, Hirokazu

    2013-01-01

    Taxonomic identification of biological specimens based on DNA sequence information (a.k.a. DNA barcoding) is becoming increasingly common in biodiversity science. Although several methods have been proposed, many of them are not universally applicable due to the need for prerequisite phylogenetic/machine-learning analyses, the need for huge computational resources, or the lack of a firm theoretical background. Here, we propose two new computational methods of DNA barcoding and show a benchmark for bacterial/archeal 16S, animal COX1, fungal internal transcribed spacer, and three plant chloroplast (rbcL, matK, and trnH-psbA) barcode loci that can be used to compare the performance of existing and new methods. The benchmark was performed under two alternative situations: query sequences were available in the corresponding reference sequence databases in one, but were not available in the other. In the former situation, the commonly used "1-nearest-neighbor" (1-NN) method, which assigns the taxonomic information of the most similar sequences in a reference database (i.e., BLAST-top-hit reference sequence) to a query, displays the highest rate and highest precision of successful taxonomic identification. However, in the latter situation, the 1-NN method produced extremely high rates of misidentification for all the barcode loci examined. In contrast, one of our new methods, the query-centric auto-k-nearest-neighbor (QCauto) method, consistently produced low rates of misidentification for all the loci examined in both situations. These results indicate that the 1-NN method is most suitable if the reference sequences of all potentially observable species are available in databases; otherwise, the QCauto method returns the most reliable identification results. The benchmark results also indicated that the taxon coverage of reference sequences is far from complete for genus or species level identification in all the barcode loci examined. Therefore, we need to accelerate

  19. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other.The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  20. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study involves indicating the premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. The thorough insight into benchmarking applications enabled developing a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature and the author's experience from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  1. Dose calculation and in-phantom measurement in BNCT using response matrix method.

    Science.gov (United States)

    Rahmani, Faezeh; Shahriari, Majid

    2011-12-01

    In-phantom measurement of the physical dose distribution is very important for Boron Neutron Capture Therapy (BNCT) planning validation. If the therapeutic neutron beam changes because the beam shaping assembly (BSA) is changed, the dose changes as well, so another set of simulations would normally be required for dose calculation. To avoid this time-consuming procedure and speed up the dose calculation, a response matrix method was used. This procedure was performed for the neutron beam of the optimized BSA as a reference beam. The calculations were carried out using the MCNPX Monte Carlo code. The calculated beam parameters were measured for a SNYDER head phantom placed 10 cm away from the beam exit of the BSA. The head phantom can be treated as a linear system, with the neutron beam as its input and the dose distribution as its response. The neutron energy spectrum was discretized into 27 groups, and the dose response of each group was calculated. The sum of these dose responses equals the total dose of the whole neutron/gamma spectrum. The response matrix is a two-dimensional (energy/depth-dose) matrix in which each element represents the depth dose resulting from a specific energy group. If the spectrum is changed, the response of each energy group may differ. By combining the response matrix with the energy vector of the new spectrum, the dose response can be calculated. The method was tested for several BSAs, and the calculations show statistical errors of less than 10%.
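
    A minimal sketch of the response-matrix idea, assuming the matrix has already been tabulated by Monte Carlo: once each column holds the depth-dose response of one energy group, the dose for a new beam spectrum is a single matrix-vector product rather than a fresh transport simulation. The array sizes and values below are stand-ins.

        import numpy as np

        n_groups, n_depths = 27, 50              # 27 energy groups, as above
        rng = np.random.default_rng(0)
        R = rng.random((n_depths, n_groups))     # stand-in response matrix:
                                                 # R[d, g] = dose at depth d per
                                                 # unit fluence in group g
        phi = rng.random(n_groups)               # group fluences of a new beam

        depth_dose = R @ phi                     # total depth-dose curve
        print(depth_dose.shape)                  # (50,)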

  2. A method to adjust radiation dose-response relationships for clinical risk factors

    DEFF Research Database (Denmark)

    Appelt, Ane Lindegaard; Vogelius, Ivan R

    2012-01-01

    Several clinical risk factors for radiation induced toxicity have been identified in the literature. Here, we present a method to quantify the effect of clinical risk factors on radiation dose-response curves and apply the method to adjust the dose-response for radiation pneumonitis for patients...

  3. Dose calculation using a numerical method based on Haar wavelets integration

    Energy Technology Data Exchange (ETDEWEB)

    Belkadhi, K., E-mail: khaled.belkadhi@ult-tunisie.com [Unité de Recherche de Physique Nucléaire et des Hautes Énergies, Faculté des Sciences de Tunis, Université Tunis El-Manar (Tunisia); Manai, K. [Unité de Recherche de Physique Nucléaire et des Hautes Énergies, Faculté des Sciences de Tunis, Université Tunis El-Manar (Tunisia); College of Science and Arts, University of Bisha, Bisha (Saudi Arabia)

    2016-03-11

    This paper deals with the calculation of the absorbed dose in a gamma-ray irradiation cell. Direct measurement and Monte Carlo simulation are both expensive and time consuming. An alternative to these two approaches is a numerical method that can quickly and efficiently estimate the absorbed dose by approximating the photon flux at a specific point in space. To validate the numerical integration method based on Haar wavelets for absorbed dose estimation, a study with many configurations was performed. The results obtained with the Haar wavelet method showed very good agreement with the simulation, highlighting good efficacy and acceptable accuracy. - Highlights: • A numerical integration method using Haar wavelets is detailed. • Absorbed dose is estimated with the Haar wavelet method. • The absorbed dose calculated using Haar wavelets is compared with a Geant4 Monte Carlo simulation.
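
    The following is a minimal illustration of Haar-wavelet quadrature on [0, 1] with midpoint collocation, not the paper's cell geometry or flux kernel: expanding a function in the Haar basis, the integral over the interval is simply the scaling-function coefficient, since every mother wavelet integrates to zero.

        import numpy as np

        def haar_coefficients(f, J=6):
            m = 2 ** J
            x = (np.arange(m) + 0.5) / m          # midpoint collocation points
            fx = f(x)
            c = np.empty(m)
            c[0] = fx.mean()                      # scaling-function coefficient
            n = 1
            for j in range(J):                    # wavelet levels
                for k in range(2 ** j):           # translations within a level
                    h = np.zeros(m)
                    lo, mid, hi = k / 2**j, (k + 0.5) / 2**j, (k + 1) / 2**j
                    h[(x >= lo) & (x < mid)] = 2 ** (j / 2)
                    h[(x >= mid) & (x < hi)] = -(2 ** (j / 2))
                    c[n] = (h * fx).sum() / m     # discrete inner product
                    n += 1
            return c

        c = haar_coefficients(np.exp)
        print(c[0])   # ~= integral of exp over [0, 1] = e - 1 = 1.71828...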

  4. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and the NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  5. Large-scale benchmarking reveals false discoveries and count transformation sensitivity in 16S rRNA gene amplicon data analysis methods used in microbiome studies

    DEFF Research Database (Denmark)

    Thorsen, Jonathan; Brejnrod, Asker Daniel; Mortensen, Martin Steen;

    2016-01-01

    BACKGROUND: There is an immense scientific interest in the human microbiome and its effects on human physiology, health, and disease. A common approach for examining bacterial communities is high-throughput sequencing of 16S rRNA gene hypervariable regions, aggregating sequence-similar amplicons into operational taxonomic units (OTUs). Strategies for detecting differential relative abundance of OTUs between sample conditions include classical statistical approaches as well as a plethora of newer methods, many borrowing from the related field of RNA-seq analysis. This effort is complicated by unique data characteristics, including sparsity, sequencing depth variation, and nonconformity of read counts to theoretical distributions, which is often exacerbated by exploratory and/or unbalanced study designs. Here, we assess the robustness of available methods for (1) inference in differential relative abundance … should be interpreted with caution. We provide an easily extensible framework for benchmarking of new methods and future microbiome datasets.

  6. Multicentre evaluation of a novel vaginal dose reporting method in 153 cervical cancer patients

    DEFF Research Database (Denmark)

    Westerveld, Henrike; de Leeuw, Astrid; Kirchheiner, Kathrin

    2016-01-01

    Materials and methods: In a subset of patients from the EMBRACE study, vaginal doses were evaluated. Doses at the applicator surface left/right and anterior/posterior and at 5 mm depth were measured. In addition, the dose at the Posterior–Inferior Border of Symphysis (PIBS) vaginal dose point and at PIBS±2 cm … respectively. At 5 mm depth, doses were 98 (55–212) Gy/91 (54–227) Gy left/right, and 71 (51–145) Gy/67 (49–189) Gy anterior/posterior, respectively. The dose at PIBS and PIBS±2 cm was 41 (3–81) Gy, 54 (32–109) Gy and 5 (1–51) Gy, respectively. At PIBS+2 cm (mid vagina) the dose variation came from BT. The variation at PIBS−2 cm (lower vagina) was mainly dependent on the EBRT field border location. Conclusions: This novel method for reporting vaginal doses coming from EBRT and BT through well-defined dose points gives a robust representation of the dose along the vaginal axis. In addition, it allows comparison...

  7. A method of estimating conceptus doses resulting from multidetector CT examinations during all stages of gestation

    Energy Technology Data Exchange (ETDEWEB)

    Damilakis, John; Tzedakis, Antonis; Perisinakis, Kostas; Papadakis, Antonios E. [Department of Medical Physics, Faculty of Medicine, University of Crete, P.O. Box 2208, 71003 Iraklion, Crete (Greece); Department of Medical Physics, University Hospital of Iraklion, P.O. Box 1352, 71003 Iraklion, Crete (Greece)]

    2010-12-15

    Purpose: Current methods for the estimation of conceptus dose from multidetector CT (MDCT) examinations performed on the mother provide dose data for typical protocols with a fixed scan length. However, modified low-dose imaging protocols are frequently used during pregnancy. The purpose of the current study was to develop a method for the estimation of conceptus dose from any MDCT examination of the trunk performed during all stages of gestation. Methods: The Monte Carlo N-Particle (MCNP) radiation transport code was employed in this study to model the Siemens Sensation 16 and Sensation 64 MDCT scanners. Four mathematical phantoms were used, simulating women at 0, 3, 6, and 9 months of gestation. The contribution to the conceptus dose from single simulated scans was obtained at various positions across the phantoms. To investigate the effect of maternal body size and conceptus depth on conceptus dose, phantoms of different sizes were produced by adding layers of adipose tissue around the trunk of the mathematical phantoms. To verify MCNP results, conceptus dose measurements were carried out by means of three physical anthropomorphic phantoms, simulating pregnancy at 0, 3, and 6 months of gestation and thermoluminescence dosimetry (TLD) crystals. Results: The results consist of Monte Carlo-generated normalized conceptus dose coefficients for single scans across the four mathematical phantoms. These coefficients were defined as the conceptus dose contribution from a single scan divided by the CTDI free-in-air measured with identical scanning parameters. Data have been produced to take into account the effect of maternal body size and conceptus position variations on conceptus dose. Conceptus doses measured with TLD crystals showed a difference of up to 19% compared to those estimated by mathematical simulations. Conclusions: Estimation of conceptus doses from MDCT examinations of the trunk performed on pregnant patients during all stages of gestation can be made

  8. X-ray tube output based calculation of patient entrance surface dose: validation of the method

    Energy Technology Data Exchange (ETDEWEB)

    Harju, O.; Toivonen, M.; Tapiovaara, M.; Parviainen, T. [Radiation and Nuclear Safety Authority, Helsinki (Finland)

    2003-06-01

    X-ray departments need methods to monitor the doses delivered to patients in order to be able to compare their dose levels with established reference levels. For this purpose, the patient dose per radiograph is described in terms of the entrance surface dose (ESD) or the dose-area product (DAP). The actual measurement is often made by using a DAP meter or thermoluminescent dosimeters (TLD). The third possibility, the calculation of ESD from the examination technique factors, is likely to be a common method for x-ray departments that do not have the other methods at their disposal or for examinations where the dose may be too low to be measured by the other means (e.g. chest radiography). We have developed a program for the determination of ESD by the calculation method and analysed the accuracy that can be achieved by this indirect method. The program calculates the ESD from the current-time product, x-ray tube voltage, beam filtration and focus-to-skin distance (FSD). Additionally, for calibrating the dose calculation method and thereby improving the accuracy of the calculation, the x-ray tube output should be measured for at least one x-ray tube voltage value in each x-ray unit. The aim of the present work is to point out the restrictions of the method and the details of its practical application. The first experiences from the use of the method are summarised. (orig.)
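
    A hedged sketch of such an indirect ESD calculation, assuming a measured tube output at a reference voltage, a power-law voltage dependence, and a generic backscatter factor; all calibration numbers below are illustrative, and in practice each x-ray unit would supply its own measured output.

        def entrance_surface_dose(kv, mas, fsd_m, y_ref=0.05, kv_ref=80.0,
                                  exponent=2.0, backscatter=1.35):
            # y_ref: measured tube output in mGy/mAs at 1 m for kv_ref (assumed)
            output = y_ref * (kv / kv_ref) ** exponent     # mGy/mAs at 1 m
            # inverse-square scaling to the skin plus backscatter
            return output * mas * (1.0 / fsd_m) ** 2 * backscatter  # mGy

        print(entrance_surface_dose(kv=90, mas=20, fsd_m=1.0))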

  9. Liver tumour segmentation using contrast-enhanced multi-detector CT data: performance benchmarking of three semiautomated methods

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Jia-Yin [National University of Singapore, Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, Singapore (Singapore); Agency for Science, Technology and Research, Institute for Infocomm Research, Singapore (Singapore); Wong, Damon W.K.; Tian, Qi; Xiong, Wei; Liu, Jimmy J. [Agency for Science, Technology and Research, Institute for Infocomm Research, Singapore (Singapore); Ding, Feng; Venkatesh, Sudhakar K.; Qi, Ying-Yi [National University of Singapore, Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, Singapore (Singapore); Leow, Wee-Kheng [National University of Singapore, School of Computing, Singapore (Singapore)

    2010-07-15

    Automatic tumour segmentation and volumetry is useful in cancer staging and treatment outcome assessment. This paper presents a performance benchmarking study on liver tumour segmentation for three semiautomatic algorithms: 2D region growing with knowledge-based constraints (A1), 2D voxel classification with propagational learning (A2) and Bayesian rule-based 3D region growing (A3). CT data from 30 patients were studied, and 47 liver tumours were isolated and manually segmented by experts to obtain the reference standard. Four datasets with ten tumours were used for algorithm training and the remaining 37 tumours for testing. Three evaluation metrics, relative absolute volume difference (RAVD), volumetric overlap error (VOE) and average symmetric surface distance (ASSD), were computed based on computerised and reference segmentations. A1, A2 and A3 obtained mean/median RAVD scores of 17.93/10.53%, 17.92/9.61% and 34.74/28.75%, mean/median VOEs of 30.47/26.79%, 25.70/22.64% and 39.95/38.54%, and mean/median ASSDs of 2.05/1.41 mm, 1.57/1.15 mm and 4.12/3.41 mm, respectively. For each metric, we obtained significantly lower values of A1 and A2 than A3 (P < 0.01), suggesting that A1 and A2 outperformed A3. Compared with the reference standard, the overall performance of A1 and A2 is promising. Further development and validation is necessary before reliable tumour segmentation and volumetry can be widely used clinically. (orig.)
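
    For reference, two of the benchmark metrics (RAVD and VOE) can be computed directly from binary masks as below; ASSD requires surface extraction and is omitted. The toy masks are placeholders.

        import numpy as np

        def ravd(seg, ref):
            # Relative absolute volume difference, in percent.
            return 100.0 * abs(int(seg.sum()) - int(ref.sum())) / ref.sum()

        def voe(seg, ref):
            # Volumetric overlap error, in percent: 100 * (1 - Jaccard index).
            inter = np.logical_and(seg, ref).sum()
            union = np.logical_or(seg, ref).sum()
            return 100.0 * (1.0 - inter / union)

        seg = np.zeros((64, 64, 64), bool); seg[10:30, 10:30, 10:30] = True
        ref = np.zeros((64, 64, 64), bool); ref[12:30, 10:30, 10:30] = True
        print(ravd(seg, ref), voe(seg, ref))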

  10. SEMICONDUCTOR TECHNOLOGY: An efficient dose-compensation method for proximity effect correction

    Science.gov (United States)

    Ying, Wang; Weihua, Han; Xiang, Yang; Renping, Zhang; Yang, Zhang; Fuhua, Yang

    2010-08-01

    A novel, simple dose-compensation method is developed for proximity effect correction in electron-beam lithography. The sizes of exposed patterns depend on dose factors while the other exposure parameters (including accelerating voltage, resist thickness, exposure step size, substrate material, and so on) remain constant. The method is based on two reasonable assumptions in the evaluation of the compensated dose factor: one is that the relation between dose factors and circle diameters is linear in the range under consideration; the other is that the compensated dose factor is only affected by the nearest neighbors, for simplicity. Four-layer-hexagon photonic crystal structures were fabricated as test patterns to demonstrate the method. Compared to the uncorrected structures, the homogeneity of the corrected hole size in the photonic crystal structures was clearly improved.

  11. The Conic Benchmark Format

    DEFF Research Database (Denmark)

    Friberg, Henrik A.

    This document constitutes the technical reference manual of the Conic Benchmark Format with file extension .cbf or .CBF. It unifies linear, second-order cone (also known as conic quadratic) and semidefinite optimization with mixed-integer variables. The format has been designed with benchmark libraries in mind, and therefore focuses on compact and easily parsable representations. The problem structure is separated from the problem data, and the format moreover facilitates benchmarking of hot-start capability through sequences of changes.

  12. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  13. Standardized benchmarking in the quest for orthologs

    DEFF Research Database (Denmark)

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador;

    2016-01-01

    …precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods.

  14. Benchmark Database on Isolated Small Peptides Containing an Aromatic Side Chain: Comparison Between Wave Function and Density Functional Theory Methods and Empirical Force Field

    Energy Technology Data Exchange (ETDEWEB)

    Valdes, Haydee; Pluhackova, Kristyna; Pitonak, Michal; Rezac, Jan; Hobza, Pavel

    2008-03-13

    A detailed quantum chemical study on five peptides (WG, WGG, FGG, GGF and GFA) containing the residues phenylalanyl (F), glycyl (G), tryptophyl (W) and alanyl (A), where F and W are of aromatic character, is presented. When investigating isolated small peptides, the dispersion interaction is the dominant attractive force in the intramolecular interaction between the peptide backbone and the aromatic side chain. Consequently, an accurate theoretical study of these systems requires the use of a methodology that properly covers the London dispersion forces. For this reason we have assessed the performance of the MP2, SCS-MP2, MP3, TPSS-D, PBE-D, M06-2X, BH&H, TPSS, B3LYP and tight-binding DFT-D methods and the ff99 empirical force field against CCSD(T)/complete basis set (CBS) limit benchmark data. All the DFT techniques with a '-D' suffix have been augmented by an empirical dispersion energy, while the M06-2X functional was parameterized to cover the London dispersion energy. For the systems studied here, we conclude that the use of the ff99 force field is not recommended, mainly because of problems concerning the assignment of reliable atomic charges. Tight-binding DFT-D is efficient as a screening tool providing reliable geometries. Among the DFT functionals, M06-2X and TPSS-D show the best performance, which is explained by the fact that both procedures cover the dispersion energy. The B3LYP and TPSS functionals, which do not cover this energy, fail systematically. Both electronic energies and geometries obtained by means of wave-function theory methods compare satisfactorily with the CCSD(T)/CBS benchmark data.

  15. Statistical methods to evaluate the correlation between measured and calculated dose using quality assurance method in IMRT

    Directory of Open Access Journals (Sweden)

    Abdulhamid Chaikh

    2015-12-01

    Purpose: The objective of this study is to validate a procedure based on statistical methods to assess the agreement and the correlation between measured and calculated dose in the quality assurance (QA) process for Intensity-Modulated Radiation Therapy (IMRT). Patients and methods: 10 patients, comprising 56 fields for head and neck cancer treatment, were analyzed. For each patient, one treatment plan was generated using the Eclipse TPS. To compare the calculated dose with the measured dose, a CT scan of solid water slabs (30 × 30 × 15 cm3) was used. The measurements were made for absolute dose with a pinpoint ionization chamber and for 2D dose distributions using electronic portal imaging device dosimetry. Six criteria levels were applied for each case: (3%, 3 mm), (4%, 3 mm), (5%, 3 mm), (4%, 4 mm), (5%, 4 mm) and (5%, 5 mm). The normality of the data and the homogeneity of variance were tested using the Shapiro-Wilk test and Levene's test, respectively. The Wilcoxon signed-rank paired test was used to calculate p-values. The Bland-Altman method was used to calculate the limits of agreement between calculated and measured doses and to draw a scatter plot. The correlation between calculated and measured doses was assessed using Spearman's rank test. Results: The statistical tests indicate that the data do not follow a normal distribution (p < 0.001) and have homogeneous variance (p = 0.85). The upper and lower limits of agreement for absolute dose measurements were 6.44% and -6.40%, respectively. The Wilcoxon test indicated a significant difference between calculated and measured dose with the ionization chamber (p = 0.01). Spearman's test indicated a strong correlation between calculated and absolute measured dose (ρ = 0.99). However, there is a lack of correlation between the dose differences for absolute dose measurements and the gamma passing rates for 2D dose measurements. Conclusion: The statistical tests showed that the common acceptance criteria using gamma evaluation are not able
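
    The statistical workflow described above maps directly onto standard scipy routines; the sketch below uses invented paired dose values, not the study's data.

        import numpy as np
        from scipy.stats import shapiro, levene, wilcoxon, spearmanr

        calc = np.array([2.01, 1.95, 2.10, 1.88, 2.05, 1.99, 2.15, 1.92])  # Gy
        meas = np.array([1.98, 1.97, 2.04, 1.90, 2.00, 2.03, 2.08, 1.95])  # Gy

        diff_pct = 100.0 * (calc - meas) / meas
        _, p_normal = shapiro(diff_pct)        # normality of the differences
        _, p_var = levene(calc, meas)          # homogeneity of variance
        _, p_paired = wilcoxon(calc, meas)     # paired difference test
        rho, p_rho = spearmanr(calc, meas)     # rank correlation

        # Bland-Altman limits of agreement: mean difference +/- 1.96 SD
        loa = diff_pct.mean() + np.array([-1.96, 1.96]) * diff_pct.std(ddof=1)
        print(p_normal, p_var, p_paired, rho, loa)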

  16. An Effective Approach for Benchmarking Implementation

    Directory of Open Access Journals (Sweden)

    B. M. Deros

    2011-01-01

    Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually to increase their competitiveness in the market. The study begins with a literature review on benchmarking definitions, barriers to and advantages of implementation, and benchmarking frameworks. Approach: Thirty respondents were involved in the case study. They comprise industrial practitioners who assessed the usability and practicability of the guideline, conceptual framework and computerized mini program. Results: A guideline and template were proposed to simplify the adoption of benchmarking techniques. A conceptual framework was proposed by integrating the Deming PDCA and Six Sigma DMAIC theories. It provides a step-by-step method to simplify the implementation and to optimize the benchmarking results. A computerized mini program was suggested to assist users in adopting the technique as part of an improvement project. In the assessment test, the respondents found that the implementation method gave companies an idea of how to initiate benchmarking and guided them towards achieving the desired goal as set in a benchmarking project. Conclusion: The results obtained and discussed in this study can be applied to implementing benchmarking in a more systematic way and ensuring its success.

  17. Digital radiography of scoliosis with a scanning method: radiation dose optimization

    Energy Technology Data Exchange (ETDEWEB)

    Geijer, Haakan; Andersson, Torbjoern [Department of Radiology, Oerebro University Hospital, 701 85 Oerebro (Sweden); Verdonck, Bert [Philips Medical Systems, P.O. Box 10,000, 5680 Best (Netherlands); Beckman, Karl-Wilhelm; Persliden, Jan [Department of Medical Physics, Oerebro University Hospital, 701 85 Oerebro (Sweden)

    2003-03-01

    The aim of this study was optimization of the radiation dose-image quality relationship for a digital scanning method of scoliosis radiography. The examination is performed as a digital multi-image translation scan that is reconstructed to a single image in a workstation. Entrance dose was recorded with thermoluminescent dosimeters placed dorsally on an Alderson phantom. At the same time, kerma area product (KAP) values were recorded. A Monte Carlo calculation of effective dose was also made. Image quality was evaluated with a contrast-detail phantom and Visual Grading. The radiation dose was reduced by lowering the image intensifier entrance dose request, adjusting pulse frequency and scan speed, and by raising tube voltage. The calculated effective dose was reduced from 0.15 to 0.05 mSv with reduction of KAP from 1.07 to 0.25 Gy cm{sup 2} and entrance dose from 0.90 to 0.21 mGy. The image quality was reduced with the Image Quality Figure going from 52 to 62 and a corresponding reduction in image quality as assessed with Visual Grading. The optimization resulted in a dose reduction to 31% of the original effective dose with an acceptable reduction in image quality considering the intended use of the images for angle measurements. (orig.)

  18. Handleiding benchmark VO

    NARCIS (Netherlands)

    Blank, j.l.t.

    2008-01-01

    Handleiding benchmark VO, 25 November 2008, by J.L.T. Blank (IPSE Studies). A manual for reading the i...

  19. Benchmark af erhvervsuddannelserne

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    In this working paper we discuss how the Danish vocational schools can be benchmarked, and we present the results of a number of calculation models. Benchmarking the vocational schools is conceptually complicated. The schools offer a wide range of different programmes. This makes it difficult...

  20. Benchmarking af kommunernes sagsbehandling

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007 onwards, the National Social Appeals Board (Ankestyrelsen) is to carry out benchmarking of the quality of the municipalities' case processing. The purpose of the benchmarking is to develop the design of the practice investigations with a view to better follow-up, and to improve the municipalities' case processing. This working paper discusses methods for benchmarking...

  1. Thermal Performance Benchmarking (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  2. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore alternative improvement strategies. Implementations of both a parametric and a non-parametric model are presented.

  3. Hepatic CT perfusion measurements: A feasibility study for radiation dose reduction using new image reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Negi, Noriyuki, E-mail: noriyuki@med.kobe-u.ac.jp [Division of Radiology, Kobe University Hospital, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan); Yoshikawa, Takeshi, E-mail: yoshikawa0816@aol.com [Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan); Ohno, Yoshiharu, E-mail: yosirad@kobe-u.ac.jp [Division of Radiology, Kobe University Hospital, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan); Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan); Somiya, Yuichiro, E-mail: somiya13@med.kobe-u.ac.jp [Division of Radiology, Kobe University Hospital, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan); Sekitani, Toshinori, E-mail: atieinks-toshi@nifty.com [Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan); Sugihara, Naoki, E-mail: naoki.sugihara@toshiba.co.jp [Toshiba Medical Systems Co., 1385 Shimoishigami, Otawara 324-0036 (Japan); Koyama, Hisanobu, E-mail: hkoyama@med.kobe-u.ac.jp [Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan); Kanda, Tomonori, E-mail: k_a@hotmail.co.jp [Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan); Kanata, Naoki, E-mail: takikina12345@yahoo.co.jp [Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan); Murakami, Tohru, E-mail: mura@med.kobe-u.ac.jp [Division of Radiology, Kobe University Hospital, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan); Kawamitsu, Hideaki, E-mail: kawamitu@med.kobe-u.ac.jp [Division of Radiology, Kobe University Hospital, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan); Sugimura, Kazuro, E-mail: sugimura@med.kobe-u.ac.jp [Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunokicho, Chuoku, Kobe 650-0017 (Japan)

    2012-11-15

    Objectives: To assess the effects of the image reconstruction method on hepatic CT perfusion (CTP) values using two CT protocols with different radiation doses. Materials and methods: Sixty patients underwent hepatic CTP and were randomly divided into two groups. Tube currents of 210 or 250 mA were used for the standard dose group and 120 or 140 mA for the low dose group; the higher currents were selected for large patients. Demographic features of the groups were compared. CT images were reconstructed by using filtered back projection (FBP), an image filter (quantum de-noising, QDS), and adaptive iterative dose reduction (AIDR). Hepatic arterial and portal perfusion (HAP and HPP, ml/min/100 ml) and the arterial perfusion fraction (APF, %) were calculated using the dual-input maximum slope method. ROIs were placed on each hepatic segment. Perfusion and Hounsfield unit (HU) values, and image noise (standard deviations of HU values, SD) were measured and compared between the groups and among the methods. Results: There were no significant differences in the demographic features of the groups, nor were there any significant differences in mean perfusion and HU values for either the groups or the image reconstruction methods. The mean SDs for each of the image reconstruction methods were significantly lower (p < 0.0001) for the standard dose group than for the low dose group, while the mean SDs for AIDR were significantly lower than those for FBP in both groups (p = 0.0006 and 0.013). The radiation dose reduction was approximately 45%. Conclusions: The image reconstruction method did not affect hepatic perfusion values calculated by the dual-input maximum slope method, with or without radiation dose reduction. AIDR significantly reduced image noise.

  4. Benchmarking monthly homogenization algorithms

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve

  5. High-order noise analysis for low dose iterative image reconstruction methods: ASIR, IRIS, and MBAI

    Science.gov (United States)

    Do, Synho; Singh, Sarabjeet; Kalra, Mannudeep K.; Karl, W. Clem; Brady, Thomas J.; Pien, Homer

    2011-03-01

    Iterative reconstruction techniques (IRTs) have been shown to suppress noise significantly in low dose CT imaging. However, medical doctors hesitate to accept this new technology because the visual impression of IRT images differs from that of full-dose filtered back-projection (FBP) images. Common noise measurements, such as the mean and standard deviation of a homogeneous region in the image, do not provide sufficient characterization of noise statistics when the probability density function becomes non-Gaussian. In this study, we measure L-moments of the intensity values of images acquired at 10% of normal dose and reconstructed by the IRT methods of two state-of-the-art clinical scanners (i.e., GE HDCT and Siemens DSCT Flash), keeping the dose level identical for both. The high- and low-dose scans (i.e., 10% of high dose) were acquired from each scanner and L-moments of noise patches were calculated for the comparison.
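
    Sample L-moments can be computed directly from sorted pixel values with the standard probability-weighted-moment estimators; the sketch below is a hedged stand-in for the paper's noise-patch analysis, using simulated Gaussian noise.

        import numpy as np
        from math import comb

        def sample_l_moments(x):
            # Unbiased probability-weighted moments b_0..b_3, then L-moments.
            x = np.sort(np.asarray(x, float))
            n = len(x)
            b = [sum(comb(i, r) * x[i] for i in range(r, n))
                 / (n * comb(n - 1, r)) for r in range(4)]
            l1 = b[0]
            l2 = 2*b[1] - b[0]
            l3 = 6*b[2] - 6*b[1] + b[0]
            l4 = 20*b[3] - 30*b[2] + 12*b[1] - b[0]
            return l1, l2, l3, l4

        noise_patch = np.random.default_rng(1).normal(40.0, 12.0, 5000)
        print(sample_l_moments(noise_patch))   # l3 near 0 for symmetric noise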

  6. SU-E-J-96: Multi-Axis Dose Accumulation of Noninvasive Image-Guided Breast Brachytherapy Through Biomechanical Modeling of Tissue Deformation Using the Finite Element Method

    Energy Technology Data Exchange (ETDEWEB)

    Rivard, MJ [Tufts University School of Medicine, Boston, MA (United States); Ghadyani, HR [SUNY Farmingdale State College, Farmingdale, NY (United States); Bastien, AD; Lutz, NN [Univeristy Massachusetts Lowell, Lowell, MA (United States); Hepel, JT [Rhode Island Hospital, Providence, RI (United States)

    2015-06-15

    Purpose: Noninvasive image-guided breast brachytherapy delivers conformal HDR Ir-192 brachytherapy treatments with the breast compressed, and treated in the cranial-caudal and medial-lateral directions. This technique subjects breast tissue to extreme deformations not observed for other disease sites. Given that commercially available software for deformable image registration cannot accurately co-register image sets obtained in these two states, a finite element analysis based on a biomechanical model was developed to deform dose distributions for each compression circumstance for dose summation. Methods: The model assumed the breast was under planar stress with values of 30 kPa for Young's modulus and 0.3 for Poisson's ratio. Dose distributions from round and skin-dose optimized applicators in cranial-caudal and medial-lateral compressions were deformed using 0.1 cm planar resolution. Dose distributions, skin doses, and dose-volume histograms were generated. Results were examined as a function of breast thickness, applicator size, target size, and offset distance from the center. Results: Over the range of examined thicknesses, target size increased several millimeters as compression thickness decreased. This trend increased with increasing offset distances. Applicator size minimally affected target coverage until the applicator was smaller than the compressed target. In all cases with an applicator larger than or equal to the compressed target size, > 90% of the target was covered by > 90% of the prescription dose. In all cases, dose coverage became less uniform and the average dose increased as offset distance increased. This effect was more pronounced for smaller target-applicator combinations. Conclusions: The model exhibited skin dose trends that matched MC-generated benchmarking results and clinical measurements within 2% over a similar range of breast thicknesses and target sizes. The model provided quantitative insight on dosimetric treatment variables over
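
    As a small building block of such a model (not the authors' solver), the plane-stress constitutive matrix implied by the stated material constants (Young's modulus 30 kPa, Poisson's ratio 0.3) can be written as follows.

        import numpy as np

        def plane_stress_matrix(E=30e3, nu=0.3):   # E in Pa, per the abstract
            # Relates [eps_xx, eps_yy, gamma_xy] to [sigma_xx, sigma_yy, tau_xy].
            c = E / (1.0 - nu**2)
            return c * np.array([[1.0, nu,  0.0],
                                 [nu,  1.0, 0.0],
                                 [0.0, 0.0, (1.0 - nu) / 2.0]])

        print(plane_stress_matrix())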

  7. Benchmark of the non-parametric Bayesian deconvolution method implemented in the SINBAD code for X/γ rays spectra processing

    Science.gov (United States)

    Rohée, E.; Coulon, R.; Carrel, F.; Dautremer, T.; Barat, E.; Montagu, T.; Normand, S.; Jammes, C.

    2016-11-01

    Radionuclide identification and quantification are a serious concern for many applications, such as in situ monitoring at nuclear facilities, laboratory analysis, special nuclear materials detection, environmental monitoring, and waste measurements. High resolution gamma-ray spectrometry based on high purity germanium diode detectors is the best solution available for isotopic identification. Over the last decades, methods have been developed to improve gamma spectra analysis. However, some difficulties remain when full energy peaks are folded together with a high ratio between their amplitudes, and when the Compton background is much larger than the signal of a single peak. In this context, this study compares a conventional analysis based on the "iterative peak fitting deconvolution" method with a "nonparametric Bayesian deconvolution" approach developed by the CEA LIST and implemented in the SINBAD code. The iterative peak fit deconvolution is used in this study as a reference method, largely validated by industrial standards, to unfold complex spectra from HPGe detectors. Complex cases of spectra are studied from IAEA benchmark protocol tests and with measured spectra. The SINBAD code shows promising deconvolution capabilities compared to the conventional method without any expert parameter fine tuning.

  8. A novel method for the evaluation of uncertainty in dose volume histogram computation

    CERN Document Server

    Cutanda-Henriquez, Francisco

    2007-01-01

    Dose volume histograms are a useful tool in state-of-the-art radiotherapy planning, and it is essential to be aware of their limitations. Dose distributions computed by treatment planning systems are affected by several sources of uncertainty, such as algorithm limitations, measurement uncertainty in the data used to model the beam, and residual differences between measured and computed dose once the model is optimized. In order to take the effect of uncertainty into account, a probabilistic approach is proposed and a new kind of histogram, a dose-expected volume histogram, is introduced. The expected value of the volume in the region of interest receiving an absorbed dose equal to or greater than a certain value is found using the probability distribution of the dose at each point. A rectangular probability distribution is assumed for this point dose, and a relationship is given for practical computations. This method is applied to a set of dose volume histograms for different regions of interest for 6 brain pat...
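
    Under the stated rectangular (uniform) distribution assumption, the probability that a point receives at least dose D has a simple closed form, and the expected volume is the sum of those probabilities over the region of interest. The sketch below is a hedged illustration with an invented dose sample and half-width, not the paper's relationship.

        import numpy as np

        def expected_volume_fraction(point_doses, thresholds, delta):
            # Uniform dose uncertainty on [d - delta, d + delta] at each point:
            # P(dose >= D) = clip((d + delta - D) / (2 delta), 0, 1).
            d = np.asarray(point_doses)[:, None]
            D = np.asarray(thresholds)[None, :]
            p = np.clip((d + delta - D) / (2.0 * delta), 0.0, 1.0)
            return p.mean(axis=0)   # expected fractional volume with dose >= D

        doses = np.random.default_rng(2).normal(60.0, 3.0, 10000)  # Gy, stand-in
        thresholds = np.linspace(50.0, 70.0, 41)
        evh = expected_volume_fraction(doses, thresholds, delta=1.5)
        print(evh[:5])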

  9. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    2005-01-01

    The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and the methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking on a real software project and conclude with a glimpse at the challenges for the fu...

  10. Benchmarking of corporate social responsibility: Methodological problems and robustness

    OpenAIRE

    2004-01-01

    This paper investigates the possibilities and problems of benchmarking Corporate Social Responsibility (CSR). After a methodological analysis of the advantages and problems of benchmarking, we develop a benchmark method that includes economic, social and environmental aspects as well as national and international aspects of CSR. The overall benchmark is based on a weighted average of these aspects. The weights are based on the opinions of companies and NGOs. Using different me...

  11. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question, in a total of 1728 benchmarking experiments, we rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection.
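
    A hedged sketch of greedy forward selection wrapped around a random forest and scored by cross-validation is given below; the dataset, model settings, and stopping rule are illustrative and do not reproduce the paper's benchmarking protocol.

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_score

        X, y = make_regression(n_samples=200, n_features=30, n_informative=8,
                               random_state=0)

        def forward_select(X, y, max_vars=5):
            selected, remaining = [], list(range(X.shape[1]))
            best_score = -np.inf
            while remaining and len(selected) < max_vars:
                # Score each candidate variable added to the current subset.
                scores = {j: cross_val_score(
                              RandomForestRegressor(n_estimators=25,
                                                    random_state=0),
                              X[:, selected + [j]], y, cv=3).mean()
                          for j in remaining}
                j_best = max(scores, key=scores.get)
                if scores[j_best] <= best_score:
                    break                      # no improvement: stop early
                best_score = scores[j_best]
                selected.append(j_best)
                remaining.remove(j_best)
            return selected, best_score

        print(forward_select(X, y))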

  12. A simple method for conversion of airborne gamma-ray spectra to ground level doses

    DEFF Research Database (Denmark)

    Korsbech, Uffe C C; Bargholz, Kim

    1996-01-01

    A new and simple method for the conversion of airborne NaI(Tl) gamma-ray spectra to dose rates at ground level has been developed. By weighting the channel count rates with the channel numbers, a spectrum dose index (SDI) is calculated for each spectrum. Ground-level dose rates are then determined by multiplying the SDI by an altitude-dependent conversion factor. The conversion factors are determined from spectra based on Monte Carlo calculations. The results are compared with measurements in a laboratory calibration set-up. IT-NT-27. June 1996. 27 p.
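
    The SDI computation itself is a one-line weighted sum; the sketch below uses a simulated spectrum and a placeholder conversion factor, since the report's altitude-dependent calibration is not reproduced here (channel numbering from 1 is also an assumption).

        import numpy as np

        def ground_dose_rate(counts, live_time_s, altitude_factor):
            rates = np.asarray(counts, float) / live_time_s  # counts/s per channel
            channels = np.arange(1, len(rates) + 1)          # assumed 1-based
            sdi = (channels * rates).sum()                   # spectrum dose index
            return sdi * altitude_factor                     # ground-level dose rate

        spectrum = np.random.default_rng(3).poisson(50.0, 256)  # stand-in spectrum
        print(ground_dose_rate(spectrum, live_time_s=10.0, altitude_factor=1e-6))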

  13. Benchmarking expert system tools

    Science.gov (United States)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were the Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test was a Zetalisp version of the benchmark, along with four versions of the benchmark written in Knowledge Engineering Environment, an object-oriented, frame-based expert system tool. The benchmarks used for testing are studied.

  14. A method for converting dose-to-medium to dose-to-tissue in Monte Carlo studies of gold nanoparticle-enhanced radiotherapy.

    Science.gov (United States)

    Koger, B; Kirkby, C

    2016-03-07

    Gold nanoparticles (GNPs) have shown potential in recent years as a means of therapeutic dose enhancement in radiation therapy. However, a major challenge in moving towards clinical implementation is the exact characterisation of the dose enhancement they provide. Monte Carlo studies attempt to explore this property, but they often face computational limitations when examining macroscopic scenarios. In this study, a method of converting dose from macroscopic simulations, where the medium is defined as a mixture containing both gold and tissue components, to a mean dose-to-tissue on a microscopic scale was established. Monte Carlo simulations were run for both explicitly-modeled GNPs in tissue and a homogeneous mixture of tissue and gold. A dose ratio was obtained for the conversion of dose scored in a mixture medium to dose-to-tissue in each case. Dose ratios varied from 0.69 to 1.04 for photon sources and 0.97 to 1.03 for electron sources. The dose ratio is highly dependent on the source energy as well as GNP diameter and concentration, though this effect is less pronounced for electron sources. By appropriately weighting the monoenergetic dose ratios obtained, the dose ratio for any arbitrary spectrum can be determined. This allows complex scenarios to be modeled accurately without explicitly simulating each individual GNP.
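
    The final step described above, weighting monoenergetic dose ratios to obtain the ratio for an arbitrary spectrum, can be sketched as a fluence-weighted average. All energies, weights and ratios below are illustrative placeholders, and the paper's exact weighting scheme may differ.

    import numpy as np

    energies = np.array([0.05, 0.1, 0.5, 1.0, 6.0])         # MeV, hypothetical grid
    weights = np.array([0.1, 0.3, 0.3, 0.2, 0.1])           # relative fluence per bin
    dose_ratios = np.array([0.70, 0.80, 0.98, 1.01, 1.03])  # D_tissue/D_mixture per energy

    spectrum_ratio = np.sum(weights * dose_ratios) / np.sum(weights)
    print(f"Spectrum-averaged dose ratio: {spectrum_ratio:.3f}")
    # dose_to_tissue = dose_scored_in_mixture * spectrum_ratio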

  15. The heat-balance integral method by a parabolic profile with unspecified exponent: Analysis and Benchmark Exercises

    CERN Document Server

    Hristov, Jordan

    2010-01-01

    The heat-balance integral method of Goodman has been thoroughly analyzed in the case of a parabolic profile with unspecified exponent depending on the boundary condition imposed. It is shown that the classical Goodman boundary conditions defining the time-dependent coefficients of the prescribed temperature profile do not work efficiently at the front of the thermal layer if the specific parabolic profile at issue is employed. Additional constraints based on physical assumptions enhance the heat-balance integral method and form a robust algorithm defining the parabola exponent. The method has been compared with results provided by Veinik's method, which differs considerably from Goodman's idea but also assumes formation of a thermal layer penetrating the heated body. The method has been demonstrated through detailed solutions of four 1-D heat-conduction problems in Cartesian co-ordinates, including a spherical problem (through change of variables) and an over-specified boundary condition at the face of the thermal layer.

  16. Benchmarking Compound Methods (CBS-QB3, CBS-APNO, G3, G4, W1BD) against the Active Thermochemical Tables: Formation Enthalpies of Radicals.

    Science.gov (United States)

    Somers, Kieran P; Simmie, John M

    2015-08-20

    The 298.15 K formation enthalpies of 38 radicals with molecular formula CxHyOz have been computed via the atomization procedure using the five title methods. The computed formation enthalpies are then benchmarked against the values recommended in the Active Thermochemical Tables (ATcT). The accuracy of the methods has been interpreted in terms of descriptive statistics, including the mean-signed error, mean-unsigned error, maximum average deviation, 2σ uncertainties, and 2×root-mean-square deviations (2RMSD). The results highlight the following rank order of accuracy for the methods studied: G4 > G3 > W1BD > CBS-APNO > CBS-QB3. The findings of this work are also considered in light of a recent companion study, which took an identical approach to quantifying the accuracies of these methods for 48 closed-shell singlet CxHyOz compounds. A similar order of accuracies and precisions was observed therein: G3 > G4 > W1BD > CBS-APNO > CBS-QB3. Both studies highlight systematic biases/deviations from the ATcT for the methods investigated, which are discussed in some detail, with methods having clear tendencies to over- or underpredict the recommended formation enthalpies for radical and/or closed-shell CxHyOz compounds. We show that one can improve the accuracy of the computed enthalpies, and simultaneously reduce the uncertainty, by taking unweighted average formation enthalpies from various combinations of the methods used. The reader should note that the statistical analyses preceding these conclusions also highlight that these error-cancellation effects are unique to closed-shell and radical species. By extension, these error-cancellation effects can be expected to differ across homologous series and chemical functionalities and their closed- and open-shell subgroups. Hence, further benchmarking studies are advised for other homologous series, such that the scientists and engineers (e.g. combustion/atmospheric/astrochemical) who frequently use these methods can
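
    The descriptive statistics used above, and the error cancellation from unweighted multi-method averaging, can be reproduced on any set of reference and computed enthalpies. A minimal sketch with invented numbers standing in for ATcT and computed values:

    import numpy as np

    atct = np.array([-84.0, 12.5, -201.3])  # "reference" enthalpies, kJ/mol (made up)
    methods = {
        "G4":      np.array([-83.1, 13.0, -200.2]),
        "CBS-QB3": np.array([-86.5, 10.9, -203.9]),
    }

    for name, values in methods.items():
        err = values - atct
        print(name, "MSE:", err.mean(), "MUE:", np.abs(err).mean(),
              "2RMSD:", 2 * np.sqrt((err ** 2).mean()))

    # Unweighted multi-method average, as recommended in the paper:
    combo = np.mean(list(methods.values()), axis=0)
    print("combo MUE:", np.abs(combo - atct).mean())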

  17. Pan-specific MHC class I predictors: A benchmark of HLA class I pan-specific prediction methods

    DEFF Research Database (Denmark)

    Zhang, Hao; Lundegaard, Claus; Nielsen, Morten

    2009-01-01

    emerging pathogens. Methods have recently been published that are able to predict peptide binding to any human MHC class I molecule. In contrast to conventional allele-specific methods, these methods do allow for extrapolation to un-characterized MHC molecules. These pan-specific HLA predictors have not previously been compared using independent evaluation sets. Results: A diverse set of quantitative peptide binding affinity measurements was collected from IEDB, together with a large set of HLA class I ligands from the SYFPEITHI database. Based on these data sets, three different pan-specific HLA web-accessible predictors, NetMHCpan, Adaptive-Double-Threading (ADT), and KISS, were evaluated. The performance of the pan-specific predictors was also compared to a well performing allele-specific MHC class I predictor, NetMHC, as well as a consensus approach integrating the predictions from the NetMHC and Net...

  18. Radiation dose to children in diagnostic radiology. Measurements and methods for clinical optimisation studies

    Energy Technology Data Exchange (ETDEWEB)

    Almen, A.J.

    1995-09-01

    A method for estimating the mean absorbed dose to different organs and tissues was developed for paediatric patients undergoing X-ray investigations. The absorbed dose distribution in water was measured for the specific X-ray beam used. Clinical images were studied to determine X-ray beam positions and field sizes. The size and position of organs in the patient were estimated using ORNL phantoms and complementary clinical information. Conversion factors between the mean absorbed dose to various organs and the entrance surface dose were calculated for five different body sizes. Direct measurements on patients, estimating entrance surface dose and energy imparted, were performed for common X-ray investigations. The examination technique for a number of paediatric X-ray investigations used in 19 Swedish hospitals was studied. For a simulated pelvis investigation of a 1-year-old child the entrance surface dose was measured and image quality was estimated using a contrast-detail phantom. Mean absorbed doses to organs and tissues in urography, lung, pelvis, thoracic spine, lumbar spine and scoliosis investigations were calculated. Calculations of effective dose were supplemented with risk calculations for particular organs, e.g. the female breast. The work shows that the examination technique in paediatric radiology is not yet optimised, and that the non-optimised procedures contribute to a considerable variation in radiation dose. In order to optimise paediatric radiology there is a need for more standardised methods in patient dosimetry. It is especially important to relate measured quantities to the size of the patient, using e.g. the patient's weight and length. 91 refs, 17 figs, 8 tabs.
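
    The core dose estimate described above is a product of an entrance surface dose and a size-dependent conversion factor. A minimal sketch, interpolating a hypothetical factor table over patient weight (the study's factors were derived from ORNL phantoms for five body sizes; all numbers below are invented):

    import numpy as np

    weights_kg = np.array([3.5, 10.0, 19.0, 32.0, 55.0])  # five phantom sizes
    cf_lung = np.array([0.55, 0.48, 0.40, 0.33, 0.27])    # organ dose / ESD (made up)

    def organ_dose(esd_mgy, patient_weight_kg):
        """Mean organ dose = ESD x conversion factor interpolated on weight."""
        cf = np.interp(patient_weight_kg, weights_kg, cf_lung)
        return esd_mgy * cf

    print(f"Lung dose: {organ_dose(esd_mgy=0.8, patient_weight_kg=14.0):.3f} mGy")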

  19. Simulation of sound waves using the Lattice Boltzmann Method for fluid flow: Benchmark cases for outdoor sound propagation

    NARCIS (Netherlands)

    Salomons, E.M.; Lohman, W.J.A.; Zhou, H.

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-fi

  20. GeodeticBenchmark_GEOMON

    Data.gov (United States)

    Vermont Center for Geographic Information — The GeodeticBenchmark_GEOMON data layer consists of geodetic control monuments (points) that have a known position or spatial reference. The locations of these...

  1. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  2. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  3. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This dataset compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings....

  4. TORT Solutions to the NEA Suite of Benchmarks for 3D Transport Methods and Codes over a Range in Parameter Space

    Energy Technology Data Exchange (ETDEWEB)

    Bekar, Kursat B.; Azmy, Yousry Y. [Department of Mechanical and Nuclear Engineering, Penn State University, University Park, PA 16802 (United States)

    2008-07-01

    We present the TORT solutions to the 3-D transport codes' suite of benchmarks exercise. An overview of benchmark configurations is provided, followed by a description of the TORT computational model we developed to solve the cases comprising the benchmark suite. In the numerical experiments reported in this paper, we chose to refine the spatial and angular discretizations simultaneously, from the coarsest model (40x40x40, 200 angles) to the finest model (160x160x160, 800 angles), and employed the results of the finest computational model as reference values for evaluating the mesh-refinement effects. The presented results show that the solutions for most cases in the suite of benchmarks as computed by TORT are in the asymptotic regime. (authors)

  5. A study on the indirect urea dosing method in the Selective Catalytic Reduction system

    Science.gov (United States)

    Brzeżański, M.; Sala, R.

    2016-09-01

    This article presents the results of studies on a concept solution for dosing urea in the gas phase in a selective catalytic reduction system. The idea of the concept was to heat up and evaporate the water-urea solution before introducing it into the exhaust gas stream. The aim was to enhance the conversion of urea into ammonia, which is the target reductant for nitrogen oxide treatment. The study was conducted on a medium-duty Euro 5 diesel engine with an exhaust line consisting of a DOC catalyst, a DPF filter and an SCR system with a changeable setup allowing urea to be dosed in the liquid phase (regular solution) or in the gas phase (concept solution). The main criterion was to assess the effect of the physical state of the dosed urea on the NOx conversion ratio in the SCR catalyst. In order to compare both urea dosing methods, a special test procedure was developed which consisted of six test steps covering a wide range of exhaust gas temperatures at steady-state engine operating conditions. Tests were conducted for different urea dosing quantities defined by the equivalence ratio. Based on the obtained results, a remarkable improvement in NOx reduction was found for gas-phase urea application in comparison to standard liquid urea dosing. The measured results indicate a high potential to increase the efficiency of the SCR catalyst by using gas-phase urea and provide the basis for further scientific research on this type of concept.

  6. Simulation of Sound Waves Using the Lattice Boltzmann Method for Fluid Flow: Benchmark Cases for Outdoor Sound Propagation

    OpenAIRE

    Erik M. Salomons; Lohman, Walter J. A.; Han Zhou

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equation...

  7. Is the Farm Prospering and Why? A Method to Spot Good or Weak Performance and To Do "Benchmarking"

    OpenAIRE

    Nelson, Bert-Owe

    2003-01-01

    In Swedish agriculture "mixed farming" is common. The manager may choose among several crops, with or without livestock production, or produce services for other farmers or for customers outside the farm sector. Farmers with diversified production feel the need to know which enterprises really contribute to overall farm profitability, even if gross margins appear to be rather good. With our method we evaluate the economic result of the most recent year in cooperation with the fa...

  8. Benchmarking in Foodservice Operations.

    Science.gov (United States)

    2007-11-02

    Benchmarking studies lasted from nine to twelve months, and could extend beyond that time for numerous reasons. Benchmarking was not simply data comparison, a fad, a means for reducing resources, a quick-fix program, or industrial tourism; it was a complete process.

  9. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as impo...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  10. On Big Data Benchmarking

    OpenAIRE

    Han, Rui; Lu, Xiaoyi

    2014-01-01

    Big data systems address the challenges of capturing, storing, managing, analyzing, and visualizing big data. Within this context, developing benchmarks to evaluate and compare big data systems has become an active topic for both research and industry communities. To date, most of the state-of-the-art big data benchmarks are designed for specific types of systems. Based on our experience, however, we argue that considering the complexity, diversity, and rapid evolution of big data systems, fo...

  11. Development of CT scanner models for patient organ dose calculations using Monte Carlo methods

    Science.gov (United States)

    Gu, Jianwei

    There is a serious and growing concern about the CT dose delivered by diagnostic CT examinations or image-guided radiation therapy imaging procedures. To better understand and accurately quantify radiation dose due to CT imaging, Monte Carlo based CT scanner models are needed. This dissertation describes the development, validation, and application of detailed CT scanner models including a GE LightSpeed 16 MDCT scanner and two image-guided radiation therapy (IGRT) cone-beam CT (CBCT) scanners, kV CBCT and MV CBCT. The modeling process considered the energy spectrum, beam geometry and movement, and bowtie filter (BTF). The methodology for validating the scanner models using reported CTDI values was also developed and implemented. Finally, the organ doses to different patients undergoing CT scans were obtained by integrating the CT scanner models with anatomically realistic patient phantoms. The tube current modulation (TCM) technique was also investigated for dose reduction. It was found that for RPI-AM, the thyroid, kidneys and thymus received the largest organ doses of 13.05, 11.41 and 11.56 mGy/100 mAs from the chest, abdomen-pelvis and CAP scans, respectively, using 120 kVp protocols. For RPI-AF, the thymus, small intestine and kidneys received the largest organ doses of 10.28, 12.08 and 11.35 mGy/100 mAs from the chest, abdomen-pelvis and CAP scans, respectively, using 120 kVp protocols. The dose to the fetus of the 3-month pregnant patient phantom was 0.13 mGy/100 mAs and 0.57 mGy/100 mAs from the chest and kidney scans, respectively. For the chest scans of the 6-month and 9-month patient phantoms, the fetal doses were 0.21 mGy/100 mAs and 0.26 mGy/100 mAs, respectively. For MDCT with TCM schemas, the fetal dose can be reduced by 14%-25%. To demonstrate the applicability of the method proposed in this dissertation for modeling CT scanners, an additional MDCT scanner was modeled and validated using the measured CTDI values. These results demonstrated that the

  12. Restaurant Energy Use Benchmarking Guideline

    Energy Technology Data Exchange (ETDEWEB)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
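
    A minimal sketch of the kind of metric the guideline describes: normalize each store's annual consumption by floor area and flag outliers against the portfolio. All store names and figures below are invented.

    stores = {
        "store_a": {"kwh": 410_000, "sqft": 2_400},
        "store_b": {"kwh": 520_000, "sqft": 2_500},
        "store_c": {"kwh": 300_000, "sqft": 2_200},
    }

    eui = {name: s["kwh"] / s["sqft"] for name, s in stores.items()}  # kWh/sqft-yr
    median_eui = sorted(eui.values())[len(eui) // 2]

    for name, value in eui.items():
        flag = "REVIEW" if value > 1.2 * median_eui else "ok"
        print(f"{name}: EUI {value:.0f} kWh/sqft-yr ({flag})")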

  13. Revisiting the TORT Solutions to the NEA Suite of Benchmarks for 3D Transport Methods and Codes Over a Range in Parameter Space

    Energy Technology Data Exchange (ETDEWEB)

    Bekar, Kursat B [ORNL; Azmy, Yousry [North Carolina State University

    2009-01-01

    Improved TORT solutions to the 3D transport codes' suite of benchmarks exercise are presented in this study. Preliminary TORT solutions to this benchmark indicate that the majority of benchmark quantities for most benchmark cases are computed with good accuracy, and that accuracy improves with model refinement. However, TORT fails to compute accurate results for some benchmark cases with aspect ratios drastically different from 1, possibly due to ray effects. In this work, we employ the standard approach of splitting the solution to the transport equation into an uncollided flux and a fully collided flux via the code sequence GRTUNCL3D and TORT to mitigate ray effects. The results of this code sequence presented in this paper show that the accuracy of most benchmark cases improved substantially. Furthermore, the iterative convergence problems reported for the preliminary TORT solutions have been resolved by bringing the computational cells' aspect ratio closer to unity and, more importantly, by using 64-bit arithmetic precision in the calculation sequence. Results of this study are also reported.

  14. Using the Monte Carlo method for assessing the tissue and organ doses of patients in dental radiography

    Science.gov (United States)

    Makarevich, K. O.; Minenko, V. F.; Verenich, K. A.; Kuten, S. A.

    2016-05-01

    This work is dedicated to modeling dental radiographic examinations to assess patients' absorbed organ doses and effective doses. For simulating X-ray spectra, the TASMIP empirical model is used. Doses are assessed with the Monte Carlo method, using the MCNP code and the ICRP voxel phantoms. The results of the assessment of doses to individual organs and of effective doses for different types of dental examinations and X-ray tube characteristics are presented.
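
    Once per-organ equivalent doses are available, the effective dose is their tissue-weighted sum. A minimal sketch, using a small subset of the ICRP 103 tissue weighting factors and invented organ doses:

    # Subset of ICRP 103 tissue weighting factors; the full sum over all
    # tissues equals 1. Organ doses are hypothetical dental-exam values.
    tissue_weights = {"thyroid": 0.04, "brain": 0.01, "salivary_glands": 0.01,
                      "remainder": 0.12}
    organ_dose_msv = {"thyroid": 0.030, "brain": 0.010, "salivary_glands": 0.045,
                      "remainder": 0.008}

    effective_dose = sum(tissue_weights[t] * organ_dose_msv[t] for t in tissue_weights)
    print(f"Partial effective dose: {effective_dose * 1000:.2f} microSv")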

  15. A benchmark study of the two-dimensional Hubbard model with auxiliary-field quantum Monte Carlo method

    CERN Document Server

    Qin, Mingpu; Zhang, Shiwei

    2016-01-01

    Ground state properties of the Hubbard model on a two-dimensional square lattice are studied by the auxiliary-field quantum Monte Carlo method. Accurate results for energy, double occupancy, effective hopping, magnetization, and momentum distribution are calculated for interaction strengths of U/t from 2 to 8, for a range of densities including half-filling and n = 0.3, 0.5, 0.6, 0.75, and 0.875. At half-filling, the results are numerically exact. Away from half-filling, the constrained path Monte Carlo method is employed to control the sign problem. Our results are obtained with several advances in the computational algorithm, which are described in detail. We discuss the advantages of generalized Hartree-Fock trial wave functions and its connection to pairing wave functions, as well as the interplay with different forms of Hubbard-Stratonovich decompositions. We study the use of different twist angle sets when applying the twist averaged boundary conditions. We propose the use of quasi-random sequences, whi...

  16. Benchmarking Quantum Mechanics/Molecular Mechanics (QM/MM) Methods on the Thymidylate Synthase-Catalyzed Hydride Transfer.

    Science.gov (United States)

    Świderek, Katarzyna; Arafet, Kemel; Kohen, Amnon; Moliner, Vicent

    2017-03-14

    Given the ubiquity of hydride-transfer reactions in enzyme-catalyzed processes, identifying the appropriate computational method for evaluating such biological reactions is crucial to perform theoretical studies of these processes. In this paper, the hydride-transfer step catalyzed by thymidylate synthase (TSase) is studied by examining hybrid quantum mechanics/molecular mechanics (QM/MM) potentials via multiple semiempirical methods and the M06-2X hybrid density functional. Calculations of protium and tritium transfer in these reactions across a range of temperatures allowed calculation of the temperature dependence of kinetic isotope effects (KIE). Dynamics and quantum-tunneling effects are revealed to have little effect on the reaction rate, but are significant in determining the KIEs and their temperature dependence. A good agreement with experiments is found, especially when computed for RM1/MM simulations. The small temperature dependence of quantum tunneling corrections and the quasiclassical contribution term cancel each other, while the recrossing transmission coefficient seems to be temperature-independent over the interval of 5-40 °C.

  17. Environmental dose rate assessment of ITER using the Monte Carlo method

    Directory of Open Access Journals (Sweden)

    Karimian Alireza

    2014-01-01

    Full Text Available Exposure to radiation is one of the main sources of risk to staff employed in reactor facilities. The staff of a tokamak is exposed to a wide range of neutrons and photons around the tokamak hall. The International Thermonuclear Experimental Reactor (ITER) is a nuclear fusion engineering project and the most advanced experimental tokamak in the world. From the radiobiological point of view, ITER dose rate assessment is particularly important. The aim of this study is the assessment of the amount of radiation in ITER during its normal operation, in a radial direction from the plasma chamber to the tokamak hall. To achieve this goal, the ITER system and its components were simulated by the Monte Carlo method using the MCNPX 2.6.0 code. Furthermore, the equivalent dose rates of some radiosensitive organs of the human body were calculated by using the Medical Internal Radiation Dose (MIRD) phantom. Our study is based on deuterium-tritium plasma burning with 14.1 MeV neutron production and also photon radiation due to neutron activation. As our results show, the total equivalent dose rate outside the bioshield wall of the tokamak hall is about 1 mSv per year, which is less than the annual occupational dose limit during the normal operation of ITER. Also, the equivalent dose rates of radiosensitive organs show that the maximum dose rate belongs to the kidney. The data may help calculate how long the staff can stay in such an environment before the equivalent doses reach the whole-body dose limits.

  18. Radioactivity in food and the environment: calculations of UK radiation doses using integrated assessment methods

    Energy Technology Data Exchange (ETDEWEB)

    Camplin, W C; Brownless, G P; Round, G D; Winpenny, K; Hunt, G J [Centre for Environment, Fisheries and Aquaculture Science, CEFAS Laboratory, Lowestoft (United Kingdom)

    2002-12-01

    A new method for estimating radiation doses to UK critical groups is proposed for discussion. Amongst others, the Food Standards Agency (FSA) and the Scottish Environment Protection Agency (SEPA) undertake surveillance of UK food and the environment as a check on the effect of discharges of radioactive wastes. Discharges in gaseous and liquid form are made under authorisation by the Environment Agency and SEPA under powers in the Radioactive Substances Act. Results of surveillance by the FSA and SEPA are published in the Radioactivity in Food and the Environment (RIFE) report series. In these reports, doses to critical groups are normally estimated separately for gaseous and liquid discharge pathways. Simple summation of these doses would tend to overestimate the doses actually received. Three different methods of combining the effects of both types of discharge in an integrated assessment are considered and ranked according to their ease of application, transparency, scientific rigour and presentational issues. A single integrated assessment method is then chosen for further study. Doses are calculated from surveillance data for the calendar year 2000 and compared with those from the existing RIFE method.

  19. Fluoxetine Dose and Administration Method Differentially Affect Hippocampal Plasticity in Adult Female Rats

    Directory of Open Access Journals (Sweden)

    Jodi L. Pawluski

    2014-01-01

    Full Text Available Selective serotonin reuptake inhibitor (SSRI) medications are one of the most common treatments for mood disorders. In humans, these medications are taken orally, usually once per day. Unfortunately, administration of antidepressant medications in rodent models is often through injection, oral gavage, or minipump implant, all relatively stressful procedures. The aim of the present study was to investigate how administration of the commonly used SSRI, fluoxetine, via a wafer cookie compares to fluoxetine administration using an osmotic minipump, with regard to serum drug levels and hippocampal plasticity. For this experiment, adult female Sprague-Dawley rats were divided over the two administration methods, (1) cookie and (2) osmotic minipump, and three fluoxetine treatment doses: 0, 5, or 10 mg/kg/day. Results show that a fluoxetine dose of 5 mg/kg/day, but not 10 mg/kg/day, results in comparable serum levels of fluoxetine and its active metabolite norfluoxetine between the two administration methods. Furthermore, minipump administration of fluoxetine resulted in higher levels of cell proliferation in the granule cell layer (GCL) at the 5 mg/kg/day dose compared to the 10 mg/kg/day dose. Synaptophysin expression in the GCL, but not CA3, was significantly lower after fluoxetine treatment, regardless of administration method. These data suggest that the administration method and dose of fluoxetine can differentially affect hippocampal plasticity in the adult female rat.

  20. A Blind Test Experiment in Volcano Geodesy: a Benchmark for Inverse Methods of Ground Deformation and Gravity Data

    Science.gov (United States)

    D'Auria, Luca; Fernandez, Jose; Puglisi, Giuseppe; Rivalta, Eleonora; Camacho, Antonio; Nikkhoo, Mehdi; Walter, Thomas

    2016-04-01

    The inversion of ground deformation and gravity data is affected by an intrinsic ambiguity because of the mathematical formulation of the inverse problem. Current methods for the inversion of geodetic data rely on both parametric (i.e. assuming a source geometry) and non-parametric approaches. The former are able to catch the fundamental features of the ground deformation source but, if the assumptions are wrong or oversimplified, they can provide misleading results. On the other hand, the latter class of methods, even if not relying on stringent assumptions, can suffer from artifacts, especially when dealing with poor datasets. In the framework of the EC-FP7 MED-SUV project we aim at comparing different inverse approaches to verify how they cope with basic goals of volcano geodesy: determining the source depth, the source shape (size and geometry), the nature of the source (magmatic/hydrothermal) and hinting at the complexity of the source. Other aspects that are important in volcano monitoring are: volume/mass transfer toward shallow depths, propagation of dikes/sills, and forecasting the opening of eruptive vents. On the basis of similar experiments already done in the fields of seismic tomography and geophysical imaging, we have devised a blind test experiment. Our group was divided into one model design team and several inversion teams. The model design team devised two physical models representing volcanic events at two distinct volcanoes (one stratovolcano and one caldera). They provided the inversion teams with: the topographic reliefs, the calculated deformation field (on a set of simulated GPS stations and as InSAR interferograms) and the gravity change (on a set of simulated campaign stations). The nature of the volcanic events remained unknown to the inversion teams until after the submission of the inversion results. Here we present the preliminary results of this comparison in order to determine which features of the ground deformation and gravity source

  1. Research on Benchmarking Evaluation Method for Green Construction Based on Individual Advantage Identification

    Institute of Scientific and Technical Information of China (English)

    邵必林; 臧朋; 赵欢欢

    2012-01-01

    Green construction in China is still at an early development stage, which requires benchmarking projects as references for construction enterprises. In this paper, using the method of individual advantage identification and starting from the individual perspective, evaluation methods for individual benchmarking and group benchmarking are presented based on the advantages and disadvantages of green construction projects, in an attempt to provide an innovative and more objective method of benchmark evaluation for green construction in China.

  2. Benchmarking File System Benchmarking: It *IS* Rocket Science

    OpenAIRE

    Seltzer, Margo I.; Tarasov, Vasily; Bhanage, Saumitra; Zadok, Erez

    2011-01-01

    The quality of file system benchmarking has not improved in over a decade of intense research spanning hundreds of publications. Researchers repeatedly use a wide range of poorly designed benchmarks, and in most cases, develop their own ad-hoc benchmarks. Our community lacks a definition of what we want to benchmark in a file system. We propose several dimensions of file system benchmarking and review the wide range of tools and techniques in widespread use. We experimentally show that even t...

  3. Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Tao, Yinghua [Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States); Chen, Guang-Hong [Department of Medical Physics and Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States); Hacker, Timothy A.; Raval, Amish N. [Department of Medicine, University of Wisconsin-Madison, Madison, Wisconsin 53792 (United States); Van Lysel, Michael S.; Speidel, Michael A., E-mail: speidel@wisc.edu [Department of Medical Physics and Department of Medicine, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States)

    2014-07-15

    Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, radiation dose can be potentially high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high (500 mA) and low (25 mA) dose dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to data sets acquired at low dose. Image voxel noise was matched between the low dose SIR and high dose FBP reconstructions. CT perfusion maps were compared among the low dose FBP, low dose SIR and high dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: For in vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937

  4. Simulation of Sound Waves Using the Lattice Boltzmann Method for Fluid Flow: Benchmark Cases for Outdoor Sound Propagation.

    Directory of Open Access Journals (Sweden)

    Erik M Salomons

    Full Text Available Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: (i) reduction of the kinematic viscosity and (ii) reduction of the lattice spacing.

  5. Simulation of Sound Waves Using the Lattice Boltzmann Method for Fluid Flow: Benchmark Cases for Outdoor Sound Propagation.

    Science.gov (United States)

    Salomons, Erik M; Lohman, Walter J A; Zhou, Han

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing.
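
    The excess sound level proposed in the records above is a simple difference of levels, which is why it cancels most of the LBM's excess numerical dissipation. A minimal sketch with invented receiver pressures:

    import math

    def spl_db(p_rms, p_ref=2e-5):
        """Sound pressure level in dB re 20 micropascal."""
        return 20.0 * math.log10(p_rms / p_ref)

    p_lbm = 0.012         # simulated RMS pressure at the receiver (Pa, made up)
    p_free_field = 0.020  # simulated free-field RMS pressure at the same range (Pa)

    excess_level = spl_db(p_lbm) - spl_db(p_free_field)
    print(f"Excess sound level: {excess_level:.1f} dB")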

  6. Application of combined TLD and CR-39 PNTD method for measurement of total dose and dose equivalent on ISS

    Energy Technology Data Exchange (ETDEWEB)

    Benton, E.R. [Eril Research, Inc., Stillwater, Oklahoma (United States); Deme, S.; Apathy, I. [KFKI Atomic Energy Research Institute, Budapest (Hungary)

    2006-07-01

    To date, no single passive detector has been found that measures dose equivalent from ionizing radiation exposure in low-Earth orbit. We have developed the I.S.S. Passive Dosimetry System (P.D.S.), utilizing a combination of TLD in the form of the self-contained Pille TLD system and stacks of CR-39 plastic nuclear track detector (P.N.T.D.) oriented in three mutually orthogonal directions, to measure total dose and dose equivalent aboard the International Space Station (I.S.S.). The Pille TLD system, consisting of an onboard reader and a large number of CaSO₄:Dy TLD cells, is used to measure absorbed dose. The Pille TLD cells are read out and annealed by the I.S.S. crew on orbit, such that dose information for any time period or condition, e.g. for E.V.A. or following a solar particle event, is immediately available. Near-tissue-equivalent CR-39 P.N.T.D. provides the LET spectrum, dose, and dose equivalent from charged particles of LET∞,H₂O ≥ 10 keV/μm, including the secondaries produced in interactions with high-energy neutrons. Dose information from CR-39 P.N.T.D. is used to correct the absorbed dose component ≥ 10 keV/μm measured in TLD to obtain total dose. Dose equivalent from CR-39 P.N.T.D. is combined with the dose component < 10 keV/μm measured in TLD to obtain total dose equivalent. Dose rates ranging from 165 to 250 μGy/day and dose equivalent rates ranging from 340 to 450 μSv/day were measured aboard I.S.S. during the Expedition 2 mission in 2001. Results from the P.D.S. are consistent with those from other passive detectors tested as part of the ground-based I.C.C.H.I.B.A.N. intercomparison of space radiation dosimeters. (authors)

  7. Benchmark experiment to verify radiation transport calculations for dosimetry in radiation therapy; Benchmark-Experiment zur Verifikation von Strahlungstransportrechnungen fuer die Dosimetrie in der Strahlentherapie

    Energy Technology Data Exchange (ETDEWEB)

    Renner, Franziska [Physikalisch-Technische Bundesanstalt (PTB), Braunschweig (Germany)

    2016-11-01

    Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations - especially under complex circumstances (e.g. consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to have parallels to external beam radiation therapy with respect to the type and energy of the radiation, the materials used and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7 % for the experimental values and about 1.0 % for the Monte Carlo simulation, the results of the simulation and the experiment coincide.

  8. Determination of gelation dose of poly(vinyl acetate) by a spectrophotometric method

    Energy Technology Data Exchange (ETDEWEB)

    Guven, Olgun; Yigit, Fatma

    1986-01-01

    The gelation point is an important property of polymers undergoing crosslinking when subjected to high energy radiation. This point is generally determined by viscometric and solubility methods or by mechanical measurements. When crosslinking and discoloration take place simultaneously, gelation doses can be determined spectrophotometrically. In this work it is demonstrated that the gelation dose of poly(vinyl acetate) can be determined by simply recording the UV-vis spectra of solutions of the γ-irradiated polymer. The reliability of the method is verified by viscometric and solubility measurements.

  9. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    Science.gov (United States)

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting for or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restriction. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when considering such a benchmark. The GCC center for infection control has taken some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons.
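
    Indirect standardization, one of the risk-adjustment methods listed above, reduces to comparing observed infections with the count expected under benchmark rates. A minimal sketch with invented stratum data:

    strata = [
        # (device-days in the local unit, benchmark rate per 1000 device-days)
        (1200, 2.1),
        (800, 4.5),
        (400, 7.0),
    ]
    observed_infections = 9

    expected = sum(days * rate / 1000 for days, rate in strata)
    sir = observed_infections / expected
    print(f"Expected {expected:.1f} infections, SIR = {sir:.2f}")
    # SIR > 1 suggests more infections than the benchmark predicts.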

  10. Comparison of Vocal Vibration-Dose Measures for Potential-Damage Risk Criteria

    Science.gov (United States)

    Titze, Ingo R.; Hunter, Eric J.

    2015-01-01

    Purpose: School-teachers have become a benchmark population for the study of occupational voice use. A decade of vibration-dose studies on the teacher population allows a comparison to be made between specific dose measures for eventual assessment of damage risk. Method: Vibration dosimetry is reformulated with the inclusion of collision stress.…

  11. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  12. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Full Text Available Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how the project's systematic implementation led to success.

  13. Benchmarking Passive Seismic Methods of Imaging Surface Wave Velocity Interfaces Down to 300 m — Mapping Murray Basin Thickness in Southeastern Australia

    Science.gov (United States)

    Gorbatov, A.; Czarnota, K.

    2015-12-01

    In shallow passive seismology it is generally thought that the spatial autocorrelation (SPAC) method is more robust than the horizontal-to-vertical spectral ratio (HVSR) method at resolving the depth to surface-wave velocity (Vs) interfaces. Here we present results of a field test of these two methods over ten drill sites in Victoria, Australia. The target interface is the base of Cenozoic unconsolidated to semi-consolidated clastic and/or carbonate sediments of the Murray Basin, which overlie Paleozoic crystalline rocks. Drilled depths of this interface are between 27 and 300 m. A three-arm spiral array, with a radius of 250 m, consisting of 13 Trillium compact broadband seismometers was deployed at each site for 7-21 hours. The Vs architecture beneath each site was determined through nonlinear inversion of HVSR and SPAC data using the neighborhood algorithm of Sambridge (1999) implemented in geopsy by Wathelet et al. (2005). The HVSR technique yielded depth estimates of the target interface (Vs > 1000 m/s) generally within 20% error. Successful estimates were even obtained at a site with an inverted velocity profile, where Quaternary basalts overlie Neogene sediments. Half of the SPAC estimates showed significantly higher errors than those obtained using HVSR. Joint inversion provided the most reliable estimates but was unstable at three sites. We attribute the surprising success of HVSR over SPAC to a low content of transient signals within the seismic record, caused by low degrees of anthropogenic noise at the benchmark sites. At a few sites SPAC curves showed clear overtones, suggesting that more reliable SPAC estimates may be obtained utilizing a multi-modal inversion. Nevertheless, our study seems to indicate that reliable basin thickness estimates in remote Australia can be obtained utilizing HVSR data from a single seismometer, without a priori knowledge of the surface-wave velocity of the basin material, thereby negating the need to deploy cumbersome arrays.
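
    The HVSR quantity at the heart of the study is the ratio of the averaged horizontal amplitude spectra to the vertical one; its peak frequency is what constrains interface depth. A minimal sketch on synthetic three-component noise (real processing would window, smooth and stack many segments):

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 100.0                               # sampling rate, Hz
    ns, ew, v = rng.normal(size=(3, 4096))   # stand-ins for recorded components

    def amp_spectrum(x):
        """Amplitude spectrum of a tapered record."""
        return np.abs(np.fft.rfft(x * np.hanning(len(x))))

    h = np.sqrt(amp_spectrum(ns) * amp_spectrum(ew))  # geometric-mean horizontal
    hvsr = h / amp_spectrum(v)
    freqs = np.fft.rfftfreq(4096, d=1 / fs)
    print(f"HVSR peak at {freqs[np.argmax(hvsr[1:]) + 1]:.2f} Hz")  # skip DC bin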

  14. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes... We attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes.

  15. PNNL Information Technology Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  16. Determination of the delivered hemodialysis dose using standard methods and on-line clearance monitoring

    Directory of Open Access Journals (Sweden)

    Vlatković Vlastimir

    2006-01-01

    Full Text Available Background/aim: The delivered dialysis dose has a cumulative effect and a significant influence upon the adequacy of dialysis, quality of life and the development of co-morbidity in patients on dialysis. Thus, great attention is given to the optimization of dialysis treatment. On-line clearance monitoring (OCM) allows a precise and continuous measurement of the delivered dialysis dose. The Kt/V index (K = dialyzer clearance of urea; t = dialysis time; V = patient's total body water), measured in real time, is used as a unit for expressing the dialysis dose. The aim of this research was to perform a comparative assessment of the delivered dialysis dose by the application of the standard measurement methods and a module for continuous clearance monitoring. Methods. The study encompassed 105 patients who had been on the chronic hemodialysis program for more than three months, three times a week. One treatment per controlled patient was selected at random. All the treatments were bicarbonate dialyses. The delivered dialysis dose was determined by the calculation of mathematical models, the urea reduction ratio (URR) and the single-pool index Kt/V (spKt/V), and by the application of OCM. Results. The urea reduction ratio was the most sensitive parameter for the assessment and, at the same time, it was in the strongest correlation with the other two, the spKt/V index and OCM. The values pointed to an adequate dialysis dose. The URR values were significantly higher in women than in men, p < 0.05. The other applied model for the delivered dialysis dose measurement was the Kt/V index. The obtained values showed that the dialysis dose was adequate, and that, according to this parameter, the women had significantly better dialysis than the men, p < 0.05. According to OCM, the average value was slightly lower than the adequate one. The women had a satisfactory dialysis according to this index as well, while the delivered dialysis dose was insufficient in men. The difference
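
    The two standard measures named above are easy to compute from pre- and post-dialysis urea. A minimal sketch: URR from its definition, and spKt/V via the second-generation Daugirdas formula, which is one common estimator (the study may have used another):

    import math

    def urr(pre_urea, post_urea):
        """Urea reduction ratio, in percent."""
        return 100.0 * (pre_urea - post_urea) / pre_urea

    def sp_kt_v(pre_urea, post_urea, hours, uf_litres, weight_kg):
        """Single-pool Kt/V (Daugirdas second-generation formula)."""
        r = post_urea / pre_urea
        return -math.log(r - 0.008 * hours) + (4 - 3.5 * r) * uf_litres / weight_kg

    print(f"URR  = {urr(25.0, 8.0):.1f}%")                     # target commonly >= 65%
    print(f"Kt/V = {sp_kt_v(25.0, 8.0, 4.0, 2.5, 70.0):.2f}")  # target commonly >= 1.2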

  17. A new method for optimum dose distribution determination taking tumour mobility into account

    Science.gov (United States)

    Stavrev, P. V.; Stavreva, N. A.; Round, W. H.

    1996-09-01

    A method for determining the optimum dose distribution in the planning target volume is proposed when target volumes are deliberately enlarged to account for tumour mobility in external beam radiotherapy. The optimum dose distribution is a dose distribution that will result in an acceptable level of tumour control probability (TCP) in most of the arising cases of tumour dislocation. An assumption is made that the possible shifts of the tumour are subject to a Gaussian distribution with mean zero and known variance. The idea of a reduced (mean in ensemble) tumour cell density is introduced. On this basis, the target volume and dose distribution in it are determined. The tumour control probability as a function of the shift of the tumour has been calculated. The Monte Carlo method has been used to simulate TCP distributions corresponding to tumour mobility characterized by different variances. The obtained TCP distributions are independent of the variance of the mobility because the dose distribution in the planning target volume is prescribed so that the mobility variance is taken into account. For simplicity a one-dimensional model is used but three-dimensional generalization can be done.
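
    The Monte Carlo part of the method can be sketched directly: sample Gaussian tumour shifts and evaluate a TCP model against a one-dimensional dose profile. The LQ-Poisson TCP model and every parameter below are illustrative stand-ins, not the authors' values.

    import numpy as np

    rng = np.random.default_rng(42)
    x = np.linspace(-5.0, 5.0, 201)                  # cm, along one axis
    dose = np.where(np.abs(x) < 3.0, 60.0, 0.0)      # flat 60 Gy over enlarged target

    tumour_half_width = 1.0     # cm
    alpha, n_cells = 0.35, 1e7  # radiosensitivity (1/Gy) and clonogen number

    def tcp(shift):
        """Poisson TCP for a uniform tumour displaced by `shift`."""
        inside = np.abs(x - shift) < tumour_half_width
        surviving = n_cells * np.mean(np.exp(-alpha * dose[inside]))
        return float(np.exp(-surviving))

    shifts = rng.normal(0.0, 0.5, size=1000)  # sigma = 0.5 cm tumour mobility
    tcps = np.array([tcp(s) for s in shifts])
    print(f"Mean TCP over sampled shifts: {tcps.mean():.3f}")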

  18. Dose calculation method with 60-cobalt gamma rays in total body irradiation

    CERN Document Server

    Scaff, L A M

    2001-01-01

    Physical factors associated with total body irradiation using ⁶⁰Co gamma-ray beams were studied in order to develop a calculation method for the dose distribution that could be reproduced in any radiotherapy center with good precision. The method is based on considering total body irradiation as a large and irregular field with heterogeneities. To calculate doses, or dose rates, for each area of interest (head, thorax, thigh, etc.), scattered radiation is determined. It was observed that if demagnified fields were considered to calculate the scattered radiation, the resulting values could be applied on a projection to the real size to obtain the values for dose rate calculations. In parallel work, the variation of the dose rate in air was determined for the treatment distance and for points off the central axis. This confirmed that the use of the inverse square law is not valid. An attenuation curve for a broad beam was also determined in order to allow the use of absorbers. In this wo...

  19. Determination of gelation doses of gamma-irradiated hydrophilic polymers by different methods

    Science.gov (United States)

    Yiǧit, Fatma; Tekin, Niket; Erkan, Sevin; Güven, Olgun

    1994-04-01

    Poly(acrylic acid) and poly(vinyl pyrrolidone) are hydrophilic polymers. Poly(acrylic acid) is a polyelectrolyte which ionizes in water to produce an electrically conducting medium. Therefore, the gelation dose of poly(acrylic acid) can be determined by conductometric titration, simple titration and the measurement of pH. The conventional techniques for determining the gelation dose are very time- and material-consuming, especially for poly(acrylic acid), and subject to serious errors due to its electrolytic behavior. In this study, it has been shown that the gelation dose of poly(acrylic acid) can be determined by conductometric and titrimetric methods with NaOH and by measuring the pH of aqueous solutions of the γ-irradiated polymer. In order to develop new, simpler and more rapid methods for the determination of the gelation dose of PVP, its complexation with gallic acid in dilute aqueous solution has been used. The complex formation between gallic acid and irradiated PVP in aqueous solutions is followed by UV-vis spectroscopy. The reliability of the dose values found, 120 kGy for poly(acrylic acid) and 140 kGy for poly(vinyl pyrrolidone), is also verified by viscometric and solubility measurements.

  20. A New System For Recording The Radiological Effective Doses For Patients Investigated by Imaging Methods

    CERN Document Server

    Stanciu, Silviu

    2014-01-01

    In this paper, the design of an integrated system for the radiation safety and security of patients investigated by radiological imaging methods is presented. The new system is based on smart cards and a Public Key Infrastructure; it allows storage of effective radiation dose data and more accurate reporting.

  1. A continuous OSL scanning method for analysis of radiation depth-dose profiles in bricks

    DEFF Research Database (Denmark)

    Bøtter-Jensen, L.; Jungner, H.; Poolton, N.R.J.

    1995-01-01

    This article describes the development of a method for directly measuring radiation depth-dose profiles from brick, tile and porcelain cores, without the need for sample separation techniques. For the brick cores, examples are shown of the profiles generated by artificial irradiation using...

  2. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  3. Respiratory triggered 4D cone-beam computed tomography: A novel method to reduce imaging dose

    Science.gov (United States)

    Cooper, Benjamin J.; O’Brien, Ricky T.; Balik, Salim; Hugo, Geoffrey D.; Keall, Paul J.

    2013-01-01

    Purpose: A novel method called respiratory triggered 4D cone-beam computed tomography (RT 4D CBCT) is described whereby imaging dose can be reduced without degrading image quality. RT 4D CBCT utilizes a respiratory signal to trigger projections such that only a single projection is assigned to a given respiratory bin for each breathing cycle. In contrast, commercial 4D CBCT does not actively use the respiratory signal to minimize imaging dose. Methods: To compare RT 4D CBCT with conventional 4D CBCT, 3600 CBCT projections of a thorax phantom were gathered and reconstructed to generate a ground truth CBCT dataset. Simulation pairs of conventional 4D CBCT acquisitions and RT 4D CBCT acquisitions were developed assuming a sinusoidal respiratory signal which governs the selection of projections from the pool of 3600 original projections. The RT 4D CBCT acquisition triggers a single projection when the respiratory signal enters a desired acquisition bin; the conventional acquisition does not use a respiratory trigger and projections are acquired at a constant frequency. Acquisition parameters studied were breathing period, acquisition time, and imager frequency. The performance of RT 4D CBCT using phase-based and displacement-based sorting was also studied. Image quality was quantified by calculating difference images of the test dataset from the ground truth dataset. Imaging dose was calculated by counting projections. Results: Using phase-based sorting, RT 4D CBCT results in 47% less imaging dose on average compared to conventional 4D CBCT, with image quality differences of less than 4% at worst. Using displacement-based sorting, RT 4D CBCT results in 57% less imaging dose on average than conventional 4D CBCT methods; however, image quality was 26% worse with RT 4D CBCT. Conclusions: Simulation studies have shown that RT 4D CBCT reduces imaging dose while maintaining comparable image quality for phase based 4D CBCT; image quality is degraded for displacement based RT 4D
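
    As a back-of-the-envelope check on the projection arithmetic described above, the sketch below counts projections for the two acquisition modes under a sinusoidal respiratory signal with phase-based binning. The breathing period, scan time, imager frequency and bin count are assumed values, not the study's parameters.

    ```python
    import numpy as np

    period = 4.0      # s, breathing period (assumed)
    acq_time = 240.0  # s, total acquisition time (assumed)
    imager_hz = 5.5   # projections per second in conventional mode (assumed)
    n_bins = 10       # respiratory phase bins (assumed)

    # Conventional 4D CBCT: constant-frequency acquisition, every projection kept.
    conv_count = len(np.arange(0.0, acq_time, 1.0 / imager_hz))

    # RT 4D CBCT: one projection triggered per phase bin per breathing cycle.
    rt_count = int(acq_time / period) * n_bins

    print(f"conventional: {conv_count} projections")
    print(f"RT 4D CBCT:   {rt_count} projections "
          f"({100 * (1 - rt_count / conv_count):.0f}% fewer)")
    ```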

  4. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  5. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    … compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless not public. The survey is a cooperative project "Benchmarking Danish Industries" with CIP/Aalborg University, the Danish Technological University, the Danish Technological Institute and Copenhagen Business School as consortia partners. The project has been funded by the Danish Agency for Trade and Industry …

  6. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    … survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions … the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible …

  7. TH-A-19A-03: Impact of Proton Dose Calculation Method On Delivered Dose to Lung Tumors: Experiments in Thorax Phantom and Planning Study in Patient Cohort

    Energy Technology Data Exchange (ETDEWEB)

    Grassberger, C; Daartz, J; Dowdell, S; Ruggieri, T; Sharp, G; Paganetti, H [Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States)

    2014-06-15

    Purpose: Evaluate Monte Carlo (MC) dose calculation and the prediction of the treatment planning system (TPS) in a lung phantom and compare them in a cohort of 20 lung patients treated with protons. Methods: A 2-dimensional array of ionization chambers was used to evaluate the dose across the target in a lung phantom. 20 lung cancer patients on clinical trials were re-simulated using a validated Monte Carlo toolkit (TOPAS) and compared to the TPS. Results: MC increases dose calculation accuracy in lung compared to the clinical TPS significantly and predicts the dose to the target in the phantom within ±2%: the average difference between measured and predicted dose in a plane through the center of the target is 5.6% for the TPS and 1.6% for MC. MC recalculations in patients show a mean dose to the clinical target volume on average 3.4% lower than the TPS, exceeding 5% for small fields. The lower dose correlates significantly with aperture size and the distance of the tumor to the chest wall (Spearman's p=0.0002/0.004). For large tumors MC also predicts consistently higher V5 and V10 to the normal lung, due to a wider lateral penumbra, which was also observed experimentally. Critical structures located distal to the target can show large deviations, though this effect is very patient-specific. Conclusion: Advanced dose calculation techniques, such as MC, would improve treatment quality in proton therapy for lung cancer by avoiding systematic overestimation of target dose and underestimation of dose to normal lung. This would increase the accuracy of the relationships between dose and effect, concerning tumor control as well as normal tissue toxicity. As the role of proton therapy in the treatment of lung cancer continues to be evaluated in clinical trials, this is of ever-increasing importance. This work was supported by National Cancer Institute Grant R01CA111590.

  8. The MCNP6 Analytic Criticality Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Codes Group

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
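
    In practice, the verification step described above reduces to comparing each computed k-eff (with its Monte Carlo uncertainty) against the exact analytical value. A minimal sketch of that comparison follows; the problem names only mirror the naming style of the analytic benchmark set, and all numbers are placeholders, not MCNP output.

    ```python
    # (name, analytic k_eff, computed k_eff, 1-sigma MC uncertainty) -- placeholders
    problems = [
        ("PUa-1-0-SL", 1.0000, 1.00031, 0.00025),
        ("Ua-1-0-SL",  1.0000, 0.99987, 0.00020),
    ]

    for name, exact, computed, sigma in problems:
        diff_pcm = 1e5 * (computed - exact)      # deviation in pcm
        n_sigma = abs(computed - exact) / sigma  # deviation in std. deviations
        status = "PASS" if n_sigma < 3.0 else "CHECK"
        print(f"{name}: dk = {diff_pcm:+.0f} pcm ({n_sigma:.1f} sigma) {status}")
    ```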

  9. Absorbed Dose Calculations Using Mesh-based Human Phantoms And Monte Carlo Methods

    Science.gov (United States)

    Kramer, Richard

    2011-08-01

    Health risks attributable to the exposure to ionizing radiation are considered to be a function of the absorbed or equivalent dose to radiosensitive organs and tissues. However, as human tissue cannot express itself in terms of equivalent dose, exposure models have to be used to determine the distribution of equivalent dose throughout the human body. An exposure model, be it physical or computational, consists of a representation of the human body, called phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the equivalent dose to organ and tissues of interest. The FASH2 (Female Adult meSH) and the MASH2 (Male Adult meSH) computational phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools and anatomical atlases. Representing standing adults, FASH2 and MASH2 have organ and tissue masses, body height and body mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which can transport photons, electrons and positrons through arbitrary media. This paper reviews the development of the FASH2 and the MASH2 phantoms and presents dosimetric applications for X-ray diagnosis and for prostate brachytherapy.

  10. Radiation dose determines the method for quantification of DNA double strand breaks

    Energy Technology Data Exchange (ETDEWEB)

    Bulat, Tanja; Keta, Olitija; Korićanac, Lela; Žakula, Jelena; Petrović, Ivan; Ristić-Fira, Aleksandra [University of Belgrade, Vinča Institute of Nuclear Sciences, Belgrade (Serbia); Todorović, Danijela, E-mail: dtodorovic@medf.kg.ac.rs [University of Kragujevac, Faculty of Medical Sciences, Kragujevac (Serbia)

    2016-03-15

    Ionizing radiation induces DNA double strand breaks (DSBs) that trigger phosphorylation of the histone protein H2AX (γH2AX). Immunofluorescent staining visualizes the formation of γH2AX foci, allowing their quantification. This method, as opposed to the Western blot assay and flow cytometry, provides more accurate analysis by showing the exact position and intensity of the fluorescent signal in each single cell. In practice there are problems in the quantification of γH2AX. This paper addresses two issues: which technique should be applied for a given radiation dose, and how to analyze fluorescent microscopy images obtained with different microscopes. HTB140 melanoma cells were exposed to γ-rays in the dose range from 1 to 16 Gy. Radiation effects at the DNA level were analyzed at different time intervals after irradiation by Western blot analysis and immunofluorescence microscopy. Immunochemically stained cells were visualized with two types of microscopes: an AxioVision (Zeiss, Germany) microscope comprising ApoTome software, and an AxioImagerA1 microscope (Zeiss, Germany). The results show that the level of γH2AX is time and dose dependent. Immunofluorescence microscopy provided better detection of DSBs for lower irradiation doses, while Western blot analysis was more reliable for higher irradiation doses. The AxioVision microscope containing the ApoTome software was more suitable for the detection of γH2AX foci. (author)
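
    Foci quantification of the kind discussed above is, at its core, a thresholding and connected-component counting problem. The sketch below is a generic illustration of that step, not the authors' pipeline; the threshold, minimum focus size and the synthetic test image are all invented.

    ```python
    import numpy as np
    from scipy import ndimage

    def count_foci(image, threshold, min_pixels=4):
        """Count bright spots above `threshold` covering >= `min_pixels` pixels."""
        mask = image > threshold
        labels, n = ndimage.label(mask)                  # connected components
        sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
        return int(np.sum(sizes >= min_pixels))

    # Synthetic test image: noisy background plus three bright 3x3 foci.
    rng = np.random.default_rng(1)
    img = rng.normal(10.0, 2.0, size=(64, 64))
    for y, x in [(10, 12), (30, 40), (50, 20)]:
        img[y:y + 3, x:x + 3] += 50.0

    print(count_foci(img, threshold=30.0))   # -> 3
    ```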

  11. Radiation dose determines the method for quantification of DNA double strand breaks

    Directory of Open Access Journals (Sweden)

    TANJA BULAT

    2016-03-01

    Ionizing radiation induces DNA double strand breaks (DSBs) that trigger phosphorylation of the histone protein H2AX (γH2AX). Immunofluorescent staining visualizes the formation of γH2AX foci, allowing their quantification. This method, as opposed to the Western blot assay and flow cytometry, provides more accurate analysis by showing the exact position and intensity of the fluorescent signal in each single cell. In practice there are problems in the quantification of γH2AX. This paper addresses two issues: which technique should be applied for a given radiation dose, and how to analyze fluorescent microscopy images obtained with different microscopes. HTB140 melanoma cells were exposed to γ-rays in the dose range from 1 to 16 Gy. Radiation effects at the DNA level were analyzed at different time intervals after irradiation by Western blot analysis and immunofluorescence microscopy. Immunochemically stained cells were visualized with two types of microscopes: an AxioVision (Zeiss, Germany) microscope comprising ApoTome software, and an AxioImagerA1 microscope (Zeiss, Germany). The results show that the level of γH2AX is time and dose dependent. Immunofluorescence microscopy provided better detection of DSBs for lower irradiation doses, while Western blot analysis was more reliable for higher irradiation doses. The AxioVision microscope containing the ApoTome software was more suitable for the detection of γH2AX foci.

  12. Film dosimetry calibration method for pulsed-dose-rate brachytherapy with an 192Ir source.

    Science.gov (United States)

    Schwob, Nathan; Orion, Itzhak

    2007-05-01

    192Ir sources have been widely used in clinical brachytherapy. An important challenge is to perform dosimetric measurements close to the source despite the steep dose gradient. The common, inexpensive silver halide film is a classic two-dimensional integrating dosimeter and would be an attractive solution for these dose measurements. The main disadvantage of film dosimetry is the film response to low-energy photons. Since the photon energy spectrum is known to vary with depth, the sensitometric curves are expected to be depth dependent. The purpose of this study is to suggest a correction method for silver halide film dosimetry that overcomes the response changes at different depths. Sensitometric curves were obtained at different depths with verification film near a 1 Ci 192Ir pulsed-dose-rate source. The depth dependence of the film response was observed and a correction function was established. The suitability of the method was tested through measurement of the radial dose profile and radial dose function, and the results were compared to Monte Carlo-simulated values according to the TG43 formalism. Monte Carlo simulations were performed separately for the beta and gamma source emissions, using the EGS4 code system, including the low-energy photon and electron transport optimization procedures. The beta emission simulation showed that the beta dose contribution could be neglected, and therefore the film-depth dependence could not be attributed to this part of the source radioactivity. The gamma emission simulations included photon-spectra collection at several depths. The results showed a depth-dependent softening of the photon spectrum that can explain the film energy dependence.
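
    For orientation, the TG43 point-source formalism referred to above writes the dose rate as D(r) = S_K * Lambda * (r0/r)^2 * g(r) * phi_an(r), with the radial dose function g(r) interpolated from tabulated values. The sketch below implements that expression with placeholder numbers; they are illustrative only, not consensus 192Ir data.

    ```python
    import numpy as np

    S_K = 4100.0  # U = uGy m^2/h, air-kerma strength of a ~1 Ci source (assumed)
    Lam = 1.11    # cGy/(h*U), dose-rate constant (placeholder)
    r0 = 1.0      # cm, reference distance

    r_tab = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 8.0])        # cm
    g_tab = np.array([0.99, 1.00, 1.00, 0.99, 0.97, 0.92])  # g(r), placeholder

    def dose_rate(r, phi_an=1.0):
        """TG43 point-source dose rate (cGy/h) at radial distance r in cm."""
        g = np.interp(r, r_tab, g_tab)
        return S_K * Lam * (r0 / r) ** 2 * g * phi_an

    for r in (1.0, 2.0, 5.0):
        print(f"r = {r:.0f} cm: {dose_rate(r):8.1f} cGy/h")
    ```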

  13. Water quality benchmarking (WQB) and priority control screening (PCS) of persistent toxic substances (PTSs) in China: necessity, method and a case study.

    Science.gov (United States)

    He, Wei; Qin, Ning; Kong, Xiang-Zhen; Liu, Wen-Xiu; He, Qi-Shuang; Wang, Qing-Mei; Yang, Chen; Jiang, Yu-Jiao; Yang, Bin; Wu, Wen-Jing; Xu, Fu-Liu

    2014-02-15

    The priority control screening (PCS) and water quality benchmarking (WQB) of toxic chemicals in water are key steps in ensuring the safety of drinking water and aquatic ecosystems, which is the crucial goal of water environment management. Owing to the different levels of socio-economic development in different countries and regions, the PCS and WQB of toxic chemicals must be determined in accordance with the specific water environment situation. In China, however, the PCS and WQB of toxic chemicals in water have mainly been adopted from other countries. A method for the PCS and WQB of toxic chemicals in water based on ecological risks was proposed, and a platform named Bayesian Matbugs Calculator (BMC) was developed. As a case study, the WQB and PCS of sixty-nine PTSs based on their ecological risks were performed on the basis of one year of monthly monitoring in Lake Chaohu. The results showed that the current national water quality criteria (WQC) would underestimate the toxicological risk to organisms in this aquatic ecosystem, so it appears necessary to develop new WQC for the protection of aquatic organisms in Lake Chaohu. Four grades of priority control chemicals (PCCs) in Lake Chaohu were proposed. The highest priority was assigned to organonitrogen-phosphorus pesticides, including parathion, dichlorvos, malathion, omethoate, and di-n-butyl phthalate. However, the national "blacklist" of toxic compounds covered only 7 of the 20 PCCs, indicating that the other 13 PCCs would not be controlled efficiently. Because the pollution pattern of PTSs can differ considerably between water bodies, we appeal to the governments to screen regional PCC lists or develop a more comprehensive national list for aquatic ecosystem protection in China.

  14. Application of dose kernel calculation using a simplified Monte Carlo method to treatment plan for scanned proton beams.

    Science.gov (United States)

    Mizutani, Shohei; Takada, Yoshihisa; Kohno, Ryosuke; Hotta, Kenji; Tansho, Ryohei; Akimoto, Tetsuo

    2016-03-01

    Full Monte Carlo (FMC) calculation of dose distributions is recognized to have superior accuracy compared with the pencil beam algorithm (PBA). However, since FMC methods require long calculation times, it is difficult to apply them to routine treatment planning at present. To improve the situation, a simplified Monte Carlo (SMC) method has been introduced into the dose kernel calculation applicable to the dose optimization procedure for proton pencil beam scanning. We have evaluated the accuracy of the SMC calculation by comparing a dose kernel calculated with the SMC method against one calculated with the FMC method in an inhomogeneous phantom; the dose distribution obtained by the SMC method was in good agreement with that obtained by the FMC method. To assess the usefulness of SMC calculation in clinical situations, we compared dose calculations using the SMC with those using the PBA method for three clinical cases of tumor treatment. The dose distributions calculated with the PBA dose kernels appear homogeneous in the planning target volumes (PTVs). In practice, however, dose distributions recalculated with the SMC dose kernels from spot weights optimized with the PBA method are largely inhomogeneous in the PTVs, while those with spot weights optimized with the SMC method are moderately homogeneous. Calculation using the SMC method is faster than calculation using GEANT4 by three orders of magnitude. In addition, a graphics processing unit (GPU) boosts the calculation speed by a factor of 13 for treatment planning using the SMC method. Hence, the SMC method will be applicable to routine clinical treatment planning, reproducing complex dose distributions more accurately than the PBA method in a reasonably short time by use of the GPU-based calculation engine. PACS number(s): 87.55.Gh.
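
    The optimization step referred to above can be pictured as follows: the dose is a weighted sum of per-spot dose kernels, and the spot weights are chosen to match a prescription. The sketch below uses toy Gaussian kernels and non-negative least squares purely for illustration; in the paper the kernels would come from the SMC (or PBA) calculation, and the paper's optimizer is not specified here.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    n_voxels, n_spots = 50, 12
    x = np.arange(n_voxels)
    centers = np.linspace(0, n_voxels - 1, n_spots)

    # Kernel matrix: column j is the (toy, Gaussian) dose kernel of spot j.
    K = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / 3.0) ** 2)

    prescription = np.ones(n_voxels)    # uniform target dose (arbitrary units)
    weights, _ = nnls(K, prescription)  # non-negative spot weights
    dose = K @ weights

    print(f"dose inhomogeneity: {dose.std() / dose.mean():.3%}")
    ```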

  15. Benchmarks: WICHE Region 2012

    Science.gov (United States)

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  16. HPCS HPCchallenge Benchmark Suite

    Science.gov (United States)

    2007-11-02

    Measured HPCchallenge Benchmark performance on various HPC architectures — from Cray X1s to Beowulf clusters — is reported in the presentation and paper, using the updated results at http://icl.cs.utk.edu/hpcc/hpcc_results.cgi Even a small percentage of random

  17. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  18. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  19. Energies and 2'-Hydroxyl Group Orientations of RNA Backbone Conformations. Benchmark CCSD(T)/CBS Database, Electronic Analysis, and Assessment of DFT Methods and MD Simulations.

    Science.gov (United States)

    Mládek, Arnošt; Banáš, Pavel; Jurečka, Petr; Otyepka, Michal; Zgarbová, Marie; Šponer, Jiří

    2014-01-14

    The sugar-phosphate backbone is an electronically complex molecular segment that imparts to RNA molecules the high flexibility and architectonic heterogeneity necessary for their biological functions. The structural variability of RNA molecules is amplified by the presence of the 2'-hydroxyl group, capable of forming a multitude of intra- and intermolecular interactions. Bioinformatics studies based on the X-ray structure database revealed that the RNA backbone samples at least 46 substates known as rotameric families. The present study provides a comprehensive analysis of RNA backbone conformational preferences and 2'-hydroxyl group orientations. First, we create a benchmark database of estimated CCSD(T)/CBS relative energies of all rotameric families and test the performance of dispersion-corrected DFT-D3 methods and molecular mechanics in vacuum and in continuum solvent. The performance of the DFT-D3 methods is in general quite satisfactory. The B-LYP-D3 method provides the best trade-off between accuracy and computational demands. B3-LYP-D3 slightly outperforms the new PW6B95-D3 and MPW1B95-D3 and is the second most accurate density functional of the study. The best agreement with CCSD(T)/CBS is provided by the DSD-B-LYP-D3 double-hybrid functional, although its large-scale applications may be limited by high computational costs. Molecular mechanics does not reproduce the fine energy differences between the RNA backbone substates. We also demonstrate that the differences in the magnitude of the hyperconjugation effect do not correlate with the energy ranking of the backbone conformations. Further, we investigated the 2'-hydroxyl group orientation preferences. For all families, we conducted QM and MM rigid scans of the hydroxyl group in the gas phase and in solvent. We then carried out a set of explicit-solvent MD simulations of folded RNAs and analyzed the 2'-hydroxyl group orientations of the different backbone families in MD. The solvent energy profiles, determined primarily by the sugar pucker, match well with the

  20. Paleodose evaluation of porcelain: a practical regression method of saturation exponential in pre-dose technique

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A practical regression method using a saturation exponential in the pre-dose technique is proposed, mainly for application to porcelain dating. To test the method, simulated paleodoses of imitations of ancient porcelain were used. The measured results are in good agreement with the simulated values of the paleodoses, and the average ratios of the two values obtained with the two approaches are 1.05 and 0.99, with standard deviations (±1σ) of 19% and 15% respectively. Such errors are acceptable in porcelain dating.
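
    A hedged sketch of a saturation-exponential regression of the kind described above follows: fit I(D) = I_max * (1 - exp(-(D + Q) / D0)) to additive-dose data and read the paleodose off as Q. This functional form is a common choice for saturating luminescence growth, assumed here for illustration rather than taken from the paper, and the data are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def sat_exp(D, I_max, D0, Q):
        """Saturating growth curve; Q plays the role of the paleodose."""
        return I_max * (1.0 - np.exp(-(D + Q) / D0))

    added_dose = np.array([0.0, 1.0, 2.0, 4.0, 8.0])         # Gy
    truth = sat_exp(added_dose, I_max=100.0, D0=5.0, Q=2.5)  # Q = 2.5 Gy
    rng = np.random.default_rng(3)
    signal = truth + rng.normal(0.0, 1.0, size=truth.shape)  # add noise

    popt, pcov = curve_fit(sat_exp, added_dose, signal, p0=[80.0, 4.0, 1.0])
    print(f"estimated paleodose Q = {popt[2]:.2f} Gy (+/- {np.sqrt(pcov[2, 2]):.2f})")
    ```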

  1. Estimation of safe doses: critical review of the hockey stick regression method

    Energy Technology Data Exchange (ETDEWEB)

    Yanagimoto, T.; Yamamoto, E.

    1979-10-01

    The hockey stick regression method is a convenient way to estimate safe doses; it is a regression method using segmented lines. The method seems intuitively useful, but requires the assumption that a positive threshold value exists, and the validity of this assumption is difficult to demonstrate. Alternative methods that are not based on this assumption are given for suitable dose-response curves by introducing a risk level. Here the method using the probit model is compared with the hockey stick regression method. Computational results suggest that the alternative method is preferable. Furthermore, the methods are extended to similar problems in which the response is measured as a continuous value. The data used as examples concern the relation of SO2 to simple chronic bronchitis, the relation of photochemical oxidants to eye discomfort, and residual antibiotics in the liver of chicks. These data were analyzed by the original authors under the assumption that positive threshold values exist.
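
    For concreteness, the sketch below fits a hockey-stick model (response constant up to a threshold tau, linear beyond it) by a grid search over candidate breakpoints with ordinary least squares at each candidate. The data are synthetic; this illustrates the generic segmented-line idea the abstract critiques, not the authors' probit alternative.

    ```python
    import numpy as np

    def fit_hockey_stick(dose, resp, candidates):
        """Return (threshold, coefficients) minimizing the residual sum of squares."""
        best = None
        for tau in candidates:
            # Design matrix: intercept plus hinge term max(dose - tau, 0).
            X = np.column_stack([np.ones_like(dose), np.clip(dose - tau, 0, None)])
            beta, *_ = np.linalg.lstsq(X, resp, rcond=None)
            sse = np.sum((resp - X @ beta) ** 2)
            if best is None or sse < best[0]:
                best = (sse, tau, beta)
        return best[1], best[2]

    rng = np.random.default_rng(4)
    dose = np.linspace(0.0, 10.0, 60)
    resp = 2.0 + 0.8 * np.clip(dose - 4.0, 0, None) + rng.normal(0, 0.3, dose.size)

    tau, beta = fit_hockey_stick(dose, resp, np.linspace(0.5, 9.5, 91))
    print(f"estimated threshold: {tau:.2f}")   # true breakpoint is 4.0
    ```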

  2. A method to determine the planar dose distributions in patient undergone radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Cilla, S.; Viola, P.; Augelli, B.G.; D' Onofrio, G.; Grimaldi, L.; Craus, M. [U.O. Fisica Sanitaria, Universita Cattolica S. Cuore, Campobasso (Italy); Digesu, C.; Deodato, F.; Macchia, G.; Morganti, A.G. [U.O. Radioterapia, Universita Cattolica S. Cuore, Campobasso (Italy); Fidanzio, A.; Azario, L. [Istituto di Fisica, Universita Cattolica S. Cuore, Roma (Italy); Piermattei, A. [U.O. Fisica Sanitaria, Universita Cattolica S. Cuore, Campobasso (Italy); Istituto di Fisica, Universita Cattolica S. Cuore, Roma (Italy)], E-mail: a.piermatteei@rm.unicatt.it

    2008-06-15

    A 2D-array equipped with 729 vented plane-parallel ion chambers has been calibrated as a portal dose detector for in vivo radiotherapy measurements. The array was positioned on a radiographic film stand at 120 cm from the source, orthogonal to the radiotherapy beam delivered with the gantry angle at 180 deg, so that collisions between the 2D-array and the patient's couch were avoided. In this work, using the measurements of the portal detector, we present a method to reconstruct the dose variations in patients treated with step-and-shoot intensity-modulated beams (IMRT) for head-neck tumours. In this treatment, morphological changes often occur during the fractionated therapy. In a first step, in-house software compared the measured portal dose with the one computed by a commercial treatment planning system within the field of view of the computed tomography (CT) scanner. For each patient, the percentage Pγ of chambers for which the comparison agreed within selected acceptance criteria was determined 8 times. At the first radiotherapy fraction the γ-index analysis supplied Pγ values of about 95%, with acceptance criteria in terms of dose difference, ΔD, and distance to agreement, Δd, equal to 5% and 4 mm, respectively. These acceptance criteria account for small errors in the reproducibility of the patient's set-up and for the accuracy of the portal dose calculated by the treatment planning system (TPS), in particular when the beam was attenuated by inhomogeneous tissues and the shape of the head-neck body contours was irregular. During the treatment, some patients showed a reduction of Pγ below 90% because the patient's morphology changed over the course of radiotherapy. In a second step a method based on dosimetric measurements with standard phantoms supplied the percentage dose variations in a coronal plane of the
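
    The γ-index analysis used above combines a dose-difference criterion with a distance-to-agreement criterion. The following is a deliberately minimal one-dimensional sketch of that metric on made-up profiles; clinical implementations work on 2-D portal images and handle normalisation and search ranges far more carefully.

    ```python
    import numpy as np

    def gamma_1d(x, d_eval, d_ref, dD=0.05, dd=4.0):
        """1-D gamma index of an evaluated profile against a reference.

        x: positions (mm); doses normalised to the reference maximum;
        dD: dose-difference criterion; dd: distance criterion (mm).
        """
        gammas = np.empty_like(d_eval)
        for i, (xi, di) in enumerate(zip(x, d_eval)):
            dist2 = ((x - xi) / dd) ** 2 + ((d_ref - di) / dD) ** 2
            gammas[i] = np.sqrt(dist2.min())
        return gammas

    x = np.arange(0.0, 100.0, 1.0)                   # mm
    ref = np.exp(-0.5 * ((x - 50) / 15) ** 2)        # reference profile
    ev = 1.03 * np.exp(-0.5 * ((x - 51) / 15) ** 2)  # shifted, rescaled profile

    g = gamma_1d(x, ev, ref)
    print(f"pass rate (gamma <= 1): {np.mean(g <= 1.0):.1%}")
    ```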

  3. Application of FTA Method to Reliability Analysis of Vacuum Resin Shot Dosing Equipment

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Faults of vacuum resin shot dosing equipment are studied systematically and the fault tree of the system is constructed using the fault tree analysis (FTA) method. Qualitative and quantitative analyses of the tree are then carried out, and, according to the results of the analyses, measures to improve the system are worked out and implemented. As a result, the reliability of the equipment is greatly enhanced.
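
    For readers unfamiliar with the quantitative side of FTA, the sketch below shows how basic-event probabilities propagate through OR and AND gates to a top event, assuming independent events. The gate structure and probabilities are invented for illustration and are unrelated to the actual equipment faults analysed in the paper.

    ```python
    def p_or(*ps):
        """P(at least one of several independent events occurs)."""
        prod = 1.0
        for p in ps:
            prod *= 1.0 - p
        return 1.0 - prod

    def p_and(*ps):
        """P(all of several independent events occur)."""
        prod = 1.0
        for p in ps:
            prod *= p
        return prod

    valve_stuck, sensor_fail = 0.010, 0.005   # hypothetical basic events
    pump_fail, backup_pump_fail = 0.002, 0.002

    # Hypothetical top event: dosing failure if the valve sticks, the sensor
    # fails, or both the main and backup pumps fail.
    top = p_or(valve_stuck, sensor_fail, p_and(pump_fail, backup_pump_fail))
    print(f"P(top event) = {top:.5f}")
    ```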

  4. Radiation dose reduction in computed tomography perfusion using spatial-temporal Bayesian methods

    Science.gov (United States)

    Fang, Ruogu; Raj, Ashish; Chen, Tsuhan; Sanelli, Pina C.

    2012-03-01

    In current computed tomography (CT) examinations, the associated X-ray radiation dose is of significant concern to patients and operators, especially in CT perfusion (CTP) imaging, which delivers a higher radiation dose due to its cine scanning technique. A simple and cost-effective way to perform the examinations is to lower the milliampere-seconds (mAs) parameter as low as reasonably achievable in data acquisition. However, lowering the mAs parameter unavoidably increases data noise and greatly degrades the CT perfusion maps if no adequate noise control is applied during image reconstruction. To capture the essential dynamics of CT perfusion, a simple spatial-temporal Bayesian method that uses a piecewise parametric model of the residual function is employed, and the model parameters are estimated from a Bayesian formulation of prior smoothness constraints on the perfusion parameters. From the fitted residual function, reliable CTP parameter maps are obtained from low-dose CT data. The merit of this scheme lies in combining an analytical piecewise residual function with a Bayesian framework using a simple spatial prior constraint for the CT perfusion application. On a dataset of 22 patients, this dynamic spatial-temporal Bayesian model yielded an increase in signal-to-noise ratio (SNR) of 78% and a decrease in mean-square error (MSE) of 40% at a low-dose setting of 43 mA.

  5. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2

    Energy Technology Data Exchange (ETDEWEB)

    Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.; Desrosiers, A.E.

    1983-05-01

    As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a microcomputer-based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM, including the methods, models and assumptions used in the calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. The radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made; these are described, along with default values (assumptions used in the absence of specific input), in the text of this document. Two companion volumes provide additional information on IRDAM: the User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of the equipment necessary to run IRDAM, and Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.

  6. Calibration and intercomparison methods of dose calibrators used in nuclear medicine facilities; Metodos de calibracao e de intercomparacao de calibradores de dose utilizados em servicos de medicina nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Alessandro Martins da

    1999-07-01

    Dose calibrators are used in most nuclear medicine facilities to determine the amount of radioactivity administered to a patient in a particular investigation or therapeutic procedure. It is therefore of vital importance that the equipment used performs well and is regularly calibrated at an authorized laboratory; this occurs if adequate quality assurance procedures are carried out. Some quality control tests should be performed daily, others biannually or yearly, testing, for example, accuracy and precision, reproducibility and response linearity. In this work a commercial dose calibrator was calibrated with solutions of radionuclides used in nuclear medicine. Simple instrument tests, such as response linearity and the variation of response with increasing source volume at constant activity concentration, were performed. This instrument can now be used as a working standard for the calibration of other dose calibrators. An intercomparison procedure was proposed as a method of quality control of dose calibrators used in nuclear medicine facilities. (author)

  7. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then describe in more detail what benchmarking is, on the basis of four different applications of benchmarking. The regulation of utility companies is then discussed, after which...

  8. An in vivo dose verification method for SBRT–VMAT delivery using the EPID

    Energy Technology Data Exchange (ETDEWEB)

    McCowan, P. M., E-mail: peter.mccowan@cancercare.mb.ca [Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba R3T 2N2 (Canada); Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9 (Canada); Van Uytven, E.; Van Beek, T.; Asuni, G. [Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9 (Canada); McCurdy, B. M. C. [Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba R3T 2N2 (Canada); Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9 (Canada); Department of Radiology, University of Manitoba, 820 Sherbrook Street, Winnipeg, Manitoba R3A 1R9 (Canada)

    2015-12-15

    Purpose: Radiation treatments have become increasingly more complex with the development of volumetric modulated arc therapy (VMAT) and the use of stereotactic body radiation therapy (SBRT). SBRT involves the delivery of substantially larger doses over fewer fractions than conventional therapy. SBRT–VMAT treatments will strongly benefit from in vivo patient dose verification, as any errors in delivery can be more detrimental to the radiobiology of the patient as compared to conventional therapy. Electronic portal imaging devices (EPIDs) are available on most commercial linear accelerators (Linacs) and their documented use for dosimetry makes them valuable tools for patient dose verification. In this work, the authors customize and validate a physics-based model which utilizes on-treatment EPID images to reconstruct the 3D dose delivered to the patient during SBRT–VMAT delivery. Methods: The SBRT Linac head, including jaws, multileaf collimators, and flattening filter, were modeled using Monte Carlo methods and verified with measured data. The simulation provides energy spectrum data that are used by their “forward” model to then accurately predict fluence generated by a SBRT beam at a plane above the patient. This fluence is then transported through the patient and then the dose to the phosphor layer in the EPID is calculated. Their “inverse” model back-projects the EPID measured focal fluence to a plane upstream of the patient and recombines it with the extra-focal fluence predicted by the forward model. This estimate of total delivered fluence is then forward projected onto the patient’s density matrix and a collapsed cone convolution algorithm calculates the dose delivered to the patient. The model was tested by reconstructing the dose for two prostate, three lung, and two spine SBRT–VMAT treatment fractions delivered to an anthropomorphic phantom. It was further validated against actual patient data for a lung and spine SBRT–VMAT plan. The

  9. Radiography benchmark 2014

    Energy Technology Data Exchange (ETDEWEB)

    Jaenisch, G.-R., E-mail: Gerd-Ruediger.Jaenisch@bam.de; Deresch, A.; Bellon, C. [Federal Institute for Materials Research and Testing, Unter den Eichen 87, 12205 Berlin (Germany); Schumm, A.; Lucet-Sanchez, F.; Guerin, P. [EDF R and D, 1 avenue du Général de Gaulle, 92141 Clamart (France)

    2015-03-31

    The purpose of the 2014 WFNDEC RT benchmark study was to compare the predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available, considering electron binding effects, and secondary effects such as X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, evaluating only the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  10. Benchmarking of LSTM Networks

    OpenAIRE

    Breuel, Thomas M.

    2015-01-01

    LSTM (Long Short-Term Memory) recurrent neural networks have been highly successful in a number of application areas. This technical report describes the use of the MNIST and UW3 databases for benchmarking LSTM networks and explores the effect of different architectural and hyperparameter choices on performance. Significant findings include: (1) LSTM performance depends smoothly on learning rates, (2) batching and momentum has no significant effect on performance, (3) softmax training outperf...

  11. Remarks on a benchmark nonlinear constrained optimization problem

    Institute of Scientific and Technical Information of China (English)

    Luo Yazhong; Lei Yongjun; Tang Guojin

    2006-01-01

    Remarks on a benchmark nonlinear constrained optimization problem are made. Due to a citation error, two entirely different results for the benchmark problem have been obtained by independent researchers. Parallel simulated annealing using the simplex method is employed in our study to solve the benchmark nonlinear constrained problem with the mistaken formula, and the best-known solution is obtained, whose optimality is verified by the Kuhn-Tucker conditions.

  12. Methods for meta-analysis of pharmacodynamic dose-response data with application to multi-arm studies of alogliptin.

    Science.gov (United States)

    Langford, Oliver; Aronson, Jeffrey K; van Valkenhoef, Gert; Stevens, Richard J

    2016-03-17

    Standard methods for meta-analysis of dose-response data in epidemiology assume a model with a single scalar parameter, such as log-linear relationships between exposure and outcome; such models are implicitly unbounded. In contrast, in pharmacology, multi-parameter models, such as the widely used Emax model, are used to describe relationships that are bounded above and below. We propose methods for estimating the parameters of a dose-response model by meta-analysis of summary data from the results of randomized controlled trials of a drug, in which each trial uses multiple doses of the drug of interest (possibly including dose 0 or placebo). We assume that, for each randomized arm of each trial, the mean and standard error of a continuous response measure and the corresponding allocated dose are available. We consider weighted least squares fitting of the model to the mean and dose pairs from all arms of all studies, and a two-stage procedure in which scalar inverse-variance meta-analysis is performed at each dose, and the dose-response model is fitted to the results by weighted least squares. We then compare these with two further methods inspired by network meta-analysis that fit the model to the contrasts between doses. We illustrate the methods by estimating the parameters of the Emax model to a collection of multi-arm, multiple-dose, randomized controlled trials of alogliptin, a drug for the management of diabetes mellitus, and further examine the properties of the four methods with sensitivity analyses and a simulation study. We find that all four methods produce broadly comparable point estimates for the parameters of most interest, but a single-stage method based on contrasts between doses produces the most appropriate confidence intervals. Although simpler methods may have pragmatic advantages, such as the use of standard software for scalar meta-analysis, more sophisticated methods are nevertheless preferable for their advantages in estimation.
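
    To make the first fitting strategy above concrete, here is a hedged sketch of a weighted least-squares Emax fit, E(d) = E0 + Emax * d / (ED50 + d), to pooled (dose, mean, standard error) summary data from all arms of all trials. The arm-level numbers are invented and are not the alogliptin data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def emax(d, E0, Emax, ED50):
        """Emax dose-response model, bounded above and below."""
        return E0 + Emax * d / (ED50 + d)

    # Pooled arm-level summaries: dose (mg), mean response, standard error.
    dose = np.array([0.0, 6.25, 12.5, 25.0, 50.0, 0.0, 12.5, 25.0])
    mean = np.array([-0.10, -0.40, -0.60, -0.70, -0.75, -0.05, -0.55, -0.68])
    se   = np.array([0.08, 0.07, 0.07, 0.06, 0.09, 0.08, 0.07, 0.07])

    popt, pcov = curve_fit(emax, dose, mean, p0=[0.0, -0.8, 10.0],
                           sigma=se, absolute_sigma=True)
    for name, v, e in zip(["E0", "Emax", "ED50"], popt, np.sqrt(np.diag(pcov))):
        print(f"{name} = {v:.3f} +/- {e:.3f}")
    ```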

  13. Improved method to label beta-2 agonists in metered-dose inhalers with technetium-99m

    Energy Technology Data Exchange (ETDEWEB)

    Ballinger, J.R.; Calcutt, L.E.; Hodder, R.V.; Proulx, A.; Gulenchyn, K.Y. (Ottawa Civic Hospital, Ottawa (Canada). Div. of Nuclear Medicine and Respiratory Unit)

    1993-01-01

    Labelling beta-2 agonists in a metered-dose inhaler (MDI) with technetium-99m allows imaging of the deposition of the aerosol in the respiratory tract. We have developed an improved labelling method in which anhydrous pertechnetate is dissolved in a small volume of ethanol, diluted with a fluorocarbon, and introduced into a commercial MDI. Imaging the MDI demonstrated that the 99mTc was associated with the active ingredient, not just the propellant. The method has been used successfully with salbutamol and fenoterol MDIs and should be directly applicable to other MDIs which contain hydrophilic drugs. (Author).

  14. Numerical system utilising a Monte Carlo calculation method for accurate dose assessment in radiation accidents.

    Science.gov (United States)

    Takahashi, F; Endo, A

    2007-01-01

    A system utilising radiation transport codes has been developed to derive accurate dose distributions in the human body for radiological accidents. A suitable model is essential for such a numerical analysis. Therefore, two tools were developed to set up a 'problem-dependent' input file, defining a radiation source and an exposed person, to simulate the radiation transport in an accident with the Monte Carlo calculation codes MCNP and MCNPX. For both tools, the necessary resources are defined through a dialogue method on an ordinary personal computer. The tools prepare human body and source models described in the input file format of the employed Monte Carlo codes. The tools were validated for dose assessment by comparison with a past criticality accident and a hypothesized exposure.

  15. A method applicable to effective dose rate estimates for aircrew dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Ferrari, A.; Pelliccioni, M.; Rancati, T.

    2001-07-01

    The inclusion of cosmic radiation as occupational exposure under ICRP Publication 60 and the European Union Council Directive 96/29/Euratom has highlighted the need to estimate the exposure of aircrew. According to a report of the Group of Experts established under the terms of Article 31 of the European Treaty, the individual estimates of dose for flights below 15 km may be done using an appropriate computer program. In order to calculate the radiation exposure at aircraft altitudes, calculations have been performed by means of the Monte Carlo transport code FLUKA. On the basis of the calculated results, a simple method is proposed for the individual evaluation of effective dose rate due to the galactic component of cosmic radiation as a function of latitude and altitude. (author)

  16. A method applicable to effective dose rate estimates for aircrew dosimetry

    CERN Document Server

    Ferrari, A; Rancati, T

    2001-01-01

    The inclusion of cosmic radiation as occupational exposure under ICRP Publication 60 and the European Union Council Directive 96/29/Euratom has highlighted the need to estimate the exposure of aircrew. According to a report of the Group of Experts established under the terms of Article 31 of the European Treaty, the individual estimates of dose for flights below 15 km may be done using an appropriate computer program. In order to calculate the radiation exposure at aircraft altitudes, calculations have been performed by means of the Monte Carlo transport code FLUKA. On the basis of the calculated results, a simple method is proposed for the individual evaluation of effective dose rate due to the galactic component of cosmic radiation as a function of latitude and altitude. (13 refs).

  17. Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method

    Institute of Scientific and Technical Information of China (English)

    Chen Chaobin; Huang Qunying; Wu Yican

    2005-01-01

    A few CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of X-ray and electron beams to the elemental proportions and mass densities of the materials used to represent the patient's anatomical structure. The human body can be well outlined by air, lung, adipose, muscle, soft bone and hard bone for calculating the dose distribution with the Monte Carlo method. Based on our investigation, the effects of the calibration curves established using various CT scanners are not clinically significant. The deviation in the cumulative dose-volume histogram values derived from the CT-based voxel phantoms is less than 1% for the given target.
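
    The calibration curves mentioned above map CT numbers to density (and material) assignments before the Monte Carlo transport. A minimal sketch of such a piecewise-linear Hounsfield-unit-to-density mapping follows; the breakpoints are generic textbook-style values, not those of any scanner in the study.

    ```python
    import numpy as np

    # Piecewise-linear calibration curve (illustrative breakpoints).
    hu_points = np.array([-1000, -700, -100, 0, 300, 1500])       # HU
    rho_points = np.array([0.001, 0.30, 0.95, 1.00, 1.15, 1.85])  # g/cm^3

    def hu_to_density(hu):
        """Map CT numbers to mass density by linear interpolation."""
        return np.interp(hu, hu_points, rho_points)

    ct_voxels = np.array([[-1000, -750, -50], [30, 250, 1200]])
    print(hu_to_density(ct_voxels))
    ```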

  18. Radiation Dose Reduction Methods For Use With Fluoroscopic Imaging, Computers And Implications For Image Quality

    Science.gov (United States)

    Edmonds, E. W.; Hynes, D. M.; Rowlands, J. A.; Toth, B. D.; Porter, A. J.

    1988-06-01

    The use of a beam-splitting device for medical gastrointestinal fluoroscopy has demonstrated that clinical images obtained with a 100 mm photofluorographic camera and with a 1024 x 1024 digital matrix using pulsed progressive readout acquisition are identical. In addition, it has been found that clinical images can be obtained with digital systems at dose levels lower than those possible with film. The use of pulsed fluoroscopy with intermittent storage of the fluoroscopic image has also been demonstrated to reduce the fluoroscopic part of the examination to very low dose levels, particularly when low repetition rates of about 2 frames per second (fps) are used. The use of digital methods reduces both the amount of radiation required and the heat generated by the x-ray tube. Images can therefore be produced using a very small focal spot on the x-ray tube, which can further improve the resolution of the clinical images.

  19. DOSE MEASURMENT IN ULTRAVIOLET DISINFECTION OF WATER AND WASTE WATER BY CHEMICAL METHOD

    Directory of Open Access Journals (Sweden)

    F. Vaezi

    1995-06-01

    Chemical methods (actinometry) depend on the measurement of the extent to which a chemical reaction occurs under the influence of UV light. Two chemical actinometers were used in this research. In one method, mixtures of potassium peroxydisulphate and butanol solutions were irradiated for various time intervals, and pH changes were determined; a linear relationship was observed between these changes and the UV dose applied. In the other method, acidic solutions of ammonium molybdate and ethyl alcohol were irradiated, and the intensity of the blue color developed was determined by titration with potassium permanganate solution. The volumes of titrant used were then plotted against the UV doses, showing a linear relationship that could be used for dosimetry. Both of these actinometers proved to be reliable. The first is the method of choice where high accuracy is required, and the second is preferred for its feasibility, requiring no special equipment or hard-to-obtain raw materials.

  20. NEW METHODICAL APPROACH FOR CALCULATION OF THE INDIVIDUALIZED INTERNAL DOSES OF PERSONS AFFECTED DUE TO THE CHERNOBYL ACCIDENT

    Directory of Open Access Journals (Sweden)

    E. A. Drozd

    2014-01-01

    The basis of the methodical approach for calculation of individualized internal doses is the confirmed original scientific hypothesis that every group of individuals homogeneous in demographic characteristics (gender and age) has a fixed, time-constant location on the dose-distribution curve constructed from individual measurements of Cs-137 in the human body (WB measurements); that is, the percentiles of the dose distribution corresponding to the average internal dose of each age group of men and women occupy a definite location on the curve that is stable in time. Keywords: individualized internal dose, percentile of dose distribution, stability.
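
    A rough sketch of the percentile idea described above, on synthetic numbers: an individual's percentile within the whole-body-measurement distribution of their demographic group is taken as stable, so an individualized dose can be read off at the same percentile of the group's dose distribution. Every number below is invented; the actual method rests on the measured Cs-137 distributions.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    group_cs137 = rng.lognormal(mean=3.0, sigma=0.6, size=500)   # WB measurements (Bq)
    group_doses = rng.lognormal(mean=-1.5, sigma=0.6, size=500)  # internal doses (mSv)

    person_cs137 = 35.0   # this person's whole-body measurement (Bq)
    pct = 100.0 * np.mean(group_cs137 <= person_cs137)   # percentile within the group
    dose = np.percentile(group_doses, pct)               # same percentile of the doses

    print(f"percentile = {pct:.1f}, individualized dose = {dose:.3f} mSv")
    ```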

  1. Features and technology of enterprise internal benchmarking

    Directory of Open Access Journals (Sweden)

    A.V. Dubodelova

    2013-06-01

    The aim of the article is to generalize the characteristics, objectives and advantages of internal benchmarking, and to set out the sequence of stages of internal benchmarking technology, focused on continuous improvement of enterprise processes by implementing existing best practices. The results of the analysis: business activity of domestic enterprises in a crisis business environment has to focus on the best success factors of their structural units, using standard research assessment of their performance and of their innovative experience in practice. A modern method of satisfying those needs is internal benchmarking; according to Bain & Co, internal benchmarking is one of the three most common methods of business management. The features and benefits of benchmarking are defined in the article, and the sequence and methodology of implementation of the individual stages of benchmarking technology projects are formulated. The authors define benchmarking as a strategic orientation towards the best achievement, comparing performance and working methods with a standard. It covers the processes of research, the organization of production and distribution, and the management and marketing methods of reference objects, in order to identify innovative practices and implement them in a particular business. The development of benchmarking at domestic enterprises requires analysis of its theoretical bases and practical experience; choosing the best experience helps to develop recommendations for its application in practice. It is also essential to classify its types, identify its characteristics, and study appropriate areas of use and a methodology of implementation. The structure of internal benchmarking objectives includes: promoting research and the establishment of minimum acceptable levels of efficiency for the processes and activities available at the enterprise; and identification of current problems and areas that need improvement without involvement of foreign experience

  2. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.

  3. Comparison of passive and active radon measurement methods for personal occupational dose assessment

    Directory of Open Access Journals (Sweden)

    Hasanzadeh Elham

    2016-01-01

    Full Text Available To compare the performance of active short-term and passive long-term radon measurement methods, a study was carried out in several closed spaces, including a uranium mine in Iran. For the passive method, solid-state nuclear track detectors based on Lexan polycarbonate were utilized; for the active method, an AlphaGUARD monitor was used. The study focused on the correlation between the results obtained for estimating the average indoor radon concentrations and the consequent personal occupational doses in various working places. The repeatability of each method was investigated, too. In addition, it was shown that the radon concentrations in different stations of the continually ventilated uranium mine were comparable to those of ground floor laboratories or storage rooms (without continual ventilation) and lower than those of underground laboratories.

  4. Evaluation of a sieve classification method for characterization of low-dose interactive mixtures.

    Science.gov (United States)

    Bredenberg, Susanne; Dahlgren, Anna; Mattsson, Sofia

    2013-01-01

    This study investigated a sieve classification method for evaluating carrier materials and particle size fractions, which could be a valuable tool in the early development of pharmaceutical dosage forms containing low-dose interactive mixtures. When developing new products based on interactive mixtures, it is essential to ensure that the drug particles are successfully deagglomerated and have adhered to the carrier particles. In this study, the effect on the demixing potential (DP) of low-dose interactive mixtures was assessed for various carrier particle sizes and surface textures. The model drug used was sodium salicylate and the tested carriers were lactose, mannitol, and isomalt. The results showed that the lowest DPs, i.e. the most mechanically stable mixtures, were obtained with lactose. Furthermore, for interactive mixtures, small carrier particles and/or a narrow carrier particle size range are essential for obtaining a low DP and high homogeneity. Calculation of the DP provided a reliable estimate of the quality of the low-dose interactive mixtures used in this study.

  5. Experimental method for calculation of effective doses in interventional radiology; Metodo experimental para calculo de dosis efectivas en radiologia intervencionista

    Energy Technology Data Exchange (ETDEWEB)

    Herraiz Lblanca, M. D.; Diaz Romero, F.; Casares Magaz, O.; Garrido Breton, C.; Catalan Acosta, A.; Hernandez Armas, J.

    2013-07-01

    This paper proposes a method for calculating the effective dose in any interventional radiology procedure using an Alderson-RANDO anthropomorphic phantom and TLD-100 chip dosimeters. The method has been applied to an angioradiology procedure: biliary drainage. The objectives were: a) to put together a method that, on an experimental basis, makes it possible to determine organ doses and calculate the effective dose in complex procedures, and b) to apply the method to the calculation of the effective dose of biliary drainage. (Author)

  6. Pediatric Stroke and transcranial Direct Current Stimulation: Methods for Rational Individualized Dose Optimization

    Directory of Open Access Journals (Sweden)

    Bernadette T Gillick

    2014-09-01

    Full Text Available Background - Transcranial direct current stimulation (tDCS) has been investigated mainly in adults, and doses may not be appropriate for pediatric applications. In perinatal stroke, where potential applications are promising, rational adaptation of dosage for children remains under investigation. Objective - Construct child-specific tDCS dosing parameters through a case study within a perinatal stroke tDCS safety and feasibility trial. Methods - A 10-year-old subject with a diagnosis of presumed perinatal ischemic stroke and hemiparesis was identified. T1 MRI scans were used to derive a computerized model for current flow and electrode positions. A workflow using modeling results and consideration of dosage in previous clinical trials was incorporated. Prior ad hoc adult montages versus de novo optimized montages presented distinct risk-benefit trade-offs. Approximating the adult dose required consideration of changes in both peak brain current flow and its distribution, which further trade off between maximizing efficacy and adding safety factors. Electrode size, position, current intensity, compliance voltage, and duration were controlled independently in this process. Results - Brain electric fields were modeled and compared to values predicted by previous models. Approximating conservative brain current flow patterns and intensities used in previous adult trials for comparable indications, the optimal current intensity established was 0.7 mA for 10 minutes with a tDCS C3/C4 montage; specifically, 0.7 mA produced a peak brain current intensity comparable to an average adult receiving 1.0 mA. An electrode size of 5x7 cm2 with 1.0 mA and low-voltage tDCS was employed to maximize tolerability. Safety and feasibility were confirmed, with the subject tolerating the session well and no serious adverse events. Conclusion - Rational approaches to dose customization, with steps informed by computational modeling, may improve guidance for pediatric stroke tDCS trials.

  7. 2001 benchmarking guide.

    Science.gov (United States)

    Hoppszallern, S

    2001-01-01

    Our fifth annual guide to benchmarking under managed care presents data that is a study in market dynamics and adaptation. New this year are financial indicators on HMOs exiting the market and those remaining. Hospital financial ratios and details on department performance are included. The physician group practice numbers show why physicians are scrutinizing capitated payments. Overall, hospitals in markets with high managed care penetration are more successful in managing labor costs and show productivity gains in imaging services, physical therapy and materials management.

  8. FGK Benchmark Stars: A new metallicity scale

    CERN Document Server

    Jofre, Paula; Soubiran, C; Blanco-Cuaresma, S; Pancino, E; Bergemann, M; Cantat-Gaudin, T; Hernandez, J I Gonzalez; Hill, V; Lardo, C; de Laverny, P; Lind, K; Magrini, L; Masseron, T; Montes, D; Mucciarelli, A; Nordlander, T; Recio-Blanco, A; Sobeck, J; Sordo, R; Sousa, S G; Tabernero, H; Vallenari, A; Van Eck, S; Worley, C C

    2013-01-01

    In the era of large spectroscopic surveys of stars of the Milky Way, atmospheric parameter pipelines require reference stars to evaluate and homogenize their values. We provide a new metallicity scale for the FGK benchmark stars based on their corresponding fundamental effective temperature and surface gravity. This was done by analyzing homogeneously with up to seven different methods a spectral library of benchmark stars. Although our direct aim was to provide a reference metallicity to be used by the Gaia-ESO Survey, the fundamental effective temperatures and surface gravities of benchmark stars of Heiter et al. 2013 (in prep) and their metallicities obtained in this work can also be used as reference parameters for other ongoing surveys, such as Gaia, HERMES, RAVE, APOGEE and LAMOST.

  9. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    The purpose of this article is to benchmark different optimization solvers when applied to various finite element based structural topology optimization problems. An extensive and representative library of minimum compliance, minimum volume, and mechanism design problem instances for different sizes is developed for this benchmarking. The problems are based on a material interpolation scheme combined with a density filter. Different optimization solvers including Optimality Criteria (OC), the Method of Moving Asymptotes (MMA) and its globally convergent version GCMMA, the interior point ... profiles conclude that general solvers are as efficient and reliable as classical structural topology optimization solvers. Moreover, the use of the exact Hessians in SAND formulations generally produces designs with better objective function values. However, with the benchmarked implementations solving...

  10. Reconstruction of high-resolution 3D dose from matrix measurements : error detection capability of the COMPASS correction kernel method

    NARCIS (Netherlands)

    Godart, J.; Korevaar, E. W.; Visser, R.; Wauben, D. J. L.; van t Veld, Aart

    2011-01-01

    The COMPASS system (IBA Dosimetry) is a quality assurance (QA) tool which reconstructs 3D doses inside a phantom or a patient CT. The dose is predicted according to the RT plan with a correction derived from 2D measurements of a matrix detector. This correction method is necessary since a direct recon

  11. A novel method of estimating dose responses for polymer gels using texture analysis of scanning electron microscopy images.

    Directory of Open Access Journals (Sweden)

    Cheng-Ting Shih

    Full Text Available Polymer gels are regarded as a potential dosimeter for independent validation of absorbed doses in clinical radiotherapy. Several imaging modalities have been used to convert radiation-induced polymerization to absorbed doses from a macro-scale viewpoint. This study developed a novel dose conversion mechanism by texture analysis of scanning electron microscopy (SEM) images. The modified N-isopropyl-acrylamide (NIPAM) gels were prepared under normoxic conditions, and were administered radiation doses from 5 to 20 Gy. After freeze drying, the gel samples were sliced for SEM scanning with 50×, 500×, and 3500× magnifications. Four texture indices were calculated based on the gray level co-occurrence matrix (GLCM). The results showed that entropy and homogeneity were more suitable than contrast and energy as dose indices for higher linearity and sensitivity of the dose response curves. After parameter optimization, an R² value of 0.993 can be achieved for homogeneity using 500× magnified SEM images with 27 pixel offsets and no outlier exclusion. For dose verification, the percentage errors between the prescribed dose and the measured dose for 5, 10, 15, and 20 Gy were -7.60%, 5.80%, 2.53%, and -0.95%, respectively. We conclude that texture analysis can be applied to the SEM images of gel dosimeters to accurately convert micro-scale structural features to absorbed doses. The proposed method may extend the feasibility of applying gel dosimeters in the fields of diagnostic radiology and radiation protection.

  12. A novel method of estimating dose responses for polymer gels using texture analysis of scanning electron microscopy images.

    Science.gov (United States)

    Shih, Cheng-Ting; Hsu, Jui-Ting; Han, Rou-Ping; Hsieh, Bor-Tsung; Chang, Shu-Jun; Wu, Jay

    2013-01-01

    Polymer gels are regarded as a potential dosimeter for independent validation of absorbed doses in clinical radiotherapy. Several imaging modalities have been used to convert radiation-induced polymerization to absorbed doses from a macro-scale viewpoint. This study developed a novel dose conversion mechanism by texture analysis of scanning electron microscopy (SEM) images. The modified N-isopropyl-acrylamide (NIPAM) gels were prepared under normoxic conditions, and were administered radiation doses from 5 to 20 Gy. After freeze drying, the gel samples were sliced for SEM scanning with 50×, 500×, and 3500× magnifications. Four texture indices were calculated based on the gray level co-occurrence matrix (GLCM). The results showed that entropy and homogeneity were more suitable than contrast and energy as dose indices for higher linearity and sensitivity of the dose response curves. After parameter optimization, an R² value of 0.993 can be achieved for homogeneity using 500× magnified SEM images with 27 pixel offsets and no outlier exclusion. For dose verification, the percentage errors between the prescribed dose and the measured dose for 5, 10, 15, and 20 Gy were -7.60%, 5.80%, 2.53%, and -0.95%, respectively. We conclude that texture analysis can be applied to the SEM images of gel dosimeters to accurately convert micro-scale structural features to absorbed doses. The proposed method may extend the feasibility of applying gel dosimeters in the fields of diagnostic radiology and radiation protection.
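
    A minimal sketch of the GLCM pipeline described above, assuming scikit-image (>= 0.19) for the gray level co-occurrence matrix; the synthetic image and the linear calibration coefficients are placeholders, while the 27-pixel offset follows the abstract.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

        # Synthetic 8-bit stand-in for an SEM image of a gel slice.
        image = (np.random.default_rng(1).random((256, 256)) * 255).astype(np.uint8)

        # GLCM with the 27-pixel offset used in the study, one angle for brevity.
        glcm = graycomatrix(image, distances=[27], angles=[0],
                            levels=256, symmetric=True, normed=True)
        homogeneity = graycoprops(glcm, "homogeneity")[0, 0]

        # Entropy is not offered by graycoprops, so compute it from the matrix.
        p = glcm[:, :, 0, 0]
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))

        # Hypothetical linear calibration (index -> absorbed dose in Gy), standing
        # in for the dose-response curve fitted over the 5-20 Gy gel measurements.
        a, b = -120.0, 80.0   # placeholder slope/intercept
        dose_gy = a * homogeneity + b
        print(f"homogeneity={homogeneity:.3f}, entropy={entropy:.2f}, dose ~ {dose_gy:.1f} Gy")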

  13. Benchmarking concentrating photovoltaic systems

    Science.gov (United States)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

    Integral to photovoltaics is the need to provide improved economic viability. To achieve this goal, photovoltaic technology has to be able to harness more light at less cost. A large variety of concentrating photovoltaic concepts has provided cause for pursuit. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. To save time and capital, a way to estimate the cost-performance of a complete solar energy system is to use computer-aided modeling. In this work a benchmark tool is introduced based on a modular programming concept. The overall implementation is done in MATLAB, whereas the Advanced Systems Analysis Program (ASAP) is used for ray tracing calculations. This allows for a flexible and extendable structuring of all important modules, namely an advanced source model including time and location dependence, and an advanced optical system analysis of various optical designs, to obtain an evaluation of the figure of merit. An important figure of merit, the energy yield for a given photovoltaic system at a geographical position over a specific period, can thus be calculated.

  14. EU and OECD benchmarking and peer review compared

    NARCIS (Netherlands)

    Groenendijk, Nico

    2009-01-01

    Benchmarking and peer review are essential elements of the so-called EU open method of coordination (OMC) which has been contested in the literature for lack of effectiveness. In this paper we compare benchmarking and peer review procedures as used by the EU with those used by the OECD. Different ty

  15. Supermarket Refrigeration System - Benchmark for Hybrid System Control

    DEFF Research Database (Denmark)

    Sloth, Lars Finn; Izadi-Zamanabadi, Roozbeh; Wisniewski, Rafal

    2007-01-01

    This paper presents a supermarket refrigeration system as a benchmark for development of new ideas and a comparison of methods for hybrid systems' modeling and control. The benchmark features switch dynamics and discrete valued input making it a hybrid system, furthermore the outputs are subjected...

  16. HPC Benchmark Suite NMx Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc. (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  17. 基于目标规划的标杆自然选择方法及其应用%Natural Benchmark Selection Method and Application Based on Goal Programming

    Institute of Scientific and Technical Information of China (English)

    关志民; 董恩伏; 张莉莉

    2011-01-01

    To make benchmark management encourage behavior subjects to continuously explore the development regularity of things and show good behaviors, a benchmark selection method based on the idea of conforming to natural rules was proposed. This method combines behavior subjects' personalized options with group objective recommendation, and brings individual agent evaluation and democratic agent evaluation together in the benchmark selection process. It takes into account both mass support and scientific rigor, satisfies the logic of psychological science and management science, and has theoretical value. Finally, an application to an electric power company shows the effectiveness and feasibility of the proposed method.

  18. General benchmarks for quantum repeaters

    CERN Document Server

    Pirandola, Stefano

    2015-01-01

    Using a technique based on quantum teleportation, we simplify the most general adaptive protocols for key distribution, entanglement distillation and quantum communication over a wide class of quantum channels in arbitrary dimension. Thanks to this method, we bound the ultimate rates for secret key generation and quantum communication through single-mode Gaussian channels and several discrete-variable channels. In particular, we derive exact formulas for the two-way assisted capacities of the bosonic quantum-limited amplifier and the dephasing channel in arbitrary dimension, as well as the secret key capacity of the qubit erasure channel. Our results establish the limits of quantum communication with arbitrary systems and set the most general and precise benchmarks for testing quantum repeaters in both discrete- and continuous-variable settings.

  19. COG validation: SINBAD Benchmark Problems

    Energy Technology Data Exchange (ETDEWEB)

    Lent, E M; Sale, K E; Buck, R M; Descalle, M

    2004-02-23

    We validated COG, a 3D Monte Carlo radiation transport code, against experimental data and MCNP4C simulations from the Shielding Integral Benchmark Archive Database (SINBAD) compiled by RSICC. We modeled three experiments: the Osaka nickel and aluminum sphere experiments conducted at the OKTAVIAN facility, and the liquid oxygen experiment conducted at the FNS facility. COG results are in good agreement with experimental data and generally within a few % of MCNP results. There are several possible sources of discrepancy between MCNP and COG results: (1) the cross-section database versions are different, MCNP uses ENDF/B-VI 1.1 while COG uses ENDF/B-VI R7, (2) the code implementations are different, and (3) the models may differ slightly. We also limited the use of variance reduction methods when running the COG version of the problems.

  20. A new method for synthesizing radiation dose-response data from multiple trials applied to prostate cancer

    DEFF Research Database (Denmark)

    Diez, Patricia; Vogelius, Ivan S; Bentzen, Søren M

    2010-01-01

    A new method is presented for synthesizing dose-response data for biochemical control of prostate cancer according to study design (randomized vs. nonrandomized) and risk group (low vs. intermediate-high)....

  1. Investigation of the HU-density conversion method and comparison of dose distribution for dose calculation on MV cone beam CT images

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Min Joo; Lee, Seu Ran; Suh, Tae Suk [Dept. of Biomedical Engineering, The Catholic University of Korea, Bucheon (Korea, Republic of)

    2011-11-15

    Modern radiation therapy techniques such as image-guided radiation therapy (IGRT) and adaptive radiation therapy (ART) have become routine clinical practice on linear accelerators, increasing tumor dose conformity while improving normal tissue sparing. For these highly developed techniques, megavoltage cone beam computed tomography (MV CBCT) systems produce volumetric images in just one rotation of the x-ray beam source and detector mounted on a conventional linear accelerator, allowing the patient's current condition to be fed into treatment planning in near real time. MV CBCT images can be directly registered to a reference CT data set, usually kilo-voltage fan-beam computed tomography (kV FBCT), on the treatment planning system, and the registered images can be used to adjust patient set-up error. However, to use MV CBCT images in radiotherapy, reliable electron density (ED) distributions are required. Patient scattering and beam hardening and softening effects, caused by the different energies of kV CT and MV CBCT, can cause cupping artifacts in MV CBCT images and distort the Hounsfield Unit (HU) to ED conversion. The goal of this study was to enable reliable use of MV CBCT images in dose calculation by correcting the distorted HU to ED conversion, using the relationship between HU and ED from the kV FBCT and MV CBCT images. The HU-density conversion was performed on the MV CBCT image set, and the dose calculation and comparison of dose distributions from the MV CBCT image set with and without the HU-density conversion method were performed. Percentage differences above 3% were reduced by applying the density calibration method; as a result, the total error could be reduced to under 3%. The present study demonstrates that dose calculation accuracy using an MV CBCT image set can be improved by applying the HU-density conversion method. An advantage of this study compared to other approaches is that HU

  2. bcrm: Bayesian Continual Reassessment Method Designs for Phase I Dose-Finding Trials

    Directory of Open Access Journals (Sweden)

    Michael Sweeting

    2013-09-01

    Full Text Available This paper presents the R package bcrm for conducting and assessing Bayesian continual reassessment method (CRM) designs in Phase I dose-escalation trials. CRM designs are a class of adaptive design that select the dose to be given to the next recruited patient based on accumulating toxicity data from patients already recruited into the trial, often using Bayesian methodology. Despite the original CRM design being proposed in 1990, the methodology is still not widely implemented within oncology Phase I trials. The aim of this paper is to demonstrate, through example of the bcrm package, how a variety of possible designs can be easily implemented within the R statistical software, and how properties of the designs can be communicated to trial investigators using simple textual and graphical output obtained from the package. This in turn should facilitate an iterative process allowing a design to be chosen that is suited to the needs of the investigator. Our bcrm package is the first to offer a large comprehensive choice of CRM designs, priors and escalation procedures, which can be easily compared and contrasted within the package through the assessment of operating characteristics.
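
    bcrm itself is an R package; as a language-neutral sketch of the CRM logic it implements, the following Python snippet performs one Bayesian update under the common one-parameter power model with a normal prior and a grid posterior. The skeleton, prior width and toxicity data are hypothetical, not bcrm defaults.

        import numpy as np

        skeleton = np.array([0.05, 0.10, 0.20, 0.35, 0.50])  # prior DLT probabilities
        target = 0.25

        # Toxicity data so far: (dose index, n treated, n DLTs) -- hypothetical.
        data = [(1, 3, 0), (2, 3, 1)]

        a = np.linspace(-3, 3, 601)                # grid over the model parameter
        log_post = -0.5 * a**2 / 1.34**2           # N(0, 1.34^2) prior, up to a constant
        for dose_idx, n, tox in data:
            p = skeleton[dose_idx] ** np.exp(a)    # power model: p_i(a) = p0_i^exp(a)
            log_post += tox * np.log(p) + (n - tox) * np.log(1 - p)

        post = np.exp(log_post - log_post.max())
        post /= post.sum()

        # Posterior mean DLT probability at each dose; next dose is closest to target.
        p_hat = np.array([(skeleton[i] ** np.exp(a) * post).sum() for i in range(5)])
        next_dose = int(np.argmin(np.abs(p_hat - target)))
        print(p_hat.round(3), "-> next dose level:", next_dose + 1)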

  3. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  4. Evaluation of the stepwise collimation method for the reduction of the patient dose in full spine radiography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Boram [Korea University, Seoul (Korea, Republic of); Sun Medical Center, Daejeon (Korea, Republic of); Lee, Sunyoung [Sun Medical Center, Daejeon (Korea, Republic of); Yang, Injeong [Seoul National University Hospital Medical Center, Seoul (Korea, Republic of); Yoon, Myeonggeun [Korea University, Seoul (Korea, Republic of)

    2014-05-15

    The purpose of this study is to evaluate the dose reduction when using the stepwise collimation method for scoliosis patients undergoing full spine radiography. A Monte Carlo simulation was carried out to acquire dose vs. volume data for organs at risk (OAR) in the human body. While the effective doses in full spine radiography were reduced by 8, 15, 27 and 44% by using four different sizes of the collimation, the doses to the skin were reduced by 31, 44, 55 and 66%, indicating that the reduction of the dose to the skin is higher than that to organs inside the body. Although the reduction rates were low for the gonad, being 9, 14, 18 and 23%, there was more than a 30% reduction in the dose to the heart, suggesting that the dose reduction depends significantly on the location of the OARs in the human body. The reduction rate of the secondary cancer risk based on the excess absolute risk (EAR) varied from 0.6 to 3.4 per 10,000 persons, depending on the size of the collimation. Our results suggest that the stepwise collimation method in full spine radiography can effectively reduce the patient dose and the radiation-induced secondary cancer risk.

  5. Benchmark job – Watch out!

    CERN Document Server

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  6. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  7. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  8. A passive dosing method to determine fugacity capacities and partitioning properties of leaves

    DEFF Research Database (Denmark)

    Bolinius, Damien Johann; Macleod, Matthew; McLachlan, Michael S.;

    2016-01-01

    The capacity of leaves to take up chemicals from the atmosphere and water influences how contaminants are transferred into food webs and soil. We provide a proof of concept of a passive dosing method to measure leaf/polydimethylsiloxane partition ratios (Kleaf/PDMS) for intact leaves, using polychlorinated biphenyls (PCBs) as model chemicals. Rhododendron leaves held in contact with PCB-loaded PDMS reached between 76 and 99% of equilibrium within 4 days for PCBs 3, 4, 28, 52, 101, 118, 138 and 180. Equilibrium Kleaf/PDMS extrapolated from the uptake kinetics measured over 4 days ranged from 0.075 (PCB 180) to 0.371 (PCB 3). The Kleaf/PDMS data can readily be converted to fugacity capacities of leaves (Zleaf) and subsequently leaf/water or leaf/air partition ratios (Kleaf/water and Kleaf/air) using partitioning data from the literature. Results of our measurements are within the variability

  9. Preliminary Study on the Quantitative Value Transfer Method of Absorbed Dose to Water in 60Co γ Radiation

    Directory of Open Access Journals (Sweden)

    SONG Ming-zhe

    2015-01-01

    Full Text Available Absorbed dose to water in 60Co γ radiation is the basic physical quantity in the quantitative value system of radiation therapy and is essential for it. A study of the quantitative value transfer method for absorbed dose to water in 60Co γ radiation could provide important technical support for establishing the Chinese absorbed-dose-to-water quantity system. Based on a PTW-30013 ionization chamber, a PMMA water phantom, and a 3D mobile platform, a quantitative value transfer standard instrument was established, and, following the requirements of IAEA TRS-398, a preliminary study of the 60Co absorbed-dose-to-water value transfer method was carried out. After the value transfer, the expanded uncertainty of the absorbed-dose-to-water calibration factor of the PTW-30013 was 0.90% (k=2), and the expanded uncertainty of the absorbed dose to water of the 60Co γ reference radiation at the Radiation Metrology Center (an IAEA SSDL) was 1.4% (k=2). The results showed that this value transfer method can effectively reduce the uncertainty of 60Co absorbed dose to water in a Secondary Standard Dosimetry Laboratory.
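
    A toy sketch of the value-transfer arithmetic, assuming the usual definition of the calibration factor N_Dw = D_w / M and a coverage factor k = 2 for the expanded uncertainty; all numbers are illustrative, not the laboratory's actual budget.

        import numpy as np

        D_w = 0.750          # delivered absorbed dose to water (Gy), illustrative
        M = 1.392e-8         # corrected chamber reading (C), illustrative
        N_Dw = D_w / M       # calibration factor (Gy/C)

        # Combine relative standard uncertainty components in quadrature,
        # then expand with coverage factor k = 2.
        u_rel = np.array([0.0030, 0.0020, 0.0015, 0.0010])
        u_c = np.sqrt((u_rel**2).sum())
        U = 2 * u_c
        print(f"N_Dw = {N_Dw:.4e} Gy/C, U(k=2) = {U*100:.2f} %")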

  10. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts to support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent

  11. Simple Method to Estimate Mean Heart Dose From Hodgkin Lymphoma Radiation Therapy According to Simulation X-Rays

    Energy Technology Data Exchange (ETDEWEB)

    Nimwegen, Frederika A. van [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Cutter, David J. [Clinical Trial Service Unit, University of Oxford, Oxford (United Kingdom); Oxford Cancer Centre, Oxford University Hospitals NHS Trust, Oxford (United Kingdom); Schaapveld, Michael [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Rutten, Annemarieke [Department of Radiology, The Netherlands Cancer Institute, Amsterdam (Netherlands); Kooijman, Karen [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Krol, Augustinus D.G. [Department of Radiation Oncology, Leiden University Medical Center, Leiden (Netherlands); Janus, Cécile P.M. [Department of Radiation Oncology, Erasmus MC Cancer Center, Rotterdam (Netherlands); Darby, Sarah C. [Clinical Trial Service Unit, University of Oxford, Oxford (United Kingdom); Leeuwen, Flora E. van [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Aleman, Berthe M.P., E-mail: b.aleman@nki.nl [Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam (Netherlands)

    2015-05-01

    Purpose: To describe a new method to estimate the mean heart dose for Hodgkin lymphoma patients treated several decades ago, using delineation of the heart on radiation therapy simulation X-rays. Mean heart dose is an important predictor for late cardiovascular complications after Hodgkin lymphoma (HL) treatment. For patients treated before the era of computed tomography (CT)-based radiotherapy planning, retrospective estimation of radiation dose to the heart can be labor intensive. Methods and Materials: Patients for whom cardiac radiation doses had previously been estimated by reconstruction of individual treatments on representative CT data sets were selected at random from a case–control study of 5-year Hodgkin lymphoma survivors (n=289). For 42 patients, cardiac contours were outlined on each patient's simulation X-ray by 4 different raters, and the mean heart dose was estimated as the percentage of the cardiac contour within the radiation field multiplied by the prescribed mediastinal dose and divided by a correction factor obtained by comparison with individual CT-based dosimetry. Results: According to the simulation X-ray method, the medians of the mean heart doses obtained from the cardiac contours outlined by the 4 raters were 30 Gy, 30 Gy, 31 Gy, and 31 Gy, respectively, following prescribed mediastinal doses of 25-42 Gy. The absolute-agreement intraclass correlation coefficient was 0.93 (95% confidence interval 0.85-0.97), indicating excellent agreement. Mean heart dose was 30.4 Gy with the simulation X-ray method, versus 30.2 Gy with the representative CT-based dosimetry, and the between-method absolute-agreement intraclass correlation coefficient was 0.87 (95% confidence interval 0.80-0.95), indicating good agreement between the two methods. Conclusion: Estimating mean heart dose from radiation therapy simulation X-rays is reproducible and fast, takes individual anatomy into account, and yields results comparable to the labor
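
    The estimator reduces to one line of arithmetic, sketched below with made-up inputs.

        # The abstract's estimator: mean heart dose = (fraction of the cardiac
        # contour inside the field) x prescribed mediastinal dose / correction
        # factor. All values below are illustrative, not study data.
        fraction_in_field = 0.85   # 85 % of the heart contour within the field
        prescribed_dose = 36.0     # Gy, mediastinal prescription
        correction_factor = 1.02   # from comparison with CT-based dosimetry

        mean_heart_dose = fraction_in_field * prescribed_dose / correction_factor
        print(f"estimated mean heart dose: {mean_heart_dose:.1f} Gy")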

  12. Standard Guide for Benchmark Testing of Light Water Reactor Calculations

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This guide covers general approaches for benchmarking neutron transport calculations in light water reactor systems. A companion guide (Guide E2005) covers use of benchmark fields for testing neutron transport calculations and cross sections in well controlled environments. This guide covers experimental benchmarking of neutron fluence calculations (or calculations of other exposure parameters such as dpa) in more complex geometries relevant to reactor surveillance. Particular sections of the guide discuss: the use of well-characterized benchmark neutron fields to provide an indication of the accuracy of the calculational methods and nuclear data when applied to typical cases; and the use of plant specific measurements to indicate bias in individual plant calculations. Use of these two benchmark techniques will serve to limit plant-specific calculational uncertainty, and, when combined with analytical uncertainty estimates for the calculations, will provide uncertainty estimates for reactor fluences with ...

  13. Intra and inter-organizational learning from benchmarking IS services

    DEFF Research Database (Denmark)

    Mengiste, Shegaw Anagaw; Kræmmergaard, Pernille; Hansen, Bettina

    2016-01-01

    This paper reports a case study of benchmarking IS services in Danish municipalities. Drawing on Holmqvist's (2004) organizational learning model of exploration and exploitation, the paper explores intra and inter-organizational learning dynamics among Danish municipalities that have been involved in benchmarking their IS services and functions since 2006. Particularly, this research tackled existing IS benchmarking approaches and methods by turning to a learning-oriented perspective and by empirically exploring the dynamic process of intra and inter-organizational learning from benchmarking IS/IT services. The paper also makes a contribution by emphasizing the importance of informal cross-municipality consortiums to facilitate learning and experience sharing across municipalities. The findings of the case study demonstrated that the IS benchmarking scheme is relatively successful in sharing good practices

  14. Derivative Spectrophotometric Method for Estimation of Antiretroviral Drugs in Fixed Dose Combinations

    Directory of Open Access Journals (Sweden)

    Mohite P.B.

    2012-06-01

    Full Text Available Purpose: Lamivudine is a cytosine analogue and zidovudine a cytidine analogue; both are used as antiretroviral agents. Both drugs are available in tablet dosage forms at doses of 150 mg for LAM and 300 mg for ZID, respectively. Method: The method employed is based on first-order derivative spectroscopy. Wavelengths of 279 nm and 300 nm were selected for the estimation of lamivudine and zidovudine, respectively, by taking the first-order derivative spectra. The concentrations of both drugs were determined by the proposed method. The results of the analysis have been validated statistically and by recovery studies as per ICH guidelines. Result: Both drugs obey Beer's law in the concentration range 10-50 μg mL-1 for LAM and ZID, with regression coefficients 0.9998 and 0.9999, intercepts -0.0677 and -0.0043, and slopes 0.0457 and 0.0391 for LAM and ZID, respectively. The accuracy and reproducibility results are close to 100% with 2% RSD. Conclusion: Simple, accurate, precise, sensitive and economical procedures for the simultaneous estimation of lamivudine and zidovudine in tablet dosage form have been developed.

  15. Evaluation of the applicability of the Benchmark approach to existing toxicological data. Framework: Chemical compounds in the working place

    OpenAIRE

    Appel MJ; Bouman HGM; Pieters MN; Slob W; Adviescentrum voor chemische arbeidsomstandigheden (ACCA) TNO; CSR

    2001-01-01

    Five chemicals used in the workplace, for which a risk assessment had already been carried out, were selected, and the relevant critical studies were re-analyzed by the Benchmark approach. The endpoints involved included continuous and ordinal data. Dose-response modeling could be reasonably applied to the dose-response data encountered, and Critical Effect Doses (CEDs) could be derived for almost all of the endpoints considered. The resulting benchmark dose for the study as a whole was close to the NO...

  16. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias

    2016-09-16

    In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as, a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as, generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs. (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx.). © Springer International Publishing AG 2016.

  17. Comparing Environmental Dose Rate Meters: A Method to Determine Natural and Non-natural Variations in External Radiation Levels

    Energy Technology Data Exchange (ETDEWEB)

    Reinen, A.J.M.; Slaper, H.; Overwater, R.M.W.; Stoop, P

    2000-07-01

    A method is described to determine low excess dose rates from a radiation source in the environment, which are small compared to the natural fluctuations of the background radiation. First a 'virtual reference dose rate meter' is constructed from data of the national monitoring network, to capture the natural variations of the background radiation. Results from this virtual monitor are then compared to data of dose rate meters at sites of interest, to determine non-natural or very local natural variations and excess dose rates. Daily averaged excess dose rates down to 2 to 3 nSv/h can be identified. The method is applied successfully near nuclear installations in the Netherlands and can be used for all types of dose rate meters and sample frequencies. Finally, the calculations to derive the 'virtual reference dose rate meter' can also be used as a quality assessment tool for environmental radiation monitoring networks. (author)
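
    A minimal sketch of the 'virtual reference dose rate meter' idea, with hypothetical hourly readings; the reference is taken here as the plain network average, a simplification of whatever weighting the actual method uses.

        import numpy as np

        # 20 network stations x 24 hourly readings, nSv/h (hypothetical).
        network = np.random.default_rng(2).normal(80.0, 3.0, size=(20, 24))
        # Site of interest: same background plus a 2.5 nSv/h source excess.
        site = network.mean(axis=0) + 2.5

        virtual_reference = network.mean(axis=0)   # natural-variation estimate
        excess = site - virtual_reference
        daily_excess = excess.mean()               # daily averaged excess dose rate
        print(f"daily averaged excess: {daily_excess:.1f} nSv/h")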

  18. The Nature and Predictive Validity of a Benchmark Assessment Program in an American Indian School District

    Science.gov (United States)

    Payne, Beverly J. R.

    2013-01-01

    This mixed methods study explored the nature of a benchmark assessment program and how well the benchmark assessments predicted End-of-Grade (EOG) and End-of-Course (EOC) test scores in an American Indian school district. Five major themes were identified and used to develop a Dimensions of Benchmark Assessment Program Effectiveness model:…

  19. Fluence map optimization (FMO) with dose-volume constraints in IMRT using the geometric distance sorting method

    Science.gov (United States)

    Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang

    2012-10-01

    A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving fluence map optimization with dose-volume constraints, one of the most essential tasks in inverse planning for IMRT. The framework of the proposed method is basically an iterative process which begins with a simple linearly constrained quadratic optimization model without considering any dose-volume constraints; dose constraints for the voxels violating the dose-volume constraints are then gradually added into the quadratic optimization model step by step until all dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. For choosing proper candidate voxels for the current constraint-adding step, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of voxels. The new geometric distance sorting technique mostly reduces the unexpected increase of the objective function value inevitably caused by constraint adding, and can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation for the proposed method is also given and a proposition is proved to support the heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases (a head-and-neck, a prostate, a lung, and an oropharyngeal case) and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes. It is a more efficient optimization technique to some extent for choosing constraints than the dose

  20. A novel method for interactive multi-objective dose-guided patient positioning

    Science.gov (United States)

    Haehnle, Jonas; Süss, Philipp; Landry, Guillaume; Teichert, Katrin; Hille, Lucas; Hofmaier, Jan; Nowak, Dimitri; Kamp, Florian; Reiner, Michael; Thieke, Christian; Ganswindt, Ute; Belka, Claus; Parodi, Katia; Küfer, Karl-Heinz; Kurz, Christopher

    2017-01-01

    In intensity-modulated radiation therapy (IMRT), 3D in-room imaging data is typically utilized for accurate patient alignment on the basis of anatomical landmarks. In the presence of non-rigid anatomical changes, it is often not obvious which patient position is most suitable. Thus, dose-guided patient alignment is an interesting approach to use available in-room imaging data for up-to-date dose calculation, aimed at finding the position that yields the optimal dose distribution. This contribution presents the first implementation of dose-guided patient alignment as a multi-criteria optimization problem. User-defined clinical objectives are employed for setting up a multi-objective problem. Using pre-calculated dose distributions at a limited number of patient shifts and dose interpolation, a continuous space of Pareto-efficient patient shifts becomes accessible. Pareto sliders facilitate interactive browsing of the possible shifts with real-time dose display to the user. Dose interpolation accuracy is validated and the potential of multi-objective dose-guided positioning demonstrated for three head and neck (H&N) and three prostate cancer patients. Dose-guided positioning is compared to replanning for all cases. A delineated replanning CT served as surrogate for in-room imaging data. Dose interpolation accuracy was high. Using a 2% dose difference criterion, a median pass-rate of 95.7% for H&N and 99.6% for prostate cases was determined in a comparison to exact dose calculations. For all patients, dose-guided positioning allowed a clinically preferable dose distribution to be found compared to bony anatomy based alignment. For all H&N cases, mean dose to the spared parotid glands was below 26 Gy (up to 27.5 Gy with bony alignment) and clinical target volume (CTV) V95% above 99.1% (compared to 95.1%). For all prostate patients, CTV V95% was above 98.9% (compared to 88.5%) and V50Gy to the rectum below 50% (compared to 56

  1. A method for verification of treatment times for high-dose-rate intraluminal brachytherapy treatment

    Directory of Open Access Journals (Sweden)

    Muhammad Asghar Gadhi

    2016-06-01

    Full Text Available Purpose: This study aimed to increase the quality of high dose rate (HDR) intraluminal brachytherapy treatment. For this purpose, an easy, fast and accurate patient-specific quality assurance (QA) tool has been developed and implemented at the Bahawalpur Institute of Nuclear Medicine and Oncology (BINO), Bahawalpur, Pakistan. Methods: The ABACUS 3.1 treatment planning system (TPS) was used for treatment planning and calculation of the total dwell time, and the results were compared with the time calculated using the proposed method. The method has been used to verify the total dwell time for different rectum applicators over the relevant treatment lengths (2-7 cm) and depths (1.5-2.5 cm), different oesophagus applicators over the relevant treatment lengths (6-10 cm) and depths (0.9 and 1.0 cm), and a bronchus applicator over the relevant treatment lengths (4-7.5 cm) and depth (0.5 cm). Results: The average percentage differences between the treatment time from manual calculation and that calculated by the TPS are 0.32% (standard deviation 1.32%) for the rectum, 0.24% (standard deviation 2.36%) for the oesophagus and 1.96% (standard deviation 0.55%) for the bronchus, respectively. These results indicate that the proposed method is valuable for independent verification of patient-specific treatment planning QA. Conclusion: The technique illustrated in the current study is easy, simple, quick and useful for independent verification of the total dwell time in HDR intraluminal brachytherapy. It is able to identify human error-related planning mistakes and to evaluate the quality of treatment planning, enhancing the quality of brachytherapy treatment and the reliability of the system.
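
    The independent check reduces to a percentage-difference comparison, sketched below with illustrative dwell times and a hypothetical 5% action threshold.

        # Compare the manually calculated total dwell time with the TPS value and
        # report the percentage difference. Values are illustrative.
        t_manual = 412.6   # s, total dwell time from the hand calculation
        t_tps = 408.9      # s, total dwell time from the ABACUS plan

        percent_diff = (t_manual - t_tps) / t_tps * 100
        print(f"dwell-time difference: {percent_diff:+.2f} %")
        assert abs(percent_diff) < 5, "investigate plan before treatment"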

  2. Evaluation of HIFU-induced lesion region using temperature threshold and equivalent thermal dose methods

    Science.gov (United States)

    Chang, Shihui; Xue, Fanfan; Zhou, Wenzheng; Zhang, Ji; Jian, Xiqi

    2017-03-01

    Usually, numerical simulation is used to predict the acoustic field and temperature distribution of high intensity focused ultrasound (HIFU). In this paper, the simulated lesion volumes obtained with a temperature threshold (TRT) of 60 °C and an equivalent thermal dose (ETD) of 240 min were compared with experimental results obtained from in vitro animal tissue experiments. In the simulation, the calculation model was established according to the in vitro tissue experiment, and the finite difference time domain (FDTD) method was used to calculate the acoustic field and temperature distribution in bovine liver via the Westervelt formula and the Pennes bio-heat transfer equation, taking the non-linear characteristics of the ultrasound into account. In the experiment, fresh bovine liver was exposed for 8 s, 10 s, or 12 s under different power conditions (150 W, 170 W, 190 W, 210 W), and each exposure was repeated 6 times at the same dose. After the exposures, the liver was sliced and photographed every 0.2 mm, and the area of the lesion region in every photo was calculated. Each area value was then multiplied by 0.2 mm and the products summed to obtain the approximate volume of the lesion region. The comparison shows that the lesion volume calculated with TRT 60 °C in simulation was much closer to the lesion volume obtained in experiment; the volume of the region above 60 °C was larger than the experimental results, but the volume deviation did not exceed 10%. The lesion volume calculated with ETD 240 min was larger than that calculated with TRT 60 °C in simulation, with volume deviations ranging from 4.9% to 23.7%.
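
    A sketch of the two lesion criteria applied to a single voxel's simulated temperature history; the Sapareto-Dewey CEM43 formulation is assumed here as the definition of the equivalent thermal dose, and the heating curve is hypothetical.

        import numpy as np

        t = np.linspace(0, 12, 1201)                       # s
        temperature = 37 + 30 * np.exp(-((t - 6) / 3)**2)  # hypothetical heating curve

        # Equivalent minutes at 43 C (Sapareto-Dewey): CEM43 = sum R^(43-T) * dt.
        dt_min = (t[1] - t[0]) / 60.0
        R = np.where(temperature >= 43.0, 0.5, 0.25)
        cem43 = np.sum(R ** (43.0 - temperature) * dt_min)

        lesion_by_trt = temperature.max() >= 60.0   # temperature threshold criterion
        lesion_by_etd = cem43 >= 240.0              # equivalent thermal dose criterion
        print(f"CEM43 = {cem43:.0f} min, TRT60: {lesion_by_trt}, ETD240: {lesion_by_etd}")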

  3. A new method to estimate doses to the normal tissues after past extended and involved field radiotherapy for Hodgkin lymphoma

    DEFF Research Database (Denmark)

    Maraldo, Maja V; Lundemann, Michael; Vogelius, Ivan R;

    2015-01-01

    INTRODUCTION: Reconstruction of radiotherapy (RT) performed decades ago is challenging, but is necessary to address dose-response questions from epidemiological data and may be relevant in re-irradiation scenarios. Here, a novel method to reconstruct extended and involved field RT for patients with Hodgkin lymphoma was used. MATERIALS AND METHODS: For 46 model patients, 29 organs at risk (OARs) were contoured and seven treatment fields reconstructed (mantle, mediastinal, right/left neck, right/left axillary, and spleen field). Extended and involved field RT were simulated by generating RT plans by superpositions of the seven individual fields. The mean (standard deviation) of the 46 individual mean organ doses were extracted as percent of prescribed dose for each field superposition. RESULTS: The estimated mean doses to the OARs from 17 field combinations were presented. The inter-patient variability
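
    The superposition step can be sketched as a simple lookup-and-sum over per-field mean organ doses stored as percentages of the prescribed dose; the table values and the crude cap at 100% are placeholders, not the study's data or its overlap handling.

        # Per-field mean organ doses as a percentage of the prescribed dose
        # (placeholder numbers for two organs and three of the seven fields).
        mean_dose_pct = {
            "heart":   {"mantle": 62.0, "mediastinal": 71.0, "spleen": 4.0},
            "thyroid": {"mantle": 95.0, "mediastinal": 18.0, "spleen": 0.5},
        }

        def reconstruct(organ, fields, prescribed_gy):
            # Sum the contributions of the combined fields; the cap at 100% is a
            # crude stand-in for proper handling of overlapping fields.
            pct = sum(mean_dose_pct[organ][f] for f in fields)
            return min(pct, 100.0) / 100.0 * prescribed_gy

        print(f"heart, mantle field, 40 Gy: {reconstruct('heart', ['mantle'], 40.0):.1f} Gy")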

  4. Benchmarking in academic pharmacy departments.

    Science.gov (United States)

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  5. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption [Dutch, translated] Greenpeace Netherlands asked CE Delft to design a sustainability benchmark for transport biofuels and to score the various biofuels against it. For comparison, electric driving, driving on hydrogen and driving on petrol or diesel were also included. Research and growing insight increasingly show that transport fuels based on biomass sometimes cause just as many or even more greenhouse gas emissions than fossil fuels such as petrol and diesel. For Greenpeace Netherlands, CE Delft has set out the current insights into the sustainability of fossil fuels, biofuels and electric driving, looking at the effects of the fuels on three sustainability criteria, with greenhouse gas emissions weighing heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption.

  6. Benchmark Assessment of Density Functional Methods on Group II-VI MX (M = Zn, Cd; X = S, Se, Te) Quantum Dots.

    Science.gov (United States)

    Azpiroz, Jon M; Ugalde, Jesus M; Infante, Ivan

    2014-01-14

    In this work, we build a benchmark data set of geometrical parameters, vibrational normal modes, and low-lying excitation energies for MX quantum dots, with M = Cd, Zn, and X = S, Se, Te. The reference database has been constructed by ab initio resolution-of-identity second-order approximate coupled cluster RI-CC2/def2-TZVPP calculations on (MX)6 model molecules in the wurtzite structure. We have tested 26 exchange-correlation density functionals, ranging from local generalized gradient approximation (GGA) and hybrid GGA to meta-GGA, meta-hybrid, and long-range corrected. The best overall functional is the hybrid PBE0 that outperforms all other functionals, especially for excited state energies, which are of particular relevance for the systems studied here. Among the DFT methodologies with no Hartree-Fock exchange, the M06-L is the best one. Local GGA functionals usually provide satisfactory results for geometrical structures and vibrational frequencies but perform rather poorly for excitation energies. Regarding the CdSe cluster, we also present a test of several basis sets that include relativistic effects via effective core potentials (ECPs) or via the ZORA approximation. The best basis sets in terms of computational efficiency and accuracy are the SBKJC and def2-SV(P). The LANL2DZ basis set, commonly employed nowadays on these types of nanoclusters, performs very disappointingly. Finally, we also provide some suggestions on how to perform calculations on larger systems keeping a balance between computational load and accuracy.

  7. Active vibration control of nonlinear benchmark buildings

    Institute of Scientific and Technical Information of China (English)

    ZHOU Xing-de; CHEN Dao-zheng

    2007-01-01

    Existing nonlinear model reduction methods do not fit the nonlinear benchmark buildings, as their vibration equations constitute a non-affine system. Moreover, controllers designed directly with nonlinear control strategies have a high order and are difficult to apply in practice. Therefore, a new active vibration control approach suited to nonlinear buildings is proposed. The idea of the proposed approach is to identify the structure, linearize the structural model, and exert the control force on the built model according to the force action principle. The approach has better practicability because the built model can be reduced by the balanced reduction method based on the empirical Gramian matrix. A three-story benchmark structure is presented, and the simulation results illustrate that the proposed method is viable for civil engineering structures.

  8. Benchmarking Dosimetric Quality Assessment of Prostate Intensity-Modulated Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Senthi, Sashendra, E-mail: sasha.senthi@petermac.org [Division of Radiation Oncology, Peter MacCallum Cancer Center, East Melbourne, VIC (Australia); Gill, Suki S. [Division of Radiation Oncology, Peter MacCallum Cancer Center, East Melbourne, VIC (Australia); Haworth, Annette; Kron, Tomas; Cramb, Jim [Department of Physical Sciences, Peter MacCallum Cancer Center, East Melbourne, VIC (Australia); Rolfo, Aldo [Radiation Therapy Services, Peter MacCallum Cancer Center, East Melbourne, VIC (Australia); Thomas, Jessica [Biostatistics and Clinical Trials, Peter MacCallum Cancer Center, East Melbourne, VIC (Australia); Duchesne, Gillian M. [Division of Radiation Oncology, Peter MacCallum Cancer Center, East Melbourne, VIC (Australia); Hamilton, Christopher H.; Joon, Daryl Lim [Radiation Oncology Department, Austin Repatriation Hospital, Heidelberg, VIC (Australia); Bowden, Patrick [Radiation Oncology Department, Tattersall' s Cancer Center, East Melbourne, VIC (Australia); Foroudi, Farshad [Division of Radiation Oncology, Peter MacCallum Cancer Center, East Melbourne, VIC (Australia)

    2012-02-01

    Purpose: To benchmark the dosimetric quality assessment of prostate intensity-modulated radiotherapy and determine whether the quality is influenced by disease or treatment factors. Patients and Methods: We retrospectively analyzed the data from 155 consecutive men treated radically for prostate cancer using intensity-modulated radiotherapy to 78 Gy between January 2007 and March 2009 across six radiotherapy treatment centers. The plan quality was determined by the measures of coverage, homogeneity, and conformity. Tumor coverage was measured using the planning target volume (PTV) receiving 95% and 100% of the prescribed dose (V95% and V100%, respectively) and the clinical target volume (CTV) receiving 95% and 100% of the prescribed dose. Homogeneity was measured using the sigma index of the PTV and CTV. Conformity was measured using the lesion coverage factor, healthy tissue conformity index, and the conformity number. Multivariate regression models were created to determine the relationship between these and T stage, risk status, androgen deprivation therapy use, treatment center, planning system, and treatment date. Results: The largest discriminatory measurements of coverage, homogeneity, and conformity were the PTV V95%, PTV sigma index, and conformity number. The mean PTV V95% was 92.5% (95% confidence interval, 91.3-93.7%). The mean PTV sigma index was 2.10 Gy (95% confidence interval, 1.90-2.20). The mean conformity number was 0.78 (95% confidence interval, 0.76-0.79). The treatment center independently influenced the coverage, homogeneity, and conformity (all p < .0001). The planning system independently influenced homogeneity (p = .038) and conformity (p = .021). The treatment date independently influenced the PTV V95% only, with it being better at the start (p = .013). Risk status, T stage, and the use of androgen deprivation therapy did not influence any aspect of plan quality. Conclusion: Our study has benchmarked measures
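
    The coverage and conformity measures named here reduce to simple voxel counts once a plan's dose grid and target mask are available. Below is a minimal sketch, assuming a voxelized dose array and the van't Riet definition of the conformity number; the function name and interface are illustrative, not taken from the study.

    import numpy as np

    def plan_quality_indices(dose, target_mask, prescription):
        """Coverage and conformity indices from a 3D dose grid (Gy) and a
        boolean PTV mask; conformity follows the van't Riet conformity number."""
        treated = dose >= 0.95 * prescription            # 95% isodose volume
        tv = target_mask.sum()                           # target volume (voxels)
        piv = treated.sum()                              # prescription isodose volume
        tv_piv = (target_mask & treated).sum()           # target inside the isodose

        v95 = tv_piv / tv                                # PTV V95% (coverage)
        lesion_coverage_factor = tv_piv / tv
        healthy_tissue_conformity = tv_piv / piv
        conformity_number = (tv_piv / tv) * (tv_piv / piv)
        return v95, lesion_coverage_factor, healthy_tissue_conformity, conformity_number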

  9. Benchmarking in water project analysis

    Science.gov (United States)

    Griffin, Ronald C.

    2008-11-01

    The with/without principle of cost-benefit analysis is examined for the possible bias that it brings to water resource planning. Theory and examples for this question are established. Because benchmarking against the demonstrably low without-project hurdle can detract from economic welfare and can fail to promote efficient policy, improvement opportunities are investigated. In lieu of the traditional, without-project benchmark, a second-best-based "difference-making benchmark" is proposed. The project authorizations and modified review processes instituted by the U.S. Water Resources Development Act of 2007 may provide for renewed interest in these findings.

  10. SU-F-BRF-09: A Non-Rigid Point Matching Method for Accurate Bladder Dose Summation in Cervical Cancer HDR Brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Chen, H; Zhen, X; Zhou, L [Southern Medical University, Guangzhou, Guangdong (China); Zhong, Z [The University of Texas at Dallas, Department of Computer Science, TX (United States); Pompos, A; Yan, H; Jiang, S; Gu, X [UT Southwestern Medical Center, Dallas, TX (United States)

    2014-06-15

    Purpose: To propose and validate a deformable point matching scheme for surface deformation to facilitate accurate bladder dose summation for fractionated HDR cervical cancer treatment. Methods: A deformable point matching scheme based on the thin plate spline robust point matching (TPS-RPM) algorithm is proposed for bladder surface registration. The surfaces of bladders segmented from fractional CT images are extracted and discretized with triangular surface meshes. The deformation between two bladder surfaces is obtained by matching the two meshes' vertices via the TPS-RPM algorithm, and the deformation vector field (DVF) characterizing this deformation is estimated by B-spline approximation. Numerically, the algorithm is quantitatively compared with the Demons algorithm using five clinical cervical cancer cases by several metrics: vertex-to-vertex distance (VVD), Hausdorff distance (HD), percent error (PE), and conformity index (CI). Experimentally, the algorithm is validated on a balloon phantom with 12 surface fiducial markers. The balloon is inflated with different amounts of water, and the displacement of the fiducial markers is benchmarked as ground truth to study the accuracy of the TPS-RPM calculated DVFs. Results: In the numerical evaluation, the mean VVD is 3.7 (±2.0) mm after Demons and 1.3 (±0.9) mm after TPS-RPM. The mean HD is 14.4 mm after Demons and 5.3 mm after TPS-RPM. The mean PE is 101.7% after Demons and decreases to 18.7% after TPS-RPM. The mean CI is 0.63 after Demons and increases to 0.90 after TPS-RPM. In the phantom study, the mean Euclidean distance of the fiducials is 7.4±3.0 mm and 4.2±1.8 mm after Demons and TPS-RPM, respectively. Conclusions: The bladder wall deformation is more accurate using the feature-based TPS-RPM algorithm than the intensity-based Demons algorithm, indicating that TPS-RPM has the potential for accurate bladder dose deformation and dose summation for multi-fractional cervical HDR brachytherapy. This work is supported
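
    Two of the reported registration metrics are easy to state precisely. A minimal sketch, assuming the registered and reference meshes are available as matched (N, 3) vertex arrays (names illustrative):

    import numpy as np
    from scipy.spatial.distance import cdist

    def vertex_to_vertex_distance(a, b):
        """Mean Euclidean distance between corresponding vertices (N x 3 arrays)."""
        return np.linalg.norm(a - b, axis=1).mean()

    def hausdorff_distance(a, b):
        """Symmetric Hausdorff distance between two vertex sets."""
        d = cdist(a, b)                                   # pairwise distance matrix
        return max(d.min(axis=1).max(), d.min(axis=0).max())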

  11. Determination of surface dose rate of indigenous (32)P patch brachytherapy source by experimental and Monte Carlo methods.

    Science.gov (United States)

    Kumar, Sudhir; Srinivasan, P; Sharma, S D; Saxena, Sanjay Kumar; Bakshi, A K; Dash, Ashutosh; Babu, D A R; Sharma, D N

    2015-09-01

    The Isotope Production and Application Division of Bhabha Atomic Research Center developed (32)P patch sources for the treatment of superficial tumors. The surface dose rate of a newly developed (32)P patch source of nominal diameter 25 mm was measured experimentally using a standard extrapolation ionization chamber and Gafchromic EBT film. A Monte Carlo model of the (32)P patch source along with the extrapolation chamber was also developed to estimate the surface dose rates from these sources. The surface dose rates to tissue (cGy/min) measured using the extrapolation chamber and radiochromic films are 82.03±4.18 (k=2) and 79.13±2.53 (k=2), respectively. The two values of the surface dose rate obtained using the two independent experimental methods are in good agreement with each other, within a variation of 3.5%. The surface dose rate to tissue (cGy/min) estimated using the MCNP Monte Carlo code works out to be 77.78±1.16 (k=2). The maximum deviation between the surface dose rates to tissue obtained by Monte Carlo and the extrapolation chamber method is 5.2%, whereas the difference between the surface dose rates obtained by radiochromic film measurement and the Monte Carlo simulation is 1.7%. The three values of the surface dose rate of the (32)P patch source obtained by three independent methods are in good agreement with one another within the uncertainties associated with their measurement and calculation. This work has demonstrated that MCNP-based electron transport simulations are accurate enough for determining the dosimetry parameters of the indigenously developed (32)P patch sources for contact brachytherapy applications.

  12. Effect of Total Dose of Lidocaine on Duration of Adductor Canal Block, Assessed by Different Test Methods

    DEFF Research Database (Denmark)

    Jæger, Pia; Koscielniak-Nielsen, Zbigniew J.; Hilsted, Karen Lisa;

    2016-01-01

    BACKGROUND: The two aims of this study were to investigate the effect of the total dose of lidocaine on the duration of an adductor canal block (ACB) and to validate different methods used to assess nerve blocks. METHODS: We performed 2 blinded, randomized, controlled crossover trials, including healthy...

  13. Research on China's Benchmark Interest Rate Based on the DAG Method

    Institute of Scientific and Technical Information of China (English)

    许亦平; 洪露; 周芊

    2012-01-01

    With the progress of interest rate marketization in China, the choice of a benchmark interest rate has become increasingly important. This paper proposes a DAG-based approach to identify the key rate among a group of six actively traded, highly liquid short-term interest rates. The resulting causal graph shows that the overnight Shibor rate, the 7-day Shibor rate and the overnight interbank lending rate all play foundational roles in the money market, but no causal relationship exists among these three rates; that is, no single interest rate directly or indirectly influences all other rates and can therefore serve as the benchmark rate.

  14. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    In order to learn from the best, in 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable...... tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly ‘sustainable transport......’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly policies are not directly comparable across space and context. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which

  15. ASBench: benchmarking sets for allosteric discovery.

    Science.gov (United States)

    Huang, Wenkang; Wang, Guanqiao; Shen, Qiancheng; Liu, Xinyi; Lu, Shaoyong; Geng, Lv; Huang, Zhimin; Zhang, Jian

    2015-08-01

    Allostery allows for the fine-tuning of protein function. Targeting allosteric sites is gaining increasing recognition as a novel strategy in drug design. The key challenge in the discovery of allosteric sites has strongly motivated the development of computational methods and thus high-quality, publicly accessible standard data have become indispensable. Here, we report benchmarking data for experimentally determined allosteric sites through a complex process, including a 'Core set' with 235 unique allosteric sites and a 'Core-Diversity set' with 147 structurally diverse allosteric sites. These benchmarking sets can be exploited to develop efficient computational methods to predict unknown allosteric sites in proteins and reveal unique allosteric ligand-protein interactions to guide allosteric drug design.

  16. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advantages. However, whereas extant research has primarily focused on the importance and effects of using external benchmarks, less attention has been directed towards...... the conditions for using external benchmarks. We provide more insight into some of the issues and challenges related to using this mechanism for performance management and advancing competitiveness in organizations.

  17. Benchmarking Nature Tourism between Zhangjiajie and Repovesi

    OpenAIRE

    Wu, Zhou

    2014-01-01

    Since nature tourism became a booming business in modern society, more and more tourists choose nature-based tourism destinations for their holidays. Finding ways to promote Repovesi National Park is significant as a means of reinforcing the park's competitiveness. The topic of this thesis is both to identify, via benchmarking, good marketing strategies used by Zhangjiajie National Park and to provide suggestions to Repovesi National Park. The method used in t...

  18. An investigation of the dose distribution effect related with collimator angle for VMAT method

    Science.gov (United States)

    Tas, B.; Bilge, H.; Ozturk, S. Tokdemir

    2016-03-01

    The aim of this study is to investigate the efficacy of the dose distribution in eleven prostate cancer patients with single VMAT and double VMAT when varying the collimator angle. We generated optimum single and double VMAT treatment plans with a collimator angle of 0°. We then recalculated the single VMAT plans at different collimator angles (0°, 15°, 30°, 45°, 60°, 75°, 90°) and the double VMAT plans at paired angles (0°-0°, 15°-345°, 30°-330°, 45°-315°, 60°-300°, 75°-285°, 90°-270°) without changing any optimization parameters. The HI, DVH and 95% dose coverage of the PTV were calculated and analyzed. We found better dose distributions at certain collimator angles. Plans were verified using the two-dimensional ion chamber array Matrixx® and the three-dimensional Compass® software. A higher 95% dose coverage of the PTV was found for single VMAT at the 15° collimator angle and for double VMAT at the 60°-300° and 75°-285° collimator angles. Because of lower rectum doses, we suggest 75°-285°. Comparing single and double VMAT dose distributions, double VMAT gave better 95% dose coverage of the PTV and a lower HI, and this result was statistically significant. These findings support choosing 75°-285° collimator angles in double VMAT plans for prostate cancer.
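
    The abstract does not state which HI definition was used; the sketch below computes the PTV 95% coverage and one common (ICRU 83 style) homogeneity index from the dose values sampled inside the PTV (interface illustrative, not the study's code):

    import numpy as np

    def ptv_v95(ptv_dose, prescription):
        """Fraction of PTV voxels receiving at least 95% of the prescription."""
        return np.mean(ptv_dose >= 0.95 * prescription)

    def homogeneity_index(ptv_dose):
        """One common HI definition: (D2% - D98%) / D50%, where Dx% is the
        dose received by the hottest x% of the PTV (lower HI = more homogeneous)."""
        d2, d50, d98 = np.percentile(ptv_dose, [98, 50, 2])
        return (d2 - d98) / d50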

  19. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance; the performance impact of optimization, in the context of our methodology for CPU performance characterization, was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments are summarized more specifically in this report, as well as those smaller in magnitude supported by this grant.

  20. DETECTORS AND EXPERIMENTAL METHODS: ELDRS and dose-rate dependence of vertical NPN transistor

    Science.gov (United States)

    Zheng, Yu-Zhan; Lu, Wu; Ren, Di-Yuan; Wang, Gai-Li; Yu, Xue-Feng; Guo, Qi

    2009-01-01

    The enhanced low-dose-rate sensitivity (ELDRS) and dose-rate dependence of vertical NPN transistors are investigated in this article. The results show that the vertical NPN transistors exhibit more degradation at low dose rate, and that this degradation is attributed to an increase in base current. Oxide-trapped positive charge near the SiO2-Si interface and interface traps at that interface can contribute to the increase in base current, and the two-stage hydrogen mechanism associated with the space-charge effect explains the experimental results well.

  1. Intake risk and dose evaluation methods for workers in radiochemistry labs of a medical cyclotron facility.

    Science.gov (United States)

    Calandrino, Riccardo; del Vecchio, Antonella; Savi, Annarita; Todde, Sergio; Belloli, Sara

    2009-10-01

    The aim of this paper is to evaluate the risks and doses for the internal contamination of the radiochemistry staff in a high workload medical cyclotron facility. The doses from internal contamination derive from the inhalation of radioactive gas leakage from the cells by personnel involved in the synthesis processes and are calculated from urine sample measurements. Various models are considered for the calculation of the effective committed dose from the analysis of these urine samples, and the results are compared with data obtained from local environmental measurement of the radioactivity released inside the lab.

  2. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano;

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we...... compare numerical predictions of the concrete sample final shape for these two benchmark flows obtained by various research teams around the world using various numerical techniques. Our results show that all numerical techniques compared here give very similar results suggesting that numerical...

  3. A Two-stage Damage Detection Method with Application to the Phase I ASCE SHM Benchmark Building

    Institute of Scientific and Technical Information of China (English)

    刘朝; 雷鹰

    2011-01-01

    Recently, a new method has been proposed by the authors for detecting structural local damage under limited input and output measurements. This method can be extended to detect local damage in complex structures based on a substructure approach. In this paper, based on this damage detection and localization method, a two-stage damage detection strategy is developed and applied to the ASCE SHM benchmark building to test its efficacy and to provide a systematic solution to the Phase I benchmark problem for damage detection. In the first stage, an 8-DOF identification model is used to identify the floors and directions (X or Y) in which damage is present. The detection then focuses on the floors where damage occurs. A substructure approach is utilized for damage localization in the second stage: a 12-DOF identification model is used for the substructure containing the damaged floors to identify the exact locations of damage. Structural parameters and the unknown inputs are identified by a recursive algorithm based on the sequential application of the extended Kalman estimator for the extended state vector and least-squares estimation for the unknown inputs. Only a limited number of measured acceleration responses of the benchmark structure, subject to unmeasured excitation inputs, are utilized. The damage detection results indicate that the new method can detect and localize the various damage patterns of the benchmark problem with good accuracy.

  4. Benchmark Data Through The International Reactor Physics Experiment Evaluation Project (IRPHEP)

    Energy Technology Data Exchange (ETDEWEB)

    J. Blair Briggs; Dr. Enrico Sartori

    2005-09-01

    The International Reactor Physics Experiments Evaluation Project (IRPhEP) was initiated by the Organization for Economic Cooperation and Development (OECD) Nuclear Energy Agency's (NEA) Nuclear Science Committee (NSC) in June of 2002. The IRPhEP focus is on the derivation of internationally peer reviewed benchmark models for several types of integral measurements, in addition to the critical configuration. While the benchmarks produced by the IRPhEP are of primary interest to the Reactor Physics Community, many of the benchmarks can be of significant value to the Criticality Safety and Nuclear Data Communities. Benchmarks that support the Next Generation Nuclear Plant (NGNP), for example, also support fuel manufacture, handling, transportation, and storage activities and could challenge current analytical methods. The IRPhEP is patterned after the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and is closely coordinated with it. This paper highlights the benchmarks currently being prepared by the IRPhEP that are also of interest to the Criticality Safety Community. The different types of measurements and associated benchmarks that can be expected in the first publication and beyond are described. The protocol for inclusion of IRPhEP benchmarks as ICSBEP benchmarks, and vice versa, is detailed. The format for IRPhEP benchmark evaluations is described as an extension of the ICSBEP format. Benchmarks produced by the IRPhEP add a new dimension to criticality safety benchmarking efforts and expand the collection of available integral benchmarks for nuclear data testing. The first publication of the "International Handbook of Evaluated Reactor Physics Benchmark Experiments" is scheduled for January of 2006.

  5. Method for calculating dose when lung tissue lies in the treatment field

    Energy Technology Data Exchange (ETDEWEB)

    McDonald, S.C.; Keller, B.E.; Rubin, P.

    1976-07-01

    The absorbed dose in lung and beyond the lung as a result of increased lung transmission of x and gamma irradiation is described. The correction factor used to calculate the absorbed dose is a function of beam energy, field area, lung density, and lung and soft tissue depth. Agreement between measurements and calculations in the Alderson phantom is within 3%. An example of how this technique can be used is described. (AIP)
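
    The paper's correction factor depends on beam energy, field area, lung density and depth; the simplest ingredient of such corrections is density scaling of the path length, sketched below. This is the generic effective-depth idea under stated assumptions, not the authors' exact formula.

    def effective_depth(soft_tissue_cm, lung_cm, lung_density=0.3):
        """Density-scaled (radiological) depth for a beam crossing lung;
        unit-density soft tissue and an illustrative lung density are assumed."""
        return soft_tissue_cm + lung_density * lung_cm

    # Example: 5 cm of tissue plus 8 cm of lung behaves like ~7.4 cm of
    # unit-density tissue, so depth-dose data looked up at 7.4 cm rather than
    # 13 cm yields the lung-corrected dose.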

  6. An alternate method for estimating the dose-response relationships of neuromuscular blocking drugs.

    Science.gov (United States)

    Kopman, A F; Klewicka, M M; Neuman, G G

    2000-05-01

    Slopes of the dose-response relationships for all available neuromuscular blocking drugs appear to be essentially parallel and to approximate a log-dose/logit value of 4.75. We tested the possibility of estimating both 50% effective dose (ED(50)) and 95% effective dose (ED(95)) values from a single dose-response data point when that slope is postulated. We compared the ED(50) and ED(95) values of rocuronium and succinylcholine calculated by using traditional log-dose/logit regression analysis with the same values obtained by averaging individual estimates of potency as determined by using the Hill equation. After the induction of anesthesia (propofol/alfentanil), tracheal intubation was accomplished without the administration of neuromuscular blocking drugs. Anesthesia was maintained with nitrous oxide and propofol. The evoked electromyographic response to 0.10-Hz single stimuli was continuously recorded. After baseline stabilization, a single IV bolus of succinylcholine (0.08-0.26 mg/kg, n = 50) or rocuronium (0.13-0.30 mg/kg, n = 40) was administered and the peak effect noted. By using log-dose/logit regression analysis, we calculated ED(50) and ED(95) values of 0.17 and 0.33 mg/kg for rocuronium and 0.14 and 0.27 mg/kg for succinylcholine. When potency was calculated from the Hill equation, the resultant ED(50) and ED(95) values did not differ by more than +/-4% from those obtained by using regression analysis. Averaging of single-dose estimates of neuromuscular potency provides a useful adjunct and reasonable alternative to conventional regression analysis.
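
    A minimal sketch of the single-point estimation idea, assuming the Hill equation E = d^γ/(d^γ + ED50^γ) with the slope γ fixed at the postulated value of 4.75; the exact parameterization used in the paper may differ, and the numbers in the comment are illustrative.

    def ed50_from_single_point(dose, effect, gamma=4.75):
        """Invert the Hill equation E = d**g / (d**g + ED50**g) for ED50,
        with the slope g fixed at the postulated common value."""
        return dose * ((1.0 - effect) / effect) ** (1.0 / gamma)

    def ed95_from_ed50(ed50, gamma=4.75):
        """A fixed slope implies a fixed ED95/ED50 ratio of (0.95/0.05)**(1/g)."""
        return ed50 * 19.0 ** (1.0 / gamma)

    # Illustrative numbers: a 0.20 mg/kg dose producing 70% block gives
    # ED50 = 0.20 * (0.3 / 0.7)**(1/4.75) ~= 0.17 mg/kg and ED95 ~= 0.31 mg/kg.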

  7. New method for the induction of therapeutic amenorrhea: low dose endometrial afterloading irradiation. Clinical and hormonal studies

    Energy Technology Data Exchange (ETDEWEB)

    Gronroos, M.; Turunen, T.; Raekallio, J.; Ruotsalinen, P.; Salmi, T. (Turku Univ. (Finland). Dept. of Obstetrics and Gynecology)

    1982-08-01

    The authors present a new method for the induction of therapeutic amenorrhea: low dose endometrial afterloading irradiation. The problem with this method has been how to inactivate the endometrium while maintaining the physiological function of the ovaries. In 5 of 29 young patients, regular or irregular bleeding occurred after an endometrial dose of 11±1 Gy. These subjects were given a repeat low dose intrauterine irradiation, after which no bleeding was found in four of the five patients. Two to 9 years after the repeat irradiation, the plasma levels of E1, E2, FSH and LH corresponded closely to those of healthy women of reproductive age in three of the five patients; some high plasma P levels indicated ovulation. In two patients the E1, E2 and P values were more consistent with postmenopausal levels, whereas the FSH and LH values remained in the reproductive range. 19 refs.

  8. A two-dose-rate method for general recombination correction for liquid ionization chambers in pulsed beams

    Energy Technology Data Exchange (ETDEWEB)

    Toelli, Heikki; Sjoegren, Rickard; Wendelsten, Mikael, E-mail: heikki.tolli@radfys.umu.s [Department of Radiation Sciences, Radiation Physics, Umeaa University, SE-901 85 Umeaa (Sweden)

    2010-08-07

    The correction for general recombination losses in liquid ionization chambers (LICs) is more complex than that in air-filled ionization chambers. The reason for this is that the saturation charge in LICs, i.e. the charge that escapes initial recombination, depends on the applied voltage. This paper presents a method, based on measurements at two different dose rates in a pulsed beam, for general recombination correction in LICs. The Boag theory for pulsed beams is used and the collection efficiency is determined by numerical methods which are equivalent to the two-voltage method used in dosimetry with air-filled ionization chambers. The method has been tested in experiments in water in a 20 MeV electron beam using two LICs filled with isooctane and tetramethylsilane. The dose per pulse in the electron beam was varied between 0.1 mGy/pulse and 8 mGy/pulse. The relative standard deviations of the collection efficiencies determined with the two-dose-rate method ranged between 0.1% and 1.5%. The dose-rate variations of the general recombination corrected charge measured with the LICs are in excellent agreement with the corresponding values obtained with an air-filled plane parallel ionization chamber.
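
    A sketch of the underlying numerics, assuming Boag's pulsed-beam expression f(u) = ln(1+u)/u with u proportional to the dose per pulse; the paper's actual implementation, including the voltage-dependent saturation charge specific to LICs, is more involved, and all names here are illustrative.

    import numpy as np
    from scipy.optimize import brentq

    def boag_f(u):
        """Boag collection efficiency for pulsed beams: f(u) = ln(1 + u) / u."""
        return np.log1p(u) / u

    def collection_efficiency(m1, m2, k):
        """Collection efficiency at dose rate 1 from charges per pulse m1, m2
        measured at two dose rates whose dose-per-pulse ratio is k = r2/r1.

        Model: m_i proportional to r_i * f(u_i) with u_i proportional to r_i,
        hence m1/m2 = f(u1) / (k * f(k * u1)); solve numerically for u1.
        """
        target = m1 / m2
        g = lambda u1: boag_f(u1) / (k * boag_f(k * u1)) - target
        u1 = brentq(g, 1e-8, 1e3)      # bracket assumed to contain the root
        return boag_f(u1)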

  9. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    The CASL neutronics simulator MPACT is under development for the neutronics and T-H coupled simulation of pressurized water reactors. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is a challenge because of insufficient measured data. One alternative is to perform a code-to-code comparison on benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  10. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-03-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment dominated by radiative effects; in this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during the development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed, each with a simple scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to reproduce behaviour from established and widely used codes as well as results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.

  11. The Type of Container and Filling Method Have Consequences on Semen Quality in Swine AI Doses

    Directory of Open Access Journals (Sweden)

    Iulian Ibanescu

    2016-05-01

    Full Text Available The automatic filling of semen doses for artificial insemination in swine has economic advantages over old-style manual filling. However, no data could be found regarding the impact, if any, of the packing method on semen quality. This study aimed to compare two types of containers for boar semen, namely the automatically filled tube and the manually filled bottle, in terms of preserving boar semen quality. Five ejaculates from five different boars were diluted with the same extender and then divided into two aliquots. The first aliquot was loaded into tubes filled by an automatic machine, while the second was loaded manually into special plastic bottles. The semen was stored in the liquid state at 17°C, regardless of the type of container, and examined daily for five days of storage by means of a computer-assisted sperm analyzer. Both types of containers maintained the semen within acceptable values, but after five days of storage significant differences (p<0.05) between the container types were observed in all selected kinetic parameters. The tube showed better values for sperm motility and velocity, while the bottle showed superior values for straightness and linearity of sperm movement. The automatically filled tubes offered better sperm motility on every day of the study. Given that sperm motility is still the main criterion in assessing semen quality in semen production centers, the main conclusion of this study is that automatic loading in tubes is superior to, and recommended over, old-style manual loading in bottles.

  12. SU-C-207-02: A Method to Estimate the Average Planar Dose From a C-Arm CBCT Acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Supanich, MP [Rush University Medical Center, Chicago, IL (United States)

    2015-06-15

    Purpose: The planar average dose in a C-arm Cone Beam CT (CBCT) acquisition has in the past been estimated by averaging the four peripheral dose measurements in a CTDI phantom and then applying the standard weighting of 2/3 peripheral and 1/3 central used for CTDIw (hereafter referred to as Dw). The accuracy of this assumption has not been investigated, and the purpose of this work is to test the presumed relationship. Methods: Dose measurements were made in the central plane of two consecutively placed 16 cm CTDI phantoms using a 0.6 cc ionization chamber at each of the 4 peripheral dose bores and in the central dose bore for a C-arm CBCT protocol. The same setup was scanned with a circular cut-out of radiosensitive Gafchromic film positioned between the two phantoms to capture the planar dose distribution. Calibration curves for color pixel value after scanning were generated from film strips irradiated at known dose levels. The planar average dose for red and green pixel values was calculated by summing the dose values in the irradiated circular film cut-out. Dw was calculated using the ionization chamber measurements and the film dose values at the location of each of the dose bores. Results: The planar average doses obtained using the red and green color calibration curves were each within 10% of the planar average dose estimated by applying the Dw method to the film dose values at the bore locations. Additionally, the average of the planar average doses calculated using the red and green calibration curves differed from the ionization chamber Dw estimate by only 5%. Conclusion: Estimating the planar average dose at the central plane of a C-arm CBCT non-360° rotation by calculating Dw from peripheral and central dose bore measurements is a reasonable approach. Research Grant, Siemens AG.
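
    The Dw estimate under test is just a weighted average of the chamber readings; a minimal sketch with illustrative numbers:

    def weighted_dose(d_center, d_peripheral):
        """CTDIw-style weighting: one third central, two thirds mean peripheral."""
        return d_center / 3.0 + 2.0 * sum(d_peripheral) / (3.0 * len(d_peripheral))

    # e.g. weighted_dose(10.2, [12.1, 11.8, 12.4, 12.0]) with illustrative
    # chamber readings in mGy returns the Dw estimate for the acquisition.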

  13. Thermoluminescence dating of the ancient Chinese porcelain using a regression method of saturation exponential in pre-dose technique

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This paper studies thermoluminescence (TL) dating of ancient porcelain using a regression method of saturation exponential in the pre-dose technique. The experimental results show that the measurement errors are 15% (±1σ) for the paleodose and 17% (±1σ) for the annual dose, and the TL age error is 23% (±1σ) with this method. Larger Chinese porcelains from museums and nationwide collectors have been dated by this method. The results show that the confidence in authenticity testing is greater than 95%, and the datable porcelains make up about 95% of the porcelains examined. The method is very successful in discriminating imitations of ancient Chinese porcelain. This paper describes the measurement principle and the method for determining the paleodose of porcelain, reports TL ages dated by this method for 39 shards and porcelains from past dynasties of China, and gives the detailed measurement data.
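
    The saturation exponential regression referred to in the title fits a growth curve of the form I(D) = I_sat(1 - exp(-D/D0)) and inverts it to recover the equivalent (paleo)dose. A minimal sketch with made-up numbers, which omits the sensitivity-change corrections specific to the pre-dose technique:

    import numpy as np
    from scipy.optimize import curve_fit

    def sat_exp(dose, i_sat, d0):
        """Saturating TL growth curve: I(D) = I_sat * (1 - exp(-D / D0))."""
        return i_sat * (1.0 - np.exp(-dose / d0))

    # Illustrative calibration data: TL signal versus added laboratory dose (Gy)
    doses = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])
    signal = np.array([0.0, 0.9, 1.7, 3.0, 4.7, 6.1])
    (i_sat, d0), _ = curve_fit(sat_exp, doses, signal, p0=(signal.max(), 5.0))

    # Inverting the fitted curve converts the natural TL signal into a paleodose:
    natural_signal = 2.4
    paleodose = -d0 * np.log(1.0 - natural_signal / i_sat)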

  14. Improved Pharmacy Department Workflow with New Method of Order Entry for Single-Agent, High-Dose Methotrexate

    Science.gov (United States)

    VanDyke, Thomas H.; Athmann, Paul W.; Mills, Lisa B.; Bonter, Michael P.; Bremer, Matthew W.; Dougherty, Mary L.; Foster, Ryan W.; Knight, Sandra K.; Slot, Martha G.; Steinmetz-Malato, Laura L.

    2014-01-01

    Purpose: To determine whether a process change impacted the proportion of orders for single-agent, high-dose methotrexate entered by chemotherapy pharmacists instead of general pharmacy staff. Coordination of antiemetic premedication and leucovorin rescue with the new method of order entry was evaluated. Methods: Adults treated with single-agent, high-dose methotrexate were identified retrospectively. Order entry of methotrexate and ancillary medications was examined to determine whether the old or new method was used and whether it was performed by a chemotherapy pharmacist. The fundamental difference between the old and new methods for order entry is use of the “unscheduled” frequency of medication administration to replace the administration frequency of “once” with a specified date and time. Timing of antiemetic premedication and leucovorin rescue relative to methotrexate administration were tallied for the new method. Chi-square analysis was performed for the primary objective. Observational statistics were performed otherwise. Results: The number of evaluable encounters identified was 158. A chemotherapy pharmacist entered a greater proportion of orders when the new method was utilized (P < .0001). The proportion of orders entered by a chemotherapy pharmacist increased during the hours of 0700 and 2259 with the new method. Appropriate coordination of antiemetic and leucovorin administration was documented for 96% and 100% of cases with the new method of order entry. Conclusion: The proportion of orders for single-agent, high-dose methotrexate entered by a chemotherapy pharmacist was significantly greater with the use of the new method. Administration of antiemetic premedication and leucovorin rescue were appropriately coordinated with the use of the new method for order entry of single-agent, high-dose methotrexate. PMID:25673893

  15. A method to reduce patient's eye lens dose in neuro-interventional radiology procedures

    Science.gov (United States)

    Safari, M. J.; Wong, J. H. D.; Kadir, K. A. A.; Sani, F. M.; Ng, K. H.

    2016-08-01

    Complex and prolonged neuro-interventional radiology procedures using a biplane angiography system increase the patient's risk of radiation-induced cataract. Physical collimation is the most effective way of reducing the radiation dose to the patient's eye lens, but in instances where collimation is not possible, an attenuator may be useful in protecting the eyes. In this study, an eye lens protector was designed and fabricated to reduce the radiation dose to the patient's eye lens during neuro-interventional procedures. The eye protector was characterised before its effectiveness was tested in a simulated aneurysm procedure on an anthropomorphic phantom. Effects on the automatic dose rate control (ADRC) and on image quality were also evaluated. The eye protector reduced the radiation dose by up to 62.1% at the eye lens. It is faintly visible in fluoroscopy images and increased the tube current by a maximum of 3.7%. It is completely invisible in acquisition mode and does not interfere with the clinical procedure. Placed within the radiation field of view, the eye protector was able to reduce the dose delivered to the eye lens by the direct beam of the lateral x-ray tube, with minimal effect on the ADRC system.

  16. Methods for meta-analysis of pharmacodynamic dose-response data with application to multi-arm studies of alogliptin

    NARCIS (Netherlands)

    Langford, Oliver; Aronson, Jeffrey K; van Valkenhoef, Gert; Stevens, Richard J

    2016-01-01

    Standard methods for meta-analysis of dose-response data in epidemiology assume a model with a single scalar parameter, such as log-linear relationships between exposure and outcome; such models are implicitly unbounded. In contrast, in pharmacology, multi-parameter models, such as the widely used E

  17. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  18. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-12-01

    Full Text Available This paper reviews the role of human resource management (HRM), which today plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much-needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking to HRM. It concludes with some suggestions for a plan of action. The process of identifying "best" practices in HRM requires the collaborative efforts of HRM practitioners and academics. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  19. A Benchmark Approach of Counterparty Credit Exposure of Bermudan Option under Lévy Process: The Monte Carlo-COS Method

    NARCIS (Netherlands)

    Shen, Y.; Van der Weide, J.A.M.; Anderluh, J.H.M.

    2013-01-01

    An advanced method, which we call the Monte Carlo-COS method, is proposed for computing the counterparty credit exposure profile of Bermudan options under Lévy processes. The different exposure profiles and exercise intensities under the different measures, P and Q, are discussed. Since the COS method [1] del

  20. A Method of Benchmark Selection Based on the Choice Preference of Competitive Strategies

    Institute of Scientific and Technical Information of China (English)

    葛虹; 张艳霞

    2013-01-01

    This study uses a self-organizing map to identify potential benchmarks based on the similarity of input use. Super-efficiency DEA scores are used to identify the industry leaders. Unlike other research, the appropriate target is finally determined not only by the DEA efficiency scores but also by the decision-making units' choice preference among competitive strategies such as cost leadership, product differentiation and focus. An empirical study of 50 Chinese banks, using 2011 annual report data, shows that the proposed method is practical for selecting strategy-oriented benchmarks for inefficient units.

  1. Reanalysis of cancer mortality in Japanese A-bomb survivors exposed to low doses of radiation: bootstrap and simulation methods

    Directory of Open Access Journals (Sweden)

    Dropkin Greg

    2009-12-01

    Full Text Available Abstract Background The International Commission on Radiological Protection (ICRP) recommended annual occupational dose limit is 20 mSv. Cancer mortality in Japanese A-bomb survivors exposed to less than 20 mSv external radiation in 1945 was analysed previously, using a latency model with non-linear dose response. Questions were raised regarding statistical inference with this model. Methods Cancers with over 100 deaths in the 0-20 mSv subcohort of the 1950-1990 Life Span Study are analysed with Poisson regression models incorporating latency, allowing linear and non-linear dose response. Bootstrap percentile and bias-corrected accelerated (BCa) methods and simulation of the likelihood ratio test lead to confidence intervals for excess relative risk (ERR) and tests against the linear model. Results The linear model shows significant, large positive values of ERR for liver and urinary cancers at latencies from 37-43 years. Dose response below 20 mSv is strongly non-linear at the optimal latencies for the stomach (11.89 years), liver (36.9), lung (13.6), leukaemia (23.66), and pancreas (11.86), and across broad latency ranges. Confidence intervals for ERR are comparable using the bootstrap and likelihood ratio test methods, and BCa 95% confidence intervals are strictly positive across latency ranges for all 5 cancers. Similar risk estimates for 10 mSv (lagged dose) are obtained from the 0-20 mSv and 5-500 mSv data for the stomach, liver, lung and leukaemia. Dose response for the latter 3 cancers is significantly non-linear in the 5-500 mSv range. Conclusion Liver and urinary cancer mortality risk is significantly raised using a latency model with linear dose response. A non-linear model is strongly superior for the stomach, liver, lung, pancreas and leukaemia. Bootstrap and likelihood-based confidence intervals are broadly comparable and ERR is strictly positive by bootstrap methods for all 5 cancers. Except for the pancreas, similar estimates of
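
    The percentile bootstrap used for the ERR confidence intervals is conceptually simple: refit the model on resampled data and take quantiles of the resulting estimates. A generic sketch, in which the model-fitting callable stands in for the paper's Poisson regression with latency (all names illustrative):

    import numpy as np

    def bootstrap_percentile_ci(records, fit_err, n_boot=2000, alpha=0.05, seed=1):
        """Percentile-bootstrap confidence interval for an ERR estimate.

        records: numpy array of analysis units (e.g. person-year strata);
        fit_err: callable that refits the dose-response model on a resampled
        data set and returns the ERR estimate.
        """
        rng = np.random.default_rng(seed)
        n = len(records)
        boot = np.array([fit_err(records[rng.integers(0, n, n)])
                         for _ in range(n_boot)])
        return tuple(np.quantile(boot, [alpha / 2.0, 1.0 - alpha / 2.0]))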

  2. Dose-Response Modeling Under Simple Order Restrictions Using Bayesian Variable Selection Methods

    OpenAIRE

    Otava, Martin; Shkedy, Ziv; Lin, Dan; Goehlmann, Hinrich W. H.; Bijnens, Luc; Talloen, Willem; Kasim, Adetayo

    2014-01-01

    Bayesian modeling of dose–response data offers the possibility to establish the relationship between a clinical or a genomic response and increasing doses of a therapeutic compound and to determine the nature of the relationship wherever it exists. In this article, we focus on an order-restricted one-way ANOVA model which can be used to test the null hypothesis of no dose effect against an ordered alternative. Within the framework of the dose–response modeling, a model uncertainty can be addr...

  3. Research on Benchmarks and Methods for Selecting Regional Strategic Emerging Industries: Based on Data from Industrial Enterprises in Chongqing

    Institute of Scientific and Technical Information of China (English)

    敖永春; 金霞

    2012-01-01

    According to the connotation of strategic emerging industries and the choice benchmarks for regional dominant industries, the paper puts forward five choice benchmarks for regional strategic emerging industries, namely the comprehensive efficiency benchmark, spill-over benchmark, technology resources benchmark, sustainable development capacity benchmark and regional comparative advantage benchmark. It analyzes several typical methods for selecting regional strategic emerging industries. Finally, using factor analysis, the paper evaluates and selects among Chongqing's industries; the results show that Chongqing should focus on new materials, high-end equipment manufacturing, biotechnology and new energy vehicles.

  4. SU-E-T-465: Dose Calculation Method for Dynamic Tumor Tracking Using a Gimbal-Mounted Linac

    Energy Technology Data Exchange (ETDEWEB)

    Sugimoto, S; Inoue, T; Kurokawa, C; Usui, K; Sasai, K [Juntendo University, Bunkyo, Tokyo, JP (Japan); Utsunomiya, S [Niigata University, Niigata, Nigata, JP (Japan); Ebe, K [Joetsu General Hospital, Joetsu, Niigata, JP (Japan)

    2014-06-01

    Purpose: Dynamic tumor tracking with the gimbal-mounted linac (Vero4DRT, Mitsubishi Heavy Industries, Ltd., Japan) is available for cases where respiratory motion is significant, and its irradiation accuracy has been reported to be excellent. In addition to irradiation accuracy, a fast and accurate dose calculation algorithm is needed to validate the dose distribution in the presence of respiratory motion, because multiple respiratory phases have to be considered. A modification of the dose calculation algorithm is necessary for the gimbal-mounted linac due to the degrees of freedom of the gimbal swing. The dose calculation algorithm for the gimbal motion was implemented using linear transformations between coordinate systems. Methods: The linear transformation matrices between the coordinate systems with and without gimbal swings were constructed as combinations of translation and rotation matrices. A coordinate system with the radiation source at the origin and the beam axis along the z axis was adopted. The transformation can be divided into a translation from the radiation source to the gimbal rotation center, two rotations around the center corresponding to the gimbal swings, and a translation from the gimbal center back to the radiation source. After applying the transformation matrix to the phantom or patient image, the dose calculation can proceed as in the case of no gimbal swing. The algorithm was implemented in the treatment planning system PlanUNC (University of North Carolina, NC), using the convolution/superposition algorithm. Dose calculations with and without gimbal swings were performed for a 3 × 3 cm² field with a grid size of 5 mm. Results: The calculation time was about 3 minutes per beam. No significant additional time due to the gimbal swing was observed. Conclusions: The dose calculation algorithm for a finite gimbal swing was implemented. The calculation time was moderate.
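
    The transformation described in the Methods is a standard composition of homogeneous translation and rotation matrices; a minimal sketch follows, where the axis conventions and parameter names are illustrative rather than the Vero4DRT's actual geometry.

    import numpy as np

    def translation(t):
        m = np.eye(4)
        m[:3, 3] = t
        return m

    def rotation_x(angle):
        c, s = np.cos(angle), np.sin(angle)
        m = np.eye(4)
        m[1:3, 1:3] = [[c, -s], [s, c]]
        return m

    def rotation_y(angle):
        c, s = np.cos(angle), np.sin(angle)
        m = np.eye(4)
        m[0, 0], m[0, 2], m[2, 0], m[2, 2] = c, s, -s, c
        return m

    def gimbal_transform(pan, tilt, source_to_center):
        """Compose: translate the source to the gimbal rotation center, apply
        the two gimbal rotations, translate back (angles in radians)."""
        to_center = translation([0.0, 0.0, -source_to_center])
        back = translation([0.0, 0.0, source_to_center])
        return back @ rotation_x(tilt) @ rotation_y(pan) @ to_center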

  5. Randomized benchmarking of multiqubit gates.

    Science.gov (United States)

    Gaebler, J P; Meier, A M; Tan, T R; Bowler, R; Lin, Y; Hanneke, D; Jost, J D; Home, J P; Knill, E; Leibfried, D; Wineland, D J

    2012-06-29

    We describe an extension of single-qubit gate randomized benchmarking that measures the error of multiqubit gates in a quantum information processor. This platform-independent protocol evaluates the performance of Clifford unitaries, which form a basis of fault-tolerant quantum computing. We implemented the benchmarking protocol with trapped ions and found an error per random two-qubit Clifford unitary of 0.162±0.008, thus setting the first benchmark for such unitaries. By implementing a second set of sequences with an extra two-qubit phase gate inserted after each step, we extracted an error per phase gate of 0.069±0.017. We conducted these experiments with transported, sympathetically cooled ions in a multizone Paul trap, a system that can in principle be scaled to larger numbers of ions.
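
    The quantity extracted in such experiments comes from fitting an exponential decay of the sequence fidelity. A minimal sketch of the standard analysis, assuming the usual zeroth-order randomized benchmarking model (this is the textbook procedure, not the authors' exact fitting code):

    import numpy as np
    from scipy.optimize import curve_fit

    def rb_decay(m, a, p, b):
        """Randomized-benchmarking decay of sequence fidelity: F(m) = A * p**m + B."""
        return a * p ** m + b

    def error_per_clifford(lengths, fidelities, d=4):
        """Fit the decay and convert the depolarizing parameter p into an
        average error per Clifford: r = (1 - p) * (d - 1) / d, with d = 4
        for two qubits (interface and starting values illustrative)."""
        (a, p, b), _ = curve_fit(rb_decay, lengths, fidelities,
                                 p0=(0.5, 0.95, 0.5), bounds=(0.0, 1.0))
        return (1.0 - p) * (d - 1) / d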

  6. Randomized benchmarking in measurement-based quantum computing

    Science.gov (United States)

    Alexander, Rafael N.; Turner, Peter S.; Bartlett, Stephen D.

    2016-09-01

    Randomized benchmarking is routinely used as an efficient method for characterizing the performance of sets of elementary logic gates in small quantum devices. In the measurement-based model of quantum computation, logic gates are implemented via single-site measurements on a fixed universal resource state. Here we adapt the randomized benchmarking protocol for a single qubit to a linear cluster state computation, which provides partial, yet efficient characterization of the noise associated with the target gate set. Applying randomized benchmarking to measurement-based quantum computation exhibits an interesting interplay between the inherent randomness associated with logic gates in the measurement-based model and the random gate sequences used in benchmarking. We consider two different approaches: the first makes use of the standard single-qubit Clifford group, while the second uses recently introduced (non-Clifford) measurement-based 2-designs, which harness inherent randomness to implement gate sequences.

  7. Monte Carlo modeling of proton therapy installations: a global experimental method to validate secondary neutron dose calculations

    Science.gov (United States)

    Farah, J.; Martinetti, F.; Sayah, R.; Lacoste, V.; Donadille, L.; Trompier, F.; Nauraye, C.; De Marzi, L.; Vabre, I.; Delacroix, S.; Hérault, J.; Clairand, I.

    2014-06-01

    Monte Carlo calculations are increasingly used to assess stray radiation dose to healthy organs of proton therapy patients and estimate the risk of secondary cancer. Among the secondary particles, neutrons are of primary concern due to their high relative biological effectiveness. The validation of Monte Carlo simulations for out-of-field neutron doses remains however a major challenge to the community. Therefore this work focused on developing a global experimental approach to test the reliability of the MCNPX models of two proton therapy installations operating at 75 and 178 MeV for ocular and intracranial tumor treatments, respectively. The method consists of comparing Monte Carlo calculations against experimental measurements of: (a) neutron spectrometry inside the treatment room, (b) neutron ambient dose equivalent at several points within the treatment room, (c) secondary organ-specific neutron doses inside the Rando-Alderson anthropomorphic phantom. Results have proven that Monte Carlo models correctly reproduce secondary neutrons within the two proton therapy treatment rooms. Sensitive differences between experimental measurements and simulations were nonetheless observed especially with the highest beam energy. The study demonstrated the need for improved measurement tools, especially at the high neutron energy range, and more accurate physical models and cross sections within the Monte Carlo code to correctly assess secondary neutron doses in proton therapy applications.

  9. Dosimetry in radiotherapy using a-Si EPIDs: Systems, methods, and applications focusing on 3D patient dose estimation

    Science.gov (United States)

    McCurdy, B. M. C.

    2013-06-01

    An overview is provided of the use of amorphous silicon electronic portal imaging devices (EPIDs) for dosimetric purposes in radiation therapy, focusing on 3D patient dose estimation. EPIDs were originally developed to provide on-treatment radiological imaging to assist with patient setup, but there has also been a natural interest in using them as dosimeters since they use the megavoltage therapy beam to form images. The current generation of clinically available EPID technology, amorphous-silicon (a-Si) flat panel imagers, possess many characteristics that make them much better suited to dosimetric applications than earlier EPID technologies. Features such as linearity with dose/dose rate, high spatial resolution, realtime capability, minimal optical glare, and digital operation combine with the convenience of a compact, retractable detector system directly mounted on the linear accelerator to provide a system that is well-suited to dosimetric applications. This review will discuss clinically available a-Si EPID systems, highlighting dosimetric characteristics and remaining limitations. Methods for using EPIDs in dosimetry applications will be discussed. Dosimetric applications using a-Si EPIDs to estimate three-dimensional dose in the patient during treatment will be overviewed. Clinics throughout the world are implementing increasingly complex treatments such as dynamic intensity modulated radiation therapy and volumetric modulated arc therapy, as well as specialized treatment techniques using large doses per fraction and short treatment courses (ie. hypofractionation and stereotactic radiosurgery). These factors drive the continued strong interest in using EPIDs as dosimeters for patient treatment verification.

  10. Determination of Absorbed dose of patients organs under kidney Scintigraphy by using the MIRD Dosimetry method

    Directory of Open Access Journals (Sweden)

    Shokofeh Pirdomooie

    2016-07-01

    13±0.66, 2.1±0.24, 2.2±0.38, 335.43±3.3 mrad/mCi, respectively. Conclusion: In this study, the bladder and liver received the highest and lowest absorbed doses, respectively. The results of this study also showed good agreement with the ICRP Publication 106 report.

  11. A dose optimization method for electron radiotherapy using randomized aperture beams.

    Science.gov (United States)

    Engel, Konrad; Gauer, Tobias

    2009-09-01

    The present paper describes the entire optimization process of creating a radiotherapy treatment plan for advanced electron irradiation. Special emphasis is devoted to the selection of beam incidence angles and beam energies as well as to the choice of appropriate subfields generated by a refined version of intensity segmentation and a novel random aperture approach. The algorithms have been implemented in a stand-alone programme using dose calculations from a commercial treatment planning system. For this study, the treatment planning system Pinnacle from Philips has been used and connected to the optimization programme using an ASCII interface. Dose calculations in Pinnacle were performed by Monte Carlo simulations for a remote-controlled electron multileaf collimator (MLC) from Euromechanics. As a result, treatment plans for breast cancer patients could be significantly improved when using randomly generated aperture beams. The combination of beams generated through segmentation and randomization achieved the best results in terms of target coverage and sparing of critical organs. The treatment plans could be further improved by use of a field reduction algorithm. Without a relevant loss in dose distribution, the total number of MLC fields and monitor units could be reduced by up to 20%. In conclusion, using randomized aperture beams is a promising new approach in radiotherapy and exhibits potential for further improvements in dose optimization through a combination of randomized electron and photon aperture beams.
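
    To make the random-aperture idea concrete, here is a toy sketch of drawing one random MLC aperture inside a target outline; this is our illustration under simple assumptions (contiguous target rows, uniform random leaf positions), not the authors' generator or scoring:

    ```python
    import numpy as np

    # Each leaf pair opens a random contiguous interval inside the target
    # projection; rows without target stay closed.
    rng = np.random.default_rng(42)
    n_rows, n_cols = 10, 20
    target = np.zeros((n_rows, n_cols), bool)
    target[2:8, 4:16] = True                      # toy target projection

    aperture = np.zeros_like(target)
    for r in range(n_rows):
        cols = np.flatnonzero(target[r])
        if cols.size == 0:
            continue                              # leaf pair stays closed
        left = rng.integers(cols[0], cols[-1] + 1)
        right = rng.integers(left, cols[-1] + 1)
        aperture[r, left:right + 1] = True        # random opening within target

    print(f"open area: {aperture.sum()} of {target.sum()} target beamlets")
    ```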

  12. Methods for estimating doses to organisms from radioactive materials released into the aquatic environment

    Energy Technology Data Exchange (ETDEWEB)

    Baker, D.A.; Soldat, J.K.

    1992-06-01

    The US Department of Energy recently published an interim dose limit of 1 rad d{sup {minus}1} for controlling the radiation exposure of native aquatic organisms. A computer program named CRITR, developed previously for calculating radiation doses to aquatic organisms and their predators, has been updated as an activity of the Hanford Site Surface Environmental Surveillance Project to facilitate demonstration of compliance with this limit. This report presents the revised models and the updated computer program, CRITR2, for the assessment of radiological doses to aquatic organisms and their predators; tables of the required input parameters are also provided. Both internal and external doses to fish, crustacea, mollusks, and algae, as well as organisms that subsist on them, such as muskrats, raccoons, and ducks, may be estimated using CRITR2. Concentrations of radionuclides in the water to which the organisms are exposed may be entered directly into the user-input file or may be calculated from a source term and standard dilution models developed for the National Council on Radiation Protection and Measurements.
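
    The essence of a CRITR-style internal dose estimate can be sketched in a few lines: a bioaccumulation-factor chain with illustrative numbers. This is a simplification under assumed parameter values; CRITR2 itself adds external pathways, predator diets, and dilution modelling:

    ```python
    # Minimal sketch (not the actual CRITR2 algorithm): tissue concentration
    # from a bioaccumulation factor, then dose rate via a dose conversion
    # factor. All numbers are illustrative assumptions.
    water_conc = 5.0e-2      # pCi/L, radionuclide concentration in water (assumed)
    bioacc_factor = 50.0     # L/kg, water-to-fish bioaccumulation factor (assumed)
    dose_factor = 1.1e-5     # (rad/d) per (pCi/kg), internal dose factor (assumed)

    tissue_conc = water_conc * bioacc_factor      # pCi/kg in fish tissue
    dose_rate = tissue_conc * dose_factor         # rad/d
    print(f"internal dose rate: {dose_rate:.2e} rad/d (limit: 1 rad/d)")
    ```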

  13. Perceptual hashing algorithms benchmark suite

    Institute of Scientific and Technical Information of China (English)

    Zhang Hui; Schmucker Martin; Niu Xiamu

    2007-01-01

    Numerous perceptual hashing algorithms have been developed for identification and verification of multimedia objects in recent years. Many application schemes have been adopted for various commercial objects. Developers and users are looking for a benchmark tool to compare and evaluate their current algorithms or technologies. In this paper, a novel benchmark platform, PHABS, is presented. PHABS provides an open framework and lets its users define their own test strategy, perform tests, and collect and analyze test data. With PHABS, various performance parameters of algorithms can be tested, and different algorithms or algorithms with different parameters can be evaluated and compared easily.
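
    As an illustration of one elementary metric a platform like this could collect (an assumption for illustration, not the actual PHABS interface), consider the bit error rate between hashes of an original and a distorted copy:

    ```python
    import numpy as np

    def bit_error_rate(h1: np.ndarray, h2: np.ndarray) -> float:
        """Fraction of differing bits between two binary hash vectors."""
        return float(np.count_nonzero(h1 != h2)) / h1.size

    rng = np.random.default_rng(0)
    original = rng.integers(0, 2, 256)            # toy 256-bit perceptual hash
    attacked = original.copy()
    flip = rng.choice(256, size=12, replace=False)
    attacked[flip] ^= 1                           # simulate mild distortion

    ber = bit_error_rate(original, attacked)
    print(f"BER = {ber:.3f} -> {'match' if ber < 0.1 else 'no match'}")
    ```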

  14. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.;

    2013-01-01

    and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work...... already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity. © IWA Publishing 2013....

  15. A method for high-energy, low-dose mammography using edge illumination x-ray phase-contrast imaging

    Science.gov (United States)

    Diemoz, Paul C.; Bravin, Alberto; Sztrókay-Gaul, Anikó; Ruat, Marie; Grandl, Susanne; Mayr, Doris; Auweter, Sigrid; Mittone, Alberto; Brun, Emmanuel; Ponchut, Cyril; Reiser, Maximilian F.; Coan, Paola; Olivo, Alessandro

    2016-12-01

    Since the breast is one of the most radiosensitive organs, mammography is arguably the area where lowering radiation dose is of the utmost importance. Phase-based x-ray imaging methods can provide opportunities in this sense, since they do not require x-rays to be stopped in tissue for image contrast to be generated. Therefore, x-ray energies can be considerably increased compared to those usually employed in conventional mammography. In this article we show how a novel, optimized approach can lead to considerable dose reductions. This was achieved by matching the edge-illumination phase method, which reaches very high angular sensitivity also at high x-ray energies, to an appropriate image processing algorithm and to a virtually noise-free detection technology capable of reaching almost 100% efficiency at the same energies. Importantly, while proof-of-concept was obtained at a synchrotron, the method has potential for a translation to conventional sources.

  16. RESRAD benchmarking against six radiation exposure pathway models

    Energy Technology Data Exchange (ETDEWEB)

    Faillace, E.R.; Cheng, J.J.; Yu, C.

    1994-10-01

    A series of benchmarking runs was conducted so that results obtained with the RESRAD code could be compared against those obtained with six pathway analysis models used to determine the radiation dose to an individual living on a radiologically contaminated site. The RESRAD computer code was benchmarked against five other computer codes - GENII-S, GENII, DECOM, PRESTO-EPA-CPG, and PATHRAE-EPA - and the uncodified methodology presented in the NUREG/CR-5512 report. Estimated doses for the external gamma pathway; the dust inhalation pathway; and the soil, food, and water ingestion pathways were calculated for each methodology by matching, to the extent possible, input parameters such as occupancy, shielding, and consumption factors.

  17. Benchmarking Universiteitsvastgoed: Managementinformatie bij vastgoedbeslissingen

    NARCIS (Netherlands)

    Den Heijer, A.C.; De Vries, J.C.

    2004-01-01

    This is the final report of the study "Benchmarking universiteitsvastgoed" (benchmarking university real estate). The report combines two partial products: the theory report (published in December 2003) and the practice report (published in January 2004). Topics in the theory part are the analysis of other

  18. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of services provided by the benchmarked library websites. The exploratory study includes comparison of these websites against a list of criterion and presents a list of services that are most commonly deployed by the selected websites. In addition to that, the investigators proposed a list of services that could be provided via the KAUST library website.

  19. A general method to derive tissue parameters for Monte Carlo dose calculation with multi-energy CT.

    Science.gov (United States)

    Lalonde, Arthur; Bouchard, Hugo

    2016-11-21

    To develop a general method for human tissue characterization with dual- and multi-energy CT and evaluate its performance in determining elemental compositions and quantities relevant to radiotherapy Monte Carlo dose calculation. Ideal materials to describe human tissue are obtained by applying principal component analysis to elemental weight and density data available in the literature. The theory is adapted to elemental composition for solving tissue information from CT data. A novel stoichiometric calibration method is integrated into the technique to make it suitable for a clinical environment. The performance of the method is compared with two techniques known in the literature using theoretical CT data. In determining elemental weights with dual-energy CT, the method is shown to be systematically superior to the water-lipid-protein material decomposition and comparable to the parameterization technique. In determining proton stopping powers and energy absorption coefficients with dual-energy CT, the method generally shows better accuracy and unbiased results. The generality of the method is demonstrated by simulating multi-energy CT data to show the potential to extract more information with multiple energies. The method proposed in this paper shows good performance in determining elemental compositions from dual-energy CT data and physical quantities relevant to radiotherapy dose calculation. The method is particularly suitable for Monte Carlo calculations and shows promise in using more than two energies to characterize human tissue with CT.
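
    A toy sketch of the underlying principal-component idea, with made-up elemental compositions; this is our simplification, while the paper uses reference-tissue data and links the component scores to CT measurements through a stoichiometric calibration:

    ```python
    import numpy as np

    # Tissue elemental-weight vectors expressed in a principal component
    # basis learned from reference tissues (compositions are illustrative).
    ref = np.array([                      # columns: [H, C, N, O] mass fractions
        [0.102, 0.143, 0.034, 0.708],     # muscle-like
        [0.114, 0.598, 0.007, 0.278],     # adipose-like
        [0.085, 0.404, 0.058, 0.367],     # bone-like (soft part)
        [0.108, 0.331, 0.024, 0.527],     # average soft tissue
    ])
    mean = ref.mean(axis=0)
    centered = ref - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    pcs = vt[:2]                          # keep two principal components

    # Any tissue is approximated as mean + y1*pc1 + y2*pc2; with dual-energy
    # CT, two measured attenuation values would determine (y1, y2).
    scores = centered @ pcs.T
    recon = mean + scores @ pcs
    print("max reconstruction error:", np.abs(recon - ref).max())
    ```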

  20. A general method to derive tissue parameters for Monte Carlo dose calculation with multi-energy CT

    Science.gov (United States)

    Lalonde, Arthur; Bouchard, Hugo

    2016-11-01

    To develop a general method for human tissue characterization with dual- and multi-energy CT and evaluate its performance in determining elemental compositions and quantities relevant to radiotherapy Monte Carlo dose calculation. Ideal materials to describe human tissue are obtained by applying principal component analysis to elemental weight and density data available in the literature. The theory is adapted to elemental composition for solving tissue information from CT data. A novel stoichiometric calibration method is integrated into the technique to make it suitable for a clinical environment. The performance of the method is compared with two techniques known in the literature using theoretical CT data. In determining elemental weights with dual-energy CT, the method is shown to be systematically superior to the water-lipid-protein material decomposition and comparable to the parameterization technique. In determining proton stopping powers and energy absorption coefficients with dual-energy CT, the method generally shows better accuracy and unbiased results. The generality of the method is demonstrated by simulating multi-energy CT data to show the potential to extract more information with multiple energies. The method proposed in this paper shows good performance in determining elemental compositions from dual-energy CT data and physical quantities relevant to radiotherapy dose calculation. The method is particularly suitable for Monte Carlo calculations and shows promise in using more than two energies to characterize human tissue with CT.

  1. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Delivery of benchmark and benchmark-equivalent...: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  2. Functional Data Analysis in NTCP Modeling: A New Method to Explore the Radiation Dose-Volume Effects

    Energy Technology Data Exchange (ETDEWEB)

    Benadjaoud, Mohamed Amine, E-mail: mohamedamine.benadjaoud@gustaveroussy.fr [Center for Research in Epidemiology and Population Health (CESP) INSERM 1018 Radiation, Epidemiology Group, Villejuif (France); Université Paris sud, Le Kremlin-Bicêtre (France); Institut Gustave Roussy, Villejuif (France); Blanchard, Pierre [Université Paris sud, Le Kremlin-Bicêtre (France); Department of Radiation Oncology, Institut Gustave Roussy, Villejuif (France); Schwartz, Boris [Center for Research in Epidemiology and Population Health (CESP) INSERM 1018 Radiation, Epidemiology Group, Villejuif (France); Université Paris sud, Le Kremlin-Bicêtre (France); Institut Gustave Roussy, Villejuif (France); Champoudry, Jérôme [Department of Radiation Oncology, CHU de la Timone, Marseille (France); Bouaita, Ryan [Department of Radiation Oncology, CHU Henri Mondor, Creteil (France); Lefkopoulos, Dimitri [Department of Radiation Physics, Institut Gustave Roussy, Villejuif (France); Deutsch, Eric [Université Paris sud, Le Kremlin-Bicêtre (France); Department of Radiation Oncology, Institut Gustave Roussy, Villejuif (France); INSERM 1030, Molecular Radiotherapy, Villejuif (France); Diallo, Ibrahima [Center for Research in Epidemiology and Population Health (CESP) INSERM 1018 Radiation, Epidemiology Group, Villejuif (France); Université Paris sud, Le Kremlin-Bicêtre (France); Institut Gustave Roussy, Villejuif (France); Cardot, Hervé [Institut de Mathématiques de Bourgogne, Université de Bourgogne, Dijon (France); and others

    2014-11-01

    Purpose/Objective(s): To describe a novel method to explore radiation dose-volume effects. Functional data analysis is used to investigate the information contained in differential dose-volume histograms. The method is applied to the normal tissue complication probability modeling of rectal bleeding (RB) for patients irradiated in the prostatic bed by 3-dimensional conformal radiation therapy. Methods and Materials: Kernel density estimation was used to estimate the individual probability density functions from each of the 141 rectum differential dose-volume histograms. Functional principal component analysis was performed on the estimated probability density functions to explore the variation modes in the dose distribution. The functional principal components were then tested for association with RB using logistic regression adapted to functional covariates (FLR). For comparison, 3 other normal tissue complication probability models were considered: the Lyman-Kutcher-Burman model, logistic model based on standard dosimetric parameters (LM), and logistic model based on multivariate principal component analysis (PCA). Results: The incidence rate of grade ≥2 RB was 14%. V{sub 65Gy} was the most predictive factor for the LM (P=.058). The best fit for the Lyman-Kutcher-Burman model was obtained with n=0.12, m = 0.17, and TD50 = 72.6 Gy. In PCA and FLR, the components that describe the interdependence between the relative volumes exposed at intermediate and high doses were the most correlated to the complication. The FLR parameter function leads to a better understanding of the volume effect by including the treatment specificity in the delivered mechanistic information. For RB grade ≥2, patients with advanced age are significantly at risk (odds ratio, 1.123; 95% confidence interval, 1.03-1.22), and the fits of the LM, PCA, and functional principal component analysis models are significantly improved by including this clinical factor. Conclusion: Functional
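
    A compact sketch of the pipeline described above, with synthetic data standing in for the 141 rectum dDVHs; this is a simplification under stated assumptions (plain PCA on discretized densities plus logistic regression on the scores, rather than the paper's kernel density estimation and functional logistic regression):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_pat, n_bins = 141, 80
    dose_axis = np.linspace(0, 80, n_bins)

    # Fake "densities": Gaussian-shaped dDVHs with patient-specific means.
    mu = rng.uniform(30, 60, n_pat)
    dvh = np.exp(-0.5 * ((dose_axis[None, :] - mu[:, None]) / 8.0) ** 2)
    dvh /= dvh.sum(axis=1, keepdims=True)

    tox = (mu + rng.normal(0, 5, n_pat) > 50).astype(int)  # toy outcome

    scores = PCA(n_components=3).fit_transform(dvh)        # "functional" PCs
    model = LogisticRegression().fit(scores, tox)
    print("training accuracy:", model.score(scores, tox))
    ```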

  3. Investigation of the gold nanoparticles effects on the prostate dose distribution in brachytherapy: gel dosimetry and Monte Carlo method

    Science.gov (United States)

    Hashemi, Bijan; Rahmani, Faezeh; Ebadi, Ahmad

    2016-01-01

    Purpose In this work, gold nanoparticles (GNPs) were embedded in MAGIC-f polymer gel irradiated with 192Ir brachytherapy sources. Material and methods First, a Plexiglas phantom of the human pelvis was made. GNPs 15 nm in diameter were synthesized at a concentration of 0.1 mM (0.0197 mg/ml) using a chemical reduction method. Then, the MAGIC-f gel was synthesized. The fabricated gel was poured into the tubes located at the prostate locations (with and without the GNPs) of the phantom. The phantom was irradiated with 192Ir brachytherapy sources following prostate cancer protocols. After 24 hours, the irradiated gels were read using a Siemens 1.5 Tesla MRI scanner. Following the brachytherapy practice, the absolute doses at the reference points and the isodose curves were extracted and compared between experimental measurements and Monte Carlo (MC) simulations. Results The mean absorbed doses in the prostate in the presence of the GNPs were 14% higher than the corresponding values without the GNPs. A gamma index analysis (between gel and MC) using 7%/7 mm criteria was also applied to the data, and a high pass rate was achieved (91.7% and 86.4% for the analyses with and without GNPs, respectively). Conclusions The real three-dimensional analysis compares the dose-volume histograms measured for planning volumes with those expected from the MC calculation. The results indicate that the polymer gel dosimetry method developed and used in this study can be recommended as a reliable method for investigating the dose enhancement factor of GNPs in brachytherapy. PMID:27895684
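
    The gamma-index analysis mentioned above can be sketched in a few lines for the 1-D case, using the global 7%/7 mm criteria from the abstract; the dose profiles here are synthetic, not the study's data:

    ```python
    import numpy as np

    def gamma_1d(dose_ref, dose_eval, x, dd=0.07, dta=7.0):
        """Brute-force 1-D gamma index (global normalization)."""
        d_norm = dd * dose_ref.max()               # global dose criterion
        gammas = np.empty_like(dose_ref)
        for i, (xi, di) in enumerate(zip(x, dose_ref)):
            dist2 = ((x - xi) / dta) ** 2          # distance-to-agreement term
            dose2 = ((dose_eval - di) / d_norm) ** 2
            gammas[i] = np.sqrt((dist2 + dose2).min())
        return gammas

    x = np.linspace(0, 50, 101)                    # mm
    ref = np.exp(-((x - 25) / 12) ** 2)            # toy measured profile
    ev = 1.03 * np.exp(-((x - 25.5) / 12) ** 2)    # toy calculated profile
    g = gamma_1d(ref, ev, x)
    print(f"gamma pass rate: {100 * np.mean(g <= 1):.1f}%")
    ```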

  4. Comparison of dose estimates using the buildup-factor method and a Baryon transport code (BRYNTRN) with Monte Carlo results

    Science.gov (United States)

    Shinn, Judy L.; Wilson, John W.; Nealy, John E.; Cucinotta, Francis A.

    1990-01-01

    Continuing efforts toward validating the buildup factor method and the BRYNTRN code, which use the deterministic approach in solving radiation transport problems and are the candidate engineering tools in space radiation shielding analyses, are presented. A simplified theory of proton buildup factors assuming no neutron coupling is derived to verify a previously chosen form for parameterizing the dose conversion factor that includes the secondary particle buildup effect. Estimates of dose in tissue made by the two deterministic approaches and the Monte Carlo method are intercompared for cases with various thicknesses of shields and various types of proton spectra. The results are found to be in reasonable agreement but with some overestimation by the buildup factor method when the effect of neutron production in the shield is significant. Future improvement to include neutron coupling in the buildup factor theory is suggested to alleviate this shortcoming. Impressive agreement for individual components of doses, such as those from the secondaries and heavy particle recoils, is obtained between BRYNTRN and Monte Carlo results.
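
    Schematically, a buildup-factor method estimates the dose behind a shield of thickness x as the attenuated primary contribution scaled by a buildup factor; the form below is our schematic rendering, while the paper folds the buildup effect into a parameterized dose conversion factor:

    ```latex
    D(x) \;=\; \int \Phi(E)\; e^{-\mu(E)\,x}\; B(E,\,x)\; C(E)\, \mathrm{d}E
    ```

    where \Phi is the incident fluence spectrum, \mu the attenuation coefficient, C the fluence-to-dose conversion factor, and B \geq 1 accounts for the secondary-particle buildup.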

  5. Application of the two-dose-rate method for general recombination correction for liquid ionization chambers in continuous beams

    Science.gov (United States)

    Andersson, Jonas; Tölli, Heikki

    2011-01-01

    A method to correct for the general recombination losses for liquid ionization chambers in continuous beams has been developed. The proposed method has been derived from Greening's theory for continuous beams and is based on measuring the signal from a liquid ionization chamber and an air filled monitor ionization chamber at two different dose rates. The method has been tested with two plane parallel liquid ionization chambers in a continuous radiation x-ray beam with a tube voltage of 120 kV and with dose rates between 2 and 13 Gy min-1. The liquids used as sensitive media in the chambers were isooctane (C8H18) and tetramethylsilane (Si(CH3)4). The general recombination effect was studied using chamber polarizing voltages of 100, 300, 500, 700 and 900 V for both liquids. The relative standard deviation of the results for the collection efficiency with respect to general recombination was found to be a maximum of 0.7% for isooctane and 2.4% for tetramethylsilane. The results are in excellent agreement with Greening's theory for collection efficiencies over 90%. The measured and corrected signals from the liquid ionization chambers used in this work are in very good agreement with the air filled monitor chamber with respect to signal to dose linearity.
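
    The two-dose-rate idea can be illustrated with a near-saturation model f = 1/(1 + kq), with q proportional to dose rate; the algebra below is ours under that assumption, while the paper works from Greening's theory directly. Measuring the liquid-chamber signal ratio R at two dose rates whose true ratio n is known from the air-filled monitor chamber determines k and hence both collection efficiencies:

    ```python
    # Sketch of the two-dose-rate method (our algebra, assuming Greening-type
    # general recombination f = 1/(1 + k*q) near saturation). n is the true
    # dose-rate ratio from the monitor chamber, R the ratio of liquid-chamber
    # signals at the two dose rates; index 1 is the higher dose rate.
    def collection_efficiencies(n: float, R: float):
        k = (n - R) / (n * (R - 1.0))   # recombination parameter at rate 2
        f2 = 1.0 / (1.0 + k)            # efficiency at the lower dose rate
        f1 = 1.0 / (1.0 + n * k)        # efficiency at the higher dose rate
        return f1, f2

    # Illustrative numbers: doubling the dose rate raises the signal by
    # slightly less than a factor 2 because recombination losses grow.
    f1, f2 = collection_efficiencies(n=2.0, R=1.96)
    print(f"f(high) = {f1:.3f}, f(low) = {f2:.3f}")
    ```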

  6. Application of the two-dose-rate method for general recombination correction for liquid ionization chambers in continuous beams

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Jonas; Toelli, Heikki, E-mail: jonas.andersson@radfys.umu.se [Department of Radiation Sciences, Radiation Physics, Umeaa University, SE-901 85 Umeaa (Sweden)

    2011-01-21

    A method to correct for the general recombination losses for liquid ionization chambers in continuous beams has been developed. The proposed method has been derived from Greening's theory for continuous beams and is based on measuring the signal from a liquid ionization chamber and an air filled monitor ionization chamber at two different dose rates. The method has been tested with two plane parallel liquid ionization chambers in a continuous radiation x-ray beam with a tube voltage of 120 kV and with dose rates between 2 and 13 Gy min{sup -1}. The liquids used as sensitive media in the chambers were isooctane (C{sub 8}H{sub 18}) and tetramethylsilane (Si(CH{sub 3}){sub 4}). The general recombination effect was studied using chamber polarizing voltages of 100, 300, 500, 700 and 900 V for both liquids. The relative standard deviation of the results for the collection efficiency with respect to general recombination was found to be a maximum of 0.7% for isooctane and 2.4% for tetramethylsilane. The results are in excellent agreement with Greening's theory for collection efficiencies over 90%. The measured and corrected signals from the liquid ionization chambers used in this work are in very good agreement with the air filled monitor chamber with respect to signal to dose linearity.

  7. Benchmarking Density Functional Theory Based Methods To Model NiOOH Material Properties: Hubbard and van der Waals Corrections vs Hybrid Functionals.

    Science.gov (United States)

    Zaffran, Jeremie; Caspary Toroker, Maytal

    2016-08-09

    NiOOH has recently been used to catalyze water oxidation by way of electrochemical water splitting. Few experimental data are available to rationalize the successful catalytic capability of NiOOH. Thus, theory has a distinctive role for studying its properties. However, the unique layered structure of NiOOH is associated with the presence of essential dispersion forces within the lattice. Hence, the choice of an appropriate exchange-correlation functional within Density Functional Theory (DFT) is not straightforward. In this work, we will show that standard DFT is sufficient to evaluate the geometry, but DFT+U and hybrid functionals are required to calculate the oxidation states. Notably, the benefit of DFT with van der Waals correction is marginal. Furthermore, only hybrid functionals succeed in opening a bandgap, and such methods are necessary to study NiOOH electronic structure. In this work, we expect to give guidelines to theoreticians dealing with this material and to present a rational approach in the choice of the DFT method of calculation.

  8. Dose estimation by biological methods; Estimacion de dosis por metodos biologicos

    Energy Technology Data Exchange (ETDEWEB)

    Guerrero C, C.; David C, L.; Serment G, J.; Brena V, M. [Instituto Nacional de Investigaciones Nucleares, A.P. 18-1027, 11801 Mexico D.F. (Mexico)

    1997-07-01

    Human beings are exposed to strong artificial radiation sources in mainly two ways: the first concerns occupationally exposed personnel (POE), and the second, persons who require radiological treatment. A third, less common way is through accidents. In all these situations it is very important to estimate the absorbed dose. Classical biological dosimetry is based on dicentric analysis. The present work is part of the research process to validate the fluorescence in situ hybridization (FISH) technique, which allows the analysis of chromosome aberrations. (Author)

  9. An automated protocol for performance benchmarking a widefield fluorescence microscope.

    Science.gov (United States)

    Halter, Michael; Bier, Elianna; DeRose, Paul C; Cooksey, Gregory A; Choquette, Steven J; Plant, Anne L; Elliott, John T

    2014-11-01

    Widefield fluorescence microscopy is a highly used tool for visually assessing biological samples and for quantifying cell responses. Despite its widespread use in high content analysis and other imaging applications, few published methods exist for evaluating and benchmarking the analytical performance of a microscope. Easy-to-use benchmarking methods would facilitate the use of fluorescence imaging as a quantitative analytical tool in research applications, and would aid the determination of instrumental method validation for commercial product development applications. We describe and evaluate an automated method to characterize a fluorescence imaging system's performance by benchmarking the detection threshold, saturation, and linear dynamic range to a reference material. The benchmarking procedure is demonstrated using two different materials as the reference material, uranyl-ion-doped glass and Schott 475 GG filter glass. Both are suitable candidate reference materials that are homogeneously fluorescent and highly photostable, and the Schott 475 GG filter glass is currently commercially available. In addition to benchmarking the analytical performance, we also demonstrate that the reference materials provide for accurate day to day intensity calibration. Published 2014 Wiley Periodicals Inc.

  10. Benchmarking research of steel companies in Europe

    Directory of Open Access Journals (Sweden)

    M. Antošová

    2013-07-01

    Full Text Available At present, steelworks are at a stage of permanent change, marked by ever stronger competitive pressure. Managers must therefore solve the questions of how to decrease production costs, how to overcome the competition, and how to survive in the world market. More attention should be paid to modern managerial methods of market research and to comparison with the competition. Benchmarking research is one of the effective tools for such research. The goal of this contribution is to compare chosen steelworks and to indicate new directions for their development, with the possibility of increasing the productivity of steel production.

  11. Benchmark Dose Software Development and Maintenance Ten Berge Cxt Models

    Science.gov (United States)

    This report is intended to provide an overview of beta version 1.0 of the implementation of a concentration-time (CxT) model originally programmed and provided by Wil ten Berge (referred to hereafter as the ten Berge model). The recoding and development described here represent ...

  12. Refined hazard characterization of 3-MCPD using benchmark dose modeling

    NARCIS (Netherlands)

    Rietjens, I.M.C.M.; Scholz, G.; Berg, van den I.; Schilter, B.; Slob, W.

    2012-01-01

    3-Monochloropropane-1,2-diol (3-MCPD)-esters represent a newly identified class of food-borne process contaminants of possible health concern. Due to hydrolysis 3-MCPD esters constitute a potentially significant source of free 3-MCPD exposure and their preliminary risk assessment was based on toxico

  13. Benchmarking HIV health care

    DEFF Research Database (Denmark)

    Podlekareva, Daria; Reekie, Joanne; Mocroft, Amanda

    2012-01-01

    ABSTRACT: BACKGROUND: State-of-the-art care involving the utilisation of multiple health care interventions is the basis for an optimal long-term clinical prognosis for HIV-patients. We evaluated health care for HIV-patients based on four key indicators. METHODS: Four indicators of health care were...... assessed: Compliance with current guidelines on initiation of 1) combination antiretroviral therapy (cART), 2) chemoprophylaxis, 3) frequency of laboratory monitoring, and 4) virological response to cART (proportion of patients with HIV-RNA 90% of time on cART). RESULTS: 7097 Euro...... to North, patients from other regions had significantly lower odds of virological response; the difference was most pronounced for East and Argentina (adjusted OR 0.16[95%CI 0.11-0.23, p HIV health care utilization...

  14. Benchmarking ETL Workflows

    Science.gov (United States)

    Simitsis, Alkis; Vassiliadis, Panos; Dayal, Umeshwar; Karagiannis, Anastasios; Tziovara, Vasiliki

    Extraction-Transform-Load (ETL) processes comprise complex data workflows, which are responsible for the maintenance of a Data Warehouse. A plethora of ETL tools is currently available constituting a multi-million dollar market. Each ETL tool uses its own technique for the design and implementation of an ETL workflow, making the task of assessing ETL tools extremely difficult. In this paper, we identify common characteristics of ETL workflows in an effort of proposing a unified evaluation method for ETL. We also identify the main points of interest in designing, implementing, and maintaining ETL workflows. Finally, we propose a principled organization of test suites based on the TPC-H schema for the problem of experimenting with ETL workflows.

  15. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is carried out across National Health Service (NHS) services using various benchmarking programs. Clinical photography services do not have a program in place and have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  16. A new method to explore the spectral impact of the piriform fossae on the singing voice: benchmarking using MRI-based 3D-printed vocal tracts.

    Directory of Open Access Journals (Sweden)

    Bertrand Delvaux

    Full Text Available The piriform fossae are the 2 pear-shaped cavities lateral to the laryngeal vestibule at the lower end of the vocal tract. They act acoustically as side-branches to the main tract, resulting in a spectral zero in the output of the human voice. This study investigates their spectral role by comparing numerical and experimental results of MRI-based 3D printed Vocal Tracts, for which a new experimental method (based on room acoustics) is introduced. The findings support results in the literature: the piriform fossae create a spectral trough in the region 4-5 kHz and act as formants repellents. Moreover, this study extends those results by demonstrating numerically and perceptually the impact of having large piriform fossae on the sung output.

  17. A new method to explore the spectral impact of the piriform fossae on the singing voice: benchmarking using MRI-based 3D-printed vocal tracts.

    Science.gov (United States)

    Delvaux, Bertrand; Howard, David

    2014-01-01

    The piriform fossae are the 2 pear-shaped cavities lateral to the laryngeal vestibule at the lower end of the vocal tract. They act acoustically as side-branches to the main tract, resulting in a spectral zero in the output of the human voice. This study investigates their spectral role by comparing numerical and experimental results of MRI-based 3D printed Vocal Tracts, for which a new experimental method (based on room acoustics) is introduced. The findings support results in the literature: the piriform fossae create a spectral trough in the region 4-5 kHz and act as formants repellents. Moreover, this study extends those results by demonstrating numerically and perceptually the impact of having large piriform fossae on the sung output.

  18. The calculation of dose from external photon exposures using reference human phantoms and Monte Carlo methods. Pt. 7. Organ doses due to parallel and environmental exposure geometries

    Energy Technology Data Exchange (ETDEWEB)

    Zankl, M. [GSF - Forschungszentrum fuer Umwelt und Gesundheit Neuherberg GmbH, Oberschleissheim (Germany). Inst. fuer Strahlenschutz; Drexler, G. [GSF - Forschungszentrum fuer Umwelt und Gesundheit Neuherberg GmbH, Oberschleissheim (Germany). Inst. fuer Strahlenschutz; Petoussi-Henss, N. [GSF - Forschungszentrum fuer Umwelt und Gesundheit Neuherberg GmbH, Oberschleissheim (Germany). Inst. fuer Strahlenschutz; Saito, K. [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan)

    1997-03-01

    This report presents a tabulation of organ and tissue equivalent dose as well as effective dose conversion coefficients, normalised to air kerma free in air, for occupational exposures and environmental exposures of the public to external photon radiation. For occupational exposures, whole-body irradiation with idealised geometries, i.e. broad parallel beams and fully isotropic radiation incidence, is considered. The directions of incidence for the parallel beams are anterior-posterior, posterior-anterior, left lateral, right lateral and a full 360° rotation around the body's longitudinal axis. The influence of beam divergence on the body doses is also considered as well as the dependence of effective dose on the angle of radiation incidence. Regarding exposure of the public to environmental sources, three source geometries are considered: exposure from a radioactive cloud, from ground contamination and from the natural radionuclides distributed homogeneously in the ground. The precise angular and energy distributions of the gamma rays incident on the human body were taken into account. The organ dose conversion coefficients given in this catalogue were calculated using a Monte Carlo code simulating the photon transport in mathematical models of an adult male and an adult female, respectively. Conversion coefficients are given for the equivalent dose of 23 organs and tissues as well as for effective dose and the equivalent dose of the so-called 'remainder'. The organ equivalent dose conversion coefficients are given separately for the adult male and female models and - as arithmetic mean of the conversion coefficients of both - for an average adult. Fitted data of the coefficients are presented in tables; the primary raw data as resulting from the Monte Carlo calculation are shown in figures together with the fitted data. (orig.)
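
    In compact form, such tabulated coefficients are used as follows (standard ICRP-style bookkeeping, stated here for orientation): an organ equivalent dose is the tabulated coefficient times the measured air kerma, and the effective dose is the tissue-weighted sum,

    ```latex
    H_T \;=\; c_T(E_\gamma,\ \text{geometry}) \cdot K_a ,
    \qquad
    E \;=\; \sum_T w_T\, H_T
    ```

    with c_T the conversion coefficient from the catalogue, K_a the air kerma free in air, and w_T the tissue weighting factors.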

  19. The fixed-point iteration method for IMRT optimization with truncated dose deposition coefficient matrix

    CERN Document Server

    Tian, Zhen; Jia, Xun; Jiang, Steve B

    2013-01-01

    In the treatment plan optimization for intensity modulated radiation therapy (IMRT), a dose-deposition coefficient (DDC) matrix is often pre-computed to parameterize the dose contribution to each voxel in the volume of interest from each beamlet of unit intensity. However, due to the limitation of computer memory and the requirement on computational efficiency, in practice matrix elements of small values are usually truncated, which inevitably compromises the quality of the resulting plan. A fixed-point iteration scheme has been applied in IMRT optimization to solve this problem, which has been reported to be effective and efficient based on the observations of numerical experiments. In this paper, we aim to point out the mathematics behind this scheme and to answer the following three questions: 1) does the fixed-point iteration algorithm converge? 2) when it converges, is the fixed-point solution the same as the original solution obtained with the complete DDC matrix? 3) if not the same, wh...
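
    A toy version of the fixed-point scheme; this is our reading of the setup, with a clipped least-squares inner solve standing in for an actual IMRT optimizer, and toy matrices in place of a clinical DDC matrix:

    ```python
    import numpy as np

    # Optimize with a truncated DDC matrix A_t, but shift the prescription
    # by the dose that the truncated elements (A - A_t) deliver for the
    # current beamlet intensities, and iterate to a fixed point.
    rng = np.random.default_rng(1)
    A = np.abs(rng.normal(0, 1, (40, 10)))        # full DDC matrix (toy)
    A_t = np.where(A > 0.3, A, 0.0)               # truncate small elements
    d_presc = A @ np.full(10, 1.0)                # achievable prescription

    x = np.zeros(10)
    for it in range(50):
        residual_dose = (A - A_t) @ x             # dose from truncated elements
        x_new, *_ = np.linalg.lstsq(A_t, d_presc - residual_dose, rcond=None)
        x_new = np.clip(x_new, 0.0, None)         # keep intensities non-negative
        if np.linalg.norm(x_new - x) < 1e-9:
            break
        x = x_new

    print(f"stopped after {it + 1} iterations; "
          f"dose error = {np.linalg.norm(A @ x - d_presc):.2e}")
    ```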

  20. A numerical method to optimise the spatial dose distribution in carbon ion radiotherapy planning.

    Science.gov (United States)

    Grzanka, L; Korcyl, M; Olko, P; Waligorski, M P R

    2015-09-01

    The authors describe a numerical algorithm to optimise the entrance spectra of a composition of pristine carbon ion beams which delivers a pre-assumed dose-depth profile over a given depth range within the spread-out Bragg peak. The physical beam transport model is based on tabularised data generated using the SHIELD-HIT10A Monte-Carlo code. Depth-dose profile optimisation is achieved by minimising the deviation from the pre-assumed profile evaluated on a regular grid of points over a given depth range. This multi-dimensional minimisation problem is solved using the L-BFGS-B algorithm, with parallel processing support. Another multi-dimensional interpolation algorithm is used to calculate at given beam depths the cumulative energy-fluence spectra for primary and secondary ions in the optimised beam composition. Knowledge of such energy-fluence spectra for each ion is required by the mixed-field calculation of Katz's cellular Track Structure Theory (TST) that predicts the resulting depth-survival profile. The optimisation algorithm and the TST mixed-field calculation are essential tools in the development of a one-dimensional kernel of a carbon ion therapy planning system. All codes used in the work are generally accessible within the libamtrack open source platform.
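
    The minimisation step maps naturally onto the L-BFGS-B algorithm named in the abstract; below is a toy sketch using scipy's implementation, with Gaussian pseudo-peaks standing in for the SHIELD-HIT10A depth-dose tables (all shapes and numbers are illustrative assumptions):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    depths = np.linspace(0, 160, 161)                       # mm
    peak_depths = np.linspace(60, 150, 19)                  # pristine peak positions
    D = np.array([np.exp(-0.5 * ((depths - p) / 6.0) ** 2)  # one column per peak
                  + 0.3 * (depths < p)                      # crude entrance plateau
                  for p in peak_depths]).T

    target = ((depths >= 60) & (depths <= 150)).astype(float)  # flat SOBP
    in_sobp = target > 0

    def objective(w):
        """Squared deviation from the pre-assumed profile over the SOBP."""
        dose = D @ w
        return np.sum((dose[in_sobp] - target[in_sobp]) ** 2)

    res = minimize(objective, x0=np.full(len(peak_depths), 0.05),
                   method="L-BFGS-B", bounds=[(0, None)] * len(peak_depths))
    dose = D @ res.x
    flatness = dose[in_sobp].std() / dose[in_sobp].mean()
    print(f"SOBP flatness (std/mean): {flatness:.3%}")
    ```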

  1. Climate Benchmark Missions: CLARREO

    Science.gov (United States)

    Wielicki, Bruce A.; Young, David F.

    2010-01-01

    CLARREO (Climate Absolute Radiance and Refractivity Observatory) is one of the four Tier 1 missions recommended by the recent NRC decadal survey report on Earth Science and Applications from Space (NRC, 2007). The CLARREO mission addresses the need to rigorously observe climate change on decade time scales and to use decadal change observations as the most critical method to determine the accuracy of climate change projections such as those used in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR4). A rigorously known accuracy of both decadal change observations as well as climate projections is critical in order to enable sound policy decisions. The CLARREO mission accomplishes this critical objective through highly accurate and SI traceable decadal change observations sensitive to many of the key uncertainties in climate radiative forcings, responses, and feedbacks that in turn drive uncertainty in current climate model projections. The same uncertainties also lead to uncertainty in attribution of climate change to anthropogenic forcing. The CLARREO breakthrough in decadal climate change observations is to achieve the required levels of accuracy and traceability to SI standards for a set of observations sensitive to a wide range of key decadal change variables. These accuracy levels are determined both by the projected decadal changes as well as by the background natural variability that such signals must be detected against. The accuracy for decadal change traceability to SI standards includes uncertainties of calibration, sampling, and analysis methods. Unlike most other missions, all of the CLARREO requirements are judged not by instantaneous accuracy, but instead by accuracy in large time/space scale average decadal changes. Given the focus on decadal climate change, the NRC Decadal Survey concluded that the single most critical issue for decadal change observations was their lack of accuracy and low confidence in

  2. The LDBC Social Network Benchmark: Interactive Workload

    NARCIS (Netherlands)

    Erling, O.; Averbuch, A.; Larriba-Pey, J.; Chafi, H.; Gubichev, A.; Prat, A.; Pham, M.D.; Boncz, P.A.

    2015-01-01

    The Linked Data Benchmark Council (LDBC) is now two years underway and has gathered strong industrial participation for its mission to establish benchmarks, and benchmarking practices for evaluating graph data management systems. The LDBC introduced a new choke-point driven methodology for developin

  3. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  4. Benchmarking: Achieving the best in class

    Energy Technology Data Exchange (ETDEWEB)

    Kaemmerer, L

    1996-05-01

    Oftentimes, people find the process of organizational benchmarking an onerous task, or, because they do not fully understand the nature of the process, end up with results that are less than stellar. This paper presents the challenges of benchmarking and reasons why benchmarking can benefit an organization in today's economy.

  5. Thermal Performance Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Xuhui; Moreno, Gilbert; Bennion, Kevin

    2016-06-07

    The goal for this project is to thoroughly characterize the thermal performance of state-of-the-art (SOA) in-production automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The thermal performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY16, the 2012 Nissan LEAF power electronics and 2014 Honda Accord Hybrid power electronics thermal management system were characterized. Comparison of the two power electronics thermal management systems was also conducted to provide insight into the various cooling strategies to understand the current SOA in thermal management for automotive power electronics and electric motors.

  6. Benchmarking Post-Hartree–Fock Methods To Describe the Nonlinear Optical Properties of Polymethines: An Investigation of the Accuracy of Algebraic Diagrammatic Construction (ADC) Approaches

    KAUST Repository

    Knippenberg, Stefan

    2016-10-07

    Third-order nonlinear optical (NLO) properties of polymethine dyes have been widely studied for applications such as all-optical switching. However, the limited accuracy of the current computational methodologies has prevented a comprehensive understanding of the nature of the lowest excited states and their influence on the molecular optical and NLO properties. Here, attention is paid to the lowest excited-state energies and their energetic ratio, as these characteristics impact the figure-of-merit for all-optical switching. For a series of model polymethines, we compare several algebraic diagrammatic construction (ADC) schemes for the polarization propagator with approximate second-order coupled cluster (CC2) theory, the widely used INDO/MRDCI approach and the symmetry-adapted cluster configuration interaction (SAC-CI) algorithm incorporating singles and doubles linked excitation operators (SAC-CI SD-R). We focus in particular on the ground-to-excited state transition dipole moments and the corresponding state dipole moments, since these quantities are found to be of utmost importance for an effective description of the third-order polarizability γ and two-photon absorption spectra. A sum-over-states expression has been used, which is found to quickly converge. While ADC(3/2) has been found to be the most appropriate method to calculate these properties, CC2 performs poorly.

  7. Benchmarking Post-Hartree-Fock Methods To Describe the Nonlinear Optical Properties of Polymethines: An Investigation of the Accuracy of Algebraic Diagrammatic Construction (ADC) Approaches.

    Science.gov (United States)

    Knippenberg, Stefan; Gieseking, Rebecca L; Rehn, Dirk R; Mukhopadhyay, Sukrit; Dreuw, Andreas; Brédas, Jean-Luc

    2016-11-08

    Third-order nonlinear optical (NLO) properties of polymethine dyes have been widely studied for applications such as all-optical switching. However, the limited accuracy of the current computational methodologies has prevented a comprehensive understanding of the nature of the lowest excited states and their influence on the molecular optical and NLO properties. Here, attention is paid to the lowest excited-state energies and their energetic ratio, as these characteristics impact the figure-of-merit for all-optical switching. For a series of model polymethines, we compare several algebraic diagrammatic construction (ADC) schemes for the polarization propagator with approximate second-order coupled cluster (CC2) theory, the widely used INDO/MRDCI approach and the symmetry-adapted cluster configuration interaction (SAC-CI) algorithm incorporating singles and doubles linked excitation operators (SAC-CI SD-R). We focus in particular on the ground-to-excited state transition dipole moments and the corresponding state dipole moments, since these quantities are found to be of utmost importance for an effective description of the third-order polarizability γ and two-photon absorption spectra. A sum-over-states expression has been used, which is found to quickly converge. While ADC(3/2) has been found to be the most appropriate method to calculate these properties, CC2 performs poorly.
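
    For orientation, the essential-states form of the sum-over-states expression for γ, as commonly written for polymethines in the NLO literature (the converged expansion used in such studies runs over many excited states; this truncated form is shown only to identify the quantities named above):

    ```latex
    \gamma \;\propto\;
      \underbrace{-\,\frac{\mu_{ge}^{4}}{E_{ge}^{3}}}_{\text{negative (N) term}}
      \;+\;
      \underbrace{\frac{\mu_{ge}^{2}\,(\Delta\mu_{ge})^{2}}{E_{ge}^{3}}}_{\text{dipolar (D) term}}
      \;+\;
      \underbrace{\sum_{e'}\frac{\mu_{ge}^{2}\,\mu_{ee'}^{2}}{E_{ge}^{2}\,E_{ge'}}}_{\text{two-photon (T) term}}
    ```

    with \mu_{ge} the ground-to-excited transition dipole, \Delta\mu_{ge} the state-dipole difference, and E_{ge}, E_{ge'} the excitation energies.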

  8. Electronically Excited States of Vitamin B12: Benchmark Calculations Including Time-Dependent Density Functional Theory and Correlated Ab Initio Methods

    CERN Document Server

    Kornobis, Karina; Wong, Bryan M; Lodowski, Piotr; Jaworska, Maria; Andruniów, Tadeusz; Rudd, Kenneth; Kozlowski, Pawel M; 10.1021/jp110914y

    2011-01-01

    Time-dependent density functional theory (TD-DFT) and correlated ab initio methods have been applied to the electronically excited states of vitamin B12 (cyanocobalamin or CNCbl). Different experimental techniques have been used to probe the excited states of CNCbl, revealing many issues that remain poorly understood from an electronic structure point of view. Due to its efficient scaling with size, TD-DFT emerges as one of the most practical tools that can be used to predict the electronic properties of these fairly complex molecules. However, the description of excited states is strongly dependent on the type of functional used in the calculations. In the present contribution, the choice of a proper functional for vitamin B12 was evaluated in terms of its agreement with both experimental results and correlated ab initio calculations. Three different functionals, i.e. B3LYP, BP86, and LC-BLYP, were tested. In addition, the effect of relative contributions of DFT and HF to the exchange-correlation functional ...

  9. Novel iterative reconstruction method with optimal dose usage for partially redundant CT-acquisition

    Science.gov (United States)

    Bruder, H.; Raupach, R.; Sunnegardh, J.; Allmendinger, T.; Klotz, E.; Stierstorfer, K.; Flohr, T.

    2015-11-01

    In CT imaging, a variety of applications exist which are strongly SNR limited. However, in some cases redundant data of the same body region provide additional quanta. Examples: in dual energy CT, the spatial resolution has to be compromised to provide good SNR for material decomposition. However, the respective spectral dataset of the same body region provides additional quanta which might be utilized to improve SNR of each spectral component. Perfusion CT is a high dose application, and dose reduction is highly desirable. However, a meaningful evaluation of perfusion parameters might be impaired by noisy time frames. On the other hand, the SNR of the average of all time frames is extremely high. In redundant CT acquisitions, multiple image datasets can be reconstructed and averaged to composite image data. These composite image data, however, might be compromised with respect to contrast resolution and/or spatial resolution and/or temporal resolution. These observations bring us to the idea of transferring high SNR of composite image data to low SNR ‘source’ image data, while maintaining their resolution. It has been shown that the noise characteristics of CT image data can be improved by iterative reconstruction (Popescu et al 2012 Book of Abstracts, 2nd CT Meeting (Salt Lake City, UT) p 148). In case of data dependent Gaussian noise it can be modelled with image-based iterative reconstruction at least in an approximate manner (Bruder et al 2011 Proc. SPIE 7961 79610J). We present a generalized update equation in image space, consisting of a linear combination of the previous update, a correction term which is constrained by the source image data, and a regularization prior, which is initialized by the composite image data. This iterative reconstruction approach we call bimodal reconstruction (BMR). Based on simulation data it is shown that BMR can improve low contrast detectability, substantially reduces the noise power and has the potential to recover
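
    One schematic way to write an update of the kind described, in our own notation (the paper's exact update equation differs in detail): the iterate is pulled toward the low-SNR source data while a regularization prior initialized by the composite image supplies the noise model,

    ```latex
    f^{(n+1)} \;=\; f^{(n)}
    \;+\; \lambda\,\underbrace{\bigl(f_{\mathrm{src}} - f^{(n)}\bigr)}_{\text{source-data constraint}}
    \;-\; \beta\,\underbrace{\nabla R\bigl(f^{(n)};\, f_{\mathrm{comp}}\bigr)}_{\text{prior from composite image}}
    ```

    where f_src is the low-SNR source image, f_comp the high-SNR composite, and \lambda, \beta are step weights.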

  10. Mechanisms of Fatal Cardiotoxicity following High-Dose Cyclophosphamide Therapy and a Method for Its Prevention.

    Directory of Open Access Journals (Sweden)

    Takuro Nishikawa

    Full Text Available Observed only after administration of high doses, cardiotoxicity is the dose-limiting effect of cyclophosphamide (CY). We investigated the poorly understood cardiotoxic mechanisms of high-dose CY. A rat cardiac myocardial cell line, H9c2, was exposed to CY metabolized by the S9 fraction of rat liver homogenate mixed with co-factors (CYS9). Cytotoxicity was then evaluated by 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2H-tetrazolium bromide (MTT) assay, lactate dehydrogenase release, production of reactive oxygen species (ROS), and incidence of apoptosis. We also investigated how the myocardial cellular effects of CYS9 were modified by the acrolein scavenger N-acetylcysteine (NAC), the antioxidant isorhamnetin (ISO), and the CYP inhibitor β-ionone (BIO). Quantifying CY and CY metabolites by means of liquid chromatography coupled with electrospray tandem mass spectrometry, we assayed culture supernatants of CYS9 with and without candidate cardioprotectant agents. Assay results for MTT showed that treatment with CY (125-500 μM) did not induce cytotoxicity. CYS9, however, exhibited myocardial cytotoxicity when the CY concentration was 250 μM or more. After 250 μM of CY was metabolized in S9 mix for 2 h, the concentration of CY was 73.6 ± 8.0 μM, 4-hydroxy-cyclophosphamide (HCY) 17.6 ± 4.3 μM, o-carboxyethyl-phosphoramide (CEPM) 26.6 ± 5.3 μM, and acrolein 26.7 ± 2.5 μM. Inhibition of CYS9-induced cytotoxicity occurred with NAC, ISO, and BIO. When treated with ISO or BIO, metabolism of CY was significantly inhibited. Pre-treatment with NAC, however, did not inhibit the metabolism of CY: compared to control samples, we observed no difference in HCY, a significant increase of CEPM, and a significant decrease of acrolein. Furthermore, NAC pre-treatment did not affect intracellular amounts of ROS produced by CYS9. Since acrolein seems to be heavily implicated in the onset of cardiotoxicity, any competitive metabolic processing of CY that reduces its transformation to acrolein

  11. The European Union benchmarking experience. From euphoria to fatigue?

    Directory of Open Access Journals (Sweden)

    Michael Zängle

    2004-06-01

    Full Text Available Even if one may agree with the possible criticism of the Lisbon process as being too vague in commitment or as lacking appropriate statistical techniques and indicators, the benchmarking system provided by EUROSTAT seems to be sufficiently effective in warning against imminent failure. The Lisbon objectives are very demanding. This holds true even if each of the objectives is looked at in isolation. But 'Lisbon' is more demanding than that, requiring a combination of several objectives to be achieved simultaneously (GDP growth, labour productivity, job-content of growth, higher quality of jobs and greater social cohesion). Even to countries like Ireland, showing exceptionally high performance in GDP growth and employment promotion during the period under investigation, achieving potentially conflicting objectives simultaneously seems to be beyond feasibility. The European Union benchmarking exercise is embedded in the context of the Open Method(s) of Co-ordination (OMC). This context makes the benchmarking approach part and parcel of an overarching philosophy, which relates the benchmarking indicators to each other and assigns to them their role in corroborating the increasingly dominating project of the 'embedded neo-liberalism'. Against this background, the present paper is focussed on the following point. With the EU benchmarking system being effective enough to make the imminent under-achievement visible, there is a danger of disillusionment and 'benchmarking fatigue', which may provoke an ideological crisis. The dominant project being so deeply rooted, however, chances are high that this crisis will be solved immanently in terms of embedded neo-liberalism by strengthening the neo-liberal branch of the European project. Confining itself to the Europe of Fifteen, the analysis draws on EUROSTAT's database of Structural Indicators. ...

  12. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  13. Methodology for Benchmarking IPsec Gateways

    Directory of Open Access Journals (Sweden)

    Adam Tisovský

    2012-08-01

    Full Text Available The paper analyses the forwarding performance of an IPsec gateway over the range of offered loads. It focuses on the forwarding rate and packet loss, particularly at the gateway’s performance peak and in the state of gateway overload. It explains possible performance degradation when the gateway is overloaded by excessive offered load. The paper further evaluates different approaches for obtaining forwarding performance parameters – the widely used throughput described in RFC 1242, the maximum forwarding rate with zero packet loss, and our proposed equilibrium throughput. According to our observations, equilibrium throughput might be the most universal parameter for benchmarking security gateways, as the others may depend on the duration of test trials. Employing equilibrium throughput would also greatly shorten the time required for benchmarking. Lastly, the paper presents a methodology and a hybrid step/binary search algorithm for obtaining the value of the equilibrium throughput.
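    The hybrid step/binary search mentioned above lends itself to a compact sketch. The version below is an illustration only, not the authors' reference implementation; the measure_forwarding_rate helper is hypothetical and stands in for a test trial run against the gateway.

        def find_equilibrium_throughput(measure_forwarding_rate, load_max,
                                        coarse_step=50.0, tolerance=1.0):
            """Locate the offered load at which offered load equals forwarding rate.

            measure_forwarding_rate: hypothetical callable returning the measured
            forwarding rate (Mbit/s) for a given offered load (Mbit/s).
            """
            # Phase 1: coarse step search - walk up until the gateway saturates,
            # i.e. the forwarding rate falls below the offered load.
            low, high = 0.0, load_max
            load = coarse_step
            while load <= load_max:
                if measure_forwarding_rate(load) < load:
                    low, high = load - coarse_step, load
                    break
                load += coarse_step
            else:
                return load_max  # gateway never saturated in the tested range

            # Phase 2: binary search inside the bracketing interval.
            while high - low > tolerance:
                mid = (low + high) / 2.0
                if measure_forwarding_rate(mid) >= mid:
                    low = mid   # gateway still keeps up at this load
                else:
                    high = mid  # gateway overloaded
            return (low + high) / 2.0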

  14. Absorbed dose measurements in mammography using Monte Carlo method and ZrO{sub 2}+PTFE dosemeters

    Energy Technology Data Exchange (ETDEWEB)

    Duran M, H. A.; Hernandez O, M. [Departamento de Investigacion en Polimeros y Materiales, Universidad de Sonora, Blvd. Luis Encinas y Rosales s/n, Col. Centro, 83190 Hermosillo, Sonora (Mexico); Salas L, M. A.; Hernandez D, V. M.; Vega C, H. R. [Unidad Academica de Estudios Nucleares, Universidad Autonoma de Zacatecas, Cipres 10, Fracc. La Penuela, 98068 Zacatecas (Mexico); Pinedo S, A.; Ventura M, J.; Chacon, F. [Hospital General de Zona No. 1, IMSS, Interior Alameda 45, 98000 Zacatecas (Mexico); Rivera M, T. [Centro de Investigacion en Ciencia Aplicada y Tecnologia Avanzada, IPN, Av. Legaria 694, Col. Irrigacion, 11500 Mexico D. F.(Mexico)], e-mail: hduran20_1@hotmail.com

    2009-10-15

    The mammography test is a central tool for breast cancer diagnosis. In addition, programs are conducted periodically to screen asymptomatic women in certain age groups; these programs have shown a reduction in breast cancer mortality. Early detection of breast cancer is achieved through mammography, which contrasts the glandular and adipose tissue with a probable calcification. The parameters used for mammography are based on the thickness and density of the breast; their values depend on the voltage, current, focal spot and anode-filter combination. To achieve a clear image at a minimum dose, appropriate irradiation conditions must be chosen. The risk associated with mammography should not be ignored. This study was performed in the General Hospital No. 1 IMSS in Zacatecas. A glucose phantom was used, and the air kerma at the entrance of the breast was measured with ZrO{sub 2}+PTFE thermoluminescent dosemeters and calculated using Monte Carlo methods; the calculation was completed by computing the absorbed dose. (author)

  15. A Benchmark for Management Effectiveness

    OpenAIRE

    Zimmermann, Bill; Chanaron, Jean-Jacques; Klieb, Leslie

    2007-01-01

    International audience; This study presents a tool to gauge managerial effectiveness in the form of a questionnaire that is easy to administer and score. The instrument covers eight distinct areas of organisational climate and culture of management inside a company or department. Benchmark scores were determined by administering sample-surveys to a wide cross-section of individuals from numerous firms in Southeast Louisiana, USA. Scores remained relatively constant over a seven-year timeframe...

  16. Effect of intensity modulated radiation therapy according to the equivalent uniform dose optimization method on patients with lung cancer

    Institute of Scientific and Technical Information of China (English)

    Yu-Fu Zhou; Qian Sun; Ya-Jun Zhang; Geng-Ming Wang; Bin He; Tao Qi; An Zhou

    2016-01-01

    Objective: To analyze the effect of intensity modulated radiation therapy according to the equivalent uniform dose optimization method on patients with lung cancer. Methods: A total of 82 cases of non-small cell lung cancer were divided into observation group and control group according to the random number table method. Patients in the control group received conventional radiotherapy while the observation group received intensity modulated radiotherapy based on the equivalent uniform dose optimization method. The treatment effects, survival times, blood vessel-related factors, blood coagulation function, levels of inflammatory factors and so on were compared between the two groups of patients. Results: The effective rate of the observation group after treatment was higher than that of the control group. Progression-free survival and median overall survival times were longer than those of patients in the control group (P<0.05). The serum VEGF and HIF-α levels as well as D-D, TT, PT, APTT and FIB levels were lower in observation group patients after treatment than those in the control group (P<0.05). At the same time point, serum TNF-α, CRP and PCT levels in the observation group after treatment were lower than those in the control group (P<0.05). Serum M2-PK, CA125, CEA and SCC values of patients in the observation group after treatment were all significantly lower than those in the control group (P<0.05). Conclusions: Intensity modulated radiation therapy based on the equivalent uniform dose optimization method can improve the treatment effect, prolong the survival time, optimize the micro-inflammatory environment and inhibit tumor biological behavior at the same time.

  17. Benchmarking Is Associated With Improved Quality of Care in Type 2 Diabetes

    OpenAIRE

    Hermans, Michel; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank; Vandenberghe, Hans; Brotons, Carlos; OPTIMISE (OPtimal Type 2 dIabetes Management Including benchmarking and Standard trEatment) International Steering Committee.

    2013-01-01

    OBJECTIVE: To assess prospectively the effect of benchmarking on quality of primary care for patients with type 2 diabetes by using three major modifiable cardiovascular risk factors as critical quality indicators. RESEARCH DESIGN AND METHODS: Primary care physicians treating patients with type 2 diabetes in six European countries were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). In both...

  18. SU-E-T-561: Development of Depth Dose Measurement Technique Using the Multilayer Ionization Chamber for Spot Scanning Method

    Energy Technology Data Exchange (ETDEWEB)

    Takayanagi, T; Fujitaka, S; Umezawa, M [Hitachi, Ltd., Hitachi Research Laboratory, Hitachi-shi, Ibaraki-ken (Japan); Ito, Y; Nakashima, C; Matsuda, K [Hitachi, Ltd., Hitachi Works, Hitachi-shi, Ibaraki-ken (Japan)

    2014-06-01

    Purpose: To develop a measurement technique which suppresses the difference between profiles obtained with a multilayer ionization chamber (MLIC) and with a water phantom. Methods: The developed technique multiplies the raw MLIC data by a correction factor that depends on the initial beam range and the water equivalent depth. The correction factor is derived based on a Bragg curve calculation formula considering range straggling and fluence loss caused by nuclear reactions. Furthermore, the correction factor is adjusted based on several integrated depth doses measured with a water phantom and the MLIC. The measured depth dose profiles along the central axis of a proton field with a nominal field size of 10 by 10 cm were compared between the MLIC using the new technique and the water phantom. The spread out Bragg peak was 20 cm for fields with ranges of 30.6 cm and 6.9 cm. Raw MLIC data were obtained for each energy layer and integrated after multiplying by the correction factor. The measurements were performed with a spot scanning nozzle at Nagoya Proton Therapy Center, Japan. Results: The profile measured with the MLIC using the new technique is consistent with that of the water phantom. Moreover, 97% of the points passed the 1% dose/1 mm distance agreement criterion of the gamma index. Conclusion: We have demonstrated that the new technique suppresses the difference between profiles obtained with the MLIC and with the water phantom. It was concluded that this technique is useful for depth dose measurement in the proton spot scanning method.
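    As a hedged illustration of the multiply-then-integrate step described above (the Bragg-curve-based derivation of the correction factor itself is not reproduced here), the raw per-layer MLIC data could be combined as follows; the array layout is an assumption made for the example.

        import numpy as np

        def corrected_depth_dose(raw_layer_data, correction_factor):
            """Apply a range/depth-dependent correction to raw MLIC layer data.

            raw_layer_data: 2D array, one row per energy layer, one column per
            MLIC channel (water-equivalent depth) - a hypothetical layout.
            correction_factor: array of the same shape, derived elsewhere from
            a Bragg curve model with range straggling and nuclear fluence loss.
            """
            raw = np.asarray(raw_layer_data, dtype=float)
            cf = np.asarray(correction_factor, dtype=float)
            # Correct each energy layer, then integrate over layers per depth.
            return (raw * cf).sum(axis=0)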

  19. Performance comparison of various maximum likelihood nonlinear mixed-effects estimation methods for dose-response models.

    Science.gov (United States)

    Plan, Elodie L; Maloney, Alan; Mentré, France; Karlsson, Mats O; Bertrand, Julie

    2012-09-01

    Estimation methods for nonlinear mixed-effects modelling have considerably improved over the last decades. Nowadays, several algorithms implemented in different software are used. The present study aimed at comparing their performance for dose-response models. Eight scenarios were considered using a sigmoid E(max) model, with varying sigmoidicity and residual error models. One hundred simulated datasets for each scenario were generated. One hundred individuals with observations at four doses constituted the rich design and at two doses, the sparse design. Nine parametric approaches for maximum likelihood estimation were studied: first-order conditional estimation (FOCE) in NONMEM and R, LAPLACE in NONMEM and SAS, adaptive Gaussian quadrature (AGQ) in SAS, and stochastic approximation expectation maximization (SAEM) in NONMEM and MONOLIX (both SAEM approaches with default and modified settings). All approaches started first from initial estimates set to the true values and second, using altered values. Results were examined through relative root mean squared error (RRMSE) of the estimates. With true initial conditions, full completion rate was obtained with all approaches except FOCE in R. Runtimes were shortest with FOCE and LAPLACE and longest with AGQ. Under the rich design, all approaches performed well except FOCE in R. When starting from altered initial conditions, AGQ, and then FOCE in NONMEM, LAPLACE in SAS, and SAEM in NONMEM and MONOLIX with tuned settings, consistently displayed lower RRMSE than the other approaches. For standard dose-response models analyzed through mixed-effects models, differences were identified in the performance of estimation methods available in current software, giving material to modellers to identify suitable approaches based on an accuracy-versus-runtime trade-off.
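    To make the simulation setting concrete, the sketch below generates dose-response data from a sigmoid E(max) model with log-normal between-subject variability and defines the relative root mean squared error (RRMSE) used to compare estimates. All parameter values and the design are invented for illustration and are not taken from the study.

        import numpy as np

        rng = np.random.default_rng(0)

        def sigmoid_emax(dose, e0, emax, ed50, gamma):
            """Sigmoid E(max) dose-response model."""
            return e0 + emax * dose**gamma / (ed50**gamma + dose**gamma)

        # Illustrative "rich" design: 100 individuals observed at four doses.
        doses = np.array([10.0, 30.0, 100.0, 300.0])
        n_ind = 100
        e0, emax, ed50, gamma = 5.0, 30.0, 50.0, 2.0   # invented true values
        omega = 0.2                                     # between-subject SD (log scale)

        eta = rng.normal(0.0, omega, size=n_ind)        # random effects on ED50
        ed50_i = ed50 * np.exp(eta)
        pred = sigmoid_emax(doses[None, :], e0, emax, ed50_i[:, None], gamma)
        obs = pred + rng.normal(0.0, 1.0, size=pred.shape)  # additive residual error

        def rrmse(estimates, true_value):
            """Relative root mean squared error over replicate estimates."""
            estimates = np.asarray(estimates)
            return np.sqrt(np.mean((estimates - true_value) ** 2)) / true_value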

  20. New investigation of distribution imaging and content uniformity of very low dose drugs using hot-melt extrusion method.

    Science.gov (United States)

    Park, Jun-Bom; Kang, Chin-Yang; Kang, Wie-Soo; Choi, Han-Gon; Han, Hyo-Kyung; Lee, Beom-Jin

    2013-12-31

    The content uniformity of low dose drugs in dosage forms is very important for quality assurance. The aim of this study was to prepare uniformly and homogeneously distributed dosage forms of very low-dose drugs using twin screw hot-melt extrusion (HME) and to investigate the distribution of drugs using instrumental analyses. For the feasibility of HME method, a very low amount of coumarin-6, a fluorescent dye, was used to visualize distribution images using confocal laser scanning microscope (CLSM). Limaprost, tamsulosin and glimepiride were then used as low-dose model drugs to study the applicability of HME for content uniformity and distribution behaviors. Hydrophilic thermosensitive polymers with low melting point, such as Poloxamer188 and polyethylene glycol (PEG) 6000, were chosen as carriers. The melt extrusion was carried out around 50°C, at which both carriers were easily dissolved but model drugs remained in solid form. The physicochemical properties of the hot-melt extrudates, including differential scanning calorimetry (DSC), powder X-ray diffraction (PXRD) and Fourier transform infrared spectroscopy (FT-IR), were measured. Content uniformity of the drugs was also checked by HPLC. CLSM imaging showed that model drugs were well distributed throughout the hot-melt extrudate, giving better content uniformity with low batch-to-batch variations compared with simple physical mixtures. DSC, PXRD and FT-IR data showed that there was no interaction or interference between model drugs and thermosensitive polymers. The current HME methods could be used to prepare uniformly distributed and reproducible solid dosage forms containing very low dose drugs for further pharmaceutical applications.

  1. Comparison of methods for the measurement of radiation dose distributions in high dose rate (HDR) brachytherapy: Ge-doped optical fiber, EBT3 Gafchromic film, and PRESAGE{sup Registered-Sign} radiochromic plastic

    Energy Technology Data Exchange (ETDEWEB)

    Palmer, A. L. [Department of Physics, Faculty of Engineering and Physical Science, University of Surrey, Surrey GU2 7JP (United Kingdom); Department of Medical Physics, F-Level, Queen Alexandra Hospital, Portsmouth Hospitals NHS Trust, Portsmouth, Hampshire PO6 3LY (United Kingdom); Di Pietro, P.; Alobaidli, S.; Issa, F.; Doran, S.; Bradley, D. [Department of Physics, Faculty of Engineering and Physical Science, University of Surrey, Surrey GU2 7JP (United Kingdom); Nisbet, A. [Department of Physics, Faculty of Engineering and Physical Science, University of Surrey, Surrey GU2 7JP (United Kingdom); Department of Medical Physics, Royal Surrey County Hospital NHS Foundation Trust, Guildford, Surrey GU2 7XX (United Kingdom)

    2013-06-15

    Purpose: Dose distribution measurement in clinical high dose rate (HDR) brachytherapy is challenging, because of the high dose gradients, large dose variations, and small scale, but it is essential to verify accurate treatment planning and treatment equipment performance. The authors compare and evaluate three dosimetry systems for potential use in brachytherapy dose distribution measurement: Ge-doped optical fibers, EBT3 Gafchromic film with multichannel analysis, and the radiochromic material PRESAGE{sup Registered-Sign} with optical-CT readout. Methods: Ge-doped SiO{sub 2} fibers with 6 {mu}m active core and 5.0 mm length were sensitivity-batched and their thermoluminescent properties used via conventional heating and annealing cycles. EBT3 Gafchromic film of 30 {mu}m active thickness was calibrated in three color channels using a nominal 6 MV linear accelerator. A 48-bit transmission scanner and advanced multichannel analysis method were utilized to derive dose measurements. Samples of the solid radiochromic polymer PRESAGE{sup Registered-Sign }, 60 mm diameter and 100 mm height, were analyzed with a parallel beam optical CT scanner. Each dosimetry system was used to measure the dose as a function of radial distance from a Co-60 HDR source, with results compared to Monte Carlo TG-43 model data. Each system was then used to measure the dose distribution along one or more lines through typical clinical dose distributions for cervix brachytherapy, with results compared to treatment planning system (TPS) calculations. Purpose-designed test objects constructed of Solid Water and held within a full-scatter water tank were utilized. Results: All three dosimetry systems reproduced the general shape of the isolated source radial dose function and the TPS dose distribution. However, the dynamic range of EBT3 exceeded those of doped optical fibers and PRESAGE{sup Registered-Sign }, and the latter two suffered from unacceptable noise and artifact. For the experimental

  2. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  3. HS06 Benchmark for an ARM Server

    CERN Document Server

    Kluth, Stefan

    2013-01-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  4. Comparing measurement-derived (3DVH) and machine log file-derived dose reconstruction methods for VMAT QA in patient geometries.

    Science.gov (United States)

    Tyagi, Neelam; Yang, Kai; Yan, Di

    2014-07-08

    The purpose of this study was to compare the measurement-derived (3DVH) dose reconstruction method with the machine log file-derived dose reconstruction method in patient geometries for VMAT delivery. A total of ten patient plans were selected, ranging from regular fractionation plans to complex SBRT plans. Treatment sites in the lung and abdomen were chosen to explore the effects of tissue heterogeneity on the respective dose reconstruction algorithms. Single- and multiple-arc VMAT plans were generated to achieve the desired target objectives. The delivered plan in the patient geometry was reconstructed by using ArcCHECK Planned Dose Perturbation (ACPDP) within the 3DVH software, and by converting the machine log file to Pinnacle3 9.0 treatment plan format and recalculating dose with the CVSP algorithm. In addition, delivered gantry angles from the machine log file and the 3DVH 4D measurement were also compared to evaluate the accuracy of the virtual inclinometer within 3DVH. Measured ion chamber and 3DVH-derived isocenter doses agreed with the planned dose within 0.4% ± 1.2% and -1.0% ± 1.6%, respectively. 3D gamma analysis showed greater than 98% agreement between log file- and 3DVH-reconstructed doses. Machine log file-reconstructed doses and the TPS dose agreed to within 2% in the PTV and OARs over the entire treatment. The 3DVH-reconstructed dose showed an average maximum dose difference of 3% ± 1.2% in the PTV, and an average mean difference of -4.5% ± 10.5% in OAR doses. The average virtual inclinometer error (VIE) was -0.65° ± 1.6° for all patients, with a maximum error of -5.16° ± 4.54° for an SRS case. The time-averaged VIE was within 1°-2°, and did not have a large impact on the overall accuracy of the estimated patient dose from the ACPDP algorithm. In this study, we have compared two independent dose reconstruction methods for VMAT QA. Both methods are capable of taking into account the measurement and delivery parameter discrepancy, and display the delivered dose in CT patient geometry rather than
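    The gamma pass rates quoted above come from standard gamma-index analysis. A minimal one-dimensional version of that metric is sketched below for orientation, assuming evenly spaced profiles and a global dose criterion; the clinical tools used in the study implement the full 3D calculation.

        import numpy as np

        def gamma_pass_rate_1d(ref_dose, eval_dose, spacing_mm,
                               dose_tol=0.03, dta_mm=3.0):
            """1D global gamma analysis: fraction of points with gamma <= 1.

            ref_dose, eval_dose: dose profiles on the same evenly spaced grid.
            dose_tol: dose-difference criterion as a fraction of the max dose.
            dta_mm: distance-to-agreement criterion in millimetres.
            """
            ref_dose = np.asarray(ref_dose, dtype=float)
            eval_dose = np.asarray(eval_dose, dtype=float)
            positions = np.arange(len(ref_dose)) * spacing_mm
            dose_norm = dose_tol * ref_dose.max()
            gammas = []
            for i, d_ref in enumerate(ref_dose):
                # Gamma at point i: minimum combined dose/distance metric
                # over all evaluation points.
                dose_term = (eval_dose - d_ref) / dose_norm
                dist_term = (positions - positions[i]) / dta_mm
                gammas.append(np.sqrt(dose_term**2 + dist_term**2).min())
            return float(np.mean(np.asarray(gammas) <= 1.0))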

  5. μ-synthesis for the coupled mass benchmark problem

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, J.; Tøffner-Clausen, S.;

    1997-01-01

    A robust controller design for the coupled mass benchmark problem is presented in this paper. The applied design method is based on a modified D-K iteration, i.e. μ-synthesis, which takes care of mixed real and complex perturbation sets. This μ-synthesis method for mixed perturbation sets

  6. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Peiyuan [Univ. of Colorado, Boulder, CO (United States); Brown, Timothy [Univ. of Colorado, Boulder, CO (United States); Fullmer, William D. [Univ. of Colorado, Boulder, CO (United States); Hauser, Thomas [Univ. of Colorado, Boulder, CO (United States); Hrenya, Christine [Univ. of Colorado, Boulder, CO (United States); Grout, Ray [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sitaraman, Hariswaran [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-29

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approx. 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
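    Weak-scaling results of the kind reported are commonly condensed into an efficiency relative to the smallest core count. A minimal helper under that convention is sketched below; the timings in the example are made up, not MFiX data.

        def weak_scaling_efficiency(core_counts, wall_times):
            """Weak-scaling efficiency E(N) = T(N0) / T(N), fixed work per core."""
            t0 = wall_times[0]
            return {n: t0 / t for n, t in zip(core_counts, wall_times)}

        # Example with invented timings: efficiency degrades as cores increase.
        print(weak_scaling_efficiency([16, 128, 1024], [100.0, 110.0, 135.0]))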

  7. Analytic estimates of secondary neutron dose in proton therapy.

    Science.gov (United States)

    Anferov, V

    2010-12-21

    Proton beam losses in various components of a treatment nozzle generate secondary neutrons, which bring unwanted out of field dose during treatments. The purpose of this study was to develop an analytic method for estimating neutron dose to a distant organ at risk during proton therapy. Based on radiation shielding calculation methods proposed by Sullivan, we developed an analytical model for converting the proton beam losses in the nozzle components and in the treatment volume into the secondary neutron dose at a point of interest. Using the MCNPx Monte Carlo code, we benchmarked the neutron dose rates generated by the proton beam stopped at various media. The Monte Carlo calculations confirmed the validity of the analytical model for simple beam stop geometry. The analytical model was then applied to neutron dose equivalent measurements performed on double scattering and uniform scanning nozzles at the Midwest Proton Radiotherapy Institute (MPRI). Good agreement was obtained between the model predictions and the data measured at MPRI. This work provides a method for estimating analytically the neutron dose equivalent to a distant organ at risk. This method can be used as a tool for optimizing dose delivery techniques in proton therapy.
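    As a rough sketch of the kind of analytic estimate described (not the author's actual model, which follows Sullivan's shielding formulas with energy- and angle-dependent terms), a point-source inverse-square approximation might look as follows; the neutron yield and fluence-to-dose coefficients are placeholders for illustration.

        import math

        def neutron_dose_equivalent(beam_losses, distances_cm,
                                    neutrons_per_proton=0.05,
                                    sv_per_neutron_cm2=4.0e-14):
            """Crude point-source estimate of neutron dose equivalent (Sv).

            beam_losses: protons lost at each nozzle component or in the target.
            distances_cm: distance from each loss point to the organ at risk.
            neutrons_per_proton, sv_per_neutron_cm2: placeholder yield and
            fluence-to-dose conversion values, for illustration only.
            """
            dose = 0.0
            for n_protons, r in zip(beam_losses, distances_cm):
                # Isotropic point source: fluence falls off as 1/(4*pi*r^2).
                fluence = n_protons * neutrons_per_proton / (4.0 * math.pi * r**2)
                dose += fluence * sv_per_neutron_cm2
            return dose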

  8. Sustainable value assessment of farms using frontier efficiency benchmarks.

    Science.gov (United States)

    Van Passel, Steven; Van Huylenbroeck, Guido; Lauwers, Ludwig; Mathijs, Erik

    2009-07-01

    Appropriate assessment of firm sustainability facilitates actor-driven processes towards sustainable development. The methodology in this paper builds further on two proven methodologies for the assessment of sustainability performance: it combines the sustainable value approach with frontier efficiency benchmarks. The sustainable value methodology tries to relate firm performance to the use of different resources. This approach assesses contributions to corporate sustainability by comparing firm resource productivity with the resource productivity of a benchmark, and this for all resources considered. The efficiency is calculated by estimating the production frontier indicating the maximum feasible production possibilities. In this research, the sustainable value approach is combined with efficiency analysis methods to benchmark sustainability assessment. In this way, the production theoretical underpinnings of efficiency analysis enrich the sustainable value approach. The methodology is presented using two different functional forms: the Cobb-Douglas and the translog functional forms. The simplicity of the Cobb-Douglas functional form as benchmark is very attractive but it lacks flexibility. The translog functional form is more flexible but has the disadvantage that it requires a lot of data to avoid estimation problems. Using frontier methods for deriving firm specific benchmarks has the advantage that the particular situation of each company is taken into account when assessing sustainability. Finally, we showed that the methodology can be used as an integrative sustainability assessment tool for policy measures.
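    For reference, the two functional forms mentioned can be written in standard textbook notation (not reproduced from the paper): the Cobb-Douglas frontier is log-linear in the inputs, while the translog adds second-order and cross terms, which is what makes it flexible but data-hungry.

        % Cobb-Douglas production frontier
        \ln y = \beta_0 + \sum_{i} \beta_i \ln x_i + \varepsilon

        % Translog production frontier
        \ln y = \beta_0 + \sum_{i} \beta_i \ln x_i
              + \tfrac{1}{2} \sum_{i}\sum_{j} \beta_{ij} \ln x_i \ln x_j + \varepsilon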

  9. Developing a benchmark for emotional analysis of music

    Science.gov (United States)

    Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    Music emotion recognition (MER) field rapidly expanded in the last decade. Many new methods and new audio features are developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the data representation diversity and scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, a MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons with 2Hz time resolution). Using DEAM, we organized the ‘Emotion in Music’ task at MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature-sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that the recurrent neural network based approaches combined with large feature-sets work best for dynamic MER. PMID:28282400

  10. Developing a benchmark for emotional analysis of music.

    Science.gov (United States)

    Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    Music emotion recognition (MER) field rapidly expanded in the last decade. Many new methods and new audio features are developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the data representation diversity and scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, a MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons with 2Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature-sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that the recurrent neural network based approaches combined with large feature-sets work best for dynamic MER.

  11. Baseline and benchmark model development for hotels

    Science.gov (United States)

    Hooks, Edward T., Jr.

    The hotel industry currently faces rising energy costs and requires the tools to maximize energy efficiency. In order to achieve this goal, a clear definition is made of the current methods used to measure and monitor energy consumption. The main purpose is to uncover the limitations of the most commonly practiced analysis strategies and to present methods that can potentially overcome those limitations. The techniques presented can be used for measurement and verification of energy efficiency plans and retrofits. Also, modern energy modeling tools are introduced to demonstrate how they can be utilized for benchmarking and baseline models. This provides the ability to obtain energy saving recommendations and parametric analysis to explore energy savings potential. These same energy models can be used in design decisions for new construction. An energy model is created of a resort-style hotel that covers over one million square feet and has over one thousand rooms. A simulation and detailed analysis are performed on a hotel room. The planning process for creating the model and acquiring data from the hotel room to calibrate and verify the simulation is explained, along with how this type of modeling can potentially benefit future baseline and benchmarking strategies for the hotel industry. Ultimately, the conclusion addresses some common obstacles the hotel industry faces in reaching its full potential of energy efficiency and how these techniques can best serve it.

  12. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book, which was first published in February, 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December, 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February, 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  13. Methods for Estimation of Radiation Risk in Epidemiological Studies Accounting for Classical and Berkson Errors in Doses

    KAUST Repository

    Kukush, Alexander

    2011-01-16

    With a binary response Y, the dose-response model under consideration is logistic in flavor with pr(Y=1 | D) = R/(1+R), R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^{mes} = f_i Q_i^{mes} / M_i^{mes}. Here, Q_i^{mes} is the measured content of radioiodine in the thyroid gland of person i at time t^{mes}, M_i^{mes} is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^{mes} = Q_i^{tr} V_i^Q (this is the classical measurement error model) and M_i^{tr} = M_i^{mes} V_i^M (this is the Berkson measurement error model). Here, Q_i^{tr} is the true content of radioactivity in the thyroid gland, and M_i^{tr} is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^{mes}, M_i^{mes}) and is ignored in the analysis. By means of Parametric Full Maximum Likelihood and Regression Calibration (under the assumption that the data set of true doses has a lognormal distribution), Nonparametric Full Maximum Likelihood, Nonparametric Regression Calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. The simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were given the values from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.
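    A minimal simulation of the two error structures described is sketched below, with invented parameter values: the measured thyroid content Q carries classical multiplicative error, the measured mass M carries Berkson error, and the resulting dose feeds the excess-absolute-risk term of the logistic-type model.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 5000

        # Invented true quantities and error magnitudes (illustration only).
        q_true = rng.lognormal(mean=2.0, sigma=0.5, size=n)   # true 131I content
        m_meas = rng.lognormal(mean=0.0, sigma=0.3, size=n)   # measured thyroid mass
        f = 1.0                                               # normalizing multiplier

        # Classical error: the measurement scatters around the true value.
        q_meas = q_true * rng.lognormal(0.0, 0.3, size=n)
        # Berkson error: the true value scatters around the measurement.
        m_true = m_meas * rng.lognormal(0.0, 0.3, size=n)

        d_true = f * q_true / m_true        # true dose
        d_meas = f * q_meas / m_meas        # calculated (error-prone) dose

        # Dose-response: pr(Y=1 | D) = R/(1+R), R = lambda0 + EAR * D.
        lambda0, ear = 0.001, 0.01          # invented risk parameters
        r = lambda0 + ear * d_true
        y = rng.uniform(size=n) < r / (1.0 + r)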

  14. Benchmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The National Flood Hazard Layer (NFHL) data incorporates all Digital Flood Insurance Rate Map(DFIRM) databases published by FEMA, and any Letters Of Map Revision...

  15. The measurement and calculation method for neutron dose

    Institute of Scientific and Technical Information of China (English)

    向剑; 戴光复; 苑淑渝; 丁艳秋; 张良安

    2008-01-01

    With the development of science, neutrons are used more and more widely, for example in neutron therapy of cancer, neutron logging, and neutron photography. The most widespread medical application of neutrons is boron neutron capture therapy. However, its use also brings some problems: operators working with neutron sources may receive neutron irradiation, so the measurement and calculation of neutron dose become important. Domestic research on neutron dose still needs to be advanced in some respects. Therefore, the measurement and calculation methods for neutron dose are summarized and reviewed in this article.

  16. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Directory of Open Access Journals (Sweden)

    Shane Ó Conchúir

    Full Text Available The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  17. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Science.gov (United States)

    Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja

    2015-01-01

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  18. Simple methods to reduce patient dose in a Varian cone beam CT system for delivery verification in pelvic radiotherapy.

    Science.gov (United States)

    Roxby, P; Kron, T; Foroudi, F; Haworth, A; Fox, C; Mullen, A; Cramb, J

    2009-10-01

    Cone-beam computed tomography (CBCT) is a three-dimensional imaging modality that has recently become available on linear accelerators for radiotherapy patient position verification. It was the aim of the present study to implement simple strategies for reduction of the dose delivered in a commercial CBCT system. The dose delivered in a CBCT procedure (Varian, half-fan acquisition, 650 projections, 125 kVp) was assessed using a cylindrical Perspex phantom (diameter, 32 cm) with a calibrated Farmer type ionisation chamber. A copper filter (thickness, 0.15 mm) was introduced increasing the half value layer of the beam from 5.5 mm Al to 8 mm Al. Image quality and noise were assessed using an image quality phantom (CatPhan) while the exposure settings per projection were varied from 25 ms/80 mA to 2 ms/2 mA per projection. Using the copper filter reduced the dose to the phantom from approximately 45 mGy to 30 mGy at standard settings (centre/periphery weighting 1/3 to 2/3). Multiple CBCT images were acquired for six patients with pelvic malignancies to compare CBCTs with and without a copper filter. Although the reconstructed image is somewhat noisier with the filter, it features similar contrast in the centre of the patient and was often preferred by the radiation oncologist because of greater image uniformity. The X-ray shutters were adjusted to the minimum size required to obtain the desired image volume for a given patient diameter. The simple methods described here reduce the effective dose to patients undergoing daily CBCT and are easy to implement, and initial evidence suggests that they do not affect the ability to identify soft tissue for the purpose of treatment verification.

  19. SU-E-T-224: Is Monte Carlo Dose Calculation Method Necessary for Cyberknife Brain Treatment Planning?

    Energy Technology Data Exchange (ETDEWEB)

    Wang, L; Fourkal, E; Hayes, S; Jin, L; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States)

    2014-06-01

    Purpose: To study the dosimetric difference resulting from using the pencil beam algorithm (RT) instead of Monte Carlo (MC) methods for tumors adjacent to the skull. Methods: We retrospectively calculated the dosimetric differences between the RT and MC algorithms for brain tumors treated with CyberKnife located adjacent to the skull for 18 patients (total of 27 tumors). The median tumor size was 0.53 cc (range 0.018 cc to 26.2 cc). The absolute mean distance from the tumor to the skull was 2.11 mm (range -17.0 mm to 9.2 mm). The dosimetric variables examined include the mean, maximum, and minimum doses to the target, the target coverage (TC) and the conformality index. The MC calculation used the same MUs as the RT dose calculation without further normalization, with 1% statistical uncertainty. The differences were analyzed by tumor size and distance from the skull. Results: The TC was generally reduced with the MC calculation (24 out of 27 cases). The average difference in TC between RT and MC was 3.3% (range 0.0% to 23.5%). When the TC was deemed unacceptable, the plans were re-normalized in order to increase the TC to 99%. This resulted in a 6.9% maximum change in the prescription isodose line. The maximum changes in the mean, maximum, and minimum doses were 5.4%, 7.7%, and 8.4%, respectively, before re-normalization. When the TC was analyzed with regard to target size, it was found that the worst coverage occurred with the smaller targets (0.018 cc). When the TC was analyzed with regard to the distance to the skull, there was no correlation between proximity to the skull and TC between the RT and MC plans. Conclusions: For smaller targets (< 4.0 cc), MC should be used to re-evaluate the dose coverage after RT is used for the initial dose calculation, in order to ensure target coverage.

  20. Benchmarking of Proton Transport in Super Monte Carlo Simulation Program

    Science.gov (United States)

    Wang, Yongfeng; Li, Gui; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Wu, Yican

    2014-06-01

    The Monte Carlo (MC) method has been traditionally applied in nuclear design and analysis due to its capability of dealing with complicated geometries and multi-dimensional physics problems as well as obtaining accurate results. The Super Monte Carlo Simulation Program (SuperMC) is developed by the FDS Team in China for fusion, fission, and other nuclear applications. Simulations of radiation transport, isotope burn-up, material activation, radiation dose, and biological damage can be performed using SuperMC. Complicated geometries and the whole physical process of various types of particles over a broad energy scale can be well handled. Bi-directional automatic conversion between general CAD models and fully formed input files of SuperMC is supported by MCAM, which is a CAD/image-based automatic modeling program for neutronics and radiation transport simulation. Mixed visualization of dynamical 3D datasets and geometry models is supported by RVIS, which is a nuclear radiation virtual simulation and assessment system. Continuous-energy cross section data from the hybrid evaluated nuclear data library HENDL are utilized to support simulation. Neutronic fixed-source and criticality calculations for reactors of complex geometry and material distribution, based on the transport of neutrons and photons, were achieved in our former version of SuperMC. Recently, proton transport has also been integrated into SuperMC in the energy region up to 10 GeV. The physical processes considered for proton transport include electromagnetic processes and hadronic processes. The electromagnetic processes include ionization, multiple scattering, bremsstrahlung, and pair production processes. Public evaluated data from HENDL are used in some electromagnetic processes. In hadronic physics, the Bertini intra-nuclear cascade model with excitons, a preequilibrium model, a nucleus explosion model, a fission model, and an evaporation model are incorporated to treat the intermediate energy nuclear

  1. PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms

    CERN Document Server

    Dreher, Patrick; Hill, Chris; Gadepally, Vijay; Kuszmaul, Bradley; Kepner, Jeremy

    2016-01-01

    The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many real-world big data processing systems. The PageRank pipeline benchmark builds on existing prior scalable benchmarks (Graph500, Sort, and PageRank) to create a holistic benchmark with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. The linear algebraic nature of PageRank makes it well suited to being implemented using the GraphBLAS standard. The computations are simple enough that performance predictio...
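    The PageRank kernel at the heart of the proposed pipeline reduces to a sparse matrix-vector power iteration. A dense-matrix toy sketch is shown below for orientation; a real big-data implementation would use a sparse or GraphBLAS representation, as the authors note.

        import numpy as np

        def pagerank(adjacency, damping=0.85, tol=1e-9, max_iter=100):
            """Power-iteration PageRank on a dense adjacency matrix (toy scale)."""
            n = adjacency.shape[0]
            out_degree = adjacency.sum(axis=1)
            out_degree[out_degree == 0] = 1.0          # avoid division by zero
            transition = adjacency / out_degree[:, None]
            rank = np.full(n, 1.0 / n)
            for _ in range(max_iter):
                # Teleportation term plus damped walk along incoming links.
                new_rank = (1 - damping) / n + damping * transition.T @ rank
                if np.abs(new_rank - rank).sum() < tol:
                    break
                rank = new_rank
            return rank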

  2. Towards benchmarking an in-stream water quality model

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available A method of model evaluation is presented which utilises a comparison with a benchmark model. The proposed benchmarking concept is one that can be applied to many hydrological models but, in this instance, is implemented in the context of an in-stream water quality model. The benchmark model is defined in such a way that it is easily implemented within the framework of the test model, i.e. the approach relies on two applications of the same model code rather than the application of two separate model codes. This is illustrated using two case studies from the UK, the Rivers Aire and Ouse, with the objective of simulating a water quality classification, general quality assessment (GQA), which is based on dissolved oxygen, biochemical oxygen demand and ammonium. Comparisons between the benchmark and test models are made based on GQA, as well as a step-wise assessment against the components required in its derivation. The benchmarking process yields a great deal of important information about the performance of the test model and raises issues about a priori definition of the assessment criteria.

  3. [A method of the interactive visual optimization of the therapeutic dose field in contact radiation therapy of malignant tumors (theoretical aspects of the problem)].

    Science.gov (United States)

    Klepper, L Ia

    2003-01-01

    The mathematical and interpretation tasks of directed shaping of dose fields in the contact radiation therapy of malignant tumors are defined on the basis of the dose-field homogeneity parameter. A schematic iterative algorithm for solving these tasks is described. A method for the visual optimization of such fields is elaborated; it is based on preset limits on the dose field in the lesion focus and in the healthy organs and tissues. The dose field is shaped by an applicator with multiple fixed terminal positions of the irradiation sources, the effect being achieved through variation of their exposure durations.

  4. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information (1) We did see potential solutions to some of our "top 10" issues (2) We have an assessment of where NASA stands with relation to other aerospace/defense groups We formed new contacts and potential collaborations (1) Several organizations sent us examples of their templates, processes (2) Many of the organizations were interested in future collaboration: sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/ partners (1) Desires to participate in our training; provide feedback on procedures (2) Welcomed opportunity to provide feedback on working with NASA

  5. Analyzing the BBOB results by means of benchmarking concepts.

    Science.gov (United States)

    Mersmann, O; Preuss, M; Trautmann, H; Bischl, B; Weihs, C

    2015-01-01

    We present methods to answer two basic questions that arise when benchmarking optimization algorithms. The first one is: which algorithm is the "best" one? and the second one is: which algorithm should I use for my real-world problem? Both are connected and neither is easy to answer. We present a theoretical framework for designing and analyzing the raw data of such benchmark experiments. This represents a first step in answering the aforementioned questions. The 2009 and 2010 BBOB benchmark results are analyzed by means of this framework and we derive insight regarding the answers to the two questions. Furthermore, we discuss how to properly aggregate rankings from algorithm evaluations on individual problems into a consensus, its theoretical background and which common pitfalls should be avoided. Finally, we address the grouping of test problems into sets with similar optimizer rankings and investigate whether these are reflected by already proposed test problem characteristics, finding that this is not always the case.
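    One common way to aggregate per-problem rankings into a consensus is Borda counting, sketched below as one illustrative choice; the paper itself discusses the theoretical background of consensus rankings and the pitfalls of such aggregation.

        from collections import defaultdict

        def borda_consensus(rankings):
            """Aggregate per-problem rankings of algorithms into one consensus.

            rankings: list of lists, each an ordering of algorithm names from
            best to worst on a single benchmark problem.
            """
            scores = defaultdict(int)
            for ranking in rankings:
                n = len(ranking)
                for position, algorithm in enumerate(ranking):
                    scores[algorithm] += n - position   # best gets most points
            return sorted(scores, key=scores.get, reverse=True)

        # Example: three problems, three (hypothetical) algorithm labels.
        print(borda_consensus([["CMA-ES", "BFGS", "NelderMead"],
                               ["CMA-ES", "NelderMead", "BFGS"],
                               ["BFGS", "CMA-ES", "NelderMead"]]))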

  6. Current Reactor Physics Benchmark Activities at the Idaho National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess; Margaret A. Marshall; Mackenzie L. Gorham; Joseph Christensen; James C. Turnbull; Kim Clark

    2011-11-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) [1] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) [2] were established to preserve integral reactor physics and criticality experiment data for present and future research. These valuable assets provide the basis for recording, developing, and validating our integral nuclear data, and experimental and computational methods. These projects are managed through the Idaho National Laboratory (INL) and the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD-NEA). Staff and students at the Department of Energy - Idaho (DOE-ID) and INL are engaged in the development of benchmarks to support ongoing research activities. These benchmarks include reactors or assemblies that support Next Generation Nuclear Plant (NGNP) research, space nuclear Fission Surface Power System (FSPS) design validation, and currently operational facilities in Southeastern Idaho.

  7. The institutionalization of benchmarking in the Danish construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard; Gottlieb, Stefan Christoffer

    emerged as actors expressed diverse political interests in the institutionalization of benchmarking. The political struggles accounted for in chapter five constituted a powerful political pressure and called for transformations of the institutionalization in order for benchmarking to attain institutional...... on how to institutionalize new structures in the Danish construction industry? In the methodology chapter, I outline how institutional theory facilitates new and important inquiries into understanding institutionalization of benchmarking. I account for how the choice of theory is influencing my......, the chapter accounts for the data collection methods used to conduct the empirical data collection and the appertaining choices that are made, based on the account for analyzing institutionalization processes. The analysis unfolds over seven chapters, starting with an exposition of the political foundation...

  8. Corporate social responsibility benchmarking. The case of galician firms

    Directory of Open Access Journals (Sweden)

    Encarnación González Vázquez

    2011-12-01

    Full Text Available In this paper we review the concept of corporate social responsibility. Subsequently, we analyze the possibilities and problems of the use of benchmarking in CSR by examining the latest research that has developed a benchmarking method. From this analysis we propose a homogeneous indicator that assesses 68 aspects related to the various stakeholders involved in CSR. It also provides information on the importance attached by respondents to these aspects. The results for each of the 5 sectors considered show the areas in which the work in CSR is greatest and others where improvement is needed.

  9. Results of the benchmark for blade structural models, part A

    DEFF Research Database (Denmark)

    Lekou, D.J.; Chortis, D.; Belen Fariñas, A.;

    2013-01-01

    A benchmark on structural design methods for blades was performed within the InnWind.Eu project under WP2 “Lightweight Rotor” Task 2.2 “Lightweight structural design”. The benchmark is based on the reference wind turbine and the reference blade provided by DTU [1]. "Structural Concept developers/modelers" of WP2 were provided with the necessary input for a comparison numerical simulation run, upon definition of the reference blade. The present document describes the results of the comparison simulation runs that were performed by the partners involved within

  10. A dosimetry method for low dose rate brachytherapy by EGS5 combined with regression to reflect source strength shortage

    Science.gov (United States)

    Tanaka, Kenichi; Tateoka, Kunihiko; Asanuma, Osamu; Kamo, Ken-ichi; Sato, Kaori; Takeda, Hiromitsu; Takagi, Masaru; Hareyama, Masato; Takada, Jun

    2014-01-01

    The post-implantation dosimetry for brachytherapy using Monte Carlo calculation by the EGS5 code combined with source strength regression was investigated with respect to its validity. In this method, the source strength for the EGS5 calculation is adjusted by the regression so that the calculation reproduces the dose monitored with glass rod dosimeters (GRDs) on a water phantom. The experiments were performed simulating the case where one of two 125I sources of Oncoseed 6711 was lacking in strength by 4–48%. As a result, the calculation without regression agreed with the GRD measurement within 26–62%; in this case, the shortage in strength of a source was neglected. With the regression reflecting the strength shortage, the agreement improved to 17–24%. This agreement was also comparable with the accuracy of the dose calculation for single-source geometry reported previously. These results suggest the validity of the dosimetry method proposed in this study. PMID:24449715
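    Read as a least-squares scale fit, the regression step described above can be sketched as follows; this is a minimal reading of the method, and the study's actual regression details are not reproduced here.

        import numpy as np

        def fit_source_strength_scale(calculated_doses, measured_doses):
            """Least-squares scale k minimizing ||measured - k * calculated||^2.

            calculated_doses: Monte Carlo doses at the GRD positions computed
            with the nominal source strength.
            measured_doses: glass rod dosimeter readings at the same positions.
            """
            calc = np.asarray(calculated_doses, dtype=float)
            meas = np.asarray(measured_doses, dtype=float)
            return float(calc @ meas / (calc @ calc))

        # The adjusted calculation is then k times the calculated distribution,
        # reflecting any shortage (k < 1) or excess (k > 1) in source strength.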

  11. A simple pharmacokinetic method to evaluate the pulmonary dose in clinical practice--analyses of inhaled sodium cromoglycate.

    Science.gov (United States)

    Lindström, Maria; Svensson, Jan Olof; Meurling, Lennart; Svartengren, Katharina; Anderson, Martin; Svartengren, Magnus

    2004-01-01

    When the expected effect of an inhaled drug is not achieved, the cause could be poor inhalation technique and consequently a low pulmonary dose. A simple in vivo test to evaluate the pulmonary dose would be a benefit. This study evaluates the relative and systemic bioavailability following inhalation of nebulized sodium cromoglycate (SCG) in healthy subjects. Blood samples were collected for 240 min and urine was collected in two portions, up to 6 h post-inhalation. Two exposures were performed, and comparisons were made based on the quantification of drug in plasma and urine by a high-performance liquid chromatography (HPLC) procedure. In one of the exposures, a pulmonary function test was performed to study whether an expected effect of increased absorption could be detected. There was a good correlation between the two exposures shown in the plasma concentrations, but not in the urine analyses. The forced exhaled volume manoeuvres were associated with a higher Cmax and higher plasma concentrations up to 60 min post-inhalation (P<0.01). This effect was not detected in the urine analyses. We conclude that this pharmacokinetic method with inhaled SCG and plasma analyses could be used to evaluate individual inhalation technique. The HPLC method used was rapid and had adequate sensitivity.

  12. Development of modern approach to absorbed dose assessment in radionuclide therapy, based on Monte Carlo method simulation of patient scintigraphy

    Science.gov (United States)

    Lysak, Y. V.; Klimanov, V. A.; Narkevich, B. Ya

    2017-01-01

    One of the most difficult problems of modern radionuclide therapy (RNT) is control of the absorbed dose in the pathological volume. This research presents a new approach based on estimation of the radiopharmaceutical (RP) accumulated activity value in the tumor volume, derived from planar scintigraphic images of the patient and radiation transport calculated using the Monte Carlo method, including absorption and scattering in the biological tissues of the patient and in elements of the gamma camera itself. To obtain the data, we performed simulated gamma-camera scintigraphy of a vial containing the activity of RP administered to the patient, with the vial placed at a certain distance from the collimator, and a similar study was performed in identical geometry with the same values of activity of the radiopharmaceutical in the pathological target in the body of the patient. For correct calculation results, an adapted Fisher-Snyder human phantom was simulated in the MCNP program. In the context of our technique, calculations were performed for different sizes of pathological targets and various tumor depths inside the patient's body, using radiopharmaceuticals based on mixed β-γ-emitting (131I, 177Lu) and pure β-emitting (89Sr, 90Y) therapeutic radionuclides. The presented method can be used to implement in clinical practice the estimation of absorbed doses in the regions of interest on the basis of planar scintigraphy of the patient with sufficient accuracy.

  13. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    Energy Technology Data Exchange (ETDEWEB)

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-09-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR’06 are highlighted.

  14. [Benchmarking in health care: conclusions and recommendations].

    Science.gov (United States)

    Geraedts, Max; Selbmann, Hans-Konrad

    2011-01-01

    The German Health Ministry funded 10 demonstration projects on benchmarking in health care, together with accompanying research that aimed to derive generalisable findings and recommendations. We performed a meta-evaluation of the demonstration projects and analysed national and international approaches to benchmarking in health care. It was found that the typical benchmarking sequence is hardly ever realised: most projects lack a detailed analysis of the structures and processes of the best performers as a starting point for learning from and adopting best practice. To tap the full potential of benchmarking in health care, participation should be promoted in voluntary benchmarking projects that have been demonstrated to follow all the typical steps of a benchmarking process.

  15. Ship Propulsion System as a Benchmark for Fault-Tolerant Control

    DEFF Research Database (Denmark)

    Izadi-Zamanabadi, Roozbeh; Blanke, M.

    1998-01-01

    Fault-tolerant control is a fairly new area. The paper presents a ship propulsion system as a benchmark that should be useful as a platform for the development of new ideas and the comparison of methods. The benchmark has two main elements. One is the development of efficient FDI (fault detection and isolation) algorithms; the other is the analysis and implementation...

  16. A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams (Final Report)

    Science.gov (United States)

    This report describes a method to characterize the relationship between the extirpation (the effective extinction) of invertebrate genera and salinity (measured as conductivity), and from that relationship derives a freshwater aquatic life benchmark. This benchmark of 300 µS/cm ...

  17. Benchmarking road safety performance by grouping local territories : a study in The Netherlands.

    NARCIS (Netherlands)

    Aarts, L.T. & Houwing, S.

    2015-01-01

    The method of benchmarking provides an opportunity to learn from better performing territories to improve the effectiveness and efficiency of activities in a particular field of interest. Such a field of interest could be road safety. Road safety benchmarking can include several indicators, ranging

  18. Results of the brugge benchmark study for flooding optimization and history matching

    NARCIS (Netherlands)

    Peters, E.; Arts, R.J.; Brouwer, G.K.; Geel, C.R.; Cullick, S.; Lorentzen, R.J.; Chen, Y.; Dunlop, K.N.B.; Vossepoel, F.C.; Xu, R.; Sarma, P.; Alhutali, A.H.; Reynolds, A.C.

    2010-01-01

    In preparation for the SPE Applied Technology Workshop (ATW) held in Brugge in June 2008, a unique benchmark project was organized to test the combined use of waterflooding-optimization and history-matching methods in a closed-loop workflow. The benchmark was organized in the form of an interactive

  19. Benchmarking van gemeentelijke verkeersveiligheid in de praktijk : een verdere uitwerking en toetsing van behoeften bij gemeenten.

    NARCIS (Netherlands)

    Aarts, L.T.

    2016-01-01

    Benchmarking municipal road safety put into practice : further elaboration and testing out of municipal requirements. Policy makers are faced with the question of how efficient and effective their policy is, if it can be done better and if so: how? Benchmarking can provide a method for this: compari

  20. Journal Benchmarking for Strategic Publication Management and for Improving Journal Positioning in the World Ranking Systems

    Science.gov (United States)

    Moskovkin, Vladimir M.; Bocharova, Emilia A.; Balashova, Oksana V.

    2014-01-01

    Purpose: The purpose of this paper is to introduce and develop the methodology of journal benchmarking. Design/Methodology/Approach: The journal benchmarking method is understood to be an analytic procedure of continuous monitoring and comparing of the advance of specific journal(s) against that of competing journals in the same subject area,…

  1. Benchmarking by cross-institutional comparison of student achievement in a progress test

    NARCIS (Netherlands)

    Muijtjens, Arno M. M.; Schuwirth, Lambert W. T.; Cohen-Schotanus, Janke; Thoben, Arnold J. N. M.; van der Vleuten, Cees P. M.; van, der

    2008-01-01

    OBJECTIVE To determine the effectiveness of single-point benchmarking and longitudinal benchmarking for inter-school educational evaluation. METHODS We carried out a mixed, longitudinal, cross-sectional study using data from 24 annual measurement moments (4 tests x 6 year groups) over 4 years for 4

  2. Benchmarking i eksternt regnskab og revision

    DEFF Research Database (Denmark)

    Thinggaard, Frank; Kiertzner, Lars

    2001-01-01

    ... on an ongoing basis in a benchmarking process. This chapter broadly examines the extent to which the concept of benchmarking can justifiably be linked to external financial reporting and auditing. Section 7.1 deals with the external annual accounts, while Section 7.2 takes up the area of auditing. The final section of the chapter summarises... the considerations on benchmarking in connection with both areas....

  3. An Effective Approach for Benchmarking Implementation

    OpenAIRE

    B. M. Deros; Tan, J.; M.N.A. Rahman; N. A.Q.M. Daud

    2011-01-01

    Problem statement: The purpose of this study is to present a benchmarking guideline, a conceptual framework and a computerized mini-program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually in increasing their competitiveness in the market. The study begins with a literature review of benchmarking definitions, of the barriers to and advantages of implementation, and of benchmarking frameworks. Approach: Thirty res...

  4. CTA-enhanced perfusion CT: an original method to perform ultra-low-dose CTA-enhanced perfusion CT

    Energy Technology Data Exchange (ETDEWEB)

    Tong, Elizabeth; Wintermark, Max [University of Virginia, Department of Radiology, Neuroradiology Division, Charlottesville, VA (United States)

    2014-11-15

    Utilizing CT angiography enhances image quality in PCT, thereby permitting acquisition at ultra-low dose. Dynamic CT acquisitions were obtained at 80 kVp with decreasing tube current-time product [milliampere-seconds (mAs)] in patients with suspected ischemic stroke, with concurrent CTA of the cervical and intracranial arteries. Using the fast Fourier transform, the high spatial frequencies of CTA were combined with the low spatial frequencies of PCT to create a virtual PCT dataset. The real and virtual PCT datasets with decreasing mAs were compared by assessing contrast-to-noise ratio (CNR), signal-to-noise ratio (SNR), noise, and PCT values, and by visual inspection of PCT parametric maps. Virtual PCT attained CNR and SNR three- to sevenfold superior to real PCT, and noise reduction by a factor of 4-6 (p < 0.05). At 20 mAs, virtual PCT achieved diagnostic parametric maps, while the quality of real PCT maps was inadequate. At 10 mAs, both real and virtual PCT maps were nondiagnostic. Virtual PCT (but not real PCT) maps regained diagnostic quality at 10 mAs with 40 % adaptive statistical iterative reconstruction (ASIR) and improved further with 80 % ASIR. Our new method of creating virtual PCT by combining ultra-low-dose PCT with CTA information yields diagnostic perfusion parametric maps from PCT acquired at 20 or 10 mAs with 80 % ASIR. The effective dose is approximately 0.20 mSv, equivalent to two chest radiographs. (orig.)
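
    The frequency-domain fusion itself is simple to sketch: keep the low spatial frequencies of the noisy low-dose PCT frame and take the high spatial frequencies from the co-registered CTA image. The cutoff radius and the hard binary mask below are illustrative choices; the record does not specify the exact filter:

      import numpy as np

      def fuse(pct, cta, cutoff=0.1):
          """Low frequencies of `pct` + high frequencies of `cta`.

          cutoff: radius of the low-pass region as a fraction of Nyquist.
          """
          fy = np.fft.fftfreq(pct.shape[0])[:, None]
          fx = np.fft.fftfreq(pct.shape[1])[None, :]
          lowpass = np.hypot(fy, fx) <= cutoff * 0.5  # binary low-pass mask
          spectrum = np.where(lowpass, np.fft.fft2(pct), np.fft.fft2(cta))
          return np.fft.ifft2(spectrum).real

      # Hypothetical usage with two aligned 2D slices of equal shape:
      rng = np.random.default_rng(0)
      pct = rng.normal(100.0, 20.0, (256, 256))  # stand-in noisy PCT frame
      cta = rng.normal(100.0, 2.0, (256, 256))   # stand-in clean CTA slice
      virtual_pct = fuse(pct, cta)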

  5. Comparison of two methods for assessing leakage radiation dose around the head of the medical linear accelerators

    Institute of Scientific and Technical Information of China (English)

    Ehab M Attalla

    2013-01-01

    Objective: The aim of this study was to measure leakage radiation by two methods, with an ion chamber and with ready-pack films, and to investigate the feasibility and advantages of using the two dosimetry methods for assessing leakage radiation around the head of linear accelerators. Methods: Measurements were performed using a 30 cm3 ion chamber, with the gantry at 0° and the X-ray head at 0°. Taking the intersection of the central axis with a plane surface at an FSD of 100 cm as the reference point, a series of concentric circles with radii of 50, 75, and 100 cm was drawn with their common centre at the reference point. The absorbed dose was measured at the reference point and used as the reference dose. With the diaphragm closed, measurements were taken along the circumference of the three circles at 45° intervals. Results: With the treatment head in the vertical position, leakage radiation varied between 0.016%–0.04%. With the head lying horizontally, leakage radiation was of the same order of magnitude and varied between 0.02%–0.07%. In the second method, the verification was accomplished by closing the collimator jaws and covering the head of the treatment unit with the ready-pack films. The films were marked so that their positions on the machine could be determined after they were exposed and processed. With the diaphragm closed and the ready-pack films around the linear accelerator, the beam was turned on for 2500 cGy (2500 MU). The optical density of these films was measured and compared with that of the reference dose. Leakage radiation varied according to film position, and the magnitude of leakage was between 0.005%–0.075%. Conclusion: The differences between the leakage radiation levels observed at different measurement points do not only reflect differences in the effective shielding thickness of the head wall, but are also related to differences in the distances between the target and the measurement points. The experimental errors involved in dosimetric

  6. Benchmarking for controllere: Metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels; Dietrichson, Lars

    2008-01-01

    The article focuses on the concept of benchmarking by presenting and discussing its various facets. Four different applications of benchmarking are described in order to show the breadth of the concept and the importance of clarifying the purpose of a benchmarking project... before starting. The difference between results benchmarking and process benchmarking is treated, after which the use of internal versus external benchmarking is discussed. Finally, the use of benchmarking in budgeting and budget follow-up is introduced....

  7. Comparative analysis of dose rates in bricks determined by neutron activation analysis, alpha counting and X-ray fluorescence analysis for the thermoluminescence fine grain dating method

    Science.gov (United States)

    Bártová, H.; Kučera, J.; Musílek, L.; Trojek, T.

    2014-11-01

    In order to evaluate the age from the equivalent dose, and to obtain an optimized and efficient procedure for thermoluminescence (TL) dating, it is necessary to determine both the internal and the external dose rates from the dated samples and from their environment. The measurements described and compared in this paper refer to bricks from historic buildings and the fine-grain dating method. The external doses are therefore negligible if the samples are taken from a sufficient depth in the wall; however, both the alpha dose rate and the beta and gamma dose rates must be taken into account in the internal dose. The internal dose rate to fine-grain samples is due to the concentrations of the natural radionuclides 238U, 235U and 232Th, the members of their decay chains, and 40K. Various methods can be used to determine the trace concentrations of these natural radionuclides and their contributions to the dose rate. The dose rate fraction from 238U and 232Th can be calculated, e.g., from the alpha count rate, or from the concentrations of 238U and 232Th measured by neutron activation analysis (NAA). The dose rate fraction from 40K can be calculated from the concentration of potassium measured, e.g., by X-ray fluorescence analysis (XRF) or by NAA. Alpha counting and XRF are relatively simple and accessible to an ordinary laboratory. NAA can be considered a more accurate method, but it is more demanding in time and cost, since it needs a nuclear reactor as a neutron source. A comparison of these methods allows us to decide whether the simpler, time- and cost-saving techniques introduce uncertainty that is still acceptable.
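
    As an illustration of how the internal dose rate is assembled from such concentration measurements, the sketch below uses rounded conversion factors of the kind tabulated in the dating literature (e.g., Adamiec and Aitken, 1998); for real work, use the published tables and apply the alpha-efficiency (k-value), moisture, and fine-grain attenuation corrections that are omitted here:

      # Approximate beta/gamma dose-rate conversion factors, Gy/ka per unit
      # concentration (illustrative values only).
      BETA = {"U_ppm": 0.146, "Th_ppm": 0.027, "K_pct": 0.782}
      GAMMA = {"U_ppm": 0.113, "Th_ppm": 0.048, "K_pct": 0.243}

      def internal_dose_rate(u_ppm, th_ppm, k_pct):
          """Beta + gamma dose rate (Gy/ka) from U, Th and K concentrations."""
          conc = {"U_ppm": u_ppm, "Th_ppm": th_ppm, "K_pct": k_pct}
          beta = sum(BETA[k] * v for k, v in conc.items())
          gamma = sum(GAMMA[k] * v for k, v in conc.items())
          return beta + gamma

      # Hypothetical brick: 3 ppm U, 10 ppm Th, 2% K (typical-looking values).
      print(f"{internal_dose_rate(3.0, 10.0, 2.0):.2f} Gy/ka (beta+gamma)")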

  8. Benchmarking the Remote-Handled Waste Facility at the West Valley Demonstration Project

    Energy Technology Data Exchange (ETDEWEB)

    O. P. Mendiratta; D. K. Ploetz

    2000-02-29

    Facility decontamination activities at the West Valley Demonstration Project (WVDP), the site of a former commercial nuclear spent fuel reprocessing facility near Buffalo, New York, have resulted in the removal of radioactive waste. Due to the high dose and/or high contamination levels of this waste, it needs to be handled remotely for processing and repackaging into transport/disposal-ready containers. An initial conceptual design for a Remote-Handled Waste Facility (RHWF), completed in June 1998, was estimated to cost $55 million and take 11 years to process the waste. Benchmarking the RHWF against other facilities around the world, completed in November 1998, identified unique facility design features and innovative waste processing methods. Incorporating the results of the benchmarking effort led to a smaller yet fully functional $31 million facility. To distinguish it from the June 1998 version, the revised design is called the Rescoped Remote-Handled Waste Facility (RRHWF) in this topical report. The conceptual design for the RRHWF was completed in June 1999, and a design-build contract was approved by the Department of Energy in September 1999.

  9. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    Science.gov (United States)

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  10. Benchmarking Implementations of Functional Languages with ``Pseudoknot'', a Float-Intensive Benchmark

    NARCIS (Netherlands)

    Hartel, P.H.; Feeley, M.; Alt, M.; Augustsson, L.

    1996-01-01

    Over 25 implementations of different functional languages are benchmarked using the same program, a floating-point intensive application taken from molecular biology. The principal aspects studied are compile time and execution time for the various implementations that were benchmarked. An important

  11. SU-E-T-288: Dose Volume Population Histogram (DVPH): A New Method to Evaluate Intensity Modulated Proton Therapy Plans With Geometrical Uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, T; Mai, N [University of Science, Ho Chi Minh City (Viet Nam); Nguyen, B [Prowess Inc, Concord, CA (United States)

    2015-06-15

    Purpose: In proton therapy, and especially in intensity-modulated proton therapy (IMPT), the shape of the dose distribution is very sensitive to errors because of the sharp dose gradients at the Bragg peaks. The conventional margin concept is based on the assumption that the dose distribution is shifted, rather than deformed, by geometrical uncertainties. The goal of this study is to assess the validity of the margin concept and to propose a new approach, the dose volume population histogram (DVPH), for evaluating IMPT plans. Methods: For a prostate case, an IMPT plan was optimized with a conventional PTV-based objective function. The plan was evaluated using the PTV DVH and the CTV DVPH, which explicitly takes geometric uncertainties into account. The DVPH was calculated from 2197 dose distributions at different possible CTV positions, covering both random and systematic errors. The DVPH with a 90% confidence level was used for the comparison. Results: The minimum dose from the CTV DVPH at the 90% confidence level was only about 18% of the prescribed dose, while the minimum dose to the PTV was 95%. For the bladder, D50 and D35 were 26% and 30% of the prescribed dose from the DVHs, compared with 65% and 70% from the bladder DVPH at the 90% confidence level. Conclusion: The results show that the PTV concept, intended to ensure that the prescribed dose is actually delivered to the CTV, is invalid in proton therapy. A good PTV DVH may hide an underdose to the target and should not be used for IMPT optimization. For OARs, the conventional evaluation approach underestimates dose-volume end points. The proposed DVPH provides a more accurate DVH evaluation in proton therapy.
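
    The DVPH construction lends itself to a compact sketch: sample geometric errors, recompute a DVH quantity for each shifted scenario, and take the value achieved at the desired confidence level. The 1D dose profile, the Gaussian error model, and all numbers below are simplifications for brevity, not the study's setup:

      import numpy as np

      rng = np.random.default_rng(1)
      dose = np.where(np.abs(np.arange(100) - 50) < 20, 1.0, 0.2)  # 1D profile
      ctv = np.arange(35, 65)                      # CTV voxel indices

      def min_ctv_dose(shift):
          """Minimum CTV dose when the anatomy shifts by `shift` voxels."""
          idx = np.clip(ctv + shift, 0, dose.size - 1)
          return dose[idx].min()

      # 2197 scenarios, echoing the number of CTV positions in the abstract.
      shifts = rng.normal(0.0, 5.0, size=2197).round().astype(int)
      scenario_min = np.array([min_ctv_dose(s) for s in shifts])

      # Dose guaranteed in 90% of scenarios = 10th percentile of the samples.
      d_min_90 = np.percentile(scenario_min, 10)
      print(f"min CTV dose at 90% confidence: {d_min_90:.2f} of prescribed")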

  12. TU-EF-204-09: A Preliminary Method of Risk-Informed Optimization of Tube Current Modulation for Dose Reduction in CT

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Y; Liu, B; Kalra, M; Caracappa, P [Rensselaer Polytechnic Institute, Troy, NY (United States); Liu, T; Li, X [Massachusetts General Hospital, Boston, MA (United States); Xu, X [Rensselaer Polytechnic Inst., Troy, NY (United States)

    2015-06-15

    Purpose: X-rays from CT scans increase patients' cancer risk. The lifetime attributable risk of cancer incidence for adult patients has been investigated and shown to decrease with patient age; however, a newer risk model shows an increasing risk trend for several radiosensitive organs in middle-aged patients. This study investigates the feasibility of a general method for optimizing tube current modulation (TCM) functions to minimize risk by reducing the radiation dose to patients' radiosensitive organs. Methods: Organ-based TCM has been investigated in the literature for eye-lens dose and breast dose. Adopting the concept of organ-based TCM, this study seeks an optimized tube current giving minimal total risk to the breasts and lungs by reducing the dose to these organs. The contribution of each CT view to organ dose is determined through view-by-view simulation of the CT scan using a GPU-based fast Monte Carlo code, ARCHER. A linear programming problem is established for tube current optimization, with the Monte Carlo results as weighting factors at each view. A pre-determined dose is used as the upper dose boundary, and the tube current of each view is optimized to minimize the total risk. Results: An optimized tube current was found that minimizes the total risk to the lungs and breasts: compared with a fixed current, the risk is reduced by 13%, with the breast dose reduced by 38% and the lung dose by 7%. The average tube current is maintained during optimization to maintain image quality. In addition, the dose to other organs in the chest region is only slightly affected, with relative changes in dose smaller than 10%. Conclusion: Optimized tube current plans can be generated that minimize the cancer risk to the lungs and breasts while maintaining image quality. In the future, various risk models and greater numbers of projections per rotation will be simulated on phantoms of different sexes and ages. National Institutes of Health R01EB015478.
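
    Because both the organ doses and the total risk are linear in the per-view tube currents once the Monte Carlo dose weights are known, the optimization is a standard linear program. A sketch with entirely hypothetical weights, risk coefficients, and bounds follows, using scipy.optimize.linprog:

      import numpy as np
      from scipy.optimize import linprog

      n_views = 16
      rng = np.random.default_rng(2)
      w_breast = rng.uniform(0.5, 2.0, n_views)  # breast dose per unit mAs
      w_lung = rng.uniform(0.8, 1.5, n_views)    # lung dose per unit mAs
      r_breast, r_lung = 2.0, 1.0                # hypothetical risk per dose

      c = r_breast * w_breast + r_lung * w_lung  # total risk is linear: c @ I

      i_fixed = np.full(n_views, 100.0)          # reference fixed-current plan
      a_eq = np.ones((1, n_views))               # keep total (average) current
      b_eq = [i_fixed.sum()]

      res = linprog(c, A_eq=a_eq, b_eq=b_eq,
                    bounds=[(50.0, 150.0)] * n_views,  # per-view mAs limits
                    method="highs")
      print("risk with fixed current :", c @ i_fixed)
      print("risk with optimized plan:", res.fun)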

  13. Statistical Methods for the Evaluation of Health Effects of Prenatal Mercury Exposure

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe;

    2002-01-01

    Environmental epidemiology; structural equation; exposure measurement error; multiple endpoints; effect of prenatal mercury exposure; exposure standards; benchmark dose

  14. GPU-Accelerated Monte Carlo Electron Transport Methods: Development and Application for Radiation Dose Calculations Using Six GPU cards

    Science.gov (United States)

    Su, Lin; Du, Xining; Liu, Tianyu; Xu, X. George

    2014-06-01

    An electron-photon coupled Monte Carlo code, ARCHER (Accelerated Radiation-transport Computations in Heterogeneous EnviRonments), is being developed at Rensselaer Polytechnic Institute as a software testbed for emerging heterogeneous high-performance computers that utilize accelerators such as GPUs. This paper presents the preliminary code development and testing on radiation dose related problems. In particular, the paper discusses electron transport simulations using the class-II condensed history method, for electron energies ranging from a few hundred keV to 30 MeV. For the photon part, the photoelectric effect, Compton scattering and pair production were modeled. Voxelized geometry is supported. A serial CPU code was first written in C++ and then ported to the GPU using the CUDA C 5.0 standard. The hardware was a desktop PC with an Intel Xeon X5660 CPU and six NVIDIA Tesla™ M2090 GPUs. The code was tested on a case of a 20 MeV electron beam incident perpendicularly on a water-aluminum-water phantom. The depth and lateral dose profiles were found to agree with results obtained from well-tested MC codes. Using six GPU cards, 6 × 10⁶ electron histories were simulated within 2 seconds; in comparison, the same case run with the EGSnrc and MCNPX codes required 1645 seconds and 9213 seconds, respectively. Ongoing work continues to test the code for different medical applications such as radiotherapy and brachytherapy.

  15. Benchmarking the performance of daily temperature homogenisation algorithms

    Science.gov (United States)

    Warren, Rachel; Bailey, Trevor; Jolliffe, Ian; Willett, Kate

    2015-04-01

    This work explores the creation of realistic synthetic data and its use as a benchmark for comparing the performance of different homogenisation algorithms on daily temperature data. Four different regions in the United States have been selected, and three different inhomogeneity scenarios have been explored for each region. These benchmark datasets are beneficial because, unlike in the real world, the underlying truth is known a priori, thus allowing definite statements to be made about the performance of the algorithms run on them. Performance can be assessed in terms of the ability of algorithms to detect changepoints and also their ability to correctly remove inhomogeneities. The focus is on daily data, thus presenting new challenges in comparison to monthly data and pushing the boundaries of previous studies. The aims of this work are to evaluate and compare the performance of various homogenisation algorithms, aiding their improvement and enabling a quantification of the uncertainty remaining in the data even after they have been homogenised. An important outcome is also to evaluate how realistic the created benchmarks are. It is essential that any weaknesses in the benchmarks are taken into account when judging algorithm performance against them; this information in turn will help to improve future versions of the benchmarks. I intend to present a summary of this work, including the method of benchmark creation, details of the algorithms run and some preliminary results. This work forms a three-year PhD and feeds into the larger project of the International Surface Temperature Initiative, which is working on a global scale and with monthly instead of daily data.
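
    A toy version of the benchmark-creation idea: synthesize a daily series whose truth is known, insert a step inhomogeneity at a known date, and score a detector by how closely it recovers the breakpoint. The series model and the crude detector below are purely illustrative, not those used in the study:

      import numpy as np

      rng = np.random.default_rng(3)
      n = 10 * 365
      t = np.arange(n)
      season = 8.0 * np.sin(2 * np.pi * t / 365.25)
      clean = 12.0 + season + rng.normal(0, 2.0, n)   # "true" daily series

      break_day, step = 6 * 365, -1.5    # known truth: -1.5 degC station move
      series = clean.copy()
      series[break_day:] += step

      # Crude detector: largest jump between adjacent one-year window means
      # of the deseasonalised series.
      anom = series - season
      w = 365
      diff = [anom[i:i + w].mean() - anom[i - w:i].mean()
              for i in range(w, n - w)]
      detected = w + int(np.abs(diff).argmax())
      print(f"true break at day {break_day}, detected near day {detected}")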

  16. A non-rigid point matching method with local topology preservation for accurate bladder dose summation in high dose rate cervical brachytherapy.

    Science.gov (United States)

    Chen, Haibin; Zhong, Zichun; Liao, Yuliang; Pompoš, Arnold; Hrycushko, Brian; Albuquerque, Kevin; Zhen, Xin; Zhou, Linghong; Gu, Xuejun

    2016-02-07

    GEC-ESTRO guidelines for high dose rate cervical brachytherapy advocate the reporting of the D2cc (the minimum dose received by the maximally exposed 2 cc volume) to organs at risk. Due to large interfractional organ motion, reporting an accurate cumulative D2cc over a multifractional course is a non-trivial task requiring deformable image registration and deformable dose summation. To efficiently and accurately describe the point-to-point correspondence of the bladder wall over all treatment fractions while preserving local topologies, we propose a novel graphics processing unit (GPU)-based non-rigid point matching algorithm. This is achieved by introducing local anatomic information into the iterative update of the correspondence matrix computation in the 'thin plate splines-robust point matching' (TPS-RPM) scheme. The performance of the GPU-based TPS-RPM with local topology preservation algorithm (TPS-RPM-LTP) was evaluated using four numerically simulated synthetic bladders having known deformations, a custom-made porcine bladder phantom embedded with twenty-one fiducial markers, and 29 fractional computed tomography (CT) images from seven cervical cancer patients. Results show that TPS-RPM-LTP achieved excellent geometric accuracy, with landmark residual distance error (RDE) of 0.7 ± 0.3 mm for the numerical synthetic data with different scales of bladder deformation and structure complexity, and 3.7 ± 1.8 mm and 1.6 ± 0.8 mm for the porcine bladder phantom with large and small deformation, respectively. The RDE accuracy of the urethral orifice landmarks in patient bladders was 3.7 ± 2.1 mm. When compared to the original TPS-RPM, the TPS-RPM-LTP improved landmark matching by reducing landmark RDE by 50 ± 19%, 37 ± 11% and 28 ± 11% for the synthetic, porcine phantom and patient bladders, respectively. This was achieved with a computational time of less than 15 s in all cases.
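
    For orientation, a greatly simplified sketch of the correspondence step at the core of TPS-RPM-style matching is given below: a soft correspondence matrix built from Gaussian-weighted distances under a decreasing temperature (deterministic annealing), with source points moved toward their soft-assigned targets. The real algorithm alternates this with a thin-plate-spline solve, and TPS-RPM-LTP adds the local-topology term; both are omitted here:

      import numpy as np

      def soft_match(x, y, temps=(1.0, 0.3, 0.1, 0.03, 0.01), iters=10):
          """Softly match source points x (N,3) to target points y (M,3)."""
          x = x.copy()
          for temp in temps:                         # annealing schedule
              for _ in range(iters):
                  d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
                  m = np.exp(-d2 / temp)
                  m /= m.sum(axis=1, keepdims=True)  # row-normalised weights
                  x = 0.5 * x + 0.5 * (m @ y)        # damped move to targets
          return x, m

      # Hypothetical use: two noisy samplings of the same spherical "wall".
      rng = np.random.default_rng(4)
      pts = rng.normal(size=(200, 3))
      pts /= np.linalg.norm(pts, axis=1, keepdims=True)
      warped, m = soft_match(pts + rng.normal(0, 0.05, pts.shape), pts)
      print("mean residual:", np.linalg.norm(warped - pts, axis=1).mean())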

  17. FRIB driver linac vacuum model and benchmarks

    CERN Document Server

    Durickovic, Bojan; Kersevan, Roberto; Machicoane, Guillaume

    2014-01-01

    The Facility for Rare Isotope Beams (FRIB) is a superconducting heavy-ion linear accelerator that is to produce rare isotopes far from stability for low energy nuclear science. In order to achieve this, its driver linac needs to achieve a very high beam current (up to 400 kW beam power), and this requirement makes vacuum levels of critical importance. Vacuum calculations have been carried out to verify that the vacuum system design meets the requirements. The modeling procedure was benchmarked by comparing models of an existing facility against measurements. In this paper, we present an overview of the methods used for FRIB vacuum calculations and simulation results for some interesting sections of the accelerator.

  18. Benchmarking Asteroid-Deflection Experiment

    Science.gov (United States)

    Remington, Tane; Bruck Syal, Megan; Owen, John Michael; Miller, Paul L.

    2016-10-01

    An asteroid impacting Earth could have devastating consequences. In preparation to deflect or disrupt one before it reaches Earth, it is imperative to have modeling capabilities that adequately simulate the deflection actions. Code validation is key to ensuring full confidence in simulation results used in an asteroid-mitigation plan. We are benchmarking well-known impact experiments using Spheral, an adaptive smoothed-particle hydrodynamics code, to validate our modeling of asteroid deflection. We describe our simulation results, compare them with experimental data, and discuss what we have learned from our work. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-695540

  19. Benchmarking ICRF simulations for ITER

    Energy Technology Data Exchange (ETDEWEB)

    R. V. Budny, L. Berry, R. Bilato, P. Bonoli, M. Brambilla, R.J. Dumont, A. Fukuyama, R. Harvey, E.F. Jaeger, E. Lerche, C.K. Phillips, V. Vdovin, J. Wright, and members of the ITPA-IOS

    2010-09-28

    Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high-performance baseline (5.3 T, 15 MA) DT H-mode plasma; the others are for half-field, half-current plasmas of interest for the pre-activation phase, with the bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by seven groups to predict the ICRF electromagnetic fields and heating profiles. Approximate agreement is achieved for the predicted heating power partitions for the DT and He4 cases. Profiles of the heating powers and electromagnetic fields are compared.

  20. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team, led by Heather Rarick and Sally Godfrey, to conduct this benchmarking study. The primary goals of the study were to identify best practices that: improve the management and technical development of software-intensive systems; have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed based on NASA's top software challenges. Between February 2011 and November 2011, the benchmark team interviewed a total of 18 organizations: five NASA Centers, five industry organizations, four defense services organizations, and four university or university R&D laboratory organizations. A software assurance representative also participated in each interview to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area, including software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with those of the external organizations in most benchmark areas, but in every topic there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths