WorldWideScience

Sample records for va releases benchmarks

  1. VA office of inspector general releases scathing report of Phoenix VA

    Directory of Open Access Journals (Sweden)

    Robbins RA

    2014-08-01

    Full Text Available No abstract available. Article truncated at 150 words. The long-awaited Office of Inspector General’s (OIG) report on the Phoenix VA Health Care System (PVAHCS) was released on August 27, 2014 (1). The report was scathing in its evaluation of VA practices and leadership. Five questions were investigated: 1. Were there clinically significant delays in care? 2. Did PVAHCS omit the names of veterans waiting for care from its Electronic Wait List (EWL)? 3. Were PVAHCS personnel not following established scheduling procedures? 4. Did the PVAHCS culture emphasize goals at the expense of patient care? 5. Are scheduling deficiencies systemic throughout the VA? In each case, the OIG found that the allegations were true. Despite initial denials, the OIG report showed that former PVAHCS director Sharon Helman, associate director Lance Robinson, hospital administration director Brad Curry, chief of staff Darren Deering and other senior executives were aware of delays in care and unofficial wait lists. Perhaps most disturbing is ...

  2. Continuous-energy version of KENO V.a for criticality safety applications

    International Nuclear Information System (INIS)

    Dunn, Michael E.; Greene, N. Maurice; Petrie, Lester M.

    2003-01-01

    KENO V.a is a multigroup Monte Carlo code that solves the Boltzmann transport equation and is used extensively in the criticality safety community to calculate the effective multiplication factor of systems with fissionable material. In this work, a continuous-energy or pointwise version of KENO V.a has been developed by first designing a new continuous-energy cross-section format and then by developing the appropriate Monte Carlo transport procedures to sample the new cross-section format. In order to generate pointwise cross sections for a test library, a series of cross-section processing modules were developed and used to process 50 ENDF/B-6 Release 7 nuclides for the test library. Once the cross-section processing procedures were in place, a continuous-energy version of KENO V.a was developed and tested by calculating 21 critical benchmark experiments. The point KENO-calculated results for the 21 benchmarks are in agreement with calculated results obtained with the multigroup version of KENO V.a using the 238-group ENDF/B-5 and 199-group ENDF/B-6 Release 3 libraries. Based on the calculated results with the prototypic cross-section library, a continuous-energy version of the KENO V.a code has been successfully developed and demonstrated for modeling systems with fissionable material. (author)
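The quantity these codes estimate can be illustrated with a deliberately minimal sketch. In a one-group infinite medium, k-infinity is the expected number of fission neutrons produced per neutron absorbed, nu·sigma_f/(sigma_f + sigma_c), and a Monte Carlo estimate follows by sampling the fate of each absorbed neutron. This toy is not KENO V.a's algorithm (which tracks 3-D geometry, continuous-energy cross sections, and fission generations), and the cross-section values below are illustrative only.

```python
import random

def estimate_k_inf(nu, sigma_f, sigma_c, n_histories=100_000, seed=1):
    """Toy one-group, infinite-medium Monte Carlo estimate of k-infinity.

    Each absorbed neutron ends in fission with probability
    sigma_f / (sigma_f + sigma_c); a fission releases nu new neutrons.
    The analytic answer is nu * sigma_f / (sigma_f + sigma_c).
    """
    rng = random.Random(seed)
    p_fission = sigma_f / (sigma_f + sigma_c)
    produced = 0.0
    for _ in range(n_histories):
        if rng.random() < p_fission:
            produced += nu          # nu neutrons born from this fission
    return produced / n_histories

# Illustrative (not evaluated) one-group constants for a fissile mixture.
k = estimate_k_inf(nu=2.88, sigma_f=1.80, sigma_c=3.40)
print(round(k, 2))  # should sit near 2.88 * 1.80 / 5.20, i.e. about 1.0
```

A real criticality code iterates this sampling over fission generations, banking fission sites from one generation as the source for the next; the sketch above collapses that to a single per-absorption expectation.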

  3. Benchmark calculations by KENO-Va using the JEF 2.2 library

    Energy Technology Data Exchange (ETDEWEB)

    Markova, L.

    1994-12-01

    This work is a contribution to the validation of the JEF2.2 neutron cross-section library, following earlier published benchmark calculations performed to validate the previous version of the library, JEF1.1. Several simple calculational problems and one experimental problem were chosen for criticality calculations. In addition, a realistic hexagonal arrangement of VVER-440 fuel assemblies in a spent fuel cask was analyzed in a partly cylindrized model. All criticality calculations, carried out by the KENO-Va code using the JEF2.2 neutron cross-section library in 172 energy groups, resulted in multiplication factors (k{sub eff}) which were tabulated and compared with the results of other available calculations of the same problems. (orig.)

  4. Assessment of Degree of Applicability of Benchmarks for Gadolinium Using KENO V.a and the 238-Group SCALE Cross-Section Library

    Energy Technology Data Exchange (ETDEWEB)

    Goluoglu, S.

    2003-12-01

    A review of the degree of applicability of benchmarks containing gadolinium using the computer code KENO V.a and the gadolinium cross sections from the 238-group SCALE cross-section library has been performed for a system that contains {sup 239}Pu, H{sub 2}O, and Gd{sub 2}O{sub 3}. The system (practical problem) is a water-reflected spherical mixture that represents a dry-out condition on the bottom of a sludge receipt and adjustment tank around steam coils. Due to variability of the mixture volume and the H/{sup 239}Pu ratio, approximations to the practical problem, referred to as applications, have been made to envelop possible ranges of mixture volumes and H/{sup 239}Pu ratios. A newly developed methodology has been applied to determine the degree of applicability of benchmarks as well as the penalty that should be added to the safety margin due to insufficient benchmarks.

  5. On Setting Day-Ahead Equity Trading Risk Limits: VaR Prediction at Market Close or Open?

    Directory of Open Access Journals (Sweden)

    Ana-Maria Fuertes

    2016-09-01

    Full Text Available This paper investigates the information content of the ex post overnight return for one-day-ahead equity Value-at-Risk (VaR) forecasting. To do so, we deploy a univariate VaR modeling approach that constructs the forecast at market open and, accordingly, exploits the available overnight close-to-open price variation. The benchmark is the bivariate VaR modeling approach proposed by Ahoniemi et al. that constructs the forecast at the market close instead and, accordingly, models separately the daytime and overnight return processes and their covariance. For a small cap portfolio, the bivariate VaR approach affords superior predictive ability over the ex post overnight VaR approach, whereas for a large cap portfolio the results are reversed. The contrast indicates that price discovery at the market open is less efficient for small capitalization, thinly traded stocks.
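The close-versus-open distinction can be made concrete with a minimal historical-simulation sketch. At the close, the whole next-day return is still random; at the open, the overnight return is already realized and only the daytime leg remains random. All return series and parameter values below are hypothetical, and the paper's own models are parametric and more elaborate than this empirical-quantile toy.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical log-return histories (all numbers are illustrative).
r_overnight = rng.normal(0.0, 0.004, 500)   # close-to-open returns
r_daytime   = rng.normal(0.0, 0.010, 500)   # open-to-close returns
r_daily     = r_overnight + r_daytime       # close-to-close returns

alpha = 0.01  # 1% VaR

# Forecast built at the market CLOSE: the whole next-day return is random,
# so only the close-to-close history can be used.
var_close = -np.quantile(r_daily, alpha)

# Forecast built at the market OPEN: the overnight return has been realized,
# so only the daytime leg is still random; condition on the observed value.
r_on_today = -0.006                          # ex post overnight return
var_open = -(r_on_today + np.quantile(r_daytime, alpha))

print(f"close-based VaR: {var_close:.4f}, open-based VaR: {var_open:.4f}")
```

Conditioning on a bad realized overnight move shifts the open-based forecast, which is exactly the extra information the univariate open approach exploits.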

  6. WLUP benchmarks

    International Nuclear Information System (INIS)

    Leszczynski, Francisco

    2002-01-01

    The IAEA-WIMS Library Update Project (WLUP) is in its final stage. The final library will be released in 2002. It is the result of research and development by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. This paper presents the organization, naming conventions, contents and documentation of the WLUP benchmarks, and an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for analysis and plotting of results is described. Some examples are given. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)

  7. CoVaCS: a consensus variant calling system.

    Science.gov (United States)

    Chiara, Matteo; Gioiosa, Silvia; Chillemi, Giovanni; D'Antonio, Mattia; Flati, Tiziano; Picardi, Ernesto; Zambelli, Federico; Horner, David Stephen; Pesole, Graziano; Castrignanò, Tiziana

    2018-02-05

    The advent and ongoing development of next generation sequencing technologies (NGS) has led to a rapid increase in the rate of human genome re-sequencing data, paving the way for personalized genomics and precision medicine. The body of genome resequencing data is progressively increasing, underlining the need for accurate and time-effective bioinformatics systems for genotyping - a crucial prerequisite for identification of candidate causal mutations in diagnostic screens. Here we present CoVaCS, a fully automated, highly accurate system with a web-based graphical interface for genotyping and variant annotation. Extensive tests on a gold standard benchmark data-set (the NA12878 Illumina platinum genome) confirm that call-sets based on our consensus strategy are completely in line with those attained by similar command-line based approaches, and far more accurate than call-sets from any individual tool. Importantly, our system exhibits better sensitivity and higher specificity than equivalent commercial software. CoVaCS offers optimized pipelines integrating state-of-the-art tools for variant calling and annotation for whole genome sequencing (WGS), whole-exome sequencing (WES) and target-gene sequencing (TGS) data. The system is currently hosted at Cineca, and offers the speed of an HPC computing facility, a crucial consideration when large numbers of samples must be analysed. Importantly, all the analyses are performed automatically, allowing high reproducibility of the results. As such, we believe that CoVaCS can be a valuable tool for the analysis of human genome resequencing studies. CoVaCS is available at: https://bioinformatics.cineca.it/covacs.
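The core of any consensus strategy is a vote over the call-sets produced by independent callers. The sketch below shows only that voting step with made-up variant tuples; CoVaCS itself wraps full caller pipelines plus filtering and annotation, so this is an illustration of the idea, not its implementation.

```python
from collections import Counter

def consensus_calls(callsets, min_callers=2):
    """Keep a variant if at least `min_callers` call-sets contain it.

    Variants are represented as (chrom, pos, ref, alt) tuples. Only a
    sketch of the consensus-voting idea behind tools like CoVaCS.
    """
    votes = Counter(v for calls in callsets for v in set(calls))
    return {v for v, n in votes.items() if n >= min_callers}

# Hypothetical call-sets from three different variant callers.
caller_a = [("chr1", 100, "A", "G"), ("chr1", 250, "C", "T")]
caller_b = [("chr1", 100, "A", "G"), ("chr2", 40, "G", "A")]
caller_c = [("chr1", 100, "A", "G"), ("chr1", 250, "C", "T")]

print(sorted(consensus_calls([caller_a, caller_b, caller_c])))
# → [('chr1', 100, 'A', 'G'), ('chr1', 250, 'C', 'T')]
```

The singleton call from caller_b is dropped, which is how consensus raises specificity relative to any individual tool at a modest cost in sensitivity.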

  8. A PC [personal computer]-based version of KENO V.a

    International Nuclear Information System (INIS)

    Nigg, D.A.; Atkinson, C.A.; Briggs, J.B.; Taylor, J.T.

    1990-01-01

    The use of personal computers (PCs) and engineering workstations for complex scientific computations has expanded rapidly in the last few years. This trend is expected to continue in the future with the introduction of increasingly sophisticated microprocessors and microcomputer systems. For a number of reasons, including security, economy, user convenience, and productivity, an integrated system of neutronics and radiation transport software suitable for operation in an IBM PC-class environment has been under development at the Idaho National Engineering Laboratory (INEL) for the past 3 yr. Nuclear cross-section data and resonance parameters are preprocessed from the Evaluated Nuclear Data Files Version 5 (ENDF/B-V) and supplied in a form suitable for use in a PC-based spectrum calculation and multigroup cross-section generation module. This module produces application-specific data libraries that can then be used in various neutron transport and diffusion theory code modules. This paper discusses several details of the Monte Carlo criticality module, which is based on the well-known highly-sophisticated KENO V.a package developed at Oak Ridge National Laboratory and previously released in mainframe form by the Radiation Shielding Information Center (RSIC). The conversion process and a variety of benchmarking results are described

  9. 75 FR 66057 - Waybill Data Released in Three-Benchmark Rail Rate Proceedings

    Science.gov (United States)

    2010-10-27

    ... (CSX Transp. II), 584 F.3d 1076 (DC Cir. 2009), the Board modified its simplified rail rate guidelines...- Benchmark approach for smaller rail rate disputes. The Three-Benchmark method compares a challenged rate of...: The RSAM and R/VC >180 benchmarks. See Rate Guidelines--Non-Coal Proceedings, (Rate Guidelines) 1 S.T...

  10. Comparison of outcomes for veterans receiving dialysis care from VA and non-VA providers.

    Science.gov (United States)

    Wang, Virginia; Maciejewski, Matthew L; Patel, Uptal D; Stechuchak, Karen M; Hynes, Denise M; Weinberger, Morris

    2013-01-18

    Demand for dialysis treatment exceeds its supply within the Veterans Health Administration (VA), requiring VA to outsource dialysis care by purchasing private sector dialysis for veterans on a fee-for-service basis. It is unclear whether outcomes are similar for veterans receiving dialysis from VA versus non-VA providers. We assessed the extent of chronic dialysis treatment utilization and differences in all-cause hospitalizations and mortality between veterans receiving dialysis from VA versus VA-outsourced providers. We constructed a retrospective cohort of veterans in 2 VA regions who received chronic dialysis treatment financed by VA between January 2007 and December 2008. From VA administrative data, we identified veterans who received outpatient dialysis in (1) VA, (2) VA-outsourced settings, or (3) both ("dual") settings. In adjusted analyses, we used two-part and logistic regression to examine associations between dialysis setting and all-cause hospitalization and mortality one year from veterans' baseline dialysis date. Of 1,388 veterans, 27% received dialysis exclusively in VA, 47% in VA-outsourced settings, and 25% in dual settings. Overall, half (48%) were hospitalized and 12% died. In adjusted analysis, veterans in VA-outsourced settings incurred fewer hospitalizations and shorter hospital stays than users of VA due to favorable selection. Dual-system dialysis patients had lower one-year mortality than veterans receiving VA dialysis. VA expenditures for "buying" outsourced dialysis are high and increasing relative to "making" dialysis treatment within its own system. Outcomes comparisons inform future make-or-buy decisions and suggest the need for VA to consider veterans' access to care, long-term VA savings, and optimal patient outcomes in its placement decisions for dialysis services.

  11. Comparison of outcomes for veterans receiving dialysis care from VA and non-VA providers

    Directory of Open Access Journals (Sweden)

    Wang Virginia

    2013-01-01

    Full Text Available Abstract Background Demand for dialysis treatment exceeds its supply within the Veterans Health Administration (VA), requiring VA to outsource dialysis care by purchasing private sector dialysis for veterans on a fee-for-service basis. It is unclear whether outcomes are similar for veterans receiving dialysis from VA versus non-VA providers. We assessed the extent of chronic dialysis treatment utilization and differences in all-cause hospitalizations and mortality between veterans receiving dialysis from VA versus VA-outsourced providers. Methods We constructed a retrospective cohort of veterans in 2 VA regions who received chronic dialysis treatment financed by VA between January 2007 and December 2008. From VA administrative data, we identified veterans who received outpatient dialysis in (1) VA, (2) VA-outsourced settings, or (3) both (“dual”) settings. In adjusted analyses, we used two-part and logistic regression to examine associations between dialysis setting and all-cause hospitalization and mortality one year from veterans’ baseline dialysis date. Results Of 1,388 veterans, 27% received dialysis exclusively in VA, 47% in VA-outsourced settings, and 25% in dual settings. Overall, half (48%) were hospitalized and 12% died. In adjusted analysis, veterans in VA-outsourced settings incurred fewer hospitalizations and shorter hospital stays than users of VA due to favorable selection. Dual-system dialysis patients had lower one-year mortality than veterans receiving VA dialysis. Conclusions VA expenditures for “buying” outsourced dialysis are high and increasing relative to “making” dialysis treatment within its own system. Outcomes comparisons inform future make-or-buy decisions and suggest the need for VA to consider veterans’ access to care, long-term VA savings, and optimal patient outcomes in its placement decisions for dialysis services.

  12. ZZ ECN-BUBEBO, ECN-Petten Burnup Benchmark Book, Inventories, Afterheat

    International Nuclear Information System (INIS)

    Kloosterman, Jan Leen

    1999-01-01

    Description of program or function: Contains experimental benchmarks which can be used for the validation of burnup code systems and accompanying data libraries. Although the benchmarks presented here are thoroughly described in the literature, it is in many cases not straightforward to retrieve unambiguously the correct input data and corresponding results from the benchmark descriptions. Furthermore, results which can easily be measured are sometimes difficult to calculate because of conversions to be made. Therefore, emphasis has been put on clarifying the input of the benchmarks and on presenting the benchmark results in such a way that they can easily be calculated and compared. For more thorough descriptions of the benchmarks themselves, the literature referred to here should be consulted. This benchmark book is divided into 11 chapters/files containing the following in text and tabular form: chapter 1: Introduction; chapter 2: Burnup Credit Criticality Benchmark Phase 1-B; chapter 3: Yankee-Rowe Core V Fuel Inventory Study; chapter 4: H.B. Robinson Unit 2 Fuel Inventory Study; chapter 5: Turkey Point Unit 3 Fuel Inventory Study; chapter 6: Turkey Point Unit 3 Afterheat Power Study; chapter 7: Dickens Benchmark on Fission Product Energy Release of U-235; chapter 8: Dickens Benchmark on Fission Product Energy Release of Pu-239; chapter 9: Yarnell Benchmark on Decay Heat Measurements of U-233; chapter 10: Yarnell Benchmark on Decay Heat Measurements of U-235; chapter 11: Yarnell Benchmark on Decay Heat Measurements of Pu-239

  13. KENO-VA-PVM KENO-VA-SM, KENO5A for Parallel Processors

    International Nuclear Information System (INIS)

    Ramon, Javier; Pena, Jorge

    2002-01-01

    1 - Description of program or function: This package contains the versions KENO-Va-SM (Shared Memory version) and KENO-Va-PVM (Parallel Virtual Machine version), based on SCALE-4.1. KENO-Va solves the three-dimensional Boltzmann transport equation for neutron multiplying systems. The primary purpose of KENO-Va is to determine k-effective. Other calculated quantities include lifetime and generation time, energy-dependent leakages, energy- and region-dependent absorptions, fissions, fluxes, and fission densities. 2 - Method of solution: KENO-Va employs the Monte Carlo technique.

  14. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    textabstractBenchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  15. OneVA Pharmacy

    Data.gov (United States)

    Department of Veterans Affairs — The OneVA Pharmacy application design consists of 3 main components: VistA Medication Profile screen, Health Data Record Clinical Data Service (HDR/CDS), and OneVA...

  16. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1

    International Nuclear Information System (INIS)

    Van Der Marck, S. C.

    2012-01-01

    Three nuclear data libraries have been tested extensively using criticality safety benchmark calculations. The three libraries are the new release of the US library ENDF/B-VII.1 (2011), the new release of the Japanese library JENDL-4.0 (2011), and the OECD/NEA library JEFF-3.1 (2006). All calculations were performed with the continuous-energy Monte Carlo code MCNP (version 4C3, as well as version 6-beta1). Around 2000 benchmark cases from the International Handbook of Criticality Safety Benchmark Experiments (ICSBEP) were used. The results were analyzed per ICSBEP category, and per element. Overall, the three libraries show similar performance on most criticality safety benchmarks. The largest differences are probably caused by elements such as Be, C, Fe, Zr, W. (authors)

  17. Comparing Catheter-associated Urinary Tract Infection Prevention Programs Between VA and Non-VA Nursing Homes

    Science.gov (United States)

    Mody, Lona; Greene, M. Todd; Saint, Sanjay; Meddings, Jennifer; Trautner, Barbara W.; Wald, Heidi L.; Crnich, Christopher; Banaszak-Holl, Jane; McNamara, Sara E.; King, Beth J.; Hogikyan, Robert; Edson, Barbara; Krein, Sarah L.

    2018-01-01

    OBJECTIVE The impact of healthcare system integration on infection prevention programs is unknown. Using catheter-associated urinary tract infection (CAUTI) prevention as an example, we hypothesize that U.S. Department of Veterans Affairs (VA) nursing homes have a more robust infection prevention infrastructure due to integration and centralization compared with non-VA nursing homes. SETTING VA and non-VA nursing homes participating in the “AHRQ Safety Program for Long-term Care” collaborative. METHODS Nursing homes provided baseline information about their infection prevention programs to assess strengths and gaps related to CAUTI prevention. RESULTS A total of 353 (71%; 47 VA, 306 non-VA) of 494 nursing homes from 41 states responded. VA nursing homes reported more hours/week devoted to infection prevention-related activities (31 vs. 12 hours, P<.001), and were more likely to have committees that reviewed healthcare-associated infections. Compared with non-VA facilities, a higher percentage of VA nursing homes reported tracking CAUTI rates (94% vs. 66%, P<.001), sharing CAUTI data with leadership (94% vs. 70%, P=.014) and nursing personnel (85% vs. 56%, P=.003). However, fewer VA nursing homes reported having policies for appropriate catheter use (64% vs. 81%, P=.004) and catheter insertion (83% vs. 94%, P=.004). CONCLUSIONS Among nursing homes participating in an AHRQ-funded collaborative, VA and non-VA nursing homes differed in their approach to CAUTI prevention. Best practices from both settings should be applied universally to create an optimal infection prevention program within emerging integrated healthcare systems. PMID:27917728

  18. Benchmarking and validation activities within JEFF project

    Directory of Open Access Journals (Sweden)

    Cabellos O.

    2017-01-01

    Full Text Available The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus, requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.

  19. Benchmarking and validation activities within JEFF project

    Science.gov (United States)

    Cabellos, O.; Alvarez-Velarde, F.; Angelone, M.; Diez, C. J.; Dyrda, J.; Fiorito, L.; Fischer, U.; Fleming, M.; Haeck, W.; Hill, I.; Ichou, R.; Kim, D. H.; Klix, A.; Kodeli, I.; Leconte, P.; Michel-Sendis, F.; Nunnenmann, E.; Pecchia, M.; Peneliau, Y.; Plompen, A.; Rochman, D.; Romojaro, P.; Stankovskiy, A.; Sublet, J. Ch.; Tamagno, P.; Marck, S. van der

    2017-09-01

    The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus, requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.

  20. 75 FR 78806 - Agency Information Collection (Create Payment Request for the VA Funding Fee Payment System (VA...

    Science.gov (United States)

    2010-12-16

    ... Payment Request for the VA Funding Fee Payment System (VA FFPS); a Computer Generated Funding Fee Receipt.... 2900-0474.'' SUPPLEMENTARY INFORMATION: Title: Create Payment Request for the VA Funding Fee Payment System (VA FFPS); a Computer Generated Funding Fee Receipt, VA Form 26-8986. OMB Control Number: 2900...

  1. Benchmarking

    OpenAIRE

    Meylianti S., Brigita

    1999-01-01

    Benchmarking has different meanings to different people. There are five types of benchmarking, namely internal benchmarking, competitive benchmarking, industry/functional benchmarking, process/generic benchmarking and collaborative benchmarking. Each type of benchmarking has its own advantages as well as disadvantages. Therefore it is important to know what kind of benchmarking is suitable for a specific application. This paper will discuss those five types of benchmarking in detail, includ...

  2. REVISED STREAM CODE AND WASP5 BENCHMARK

    International Nuclear Information System (INIS)

    Chen, K

    2005-01-01

    STREAM is an emergency response code that predicts downstream pollutant concentrations for releases from the SRS area to the Savannah River. The STREAM code uses an algebraic equation to approximate the solution of the one-dimensional advective transport differential equation. This approach generates spurious oscillations in the concentration profile when modeling long-duration releases. To improve the capability of the STREAM code to model long-term releases, its calculation module was replaced by the WASP5 code. WASP5 is a US EPA water quality analysis program that simulates one-dimensional pollutant transport through surface water. Test cases were performed to compare the revised version of STREAM with the existing version. For continuous releases, results predicted by the revised STREAM code agree with physical expectations. The WASP5 code was benchmarked against the US EPA 1990 and 1991 dye tracer studies, in which the transport of the dye was measured from its release at the New Savannah Bluff Lock and Dam downstream to Savannah. The peak concentrations predicted by WASP5 agreed with the measurements within ±20.0%. The transport times of the dye concentration peak predicted by WASP5 agreed with the measurements within ±3.6%. These benchmarking results demonstrate that STREAM should be capable of accurately modeling releases from SRS outfalls
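The one-dimensional advective transport that both codes address can be sketched with a first-order explicit upwind finite-difference scheme. This is an illustration of the governing equation only: it reproduces neither STREAM's algebraic approximation nor WASP5's solver, and the grid, velocity, and release shape are arbitrary.

```python
import numpy as np

def upwind_advect(c0, u, dx, dt, n_steps):
    """First-order explicit upwind scheme for dC/dt + u*dC/dx = 0 (u > 0).

    A sketch of 1-D advective pollutant transport; stable only when the
    Courant number u*dt/dx <= 1. First-order upwinding adds numerical
    diffusion but, unlike some algebraic shortcuts, does not oscillate.
    """
    c = c0.copy()
    courant = u * dt / dx
    assert courant <= 1.0, "CFL condition violated"
    for _ in range(n_steps):
        c[1:] -= courant * (c[1:] - c[:-1])   # upwind difference
        c[0] = 0.0                            # clean upstream boundary
    return c

c0 = np.zeros(200)
c0[10:20] = 1.0                               # a slug release near the source
c = upwind_advect(c0, u=0.5, dx=1.0, dt=1.0, n_steps=100)
print(int(np.argmax(c)))                      # pulse centre, ~50 cells downstream
```

After 100 steps the pulse centre has travelled u·t = 50 cells while total mass is conserved; the smearing around the peak is the scheme's numerical diffusion, the price paid for avoiding the spurious oscillations mentioned above.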

  3. A Role for Myosin Va in Human Cytomegalovirus Nuclear Egress.

    Science.gov (United States)

    Wilkie, Adrian R; Sharma, Mayuri; Pesola, Jean M; Ericsson, Maria; Fernandez, Rosio; Coen, Donald M

    2018-03-15

    Herpesviruses replicate and package their genomes into capsids in replication compartments within the nuclear interior. Capsids then move to the inner nuclear membrane for envelopment and release into the cytoplasm in a process called nuclear egress. We previously found that nuclear F-actin is induced upon infection with the betaherpesvirus human cytomegalovirus (HCMV) and is important for nuclear egress and capsid localization away from replication compartment-like inclusions toward the nuclear rim. Despite these and related findings, it has not been shown that any specific motor protein is involved in herpesvirus nuclear egress. In this study, we have investigated whether the host motor protein, myosin Va, could be fulfilling this role. Using immunofluorescence microscopy and coimmunoprecipitation, we observed associations between a nuclear population of myosin Va and the viral major capsid protein, with both concentrating at the periphery of replication compartments. Immunoelectron microscopy showed that nearly 40% of assembled nuclear capsids associate with myosin Va. We also found that myosin Va and major capsid protein colocalize with nuclear F-actin. Importantly, antagonism of myosin Va with RNA interference or a dominant negative mutant revealed that myosin Va is important for the efficient production of infectious virus, capsid accumulation in the cytoplasm, and capsid localization away from replication compartment-like inclusions toward the nuclear rim. Our results lead us to suggest a working model whereby human cytomegalovirus capsids associate with myosin Va for movement from replication compartments to the nuclear periphery during nuclear egress. IMPORTANCE Little is known regarding how newly assembled and packaged herpesvirus capsids move from the nuclear interior to the periphery during nuclear egress. While it has been proposed that an actomyosin-based mechanism facilitates intranuclear movement of alphaherpesvirus capsids, a functional role for

  4. The Geometric-VaR Backtesting Method

    DEFF Research Database (Denmark)

    Wei, Wei; Pelletier, Denis

    2014-01-01

    This paper develops a new test to evaluate Value-at-Risk (VaR) forecasts. VaR is a standard risk measure widely utilized by financial institutions and regulators, yet estimating VaR is a challenging problem, and popular VaR forecasts rely on unrealistic assumptions. Hence, assessing...
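The duration idea behind such backtests can be sketched simply: if the VaR(alpha) forecast is correct, the gaps between successive violations are geometrically distributed with parameter alpha, which a likelihood-ratio statistic can check. The sketch below is a stripped-down illustration of that idea with simulated data, not the authors' full test (which, among other things, conditions on covariates and handles censored spells).

```python
import math
import random

def geometric_duration_llr(returns, var_forecasts, alpha=0.01):
    """Likelihood-ratio statistic for the durations between VaR violations.

    Under a correct VaR(alpha) forecast, the gap between violations is
    geometric with parameter alpha; the statistic is approximately
    chi-squared(1) under that null. Trailing incomplete spells are simply
    discarded in this sketch.
    """
    hits = [r < -v for r, v in zip(returns, var_forecasts)]
    durations, d = [], 0
    for h in hits:
        d += 1
        if h:
            durations.append(d)
            d = 0
    if not durations:
        return None                            # no violations observed
    p_hat = len(durations) / sum(durations)    # geometric MLE
    def loglik(p):
        return sum(math.log(p) + (di - 1) * math.log(1.0 - p)
                   for di in durations)
    return 2.0 * (loglik(p_hat) - loglik(alpha))

# Simulated P&L against a constant (hypothetical) 1% VaR forecast of 2.326,
# the 1% quantile of a standard normal, so the null is true by design.
rng = random.Random(0)
rets = [rng.gauss(0.0, 1.0) for _ in range(5000)]
stat = geometric_duration_llr(rets, [2.326] * 5000, alpha=0.01)
print(stat is not None and stat >= 0.0)  # → True
```

Clustered violations shorten some durations and lengthen others relative to the geometric benchmark, inflating the statistic; that is the kind of misspecification a duration-based backtest is designed to detect.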

  5. 78 FR 59771 - Proposed Information Collection (Create Payment Request for the VA Funding Fee Payment System (VA...

    Science.gov (United States)

    2013-09-27

    ... Payment Request for the VA Funding Fee Payment System (VA FFPS); a Computer Generated Funding Fee Receipt.... Title: Create Payment Request for the VA Funding Fee Payment System (VA FFPS); A Computer Generated... through the Federal Docket Management System (FDMS) at www.Regulations.gov or to Nancy J. Kessinger...

  6. VA Vascular Injury Study (VAVIS): VA-DoD extremity injury outcomes collaboration.

    Science.gov (United States)

    Shireman, Paula K; Rasmussen, Todd E; Jaramillo, Carlos A; Pugh, Mary Jo

    2015-02-03

    Limb injuries comprise 50-60% of U.S. Service members' casualties of the wars in Afghanistan and Iraq. Combat-related vascular injuries are present in 12% of this cohort, a rate 5 times higher than in prior wars. Improvements in medical and surgical trauma care, including initial in-theatre limb salvage (IILS) approaches, have resulted in improved survival and fewer amputations; however, the long-term outcomes, such as morbidity, functional decline, and risk for late amputation of salvaged limbs under the current process of care, have not been studied. The long-term care of these injured warfighters poses a significant challenge to the Department of Defense (DoD) and Department of Veterans Affairs (VA). The VA Vascular Injury Study (VAVIS): VA-DoD Extremity Injury Outcomes Collaborative, funded by the VA, Health Services Research and Development Service, is a longitudinal cohort study of Veterans with vascular extremity injuries. Enrollment will begin in April 2015 and continue for 3 years. Individuals with a validated extremity vascular injury in the Department of Defense Trauma Registry will be contacted and will complete a set of validated demographic, social, behavioral, and functional status measures during interview and online/mailed survey. Primary outcome measures will: 1) Compare injury, demographic and geospatial characteristics of patients with IILS and identify late vascular surgery related limb complications and health care utilization in Veterans receiving VA vs. non-VA care, 2) Characterize the preventive services received by individuals with vascular repair and related outcomes, and 3) Describe patient-reported functional outcomes in Veterans with traumatic vascular limb injuries. This study will provide key information about the current process of care for Active Duty Service members and Veterans with polytrauma/vascular injuries at risk for persistent morbidity and late amputation. The results of this study will be the first step for clinicians in VA and

  7. 75 FR 61252 - Proposed Information Collection (Create Payment Request for the VA Funding Fee Payment System (VA...

    Science.gov (United States)

    2010-10-04

    ... Payment Request for the VA Funding Fee Payment System (VA FFPS); A Computer Generated Funding Fee Receipt... Payment Request for the VA Funding Fee Payment System (VA FFPS); A Computer Generated Funding Fee Receipt... information through the Federal Docket Management System (FDMS) at http://www.Regulations.gov or to Nancy J...

  8. 75 FR 61859 - Proposed Information Collection (Create Payment Request for the VA Funding Fee Payment System (VA...

    Science.gov (United States)

    2010-10-06

    ... Payment Request for the VA Funding Fee Payment System (VA FFPS); A Computer Generated Funding Fee Receipt... Payment Request for the VA Funding Fee Payment System (VA FFPS); A Computer Generated Funding Fee Receipt... information through the Federal Docket Management System (FDMS) at http://www.Regulations.gov or to Nancy J...

  9. VA announces aggressive new approach to produce rapid improvements in VA medical centers

    Directory of Open Access Journals (Sweden)

    Robbins RA

    2018-02-01

    Full Text Available No abstract available. Article truncated at 150 words. The U.S. Department of Veterans Affairs (VA) announced steps that it is taking as part of an aggressive new approach to produce rapid improvements at VA's low-performing medical facilities nationwide (1). VA defines its low-performing facilities as those medical centers that receive the lowest score in its Strategic Analytics for Improvement and Learning (SAIL) star rating system, or a one-star rating out of five. The SAIL star rating was initiated in 2016 and uses a variety of measures including mortality, length of hospital stay, readmission rates, hospital complications, and physician productivity and efficiency. A complete listing of the VA facilities, their star ratings, and the metrics used to determine the ratings is available through the end of fiscal year 2017 (2). Based on the latest ratings, the VA currently has 15 one-star facilities including Denver, Loma Linda, and Phoenix in the Southwest (Table 1). Table 1. VA facilities with one-star ratings …

  10. 48 CFR 853.215-70 - VA Form 10-1170, Application for Furnishing Nursing Home Care to Beneficiaries of VA.

    Science.gov (United States)

    2010-10-01

    ..., Application for Furnishing Nursing Home Care to Beneficiaries of VA. 853.215-70 Section 853.215-70 Federal... 853.215-70 VA Form 10-1170, Application for Furnishing Nursing Home Care to Beneficiaries of VA. VA Form 10-1170, Application for Furnishing Nursing Home Care to Beneficiaries of VA, will be used for...

  11. Do Older Rural and Urban Veterans Experience Different Rates of Unplanned Readmission to VA and Non-VA Hospitals?

    Science.gov (United States)

    Weeks, William B.; Lee, Richard E.; Wallace, Amy E.; West, Alan N.; Bagian, James P.

    2009-01-01

    Context: Unplanned readmission within 30 days of discharge is an indicator of hospital quality. Purpose: We wanted to determine whether older rural veterans who were enrolled in the VA had different rates of unplanned readmission to VA or non-VA hospitals than their urban counterparts. Methods: We used the combined VA/Medicare dataset to examine…

  12. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and the NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.
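
A code-to-code benchmark of this kind often reduces to checking that one model's predictions stay within a stated tolerance of another's across a curve of values. A minimal illustrative sketch in Python; the dose-rate values and the 20% tolerance below are invented for illustration, not RISKIND or RADTRAN output:

```python
def within_tolerance(predicted, reference, rel_tol=0.2):
    """True if every predicted value lies within rel_tol of its reference."""
    return all(abs(p - r) <= rel_tol * abs(r) for p, r in zip(predicted, reference))

# Hypothetical external dose-rate curves (mrem/h) at increasing distance
code_a = [10.2, 4.9, 1.1, 0.31]   # RISKIND-like predictions (invented)
code_b = [10.0, 5.0, 1.0, 0.30]   # RADTRAN-like predictions (invented)

print(within_tolerance(code_a, code_b))  # → True
```

A pointwise relative-tolerance check like this gives a simple pass/fail benchmark verdict; a real comparison would also report the magnitude and sign of each deviation.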

  13. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and the NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  14. Benchmarking ENDF/B-VII.0

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2006-01-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM) to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding, many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), the Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). For testing delayed neutron data, more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA, Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, and compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8 and JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257
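
Underprediction of the kind described above is typically quantified as an average calculated-to-experimental (C/E) k-eff ratio over a benchmark category. A minimal sketch; the three k-eff values are invented for illustration, not results from the paper:

```python
def average_c_over_e(calculated, experimental):
    """Mean calculated-to-experimental (C/E) k-eff ratio over a benchmark set."""
    ratios = [c / e for c, e in zip(calculated, experimental)]
    return sum(ratios) / len(ratios)

# Hypothetical k-eff results for three LEU-COMP-THERM-style cases (invented)
calc = [0.9978, 1.0003, 0.9991]
expt = [1.0000, 1.0000, 1.0000]

print(round(average_c_over_e(calc, expt), 4))  # → 0.9991
```

An average C/E below 1.0 across a category (e.g. 0.995 for a 0.5% underprediction) is the summary statistic a library test like this reports.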

  15. Direct data access protocols benchmarking on DPM

    CERN Document Server

    Furano, Fabrizio; Keeble, Oliver; Mancinelli, Valentina

    2015-01-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in recent years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring infor...

  16. Non-VA Hospital System (NVH)

    Data.gov (United States)

    Department of Veterans Affairs — The Veterans Health Administration (VHA) pays for care provided to VA beneficiaries in non-VA hospitals through its contract hospitalization program as mandated by...

  17. A Seafloor Benchmark for 3-dimensional Geodesy

    Science.gov (United States)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. 
More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  18. Accessing VA Healthcare During Large-Scale Natural Disasters.

    Science.gov (United States)

    Der-Martirosian, Claudia; Pinnock, Laura; Dobalian, Aram

    2017-01-01

    Natural disasters can lead to the closure of medical facilities, including those of the Veterans Affairs (VA), thus impacting access to healthcare for U.S. military veteran VA users. We examined the characteristics of VA patients who reported having difficulty accessing care if their usual source of VA care were closed because of a natural disaster. A total of 2,264 veteran VA users living in the U.S. northeast region participated in a 2015 cross-sectional representative survey. The study used VA administrative data in a complex stratified survey design with a multimode approach. Overall, 36% of veteran VA users reported that they would have difficulty accessing care elsewhere; functionally impaired and lower-income VA patients were the most negatively affected.

  19. Comparison of 250 MHz R10K Origin 2000 and 400 MHz Origin 2000 Using NAS Parallel Benchmarks

    Science.gov (United States)

    Turney, Raymond D.; Thigpen, William W. (Technical Monitor)

    2001-01-01

    This report describes results of benchmark tests on Steger, a 250 MHz Origin 2000 system with R10K processors, currently installed at the NASA Ames National Advanced Supercomputing (NAS) facility. For comparison purposes, the tests were also run on Lomax, a 400 MHz Origin 2000 with R12K processors. The BT, LU, and SP application benchmarks in the NAS Parallel Benchmark Suite and the kernel benchmark FT were chosen to measure system performance. Having been written to measure performance on Computational Fluid Dynamics applications, these benchmarks are assumed appropriate to represent the NAS workload. Since the NAS runs both message passing (MPI) and shared-memory, compiler directive type codes, both MPI and OpenMP versions of the benchmarks were used. The MPI versions used were the latest official release of the NAS Parallel Benchmarks, version 2.3. The OpenMP versions used were PBN3b2, a beta version that is in the process of being released. NPB 2.3 and PBN3b2 are technically different benchmarks, and NPB results are not directly comparable to PBN results.

  20. Benefits of the delta K of depletion benchmarks for burnup credit validation

    International Nuclear Information System (INIS)

    Lancaster, D.; Machiels, A.

    2012-01-01

    Pressurized Water Reactor (PWR) burnup credit validation is demonstrated using the benchmarks for quantifying fuel reactivity decrements, published as 'Benchmarks for Quantifying Fuel Reactivity Depletion Uncertainty,' EPRI Report 1022909 (August 2011). This demonstration uses the depletion module TRITON available in the SCALE 6.1 code system, followed by criticality calculations using KENO V.a. The difference between the predicted depletion reactivity and the benchmark's depletion reactivity is a bias for the criticality calculations. The uncertainty in the benchmarks is the depletion reactivity uncertainty. This depletion bias and uncertainty are used with the bias and uncertainty from fresh UO2 critical experiments to determine the criticality safety limits on the neutron multiplication factor, k-eff. The analysis shows that SCALE 6.1 with the ENDF/B-VII 238-group cross section library supports the use of a depletion bias of only 0.0015 in delta k if cooling is ignored and 0.0025 if cooling is credited. The uncertainty in the depletion bias is 0.0064. Reliance on the ENDF/B-V cross section library produces much larger disagreement with the benchmarks. The analysis covers numerous combinations of depletion and criticality options. In all cases, the historical uncertainty of 5% of the delta k of depletion (the 'Kopp memo') was shown to be conservative for fuel with more than 30 GWD/MTU burnup. Since this historically assumed burnup uncertainty is not a function of burnup, the Kopp memo's recommended bias and uncertainty may be exceeded at low burnups, but its absolute magnitude is small. (authors)
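
The way a depletion bias and uncertainty feed into a k-eff safety limit can be illustrated with the abstract's numbers. The combination rule below (root-sum-square of independent uncertainties, plus a 0.05 administrative margin and an assumed 0.005 fresh-fuel uncertainty) is a common simple form, not necessarily the authors' exact method:

```python
import math

def upper_subcritical_limit(k_base=1.0, fresh_bias=0.0, fresh_unc=0.005,
                            depletion_bias=0.0015, depletion_unc=0.0064,
                            admin_margin=0.05):
    """Simple USL form: subtract biases, an administrative margin, and the
    root-sum-square of independent uncertainties from the base k-eff."""
    total_unc = math.sqrt(fresh_unc ** 2 + depletion_unc ** 2)
    return k_base - fresh_bias - depletion_bias - admin_margin - total_unc

# No cooling credited: depletion bias 0.0015 (per the abstract)
print(round(upper_subcritical_limit(), 4))  # → 0.9404
```

Crediting cooling (depletion_bias=0.0025) lowers the limit slightly, which matches the intuition that a larger bias must be subtracted.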

  1. Categorical Regression and Benchmark Dose Software 3.0

    Science.gov (United States)

    The objective of this full-day course is to provide participants with interactive training on the use of the U.S. Environmental Protection Agency’s (EPA) Benchmark Dose software (BMDS, version 3.0, released fall 2018) and Categorical Regression software (CatReg, version 3.1...

  2. Performance assessment of new neutron cross section libraries using MCNP code and some critical benchmarks

    International Nuclear Information System (INIS)

    Bakkari, B El; Bardouni, T El.; Erradi, L.; Chakir, E.; Meroun, O.; Azahra, M.; Boukhal, H.; Khoukhi, T El.; Htet, A.

    2007-01-01

    Full text: New releases of nuclear data files have been made available in recent years. The reference MCNP5 code (1) for Monte Carlo calculations is usually distributed with only one standard nuclear data library for neutron interactions, based on ENDF/B-VI. The main goal of this work is to process new neutron cross-section libraries in ACE continuous format for the MCNP code, based on the most recent data files made available to the scientific community: ENDF/B-VII.b2, ENDF/B-VI (release 8), JEFF-3.0, JEFF-3.1, JENDL-3.3 and JEF-2.2. In our data treatment, we used the modular NJOY system (release 99.9) (2) in conjunction with its most recent updates. Assessment of the performance of the processed pointwise cross-section libraries was made by means of criticality prediction and analysis of other integral parameters for a set of reactor benchmarks. Almost all of the analyzed benchmarks were taken from the International Handbook of Evaluated Criticality Safety Benchmark Experiments from the OECD (3). Some revised benchmarks were taken from references (4,5). These benchmarks use Pu-239 or U-235 as the main fissionable material in different forms and enrichments and cover various geometries. Monte Carlo calculations were performed in 3D with maximum detail of the benchmark descriptions, and the S(α,β) cross-section treatment was adopted in all thermal cases. The resulting one-standard-deviation confidence interval for the eigenvalue is typically +/-13 to +/-20 pcm.

  3. Assessing the quality of VA Human Research Protection Programs: VA vs. affiliated University Institutional Review Board.

    Science.gov (United States)

    Tsan, Min-Fu; Nguyen, Yen; Brooks, Robert

    2013-04-01

    We compared the Human Research Protection Program (HRPP) quality indicator data of Department of Veterans Affairs (VA) facilities using their own VA institutional review boards (IRBs) with those using affiliated university IRBs. Of a total of 25 performance metrics, 13 did not demonstrate statistically significant differences, while 12 did. Among the 12 with statistically significant differences, facilities using their own VA IRBs performed better on four of the metrics, while facilities using affiliated university IRBs performed better on eight. However, the absolute difference was small (0.2-2.7%) in all instances, suggesting that the differences were of no practical significance. We conclude that it is acceptable for facilities to use either their own VA IRBs or affiliated university IRBs as their IRBs of record.

  4. Formulation of 3D Printed Tablet for Rapid Drug Release by Fused Deposition Modeling: Screening Polymers for Drug Release, Drug-Polymer Miscibility and Printability.

    Science.gov (United States)

    Solanki, Nayan G; Tahsin, Md; Shah, Ankita V; Serajuddin, Abu T M

    2018-01-01

    The primary aim of this study was to identify pharmaceutically acceptable amorphous polymers for producing 3D printed tablets of a model drug, haloperidol, for rapid release by fused deposition modeling. Filaments for 3D printing were prepared by hot melt extrusion at 150°C with 10% and 20% w/w of haloperidol using Kollidon® VA64, Kollicoat® IR, Affinisol™ 15 cP, and HPMCAS, either individually or as binary blends (Kollidon® VA64 + Affinisol™ 15 cP, 1:1; Kollidon® VA64 + HPMCAS, 1:1). Dissolution of crushed extrudates was studied at pH 2 and 6.8, and formulations demonstrating rapid dissolution rates were then analyzed for drug-polymer, polymer-polymer and drug-polymer-polymer miscibility by film casting. Polymer-polymer (1:1) and drug-polymer-polymer (1:5:5 and 2:5:5) mixtures were found to be miscible. Tablets with 100% and 60% infill were printed using a MakerBot printer at 210°C, and dissolution tests of tablets were conducted at pH 2 and 6.8. Extruded filaments of Kollidon® VA64-Affinisol™ 15 cP mixtures were flexible and had optimum mechanical strength for 3D printing. Tablets containing 10% drug with 60% and 100% infill showed complete drug release at pH 2 in 45 and 120 min, respectively. Relatively high dissolution rates were also observed at pH 6.8. The 1:1 mixture of Kollidon® VA64 and Affinisol™ 15 cP was thus identified as a suitable polymer system for 3D printing and rapid drug release. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  5. Characteristics Associated With Utilization of VA and Non-VA Care Among Iraq and Afghanistan Veterans With Post-Traumatic Stress Disorder.

    Science.gov (United States)

    Finley, Erin P; Mader, Michael; Bollinger, Mary J; Haro, Elizabeth K; Garcia, Hector A; Huynh, Alexis K; Pugh, Jacqueline A; Pugh, Mary Jo

    2017-11-01

    Post-traumatic stress disorder (PTSD) affects nearly one-fifth of Iraq and Afghanistan Veterans (IAV). The Department of Veterans Affairs (VA) has invested in making evidence-based psychotherapies for PTSD available at every VA facility nationwide; however, an unknown number of veterans opt to receive care in the community rather than with VA. We compared PTSD care utilization patterns among Texas IAV with PTSD, an ethnically, geographically, and economically diverse group. To identify IAV in Texas with service-connected disability for PTSD, we used a crosswalk of VA administrative data from the Operation Enduring Freedom/Operation Iraqi Freedom Roster and service-connected disability data from the Veterans Benefits Administration. We then surveyed a random sample of 1,128 veterans from the cohort, stratified by sex, rurality, and past use/nonuse of any VA care. Respondents were classified into current utilization groups (VA only, non-VA only, dual care, and no professional PTSD treatment) on the basis of reported PTSD care in the prior 12 months. Responses were weighted to account for sample stratification and for response rate within each stratum. Utilization group characteristics were compared to the population mean using the one-sample Z-test for proportions, or the t-test for means. A multinomial logistic regression model was used to identify survey variables significantly associated with current utilization group. A total of 249 IAV completed the survey (28.4% response rate). Respondents reported receiving PTSD care: in the VA only (58.3%); in military or community-based settings, including private practitioners (non-VA only, 8.7%); and in both VA and non-VA settings (dual care, 14.5%). The remainder (18.5%) reported no professional PTSD care in the prior year. Veterans ineligible for Department of Defense care, uncomfortable talking about their problems, and opposed to medication were more likely to receive non-VA care only, whereas those with lower household income

  6. KENO V.a Primer: A Primer for Criticality Calculations with SCALE/KENO V.a Using CSPAN for Input

    International Nuclear Information System (INIS)

    Busch, R.D.

    2003-01-01

    The SCALE (Standardized Computer Analyses for Licensing Evaluation) computer software system developed at Oak Ridge National Laboratory (ORNL) is widely used and accepted around the world for criticality safety analyses. The well-known KENO V.a three-dimensional Monte Carlo criticality computer code is the primary criticality safety analysis tool in SCALE. The KENO V.a primer is designed to help a new user understand and use the SCALE/KENO V.a Monte Carlo code for nuclear criticality safety analyses. It assumes that the user has a college education in a technical field. There is no assumption of familiarity with Monte Carlo codes in general or with SCALE/KENO V.a in particular. The primer is designed to teach by example, with each example illustrating two or three features of SCALE/KENO V.a that are useful in criticality analyses. The primer is based on SCALE 4.4a, which includes the Criticality Safety Processor for Analysis (CSPAN) input processor for Windows personal computers (PCs). A second edition of the primer, which uses the new KENO Visual Editor, is currently under development at ORNL and is planned for publication in late 2003. Each example in this first edition of the primer uses CSPAN to provide the framework for data input. Starting with a Quickstart section, the primer gives an overview of the basic requirements for SCALE/KENO V.a input and allows the user to quickly run a simple criticality problem with SCALE/KENO V.a. The sections that follow Quickstart include a list of basic objectives at the beginning that identifies the goal of the section and the individual SCALE/KENO V.a features which are covered in detail in the example problems in that section. Upon completion of the primer, a new user should be comfortable using CSPAN to set up criticality problems in SCALE/KENO V.a

  7. Developing a benchmark for emotional analysis of music.

    Science.gov (United States)

    Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field has expanded rapidly in the last decade. Many new methods and audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures, and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent-neural-network-based approaches combined with large feature sets work best for dynamic MER.
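
Dynamic MER with 2 Hz valence/arousal annotations is commonly scored per excerpt with metrics such as RMSE between the predicted and annotated curves. A minimal sketch; the annotation and prediction values are invented for illustration:

```python
import math

def rmse(pred, truth):
    """Root-mean-square error between predicted and annotated curves."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth))

# Invented 2 Hz valence annotations (scaled to [-1, 1]) for one excerpt
truth = [0.1, 0.2, 0.25, 0.3, 0.2]
pred = [0.0, 0.2, 0.30, 0.2, 0.2]

print(round(rmse(pred, truth), 3))  # → 0.067
```

A benchmark of this kind would average per-excerpt RMSE (and often the correlation between the curves) over the whole evaluation set.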

  8. Report of VA Medical Training Programs

    Data.gov (United States)

    Department of Veterans Affairs — The Report of VA Medical Training Programs Database is used to track medical center health services trainees and VA physicians serving as faculty. The database also...

  9. VaR Methodology Application for Banking Currency Portfolios

    Directory of Open Access Journals (Sweden)

    Daniel Armeanu

    2007-02-01

    Full Text Available VaR has become the standard measure that financial analysts use to quantify market risk. VaR measures have many applications, such as in risk management, in evaluating the performance of risk takers, and in meeting regulatory requirements, and hence it is very important to develop methodologies that provide accurate estimates. In particular, the Basel Committee on Banking Supervision at the Bank for International Settlements requires financial institutions such as banks and investment firms to meet capital requirements based on VaR estimates. In this paper we determine VaR for a banking currency portfolio and comply with the rules of the National Bank of Romania regarding VaR reporting.
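
One standard way to compute VaR for a currency portfolio is the parametric (variance-covariance) approach: portfolio variance from exposures, volatilities, and a correlation matrix, scaled by a confidence-level z-score. A minimal sketch; the exposures, volatilities, and correlation below are invented for illustration, and 1.645 corresponds to a one-tailed 95% confidence level (the paper does not necessarily use this exact method):

```python
import math

def parametric_var(values, vols, corr, z=1.645):
    """One-day parametric (variance-covariance) VaR at ~95% confidence.
    values: exposure per currency; vols: daily return volatilities;
    corr: correlation matrix of the currency returns."""
    n = len(values)
    variance = sum(values[i] * vols[i] * corr[i][j] * vols[j] * values[j]
                   for i in range(n) for j in range(n))
    return z * math.sqrt(variance)

# Invented two-currency portfolio: two foreign-currency exposures
values = [1_000_000, 500_000]
vols = [0.006, 0.008]                 # daily return standard deviations
corr = [[1.0, 0.3], [0.3, 1.0]]

print(round(parametric_var(values, vols, corr), 2))
```

Note how diversification enters through the off-diagonal correlation terms: with corr below 1.0, portfolio VaR is less than the sum of the stand-alone VaRs.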

  10. MoleculeNet: a benchmark for molecular machine learning.

    Science.gov (United States)

    Wu, Zhenqin; Ramsundar, Bharath; Feinberg, Evan N; Gomes, Joseph; Geniesse, Caleb; Pappu, Aneesh S; Leswing, Karl; Pande, Vijay

    2018-01-14

    Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets, making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large-scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high-quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than the choice of a particular learning algorithm.

  11. Sensitivity Analysis of OECD Benchmark Tests in BISON

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
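
The correlation-based part of such a sensitivity analysis reduces to computing Pearson and Spearman coefficients between each sampled input and each response. A self-contained sketch using synthetic data (the input/response model below is invented, not the BISON benchmark):

```python
import random
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def spearman(xs, ys):
    """Spearman rank correlation: Pearson applied to the ranks (ties ignored)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(xs), ranks(ys))

# Invented sampling study: 300 samples of one input vs. a synthetic response
random.seed(0)
power = [random.uniform(15, 25) for _ in range(300)]        # linear heat rate, kW/m
temp = [900 + 40 * p + random.gauss(0, 30) for p in power]  # centerline temp, K

print(round(pearson(power, temp), 2), round(spearman(power, temp), 2))
```

A strong positive coefficient for an input flags it as a driver of that response; variance-based Sobol' indices refine this picture for nonlinear, interacting inputs.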

  12. The WHO 2016 verbal autopsy instrument: An international standard suitable for automated analysis by InterVA, InSilicoVA, and Tariff 2.0.

    Directory of Open Access Journals (Sweden)

    Erin K Nichols

    2018-01-01

    Full Text Available Verbal autopsy (VA) is a practical method for determining probable causes of death at the population level in places where systems for medical certification of cause of death are weak. VA methods suitable for use in routine settings, such as civil registration and vital statistics (CRVS) systems, have developed rapidly in the last decade. These developments have been part of a growing global momentum to strengthen CRVS systems in low-income countries. With this momentum have come pressure for continued research and development of VA methods and the need for a single standard VA instrument on which multiple automated diagnostic methods can be developed. In 2016, partners harmonized a WHO VA standard instrument that fully incorporates the indicators necessary to run currently available automated diagnostic algorithms. The WHO 2016 VA instrument, together with validated approaches to analyzing VA data, offers countries solutions for improving information about patterns of cause-specific mortality. This instrument also offers the opportunity to harmonize the automated diagnostic algorithms in the future. Despite all improvements in design and technology, VA is only recommended where medical certification of cause of death is not possible. The method can nevertheless provide sufficient information to guide public health priorities in communities in which physician certification of deaths is largely unavailable.

  13. Has the VA Become a White Elephant?

    Directory of Open Access Journals (Sweden)

    Robbins RA

    2016-11-01

    Full Text Available No abstract available. Article truncated at 150 words. As I write this, Dennis Wagner is publishing a series of articles in the Arizona Republic describing his quest to find out if care at VA hospitals has improved over the last 2 years (1). To begin the article, Wagner describes the fable of the King of Siam, who presented albino pachyderms to his enemies knowing they would be bankrupted because the cost of food and care outweighed all usefulness. A modern expression derives from this parable: the white elephant. The Department of Veterans Affairs (VA) has prided itself on being a leader in healthcare. It is the largest healthcare system in the US, implemented the first electronic medical record, and more than 70 percent of all US doctors have received training in the VA healthcare system (2). This year the VA is celebrating the 70th anniversary of its partnership with US medical schools. Beginning in 1946, the VA partnered ...

  14. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.
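    A whole-building metric of the kind this guide describes is essentially an energy use intensity (EUI). The sketch below shows only the arithmetic; the function name, field names, and numbers are illustrative assumptions, not Labs21 benchmark values.

```python
# Minimal sketch of a whole-building benchmarking metric: site energy use
# intensity (EUI). All names and numbers are illustrative, not Labs21 data.
def site_eui(annual_kwh_electric, annual_therms_gas, gross_area_ft2):
    """Return site EUI in kBtu per square foot per year."""
    # 1 kWh = 3.412 kBtu; 1 therm = 100 kBtu.
    kbtu = annual_kwh_electric * 3.412 + annual_therms_gas * 100.0
    return kbtu / gross_area_ft2

eui = site_eui(annual_kwh_electric=2_500_000,
               annual_therms_gas=40_000,
               gross_area_ft2=100_000)
print(f"Site EUI: {eui:.1f} kBtu/ft2-yr")  # -> Site EUI: 125.3 kBtu/ft2-yr
```

    A facility manager would then compare this number against the peer benchmarks in the guide's database to decide whether a deeper system-level look is warranted.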

  15. Benchmarking analysis of three multimedia models: RESRAD, MMSOILS, and MEPAS

    International Nuclear Information System (INIS)

    Cheng, J.J.; Faillace, E.R.; Gnanapragasam, E.K.

    1995-11-01

    Multimedia modelers from the United States Environmental Protection Agency (EPA) and the United States Department of Energy (DOE) collaborated to conduct a comprehensive and quantitative benchmarking analysis of three multimedia models. The three models-RESRAD (DOE), MMSOILS (EPA), and MEPAS (DOE)-represent analytically based tools that are used by the respective agencies for performing human exposure and health risk assessments. The study is performed by individuals who participate directly in the ongoing design, development, and application of the models. A list of physical/chemical/biological processes related to multimedia-based exposure and risk assessment is first presented as a basis for comparing the overall capabilities of RESRAD, MMSOILS, and MEPAS. Model design, formulation, and function are then examined by applying the models to a series of hypothetical problems. Major components of the models (e.g., atmospheric, surface water, groundwater) are evaluated separately and then studied as part of an integrated system for the assessment of a multimedia release scenario to determine effects due to linking components of the models. Seven modeling scenarios are used in the conduct of this benchmarking study: (1) direct biosphere exposure, (2) direct release to the air, (3) direct release to the vadose zone, (4) direct release to the saturated zone, (5) direct release to surface water, (6) surface water hydrology, and (7) multimedia release. Study results show that the models differ with respect to (1) environmental processes included (i.e., model features) and (2) the mathematical formulation and assumptions related to the implementation of solutions (i.e., parameterization).

  16. VaST: A variability search toolkit

    Science.gov (United States)

    Sokolovsky, K. V.; Lebedev, A. A.

    2018-01-01

    Variability Search Toolkit (VaST) is a software package designed to find variable objects in a series of sky images. It can be run from a script or interactively using its graphical interface. VaST relies on source list matching as opposed to image subtraction. SExtractor is used to generate source lists and perform aperture or PSF-fitting photometry (with PSFEx). Variability indices that characterize scatter and smoothness of a lightcurve are computed for all objects. Candidate variables are identified as objects having high variability index values compared to other objects of similar brightness. The two distinguishing features of VaST are its ability to perform accurate aperture photometry of images obtained with non-linear detectors and handle complex image distortions. The software has been successfully applied to images obtained with telescopes ranging from 0.08 to 2.5 m in diameter equipped with a variety of detectors including CCD, CMOS, MIC and photographic plates. About 1800 variable stars have been discovered with VaST. It is used as a transient detection engine in the New Milky Way (NMW) nova patrol. The code is written in C and can be easily compiled on the majority of UNIX-like systems. VaST is free software available at http://scan.sai.msu.ru/vast/.
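    The scatter-based candidate selection that VaST's variability indices implement can be sketched in a few lines. The toy catalog, noise level, injected signal, and the simple 5x-median cut below are invented for illustration and are much cruder than VaST's actual indices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy catalog: per-object lightcurves in magnitudes; object 0 is made variable.
n_obj, n_epochs = 200, 50
mags = 15 + 0.02 * rng.standard_normal((n_obj, n_epochs))
mags[0] += 0.3 * np.sin(np.linspace(0, 6 * np.pi, n_epochs))  # injected signal

# Simplest scatter-based variability index: lightcurve standard deviation,
# compared against the typical scatter of the other objects.
sigma = mags.std(axis=1)
typical = np.median(sigma)
candidates = np.where(sigma > 5 * typical)[0]
print("candidate variables:", candidates)  # the injected variable, object 0
```

    Real pipelines bin the comparison by brightness (faint stars scatter more) and combine several indices that also test lightcurve smoothness, as the abstract notes.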

  17. Technology Reference Model (TRM) Reports: VA Category Mapping Report

    Data.gov (United States)

    Department of Veterans Affairs — The One VA Enterprise Architecture (OneVA EA) is a comprehensive picture of the Department of Veterans Affairs' (VA) operations, capabilities and services and the...

  18. Visionary leadership and the future of VA health system.

    Science.gov (United States)

    Bezold, C; Mayer, E; Dighe, A

    1997-01-01

    As the U.S. Department of Veterans Affairs (VA) makes the changeover to Veterans Integrated Service Networks (VISNs), new and better leadership is needed if VA wants not only to survive but to thrive in the emerging twenty-first century healthcare system. VA can prepare for the future and meet the challenges facing it by adopting a system of visionary leadership. The use of scenarios and vision techniques is explained as they relate to VA's efforts to move toward the new system of VISNs. The four scenarios provide snapshots of possible futures for the U.S. healthcare system as well as the possible future role and mission of VA - from VA disappearing to its becoming a premier virtual organization.

  19. ESTIMASI NILAI VaR PORTOFOLIO MENGGUNAKAN FUNGSI ARCHIMEDEAN COPULA

    Directory of Open Access Journals (Sweden)

    AULIA ATIKA PRAWIBTA SUHARTO

    2017-01-01

    Full Text Available Value at Risk explains the magnitude of the worst losses that occur in financial-product investments at a certain confidence level and time interval. The purpose of this study is to estimate the VaR of a portfolio using the Archimedean copula family. The methods for calculating the VaR are as follows: (1) calculating the stock returns; (2) calculating descriptive statistics of the returns; (3) checking for autocorrelation and heteroscedasticity effects in the stock return data; (4) checking for the presence of extreme values by using the Pareto tail; (5) estimating the parameters of the Archimedean copula family; (6) conducting simulations of the Archimedean copulas; (7) estimating the value of the stock portfolio VaR. This study uses the closing prices of TLKM and GGRM. At 90%, the VaR obtained using the Clayton, Gumbel, and Frank copulas is 0.9562%, 1.0189%, and 0.9827%, respectively. At 95%, the VaR obtained using the Clayton, Gumbel, and Frank copulas is 1.2930%, 1.2522%, and 1.3152%, respectively. At 99%, the VaR obtained using the Clayton, Gumbel, and Frank copulas is 2.0327%, 1.9164%, and 1.8678%, respectively. In conclusion, estimation of VaR using the Clayton copula yields the highest VaR.
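    The quantity being estimated can be illustrated with the simplest (non-copula) estimator: historical-simulation VaR as a quantile of the return distribution. The returns below are synthetic, not the TLKM/GGRM data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative daily portfolio returns (synthetic, not actual stock data).
returns = rng.normal(0.0005, 0.01, 1000)

def historical_var(returns, confidence):
    """One-period VaR reported as a positive loss fraction: the negated
    lower-tail quantile of the empirical return distribution."""
    return -np.quantile(returns, 1.0 - confidence)

for c in (0.90, 0.95, 0.99):
    print(f"{c:.0%} one-day VaR: {historical_var(returns, c):.4%}")
```

    As in the paper's tables, VaR grows with the confidence level; the copula approach replaces the empirical joint distribution above with one built from fitted marginals and an Archimedean dependence structure.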

  20. Benchmarking local healthcare-associated infections: Available benchmarks and interpretation challenges

    Directory of Open Access Journals (Sweden)

    Aiman El-Saed

    2013-10-01

    Full Text Available Summary: Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for the data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when considering such a benchmark. The GCC center for infection control took some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons. Keywords: Benchmarking, Comparison, Surveillance, Healthcare-associated infections
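    Indirect standardization, one of the risk-adjustment methods listed above, reduces to a standardized infection ratio (SIR = observed/expected infections). The sketch below uses invented counts and benchmark rates purely for illustration.

```python
# Minimal sketch of indirect standardization for HAI benchmarking: a
# standardized infection ratio (SIR). All counts and rates are illustrative.
def expected_infections(device_days_by_stratum, benchmark_rate_per_1000):
    """Infections expected if the benchmark's stratum rates applied locally."""
    return sum(days * benchmark_rate_per_1000[s] / 1000.0
               for s, days in device_days_by_stratum.items())

local_device_days = {"ICU": 4000, "ward": 9000}
benchmark_rates = {"ICU": 2.5, "ward": 1.0}  # infections per 1000 device-days
observed = 15

expected = expected_infections(local_device_days, benchmark_rates)
sir = observed / expected
print(f"expected={expected:.1f}, SIR={sir:.2f}")  # SIR < 1: fewer than expected
```

    An SIR below 1 means the facility saw fewer infections than the benchmark population predicts for its case mix, which is exactly the adjusted comparison a crude rate cannot provide.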

  1. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  2. 38 CFR 1.203 - Information to be reported to VA Police.

    Science.gov (United States)

    2010-07-01

    ... reported to VA Police. 1.203 Section 1.203 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS... be reported to VA Police. Information about actual or possible violations of criminal laws related to... occurs on VA premises, will be reported by VA management officials to the VA police component with...

  3. SlaVaComp: Konvertierungstool (= SlaVaComp Fonts Converter)

    Directory of Open Access Journals (Sweden)

    Simon Skilevic

    2013-12-01

    Full Text Available Der vorliegende Beitrag informiert über ein Tool, das im Rahmen eines Freiburger Projekts zur historischen Korpuslinguistik entwickelt wurde und dazu dient, kirchenslavische Texte, die ohne Einsatz von Unicode digitalisiert wurden, ohne Verlust von Information bzw. Formatierung ins Unicode-Format zu überführen. Das Tool heißt SlaVaComp-Konvertierer. Es eignet sich für die Konvertierung aller idiosynkratischen Fonts und kann somit nicht nur in der Paläoslavistik, sondern in allen historisch arbeitenden Philologien eingesetzt werden. ____________________ This paper presents a fonts converter that was developed as a part of the Freiburg project on historical corpus linguistics. The tool named SlaVaComp-Konvertierer converts Church Slavonic texts digitized with non-Unicode fonts into the Unicode format without any loss of information contained in the original file and without damage to the original formatting. It is suitable for the conversion of all idiosyncratic fonts—not only Church Slavonic—and therefore can be used not only in Palaeoslavistic, but also in all historical and philological studies.

  4. Poststroke Rehabilitation and Restorative Care Utilization: A Comparison Between VA Community Living Centers and VA-contracted Community Nursing Homes.

    Science.gov (United States)

    Jia, Huanguang; Pei, Qinglin; Sullivan, Charles T; Cowper Ripley, Diane C; Wu, Samuel S; Bates, Barbara E; Vogel, W Bruce; Bidelspach, Douglas E; Wang, Xinping; Hoffman, Nannette

    2016-03-01

    Effective poststroke rehabilitation care can speed patient recovery and minimize patient functional disabilities. Veterans Affairs (VA) community living centers (CLCs) and VA-contracted community nursing homes (CNHs) are the 2 major sources of institutional long-term care for Veterans with stroke receiving care under VA auspices. This study compares rehabilitation therapy and restorative nursing care among Veterans residing in VA CLCs versus those Veterans in VA-contracted CNHs. Retrospective observational. All Veterans diagnosed with stroke, newly admitted to the CLCs or CNHs during the study period who completed at least 2 Minimum Data Set assessments postadmission. The outcomes were numbers of days of rehabilitation therapy and restorative nursing care received by the Veterans during their stays in CLCs or CNHs as documented in the Minimum Data Set databases. For rehabilitation therapy, the CLC Veterans had lower user rates (75.2% vs. 76.4%, P=0.078) and fewer observed therapy days (4.9 vs. 6.4, P<…). For restorative nursing care, CLC Veterans had higher user rates (33.5% vs. 30.6%, P<…) and more observed care days (9.4 vs. 5.9, P<…). CLC Veterans received more restorative nursing care (coefficient=5.48±0.37, P<…) both before and after risk adjustment.

  5. Whistle-blower accuses VA inspector general of a "whitewash"

    Directory of Open Access Journals (Sweden)

    Robbins RA

    2014-09-01

    Full Text Available No abstract available. Article truncated after 150 words. Yesterday, Dr. Sam Foote, the initial whistle-blower at the Phoenix VA, criticized the Department of Veterans Affairs inspector general's (VAOIG) report on delays in healthcare at the Phoenix VA at a hearing before the House Committee on Veterans' Affairs (1,2). Foote accused the VAOIG of minimizing bad patient outcomes and deliberately confusing readers, downplaying the impact of delayed health care at Phoenix VA facilities. "At its best, this report is a whitewash. At its worst, it is a feeble attempt at a cover-up," said Foote. Foote earlier this year revealed that as many as 40 Phoenix patients died while awaiting care and that the Phoenix VA maintained secret waiting lists while under-reporting patient wait times for appointments. His disclosures triggered the national VA scandal. Richard Griffin, the acting VAOIG, said that nearly 300 patients died while on backlogged wait lists in the Phoenix VA Health Care System, a much higher ...

  6. Helman defends decision to pull VA sponsorship of Veterans day parade

    Directory of Open Access Journals (Sweden)

    Robbins RA

    2013-04-01

    Full Text Available No abstract available. Article truncated after 150 words. Sharon Helman, Phoenix VA Director, defended her decision to cancel VA sponsorship of the annual Phoenix Veterans Day Parade in a 4/10/13 email to VA employees. Helman said that VA sponsorship was cancelled because of "…priorities in the organization (specifically access, and heightened awareness over liability concerns which VA Legal Counsel brought forward". She concluded her letter by warning "… that all media inquiries should be forwarded to Paul Coupaud, Acting Public Affairs Officer". VA officials initially said fear of litigation prompted the review of VA support. Last year, a float carrying wounded Veterans in a Midland, Texas, parade collided with a freight train, killing four and injuring 17. Crash victims and their families filed lawsuits in Texas against Union Pacific Railroad and the float owner. The VA was not a defendant, and the VA has not issued any national directives on liability as a result of the tragedy. In…

  7. Technology Reference Model (TRM) Reports: VA Category Framework Count Report

    Data.gov (United States)

    Department of Veterans Affairs — The One VA Enterprise Architecture (OneVA EA) is a comprehensive picture of the Department of Veterans Affairs' (VA) operations, capabilities and services and the...

  8. Benchmarking and validation activities within JEFF project

    OpenAIRE

    Cabellos O.; Alvarez-Velarde F.; Angelone M.; Diez C.J.; Dyrda J.; Fiorito L.; Fischer U.; Fleming M.; Haeck W.; Hill I.; Ichou R.; Kim D. H.; Klix A.; Kodeli I.; Leconte P.

    2017-01-01

    The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus, requires a coherent and efficient be...

  9. Development of a continuous energy version of KENO V.a

    International Nuclear Information System (INIS)

    Dunn, M.E.; Bentley, C.L.; Goluoglu, S.; Paschal, L.S.; Dodds, H.L.

    1997-01-01

    KENO V.a is a multigroup Monte Carlo code that solves the Boltzmann transport equation and is used extensively in the nuclear criticality safety community to calculate the effective multiplication factor (keff) of systems containing fissile material. Because of the smaller amount of disk storage and CPU time required in calculations, multigroup approaches have been preferred over continuous energy (point) approaches in the past to solve the transport equation. With the advent of high-performance computers, storage and CPU limitations are less restrictive, thereby making continuous energy methods viable for transport calculations. Moreover, continuous energy methods avoid many of the assumptions and approximations inherent in multigroup methods. Because a continuous energy version of KENO V.a does not exist, the objective of the work is to develop a new version of KENO V.a that utilizes continuous energy cross sections. Currently, a point cross-section library, which is based on a raw continuous energy cross-section library such as ENDF/B-V, is not available for implementation in KENO V.a; however, point cross-section libraries are available for MCNP, another widely used Monte Carlo transport code. Since MCNP cross sections are based on ENDF data and are readily available, a new version of KENO V.a named PKENO V.a has been developed that performs the random walk using MCNP cross sections. To utilize point cross sections, extensive modifications have been made to KENO V.a. At this point in the research, testing of the code is underway. In particular, PKENO V.a, KENO V.a, and MCNP have been used to model nine critical experiments and one subcritical problem. The results obtained with PKENO V.a are in excellent agreement with MCNP, KENO V.a, and experiments.
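    The core mechanic that distinguishes a pointwise library from a multigroup one is interpolation of the cross section on a continuous energy grid instead of looking up a fixed group-averaged value. A minimal sketch follows; the grid and cross-section values are invented, not ENDF data, and real libraries also use other interpolation laws (e.g., log-log).

```python
import bisect

# Hypothetical pointwise cross-section table: energy grid (eV) vs. sigma
# (barns). Values are illustrative only, not evaluated nuclear data.
energies = [1.0, 10.0, 100.0, 1000.0, 10000.0]
sigmas   = [50.0, 20.0, 8.0, 4.0, 2.0]

def sigma_at(e):
    """Linearly interpolate the cross section at energy e within the grid."""
    i = bisect.bisect_right(energies, e) - 1
    i = max(0, min(i, len(energies) - 2))      # clamp to a valid interval
    e0, e1 = energies[i], energies[i + 1]
    s0, s1 = sigmas[i], sigmas[i + 1]
    return s0 + (s1 - s0) * (e - e0) / (e1 - e0)

print(sigma_at(55.0))  # between the 10 eV and 100 eV grid points -> 14.0
```

    A multigroup code would instead return one pre-averaged sigma for the whole group containing 55 eV, which is exactly the approximation the pointwise version avoids.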

  10. Physicochemical properties of direct compression tablets with spray dried and ball milled solid dispersions of tadalafil in PVP-VA.

    Science.gov (United States)

    Wlodarski, K; Tajber, L; Sawicki, W

    2016-12-01

    The aim of this research was to develop immediate release tablets comprising a solid dispersion (IRSDTs) of tadalafil (Td) in a vinylpyrrolidone and vinyl acetate block copolymer (PVP-VA), characterized by improved dissolution profiles. The solid dispersion of Td in PVP-VA (Td/PVP-VA) in a weight ratio of 1:1 (w/w) was prepared using two different processes, i.e. spray drying and ball milling. While the former process has been well established in the formulation of IRSDTs, the latter has not been exploited in these systems yet. Regardless of the preparation method, both Td/PVP-VA solid dispersions were amorphous, as confirmed by PXRD, DSC and FTIR. However, different particle morphology (SEM) resulted in differences in apparent water solubility and disk intrinsic dissolution rate (DIDR). Both solid dispersions and crystalline Td were successfully made into directly compressible tablets at three doses of Td, i.e. 2.5 mg, 10 mg and 20 mg, yielding nine different formulations (D1-D9). Each of the lots met the requirements set by Ph.Eur. and was evaluated with respect to appearance, diameter, thickness, mass, hardness, friability, disintegration time and content of Td. IRSDTs performed as supersaturable formulations and had significantly improved water dissolution profiles in comparison with equivalent tablets containing crystalline Td and the marketed formulations. Tablets with spray dried and ball milled Td/PVP-VA revealed the greatest improvement in dissolution at the doses of 2.5 mg and 20 mg, respectively. Also, dissolution of Td from Td/PVP-VA delivered in different forms occurred in the following order: powders > tablets > capsules. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Development of a parallelization method for KENO V.a

    International Nuclear Information System (INIS)

    Basoglu, B.; Bentley, C.; Dunn, M.

    1995-01-01

    KENO V.a is a widely used Monte Carlo code that is part of the SCALE modular code system for performing standardized computer analysis of nuclear systems for licensing evaluation. In the past few years, attempts have been made to speed up KENO V.a using new-generation computers. In this paper we report on the initial development of a parallel version of KENO V.a for the Kendall Square Research supercomputer (KSR1) at ORNL. Investigations thus far have shown that the parallel code provides accurate results with significantly reduced computation times relative to the conventional KENO V.a code.

  12. Benchmark testing of CENDL-2.1 for heavy water reactor

    International Nuclear Information System (INIS)

    Liu Ping

    1999-01-01

    The new version of the evaluated nuclear data library, ENDF/B-VI.5, has been released recently. In order to compare the quality of the evaluated nuclear data in CENDL-2.1 with ENDF/B-VI.5, it is necessary to do benchmark testing for them. In this work, CENDL-2.1 and ENDF/B-VI.5 were used to generate WIMS 69-group libraries respectively, and benchmark testing was done for the heavy water reactor using the WIMS5A code. The data files of CENDL-2.1 are clearly better than those of the old WIMS library for heavy water reactor calculations, and are in good agreement with those of ENDF/B-VI.5.

  13. Analysis of VaR on Stock Investing%股票投资的风险价值VaR分析

    Institute of Scientific and Technical Information of China (English)

    张江红; 唐泉

    2011-01-01

    VaR is a tool for measuring financial risk that has gained broad support and recognition in the international financial community in recent years. For equity portfolios composed of different market factors or different financial instruments, VaR can reliably evaluate the market risks. In the paper, the basic principle and calculation of VaR are introduced, and the value at risk of the stock of a listed company that has issued both convertible bonds and stock is analyzed using the normal method. The effect of the convertible bond issuance on the volatility of the underlying stock is taken into account, in order to provide a reference for different types of investors before they invest in the capital market.%VaR是近年来受到国际金融界广泛支持和认可的一种度量金融风险的工具.对于不同市场因子和不同金融工具的投资组合,VaR可以相对可靠地衡量其市场风险.本文介绍了VaR的基本原理和计算方法,并用正态分布对发行有可转债的上市公司股票的风险价值进行分析,考虑了可转债的发行对标的股票波动的影响,以期为资本市场不同类型的投资者在进行资本投资前估计风险提供参考.
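    The "normal method" referred to here (delta-normal, or variance-covariance, VaR) can be sketched in a few lines. The position size, volatility, and z-score below are illustrative assumptions, not values from the paper.

```python
# Delta-normal (variance-covariance) VaR sketch; all inputs are illustrative.
def normal_var(value, mu, sigma, z):
    """One-period VaR (a positive loss amount) for a position of the given
    value, assuming normally distributed returns with mean mu and sd sigma."""
    return value * (z * sigma - mu)

# 95% one-day VaR for a 1,000,000 position with 2% daily volatility.
z95 = 1.645  # one-sided 95% standard-normal quantile
var = normal_var(value=1_000_000, mu=0.0, sigma=0.02, z=z95)
print(f"95% one-day VaR: {var:,.0f}")
```

    Handling a stock whose volatility is affected by a convertible-bond issue, as the paper does, amounts to adjusting the sigma fed into this formula.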

  14. Criticality Benchmark Results Using Various MCNP Data Libraries

    International Nuclear Information System (INIS)

    Frankle, Stephanie C.

    1999-01-01

    A suite of 86 criticality benchmarks has recently been implemented in MCNP as part of the nuclear data validation effort. These benchmarks have been run using two sets of MCNP continuous-energy neutron data: ENDF/B-VI based data through Release 2 (ENDF60) and the ENDF/B-V based data. New evaluations were completed for ENDF/B-VI for a number of the important nuclides such as the isotopes of H, Be, C, N, O, Fe, Ni, U-235, U-238, Np-237, Pu-239, and Pu-240. When examining the results of these calculations for the five major categories of U-233, intermediate-enriched U-235 (IEU), highly enriched U-235 (HEU), Pu-239, and mixed metal assemblies, we find the following: (1) The new evaluations for Be-9, C-12, and N-14 show no net effect on keff; (2) There is a consistent decrease in keff for all of the solution assemblies for ENDF/B-VI due to H-1 and O-16, moving keff further from the benchmark value for uranium solutions and closer to the benchmark value for plutonium solutions; (3) keff decreased for the ENDF/B-VI Fe isotopic data, moving the calculated keff further from the benchmark value; (4) keff decreased for the ENDF/B-VI Ni isotopic data, moving the calculated keff closer to the benchmark value; (5) The W data remained unchanged and tended to calculate slightly higher than the benchmark values; (6) For metal uranium systems, the ENDF/B-VI data for U-235 tend to decrease keff while the U-238 data tend to increase keff. The net result depends on the energy spectrum and material specifications for the particular assembly; (7) For more intermediate-energy systems, the changes in the U-235 and U-238 evaluations tend to increase keff. For the mixed graphite and normal uranium-reflected assembly, a large increase in keff due to changes in the U-238 evaluation moved the calculated keff much closer to the benchmark value; (8) There is little change in keff for the uranium solutions due to the new U-235 and U-238 evaluations; and (9) There is little change in keff

  15. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    Full Text Available In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking application in HEIs worldwide. The study involves indicating premises for using benchmarking in HEIs. It also contains a detailed examination of the types, approaches, and scope of benchmarking initiatives. The thorough insight into benchmarking applications enabled developing a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support, and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in a higher education setting. The study was performed on the basis of published reports from benchmarking projects, scientific literature, and the author's experience from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  16. The VA mission act: Funding to fail?

    Directory of Open Access Journals (Sweden)

    Robbins RA

    2018-06-01

    Full Text Available No abstract available. Article truncated after 150 words. Yesterday on D-Day, the 74th anniversary of the invasion of Normandy, President Trump signed the VA Mission Act. The law directs the VA to combine a number of existing private-care programs, including the so-called Choice program, which was created in 2014 after veterans died waiting for appointments at the Phoenix VA (1). During the signing Trump touted the new law, saying "there has never been anything like this in the history of the VA" and saying that veterans "can go right outside [the VA] to a private doctor" - but can they? Although the bill authorizes private care, it appropriates no money to pay for it. Although a bipartisan plan to fund the expansion has been proposed in the House, the White House has been lobbying Republicans to vote the plan down (2). Instead Trump has been asking Congress to pay for veterans' programs by cutting spending elsewhere (2). We in Arizona are …

  17. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    Full Text Available The term benchmarking is encountered in the implementation of total quality management (TQM), or in Indonesian terms, holistic quality management, because benchmarking is a tool for finding ideas and learning from other libraries. Benchmarking is a process of systematic and continuous measurement: measuring and comparing an organization's business processes in order to obtain information that can help the organization improve its performance.

  18. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1.1 with MCNP6

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2012-01-01

    Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for Li-6, Li-7, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series. Such

  19. Building capacity in VA to provide emergency gynecology services for women.

    Science.gov (United States)

    Cordasco, Kristina M; Huynh, Alexis K; Zephyrin, Laurie; Hamilton, Alison B; Lau-Herzberg, Amy E; Kessler, Chad S; Yano, Elizabeth M

    2015-04-01

    Visits to Veterans Administration (VA) emergency departments (EDs) are increasingly being made by women. A 2011 national inventory of VA emergency services for women revealed that many EDs have gaps in their resources and processes for gynecologic emergency care. To guide VA in addressing these gaps, we sought to understand factors acting as facilitators and/or barriers to improving VA ED capacity for, and quality of, emergency gynecology care. Semistructured interviews were conducted with VA emergency and women's health key informants: ED directors/providers (n=14), ED nurse managers (n=13), and Women Veteran Program Managers (n=13) in 13 VA facilities. Leadership, staff, space, demand, funding, policies, and community were noted as important factors influencing VA EDs in building capacity and improving emergency gynecologic care for women Veterans. These factors are intertwined and cross multiple organizational levels, so that each ED's capacity is a reflection not only of its own factors, but also those of its local medical center and non-VA community context as well as VA regional and national trends and policies. Policies and quality improvement initiatives aimed at building VA's emergency gynecologic services for women need to be multifactorial and aimed at multiple organizational levels. Policies need to be flexible to account for wide variations across EDs and their medical center and community contexts. Approaches that build and encourage local leadership engagement, such as evidence-based quality improvement methodology, are likely to be most effective.

  20. FENDL neutronics benchmark: Specifications for the calculational neutronics and shielding benchmark

    International Nuclear Information System (INIS)

    Sawan, M.E.

    1994-12-01

    During the IAEA Advisory Group Meeting on ''Improved Evaluations and Integral Data Testing for FENDL'' held in Garching near Munich, Germany in the period 12-16 September 1994, the Working Group II on ''Experimental and Calculational Benchmarks on Fusion Neutronics for ITER'' recommended that a calculational benchmark representative of the ITER design should be developed. This report describes the neutronics and shielding calculational benchmark available for scientists interested in performing analysis for this benchmark. (author)

  1. 78 FR 56271 - FY 2014-2020 Draft VA Strategic Plan

    Science.gov (United States)

    2013-09-12

    ... DEPARTMENT OF VETERANS AFFAIRS FY 2014-2020 Draft VA Strategic Plan AGENCY: Department of Veterans... Affairs (VA) is announcing the availability of the FY 2014-2020 Draft VA Strategic Plan (Strategic Plan... Act of 2010 (GPRAMA) (Pub. L. 111-352). The Strategic Plan provides the Department's long-term...

  2. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  3. Benchmarking in the Netherlands

    International Nuclear Information System (INIS)

    1999-01-01

    In two articles an overview is given of the activities in the Dutch industry and energy sector with respect to benchmarking. In benchmarking, the operational processes of competing businesses are compared in order to improve one's own performance. Benchmark covenants for energy efficiency between the Dutch government and industrial sectors contribute to a growth in the number of benchmark surveys in the energy-intensive industry in the Netherlands. However, some doubt the effectiveness of the benchmark studies

  4. Troubles continue for the Phoenix VA

    Directory of Open Access Journals (Sweden)

    Robbins RA

    2014-10-01

    Full Text Available No abstract available. Article truncated after 150 words. According to the Joint Commission on the Accreditation of Healthcare Organizations (Joint Commission, JCAHO), an independent organization that reviews hospitals, the Phoenix VA does not comply with U.S. standards for safety, patient care and management (1). The hospital was at the epicenter of the national scandal over the quality of care being afforded to the nation's veterans, where the now notorious practice of double-booking patient appointments was first exposed. The hospital's indifferent management provoked congressional investigations that uncovered still more system-wide abuses, leading to the removal of the hospital director and the resignation of then VA secretary, Eric Shinseki. The hospital maintains its accreditation but with a follow-up survey in 1-6 months, where it must show that it has successfully addressed the 13 identified problems (1). Inspectors who conducted the review in July found that VA employees were unable to report concerns "without retaliatory action from the hospital." Other alarming ...

  5. Direct data access protocols benchmarking on DPM

    Science.gov (United States)

    Furano, Fabrizio; Devresse, Adrien; Keeble, Oliver; Mancinelli, Valentina

    2015-12-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in recent years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring information about any data access protocol to the same monitoring infrastructure that is used to monitor the Xrootd deployments. Our goal is to evaluate under which circumstances the HTTP-based protocols can be good enough for batch or interactive data access. In this contribution we show and discuss the results that our test systems have collected under circumstances that include ROOT analyses using TTreeCache and stress tests on the metadata performance.
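
    The continuous testing described above amounts to timing the same operation against each frontend under identical conditions and comparing latency distributions. A generic sketch of such a harness; the `http_read`/`xrootd_read` operations mentioned in the comments are hypothetical stand-ins, not DPM APIs:

```python
import statistics
import time

def benchmark(op, n=100):
    """Time n invocations of op() and return latency statistics in seconds."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        op()
        samples.append(time.perf_counter() - t0)
    samples.sort()
    return {
        "mean": statistics.mean(samples),
        "p50": samples[len(samples) // 2],
        "p99": samples[max(int(n * 0.99) - 1, 0)],
    }

# In a real comparison one would pass the two read operations, e.g.
# benchmark(lambda: http_read(url)) vs. benchmark(lambda: xrootd_read(url));
# http_read/xrootd_read are hypothetical stand-ins, so a no-op is timed here.
stats = benchmark(lambda: None, n=50)
print(stats)
```

    Comparing tail percentiles (p99) rather than the mean alone is what distinguishes protocols that degrade under metadata stress from those that merely differ in average latency.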

  6. 38 CFR 26.7 - VA environmental decision making and documents.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false VA environmental decision making and documents. 26.7 Section 26.7 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS (CONTINUED) ENVIRONMENTAL EFFECTS OF THE DEPARTMENT OF VETERANS AFFAIRS (VA) ACTIONS § 26.7 VA environmental decision making and document...

  7. Benchmarking electricity distribution

    Energy Technology Data Exchange (ETDEWEB)

    Watts, K. [Department of Justice and Attorney-General, QLD (Australia)

    1995-12-31

    Benchmarking has been described as a method of continuous improvement that involves an ongoing and systematic evaluation and incorporation of external products, services and processes recognised as representing best practice. It is a management tool similar to total quality management (TQM) and business process re-engineering (BPR), and is best used as part of a total package. This paper discusses benchmarking models and approaches and suggests a few key performance indicators that could be applied to benchmarking electricity distribution utilities. Some recent benchmarking studies are used as examples and briefly discussed. It is concluded that benchmarking is a strong tool to be added to the range of techniques that can be used by electricity distribution utilities and other organizations in search of continuous improvement, and that there is now a high level of interest in Australia. Benchmarking represents an opportunity for organizations to approach learning from others in a disciplined and highly productive way, which will complement the other micro-economic reforms being implemented in Australia. (author). 26 refs.

  8. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  9. RUNE benchmarks

    DEFF Research Database (Denmark)

    Peña, Alfredo

    This report contains the description of a number of benchmarks with the purpose of evaluating flow models for near-shore wind resource estimation. The benchmarks are designed based on the comprehensive database of observations that the RUNE coastal experiment established from onshore lidar...

  10. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.

    1991-01-01

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems

  11. 38 CFR 74.27 - How will VA store information?

    Science.gov (United States)

    2010-07-01

    ... (CONTINUED) VETERANS SMALL BUSINESS REGULATIONS Records Management § 74.27 How will VA store information? VA... examination visits will be scanned onto portable media and fully secured in the Center for Veterans Enterprise...

  12. Medical Student Psychiatry Examination Performance at VA and Non-VA Clerkship Sites

    Science.gov (United States)

    Tucker, Phebe; von Schlageter, Margo Shultes; Park, EunMi; Rosenberg, Emily; Benjamin, Ashley B.; Nawar, Ola

    2009-01-01

    Objective: The authors examined the effects of medical student assignment to U.S. Department of Veterans Affairs (VA) Medical Center inpatient and outpatient psychiatry clerkship sites versus other university and community sites on the performance outcome measure of National Board of Medical Examiners (NBME) subject examination scores. Methods:…

  13. 48 CFR 852.219-71 - VA mentor-protégé program.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false VA mentor-protégé....219-71 VA mentor-protégé program. As prescribed in 819.7115(a), insert the following clause: VA Mentor-Protégé Program (DEC 2009) (a) Large businesses are encouraged to participate in the VA Mentor-Protégé...

  14. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms; efficiency and comprehensive monotonicity characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...... in order to obtain a unique selection...

  15. What does Shulkin's firing mean for the VA?

    Directory of Open Access Journals (Sweden)

    Robbins RA

    2018-03-01

    Full Text Available No abstract available. Article truncated at 150 words. David Shulkin MD, Secretary for Veterans Affairs (VA), was finally fired by President Donald Trump, ending long speculation (1). Trump nominated his personal physician, Ronny Jackson MD, to fill Shulkin’s post. The day after his firing, Shulkin criticized his dismissal in a NY Times op-ed, claiming pro-privatization factions within the Trump administration led to his ouster (2). “They saw me as an obstacle to privatization who had to be removed,” Dr. Shulkin wrote. “That is because I am convinced that privatization is a political issue aimed at rewarding select people and companies with profits, even if it undermines care for veterans.” Former Secretary Shulkin’s tenure at the VA has had several controversies. First, as undersecretary of Veterans Healthcare and later as secretary, money appropriated to the VA to obtain private care under the Veterans Access, Choice, and Accountability Act of 2014 and the VA Choice and Quality Employment Act of …

  16. Empirical analysis on future-cash arbitrage risk with portfolio VaR

    Science.gov (United States)

    Chen, Rongda; Li, Cong; Wang, Weijin; Wang, Ze

    2014-03-01

    This paper constructs a positive arbitrage position by substituting a Chinese Exchange Traded Fund (ETF) portfolio for the spot index and estimating the arbitrage-free interval of futures with the latest trade data. Then, an improved Delta-normal method was used, which replaces the simple linear correlation coefficient with a tail dependence correlation coefficient, to measure the VaR (Value-at-Risk) of the arbitrage position. Analysis of the VaR implies that the risk of future-cash arbitrage is less than that of investing completely in either the futures or the spot market. Then, according to the component VaR and the marginal VaR, we should increase the futures position and decrease the spot position appropriately to minimize the VaR, thereby minimizing risk subject to a given level of revenue.
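
    For a two-asset futures/spot position, the Delta-normal VaR referenced above has a simple closed form. The sketch below uses the plain linear correlation coefficient that the paper's improved method replaces with a tail-dependence coefficient; all positions and volatilities are illustrative assumptions, not the paper's data:

```python
import math

def delta_normal_var(w_f, w_s, sigma_f, sigma_s, rho, z=1.645):
    """Delta-normal VaR of a two-asset (futures + spot) position.

    w_f, w_s: position values (negative = short); sigma_f, sigma_s: return
    volatilities; rho: dependence coefficient -- the paper substitutes a
    tail-dependence coefficient for the plain linear correlation used here;
    z: standard normal quantile (1.645 is roughly the 95% level).
    """
    variance = ((w_f * sigma_f) ** 2 + (w_s * sigma_s) ** 2
                + 2.0 * rho * w_f * sigma_f * w_s * sigma_s)
    return z * math.sqrt(variance)

# A short-futures / long-spot arbitrage position has offsetting risks:
# with strongly positive dependence, the combined VaR is far below the
# VaR of holding either leg alone (numbers are illustrative only).
combined = delta_normal_var(-1.0e6, 1.0e6, 0.02, 0.018, 0.9)
alone = delta_normal_var(-1.0e6, 0.0, 0.02, 0.018, 0.9)
print(combined < alone)  # the hedged position carries less risk
```

    The cross term is where the dependence model matters: understating tail dependence overstates the hedging benefit, which is exactly the weakness the tail-dependence substitution addresses.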

  17. Benchmarking school nursing practice: the North West Regional Benchmarking Group

    OpenAIRE

    Littler, Nadine; Mullen, Margaret; Beckett, Helen; Freshney, Alice; Pinder, Lynn

    2016-01-01

    It is essential that the quality of care is reviewed regularly through robust processes such as benchmarking to ensure all outcomes and resources are evidence-based so that children and young people’s needs are met effectively. This article provides an example of the use of benchmarking in school nursing practice. Benchmarking has been defined as a process for finding, adapting and applying best practices (Camp, 1994). This concept was first adopted in the 1970s ‘from industry where it was us...

  18. Patient deaths blamed on long waits at the Phoenix VA

    Directory of Open Access Journals (Sweden)

    Robbins RA

    2014-04-01

    Full Text Available No abstract available. Article truncated at 150 words. This morning the lead article in the Arizona Republic was a report blaming as many as 40 deaths at the Phoenix VA on long waits (1). Yesterday, Rep. Jeff Miller, the chairman of the House Committee on Veterans Affairs, held a hearing titled “A Continued Assessment of Delays in VA Medical Care and Preventable Veteran Deaths.” “It appears as though there could be as many as 40 veterans whose deaths could be related to delays in care,” Miller announced to a stunned audience. The committee has spent months investigating patient-care scandals and allegations at VA facilities in Pittsburgh, Atlanta, Miami and other cities. Miller said that dozens of VA hospital patients in Phoenix may have died while awaiting medical care. He went on to say that staff investigators have evidence that the Phoenix VA Health Care System keeps two sets of records to conceal prolonged waits that patients must endure for ...

  19. 77 FR 67063 - VA Directive 0005 on Scientific Integrity

    Science.gov (United States)

    2012-11-08

    ... in multiple areas, including data integrity, ethics, privacy, and human research protections, as well... replace the Association for the Accreditation of Human Research Protection Programs (AAHRPP) with Alion... human research protection programs. VA Response: VA is currently reviewing its accreditation...

  20. 75 FR 9277 - Proposed Information Collection (VA National Rehabilitation Special Events, Event Registration...

    Science.gov (United States)

    2010-03-01

    ... Sports Clinic Application, VA Form 0924--233 hours. b. National Veterans Wheelchair Games Application, VA.... National Veterans TEE Tournament Application, VA Form 0927--133 hours. e. National Veterans Summer Sports... Form 0929--67 hours. OMB Control Number: 2900-New (VA Form 0924). Type of Review: Existing collection...

  1. VA National Bed Control System

    Data.gov (United States)

    Department of Veterans Affairs — The VA National Bed Control System records the levels of operating, unavailable and authorized beds at each VAMC, and it tracks requests for changes in these levels....

  2. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence technical efficiency.

  3. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of
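
    One code-verification approach named above, the method of manufactured solutions, can be illustrated generically: pick an exact solution, derive its source term analytically, and check that the solver converges at its theoretical order. A hedged 1-D Poisson example, not one of the benchmarks discussed in the report:

```python
import math

def solve_poisson(n):
    """Solve -u'' = f on (0,1) with u(0)=u(1)=0 on n interior points,
    where f is manufactured from the chosen exact solution u = sin(pi x).
    Returns the max-norm error against the manufactured solution."""
    h = 1.0 / (n + 1)
    x = [(i + 1) * h for i in range(n)]
    d = [math.pi ** 2 * math.sin(math.pi * xi) * h * h for xi in x]  # f*h^2
    # Thomas algorithm for the tridiagonal stencil (-1, 2, -1)
    a, b, c = [-1.0] * n, [2.0] * n, [-1.0] * n
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return max(abs(ui - math.sin(math.pi * xi)) for ui, xi in zip(u, x))

# Halving h should cut the error ~4x for this second-order scheme.
e1, e2 = solve_poisson(20), solve_poisson(41)
print(e1 / e2)  # ratio near 4 indicates the expected convergence order
```

    A manufactured solution exercises the discretization without needing experimental data, which is why it belongs to code verification rather than validation.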

  4. VA Telemedicine: An Analysis of Cost and Time Savings.

    Science.gov (United States)

    Russo, Jack E; McCool, Ryan R; Davies, Louise

    2016-03-01

    The Veterans Affairs (VA) healthcare system provides beneficiary travel reimbursement ("travel pay") to qualifying patients for traveling to appointments. Travel pay is a large expense for the VA and hence the U.S. Government, projected to cost nearly $1 billion in 2015. Telemedicine in the VA system has the potential to save money by reducing patient travel and thus the amount of travel pay disbursed. In this study, we quantify this savings and also report trends in VA telemedicine volumes over time. All telemedicine visits based at the VA Hospital in White River Junction, VT between 2005 and 2013 were reviewed (5,695 visits). Travel distance and time saved as a result of telemedicine were calculated. Clinical volume in the mental health department, which has had the longest participation in telemedicine, was analyzed. Telemedicine resulted in an average travel savings of 145 miles and 142 min per visit. This led to an average travel payment savings of $18,555 per year. Telemedicine volume grew significantly over the study period such that by the final year the travel pay savings had increased to $63,804, or about 3.5% of the total travel pay disbursement for that year. The number of mental health telemedicine visits rose over the study period but remained small relative to the number of face-to-face visits. A higher proportion of telemedicine visits involved new patients. Telemedicine at the VA saves travel distance and time, although the reduction in travel payments remains modest at current telemedicine volumes.

  5. VA Dental Insurance Program--federalism. Direct final rule.

    Science.gov (United States)

    2013-10-22

    The Department of Veterans Affairs (VA) is taking direct final action to amend its regulations related to the VA Dental Insurance Program (VADIP), a pilot program to offer premium-based dental insurance to enrolled veterans and certain survivors and dependents of veterans. Specifically, this rule will add language to clarify the limited preemptive effect of certain criteria in the VADIP regulations.

  6. Comparing VA and private sector healthcare costs for end-stage renal disease.

    Science.gov (United States)

    Hynes, Denise M; Stroupe, Kevin T; Fischer, Michael J; Reda, Domenic J; Manning, Willard; Browning, Margaret M; Huo, Zhiping; Saban, Karen; Kaufman, James S

    2012-02-01

    Healthcare for end-stage renal disease (ESRD) is intensive, expensive, and provided in both the public and private sector. Using a societal perspective, we examined healthcare costs and health outcomes for Department of Veterans Affairs (VA) ESRD patients, comparing those who received hemodialysis care at VA versus private sector facilities. Dialysis patients were recruited from 8 VA medical centers from 2001 through 2003 and followed for 12 months in a prospective cohort study. Patient demographics, clinical characteristics, quality of life, healthcare use, and cost data were collected. Healthcare data included utilization (VA), claims (Medicare), and patient self-report. Costs included VA calculated costs, Medicare dialysis facility reports and reimbursement rates, and patient self-report. Multivariable regression was used to compare costs between patients receiving dialysis at VA versus private sector facilities. The cohort comprised 334 patients: 170 patients in the VA dialysis group and 164 patients in the private sector group. The VA dialysis group had more comorbidities at baseline, more outpatient and emergency visits, more prescriptions, and longer hospital stays; they also had more conservative anemia management and a lower baseline urea reduction ratio (67% vs. 72%) than the private sector dialysis group. Understanding the differences between VA and private sector settings is critical in informing health policy options for patients with complex chronic illnesses such as ESRD.

  7. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  8. VA INFORMATION SYSTEMS: Computer Security Weaknesses Persist at the Veterans Health Administration

    National Research Council Canada - National Science Library

    2000-01-01

    .... To determine the status of computer security within VHA, we (1) evaluated information system general controls at the VA Maryland Health Care System, the New Mexico VA Health Care System, and the VA North Texas Health Care System and (2...

  9. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  10. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views...... are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  11. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Full Text Available Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan’s Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how the project’s systematic implementation led to success.

  12. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  13. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  15. Sensitivity analysis and benchmarking of the BLT low-level waste source term code

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1993-07-01

    To evaluate the source term for low-level waste disposal, a comprehensive model had been developed and incorporated into a computer code called BLT (Breach-Leach-Transport). Since the release of the original version, many new features and improvements have also been added to the Leach model of the code. This report consists of two different studies based on the new version of the BLT code: (1) a series of verification/sensitivity tests; and (2) benchmarking of the BLT code using field data. Based on the results of the verification/sensitivity tests, the authors concluded that the new version represents a significant improvement and it is capable of providing more realistic simulations of the leaching process. Benchmarking work was carried out to provide a reasonable level of confidence in the model predictions. In this study, the experimentally measured release curves for nitrate, technetium-99 and tritium from the saltstone lysimeters operated by Savannah River Laboratory were used. The model results are observed to be in general agreement with the experimental data, within the acceptable limits of uncertainty

  16. Benchmark of the HDR E11.2 containment hydrogen mixing experiment using the MAAP4 code

    International Nuclear Information System (INIS)

    Lee, Sung Jin; Paik, Chan Y.; Henry, R.E.

    1997-01-01

    The MAAP4 code was benchmarked against the hydrogen mixing experiment in a full-size nuclear reactor containment. This particular experiment, designated as E11.2, simulated a small loss-of-coolant-accident steam blowdown into the containment followed by the release of a hydrogen-helium gas mixture. It also incorporated external spray cooling of the steel dome near the end of the transient. Specifically, the objective of this benchmark was to demonstrate that MAAP4, using subnodal physics, can predict an observed gas stratification in the containment

  17. Benchmarking and the laboratory

    Science.gov (United States)

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112

  18. Benchmarking for Higher Education.

    Science.gov (United States)

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  19. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  20. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the
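The code-verification practice these two records describe, checking a discretization against a manufactured or classical analytical solution, reduces in its simplest form to measuring the observed order of accuracy from errors on two grids. The sketch below is illustrative only (a second-order central difference checked against a known exact answer), not an example from the paper:

```python
import math

def observed_order(err_coarse: float, err_fine: float, refinement: float) -> float:
    """Observed order of accuracy p = ln(e_coarse/e_fine) / ln(r)
    for a grid refinement ratio r."""
    return math.log(err_coarse / err_fine) / math.log(refinement)

def second_deriv_error(h: float) -> float:
    """Error of the central-difference second derivative of sin(x) at x = 1.
    The exact answer, -sin(1), plays the role of the analytical solution."""
    x = 1.0
    approx = (math.sin(x + h) - 2.0 * math.sin(x) + math.sin(x - h)) / h**2
    return abs(approx - (-math.sin(x)))

# Halving h should reduce the error ~4x for a second-order scheme,
# so the observed order should be close to the formal order of 2.
p = observed_order(second_deriv_error(0.1), second_deriv_error(0.05), 2.0)
```

A code-verification benchmark of this kind passes when the observed order matches the formal order of the scheme to within a stated tolerance.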

  1. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking applications. The present study analyses voluntary benchmarking in a public setting that is oriented towards learning. The study contributes by showing how benchmarking can be mobilised for learning and offers evidence of the effects of such benchmarking for performance outcomes. It concludes that benchmarking can enable learning in public settings but that this requires actors to invest in ensuring that benchmark data are directed towards improvement.

  2. Job satisfaction and burnout among VA and community mental health workers.

    Science.gov (United States)

    Salyers, Michelle P; Rollins, Angela L; Kelly, Yu-Fan; Lysaker, Paul H; Williams, Jane R

    2013-03-01

    Building on two independent studies, we compared burnout and job satisfaction of 66 VA staff and 86 community mental health center staff in the same city. VA staff reported significantly greater job satisfaction and accomplishment, less emotional exhaustion and lower likelihood of leaving their job. Sources of work satisfaction were similar (primarily working with clients, helping/witnessing change). VA staff reported fewer challenges with job-related aspects (e.g. flexibility, pay) but more challenges with administration. Community mental health administrators and policymakers may need to address job-related concerns (e.g. pay) whereas VA administrators may focus on reducing, and helping workers navigate, administrative policies.

  3. 48 CFR 852.219-9 - VA Small business subcontracting plan minimum requirements.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false VA Small business... Provisions and Clauses 852.219-9 VA Small business subcontracting plan minimum requirements. As prescribed in subpart 819.709, insert the following clause: VA Small Business Subcontracting Plan Minimum Requirements...

  4. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  5. A CFD benchmarking exercise based on flow mixing in a T-junction

    Energy Technology Data Exchange (ETDEWEB)

    Smith, B.L., E-mail: brian.smith@psi.ch [Thermal Hydraulics Laboratory, Nuclear Energy and Safety Department, Paul Scherrer Institut, CH-5232 Villigen PSI (Switzerland); Mahaffy, J.H. [Wheelsmith Farm, Spring Mill, PA (United States); Angele, K. [Vattenfall R and D, Älvkarleby (Sweden)

    2013-11-15

    The paper describes an international benchmarking exercise, sponsored by the OECD Nuclear Energy Agency (NEA), aimed at testing the ability of state-of-the-art computational fluid dynamics (CFD) codes to predict the important fluid flow parameters affecting high-cycle thermal fatigue induced by turbulent mixing in T-junctions. The results from numerical simulations are compared to measured data from an experiment performed at 1:2 scale by Vattenfall Research and Development, Älvkarleby, Sweden. The test data were released only at the end of the exercise making this a truly blind CFD-validation benchmark. Details of the organizational procedures, the experimental set-up and instrumentation, the different modeling approaches adopted, synthesis of results, and overall conclusions and perspectives are presented.

  6. Benchmarking reference services: an introduction.

    Science.gov (United States)

    Marshall, J G; Buchanan, H S

    1995-01-01

    Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.

  7. Identifying Homelessness among Veterans Using VA Administrative Data: Opportunities to Expand Detection Criteria.

    Directory of Open Access Journals (Sweden)

    Rachel Peterson

    Full Text Available Researchers at the U.S. Department of Veterans Affairs (VA) have used administrative criteria to identify homelessness among U.S. Veterans. Our objective was to explore the use of these codes in VA health care facilities. We examined VA health records (2002-2012) of Veterans recently separated from the military and identified as homeless using VA conventional identification criteria (ICD-9-CM code V60.0, VA specific codes for homeless services), plus closely allied V60 codes indicating housing instability. Logistic regression analyses examined differences between Veterans who received these codes. Health care services and co-morbidities were analyzed in the 90 days post-identification of homelessness. VA conventional criteria identified 21,021 homeless Veterans from Operations Enduring Freedom, Iraqi Freedom, and New Dawn (rate 2.5%). Adding allied V60 codes increased that to 31,260 (rate 3.3%). While certain demographic differences were noted, Veterans identified as homeless using conventional or allied codes were similar with regards to utilization of homeless, mental health, and substance abuse services, as well as co-morbidities. Differences were noted in the pattern of usage of homelessness-related diagnostic codes in VA facilities nation-wide. Creating an official VA case definition for homelessness, which would include additional ICD-9-CM and other administrative codes for VA homeless services, would likely allow improved identification of homeless and at-risk Veterans. This also presents an opportunity for encouraging uniformity in applying these codes in VA facilities nationwide as well as in other large health care organizations.

  8. Identifying Homelessness among Veterans Using VA Administrative Data: Opportunities to Expand Detection Criteria

    Science.gov (United States)

    Peterson, Rachel; Gundlapalli, Adi V.; Metraux, Stephen; Carter, Marjorie E.; Palmer, Miland; Redd, Andrew; Samore, Matthew H.; Fargo, Jamison D.

    2015-01-01

    Researchers at the U.S. Department of Veterans Affairs (VA) have used administrative criteria to identify homelessness among U.S. Veterans. Our objective was to explore the use of these codes in VA health care facilities. We examined VA health records (2002-2012) of Veterans recently separated from the military and identified as homeless using VA conventional identification criteria (ICD-9-CM code V60.0, VA specific codes for homeless services), plus closely allied V60 codes indicating housing instability. Logistic regression analyses examined differences between Veterans who received these codes. Health care services and co-morbidities were analyzed in the 90 days post-identification of homelessness. VA conventional criteria identified 21,021 homeless Veterans from Operations Enduring Freedom, Iraqi Freedom, and New Dawn (rate 2.5%). Adding allied V60 codes increased that to 31,260 (rate 3.3%). While certain demographic differences were noted, Veterans identified as homeless using conventional or allied codes were similar with regards to utilization of homeless, mental health, and substance abuse services, as well as co-morbidities. Differences were noted in the pattern of usage of homelessness-related diagnostic codes in VA facilities nation-wide. Creating an official VA case definition for homelessness, which would include additional ICD-9-CM and other administrative codes for VA homeless services, would likely allow improved identification of homeless and at-risk Veterans. This also presents an opportunity for encouraging uniformity in applying these codes in VA facilities nationwide as well as in other large health care organizations. PMID:26172386
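The two detection strategies compared in these records (conventional criteria versus conventional plus allied V60 codes) amount to set-membership tests over a record's diagnostic codes. In the sketch below, V60.0 is the conventional code named in the abstract, but the specific allied codes and all helper names are hypothetical, not the study's official expanded set:

```python
# V60.0 is the conventional ICD-9-CM homelessness code cited in the abstract.
# The allied housing-instability codes below are illustrative placeholders,
# NOT the official expanded set used by the study.
CONVENTIONAL = {"V60.0"}
ALLIED_V60 = {"V60.1", "V60.89", "V60.9"}

def flag_homeless(record_codes: set, expanded: bool = False) -> bool:
    """Flag a record as homeless if any diagnostic code matches the criteria."""
    criteria = CONVENTIONAL | ALLIED_V60 if expanded else CONVENTIONAL
    return bool(record_codes & criteria)

veteran = {"V60.1", "309.81"}          # housing instability + an unrelated code
flag_homeless(veteran)                 # missed by conventional criteria alone
flag_homeless(veteran, expanded=True)  # captured once allied codes are added
```

Broadening the criteria in this way is what raised the identified cohort from 21,021 to 31,260 Veterans in the study.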

  9. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  10. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
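The tier-1 screening logic described above (compare a measured concentration to its NOAEL-based benchmark and retain exceedances as COPCs) can be sketched in a few lines. The benchmark values and chemical names here are invented placeholders, not the report's actual derived NOAELs:

```python
# Hypothetical screening benchmarks (mg/L); real values would come from
# NOAEL-based derivations like those tabulated in the report above.
BENCHMARKS_MG_PER_L = {"cadmium": 0.005, "zinc": 0.12, "nitrate": 10.0}

def screen_copcs(measured):
    """Return contaminants of potential concern: those whose measured
    concentration exceeds the screening benchmark."""
    return sorted(
        chem for chem, conc in measured.items()
        if chem in BENCHMARKS_MG_PER_L and conc > BENCHMARKS_MG_PER_L[chem]
    )

# Cadmium falls below its benchmark and is screened out; zinc and nitrate
# exceed theirs and are retained for the baseline (tier-2) assessment.
copcs = screen_copcs({"cadmium": 0.002, "zinc": 0.30, "nitrate": 14.2})
```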

  11. An academic-VA partnership: Student interprofessional teams integrated with VA PACT teams.

    Science.gov (United States)

    Swenty, Constance L; Schaar, Gina L; Butler, Ryan M

    2016-12-01

    Veterans are challenged with multiple unique healthcare issues related to their military service environment. Likewise, health care providers must understand the special concerns associated with military conflict and recognize how the veteran's care can be optimized by interprofessional care delivery. Little is taught didactically or clinically that supports nursing students in addressing the unique issues of the veteran or the student's need to work collaboratively with allied health team members to enhance the veteran's care. Because of limited exposure to the veteran's special conditions, nursing students who may seek a career with the veteran population often face challenges in rendering appropriate care. The VA offers an invaluable opportunity for health profession students to collaborate with VA interprofessional Patient Aligned Care Teams (PACT) ultimately optimizing veteran health outcomes. This academic partnership, that implements an interprofessional model, will prepare students to better embrace the veteran population. This article describes the immersion of health profession students in interprofessional collaborative practice (IPCP) using PACT team principles which ultimately promotes the students' ability to link theory content to patient care delivery. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. OneVA EA Vision and Strategy

    Data.gov (United States)

    Department of Veterans Affairs — The outcomes/goals supported by effective use of an EA are: Improved Service Delivery, Functional Integration, Resource Optimization and Authoritative Reference. VA...

  13. Benchmarking in academic pharmacy departments.

    Science.gov (United States)

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy and recommendations for its potential uses in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  14. Update of KASHIL-E6 library for shielding analysis and benchmark calculations

    International Nuclear Information System (INIS)

    Kim, D. H.; Kil, C. S.; Jang, J. H.

    2004-01-01

    For various shielding and reactor pressure vessel dosimetry applications, a pseudo-problem-independent neutron-photon coupled MATXS-format library based on the last release of ENDF/B-VI has been generated as part of the update program for KASHIL-E6, which was based on ENDF/B-VI.5. It has the VITAMIN-B6 neutron and photon energy group structures, i.e., 199 groups for neutrons and 42 groups for photons. The neutron and photon weighting functions and the Legendre order of scattering are the same as in KASHIL-E6. The library has been validated through several benchmarks: the PCA-REPLICA and NESDIP-2 experiments for the LWR pressure vessel facility benchmark, the Winfrith Iron88 experiment for validation of iron data, and the Winfrith Graphite experiment for validation of graphite data. These calculations were performed with the TRANSX/DANTSYS code system. In addition, the substitution of JENDL-3.3 and JEFF-3.0 data for Fe, Cr, Cu and Ni, which are very important nuclides for shielding analyses, was investigated to estimate the effects on the benchmark calculation results.

  15. Flexural Stiffness of Myosin Va Subdomains as Measured from Tethered Particle Motion

    Science.gov (United States)

    Michalek, Arthur J.; Kennedy, Guy G.; Warshaw, David M.; Ali, M. Yusuf

    2015-01-01

    Myosin Va (MyoVa) is a processive molecular motor involved in intracellular cargo transport on the actin cytoskeleton. The motor's processivity and ability to navigate actin intersections are believed to be governed by the stiffness of various parts of the motor's structure. Specifically, changes in calcium may regulate motor processivity by altering the motor's lever arm stiffness and thus its interhead communication. In order to measure the flexural stiffness of MyoVa subdomains, we use tethered particle microscopy, which relates the Brownian motion of fluorescent quantum dots, which are attached to various single- and double-headed MyoVa constructs bound to actin in rigor, to the motor's flexural stiffness. Based on these measurements, the MyoVa lever arm and coiled-coil rod domain have comparable flexural stiffness (0.034 pN/nm). Upon addition of calcium, the lever arm stiffness is reduced 40% as a result of calmodulins potentially dissociating from the lever arm. In addition, the flexural stiffness of the full-length MyoVa construct is an order of magnitude less stiff than both a single lever arm and the coiled-coil rod. This suggests that the MyoVa lever arm-rod junction provides a flexible hinge that would allow the motor to maneuver cargo through the complex intracellular actin network. PMID:26770194
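The tethered-particle measurement described above rests on the equipartition theorem: a tether of effective stiffness k explores positions with variance ⟨x²⟩ = kBT/k. A minimal sketch of that relation follows, using the approximate thermal energy kBT ≈ 4.1 pN·nm at room temperature and an invented variance; the numbers are illustrative, not the paper's data:

```python
KT_PN_NM = 4.1  # thermal energy k_B*T at ~298 K, in pN*nm (approximate)

def stiffness_from_variance(var_nm2: float) -> float:
    """Effective stiffness (pN/nm) from the positional variance (nm^2)
    of a tethered particle, via equipartition: k = k_B*T / <x^2>."""
    return KT_PN_NM / var_nm2

# An illustrative 120 nm^2 variance corresponds to ~0.034 pN/nm,
# the order of the lever-arm and rod stiffness reported above.
k = stiffness_from_variance(120.0)
```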

  16. VaRank: a simple and powerful tool for ranking genetic variants

    Directory of Open Access Journals (Sweden)

    Véronique Geoffroy

    2015-03-01

    Full Text Available Background. Most genetic disorders are caused by single nucleotide variations (SNVs) or small insertions/deletions (indels). High-throughput sequencing has broadened the catalogue of human variation, including common polymorphisms, rare variations and disease-causing mutations. However, identifying one variation among hundreds or thousands of others is still a complex task for biologists, geneticists and clinicians. Results. We have developed VaRank, a command-line tool for the ranking of genetic variants detected by high-throughput sequencing. VaRank scores and prioritizes variants annotated either by Alamut Batch or SnpEff. A barcode allows users to quickly view the presence/absence of variants (with homozygote/heterozygote status) in analyzed samples. VaRank supports the commonly used VCF input format for variant analysis, thus allowing it to be easily integrated into NGS bioinformatics analysis pipelines. VaRank has been successfully applied to disease-gene identification as well as to molecular diagnostics setup for several hundred patients. Conclusions. VaRank is implemented in Tcl/Tk, a scripting language which is platform-independent but has been tested only on Unix environments. The source code is available under the GNU GPL, and together with sample data and detailed documentation can be downloaded from http://www.lbgi.fr/VaRank/.

  17. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is as a structured continuous collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. Home Health Care and Patterns of Subsequent VA and Medicare Health Care Utilization for Veterans

    Science.gov (United States)

    Van Houtven, Courtney Harold; Jeffreys, Amy S.; Coffman, Cynthia J.

    2008-01-01

    Purpose: The Veterans Affairs or VA health care system is in the process of significantly expanding home health care (HHC) nationwide. We describe VA HHC use in 2003 for all VA HHC users from 2002; we examine whether VA utilization across a broad spectrum of services differed for a sample of VA HHC users and their propensity-score-matched…

  19. Comparison of historically simulated VaR: Evidence from oil prices

    International Nuclear Information System (INIS)

    Costello, Alexandra; Asem, Ebenezer; Gardner, Eldon

    2008-01-01

    Cabedo and Moya [Cabedo, J.D., Moya, I., 2003. Estimating oil price 'Value at Risk' using the historical simulation approach. Energy Economics 25, 239-253] find that ARMA with historical simulation delivers VaR forecasts that are superior to those from GARCH. We compare the ARMA with historical simulation to the semi-parametric GARCH model proposed by Barone-Adesi et al. [Barone-Adesi, G., Giannopoulos, K., Vosper, L., 1999. VaR without correlations for portfolios of derivative securities. Journal of Futures Markets 19 (5), 583-602]. The results suggest that the semi-parametric GARCH model generates VaR forecasts that are superior to the VaR forecasts from the ARMA with historical simulation. This is due to the fact that GARCH captures volatility clustering. Our findings suggest that Cabedo and Moya's conclusion is mainly driven by the normal distributional assumption imposed on the future risk structure in the GARCH model. (author)
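The baseline method both papers build on, historical simulation, estimates VaR as an empirical quantile of past returns; the ARMA and semi-parametric GARCH refinements then model the return or volatility dynamics before taking that quantile. Below is a minimal sketch of the plain historical-simulation step only; the return series is invented, not oil-price data:

```python
def historical_var(returns, alpha: float = 0.05) -> float:
    """One-period historical-simulation VaR at level alpha,
    reported as a positive loss figure."""
    ordered = sorted(returns)                    # worst returns first
    idx = max(0, int(alpha * len(ordered)) - 1)  # empirical alpha-quantile
    return -ordered[idx]

# Illustrative daily returns, not actual data.
rets = [0.012, -0.034, 0.008, -0.051, 0.021, -0.013, 0.005, -0.027,
        0.017, -0.009, 0.030, -0.041, 0.002, -0.018, 0.011, -0.006,
        0.024, -0.022, 0.015, -0.037]
var95 = historical_var(rets, alpha=0.05)  # loss not exceeded on 95% of days
```

The filtered variant of Barone-Adesi et al. standardizes each past return by its estimated conditional volatility and rescales by the current GARCH volatility forecast before taking the quantile, which is how it captures the volatility clustering noted above.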

  20. Direct observation of the myosin Va recovery stroke that contributes to unidirectional stepping along actin.

    Directory of Open Access Journals (Sweden)

    Katsuyuki Shiroguchi

    2011-04-01

    Full Text Available Myosins are ATP-driven linear molecular motors that work as cellular force generators, transporters, and force sensors. These functions are driven by large-scale nucleotide-dependent conformational changes, termed "strokes"; the "power stroke" is the force-generating swinging of the myosin light chain-binding "neck" domain relative to the motor domain "head" while bound to actin; the "recovery stroke" is the necessary initial motion that primes, or "cocks," myosin while detached from actin. Myosin Va is a processive dimer that steps unidirectionally along actin following a "hand over hand" mechanism in which the trailing head detaches and steps forward ∼72 nm. Despite large rotational Brownian motion of the detached head about a free joint adjoining the two necks, unidirectional stepping is achieved, in part by the power stroke of the attached head that moves the joint forward. However, the power stroke alone cannot fully account for preferential forward site binding since the orientation and angle stability of the detached head, which is determined by the properties of the recovery stroke, dictate actin binding site accessibility. Here, we directly observe the recovery stroke dynamics and fluctuations of myosin Va using a novel, transient caged ATP-controlling system that maintains constant ATP levels through stepwise UV-pulse sequences of varying intensity. We immobilized the neck of monomeric myosin Va on a surface and observed real-time motions of bead(s) attached site-specifically to the head. ATP induces a transient swing of the neck to the post-recovery stroke conformation, where it remains for ∼40 s, until ATP hydrolysis products are released. Angle distributions indicate that the post-recovery stroke conformation is stabilized by ≥ 5 kBT of energy. The high kinetic and energetic stability of the post-recovery stroke conformation favors preferential binding of the detached head to a forward site 72 nm away. Thus, the recovery
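The stability figure of about 5 kBT quoted above follows from Boltzmann statistics: if a conformation is occupied with probability p, its free-energy advantage over the alternative is kBT·ln(p/(1−p)). A sketch with an illustrative occupancy, not the paper's measured angle distribution:

```python
import math

def stability_in_kT(p_major: float) -> float:
    """Free-energy stabilization of the dominant conformation, in units of
    k_B*T, from its occupancy p via dG/kT = ln(p / (1 - p))."""
    return math.log(p_major / (1.0 - p_major))

# A state occupied ~99.3% of the time is stabilized by about 5 k_B*T,
# the order of magnitude reported for the post-recovery stroke conformation.
dg = stability_in_kT(0.9933)
```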

  1. Mixed-oxide (MOX) fuel performance benchmark. Summary of the results for the PRIMO MOX rod BD8

    International Nuclear Information System (INIS)

    Ott, L.J.; Sartori, E.; Costa, A.; ); Sobolev, V.; Lee, B-H.; Alekseev, P.N.; Shestopalov, A.A.; Mikityuk, K.O.; Fomichenko, P.A.; Shatrova, L.P.; Medvedev, A.V.; Bogatyr, S.M.; Khvostov, G.A.; Kuznetsov, V.I.; Stoenescu, R.; Chatwin, C.P.

    2009-01-01

    The OECD/NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, nuclear fuel performance, and fuel cycle issues related to the disposition of weapons-grade plutonium as MOX fuel. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close cooperation with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A major part of these activities includes benchmark studies. This report describes the results of the PRIMO rod BD8 benchmark exercise, the second benchmark conducted by the TFRPD on MOX fuel behaviour. The corresponding PRIMO experimental data have been released, compiled and reviewed for the International Fuel Performance Experiments (IFPE) database. The observed ranges (as noted in the text) in the predicted thermal and fission gas release (FGR) responses are reasonable given the variety and combination of thermal conductivity and FGR models employed by the benchmark participants with their respective fuel performance codes.

  2. 48 CFR 803.7000 - Display of the VA Hotline poster.

    Science.gov (United States)

    2010-10-01

    ... poster. 803.7000 Section 803.7000 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS... Improper Business Practices 803.7000 Display of the VA Hotline poster. (a) Under the circumstances described in paragraph (b) of this section, a contractor must display prominently a VA Hotline poster...

  3. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction

    Science.gov (United States)

    Puton, Tomasz; Kozlowski, Lukasz P.; Rother, Kristian M.; Bujnicki, Janusz M.

    2013-01-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks. PMID:23435231

  4. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction.

    Science.gov (United States)

    Puton, Tomasz; Kozlowski, Lukasz P; Rother, Kristian M; Bujnicki, Janusz M

    2013-04-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks.
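Accuracy in such benchmarks is usually reported at the level of base pairs. As a minimal illustration (not CompaRNA's actual implementation), sensitivity, positive predictive value (PPV) and their F-measure can be computed from sets of (i, j) pair indices:

```python
def pair_metrics(reference, predicted):
    """Base-pair level scores used in RNA 2D-structure benchmarks.

    `reference` and `predicted` are iterables of (i, j) index tuples."""
    ref, pred = set(reference), set(predicted)
    tp = len(ref & pred)                      # correctly predicted pairs
    sens = tp / len(ref) if ref else 0.0      # fraction of true pairs recovered
    ppv = tp / len(pred) if pred else 0.0     # fraction of predictions correct
    f1 = 2 * sens * ppv / (sens + ppv) if sens + ppv else 0.0
    return sens, ppv, f1

# Toy hairpin: three true pairs; the prediction gets two right, one wrong.
scores = pair_metrics({(1, 10), (2, 9), (3, 8)}, {(1, 10), (2, 9), (4, 7)})
```

Correlation-style scores (e.g. MCC) can be derived from the same true/false positive counts; the set-based tallies above are the common starting point.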

  5. EGS4 benchmark program

    International Nuclear Information System (INIS)

    Yasu, Y.; Hirayama, H.; Namito, Y.; Yashiro, S.

    1995-01-01

    This paper proposes the EGS4 Benchmark Suite, which consists of three programs called UCSAMPL4, UCSAMPL4I and XYZDOS. This paper also evaluates optimization methods of recent RISC/UNIX systems, such as IBM, HP, DEC, Hitachi and Fujitsu, for the benchmark suite. When particular compiler options and math libraries were included in the evaluation process, systems performed significantly better. The observed performance of some of the RISC/UNIX systems was beyond that of some so-called mainframes from IBM, Hitachi or Fujitsu. The computer performance of the EGS4 Code System on an HP9000/735 (99 MHz) was defined as the unit of performance (EGS4 Unit). The EGS4 Benchmark Suite was also run on various PCs such as Pentiums, i486 and DEC Alpha machines. The performance of recent fast PCs reaches that of recent RISC/UNIX systems. The benchmark programs have also been evaluated for correlation with industry benchmark programs, namely SPECmark. (author)

  6. ESTIMATING RISK ON THE CAPITAL MARKET WITH VaR METHOD

    Directory of Open Access Journals (Sweden)

    Sinisa Bogdan

    2015-06-01

    Full Text Available The two basic questions that every investor tries to answer before investing concern expected return and risk. Risk and return are generally considered positively correlated: as risk grows, a higher return is expected to compensate for it. The quantification of risk in the capital market has been a central topic ever since securities first appeared, and together with estimated future returns it represents the starting point of any investment. This study describes the history of the emergence of VaR methods and their usefulness in assessing the risks of financial assets. Three main Value at Risk (VaR) methodologies are described and explained in detail: the historical method, the parametric method and the Monte Carlo method. After the theoretical review of VaR methods, the risk of liquid stocks and a portfolio from the Croatian capital market is estimated with the historical and parametric VaR methods, after which the results are compared and explained.
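The first two of the three methodologies can be sketched in a few lines. The code below is a generic illustration (hypothetical data, rounded normal quantiles), not the study's implementation:

```python
import random
import statistics

def historical_var(returns, alpha=0.95):
    """Historical simulation: the empirical alpha-quantile of losses."""
    losses = sorted(-r for r in returns)
    return losses[min(int(alpha * len(losses)), len(losses) - 1)]

def parametric_var(returns, alpha=0.95):
    """Variance-covariance method: assumes normally distributed returns."""
    mu = statistics.fmean(returns)
    sigma = statistics.stdev(returns)
    z = {0.95: 1.6449, 0.99: 2.3263}[alpha]  # rounded N(0,1) quantiles
    return z * sigma - mu

# Hypothetical daily returns, only to exercise the two estimators.
random.seed(42)
rets = [random.gauss(0.0005, 0.02) for _ in range(1000)]
hv, pv = historical_var(rets), parametric_var(rets)
```

For roughly normal data the two estimates should be close; the Monte Carlo variant replaces the historical sample with simulated draws from a fitted distribution.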

  7. Comparison of historically simulated VaR: Evidence from oil prices

    Energy Technology Data Exchange (ETDEWEB)

    Costello, Alexandra [Seminole Canada Energy, Calgary, AB (Canada); Asem, Ebenezer; Gardner, Eldon [Faculty of Management, University of Lethbridge, Lethbridge, AB (Canada)

    2008-09-15

    Cabedo and Moya [Cabedo, J.D., Moya, I., 2003. Estimating oil price 'Value at Risk' using the historical simulation approach. Energy Economics 25, 239-253] find that ARMA with historical simulation delivers VaR forecasts that are superior to those from GARCH. We compare the ARMA with historical simulation to the semi-parametric GARCH model proposed by Barone-Adesi et al. [Barone-Adesi, G., Giannopoulos, K., Vosper, L., 1999. VaR without correlations for portfolios of derivative securities. Journal of Futures Markets 19 (5), 583-602]. The results suggest that the semi-parametric GARCH model generates VaR forecasts that are superior to the VaR forecasts from the ARMA with historical simulation. This is due to the fact that GARCH captures volatility clustering. Our findings suggest that Cabedo and Moya's conclusion is mainly driven by the normal distributional assumption imposed on the future risk structure in the GARCH model. (author)
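The essence of the semi-parametric approach, historical simulation applied to volatility-standardized returns, can be sketched as follows. The EWMA (RiskMetrics-style) variance recursion here is a simplified stand-in for the fitted GARCH model of Barone-Adesi et al., and the seed window is assumed to have nonzero variance:

```python
import statistics

def fhs_var(returns, alpha=0.95, lam=0.94):
    """Filtered historical simulation: devolatilize past returns with an
    EWMA variance recursion (standing in for a fitted GARCH sigma),
    rescale the standardized residuals by today's volatility, then take
    the empirical loss quantile."""
    sig2 = statistics.pvariance(returns[:20])  # seed the recursion
    sigmas = []
    for r in returns:
        sigmas.append(sig2 ** 0.5)
        sig2 = lam * sig2 + (1 - lam) * r * r  # RiskMetrics-style update
    sigma_now = sig2 ** 0.5
    losses = sorted(-(r / s) * sigma_now for r, s in zip(returns, sigmas))
    return losses[min(int(alpha * len(losses)), len(losses) - 1)]
```

With homoskedastic input the filter is a no-op; its value shows up when recent volatility differs from the sample average, which is exactly the clustering effect cited above.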

  8. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, none were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the

  9. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    The objective of this study was to identify usage of foodservice performance measures, important activities in foodservice benchmarking, and benchmarking attitudes, beliefs, and practices by foodservice directors...

  10. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    Science.gov (United States)

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.

  11. VA Veterans Health Administration Access Data

    Data.gov (United States)

    Department of Veterans Affairs — At the Department of Veterans Affairs (VA), our most important mission is to provide the high quality health care and benefits Veterans have earned and deserve —...

  12. Semi-nonparametric VaR forecasts for hedge funds during the recent crisis

    Science.gov (United States)

    Del Brio, Esther B.; Mora-Valencia, Andrés; Perote, Javier

    2014-05-01

    The need to provide accurate value-at-risk (VaR) forecasting measures has triggered an important literature in econophysics. Although accurate VaR models and methodologies are particularly demanded by hedge fund managers, few articles are specifically devoted to implementing new techniques in hedge fund returns VaR forecasting. This article advances these issues by comparing the performance of risk measures based on parametric distributions (the normal, Student's t and skewed-t), semi-nonparametric (SNP) methodologies based on Gram-Charlier (GC) series, and the extreme value theory (EVT) approach. Our results show that the normal-, Student's t- and skewed t-based methodologies fail to forecast hedge fund VaR, whilst the SNP and EVT approaches succeed at it. We extend these results to the multivariate framework by providing an explicit formula for the GC copula and its density that encompasses the Gaussian copula and accounts for non-linear dependences. We show that the VaR obtained by the meta GC accurately captures portfolio risk and outperforms regulatory VaR estimates obtained through the meta Gaussian and Student's t distributions.
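To illustrate how moment-based corrections of the Gaussian quantile work, the sketch below uses the Cornish-Fisher expansion, a close cousin of the Gram-Charlier approach; it is not the authors' GC density or copula:

```python
import statistics

def cornish_fisher_var(returns, alpha=0.95):
    """VaR from a Cornish-Fisher-adjusted Gaussian quantile: corrects the
    normal quantile for sample skewness and excess kurtosis. A moment-based
    cousin of Gram-Charlier VaR, used here purely for illustration."""
    n = len(returns)
    mu = statistics.fmean(returns)
    sigma = statistics.stdev(returns)
    std = [(r - mu) / sigma for r in returns]
    s = sum(x ** 3 for x in std) / n          # sample skewness
    k = sum(x ** 4 for x in std) / n - 3.0    # sample excess kurtosis
    z = {0.95: -1.6449, 0.99: -2.3263}[alpha]  # lower-tail N(0,1) quantile
    z_cf = (z
            + (z * z - 1) * s / 6
            + (z ** 3 - 3 * z) * k / 24
            - (2 * z ** 3 - 5 * z) * s * s / 36)
    return -(mu + z_cf * sigma)
```

With s = k = 0 the formula collapses to the parametric normal VaR; nonzero higher moments shift the quantile to reflect asymmetry and tail weight.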

  13. Research on Interval Forecast for Metal Futures Market's VaR Based on Bootstrap

    Institute of Scientific and Technical Information of China (English)

    沈盟; 王璐

    2016-01-01

    Accurate measurement of metal futures market VaR is important for preventing futures transaction risk and maintaining healthy, stable market operation. Traditional VaR measurement methods focus on point forecasts, which cannot reflect the accuracy or range of the predicted value. A new method for interval forecasting of metal futures market VaR based on the bootstrap is put forward, and the LR test is used to verify the effectiveness of the interval forecasts. Finally, we empirically study interval forecasts of VaR for the copper and aluminum futures markets in China. The results show that the new method overcomes the shortcomings of point forecasts and accurately describes the estimation risk of VaR, while the upper and lower confidence limits can be used for risk early warning and control.
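The percentile-bootstrap idea behind such interval forecasts can be sketched generically; this is not the authors' exact procedure, and the LR backtest is omitted:

```python
import random

def historical_var(returns, alpha=0.95):
    """Point estimate: empirical alpha-quantile of the loss distribution."""
    losses = sorted(-r for r in returns)
    return losses[min(int(alpha * len(losses)), len(losses) - 1)]

def bootstrap_var_interval(returns, alpha=0.95, n_boot=1000, level=0.90, seed=0):
    """Percentile-bootstrap confidence interval for the historical VaR:
    resample with replacement, re-estimate, take percentiles."""
    rng = random.Random(seed)
    n = len(returns)
    estimates = sorted(
        historical_var([rng.choice(returns) for _ in range(n)], alpha)
        for _ in range(n_boot)
    )
    lo = estimates[int((1 - level) / 2 * n_boot)]
    hi = estimates[min(int((1 + level) / 2 * n_boot), n_boot - 1)]
    return lo, hi

# Hypothetical return series; the interval quantifies estimation risk.
random.seed(3)
rets = [random.gauss(0.0, 0.02) for _ in range(500)]
lo, hi = bootstrap_var_interval(rets)
```

The interval width, not just the point estimate, is what supports the early-warning use described above.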

  14. 76 FR 52230 - Establishment of Class E Airspace; Forest, VA

    Science.gov (United States)

    2011-08-22

    ...-0378; Airspace Docket No. 11-AEA-11] Establishment of Class E Airspace; Forest, VA AGENCY: Federal... at Forest, VA, to accommodate the new Area Navigation (RNAV) Global Positioning System (GPS) Standard... published in the Federal Register a notice of proposed rulemaking to establish Class E airspace at Forest...

  15. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, taking four different applications of benchmarking as our starting point. The regulation of utility companies will be treated, after which...

  16. Regional Competitive Intelligence: Benchmarking and Policymaking

    OpenAIRE

    Huggins , Robert

    2010-01-01

    Benchmarking exercises have become increasingly popular within the sphere of regional policymaking in recent years. The aim of this paper is to analyse the concept of regional benchmarking and its links with regional policymaking processes. It develops a typology of regional benchmarking exercises and regional benchmarkers, and critically reviews the literature, both academic and policy oriented. It is argued that critics who suggest regional benchmarking is a flawed concept and technique fai...

  17. VA Personal Health Record Sample Data

    Data.gov (United States)

    Department of Veterans Affairs — My HealtheVet (www.myhealth.va.gov) is a Personal Health Record portal designed to improve the delivery of health care services to Veterans, to promote health and...

  18. Computing Conditional VaR using Time-varying Copulas

    Directory of Open Access Journals (Sweden)

    Beatriz Vaz de Melo Mendes

    2005-12-01

    Full Text Available The use of Value-at-Risk (VaR) as a canonical measure of risk is now widespread. The most accurate VaR measures make use of some volatility model such as GARCH-type models. However, the volatility dynamics of a portfolio follow from the (univariate) behavior of the risk assets, as well as from the type and strength of the associations among them. Moreover, the dependence structure among the components may change conditionally on past observations. Some papers have attempted to model this characteristic by assuming a multivariate GARCH model, by considering the conditional correlation coefficient, or by incorporating some possibility for switches in regimes. In this paper we address this problem using time-varying copulas. Our modeling strategy allows the margins to follow some FIGARCH-type model while the copula dependence structure changes over time.

  19. Benchmarking Using Basic DBMS Operations

    Science.gov (United States)

    Crolotte, Alain; Ghazal, Ahmad

    The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used these benchmarks to show the superiority and competitive edge of their products. However, over time, the TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally, we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
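A micro-benchmark of basic operations can be surprisingly small. The sketch below times a scan, an aggregation and an index lookup against a generated table; SQLite and the toy lineitem table are illustrative stand-ins, not part of XMarq:

```python
import sqlite3
import time

def time_query(conn, sql, repeats=5):
    """Median wall-clock time of a query over several repeats."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        conn.execute(sql).fetchall()
        samples.append(time.perf_counter() - t0)
    return sorted(samples)[len(samples) // 2]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lineitem (id INTEGER PRIMARY KEY, qty REAL)")
conn.executemany("INSERT INTO lineitem (qty) VALUES (?)",
                 [(float(i % 50),) for i in range(10000)])

scan_t = time_query(conn, "SELECT COUNT(*) FROM lineitem")            # full scan
agg_t = time_query(conn, "SELECT AVG(qty) FROM lineitem")             # aggregation
idx_t = time_query(conn, "SELECT qty FROM lineitem WHERE id = 4242")  # index access
```

Taking the median rather than the mean damps warm-up and scheduling noise, which matters when comparing two systems on per-operation timings.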

  20. High Energy Physics (HEP) benchmark program

    International Nuclear Information System (INIS)

    Yasu, Yoshiji; Ichii, Shingo; Yashiro, Shigeo; Hirayama, Hideo; Kokufuda, Akihiro; Suzuki, Eishin.

    1993-01-01

    High Energy Physics (HEP) benchmark programs are indispensable tools for selecting a suitable computer for a HEP application system. Industry-standard benchmark programs cannot be used for this particular kind of selection. The CERN and SSC benchmark suites are well-known HEP benchmark programs for this purpose. The CERN suite includes event reconstruction and event generator programs, while the SSC suite includes event generators. In this paper, we found that the results from these two suites are not consistent. Moreover, the result from the industry benchmark does not agree with either of these two. Besides, we describe a comparison of benchmark results using the EGS4 Monte Carlo simulation program with results from the two HEP benchmark suites, and found that the EGS4 result is not consistent with either of them. The industry-standard SPECmark values on various computer systems are not consistent with the EGS4 results either. Because of these inconsistencies, we point out the necessity of a standardization of HEP benchmark suites. Also, an EGS4 benchmark suite should be developed for users of applications such as medical science, nuclear power plants, nuclear physics and high energy physics. (author)

  1. Benchmarking of numerical codes describing the dispersion of radionuclides in the Arctic Seas

    International Nuclear Information System (INIS)

    Scott, E.M.; Gurbutt, P.; Harms, I.

    1995-01-01

    As part of the International Arctic Seas Assessment Project (IASAP) of the IAEA, a working group has been created to model the dispersal and transfer of radionuclides released from the radioactive waste disposed of in the Kara Sea. The aim of the benchmarking work is to quantitatively assess the reliability of the models, which would ultimately lead to the evaluation of consensus/best estimates of the concentration fields to be used in the radiological assessment. The benchmarking results have been compared, and the comparison is summarised in terms of agreement in maximum concentrations and in when those maxima occurred. This has been carried out for both water and sediment, at each of the defined locations and for each of the radionuclides. The paper presents a full description of the benchmarking results and discusses the similarities and differences. The role of the exercise within the modelling programme of IASAP is also discussed, and the planning for the next stage of the work is presented. 4 refs

  2. Benchmarking Tool Kit.

    Science.gov (United States)

    Canadian Health Libraries Association.

    Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…

  3. In aftermath of financial investigation Phoenix VA employee demoted after her testimony

    Directory of Open Access Journals (Sweden)

    Robbins RA

    2013-03-01

    Full Text Available No abstract available. Article truncated after 150 words. A previous Southwest Journal of Pulmonary and Critical Care Journal editorial commented on fiscal mismanagement at the Department of Veterans Affairs (VA) Medical Center in Phoenix (1). Now Paula Pedene, the former Phoenix VA public affairs officer, claims she was demoted for testimony she gave to the VA Inspector General's Office (OIG) regarding that investigation (2). In 2011, the OIG investigated the Phoenix VA for excess spending on private care of patients (3). The report blamed systemic failures for controls so weak that $56 million in medical fees were paid during 2010 without adequate review. The report particularly focused on one clinician assigned by the Chief of Staff to review hundreds of requests per week and on the intensive care unit physicians for transferring patients to chronic ventilator units (1,3). After the investigation, the director and one of the associate directors left the VA and the chief of staff was promoted …

  4. Isolation and characterization of specific bacteriophage Va1 to Vibrio alginolyticus

    Directory of Open Access Journals (Sweden)

    Carla Fernández Espinel

    2017-04-01

    Full Text Available Vibrio alginolyticus is associated with diseases in aquaculture. The misuse of antibiotics has led to the search for alternatives for the treatment of bacterial diseases, among them the application of bacteriophages that infect and destroy bacteria selectively. A highly lytic V. alginolyticus bacteriophage, termed Va1, was therefore isolated with the aim of evaluating its physicochemical parameters. For this purpose, different temperature, pH, chloroform exposure and host range conditions were evaluated. Phage Va1 showed higher titers at 20 and 30 °C, with titers decreasing from 40 °C upward. With respect to pH, the highest titers for the bacteriophage were between pH 5 and 8, and chloroform exposure reduced the viability of phage Va1 by 25%. A one-step growth curve determined that the latency period and the burst size were 20 minutes and 192 PFU/infective center, respectively. Under the transmission electron microscope, phage Va1 showed an icosahedral head and a non-contractile tail, placing it in the family Podoviridae. In conclusion, phage Va1 presents promising characteristics for use in phage therapy.

  5. A Global Vision over Benchmarking Process: Benchmarking Based Enterprises

    OpenAIRE

    Sitnikov, Catalina; Giurca Vasilescu, Laura

    2008-01-01

    Benchmarking uses the knowledge and experience of others to improve the enterprise. Starting from an analysis of performance that underlines the strengths and weaknesses of the enterprise, it should be assessed what must be done in order to improve its activity. Using benchmarking techniques, an enterprise looks at how processes in the value chain are performed. The approach based on the vision "from the whole towards the parts" (a fragmented image of the enterprise's value chain) redu...

  6. The Application of VaR Method to Risk Evaluation of Bank Loans

    Institute of Scientific and Technical Information of China (English)

    邹新月

    2005-01-01

    The recently developed Value-at-Risk model is a mathematical model for measuring and monitoring market risk. The article focuses on the calculation procedures and methods for applying VaR to the evaluation of bank loan risk. We clearly differentiate between the Bank for International Settlements approach to drawing credit risk reserves and the VaR approach to calculating bank loan risk value, and find that VaR has practical value and broad application prospects for bank loan risk evaluation in our country.

  7. 78 FR 76412 - Agency Information Collection (VA National Rehabilitation Special Events, Event Registration...

    Science.gov (United States)

    2013-12-17

    ... INFORMATION: Titles: a. National Disabled Veterans Winter Sports Clinic Application, VA Form 0924a, c, d and..., c, e. j. Voluntary Service Application, VA Form 0927f. k. National Veterans Summer sports Clinic... Festival Event Application, VA0929a, b, c, d, e, f, g, h. Type of Review: Revision of an already approved...

  8. PERHITUNGAN VaR PORTOFOLIO SAHAM MENGGUNAKAN DATA HISTORIS DAN DATA SIMULASI MONTE CARLO

    Directory of Open Access Journals (Sweden)

    WAYAN ARTHINI

    2012-09-01

    Full Text Available Value at Risk (VaR) is the maximum potential loss on a portfolio at a given probability over a certain time. In this research, portfolio VaR values are calculated from historical data and Monte Carlo simulation data. The historical data are processed to obtain stock returns, variances, correlation coefficients, and the variance-covariance matrix; the Markowitz method is then used to find the proportion of funds in each stock and the portfolio risk and return. The data were then simulated by Monte Carlo simulation: Exact Monte Carlo Simulation and Expected Monte Carlo Simulation. Exact Monte Carlo simulation has the same returns and standard deviation as the historical data, while Expected Monte Carlo simulation has statistics similar to the historical data. The result of this research is the portfolio VaR at time horizons T=1, T=10 and T=22 with a confidence level of 95%, comparing VaR values obtained from historical data with those from Monte Carlo simulation data under the exact and expected methods. The VaR from both Monte Carlo simulations is greater than the VaR from historical data.
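Stripped of the exact/expected distinction, the Monte Carlo step reduces to simulating correlated returns, forming portfolio losses and taking an empirical quantile. A two-asset sketch with illustrative parameters (not the paper's data):

```python
import math
import random

def mc_portfolio_var(w, mu, sigma, rho, alpha=0.95, n_sims=20000, seed=7):
    """Monte Carlo VaR for a two-asset portfolio: draw correlated normal
    returns (2x2 Cholesky factor written out), form portfolio losses, and
    take the empirical alpha-quantile."""
    rng = random.Random(seed)
    b = math.sqrt(1.0 - rho * rho)
    losses = []
    for _ in range(n_sims):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        r1 = mu[0] + sigma[0] * z1
        r2 = mu[1] + sigma[1] * (rho * z1 + b * z2)  # correlated draw
        losses.append(-(w[0] * r1 + w[1] * r2))
    losses.sort()
    return losses[int(alpha * n_sims)]

# Illustrative parameters: diversification lowers VaR as correlation drops.
v_high = mc_portfolio_var([0.5, 0.5], [0.0, 0.0], [0.02, 0.02], rho=0.9)
v_low = mc_portfolio_var([0.5, 0.5], [0.0, 0.0], [0.02, 0.02], rho=0.0)
```

Scaling to a T-day horizon is often done by multiplying the one-day VaR by sqrt(T) under an i.i.d. assumption, or by simulating T-step return paths directly.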

  9. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book which was first published in February, 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December, 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February, 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  10. Poincare group, SU(3) and V-A in leptonic decay

    International Nuclear Information System (INIS)

    Boehm, A.

    1975-07-01

    From as few assumptions as possible about the relations between the Poincare group, the particle-classifying SU(3) and V-A, we derive properties of the Kl3 and KL2 decays. From the assumed relation between SU(3) and the Poincare group and the first-class condition, it follows that the form factor ratio Xi of Kl3 decay is Xi = -0.57, and that a value of Xi = 0 is in disagreement with very general and well-accepted theoretical assumptions. Assuming universality of V-A, the Cabibbo suppression is derived from the relations between SU(3) and V-A as a consequence of the brokenness of SU(3). (U.S.)

  11. VA Health Care: VA Spends Millions on Post-Traumatic Stress Disorder Research and Incorporates Research Outcomes into Guidelines and Policy for Post-Traumatic Stress Disorder Services

    Science.gov (United States)

    2011-01-01

    post-traumatic stress disorder (PTSD) and...Veterans Affairs (VA) Intramural Post-Traumatic Stress Disorder (PTSD) Research Funding and VA's Medical and Prosthetic Research Appropriation...Table 6: Department of Veterans Affairs (VA) Research Centers and Programs That Conduct or Support Post-Traumatic Stress Disorder (PTSD) Research

  12. Detailed benchmark test of JENDL-4.0 iron data for fusion applications

    Energy Technology Data Exchange (ETDEWEB)

    Konno, Chikara, E-mail: konno.chikara@jaea.go.jp [Japan Atomic Energy Agency, Tokai-Mura, Ibaraki-ken, 319-1195 (Japan); Wada, Masayuki [Japan Computer System, Mito, 310-0805 (Japan); Kondo, Keitaro; Ohnishi, Seiki; Takakura, Kosuke; Ochiai, Kentaro; Sato, Satoshi [Japan Atomic Energy Agency, Tokai-Mura, Ibaraki-ken, 319-1195 (Japan)

    2011-10-15

    The major revised version of the Japanese Evaluated Nuclear Data Library (JENDL), JENDL-4.0, was released in May 2010. As one of the benchmark tests, we have carried out a benchmark test of the JENDL-4.0 iron data, which are very important for radiation shielding in fusion reactors, by analyzing the iron fusion neutronics integral experiments (in situ and time-of-flight (TOF) experiments) at JAEA/FNS. It is demonstrated that the problems of the iron data in the previous version of JENDL, JENDL-3.3, are solved in JENDL-4.0, namely the first inelastic scattering cross-section data of 57Fe and the angular distribution of the elastic scattering of 56Fe. The iron data in JENDL-4.0 are comparable to, or partly better than, those in ENDF/B-VII.0 and JEFF-3.1.

  13. Benchmarks for GADRAS performance validation

    International Nuclear Information System (INIS)

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L. Jr.

    2009-01-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  14. Benchmarking in Czech Higher Education

    Directory of Open Access Journals (Sweden)

    Plaček Michal

    2015-12-01

    Full Text Available The first part of this article surveys current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used at this level today, but most actors show some interest in its introduction. The expressed need for it, and the importance of benchmarking as a very suitable performance-management tool in less developed countries, motivate the second part of the article. Based on an analysis of the current situation and existing needs in the Czech Republic, as well as on a comparison with international experience, recommendations for public policy are made in the form of a model of collaborative benchmarking for Czech higher-education programs in economics and management. Because the fully complex model cannot be implemented immediately, which is also confirmed by structured interviews with academics who have practical experience with benchmarking, the final model is designed as a multi-stage model. This approach helps eliminate major barriers to the implementation of benchmarking.

  15. Employment status, employment functioning, and barriers to employment among VA primary care patients.

    Science.gov (United States)

    Zivin, Kara; Yosef, Matheos; Levine, Debra S; Abraham, Kristen M; Miller, Erin M; Henry, Jennifer; Nelson, C Beau; Pfeiffer, Paul N; Sripada, Rebecca K; Harrod, Molly; Valenstein, Marcia

    2016-03-15

    Prior research found lower employment rates among working-aged patients who use the VA than among non-Veterans or Veterans who do not use the VA, with the lowest reported employment rates among VA patients with mental disorders. This study assessed employment status, employment functioning, and barriers to employment among VA patients treated in primary care settings, and examined how depression and anxiety were associated with these outcomes. The sample included 287 VA patients treated in primary care in a large Midwestern VA Medical Center. Bivariate and multivariable analyses were conducted examining associations between socio-demographic and clinical predictors of six employment domains, including: employment status, job search self-efficacy, work performance, concerns about job loss among employed Veterans, and employment barriers and likelihood of job seeking among not employed Veterans. 54% of respondents were employed, 36% were not employed, and 10% were economically inactive. In adjusted analyses, participants with depression or anxiety (43%) were less likely to be employed, had lower job search self-efficacy, had lower levels of work performance, and reported more employment barriers. Depression and anxiety were not associated with perceived likelihood of job loss among employed or likelihood of job seeking among not employed. Single VA primary care clinic; cross-sectional study. Employment rates are low among working-aged VA primary care patients, particularly those with mental health conditions. Offering primary care interventions to patients that address mental health issues, job search self-efficacy, and work performance may be important in improving health, work, and economic outcomes. Published by Elsevier B.V.

  16. 78 FR 18425 - Proposed Information Collection VA Police Officer Pre-Employment Screening Checklist); Comment...

    Science.gov (United States)

    2013-03-26

    ... techniques or the use of other forms of information technology. Title: VA Police Officer Pre-Employment... Police Officer Pre-Employment Screening Checklist); Comment Request AGENCY: Office of Operations... approved collection. Abstract: VA personnel complete VA Form 0120 to document pre- employment history and...

  17. 76 FR 24570 - Proposed Information Collection (Application for VA Education Benefits) Activity; Comment Request

    Science.gov (United States)

    2011-05-02

    ... (Application for VA Education Benefits) Activity; Comment Request AGENCY: Veterans Benefits Administration, Department of Veterans Affairs. ACTION: Notice. SUMMARY: The Veterans Benefits Administration (VBA... Under the Montgomery GI Bill, VA Form 22-1990E. c. Application for VA Education Benefits Under the...

  18. Validation of KENO V.a for the Portsmouth Gaseous Diffusion Plant

    International Nuclear Information System (INIS)

    Felsher, H.D.; Fentiman, A.W.; Tayloe, R.W.; D'Aquila, D.

    1992-01-01

    At the Portsmouth Gaseous Diffusion Plant, KENO V.a is used to make criticality calculations for complex configurations and a wide range of 235U enrichments. It is essential either that the calculated critical conditions accurately reflect the true critical state or that the bias from the true critical conditions is well known. Accordingly, a study has been initiated to validate KENO V.a over the ranges of parameters expected to be used when modeling equipment and processes at Portsmouth. Preliminary results of that study are reported in this paper. The ultimate goal of this study is to identify a set of data from existing critical experiments that will exercise all KENO V.a parameters commonly used by Portsmouth's criticality safety personnel. A second goal is to identify a relatively small subset of those experiments that may be run frequently to ensure that KENO V.a provides consistent results.

  19. ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms

    DEFF Research Database (Denmark)

    Aumüller, Martin; Bernhardsson, Erik; Faithfull, Alexander

    2017-01-01

    This paper describes ANN-Benchmarks, a tool for evaluating the performance of in-memory approximate nearest neighbor algorithms. It provides a standard interface for measuring the performance and quality achieved by nearest neighbor algorithms on different standard data sets. It supports several...... visualise these as images, plots, and websites with interactive plots. ANN-Benchmarks aims to provide a constantly updated overview of the current state of the art of k-NN algorithms. In the short term, this overview allows users to choose the correct k-NN algorithm and parameters...... for their similarity search task; in the longer term, algorithm designers will be able to use this overview to test and refine automatic parameter tuning. The paper gives an overview of the system, evaluates the results of the benchmark, and points out directions for future work. Interestingly, very different...

  20. Benchmarking Swiss electricity grids

    International Nuclear Information System (INIS)

    Walti, N.O.; Weber, Ch.

    2001-01-01

    This extensive article describes a pilot benchmarking project initiated by the Swiss Association of Electricity Enterprises that assessed 37 Swiss utilities. The data collected from these utilities on a voluntary basis included data on technical infrastructure, investments and operating costs. These various factors are listed and discussed in detail. The assessment methods and rating mechanisms that provided the benchmarks are discussed, and the results of the pilot study, which are to form the basis of benchmarking procedures for the grid regulation authorities under Switzerland's planned electricity-market law, are presented. Examples of the practical use of the benchmarking methods are given, and cost-efficiency questions still open in the areas of investment and operating costs are listed. Prefaces by the Swiss Association of Electricity Enterprises and the Swiss Federal Office of Energy complete the article.

  1. Geographic Distribution of VA Expenditures FY 2016

    Data.gov (United States)

    Department of Veterans Affairs — This report details VA expenditures at the state, county, and Congressional District level. It includes categories such as Compensation and Pension, Construction,...

  2. Geographic Distribution of VA Expenditures FY2010

    Data.gov (United States)

    Department of Veterans Affairs — This report details VA expenditures at the state, county, and Congressional District level. It includes categories such as Compensation and Pension, Construction,...

  3. Geographic Distribution of VA Expenditures FY2012

    Data.gov (United States)

    Department of Veterans Affairs — This report details VA expenditures at the state, county, and Congressional District level. It includes categories such as Compensation and Pension, Construction,...

  4. Geographic Distribution of VA Expenditures FY2004

    Data.gov (United States)

    Department of Veterans Affairs — This report details VA expenditures at the state, county, and Congressional District level. It includes categories such as Compensation and Pension, Construction,...

  5. Geographic Distribution of VA Expenditures FY1998

    Data.gov (United States)

    Department of Veterans Affairs — This report details VA expenditures at the state, county, and Congressional District level. It includes categories such as Compensation and Pension, Construction,...

  6. Geographic Distribution of VA Expenditures FY2009

    Data.gov (United States)

    Department of Veterans Affairs — This report details VA expenditures at the state, county, and Congressional District level. It includes categories such as Compensation and Pension, Construction,...

  7. Geographic Distribution of VA Expenditures FY2013

    Data.gov (United States)

    Department of Veterans Affairs — This report details VA expenditures at the state, county, and Congressional District level. It includes categories such as Compensation and Pension, Construction,...

  8. Geographic Distribution of VA Expenditures FY2002

    Data.gov (United States)

    Department of Veterans Affairs — This report details VA expenditures at the state, county, and Congressional District level. It includes categories such as Compensation and Pension, Construction,...

  9. Validation of software releases for CMS

    International Nuclear Information System (INIS)

    Gutsche, Oliver

    2010-01-01

    The CMS software stack currently consists of more than 2 million lines of code developed by over 250 authors, with a new version being released every week. CMS has set up a validation process for quality assurance which enables developers to compare the performance of a release to previous releases and references. The validation process provides the developers with reconstructed datasets of real data and MC samples. The samples span the whole range of detector effects and important physics signatures to benchmark the performance of the software. They are used to investigate interdependency effects of all CMS software components and to find and fix bugs. The release validation process described here is an integral part of CMS software development and contributes significantly to ensuring stable production and analysis. It represents a sizable contribution to the overall MC production of CMS. Its success emphasizes the importance of a streamlined release-validation process for projects with a large code basis and a significant number of developers, and it can function as a model for future projects.

  10. 76 FR 44288 - Establishment of Class E Airspace; New Market, VA

    Science.gov (United States)

    2011-07-25

    ...-380; Airspace Docket No. 11-AEA-12] Establishment of Class E Airspace; New Market, VA AGENCY: Federal... proposes to establish Class E Airspace at New Market, VA, to accommodate the additional airspace needed for the Standard Instrument Approach Procedures developed for New Market Airport. This action would...

  11. Benchmarking af kommunernes sagsbehandling

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007, Ankestyrelsen (the Danish National Social Appeals Board) is to carry out benchmarking of the quality of municipal case processing. The purpose of the benchmarking is to develop the design of the practice reviews with a view to better follow-up and to improve municipal case processing. This working paper discusses methods for benchmarking...

  12. Enhanced dissolution rate of dronedarone hydrochloride via preparation of solid dispersion using vinylpyrrolidone-vinyl acetate copolymer (Kollidone® VA 64)

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Hyuck Jun; Kang, Myung Joo [College of Pharmacy, Dankook University, Cheonan (Korea, Republic of); Han, Sang Duk [Dong-A ST Research Institute, Pharmaceutical Product Research Laboratories, Yongin (Korea, Republic of)

    2015-09-15

    Solid dispersion (SD) systems have been widely used to increase the dissolution rate and oral absorption of poorly water-soluble compounds. In order to enhance the dissolution rate of dronedarone hydrochloride (DRN), a recent antiarrhythmic agent, SDs of DRN were formulated using conventional solvent evaporation method with amorphous polymers including hydroxypropyl methyl cellulose (HPMC), poly(vinyl pyrrolidone) (PVP), and vinylpyrrolidone-vinyl acetate copolymer (VA64). The prepared SDs were characterized in terms of drug crystallinity, morphology, and in vitro dissolution profile in aqueous medium. The physical characterization using differential scanning calorimetry and X-ray powder diffraction revealed that the active compound was molecularly dispersed in all polymeric carriers tested, in a stable amorphous form in drug to polymer ratios ranging from 1:0.5 to 1:2. The dissolution rates of DRN in all SDs were much higher than those from the corresponding physical mixture and drug powder alone. In particular, the greatest dissolution enhancement was obtained from the VA64-based SD in a drug to polymer weight ratio of 1:1, achieving almost complete drug release after 120 min at pH 1.2. Thus, VA64-based SD with higher drug dissolution rate along with a simple preparation process is suggested as an alternative for the oral formulation of the benzofuran derivative.

  13. 76 FR 40453 - Agency Information Collection (Application for VA Education Benefits) Activity Under OMB Review

    Science.gov (United States)

    2011-07-08

    ... (Application for VA Education Benefits) Activity Under OMB Review AGENCY: Veterans Benefits Administration... Education Benefits, VA Form 22-1990. b. Application for Family Member to Use Transferred Benefits, VA Form 22-1990E.

  14. HELIOS calculations for UO2 lattice benchmarks

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    1998-01-01

    Calculations for the ANS UO2 lattice benchmark have been performed with the HELIOS lattice-physics code and six of its cross-section libraries. The results obtained from the different libraries permit conclusions to be drawn regarding the adequacy of the energy-group structures and of the ENDF/B-VI evaluation for 238U. Scandpower A/S, the developer of HELIOS, provided Los Alamos National Laboratory with six different cross-section libraries. Three of the libraries were derived directly from Release 3 of ENDF/B-VI (ENDF/B-VI.3) and differ only in the number of groups (34, 89 or 190). The other three libraries are identical to the first three except for a modification to the cross sections for 238U in the resonance range.

  15. MFTF TOTAL benchmark

    International Nuclear Information System (INIS)

    Choy, J.H.

    1979-06-01

    A benchmark of the TOTAL data base management system as applied to the Mirror Fusion Test Facility (MFTF) data base was implemented and run in February and March of 1979. The benchmark was run on an Interdata 8/32 and involved the following tasks: (1) data base design, (2) data base generation, (3) data base load, and (4) development and implementation of programs to simulate MFTF usage of the data base.

  16. The Drill Down Benchmark

    NARCIS (Netherlands)

    P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel

    1998-01-01

    textabstractData Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications. It

  17. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking...

  18. Archetypal Analysis of "Almalek and Albrahmh" from Kalila va Demna

    Directory of Open Access Journals (Sweden)

    Ali Noori

    2016-12-01

    Full Text Available Archetype theory arises from Carl Gustav Jung's school of psychoanalysis. Jung regarded as archetypes the mental images, sediments, and hereditary information residing in the collective unconscious. In other words, an archetype is a general pattern of the repeated experiences and behaviors of human forefathers, rooted in the collective unconscious. The most important archetypes are the anima, the animus, the wise old man, the shadow, the mask (persona), rebirth, the self, and individuation. Kalila va Demna is a book full of allegorical stories which, because of their mythical aspect, lend themselves to archetypal criticism. One story in which the archetypal color is dominant is "Almalek and Albrahmh" from the sixteenth chapter of Kalila va Demna. In this study, archetypes such as the shadow, anima, mask, wise old man, self, and individuation in this story have been analyzed. The tale, being legendary and consisting of a series of dreams, which give it a mythical, symbolic, and interpretable nature, has the capacity for archetypal analysis, and the archetypal approach has great power to analyze such narrations. Examined from this point of view, the story shows that King Heblar, stricken with a kind of psychosis, finds his way into the subconscious through dreams; after a confrontation with his anima, and under the guidance of the wise old man, he meets his self (true self), reaches individuation, and is rescued from distress. It can be said that the king begins his process of individuation with a dream. At the beginning of the story with the help of Iran Dokht, and at the end with the help of the minister, he is released from grief and psychosis.

  19. VA/Q distribution during heavy exercise and recovery in humans: implications for pulmonary edema

    Science.gov (United States)

    Schaffartzik, W.; Poole, D. C.; Derion, T.; Tsukimoto, K.; Hogan, M. C.; Arcos, J. P.; Bebout, D. E.; Wagner, P. D.

    1992-01-01

    Ventilation-perfusion (VA/Q) inequality has been shown to increase with exercise. Potential mechanisms for this increase include nonuniform pulmonary vasoconstriction, ventilatory time constant inequality, reduced large airway gas mixing, and development of interstitial pulmonary edema. We hypothesized that persistence of VA/Q mismatch after ventilation and cardiac output subside during recovery would be consistent with edema; however, rapid resolution would suggest mechanisms related to changes in ventilation and blood flow per se. Thirteen healthy males performed near-maximal cycle ergometry at an inspiratory PO2 of 91 Torr (because hypoxia accentuates VA/Q mismatch on exercise). Cardiorespiratory variables and inert gas elimination patterns were measured at rest, during exercise, and between 2 and 30 min of recovery. Two profiles of VA/Q distribution behavior emerged during heavy exercise: in group 1 an increase in VA/Q mismatch (log SDQ of 0.35 +/- 0.02 at rest and 0.44 +/- 0.02 at exercise; P less than 0.05, n = 7) and in group 2 no change in VA/Q mismatch (n = 6). There were no differences in anthropometric data, work rate, O2 uptake, or ventilation during heavy exercise between groups. Group 1 demonstrated significantly greater VA/Q inequality, lower vital capacity, and higher forced expiratory flow at 25-75% of forced vital capacity for the first 20 min during recovery than group 2. Cardiac index was higher in group 1 both during heavy exercise and 4 and 6 min postexercise. However, both ventilation and cardiac output returned toward baseline values more rapidly than did VA/Q relationships. Arterial pH was lower in group 1 during exercise and recovery. We conclude that greater VA/Q inequality in group 1 and its persistence during recovery are consistent with the hypothesis that edema occurs and contributes to the increase in VA/Q inequality during exercise. This is supported by observation of greater blood flows and acidosis and, presumably therefore

  20. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts to...... contribution to the discussions within the EU-sponsored BEST Thematic Network (Benchmarking European Sustainable Transport), which ran from 2000 to 2003.

  1. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    perceptions of benchmarking will be presented: public benchmarking and best-practice benchmarking. These two types of benchmarking are used to characterize and discuss the Danish benchmarking system and to highlight the effects, possibilities and challenges that follow in the wake of using this kind......Change in construction is high on the agenda for the Danish government, and a comprehensive effort is being made to improve quality and efficiency. This has led to a governmental initiative to bring benchmarking into the Danish construction sector. This paper is an appraisal of benchmarking...... as it is presently carried out in the Danish construction sector. Many different perceptions of benchmarking and of the nature of the construction sector lead to uncertainty in how to perceive and use benchmarking, hence generating uncertainty in understanding its effects. This paper addresses...

  2. 75 FR 26683 - Hospital and Outpatient Care for Veterans Released From Incarceration to Transitional Housing

    Science.gov (United States)

    2010-05-12

    ... difficulty obtaining similar treatment during a transitional period. In particular, if mental health issues... housing upon release from incarceration in a prison or jail. The proposed rule would permit VA to work with these veterans while they are in these programs with the goal of continuing to work with them...

  3. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared performance of software-only to GPU-accelerated. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows

  4. BONFIRE: benchmarking computers and computer networks

    OpenAIRE

    Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker

    2011-01-01

    The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs, used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...

  5. A Nonlinear GARCH (NGARCH) Model for Estimating the Value at Risk (VaR) of the IHSG

    Directory of Open Access Journals (Sweden)

    I KOMANG TRY BAYU MAHENDRA

    2015-06-01

    Full Text Available In investment, risk measurement is important. One risk measure is Value at Risk (VaR). There are many methods that can be used to estimate risk within the VaR framework, one of them the nonlinear GARCH (NGARCH) model. In this research, VaR is determined using the NGARCH model. The NGARCH model allows for asymmetric behaviour in the volatility, distinguishing "good news" (positive returns) from "bad news" (negative returns). Based on the VaR calculations, the higher the confidence level and the longer the investment period, the greater the risk. The VaR determined using the NGARCH model was lower than that determined using the GARCH model.
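The asymmetric volatility recursion described above can be sketched in a few lines. The sketch below is illustrative only: the parameter values, function names, and the normal-quantile shortcut are assumptions rather than the paper's actual estimation procedure. It shows how an NGARCH(1,1) one-step-ahead variance feeds a VaR quantile, and why a higher confidence level yields a larger VaR.

```python
import math

def ngarch_variance(returns, omega, alpha, beta, theta, h0):
    """Nonlinear (asymmetric) GARCH(1,1) variance recursion:
    h[t+1] = omega + alpha * h[t] * (z[t] - theta)**2 + beta * h[t],
    where z[t] is the standardized return. With theta > 0, negative
    shocks ("bad news") raise volatility more than positive ones.
    """
    h = h0
    for r in returns:
        z = r / math.sqrt(h)
        h = omega + alpha * h * (z - theta) ** 2 + beta * h
    return h  # one-step-ahead conditional variance

def var_normal(h_next, confidence=0.99):
    """One-day VaR (as a positive loss fraction) under normal innovations."""
    z = {0.95: 1.645, 0.99: 2.326}[confidence]  # standard normal quantiles
    return z * math.sqrt(h_next)

returns = [0.01, -0.02, 0.005, -0.015, 0.002]  # illustrative daily returns
h_next = ngarch_variance(returns, omega=1e-6, alpha=0.05,
                         beta=0.90, theta=0.5, h0=1e-4)
print(var_normal(h_next, 0.99))
```

With the same forecast variance, the 99% VaR is necessarily larger than the 95% VaR, matching the abstract's observation that risk grows with the confidence level.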

  6. Vectorization of the KENO V.a criticality safety code

    International Nuclear Information System (INIS)

    Hollenbach, D.F.; Dodds, H.L.; Petrie, L.M.

    1991-01-01

    The development of the vector processor, which is used in the current generation of supercomputers and is beginning to be used in workstations, provides the potential for dramatic speed-ups for codes that are able to process data as vectors. Unfortunately, the stochastic nature of Monte Carlo codes prevents the old scalar versions of these codes from taking advantage of vector processors. New Monte Carlo algorithms that process all the histories undergoing the same event as a batch are required. Recently, new vectorized Monte Carlo codes have been developed that show significant speed-ups when compared to their scalar versions or equivalent codes. This paper discusses the vectorization of an already existing and widely used criticality safety code, KENO V.a. All the changes made to KENO V.a are transparent to the user, making it possible to upgrade from the standard scalar version of KENO V.a to the vectorized version without learning a new code.
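The event-batching idea can be illustrated with a toy example. The sketch below is hypothetical, not KENO V.a code, and uses a one-dimensional random walk in place of real neutron transport; it shows how histories undergoing the same event are processed as a batch, so that each inner loop maps naturally onto vector operations rather than following one history at a time.

```python
import random

def batched_random_walk(n_particles, absorb_prob=0.3, max_steps=50, seed=42):
    """Toy event-batched Monte Carlo: all live particle histories are kept
    in a list and the same event is applied to the whole batch at once."""
    rng = random.Random(seed)
    positions = [0.0] * n_particles   # all histories start together
    alive = list(range(n_particles))  # indices of still-active histories
    steps = 0
    while alive and steps < max_steps:
        # Event 1 (whole batch): transport -- every live particle moves.
        for i in alive:
            positions[i] += rng.uniform(-1.0, 1.0)
        # Event 2 (whole batch): collision -- some particles are absorbed.
        alive = [i for i in alive if rng.random() > absorb_prob]
        steps += 1
    return positions, len(alive)

positions, survivors = batched_random_walk(1000)
print(survivors)
```

In a scalar code the two events would be interleaved per history; batching them is what lets a vector processor (or, today, array hardware) apply one operation to many histories at once.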

  7. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in the literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  8. Benchmarking the energy efficiency of commercial buildings

    International Nuclear Information System (INIS)

    Chung, William; Hui, Y.V.; Lam, Y. Miu

    2006-01-01

    Benchmarking energy efficiency is an important tool to promote the efficient use of energy in commercial buildings. Benchmarking models are mostly constructed as a simple benchmark table (percentile table) of energy use, normalized with floor area and temperature. This paper describes a benchmarking process for energy efficiency by means of multiple regression analysis, where the relationship between energy-use intensities (EUIs) and the explanatory factors (e.g., operating hours) is developed. Using the resulting regression model, these EUIs are then normalized by removing the effect of deviance in the significant explanatory factors. The empirical cumulative distribution of the normalized EUI gives a benchmark table (or percentile table of EUI) for benchmarking an observed EUI. The advantage of this approach is that the benchmark table represents a normalized distribution of EUI, taking into account all the significant explanatory factors that affect energy consumption. An application to supermarkets is presented to illustrate the development and use of the benchmarking method.

  9. Numisheet2005 Benchmark Analysis on Forming of an Automotive Underbody Cross Member: Benchmark 2

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    This report presents an international cooperation benchmark effort focusing on simulations of a sheet metal stamping process. A forming process of an automotive underbody cross member using steel and aluminum blanks is used as a benchmark. Simulation predictions from each submission are analyzed via comparison with the experimental results. A brief summary of various models submitted for this benchmark study is discussed. Prediction accuracy of each parameter of interest is discussed through the evaluation of cumulative errors from each submission

  10. SKaMPI: A Comprehensive Benchmark for Public Benchmarking of MPI

    Directory of Open Access Journals (Sweden)

    Ralf Reussner

    2002-01-01

    Full Text Available The main objective of the MPI communication library is to enable portable parallel programming with high performance within the message-passing paradigm. Since the MPI standard has no associated performance model, and makes no performance guarantees, comprehensive, detailed and accurate performance figures for different hardware platforms and MPI implementations are important for the application programmer, both for understanding and possibly improving the behavior of a given program on a given platform, as well as for assuring a degree of predictable behavior when switching to another hardware platform and/or MPI implementation. We term this latter goal performance portability, and address the problem of attaining performance portability by benchmarking. We describe the SKaMPI benchmark which covers a large fraction of MPI, and incorporates well-accepted mechanisms for ensuring accuracy and reliability. SKaMPI is distinguished among other MPI benchmarks by an effort to maintain a public performance database with performance data from different hardware platforms and MPI implementations.

  11. Introduction to benchmark dose methods and U.S. EPA's benchmark dose software (BMDS) version 2.1.1

    International Nuclear Information System (INIS)

    Davis, J. Allen; Gift, Jeffrey S.; Zhao, Q. Jay

    2011-01-01

    Traditionally, the No-Observed-Adverse-Effect-Level (NOAEL) approach has been used to determine the point of departure (POD) from animal toxicology data for use in human health risk assessments. However, this approach is subject to substantial limitations that have been well defined, such as strict dependence on the dose selection, dose spacing, and sample size of the study from which the critical effect has been identified. Also, the NOAEL approach fails to take into consideration the shape of the dose-response curve and other related information. The benchmark dose (BMD) method, originally proposed as an alternative to the NOAEL methodology in the 1980s, addresses many of the limitations of the NOAEL method. It is less dependent on dose selection and spacing, and it takes into account the shape of the dose-response curve. In addition, the estimation of a BMD 95% lower bound confidence limit (BMDL) results in a POD that appropriately accounts for study quality (i.e., sample size). With the recent advent of user-friendly BMD software programs, including the U.S. Environmental Protection Agency's (U.S. EPA) Benchmark Dose Software (BMDS), BMD has become the method of choice for many health organizations world-wide. This paper discusses the BMD methods and corresponding software (i.e., BMDS version 2.1.1) that have been developed by the U.S. EPA, and includes a comparison with recently released European Food Safety Authority (EFSA) BMD guidance.
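A minimal sketch of the BMD idea for quantal data, assuming a log-logistic dose-response model and a 10% extra-risk benchmark response; the dose-response data and starting values are invented, and the BMDL (which BMDS obtains via profile likelihood) is omitted for brevity:

```python
import numpy as np
from scipy.optimize import minimize, brentq

# Hypothetical quantal toxicology data: dose, animals tested, responders.
dose = np.array([0.0, 10.0, 50.0, 150.0, 400.0])
n    = np.array([50, 50, 50, 50, 50])
k    = np.array([1, 3, 8, 19, 38])

def loglogistic(d, g, a, b):
    """P(response) = g + (1 - g) / (1 + exp(-a) * d**(-b)); P(0) = g."""
    d = np.maximum(np.asarray(d, dtype=float), 1e-300)
    with np.errstate(over="ignore"):
        p = g + (1 - g) / (1 + np.exp(-a) * d ** (-b))
    return np.clip(p, 1e-9, 1 - 1e-9)

def negll(theta):
    # Negative binomial log-likelihood of the observed responders.
    p = loglogistic(dose, *theta)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

fit = minimize(negll, x0=[0.02, -4.0, 1.0], method="Nelder-Mead")
g, a, b = fit.x

# BMD: the dose giving 10% extra risk over background, i.e. solve
# (P(d) - P(0)) / (1 - P(0)) = 0.10 for d.
bmr = 0.10
def extra_risk(d):
    p0 = loglogistic(0.0, g, a, b)
    return (loglogistic(d, g, a, b) - p0) / (1 - p0) - bmr

bmd = brentq(extra_risk, 1e-6, dose.max())
print(f"BMD10 = {bmd:.1f}")
```

Unlike a NOAEL, the solved dose is not restricted to one of the tested dose levels, which is the point made in the abstract.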

  12. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a bench-marked series should reproduce the movement and signs in the original series. We show that the widely used variants of Denton (1971) method and the growth

  13. 76 FR 79067 - Payment or Reimbursement for Emergency Treatment Furnished by Non-VA Providers in Non-VA...

    Science.gov (United States)

    2011-12-21

    ... DEPARTMENT OF VETERANS AFFAIRS 38 CFR Part 17 RIN 2900-AN49 Payment or Reimbursement for Emergency..., authorize the Secretary of Veterans Affairs to reimburse eligible veterans for costs related to non-VA.... Specifically, section 1725 authorizes reimbursement for emergency treatment for eligible veterans with...

  14. Power reactor pressure vessel benchmarks

    International Nuclear Information System (INIS)

    Rahn, F.J.

    1978-01-01

    A review is given of the current status of experimental and calculational benchmarks for use in understanding the radiation embrittlement effects in the pressure vessels of operating light water power reactors. The requirements of such benchmarks for application to pressure vessel dosimetry are stated. Recent developments in active and passive neutron detectors sensitive in the ranges of importance to embrittlement studies are summarized and recommendations for improvements in the benchmark are made. (author)

  15. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

    Shielding benchmark problems prepared by Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by Shielding Laboratory in Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are presented newly in addition to twenty-one problems proposed already, for evaluating the calculational algorithm and accuracy of computer codes based on discrete ordinates method and Monte Carlo method and for evaluating the nuclear data used in codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  16. Electricity consumption in school buildings - benchmark and web tools; Elforbrug i skoler - benchmark og webvaerktoej

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-07-01

    The aim of this project has been to produce benchmarks for electricity consumption in Danish schools in order to encourage electricity conservation. An internet programme has been developed with the aim of facilitating schools' access to benchmarks and to evaluate energy consumption. The overall purpose is to create increased attention to the electricity consumption of each separate school by publishing benchmarks which take the schools' age and number of pupils as well as after school activities into account. Benchmarks can be used to make green accounts and work as markers in e.g. energy conservation campaigns, energy management and for educational purposes. The internet tool can be found on www.energiguiden.dk. (BA)

  17. Methods for estimating and comparing VA outpatient drug benefits with the private sector.

    Science.gov (United States)

    Render, Marta L; Nowak, John; Hammond, Emmett K; Roselle, Gary

    2003-06-01

    To estimate and compare Veterans Health Administration (VA) expenditures for outpatient pharmaceuticals for veterans at six VA facilities with hypothetical private sector costs. Using the VA Pharmacy Benefits Management Strategic Health Care Group (PBM) database, we extracted data for all dispensed outpatient prescriptions from the six study sites over federal fiscal year 1999. After extensive data validation, we converted prescriptions to the same units and merged relevant VA pricing information by National Drug Code to Redbook listed average wholesale price and the Medicaid maximal allowable charge, where available. We added total VA drug expenditures to personnel cost from the pharmacy portion of that medical center's cost distribution report. Hypothetical private sector payments were $200.8 million compared with an aggregate VA budget of $118.8 million. Using National Drug Code numbers, 97% of all items dispensed from the six facilities were matched to private sector price data. Nonmatched pharmaceuticals were largely generic over-the-counter pain relievers and commodities like alcohol swabs. The most commonly prescribed medications reflect the diseases and complaints of an older male population: pain, cardiovascular problems, diabetes, and depression or other psychiatric disorders. Use of the VA PBM database permits researchers to merge expenditure and prescription data to patient diagnoses and sentinel events. A critical element in its use is creating similar units among the systems. Such data sets permit a deeper view of the variability in drug expenditures, an important sector of health care whose inflation has been disproportionate to that of the economy and even health care.
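The NDC-keyed merge of dispensing records with private-sector price data can be sketched as below; the column names, prices, and quantities are hypothetical and do not reflect the actual PBM or Redbook schemas:

```python
import pandas as pd

# Hypothetical extracts: VA dispensing records and a private-sector
# price list, both keyed on the National Drug Code (NDC).
rx = pd.DataFrame({
    "ndc": ["0001", "0001", "0002", "0003"],
    "qty_dispensed": [30, 60, 90, 100],
    "va_cost": [4.50, 9.00, 12.00, 2.00],
})
awp = pd.DataFrame({
    "ndc": ["0001", "0002"],
    "awp_per_unit": [0.55, 0.40],   # average wholesale price
})

# A left merge keeps unmatched items (e.g., OTC commodities) visible,
# and the indicator column reports the match rate.
merged = rx.merge(awp, on="ndc", how="left", indicator=True)
merged["private_cost"] = merged["qty_dispensed"] * merged["awp_per_unit"]

match_rate = (merged["_merge"] == "both").mean()
totals = merged[["va_cost", "private_cost"]].sum()
print(f"matched {match_rate:.0%} of line items")
print(totals)
```

Converting both systems to the same dispensing units before the merge, as the abstract stresses, is the step this toy example glosses over.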

  18. Malaria epidemiology in the Pakaanóva (Wari') Indians, Brazilian Amazon.

    Science.gov (United States)

    Sá, D Ribeiro; Souza-Santos, R; Escobar, A L; Coimbra, C E A

    2005-04-01

    This paper reports the results of a longitudinal study of malaria incidence (1998-2002) among the Pakaanóva (Wari') Indians, Brazilian southwest Amazon region, based on data routinely gathered by the Brazilian National Health Foundation outposts network in conjunction with the Indian health service. Malaria is present yearlong in the Pakaanóva. Statistically significant differences between seasons or months were not noticed. A total of 1933 cases of malaria were diagnosed in the Pakaanóva during this period. The P. vivax / P. falciparum ratio was 3.4. P. vivax accounted for 76.5% of the cases. Infections with P. malariae were not recorded. Incidence rates did not differ by sex. Most malaria cases were reported in children < 10 years old (45%). About one fourth of all cases were diagnosed in women 10-40 years old. An entomological survey carried out at two Pakaanóva villages yielded a total of 3,232 specimens of anophelines. Anopheles darlingi predominated (94.4%). Most specimens were captured outdoors, and peak activity hours were noted in early evening and just before sunrise. It was observed that Pakaanóva cultural practices may facilitate outdoor exposure of individuals of both sexes and all age groups during peak hours of mosquito activity (e.g., coming to the river early in the morning for bathing or to draw water, fishing, engaging in hunting camps, etc.). In a context in which anophelines are ubiquitous and predominantly exophilic, and humans of both sexes and all ages are prone to outdoor activities during peak mosquito activity hours, malaria is likely to remain endemic in the Pakaanóva, thus requiring the development of alternative control strategies that are culturally and ecologically sensitive.

  19. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  20. HS06 benchmark for an ARM server

    International Nuclear Information System (INIS)

    Kluth, Stefan

    2014-01-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  1. TrackingNet: A Large-Scale Dataset and Benchmark for Object Tracking in the Wild

    KAUST Repository

    Müller, Matthias; Bibi, Adel Aamer; Giancola, Silvio; Al-Subaihi, Salman; Ghanem, Bernard

    2018-01-01

    Despite the numerous developments in object tracking, further development of current tracking algorithms is limited by small and mostly saturated datasets. As a matter of fact, data-hungry trackers based on deep-learning currently rely on object detection datasets due to the scarcity of dedicated large-scale tracking datasets. In this work, we present TrackingNet, the first large-scale dataset and benchmark for object tracking in the wild. We provide more than 30K videos with more than 14 million dense bounding box annotations. Our dataset covers a wide selection of object classes in broad and diverse context. By releasing such a large-scale dataset, we expect deep trackers to further improve and generalize. In addition, we introduce a new benchmark composed of 500 novel videos, modeled with a distribution similar to our training dataset. By sequestering the annotation of the test set and providing an online evaluation server, we provide a fair benchmark for future development of object trackers. Deep trackers fine-tuned on a fraction of our dataset improve their performance by up to 1.6% on OTB100 and up to 1.7% on TrackingNet Test. We provide an extensive benchmark on TrackingNet by evaluating more than 20 trackers. Our results suggest that object tracking in the wild is far from being solved.
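Tracking benchmarks such as TrackingNet and OTB score trackers by bounding-box overlap; below is a minimal sketch of the IoU-based success metric (the boxes and thresholds are illustrative, not the TrackingNet evaluation server's exact protocol):

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x, y, w, h] boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def success_auc(ious, thresholds=np.linspace(0.0, 1.0, 21)):
    """Fraction of frames with IoU above each threshold, averaged:
    the area under the success curve used to rank trackers."""
    ious = np.asarray(ious)
    return float(np.mean([(ious > t).mean() for t in thresholds]))

# Two frames: predicted vs. ground-truth boxes (invented values).
pred = [[10, 10, 50, 50], [12, 11, 48, 52]]
gt   = [[12, 12, 50, 50], [40, 40, 50, 50]]
frame_ious = [iou(p, g) for p, g in zip(pred, gt)]
print(f"success AUC = {success_auc(frame_ious):.3f}")
```

Sequestering the test-set annotations, as the paper describes, means this computation runs server-side so the per-frame ground truth is never exposed.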

  2. TrackingNet: A Large-Scale Dataset and Benchmark for Object Tracking in the Wild

    KAUST Repository

    Müller, Matthias

    2018-03-28

    Despite the numerous developments in object tracking, further development of current tracking algorithms is limited by small and mostly saturated datasets. As a matter of fact, data-hungry trackers based on deep-learning currently rely on object detection datasets due to the scarcity of dedicated large-scale tracking datasets. In this work, we present TrackingNet, the first large-scale dataset and benchmark for object tracking in the wild. We provide more than 30K videos with more than 14 million dense bounding box annotations. Our dataset covers a wide selection of object classes in broad and diverse context. By releasing such a large-scale dataset, we expect deep trackers to further improve and generalize. In addition, we introduce a new benchmark composed of 500 novel videos, modeled with a distribution similar to our training dataset. By sequestering the annotation of the test set and providing an online evaluation server, we provide a fair benchmark for future development of object trackers. Deep trackers fine-tuned on a fraction of our dataset improve their performance by up to 1.6% on OTB100 and up to 1.7% on TrackingNet Test. We provide an extensive benchmark on TrackingNet by evaluating more than 20 trackers. Our results suggest that object tracking in the wild is far from being solved.

  3. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advances. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards the conditions upon which the market mechanism is performing within organizations. This paper aims to contribute to research by providing more insight to the conditions for the use of external benchmarking as an element in performance management in organizations. Our study explores a particular type of external...

  4. The Burr X Pareto Distribution: Properties, Applications and VaR Estimation

    Directory of Open Access Journals (Sweden)

    Mustafa Ç. Korkmaz

    2017-12-01

    Full Text Available In this paper, a new three-parameter Pareto distribution is introduced and studied. We discuss various mathematical and statistical properties of the new model. Some estimation methods for the model parameters are examined. Moreover, the peaks-over-threshold method is used to estimate Value-at-Risk (VaR) by means of the proposed distribution. We compare the distribution with a few other models to show its versatility in modelling data with heavy tails. VaR estimation with the Burr X Pareto distribution is presented using time series data, and the new model can be considered an alternative to the generalized Pareto model for VaR estimation in financial institutions.
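A sketch of the peaks-over-threshold VaR estimator using the generalized Pareto distribution, the comparator model named in the abstract (the Burr X Pareto itself is not available in scipy); the loss series is simulated, not financial data from the paper:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)
# Hypothetical daily losses (positive = loss) with a heavy right tail.
losses = rng.standard_t(df=4, size=2000)

# Peaks-over-threshold: model exceedances over a high threshold u
# with a generalized Pareto distribution fitted by maximum likelihood.
u = np.quantile(losses, 0.95)
exceedances = losses[losses > u] - u
xi, _, beta = genpareto.fit(exceedances, floc=0.0)

def var_pot(p, n=len(losses), n_u=len(exceedances)):
    """VaR at confidence level p from the fitted POT tail model."""
    return u + (beta / xi) * ((n / n_u * (1.0 - p)) ** (-xi) - 1.0)

print(f"99% VaR = {var_pot(0.99):.3f}")
print(f"99.5% VaR = {var_pot(0.995):.3f}")
```

Swapping in a different tail distribution, as the paper does with the Burr X Pareto, only changes the fitted exceedance model and the closed-form quantile.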

  5. Verification of FA2D Prediction Capability Using Fuel Assembly Benchmark

    International Nuclear Information System (INIS)

    Jecmenica, R.; Pevec, D.; Grgic, D.; Konjarek, D.

    2008-01-01

    FA2D is a 2D transport collision probability code developed at the Faculty of Electrical Engineering and Computing, University of Zagreb. It is used for the calculation of cross-section data at the fuel assembly level. The main objective of its development was the capability to generate cross-section data for fuel management and safety analyses of PWR reactors. Until now, formal verification of the code's prediction capability had not been performed at the fuel assembly level, but results of fuel management calculations obtained using FA2D-generated cross sections for NPP Krsko and the IRIS reactor were compared against Westinghouse calculations. The cross-section data were used within NRC's PARCS code and satisfactory preliminary results were obtained. This paper presents results of calculations performed for the Nuclear Fuel Industries, Ltd., benchmark using FA2D and the SCALE5 TRITON calculation sequence (based on the discrete ordinates code NEWT). Nuclear Fuel Industries, Ltd., Japan, released the LWR Next Generation Fuels Benchmark with the aim of verifying prediction capability in nuclear design for extended burnup regions. We performed calculations for two different benchmark problem geometries: a UO2 pin cell and a UO2 PWR fuel assembly. The results obtained with the two 2D spectral codes are presented for the burnup dependency of the infinite multiplication factor, the isotopic concentrations of important materials, and the local peaking factor vs. burnup (in the case of the fuel assembly calculation). (author)

  6. Benchmarking in Czech Higher Education

    OpenAIRE

    Plaček Michal; Ochrana František; Půček Milan

    2015-01-01

    The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expression of the need for it and the importance of benchmarking as a very suitable performance-management tool in less developed countries are the impetus for the second part of our article. Base...

  7. 75 FR 33216 - Payment or Reimbursement for Emergency Treatment Furnished by Non-VA Providers in Non-VA...

    Science.gov (United States)

    2010-06-11

    ... health care services for veterans).'' Proposed Sec. 17.121(a) would establish the clinical decision maker... practice to utilize the services of health care professionals, such as nurses, for purposes of clinical review. For this reason, establishing the clinical decision maker as a ``designated VA clinician'' would...

  8. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  9. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    Science.gov (United States)

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  10. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective for the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize the blade solicitation and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts: this shape is considered as a fixed parameter in the benchmarking...

  11. VVER-1000 MOX core computational benchmark

    International Nuclear Information System (INIS)

    2006-01-01

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities include benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  12. Testing popular VaR models in EU new member and candidate states

    Directory of Open Access Journals (Sweden)

    Saša Žiković

    2007-12-01

    Full Text Available The impact of allowing banks to calculate their capital requirement based on their internal VaR models, and the impact of regulation changes on banks in transitional countries, has not been well studied. This paper examines whether VaR models that were created for and suited to developed markets apply to the volatile stock markets of EU new member and candidate states (Bulgaria, Romania, Croatia and Turkey). Nine popular VaR models are tested on five stock indexes from EU new member and candidate states. Backtesting results show that VaR models commonly used in developed stock markets are not well suited for measuring market risk in these markets. The presented findings bear very important implications that have to be addressed by regulators and risk practitioners operating in EU new member and candidate states. Risk managers have to start thinking outside the frames set by their parent companies, or else investors present in these markets may find themselves in serious trouble, dealing with losses that they have not been expecting. National regulators have to take into consideration that the simplistic VaR models widely used in some developed countries are not well suited for these illiquid and developing stock markets.
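Backtesting a VaR model, as done in studies like this, typically starts with Kupiec's proportion-of-failures test on the count of VaR exceptions; a minimal sketch with invented exception counts:

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(n_obs, n_exceptions, coverage=0.99):
    """Kupiec proportion-of-failures (POF) likelihood-ratio backtest.

    H0: the model's true exception rate equals 1 - coverage.
    Returns the LR statistic and its chi-square(1) p-value.
    """
    n, x = n_obs, n_exceptions
    p = 1.0 - coverage

    def loglik(q):
        # Binomial log-likelihood of x exceptions in n days at rate q.
        if x == 0:
            return n * np.log1p(-q)
        if x == n:
            return n * np.log(q)
        return (n - x) * np.log1p(-q) + x * np.log(q)

    lr = -2.0 * (loglik(p) - loglik(x / n))
    return lr, float(chi2.sf(lr, df=1))

# Example: 500 trading days of a 99% VaR model with 11 exceptions
# (about 5 would be expected); the model is rejected at the 5% level.
lr, pval = kupiec_pof(n_obs=500, n_exceptions=11, coverage=0.99)
print(f"LR = {lr:.2f}, p-value = {pval:.3f}")
```

The POF test only checks the exception frequency; fuller backtests also examine the independence of exceptions (e.g., Christoffersen's conditional coverage test).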

  13. Analysis of the ITER computational shielding benchmark with the Monte Carlo TRIPOLI-4® neutron gamma coupled calculations

    International Nuclear Information System (INIS)

    Lee, Yi-Kang

    2016-01-01

    Highlights: • Verification and validation of TRIPOLI-4 radiation transport calculations for ITER shielding benchmark. • Evaluation of CEA-V5.1.1 and FENDL-3.0 nuclear data libraries on D–T fusion neutron continuous energy transport calculations. • Advances in nuclear analyses for nuclear heating and radiation damage in iron. • This work also demonstrates that the “safety factors” concept is necessary in the nuclear analyses of ITER. - Abstract: With the growing interest in using the continuous-energy TRIPOLI-4 ® Monte Carlo radiation transport code for ITER applications, a key issue that arises is whether or not the released TRIPOLI-4 code and its associated nuclear data libraries are verified and validated for the D–T fusion neutronics calculations. Previous published benchmark results of TRIPOLI-4 code on the ITER related activities have concentrated on the first wall loading, the reactor dosimetry, the nuclear heating, and the tritium breeding ratio. To enhance the TRIPOLI-4 verification and validation on neutron-gamma coupled calculations for fusion device application, the computational ITER shielding benchmark of M. E. Sawan was performed in this work by using the 2013 released TRIPOLI-4.9S code and the associated CEA-V5.1.1 data library. First wall, blanket, vacuum vessel and toroidal field magnet of the inboard and outboard components were fully modelled in this 1-D toroidal cylindrical benchmark. The 14.1 MeV source neutrons were sampled from a uniform isotropic distribution in the plasma zone. Nuclear responses including neutron and gamma fluxes, nuclear heating, and material damage indicator were benchmarked against previous published results. The capabilities of the TRIPOLI-4 code on the evaluation of above physics parameters were presented. The nuclear data library from the new FENDL-3.0 evaluation was also benchmarked against the CEA-V5.1.1 results for the neutron transport calculations. The results show that both data libraries can be

  14. Shielding benchmark problems

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Kawai, Masayoshi; Nakazawa, Masaharu.

    1978-09-01

    Shielding benchmark problems were prepared by the Working Group of Assessment of Shielding Experiments in the Research Comittee on Shielding Design of the Atomic Energy Society of Japan, and compiled by the Shielding Laboratory of Japan Atomic Energy Research Institute. Twenty-one kinds of shielding benchmark problems are presented for evaluating the calculational algorithm and the accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method and for evaluating the nuclear data used in the codes. (author)

  15. 76 FR 71920 - Payment for Home Health Services and Hospice Care by Non-VA Providers

    Science.gov (United States)

    2011-11-21

    ... concerning the billing methodology for non-VA providers of home health services and hospice care. The proposed rulemaking would include home health services and hospice care under the VA regulation governing... to ``RIN 2900-AN98--Payment for home health and services and hospice care by non-VA providers...

  16. Effects of inspired CO2, hyperventilation, and time on VA/Q inequality in the dog

    Science.gov (United States)

    Tsukimoto, K.; Arcos, J. P.; Schaffartzik, W.; Wagner, P. D.; West, J. B.

    1992-01-01

    In a recent study by Tsukimoto et al. (J. Appl. Physiol. 68: 2488-2493, 1990), CO2 inhalation appeared to reduce the size of the high ventilation-perfusion ratio (VA/Q) mode commonly observed in anesthetized mechanically air-ventilated dogs. In that study, large tidal volumes (VT) were used during CO2 inhalation to preserve normocapnia. To separate the influences of CO2 and high VT on the VA/Q distribution in the present study, we examined the effect of inspired CO2 on the high VA/Q mode using eight mechanically ventilated dogs (4 given CO2, 4 controls). The VA/Q distribution was measured first with normal VT and then with increased VT. In the CO2 group at high VT, data were collected before, during, and after CO2 inhalation. With normal VT, there was no difference in the size of the high VA/Q mode between groups [10.5 +/- 3.5% (SE) of ventilation in the CO2 group, 11.8 +/- 5.2% in the control group]. Unexpectedly, the size of the high VA/Q mode decreased similarly in both groups over time, independently of the inspired PCO2, at a rate similar to the fall in cardiac output over time. The reduction in the high VA/Q mode together with a simultaneous increase in alveolar dead space (estimated by the difference between inert gas dead space and Fowler dead space) suggests that poorly perfused high VA/Q areas became unperfused over time. A possible mechanism is that elevated alveolar pressure and decreased cardiac output eliminate blood flow from corner vessels in nondependent high VA/Q regions.

  17. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    Computer simulation of nuclear power plant response can be a full-scope control room simulator, an engineering simulator to represent the general behavior of the plant under normal and abnormal conditions, or the modeling of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training of vendor/utility personnel, etc. is how well they represent what has been known from industrial experience, large integral experiments and separate effects tests. Typically, simulation codes are benchmarked with some of these; the level of agreement necessary being dependent upon the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result, the capabilities are continually enhanced, errors are corrected, new situations are imposed on the code that are outside of the original design basis, etc. Consequently, there is a continual need to assure that the benchmarks with important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes this capability is accomplished through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks are included in the source code and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients; large integral experiments; and separate effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence; i.e. the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer
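The "dynamic benchmarking" idea, reference results embedded with the code and re-checked on every release, can be sketched generically; all case names, values, and tolerances below are illustrative stand-ins, not MAAP4 internals:

```python
import math

# Archived reference results for a set of benchmark transients that
# ship with the code (hypothetical cases and values).
REFERENCE = {
    "station_blackout_peak_temp_K": 1480.0,
    "lb_loca_peak_pressure_MPa": 0.41,
}

def run_simulation(case):
    # Stand-in for re-running the full simulation with the current code;
    # here it just returns slightly drifted values for illustration.
    return {"station_blackout_peak_temp_K": 1478.6,
            "lb_loca_peak_pressure_MPa": 0.42}[case]

def check_benchmarks(rel_tol=0.05):
    """Re-run every archived benchmark; flag drift beyond tolerance."""
    failures = []
    for case, expected in REFERENCE.items():
        got = run_simulation(case)
        if not math.isclose(got, expected, rel_tol=rel_tol):
            failures.append((case, expected, got))
    return failures

failures = check_benchmarks()
print("all benchmarks preserved" if not failures else failures)
```

Running this check on every archive upgrade is what preserves the benchmarking pedigree the passage describes, even as the code itself evolves.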

  18. Benchmarking von Krankenhausinformationssystemen – eine vergleichende Analyse deutschsprachiger Benchmarkingcluster

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. In recent years, several benchmarking clusters have been established in the German-speaking countries. They support hospitals in comparing and positioning their information system's and information management's costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme covers both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries in recent years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. The most frequent benchmarking subjects are the costs and quality of application systems, physical data processing systems, organizational structures of information management, and IT service processes. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  19. 78 FR 66265 - Drawbridge Operation Regulation; Elizabeth River, Eastern Branch, Norfolk, VA

    Science.gov (United States)

    2013-11-05

    ... Operation Regulation; Elizabeth River, Eastern Branch, Norfolk, VA AGENCY: Coast Guard, DHS. ACTION: Notice... Elizabeth River Eastern Branch, mile 1.1, at Norfolk, VA. This deviation is necessary to facilitate... maintenance. The Norfolk Southern 5 railroad Bridge, at mile 1.1, across the Elizabeth River (Eastern Branch...

  20. Medical school benchmarking - from tools to programmes.

    Science.gov (United States)

    Wilkinson, Tim J; Hudson, Judith N; Mccoll, Geoffrey J; Hu, Wendy C Y; Jolly, Brian C; Schuwirth, Lambert W T

    2015-02-01

    Benchmarking among medical schools is essential, but may result in unwanted effects. To apply a conceptual framework to selected benchmarking activities of medical schools. We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. The analogy generated a conceptual framework that tested five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and, what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.

  1. 46 CFR 7.55 - Cape Henry, VA to Cape Fear, NC.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Cape Henry, VA to Cape Fear, NC. 7.55 Section 7.55 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY PROCEDURES APPLICABLE TO THE PUBLIC BOUNDARY LINES Atlantic Coast § 7.55 Cape Henry, VA to Cape Fear, NC. (a) A line drawn from Rudee Inlet Jetty Light “2” to...

  2. Shielding Benchmark Computational Analysis

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-01-01

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for developing more accurate cross-section libraries, improving radiation transport computer codes, and building accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper, benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC).

  3. Can reported VaR be used as an indicator of the volatility of share prices? Evidence from UK banks.

    OpenAIRE

    Ou, Shian Kao

    2006-01-01

    Value at Risk (VaR) is used as an indicator to measure the risks contained in a firm. With the rapid development of VaR theory and computational techniques, VaR is nowadays adopted by banks and reported in annual reports. Since the method used to calculate VaR is questioned, and the reported VaR cannot be thoroughly audited, this paper attempts to find the relationship between the reported VaR and the volatility of share price for UK listed banks. This paper reviews literature about VaR an...
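
The abstract does not say how the banks compute their reported VaR; one standard textbook approach is historical simulation, sketched below on simulated return data (the function name, parameters, and data are our own, purely for illustration):

```python
# Minimal historical-simulation VaR sketch: VaR at confidence c is the
# (1 - c) empirical quantile of past returns, reported as a positive loss.
import numpy as np

def historical_var(returns, confidence=0.95):
    """One-day Value at Risk at the given confidence level."""
    return -np.quantile(returns, 1.0 - confidence)

rng = np.random.default_rng(0)
daily_returns = rng.normal(loc=0.0, scale=0.01, size=1000)  # invented history
var_95 = historical_var(daily_returns, confidence=0.95)
print(f"95% one-day VaR: {var_95:.4f}")  # loss exceeded on roughly 5% of days
```

Other approaches (parametric variance-covariance, Monte Carlo) exist; the paper's point is precisely that outsiders cannot audit which method produced a reported figure.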

  4. Issues in Benchmark Metric Selection

    Science.gov (United States)

    Crolotte, Alain

    It is true that a metric can influence a benchmark, but will esoteric metrics create more problems than they solve? We answer this question affirmatively by examining the case of the TPC-D metric, which used the much-debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives, our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
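
A toy calculation (numbers invented) illustrates the debate: the geometric mean strongly rewards making one query extremely fast, while the arithmetic mean tracks total elapsed time more closely.

```python
# Compare arithmetic vs geometric mean on two hypothetical query-time sets.
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

balanced = [10.0, 10.0, 10.0, 10.0]  # query times in seconds (invented)
skewed   = [0.1, 13.0, 13.0, 13.0]   # one query optimized to near zero

print(arithmetic_mean(balanced), geometric_mean(balanced))
print(arithmetic_mean(skewed), geometric_mean(skewed))
```

Here the skewed system's geometric mean drops below 4 s even though its total elapsed time is nearly as large as the balanced system's, showing how the metric choice can steer optimization effort toward outlier queries.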

  5. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used in services across the National Health Service (NHS) using various benchmarking programs. Clinical photography services do not have a program in place and services have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  6. VA's National PTSD Brain Bank: a National Resource for Research.

    Science.gov (United States)

    Friedman, Matthew J; Huber, Bertrand R; Brady, Christopher B; Ursano, Robert J; Benedek, David M; Kowall, Neil W; McKee, Ann C

    2017-08-25

    The National PTSD Brain Bank (NPBB) is a brain tissue biorepository established to support research on the causes, progression, and treatment of PTSD. It is a six-part consortium led by VA's National Center for PTSD with participating sites at VA medical centers in Boston, MA; Durham, NC; Miami, FL; West Haven, CT; and White River Junction, VT along with the Uniformed Services University of Health Sciences. It is also well integrated with VA's Boston-based brain banks that focus on Alzheimer's disease, ALS, chronic traumatic encephalopathy, and other neurological disorders. This article describes the organization and operations of NPBB with specific attention to: tissue acquisition, tissue processing, diagnostic assessment, maintenance of a confidential data biorepository, adherence to ethical standards, governance, accomplishments to date, and future challenges. Established in 2014, NPBB has already acquired and distributed brain tissue to support research on how PTSD affects brain structure and function.

  7. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless not public. The survey is a cooperative project "Benchmarking Danish Industries" with CIP/Aalborg University, the Danish Technological University, the Danish Technological Institute and Copenhagen Business School as consortia partners. The project has been funded by the Danish Agency for Trade and Industry...

  8. Energy transformation, transfer, and release dynamics in high speed turbulent flows

    Science.gov (United States)

    2017-03-01

    Secondly, a new high-order (4th-order) convective flux formulation was developed that uses the tabulated information, yet produces a fully consistent... Klippenstein 2012 Comprehensive H2/O2 Kinetic Model for High-Pressure Combustion. Int. J. Chem. Kinetics 44:444-474. Cabot, W.H., A.W. Cook, P.L. Miller, D.E... AFRL-AFOSR-VA-TR-2017-0054 Energy Transformation, Transfer, and Release Dynamics in High-Speed Turbulent Flows. Paul Dimotakis, CALIFORNIA INSTITUTE

  9. Feasibility and acceptability of interventions to delay gun access in VA mental health settings.

    Science.gov (United States)

    Walters, Heather; Kulkarni, Madhur; Forman, Jane; Roeder, Kathryn; Travis, Jamie; Valenstein, Marcia

    2012-01-01

    The majority of VA patient suicides are completed with firearms. Interventions that delay patients' gun access during high-risk periods may reduce suicide, but may not be acceptable to VA stakeholders or may be challenging to implement. Using qualitative methods, stakeholders' perceptions about gun safety and interventions to delay gun access during high-risk periods were explored. Ten focus groups and four individual interviews were conducted with key stakeholders, including VA mental health patients, mental health clinicians, family members and VA facility leaders (N=60). Transcripts were consensus-coded by two independent coders, and structured summaries were developed and reviewed using a consensus process. All stakeholder groups indicated that VA health system providers had a role in increasing patient safety and emphasized the need for providers to address gun access with their at-risk patients. However, VA mental health patients and clinicians reported limited discussion regarding gun access in VA mental health settings during routine care. Most, although not all, patients and clinicians indicated that routine screening for gun access was acceptable, with several noting that it was more acceptable for mental health patients. Most participants suggested that family and friends be involved in reducing gun access, but expressed concerns about potential family member safety. Participants generally found distribution of trigger locks acceptable, but were skeptical about its effectiveness. Involving Veteran Service Organizations or other individuals in temporarily holding guns during high-risk periods was acceptable to many participants but only with numerous caveats. Patients, clinicians and family members consider the VA health system to have a legitimate role in addressing gun safety. Several measures to delay gun access during high-risk periods for suicide were seen as acceptable and feasible if implemented thoughtfully. Published by Elsevier Inc.

  10. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-11-01

    Full Text Available This paper reviews the role of human resource management (HRM), which today plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  11. Integrating Best Practice and Performance Indicators To Benchmark the Performance of a School System. Benchmarking Paper 940317.

    Science.gov (United States)

    Cuttance, Peter

    This paper provides a synthesis of the literature on the role of benchmarking, with a focus on its use in the public sector. Benchmarking is discussed in the context of quality systems, of which it is an important component. The paper describes the basic types of benchmarking, pertinent research about its application in the public sector, the…

  12. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    In order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all, it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly, ‘sustainable transport’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly, policies are not directly comparable across space and context. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which...

  13. 46 CFR 7.45 - Cape Henlopen, DE to Cape Charles, VA.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Cape Henlopen, DE to Cape Charles, VA. 7.45 Section 7.45... Atlantic Coast § 7.45 Cape Henlopen, DE to Cape Charles, VA. (a) A line drawn from the easternmost extremity of Indian River Inlet North Jetty to latitude 38°36.5′ N. longitude 75°02.8′ W. (Indian River...

  14. VA Enterprise Design Patterns - 5.1 (Mobility) Mobile

    Data.gov (United States)

    Department of Veterans Affairs — First of a set of guidance documents that establish the architectural foundation for mobile computing in the VA. This document outlines the enterprise capabilities...

  15. VA Enterprise Design Patterns - 2.5 (Enterprise Architecture)

    Data.gov (United States)

    Department of Veterans Affairs — Enterprise architectural guidelines and constraints that provide references to the use of enterprise capabilities that will enable the VA to access and exchange data...

  16. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  17. Benchmarking for controllere: metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels Erik; Dietrichson, Lars Grubbe

    2008-01-01

    Benchmarking figures in many ways in the management practice of both private and public organizations. In management accounting, benchmark-based indicators (or key figures) are used, for example when setting targets in performance contracts or when specifying the desired level of certain key figures in a Balanced Scorecard or similar performance management models. The article explains the concept of benchmarking by presenting and discussing its various facets, and describes four different applications of benchmarking to show the breadth of the concept and the importance of clarifying the purpose of a benchmarking project. It then treats the difference between results benchmarking and process benchmarking, followed by the use of internal versus external benchmarking and the use of benchmarking in budgeting and budget follow-up....

  18. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision...

  19. 38 CFR 74.26 - What types of business information will VA collect?

    Science.gov (United States)

    2010-07-01

    ... VETERANS AFFAIRS (CONTINUED) VETERANS SMALL BUSINESS REGULATIONS Records Management § 74.26 What types of business information will VA collect? VA will examine a variety of business records. See § 74.12, “What is... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false What types of business...

  20. EPA's Benchmark Dose Modeling Software

    Science.gov (United States)

    The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods in EPA's human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...
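
In outline, a BMD calculation fits a dose-response model to the data and then inverts it at a chosen benchmark response (BMR). The sketch below uses an invented one-parameter exponential model with an assumed fitted slope; it is not BMDS code and the numbers are illustrative only.

```python
# Toy BMD calculation: invert a dose-response model at a 10% extra risk (BMR).
import math

def extra_risk(dose, k):
    """Extra risk under a one-parameter exponential dose-response model."""
    return 1.0 - math.exp(-k * dose)

def benchmark_dose(k, bmr=0.10):
    """Dose producing a given extra risk (BMR), by inverting the model."""
    return -math.log(1.0 - bmr) / k

k = 0.05  # assumed fitted slope (per mg/kg-day), invented for this sketch
bmd = benchmark_dose(k, bmr=0.10)
print(f"BMD10 = {bmd:.2f} mg/kg-day")
```

In practice BMDS fits a suite of models to the observed data, selects among them, and also reports a statistical lower bound (the BMDL), none of which is shown here.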

  1. Accelerator shielding benchmark problems

    International Nuclear Information System (INIS)

    Hirayama, H.; Ban, S.; Nakamura, T.

    1993-01-01

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author)

  2. Analysis of the ITER computational shielding benchmark with the Monte Carlo TRIPOLI-4{sup ®} neutron gamma coupled calculations

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yi-Kang, E-mail: yi-kang.lee@cea.fr

    2016-11-01

    Highlights: • Verification and validation of TRIPOLI-4 radiation transport calculations for ITER shielding benchmark. • Evaluation of CEA-V5.1.1 and FENDL-3.0 nuclear data libraries on D–T fusion neutron continuous energy transport calculations. • Advances in nuclear analyses for nuclear heating and radiation damage in iron. • This work also demonstrates that the “safety factors” concept is necessary in the nuclear analyses of ITER. - Abstract: With the growing interest in using the continuous-energy TRIPOLI-4{sup ®} Monte Carlo radiation transport code for ITER applications, a key issue that arises is whether or not the released TRIPOLI-4 code and its associated nuclear data libraries are verified and validated for the D–T fusion neutronics calculations. Previous published benchmark results of TRIPOLI-4 code on the ITER related activities have concentrated on the first wall loading, the reactor dosimetry, the nuclear heating, and the tritium breeding ratio. To enhance the TRIPOLI-4 verification and validation on neutron-gamma coupled calculations for fusion device application, the computational ITER shielding benchmark of M. E. Sawan was performed in this work by using the 2013 released TRIPOLI-4.9S code and the associated CEA-V5.1.1 data library. First wall, blanket, vacuum vessel and toroidal field magnet of the inboard and outboard components were fully modelled in this 1-D toroidal cylindrical benchmark. The 14.1 MeV source neutrons were sampled from a uniform isotropic distribution in the plasma zone. Nuclear responses including neutron and gamma fluxes, nuclear heating, and material damage indicator were benchmarked against previous published results. The capabilities of the TRIPOLI-4 code on the evaluation of above physics parameters were presented. The nuclear data library from the new FENDL-3.0 evaluation was also benchmarked against the CEA-V5.1.1 results for the neutron transport calculations. The results show that both data libraries

  3. Calculations of the IAEA-CRP-6 Benchmark Cases by Using the ABAQUS FE Model for a Comparison with the COPA Results

    International Nuclear Information System (INIS)

    Cho, Moon-Sung; Kim, Y. M.; Lee, Y. W.; Jeong, K. C.; Kim, Y. K.; Oh, S. C.

    2006-01-01

    The fundamental design for a gas-cooled reactor relies on an understanding of the behavior of a coated particle fuel. KAERI, which has been carrying out the Korean VHTR (Very High Temperature modular gas cooled Reactor) Project since 2004, is developing a fuel performance analysis code for a VHTR named COPA (COated Particle fuel Analysis). COPA predicts temperatures, stresses, a fission gas release and failure probabilities of a coated particle fuel in normal operating conditions. Validation of COPA in the process of its development is realized partly by participating in the benchmark section of the international CRP-6 program led by IAEA which provides comprehensive benchmark problems and analysis results obtained from the CRP-6 member countries. Apart from the validation effort through the CRP-6, a validation of COPA was attempted by comparing its benchmark results with the visco-elastic solutions obtained from the ABAQUS code calculations for the same CRP-6 TRISO coated particle benchmark problems involving creep, swelling, and pressure. The study shows the calculation results of the IAEA-CRP-6 benchmark cases 5 through 7 by using the ABAQUS FE model for a comparison with the COPA results

  4. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm ..., founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  5. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity.

  6. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying factors for exposure and outcome in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  7. Benchmarking gate-based quantum computers

    Science.gov (United States)

    Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans

    2017-11-01

    With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
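
The authors' circuits are not reproduced here, but the principle behind an identity-operation benchmark can be sketched with a small state-vector simulation (the error model, function names, and parameter values below are invented for illustration): a gate sequence that should compose to the identity returns the qubit to |0⟩ only when the gates are accurate, so the survival probability decays with circuit depth when each gate carries a small systematic error.

```python
# Toy identity-circuit benchmark: pairs of (X, inverse-X) pulses should be a
# no-op, but a small over-rotation on each pulse accumulates with depth.
import numpy as np

def rx(theta):
    """Single-qubit rotation about the X axis."""
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -1j * s], [-1j * s, c]])

def identity_circuit_fidelity(n_pairs, over_rotation=0.02):
    """Probability of recovering |0> after n_pairs of noisy (X, X^-1) pulses."""
    state = np.array([1.0, 0.0], dtype=complex)
    for _ in range(n_pairs):
        state = rx(np.pi + over_rotation) @ state    # noisy X gate
        state = rx(-np.pi + over_rotation) @ state   # noisy inverse-X gate
    return float(abs(state[0]) ** 2)

for depth in (0, 10, 50):
    print(depth, round(identity_circuit_fidelity(depth), 4))
```

The monotonic decay of this survival probability with depth is what makes identity circuits sensitive probes of gate error, mirroring the abstract's claim; on real hardware the measured counts rather than a simulated state vector would be used.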

  8. A Heterogeneous Medium Analytical Benchmark

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1999-01-01

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels of coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy).

  9. Benchmarking i eksternt regnskab og revision

    DEFF Research Database (Denmark)

    Thinggaard, Frank; Kiertzner, Lars

    2001-01-01

    ... continuously in a benchmarking process. This chapter broadly examines the extent to which the benchmarking concept can reasonably be linked to external financial reporting and auditing. Section 7.1 deals with the external annual report, while Section 7.2 addresses the auditing area. The final section of the chapter summarizes the considerations on benchmarking in relation to both areas....

  10. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Full Text Available Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
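
One possible realization of such a scoring system (our construction, not the paper's; all names and data below are invented) is to normalize each variable's RMSE by the observed variability and average the resulting skill scores across benchmark variables:

```python
# Sketch of a combined data-model mismatch score across benchmark variables.
import numpy as np

def skill_score(model: np.ndarray, obs: np.ndarray) -> float:
    """1.0 = perfect match; 0.0 = error as large as the observed spread."""
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    return float(max(0.0, 1.0 - rmse / np.std(obs)))

def combined_score(pairs: dict) -> float:
    """Average skill over benchmark variables (e.g. carbon flux, soil water)."""
    return float(np.mean([skill_score(m, o) for m, o in pairs.values()]))

obs_flux = np.array([2.0, 3.0, 5.0, 4.0, 1.0])    # invented observations
model_flux = np.array([2.2, 2.8, 4.5, 4.4, 1.3])  # invented model output
print(combined_score({"carbon_flux": (model_flux, obs_flux)}))
```

Weighting the per-variable scores by process importance or data quality, as the paper's discussion of thresholds and scales suggests, would be a natural extension of this sketch.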

  11. EPRI depletion benchmark calculations using PARAGON

    International Nuclear Information System (INIS)

    Kucukboyaci, Vefa N.

    2015-01-01

    Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • Benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from the PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII based data reduces the excess conservatism and brings the predictions closer to benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases for burnups of 10, 20, 30, 40, 50, and 60 GWd/MTU and three cooling times (100 h, 5 years, and 15 years). These benchmark cases are analyzed with PARAGON and the SCALE package, and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to confirm that the 5% decrement approach is conservative for determining depletion uncertainty
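
    The quantity these benchmark cases exercise is the depletion reactivity decrement between fresh and depleted fuel. A minimal sketch of the arithmetic, using invented k-eff values purely for illustration (not EPRI benchmark data), with a Kopp-memo-style 5% penalty applied on top:

```python
def reactivity(k_eff):
    """Reactivity rho = (k_eff - 1) / k_eff."""
    return (k_eff - 1.0) / k_eff

def depletion_decrement_pcm(k_fresh, k_depleted):
    """Reactivity decrement from fresh to depleted fuel, in pcm (1e-5 delta-rho)."""
    return (reactivity(k_fresh) - reactivity(k_depleted)) * 1e5

# Illustrative (not benchmark) k-eff values for a fresh and a depleted lattice:
decrement = depletion_decrement_pcm(1.15, 1.02)

# 5% decrement penalty: treat the depletion worth as 5% larger than calculated
penalized_decrement = 1.05 * decrement
```

    Under the 5% approach, the conservatism scales with the decrement itself, which is why measured-decrement benchmarks such as EPRI's can show the fixed percentage to be more than sufficient.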

  12. Ad hoc committee on reactor physics benchmarks

    International Nuclear Information System (INIS)

    Diamond, D.J.; Mosteller, R.D.; Gehin, J.C.

    1996-01-01

    In the spring of 1994, an ad hoc committee on reactor physics benchmarks was formed under the leadership of two American Nuclear Society (ANS) organizations. The ANS-19 Standards Subcommittee of the Reactor Physics Division and the Computational Benchmark Problem Committee of the Mathematics and Computation Division had both seen a need for additional benchmarks to help validate computer codes used for light water reactor (LWR) neutronics calculations. Although individual organizations had employed various means to validate the reactor physics methods that they used for fuel management, operations, and safety, additional work in code development and refinement is under way, and to increase accuracy, there is a need for a corresponding increase in validation. Both organizations thought that there was a need to promulgate benchmarks based on measured data to supplement the LWR computational benchmarks that have been published in the past. By having an organized benchmark activity, the participants also gain by being able to discuss their problems and achievements with others traveling the same route

  13. Interaction of blood coagulation factor Va with phospholipid vesicles examined by using lipophilic photoreagents

    International Nuclear Information System (INIS)

    Krieg, U.C.; Isaacs, B.S.; Yemul, S.S.; Esmon, C.T.; Bayley, H.; Johnson, A.E.

    1987-01-01

    Two different lipophilic photoreagents, [3H]adamantane diazirine and 3-(trifluoromethyl)-3-(m-[125I]iodophenyl)diazirine (TID), have been utilized to examine the interactions of blood coagulation factor Va with calcium, prothrombin, factor Xa, and, in particular, phospholipid vesicles. With each of these structurally dissimilar reagents, the extent of photolabeling of factor Va was greater when the protein was bound to a membrane surface than when it was free in solution. Specifically, the covalent photoreaction with Vl, the smaller subunit of factor Va, was 2-fold higher in the presence of phosphatidylcholine/phosphatidylserine (PC/PS, 3:1) vesicles, to which factor Va binds, than in the presence of 100% PC vesicles, to which the protein does not bind. However, the magnitude of the PC/PS-dependent photolabeling was much less than has been observed previously with integral membrane proteins. It therefore appears that the binding of factor Va to the membrane surface exposes Vl to the lipid core of the bilayer, but that only a small portion of the Vl polypeptide is exposed to, or embedded in, the bilayer core. Addition of either prothrombin or active-site-blocked factor Xa to PC/PS-bound factor Va had little effect on the photolabeling of Vl with TID, but reduced substantially the covalent labeling of Vh, the larger subunit of factor Va. This indicates that prothrombin and factor Xa each cover nonpolar surfaces on Vh when the macromolecules associate on the PC/PS surface. It therefore seems likely that the formation of the prothrombinase complex involves a direct interaction between Vh and factor Xa and between Vh and prothrombin. (ABSTRACT TRUNCATED AT 250 WORDS)

  14. Electroencephalogram (EEG) spectral features discriminate between Alzheimer’s (AD) and vascular dementia (VaD)

    Directory of Open Access Journals (Sweden)

    Emanuel eNeto

    2015-02-01

    Full Text Available Alzheimer’s disease (AD) and vascular dementia (VaD) present with similar clinical symptoms of cognitive decline, but the underlying pathophysiological mechanisms differ. To determine whether clinical electroencephalography (EEG) can provide information relevant to discriminating between these diagnoses, we used quantitative EEG analysis to compare the spectra between non-medicated patients with AD (n=77) and VaD (n=77) and healthy elderly normal controls (NC; n=77). We used curve-fitting with a combination of a power-loss and a Gaussian function to model the averaged resting-state spectra of each EEG channel, extracting six parameters. We assessed the performance of our model and tested the extracted parameters for group differentiation. We performed regression analysis in a MANCOVA with group, age, gender, and number of epochs as predictors and further explored the topographical group differences with pair-wise contrasts. Significant topographical differences between the groups were found in several of the extracted features. Both AD and VaD groups showed increased delta power when compared to NC, whereas the AD patients showed a decrease in alpha power for occipital and temporal regions when compared with NC. The VaD patients had higher alpha power than NC and AD. The AD and VaD groups showed slowing of the alpha rhythm. Variability of the alpha frequency was wider for both AD and VaD groups. There was a general decrease in beta power for both AD and VaD. The proposed model is useful for parameterizing spectra, allowing the extraction of clinically relevant EEG features that move towards simple and interpretable diagnostic criteria.
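
    The curve-fitting approach can be sketched as follows. The functional form below (a power-loss background plus a constant offset and a Gaussian alpha peak, six parameters in total) and all parameter values are illustrative assumptions, not the authors' published model; the fit is demonstrated on synthetic data with SciPy.

```python
import numpy as np
from scipy.optimize import curve_fit

def spectrum_model(f, a, b, c, peak, mu, sigma):
    """Assumed spectral form: a*f^(-b) power-loss background + offset c
    + Gaussian peak of height `peak` centered at `mu` with width `sigma`."""
    return a * f ** (-b) + c + peak * np.exp(-((f - mu) ** 2) / (2 * sigma ** 2))

# Synthetic resting-state-like spectrum: 1/f background plus a 10 Hz alpha peak
f = np.linspace(1.0, 30.0, 200)
true_params = (20.0, 1.0, 0.5, 5.0, 10.0, 1.5)
power = spectrum_model(f, *true_params)

# Fit the six parameters back from the spectrum (rough initial guess p0)
popt, _ = curve_fit(spectrum_model, f, power, p0=(10, 0.8, 0, 3, 9, 1))
alpha_peak_hz = popt[4]  # fitted alpha-peak frequency
```

    Parameters fitted this way (e.g. the alpha-peak frequency and width) are exactly the kind of per-channel features the study compares across the AD, VaD, and NC groups.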

  15. LIFE JOURNEY: MEDICAL AND SCIENTIFIC WORK OF PROFESSOR V.A. SOKOLOV

    Directory of Open Access Journals (Sweden)

    P. A. Ivanov

    2017-01-01

    Full Text Available The article is dedicated to doctor of medicine, professor V.A. Sokolov, who celebrates his eightieth birthday in 2017. Professor V.A. Sokolov is one of the founders of polytrauma treatment in the USSR and Russia. For a long time he headed the polytrauma department at the N.V. Sklifosovsky Research Institute for Emergency Medicine. Due to his work, algorithms for life support and recovery of severely injured patients were developed. Professor V.A. Sokolov is the author of 6 monographs and about 300 journal papers. Besides, he is the holder of 32 patents, and some of his inventions were put into widespread production. His active scientific work has resulted in 6 doctoral dissertations and 15 candidate theses. The staff of the N.V. Sklifosovsky Research Institute for Emergency Medicine congratulates him on the anniversary.

  16. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  17. MTCB: A Multi-Tenant Customizable database Benchmark

    NARCIS (Netherlands)

    van der Zijden, WIm; Hiemstra, Djoerd; van Keulen, Maurice

    2017-01-01

    We argue that there is a need for Multi-Tenant Customizable OLTP systems. Such systems need a Multi-Tenant Customizable Database (MTC-DB) as a backing. To stimulate the development of such databases, we propose the benchmark MTCB. Benchmarks for OLTP exist and multi-tenant benchmarks exist, but no

  18. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  19. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth.

  20. Benchmarking the Netherlands. Benchmarking for growth

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity

  1. 75 FR 25321 - Agency Information Collection (VA National Rehabilitation Special Events, Event Registration...

    Science.gov (United States)

    2010-05-07

    ... Winter Sports Clinic Application, VA Form 0924a series. b. National Veterans Wheelchair Games Application.... National Veterans TEE Tournament Application, VA Form 0927a series. e. National Veterans Summer Sports... Form 0929a series. Type of Review: Existing collection in use without an OMB control number. Abstract...

  2. V.A. Gorodtsov and Kazan: tour 1920

    Directory of Open Access Journals (Sweden)

    Kuzminykh Sergey V.

    2014-12-01

    Full Text Available A fragment of an archival document is published, connected with the stay in Kazan on September 8-12, 1920 of V.A. Gorodtsov, then head of the Archaeological Subdepartment of the Museum Department of the RSFSR People’s Commissariat for Education, in the framework of his inspection tour of the towns of the Volga and Urals region. The document is a diary whose entries record information about the tour and its results that was not exhaustively reflected in official documentation. It describes meetings, polemical exchanges, Gorodtsov’s addresses to scientists and the public, his impressions of the archaeological investigations in the regions, and the state of the museums and collections. V.A. Gorodtsov’s encounters and personal contacts with B.F. Adler, N.F. Katanov, M.G. Hudyakov and other researchers played a positive role in the development of archaeology in the Volga-Kama region during the hardest times after the revolution.

  3. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...

  4. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  5. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    and questioning the concept objectively. This paper addresses the underlying nature of benchmarking, and accounts for the importance of focusing attention on the sociological impacts benchmarking has in organizations. To understand these sociological impacts, benchmarking research needs to transcend...... the perception of benchmarking systems as secondary and derivative and instead studying benchmarking as constitutive of social relations and as irredeemably social phenomena. I have attempted to do so in this paper by treating benchmarking using a calculative practice perspective, and describing how...

  6. Establishing benchmarks and metrics for utilization management.

    Science.gov (United States)

    Melanson, Stacy E F

    2014-01-01

    The changing environment of healthcare reimbursement is rapidly leading to a renewed appreciation of the importance of utilization management in the clinical laboratory. The process of benchmarking of laboratory operations is well established for comparing organizational performance to other hospitals (peers) and for trending data over time through internal benchmarks. However, there are relatively few resources available to assist organizations in benchmarking for laboratory utilization management. This article will review the topic of laboratory benchmarking with a focus on the available literature and services to assist in managing physician requests for laboratory testing. © 2013.

  7. Removal of wrecks and sunken objects in light of the 2013 amendments to the Maritime Code

    Directory of Open Access Journals (Sweden)

    Vesna Skorupan Wolff

    2017-11-01

    Full Text Available The removal of wrecks and sunken objects is governed by provisions of an administrative-law nature that regulate the relations between the owner of the wreck or sunken object, or the authorized person, and the administrative bodies, and that govern the various legal aspects of the removal procedure. The right to remove a wreck or sunken object belongs primarily to its owner or to the authorized person. The law leaves owners of wrecks and sunken objects, or authorized persons, an appropriate period within which they may initiate administrative proceedings to obtain a permit for removal. In this way the inviolability of ownership is guaranteed, and the principle is established that the fact that an object has sunk or run aground does not directly affect the ownership rights of its previous owner. However, if the owner or authorized person does not request authorization to remove the wreck or sunken object, or interrupts or abandons a removal already begun without justified reason, or if the authorized person is unknown, the Maritime Code provides a legal framework under which removal may be undertaken by an honest finder or by the harbormaster's office. Within the institute of removal of wrecks and sunken objects, a special legal regime is introduced for objects found at sea, and all stages of the removal procedure undertaken by an honest finder or the harbormaster's office are precisely regulated. All relevant questions concerning the handling of removed objects are also regulated, such as their safekeeping and, in prescribed cases, their sale at public auction. The Maritime Code precisely regulates the obligations that arise between the owner or authorized person and the honest finder, or between the owner or authorized person and the harbormaster's office, depending on who undertook the removal, concerning payment of compensation for removal and safekeeping, the finder's reward, and other claims that the law grants to honest finders and harbormasters' offices. Within this framework, a special property-law regime is also established for the acquisition of ownership

  8. VA Dental Insurance Program--federalism. Direct final rule; confirmation of effective date.

    Science.gov (United States)

    2014-03-20

    The Department of Veterans Affairs (VA) published a direct final rule in the Federal Register on October 22, 2013, amending its regulations related to the VA Dental Insurance Program (VADIP), a pilot program to offer premium-based dental insurance to enrolled veterans and certain survivors and dependents of veterans. Specifically, this rule adds language to clarify the limited preemptive effect of certain criteria in the VADIP regulations. VA received no comments concerning this rule or its companion substantially identical proposed rule published in the Federal Register on October 23, 2013. This document confirms that the direct final rule became effective on December 23, 2013. In a companion document in this issue of the Federal Register, we are withdrawing as unnecessary the proposed rule.

  9. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  10. Benchmarking Analysis between CONTEMPT and COPATTA Containment Codes

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Kwi Hyun; Song, Wan Jung [ENERGEO Inc. Sungnam, (Korea, Republic of); Song, Dong Soo; Byun, Choong Sup [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2006-07-01

    The containment design requirement is that releases of radioactive material following an accident not result in doses in excess of the values specified in 10 CFR 100. The containment must withstand the pressure and temperature of the DBA (Design Basis Accident), including margin, without exceeding the design leakage rate. COPATTA, Bechtel's vendor code, is used for containment pressure and temperature prediction in the power uprating project for the Kori 3,4 and Yonggwang 1,2 nuclear power plants (NPPs). However, CONTEMPT-LT/028 is used for calculating the containment pressures and temperatures in the equipment qualification project for the same NPPs. During benchmarking of the two codes, model differences were identified. This paper shows the performance evaluation results arising from the main model differences.

  11. Benchmarking Analysis between CONTEMPT and COPATTA Containment Codes

    International Nuclear Information System (INIS)

    Seo, Kwi Hyun; Song, Wan Jung; Song, Dong Soo; Byun, Choong Sup

    2006-01-01

    The containment design requirement is that releases of radioactive material following an accident not result in doses in excess of the values specified in 10 CFR 100. The containment must withstand the pressure and temperature of the DBA (Design Basis Accident), including margin, without exceeding the design leakage rate. COPATTA, Bechtel's vendor code, is used for containment pressure and temperature prediction in the power uprating project for the Kori 3,4 and Yonggwang 1,2 nuclear power plants (NPPs). However, CONTEMPT-LT/028 is used for calculating the containment pressures and temperatures in the equipment qualification project for the same NPPs. During benchmarking of the two codes, model differences were identified. This paper shows the performance evaluation results arising from the main model differences.

  12. 38 CFR 1.9 - Description, use, and display of VA seal and flag.

    Science.gov (United States)

    2010-07-01

    ... stars represent the five branches of military service. The crossed flags represent our nation's history... employees. (D) Official VA signs. (E) Official publications or graphics issued by and attributed to VA, or...) Souvenir or novelty items. (iii) Toys or commercial gifts or premiums. (iv) Letterhead design, except on...

  13. Benchmark for Evaluating Moving Object Indexes

    DEFF Research Database (Denmark)

    Chen, Su; Jensen, Christian Søndergaard; Lin, Dan

    2008-01-01

    that targets techniques for the indexing of the current and near-future positions of moving objects. This benchmark enables the comparison of existing and future indexing techniques. It covers important aspects of such indexes that have not previously been covered by any benchmark. Notable aspects covered......Progress in science and engineering relies on the ability to measure, reliably and in detail, pertinent properties of artifacts under design. Progress in the area of database-index design thus relies on empirical studies based on prototype implementations of indexes. This paper proposes a benchmark...... include update efficiency, query efficiency, concurrency control, and storage requirements. Next, the paper applies the benchmark to half a dozen notable moving-object indexes, thus demonstrating the viability of the benchmark and offering new insight into the performance properties of the indexes....

  14. Benchmarking infrastructure for mutation text mining.

    Science.gov (United States)

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
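
    The performance metrics such an infrastructure computes ultimately reduce to set comparisons between gold-standard and system-predicted annotations. A minimal sketch of that logic in plain Python (rather than the SPARQL queries the infrastructure uses over RDF, and with hypothetical mutation identifiers):

```python
def prf(gold, predicted):
    """Precision, recall, and F1 for extracted mutation mentions,
    computed by set comparison against a gold standard."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)  # true positives: mentions found in both sets
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical mutation mentions extracted from a document:
p, r, f1 = prf(gold={"E6V", "G12D", "R273H"}, predicted={"E6V", "G12D", "V600E"})
```

    Expressing the same counts as SPARQL aggregate queries over RDF annotations, as the infrastructure does, lets the metrics be recomputed declaratively whenever the corpus or a system's output changes, without writing evaluation code per system.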

  15. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600

  16. Benchmarking: A Process for Improvement.

    Science.gov (United States)

    Peischl, Thomas M.

    One problem with the outcome-based measures used in higher education is that they measure quantity but not quality. Benchmarking, or the use of some external standard of quality to measure tasks, processes, and outputs, is partially solving that difficulty. Benchmarking allows for the establishment of a systematic process to indicate if outputs…

  17. Hospital benchmarking: are U.S. eye hospitals ready?

    Science.gov (United States)

    de Korne, Dirk F; van Wijngaarden, Jeroen D H; Sol, Kees J C A; Betz, Robert; Thomas, Richard C; Schein, Oliver D; Klazinga, Niek S

    2012-01-01

    Benchmarking is increasingly considered a useful management instrument to improve quality in health care, but little is known about its applicability in hospital settings. The aims of this study were to assess the applicability of a benchmarking project in U.S. eye hospitals and compare the results with an international initiative. We evaluated multiple cases by applying an evaluation frame abstracted from the literature to five U.S. eye hospitals that used a set of 10 indicators for efficiency benchmarking. Qualitative analysis entailed 46 semistructured face-to-face interviews with stakeholders, document analyses, and questionnaires. The case studies only partially met the conditions of the evaluation frame. Although learning and quality improvement were stated as overall purposes, the benchmarking initiative was at first focused on efficiency only. No ophthalmic outcomes were included, and clinicians were skeptical about their reporting relevance and disclosure. However, in contrast with earlier findings in international eye hospitals, all U.S. hospitals worked with internal indicators that were integrated in their performance management systems and supported benchmarking. Benchmarking can support performance management in individual hospitals. Having a certain number of comparable institutes provide similar services in a noncompetitive milieu seems to lay fertile ground for benchmarking. International benchmarking is useful only when these conditions are not met nationally. Although the literature focuses on static conditions for effective benchmarking, our case studies show that it is a highly iterative and learning process. The journey of benchmarking seems to be more important than the destination. Improving patient value (health outcomes per unit of cost) requires, however, an integrative perspective where clinicians and administrators closely cooperate on both quality and efficiency issues. If these worlds do not share such a relationship, the added

  18. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  19. WWER-1000 Burnup Credit Benchmark (CB5)

    International Nuclear Information System (INIS)

    Manolova, M.A.

    2002-01-01

    This paper gives the specification of the first phase (depletion calculations) of the WWER-1000 Burnup Credit Benchmark. The second phase, criticality calculations for the WWER-1000 fuel pin cell, will be specified after evaluation of the results obtained in the first phase. The proposed benchmark is a continuation of the WWER benchmark activities in this field (Author)

  20. The role of benchmarking for yardstick competition

    International Nuclear Information System (INIS)

    Burns, Phil; Jenkins, Cloda; Riechmann, Christoph

    2005-01-01

    With the increasing interest in yardstick regulation, there is a need to understand the most appropriate method for realigning tariffs at the outset. Benchmarking is the tool used for such realignment and is therefore a necessary first step in the implementation of yardstick competition. A number of concerns have been raised about the application of benchmarking, making some practitioners reluctant to move towards yardstick-based regimes. We assess five of the key concerns often discussed and find that, in general, these are not as great as perceived. The assessment is based on economic principles and on experience with applying benchmarking to regulated sectors, e.g. the electricity and water industries in the UK, the Netherlands, Austria and Germany in recent years. The aim is to demonstrate that clarity on the role of benchmarking reduces the concern about its application in different regulatory regimes. We find that benchmarking can be used in regulatory settlements, although the range of appropriate benchmarking approaches will be small for any individual regulatory question. Benchmarking is feasible because total cost measures and environmental factors are better defined in practice than is commonly appreciated, and collusion is unlikely to occur in environments with more than 2 or 3 firms (where shareholders have a role in monitoring and rewarding performance). Furthermore, any concern about companies under-recovering costs is a matter to be determined through the regulatory settlement and does not affect the case for using benchmarking as part of that settlement. (author)

  1. Topiramate Protects Pericytes from Glucotoxicity: Role for Mitochondrial CA VA in Cerebromicrovascular Disease in Diabetes.

    Science.gov (United States)

    Patrick, Ping; Price, Tulin O; Diogo, Ana L; Sheibani, Nader; Banks, William A; Shah, Gul N

    Hyperglycemia in diabetes mellitus causes oxidative stress and pericyte depletion from the microvasculature of the brain, leading to disruption of the blood-brain barrier (BBB). The compromised BBB exposes the brain to circulating substances, resulting in neurotoxicity and neuronal cell death. The decline in pericyte numbers in diabetic mouse brain and pericyte apoptosis in high-glucose cultures are caused by excess superoxide produced during enhanced respiration (mitochondrial oxidative metabolism of glucose). Superoxide is the precursor to all reactive oxygen species (ROS), which, in turn, cause oxidative stress. The rate of respiration, and thus ROS production, is regulated by the mitochondrial carbonic anhydrases (mCA) VA and VB, the two isoforms expressed in the mitochondria. Inhibition of both mCAs decreases oxidative stress and restores pericyte numbers in the diabetic brain, and reduces high glucose-induced respiration, ROS, oxidative stress, and apoptosis in cultured brain pericytes. However, the individual roles of the two isoforms have not been established. To investigate the contribution of mCA VA to ROS production and apoptosis, a brain pericyte cell line overexpressing mCA VA was engineered. These cells were exposed to high glucose and analyzed for changes in ROS and apoptosis. Overexpression of mCA VA significantly increased pericyte ROS and apoptosis. Inhibition of mCA VA with topiramate prevented both the glucose-induced increase in ROS and pericyte death. These results demonstrate, for the first time, that mCA VA regulates the rate of pericyte respiration. These findings identify mCA VA as a novel and specific therapeutic target for protecting the cerebromicrovascular bed in diabetes.

  2. 77 FR 21158 - VA Directive 0005 on Scientific Integrity: Availability for Review and Comment

    Science.gov (United States)

    2012-04-09

    ... Draft VA Directive 0005 on Scientific Integrity: [square] Fosters a culture of transparency, integrity, and ethical behavior in the development and application of scientific and technological findings in VA... information from inappropriate political or commercial influence; [square] Ensures that selection and...

  3. SP2Bench: A SPARQL Performance Benchmark

    Science.gov (United States)

    Schmidt, Michael; Hornung, Thomas; Meier, Michael; Pinkel, Christoph; Lausen, Georg

    A meaningful analysis and comparison of both existing storage schemes for RDF data and evaluation approaches for SPARQL queries necessitates a comprehensive and universal benchmark platform. We present SP2Bench, a publicly available, language-specific performance benchmark for the SPARQL query language. SP2Bench is settled in the DBLP scenario and comprises a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror vital key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. In this chapter, we discuss requirements and desiderata for SPARQL benchmarks and present the SP2Bench framework, including its data generator, benchmark queries and performance metrics.

  4. 48 CFR 853.236-70 - VA Form 10-6298, Architect-Engineer Fee Proposal.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false VA Form 10-6298, Architect-Engineer Fee Proposal. 853.236-70 Section 853.236-70 Federal Acquisition Regulations System DEPARTMENT OF...-Engineer Fee Proposal. VA Form 10-6298, Architect-Engineer Fee Proposal, shall be used as prescribed in 836...

  5. RB reactor benchmark cores

    International Nuclear Information System (INIS)

    Pesic, M.

    1998-01-01

    A selected set of the RB reactor benchmark cores is presented in this paper. The first results of validation of the well-known Monte Carlo MCNP™ code and the accompanying neutron cross-section libraries are given. They confirm the idea behind the proposal of the new U-D2O criticality benchmark system and support the intention to include this system in the next edition of the recent OECD/NEA project, the International Handbook of Evaluated Criticality Safety Experiments, in the near future. (author)

  6. Benchmarking specialty hospitals, a scoping review on theory and practice.

    Science.gov (United States)

    Wind, A; van Harten, W H

    2017-04-04

    Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category, or dealt with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into: pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was merely described or whether quality improvement as a consequence of the benchmark was reported. Most of the studies that described a benchmark model used benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking for improving quality in specialty hospitals, robust and structured designs are needed, including a follow-up to check whether the benchmark study has led to improvements.

  7. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  8. Development of a California commercial building benchmarking database

    International Nuclear Information System (INIS)

    Kinney, Satkartar; Piette, Mary Ann

    2002-01-01

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.

  9. A Critique of the Current Legal Regime and a Proposal for New Legal Regulation of the Raising and Removal of Wrecks and Sunken Objects

    Directory of Open Access Journals (Sweden)

    Vesna Skorupan Wolff

    2012-12-01

    Full Text Available The main aim of this paper is to propose and present solutions constituting a new legal regime for the raising and removal of wrecks and sunken objects. The authors first trace the genesis of the legal sources in history and in earlier domestic legislation, and examine all relevant questions and all provisions of the current Maritime Code (Pomorski zakonik, hereinafter PZ) on this subject. The meaning and use of individual terms, the systematization of the statutory material within the structure of the code, and the content and scope of individual provisions are analysed. The authors examine the degree to which the current PZ is harmonized with contemporary international regulation in this field. Important problems are pointed out that may arise from deficiencies in the provisions of the current PZ and from the absence of systematic regulation of all relevant questions that may arise in practice. The study also includes a comparative analysis of these institutes in other national legal systems. The proposed statutory solutions are characterized by completeness and a systematic treatment of all relevant questions. Regulating these institutes by special provisions will provide a higher degree of legal certainty, as well as a higher degree of safety of navigation in the Adriatic and of protection of the environment, its natural resources and other related interests.

  10. How benchmarking can improve patient nutrition.

    Science.gov (United States)

    Ellis, Jane

    Benchmarking is a tool that originated in business to enable organisations to compare their services with industry-wide best practice. Early last year the Department of Health published The Essence of Care, a benchmarking toolkit adapted for use in health care. It focuses on eight elements of care that are crucial to patients' experiences. Nurses and other health care professionals at a London NHS trust have begun a trust-wide benchmarking project. The aim is to improve patients' experiences of health care by sharing and comparing information, and by identifying examples of good practice and areas for improvement. The project began with two of the eight elements of The Essence of Care, with the intention of covering the rest later. This article describes the benchmarking process for nutrition and some of the consequent improvements in care.

  11. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associate XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  12. Forecasting VaR and ES of stock index portfolio: A Vine copula method

    Science.gov (United States)

    Zhang, Bangzheng; Wei, Yu; Yu, Jiang; Lai, Xiaodong; Peng, Zhenfeng

    2014-12-01

    Risk measurement has both theoretical and practical significance in risk management. Using daily samples of 10 international stock indices, this paper first models the dependence structures among the different stock markets with C-Vine, D-Vine and R-Vine copula models. Second, the Value-at-Risk (VaR) and Expected Shortfall (ES) of the international stock market portfolio are forecast using a Monte Carlo method based on the dependence estimated with the different Vine copulas. Finally, the accuracy of the VaR and ES measurements obtained from the different statistical models is evaluated by UC, IND, CC and posterior analysis. The empirical results show that the VaR forecasts at the quantile levels of 0.9, 0.95, 0.975 and 0.99 with all three kinds of Vine copula models are sufficiently accurate. Several traditional methods, such as historical simulation, mean-variance and DCC-GARCH models, fail to pass the CC backtest. The Vine copula methods can accurately forecast the ES of the portfolio on the basis of the VaR measurement, and the D-Vine copula model is superior to the other Vine copulas.
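    The Monte Carlo VaR/ES pipeline described above can be sketched with a simplified, copula-free stand-in: plain Gaussian return draws replace the Vine-copula simulation (fitting a Vine copula requires a dedicated library), and a Kupiec-style unconditional-coverage (UC) likelihood-ratio statistic illustrates the backtesting step. All parameter values here are illustrative assumptions, not figures from the paper.

```python
import math
import random
import statistics

def var_es(returns, alpha):
    """Empirical Value-at-Risk and Expected Shortfall (losses as positive numbers)."""
    losses = sorted(-r for r in returns)           # convert returns to losses
    idx = int(math.ceil(alpha * len(losses))) - 1  # index of the alpha-quantile loss
    var = losses[idx]
    es = statistics.fmean(losses[idx:])            # mean loss at or beyond VaR
    return var, es

def kupiec_lr(violations, n, alpha):
    """Kupiec unconditional-coverage likelihood-ratio statistic for VaR backtesting."""
    p = 1.0 - alpha                                # expected violation rate
    x = violations
    if x == 0:
        return -2.0 * n * math.log(1.0 - p)
    phat = x / n                                   # observed violation rate
    ll_null = (n - x) * math.log(1.0 - p) + x * math.log(p)
    ll_alt = (n - x) * math.log(1.0 - phat) + x * math.log(phat)
    return -2.0 * (ll_null - ll_alt)

# Stand-in for copula-driven portfolio return simulation (illustrative parameters)
random.seed(42)
sims = [random.gauss(0.0005, 0.01) for _ in range(100_000)]
var95, es95 = var_es(sims, 0.95)
print(f"95% VaR = {var95:.4f}, 95% ES = {es95:.4f}")   # ES is never below VaR
```

    The LR statistic is compared against a chi-squared critical value with one degree of freedom; when the observed violation rate equals the expected rate, the statistic is zero.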

  13. IAEA sodium void reactivity benchmark calculations

    International Nuclear Information System (INIS)

    Hill, R.N.; Finck, P.J.

    1992-01-01

    In this paper, the IAEA 1992 ''Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core'' problem is evaluated. The proposed design is a large axially heterogeneous oxide-fueled fast reactor, as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculation methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented; and in Section 5, the influence of steel and interstitial sodium heterogeneity effects is estimated.

  14. Benchmark Imagery FY11 Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pope, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-06-14

    This report details the work performed in FY11 under project LL11-GS-PD06, “Benchmark Imagery for Assessing Geospatial Semantic Extraction Algorithms.” The original LCP for the Benchmark Imagery project called for creating a set of benchmark imagery for verifying and validating algorithms that extract semantic content from imagery. More specifically, the first year was slated to deliver real imagery that had been annotated, the second year to deliver real imagery that had composited features, and the final year was to deliver synthetic imagery modeled after the real imagery.

  15. Review for session K - benchmarks

    International Nuclear Information System (INIS)

    McCracken, A.K.

    1980-01-01

    Eight of the papers to be considered in Session K are directly concerned, at least in part, with the Pool Critical Assembly (P.C.A.) benchmark at Oak Ridge. The remaining seven papers in this session, the subject of this review, are concerned with a variety of topics related to the general theme of Benchmarks and will be considered individually

  16. Benchmarks solutions of the coupled systems NJOY/AMPX-II/HAMMER-TECHNION and the library JENDL-3

    International Nuclear Information System (INIS)

    Santos, A. dos

    1991-01-01

    Benchmark calculations were performed with the newly released Japanese nuclear data library JENDL-3. The calculations used the methodology developed at IPEN/CNEN-SP, based on a coupled NJOY/AMPX-II/HAMMER-TECHNION system. The analyses show that the long-standing problem of the overprediction of epithermal U-238 neutron capture still remains. Beyond that, there is an indication that the fission cross section of U-238 might be underestimated. (author)

  17. 38 CFR 3.2130 - Will VA accept a signature by mark or thumbprint?

    Science.gov (United States)

    2010-07-01

    ... signature by mark or thumbprint? 3.2130 Section 3.2130 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF... of This Title General § 3.2130 Will VA accept a signature by mark or thumbprint? VA will accept signatures by mark or thumbprint if: (a) They are witnessed by two people who sign their names and give their...

  18. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Full Text Available Tourism development has an irreplaceable role in the regional policy of almost all countries, owing to its undeniable benefits for the local population in the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and consequently find themselves in a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and of the final strategies is a key factor of competitiveness. Even though the tourism sector is not a typical field for benchmarking methods, such approaches can be successfully applied. The paper focuses on a key phase of the benchmarking process: the search for suitable referencing partners. Partners are selected to meet general requirements that ensure the quality of strategies; from these, specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies from regions in the Czech Republic, Slovakia and Great Britain, thereby validating the selected criteria in an international environment. In this way it becomes possible to identify the strengths and weaknesses of the selected strategies and, at the same time, to discover suitable benchmarking partners.

  19. Statistical benchmarking in utility regulation: Role, standards and methods

    International Nuclear Information System (INIS)

    Newton Lowry, Mark; Getachew, Lullit

    2009-01-01

    Statistical benchmarking is being used with increasing frequency around the world in utility rate regulation. We discuss how and where benchmarking is in use for this purpose and the pros and cons of regulatory benchmarking. We then discuss alternative performance standards and benchmarking methods in regulatory applications. We use these to propose guidelines for the appropriate use of benchmarking in the rate setting process. The standards, which we term the competitive market and frontier paradigms, have a bearing on method selection. These along with regulatory experience suggest that benchmarking can either be used for prudence review in regulation or to establish rates or rate setting mechanisms directly

  20. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.

  1. 78 FR 62441 - VA Dental Insurance Program-Federalism

    Science.gov (United States)

    2013-10-22

    ...--Federalism AGENCY: Department of Veterans Affairs. ACTION: Direct final rule. SUMMARY: The Department of... that they are submitted in response to ``RIN 2900-AO85-VA Dental Insurance Program-- Federalism... add preemption language in accordance with the discussion above. Executive Order 13132, Federalism...

  2. 40 CFR 141.172 - Disinfection profiling and benchmarking.

    Science.gov (United States)

    2010-07-01

    ... benchmarking. 141.172 Section 141.172 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Disinfection-Systems Serving 10,000 or More People § 141.172 Disinfection profiling and benchmarking. (a... sanitary surveys conducted by the State. (c) Disinfection benchmarking. (1) Any system required to develop...

  3. Raising Quality and Achievement. A College Guide to Benchmarking.

    Science.gov (United States)

    Owen, Jane

    This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…

  4. Prismatic Core Coupled Transient Benchmark

    International Nuclear Information System (INIS)

    Ortensi, J.; Pope, M.A.; Strydom, G.; Sen, R.S.; DeHart, M.D.; Gougar, H.D.; Ellis, C.; Baxter, A.; Seker, V.; Downar, T.J.; Vierow, K.; Ivanov, K.

    2011-01-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  5. A Stochastic Dominance Approach to the Basel III Dilemma: Expected Shortfall or VaR?

    NARCIS (Netherlands)

    C-L. Chang (Chia-Lin); J.A. Jiménez-Martín (Juan-Ángel); E. Maasoumi (Esfandiar); M.J. McAleer (Michael); T. Pérez-Amaral (Teodosio)

    2015-01-01

    markdownabstract__Abstract__ The Basel Committee on Banking Supervision (BCBS) (2013) recently proposed shifting the quantitative risk metrics system from Value-at-Risk (VaR) to Expected Shortfall (ES). The BCBS (2013) noted that “a number of weaknesses have been identified with using VaR for

  6. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm-efficiency concept: firm efficiency means revealed performance (how well the firm performs in its actual market environment) given the basic characteristics of firms and their markets that are expected to drive their profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality, or work organization; other factors can be a cause even if they are not directly observed by the researcher. The critical need for managers to continuously improve their firm's efficiency and effectiveness, and their need to know the success factors and competitiveness determinants, determine which performance measures are most critical to the firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent for operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures. It uses econometric models to describe performance, and then proposes a method to forecast and benchmark it.
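    As a toy illustration of benchmarking firm-level performance against peers with a financial ratio, the sketch below ranks one firm's return on assets (ROA) within a peer group. The figures are invented for illustration; the paper's econometric forecasting models are not reproduced here.

```python
from bisect import bisect_left

def percentile_rank(value, peer_values):
    """Fraction of peer observations strictly below the focal firm's ratio."""
    peers = sorted(peer_values)
    return bisect_left(peers, value) / len(peers)

# Hypothetical ROA figures for a peer group of eight firms
roa_peers = [0.012, 0.021, 0.029, 0.034, 0.038, 0.047, 0.055, 0.060]
firm_roa = 0.047
print(f"ROA percentile rank: {percentile_rank(firm_roa, roa_peers):.3f}")
```

    The same one-line ranking can be applied to any comparable ratio (margin, asset turnover, and so on), which is the simplest form the abstract's "benchmarking with financial ratios" can take before adding econometric structure.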

  7. Benchmarking of refinery emissions performance : Executive summary

    International Nuclear Information System (INIS)

    2003-07-01

    This study was undertaken to collect emissions performance data for Canadian and comparable American refineries. The objective was to examine parameters that affect refinery air emissions performance and develop methods or correlations to normalize emissions performance. Another objective was to correlate and compare the performance of Canadian refineries to comparable American refineries. For the purpose of this study, benchmarking involved the determination of levels of emission performance that are being achieved for generic groups of facilities. A total of 20 facilities were included in the benchmarking analysis, and 74 American refinery emission correlations were developed. The recommended benchmarks, and the application of those correlations for comparison between Canadian and American refinery performance, were discussed. The benchmarks were: sulfur oxides, nitrogen oxides, carbon monoxide, particulate, volatile organic compounds, ammonia and benzene. For each refinery in Canada, benchmark emissions were developed. Several factors can explain differences in Canadian and American refinery emission performance. 4 tabs., 7 figs

  8. VaR: Exchange Rate Risk and Jump Risk

    Directory of Open Access Journals (Sweden)

    Fen-Ying Chen

    2010-01-01

    Full Text Available Incorporating Poisson jumps and exchange rate risk, this paper provides an analytical VaR for managing the market risk of international portfolios over the subprime mortgage crisis. The model has several properties. First, unlike past studies of portfolios valued in a single currency, this model considers portfolios with both jumps and exchange rate risk, which is vital for investors in highly integrated global financial markets. Second, the analytical VaR solution is generally more accurate than historical simulation in terms of backtesting and Christoffersen's (1998) independence test, for both small and large portfolios. In other words, the proposed model is reliable not only for a portfolio of specific stocks but also for a large portfolio. Third, the model can be regarded as an extension of Kupiec (1999) and Chen and Liao (2009).
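    A minimal Monte Carlo stand-in for the kind of model described above: a Merton-style jump-diffusion log-return plus an independent exchange-rate shock, with the 99% VaR read off the empirical loss distribution. The paper's contribution is an analytical (closed-form) VaR; this sketch, with purely illustrative parameters, only shows the ingredients (Poisson jump arrivals, FX risk, loss quantile).

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's inversion method; adequate for small jump intensities."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def daily_return_with_jumps(rng, mu=0.05, sigma=0.20, lam=25.0,
                            jmu=-0.01, jsig=0.02, fx_sig=0.006):
    """One simulated daily log-return of a foreign asset in domestic currency."""
    dt = 1.0 / 252.0
    diffusion = (mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
    n_jumps = sample_poisson(lam * dt, rng)                 # jump count this day
    jumps = sum(rng.gauss(jmu, jsig) for _ in range(n_jumps))
    fx = fx_sig * rng.gauss(0, 1)   # exchange-rate log-return, assumed independent
    return diffusion + jumps + fx

rng = random.Random(7)
rets = [daily_return_with_jumps(rng) for _ in range(50_000)]
losses = sorted(-r for r in rets)
var99 = losses[int(math.ceil(0.99 * len(losses))) - 1]      # empirical 99% VaR
print(f"1-day 99% VaR: {var99:.4%}")
```

    Replacing this simulation with a closed-form quantile of the jump-diffusion-plus-FX distribution is what makes the paper's VaR "analytical".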

  9. SlaVaComp Fonts Converter

    Directory of Open Access Journals (Sweden)

    Simon Skilevic

    2013-12-01

    Full Text Available This paper presents a font converter that was developed as part of the Freiburg project on historical corpus linguistics. The tool, named SlaVaComp-Konvertierer, converts Church Slavonic texts digitized with non-Unicode fonts into the Unicode format without any loss of information contained in the original file and without damage to the original formatting. It is suitable for the conversion of all idiosyncratic fonts, not only Church Slavonic ones, and can therefore be used not only in Palaeoslavistics but in all historical and philological studies.

  10. How to Advance TPC Benchmarks with Dependability Aspects

    Science.gov (United States)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact on both the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require that the performance of these mechanisms be measured. While TPC-E measures the recovery time from some system failures, TPC-H and TPC-C require only the functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  11. Analysis of Benchmark 2 results

    International Nuclear Information System (INIS)

    Bacha, F.; Lefievre, B.; Maillard, J.; Silva, J.

    1994-01-01

    The code GEANT315 has been compared to different codes in two benchmarks. We analyze its performances through our results, especially in the thick target case. In spite of gaps in nucleus-nucleus interaction theories at intermediate energies, benchmarks allow possible improvements of physical models used in our codes. Thereafter, a scheme of radioactive waste burning system is studied. (authors). 4 refs., 7 figs., 1 tab

  12. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM, since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area…

  13. HPCG Benchmark Technical Specification

    Energy Technology Data Exchange (ETDEWEB)

    Heroux, Michael Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States)

    2013-10-01

    The High Performance Conjugate Gradient (HPCG) benchmark [cite SNL, UTK reports] is a tool for ranking computer systems based on a simple additive Schwarz, symmetric Gauss-Seidel preconditioned conjugate gradient solver. HPCG is similar to the High Performance Linpack (HPL), or Top 500, benchmark [1] in its purpose, but HPCG is intended to better represent how today’s applications perform. In this paper we describe the technical details of HPCG: how it is designed and implemented, what code transformations are permitted and how to interpret and report results.
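
    The HPCG reference code is written in C++ with MPI and OpenMP; the kernel it ranks, a conjugate gradient iteration preconditioned with one symmetric Gauss-Seidel sweep, can be sketched with dense NumPy linear algebra. The function names and the dense formulation below are illustrative assumptions, not HPCG code:

```python
import numpy as np

def sgs_precondition(A, r):
    """One symmetric Gauss-Seidel sweep: solve (D + L) D^-1 (D + U) z = r,
    where A = L + D + U (strictly lower / diagonal / strictly upper)."""
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    y = np.linalg.solve(D + L, r)         # forward sweep
    return np.linalg.solve(D + U, D @ y)  # backward sweep

def pcg(A, b, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient for a symmetric positive definite A."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    z = sgs_precondition(A, r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = sgs_precondition(A, r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

    HPCG itself applies these same ingredients to a large sparse problem distributed across processes, which is what makes it memory-bandwidth bound and more representative of applications than the dense, compute-bound HPL.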

  14. [Do you mean benchmarking?].

    Science.gov (United States)

    Bonnet, F; Solignac, S; Marty, J

    2008-03-01

    The purpose of benchmarking is to establish improvement processes by comparing activities to quality standards. The proposed methodology is illustrated by benchmarking business cases performed inside medical facilities on items such as nosocomial infections or the organization of surgical facilities. Moreover, the authors have built a specific graphic tool, enhanced with balanced-scorecard figures and mappings, so that the comparison between different anesthesia-intensive care services that are willing to start an improvement program is easy and relevant. This ready-made application is all the more accurate when detailed tariffs of activities are implemented.

  15. Benchmarking of fast-running software tools used to model releases during nuclear accidents

    Energy Technology Data Exchange (ETDEWEB)

    Devitt, P.; Viktorov, A., E-mail: Peter.Devitt@cnsc-ccsn.gc.ca, E-mail: Alex.Viktorov@cnsc-ccsn.gc.ca [Canadian Nuclear Safety Commission, Ottawa, ON (Canada)

    2015-07-01

    The Fukushima accident highlighted the importance of effective nuclear accident response. However, the accident's complexity greatly impacted the ability to provide timely and accurate information to national and international stakeholders. Safety recommendations provided by different national and international organizations varied notably. Such differences can partially be attributed to the different methods used in the initial assessment of accident progression and of the amount of radioactivity released. Therefore, a comparison of methodologies was undertaken by the NEA/CSNI, and its highlights are presented here. For this project, the prediction tools used by various emergency response organizations for estimating source terms and public doses were examined. The organizations that have the capability to use such tools responded to a questionnaire describing each code's capabilities and main algorithms. The project's participants then analyzed five accident scenarios to predict the source term, the dispersion of releases, and public doses. (author)

  16. Benchmarking in digital circuit design automation

    NARCIS (Netherlands)

    Jozwiak, L.; Gawlowski, D.M.; Slusarczyk, A.S.

    2008-01-01

    This paper focuses on benchmarking, which is the main experimental approach to the design method and EDA-tool analysis, characterization and evaluation. We discuss the importance and difficulties of benchmarking, as well as the recent research effort related to it. To resolve several serious…

  17. Benchmarking, Total Quality Management, and Libraries.

    Science.gov (United States)

    Shaughnessy, Thomas W.

    1993-01-01

    Discussion of the use of Total Quality Management (TQM) in higher education and academic libraries focuses on the identification, collection, and use of reliable data. Methods for measuring quality, including benchmarking, are described; performance measures are considered; and benchmarking techniques are examined. (11 references) (MES)

  18. Toxicological benchmarks for wildlife: 1994 Revision

    International Nuclear Information System (INIS)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II.

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal contact is not considered in this report.
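
    The first-tier screening step described above, comparing a measured media concentration against its toxicological benchmark, is essentially a hazard-quotient test. A minimal sketch (the function name and example values are hypothetical, not taken from the report):

```python
def screen_contaminants(measured, benchmarks):
    """Tier-1 screening: flag chemicals whose measured concentration meets or
    exceeds the toxicological benchmark (hazard quotient HQ = conc / benchmark
    >= 1), i.e., those that cannot be presumed nonhazardous."""
    flagged = {}
    for chemical, conc in measured.items():
        bench = benchmarks.get(chemical)
        if bench is not None and bench > 0:
            hq = conc / bench
            if hq >= 1.0:
                flagged[chemical] = round(hq, 2)
    return flagged

# Hypothetical concentrations and benchmarks, in consistent units (e.g. mg/kg):
measured = {"mercury": 0.5, "zinc": 10.0, "lead": 0.01}
benchmarks = {"mercury": 0.1, "zinc": 50.0, "lead": 0.05}
print(screen_contaminants(measured, benchmarks))  # only mercury exceeds: HQ = 5.0
```

    Chemicals that fail this screen are not thereby shown to be hazardous; they simply proceed to the second-tier baseline risk assessment the report describes.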

  19. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal contact is not considered in this report.

  20. Validation of KENO V.a: Comparison with critical experiments

    International Nuclear Information System (INIS)

    Jordan, W.C.; Landers, N.F.; Petrie, L.M.

    1986-12-01

    Section 1 of this report documents the validation of KENO V.a against 258 critical experiments. The experiments considered were primarily high- or low-enriched uranium systems. The results indicate that the KENO V.a Monte Carlo Criticality Program accurately calculates a broad range of critical experiments. A substantial number of the calculations showed a positive or negative bias in excess of 1.5% in k-effective (k_eff). Classes of criticals which show a bias include 3% enriched green blocks, highly enriched uranyl fluoride slab arrays, and highly enriched uranyl nitrate arrays. If these biases are properly taken into account, the KENO V.a code can be used with confidence for the design and criticality safety analysis of uranium-containing systems. Section 2 of this report documents the results of an investigation into the cause of the bias observed in Sect. 1. The results of this study indicate that the bias seen in Sect. 1 is caused by code bias, cross-section bias, reporting bias, and modeling bias. There is evidence that many of the experiments used in this validation and in previous validations are not adequately documented. The uncertainty in the experimental parameters overshadows bias caused by the code and cross sections and prohibits code validation to better than about 1% in k_eff. 48 refs., 19 figs., 19 tabs

  1. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    Energy Technology Data Exchange (ETDEWEB)

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-09-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined and discussed. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR’06 are highlighted, and the future of the two projects is discussed.

  2. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    International Nuclear Information System (INIS)

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-01-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined and discussed. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR-06 are highlighted, and the future of the two projects is discussed

  3. 78 FR 63143 - VA Dental Insurance Program-Federalism

    Science.gov (United States)

    2013-10-23

    ...--Federalism AGENCY: Department of Veterans Affairs. ACTION: Proposed rule. SUMMARY: The Department of Veterans... that they are submitted in response to ``RIN 2900-AO86-VA Dental Insurance Program-- Federalism... Order 13132, Federalism Section 6(c) of Executive Order 13132 (entitled ``Federalism'') requires an...

  4. VA Library Service--Today's look at Tomorrow's Library.

    Science.gov (United States)

    Veterans Administration, Washington, DC.

    The Conference Proceedings are divided into three broad topics: systems planning, audiovisuals in biomedical communication, and automation and networking. Speakers from within the Veterans Administration (VA), from the National Medical Audiovisual Center, and the Lister Hill National Center for Biomedical Communications, National Library of…

  5. Maximum power per VA control of vector controlled interior ...

    Indian Academy of Sciences (India)

    Thakur Sumeet Singh

    2018-04-11

    Apr 11, 2018 ... Department of Electrical Engineering, Indian Institute of Technology Delhi, New ... The MPVA operation allows maximum utilization of the drive system. ... Permanent magnet motor; unity power factor; maximum VA utilization; ...

  6. TRACE/PARCS validation for BWR stability based on OECD/NEA Oskarshamn-2 benchmark

    International Nuclear Information System (INIS)

    Kozlowski, T.; Roshan, S.; Lefvert, T.; Downar, T.; Xu, Y.; Wysocki, A.; Ivanov, K.; Magedanz, J.; Hardgrove, M.; Netterbrant, C.; March-Leuba, J.; Hudson, N.; Sandervag, O.; Bergman, A.

    2011-01-01

    On February 25, 1999, the Oskarshamn-2 NPP experienced a stability event which culminated in diverging power oscillations with a decay ratio greater than 1.3. The event was successfully modeled by the TRACE/PARCS coupled code system, and the details of the modeling and solution are described in the paper. The obtained results show excellent agreement with the plant data, capturing the entire behavior of the transient, including the onset of instability, the growth of the oscillation (decay ratio) and the oscillation frequency. The event allows coupled-code validation for BWRs with a real, challenging stability event that tests the accuracy of neutron kinetics (NK), thermal-hydraulics (TH) and TH/NK coupling. The success of this work has demonstrated the ability of 3-D coupled code systems to capture the complex behavior of BWR stability events. The problem has been released as an international OECD/NEA benchmark, the first benchmark based on measured plant data for a stability event with a decay ratio greater than one. Interested participants are invited to contact the authors for more information. (author)
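
    The decay ratio quoted above is conventionally estimated as the ratio of successive peak amplitudes of the power oscillation; a value above one means each swing is larger than the last. A minimal sketch of that estimate (an illustrative convention, not the TRACE/PARCS methodology):

```python
import numpy as np

def decay_ratio(signal):
    """Estimate the decay ratio as the mean ratio of successive positive
    local-maximum amplitudes of a (mean-removed) oscillating signal.
    DR < 1: damped oscillation; DR > 1: diverging oscillation."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()  # remove the steady-state power level
    # Local maxima: samples strictly greater than both neighbours.
    idx = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    peaks = x[idx]
    peaks = peaks[peaks > 0]
    if len(peaks) < 2:
        return None  # not enough oscillation cycles to form a ratio
    return float(np.mean(peaks[1:] / peaks[:-1]))
```

    By this convention, a decay ratio greater than 1.3, as in the Oskarshamn-2 event, means the oscillation amplitude grew by more than 30% per cycle.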

  7. An analysis of the CSNI/GREST core concrete interaction chemical thermodynamic benchmark exercise using the MPEC2 computer code

    International Nuclear Information System (INIS)

    Muramatsu, Ken; Kondo, Yasuhiko; Uchida, Masaaki; Soda, Kunihisa

    1989-01-01

    Fission product (FP) release during a core concrete interaction (CCI) is an important factor in the uncertainty associated with source term estimation for an LWR severe accident. An analysis was made of the CCI Chemical Thermodynamic Benchmark Exercise organized by the OECD/NEA/CSNI Group of Experts on Source Terms (GREST) to investigate the uncertainty in thermodynamic modeling of CCI. The benchmark exercise was to calculate the equilibrium FP vapor pressure for a given system of temperature, pressure, and debris composition. The benchmark consisted of two parts, A and B. Part A was a simplified problem intended to test the numerical techniques. In Part B, the participants were requested to use their own best-estimate thermodynamic data base to examine the variability of the results due to differences in thermodynamic data bases. JAERI participated in this benchmark exercise with use of the MPEC2 code. The chemical thermodynamic data base needed for the analysis of Part B was taken from the VENESA code. This report describes the computer code used, inputs to the code, and results from the calculation by JAERI. The present calculation indicates that the FP vapor pressure depends strongly on temperature and oxygen potential in the core debris, and the pattern of dependency may be different for different FP elements. (author)

  8. Benchmarking of the saturated-zone module associated with three risk assessment models: RESRAD, MMSOILS, and MEPAS

    International Nuclear Information System (INIS)

    Whelan, Gene; Mcdonald, J P.; Gnanapragasam, Emmanuel K.; Laniak, Gerard F.; Lew, Christine S.; Mills, William B.; Yu, C

    1998-01-01

    A comprehensive benchmarking is being performed between three multimedia risk assessment models: RESRAD, MMSOILS, and MEPAS. Each multimedia model is composed of a suite of modules (e.g., groundwater, air, surface water, exposure, and risk/hazard), all of which can impact the estimation of human-health risk. As a component of the comprehensive benchmarking exercise, the saturated-zone modules of each model were applied to an environmental release scenario, where uranium-234 was released from the waste site to a saturated zone. Uranium-234 time-varying emission rates exiting from the source and concentrations at three downgradient locations (0 m, 150 m, and 1500 m) are compared for each multimedia model. Time-varying concentrations for uranium-234 decay products (i.e., thorium-230, radium-226, and lead-210) at the 1500-m location are also presented. Different results are reported for RESRAD, MMSOILS, and MEPAS, which are solely due to the assumptions and mathematical constructs inherently built into each model, thereby impacting the potential risks predicted by each model. Although many differences were identified between the models, the differences that impacted these benchmarking results the most are as follows: (1) RESRAD transports its contaminants by pure translation, while MMSOILS and MEPAS solve the one-dimensional advective, three-dimensional dispersive equation. (2) Due to the manner in which the retardation factor is defined, RESRAD contaminant velocities will always be faster than MMSOILS or MEPAS. (3) RESRAD uses a dilution factor to account for a withdrawal well; MMSOILS and MEPAS were designed to calculate in-situ concentrations at a receptor location. (4) RESRAD allows for decay products to travel at different velocities, while MEPAS assumes the decay products travel at the same speed as their parents. MMSOILS does not account for decay products and assumes degradation/decay only in the aqueous phase
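
    One driver of the velocity differences noted in point (2) is the retardation factor. Under the standard linear-sorption definition, R = 1 + (rho_b / theta) * Kd, and the contaminant moves at the pore-water velocity divided by R. The sketch below uses that textbook definition, which is not necessarily how each of the three models implements it:

```python
def retarded_velocity(v_water, bulk_density, porosity, kd):
    """Contaminant transport velocity under linear equilibrium sorption.

    v_water      : pore-water (seepage) velocity, e.g. m/yr
    bulk_density : dry bulk density of the aquifer solids, kg/L
    porosity     : effective porosity (dimensionless)
    kd           : distribution coefficient, L/kg
    """
    R = 1.0 + (bulk_density / porosity) * kd
    return v_water / R

# Example: Kd = 0.5 L/kg, rho_b = 1.6 kg/L, theta = 0.4 gives R = 3, so a
# 100 m/yr pore-water velocity carries the contaminant at about 33.3 m/yr.
print(retarded_velocity(100.0, 1.6, 0.4, 0.5))
```

    Any model choice that lowers the effective R, such as a different averaging of sorption parameters, yields faster contaminant velocities, which is consistent with the RESRAD behavior described above.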

  9. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This dataset compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings.

  10. Analysis of a molten salt reactor benchmark

    International Nuclear Information System (INIS)

    Ghosh, Biplab; Bajpai, Anil; Degweker, S.B.

    2013-01-01

    This paper discusses results of our studies of an IAEA molten salt reactor (MSR) benchmark. The benchmark, proposed by Japan, involves burnup calculations of a single lattice cell of a MSR for burning plutonium and other minor actinides. We have analyzed this cell with in-house developed burnup codes BURNTRAN and McBURN. This paper also presents a comparison of the results of our codes and those obtained by the proposers of the benchmark. (author)

  11. Reactor fuel depletion benchmark of TINDER

    International Nuclear Information System (INIS)

    Martin, W.J.; Oliveira, C.R.E. de; Hecht, A.A.

    2014-01-01

    Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO2 PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work.

  12. 77 FR 60746 - Proposed Information Collection (VA/DOD Joint Disability Evaluation Board Claim) Activity...

    Science.gov (United States)

    2012-10-04

    ... burden of the collection of information on respondents, including through the use of automated collection techniques or the use of other forms of information technology. Title: VA/DOD Joint Disability Evaluation... War on Terror Heroes, VA and the Department of Defense (DOD) have agreed to develop a joint process in...

  13. The impact of the Department of Veterans Affairs Health Care Personnel Enhancement Act of 2004 on VA physicians' salaries and retention.

    Science.gov (United States)

    Weeks, William B; Wallace, Tanner A; Wallace, Amy E

    2009-01-01

    To determine whether the Department of Veterans Affairs Health Care Personnel Enhancement Act (the Act), which was designed to achieve VA physician salary parity with American Academy of Medical Colleges (AAMC) Associate Professors and enacted in 2006, had achieved its goal. Using VA human resources datasets and data from the AAMC, we calculated mean VA physician salaries, with 95 percent confidence intervals, for 15 different medical specialties. For each specialty, we compared VA salaries to the median, 25th, and 75th percentile of AAMC Associate Professors' incomes. The Act's passage resulted in a $20,000 annual increase in VA physicians' salaries. VA primary care physicians, medical subspecialists, and psychiatrists had salaries that were comparable to their AAMC counterparts prior to and after enactment of the Act. However, VA surgical specialists', anesthesiologists', and radiologists' salaries lagged their AAMC counterparts both before and after the Act's enactment. Income increases were negatively correlated with full-time workforce changes. VA does not appear to provide comparable salaries for physicians necessary for surgical care. In certain cases, VA should consider outsourcing surgical services.

  14. Benchmarking: contexts and details matter.

    Science.gov (United States)

    Zheng, Siyuan

    2017-07-05

    Benchmarking is an essential step in the development of computational tools. We take this opportunity to pitch in our opinions on tool benchmarking, in light of two correspondence articles published in Genome Biology. Please see related Li et al. and Newman et al. correspondence articles: www.dx.doi.org/10.1186/s13059-017-1256-5 and www.dx.doi.org/10.1186/s13059-017-1257-4.

  15. Semi-Analytical Benchmarks for MCNP6

    Energy Technology Data Exchange (ETDEWEB)

    Grechanuk, Pavel Aleksandrovi [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-11-07

    Code verification is an extremely important process that involves proving or disproving the validity of code algorithms by comparing them against analytical results of the underlying physics or mathematical theory on which the code is based. Monte Carlo codes such as MCNP6 must undergo verification and testing upon every release to ensure that the codes are properly simulating nature. Specifically, MCNP6 has multiple sets of problems with known analytic solutions that are used for code verification. Monte Carlo codes primarily specify either current boundary sources or a volumetric fixed source, either of which can be very complicated functions of space, energy, direction and time. Thus, most of the challenges with modeling analytic benchmark problems in Monte Carlo codes come from identifying the correct source definition to properly simulate the correct boundary conditions. The problems included in this suite all deal with mono-energetic neutron transport without energy loss, in a homogeneous material. The variables that differ between the problems are source type (isotropic/beam), medium dimensionality (infinite/semi-infinite), etc.
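
    A classic example of the sort of analytic result such verification suites rely on is the uncollided flux from an isotropic point source in an infinite homogeneous medium, phi(r) = S exp(-Sigma_t r) / (4 pi r^2). This is a standard transport-theory formula offered here for illustration; it is not claimed to be one of the MCNP6 suite's specific problems:

```python
import math

def uncollided_point_flux(source_strength, sigma_t, r):
    """Uncollided scalar flux at distance r from an isotropic point source
    in an infinite homogeneous medium with total cross section sigma_t:
    phi(r) = S * exp(-sigma_t * r) / (4 * pi * r**2),
    i.e. geometric 1/r^2 spreading times exponential attenuation."""
    return source_strength * math.exp(-sigma_t * r) / (4.0 * math.pi * r ** 2)

# In a vacuum (sigma_t = 0) the flux reduces to pure inverse-square spreading:
print(uncollided_point_flux(1.0, 0.0, 2.0))  # 1 / (16 * pi)
```

    Verification then consists of tallying the code's flux at several radii and confirming agreement with the formula to within statistical uncertainty, which exercises the source definition and geometry handling the abstract identifies as the hard part.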

  16. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Kahler, A. [Los Alamos National Laboratory (LANL); Macfarlane, R E [Los Alamos National Laboratory (LANL); Mosteller, R D [Los Alamos National Laboratory (LANL); Kiedrowski, B C [Los Alamos National Laboratory (LANL); Frankle, S C [Los Alamos National Laboratory (LANL); Chadwick, M. B. [Los Alamos National Laboratory (LANL); Mcknight, R D [Argonne National Laboratory (ANL); Lell, R M [Argonne National Laboratory (ANL); Palmiotti, G [Idaho National Laboratory (INL); Hiruta, h [Idaho National Laboratory (INL); Herman, Micheal W [Brookhaven National Laboratory (BNL); Arcilla, r [Brookhaven National Laboratory (BNL); Mughabghab, S F [Brookhaven National Laboratory (BNL); Sublet, J C [Culham Science Center, Abington, UK; Trkov, A. [Jozef Stefan Institute, Slovenia; Trumbull, T H [Knolls Atomic Power Laboratory; Dunn, Michael E [ORNL

    2011-01-01

    The ENDF/B-VII.1 library is the latest revision to the United States' Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [1]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected (235)U and (239)Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also confirmed for selected actinide reaction rates such as (236)U; (238,242)Pu and (241,243)Am capture in fast systems. Other deficiencies, such as the overprediction of Pu solution system critical…

  17. 78 FR 77204 - Proposed Information Collection (VA National Veterans Sports Programs and Special Event Surveys...

    Science.gov (United States)

    2013-12-20

    ... AGENCY: Office of Public & Intergovernmental Affairs, Department of Veterans Affairs. ACTION: Notice. SUMMARY: The Office of Public Affairs (OPA), Department of Veterans Affairs (VA), is announcing an... DEPARTMENT OF VETERANS AFFAIRS [OMB Control No. 2900-NEW] Proposed Information Collection (VA...

  18. Benchmark analysis of MCNP™ ENDF/B-VI iron

    International Nuclear Information System (INIS)

    Court, J.D.; Hendricks, J.S.

    1994-12-01

    The MCNP ENDF/B-VI iron cross-section data was subjected to four benchmark studies as part of the Hiroshima/Nagasaki dose re-evaluation for the National Academy of Science and the Defense Nuclear Agency. The four benchmark studies were: (1) the iron sphere benchmarks from the Lawrence Livermore Pulsed Spheres; (2) the Oak Ridge National Laboratory Fusion Reactor Shielding Benchmark; (3) a 76-cm diameter iron sphere benchmark done at the University of Illinois; (4) the Oak Ridge National Laboratory Benchmark for Neutron Transport through Iron. MCNP4A was used to model each benchmark and computational results from the ENDF/B-VI iron evaluations were compared to ENDF/B-IV, ENDF/B-V, the MCNP Recommended Data Set (which includes Los Alamos National Laboratory Group T-2 evaluations), and experimental data. The results show that the ENDF/B-VI iron evaluations are as good as, or better than, previous data sets

  19. Analysis of an OECD/NEA high-temperature reactor benchmark

    International Nuclear Information System (INIS)

    Hosking, J. G.; Newton, T. D.; Koeberl, O.; Morris, P.; Goluoglu, S.; Tombakoglu, T.; Colak, U.; Sartori, E.

    2006-01-01

    This paper describes analyses of the OECD/NEA HTR benchmark organized by the 'Working Party on the Scientific Issues of Reactor Systems (WPRS)', formerly the 'Working Party on the Physics of Plutonium Fuels and Innovative Fuel Cycles'. The benchmark was specifically designed to provide inter-comparisons for plutonium and thorium fuels when used in HTR systems. Calculations considering uranium fuel have also been included in the benchmark, in order to identify any increased uncertainties when using plutonium or thorium fuels. The benchmark consists of five phases, which include cell and whole-core calculations. Analysis of the benchmark has been performed by a number of international participants, who have used a range of deterministic and Monte Carlo code schemes. For each of the benchmark phases, neutronics parameters have been evaluated. Comparisons are made between the results of the benchmark participants, as well as comparisons between the predictions of the deterministic calculations and those from detailed Monte Carlo calculations. (authors)

  20. Access to mental health care among women Veterans: is VA meeting women's needs?

    Science.gov (United States)

    Kimerling, Rachel; Pavao, Joanne; Greene, Liberty; Karpenko, Julie; Rodriguez, Allison; Saweikis, Meghan; Washington, Donna L

    2015-04-01

    Patient-centered access to mental health care describes the fit between patient needs and the resources of the system. To date, few data are available to guide implementation of services for women veterans, an underrepresented minority within Department of Veterans Affairs (VA) health care. The current study examines access to mental health care among women veterans and identifies gender-related indicators of perceived access to mental health care. A population-based sample of 6287 women veterans using VA primary care services participated in a survey of past-year perceived need for mental health care, mental health utilization, and gender-related mental health care experiences. Subjective rating of how well mental health care met their needs was used as an indicator of perceived access. Half of all women reported perceived mental health need; 84.3% of those women received care. Nearly all mental health users (90.9%) used VA services, although only about half (48.8%) reported that their mental health care met their needs completely or very well. Gender-related experiences (availability of female providers, women-only treatment settings, women-only treatment groups, and gender-related comfort) were each associated with 2-fold increased odds of perceived access, and the associations remained after adjusting for ease of getting care. Women VA users demonstrate very good objective access to mental health services. Desire for, and access to, specialized mental health services for women vary across the population and are important aspects of shared decision making in referral and treatment planning for women using VA primary care.

  1. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  2. 76 FR 38302 - Safety Zone; Cape Charles Fireworks, Cape Charles Harbor, Cape Charles, VA

    Science.gov (United States)

    2011-06-30

    ... the Town of Cape Charles will sponsor a fireworks display on the shoreline of the navigable waters of...-AA00 Safety Zone; Cape Charles Fireworks, Cape Charles Harbor, Cape Charles, VA AGENCY: Coast Guard... navigable waters of Cape Charles City Harbor in Cape Charles, VA in support of the Fourth of July Fireworks...

  3. 76 FR 27970 - Safety Zone; Cape Charles Fireworks, Cape Charles Harbor, Cape Charles, VA.

    Science.gov (United States)

    2011-05-13

    ... Charles will sponsor a fireworks display on the shoreline of the navigable waters of Cape Charles City...[deg]01'30'' W (NAD 1983). This safety zone will be established in the vicinity of Cape Charles, VA...-AA00 Safety Zone; Cape Charles Fireworks, Cape Charles Harbor, Cape Charles, VA. AGENCY: Coast Guard...

  4. Continuation of Iranshahri Thoughts in Kalileh va Demne

    Directory of Open Access Journals (Sweden)

    Bijan Zahirinav

    2010-10-01

    Full Text Available In this essay, Iranshahri's thoughts, such as the ideal king, divine charisma, the togetherness of religion and kingdom, justice, and truth, in Kalileh va Demneh are investigated. In Kalileh and Demneh, kingdom is a divine gift and the king is at the center of state affairs. Therefore, affection and kindness must be imprinted on his forehead; otherwise he would be a devilish creature who disturbs the governing order of the universe. He must be metaphysically supported in the arena of governmental affairs, and this is possible only by divine charisma. Such a king must be adorned with the ornament of justice, so that everybody does his best in his own special class and approaches the salvation of the soul. The king must also be decorated with accomplishments such as battle assistance and with virtues such as religion assistance. He must strengthen the social connection by "truth". Basically, without truth life has no harmony, because falsehood diverts life from its regular direction. But finally, with the dominance of divine forces over devilish ones, utopia, which is a secure place of release from disorder and injustice, is proven to be true. Therefore, we conclude that Kalileh and Demneh contributes to the continuation of Iranshahri's thoughts and also to the historical and cultural continuation of Iran in the transition to the Islamic period.

  5. MoMaS reactive transport benchmark using PFLOTRAN

    Science.gov (United States)

    Park, H.

    2017-12-01

    The MoMaS benchmark was developed to enhance numerical simulation capability for reactive transport modeling in porous media. The benchmark was published in late September of 2009; it is not taken from a real chemical system but consists of realistic and numerically challenging tests. PFLOTRAN is a state-of-the-art massively parallel subsurface flow and reactive transport code that is being used in multiple nuclear waste repository projects at Sandia National Laboratories, including the Waste Isolation Pilot Plant and Used Fuel Disposition. The MoMaS benchmark has three independent tests with easy, medium, and hard chemical complexity. This paper demonstrates how PFLOTRAN is applied to this benchmark exercise and shows results of the easy benchmark test case, which includes mixing of aqueous components and surface complexation. The surface complexations consist of monodentate and bidentate reactions, which introduce difficulty in defining the selectivity coefficient if the reaction applies to a bulk reference volume. The selectivity coefficient becomes porosity-dependent for bidentate reactions in heterogeneous porous media. The benchmark was solved by PFLOTRAN with minimal modification to address the issue, and unit conversions were made to suit PFLOTRAN.

  6. Use of the Decision Support System for VA cost-effectiveness research.

    Science.gov (United States)

    Barnett, P G; Rodgers, J H

    1999-04-01

    The Department of Veterans Affairs is adopting the Decision Support System (DSS), computer software and databases that include a cost-accounting system for determining the cost of health care products and patient encounters. A system for providing cost data for cost-effectiveness analysis should provide valid, detailed, and comprehensive data that can be aggregated. The design of DSS is described and compared against those criteria. Utilization data from DSS were compared with other VA utilization data, and aggregate DSS cost data from 35 medical centers were compared with relative resource weights developed for the Medicare program. Data on hospital stays at 3 facilities showed that 3.7% of the stays in DSS were not in the VA discharge database, whereas 7.6% of the stays in the discharge data were not in DSS. DSS reported between 68.8% and 97.1% of the outpatient encounters reported by six facilities in the ambulatory care database. Relative weights for each Diagnosis Related Group based on DSS data from 35 VA facilities correlated with Medicare weights (correlation coefficient of .853). DSS will be useful for research if certain problems are overcome. It is difficult to distinguish long-term from acute hospital care. VA does not have a complete database of all inpatient procedures, so DSS has not assigned them a specific cost. The authority to access encounter-level DSS data needs to be centralized. Researchers can provide the feedback needed to improve DSS cost estimates. A comprehensive encounter-level extract would facilitate the use of DSS for research.
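    The DRG-weight comparison above reduces to a Pearson correlation between two vectors of relative weights. A minimal sketch, with made-up weights for five hypothetical DRGs (the reported .853 came from real DSS and Medicare data, not these numbers):

```python
# Illustrative only: correlating facility-derived DRG relative weights
# with external (e.g., Medicare) weights, as in the DSS validation.
import math

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy relative weights for five DRGs (invented for illustration).
dss_weights = [0.9, 1.4, 2.1, 0.7, 3.0]
medicare_weights = [1.0, 1.3, 2.4, 0.6, 2.8]
r = pearson_r(dss_weights, medicare_weights)
```

A correlation near 1 indicates that the two cost-weighting schemes rank and scale DRGs similarly, which is the sense in which the .853 above validates DSS costs.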

  7. Brand-Name Prescription Drug Use Among Diabetes Patients in the VA and Medicare Part D: A National Comparison

    Science.gov (United States)

    Gellad, Walid F.; Donohue, Julie M.; Zhao, Xinhua; Mor, Maria K.; Thorpe, Carolyn T.; Smith, Jeremy; Good, Chester B.; Fine, Michael J.; Morden, Nancy E.

    2013-01-01

    Background: Medicare Part D and the Department of Veterans Affairs (VA) use different approaches to manage prescription drug benefits, with implications for spending. Medicare relies on private plans with distinct formularies, whereas VA administers its own benefit using a national formulary. Objective: To compare overall and regional rates of brand-name drug use among older adults with diabetes in Medicare and VA. Design: Retrospective cohort. Setting: Medicare and VA. Patients: National sample in 2008 of 1,061,095 Part D beneficiaries and 510,485 Veterans age 65+ with diabetes. Measurements: Percent of patients on oral hypoglycemics, statins, and angiotensin-converting-enzyme inhibitors/angiotensin-receptor-blockers who filled brand-name drugs, and percent of patients on long-acting insulin who filled analogues. We compared sociodemographic and health-status adjusted hospital referral region (HRR) brand-name use to examine local practice patterns, and calculated changes in spending if each system's brand-name use mirrored the other. Results: Brand-name use in Medicare was 2–3 times that of VA: 35.3% vs. 12.7% for oral hypoglycemics, 50.7% vs. 18.2% for statins, 42.5% vs. 20.8% for angiotensin-converting-enzyme inhibitors/angiotensin-receptor-blockers, and 75.1% vs. 27.0% for insulin analogues. Adjusted HRR brand-name statin use ranged (5th to 95th percentile) from 41.0%–58.3% in Medicare and 6.2%–38.2% in VA. For each drug group, the HRR at the 95th percentile in VA had lower brand-name use than the 5th percentile HRR in Medicare. Medicare spending in this population would have been $1.4 billion less if brand-name use matched the VA for these medications. Limitation: This analysis cannot fully describe the factors underlying differences in brand-name use. Conclusions: Medicare beneficiaries with diabetes use 2–3 times more brand-name drugs than a comparable group within VA, at substantial excess cost. Primary Funding Sources: VA; NIH; RWJF. PMID:23752663

  8. TRACE/PARCS analysis of the OECD/NEA Oskarshamn-2 BWR stability benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Kozlowski, T. [Univ. of Illinois, Urbana-Champaign, IL (United States); Downar, T.; Xu, Y.; Wysocki, A. [Univ. of Michigan, Ann Arbor, MI (United States); Ivanov, K.; Magedanz, J.; Hardgrove, M. [Pennsylvania State Univ., Univ. Park, PA (United States); March-Leuba, J. [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Hudson, N.; Woodyatt, D. [Nuclear Regulatory Commission, Rockville, MD (United States)

    2012-07-01

    On February 25, 1999, the Oskarshamn-2 NPP experienced a stability event which culminated in diverging power oscillations with a decay ratio of about 1.4. The event was successfully modeled by the TRACE/PARCS coupled code system, and further analysis of the event is described in this paper. The results show very good agreement with the plant data, capturing the entire behavior of the transient including the onset of instability, growth of the oscillations (decay ratio) and oscillation frequency. This provides confidence in the prediction of other parameters which are not available from the plant records. The event provides coupled code validation for a challenging BWR stability event, which involves the accurate simulation of neutron kinetics (NK), thermal-hydraulics (TH), and TH/NK coupling. The success of this work has demonstrated the ability of the 3-D coupled systems code TRACE/PARCS to capture the complex behavior of BWR stability events. The problem was released as an international OECD/NEA benchmark, and it is the first benchmark based on measured plant data for a stability event with a decay ratio greater than one. Interested participants are invited to contact the authors for more information. (authors)
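    A decay ratio like the 1.4 quoted above is conventionally estimated from the ratio of successive oscillation peak amplitudes: DR > 1 means the oscillation diverges. A minimal sketch on a synthetic growing oscillation (not plant data):

```python
# Illustrative sketch: estimate a decay ratio from successive peaks of a
# sampled power-oscillation signal. Here the signal is a synthetic 0.5 Hz
# sine whose envelope grows by a factor of 1.4 per period.
import math

def peak_amplitudes(signal):
    """Values of the local maxima of a sampled signal."""
    return [signal[i] for i in range(1, len(signal) - 1)
            if signal[i - 1] < signal[i] > signal[i + 1]]

def decay_ratio(peaks):
    """Average ratio of successive peak amplitudes (DR > 1 => diverging)."""
    ratios = [b / a for a, b in zip(peaks, peaks[1:])]
    return sum(ratios) / len(ratios)

dr_true = 1.4
# 10 s at 100 Hz; envelope dr_true**(t/T) with period T = 2 s.
signal = [dr_true ** (0.5 * (i * 0.01)) * math.sin(math.pi * (i * 0.01))
          for i in range(1000)]
dr_est = decay_ratio(peak_amplitudes(signal))
```

On real plant records the peaks would first be extracted from a noisy, detrended power signal; the peak-ratio definition itself is unchanged.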

  9. Criteria of benchmark selection for efficient flexible multibody system formalisms

    Directory of Open Access Journals (Sweden)

    Valášek M.

    2007-10-01

    Full Text Available The paper deals with the selection process of benchmarks for testing and comparing efficient flexible multibody formalisms. The existing benchmarks are briefly summarized. The purposes of benchmark selection are investigated. The result of this analysis is the formulation of criteria of benchmark selection for flexible multibody formalisms. Based on them, an initial set of suitable benchmarks is described. Besides that, the evaluation measures are revised and extended.

  10. Intra-Operative Indocyanine Green-Videoangiography (ICG-VA) in ...

    African Journals Online (AJOL)

    Methods: Fifteen consecutive patients with anterior circulation aneurysms who underwent craniotomy and clipping of the aneurysms were included in this study. Intraoperative ICG-VA was performed in all cases after exposure of the aneurysm and the branches in the vicinity of the aneurysm or the parent vessel before ...

  11. Criticality safety benchmarking of PASC-3 and ECNJEF1.1

    International Nuclear Information System (INIS)

    Li, J.

    1992-09-01

    To validate the code system PASC-3 and the multigroup cross-section library ECNJEF1.1 on various applications, many benchmarks are required. This report presents the results of criticality safety benchmarking for five calculational and four experimental benchmarks. These benchmarks are related to transport packages for fissile materials such as spent fuel. The fissile nuclides in these benchmarks are 235U and 239Pu. The modules of PASC-3 used for the calculations are BONAMI, NITAWL and KENO V.a. The final results for the experimental benchmarks agree well with the experimental data. For the calculational benchmarks the results presented here are in reasonable agreement with the results from other investigations. (author). 8 refs.; 20 figs.; 5 tabs

  12. Validity testing and neuropsychology practice in the VA healthcare system: results from a recent practitioner survey.

    Science.gov (United States)

    Young, J Christopher; Roper, Brad L; Arentsen, Timothy J

    2016-05-01

    A survey of neuropsychologists in the Veterans Health Administration examined symptom/performance validity test (SPVT) practices and estimated base rates for patient response bias. Invitations were emailed to 387 psychologists employed within the Veterans Affairs (VA), identified as likely practicing neuropsychologists, resulting in 172 respondents (44.4% response rate). Practice areas varied, with 72% at least partially practicing in general neuropsychology clinics and 43% conducting VA disability exams. Mean estimated failure rates were 23.0% for clinical outpatient, 12.9% for inpatient, and 39.4% for disability exams. Failure rates were the highest for mTBI and PTSD referrals. Failure rates were positively correlated with the number of cases seen and frequency and number of SPVT use. Respondents disagreed regarding whether one (45%) or two (47%) failures are required to establish patient response bias, with those administering more measures employing the more stringent criterion. Frequency of the use of specific SPVTs is reported. Base rate estimates for SPVT failure in VA disability exams are comparable to those in other medicolegal settings. However, failure in routine clinical exams is much higher in the VA than in other settings, possibly reflecting the hybrid nature of the VA's role in both healthcare and disability determination. Generally speaking, VA neuropsychologists use SPVTs frequently and eschew pejorative terms to describe their failure. Practitioners who require only one SPVT failure to establish response bias may overclassify patients. Those who use few or no SPVTs may fail to identify response bias. Additional clinical and theoretical implications are discussed.

  13. KiVa Anti-Bullying Program in Italy: Evidence of Effectiveness in a Randomized Control Trial.

    Science.gov (United States)

    Nocentini, Annalaura; Menesini, Ersilia

    2016-11-01

    The present study aims to evaluate the effectiveness of the KiVa anti-bullying program in Italy through a randomized control trial of students in grades 4 and 6. The sample involved 2042 students (51% female; grade 4, mean age = 8.85, SD = 0.43; grade 6, mean age = 10.93, SD = 0.50); 13 comprehensive schools were randomly assigned to intervention (KiVa) or control (usual school provision) conditions. Different outcomes (bullying, victimization, pro-bullying attitudes, pro-victim attitudes, empathy toward victims), analyses (longitudinal mixed models with multiple-item scales; longitudinal prevalence of bullies and victims using Olweus' single question), and estimates of effectiveness (Cohen's d; odds ratios) were considered in order to compare the Italian results with those from other countries. Multilevel models showed that KiVa reduced bullying and victimization and increased pro-victim attitudes and empathy toward the victim in grade 4, with effect sizes from 0.24 to 0.40. In grade 6, KiVa reduced bullying, victimization, and pro-bullying attitudes; the effects were smaller as compared to grade 4, yet significant (d ≥ 0.20). Finally, using Olweus' dichotomous definition of bullies and victims, results showed that the odds of being a victim were 1.93 times higher for a control student than for a KiVa student in grade 4. Overall, the findings provide evidence of the effectiveness of the program in Italy; the discussion focuses on the factors that successfully influenced the transportability of the KiVa program to Italy.
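    The two effect-size measures quoted above, Cohen's d and the odds ratio, can be sketched as follows; the counts and scale moments here are invented for illustration and are not the trial's data:

```python
# Illustrative effect-size calculations for a two-arm trial.
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table (rows: control, intervention;
    columns: victim, not victim)."""
    return (a / b) / (c / d)

def cohens_d(mean1, sd1, mean2, sd2):
    """Standardized mean difference with a simple pooled SD
    (equal group sizes assumed)."""
    pooled = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2.0)
    return (mean1 - mean2) / pooled

# Invented counts: victims / non-victims among control and KiVa students.
or_victim = odds_ratio(120, 880, 65, 935)
# Invented bullying-scale means/SDs for control vs. KiVa at post-test.
d_bullying = cohens_d(0.62, 0.80, 0.38, 0.80)
```

With these toy counts the odds ratio comes out near 2, i.e. the same order as the 1.93 reported for grade-4 victimization above.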

  14. Revaluering benchmarking - A topical theme for the construction industry

    OpenAIRE

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained a foothold in the construction industry. The predominant research, perceptions and uses of benchmarking are valued so strongly and uniformly that what may seem valuable is actually deterring researchers and practitioners from studying and questioning the concept objectively. This paper addresses the underlying nature of benchmarking, and accounts for the importance of focusing attention on the sociological impacts benchmarking has in...

  15. 77 FR 74279 - Agency Information Collection (VA/DOD Joint Disability Evaluation Board Claim): Activity under...

    Science.gov (United States)

    2012-12-13

    ... Joint Disability Evaluation Board Claim): Activity under OMB Review AGENCY: Veterans Benefits... . Please refer to ``OMB Control No. 2900-0704.'' SUPPLEMENTARY INFORMATION: Title: VA/DOD Joint Disability Evaluation Board Claim, VA Form 21- 0819. OMB Control Number: 2900-0704. Type of Review: Extension of a...

  16. Numerical methods: Analytical benchmarking in transport theory

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1988-01-01

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered

  17. Benchmarking

    OpenAIRE

    Beretta Sergio; Dossi Andrea; Grove Hugh

    2000-01-01

    Due to their particular nature, the benchmarking methodologies tend to exceed the boundaries of management techniques, and to enter the territories of managerial culture. A culture that is also destined to break into the accounting area not only strongly supporting the possibility of fixing targets, and measuring and comparing the performance (an aspect that is already innovative and that is worthy of attention), but also questioning one of the principles (or taboos) of the accounting or...

  18. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    Full Text Available The article analyses the evolution and possibilities of application of benchmarking in the telecommunication sphere. It studies the essence of benchmarking on the basis of a generalisation of different scientists' approaches to defining this notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology, the main factors that determine the success of an operator in the modern market economy, and the mechanism and component stages of carrying out benchmarking by a telecommunication operator. It analyses the telecommunication market and identifies the dynamics of its development and tendencies of change in the composition of telecommunication operators and providers. Having generalised the existing experience of benchmarking application, the article identifies the main types of benchmarking of telecommunication operators by the following features: by the level of conduct (branch, inter-branch and international benchmarking); by relation to participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  19. OECD/NEA Main Steam Line Break Benchmark Problem Exercise I Simulation Using the SPACE Code with the Point Kinetics Model

    International Nuclear Information System (INIS)

    Kim, Yohan; Kim, Seyun; Ha, Sangjun

    2014-01-01

    The Safety and Performance Analysis Code for Nuclear Power Plants (SPACE) has been developed in recent years by Korea Hydro & Nuclear Power Co. (KHNP) through collaborative work with other Korean nuclear industries. SPACE is a best-estimate, two-phase, three-field thermal-hydraulic analysis code for analyzing the safety and performance of pressurized water reactors (PWRs). The SPACE code has sufficient features to replace outdated vendor-supplied codes and to be used for the safety analysis of operating PWRs and the design of advanced reactors. As a result of the second phase of development, version 2.14 of the code was released after successive V and V work. Topical reports on the code and the related safety analysis methodologies have been prepared for licensing. In this study, the OECD/NEA Main Steam Line Break (MSLB) Benchmark Problem Exercise I was simulated as a V and V exercise, and the results were compared with those of the participants in the benchmark project. Through the simulation, it was concluded that the SPACE code can effectively simulate PWR MSLB accidents.

  20. Towards a benchmarking tool for minimizing wastewater utility greenhouse gas footprints.

    Science.gov (United States)

    Guo, L; Porro, J; Sharma, K R; Amerlinck, Y; Benedetti, L; Nopens, I; Shaw, A; Van Hulle, S W H; Yuan, Z; Vanrolleghem, P A

    2012-01-01

    A benchmark simulation model, which includes a wastewater treatment plant (WWTP)-wide model and a rising main sewer model, is proposed for testing mitigation strategies to reduce the system's greenhouse gas (GHG) emissions. The sewer model was run to predict methane emissions, and its output was used as the WWTP model input. An activated sludge model for GHG (ASMG) was used to describe nitrous oxide (N2O) generation and release in the activated sludge process. N2O production through both heterotrophic and autotrophic pathways was included. Other GHG emissions were estimated using empirical relationships. Different scenarios were evaluated comparing GHG emissions, effluent quality and energy consumption. Aeration control played a clear role in N2O emissions, through the concentrations and distributions of dissolved oxygen (DO) along the length of the bioreactor. The average N2O emission under dynamic influent cannot be simulated by a steady-state model subjected to a similar influent quality, stressing the importance of dynamic simulation and control. As the GHG models have yet to be validated, these results carry a degree of uncertainty; however, they fulfilled the objective of this study, i.e. to demonstrate the potential of a dynamic system-wide modelling and benchmarking approach for balancing water quality, operational costs and GHG emissions.

  1. Three-dimensional RAMA fluence methodology benchmarking

    International Nuclear Information System (INIS)

    Baker, S. P.; Carter, R. G.; Watkins, K. E.; Jones, D. B.

    2004-01-01

    This paper describes the benchmarking of the RAMA Fluence Methodology software, which was performed in accordance with U.S. Nuclear Regulatory Commission Regulatory Guide 1.190. The RAMA Fluence Methodology has been developed by TransWare Enterprises Inc. through funding provided by the Electric Power Research Institute (EPRI) and the Boiling Water Reactor Vessel and Internals Project (BWRVIP). The purpose of the software is to provide an accurate method for calculating neutron fluence in BWR pressure vessels and internal components. The Methodology incorporates a three-dimensional deterministic transport solution with flexible, arbitrary-geometry representation of reactor system components, previously available only with Monte Carlo solution techniques. Benchmarking was performed against measurements from three standard benchmark problems, the Pool Criticality Assembly (PCA), VENUS-3, and H. B. Robinson Unit 2, and against flux wire measurements obtained from two BWR nuclear plants. The calculated-to-measured (C/M) ratios range from 0.93 to 1.04, demonstrating the accuracy of the RAMA Fluence Methodology in predicting neutron flux, fluence, and dosimetry activation. (authors)

  2. Use of Sensitivity and Uncertainty Analysis to Select Benchmark Experiments for the Validation of Computer Codes and Data

    International Nuclear Information System (INIS)

    Elam, K.R.; Rearden, B.T.

    2003-01-01

    Sensitivity and uncertainty analysis methodologies under development at Oak Ridge National Laboratory were applied to determine whether existing benchmark experiments adequately cover the area of applicability for the criticality code and data validation of PuO2 and mixed-oxide (MOX) powder systems. The study examined three PuO2 powder systems and four MOX powder systems that would be useful for establishing mass limits for a MOX fuel fabrication facility. Using traditional methods to choose experiments for criticality analysis validation, 46 benchmark critical experiments were identified as applicable to the PuO2 powder systems. However, only 14 experiments were thought to be within the area of applicability for dry MOX powder systems. The applicability of 318 benchmark critical experiments, including the 60 experiments initially identified, was assessed. Each benchmark and powder system was analyzed using the Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) one-dimensional (TSUNAMI-1D) or three-dimensional (TSUNAMI-3D) sensitivity analysis sequences, which will be included in the next release of the SCALE code system. These sensitivity data and cross-section uncertainty data were then processed with TSUNAMI-IP to determine the correlation of each application to each experiment in the benchmarking set. Correlation coefficients are used to assess the similarity between systems and determine the applicability of one system for the code and data validation of another. The applicability of most of the experiments identified using traditional methods was confirmed by the TSUNAMI analysis. In addition, some PuO2 and MOX powder systems were determined to be within the area of applicability of several other benchmarks that would not have been considered using traditional methods. Therefore, the number of benchmark experiments useful for the validation of these systems exceeds the number previously expected.
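    The correlation coefficients described above weight the agreement of two systems' sensitivity profiles by shared cross-section uncertainty data. A schematic sketch of such an uncertainty-weighted similarity index, with toy sensitivity vectors and a toy covariance matrix (the actual TSUNAMI-IP index operates on group-wise, nuclide-reaction sensitivity data and evaluated covariance libraries):

```python
# Schematic uncertainty-weighted correlation of two sensitivity vectors
# through a shared covariance matrix: c = (Sa C Se) / sqrt((Sa C Sa)(Se C Se)).
# All data below are invented for illustration.
import math

def similarity(s_app, s_exp, cov):
    """Covariance-weighted correlation of two sensitivity vectors."""
    def quad(u, v):  # bilinear form u^T C v
        return sum(u[i] * cov[i][j] * v[j]
                   for i in range(len(u)) for j in range(len(v)))
    return quad(s_app, s_exp) / math.sqrt(quad(s_app, s_app) * quad(s_exp, s_exp))

cov = [[0.04, 0.01, 0.00],   # toy symmetric covariance matrix
       [0.01, 0.09, 0.00],
       [0.00, 0.00, 0.01]]
application = [0.8, 0.3, 0.1]   # toy sensitivity profiles
experiment = [0.7, 0.4, 0.1]
c = similarity(application, experiment, cov)
```

A value near 1 indicates that the experiment's response to the shared nuclear-data uncertainties closely matches the application's, which is the sense in which a benchmark is judged "applicable".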

  3. Handbook of critical experiments benchmarks

    International Nuclear Information System (INIS)

    Durst, B.M.; Bierman, S.R.; Clayton, E.D.

    1978-03-01

    Data from critical experiments have been collected together for use as benchmarks in evaluating calculational techniques and nuclear data. These benchmarks have been selected from the numerous experiments performed on homogeneous plutonium systems. No attempt has been made to reproduce all of the data that exists. The primary objective in the collection of these data is to present representative experimental data defined in a concise, standardized format that can easily be translated into computer code input

  4. Conformational determination of [Leu]enkephalin based on theoretical and experimental VA and VCD spectral analyses

    DEFF Research Database (Denmark)

    Abdali, Salim; Jalkanen, Karl J.; Cao, X.

    2004-01-01

    Conformational determination of [Leu]enkephalin in DMSO-d6 is carried out using VA and VCD spectral analyses. Conformational energies, vibrational frequencies and VA and VCD intensities are calculated using DFT at B3LYP/6-31G* level of theory. Comparison between the measured spectra...

  5. 78 FR 55777 - Proposed Information Collection (VA, National Veterans Sports Programs and Special Events, Event...

    Science.gov (United States)

    2013-09-11

    ... techniques or the use of other forms of information technology. Titles: a. National Disabled Veterans Winter... Form 0928h. m. Surfing Personnel Application, VA Form 0928i. n. Venue Personnel Application, VA Form... Creative Arts Festival, National Veterans TEE Tournament, National Disabled Veterans Winter Sports Clinic...

  6. Effectiveness of Expanded Implementation of STAR-VA for Managing Dementia-Related Behaviors Among Veterans.

    Science.gov (United States)

    Karel, Michele J; Teri, Linda; McConnell, Eleanor; Visnic, Stephanie; Karlin, Bradley E

    2016-02-01

    Nonpharmacological, psychosocial approaches are first-line treatments for managing behavioral symptoms in dementia, but they can be challenging to implement in long-term care settings. The Veterans Health Administration implemented STAR-VA, an interdisciplinary behavioral approach for managing challenging dementia-related behaviors, in its Community Living Centers (CLCs; nursing home care settings). This study describes how the program was implemented and provides an evaluation of Veteran clinical outcomes and staff feedback on the intervention. A mental health professional and registered nurse team from each of 17 CLCs completed STAR-VA training, which entailed an experiential workshop followed by 6 months of expert consultation as they worked with their teams to implement STAR-VA with Veterans identified to have challenging dementia-related behaviors. The frequency and severity of target behaviors and symptoms of depression, anxiety, and agitation were evaluated at baseline and at intervention completion. Staff provided feedback regarding STAR-VA feasibility and impact. Seventy-one Veterans completed the intervention. Behaviors clustered into 6 types: care refusal or resistance, agitation, aggression, vocalization, wandering, and other. Frequency and severity of target behaviors and symptoms of depression, anxiety, and agitation all significantly decreased, with overall effect sizes of 1 or greater. Staff rated both the benefits for Veterans and program feasibility favorably. This evaluation supports the feasibility and effectiveness of STAR-VA, an interdisciplinary behavioral intervention for managing challenging behaviors among residents with dementia in CLCs. Published by Oxford University Press on behalf of the Gerontological Society of America 2015.

  7. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Stueben, H.; Wegner, P.; Wettig, T.; Wittig, H.

    2004-01-01

We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC.

  8. H.B. Robinson-2 pressure vessel benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Remec, I.; Kam, F.B.K.

    1998-02-01

The H. B. Robinson Unit 2 Pressure Vessel Benchmark (HBR-2 benchmark) is described and analyzed in this report. Analysis of the HBR-2 benchmark can be used as partial fulfillment of the requirements for the qualification of the methodology for calculating neutron fluence in pressure vessels, as required by the U.S. Nuclear Regulatory Commission Regulatory Guide DG-1053, Calculational and Dosimetry Methods for Determining Pressure Vessel Neutron Fluence. Section 1 of this report describes the HBR-2 benchmark and provides all the dimensions, material compositions, and neutron source data necessary for the analysis. The measured quantities, to be compared with the calculated values, are the specific activities at the end of fuel cycle 9. The characteristic feature of the HBR-2 benchmark is that it provides measurements on both sides of the pressure vessel: in the surveillance capsule attached to the thermal shield and in the reactor cavity. In section 2, the analysis of the HBR-2 benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed with three multigroup libraries based on ENDF/B-VI: BUGLE-93, SAILOR-95 and BUGLE-96. The average ratio of the calculated-to-measured specific activities (C/M) for the six dosimeters in the surveillance capsule was 0.90 ± 0.04 for all three libraries. The average C/Ms for the cavity dosimeters (without neptunium dosimeter) were 0.89 ± 0.10, 0.91 ± 0.10, and 0.90 ± 0.09 for the BUGLE-93, SAILOR-95 and BUGLE-96 libraries, respectively. It is expected that the agreement of the calculations with the measurements, similar to the agreement obtained in this research, should typically be observed when the discrete-ordinates method and ENDF/B-VI libraries are used for the HBR-2 benchmark analysis.
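The calculated-to-measured (C/M) comparison described above reduces, per cross-section library, to averaging the per-dosimeter ratios. A minimal sketch of that bookkeeping, using hypothetical dosimeter activities rather than the HBR-2 data:

```python
# Sketch of the calculated-to-measured (C/M) specific-activity comparison
# described above. The dosimeter values below are hypothetical placeholders,
# not the HBR-2 surveillance-capsule measurements.
import statistics

calculated = [1.82e5, 3.40e4, 9.1e3, 2.75e5, 6.3e4, 1.1e4]   # Bq/mg, assumed
measured   = [2.05e5, 3.71e4, 1.02e4, 3.05e5, 7.1e4, 1.2e4]  # Bq/mg, assumed

cm_ratios = [c / m for c, m in zip(calculated, measured)]
cm_mean = statistics.mean(cm_ratios)
cm_std = statistics.stdev(cm_ratios)  # sample standard deviation over dosimeters

print(f"average C/M = {cm_mean:.2f} +/- {cm_std:.2f}")
```

With these placeholder values the average comes out near 0.90, i.e. a mild systematic underprediction of the kind quoted in the abstract.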

  9. Pool critical assembly pressure vessel facility benchmark

    International Nuclear Information System (INIS)

    Remec, I.; Kam, F.B.K.

    1997-07-01

This pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by the US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. In Section 2 the analysis of the PCA benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. An excellent agreement of the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M for all the dosimeters (total of 31) was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio, obtained with the BUGLE-93 library, for the 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used.

  10. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Wegner, P.; Wettig, T.

    2003-09-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC. (orig.)

  11. A simplified 2D HTTR benchmark problem

    International Nuclear Information System (INIS)

    Zhang, Z.; Rahnema, F.; Pounders, J. M.; Zhang, D.; Ougouag, A.

    2009-01-01

To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of relevant whole-core configurations. In this paper we have created a numerical benchmark problem in a 2D configuration typical of a high temperature gas cooled prismatic core. This problem was derived from the HTTR start-up experiment. For code-to-code verification, complex details of geometry and material specification of the physical experiments are not necessary. To this end, the benchmark problem presented here is derived by simplifications that remove the unnecessary details while retaining the heterogeneity and major physics properties from the neutronics viewpoint. Also included here is a six-group material (macroscopic) cross section library for the benchmark problem. This library was generated using the lattice depletion code HELIOS. Using this library, benchmark quality Monte Carlo solutions are provided for three different configurations (all-rods-in, partially-controlled and all-rods-out). The reference solutions include the core eigenvalue, block (assembly) averaged fuel pin fission density distributions, and absorption rate in absorbers (burnable poison and control rods). (authors)

  12. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    Bulej, Lubomír

    2005-01-01

The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking on a real software project and conclude with a glimpse at the challenges for the future.

  13. Second benchmark problem for WIPP structural computations

    International Nuclear Information System (INIS)

    Krieg, R.D.; Morgan, H.S.; Hunter, T.O.

    1980-12-01

    This report describes the second benchmark problem for comparison of the structural codes used in the WIPP project. The first benchmark problem consisted of heated and unheated drifts at a depth of 790 m, whereas this problem considers a shallower level (650 m) more typical of the repository horizon. But more important, the first problem considered a homogeneous salt configuration, whereas this problem considers a configuration with 27 distinct geologic layers, including 10 clay layers - 4 of which are to be modeled as possible slip planes. The inclusion of layering introduces complications in structural and thermal calculations that were not present in the first benchmark problem. These additional complications will be handled differently by the various codes used to compute drift closure rates. This second benchmark problem will assess these codes by evaluating the treatment of these complications

  14. Benchmarks: The Development of a New Approach to Student Evaluation.

    Science.gov (United States)

    Larter, Sylvia

    The Toronto Board of Education Benchmarks are libraries of reference materials that demonstrate student achievement at various levels. Each library contains video benchmarks, print benchmarks, a staff handbook, and summary and introductory documents. This book is about the development and the history of the benchmark program. It has taken over 3…

  15. Aerodynamic benchmarking of the DeepWind design

    DEFF Research Database (Denmark)

    Bedon, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective for the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...... the blade solicitation and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts. The blade shape is considered as a fixed parameter...

  16. Integral benchmarks with reference to thorium fuel cycle

    International Nuclear Information System (INIS)

    Ganesan, S.

    2003-01-01

This is a PowerPoint presentation about the Indian participation in the CRP 'Evaluated Data for the Thorium-Uranium fuel cycle'. The plans and scope of the Indian participation are to provide selected integral experimental benchmarks for nuclear data validation, including Indian thorium burn-up benchmarks, post-irradiation examination studies, comparison of basic evaluated data files, and analysis of selected benchmarks for the Th-U fuel cycle

  17. 48 CFR 852.219-72 - Evaluation factor for participation in the VA mentor-protégé program.

    Science.gov (United States)

    2010-10-01

    ... participation in the VA mentor-protégé program. 852.219-72 Section 852.219-72 Federal Acquisition Regulations... Texts of Provisions and Clauses 852.219-72 Evaluation factor for participation in the VA mentor-protégé... the VA Mentor-Protégé Program (DEC2009) This solicitation contains an evaluation factor or sub-factor...

  18. Correspondence of the Boston Assessment of Traumatic Brain Injury-Lifetime (BAT-L) clinical interview and the VA TBI screen.

    Science.gov (United States)

    Fortier, Catherine Brawn; Amick, Melissa M; Kenna, Alexandra; Milberg, William P; McGlinchey, Regina E

    2015-01-01

Mild traumatic brain injury is the signature injury of Operation Enduring Freedom (OEF), Operation Iraqi Freedom (OIF), and Operation New Dawn (OND), yet its identification and diagnosis is controversial and fraught with challenges. In 2007, the Department of Veterans Affairs (VA) implemented a policy requiring traumatic brain injury (TBI) screening of all individuals returning from deployment in the OEF/OIF/OND theaters of operation, which led to the rapid and widespread use of the VA TBI screen. The Boston Assessment of TBI-Lifetime (BAT-L) is the first validated, postcombat semistructured clinical interview to characterize head injuries and diagnose TBIs throughout the life span, including prior to, during, and after military service. Participants were a community-dwelling convenience sample of 179 OEF/OIF/OND veterans; measures were the BAT-L and the VA TBI screen. Based on BAT-L diagnosis of military TBI, the VA TBI screen demonstrated similar sensitivity (0.85) and specificity (0.82) when administered by research staff. When BAT-L diagnosis was compared with the historical clinician-administered VA TBI screen in a subset of participants, sensitivity was reduced. The specificity of the research-administered VA TBI screen was more than adequate. The sensitivity of the VA TBI screen, although relatively high, suggests that it does not oversample or "catch all" possible military TBIs. Traumatic brain injuries identified by the BAT-L but not by the VA TBI screen were predominantly noncombat military injuries. There is potential concern regarding the validity and reliability of the clinician-administered VA TBI screen, as we found poor correspondence between it and the BAT-L, as well as low interrater reliability between the clinician-administered and research-administered screens.

  19. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    International Nuclear Information System (INIS)

    Bess, John D.; Montierth, Leland; Köberl, Oliver

    2014-01-01

Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the 235U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ), except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  20. The extent of benchmarking in the South African financial sector

    Directory of Open Access Journals (Sweden)

    W Vermeulen

    2014-06-01

Benchmarking is the process of identifying, understanding and adapting outstanding practices from within the organisation or from other businesses, to help improve performance. The importance of benchmarking as an enabler of business excellence has necessitated an in-depth investigation into the current state of benchmarking in South Africa. This research project highlights the fact that respondents realise the importance of benchmarking, but that various problems hinder the effective implementation of benchmarking. Based on the research findings, recommendations for achieving success are suggested.

  1. 78 FR 31840 - Safety Zone; USO Patriotic Festival Air Show, Atlantic Ocean; Virginia Beach, VA

    Science.gov (United States)

    2013-05-28

    ...-AA00 Safety Zone; USO Patriotic Festival Air Show, Atlantic Ocean; Virginia Beach, VA AGENCY: Coast... provide for the safety of life on navigable waters during the USO Patriotic Festival Air Show. This action... Patriotic Festival Air Show, Atlantic Ocean; Virginia Beach, VA. (a) Regulated Area. The following area is a...

  2. Clean Energy Manufacturing Analysis Center Benchmark Report: Framework and Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-05-23

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  3. The level 1 and 2 specification for parallel benchmark and a benchmark test of scalar-parallel computer SP2 based on the specifications

    International Nuclear Information System (INIS)

    Orii, Shigeo

    1998-06-01

A benchmark specification for performance evaluation of parallel computers for numerical analysis is proposed. The Level 1 benchmark, a conventional benchmark based on processing time, measures the performance of a computer running a code. The Level 2 benchmark proposed in this report explains the reasons for that performance. As an example, the scalar-parallel computer SP2 is evaluated with this benchmark specification using a molecular dynamics code. The main causes suppressing parallel performance are found to be the maximum bandwidth and the start-up time of communication between nodes. The start-up time in particular is proportional not only to the number of processors but also to the number of particles. (author)
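The bandwidth and start-up costs identified in this Level 2 analysis can be captured in a simple latency/bandwidth ("alpha-beta") cost model. The sketch below uses assumed constants (latency, bandwidth, flop cost) and a hypothetical all-to-all exchange pattern, not measured SP2 figures:

```python
# Hedged sketch of a latency/bandwidth communication cost model of the kind
# a Level 2 benchmark analysis uses to explain parallel performance.
# All constants here are illustrative assumptions, not SP2 measurements.

def comm_time(message_bytes, latency_s=40e-6, bandwidth_Bps=100e6):
    """One point-to-point message: start-up (latency) cost + transfer cost."""
    return latency_s + message_bytes / bandwidth_Bps

def parallel_efficiency(n_procs, n_particles, flop_per_particle=100,
                        t_flop_s=10e-9, bytes_per_particle=48):
    """Per-timestep model: compute time scales as N/P, while the total
    start-up cost grows with the number of messages (~P per step here),
    reproducing the qualitative behaviour reported above."""
    t_comp = (n_particles / n_procs) * flop_per_particle * t_flop_s
    n_msgs = n_procs - 1                          # e.g. exchange with every peer
    msg_size = bytes_per_particle * n_particles / n_procs
    t_comm = n_msgs * comm_time(msg_size)
    return t_comp / (t_comp + t_comm)

# Efficiency falls as processors are added at a fixed problem size:
for p in (1, 8, 64):
    print(f"P={p:3d}  efficiency={parallel_efficiency(p, 100_000):.3f}")
```

Because the message count grows with the processor count, the start-up term eventually dominates at fixed problem size, which is the effect the report attributes to the SP2.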

  4. Benchmarking to improve the quality of cystic fibrosis care.

    Science.gov (United States)

    Schechter, Michael S

    2012-11-01

    Benchmarking involves the ascertainment of healthcare programs with most favorable outcomes as a means to identify and spread effective strategies for delivery of care. The recent interest in the development of patient registries for patients with cystic fibrosis (CF) has been fueled in part by an interest in using them to facilitate benchmarking. This review summarizes reports of how benchmarking has been operationalized in attempts to improve CF care. Although certain goals of benchmarking can be accomplished with an exclusive focus on registry data analysis, benchmarking programs in Germany and the United States have supplemented these data analyses with exploratory interactions and discussions to better understand successful approaches to care and encourage their spread throughout the care network. Benchmarking allows the discovery and facilitates the spread of effective approaches to care. It provides a pragmatic alternative to traditional research methods such as randomized controlled trials, providing insights into methods that optimize delivery of care and allowing judgments about the relative effectiveness of different therapeutic approaches.

  5. Cloning, Characterization, and Functional Investigation of VaHAESA from Vitis amurensis Inoculated with Plasmopara viticola

    Directory of Open Access Journals (Sweden)

    Shaoli Liu

    2018-04-01

Plant pattern recognition receptors (PRRs) are essential for immune responses and establishing symbiosis. Plants detect invaders via the recognition of pathogen-associated molecular patterns (PAMPs) by PRRs. This phenomenon is termed PAMP-triggered immunity (PTI). We investigated disease resistance in Vitis amurensis to identify PRRs that are important for resistance against downy mildew, analyzed the PRRs that were upregulated by incompatible Plasmopara viticola infection, and cloned the full-length cDNA of the VaHAESA gene. We then analyzed the structure, subcellular localization, and relative disease resistance of VaHAESA. VaHAESA and PRR-receptor-like kinase 5 (RLK5) are highly similar, belonging to the leucine-rich repeat (LRR)-RLK family and localizing to the plasma membrane. The expression of PRR genes changed after the inoculation of V. amurensis with compatible and incompatible P. viticola; during early disease development, transiently transformed V. vinifera plants expressing VaHAESA were more resistant to pathogens than those transformed with the empty vector and untransformed controls, potentially due to increased H2O2, NO, and callose levels in the transformants. Furthermore, transgenic Arabidopsis thaliana showed upregulated expression of genes related to the PTI pathway and improved disease resistance. These results show that VaHAESA is a positive regulator of resistance against downy mildew in grapevines.

  6. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  7. Benchmarking multi-dimensional large strain consolidation analyses

    International Nuclear Information System (INIS)

    Priestley, D.; Fredlund, M.D.; Van Zyl, D.

    2010-01-01

    Analyzing the consolidation of tailings slurries and dredged fills requires a more extensive formulation than is used for common (small strain) consolidation problems. Large strain consolidation theories have traditionally been limited to 1-D formulations. SoilVision Systems has developed the capacity to analyze large strain consolidation problems in 2 and 3-D. The benchmarking of such formulations is not a trivial task. This paper presents several examples of modeling large strain consolidation in the beta versions of the new software. These examples were taken from the literature and were used to benchmark the large strain formulation used by the new software. The benchmarks reported here are: a comparison to the consolidation software application CONDES0, Townsend's Scenario B and a multi-dimensional analysis of long-term column tests performed on oil sands tailings. All three of these benchmarks were attained using the SVOffice suite. (author)

  8. Volatility forecasting and value-at-risk estimation in emerging markets: the case of the stock market index portfolio in South Africa

    Directory of Open Access Journals (Sweden)

    Lumengo Bonga-Bonga

    2011-04-01

Accurate modelling of volatility is important as it relates to the forecasting of Value-at-Risk (VaR). The RiskMetrics model to forecast volatility is the benchmark in the financial sector. In an important regulatory innovation, the Basel Committee has proposed the use of an internal method for modelling VaR instead of the strict use of the benchmark model. The aim of this paper is to evaluate the performance of RiskMetrics in comparison to other models of volatility forecasting, such as some family classes of the Generalised Auto Regressive Conditional Heteroscedasticity models, in forecasting the VaR in emerging markets. This paper makes use of the stock market index portfolio, the All-Share Index, as a case study to evaluate the market risk in emerging markets. The paper underlines the importance of asymmetric behaviour for VaR forecasting in emerging markets' economies.
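The RiskMetrics benchmark referred to above forecasts next-day variance with an exponentially weighted moving average (decay factor λ = 0.94 for daily data) and reads VaR off a normal quantile. A minimal sketch on synthetic returns (the All-Share Index data are not reproduced here):

```python
# Hedged sketch of the RiskMetrics (EWMA) volatility forecast and the
# resulting one-day parametric VaR. The returns below are synthetic,
# not All-Share Index data.
import math
import random

random.seed(1)
returns = [random.gauss(0.0, 0.012) for _ in range(500)]  # synthetic daily returns

def ewma_variance(returns, lam=0.94):
    """RiskMetrics recursion: sigma2_t = lam*sigma2_{t-1} + (1-lam)*r_{t-1}^2."""
    sigma2 = returns[0] ** 2
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1.0 - lam) * r * r
    return sigma2

def one_day_var(returns, confidence=0.99, lam=0.94):
    """Parametric VaR under normality: VaR = z_alpha * sigma, as a loss fraction."""
    z = {0.95: 1.645, 0.99: 2.326}[confidence]  # standard normal quantiles
    return z * math.sqrt(ewma_variance(returns, lam))

print(f"1-day 99% VaR: {one_day_var(returns):.2%} of portfolio value")
```

GARCH-family alternatives replace the fixed decay λ with estimated parameters (and, in asymmetric variants, a leverage term), which is the comparison the paper evaluates.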

  9. JENDL-4.0 benchmarking for fission reactor applications

    International Nuclear Information System (INIS)

    Chiba, Go; Okumura, Keisuke; Sugino, Kazuteru; Nagaya, Yasunobu; Yokoyama, Kenji; Kugo, Teruhiko; Ishikawa, Makoto; Okajima, Shigeaki

    2011-01-01

    Benchmark testing for the newly developed Japanese evaluated nuclear data library JENDL-4.0 is carried out by using a huge amount of integral data. Benchmark calculations are performed with a continuous-energy Monte Carlo code and with the deterministic procedure, which has been developed for fast reactor analyses in Japan. Through the present benchmark testing using a wide range of benchmark data, significant improvement in the performance of JENDL-4.0 for fission reactor applications is clearly demonstrated in comparison with the former library JENDL-3.3. Much more accurate and reliable prediction for neutronic parameters for both thermal and fast reactors becomes possible by using the library JENDL-4.0. (author)

  10. VA Construction: Improved Processes Needed to Monitor Contract Modifications, Develop Schedules, and Estimate Costs

    Science.gov (United States)

    2017-03-01

the Handbook.36 VA headquarters officials told us that regional CFM offices monitor change-order-processing time frames for projects in their...visited collected different types of data on change orders. Because VA lacks the data on the change-order-processing time frames required by the Handbook...goals of processing change orders in a timelier manner, especially given our previous findings that change-order-processing time frames caused

  11. Benchmark problems for numerical implementations of phase field models

    International Nuclear Information System (INIS)

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; Warren, J.; Heinonen, O. G.

    2016-01-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
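As an illustration of the kind of model underlying the first proposed problem, a one-dimensional Cahn-Hilliard sketch of spinodal decomposition is shown below. The parameters and discretization are illustrative assumptions, not the CHiMaD/NIST benchmark specification:

```python
# Minimal 1-D Cahn-Hilliard (spinodal decomposition) sketch in the spirit of
# the benchmark problems described above. Parameters (M, kappa, grid, dt)
# are illustrative assumptions, not the published benchmark values.
import numpy as np

rng = np.random.default_rng(0)
n, dx, dt = 128, 1.0, 0.01
M, kappa = 1.0, 0.5                        # mobility and gradient-energy coefficient
c = 0.05 * rng.standard_normal(n)          # small fluctuation about c = 0

def lap(u):
    """Periodic second difference."""
    return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

for _ in range(20_000):
    mu = c**3 - c - kappa * lap(c)         # chemical potential, f(c) = (c^2 - 1)^2 / 4
    c = c + dt * M * lap(mu)               # conservative explicit Euler update

print("mean composition:", c.mean())       # conserved by the scheme
print("phase amplitude :", np.abs(c).max())  # grows toward the wells at +/- 1
```

A benchmark-quality solver would replace the explicit Euler step with adaptive time stepping (one of the techniques the paper compares), but the conservation of the mean composition and the growth of phase separation seen here are exactly the quantities such benchmarks check.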

  12. Small RNA sequence analysis of adenovirus VA RNA-derived miRNAs reveals an unexpected serotype-specific difference in structure and abundance.

    Directory of Open Access Journals (Sweden)

    Wael Kamel

Human adenoviruses (HAds) encode for one or two highly abundant virus-associated RNAs, designated VA RNAI and VA RNAII, which fold into stable hairpin structures resembling miRNA precursors. Here we show that the terminal stems of the VA RNAs originating from Ad4, Ad5, Ad11 and Ad37 all undergo Dicer-dependent processing into virus-specific miRNAs (so-called mivaRNAs). We further show that the mivaRNA duplex is subjected to a highly asymmetric RISC loading, with the 3'-strand from all VA RNAs being the favored strand, except for the Ad37 VA RNAII, where the 5'-mivaRNAII strand was preferentially assembled into RISC. Although the mivaRNA seed sequences are not fully conserved between the HAds, a bioinformatics prediction approach suggests that a large fraction of the VA RNAII-, but not the VA RNAI-derived mivaRNAs are still able to target the same cellular genes. Using small RNA deep sequencing we demonstrate that the Dicer processing event in the terminal stem of the VA RNAs is not unique and generates 3'-mivaRNAs with a slight variation of the position of the 5' terminal nucleotide in the RISC-loaded guide strand. Also, we show that all analyzed VA RNAs, except Ad37 VA RNAI and Ad5 VA RNAII, utilize an alternative upstream A start site in addition to the classical +1 G start site. Further, the 5'-mivaRNAs with an A start appear to be preferentially incorporated into RISC. Although the majority of mivaRNA research has been done using Ad5 as the model system, our analysis demonstrates that the mivaRNAs expressed in Ad11- and Ad37-infected cells are the most abundant mivaRNAs associated with Ago2-containing RISC. Collectively, our results show an unexpected variability in Dicer processing of the VA RNAs and a serotype-specific loading of mivaRNAs into Ago2-based RISC.

  13. BMP-2 Overexpression Augments Vascular Smooth Muscle Cell Motility by Upregulating Myosin Va via Erk Signaling

    Directory of Open Access Journals (Sweden)

    Ming Zhang

    2014-01-01

Background. The disruption of physiologic vascular smooth muscle cell (VSMC) migration initiates atherosclerosis development. The biochemical mechanisms leading to dysfunctional VSMC motility remain unknown. Recently, the cytokine BMP-2 has been implicated in various vascular physiologic and pathologic processes. However, whether BMP-2 has any effect upon VSMC motility, or by what manner, has never been investigated. Methods. VSMCs were adenovirally transfected to genetically overexpress BMP-2. VSMC motility was detected by modified Boyden chamber assay, confocal time-lapse video assay, and a colony wounding assay. Gene chip array and RT-PCR were employed to identify genes potentially regulated by BMP-2. Western blot and real-time PCR detected the expression of myosin Va and the phosphorylation of extracellular signal-regulated kinases 1/2 (Erk1/2). Immunofluorescence analysis revealed the myosin Va expression locale. Intracellular Ca2+ oscillations were recorded. Results. VSMC migration was augmented in VSMCs overexpressing BMP-2 in a dose-dependent manner. siRNA-mediated knockdown of myosin Va inhibited VSMC motility. Both myosin Va mRNA and protein expression significantly increased after BMP-2 administration and were inhibited by the Erk1/2 inhibitor U0126. BMP-2 induced Ca2+ oscillations, generated largely by a "cytosolic oscillator". Conclusion. BMP-2 significantly increased VSMC migration and myosin Va expression, via the Erk signaling pathway and intracellular Ca2+ oscillations. We provide additional insight into the pathophysiology of atherosclerosis, and inhibition of BMP-2-induced myosin Va expression may represent a potential therapeutic strategy.

  14. A 3D stylized half-core CANDU benchmark problem

    International Nuclear Information System (INIS)

    Pounders, Justin M.; Rahnema, Farzad; Serghiuta, Dumitru; Tholammakkil, John

    2011-01-01

    A 3D stylized half-core Canadian deuterium uranium (CANDU) reactor benchmark problem is presented. The benchmark problem is comprised of a heterogeneous lattice of 37-element natural uranium fuel bundles, heavy water moderated, heavy water cooled, with adjuster rods included as reactivity control devices. Furthermore, a 2-group macroscopic cross section library has been developed for the problem to increase the utility of this benchmark for full-core deterministic transport methods development. Monte Carlo results are presented for the benchmark problem in cooled, checkerboard void, and full coolant void configurations.

  15. International handbook of evaluated criticality safety benchmark experiments

    International Nuclear Information System (INIS)

    2010-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Organization for Economic Cooperation and Development - Nuclear Energy Agency (OECD-NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculational techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirement and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span over 55,000 pages and contain 516 evaluations with benchmark specifications for 4,405 critical, near critical, or subcritical configurations, 24 criticality alarm placement / shielding configurations with multiple dose points for each, and 200 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications. Experiments that are found unacceptable for use as criticality safety benchmark experiments are discussed in these evaluations; however, benchmark specifications are not derived for such experiments (in some cases models are provided in an appendix). Approximately 770 experimental configurations are categorized as unacceptable for use as criticality safety benchmark experiments. 
Additional evaluations are in progress and will be

  16. Repeated Results Analysis for Middleware Regression Benchmarking

    Czech Academy of Sciences Publication Activity Database

    Bulej, Lubomír; Kalibera, T.; Tůma, P.

    2005-01-01

    Roč. 60, - (2005), s. 345-358 ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords : middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005

  17. ZZ WPPR, Pu Recycling Benchmark Results

    International Nuclear Information System (INIS)

    Lutz, D.; Mattes, M.; Delpech, Marc; Juanola, Marc

    2002-01-01

    Description of program or function: The NEA NSC Working Party on Physics of Plutonium Recycling has commissioned a series of benchmarks covering: - Plutonium recycling in pressurized-water reactors; - Void reactivity effect in pressurized-water reactors; - Fast plutonium-burner reactors: beginning of life; - Plutonium recycling in fast reactors; - Multiple recycling in advanced pressurized-water reactors. The results have been published (see references). ZZ-WPPR-1-A/B contains graphs and tables for the PWR MOX pin-cell benchmark, representing typical fuel for plutonium recycling, one case corresponding to a first cycle and the second to a fifth cycle. These computer-readable files contain the complete set of results, while the printed report contains only a subset. ZZ-WPPR-2-CYC1 contains the results from cycle 1 of the multiple-recycling benchmarks.

  18. Interior beam searchlight semi-analytical benchmark

    International Nuclear Information System (INIS)

    Ganapol, Barry D.; Kornreich, Drew E.

    2008-01-01

    Multidimensional semi-analytical benchmarks to provide highly accurate standards to assess routine numerical particle transport algorithms are few and far between. Because of the well-established 1D theory for the analytical solution of the transport equation, it is sometimes possible to 'bootstrap' a 1D solution to generate a more comprehensive solution representation. Here, we consider the searchlight problem (SLP) as a multidimensional benchmark. A variation of the usual SLP is the interior beam SLP (IBSLP) where a beam source lies beneath the surface of a half space and emits directly towards the free surface. We consider the establishment of a new semi-analytical benchmark based on a new FN formulation. This problem is important in radiative transfer experimental analysis to determine cloud absorption and scattering properties. (authors)

  19. Benchmark referencing of neutron dosimetry measurements

    International Nuclear Information System (INIS)

    Eisenhauer, C.M.; Grundl, J.A.; Gilliam, D.M.; McGarry, E.D.; Spiegel, V.

    1980-01-01

    The concept of benchmark referencing involves interpretation of dosimetry measurements in applied neutron fields in terms of similar measurements in benchmark fields whose neutron spectra and intensity are well known. The main advantage of benchmark referencing is that it minimizes or eliminates many types of experimental uncertainties such as those associated with absolute detection efficiencies and cross sections. In this paper we consider the cavity external to the pressure vessel of a power reactor as an example of an applied field. The pressure vessel cavity is an accessible location for exploratory dosimetry measurements aimed at understanding embrittlement of pressure vessel steel. Comparisons with calculated predictions of neutron fluence and spectra in the cavity provide a valuable check of the computational methods used to estimate pressure vessel safety margins for pressure vessel lifetimes
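
The ratio principle described in this abstract can be sketched in a few lines: because the same detector is used in both fields, the unknown absolute detection efficiency cancels when the applied-field measurement is expressed relative to the benchmark-field measurement. The function name and all numerical values below are hypothetical illustrations, not data from the paper.

```python
# Hedged sketch of benchmark referencing: the absolute detection efficiency
# 'eps' cancels in the ratio of count rates, so the applied-field fluence is
# recovered without knowing eps. All numbers are hypothetical.

def referenced_fluence(rate_applied, rate_benchmark, fluence_benchmark):
    """Infer applied-field fluence from the count-rate ratio to a benchmark field."""
    return (rate_applied / rate_benchmark) * fluence_benchmark

eps = 0.02                       # unknown absolute detection efficiency
true_fluence_benchmark = 1.0e10  # n/cm^2, certified for the benchmark field
true_fluence_applied = 3.5e9     # n/cm^2, the quantity we want to recover

rate_benchmark = eps * true_fluence_benchmark
rate_applied = eps * true_fluence_applied

estimate = referenced_fluence(rate_applied, rate_benchmark, true_fluence_benchmark)
print(estimate)  # recovers 3.5e9 regardless of the value of eps
```

Because only the ratio enters, the same cancellation also removes common cross-section normalization uncertainties, which is the advantage the abstract highlights.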

  20. Atomic Energy Research benchmark activity

    International Nuclear Information System (INIS)

    Makai, M.

    1998-01-01

    The test problems utilized in the validation and verification process of computer programs in Atomic Energy Research are collected into one set. This is the first step towards issuing a volume in which tests for VVER are collected, along with reference solutions and a number of solutions. The benchmarks do not include the ZR-6 experiments because they have been published, along with a number of comparisons, in the final reports of TIC. The present collection focuses on operational and mathematical benchmarks which cover almost the entire range of reactor calculations. (Author)

  1. The CMSSW benchmarking suite: Using HEP code to measure CPU performance

    International Nuclear Information System (INIS)

    Benelli, G

    2010-01-01

    The demanding computing needs of the CMS experiment require thoughtful planning and management of its computing infrastructure. A key factor in this process is the use of realistic benchmarks when assessing the computing power of the different architectures available. In recent years a discrepancy has been observed between the CPU performance estimates given by the reference benchmark for HEP computing (SPECint) and the actual performance of HEP code. Making use of the CPU performance tools from the CMSSW performance suite, comparative CPU performance studies have been carried out on several architectures. A benchmarking suite has been developed and integrated in the CMSSW framework, to allow computing centers and interested third parties to benchmark architectures directly with CMSSW. The CMSSW benchmarking suite can be used out of the box to test and compare several machines in terms of CPU performance and to report the different benchmarking scores (e.g. by processing step) and results at the desired level of detail. In this talk we describe briefly the CMSSW software performance suite, and in detail the CMSSW benchmarking suite client/server design, the performance data analysis and the available CMSSW benchmark scores. The experience in the use of HEP code for benchmarking is discussed and CMSSW benchmark results are presented.

  2. PMLB: a large benchmark suite for machine learning evaluation and comparison.

    Science.gov (United States)

    Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H

    2017-01-01

    The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
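
The workflow this abstract describes (run a set of methods over a collection of benchmark datasets and compare their performance) can be sketched as below. The real resource is distributed as the `pmlb` Python package with a `fetch_data` helper; to keep this sketch self-contained and offline, it uses two tiny in-line datasets and two toy classifiers instead, so everything here beyond the general loop structure is a hypothetical illustration.

```python
# Hedged sketch of a benchmark-suite comparison loop: every method is scored
# on every dataset, mirroring the PMLB evaluation workflow. The datasets and
# classifiers are toy stand-ins, not the paper's.

def majority(train, test):
    """Predict the most frequent training label for every test point."""
    labels = [y for _, y in train]
    guess = max(set(labels), key=labels.count)
    return [guess for _ in test]

def one_nn(train, test):
    """1-nearest-neighbour prediction under squared Euclidean distance."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return [min(train, key=lambda p: dist(p[0], x))[1] for x in test]

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

datasets = {
    "xor":  [((0, 0), 0), ((1, 1), 0), ((0, 1), 1), ((1, 0), 1)],
    "line": [((0,), 0), ((1,), 0), ((2,), 1), ((3,), 1)],
}

for name, data in datasets.items():
    xs, ys = [x for x, _ in data], [y for _, y in data]
    for method in (majority, one_nn):
        print(name, method.__name__, accuracy(method(data, xs), ys))
```

Tabulating such scores across many datasets is what lets one cluster datasets and algorithms by performance, as the study does at much larger scale.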

  3. Benchmarking set for domestic smart grid management

    NARCIS (Netherlands)

    Bosman, M.G.C.; Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    In this paper we propose a benchmark for domestic smart grid management. It consists of an in-depth description of a domestic smart grid, in which local energy consumers, producers and buffers can be controlled. First, from this description a general benchmark framework is derived, which can be used

  4. The extent of benchmarking in the South African financial sector

    OpenAIRE

    W Vermeulen

    2014-01-01

    Benchmarking is the process of identifying, understanding and adapting outstanding practices from within the organisation or from other businesses, to help improve performance. The importance of benchmarking as an enabler of business excellence has necessitated an in-depth investigation into the current state of benchmarking in South Africa. This research project highlights the fact that respondents realise the importance of benchmarking, but that various problems hinder the effective impleme...

  5. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    Full Text Available The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and including "external" gardening demands is investigated. This includes the impacts (in isolation and combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are presented throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn on the robustness of the proposed system.
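
A band-rating scheme of the kind this abstract describes can be sketched as a simple lookup from per-capita daily consumption to a band. The thresholds below (litres per person per day) are hypothetical illustrations chosen for the sketch, not the paper's values.

```python
# Hedged sketch of a per-person water-use band rating. Thresholds in
# litres/person/day are hypothetical, not taken from the paper.

BANDS = [           # (upper limit in L/person/day, band label)
    (80, "A"),      # e.g. localised supplies (RWH/GW) plus efficient fittings
    (110, "B"),
    (140, "C"),     # around typical UK consumption
    (170, "D"),
    (float("inf"), "E"),
]

def water_band(litres_per_person_day):
    """Map daily per-capita consumption to a benchmark band."""
    for limit, label in BANDS:
        if litres_per_person_day <= limit:
            return label

print(water_band(100))  # "B"
print(water_band(150))  # "D"
```

Because the band depends only on measured consumption, any combination of behaviour change and technology (the paper's sensitivity variables) can move a household up a band.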

  6. GeVaDSs – decision support system for novel Genetic Vaccine development process

    Directory of Open Access Journals (Sweden)

    Blazewicz Jacek

    2012-05-01

    Full Text Available Abstract Background The lack of a uniform way for qualitative and quantitative evaluation of vaccine candidates under development led us to set up a standardized scheme for vaccine efficacy and safety evaluation. We developed and implemented molecular and immunology methods, and designed support tools for immunization data storage and analyses. Such a collection can create a unique opportunity for immunologists to analyse data delivered from their laboratories. Results We designed and implemented GeVaDSs (Genetic Vaccine Decision Support system), an interactive system for efficient storage, integration, retrieval and representation of data. Moreover, GeVaDSs allows for relevant association and interpretation of data, and thus for knowledge-based generation of testable hypotheses of vaccine responses. Conclusions GeVaDSs has been tested by several laboratories in Europe, and has proved its usefulness in vaccine analysis. A case study of its application is presented in the additional files. The system is available at: http://gevads.cs.put.poznan.pl/preview/ (login: viewer, password: password).

  7. ENDF/B-VII.1 Neutron Cross Section Data Testing with Critical Assembly Benchmarks and Reactor Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Kahler, A.C.; Herman, M.; Kahler,A.C.; MacFarlane,R.E.; Mosteller,R.D.; Kiedrowski,B.C.; Frankle,S.C.; Chadwick,M.B.; McKnight,R.D.; Lell,R.M.; Palmiotti,G.; Hiruta,H.; Herman,M.; Arcilla,R.; Mughabghab,S.F.; Sublet,J.C.; Trkov,A.; Trumbull,T.H.; Dunn,M.

    2011-12-01

    The ENDF/B-VII.1 library is the latest revision to the United States Evaluated Nuclear Data File (ENDF). The ENDF library is currently in its seventh generation, with ENDF/B-VII.0 being released in 2006. This revision expands upon that library, including the addition of new evaluated files (was 393 neutron files previously, now 423 including replacement of elemental vanadium and zinc evaluations with isotopic evaluations) and extension or updating of many existing neutron data files. Complete details are provided in the companion paper [M. B. Chadwick et al., 'ENDF/B-VII.1 Nuclear Data for Science and Technology: Cross Sections, Covariances, Fission Product Yields and Decay Data,' Nuclear Data Sheets, 112, 2887 (2011)]. This paper focuses on how accurately application libraries may be expected to perform in criticality calculations with these data. Continuous energy cross section libraries, suitable for use with the MCNP Monte Carlo transport code, have been generated and applied to a suite of nearly one thousand critical benchmark assemblies defined in the International Criticality Safety Benchmark Evaluation Project's International Handbook of Evaluated Criticality Safety Benchmark Experiments. This suite covers uranium and plutonium fuel systems in a variety of forms such as metallic, oxide or solution, and under a variety of spectral conditions, including unmoderated (i.e., bare), metal reflected and water or other light element reflected. Assembly eigenvalues that were accurately predicted with ENDF/B-VII.0 cross sections such as unmoderated and uranium reflected {sup 235}U and {sup 239}Pu assemblies, HEU solution systems and LEU oxide lattice systems that mimic commercial PWR configurations continue to be accurately calculated with ENDF/B-VII.1 cross sections, and deficiencies in predicted eigenvalues for assemblies containing selected materials, including titanium, manganese, cadmium and tungsten are greatly reduced. Improvements are also

  8. Value at risk (VaR) in uncertainty: Analysis with parametric method and Black & Scholes simulations

    Directory of Open Access Journals (Sweden)

    Humberto Banda Ortiz

    2014-07-01

    Full Text Available VaR is the most widely accepted risk measure worldwide and the leading reference in any risk management assessment. However, its methodology has important limitations which make it unreliable in contexts of crisis or high uncertainty. For this reason, the aim of this work is to test the accuracy of VaR when it is employed in contexts of volatility, for which we compare VaR outcomes in scenarios of both stability and uncertainty, using the parametric method and a historical simulation based on data generated with the Black & Scholes model. VaR's main objective is the prediction of the highest expected loss for any given portfolio, but even though it is considered a useful tool for risk management under conditions of market stability, we found that it is substantially inaccurate in contexts of crisis or high uncertainty. In addition, we found that the Black & Scholes simulations lead to underestimation of the expected losses in comparison with the parametric method, and that those disparities increase substantially in times of crisis. In the first section of this work we present a brief context of risk management in finance. In section II we present the existing literature on the VaR concept, its methods and applications. In section III we describe the methodology and assumptions used in this work. Section IV is dedicated to the findings. Finally, in section V we present our conclusions.
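
The two approaches the abstract compares can be sketched side by side: a parametric (variance-covariance) VaR under normal returns, and a VaR read off simulated terminal prices under the Black & Scholes (geometric Brownian motion) assumption. Function names, the portfolio value, and the drift/volatility parameters are all illustrative assumptions, not the paper's data.

```python
import math
import random

# Hedged sketch: parametric 95% VaR vs. VaR from Monte Carlo simulation of
# one-period Black & Scholes (GBM) prices. All parameters are illustrative.

def parametric_var(value, mu, sigma, z=1.645):
    """One-period 95% VaR under normally distributed returns."""
    return value * (z * sigma - mu)

def simulated_var(value, mu, sigma, n=100_000, alpha=0.05, seed=42):
    """95% VaR as the 95th-percentile loss over simulated GBM outcomes."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        terminal = value * math.exp((mu - 0.5 * sigma**2) + sigma * z)
        losses.append(value - terminal)
    losses.sort()
    return losses[int((1 - alpha) * n)]  # 95th percentile of the loss distribution

portfolio = 1_000_000.0
print(parametric_var(portfolio, mu=0.0, sigma=0.02))
print(simulated_var(portfolio, mu=0.0, sigma=0.02))
```

With these illustrative parameters the lognormal simulation reports a somewhat smaller loss than the parametric figure, which is the direction of discrepancy the abstract reports between the two methods.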

  9. Numisheet2005 Benchmark Analysis on Forming of an Automotive Deck Lid Inner Panel: Benchmark 1

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    Numerical simulations of sheet metal forming processes have been a very challenging topic in industry. There are many computer codes and modeling techniques existing today. However, there are many unknowns affecting the prediction accuracy. Systematic benchmark tests are needed to accelerate future implementations and to serve as a reference. This report presents an international cooperative benchmark effort for an automotive deck lid inner panel. Predictions from simulations are analyzed and discussed against the corresponding experimental results. The correlations between the accuracy of each parameter of interest are discussed in this report.

  10. Alternative Approaches to Risk Measurement: Value-at-Risk (VaR) and Expected Shortfall (ES) Applications

    Directory of Open Access Journals (Sweden)

    SEZER BOZKUŞ

    2013-06-01

    Full Text Available This article shows that Value-at-Risk (VaR), the most popular risk measure in practice, has a considerable positive bias when used for a portfolio with a fat-tailed distribution. Numerical examples, i.e. USD/Euro daily prices and ISE-100 Index monthly returns, are given to demonstrate the use of our method. In the search for a suitable alternative to VaR, Expected Shortfall (ES), or conditional VaR, has been characterized as the coherent risk measure that dominates VaR. We discuss the properties of VaR and ES and compare them in terms of consistency with the elimination of tail risk, strengths and weaknesses. We conclude that ES is more applicable than VaR, since ES is free of tail risk and consistent under more lenient conditions than VaR is.
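
The relationship between the two measures is easy to see in a historical-simulation sketch: VaR is a quantile of the loss distribution, while ES averages the losses at and beyond that quantile, so ES is never smaller than VaR and it is ES that reacts to fat tails. The synthetic fat-tailed sample below is a hypothetical illustration, not the USD/Euro or ISE-100 data of the article.

```python
import random

# Hedged sketch of historical-simulation VaR and Expected Shortfall (ES).
# ES averages the tail beyond VaR, so ES >= VaR by construction; on
# fat-tailed data the gap widens. The sample data are synthetic.

def var_es(losses, alpha=0.95):
    """Return (VaR, ES) at confidence level alpha from a list of losses."""
    ordered = sorted(losses)
    k = int(alpha * len(ordered))
    var = ordered[k]
    es = sum(ordered[k:]) / len(ordered[k:])
    return var, es

rng = random.Random(0)
# Fat tails via a ratio of normals with a floored denominator (illustrative only).
losses = [rng.gauss(0, 1) / max(abs(rng.gauss(0, 1)), 0.3) for _ in range(50_000)]

var95, es95 = var_es(losses)
print(var95, es95)  # ES exceeds VaR on fat-tailed data
```

This is the "tail risk" point of the article in miniature: two portfolios can share the same VaR while differing sharply in ES.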

  11. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Science.gov (United States)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias; hide

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of the processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.

  12. Bench-marking beam-beam simulations using coherent quadrupole effects

    International Nuclear Information System (INIS)

    Krishnagopal, S.; Chin, Y.H.

    1992-06-01

    Computer simulations are used extensively in the study of the beam-beam interaction. The proliferation of such codes raises the important question of their reliability, and motivates the development of a dependable set of bench-marks. We argue that rather than detailed quantitative comparisons, the ability of different codes to predict the same qualitative physics should be used as a criterion for such bench-marks. We use the striking phenomenon of coherent quadrupole oscillations as one such bench-mark, and demonstrate that our codes do indeed observe this behaviour. We also suggest some other tests that could be used as bench-marks

  13. Bench-marking beam-beam simulations using coherent quadrupole effects

    International Nuclear Information System (INIS)

    Krishnagopal, S.; Chin, Y.H.

    1992-01-01

    Computer simulations are used extensively in the study of the beam-beam interaction. The proliferation of such codes raises the important question of their reliability, and motivates the development of a dependable set of bench-marks. We argue that rather than detailed quantitative comparisons, the ability of different codes to predict the same qualitative physics should be used as a criterion for such bench-marks. We use the striking phenomenon of coherent quadrupole oscillations as one such bench-mark, and demonstrate that our codes do indeed observe this behavior. We also suggest some other tests that could be used as bench-marks

  14. Phenomenology of MaVaN’s Models in Reactor Neutrino Data

    Directory of Open Access Journals (Sweden)

    M. F. Carneiro

    2013-01-01

    Full Text Available Mass Varying Neutrino (MaVaN's) mechanisms were proposed to link the neutrino mass scale with the dark energy density, addressing the coincidence problem. In some scenarios, this mass can present a dependence on the baryonic density felt by neutrinos, creating an effective neutrino mass that depends both on the neutrino and baryonic densities. In this work, we study the phenomenological consequences of MaVaN scenarios in which the matter density dependence is induced by Yukawa interactions of a light neutral scalar particle which couples to neutrinos and matter. Under the assumption of one mass scale dominance, we perform an analysis of KamLAND neutrino data which depends on 4 parameters: the two standard oscillation parameters, Δm²₂₁ and tan²θ₁₂, and two new coefficients which parameterize the environment dependence of the neutrino mass. We introduce an Earth's crust model to compute precisely the density at each point along the neutrino trajectory. We show that this new description of the density does not affect the analysis in the standard oscillation case. With the MaVaN model, we observe a first-order effect at lower density, which leads to an improvement in the data description.
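
The structure of such an analysis can be sketched with the standard two-flavour survival probability plus a density-dependent effective mass splitting. The linear-in-density parameterization below is a hypothetical illustration of a MaVaN-type dependence, not the parameterization fitted in the paper, and the oscillation parameter values are round KamLAND-scale numbers.

```python
import math

# Hedged sketch: vacuum two-flavour nu_e survival probability, plus a toy
# environment-dependent effective mass splitting to mimic a MaVaN-type
# density dependence. The linear form in rho is a hypothetical illustration.

def survival_prob(L_km, E_MeV, dm2_eV2, tan2_theta):
    """Vacuum nu_e survival probability at baseline L and energy E."""
    sin2_2theta = 4 * tan2_theta / (1 + tan2_theta) ** 2
    phase = 1.27 * dm2_eV2 * L_km * 1000.0 / E_MeV  # 1.27 Dm2[eV^2] L[m] / E[MeV]
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

def effective_dm2(dm2_0, alpha, rho_gcc):
    """Toy MaVaN: the splitting shifts linearly with matter density rho."""
    return dm2_0 * (1.0 + alpha * rho_gcc)

dm2_0, tan2 = 7.6e-5, 0.47  # round KamLAND-scale oscillation parameters
p_vac = survival_prob(180.0, 4.0, dm2_0, tan2)
p_mavan = survival_prob(180.0, 4.0, effective_dm2(dm2_0, 0.05, 2.7), tan2)
print(p_vac, p_mavan)
```

In the actual analysis the density varies along the trajectory (hence the Earth's-crust model in the paper), so the phase is accumulated segment by segment rather than from a single average density as in this sketch.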

  15. 75 FR 34934 - Safety Zone; Fireworks for the Virginia Lake Festival, Buggs Island Lake, Clarksville, VA

    Science.gov (United States)

    2010-06-21

    ...-AA00 Safety Zone; Fireworks for the Virginia Lake Festival, Buggs Island Lake, Clarksville, VA AGENCY... Fireworks for the Virginia Lake Festival event. This action is intended to restrict vessel traffic movement... Virginia Lake Festival, Buggs Island Lake, Clarksville, VA (a) Regulated Area. The following area is a...

  16. An Arbitrary Benchmark CAPM: One Additional Frontier Portfolio is Sufficient

    OpenAIRE

    Ekern, Steinar

    2008-01-01

    First draft: July 16, 2008 This version: October 7, 2008 The benchmark CAPM linearly relates the expected returns on an arbitrary asset, an arbitrary benchmark portfolio, and an arbitrary MV frontier portfolio. The benchmark is not required to be on the frontier and may be non-perfectly correlated with the frontier portfolio. The benchmark CAPM extends and generalizes previous CAPM formulations, including the zero beta, two correlated frontier portfolios, riskless augmented frontier, an...

  17. Benchmarking the implementation of E-Commerce A Case Study Approach

    OpenAIRE

    von Ettingshausen, C. R. D. Freiherr

    2009-01-01

    The purpose of this thesis was to develop a guideline to support the implementation of E-Commerce with E-Commerce benchmarking. Because of its importance as an interface with the customer, web-site benchmarking has been a widely researched topic. However, limited research has been conducted on benchmarking E-Commerce across other areas of the value chain. Consequently this thesis aims to extend benchmarking into E-Commerce related subjects. The literature review examined ...

  18. 30 CFR 57.22208 - Auxiliary fans (I-A, II-A, III, and V-A mines).

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Auxiliary fans (I-A, II-A, III, and V-A mines... fans (I-A, II-A, III, and V-A mines). (a) Auxiliary fans, except fans used in shops and other areas... applicable requirements of 30 CFR part 18, and be operated so that recirculation is minimized. Auxiliary fans...

  19. WWER in-core fuel management benchmark definition

    International Nuclear Information System (INIS)

    Apostolov, T.; Alekova, G.; Prodanova, R.; Petrova, T.; Ivanov, K.

    1994-01-01

    Two benchmark problems for WWER-440, including design parameters, operating conditions and measured quantities, are discussed in this paper. Some benchmark results for the effective multiplication factor K eff, natural boron concentration C β and relative power distribution K q, obtained by use of the code package, are presented. (authors). 5 refs., 3 tabs

  20. Adenovirus Vector-Derived VA-RNA-Mediated Innate Immune Responses

    Directory of Open Access Journals (Sweden)

    Hiroyuki Mizuguchi

    2011-07-01

    Full Text Available The major limitation of the clinical use of replication-incompetent adenovirus (Ad) vectors is the interference by innate immune responses, including induction of inflammatory cytokines and interferons (IFNs), following in vivo application of Ad vectors. Ad vector-induced production of inflammatory cytokines and IFNs also results in severe organ damage and efficient induction of acquired immune responses against Ad proteins and transgene products. Ad vector-induced innate immune responses are triggered by the recognition of Ad components by pattern recognition receptors (PRRs). In order to reduce the side effects of Ad vector-induced innate immune responses and to develop safer Ad vectors, it is crucial to clarify which PRRs and which Ad components are involved in Ad vector-induced innate immune responses. Our group previously demonstrated that myeloid differentiating factor 88 (MyD88) and toll-like receptor 9 (TLR9) play crucial roles in the Ad vector-induced inflammatory cytokine production in mouse bone marrow-derived dendritic cells. Furthermore, our group recently found that virus-associated RNAs (VA-RNAs), which are approximately 160-nucleotide-long non-coding small RNAs encoded in the Ad genome, are involved in IFN production through the IFN-β promoter stimulator-1 (IPS-1)-mediated signaling pathway following Ad vector transduction. The aim of this review is to highlight the Ad vector-induced innate immune responses following transduction, especially VA-RNA-mediated innate immune responses. Our findings on the mechanism of Ad vector-induced innate immune responses should make an important contribution to the development of safer Ad vectors, such as an Ad vector lacking expression of VA-RNAs.

  1. Comparison of topical fixed-combination fortified vancomycin-amikacin (VA solution) to conventional separate therapy in the treatment of bacterial corneal ulcer.

    Science.gov (United States)

    Chiang, C-C; Lin, J-M; Chen, W-L; Chiu, Y-T; Tsai, Y-Y

    2009-02-01

    In an in vitro study, fixed-combination fortified vancomycin and amikacin ophthalmic solutions (VA solution) had the same potency and stable physical properties as the separate components. In this retrospective clinical study, we evaluated the efficacy of the topical VA solution in the treatment of bacterial corneal ulcer and compared it with separate topical fortified vancomycin and amikacin. Separate topical fortified eye drops were used prior to January 2004, and the VA solution was used afterwards in the treatment of bacterial corneal ulcer. The medical records of 223 patients diagnosed with bacterial corneal ulcers between January 2002 and December 2005 were reviewed retrospectively. There were 122 patients in the VA group and 101 in the separate-therapy group. Cure was defined as complete healing of the ulcer accompanied by a nonprogressive stromal infiltrate on two consecutive visits. No significant difference was found between the VA and separate-therapy groups. The mean treatment duration was 15.4 days in the VA group and 16.1 days in the separate-therapy group. The average hospital stay was 5.4 days (VA) and 7.2 days (separate antibiotics). Stromal infiltration regressed significantly without further expansion in both groups. All corneal ulcers completely re-epithelialized without complications related to drugs. The VA solution provided similar efficacy to conventional separate therapy in the treatment of bacterial corneal ulcers; however, it is more convenient and tolerable, promotes patients' compliance, avoids the washout effect, and reduces nurse utilization. Hence, the VA solution is a good alternative to separate therapy.

  2. Introducing a Generic Concept for an Online IT-Benchmarking System

    OpenAIRE

    Ziaie, Pujan; Ziller, Markus; Wollersheim, Jan; Krcmar, Helmut

    2014-01-01

    While IT benchmarking has grown considerably in the last few years, conventional benchmarking tools have not been able to adequately respond to the rapid changes in technology and paradigm shifts in IT-related domains. This paper aims to review benchmarking methods and leverage design science methodology to present design elements for a novel software solution in the field of IT benchmarking. The solution, which introduces a concept for generic (service-independent) indicators is based on and...

  3. Strategic behaviour under regulatory benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Jamasb, T. [Cambridge Univ. (United Kingdom). Dept. of Applied Economics; Nillesen, P. [NUON NV (Netherlands); Pollitt, M. [Cambridge Univ. (United Kingdom). Judge Inst. of Management

    2004-09-01

    In order to improve the efficiency of electricity distribution networks, some regulators have adopted incentive regulation schemes that rely on performance benchmarking. Although regulatory benchmarking can influence the "regulation game," the subject has received limited attention. This paper discusses how strategic behaviour can result in inefficient behaviour by firms. We then use the Data Envelopment Analysis (DEA) method with US utility data to examine the implications of illustrative cases of strategic behaviour reported by regulators. The results show that gaming can have significant effects on the measured performance and profitability of firms. (author)

  4. 3-D neutron transport benchmarks

    International Nuclear Information System (INIS)

    Takeda, T.; Ikeda, H.

    1991-03-01

    A set of 3-D neutron transport benchmark problems proposed by the Osaka University to NEACRP in 1988 has been calculated by many participants and the corresponding results are summarized in this report. The results of K eff , control rod worth and region-averaged fluxes for the four proposed core models, calculated by using various 3-D transport codes are compared and discussed. The calculational methods used were: Monte Carlo, Discrete Ordinates (Sn), Spherical Harmonics (Pn), Nodal Transport and others. The solutions of the four core models are quite useful as benchmarks for checking the validity of 3-D neutron transport codes

  5. Introduction to 'International Handbook of Criticality Safety Benchmark Experiments'

    International Nuclear Information System (INIS)

    Komuro, Yuichi

    1998-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) is now an official activity of the Organization for Economic Cooperation and Development-Nuclear Energy Agency (OECD-NEA). 'International Handbook of Criticality Safety Benchmark Experiments' was prepared and is updated year by year by the working group of the project. This handbook contains criticality safety benchmark specifications that have been derived from experiments that were performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculation techniques used. The author briefly introduces the informative handbook and would like to encourage Japanese engineers who are in charge of nuclear criticality safety to use the handbook. (author)

  6. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias; Smith, Neil; Ghanem, Bernard

    2016-01-01

    In this paper, we propose a new aerial video dataset and benchmark for low-altitude UAV target tracking, as well as a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking in terms of both tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as to generate synthetic but photo-realistic tracking datasets with automatic ground-truth annotations to easily extend existing real-world datasets. Both the benchmark and the simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx). © Springer International Publishing AG 2016.

  8. Benchmarking CRISPR on-target sgRNA design.

    Science.gov (United States)

    Yan, Jifang; Chuai, Guohui; Zhou, Chi; Zhu, Chenyu; Yang, Jing; Zhang, Chao; Gu, Feng; Xu, Han; Wei, Jia; Liu, Qi

    2017-02-15

    CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats)-based gene editing has been widely implemented in various cell types and organisms. A major challenge in the effective application of the CRISPR system is the need to design highly efficient single-guide RNA (sgRNA) with minimal off-target cleavage. Several tools are available for sgRNA design, but few have been compared systematically. In our opinion, benchmarking the performance of the available tools and indicating their applicable scenarios are important issues. Moreover, whether the reported sgRNA design rules are reproducible across different sgRNA libraries, cell types and organisms remains unclear. In our study, a systematic and unbiased benchmark of sgRNA efficacy prediction was performed on nine representative on-target design tools, based on six benchmark data sets covering five different cell types. The benchmark study presented here provides novel quantitative insights into the available CRISPR tools. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
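
Benchmarks of this kind commonly score each tool by the rank correlation between predicted and measured sgRNA efficacies. A self-contained sketch of Spearman's rho (synthetic scores, no tie handling; the paper does not prescribe this exact implementation):

```python
def ranks(xs):
    """Rank values 1..n (no tie handling, for simplicity)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(pred, meas):
    """Spearman rho via the difference-of-ranks formula (no ties)."""
    n = len(pred)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(pred), ranks(meas)))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

predicted = [0.9, 0.4, 0.7, 0.1, 0.6]   # hypothetical tool scores
measured  = [0.8, 0.5, 0.9, 0.2, 0.4]   # hypothetical assay efficacies
print(round(spearman(predicted, measured), 3))  # → 0.8
```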

  9. Color perception differentiates Alzheimer's Disease (AD) from Vascular Dementia (VaD) patients.

    Science.gov (United States)

    Arnaoutoglou, N A; Arnaoutoglou, M; Nemtsas, P; Costa, V; Baloyannis, S J; Ebmeier, K P

    2017-08-01

    Alzheimer's Disease (AD) and Vascular Dementia (VaD) are the most common causes of dementia in older people. The two diseases present with similar clinical symptoms, such as deficits in attention and executive function, but different specific cognitive domains are affected. Cohort studies have shown a close relationship between Aβ deposits and age-related macular degeneration (Johnson et al., 2002; Ratnayaka et al., 2015). Additionally, a close link between retinal nerve fiber layer (RNFL) thinning and AD has been described, and it has been proposed that AD patients suffer from a non-specific type of color blindness (Pache et al., 2003). Our study included 103 individuals divided into three groups: a healthy control group (n = 35), AD (n = 32) according to DSM-IV-TR and NINCDS-ADRDA criteria, and VaD (n = 36) based on NINDS-AIREN criteria, as well as Magnetic Resonance Imaging (MRI) results. The severity of patients' cognitive impairment was measured with the Mini-Mental State Examination (MMSE) and classified according to the Reisberg Global Deterioration Scale (GDS). Visual perception was examined using the Ishihara plates ("Ishihara Color Vision Test - 38 Plate"). The three groups were not statistically different in demographic data (age, gender, and education). The Ishihara color vision test has a sensitivity of 80.6% and a specificity of 87.5% for discriminating AD from VaD patients when an optimal (32.5) cut-off value of performance is used. The Ishihara Color Vision Test - 38 Plate is thus a promising, quick screening test for the differential diagnosis of dementia between AD and VaD.
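
The quoted sensitivity and specificity follow from counting true and false positives at the cut-off. A sketch of that computation with hypothetical scores (the direction of the rule, a score at or below the cut-off counted as AD, is an assumption for illustration):

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity of the rule 'score <= cutoff -> positive'.
    labels: True for condition-positive cases (here, AD)."""
    tp = sum((s <= cutoff) and l for s, l in zip(scores, labels))
    fn = sum((s > cutoff) and l for s, l in zip(scores, labels))
    tn = sum((s > cutoff) and not l for s, l in zip(scores, labels))
    fp = sum((s <= cutoff) and not l for s, l in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical plate-test scores; True = AD, False = VaD
scores = [20, 30, 35, 25, 36, 34, 38, 31]
labels = [True, True, True, True, False, False, False, False]
print(sens_spec(scores, labels, 32.5))  # → (0.75, 0.75)
```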

  10. Benchmarking a signpost to excellence in quality and productivity

    CERN Document Server

    Karlof, Bengt

    1993-01-01

    According to the authors, benchmarking exerts a powerful leverage effect on an organization, and they consider some of the factors which justify their claim. The book describes how to implement benchmarking and exactly what to benchmark, and explains benchlearning, which integrates education, leadership development and organizational dynamics with the actual work being done, showing how to make that work more efficient in terms of quality and productivity.

  11. 76 FR 27381 - Proposed Information Collection (Notice of Waiver of VA Compensation or Pension To Receive...

    Science.gov (United States)

    2011-05-11

    ... waive VA benefits for the number of days equal to the number of days in which they received training pay... of Waiver of VA Compensation or Pension To Receive Military Pay and Allowances) Activity; Comment... currently approved collection, and allow 60 days for public comment in response to the notice. This notice...

  12. Evaluation of flooded 3x3x3 arrays of plutonium metal

    International Nuclear Information System (INIS)

    Pitts, M.; Rahnema, F.

    1997-01-01

    In the early 1980s, thirteen experiments using plutonium metal cylinders were performed at the Rocky Flats Critical Mass Laboratory. The experimental method consisted of flooding a 3x3x3 array with water until criticality was achieved. Ten of the thirteen experiments went critical, while the other three remained subcritical upon full reflection. This paper evaluates these experiments to develop benchmark descriptions for validation of computational tools used by criticality safety specialists. Six of the ten critical experiments were found acceptable as benchmark experiments. Sensitivity studies were performed to find the effect of experimental limits and uncertainties on the k-eff value. Analysis of the experiments was performed with MCNP using continuous-energy ENDF/B-V cross-section data. The k-eff values for all benchmark experiments were computed using MCNP with ENDF/B-V data, KENO V.a with Hansen-Roach cross sections, and KENO V.a with 27-group ENDF/B-IV cross sections. Although these experiments were flooded, the KENO V.a calculations show that they were, in fact, fast systems. 3 refs., 1 fig., 7 tabs

  13. Quality management benchmarking: FDA compliance in pharmaceutical industry.

    Science.gov (United States)

    Jochem, Roland; Landgraf, Katja

    2010-01-01

    By analyzing and comparing industry and business best practice, processes can be optimized and become more successful mainly because efficiency and competitiveness increase. This paper aims to focus on some examples. Case studies are used to show knowledge exchange in the pharmaceutical industry. Best practice solutions were identified in two companies using a benchmarking method and five-stage model. Despite large administrations, there is much potential regarding business process organization. This project makes it possible for participants to fully understand their business processes. The benchmarking method gives an opportunity to critically analyze value chains (a string of companies or players working together to satisfy market demands for a special product). Knowledge exchange is interesting for companies that like to be global players. Benchmarking supports information exchange and improves competitive ability between different enterprises. Findings suggest that the five-stage model improves efficiency and effectiveness. Furthermore, the model increases the chances for reaching targets. The method gives security to partners that did not have benchmarking experience. The study identifies new quality management procedures. Process management and especially benchmarking is shown to support pharmaceutical industry improvements.

  14. Towards benchmarking an in-stream water quality model

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available A method of model evaluation is presented which utilises a comparison with a benchmark model. The proposed benchmarking concept is one that can be applied to many hydrological models but, in this instance, is implemented in the context of an in-stream water quality model. The benchmark model is defined in such a way that it is easily implemented within the framework of the test model, i.e. the approach relies on two applications of the same model code rather than the application of two separate model codes. This is illustrated using two case studies from the UK, the Rivers Aire and Ouse, with the objective of simulating a water quality classification, the general quality assessment (GQA), which is based on dissolved oxygen, biochemical oxygen demand and ammonium. Comparisons between the benchmark and test models are made based on GQA, as well as a step-wise assessment against the components required in its derivation. The benchmarking process yields a great deal of important information about the performance of the test model and raises issues about a priori definition of the assessment criteria.

  15. Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data

    Science.gov (United States)

    Murata, Isao; Ohta, Masayuki; Kusaka, Sachie; Sato, Fuminobu; Miyamaru, Hiroyuki

    2017-09-01

    There are many benchmark experiments carried out so far with DT neutrons, especially aiming at fusion reactor development. These integral experiments have seemed, vaguely, to validate the nuclear data below 14 MeV; however, no precise studies exist. The author's group thus started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase, to generalize the above discussion, the energy range was expanded to the entire region. In this study, thought experiments with finer energy bins have been conducted to discuss how to estimate the performance of benchmark experiments in general. As a result of thought experiments with a point detector, the sensitivity for a discrepancy appearing in the benchmark analysis is "equally" due not only to the contribution conveyed directly to the detector, but also to the indirect contribution of the neutrons (named (A)) that produce the neutrons conveying that contribution, the indirect contribution of the neutrons (B) that produce the neutrons (A), and so on. From this concept, a sensitivity analysis performed in advance would make clear how well, and at which energies, nuclear data could be benchmarked with a given benchmark experiment.

  17. Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides

    Energy Technology Data Exchange (ETDEWEB)

    Nowell, Lisa H., E-mail: lhnowell@usgs.gov [U.S. Geological Survey, California Water Science Center, Placer Hall, 6000 J Street, Sacramento, CA 95819 (United States); Norman, Julia E., E-mail: jnorman@usgs.gov [U.S. Geological Survey, Oregon Water Science Center, 2130 SW 5th Avenue, Portland, OR 97201 (United States); Ingersoll, Christopher G., E-mail: cingersoll@usgs.gov [U.S. Geological Survey, Columbia Environmental Research Center, 4200 New Haven Road, Columbia, MO 65021 (United States); Moran, Patrick W., E-mail: pwmoran@usgs.gov [U.S. Geological Survey, Washington Water Science Center, 934 Broadway, Suite 300, Tacoma, WA 98402 (United States)

    2016-04-15

    Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n = 3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. 
Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical/chemical characteristics
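
The summed benchmark-quotient indicator described above can be sketched as follows; the pesticide names, concentrations, and TEB values are illustrative, not taken from the study:

```python
# Threshold Effect Benchmarks -- illustrative values only, one unit system
teb = {"bifenthrin": 0.5, "chlorpyrifos": 2.0, "fipronil": 0.1}

# Measured whole-sediment concentrations in one sample (same units as TEBs)
sample = {"bifenthrin": 0.3, "chlorpyrifos": 1.0, "fipronil": 0.05}

# Sum of benchmark quotients: a value > 1 flags potential toxicity of the
# mixture even when no single compound exceeds its own benchmark
sum_teq = sum(sample[p] / teb[p] for p in sample)
print(round(sum_teq, 2))  # 0.6 + 0.5 + 0.5 → 1.6
```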

  19. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stay confidential; the banks, as clients, learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks find the most efficient customers among a large and challenging group of agricultural customers with too much debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping...
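
The underlying principle, that parties learn only an agreed output and never each other's inputs, can be illustrated with plain additive secret sharing (SPDZ layers information-theoretic MACs and a preprocessing phase on top of this; the sketch below shows only the sharing idea):

```python
import random

P = 2**31 - 1  # a prime modulus; shares are uniform in [0, P)

def share(secret: int, n_parties: int):
    """Split a secret into n additive shares mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Three banks each share a confidential figure; only the total is opened
secrets = [1200, 3400, 560]
all_shares = [share(s, 3) for s in secrets]
# Each party locally sums the shares it holds, then the sums are combined
party_sums = [sum(col) % P for col in zip(*all_shares)]
print(reconstruct(party_sums))  # → 5160, i.e. sum(secrets)
```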

  20. The national hydrologic bench-mark network

    Science.gov (United States)

    Cobb, Ernest D.; Biesecker, J.E.

    1971-01-01

    The United States is undergoing a dramatic growth of population and demands on its natural resources. The effects are widespread and often produce significant alterations of the environment. The hydrologic bench-mark network was established to provide data on stream basins which are little affected by these changes. The network is made up of selected stream basins which are not expected to be significantly altered by man. Data obtained from these basins can be used to document natural changes in hydrologic characteristics with time, to provide a better understanding of the hydrologic structure of natural basins, and to provide a comparative base for studying the effects of man on the hydrologic environment. There are 57 bench-mark basins in 37 States. These basins are in areas having a wide variety of climate and topography. The bench-mark basins and the types of data collected in the basins are described.

  1. Energy benchmarking of South Australian WWTPs.

    Science.gov (United States)

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.
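
Such energy benchmarks are typically expressed as specific consumption per population equivalent and year. A sketch of the comparison (the plant figures and the target value are illustrative, not the German benchmark values):

```python
def specific_energy(kwh_per_year: float, population_equivalents: float) -> float:
    """Specific energy consumption in kWh/(PE*a)."""
    return kwh_per_year / population_equivalents

# Hypothetical plant: 3.2 GWh/a serving 80,000 population equivalents
sec = specific_energy(3_200_000, 80_000)
target = 30.0  # illustrative benchmark value, kWh/(PE*a)
print(f"{sec:.1f} kWh/(PE*a); {'above' if sec > target else 'within'} benchmark")
```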

  2. QCD sum-rules for V-A spectral functions

    International Nuclear Information System (INIS)

    Chakrabarti, J.; Mathur, V.S.

    1980-01-01

    The Borel transformation technique of Shifman et al. is used to obtain QCD sum rules for V-A spectral functions. In contrast to the situation in the original Weinberg sum rules and those of Bernard et al., the problem of saturating the sum rules by low-lying resonances is brought under control. Furthermore, the present sum rules, on saturation, directly determine useful phenomenological parameters

  3. Waves from the Sun: to the 100th anniversary of V.A. Troitskaya's birth

    Science.gov (United States)

    Guglielmi, Anatol; Potapov, Alexander

    2017-09-01

    It has been one hundred years since the birth of the outstanding scientist Professor V.A. Troitskaya. Her remarkable achievements in solar-terrestrial physics are widely known. For many years, Valeria A. Troitskaya was the President of the International Association of Geomagnetism and Aeronomy. This article deals with only one aspect of the multifaceted creative activity of V.A. Troitskaya. It relates to the problem of sources of ultra-low frequency (ULF) electromagnetic oscillations and waves outside Earth’s magnetosphere. We were fortunate to work under the leadership of V.A. Troitskaya on this problem. In this paper, we briefly describe the history from the emergence of the idea of the extramagnetospheric origin of dayside permanent ULF oscillations in the late 1960s to the modern quest made by ground and satellite means for ULF waves excited by solar surface oscillations propagating in the interplanetary medium and reaching Earth.

  4. Scholar and teacher: V.A. Kitaev at the history department of Volgograd State University

    Directory of Open Access Journals (Sweden)

    Kuznetsov Oleg Viktorovich

    2013-11-01

    Full Text Available Vladimir A. Kitaev was born in 1941. He was the first dean of the Faculty of History and the first head of the Department of History of the USSR (now the Department of History of Russia) of Volgograd State University, a reputable scholar and a recognized expert in the history of Russian social thought. The article shows the role of V.A. Kitaev in the formation and development of the faculty and the department, and characterizes his research and teaching activities. His qualities as a scholar and teacher, such as great erudition, scientific scrupulousness, and exactingness toward himself and his disciples, are noted. V.A. Kitaev worked at Volgograd State University for 16 years; all the while he headed the department, and for the first four years he was also the dean. His chief aim in those positions was, together with his colleagues, to lay down and develop the traditions of a classical university and a university atmosphere at the faculty. The major scientific topics developed by V.A. Kitaev were the history of liberalism and the fate of liberal reforms (modernization) in Russia, the history of Russian conservative thought, and the problem of revolutionary violence as an inevitable result of the practical realization of socialist ideas. As an advocate of "establishing a full-fledged liberal order", V.A. Kitaev had, in essence, to conclude that in Russia of the 19th and early 20th centuries the historical conditions for the triumph of liberal ideas had not yet developed. The weakness and indecision of the Russian liberals, their fear of the revolutionary movement, and their constant wavering between reform and reaction did not allow them to become an independent political force that would determine the fate of the country.

  5. A benchmark comparison of the Canadian Supercritical Water-Cooled Reactor (SCWR) 64-element fuel lattice cell parameters using various computer codes

    Energy Technology Data Exchange (ETDEWEB)

    Sharpe, J.; Salaun, F.; Hummel, D.; Moghrabi, A., E-mail: sharpejr@mcmaster.ca [McMaster University, Hamilton, ON (Canada); Nowak, M. [McMaster University, Hamilton, ON (Canada); Institut National Polytechnique de Grenoble, Phelma, Grenoble (France); Pencer, J. [McMaster University, Hamilton, ON (Canada); Canadian Nuclear Laboratories, Chalk River, ON, (Canada); Novog, D.; Buijs, A. [McMaster University, Hamilton, ON (Canada)

    2015-07-01

    Discrepancies in key lattice physics parameters have been observed between various deterministic (e.g. DRAGON and WIMS-AECL) and stochastic (MCNP, KENO) neutron transport codes in modeling previous versions of the Canadian SCWR lattice cell. Further, inconsistencies in these parameters have also been observed when using different nuclear data libraries. In this work, the predictions of k∞, various reactivity coefficients, and relative ring-averaged pin powers have been re-evaluated using these codes and libraries with the most recent 64-element fuel assembly geometry. A benchmark problem has been defined to quantify the dissimilarities between code results for a number of responses along the fuel channel under prescribed hot full power (HFP), hot zero power (HZP) and cold zero power (CZP) conditions and at several fuel burnups (0, 25 and 50 MW·d·kg⁻¹ [HM]). Results from deterministic (TRITON, DRAGON) and stochastic codes (MCNP6, KENO V.a and KENO-VI) are presented. (author)
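
Code-to-code discrepancies in k∞ are conventionally quoted as a reactivity difference in pcm. A sketch with hypothetical eigenvalues (not results from the benchmark):

```python
def delta_rho_pcm(k_ref: float, k_test: float) -> float:
    """Reactivity difference between two eigenvalues, in pcm:
    delta_rho = (1/k_ref - 1/k_test) * 1e5."""
    return (1.0 / k_ref - 1.0 / k_test) * 1e5

# Hypothetical lattice eigenvalues from two codes (illustrative only)
print(round(delta_rho_pcm(1.25000, 1.25500), 1))  # → 318.7
```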

  7. 38 CFR 17.1000 - Payment or reimbursement for emergency services for nonservice-connected conditions in non-VA...

    Science.gov (United States)

    2010-07-01

    ... for emergency services for nonservice-connected conditions in non-VA facilities. 17.1000 Section 17.1000 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS MEDICAL Payment Or Reimbursement for Emergency Services for Nonservice-Connected Conditions in Non-Va Facilities § 17.1000 Payment...

  8. Benchmarking in the globalised world and its impact on South ...

    African Journals Online (AJOL)

    In order to understand the potential impact of international benchmarking on South African institutions, it is important to explore the future role of benchmarking on the international level. In this regard, examples of transnational benchmarking activities will be considered. As a result of the involvement of South African ...

  9. Validation of the Monte Carlo Criticality Program KENO V.a for highly-enriched uranium systems

    International Nuclear Information System (INIS)

    Knight, J.R.

    1984-11-01

    A series of calculations based on critical experiments has been performed using the KENO V.a Monte Carlo criticality program for the purpose of validating KENO V.a for use in evaluating Y-12 Plant criticality problems. The experiments were reflected and unreflected systems of single units and arrays containing highly enriched uranium metal or uranium compounds, in a variety of geometrical shapes. The SCALE control module CSAS25 with the 27-group ENDF/B-4 cross-section library was used to perform the calculations. Some of the experiments were also calculated using the 16-group Hansen-Roach library. The results are presented in a series of tables and discussed; they show that the criteria established for the safe application of the KENO IV program may also be used for KENO V.a results
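
Validation studies of this kind reduce the calculated k-eff results for critical experiments to a bias, from which a safety criterion is derived. A simplified sketch (the administrative margin is illustrative, and a full statistical treatment would also include a bias-uncertainty term):

```python
from statistics import mean

def usl(keff_results, expected=1.0, admin_margin=0.05):
    """Simplified upper subcritical limit:
    bias = mean(k_calc) - expected; USL = expected + min(bias, 0) - margin.
    A positive bias is conservatively not credited."""
    bias = mean(keff_results) - expected
    return expected + min(bias, 0.0) - admin_margin

# Hypothetical calculated k-eff values for critical benchmark experiments
keffs = [0.9978, 1.0012, 0.9965, 0.9991, 1.0004]
print(round(usl(keffs), 4))  # mean 0.9990, bias -0.0010 → 0.949
```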

  10. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    .... The design of this study included two parts: (1) eleven expert panelists involved in a Delphi technique to identify and rate importance of foodservice performance measures and rate the importance of benchmarking activities, and (2...

  11. Le CERN va supprimer 600 postes d'ici a 2007

    CERN Multimedia

    2002-01-01

    "Le Laboratoire europeen pour la physique des particules (CERN), qui doit economiser quelque 340 millions d'euros jusqu'en 2008, va reduire ses effectifs de 600 postes d'ici a 2007, a annonce jeudi son porte-parole, James Gillies" (1/2/ page).

  12. Military and Veteran Support: DOD and VA Programs That Address the Effects of Combat and Transition to Civilian Life

    Science.gov (United States)

    2014-11-01

    servicemembers to civilian life. For its part, VA’s agency priority goals are to (1) ensure access to VA benefits and services, (2) eliminate the disability...transfer their benefits to dependents. VA – Veterans Benefit Administration ( VBA ) Spinal Cord Injury and Disorders Centers Disability; Physical...who are temporarily residing in a home owned by a family member to help adapt the home to meet his or her special needs. VA - VBA Yellow Ribbon

  13. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    Science.gov (United States)

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  14. 77 FR 29929 - Safety Zone; Town of Cape Charles Fireworks, Cape Charles Harbor, Cape Charles, VA

    Science.gov (United States)

    2012-05-21

    ... section of this notice. Basis and Purpose On July 4, 2012 the Town of Cape Charles will sponsor a...-AA00 Safety Zone; Town of Cape Charles Fireworks, Cape Charles Harbor, Cape Charles, VA AGENCY: Coast... temporary safety zone on the waters of Cape Charles City Harbor in Cape Charles, VA in support of the Fourth...

  15. 75 FR 44720 - Safety Zone; Live-Fire Gun Exercise, M/V Del Monte, James River, VA

    Science.gov (United States)

    2010-07-29

    ... DEPARTMENT OF HOMELAND SECURITY Coast Guard 33 CFR Part 165 [Docket No. USCG-2010-0585] RIN 1625-AA00 Safety Zone; Live-Fire Gun Exercise, M/V Del Monte, James River, VA AGENCY: Coast Guard, DHS... follows: Sec. 165.T05-0585 Safety Zone; Live-Fire Gun Exercise, M/V Del Monte, James River, VA (a...

  16. THE IMPORTANCE OF BENCHMARKING IN MAKING MANAGEMENT DECISIONS

    Directory of Open Access Journals (Sweden)

    Adriana-Mihaela IONESCU

    2016-06-01

    Full Text Available. Launching a new business or project leads managers to make decisions and choose strategies that they will then apply in their company. Most often, they take decisions only on instinct, but there are also companies that use benchmarking studies. Benchmarking is a highly effective management tool and is useful in the new competitive environment that has emerged from the need of organizations to constantly improve their performance in order to be competitive. Using this benchmarking process, organizations try to find the best practices applied in a business, learn from famous leaders, and identify ways to increase their performance and competitiveness. Thus, managers gather information about market trends and about competitors, especially about the leaders in the field, and use this information in finding ideas and setting guidelines for development. Benchmarking studies are often used in commerce, real estate, industry, and high-tech software businesses.

  17. The International Criticality Safety Benchmark Evaluation Project (ICSBEP)

    International Nuclear Information System (INIS)

    Briggs, J.B.

    2003-01-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organisation for Economic Cooperation and Development (OECD) - Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Yugoslavia, Kazakhstan, Israel, Spain, and Brazil are now participating. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled 'International Handbook of Evaluated Criticality Safety Benchmark Experiments.' The 2003 Edition of the Handbook contains benchmark model specifications for 3070 critical or subcritical configurations that are intended for validating computer codes that calculate effective neutron multiplication and for testing basic nuclear data. (author)

  18. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Sawan, M.E.; Cheng, E.T.

    1985-01-01

    Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li17Pb83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the tritium breeding ratio (TBR) to group structure and weighting spectrum increases as the thickness and Li enrichment decrease, with up to 20% discrepancies for thin natural Li17Pb83 blankets

  19. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes (''Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks,'' R.E. Glass, Sandia National Laboratories, 1985; ''Sample Problem Manual for Benchmarking of Cask Analysis Codes,'' R.E. Glass, Sandia National Laboratories, 1988; ''Standard Thermal Problem Set for the Evaluation of Heat Transfer Codes Used in the Assessment of Transportation Packages,'' R.E. Glass, et al., Sandia National Laboratories, 1988) used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in ''Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks,'' R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem. 6 refs., 5 figs

  20. VA Disability Benefits: Additional Planning Would Enhance Efforts to Improve the Timeliness of Appeals Decisions

    Science.gov (United States)

    2017-03-01

    must manually review and correct most incoming cases due to issues with labeling, mismatched dates, and missing files. Via an internal study, VA...individuals acclimate to their jobs —and factored this into the modeling assumptions used to project the number of Board staff needed. More...Needed to Promote Increased User Satisfaction . GAO-15-582 (Washington, D.C.: September 1, 2015). Page 29 GAO-17-234 VA Disability

  1. Benchmark matrix and guide: Part II.

    Science.gov (United States)

    1991-01-01

    In the last issue of the Journal of Quality Assurance (September/October 1991, Volume 13, Number 5, pp. 14-19), the benchmark matrix developed by Headquarters Air Force Logistics Command was published. Five horizontal levels on the matrix delineate progress in TQM: business as usual, initiation, implementation, expansion, and integration. The six vertical categories that are critical to the success of TQM are leadership, structure, training, recognition, process improvement, and customer focus. In this issue, "Benchmark Matrix and Guide: Part II" will show specifically how to apply the categories of leadership, structure, and training to the benchmark matrix progress levels. At the intersection of each category and level, specific behavior objectives are listed with supporting behaviors and guidelines. Some categories will have objectives that are relatively easy to accomplish, allowing quick progress from one level to the next. Other categories will take considerable time and effort to complete. In the next issue, Part III of this series will focus on recognition, process improvement, and customer focus.

  2. Supermarket Refrigeration System - Benchmark for Hybrid System Control

    DEFF Research Database (Denmark)

    Sloth, Lars Finn; Izadi-Zamanabadi, Roozbeh; Wisniewski, Rafal

    2007-01-01

    This paper presents a supermarket refrigeration system as a benchmark for the development of new ideas and the comparison of methods for hybrid systems' modeling and control. The benchmark features switched dynamics and discrete-valued inputs, making it a hybrid system; furthermore, the outputs are subjected...

  3. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  4. MCNP analysis of the nine-cell LWR gadolinium benchmark

    International Nuclear Information System (INIS)

    Arkuszewski, J.J.

    1988-01-01

    The Monte Carlo results for a 9-cell fragment of the light water reactor square lattice with a central gadolinium-loaded pin are presented. The calculations are performed with the code MCNP-3A and the ENDF/B-5 library and compared with the results obtained from the BOXER code system and the JEF-1 library. The objective of this exercise is to study the feasibility of BOXER for the analysis of a Gd-loaded LWR lattice in the broader framework of GAP International Benchmark Analysis. A comparison of results indicates that, apart from unavoidable discrepancies originating from different data evaluations, the BOXER code overestimates the multiplication factor by 1.4% and underestimates the power release in a Gd cell by 4.66%. It is hoped that further similar studies using the JEF-1 library for both BOXER and MCNP will help to isolate and explain these discrepancies in a cleaner way. (author) 4 refs., 9 figs., 10 tabs

  5. Simplified two and three dimensional HTTR benchmark problems

    International Nuclear Information System (INIS)

    Zhang Zhan; Rahnema, Farzad; Zhang Dingkang; Pounders, Justin M.; Ougouag, Abderrafi M.

    2011-01-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of whole-core configurations. In this paper we have created two- and three-dimensional numerical benchmark problems typical of high temperature gas-cooled prismatic cores. Additionally, single-cell and single-block benchmark problems are included. These problems were derived from the HTTR start-up experiment. Since the primary utility of the benchmark problems is in code-to-code verification, minor details regarding geometry and material specification of the original experiment have been simplified, while retaining the heterogeneity and the major physics properties of the core from a neutronics viewpoint. A six-group material (macroscopic) cross-section library has been generated for the benchmark problems using the lattice depletion code HELIOS. Using this library, Monte Carlo solutions are presented for three configurations (all-rods-in, partially-controlled, and all-rods-out) for both the 2D and 3D problems. These solutions include the core eigenvalues, the block (assembly) averaged fission densities, local peaking factors, the absorption densities in the burnable poison and control rods, and the pin fission density distribution for selected blocks. Also included are the solutions for the single-cell and single-block problems.

  6. A large-scale benchmark of gene prioritization methods.

    Science.gov (United States)

    Guala, Dimitri; Sonnhammer, Erik L L

    2017-04-21

    In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for the identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually only provide internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized in the construction of robust benchmarks, objective to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion (NetRank and two implementations of Random Walk with Restart), and MaxLink, which utilizes the network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand, and helping to guide the development of more accurate methods.
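
    The retrospective benchmarking idea described above can be sketched with a leave-one-out loop: hide one gene from a GO-term gene set, prioritize all candidates against the remaining seeds, and record the rank of the hidden gene. The toy network, the `score()` function, and the gene names below are all hypothetical stand-ins, not the FunCoup network or any of the benchmarked algorithms.

    ```python
    # Toy leave-one-out benchmark of a gene prioritization scorer.
    # All data and the scoring rule are illustrative assumptions.

    def score(candidate, seed_genes, network):
        """Hypothetical prioritization score: number of direct links to seed genes."""
        return sum(1 for s in seed_genes
                   if (candidate, s) in network or (s, candidate) in network)

    def loo_ranks(gene_set, all_genes, network):
        """Hold each annotated gene out in turn and rank it among non-seed genes (1 = best)."""
        ranks = []
        for held_out in gene_set:
            seeds = [g for g in gene_set if g != held_out]
            candidates = [g for g in all_genes if g not in seeds]
            ordered = sorted(candidates,
                             key=lambda g: score(g, seeds, network),
                             reverse=True)
            ranks.append(ordered.index(held_out) + 1)
        return ranks

    # Undirected toy network stored as a set of edge tuples.
    network = {("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")}
    go_term_genes = ["A", "B", "C"]          # genes annotated to one GO term
    universe = ["A", "B", "C", "D", "E"]
    print(loo_ranks(go_term_genes, universe, network))  # → [1, 1, 1]
    ```

    Aggregating such ranks over many GO terms (e.g. into an ROC curve) gives the kind of performance measure the benchmark suite compares across tools.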

  7. Decoys Selection in Benchmarking Datasets: Overview and Perspectives

    Science.gov (United States)

    Réau, Manon; Langenfeld, Florent; Zagury, Jean-François; Lagarde, Nathalie; Montes, Matthieu

    2018-01-01

    Virtual Screening (VS) is designed to prospectively help identifying potential hits, i.e., compounds capable of interacting with a given target and potentially modulate its activity, out of large compound collections. Among the variety of methodologies, it is crucial to select the protocol that is the most adapted to the query/target system under study and that yields the most reliable output. To this aim, the performance of VS methods is commonly evaluated and compared by computing their ability to retrieve active compounds in benchmarking datasets. The benchmarking datasets contain a subset of known active compounds together with a subset of decoys, i.e., assumed non-active molecules. The composition of both the active and the decoy compounds subsets is critical to limit the biases in the evaluation of the VS methods. In this review, we focus on the selection of decoy compounds that has considerably changed over the years, from randomly selected compounds to highly customized or experimentally validated negative compounds. We first outline the evolution of decoys selection in benchmarking databases as well as current benchmarking databases that tend to minimize the introduction of biases, and secondly, we propose recommendations for the selection and the design of benchmarking datasets. PMID:29416509

  8. Core Benchmarks Descriptions

    International Nuclear Information System (INIS)

    Pavlovichev, A.M.

    2001-01-01

    Actual regulations while designing of new fuel cycles for nuclear power installations comprise a calculational justification to be performed by certified computer codes. It guarantees that obtained calculational results will be within the limits of declared uncertainties that are indicated in a certificate issued by Gosatomnadzor of Russian Federation (GAN) and concerning a corresponding computer code. A formal justification of declared uncertainties is the comparison of calculational results obtained by a commercial code with the results of experiments or of calculational tests that are calculated with an uncertainty defined by certified precision codes of MCU type or of other one. The actual level of international cooperation provides an enlarging of the bank of experimental and calculational benchmarks acceptable for a certification of commercial codes that are being used for a design of fuel loadings with MOX fuel. In particular, the work is practically finished on the forming of calculational benchmarks list for a certification of code TVS-M as applied to MOX fuel assembly calculations. The results on these activities are presented

  9. A Benchmark Estimate for the Capital Stock. An Optimal Consistency Method

    OpenAIRE

    Jose Miguel Albala-Bertrand

    2001-01-01

    There are alternative methods to estimate a capital stock for a benchmark year. These methods, however, do not allow for an independent check, which could establish whether the estimated benchmark level is too high or too low. I propose here an optimal consistency method (OCM), which may allow estimating a capital stock level for a benchmark year and/or checking the consistency of alternative estimates of a benchmark capital stock.
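
    For context on what a "benchmark-year capital stock" feeds into: the standard perpetual inventory method rolls such an estimate forward via K_t = (1 − δ)·K_{t−1} + I_t. This is the conventional textbook recursion, not the paper's optimal consistency method; the depreciation rate and investment series below are illustrative numbers only.

    ```python
    # Perpetual inventory method (PIM) sketch: roll a benchmark-year
    # capital stock forward with depreciation rate `delta` and an
    # investment series. All numbers are illustrative assumptions.

    def perpetual_inventory(k0, investments, delta):
        """Return [K_0, K_1, ...] with K_t = (1 - delta) * K_{t-1} + I_t."""
        stocks = [k0]
        for inv in investments:
            stocks.append((1 - delta) * stocks[-1] + inv)
        return stocks

    # Benchmark-year stock of 100, 5% depreciation, three years of investment.
    print(perpetual_inventory(100.0, [10.0, 12.0, 11.0], 0.05))
    ```

    Because every later K_t inherits the benchmark level K_0, an error in the benchmark estimate propagates through the whole series, which is why an independent consistency check of the benchmark level matters.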

  10. The institutionalization of benchmarking in the Danish construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard; Gottlieb, Stefan Christoffer

    …the chapter accounts for the data collection methods used to conduct the empirical data collection and the appertaining choices that are made, based on the account for analyzing institutionalization processes. The analysis unfolds over seven chapters, starting with an exposition of the political foundation… and disseminated to the construction industry. The fourth chapter demonstrates how benchmarking was concretized into a benchmarking system and articulated to address several political focus areas for the construction industry. BEC accordingly became a political arena where many local perspectives and strategic… emerged as actors expressed diverse political interests in the institutionalization of benchmarking. The political struggles accounted for in chapter five constituted a powerful political pressure and called for transformations of the institutionalization in order for benchmarking to attain institutional…

  11. Benchmarking of municipal disability pension practice (Benchmarking af kommunernes førtidspensionspraksis)

    DEFF Research Database (Denmark)

    Gregersen, Ole

    Each year, the National Social Appeals Board (Den Sociale Ankestyrelse) publishes statistics on decisions in disability pension cases. Alongside the annual statistics, results are published from a benchmarking model in which the number of pensions awarded in each municipality is compared with the number that would be expected if the municipality had the same decision practice as the "average municipality", after correcting for the social structure of the municipality. The benchmarking model used to date is documented in Ole Gregersen (1994): Kommunernes Pensionspraksis, Servicerapport, Socialforskningsinstituttet. This note documents a…

  12. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Sawan, M.L.; Cheng, E.T.

    1986-01-01

    Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li17Pb83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the tritium breeding ratio to group structure and weighting spectrum increases as the thickness and Li enrichment decrease, with up to 20% discrepancies for thin natural Li17Pb83 blankets. (author)

  13. Numerical simulations of concrete flow: A benchmark comparison

    DEFF Research Database (Denmark)

    Roussel, Nicolas; Gram, Annika; Cremonesi, Massimiliano

    2016-01-01

    First, we define in this paper two benchmark flows readily usable by anyone calibrating a numerical tool for concrete flow prediction. Such benchmark flows shall allow anyone to check the validity of their computational tools no matter the numerical methods and parameters they choose. Second, we ...

  14. Guideline for benchmarking thermal treatment systems for low-level mixed waste

    International Nuclear Information System (INIS)

    Hoffman, D.P.; Gibson, L.V. Jr.; Hermes, W.H.; Bastian, R.E.; Davis, W.T.

    1994-01-01

    A process for benchmarking low-level mixed waste (LLMW) treatment technologies has been developed. When used in conjunction with the identification and preparation of surrogate waste mixtures, and with defined quality assurance and quality control procedures, the benchmarking process will effectively streamline the selection of treatment technologies being considered by the US Department of Energy (DOE) for LLMW cleanup and management. Following the quantitative template provided in the benchmarking process will greatly increase the technical information available for the decision-making process. The additional technical information will remove a large part of the uncertainty in the selection of treatment technologies. It is anticipated that the use of the benchmarking process will minimize technology development costs and overall treatment costs. In addition, the benchmarking process will enhance development of the most promising LLMW treatment processes and aid in transferring the technology to the private sector. To instill inherent quality, the benchmarking process is based on defined criteria and a structured evaluation format, which are independent of any specific conventional treatment or emerging process technology. Five categories of benchmarking criteria have been developed for the evaluation: operation/design; personnel health and safety; economics; product quality; and environmental quality. This benchmarking document gives specific guidance on what information should be included and how it should be presented. A standard format for reporting is included in Appendix A and B of this document. Special considerations for LLMW are presented and included in each of the benchmarking categories

  15. IT-benchmarking of clinical workflows: concept, implementation, and evaluation.

    Science.gov (United States)

    Thye, Johannes; Straede, Matthias-Christopher; Liebe, Jan-David; Hübner, Ursula

    2014-01-01

    Due to the emerging evidence of health IT as opportunity and risk for clinical workflows, health IT must undergo a continuous measurement of its efficacy and efficiency. IT-benchmarks are a proven means for providing this information. The aim of this study was to enhance the methodology of an existing benchmarking procedure by including, in particular, new indicators of clinical workflows and by proposing new types of visualisation. Drawing on the concept of information logistics, we propose four workflow descriptors that were applied to four clinical processes. General and specific indicators were derived from these descriptors and processes. 199 chief information officers (CIOs) took part in the benchmarking. These hospitals were assigned to reference groups of a similar size and ownership from a total of 259 hospitals. Stepwise and comprehensive feedback was given to the CIOs. Most participants who evaluated the benchmark rated the procedure as very good, good, or rather good (98.4%). Benchmark information was used by CIOs for getting a general overview, advancing IT, preparing negotiations with board members, and arguing for a new IT project.

  16. Benchmark calculations for VENUS-2 MOX-fueled reactor dosimetry

    International Nuclear Information System (INIS)

    Kim, Jong Kung; Kim, Hong Chul; Shin, Chang Ho; Han, Chi Young; Na, Byung Chan

    2004-01-01

    As part of a Nuclear Energy Agency (NEA) project, the benchmark for dosimetry calculations of the VENUS-2 MOX-fuelled reactor was pursued. In this benchmark, the goal is to test the current state-of-the-art computational methods of calculating neutron flux to reactor components against the measured data of the VENUS-2 MOX-fuelled critical experiments. The measured data to be used for this benchmark are the equivalent fission fluxes, which are the reaction rates divided by the U-235 fission-spectrum-averaged cross-section of the corresponding dosimeter. The present benchmark is, therefore, defined to calculate reaction rates and corresponding equivalent fission fluxes measured on the core mid-plane at specific positions outside the core of the VENUS-2 MOX-fuelled reactor. This is a follow-up exercise to the previously completed UO2-fuelled VENUS-1 two-dimensional and VENUS-3 three-dimensional exercises. The use of MOX fuel in LWRs presents different neutron characteristics, and this is the main interest of the current benchmark compared to the previous ones
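
    The abstract itself defines the measured quantity: an equivalent fission flux is the dosimeter reaction rate divided by that dosimeter's cross-section averaged over the U-235 fission spectrum. The sketch below just encodes that one-line definition; the numerical values are illustrative, not VENUS-2 data.

    ```python
    # Equivalent fission flux as defined in the benchmark description:
    #   phi_eq = R / <sigma>_chi
    # where R is the measured reaction rate per atom and <sigma>_chi is the
    # dosimeter cross-section averaged over the U-235 fission spectrum.
    # The inputs below are made-up illustrative magnitudes.

    def equivalent_fission_flux(reaction_rate, sigma_fiss_avg):
        """reaction_rate in reactions/atom/s, sigma_fiss_avg in cm^2; returns n/cm^2/s."""
        return reaction_rate / sigma_fiss_avg

    phi = equivalent_fission_flux(4.0e-17, 8.0e-26)
    print(phi)  # ~5.0e8 n/cm^2/s
    ```

    Dividing by the spectrum-averaged cross-section normalizes away the dosimeter-specific response, so fluxes inferred from different dosimeters can be compared on a common footing.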

  17. Benchmarking the cost efficiency of community care in Australian child and adolescent mental health services: implications for future benchmarking.

    Science.gov (United States)

    Furber, Gareth; Brann, Peter; Skene, Clive; Allison, Stephen

    2011-06-01

    The purpose of this study was to benchmark the cost efficiency of community care across six child and adolescent mental health services (CAMHS) drawn from different Australian states. Organizational, contact and outcome data from the National Mental Health Benchmarking Project (NMHBP) data-sets were used to calculate cost per "treatment hour" and cost per episode for the six participating organizations. We also explored the relationship between intake severity as measured by the Health of the Nations Outcome Scales for Children and Adolescents (HoNOSCA) and cost per episode. The average cost per treatment hour was $223, with cost differences across the six services ranging from a mean of $156 to $273 per treatment hour. The average cost per episode was $3349 (median $1577) and there were significant differences in the CAMHS organizational medians ranging from $388 to $7076 per episode. HoNOSCA scores explained at best 6% of the cost variance per episode. These large cost differences indicate that community CAMHS have the potential to make substantial gains in cost efficiency through collaborative benchmarking. Benchmarking forums need considerable financial and business expertise for detailed comparison of business models for service provision.
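
    The two indicators compared across the six services reduce to simple ratios: total community-care cost divided by treatment hours, and by episodes. A minimal sketch, with made-up service figures rather than NMHBP data:

    ```python
    # Cost-efficiency indicators of the kind benchmarked in the study.
    # The service totals below are illustrative assumptions.

    def cost_indicators(total_cost, treatment_hours, episodes):
        """Return cost per treatment hour and cost per episode for one service."""
        return {
            "cost_per_hour": total_cost / treatment_hours,
            "cost_per_episode": total_cost / episodes,
        }

    services = {
        "Service A": cost_indicators(1_200_000, 5_500, 400),
        "Service B": cost_indicators(900_000, 5_800, 350),
    }
    for name, ind in services.items():
        print(name, round(ind["cost_per_hour"]), round(ind["cost_per_episode"]))
    ```

    Ranking services on such ratios is only meaningful alongside case-mix information, which is why the study also checked how much intake severity (HoNOSCA) explained of the cost variance (at best 6%).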

  18. FENDL-3 benchmark test with neutronics experiments related to fusion in Japan

    International Nuclear Information System (INIS)

    Konno, Chikara; Ohta, Masayuki; Takakura, Kosuke; Ochiai, Kentaro; Sato, Satoshi

    2014-01-01

    Highlights: • We have benchmarked FENDL-3.0 against integral experiments with DT neutron sources in Japan. • FENDL-3.0 is as accurate as, or more accurate than, FENDL-2.1 and JENDL-4.0. • Some data in FENDL-3.0 may have problems. -- Abstract: The IAEA supports and promotes the gathering of the best data from evaluated nuclear data libraries for each nucleus involved in fusion reactor applications and compiles these data as FENDL. In 2012, the IAEA released a major update to FENDL, FENDL-3.0, which extends the neutron energy range from 20 MeV to greater than 60 MeV for 180 nuclei. We have benchmarked FENDL-3.0 against in situ and TOF experiments using the DT neutron source at FNS at the JAEA, and against TOF experiments using the DT neutron source at OKTAVIAN at Osaka University in Japan. The Monte Carlo code MCNP-5 and the ACE file of FENDL-3.0 supplied by the IAEA were used for the calculations. The results were compared with measured ones and with those obtained using the previous version, FENDL-2.1, and the latest version, JENDL-4.0. It is concluded that FENDL-3.0 is as accurate as, or more accurate than, FENDL-2.1 and JENDL-4.0, although some data in FENDL-3.0 may be problematic.

  19. VaProS: a database-integration approach for protein/genome information retrieval

    KAUST Repository

    Gojobori, Takashi; Ikeo, Kazuho; Katayama, Yukie; Kawabata, Takeshi; Kinjo, Akira R.; Kinoshita, Kengo; Kwon, Yeondae; Migita, Ohsuke; Mizutani, Hisashi; Muraoka, Masafumi; Nagata, Koji; Omori, Satoshi; Sugawara, Hideaki; Yamada, Daichi; Yura, Kei

    2016-01-01

    Life science research now heavily relies on all sorts of databases for genome sequences, transcription, protein three-dimensional (3D) structures, protein–protein interactions, phenotypes and so forth. The knowledge accumulated by all the omics research is so vast that a computer-aided search of data is now a prerequisite for starting a new study. In addition, a combinatory search throughout these databases has a chance to extract new ideas and new hypotheses that can be examined by wet-lab experiments. By virtually integrating the related databases on the Internet, we have built a new web application that facilitates life science researchers for retrieving experts’ knowledge stored in the databases and for building a new hypothesis of the research target. This web application, named VaProS, puts stress on the interconnection between the functional information of genome sequences and protein 3D structures, such as structural effect of the gene mutation. In this manuscript, we present the notion of VaProS, the databases and tools that can be accessed without any knowledge of database locations and data formats, and the power of search exemplified in quest of the molecular mechanisms of lysosomal storage disease. VaProS can be freely accessed at http://p4d-info.nig.ac.jp/vapros/.

  20. VaProS: a database-integration approach for protein/genome information retrieval

    KAUST Repository

    Gojobori, Takashi

    2016-12-24

    Life science research now heavily relies on all sorts of databases for genome sequences, transcription, protein three-dimensional (3D) structures, protein–protein interactions, phenotypes and so forth. The knowledge accumulated by all the omics research is so vast that a computer-aided search of data is now a prerequisite for starting a new study. In addition, a combinatory search throughout these databases has a chance to extract new ideas and new hypotheses that can be examined by wet-lab experiments. By virtually integrating the related databases on the Internet, we have built a new web application that facilitates life science researchers for retrieving experts’ knowledge stored in the databases and for building a new hypothesis of the research target. This web application, named VaProS, puts stress on the interconnection between the functional information of genome sequences and protein 3D structures, such as structural effect of the gene mutation. In this manuscript, we present the notion of VaProS, the databases and tools that can be accessed without any knowledge of database locations and data formats, and the power of search exemplified in quest of the molecular mechanisms of lysosomal storage disease. VaProS can be freely accessed at http://p4d-info.nig.ac.jp/vapros/.