WorldWideScience

Sample records for functional benchmark results

  1. Analysis of Benchmark 2 results

    International Nuclear Information System (INIS)

    Bacha, F.; Lefievre, B.; Maillard, J.; Silva, J.

    1994-01-01

    The code GEANT315 has been compared to different codes in two benchmarks. We analyze its performance through our results, especially in the thick-target case. In spite of gaps in nucleus-nucleus interaction theories at intermediate energies, the benchmarks point to possible improvements of the physical models used in our codes. Thereafter, a scheme for a radioactive waste burning system is studied. (authors). 4 refs., 7 figs., 1 tab

  2. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own-price elasticity.

  3. A benchmark server using high resolution protein structure data, and benchmark results for membrane helix predictions.

    Science.gov (United States)

    Rath, Emma M; Tessier, Dominique; Campbell, Alexander A; Lee, Hong Ching; Werner, Tim; Salam, Noeris K; Lee, Lawrence K; Church, W Bret

    2013-03-27

    Helical membrane proteins are vital for the interaction of cells with their environment. Predicting the location of membrane helices in protein amino acid sequences provides substantial understanding of their structure and function and identifies membrane proteins in sequenced genomes. Currently there is no comprehensive benchmark tool for evaluating prediction methods, and there is no publication comparing all available prediction tools. Current benchmark literature is outdated, as recently determined membrane protein structures are not included. Current literature is also limited to global assessments, as specialised benchmarks for predicting specific classes of membrane proteins were not previously carried out. We present a benchmark server at http://sydney.edu.au/pharmacy/sbio/software/TMH_benchmark.shtml that uses recent high resolution protein structural data to provide a comprehensive assessment of the accuracy of existing membrane helix prediction methods. The server further allows a user to compare uploaded predictions generated by novel methods, permitting the comparison of these novel methods against all existing methods compared by the server. Benchmark metrics include sensitivity and specificity of predictions for membrane helix location and orientation, and many others. The server allows for customised evaluations, such as assessing prediction method performance for specific helical membrane protein subtypes. We report results for custom benchmarks which illustrate how the server may be used for specialised benchmarks. Which prediction method performs best depends on which measure is being benchmarked. The OCTOPUS membrane helix prediction method is consistently one of the highest performing methods across all measures in the benchmarks that we performed. The benchmark server allows general and specialised assessment of existing and novel membrane helix prediction methods. Users can employ this benchmark server to determine the most...
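
    A quick illustration of the per-residue sensitivity and specificity metrics mentioned above, assuming binary membrane/non-membrane residue labels. This is a minimal sketch of the metric definitions only; the function name and labels are ours, not the server's code.

```python
# Minimal sketch of per-residue sensitivity/specificity for a membrane
# helix prediction. Labels are illustrative: 1 = residue in a membrane
# helix, 0 = not. Not the benchmark server's implementation.

def sensitivity_specificity(observed, predicted):
    """Return (sensitivity, specificity) for equal-length 0/1 label lists."""
    tp = sum(1 for o, p in zip(observed, predicted) if o == 1 and p == 1)
    tn = sum(1 for o, p in zip(observed, predicted) if o == 0 and p == 0)
    fp = sum(1 for o, p in zip(observed, predicted) if o == 0 and p == 1)
    fn = sum(1 for o, p in zip(observed, predicted) if o == 1 and p == 0)
    sens = tp / (tp + fn) if tp + fn else 0.0  # true helix residues recovered
    spec = tn / (tn + fp) if tn + fp else 0.0  # non-helix residues kept clean
    return sens, spec

observed  = [0, 1, 1, 1, 0, 0, 1, 1, 0]   # from a high resolution structure
predicted = [0, 1, 1, 0, 0, 1, 1, 1, 0]   # from a prediction method
print(sensitivity_specificity(observed, predicted))  # (0.8, 0.75)
```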

  4. Benchmarking implementations of lazy functional languages II -- Two years later

    NARCIS (Netherlands)

    Hartel, Pieter H.; Johnsson, T.

    Six implementations of different lazy functional languages are compared using a common benchmark of a dozen medium-sized programs. The experiments that were carried out two years ago have been repeated to chart progress in the development of these compilers. The results have been extended to include...

  5. Benchmark Results for Few-Body Hypernuclei

    Science.gov (United States)

    Ferrari Ruffino, F.; Lonardoni, D.; Barnea, N.; Deflorian, S.; Leidemann, W.; Orlandini, G.; Pederiva, F.

    2017-05-01

    The Non-Symmetrized Hyperspherical Harmonics method (NSHH) is introduced in the hypernuclear sector and benchmarked with three different ab-initio methods, namely the Auxiliary Field Diffusion Monte Carlo method, the Faddeev-Yakubovsky approach and the Gaussian Expansion Method. Binding energies and hyperon separation energies of three- to five-body hypernuclei are calculated by employing the two-body ΛN component of the phenomenological Bodmer-Usmani potential (Bodmer and Usmani in Nucl Phys A 477:621, 1988; Usmani and Khanna in J Phys G 35:025105, 2008), and a hyperon-nucleon interaction (Hiyama et al. in Phys Rev C 65:011301, 2001) simulating the scattering phase shifts given by NSC97f (Rijken et al. in Phys Rev C 59:21, 1999). The range of applicability of the NSHH method is briefly discussed.

  6. Actinides transmutation - a comparison of results for PWR benchmark

    International Nuclear Information System (INIS)

    Claro, Luiz H.

    2009-01-01

    The physical aspects involved in the partitioning and transmutation (P and T) of minor actinides (MA) and fission products (FP) generated by PWR reactors are of great interest to the nuclear industry. Moreover, reducing the amount of radioactive waste in storage bears on the acceptability of nuclear electric power. Among the several partitioning and transmutation concepts suggested in the literature, one involves PWR reactors burning fuel that contains plutonium and minor actinides reprocessed from UO2 used in previous stages. This work presents the results of a P and T benchmark calculation carried out with the WIMSD5B program, using its new cross-section library generated from ENDF/B-VII, and compares them with results published in the literature from other calculations. The comparison used the benchmark transmutation concept based on a typical PWR cell, and the analyzed results were k∞ and the atomic densities of the isotopes Np-239, Pu-241, Pu-242 and Am-242m as functions of burnup, considering a discharge burnup of 50 GWd/tHM. (author)

  7. Results of the benchmark for blade structural models, part A

    DEFF Research Database (Denmark)

    Lekou, D.J.; Chortis, D.; Belen Fariñas, A.

    2013-01-01

    A benchmark on structural design methods for blades was performed within the InnWind.Eu project under WP2 “Lightweight Rotor”, Task 2.2 “Lightweight structural design”. The present document describes the results of the comparison simulation runs that were performed by the partners involved within Task 2.2 of the InnWind.Eu project. The benchmark is based on the reference wind turbine and the reference blade provided by DTU [1]. “Structural Concept developers/modelers” of WP2 were provided with the necessary input for a comparison numerical simulation run, upon definition of the reference blade.

  8. Spherical harmonic results for the 3D Kobayashi Benchmark suite

    International Nuclear Information System (INIS)

    Brown, P N; Chang, B; Hanebutte, U R

    1999-01-01

    Spherical harmonic solutions are presented for the Kobayashi benchmark suite. The results were obtained with Ardra, a scalable, parallel neutron transport code developed at Lawrence Livermore National Laboratory (LLNL). The calculations were performed on the IBM ASCI Blue-Pacific computer at LLNL

  9. Performance of Multi-chaotic PSO on a shifted benchmark functions set

    International Nuclear Information System (INIS)

    Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan

    2015-01-01

    In this paper the performance of the Multi-chaotic PSO algorithm is investigated on two shifted benchmark functions. The purpose of shifted benchmark functions is to simulate time-variant real-world problems. The results of the chaotic PSO are compared with the canonical version of the algorithm. It is concluded that the multi-chaotic approach can lead to better results in the optimization of shifted functions.
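
    For illustration, a shifted benchmark function in the common CEC style is simply f(x) = f_base(x - o), where the shift vector o relocates the optimum; re-drawing o over time mimics the time-variant problems mentioned above. The sketch below is our own minimal construction, not the paper's exact test set.

```python
import numpy as np

# Minimal sketch of a shifted benchmark function: the sphere function with
# its optimum relocated from the origin to a shift vector `o`. The bounds
# and dimension are illustrative, not the paper's settings.

def shifted_sphere(x, shift):
    """Sphere function f(x) = sum((x - o)^2), minimized at x = o."""
    z = np.asarray(x) - np.asarray(shift)
    return float(np.dot(z, z))

rng = np.random.default_rng(42)
shift = rng.uniform(-50.0, 50.0, size=10)       # hypothetical shift vector o
print(shifted_sphere(shift, shift))             # 0.0 at the shifted optimum
print(shifted_sphere(np.zeros(10), shift) > 0)  # True away from the optimum
```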

  10. VVER-1000 burnup credit benchmark (CB5). New results evaluation

    International Nuclear Information System (INIS)

    Manolova, M.; Mihaylov, N.; Prodanova, R.

    2008-01-01

    The validation of depletion codes is an important task in spent fuel management, especially for burnup credit applications in the criticality safety analysis of spent fuel facilities. Because of the lack of well-documented experimental data for VVER-1000, validation can be based on code intercomparison using numerical benchmark problems. Some years ago a VVER-1000 burnup credit benchmark (CB5) was proposed to the AER research community, and preliminary results from three depletion codes were compared. In this paper, new results for the isotopic concentrations of twelve actinides and fifteen fission products calculated by the depletion codes SCALE5.1, WIMS9, SCALE4.4 and NESSEL-NUKO are compared and evaluated. (authors)

  11. Benchmarking NNWSI flow and transport codes: COVE 1 results

    International Nuclear Information System (INIS)

    Hayden, N.K.

    1985-06-01

    The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs

  12. Parton distribution functions and benchmark cross sections at NNLO

    Energy Technology Data Exchange (ETDEWEB)

    Alekhin, S. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institute for High Energy Physics (IHEP), Protvino (Russian Federation)]; Bluemlein, J.; Moch, S. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]

    2012-02-15

    We present a determination of parton distribution functions (ABM11) and the strong coupling constant α_s at next-to-leading order and next-to-next-to-leading order (NNLO) in QCD based on world data for deep-inelastic scattering and fixed-target data for the Drell-Yan process. The analysis is performed in the fixed-flavor number scheme for n_f = 3, 4, 5 and uses the MS-bar scheme for α_s and the heavy-quark masses. At NNLO we obtain the value α_s(M_Z) = 0.1134 ± 0.0011. The fit results are used to compute benchmark cross sections at hadron colliders to NNLO accuracy and to compare to data from the LHC. (orig.)

  13. Systems reliability Benchmark exercise part 1-Description and results

    International Nuclear Information System (INIS)

    Amendola, A.

    1986-01-01

    The report describes the aims, rules and results of the Systems Reliability Benchmark Exercise, which was performed in order to assess methods and procedures for the reliability analysis of complex systems and involved a large number of European organizations active in NPP safety evaluation. The exercise included both qualitative and quantitative methods and was structured in such a way that the effects of uncertainties in modelling and in data on the overall spread could be separated. Part 1 describes the way in which the RBE was performed, its main results and its conclusions.

  14. Benchmarks and performance indicators: two tools for evaluating organizational results and continuous quality improvement efforts.

    Science.gov (United States)

    McKeon, T

    1996-04-01

    Benchmarks are tools that can be compared across companies and industries to measure process output. The key to benchmarking is understanding the composition of the benchmark and whether the benchmarks consist of homogeneous groupings. Performance measures expand the concept of benchmarking and cross organizational boundaries to include factors that are strategically important to organizational success. Incorporating performance measures into a balanced scorecard will provide a comprehensive tool to evaluate organizational results.

  15. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine whether current benchmark asset pricing models adequately describe the cross-section of stock returns.

  16. JNC results of BN-600 benchmark calculation (phase 4)

    International Nuclear Information System (INIS)

    Ishikawa, Makoto

    2003-01-01

    The present work gives the results of JNC, Japan, for Phase 4 of the BN-600 core benchmark problem (Hex-Z fully MOX-fuelled core model) organized by the IAEA. The benchmark specification is based on the RCM report of the IAEA CRP on 'Updated Codes and Methods to Reduce the Calculational Uncertainties of LMFR Reactivity Effects, Action 3.12' (calculations for the BN-600 fully MOX-fuelled core for subsequent transient analyses). The JENDL-3.2 nuclear data library was used for calculating 70-group ABBN-type group constants. Two cell models were applied for the fuel assembly and control rod calculations: a homogeneous model and a heterogeneous (cylindrical supercell) model. The basic diffusion calculation was a three-dimensional Hex-Z, 18-group model (CITATION code). Transport calculations were 18-group, three-dimensional (NSHEX code), based on the Sn-transport nodal method developed at JNC. The thermal power generated per fission was based on Sher's data corrected on the basis of the ENDF/B-IV data library. Calculation results are presented in tables for intercomparison.

  17. Benchmark density functional theory calculations for nanoscale conductance

    DEFF Research Database (Denmark)

    Strange, Mikkel; Bækgaard, Iben Sig Buur; Thygesen, Kristian Sommer

    2008-01-01

    We present a set of benchmark calculations for the Kohn-Sham elastic transmission function of five representative single-molecule junctions. The transmission functions are calculated using two different density functional theory methods, namely an ultrasoft pseudopotential plane-wave code in combination with maximally localized Wannier functions, and the norm-conserving pseudopotential code SIESTA, which applies an atomic orbital basis set. All calculations have been converged with respect to the supercell size and the number of k∥ points in the surface plane. For all systems we find...

  18. The Learning Organisation: Results of a Benchmarking Study.

    Science.gov (United States)

    Zairi, Mohamed

    1999-01-01

    Learning in corporations was assessed using these benchmarks: core qualities of creative organizations, characteristics of organizational creativity, attributes of flexible organizations, use of diversity and conflict, creative human resource management systems, and effective and successful teams. These benchmarks are key elements of the learning…

  19. Benchmarking Evaluation Results for Prototype Extravehicular Activity Gloves

    Science.gov (United States)

    Aitchison, Lindsay; McFarland, Shane

    2012-01-01

    The Space Suit Assembly (SSA) Development Team at NASA Johnson Space Center has invested heavily in the advancement of rear-entry planetary exploration suit design but largely deferred development of extravehicular activity (EVA) glove designs, and accepted the risk of using the current flight gloves, Phase VI, for unique mission scenarios outside the Space Shuttle and International Space Station (ISS) Program realm of experience. However, as design reference missions mature, the risks of using heritage hardware have highlighted the need for developing robust new glove technologies. To address the technology gap, the NASA Game-Changing Technology group provided start-up funding for the High Performance EVA Glove (HPEG) Project in the spring of 2012. The overarching goal of the HPEG Project is to develop a robust glove design that increases human performance during EVA and creates a pathway for future implementation of emergent technologies, with specific aims of increasing pressurized mobility to 60% of barehanded capability, increasing durability by 100%, and decreasing the potential of gloves to cause injury during use. The HPEG Project focused initial efforts on identifying potential new technologies and benchmarking the performance of current state-of-the-art gloves to identify trends in design and fit, leading to standards and metrics against which emerging technologies can be assessed at both the component and assembly levels. The first of the benchmarking tests evaluated the quantitative mobility performance and subjective fit of four prototype gloves developed by Flagsuit LLC, Final Frontier Designs, ILC Dover, and David Clark Company as compared to the Phase VI. All of the companies were asked to design and fabricate gloves to the same set of NASA-provided hand measurements (which corresponded to a single size of Phase VI glove) and to focus their efforts on improving mobility in the metacarpal-phalangeal and carpometacarpal joints. Four test...

  20. Benchmarking Density Functionals for Chemical Bonds of Gold

    DEFF Research Database (Denmark)

    Kepp, Kasper Planeta

    2017-01-01

    Gold plays a major role in nanochemistry, catalysis, and electrochemistry. Accordingly, hundreds of studies apply density functionals to study chemical bonding with gold, yet there is no systematic attempt to assess the accuracy of these methods applied to gold. This paper reports a benchmark ... considering the diverse bonds to gold and the complication of relativistic effects. Thus, studies that use DFT with effective core potentials for gold chemistry, with no alternative due to computational cost, are on solid ground using TPSS-D3 or PBE-D3.

  1. Results of the event sequence reliability benchmark exercise

    International Nuclear Information System (INIS)

    Silvestri, E.

    1990-01-01

    The Event Sequence Reliability Benchmark Exercise is the fourth in a series of benchmark exercises on reliability and risk assessment, with specific reference to nuclear power plant applications, and is the logical continuation of the previous benchmark exercises on system analysis, common cause failure and human factors. The reference plant is the nuclear power plant at Grohnde, Federal Republic of Germany, a 1300 MW PWR of KWU design. The specific objective of the exercise is to model, quantify and analyze those event sequences, initiated by the occurrence of a loss of offsite power, that involve the steam generator feed. The general aim is to develop a segment of a risk assessment that includes all the specific aspects and models of quantification developed in the previous reliability benchmark exercises, such as common cause failure, human factors and system analysis, with the addition of the specific topics of dependences between homologous components belonging to different systems featuring in a given event sequence, and of uncertainty quantification, ending with an overall assessment of the state of the art in risk assessment and the relative influence of quantification problems in a general risk assessment framework. The exercise has been carried out in two phases, both requiring modelling and quantification, with the second phase adopting more restrictive rules and fixing certain common data, as emerged necessary from the first phase. Fourteen teams participated in the exercise, mostly from EEC countries, with one from Sweden and one from the USA. (author)

  2. Benchmarks for electronically excited states: Time-dependent density functional theory and density functional theory based multireference configuration interaction

    DEFF Research Database (Denmark)

    Silva-Junior, Mario R.; Schreiber, Marko; Sauer, Stephan P. A.

    2008-01-01

    Time-dependent density functional theory (TD-DFT) and DFT-based multireference configuration interaction (DFT/MRCI) calculations are reported for a recently proposed benchmark set of 28 medium-sized organic molecules. Vertical excitation energies, oscillator strengths, and excited-state dipole moments are computed using the same geometries (MP2/6-31G*) and basis set (TZVP) as in our previous ab initio benchmark study on electronically excited states. The results from TD-DFT (with the functionals BP86, B3LYP, and BHLYP) and from DFT/MRCI are compared against the previous high-level ab initio...

  3. Benchmark results for the critical slab and sphere problem in one-speed neutron transport theory

    International Nuclear Information System (INIS)

    Rawat, Ajay; Mohankumar, N.

    2011-01-01

    Research highlights: → The critical slab and sphere problems in one-speed neutron transport under the Case eigenfunction formalism are considered. → These equations reduce to integral expressions involving X functions. → Gauss quadrature is not ideal, but DE quadrature is well suited. → A severalfold decrease in computational effort with improved accuracy is realisable. - Abstract: In this paper, benchmark numerical results for the one-speed criticality problem with isotropic scattering for the slab and sphere are reported. The Fredholm integral equations of the second kind, based on the Case eigenfunction formalism, are numerically solved by Neumann iterations with Double Exponential quadrature.
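
    For illustration, here is a minimal sketch of Double Exponential (tanh-sinh) quadrature, the rule the authors pair with the Neumann iterations. It uses the standard substitution x = tanh((π/2)·sinh t), whose weights decay double-exponentially and tame endpoint singularities; the step size and truncation window below are illustrative choices, not the paper's tuned values.

```python
import math

# Minimal sketch of Double Exponential (tanh-sinh) quadrature on [-1, 1]
# via x = tanh((pi/2)*sinh(t)). Step h and window t_max are illustrative.

def de_quadrature(f, h=0.1, t_max=3.0):
    total = 0.0
    n = round(t_max / h)
    for k in range(-n, n + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(u)
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(u) ** 2  # dx/dt weight
        total += f(x) * w
    return h * total

# Integrand with endpoint derivative singularities: quarter-circle area,
# the integral of sqrt(1 - x^2) over [-1, 1] equals pi/2.
print(de_quadrature(lambda x: math.sqrt(1.0 - x * x)))  # ~1.5707963 (pi/2)
```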

  4. Development of computer code SIMPSEX for simulation of FBR fuel reprocessing flowsheets: II. additional benchmarking results

    International Nuclear Information System (INIS)

    Shekhar Kumar; Koganti, S.B.

    2003-07-01

    Benchmarking and application of the computer code SIMPSEX for high-plutonium FBR flowsheets were reported in an earlier report (IGC-234). Improvements and recompilation of the code (Version 4.01, March 2003) required re-validation against the existing benchmarks as well as additional benchmark flowsheets. Improvements in the high-Pu region (Pu Aq > 30 g/L) gave better results for the 75% Pu flowsheet benchmark. Below 30 g/L Pu Aq concentration, results were identical to those from the earlier version (SIMPSEX Version 3, compiled in 1999). In addition, 13 published flowsheets were taken as additional benchmarks. Eleven of these flowsheets cover a wide range of feed concentrations, and a few of them are β-γ active runs with FBR fuels having a wide distribution of burnup and Pu ratios. A published total partitioning flowsheet using externally generated U(IV) was also simulated using SIMPSEX. SIMPSEX predictions were compared with listed predictions from conventional SEPHIS, PUMA, PUNE and PUBG. SIMPSEX results were found to be comparable to or better than the results from the above codes. In addition, recently reported UREX demo results along with AMUSE simulations are also compared with SIMPSEX predictions. The results of benchmarking SIMPSEX with these 14 benchmark flowsheets are discussed in this report. (author)

  5. Fast burner reactor benchmark results from the NEA working party on physics of plutonium recycle

    International Nuclear Information System (INIS)

    Hill, R.N.; Wade, D.C.; Palmiotti, G.

    1995-01-01

    As part of a program proposed by the OECD/NEA Working Party on Physics of Plutonium Recycling (WPPR) to evaluate different scenarios for the use of plutonium, fast reactor physics benchmarks were developed; fuel cycle scenarios using either PUREX/TRUEX (oxide fuel) or pyrometallurgical (metal fuel) separation technologies were specified. These benchmarks were designed to evaluate the nuclear performance and radiotoxicity impact of a transuranic-burning fast reactor system. International benchmark results are summarized in this paper, and key conclusions are highlighted.

  6. SUSTAINABLE SUCCESS IN HIGHER EDUCATION BY SHARING THE BEST PRACTICES AS A RESULT OF BENCHMARKING PROCESS

    Directory of Open Access Journals (Sweden)

    Anca Gabriela Ilie

    2011-11-01

    The paper proposes to review the main benchmarking criteria, based on the quality indicators used by higher education institutions, and to present new reference indicators resulting from inter-university cooperation. Once these indicators are defined, a national database can be created, and benchmarking methods can then establish the national performance level of the educational system. Going further and generalizing the process, the national educational system can be compared with the European one using the benchmarking approach. The final purpose is to establish a group of universities that come together to explore opportunities for benchmarking and sharing best practices in areas of common interest, in order to create a “quality culture” for the Romanian higher education system.

  7. Performance of exchange-correlation functionals in density functional theory calculations for liquid metal: A benchmark test for sodium

    Science.gov (United States)

    Han, Jeong-Hwan; Oda, Takuji

    2018-04-01

    The performance of exchange-correlation functionals in density-functional theory (DFT) calculations for liquid metal has not been sufficiently examined. In the present study, benchmark tests of Perdew-Burke-Ernzerhof (PBE), Armiento-Mattsson 2005 (AM05), PBE re-parameterized for solids, and local density approximation (LDA) functionals are conducted for liquid sodium. The pair correlation function, equilibrium atomic volume, bulk modulus, and relative enthalpy are evaluated at 600 K and 1000 K. Compared with the available experimental data, the errors range from -11.2% to 0.0% for the atomic volume, from -5.2% to 22.0% for the bulk modulus, and from -3.5% to 2.5% for the relative enthalpy, depending on the DFT functional. The generalized gradient approximation functionals are superior to the LDA functional, and the PBE and AM05 functionals exhibit the best performance. In addition, we assess whether the error tendency in liquid simulations is comparable to that in solid simulations. The results suggest that the performance for atomic volume and relative enthalpy is comparable between solid and liquid states, but that the performance for bulk modulus is not. These benchmark test results indicate that the results of liquid simulations are significantly dependent on the exchange-correlation functional and that the DFT functional performance in solid simulations can be used to roughly estimate the performance in liquid simulations.
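
    As an illustration of one of the benchmarked observables, below is a minimal sketch of estimating the pair correlation function g(r) by histogramming pair distances in a cubic periodic box. The snapshot, box size and bin count are hypothetical; a production estimate would average over many MD configurations.

```python
import numpy as np

# Minimal sketch of g(r) from a single snapshot under periodic boundary
# conditions (minimum-image convention). All inputs are illustrative.

def pair_correlation(positions, box, r_max, n_bins=100):
    """g(r) for N atoms in a cubic box of side `box` (positions: N x 3)."""
    n = len(positions)
    rho = n / box**3                            # number density
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)            # minimum-image displacement
        r = np.linalg.norm(d, axis=1)
        counts += np.histogram(r[r < r_max], bins=edges)[0]
    shell = (4.0 / 3.0) * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = rho * shell * n / 2.0               # ideal-gas pair counts
    return 0.5 * (edges[1:] + edges[:-1]), counts / ideal

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(200, 3))     # hypothetical snapshot
r, g = pair_correlation(pos, box=10.0, r_max=5.0)
print(np.round(g[40:50], 2))                    # ~1 for uncorrelated positions
```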

  8. Summary of FY15 results of benchmark modeling activities

    Energy Technology Data Exchange (ETDEWEB)

    Arguello, J. Guadalupe [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)]

    2015-08-01

    Sandia is participating, as a contributing partner, in the third phase of a U.S.-German "Joint Project" entitled "Comparison of current constitutive models and simulation procedures on the basis of model calculations of the thermo-mechanical behavior and healing of rock salt." The first goal of the project is to check the ability of numerical modeling tools to correctly describe the relevant deformation phenomena in rock salt under various influences. Achieving this goal will lead to increased confidence in the results of numerical simulations related to the secure storage of radioactive wastes in rock salt, thereby enhancing the acceptance of the results. These results may ultimately be used to make various assertions regarding both the stability analysis of an underground repository in salt, during the operating phase, and the long-term integrity of the geological barrier against the release of harmful substances into the biosphere, in the post-operating phase.

  9. BENCHMARKING UPGRADED HOTSPOT DOSE CALCULATIONS AGAINST MACCS2 RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Brotherton, Kevin

    2009-04-30

    The radiological consequence of interest for a documented safety analysis (DSA) is the centerline Total Effective Dose Equivalent (TEDE) incurred by the Maximally Exposed Offsite Individual (MOI) evaluated at the 95th percentile consequence level. An upgraded version of HotSpot (Version 2.07) has been developed with the capabilities to read site meteorological data and perform the necessary statistical calculations to determine the 95th percentile consequence result. These capabilities should allow HotSpot to join MACCS2 (Version 1.13.1) and GENII (Version 1.485) as radiological consequence toolbox codes in the Department of Energy (DOE) Safety Software Central Registry. Using the same meteorological data file, scenarios involving a one-curie release of Pu-239 were modeled in both HotSpot and MACCS2. Several sets of release conditions were modeled, and the results compared. In each case, input parameter specifications for each code were chosen to match one another as much as the codes would allow. The results from the two codes are in excellent agreement. Slight differences observed in results are explained by algorithm differences.

  10. Benchmark of Space Charge Simulations and Comparison with Experimental Results for High Intensity, Low Energy Accelerators

    CERN Document Server

    Cousineau, Sarah M

    2005-01-01

    Space charge effects are a major contributor to beam halo and emittance growth leading to beam loss in high intensity, low energy accelerators. As future accelerators strive towards unprecedented levels of beam intensity and beam loss control, a more comprehensive understanding of space charge effects is required. A wealth of simulation tools has been developed for modeling beams in linacs and rings, and with the growing availability of high-speed computing systems, computationally expensive problems that were inconceivable a decade ago are now being handled with relative ease. This has opened the field for realistic simulations of space charge effects, including detailed benchmarks with experimental data. A great deal of effort is being focused in this direction, and several recent benchmark studies have produced remarkably successful results. This paper reviews the achievements in space charge benchmarking in the last few years, and discusses the challenges that remain.

  11. How are functionally similar code clones syntactically different? An empirical study and a benchmark

    Directory of Open Access Journals (Sweden)

    Stefan Wagner

    2016-03-01

    Background. Today, redundancy in source code, so-called “clones” caused by copy&paste, can be found reliably using clone detection tools. However, redundancy can also arise independently, not caused by copy&paste. At present, it is not clear how purely functionally similar clones (FSC) differ from clones created by copy&paste. Our aim is to understand and categorise the syntactical differences in FSCs that distinguish them from copy&paste clones in a way that helps clone detection research. Methods. We conducted an experiment using known functionally similar programs in Java and C from coding contests. We analysed syntactic similarity with traditional detection tools and explored whether concolic clone detection can go beyond syntax. We ran all tools on 2,800 programs and manually categorised the differences in a random sample of 70 program pairs. Results. We found no FSCs where complete files were syntactically similar. We could detect syntactic similarity in a part of the files in <16% of the program pairs. Concolic detection found 1 of the FSCs. The differences between program pairs fell into the categories algorithm, data structure, OO design, I/O and libraries. We selected 58 pairs for an openly accessible benchmark representing these categories. Discussion. The majority of differences between functionally similar clones are beyond the capabilities of current clone detection approaches. Yet, our benchmark can help to drive further clone detection research.
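
    To make the notion concrete, here is a small hypothetical pair of functionally similar but syntactically different clones (our Python example; the study's corpus is Java and C). The two functions are behaviorally identical, share almost no text for a syntactic detector to match, and differ in the study's algorithm and data-structure categories.

```python
# Two functionally similar "clones" with no copy&paste lineage: same
# input/output behavior, different algorithm and data structures.

def unique_sorted_a(items):
    """Deduplicate and sort via a set and the built-in sort."""
    return sorted(set(items))

def unique_sorted_b(items):
    """Deduplicate and sort via manual list insertion."""
    result = []
    for item in items:
        if item not in result:   # linear membership test instead of a set
            result.append(item)
    result.sort()
    return result

data = [3, 1, 2, 3, 1]
assert unique_sorted_a(data) == unique_sorted_b(data) == [1, 2, 3]
```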

  12. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom; Javier Ortensi; Sonat Sen; Hans Hammer

    2013-09-01

    The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III...

  13. Benchmarking LES with wall-functions and RANS for fatigue problems in thermal–hydraulics systems

    Energy Technology Data Exchange (ETDEWEB)

    Tunstall, R., E-mail: ryan.tunstall@manchester.ac.uk [School of MACE, The University of Manchester, Manchester M13 9PL (United Kingdom)]; Laurence, D.; Prosser, R. [School of MACE, The University of Manchester, Manchester M13 9PL (United Kingdom)]; Skillen, A. [Scientific Computing Department, STFC Daresbury Laboratory, Warrington WA4 4AD (United Kingdom)]

    2016-11-15

    Highlights: • We benchmark LES with blended wall-functions and low-Re RANS for a pipe bend and T-Junction. • Blended wall-laws allow the first cell from the wall to be placed anywhere in the boundary layer. • In both cases LES predictions improve as the first cell wall spacing is reduced. • Near-wall temperature fluctuations in the T-Junction are overpredicted by wall-modelled LES. • The EBRSM outperforms other RANS models for the pipe bend. - Abstract: In assessing whether nuclear plant components such as T-Junctions are likely to suffer thermal fatigue problems in service, CFD techniques need to provide accurate predictions for wall temperature fluctuations. Though it has been established that this is within the capabilities of wall-resolved LES, its high computational cost has prevented widespread usage in industry. In the present paper the suitability of LES with blended wall-functions, that allow the first cell to be placed in any part of the boundary layer, is assessed. Numerical results for the flows through a 90° pipe bend and a T-Junction are compared against experimental data. Both test cases contain areas where equilibrium laws are violated in practice. It is shown that reducing the first cell wall spacing improves agreement with experimental data by limiting the extent from the wall in which the solution is constrained to an equilibrium law. The LES with wall-function approach consistently overpredicts the near-wall temperature fluctuations in the T-Junction, suggesting that it can be considered as a conservative approach. We also benchmark a range of low-Re RANS models. EBRSM predictions for the 90° pipe bend are in significantly better agreement with experimental data than those from the other models. There are discrepancies from all RANS models in the case of the T-Junction.
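
    For illustration, a blended wall law returns a sensible u+ for any y+, which is what lets the first grid cell sit anywhere in the boundary layer. The sketch below uses Reichardt's classic blended formula as an example; the abstract does not state which blending the authors' solver implements.

```python
import math

# Minimal sketch of a blended wall law (Reichardt's formula): smoothly
# bridges the viscous sublayer (u+ ~ y+) and the log layer. Illustrative
# only; not necessarily the blending used in the paper's LES solver.

KAPPA = 0.41  # von Karman constant

def u_plus_blended(y_plus):
    """Reichardt's law of the wall, valid from the sublayer to the log layer."""
    return (math.log(1.0 + KAPPA * y_plus) / KAPPA
            + 7.8 * (1.0 - math.exp(-y_plus / 11.0)
                     - (y_plus / 11.0) * math.exp(-y_plus / 3.0)))

for y_plus in (1.0, 5.0, 30.0, 300.0):   # first-cell positions across the layer
    print(f"y+ = {y_plus:6.1f}  ->  u+ = {u_plus_blended(y_plus):6.2f}")
```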

  14. OECD/NEA burnup credit calculational criticality benchmark Phase I-B results

    International Nuclear Information System (INIS)

    DeHart, M.D.; Parks, C.V.; Brady, M.C.

    1996-06-01

    In most countries, criticality analysis of LWR fuel stored in racks and casks has assumed that the fuel is fresh with the maximum allowable initial enrichment. This assumption has led to the design of widely spaced and/or highly poisoned storage and transport arrays. If credit is assumed for fuel burnup, initial enrichment limitations can be raised in existing systems, and more compact and economical arrays can be designed. Such reliance on the reduced reactivity of spent fuel for criticality control is referred to as burnup credit. The Burnup Credit Working Group, formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development, has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in their ability to estimate the spent fuel concentrations of most actinides. All methods agree to within 11% of the average for all fission products studied. Most deviations are less than 10%, and many are less than 5%. The exceptions are Sm-149, Sm-151, and Gd-155.

  15. OECD/NEA Burnup Credit Calculational Criticality Benchmark Phase I-B Results

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, M.D.

    1993-01-01

    Burnup credit is an ongoing technical concern for many countries that operate commercial nuclear power reactors. In a multinational cooperative effort to resolve burnup credit issues, a Burnup Credit Working Group has been formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development. This working group has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide, and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods are in agreement to within 10% in their ability to estimate the spent fuel concentrations of most actinides. All methods agree to within 11% of the average for all fission products studied. Furthermore, most deviations are less than 10%, and many are less than 5%. The exceptions are Sm-149, Sm-151, and Gd-155.

  16. OECD/NEA burnup credit calculational criticality benchmark Phase I-B results

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, M.D.; Parks, C.V. [Oak Ridge National Lab., TN (United States)]; Brady, M.C. [Sandia National Labs., Las Vegas, NV (United States)]

    1996-06-01

    In most countries, criticality analysis of LWR fuel stored in racks and casks has assumed that the fuel is fresh with the maximum allowable initial enrichment. This assumption has led to the design of widely spaced and/or highly poisoned storage and transport arrays. If credit is assumed for fuel burnup, initial enrichment limitations can be raised in existing systems, and more compact and economical arrays can be designed. Such reliance on the reduced reactivity of spent fuel for criticality control is referred to as burnup credit. The Burnup Credit Working Group, formed under the auspices of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development, has established a set of well-defined calculational benchmarks designed to study significant aspects of burnup credit computational methods. These benchmarks are intended to provide a means for the intercomparison of computer codes, methods, and data applied in spent fuel analysis. The benchmarks have been divided into multiple phases, each phase focusing on a particular feature of burnup credit analysis. This report summarizes the results and findings of the Phase I-B benchmark, which was proposed to provide a comparison of the ability of different code systems and data libraries to perform depletion analysis for the prediction of spent fuel isotopic concentrations. Results included here represent 21 different sets of calculations submitted by 16 different organizations worldwide and are based on a limited set of nuclides determined to have the most important effect on the neutron multiplication factor of light-water-reactor spent fuel. A comparison of all sets of results demonstrates that most methods agree to within 10% in their ability to estimate the spent fuel concentrations of most actinides. All methods agree to within 11% of the average for all fission products studied. Most deviations are less than 10%, and many are less than 5%. The exceptions are Sm-149, Sm-151, and Gd-155.

  17. The reactive transport benchmark proposed by GdR MoMaS: presentation and first results

    Energy Technology Data Exchange (ETDEWEB)

    Carrayrou, J. [Institut de Mecanique des Fluides et des Solides, UMR ULP-CNRS 7507, 67 - Strasbourg (France)]; Lagneau, V. [Ecole des Mines de Paris, Centre de Geosciences, 77 - Fontainebleau (France)]

    2007-07-01

    We present here the current context of reactive transport modelling and the major numerical challenges. The GdR MoMaS proposes a benchmark on reactive transport. We present this benchmark and some first results obtained for it with two reactive transport codes, HYTEC and SPECY. (authors)

  18. The reactive transport benchmark proposed by GdR MoMaS: presentation and first results

    International Nuclear Information System (INIS)

    Carrayrou, J.; Lagneau, V.

    2007-01-01

    We present here the current context of reactive transport modelling and the major numerical challenges. The GdR MoMaS proposes a benchmark on reactive transport. We present this benchmark and some first results obtained for it with two reactive transport codes, HYTEC and SPECY. (authors)

  19. A note on bound constraints handling for the IEEE CEC'05 benchmark function suite.

    Science.gov (United States)

    Liao, Tianjun; Molina, Daniel; de Oca, Marco A Montes; Stützle, Thomas

    2014-01-01

    The benchmark functions and some of the algorithms proposed for the special session on real parameter optimization of the 2005 IEEE Congress on Evolutionary Computation (CEC'05) have played and still play an important role in the assessment of the state of the art in continuous optimization. In this article, we show that if bound constraints are not enforced for the final reported solutions, state-of-the-art algorithms produce infeasible best candidate solutions for the majority of functions of the IEEE CEC'05 benchmark function suite. This occurs even though the optima of the CEC'05 functions are within the specified bounds. This phenomenon has important implications on algorithm comparisons, and therefore on algorithm designs. This article's goal is to draw the attention of the community to the fact that some authors might have drawn wrong conclusions from experiments using the CEC'05 problems.
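
    A minimal sketch of the point at issue: unless the final reported solution is explicitly repaired (or bounds are enforced during the search), the best-so-far point can lie outside the stated box even though the optimum is inside it. The bounds and the repair-by-clipping below are illustrative; the CEC'05 functions each define their own box.

```python
import numpy as np

# Minimal sketch of bound-constraint checking and repair for a reported
# solution. Bounds and candidate values are illustrative.

LOWER, UPPER = -5.0, 5.0             # hypothetical CEC'05-style box bounds

def is_feasible(x):
    """True if every component of x lies inside the box."""
    return bool(np.all((x >= LOWER) & (x <= UPPER)))

def repair(x):
    """Clamp a candidate into the feasible box before reporting it."""
    return np.clip(x, LOWER, UPPER)

x_best = np.array([4.2, -6.3, 5.9])  # infeasible best-so-far point
print(is_feasible(x_best))           # False: reporting it as-is is the pitfall
print(repair(x_best))                # [ 4.2 -5.   5. ]
```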

  20. Validation of the WIMSD4M cross-section generation code with benchmark results

    Energy Technology Data Exchange (ETDEWEB)

    Deen, J.R.; Woodruff, W.L. [Argonne National Lab., IL (United States)]; Leal, L.E. [Oak Ridge National Lab., TN (United States)]

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D₂O-moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  1. Validation of the WIMSD4M cross-section generation code with benchmark results

    International Nuclear Information System (INIS)

    Deen, J.R.; Woodruff, W.L.; Leal, L.E.

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D₂O-moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  2. Jet Substructure at the Tevatron and LHC: New results, new tools, new benchmarks

    CERN Document Server

    Altheimer, A; Asquith, L; Brooijmans, G; Butterworth, J; Campanelli, M; Chapleau, B; Cholakian, A E; Chou, J P; Dasgupta, M; Davison, A; Dolen, J; Ellis, S D; Essig, R; Fan, J J; Field, R; Fregoso, A; Gallicchio, J; Gershtein, Y; Gomes, A; Haas, A; Halkiadakis, E; Halyo, V; Hoeche, S; Hook, A; Hornig, A; Huang, P; Izaguirre, E; Jankowiak, M; Kribs, G; Krohn, D; Larkoski, A J; Lath, A; Lee, C; Lee, S J; Loch, P; Maksimovic, P; Martinez, M; Miller, D W; Plehn, T; Prokofiev, K; Rahmat, R; Rappoccio, S; Safonov, A; Salam, G P; Schumann, S; Schwartz, M D; Schwartzman, A; Seymour, M; Shao, J; Sinervo, P; Son, M; Soper, D E; Spannowsky, M; Stewart, I W; Strassler, M; Strauss, E; Takeuchi, M; Thaler, J; Thomas, S; Tweedie, B; Vasquez Sierra, R; Vermilion, C K; Villaplana, M; Vos, M; Wacker, J; Walker, D; Walsh, J R; Wang, L-T; Wilbur, S; Yavin, I; Zhu, W

    2012-01-01

    In this report we review recent theoretical progress and the latest experimental results in jet substructure from the Tevatron and the LHC. We review the status of and outlook for calculation and simulation tools for studying jet substructure. Following up on the report of the Boost 2010 workshop, we present a new set of benchmark comparisons of substructure techniques, focusing on the set of variables and grooming methods that are collectively known as "top taggers". To facilitate further exploration, we have attempted to collect, harmonise, and publish software implementations of these techniques.

  3. OECD/NEA burnup credit criticality benchmark. Result of phase IIA

    Energy Technology Data Exchange (ETDEWEB)

    Takano, Makoto; Okuno, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment]

    1996-02-01

    The report describes the final results of Phase IIA of the Burnup Credit Criticality Benchmark conducted by the OECD/NEA. In the Phase IIA benchmark problems, the effect of the axial burnup profile of PWR spent fuel on criticality (the end effect) has been studied. Axial profiles at 10, 30 and 50 GWd/t burnup have been considered. In total, 22 results from 18 institutes in 10 countries have been submitted. The calculated multiplication factors from the participants lay within a band of ±1% Δk. For irradiation up to 30 GWd/t, the end effect has been found to be less than 1.0% Δk. But for the 50 GWd/t case, the effect is more than 4.0% Δk when both actinides and FPs are taken into account, whereas it remains less than 1.0% Δk when only actinides are considered. The fission density data indicate the importance of the end regions in the criticality safety analysis of spent fuel systems. (author).

  4. Comparison of typical inelastic analysis predictions with benchmark problem experimental results

    International Nuclear Information System (INIS)

    Clinard, J.A.; Corum, J.M.; Sartory, W.K.

    1975-01-01

    The results of exemplary inelastic analyses are presented for a series of experimental benchmark problems. Consistent analytical procedures and constitutive relations were used in each of the analyses, and published material behavior data were used in all cases. Two finite-element inelastic computer programs were employed. These programs implement the analysis procedures and constitutive equations for Type 304 stainless steel that are currently used in many analyses of elevated-temperature nuclear reactor system components. The analysis procedures and constitutive relations are briefly discussed, and representative analytical results are presented and compared to the test data. The results that are presented demonstrate the feasibility of performing inelastic analyses, and they are indicative of the general level of agreement that the analyst might expect when using conventional inelastic analysis procedures. (U.S.)

  5. Comparison of typical inelastic analysis predictions with benchmark problem experimental results

    International Nuclear Information System (INIS)

    Clinard, J.A.; Corum, J.M.; Sartory, W.K.

    1975-01-01

    The results of exemplary inelastic analyses for experimental benchmark problems on reactor components are presented. Consistent analytical procedures and constitutive relations were used in each of the analyses, and the material behavior data presented in the Appendix were used in all cases. Two finite-element inelastic computer programs were employed. These programs implement the analysis procedures and constitutive equations for type 304 stainless steel that are currently used in many analyses of elevated-temperature nuclear reactor system components. The analysis procedures and constitutive relations are briefly discussed, and representative analytical results are presented and compared to the test data. The results that are presented demonstrate the feasibility of performing inelastic analyses for the types of problems discussed, and they are indicative of the general level of agreement that the analyst might expect when using conventional inelastic analysis procedures. (U.S.)

  6. Results of the Brugge benchmark study for flooding optimization and history matching

    NARCIS (Netherlands)

    Peters, E.; Arts, R.J.; Brouwer, G.K.; Geel, C.R.; Cullick, S.; Lorentzen, R.J.; Chen, Y.; Dunlop, K.N.B.; Vossepoel, F.C.; Xu, R.; Sarma, P.; Alhutali, A.H.; Reynolds, A.C.

    2010-01-01

    In preparation for the SPE Applied Technology Workshop (ATW) held in Brugge in June 2008, a unique benchmark project was organized to test the combined use of waterflooding-optimization and history-matching methods in a closed-loop workflow. The benchmark was organized in the form of an interactive...

  7. Benchmarking exchange-correlation functionals for hydrogen at high pressures using quantum Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Clay, Raymond C. [Univ. of Illinois, Urbana, IL (United States)]; Mcminis, Jeremy [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; McMahon, Jeffrey M. [Univ. of Illinois, Urbana, IL (United States)]; Pierleoni, Carlo [Istituto Nazionale di Fisica Nucleare (INFN), L'Aquila (Italy). Lab. Nazionali del Gran Sasso (INFN-LNGS)]; Ceperley, David M. [Univ. of Illinois, Urbana, IL (United States)]; Morales, Miguel A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]

    2014-05-01

    The ab initio phase diagram of dense hydrogen is very sensitive to errors in the treatment of electronic correlation. Recently, it has been shown that the choice of the density functional has a large effect on the predicted location of both the liquid-liquid phase transition and the solid insulator-to-metal transition in dense hydrogen. To identify the most accurate functional for dense hydrogen applications, we systematically benchmark some of the most commonly used functionals using quantum Monte Carlo. By considering several measures of functional accuracy, we conclude that the van der Waals and hybrid functionals significantly outperform local density approximation and Perdew-Burke-Ernzerhof. We support these conclusions by analyzing the impact of functional choice on structural optimization in the molecular solid, and on the location of the liquid-liquid phase transition.

  8. A comparison of recent results from HONDO III with the JSME nuclear shipping cask benchmark calculations

    International Nuclear Information System (INIS)

    Key, S.W.

    1985-01-01

    The results of two calculations related to the impact response of spent nuclear fuel shipping casks are compared to the benchmark results reported in a recent study by the Japan Society of Mechanical Engineers Subcommittee on Structural Analysis of Nuclear Shipping Casks. Two idealized impacts are considered. The first calculation utilizes a right circular cylinder of lead subjected to a 9.0 m free fall onto a rigid target, while the second calculation utilizes a stainless steel clad cylinder of lead subjected to the same impact conditions. For the first problem, four calculations from graphical results presented in the original study have been singled out for comparison with HONDO III. The results from DYNA3D, STEALTH, PISCES, and ABAQUS are reproduced. In the second problem, the results from four separate computer programs in the original study, ABAQUS, ANSYS, MARC, and PISCES, are used and compared with HONDO III. The current version of HONDO III contains a fully automated implementation of the explicit-explicit partitioning procedure for the central difference method time integration which results in a reduction of computational effort by a factor in excess of 5. The results reported here further support the conclusion of the original study that the explicit time integration schemes with automated time incrementation are effective and efficient techniques for computing the transient dynamic response of nuclear fuel shipping casks subject to impact loading. (orig.)

  9. Finite element model updating of the UCF grid benchmark using measured frequency response functions

    Science.gov (United States)

    Sipple, Jesse D.; Sanayei, Masoud

    2014-05-01

    A frequency response function based finite element model updating method is presented and used to perform parameter estimation of the University of Central Florida Grid Benchmark Structure. The proposed method is used to calibrate the initial finite element model using measured frequency response functions from the undamaged, intact structure. Stiffness properties, mass properties, and boundary conditions of the initial model were estimated and updated. Model updating was then performed using measured frequency response functions from the damaged structure to detect physical structural change. Grouping and ungrouping were utilized to determine the exact location and magnitude of the damage. The fixity in rotation of two boundary condition nodes was accurately and successfully estimated. The usefulness of the proposed method for finite element model updating is shown by being able to detect, locate, and quantify change in structural properties.
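
    As an illustration of the quantity being matched in FRF-based model updating, a minimal sketch follows: it builds the receptance FRF matrix H(w) = (K - w^2 M + iwC)^-1 for a small assumed spring-mass model. All matrices and values are hypothetical placeholders, not the UCF grid model or the authors' code.

    ```python
    import numpy as np

    # Receptance FRF of a linear structure: H(w) = (K - w^2 M + i w C)^-1.
    # Model updating compares such analytical FRFs against measured ones and
    # adjusts stiffness/mass/boundary parameters to close the gap.
    M = np.diag([2.0, 1.0])                    # assumed mass matrix [kg]
    K = np.array([[600.0, -200.0],
                  [-200.0,  200.0]])           # assumed stiffness matrix [N/m]
    C = 0.01 * K                               # assumed proportional damping

    def frf(omega):
        """FRF matrix at circular frequency omega [rad/s]."""
        return np.linalg.inv(K - omega**2 * M + 1j * omega * C)

    freqs = np.linspace(0.1, 30.0, 300)
    h11 = np.array([frf(w)[0, 0] for w in freqs])  # drive-point FRF
    print(f"peak |H11| = {abs(h11).max():.4e} m/N")
    ```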

  10. Criticality benchmark results for the ENDF60 library with MCNP trademark

    International Nuclear Information System (INIS)

    Keen, N.D.; Frankle, S.C.; MacFarlane, R.E.

    1995-01-01

    The continuous-energy neutron data library ENDF60, for use with the Monte Carlo N-Particle radiation transport code MCNP4A, was released in the fall of 1994. The ENDF60 library comprises 124 nuclide data files based on the ENDF/B-VI (B-VI) evaluations through Release 2. Fifty-two percent of these B-VI evaluations are translations from ENDF/B-V (B-V). The remaining forty-eight percent are new evaluations which have sometimes changed significantly. Among these changes are greatly increased use of isotopic evaluations, more extensive resonance-parameter evaluations, and energy-angle correlated distributions for secondary particles. In particular, the upper energy limit of the resolved resonance region for 235U, 238U, and 239Pu has been extended from 0.082, 4.0, and 0.301 keV to 2.25, 10.0, and 2.5 keV, respectively. As regulatory oversight has advanced and performing critical experiments has become more difficult, there has been an increased reliance on computational methods. For the criticality safety community, the performance of the combined transport code and data library is of interest. The purpose of this abstract is to provide benchmarking results to aid the user in determining the best data library for their application.
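
    For context, criticality benchmark suites of this kind are usually summarized by calculated-over-experimental (C/E) ratios of k-eff and their deviations in per cent mille (pcm). A minimal sketch of that bookkeeping follows; the case names and numbers are placeholders, not ENDF60 results.

    ```python
    # Hypothetical values for illustration only (not the ENDF60 results).
    # Each entry maps a benchmark case to (calculated k-eff, experimental k-eff).
    cases = {"case-a": (1.0002, 1.0000), "case-b": (0.9985, 1.0000)}

    for name, (k_calc, k_exp) in cases.items():
        ce = k_calc / k_exp                        # C/E ratio
        pcm = (k_calc - k_exp) / k_exp * 1e5       # deviation in pcm
        print(f"{name}: C/E = {ce:.4f} ({pcm:+.0f} pcm)")
    ```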

  11. PWR-PSMS benchmarking results using thermocouple data from the summer-1 plant

    International Nuclear Information System (INIS)

    Peng, C.M.; Ipakchi, A.; Kim, J.H.

    1986-01-01

    In large pressurized water reactor (PWR) power plants, estimating the in-core power distribution from off-line predictions is based on data from global measurements with conservative assumptions. The off-line predictions are too independent of the actual process to reflect the true state of the reactor. On-line core monitoring systems strike a balance between measurements and theoretical calculations, making better use of the information coming from measurements. A hybrid system, which incorporates measurements into predictions along with frequent model adaptations, will closely track the actual operating state of the plant. Since detailed core flux mapping is performed at long time intervals in PWRs without fixed in-core detectors, the on-line signals from thermocouples located at the top of selected fuel assemblies offer an alternative means of monitoring. The in-core thermocouples give a good indication of the average coolant temperature at the outlet of the instrumented assemblies and can potentially provide continuous information on the radial power distribution between flux maps. The PWR Power Shape Monitoring System (PWR-PSMS) has implemented this on-line monitoring feature based on thermocouple readings to evaluate core performance and to improve core monitoring. The purpose of this paper is to present the benchmark results of PWR-PSMS using thermocouple data from the Summer-1 plant, a Westinghouse PWR.

  12. A Heterogeneous Medium Analytical Benchmark

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1999-01-01

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional subcritical heterogeneous plane-parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels to coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy).
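
    For reference, the underlying problem in such benchmarks is the standard one-group, one-dimensional transport equation with anisotropic scattering expanded in Legendre polynomials (a generic textbook statement, not Ganapol's specific notation):

    ```latex
    \mu \frac{\partial \psi}{\partial x}(x,\mu) + \sigma_t(x)\,\psi(x,\mu)
      = \frac{1}{2}\sum_{l=0}^{L} (2l+1)\,\sigma_{s,l}(x)\,P_l(\mu)\,\phi_l(x),
    \qquad
    \phi_l(x) = \int_{-1}^{1} P_l(\mu')\,\psi(x,\mu')\,\mathrm{d}\mu'.
    ```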

  13. Characterization of the dynamic friction of woven fabrics: Experimental methods and benchmark results

    NARCIS (Netherlands)

    Sachs, Ulrich; Akkerman, Remko; Fetfatsidis, K.; Vidal-Sallé, E.; Schumacher, J.; Ziegmann, G.; Allaoui, S.; Hivet, G.; Maron, B.; Vanclooster, K.; Lomov, S.V.

    2014-01-01

    A benchmark exercise was conducted to compare various friction test set-ups with respect to the measured coefficients of friction. The friction was determined between Twintex®PP, a fabric of commingled yarns of glass and polypropylene filaments, and a metal surface. The same material was supplied to

  14. Characterization of mechanical behavior of woven fabrics: experimental methods and benchmark results

    NARCIS (Netherlands)

    Cao, J.; Akkerman, Remko; Boisse, P.; Chen, J.; Cheng, H.S.; de Graaf, E.F.; Gorczyca, J.L.; Harrison, P.

    2008-01-01

    Textile composites made of woven fabrics have demonstrated excellent mechanical properties for the production of high specific-strength products. Research efforts in the woven fabric sheet forming are currently at a point where benchmarking will lead to major advances in understanding both the

  15. JNC results of BFS-62-3A benchmark calculation (CRP: Phase 5)

    International Nuclear Information System (INIS)

    Ishikawa, M.

    2004-01-01

    The present work gives the results of JNC, Japan, for Phase 5 of the IAEA CRP benchmark problem (the BFS-62-3A critical experiment). The analytical method of JNC is based on the nuclear data library JENDL-3.2 and the group constant set JFS-3-J3.2R (70-group, ABBN-type self-shielding factor tables based on JENDL-3.2), with current-weighted multigroup transport cross sections used as effective cross sections. Three cell models for the BFS as-built tubes and pellets were used: (Case 1) a homogeneous model based on the IPPE definition; (Case 2) homogeneous atomic densities equivalent to JNC's heterogeneous calculation, used only to cross-check the adjusted correction factors; (Case 3) a heterogeneous model based on JNC's evaluation, a one-dimensional plate-stretch model with Tone's background cross-section method (CASUP code). The basic diffusion calculation was done in 18 groups with a three-dimensional Hex-Z model (CITATION code), using isotropic diffusion coefficients (Cases 1 and 2) and Benoist's anisotropic diffusion coefficients (Case 3). For the sodium void reactivity, the exact perturbation theory was applied to both the basic calculation and the correction calculations: an ultra-fine energy group correction (approximately 100,000 group constants below 50 keV, and ABBN-type 175-group constants with shielding factors above 50 keV), and a transport theory and mesh size correction in 18 groups for the three-dimensional Hex-Z model (the MINIHEX code, based on the S4-P0 transport method and developed by JNC). The effective delayed neutron fraction in the reactivity scale was fixed at 0.00623 per the IPPE evaluation. The analytical results for the criticality and the sodium void reactivity coefficient obtained by JNC are presented. JNC cross-checked the homogeneous model and the adjusted correction factors submitted by IPPE and confirmed that they are consistent. The JNC standard system showed quite satisfactory analytical results for the criticality and the sodium void reactivity of the BFS-62-3A experiment. JNC also calculated the cross-section sensitivity coefficients of BFS

  16. The second Austrian benchmark study for blood use in elective surgery: results and practice change.

    Science.gov (United States)

    Gombotz, Hans; Rehak, Peter H; Shander, Aryeh; Hofmann, Axel

    2014-10-01

    Five years after the first Austrian benchmark study demonstrated a relatively high transfusion rate and an abundance of nonindicated transfusions in elective surgeries, this study was conducted to investigate the effects of the first benchmark study. Data from 3164 patients undergoing primary unilateral total hip replacement (THR), primary unilateral noncemented total knee replacement (TKR), or coronary artery bypass graft (CABG) surgery at 15 orthopedic and six cardiac centers were collected and compared with the first study. Transfusion rates decreased in THR (41% to 30%) and TKR (41% to 25%), but remained unchanged in CABG surgery (57% vs. 55%) compared with the first study. More than 80% of all transfusions involved at least 2 units of red blood cells (RBCs). Marked variations were observed in transfusion rates among the centers. The prevalence of anemia was three times higher in patients who received transfusions versus those who did not; however, preoperative anemia was left untreated in the majority of patients. A considerable intercenter variability of RBC loss, ranging from 26% to 43% in THR, from 24% to 40% in TKR, and from 30% to 49% in CABG procedures, was observed. The second benchmark study demonstrates substantial intercenter variability and small but significant reductions in RBC transfusions and RBC loss. Even though the main independent predictors of transfusion were the relative lost RBC volume, followed by the relative preoperative and the lowest relative postoperative hemoglobin, preoperative anemia was not adequately treated in many patients, underscoring the importance of patient blood management in these patients. © 2014 AABB.

  17. OECD/NEA expert group on uncertainty analysis for criticality safety assessment: Results of benchmark on sensitivity calculation (phase III)

    Energy Technology Data Exchange (ETDEWEB)

    Ivanova, T.; Laville, C. [Institut de Radioprotection et de Surete Nucleaire IRSN, BP 17, 92262 Fontenay aux Roses (France); Dyrda, J. [Atomic Weapons Establishment AWE, Aldermaston, Reading, RG7 4PR (United Kingdom); Mennerdahl, D. [E Mennerdahl Systems EMS, Starvaegen 12, 18357 Taeby (Sweden); Golovko, Y.; Raskach, K.; Tsiboulia, A. [Inst. for Physics and Power Engineering IPPE, 1, Bondarenko sq., 249033 Obninsk (Russian Federation); Lee, G. S.; Woo, S. W. [Korea Inst. of Nuclear Safety KINS, 62 Gwahak-ro, Yuseong-gu, Daejeon 305-338 (Korea, Republic of); Bidaud, A.; Sabouri, P. [Laboratoire de Physique Subatomique et de Cosmologie LPSC, CNRS-IN2P3/UJF/INPG, Grenoble (France); Patel, A. [U.S. Nuclear Regulatory Commission (NRC), Washington, DC 20555-0001 (United States); Bledsoe, K.; Rearden, B. [Oak Ridge National Laboratory ORNL, M.S. 6170, P.O. Box 2008, Oak Ridge, TN 37831 (United States); Gulliford, J.; Michel-Sendis, F. [OECD/NEA, 12, Bd des Iles, 92130 Issy-les-Moulineaux (France)

    2012-07-01

    The sensitivities of the k_eff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods. (authors)

  18. Results of a 3D-EM-Code Comparison on the TRISPAL Cavity Benchmark

    CERN Document Server

    Balleyguier, P

    2004-01-01

    Several 3D electromagnetic codes (MAFIA, CST MicroWave Studio, Vector Fields Soprano, Ansoft HFSS, SLAC Omega3P) have been tested on a 2-cell cavity benchmark. Computed frequencies and Q-factors were compared to experimental values measured on a mock-up, with the emphasis placed on the effect of the coupling slots. It turns out that the MAFIA limitation due to its staircase mesh approximation is overcome by all the other codes, but some differences still remain in the loss calculations at re-entrant corners.
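
    For reference, the quality factor compared in such benchmarks is the ratio of stored energy to wall losses; for peak-amplitude time-harmonic fields it takes the standard textbook form (not specific to this paper), with surface resistance R_s and tangential magnetic field H_t on the cavity wall:

    ```latex
    Q = \frac{\omega_0\, W}{P_{\mathrm{loss}}}
      = \frac{\omega_0\,\mu_0 \int_V |\mathbf{H}|^2 \,\mathrm{d}V}
             {R_s \oint_S |\mathbf{H}_t|^2 \,\mathrm{d}S}.
    ```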

  19. Benchmarking Density Functional Theory Approaches for the Description of Symmetry-Breaking in Long Polymethine Dyes

    KAUST Repository

    Gieseking, Rebecca L.

    2016-04-25

    Long polymethines are well known experimentally to symmetry-break, which dramatically modifies their linear and nonlinear optical properties. Computational modeling could be very useful to provide insight into the symmetry-breaking process, which is not readily available experimentally; however, accurately predicting the crossover point from symmetric to symmetry-broken structures has proven challenging. Here, we benchmark the accuracy of several DFT approaches relative to CCSD(T) geometries. In particular, we compare analogous hybrid and long-range corrected (LRC) functionals to clearly show the influence of the functional exchange term. Although both hybrid and LRC functionals can be tuned to reproduce the CCSD(T) geometries, the LRC functionals perform better at reproducing the geometry evolution with chain length and provide a finite upper limit for the gas-phase crossover point; these methods also provide good agreement with the experimental crossover points for more complex polymethines in polar solvents. Using an approach based on LRC functionals, a reduction in the crossover length is found with increasing medium dielectric constant, which is related to localization of the excess charge on the end groups. Symmetry-breaking is associated with the appearance of an imaginary frequency of b2 symmetry involving a large change in the degree of bond-length alternation. Examination of the IR spectra shows that short, isolated streptocyanines have a mode at ~1200 cm^-1 involving a large change in bond-length alternation; as the polymethine length or the medium dielectric constant increases, the frequency of this mode decreases before becoming imaginary at the crossover point.
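
    The bond-length alternation (BLA) quantity central to this abstract is easy to state concretely. A minimal sketch follows, using one common definition (difference of mean odd- and even-bond lengths along the chain); the bond lengths are hypothetical placeholders, not the paper's CCSD(T) geometries.

    ```python
    import statistics as st

    # Hypothetical C-C bond lengths along a polymethine chain [angstrom].
    bonds = [1.398, 1.402, 1.396, 1.405, 1.393, 1.408]

    # BLA as the mean of odd-position bonds minus the mean of even-position
    # bonds; a symmetric (cyanine-like) structure has BLA ~ 0, while a
    # symmetry-broken one has a finite BLA.
    bla = st.mean(bonds[0::2]) - st.mean(bonds[1::2])
    print(f"BLA = {bla:.3f} angstrom")  # negative here: even bonds are longer
    ```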

  20. Comparing the Floating Point Systems, Inc. AP-190L to representative scientific computers: some benchmark results

    International Nuclear Information System (INIS)

    Brengle, T.A.; Maron, N.

    1980-01-01

    Results are presented of comparative timing tests made by running a typical FORTRAN physics simulation code on the following machines: DEC PDP-10 with KI processor; DEC PDP-10, KI processor, and FPS AP-190L; CDC 7600; and CRAY-1. Factors such as DMA overhead, code size for the AP-190L, and the relative utilization of floating point functional units for the different machines are discussed. 1 table

  1. Monitoring the referral system through benchmarking in rural Niger: an evaluation of the functional relation between health centres and the district hospital

    Directory of Open Access Journals (Sweden)

    Miyé Hamidou

    2006-04-01

    Background: The main objective of this study is to establish a benchmark for referral rates in rural Niger so as to allow interpretation of routine referral data when assessing the performance of the referral system in Niger. Methods: Strict and controlled application of existing clinical decision trees in a sample of rural health centres allowed the estimation of the corresponding need for, and characteristics of, curative referrals in rural Niger. Compliance with referral was monitored as well. Need was matched against actual referrals in 11 rural districts, and the referral patterns were registered so as to get an idea of the types of pathology referred. Results: The referral rate benchmark was set at 2.5% of patients consulting at the health centre for curative reasons. Niger's rural districts have a referral rate of less than half this benchmark. Acceptability of referrals is low for the population, which adds to the deficiencies of the referral system in Niger. Mortality because of under-referral is highest among young children. Conclusion: Referral patterns show that the present programme approach to delivering health care leaves a large amount of unmet need, for which only comprehensive first- and second-line health services can provide a proper answer. On the other hand, the benchmark suggests that well-functioning health centres can take care of the vast majority of problems patients present with.
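
    A worked example of how such a benchmark is applied in practice (all figures hypothetical, not from the study): with a 2.5% benchmark referral rate, a district's expected referrals follow directly from its consultation volume, and an observed count below half of that flags under-referral.

    ```python
    # Hypothetical district figures, for illustration only.
    consultations = 8000        # curative consultations per year
    benchmark_rate = 0.025      # 2.5% benchmark from the study

    expected = consultations * benchmark_rate   # expected referrals
    observed = 90                               # hypothetical observed count

    print(expected, observed / expected)        # -> 200.0 0.45 (under half)
    ```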

  2. Benchmark Calculations of Energetic Properties of Groups 4 and 6 Transition Metal Oxide Nanoclusters Including Comparison to Density Functional Theory

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Zongtang; Both, Johan; Li, Shenggang; Yue, Shuwen; Aprà, Edoardo; Keçeli, Murat; Wagner, Albert F.; Dixon, David A.

    2016-08-09

    The heats of formation and the normalized clustering energies (NCEs) for the group 4 and group 6 transition metal oxide (TMO) trimers and tetramers have been calculated by the Feller-Peterson-Dixon (FPD) method. The heats of formation predicted by the FPD method do not differ much from those previously derived from the NCEs at the CCSD(T)/aT level, except for the CrO3 nanoclusters. New and improved heats of formation for Cr3O9 and Cr4O12 were obtained using PW91 orbitals instead of Hartree-Fock (HF) orbitals. Diffuse functions are necessary to predict accurate heats of formation. The fluoride affinities (FAs) are calculated with the CCSD(T) method. The relative energies (REs) of different isomers, NCEs, electron affinities (EAs), and FAs of (MO2)n (M = Ti, Zr, Hf; n = 1-4) and (MO3)n (M = Cr, Mo, W; n = 1-3) clusters have been benchmarked against 55 exchange-correlation DFT functionals, including both pure and hybrid types. The absolute errors of the DFT results are mostly less than ±10 kcal/mol for the NCEs and the EAs, and less than ±15 kcal/mol for the FAs. Hybrid functionals usually perform better than pure functionals for the REs and NCEs. The performance of the two types of functionals in predicting EAs and FAs is comparable. The B1B95 and PBE1PBE functionals provide reliable energetic properties for most isomers. Long-range-corrected pure functionals usually give poor FAs. The standard deviation of the absolute error is always close to the mean error, and the probability distributions of the DFT errors are often not Gaussian (normal). The breadth of the error distribution and the maximum probability depend on the energy property and the isomer.
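
    A minimal sketch of one common definition of the normalized clustering energy used in such comparisons, NCE(n) = [n E(monomer) - E(n-mer)] / n; the energies below are placeholders, not the paper's FPD values.

    ```python
    def nce(e_monomer: float, e_cluster: float, n: int) -> float:
        """Normalized clustering energy of an n-mer (per monomer unit)."""
        return (n * e_monomer - e_cluster) / n

    # Hypothetical total energies in arbitrary units, for illustration only.
    print(nce(e_monomer=-100.0, e_cluster=-412.0, n=4))  # -> 3.0
    ```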

  3. Benchmarking of London Dispersion-Accounting Density Functional Theory Methods on Very Large Molecular Complexes.

    Science.gov (United States)

    Risthaus, Tobias; Grimme, Stefan

    2013-03-12

    A new test set (S12L) containing 12 supramolecular noncovalently bound complexes is presented and used to evaluate seven different methods to account for dispersion in DFT (DFT-D3, DFT-D2, DFT-NL, XDM, dDsC, TS-vdW, M06-L) at different basis set levels against experimental, back-corrected reference energies. This allows conclusions about the performance of each method in an explorative research setting on "real-life" problems. Most DFT methods show satisfactory performance but, due to the large size of the complexes, almost always require an explicit correction for the nonadditive Axilrod-Teller-Muto three-body dispersion interaction to get accurate results. The necessity of using a method capable of accounting for dispersion is clearly demonstrated by the fact that the two-body dispersion contributions are on the order of 20-150% of the total interaction energy. MP2 and some variants thereof are shown to be insufficient for this, while a few tested D3-corrected semiempirical MO methods perform reasonably well. Overall, we suggest the use of this benchmark set as a "sanity check" against overfitting to too-small molecular cases.
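
    The nonadditive Axilrod-Teller-Muto three-body term mentioned above has the standard triple-dipole form, shown here generically (conventions for the C9 coefficient vary between implementations), with theta_a, theta_b, theta_c the internal angles of the atom triangle ABC:

    ```latex
    E_{ABC}^{(3)} = C_9^{ABC}\,
      \frac{3\cos\theta_a \cos\theta_b \cos\theta_c + 1}
           {\left(r_{AB}\, r_{BC}\, r_{CA}\right)^{3}}.
    ```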

  4. Comparative analysis of results between CASMO, MCNP and Serpent for a suite of Benchmark problems on BWR reactors

    International Nuclear Information System (INIS)

    Xolocostli M, J. V.; Vargas E, S.; Gomez T, A. M.; Reyes F, M. del C.; Del Valle G, E.

    2014-10-01

    In this paper, the CASMO-4, MCNP6, and Serpent codes are compared on a suite of benchmark problems for BWRs. The benchmark comprises two different geometries: a fuel pin cell and a BWR fuel assembly. To facilitate the study of reactor physics, the nuclear characteristics of the fuel pin are provided in detail, such as the burnup dependence, the reactivities of selected nuclides, etc. For the fuel assembly, the presented results concern the infinite multiplication factor at different burnup steps and different void conditions. The analysis of this set of benchmark problems provides comprehensive test problems for the next generation of BWR fuels with extended burnup. It is important to note that the purpose of this comparison is to validate the methodologies used in the modeling of different operating conditions, as would arise for other BWR assemblies. The results should fall within a range with some uncertainty, regardless of the code that is used. The Escuela Superior de Fisica y Matematicas of the Instituto Politecnico Nacional (IPN, Mexico) has accumulated some experience in using Serpent, owing to the potential of this code relative to commercial codes such as CASMO and MCNP. The results obtained for the infinite multiplication factor are encouraging and motivate continuing these studies with the generation of the cross sections (XS) of a core, so that in a next step a respective nuclear data library can be constructed for use by the codes developed as part of the development project of the Mexican Analysis Platform of Nuclear Reactors, AZTLAN. (Author)

  5. RELAP5-3D Results for Phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom

    2012-06-01

    The coupling of the PHISICS code suite to the thermal-hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation), and a perturbation/mixer module. As part of the verification and validation activities, steady-state results have been obtained for Exercise 2 of Phase I of the newly defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Gas-cooled Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady-state results (e.g., solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3, using a two-group cross-section library and the RELAP5-3D model developed for Exercise 2.

  6. Analyses and results of the OECD/NEA WPNCS EGUNF benchmark phase II. Technical report; Analysen und Ergebnisse zum OECD/NEA WPNCS EGUNF Benchmark Phase II. Technischer Bericht

    Energy Technology Data Exchange (ETDEWEB)

    Hannstein, Volker; Sommer, Fabian

    2017-05-15

    The report summarizes the studies performed, and the results obtained, in the frame of the Phase II benchmarks of the Expert Group on Used Nuclear Fuel (EGUNF) of the Working Party on Nuclear Criticality Safety (WPNCS) of the Nuclear Energy Agency (NEA) of the Organisation for Economic Co-operation and Development (OECD). The studies specified within the benchmarks have been realized to their full extent. The scope of the benchmarks was the comparison of calculations for a generic BWR fuel assembly with gadolinium-containing fuel rods, performed with several computer codes and cross-section libraries by different international working groups and institutions. The computational model used allows an evaluation of the accuracy of fuel rod inventory calculations and of their influence on BWR burnup credit calculations.

  7. Benchmarking Parameter-free AMaLGaM on Functions With and Without Noise

    NARCIS (Netherlands)

    P.A.N. Bosman (Peter); J. Grahl; D. Thierens (Dirk)

    2013-01-01

    We describe a parameter-free estimation-of-distribution algorithm (EDA) called the adapted maximum-likelihood Gaussian model iterated density-estimation evolutionary algorithm (AMaLGaM-IDEA, or AMaLGaM for short) for numerical optimization. AMaLGaM is benchmarked within the 2009 black-box optimization benchmarking (BBOB) framework.

  8. The InterFrost benchmark of Thermo-Hydraulic codes for cold regions hydrology - first inter-comparison results

    Science.gov (United States)

    Grenier, Christophe; Roux, Nicolas; Anbergen, Hauke; Collier, Nathaniel; Costard, Francois; Ferrry, Michel; Frampton, Andrew; Frederick, Jennifer; Holmen, Johan; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Orgogozo, Laurent; Rivière, Agnès; Rühaak, Wolfram; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik

    2015-04-01

    The impacts of climate change in boreal regions have received considerable attention recently due to the warming trends that have been experienced in recent decades and are expected to intensify in the future. Large portions of these regions, corresponding to permafrost areas, are covered by water bodies (lakes, rivers) that interact with the surrounding permafrost. For example, the thermal state of the surrounding soil influences the energy and water budget of the surface water bodies. These water bodies also generate taliks (unfrozen zones below them) that disturb the thermal regime of the permafrost and may play a key role in the context of climate change. Recent field studies and modeling exercises indicate that a fully coupled 2D or 3D Thermo-Hydraulic (TH) approach is required to understand and model the past and future evolution of landscapes, rivers, lakes and associated groundwater systems in a changing climate. However, there is presently a paucity of 3D numerical studies of permafrost thaw and associated hydrological changes, which can be attributed partly to the difficulty of verifying multi-dimensional results produced by numerical models. Numerical approaches can only be validated against analytical solutions for the purely thermal 1D equation with phase change (e.g., Neumann, Lunardini). When it comes to the coupled TH system (coupling two highly non-linear equations), the only possible approach is to compare the results from different codes on agreed test cases and/or to use controlled experiments for validation. Such inter-code comparisons can propel discussions on how to improve code performance. A benchmark exercise was initiated in 2014 with a kick-off meeting in Paris in November. Participants from the USA, Canada, Germany, Sweden and France convened, representing altogether 13 simulation codes. The benchmark exercises consist of several test cases inspired by the existing literature (e.g. McKenzie et al., 2007) as well as new ones.
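
    The purely thermal verification cases referred to above rest on the classical Neumann-type solution of the one-phase Stefan problem, in which the thaw (or freeze) front advances as the square root of time. The standard textbook form is shown below (not the benchmark's exact parameterization), with alpha the thermal diffusivity, c the specific heat, L the latent heat, T_w the boundary temperature, and T_m the melting temperature:

    ```latex
    s(t) = 2\lambda\sqrt{\alpha t},
    \qquad
    \lambda\, e^{\lambda^{2}} \operatorname{erf}(\lambda)
      = \frac{c\,(T_w - T_m)}{L\sqrt{\pi}}.
    ```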

  9. Revised benchmarking of contact-less fingerprint scanners for forensic fingerprint detection: challenges and results for chromatic white light scanners (CWL)

    Science.gov (United States)

    Kiltz, Stefan; Leich, Marcus; Dittmann, Jana; Vielhauer, Claus; Ulrich, Michael

    2011-02-01

    Mobile contact-less fingerprint scanners can be very important tools for the forensic investigation of crime scenes. To be admissible in court, data and the collection process must adhere to rules with respect to the technology and procedures of acquisition, processing, and the conclusions drawn from that evidence. Currently, no generally accepted benchmarking methodology exists to support some of the rules regarding the localisation, acquisition and pre-processing using contact-less fingerprint scanners. Benchmarking is seen as essential for rating those devices according to their usefulness in investigating crime scenes. Our main contribution is a revised version of our extensible framework for the methodological benchmarking of contact-less fingerprint scanners, using a collection of extensible categories and items. The suggested main categories describing a contact-less fingerprint scanner are forensic country-specific legal requirements, technical properties, application-related aspects, input sensory technology, pre-processing algorithm, and tested objects and materials. Using these, it is possible to benchmark fingerprint scanners and describe the setup and the resulting data. Additionally, benchmarking profiles for different usage scenarios are defined. First results for all suggested benchmarking properties, which are presented in detail in the paper, were obtained using an industrial device (FRT MicroProf200) by conducting 18 tests on 10 different materials.

  10. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    The term benchmarking is encountered in the implementation of total quality management (TQM), termed in Indonesian "holistic quality management", because benchmarking is a tool for seeking ideas or learning from other libraries. Benchmarking is a process of systematic and continuous measurement: measuring and comparing an organization's business processes against those of others to obtain information that can help the organization improve its performance.

  11. OECD/NEA source convergence benchmark program: overview and summary of results

    International Nuclear Information System (INIS)

    Blomquist, Roger; Nouri, Ali; Armishaw, Malcolm; Jacquet, Olivier; Naito, Yoshitaka; Miyoshi, Yoshinori; Yamamoto, Toshihiro

    2003-01-01

    This paper describes the work of the OECD Nuclear Energy Agency Expert Group on Source Convergence in Criticality Safety Analysis. A set of test problems is presented, some computational results are given, and the effects of source convergence difficulties are described

  12. Benchmarking Glucose Results through Automation: The 2009 Remote Automated Laboratory System Report

    Science.gov (United States)

    Anderson, Marcy; Zito, Denise; Kongable, Gail

    2010-01-01

    Background: Hyperglycemia in the adult inpatient population remains a topic of intense study in U.S. hospitals. Most hospitals have established glycemic control programs but are unable to determine their impact. The 2009 Remote Automated Laboratory System (RALS) Report provides trends in glycemic control over 4 years at 576 U.S. hospitals to support their efforts to manage inpatient hyperglycemia. Methods: A proprietary software application feeds de-identified patient point-of-care blood glucose (POC-BG) data from the Medical Automation Systems RALS-Plus data management system to a central server. Analyses include the number of tests and the mean and median BG results for intensive care unit (ICU) and non-ICU patients, and for each hospital compared to the aggregate of the other hospitals. Results: More than 175 million BG results were extracted from 2006-2009; 25% were from the ICU. The mean range of BG results for all inpatients in 2006, 2007, 2008, and 2009 was 142.2-201.9, 145.6-201.2, 140.6-205.7, and 140.7-202.4 mg/dl, respectively. The range for ICU patients was 128-226.5, 119.5-219.8, 121.6-226.0, and 121.1-217 mg/dl, respectively. The range for non-ICU patients was 143.4-195.5, 148.6-199.8, 145.2-201.9, and 140.7-203.6 mg/dl, respectively. Hyperglycemia rates of >180 mg/dl in 2008 and 2009 were examined, as were hypoglycemia rates. Conclusions: Automated POC-BG data management software can assist in this effort. PMID:21129348

  13. Optical rotation calculated with time-dependent density functional theory: the OR45 benchmark.

    Science.gov (United States)

    Srebro, Monika; Govind, Niranjan; de Jong, Wibe A; Autschbach, Jochen

    2011-10-13

    Time-dependent density functional theory (TDDFT) computations are performed for 42 organic molecules and three transition metal complexes, with experimental molar optical rotations ranging from 2 to 2 × 10^4 deg cm^2 dmol^-1. The performance of the global hybrid functionals B3LYP, PBE0, and BHLYP, and of the range-separated functionals CAM-B3LYP and LC-PBE0 (the latter being fully long-range corrected), is investigated, and the performance of different basis sets is studied. When compared to liquid-phase experimental data, the range-separated functionals do not, on average, perform better than B3LYP and PBE0. Median relative deviations between calculations and experiment range from 25 to 29%. A basis set recently proposed for optical rotation calculations (LPol-ds) on average does not give improved results compared to aug-cc-pVDZ in TDDFT calculations with B3LYP. Individual cases are discussed in some detail, among them norbornenone, for which the LC-PBE0 functional produced an optical rotation that is close to available data from coupled-cluster calculations, but significantly smaller in magnitude than the liquid-phase experimental value. Range-separated functionals and BHLYP perform well for helicenes and helicene derivatives. Metal complexes pose a challenge to first-principles calculations of optical rotation.
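
    A minimal sketch of the error metric quoted above, the median relative deviation between calculated and experimental optical rotations; the values are placeholders, not results from the OR45 set.

    ```python
    import numpy as np

    # Hypothetical molar optical rotations [deg cm^2 dmol^-1].
    calc = np.array([120.0, -45.0, 310.0])   # computed (placeholder)
    expt = np.array([100.0, -50.0, 280.0])   # experimental (placeholder)

    # Median of |calc - expt| / |expt|, expressed in percent.
    median_rel_dev = np.median(np.abs(calc - expt) / np.abs(expt)) * 100
    print(f"median relative deviation = {median_rel_dev:.0f}%")
    ```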

  14. Results of a survey on accident and safety analysis codes, benchmarks, verification and validation methods

    International Nuclear Information System (INIS)

    Lee, A.G.; Wilkin, G.B.

    1996-03-01

    During the 'Workshop on R and D needs' at the 3rd Meeting of the International Group on Research Reactors (IGORR-III), the participants agreed that it would be useful to compile a survey of the computer codes and nuclear data libraries used in accident and safety analyses for research reactors and the methods various organizations use to verify and validate their codes and libraries. Five organizations, Atomic Energy of Canada Limited (AECL, Canada), China Institute of Atomic Energy (CIAE, People's Republic of China), Japan Atomic Energy Research Institute (JAERI, Japan), Oak Ridge National Laboratories (ORNL, USA), and Siemens (Germany) responded to the survey. The results of the survey are compiled in this report. (author) 36 refs., 3 tabs

  15. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added in order to obtain a unique selection.

  16. Calculations of the IAEA-CRP-6 Benchmark Cases by Using the ABAQUS FE Model for a Comparison with the COPA Results

    International Nuclear Information System (INIS)

    Cho, Moon-Sung; Kim, Y. M.; Lee, Y. W.; Jeong, K. C.; Kim, Y. K.; Oh, S. C.

    2006-01-01

    The fundamental design for a gas-cooled reactor relies on an understanding of the behavior of a coated particle fuel. KAERI, which has been carrying out the Korean VHTR (Very High Temperature modular gas-cooled Reactor) project since 2004, is developing a fuel performance analysis code for a VHTR named COPA (COated Particle fuel Analysis). COPA predicts the temperatures, stresses, fission gas release, and failure probabilities of a coated fuel particle under normal operating conditions. Validation of COPA during its development is realized partly by participating in the benchmark section of the international CRP-6 program led by the IAEA, which provides comprehensive benchmark problems and analysis results obtained from the CRP-6 member countries. Apart from the validation effort through CRP-6, a validation of COPA was attempted by comparing its benchmark results with the visco-elastic solutions obtained from ABAQUS calculations for the same CRP-6 TRISO coated particle benchmark problems involving creep, swelling, and pressure. The study shows the calculation results of IAEA CRP-6 benchmark cases 5 through 7, obtained using the ABAQUS FE model, for comparison with the COPA results.
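
    As a rough point of reference for the stress analyses involved, the pressure-vessel load on a thin spherical coating layer of a coated fuel particle is often first estimated with the textbook thin-shell hoop stress, before creep and swelling effects are added. This is a generic approximation, not the COPA or ABAQUS model; p is the internal gas pressure, r the shell radius, and t the layer thickness:

    ```latex
    \sigma_{\theta} = \frac{p\, r}{2\, t}.
    ```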

  17. Five- and six-electron harmonium atoms: Highly accurate electronic properties and their application to benchmarking of approximate 1-matrix functionals

    Science.gov (United States)

    Cioslowski, Jerzy; Strasburger, Krzysztof

    2018-04-01

    Electronic properties of several states of the five- and six-electron harmonium atoms are obtained from large-scale calculations employing explicitly correlated basis functions. The high accuracy of the computed energies (including their components), natural spinorbitals, and their occupation numbers makes them suitable for testing, calibration, and benchmarking of approximate formalisms of quantum chemistry and solid-state physics. In the case of the five-electron species, the availability of the new data for a wide range of the confinement strengths ω allows for confirmation and generalization of the previously reached conclusions concerning the performance of the presently known approximations for the electron-electron repulsion energy in terms of the 1-matrix that are at the heart of density matrix functional theory (DMFT). On the other hand, the properties of the three low-lying states of the six-electron harmonium atom, computed at ω = 500 and ω = 1000, uncover deficiencies of the 1-matrix functionals not revealed by previous studies. In general, the previously published assessment that the present implementations of DMFT are of poor accuracy is found to hold. Extending the present work to harmonically confined systems with even more electrons is most likely counterproductive, as the steep increase in computational cost required to maintain sufficient accuracy of the calculated properties is not expected to be matched by the benefits of additional information gathered from the resulting benchmarks.
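
    For context, the harmonium atom replaces the nuclear Coulomb attraction with a harmonic confinement of strength ω, giving (in atomic units) the well-known Hamiltonian:

    ```latex
    \hat{H} = \sum_{i=1}^{N} \left( -\tfrac{1}{2}\nabla_i^{2}
            + \tfrac{1}{2}\,\omega^{2} r_i^{2} \right)
            + \sum_{i<j} \frac{1}{r_{ij}}.
    ```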

  18. Benchmark risk analysis models

    NARCIS (Netherlands)

    Ale BJM; Golbach GAM; Goos D; Ham K; Janssen LAM; Shield SR; LSO

    2002-01-01

    A so-called benchmark exercise was initiated in which the results of five sets of tools available in the Netherlands would be compared. In the benchmark exercise, a quantified risk analysis was performed on a hypothetical, non-existent hazardous establishment located at a randomly chosen location in the Netherlands.

  19. Solutions of the Two-Dimensional Hubbard Model: Benchmarks and Results from a Wide Range of Numerical Algorithms

    Directory of Open Access Journals (Sweden)

    2015-12-01

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.
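
    For reference, the single-orbital Hubbard model studied here is defined by the standard Hamiltonian with nearest-neighbor hopping t and on-site repulsion U:

    ```latex
    H = -t \sum_{\langle i j \rangle \sigma}
          \left( c_{i\sigma}^{\dagger} c_{j\sigma} + \mathrm{h.c.} \right)
        + U \sum_{i} n_{i\uparrow} n_{i\downarrow}.
    ```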

  20. Benchmarking the CAD-based Attila discrete ordinates code with experimental data of fusion experiments and to the results of the MCNP code in simulating ITER

    International Nuclear Information System (INIS)

    Youssef, M. Z.

    2007-01-01

    Attila is a newly developed finite-element code for Sn neutron, gamma, and charged-particle transport in 3-D geometry, in which unstructured tetrahedral meshes are generated to describe complex geometry based on CAD input (SolidWorks, Pro/Engineer, etc.). In the present work we benchmark its calculational accuracy by comparing its predictions to the data measured inside two experimental mock-ups bombarded with 14 MeV neutrons. The results are also compared to those based on MCNP calculations. The experimental mock-ups simulate parts of the International Thermonuclear Experimental Reactor (ITER) in-vessel components, namely: (1) the tungsten mock-up configuration (54.3 cm x 46.8 cm x 45 cm), and (2) the ITER shielding blanket followed by the SCM region (simulated by alternating layers of SS316 and copper). In the latter configuration, a high-aspect-ratio rectangular streaming channel was introduced (to simulate streaming paths between ITER blanket modules) which ends with a rectangular cavity. The experiments on these two fusion-oriented integral experiments were performed at the Fusion Neutron Generator (FNG) facility, Frascati, Italy. In addition, the nuclear performance of the ITER MCNP 'Benchmark' CAD model has been evaluated with Attila to compare its results to those obtained with the CAD-based MCNP approach developed by several ITER participants. The objective of this paper is to compare results based on two distinct 3-D calculation tools using the same nuclear data, FENDL2.1, and the same response functions for several reaction rates measured in the ITER mock-ups, and to enhance the confidence of the international neutronics community in the Attila code and in how precisely it can quantify the nuclear field in large and complex systems such as ITER. Attila has the advantage of providing full flux-mapping visualization everywhere in a single run, whereby components subjected to excessive radiation levels and strong streaming paths can be identified.

  1. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators...

  2. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...

  3. A comparison of global optimization algorithms with standard benchmark functions and real-world applications using Energy Plus

    Energy Technology Data Exchange (ETDEWEB)

    Kamph, Jerome Henri; Robinson, Darren; Wetter, Michael

    2009-09-01

    There is an increasing interest in the use of computer algorithms to identify combinations of parameters which optimise the energy performance of buildings. For such problems, the objective function can be multi-modal and needs to be approximated numerically using building energy simulation programs. As these programs contain iterative solution algorithms, they introduce discontinuities in the numerical approximation to the objective function. Metaheuristics often work well for such problems, but their convergence to a global optimum cannot be established formally. Moreover, different algorithms tend to be suited to particular classes of optimization problems. To shed light on this issue we compared the performance of two metaheuristics, the hybrid CMA-ES/HDE and the hybrid PSO/HJ, in minimizing standard benchmark functions and real-world building energy optimization problems of varying complexity. From this we find that the CMA-ES/HDE performs well on more complex objective functions, but that the PSO/HJ more consistently identifies the global minimum for simpler objective functions. Both identified similar values in the objective functions arising from energy simulations, but with different combinations of model parameters. This may suggest that the objective function is multi-modal. The algorithms also correctly identified some non-intuitive parameter combinations that were caused by a simplified control sequence of the building energy system that does not represent actual practice, further reinforcing their utility.
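
    To make the notion of a standard benchmark function concrete, the sketch below minimizes the 2-D Rosenbrock function, a classic multi-modal-looking test case whose global minimum f(1, 1) = 0 is notoriously hard for local search. The optimizer is a plain random-restart Nelder-Mead via SciPy, chosen only for brevity; it is not the hybrid CMA-ES/HDE or PSO/HJ metaheuristics studied in the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def rosenbrock(x):
        """Standard 2-D Rosenbrock benchmark; global minimum at (1, 1)."""
        return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

    rng = np.random.default_rng(0)

    # Ten random restarts; keep the best local result found.
    best = min(
        (minimize(rosenbrock, rng.uniform(-2, 2, size=2), method="Nelder-Mead")
         for _ in range(10)),
        key=lambda r: r.fun,
    )
    print(best.x, best.fun)  # should be close to [1, 1] and 0
    ```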

  4. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  5. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  6. Preliminary Results for the OECD/NEA Time Dependent Benchmark using Rattlesnake, Rattlesnake-IQS and TDKENO

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, Mark D. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mausolff, Zander [Univ. of Florida, Gainesville, FL (United States); Weems, Zach [Univ. of Florida, Gainesville, FL (United States); Popp, Dustin [Univ. of Florida, Gainesville, FL (United States); Smith, Kristin [Univ. of Florida, Gainesville, FL (United States); Shriver, Forrest [Univ. of Florida, Gainesville, FL (United States); Goluoglu, Sedat [Univ. of Florida, Gainesville, FL (United States); Prince, Zachary [Texas A & M Univ., College Station, TX (United States); Ragusa, Jean [Texas A & M Univ., College Station, TX (United States)

    2016-08-01

    One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data have shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validation of the transient solution of Rattlesnake using other time-dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considered both two-dimensional (2D) and 3D configurations for a total of 26 different transients. All are negative reactivity insertions, typically returning to the critical state after some time.
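
    Time-dependent kinetics benchmarks of this kind generalize the familiar point-kinetics equations, shown below with six delayed-neutron precursor groups. This is the canonical textbook form, not the OECD/NEA benchmark specification itself; n is the neutron population, ρ the reactivity, β = Σβ_i the delayed fraction, Λ the generation time, and λ_i, C_i the precursor decay constants and concentrations:

    ```latex
    \frac{\mathrm{d}n}{\mathrm{d}t}
      = \frac{\rho(t)-\beta}{\Lambda}\,n(t) + \sum_{i=1}^{6}\lambda_i C_i(t),
    \qquad
    \frac{\mathrm{d}C_i}{\mathrm{d}t}
      = \frac{\beta_i}{\Lambda}\,n(t) - \lambda_i C_i(t).
    ```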

  7. Preliminary Results for the OECD/NEA Time Dependent Benchmark using Rattlesnake, Rattlesnake-IQS and TDKENO

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Mausolff, Zander; Weems, Zach; Popp, Dustin; Smith, Kristin; Shriver, Forrest; Goluoglu, Sedat; Prince, Zachary; Ragusa, Jean

    2016-01-01

    One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data have shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validation of the transient solution of Rattlesnake using other time-dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considered both two-dimensional (2D) and 3D configurations for a total of 26 different transients. All are negative reactivity insertions, typically returning to the critical state after some time.

  8. Comparative analysis of results between CASMO, MCNP and Serpent for a suite of Benchmark problems on BWR reactors; Analisis comparativo de resultados entre CASMO, MCNP y SERPENT para una suite de problemas Benchmark en reactores BWR

    Energy Technology Data Exchange (ETDEWEB)

    Xolocostli M, J. V.; Vargas E, S.; Gomez T, A. M. [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Reyes F, M. del C.; Del Valle G, E., E-mail: vicente.xolocostli@inin.gob.mx [IPN, Escuela Superior de Fisica y Matematicas, UP - Adolfo Lopez Mateos, Edif. 9, 07738 Mexico D. F. (Mexico)

    2014-10-15

    In this paper, the CASMO-4, MCNP6, and Serpent codes are compared on a suite of benchmark problems for BWRs. The benchmark comprises two different geometries: a fuel pin cell and a BWR fuel assembly. To facilitate the study of reactor physics, the nuclear characteristics of the fuel pin are provided in detail, such as the burnup dependence, the reactivities of selected nuclides, etc. For the fuel assembly, the presented results concern the infinite multiplication factor at different burnup steps and different void conditions. The analysis of this set of benchmark problems provides comprehensive test problems for the next generation of BWR fuels with extended burnup. It is important to note that the purpose of this comparison is to validate the methodologies used in the modeling of different operating conditions, as would arise for other BWR assemblies. The results should fall within a range with some uncertainty, regardless of the code that is used. The Escuela Superior de Fisica y Matematicas of the Instituto Politecnico Nacional (IPN, Mexico) has accumulated some experience in using Serpent, owing to the potential of this code relative to commercial codes such as CASMO and MCNP. The results obtained for the infinite multiplication factor are encouraging and motivate continuing these studies with the generation of the cross sections (XS) of a core, so that in a next step a respective nuclear data library can be constructed for use by the codes developed as part of the development project of the Mexican Analysis Platform of Nuclear Reactors, AZTLAN. (Author)

  9. A benchmarking study

    Directory of Open Access Journals (Sweden)

    H. Groessing

    2015-02-01

    A benchmark study for permeability measurement is presented. Past studies by other research groups, which focused on the reproducibility of 1D permeability measurements, showed high standard deviations of the obtained permeability values (25%), even though a defined test rig with required specifications was used. Within this study, the reproducibility of measurements made with capacitive in-plane permeability testing systems was benchmarked by comparing the results of two research sites using this technology. The reproducibility was compared using a glass fibre woven textile and a carbon fibre non-crimp fabric (NCF). These two material types were chosen because of the different electrical properties of glass and carbon with respect to the dielectric capacitive sensors of the permeability measurement systems. In order to determine the unsaturated permeability characteristics as a function of fibre volume content, the measurements were executed at three different fibre volume contents with five repetitions each. It was found that the stability and reproducibility of the presented in-plane permeability measurement system is very good in the case of the glass fibre woven textiles. This holds for the comparison of the repetition measurements as well as for the comparison between the two different permeameters. These positive results were confirmed by comparison with permeability values for the same textile obtained with an older-generation permeameter applying the same measurement technology. It was also shown that a correct determination of the grammage and the material density is crucial for the correct correlation of measured permeability values and fibre volume contents.

  10. Towards benchmarking citizen observatories: Features and functioning of online amateur weather networks.

    Science.gov (United States)

    Gharesifard, Mohammad; Wehn, Uta; van der Zaag, Pieter

    2017-05-15

    Crowd-sourced environmental observations are increasingly considered to have the potential to enhance the spatial and temporal resolution of current data streams from terrestrial and aerial sensors. The rapid diffusion of ICTs during the past decades has facilitated the process of data collection and sharing by the general public and has resulted in the formation of various online environmental citizen observatory networks. Online amateur weather networks are a particular example of such ICT-mediated observatories that are rooted in one of the oldest and most widely practiced citizen science activities, namely amateur weather observation. The objective of this paper is to introduce a conceptual framework that enables a systematic review of the features and functioning of these expanding networks. This is done by considering distinct dimensions, namely the geographic scope and types of participants, the network's establishment mechanism, revenue stream(s), existing communication paradigm, efforts required of data sharers, support offered by platform providers, and issues such as data accessibility, availability and quality. An in-depth understanding of these dimensions helps to analyze various dynamics such as interactions between different stakeholders, motivations to run the networks, and their sustainability. This framework is then utilized to perform a critical review of six existing online amateur weather networks based on publicly available data. The main findings of this analysis suggest that: (1) there are several key stakeholders, such as emergency services and local authorities, that are not (yet) engaged in these networks; (2) the revenue stream(s) of online amateur weather networks is one of the least discussed but arguably most important dimensions, crucial for the sustainability of these networks; and (3) all of the networks included in this study have one or more explicit modes of bi-directional communication, however, this is limited to

  11. Benchmarking Cloud Resources for HEP

    Science.gov (United States)

    Alef, M.; Cordeiro, C.; De Salvo, A.; Di Girolamo, A.; Field, L.; Giordano, D.; Guerri, M.; Schiavi, F. C.; Wiebalck, A.

    2017-10-01

    In a commercial cloud environment, exhaustive resource profiling is beneficial to cope with the intrinsic variability of the virtualised environment, allowing performance degradation to be promptly identified. In the context of its commercial cloud initiatives, CERN has acquired extensive experience in benchmarking commercial cloud resources. Ultimately, this activity provides information on the actual delivered performance of invoiced resources. In this report we discuss the experience acquired and the results collected using several fast benchmark applications adopted by the HEP community, ranging from open-source benchmarks to specific user applications and synthetic benchmarks. The workflow put in place to collect and analyse the performance metrics is also described.

  12. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated using a two-tiered process. In the first tier, a screening assessment is performed in which concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) presumed to be nonhazardous to the biota. While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of the effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
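
    In code terms, the first-tier screening described above reduces to a per-chemical threshold comparison. The sketch below uses invented benchmark and exposure values purely for illustration; it is not data from the report:

        # Hypothetical NOAEL-based benchmarks for one receptor species
        # (mg/kg/day); all values are invented for illustration.
        benchmarks = {"cadmium": 1.0, "lead": 8.0, "mercury": 0.03}

        # Hypothetical measured exposure estimates for the same species.
        measured = {"cadmium": 0.4, "lead": 12.0, "mercury": 0.01}

        # Tier 1: retain as a contaminant of potential concern (COPC)
        # any chemical whose exposure exceeds its benchmark.
        copcs = [chem for chem, dose in measured.items()
                 if dose > benchmarks.get(chem, float("inf"))]
        print("COPCs retained for further assessment:", copcs)  # ['lead']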

  13. Bent functions results and applications to cryptography

    CERN Document Server

    Tokareva, Natalia

    2015-01-01

    Bent Functions: Results and Applications to Cryptography offers a unique survey of the objects of discrete mathematics known as Boolean bent functions. As these maximally nonlinear Boolean functions and their generalizations have many theoretical and practical applications in combinatorics, coding theory, and cryptography, the text provides a detailed survey of their main results, presenting a systematic overview of their generalizations and applications, and considering open problems in the classification and systematization of bent functions. The text is appropriate for novices and advanced

  14. One-hundred-three compound band-structure benchmark of post-self-consistent spin-orbit coupling treatments in density functional theory

    Science.gov (United States)

    Huhn, William P.; Blum, Volker

    2017-08-01

    We quantify the accuracy of different non-self-consistent and self-consistent spin-orbit coupling (SOC) treatments in Kohn-Sham and hybrid density functional theory by providing a band-structure benchmark set for the valence and low-lying conduction energy bands of 103 inorganic compounds, covering chemical elements up to polonium. Reference energy band structures for the PBE density functional are obtained using the full-potential (linearized) augmented plane wave code wien2k, employing its self-consistent treatment of SOC including Dirac-type p1/2 orbitals in the basis set. We use this benchmark set to assess a computationally simpler, non-self-consistent all-electron treatment of SOC based on scalar-relativistic orbitals and numeric atom-centered orbital basis functions. For elements up to Z ≈ 50, both treatments agree virtually exactly. For the heaviest elements considered (Tl, Pb, Bi, Po), the band-structure changes due to SOC are captured with a relative deviation of 11% or less. For different density functionals (PBE versus the hybrid HSE06), we show that the effect of spin-orbit coupling is usually similar but can be dissimilar if the qualitative features of the predicted underlying scalar-relativistic band structures do not agree. All band structures considered in this work are available online via the NOMAD repository to aid future benchmark studies and methods development.

  15. Functionalized single-walled carbon nanotube-based fuel cell benchmarked against US DOE 2017 technical targets.

    Science.gov (United States)

    Jha, Neetu; Ramesh, Palanisamy; Bekyarova, Elena; Tian, Xiaojuan; Wang, Feihu; Itkis, Mikhail E; Haddon, Robert C

    2013-01-01

    Chemically modified single-walled carbon nanotubes (SWNTs) with varying degrees of functionalization were utilized for the fabrication of SWNT thin film catalyst support layers (CSLs) in polymer electrolyte membrane fuel cells (PEMFCs), suitable for benchmarking against the US DOE 2017 targets. Use of the optimum level of SWNT-COOH functionality allowed the construction of a prototype SWNT-based PEMFC with a total Pt loading of 0.06 mg(Pt)/cm², well below the value of 0.125 mg(Pt)/cm² set as the US DOE 2017 technical target for total Pt group metal (PGM) loading. This prototype PEMFC also approaches the technical target for the total Pt content per kW of power (<0.125 g(PGM)/kW) at a cell potential of 0.65 V: a value of 0.15 g(Pt)/kW was achieved at 80°C/22 psig testing conditions, which was further reduced to 0.12 g(Pt)/kW at 35 psig back pressure.

  16. PHISICS/RELAP5-3D RESULTS FOR EXERCISES II-1 AND II-2 OF THE OECD/NEA MHTGR-350 BENCHMARK

    Energy Technology Data Exchange (ETDEWEB)

    Strydom, Gerhard [Idaho National Laboratory

    2016-03-01

    The Idaho National Laboratory (INL) Advanced Reactor Technologies (ART) High-Temperature Gas-Cooled Reactor (HTGR) Methods group currently leads the Modular High-Temperature Gas-Cooled Reactor (MHTGR) 350 benchmark. The benchmark consists of a set of lattice-depletion, steady-state, and transient problems that can be used by HTGR simulation groups to assess the performance of their code suites. This paper summarizes the results obtained for the first two transient exercises defined for Phase II of the benchmark. The Parallel and Highly Innovative Simulation for INL Code System (PHISICS), coupled with the INL system code RELAP5-3D, was used to generate the results for the Depressurized Conduction Cooldown (DCC) (exercise II-1a) and Pressurized Conduction Cooldown (PCC) (exercise II-2) transients. These exercises require the time-dependent simulation of coupled neutronics and thermal-hydraulics phenomena, and utilize the steady-state solution previously obtained for exercise I-3 of Phase I. This paper also includes a comparison of the benchmark results obtained with a traditional system code “ring” model against a more detailed “block” model that includes kinetics feedback on an individual block level and thermal feedbacks on a triangular sub-mesh. The higher spatial fidelity that can be obtained by the block model is illustrated with comparisons of the maximum fuel temperatures, especially in the case of the natural convection conditions that dominate the DCC and PCC events. Differences of up to 125 K (or 10%) were observed between the ring and block model predictions of the DCC transient, mostly due to the block model's capability of tracking individual block decay powers and more detailed helium flow distributions. In general, the block model only required DCC and PCC calculation times twice as long as the ring models, and it therefore seems that the additional development and calculation time required for the block model could be worth the gain that can be

  17. PNNL Information Technology Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best, whether within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  18. Benchmark Credentialing Results for NRG-BR001: The First National Cancer Institute-Sponsored Trial of Stereotactic Body Radiation Therapy for Multiple Metastases

    Energy Technology Data Exchange (ETDEWEB)

    Al-Hallaq, Hania A., E-mail: halhallaq@radonc.uchicago.edu [Department of Radiation and Cellular Oncology, Chicago, Illinois (United States); Chmura, Steven J. [Department of Radiation and Cellular Oncology, Chicago, Illinois (United States); Salama, Joseph K. [Department of Radiation Oncology, Durham, North Carolina (United States); Lowenstein, Jessica R. [Imaging and Radiation Oncology Core Group (IROC) Houston, MD Anderson Cancer Center, Houston, Texas (United States); McNulty, Susan; Galvin, James M. [Imaging and Radiation Oncology Core Group (IROC) PHILADELPHIA RT, Philadelphia, Pennsylvania (United States); Followill, David S. [Imaging and Radiation Oncology Core Group (IROC) Houston, MD Anderson Cancer Center, Houston, Texas (United States); Robinson, Clifford G. [Department of Radiation Oncology, St Louis, Missouri (United States); Pisansky, Thomas M. [Department of Radiation Oncology, Rochester, Minnesota (United States); Winter, Kathryn A. [NRG Oncology Statistics and Data Management Center, Philadelphia, Pennsylvania (United States); White, Julia R. [Department of Radiation Oncology, Columbus, Ohio (United States); Xiao, Ying [Imaging and Radiation Oncology Core Group (IROC) PHILADELPHIA RT, Philadelphia, Pennsylvania (United States); Department of Radiation Oncology, Philadelphia, Pennsylvania (United States); Matuszak, Martha M. [Department of Radiation Oncology, Ann Arbor, Michigan (United States)

    2017-01-01

    Purpose: The NRG-BR001 trial is the first National Cancer Institute-sponsored trial to treat multiple (range 2-4) extracranial metastases with stereotactic body radiation therapy. Benchmark credentialing is required to ensure adherence to this complex protocol, in particular, for metastases in close proximity. The present report summarizes the dosimetric results and approval rates. Methods and Materials: The benchmark used anonymized data from a patient with bilateral adrenal metastases, separated by <5 cm of normal tissue. Because the planning target volume (PTV) overlaps with organs at risk (OARs), institutions must use the planning priority guidelines to balance PTV coverage (45 Gy in 3 fractions) against OAR sparing. Submitted plans were processed by the Imaging and Radiation Oncology Core and assessed by the protocol co-chairs by comparing the doses to targets, OARs, and conformity metrics using nonparametric tests. Results: Of 63 benchmarks submitted through October 2015, 94% were approved, with 51% approved at the first attempt. Most used volumetric arc therapy (VMAT) (78%), a single plan for both PTVs (90%), and prioritized the PTV over the stomach (75%). The median dose to 95% of the volume was 44.8 ± 1.0 Gy and 44.9 ± 1.0 Gy for the right and left PTV, respectively. The median dose to 0.03 cm³ was 14.2 ± 2.2 Gy to the spinal cord and 46.5 ± 3.1 Gy to the stomach. Plans that spared the stomach significantly reduced the dose to the left PTV and stomach. Conformity metrics were significantly better for single plans that simultaneously treated both PTVs with VMAT, intensity modulated radiation therapy, or 3-dimensional conformal radiation therapy compared with separate plans. No significant differences existed in the dose at 2 cm from the PTVs. Conclusions: Although most plans used VMAT, the range of conformity and dose falloff was large. The decision to prioritize either OARs or PTV coverage varied considerably, suggesting that
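
    The conformity metrics compared by the protocol co-chairs are not spelled out in this summary; as background, one widely used measure in stereotactic plan evaluation is the Paddick conformity index, where TV is the target volume, PIV the prescription isodose volume, and TV_PIV their intersection:

        CI_{Paddick} = \frac{TV_{PIV}^{2}}{TV \times PIV}

    Values close to 1 indicate that the prescription isodose surface tightly conforms to the target.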

  19. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and the NUREG-0170 methodology, while atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  20. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and the NUREG-0170 methodology, while atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  1. Efficient algorithms for mixed aleatory-epistemic uncertainty quantification with application to radiation-hardened electronics. Part I, algorithms and benchmark results.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Eldred, Michael Scott

    2009-09-01

    This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.
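
    The report's specific machinery (stochastic expansions plus interval optimization) is not reproduced here, but the structure of a mixed aleatory-epistemic analysis can be sketched as a double loop: an outer search over epistemic interval variables that bounds an inner aleatory statistic. The model function and all names below are illustrative placeholders:

        import numpy as np

        rng = np.random.default_rng(0)

        def model(x, e):
            # Placeholder response; stands in for the real simulation code.
            return np.sin(e * x) + 0.1 * x**2

        def aleatory_mean(e, n_samples=20_000):
            # Inner loop: Monte Carlo over the aleatory variable x
            # (the report uses stochastic expansions here instead).
            x = rng.normal(0.0, 1.0, n_samples)
            return model(x, e).mean()

        # Outer loop: brute-force scan of the epistemic interval [0.5, 2.0]
        # (the report uses interval optimization here instead).
        grid = np.linspace(0.5, 2.0, 64)
        stats = [aleatory_mean(e) for e in grid]
        print("epistemic bounds on the mean response:",
              min(stats), max(stats))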

  2. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions, founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from...

  3. CEC thermal-hydraulic benchmark exercise on Fiploc verification experiment F2 in Battelle model containment. Experimental phases 2, 3 and 4. Results of comparisons

    International Nuclear Information System (INIS)

    Fischer, K.; Schall, M.; Wolf, L.

    1993-01-01

    The present final report comprises the major results of Phase II of the CEC thermal-hydraulic benchmark exercise on Fiploc verification experiment F2 in the Battelle model containment, experimental phases 2, 3 and 4, which was organized and sponsored by the Commission of the European Communities for the purpose of furthering the understanding and analysis of long-term thermal-hydraulic phenomena inside containments during and after severe core accidents. This benchmark exercise received high European attention, with eight organizations from six countries participating with eight computer codes during phase 2. Altogether 18 results from computer code runs were supplied by the participants and constitute the basis for comparisons with the experimental data contained in this publication. This reflects both the high technical interest in, as well as the complexity of, this CEC exercise. Major comparison results between computations and data are reported for all important quantities relevant to containment analyses during long-term transients. These comparisons comprise pressure, steam and air content, velocities and their directions, heat transfer coefficients and saturation ratios. Agreements and disagreements are discussed for each participating code/institution, conclusions drawn and recommendations provided. The phase 2 CEC benchmark exercise provided an up-to-date state-of-the-art status review of the thermal-hydraulic capabilities of present computer codes for containment analyses. This exercise has shown that all of the participating codes can simulate the important global features of the experiment correctly, such as temperature stratification, pressure and leakage, heat transfer to structures, relative humidity, and collection of sump water. Several weaknesses of individual codes were identified, and this may help to promote their development. As a general conclusion it may be said that while there is still a wide area of necessary extensions and improvements, the

  4. HPCG Benchmark Technical Specification

    Energy Technology Data Exchange (ETDEWEB)

    Heroux, Michael Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States)

    2013-10-01

    The High Performance Conjugate Gradient (HPCG) benchmark [cite SNL, UTK reports] is a tool for ranking computer systems based on a simple additive Schwarz, symmetric Gauss-Seidel preconditioned conjugate gradient solver. HPCG is similar to the High Performance Linpack (HPL), or Top 500, benchmark [1] in its purpose, but HPCG is intended to better represent how today’s applications perform. In this paper we describe the technical details of HPCG: how it is designed and implemented, what code transformations are permitted and how to interpret and report results.
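
    HPCG itself fixes the sparse problem setup, the additive Schwarz / symmetric Gauss-Seidel preconditioner, and the reporting rules; the sketch below shows only the core preconditioned conjugate gradient iteration on a small tridiagonal SPD system, with a simple Jacobi preconditioner standing in for the benchmark's Gauss-Seidel sweep:

        import numpy as np

        def pcg(A, b, precond, tol=1e-10, max_iter=500):
            # Preconditioned conjugate gradient for a symmetric
            # positive-definite A; precond maps a residual r to z.
            x = np.zeros_like(b)
            r = b - A @ x
            z = precond(r)
            p = z.copy()
            rz = r @ z
            for k in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    return x, k + 1
                z = precond(r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x, max_iter

        # 1D Poisson-like tridiagonal test matrix; Jacobi preconditioner.
        n = 50
        A = (np.diag(2.0 * np.ones(n))
             + np.diag(-1.0 * np.ones(n - 1), 1)
             + np.diag(-1.0 * np.ones(n - 1), -1))
        b = np.ones(n)
        d = np.diag(A)
        x, iters = pcg(A, b, lambda r: r / d)
        print("iterations:", iters, "residual:", np.linalg.norm(b - A @ x))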

  5. Preliminary results of the seventh three-dimensional AER dynamic benchmark problem calculation. Solution with DYN3D and RELAP5-3D codes

    International Nuclear Information System (INIS)

    Bencik, M.; Hadek, J.

    2011-01-01

    The paper gives a brief survey of the results of the seventh three-dimensional AER dynamic benchmark calculation, obtained with the codes DYN3D and RELAP5-3D at the Nuclear Research Institute Rez. This benchmark was defined at the twentieth AER Symposium in Hanasaari (Finland). It is focused on the investigation of transient behaviour in a WWER-440 nuclear power plant. Its initiating event is the opening of the main isolation valve and re-connection of the loop with its main circulation pump in operation. The WWER-440 plant is at the end of the first fuel cycle and in hot full power conditions. Stationary and burnup calculations were performed with the code DYN3D. The transient calculation was made with the system code RELAP5-3D. The two-group homogenized cross-section library HELGD05, created by the HELIOS code, was used for the generation of the reactor core neutronic parameters. The detailed six-loop model of NPP Dukovany was adopted for the purposes of the seventh AER dynamic benchmark. The RELAP5-3D full core neutronic model was coupled with 49 core thermal-hydraulic channels and 8 reflector channels connected with the three-dimensional model of the reactor vessel. A detailed nodalization of the reactor downcomer, lower and upper plenum was used, and mixing in the lower and upper plenum was simulated. The first part of the paper contains a brief characterization of the RELAP5-3D system code and a short description of the NPP input deck and reactor core model. The second part shows the time dependencies of important global and local parameters. (Authors)

  6. Results of the Australasian (Trans-Tasman Oncology Group) radiotherapy benchmarking exercise in preparation for participation in the PORTEC-3 trial.

    Science.gov (United States)

    Jameson, Michael G; McNamara, Jo; Bailey, Michael; Metcalfe, Peter E; Holloway, Lois C; Foo, Kerwyn; Do, Viet; Mileshkin, Linda; Creutzberg, Carien L; Khaw, Pearly

    2016-08-01

    Protocol deviations in randomised controlled trials have been found to result in a significant decrease in survival and local control. In some cases, the magnitude of the detrimental effect can be larger than the anticipated benefit of the intervention. The implementation of appropriate quality assurance measures for radiotherapy in clinical trials has been found to result in fewer deviations from protocol. This paper reports on a benchmarking study conducted in preparation for the PORTEC-3 trial in Australasia. A benchmarking CT dataset was sent to each of the Australasian investigators, who were asked to contour and plan the case according to the trial protocol using their local treatment planning systems. These data were then sent back to the Trans-Tasman Oncology Group for collation and analysis. Thirty-three investigators from eighteen institutions across Australia and New Zealand took part in the study. The mean clinical target volume (CTV) was 383.4 (228.5-497.8) cm³ and the mean dose to a reference gold-standard CTV was 48.8 (46.4-50.3) Gy. Although there were some large differences in the contouring of the CTV and its constituent parts, these did not translate into large variations in dosimetry. Where individual investigators deviated from the trial contouring protocol, feedback was provided. The results of this study will be compared with those of the international QA study for the PORTEC-3 trial. © 2016 The Royal Australian and New Zealand College of Radiologists.

  7. The Weizsaecker functional: Some rigorous results

    Energy Technology Data Exchange (ETDEWEB)

    Romera, E.; Dehesa, J.S.; Yanez, R.J. [Universidad de Granada (Spain)

    1995-12-05

    The Weizsäcker functional T_W is a necessary element to explain basic physical and chemical phenomena of atomic and molecular systems in the general density functional theory initiated by Hohenberg and Kohn. Here, rigorous inequalities which involve the functional T_W and two arbitrary power-type density functionals ω_α = ∫ ρ^α(r) dr are found by successive applications of the Sobolev and Hölder inequalities. Particular cases of these inequalities give lower bounds to the Weizsäcker functional of an N-electron system in terms of fundamental and/or experimentally measurable quantities such as, e.g., the Thomas-Fermi kinetic energy T_TF, the Dirac-Slater exchange energy K_0 and the average electronic density ⟨ρ⟩; in doing so, some known relationships appear. A numerical Hartree-Fock study of the accuracy of some of the resulting lower bounds is carried out. Finally, rigorous relationships between the Weizsäcker functional and the Boltzmann-Shannon information entropy of the system under consideration are given. 28 refs., 2 figs.
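
    For reference (the abstract uses these objects without restating them), the Weizsäcker kinetic-energy functional and the power-type density functionals have the standard forms, in Hartree atomic units:

        T_W[\rho] = \frac{1}{8} \int \frac{|\nabla \rho(\mathbf{r})|^{2}}{\rho(\mathbf{r})} \, d\mathbf{r},
        \qquad
        \omega_{\alpha}[\rho] = \int \rho^{\alpha}(\mathbf{r}) \, d\mathbf{r}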

  8. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Wegner, P.; Wettig, T.

    2003-09-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC. (orig.)

  9. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Stueben, H.; Wegner, P.; Wettig, T.; Wittig, H.

    2004-01-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC

  10. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy; in other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and to meet social needs: prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc.) sense. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades, the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth. Throughout

  11. Comparison of the PHISICS/RELAP5-3D Ring and Block Model Results for Phase I of the OECD MHTGR-350 Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard Strydom

    2014-04-01

    The INL PHISICS code system consists of three modules providing improved core simulation capability: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. Coupling of the PHISICS code suite to the thermal-hydraulics system code RELAP5-3D has recently been finalized, and as part of the code verification and validation program the exercises defined for Phase I of the OECD/NEA MHTGR-350 MW Benchmark were completed. This paper provides an overview of the MHTGR Benchmark and presents selected results of the three steady-state exercises (1-3) defined for Phase I. For Exercise 1, a stand-alone steady-state neutronics solution for an End of Equilibrium Cycle Modular High Temperature Reactor (MHTGR) was calculated with INSTANT, using the provided geometry, material descriptions, and detailed cross-section libraries. Exercise 2 required the modeling of a stand-alone thermal-fluids solution. The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 combined the first two exercises in a coupled neutronics and thermal-fluids solution, and the coupled code suite PHISICS/RELAP5-3D was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of the traditional RELAP5-3D “ring” model approach against a much more detailed “block” model that includes kinetics feedback at the individual block level and thermal feedbacks on a triangular sub-mesh. The higher fidelity of the block model is illustrated with comparison results on the temperature, power density and flux distributions, and the typical under-predictions produced by the ring model approach are highlighted.

  12. WWER-1000 Burnup Credit Benchmark (CB5)

    International Nuclear Information System (INIS)

    Manolova, M.A.

    2002-01-01

    In the paper the specification of the first phase (depletion calculations) of the WWER-1000 Burnup Credit Benchmark (CB5) is given. The second phase, criticality calculations for the WWER-1000 fuel pin cell, will be specified after the evaluation of the results obtained in the first phase. The proposed benchmark is a continuation of the WWER benchmark activities in this field. (Author)

  13. Effects of Secondary Circuit Modeling on Results of Pressurized Water Reactor Main Steam Line Break Benchmark Calculations with New Coupled Code TRAB-3D/SMABRE

    International Nuclear Information System (INIS)

    Daavittila, Antti; Haemaelaeinen, Anitta; Kyrki-Rajamaeki, Riitta

    2003-01-01

    All three exercises of the Organization for Economic Cooperation and Development/Nuclear Regulatory Commission pressurized water reactor main steam line break (PWR MSLB) benchmark were calculated at VTT, the Technical Research Centre of Finland. The first exercise, the plant simulation with point-kinetics neutronics, was calculated with the thermal-hydraulics code SMABRE. The second exercise was calculated with the three-dimensional reactor dynamics code TRAB-3D, and the third exercise with the combination TRAB-3D/SMABRE. VTT has over ten years' experience of coupling neutronic and thermal-hydraulic codes, but this benchmark was the first time these two codes, both developed at VTT, were coupled together. The coupled code system is fast and efficient; the total computation time of the 100-s transient in the third exercise was 16 min on a modern UNIX workstation. The results of all the exercises are similar to those of the other participants. In order to demonstrate the effect of secondary circuit modeling on the results, three different cases were calculated. In case 1 there is no phase separation in the steam lines and no flow reversal in the aspirator. In case 2 flow reversal in the aspirator is allowed, but there is no phase separation in the steam lines. Finally, in case 3 the drift-flux model is used for the phase separation in the steam lines, but aspirator flow reversal is not allowed. With these two modeling variations it is possible to cover a remarkably broad range of results: the maximum power level reached after the reactor trip varies from 534 to 904 MW, with the times of the power maxima spanning a range of close to 30 s. Compared to the total calculated transient time of 100 s, the effect of the secondary-side modeling is extremely important.

  14. Benchmarking of energy time series

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, M.A.

    1990-04-01

    Benchmarking consists of the adjustment of time series data from one source in order to achieve agreement with similar data from a second source. The data from the latter source are referred to as the benchmark(s), and often differ in that they are observed at a lower frequency, represent a higher level of temporal aggregation, and/or are considered to be of greater accuracy. This report provides an extensive survey of benchmarking procedures which have appeared in the statistical literature, and reviews specific benchmarking procedures currently used by the Energy Information Administration (EIA). The literature survey includes a technical summary of the major benchmarking methods and their statistical properties. Factors influencing the choice and application of particular techniques are described and the impact of benchmark accuracy is discussed. EIA applications and procedures are reviewed and evaluated for residential natural gas deliveries series and coal production series. It is found that the current method of adjusting the natural gas series is consistent with the behavior of the series and the methods used in obtaining the initial data. As a result, no change is recommended. For the coal production series, a staged approach based on a first differencing technique is recommended over the current procedure. A comparison of the adjustments produced by the two methods is made for the 1987 Indiana coal production series. 32 refs., 5 figs., 1 tab.
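
    As a concrete, deliberately simplified illustration of benchmarking, the sketch below applies a pro-rata adjustment that scales a monthly series so it sums to a more accurate annual benchmark; the staged first-difference approach recommended in the report instead distributes the adjustment so that month-to-month revisions are smooth. All values are invented:

        import numpy as np

        # Invented monthly series (one year) and an annual benchmark total
        # assumed to come from a more accurate, lower-frequency source.
        monthly = np.array([10.2, 9.8, 11.1, 10.5, 9.9, 10.0,
                            10.4, 10.8, 11.0, 10.1, 9.7, 10.3])
        annual_benchmark = 130.0

        # Pro-rata benchmarking: one common scale factor forces agreement
        # between the monthly series and the annual benchmark.
        factor = annual_benchmark / monthly.sum()
        adjusted = monthly * factor

        print("scale factor:", round(factor, 4))
        print("adjusted annual total:", adjusted.sum())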

  15. Interim report on the result of the sodium boiling detection benchmark test using BOR-60 reactor noise data

    International Nuclear Information System (INIS)

    Shinohara, Y.; Watanabe, K.; Hayashi, K.

    1989-01-01

    The present paper deals with the second stage of investigations of acoustic signals from a boiling experiment performed on the KNS I loop at KfK Karlsruhe and with first results of the analysis of data from a series of boiling experiments carried out in the BOR 60 reactor in the USSR. Signals have been analysed in the frequency as well as in the time domain. Signal characteristics successfully used to detect the boiling process have been found in the time domain. A proposal for in-service boiling monitoring by acoustic means is briefly described. (author). 1 ref., 8 figs, 1 tab

  16. Benchmark results and theoretical treatments for valence-to-core x-ray emission spectroscopy in transition metal compounds

    Energy Technology Data Exchange (ETDEWEB)

    Mortensen, D. R.; Seidler, G. T.; Kas, Joshua J.; Govind, Niranjan; Schwartz, Craig P.; Pemmaraju, Sri; Prendergast, David G.

    2017-09-01

    We report measurements of the valence-to-core (VTC) region of the K-shell x-ray emission spectra from several Zn and Fe inorganic compounds, and their critical comparison with several existing theoretical treatments. We find generally good agreement between the respective theories and experiment, and in particular find an important admixture of dipole and quadrupole character for Zn materials that is much weaker in Fe-based systems. These results on materials whose simple crystal structures should not, a priori, pose deep challenges to theory will prove useful in guiding the further development of DFT and time-dependent DFT methods for VTC-XES predictions and their comparison to experiment.

  17. An IAEA multi-technique X-ray spectrometry endstation at Elettra Sincrotrone Trieste: benchmarking results and interdisciplinary applications.

    Science.gov (United States)

    Karydas, Andreas Germanos; Czyzycki, Mateusz; Leani, Juan José; Migliori, Alessandro; Osan, Janos; Bogovac, Mladen; Wrobel, Pawel; Vakula, Nikita; Padilla-Alvarez, Roman; Menk, Ralf Hendrik; Gol, Maryam Ghahremani; Antonelli, Matias; Tiwari, Manoj K; Caliri, Claudia; Vogel-Mikuš, Katarina; Darby, Iain; Kaiser, Ralf Bernd

    2018-01-01

    The International Atomic Energy Agency (IAEA) jointly with the Elettra Sincrotrone Trieste (EST) operates a multipurpose X-ray spectrometry endstation at the X-ray Fluorescence beamline (10.1L). The facility has been available to external users since the beginning of 2015 through the peer-review process of EST. Using this collaboration framework, the IAEA supports and promotes synchrotron-radiation-based research and training activities for various research groups from the IAEA Member States, especially those who have limited previous experience and resources to access a synchrotron radiation facility. This paper aims to provide a broad overview about various analytical capabilities, intrinsic features and performance figures of the IAEA X-ray spectrometry endstation through the measured results. The IAEA-EST endstation works with monochromatic X-rays in the energy range 3.7-14 keV for the Elettra storage ring operating at 2.0 or 2.4 GeV electron energy. It offers a combination of different advanced analytical probes, e.g. X-ray reflectivity, X-ray absorption fine-structure measurements, grazing-incidence X-ray fluorescence measurements, using different excitation and detection geometries, and thereby supports a comprehensive characterization for different kinds of nanostructured and bulk materials.

  18. [Functional results and treatment of functional dysfunctions after radical prostatectomy].

    Science.gov (United States)

    Salomon, L; Droupy, S; Yiou, R; Soulié, M

    2015-11-01

    To describe the functional results and the treatment of functional dysfunctions after radical prostatectomy for localized prostate cancer. A bibliography search was performed in the Medline database (National Library of Medicine, Pubmed) and articles were selected according to their scientific relevance. The research focused on continence, potency, erectile dysfunction, couple sexuality, incontinence, treatments of postoperative incontinence, and the trifecta. Radical prostatectomy is an elaborate and challenging procedure in which carcinological risk must be balanced against functional results. Despite recent developments in surgical techniques, post-radical prostatectomy urinary incontinence (pRP-UI) continues to be one of the most devastating complications, affecting 9-16% of patients. Sphincter injury and bladder dysfunction are the most common causes of pRP-UI. The assessment of the severity of pRP-UI, which affects the choice of treatment, is still not well standardized but should include at least a pad test and self-administered questionnaires. The implantation of an artificial urinary sphincter AMS800 remains the gold-standard treatment for patients with moderate to severe pRP-UI. The development of less invasive techniques such as the male sling or Pro-ACT balloons has provided alternative therapeutic options for moderate and mild forms of pRP-UI. Most groups now consider the bulbo-urethral compressive sling the treatment of choice for patients with non-severe pRP-UI. The most appropriate second-line therapeutic strategy is not clearly determined. Recent therapies such as adjustable artificial urinary sphincters, slings and stem cell injections have been investigated. Maintenance of a satisfying sex life is a major concern of a majority of men facing prostate cancer and its treatments. It is essential to assess the couple's sexuality before treating prostate cancer in order to deliver comprehensive information and consider early therapeutic solutions adapted to the couple

  19. Byblos Speech Recognition Benchmark Results

    National Research Council Canada - National Science Library

    Kubala, F; Austin, S; Barry, C; Makhoul, J; Placeway, P; Schwartz, R

    1991-01-01

    .... Surprisingly, the 12-speaker model performs as well as the one made from 109 speakers. Also within the RM domain, we demonstrate that state-of-the-art SI models perform poorly for speakers with strong dialects...

  20. Comparison and validation of HEU and LEU modeling results to HEU experimental benchmark data for the Massachusetts Institute of Technology MITR reactor.

    Energy Technology Data Exchange (ETDEWEB)

    Newton, T. H.; Wilson, E. H; Bergeron, A.; Horelik, N.; Stevens, J. (Nuclear Engineering Division); (MIT Nuclear Reactor Lab.)

    2011-03-02

    The Massachusetts Institute of Technology Reactor (MITR-II) is a research reactor in Cambridge, Massachusetts designed primarily for experiments using neutron beam and in-core irradiation facilities. It delivers a neutron flux comparable to current LWR power reactors in a compact 6 MW core using Highly Enriched Uranium (HEU) fuel. In the framework of its non-proliferation policies, the international community presently aims to minimize the amount of nuclear material available that could be used for nuclear weapons. In this geopolitical context, most research and test reactors, both domestic and international, have started a program of conversion to the use of Low Enriched Uranium (LEU) fuel. A new type of LEU fuel based on an alloy of uranium and molybdenum (UMo) is expected to allow the conversion of U.S. domestic high-performance reactors like the MITR-II. Towards this goal, comparisons of MCNP5 Monte Carlo neutronic modeling results for HEU and LEU cores have been performed. Validation of the model has been based upon comparison to HEU experimental benchmark data for the MITR-II. The objective of this work was to demonstrate a model which could reproduce the experimental HEU data and could therefore provide a basis to demonstrate LEU core performance. This report presents an overview of the MITR-II model geometry and material definitions, which have been verified and updated as required during the course of validation to represent the specifications of the MITR-II reactor. Results of calculations are presented for comparisons to historical HEU start-up data from 1975-1976, and to other experimental benchmark data available for the MITR-II reactor through 2009. This report also presents results of a steady-state neutronic analysis of an all-fresh LEU-fueled core. Where possible, HEU and LEU calculations were performed for conditions equivalent to the HEU experiments, which serves as a starting point for safety analyses for conversion of MITR-II from the use of HEU to LEU fuel.

  1. Benchmark Analysis for Condition Monitoring Test Techniques of Aged Low Voltage Cables in Nuclear Power Plants. Final Results of a Coordinated Research Project

    International Nuclear Information System (INIS)

    2017-10-01

    This publication provides information and guidelines on how to monitor the performance of insulation and jacket materials of existing cables and establish a programme of cable degradation monitoring and ageing management for operating reactors and the next generation of nuclear facilities. This research was done through a coordinated research project (CRP) with participants from 17 Member States. This group of experts compiled the current knowledge in a report, together with areas of future research and development, covering ageing mechanisms and means to identify and manage the consequences of ageing. They established a benchmarking programme using cable samples aged under thermal and/or radiation conditions and tested before and after ageing by various methods and organizations. In particular, 12 types of cable insulation or jacket material were tested, each using 14 different condition monitoring techniques. Several condition monitoring techniques yielded usable and traceable results: techniques such as elongation at break, indenter modulus, oxidation induction time and oxidation induction temperature were found to work reasonably well for degradation trending of all materials. However, other condition monitoring techniques, such as insulation resistance, were only partially successful on some cables, and other methods like ultrasonic or Tan δ were either unsuccessful or failed to provide reliable information to qualify the method for degradation trending or ageing assessment of cables. The electrical in situ tests did not show great promise for cable degradation trending or ageing assessment, although these methods are known to be very effective for finding and locating faults in cable insulation material. In particular, electrical methods such as insulation resistance and reflectometry techniques are known to be rather effective for locating insulation damage, hot spots or other faults in essentially all cable types. The advantage of electrical methods is that they can be

  2. Void reactivity coefficient benchmark results for a 10 x 10 BWR assembly in the full 0-100% void fraction range

    Energy Technology Data Exchange (ETDEWEB)

    Jatuff, F., E-mail: cecilemathis@hotmail.com; Perret, G.; Murphy, M.F. [Laboratory for Reactor Physics and Systems Behaviour, Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Giust, F. [Nordostschweizerische Kraftwerke AG, Parkstrasse 23, CH-5401 Baden (Switzerland); Ecole Polytechnique Federal de Lausanne, CH-1015 Lausanne (Switzerland); Chawla, R. [Laboratory for Reactor Physics and Systems Behaviour, Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); Ecole Polytechnique Federal de Lausanne, CH-1015 Lausanne (Switzerland)

    2009-06-15

    A boiling water reactor SVEA-96+ fresh fuel lattice has been used as the basis for a benchmark study of the void reactivity coefficient at assembly level in the full voidage range. Results have been obtained using the deterministic codes CASMO-4, HELIOS, PHOENIX, BOXER and the probabilistic code MCNP4C, combined in almost all cases with different cross-section libraries. A statistical analysis of the results showed that the void reactivity coefficient tends to become less negative beyond 80% void and that the discrepancies between codes tend to increase from less than 15% at voidages lower than 40% to more than 25% at voidages higher than 70%. The void reactivity coefficient results and the corresponding differences between codes were isotopically decomposed to interpret the discrepancies. The isotopic decomposition shows that the minimum observed in the void reactivity coefficient between 80% and 90% void is largely due to the decrease in the relative importance of the ¹⁵⁷Gd(n,γ) rate with increasing voidage, and that the fundamental discrepancies between codes or libraries are mainly governed by the different predictions of the ²³⁸U(n,γ) variation with voidage.
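
    As background for the quantity being compared (the definition below is the conventional one and is not quoted from the paper), the void reactivity coefficient over a voidage interval can be written in terms of the multiplication factors k_1, k_2 computed at two void fractions alpha_1 < alpha_2:

        \alpha_V = \frac{\rho_2 - \rho_1}{\alpha_2 - \alpha_1},
        \qquad
        \rho_i = \frac{k_i - 1}{k_i}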

  3. Existence results for functional differential inclusions

    Directory of Open Access Journals (Sweden)

    Mouffak Benchohra

    2001-06-01

    In this note we investigate the existence of solutions to functional differential inclusions on compact intervals. We use the fixed point theorem introduced by Covitz and Nadler for contraction multi-valued maps.

  4. Functional results after treatment for rectal cancer

    Directory of Open Access Journals (Sweden)

    Katrine Jossing Emmertsen

    2014-01-01

    Introduction: With improving survival of rectal cancer, functional outcome has become increasingly important. Following sphincter-preserving resection many patients suffer from severe bowel dysfunction with an impact on quality of life (QoL), referred to as low anterior resection syndrome (LARS). Study objective: To provide an overview of the current knowledge of LARS regarding symptomatology, occurrence, risk factors, pathophysiology, evaluation instruments and treatment options. Results: LARS is characterized by urgency, frequent bowel movements, emptying difficulties and incontinence, and occurs in up to 50-75% of patients on a long-term basis. Known risk factors are low anastomosis, use of radiotherapy, direct nerve injury and straight anastomosis. The pathophysiology seems to be multifactorial, with elements of anatomical, sensory and motility dysfunction. Use of validated instruments for evaluation of LARS is essential. Currently, there is a lack of evidence for treatment of LARS. Yet, transanal irrigation and sacral nerve stimulation are promising. Conclusion: LARS is a common problem following sphincter-preserving resection. All patients should be informed about the risk of LARS before surgery, and routinely be screened for LARS postoperatively. Patients with severe LARS should be offered treatment in order to improve QoL. Future focus should be on the possibilities of non-resectional treatment in order to prevent LARS.

  5. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  6. Benchmarking as a strategy policy tool for energy management

    NARCIS (Netherlands)

    Rienstra, S.A.; Nijkamp, P.

    2002-01-01

    In this paper we analyse to what extent benchmarking is a valuable tool in strategic energy policy analysis. First, the theory on benchmarking is concisely presented, e.g., by discussing the benchmark wheel and the benchmark path. Next, some results of surveys among business firms are presented. To

  7. Atlas-based functional radiosurgery: Early results

    Energy Technology Data Exchange (ETDEWEB)

    Stancanello, J.; Romanelli, P.; Pantelis, E.; Sebastiano, F.; Modugno, N. [Politecnico di Milano, Bioengineering Department and NEARlab, Milano, 20133 (Italy) and Siemens AG, Research and Clinical Collaborations, Erlangen, 91052 (Germany); Functional Neurosurgery Deptartment, Neuromed IRCCS, Pozzilli, 86077 (Italy); CyberKnife Center, Iatropolis, Athens, 15231 (Greece); Functional Neurosurgery Deptartment, Neuromed IRCCS, Pozzilli, 86077 (Italy)

    2009-02-15

    Functional disorders of the brain, such as dystonia and neuropathic pain, may respond poorly to medical therapy. Deep brain stimulation (DBS) of the globus pallidus pars interna (GPi) and the centromedian nucleus of the thalamus (CMN) may alleviate dystonia and neuropathic pain, respectively. A noninvasive alternative to DBS is radiosurgical ablation [internal pallidotomy (IP) and medial thalamotomy (MT)]. The main technical limitation of radiosurgery is that targets are selected only on the basis of MRI anatomy, without electrophysiological confirmation. This means that, to be feasible, image-based targeting must be highly accurate and reproducible. Here, we report on the feasibility of an atlas-based approach to targeting for functional radiosurgery. In this method, masks of the GPi, CMN, and medio-dorsal nucleus were nonrigidly registered to patients' T1-weighted MRI (T1w-MRI) and superimposed on patients' T2-weighted MRI (T2w-MRI). Radiosurgical targets were identified on the T2w-MRI registered to the planning CT by an expert functional neurosurgeon. To assess its feasibility, two patients were treated with the CyberKnife using this method of targeting; a patient with dystonia received an IP (120 Gy prescribed to the 65% isodose) and a patient with neuropathic pain received an MT (120 Gy to the 77% isodose). Six months after treatment, T2w-MRIs and contrast-enhanced T1w-MRIs showed edematous regions around the lesions; target placements were reevaluated by DW-MRIs. At 12 months post-treatment, steroids for radiation-induced edema and medications for dystonia and neuropathic pain had been discontinued. Both patients experienced significant relief from pain and dystonia-related problems. Fifteen months after treatment the edema had disappeared. Thus, this work shows the promising feasibility of atlas-based functional radiosurgery for improving patient condition. Further investigations are indicated to optimize the treatment dose.

  8. Atlas-based functional radiosurgery: Early results

    International Nuclear Information System (INIS)

    Stancanello, J.; Romanelli, P.; Pantelis, E.; Sebastiano, F.; Modugno, N.

    2009-01-01

    Functional disorders of the brain, such as dystonia and neuropathic pain, may respond poorly to medical therapy. Deep brain stimulation (DBS) of the globus pallidus pars interna (GPi) and the centromedian nucleus of the thalamus (CMN) may alleviate dystonia and neuropathic pain, respectively. A noninvasive alternative to DBS is radiosurgical ablation [internal pallidotomy (IP) and medial thalamotomy (MT)]. The main technical limitation of radiosurgery is that targets are selected only on the basis of MRI anatomy, without electrophysiological confirmation. This means that, to be feasible, image-based targeting must be highly accurate and reproducible. Here, we report on the feasibility of an atlas-based approach to targeting for functional radiosurgery. In this method, masks of the GPi, CMN, and medio-dorsal nucleus were nonrigidly registered to patients' T1-weighted MRI (T1w-MRI) and superimposed on patients' T2-weighted MRI (T2w-MRI). Radiosurgical targets were identified on the T2w-MRI registered to the planning CT by an expert functional neurosurgeon. To assess its feasibility, two patients were treated with the CyberKnife using this method of targeting; a patient with dystonia received an IP (120 Gy prescribed to the 65% isodose) and a patient with neuropathic pain received an MT (120 Gy to the 77% isodose). Six months after treatment, T2w-MRIs and contrast-enhanced T1w-MRIs showed edematous regions around the lesions; target placements were reevaluated by DW-MRIs. At 12 months post-treatment, steroids for radiation-induced edema and medications for dystonia and neuropathic pain had been discontinued. Both patients experienced significant relief from pain and dystonia-related problems. Fifteen months after treatment, the edema had disappeared. Thus, this work shows the promising feasibility of atlas-based functional radiosurgery to improve patient condition. Further investigations are indicated for optimizing treatment dose.

  9. Density functional for van der Waals forces accounts for hydrogen bond in benchmark set of water hexamers

    DEFF Research Database (Denmark)

    Kelkkanen, Kari André; Lundqvist, Bengt; Nørskov, Jens Kehlet

    2009-01-01

    A recent extensive study has investigated how various exchange-correlation (XC) functionals treat hydrogen bonds in water hexamers and has shown traditional generalized gradient approximation and hybrid functionals used in density-functional (DF) theory to give the wrong dissociation-energy trend ... of low-lying isomers, and van der Waals (vdW) dispersion forces to give key contributions to the dissociation energy. The question raised is whether functionals that incorporate vdW forces implicitly into the XC functional predict the correct lowest-energy structure for the water hexamer and yield accurate...

  10. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other. The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  11. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    Full Text Available In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool in the search for a point of reference necessary to assess an institution's competitive position and to learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study indicates the premises for using benchmarking in HEIs and contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects, relating them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature, and the author's experience from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived from the conducted analysis.

  12. Benchmarking of AREVA BWR FDIC-PEZOG model against first BFE3 cycle 15 application of On-Line NobleChem results

    International Nuclear Information System (INIS)

    Pop, M.G.; Lamanna, L.S.; Hoornik, A.; Storey, G.C.; Lemons, J.F.

    2015-01-01

    The combination of AREVA's BWR FDIC-PEZOG tools allows the calculation of the total liftoff as a measure of fuel performance and a risk indicator for fuel reliability. The AREVA BWR FDIC tool is a crud modeling tool. The PEZOG tool models the platinum-enhanced zirconium oxide growth of fuel cladding exposed to platinum during operation. The continuous effort to improve these tools, which are used for the total liftoff calculations, is illustrated by their benchmarking after the application of On-Line NobleChem™ at TVA Browns Ferry Unit 3 during Cycle 15. A set of runs using the modified FDIC-PEZOG model and actual plant water chemistry for Cycle 15, plus partial data for Cycle 16, was performed. The updated deposit thickness and deposit composition predictions for EOC15 were compared to the measured EOC15 data and are presented in this paper. The updated predicted deposit thickness matched the actual, measured value exactly. Predicted deposit composition near the fuel rod boundary, nearer to the bulk reactor water, and as a deposit average compared extremely well with the measured data at EOC15. The updated AREVA methodology resulted in lower fuel oxide thickness predictions over the life of the fuel compared to the initial evaluations for BFE3 by incorporating more recent experimental data on the thermal conductivity of zirconia; unnecessary conservatism in the prediction of the fuel oxide thickness over the life of the fuel was removed in the improved model. (authors)

  13. BWR stability analysis: methodology of the stability analysis and results of PSI for the NEA/NCR benchmark task; SWR Stabilitaetsanalyse: Methodik der Stabilitaetsanalyse und PSI-Ergebnisse zur NEA/NCR Benchmarkaufgabe

    Energy Technology Data Exchange (ETDEWEB)

    Hennig, D.; Nechvatal, L. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1996-09-01

    The report describes the PSI stability analysis methodology and the validation of this methodology based on the international OECD/NEA BWR stability benchmark task. In the frame of this work, the stability properties of some operation points of the NPP Ringhals 1 have been analysed and compared with the experimental results. (author) figs., tabs., 45 refs.

  14. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  15. FLOWTRAN-TF code benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G.P. (ed.)

    1990-12-01

    FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss Of Coolant Accident (LOCA). A description of the code is given by Flach et al. (1990). This report provides benchmarking results for the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit (Smith et al., 1990a; 1990b). Individual constitutive relations are benchmarked in Sections 2 through 5 while in Sections 6 and 7 integral code benchmarking results are presented. An overall assessment of FLOWTRAN-TF for its intended use in computing the ECS power limit completes the document.

  16. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  17. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.

    1991-01-01

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems

  18. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views it as important...

  19. Selected examples on multi physics researches at KFKI AEKI-results for phase I of the OECD/NEA UAM benchmark

    International Nuclear Information System (INIS)

    Panka, I.; Kereszturi, A.; Maraczy, C.

    2010-01-01

    Nowadays, there is a tendency to use best-estimate-plus-uncertainty methods in the field of nuclear energy. This implies the application of best-estimate code systems and the determination of the corresponding uncertainties. For the latter, an OECD benchmark was set up. The objective of the OECD/NEA Uncertainty Analysis in Best-Estimate Modeling (UAM) LWR benchmark is to determine the uncertainties of the coupled reactor physics/thermal hydraulics LWR calculations at all stages. In this paper, the AEKI participation in Phase I is presented. This phase deals with evaluating the uncertainties of the neutronic calculations, from the pin-cell spectral calculations up to the stand-alone neutronics core simulations. (Authors)
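
    For context, Phase I-type neutronics uncertainty analyses commonly propagate nuclear data covariances to a computed response through first-order sensitivity theory. A generic sketch of this "sandwich rule" (our notation, not the benchmark specification):

        \[
        \operatorname{var}(R) \;\approx\; S\,C_{\alpha}\,S^{\mathsf{T}},
        \qquad
        S_i \;=\; \frac{\partial R}{\partial \alpha_i},
        \]

    where R is a response of interest (e.g., k-eff or a pin power), the alpha_i are the nuclear data parameters, and C_alpha is their covariance matrix.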

  20. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  1. The Drill Down Benchmark

    NARCIS (Netherlands)

    P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel

    1998-01-01

    Data Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications. It

  2. Internet Based Benchmarking

    OpenAIRE

    Bogetoft, Peter; Nielsen, Kurt

    2002-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as non-parametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore alternative improvement strategies. Implementations of both a parametric and a non-parametric model are presented.
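
    To make the non-parametric side concrete, an input-oriented CCR (DEA) efficiency score can be computed with a small linear program. This is a generic textbook sketch in Python, not the authors' implementation; all names and the toy data are ours.

        import numpy as np
        from scipy.optimize import linprog

        def dea_ccr_efficiency(X, Y, unit):
            """Input-oriented CCR efficiency of one decision-making unit.

            X: inputs, shape (m_inputs, n_units); Y: outputs, shape (s_outputs, n_units).
            Decision variables are [theta, lambda_1, ..., lambda_n]; minimize theta.
            """
            m, n = X.shape
            s = Y.shape[0]
            c = np.r_[1.0, np.zeros(n)]                # objective: min theta
            A_in = np.hstack([-X[:, [unit]], X])       # X @ lam <= theta * x_unit
            A_out = np.hstack([np.zeros((s, 1)), -Y])  # Y @ lam >= y_unit
            res = linprog(c,
                          A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[np.zeros(m), -Y[:, unit]],
                          bounds=[(0, None)] * (n + 1),
                          method="highs")
            return res.x[0]                            # efficiency in (0, 1]

        # Toy data: 2 inputs, 1 output, 4 units; efficient units score 1.0.
        X = np.array([[2.0, 3.0, 6.0, 4.0],
                      [3.0, 1.0, 2.0, 5.0]])
        Y = np.array([[1.0, 1.0, 2.0, 1.0]])
        print([round(dea_ccr_efficiency(X, Y, j), 3) for j in range(4)])

    A web front end would wrap exactly this kind of solve: the user's own inputs and outputs are appended as an extra unit, and the resulting theta and the peer weights lambda are returned as the benchmark and improvement potential.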

  3. SSI and structural benchmarks

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Miller, C.A.; Costantino, C.J.; Graves, H.

    1987-01-01

    This paper presents the latest results of the ongoing program entitled Standard Problems for Structural Computer Codes, currently being worked on at BNL for the USNRC, Office of Nuclear Regulatory Research. During FY 1986, efforts were focussed on three tasks, namely, (1) an investigation of ground water effects on the response of Category I structures, (2) the Soil-Structure Interaction Workshop and (3) studies on structural benchmarks associated with Category I structures. The objective of the studies on ground water effects is to verify the applicability and the limitations of the SSI methods currently used by the industry in performing seismic evaluations of nuclear plants which are located at sites with high water tables. In a previous study by BNL (NUREG/CR-4588), it was concluded that pore water can significantly influence the soil-structure interaction process. This result, however, is based on the assumption of fully saturated soil profiles. Consequently, the work was further extended to include cases associated with variable water table depths. In this paper, results related to cut-off depths beyond which the pore water effects can be ignored in seismic calculations are addressed. Comprehensive numerical data are given for soil configurations typical of those encountered at nuclear plant sites. These data were generated using a modified version of the SLAM code, which is capable of handling problems related to the dynamic response of saturated soils. Further, the paper presents some key aspects of the Soil-Structure Interaction Workshop (NUREG/CP-0054), which was held in Bethesda, MD, on June 1, 1986. Finally, recent efforts related to the task on structural benchmarks are described

  4. Shielding benchmark test

    International Nuclear Information System (INIS)

    Kawai, Masayoshi

    1984-01-01

    Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments for neutron transmission through an iron block, performed at KFK using a Cf-252 neutron source and at ORNL using a collimated neutron beam from a reactor. The analyses are made with RADHEAT-V4, a shielding analysis code system developed at JAERI. The calculated results are compared with the measured data. For the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within an accuracy of 33% for the off-center geometry. The d-t neutron transmission measurements through a carbon sphere made at LLNL are also analyzed preliminarily, using the revised JENDL data for fusion neutronics calculations. (author)

  5. Benchmarking Ortec ISOTOPIC measurements and calculations

    International Nuclear Information System (INIS)

    This paper describes eight compiled benchmark tests conducted to probe, and to demonstrate, the extensive utility of the Ortec ISOTOPIC gamma-ray analysis software program. The paper describes tests of the program's capability to apply finite-geometry correction factors and sample-matrix-container photon absorption correction factors. Favorable results are obtained in all benchmark tests. (author)
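
    As background on the kind of correction involved, a self-absorption factor for a uniform slab-like sample can be sketched as follows (our illustration of the standard formula, not ISOTOPIC's internal algorithm; the numerical values are made up):

        import math

        def slab_self_attenuation_correction(mu_cm: float, t_cm: float) -> float:
            """Factor correcting a measured count rate for photon self-absorption.

            For a uniform slab source of thickness t and linear attenuation
            coefficient mu, the mean transmission is T = (1 - exp(-mu*t)) / (mu*t),
            and the correction multiplies the measured rate by 1/T.
            """
            x = mu_cm * t_cm
            if x < 1e-9:                 # optically thin: no correction needed
                return 1.0
            transmission = (1.0 - math.exp(-x)) / x
            return 1.0 / transmission

        # Illustrative only: mu = 0.2 /cm, 3 cm thick sample -> ~33% correction.
        print(slab_self_attenuation_correction(0.2, 3.0))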

  6. The Linked Data Benchmark Council Project

    NARCIS (Netherlands)

    P.A. Boncz (Peter); I. Fundulaki; A. Gubichev (Andrey); J. Larriba-Pey (Josep); T. Neumann (Thomas)

    2013-01-01

    Despite the fast growth and increasing popularity, the broad field of RDF and Graph database systems lacks an independent authority for developing benchmarks, and for neutrally assessing benchmark results through industry-strength auditing which would allow one to quantify and compare the

  7. First 5 tower WIMP-search results from the Cryogenic Dark Matter Search with improved understanding of neutron backgrounds and benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Hennings-Yeomans, Raul [Case Western Reserve Univ., Cleveland, OH (United States)

    2009-02-01

    Non-baryonic dark matter makes up one quarter of the energy density of the Universe and is concentrated in the halos of galaxies, including the Milky Way. The Weakly Interacting Massive Particle (WIMP) is a dark matter candidate with a scattering cross section with an atomic nucleus of the order of the weak interaction and a mass comparable to that of an atomic nucleus. The Cryogenic Dark Matter Search (CDMS-II) experiment, using Ge and Si cryogenic particle detectors at the Soudan Underground Laboratory, aims to directly detect nuclear recoils from WIMP interactions. This thesis presents the first 5 tower WIMP-search results from CDMS-II, an estimate of the cosmogenic neutron backgrounds expected at the Soudan Underground Laboratory, and a proposal for a new measurement of high-energy neutrons underground to benchmark the Monte Carlo simulations. Based on the non-observation of WIMPs and using standard assumptions about the galactic halo [68], the 90% C.L. upper limit on the spin-independent WIMP-nucleon cross section for the first 5 tower run is 6.6 × 10⁻⁴⁴ cm² for a 60 GeV/c² WIMP mass. A combined limit using all the data taken at Soudan results in an upper limit of 4.6 × 10⁻⁴⁴ cm² at 90% C.L. for a 60 GeV/c² WIMP mass. This new limit corresponds to a factor of ~3 improvement over any previous CDMS-II limit and, above 60 GeV/c², is a factor of ~2 better than any other WIMP search to date. This thesis presents an estimation, based on Monte Carlo simulations, of the nuclear recoils produced by cosmic-ray muons and their secondaries (at the Soudan site) for a 5 tower Ge and Si configuration as well as for a 7 supertower array. The results of the Monte Carlo are that CDMS-II should expect 0.06 ± 0.02 (stat) +0.18/−0.02 (syst) unvetoed single nuclear recoils in Ge per kg-year for the 5 tower configuration, and 0.05 ± 0.01 (stat) +0.15/−0.02 (syst) per kg-year for the 7 supertower configuration. The systematic error is based on the available

  8. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

    Shielding benchmark problems prepared by the Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by the Shielding Laboratory at the Japan Atomic Energy Research Institute. Fourteen new shielding benchmark problems are presented in addition to the twenty-one problems proposed previously, for evaluating the calculational algorithms and accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method, and for evaluating the nuclear data used in those codes. The present benchmark problems principally address the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  9. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This dataset compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings....

  10. On Big Data Benchmarking

    OpenAIRE

    Han, Rui; Lu, Xiaoyi

    2014-01-01

    Big data systems address the challenges of capturing, storing, managing, analyzing, and visualizing big data. Within this context, developing benchmarks to evaluate and compare big data systems has become an active topic for both research and industry communities. To date, most of the state-of-the-art big data benchmarks are designed for specific types of systems. Based on our experience, however, we argue that considering the complexity, diversity, and rapid evolution of big data systems, fo...

  11. Geothermal Heat Pump Benchmarking Report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  12. EVA Health and Human Performance Benchmarking Study

    Science.gov (United States)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as shirtsleeves, using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits, and different suit configurations (e.g., varied pressure, mass, CG), may be reliably compared in subsequent tests. Results will also inform fitness-for-duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  13. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-08-01

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional Test and Maintenance (T&M) procedures, with the aim of assessing the probability of test-induced failures, the probability of failures remaining unrevealed, and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient, with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used, to obtain insight into the causes and magnitude of the variability observed in the results, to try to identify preferred human reliability assessment approaches, and to get an understanding of the current state of the art in the field, identifying the limitations that are still inherent in the different approaches

  14. VVER-1000 MOX core computational benchmark

    International Nuclear Information System (INIS)

    2006-01-01

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities includes benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  15. Approximation results for neural network operators activated by sigmoidal functions.

    Science.gov (United States)

    Costarelli, Danilo; Spigler, Renato

    2013-08-01

    In this paper, we study pointwise and uniform convergence, as well as the order of approximation, for a family of linear positive neural network operators activated by certain sigmoidal functions. Only the case of functions of one variable is considered, but it can be expected that our results can be generalized to handle multivariate functions as well. Our approach allows us to extend previously existing results. The order of approximation is studied for functions belonging to suitable Lipschitz classes, using a moment-type approach. The special cases of neural network operators activated by logistic, hyperbolic tangent, and ramp sigmoidal functions are considered. In particular, we show that for C¹ functions, the order of approximation obtained here for our operators with logistic and hyperbolic tangent functions is higher than that established in some previous papers. The case of quasi-interpolation operators constructed with sigmoidal functions is also considered. Copyright © 2013 Elsevier Ltd. All rights reserved.
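
    For orientation, operators of this family typically take the following normalized form (a sketch in our own notation; exact domains and constants vary across the literature):

        \[
        F_n(f, x) \;=\;
        \frac{\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} f\!\left(\frac{k}{n}\right)\,\phi_{\sigma}(nx - k)}
             {\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor} \phi_{\sigma}(nx - k)},
        \qquad x \in [a, b],
        \]

    where the kernel \(\phi_{\sigma}(x) = \frac{1}{2}\,[\sigma(x+1) - \sigma(x-1)]\) is built from a sigmoidal function \(\sigma\), such as the logistic \(\sigma(x) = (1 + e^{-x})^{-1}\); uniform convergence \(F_n(f) \to f\) holds for continuous \(f\), with rates on Lipschitz classes obtained via moment bounds on the kernel.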

  16. Benchmarking in the globalised world and its impact on South ...

    African Journals Online (AJOL)

    In order to understand the potential impact of international benchmarking on South African institutions, it is important to explore the future role of benchmarking on the international level. In this regard, examples of transnational benchmarking activities will be considered. As a result of the involvement of South African ...

  17. The development of code benchmarks

    International Nuclear Information System (INIS)

    Glass, R.E.

    1986-01-01

    Sandia National Laboratories has undertaken a code benchmarking effort to define a series of cask-like problems having both numerical solutions and experimental data. The development of the benchmarks includes: (1) model problem definition, (2) code intercomparison, and (3) experimental verification. The first two steps are complete and a series of experiments are planned. The experiments will examine the elastic/plastic behavior of cylinders for both the end and side impacts resulting from a nine meter drop. The cylinders will be made from stainless steel and aluminum to give a range of plastic deformations. This paper presents the results of analyses simulating the model's behavior using materials properties for stainless steel and aluminum

  18. Workshop: Monte Carlo computational performance benchmark - Contributions

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.; Petrovic, B.; Martin, W.R.; Sutton, T.; Leppaenen, J.; Forget, B.; Romano, P.; Siegel, A.; Hoogenboom, E.; Wang, K.; Li, Z.; She, D.; Liang, J.; Xu, Q.; Qiu, Y.; Yu, J.; Sun, J.; Fan, X.; Yu, G.; Bernard, F.; Cochet, B.; Jinaphanh, A.; Jacquet, O.; Van der Marck, S.; Tramm, J.; Felker, K.; Smith, K.; Horelik, N.; Capellan, N.; Herman, B.

    2013-01-01

    This series of slides is divided into 3 parts. The first part is dedicated to the presentation of the Monte-Carlo computational performance benchmark (aims, specifications and results). This benchmark aims at performing a full-size Monte Carlo simulation of a PWR core with axial and pin-power distribution. Many different Monte Carlo codes have been used and their results have been compared in terms of computed values and processing speeds. It appears that local power values mostly agree quite well. The first part also includes the presentations of about 10 participants in which they detail their calculations. In the second part, an extension of the benchmark is proposed in order to simulate a more realistic reactor core (for instance non-uniform temperature) and to assess feedback coefficients due to change of some parameters. The third part deals with another benchmark, the BEAVRS benchmark (Benchmark for Evaluation And Validation of Reactor Simulations). BEAVRS is also a full-core PWR benchmark for Monte Carlo simulations

  19. Benchmark assessment of density functional methods on group II-VI MX (M = Zn, Cd; X = S, Se, Te) quantum dots

    NARCIS (Netherlands)

    Azpiroz, Jon M.; Ugalde, Jesus M.; Infante, Ivan

    2014-01-01

    In this work, we build a benchmark data set of geometrical parameters, vibrational normal modes, and low-lying excitation energies for MX quantum dots, with M = Cd, Zn, and X = S, Se, Te. The reference database has been constructed by ab initio resolution-of-identity second-order approximate coupled

  20. Virtual machine performance benchmarking.

    Science.gov (United States)

    Langer, Steve G; French, Todd

    2011-10-01

    The attractions of virtual computing are many: reduced costs, reduced resources and simplified maintenance. Any one of these would be compelling for a medical imaging professional attempting to support a complex practice on limited resources in an era of ever-tightening reimbursement. In particular, the ability to run multiple operating systems optimized for different tasks (computational image processing on Linux versus office tasks on Microsoft operating systems) on a single physical machine is compelling. However, there are also potential drawbacks. High-performance requirements need to be carefully considered if they are to be met in an environment where the running software has to execute through multiple layers of device drivers before reaching the real disk or network interface. Our lab has attempted to gain insight into the impact of virtualization on performance by benchmarking the following metrics on both physical and virtual platforms: local memory and disk bandwidth, network bandwidth, and integer and floating point performance. The virtual performance metrics are compared to baseline performance on "bare metal." The results are complex, and indeed somewhat surprising.
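
    For a flavor of the metrics involved, a minimal micro-benchmark can be sketched in a few lines of Python (a rough illustration, not the authors' instrument; real benchmark suites control many more variables, and absolute numbers depend heavily on the host and hypervisor):

        import os
        import tempfile
        import time
        import numpy as np

        def bench(label, fn, repeats=5):
            # Best-of-N wall-clock timing to reduce scheduler noise.
            best = float("inf")
            for _ in range(repeats):
                t0 = time.perf_counter()
                fn()
                best = min(best, time.perf_counter() - t0)
            print(f"{label}: {best:.4f} s")

        a = np.random.rand(1500, 1500)
        bench("floating point (matmul)", lambda: a @ a)

        ints = np.random.randint(0, 10**6, size=20_000_000)
        bench("integer (sum)", lambda: int(ints.sum()))

        buf = np.zeros(100_000_000, dtype=np.uint8)      # ~100 MB buffer
        bench("memory bandwidth (copy)", lambda: buf.copy())

        def disk_write():
            with tempfile.NamedTemporaryFile() as f:
                f.write(buf.tobytes())
                f.flush()
                os.fsync(f.fileno())
        bench("local disk (write + fsync)", disk_write)

    Running the same script on bare metal and inside the guest gives a first-order view of the virtualization overhead for each metric.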

  1. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    and professional performance but only if prior professional performance was low. Supplemental analyses support the robustness of our results. Findings indicate conditions under which bureaucratic benchmarking information may affect professional performance and advance research on professional control and social...

  2. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks, R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem

  3. IAEA sodium void reactivity benchmark calculations

    International Nuclear Information System (INIS)

    Hill, R.N.; Finck, P.J.

    1992-01-01

    In this paper, the IAEA 1992 'Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core' problem is evaluated. The proposed design is a large axially heterogeneous oxide-fueled fast reactor, as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculation methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented, and in Section 5, the influence of steel and interstitial sodium heterogeneity effects is estimated
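
    As a reminder of the quantity being benchmarked (standard reactor physics notation, not taken from the paper): the sodium void reactivity worth is the change in static reactivity between the voided and reference core states,

        \[
        \Delta\rho_{\mathrm{Na}} \;=\; \rho_{\mathrm{void}} - \rho_{\mathrm{ref}}
        \;=\; \frac{1}{k_{\mathrm{ref}}} - \frac{1}{k_{\mathrm{void}}},
        \qquad
        \rho \;=\; \frac{k - 1}{k},
        \]

    so a positive worth means that voiding adds reactivity; a sodium plenum above the core is intended to push this worth down by enhancing axial neutron leakage in the voided state.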

  4. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions ... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be close to irreversible...

  5. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  6. BENCHMARKING LEARNER EDUCATION USING ONLINE BUSINESS SIMULATION

    Directory of Open Access Journals (Sweden)

    Alfred H. Miller

    2016-06-01

    Full Text Available For programmatic accreditation by the Accreditation Council of Business Schools and Programs (ACBSP), business programs are required to meet STANDARD #4, Measurement and Analysis of Student Learning and Performance. Business units must demonstrate that outcome assessment systems are in place, using documented evidence that shows how the results are being used to further develop or improve the academic business program. The Higher Colleges of Technology, a 17-campus federal university in the United Arab Emirates, differentiates its applied degree programs through a 'learning by doing' ethos, which permeates the entire curricula. This paper documents benchmarking of education for managing innovation. Using business simulation with Bachelor of Business Year 3 learners in a business strategy class, learners explored, through a simulated environment, the following functional areas: research and development, production, and marketing of a technology product. Student teams were required to use finite resources and compete against other student teams in the same universe. The study employed an instrument developed in a 60-sample pilot study of business simulation learners, against which subsequent learners participating in online business simulation could be benchmarked. The results showed incremental improvement in the program due to changes made in assessment strategies, including the oral defense.

  7. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's reports on benchmarking electric and hybrid electric vehicle technology provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  8. Reactor fuel depletion benchmark of TINDER

    International Nuclear Information System (INIS)

    Martin, W.J.; Oliveira, C.R.E. de; Hecht, A.A.

    2014-01-01

    Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO₂ PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work
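
    To illustrate the predictor-corrector idea in depletion generally (a schematic Python sketch with a made-up two-nuclide chain, not TINDER's implementation): nuclide densities obey dN/dt = A(state) N, and the corrector re-evaluates the burnup matrix at the predicted end-of-step state.

        import numpy as np
        from scipy.linalg import expm

        def depletion_step(N, A_of_state, dt):
            """One predictor-corrector step for dN/dt = A(N) N."""
            A0 = A_of_state(N)                        # beginning-of-step matrix
            N_pred = expm(A0 * dt) @ N                # predictor
            A1 = A_of_state(N_pred)                   # matrix at predicted state
            return expm(0.5 * (A0 + A1) * dt) @ N     # corrector: averaged matrix

        def A_of_state(N):
            # Toy chain: capture on nuclide 0 feeds nuclide 1, which decays;
            # the effective reaction rate has a crude dependence on the state.
            sigma_phi = 1e-4 / (1.0 + 1e-3 * N[1])
            lam = 3e-5
            return np.array([[-sigma_phi, 0.0],
                             [sigma_phi, -lam]])

        N = np.array([100.0, 0.0])
        for _ in range(10):
            N = depletion_step(N, A_of_state, dt=3600.0)
        print(N)

    Production codes replace the dense matrix exponential with methods suited to stiff, very large burnup matrices, but the predictor-corrector structure is the same.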

  9. Benchmarking Danish Vocational Education and Training Programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    This study paper discusses methods whereby Danish vocational education and training colleges can be benchmarked, and presents results from a number of models. It is conceptually complicated to benchmark vocational colleges, as the various colleges in Denmark offer a wide range of course programmes ... attempt to summarise the various effects that the colleges have in two relevant figures, namely retention rates of students and employment rates among students who have completed training programmes.

  10. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    takes into account that the available positions of the moving objects are inaccurate, an aspect largely ignored in previous indexing research. The concepts of data and query enlargement are introduced for addressing inaccuracy. As proof of concepts of the benchmark, the paper covers the application...

  11. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic follows from the concept of firm efficiency: firm efficiency means revealed performance, i.e., how well the firm performs in the actual market environment, given the basic characteristics of the firm and its market that are expected to drive profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality, and work organization; other factors can be a cause even if they are not directly observed by the researcher. Managers critically need to improve their company's efficiency and effectiveness continuously, and to know the success factors and competitiveness determinants; these, in turn, determine which performance measures are most critical to the firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent due to operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.

  12. Benchmarking of workplace performance

    NARCIS (Netherlands)

    van der Voordt, Theo; Jensen, Per Anker

    2017-01-01

    This paper aims to present a process model of value adding corporate real estate and facilities management and to discuss which indicators can be used to measure and benchmark workplace performance.

    In order to add value to the organisation, the work environment has to provide value for

  13. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
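
    To make the evaluation approach concrete, here is a generic sketch of the pattern (a toy vocabulary of our own, not the project's actual ontology or queries): annotations are loaded as RDF and a metric is computed with SPARQL, so no task-specific code is needed.

        from rdflib import Graph, Namespace

        EX = Namespace("http://example.org/")

        # Toy gold-standard and system annotations, together in one graph.
        ttl = """
        @prefix ex: <http://example.org/> .
        ex:doc1 ex:goldMutation "p.V600E" , "p.R132H" .
        ex:doc1 ex:sysMutation  "p.V600E" , "p.G12D" .
        """
        g = Graph()
        g.parse(data=ttl, format="turtle")

        # True positives: system annotations that also appear as gold annotations.
        tp = len(list(g.query(
            "SELECT ?d ?m WHERE { ?d ex:sysMutation ?m . ?d ex:goldMutation ?m . }",
            initNs={"ex": EX})))
        sys_total = len(list(g.query(
            "SELECT ?d ?m WHERE { ?d ex:sysMutation ?m . }",
            initNs={"ex": EX})))
        print(f"precision = {tp}/{sys_total} = {tp / sys_total:.2f}")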

  14. Subordination and superordination results for analytic functions with ...

    African Journals Online (AJOL)

    ... some integrals and obtaining the results of the type of Cartan's uniqueness theorem. In this paper, we solve some differential subordinations and superordinations involving analytic functions with respect to the symmetric points and also derive some sandwich results under certain assumptions on the parameters involved.
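
    For readers outside the area, the underlying notion is standard (textbook material, not specific to this paper): an analytic function f on the unit disk is subordinate to g, written f ≺ g, when

        \[
        f(z) \;=\; g(\omega(z)), \qquad z \in \mathbb{D},
        \]

    for some analytic self-map \(\omega\) of \(\mathbb{D}\) with \(\omega(0) = 0\); a "sandwich" theorem then gives conditions under which \(q_1 \prec f \prec q_2\) for prescribed functions \(q_1\) and \(q_2\).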

  15. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece rates systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking...

  16. Benchmarking infrastructure for mutation text mining.

    Science.gov (United States)

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.

  17. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  18. Functional Fitness Testing Results Following Long-Duration ISS Missions.

    Science.gov (United States)

    Laughlin, Mitzi S; Guilliams, Mark E; Nieschwitz, Bruce A; Hoellen, David

    2015-12-01

    Long-duration spaceflight missions lead to the loss of muscle strength and endurance. Significant reduction in muscle function can be hazardous when returning from spaceflight. To document these losses, NASA developed medical requirements that include measures of functional strength and endurance. Results from this Functional Fitness Test (FFT) battery are also used to evaluate the effectiveness of in-flight exercise countermeasures. The purpose of this paper is to document results from the FFT and correlate this information with performance of in-flight exercise on board the International Space Station. The FFT evaluates muscular strength and endurance, flexibility, and agility and includes the following eight measures: sit and reach, cone agility, push-ups, pull-ups, sliding crunches, bench press, leg press, and hand grip dynamometry. Pre- to postflight functional fitness measurements were analyzed using dependent t-tests and correlation analyses were used to evaluate the relationship between functional fitness measurements and in-flight exercise workouts. Significant differences were noted post space flight with the sit and reach, cone agility, leg press, and hand grip measurements while other test scores were not significantly altered. The relationships between functional fitness and in-flight exercise measurements showed minimal to moderate correlations for most in-flight exercise training variables. The change in FFT results can be partially explained by in-flight exercise performance. Although there are losses documented in the FFT results, it is important to realize that the crewmembers are successfully performing activities of daily living and are considered functional for normal activities upon return to Earth.

  19. Algebraic Multigrid Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    2017-08-01

    AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the BoomerAMG solver in the hypre library, a large linear-solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL, and is very similar to the AMG2013 benchmark with additional optimizations. The driver provided in the benchmark can build various test problems. The default problem is a Laplace-type problem with a 27-point stencil, which can be scaled up and is designed to solve a very large problem. A second problem simulates a time-dependent problem, in which various successively smaller systems are solved.
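
    For a sense of what such a driver does, a tiny stand-in can be written in Python (a sketch only: a 7-point Laplacian instead of the benchmark's default 27-point stencil, with pyamg assumed available as a stand-in for hypre/BoomerAMG):

        import numpy as np
        import scipy.sparse as sp
        import pyamg

        def laplacian_3d(n):
            """7-point finite-difference Laplacian on an n x n x n grid."""
            I = sp.identity(n, format="csr")
            T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
            return (sp.kron(sp.kron(T, I), I)
                    + sp.kron(sp.kron(I, T), I)
                    + sp.kron(sp.kron(I, I), T)).tocsr()

        A = laplacian_3d(24)                       # 13,824 unknowns
        b = np.ones(A.shape[0])
        ml = pyamg.smoothed_aggregation_solver(A)  # build the AMG hierarchy
        residuals = []
        x = ml.solve(b, tol=1e-8, residuals=residuals)
        print(f"levels={len(ml.levels)}  iterations={len(residuals) - 1}  "
              f"relative residual={residuals[-1] / residuals[0]:.2e}")

    Scaling the grid size up and timing the setup and solve phases separately is essentially what the benchmark driver automates at much larger problem sizes.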

  20. Some results associated with a generalized basic hypergeometric function

    Directory of Open Access Journals (Sweden)

    Rajeev K. Gupta

    2009-05-01

    Full Text Available In this paper, we define a q-extension of the new generalized hypergeometric function given by Saxena et al. in [13], and have investigated the properties of the above new function such as q-differentiation and q-integral representation. The results presented are of general character and the results given earlier by Saxena and Kalla in [14], Virchenko, Kalla and Al-Zamel in [15], Al-Musallam and Kalla in [2, 3], Kobayashi in [7, 8], Saxena et al. in [13], Kumbhat et al. in [11] follow as special cases.
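
    For orientation, the q-extension builds on the standard basic (q-)hypergeometric series; the generic definition (standard background, not the specific generalization of [13]) is

    $$ {}_{r}\phi_{s}(a_1,\dots,a_r;b_1,\dots,b_s;q,z) = \sum_{n=0}^{\infty} \frac{(a_1;q)_n \cdots (a_r;q)_n}{(b_1;q)_n \cdots (b_s;q)_n} \left[(-1)^n q^{\binom{n}{2}}\right]^{1+s-r} \frac{z^n}{(q;q)_n}, \qquad (a;q)_n = \prod_{k=0}^{n-1}(1-aq^k), $$

    which reduces to the ordinary generalized hypergeometric series as q → 1.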

  1. Mask Waves Benchmark

    Science.gov (United States)

    2007-10-01

    See Figure 22 for a comparison of measured waves, linear waves, and non-linear Stokes waves. Looking at the selected 16 runs from the trough-to-peak analysis and at Figure 23 for the benchmark data set, the relation of obtained frequency versus desired frequency is almost completely linear.
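
    For reference (standard wave theory, not recovered from the report itself), the deep-water surface elevations being compared are, to first and second order in wave steepness,

    $$ \eta_{\text{linear}} = a\cos(kx-\omega t), \qquad \eta_{\text{Stokes}} = a\cos(kx-\omega t) + \tfrac{1}{2}ka^{2}\cos 2(kx-\omega t), $$

    where a is the wave amplitude and k the wavenumber; the second-order term sharpens crests and flattens troughs, which is what the measured-versus-linear comparison in Figure 22 probes.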

  2. Functionally oriented rehabilitation program for patients with fibromyalgia: preliminary results.

    Science.gov (United States)

    Wennemer, Heidi K; Borg-Stein, Joanne; Gomba, Lorraine; Delaney, Bobbi; Rothmund, Astrid; Barlow, David; Breeze, Gail; Thompson, Anita

    2006-08-01

    To evaluate function and disability in patients with fibromyalgia before and after participation in a functionally oriented, multidisciplinary, 8-wk treatment program. A total of 23 patients who met American College of Rheumatology criteria for the diagnosis of fibromyalgia were enrolled in the study. Outcome measures included: range of motion, 6-min walk test, a modified Fibromyalgia Impact Questionnaire, a modified SF-36 Physical Functioning Scale, and the Fibromyalgia Health Assessment Questionnaire. Pretreatment and posttreatment scores were analyzed using paired t tests. All subjects completed the program, and there were no reported injuries. Three subjects failed to complete the survey instruments at the conclusion of the study. Intention to treat analysis including these subjects was carried out but did not significantly change results. For the remaining subjects (n = 20), a significant improvement was found on the Physical Functioning Scale (P = 0.01). Trends toward improvement on the Fibromyalgia Impact Questionnaire (P = 0.40) and Fibromyalgia Health Assessment Questionnaire (P = 0.14) were seen but did not achieve statistical significance. Range of motion testing revealed significant improvements in lumbar spine extension, while Borg scale ratings did not change. There were no injuries or other adverse consequences of the program. This study utilized multiple functional outcome measures to demonstrate improved function and decreased disability in patients with fibromyalgia. Our patients reported significantly improved physical function after participation in the 8-wk intensive multidisciplinary treatment program. This progressive, functionally based exercise training program was well tolerated by all participants and outlines an effective exercise prescription for patients with fibromyalgia. Fibromyalgia patients in this study responded favorably to a treatment program that focused on function instead of pain.

  3. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  4. Long-term followup of hypospadias: functional and cosmetic results

    NARCIS (Netherlands)

    Rynja, Sybren P.; Wouters, Gerlof A.; van Schaijk, Maaike; Kok, Esther T.; de Jong, Tom P.; de Kort, Laetitia M.

    2009-01-01

    We assessed long-term results after hypospadias surgery with respect to urinary and sexual function, cosmetic appearance and intimate relationships. We contacted 116 patients who are now adults and who underwent surgery between 1987 and 1992. Participation included mailed questionnaires containing

  5. On some results for meromorphic univalent functions having ...

    Indian Academy of Sciences (India)

    2017-08-05

    Aug 5, 2017 ... The univalent analytic mappings defined in D having quasiconformal extension to the whole complex plane play a vital role in Teichmüller spaces. There are number of results for such functions obtained by O. Lehto, R. Kühnau and various other mathematicians starting from the work of L. Ahlfors (see [1]) in ...

  6. Benchmarks for dynamic multi-objective optimisation

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    Full Text Available of dynamic multi-objective optimisation algorithms (DMOAs) are highlighted. In addition, new DMOO benchmark functions with complicated Pareto-optimal sets (POSs) and approaches to develop DMOOPs with either an isolated or deceptive Pareto-optimal front (POF...

  7. A Benchmark for Online Non-Blocking Schema Transformations

    NARCIS (Netherlands)

    Wevers, L.; Hofstra, Matthijs; Tammens, Menno; Huisman, Marieke; van Keulen, Maurice

    2015-01-01

    This paper presents a benchmark for measuring the blocking behavior of schema transformations in relational database systems. As a basis for our benchmark, we have developed criteria for the functionality and performance of schema transformation mechanisms based on the characteristics of state of

  8. Benchmarking optimization solvers for structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    sizes is developed for this benchmarking. The problems are based on a material interpolation scheme combined with a density filter. Different optimization solvers including Optimality Criteria (OC), the Method of Moving Asymptotes (MMA) and its globally convergent version GCMMA, the interior point...... solvers in IPOPT and FMINCON, and the sequential quadratic programming method in SNOPT, are benchmarked on the library using performance profiles. Whenever possible the methods are applied to both the nested and the Simultaneous Analysis and Design (SAND) formulations of the problem. The performance...... profiles conclude that general solvers are as efficient and reliable as classical structural topology optimization solvers. Moreover, the use of the exact Hessians in SAND formulations, generally produce designs with better objective function values. However, with the benchmarked implementations solving...
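
    The performance profiles used for this comparison can be sketched in a few lines. In the Dolan-Moré construction, each solver's profile ρ_s(τ) is the fraction of problems it solves within a factor τ of the best solver; the code below is illustrative only (solver labels and timing data are hypothetical, not the paper's):

    ```python
    # Illustrative Dolan-More performance profiles; data are hypothetical.
    import numpy as np
    import matplotlib.pyplot as plt

    def performance_profile(times):
        """times[p, s] = cost of solver s on problem p (np.inf on failure)."""
        ratios = times / times.min(axis=1, keepdims=True)  # ratios r_{p,s}
        taus = np.sort(np.unique(ratios[np.isfinite(ratios)]))
        # rho_s(tau) = fraction of problems with r_{p,s} <= tau
        rho = np.array([[np.mean(ratios[:, s] <= t) for t in taus]
                        for s in range(times.shape[1])])
        return taus, rho

    times = np.array([[1.0, 1.2, np.inf],
                      [2.0, 1.0, 4.0],
                      [0.5, 0.6, 0.55]])
    taus, rho = performance_profile(times)
    for s, label in enumerate(["OC", "MMA", "IPOPT"]):
        plt.step(taus, rho[s], where="post", label=label)
    plt.xlabel("tau"); plt.ylabel("rho(tau)"); plt.legend(); plt.show()
    ```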

  9. FUNCTIONAL ABILITIES AS PREDICTORS OF PREADOLESCENT STUDENTS’ ATHLETIC RESULTS OUTCOME

    Directory of Open Access Journals (Sweden)

    Miroljub Ivanović

    2011-09-01

    Full Text Available The aim of this research was to test the relation between functional abilities (as predictors) and athletic results (as criterion) of students in the VII and VIII grades of primary school (mean age 13.9 years; SD = 1.17). The research was conducted in Valjevo during November 2010 on a sample of 108 examinees. The variable set comprised 3 tests of functional abilities (maximal oxygen consumption, pulse frequency and vital lung capacity) and 4 athletic disciplines from the current physical education curriculum (high jump, long jump, shot put and 60-meter sprint from a low start). Cronbach's alpha coefficients indicated satisfactory reliability of the applied instruments. Canonical correlation analysis and multiple regression analysis were used in data processing. The canonical correlation analysis showed that the set of functional abilities is statistically significantly related to the set of criterion variables (R = .67), manifesting one canonical factor at the level p < .03. The determination coefficient (R² = .43) indicates the prognostic significance of functional abilities, explaining 43% of the criterion variance. Using a hierarchical regression model, the following statistically significant beta coefficients of functional abilities as partial predictors of athletic outcome were determined: (I) vital lung capacity for high jump (β = .67, p < .01); (II) vital lung capacity for long jump (β = .55, p < .01); (III) vital lung capacity and pulse frequency for shot put (β = -.34, p < .01; β = .42, p < .02); and (IV) vital lung capacity for the 60-meter sprint (β = -.39). The other applied functional-ability predictor variables did not contribute statistically significantly to the prediction of the criterion variable variance

  10. Benchmarking Cloud Storage Systems

    OpenAIRE

    Wang, Xing

    2014-01-01

    With the rise of cloud computing, many cloud storage systems like Dropbox, Google Drive and Mega have been built to provide decentralized and reliable file storage. It is thus of prime importance to know their features, performance, and the best way to make use of them. In this context, we introduce BenchCloud, a tool designed as part of this thesis to conveniently and efficiently benchmark any cloud storage system. First, we provide a study of six commonly-used cloud storage systems to ident...

  11. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless......This report is based on the survey "Industrial Companies in Denmark - Today and Tomorrow', section IV: Supply Chain Management - Practices and Performance, question number 4.9 on performance assessment. To our knowledge, this survey is unique, as we have not been able to find results from any

  12. Perforator plus flaps: Optimizing results while preserving function and esthesis

    Directory of Open Access Journals (Sweden)

    Mehrotra Sandeep

    2010-01-01

    Full Text Available Background: The tenuous blood supply of traditional flaps for wound cover combined with collateral damage by sacrifice of functional muscle, truncal vessels, or nerves has been the bane of reconstructive procedures. The concept of perforator plus flaps employs dual vascular supply to flaps. By safeguarding perforators along with supply from its base, robust flaps can be raised in diverse situations. This is achieved while limiting collateral damage and preserving nerves, vessels, and functioning muscle with better function and aesthesis. Materials and Methods: The perforator plus concept was applied in seven different clinical situations. Functional muscle and fasciocutaneous flaps were employed in five and adipofascial flaps in two cases, primarily involving lower extremity defects and back. Adipofascial perforator plus flaps were employed to provide cover for tibial fracture in one patient and chronic venous ulcer in another. Results: All flaps survived without any loss and provided long-term stable cover, both over soft tissue and bone. Functional preservation was achieved in all cases where muscle flaps were employed, with no clinical evidence of loss of power. There was no sensory loss or significant oedema in or distal to the flap in both cases where neurovascular continuity was preserved during flap elevation. Fracture union and consolidation were satisfactory. One patient had minimal graft loss over fascia, which required application of stored grafts with subsequent take. No patient required re-operation. Conclusions: The perforator plus concept is holistic and applicable to most flap types in varied situations. It permits the exercise of many locoregional flap options while limiting collateral functional damage. Aesthetic considerations are also addressed while raising adipofascial flaps because of no appreciable donor defects. With quick operating times and low failure risk, these flaps can be a better substitute to traditional flaps and at

  13. Benchmarking Non-Hardware Balance of System (Soft) Costs for U.S. Photovoltaic Systems Using a Data-Driven Analysis from PV Installer Survey Results

    Energy Technology Data Exchange (ETDEWEB)

    Ardani, K.; Barbose, G.; Margolis, R.; Wiser, R.; Feldman, D.; Ong, S.

    2012-11-01

    This report presents results from the first U.S. Department of Energy (DOE) sponsored, bottom-up data-collection and analysis of non-hardware balance-of-system costs--often referred to as 'business process' or 'soft' costs--for residential and commercial photovoltaic (PV) systems.

  14. Human factors reliability Benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-06-01

    The Joint Research Centre of the European Commission has organized a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organized around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report contains the final summary reports produced by the participants in the exercise

  15. HYDROCOIN [HYDROlogic COde INtercomparison] Level 1: Benchmarking and verification test results with CFEST [Coupled Fluid, Energy, and Solute Transport] code: Draft report

    International Nuclear Information System (INIS)

    Yabusaki, S.; Cole, C.; Monti, A.M.; Gupta, S.K.

    1987-04-01

    Part of the safety analysis is evaluating groundwater flow through the repository and the host rock to the accessible environment by developing mathematical or analytical models and numerical computer codes describing the flow mechanisms. This need led to the establishment of an international project called HYDROCOIN (HYDROlogic COde INtercomparison) organized by the Swedish Nuclear Power Inspectorate, a forum for discussing techniques and strategies in subsurface hydrologic modeling. The major objective of the present effort, HYDROCOIN Level 1, is determining the numerical accuracy of the computer codes. The definition of each case includes the input parameters, the governing equations, the output specifications, and the format. The Coupled Fluid, Energy, and Solute Transport (CFEST) code was applied to solve cases 1, 2, 4, 5, and 7; the Finite Element Three-Dimensional Groundwater (FE3DGW) Flow Model was used to solve case 6. Case 3 has been ignored because unsaturated flow is not pertinent to SRP. This report presents the Level 1 results furnished by the project teams. The numerical accuracy of the codes is determined by (1) comparing the computational results with analytical solutions for cases that have analytical solutions (namely cases 1 and 4), and (2) intercomparing results from codes for cases which do not have analytical solutions (cases 2, 5, 6, and 7). Cases 1, 2, 6, and 7 relate to flow analyses, whereas cases 4 and 5 require nonlinear solutions. 7 refs., 71 figs., 9 tabs

  16. False Positive Functional Analysis Results as a Contributor of Treatment Failure during Functional Communication Training

    Science.gov (United States)

    Mann, Amanda J.; Mueller, Michael M.

    2009-01-01

    Research has shown that functional analysis results are beneficial for treatment selection because they identify reinforcers for severe behavior that can then be used to reinforce replacement behaviors either differentially or noncontingently. Theoretically then, if a reinforcer is identified in a functional analysis erroneously, a well researched…

  17. Benchmarking health IT among OECD countries: better data for better policy

    Science.gov (United States)

    Adler-Milstein, Julia; Ronchi, Elettra; Cohen, Genna R; Winn, Laura A Pannella; Jha, Ashish K

    2014-01-01

    Objective To develop benchmark measures of health information and communication technology (ICT) use to facilitate cross-country comparisons and learning. Materials and methods The effort is led by the Organisation for Economic Co-operation and Development (OECD). Approaches to definition and measurement within four ICT domains were compared across seven OECD countries in order to identify functionalities in each domain. These informed a set of functionality-based benchmark measures, which were refined in collaboration with representatives from more than 20 OECD and non-OECD countries. We report on progress to date and remaining work to enable countries to begin to collect benchmark data. Results The four benchmarking domains include provider-centric electronic record, patient-centric electronic record, health information exchange, and tele-health. There was broad agreement on functionalities in the provider-centric electronic record domain (eg, entry of core patient data, decision support), and less agreement in the other three domains in which country representatives worked to select benchmark functionalities. Discussion Many countries are working to implement ICTs to improve healthcare system performance. Although many countries are looking to others as potential models, the lack of consistent terminology and approach has made cross-national comparisons and learning difficult. Conclusions As countries develop and implement strategies to increase the use of ICTs to promote health goals, there is a historic opportunity to enable cross-country learning. To facilitate this learning and reduce the chances that individual countries flounder, a common understanding of health ICT adoption and use is needed. The OECD-led benchmarking process is a crucial step towards achieving this. PMID:23721983

  18. Computational shielding benchmarks

    International Nuclear Information System (INIS)

    The American Nuclear Society Standards Committee 6.2.1 is engaged in the documentation of radiation transport problems and their solutions. The primary objective of this effort is to test computational methods used within the international shielding community. Dissemination of benchmarks will, it is hoped, accomplish several goals: (1) Focus attention on problems whose solutions represent state-of-the-art methodology for representative transport problems of generic interest; (2) Specification of standard problems makes comparisons of alternate computational methods, including use of approximate vs. 'exact' computer codes, more meaningful; (3) Comparison with experimental data may suggest improvements in computer codes and/or associated data sets; (4) Test reliability of new methods as they are introduced for the solution of specific problems; (5) Verify user ability to apply a given computational method; and (6) Verify status of a computer program being converted for use on a different computer (e.g., CDC vs IBM) or facility

  19. The self-consistent charge density functional tight binding method applied to liquid water and the hydrated excess proton: benchmark simulations.

    Science.gov (United States)

    Maupin, C Mark; Aradi, Bálint; Voth, Gregory A

    2010-05-27

    The self-consistent charge density functional tight binding (SCC-DFTB) method is a relatively new approximate electronic structure method that is increasingly used to study biologically relevant systems in aqueous environments. There have been several gas phase cluster calculations that indicate, in some instances, an ability to predict geometries, energies, and vibrational frequencies in reasonable agreement with high level ab initio calculations. However, to date, there has been little validation of the method for bulk water properties, and no validation for the properties of the hydrated excess proton in water. Presented here is a detailed SCC-DFTB analysis of the latter two systems. This work focuses on the ability of the original SCC-DFTB method, and a modified version that includes a hydrogen bonding damping function (HBD-SCC-DFTB), to describe the structural, energetic, and dynamical nature of these aqueous systems. The SCC-DFTB and HBD-SCC-DFTB results are compared to experimental data and Car-Parrinello molecular dynamics (CPMD) simulations using the HCTH/120 gradient-corrected exchange-correlation energy functional. All simulations for these systems contained 128 water molecules, plus one additional proton in the case of the excess proton system, and were carried out in a periodic simulation box with Ewald long-range electrostatics. The liquid water structure for the original SCC-DFTB is shown to poorly reproduce bulk water properties, while the HBD-SCC-DFTB somewhat more closely represents bulk water due to an improved ability to describe hydrogen bonding energies. Both SCC-DFTB methods are found to underestimate the water dimer interaction energy, resulting in a low heat of vaporization and a significantly elevated water oxygen diffusion coefficient as compared to experiment. The addition of an excess hydrated proton to the bulk water resulted in the Zundel cation (H₅O₂⁺) stabilized species being the stable form of the charge defect, which
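
    One of the quantities compared above, the water oxygen self-diffusion coefficient, is conventionally obtained from a trajectory through the Einstein relation D = lim MSD(t)/6t. A generic sketch of that analysis (not the code used in the paper; the trajectory array here is a hypothetical toy random walk):

    ```python
    # Generic sketch: self-diffusion coefficient from the mean-squared
    # displacement (Einstein relation, D = MSD slope / 6 in three dimensions).
    import numpy as np

    def diffusion_coefficient(positions, dt_ps):
        """positions: unwrapped trajectory, shape (n_frames, n_atoms, 3)."""
        disp = positions - positions[0]              # displacement from frame 0
        msd = (disp ** 2).sum(axis=2).mean(axis=1)   # MSD per frame
        t = np.arange(len(msd)) * dt_ps
        half = len(t) // 2                           # crude diffusive regime
        slope = np.polyfit(t[half:], msd[half:], 1)[0]
        return slope / 6.0                           # D in (length^2)/ps

    rng = np.random.default_rng(0)
    toy = np.cumsum(rng.normal(size=(1000, 64, 3)), axis=0)  # toy random walk
    print("D =", diffusion_coefficient(toy, dt_ps=0.1))
    ```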

  20. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  1. FUNCTIONAL RESULTS OF SURGICAL TREATMENT OF CERVICAL SPONDYLOTIC MYELOPATHY

    Directory of Open Access Journals (Sweden)

    MARVIN JESUALDO VARGAS UH

    Full Text Available ABSTRACT Objective: To analyze the functional outcome of surgical treatment of cervical spondylotic myelopathy. Methods: A retrospective study involving 34 patients with CSM, operated from January 2014 to June 2015. The neurological status was assessed using the Nurick and modified Japanese Orthopedic Association (mJOA) scales preoperatively and at 12 months. Sex, age, time of evolution, affected cervical levels, surgical approach and T2-weighted magnetic resonance hyperintense signal were also evaluated. Results: A total of 14 men and 20 women participated. The mean age was 58.12 years. The average progression time was 12.38 months. The preoperative neurological state by mJOA was mild in 2 patients, moderate in 16 and severe in 16, with a mean of 11.44 points. The preoperative Nurick was grade II in 14 patients, grade III in 8, grade IV in 10 and grade V in 2. The T2-weighted hyperintense signal was documented in 18 patients (52.9%). The functional outcome according to the mJOA recovery rate was good in 15 patients (44.1%) and poor in 19 (55.9%). The degree of Nurick recovery was good in 20 (58.8%) and poor in 14 (41.2%). Conclusions: Decompressive surgery of the spinal cord has been shown to be effective in the treatment of cervical spondylotic myelopathy in well-selected patients. Although it is suggested that there are certain factors that correlate with functional outcome, we believe that more prospective randomized studies should be conducted to clarify this hypothesis.

  2. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  3. EPA's Benchmark Dose Modeling Software

    Science.gov (United States)

    The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors facilitate applying benchmark dose (BMD) methods to EPA’s human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...
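
    The core BMD idea is to fit a dose-response model and invert it at a fixed benchmark response (BMR). Below is a hedged sketch of that logic only; BMDS itself offers many models, lower confidence limits (BMDL), and model-selection machinery that this omits, and the data and Hill form here are hypothetical:

    ```python
    # Illustrative benchmark-dose calculation: fit a curve, invert at the BMR.
    import numpy as np
    from scipy.optimize import curve_fit, brentq

    dose = np.array([0.0, 10.0, 50.0, 100.0, 200.0])       # hypothetical data
    response = np.array([0.02, 0.05, 0.12, 0.22, 0.40])

    def hill(d, a, b, k, n):
        return a + (b - a) * d**n / (k**n + d**n)          # a = background risk

    popt, _ = curve_fit(hill, dose, response,
                        p0=[0.02, 0.5, 100.0, 1.0], maxfev=10000)

    bmr = 0.10                                             # 10% extra risk
    a = popt[0]
    target = a + bmr * (1.0 - a)                           # extra-risk definition
    bmd = brentq(lambda d: hill(d, *popt) - target, 1e-9, dose.max())
    print(f"BMD at {bmr:.0%} extra risk: {bmd:.1f}")
    ```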

  4. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    is generally not advised. Several other ways in which benchmarking and policy can support one another are identified in the analysis. This leads to a range of recommended initiatives to exploit the benefits of benchmarking in transport while avoiding some of the lurking pitfalls and dead ends......Order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable...... tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly ‘sustainable transport...

  5. Benchmarking of nuclear economics tools

    International Nuclear Information System (INIS)

    Moore, Megan; Korinny, Andriy; Shropshire, David; Sadhankar, Ramesh

    2017-01-01

    Highlights: • INPRO and GIF economic tools exhibited good alignment in total capital cost estimation. • Subtle discrepancies in the cost result from differences in financing and the fuel cycle assumptions. • A common set of assumptions was found to reduce the discrepancies to 1% or less. • Opportunities for harmonisation of economic tools exists. - Abstract: Benchmarking of the economics methodologies developed by the Generation IV International Forum (GIF) and the International Atomic Energy Agency’s International Project on Innovative Nuclear Reactors and Fuel Cycles (INPRO), was performed for three Generation IV nuclear energy systems. The Economic Modeling Working Group of GIF developed an Excel based spreadsheet package, G4ECONS (Generation 4 Excel-based Calculation Of Nuclear Systems), to calculate the total capital investment cost (TCIC) and the levelised unit energy cost (LUEC). G4ECONS is sufficiently generic in the sense that it can accept the types of projected input, performance and cost data that are expected to become available for Generation IV systems through various development phases and that it can model both open and closed fuel cycles. The Nuclear Energy System Assessment (NESA) Economic Support Tool (NEST) was developed to enable an economic analysis using the INPRO methodology to easily calculate outputs including the TCIC, LUEC and other financial figures of merit including internal rate of return, return of investment and net present value. NEST is also Excel based and can be used to evaluate nuclear reactor systems using the open fuel cycle, MOX (mixed oxide) fuel recycling and closed cycles. A Super Critical Water-cooled Reactor system with an open fuel cycle and two Fast Reactor systems, one with a break-even fuel cycle and another with a burner fuel cycle, were selected for the benchmarking exercise. Published data on capital and operating costs were used for economics analyses using G4ECONS and NEST tools. Both G4ECONS and
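
    Both tools ultimately report a levelised unit energy cost. In its generic discounted form (the standard levelised-cost definition, not the exact G4ECONS or NEST implementation),

    $$ \mathrm{LUEC} = \frac{\sum_{t} (I_t + O_t + F_t + D_t)\,(1+r)^{-t}}{\sum_{t} E_t\,(1+r)^{-t}}, $$

    where I_t, O_t, F_t and D_t are capital, operation and maintenance, fuel and decommissioning expenditures in year t, E_t is the electricity generated in year t, and r is the discount rate.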

  6. Benchmarking the UAF Tsunami Code

    Science.gov (United States)

    Nicolsky, D.; Suleimani, E.; West, D.; Hansen, R.

    2008-12-01

    We have developed a robust numerical model to simulate propagation and run-up of tsunami waves in the framework of non-linear shallow water theory. A temporal position of the shoreline is calculated using the free-surface moving boundary condition. The numerical code adopts a staggered leapfrog finite-difference scheme to solve the shallow water equations formulated for depth-averaged water fluxes in spherical coordinates. To increase spatial resolution, we construct a series of telescoping embedded grids that focus on areas of interest. For large scale problems, a parallel version of the algorithm is developed by employing a domain decomposition technique. The developed numerical model is benchmarked in an exhaustive series of tests suggested by NOAA. We conducted analytical and laboratory benchmarking for the cases of solitary wave runup on simple and composite beaches, run-up of a solitary wave on a conical island, and the extreme runup in the Monai Valley, Okushiri Island, Japan, during the 1993 Hokkaido-Nansei-Oki tsunami. Additionally, we field-tested the developed model to simulate the November 15, 2006 Kuril Islands tsunami, and compared the simulated water height to observations at several DART buoys. In all conducted tests we calculated a numerical solution with an accuracy recommended by NOAA standards. In this work we summarize results of numerical benchmarking of the code, its strengths and limits with regards to reproduction of fundamental features of coastal inundation, and also illustrate some possible improvements. We applied the developed model to simulate potential inundation of the city of Seward located in Resurrection Bay, Alaska. To calculate an areal extent of potential inundation, we take into account available near-shore bathymetry and inland topography on a grid of 15 meter resolution. By choosing several scenarios of potential earthquakes, we calculated the maximal areal extent of Seward inundation. As a test to validate our model, we
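
    The staggered leapfrog idea is easiest to see in one dimension. Below is a minimal sketch (linearised equations, flat bottom, reflective ends; the actual code solves the non-linear equations in spherical coordinates with a moving shoreline):

    ```python
    # 1D staggered-grid sketch of the linearised shallow water equations:
    # eta (surface elevation) on integer points, u (velocity) on half points.
    import numpy as np

    g, depth = 9.81, 100.0              # gravity (m/s^2), still-water depth (m)
    nx, dx = 400, 500.0                 # grid size and spacing (m)
    dt = 0.5 * dx / np.sqrt(g * depth)  # CFL-limited time step

    x = np.arange(nx) * dx
    eta = np.exp(-((x - 0.5 * nx * dx) / 2.0e4) ** 2)  # initial 1 m hump
    u = np.zeros(nx + 1)                # u[0] = u[-1] = 0: reflective walls

    for _ in range(500):
        u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])  # momentum update
        eta -= depth * dt / dx * (u[1:] - u[:-1])      # continuity update
    ```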

  7. Teaching Benchmark Strategy for Fifth-Graders in Taiwan

    Science.gov (United States)

    Yang, Der-Ching; Lai, M. L.

    2013-01-01

    The key purpose of this study was to examine how the benchmark strategy was taught for comparing fractions to fifth-graders in Taiwan. 26 fifth graders from a public elementary school in south Taiwan were selected to join this study. Results of this case study showed that students made much progress in the use of the benchmark strategy when comparing fractions…
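
    For readers unfamiliar with the term: a benchmark strategy compares each fraction against a familiar reference value instead of computing common denominators. A typical worked example (illustrative, not taken from the study) with the benchmark 1/2:

    $$ \frac{3}{8} < \frac{1}{2} \quad\text{and}\quad \frac{5}{9} > \frac{1}{2} \quad\Longrightarrow\quad \frac{3}{8} < \frac{5}{9}, $$

    since 3 is less than half of 8 while 5 is more than half of 9.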

  8. Modeling of the ORNL PCA Benchmark Using SCALE6.0 Hybrid Deterministic-Stochastic Methodology

    Directory of Open Access Journals (Sweden)

    Mario Matijević

    2013-01-01

    Full Text Available Revised guidelines with the support of computational benchmarks are needed for the regulation of the allowed neutron irradiation to reactor structures during power plant lifetime. Currently, US NRC Regulatory Guide 1.190 is the effective guideline for reactor dosimetry calculations. A well known international shielding database, SINBAD, contains a large selection of models for benchmarking neutron transport methods. In this paper a PCA benchmark has been chosen from SINBAD for qualification of our methodology for pressure vessel neutron fluence calculations, as required by the Regulatory Guide 1.190. The SCALE6.0 code package, developed at Oak Ridge National Laboratory, was used for modeling of the PCA benchmark. The CSAS6 criticality sequence of the SCALE6.0 code package, which includes the KENO-VI Monte Carlo code, as well as the MAVRIC/Monaco hybrid shielding sequence, was utilized for calculation of equivalent fission fluxes. The shielding analysis was performed using the multigroup shielding library v7_200n47g derived from the general purpose ENDF/B-VII.0 library. As a source of response functions for reaction rate calculations with MAVRIC we used international reactor dosimetry libraries (IRDF-2002 and IRDF-90.v2) and appropriate cross-sections from the transport library v7_200n47g. The comparison of calculational results and benchmark data showed a good agreement of the calculated and measured equivalent fission fluxes.
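
    The equivalent fission flux used in such dosimetry comparisons is, in its standard definition (general background, not quoted from the paper), the reaction rate divided by the fission-spectrum-averaged cross section:

    $$ R = \int \sigma(E)\,\varphi(E)\,dE, \qquad \varphi_{\mathrm{eq}} = \frac{R}{\langle\sigma\rangle_{\chi}}, \qquad \langle\sigma\rangle_{\chi} = \frac{\int \sigma(E)\,\chi(E)\,dE}{\int \chi(E)\,dE}, $$

    where σ(E) is the dosimeter reaction cross section, φ(E) the neutron flux and χ(E) the ²³⁵U fission spectrum.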

  9. Retrograde femoral nailing in elderly patients: outcome and functional results.

    Science.gov (United States)

    Neubauer, Thomas; Krawany, Manfred; Leitner, Lukas; Karlbauer, Alois; Wagner, Michael; Plecko, Michael

    2012-06-01

    Functional outcome after retrograde femoral intramedullary nailing was investigated in 35 patients older than 60 years (mean, 86 years) with 36 fractures, comprising 15 (41.7%) shaft and 21 (58.3%) distal fractures; overall, 7 (19.4%) periprosthetic fractures occurred. Twenty-two (62.9%) of 35 patients were evaluated at a mean 16.5-month follow-up with the Lysholm-Gillquist score and the SF-8 questionnaire. Primary union rate was 97.8%, with no significant differences in duration of surgery, bone healing, mobilization, and weight bearing among different fracture types; periprosthetic fractures revealed a significantly delayed mobilization (P=.03). Complications occurred significantly more often among distal femoral fractures (P=.009), including all revision surgeries. The most frequently encountered complication was loosening of distal locking bolts (n=3). Lysholm score results were mainly influenced by age-related entities and revealed fair results in all fractures (mean in the femoral shaft fracture group, 78.1 vs mean in the distal femoral fracture group, 74.9; P=.69), except in the periprosthetic subgroup, which had good results (mean, 84.8; P=.23). This group also had increased physical parameters according to SF-8 score (P=.026). No correlation existed between SF-8 physical parameters and patient age or surgery delay, whereas a negative correlation existed between patient age and SF-8 mental parameters (P=.012). Retrograde femoral intramedullary nailing is commonly used in elderly patients due to reliable bone healing, minimal soft tissue damage, and immediate full weight bearing. It also offers a valid alternative to antegrade nailing in femoral shaft fractures. Copyright 2012, SLACK Incorporated.

  10. An isomeric reaction benchmark set to test if the performance of state-of-the-art density functionals can be regarded as independent of the external potential.

    Science.gov (United States)

    Schwabe, Tobias

    2014-07-28

    Some representative density functionals are assessed for isomerization reactions in which heteroatoms are systematically substituted with heavier members of the same element group. By this, it is investigated whether the functional performance depends on the elements involved, i.e. on the external potential imposed by the atomic nuclei. Special emphasis is placed on reliable theoretical reference data and the attempt to minimize basis set effects. Both issues are challenging for molecules including heavy elements. The data suggest that no general bias can be identified for the functionals under investigation except for one case - M11-L. Nevertheless, large deviations from the reference data can be found for all functional approximations in some cases. The average error range for the nine functionals in this test is 17.6 kcal mol⁻¹. These outliers undermine the general reliability of density functional approximations.

  11. Benchmarking biofuels; Biobrandstoffen benchmarken

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption [Dutch] Greenpeace Netherlands asked CE Delft to design a sustainability benchmark for transport biofuels and to score the various biofuels against it. For comparison, driving on electricity, hydrogen, and petrol or diesel were also included. Research and growing insight increasingly show that biomass-based transport fuels sometimes cause just as many or even more greenhouse gas emissions than fossil fuels such as petrol and diesel. For Greenpeace Netherlands, CE Delft has summarized the current insights into the sustainability of fossil fuels, biofuels and electric driving. The fuels were assessed on three sustainability criteria, with greenhouse gas emissions weighing heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption.

  12. OWL2 benchmarking for the evaluation of knowledge based systems.

    Directory of Open Access Journals (Sweden)

    Sher Afgun Khan

    Full Text Available OWL2 semantics are becoming increasingly popular for real-world domain applications like gene engineering and health MIS. The present work identifies the research gap that negligible attention has been paid to the performance evaluation of Knowledge Base Systems (KBS) using OWL2 semantics. To fill this identified research gap, an OWL2 benchmark for the evaluation of KBS is proposed. The proposed benchmark addresses the foundational blocks of an ontology benchmark, i.e. data schema, workload and performance metrics. The proposed benchmark is tested on memory-based, file-based, relational-database and graph-based KBS for performance and scalability measures. The results show that the proposed benchmark is able to evaluate the behaviour of different state-of-the-art KBS on OWL2 semantics. On the basis of the results, the end users (i.e. domain experts) would be able to select a suitable KBS appropriate for their domain.
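
    The workload side of such a benchmark reduces to timing queries against each KBS under test. A hedged sketch of that idea (rdflib serves here as a stand-in in-memory store; the paper's own data schema and workload are not reproduced):

    ```python
    # Toy workload timing against an in-memory RDF store (rdflib as stand-in).
    import time
    from rdflib import Graph

    g = Graph()
    g.parse(data="""
    @prefix ex: <http://example.org/> .
    ex:geneA ex:regulates ex:geneB .
    ex:geneB ex:regulates ex:geneC .
    """, format="turtle")

    query = "SELECT ?x ?y WHERE { ?x <http://example.org/regulates> ?y }"
    t0 = time.perf_counter()
    rows = list(g.query(query))
    print(f"{len(rows)} results in {time.perf_counter() - t0:.6f} s")
    ```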

  13. Benchmark problems for numerical implementations of phase field models

    International Nuclear Information System (INIS)

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; Warren, J.; Heinonen, O. G.

    2016-01-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
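
    For concreteness, the spinodal decomposition problem mentioned above is conventionally driven by Cahn-Hilliard dynamics (a generic statement of the model, not necessarily the benchmark's exact formulation):

    $$ \frac{\partial c}{\partial t} = \nabla \cdot \left[ M\,\nabla\!\left( \frac{\partial f}{\partial c} - \kappa \nabla^{2} c \right) \right], $$

    where c is the composition field, M the mobility, f(c) a double-well bulk free energy density and κ the gradient-energy coefficient.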

  14. Functional and social results of osseointegrated hearing aids

    Directory of Open Access Journals (Sweden)

    Inmaculada MORENO-ALARCÓN

    2017-06-01

    Full Text Available Introduction and objective: Osseointegrated implants are nowadays a good therapeutic option for patients suffering from transmission or mixed hearing loss. The aims of this study are both to assess audiology benefits for patients with osseointegrated implants and quantify the change in their quality of life. Method: The study included 10 patients who were implanted in our hospital between March 2013 and September 2014. The instrument used to quantify their quality of life was the Glasgow Benefit Inventory (GBI) and a questionnaire including three questions: use of implant, postoperative pain and whether they would recommend the operation to other patients. Audiology assessment was performed through tone audiometry and free field speech audiometric testing. Results: The average total benefit score with the Glasgow Benefit Inventory was +58, and the general, social and physical scores were +75, +18 and +29, respectively. The improvement with the implant regarding free-field tonal audiometry at the frequencies of 500, 1000 and 2000 Hz was found to be statistically significant, as was the difference between verbal audiometry before and after implantation. Discussion: Improvements in surgical technique for osseointegrated implants, at present minimally invasive, foreground the assessment of functional and social aspects as a measure of their effectiveness. Conclusions: The use of the osseointegrated implant is related to an important improvement in the audiological level, especially in patients with conductive or mixed hearing loss, together with a great change in the quality of life of implanted patients.

  15. [Operative treatment of flexor pollicis longus tendon with Krackow suture, functional results--preliminary results].

    Science.gov (United States)

    Bumbasirević, Marko Z; Andjelković, Sladjana; Lesić, Aleksandar R; Sudjić, Vojo S; Palibrk, Tomislav; Tulić, Goran Dz; Radenković, Dejan V; Bajec, Djordje D

    2010-01-01

    Surgical treatment of injured flexor tendons is an important part of hand surgery. Tendon adhesions, ruptures, and joint contractures with stiffness are only part of the problems faced during tendon treatment. In spite of improvements in surgical technique and suture material, the end result of sutured flexor tendons still represents a serious problem. To present the operative treatment of flexor pollicis longus injury with the Krackow suture technique. All patients were treated within the first 48 hours after the accident. Regional anesthesia was performed with the use of a tourniquet. Besides sparing debridement, reconstruction of the digital nerves was done. All patients started with active and passive movement exercises on the first postoperative day. Follow-up was from 6 to 24 months. In the evaluation of functional recovery, grip strength, pinch strength, range of movement of the interphalangeal and metacarpophalangeal joints, and the DASH score were used. In the last two years there were 30 patients, 25 males (83.33%) and 5 females (16.66%). Mean age was 39.8 years, ranging from 17 to 65 years. According to the mechanism of injury, the patients were divided into two groups: one with sharp injuries and the other with a wider zone of injury. Concomitant digital nerve lesions were noticed in 15 patients (50%). The Krackow suture allowed early rehabilitation, which prevented tendon adhesions and enabled faster and better functional recovery.

  16. Plethysmography in assessment of hemodynamic results of pacemaker function programming

    Science.gov (United States)

    Wojciechowski, Dariusz; Sionek, Piotr; Peczalski, Kazimierz; Janusek, Dariusz

    2011-01-01

    The paper presents the potential role of plethysmography in optimization of heart hemodynamic function during pacemaker programming. The goal of the study was the assessment, by plethysmography, of optimal stroke volume in patients with an implanted dual-chamber (DDD) pacemaker. The data were collected during paced rhythm. The study group comprised 20 patients (8 female and 12 male, average age 77.4+/-4.6 years) with a dual-chamber (DDD) pacemaker and paced rhythm during routine pacemaker control and study tests. Hemodynamic parameters were assessed during modification of the atrio-ventricular delay (AVD) at pacing rates of 70 bpm and 90 bpm. The atrio-ventricular delay was programmed in 20 ms steps within the range 100-200 ms, and data were recorded with a two-minute delay between two consecutive measurements. Stroke volume (SV) and cardiac output (CO) were calculated from the plethysmographic signal using Beatscope software (TNO, Holland). The highest SV calculated for a given pacing rate was named optimal stroke volume (OSV) and, consequently, the highest cardiac output was named maximal cardiac output (MCO). The atrio-ventricular delay yielding the OSV was named optimal atrio-ventricular delay (OAVD). The results showed: mean OAVD values of 152+/-33 ms for 70 bpm and 149+/-35 ms for 90 bpm; and shortening of the mean OAVD with the increase of pacing rate from 70 bpm to 90 bpm, which resulted in a statistically significant decrease of OSV with a not statistically significant increase of MCO. The analysis of consecutive patients revealed three types of response to the increase of pacing rate: (1) typical - shortening of OAVD; (2) neutral - no change of OAVD; and (3) atypical - lengthening of OAVD.
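
    The optimisation loop described above is simple to state in code. A sketch follows (the measurement function is a hypothetical stand-in for the plethysmographic stroke-volume reading; the real protocol includes the two-minute settling delay between settings):

    ```python
    # Sketch of the AVD optimisation protocol: step the programmed
    # atrio-ventricular delay and keep the setting with the highest SV.
    def optimise_avd(measure_stroke_volume, delays_ms=range(100, 201, 20)):
        results = {avd: measure_stroke_volume(avd) for avd in delays_ms}
        oavd = max(results, key=results.get)   # optimal AV delay (OAVD)
        return oavd, results[oavd]             # OAVD and the optimal SV (OSV)
    ```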

  17. Functional Echomyography of the human denervated muscle: first results

    Directory of Open Access Journals (Sweden)

    Riccardo Zanato

    2011-03-01

    Full Text Available In this study we followed with ultrasound three patients with permanent denervation to evaluate changes in morphology, thickness, contraction and vascularisation of muscles undergoing the home-based electrical stimulation program of the Rise2-Italy project. During a period of 1 year for the first subject, 6 months for the second subject and 3 months for the third subject, we studied the denervated muscle with ultrasound, comparing it (if possible) to the contralateral normal muscle. We evaluated: 1. Changes in morphology and sonographic structure of the pathologic muscle; 2. Muscular thickness in response to the electrical stimulation therapy; 3. Short-term modifications in muscle perfusion and arterial flow patterns after stimulation; 4. Contraction-relaxation kinetics induced by volitional activity or electrical stimulation. Morphology and ultrasonographic structure of the denervated muscles changed during the period of stimulation from a pattern typical of complete muscular atrophy to a pattern which might be considered "normal" when detected in an old patient. Thickness improved significantly more in the middle third than in the proximal and distal thirds of the denervated muscle, reaching in the last measurements of the first subject approximately the same thickness as the contralateral normal muscle. In all the measurements done within this study, arterial flow of the denervated muscle showed at rest a low-resistance pattern on Doppler ultrasound (US), and a pulsed pattern after electrical stimulation. The stimulation-induced pattern is similar to the triphasic high-resistance pattern of the normal muscle. Contraction-relaxation kinetics, measured by recording the muscular movements during electrical stimulation, showed an abnormal behaviour of the denervated muscle during the relaxation phase, which was significantly longer than in normal muscle (880 msec in the denervated muscle vs 240 msec in the contralateral normal one).

  18. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with 'typical' and 'best-practice' benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, were developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the

  19. Benchmarking & European sustainable transport policies

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik

    2003-01-01

    way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent...... to support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One

  20. NASA Software Engineering Benchmarking Study

    Science.gov (United States)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    was its software assurance practices, which seemed to rate well in comparison to the other organizational groups and also seemed to include a larger scope of activities. An unexpected benefit of the software benchmarking study was the identification of many opportunities for collaboration in areas including metrics, training, sharing of CMMI experiences and resources such as instructors and CMMI Lead Appraisers, and even sharing of assets such as documented processes. A further unexpected benefit of the study was the feedback on NASA practices that was received from some of the organizations interviewed. From that feedback, other potential areas where NASA could improve were highlighted, such as accuracy of software cost estimation and budgetary practices. The detailed report contains discussion of the practices noted in each of the topic areas, as well as a summary of observations and recommendations from each of the topic areas. The resulting 24 recommendations from the topic areas were then consolidated to eliminate duplication and culled into a set of 14 suggested actionable recommendations. This final set of actionable recommendations, listed below, are items that can be implemented to improve NASA's software engineering practices and to help address many of the items that were listed in the NASA top software engineering issues. 1. Develop and implement standard contract language for software procurements. 2. Advance accurate and trusted software cost estimates for both procured and in-house software and improve the capture of actual cost data to facilitate further improvements. 3. Establish a consistent set of objectives and expectations, specifically types of metrics at the Agency level, so key trends and models can be identified and used to continuously improve software processes and each software development effort. 4. Maintain the CMMI Maturity Level requirement for critical NASA projects and use CMMI to measure organizations developing software for NASA. 5

  1. FUNCTIONAL RESULTS OF ENDOSCOPIC EXTRAPERITONEAL RADICAL INTRAFASCIAL PROSTATECTOMY

    Directory of Open Access Journals (Sweden)

    D. V. Perlin

    2014-01-01

    Full Text Available Introduction. Endoscopic radical prostatectomy is a highly effective treatment for localized prostate cancer. Intrafascial prostate dissection ensures early recovery of urinary continence and erectile function. This article sums up our own experience of performing intrafascial endoscopic prostatectomy. Materials and methods. Twenty-five patients underwent this procedure. Twelve months after surgery, 88.2% of the patients were fully continent and 11.7% had symptoms of minimal stress urinary incontinence. We encountered no cases of positive surgical margins and one case of biochemical recurrence of the disease. Conclusion. Oncologically, intrafascial endoscopic radical prostatectomy is as effective as other modifications of radical prostatectomy and has the benefits of early recovery of urinary continence and erectile function.

  2. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Xuhui [National Renewable Energy Laboratory (NREL), Golden, CO (United States). Transportation and Hydrogen Systems Center

    2017-10-19

    In FY16, the thermal performance of the 2014 Honda Accord Hybrid power electronics thermal management system was benchmarked. Both experiments and numerical simulation were utilized to thoroughly study the thermal resistances and temperature distribution in the power module. Experimental results obtained from the water-ethylene glycol tests provided the junction-to-liquid thermal resistance. The finite element analysis (FEA) and computational fluid dynamics (CFD) models were found to yield a good match with experimental results. Both experimental and modeling results demonstrate that the passive stack is the dominant thermal resistance for both the motor and power electronics systems. The 2014 Accord power electronics system yields steady-state thermal resistance values around 42-50 mm²·K/W, depending on the flow rate. At a typical flow rate of 10 liters per minute, the thermal resistance of the Accord system was found to be about 44 percent lower than that of the 2012 Nissan LEAF system that was benchmarked in FY15. The main reason for the difference is that the Accord power module used a metalized-ceramic substrate and eliminated the thermal interface material layers. FEA models were developed to study the transient performance of the 2012 Nissan LEAF, the 2014 Accord, and two other systems that feature conventional power module designs. The simulation results indicate that the 2012 LEAF power module has the lowest thermal impedance at time scales below one second. This is probably due to moving materials with low thermal conductivity farther away from the heat source and enhancing the heat-spreading effect of the copper-molybdenum plate close to the insulated gate bipolar transistors. When approaching steady state, the Honda system shows lower thermal impedance. Measurement results for the thermal resistance of the 2015 BMW i3 power electronics system indicate that the i3 insulated gate bipolar transistor module has significantly lower junction
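
    The quantity benchmarked above reduces to a simple ratio: the junction-to-liquid temperature rise divided by the dissipated heat, normalized by die area to obtain mm²·K/W. A minimal sketch, with invented numbers rather than NREL data:

    ```python
    # Area-normalized junction-to-liquid thermal resistance; all inputs invented.
    def junction_to_liquid_resistance(t_junction_c, t_liquid_c, heat_load_w, area_mm2):
        """Thermal resistance in mm^2*K/W."""
        return (t_junction_c - t_liquid_c) * area_mm2 / heat_load_w

    # A 30 K rise across a 100 mm^2 die at 65 W gives about 46 mm^2*K/W,
    # i.e. within the 42-50 mm^2*K/W range quoted above.
    print(junction_to_liquid_resistance(95.0, 65.0, 65.0, 100.0))
    ```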

  3. Water Level Superseded Benchmark Sheets

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Images of National Coast & Geodetic Survey (now NOAA's National Geodetic Survey/NGS) tidal benchmarks which have been superseded by new markers or locations....

  4. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together...... to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal...... and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work...

  5. Benchmarking Complications Associated with Esophagectomy

    NARCIS (Netherlands)

    Low, Donald E.; Kuppusamy, Madhan Kumar; Alderson, Derek; Cecconello, Ivan; Chang, Andrew C.; Darling, Gail; Davies, Andrew; D'journo, Xavier Benoit; Gisbertz, Suzanne S.; Griffin, S. Michael; Hardwick, Richard; Hoelscher, Arnulf; Hofstetter, Wayne; Jobe, Blair; Kitagawa, Yuko; Law, Simon; Mariette, Christophe; Maynard, Nick; Morse, Christopher R.; Nafteux, Philippe; Pera, Manuel; Pramesh, C. S.; Puig, Sonia; Reynolds, John V.; Schroeder, Wolfgang; Smithers, Mark; Wijnhoven, B. P. L.

    2017-01-01

    Utilizing a standardized dataset with specific definitions to prospectively collect international data to provide a benchmark for complications and outcomes associated with esophagectomy. Outcome reporting in oncologic surgery has suffered from the lack of a standardized system for reporting

  6. Analytical three-dimensional neutron transport benchmarks for verification of nuclear engineering codes. Final report

    International Nuclear Information System (INIS)

    Ganapol, B.D.; Kornreich, D.E.

    1997-01-01

    Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation as modified in the second year renewal application includes the following three primary tasks. Task 1 on two-dimensional neutron transport is divided into (a) single medium searchlight problem (SLP) and (b) two-adjacent half-space SLP. Task 2 on three-dimensional neutron transport covers (a) point source in arbitrary geometry, (b) single medium SLP, and (c) two-adjacent half-space SLP. Task 3 on code verification includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.

  7. Analytical three-dimensional neutron transport benchmarks for verification of nuclear engineering codes. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Ganapol, B.D.; Kornreich, D.E. [Univ. of Arizona, Tucson, AZ (United States). Dept. of Nuclear Engineering

    1997-07-01

    Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation as modified in the second year renewal application includes the following three primary tasks. Task 1 on two-dimensional neutron transport is divided into (a) single medium searchlight problem (SLP) and (b) two-adjacent half-space SLP. Task 2 on three-dimensional neutron transport covers (a) point source in arbitrary geometry, (b) single medium SLP, and (c) two-adjacent half-space SLP. Task 3 on code verification includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.

  8. Handbook of critical experiments benchmarks

    International Nuclear Information System (INIS)

    Durst, B.M.; Bierman, S.R.; Clayton, E.D.

    1978-03-01

    Data from critical experiments have been collected together for use as benchmarks in evaluating calculational techniques and nuclear data. These benchmarks have been selected from the numerous experiments performed on homogeneous plutonium systems. No attempt has been made to reproduce all of the data that exists. The primary objective in the collection of these data is to present representative experimental data defined in a concise, standardized format that can easily be translated into computer code input

  9. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments, as well as smaller efforts supported by this grant, are summarized in more detail in this report.
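
    The merge step described above can be pictured as a dot product between per-operation timings (the machine characterization) and dynamic operation counts (the program characterization). A toy sketch, with invented operation names and numbers:

    ```python
    # Hedged sketch of execution-time prediction from a machine characterization
    # (seconds per abstract operation) and a program characterization (dynamic
    # operation counts). All names and values are hypothetical.
    machine = {"flop": 5e-9, "mem_ref": 2e-9, "branch": 1e-9}   # s per operation
    program = {"flop": 4e8, "mem_ref": 6e8, "branch": 1e8}      # dynamic counts

    predicted_time = sum(program[op] * machine[op] for op in program)
    print(f"Predicted execution time: {predicted_time:.2f} s")  # 2.0 + 1.2 + 0.1 = 3.3 s
    ```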

  10. How to Advance TPC Benchmarks with Dependability Aspects

    Science.gov (United States)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring their performance. While TPC-E measures the recovery time after some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that systems should nowadays be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  11. [Horizontal supraglottic laryngectomy. Technique, indications, oncologic results and early functional results. Apropos of 87 cases].

    Science.gov (United States)

    Maurice, N; Delol, J; Makeieff, M; Arnoux, A; Crampette, L; Guerrier, B

    1996-01-01

    In this article, we advocate supraglottic laryngectomy with bilateral neck dissection for the treatment of supraglottic carcinomas with preserved laryngeal mobility. Post-operative results and follow-up of 87 patients are discussed. This technique allows excellent loco-regional control of the disease with preservation of laryngeal function. Radiation therapy is reserved for treatment of metachronous (second primary) tumors in cases with satisfactory local control and no neck metastases. The all-stage 5-year overall survival rate was 55%, with a 68.5% disease-specific survival rate. Five-year local control of the disease and regional control of neck nodes were 94% and 92%, respectively. The five-year disease-specific survival rate was 71% for the N- population vs 61% for the N+ population. By tumor classification, the five-year disease-specific survival rate was 70% for T1, 75% for T2, 69% for T3 and 54% for T4. In the post-operative follow-up, the median time to decannulation was 17 days, the median time to nasogastric tube removal was 19 days, and the median hospital stay was 38 days.

  12. Geant4 Computing Performance Benchmarking and Monitoring

    Science.gov (United States)

    Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; Genser, Krzysztof; Jun, Soon Yung; Kowalkowski, James B.; Paterno, Marc

    2015-12-01

    Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. The scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.

  13. An improved benchmark model for the Big Ten critical assembly - 021

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    2010-01-01

    A new benchmark specification is developed for the BIG TEN uranium critical assembly. The assembly has a fast spectrum, and its core contains approximately 10 wt.% enriched uranium. Detailed specifications for the benchmark are provided, and results from the MCNP5 Monte Carlo code using a variety of nuclear-data libraries are given for this benchmark and two others. (authors)

  14. ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms

    DEFF Research Database (Denmark)

    Aumüller, Martin; Bernhardsson, Erik; Faithfull, Alexander

    2017-01-01

    visualise these as images, plots, and websites with interactive plots. ANN-Benchmarks aims to provide a constantly updated overview of the current state of the art of k-NN algorithms. In the short term, this overview allows users to choose the correct k-NN algorithm and parameters...... for their similarity search task; in the longer term, algorithm designers will be able to use this overview to test and refine automatic parameter tuning. The paper gives an overview of the system, evaluates the results of the benchmark, and points out directions for future work. Interestingly, very different...
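
    The headline quality metric such a benchmark reports is recall: the fraction of the true k nearest neighbors that the approximate algorithm actually returns. A minimal sketch with toy neighbor lists (not ANN-Benchmarks code):

    ```python
    # Recall of an approximate k-NN result against the exact neighbors.
    # The neighbor id lists below are fabricated for illustration.
    def recall_at_k(approx_ids, true_ids):
        """Fraction of the true k nearest neighbors found by the approximate search."""
        return len(set(approx_ids) & set(true_ids)) / len(true_ids)

    true_nn   = [3, 17, 42, 56, 71, 88, 90, 95, 99, 104]   # exact 10-NN
    approx_nn = [3, 17, 42, 56, 71, 88, 90, 95, 12, 37]    # algorithm under test
    print(recall_at_k(approx_nn, true_nn))  # -> 0.8
    ```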

  15. Construction of a Benchmark for the User Experience Questionnaire (UEQ)

    Directory of Open Access Journals (Sweden)

    Martin Schrepp

    2017-08-01

    Full Text Available Questionnaires are a cheap and highly efficient tool for achieving a quantitative measure of a product's user experience (UX). However, it is not always easy to decide whether a questionnaire result can really show whether a product satisfies this quality aspect. So a benchmark is useful. It allows comparing the results for one product to a large set of other products. In this paper we describe a benchmark for the User Experience Questionnaire (UEQ), a widely used evaluation tool for interactive products. We also describe how the benchmark can be applied to the quality assurance process for concrete projects.
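
    Applying such a benchmark amounts to placing a product's mean scale score into one of several categories whose boundaries were derived from the benchmark dataset. The sketch below shows the mechanics only; the boundary values are invented placeholders, not the published UEQ benchmark numbers.

    ```python
    # Classify a mean scale score against benchmark interval boundaries.
    # The bounds and labels are hypothetical stand-ins for a real benchmark.
    def benchmark_category(scale_mean, bounds=(-0.7, 0.6, 1.1, 1.5)):
        labels = ["bad", "below average", "above average", "good", "excellent"]
        for bound, label in zip(bounds, labels):
            if scale_mean <= bound:
                return label
        return labels[-1]

    print(benchmark_category(1.3))  # -> "good" under these invented bounds
    ```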

  16. A PWR Thorium Pin Cell Burnup Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.

    2000-05-01

    As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium-fueled benchmark comparison was made in this study between state-of-the-art codes, MOCUP (MCNP4B + ORIGEN2) and CASMO-4, for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross-section libraries. Eigenvalues and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference less than 1%. The isotope concentration comparisons are better than a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code-to-code differences are analyzed and discussed.
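
    The eigenvalue comparison quoted above boils down to tracking the relative difference in k-eff between two codes over a burnup history. A sketch with fabricated k values (not the MOCUP/CASMO-4 results):

    ```python
    # Maximum and average absolute relative k-eff difference over burnup.
    # All k values below are invented for illustration.
    burnup = [0, 10, 20, 30, 40, 50, 60]                         # MWd/kg
    k_a = [1.250, 1.180, 1.115, 1.060, 1.012, 0.970, 0.932]      # code A
    k_b = [1.248, 1.176, 1.120, 1.066, 1.005, 0.975, 0.940]      # code B

    rel_diff = [abs(a - b) / b * 100 for a, b in zip(k_a, k_b)]
    print(f"max  |dk/k| = {max(rel_diff):.2f}%")                 # ~0.85%
    print(f"mean |dk/k| = {sum(rel_diff) / len(rel_diff):.2f}%") # ~0.51%
    ```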

  17. Benchmark On Sensitivity Calculation (Phase III)

    Energy Technology Data Exchange (ETDEWEB)

    Ivanova, Tatiana [IRSN]; Laville, Cedric [IRSN]; Dyrda, James [Atomic Weapons Establishment]; Mennerdahl, Dennis [E. Mennerdahl Systems]; Golovko, Yury [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia]; Raskach, Kirill [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia]; Tsiboulia, Anatoly [Institute of Physics and Power Engineering (IPPE), Obninsk, Russia]; Lee, Gil Soo [Korea Institute of Nuclear Safety (KINS)]; Woo, Sweng-Woong [Korea Institute of Nuclear Safety (KINS)]; Bidaud, Adrien [Laboratoire de Physique Subatomique et de Cosmologie (LPSC)]; Patel, Amrit [NRC]; Bledsoe, Keith C. [ORNL]; Rearden, Bradley T. [ORNL]; Gulliford, J. [OECD Nuclear Energy Agency]

    2012-01-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.
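
    When a direct sensitivity capability is unavailable, such coefficients can be approximated by central differences, S = (dk/k)/(dσ/σ). The sketch below uses a stand-in analytic k(σ) model rather than a real transport calculation:

    ```python
    # Central-difference estimate of a k-eff sensitivity coefficient,
    # S = (dk/k)/(dsigma/sigma). The k(sigma) model is a toy stand-in.
    def sensitivity(k_of_sigma, sigma, rel_step=0.01):
        k0 = k_of_sigma(sigma)
        k_plus = k_of_sigma(sigma * (1 + rel_step))
        k_minus = k_of_sigma(sigma * (1 - rel_step))
        return (k_plus - k_minus) / (2 * rel_step * k0)

    # Toy model: k rises with the cross section and slowly saturates.
    k_model = lambda s: 1.0 + 0.2 * (s / (s + 1.0))
    print(sensitivity(k_model, sigma=2.0))  # ~0.039 for this toy model
    ```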

  18. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-11-01

    Full Text Available This paper reviews the role of human resource management (HRM), which today plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much-needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  19. Calculation of WWER-440 nuclide benchmark (CB2)

    International Nuclear Information System (INIS)

    Prodanova, R

    2005-01-01

    The present paper shows the results obtained at INRNE, Sofia, Bulgaria, on the benchmark task announced by L. Markova at the sixth Symposium of AER, Kirkkonummi, Finland, 1996 (Authors)

  20. Recent structure function results from neutrino scattering at fermilab

    International Nuclear Information System (INIS)

    Yang, U.K.; Avvakumov, S.; Barbaro, P. de

    2001-01-01

    We report on the extraction of the structure functions $F_2$ and $\Delta xF_3 = xF_3^{\nu} - xF_3^{\bar\nu}$ from CCFR $\nu_\mu$-Fe and $\bar\nu_\mu$-Fe differential cross sections. The extraction is performed in a physics-model-independent (PMI) way. This first measurement of $\Delta xF_3$, which is useful in testing models of heavy charm production, is higher than current theoretical predictions. The ratio of the $F_2$ (PMI) values measured in $\nu_\mu$ and $\mu$ scattering is in agreement (within 5%) with the NLO predictions using massive charm production schemes, thus resolving the long-standing discrepancy between the two sets of data. In addition, measurements of $F_L$ (or, equivalently, $R$) and $2xF_1$ are reported in the kinematic region where anomalous nuclear effects in $R$ are observed at HERMES. (author)

  1. Fault detection of a benchmark wind turbine using interval analysis

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Odgaard, Peter Fogh; Bak, Thomas

    2012-01-01

    This paper investigates a set-membership state estimation approach for fault detection of a benchmark wind turbine. The main challenges in the benchmark are high noise on the wind speed measurement and the nonlinearities in the aerodynamic torque, such that the overall model of the turbine is non...... of the measurement with a closed set that is computed based on the past measurements and a model of the system. If the measurement is not consistent with this set, a fault is detected. The result demonstrates the effectiveness of the method for fault detection of the benchmark wind turbine.
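
    The consistency test at the heart of the set-membership approach can be sketched in a few lines: propagate an interval prediction through the model with bounded process noise, then flag a fault when the measurement (allowing for bounded measurement noise) falls outside it. The model and bounds below are invented, not the benchmark turbine.

    ```python
    # Toy set-membership fault detection; model and noise bounds are hypothetical.
    def predict_interval(x_lo, x_hi, a=0.95, w=0.02):
        """One-step interval prediction for x' = a*x + noise, |noise| <= w, a > 0."""
        return a * x_lo - w, a * x_hi + w

    def is_faulty(measurement, x_lo, x_hi, v=0.05):
        """Measurement noise bounded by v: consistent iff it meets the predicted set."""
        return not (x_lo - v <= measurement <= x_hi + v)

    lo, hi = predict_interval(0.9, 1.1)
    print(is_faulty(1.02, lo, hi))  # False: consistent, no fault
    print(is_faulty(1.60, lo, hi))  # True: outside the set, fault detected
    ```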

  2. Status on benchmark testing of CENDL-3

    CERN Document Server

    Liu Ping

    2002-01-01

    CENDL-3, the newest version of the China Evaluated Nuclear Data Library, has been finished and distributed for benchmark analysis recently. The processing was carried out using the NJOY nuclear data processing code system. The calculations and analysis of benchmarks were done with the Monte Carlo code MCNP and the reactor lattice code WIMSD5A. The calculated results were compared with the experimental results and with results based on ENDF/B-VI. In most thermal and fast uranium criticality benchmarks, the keff values calculated with CENDL-3 were in good agreement with experimental results. In the plutonium fast cores, the keff values were improved significantly with CENDL-3. This is due to the reevaluation of the fission spectrum and elastic angular distributions of ²³⁹Pu and ²⁴⁰Pu. CENDL-3 underestimated the keff values compared with other evaluated data libraries for most spherical or cylindrical assemblies of plutonium or uranium with beryllium.

  3. WIPP Benchmark calculations with the large strain SPECTROM codes

    Energy Technology Data Exchange (ETDEWEB)

    Callahan, G.D.; DeVries, K.L. [RE/SPEC, Inc., Rapid City, SD (United States)

    1995-08-01

    This report provides calculational results from the updated Lagrangian structural finite-element programs SPECTROM-32 and SPECTROM-333 for the purpose of qualifying these codes to perform analyses of structural situations in the Waste Isolation Pilot Plant (WIPP). Results are presented for the Second WIPP Benchmark (Benchmark II) problems and for a simplified heated room problem used in a parallel design calculation study. The Benchmark II problems consist of an isothermal room problem and a heated room problem. The stratigraphy involves 27 distinct geologic layers, including ten clay seams, of which four are modeled as frictionless sliding interfaces. The analyses of the Benchmark II problems consider a 10-year simulation period. The evaluation of nine structural codes used in the Benchmark II problems shows that inclusion of finite-strain effects is not as significant as observed for the simplified heated room problem, and a variety of finite-strain and small-strain formulations produced similar results. The simplified heated room problem provides stratigraphic complexity equivalent to the Benchmark II problems but neglects sliding along the clay seams. The simplified heated room problem does, however, provide a calculational check case in which the small-strain formulation produced room closures about 20 percent greater than those obtained using finite-strain formulations. A discussion is given of each of the solved problems, and the computational results are compared with available published results. In general, the results of the two SPECTROM large-strain codes compare favorably with results from other codes used to solve the problems.

  4. Benchmark and Continuous Improvement of Performance

    Directory of Open Access Journals (Sweden)

    Alina Alecse Stanciu

    2017-12-01

    Full Text Available The present economic environment challenges us to perform, and to think and rethink our personal strategies in accordance with our entities' strategies, whether we are simply employees or entrepreneurs. It is an environment characterised by Volatility, Uncertainty, Complexity and Ambiguity (a VUCA world) in which entities must fight for the position they have gained in the market, disrupt new markets and new economies, and develop their client portfolios, with performance as the final goal. They also face the pressure of the driving forces known as the 2030 Megatrends: Globalization 2.0, Environmental Crisis and the Scarcity of Resources, Individualism and Value Pluralism, and Demographic Change. This paper examines whether benchmarking is an opportunity to increase the competitiveness of Romanian SMEs, and the results show that benchmarking is a powerful instrument, combining reduced negative impact on the environment with a positive impact on the economy and society.

  5. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs, and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.
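
    For orientation, the STREAM triad kernel behind such bandwidth figures is a single line of arithmetic over large arrays. The NumPy re-creation below is an illustrative stand-in for the official C benchmark, not the code run on SANAM:

    ```python
    # Rough triad-style bandwidth estimate (a = b + scalar*c) with NumPy.
    import time
    import numpy as np

    n = 20_000_000
    b = np.random.rand(n)
    c = np.random.rand(n)

    t0 = time.perf_counter()
    a = b + 3.0 * c
    elapsed = time.perf_counter() - t0

    bytes_moved = 3 * n * 8   # read b, read c, write a (8-byte doubles)
    print(f"Triad bandwidth: {bytes_moved / elapsed / 1e9:.1f} GB/s")
    ```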

  6. On some results for meromorphic univalent functions having ...

    Indian Academy of Sciences (India)

    71

    2017-08-05

    Aug 5, 2017 ... till date. We refer to the following articles [11]–[16] for various other results on such mappings. In this paper our main concern is the univalent meromorphic mappings defined in. D with pole at z = p ∈ [0,1) having quasiconformal extension to the whole complex plane. O. Lehto extensively studied coefficient ...

  7. Benchmark of a Cubieboard cluster

    Science.gov (United States)

    Schnepf, M. J.; Gudu, D.; Rische, B.; Fischer, M.; Jung, C.; Hardt, M.

    2015-12-01

    We built a cluster of ARM-based Cubieboard2 boards, each of which has a SATA interface to connect a hard drive. This cluster was set up as a storage system using Ceph and as a compute cluster for high energy physics analyses. To study the performance in these applications, we ran two benchmarks on this cluster. We also checked the energy efficiency of the cluster using the same benchmarks. The performance and energy efficiency of our cluster were compared with those of a network-attached storage (NAS) system and of a desktop PC.

  8. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment that is dominated by radiative effects. In this setting, the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to reproduce both the behaviour of established and widely used codes and the results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
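
    A typical scalar diagnostic of the kind mentioned above is the volume-averaged kinetic energy, whose decay in the Taylor-Green vortex test can be compared directly between codes. The sketch below evaluates it on the analytic 2-D initial condition, not on MUSIC output:

    ```python
    # Volume-averaged kinetic energy of a velocity field, evaluated on the
    # analytic 2-D Taylor-Green initial condition (a toy, not MUSIC data).
    import numpy as np

    n = 64
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")

    u = np.sin(X) * np.cos(Y)      # initial velocity components
    v = -np.cos(X) * np.sin(Y)

    mean_ke = 0.5 * np.mean(u**2 + v**2)
    print(f"Volume-averaged kinetic energy: {mean_ke:.4f}")  # 0.25 analytically
    ```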

  9. Results on nucleon structure functions in quantum chromodynamics

    International Nuclear Information System (INIS)

    Martin, F.

    1979-01-01

    Gluon bremsstrahlung processes inside the nucleon are investigated using the standard renormalization-group analysis. A new method of inverting the moments is applied, which leads to analytic results for the parton distributions near x = 1 and x = 0. The nucleon is considered as a bound state of three quarks subsequently "renormalized" by gluon bremsstrahlung and quark-antiquark pair production. An "unrenormalized" valence quark distribution peaked at x = 1/3, with a width related to the nucleon radius, leads to good agreement with deep-inelastic data. However, the gluon distribution obtained seems too steep near x = 0.

  10. Verification, validation, and benchmarking report for GILDA: An infinite lattice diffusion theory calculation

    Energy Technology Data Exchange (ETDEWEB)

    Le, T.T.

    1991-09-01

    This report concerns the verification and validation of GILDA, a static two-dimensional infinite lattice diffusion theory code. The verification was performed to determine whether GILDA applies the correct theory and whether all the subroutines function as required. The validation was performed to determine the accuracy of the code by comparing its results with the integral transport solutions (GLASS) of benchmark problems. Since GLASS uses multigroup integral transport theory, a more accurate method than few-group diffusion theory, using solutions from GLASS as reference solutions to benchmark GILDA is acceptable. The eight benchmark problems used in this process are infinite mixed-lattice problems. Each lattice is constructed by repeating an infinite number of identical super-cells (zones). Two types of super-cell were used for these benchmark problems: one consists of six Mark22 assemblies surrounding one control assembly, and the other consists of three Mark16 fuel assemblies and three Mark31 target assemblies surrounding a control assembly.

  11. Implementation and verification of global optimization benchmark problems

    Science.gov (United States)

    Posypkin, Mikhail; Usov, Alexander

    2017-12-01

    The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the process of generating the value of a function and its gradient at a given point, and the interval estimates of a function and its gradient on a given box, using a single description. Based on this functionality, we have developed a collection of tests for an automatic verification of the proposed benchmarks. The verification has shown that literary sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.

  12. Implementation and verification of global optimization benchmark problems

    Directory of Open Access Journals (Sweden)

    Posypkin Mikhail

    2017-12-01

    Full Text Available The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the process of generating the value of a function and its gradient at a given point, and the interval estimates of a function and its gradient on a given box, using a single description. Based on this functionality, we have developed a collection of tests for an automatic verification of the proposed benchmarks. The verification has shown that literary sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.
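
    The idea of one description serving point values, gradients, and interval bounds can be illustrated with a toy: real libraries build this via operator overloading over an expression tree, while the sketch below hard-codes a single function f(x) = x² + 3x.

    ```python
    # Toy "single description" yielding a value/gradient and interval bounds
    # for f(x) = x*x + 3*x; not the C++ library described above.
    def f_value_grad(x):
        return x * x + 3 * x, 2 * x + 3          # value and derivative

    def f_interval(lo, hi):
        # Crude exact range: evaluate at the endpoints plus the stationary
        # point x = -1.5 if it lies inside the box.
        candidates = [lo, hi] + ([-1.5] if lo <= -1.5 <= hi else [])
        vals = [x * x + 3 * x for x in candidates]
        return min(vals), max(vals)

    print(f_value_grad(2.0))       # (10.0, 7.0)
    print(f_interval(-2.0, 1.0))   # (-2.25, 4.0)
    ```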

  13. The Benchmarking of Integrated Business Structures

    Directory of Open Access Journals (Sweden)

    Nifatova Olena M.

    2017-12-01

    Full Text Available The aim of the article is to study the role of benchmarking in the process of integration of business structures in the aspect of knowledge sharing. The results of studying the essential content of the concept “integrated business structure”, together with its semantic analysis, made it possible to form our own understanding of this category, with an emphasis on the need to consider it in the plane of three projections: legal, economic, and organizational. The economic projection of the essential content of integration associations of business units is supported by the organizational projection, which is expressed through such essential aspects as the existence of a single center that makes key decisions; understanding integration as knowledge sharing; and using benchmarking as an exchange of experience on key business processes. Understanding the process of integration of business units in the aspect of knowledge sharing involves obtaining certain information benefits. Using benchmarking as an exchange of experience on key business processes in integrated business structures will help improve the basic production processes and increase the efficiency of activity of both the individual business unit and the IBS as a whole.

  14. A Field-Based Aquatic Life Benchmark for Conductivity in ...

    Science.gov (United States)

    This report adapts the standard U.S. EPA methodology for deriving ambient water quality criteria. Rather than use toxicity test results, the adaptation uses field data to determine the loss of 5% of genera from streams. The method is applied to derive effect benchmarks for dissolved salts as measured by conductivity in Central Appalachian streams using data from West Virginia and Kentucky. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.
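
    In outline, the field-based derivation takes the distribution of conductivity levels at which individual genera are extirpated (XC95 values) and sets the benchmark at its 5th percentile (HC05). The sketch below shows only the percentile mechanics; the XC95 values are fabricated, not the West Virginia or Kentucky data.

    ```python
    # HC05 as the 5th percentile of fabricated genus extirpation values (XC95).
    def percentile(sorted_vals, p):
        """Simple linear-interpolation percentile (p in [0, 100])."""
        k = (len(sorted_vals) - 1) * p / 100.0
        lo, hi = int(k), min(int(k) + 1, len(sorted_vals) - 1)
        return sorted_vals[lo] + (sorted_vals[hi] - sorted_vals[lo]) * (k - lo)

    xc95 = sorted([180, 210, 250, 300, 340, 420, 500, 640, 800, 1100])  # uS/cm
    print(f"HC05 benchmark: {percentile(xc95, 5):.0f} uS/cm")  # -> ~194 uS/cm
    ```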

  15. Parameter Curation for Benchmark Queries

    NARCIS (Netherlands)

    Gubichev, Andrey; Boncz, Peter

    2014-01-01

    In this paper we consider the problem of generating parameters for benchmark queries so these have stable behavior despite being executed on datasets (real-world or synthetic) with skewed data distributions and value correlations. We show that uniform random sampling of the substitution parameters

  16. Benchmarked Library Websites Comparative Study

    KAUST Repository

    Ramli, Rindra M.

    2015-01-01

    This presentation provides an analysis of the services provided by the benchmarked library websites. The exploratory study includes a comparison of these websites against a list of criteria and presents a list of services that are most commonly deployed by the selected websites. In addition, the investigators propose a list of services that could be provided via the KAUST library website.

  17. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    Energy Technology Data Exchange (ETDEWEB)

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis, with a specific focus on transient events. The benchmark working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  18. Benchmarking Terrestrial Ecosystem Models in the South Central US

    Science.gov (United States)

    Kc, M.; Winton, K.; Langston, M. A.; Luo, Y.

    2016-12-01

    Ecosystem services and products are the foundation of sustainability for the regional and global economy, since we depend directly or indirectly on ecosystem services such as food, livestock, water, air, and wildlife. It has been increasingly recognized that, for sustainability, conservation problems need to be addressed in the context of entire ecosystems. This approach is even more vital in the 21st century, with a formidably increasing human population and rapid changes in the global environment. This study was conducted to assess the state of the science of ecosystem models in the south-central region of the US. The ecosystem models were benchmarked using the ILAMB diagnostic package, developed as a result of the International Land Model Benchmarking (ILAMB) project, on four main categories: Ecosystem and Carbon Cycle, Hydrology Cycle, Radiation and Energy Cycle, and Climate Forcings. A cumulative assessment was generated by weighting seven different skill assessment metrics for each ecosystem model. This synthesis of the current state of the science of ecosystem modeling in the south-central US will be highly useful for coupling these models with climate, agronomic, hydrologic, economic, or management models to better represent ecosystem dynamics as affected by climate change and human activities, and hence to obtain more reliable predictions of future ecosystem functions and services in the region. A better understanding of such processes will increase our ability to predict ecosystem responses and feedbacks to environmental and human-induced change in the region, so that decision makers can make informed management decisions about the ecosystem.
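
    The cumulative assessment described above is, in essence, a weighted aggregation of normalized per-category skill scores. A toy version (scores and weights are placeholders, not ILAMB output):

    ```python
    # Weighted aggregation of per-category skill scores, each in [0, 1].
    # Category names follow the abstract; all numbers are invented.
    weights = {"carbon": 0.4, "hydrology": 0.3, "energy": 0.2, "forcing": 0.1}
    scores = {
        "ModelA": {"carbon": 0.72, "hydrology": 0.61, "energy": 0.80, "forcing": 0.90},
        "ModelB": {"carbon": 0.65, "hydrology": 0.70, "energy": 0.75, "forcing": 0.90},
    }
    for model, s in scores.items():
        overall = sum(weights[k] * s[k] for k in weights)
        print(f"{model}: {overall:.3f}")
    ```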

  19. Development of common user data model for APOLLO3 and MARBLE and application to benchmark problems

    International Nuclear Information System (INIS)

    Yokoyama, Kenji

    2009-07-01

    A Common User Data Model, CUDM, has been developed for the purpose of benchmark calculations between the APOLLO3 and MARBLE code systems. The current version of CUDM was designed for core calculation benchmark problems with three-dimensional Cartesian (3-D XYZ) geometry. CUDM is able to manage all input/output data, such as the 3-D XYZ geometry, effective macroscopic cross sections, effective multiplication factor, and neutron flux. In addition, visualization tools for the geometry and neutron flux are included. CUDM was designed with the object-oriented technique and implemented in the Python programming language. Based on CUDM, a prototype system for benchmark calculations, CUDM-benchmark, was also developed. The CUDM-benchmark supports input/output data conversion for the IDT solver in APOLLO3 and the TRITAC and SNT solvers in MARBLE. In order to evaluate the pertinence of CUDM, the CUDM-benchmark was applied to benchmark problems proposed by T. Takeda, G. Chiba and I. Zmijarevic. It was verified that the CUDM-benchmark successfully reproduced the results calculated with reference input data files, and provided consistent results among all the solvers by using one common input data set defined by CUDM. In addition, a detailed benchmark calculation for the Chiba benchmark was performed using the CUDM-benchmark. The Chiba benchmark is a neutron transport benchmark problem for a fast criticality assembly without homogenization. This benchmark problem consists of four core configurations with different sodium void regions, each defined by more than 5,000 fuel/material cells. In this application, it was found that the results of the IDT and SNT solvers agreed well with the reference results from a Monte Carlo code. In addition, model effects such as the quadrature set effect, the SN order effect, and the mesh size effect were systematically evaluated and summarized in this report. (author)

  20. Glassy Chimeras Could Be Blind to Quantum Speedup: Designing Better Benchmarks for Quantum Annealing Machines

    Directory of Open Access Journals (Sweden)

    Helmut G. Katzgraber

    2014-04-01

    Full Text Available Recently, a programmable quantum annealing machine has been built that minimizes the cost function of hard optimization problems by, in principle, adiabatically quenching quantum fluctuations. Tests performed by different research teams have shown that, indeed, the machine seems to exploit quantum effects. However, experiments on a class of random-bond instances have not yet demonstrated an advantage over classical optimization algorithms on traditional computer hardware. Here, we present evidence as to why this might be the case. These engineered quantum annealing machines effectively operate coupled to a decohering thermal bath. Therefore, we study the finite-temperature critical behavior of the standard benchmark problem used to assess the computational capabilities of these complex machines. We simulate both random-bond Ising models and spin glasses with bimodal and Gaussian disorder on the D-Wave Chimera topology. Our results show that while the worst-case complexity of finding a ground state of an Ising spin glass on the Chimera graph is not polynomial, the finite-temperature phase space is likely rather simple, because spin glasses on Chimera have only a zero-temperature transition. This means that spin glasses on the Chimera graph might not be the best benchmark problems for testing quantum speedup. We propose alternative benchmarks by embedding potentially harder problems on the Chimera topology. Finally, we also study the (reentrant) disorder-temperature phase diagram of the random-bond Ising model on the Chimera graph and show that a finite-temperature ferromagnetic phase is stable up to 19.85(15)% antiferromagnetic bonds. Beyond this threshold, the system only displays a zero-temperature spin-glass phase. Our results therefore show that a careful design of the hardware architecture and benchmark problems is key when building quantum annealing machines.

  1. Controllability results for semilinear functional and neutral functional evolution equations with infinite delay

    Directory of Open Access Journals (Sweden)

    Selma Baghli

    2009-02-01

    Full Text Available In this paper, sufficient conditions are given ensuring the controllability of mild solutions defined on a bounded interval for two classes of first-order semilinear functional and neutral functional differential equations involving evolution operators with infinite delay, using the nonlinear alternative of Leray-Schauder type.

  2. SPICE benchmark for global tomographic methods

    Science.gov (United States)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrizations, scale resolutions and theoretical approaches. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model is carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed one complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal anisotropy), attenuation, density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high-fidelity anisotropic modelling was performed by using a state-of-the-art anisotropic anelastic modelling code, that is, the coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events, and three-component seismograms are recorded by 256 stations. Because of the limitation of the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10 500 s. The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal correction to recover the shallow structures, the loss of resolution with depth, the smearing effect, both horizontal and vertical, the inaccuracy of amplitude of isotropic S-wave velocity variation, the difficulty of retrieving the magnitude of azimuthal

  3. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Full Text Available Tourism development has an irreplaceable role in the regional policy of almost all countries. This is due to its undeniable benefits for the local population with regard to the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and are consequently drawn into a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and of the final strategies is a key factor of competitiveness. Even though the tourism sector is not a typical field where benchmarking methods are widely used, such approaches can be successfully applied. The paper focuses on a key phase of the benchmarking process, which lies in the search for suitable benchmarking partners. The partners are consequently selected to meet general requirements to ensure the quality of strategies. Following from this, some specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies of regions in the Czech Republic, Slovakia and Great Britain. In this way it validates the selected criteria in the frame of the international environment. Hence, it makes it possible to find strengths and weaknesses of the selected strategies and at the same time facilitates the discovery of suitable benchmarking partners.

  4. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used across services in the National Health Service (NHS) through various benchmarking programs. Clinical photography services do not have such a program in place and have to rely on ad hoc surveys of other services. A trial benchmarking exercise was therefore undertaken with 13 services in NHS Trusts. It highlighted valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  5. Benchmarking HIV health care

    DEFF Research Database (Denmark)

    Podlekareva, Daria; Reekie, Joanne; Mocroft, Amanda

    2012-01-01

    ABSTRACT: BACKGROUND: State-of-the-art care involving the utilisation of multiple health care interventions is the basis for an optimal long-term clinical prognosis for HIV-patients. We evaluated health care for HIV-patients based on four key indicators. METHODS: Four indicators of health care were...... assessed: Compliance with current guidelines on initiation of 1) combination antiretroviral therapy (cART), 2) chemoprophylaxis, 3) frequency of laboratory monitoring, and 4) virological response to cART (proportion of patients with HIV-RNA 90% of time on cART). RESULTS: 7097 Euro...... to North, patients from other regions had significantly lower odds of virological response; the difference was most pronounced for East and Argentina (adjusted OR 0.16[95%CI 0.11-0.23, p HIV health care utilization...

  6. A Framework for Urban Transport Benchmarking

    OpenAIRE

    Theuns Henning; Mohammed Dalil Essakali; Jung Eun Oh

    2011-01-01

    This report summarizes the findings of a study aimed at exploring key elements of a benchmarking framework for urban transport. Unlike many industries where benchmarking has proven to be successful and straightforward, the multitude of actors and interactions involved in urban transport systems may make benchmarking a complex endeavor. It was therefore important to analyze what has been...

  7. Assessment of Static Delamination Propagation Capabilities in Commercial Finite Element Codes Using Benchmark Analysis

    Science.gov (United States)

    Orifici, Adrian C.; Krueger, Ronald

    2010-01-01

    With capabilities for simulating delamination growth in composite materials becoming available, the need for benchmarking and assessing these capabilities is critical. In this study, benchmark analyses were performed to assess the delamination propagation simulation capabilities of the VCCT implementations in Marc™ and MD Nastran™. Benchmark delamination growth results for Double Cantilever Beam, Single Leg Bending and End Notched Flexure specimens were generated using a numerical approach. This numerical approach was developed previously, and involves comparing results from a series of analyses at different delamination lengths to a single analysis with automatic crack propagation. Specimens were analyzed with three-dimensional and two-dimensional models, and compared with previous analyses using Abaqus. The results demonstrated that the VCCT implementation in Marc™ and MD Nastran™ was capable of accurately replicating the benchmark delamination growth results and that the use of the numerical benchmarks offers advantages over benchmarking using experimental and analytical results.

  8. Health services research related to performance indicators and benchmarking in Europe.

    Science.gov (United States)

    Klazinga, Niek; Fischer, Claudia; ten Asbroek, Augustinus

    2011-07-01

    ), and the Organization for Economic Co-operation and Development (OECD) to facilitate the availability of internationally comparable performance information. This study suggests a number of themes for future research. These include testing and improving: the validity and reliability of performance indicators, especially related to avoidable mortality and other outcome indicators; the effectiveness and efficiency of embedding performance indicators in the various governance, monitoring and management models, and their effect on health systems, services and professionals; and the effectiveness and efficiency of linking performance indicators to other national and international strategies and policies such as accreditation and certification, practice guidelines, audits, quality systems, patient safety strategies, national standards on volume and/or quality, public reporting, pay-for-performance and patient/consumer involvement. The field would benefit from strengthening the clearinghouse function for research findings, training of researchers and appropriate scientific publication media. Results should be systematically shared with policy-makers and managers, and networking stimulated between the growing number of regional and national institutes involved in quality measurement and reporting.

  9. Closed-Loop Neuromorphic Benchmarks

    Directory of Open Access Journals (Sweden)

    Terrence C Stewart

    2015-12-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of minimal simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled.
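
    The error-driven motor-control setup lends itself to a compact illustration. The sketch below is not the paper's benchmark code; it pairs a randomly generated linear plant (with an unknown constant external force) with a simple delta-rule controller, so the loop is closed in the sense described above. All names and parameter values are illustrative.

```python
import numpy as np

# Minimal sketch: a randomly generated linear plant controlled by an
# error-driven (delta-rule) controller, in the spirit of the closed-loop
# benchmarks described above. All parameter values are illustrative.
rng = np.random.default_rng(0)
n_joints = 3
A = -0.1 * np.eye(n_joints)            # plant dynamics (stable by construction)
F = rng.normal(0, 0.5, n_joints)       # unknown constant external force
target = rng.normal(0, 1.0, n_joints)  # desired joint positions

x = np.zeros(n_joints)                 # plant state
w = np.zeros(n_joints)                 # adaptive feedforward weights
kp, lr, dt = 2.0, 0.1, 0.01

errors = []
for step in range(5000):
    err = target - x
    u = kp * err + w                   # feedback plus learned feedforward
    w += lr * err * dt                 # error-driven learning rule
    x += (A @ x + u + F) * dt          # closed loop: output feeds back in
    errors.append(np.linalg.norm(err))

print(f"initial error {errors[0]:.3f} -> final error {errors[-1]:.3f}")
```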

  10. Ab initio and DFT benchmarking of tungsten nanoclusters and tungsten hydrides

    International Nuclear Information System (INIS)

    Skoviera, J.; Novotny, M.; Cernusak, I.; Oda, T.; Louis, F.

    2015-01-01

    We present several benchmark calculations comparing wave-function based methods and density functional theory for model systems containing tungsten. They include the W4 cluster as well as the W2, WH and WH2 molecules. (authors)

  11. TREAT Transient Analysis Benchmarking for the HEU Core

    Energy Technology Data Exchange (ETDEWEB)

    Kontogeorgakos, D. C. [Argonne National Lab. (ANL), Argonne, IL (United States); Connaway, H. M. [Argonne National Lab. (ANL), Argonne, IL (United States); Wright, A. E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-05-01

    This work was performed to support the feasibility study on the potential conversion of the Transient Reactor Test Facility (TREAT) at Idaho National Laboratory from the use of high enriched uranium (HEU) fuel to the use of low enriched uranium (LEU) fuel. The analyses were performed by the GTRI Reactor Conversion staff at Argonne National Laboratory (ANL). The objective of this study was to benchmark the transient calculations against temperature-limited transients performed in the final operating HEU TREAT core configuration. The MCNP code was used to evaluate steady-state neutronics behavior, and the point kinetics code TREKIN was used to determine core power and energy during transients. The first part of the benchmarking process was to calculate with MCNP all the neutronic parameters required by TREKIN to simulate the transients: the transient rod-bank worth, the prompt neutron generation lifetime, the temperature reactivity feedback as a function of total core energy, and the core-average temperature and peak temperature as functions of total core energy. The results of these calculations were compared against measurements or against reported values as documented in the available TREAT reports. The heating of the fuel was simulated as an adiabatic process. The reported values were extracted from ANL reports, intra-laboratory memos and experiment logsheets, and in some cases it was not clear whether the values were based on measurements, on calculations, or a combination of both. Therefore, it was decided to use the term "reported" values when referring to such data. The methods and results from the HEU core transient analyses will be used for the potential LEU core configurations to predict the converted (LEU) core's performance.
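
    For readers unfamiliar with the point-kinetics approach, the following sketch shows the general form of such a calculation: one delayed-neutron group plus an adiabatic, energy-dependent reactivity feedback. It is a schematic stand-in, not TREKIN; all constants are hypothetical placeholders rather than TREAT data.

```python
import numpy as np

# Illustrative point kinetics with one delayed-neutron group and an adiabatic,
# energy-dependent reactivity feedback. All constants are assumed values.
beta, Lambda, lam = 0.0071, 1.0e-4, 0.08   # delayed fraction, generation time, decay const
alpha = -2.0e-5                            # feedback coefficient per unit energy (assumed)
rho_ins = 0.015                            # inserted step reactivity (assumed)

dt, t_end = 1.0e-4, 5.0
n, c, energy = 1.0, beta / (Lambda * lam), 0.0   # start at delayed-precursor equilibrium
for _ in range(int(t_end / dt)):
    rho = rho_ins + alpha * energy         # feedback as a function of total core energy
    dn = ((rho - beta) / Lambda) * n + lam * c
    dc = (beta / Lambda) * n - lam * c
    n += dn * dt
    c += dc * dt
    energy += n * dt                       # adiabatic: all power stays in the fuel

print(f"total energy (arbitrary units): {energy:.1f}")
```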

  12. Benchmarking organic mixed conductors for transistors

    KAUST Repository

    Inal, Sahika

    2017-11-20

    Organic mixed conductors have garnered significant attention in applications from bioelectronics to energy storage/generation. Their implementation in organic transistors has led to enhanced biosensing, neuromorphic function, and specialized circuits. While a narrow class of conducting polymers continues to excel in these new applications, materials design efforts have accelerated as researchers target new functionality, processability, and improved performance/stability. Materials for organic electrochemical transistors (OECTs) require both efficient electronic transport and facile ion injection in order to sustain high capacity. In this work, we show that the product of the electronic mobility and volumetric charge storage capacity (µC*) is the materials/system figure of merit; we use this framework to benchmark and compare the steady-state OECT performance of ten previously reported materials. This product can be independently verified and decoupled to guide materials design and processing. OECTs can therefore be used as a tool for understanding and designing new organic mixed conductors.
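
    The µC* ranking itself is straightforward to reproduce once the electronic mobility and volumetric capacitance are known. A minimal sketch, using placeholder material names and values rather than the ten materials reported in the paper:

```python
# Ranking organic mixed conductors by the muC* figure of merit
# (electronic mobility x volumetric charge storage capacity).
# Material names and values below are illustrative placeholders.
materials = {
    "polymer_A": {"mu_cm2_Vs": 1.2, "C_star_F_cm3": 40.0},
    "polymer_B": {"mu_cm2_Vs": 0.3, "C_star_F_cm3": 120.0},
    "polymer_C": {"mu_cm2_Vs": 2.0, "C_star_F_cm3": 15.0},
}
ranked = sorted(materials.items(),
                key=lambda kv: kv[1]["mu_cm2_Vs"] * kv[1]["C_star_F_cm3"],
                reverse=True)
for name, p in ranked:
    fom = p["mu_cm2_Vs"] * p["C_star_F_cm3"]
    print(f"{name}: muC* = {fom:.1f} F cm^-1 V^-1 s^-1")
```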

  13. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different... For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios...

  14. Adapting benchmarking to project management : an analysis of project management processes, metrics, and benchmarking process models

    OpenAIRE

    Emhjellen, Kjetil

    1997-01-01

    Doctoral thesis (dr.ing.) - Høgskolen i Telemark / Norwegian University of Science and Technology. Since the first publication on benchmarking in 1989 by Robert C. Camp, "Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance", the improvement technique benchmarking has been established as an important tool in the process-focused manufacturing or production environment. The use of benchmarking has expanded to other types of industry. Benchmarking has past t...

  15. Benchmarking urban energy efficiency in the UK

    International Nuclear Information System (INIS)

    Keirstead, James

    2013-01-01

    This study asks what is the ‘best’ way to measure urban energy efficiency. There has been recent interest in identifying efficient cities so that best practices can be shared, a process known as benchmarking. Previous studies have used relatively simple metrics that provide limited insight on the complexity of urban energy efficiency and arguably fail to provide a ‘fair’ measure of urban performance. Using a data set of 198 urban UK local administrative units, three methods are compared: ratio measures, regression residuals, and data envelopment analysis. The results show that each method has its own strengths and weaknesses regarding the ease of interpretation, ability to identify outliers and provide consistent rankings. Efficient areas are diverse but are notably found in low income areas of large conurbations such as London, whereas industrial areas are consistently ranked as inefficient. The results highlight the shortcomings of the underlying production-based energy accounts. Ideally urban energy efficiency benchmarks would be built on consumption-based accounts, but interim recommendations are made regarding the use of efficiency measures that improve upon current practice and facilitate wider conversations about what it means for a specific city to be energy-efficient within an interconnected economy. - Highlights: • Benchmarking is a potentially valuable method for improving urban energy performance. • Three different measures of urban energy efficiency are presented for UK cities. • Most efficient areas are diverse but include low-income areas of large conurbations. • Least efficient areas perform industrial activities of national importance. • Improve current practice with grouped per capita metrics or regression residuals
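
    Two of the three measures compared in the study are easy to sketch on synthetic data (DEA, the third, requires a linear-programming formulation and is omitted here for brevity). The variable names and synthetic drivers below are illustrative assumptions, not the paper's 198-area UK data set:

```python
import numpy as np

# Sketch of two urban energy-efficiency measures: a per-capita ratio and a
# regression-residual score. Synthetic data stand in for the UK data set.
rng = np.random.default_rng(1)
n = 198
population = rng.lognormal(11, 1, n)
income = rng.normal(25, 5, n)                       # per-capita income (illustrative)
energy = 20 * population * np.exp(0.02 * income) * rng.lognormal(0, 0.2, n)

# 1) Ratio measure: energy per capita (lower = more "efficient")
ratio_score = energy / population

# 2) Regression residual: efficiency as the residual after controlling for
#    structural drivers (here log-population and income)
X = np.column_stack([np.ones(n), np.log(population), income])
coef, *_ = np.linalg.lstsq(X, np.log(energy), rcond=None)
residual_score = np.log(energy) - X @ coef          # negative = better than expected

# Spearman rank agreement between the two measures (correlation of ranks)
ranks_a = ratio_score.argsort().argsort()
ranks_b = residual_score.argsort().argsort()
print("rank agreement:", np.corrcoef(ranks_a, ranks_b)[0, 1].round(2))
```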

  16. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book, which was first published in February, 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December, 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February, 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  17. Space network scheduling benchmark: A proof-of-concept process for technology transfer

    Science.gov (United States)

    Moe, Karen; Happell, Nadine; Hayden, B. J.; Barclay, Cathy

    1993-01-01

    This paper describes a detailed proof-of-concept activity to evaluate flexible scheduling technology as implemented in the Request Oriented Scheduling Engine (ROSE) and applied to Space Network (SN) scheduling. The criteria developed for an operational evaluation of a reusable scheduling system is addressed including a methodology to prove that the proposed system performs at least as well as the current system in function and performance. The improvement of the new technology must be demonstrated and evaluated against the cost of making changes. Finally, there is a need to show significant improvement in SN operational procedures. Successful completion of a proof-of-concept would eventually lead to an operational concept and implementation transition plan, which is outside the scope of this paper. However, a high-fidelity benchmark using actual SN scheduling requests has been designed to test the ROSE scheduling tool. The benchmark evaluation methodology, scheduling data, and preliminary results are described.

  18. The CMSSW benchmarking suite: Using HEP code to measure CPU performance

    Science.gov (United States)

    Benelli, G.; CMS Offline Computing Projects

    2010-04-01

    The demanding computing needs of the CMS experiment require thoughtful planning and management of its computing infrastructure. A key factor in this process is the use of realistic benchmarks when assessing the computing power of the different architectures available. In recent years a discrepancy has been observed between the CPU performance estimates given by the reference benchmark for HEP computing (SPECint [1]) and actual performances of HEP code. Making use of the CPU performance tools from the CMSSW performance suite, comparative CPU performance studies have been carried out on several architectures. A benchmarking suite has been developed and integrated in the CMSSW framework, to allow computing centers and interested third parties to benchmark architectures directly with CMSSW. The CMSSW benchmarking suite can be used out of the box to test and compare several machines in terms of CPU performance and to report the different benchmarking scores (e.g. by processing step) at the desired level of detail. In this talk we describe briefly the CMSSW software performance suite, and in detail the CMSSW benchmarking suite client/server design, the performance data analysis and the available CMSSW benchmark scores. The experience in the use of HEP code for benchmarking will be discussed and CMSSW benchmark results presented.

  19. International benchmarking of electricity transmission by regulators: A contrast between theory and practice?

    International Nuclear Information System (INIS)

    Haney, Aoife Brophy; Pollitt, Michael G.

    2013-01-01

    Benchmarking of electricity networks has a key role in sharing the benefits of efficiency improvements with consumers and ensuring regulated companies earn a fair return on their investments. This paper analyses and contrasts the theory and practice of international benchmarking of electricity transmission by regulators. We examine the literature relevant to electricity transmission benchmarking and discuss the results of a survey of 25 national electricity regulators. While new panel data techniques aimed at dealing with unobserved heterogeneity and the validity of the comparator group look intellectually promising, our survey suggests that they are in their infancy for regulatory purposes. In electricity transmission, relative to electricity distribution, choosing variables is particularly difficult, because of the large number of potential variables to choose from. Failure to apply benchmarking appropriately may negatively affect investors’ willingness to invest in the future. While few of our surveyed regulators acknowledge that regulatory risk is currently an issue in transmission benchmarking, many more concede it might be. In the meantime new regulatory approaches – such as those based on tendering, negotiated settlements, a wider range of outputs or longer term grid planning – are emerging and will necessarily involve a reduced role for benchmarking. -- Highlights: •We discuss how to benchmark electricity transmission. •We report survey results from 25 national energy regulators. •Electricity transmission benchmarking is more challenging than benchmarking distribution. •Many regulators concede benchmarking may raise capital costs. •Many regulators are considering new regulatory approaches

  20. Benchmarking in TESOL: A Study of the Malaysia Education Blueprint 2013

    Science.gov (United States)

    Jawaid, Arif

    2014-01-01

    Benchmarking is a very common real-life function, occurring every moment unnoticed. It has travelled from industry to education like other quality disciplines. Initially benchmarking was used in higher education. Now it is diffusing into other areas including TESOL (Teaching English to Speakers of Other Languages), which has yet to devise a…

  1. Model-Based Engineering and Manufacturing CAD/CAM Benchmark

    International Nuclear Information System (INIS)

    Domm, T.D.; Underwood, R.S.

    1999-01-01

    The Benchmark Project was created from a desire to identify best practices and improve the overall efficiency and performance of the Y-12 Plant's systems and personnel supporting the manufacturing mission. The mission of the benchmark team was to search out industry leaders in manufacturing and evaluate their engineering practices and processes to determine direction and focus for Y-12 modernization efforts. The companies visited included several large established companies and a new, small, high-tech machining firm. As a result of this effort, changes are recommended that will enable Y-12 to become a more responsive, cost-effective manufacturing facility capable of supporting the needs of the Nuclear Weapons Complex (NWC) and Work For Others into the 21st century. The benchmark team identified key areas of interest, both focused and general. The focus areas included Human Resources, Information Management, Manufacturing Software Tools, and Standards/Policies and Practices. Areas of general interest included Infrastructure, Computer Platforms and Networking, and Organizational Structure. The method for obtaining the desired information in these areas centered on the creation of a benchmark questionnaire. The questionnaire was used throughout each of the visits as the basis for information gathering. The results of this benchmark showed that all companies are moving in the direction of model-based engineering and manufacturing. There was evidence that many companies are trying to grasp how to manage current and legacy data. In terms of engineering design software tools, the companies contacted were using both 3-D solid modeling and surfaced wire-frame models. The manufacturing computer tools were varied, with most companies using more than one software product to generate machining data and none currently performing model-based manufacturing (MBM) from a common model. The majority of companies were closer to identifying or using a single computer-aided design (CAD) system

  2. Comment on 'Analytical results for a Bessel function times Legendre polynomials class integrals'

    International Nuclear Information System (INIS)

    Cregg, P J; Svedlindh, P

    2007-01-01

    A result is obtained, stemming from Gegenbauer, where the products of certain Bessel functions and exponentials are expressed in terms of an infinite series of spherical Bessel functions and products of associated Legendre functions. Closed form solutions for integrals involving Bessel functions times associated Legendre functions times exponentials, recently elucidated by Neves et al (J. Phys. A: Math. Gen. 39 L293), are then shown to result directly from the orthogonality properties of the associated Legendre functions. This result offers greater flexibility in the treatment of classical Heisenberg chains and may do so in other problems such as occur in electromagnetic diffraction theory. (comment)
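
    For orientation, a standard identity of the same family is Rayleigh's plane-wave expansion, shown below; the Gegenbauer-based result discussed above generalizes this type of series to products involving associated Legendre functions. This particular identity is a well-known textbook result, not a restatement of the paper's formula:

```latex
% Rayleigh's plane-wave expansion: an exponential expressed as a series of
% spherical Bessel functions j_l times Legendre polynomials P_l.
\[
  e^{ikr\cos\theta} \;=\; \sum_{l=0}^{\infty} (2l+1)\, i^{l}\, j_l(kr)\, P_l(\cos\theta)
\]
```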

  3. A Uranium Bioremediation Reactive Transport Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    2015-06-01

    A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay between the flow, acetate loading periods and rates, and microbially mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10-day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50-day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. Major ion geochemistry modeled includes mineral reactions, as well as aqueous and surface complexation reactions for UO2²⁺, Fe²⁺, and H⁺. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity and available surface complexation sites. The general difficulty of this benchmark is the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
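
    The Monod-type rate laws mentioned above have a simple generic form. A minimal sketch of a dual-Monod rate (limited by both electron donor and electron acceptor), with purely illustrative parameters rather than the Rifle benchmark inputs:

```python
import numpy as np

# Sketch of a dual-Monod rate law of the kind used for terminal
# electron-accepting processes. All parameter values are illustrative.
def monod_rate(k_max, biomass, acetate, K_acetate, ea, K_ea):
    """Dual-Monod rate: limited by the electron donor (acetate) and the
    terminal electron acceptor (e.g., Fe(III), U(VI) or sulfate)."""
    return k_max * biomass * (acetate / (K_acetate + acetate)) * (ea / (K_ea + ea))

# Example: a U(VI)-like bioreduction rate over a range of acetate concentrations
acetate = np.linspace(0.0, 5.0, 6)          # mmol/L (illustrative)
rate = monod_rate(k_max=1.0e-3, biomass=0.1, acetate=acetate,
                  K_acetate=0.1, ea=0.2, K_ea=0.05)
for a, r in zip(acetate, rate):
    print(f"acetate {a:.1f} mmol/L -> rate {r:.2e}")
```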

  4. DNA breathing dynamics: analytic results for distribution functions of relevant Brownian functionals.

    Science.gov (United States)

    Bandyopadhyay, Malay; Gupta, Shamik; Segal, Dvira

    2011-03-01

    We investigate DNA breathing dynamics by suggesting and examining several Brownian functionals associated with bubble lifetime and reactivity. Bubble dynamics is described as an overdamped random walk in the number of broken base pairs. The walk takes place on the Poland-Scheraga free-energy landscape. We suggest several probability distribution functions that characterize the breathing process, and adopt the recently studied backward Fokker-Planck method and the path decomposition method as elegant and flexible tools for deriving these distributions. In particular, for a bubble of an initial size x_0, we derive analytical expressions for (i) the distribution P(t_f|x_0) of the first-passage time t_f, characterizing the bubble lifetime, (ii) the distribution P(A|x_0) of the area A until the first-passage time, providing information about the effective reactivity of the bubble to processes within the DNA, (iii) the distribution P(M) of the maximum bubble size M attained before the first-passage time, and (iv) the joint probability distribution P(M, t_m) of the maximum bubble size M and the time t_m of its occurrence before the first-passage time. These distributions are analyzed in the limit of small and large bubble sizes. We supplement our analytical predictions with direct numerical simulations of the related Langevin equation, and obtain very good agreement in the appropriate limits. The nontrivial scaling behavior of the various quantities analyzed here can, in principle, be explored experimentally.
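
    The "direct numerical simulations of the related Langevin equation" can be illustrated compactly. The sketch below samples first-passage times t_f and areas A for an overdamped walk absorbed at zero; the constant drift is a stand-in for the actual Poland-Scheraga free-energy gradient, and all parameters are illustrative.

```python
import numpy as np

# Overdamped random walk x(t) in bubble size, absorbed at x = 0, recording the
# first-passage time t_f and the area A under the trajectory. The constant
# closure drift is a simplifying assumption; parameters are illustrative.
rng = np.random.default_rng(2)
x0, drift, D, dt = 5.0, -0.5, 1.0, 1e-2

def one_path():
    x, t, area = x0, 0.0, 0.0
    while x > 0.0:
        area += x * dt
        x += drift * dt + np.sqrt(2 * D * dt) * rng.normal()
        t += dt
    return t, area

samples = [one_path() for _ in range(500)]
t_f = np.array([s[0] for s in samples])
A = np.array([s[1] for s in samples])
print(f"mean first-passage time {t_f.mean():.2f} "
      f"(drift-only estimate {x0 / abs(drift):.2f})")
print(f"mean area {A.mean():.2f}")
```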

  5. NEACRP thermal fission product benchmark

    International Nuclear Information System (INIS)

    Halsall, M.J.; Taubman, C.J.

    1989-09-01

    The objective of the thermal fission product benchmark was to compare the range of fission product data in use at the present time. A simple homogeneous problem was set with 200 atoms H/1 atom U235, to be burnt up to 1000 days and then decay for 1000 days. The problem was repeated with 200 atoms H/1 atom Pu239, 20 atoms H/1 atom U235 and 20 atoms H/1 atom Pu239. There were ten participants and the submissions received are detailed in this report. (author)

  6. NASA Software Engineering Benchmarking Effort

    Science.gov (United States)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we did see potential solutions to some of our "top 10" issues; (2) we have an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes; (2) many of the organizations were interested in future collaboration: sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We received feedback from some of our contractors/partners: (1) desires to participate in our training and provide feedback on procedures; (2) welcomed the opportunity to provide feedback on working with NASA.

  7. Burn-up TRIGA Mark II benchmark experiment

    International Nuclear Information System (INIS)

    Persic, A.; Ravnik, M.; Zagar, T.

    1998-01-01

    Different reactor codes are used for calculations of reactor parameters. The accuracy of the programs is tested through comparison of the calculated values with experimental results. Well-defined and accurately measured benchmarks are required. The experimental results of reactivity measurements, fuel element reactivity worth distribution and burn-up measurements are presented in this paper. The experiments were performed with a partly burnt reactor core. The experimental conditions were well defined, so that the results can be used as a burn-up benchmark test case for TRIGA Mark II reactor calculations. (author)

  8. REVISED STREAM CODE AND WASP5 BENCHMARK

    International Nuclear Information System (INIS)

    Chen, K

    2005-01-01

    STREAM is an emergency response code that predicts downstream pollutant concentrations for releases from the SRS area to the Savannah River. The STREAM code uses an algebraic equation to approximate the solution of the one-dimensional advective transport differential equation. This approach generates spurious oscillations in the concentration profile when modeling long duration releases. To improve the capability of the STREAM code to model long-term releases, its calculation module was replaced by the WASP5 code. WASP5 is a US EPA water quality analysis program that simulates one-dimensional pollutant transport through surface water. Test cases were performed to compare the revised version of STREAM with the existing version. For continuous releases, results predicted by the revised STREAM code agree with physical expectations. The WASP5 code was benchmarked with the US EPA 1990 and 1991 dye tracer studies, in which the transport of the dye was measured from its release at the New Savannah Bluff Lock and Dam downstream to Savannah. The peak concentrations predicted by WASP5 agreed with the measurements within ±20.0%. The transport times of the dye concentration peak predicted by WASP5 agreed with the measurements within ±3.6%. These benchmarking results demonstrate that STREAM should be capable of accurately modeling releases from SRS outfalls
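
    The oscillation issue described above is a classic property of discretizations of the 1-D advection equation. As a hedged illustration (not the STREAM or WASP5 source), a first-order upwind step stays non-oscillatory at the price of numerical diffusion; grid and parameters below are illustrative:

```python
import numpy as np

# First-order upwind step for dc/dt + u dc/dx = 0. Upwind differencing is
# non-oscillatory (but numerically diffusive), whereas centered differencing
# of the advective term produces spurious wiggles for wide release pulses.
nx, u, dx = 200, 1.0, 1.0
dt = 0.5 * dx / u                      # Courant number 0.5 (stable)
c = np.zeros(nx)
c[10:30] = 1.0                         # long-duration (wide) release pulse

for _ in range(200):
    c[1:] = c[1:] - u * dt / dx * (c[1:] - c[:-1])   # upwind difference
    c[0] = 0.0                                        # upstream boundary

print(f"peak {c.max():.3f}, min {c.min():.3f} (no undershoot below zero)")
```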

  9. INFLUENCE OF ASSESSMENT SETTING ON THE RESULTS OF FUNCTIONAL ANALYSES OF PROBLEM BEHAVIOR

    OpenAIRE

    Lang, Russell; Sigafoos, Jeff; Lancioni, Giulio; Didden, Robert; Rispoli, Mandy

    2010-01-01

    Analogue functional analyses are widely used to identify the operant function of problem behavior in individuals with developmental disabilities. Because problem behavior often occurs across multiple settings (e.g., homes, schools, outpatient clinics), it is important to determine whether the results of functional analyses vary across settings. This brief review covers 3 recent studies that examined the influence of different settings on the results of functional analyses and identifies direc...

  10. Analysis of a multigroup stylized CANDU half-core benchmark

    International Nuclear Information System (INIS)

    Pounders, Justin M.; Rahnema, Farzad; Serghiuta, Dumitru

    2011-01-01

    Highlights: → This paper provides a benchmark that is a stylized model problem in more than two energy groups that is realistic with respect to the underlying physics. → An 8-group cross section library is provided to augment a previously published 2-group 3D stylized half-core CANDU benchmark problem. → Reference eigenvalues and selected pin and bundle fission rates are included. → 2-, 4- and 47-group Monte Carlo solutions are compared to analyze homogenization-free transport approximations that result from energy condensation. - Abstract: An 8-group cross section library is provided to augment a previously published 2-group 3D stylized half-core Canadian deuterium uranium (CANDU) reactor benchmark problem. Reference eigenvalues and selected pin and bundle fission rates are also included. This benchmark is intended to provide computational reactor physicists and methods developers with a stylized model problem in more than two energy groups that is realistic with respect to the underlying physics. In addition to transport theory code verification, the 8-group energy structure provides reactor physicist with an ideal problem for examining cross section homogenization and collapsing effects in a full-core environment. To this end, additional 2-, 4- and 47-group full-core Monte Carlo benchmark solutions are compared to analyze homogenization-free transport approximations incurred as a result of energy group condensation.

  11. Developing a benchmark for emotional analysis of music.

    Science.gov (United States)

    Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field has rapidly expanded in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures, and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent-neural-network-based approaches combined with large feature sets work best for dynamic MER.

  12. Developing a benchmark for emotional analysis of music.

    Directory of Open Access Journals (Sweden)

    Anna Aljanaki

    The music emotion recognition (MER) field has rapidly expanded in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures, and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent-neural-network-based approaches combined with large feature sets work best for dynamic MER.

  13. Development of an MPI benchmark program library

    Energy Technology Data Exchange (ETDEWEB)

    Uehara, Hitoshi

    2001-03-01

    Distributed parallel simulation software with message passing interfaces has been developed to realize large-scale, high-performance numerical simulations. The most popular API for message communication is MPI, which will be provided on the Earth Simulator. It is known that the performance of message communication using MPI libraries has a significant influence on the overall performance of simulation programs. We developed an MPI benchmark program library named MBL in order to measure the performance of message communication precisely. MBL measures the performance of major MPI functions, such as point-to-point communications and collective communications, and the performance of major communication patterns that are often found in application programs. In this report, the description of MBL and the performance analysis of MPI/SX measured on the SX-4 are presented. (author)
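
    MBL itself is not reproduced here, but the flavor of its point-to-point tests can be sketched with a standard ping-pong benchmark. The version below uses the mpi4py bindings purely for illustration; the original library's language and interface are not described in the record.

```python
# Minimal ping-pong latency benchmark in the spirit of point-to-point MPI
# tests. Run with e.g.:  mpiexec -n 2 python pingpong.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
reps = 1000

for nbytes in (8, 1024, 1024 * 1024):
    buf = np.zeros(nbytes, dtype=np.uint8)
    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(reps):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=1)
        elif rank == 1:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=1)
    dt = MPI.Wtime() - t0
    if rank == 0:
        # one round trip = 2 messages; report one-way time per message
        print(f"{nbytes:>8} B: {dt / (2 * reps) * 1e6:8.2f} us one-way")
```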

  14. A Web-Based System for Bayesian Benchmark Dose Estimation.

    Science.gov (United States)

    Shao, Kan; Shapiro, Andrew J

    2018-01-11

    Benchmark dose (BMD) modeling is an important step in human health risk assessment and is used as the default approach to identify the point of departure for risk assessment. A probabilistic framework for dose-response assessment has been proposed and advocated by various institutions and organizations; therefore, a reliable tool is needed to provide distributional estimates for BMD and other important quantities in dose-response assessment. We developed an online system for Bayesian BMD (BBMD) estimation and compared results from this software with U.S. Environmental Protection Agency's (EPA's) Benchmark Dose Software (BMDS). The system is built on a Bayesian framework featuring the application of Markov chain Monte Carlo (MCMC) sampling for model parameter estimation and BMD calculation, which makes the BBMD system fundamentally different from the currently prevailing BMD software packages. In addition to estimating the traditional BMDs for dichotomous and continuous data, the developed system is also capable of computing model-averaged BMD estimates. A total of 518 dichotomous and 108 continuous data sets extracted from the U.S. EPA's Integrated Risk Information System (IRIS) database (and similar databases) were used as testing data to compare the estimates from the BBMD and BMDS programs. The results suggest that the BBMD system may outperform the BMDS program in a number of aspects, including fewer failed BMD and BMDL calculations and estimates. The BBMD system is a useful alternative tool for estimating BMD with additional functionalities for BMD analysis based on most recent research. Most importantly, the BBMD has the potential to incorporate prior information to make dose-response modeling more reliable and can provide distributional estimates for important quantities in dose-response assessment, which greatly facilitates the current trend for probabilistic risk assessment. https://doi.org/10.1289/EHP1289.
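
    The Bayesian machinery behind such a system can be sketched in a few lines. The following is a schematic random-walk Metropolis fit of a logistic dose-response model with a 10% extra-risk BMD, on synthetic data; it is not the BBMD code, and the data, priors and step sizes are assumptions.

```python
import numpy as np

# Schematic Bayesian BMD calculation for dichotomous data: logistic
# dose-response fitted by random-walk Metropolis; BMD = dose giving 10%
# extra risk over background. All data and tuning values are illustrative.
rng = np.random.default_rng(3)
dose = np.array([0.0, 1.0, 3.0, 10.0])
n = np.array([50, 50, 50, 50])
y = np.array([2, 5, 12, 30])          # responders (synthetic)

def loglik(a, b):
    p = 1.0 / (1.0 + np.exp(-(a + b * dose)))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return np.sum(y * np.log(p) + (n - y) * np.log(1 - p))

def logpost(a, b):
    # vague normal priors (assumed)
    return loglik(a, b) - (a**2 + b**2) / (2 * 10.0**2)

a, b, chain = -2.0, 0.2, []
lp = logpost(a, b)
for i in range(20000):
    a_new, b_new = a + 0.2 * rng.normal(), b + 0.02 * rng.normal()
    lp_new = logpost(a_new, b_new)
    if np.log(rng.random()) < lp_new - lp:       # Metropolis accept/reject
        a, b, lp = a_new, b_new, lp_new
    if i >= 5000:                                # discard burn-in
        chain.append((a, b))

# BMD posterior: dose giving 10% extra risk over background p0 = logistic(a)
bmds = []
for a_s, b_s in chain:
    p0 = 1.0 / (1.0 + np.exp(-a_s))
    p_target = p0 + 0.1 * (1.0 - p0)
    bmds.append((np.log(p_target / (1 - p_target)) - a_s) / b_s)
bmds = np.array(bmds)
print(f"BMD posterior median {np.median(bmds):.2f}, "
      f"BMDL (5th percentile) {np.percentile(bmds, 5):.2f}")
```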

  15. Reevaluation of the Jezebel Benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-03-10

    Every nuclear engineering student is familiar with Jezebel, the homogeneous bare sphere of plutonium first assembled at Los Alamos in 1954-1955. The actual Jezebel assembly was neither homogeneous, nor bare, nor spherical; nor was it singular – there were hundreds of Jezebel configurations assembled. The Jezebel benchmark has been reevaluated for the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Logbooks, original drawings, mass accountability statements, internal reports, and published reports have been used to model four actual three-dimensional Jezebel assemblies with high fidelity. Because the documentation available today is often inconsistent, three major assumptions were made regarding plutonium part masses and dimensions. The first was that the assembly masses given in Los Alamos report LA-4208 (1969) were correct, and the second was that the original drawing dimension for the polar height of a certain major part was correct. The third assumption was that a change notice indicated on the original drawing was not actually implemented. This talk will describe these assumptions, the alternatives, and the implications. Since the publication of the 2013 ICSBEP Handbook, the actual masses of the major components have turned up. Our assumption regarding the assembly masses was proven correct, but we had the mass distribution incorrect. Work to incorporate the new information is ongoing, and this talk will describe the latest assessment.

  16. Swiss electricity grid - Benchmarking pilot project

    International Nuclear Information System (INIS)

    2001-01-01

    This article is a short version of ENET report number 210369. The report, prepared for the Swiss Federal Office of Energy (SFOE), describes a benchmarking pilot project carried out as a second phase in the development of a formula for the regulation of an open electricity market in Switzerland. It follows on from an initial phase involving the definition of a 'blueprint' and a basic concept. The aims of the pilot project - to check the practicability of the concept - are discussed. The collection of anonymised data for the benchmarking model from over 30 electricity utilities operating on all 7 Swiss grid levels, and their integration in the three areas 'Technology', 'Grid Costs' and 'Capital Invested', are discussed in detail. In particular, confidentiality and data protection aspects are looked at. The methods used in the analysis of the data are described and the results of an efficiency analysis of various utilities are presented. The report concludes with a list of questions concerning data collection and analysis, as well as operational and capital costs, that are still to be answered

  17. Benchmarking of SIMULATE-3 on engineering workstations

    International Nuclear Information System (INIS)

    Karlson, C.F.; Reed, M.L.; Webb, J.R.; Elzea, J.D.

    1990-01-01

    The nuclear fuel management department of Arizona Public Service Company (APS) has evaluated various computer platforms for a departmental engineering and business work-station local area network (LAN). Historically, centralized mainframe computer systems have been utilized for engineering calculations. Increasing usage and the resulting longer response times on the company mainframe system and the relative cost differential between a mainframe upgrade and workstation technology justified the examination of current workstations. A primary concern was the time necessary to turn around routine reactor physics reload and analysis calculations. Computers ranging from a Definicon 68020 processing board in an AT compatible personal computer up to an IBM 3090 mainframe were benchmarked. The SIMULATE-3 advanced nodal code was selected for benchmarking based on its extensive use in nuclear fuel management. SIMULATE-3 is used at APS for reload scoping, design verification, core follow, and providing predictions of reactor behavior under nominal conditions and planned reactor maneuvering, such as axial shape control during start-up and shutdown

  18. Using the fuzzy linear regression method to benchmark the energy efficiency of commercial buildings

    International Nuclear Information System (INIS)

    Chung, William

    2012-01-01

    Highlights: ► Fuzzy linear regression method is used for developing benchmarking systems. ► The systems can be used to benchmark energy efficiency of commercial buildings. ► The resulting benchmarking model can be used by public users. ► The resulting benchmarking model can capture the fuzzy nature of input–output data. -- Abstract: Benchmarking systems from a sample of reference buildings need to be developed to conduct benchmarking processes for the energy efficiency of commercial buildings. However, not all benchmarking systems can be adopted by public users (i.e., other non-reference building owners) because of the different methods in developing such systems. An approach for benchmarking the energy efficiency of commercial buildings using statistical regression analysis to normalize other factors, such as management performance, was developed in a previous work. However, the field data given by experts can be regarded as a distribution of possibility. Thus, the previous work may not be adequate to handle such fuzzy input–output data. Consequently, a number of fuzzy structures cannot be fully captured by statistical regression analysis. This present paper proposes the use of fuzzy linear regression analysis to develop a benchmarking process, the resulting model of which can be used by public users. An illustrative example is given as well.
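
    One classic formulation of fuzzy linear regression (the Tanaka-style possibilistic model) reduces to a linear program. Whether the paper uses exactly this variant is not stated in the record, so treat the sketch below, with synthetic data, as an illustration of the general idea: fuzzy coefficients (center a_j, spread c_j >= 0) chosen to minimize total spread while every observation lies inside the fuzzy output at level h.

```python
import numpy as np
from scipy.optimize import linprog

# Tanaka-style fuzzy linear regression as a linear program (illustrative).
rng = np.random.default_rng(4)
n, h = 30, 0.5
x = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])   # intercept + 1 regressor
y = 2.0 + 1.5 * x[:, 1] + rng.normal(0, 1.0, n)

m = x.shape[1]
absx = np.abs(x)
# variables z = [a_1..a_m, c_1..c_m]; minimize sum_i sum_j c_j |x_ij|
cost = np.concatenate([np.zeros(m), absx.sum(axis=0)])
# inclusion constraints:
#   x.a - (1-h)|x|.c <= y   and   -x.a - (1-h)|x|.c <= -y
A_ub = np.vstack([np.hstack([x, -(1 - h) * absx]),
                  np.hstack([-x, -(1 - h) * absx])])
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * m + [(0, None)] * m              # spreads nonnegative
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
a_hat, c_hat = res.x[:m], res.x[m:]
print("centers:", a_hat.round(2), "spreads:", c_hat.round(2))
```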

  19. Benchmarking ensemble streamflow prediction skill in the UK

    Science.gov (United States)

    Harrigan, Shaun; Prudhomme, Christel; Parry, Simon; Smith, Katie; Tanguy, Maliko

    2018-03-01

    Skilful hydrological forecasts at sub-seasonal to seasonal lead times would be extremely beneficial for decision-making in water resources management, hydropower operations, and agriculture, especially during drought conditions. Ensemble streamflow prediction (ESP) is a well-established method for generating an ensemble of streamflow forecasts in the absence of skilful future meteorological predictions, instead using initial hydrologic conditions (IHCs), such as soil moisture, groundwater, and snow, as the source of skill. We benchmark when and where the ESP method is skilful across a diverse sample of 314 catchments in the UK and explore the relationship between catchment storage and ESP skill. The GR4J hydrological model was forced with historic climate sequences to produce a 51-member ensemble of streamflow hindcasts. We evaluated forecast skill seamlessly from lead times of 1 day to 12 months initialized at the first of each month over a 50-year hindcast period from 1965 to 2015. Results showed ESP was skilful against a climatology benchmark forecast in the majority of catchments across all lead times up to a year ahead, but the degree of skill was strongly conditional on lead time, forecast initialization month, and individual catchment location and storage properties. UK-wide mean ESP skill decayed exponentially as a function of lead time with continuous ranked probability skill scores across the year of 0.75, 0.20, and 0.11 for 1-day, 1-month, and 3-month lead times, respectively. However, skill was not uniform across all initialization months. For lead times up to 1 month, ESP skill was higher than average when initialized in summer and lower in winter months, whereas for longer seasonal and annual lead times skill was higher when initialized in autumn and winter months and lowest in spring. ESP was most skilful in the south and east of the UK, where slower responding catchments with higher soil moisture and groundwater storage are mainly located
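
    The skill scores reported above are based on the continuous ranked probability score (CRPS). A minimal sketch of computing a CRPS-based skill score for one forecast against a climatology benchmark, using synthetic ensembles and the standard empirical CRPS estimator:

```python
import numpy as np

# Empirical CRPS of an ensemble forecast, and a skill score against a
# climatology benchmark (1 = perfect, 0 = no better than benchmark).
# All data are synthetic placeholders.
def crps_ensemble(ens, obs):
    ens = np.asarray(ens, dtype=float)
    term1 = np.mean(np.abs(ens - obs))
    term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
    return term1 - term2

rng = np.random.default_rng(5)
obs = 3.0
esp = rng.normal(3.2, 0.8, 51)          # 51-member ESP-like ensemble
clim = rng.normal(5.0, 2.5, 51)         # climatology benchmark ensemble

crpss = 1.0 - crps_ensemble(esp, obs) / crps_ensemble(clim, obs)
print(f"CRPSS vs climatology: {crpss:.2f}")
```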

  20. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision...

  1. A proposal for benchmarking learning objects

    OpenAIRE

    Rita Falcão; Alfredo Soeiro

    2007-01-01

    This article proposes a methodology for benchmarking learning objects. It aims to deal with two problems related to e-learning: the validation of learning using this method and the return on investment of the process of development and use: effectiveness and efficiency. This paper describes a proposal for evaluating learning objects (LOs) through benchmarking, based on the Learning Object Metadata Standard and on an adaptation of the main tools of the BENVIC project. The Benchmarking of Learning O...

  2. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  3. Influence of Assessment Setting on the Results of Functional Analyses of Problem Behavior

    Science.gov (United States)

    Lang, Russell; Sigafoos, Jeff; Lancioni, Giulio; Didden, Robert; Rispoli, Mandy

    2010-01-01

    Analogue functional analyses are widely used to identify the operant function of problem behavior in individuals with developmental disabilities. Because problem behavior often occurs across multiple settings (e.g., homes, schools, outpatient clinics), it is important to determine whether the results of functional analyses vary across settings.…

  4. Heart rate at admission is a predictor of in-hospital mortality in patients with acute coronary syndromes: Results from 58 European hospitals: The European Hospital Benchmarking by Outcomes in acute coronary syndrome Processes study.

    Science.gov (United States)

    Jensen, Magnus T; Pereira, Marta; Araujo, Carla; Malmivaara, Anti; Ferrieres, Jean; Degano, Irene R; Kirchberger, Inge; Farmakis, Dimitrios; Garel, Pascal; Torre, Marina; Marrugat, Jaume; Azevedo, Ana

    2018-03-01

    The purpose of this study was to investigate the relationship between heart rate at admission and in-hospital mortality in patients with ST-segment elevation myocardial infarction (STEMI) and non-ST-segment elevation acute coronary syndrome (NSTE-ACS). The study included consecutive ACS patients admitted in 2008-2010 across 58 hospitals in six participant countries of the European Hospital Benchmarking by Outcomes in ACS Processes (EURHOBOP) project (Finland, France, Germany, Greece, Portugal and Spain). Cardiogenic shock patients were excluded. Associations between heart rate at admission, in categories of 10 beats per minute (bpm), and in-hospital mortality were estimated by logistic regression in crude models and adjusting for age, sex, obesity, smoking, hypertension, diabetes, known heart failure, renal failure, previous stroke and ischaemic heart disease. In total, 10,374 patients were included. In both STEMI and NSTE-ACS patients, a U-shaped relationship between admission heart rate and in-hospital mortality was found. The lowest risk was observed for heart rates of 70-79 bpm in STEMI and 60-69 bpm in NSTE-ACS; the risk of mortality progressively increased with lower or higher heart rates. In multivariable models, the relationship persisted but was significant only for heart rates >80 bpm. A similar relationship was present in patients with and without diabetes, above and below age 75 years, and irrespective of the presence of atrial fibrillation or use of beta-blockers. Heart rate at admission is significantly associated with in-hospital mortality in patients with both STEMI and NSTE-ACS. ACS patients with an admission heart rate above 80 bpm are at the highest risk of in-hospital mortality.
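
    The analysis design (mortality regressed on 10-bpm heart-rate bands against a reference band) can be illustrated schematically. The code below uses synthetic data with a built-in U-shape; it is not the EURHOBOP analysis or data, and the reference band and coefficients are assumptions for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Logistic regression of in-hospital mortality on 10-bpm admission heart-rate
# bands, with a reference band. Synthetic data with a built-in U-shaped risk.
rng = np.random.default_rng(6)
n = 10000
hr = rng.normal(80, 18, n).clip(40, 140)
logit_true = -4.0 + 0.0012 * (hr - 72) ** 2          # U-shaped risk (synthetic)
died = rng.random(n) < 1 / (1 + np.exp(-logit_true))

bands = (hr // 10).astype(int) * 10                  # 10-bpm categories
ref = 70                                             # reference band: 70-79 bpm
levels = sorted(set(bands) - {ref})
X = np.column_stack([(bands == b).astype(float) for b in levels])
X = sm.add_constant(X)
fit = sm.Logit(died.astype(float), X).fit(disp=0)
for b, coef in zip(levels, fit.params[1:]):
    print(f"{b}-{b + 9} bpm vs {ref}-{ref + 9}: OR = {np.exp(coef):.2f}")
```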

  5. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision...

  6. HPC Benchmark Suite NMx, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  7. Analysis of VENUS-3 benchmark experiment

    International Nuclear Information System (INIS)

    Kodeli, I.; Sartori, E.

    1998-01-01

    The paper presents the revision and analysis of the VENUS-3 benchmark experiment performed at CEN/SCK, Mol (Belgium). This benchmark was found to be particularly suitable for validation of current calculation tools such as 3-D neutron transport codes, and in particular of the 3-D sensitivity and uncertainty analysis code developed within the EFF project. The compilation of the integral experiment was integrated into the SINBAD electronic database for storing and retrieving information about shielding experiments for nuclear systems. SINBAD now includes 33 reviewed benchmark descriptions and several compilations awaiting review, among them many benchmarks relevant for pressure vessel dosimetry system validation. (author)

  8. Benchmarking analysis of three multimedia models: RESRAD, MMSOILS, and MEPAS

    International Nuclear Information System (INIS)

    Cheng, J.J.; Faillace, E.R.; Gnanapragasam, E.K.

    1995-11-01

    Multimedia modelers from the United States Environmental Protection Agency (EPA) and the United States Department of Energy (DOE) collaborated to conduct a comprehensive and quantitative benchmarking analysis of three multimedia models. The three models-RESRAD (DOE), MMSOILS (EPA), and MEPAS (DOE)-represent analytically based tools that are used by the respective agencies for performing human exposure and health risk assessments. The study is performed by individuals who participate directly in the ongoing design, development, and application of the models. A list of physical/chemical/biological processes related to multimedia-based exposure and risk assessment is first presented as a basis for comparing the overall capabilities of RESRAD, MMSOILS, and MEPAS. Model design, formulation, and function are then examined by applying the models to a series of hypothetical problems. Major components of the models (e.g., atmospheric, surface water, groundwater) are evaluated separately and then studied as part of an integrated system for the assessment of a multimedia release scenario to determine effects due to linking components of the models. Seven modeling scenarios are used in the conduct of this benchmarking study: (1) direct biosphere exposure, (2) direct release to the air, (3) direct release to the vadose zone, (4) direct release to the saturated zone, (5) direct release to surface water, (6) surface water hydrology, and (7) multimedia release. Study results show that the models differ with respect to (1) environmental processes included (i.e., model features) and (2) the mathematical formulation and assumptions related to the implementation of solutions (i.e., parameterization)

  9. Windows NT Workstation Performance Evaluation Based on Pro/E 2000i BENCHMARK

    Energy Technology Data Exchange (ETDEWEB)

    DAVIS,SEAN M.

    2000-08-02

    A performance evaluation of several computers was necessary, so an evaluation program, or benchmark, was run on each computer to determine maximum possible performance. The program was used to test the Computer Aided Drafting (CAD) ability of each computer by monitoring the speed with which several functions were executed. The main objective of the benchmarking program was to record assembly loading times and image regeneration times and then compile a composite score that could be compared with the same tests on other computers. The three computers that were tested were the Compaq AP550, the SGI 230, and the Hewlett-Packard P750C. The Compaq and SGI computers each had a Pentium III 733 MHz processor, while the Hewlett-Packard had a Pentium III 750 MHz processor. The size and speed of Random Access Memory (RAM) in each computer varied, as did the type of graphics card. Each computer that was tested was using Windows NT 4.0 and Pro/ENGINEER™ 2000i CAD benchmark software provided by the Standard Performance Evaluation Corporation (SPEC). The benchmarking program came with its own assembly, automatically loaded and ran tests on the assembly, then compiled the time each test took to complete. Due to the automation of the tests, any sort of user error affecting test scores was virtually eliminated. After all the tests were completed, scores were then compiled and compared. The Silicon Graphics 230 was by far the overall winner with a composite score of 8.57. The Compaq AP550 was next with a score of 5.19, while the Hewlett-Packard P750C performed dismally, achieving a score of 3.34. Several factors, including motherboard chipset, graphics card, and the size and speed of RAM, were involved in the differing scores of the three machines. Surprisingly, the Hewlett-Packard, which had the fastest processor, came back with the lowest score. The above factors most likely contributed to the poor performance of the Hewlett-Packard. Based on the results of the benchmark test...

  10. Some exact results for the two-point function of an integrable quantum field theory

    International Nuclear Information System (INIS)

    Creamer, D.B.; Thacker, H.B.; Wilkinson, D.

    1981-02-01

    The two point correlation function for the quantum nonlinear Schroedinger (delta-function gas) model is studied. An infinite series representation for this function is derived using the quantum inverse scattering formalism. For the case of zero temperature, the infinite coupling (c → infinity) result of Jimbo, Miwa, Mori and Sato is extended to give an exact expression for the order 1/c correction to the two point function in terms of a Painleve transcendent of the fifth kind

  11. 2009 South American benchmarking study: natural gas transportation companies

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, Nathalie [Gas TransBoliviano S.A. (Bolivia); Walter, Juliana S. [TRANSPETRO, Rio de Janeiro, RJ (Brazil)

    2009-07-01

    In the current business environment, large corporations are constantly seeking to adapt their strategies. Benchmarking is an important tool for continuous improvement and decision-making. It is a methodology that determines which aspects are the most important to be improved upon, and it proposes establishing a competitive parameter through an analysis of best practices and processes, applying continuous improvement driven by the best organizations in their class. At the beginning of 2008, GTB (Gas TransBoliviano S.A.) contacted several South American gas transportation companies to carry out a regional benchmarking study in 2009. In this study, the key performance indicators of the South American companies, whose realities are similar, for example, in terms of prices, availability of labor, and community relations, will be compared. Within this context, a comparative evaluation among natural gas transportation companies is becoming an essential management instrument to support decision-making. (author)

  12. Benchmarking high performance computing architectures with CMS’ skeleton framework

    Science.gov (United States)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel's Threading Building Blocks (TBB) library, based on the measured memory and CPU overheads of the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1 and 2, Theta, and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  13. Effects of exposure imprecision on estimation of the benchmark dose

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    2004-01-01

    In regression analysis, failure to adjust for imprecision in the exposure variable is likely to lead to underestimation of the exposure effect. However, the consequences of exposure error for determination of safe doses of toxic substances have so far not received much attention. The benchmark approach is one of the most widely used methods for development of exposure limits. An important advantage of this approach is that it can be applied to observational data. However, in this type of data, exposure markers are seldom measured without error. It is shown that, if the exposure error is ignored, then the benchmark approach produces results that are biased toward higher and less protective levels. It is therefore important to take exposure measurement error into account when calculating benchmark doses. Methods that allow this adjustment are described and illustrated in data from an epidemiological study...

  14. Updates to the Integrated Protein-Protein Interaction Benchmarks : Docking Benchmark Version 5 and Affinity Benchmark Version 2

    NARCIS (Netherlands)

    Vreven, Thom; Moal, Iain H.; Vangone, Anna|info:eu-repo/dai/nl/370549694; Pierce, Brian G.; Kastritis, Panagiotis L.|info:eu-repo/dai/nl/315886668; Torchala, Mieczyslaw; Chaleil, Raphael; Jiménez-García, Brian; Bates, Paul A.; Fernandez-Recio, Juan; Bonvin, Alexandre M J J|info:eu-repo/dai/nl/113691238; Weng, Zhiping

    2015-01-01

    We present an updated and integrated version of our widely used protein-protein docking and binding affinity benchmarks. The benchmarks consist of non-redundant, high-quality structures of protein-protein complexes along with the unbound structures of their components. Fifty-five new complexes were

  15. Benchmark calculation of subchannel analysis codes

    International Nuclear Information System (INIS)

    1996-02-01

    In order to evaluate the analysis capabilities of various subchannel codes used in the thermal-hydraulic design of light water reactors, benchmark calculations were performed. The selected benchmark problems and the major findings were as follows: (1) As for single-phase flow mixing experiments between two channels, the calculated water temperature distributions along the flow direction agreed with the experimental results when the turbulent mixing coefficients were tuned properly. However, the effect of gap width observed in the experiments could not be predicted by the subchannel codes. (2) As for two-phase flow mixing experiments between two channels, in high water flow rate cases the calculated distributions of air and water flows in each channel agreed well with the experimental results. In low water flow cases, on the other hand, the air mixing rates were underestimated. (3) As for two-phase flow mixing experiments among multiple channels, the calculated mass velocities at the channel exit under steady-state conditions agreed with the experimental values within about 10%. However, the predictive errors of the exit qualities were as high as 30%. (4) As for critical heat flux (CHF) experiments, two different results were obtained: one code indicated that CHFs calculated using the KfK or EPRI correlations agreed well with the experimental results, while another code suggested that CHFs were well predicted using the WSC-2 correlation or the Weisman-Pei mechanistic model. (5) As for droplet entrainment and deposition experiments, it was indicated that the predictive capability was significantly increased by improving the correlations. On the other hand, a remarkable discrepancy between the codes was observed: one code underestimated the droplet flow rate and overestimated the liquid film flow rate in high quality cases, while another code overestimated the droplet flow rate and underestimated the liquid film flow rate in low quality cases. (J.P.N.)

  16. Benchmarking a Soil Moisture Data Assimilation System for Agricultural Drought Monitoring

    Science.gov (United States)

    Hun, Eunjin; Crow, Wade T.; Holmes, Thomas; Bolten, John

    2014-01-01

    Despite considerable interest in the application of land surface data assimilation systems (LDAS) for agricultural drought applications, relatively little is known about the large-scale performance of such systems and, thus, the optimal methodological approach for implementing them. To address this need, this paper evaluates an LDAS for agricultural drought monitoring by benchmarking individual components of the system (i.e., a satellite soil moisture retrieval algorithm, a soil water balance model and a sequential data assimilation filter) against a series of linear models which perform the same function (i.e., have the same basic input/output structure) as the full system component. Benchmarking is based on the calculation of the lagged rank cross-correlation between the normalized difference vegetation index (NDVI) and soil moisture estimates acquired for various components of the system. Lagged soil moisture/NDVI correlations obtained using individual LDAS components versus their linear analogs reveal the degree to which non-linearities and/or complexities contained within each component actually contribute to the performance of the LDAS system as a whole. Here, a particular system based on surface soil moisture retrievals from the Land Parameter Retrieval Model (LPRM), a two-layer Palmer soil water balance model and an Ensemble Kalman filter (EnKF) is benchmarked. Results suggest significant room for improvement in each component of the system.
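
    A minimal sketch of the benchmarking metric described here, the lagged rank cross-correlation between soil moisture and NDVI (the synthetic series, lag range, and three-step lead are assumptions for illustration):

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)

    # Synthetic series in which soil moisture leads NDVI by 3 time steps.
    n = 200
    soil_moisture = rng.normal(size=n).cumsum()
    ndvi = np.concatenate([soil_moisture[:3], soil_moisture[:-3]])
    ndvi = ndvi + rng.normal(scale=2.0, size=n)

    def lagged_rank_correlation(sm, veg, max_lag=8):
        """Spearman rank correlation of veg(t) against sm(t - lag), per lag."""
        out = {}
        for lag in range(max_lag + 1):
            if lag == 0:
                rho, _ = spearmanr(sm, veg)
            else:
                rho, _ = spearmanr(sm[:-lag], veg[lag:])
            out[lag] = rho
        return out

    for lag, rho in lagged_rank_correlation(soil_moisture, ndvi).items():
        print(f"lag {lag}: rho = {rho:+.2f}")  # peaks near lag 3 by construction
    ```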

  17. Experimental and computational benchmark tests

    International Nuclear Information System (INIS)

    Gilliam, D.M.; Briesmeister, J.F.

    1994-01-01

    A program involving principally NIST, LANL, and ORNL has been in progress for about four years now to establish a series of benchmark measurements and calculations related to the moderation and leakage of 252 Cf neutrons from a source surrounded by spherical aqueous moderators of various thicknesses and compositions. The motivation for these studies comes from problems in criticality calculations concerning arrays of multiplying components, where the leakage from one component acts as a source for the other components. This talk compares experimental and calculated values for the fission rates of four nuclides - 235 U, 239 Pu, 238 U, and 237 Np - in the leakage spectrum from moderator spheres of diameters 76.2 mm, 101.6 mm, and 127.0 mm, with either pure water or enriched B-10 solutions as the moderator. Very detailed Monte Carlo calculations were done with the MCNP code, using a "light water" S(α,β) scattering kernel.

  18. Validation of NESTLE against static reactor benchmark problems

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    1996-01-01

    The NESTLE advanced nodal code was developed at North Carolina State University with support from Los Alamos National Laboratory and Idaho National Engineering Laboratory. It recently has been benchmarked successfully against measured data from pressurized water reactors (PWRs). However, NESTLE's geometric capabilities are very flexible, and it can be applied to a variety of other types of reactors. This study presents comparisons of NESTLE results with those from other codes for static benchmark problems for PWRs, boiling water reactors (BWRs), high-temperature gas-cooled reactors (HTGRs) and CANDU heavy-water reactors (HWRs).

  19. Validation of NESTLE against static reactor benchmark problems

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    1996-01-01

    The NESTLE advanced nodal code was developed at North Carolina State University with support from Los Alamos National Laboratory and Idaho National Engineering Laboratory. It recently has been benchmarked successfully against measured data from pressurized water reactors (PWRs). However, NESTLE's geometric capabilities are very flexible, and it can be applied to a variety of other types of reactors. This study presents comparisons of NESTLE results with those from other codes for static benchmark problems for PWRs, boiling water reactors (BWRs), high-temperature gas-cooled reactors (HTGRs), and Canada deuterium uranium (CANDU) heavy-water reactors (HWRs).

  20. NASA Indexing Benchmarks: Evaluating Text Search Engines

    Science.gov (United States)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and to search and retrieve them in a convenient manner. This study develops criteria for analytically comparing indexing and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the search engines examined. This toolkit is highly configurable and can run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium sized data collections, but show weaknesses when used for collections of 100 MB or larger. Level two search engines are recommended for data collections up to and beyond 100 MB.

  1. Benchmarking the performance of daily temperature homogenisation algorithms

    Science.gov (United States)

    Killick, Rachel; Bailey, Trevor; Jolliffe, Ian; Willett, Kate

    2016-04-01

    This work focuses on the results of a recent daily benchmarking study carried out to compare different temperature homogenisation algorithms; it also gives an overview of the creation of the realistic synthetic data used in the study. Four different regions in the United States were chosen and up to four different inhomogeneity scenarios were explored for each region. These benchmark datasets are beneficial as, unlike in the real world, the underlying truth is known a priori, thus allowing definite statements to be made about the performance of the algorithms run on them. Performance can be assessed both in terms of the ability of algorithms to detect changepoints and to correctly remove the inhomogeneities the changepoints create. The focus is on daily data, thus presenting new challenges in comparison to monthly data and pushing the boundaries of previous studies. The aims of this work are to evaluate and compare the performance of various homogenisation algorithms, aiding their improvement and enabling a quantification of the uncertainty remaining in the data even after they have been homogenised. An important outcome is also to evaluate how realistic the created benchmarks are. It is essential that any weaknesses in the benchmarks are taken into account when judging algorithm performance against them. This information will in turn help to improve future versions of the benchmarks. Here I present a summary of this work, including an overview of the benchmark creation, the algorithms run, and the results of the study. This work formed a three-year PhD and feeds into the larger International Surface Temperature Initiative project, which is working on a wider scale and with monthly instead of daily data.

  2. Mid-term results of three dimensional annuloplasty ring repair in treating functional tricuspid valve regurgitation

    Directory of Open Access Journals (Sweden)

    Osama Rashwan

    2017-12-01

    Conclusions: Tricuspid annuloplasty with the Contour 3D ring provided satisfactory early results in functional TR, and these remained stable at mid-term follow-up. Long-term results still require further follow-up and assessment.

  3. Functional requirements with survey results for integrated intrusion detection and access control annunciator systems

    Energy Technology Data Exchange (ETDEWEB)

    Arakaki, L.H.; Monaco, F.M.

    1995-09-01

    This report contains the guidance document Functional Requirements for an Integrated Intrusion Detection and Access Control Annunciator System, together with survey results for selected commercial systems. The survey questions were based upon the functional requirements; therefore, the results reflect which guidance recommendations were met and, in some cases, how they were met.

  4. A benchmarking program to reduce red blood cell outdating: implementation, evaluation, and a conceptual framework.

    Science.gov (United States)

    Barty, Rebecca L; Gagliardi, Kathleen; Owens, Wendy; Lauzon, Deborah; Scheuermann, Sheena; Liu, Yang; Wang, Grace; Pai, Menaka; Heddle, Nancy M

    2015-07-01

    Benchmarking is a quality improvement tool that compares an organization's performance to that of its peers for selected indicators, to improve practice. Processes to develop evidence-based benchmarks for red blood cell (RBC) outdating in Ontario hospitals, based on RBC hospital disposition data from Canadian Blood Services, have been previously reported. These benchmarks were implemented in 160 hospitals provincewide with a multifaceted approach, which included hospital education, inventory management tools and resources, summaries of best practice recommendations, recognition of high-performing sites, and audit tools on the Transfusion Ontario website (http://transfusionontario.org). In this study we describe the implementation process and the impact of the benchmarking program on RBC outdating. A conceptual framework for continuous quality improvement of a benchmarking program was also developed. The RBC outdating rate for all hospitals trended downward continuously from April 2006 to February 2012, irrespective of hospitals' transfusion rates or their distance from the blood supplier. The highest annual outdating rate was 2.82%, at the beginning of the observation period. Each year brought further reductions, with a nadir outdating rate of 1.02% achieved in 2011. The key elements of the successful benchmarking strategy included dynamic targets, a comprehensive and evidence-based implementation strategy, ongoing information sharing, and a robust data system to track information. The Ontario benchmarking program for RBC outdating resulted in continuous and sustained quality improvement. Our conceptual iterative framework for benchmarking provides a guide for institutions implementing a benchmarking program. © 2015 AABB.

  5. Analytical benchmarks for nuclear engineering applications. Case studies in neutron transport theory

    International Nuclear Information System (INIS)

    2008-01-01

    The developers of computer codes involving neutron transport theory for nuclear engineering applications seldom apply analytical benchmarking strategies to ensure the quality of their programs. A major reason for this is the lack of analytical benchmarks and their documentation in the literature. The few such benchmarks that do exist are difficult to locate, as they are scattered throughout the neutron transport and radiative transfer literature. The motivation for this benchmark compendium, therefore, is to gather several analytical benchmarks appropriate for nuclear engineering applications under one cover. We consider the following three subject areas: neutron slowing down and thermalization without spatial dependence, one-dimensional neutron transport in infinite and finite media, and multidimensional neutron transport in a half-space and an infinite medium. Each benchmark is briefly described, followed by a detailed derivation of the analytical solution representation. Finally, a demonstration of the evaluation of the solution representation includes qualified numerical benchmark results. All accompanying computer codes are suitable for the PC computational environment and can serve as educational tools for courses in nuclear engineering. While this benchmark compilation does not contain all possible benchmarks, by any means, it does include some of the most prominent ones and should serve as a valuable reference. (author)

  6. A Web Resource for Standardized Benchmark Datasets, Metrics, and Rosetta Protocols for Macromolecular Modeling and Design.

    Directory of Open Access Journals (Sweden)

    Shane Ó Conchúir

    The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.

  7. Integral and Series Representations of Riemann's Zeta Function and Dirichlet's Eta Function and a Medley of Related Results

    Directory of Open Access Journals (Sweden)

    Michael S. Milgram

    2013-01-01

    Contour integral representations of Riemann's Zeta function and Dirichlet's Eta (alternating Zeta) function are presented and investigated. These representations flow naturally from methods developed in the 1800s, but somehow they do not appear in the standard reference summaries, textbooks, or literature. Using these representations as a basis, alternate derivations of known series and integral representations for the Zeta and Eta function are obtained on a unified basis that differs from the textbook approach, and results are developed that appear to be new.
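
    The record itself does not reproduce the representations; for orientation, two standard textbook identities of the kind discussed (not necessarily the paper's contour forms):

    ```latex
    % Dirichlet eta as an alternating zeta sum, and its relation to zeta:
    \eta(s) = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^{s}}
            = \left(1 - 2^{1-s}\right)\zeta(s), \qquad \Re(s) > 0,\; s \neq 1.

    % Riemann's Hankel-contour representation of zeta, valid for s \neq 1:
    \zeta(s) = \frac{\Gamma(1-s)}{2\pi i}
               \int_{C} \frac{(-z)^{s-1}}{e^{z} - 1}\, dz .
    ```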

  8. Comparison between HELIOS calculations and a PWR cell benchmark for actinides transmutation

    Energy Technology Data Exchange (ETDEWEB)

    Guzman, Rafael [Facultad de Ingenieria, Universidad Nacional Autonoma de Mexico, Paseo Cuauhnahuac 8532, 62550 Jiutepec, Mor. (Mexico); Francois, Juan-Luis [Facultad de Ingenieria, Universidad Nacional Autonoma de Mexico, Paseo Cuauhnahuac 8532, 62550 Jiutepec, Mor. (Mexico)]. E-mail: jlfl@fi-b.unam.mx

    2007-01-15

    This paper shows a comparison between the results obtained with the HELIOS code and other similar codes used in the international community, with respect to the transmutation of actinides. To do this, the international benchmark 'Calculations of Different Transmutation Concepts' of the Nuclear Energy Agency is analyzed. In this benchmark, two types of cells are analyzed: a small cell corresponding to a standard pressurized water reactor (PWR), and a wide cell corresponding to a highly moderated PWR. Two types of discharge burnup are considered: 33 GWd/tHM and 50 GWd/tHM. The following results are analyzed: the neutron multiplication factor as a function of burnup, the atomic density of the principal actinide isotopes, the radioactivity of selected actinides at reactor shutdown and cooling times from 7 to 50,000 years, the void reactivity and the Doppler reactivity. The results are compared with the following codes: KAPROS/KARBUS (FZK, Germany), SRAC95 (JAERI, Japan), TRIFON (ITTEP, Russian Federation) and WIMS (IPPE, Russian Federation). For the neutron multiplication factor, the results obtained with HELIOS show a difference of around 1% Δk/k. For the isotopic concentrations of 241Pu, 242Pu, and 242mAm, the results of all the institutions present a difference that increases at higher burnup; for the case of 237Np, the results of FZK diverge from the other results as the burnup increases. Regarding the activity, the difference of the results is acceptable, except for the case of 241Pu. For the Doppler coefficient, the results are acceptable, except for the cells with high moderation. In the case of the void coefficient, the difference of the results increases at higher void fractions, being the highest at 95%. In summary, for the PWR benchmark, the results obtained with HELIOS agree reasonably well within the limits of the multiple plutonium recycling established by the NEA working party on plutonium fuels and

  9. Benchmarking set for domestic smart grid management

    NARCIS (Netherlands)

    Bosman, M.G.C.; Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    In this paper we propose a benchmark for domestic smart grid management. It consists of an in-depth description of a domestic smart grid, in which local energy consumers, producers and buffers can be controlled. First, from this description a general benchmark framework is derived, which can be used

  10. Benchmarking nutrient use efficiency of dairy farms

    NARCIS (Netherlands)

    Mu, W.; Groen, E.A.; Middelaar, van C.E.; Bokkers, E.A.M.; Hennart, S.; Stilmant, D.; Boer, de I.J.M.

    2017-01-01

    The nutrient use efficiency (NUE) of a system, generally computed as the amount of nutrients in valuable outputs over the amount of nutrients in all inputs, is commonly used to benchmark the environmental performance of dairy farms. Benchmarking the NUE of farms, however, may lead to biased

  11. Benchmark analysis of railway networks and undertakings

    NARCIS (Netherlands)

    Hansen, I.A.; Wiggenraad, P.B.L.; Wolff, J.W.

    2013-01-01

    Benchmark analysis of railway networks and companies has been stimulated by the European policy of deregulation of transport markets, the opening of national railway networks and markets to new entrants and separation of infrastructure and train operation. Recent international railway benchmarking

  12. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective for the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...

  13. Benchmarking Successional Progress in a Quantitative Food Web

    Science.gov (United States)

    Boit, Alice; Gaedke, Ursula

    2014-01-01

    Central to ecology and ecosystem management, succession theory aims to mechanistically explain and predict the assembly and development of ecological communities. Yet processes at lower hierarchical levels, e.g. at the species and functional group level, are rarely mechanistically linked to the under-investigated system-level processes which drive changes in ecosystem properties and functioning and are comparable across ecosystems. As a model system for secondary succession, seasonal plankton succession during the growing season is readily observable and largely driven autogenically. We used a long-term dataset from large, deep Lake Constance comprising biomasses, auto- and heterotrophic production, food quality, functional diversity, and mass-balanced food webs of the energy and nutrient flows between functional guilds of plankton and partly fish. Extracting population- and system-level indices from this dataset, we tested current hypotheses about the directionality of successional progress which are rooted in ecosystem theory, the metabolic theory of ecology, quantitative food web theory, thermodynamics, and information theory. Our results indicate that successional progress in Lake Constance is quantifiable, passing through predictable stages. Mean body mass, functional diversity, predator-prey weight ratios, trophic positions, system residence times of carbon and nutrients, and the complexity of the energy flow patterns increased during succession. In contrast, both the mass-specific metabolic activity and the system export decreased, while the succession rate exhibited a bimodal pattern. The weighted connectance introduced here represents a suitable index for assessing the evenness and interconnectedness of energy flows during succession. Diverging from earlier predictions, ascendency and eco-exergy did not increase during succession. Linking aspects of functional diversity to metabolic theory and food web complexity, we reconcile previously disjoint bodies of

  14. Benchmarking successional progress in a quantitative food web.

    Science.gov (United States)

    Boit, Alice; Gaedke, Ursula

    2014-01-01

    Central to ecology and ecosystem management, succession theory aims to mechanistically explain and predict the assembly and development of ecological communities. Yet processes at lower hierarchical levels, e.g. at the species and functional group level, are rarely mechanistically linked to the under-investigated system-level processes which drive changes in ecosystem properties and functioning and are comparable across ecosystems. As a model system for secondary succession, seasonal plankton succession during the growing season is readily observable and largely driven autogenically. We used a long-term dataset from large, deep Lake Constance comprising biomasses, auto- and heterotrophic production, food quality, functional diversity, and mass-balanced food webs of the energy and nutrient flows between functional guilds of plankton and partly fish. Extracting population- and system-level indices from this dataset, we tested current hypotheses about the directionality of successional progress which are rooted in ecosystem theory, the metabolic theory of ecology, quantitative food web theory, thermodynamics, and information theory. Our results indicate that successional progress in Lake Constance is quantifiable, passing through predictable stages. Mean body mass, functional diversity, predator-prey weight ratios, trophic positions, system residence times of carbon and nutrients, and the complexity of the energy flow patterns increased during succession. In contrast, both the mass-specific metabolic activity and the system export decreased, while the succession rate exhibited a bimodal pattern. The weighted connectance introduced here represents a suitable index for assessing the evenness and interconnectedness of energy flows during succession. Diverging from earlier predictions, ascendency and eco-exergy did not increase during succession. Linking aspects of functional diversity to metabolic theory and food web complexity, we reconcile previously disjoint bodies of

  15. Benchmarking successional progress in a quantitative food web.

    Directory of Open Access Journals (Sweden)

    Alice Boit

    Central to ecology and ecosystem management, succession theory aims to mechanistically explain and predict the assembly and development of ecological communities. Yet processes at lower hierarchical levels, e.g. at the species and functional group level, are rarely mechanistically linked to the under-investigated system-level processes which drive changes in ecosystem properties and functioning and are comparable across ecosystems. As a model system for secondary succession, seasonal plankton succession during the growing season is readily observable and largely driven autogenically. We used a long-term dataset from large, deep Lake Constance comprising biomasses, auto- and heterotrophic production, food quality, functional diversity, and mass-balanced food webs of the energy and nutrient flows between functional guilds of plankton and partly fish. Extracting population- and system-level indices from this dataset, we tested current hypotheses about the directionality of successional progress which are rooted in ecosystem theory, the metabolic theory of ecology, quantitative food web theory, thermodynamics, and information theory. Our results indicate that successional progress in Lake Constance is quantifiable, passing through predictable stages. Mean body mass, functional diversity, predator-prey weight ratios, trophic positions, system residence times of carbon and nutrients, and the complexity of the energy flow patterns increased during succession. In contrast, both the mass-specific metabolic activity and the system export decreased, while the succession rate exhibited a bimodal pattern. The weighted connectance introduced here represents a suitable index for assessing the evenness and interconnectedness of energy flows during succession. Diverging from earlier predictions, ascendency and eco-exergy did not increase during succession. Linking aspects of functional diversity to metabolic theory and food web complexity, we reconcile

  16. Existence Results for Some Nonlinear Functional-Integral Equations in Banach Algebra with Applications

    Directory of Open Access Journals (Sweden)

    Lakshmi Narayan Mishra

    2016-04-01

    In the present manuscript, we prove some results concerning the existence of solutions for some nonlinear functional-integral equations, which contain various integral and functional equations considered in nonlinear analysis and its applications. Utilizing the techniques of noncompactness measures, we apply fixed point theorems such as Darbo's theorem in Banach algebra to obtain estimates on the solutions. The results obtained in this paper extend and improve essentially some known results in the recent literature. We also provide an example of a nonlinear functional-integral equation to show the applicability of our main result.
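
    For context, the fixed point theorem invoked here, stated in its usual textbook form (from general knowledge, not quoted from the manuscript):

    ```latex
    % Darbo's fixed point theorem. Let C be a nonempty, bounded, closed, convex
    % subset of a Banach space X, let \mu be a measure of noncompactness on X,
    % and let T : C \to C be continuous such that
    \mu\bigl(T(A)\bigr) \;\le\; k\,\mu(A)
    \qquad \text{for every } A \subseteq C \text{ and some fixed } k \in [0,1).
    % Then T has at least one fixed point in C.
    ```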

  17. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  18. Benchmarking of refinery emissions performance : Executive summary

    International Nuclear Information System (INIS)

    2003-07-01

    This study was undertaken to collect emissions performance data for Canadian and comparable American refineries. The objective was to examine parameters that affect refinery air emissions performance and develop methods or correlations to normalize emissions performance. Another objective was to correlate and compare the performance of Canadian refineries to comparable American refineries. For the purpose of this study, benchmarking involved the determination of levels of emission performance that are being achieved for generic groups of facilities. A total of 20 facilities were included in the benchmarking analysis, and 74 American refinery emission correlations were developed. The recommended benchmarks, and the application of those correlations for comparison between Canadian and American refinery performance, were discussed. The benchmarks were: sulfur oxides, nitrogen oxides, carbon monoxide, particulate, volatile organic compounds, ammonia and benzene. For each refinery in Canada, benchmark emissions were developed. Several factors can explain differences in Canadian and American refinery emission performance. 4 tabs., 7 figs

  19. FRIB driver linac vacuum model and benchmarks

    CERN Document Server

    Durickovic, Bojan; Kersevan, Roberto; Machicoane, Guillaume

    2014-01-01

    The Facility for Rare Isotope Beams (FRIB) is a superconducting heavy-ion linear accelerator that is to produce rare isotopes far from stability for low energy nuclear science. In order to achieve this, its driver linac needs to achieve a very high beam current (up to 400 kW beam power), and this requirement makes vacuum levels of critical importance. Vacuum calculations have been carried out to verify that the vacuum system design meets the requirements. The modeling procedure was benchmarked by comparing models of an existing facility against measurements. In this paper, we present an overview of the methods used for FRIB vacuum calculations and simulation results for some interesting sections of the accelerator.

  20. Supply network configuration—A benchmarking problem

    Science.gov (United States)

    Brandenburg, Marcus

    2018-03-01

    Managing supply networks is a highly relevant task that strongly influences the competitiveness of firms from various industries. Designing supply networks is a strategic process that considerably affects the structure of the whole network. In contrast, supply networks for new products are configured without major adaptations of the existing structure, but the network has to be configured before the new product is actually launched in the marketplace. Due to dynamics and uncertainties, the resulting planning problem is highly complex. However, formal models and solution approaches that support supply network configuration decisions for new products are scant. The paper at hand aims at stimulating related model-based research. To formulate mathematical models and solution procedures, a benchmarking problem is introduced which is derived from a case study of a cosmetics manufacturer. Tasks, objectives, and constraints of the problem are described in great detail and numerical values and ranges of all problem parameters are given. In addition, several directions for future research are suggested.

  1. The numerical benchmark CB2-S, final evaluation

    International Nuclear Information System (INIS)

    Chrapciak, V.

    2002-01-01

    In this paper, the final results of the numerical benchmark CB2-S are compared (activity, gamma and neutron sources, concentrations of important nuclides, and decay heat). The participants are: Vladimir Chrapciak (SCALE), Ludmila Markova (SCALE), Svetlana Zabrodskaja (SCALA), Pavel Mikolas (WIMS), Eva Tinkova (HELIOS) and Maria Manolova (SCALE). (Authors)

  2. Benchmarking and scaling studies of pseudospectral code Tarang ...

    Indian Academy of Sciences (India)

    Tarang is a general-purpose pseudospectral parallel code for simulating flows involving fluids, magnetohydrodynamics, and Rayleigh–Bénard convection in turbulence and instability regimes. In this paper we present code validation and benchmarking results of Tarang. We performed our simulations on 1024³, 2048³, and ...

  3. Benchmark and physics testing of LIFE-4C. Summary

    International Nuclear Information System (INIS)

    Liu, Y.Y.

    1984-06-01

    LIFE-4C is a steady-state/transient analysis code developed for performance evaluation of carbide [(U,Pu)C and UC] fuel elements in advanced LMFBRs. This paper summarizes selected results obtained during a crucial step in the development of LIFE-4C - benchmark and physics testing

  4. Benchmarking of Simulation Codes Based on the Montague Resonance in the CERN Proton Synchrotron

    CERN Document Server

    Hofmann, Ingo; Cousineau, Sarah M; Franchetti, Giuliano; Giovannozzi, Massimo; Holmes, Jeffrey Alan; Jones, Frederick W; Luccio, Alfredo U; Machida, Shinji; Métral, E; Qiang, Ji; Ryne, Robert D; Spentzouris, Panagiotis

    2005-01-01

    Experimental data on emittance exchange driven by the space-charge "Montague resonance" were obtained at the CERN Proton Synchrotron in 2002-04 as a function of the working point. These data are used to advance the benchmarking of major simulation codes (ACCSIM, IMPACT, MICROMAP, ORBIT, SIMBAD, SIMPSONS, SYNERGIA) currently employed world-wide in the design or performance improvement of high intensity circular accelerators. In this paper we summarize the experimental findings and compare them with simulation results from the first three steps of this still-progressing work.

  5. BENCHMARK DOSES FOR CHEMICAL MIXTURES: EVALUATION OF A MIXTURE OF 18 PHAHS.

    Science.gov (United States)

    Benchmark doses (BMDs), defined as doses of a substance that are expected to result in a pre-specified level of "benchmark" response (BMR), have been used for quantifying the risk associated with exposure to environmental hazards. The lower confidence limit of the BMD is used as...

  6. Homogeneous fast reactor benchmark testing of CENDL-2 and ENDF/B-6

    International Nuclear Information System (INIS)

    Liu Guisheng

    1995-11-01

    How to choose a correct weighting spectrum for producing multigroup constants for fast reactor benchmark calculations has been studied. A correct weighting option yields satisfactory results for k_eff and central reaction rate ratios in nine fast reactor benchmark tests of CENDL-2 and ENDF/B-6. (author). 8 refs, 2 figs, 4 tabs

  7. Review of recent benchmark experiments on integral test for high energy nuclear data evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Nakashima, Hiroshi; Tanaka, Susumu; Konno, Chikara; Fukahori, Tokio; Hayashi, Katsumi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-11-01

    A survey of recent benchmark experiments on integral tests for high energy nuclear data evaluation was carried out as part of the work of the Task Force on JENDL High Energy File Integral Evaluation (JHEFIE). In this paper the results are compiled and the status of recent benchmark experiments is described. (author)

  8. Homogeneous fast reactor benchmark testing of CENDL-2 and ENDF/B-6

    International Nuclear Information System (INIS)

    Liu Guisheng

    1995-01-01

    How to choose a correct weighting spectrum for producing multigroup constants for fast reactor benchmark calculations has been studied. A correct weighting option yields satisfactory results for k_eff and central reaction rate ratios in nine fast reactor benchmark tests of CENDL-2 and ENDF/B-6. (4 tabs., 2 figs.)

  9. Benchmarking of HEU Metal Annuli Critical Assemblies with Internally Reflected Graphite Cylinder

    Energy Technology Data Exchange (ETDEWEB)

    Xiaobo, Liu; Bess, John D.; Marshall, Margaret A.

    2016-09-01

    Three critical assembly configurations, performed in 1963 at the Oak Ridge Critical Experiment Facility and assembled using HEU metal annuli of three different diameters (15-9, 15-7, and 13-7 inches) with an internally reflected graphite cylinder, are evaluated and benchmarked. The experimental uncertainties, 0.00055 for each configuration, and the biases to the detailed benchmark models, -0.00179, -0.00189, and -0.00114 respectively, were determined, and experimental benchmark keff results were obtained for both the detailed and simplified models. The calculated results for both the detailed and simplified models using MCNP6-1.0 and ENDF/B-VII.1 agree well with the benchmark experimental results, with a difference of less than 0.2%. These are acceptable benchmark experiments for inclusion in the ICSBEP Handbook.

  10. Estimating the Need for Palliative Radiation Therapy: A Benchmarking Approach

    Energy Technology Data Exchange (ETDEWEB)

    Mackillop, William J., E-mail: william.mackillop@krcc.on.ca [Cancer Care and Epidemiology, Queen's Cancer Research Institute, Queen's University, Kingston, Ontario (Canada); Department of Public Health Sciences, Queen's University, Kingston, Ontario (Canada); Department of Oncology, Queen's University, Kingston, Ontario (Canada); Kong, Weidong [Cancer Care and Epidemiology, Queen's Cancer Research Institute, Queen's University, Kingston, Ontario (Canada)

    2016-01-01

    Purpose: Palliative radiation therapy (PRT) benefits many patients with incurable cancer, but the overall need for PRT is unknown. Our primary objective was to estimate the appropriate rate of use of PRT in Ontario. Methods and Materials: The Ontario Cancer Registry identified patients who died of cancer in Ontario between 2006 and 2010. Comprehensive RT records were linked to the registry. Multivariate analysis identified social and health system-related factors affecting the use of PRT, enabling us to define a benchmark population of patients with unimpeded access to PRT. The proportion of cases treated at any time (PRT_lifetime), the proportion of cases treated in the last 2 years of life (PRT_2y), and the number of courses of PRT per thousand cancer deaths were measured in the benchmark population. These benchmarks were standardized to the characteristics of the overall population, and province-wide PRT rates were then compared to benchmarks. Results: Cases diagnosed at hospitals with no RT on-site, residents of poorer communities, and those who lived farther from an RT center were significantly less likely than others to receive PRT. However, availability of RT at the diagnosing hospital was the dominant factor. Neither socioeconomic status nor distance from home to the nearest RT center had a significant effect on the use of PRT in patients diagnosed at a hospital with RT facilities. The benchmark population therefore consisted of patients diagnosed at a hospital with RT facilities. The standardized benchmark for PRT_lifetime was 33.9%, and the corresponding province-wide rate was 28.5%. The standardized benchmark for PRT_2y was 32.4%, and the corresponding province-wide rate was 27.0%. The standardized benchmark for the number of courses of PRT per thousand cancer deaths was 652, and the corresponding province-wide rate was 542. Conclusions: Approximately one-third of patients who die of cancer in Ontario need PRT, but many of them are never
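
    A small sketch of the benchmarking arithmetic described here, comparing an observed rate to a benchmark standardized to the overall population's case mix (all strata, weights, and rates below are hypothetical illustration values, not the study's data):

    ```python
    # Direct standardization: apply the benchmark (unimpeded-access) stratum-
    # specific PRT rates to the overall population's case mix, then compare.
    strata = ["lung", "breast", "prostate", "other"]   # hypothetical strata
    population_share = [0.30, 0.20, 0.15, 0.35]        # case-mix weights (sum to 1)
    benchmark_rate = [0.45, 0.35, 0.30, 0.28]          # PRT use, benchmark population
    observed_rate = [0.38, 0.30, 0.24, 0.23]           # PRT use, province-wide

    standardized_benchmark = sum(w * r for w, r in zip(population_share, benchmark_rate))
    overall_observed = sum(w * r for w, r in zip(population_share, observed_rate))

    print(f"standardized benchmark: {standardized_benchmark:.1%}")
    print(f"observed rate:          {overall_observed:.1%}")
    print(f"shortfall:              {standardized_benchmark - overall_observed:.1%}")
    ```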

  11. Elements for successful functional result after surgical treatment of intra-articular distal humeral fractures.

    Science.gov (United States)

    Darabos, Nikica; Bajs, Ivana Dovzak; Sabalić, Srećko; Pavić, Roman; Darabos, Anela; Cengić, Tomislav

    2012-12-01

    Intra-articular distal humeral fractures (DHF) present a great challenge to the orthopedic trauma surgeon. We analyzed the relationship between the functional results of DHF surgical treatment and the elements that can affect patient recovery. During the 5-year follow-up study, 32 patients were treated for DHF at our Trauma Department, 30 of them surgically. Functional results of surgical treatment were scored according to the Jupiter criteria. According to the AO classification of DHF, there were 11 type A fractures, 5 type B fractures and 14 type C fractures. Postoperative complications were infections, neural lesions, inadequate healing, and instability of osteosynthesis. Analysis of functional results in patients with operated type C fractures, according to the different elements influencing the postoperative result, revealed correct healing in 74% of patients, which was statistically significantly higher than the percentage of unsatisfactory results (p < 0.05); such correct healing is among the elements for successful functional recovery.

  12. An integrative approach of the marketing research and benchmarking

    Directory of Open Access Journals (Sweden)

    Moraru Gina-Maria

    2017-01-01

    The accuracy of a manager's actions in a firm depends, among other things, on the accuracy of his or her information about all processes. Here, developing marketing research is essential, because it provides information that represents the current situation in the organization and on the market. Although specialists usually assign marketing research exclusively to the organizational marketing function, practice has shown that it can be used in any other function of the company: production, finance, human resources, research and development. Firstly, the paper presents the opportunities to use marketing research as a management tool in various stages of creative thinking. Secondly, based on a study of secondary sources in the economic literature, the paper draws a parallel between marketing research and benchmarking. Finally, the paper shows that creative benchmarking closes the management - marketing - creativity circle for the benefit of the organization and the community.

  13. A Review of Flood Loss Models as Basis for Harmonization and Benchmarking.

    Science.gov (United States)

    Gerl, Tina; Kreibich, Heidi; Franco, Guillermo; Marechal, David; Schröter, Kai

    2016-01-01

    Risk-based approaches have been increasingly accepted and operationalized in flood risk management during recent decades. For instance, commercial flood risk models are used by the insurance industry to assess potential losses, establish the pricing of policies and determine reinsurance needs. Despite considerable progress in the development of loss estimation tools since the 1980s, loss estimates still reflect high uncertainties and disparities that often lead to questioning their quality. This requires an assessment of the validity and robustness of loss models, as it affects prioritization and investment decisions in flood risk management as well as regulatory requirements and business decisions in the insurance industry. Hence, more effort is needed to quantify uncertainties and undertake validations. Due to a lack of detailed and reliable flood loss data, first order validations are difficult to accomplish, so that model comparisons in terms of benchmarking are essential. It is checked whether the models are informed by existing data and knowledge and whether the assumptions made in the models are aligned with the existing knowledge. When this alignment is confirmed through validation or benchmarking exercises, the user gains confidence in the models. Before these benchmarking exercises are feasible, however, a cohesive survey of existing knowledge needs to be undertaken. With that aim, this work presents a review of flood loss (or flood vulnerability) relationships collected from the public domain and some professional sources. Our survey analyses 61 sources consisting of publications or software packages, of which 47 are reviewed in detail. This exercise results in probably the most complete review of flood loss models to date, containing nearly a thousand vulnerability functions. These functions are highly heterogeneous, and only about half of the loss models are found to be accompanied by explicit validation at the time of their proposal. This paper exemplarily presents
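
    Most of the vulnerability functions surveyed in such reviews are depth-damage curves; a minimal sketch of how one is typically represented and applied (the support points and asset value are invented for illustration):

    ```python
    import numpy as np

    # A depth-damage (stage-damage) curve: relative loss as a function of
    # inundation depth, given as interpolation support points (hypothetical).
    depth_m = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 5.0])
    loss_fraction = np.array([0.00, 0.10, 0.25, 0.45, 0.60, 0.80])

    def building_loss(depth, building_value):
        """Interpolate the curve and scale by the asset value at risk."""
        frac = np.interp(depth, depth_m, loss_fraction)
        return frac * building_value

    print(building_loss(1.4, 250_000))  # loss estimate for 1.4 m of flooding
    ```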

  14. Benchmarking Commercial Conformer Ensemble Generators.

    Science.gov (United States)

    Friedrich, Nils-Ole; de Bruyn Kops, Christina; Flachsenberg, Florian; Sommer, Kai; Rarey, Matthias; Kirchmair, Johannes

    2017-11-27

    We assess and compare the performance of eight commercial conformer ensemble generators (ConfGen, ConfGenX, cxcalc, iCon, MOE LowModeMD, MOE Stochastic, MOE Conformation Import, and OMEGA) and one leading free algorithm, the distance geometry algorithm implemented in RDKit. The comparative study is based on a new version of the Platinum Diverse Dataset, a high-quality benchmarking dataset of 2859 protein-bound ligand conformations extracted from the PDB. Differences in the performance of commercial algorithms are much smaller than those observed for free algorithms in our previous study (J. Chem. Inf. 2017, 57, 529-539). For commercial algorithms, the median minimum root-mean-square deviations measured between protein-bound ligand conformations and ensembles of a maximum of 250 conformers are between 0.46 and 0.61 Å. Commercial conformer ensemble generators are characterized by their high robustness, with at least 99% of all input molecules successfully processed and few or even no substantial geometrical errors detectable in their output conformations. The RDKit distance geometry algorithm (with minimization enabled) appears to be a good free alternative since its performance is comparable to that of the midranked commercial algorithms. Based on a statistical analysis, we elaborate on which algorithms to use and how to parametrize them for best performance in different application scenarios.
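
    A hedged sketch of the core measurement, the minimum RMSD between a reference conformation and a generated ensemble, using the free RDKit distance geometry generator mentioned above (the aspirin SMILES, seed, and ensemble size are stand-ins, and one embedded conformer plays the role of the protein-bound reference rather than a PDB structure):

    ```python
    from rdkit import Chem
    from rdkit.Chem import AllChem

    # Stand-in ligand; the benchmark uses protein-bound conformations from the PDB.
    mol = Chem.AddHs(Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O"))

    # Generate an ensemble with RDKit's distance geometry algorithm (ETKDG).
    params = AllChem.ETKDGv3()
    params.randomSeed = 42
    conf_ids = AllChem.EmbedMultipleConfs(mol, numConfs=250, params=params)
    AllChem.MMFFOptimizeMoleculeConfs(mol)  # the "minimization enabled" variant

    # Treat one conformer as the 'experimental' reference for illustration,
    # then report the minimum RMSD from the rest of the ensemble to it.
    ref_id = conf_ids[0]
    best = min(
        AllChem.GetConformerRMS(mol, ref_id, cid, prealigned=False)
        for cid in conf_ids[1:]
    )
    print(f"minimum RMSD to reference: {best:.2f} A")
    ```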

  15. What Randomized Benchmarking Actually Measures

    Science.gov (United States)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; Sarovar, Mohan; Blume-Kohout, Robin

    2017-09-01

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
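
    A compact sketch of the standard RB analysis this record critiques: fit the average survival probability to an exponential decay and convert the decay parameter to the error metric r (synthetic data; the conversion is the usual single-qubit formula):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(7)

    def rb_decay(m, A, B, p):
        # Standard RB model: survival probability after m random Clifford gates.
        return A * p**m + B

    # Synthetic survival probabilities for sequence lengths m, with sampling noise.
    lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256])
    true_A, true_B, true_p = 0.5, 0.5, 0.995
    probs = rb_decay(lengths, true_A, true_B, true_p)
    probs = probs + rng.normal(0.0, 0.005, lengths.size)

    (A, B, p), _ = curve_fit(rb_decay, lengths, probs, p0=[0.5, 0.5, 0.99])

    d = 2  # single-qubit Hilbert space dimension
    r = (d - 1) * (1 - p) / d  # the RB "error rate" extracted from the decay
    print(f"fitted p = {p:.5f}, r = {r:.2e}")
    ```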

  16. Benchmark tests of JENDL-1

    International Nuclear Information System (INIS)

    Kikuchi, Yasuyuki; Hasegawa, Akira; Takano, Hideki; Kamei, Takanobu; Hojuyama, Takeshi; Sasaki, Makoto; Seki, Yuji; Zukeran, Atsushi; Otake, Iwao.

    1982-02-01

    Various benchmark tests were made on JENDL-1. At the first stage, various core center characteristics were tested for many critical assemblies with a one-dimensional model. At the second stage, the applicability of JENDL-1 was further tested on more sophisticated problems for the MOZART and ZPPR-3 assemblies with a two-dimensional model. It was proved that JENDL-1 predicted various quantities of fast reactors satisfactorily as a whole. However, the following problems were pointed out: 1) There exists a discrepancy of 0.9% in the k_eff values between the Pu- and U-cores. 2) The fission rate ratio of 239 Pu to 235 U is underestimated by 3%. 3) The Doppler reactivity coefficients are overestimated by about 10%. 4) The control rod worths are underestimated by 4%. 5) The fission rates of 235 U and 239 Pu are underestimated considerably in the outer core and radial blanket regions. 6) The negative sodium void reactivities are overestimated when the sodium is removed from the outer core. As a whole, most of the problems of JENDL-1 seem to be related to the neutron leakage and the neutron spectrum. It was found through further study that most of these problems came from too small diffusion coefficients and too large elastic removal cross sections above 100 keV, which might be caused by overestimation of the total and elastic scattering cross sections for structural materials in the unresolved resonance region up to several MeV. (author)

  17. Some exact results for the two-point function of an integrable quantum field theory

    International Nuclear Information System (INIS)

    Creamer, D.B.; Thacker, H.B.; Wilkinson, D.

    1981-01-01

    The two-point correlation function for the quantum nonlinear Schroedinger (one-dimensional delta-function gas) model is studied. An infinite-series representation for this function is derived using the quantum inverse-scattering formalism. For the case of zero temperature, the infinite-coupling (c→infinity) result of Jimbo, Miwa, Mori, and Sato is extended to give an exact expression for the order-1/c correction to the two-point function in terms of a Painleve transcendent of the fifth kind

  18. Leukocyte depletion results in improved lung function and reduced inflammatory response after cardiac surgery

    NARCIS (Netherlands)

    Gu, YJ; Boonstra, PW; vanOeveren, W

    Leukocyte depletion during cardiopulmonary bypass has been demonstrated in animal experiments to improve pulmonary function. Conflicting results have been reported, however, for clinical leukocyte depletion by arterial line filter at the beginning of cardiopulmonary bypass. In this study, we

  19. A biosegmentation benchmark for evaluation of bioimage analysis methods

    Directory of Open Access Journals (Sweden)

    Kvilekval Kristian

    2009-11-01

    Background: We present a biosegmentation benchmark that includes infrastructure, datasets with associated ground truth, and validation methods for biological image analysis. The primary motivation for creating this resource comes from the fact that it is very difficult, if not impossible, for an end-user to choose from a wide range of segmentation methods available in the literature for a particular bioimaging problem. No single algorithm is likely to be equally effective on a diverse set of images and each method has its own strengths and limitations. We hope that our benchmark resource would be of considerable help to both the bioimaging researchers looking for novel image processing methods and image processing researchers exploring application of their methods to biology. Results: Our benchmark consists of different classes of images and ground truth data, ranging in scale from subcellular, cellular to tissue level, each of which poses its own set of challenges to image analysis. The associated ground truth data can be used to evaluate the effectiveness of different methods, to improve methods and to compare results. Standard evaluation methods and some analysis tools are integrated into a database framework that is available online at http://bioimage.ucsb.edu/biosegmentation/. Conclusion: This online benchmark will facilitate integration and comparison of image analysis methods for bioimages. While the primary focus is on biological images, we believe that the dataset and infrastructure will be of interest to researchers and developers working with biological image analysis, image segmentation and object tracking in general.

  20. Benchmarking Sustainability Practices Use throughout Industrial Construction Project Delivery

    Directory of Open Access Journals (Sweden)

    Sungmin Yun

    2017-06-01

    Despite the efforts for sustainability studies in building and infrastructure construction, the sustainability issues in industrial construction remain understudied. Further, few studies evaluate and benchmark sustainability issues in industrial construction from a management perspective. This study presents a phase-based benchmarking framework for evaluating sustainability practices use, focusing on industrial facilities projects. Based on the framework, this study quantifies and assesses sustainability practices use, and further sorts the results by project phase and major project characteristics, including project type, project nature, and project delivery method. The results show that sustainability practices were implemented more in the construction and startup phases than in other phases, with a very broad range. An assessment by project type and project nature showed significant differences in sustainability practices use, but no significant difference in practices use by project delivery method. This study contributes a benchmarking method for sustainability practices in industrial facilities projects at the project phase level. This study also discusses and provides an application of phase-based benchmarking for sustainability in industrial construction.

  1. ICSBEP Benchmarks For Nuclear Data Applications

    Science.gov (United States)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  2. Analytical Radiation Transport Benchmarks for The Next Century

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    2005-01-01

    Verification of large-scale computational algorithms used in nuclear engineering and radiological applications is an essential element of reliable code performance. For this reason, the development of a suite of multidimensional semi-analytical benchmarks has been undertaken to provide independent verification of proper operation of codes dealing with the transport of neutral particles. The benchmarks considered cover several one-dimensional, multidimensional, monoenergetic and multigroup, fixed source and critical transport scenarios. The first approach is based on the Green's function. In slab geometry, the Green's function is incorporated into a set of integral equations for the boundary fluxes. Through a numerical Fourier transform inversion and subsequent matrix inversion for the boundary fluxes, a semi-analytical benchmark emerges. Multidimensional solutions in a variety of infinite media are also based on the slab Green's function. In a second approach, a new converged SN method is developed. In this method, the SN solution is 'mined' to bring out hidden high-quality solutions. For this case, multigroup fixed source and criticality transport problems are considered. Remarkably accurate solutions can be obtained with this new method, called the Multigroup Converged SN (MGCSN) method, as will be demonstrated.

  3. An application of results by Hardy, Ramanujan and Karamata to Ackermannian functions

    Directory of Open Access Journals (Sweden)

    Andreas Weiermann

    2003-06-01

    Full Text Available The Ackermann function is a fascinating and well-studied paradigm for a function which eventually dominates all primitive recursive functions. By a classical result from the theory of recursive functions it is known that the Ackermann function can be defined by an unnested or descent recursion along the segment of ordinals below ω^ω (or equivalently along the order type of the polynomials under eventual domination). In this article we give a fine structure analysis of such an Ackermann-type descent recursion in the case that the ordinals below ω^ω are represented via a Hardy-Ramanujan style coding. This paper combines number-theoretic results by Hardy and Ramanujan, Karamata's celebrated Tauberian theorem and techniques from the theory of computability in a perhaps surprising way.
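    For reference, here is a direct transcription of the two-argument (Ackermann-Péter) form of the function; the unnested, ordinal-indexed descent recursion discussed in the article is a different formulation of the same object:

```python
# Two-argument Ackermann-Peter function: eventually dominates every
# primitive recursive function. Memoized because the recursion re-visits
# the same arguments many times; still only feasible for tiny inputs.
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)  # the nesting is extremely deep even for small m, n

@lru_cache(maxsize=None)
def ackermann(m: int, n: int) -> int:
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
```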

  4. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome data in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Databases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazard and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Databases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  5. Benchmark Imagery FY11 Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pope, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-06-14

    This report details the work performed in FY11 under project LL11-GS-PD06, “Benchmark Imagery for Assessing Geospatial Semantic Extraction Algorithms.” The original LCP for the Benchmark Imagery project called for creating a set of benchmark imagery for verifying and validating algorithms that extract semantic content from imagery. More specifically, the first year was slated to deliver real imagery that had been annotated, the second year to deliver real imagery that had composited features, and the final year was to deliver synthetic imagery modeled after the real imagery.

  6. A version of Zhong's coercivity result for a general class of nonsmooth functionals

    Directory of Open Access Journals (Sweden)

    D. Motreanu

    2002-01-01

    Full Text Available A version of Zhong's coercivity result (1997 is established for nonsmooth functionals expressed as a sum Φ+Ψ, where Φ is locally Lipschitz and Ψ is convex, lower semicontinuous, and proper. This is obtained as a consequence of a general result describing the asymptotic behavior of the functions verifying the above structure hypothesis. Our approach relies on a version of Ekeland's variational principle. In proving our coercivity result we make use of a new general Palais-Smale condition. The relationship with other results is discussed.
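    For orientation, coercivity is meant here in the standard sense (a textbook definition, not specific to Zhong's version of the result):

```latex
% Coercivity of the functional E = \Phi + \Psi on a Banach space X:
% the functional blows up whenever the norm of its argument does.
\lim_{\|u\|_X \to \infty} \bigl( \Phi(u) + \Psi(u) \bigr) = +\infty
```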

  7. Present Status and Extensions of the Monte Carlo Performance Benchmark

    Science.gov (United States)

    Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.

    2014-06-01

    The NEA Monte Carlo Performance benchmark started in 2011 with the aim of monitoring, over the years, the ability to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the results contributed thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common-type computer nodes. However, on true supercomputers the speedup of parallel calculations continues to increase up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations, and a need is felt for testing issues other than computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and to study the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed.
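    The quoted figure follows from the usual 1/√N Monte Carlo convergence; a back-of-envelope sketch (the pilot-run numbers are illustrative only):

```python
# Illustrative 1/sqrt(N) scaling: if a tally reaches relative error e0 with
# N0 histories, reaching target error e requires roughly N = N0 * (e0/e)^2.
def histories_needed(n0: float, e0: float, e_target: float) -> float:
    return n0 * (e0 / e_target) ** 2

# Hypothetical pilot run: 1e9 histories give 10% error on a small pin zone;
# pushing that zone to 1% then needs ~1e11, i.e. about 100 billion histories.
print(f"{histories_needed(1e9, 0.10, 0.01):.1e}")
```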

  8. Benchmarking the financial performance of local councils in Ireland

    Directory of Open Access Journals (Sweden)

    Robbins Geraldine

    2016-05-01

    Full Text Available It was over a quarter of a century ago that information from the financial statements was used to benchmark the efficiency and effectiveness of local government in the US. With the global adoption of New Public Management ideas, benchmarking practice spread to the public sector and has been employed to drive reforms aimed at improving performance and, ultimately, service delivery and local outcomes. The manner in which local authorities in OECD countries compare and benchmark their performance varies widely. The methodology developed in this paper to rate the relative financial performance of Irish city and county councils is adapted from an earlier assessment tool used to measure the financial condition of small cities in the US. Using our financial performance framework and the financial data in the audited annual financial statements of Irish local councils, we calculate composite scores for each of the thirty-four local authorities for the years 2007–13. This paper contributes composite scores that measure the relative financial performance of local councils in Ireland, as well as a full set of yearly results for a seven-year period in which local governments witnessed significant changes in their financial health. The benchmarking exercise is useful in highlighting those councils that, in relative financial performance terms, are the best/worst performers.

  9. MoleculeNet: a benchmark for molecular machine learning.

    Science.gov (United States)

    Wu, Zhenqin; Ramsundar, Bharath; Feinberg, Evan N; Gomes, Joseph; Geniesse, Caleb; Pappu, Aneesh S; Leswing, Karl; Pande, Vijay

    2018-01-14

    Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited by the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets, making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large-scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high-quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than the choice of particular learning algorithm.
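    Since the implementations are released through DeepChem, a MoleculeNet dataset can typically be pulled in a few lines; a sketch assuming a current DeepChem install (loader names, featurizer options and return values may differ across versions):

```python
# Sketch of loading a MoleculeNet dataset via DeepChem. API details
# (featurizer names, return signature) may vary between DeepChem versions.
import deepchem as dc

tasks, (train, valid, test), transformers = dc.molnet.load_tox21(featurizer="ECFP")
print(len(tasks), "tasks;", train.X.shape[0], "training molecules")
```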

  10. Functionality of hospital information systems: results from a survey of quality directors at Turkish hospitals.

    Science.gov (United States)

    Saluvan, Mehmet; Ozonoff, Al

    2018-01-12

    We aimed to determine the availability of core Hospital Information System (HIS) functions implemented in Turkish hospitals and the perceived importance of these functions for quality and patient safety. We surveyed quality directors (QDs) at civilian hospitals in Turkey. Data were collected via web survey using an instrument with 50 items describing core functionality of HIS. We calculated the mean availability of each function and the mean and median values of perceived impact on quality, and we investigated the relationship between availability and perceived importance. We received responses from 31% of eligible institutions, representing all major geographic regions of Turkey. Mean availability of the 50 HIS functions was 65.6%, ranging from 19.6% to 97.4%. The mean importance score was 7.87 (on a 9-point scale), ranging from 7.13 to 8.41. Functions related to result management (89.3%) and decision support systems (52.2%) had the highest and lowest reported availability, respectively. Availability and perceived importance were moderately correlated (r = 0.52). QDs report high importance of the HIS functions surveyed as they relate to quality and patient safety. Availability and perceived importance of HIS functions are generally correlated, with some interesting exceptions. These findings may inform future investments and guide policy changes within the Turkish healthcare system. Financial incentives, regulations around certified HIS, revisions to accreditation manuals, and training interventions are all policies which could help integrate HIS functions to support quality and patient safety in Turkish hospitals.

  11. JOVIAN STRATOSPHERE AS A CHEMICAL TRANSPORT SYSTEM: BENCHMARK ANALYTICAL SOLUTIONS

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Xi; Shia Runlie; Yung, Yuk L., E-mail: xiz@gps.caltech.edu [Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA 91125 (United States)

    2013-04-20

    We systematically investigated the solvable analytical benchmark cases in both one- and two-dimensional (1D and 2D) chemical-advective-diffusive systems. We use the stratosphere of Jupiter as an example but the results can be applied to other planetary atmospheres and exoplanetary atmospheres. In the 1D system, we show that CH₄ and C₂H₆ are mainly in diffusive equilibrium, and the C₂H₂ profile can be approximated by modified Bessel functions. In the 2D system in the meridional plane, analytical solutions for two typical circulation patterns are derived. Simple tracer transport modeling demonstrates that the distribution of a short-lived species (such as C₂H₂) is dominated by the local chemical sources and sinks, while that of a long-lived species (such as C₂H₆) is significantly influenced by the circulation pattern. We find that an equator-to-pole circulation could qualitatively explain the Cassini observations, but a pure diffusive transport process could not. For slowly rotating planets like the close-in extrasolar planets, the interaction between the advection by the zonal wind and chemistry might cause a phase lag between the final tracer distribution and the original source distribution. The numerical simulation results from the 2D Caltech/JPL chemistry-transport model agree well with the analytical solutions for various cases.
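    As a purely illustrative note on the Bessel-type behaviour: the family of functions in which such profiles are expressed can be evaluated directly with SciPy. The variable and order below are invented placeholders, not the paper's actual solution:

```python
# Evaluating modified Bessel functions of the second kind, the family in
# which the approximate C2H2 profile is expressed. All numbers are invented.
import numpy as np
from scipy.special import kv

z = np.linspace(0.1, 5.0, 50)   # hypothetical dimensionless altitude variable
profile = kv(0, z)              # K_0 decays steeply, mimicking a short-lived tracer
print(profile[:5])
```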

  12. A proposed benchmark problem for cargo nuclear threat monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Holmes, Thomas Wesley, E-mail: twholmes@ncsu.edu [Center for Engineering Applications of Radioisotopes, Nuclear Engineering Department, North Carolina State University, Raleigh, NC 27695-7909 (United States); Calderon, Adan; Peeples, Cody R.; Gardner, Robin P. [Center for Engineering Applications of Radioisotopes, Nuclear Engineering Department, North Carolina State University, Raleigh, NC 27695-7909 (United States)

    2011-10-01

    There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring, with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models for both speed and accuracy in both forward and inverse calculational codes and approaches for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design this benchmark problem specifically for the nuclear threat researcher. This benchmark consists of a conceptual design and preliminary calculational results using gamma-ray interactions on a system containing three thicknesses of three different shielding materials. A point source is placed inside the three materials: lead, aluminum, and plywood. The first two materials are in right circular cylindrical form while the third is a cube. The entire system rests on a sufficiently thick lead base so as to reduce undesired scattering events. The configuration was arranged in such a manner that as a gamma ray moves from the source outward it first passes through the lead circular cylinder, then the aluminum circular cylinder, and finally the wooden cube before reaching the detector. A 2 in. × 4 in. × 16 in. box-style NaI(Tl) detector was placed 1 m from the point source located in the center, with the 4 in. × 16 in. side facing the system. The two sources used in the benchmark are ¹³⁷Cs and ²³⁵U.
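    A rough feel for the physics of such a layered configuration comes from the uncollided-flux estimate: inverse-square geometry times exponential attenuation through each layer. The attenuation coefficients below are placeholders, not benchmark values:

```python
# Uncollided point-source flux after three shielding layers:
# 1/(4*pi*r^2) geometry times exp(-sum(mu_i * t_i)). mu values are placeholders.
import math

def uncollided_flux(S, r_m, layers):
    """S: source photons/s; layers: list of (mu [1/cm], thickness [cm])."""
    atten = math.exp(-sum(mu * t for mu, t in layers))
    return S / (4.0 * math.pi * (100.0 * r_m) ** 2) * atten  # photons/cm^2/s

layers = [(1.2, 2.0),    # lead     (placeholder mu)
          (0.20, 3.0),   # aluminum (placeholder mu)
          (0.05, 10.0)]  # plywood  (placeholder mu)
print(f"{uncollided_flux(1e6, 1.0, layers):.3e}")
```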

  13. Detection of Weak Spots in Benchmarks Memory Space by using PCA and CA

    Directory of Open Access Journals (Sweden)

    Abdul Kareem PARCHUR

    2010-12-01

    Full Text Available This paper describes the weak spots in the SPEC CPU INT 2006 benchmarks' memory space by using Principal Component Analysis and Cluster Analysis. We used recently published SPEC CPU INT 2006 benchmark scores of AMD Opteron 2000+ and AMD Opteron 8000+ series processors. The four most significant PCs are retained, covering 72.6% of the variance; PC2, PC3, and PC4 cover 26.5%, 2.9%, 0.91%, and 0.019% of the variance, respectively. The dendrogram is useful for identifying the similarities and dissimilarities between the benchmarks in workload space. These results and analysis can be used by performance engineers, scientists and developers to better understand benchmark behavior in workload space and to design a benchmark suite that covers the complete workload space.
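    A minimal sketch of the PCA-plus-clustering workflow on a benchmark-by-machine score matrix follows; random data stands in for the SPEC scores used in the paper:

```python
# PCA + hierarchical clustering of a (benchmark x machine) score matrix.
# Random data stands in for the published SPEC CPU INT 2006 scores.
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(0)
scores = rng.lognormal(mean=3.0, sigma=0.3, size=(12, 8))  # 12 benchmarks, 8 machines

pca = PCA(n_components=4)
components = pca.fit_transform(scores)
print("variance explained:", np.round(pca.explained_variance_ratio_, 3))

# The dendrogram groups benchmarks with similar behaviour across machines.
tree = linkage(components, method="ward")
dendrogram(tree, no_plot=True)  # set no_plot=False with matplotlib to draw it
```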

  14. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.

  15. Benchmark models, planes lines and points for future SUSY searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)

    2012-03-15

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  16. Attila calculations for the 3-D C5G7 benchmark extension

    International Nuclear Information System (INIS)

    Wareing, T.A.; McGhee, J.M.; Barnett, D.A.; Failla, G.A.

    2005-01-01

    The performance of the Attila radiation transport software was evaluated for the 3-D C5G7 MOX benchmark extension, a follow-on study to the MOX benchmark developed by the 'OECD/NEA Expert Group on 3-D Radiation Transport Benchmarks'. These benchmarks were designed to test the ability of modern deterministic transport methods to model reactor problems without spatial homogenization. Attila is a general purpose radiation transport software package with an integrated graphical user interface (GUI) for analysis, set-up and postprocessing. Attila provides solutions to the discrete-ordinates form of the linear Boltzmann transport equation on a fully unstructured, tetrahedral mesh using linear discontinuous finite-element spatial differencing in conjunction with diffusion synthetic acceleration of inner iterations. The results obtained indicate that Attila can accurately solve the benchmark problem without spatial homogenization. (authors)

  17. An integrated data envelopment analysis-artificial neural network approach for benchmarking of bank branches

    Science.gov (United States)

    Shokrollahpour, Elsa; Hosseinzadeh Lotfi, Farhad; Zandieh, Mostafa

    2016-02-01

    Efficiency and quality of services are crucial to today's banking industry. Competition in this sector has become increasingly intense as a result of rapid improvements in technology; therefore, performance analysis of the banking sector attracts more attention these days. Although data envelopment analysis (DEA) is a pioneering approach in the literature as an efficiency measurement and benchmark-finding tool, it is unable to identify possible future benchmarks. Its drawback is that the benchmarks it provides may still be less efficient than more advanced future benchmarks. To compensate for this weakness, an artificial neural network is integrated with DEA in this paper to calculate the relative efficiency and more reliable benchmarks of the branches of an Iranian commercial bank. Each branch could thereby adopt a strategy to improve efficiency and eliminate the causes of inefficiency based on a 5-year forecast.
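    For concreteness, the classical input-oriented CCR efficiency score for one branch can be computed as a small linear program. This is the textbook DEA model only, not the paper's integrated DEA-ANN method, and the input/output data are invented:

```python
# Input-oriented CCR (DEA) efficiency of unit k as a linear program:
#   min theta  s.t.  X @ lam <= theta * x_k,  Y @ lam >= y_k,  lam >= 0.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 4.0],     # inputs  (rows: input types, cols: branches)
              [1.0, 2.0, 1.5]])
Y = np.array([[10.0, 12.0, 9.0]])  # outputs (rows: output types)

def ccr_efficiency(k: int) -> float:
    m, n = X.shape
    c = np.r_[1.0, np.zeros(n)]                          # variables: [theta, lam]
    A_ub = np.block([[-X[:, [k]], X],                    # X lam - theta x_k <= 0
                     [np.zeros((Y.shape[0], 1)), -Y]])   # -Y lam <= -y_k
    b_ub = np.r_[np.zeros(m), -Y[:, k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun  # theta* = 1 means the branch lies on the efficient frontier

for k in range(X.shape[1]):
    print(f"branch {k}: efficiency = {ccr_efficiency(k):.3f}")
```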

  18. Medicare Contracting - Redacted Benchmark Metric Reports

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services has compiled aggregate national benchmark cost and workload metrics using data submitted to CMS by the AB MACs and the...

  19. How benchmarking can improve patient nutrition.

    Science.gov (United States)

    Ellis, Jane

    Benchmarking is a tool that originated in business to enable organisations to compare their services with industry-wide best practice. Early last year the Department of Health published The Essence of Care, a benchmarking toolkit adapted for use in health care. It focuses on eight elements of care that are crucial to patients' experiences. Nurses and other health care professionals at a London NHS trust have begun a trust-wide benchmarking project. The aim is to improve patients' experiences of health care by sharing and comparing information, and by identifying examples of good practice and areas for improvement. The project began with two of the eight elements of The Essence of Care, with the intention of covering the rest later. This article describes the benchmarking process for nutrition and some of the consequent improvements in care.

  20. Measuring Distribution Performance? Benchmarking Warrants Your Attention

    Energy Technology Data Exchange (ETDEWEB)

    Ericson, Sean J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Alvarez, Paul [The Wired Group

    2018-04-13

    Identifying, designing, and measuring performance metrics is critical to securing customer value, but can be a difficult task. This article examines the use of benchmarks based on publicly available performance data to set challenging, yet fair, metrics and targets.

  1. Benchmarking Linked Open Data Management Systems

    NARCIS (Netherlands)

    R. Angles Rojas (Renzo); M.-D. Pham (Minh-Duc); P.A. Boncz (Peter)

    2014-01-01

    With inherent support for storing and analysing highly interconnected data, graph and RDF databases appear as natural solutions for developing Linked Open Data applications. However, current benchmarks for these database technologies do not fully attain the desirable characteristics

  2. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure the feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  3. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic...

  4. UTILISATION OF BENCHMARKING TECHNIQUES FOR FUNDAMENTING DEVELOPMENT STRATEGIES IN THE MANUFACTURING INDUSTRY IN ROMANIA

    Directory of Open Access Journals (Sweden)

    Geambasu Cristina Venera

    2011-07-01

    Full Text Available Benchmarking is a method used to measure products, services and processes in comparison with an entity recognized as a leader in terms of the performance of its operations. First used in the 1970s and 1980s in corporate strategic management, it has since proven increasingly useful in many areas, including international analysis models. In the European Union, benchmarking indicators are used especially in the digital economy and as prospective indicators for 2011-2015 (Eurostat Database). In the introduction we present and define forms of benchmarking, as well as a number of specific terms, which contribute to a better understanding of the content of this scientific work. Time series are used to highlight advances in labor productivity in EU countries, and the analysis is particularized for two countries: Romania and Germany. Quantitative data were collected from the Eurostat website. A comprehensive indicator at the macroeconomic level is resource productivity, representing GDP in relation to domestic consumption of material (DCM). DCM measures the amount of materials used directly by an economy. It is presented in tabular form for all European Union countries and Switzerland, showing its evolution over a period of eight years. The benchmarking method is used to highlight gaps between EU countries regarding productivity; in particular, the gap between Germany and Romania concerning the performance of manufacturing industries is highlighted. It is expected that this gap will diminish. The gap was illustrated with relevant graphics and interpretations. The second part of the paper focuses on a comparative analysis of factor productivity using the production function. We analyze labor and capital productivity and other factors that determine the level of production. For highlighting the contribution of the labour factor we used the number of hours worked, considering that it reflects the analyzed phenomenon more realistically. For

  5. Further Examination of the Quality of Changes in Creative Functioning Resulting from Meditation (Zazen) Training.

    Science.gov (United States)

    Cowger, Ernest L., Jr.; Torrance, E. Paul

    1982-01-01

    The quality of changes in creative functioning resulting from training in Zen meditation (Zazen) and from relaxation training was compared. Pre-posttest changes in the two groups, analyzed with the General Linear Models Procedure, revealed that the meditation group experienced greater perceived change resulting from new conditions, expression of…

  6. Clarifying Inconclusive Functional Analysis Results: Assessment and Treatment of Automatically Reinforced Aggression

    Science.gov (United States)

    Saini, Valdeep; Greer, Brian D.; Fisher, Wayne W.

    2016-01-01

    We conducted a series of studies in which multiple strategies were used to clarify the inconclusive results of one boy’s functional analysis of aggression. Specifically, we (a) evaluated individual response topographies to determine the composition of aggregated response rates, (b) conducted a separate functional analysis of aggression after high rates of disruption masked the consequences maintaining aggression during the initial functional analysis, (c) modified the experimental design used during the functional analysis of aggression to improve discrimination and decrease interaction effects between conditions, and (d) evaluated a treatment matched to the reinforcer hypothesized to maintain aggression. An effective yet practical intervention for aggression was developed based on the results of these analyses and from data collected during the matched-treatment evaluation. PMID:25891269

  7. [Restorative proctocolectomy for ulcerative colitis: Long-term functional results and quality of life].

    Science.gov (United States)

    Rijcken, E; Senninger, N; Mennigen, R

    2017-07-01

    Restorative proctocolectomy with ileo-pouch-anal anastomosis is the standard procedure for ulcerative colitis. It provides complete removal of the diseased colorectum, avoids permanent ileostomy and allows the preservation of continence. Functional results and quality of life after restorative proctocolectomy are of great importance. Patients usually have 5-6 bowel movements per day, and continence is satisfactory in more than 90% of patients. A good pouch function strongly correlates with high quality of life. Postoperative septic complications are the main risk factor for bad pouch function and pouch failure; therefore nowadays most procedures are performed with a covering ileostomy. Quality of life is usually impaired by active ulcerative colitis, and restorative proctocolectomy improves the quality of life up to the level of a healthy reference population. Taken together, restorative proctocolectomy provides excellent results concerning function and quality of life.

  8. Functional disability and death wishes in older Europeans: results from the EURODEP concerted action.

    Science.gov (United States)

    Mellqvist Fässberg, Madeleine; Östling, Svante; Braam, Arjan W; Bäckman, Kristoffer; Copeland, John R M; Fichter, Manfred; Kivelä, Sirkka-Liisa; Lawlor, Brian A; Lobo, Antonio; Magnússon, Halggrimur; Prince, Martin J; Reischies, Friedel M; Turrina, Cesare; Wilson, Kenneth; Skoog, Ingmar; Waern, Margda

    2014-09-01

    Physical illness has been shown to be a risk factor for suicidal behaviour in older adults. The association between functional disability and suicidal behaviour in older adults is less clear. The aim of this study was to examine the relationship between functional disability and death wishes in late life. Data from 11 population studies on depression in persons aged 65 and above were pooled, yielding a total of 15,890 respondents. Level of functional disability was trichotomised (no, intermediate, high). A person was considered to have death wishes if the death wish/suicidal ideation item of the EURO-D scale was endorsed. Odds ratios for death wishes associated with functional disability were calculated in a multilevel logistic regression model. In total, 5 % of the men and 7 % of the women reported death wishes. Both intermediate (OR 1.89, 95 % CI 1.42; 2.52) and high functional disability (OR 3.22, 95 % CI 2.34; 4.42) were associated with death wishes. No sex differences could be shown. Results remained after adding depressive symptoms to the model. Functional disability was independently associated with death wishes in older adults. Results can help inform clinicians who care for older persons with functional impairment.

  9. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Full Text Available Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
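    As a toy illustration of the scoring idea in point (2), a data-model mismatch such as RMSE can be mapped onto a bounded skill score. The functional form and the variable names below are invented for illustration, not taken from the paper:

```python
# Toy benchmarking score: skill = exp(-RMSE / sigma_obs), so 1 is a perfect
# match and the score decays toward 0 as the mismatch grows. Invented form.
import numpy as np

def skill(model: np.ndarray, obs: np.ndarray) -> float:
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    return float(np.exp(-rmse / obs.std()))

rng = np.random.default_rng(1)
obs_gpp = rng.normal(5.0, 1.0, 120)            # hypothetical observed GPP series
mod_gpp = obs_gpp + rng.normal(0.0, 0.5, 120)  # hypothetical model output
print(f"GPP skill: {skill(mod_gpp, obs_gpp):.2f}")
```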

  10. Preliminary Results on the Experimental Investigation of the Structure Functions of Bound Nucleons

    Energy Technology Data Exchange (ETDEWEB)

    Bodek, Arie [Univ. of Rochester, NY (United States); Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)

    2016-08-01

    We present preliminary results on an experimental study of the nuclear modification of the longitudinal ($\sigma_L$) and transverse ($\sigma_T$) structure functions of nucleons bound in nuclear targets. The origin of these modifications (commonly referred to as the EMC effect) is not fully understood. Our measurements of $R = \sigma_L/\sigma_T$ for nuclei ($R_A$) and for deuterium ($R_D$) indicate that nuclear modifications of the structure functions of bound nucleons are different for the longitudinal and transverse structure functions, and that, contrary to expectations from several theoretical models, $R_A < R_D$.

  11. RIA Fuel Codes Benchmark - Volume 1

    International Nuclear Information System (INIS)

    Marchand, Olivier; Georgenthum, Vincent; Petit, Marc; Udagawa, Yutaka; Nagase, Fumihisa; Sugiyama, Tomoyuki; Arffman, Asko; Cherubini, Marco; Dostal, Martin; Klouzal, Jan; Geelhood, Kenneth; Gorzel, Andreas; Holt, Lars; Jernkvist, Lars Olof; Khvostov, Grigori; Maertens, Dietmar; Spykman, Gerold; Nakajima, Tetsuo; Nechaeva, Olga; Panka, Istvan; Rey Gayo, Jose M.; Sagrado Garcia, Inmaculada C.; Shin, An-Dong; Sonnenburg, Heinz Guenther; Umidova, Zeynab; Zhang, Jinzhao; Voglewede, John

    2013-01-01

    Reactivity-initiated accident (RIA) fuel rod codes have been developed for a significant period of time and they all have shown their ability to reproduce some experimental results with a certain degree of adequacy. However, they sometimes rely on different specific modelling assumptions the influence of which on the final results of the calculations is difficult to evaluate. The NEA Working Group on Fuel Safety (WGFS) is tasked with advancing the understanding of fuel safety issues by assessing the technical basis for current safety criteria and their applicability to high burnup and to new fuel designs and materials. The group aims at facilitating international convergence in this area, including the review of experimental approaches as well as the interpretation and use of experimental data relevant for safety. As a contribution to this task, WGFS conducted a RIA code benchmark based on RIA tests performed in the Nuclear Safety Research Reactor in Tokai, Japan and tests performed or planned in CABRI reactor in Cadarache, France. Emphasis was on assessment of different modelling options for RIA fuel rod codes in terms of reproducing experimental results as well as extrapolating to typical reactor conditions. This report provides a summary of the results of this task. (authors)

  12. OR-Benchmark: An Open and Reconfigurable Digital Watermarking Benchmarking Framework

    OpenAIRE

    Wang, Hui; Ho, Anthony TS; Li, Shujun

    2015-01-01

    Benchmarking digital watermarking algorithms is not an easy task because different applications of digital watermarking often have very different sets of requirements and trade-offs between conflicting requirements. While there have been some general-purpose digital watermarking benchmarking systems available, they normally do not support complicated benchmarking tasks and cannot be easily reconfigured to work with different watermarking algorithms and testing conditions. In this paper, we pr...

  13. Action-Oriented Benchmarking: Concepts and Tools

    Energy Technology Data Exchange (ETDEWEB)

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

    Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article, Part 1 of a two-part series, we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and associated software tool, EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.

  14. Pool critical assembly pressure vessel facility benchmark

    International Nuclear Information System (INIS)

    Remec, I.; Kam, F.B.K.

    1997-07-01

    This pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by the US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. In Section 2 the analysis of the PCA benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. Excellent agreement of the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M for all the dosimeters (31 in total) was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio obtained with the BUGLE-93 library for the 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used.

  15. Derivation of aquatic screening benchmarks for 1,2-dibromoethane.

    Science.gov (United States)

    Kszos, L A; Talmage, S S; Morris, L G W; Konetsky, B K; Rottero, T

    2003-07-01

    Ethylene dibromide (1,2-dibromoethane or EDB) was primarily used in the United States as an additive in leaded gasoline and as a soil and grain fumigant for worm and insect control until it was banned in 1983. Historical releases of EDB have resulted in detectable EDB in groundwater and drinking wells, and recently concentrations up to 16 microg/L were detected in ground water at two fuel spill plumes in the vicinity of the Massachusetts Military Reservation Base on Cape Cod, Massachusetts. Because the ground water in this area is used to flood cranberry bogs for the purposes of harvesting, the U.S. Air Force sponsored the development of aquatic screening benchmarks for EDB. Acute toxicity tests with Pimephales promelas (fathead minnow), Daphnia magna, and Ceriodaphnia dubia were conducted to provide the data needed for development of screening benchmarks. Using a closed test system to prevent volatilization of EDB, the 48-h LC50s (concentration that kills 50% of the test organisms) for P. promelas, D. magna, and C. dubia were 4.3 mg/L, 6.5 mg/L, and 3.6 mg/L, respectively. The screening benchmark for aquatic organisms, derived as the Tier II chronic water quality criterion, is 0.031 mg EDB/L. The sediment screening benchmark, based on equilibrium partitioning, is 2.45 mg EDB/kg of organic carbon in the sediment. The screening benchmarks developed here are an important component of an ecological risk assessment, during which perhaps hundreds of chemicals must be evaluated for their potential to cause ecological harm.
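    The sediment value follows from the water benchmark by equilibrium partitioning; back-calculating from the two numbers reported above, the implied organic-carbon partition coefficient (Koc) is about 79 L/kg:

```python
# Equilibrium partitioning: sediment benchmark (per kg organic carbon) =
# water benchmark * Koc. Koc here is back-calculated from the paper's values.
water_benchmark = 0.031       # mg EDB / L (Tier II chronic value)
koc = 2.45 / water_benchmark  # ~79 L/kg OC, implied by the 2.45 mg/kg OC figure
sediment_benchmark = water_benchmark * koc
print(f"Koc ~ {koc:.0f} L/kg OC; sediment benchmark = {sediment_benchmark:.2f} mg/kg OC")
```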

  16. BENCHMARKING ORTEC ISOTOPIC MEASUREMENTS AND CALCULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Dewberry, R; Raymond Sigg, R; Vito Casella, V; Nitin Bhatt, N

    2008-09-29

    This report represents a description of compiled benchmark tests conducted to probe and to demonstrate the extensive utility of the Ortec ISOTOPIC γ-ray analysis computer program. The ISOTOPIC program performs analyses of γ-ray spectra applied to specific acquisition configurations in order to apply finite-geometry correction factors and sample-matrix-container photon absorption correction factors. The analysis program provides an extensive set of preset acquisition configurations to which the user can add relevant parameters in order to build the geometry and absorption correction factors that the program determines from calculus and from nuclear γ-ray absorption and scatter data. The Analytical Development Section field nuclear measurement group of the Savannah River National Laboratory uses the Ortec ISOTOPIC analysis program extensively for analyses of solid waste and process holdup applied to passive γ-ray acquisitions. Frequently the results of these γ-ray acquisitions and analyses are used to determine compliance with facility criticality safety guidelines. Another use of results is to designate 55-gallon drum solid waste as qualified TRU waste or as low-level waste. Other examples of the application of the ISOTOPIC analysis technique to passive γ-ray acquisitions include analyses of standard waste box items and unique solid waste configurations. In many passive γ-ray acquisition circumstances the container and sample have sufficient density that the calculated energy-dependent transmission correction factors have intrinsic uncertainties in the range 15%-100%. This is frequently the case when assaying 55-gallon drums of solid waste with masses of up to 400 kg and when assaying solid waste in extensive unique containers. Often an accurate assay of the transuranic content of these containers is not required, but rather a good defensible designation as >100 nCi/g (TRU waste) or <100 nCi/g (low-level solid waste) is required. In

  17. Comparison of functional results of two fixation systems using single-row suturing of rotator cuff.

    Science.gov (United States)

    Muniesa-Herrero, M P; Torres-Campos, A; Urgel-Granados, A; Blanco-Llorca, J A; Floría-Arnal, L J; Roncal-Boj, J C; Castro-Sauras, A

    2018-03-21

    Arthroscopic repair of rotator cuff disorders is a technically demanding but successful procedure. Many anchor and suture alternatives are now available. The choice of implant by the surgeon is less important than the configuration of the suture used to fix the tendon; however, it is necessary to know whether there are differences in the results obtained with each of them. The aim of the study is to evaluate whether there are differences between knotted and knotless implants in terms of functional and satisfaction results. A retrospective study was carried out on 83 patients operated on between 2010 and 2014 in our center using two anchoring systems, with and without knotting (39 versus 44 patients, respectively), with a single row in complete rupture of the rotator cuff. At the end of the follow-up, an average score of 74.6 points was obtained on the Constant scale. 98% of the patients considered the result of the surgery satisfactory. Statistically, there were no significant differences between the two groups in terms of functionality, satisfaction or return to activities. The functional results of the single-row cuff suture are satisfactory, although biomechanical studies show advantages in favor of sutures that reproduce a transosseous system. In our series of patients the presence of knotting does not in itself show a significant functional difference, both techniques being comparable in absolute values of functionality and patient satisfaction. Copyright © 2018 SECOT. Published by Elsevier España, S.L.U. All rights reserved.

  18. Benchmark calculations of thermal reaction rates. I - Quantal scattering theory

    Science.gov (United States)

    Chatfield, David C.; Truhlar, Donald G.; Schwenke, David W.

    1991-01-01

    The thermal rate coefficient for the prototype reaction H + H2 yields H2 + H with zero total angular momentum is calculated by summing, averaging, and numerically integrating state-to-state reaction probabilities calculated by time-independent quantum-mechanical scattering theory. The results are very carefully converged with respect to all numerical parameters in order to provide high-precision benchmark results for confirming the accuracy of new methods and testing their efficiency.
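    Schematically, the quantity being computed is the Boltzmann average of the cumulative reaction probability, k(T) = (1/(h·Q_r)) ∫ N(E) exp(-E/k_B T) dE. A numerical sketch of that final integration step follows, with an invented N(E) and partition function standing in for the paper's scattering results:

```python
# k(T) = (1 / (h * Qr)) * integral of N(E) * exp(-E / (kB * T)) dE:
# the Boltzmann average of the cumulative reaction probability N(E).
# N(E) and Qr below are invented placeholders, not the paper's data.
import numpy as np

h = 6.62607e-34    # Planck constant, J s
kB = 1.380649e-23  # Boltzmann constant, J / K

def rate(T: float, E: np.ndarray, N: np.ndarray, Qr: float) -> float:
    integrand = N * np.exp(-E / (kB * T))
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(E))  # trapezoid rule
    return integral / (h * Qr)

E = np.linspace(0.0, 2e-19, 2000)               # energy grid, J
N = 1.0 / (1.0 + np.exp(-(E - 7e-20) / 5e-21))  # smooth step threshold (invented)
print(f"k(300 K) ~ {rate(300.0, E, N, Qr=1e5):.2e}  (arbitrarily normalized units)")
```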

  19. PREMIUM - Benchmark on the quantification of the uncertainty of the physical models in the system thermal-hydraulic codes

    International Nuclear Information System (INIS)

    Skorek, Tomasz; Crecy, Agnes de

    2013-01-01

    PREMIUM (Post-BEMUSE Reflood Models Input Uncertainty Methods) is an activity launched with the aim of pushing forward the methods of quantification of physical model uncertainties in thermal-hydraulic codes. It is endorsed by OECD/NEA/CSNI/WGAMA. The PREMIUM benchmark is addressed to all who apply uncertainty evaluation methods based on input uncertainty quantification and propagation. The benchmark is based on a selected case of uncertainty analysis applied to the simulation of quench front propagation in an experimental test facility. Application to an experiment enables evaluation and confirmation of the quantified probability distribution functions on the basis of experimental data. The scope of the benchmark comprises a review of the existing methods, selection of potentially important uncertain input parameters, preliminary quantification of the ranges and distributions of the identified parameters, evaluation of the probability density functions using experimental results of tests performed on the FEBA test facility, and confirmation/validation of the performed quantification on the basis of a blind calculation of the Reflood 2-D PERICLES experiment. (authors)
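    Once input distributions are quantified, propagation through the code is often done with order statistics; for instance, the first-order Wilks criterion for a one-sided 95%/95% tolerance bound, a common choice in such best-estimate-plus-uncertainty analyses (PREMIUM itself does not mandate it):

```python
# First-order one-sided Wilks criterion: the smallest sample size n with
# 1 - gamma**n >= beta. A common propagation choice in BEPU analyses.
def wilks_n(gamma: float = 0.95, beta: float = 0.95) -> int:
    n = 1
    while 1.0 - gamma ** n < beta:
        n += 1
    return n

print(wilks_n())  # -> 59 code runs for a one-sided 95%/95% bound
```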

  20. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063, Storage Intensive Supercomputing, during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared software-only performance against GPU-accelerated performance. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows
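    The kernel of the graph benchmark, level-set expansion, is essentially breadth-first search advanced one frontier at a time. An in-memory sketch of that kernel follows (the benchmark itself runs it out-of-core on much larger scale-free graphs):

```python
# Level-set (frontier) expansion: the in-memory core of a BFS-style graph
# benchmark. The LDRD benchmark ran this out-of-core on scale-free graphs.
from collections import defaultdict

def level_sets(adj: dict, root: int):
    """Yield successive frontiers (level sets) outward from root."""
    visited, frontier = {root}, {root}
    while frontier:
        yield frontier
        nxt = {v for u in frontier for v in adj.get(u, ()) if v not in visited}
        visited |= nxt
        frontier = nxt

adj = defaultdict(list, {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5]})
for depth, level in enumerate(level_sets(adj, 0)):
    print(depth, sorted(level))
```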

  1. Quality assurance of radiotherapy in the ongoing EORTC 1219-DAHANCA-29 trial for HPV/p16 negative squamous cell carcinoma of the head and neck: Results of the benchmark case procedure

    DEFF Research Database (Denmark)

    Christiaens, Melissa; Collette, Sandra; Overgaard, Jens

    2017-01-01

    BACKGROUND AND PURPOSE: The phase III EORTC 1219-DAHANCA 29 intergroup trial evaluates the influence of nimorazole in patients with locally advanced head and neck cancer when treated with accelerated radiotherapy (RT) in combination with chemotherapy. This article describes the results of the RT....... The introduction of more objective quantitative analysis methods, such as the HD and DSI, in future trials might strengthen the evaluation by experts....

  2. results

    Directory of Open Access Journals (Sweden)

    Salabura Piotr

    2017-01-01

    Full Text Available The HADES experiment at GSI is the only high-precision experiment probing nuclear matter in the beam energy range of a few AGeV. Pion, proton and ion beams are used to study rare dielectron and strangeness probes to diagnose the properties of strongly interacting matter in this energy regime. Selected results from p + A and A + A collisions are presented and discussed.

  3. Benchmarking of hospital information systems - a comparative analysis of benchmarking clusters in German-speaking countries

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information systems' and information management's costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme covers both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their forms of cooperation. The benchmarking clusters also deal with different benchmarking subjects. Assessing the costs and quality of application systems, physical data processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  4. Clean Energy Manufacturing Analysis Center Benchmark Report: Framework and Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-05-23

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  5. Lesson learned from the SARNET wall condensation benchmarks

    International Nuclear Information System (INIS)

    Ambrosini, W.; Forgione, N.; Merli, F.; Oriolo, F.; Paci, S.; Kljenak, I.; Kostka, P.; Vyskocil, L.; Travis, J.R.; Lehmkuhl, J.; Kelm, S.; Chin, Y.-S.; Bucci, M.

    2014-01-01

    Highlights: • The results of the benchmarking activity on wall condensation are reported. • The work was performed in the frame of SARNET. • General modelling techniques for condensation are discussed. • Results of the University of Pisa and of other benchmark participants are discussed. • The lesson learned is drawn. - Abstract: The prediction of condensation in the presence of noncondensable gases has received continuing attention in the frame of the Severe Accident Research Network of Excellence, both in the first (2004–2008) and in the second (2009–2013) EC integrated projects. Among the different reasons for considering this basic phenomenon so relevant, addressed by classical treatments dating from the first decades of the last century, is the interest in developing updated CFD models for reactor containment analysis, which requires validating the available modelling techniques at different levels. In the frame of SARNET, benchmarking activities were undertaken taking advantage of the work performed at different institutions in setting up and developing models for steam condensation in conditions of interest for nuclear reactor containment. Four steps were performed in the activity, involving: (1) an idealized problem freely inspired by the actual conditions occurring in an experimental facility, CONAN, installed at the University of Pisa; (2) a first comparison with experimental data purposely collected with the CONAN facility; (3) a second comparison with data available from experimental campaigns performed in the same apparatus before the inclusion of the activities in SARNET; (4) a third exercise involving data obtained at lower mixture velocity than in previous campaigns, aimed at providing conditions closer to those addressed in reactor containment analyses. The last step of the benchmarking activity required changing the configuration of the experimental apparatus to achieve the lower flow rates involved in the new test specifications. The
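    In their simplest form, the modelling techniques compared in such exercises reduce to the classical logarithmic film model for condensation in the presence of a noncondensable gas; a sketch with illustrative numbers (not CONAN test conditions):

```python
# Classical logarithmic film model for condensation with a noncondensable gas:
#   m'' = (Sh * rho * D / L) * ln((1 - x_i) / (1 - x_b)),
# where x_i and x_b are steam mass fractions at the interface and in the bulk.
# All parameter values below are illustrative, not CONAN test conditions.
import math

def condensation_flux(Sh, rho, D, L, x_bulk, x_interface):
    """Condensate mass flux [kg/m^2/s]; positive when x_bulk > x_interface."""
    return (Sh * rho * D / L) * math.log((1 - x_interface) / (1 - x_bulk))

flux = condensation_flux(Sh=200.0, rho=1.0, D=3e-5, L=0.5, x_bulk=0.3, x_interface=0.1)
print(f"{flux:.4f} kg/m^2/s")
```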

  6. Guidance flap choice for lip cancer: Principles, timing and esthetic-functional results

    Directory of Open Access Journals (Sweden)

    Attilio Carlo Salgarelli

    2016-01-01

    Conclusion: We share the widespread view that a surgeon who performs a reconstruction using the minimal tissue components required to close the lesion will achieve the best results. Reconstruction does not influence prognosis and overall should be oriented to the defect. Careful, clean, and safe resection of lip carcinoma, with creation of healthy margins, can be followed by functional and esthetic lip reconstruction.

  7. Predicting ecosystem functioning from plant traits: Results from a multi-scale ecophysiological modeling approach

    NARCIS (Netherlands)

    Wijk, van M.T.

    2007-01-01

    Ecosystem functioning is the result of processes working at a hierarchy of scales. The representation of these processes in a model that is mathematically tractable and ecologically meaningful is a big challenge. In this paper I describe an individual-based model (PLACO - PLAnt COmpetition) that

  8. Scapular allograft reconstruction after total scapulectomy: surgical technique and functional results

    NARCIS (Netherlands)

    Capanna, R.; Totti, F.; Geest, I.C.M. van der; Muller, D.A.

    2015-01-01

    HYPOTHESIS: Scapular allograft reconstruction after total scapulectomy preserving the rotator cuff muscles is an oncologically safe procedure and results in good functional outcome with a low complication rate. METHODS: The data of 6 patients who underwent scapular allograft reconstruction after a

  9. Inhibition of the Pim1 oncogene results in diminished visual function.

    Directory of Open Access Journals (Sweden)

    Jun Yin

    Full Text Available Our objective was to profile genetic pathways whose differential expression correlates with maturation of visual function in zebrafish. Bioinformatic analysis of transcriptomic data revealed Jak-Stat signalling as the pathway most enriched in the eye as visual function develops. Real-time PCR, western blotting, immunohistochemistry and in situ hybridization data confirm that multiple Jak-Stat pathway genes are up-regulated in the zebrafish eye between 3 and 5 days post-fertilisation, times associated with significant maturation of vision. One of the most up-regulated Jak-Stat genes is the proto-oncogene Pim1 kinase, previously associated with haematological malignancies and cancer. Loss-of-function experiments using Pim1 morpholinos or Pim1 inhibitors result in significant diminishment of visual behaviour and function. In summary, we have identified that enhanced expression of Jak-Stat pathway genes correlates with maturation of visual function and that the Pim1 oncogene is required for normal visual function.

  10. Exchange Rate Exposure Management: The Benchmarking Process of Industrial Companies

    DEFF Research Database (Denmark)

    Aabo, Tom

    Based on a cross-case study of Danish industrial companies the paper analyzes the benchmarking of the optimal hedging strategy. A stock market approach is pursued but a serious question mark is put on the validity of the obtained information seen from a corporate value-adding point of view...... of practices and strategies that have been established in each company fairly independently over time. The paper argues that hedge benchmarks are useful in their creation process (by forcing a comprehensive analysis) as well as in their final status (by the establishment of a consistent hedging strategy....... The conducted interviews show that empirical reasons behind actual hedging strategies vary considerably - some in accordance with mainstream finance theory, some resting on asymmetric information. The diversity of attitudes seems to be partly a result of different competitive environments, partly a result...

  11. Static benchmarking of the NESTLE advanced nodal code

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    1997-01-01

    Results from the NESTLE advanced nodal code are presented for multidimensional numerical benchmarks representing four different types of reactors, and predictions from NESTLE are compared with measured data from pressurized water reactors (PWRs). The numerical benchmarks include cases representative of PWRs, boiling water reactors (BWRs), CANDU heavy water reactors (HWRs), and high-temperature gas-cooled reactors (HTGRs). The measured PWR data include critical soluble boron concentrations and isothermal temperature coefficients of reactivity. The results demonstrate that NESTLE correctly solves the multigroup diffusion equations for both Cartesian and hexagonal geometries, that it reliably calculates k eff and reactivity coefficients for PWRs, and that--subsequent to the incorporation of additional thermal-hydraulic models--it will be able to perform accurate calculations for the corresponding parameters in BWRs, HWRs, and HTGRs as well
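
    As an aside for readers unfamiliar with the underlying eigenvalue problem: what a nodal code like NESTLE solves in multigroup, 3-D form can be illustrated by a one-group, 1-D finite-difference sketch in which k eff is obtained by power (source) iteration. The cross sections below are assumed, illustrative values, and the scheme is deliberately naive, not the nodal method itself.

    ```python
    import numpy as np

    D, sig_a, nu_sig_f = 1.0, 0.07, 0.08   # diffusion coefficient [cm] and
                                           # absorption / nu-fission [1/cm] (assumed)
    L, n = 100.0, 200                      # slab width [cm], number of mesh cells
    h = L / n

    # Loss operator -D d^2/dx^2 + sig_a with zero-flux boundary conditions.
    main = (2.0 * D / h**2 + sig_a) * np.ones(n)
    off = (-D / h**2) * np.ones(n - 1)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    phi, k = np.ones(n), 1.0
    for _ in range(200):                   # power (source) iteration
        phi_new = np.linalg.solve(A, nu_sig_f * phi / k)
        k *= phi_new.sum() / phi.sum()     # update k from the fission-source ratio
        phi = phi_new / phi_new.max()      # renormalize the flux shape

    # One-group analytic estimate for a bare slab: k = nu_sig_f/(sig_a + D*(pi/L)^2)
    print(f"k_eff ~ {k:.4f} (analytic ~ {nu_sig_f / (sig_a + D * (np.pi / L)**2):.4f})")
    ```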

  12. Functional and aesthetic results of various lip-splitting incisions: A clinical analysis of 60 cases.

    Science.gov (United States)

    Rapidis, A D; Valsamis, S; Anterriotis, D A; Skouteris, C A

    2001-11-01

    This study retrospectively evaluated the functional and aesthetic results of various types of lip-splitting incisions in a group of patients in whom this approach was used to treat intraoral tumors. Between 1992 and 1998, 87 consecutive patients were subjected to either mandibulotomy or mandibulectomy using a lip-splitting incision. During this period, 4 types of incisions were sequentially used: straight midline incision, lateral lip-splitting incision, midline splitting with extension around the contour of the chin, and the chevron chin-contour incision. Sixty patients with a follow-up of at least 6 months were included in the study. The patients were asked to answer a questionnaire regarding the degree of satisfaction with the cosmetic result of the procedure and were clinically assessed for sensory and functional impairment resulting from the incision. The remaining 27 patients were lost to follow-up or had died of their disease. The lateral lip-splitting incision caused the fewest postoperative problems in patients subjected to either mandibulotomy or mandibulectomy. The best overall results were achieved by the chevron-chin contour incision. The incision that followed the contour of the chin and the straight midline incision showed less satisfactory results. The chevron chin-contour incision, along with meticulous soft tissue closure, produces the best aesthetic and functional results. Copyright 2001 American Association of Oral and Maxillofacial Surgeons

  13. Quality benchmarking methodology: Case study of finance and culture industries in Latvia

    Directory of Open Access Journals (Sweden)

    Ieva Zemīte

    2011-01-01

    Full Text Available Political, socio-economic and cultural changes that have taken place in the world during the last years have influenced all spheres. Constant improvements are necessary to survive in competitive and shrinking markets. This sets high quality standards for the service industries. Therefore it is important to compare quality criteria to ascertain which practices achieve superior performance levels. At present, companies in Latvia do not carry out mutual benchmarking, and as a result do not know how they rank against their peers in terms of quality; nor do they see benefits in sharing information and in benchmarking. The purpose of this paper is to determine the criteria of qualitative benchmarking, and to investigate the use of benchmarking quality in service industries, particularly the finance and culture sectors in Latvia, in order to determine the key driving factors of quality, to explore internal and foreign benchmarks, and to reveal the full potential of input reduction and efficiency growth for the aforementioned industries. Case study and other tools are used to define the readiness of the company for benchmarking. Certain key factors are examined for their impact on quality criteria. The results are based on research conducted in professional associations in the defined fields (insurance and theatre). Originality/value – this is the first study that adopts benchmarking models for measuring quality criteria and readiness for mutual comparison in the insurance and theatre industries in Latvia.

  14. New results for time reversed symplectic dynamic systems and quadratic functionals

    Directory of Open Access Journals (Sweden)

    Roman Simon Hilscher

    2012-05-01

    Full Text Available In this paper, we examine time scale symplectic (or Hamiltonian) systems and the associated quadratic functionals which contain a forward shift in the time variable. Such systems and functionals have a close connection to Jacobi systems for calculus of variations and optimal control problems on time scales. Our results, among which we count the Reid roundabout theorem, generalize the corresponding classical theory for time-reversed discrete symplectic systems and complete the recently developed theory of time scale symplectic systems.
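
    A minimal sketch of the objects involved, for the reader's orientation; this shows the standard discrete-time special case rather than the paper's general time scale setting, and the block form of the functional follows one common convention in the literature:

    \[
    z_{k+1} = \mathcal{S}_k z_k, \qquad \mathcal{S}_k^{T}\mathcal{J}\mathcal{S}_k = \mathcal{J}, \qquad
    \mathcal{J} = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix},
    \]

    where, with $z_k = (x_k^T, u_k^T)^T$ and $\mathcal{S}_k = \begin{pmatrix} A_k & B_k \\ C_k & D_k \end{pmatrix}$, the associated discrete quadratic functional reads

    \[
    \mathcal{F}(z) = \sum_{k=0}^{N-1} \Big( x_k^{T} C_k^{T} A_k x_k + 2\, x_k^{T} C_k^{T} B_k u_k + u_k^{T} D_k^{T} B_k u_k \Big).
    \]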

  15. The institutionalization of benchmarking in the Danish construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard; Gottlieb, Stefan Christoffer

    and disseminated to the construction industry. The fourth chapter demonstrates how benchmarking was concretized into a benchmarking system and articulated to address several political focus areas for the construction industry. BEC accordingly became a political arena where many local perspectives and strategic...... interests had to be managed. The fifth chapter is about the operationalization of benchmarking and demonstrates how the concretizing and implementation of benchmarking gave rise to reactions from different actors with different and diverse interests in the benchmarking initiative. Political struggles...

  16. Toxicological benchmarks for wildlife: 1994 Revision

    International Nuclear Information System (INIS)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II.

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report

  17. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report.

  18. Neutronics analysis of the International Thermonuclear Experimental Reactor (ITER) MCNP ''Benchmark CAD Model'' with the ATTILA discrete ordinates code

    International Nuclear Information System (INIS)

    Youssef, M.Z.; Feder, R.; Davis, I.

    2007-01-01

    The ITER IT has adopted the newly developed FEM, 3-D, and CAD-based Discrete Ordinates code, ATTILA for the neutronics studies contingent on its success in predicting key neutronics parameters and nuclear field according to the stringent QA requirements set forth by the Management and Quality Program (MQP). ATTILA has the advantage of providing a full flux and response functions mapping everywhere in one run where components subjected to excessive radiation level and strong streaming paths can be identified. The ITER neutronics community had agreed to use a standard CAD model of ITER (40 degree sector, denoted ''Benchmark CAD Model'') to compare results for several responses selected for calculation benchmarking purposes to test the efficiency and accuracy of the CAD-MCNP approach developed by each party. Since ATTILA seems to lend itself as a powerful design tool with minimal turnaround time, it was decided to benchmark this model with ATTILA as well and compare the results to those obtained with the CAD MCNP calculations. In this paper we report such comparison for five responses, namely: (1) Neutron wall load on the surface of the 18 shield blanket modules (SBM), (2) Neutron flux and nuclear heating rate in the divertor cassette, (3) nuclear heating rate in the winding pack of the inner leg of the TF coil, (4) Radial flux profile across dummy port plug and shield plug placed in the equatorial port, and (5) Flux at seven point locations situated behind the equatorial port plug. (orig.)

  19. Productivity benchmarks for operative service units.

    Science.gov (United States)

    Helkiö, P; Aantaa, R; Virolainen, P; Tuominen, R

    2016-04-01

    Easily accessible, reliable information is crucial for strategic and tactical decision-making on operative processes. We report the development of an analysis tool and resulting metrics for benchmarking purposes at a Finnish university hospital. The analysis tool is based on data collected in a resource management system and an in-house cost-reporting database. The exercise reports key metrics for four operative service units and six surgical units from 2014, together with the change from 2013. Productivity, measured as total costs per total hours, ranged from 658 to 957 €/h, and utilization of the total available resource hours at the service unit level ranged from 66% to 74%. The lowest costs were in a unit running only regular working-hour shifts, whereas the highest costs were in a unit operating on a 24/7 basis. The tool includes additional metrics on operating room (OR) scheduling and monthly data to support more detailed analysis. This report provides the hospital management with an improved and detailed overview of its operative service units and the surgical process and related costs. The operating costs are associated with on-call duties, the size of operative service units, and the requirements of the surgeries. This information aids in making mid- to long-range decisions on managing OR capacity. © 2016 The Acta Anaesthesiologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  20. Benchmark Analysis of EBR-II Shutdown Heat Removal Tests

    International Nuclear Information System (INIS)

    2017-08-01

    This publication presents the results and main achievements of an IAEA coordinated research project to verify and validate system and safety codes used in the analyses of liquid metal thermal hydraulics and neutronics phenomena in sodium-cooled fast reactors. The publication will be of use to researchers and professionals currently working on relevant fast reactor programmes. In addition, it is intended to support the training of the next generation of analysts and designers through international benchmark exercises

  1. Simulator for SUPO, a Benchmark Aqueous Homogeneous Reactor (AHR)

    Energy Technology Data Exchange (ETDEWEB)

    Klein, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Determan, John C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-10-14

    A simulator has been developed for SUPO (Super Power) an aqueous homogeneous reactor (AHR) that operated at Los Alamos National Laboratory (LANL) from 1951 to 1974. During that period SUPO accumulated approximately 600,000 kWh of operation. It is considered the benchmark for steady-state operation of an AHR. The SUPO simulator was developed using the process that resulted in a simulator for an accelerator-driven subcritical system, which has been previously reported.

  2. Benchmarking and performance improvement at Rocky Flats Technology Site

    International Nuclear Information System (INIS)

    Elliott, C.; Doyle, G.; Featherman, W.L.

    1997-03-01

    The Rocky Flats Environmental Technology Site has initiated a major work process improvement campaign using the tools of formalized benchmarking and streamlining. This paper provides insights into some of the process improvement activities performed at Rocky Flats from November 1995 through December 1996. It reviews the background, motivation, methodology, results, and lessons learned from this ongoing effort. The paper also presents important gains realized through process analysis and improvement including significant cost savings, productivity improvements, and an enhanced understanding of site work processes

  3. Rural and urban transit district benchmarking : effectiveness and efficiency guidance document.

    Science.gov (United States)

    2011-05-01

    Rural and urban transit systems have sought ways to compare performance across agencies, : identifying successful service delivery strategies and applying these concepts to achieve : successful results within their agency. Benchmarking is a method us...

  4. A thermomechanical benchmark calculation of a hexagonal can in the BTI accident with the INCA code

    International Nuclear Information System (INIS)

    Zucchini, A.

    1988-01-01

    The thermomechanical behaviour of a hexagonal can in a benchmark problem (simulating the conditions of a BTI accident in a fuel assembly) is examined by means of the INCA code, and the results are systematically compared with those of ADINA

  5. Toxicological benchmarks for potential contaminants of concern for effects on soil and litter invertebrates and heterotrophic process

    Energy Technology Data Exchange (ETDEWEB)

    Will, M.E.; Suter, G.W. II

    1995-09-01

    An important step in ecological risk assessments is screening the chemicals occurring on a site for contaminants of potential concern. Screening may be accomplished by comparing reported ambient concentrations to a set of toxicological benchmarks. Multiple endpoints for assessing risks posed by soil-borne contaminants to organisms directly impacted by them have been established. This report presents benchmarks for soil invertebrates and microbial processes and addresses only chemicals found at United States Department of Energy (DOE) sites. No benchmarks for pesticides are presented. After discussing methods, this report presents the results of the literature review and benchmark derivation for toxicity to earthworms (Sect. 3), heterotrophic microbes and their processes (Sect. 4), and other invertebrates (Sect. 5). The final sections compare the benchmarks to other criteria and background and draw conclusions concerning the utility of the benchmarks.

  6. Reactor based plutonium disposition - physics and fuel behaviour benchmark studies of an OECD/NEA experts group

    International Nuclear Information System (INIS)

    D'Hondt, P.; Gehin, J.; Na, B.C.; Sartori, E.; Wiesenack, W.

    2001-01-01

    One of the options envisaged for disposing of weapons-grade plutonium, declared surplus for national defence in the Russian Federation and the USA, is to burn it in nuclear power reactors. The scientific/technical know-how accumulated in the use of MOX as a fuel for electricity generation is of great relevance for the plutonium disposition programmes. An Expert Group of the OECD/NEA is carrying out a series of benchmarks with the aim of facilitating the use of this know-how for meeting this objective. This paper describes the background that led to establishing the Expert Group, and the present status of results from these benchmarks. The benchmark studies cover a theoretical reactor physics benchmark on a VVER-1000 core loaded with MOX, two experimental benchmarks on MOX lattices and a benchmark concerned with MOX fuel behaviour for both solid and hollow pellets. First conclusions are outlined as well as future work. (author)

  7. ZZ-PBMR-400, OECD/NEA PBMR Coupled Neutronics/Thermal Hydraulics Transient Benchmark - The PBMR-400 Core Design

    International Nuclear Information System (INIS)

    Reitsma, Frederik

    2007-01-01

    efficiency: ≥ 41%; Emergency planning zone: 400 meters. The PBMR functions under a direct Brayton cycle with primary coolant helium flowing downward through the core and exiting at 900 °C. The helium then enters the turbine, relinquishing energy to drive the electric generator and compressors. After leaving the turbine, the helium passes consecutively through the LP primary side of the recuperator, then the pre-cooler, the low-pressure compressor, inter-cooler, high-pressure compressor and then on to the HP secondary side of the recuperator before re-entering the reactor vessel at 500 °C. Power is adjusted by regulating the mass flow rate of gas inside the primary circuit. This is achieved by a combination of compressor bypass and system pressure changes. Increasing the pressure results in an increase in mass flow rate, which results in an increase in the power removed from the core. Power reduction is achieved by removing gas from the circuit. A Helium Inventory Control System is used to provide an increase or decrease in system pressure. The benchmark is divided into phases and exercises as follows. Phase I (Steady-State Benchmark Calculational Cases): Exercise 1: Neutronics Solution with Fixed Cross Sections; Exercise 2: Thermal-Hydraulic Solution with Given Power/Heat Sources; Exercise 3: Combined Neutronics/Thermal-Hydraulics Calculation - starting condition for the transients. Phase II (Transient Benchmark): Exercise 1: De-pressurised Loss of Forced Cooling (DLOFC) without SCRAM; Exercise 2: De-pressurised Loss of Forced Cooling (DLOFC) with SCRAM; Exercise 3: Pressurised Loss of Forced Cooling (PLOFC) with SCRAM; Exercise 4: 100-40-100 Load Follow; Exercise 5: Fast Reactivity Insertion - Control Rod Withdrawal (CRW) and Control Rod Ejection (CRE) scenarios at hot full power conditions; Exercise 6: Cold Helium Inlet
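
    As a rough orientation on the quoted cycle efficiency, a textbook ideal-Brayton estimate can be computed in a few lines. The real PBMR cycle is recuperated with inter- and pre-cooling, which lifts the efficiency toward the quoted ≥ 41%, and the pressure ratio used here is an assumed, illustrative value.

    ```python
    # Ideal (non-recuperated) Brayton-cycle efficiency: eta = 1 - r^(-(gamma-1)/gamma).
    gamma = 5.0 / 3.0          # helium is monatomic
    pressure_ratio = 2.7       # assumed for illustration, not the actual PBMR value
    eta_ideal = 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)
    print(f"ideal-cycle efficiency ~ {eta_ideal:.1%}")  # ~33%; recuperation raises this
    ```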

  8. Benchmark Simulation Model No 2 – finalisation of plant layout and default control strategy

    DEFF Research Database (Denmark)

    Nopens, I.; Benedetti, L.; Jeppsson, U.

    2010-01-01

    The COST/IWA Benchmark Simulation Model No 1 (BSM1) has been available for almost a decade. Its primary purpose has been to create a platform for control strategy benchmarking of activated sludge processes. The fact that the research work related to the benchmark simulation models has resulted in...... be evaluated in a realistic fashion in the one week BSM1 evaluation period. In this paper, the finalised plant layout is summarised and, as was done for BSM1, a default control strategy is proposed. A demonstration of how BSM2 can be used to evaluate control strategies is also given....

  9. Closed-loop neuromorphic benchmarks

    CSIR Research Space (South Africa)

    Stewart, TC

    2015-11-01

    Full Text Available Importantly, as pointed out by Sanner and Slotine (1992) and Lewis (1996), rather than making explicit assumptions about the exact functions that should be in Y(q), we can use a neural network approach where each neuron is a different function of q... plausible neurons and been used in both the Recurrent Error-driven Adaptive Control Hierarchy (REACH) model of human motor control (DeWolf 2014) and quadcopter control (Komer 2015). These considerations suggest that there is a neuromorphic...
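
    A minimal sketch of the idea quoted in this record: instead of fixing the regressor functions in Y(q) analytically, each "neuron" is a different basis function of q and the output weights are adapted online. This is generic function approximation in the spirit of such adaptive-control networks, not the REACH or quadcopter implementations cited; all functions and constants below are assumed.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    centers = np.linspace(-1, 1, 50)              # each neuron tuned to a point in q-space
    width = 0.15

    def neuron_activities(q):
        """Gaussian tuning curves: one basis function of q per neuron."""
        return np.exp(-((q - centers) ** 2) / (2 * width ** 2))

    w = np.zeros_like(centers)                    # output weights, adapted online
    eta = 0.1                                     # learning rate
    f_true = lambda q: np.sin(3 * q) + 0.5 * q    # "unknown" dynamics to approximate

    for _ in range(5000):                         # online adaptation loop
        q = rng.uniform(-1, 1)
        a = neuron_activities(q)
        err = w @ a - f_true(q)
        w -= eta * err * a                        # LMS / gradient-descent weight update

    test_q = np.linspace(-0.9, 0.9, 7)
    approx = np.array([w @ neuron_activities(q) for q in test_q])
    print(np.max(np.abs(approx - f_true(test_q))))  # residual approximation error
    ```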

  10. Observer-based FDI for Gain Fault Detection in Ship Propulsion Benchmark

    DEFF Research Database (Denmark)

    Lootsma, T.F.; Izadi-Zamanabadi, Roozbeh; Nijmeijer, H.

    2001-01-01

    A geometric approach for input-affine nonlinear systems is briefly described and then applied to a ship propulsion benchmark. The obtained results are used to design a diagnostic nonlinear observer for successful FDI of the diesel engine gain fault.

  12. Benchmarking Multilayer-HySEA model for landslide generated tsunami. NTHMP validation process.

    Science.gov (United States)

    Macias, J.; Escalante, C.; Castro, M. J.

    2017-12-01

    Landslide tsunami hazard may be dominant along significant parts of the coastline around the world, in particular in the USA, as compared to hazards from other tsunamigenic sources. This fact motivated NTHMP to benchmark models for landslide-generated tsunamis, following the same methodology already used for standard tsunami models when the source is seismic. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks, for a total of seven. The Multilayer-HySEA model including non-hydrostatic effects has been used to perform all the benchmark problems dealing with laboratory experiments proposed in the workshop organized at Texas A&M University - Galveston, on January 9-11, 2017 by NTHMP. The aim of this presentation is to show some of the latest numerical results obtained with the Multilayer-HySEA (non-hydrostatic) model in the framework of this validation effort. Acknowledgements: This research has been partially supported by the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and University of Malaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  13. Performance Against WELCOA's Worksite Health Promotion Benchmarks Across Years Among Selected US Organizations.

    Science.gov (United States)

    Weaver, GracieLee M; Mendenhall, Brandon N; Hunnicutt, David; Picarella, Ryan; Leffelman, Brittanie; Perko, Michael; Bibeau, Daniel L

    2018-05-01

    The purpose of this study was to quantify the performance of organizations' worksite health promotion (WHP) activities against the benchmarking criteria included in the Well Workplace Checklist (WWC). The Wellness Council of America (WELCOA) developed a tool to assess WHP with its 100-item WWC, which represents WELCOA's 7 performance benchmarks. Setting: workplaces. This study includes a convenience sample of organizations that completed the checklist from 2008 to 2015. The sample size was 4643 entries from US organizations. The WWC includes demographic questions, general questions about WHP programs, and scales to measure performance against the WELCOA 7 benchmarks. Descriptive analyses of WWC items were completed separately for each year of the study period. The majority of the organizations represented each year were multisite, multishift, medium- to large-sized companies, mostly in the services industry. Despite yearly changes in participating organizations, results across the WELCOA 7 benchmark scores were consistent from year to year. Across all years, the benchmarks on which organizations performed lowest were senior-level support, data collection, and programming; wellness teams and supportive environments were the highest-scoring benchmarks. In an era marked by economic swings and health-care reform, it appears that organizations are staying consistent in their performance across these benchmarks. The WWC could be useful for organizations, practitioners, and researchers in assessing the quality of WHP programs.

  14. Decision making with consonant belief functions: Discrepancy resulting with the probability transformation method used

    Directory of Open Access Journals (Sweden)

    Cinicioglu Esma Nur

    2014-01-01

    Full Text Available Dempster-Shafer belief function theory can address a wider class of uncertainty than standard probability theory does, and this fact appeals to researchers in the operations research community seeking potential application areas. However, the lack of a decision theory for belief functions gives rise to the need to use probability transformation methods for decision making. For the representation of statistical evidence, the class of consonant belief functions is used, which is not closed under Dempster's rule of combination but is closed under Walley's rule of combination. In this research, it is shown that the outcomes obtained using Dempster's and Walley's rules result in different probability distributions when the pignistic transformation is used; however, when the plausibility transformation is used, they result in the same probability distribution. This result shows that the choice of the combination rule and probability transformation method may have a significant effect on decision making, since it may change the decision alternative selected. This result is illustrated via an example of missile type identification.
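
    A minimal sketch of the two transformations discussed above, applied to a consonant mass function (nested focal elements); the frame and masses are assumed toy values, not the paper's missile-identification example.

    ```python
    from itertools import chain

    # Consonant mass function on the frame {a, b, c}: focal elements are nested.
    m = {frozenset('a'): 0.5, frozenset('ab'): 0.3, frozenset('abc'): 0.2}
    frame = set(chain.from_iterable(m))

    def pignistic(m, frame):
        """BetP(x) = sum over focal sets A containing x of m(A)/|A|."""
        return {x: sum(v / len(A) for A, v in m.items() if x in A) for x in frame}

    def plausibility_transform(m, frame):
        """Pl_P(x) = Pl({x}) / sum_y Pl({y}), with Pl({x}) = sum of m(A) over A containing x."""
        pl = {x: sum(v for A, v in m.items() if x in A) for x in frame}
        total = sum(pl.values())
        return {x: p / total for x, p in pl.items()}

    print(pignistic(m, frame))               # a: 0.717, b: 0.217, c: 0.067
    print(plausibility_transform(m, frame))  # a: 0.588, b: 0.294, c: 0.118 -- different
    ```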

  15. Functional and cosmetic results of fingertip replantation: anastomosing only the digital artery.

    Science.gov (United States)

    Matsuzaki, Hironori; Yoshizu, Takae; Maki, Yutaka; Tsubokawa, Naoto

    2004-10-01

    In fingertip amputations, conventional stump plasty provides an almost acceptable functional result. However, replanting fingertips can preserve the nail and minimize loss of function. We investigated the functional and cosmetic results of fingertip replantation at the terminal branch of the digital artery. Outcomes were nailbed width and distal-segment length; sensory recovery; and range of motion (ROM) of thumb-interphalangeal (IP) or finger-distal interphalangeal (DIP) joints, and total active motion (TAM) of the replanted finger. Of 15 fingertips replanted after only arterial anastomosis, 13 were successful, and 12 were studied. After a median of 1.3 years, mean nailbed widths and distal-segment lengths were 95.4% and 93.0%, respectively, of the contralateral finger. Average TAM and ROM of the thumb-IP or finger-DIP joints were 92.0% and 83.0% of normal, respectively. Semmes-Weinstein results were blue (3.22 to 3.61) in 4 fingers and purple (3.84 to 4.31) in 8; the mean result from the 2-point discrimination test was 5.9 mm (range, 3 to 11 mm). Thus, amputated fingertips should be aggressively replanted.

  16. Shielding Integral Benchmark Archive and Database (SINBAD)

    Energy Technology Data Exchange (ETDEWEB)

    Kirk, Bernadette Lugue [ORNL; Grove, Robert E [ORNL; Kodeli, I. [International Atomic Energy Agency (IAEA); Sartori, Enrico [ORNL; Gulliford, J. [OECD Nuclear Energy Agency

    2011-01-01

    The Shielding Integral Benchmark Archive and Database (SINBAD) collection of benchmarks was initiated in the early 1990s. SINBAD is an international collaboration between the Organization for Economic Cooperation and Development's Nuclear Energy Agency Data Bank (OECD/NEADB) and the Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL). SINBAD is a major attempt to compile experiments and corresponding computational models with the goal of preserving institutional knowledge and expertise that need to be handed down to future scientists. SINBAD is also a learning tool for university students and scientists who need to design experiments or gain expertise in modeling and simulation. The SINBAD database is currently divided into three categories: fission, fusion, and accelerator benchmarks. Where possible, each experiment is described and analyzed using deterministic or probabilistic (Monte Carlo) radiation transport software.

  17. The functional results of acute nerve grafting in traumatic sciatic nerve injuries.

    Science.gov (United States)

    Vayvada, Haluk; Demirdöver, Cenk; Menderes, Adnan; Yılmaz, Mustafa; Karaca, Can

    2013-03-01

    The sciatic and peroneal nerves are the most frequently injured in lower extremities, followed by tibial and femoral nerves. The aim of this study is to evaluate the functional results of acute nerve grafting in traumatic sciatic nerve injuries. A total of 9 patients with sciatic nerve defect were treated with primary nerve grafting. The mean age was 31.7 years. The etiologic factors were gunshot wounds, traffic accident, and penetrating trauma. All of the patients had sciatic nerve defects ranging from 3.4 to 13.6 cm. The follow-up period ranged between 25 and 84 months. The tibial nerve motor function was "good" or "very good" (M3-M4) in 5 patients (55.6%). The plantar flexion was not sufficient for the rest of the patients. The peroneal nerve motor function was also "good" and "very good" in 3 patients (33.3%). The functional results of the acute nerve grafting of the sciatic nerve within the first week after the injury are poorer than reported in the related literature. This protocol should only be applied to select patients who have adequate soft tissue coverage and healthy nerve endings.

  18. Finnish contribution to the CB4 burnup credit benchmark

    International Nuclear Information System (INIS)

    Wasastjerna, F.

    2001-01-01

    The CB4 phase of the WWER burnup credit benchmark series studies the effect of flat and realistic axial burnup profiles on the multiplication factor of a conceptual WWER cask loaded with spent fuel. The benchmark was calculated at VTT Energy with MCNP4C, using mainly ENDF/B-VI cross sections. According to the calculation results, the effect of axial homogenization on the k eff estimate is complex. At low burnups the use of a flat axial profile overestimates k eff, but at high burnups the reverse is the case. Ignoring fission products leads to a conservative k eff, and the effect of axial homogenization on the multiplication factor is similar to that of a reduction of the burnup (Authors)

  19. Benchmarks for single-phase flow in fractured porous media

    Science.gov (United States)

    Flemisch, Bernd; Berre, Inga; Boon, Wietse; Fumagalli, Alessio; Schwenck, Nicolas; Scotti, Anna; Stefansson, Ivar; Tatomir, Alexandru

    2018-01-01

    This paper presents several test cases intended to be benchmarks for numerical schemes for single-phase fluid flow in fractured porous media. A number of solution strategies are compared, including a vertex and two cell-centred finite volume methods, a non-conforming embedded discrete fracture model, a primal and a dual extended finite element formulation, and a mortar discrete fracture model. The proposed benchmarks test the schemes by increasing the difficulties in terms of network geometry, e.g. intersecting fractures, and physical parameters, e.g. low and high fracture-matrix permeability ratio as well as heterogeneous fracture permeabilities. For each problem, the results presented are the number of unknowns, the approximation errors in the porous matrix and in the fractures with respect to a reference solution, and the sparsity and condition number of the discretized linear system. All data and meshes used in this study are publicly available for further comparisons.
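
    The diagnostics reported for each benchmark (number of unknowns, sparsity, condition number) are straightforward to reproduce for any discretized system; the sketch below uses an assumed sparse matrix, not one of the paper's fracture discretizations.

    ```python
    import numpy as np
    import scipy.sparse as sp

    n = 200                                    # number of unknowns
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')

    sparsity = A.nnz / (n * n)                 # fraction of nonzero entries
    cond = np.linalg.cond(A.toarray())         # dense cond is fine at this size;
                                               # use iterative estimators for large systems
    print(f"unknowns={n}, nnz={A.nnz}, sparsity={sparsity:.4f}, cond={cond:.1e}")
    ```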

  20. CEA-IPSN Participation in the MSLB Benchmark

    International Nuclear Information System (INIS)

    Royer, E.; Raimond, E.; Caruge, D.

    2001-01-01

    The OECD/NEA Main Steam Line Break (MSLB) Benchmark allows the comparison of state-of-the-art and best-estimate models used to compute reactivity accidents. The three exercises of the MSLB benchmark are defined with the aim of analyzing the space and time effects in the core and their modeling with computational tools. Point kinetics (exercise 1) simulation results in a return to power (RTP) after scram, whereas 3-D kinetics (exercises 2 and 3) does not display any RTP. The objective is to understand the reasons for the conservative solution of point kinetics and to assess the benefits of best-estimate models. First, the core vessel mixing model is analyzed; second, sensitivity studies on point kinetics are compared to 3-D kinetics; third, the core thermal hydraulics model and coupling with neutronics is presented; finally, RTP and a suitable model for MSLB are discussed
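
    For readers unfamiliar with exercise 1's methodology, the point-kinetics equations with one effective delayed-neutron group are sketched below; a persistent positive reactivity (here an assumed ramp standing in for post-scram overcooling) is what produces a return to power. Parameters are generic PWR-like values, not the benchmark specification.

    ```python
    from scipy.integrate import solve_ivp

    beta, Lam, lam = 0.0065, 2.0e-5, 0.08   # delayed fraction, generation time [s], decay [1/s]

    def rho(t):
        # scram worth -4 dollars, then a slow positive ramp to +0.2 dollars (assumed)
        return -4.0 * beta + min(t, 60.0) / 60.0 * 4.2 * beta

    def rhs(t, y):
        n, c = y
        dn = (rho(t) - beta) / Lam * n + lam * c
        dc = beta / Lam * n - lam * c
        return [dn, dc]

    y0 = [1.0, beta / (Lam * lam)]          # equilibrium at nominal power
    sol = solve_ivp(rhs, (0.0, 120.0), y0, method='LSODA', rtol=1e-8, atol=1e-10)
    print(f"power at t = 120 s: {sol.y[0, -1]:.3f} of nominal (rising: return to power)")
    ```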

  1. Healthcare Analytics: Creating a Prioritized Improvement System with Performance Benchmarking.

    Science.gov (United States)

    Kolker, Eugene; Kolker, Evelyne

    2014-03-01

    The importance of healthcare improvement is difficult to overstate. This article describes our collaborative work with experts at Seattle Children's to create a prioritized improvement system using performance benchmarking. We applied analytics and modeling approaches to compare and assess performance metrics derived from U.S. News and World Report benchmarking data. We then compared a wide range of departmental performance metrics, including patient outcomes, structural and process metrics, survival rates, clinical practices, and subspecialist quality. By applying empirically simulated transformations and imputation methods, we built a predictive model that achieves departments' average rank correlation of 0.98 and average score correlation of 0.99. The results are then translated into prioritized departmental and enterprise-wide improvements, following a data to knowledge to outcomes paradigm. These approaches, which translate data into sustainable outcomes, are essential to solving a wide array of healthcare issues, improving patient care, and reducing costs.

  2. BENCHMARKING AND CONFIGURATION OF OPENSOURCE MANUFACTURING EXECUTION SYSTEM (MES APPLICATION

    Directory of Open Access Journals (Sweden)

    Ganesha Nur Laksmana

    2013-05-01

    Full Text Available Information now is an important element to every growing industry in the world. Inorder to keep up with other competitors, endless improvements in optimizing overall efficiency areneeded. There still exist barriers that separate departments in PT. XYZ and cause limitation to theinformation sharing in the system. Open-Source Manufacturing Execution System (MES presentsas an IT-based application that offers wide variety of customization to eliminate stovepipes bysharing information between departments. Benchmarking is used to choose the best Open-SourceMES Application; and Dynamic System Development Method (DSDM is adopted as this workguideline. As a result, recommendations of the chosen Open-Source MES Application arerepresented.Keywords: Manufacturing Execution System (MES; Open Source; Dynamic SystemDevelopment Method (DSDM; Benchmarking; Configuration

  3. Conclusion of the I.C.T. benchmark exercise

    International Nuclear Information System (INIS)

    Giacometti, A.

    1991-01-01

    The ICT Benchmark exercise made within the RIV working group of ESARDA on reprocessing data supplied by COGEMA for 53 routines reprocessing input batches made of 110 irradiated fuel assemblies from KWO Nuclear Power Plant was finally evaluated. The conclusions are: all seven different ICT methods applied verified the operator data on plutonium within about one percent; anomalies intentionally introduced to the operator data were detected in 90% of the cases; the nature of the introduced anomalies, which were unknown to the participants, was completely resolved for the safeguards relevant cases; the false alarm rate was in a few percent range. The ICT Benchmark results shows that this technique is capable of detecting and resolving anomalies in the reprocessing input data to the order of a percent

  4. Higgs Pair Production: Choosing Benchmarks With Cluster Analysis

    CERN Document Server

    Carvalho, Alexandra; Dorigo, Tommaso; Goertz, Florian; Gottardo, Carlo A.; Tosi, Mia

    2016-01-01

    New physics theories often depend on a large number of free parameters. The precise values of those parameters in some cases drastically affect the resulting phenomenology of fundamental physics processes, while in others finite variations can leave it basically invariant at the level of detail experimentally accessible. When designing a strategy for the analysis of experimental data in the search for a signal predicted by a new physics model, it appears advantageous to categorize the parameter space describing the model according to the corresponding kinematical features of the final state. A multi-dimensional test statistic can be used to gauge the degree of similarity in the kinematics of different models; a clustering algorithm using that metric may then allow the division of the space into homogeneous regions, each of which can be successfully represented by a benchmark point. Searches targeting those benchmark points are then guaranteed to be sensitive to a large area of the parameter space. In this doc...
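
    A toy version of the strategy described here: summarize each parameter-space point by the distribution of a kinematic observable, measure pairwise similarity with a simple distance (standing in for the multi-dimensional test statistic), cluster hierarchically, and keep one benchmark per cluster. The "models" below are assumed Gaussian stand-ins, not an actual signal-model scan.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(1)
    params = rng.uniform(-1, 1, size=(30, 2))     # 30 points in a 2-D parameter space
    bins = np.linspace(-5, 5, 41)

    hists = []
    for p1, p2 in params:
        # assumed mapping from parameters to the kinematic observable's distribution
        sample = rng.normal(loc=2 * p1, scale=1 + 0.5 * abs(p2), size=5000)
        h, _ = np.histogram(sample, bins=bins, density=True)
        hists.append(h)
    hists = np.array(hists)

    # Agglomerative clustering on distances between the normalized histograms.
    Z = linkage(hists, method='average', metric='euclidean')
    labels = fcluster(Z, t=4, criterion='maxclust')   # ask for 4 homogeneous regions

    for k in np.unique(labels):
        members = np.where(labels == k)[0]
        benchmark = members[0]   # in practice one would pick the most central member
        print(f"cluster {k}: {len(members)} points, benchmark point #{benchmark}")
    ```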

  5. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    Full Text Available The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and including "external" gardening demands is investigated. This includes the impacts (in isolation and combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are presented throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn on the robustness of the proposed system.

  6. Communication of Pulmonary Function Test Results: A Survey of Patient's Preferences.

    Directory of Open Access Journals (Sweden)

    Debbie Zagami

    Full Text Available Physician-patient communication in patients suffering from common chronic respiratory diseases should encompass discussion of pulmonary function test (PFT) results, diagnosis, disease education, smoking cessation and optimal inhaler technique. Previous studies have identified that patients with chronic respiratory disease often express dissatisfaction with physician communication. Currently there is a paucity of data regarding patient awareness of their PFT results (among those who have undergone PFTs previously) or patient preferences for how PFT results are communicated. We undertook a three-month prospective study of outpatients referred to two pulmonary function laboratories. If subjects had undergone PFTs previously, their awareness of the previous test results was evaluated. All subjects were asked about their preferences for PFT result communication. Subjects were determined to have chronic respiratory disease based on their past medical history. 300 subjects (50% male) with a median age (± SD) of 65 (± 14) years participated in the study. 99% of the study participants stated that they were at least moderately interested in knowing their PFT results. 72% (217/300) of the subjects had undergone at least one PFT in the past, 48% of whom stated they had not been made aware of their results. Fewer subjects with chronic respiratory disease preferred that only a doctor discuss their PFT results with them (28% vs. 41%, p = 0.021). Our study demonstrates that while almost all subjects want to be informed of their PFT results, this does not occur for a large number of patients. Many subjects are agreeable to having their PFT results communicated to them by clinicians other than doctors. Further research is required to develop an efficient method of conveying PFT results that will improve patient satisfaction and health outcomes.

  7. Dosimetry results for Big Ten and related benchmarks

    International Nuclear Information System (INIS)

    Hansen, G.E.; Gilliam, D.M.

    1980-01-01

    Measured average reaction cross sections for the Big Ten central flux spectrum are given together with calculated values based on the U.S. Evaluated Nuclear Data File ENDF/B-IV. Central reactivity coefficients for 233 U, 235 U, 239 Pu, 6 Li and 10 B are given to check consistency of bias between measured and calculated reaction cross sections for these isotopes. Spectral indexes for the Los Alamos 233 U, 235 U and 239 Pu metal critical assemblies are updated, utilizing the Big Ten measurements and interassembly calibrations, and their implications for inelastic scattering are reiterated

  8. Toxicological benchmarks for wildlife: 1996 Revision

    International Nuclear Information System (INIS)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II.

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets

  9. Toxicological benchmarks for wildlife: 1996 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  10. Monte Carlo burnup simulation of the TAKAHAMA-3 benchmark experiment

    International Nuclear Information System (INIS)

    Dalle, Hugo M.

    2009-01-01

    High burnup PWR fuel is currently being studied at CDTN/CNEN-MG. The Monte Carlo burnup code system MONTEBURNS is used to characterize the neutronic behavior of the fuel. In order to validate the code system and calculation methodology to be used in this study, the Japanese Takahama-3 benchmark was chosen, as it is the only freely available burnup benchmark experimental data set that partially reproduces the conditions of the fuel under evaluation. The burnup of the three PWR fuel rods of the Takahama-3 burnup benchmark was calculated by MONTEBURNS using the simplest infinite fuel pin cell model and also a more complex representation of an infinite lattice of heterogeneous fuel pin cells. Calculated results for the mass of most isotopes of Uranium, Neptunium, Plutonium, Americium, Curium and some fission products, commonly used as burnup monitors, were compared with the Post Irradiation Examination (PIE) values for all three fuel rods. The results have shown some sensitivity to the MCNP neutron cross-section data libraries, particularly affected by the temperature at which the evaluated nuclear data files were processed. (author)

  11. Compilation report of VHTRC temperature coefficient benchmark calculations

    International Nuclear Information System (INIS)

    Yasuda, Hideshi; Yamane, Tsuyoshi

    1995-11-01

    A calculational benchmark problem has been proposed by JAERI to an IAEA Coordinated Research Program, 'Verification of Safety Related Neutronic Calculation for Low-enriched Gas-cooled Reactors', to investigate the accuracy of calculation results obtained using the codes of the participating countries. This benchmark is based on assembly heating experiments at a pin-in-block type critical assembly, VHTRC. The requested calculation items are the cell parameters, effective multiplication factor, temperature coefficient of reactivity, reaction rates, fission rate distribution, etc. Seven institutions from five countries have joined the benchmark work. The calculation results are summarized in this report with some remarks by the authors. Each institute analyzed the problem by applying the calculation code system prepared for the HTGR development of its own country. The values of the most important parameter, k eff, obtained by all institutes showed good agreement with each other and with the experimental ones within 1%. The temperature coefficients agreed within 13%. The values of several cell parameters calculated by the institutes did not agree with one another. It will be necessary to check the calculation conditions again to obtain better agreement. (J.P.N.)

  12. Differentiability of Palmer's linearization Theorem and converse result for density functions

    OpenAIRE

    Castañeda, Alvaro; Robledo, Gonzalo

    2014-01-01

    We study differentiability properties in a particular case of Palmer's linearization theorem, which states the existence of a homeomorphism $H$ between the solutions of a linear ODE system having an exponential dichotomy and those of a quasilinear system. Indeed, if the linear system is uniformly asymptotically stable, sufficient conditions ensuring that $H$ is an orientation-preserving $C^{2}$ diffeomorphism are given. As an application, we generalize a converse result of density functions for a non...
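
    A sketch of the setting, assuming the standard statement of Palmer's theorem (notation ours, not necessarily the paper's):

    \[
    x' = A(t)\,x, \qquad y' = A(t)\,y + f(t, y),
    \]

    where the linear system has an exponential dichotomy, i.e. its transition matrix $\Phi$ satisfies, for a projection $P$ and constants $K \ge 1$, $\alpha > 0$,

    \[
    \|\Phi(t)P\Phi^{-1}(s)\| \le K e^{-\alpha(t-s)} \ (t \ge s), \qquad
    \|\Phi(t)(I-P)\Phi^{-1}(s)\| \le K e^{-\alpha(s-t)} \ (t \le s).
    \]

    Palmer's theorem then yields a homeomorphism $H(t,\cdot)$ mapping solutions of the linear system onto solutions of the quasilinear one; the paper's contribution concerns when $H$ is in fact a $C^{2}$ diffeomorphism.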

  13. Energy efficiency benchmarking of energy-intensive industries in Taiwan

    International Nuclear Information System (INIS)

    Chan, David Yih-Liang; Huang, Chi-Feng; Lin, Wei-Chun; Hong, Gui-Bing

    2014-01-01

    Highlights: • An analytical tool was applied to estimate the energy efficiency indicators of energy-intensive industries in Taiwan. • The carbon dioxide emission intensity in selected energy-intensive industries is also evaluated in this study. • The obtained energy efficiency indicators can serve as a base case for comparison with other regions of the world. • The analysis results can serve as a benchmark for the selected energy-intensive industries. - Abstract: Taiwan imports approximately 97.9% of its primary energy, as rapid economic development has significantly increased energy and electricity demands. Increased energy efficiency is necessary for industry to comply with energy-efficiency indicators and benchmarking. Benchmarking is applied in this work as an analytical tool to estimate the energy-efficiency indicators of major energy-intensive industries in Taiwan and then compare them with those of other regions of the world. In addition, the carbon dioxide emission intensities in the iron and steel, chemical, cement, textile and pulp and paper industries are evaluated in this study. In the iron and steel industry, the energy improvement potential of the blast furnace–basic oxygen furnace (BF–BOF) route based on BPT (best practice technology) is about 28%. Between 2007 and 2011, the average specific energy consumption (SEC) of styrene monomer (SM), purified terephthalic acid (PTA) and low-density polyethylene (LDPE) was 9.6 GJ/ton, 5.3 GJ/ton and 9.1 GJ/ton, respectively. The energy efficiency of pulping would be improved by 33% if BAT (best available technology) were applied. The analysis results can serve as a benchmark for these industries and as a base case for stimulating changes aimed at more efficient energy utilization

  14. Functional results of endoscopic laser surgery in advanced head and neck tumors

    Science.gov (United States)

    Sadick, Haneen; Baker-Schreyer, Antonio; Bergler, Wolfgang; Maurer, Joachim; Hoermann, Karl

    1998-01-01

    Functional results following lasersurgery of minor laryngeal carcinomas were very encouraging. The indication for lasersurgical intervention was then extended to larger carcinomas of the larynx and hypopharynx. The purpose of this study was to assess vocal function and swallowing ability after endoscopic lasersurgery and to compare the results with conventional surgical procedures. From January 1994 to December 1996, 72 patients with advanced squamous cell carcinoma of the larynx and hypopharynx were examined prospectively. The patients underwent endoscopic lasersurgery instead of laryngopharyngectomy. Voice quality was evaluated pre- and postoperatively by subjective assessment, registration of voice parameters and sonographic classification. Swallowing ability was judged according to individual scores. The necessity of tracheostomy and nasogastric tube was registered and the duration of hospitalization was documented. The results showed that laryngeal phonation and swallowing ability were significantly better 12 months after lasersurgery compared to the preoperative findings, whereas the recurrence rate was similar to or even better than after conventional pharyngolaryngectomy. Lasersurgery as an alternative surgical procedure to laryngectomy enables patients to retain sufficient voice function and swallowing ability.

  15. Velocity-Autocorrelation Function in Liquids, Deduced from Neutron Incoherent Scattering Results

    DEFF Research Database (Denmark)

    Carneiro, Kim

    1976-01-01

    The Fourier transform p(ω) of the velocity-autocorrelation function is derived from neutron incoherent scattering results obtained from the two liquids Ar and H2. The quality and significance of the results are discussed with special emphasis on the long-time t^(-3/2) tail found in computer simulations and recent theories. The available experimental data from Na, Ar, and H2 close to their normal melting points are consistent with calculations which take into account the contribution to p(ω) from the tail at low frequencies.
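
    As an illustration of the quantity involved, the sketch below (Python, with a synthetic Langevin velocity trace standing in for the neutron-derived data) estimates a velocity-autocorrelation function and takes its Fourier transform p(ω).

        import numpy as np

        # Sketch: estimate the velocity-autocorrelation function (VACF) from a
        # velocity time series and take its Fourier transform p(omega). The trace
        # here is a synthetic Langevin process, not neutron data.
        rng = np.random.default_rng(0)
        dt, n = 0.01, 2**12
        v = np.zeros(n)
        for i in range(1, n):                      # simple Langevin integration
            v[i] = v[i-1] - 1.0 * v[i-1] * dt + np.sqrt(dt) * rng.normal()

        vacf = np.correlate(v, v, mode="full")[n-1:] / np.arange(n, 0, -1)
        vacf /= vacf[0]                            # normalize so VACF(0) = 1
        p_omega = np.abs(np.fft.rfft(vacf)) * dt   # spectrum of the VACF
        omega = 2 * np.pi * np.fft.rfftfreq(n, dt)
        print(omega[:3], p_omega[:3])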

  16. Tailoring a psychophysical discrimination experiment upon assessment of the psychometric function: Predictions and results

    Science.gov (United States)

    Vilardi, Andrea; Tabarelli, Davide; Ricci, Leonardo

    2015-02-01

    Decision making is a widespread research topic and plays a crucial role in neuroscience as well as in other research and application fields of, for example, biology, medicine and economics. The most basic implementation of decision making, namely binary discrimination, is successfully interpreted by means of signal detection theory (SDT), a statistical model that is deeply linked to physics. An additional, widespread tool to investigate discrimination ability is the psychometric function, which measures the probability of a given response as a function of the magnitude of a physical quantity underlying the stimulus. However, the link between psychometric functions and binary discrimination experiments is often neglected or misinterpreted. The aim of the present paper is to provide a detailed description of an experimental investigation of a prototypical discrimination task and to discuss the results in terms of SDT. To this purpose, we provide an outline of the theory and describe the implementation of two behavioural experiments in the visual modality: upon assessment of the so-called psychometric function, we show how to tailor a binary discrimination experiment on performance and decisional bias, and how to measure these quantities on a statistical basis. Attention is devoted to the evaluation of uncertainties, an aspect that is often overlooked in the scientific literature.
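
    A minimal sketch of the assessment step described: fitting a logistic psychometric function to binary-response proportions (the data points below are invented) to extract a bias and a slope parameter, from which a discrimination experiment can then be tailored.

        import numpy as np
        from scipy.optimize import curve_fit

        # Sketch: fit a logistic psychometric function to illustrative data.
        x = np.array([-3., -2., -1., 0., 1., 2., 3.])      # stimulus difference
        p = np.array([0.05, 0.10, 0.30, 0.55, 0.75, 0.90, 0.97])  # "greater" rate

        def psychometric(x, mu, sigma):
            # mu: point of subjective equality (bias); sigma: slope/threshold scale
            return 1.0 / (1.0 + np.exp(-(x - mu) / sigma))

        (mu, sigma), _ = curve_fit(psychometric, x, p, p0=(0.0, 1.0))
        print(f"bias mu = {mu:.2f}, slope parameter sigma = {sigma:.2f}")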

  17. Treatment of proximal humeral fractures using anatomical locking plate: correlation of functional and radiographic results

    Directory of Open Access Journals (Sweden)

    Antonio Carlos Tenor Junior

    2016-06-01

    OBJECTIVE: To correlate the functional outcomes and radiographic indices of proximal humerus fractures treated using an anatomical locking plate for the proximal humerus. METHODS: Thirty-nine patients with fractures of the proximal humerus who had been treated using an anatomical locking plate were assessed after a mean follow-up of 27 months. These patients were assessed using the University of California Los Angeles (UCLA) score, and their range of motion was evaluated using the method of the American Academy of Orthopedic Surgeons on the operated shoulder, with comparative radiographs of both shoulders. The correlation between radiographic measurements and functional outcomes was established. RESULTS: We found that 64% of the results were good or excellent according to the UCLA score, with the following means: elevation of 124°; lateral rotation of 44°; and medial rotation of thumb to T9. The type of fracture according to Neer's classification and the patient's age correlated significantly with the range of motion: the greater the number of fracture parts and the older the patient, the worse the results. Elevation and UCLA score were associated with the anatomical neck-shaft angle in the anteroposterior view; fractures fixed with varus deviations greater than 15° showed the worst results (p < 0.001). CONCLUSION: The variation in the neck-shaft angle measurements in the anteroposterior view showed a significant correlation with the range of motion; varus deviations greater than 15° were not well tolerated. This parameter may be one of the predictors of functional results from proximal humerus fractures treated using a locking plate.

  18. COMPARISON OF THERMOELASTIC RESULTS IN TWO TYPES OF FUNCTIONALLY GRADED BRAKE DISCS

    Directory of Open Access Journals (Sweden)

    Z.N. Ismarrubie

    2012-06-01

    A thermoelastic simulation of functionally graded (FG) brake discs is performed using the finite element (FE) package ANSYS. The material properties of the two types of FG brake discs are assumed to vary in the radial and thickness directions, respectively, according to a power-law distribution. The brake discs are in contact with a single hollow pad disc of uniform material. Dry contact friction is considered as the heat source. The proper thicknesses of the pad discs are found to ensure full-contact status. The thermoelastic results for the thickness-graded and radially graded brake discs are compared. The results show that the behaviour of temperature and vertical displacement in these two types of FG brake discs is the same. However, the variations of radial displacement for different grading indices are not the same; the behaviour of the other results is quite similar. Thus, it can be concluded that the grading direction of the material properties in FG brake discs can affect the results.
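
    The power-law grading rule referred to can be illustrated with a short Python sketch; the property values, thickness and grading indices below are invented, not taken from the paper.

        import numpy as np

        # Sketch of a power-law grading rule often used for FG materials: a
        # property P varies from P_m at z=0 to P_c at z=h as
        #   P(z) = (P_c - P_m) * (z/h)**n + P_m
        P_m, P_c, h = 204.0, 70.0, 0.02   # endpoint values and thickness (illustrative)

        def graded_property(z, n):
            return (P_c - P_m) * (z / h)**n + P_m

        for n in (0.5, 1.0, 2.0):         # grading index shapes the profile
            profile = [round(graded_property(z, n), 1) for z in np.linspace(0, h, 5)]
            print(n, profile)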

  19. The effect of functional status of the ovaries on the embryological results of controlled ovarian hyperstimulation

    Directory of Open Access Journals (Sweden)

    Grzegorz Mrugacz

    2013-12-01

    Introduction: Controlled ovarian hyperstimulation is an integral part of infertility treatment. Despite many years of use, some aspects of controlled ovarian stimulation have not yet been clarified, especially the role of the functional status of the ovaries before hormonal stimulation. Aim of the research: To assess the effect of the functional status of the ovaries on the embryological results of controlled ovarian hyperstimulation. Material and methods: The retrospective study included female patients treated for infertility. The patients were divided into two groups depending on the ultrasonographic appearance of the ovaries before controlled ovarian hyperstimulation. Patients with small antral follicles 0.05. The numbers of A, C and D quality embryos were comparable between the groups (p > 0.05). There were more B quality embryos in group I than in group II (p > 0.05). The embryo growth rate was significantly faster in group I than in group II. Conclusions: The results of the present study indicate that the functional status of the ovaries before controlled ovarian hyperstimulation plays a pivotal role in treatment outcome.

  20. An enhanced RNA alignment benchmark for sequence alignment programs

    Directory of Open Access Journals (Sweden)

    Steger Gerhard

    2006-10-01

    Abstract Background The performance of alignment programs is traditionally tested on sets of protein sequences for which a reference alignment is known. Conclusions drawn from such protein benchmarks do not necessarily hold for the RNA alignment problem, as was demonstrated in the first RNA alignment benchmark published so far. For example, the twilight zone – the similarity range where alignment quality drops drastically – starts at 60% for RNAs, compared to 20% for proteins. In this study we enhance the previous benchmark. Results The RNA sequence sets in the benchmark database are taken from an increased number of RNA families to avoid unintended impact of using only a few families. The size of the sets varies from 2 to 15 sequences to assess the influence of the number of sequences on program performance. Alignment quality is scored by two measures: one takes into account only nucleotide matches, the other measures structural conservation. The performance order of parameters – like nucleotide substitution matrices and gap costs – as well as of programs is rated by rank tests. Conclusion Most sequence alignment programs perform equally well on RNA sequence sets with high sequence identity, that is, with an average pairwise sequence identity (APSI) above 75%. Parameters for gap-open and gap-extension costs have a large influence on alignment quality below an APSI of 75%; optimal parameter combinations are shown for several programs. The use of different 4 × 4 substitution matrices improved program performance only in some cases. The performance of iterative programs increases drastically with increasing sequence numbers and/or decreasing sequence identity, which makes them clearly superior to programs using a purely non-iterative, progressive approach. The best sequence alignment programs produce alignments of high quality down to an APSI of 55%; at lower APSI the use of sequence+structure alignment programs is recommended.
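
    A small sketch of the APSI measure used to bin the benchmark sets (Python, with toy aligned sequences rather than the benchmark's RNA families):

        from itertools import combinations

        # Sketch: average pairwise sequence identity (APSI) for an aligned set.
        alignment = ["GCCA-UGGC", "GCCAAUGGC", "GCGA-UGGC"]

        def pairwise_identity(a, b):
            # count matching columns, ignoring positions where both have a gap
            cols = [(x, y) for x, y in zip(a, b) if not (x == "-" and y == "-")]
            return sum(x == y for x, y in cols) / len(cols)

        pairs = list(combinations(alignment, 2))
        apsi = sum(pairwise_identity(a, b) for a, b in pairs) / len(pairs)
        print(f"APSI = {apsi:.1%}")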

  1. SparseBeads data: benchmarking sparsity-regularized computed tomography

    Science.gov (United States)

    Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.

    2017-12-01

    Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice, and how this number may depend on the image, remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity but does not cover CT; however, empirical results suggest a similar connection. The present work establishes, for real CT data, a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes, as well as mixtures, were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, numbers of projections and noise levels, allowing the systematic assessment of parameters affecting the performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of the number of projections and gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on the expected sample sparsity level, as an aid in planning dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, foams, as well as non-destructive testing and metrology. For samples of other characteristics, the proposed methodology may be used to investigate similar relations.
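
    A minimal sketch of the gradient-sparsity measure underlying the guideline (Python, on a toy piecewise-constant image rather than the SparseBeads scans):

        import numpy as np

        # Sketch: gradient sparsity = number of non-zero entries in the discrete
        # gradient of an image, the quantity related here to the critical number
        # of projections for TV-regularized reconstruction.
        img = np.zeros((64, 64))
        img[16:48, 16:48] = 1.0                      # one "bead": piecewise-constant

        gx = np.diff(img, axis=0, append=img[-1:, :])
        gy = np.diff(img, axis=1, append=img[:, -1:])
        grad_mag = np.hypot(gx, gy)
        sparsity = np.count_nonzero(grad_mag > 1e-12)
        print(f"gradient sparsity: {sparsity} of {img.size} pixels")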

  2. A resource for benchmarking the usefulness of protein structure models.

    KAUST Repository

    Carbajo, Daniel

    2012-08-02

    BACKGROUND: Increasingly, biologists and biochemists use computational tools to design experiments to probe the function of proteins and/or to engineer them for a variety of different purposes. The most effective strategies rely on knowledge of the three-dimensional structure of the protein of interest. However, it is often the case that an experimental structure is not available and that models of different quality are used instead. On the other hand, the relationship between the quality of a model and its appropriate use is not easy to derive in general, and so far it has been analyzed in detail only for specific applications. RESULTS: This paper describes a database and related software tools that allow testing of a given structure-based method on models of a protein representing different levels of accuracy. Comparing the results of a computational experiment on the experimental structure and on a set of its decoy models allows developers and users to assess the specific threshold of accuracy required to perform the task effectively. CONCLUSIONS: The ModelDB server automatically builds decoy models of different accuracy for a given protein of known structure and provides a set of useful tools for their analysis. Pre-computed data for a non-redundant set of deposited protein structures are available for analysis and download in the ModelDB database. IMPLEMENTATION, AVAILABILITY AND REQUIREMENTS: Project name: A resource for benchmarking the usefulness of protein structure models. Project home page: http://bl210.caspur.it/MODEL-DB/MODEL-DB_web/MODindex.php. Operating system(s): Platform independent. Programming language: Perl-BioPerl (program); mySQL, Perl DBI and DBD modules (database); php, JavaScript, Jmol scripting (web server). Other requirements: Java Runtime Environment v1.4 or later, Perl, BioPerl, CPAN modules, HHsearch, Modeller, LGA, NCBI Blast package, DSSP, Speedfill (Surfnet) and PSAIA. License: Free. Any restrictions to use by

  3. First benchmark of the Unstructured Grid Adaptation Working Group

    Science.gov (United States)

    Ibanez, Daniel; Barral, Nicolas; Krakos, Joshua; Loseille, Adrien; Michal, Todd; Park, Mike

    2017-01-01

    Unstructured grid adaptation is a technology that holds the potential to improve the automation and accuracy of computational fluid dynamics and other computational disciplines. Difficulty producing the highly anisotropic elements necessary for simulation on complex curved geometries while satisfying a resolution request has limited this technology's widespread adoption. The Unstructured Grid Adaptation Working Group is an open gathering of researchers working on adapting simplicial meshes to conform to a metric field. Current members span a wide range of institutions, including academia, industry, and national laboratories. The purpose of this group is to create a common basis for understanding and improving mesh adaptation. We present our first major contribution: a common set of benchmark cases, including input meshes and analytic metric specifications, that are publicly available to be used for evaluating any mesh adaptation code. We also present the results of several existing codes on these benchmark cases, to illustrate their utility in identifying key challenges common to all codes and important differences between available codes. Future directions are defined to expand this benchmark to mature the technology necessary to impact practical simulation workflows.

  4. Sensitivity Analysis of OECD Benchmark Tests in BISON

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON fuels performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
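
    A minimal sketch of the correlation part of such a study (Python; a toy analytic function stands in for a BISON run, and the three inputs are made up):

        import numpy as np
        from scipy.stats import pearsonr, spearmanr

        # Sketch: sample inputs, run a stand-in model, and rank inputs by their
        # Pearson/Spearman correlation with the response.
        rng = np.random.default_rng(1)
        n = 300                                    # same sample count as the study
        X = rng.uniform(0.0, 1.0, size=(n, 3))     # three made-up input parameters

        def toy_model(x):                          # stand-in for a code run
            return 3.0 * x[0] + x[1]**2 + 0.1 * rng.normal()

        y = np.apply_along_axis(toy_model, 1, X)
        for j in range(X.shape[1]):
            r, _ = pearsonr(X[:, j], y)
            rho, _ = spearmanr(X[:, j], y)
            print(f"input {j}: Pearson {r:+.2f}, Spearman {rho:+.2f}")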

  5. Academic Productivity in Psychiatry: Benchmarks for the H-Index.

    Science.gov (United States)

    MacMaster, Frank P; Swansburg, Rose; Rittenbach, Katherine

    2017-08-01

    Bibliometrics, the statistical analysis of publications aimed at evaluating their impact, plays an increasingly critical role in the assessment of faculty for promotion and merit increases. The objective of this study is to describe h-index and citation benchmarks in academic psychiatry. Faculty lists were acquired from online resources for all academic departments of psychiatry listed as having residency training programs in Canada (as of June 2016). Potential authors were then searched on Web of Science (Thomson Reuters) for their corresponding h-index and total number of citations. The sample included 1683 faculty members in academic psychiatry departments. Restricting the sample to those with a rank of assistant, associate, or full professor left 1601 faculty members (assistant = 911, associate = 387, full = 303). The h-index and total citations differed significantly by academic rank: both were highest for full professors, followed by associate and then assistant professors. The range within each rank, however, was large. This study provides initial benchmarks for the h-index and total citations in academic psychiatry. Regardless of any controversies or criticisms of bibliometrics, they increasingly influence promotion, merit increases, and grant support. As such, benchmarking by specialty is needed in order to provide the needed context.
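
    For reference, the h-index itself is simple to compute from per-paper citation counts; the counts below are invented for illustration.

        # Sketch: h = the largest h such that h papers have at least h citations.
        def h_index(citations):
            cites = sorted(citations, reverse=True)
            h = 0
            for rank, c in enumerate(cites, start=1):
                if c >= rank:
                    h = rank          # this paper still clears the threshold
                else:
                    break
            return h

        print(h_index([42, 18, 11, 9, 5, 5, 3, 1, 0]))  # -> 5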

  6. [Benchmarking in patient identification: An opportunity to learn].

    Science.gov (United States)

    Salazar-de-la-Guerra, R M; Santotomás-Pajarrón, A; González-Prieto, V; Menéndez-Fraga, M D; Rocha Hurtado, C

    To perform benchmarking on the safe identification of hospital patients involved in the "Club de las tres C" (Calidez, Calidad y Cuidados) in order to prepare a common procedure for this process. A descriptive study was conducted on the patient identification process in palliative care and stroke units in 5 medium-stay hospitals. The following steps were carried out: data collection from each hospital; organisation and data analysis; and preparation of a common procedure for this process. The data obtained for the safe identification of all stroke patients were: hospital 1 (93%), hospital 2 (93.1%), hospital 3 (100%), and hospital 5 (93.4%); and for the palliative care process: hospital 1 (93%), hospital 2 (92.3%), hospital 3 (92%), hospital 4 (98.3%), and hospital 5 (85.2%). The aim of the study was accomplished successfully: benchmarking activities were developed and knowledge on the patient identification process was shared. All hospitals had good results, with hospital 3 performing best in the stroke identification process. Benchmarking identification practices is difficult, but a useful common procedure collecting the best practices of the 5 hospitals has been identified. Copyright © 2017 SECA. Published by Elsevier España, S.L.U. All rights reserved.

  7. Nonlinear viscoplasticity in ASPECT: benchmarking and applications to subduction

    Directory of Open Access Journals (Sweden)

    A. Glerum

    2018-03-01

    ASPECT (Advanced Solver for Problems in Earth's ConvecTion) is a massively parallel finite element code originally designed for modeling thermal convection in the mantle with a Newtonian rheology. The code is characterized by modern numerical methods, high-performance parallelism and extensibility. This last characteristic is illustrated in this work: we have extended the use of ASPECT from global thermal convection modeling to upper-mantle-scale applications of subduction. Subduction modeling generally requires the tracking of multiple materials with different properties and with nonlinear viscous and viscoplastic rheologies. To this end, we implemented a frictional plasticity criterion that is combined with a viscous diffusion and dislocation creep rheology. Because ASPECT uses compositional fields to represent different materials, all material parameters are made dependent on a user-specified number of fields. The goal of this paper is primarily to describe and verify our implementations of complex, multi-material rheology by reproducing the results of four well-known two-dimensional benchmarks: the indentor benchmark, the brick experiment, the sandbox experiment and the slab detachment benchmark. Furthermore, we aim to provide hands-on examples for prospective users by demonstrating the use of multi-material viscoplasticity with three-dimensional, thermomechanical models of oceanic subduction, putting ASPECT on the map as a community code for high-resolution, nonlinear rheology subduction modeling.
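
    A minimal sketch of a composite viscoplastic rheology of the kind described, with a creep viscosity capped by a Drucker-Prager yield stress (Python; the constants and the single-mechanism creep law are illustrative, not ASPECT input values):

        import numpy as np

        # Sketch: effective viscosity = min(creep viscosity, plastic "viscosity"
        # that caps the stress at the Drucker-Prager yield stress).
        def effective_viscosity(strain_rate, pressure, A=1e-15, n=3.5,
                                cohesion=20e6, friction_angle=np.deg2rad(30)):
            # dislocation-creep viscosity: eta = 0.5 * A**(-1/n) * edot**((1-n)/n)
            eta_creep = 0.5 * A**(-1.0 / n) * strain_rate**((1.0 - n) / n)
            yield_stress = (cohesion * np.cos(friction_angle)
                            + pressure * np.sin(friction_angle))
            eta_plastic = yield_stress / (2.0 * strain_rate)
            return min(eta_creep, eta_plastic)

        print(f"{effective_viscosity(1e-14, 1e9):.3e} Pa s")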

  8. Pre-evaluation of fusion shielding benchmark experiment

    International Nuclear Information System (INIS)

    Hayashi, K.; Handa, H.; Konno, C.

    1994-01-01

    Shielding benchmark experiments are very useful to test design codes and nuclear data for fusion devices. There are many types of benchmark experiments that should be done for fusion shielding problems, but time and budget are limited; it is therefore important to select and determine effective experimental configurations by pre-calculation before the experiment. The authors performed three types of pre-evaluation to determine the experimental assembly configurations of the shielding benchmark experiments planned at FNS, JAERI. (1) Void Effect Experiment - The purpose of this experiment is to measure the local increase of dose and nuclear heating behind small void(s) in shield material. The dimensions of the voids and their arrangement were decided as follows: dose and nuclear heating were calculated both with and without void(s), and the minimum size of the void was determined so that the ratio of these two results is larger than the error of the measurement system. (2) Auxiliary Shield Experiment - The purpose of this experiment is to measure the shielding properties of B4C, Pb, W, and the dose around a superconducting magnet (SCM). The thicknesses of B4C, Pb and W and their arrangement, including multilayer configurations, were determined. (3) SCM Nuclear Heating Experiment - The purpose of this experiment is to measure nuclear heating and dose distribution in SCM material. Because it is difficult to use liquid helium as part of the SCM mock-up material, material compositions of the SCM mock-up were surveyed to obtain a nuclear heating property similar to the real SCM composition.

  9. Clinical and functional outcome of the Thrust Plate Prosthesis: short- and medium-term results.

    Science.gov (United States)

    Steens, W; Rosenbaum, D; Goetze, C; Gosheger, G; van den Daele, R; Steinbeck, J

    2003-08-01

    The purpose of this study was to objectively assess the functional outcome after implantation of a Thrust Plate Prosthesis. This retrospective study compared the gait patterns of 33 patients to a control group. Few studies have been published about this type of prosthesis describing clinical and radiographic outcome, and even though evaluation of the functional outcome is a commonly accepted way to measure the success of an implant, it has not been reported in previous studies. Besides clinical (SF-36 and Harris Hip Score) and radiographic evaluation, subjects were examined by three-dimensional gait analysis and surface electromyography of seven leg and trunk muscles bilaterally. The average Harris Hip Score was 85.7 points, and the SF-36 differed significantly from controls only regarding physical functioning. The radiography showed considerable radiolucencies under the thrust plate. Kinematic parameters indicated a slight impairment of the operated limb. The analysis revealed a decreased hip (28.2%) and knee (51.2%) range of motion during gait. The joint moments on the operated side were reduced in hip (72%) and knee abduction (59%) in comparison to controls. The average electromyographic parameters indicated a significantly higher mean and peak amplitude of the tensor fasciae latae (mean 56%, peak 54%) and gluteus medius (mean 33%, peak 21%), and a lower peak activity of the gluteus maximus (19%). The results indicate a generally good functional outcome, even though slightly asymmetrical loading was observed. No major limitations in physical functioning or health-related quality of life were seen. The radiographic signs of loosening might indicate difficulties in achieving the proximal load transfer of this implant. The data provided in this study may serve to establish the Thrust Plate Prosthesis as an alternative procedure in total hip replacement in younger patients.

  10. Computational Benchmark for Estimation of Reactivity Margin from Fission Products and Minor Actinides in PWR Burnup Credit

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, J.C.

    2001-08-02

    This report proposes and documents a computational benchmark problem for the estimation of the additional reactivity margin available in spent nuclear fuel (SNF) from fission products and minor actinides in a burnup-credit storage/transport environment, relative to SNF compositions containing only the major actinides. The benchmark problem/configuration is a generic burnup credit cask designed to hold 32 pressurized water reactor (PWR) assemblies. The purpose of this computational benchmark is to provide a reference configuration for the estimation of the additional reactivity margin, which is encouraged in the U.S. Nuclear Regulatory Commission (NRC) guidance for partial burnup credit (ISG8), and document reference estimations of the additional reactivity margin as a function of initial enrichment, burnup, and cooling time. Consequently, the geometry and material specifications are provided in sufficient detail to enable independent evaluations. Estimates of additional reactivity margin for this reference configuration may be compared to those of similar burnup-credit casks to provide an indication of the validity of design-specific estimates of fission-product margin. The reference solutions were generated with the SAS2H-depletion and CSAS25-criticality sequences of the SCALE 4.4a package. Although the SAS2H and CSAS25 sequences have been extensively validated elsewhere, the reference solutions are not directly or indirectly based on experimental results. Consequently, this computational benchmark cannot be used to satisfy the ANS 8.1 requirements for validation of calculational methods and is not intended to be used to establish biases for burnup credit analyses.

  11. Randomized benchmarking and process tomography for gate errors in a solid-state qubit.

    Science.gov (United States)

    Chow, J M; Gambetta, J M; Tornberg, L; Koch, Jens; Bishop, Lev S; Houck, A A; Johnson, B R; Frunzio, L; Girvin, S M; Schoelkopf, R J

    2009-03-06

    We present measurements of single-qubit gate errors for a superconducting qubit. Results from quantum process tomography and randomized benchmarking are compared with gate errors obtained from a double π-pulse experiment. Randomized benchmarking reveals a minimum average gate error of 1.1 ± 0.3% and a simple exponential dependence of fidelity on the number of gates. It shows that the limits on gate fidelity are primarily imposed by qubit decoherence, in agreement with theory.
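
    A minimal sketch of how an average gate error is extracted from randomized-benchmarking data by fitting the standard decay F(m) = A·p^m + B (Python, with synthetic sequence fidelities rather than the measured ones):

        import numpy as np
        from scipy.optimize import curve_fit

        # Sketch: fit the exponential fidelity decay and convert the depolarizing
        # parameter p to an average gate error for a single qubit.
        m = np.arange(1, 200, 10)                   # number of gates per sequence
        rng = np.random.default_rng(2)
        F = 0.5 * 0.978**m + 0.5 + 0.005 * rng.normal(size=m.size)  # synthetic data

        def decay(m, A, p, B):
            return A * p**m + B

        (A, p, B), _ = curve_fit(decay, m, F, p0=(0.5, 0.97, 0.5))
        avg_gate_error = (1 - p) / 2                # single-qubit relation r=(1-p)(1-1/d)
        print(f"p = {p:.4f}, average gate error = {avg_gate_error:.4f}")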

  12. Development of an ICSBEP Benchmark Evaluation, Nearly 20 Years of Experience

    International Nuclear Information System (INIS)

    Briggs, J. Blair; Bess, John D.

    2011-01-01

    The basic structure of all ICSBEP benchmark evaluations is essentially the same and includes (1) a detailed description of the experiment; (2) an evaluation of the experiment, including an exhaustive effort to quantify the effects of uncertainties on measured quantities; (3) a concise presentation of benchmark-model specifications; (4) sample calculation results; and (5) a summary of experimental references. Computer code input listings and other relevant information are generally preserved in appendixes. The details of an ICSBEP evaluation are presented.

  13. Systematic benchmarking of microarray data classification: assessing the role of non-linearity and dimensionality reduction

    OpenAIRE

    Pochet, Nathalie; De Smet, Frank; Suykens, Johan; De Moor, Bart

    2004-01-01

    MOTIVATION: Microarrays are capable of determining the expression levels of thousands of genes simultaneously. In combination with classification methods, this technology can be useful to support clinical management decisions for individual patients, e.g. in oncology. The aim of this paper is to systematically benchmark the role of non-linear versus linear techniques and dimensionality reduction methods. RESULTS: A systematic benchmarking study is performed by comparing linear versions of sta...

  14. Frontal Anterior Laryngectomy with Epiglottic Reconstruction (Tucker’s Operation): Oncologic and Functional Results

    Directory of Open Access Journals (Sweden)

    Recep Yağız

    2012-03-01

    Objective: To evaluate the functional and oncological results of patients treated with frontal anterior laryngectomy with epiglottic reconstruction (Tucker’s operation). Material and Methods: From September 1985 to November 2009, 58 patients with early glottic carcinoma underwent Tucker’s operation. The times to decannulation, nasogastric tube removal and discharge, as well as the oncological results, were analyzed. Acoustic analysis and the Voice Handicap Index (VHI) were used to evaluate vocal function. Results: The mean times to decannulation and nasogastric tube removal were 11.8±7.6 and 15.4±4.4 days, respectively. The mean duration of hospital stay was 19.3±6.1 days. Early decannulation significantly reduced hospitalization time. The 5-year overall and cause-specific actuarial survival rates were 81.5% and 96.9%, respectively; the 10-year rates were 67% and 92.3%, respectively. The 5-year local and nodal control rates were 95.4% and 95.2%, respectively. The mean values for jitter, shimmer and noise-to-harmonic ratio were 8.10±5.59%, 16.60±5.81% and 0.51±0.23, respectively; these scores showed a significant increase. The total VHI score and all subscale scores except VHI-emotional indicated that patients had a mild level of vocal disability. Conclusion: Tucker’s operation is one of the preferred techniques in the treatment of early glottic carcinoma, with a high oncologic success rate and satisfactory functional results.

  15. Prospective study comparing laparoscopic and open adenomectomy: Surgical and functional results.

    Science.gov (United States)

    Garcia-Segui, A; Angulo, J C

    Open adenomectomy (OA) is the surgery of choice for large-volume benign prostatic hyperplasia, and laparoscopic adenomectomy (LA) represents a minimally invasive alternative. We present a long-term, prospective study comparing both techniques. The study consecutively included 199 patients with benign prostatic hyperplasia and prostate volumes > 80 g who were followed for more than 12 months. The patients underwent OA (n=97) or LA (n=102). We recorded and compared demographic and perioperative data, functional results and complications using a descriptive statistical analysis. The mean age was 69.2±7.7 years (range 42-87), and the mean prostate volume (measured by TRUS) was 112.1±32.7 mL (range 78-260). There were no baseline differences between the groups in terms of age, ASA scale, prostate volume, PSA levels, Qmax, IPSS, QoL or treatments prior to the surgery. The surgical time (P<.0001) and catheter time (P<.0002) were longer in the LA group. Operative bleeding (P<.0001), transfusion rate (P=.0015) and mean stay (P<.0001) were significantly lower in the LA group. The LA group had a lower rate of complications (P=.04), but there were no significant differences between the groups in terms of major complications (Clavien score ≥ 3) (P=.13) or in the rate of late complications (at one year) (P=.66). There were also no differences between the groups in the functional postoperative results: IPSS (P=.17), QoL (P=.3) and Qmax (P=.17). LA is a reasonable, safe and effective alternative that results in less bleeding, fewer transfusions, shorter hospital stays and lower morbidity than OA. LA has functional results similar to OA, at the expense of longer surgical and catheter times. Copyright © 2016 AEU. Published by Elsevier España, S.L.U. All rights reserved.

  16. Benchmarking European Gas Transmission System Operators

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter; Trinkner, Urs

    This is the final report for the pan-European efficiency benchmarking of gas transmission system operations commissioned by the Netherlands Authority for Consumers and Markets (ACM), Den Haag, on behalf of the Council of European Energy Regulators (CEER) under the supervision of the authors....

  17. Benchmarking 2009: Trends in Education Philanthropy

    Science.gov (United States)

    Bearman, Jessica; Kilgore, Gin

    2009-01-01

    "Benchmarking 2009: Trends in Education Philanthropy" is Grantmakers for Education's (GFE) second annual study of grantmaking trends and priorities among members. As a national network dedicated to improving education outcomes through philanthropy, GFE members are mindful of their role in fostering greater knowledge in the field. They believe it's…

  18. Resolution for the Loviisa benchmark problem

    International Nuclear Information System (INIS)

    Garcia, C.R.; Quintero, R.; Milian, D.

    1992-01-01

    In the present paper, the Loviisa benchmark problem for cycles 11 and 8 of reactor blocks 1 and 2 of the Loviisa NPP is calculated. This problem uses low-leakage reload patterns and was posed at the second thematic group of the TIC meeting held in Rheinsberg, GDR, in March 1989. The SPPS-1 coarse-mesh code was used for the calculations.

  19. Parton Distribution Benchmarking with LHC Data

    NARCIS (Netherlands)

    Ball, Richard D.; Carrazza, Stefano; Debbio, Luigi Del; Forte, Stefano; Gao, Jun; Hartland, Nathan; Huston, Joey; Nadolsky, Pavel; Rojo, Juan; Stump, Daniel; Thorne, Robert S.; Yuan, C. -P.

    2012-01-01

    We present a detailed comparison of the most recent sets of NNLO PDFs from the ABM, CT, HERAPDF, MSTW and NNPDF collaborations. We compare parton distributions at low and high scales and parton luminosities relevant for LHC phenomenology. We study the PDF dependence of LHC benchmark inclusive cross

  20. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias

    2016-09-16

    In this paper, we propose a new aerial video dataset and benchmark for low-altitude UAV target tracking, as well as a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking, both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as to generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs. (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx.) © Springer International Publishing AG 2016.
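
    A minimal sketch of the overlap-based success measure commonly used to score trackers in benchmarks of this kind (Python; the boxes are invented (x, y, w, h) rectangles):

        # Sketch: intersection-over-union (IoU) and a success rate at threshold 0.5.
        def iou(box_a, box_b):
            ax, ay, aw, ah = box_a
            bx, by, bw, bh = box_b
            ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
            iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
            inter = ix * iy
            union = aw * ah + bw * bh - inter
            return inter / union if union > 0 else 0.0

        predictions  = [(10, 10, 50, 80), (12, 14, 50, 80)]
        ground_truth = [(11, 12, 48, 82), (40, 40, 50, 80)]
        hits = sum(iou(p, g) > 0.5 for p, g in zip(predictions, ground_truth))
        print(f"success rate at IoU 0.5: {hits / len(predictions):.0%}")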

  1. Why and How to Benchmark XML Databases

    NARCIS (Netherlands)

    A.R. Schmidt; F. Waas; M.L. Kersten (Martin); D. Florescu; M.J. Carey; I. Manolescu; R. Busse

    2001-01-01

    textabstractBenchmarks belong to the very standard repertory of tools deployed in database development. Assessing the capabilities of a system, analyzing actual and potential bottlenecks, and, naturally, comparing the pros and cons of different systems architectures have become indispensable tasks

  2. Prague texture segmentation data generator and benchmark

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal

    2006-01-01

    Roč. 2006, č. 64 (2006), s. 67-68 ISSN 0926-4981 R&D Projects: GA MŠk(CZ) 1M0572; GA AV ČR(CZ) 1ET400750407; GA AV ČR IAA2075302 Institutional research plan: CEZ:AV0Z10750506 Keywords : image segmentation * texture * benchmark * web Subject RIV: BD - Theory of Information

  3. Operational benchmarking of Japanese and Danish hospitals

    DEFF Research Database (Denmark)

    Traberg, Andreas; Itoh, Kenji; Jacobsen, Peter

    2010-01-01

    This benchmarking model is designed as an integration of three organizational dimensions suited to the healthcare sector. The model incorporates posterior operational indicators and evaluates performance upon aggregation. The model is tested on seven cases from Japan and Denmark. Japanese

  4. Benchmarks in Tacit Knowledge Skills Instruction

    DEFF Research Database (Denmark)

    Tackney, Charles T.; Strömgren, Ole; Sato, Toyoko

    2006-01-01

    experience more empowering of essential tacit knowledge skills than that found in educational institutions in other national settings. We specify the program forms and procedures for consensus-based governance and group work (as benchmarks) that demonstrably instruct undergraduates in the tacit skill dimensions of knowledge thought to be essential for success following graduation.

  5. Determination of Benchmarks Stability within Ahmadu Bello ...

    African Journals Online (AJOL)

    Heights of six geodetic benchmarks over a total distance of 8.6 km at Ahmadu Bello University (ABU), Zaria, Nigeria were recomputed and analysed using the least squares adjustment technique. The network computations were tied to two fixed primary reference pillars situated outside the campus. The two-tail Chi-square ...

  6. Benchmarking and performance management in health care

    OpenAIRE

    Buttigieg, Sandra; EHMA Annual Conference: Public Health Care: Who Pays, Who Provides?

    2012-01-01

    Current economic conditions challenge health care providers globally. Healthcare organizations need to deliver optimal financial, operational, and clinical performance to sustain quality of service delivery. Benchmarking is one of the most potent and under-utilized management tools available and an analytic tool to understand organizational performance. Additionally, it is required for financial survival and organizational excellence.

  7. Statistical benchmarking in utility regulation: Role, standards and methods

    International Nuclear Information System (INIS)

    Newton Lowry, Mark; Getachew, Lullit

    2009-01-01

    Statistical benchmarking is being used with increasing frequency around the world in utility rate regulation. We discuss how and where benchmarking is in use for this purpose, and the pros and cons of regulatory benchmarking. We then discuss alternative performance standards and benchmarking methods in regulatory applications, and use these to propose guidelines for the appropriate use of benchmarking in the rate-setting process. The standards, which we term the competitive market and frontier paradigms, have a bearing on method selection. These, along with regulatory experience, suggest that benchmarking can be used either for prudence review in regulation or to establish rates or rate-setting mechanisms directly.

  8. EVALUATION OF FUNCTIONAL RESULTS OF MEDIAL OPENING WEDGE HIGH TIBIAL OSTEOTOMY FOR UNICOMPARTMENTAL OSTEOARTHRITIS VARUS KNEE

    Directory of Open Access Journals (Sweden)

    Shyam Sundar Bakki

    2017-01-01

    BACKGROUND: Osteoarthritis commonly affects the medial compartment of the knee, giving rise to varus deformity in the majority of cases. Significant varus deformity further aggravates the pathology due to medialisation of the weight-bearing line. Osteotomy of the proximal tibia realigns this weight-bearing axis, thereby relieving pressure on the damaged medial compartment. Medial opening wedge high tibial osteotomy (OWHTO) is a promising option in this scenario because it is associated with high accuracy in correcting the deformity and fewer complications compared with lateral closing wedge HTO or UKA. In this study, we evaluate the functional outcome of HTO in patients with unicompartmental osteoarthritis. MATERIALS AND METHODS: This is a prospective study of patients who attended the orthopaedic outpatient clinic in Government Hospital, Kakinada, between August 2013 and August 2015. The patients were evaluated by clinical examination and weight-bearing radiographs. Patients found to have unicompartmental osteoarthritis with knee pain not relieved by conservative management, and who satisfied the inclusion criteria, were selected. RESULTS: Excellent results can be achieved by appropriate selection criteria and planning with long-limb weight-bearing radiographs. There is excellent relief of pain, achieved within the first few months postoperatively, as assessed by the VAS score. The KSS knee score was excellent in 35%, good in 40%, fair in 20% and poor in 5%. The KSS function score was excellent in 30%, good in 45%, fair in 20% and poor in 5%. There was significant improvement in the range of movement of the knee joint postoperatively. CONCLUSION: We conclude that medial OWHTO is the preferred modality for unicompartmental OA in those aged <60 years; in developing nations like India, where squatting is an important function, it has a major role, as it can restore near-normal knee function without disturbing the anatomy.

  9. Restoration of Femoral Condylar Anatomy for Achieving Optimum Functional Expectations: Component Design and Early Results

    Directory of Open Access Journals (Sweden)

    Sridhar Durbhakula

    2016-09-01

    BACKGROUND: Many total knee arthroplasty (TKA) systems are used across a variety of markets, in which outcome will be influenced by patient morphology and the normal activities of daily living of that patient population. Femoral component sizing in primary total knee arthroplasty is of paramount importance for optimizing complication-free post-operative function across all patients. The purpose of this study was to report the early results of a primary TKA system in support of the component design characteristics for achievement of increased functional expectations. METHODS: A prospective, continuous series of 176 primary posterior stabilized (PS) TKAs were performed in 172 patients by a single surgeon. Femoral component size distribution was assessed and all patients were followed for a minimum of two years post-operatively. Total Hospital for Special Surgery (HSS) scores and range of motion (ROM) were assessed for the entire cohort and by gender. RESULTS: No patients were lost to follow-up. Two patients required incision and drainage for superficial wound infection of the indicated knees. There was no radiographic evidence of component failure. As expected, femoral component size frequency was skewed by gender, with the larger sizes in males. There were no pre- or post-operative clinical or functional differences by gender at the most recent follow-up (avg. 3.8 years). In addition, there was a significant average increase in HSS score (p<0.01) and ROM (p<0.01) compared to the pre-operative baseline. CONCLUSIONS: The design characteristics for component sizing and functional expectations were confirmed in the reported Western population cohort series. Further continued use and study of this primary TKA system is warranted across all ethnic cultures.

  10. Waveform Agile Sensing Approach for Tracking Benchmark in the Presence of ECM using IMMPDAF

    Directory of Open Access Journals (Sweden)

    G. S. Satapathi

    2017-04-01

    This paper presents an efficient approach, based on waveform-agile sensing, to enhance the performance of benchmark target tracking in the presence of strong interference. The waveform-agile sensing library consists of different waveforms, such as linear frequency modulation (LFM), Gaussian frequency modulation (GFM) and stepped frequency modulation (SFM) waveforms. Improved performance is accomplished through a waveform-agile sensing technique: the waveform to be transmitted at each scan is selected by jointly computing the ambiguity function of the waveform and the Cramer-Rao Lower Bound (CRLB) matrix of the measurement errors. The electronic countermeasures (ECM) comprise a stand-off jammer (SOJ) and a self-screening jammer (SSJ). An interacting multiple model probability data association filter (IMMPDAF) is employed for tracking the benchmark trajectories. Experimental results demonstrate that the waveform-agile sensing approach requires 39.98 percent lower mean average power than earlier studies. Further, it is observed that the position and velocity root mean square error values decrease as the number of waveforms increases from 5 to 50.
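
    A heavily simplified sketch of CRLB-driven waveform selection (Python): each candidate waveform's range-error bound is predicted from its RMS bandwidth via the delay CRLB, and the waveform with the smallest bound is selected. The bandwidths and SNR are invented, and a full tracker would also weigh Doppler accuracy and the ambiguity function, as the paper does.

        import numpy as np

        # Sketch: pick the waveform minimizing the predicted range-error bound.
        C = 3e8                                    # speed of light, m/s
        snr = 100.0                                # post-processing SNR (linear)
        waveforms = {"LFM": 5e6, "GFM": 2e6, "SFM": 8e6}   # RMS bandwidths (Hz)

        def range_crlb(beta, snr):
            # delay CRLB: var(tau) >= 1 / ((2*pi*beta)**2 * snr); convert to range
            var_tau = 1.0 / ((2 * np.pi * beta) ** 2 * snr)
            return (C / 2) * np.sqrt(var_tau)

        best = min(waveforms, key=lambda w: range_crlb(waveforms[w], snr))
        for w, beta in waveforms.items():
            print(f"{w}: range sigma = {range_crlb(beta, snr):.2f} m")
        print("selected waveform:", best)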

  11. Benchmarking the Remote-Handled Waste Facility at the West Valley Demonstration Project

    International Nuclear Information System (INIS)

    Mendiratta, O.P.; Ploetz, D.K.

    2000-01-01

    ABSTRACT Facility decontamination activities at the West Valley Demonstration Project (WVDP), the site of a former commercial nuclear spent fuel reprocessing facility near Buffalo, New York, have resulted in the removal of radioactive waste. Due to high dose and/or high contamination levels of this waste, it needs to be handled remotely for processing and repackaging into transport/disposal-ready containers. An initial conceptual design for a Remote-Handled Waste Facility (RHWF), completed in June 1998, was estimated to cost $55 million and take 11 years to process the waste. Benchmarking the RHWF with other facilities around the world, completed in November 1998, identified unique facility design features and innovative waste processing methods. Incorporation of the benchmarking effort has led to a smaller yet fully functional, $31 million facility. To distinguish it from the June 1998 version, the revised design is called the Rescoped Remote-Handled Waste Facility (RRHWF) in this topical report. The conceptual design for the RRHWF was completed in June 1999. A design-build contract was approved by the Department of Energy in September 1999

  12. Benchmarking the Remote-Handled Waste Facility at the West Valley Demonstration Project

    Energy Technology Data Exchange (ETDEWEB)

    O. P. Mendiratta; D. K. Ploetz

    2000-02-29

    ABSTRACT Facility decontamination activities at the West Valley Demonstration Project (WVDP), the site of a former commercial nuclear spent fuel reprocessing facility near Buffalo, New York, have resulted in the removal of radioactive waste. Due to high dose and/or high contamination levels of this waste, it needs to be handled remotely for processing and repackaging into transport/disposal-ready containers. An initial conceptual design for a Remote-Handled Waste Facility (RHWF), completed in June 1998, was estimated to cost $55 million and take 11 years to process the waste. Benchmarking the RHWF with other facilities around the world, completed in November 1998, identified unique facility design features and innovative waste processing methods. Incorporation of the benchmarking effort has led to a smaller yet fully functional, $31 million facility. To distinguish it from the June 1998 version, the revised design is called the Rescoped Remote-Handled Waste Facility (RRHWF) in this topical report. The conceptual design for the RRHWF was completed in June 1999. A design-build contract was approved by the Department of Energy in September 1999.

  13. FUNCTIONAL RESULTS OF SURGICAL TREATMENT FOR ISTHMIC SPONDYLOLISTHESIS USING ANTERIOR AND POSTERIOR EXPOSURES

    Directory of Open Access Journals (Sweden)

    V. V. Rudenko

    2013-01-01

    Objective - to compare the results of spondylolisthesis treatment using different surgical technologies. Material and methods: 84 patients (aged from 19 to 67) with spondylolisthesis of degree 1-3 (H.W. Meyerding) were operated on. Two surgical exposures were used for decompression and stabilization. In the first group of patients, anterior decompression and stabilization were performed from a retroperitoneal access. The second group was operated on using posterolateral interbody fusion with transpedicular screw fixation. The following results were estimated after the operation: the patients' postoperative adaptation and the rate of neurological and orthopedic rehabilitation during the postoperative period. Conclusions: The obtained functional results show no difference between the two groups in which posterior and anterior exposures were used for the surgical treatment of spondylolisthesis of degree 1-3.

  14. Iraq: Politics, Elections, and Benchmarks

    National Research Council Canada - National Science Library

    Katzman, Kenneth

    2009-01-01

    Iraq's political system, the result of a U.S.-supported election process, is increasingly exhibiting peaceful competition but continues to be riven by sectarianism and ethnic and factional infighting...

  15. [Functional results of cryosurgical procedures in rhegmatogenous retinal detachment including macula region - our experience].

    Science.gov (United States)

    Chrapek, O; Sín, M; Jirková, B; Jarkovský, J; Rehák, J

    2013-10-01

    The aim of this study is to retrospectively evaluate the functional results of cryosurgical treatment of uncomplicated, idiopathic rhegmatogenous retinal detachment including the macula region in phakic patients operated on at the Department of Ophthalmology, Faculty Hospital, Palacký University, Olomouc, Czech Republic, E.U., during the period 2002-2013, and to evaluate the significance of the macula detachment duration for the final visual acuity. The study group included 56 eyes of 56 patients operated on in the years 2003-2012 at the Department of Ophthalmology, Faculty Hospital, Palacký University, Olomouc. All patients were phakic, and in all of them retinal detachment including the macula region was diagnosed. The mean follow-up period was 8.75 months. Initial and final visual acuity testing was performed. By comparing initial and final visual acuity, we rated the degree of visual acuity change. The result was rated as improved if the visual acuity improved by 1 or more lines on the ETDRS chart, as stabilized if the visual acuity remained the same or changed by 1 line of the ETDRS chart only, and as worsened if the visual acuity decreased by 1 or more lines of the ETDRS chart. In the study group, the authors compared visual acuity levels in patients with macula detachment durations of up to 10 days and of 11 days or more. For the statistical evaluation of the achieved results, the Mann-Whitney U test was used. The visual acuity improved in 49 (87%), did not change in 5 (9%) and worsened in 2 (4%) patients. The patients with a macula detachment duration of up to 10 days achieved statistically significantly better visual acuity than patients with a macula detachment duration of 11 days or more. Patients with a macula detachment duration of up to 10 days have a better prognosis for the functional result than patients with a macula detachment duration of 11 days or more.

  16. Shielding benchmark tests of JENDL-3

    International Nuclear Information System (INIS)

    Kawai, Masayoshi; Hasegawa, Akira; Ueki, Kohtaro; Yamano, Naoki; Sasaki, Kenji; Matsumoto, Yoshihiro; Takemura, Morio; Ohtani, Nobuo; Sakurai, Kiyoshi.

    1994-03-01

    An integral test of the neutron cross sections for major shielding materials in JENDL-3 has been performed by analyzing various shielding benchmark experiments. For the fission-like neutron source problem, the following experiments are analyzed: (1) ORNL Broomstick experiments for oxygen, iron and sodium, (2) ASPIS deep penetration experiments for iron, (3) ORNL neutron transmission experiments for iron, stainless steel, sodium and graphite, (4) KfK leakage spectrum measurements from iron spheres, (5) RPI angular neutron spectrum measurements in a graphite block. For the D-T neutron source problem, the following two experiments are analyzed: (6) LLNL leakage spectrum measurements from spheres of iron and graphite, and (7) JAERI-FNS angular neutron spectrum measurements on beryllium and graphite slabs. The analyses have been performed using the radiation transport codes ANISN (1D Sn), DIAC (1D Sn), DOT3.5 (2D Sn) and MCNP (3D point Monte Carlo). The group cross sections for the Sn transport calculations are generated with the code systems PROF-GROUCH-G/B and RADHEAT-V4; the point-wise cross sections for MCNP are produced with NJOY. For comparison, analyses with JENDL-2 and ENDF/B-IV have also been carried out. The calculations using JENDL-3 show overall agreement with the experimental data, as do those with ENDF/B-IV; in particular, JENDL-3 gives better results than JENDL-2 and ENDF/B-IV for sodium. It is concluded that JENDL-3 is well suited to fission and fusion reactor shielding analyses. (author)

  17. ISPRS Benchmark for Multi-Platform Photogrammetry

    Science.gov (United States)

    Nex, F.; Gerke, M.; Remondino, F.; Przybilla, H.-J.; Bäumker, M.; Zurhorst, A.

    2015-03-01

    Airborne high-resolution oblique imagery systems and RPAS/UAVs are very promising technologies that will keep influencing the development of geomatics in the coming years, closing the gap between terrestrial and classical aerial acquisitions. These two platforms are also a promising solution for National Mapping and Cartographic Agencies (NMCA), as they allow deriving complementary mapping information. Although interest in the registration and integration of aerial and terrestrial data is constantly increasing, only limited work has been performed on this topic so far. Several investigations still need to be undertaken concerning algorithms' ability for automatic co-registration, accurate point cloud generation and feature extraction from multi-platform image data. One of the biggest obstacles is the non-availability of reliable and free datasets to test and compare new algorithms and procedures. The scientific initiative "ISPRS benchmark for multi-platform photogrammetry", run in collaboration with EuroSDR, aims at collecting and sharing state-of-the-art multi-sensor data (oblique airborne, UAV-based and terrestrial images) over an urban area. These datasets are used to assess different algorithms and methodologies for image orientation and dense matching. As ground truth, Terrestrial Laser Scanning (TLS), Aerial Laser Scanning (ALS), as well as topographic networks and GNSS points were acquired, to compare 3D coordinates on check points (CPs) and evaluate cross sections and residuals on generated point cloud surfaces. In this paper, the acquired data, the pre-processing steps, the evaluation procedures and some preliminary results achieved with commercial software are presented.
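
    A minimal sketch of the check-point comparison used for evaluation (Python; the coordinates are invented): the per-axis RMSE of 3D residuals between estimated and surveyed check points.

        import numpy as np

        # Sketch: score an orientation result against surveyed check points (CPs).
        estimated = np.array([[100.02, 50.01, 10.05],
                              [200.05, 80.03,  9.98]])
        surveyed  = np.array([[100.00, 50.00, 10.00],
                              [200.00, 80.00, 10.00]])

        residuals = estimated - surveyed
        rmse_xyz = np.sqrt((residuals**2).mean(axis=0))   # per-axis RMSE
        print("RMSE X/Y/Z [m]:", np.round(rmse_xyz, 3))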

  18. Preliminary Benchmark Evaluation of Japan’s High Temperature Engineering Test Reactor

    Energy Technology Data Exchange (ETDEWEB)

    John Darrell Bess

    2009-05-01

    A benchmark model of the initial fully loaded start-up core critical of Japan's High Temperature Engineering Test Reactor (HTTR) was developed to provide data in support of ongoing validation efforts of the Very High Temperature Reactor Program, using publicly available resources. The HTTR is a 30 MWt test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. The benchmark was modeled using MCNP5 with various neutron cross-section libraries. An uncertainty evaluation was performed by perturbing the benchmark model and comparing the resultant eigenvalues. The calculated eigenvalues are approximately 2-3% greater than expected, with an uncertainty of ±0.70%. The primary sources of uncertainty are the impurities in the core and reflector graphite. The release of additional HTTR data could effectively reduce the benchmark model uncertainties and bias. The sensitivity of the results to the graphite impurity content implies that further evaluation of the graphite content could significantly improve the calculated results. Proper characterization of graphite for future Next Generation Nuclear Plant reactor designs will improve computational modeling capabilities. Current benchmarking activities include evaluation of the annular HTTR cores and assessment of the remaining start-up core physics experiments, including reactivity effects, reactivity coefficients, and reaction-rate distribution measurements. Long-term benchmarking goals might include analyses of the hot zero-power critical, rise-to-power tests, and other irradiation, safety, and technical evaluations performed with the HTTR.

  19. Surgical treatment of avulsion fractures at the tibial insertion of the posterior cruciate ligament: functional result

    Directory of Open Access Journals (Sweden)

    Marcos Alexandre Barros

    2015-12-01

    OBJECTIVE: To objectively and subjectively evaluate the functional result, from before to after surgery, among patients with a diagnosis of an isolated avulsion fracture of the posterior cruciate ligament who were treated surgically. METHOD: Five patients were evaluated by reviewing their medical files, applying the Lysholm questionnaire, and carrying out physical and radiological examinations. For the statistical analysis, a significance level of 0.10 and a 95% confidence interval were used. RESULTS: According to the Lysholm criteria, all patients were classified as poor (<64 points) before the operation and evolved to a mean of 96 points six months after the operation. We observed that 100% of the posterior drawer tests became negative, taking values less than 5 mm as negative. CONCLUSION: Surgical methods with stable fixation for treating avulsion fractures at the tibial insertion of the posterior cruciate ligament produce acceptable functional results from the surgical and radiological points of view, with a significance level of 0.042.

  20. Benchmarking healthcare logistics processes: a comparative case study of Danish and US hospitals

    DEFF Research Database (Denmark)

    Feibert, Diana Cordes; Andersen, Bjørn; Jacobsen, Peter

    2017-01-01

    initiatives prevalent in manufacturing industries, such as lean, business process reengineering and benchmarking, have seen an increase in use in healthcare. This study investigates how logistics processes in a hospital can be benchmarked to improve process performance. A comparative case study of the bed logistics process and the pharmaceutical distribution process was conducted at a Danish and a US hospital. The case study results identified decision criteria for designing efficient and effective healthcare logistics processes. The most important decision criteria were related to quality, security of supply and employee engagement. Based on these decision criteria, performance indicators were developed to enable benchmarking of logistics processes in healthcare. The study contributes to the limited literature on healthcare logistics benchmarking. Furthermore, managers in healthcare logistics