WorldWideScience

Sample records for split-c benchmark moldyn

  1. Benchmark and physics testing of LIFE-4C. Summary

    International Nuclear Information System (INIS)

    Liu, Y.Y.

    1984-06-01

    LIFE-4C is a steady-state/transient analysis code developed for performance evaluation of carbide [(U,Pu)C and UC] fuel elements in advanced LMFBRs. This paper summarizes selected results obtained during a crucial step in the development of LIFE-4C - benchmark and physics testing

  2. SiC MOSFETs based split output half bridge inverter

    DEFF Research Database (Denmark)

    Li, Helong; Munk-Nielsen, Stig; Beczkowski, Szymon

    2014-01-01

    output. The double pulse test shows the devices' current during the commutation process and the reduced switching losses of SiC MOSFETs compared to those of the traditional half bridge. The efficiency comparison is presented with experimental results of the half bridge power inverter with split output...... and the traditional half bridge inverter, from a switching frequency of 10 kHz to 100 kHz. The comparison of experimental results shows that the half bridge with split output has an efficiency improvement of more than 0.5% at 100 kHz switching frequency....

  3. Ternary electrocatalysts for oxidizing ethanol to carbon dioxide: making Ir capable of splitting C-C bond.

    Science.gov (United States)

    Li, Meng; Cullen, David A; Sasaki, Kotaro; Marinkovic, Nebojsa S; More, Karren; Adzic, Radoslav R

    2013-01-09

    Splitting the C-C bond is the main obstacle to electrooxidation of ethanol (EOR) to CO(2). We recently demonstrated that the ternary PtRhSnO(2) electrocatalyst can accomplish that reaction at room temperature with Rh having a unique capability to split the C-C bond. In this article, we report the finding that Ir can be induced to split the C-C bond as a component of the ternary catalyst. We characterized and compared the properties of several carbon-supported nanoparticle (NP) electrocatalysts comprising a SnO(2) NP core decorated with multimetallic nanoislands (MM' = PtIr, PtRh, IrRh, PtIrRh) prepared using a seeded growth approach. An array of characterization techniques were employed to establish the composition and architecture of the synthesized MM'/SnO(2) NPs, while electrochemical and in situ infrared reflection absorption spectroscopy studies elucidated trends in activity and the nature of the reaction intermediates and products. Both EOR reactivity and selectivity toward CO(2) formation of several of these MM'/SnO(2)/C electrocatalysts are significantly higher compared to conventional Pt/C and Pt/SnO(2)/C catalysts. We demonstrate that the PtIr/SnO(2)/C catalyst with high Ir content shows outstanding catalytic properties with the most negative EOR onset potential and reasonably good selectivity toward ethanol complete oxidation to CO(2).

  4. 3-D extension C5G7 MOX benchmark calculation using the THREEDANT code

    International Nuclear Information System (INIS)

    Kim, H.Ch.; Han, Ch.Y.; Kim, J.K.; Na, B.Ch.

    2005-01-01

    This work pursued the benchmark on deterministic 3-D MOX fuel assembly transport calculations without spatial homogenization (C5G7 MOX Benchmark Extension). The goal of this benchmark is to provide a more thorough test of the ability of currently available 3-D methods to handle the spatial heterogeneities of a reactor core. The benchmark requires solutions in the form of normalized pin powers as well as the eigenvalue for each of the control rod configurations: without rods, with A rods, and with B rods. In this work, the DANTSYS code package was applied to analyze the 3-D Extension C5G7 MOX Benchmark problems. The THREEDANT code within the DANTSYS code package, which solves the 3-D transport equation in x-y-z and r-z-theta geometries, was employed to perform the benchmark calculations. To analyze the benchmark with the THREEDANT code, proper spatial and angular approximations were made. Several calculations were performed to investigate the effects of the different spatial approximations on the accuracy. The results from these sensitivity studies were analyzed and discussed. From the results, it is found that the 4x4 grid per pin cell is sufficiently refined that very little benefit is obtained by further mesh refinement. (authors)

  5. Pericles and Attila results for the C5G7 MOX benchmark problems

    International Nuclear Information System (INIS)

    Wareing, T.A.; McGhee, J.M.

    2002-01-01

    Recently the Nuclear Energy Agency has published a new benchmark entitled 'C5G7 MOX Benchmark.' This benchmark is intended to test the ability of current transport codes to treat reactor core problems without spatial homogenization. The benchmark includes both a two-dimensional and a three-dimensional problem. We have calculated results for these benchmark problems with our Pericles and Attila codes. Pericles is a one-, two-, and three-dimensional unstructured grid discrete-ordinates code and was used for the two-dimensional benchmark problem. Attila is a three-dimensional unstructured tetrahedral mesh discrete-ordinates code and was used for the three-dimensional problem. Both codes use discontinuous finite element spatial differencing. Both codes use diffusion synthetic acceleration (DSA) for accelerating the inner iterations.

  6. Split-face vitamin C consumer preference study.

    Science.gov (United States)

    Baumann, Leslie; Duque, Deysi K; Schirripa, Michael J

    2014-10-01

    Vitamin C is commonly used to treat aged skin. It has shown regenerative effects on skin wrinkles, texture, strength, and evenness of tone through its roles as an antioxidant, tyrosinase inhibitor, and inducer of collagen synthesis. Available vitamin C formulations on the anti-aging skin care market vary by their pH, packaging, and vehicle, which may decrease absorption and, therefore, the efficacy of the product. The purpose of this study was to assess the subjective efficacy, wearability, tolerance, and overall preference of two professional vitamin C topical serums and sunscreens in Caucasian females using a split-face method. A virtual split-face study of 39 Caucasian women compared two popular vitamin C and SPF product combinations - C-ESTA® Face Serum and Marini Physical Protectant SPF 45 (Jan Marini Skin, San Jose, CA; Products A) and CE Ferulic® and Physical Fusion UV Defense SPF 50 (Products B; SkinCeuticals Inc, Garland, TX). The products were assigned to each subject's left or right side of the face, and subjects rated and compared products through 5 online surveys at baseline, 24 hours, and days 3, 7, and 14. Over 86% of the 35 subjects who completed the study preferred the smell, and 83% preferred the feel and application, of vitamin C Serum A over Serum B. Seventy-one percent of subjects preferred the feel and application of Sunscreen A over Sunscreen B. Results also showed significant improvement in skin texture and skin tone with Products A vs Products B. Products A trended higher for multiple additional categories. Products A exhibited superior anti-aging benefits compared with Products B. Subjects preferred the smell, feel, and application of Products A and experienced significantly less irritation than with Products B. Overall, Products A were preferred over Products B, with subjects willing to pay more for Products A than for Products B.

  7. Attila calculations for the 3-D C5G7 benchmark extension

    International Nuclear Information System (INIS)

    Wareing, T.A.; McGhee, J.M.; Barnett, D.A.; Failla, G.A.

    2005-01-01

    The performance of the Attila radiation transport software was evaluated for the 3-D C5G7 MOX benchmark extension, a follow-on study to the MOX benchmark developed by the 'OECD/NEA Expert Group on 3-D Radiation Transport Benchmarks'. These benchmarks were designed to test the ability of modern deterministic transport methods to model reactor problems without spatial homogenization. Attila is a general purpose radiation transport software package with an integrated graphical user interface (GUI) for analysis, set-up and postprocessing. Attila provides solutions to the discrete-ordinates form of the linear Boltzmann transport equation on a fully unstructured, tetrahedral mesh using linear discontinuous finite-element spatial differencing in conjunction with diffusion synthetic acceleration of inner iterations. The results obtained indicate that Attila can accurately solve the benchmark problem without spatial homogenization. (authors)

  8. 10kV SiC MOSFET split output power module

    DEFF Research Database (Denmark)

    Beczkowski, Szymon; Li, Helong; Uhrenfeldt, Christian

    2015-01-01

    The poor body diode performance of the first generation of 10kV SiC MOSFETs and the parasitic turn-on phenomenon limit the performance of SiC based converters. Both these problems can potentially be mitigated using a split output topology. In this paper we present a comparison between a classical...

  9. Pescara benchmark: overview of modelling, testing and identification

    International Nuclear Information System (INIS)

    Bellino, A; Garibaldi, L; Marchesiello, S; Brancaleoni, F; Gabriele, S; Spina, D; Bregant, L; Carminelli, A; Catania, G; Sorrentino, S; Di Evangelista, A; Valente, C; Zuccarino, L

    2011-01-01

    The 'Pescara benchmark' is part of the national research project 'BriViDi' (BRIdge VIbrations and DIagnosis) supported by the Italian Ministero dell'Universita e Ricerca. The project is aimed at developing an integrated methodology for the structural health evaluation of railway r/c and p/c bridges. The methodology should provide for applicability in operating conditions, easy data acquisition through common industrial instrumentation, and robustness and reliability against structural and environmental uncertainties. The Pescara benchmark consisted of lab tests to build a consistent and large experimental database, followed by data processing. Special tests were devised to simulate the train transit effects in actual field conditions. Prestressed concrete beams of current industrial production, both sound and damaged at various corrosion severity levels, were tested. The results were collected both in a deterministic setting and in a form suitable for dealing with experimental uncertainties. Damage identification was split into two approaches: with or without a reference model. In the first case, finite element models were used in conjunction with non-conventional updating techniques. In the second case, specialized output-only identification techniques capable of dealing with time-variant and possibly nonlinear systems were developed. The lab tests allowed validation of the above approaches and of the performance of classical modal-based damage indicators.

  10. Example Plant Model for an International Benchmark Study on DI&C PSA

    International Nuclear Information System (INIS)

    Shin, Sung Min; Park, Jinkyun; Jang, Wondea; Kang, Hyun Gook

    2016-01-01

    In this context, the risk quantification of these digitalized safety systems has become more important. Although there are many challenges to address regarding this issue, many countries agreed on the necessity of research on reliability quantification of DI&C systems. Based on the agreement of several countries, an international research association is planning a benchmark study on this issue by sharing an example digitalized plant model and letting each participating member develop its own probabilistic safety assessment (PSA) model of digital I&C systems. Although DI&C systems are being applied to NPPs, the modeling method to quantify their reliability is still ambiguous. Therefore, an international research association is planning a benchmark study to address this issue by sharing an example digitalized plant model and letting each member develop their own PSA model for DI&C systems

  11. Rashba splitting of 100 meV in Au-intercalated graphene on SiC

    Energy Technology Data Exchange (ETDEWEB)

    Marchenko, D.; Varykhalov, A.; Sánchez-Barriga, J.; Rader, O. [Helmholtz-Zentrum Berlin für Materialien und Energie, Elektronenspeicherring BESSY II, Albert-Einstein-Straße 15, 12489 Berlin (Germany); Seyller, Th. [Institut für Physik, Technische Universität Chemnitz, Reichenhainer Strasse 70, 09126 Chemnitz (Germany)

    2016-04-25

    Intercalation of Au can produce giant Rashba-type spin-orbit splittings in graphene, but this has not yet been achieved on a semiconductor substrate. For graphene/SiC(0001), Au intercalation yields two phases with different doping. We observe a 100 meV Rashba-type spin-orbit splitting at 0.9 eV binding energy in the case of p-type graphene after Au intercalation. We show that this giant splitting is due to hybridization and much more limited in energy and momentum space than for Au-intercalated graphene on Ni.

  12. Oxidative addition of the ethane C-C bond to Pd. An ab initio benchmark and DFT validation study

    NARCIS (Netherlands)

    De Jong, G.T.; Geerke, D.P.; Diefenbach, A.; Sola, M.; Bickelhaupt, F.M.

    2005-01-01

    We have computed a state-of-the-art benchmark potential energy surface (PES) for the archetypal oxidative addition of the ethane C-C bond to the palladium atom and have used this to evaluate the performance of 24 popular density functionals, covering LDA, GGA, meta-GGA, and hybrid density functionals.

  13. Benchmarking by HbA1c in a national diabetes quality register--does measurement bias matter?

    Science.gov (United States)

    Carlsen, Siri; Thue, Geir; Cooper, John Graham; Røraas, Thomas; Gøransson, Lasse Gunnar; Løvaas, Karianne; Sandberg, Sverre

    2015-08-01

    Bias in HbA1c measurement could give a wrong impression of the standard of care when benchmarking diabetes care. The aim of this study was to evaluate how measurement bias in HbA1c results may influence the benchmarking process performed by a national diabetes register. Using data from 2012 from the Norwegian Diabetes Register for Adults, we included HbA1c results from 3584 patients with type 1 diabetes attending 13 hospital clinics, and 1366 patients with type 2 diabetes attending 18 GP offices. Correction factors for HbA1c were obtained by comparing the results of the hospital laboratories'/GP offices' external quality assurance (EQA) scheme with the target value from a reference method. Compared with the uncorrected yearly median HbA1c values for hospital clinics and GP offices, EQA-corrected HbA1c values were within ±0.2% (2 mmol/mol) for all but one hospital clinic, whose value was reduced by 0.4% (4 mmol/mol). Three hospital clinics reduced the proportion of patients with poor glycemic control, one by 9% and two by 4%. For most participants in our study, correcting for measurement bias had little effect on the yearly median HbA1c value or the percentage of patients achieving glycemic goals. However, at three hospital clinics correcting for measurement bias had an important effect on HbA1c benchmarking results, especially with regard to the percentages of patients achieving glycemic targets. The analytical quality of HbA1c should be taken into account when comparing benchmarking results.
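
    To make the correction step concrete, the following minimal Python sketch applies an additive, EQA-derived bias correction to a set of reported HbA1c values before computing a clinic median; the numbers and the additive form of the correction are illustrative assumptions, not the register's actual procedure.

        # Hypothetical illustration of an additive bias correction for HbA1c:
        # the lab's bias is estimated from its EQA result versus the reference
        # target and subtracted from every reported value before benchmarking.
        import statistics

        eqa_result_lab = 8.4   # % HbA1c the clinic's lab measured on the EQA sample (made up)
        eqa_target = 8.0       # % HbA1c reference-method target value (made up)
        bias = eqa_result_lab - eqa_target

        reported_hba1c = [7.1, 8.3, 9.0, 7.8, 10.2]            # illustrative patient results (%)
        corrected_hba1c = [v - bias for v in reported_hba1c]

        print(f"median before correction: {statistics.median(reported_hba1c):.1f}%")
        print(f"median after correction:  {statistics.median(corrected_hba1c):.1f}%")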

  14. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    Benchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  15. C-NAP1 and rootletin restrain DNA damage-induced centriole splitting and facilitate ciliogenesis.

    Science.gov (United States)

    Conroy, Pauline C; Saladino, Chiara; Dantas, Tiago J; Lalor, Pierce; Dockery, Peter; Morrison, Ciaran G

    2012-10-15

    Cilia are found on most human cells and exist as motile cilia or non-motile primary cilia. Primary cilia play sensory roles in transducing various extracellular signals, and defective ciliary functions are involved in a wide range of human diseases. Centrosomes are the principal microtubule-organizing centers of animal cells and contain two centrioles. We observed that DNA damage causes centriole splitting in non-transformed human cells, with isolated centrioles carrying the mother centriole markers CEP170 and ninein but not kizuna or cenexin. Loss of centriole cohesion through siRNA depletion of C-NAP1 or rootletin increased radiation-induced centriole splitting, with C-NAP1-depleted isolated centrioles losing mother markers. As the mother centriole forms the basal body in primary cilia, we tested whether centriole splitting affected ciliogenesis. While irradiated cells formed apparently normal primary cilia, most cilia arose from centriolar clusters, not from isolated centrioles. Furthermore, C-NAP1 or rootletin knockdown reduced primary cilium formation. Therefore, the centriole cohesion apparatus at the proximal end of centrioles may provide a target that can affect primary cilium formation as part of the DNA damage response.

  16. 3-D extension C5G7 MOX benchmark results using PARTISN

    Energy Technology Data Exchange (ETDEWEB)

    Dahl, J.A. [Los Alamos National Laboratory, CCS-4 Transport Methods Group, Los Alamos, NM (United States)

    2005-07-01

    We have participated in the Expert Group on 3-D Radiation Transport Benchmarks' proposed 3-dimensional Extension C5G7 MOX problems using the discrete ordinates transport code PARTISN. The computational mesh was created using the FRAC-IN-THE-BOX code, which produces a volume fraction Cartesian mesh from combinatorial geometry descriptions. k-eff eigenvalues, maximum pin powers, and average fuel assembly powers are reported and compared to a benchmark-quality Monte Carlo solution. We also present a two-dimensional mesh convergence study examining the effects of using volume fractions to approximate the water-pin cell interface. It appears that the control rod pin cell must be meshed twice as fine as a fuel pin cell in order to achieve the same spatial error when using the volume fraction method to define water channel-pin cell interfaces. It is noted that the previous PARTISN results provided to the OECD/NEA Expert Group on 3-dimensional Radiation Benchmarks contained a cross section error, and therefore should be disregarded.

  17. 3-D extension C5G7 MOX benchmark results using PARTISN

    International Nuclear Information System (INIS)

    Dahl, J.A.

    2005-01-01

    We have participated in the Expert Group on 3-D Radiation Transport Benchmarks' proposed 3-dimensional Extension C5G7 MOX problems using the discrete ordinates transport code PARTISN. The computational mesh was created using the FRAC-IN-THE-BOX code, which produces a volume fraction Cartesian mesh from combinatorial geometry descriptions. k-eff eigenvalues, maximum pin powers, and average fuel assembly powers are reported and compared to a benchmark-quality Monte Carlo solution. We also present a two-dimensional mesh convergence study examining the effects of using volume fractions to approximate the water-pin cell interface. It appears that the control rod pin cell must be meshed twice as fine as a fuel pin cell in order to achieve the same spatial error when using the volume fraction method to define water channel-pin cell interfaces. It is noted that the previous PARTISN results provided to the OECD/NEA Expert Group on 3-dimensional Radiation Benchmarks contained a cross section error, and therefore should be disregarded.

  18. Analysis and modeling of wafer-level process variability in 28 nm FD-SOI using split C-V measurements

    Science.gov (United States)

    Pradeep, Krishna; Poiroux, Thierry; Scheer, Patrick; Juge, André; Gouget, Gilles; Ghibaudo, Gérard

    2018-07-01

    This work details the analysis of wafer-level global process variability in 28 nm FD-SOI using split C-V measurements. The proposed approach initially evaluates the native on-wafer process variability using efficient extraction methods on split C-V measurements. The on-wafer threshold voltage (VT) variability is first studied and modeled using a simple analytical model. Then, a statistical model based on the Leti-UTSOI compact model is proposed to describe the total C-V variability in different bias conditions. This statistical model is finally used to study the contribution of each process parameter to the total C-V variability.

  19. Benchmarking

    OpenAIRE

    Meylianti S., Brigita

    1999-01-01

    Benchmarking has different meanings to different people. There are five types of benchmarking, namely internal benchmarking, competitive benchmarking, industry/functional benchmarking, process/generic benchmarking, and collaborative benchmarking. Each type of benchmarking has its own advantages as well as disadvantages. Therefore, it is important to know what kind of benchmarking is suitable for a specific application. This paper will discuss those five types of benchmarking in detail, includ...

  20. Evaluating C-RAN Fronthaul Functional Splits in Terms of Network Level Energy and Cost Savings

    DEFF Research Database (Denmark)

    Checko, Aleksandra; Popovska Avramova, Andrijana; Berger, Michael Stübert

    2016-01-01

    The placement of the complete baseband processing in a centralized pool results in high data rate requirements and inflexibility of the fronthaul network, which challenges the energy and cost effectiveness of the cloud radio access network (C-RAN). Recently, redesign of the C-RAN through functional...... split in the baseband processing chain has been proposed to overcome these challenges. This paper evaluates, by mathematical and simulation methods, different splits with respect to network level energy and cost efficiency, having in mind the expected quality of service. The proposed mathematical...... model quantifies the multiplexing gains and the trade-offs between centralization and decentralization concerning the cost of the pool, fronthaul network capacity and resource utilization. The event-based simulation captures the influence of the traffic load dynamics and traffic type variation...

  1. Verification of the Shift Monte Carlo code with the C5G7 reactor benchmark

    International Nuclear Information System (INIS)

    Sly, N. C.; Mervin, B. T.; Mosher, S. W.; Evans, T. M.; Wagner, J. C.; Maldonado, G. I.

    2012-01-01

    Shift is a new hybrid Monte Carlo/deterministic radiation transport code being developed at Oak Ridge National Laboratory. At its current stage of development, Shift includes a parallel Monte Carlo capability for simulating eigenvalue and fixed-source multigroup transport problems. This paper focuses on recent efforts to verify Shift's Monte Carlo component using the two-dimensional and three-dimensional C5G7 NEA benchmark problems. Comparisons were made between the benchmark eigenvalues and those output by the Shift code. In addition, mesh-based scalar flux tally results generated by Shift were compared to those obtained using MCNP5 on an identical model and tally grid. The Shift-generated eigenvalues were within three standard deviations of the benchmark and MCNP5-1.60 values in all cases. The flux tallies generated by Shift were found to be in very good agreement with those from MCNP. (authors)

  2. Mobility-Aware Modeling and Analysis of Dense Cellular Networks With C-Plane/U-Plane Split Architecture

    KAUST Repository

    Ibrahim, Hazem

    2016-09-19

    The unrelenting increase in the population of mobile users and their traffic demands drives cellular network operators to densify their network infrastructure. Network densification shrinks the footprint of base stations (BSs) and reduces the number of users associated with each BS, leading to improved spatial frequency reuse and spectral efficiency, and thus, higher network capacity. However, the densification gain comes at the expense of higher handover rates and network control overhead. Hence, user mobility can diminish or even nullify the foreseen densification gain. In this context, splitting the control plane (C-plane) and user plane (U-plane) is proposed as a potential solution to harvest the densification gain with reduced cost in terms of handover rate and network control overhead. In this paper, we use stochastic geometry to develop a tractable mobility-aware model for a two-tier downlink cellular network with ultra-dense small cells and a C-plane/U-plane split architecture. The developed model is then used to quantify the effect of mobility on the foreseen densification gain with and without the C-plane/U-plane split. To this end, we shed light on the handover problem in dense cellular environments, show scenarios where the network fails to support certain mobility profiles, and obtain network design insights.

  3. Benchmarking ENDF/B-VII.0

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2006-01-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding, many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), the Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). For testing delayed neutron data, more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257

  4. Systematic Expansion of Active Spaces beyond the CASSCF Limit: A GASSCF/SplitGAS Benchmark Study.

    Science.gov (United States)

    Vogiatzis, Konstantinos D; Li Manni, Giovanni; Stoneburner, Samuel J; Ma, Dongxia; Gagliardi, Laura

    2015-07-14

    The applicability and accuracy of the generalized active space self-consistent field (GASSCF) and SplitGAS methods are presented. The GASSCF method enables the exploration of larger active spaces than the conventional complete active space SCF (CASSCF) by fragmentation of a large space into subspaces and by controlling the interspace excitations. In the SplitGAS method, the GAS configuration interaction (CI) expansion is further partitioned into two parts: the principal part, which includes the most important configuration state functions, and an extended part, containing less relevant but not negligible ones. An effective Hamiltonian is then generated, with the extended part acting as a perturbation to the principal space. Excitation energies of ozone, furan, pyrrole, nickel dioxide, and the copper tetrachloride dianion are reported. Various partitioning schemes of the GASSCF and SplitGAS CI expansions are considered and compared with the complete active space followed by second-order perturbation theory (CASPT2), the multireference CI method (MRCI), or available experimental data. General guidelines for the optimum applicability of these methods are discussed together with their current limitations.

  5. Ten key short-term sectoral benchmarks to limit warming to 1.5°C

    NARCIS (Netherlands)

    Kuramochi, Takeshi; Hoehne, N.E.; Schaeffer, M.; Cantzler, Jasmin; Hare, William; Deng, Yvonne; Sterl, Sebastian; Hagemann, Markus; Rocha, Marcia; Yanguas-Parra, Paola Andrea; Mir, Goher-Ur-Rehman; Wong, Lindee; El-Laboudy, Tarik; Wouters, Karlien; Deryng, Delphine; Blok, Kornelis

    2018-01-01

    This article identifies and quantifies the 10 most important benchmarks for climate action to be taken by 2020–2025 to keep the window open for a 1.5°C-consistent GHG emission pathway. We conducted a comprehensive review of existing emissions scenarios, scanned all sectors and the respective

  6. Ten key short-term sectoral benchmarks to limit warming to 1.5°C

    NARCIS (Netherlands)

    Kuramochi, Takeshi; Höhne, Niklas; Schaeffer, Michiel; Cantzler, Jasmin; Hare, Bill; Deng, Yvonne; Sterl, Sebastian; Hagemann, Markus; Rocha, Marcia; Yanguas-Parra, Paola Andrea; Mir, Goher Ur Rehman; Wong, Lindee; El-Laboudy, Tarik; Wouters, Karlien; Deryng, Delphine; Blok, Kornelis

    2018-01-01

    This article identifies and quantifies the 10 most important benchmarks for climate action to be taken by 2020–2025 to keep the window open for a 1.5°C-consistent GHG emission pathway. We conducted a comprehensive review of existing emissions scenarios, scanned all sectors and the respective

  7. Conclusion of the I.C.T. benchmark exercise

    International Nuclear Information System (INIS)

    Giacometti, A.

    1991-01-01

    The ICT benchmark exercise, carried out within the RIV working group of ESARDA on reprocessing data supplied by COGEMA for 53 routine reprocessing input batches made of 110 irradiated fuel assemblies from the KWO nuclear power plant, was finally evaluated. The conclusions are: all seven different ICT methods applied verified the operator data on plutonium to within about one percent; anomalies intentionally introduced into the operator data were detected in 90% of the cases; the nature of the introduced anomalies, which were unknown to the participants, was completely resolved for the safeguards-relevant cases; and the false alarm rate was in the few-percent range. The ICT benchmark results show that this technique is capable of detecting and resolving anomalies in the reprocessing input data on the order of one percent.

  8. Pool critical assembly pressure vessel facility benchmark

    International Nuclear Information System (INIS)

    Remec, I.; Kam, F.B.K.

    1997-07-01

    This pool critical assembly (PCA) pressure vessel wall facility benchmark (PCA benchmark) is described and analyzed in this report. Analysis of the PCA benchmark can be used for partial fulfillment of the requirements for the qualification of the methodology for pressure vessel neutron fluence calculations, as required by the US Nuclear Regulatory Commission regulatory guide DG-1053. Section 1 of this report describes the PCA benchmark and provides all data necessary for the benchmark analysis. The measured quantities, to be compared with the calculated values, are the equivalent fission fluxes. In Section 2 the analysis of the PCA benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed for three ENDF/B-VI-based multigroup libraries: BUGLE-93, SAILOR-95, and BUGLE-96. Excellent agreement of the calculated (C) and measured (M) equivalent fission fluxes was obtained. The arithmetic average C/M for all the dosimeters (total of 31) was 0.93 ± 0.03 and 0.92 ± 0.03 for the SAILOR-95 and BUGLE-96 libraries, respectively. The average C/M ratio obtained with the BUGLE-93 library for the 28 measurements was 0.93 ± 0.03 (the neptunium measurements in the water and air regions were overpredicted and excluded from the average). No systematic decrease in the C/M ratios with increasing distance from the core was observed for any of the libraries used.

  9. Hybrid Discrete Differential Evolution Algorithm for Lot Splitting with Capacity Constraints in Flexible Job Scheduling

    Directory of Open Access Journals (Sweden)

    Xinli Xu

    2013-01-01

    Full Text Available A two-level batch chromosome coding scheme is proposed to solve the lot splitting problem with equipment capacity constraints in flexible job shop scheduling, which includes a lot splitting chromosome and a lot scheduling chromosome. To balance global search and local exploration of the differential evolution algorithm, a hybrid discrete differential evolution algorithm (HDDE) is presented, in which a local strategy with dynamic random searching based on the critical path and a random mutation operator is developed. The performance of HDDE was evaluated on 14 benchmark problems and a practical dye vat scheduling problem. The simulation results show that the proposed algorithm has strong global search capability and can effectively solve practical lot splitting problems with equipment capacity constraints.
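
    The hybrid algorithm above builds on the standard differential evolution loop of mutation, crossover, and greedy selection. The sketch below shows plain continuous DE (DE/rand/1/bin) on the sphere benchmark function only to illustrate that loop; it is not the discrete, two-level-chromosome HDDE of the paper, and all parameter values are assumptions.

        # Plain differential evolution (DE/rand/1/bin) minimizing the sphere function,
        # shown to illustrate the mutation/crossover/selection loop that HDDE extends.
        import numpy as np

        rng = np.random.default_rng(0)

        def sphere(x):
            return float(np.sum(x * x))

        def differential_evolution(f, dim=10, pop_size=30, F=0.5, CR=0.9, generations=200):
            pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
            fitness = np.array([f(ind) for ind in pop])
            for _ in range(generations):
                for i in range(pop_size):
                    candidates = [j for j in range(pop_size) if j != i]
                    a, b, c = pop[rng.choice(candidates, 3, replace=False)]
                    mutant = a + F * (b - c)                    # mutation
                    cross = rng.random(dim) < CR                # binomial crossover mask
                    cross[rng.integers(dim)] = True             # keep at least one mutant gene
                    trial = np.where(cross, mutant, pop[i])
                    trial_fitness = f(trial)
                    if trial_fitness < fitness[i]:              # greedy selection
                        pop[i], fitness[i] = trial, trial_fitness
            best = int(np.argmin(fitness))
            return pop[best], fitness[best]

        x_best, f_best = differential_evolution(sphere)
        print(f"best sphere value found: {f_best:.3e}")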

  10. Crystal-field splitting in coadsorbate systems: c(2x2) CO/K/Ni(100)

    NARCIS (Netherlands)

    Hasselström, J.; Föhlisch, A.; Denecke, R.; Nilsson, A.; Groot, F.M.F. de

    2000-01-01

    It is demonstrated how the crystal field splitting (CFS) fine structure can be used to characterize a coadsorbate system. We have applied K 2p x-ray absorption spectroscopy (XAS) to the c(2x2) CO/K/Ni(100) system. The CFS fine structure is shown to be sensitive to the local atomic

  11. Symmetric splitting of very light systems

    International Nuclear Information System (INIS)

    Grotowski, K.; Majka, Z.; Planeta, R.

    1985-01-01

    Fission reactions that produce fragments close to one half the mass of the composite system are traditionally observed in heavy nuclei. In light systems, symmetric splitting is rarely observed and poorly understood. It would be interesting to verify the existence of the symmetric splitting of compound nuclei with A 12C + 40Ca, 141 MeV 9Be + 40Ca and 153 MeV 6Li + 40Ca. The out-of-plane correlation of symmetric products was also measured for the reaction 186 MeV 12C + 40Ca. The coincidence measurements of the 12C + 40Ca system demonstrated that essentially all of the inclusive yield of symmetric products around 40° results from a binary decay. To characterize the dependence of the symmetric splitting process on the excitation energy of the 12C + 40Ca system, inclusive measurements were made at bombarding energies of 74, 132, 162, and 185 MeV.

  12. 40 CFR 141.172 - Disinfection profiling and benchmarking.

    Science.gov (United States)

    2010-07-01

    ... benchmarking. 141.172 Section 141.172 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Disinfection-Systems Serving 10,000 or More People § 141.172 Disinfection profiling and benchmarking. (a... sanitary surveys conducted by the State. (c) Disinfection benchmarking. (1) Any system required to develop...

  13. Review for session K - benchmarks

    International Nuclear Information System (INIS)

    McCracken, A.K.

    1980-01-01

    Eight of the papers to be considered in Session K are directly concerned, at least in part, with the Pool Critical Assembly (P.C.A.) benchmark at Oak Ridge. The remaining seven papers in this session, the subject of this review, are concerned with a variety of topics related to the general theme of Benchmarks and will be considered individually

  14. Isospin splittings of baryons

    International Nuclear Information System (INIS)

    Varga, Kalman; Genovese, Marco; Richard, Jean-Marc; Silvestre-Brac, Bernard

    1998-01-01

    We discuss the isospin-breaking mass differences among baryons, with particular attention in the charm sector to the Σc+ - Σc0, Σc++ - Σc0, and Ξc+ - Ξc0 splittings. Simple potential models cannot accommodate the trend of the available data on charm baryons. More precise measurements would offer the possibility of testing how well potential models describe the non-perturbative limit of QCD.

  15. A Proposal on the Geometry Splitting Strategy to Enhance the Calculation Efficiency in Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Han, Gi Yeong; Kim, Song Hyun; Kim, Do Hyun; Shin, Chang Ho; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of)

    2014-05-15

    In this study, how the geometry splitting strategy affects the calculation efficiency was analyzed, and a geometry splitting method was proposed to increase the calculation efficiency in Monte Carlo simulation. First, the neutron distribution characteristics in a deep penetration problem were analyzed. Then, considering the neutron population distribution, a geometry splitting method was devised. Using the proposed method, the FOMs for benchmark problems were estimated and compared with those of the conventional geometry splitting strategy. The results show that the proposed method can considerably increase the calculation efficiency when using the geometry splitting method. It is expected that the proposed method will contribute to optimizing the computational cost as well as reducing human errors in Monte Carlo simulation. Geometry splitting in Monte Carlo (MC) calculation is one of the most popular variance reduction techniques due to its simplicity, reliability and efficiency. To use geometry splitting, the user should determine the locations of the splitting surfaces and assign the relative importance of each region. Generally, the splitting parameters are decided by the user's experience. However, in this process, the splitting parameters can be selected ineffectively or erroneously. To prevent this, a common recommendation that helps the user eliminate guesswork is to split the geometry evenly. The importance of each region is then estimated by a few iterations so as to preserve the population of particles penetrating each region. However, the even geometry splitting method can make the calculation inefficient due to changes in the mean free path (MFP) of particles.
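
    As a concrete illustration of the technique discussed above, the toy Monte Carlo sketch below transports particles through a 1-D slab divided into regions with user-assigned importances: a particle entering a more important region is split into lower-weight copies, while one entering a less important region plays Russian roulette. The geometry, importances, and pure-absorption physics are assumptions for illustration, not the authors' proposed splitting strategy.

        # Toy geometry splitting / Russian roulette on a purely absorbing 1-D slab.
        # Each region is one mean free path thick; importances double per region.
        import random

        REGION_WIDTH = 1.0           # thickness of each region in mean free paths (assumed)
        IMPORTANCES = [1, 2, 4, 8]   # assumed importance map, increasing toward the far side

        def estimate_transmission(n_source=10_000):
            transmitted = 0.0
            bank = [(0, 1.0)] * n_source          # (region index, statistical weight)
            while bank:
                region, weight = bank.pop()
                if random.expovariate(1.0) < REGION_WIDTH:
                    continue                       # absorbed inside the current region
                nxt = region + 1
                if nxt == len(IMPORTANCES):
                    transmitted += weight          # escaped through the far surface
                    continue
                ratio = IMPORTANCES[nxt] / IMPORTANCES[region]
                if ratio >= 1.0:                   # more important region: split the particle
                    n_copies = int(ratio)
                    bank.extend([(nxt, weight / n_copies)] * n_copies)
                elif random.random() < ratio:      # less important region: Russian roulette
                    bank.append((nxt, weight / ratio))
            return transmitted / n_source

        # Analytic answer for pure absorption over 4 mean free paths is exp(-4) ~ 0.018.
        print(f"estimated transmission: {estimate_transmission():.4f}")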

  16. A Proposal on the Geometry Splitting Strategy to Enhance the Calculation Efficiency in Monte Carlo Simulation

    International Nuclear Information System (INIS)

    Han, Gi Yeong; Kim, Song Hyun; Kim, Do Hyun; Shin, Chang Ho; Kim, Jong Kyung

    2014-01-01

    In this study, how the geometry splitting strategy affects the calculation efficiency was analyzed, and a geometry splitting method was proposed to increase the calculation efficiency in Monte Carlo simulation. First, the neutron distribution characteristics in a deep penetration problem were analyzed. Then, considering the neutron population distribution, a geometry splitting method was devised. Using the proposed method, the FOMs for benchmark problems were estimated and compared with those of the conventional geometry splitting strategy. The results show that the proposed method can considerably increase the calculation efficiency when using the geometry splitting method. It is expected that the proposed method will contribute to optimizing the computational cost as well as reducing human errors in Monte Carlo simulation. Geometry splitting in Monte Carlo (MC) calculation is one of the most popular variance reduction techniques due to its simplicity, reliability and efficiency. To use geometry splitting, the user should determine the locations of the splitting surfaces and assign the relative importance of each region. Generally, the splitting parameters are decided by the user's experience. However, in this process, the splitting parameters can be selected ineffectively or erroneously. To prevent this, a common recommendation that helps the user eliminate guesswork is to split the geometry evenly. The importance of each region is then estimated by a few iterations so as to preserve the population of particles penetrating each region. However, the even geometry splitting method can make the calculation inefficient due to changes in the mean free path (MFP) of particles.

  17. Embryo splitting

    Directory of Open Access Journals (Sweden)

    Karl Illmensee

    2010-04-01

    Full Text Available Mammalian embryo splitting has successfully been established in farm animals. Embryo splitting is safely and efficiently used for assisted reproduction in several livestock species. In the mouse, efficient embryo splitting as well as single-blastomere cloning have been developed. In nonhuman primates, embryo splitting has resulted in several pregnancies. Human embryo splitting has been reported recently. Microsurgical embryo splitting under Institutional Review Board approval has been carried out to determine its efficiency for blastocyst development. Embryo splitting at the 6–8 cell stage provided a much higher developmental efficiency compared to splitting at the 2–5 cell stage. Embryo splitting may be advantageous for providing additional embryos to be cryopreserved and for patients with a low response to hormonal stimulation in assisted reproduction programs. Social and ethical issues concerning embryo splitting, including ethics committee guidelines, are addressed. Prognostic perspectives are presented for human embryo splitting in reproductive medicine.

  18. Multipole induced splitting of metal-cage vibrations in crystalline endohedral D2d-M2@C84 dimetallofullerenes.

    Science.gov (United States)

    Krause, M; Popov, V N; Inakuma, M; Tagmatarchis, N; Shinohara, H; Georgi, P; Dunsch, L; Kuzmany, H

    2004-01-22

    Metal-carbon cage vibrations of crystalline endohedral D2d-M2@C84 (M=Sc,Y,Dy) dimetallofullerenes were analyzed by temperature-dependent Raman scattering and a dynamical force field model. Three groups of metal-carbon cage modes were found at energies of 35-200 cm(-1) and assigned to metal-cage stretching and deformation vibrations. They exhibit a textbook example of the splitting of molecular vibrations in a crystal field. Induced dipole-dipole and quadrupole-quadrupole interactions account quantitatively for the observed mode splitting. Based on the metal-cage vibrational structure, it is demonstrated that the D2d-Y2@C84 dimetallofullerene retains a monoclinic crystal structure up to 550 K and undergoes a transition from a disordered to an ordered orientational state at a temperature of approximately 150 K.

  19. Photochemical water splitting mediated by a C1 shuttle

    KAUST Repository

    Alderman, N. P.

    2016-10-31

    The possibility of performing photochemical water splitting in a two-stage system, separately releasing the H2 and O2 components, has been probed with two separate catalysts and in combination with a formaldehyde/formate shuttling redox couple. In the first stage, formaldehyde releases hydrogen vigorously in the presence of an Na4[Fe(CN)6]·10H2O catalyst, selectively affording the formate anion. In the second stage, the formate anion is hydrogenated back to formaldehyde by water and in the presence of a Bi2WO6 photocatalyst whilst releasing oxygen. Both stages operate at room temperature and under visible light irradiation. The two separate photocatalysts are compatible since water splitting can also be obtained in one-pot experiments with simultaneous H2/O2 evolution.

  20. Photochemical water splitting mediated by a C1 shuttle

    KAUST Repository

    Alderman, N. P.; Sommers, J. M.; Viasus, C. J.; Wang, C. H T; Peneau, V.; Gambarotta, S.; Vidjayacoumar, B.; Al-Bahily, K. A.

    2016-01-01

    The possibility of performing photochemical water splitting in a two-stage system, separately releasing the H2 and O2 components, has been probed with two separate catalysts and in combination with a formaldehyde/formate shuttling redox couple. In the first stage, formaldehyde releases hydrogen vigorously in the presence of an Na4[Fe(CN)6]·10H2O catalyst, selectively affording the formate anion. In the second stage, the formate anion is hydrogenated back to formaldehyde by water and in the presence of a Bi2WO6 photocatalyst whilst releasing oxygen. Both stages operate at room temperature and under visible light irradiation. The two separate photocatalysts are compatible since water splitting can also be obtained in one-pot experiments with simultaneous H2/O2 evolution.

  1. Splitting in Dual-Phase 590 high strength steel plates

    International Nuclear Information System (INIS)

    Yang Min; Chao, Yuh J.; Li Xiaodong; Tan Jinzhu

    2008-01-01

    Charpy V-notch impact tests on 5.5 mm thick, hot-rolled Dual-Phase 590 (DP590) steel plate were evaluated at temperatures ranging from 90 °C to -120 °C. Similar tests on 2.0 mm thick DP590 HDGI steel plate were also conducted at room temperature. Splitting, or secondary cracking, was observed on the fractured surfaces, and the mechanisms of the splitting were then investigated. Fracture surfaces were analyzed by optical microscopy (OM) and scanning electron microscopy (SEM). The composition of the steel plates was determined by electron probe microanalysis (EPMA), and the micro Vickers hardness of the steel plates was also surveyed. Results show that splitting occurred on the main fractured surfaces of hot-rolled steel specimens at various testing temperatures. At temperatures above the ductile-brittle transition temperature (DBTT), -95 °C, where the fracture is predominantly ductile, the length and amount of splitting decreased with increasing temperature. At temperatures lower than the DBTT, where the fracture is predominantly brittle, both the length and width of the splitting are insignificant. Splitting in HDGI steel plates appeared only in specimens of the T-L direction. The analysis revealed that splitting in the hot-rolled plate is caused by silicate and carbide inclusions, while splitting in the HDGI plate results from a strip microstructure due to its high manganese content and low silicon content. The micro Vickers hardness of either the inclusions or the strip microstructures is higher than that of the respective base steel.

  2. The ground state tunneling splitting and the zero point energy of malonaldehyde: a quantum Monte Carlo determination.

    Science.gov (United States)

    Viel, Alexandra; Coutinho-Neto, Maurício D; Manthe, Uwe

    2007-01-14

    Quantum dynamics calculations of the ground state tunneling splitting and of the zero point energy of malonaldehyde on the full-dimensional potential energy surface proposed by Yagi et al. [J. Chem. Phys. 115, 10647 (2001)] are reported. The exact diffusion Monte Carlo and the projection operator imaginary time spectral evolution methods are used to compute accurate benchmark results for this 21-dimensional ab initio potential energy surface. A tunneling splitting of 25.7 ± 0.3 cm-1 is obtained, and the vibrational ground state energy is found to be 15 122 ± 4 cm-1. Isotopic substitution of the tunneling hydrogen reduces the tunneling splitting to 3.21 ± 0.09 cm-1 and the vibrational ground state energy to 14 385 ± 2 cm-1. The computed tunneling splittings are slightly higher than the experimental values, as expected from the potential energy surface, which slightly underestimates the barrier height, and they are slightly lower than the results from instanton theory obtained using the same potential energy surface.

  3. Effects of benchmarking on the quality of type 2 diabetes care: results of the OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study in Greece

    Science.gov (United States)

    Tsimihodimos, Vasilis; Kostapanos, Michael S.; Moulis, Alexandros; Nikas, Nikos; Elisaf, Moses S.

    2015-01-01

    Objectives: To investigate the effect of benchmarking on the quality of type 2 diabetes (T2DM) care in Greece. Methods: The OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study [ClinicalTrials.gov identifier: NCT00681850] was an international multicenter, prospective cohort study. It included physicians randomized 3:1 to either receive benchmarking for glycated hemoglobin (HbA1c), systolic blood pressure (SBP) and low-density lipoprotein cholesterol (LDL-C) treatment targets (benchmarking group) or not (control group). The proportions of patients achieving the targets of the above-mentioned parameters were compared between groups after 12 months of treatment. Also, the proportions of patients achieving those targets at 12 months were compared with baseline in the benchmarking group. Results: In the Greek region, the OPTIMISE study included 797 adults with T2DM (570 in the benchmarking group). At month 12 the proportion of patients within the predefined targets for SBP and LDL-C was greater in the benchmarking compared with the control group (50.6 versus 35.8%, and 45.3 versus 36.1%, respectively). However, these differences were not statistically significant. No difference between groups was noted in the percentage of patients achieving the predefined target for HbA1c. At month 12 the increase in the percentage of patients achieving all three targets was greater in the benchmarking (5.9–15.0%) than in the control group (2.7–8.1%). In the benchmarking group more patients were on target regarding SBP (50.6% versus 29.8%), LDL-C (45.3% versus 31.3%) and HbA1c (63.8% versus 51.2%) at 12 months compared with baseline. Benchmarking may comprise a promising tool for improving the quality of T2DM care. Nevertheless, target achievement rates of each, and of all three, quality indicators were suboptimal, indicating there are still unmet needs in the management of T2DM. PMID:26445642

  4. WWER in-core fuel management benchmark definition

    International Nuclear Information System (INIS)

    Apostolov, T.; Alekova, G.; Prodanova, R.; Petrova, T.; Ivanov, K.

    1994-01-01

    Two benchmark problems for WWER-440, including design parameters, operating conditions and measured quantities, are discussed in this paper. Some benchmark results for the effective multiplication factor K_eff, natural boron concentration C_B and relative power distribution K_q, obtained by use of the code package, are presented. (authors). 5 refs., 3 tabs

  5. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  6. Oxidative addition of the chloromethane C-Cl bond to Pd, an ab initio benchmark and DFT validation study

    NARCIS (Netherlands)

    de Jong, G.T.; Bickelhaupt, F.M.

    2006-01-01

    We have computed a state-of-the-art benchmark potential energy surface (PES) for the archetypal oxidative addition of the chloromethane C-Cl bond to the palladium atom and have used this to evaluate the performance of 26 popular density functionals, covering LDA, GGA, meta-GGA, and hybrid density functionals.

  7. Benchmark validation of statistical models: Application to mediation analysis of imagery and memory.

    Science.gov (United States)

    MacKinnon, David P; Valente, Matthew J; Wurpts, Ingrid C

    2018-03-29

    This article describes benchmark validation, an approach to validating a statistical model. According to benchmark validation, a valid model generates estimates and research conclusions consistent with a known substantive effect. Three types of benchmark validation - (a) benchmark value, (b) benchmark estimate, and (c) benchmark effect - are described and illustrated with examples. Benchmark validation methods are especially useful for statistical models with assumptions that are untestable or very difficult to test. Benchmark effect validation methods were applied to evaluate statistical mediation analysis in eight studies using the established effect that increasing mental imagery improves recall of words. Statistical mediation analysis led to conclusions about mediation that were consistent with the established theory that increased imagery leads to increased word recall. Benchmark validation based on established substantive theory is discussed as a general way to investigate characteristics of statistical models and as a complement to mathematical proof and statistical simulation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
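
    The mediation analysis referenced above is, at its core, two regressions: path a (treatment to mediator) and path b (mediator to outcome, controlling for treatment), with the indirect effect estimated as the product a*b. The sketch below runs that product-of-coefficients calculation on simulated imagery/recall-style data; the variable names and effect sizes are assumptions, not the cited studies' data.

        # Product-of-coefficients mediation on simulated data:
        # X (imagery instruction) -> M (reported imagery use) -> Y (words recalled).
        import numpy as np

        rng = np.random.default_rng(1)
        n = 300
        X = rng.integers(0, 2, size=n).astype(float)    # 0 = control, 1 = imagery instruction
        M = 0.8 * X + rng.normal(size=n)                # mediator, driven by X (assumed effect)
        Y = 0.5 * M + 0.1 * X + rng.normal(size=n)      # outcome, driven mostly by M

        def ols_slopes(y, *predictors):
            """Least-squares slopes of y on the predictors (intercept estimated, then dropped)."""
            A = np.column_stack([np.ones_like(y), *predictors])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            return coef[1:]

        (a,) = ols_slopes(M, X)            # path a: X -> M
        b, c_prime = ols_slopes(Y, M, X)   # path b: M -> Y given X; c_prime is the direct effect
        print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_prime:.3f}")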

  8. H.B. Robinson-2 pressure vessel benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Remec, I.; Kam, F.B.K.

    1998-02-01

    The H. B. Robinson Unit 2 Pressure Vessel Benchmark (HBR-2 benchmark) is described and analyzed in this report. Analysis of the HBR-2 benchmark can be used as partial fulfillment of the requirements for the qualification of the methodology for calculating neutron fluence in pressure vessels, as required by the U.S. Nuclear Regulatory Commission Regulatory Guide DG-1053, Calculational and Dosimetry Methods for Determining Pressure Vessel Neutron Fluence. Section 1 of this report describes the HBR-2 benchmark and provides all the dimensions, material compositions, and neutron source data necessary for the analysis. The measured quantities, to be compared with the calculated values, are the specific activities at the end of fuel cycle 9. The characteristic feature of the HBR-2 benchmark is that it provides measurements on both sides of the pressure vessel: in the surveillance capsule attached to the thermal shield and in the reactor cavity. In section 2, the analysis of the HBR-2 benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed with three multigroup libraries based on ENDF/B-VI: BUGLE-93, SAILOR-95 and BUGLE-96. The average ratio of the calculated-to-measured specific activities (C/M) for the six dosimeters in the surveillance capsule was 0.90 ± 0.04 for all three libraries. The average C/Ms for the cavity dosimeters (without the neptunium dosimeter) were 0.89 ± 0.10, 0.91 ± 0.10, and 0.90 ± 0.09 for the BUGLE-93, SAILOR-95 and BUGLE-96 libraries, respectively. It is expected that the agreement of the calculations with the measurements, similar to the agreement obtained in this research, should typically be observed when the discrete-ordinates method and ENDF/B-VI libraries are used for the HBR-2 benchmark analysis.

  9. Data splitting for artificial neural networks using SOM-based stratified sampling.

    Science.gov (United States)

    May, R J; Maier, H R; Dandy, G C

    2010-03-01

    Data splitting is an important consideration during artificial neural network (ANN) development where hold-out cross-validation is commonly employed to ensure generalization. Even for a moderate sample size, the sampling methodology used for data splitting can have a significant effect on the quality of the subsets used for training, testing and validating an ANN. Poor data splitting can result in inaccurate and highly variable model performance; however, the choice of sampling methodology is rarely given due consideration by ANN modellers. Increased confidence in the sampling is of paramount importance, since the hold-out sampling is generally performed only once during ANN development. This paper considers the variability in the quality of subsets that are obtained using different data splitting approaches. A novel approach to stratified sampling, based on Neyman sampling of the self-organizing map (SOM), is developed, with several guidelines identified for setting the SOM size and sample allocation in order to minimize the bias and variance in the datasets. Using an example ANN function approximation task, the SOM-based approach is evaluated in comparison to random sampling, DUPLEX, systematic stratified sampling, and trial-and-error sampling to minimize the statistical differences between data sets. Of these approaches, DUPLEX is found to provide benchmark performance with good model performance, with no variability. The results show that the SOM-based approach also reliably generates high-quality samples and can therefore be used with greater confidence than other approaches, especially in the case of non-uniform datasets, with the benefit of scalability to perform data splitting on large datasets. Copyright 2009 Elsevier Ltd. All rights reserved.
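
    The contrast drawn above between random and stratified hold-out splitting can be illustrated with a much simpler scheme than the SOM-based Neyman sampling of the paper: bin one variable into quantile strata and sample each stratum proportionally into the test set. The sketch below does exactly that on synthetic data and is only a stand-in for the idea of stratified data splitting, not the authors' method.

        # Stratified hold-out split with quantile bins of the target as strata.
        # Illustrative only; the paper uses SOM clusters with Neyman allocation.
        import numpy as np

        rng = np.random.default_rng(42)
        X = rng.normal(size=(500, 3))                    # synthetic inputs
        y = X[:, 0] ** 2 + 0.1 * rng.normal(size=500)    # synthetic target

        def stratified_split(y, test_fraction=0.3, n_strata=5):
            """Return train/test index arrays sampling each y-quantile bin proportionally."""
            edges = np.quantile(y, np.linspace(0.0, 1.0, n_strata + 1))
            strata = np.clip(np.searchsorted(edges, y, side="right") - 1, 0, n_strata - 1)
            test_parts = []
            for s in range(n_strata):
                members = np.flatnonzero(strata == s)
                n_test = int(round(test_fraction * members.size))
                test_parts.append(rng.choice(members, size=n_test, replace=False))
            test_idx = np.concatenate(test_parts)
            train_idx = np.setdiff1d(np.arange(y.size), test_idx)
            return train_idx, test_idx

        train_idx, test_idx = stratified_split(y)
        print(len(train_idx), len(test_idx))             # roughly a 70/30 split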

  10. Benchmarking local healthcare-associated infections: Available benchmarks and interpretation challenges

    Directory of Open Access Journals (Sweden)

    Aiman El-Saed

    2013-10-01

    Full Text Available Summary: Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for the data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when considering such a benchmark. The GCC center for infection control took some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons. Keywords: Benchmarking, Comparison, Surveillance, Healthcare-associated infections

  11. Cross sections, benchmarks, etc.: What is data testing all about

    International Nuclear Information System (INIS)

    Wagschal, J.; Yeivin, Y.

    1985-01-01

    In order to determine the consistency of two distinct measurements of a physical quantity, the discrepancy d between the two should be compared with its own standard deviation, σ = √(σ₁² + σ₂²). To properly test a given cross-section library by a set of benchmark (integral) measurements, the quantity corresponding to (d/σ)² is the quadratic form d†C⁻¹d. Here d is the vector of which the components are the discrepancies between the calculated values of the integral parameters and their corresponding measured values, and C is the uncertainty matrix of these discrepancies. This quadratic form is the only true measure of the joint consistency of the library and benchmarks. On the other hand, the very matrix C is essentially all one needs to adjust the library by the benchmarks. Therefore, any argument against adjustment simultaneously disqualifies all serious attempts to test cross-section libraries against integral benchmarks
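
    For readers who want to evaluate the consistency test numerically, the quadratic form is a chi-square-like statistic that can be computed directly. The sketch below uses illustrative numbers only (three hypothetical benchmarks) and solves a linear system rather than inverting C explicitly.

```python
import numpy as np

# d: calculated-minus-measured discrepancies for three hypothetical benchmarks
d = np.array([0.004, -0.007, 0.002])

# C: uncertainty (covariance) matrix of those discrepancies, also illustrative
C = np.array([[4.0e-5, 1.0e-5, 0.0],
              [1.0e-5, 9.0e-5, 2.0e-5],
              [0.0,    2.0e-5, 2.5e-5]])

# chi^2 = d^T C^{-1} d, evaluated without forming C^{-1}
chi2 = d @ np.linalg.solve(C, d)

# Consistency check: compare chi^2 against its expectation (the number of benchmarks)
print(f"chi^2 = {chi2:.2f} for {len(d)} benchmarks")
```

    A value far above the number of benchmarks signals inconsistency between the library and the integral measurements, which is the same information a generalized least-squares adjustment would exploit.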

  12. The Split Nucleus of Comet Wilson (C/1986 P1 = 1987 VII).

    Science.gov (United States)

    Meech, Karen J.; Knopp, Graham P.; Farnham, Tony L.; Green, Daniel

    1995-07-01

    We present CCD observations of Comet Wilson (C/1986 P1 = 1987 VII) from 26 nights during the time period 1986 October to 1991 February, which brackets perihelion. During the observing run of 1988 February, the comet was observed to have split into two fragments. Our broadband CCD photometry, along with photometry from the International Cometary Quarterly, shows a steady decline in brightness of Comet Wilson post-perihelion, with an outburst between heliocentric distances r = 2.8 and 3.3 AU during 1987 October and November. By r ≈ 7 AU, the fragment had faded with respect to the parent and was no longer centrally condensed. A brightness limit of m_R ≈ 25, when the comet was at r = 12.65 AU, constrains the primary nucleus to have a maximum radius between 5 and 7 km, assuming an albedo of 0.04. The accuracy of direct orbital solutions for the parent body and fragment to determine the time of splitting was limited by the presence of significant nongravitational forces and the limited fragment orbital coverage. We used the relative position of the fragment with respect to the parent to calculate a time of splitting which was consistent with the time of the observed outburst. We discuss the possible causes of the splitting. The coma of Comet Wilson was observed to have a surface-brightness profile which fell off as ρ^-1 (characteristic of a canonical steady-state coma under the influence of radiation pressure) for all of the data, with the exception of the data taken during 1987 November when the gradient was ρ^-1.3. This steeper slope was probably caused by the injection of new material into the coma during the outburst. During 1986 October, there was a break in the surface-brightness profile slope which may be interpreted as the distance at which grains are swept into the tail. The profiles suggested grain velocities of a few × 10^2 to 10 m s^-1 for grains between 1 and a few hundred micrometers. Finson-Probstein dust modeling showed that ejection of grains began

  13. C5 Benchmark Problem with Discrete Ordinate Radiation Transport Code DENOVO

    Energy Technology Data Exchange (ETDEWEB)

    Yesilyurt, Gokhan [ORNL; Clarno, Kevin T [ORNL; Evans, Thomas M [ORNL; Davidson, Gregory G [ORNL; Fox, Patricia B [ORNL

    2011-01-01

    The C5 benchmark problem proposed by the Organisation for Economic Co-operation and Development/Nuclear Energy Agency was modeled to examine the capabilities of Denovo, a three-dimensional (3-D) parallel discrete ordinates (S_N) radiation transport code, for problems with no spatial homogenization. Denovo uses state-of-the-art numerical methods to obtain accurate solutions to the Boltzmann transport equation. Problems were run in parallel on Jaguar, a high-performance supercomputer located at Oak Ridge National Laboratory. Both the two-dimensional (2-D) and 3-D configurations were analyzed, and the results were compared with the reference MCNP Monte Carlo calculations. For an additional comparison, SCALE/KENO-V.a Monte Carlo solutions were also included. In addition, a sensitivity analysis was performed for the optimal angular quadrature and mesh resolution for both the 2-D and 3-D infinite lattices of UO2 fuel pin cells. Denovo was verified with the C5 problem. The effective multiplication factors, pin powers, and assembly powers were found to be in good agreement with the reference MCNP and SCALE/KENO-V.a Monte Carlo calculations.

  14. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, sponsored by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.
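
    As a rough illustration of the first benchmarking step the guide outlines, the sketch below computes a single whole-building metric (annual energy use intensity) and compares it with a peer benchmark. The field names and the threshold are hypothetical placeholders, not values from the Labs21 database.

```python
# Minimal sketch of a whole-building benchmarking step (illustrative values only).
def energy_use_intensity(annual_kwh: float, gross_floor_area_m2: float) -> float:
    """Annual site energy use intensity in kWh per square metre."""
    return annual_kwh / gross_floor_area_m2

# Hypothetical laboratory building and peer-group benchmark
eui = energy_use_intensity(annual_kwh=3_200_000, gross_floor_area_m2=8_000)
peer_median_eui = 350.0   # assumed peer median, kWh/m2/yr

if eui > peer_median_eui:
    print(f"EUI {eui:.0f} kWh/m2/yr exceeds the peer median ({peer_median_eui:.0f}); "
          "system-level metrics (ventilation, plug loads, chilled water) are worth a closer look.")
else:
    print(f"EUI {eui:.0f} kWh/m2/yr is at or below the peer median.")
```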

  15. SplitDist—Calculating Split-Distances for Sets of Trees

    DEFF Research Database (Denmark)

    Mailund, T

    2004-01-01

    We present a tool for comparing a set of input trees, calculating for each pair of trees the split-distances, i.e., the number of splits in one tree not present in the other.
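
    The quantity being computed is easy to state concretely: each internal edge of an unrooted tree induces a bipartition (split) of the leaf set, and the distance counts the splits of one tree that are absent from the other. A minimal sketch, assuming each tree is already given as a collection of splits (sets of leaf names on one side of an edge):

```python
def normalize(split, leaves):
    """Represent a bipartition canonically by the side not containing a fixed reference leaf."""
    ref = min(leaves)
    return frozenset(split) if ref not in split else frozenset(leaves - set(split))

def split_distance(splits_a, splits_b, leaves):
    """Number of splits in tree A that are not present in tree B (one-sided distance)."""
    a = {normalize(s, leaves) for s in splits_a}
    b = {normalize(s, leaves) for s in splits_b}
    return len(a - b)

# Toy example on five leaves
leaves = {"t1", "t2", "t3", "t4", "t5"}
tree_a = [{"t1", "t2"}, {"t4", "t5"}]          # splits induced by tree A's internal edges
tree_b = [{"t1", "t2"}, {"t3", "t4"}]          # splits induced by tree B's internal edges
print(split_distance(tree_a, tree_b, leaves))  # -> 1
```

    Summing the two one-sided counts gives the familiar symmetric (Robinson-Foulds) distance.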

  16. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2009-03-01

    The purpose of this research is to provide a fundamental computational investigation into the possible integration of experimental activities with the Advanced Test Reactor Critical (ATR-C) facility with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of integral data for improving neutron cross sections. Further assessment of oscillation

  17. How to Advance TPC Benchmarks with Dependability Aspects

    Science.gov (United States)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring their performances. While TPC-E measures the recovery time of some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  18. Experimental Test for Benchmark 1--Deck Lid Inner Panel

    International Nuclear Information System (INIS)

    Xu Siguang; Lanker, Terry; Zhang, Jimmy; Wang Chuantao

    2005-01-01

    The Benchmark 1 deck lid inner panel is designed for both aluminum and steel, based on a current General Motors Corporation vehicle product. The die is constructed with a soft tool material. The die successfully produced aluminum and steel panels without splits and wrinkles. Detailed surface strain and thickness measurements were made at selected sections to include a wide range of deformation patterns, from uniaxial tension mode to bi-axial tension mode. The springback measurements were made with a CMM machine along the part's hem edge, which is critical to dimensional accuracy. It is expected that the data obtained will provide a useful source for forming and springback studies on future automotive panels

  19. The new deterministic 3-D radiation transport code Multitrans: C5G7 MOX fuel assembly benchmark

    International Nuclear Information System (INIS)

    Kotiluoto, P.

    2003-01-01

    The novel deterministic three-dimensional radiation transport code MultiTrans is based on a combination of the advanced tree multigrid technique and the simplified P3 (SP3) radiation transport approximation. In the tree multigrid technique, an automatic mesh refinement is performed on material surfaces. The tree multigrid is generated directly from stereo-lithography (STL) files exported by computer-aided design (CAD) systems, thus allowing an easy interface for construction and upgrading of the geometry. The deterministic MultiTrans code allows fast solution of complicated three-dimensional transport problems in detail, offering a new tool for nuclear applications in reactor physics. In order to determine the feasibility of a new code, computational benchmarks need to be carried out. In this work, the MultiTrans code is tested for a seven-group three-dimensional MOX fuel assembly transport benchmark without spatial homogenization (NEA C5G7 MOX). (author)

  20. Splitting strings on integrable backgrounds

    Energy Technology Data Exchange (ETDEWEB)

    Vicedo, Benoit

    2011-05-15

    We use integrability to construct the general classical splitting string solution on R × S^3. Namely, given any incoming string solution satisfying a necessary self-intersection property at some given instant in time, we use the integrability of the worldsheet σ-model to construct the pair of outgoing strings resulting from a split. The solution for each outgoing string is expressed recursively through a sequence of dressing transformations, the parameters of which are determined by the solutions to Birkhoff factorization problems in an appropriate real form of the loop group of SL_2(C). (orig.)

  1. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2009-07-01

    The purpose of this document is to identify some suggested types of experiments that can be performed in the Advanced Test Reactor Critical (ATR-C) facility. A fundamental computational investigation is provided to demonstrate possible integration of experimental activities in the ATR-C with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of

  2. Three-dimensional RAMA fluence methodology benchmarking

    International Nuclear Information System (INIS)

    Baker, S. P.; Carter, R. G.; Watkins, K. E.; Jones, D. B.

    2004-01-01

    This paper describes the benchmarking of the RAMA Fluence Methodology software, which has been performed in accordance with U.S. Nuclear Regulatory Commission Regulatory Guide 1.190. The RAMA Fluence Methodology has been developed by TransWare Enterprises Inc. through funding provided by the Electric Power Research Institute, Inc. (EPRI) and the Boiling Water Reactor Vessel and Internals Project (BWRVIP). The purpose of the software is to provide an accurate method for calculating neutron fluence in BWR pressure vessels and internal components. The Methodology incorporates a three-dimensional deterministic transport solution with flexible arbitrary geometry representation of reactor system components, previously available only with Monte Carlo solution techniques. Benchmarking was performed on measurements obtained from three standard benchmark problems which include the Pool Criticality Assembly (PCA), VENUS-3, and H. B. Robinson Unit 2 benchmarks, and on flux wire measurements obtained from two BWR nuclear plants. The calculated to measured (C/M) ratios range from 0.93 to 1.04 demonstrating the accuracy of the RAMA Fluence Methodology in predicting neutron flux, fluence, and dosimetry activation. (authors)

  3. Closed-loop neuromorphic benchmarks

    CSIR Research Space (South Africa)

    Stewart, TC

    2015-11-01

    Full Text Available Closed-Loop Neuromorphic Benchmarks, by Terrence C. Stewart, Travis DeWolf, Ashley Kleinhans, and Chris Eliasmith; submitted to Frontiers in Neuroscience. Affiliations: Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada, and Mobile Intelligent Autonomous Systems group, Council for Scientific and Industrial Research, Pretoria, South Africa.

  4. Concrete benchmark experiment: ex-vessel LWR surveillance dosimetry

    International Nuclear Information System (INIS)

    Ait Abderrahim, H.; D'Hondt, P.; Oeyen, J.; Risch, P.; Bioux, P.

    1993-09-01

    The analysis of DOEL-1 in-vessel and ex-vessel neutron dosimetry, using the DOT 3.5 Sn code coupled with the VITAMIN-C cross-section library, showed the same C/E values for different detectors at the surveillance capsule and the ex-vessel cavity positions. These results seem to be in contradiction with those obtained in several Benchmark experiments (PCA, PSF, VENUS...) when using the same computational tools. Indeed a strong decreasing radial trend of the C/E was observed, partly explained by the overestimation of the iron inelastic scattering. The flat trend seen in DOEL-1 could be explained by compensating errors in the calculation such as the backscattering due to the concrete walls outside the cavity. The 'Concrete Benchmark' experiment has been designed to judge the ability of these calculation methods to treat the backscattering. This paper describes the 'Concrete Benchmark' experiment, the measured and computed neutron dosimetry results and their comparison. This preliminary analysis seems to indicate an overestimation of the backscattering effect in the calculations. (authors). 5 figs., 1 tab., 7 refs

  5. Hydrogen generation due to water splitting on Si-terminated 4H-SiC(0001) surfaces

    Science.gov (United States)

    Li, Qingfang; Li, Qiqi; Yang, Cuihong; Rao, Weifeng

    2018-02-01

    The chemical reactions of hydrogen gas generation via water splitting on Si-terminated 4H-SiC surfaces with or without C/Si vacancies were studied by using first-principles calculations. We studied the reaction mechanisms of hydrogen generation on the 4H-SiC(0001) surface. Our calculations demonstrate that there are major rearrangements in the surface when H2O approaches the SiC(0001) surface. The first H splitting from water can occur with ground-state electronic structures. The second H splitting involves an energy barrier of 0.65 eV. However, the energy barrier for two H atoms desorbing from the Si-face and forming H2 gas is 3.04 eV. In addition, it is found that C and Si vacancies can form more easily in SiC(0001) surfaces than in SiC bulk and nanoribbons. The C/Si vacancies introduced can enhance photocatalytic activities. It is easier to split OH on the SiC(0001) surface with vacancies compared to the clean SiC surface. H2 can form on the 4H-SiC(0001) surface with C and Si vacancies if energy barriers of 1.02 and 2.28 eV are surmounted, respectively. Therefore, the SiC(0001) surface with a C vacancy has potential applications in photocatalytic water-splitting.

  6. Hydrogen production via thermochemical water-splitting by lithium redox reaction

    International Nuclear Information System (INIS)

    Nakamura, Naoya; Miyaoka, Hiroki; Ichikawa, Takayuki; Kojima, Yoshitsugu

    2013-01-01

    Highlights: •Hydrogen production via water-splitting by lithium redox reactions possibly proceeds below 800 °C. •Entropy control by using a nonequilibrium technique successfully reduces the reaction temperature. •The operating temperature should be further reduced by optimizing the nonequilibrium condition to control the cycle. -- Abstract: Hydrogen production via thermochemical water-splitting by lithium redox reactions was investigated as an energy conversion technique. The reaction system consists of three reactions, which are hydrogen generation by the reaction of lithium and lithium hydroxide, metal separation by thermolysis of lithium oxide, and oxygen generation by hydrolysis of lithium peroxide. The hydrogen generation reaction was completed at 500 °C. The metal separation reaction is thermodynamically difficult because it requires about 3400 °C under equilibrium conditions. However, experimental results indicated that the reaction temperature was drastically reduced to 800 °C by using a nonequilibrium technique. The hydrolysis reaction was exothermic, and was completed by heating up to 300 °C. Therefore, it was expected that water-splitting by lithium redox reactions could be operated below 800 °C under nonequilibrium conditions

  7. Direct selenylation of mixed Ni/Fe metal-organic frameworks to NiFe-Se/C nanorods for overall water splitting

    Science.gov (United States)

    Xu, Bo; Yang, He; Yuan, Lincheng; Sun, Yiqiang; Chen, Zhiming; Li, Cuncheng

    2017-10-01

    Development of low-cost, highly active bifunctional catalysts for efficient overall water splitting based on earth-abundant metals is still a challenging task. In this work, we report a NiFe-Se/C composite nanorod as an efficient non-precious-metal electrochemical catalyst derived from direct selenylation of a mixed Ni/Fe metal-organic framework. The as-obtained catalyst requires low overpotentials to drive 10 mA cm-2 for the HER (160 mV) and OER (240 mV) in 1.0 M KOH, respectively, and its catalytic activity is maintained for at least 20 h. Moreover, water electrolysis using this catalyst achieves a high water-splitting current density of 10 mA cm-2 at a cell voltage of 1.68 V.

  8. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    Full Text Available In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool to search for a point of reference necessary to assess an institution's competitive position and learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking application in HEIs worldwide. The study involves indicating premises of using benchmarking in HEIs. It also contains a detailed examination of the types, approaches and scope of benchmarking initiatives. The thorough insight into benchmarking applications enabled developing a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects and relates them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen in order to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, scientific literature and the experience of the author from active participation in benchmarking projects. The paper concludes with recommendations for university managers undertaking benchmarking, derived on the basis of the conducted analysis.

  9. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1

    International Nuclear Information System (INIS)

    Van Der Marck, S. C.

    2012-01-01

    Three nuclear data libraries have been tested extensively using criticality safety benchmark calculations. The three libraries are the new release of the US library ENDF/B-VII.1 (2011), the new release of the Japanese library JENDL-4.0 (2011), and the OECD/NEA library JEFF-3.1 (2006). All calculations were performed with the continuous-energy Monte Carlo code MCNP (version 4C3, as well as version 6-beta1). Around 2000 benchmark cases from the International Handbook of Criticality Safety Benchmark Experiments (ICSBEP) were used. The results were analyzed per ICSBEP category, and per element. Overall, the three libraries show similar performance on most criticality safety benchmarks. The largest differences are probably caused by elements such as Be, C, Fe, Zr, W. (authors)
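
    A sketch of the kind of per-category aggregation described above, assuming the raw results are available as (ICSBEP category, calculated k-eff, benchmark k-eff, benchmark uncertainty) records; the values shown are placeholders, not results from the paper:

```python
from collections import defaultdict
from statistics import mean

# (ICSBEP category, calculated k_eff, benchmark k_eff, benchmark 1-sigma) -- placeholder values
results = [
    ("HEU-MET-FAST", 1.0012, 1.0000, 0.0010),
    ("HEU-MET-FAST", 0.9991, 1.0000, 0.0012),
    ("PU-SOL-THERM", 1.0034, 1.0000, 0.0033),
    ("PU-SOL-THERM", 1.0021, 1.0000, 0.0035),
]

by_category = defaultdict(list)
for category, calc, bench, sigma in results:
    by_category[category].append((calc - bench) / sigma)   # discrepancy in units of benchmark sigma

for category, discrepancies in sorted(by_category.items()):
    print(f"{category:14s}  n={len(discrepancies)}  mean (C-E)/sigma = {mean(discrepancies):+.2f}")
```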

  10. Updating the ACT College Readiness Benchmarks. ACT Research Report Series 2013 (6)

    Science.gov (United States)

    Allen, Jeff

    2013-01-01

    The ACT College Readiness Benchmarks are the ACT® College Readiness Assessment scores associated with a 50% chance of earning a B or higher grade in typical first-year credit-bearing college courses. The Benchmarks also correspond to an approximate 75% chance of earning a C or higher grade in these courses. There are four Benchmarks, corresponding…
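
    The Benchmarks are defined through probability-of-success models. A sketch of the underlying arithmetic, assuming a fitted logistic model P(B or higher) = 1 / (1 + exp(-(b0 + b1*score))) with made-up coefficients rather than ACT's actual estimates:

```python
import math

# Hypothetical logistic-regression coefficients linking ACT score to P(B or higher)
b0, b1 = -6.9, 0.30

def p_success(score):
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * score)))

# The Benchmark is the score where the estimated success probability first reaches 50%
benchmark = next(s for s in range(1, 37) if p_success(s) >= 0.5)
print(f"Benchmark score: {benchmark}  (P(B or higher) = {p_success(benchmark):.2f})")
```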

  11. MOF-derived Co-doped nickel selenide/C electrocatalysts supported on Ni foam for overall water splitting

    KAUST Repository

    Ming, Fangwang; Liang, Hanfeng; Shi, Huanhuan; Xu, Xun; Mei, Gui; Wang, Zhoucheng

    2016-01-01

    It is of prime importance to develop dual-functional electrocatalysts with good activity for overall water splitting, which remains a great challenge. Herein, we report the synthesis of a Co-doped nickel selenide (a mixture of NiSe and NiSe2)/C hybrid nanostructure supported on Ni foam using a metal-organic framework as the precursor. The resulting catalyst exhibits excellent catalytic activity toward the oxygen evolution reaction (OER), which only requires an overpotential of 275 mV to drive a current density of 30 mA cm-2. This overpotential is much lower than those reported for precious-metal-free OER catalysts. The hybrid is also capable of catalyzing the hydrogen evolution reaction (HER) efficiently. A current density of -10 mA cm-2 can be achieved at 90 mV. In addition, such a hybrid nanostructure can achieve 10 and 30 mA cm-2 at potentials of 1.6 and 1.71 V, respectively, along with good durability when functioning as both the cathode and the anode for overall water splitting in basic media.

  12. MOF-derived Co-doped nickel selenide/C electrocatalysts supported on Ni foam for overall water splitting

    KAUST Repository

    Ming, Fangwang

    2016-09-01

    It is of prime importance to develop dual-functional electrocatalysts with good activity for overall water splitting, which remains a great challenge. Herein, we report the synthesis of a Co-doped nickel selenide (a mixture of NiSe and NiSe2)/C hybrid nanostructure supported on Ni foam using a metal-organic framework as the precursor. The resulting catalyst exhibits excellent catalytic activity toward the oxygen evolution reaction (OER), which only requires an overpotential of 275 mV to drive a current density of 30 mA cm-2. This overpotential is much lower than those reported for precious-metal-free OER catalysts. The hybrid is also capable of catalyzing the hydrogen evolution reaction (HER) efficiently. A current density of -10 mA cm-2 can be achieved at 90 mV. In addition, such a hybrid nanostructure can achieve 10 and 30 mA cm-2 at potentials of 1.6 and 1.71 V, respectively, along with good durability when functioning as both the cathode and the anode for overall water splitting in basic media.

  13. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    Full Text Available The term benchmarking is encountered in the implementation of total quality management (TQM), termed holistic quality management in Indonesian, because benchmarking is a tool to look for ideas and to learn from other libraries. Benchmarking is a systematic and continuous process of measuring and comparing an organization's business processes against those of other organizations, in order to obtain information that can help the organization improve its performance.

  14. Concrete benchmark experiment: ex-vessel LWR surveillance dosimetry; Experience ``Benchmark beton`` pour la dosimetrie hors cuve dans les reacteurs a eau legere

    Energy Technology Data Exchange (ETDEWEB)

    Ait Abderrahim, H.; D`Hondt, P.; Oeyen, J.; Risch, P.; Bioux, P.

    1993-09-01

    The analysis of DOEL-1 in-vessel and ex-vessel neutron dosimetry, using the DOT 3.5 Sn code coupled with the VITAMIN-C cross-section library, showed the same C/E values for different detectors at the surveillance capsule and the ex-vessel cavity positions. These results seem to be in contradiction with those obtained in several Benchmark experiments (PCA, PSF, VENUS...) when using the same computational tools. Indeed a strong decreasing radial trend of the C/E was observed, partly explained by the overestimation of the iron inelastic scattering. The flat trend seen in DOEL-1 could be explained by compensating errors in the calculation such as the backscattering due to the concrete walls outside the cavity. The 'Concrete Benchmark' experiment has been designed to judge the ability of these calculation methods to treat the backscattering. This paper describes the 'Concrete Benchmark' experiment, the measured and computed neutron dosimetry results and their comparison. This preliminary analysis seems to indicate an overestimation of the backscattering effect in the calculations. (authors). 5 figs., 1 tab., 7 refs.

  15. FENDL neutronics benchmark: Specifications for the calculational neutronics and shielding benchmark

    International Nuclear Information System (INIS)

    Sawan, M.E.

    1994-12-01

    During the IAEA Advisory Group Meeting on ''Improved Evaluations and Integral Data Testing for FENDL'' held in Garching near Munich, Germany in the period 12-16 September 1994, the Working Group II on ''Experimental and Calculational Benchmarks on Fusion Neutronics for ITER'' recommended that a calculational benchmark representative of the ITER design should be developed. This report describes the neutronics and shielding calculational benchmark available for scientists interested in performing analysis for this benchmark. (author)

  16. Mixed-strain housing for female C57BL/6, DBA/2, and BALB/c mice: validating a split-plot design that promotes refinement and reduction.

    Science.gov (United States)

    Walker, Michael; Fureix, Carole; Palme, Rupert; Newman, Jonathan A; Ahloy Dallaire, Jamie; Mason, Georgia

    2016-01-27

    Inefficient experimental designs are common in animal-based biomedical research, wasting resources and potentially leading to unreplicable results. Here we illustrate the intrinsic statistical power of split-plot designs, wherein three or more sub-units (e.g. individual subjects) differing in a variable of interest (e.g. genotype) share an experimental unit (e.g. a cage or litter) to which a treatment is applied (e.g. a drug, diet, or cage manipulation). We also empirically validate one example of such a design, mixing different mouse strains -- C57BL/6, DBA/2, and BALB/c -- within cages varying in degree of enrichment. As well as boosting statistical power, no other manipulations are needed for individual identification if co-housed strains are differentially pigmented, so also sparing mice from stressful marking procedures. The validation involved housing 240 females from weaning to 5 months of age in single- or mixed-strain trios, in cages allocated to enriched or standard treatments. Mice were screened for a range of 26 commonly-measured behavioural, physiological and haematological variables. Living in mixed-strain trios did not compromise mouse welfare (assessed via corticosterone metabolite output, stereotypic behaviour, signs of aggression, and other variables). It also did not alter the direction or magnitude of any strain- or enrichment-typical difference across the 26 measured variables, or increase variance in the data: indeed variance was significantly decreased by mixed-strain housing. Furthermore, using Monte Carlo simulations to quantify the statistical power benefits of this approach over a conventional design demonstrated that for our effect sizes, the split-plot design would require significantly fewer mice (under half in most cases) to achieve a power of 80%. Mixed-strain housing allows several strains to be tested at once, and potentially refines traditional marking practices for research mice. Furthermore, it dramatically illustrates the
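
    The power advantage rests on a simple property: when the strains share a cage, the cage (and treatment) effect cancels out of within-cage strain contrasts. A minimal Monte Carlo sketch of that idea, simplified to two strains per cage with made-up effect sizes, so it illustrates the simulation approach rather than reproducing the paper's analysis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_power(n_cages=20, strain_effect=0.5, cage_sd=1.0, noise_sd=1.0,
                   n_sims=2000, alpha=0.05):
    """Power to detect a strain effect when two strains are co-housed per cage.

    Each cage contributes a shared random effect; the strain difference is tested
    with a paired (within-cage) t-test, so the cage effect cancels out.
    """
    hits = 0
    for _ in range(n_sims):
        cage = rng.normal(0.0, cage_sd, n_cages)               # shared cage effect
        strain_a = cage + rng.normal(0.0, noise_sd, n_cages)    # one mouse of strain A per cage
        strain_b = cage + strain_effect + rng.normal(0.0, noise_sd, n_cages)
        _, p = stats.ttest_rel(strain_a, strain_b)
        hits += p < alpha
    return hits / n_sims

print(f"Estimated power: {simulate_power():.2f}")
```

    In a conventional single-strain-per-cage design the cage variance does not cancel, so detecting the same effect requires more cages, which is the source of the animal savings reported above.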

  17. Benchmarking and Performance Measurement.

    Science.gov (United States)

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  18. Benchmarking for On-Scalp MEG Sensors.

    Science.gov (United States)

    Xie, Minshu; Schneiderman, Justin F; Chukharkin, Maxim L; Kalabukhov, Alexei; Riaz, Bushra; Lundqvist, Daniel; Whitmarsh, Stephen; Hamalainen, Matti; Jousmaki, Veikko; Oostenveld, Robert; Winkler, Dag

    2017-06-01

    We present a benchmarking protocol for quantitatively comparing emerging on-scalp magnetoencephalography (MEG) sensor technologies to their counterparts in state-of-the-art MEG systems. As a means of validation, we compare a high-critical-temperature superconducting quantum interference device (high-Tc SQUID) with the low-Tc SQUIDs of an Elekta Neuromag TRIUX system in MEG recordings of auditory and somatosensory evoked fields (SEFs) on one human subject. We measure the expected signal gain for the auditory-evoked fields (deeper sources) and notice some unfamiliar features in the on-scalp sensor-based recordings of SEFs (shallower sources). The experimental results serve as a proof of principle for the benchmarking protocol. This approach is straightforward, general to various on-scalp MEG sensors, and convenient to use on human subjects. The unexpected features in the SEFs suggest on-scalp MEG sensors may reveal information about neuromagnetic sources that is otherwise difficult to extract from state-of-the-art MEG recordings. As the first systematically established on-scalp MEG benchmarking protocol, magnetic sensor developers can employ this method to prove the utility of their technology in MEG recordings. Further exploration of the SEFs with on-scalp MEG sensors may reveal unique information about their sources.

  19. Evaluating efficiency of split VMAT plan for prostate cancer radiotherapy involving pelvic lymph nodes

    Energy Technology Data Exchange (ETDEWEB)

    Mun, Jun Ki; Son, Sang Jun; Kim, Dae Ho; Seo, Seok Jin [Dept. of Radiation Oncology, Seoul National University Hospital, Seoul (Korea, Republic of)

    2015-12-15

    The purpose of this study is to evaluate the efficiency of Split VMAT planning (contouring the rectum divided into an upper and a lower part to reduce rectal dose) compared to Conventional VMAT planning (contouring the whole rectum) for prostate cancer radiotherapy involving pelvic lymph nodes. A total of 9 cases were enrolled. Each case received radiotherapy with Split VMAT planning to the prostate and pelvic lymph nodes. Treatment was delivered using TrueBeam STX (Varian Medical Systems, USA) and planned on Eclipse (Ver. 10.0.42, Varian, USA) with PRO3 (Progressive Resolution Optimizer 10.0.28) and AAA (Anisotropic Analytic Algorithm Ver. 10.0.28). The lower rectum contour was defined as starting 1 cm superior and ending 1 cm inferior to the prostate PTV; the upper rectum is the remaining part of the whole rectum excluding the lower rectum. Split VMAT plan parameters consisted of 10 MV coplanar 360° arcs, each with a 30° collimator angle. An SIB (Simultaneous Integrated Boost) prescription was employed, delivering 50.4 Gy to the pelvic lymph nodes and 63-70 Gy to the prostate in 28 fractions. D_mean of the whole rectum on the Split VMAT plan was applied as the DVC (Dose Volume Constraint) of the whole rectum for the Conventional VMAT plan. In addition, all other parameters were set to be the same as in the existing treatment plans. To minimize dose differences that show up randomly during optimization, all plans were optimized and calculated twice using a 0.2 cm grid. All plans were normalized to the prostate PTV_100% = 90% or 95%. A comparison of D_mean of the whole rectum, upper rectum, lower rectum, and bladder, V_50% of the upper rectum, total MU, and the H.I. (Homogeneity Index) and C.I. (Conformity Index) of the PTV was used for technique evaluation. All Split VMAT plans were verified by gamma test with portal dosimetry using EPID. Using DVH analysis, a difference between the Conventional and the Split VMAT plans was demonstrated. The Split VMAT plan demonstrated better results in the D
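
    The abstract does not state which conventions were used for the plan-quality indices, so as a generic illustration the sketch below computes one commonly used pair of definitions (HI = (D2% - D98%) / D50% and an RTOG-style CI) from per-voxel dose samples; the toy numbers are placeholders for a DVH export from the planning system.

```python
import numpy as np

def dose_at_volume(dose_sorted_desc, fraction):
    """Dose received by at least `fraction` of the structure (D_x%), from per-voxel doses."""
    n = len(dose_sorted_desc)
    return dose_sorted_desc[int(np.ceil(fraction * n)) - 1]

def homogeneity_index(ptv_doses):
    """HI = (D2% - D98%) / D50%, one common convention; lower is more homogeneous."""
    d = np.sort(ptv_doses)[::-1]
    return (dose_at_volume(d, 0.02) - dose_at_volume(d, 0.98)) / dose_at_volume(d, 0.50)

def conformity_index(ptv_doses, body_doses, prescription):
    """RTOG-style CI = (total volume receiving the prescription) / (target volume); ~1 is ideal."""
    return np.sum(body_doses >= prescription) / len(ptv_doses)

# Toy per-voxel dose samples (Gy); real use would read these from the planning system's export
rng = np.random.default_rng(1)
ptv = rng.normal(70.0, 1.2, 5000)
body = np.concatenate([ptv, rng.normal(40.0, 15.0, 20000)])
print(f"HI = {homogeneity_index(ptv):.3f},  CI = {conformity_index(ptv, body, 66.5):.2f}")
```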

  20. Benchmarking in the Netherlands

    International Nuclear Information System (INIS)

    1999-01-01

    In two articles an overview is given of the activities in the Dutch industry and energy sector with respect to benchmarking. In benchmarking, the operational processes of different competing businesses are compared in order to improve one's own performance. Benchmark covenants for energy efficiency between the Dutch government and industrial sectors contribute to a growth in the number of benchmark surveys in the energy-intensive industry in the Netherlands. However, some doubt the effectiveness of the benchmark studies

  1. Criticality Benchmark Results Using Various MCNP Data Libraries

    International Nuclear Information System (INIS)

    Frankle, Stephanie C.

    1999-01-01

    A suite of 86 criticality benchmarks has recently been implemented in MCNP as part of the nuclear data validation effort. These benchmarks have been run using two sets of MCNP continuous-energy neutron data: ENDF/B-VI based data through Release 2 (ENDF60) and the ENDF/B-V based data. New evaluations were completed for ENDF/B-VI for a number of the important nuclides such as the isotopes of H, Be, C, N, O, Fe, Ni, 235,238U, 237Np, and 239,240Pu. When examining the results of these calculations for the five major categories of 233U, intermediate-enriched 235U (IEU), highly enriched 235U (HEU), 239Pu, and mixed-metal assemblies, we find the following: (1) The new evaluations for 9Be, 12C, and 14N show no net effect on k_eff; (2) There is a consistent decrease in k_eff for all of the solution assemblies for ENDF/B-VI due to 1H and 16O, moving k_eff further from the benchmark value for uranium solutions and closer to the benchmark value for plutonium solutions; (3) k_eff decreased for the ENDF/B-VI Fe isotopic data, moving the calculated k_eff further from the benchmark value; (4) k_eff decreased for the ENDF/B-VI Ni isotopic data, moving the calculated k_eff closer to the benchmark value; (5) The W data remained unchanged and tended to calculate slightly higher than the benchmark values; (6) For metal uranium systems, the ENDF/B-VI data for 235U tends to decrease k_eff while the 238U data tends to increase k_eff. The net result depends on the energy spectrum and material specifications for the particular assembly; (7) For more intermediate-energy systems, the changes in the 235,238U evaluations tend to increase k_eff. For the mixed graphite and normal uranium-reflected assembly, a large increase in k_eff due to changes in the 238U evaluation moved the calculated k_eff much closer to the benchmark value. (8) There is little change in k_eff for the uranium solutions due to the new 235,238U evaluations; and (9) There is little change in k_eff

  2. Molecular Line Emission from Multifluid Shock Waves. I. Numerical Methods and Benchmark Tests

    Science.gov (United States)

    Ciolek, Glenn E.; Roberge, Wayne G.

    2013-05-01

    We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are ≪ magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.
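
    The operator-splitting structure described here, advancing the homogeneous (transport) part and the source (coupling) part in separate sub-steps, can be shown on a scalar toy problem. The sketch below is only a schematic analogue (linear advection plus a relaxation source standing in for the ion-neutral coupling terms), not the authors' Godunov solver:

```python
import numpy as np

# Toy operator splitting: du/dt + a du/dx = -(u - u_eq)/tau
nx, a, tau, cfl = 200, 1.0, 0.05, 0.9
dx = 1.0 / nx
dt = cfl * dx / a
x = (np.arange(nx) + 0.5) * dx
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)   # initial square pulse
u_eq = 0.2                                       # equilibrium state for the source term

for _ in range(100):
    # Part 1: homogeneous step (first-order upwind stands in for the Godunov solve)
    u = u - a * dt / dx * (u - np.roll(u, 1))
    # Part 2: source-term step, integrated exactly over dt (relaxation toward u_eq)
    u = u_eq + (u - u_eq) * np.exp(-dt / tau)
```

    In the scheme described above, the first sub-step is replaced by a pair of Riemann-problem (Godunov) solves, one per fluid, and the second integrates the mass, momentum, and energy exchange terms; the overall structure is the same.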

  3. MOLECULAR LINE EMISSION FROM MULTIFLUID SHOCK WAVES. I. NUMERICAL METHODS AND BENCHMARK TESTS

    International Nuclear Information System (INIS)

    Ciolek, Glenn E.; Roberge, Wayne G.

    2013-01-01

    We describe a numerical scheme for studying time-dependent, multifluid, magnetohydrodynamic shock waves in weakly ionized interstellar clouds and cores. Shocks are modeled as propagating perpendicular to the magnetic field and consist of a neutral molecular fluid plus a fluid of ions and electrons. The scheme is based on operator splitting, wherein time integration of the governing equations is split into separate parts. In one part, independent homogeneous Riemann problems for the two fluids are solved using Godunov's method. In the other, equations containing the source terms for transfer of mass, momentum, and energy between the fluids are integrated using standard numerical techniques. We show that, for the frequent case where the thermal pressures of the ions and electrons are << magnetic pressure, the Riemann problems for the neutral and ion-electron fluids have a similar mathematical structure which facilitates numerical coding. Implementation of the scheme is discussed and several benchmark tests confirming its accuracy are presented, including (1) MHD wave packets ranging over orders of magnitude in length- and timescales, (2) early evolution of multifluid shocks caused by two colliding clouds, and (3) a multifluid shock with mass transfer between the fluids by cosmic-ray ionization and ion-electron recombination, demonstrating the effect of ion mass loading on magnetic precursors of MHD shocks. An exact solution to an MHD Riemann problem forming the basis for an approximate numerical solver used in the homogeneous part of our scheme is presented, along with derivations of the analytic benchmark solutions and tests showing the convergence of the numerical algorithm.

  4. Implementation and verification of global optimization benchmark problems

    Science.gov (United States)

    Posypkin, Mikhail; Usov, Alexander

    2017-12-01

    The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the process of generating the value of a function and its gradient at a given point and the interval estimates of a function and its gradient on a given box using a single description. Based on this functionality, we have developed a collection of tests for an automatic verification of the proposed benchmarks. The verification has shown that literature sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.
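
    The 'single description' idea, one expression object that yields function values, derivatives, and interval bounds, can be illustrated with forward-mode dual numbers. The sketch below is a Python analogue of that mechanism, not part of the C++ library described in the paper:

```python
import math

class Dual:
    """Forward-mode dual number: carries a value and a derivative through an expression."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def _lift(self, other):
        return other if isinstance(other, Dual) else Dual(float(other))
    def __add__(self, other):
        o = self._lift(other); return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __sub__(self, other):
        o = self._lift(other); return Dual(self.val - o.val, self.der - o.der)
    def __mul__(self, other):
        o = self._lift(other); return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.der) if isinstance(x, Dual) else math.sin(x)

# One description of a benchmark function...
def f(x, y):
    return x * x + 10 * sin(x * y) - y

# ...evaluated for the value and the partial derivative with respect to x at (1.5, 2.0)
out = f(Dual(1.5, 1.0), Dual(2.0, 0.0))
print(f"f = {out.val:.4f}, df/dx = {out.der:.4f}")
```

    Interval estimates over a box work the same way: a second number type implementing interval arithmetic is threaded through the same expression description.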

  5. Improving the back surface field on an amorphous silicon carbide (a-SiC:H) thin film photocathode for solar water splitting

    NARCIS (Netherlands)

    Perez Rodriguez, P.; Cardenas-Morcoso, Drialys; Digdaya, I.A.; Mangel Raventos, A.; Procel Moya, P.A.; Isabella, O.; Gimenez, Sixto; Zeman, M.; Smith, W.A.; Smets, A.H.M.

    2018-01-01

    Amorphous silicon carbide (a-SiC:H) is a promising material for photoelectrochemical water splitting owing to its relatively small band-gap energy and high chemical and optoelectrical stability. This work studies the interplay between charge-carrier separation and collection, and their injection

  6. WLUP benchmarks

    International Nuclear Information System (INIS)

    Leszczynski, Francisco

    2002-01-01

    The IAEA-WIMS Library Update Project (WLUP) is in its final stage. The final library will be released in 2002. It is the result of research and development carried out by more than ten investigators over 10 years. The organization of benchmarks for testing and choosing the best set of data has been coordinated by the author of this paper. The organization, naming conventions, contents and documentation of the WLUP benchmarks are presented, together with an updated list of the main parameters for all cases. First, the benchmark objectives and types are given. Then, comparisons of results from different WIMSD libraries are included. Finally, the program QVALUE for analysis and plotting of results is described. Some examples are given. The set of benchmarks implemented in this work is a fundamental tool for testing new multigroup libraries. (author)

  7. The hydrogen tunneling splitting in malonaldehyde: A full-dimensional time-independent quantum mechanical method

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Feng; Ren, Yinghui; Bian, Wensheng, E-mail: bian@iccas.ac.cn [Beijing National Laboratory for Molecular Sciences, Institute of Chemistry, Chinese Academy of Sciences, Beijing 100190 (China); University of Chinese Academy of Sciences, Beijing 100049 (China)

    2016-08-21

    The accurate time-independent quantum dynamics calculations on the ground-state tunneling splitting of malonaldehyde in full dimensionality are reported for the first time. This is achieved with an efficient method developed by us. In our method, the basis functions are customized for the hydrogen transfer process which has the effect of greatly reducing the size of the final Hamiltonian matrix, and the Lanczos method and parallel strategy are used to further overcome the memory and central processing unit time bottlenecks. The obtained ground-state tunneling splitting of 24.5 cm⁻¹ is in excellent agreement with the benchmark value of 23.8 cm⁻¹ computed with the full-dimensional, multi-configurational time-dependent Hartree approach on the same potential energy surface, and we estimate that our reported value has an uncertainty of less than 0.5 cm⁻¹. Moreover, the role of various vibrational modes strongly coupled to the hydrogen transfer process is revealed.
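
    The quantity reported is simply the gap between the two lowest eigenvalues of a very large, sparse Hamiltonian, which is exactly what a Lanczos-type iterative solver provides. A toy sketch of that final step, with a 1-D double-well Hamiltonian standing in for the full-dimensional problem and SciPy's Lanczos-based eigsh doing the work:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# 1-D double-well model (dimensionless units): H = -1/2 d^2/dx^2 + V(x)
n, L = 2000, 12.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
V = 0.05 * (x**2 - 9.0) ** 2                      # symmetric double well

kinetic = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) * (-0.5 / dx**2)
H = (kinetic + diags(V, 0)).tocsc()

# Shift-invert Lanczos around zero returns the eigenvalues closest to zero,
# i.e. the two lowest levels here; their gap is the tunneling splitting.
vals = eigsh(H, k=2, sigma=0.0, which="LM", return_eigenvectors=False)
splitting = abs(vals[1] - vals[0])
print(f"ground-state tunneling splitting (model units): {splitting:.3e}")
```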

  8. Stress analysis and optimization of Nd:YAG pulsed laser processing of notches for fracture splitting of a C70S6 connecting rod

    International Nuclear Information System (INIS)

    Kou, Shuqing; Gao, Yan; Zhao, Yong; Lin, Baojun

    2017-01-01

    The pulsed laser pre-processing of a notch as the fracture initiation source for the splitting process is the key mechanism of an advanced fracture splitting technology for C70S6 connecting rods. This study investigated the stress field of Nd:YAG pulsed laser grooving, which affects the rapid fracture initiation at the notch root and the controlled crack extension in the critical fracture splitting quality, to improve manufacturing quality. Thermal elastic-plastic incremental theory was applied to build the finite element analysis model of the stress field of pulsed laser grooving for fracture splitting based on the Rotary-Gauss body heat source. The corresponding numerical simulation of the stress field was conducted. The changes and distributions of the stress during pulsed laser grooving were examined, the influence rule of the primary technological parameters on the residual stress was analyzed, and the analysis results were validated by the corresponding cutting experiment. Results showed that the residual stress distribution was concentrated in the Heat-affected zone (HAZ) near the fracture splitting notch, which would cause micro-cracks in the HAZ. The stress state of the notch root in the fracture initiation direction was tensile stress, which was beneficial to the fracture initiation and the crack rapid extension in the subsequent fracture splitting process. However, the uneven distribution of the stress could lead to fracture splitting defects, and thus the residual stress should be lowered to a reasonable range. Decreasing the laser pulse power, increasing the processing speed, and lowering the pulse width can lower the residual stress. Along with the actual production, the reasonable main technological parameters were obtained.

  9. Stress analysis and optimization of Nd:YAG pulsed laser processing of notches for fracture splitting of a C70S6 connecting rod

    Energy Technology Data Exchange (ETDEWEB)

    Kou, Shuqing; Gao, Yan; Zhao, Yong; Lin, Baojun [Jilin University, Changchun (China)

    2017-05-15

    The pulsed laser pre-processing of a notch as the fracture initiation source for the splitting process is the key mechanism of an advanced fracture splitting technology for C70S6 connecting rods. This study investigated the stress field of Nd:YAG pulsed laser grooving, which affects the rapid fracture initiation at the notch root and the controlled crack extension in the critical fracture splitting quality, to improve manufacturing quality. Thermal elastic-plastic incremental theory was applied to build the finite element analysis model of the stress field of pulsed laser grooving for fracture splitting based on the Rotary-Gauss body heat source. The corresponding numerical simulation of the stress field was conducted. The changes and distributions of the stress during pulsed laser grooving were examined, the influence rule of the primary technological parameters on the residual stress was analyzed, and the analysis results were validated by the corresponding cutting experiment. Results showed that the residual stress distribution was concentrated in the Heat-affected zone (HAZ) near the fracture splitting notch, which would cause micro-cracks in the HAZ. The stress state of the notch root in the fracture initiation direction was tensile stress, which was beneficial to the fracture initiation and the crack rapid extension in the subsequent fracture splitting process. However, the uneven distribution of the stress could lead to fracture splitting defects, and thus the residual stress should be lowered to a reasonable range. Decreasing the laser pulse power, increasing the processing speed, and lowering the pulse width can lower the residual stress. Along with the actual production, the reasonable main technological parameters were obtained.

  10. Algebraic techniques for diagonalization of a split quaternion matrix in split quaternionic mechanics

    International Nuclear Information System (INIS)

    Jiang, Tongsong; Jiang, Ziwu; Zhang, Zhaozhong

    2015-01-01

    In the study of the relation between complexified classical and non-Hermitian quantum mechanics, physicists found that there are links to quaternionic and split quaternionic mechanics, and this leads to the possibility of employing algebraic techniques of split quaternions to tackle some problems in complexified classical and quantum mechanics. This paper, by means of the real representation of a split quaternion matrix, studies the problem of diagonalization of a split quaternion matrix and gives algebraic techniques for the diagonalization of split quaternion matrices in split quaternionic mechanics
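
    The real-representation device can be made concrete: every split quaternion a + bi + cj + dk (with i^2 = -1 and j^2 = k^2 = +1) corresponds to a real 2x2 matrix, so an n x n split quaternion matrix corresponds to a 2n x 2n real matrix whose eigen-decomposition can be computed numerically. The sketch below shows the map and a multiplication sanity check; it illustrates the idea only and is not the paper's specific algebraic technique:

```python
import numpy as np

def real_rep(q):
    """2x2 real representation of a split quaternion q = (a, b, c, d) = a + bi + cj + dk."""
    a, b, c, d = q
    return np.array([[a + d, b + c],
                     [-b + c, a - d]])

def real_rep_matrix(Q):
    """Map an n x n array of split quaternions (shape (n, n, 4)) to its 2n x 2n real representation."""
    n = Q.shape[0]
    R = np.zeros((2 * n, 2 * n))
    for i in range(n):
        for j in range(n):
            R[2*i:2*i+2, 2*j:2*j+2] = real_rep(Q[i, j])
    return R

# Sanity check: the map respects the split quaternion multiplication rules, e.g. i*j = k
i_, j_, k_ = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert np.allclose(real_rep(i_) @ real_rep(j_), real_rep(k_))

# Eigenvalues of a small split quaternion matrix, via its real representation
Q = np.array([[(1.0, 0.0, 0.5, 0.0), (0.0, 1.0, 0.0, 0.0)],
              [(0.0, -1.0, 0.0, 0.0), (2.0, 0.0, 0.0, 0.0)]])
print(np.linalg.eigvals(real_rep_matrix(Q)))
```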

  11. Benchmarking electricity distribution

    Energy Technology Data Exchange (ETDEWEB)

    Watts, K. [Department of Justice and Attorney-General, QLD (Australia)

    1995-12-31

    Benchmarking has been described as a method of continuous improvement that involves an ongoing and systematic evaluation and incorporation of external products, services and processes recognised as representing best practice. It is a management tool similar to total quality management (TQM) and business process re-engineering (BPR), and is best used as part of a total package. This paper discusses benchmarking models and approaches and suggests a few key performance indicators that could be applied to benchmarking electricity distribution utilities. Some recent benchmarking studies are used as examples and briefly discussed. It is concluded that benchmarking is a strong tool to be added to the range of techniques that can be used by electricity distribution utilities and other organizations in search of continuous improvement, and that there is now a high level of interest in Australia. Benchmarking represents an opportunity for organizations to approach learning from others in a disciplined and highly productive way, which will complement the other micro-economic reforms being implemented in Australia. (author). 26 refs.

  12. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other.The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  13. RUNE benchmarks

    DEFF Research Database (Denmark)

    Peña, Alfredo

    This report contains the description of a number of benchmarks with the purpose of evaluating flow models for near-shore wind resource estimation. The benchmarks are designed based on the comprehensive database of observations that the RUNE coastal experiment established from onshore lidar...

  14. MCNP neutron benchmarks

    International Nuclear Information System (INIS)

    Hendricks, J.S.; Whalen, D.J.; Cardon, D.A.; Uhle, J.L.

    1991-01-01

    Over 50 neutron benchmark calculations have recently been completed as part of an ongoing program to validate the MCNP Monte Carlo radiation transport code. The new and significant aspects of this work are as follows: These calculations are the first attempt at a validation program for MCNP and the first official benchmarking of version 4 of the code. We believe the chosen set of benchmarks is a comprehensive set that may be useful for benchmarking other radiation transport codes and data libraries. These calculations provide insight into how well neutron transport calculations can be expected to model a wide variety of problems

  15. Benchmark fragment-based 1H, 13C, 15N and 17O chemical shift predictions in molecular crystals

    Science.gov (United States)

    Hartman, Joshua D.; Kudla, Ryan A.; Day, Graeme M.; Mueller, Leonard J.; Beran, Gregory J. O.

    2016-01-01

    The performance of fragment-based ab initio 1H, 13C, 15N and 17O chemical shift predictions is assessed against experimental NMR chemical shift data in four benchmark sets of molecular crystals. Employing a variety of commonly used density functionals (PBE0, B3LYP, TPSSh, OPBE, PBE, TPSS), we explore the relative performance of cluster, two-body fragment, and combined cluster/fragment models. The hybrid density functionals (PBE0, B3LYP and TPSSh) generally out-perform their generalized gradient approximation (GGA)-based counterparts. 1H, 13C, 15N, and 17O isotropic chemical shifts can be predicted with root-mean-square errors of 0.3, 1.5, 4.2, and 9.8 ppm, respectively, using a computationally inexpensive electrostatically embedded two-body PBE0 fragment model. Oxygen chemical shieldings prove particularly sensitive to local many-body effects, and using a combined cluster/fragment model instead of the simple two-body fragment model decreases the root-mean-square errors to 7.6 ppm. These fragment-based model errors compare favorably with GIPAW PBE ones of 0.4, 2.2, 5.4, and 7.2 ppm for the same 1H, 13C, 15N, and 17O test sets. Using these benchmark calculations, a set of recommended linear regression parameters for mapping between calculated chemical shieldings and observed chemical shifts are provided and their robustness assessed using statistical cross-validation. We demonstrate the utility of these approaches and the reported scaling parameters on applications to 9-tert-butyl anthracene, several histidine co-crystals, benzoic acid and the C-nitrosoarene SnCl2(CH3)2(NODMA)2. PMID:27431490
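
    The linear-regression step described above (mapping calculated isotropic shieldings to observed chemical shifts and checking robustness by cross-validation) can be sketched in a few lines; the shielding and shift arrays below are made-up placeholders, not data from the paper.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        # Placeholder 13C data: computed isotropic shieldings (ppm) and observed shifts (ppm).
        sigma_calc = np.array([160.2, 140.5, 55.3, 30.1, 175.8]).reshape(-1, 1)
        delta_obs = np.array([28.4, 47.9, 131.0, 155.7, 12.6])

        # Fit delta = m * sigma + b; for shieldings the slope is expected to be close to -1.
        reg = LinearRegression().fit(sigma_calc, delta_obs)
        print("slope m =", reg.coef_[0], " intercept b =", reg.intercept_)

        # Leave-one-out cross-validation of the prediction error (RMSE in ppm).
        scores = cross_val_score(LinearRegression(), sigma_calc, delta_obs,
                                 cv=LeaveOneOut(), scoring="neg_mean_squared_error")
        print("LOO RMSE =", np.sqrt(-scores.mean()), "ppm")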

  16. Benchmark fragment-based (1)H, (13)C, (15)N and (17)O chemical shift predictions in molecular crystals.

    Science.gov (United States)

    Hartman, Joshua D; Kudla, Ryan A; Day, Graeme M; Mueller, Leonard J; Beran, Gregory J O

    2016-08-21

    The performance of fragment-based ab initio (1)H, (13)C, (15)N and (17)O chemical shift predictions is assessed against experimental NMR chemical shift data in four benchmark sets of molecular crystals. Employing a variety of commonly used density functionals (PBE0, B3LYP, TPSSh, OPBE, PBE, TPSS), we explore the relative performance of cluster, two-body fragment, and combined cluster/fragment models. The hybrid density functionals (PBE0, B3LYP and TPSSh) generally out-perform their generalized gradient approximation (GGA)-based counterparts. (1)H, (13)C, (15)N, and (17)O isotropic chemical shifts can be predicted with root-mean-square errors of 0.3, 1.5, 4.2, and 9.8 ppm, respectively, using a computationally inexpensive electrostatically embedded two-body PBE0 fragment model. Oxygen chemical shieldings prove particularly sensitive to local many-body effects, and using a combined cluster/fragment model instead of the simple two-body fragment model decreases the root-mean-square errors to 7.6 ppm. These fragment-based model errors compare favorably with GIPAW PBE ones of 0.4, 2.2, 5.4, and 7.2 ppm for the same (1)H, (13)C, (15)N, and (17)O test sets. Using these benchmark calculations, a set of recommended linear regression parameters for mapping between calculated chemical shieldings and observed chemical shifts are provided and their robustness assessed using statistical cross-validation. We demonstrate the utility of these approaches and the reported scaling parameters on applications to 9-tert-butyl anthracene, several histidine co-crystals, benzoic acid and the C-nitrosoarene SnCl2(CH3)2(NODMA)2.

  17. Implementation and verification of global optimization benchmark problems

    Directory of Open Access Journals (Sweden)

    Posypkin Mikhail

    2017-12-01

    Full Text Available The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the process of generating the value of a function and its gradient at a given point, as well as interval estimates of the function and its gradient on a given box, from a single description. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification has shown that literature sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.
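
    The core idea of generating a function value and its gradient from a single description can be illustrated with a symbolic sketch; the snippet below uses SymPy in Python purely for illustration (the library in the paper is a C++ implementation and additionally provides interval estimates, which are omitted here), and the Rosenbrock objective is just an example benchmark.

        import sympy as sp

        # Single symbolic description of a benchmark objective (2-D Rosenbrock).
        x1, x2 = sp.symbols("x1 x2")
        f_expr = 100 * (x2 - x1**2)**2 + (1 - x1)**2

        # Derive a callable value and gradient automatically from the same description.
        f = sp.lambdify((x1, x2), f_expr, "math")
        grad = [sp.lambdify((x1, x2), sp.diff(f_expr, v), "math") for v in (x1, x2)]

        point = (0.5, 0.5)
        print("f      =", f(*point))
        print("grad f =", [g(*point) for g in grad])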

  18. The Development of a Benchmark Tool for NoSQL Databases

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2013-07-01

    Full Text Available The aim of this article is to describe a proposed benchmark methodology and software application targeted at measuring the performance of both SQL and NoSQL databases. These represent the results obtained during PhD research (being actually a part of a larger application intended for NoSQL database management). A reason for aiming at this particular subject is the almost complete lack of benchmarking tools for NoSQL databases, except for YCSB [1] and a benchmark tool made specifically to compare Redis to RavenDB. While there are several well-known benchmarking systems for classical relational databases (starting with the canonical TPC-C, TPC-E and TPC-H), on the other side of the database world such tools are mostly missing and seriously needed.
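
    A benchmark harness of the kind described above ultimately reduces to timing a workload of operations against each database under test. The minimal sketch below times insert and read operations against a plain in-memory dictionary as a stand-in backend; the function names and workload are illustrative only, and any real SQL/NoSQL client with put/get-style calls could be substituted.

        import time

        def run_workload(backend_put, backend_get, n_ops=100_000):
            """Return operations per second for a simple insert-then-read workload."""
            start = time.perf_counter()
            for i in range(n_ops):
                backend_put(f"key-{i}", {"value": i})
            for i in range(n_ops):
                backend_get(f"key-{i}")
            elapsed = time.perf_counter() - start
            return 2 * n_ops / elapsed

        # Stand-in backend: an in-memory dict (replace with a real database client).
        store = {}
        ops_per_sec = run_workload(store.__setitem__, store.__getitem__)
        print(f"{ops_per_sec:,.0f} ops/s")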

  19. Liver mortality attributable to chronic hepatitis C virus infection in Denmark and Scotland - using spontaneous resolvers as the benchmark comparator

    DEFF Research Database (Denmark)

    Innes, H; Hutchinson, S J; Obel, N

    2016-01-01

    Liver mortality among individuals with chronic hepatitis C (CHC) infection is common, but the relative contribution of CHC per se versus adverse health behaviours is uncertain. We explored data on spontaneous resolvers of hepatitis C virus (HCV) as a benchmark group, to uncover the independent contribution of CHC to liver mortality. Using national HCV diagnosis and mortality registers from Denmark and Scotland, we calculated the liver mortality rate (LMR) for persons diagnosed with CHC infection (LMR_chronic) and spontaneously resolved infection (LMR_resolved), according to subgroups defined by: age...

  20. Polarization Insensitivity in Double-Split Ring and Triple-Split Ring Terahertz Resonators

    International Nuclear Information System (INIS)

    Wu Qian-Nan; Lan Feng; Tang Xiao-Pin; Yang Zi-Qiang

    2015-01-01

    A modified double-split ring resonator and a modified triple-split ring resonator, which offer polarization-insensitive performance, are investigated, designed and fabricated. By displacing the two gaps of the conventional double-split ring resonator away from the center, the second resonant frequency for the 0° polarized wave and the resonant frequency for the 90° polarized wave become increasingly close to each other until they are finally identical. Theoretical and experimental results show that the modified double-split ring resonator and the modified triple-split ring resonator are insensitive to different polarized waves and show strong resonant frequency dips near 433 and 444 GHz, respectively. The results of this work suggest new opportunities for the investigation and design of polarization-dependent terahertz devices based on split ring resonators. (paper)

  1. Protein trans-splicing of multiple atypical split inteins engineered from natural inteins.

    Directory of Open Access Journals (Sweden)

    Ying Lin

    Full Text Available Protein trans-splicing by split inteins has many uses in protein production and research. Splicing proteins with synthetic peptides, which employs atypical split inteins, is particularly useful for site-specific protein modifications and labeling, because the synthetic peptide can be made to contain a variety of unnatural amino acids and chemical modifications. For this purpose, atypical split inteins need to be engineered to have a small N-intein or C-intein fragment that can be more easily included in a synthetic peptide that also contains a small extein to be trans-spliced onto target proteins. Here we have successfully engineered multiple atypical split inteins capable of protein trans-splicing, by modifying and testing more than a dozen natural inteins. These included both S1 split inteins having a very small (11-12 aa) N-intein fragment and S11 split inteins having a very small (6 aa) C-intein fragment. Four of the new S1 and S11 split inteins showed high efficiencies (85-100%) of protein trans-splicing both in E. coli cells and in vitro. Under in vitro conditions, they exhibited reaction rate constants ranging from ~1.7 × 10^(-4) s^(-1) to ~3.8 × 10^(-4) s^(-1), which are comparable to or higher than those of previously reported atypical split inteins. These findings should facilitate a more general use of trans-splicing between proteins and synthetic peptides, by expanding the availability of different atypical split inteins. They also have implications for understanding the structure-function relationship of atypical split inteins, particularly in terms of intein fragment complementation.

  2. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms; efficiency and comprehensive monotonicity characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...... in order to obtain a unique selection...

  3. Mixed-strain housing for female C57BL/6, DBA/2, and BALB/c mice: validating a split-plot design that promotes refinement and reduction

    Directory of Open Access Journals (Sweden)

    Michael Walker

    2016-01-01

    Full Text Available Abstract Background Inefficient experimental designs are common in animal-based biomedical research, wasting resources and potentially leading to unreplicable results. Here we illustrate the intrinsic statistical power of split-plot designs, wherein three or more sub-units (e.g. individual subjects) differing in a variable of interest (e.g. genotype) share an experimental unit (e.g. a cage or litter) to which a treatment is applied (e.g. a drug, diet, or cage manipulation). We also empirically validate one example of such a design, mixing different mouse strains -- C57BL/6, DBA/2, and BALB/c -- within cages varying in degree of enrichment. As well as boosting statistical power, no other manipulations are needed for individual identification if co-housed strains are differentially pigmented, so also sparing mice from stressful marking procedures. Methods The validation involved housing 240 females from weaning to 5 months of age in single- or mixed-strain trios, in cages allocated to enriched or standard treatments. Mice were screened for a range of 26 commonly-measured behavioural, physiological and haematological variables. Results Living in mixed-strain trios did not compromise mouse welfare (assessed via corticosterone metabolite output, stereotypic behaviour, signs of aggression, and other variables). It also did not alter the direction or magnitude of any strain- or enrichment-typical difference across the 26 measured variables, or increase variance in the data: indeed variance was significantly decreased by mixed-strain housing. Furthermore, using Monte Carlo simulations to quantify the statistical power benefits of this approach over a conventional design demonstrated that for our effect sizes, the split-plot design would require significantly fewer mice (under half in most cases) to achieve a power of 80%. Conclusions Mixed-strain housing allows several strains to be tested at once, and potentially refines traditional marking practices
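
    To make the Monte Carlo power argument concrete, a minimal simulation sketch is given below: cages (whole plots) receive the enrichment treatment, three strains share each cage, and the whole-plot treatment effect is tested on cage means. The effect sizes, variance components and cage numbers are arbitrary placeholders, not the values used in the study, and the cage-mean t-test is only one simple valid analysis of the whole-plot factor.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        def simulate_power(n_cages_per_arm=20, treat_effect=0.5,
                           cage_sd=0.5, resid_sd=1.0, n_sim=2000, alpha=0.05):
            """Monte Carlo power for the whole-plot (treatment) effect in a split-plot
            design: 3 strains per cage, treatment applied at cage level, tested on cage means."""
            strain_effects = np.array([0.0, 0.3, -0.2])   # arbitrary strain offsets
            hits = 0
            for _ in range(n_sim):
                cage_means = []
                for arm_effect in (0.0, treat_effect):
                    cage_re = rng.normal(0.0, cage_sd, n_cages_per_arm)
                    y = (arm_effect + cage_re[:, None] + strain_effects[None, :]
                         + rng.normal(0.0, resid_sd, (n_cages_per_arm, 3)))
                    cage_means.append(y.mean(axis=1))
                _, p = stats.ttest_ind(cage_means[0], cage_means[1])
                hits += p < alpha
            return hits / n_sim

        print("estimated power:", simulate_power())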

  4. Bad splits in bilateral sagittal split osteotomy: systematic review of fracture patterns.

    Science.gov (United States)

    Steenen, S A; Becking, A G

    2016-07-01

    An unfavourable and unanticipated pattern of the mandibular sagittal split osteotomy is generally referred to as a 'bad split'. Few restorative techniques to manage the situation have been described. In this article, a classification of reported bad split pattern types is proposed and appropriate salvage procedures to manage the different types of undesired fracture are presented. A systematic review was undertaken, yielding a total of 33 studies published between 1971 and 2015. These reported a total of 458 cases of bad splits among 19,527 sagittal ramus osteotomies in 10,271 patients. The total reported incidence of bad split was 2.3% of sagittal splits. The most frequently encountered were buccal plate fractures of the proximal segment (types 1A-F) and lingual fractures of the distal segment (types 2A and 2B). Coronoid fractures (type 3) and condylar neck fractures (type 4) have seldom been reported. The various types of bad split may require different salvage approaches. Copyright © 2016 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  5. Deprotonation of g-C3N4 with Na ions for efficient nonsacrificial water splitting under visible light

    DEFF Research Database (Denmark)

    Guo, Feng; Chen, Jingling; Zhang, Minwei

    2016-01-01

    Developing a photocatalyst with the necessary characteristics of being cheap, efficient and robust for visible-light-driven water splitting remains a serious challenge within the photocatalysis field. Herein, an effective strategy, deprotonating g-C3N4 with Na ions from low-cost precursors...

  6. Benchmarking school nursing practice: the North West Regional Benchmarking Group

    OpenAIRE

    Littler, Nadine; Mullen, Margaret; Beckett, Helen; Freshney, Alice; Pinder, Lynn

    2016-01-01

    It is essential that the quality of care is reviewed regularly through robust processes such as benchmarking to ensure all outcomes and resources are evidence-based so that children and young people’s needs are met effectively. This article provides an example of the use of benchmarking in school nursing practice. Benchmarking has been defined as a process for finding, adapting and applying best practices (Camp, 1994). This concept was first adopted in the 1970s ‘from industry where it was us...

  7. Field-induced spin splitting and anomalous photoluminescence circular polarization in CH3NH3PbI3 films at high magnetic field

    Science.gov (United States)

    Zhang, Chuang; Sun, Dali; Yu, Zhi-Gang; Sheng, Chuan-Xiang; McGill, Stephen; Semenov, Dmitry; Vardeny, Zeev Valy

    2018-04-01

    The organic-inorganic hybrid perovskites show excellent optical and electrical properties for photovoltaic and a myriad of other optoelectronic applications. Using high-field magneto-optical measurements up to 17.5 T at cryogenic temperatures, we have studied the spin-dependent optical transitions in the prototype CH3NH3PbI3, which are manifested in the field-induced circularly polarized photoluminescence emission. The energy splitting between left and right circularly polarized emission bands is measured to be ~1.5 meV at 17.5 T, from which we obtained an exciton effective g factor of ~1.32. Also from the photoluminescence diamagnetic shift we estimate the exciton binding energy to be ~17 meV at low temperature. Surprisingly, the corresponding field-induced circular polarization is "anomalous" in that the photoluminescence emission of the higher split energy band is stronger than that of the lower split band. This "reversed" intensity ratio originates from the combination of a long electron spin relaxation time and a negative hole g factor in CH3NH3PbI3, which are in agreement with a model based on the k·p effective-mass approximation.
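
    The two quantities extracted above follow from the standard magneto-photoluminescence relations sketched below (Zeeman splitting of the two circularly polarized bands and the quadratic diamagnetic shift); g_exc, mu_B, E_0 and c_0 are the usual symbols, and no values from the paper are re-derived here.

        \Delta E(B) = E_{\sigma^{+}} - E_{\sigma^{-}} = g_{\mathrm{exc}}\,\mu_{B}\,B,
        \qquad E_{\mathrm{PL}}(B) \simeq E_{0} + c_{0}\,B^{2} \quad \text{(diamagnetic shift)} .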

  8. Extensions of pseudo-Perron-Frobenius splitting related to generalized inverse A_T,S^(2)

    Directory of Open Access Journals (Sweden)

    Huang Shaowu

    2018-02-01

    Full Text Available In this paper we define the outer-Perron-Frobenius splitting, which is an extension of the pseudo-Perron-Frobenius splitting defined in [A.N. Sushama, K. Premakumari, K.C. Sivakumar, Extensions of Perron-Frobenius splittings and relationships with nonnegative Moore-Penrose inverse, Linear and Multilinear Algebra 63 (2015) 1-11]. We present some criteria for the convergence of the outer-Perron-Frobenius splitting. The findings of this paper generalize some known results in the literature.
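
    For orientation, the classical notion that these splittings extend is sketched below: a splitting A = U - V of a nonsingular matrix induces a stationary iteration whose convergence is governed by the spectral radius of the iteration matrix. The generalized-inverse versions studied in the paper replace the ordinary inverse of U by an outer inverse, which is not reproduced here.

        A = U - V, \qquad x_{k+1} = U^{-1}V\,x_{k} + U^{-1}b, \qquad
        x_{k} \to A^{-1}b \ \text{for every } x_{0} \iff \rho\!\left(U^{-1}V\right) < 1 .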

  9. Benchmarking Is Associated With Improved Quality of Care in Type 2 Diabetes

    Science.gov (United States)

    Hermans, Michel P.; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank; Vandenberghe, Hans; Brotons, Carlos

    2013-01-01

    OBJECTIVE To assess prospectively the effect of benchmarking on quality of primary care for patients with type 2 diabetes by using three major modifiable cardiovascular risk factors as critical quality indicators. RESEARCH DESIGN AND METHODS Primary care physicians treating patients with type 2 diabetes in six European countries were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). In both groups, laboratory tests were performed every 4 months. The primary end point was the percentage of patients achieving preset targets of the critical quality indicators HbA1c, LDL cholesterol, and systolic blood pressure (SBP) after 12 months of follow-up. RESULTS Of 4,027 patients enrolled, 3,996 patients were evaluable and 3,487 completed 12 months of follow-up. Primary end point of HbA1c target was achieved in the benchmarking group by 58.9 vs. 62.1% in the control group (P = 0.398) after 12 months; 40.0 vs. 30.1% patients met the SBP target (P benchmarking group. The percentage of patients achieving all three targets at month 12 was significantly larger in the benchmarking group than in the control group (12.5 vs. 8.1%; P benchmarking was shown to be an effective tool for increasing achievement of critical quality indicators and potentially reducing patient cardiovascular residual risk profile. PMID:23846810

  10. Discrepancies in historical emissions point to a wider 2020 gap between 2 deg. C benchmarks and aggregated national mitigation pledges

    International Nuclear Information System (INIS)

    Rogelj, Joeri; Hare, William; Chen, Claudine; Meinshausen, Malte

    2011-01-01

    Aggregations of greenhouse gas mitigation pledges by countries are frequently used to indicate whether resulting global emissions in 2020 will be 'on track' to limit global temperature increase to below specific warming levels such as 1.5 or 2 deg. C. We find that historical emission levels aggregated from data that are officially reported by countries to the UNFCCC are lower than independent global emission estimates, such as the IPCC SRES scenarios. This discrepancy in historical emissions could substantially widen the gap between 2020 pledges and 2020 benchmarks, as the latter tend to be derived from scenarios that share similar historical emission levels to IPCC SRES scenarios. Three methods for resolving this discrepancy, here called 'harmonization', are presented and their influence on 'gap' estimates is discussed. Instead of a 3.4-9.2 GtCO2eq shortfall in emission reductions by 2020 compared with the 44 GtCO2eq benchmark, the actual gap might be as high as 5.4-12.5 GtCO2eq (a 22-88% increase of the gap) if this historical discrepancy is accounted for. Not applying this harmonization step when using 2020 emission benchmarks could lead to an underestimation of the insufficiency of current mitigation pledges.

  11. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different characteristics of dairy farms influence technical efficiency.

  12. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  13. Market Structure and Stock Splits

    OpenAIRE

    David Michayluk; Paul Kofman

    2001-01-01

    Enhanced liquidity is one possible motivation for stock splits but empirical research frequently documents declines in liquidity following stock splits. Despite almost thirty years of inquiry, little is known about all the changes in a stock's trading activity following a stock split. We examine how liquidity measures change around more than 2,500 stock splits and find a pervasive decline in most measures. Large stock splits exhibit a more severe liquidity decline than small stock splits, esp...

  14. Benchmark testing calculations for 232Th

    International Nuclear Information System (INIS)

    Liu Ping

    2003-01-01

    The cross sections of 232Th from CNDC and JENDL-3.3 were processed with the NJOY97.45 code in the ACE format for the continuous-energy Monte Carlo code MCNP4C. The K_eff values and central reaction rates based on CENDL-3.0, JENDL-3.3 and ENDF/B-6.2 were calculated using the MCNP4C code for the benchmark assembly, and comparisons with experimental results are given. (author)

  15. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  16. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  17. Novel microwave-assisted synthesis of porous g-C3N4/SnO2 nanocomposite for solar water-splitting

    Science.gov (United States)

    Seza, A.; Soleimani, F.; Naseri, N.; Soltaninejad, M.; Montazeri, S. M.; Sadrnezhaad, S. K.; Mohammadi, M. R.; Moghadam, H. Asgari; Forouzandeh, M.; Amin, M. H.

    2018-05-01

    Highly porous nanocomposites of graphitic-carbon nitride and tin oxide (g-C3N4/SnO2) were prepared through simple pyrolysis of urea molecules under microwave irradiation. The initial amount of tin was varied in order to investigate the effect of SnO2 content on the preparation and properties of the composites. The synthesized nanocomposites were well-characterized by XRD, FE-SEM, HR-TEM, BET, FTIR, XPS, DRS, and PL. A homogeneous distribution of SnO2 nanoparticles with a size of less than 10 nm on the porous C3N4 sheets could be obtained, suggesting that in-situ synthesis of SnO2 nanoparticles was responsible for the formation of g-C3N4. The process likely occurred by the aid of the large amounts of OH groups formed on the surfaces of SnO2 nanoparticles during the polycondensation reactions of tin derivatives, which could facilitate the pyrolysis of urea to carbon nitride. The porous nanocomposite prepared with an initial tin amount of 0.175 g had a high specific surface area of 195 m^2 g^-1 and showed high photoelectrochemical water-splitting efficiency. A maximum photocurrent density of 33 μA cm^-2 was achieved at an applied potential of 0.5 V when testing this nanocomposite as a photo-anode in water-splitting reactions under simulated visible light irradiation, introducing it as a promising visible-light photoactive material.

  18. Benchmark calculations of power distribution within assemblies

    International Nuclear Information System (INIS)

    Cavarec, C.; Perron, J.F.; Verwaerde, D.; West, J.P.

    1994-09-01

    The main objective of this Benchmark is to compare different techniques for fine flux prediction based upon coarse mesh diffusion or transport calculations. We proposed 5 'core' configurations including different assembly types (17 x 17 pins, 'uranium', 'absorber' or 'MOX' assemblies), with different boundary conditions. The specification required results in terms of reactivity, pin by pin fluxes and production rate distributions. The proposal for these Benchmark calculations was made by J.C. LEFEBVRE, J. MONDOT, J.P. WEST and the specification (with nuclear data, assembly types, core configurations for 2D geometry and results presentation) was distributed to correspondents of the OECD Nuclear Energy Agency. 11 countries and 19 companies answered the exercise proposed by this Benchmark. Heterogeneous calculations and homogeneous calculations were made. Various methods were used to produce the results: diffusion (finite differences, nodal...), transport (P_ij, S_N, Monte Carlo). This report presents an analysis and intercomparisons of all the results received

  19. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views...... are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested...... interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained....

  20. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Full Text Available Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how the project's systematic implementation led to success.

  1. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  2. DRAGON solutions to the 3D transport benchmark over a range in parameter space

    International Nuclear Information System (INIS)

    Martin, Nicolas; Hebert, Alain; Marleau, Guy

    2010-01-01

    DRAGON solutions to the 'NEA suite of benchmarks for 3D transport methods and codes over a range in parameter space' are discussed in this paper. A description of the benchmark is first provided, followed by a detailed review of the different computational models used in the lattice code DRAGON. Two numerical methods were selected for generating the required quantities for the 729 configurations of this benchmark. First, S_N calculations were performed using fully symmetric angular quadratures and high-order diamond differencing for spatial discretization. To compare S_N results with those of another deterministic method, the method of characteristics (MoC) was also considered for this benchmark. Comparisons between reference solutions, S_N and MoC results illustrate the advantages and drawbacks of each method for this 3-D transport problem.

  3. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  4. Regulatory Benchmarking

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    2017-01-01

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  5. Benchmarking and the laboratory

    Science.gov (United States)

    Galloway, M; Nadin, L

    2001-01-01

    This article describes how benchmarking can be used to assess laboratory performance. Two benchmarking schemes are reviewed, the Clinical Benchmarking Company's Pathology Report and the College of American Pathologists' Q-Probes scheme. The Clinical Benchmarking Company's Pathology Report is undertaken by staff based in the clinical management unit, Keele University with appropriate input from the professional organisations within pathology. Five annual reports have now been completed. Each report is a detailed analysis of 10 areas of laboratory performance. In this review, particular attention is focused on the areas of quality, productivity, variation in clinical practice, skill mix, and working hours. The Q-Probes scheme is part of the College of American Pathologists programme in studies of quality assurance. The Q-Probes scheme and its applicability to pathology in the UK is illustrated by reviewing two recent Q-Probe studies: routine outpatient test turnaround time and outpatient test order accuracy. The Q-Probes scheme is somewhat limited by the small number of UK laboratories that have participated. In conclusion, as a result of the government's policy in the UK, benchmarking is here to stay. Benchmarking schemes described in this article are one way in which pathologists can demonstrate that they are providing a cost effective and high quality service. Key Words: benchmarking • pathology PMID:11477112

  6. Benchmarking for Higher Education.

    Science.gov (United States)

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  7. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  8. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  9. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking applications. The present study analyses voluntary benchmarking in a public setting that is oriented towards learning. The study contributes by showing how benchmarking can be mobilised for learning and offers evidence of the effects of such benchmarking for performance outcomes. It concludes that benchmarking can enable learning in public settings but that this requires actors to invest in ensuring that benchmark data are directed towards improvement....

  10. Analysis of mobile fronthaul bandwidth and wireless transmission performance in split-PHY processing architecture.

    Science.gov (United States)

    Miyamoto, Kenji; Kuwano, Shigeru; Terada, Jun; Otaka, Akihiro

    2016-01-25

    We analyze the mobile fronthaul (MFH) bandwidth and the wireless transmission performance in the split-PHY processing (SPP) architecture, which redefines the functional split of centralized/cloud RAN (C-RAN) while preserving high wireless coordinated multi-point (CoMP) transmission/reception performance. The SPP architecture splits the base station (BS) functions between wireless channel coding/decoding and wireless modulation/demodulation, and employs its own CoMP joint transmission and reception schemes. Simulation results show that the SPP architecture reduces the MFH bandwidth by up to 97% compared with conventional C-RAN while matching the wireless bit error rate (BER) performance of conventional C-RAN in uplink joint reception with only a 2-dB signal-to-noise ratio (SNR) penalty.

  11. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  12. Benchmarking reference services: an introduction.

    Science.gov (United States)

    Marshall, J G; Buchanan, H S

    1995-01-01

    Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.

  13. Benchmarking ENDF/B-VII.1, JENDL-4.0 and JEFF-3.1.1 with MCNP6

    International Nuclear Information System (INIS)

    Marck, Steven C. van der

    2012-01-01

    Recent releases of three major world nuclear reaction data libraries, ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, have been tested extensively using benchmark calculations. The calculations were performed with the latest release of the continuous energy Monte Carlo neutronics code MCNP, i.e. MCNP6. Three types of benchmarks were used, viz. criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 2000 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). The new functionality in MCNP6 to calculate the effective delayed neutron fraction was tested by comparison with more than thirty measurements in widely varying systems. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. The performance of the three libraries, in combination with MCNP6, is shown to be good. The results for the LEU-COMP-THERM category are on average very close to the benchmark value. Also for most other categories the results are satisfactory. Deviations from the benchmark values do occur in certain benchmark series, or in isolated cases within benchmark series. Such

  14. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  15. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, NOAEL-based toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red

  16. Splitting methods for split feasibility problems with application to Dantzig selectors

    International Nuclear Information System (INIS)

    He, Hongjin; Xu, Hong-Kun

    2017-01-01

    The split feasibility problem (SFP), which refers to the task of finding a point that belongs to a given nonempty, closed and convex set, and whose image under a bounded linear operator belongs to another given nonempty, closed and convex set, has promising applicability in modeling a wide range of inverse problems. Motivated by the increasingly data-driven regularization in the areas of signal/image processing and statistical learning, in this paper, we study the regularized split feasibility problem (RSFP), which provides a unified model for treating many real-world problems. By exploiting the split nature of the RSFP, we shall gainfully employ several efficient splitting methods to solve the model under consideration. A remarkable advantage of our methods lies in their easier subproblems in the sense that the resulting subproblems have closed-form representations or can be efficiently solved up to a high precision. As an interesting application, we apply the proposed algorithms for finding Dantzig selectors, in addition to demonstrating the effectiveness of the splitting methods through some computational results on synthetic and real medical data sets. (paper)
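
    One simple splitting-type method for the SFP -- find x in C with Ax in Q -- is the classical CQ projection algorithm, sketched below with a box for C and a Euclidean ball for Q. This is standard background, not necessarily one of the specific algorithms proposed in the paper; the sets, matrix and step size are illustrative, with the step size chosen inside (0, 2/||A||^2).

        import numpy as np

        def project_box(x, lo, hi):          # projection onto C = [lo, hi]^n
            return np.clip(x, lo, hi)

        def project_ball(y, center, radius): # projection onto Q = ball(center, radius)
            d = y - center
            n = np.linalg.norm(d)
            return y if n <= radius else center + radius * d / n

        def cq_algorithm(A, lo, hi, center, radius, n_iter=500):
            """Classical CQ iteration: x <- P_C(x - gamma * A^T (A x - P_Q(A x)))."""
            x = np.zeros(A.shape[1])
            gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # safely inside (0, 2/||A||^2)
            for _ in range(n_iter):
                Ax = A @ x
                x = project_box(x - gamma * A.T @ (Ax - project_ball(Ax, center, radius)), lo, hi)
            return x

        rng = np.random.default_rng(1)
        A = rng.normal(size=(3, 5))
        x_star = cq_algorithm(A, lo=-1.0, hi=1.0, center=np.ones(3), radius=0.5)
        residual = np.linalg.norm(A @ x_star - project_ball(A @ x_star, np.ones(3), 0.5))
        print("x =", x_star, " residual =", residual)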

  17. Neutronic performance of a benchmark 1-MW LPSS

    International Nuclear Information System (INIS)

    Russell, G.J.; Pitcher, E.J.; Ferguson, P.D.

    1995-01-01

    We used split-target/flux-trap-moderator geometry in our 1-MW LPSS computational benchmark performance calculations because the simulation models were readily available. Also, this target/moderator arrangement is a proven LANSCE design and a good neutronic performer. The model has four moderator viewed surfaces, each with a 13x13 cm field-of-view. For our scoping neutronic-performance calculations, we attempted to get as much engineering realism into the target-system mockup as possible. In our present model, we account for target/reflector dilution by cooling; the D 2 O coolant fractions are adequate for 1 MW of 800-MeV protons (1.25 mA). We have incorporated a proton beam entry window and target canisters into the model, as well as (partial) moderator and vacuum canisters. The model does not account for target and moderator cooling lines and baffles, entire moderator canisters, and structural material in the reflector

  18. Benchmarking in academic pharmacy departments.

    Science.gov (United States)

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  19. Coded Splitting Tree Protocols

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Stefanovic, Cedomir; Popovski, Petar

    2013-01-01

    This paper presents a novel approach to multiple access control called coded splitting tree protocol. The approach builds on the known tree splitting protocols, code structure and successive interference cancellation (SIC). Several instances of the tree splitting protocol are initiated, each instance is terminated prematurely and subsequently iterated. The combined set of leaves from all the tree instances can then be viewed as a graph code, which is decodable using belief propagation. The main design problem is determining the order of splitting, which enables successful decoding as early...
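
    For context, the underlying (uncoded) binary tree-splitting protocol that the coded scheme builds on can be simulated in a few lines: colliding users flip fair coins to split into subgroups until every user transmits in a singleton slot. The sketch below counts the slots needed to resolve n users; the coded variant with premature termination, SIC and belief-propagation decoding is substantially more involved and is not reproduced here.

        import random

        def resolve(n_users, rng=random):
            """Slots used by a basic binary splitting (collision resolution) tree for n users."""
            slots = 0
            stack = [n_users]                 # each entry: number of users sharing a slot
            while stack:
                group = stack.pop()
                slots += 1
                if group <= 1:                # idle or successful slot
                    continue
                left = sum(rng.random() < 0.5 for _ in range(group))  # fair coin flips
                stack.append(group - left)    # right subgroup retransmits later
                stack.append(left)            # left subgroup retransmits first
            return slots

        n = 50
        avg = sum(resolve(n) for _ in range(1000)) / 1000
        print(f"average slots to resolve {n} users: {avg:.1f}")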

  20. Quantum plasmon and Rashba-like spin splitting in self-assembled CoxC60 composites with enhanced Co content (x > 15)

    Science.gov (United States)

    Lavrentiev, Vasily; Chvostova, Dagmar; Stupakov, Alexandr; Lavrentieva, Inna; Vacik, Jiri; Motylenko, Mykhaylo; Barchuk, Mykhailo; Rafaja, David; Dejneka, Alexandr

    2018-04-01

    Harnessing the interplay between plasmonic and magnetic effects in organic composite semiconductors is a challenging task with huge potential for practical applications. Here, we present evidence of a quantum plasmon excited in self-assembled CoxC60 nanocomposite films with x > 15 (the interval of Co cluster coalescence) and analyse it using optical absorption (OA) spectra. In the case of the CoxC60 film with x = 16 (LF sample), the quantum plasmon generated by the Co/CoO clusters appears as an OA peak centred at 1.5 eV. This finding is supported by the identification of four specific C60-related OA lines detected at photon energies E_p > 2.5 eV. Increasing the Co content up to x = 29 (HF sample) leads to a pronounced enhancement of the OA intensity in the energy range E_p > 2.5 eV and to a 0.2 eV downshift of the plasmonic peak with respect to its position in the LF spectrum. Four pairs of OA peaks evaluated in the HF spectrum at E_p > 2.5 eV reflect splitting of the C60-related lines, suggesting a great change in the microscopic conditions with increasing x. Analysis of the film nanostructure and the plasmon-induced conditions allows us to propose a Rashba-like spin splitting effect that suggests valuable sources for spin polarization.

  1. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured continuous collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. Effect of doping (C or N) and co-doping (C+N) on the photoactive properties of magnetron sputtered titania coatings for the application of solar water-splitting.

    Science.gov (United States)

    Rahman, M; Dang, B H Q; McDonnell, K; MacElroy, J M D; Dowling, D P

    2012-06-01

    The photocatalytic splitting of water into hydrogen and oxygen using a photoelectrochemical (PEC) cell containing a titanium dioxide (TiO2) photoanode is a potentially renewable source of chemical fuels. However, the size of the band gap (~3.2 eV) of the TiO2 photocatalyst leads to its relatively low photoactivity toward visible light in a PEC cell. The development of materials with smaller band gaps of approximately 2.4 eV is therefore necessary to operate PEC cells efficiently. This study investigates the effect of doping (C or N) and co-doping (C+N) on the physical, structural and photoactive properties of nano-thick TiO2 coatings. The TiO2 coatings were deposited using a closed-field DC reactive magnetron sputtering technique, from a titanium target in argon plasma with a trace addition of oxygen. In order to study the influence of C, N and C+N inclusions in the TiO2 coatings, trace levels of CO2, N2 or CO2+N2 gas, respectively, were introduced into the deposition chamber. The properties of the deposited nano-coatings were determined using spectroscopic ellipsometry, SEM, AFM, optical profilometry, XPS, Raman spectroscopy, X-ray diffraction, UV-Vis spectroscopy and tri-electrode potentiostat measurements. Coating growth rate, structure, surface morphology and roughness were found to be significantly influenced by the type and amount of doping. Substitutional doping in all doped samples was confirmed by XPS. UV-Vis measurements confirmed that doping (especially C doping) enhances the photoactivity of the sputter-deposited titania coatings toward visible light by reducing the band gap. The photocurrent density (an indirect indication of water-splitting performance) of the C-doped photoanode was approximately 26% higher than that of the un-doped photoanode. However, coatings doped with nitrogen (N or C+N) did not exhibit good performance in the photoelectrochemical cell due to their higher charge recombination.
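
    A quick back-of-the-envelope check of why the band-gap reduction matters for visible-light activity (a minimal sketch; the 1239.84 eV·nm conversion constant is standard, and the two gap values are simply the ones quoted in the abstract):

    ```python
    # Convert a semiconductor band gap to its absorption-edge wavelength,
    # using the standard relation E [eV] * wavelength [nm] ~= 1239.84.
    def absorption_edge_nm(band_gap_ev: float) -> float:
        return 1239.84 / band_gap_ev

    for gap_ev in (3.2, 2.4):  # undoped TiO2 vs. the targeted doped material
        print(f"E_g = {gap_ev:.1f} eV -> absorption edge ~ {absorption_edge_nm(gap_ev):.0f} nm")
    # ~3.2 eV -> ~387 nm (UV only); ~2.4 eV -> ~517 nm (green light becomes usable)
    ```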

  3. Toxicological benchmarks for wildlife. Environmental Restoration Program

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

    This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  4. Electroweak splitting functions and high energy showering

    Science.gov (United States)

    Chen, Junmou; Han, Tao; Tweedie, Brock

    2017-11-01

    We derive the electroweak (EW) collinear splitting functions for the Standard Model, including the massive fermions, gauge bosons and the Higgs boson. We first present the splitting functions in the limit of unbroken SU(2)_L × U(1)_Y and discuss their general features in the collinear and soft-collinear regimes. These are the leading contributions at a splitting scale (k_T) far above the EW scale (v). We then systematically incorporate EW symmetry breaking (EWSB), which leads to the emergence of additional "ultra-collinear" splitting phenomena and naive violations of the Goldstone-boson Equivalence Theorem. We suggest a particularly convenient choice of non-covariant gauge (dubbed "Goldstone Equivalence Gauge") that disentangles the effects of Goldstone bosons and gauge fields in the presence of EWSB, and allows trivial book-keeping of leading power corrections in v/k_T. We implement a comprehensive, practical EW showering scheme based on these splitting functions using a Sudakov evolution formalism. Novel features in the implementation include a complete accounting of ultra-collinear effects, matching between shower and decay, kinematic back-reaction corrections in multi-stage showers, and mixed-state evolution of neutral bosons (γ/Z/h) using density matrices. We employ the EW showering formalism to study a number of important physical processes at O(1-10 TeV) energies. They include (a) electroweak partons in the initial state as the basis for vector-boson-fusion; (b) the emergence of "weak jets" such as those initiated by transverse gauge bosons, with individual splitting probabilities as large as O(35%); (c) EW showers initiated by top quarks, including Higgs bosons in the final state; (d) the occurrence of O(1) interference effects within EW showers involving the neutral bosons; and (e) EW corrections to new physics processes, as illustrated by production of a heavy vector boson (W') and the subsequent showering of its decay products.
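
    The Sudakov evolution formalism mentioned above can be illustrated with a toy parton-shower step. The sketch below is not the paper's EW shower: it uses a simple QCD-like q -> q g kernel with a fixed coupling, and generates successive emission scales with the standard veto algorithm; all parameter values are illustrative assumptions.

    ```python
    import math
    import random

    # Toy final-state shower: evolve downward in the transverse-momentum scale kT,
    # emitting with a q -> q g kernel P(z) = CF * (1 + z^2) / (1 - z),
    # overestimated by P_over(z) = 2 * CF / (1 - z) for the veto algorithm.
    CF = 4.0 / 3.0
    ALPHA_S = 0.12            # fixed coupling (illustrative)
    Z_MIN, Z_MAX = 0.01, 0.99

    def next_emission(kt2_start, kt2_cut):
        """Return (kt2, z) of the next accepted emission, or None if below cutoff."""
        # Integral of the overestimated kernel over z, times alpha_s / (2 pi):
        c = (ALPHA_S / (2.0 * math.pi)) * 2.0 * CF * math.log((1.0 - Z_MIN) / (1.0 - Z_MAX))
        kt2 = kt2_start
        while kt2 > kt2_cut:
            # Sample the next trial scale from the overestimated Sudakov factor.
            kt2 *= random.random() ** (1.0 / c)
            if kt2 <= kt2_cut:
                return None
            # Sample z from P_over(z) ~ 1/(1-z) on [Z_MIN, Z_MAX].
            r = random.random()
            z = 1.0 - (1.0 - Z_MIN) * ((1.0 - Z_MAX) / (1.0 - Z_MIN)) ** r
            # Accept with probability P(z)/P_over(z) = (1 + z^2)/2; otherwise veto.
            if random.random() < 0.5 * (1.0 + z * z):
                return kt2, z
        return None

    emissions = []
    kt2 = 1000.0 ** 2         # start the shower at 1 TeV (illustrative)
    while (em := next_emission(kt2, kt2_cut=1.0)) is not None:
        kt2, z = em
        emissions.append((math.sqrt(kt2), z))
    print(f"{len(emissions)} emissions; first few:", emissions[:3])
    ```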

  5. An Examination Of Fracture Splitting Parameters Of Crackable Connecting Rods

    Directory of Open Access Journals (Sweden)

    Zafer Özdemir

    2000-06-01

    Full Text Available Fracture splitting is an innovative processing technique in the field of automobile engine connecting rod (con/rod) manufacturing. Compared with the traditional method, this technique has remarkable advantages. Manufacturing procedures, equipment and tooling investment can be decreased and energy consumption reduced remarkably. Furthermore, product quality and bearing capability can also be improved. It provides a high-quality, high-accuracy and low-cost route for producing connecting rods (con/rods). With the many advantages mentioned above, this method has attracted manufacturers' attention and has been utilized in many types of con/rod manufacturing. In this article, the method and the advantages it provides, such as materials, notches for fracture splitting, fracture splitting conditions and fracture splitting equipment, are discussed in detail. The paper describes an analysis of fracture splitting parameters and optical/SEM fractography of a C70S6 crackable connecting rod. Force and velocity parameters are investigated. As a result, it is found that a uniform impact force distribution starting from the starting notch causes a brittle, cleavage failure mode, which reduces the toughness.

  6. Evaluation of Mandibular Anatomy Associated With Bad Splits in Sagittal Split Ramus Osteotomy of Mandible.

    Science.gov (United States)

    Wang, Tongyue; Han, Jeong Joon; Oh, Hee-Kyun; Park, Hong-Ju; Jung, Seunggon; Park, Yeong-Joon; Kook, Min-Suk

    2016-07-01

    This study aimed to identify risk factors associated with bad splits during sagittal split ramus osteotomy by using three-dimensional computed tomography. This study included 8 patients with bad splits and 47 normal patients without bad splits. Mandibular anatomic parameters related to the osteotomy line were measured. These included the anteroposterior width of the ramus at the level of the lingula, the distance between the external oblique ridge and the lingula, the distance between the sigmoid notch and the inferior border of the mandible, the mandibular angle, the distance between the inferior outer surface of the mandibular canal and the inferior border of the mandible under the distal root of the second molar (MCEM), the buccolingual thickness of the ramus at the level of the lingula, and the buccolingual thickness of the area just distal to the first molar (BTM1) and second molar (BTM2). The incidence of bad splits in 625 sagittal split osteotomies was 1.28%. Compared with the normal group, the bad split group exhibited significantly thinner BTM2 and a shorter distance between the sigmoid notch and the inferior border of the mandible (P < 0.05), suggesting that these anatomical characteristics are associated with bad splits. These anatomic data may help surgeons to choose the safest surgical techniques and best osteotomy sites.

  7. ProteinSplit: splitting of multi-domain proteins using prediction of ordered and disordered regions in protein sequences for virtual structural genomics

    International Nuclear Information System (INIS)

    Wyrwicz, Lucjan S; Koczyk, Grzegorz; Rychlewski, Leszek; Plewczynski, Dariusz

    2007-01-01

    The annotation of protein folds within newly sequenced genomes is the main target for semi-automated protein structure prediction (virtual structural genomics). A large number of automated methods have been developed recently, with very good results in the case of single-domain proteins. Unfortunately, most of these automated methods often fail to properly predict the distant homology between a given multi-domain protein query and structural templates. Therefore a multi-domain protein should be split into domains in order to overcome this limitation. ProteinSplit is designed to identify protein domain boundaries using a novel algorithm that predicts disordered regions in protein sequences. The software utilizes various sequence characteristics to assess the local propensity of a protein to be disordered or ordered in terms of local structure stability. These disordered parts of a protein are likely to create interdomain spacers. Because of its speed and portability, the method was successfully applied to several genome-wide fold annotation experiments. The user can run an automated analysis of sets of proteins or perform semi-automated multiple user projects (saving the results on the server). Additionally, the sequences of predicted domains can be sent to the Bioinfo.PL Protein Structure Prediction Meta-Server for further protein three-dimensional structure and function prediction. The program is freely accessible as a web service at http://lucjan.bioinfo.pl/proteinsplit, together with detailed benchmark results on the Critical Assessment of Fully Automated Structure Prediction (CAFASP) set of sequences. The source code of the local version of protein domain boundary prediction is available upon request from the authors.
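
    To make the idea concrete, here is a deliberately simplified sketch of disorder-guided domain splitting. It is not the ProteinSplit algorithm or its trained model: it scores each window by the fraction of disorder-promoting residues (a common heuristic) and proposes split points at the centres of long high-propensity stretches; the residue set, threshold, window size and example sequence are all illustrative assumptions.

    ```python
    # Toy disorder-propensity splitter (illustrative only; not ProteinSplit itself).
    DISORDER_PROMOTING = set("AEGKPQRS")   # heuristic residue set (assumption)

    def disorder_profile(seq: str, window: int = 21) -> list[float]:
        """Fraction of disorder-promoting residues in a sliding window."""
        half = window // 2
        prof = []
        for i in range(len(seq)):
            chunk = seq[max(0, i - half): i + half + 1]
            prof.append(sum(aa in DISORDER_PROMOTING for aa in chunk) / len(chunk))
        return prof

    def propose_split_points(seq: str, threshold: float = 0.65, min_len: int = 20) -> list[int]:
        """Return positions inside long disordered stretches, as candidate domain boundaries."""
        prof = disorder_profile(seq)
        splits, start = [], None
        for i, p in enumerate(prof + [0.0]):          # sentinel closes a trailing run
            if p >= threshold and start is None:
                start = i
            elif p < threshold and start is not None:
                if i - start >= min_len:
                    splits.append((start + i) // 2)   # split at the middle of the spacer
                start = None
        return splits

    # Two "ordered" blocks joined by a disorder-rich spacer (synthetic example).
    seq = "M" + "ILVFWYC" * 30 + "PEGSKQRA" * 6 + "ILVFWYC" * 30
    print(propose_split_points(seq))
    ```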

  8. EGS4 benchmark program

    International Nuclear Information System (INIS)

    Yasu, Y.; Hirayama, H.; Namito, Y.; Yashiro, S.

    1995-01-01

    This paper proposes the EGS4 Benchmark Suite, which consists of three programs called UCSAMPL4, UCSAMPL4I and XYZDOS. This paper also evaluates optimization methods of recent RISC/UNIX systems, such as IBM, HP, DEC, Hitachi and Fujitsu, for the benchmark suite. When particular compiler options and math libraries were included in the evaluation process, systems performed significantly better. The observed performance of some of the RISC/UNIX systems was beyond some so-called mainframes of IBM, Hitachi or Fujitsu. The computer performance of the EGS4 Code System on an HP9000/735 (99 MHz) was defined as one EGS4 Unit. The EGS4 Benchmark Suite was also run on various PCs such as Pentiums, i486 and DEC Alpha and so forth. The performance of recent fast PCs reaches that of recent RISC/UNIX systems. The benchmark programs have also been evaluated in correlation with industry benchmark programs, namely SPECmark. (author)
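
    The "EGS4 Unit" defined in the abstract is simply run time relative to the HP9000/735 reference machine; a minimal sketch (the wall-clock times below are made-up placeholders, not the paper's measurements):

    ```python
    # Relative performance in "EGS4 Units": reference run time / system run time,
    # so the HP9000/735 scores exactly 1.0 and faster systems score > 1.0.
    REFERENCE = "HP9000/735"
    runtimes_s = {              # hypothetical wall-clock times for the same benchmark run
        "HP9000/735": 1200.0,
        "RISC box A": 800.0,
        "Pentium PC": 1500.0,
    }
    egs4_units = {name: runtimes_s[REFERENCE] / t for name, t in runtimes_s.items()}
    for name, units in sorted(egs4_units.items(), key=lambda kv: -kv[1]):
        print(f"{name:12s} {units:.2f} EGS4 Units")
    ```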

  9. California commercial building energy benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the
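
    A minimal sketch of the kind of whole-building comparison described here: compute a building's energy use intensity (EUI) and rank it against a peer sample. The peer data and the building's numbers are invented placeholders, and this is not the Cal-Arch methodology or dataset.

    ```python
    import bisect

    def eui_kwh_per_m2(annual_kwh: float, floor_area_m2: float) -> float:
        """Energy use intensity: annual site energy normalised by floor area."""
        return annual_kwh / floor_area_m2

    def percentile_rank(value: float, peers: list[float]) -> float:
        """Percent of peer buildings using less energy per unit area than `value`."""
        ordered = sorted(peers)
        return 100.0 * bisect.bisect_left(ordered, value) / len(ordered)

    peer_euis = [95, 110, 120, 135, 150, 160, 180, 210, 240, 300]     # hypothetical peer group
    my_eui = eui_kwh_per_m2(annual_kwh=480_000, floor_area_m2=3_200)  # 150 kWh/m2
    print(f"EUI = {my_eui:.0f} kWh/m2, higher than {percentile_rank(my_eui, peer_euis):.0f}% of peers")
    ```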

  10. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    The objective of this study was to identify usage of foodservice performance measures, important activities in foodservice benchmarking, and benchmarking attitudes, beliefs, and practices by foodservice directors...

  11. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    Science.gov (United States)

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.

  12. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, based on four different applications of benchmarking. The regulation of utility companies is then discussed, after which...

  13. Regional Competitive Intelligence: Benchmarking and Policymaking

    OpenAIRE

    Huggins , Robert

    2010-01-01

    Benchmarking exercises have become increasingly popular within the sphere of regional policymaking in recent years. The aim of this paper is to analyse the concept of regional benchmarking and its links with regional policymaking processes. It develops a typology of regional benchmarking exercises and regional benchmarkers, and critically reviews the literature, both academic and policy oriented. It is argued that critics who suggest regional benchmarking is a flawed concept and technique fai...

  14. Benchmarking and performance enhancement framework for multi-staging object-oriented languages

    Directory of Open Access Journals (Sweden)

    Ahmed H. Yousef

    2013-06-01

    Full Text Available This paper focuses on verifying the readiness, feasibility, generality and usefulness of multi-staging programming in software applications. We present a benchmark designed to evaluate the performance gain of different multi-staging programming (MSP) language implementations of object-oriented languages. The benchmarks in this suite cover different tests that range from classic simple examples (like matrix algebra) to advanced examples (like encryption and image processing). The benchmark is applied to compare the performance gain of two different MSP implementations (Mint and Metaphor) that are built on object-oriented languages (Java and C#, respectively). The results concerning the application of this benchmark on these languages are presented and analysed. The measurement technique used in benchmarking leads to the development of a language-independent performance enhancement framework that allows the programmer to select which code segments need staging. The framework also enables the programmer to verify the effectiveness of staging on the application performance. The framework is applied to a real case study. The case study results showed the effectiveness of the framework in achieving significant performance enhancement.
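
    The measurement idea — timing an unstaged (interpretive) computation against a version specialised by run-time code generation — can be sketched in Python, even though the paper's benchmark targets Mint (Java) and Metaphor (C#). Everything below is an illustrative analogy, not the paper's suite or framework.

    ```python
    import timeit

    coeffs = [3.0, -2.0, 0.5, 1.25, -0.75]   # polynomial coefficients, highest degree first

    def unstaged_poly(cs, x):
        """Generic Horner evaluation that re-reads the coefficient list on every call."""
        acc = 0.0
        for c in cs:
            acc = acc * x + c
        return acc

    def stage_poly(cs):
        """Generate and compile a specialised evaluator once ('staging' the coefficients away)."""
        expr = repr(cs[0])
        for c in cs[1:]:
            expr = f"({expr}) * x + {c!r}"
        namespace = {}
        exec(compile(f"def poly(x):\n    return {expr}", "<staged>", "exec"), namespace)
        return namespace["poly"]

    staged_poly = stage_poly(coeffs)
    assert abs(unstaged_poly(coeffs, 1.7) - staged_poly(1.7)) < 1e-12

    t_unstaged = timeit.timeit(lambda: unstaged_poly(coeffs, 1.7), number=200_000)
    t_staged = timeit.timeit(lambda: staged_poly(1.7), number=200_000)
    print(f"unstaged {t_unstaged:.3f}s  staged {t_staged:.3f}s  speedup x{t_unstaged / t_staged:.2f}")
    ```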

  15. Statistical Analysis of Reactor Pressure Vessel Fluence Calculation Benchmark Data Using Multiple Regression Techniques

    International Nuclear Information System (INIS)

    Carew, John F.; Finch, Stephen J.; Lois, Lambros

    2003-01-01

    The calculated >1-MeV pressure vessel fluence is used to determine the fracture toughness and integrity of the reactor pressure vessel. It is therefore of the utmost importance to ensure that the fluence prediction is accurate and unbiased. In practice, this assurance is provided by comparing the predictions of the calculational methodology with an extensive set of accurate benchmarks. A benchmarking database is used to provide an estimate of the overall average measurement-to-calculation (M/C) bias in the calculations. This average is used as an ad hoc multiplicative adjustment to the calculations to correct for the observed calculational bias. However, this average only provides a well-defined and valid adjustment of the fluence if the M/C data are homogeneous; i.e., the data are statistically independent and there is no correlation between subsets of M/C data. Typically, the identification of correlations between the errors in the database M/C values is difficult because the correlation is of the same magnitude as the random errors in the M/C data and varies substantially over the database. In this paper, an evaluation of a reactor dosimetry benchmark database is performed to determine the statistical validity of the adjustment to the calculated pressure vessel fluence. Physical mechanisms that could potentially introduce a correlation between the subsets of M/C ratios are identified and included in a multiple regression analysis of the M/C data. Rigorous statistical criteria are used to evaluate the homogeneity of the M/C data and determine the validity of the adjustment. For the database evaluated, the M/C data are found to be strongly correlated with dosimeter response threshold energy and dosimeter location (e.g., cavity versus in-vessel). It is shown that because of the inhomogeneity in the M/C data, for this database, the benchmark data do not provide a valid basis for adjusting the pressure vessel fluence. The statistical criteria and methods employed in
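
    The kind of multiple regression described — M/C ratios regressed on dosimeter threshold energy and a cavity/in-vessel indicator — can be sketched as an ordinary least-squares fit. The data below are synthetic stand-ins, not the benchmark database used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 60
    threshold_mev = rng.uniform(0.5, 8.0, n)            # dosimeter response threshold energy
    in_vessel = rng.integers(0, 2, n).astype(float)     # 1 = in-vessel, 0 = cavity dosimeter
    # Synthetic M/C ratios with a deliberate dependence on both predictors plus noise.
    mc_ratio = 0.95 + 0.01 * threshold_mev + 0.04 * in_vessel + rng.normal(0.0, 0.03, n)

    # Design matrix: intercept, threshold energy, location dummy.
    X = np.column_stack([np.ones(n), threshold_mev, in_vessel])
    beta, residuals, rank, _ = np.linalg.lstsq(X, mc_ratio, rcond=None)
    print("intercept, energy slope, in-vessel offset:", np.round(beta, 4))

    # If the slope/offset terms are significant, a single average M/C adjustment
    # is not justified -- which is the paper's point about inhomogeneous data.
    ```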

  16. Benchmarking Using Basic DBMS Operations

    Science.gov (United States)

    Crolotte, Alain; Ghazal, Ahmad

    The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used these benchmarks to show the superiority and competitive edge of their products. However, over time, the TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally, we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
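
    A toy version of the idea — timing basic operations (scan, aggregation, join, index access) on a TPC-H-like pair of tables — can be written against SQLite. The schema, row counts and queries are illustrative assumptions, not the XMarq query set.

    ```python
    import sqlite3
    import time
    import random

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE orders(o_id INTEGER PRIMARY KEY, o_cust INTEGER, o_total REAL);
        CREATE TABLE lineitem(l_order INTEGER, l_qty INTEGER, l_price REAL);
        CREATE INDEX idx_lineitem_order ON lineitem(l_order);
    """)
    random.seed(1)
    con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                    [(i, random.randrange(1000), random.uniform(10, 500)) for i in range(20_000)])
    con.executemany("INSERT INTO lineitem VALUES (?, ?, ?)",
                    [(random.randrange(20_000), random.randrange(1, 8), random.uniform(1, 100))
                     for _ in range(100_000)])
    con.commit()

    queries = {
        "scan":        "SELECT COUNT(*) FROM lineitem WHERE l_price > 50",
        "aggregation": "SELECT l_qty, AVG(l_price) FROM lineitem GROUP BY l_qty",
        "join":        "SELECT COUNT(*) FROM orders o JOIN lineitem l ON l.l_order = o.o_id "
                       "WHERE o.o_total > 250",
        "index":       "SELECT SUM(l_price) FROM lineitem WHERE l_order = 1234",
    }
    for name, sql in queries.items():
        t0 = time.perf_counter()
        con.execute(sql).fetchall()
        print(f"{name:12s} {1000 * (time.perf_counter() - t0):7.2f} ms")
    ```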

  17. High Energy Physics (HEP) benchmark program

    International Nuclear Information System (INIS)

    Yasu, Yoshiji; Ichii, Shingo; Yashiro, Shigeo; Hirayama, Hideo; Kokufuda, Akihiro; Suzuki, Eishin.

    1993-01-01

    High Energy Physics (HEP) benchmark programs are indispensable tools for selecting a suitable computer for a HEP application system. Industry-standard benchmark programs cannot be used for this particular kind of selection. The CERN and SSC benchmark suites are well-known HEP benchmark programs for this purpose. The CERN suite includes event reconstruction and event generator programs, while the SSC one includes event generators. In this paper, we found that the results from these two suites are not consistent. And the result from the industry benchmark does not agree with either of these two. Besides, we describe a comparison of benchmark results using the EGS4 Monte Carlo simulation program with ones from the two HEP benchmark suites. We found that the result from EGS4 is not consistent with the other two. The industry-standard SPECmark values on various computer systems are not consistent with the EGS4 results either. Because of these inconsistencies, we point out the necessity of a standardization of HEP benchmark suites. Also, an EGS4 benchmark suite should be developed for users of applications such as medical science, nuclear power plants, nuclear physics and high energy physics. (author)

  18. Benchmarking Tool Kit.

    Science.gov (United States)

    Canadian Health Libraries Association.

    Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…

  19. The Split-Brain Phenomenon Revisited: A Single Conscious Agent with Split Perception.

    Science.gov (United States)

    Pinto, Yair; de Haan, Edward H F; Lamme, Victor A F

    2017-11-01

    The split-brain phenomenon is caused by the surgical severing of the corpus callosum, the main route of communication between the cerebral hemispheres. The classical view of this syndrome asserts that conscious unity is abolished. The left hemisphere consciously experiences and functions independently of the right hemisphere. This view is a cornerstone of current consciousness research. In this review, we first discuss the evidence for the classical view. We then propose an alternative, the 'conscious unity, split perception' model. This model asserts that a split brain produces one conscious agent who experiences two parallel, unintegrated streams of information. In addition to changing our view of the split-brain phenomenon, this new model also poses a serious challenge for current dominant theories of consciousness. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. A Global Vision over Benchmarking Process: Benchmarking Based Enterprises

    OpenAIRE

    Sitnikov, Catalina; Giurca Vasilescu, Laura

    2008-01-01

    Benchmarking uses the knowledge and the experience of others to improve the enterprise. Starting from the analysis of the performance and underlying the strengths and weaknesses of the enterprise it should be assessed what must be done in order to improve its activity. Using benchmarking techniques, an enterprise looks at how processes in the value chain are performed. The approach based on the vision “from the whole towards the parts” (a fragmented image of the enterprise’s value chain) redu...

  1. Triadic split-merge sampler

    Science.gov (United States)

    van Rossum, Anne C.; Lin, Hai Xiang; Dubbeldam, Johan; van der Herik, H. Jaap

    2018-04-01

    In machine vision, typical heuristic methods to extract parameterized objects out of raw data points are the Hough transform and RANSAC. Bayesian models carry the promise to optimally extract such parameterized objects given a correct definition of the model and the type of noise at hand. A category of solvers for Bayesian models are Markov chain Monte Carlo (MCMC) methods. Naive implementations of MCMC methods suffer from slow convergence in machine vision due to the complexity of the parameter space. To this end, blocked Gibbs and split-merge samplers have been developed that assign multiple data points to clusters at once. In this paper we introduce a new split-merge sampler, the triadic split-merge sampler, that performs steps between two and three randomly chosen clusters. This has two advantages. First, it reduces the asymmetry between the split and merge steps. Second, it is able to propose a new cluster that is composed of data points from two different clusters. Both advantages speed up convergence, which we demonstrate on a line extraction problem. We show that the triadic split-merge sampler outperforms the conventional split-merge sampler. Although this new MCMC sampler is demonstrated in this machine vision context, its applications extend to the very general domain of statistical inference.

  2. Investible benchmarks & hedge fund liquidity

    OpenAIRE

    Freed, Marc S; McMillan, Ben

    2011-01-01

    A lack of commonly accepted benchmarks for hedge fund performance has permitted hedge fund managers to attribute to skill returns that may actually accrue from market risk factors and illiquidity. Recent innovations in hedge fund replication permits us to estimate the extent of this misattribution. Using an option-based model, we find evidence that the value of liquidity options that investors implicitly grant managers when they invest may account for part or even all hedge fund returns. C...

  3. Boomerang deformity of cervical spinal cord migrating between split laminae after laminoplasty.

    Science.gov (United States)

    Kimura, S; Gomibuchi, F; Shimoda, H; Ikezawa, Y; Segawa, H; Kaneko, F; Uchiyama, S; Homma, T

    2000-04-01

    Patients with cervical compression myelopathy were studied to elucidate the mechanism underlying boomerang deformity, which results from the migration of the cervical spinal cord between split laminae after laminoplasty with median splitting of the spinous processes (boomerang sign). Thirty-nine cases, comprising 25 patients with cervical spondylotic myelopathy, 8 patients with ossification of the posterior longitudinal ligament, and 6 patients with cervical disc herniation with developmental canal stenosis, were examined. The clinical and radiological findings were retrospectively compared between patients with (B group, 8 cases) and without (C group, 31 cases) boomerang sign. Moderate increase of the grade of this deformity resulted in no clinical recovery, although there was no difference in clinical recovery between the two groups. Most boomerang signs developed at the C4/5 and/or C5/6 level, where maximal posterior movement of the spinal cord was achieved. Widths between lateral hinges and between split laminae in the B group were smaller than in the C group. Flatness of the spinal cord in the B group was more severe than in the C group. In conclusion, the boomerang sign was caused by posterior movement of the spinal cord, narrower enlargement of the spinal canal and flatness of the spinal cord.

  4. Compilation of benchmark results for fusion related Nuclear Data

    International Nuclear Information System (INIS)

    Maekawa, Fujio; Wada, Masayuki; Oyama, Yukio; Ichihara, Chihiro; Makita, Yo; Takahashi, Akito

    1998-11-01

    This report compiles results of benchmark tests for validation of evaluated nuclear data to be used in nuclear designs of fusion reactors. Part of the results were obtained under activities of the Fusion Neutronics Integral Test Working Group organized by members of both the Japan Nuclear Data Committee and the Reactor Physics Committee. The following three benchmark experiments were employed for the tests: (i) the leakage neutron spectrum measurement experiments from slab assemblies at the D-T neutron source at FNS/JAERI, (ii) in-situ neutron and gamma-ray measurement experiments (so-called clean benchmark experiments) also at FNS, and (iii) the pulsed sphere experiments for leakage neutron and gamma-ray spectra at the D-T neutron source facility of Osaka University, OKTAVIAN. The evaluated nuclear data tested were JENDL-3.2, JENDL Fusion File, FENDL/E-1.0 and newly selected data for FENDL/E-2.0. Comparisons of benchmark calculations with the experiments for twenty-one elements, i.e., Li, Be, C, N, O, F, Al, Si, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zr, Nb, Mo, W and Pb, are summarized. (author). 65 refs

  5. Argonne Code Center: Benchmark problem book.

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1977-06-01

    This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement to the original benchmark book, which was first published in February 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book.

  6. A split hand-split foot (SHFM3) gene is located at 10q24→25

    Energy Technology Data Exchange (ETDEWEB)

    Gurrieri, F.; Genuardi, M.; Nanni, L.; Sangiorgi, E.; Garofalo, G. [Catholic Univ. of Rome (Italy)] [and others]

    1996-04-24

    The split hand-split foot (SHSF) malformation affects the central rays of the upper and lower limbs. It presents either as an isolated defect or in association with other skeletal or non-skeletal abnormalities. An autosomal SHSF locus (SHFM1) was previously mapped to 7q22.1. We report the mapping of a second autosomal SHSF locus to the 10q24→25 region. Maximum lod scores of 3.73, 4.33 and 4.33 at a recombination fraction of zero were obtained for the loci D10S198, PAX2 and D10S1239, respectively. A 19 cM critical region could be defined by haplotype analysis, and several genes with a potential role in limb morphogenesis are located in this region. Heterogeneity testing indicates the existence of at least one additional autosomal SHSF locus. 36 refs., 3 figs., 3 tabs.

  7. Benchmarks for GADRAS performance validation

    International Nuclear Information System (INIS)

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L. Jr.

    2009-01-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  8. Benchmarking in Czech Higher Education

    Directory of Open Access Journals (Sweden)

    Plaček Michal

    2015-12-01

    Full Text Available The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expression of the need for it and the importance of benchmarking as a very suitable performance-management tool in less developed countries are the impetus for the second part of our article. Based on an analysis of the current situation and existing needs in the Czech Republic, as well as on a comparison with international experience, recommendations for public policy are made, which lie in the design of a model of a collaborative benchmarking for Czech economics and management in higher-education programs. Because the fully complex model cannot be implemented immediately – which is also confirmed by structured interviews with academics who have practical experience with benchmarking –, the final model is designed as a multi-stage model. This approach helps eliminate major barriers to the implementation of benchmarking.

  9. Non-splitting in Kirchberg's Ideal-related KK-Theory

    DEFF Research Database (Denmark)

    Eilers, Søren; Restorff, Gunnar; Ruiz, Efren

    2011-01-01

    A. Bonkat obtained a universal coefficient theorem in the setting of Kirchberg's ideal-related KK-theory in the fundamental case of a C*-algebra with one specified ideal. The universal coefficient sequence was shown to split, unnaturally, under certain conditions. Employing certain K-theoretical ...

  10. Benchmark Tests on the New IBM RISC System/6000 590 Workstation

    Directory of Open Access Journals (Sweden)

    Harvey J. Wasserman

    1995-01-01

    Full Text Available The results of benchmark tests on the superscalar IBM RISC System/6000 Model 590 are presented. A set of well-characterized Fortran benchmarks spanning a range of computational characteristics was used for the study. The data from the 590 system are compared with those from a single-processor CRAY C90 system as well as with other microprocessor-based systems, such as the Digital Equipment Corporation AXP 3000/500X and the Hewlett-Packard HP/735.

  11. DETECTION OF FLUX EMERGENCE, SPLITTING, MERGING, AND CANCELLATION OF NETWORK FIELD. I. SPLITTING AND MERGING

    Energy Technology Data Exchange (ETDEWEB)

    Iida, Y.; Yokoyama, T. [Department of Earth and Planetary Science, University of Tokyo, Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Hagenaar, H. J. [Lockheed Martin Advanced Technology Center, Org. ADBS, Building 252, 3251 Hanover Street, Palo Alto, CA 94304 (United States)

    2012-06-20

    Frequencies of magnetic patch processes on the supergranule boundary, namely, flux emergence, splitting, merging, and cancellation, are investigated through automatic detection. We use a set of line-of-sight magnetograms taken by the Solar Optical Telescope (SOT) on board the Hinode satellite. We found 1636 positive patches and 1637 negative patches in the data set, whose time duration is 3.5 hr and field of view is 112'' × 112''. The total numbers of magnetic processes are as follows: 493 positive and 482 negative splittings, 536 positive and 535 negative mergings, 86 cancellations, and 3 emergences. The total numbers of emergence and cancellation are significantly smaller than those of splitting and merging. Further, the frequency dependence of the merging and splitting processes on the flux content are investigated. Merging has a weak dependence on the flux content with a power-law index of only 0.28. The timescale for splitting is found to be independent of the parent flux content before splitting, which corresponds to approximately 33 minutes. It is also found that patches split into any flux contents with the same probability. This splitting has a power-law distribution of the flux content with an index of -2 as a time-independent solution. These results support that the frequency distribution of the flux content in the analyzed flux range is rapidly maintained by merging and splitting, namely, surface processes. We suggest a model for frequency distributions of cancellation and emergence based on this idea.
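
    The reported power-law index (about -2 for the flux-content distribution) is the kind of quantity one can estimate with the standard continuous maximum-likelihood estimator, alpha = 1 + n / sum(ln(x_i / x_min)). The sketch below uses synthetic flux values, not the Hinode/SOT data, and the lower cutoff is an illustrative assumption.

    ```python
    import math
    import random

    def powerlaw_mle_alpha(values, x_min):
        """Continuous power-law exponent estimate (Clauset-style MLE) above x_min."""
        tail = [v for v in values if v >= x_min]
        return 1.0 + len(tail) / sum(math.log(v / x_min) for v in tail)

    # Synthetic flux contents drawn from p(x) ~ x^-2 above x_min (inverse-CDF sampling).
    random.seed(0)
    x_min = 1e17                 # Mx, illustrative lower cutoff
    alpha_true = 2.0
    fluxes = [x_min * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0)) for _ in range(5000)]

    print(f"estimated alpha = {powerlaw_mle_alpha(fluxes, x_min):.2f} (true {alpha_true})")
    ```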

  12. ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms

    DEFF Research Database (Denmark)

    Aumüller, Martin; Bernhardsson, Erik; Faithfull, Alexander

    2017-01-01

    This paper describes ANN-Benchmarks, a tool for evaluating the performance of in-memory approximate nearest neighbor algorithms. It provides a standard interface for measuring the performance and quality achieved by nearest neighbor algorithms on different standard data sets. It supports several... visualise these as images, plots, and websites with interactive plots. ANN-Benchmarks aims to provide a constantly updated overview of the current state of the art of k-NN algorithms. In the short term, this overview allows users to choose the correct k-NN algorithm and parameters... for their similarity search task; in the longer term, algorithm designers will be able to use this overview to test and refine automatic parameter tuning. The paper gives an overview of the system, evaluates the results of the benchmark, and points out directions for future work. Interestingly, very different...
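
    The two headline metrics in such a benchmark — recall against exact neighbours and queries per second — can be sketched with a tiny IVF-style index (cluster the data, then probe only a few nearest clusters at query time). This is a generic illustration, not ANN-Benchmarks' interface or any particular library's API; the dataset and parameters are arbitrary.

    ```python
    import time
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    base = rng.standard_normal((20_000, 32)).astype(np.float32)
    queries = rng.standard_normal((200, 32)).astype(np.float32)
    k, nlist, nprobe = 10, 64, 4

    def top_k(q, data, k):
        d = ((data - q) ** 2).sum(axis=1)
        idx = np.argpartition(d, k)[:k]          # k smallest distances, unordered
        return idx[np.argsort(d[idx])]           # ordered by distance

    # Build a toy IVF index: partition the base set with k-means.
    km = KMeans(n_clusters=nlist, n_init=4, random_state=0).fit(base)
    buckets = [np.where(km.labels_ == c)[0] for c in range(nlist)]

    def ivf_search(q, k, nprobe):
        cd = ((km.cluster_centers_ - q) ** 2).sum(axis=1)
        cand = np.concatenate([buckets[c] for c in np.argsort(cd)[:nprobe]])
        d = ((base[cand] - q) ** 2).sum(axis=1)
        return cand[np.argsort(d)[:k]]

    exact = [set(top_k(q, base, k)) for q in queries]
    t0 = time.perf_counter()
    approx = [ivf_search(q, k, nprobe) for q in queries]
    elapsed = time.perf_counter() - t0
    recall = np.mean([len(exact[i] & set(approx[i])) / k for i in range(len(queries))])
    print(f"recall@{k} = {recall:.3f}, QPS = {len(queries) / elapsed:.0f}")
    ```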

  13. Benchmarking Swiss electricity grids

    International Nuclear Information System (INIS)

    Walti, N.O.; Weber, Ch.

    2001-01-01

    This extensive article describes a pilot benchmarking project initiated by the Swiss Association of Electricity Enterprises that assessed 37 Swiss utilities. The data collected from these utilities on a voluntary basis included data on technical infrastructure, investments and operating costs. These various factors are listed and discussed in detail. The assessment methods and rating mechanisms that provided the benchmarks are discussed, and the results of the pilot study are presented; these are to form the basis of benchmarking procedures for the grid regulation authorities under Switzerland's planned electricity market law. Examples of the practical use of the benchmarking methods are given, and cost-efficiency questions still open in the area of investment and operating costs are listed. Prefaces by the Swiss Association of Electricity Enterprises and the Swiss Federal Office of Energy complete the article

  14. Splitting: The Development of a Measure.

    Science.gov (United States)

    Gerson, Mary-Joan

    1984-01-01

    Described the development of a scale that measures splitting as a psychological structure. The construct validity of the splitting scale is suggested by the positive relationship between splitting scores and a diagnostic measure of the narcissistic personality disorder, as well as a negative relationship between splitting scores and levels of…

  15. Pre-evaluation of fusion shielding benchmark experiment

    International Nuclear Information System (INIS)

    Hayashi, K.; Handa, H.; Konno, C.

    1994-01-01

    Shielding benchmark experiments are very useful for testing design codes and nuclear data for fusion devices. There are many types of benchmark experiments that should be done for fusion shielding problems, but time and budget are limited. Therefore it is important to select and determine effective experimental configurations by precalculation before the experiment. The authors did three types of pre-evaluation to determine the experimental assembly configurations of shielding benchmark experiments planned at FNS, JAERI. (1) Void Effect Experiment - The purpose of this experiment is to measure the local increase of dose and nuclear heating behind small void(s) in shield material. The dimensions of the voids and their arrangement were decided as follows. Dose and nuclear heating were calculated both with and without void(s). The minimum size of the void was determined so that the ratio of these two results would be larger than the error of the measurement system. (2) Auxiliary Shield Experiment - The purpose of this experiment is to measure the shielding properties of B4C, Pb and W, and the dose around a superconducting magnet (SCM). The thicknesses of B4C, Pb and W and their arrangement, including multilayer configurations, were determined. (3) SCM Nuclear Heating Experiment - The purpose of this experiment is to measure nuclear heating and dose distribution in SCM material. Because it is difficult to use liquid helium as a part of the SCM mock-up material, material compositions for the SCM mock-up were surveyed to have a nuclear heating property similar to that of the real SCM composition

  16. Bad splits in bilateral sagittal split osteotomy: systematic review and meta-analysis of reported risk factors.

    Science.gov (United States)

    Steenen, S A; van Wijk, A J; Becking, A G

    2016-08-01

    An unfavourable and unanticipated pattern of the bilateral sagittal split osteotomy (BSSO) is generally referred to as a 'bad split'. Patient factors predictive of a bad split reported in the literature are controversial. Suggested risk factors are reviewed in this article. A systematic review was undertaken, yielding a total of 30 studies published between 1971 and 2015 reporting the incidence of bad split and patient age, and/or surgical technique employed, and/or the presence of third molars. These included 22 retrospective cohort studies, six prospective cohort studies, one matched-pair analysis, and one case series. Spearman's rank correlation showed a statistically significant but weak correlation between increasing average age and increasing occurrence of bad splits in 18 studies (ρ = 0.229; P < 0.05). No significant difference was found in the incidence of bad split among the different splitting techniques. A meta-analysis pooling the effect sizes of seven cohort studies showed no significant difference in the incidence of bad split between cohorts of patients with third molars present and concomitantly removed during surgery, and patients in whom third molars were removed at least 6 months preoperatively (odds ratio 1.16, 95% confidence interval 0.73-1.85, Z=0.64, P=0.52). In summary, there is no robust evidence to date to show that any risk factor influences the incidence of bad split. Copyright © 2016 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
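
    The pooled odds ratio and 95% confidence interval quoted above are the kind of result produced by a standard inverse-variance (fixed-effect) pooling of per-study 2×2 tables; a minimal sketch, with invented counts that are not the seven cohort studies' data:

    ```python
    import math

    # Hypothetical per-study 2x2 tables:
    # (bad splits, good splits) with third molars present,
    # (bad splits, good splits) with third molars removed preoperatively.
    studies = [
        (4, 196, 3, 197),
        (2, 148, 4, 146),
        (6, 294, 5, 295),
    ]

    def log_or_and_var(a, b, c, d):
        """Log odds ratio and its variance (0.5 continuity correction if any cell is zero)."""
        if 0 in (a, b, c, d):
            a, b, c, d = (x + 0.5 for x in (a, b, c, d))
        lor = math.log((a * d) / (b * c))
        return lor, 1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d

    weights, weighted = [], []
    for a, b, c, d in studies:
        lor, var = log_or_and_var(a, b, c, d)
        weights.append(1.0 / var)
        weighted.append(lor / var)

    pooled = sum(weighted) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
    print(f"pooled OR = {math.exp(pooled):.2f} (95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f})")
    ```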

  17. Computational fluid dynamics (CFD) round robin benchmark for a pressurized water reactor (PWR) rod bundle

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Shin K., E-mail: paengki1@tamu.edu; Hassan, Yassin A.

    2016-05-15

    Highlights: • The capabilities of steady RANS models were directly assessed for full axial scale experiment. • The importance of mesh and conjugate heat transfer was reaffirmed. • The rod inner-surface temperature was directly compared. • The steady RANS calculations showed a limitation in the prediction of circumferential distribution of the rod surface temperature. - Abstract: This study examined the capabilities and limitations of steady Reynolds-Averaged Navier–Stokes (RANS) approach for pressurized water reactor (PWR) rod bundle problems, based on the round robin benchmark of computational fluid dynamics (CFD) codes against the NESTOR experiment for a 5 × 5 rod bundle with typical split-type mixing vane grids (MVGs). The round robin exercise against the high-fidelity, broad-range (covering multi-spans and entire lateral domain) NESTOR experimental data for both the flow field and the rod temperatures enabled us to obtain important insights into CFD prediction and validation for the split-type MVG PWR rod bundle problem. It was found that the steady RANS turbulence models with wall function could reasonably predict two key variables for a rod bundle problem – grid span pressure loss and the rod surface temperature – once mesh (type, resolution, and configuration) was suitable and conjugate heat transfer was properly considered. However, they over-predicted the magnitude of the circumferential variation of the rod surface temperature and could not capture its peak azimuthal locations for a central rod in the wake of the MVG. These discrepancies in the rod surface temperature were probably because the steady RANS approach could not capture unsteady, large-scale cross-flow fluctuations and qualitative cross-flow pattern change due to the laterally confined test section. Based on this benchmarking study, lessons and recommendations about experimental methods as well as CFD methods were also provided for the future research.

  18. Benchmarking af kommunernes sagsbehandling

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007, Ankestyrelsen (the National Social Appeals Board) is to carry out benchmarking of the quality of municipal casework. The purpose of the benchmarking is to develop the design of the practice investigations with a view to better follow-up and to improve municipal casework. This working paper discusses methods for benchmarking...

  19. Split Malcev algebras

    Indian Academy of Sciences (India)


  20. Two-Loop Splitting Amplitudes

    International Nuclear Information System (INIS)

    Bern, Z.

    2004-01-01

    Splitting amplitudes govern the behavior of scattering amplitudes as the momenta of external legs become collinear. In this talk we outline the calculation of two-loop splitting amplitudes via the unitarity sewing method. This method retains the simple factorization properties of light-cone gauge, but avoids the need for prescriptions such as the principal value or Mandelstam-Leibbrandt ones. The encountered loop momentum integrals are then evaluated using integration-by-parts and Lorentz invariance identities. We outline a variety of applications for these splitting amplitudes

  1. Two-loop splitting amplitudes

    International Nuclear Information System (INIS)

    Bern, Z.; Dixon, L.J.; Kosower, D.A.

    2004-01-01

    Splitting amplitudes govern the behavior of scattering amplitudes as the momenta of external legs become collinear. In this talk we outline the calculation of two-loop splitting amplitudes via the unitarity sewing method. This method retains the simple factorization properties of light-cone gauge, but avoids the need for prescriptions such as the principal value or Mandelstam-Leibbrandt ones. The encountered loop momentum integrals are then evaluated using integration-by-parts and Lorentz invariance identities. We outline a variety of applications for these splitting amplitudes

  2. Generalized Split-Window Algorithm for Estimate of Land Surface Temperature from Chinese Geostationary FengYun Meteorological Satellite (FY-2C) Data

    Directory of Open Access Journals (Sweden)

    Jun Xia

    2008-02-01

    Full Text Available On the basis of radiative transfer theory, this paper addresses the estimation of Land Surface Temperature (LST) from China's first operational geostationary meteorological satellite, FengYun-2C (FY-2C), in two thermal infrared channels (IR1, 10.3-11.3 μm and IR2, 11.5-12.5 μm), using the Generalized Split-Window (GSW) algorithm proposed by Wan and Dozier (1996). The coefficients in the GSW algorithm corresponding to a series of overlapping ranges of the mean emissivity, the atmospheric Water Vapor Content (WVC), and the LST were derived using a statistical regression method from numerical values simulated with the accurate atmospheric radiative transfer model MODTRAN 4 over a wide range of atmospheric and surface conditions. The simulation analysis showed that the LST could be estimated by the GSW algorithm with a Root Mean Square Error (RMSE) of less than 1 K for the sub-ranges with Viewing Zenith Angle (VZA) less than 30°, or for the sub-ranges with VZA less than 60° and atmospheric WVC less than 3.5 g/cm2, provided that the Land Surface Emissivities (LSEs) are known. In order to determine the range for the optimum coefficients of the GSW algorithm, the LSEs could be derived from the data in MODIS channels 31 and 32 provided by the MODIS/Terra LST product MOD11B1, or be estimated either according to the land surface classification or using the method proposed by Jiang et al. (2006); the WVC could be obtained from the MODIS total precipitable water product MOD05, or be retrieved using Li et al.'s method (2003). Sensitivity and error analyses in terms of the uncertainty of the LSE and WVC as well as the instrumental noise were performed. In addition, in order to compare different formulations of the split-window algorithms, several recently proposed split-window algorithms were used to estimate the LST with the same simulated FY-2C data. The result of the intercomparison showed that most of the algorithms give
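
    The generalized split-window form referred to (Wan and Dozier, 1996) combines the two brightness temperatures with emissivity-dependent terms. A minimal sketch follows, with placeholder coefficients standing in for the regression values that the paper derives separately for each WVC/VZA sub-range; none of the numbers below are from the paper.

    ```python
    def gsw_lst(t1_k, t2_k, emis1, emis2, coeffs):
        """Generalized split-window LST from two thermal-channel brightness temperatures.

        coeffs = (C, A1, A2, A3, B1, B2, B3) for one sub-range of mean emissivity,
        water vapour content and viewing zenith angle (placeholder values below).
        """
        C, A1, A2, A3, B1, B2, B3 = coeffs
        eps = 0.5 * (emis1 + emis2)        # mean emissivity of the two channels
        deps = emis1 - emis2               # emissivity difference
        t_mean = 0.5 * (t1_k + t2_k)
        t_diff = 0.5 * (t1_k - t2_k)
        return (C
                + (A1 + A2 * (1.0 - eps) / eps + A3 * deps / eps ** 2) * t_mean
                + (B1 + B2 * (1.0 - eps) / eps + B3 * deps / eps ** 2) * t_diff)

    # Placeholder coefficients (NOT the FY-2C regression values from the paper):
    demo_coeffs = (-0.4, 1.02, 0.16, -0.25, 4.2, 10.0, -30.0)
    print(f"LST ~ {gsw_lst(295.6, 294.1, 0.97, 0.975, demo_coeffs):.1f} K")
    ```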

  3. MFTF TOTAL benchmark

    International Nuclear Information System (INIS)

    Choy, J.H.

    1979-06-01

    A benchmark of the TOTAL data base management system as applied to the Mirror Fusion Test Facility (MFTF) data base was implemented and run in February and March of 1979. The benchmark was run on an Interdata 8/32 and involved the following tasks: (1) data base design, (2) data base generation, (3) data base load, and (4) development and implementation of programs to simulate MFTF usage of the data base

  4. The Drill Down Benchmark

    NARCIS (Netherlands)

    P.A. Boncz (Peter); T. Rühl (Tim); F. Kwakkel

    1998-01-01

    Data Mining places specific requirements on DBMS query performance that cannot be evaluated satisfactorily using existing OLAP benchmarks. The DD Benchmark - defined here - provides a practical case and yardstick to explore how well a DBMS is able to support Data Mining applications. It

  5. Concentric Split Flow Filter

    Science.gov (United States)

    Stapleton, Thomas J. (Inventor)

    2015-01-01

    A concentric split flow filter may be configured to remove odor and/or bacteria from pumped air used to collect urine and fecal waste products. For instance, the filter may be designed to effectively fill the volume that was previously considered wasted surrounding the transport tube of a waste management system. The concentric split flow filter may be configured to split the air flow, with substantially half of the air flow to be treated traveling through a first bed of filter media and substantially the other half traveling through a second bed of filter media. This split flow design reduces the air velocity by 50%. In this way, the pressure drop of the filter may be reduced by as much as a factor of 4 compared to the conventional design.
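
    A rough check of the stated factor of 4: if the pressure drop scales roughly with the square of the face velocity through each bed (a turbulent-like assumption; for purely Darcy-type flow the scaling would be closer to linear and the gain closer to 2×), then halving the velocity by splitting the flow gives

    ```latex
    \Delta p \propto v^{2}
    \quad\Longrightarrow\quad
    \frac{\Delta p_{\text{split}}}{\Delta p_{\text{single}}}
      = \left(\frac{v/2}{v}\right)^{2} = \frac{1}{4}.
    ```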

  6. Benchmarking and Learning in Public Healthcare

    DEFF Research Database (Denmark)

    Buckmaster, Natalie; Mouritsen, Jan

    2017-01-01

    This research investigates the effects of learning-oriented benchmarking in public healthcare settings. Benchmarking is a widely adopted yet little explored accounting practice that is part of the paradigm of New Public Management. Extant studies are directed towards mandated coercive benchmarking...

  7. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts to... It is also a contribution to the discussions within the EU-sponsored BEST Thematic Network (Benchmarking European Sustainable Transport), which ran from 2000 to 2003.

  8. Benchmarking – A tool for judgment or improvement?

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2010-01-01

    Change in construction is high on the agenda for the Danish government, and a comprehensive effort is being made to improve quality and efficiency. This has led to a governmental effort to bring benchmarking into the Danish construction sector. This paper is an appraisal of benchmarking as it is presently carried out in the Danish construction sector. Many different perceptions of benchmarking and of the nature of the construction sector lead to uncertainty in how to perceive and use benchmarking, hence generating uncertainty in understanding the effects of benchmarking. This paper addresses... Two perceptions of benchmarking will be presented: public benchmarking and best practice benchmarking. These two types of benchmarking are used to characterize and discuss the Danish benchmarking system and to highlight which effects, possibilities and challenges follow in the wake of using this kind of benchmarking.

  9. Crystal-field-driven redox reactions: How common minerals split H2O and CO2 into reduced H2 and C plus oxygen

    Science.gov (United States)

    Freund, F.; Batllo, F.; Leroy, R. C.; Lersky, S.; Masuda, M. M.; Chang, S.

    1991-01-01

    It is difficult to prove the presence of molecular H2 and reduced C in minerals containing dissolved H2O and CO2. A technique was developed which unambiguously shows that minerals grown in viciously reducing environments contain peroxy in their crystal structures. The peroxy represent interstitial oxygen atoms left behind when the solute H2O and/or CO2 split off H2 and C as a result of internal redox reactions, driven by the crystal field. The observation of peroxy affirms the presence of H2 and reduced C. It shows that the solid state is indeed an unusual reaction medium.

  10. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared performance of software-only to GPU-accelerated. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows

  11. In vivo and in vitro protein ligation by naturally occurring and engineered split DnaE inteins.

    Directory of Open Access Journals (Sweden)

    A Sesilja Aranko

    Full Text Available BACKGROUND: Protein trans-splicing by naturally occurring split DnaE inteins is used for protein ligation of foreign peptide fragments. In order to widen biotechnological applications of protein trans-splicing, it is highly desirable to have split inteins with shorter C-terminal fragments, which can be chemically synthesized. PRINCIPAL FINDINGS: We report the identification of new functional split sites in DnaE inteins from Synechocystis sp. PCC6803 and from Nostoc punctiforme. One of the newly engineered split intein bearing C-terminal 15 residues showed more robust protein trans-splicing activity than naturally occurring split DnaE inteins in a foreign context. During the course of our experiments, we found that protein ligation by protein trans-splicing depended not only on the splicing junction sequences, but also on the foreign extein sequences. Furthermore, we could classify the protein trans-splicing reactions in foreign contexts with a simple kinetic model into three groups according to their kinetic parameters in the presence of various reducing agents. CONCLUSION: The shorter C-intein of the newly engineered split intein could be a useful tool for biotechnological applications including protein modification, incorporation of chemical probes, and segmental isotopic labelling. Based on kinetic analysis of the protein splicing reactions, we propose a general strategy to improve ligation yields by protein trans-splicing, which could significantly enhance the applications of protein ligation by protein trans-splicing.
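
    As a purely illustrative sketch of how trans-splicing time courses can be summarized by kinetic parameters (the record above does not specify its kinetic model; the construct names, data and values below are hypothetical), a pseudo-first-order fit might look as follows:

      # Illustrative only: fit a pseudo-first-order model to hypothetical
      # trans-splicing time courses and report (Y_max, k) for each construct.
      import numpy as np
      from scipy.optimize import curve_fit

      def first_order(t, y_max, k):
          """Fraction of spliced product at time t (pseudo-first-order model)."""
          return y_max * (1.0 - np.exp(-k * t))

      t = np.array([0, 15, 30, 60, 120, 240], dtype=float)  # minutes (hypothetical)
      constructs = {
          "split_A": np.array([0.0, 0.18, 0.31, 0.48, 0.62, 0.70]),
          "split_B": np.array([0.0, 0.05, 0.09, 0.17, 0.28, 0.41]),
      }

      for name, y in constructs.items():
          (y_max, k), _ = curve_fit(first_order, t, y, p0=(0.8, 0.01))
          print(f"{name}: Y_max = {y_max:.2f}, k = {k:.3f} 1/min")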

  12. BONFIRE: benchmarking computers and computer networks

    OpenAIRE

    Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker

    2011-01-01

    The benchmarking concept is not new in the field of computing or computer networking. With “benchmarking tools”, one usually refers to a program or set of programs, used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Servi...

  13. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  14. Utilizing benchmark data from the ANL-ZPR diagnostic cores program

    International Nuclear Information System (INIS)

    Schaefer, R. W.; McKnight, R. D.

    2000-01-01

    The support of the criticality safety community is allowing the production of benchmark descriptions of several assemblies from the ZPR Diagnostic Cores Program. The assemblies have high sensitivities to nuclear data for a few isotopes. This can highlight limitations in nuclear data for selected nuclides or in standard methods used to treat these data. The present work extends the use of the simplified model of the U9 benchmark assembly beyond the validation of k-eff. Further simplifications have been made to produce a data testing benchmark in the style of the standard CSEWG benchmark specifications. Calculations for this data testing benchmark are compared to results obtained with more detailed models and methods to determine their biases. These biases or correction factors can then be applied in the use of the less refined methods and models. Data testing results using Versions IV, V, and VI of the ENDF/B nuclear data are presented for k-eff, f28/f25, c28/f25, and β-eff. These limited results demonstrate the importance of studying other integral parameters in addition to k-eff in trying to improve nuclear data and methods, and the importance of accounting for methods and/or modeling biases when using data testing results to infer the quality of the nuclear data files.

  15. Benchmarking the energy efficiency of commercial buildings

    International Nuclear Information System (INIS)

    Chung, William; Hui, Y.V.; Lam, Y. Miu

    2006-01-01

    Benchmarking energy-efficiency is an important tool to promote the efficient use of energy in commercial buildings. Benchmarking models are mostly constructed in a simple benchmark table (percentile table) of energy use, which is normalized with floor area and temperature. This paper describes a benchmarking process for energy efficiency by means of multiple regression analysis, where the relationship between energy-use intensities (EUIs) and the explanatory factors (e.g., operating hours) is developed. Using the resulting regression model, these EUIs are then normalized by removing the effect of deviance in the significant explanatory factors. The empirical cumulative distribution of the normalized EUI gives a benchmark table (or percentile table of EUI) for benchmarking an observed EUI. The advantage of this approach is that the benchmark table represents a normalized distribution of EUI, taking into account all the significant explanatory factors that affect energy consumption. An application to supermarkets is presented to illustrate the development and the use of the benchmarking method
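
    As a rough, hypothetical sketch of the normalization step described above (the explanatory factors, coefficients and data are invented, not those of the cited study): fit EUI against explanatory factors by regression, remove their estimated effect, and rank an observed building against the empirical distribution of the normalized EUIs.

      # Illustrative sketch only: regression-normalized EUI benchmarking with
      # hypothetical data and explanatory factors (not the cited study's model).
      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      hours = rng.uniform(8, 24, n)          # daily operating hours (hypothetical)
      occupancy = rng.uniform(0.2, 1.0, n)   # occupant-density proxy (hypothetical)
      eui = 150 + 6.0 * hours + 80 * occupancy + rng.normal(0, 25, n)

      # Ordinary least squares fit: EUI ~ 1 + hours + occupancy
      X = np.column_stack([np.ones(n), hours, occupancy])
      beta, *_ = np.linalg.lstsq(X, eui, rcond=None)

      # Normalize: remove the estimated effect of deviations from the sample means
      eui_norm = eui - (X - X.mean(axis=0)) @ beta

      # Benchmark one observed building against the normalized distribution
      def percentile_rank(value, sample):
          return 100.0 * np.mean(sample <= value)

      print(f"building 0 percentile rank: {percentile_rank(eui_norm[0], eui_norm):.0f}")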

  16. Numisheet2005 Benchmark Analysis on Forming of an Automotive Underbody Cross Member: Benchmark 2

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    This report presents an international cooperation benchmark effort focusing on simulations of a sheet metal stamping process. A forming process of an automotive underbody cross member using steel and aluminum blanks is used as a benchmark. Simulation predictions from each submission are analyzed via comparison with the experimental results. A brief summary of various models submitted for this benchmark study is discussed. Prediction accuracy of each parameter of interest is discussed through the evaluation of cumulative errors from each submission

  17. SKaMPI: A Comprehensive Benchmark for Public Benchmarking of MPI

    Directory of Open Access Journals (Sweden)

    Ralf Reussner

    2002-01-01

    Full Text Available The main objective of the MPI communication library is to enable portable parallel programming with high performance within the message-passing paradigm. Since the MPI standard has no associated performance model, and makes no performance guarantees, comprehensive, detailed and accurate performance figures for different hardware platforms and MPI implementations are important for the application programmer, both for understanding and possibly improving the behavior of a given program on a given platform, as well as for assuring a degree of predictable behavior when switching to another hardware platform and/or MPI implementation. We term this latter goal performance portability, and address the problem of attaining performance portability by benchmarking. We describe the SKaMPI benchmark which covers a large fraction of MPI, and incorporates well-accepted mechanisms for ensuring accuracy and reliability. SKaMPI is distinguished among other MPI benchmarks by an effort to maintain a public performance database with performance data from different hardware platforms and MPI implementations.
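
    SKaMPI itself is an MPI/C benchmark suite; purely to illustrate the kind of point-to-point measurement such suites perform (and not SKaMPI's actual code), a minimal ping-pong timing with mpi4py could look like this:

      # Illustrative ping-pong timing with mpi4py, not SKaMPI's implementation.
      # Run with two ranks, e.g.: mpiexec -n 2 python pingpong.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      reps = 1000
      buf = np.zeros(1024, dtype=np.uint8)  # 1 KiB message

      comm.Barrier()
      t0 = MPI.Wtime()
      for _ in range(reps):
          if rank == 0:
              comm.Send(buf, dest=1, tag=0)
              comm.Recv(buf, source=1, tag=0)
          elif rank == 1:
              comm.Recv(buf, source=0, tag=0)
              comm.Send(buf, dest=0, tag=0)
      elapsed = MPI.Wtime() - t0

      if rank == 0:
          # One repetition is a full round trip (two messages).
          print(f"average one-way latency: {elapsed / reps / 2 * 1e6:.2f} us")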

  18. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a bench-marked series should reproduce the movement and signs in the original series. We show that the widely used variants of Denton (1971) method and the growth

  19. Power reactor pressure vessel benchmarks

    International Nuclear Information System (INIS)

    Rahn, F.J.

    1978-01-01

    A review is given of the current status of experimental and calculational benchmarks for use in understanding the radiation embrittlement effects in the pressure vessels of operating light water power reactors. The requirements of such benchmarks for application to pressure vessel dosimetry are stated. Recent developments in active and passive neutron detectors sensitive in the ranges of importance to embrittlement studies are summarized and recommendations for improvements in the benchmark are made. (author)

  20. Benchmarking Multilayer-HySEA model for landslide generated tsunami. HTHMP validation process.

    Science.gov (United States)

    Macias, J.; Escalante, C.; Castro, M. J.

    2017-12-01

    Landslide tsunami hazard may be dominant along significant parts of the coastline around the world, in particular in the USA, as compared to hazards from other tsunamigenic sources. This fact motivated the NTHMP to call for benchmarking of models for landslide-generated tsunamis, following the same methodology already used for standard tsunami models when the source is seismic. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks, for a total of 7 benchmarks. The Multilayer-HySEA model, including non-hydrostatic effects, has been used to perform all the benchmark problems dealing with laboratory experiments proposed in the workshop organized at Texas A&M University - Galveston on January 9-11, 2017 by NTHMP. The aim of this presentation is to show some of the latest numerical results obtained with the Multilayer-HySEA (non-hydrostatic) model in the framework of this validation effort. Acknowledgements: This research has been partially supported by the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and University of Malaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  1. Design of a pre-collimator system for neutronics benchmark experiment

    International Nuclear Information System (INIS)

    Cai Xinggang; Liu Jiantao; Nie Yangbo; Bao Jie; Ruan Xichao; Lu Yanxia

    2013-01-01

    Benchmark experiments are an important means of checking the reliability and accuracy of evaluated nuclear data, and the effect/background ratios are important parameters for weighing the quality of the experimental data. In order to obtain higher effect/background ratios, a pre-collimator system was designed for the benchmark experiment. This system mainly consists of a pre-collimator and a shadow cone. The MCNP-4C code was used to simulate the background spectra under various conditions; the results show that the pre-collimator system gives a very marked improvement in the effect/background ratios. (authors)

  2. Shielding benchmark problems, (2)

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Shin, Kazuo; Tada, Keiko.

    1980-02-01

    Shielding benchmark problems prepared by Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design in the Atomic Energy Society of Japan were compiled by Shielding Laboratory in Japan Atomic Energy Research Institute. Fourteen shielding benchmark problems are presented newly in addition to twenty-one problems proposed already, for evaluating the calculational algorithm and accuracy of computer codes based on discrete ordinates method and Monte Carlo method and for evaluating the nuclear data used in codes. The present benchmark problems are principally for investigating the backscattering and the streaming of neutrons and gamma rays in two- and three-dimensional configurations. (author)

  3. Electricity consumption in school buildings - benchmark and web tools; Elforbrug i skoler - benchmark og webvaerktoej

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-07-01

    The aim of this project has been to produce benchmarks for electricity consumption in Danish schools in order to encourage electricity conservation. An internet programme has been developed with the aim of facilitating schools' access to benchmarks and to evaluate energy consumption. The overall purpose is to create increased attention to the electricity consumption of each separate school by publishing benchmarks which take the schools' age and number of pupils as well as after school activities into account. Benchmarks can be used to make green accounts and work as markers in e.g. energy conservation campaigns, energy management and for educational purposes. The internet tool can be found on www.energiguiden.dk. (BA)

  4. Benchmarking is associated with improved quality of care in type 2 diabetes: the OPTIMISE randomized, controlled trial.

    Science.gov (United States)

    Hermans, Michel P; Elisaf, Moses; Michel, Georges; Muls, Erik; Nobels, Frank; Vandenberghe, Hans; Brotons, Carlos

    2013-11-01

    To assess prospectively the effect of benchmarking on quality of primary care for patients with type 2 diabetes by using three major modifiable cardiovascular risk factors as critical quality indicators. Primary care physicians treating patients with type 2 diabetes in six European countries were randomized to give standard care (control group) or standard care with feedback benchmarked against other centers in each country (benchmarking group). In both groups, laboratory tests were performed every 4 months. The primary end point was the percentage of patients achieving preset targets of the critical quality indicators HbA1c, LDL cholesterol, and systolic blood pressure (SBP) after 12 months of follow-up. Of 4,027 patients enrolled, 3,996 patients were evaluable and 3,487 completed 12 months of follow-up. Primary end point of HbA1c target was achieved in the benchmarking group by 58.9 vs. 62.1% in the control group (P = 0.398) after 12 months; 40.0 vs. 30.1% patients met the SBP target (P benchmarking group. The percentage of patients achieving all three targets at month 12 was significantly larger in the benchmarking group than in the control group (12.5 vs. 8.1%; P benchmarking was shown to be an effective tool for increasing achievement of critical quality indicators and potentially reducing patient cardiovascular residual risk profile.

  5. Split-illumination electron holography

    Energy Technology Data Exchange (ETDEWEB)

    Tanigaki, Toshiaki; Aizawa, Shinji; Suzuki, Takahiro; Park, Hyun Soon [Advanced Science Institute, RIKEN, Hirosawa 2-1, Wako, Saitama 351-0198 (Japan); Inada, Yoshikatsu [Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, Katahira 2-1-1, Sendai 980-8577 (Japan); Matsuda, Tsuyoshi [Japan Science and Technology Agency, Kawaguchi, Saitama 332-0012 (Japan); Taniyama, Akira [Corporate Research and Development Laboratories, Sumitomo Metal Industries, Ltd., Amagasaki, Hyogo 660-0891 (Japan); Shindo, Daisuke [Advanced Science Institute, RIKEN, Hirosawa 2-1, Wako, Saitama 351-0198 (Japan); Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, Katahira 2-1-1, Sendai 980-8577 (Japan); Tonomura, Akira [Advanced Science Institute, RIKEN, Hirosawa 2-1, Wako, Saitama 351-0198 (Japan); Okinawa Institute of Science and Technology, Graduate University, Onna-son, Okinawa 904-0495 (Japan); Central Research Laboratory, Hitachi, Ltd., Hatoyama, Saitama 350-0395 (Japan)

    2012-07-23

    We developed a split-illumination electron holography that uses an electron biprism in the illuminating system and two biprisms (applicable to one biprism) in the imaging system, enabling holographic interference micrographs of regions far from the sample edge to be obtained. Using a condenser biprism, we split an electron wave into two coherent electron waves: one wave is to illuminate an observation area far from the sample edge in the sample plane and the other wave to pass through a vacuum space outside the sample. The split-illumination holography has the potential to greatly expand the breadth of applications of electron holography.

  6. Split-illumination electron holography

    International Nuclear Information System (INIS)

    Tanigaki, Toshiaki; Aizawa, Shinji; Suzuki, Takahiro; Park, Hyun Soon; Inada, Yoshikatsu; Matsuda, Tsuyoshi; Taniyama, Akira; Shindo, Daisuke; Tonomura, Akira

    2012-01-01

    We developed a split-illumination electron holography that uses an electron biprism in the illuminating system and two biprisms (applicable to one biprism) in the imaging system, enabling holographic interference micrographs of regions far from the sample edge to be obtained. Using a condenser biprism, we split an electron wave into two coherent electron waves: one wave is to illuminate an observation area far from the sample edge in the sample plane and the other wave to pass through a vacuum space outside the sample. The split-illumination holography has the potential to greatly expand the breadth of applications of electron holography.

  7. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  8. HS06 benchmark for an ARM server

    International Nuclear Information System (INIS)

    Kluth, Stefan

    2014-01-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  9. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond budgeting initiatives, target costing, piece-rate systems and value based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking...... as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advantages. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed towards...... the conditions upon which the market mechanism is performing within organizations. This paper aims to contribute to research by providing more insight into the conditions for the use of external benchmarking as an element in performance management in organizations. Our study explores a particular type of external...

  10. Shielding benchmark test

    International Nuclear Information System (INIS)

    Kawai, Masayoshi

    1984-01-01

    Iron data in JENDL-2 have been tested by analyzing shielding benchmark experiments on neutron transmission through an iron block, performed at KFK using a Cf-252 neutron source and at ORNL using a collimated neutron beam from a reactor. The analyses are made with the shielding analysis code system RADHEAT-V4 developed at JAERI, and the calculated results are compared with the measured data. For the KFK experiments, the C/E values are about 1.1. For the ORNL experiments, the calculated values agree with the measured data within an accuracy of 33% for the off-center geometry. The D-T neutron transmission measurements through a carbon sphere made at LLNL are also analyzed preliminarily using the revised JENDL data for fusion neutronics calculations. (author)

  11. Split and Splice Approach for Highly Selective Targeting of Human NSCLC Tumors

    Science.gov (United States)

    2014-10-01

    development and implementation of the “split-and-splice” approach required optimization of many independent parameters, which were addressed in parallel...verify the feasibility of the “split and splice” approach for targeting human NSCLC tumor cell lines in culture and prepare the optimized toxins for...for cultured cells (months 2-8). 2B. To test the efficiency of cell targeting by the toxin variants reconstituted in vitro (months 3-6). 2C. To

  12. Benchmarking in Czech Higher Education

    OpenAIRE

    Plaček Michal; Ochrana František; Půček Milan

    2015-01-01

    The first part of this article surveys the current experience with the use of benchmarking at Czech universities specializing in economics and management. The results indicate that collaborative benchmarking is not used on this level today, but most actors show some interest in its introduction. The expression of the need for it and the importance of benchmarking as a very suitable performance-management tool in less developed countries are the impetus for the second part of our article. Base...

  13. Benchmark simulation models, quo vadis?

    Science.gov (United States)

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  14. A Seafloor Benchmark for 3-dimensional Geodesy

    Science.gov (United States)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  15. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    Science.gov (United States)

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  16. Aerodynamic Benchmarking of the Deepwind Design

    DEFF Research Database (Denmark)

    Bedona, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    2015-01-01

    The aerodynamic benchmarking for the DeepWind rotor is conducted comparing different rotor geometries and solutions and keeping the comparison as fair as possible. The objective for the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...... the blade solicitation and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts: this shape is considered as a fixed parameter in the benchmarking...

  17. Cool covered sky-splitting spectrum-splitting FK

    Energy Technology Data Exchange (ETDEWEB)

    Mohedano, Rubén; Chaves, Julio; Falicoff, Waqidi; Hernandez, Maikel; Sorgato, Simone [LPI, Altadena, CA, USA and Madrid (Spain); Miñano, Juan C.; Benitez, Pablo [LPI, Altadena, CA, USA and Madrid, Spain and Universidad Politécnica de Madrid (UPM), Madrid (Spain); Buljan, Marina [Universidad Politécnica de Madrid (UPM), Madrid (Spain)

    2014-09-26

    Placing a plane mirror between the primary lens and the receiver in a Fresnel Köhler (FK) concentrator gives birth to a quite different CPV system where all the high-tech components sit on a common plane, that of the primary lens panels. The idea enables not only a thinner device (a half of the original) but also a low cost 1-step manufacturing process for the optics, automatic alignment of primary and secondary lenses, and cell/wiring protection. The concept is also compatible with two different techniques to increase the module efficiency: spectrum splitting between a 3J and a BPC Silicon cell for better usage of Direct Normal Irradiance DNI, and sky splitting to harvest the energy of the diffuse radiation and higher energy production throughout the year. Simple calculations forecast the module would convert 45% of the DNI into electricity.

  18. Bad split during bilateral sagittal split osteotomy of the mandible with separators: a retrospective study of 427 patients.

    Science.gov (United States)

    Mensink, Gertjan; Verweij, Jop P; Frank, Michael D; Eelco Bergsma, J; Richard van Merkesteyn, J P

    2013-09-01

    An unfavourable fracture, known as a bad split, is a common operative complication in bilateral sagittal split osteotomy (BSSO). The reported incidence ranges from 0.5 to 5.5%/site. Since 1994 we have used sagittal splitters and separators instead of chisels for BSSO in our clinic in an attempt to prevent postoperative hypoaesthesia. Theoretically an increased percentage of bad splits could be expected with this technique. In this retrospective study we aimed to find out the incidence of bad splits associated with BSSO done with splitters and separators. We also assessed the risk factors for bad splits. The study group comprised 427 consecutive patients among whom the incidence of bad splits was 2.0%/site, which is well within the reported range. The only predictive factor for a bad split was the removal of third molars at the same time as BSSO. There was no significant association between bad splits and age, sex, class of occlusion, or the experience of the surgeon. We think that doing a BSSO with splitters and separators instead of chisels does not increase the risk of a bad split, and is therefore safe with predictable results. Copyright © 2012 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  19. VVER-1000 MOX core computational benchmark

    International Nuclear Information System (INIS)

    2006-01-01

    The NEA Nuclear Science Committee has established an Expert Group that deals with the status and trends of reactor physics, fuel performance and fuel cycle issues related to disposing of weapons-grade plutonium in mixed-oxide fuel. The objectives of the group are to provide NEA member countries with up-to-date information on, and to develop consensus regarding, core and fuel cycle issues associated with burning weapons-grade plutonium in thermal water reactors (PWR, BWR, VVER-1000, CANDU) and fast reactors (BN-600). These issues concern core physics, fuel performance and reliability, and the capability and flexibility of thermal water reactors and fast reactors to dispose of weapons-grade plutonium in standard fuel cycles. The activities of the NEA Expert Group on Reactor-based Plutonium Disposition are carried out in close co-operation (jointly, in most cases) with the NEA Working Party on Scientific Issues in Reactor Systems (WPRS). A prominent part of these activities include benchmark studies. At the time of preparation of this report, the following benchmarks were completed or in progress: VENUS-2 MOX Core Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); VVER-1000 LEU and MOX Benchmark (completed); KRITZ-2 Benchmarks: carried out jointly with the WPRS (formerly the WPPR) (completed); Hollow and Solid MOX Fuel Behaviour Benchmark (completed); PRIMO MOX Fuel Performance Benchmark (ongoing); VENUS-2 MOX-fuelled Reactor Dosimetry Calculation (ongoing); VVER-1000 In-core Self-powered Neutron Detector Calculational Benchmark (started); MOX Fuel Rod Behaviour in Fast Power Pulse Conditions (started); Benchmark on the VENUS Plutonium Recycling Experiments Configuration 7 (started). This report describes the detailed results of the benchmark investigating the physics of a whole VVER-1000 reactor core using two-thirds low-enriched uranium (LEU) and one-third MOX fuel. It contributes to the computer code certification process and to the

  20. Shielding benchmark problems

    International Nuclear Information System (INIS)

    Tanaka, Shun-ichi; Sasamoto, Nobuo; Oka, Yoshiaki; Kawai, Masayoshi; Nakazawa, Masaharu.

    1978-09-01

    Shielding benchmark problems were prepared by the Working Group of Assessment of Shielding Experiments in the Research Committee on Shielding Design of the Atomic Energy Society of Japan, and compiled by the Shielding Laboratory of Japan Atomic Energy Research Institute. Twenty-one kinds of shielding benchmark problems are presented for evaluating the calculational algorithm and the accuracy of computer codes based on the discrete ordinates method and the Monte Carlo method and for evaluating the nuclear data used in the codes. (author)

  1. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    Computer simulation of nuclear power plant response can be a full-scope control room simulator, an engineering simulator to represent the general behavior of the plant under normal and abnormal conditions, or the modeling of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training of vendor/utility personnel, etc. is how well they represent what has been known from industrial experience, large integral experiments and separate effects tests. Typically, simulation codes are benchmarked with some of these; the level of agreement necessary being dependent upon the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result, the capabilities are continually enhanced, errors are corrected, new situations are imposed on the code that are outside of the original design basis, etc. Consequently, there is a continual need to assure that the benchmarks with important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes this capability is accomplished through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks are included in the source code and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients; large integral experiments; and separate effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence; i.e. the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer
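
    The MAAP4 mechanism is embedded in the code itself and is not reproduced here; as a generic, hypothetical sketch of the same idea, a release script can re-run a stored set of benchmark cases and flag any drift from archived reference results:

      # Hypothetical "dynamic benchmarking" harness, not MAAP4's embedded code:
      # re-run archived benchmark cases on every release and flag drift from
      # stored reference results. The file name and CLI below are assumptions.
      import json, subprocess, sys

      TOLERANCE = 0.02  # 2% relative drift allowed (illustrative choice)

      def run_case(case):
          # Assumed interface: the simulation code reads an input deck and
          # prints its key results as JSON on stdout.
          out = subprocess.run(["./sim_code", case["input_deck"]],
                               capture_output=True, text=True, check=True)
          return json.loads(out.stdout)

      def check(case):
          result = run_case(case)
          ok = True
          for key, ref in case["reference"].items():
              drift = abs(result[key] - ref) / abs(ref)
              if drift > TOLERANCE:
                  print(f"{case['name']}: {key} drifted by {drift:.1%}")
                  ok = False
          return ok

      if __name__ == "__main__":
          cases = json.load(open("benchmarks.json"))  # plant transients, experiments, ...
          results = [check(c) for c in cases]
          sys.exit(0 if all(results) else 1)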

  2. Benchmarking von Krankenhausinformationssystemen – eine vergleichende Analyse deutschsprachiger Benchmarkingcluster

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system’s and information management’s costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme observes both general conditions and examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. Assessing costs and quality application systems, physical data processing systems, organizational structures of information management and IT services processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for benchmarking of hospital information systems.

  3. Benchmark Tests to Develop Analytical Time-Temperature Limit for HANA-6 Cladding for Compliance with New LOCA Criteria

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sung Yong; Jang, Hun; Lim, Jea Young; Kim, Dae Il; Kim, Yoon Ho; Mok, Yong Kyoon [KEPCO Nuclear Fuel Co. Ltd., Daejeon (Korea, Republic of)

    2016-10-15

    According to 10CFR50.46c, two analytical time and temperature limits for breakaway oxidation and post-quench ductility (PQD) should be determined by an approved experimental procedure as described in NRC Regulatory Guides (RG) 1.222 and 1.223. RG 1.222 and 1.223 impose rigorous qualification requirements on the test system, such as thermal and weight-gain benchmarks. In order to meet these requirements, KEPCO NF has developed a new special facility to evaluate the LOCA performance of zirconium alloy cladding. In this paper, the qualification results for the test facility and the high-temperature (HT) oxidation model for HANA-6 are summarized. The results of the thermal benchmark tests of the LOCA HT oxidation tester are summarized as follows. 1. A best-estimate HT oxidation model of HANA-6 was developed for the vendor-proprietary HT oxidation model. 2. In accordance with RG 1.222 and 1.223, benchmark tests were performed using the LOCA HT oxidation tester. 3. The maximum axial and circumferential temperature differences are ±9 °C and ±2 °C at 1200 °C, respectively; at the other temperature conditions, the temperature differences are smaller than those at 1200 °C. The thermal benchmark test results meet the requirements of NRC RG 1.222 and 1.223.

  4. Medical school benchmarking - from tools to programmes.

    Science.gov (United States)

    Wilkinson, Tim J; Hudson, Judith N; Mccoll, Geoffrey J; Hu, Wendy C Y; Jolly, Brian C; Schuwirth, Lambert W T

    2015-02-01

    Benchmarking among medical schools is essential, but may result in unwanted effects. To apply a conceptual framework to selected benchmarking activities of medical schools. We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. The analogy generated a conceptual framework that tested five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and, what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.

  5. Shielding Benchmark Computational Analysis

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-01-01

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for the development of more accurate cross-section libraries, computer code development of radiation transport modeling, and building accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper the benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC)

  6. Issues in Benchmark Metric Selection

    Science.gov (United States)

    Crolotte, Alain

    It is true that a metric can influence a benchmark but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric which used the much debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
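
    A small numerical illustration of the point (hypothetical query times, not TPC-D data): the arithmetic and geometric means can rank two systems differently, because the geometric mean rewards a large relative improvement on a fast query as much as the arithmetic mean rewards an absolute improvement on a slow one.

      # Hypothetical per-query elapsed times (seconds); not TPC-D data.
      import math

      def arithmetic_mean(xs):
          return sum(xs) / len(xs)

      def geometric_mean(xs):
          return math.exp(sum(math.log(x) for x in xs) / len(xs))

      system_a = [0.1, 10.0, 100.0]   # very fast on the short query
      system_b = [1.0, 10.0, 60.0]    # faster on the long query

      for name, times in [("A", system_a), ("B", system_b)]:
          print(name, round(arithmetic_mean(times), 2), round(geometric_mean(times), 2))
      # A: arithmetic 36.7,  geometric 4.64
      # B: arithmetic 23.67, geometric 8.43
      # The arithmetic mean favours system B, the geometric mean favours system A.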

  7. Benchmarking clinical photography services in the NHS.

    Science.gov (United States)

    Arbon, Giles

    2015-01-01

    Benchmarking is used in services across the National Health Service (NHS) using various benchmarking programs. Clinical photography services do not have a program in place and services have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  8. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...... not public. The survey is a cooperative project "Benchmarking Danish Industries" with CIP/Aalborg University, the Danish Technological University, the Danish Technological Institute and Copenhagen Business School as consortia partners. The project has been funded by the Danish Agency for Trade and Industry...

  9. Geometrical splitting in Monte Carlo

    International Nuclear Information System (INIS)

    Dubi, A.; Elperin, T.; Dudziak, D.J.

    1982-01-01

    A statistical model is presented by which a direct statistical approach yielded an analytic expression for the second moment, the variance ratio, and the benefit function in a model of an n surface-splitting Monte Carlo game. In addition to the insight into the dependence of the second moment on the splitting parameters the main importance of the expressions developed lies in their potential to become a basis for in-code optimization of splitting through a general algorithm. Refs
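
    As a toy illustration of surface splitting (not the analytic model of the abstract above): particles crossing an importance surface towards the scoring region are split into n copies with weights divided by n, which lowers the variance of the tally at the cost of tracking more particles.

      # Toy 1D Monte Carlo (absorption only): estimate transmission through a
      # slab 5 mean free paths thick, with surface splitting at the mid-plane.
      # Illustrative of the technique, not the paper's analytic model.
      import random

      def transmission(histories, n_split=1, thickness=5.0, split_at=2.5):
          total = 0.0
          for _ in range(histories):
              stack = [(0.0, 1.0, False)]  # (position, statistical weight, split done?)
              while stack:
                  x, w, split_done = stack.pop()
                  d = random.expovariate(1.0)  # distance to next absorption
                  if not split_done and x < split_at < x + d:
                      # Track crosses the importance surface: stop it there and
                      # split into n copies of weight w/n (exponential flight is
                      # memoryless, so restarting at the surface is unbiased).
                      for _ in range(n_split):
                          stack.append((split_at, w / n_split, True))
                  elif x + d >= thickness:
                      total += w  # transmitted through the slab
                  # else: absorbed inside the slab, track ends
          return total / histories

      print("analog    :", transmission(20000, n_split=1))
      print("splitting :", transmission(20000, n_split=4))
      # Both estimates converge to exp(-5) ~ 0.0067; splitting lowers the variance.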

  10. Effects of Angular Variation on Split D Differential Eddy Current Probe Response (Postprint)

    Science.gov (United States)

    2016-02-10

    Report AFRL-RX-WP-JA-2016-0327 (postprint), reporting period March 2014 – 22 September 2015. Title: Effects of Angular Variation on Split D Differential Eddy Current Probe Response. Authors: Ryan D. Mooers (United States Air Force Research Labs, Materials and Manufacturing Directorate) and John C. Aldrin.

  11. Innovative wedge axe in making split firewood

    International Nuclear Information System (INIS)

    Mutikainen, A.

    1998-01-01

    Interteam Oy, a company located in Espoo, has developed a new method for making split firewood. The tools on which the patented System Logmatic are based are a wedge axe and a cylindrical splitting-carrying frame. The equipment costs about 495 FIM. The block of wood to be split is placed inside the upright carrying frame and split in a series of splitting actions using the innovative wedge axe. The finished split firewood remains in the carrying frame, which (as its name indicates) also serves as the means for carrying the firewood. This innovative wedge-axe method was compared with the conventional splitting of wood using an axe (Fiskars handy 1400 splitting axe costing about 200 FIM) in a study conducted at TTS-Institute. There were eight test subjects involved in the study. In the case of the wedge-axe method, handling of the blocks to be split and of the finished firewood was a little quicker, but in actual splitting it was a little slower than the conventional axe method. The average productivity of splitting the wood and of the work stages related to it was about 0.4 m³ per effective hour in both methods. The methods were also equivalent to one another in terms of the load imposed by the work when measured in terms of the heart rate. As regards work safety, the wedge-axe method was superior to the conventional method, but the continuous striking action and jolting transmitted to the arms were unpleasant (orig.)

  12. Theory of long-range interactions for Rydberg states attached to hyperfine-split cores

    Science.gov (United States)

    Robicheaux, F.; Booth, D. W.; Saffman, M.

    2018-02-01

    The theory is developed for one- and two-atom interactions when the atom has a Rydberg electron attached to a hyperfine-split core state. This situation is relevant for some of the rare-earth and alkaline-earth atoms that have been proposed for experiments on Rydberg-Rydberg interactions. For the rare-earth atoms, the core electrons can have a very substantial total angular momentum J and a nonzero nuclear spin I . In the alkaline-earth atoms there is a single (s) core electron whose spin can couple to a nonzero nuclear spin for odd isotopes. The resulting hyperfine splitting of the core state can lead to substantial mixing between the Rydberg series attached to different thresholds. Compared to the unperturbed Rydberg series of the alkali-metal atoms, the series perturbations and near degeneracies from the different parity states could lead to qualitatively different behavior for single-atom Rydberg properties (polarizability, Zeeman mixing and splitting, etc.) as well as Rydberg-Rydberg interactions (C5 and C6 matrices).
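
    Schematically (a standard multipole-expansion result quoted only for orientation, not a formula taken from this record), the long-range interaction behind the C5 and C6 matrices can be written as:

      % First order in the quadrupole-quadrupole coupling contributes the C5 term;
      % second-order dipole-dipole (van der Waals) coupling contributes the C6 term.
      % With hyperfine-split cores, C5 and C6 become matrices over the nearly
      % degenerate two-atom channels rather than single numbers.
      V(R) \;\simeq\; \frac{C_5}{R^{5}} \;+\; \frac{C_6}{R^{6}}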

  13. TMD splitting functions in kT factorization. The real contribution to the gluon-to-gluon splitting

    International Nuclear Information System (INIS)

    Hentschinski, M.; Kusina, A.; Kutak, K.; Serino, M.

    2018-01-01

    We calculate the transverse momentum dependent gluon-to-gluon splitting function within kT-factorization, generalizing the framework employed in the calculation of the quark splitting functions in Hautmann et al. (Nucl Phys B 865:54-66, arXiv:1205.1759, 2012), Gituliar et al. (JHEP 01:181, arXiv:1511.08439, 2016), Hentschinski et al. (Phys Rev D 94(11):114013, arXiv:1607.01507, 2016), and demonstrate at the same time the consistency of the extended formalism with previous results. While existing versions of kT-factorized evolution equations already contain a gluon-to-gluon splitting function, i.e. the leading-order Balitsky-Fadin-Kuraev-Lipatov (BFKL) kernel or the Ciafaloni-Catani-Fiorani-Marchesini (CCFM) kernel, the obtained splitting function has the important property that it reduces to the leading-order BFKL kernel in the high-energy limit, to the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) gluon-to-gluon splitting function in the collinear limit, and to the CCFM kernel in the soft limit. At the same time we demonstrate that this splitting kernel can be obtained from a direct calculation of the QCD Feynman diagrams, based on a combined implementation of the Curci-Furmanski-Petronzio formalism for the calculation of the collinear splitting functions and the framework of high energy factorization. (orig.)

  14. Library of synthetic transcriptional AND gates built with split T7 RNA polymerase mutants.

    Science.gov (United States)

    Shis, David L; Bennett, Matthew R

    2013-03-26

    The construction of synthetic gene circuits relies on our ability to engineer regulatory architectures that are orthogonal to the host's native regulatory pathways. However, as synthetic gene circuits become larger and more complicated, we are limited by the small number of parts, especially transcription factors, that work well in the context of the circuit. The current repertoire of transcription factors consists of a limited selection of activators and repressors, making the implementation of transcriptional logic a complicated and component-intensive process. To address this, we modified bacteriophage T7 RNA polymerase (T7 RNAP) to create a library of transcriptional AND gates for use in Escherichia coli by first splitting the protein and then mutating the DNA recognition domain of the C-terminal fragment to alter its promoter specificity. We first demonstrate that split T7 RNAP is active in vivo and compare it with full-length enzyme. We then create a library of mutant split T7 RNAPs that have a range of activities when used in combination with a complementary set of altered T7-specific promoters. Finally, we assay the two-input function of both wild-type and mutant split T7 RNAPs and find that regulated expression of the N- and C-terminal fragments of the split T7 RNAPs creates AND logic in each case. This work demonstrates that mutant split T7 RNAP can be used as a transcriptional AND gate and introduces a unique library of components for use in synthetic gene circuits.
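
    As a purely illustrative model of the AND logic described above (not the authors' analysis; all parameter values are hypothetical), reporter output can be sketched as the product of two Hill-type induction terms, one per split-T7 fragment, so that appreciable expression requires both inputs:

      # Toy AND-gate model, not the authors' analysis: reporter output as the
      # product of two Hill-type induction terms, one per split-T7 fragment.
      # All parameter values are hypothetical.
      def hill(inducer, k_half=10.0, n=2.0):
          return inducer**n / (k_half**n + inducer**n)

      def and_gate_output(inducer_n_frag, inducer_c_frag, max_output=1000.0):
          # Active polymerase requires both fragments to be expressed.
          return max_output * hill(inducer_n_frag) * hill(inducer_c_frag)

      for a in (0.0, 100.0):
          for b in (0.0, 100.0):
              print(f"inputs ({a:5.1f}, {b:5.1f}) -> output {and_gate_output(a, b):7.1f}")
      # Only the (100.0, 100.0) input combination gives high output: AND behaviour.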

  15. Benchmarking of human resources management

    Directory of Open Access Journals (Sweden)

    David M. Akinnusi

    2008-11-01

    Full Text Available This paper reviews the role of human resource management (HRM which, today, plays a strategic partnership role in management. The focus of the paper is on HRM in the public sector, where much hope rests on HRM as a means of transforming the public service and achieving much needed service delivery. However, a critical evaluation of HRM practices in the public sector reveals that these services leave much to be desired. The paper suggests the adoption of benchmarking as a process to revamp HRM in the public sector so that it is able to deliver on its promises. It describes the nature and process of benchmarking and highlights the inherent difficulties in applying benchmarking in HRM. It concludes with some suggestions for a plan of action. The process of identifying “best” practices in HRM requires the best collaborative efforts of HRM practitioners and academicians. If used creatively, benchmarking has the potential to bring about radical and positive changes in HRM in the public sector. The adoption of the benchmarking process is, in itself, a litmus test of the extent to which HRM in the public sector has grown professionally.

  16. Integrating Best Practice and Performance Indicators To Benchmark the Performance of a School System. Benchmarking Paper 940317.

    Science.gov (United States)

    Cuttance, Peter

    This paper provides a synthesis of the literature on the role of benchmarking, with a focus on its use in the public sector. Benchmarking is discussed in the context of quality systems, of which it is an important component. The paper describes the basic types of benchmarking, pertinent research about its application in the public sector, the…

  17. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    Order to learn from the best. In 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for ‘sustainable transport’. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable...... tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly ‘sustainable transport......’ evokes a broad range of concerns that are hard to address fully at the level of specific practices. Thirdly policies are not directly comparable across space and context. For these reasons attempting to benchmark ‘sustainable transport policies’ against one another would be a highly complex task, which...

  18. Urban pattern: Layout design by hierarchical domain splitting

    KAUST Repository

    Yang, Yongliang; Wang, Jun; Vouga, Etienne; Wonka, Peter

    2013-01-01

    We present a framework for generating street networks and parcel layouts. Our goal is the generation of high-quality layouts that can be used for urban planning and virtual environments. We propose a solution based on hierarchical domain splitting using two splitting types: streamline-based splitting, which splits a region along one or multiple streamlines of a cross field, and template-based splitting, which warps pre-designed templates to a region and uses the interior geometry of the template as the splitting lines. We combine these two splitting approaches into a hierarchical framework, providing automatic and interactive tools to explore the design space.

  19. Urban pattern: Layout design by hierarchical domain splitting

    KAUST Repository

    Yang, Yongliang

    2013-11-06

    We present a framework for generating street networks and parcel layouts. Our goal is the generation of high-quality layouts that can be used for urban planning and virtual environments. We propose a solution based on hierarchical domain splitting using two splitting types: streamline-based splitting, which splits a region along one or multiple streamlines of a cross field, and template-based splitting, which warps pre-designed templates to a region and uses the interior geometry of the template as the splitting lines. We combine these two splitting approaches into a hierarchical framework, providing automatic and interactive tools to explore the design space.
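
    A heavily simplified, hypothetical sketch of hierarchical domain splitting (axis-aligned recursive subdivision only; the streamline- and template-based splits of the abstract above are not reproduced):

      # Heavily simplified, hypothetical sketch of hierarchical domain splitting:
      # recursive axis-aligned subdivision into parcels. The paper's streamline-
      # and template-based splits are not reproduced here.
      import random

      def split_region(x, y, w, h, min_area=1.0):
          if w * h <= min_area:
              return [(x, y, w, h)]                # leaf parcel
          if w >= h:                               # split across the longer side
              cut = w * random.uniform(0.3, 0.7)   # split position at this level
              return (split_region(x, y, cut, h, min_area) +
                      split_region(x + cut, y, w - cut, h, min_area))
          cut = h * random.uniform(0.3, 0.7)
          return (split_region(x, y, w, cut, min_area) +
                  split_region(x, y + cut, w, h - cut, min_area))

      parcels = split_region(0.0, 0.0, 16.0, 9.0, min_area=4.0)
      print(len(parcels), "parcels, total area",
            round(sum(w * h for _, _, w, h in parcels), 1))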

  20. Benchmarking of epithermal methods in the lattice-physics code EPRI-CELL

    International Nuclear Information System (INIS)

    Williams, M.L.; Wright, R.Q.; Barhen, J.; Rothenstein, W.; Toney, B.

    1982-01-01

    The epithermal cross section shielding methods used in the lattice physics code EPRI-CELL (E-C) have been extensively studied to determine its major approximations and to examine the sensitivity of computed results to these approximations. The study has resulted in several improvements in the original methodology. These include: treatment of the external moderator source with intermediate resonance (IR) theory, development of a new Dancoff factor expression to account for clad interactions, development of a new method for treating resonance interference, and application of a generalized least squares method to compute best-estimate values for the Bell factor and group-dependent IR parameters. The modified E-C code with its new ENDF/B-V cross section library is tested for several numerical benchmark problems. Integral parameters computed by E-C are compared with those obtained with point-cross section Monte Carlo calculations, and E-C fine group cross sections are benchmarked against point-cross section discrete ordinates calculations. It is found that the code modifications improve agreement between E-C and the more sophisticated methods. E-C shows excellent agreement on the integral parameters and usually agrees within a few percent on fine-group, shielded cross sections.
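
    As an illustration of the kind of fit described above (a generic generalized-least-squares estimate with hypothetical numbers, not the EPRI-CELL procedure itself): given reference values y, a sensitivity matrix X of the computed responses to the adjustable constants, and a covariance matrix for the reference data, the best-estimate constants minimize the weighted residual.

      # Generic generalized least squares, b = (X^T W X)^-1 X^T W y, with W the
      # inverse covariance of the reference data. All numbers are hypothetical
      # and the EPRI-CELL procedure itself is not reproduced here.
      import numpy as np

      X = np.array([[1.0, 0.2],      # sensitivities of 4 benchmark responses
                    [0.8, 0.5],      # to 2 adjustable constants
                    [0.3, 1.1],
                    [0.1, 0.9]])
      y = np.array([1.05, 1.10, 1.20, 0.95])        # reference responses
      cov = np.diag([0.02, 0.02, 0.05, 0.05]) ** 2  # reference uncertainties

      W = np.linalg.inv(cov)
      b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # best-estimate constants
      cov_b = np.linalg.inv(X.T @ W @ X)             # their covariance
      print("best estimate:", b)
      print("1-sigma uncertainties:", np.sqrt(np.diag(cov_b)))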

  1. Benchmarking for Cost Improvement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  2. Benchmarking for controllere: metoder, teknikker og muligheder

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Sandalgaard, Niels Erik; Dietrichson, Lars Grubbe

    2008-01-01

    Benchmarking figures in the management practice of both private and public organizations in many ways. In management accounting, benchmark-based indicators (or key ratios) are used, for example when setting targets in performance contracts or when specifying the desired level of certain key ratios in a Balanced Scorecard or similar performance management models. The article explains the concept of benchmarking by presenting and discussing its different facets, and describes four different applications of benchmarking to illustrate the breadth of the concept and the importance of clarifying the purpose of a benchmarking project. It then discusses the difference between results benchmarking and process benchmarking, followed by the use of internal versus external benchmarking and the use of benchmarking in budgeting and budget follow-up...

  3. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based on archival, publicly disclosed, professional performance data for 191 German orthopedics departments, matched with survey data on bureaucratic benchmarking information given to chief orthopedists by the administration. We find a positive association between bureaucratic benchmarking information provision...

  4. A comparative study of upwind and MacCormack schemes for CAA benchmark problems

    Science.gov (United States)

    Viswanathan, K.; Sankar, L. N.

    1995-01-01

    In this study, upwind schemes and MacCormack schemes are evaluated as to their suitability for aeroacoustic applications. The governing equations are cast in a curvilinear coordinate system and discretized using finite volume concepts. A flux splitting procedure is used for the upwind schemes, where the signals crossing the cell faces are grouped into two categories: signals that bring information from outside into the cell, and signals that leave the cell. These signals may be computed in several ways, with the desired spatial and temporal accuracy achieved by choosing appropriate interpolating polynomials. The classical MacCormack schemes employed here are fourth order accurate in time and space. Results for categories 1, 4, and 6 of the workshop's benchmark problems are presented. Comparisons are also made with the exact solutions, where available. The main conclusions of this study are finally presented.

  5. EPA's Benchmark Dose Modeling Software

    Science.gov (United States)

    The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors facilitate the application of benchmark dose (BMD) methods to EPA's human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations ...
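
    As a reminder of what a benchmark dose is, the sketch below solves for the dose at which a fitted dose-response model reaches a chosen benchmark response (BMR). It is not BMDS itself; the log-logistic extra-risk model, the parameter values, and the 10% BMR are illustrative assumptions.

```python
import math

def bmd_log_logistic(a, b, bmr=0.10):
    """Benchmark dose for a log-logistic extra-risk model.

    Extra risk ER(d) = 1 / (1 + exp(-(a + b*ln d))); solve ER(BMD) = bmr.
    a, b are assumed to come from a prior model fit (b > 0).
    """
    logit_bmr = math.log(bmr / (1.0 - bmr))
    return math.exp((logit_bmr - a) / b)

if __name__ == "__main__":
    # illustrative fitted parameters, not real BMDS output
    a, b = -3.2, 1.4
    print(f"BMD at 10% extra risk: {bmd_log_logistic(a, b):.3f} (dose units)")
```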

  6. Accelerator shielding benchmark problems

    International Nuclear Information System (INIS)

    Hirayama, H.; Ban, S.; Nakamura, T.

    1993-01-01

    Accelerator shielding benchmark problems prepared by Working Group of Accelerator Shielding in the Research Committee on Radiation Behavior in the Atomic Energy Society of Japan were compiled by Radiation Safety Control Center of National Laboratory for High Energy Physics. Twenty-five accelerator shielding benchmark problems are presented for evaluating the calculational algorithm, the accuracy of computer codes and the nuclear data used in codes. (author)

  7. OECD/NEA benchmark for time-dependent neutron transport calculations without spatial homogenization

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Jason, E-mail: jason.hou@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Ivanov, Kostadin N. [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Boyarinov, Victor F.; Fomichenko, Peter A. [National Research Centre “Kurchatov Institute”, Kurchatov Sq. 1, Moscow (Russian Federation)

    2017-06-15

    Highlights: • A time-dependent homogenization-free neutron transport benchmark was created. • The first phase, known as the kinetics phase, was described in this work. • Preliminary results for selected 2-D transient exercises were presented. - Abstract: A Nuclear Energy Agency (NEA), Organization for Economic Co-operation and Development (OECD) benchmark for the time-dependent neutron transport calculations without spatial homogenization has been established in order to facilitate the development and assessment of numerical methods for solving the space-time neutron kinetics equations. The benchmark has been named the OECD/NEA C5G7-TD benchmark, and later extended with three consecutive phases each corresponding to one modelling stage of the multi-physics transient analysis of the nuclear reactor core. This paper provides a detailed introduction of the benchmark specification of Phase I, known as the “kinetics phase”, including the geometry description, supporting neutron transport data, transient scenarios in both two-dimensional (2-D) and three-dimensional (3-D) configurations, as well as the expected output parameters from the participants. Also presented are the preliminary results for the initial state 2-D core and selected transient exercises that have been obtained using the Monte Carlo method and the Surface Harmonic Method (SHM), respectively.

  8. Relationship between mandibular anatomy and the occurrence of a bad split upon sagittal split osteotomy.

    Science.gov (United States)

    Aarabi, Mohammadali; Tabrizi, Reza; Hekmat, Mina; Shahidi, Shoaleh; Puzesh, Ayatollah

    2014-12-01

    A bad split is a troublesome complication of the sagittal split osteotomy (SSO). The aim of this study was to evaluate the relation between the occurrence of a bad split and mandibular anatomy in SSO using cone-beam computed tomography. The authors designed a retrospective cohort study. Forty-eight patients (96 SSO sites) were studied. The buccolingual thickness of the retromandibular area (BLR), the buccolingual thickness of the ramus at the level of the lingula (BLTR), the height of the mandible from the alveolar crest to the inferior border of the mandible (ACIB), the distance between the sigmoid notch and the inferior border of the mandible (SIBM), and the anteroposterior width of the ramus (APWR) were measured. The independent t test was applied to compare anatomic measurements between the group with and the group without bad splits. The receiver operating characteristic (ROC) test was used to find a cutoff point in anatomic size for various parts of the mandible related to the occurrence of bad splits. The mean SIBM was 47.05±6.33 mm in group 1 (with bad splits) versus 40.66±2.44 mm in group 2 (without bad splits; P=.01). The mean BLTR was 5.74±1.11 mm in group 1 versus 3.19±0.55 mm in group 2 (P=.04). The mean BLR was 14.98±2.78 mm in group 1 versus 11.21±1.29 mm in group 2 (P=.001). No statistically significant difference was found for APWR and ACIB between the 2 groups. The ROC test showed cutoff points of 10.17 mm for BLR, 36.69 mm for SIBM, and 4.06 mm for BLTR. This study showed that certain mandibular anatomic differences can increase the risk of a bad split during SSO surgery. Copyright © 2014 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
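
    The ROC-derived cutoff points quoted above can in principle be reproduced with a standard criterion such as Youden's J (sensitivity + specificity - 1). The abstract does not state which criterion the authors used, so the sketch below is a generic illustration with invented measurements rather than the study's analysis.

```python
import numpy as np

def youden_cutoff(values, has_bad_split):
    """Pick the cutoff that maximizes Youden's J = sensitivity + specificity - 1.

    values        : anatomical measurement (e.g., BLR in mm)
    has_bad_split : boolean outcome per surgical site
    """
    values = np.asarray(values, dtype=float)
    has_bad_split = np.asarray(has_bad_split, dtype=bool)
    best_cut, best_j = None, -1.0
    for cut in np.unique(values):
        pred = values >= cut                       # "positive" = at or above cutoff
        sens = (pred & has_bad_split).sum() / has_bad_split.sum()
        spec = (~pred & ~has_bad_split).sum() / (~has_bad_split).sum()
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

if __name__ == "__main__":
    # invented BLR measurements (mm); True = bad split occurred
    blr = [14.9, 12.5, 16.2, 11.0, 10.8, 13.7, 9.8, 11.4, 15.1, 10.2]
    bad = [True, True, True, False, False, True, False, False, True, False]
    print(youden_cutoff(blr, bad))
```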

  9. SplitRacer - a new Semi-Automatic Tool to Quantify And Interpret Teleseismic Shear-Wave Splitting

    Science.gov (United States)

    Reiss, M. C.; Rumpker, G.

    2017-12-01

    We have developed a semi-automatic, MATLAB-based GUI to combine standard seismological tasks such as the analysis and interpretation of teleseismic shear-wave splitting. Shear-wave splitting analysis is widely used to infer seismic anisotropy, which can be interpreted in terms of lattice-preferred orientation of mantle minerals, shape-preferred orientation caused by fluid-filled cracks, or alternating layers. Seismic anisotropy provides a unique link between directly observable surface structures and the more elusive dynamic processes in the mantle below. Thus, resolving the seismic anisotropy of the lithosphere/asthenosphere is of particular importance for geodynamic modeling and interpretations. The increasing number of seismic stations from temporary experiments and permanent installations creates a new basis for comprehensive studies of seismic anisotropy world-wide. However, the increasingly large data sets pose new challenges for the rapid and reliable analysis of teleseismic waveforms and for the interpretation of the measurements. Well-established routines and programs are available but are often impractical for analyzing large data sets from hundreds of stations. Additionally, shear-wave splitting results are seldom evaluated using the same well-defined quality criteria, which may complicate comparison with results from different studies. SplitRacer has been designed to overcome these challenges by incorporation of the following processing steps: i) downloading of waveform data from multiple stations in mseed-format using FDSNWS tools; ii) automated initial screening and categorizing of XKS-waveforms using a pre-set SNR-threshold; iii) particle-motion analysis of selected phases at longer periods to detect and correct for sensor misalignment; iv) splitting analysis of selected phases based on transverse-energy minimization for multiple, randomly-selected, relevant time windows; v) one and two-layer joint-splitting analysis for all phases at one station by
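
    The transverse-energy minimization of step iv) is typically a grid search over trial fast-axis angle and delay time: rotate the horizontal components into the trial fast/slow frame, advance the slow trace, rotate back, and keep the pair that minimizes energy on the corrected transverse component. The single-window sketch below uses synthetic data and an assumed known polarization direction; it is a bare-bones illustration, not SplitRacer's MATLAB code.

```python
import numpy as np

def apply_splitting(north, east, phi, dt_samples):
    """Split (or, with negative dt_samples, un-split) two horizontal traces.

    phi is the fast-axis azimuth in radians from north; dt_samples is the
    slow-component delay in samples (integer, for simplicity).
    """
    fast = north * np.cos(phi) + east * np.sin(phi)
    slow = -north * np.sin(phi) + east * np.cos(phi)
    slow = np.roll(slow, dt_samples)          # delay (or advance) the slow trace
    n_out = fast * np.cos(phi) - slow * np.sin(phi)
    e_out = fast * np.sin(phi) + slow * np.cos(phi)
    return n_out, e_out

def grid_search(north, east, polarization, max_dt=40):
    """Return (phi_deg, dt, energy) minimizing the corrected transverse energy."""
    best = (None, None, np.inf)
    for phi in np.radians(np.arange(-90, 90, 1.0)):
        for dt in range(0, max_dt + 1):
            n_c, e_c = apply_splitting(north, east, phi, -dt)   # remove trial splitting
            transverse = -n_c * np.sin(polarization) + e_c * np.cos(polarization)
            energy = np.sum(transverse ** 2)
            if energy < best[2]:
                best = (float(np.degrees(phi)), dt, energy)
    return best

if __name__ == "__main__":
    t = np.arange(512)
    wavelet = np.exp(-0.5 * ((t - 150) / 12.0) ** 2) * np.sin(2 * np.pi * t / 50.0)
    pol = np.radians(30.0)                     # assumed known initial polarization
    north, east = wavelet * np.cos(pol), wavelet * np.sin(pol)
    north, east = apply_splitting(north, east, np.radians(60.0), 15)  # true phi=60 deg, dt=15
    print(grid_search(north, east, pol))       # should recover roughly (60, 15)
```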

  10. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    This paper studies three related questions: To what extent otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm ..., founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  11. ZrO2 nanoparticles' effects on split tensile strength of self compacting concrete

    Directory of Open Access Journals (Sweden)

    Ali Nazari

    2010-12-01

    In the present study, the split tensile strength of self compacting concrete with different amounts of ZrO2 nanoparticles has been investigated. ZrO2 nanoparticles with an average particle size of 15 nm were added partially to cement paste (Portland cement) together with polycarboxylate superplasticizer, and the split tensile strength of the specimens was measured. The results indicate that ZrO2 nanoparticles are able to improve the split tensile strength of concrete and recover the negative effects of polycarboxylate superplasticizer. ZrO2 nanoparticles as a partial replacement of cement up to 4 wt.% could accelerate C-S-H gel formation as a result of the increased amount of crystalline Ca(OH)2 at the early age of hydration. Increasing the ZrO2 nanoparticle content beyond 4 wt.% reduces the split tensile strength because of unsuitable dispersion of the nanoparticles in the concrete matrix.

  12. Benchmark Two-Good Utility Functions

    NARCIS (Netherlands)

    de Jaegher, K.

    Benchmark two-good utility functions involving a good with zero income elasticity and unit income elasticity are well known. This paper derives utility functions for the additional benchmark cases where one good has zero cross-price elasticity, unit own-price elasticity, and zero own price
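
    As a worked illustration of such benchmark cases (standard textbook examples, not the paper's derivations): Cobb-Douglas utility yields unit income and own-price elasticities together with zero cross-price elasticity, while quasilinear utility yields zero income elasticity for the non-numeraire good.

```latex
% Cobb-Douglas: u(x,y) = x^{\alpha} y^{1-\alpha}, budget p_x x + p_y y = m
x^{*}(p_x, p_y, m) = \frac{\alpha m}{p_x}
\quad\Rightarrow\quad
\varepsilon_{x,m} = 1, \qquad \varepsilon_{x,p_x} = -1, \qquad \varepsilon_{x,p_y} = 0.

% Quasilinear: u(x,y) = v(x) + y with v' > 0, v'' < 0 (interior solution)
v'(x^{*}) = \frac{p_x}{p_y}
\quad\Rightarrow\quad
\frac{\partial x^{*}}{\partial m} = 0 \quad \text{(zero income elasticity for } x\text{)}.
```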

  13. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  14. Benchmarking gate-based quantum computers

    Science.gov (United States)

    Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans

    2017-11-01

    With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
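
    The identity-circuit idea can be mimicked on a classical statevector simulator: run a gate sequence that should compose to the identity, perturb each gate with a small error, and check how well the initial state is recovered. The sketch below is a toy single-qubit version with an assumed coherent error model, not the procedure used on the IBM Quantum Experience.

```python
import numpy as np

rng = np.random.default_rng(1)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
PAULIS = [X,
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def noisy(U, eps):
    """U followed by a small random rotation of angle ~eps (toy coherent error model)."""
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    theta = rng.normal(scale=eps)
    n_sigma = sum(a * P for a, P in zip(axis, PAULIS))
    err = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * n_sigma
    return err @ U

def identity_circuit_fidelity(gates, eps, shots=200):
    """Average fidelity of |0> after a circuit that ideally composes to the identity."""
    psi0 = np.array([1.0, 0.0], dtype=complex)
    fids = []
    for _ in range(shots):
        psi = psi0
        for U in gates:
            psi = noisy(U, eps) @ psi
        fids.append(abs(np.vdot(psi0, psi)) ** 2)
    return float(np.mean(fids))

if __name__ == "__main__":
    for eps in (0.0, 0.05, 0.2):
        print(eps, identity_circuit_fidelity([H, H, X, X], eps))   # H H X X = identity
```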

  15. A Heterogeneous Medium Analytical Benchmark

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1999-01-01

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels to coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy).

  16. Developing a Benchmarking Process in Perfusion: A Report of the Perfusion Downunder Collaboration

    Science.gov (United States)

    Baker, Robert A.; Newland, Richard F.; Fenton, Carmel; McDonald, Michael; Willcox, Timothy W.; Merry, Alan F.

    2012-01-01

    Abstract: Improving and understanding clinical practice is an appropriate goal for the perfusion community. The Perfusion Downunder Collaboration has established a multi-center perfusion focused database aimed at achieving these goals through the development of quantitative quality indicators for clinical improvement through benchmarking. Data were collected using the Perfusion Downunder Collaboration database from procedures performed in eight Australian and New Zealand cardiac centers between March 2007 and February 2011. At the Perfusion Downunder Meeting in 2010, it was agreed by consensus, to report quality indicators (QI) for glucose level, arterial outlet temperature, and pCO2 management during cardiopulmonary bypass. The values chosen for each QI were: blood glucose ≥4 mmol/L and ≤10 mmol/L; arterial outlet temperature ≤37°C; and arterial blood gas pCO2 ≥ 35 and ≤45 mmHg. The QI data were used to derive benchmarks using the Achievable Benchmark of Care (ABC™) methodology to identify the incidence of QIs at the best performing centers. Five thousand four hundred and sixty-five procedures were evaluated to derive QI and benchmark data. The incidence of the blood glucose QI ranged from 37–96% of procedures, with a benchmark value of 90%. The arterial outlet temperature QI occurred in 16–98% of procedures with the benchmark of 94%; while the arterial pCO2 QI occurred in 21–91%, with the benchmark value of 80%. We have derived QIs and benchmark calculations for the management of several key aspects of cardiopulmonary bypass to provide a platform for improving the quality of perfusion practice. PMID:22730861
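
    The Achievable Benchmark of Care is commonly described as the pooled indicator rate of the best-performing centers that together account for at least 10% of all cases. The sketch below follows that description, omitting the small-denominator adjustment the full ABC method applies, and uses invented per-center counts.

```python
def abc_benchmark(centres, min_fraction=0.10):
    """Achievable Benchmark of Care, simplified.

    centres: list of (numerator, denominator) per centre, where numerator is the
    number of procedures meeting the quality indicator. Centres are ranked by
    rate; the benchmark is the pooled rate of the top centres that together
    cover at least `min_fraction` of all procedures.
    """
    total = sum(d for _, d in centres)
    ranked = sorted(centres, key=lambda nd: nd[0] / nd[1], reverse=True)
    covered, num, den = 0, 0, 0
    for n, d in ranked:
        num, den, covered = num + n, den + d, covered + d
        if covered >= min_fraction * total:
            break
    return num / den

if __name__ == "__main__":
    # invented (QI met, procedures) counts for eight centres
    data = [(450, 500), (610, 700), (300, 450), (820, 900),
            (210, 300), (540, 600), (150, 250), (700, 765)]
    print(f"ABC benchmark: {abc_benchmark(data):.1%}")
```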

  17. Benchmarking i eksternt regnskab og revision

    DEFF Research Database (Denmark)

    Thinggaard, Frank; Kiertzner, Lars

    2001-01-01

    ... continuously in a benchmarking process. This chapter broadly examines the extent to which the concept of benchmarking can reasonably be linked to external financial reporting and auditing. Section 7.1 deals with the external annual report, while Section 7.2 addresses the field of auditing. The final section of the chapter summarizes the considerations on benchmarking in relation to both areas...

  18. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties
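
    One simple way to realize the scoring system mentioned in the abstract is to map a normalized data-model mismatch onto a 0-1 score per variable and combine variables with weights. The variables, weights, and exponential scoring function below are illustrative assumptions, not the framework's prescribed metric.

```python
import numpy as np

def variable_score(model, obs):
    """Score one variable as exp(-NRMSE), where NRMSE is the RMSE normalized by
    the standard deviation of the observations (1 = perfect, -> 0 for large errors)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    return float(np.exp(-rmse / obs.std()))

def overall_score(scores, weights):
    """Weighted combination of per-variable scores."""
    w = np.asarray(weights, float)
    return float(np.dot(list(scores), w / w.sum()))

if __name__ == "__main__":
    obs_gpp = np.array([2.1, 3.4, 5.0, 6.2, 4.8, 3.0])   # e.g. monthly GPP benchmark data
    mod_gpp = np.array([2.4, 3.1, 4.6, 6.8, 5.1, 2.7])
    obs_et  = np.array([30., 42., 55., 61., 50., 35.])   # e.g. evapotranspiration
    mod_et  = np.array([28., 45., 50., 66., 47., 38.])
    s = [variable_score(mod_gpp, obs_gpp), variable_score(mod_et, obs_et)]
    print(f"per-variable scores: {s}, overall: {overall_score(s, [0.6, 0.4]):.2f}")
```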

  19. EPRI depletion benchmark calculations using PARAGON

    International Nuclear Information System (INIS)

    Kucukboyaci, Vefa N.

    2015-01-01

    Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • Benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII based data reduces the excess conservatism and brings the predictions closer to benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases for 10, 20, 30, 40, 50, and 60 GWd/MTU and 3 cooling times 100 h, 5 years, and 15 years. These benchmark cases are analyzed with PARAGON and the SCALE package and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to assess that the 5% decrement approach is conservative for determining depletion uncertainty
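
    The reactivity decrements at the heart of these benchmark cases are simple functions of the calculated multiplication factors. As a reminder of the arithmetic (with invented k-eff values, not EPRI or PARAGON results):

```python
def reactivity(k_eff):
    """Reactivity in pcm: rho = (k - 1) / k * 1e5."""
    return (k_eff - 1.0) / k_eff * 1e5

def depletion_decrement(k_fresh, k_depleted):
    """Reactivity decrement (pcm) between fresh and depleted fuel states."""
    return reactivity(k_fresh) - reactivity(k_depleted)

if __name__ == "__main__":
    # illustrative eigenvalues only
    k_fresh, k_depleted = 1.1450, 1.0210
    dec = depletion_decrement(k_fresh, k_depleted)
    print(f"decrement: {dec:.0f} pcm; with a 5% penalty: {1.05 * dec:.0f} pcm")
```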

  20. VALIDATION OF FULL CORE GEOMETRY MODEL OF THE NODAL3 CODE IN THE PWR TRANSIENT BENCHMARK PROBLEMS

    Directory of Open Access Journals (Sweden)

    Tagor Malem Sembiring

    2015-10-01

    The coupled neutronic and thermal-hydraulic (T/H) code, NODAL3, has been validated against several PWR static benchmarks and the NEACRP PWR transient benchmark cases. However, the NODAL3 code had not yet been validated for the transient benchmark cases of a control rod assembly (CR) ejection at the core periphery using a full core geometry model, the C1 and C2 cases. With this work, the accuracy of the NODAL3 code for one CR ejection, or for the ejection of an unsymmetrical group of CRs, can be validated. The calculations with the NODAL3 code were carried out using the adiabatic method (AM) and the improved quasistatic method (IQS). All transient parameters calculated by the NODAL3 code were compared with the reference results of the PANTHER code. The maximum relative difference of 16% occurs in the calculated time of maximum power when using the IQS method, while the relative difference of the AM method is 4% for the C2 case. None of the NODAL3 results shows a systematic difference, which means that the neutronic and T/H modules adopted in the code are considered correct. Therefore, the NODAL3 results are in very good agreement with the reference results. Keywords: nodal method, coupled neutronic and thermal-hydraulic code, PWR, transient case, control rod ejection.

  1. SeSBench - An initiative to benchmark reactive transport models for environmental subsurface processes

    Science.gov (United States)

    Jacques, Diederik

    2017-04-01

    environmental and geo-engineering applications. SeSBench will organize new workshops to add new benchmarks in a new special issue. Steefel, C. I., et al. (2015). "Reactive transport codes for subsurface environmental simulation." Computational Geosciences 19: 445-478.

  2. [Benchmarking in patient identification: An opportunity to learn].

    Science.gov (United States)

    Salazar-de-la-Guerra, R M; Santotomás-Pajarrón, A; González-Prieto, V; Menéndez-Fraga, M D; Rocha Hurtado, C

    To perform benchmarking on the safe identification of hospital patients involved in "Club de las tres C" (Calidez, Calidad y Cuidados) in order to prepare a common procedure for this process. A descriptive study was conducted on the patient identification process in palliative care and stroke units in 5 medium-stay hospitals. The following steps were carried out: data collection from each hospital; organisation and data analysis, and preparation of a common procedure for this process. The data obtained for the safe identification of all stroke patients were: hospital 1 (93%), hospital 2 (93.1%), hospital 3 (100%), and hospital 5 (93.4%), and for the palliative care process: hospital 1 (93%), hospital 2 (92.3%), hospital 3 (92%), hospital 4 (98.3%), and hospital 5 (85.2%). The aim of the study has been accomplished successfully. Benchmarking activities have been developed and knowledge on the patient identification process has been shared. All hospitals had good results. Hospital 3 performed best in the stroke identification process. Benchmarking identification is difficult, but a useful common procedure that collects the best practices has been identified among the 5 hospitals. Copyright © 2017 SECA. Publicado por Elsevier España, S.L.U. All rights reserved.

  3. Ad hoc committee on reactor physics benchmarks

    International Nuclear Information System (INIS)

    Diamond, D.J.; Mosteller, R.D.; Gehin, J.C.

    1996-01-01

    In the spring of 1994, an ad hoc committee on reactor physics benchmarks was formed under the leadership of two American Nuclear Society (ANS) organizations. The ANS-19 Standards Subcommittee of the Reactor Physics Division and the Computational Benchmark Problem Committee of the Mathematics and Computation Division had both seen a need for additional benchmarks to help validate computer codes used for light water reactor (LWR) neutronics calculations. Although individual organizations had employed various means to validate the reactor physics methods that they used for fuel management, operations, and safety, additional work in code development and refinement is under way, and to increase accuracy, there is a need for a corresponding increase in validation. Both organizations thought that there was a need to promulgate benchmarks based on measured data to supplement the LWR computational benchmarks that have been published in the past. By having an organized benchmark activity, the participants also gain by being able to discuss their problems and achievements with others traveling the same route

  4. Are Ducted Mini-Splits Worth It?

    Energy Technology Data Exchange (ETDEWEB)

    Winkler, Jonathan M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Maguire, Jeffrey B [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Metzger, Cheryn E. [Pacific Northwest National Laboratory; Zhang, Jason [Pacific Northwest National Laboratory

    2018-02-01

    Ducted mini-split heat pumps are gaining popularity in some regions of the country due to their energy-efficient specifications and their ability to be hidden from sight. Although product and install costs are typically higher than the ductless mini-split heat pumps, this technology is well worth the premium for some homeowners who do not like to see an indoor unit in their living area. Due to the interest in this technology by local utilities and homeowners, the Bonneville Power Administration (BPA) has funded the Pacific Northwest National Laboratory (PNNL) and the National Renewable Energy Laboratory (NREL) to develop capabilities within the Building Energy Optimization (BEopt) tool to model ducted mini-split heat pumps. After the fundamental capabilities were added, energy-use results could be compared to other technologies that were already in BEopt, such as zonal electric resistance heat, central air source heat pumps, and ductless mini-split heat pumps. Each of these technologies was then compared using five prototype configurations in three different BPA heating zones to determine how the ducted mini-split technology would perform under different scenarios. The result of this project was a set of EnergyPlus models representing the various prototype configurations in each climate zone. Overall, the ducted mini-split heat pumps saved about 33-60% compared to zonal electric resistance heat (with window AC systems modeled in the summer). The results also showed that the ducted mini-split systems used about 4% more energy than the ductless mini-split systems, which saved about 37-64% compared to electric zonal heat (depending on the prototype and climate).

  5. Computational Chemistry Comparison and Benchmark Database

    Science.gov (United States)

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  6. Particulate photocatalysts for overall water splitting

    Science.gov (United States)

    Chen, Shanshan; Takata, Tsuyoshi; Domen, Kazunari

    2017-10-01

    The conversion of solar energy to chemical energy is a promising way of generating renewable energy. Hydrogen production by means of water splitting over semiconductor photocatalysts is a simple, cost-effective approach to large-scale solar hydrogen synthesis. Since the discovery of the Honda-Fujishima effect, considerable progress has been made in this field, and numerous photocatalytic materials and water-splitting systems have been developed. In this Review, we summarize existing water-splitting systems based on particulate photocatalysts, focusing on the main components: light-harvesting semiconductors and co-catalysts. The essential design principles of the materials employed for overall water-splitting systems based on one-step and two-step photoexcitation are also discussed, concentrating on three elementary processes: photoabsorption, charge transfer and surface catalytic reactions. Finally, we outline challenges and potential advances associated with solar water splitting by particulate photocatalysts for future commercial applications.

  7. MTCB: A Multi-Tenant Customizable database Benchmark

    NARCIS (Netherlands)

    van der Zijden, Wim; Hiemstra, Djoerd; van Keulen, Maurice

    2017-01-01

    We argue that there is a need for Multi-Tenant Customizable OLTP systems. Such systems need a Multi-Tenant Customizable Database (MTC-DB) as a backing. To stimulate the development of such databases, we propose the benchmark MTCB. Benchmarks for OLTP exist and multi-tenant benchmarks exist, but no

  8. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  9. Benchmarking the Netherlands. Benchmarking for growth

    International Nuclear Information System (INIS)

    2003-01-01

    This is the fourth edition of the Ministry of Economic Affairs' publication 'Benchmarking the Netherlands', which aims to assess the competitiveness of the Dutch economy. The methodology and objective of the benchmarking remain the same. The basic conditions for economic activity (institutions, regulation, etc.) in a number of benchmark countries are compared in order to learn from the solutions found by other countries for common economic problems. This publication is devoted entirely to the potential output of the Dutch economy. In other words, its ability to achieve sustainable growth and create work over a longer period without capacity becoming an obstacle. This is important because economic growth is needed to increase prosperity in the broad sense and meet social needs. Prosperity in both a material (per capita GDP) and immaterial (living environment, environment, health, etc) sense, in other words. The economy's potential output is determined by two structural factors: the growth of potential employment and the structural increase in labour productivity. Analysis by the Netherlands Bureau for Economic Policy Analysis (CPB) shows that in recent years the increase in the capacity for economic growth has been realised mainly by increasing the supply of labour and reducing the equilibrium unemployment rate. In view of the ageing of the population in the coming years and decades the supply of labour is unlikely to continue growing at the pace we have become accustomed to in recent years. According to a number of recent studies, to achieve a respectable rate of sustainable economic growth the aim will therefore have to be to increase labour productivity. To realise this we have to focus on six pillars of economic policy: (1) human capital, (2) functioning of markets, (3) entrepreneurship, (4) spatial planning, (5) innovation, and (6) sustainability. These six pillars determine the course for economic policy aiming at higher productivity growth. Throughout

  11. (O)Mega split

    Energy Technology Data Exchange (ETDEWEB)

    Benakli, Karim; Darmé, Luc; Goodsell, Mark D. [Sorbonne Universités, UPMC Univ Paris 06, UMR 7589,LPTHE, F-75005, Paris (France); CNRS, UMR 7589,LPTHE, F-75005, Paris (France)

    2015-11-16

    We study two realisations of the Fake Split Supersymmetry Model (FSSM), the simplest model that can easily reproduce the experimental value of the Higgs mass for an arbitrarily high supersymmetry scale M$_{S}$, as a consequence of swapping higgsinos for equivalent states, fake higgsinos, with suppressed Yukawa couplings. If the LSP is identified as the main Dark matter component, then a standard thermal history of the Universe implies upper bounds on M$_{S}$, which we derive. On the other hand, we show that renormalisation group running of soft masses above M$_{S}$ barely constrains the model — in stark contrast to Split Supersymmetry — and hence we can have a “Mega Split” spectrum even with all of these assumptions and constraints, which include the requirements of a correct relic abundance, a gluino life-time compatible with Big Bang Nucleosynthesis and absence of signals in present direct detection experiments of inelastic dark matter. In an appendix we describe a related scenario, Fake Split Extended Supersymmetry, which enjoys similar properties.

  12. Homozygous sequence variants in the WNT10B gene underlie split hand/foot malformation

    Directory of Open Access Journals (Sweden)

    Asmat Ullah

    2018-01-01

    Split-hand/split-foot malformation (SHFM), also known as ectrodactyly, is a rare genetic disorder. It is a clinically and genetically heterogeneous group of limb malformations characterized by absence/hypoplasia and/or median cleft of hands and/or feet. To date, seven genes underlying SHFM have been identified. This study describes four consanguineous families (A-D) segregating SHFM in an autosomal recessive manner. Linkage in the families was established to chromosome 12p11.1–q13.13 harboring the WNT10B gene. Sequence analysis identified a novel homozygous nonsense variant (p.Gln154*) in exon 4 of the WNT10B gene in two families (A and B). In the other two families (C and D), a previously reported variant (c.300_306dupAGGGCGG; p.Leu103Argfs*53) was detected. This study further expands the spectrum of the sequence variants reported in the WNT10B gene, which result in split hand/foot malformation.

  13. Benchmark simulation models, quo vadis?

    DEFF Research Database (Denmark)

    Jeppsson, U.; Alex, J; Batstone, D. J.

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to p...

  14. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  15. A novel hypothesis splitting method implementation for multi-hypothesis filters

    DEFF Research Database (Denmark)

    Bayramoglu, Enis; Ravn, Ole; Andersen, Nils Axel

    2013-01-01

    The paper presents a multi-hypothesis filter library featuring a novel method for splitting Gaussians into ones with smaller variances. The library is written in C++ for high performance and the source code is open and free. The multi-hypothesis filters commonly approximate the distribution tran...
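
    A common way to split a Gaussian is to replace it with a symmetric, moment-matched mixture whose components have smaller variance. The three-component split below preserves the mean and variance exactly; it is a generic 1-D sketch, not the library's C++ implementation, and the shrink factor and wing weight are arbitrary choices.

```python
import numpy as np

def split_gaussian(mu, sigma, shrink=0.6, wing_weight=0.25):
    """Split N(mu, sigma^2) into a 3-component mixture with smaller variances.

    Components: weights (w, 1-2w, w), means (mu-d, mu, mu+d), common std shrink*sigma.
    The offset d is chosen so the mixture matches the original mean and variance.
    """
    s = shrink * sigma
    d = sigma * np.sqrt((1.0 - shrink ** 2) / (2.0 * wing_weight))
    weights = np.array([wing_weight, 1.0 - 2.0 * wing_weight, wing_weight])
    means = np.array([mu - d, mu, mu + d])
    stds = np.array([s, s, s])
    return weights, means, stds

if __name__ == "__main__":
    w, m, s = split_gaussian(0.0, 2.0)
    mix_mean = np.dot(w, m)
    mix_var = np.dot(w, s ** 2 + (m - mix_mean) ** 2)
    print(w, m, s)
    print(f"mixture mean {mix_mean:.3f}, variance {mix_var:.3f} (target 4.0)")
```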

  16. Revaluering benchmarking - A topical theme for the construction industry

    DEFF Research Database (Denmark)

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    ... and questioning the concept objectively. This paper addresses the underlying nature of benchmarking, and accounts for the importance of focusing attention on the sociological impacts benchmarking has in organizations. To understand these sociological impacts, benchmarking research needs to transcend the perception of benchmarking systems as secondary and derivative and instead study benchmarking as constitutive of social relations and as irredeemably social phenomena. I have attempted to do so in this paper by treating benchmarking using a calculative practice perspective, and describing how...

  17. An algorithm for the split-feasibility problems with application to the split-equality problem.

    Science.gov (United States)

    Chuang, Chih-Sheng; Chen, Chi-Ming

    2017-01-01

    In this paper, we study the split-feasibility problem in Hilbert spaces by using the projected reflected gradient algorithm. As applications, we study the convex linear inverse problem and the split-equality problem in Hilbert spaces, and we give new algorithms for these problems. Finally, numerical results are given for our main results.
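
    For context, the split-feasibility problem asks for a point x in C whose image Ax lies in Q. The sketch below implements the classical CQ projection iteration x_{k+1} = P_C(x_k - gamma * A^T (A x_k - P_Q(A x_k))), not the projected reflected gradient algorithm studied in the paper; the box-shaped sets C and Q are illustrative choices.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, gamma=None, iters=500):
    """Classical CQ iteration for the split-feasibility problem:
    x_{k+1} = P_C( x_k - gamma * A^T (A x_k - P_Q(A x_k)) ).
    gamma defaults to 1 / ||A||^2, a standard choice for convergence with convex C, Q."""
    if gamma is None:
        gamma = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.asarray(x0, float)
    for _ in range(iters):
        Ax = A @ x
        x = proj_C(x - gamma * (A.T @ (Ax - proj_Q(Ax))))
    return x

if __name__ == "__main__":
    A = np.array([[1.0, 2.0], [0.5, -1.0]])
    proj_C = lambda x: np.clip(x, 0.0, 1.0)                 # C = [0, 1]^2
    proj_Q = lambda y: np.clip(y, [1.0, -1.0], [2.0, 0.0])  # Q = [1, 2] x [-1, 0]
    x = cq_algorithm(A, proj_C, proj_Q, np.zeros(2))
    print(x, A @ x)    # x should lie in C with A @ x (approximately) in Q
```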

  18. World-Wide Benchmarking of ITER Nb$_{3}$Sn Strand Test Facilities

    CERN Document Server

    Jewell, MC; Takahashi, Yoshikazu; Shikov, Alexander; Devred, Arnaud; Vostner, Alexander; Liu, Fang; Wu, Yu; Jewell, Matthew C; Boutboul, Thierry; Bessette, Denis; Park, Soo-Hyeon; Isono, Takaaki; Vorobieva, Alexandra; Martovetsky, Nicolai; Seo, Kazutaka

    2010-01-01

    The world-wide procurement of Nb$_{3}$Sn and NbTi for the ITER superconducting magnet systems will involve eight to ten strand suppliers from six Domestic Agencies (DAs) on three continents. To ensure accurate and consistent measurement of the physical and superconducting properties of the composite strand, a strand test facility benchmarking effort was initiated in August 2008. The objectives of this effort are to assess and improve the superconducting strand test and sample preparation technologies at each DA and supplier, in preparation for the more than ten thousand samples that will be tested during ITER procurement. The present benchmarking includes tests for critical current ($I_{c}$), n-index, hysteresis loss ($Q_{hys}$), residual resistivity ratio (RRR), strand diameter, Cu fraction, twist pitch, twist direction, and metal plating thickness (Cr or Ni). Nineteen participants from six parties (China, EU, Japan, South Korea, Russia, and the United States) have participated in the benchmarking. This round, cond...

  19. Establishing benchmarks and metrics for utilization management.

    Science.gov (United States)

    Melanson, Stacy E F

    2014-01-01

    The changing environment of healthcare reimbursement is rapidly leading to a renewed appreciation of the importance of utilization management in the clinical laboratory. The process of benchmarking of laboratory operations is well established for comparing organizational performance to other hospitals (peers) and for trending data over time through internal benchmarks. However, there are relatively few resources available to assist organizations in benchmarking for laboratory utilization management. This article will review the topic of laboratory benchmarking with a focus on the available literature and services to assist in managing physician requests for laboratory testing. © 2013.

  20. How Benchmarking and Higher Education Came Together

    Science.gov (United States)

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  1. Stochastic split determinant algorithms

    International Nuclear Information System (INIS)

    Horvath, Ivan

    2000-01-01

    I propose a large class of stochastic Markov processes associated with probability distributions analogous to that of lattice gauge theory with dynamical fermions. The construction incorporates the idea of approximate spectral split of the determinant through local loop action, and the idea of treating the infrared part of the split through explicit diagonalizations. I suggest that exact algorithms of practical relevance might be based on Markov processes so constructed

  2. Benchmark for Evaluating Moving Object Indexes

    DEFF Research Database (Denmark)

    Chen, Su; Jensen, Christian Søndergaard; Lin, Dan

    2008-01-01

    that targets techniques for the indexing of the current and near-future positions of moving objects. This benchmark enables the comparison of existing and future indexing techniques. It covers important aspects of such indexes that have not previously been covered by any benchmark. Notable aspects covered......Progress in science and engineering relies on the ability to measure, reliably and in detail, pertinent properties of artifacts under design. Progress in the area of database-index design thus relies on empirical studies based on prototype implementations of indexes. This paper proposes a benchmark...... include update efficiency, query efficiency, concurrency control, and storage requirements. Next, the paper applies the benchmark to half a dozen notable moving-object indexes, thus demonstrating the viability of the benchmark and offering new insight into the performance properties of the indexes....

  3. Benchmarking infrastructure for mutation text mining.

    Science.gov (United States)

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
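
    Where the infrastructure computes evaluation metrics with SPARQL over RDF annotations, the same quantities can be written in a few lines of ordinary code. The sketch below computes precision, recall, and F1 for extracted mutation mentions against a gold standard; the identifiers are invented and this is a generic stand-in, not the project's SPARQL queries.

```python
def precision_recall_f1(predicted, gold):
    """Compare two sets of extracted items (e.g., normalized mutation mentions)."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

if __name__ == "__main__":
    # invented mutation identifiers for one document
    system_output = {"p.V600E", "p.G12D", "c.35G>A"}
    gold_standard = {"p.V600E", "p.G12D", "p.R132H"}
    print(precision_recall_f1(system_output, gold_standard))
```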

  4. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600

  5. Benchmarking: A Process for Improvement.

    Science.gov (United States)

    Peischl, Thomas M.

    One problem with the outcome-based measures used in higher education is that they measure quantity but not quality. Benchmarking, or the use of some external standard of quality to measure tasks, processes, and outputs, is partially solving that difficulty. Benchmarking allows for the establishment of a systematic process to indicate if outputs…

  6. M-Split: A Graphical User Interface to Analyze Multilayered Anisotropy from Shear Wave Splitting

    Science.gov (United States)

    Abgarmi, Bizhan; Ozacar, A. Arda

    2017-04-01

    Shear-wave splitting analyses are commonly used to infer deep anisotropic structure. For simple cases, the delay times and fast-axis orientations obtained are averaged from reliable results to define anisotropy beneath recording seismic stations. However, splitting parameters show systematic variations with back azimuth in the presence of complex anisotropy and cannot be represented by an average time delay and fast-axis orientation. Previous researchers have identified anisotropic complexities in different tectonic settings and applied various approaches to model them. Most commonly, such complexities are modeled by using multiple anisotropic layers with a priori constraints from geologic data. In this study, a graphical user interface called M-Split has been developed to easily process and model multilayered anisotropy, with capabilities to properly address the inherited non-uniqueness. The M-Split program runs user-defined grid searches through the model parameter space for two-layer anisotropy using the formulation of Silver and Savage (1994) and creates sensitivity contour plots to locate local maxima and analyze all possible models with parameter tradeoffs. In order to minimize model ambiguity and identify the robust model parameters, various misfit calculation procedures are also developed and embedded in M-Split, which can be used depending on the quality of the observations and their back-azimuthal coverage. Case studies were carried out to evaluate the reliability of the program using real noisy data, and for this purpose stations from two different networks were utilized. The first seismic network is the Kandilli Observatory and Earthquake Research Institute (KOERI) network, which includes long-running permanent stations, and the second network comprises seismic stations deployed temporarily as part of the "Continental Dynamics-Central Anatolian Tectonics (CD-CAT)" project funded by NSF. It is also worth noting that M-Split is designed as an open source program which can be modified by users for

  7. Benchmark Analysis Of The High Temperature Gas Cooled Reactors Using Monte Carlo Technique

    International Nuclear Information System (INIS)

    Nguyen Kien Cuong; Huda, M.Q.

    2008-01-01

    Information about several past and present experimental and prototypical facilities based on High Temperature Gas-Cooled Reactor (HTGR) concepts has been examined to assess the potential of these facilities for use in this benchmarking effort. Both reactors and critical facilities applicable to pebble-bed type cores have been considered. Two facilities - HTR-PROTEUS of Switzerland and HTR-10 of China - and one conceptual design from Germany - HTR-PAP20 - appear to have the greatest potential for use in benchmarking the codes. This study presents the benchmark analysis of these reactor technologies using the MCNP4C2 and MVP/GMVP codes to support the evaluation and future development of HTGRs. The ultimate objective of this work is to identify and develop new capabilities needed to support the Generation IV initiative. (author)

  8. Hospital benchmarking: are U.S. eye hospitals ready?

    Science.gov (United States)

    de Korne, Dirk F; van Wijngaarden, Jeroen D H; Sol, Kees J C A; Betz, Robert; Thomas, Richard C; Schein, Oliver D; Klazinga, Niek S

    2012-01-01

    Benchmarking is increasingly considered a useful management instrument to improve quality in health care, but little is known about its applicability in hospital settings. The aims of this study were to assess the applicability of a benchmarking project in U.S. eye hospitals and compare the results with an international initiative. We evaluated multiple cases by applying an evaluation frame abstracted from the literature to five U.S. eye hospitals that used a set of 10 indicators for efficiency benchmarking. Qualitative analysis entailed 46 semistructured face-to-face interviews with stakeholders, document analyses, and questionnaires. The case studies only partially met the conditions of the evaluation frame. Although learning and quality improvement were stated as overall purposes, the benchmarking initiative was at first focused on efficiency only. No ophthalmic outcomes were included, and clinicians were skeptical about their reporting relevance and disclosure. However, in contrast with earlier findings in international eye hospitals, all U.S. hospitals worked with internal indicators that were integrated in their performance management systems and supported benchmarking. Benchmarking can support performance management in individual hospitals. Having a certain number of comparable institutes provide similar services in a noncompetitive milieu seems to lay fertile ground for benchmarking. International benchmarking is useful only when these conditions are not met nationally. Although the literature focuses on static conditions for effective benchmarking, our case studies show that it is a highly iterative and learning process. The journey of benchmarking seems to be more important than the destination. Improving patient value (health outcomes per unit of cost) requires, however, an integrative perspective where clinicians and administrators closely cooperate on both quality and efficiency issues. If these worlds do not share such a relationship, the added

  9. Split-Cre complementation restores combination activity on transgene excision in hair roots of transgenic tobacco.

    Directory of Open Access Journals (Sweden)

    Mengling Wen

    Full Text Available The Cre/loxP system is increasingly exploited for genetic manipulation of DNA in vitro and in vivo. It was previously reported that inactive "split-Cre" fragments could restore Cre activity in transgenic mice when overlapping co-expression was controlled by two different promoters. In this study, we analyzed the recombination activities of split-Cre proteins, and found that no recombinase activity was detected in the in vitro recombination reaction in which only the N-terminal domain (NCre) of the split-Cre protein was expressed, whereas recombination activity was obtained when the C-terminal domain (CCre), or both NCre and CCre fragments, were supplied. We have also determined the recombination efficiency of split-Cre proteins co-expressed in hair roots of transgenic tobacco. No Cre recombination event was observed in hair roots of transgenic tobacco when the NCre or CCre genes were expressed alone. In contrast, an efficient recombination event was found in transgenic hairy roots co-expressing both inactive split-Cre genes. Moreover, the restored recombination efficiency of split-Cre proteins fused with the nuclear localization sequence (NLS) was higher than that of intact Cre in transgenic lines. Thus, DNA recombination mediated by split-Cre proteins provides an alternative method for spatial and temporal regulation of gene expression in transgenic plants.

  10. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  11. WWER-1000 Burnup Credit Benchmark (CB5)

    International Nuclear Information System (INIS)

    Manolova, M.A.

    2002-01-01

    In the paper, the specification of the first phase of the WWER-1000 Burnup Credit Benchmark (depletion calculations) is given. The second phase - criticality calculations for the WWER-1000 fuel pin cell - will be given after the evaluation of the results obtained in the first phase. The proposed benchmark is a continuation of the WWER benchmark activities in this field (Author)

  12. The role of benchmarking for yardstick competition

    International Nuclear Information System (INIS)

    Burns, Phil; Jenkins, Cloda; Riechmann, Christoph

    2005-01-01

    With the increasing interest in yardstick regulation, there is a need to understand the most appropriate method for realigning tariffs at the outset. Benchmarking is the tool used for such realignment and is therefore a necessary first-step in the implementation of yardstick competition. A number of concerns have been raised about the application of benchmarking, making some practitioners reluctant to move towards yardstick based regimes. We assess five of the key concerns often discussed and find that, in general, these are not as great as perceived. The assessment is based on economic principles and experiences with applying benchmarking to regulated sectors, e.g. in the electricity and water industries in the UK, The Netherlands, Austria and Germany in recent years. The aim is to demonstrate that clarity on the role of benchmarking reduces the concern about its application in different regulatory regimes. We find that benchmarking can be used in regulatory settlements, although the range of possible benchmarking approaches that are appropriate will be small for any individual regulatory question. Benchmarking is feasible as total cost measures and environmental factors are better defined in practice than is commonly appreciated and collusion is unlikely to occur in environments with more than 2 or 3 firms (where shareholders have a role in monitoring and rewarding performance). Furthermore, any concern about companies under-recovering costs is a matter to be determined through the regulatory settlement and does not affect the case for using benchmarking as part of that settlement. (author)

  13. Present status on numerical algorithms and benchmark tests for point kinetics and quasi-static approximate kinetics

    International Nuclear Information System (INIS)

    Ise, Takeharu

    1976-12-01

    Review studies have been made of numerical algorithms and benchmark tests for point-kinetics and quasistatic approximate-kinetics computer codes, in order to perform benchmark tests on space-dependent neutron kinetics codes efficiently. Point-kinetics methods have been improved, since they can be applied directly to the factorization procedures. Methods based on Pade rational functions give numerically stable solutions, and matrix-splitting methods are of interest because they are applicable to direct integration methods. The improved quasistatic (IQ) approximation is the best and most practical method; it is shown numerically that the IQ method has high stability and precision, with a computation time about one tenth of that of the direct method. The IQ method is applicable to thermal reactors as well as fast reactors, and is especially suited to fast reactors, for which many time steps are necessary. Two-dimensional diffusion kinetics codes are the most practicable, though three-dimensional diffusion kinetics codes and two-dimensional transport kinetics codes also exist. In developing a space-dependent kinetics code, it is in any case desirable to improve the method so as to achieve high computing speed for solving the static diffusion and transport equations. (auth.)
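
    Since the record above deals with point-kinetics algorithms, a minimal sketch of the underlying equations may help orient the reader: the snippet below integrates the one-delayed-group point-kinetics equations with a stiff ODE solver. It is not one of the reviewed codes, and the kinetics parameters and the 0.5 $ step reactivity insertion are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative one-delayed-group parameters (assumed, not from the report).
beta, lam, Lambda = 0.0065, 0.08, 1.0e-4   # delayed fraction, decay constant (1/s), generation time (s)

def rho(t):
    return 0.5 * beta                       # step reactivity insertion of 0.5 $

def rhs(t, y):
    n, c = y                                # neutron density, precursor concentration
    dn = (rho(t) - beta) / Lambda * n + lam * c
    dc = beta / Lambda * n - lam * c
    return [dn, dc]

y0 = [1.0, beta / (lam * Lambda)]           # start from the critical equilibrium
sol = solve_ivp(rhs, (0.0, 10.0), y0, method="BDF", rtol=1e-8, atol=1e-10)
```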

  14. The energy level splitting for Unharmonic dc SQUID to be used as phase Q-bit

    DEFF Research Database (Denmark)

    Klenov, Nicolai V.; Kornev, Victor K.; Pedersen, Niels Falsig

    2006-01-01

    splitting. The threshold condition for the origin of the double-well form has been determined taking into account the impact of both harmonics. The splitting gap of the ground energy level has been calculated as a function of the harmonic amplitudes for different ratios s of the characteristic Josephson energy E-C to the Coulomb energy E-Q0. It has been shown that the gap value comes to about 7E-Q0 with increase of the ratio s. No external field needed, no bias current required and no circulating currents are the major advantages of such a qubit. (c) 2006 Elsevier B.V. All rights reserved.

  15. Thermodynamic evaluation of the Kalina split-cycle concepts for waste heat recovery applications

    International Nuclear Information System (INIS)

    Nguyen, Tuong-Van; Knudsen, Thomas; Larsen, Ulrik; Haglind, Fredrik

    2014-01-01

    The Kalina split-cycle is a thermodynamic process for converting thermal energy into electrical power. It uses an ammonia–water mixture as a working fluid (like a conventional Kalina cycle) and has a varying ammonia concentration during the pre-heating and evaporation steps. This second feature results in an improved match between the heat source and working fluid temperature profiles, decreasing the entropy generation in the heat recovery system. The present work compares the thermodynamic performance of this power cycle with the conventional Kalina process, and investigates the impact of varying boundary conditions by conducting an exergy analysis. The design parameters of each configuration were determined by performing a multi-variable optimisation. The results indicate that the Kalina split-cycle with reheat has an exergetic efficiency 2.8 percentage points higher than a reference Kalina cycle with reheat, and 4.3 percentage points higher without reheat. The cycle efficiency varies by 14 percentage points for a variation of the exhaust gas temperature of 100 °C, and by 1 percentage point for a cold water temperature variation of 30 °C. This analysis also pinpoints the large irreversibilities in the low-pressure turbine and condenser, and indicates a reduction of the exergy destruction by about 23% in the heat recovery system compared to the baseline cycle. - Highlights: • The thermodynamic performance of the Kalina split-cycle is assessed. • The Kalina split-cycle is compared to the Kalina cycle, with and without reheat. • An exergy analysis is performed to evaluate its thermodynamic performance. • The impact of varying boundary conditions is investigated. • The Kalina split-cycle displays high exergetic efficiency for low- and medium-temperature applications

  16. Crystal electric field splitting of R{sup 3+}-ions in pure and Co- and Cu-doped RNi{sub 2}B{sub 2}C (R=Ho, Er, Tm)

    Energy Technology Data Exchange (ETDEWEB)

    Gasser, U.; Allenspach, P.; Henggeler, W.; Zolliker, M.; Furrer, A. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1997-09-01

    From the crystal-electric-field (CEF) splitting of the R{sup 3+}-ions, the CEF parameters of RNi{sub 2}B{sub 2}C (R=Ho, Er, Tm) were deduced. In order to get information about the influence of the variation of the density of states (DOS) at the Fermi level (E{sub F}), CEF spectroscopy measurements with Co- and Cu-doped ErNi{sub 2}B{sub 2}C-samples were performed. (author) 1 fig., 1 tab., 1 ref.

  17. Fundamental metallurgical aspects of axial splitting in zircaloy cladding

    International Nuclear Information System (INIS)

    Chung, H. M.

    2000-01-01

    Fundamental metallurgical aspects of axial splitting in irradiated Zircaloy cladding have been investigated by microstructural characterization and analytical modeling, with emphasis on application of the results to understand high-burnup fuel failure under RIA situations. Optical microscopy, SEM, and TEM were conducted on BWR and PWR fuel cladding tubes that were irradiated to fluence levels of 3.3 x 10^21 n cm^-2 to 5.9 x 10^21 n cm^-2 (E > 1 MeV) and tested in hot cell at 292-325°C in Ar. The morphology, distribution, and habit planes of macroscopic and microscopic hydrides in as-irradiated and posttest cladding were determined by stereo-TEM. The type and magnitude of the residual stress produced in association with oxide-layer growth and dense hydride precipitation, and several synergistic factors that strongly influence axial-splitting behavior, were analyzed. The results of the microstructural characterization and stress analyses were then correlated with the axial-splitting behavior of high-burnup PWR cladding reported for simulated-RIA conditions. The effects of key test procedures and their implications for the interpretation of RIA test results are discussed

  18. SP2Bench: A SPARQL Performance Benchmark

    Science.gov (United States)

    Schmidt, Michael; Hornung, Thomas; Meier, Michael; Pinkel, Christoph; Lausen, Georg

    A meaningful analysis and comparison of both existing storage schemes for RDF data and evaluation approaches for SPARQL queries necessitates a comprehensive and universal benchmark platform. We present SP2Bench, a publicly available, language-specific performance benchmark for the SPARQL query language. SP2Bench is settled in the DBLP scenario and comprises a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror vital key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. In this chapter, we discuss requirements and desiderata for SPARQL benchmarks and present the SP2Bench framework, including its data generator, benchmark queries and performance metrics.

  19. Preliminary Magnetostratigraphic Study of the Split Mountain and Lower Imperial Groups, Split Mountain Gorge, Western Salton Trough, CA

    Science.gov (United States)

    Fluette, A. L.; Housen, B. A.; Dorsey, R. J.

    2004-12-01

    We present preliminary results of a magnetostratigraphic study of Miocene-Pliocene sedimentary rocks of the Split Mt. and lower Imperial Groups exposed in Split Mt. Gorge and eastern Fish Creek-Vallecito basin, western Salton Trough. Precise age control for the base of this thick section is needed to improve our understanding of the early history of extension-related subsidence in this region. The geologic setting and stratigraphic framework are known from previous work by Dibblee (1954, 1996), Woodard (1963), Kerr (1982), Winker (1987), Kerr and Kidwell (1991), Winker and Kidwell (1986; 1996), and others. We have analyzed Upper Miocene to lower Pliocene strata exposed in a conformable section in Split Mt. Gorge, including (in order from the base; nomenclature of Winker and Kidwell, 1996): (1) Split Mt. Group: Red Rock Fm alluvial sandstone; Elephant Trees alluvial conglomerate; and lower megabreccia unit; and (2) lower part of Imperial Group, including: Fish Creek Gypsum; proximal to distal turbidites of the Latrania Fm and Wind Caves Mbr of Deguynos Fm; upper megabreccia unit; marine mudstone and rhythmites of the Mud Hills Mbr (Deguynos Fm); and the basal part of the Yuha Mbr (Deguynos Fm). Measured thickness from the base of the Elephant Trees Cgl to the base of the Yuha Mbr is about 1050 m, consistent with previous measurements of Winker (1987). Paleomagnetic samples were collected at approximately 10 m intervals throughout this section. The upper portion of our sampled section overlaps with the lower part of the section sampled for magnetostratigraphic study by Opdyke et al. (1977) and Johnson et al. (1983). They interpreted the base of their section to be about 4.3 Ma, and calculated an average sedimentation rate of approximately 5.5 mm/yr for the lower part of their section. Good-quality preliminary results from 15 paleomagnetic sites distributed throughout our sampled section permit preliminary identification of 6 polarity zones. Based on regional mapping

  20. RB reactor benchmark cores

    International Nuclear Information System (INIS)

    Pesic, M.

    1998-01-01

    A selected set of the RB reactor benchmark cores is presented in this paper. The first results of validation of the well-known Monte Carlo code MCNP and the accompanying neutron cross-section libraries are given. They confirm the idea behind the proposal of the new U-D2O criticality benchmark system and support the intention to include this system in the next edition of the recent OECD/NEA project, the International Handbook of Evaluated Criticality Safety Experiments, in the near future. (author)

  1. Additive operator-difference schemes splitting schemes

    CERN Document Server

    Vabishchevich, Petr N

    2013-01-01

    Applied mathematical modeling is concerned with solving unsteady problems. This book shows how to construct additive difference schemes to approximately solve unsteady multi-dimensional problems for PDEs. Two classes of schemes are highlighted: methods of splitting with respect to spatial variables (alternating direction methods) and schemes of splitting into physical processes. Also regionally additive schemes (domain decomposition methods) and unconditionally stable additive schemes of multi-component splitting are considered for evolutionary equations of first and second order as well as for sy
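
    As a small illustration of splitting into physical processes (not taken from the book), the sketch below applies Lie (sequential) splitting to a 1-D reaction-diffusion equation u_t = D u_xx - k u on a periodic grid: an explicit diffusion sub-step followed by an exactly integrated reaction sub-step. Grid size, coefficients and time step are assumptions chosen so the explicit diffusion step stays stable.

```python
import numpy as np

def lie_splitting_step(u, dt, dx, D=1.0e-2, k=1.0):
    # Sub-step 1: diffusion, one explicit Euler step of u_t = D u_xx (periodic grid).
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    u = u + dt * D * lap
    # Sub-step 2: linear reaction u_t = -k u, integrated exactly over dt.
    return u * np.exp(-k * dt)

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.exp(-100.0 * (x - 0.5) ** 2)            # initial Gaussian bump
dx, dt = x[1] - x[0], 1.0e-4                   # dt kept below the explicit diffusion limit
for _ in range(1000):
    u = lie_splitting_step(u, dt, dx)
```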

  2. Alteration of split renal function during Captopril treatment

    International Nuclear Information System (INIS)

    Aburano, Tamio; Takayama, Teruhiko; Nakajima, Kenichi; Tonami, Norihisa; Hisada, Kinichi; Yasuhara, Shuichirou; Miyamori, Isamu; Takeda, Ryoyu

    1987-01-01

    Two different methods to evaluate the alteration of split renal function following continued Captopril treatment were studied in a total of 21 patients with hypertension. Eight patients with renovascular hypertension (five with unilateral renal artery stenosis and three with bilateral renal artery stenoses), three patients with diabetic nephropathy, one patient with primary aldosteronism, and nine patients with essential hypertension were included. The studies were performed on the day prior to receiving Captopril (baseline) and on the 6th or 7th day of continued Captopril treatment (37.5 mg or 75 mg/day). Split effective renal plasma flow (ERPF) and glomerular filtration rate (GFR) after injections of I-131 hippuran and Tc-99m DTPA were measured using kidney counting corrected for depth and dose, as described by Schlegel and Gates. In the patients with renovascular hypertension, split GFR in the stenotic kidney was significantly decreased on the 6th or 7th day of continued Captopril treatment compared with the baseline value, and split ERPF in the stenotic kidney was slightly, though not significantly, increased. In the patients with diabetic nephropathy, primary aldosteronism or essential hypertension, on the other hand, split GFR was not changed and split ERPF was slightly increased. These findings suggest that the Captopril-induced alterations of split renal function may be of importance for the diagnosis of renovascular hypertension. For this purpose, split GFR determination is more useful than split ERPF determination. (author)

  3. Benchmarking specialty hospitals, a scoping review on theory and practice.

    Science.gov (United States)

    Wind, A; van Harten, W H

    2017-04-04

    Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category; or those dealing with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into: pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was just described or if quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model described the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design, and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking to improve quality in specialty hospitals, robust and structured designs are needed including a follow up to check whether the benchmark study has led to improvements.

  4. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  5. Development of a California commercial building benchmarking database

    International Nuclear Information System (INIS)

    Kinney, Satkartar; Piette, Mary Ann

    2002-01-01

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources including modeled data and individual buildings to expand the database

  6. Thermal hydraulics-II. 2. Benchmarking of the TRIO Two-Phase-Flow Module

    International Nuclear Information System (INIS)

    Helton, Donald; Kumbaro, Anela; Hassan, Yassin

    2001-01-01

    The Commissariat a l'Energie Atomique (CEA) is currently developing a two-phase-flow module for the Trio-U CFD computer program. Work in the area of advanced numerical technique application to two-phase flow is being carried out by the SYSCO division at the CEA Saclay center. Recently, this division implemented several advanced numerical solvers, including approximate Riemann solvers and flux vector splitting schemes. As a test of these new advances, several benchmark tests were executed. This paper describes the pertinent results of this study. The first benchmark problem was the Ransom faucet problem. This problem consists of a vertical column of water acting under the gravity force. The appeal of this problem is that it tests the program's handling of the body force term and it has an analytical solution. The Trio results [based on a two-fluid, two-dimensional (2-D) simulation] for this problem were very encouraging. The two-phase-flow module was able to reproduce the analytical velocity and void fraction profiles. A reasonable amount of numerical diffusion was observed, and the numerical solution converged to the analytical solution as the grid size was refined, as shown in Fig. 1. A second series of benchmark problems is concerned with the employment of a drag force term. In a first approach, we test the capability of the code to take account of this source term, using a flux scheme solution technique. For this test, a rectangular duct was utilized. As shown in Fig. 2, mesh refinement results in an approach to the analytical solution. Next, a convergent/divergent nozzle problem is proposed. The nozzle is characterized by a brief contraction section and a long expansion section. A two-phase, 2-D, non-condensing model is used in conjunction with the Riemann solver. Figure 3 shows a comparison of the pressure profile for the experimental case and for the values calculated by the Trio-U two-phase-flow module. Trio was able to handle the drag force term and
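
    For readers unfamiliar with the first benchmark, the Ransom faucet problem has a simple closed-form solution against which codes are usually compared. The sketch below evaluates that commonly quoted analytical solution under the standard setup assumed here (12 m vertical column, 10 m/s liquid inlet velocity, 0.2 inlet gas void fraction, gravity only, no friction); these values are the conventional ones for the benchmark and are not taken from the TRIO study itself.

```python
import numpy as np

G, V0, ALPHA_G0 = 9.81, 10.0, 0.2     # gravity (m/s^2), inlet liquid velocity (m/s), inlet gas void fraction

def faucet_solution(x, t):
    """Liquid velocity and gas void fraction at distance x (m) below the inlet, at time t (s)."""
    x = np.asarray(x, dtype=float)
    front = V0 * t + 0.5 * G * t**2                       # position of the acceleration front
    v_l = np.where(x <= front, np.sqrt(V0**2 + 2.0 * G * x), V0 + G * t)
    alpha_g = np.where(x <= front, 1.0 - (1.0 - ALPHA_G0) * V0 / v_l, ALPHA_G0)
    return v_l, alpha_g

v_l, alpha_g = faucet_solution(np.linspace(0.0, 12.0, 121), t=0.5)
```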

  7. Spin Splitting in Different Semiconductor Quantum Wells

    International Nuclear Information System (INIS)

    Hao Yafei

    2012-01-01

    We theoretically investigate the spin splitting in four undoped asymmetric quantum wells in the absence of external electric and magnetic fields. The quantum well geometry dependence of the spin splitting is studied with the Rashba and the Dresselhaus spin-orbit coupling included. The results show that the structure of the quantum well plays an important role in the spin splitting. The Rashba and the Dresselhaus spin splitting in the four asymmetric quantum wells are quite different. The origin of this distinction is discussed in this work. (condensed matter: electronic structure, electrical, magnetic, and optical properties)
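
    For orientation only: with the standard linear-in-k Rashba (alpha) and Dresselhaus (beta) terms in a [001]-grown well, the splitting of the two spin branches has a simple closed form, evaluated in the sketch below as a function of the in-plane propagation angle. The coefficient values are illustrative assumptions rather than the paper's, and the sign of the interference term depends on the chosen Hamiltonian convention.

```python
import numpy as np

def spin_splitting(k, phi, alpha, beta):
    """|E+ - E-| = 2 k sqrt(alpha^2 + beta^2 + 2 alpha beta sin(2 phi)) for the
    linear Rashba + Dresselhaus model; k is the in-plane wave number and phi its
    azimuth measured from [100] (sign convention assumed here)."""
    return 2.0 * k * np.sqrt(alpha**2 + beta**2 + 2.0 * alpha * beta * np.sin(2.0 * phi))

k = 0.5e8                                                  # in-plane wave number, m^-1
phi = np.linspace(0.0, 2.0 * np.pi, 361)
dE = spin_splitting(k, phi, alpha=5.0e-12, beta=3.0e-12)   # coefficients in eV*m (illustrative)
```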

  8. How benchmarking can improve patient nutrition.

    Science.gov (United States)

    Ellis, Jane

    Benchmarking is a tool that originated in business to enable organisations to compare their services with industry-wide best practice. Early last year the Department of Health published The Essence of Care, a benchmarking toolkit adapted for use in health care. It focuses on eight elements of care that are crucial to patients' experiences. Nurses and other health care professionals at a London NHS trust have begun a trust-wide benchmarking project. The aim is to improve patients' experiences of health care by sharing and comparing information, and by identifying examples of good practice and areas for improvement. The project began with two of the eight elements of The Essence of Care, with the intention of covering the rest later. This article describes the benchmarking process for nutrition and some of the consequent improvements in care.

  9. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  10. Splitting Ward identity

    Energy Technology Data Exchange (ETDEWEB)

    Safari, Mahmoud [Institute for Research in Fundamental Sciences (IPM), School of Particles and Accelerators, P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of)

    2016-04-15

    Within the background-field framework we present a path integral derivation of the splitting Ward identity for the one-particle irreducible effective action in the presence of an infrared regulator, and make connection with earlier works on the subject. The approach is general in the sense that it does not rely on how the splitting is performed. This identity is then used to address the problem of background dependence of the effective action at an arbitrary energy scale. We next introduce the modified master equation and emphasize its role in constraining the effective action. Finally, application to general gauge theories within the geometric approach is discussed. (orig.)

  11. Splitting Ward identity

    International Nuclear Information System (INIS)

    Safari, Mahmoud

    2016-01-01

    Within the background-field framework we present a path integral derivation of the splitting Ward identity for the one-particle irreducible effective action in the presence of an infrared regulator, and make connection with earlier works on the subject. The approach is general in the sense that it does not rely on how the splitting is performed. This identity is then used to address the problem of background dependence of the effective action at an arbitrary energy scale. We next introduce the modified master equation and emphasize its role in constraining the effective action. Finally, application to general gauge theories within the geometric approach is discussed. (orig.)

  12. IAEA sodium void reactivity benchmark calculations

    International Nuclear Information System (INIS)

    Hill, R.N.; Finck, P.J.

    1992-01-01

    In this paper, the IAEA 1992 "Benchmark Calculation of Sodium Void Reactivity Effect in Fast Reactor Core" problem is evaluated. The proposed design is a large axially heterogeneous oxide-fueled fast reactor as described in Section 2; the core utilizes a sodium plenum above the core to enhance leakage effects. The calculation methods used in this benchmark evaluation are described in Section 3. In Section 4, the calculated core performance results for the benchmark reactor model are presented; and in Section 5, the influence of steel and interstitial sodium heterogeneity effects is estimated

  13. Benchmark Imagery FY11 Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pope, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-06-14

    This report details the work performed in FY11 under project LL11-GS-PD06, “Benchmark Imagery for Assessing Geospatial Semantic Extraction Algorithms.” The original LCP for the Benchmark Imagery project called for creating a set of benchmark imagery for verifying and validating algorithms that extract semantic content from imagery. More specifically, the first year was slated to deliver real imagery that had been annotated, the second year to deliver real imagery that had composited features, and the final year was to deliver synthetic imagery modeled after the real imagery.

  14. Pharmaceutical counselling about different types of tablet-splitting methods based on the results of weighing tests and mechanical development of splitting devices.

    Science.gov (United States)

    Somogyi, O; Meskó, A; Csorba, L; Szabó, P; Zelkó, R

    2017-08-30

    The division of tablets and adequate methods of splitting them are a complex problem in all sectors of health care. Although tablet-splitting is often required, this procedure can be difficult for patients. Four tablets were investigated with different external features (shape, score-line, film-coat and size). The influencing effect of these features and the splitting methods was investigated according to the precision and "weight loss" of splitting techniques. All four types of tablets were halved by four methods: by hand, with a kitchen knife, with an original manufactured splitting device and with a modified tablet splitter based on a self-developed mechanical model. The mechanical parameters (hardness and friability) of the products were measured during the study. The "weight loss" and precision of splitting methods were determined and compared by statistical analysis. On the basis of the results, the external features (geometry), the mechanical parameters of tablets and the mechanical structure of splitting devices can influence the "weight loss" and precision of tablet-splitting. Accordingly, a new decision-making scheme was developed for the selection of splitting methods. In addition, the skills of patients and the specialties of therapy should be considered so that pharmaceutical counselling can be more effective regarding tablet-splitting. Copyright © 2017 Elsevier B.V. All rights reserved.
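
    The abstract does not give the study's exact statistical formulas, so the sketch below only illustrates the kind of weighing-test arithmetic involved: "weight loss" and half-weight precision are defined here in the straightforward way one would assume, and the example weights are made up.

```python
import numpy as np

def splitting_stats(intact_weights_mg, half_weights_mg):
    """Per-tablet mass loss (%), mean absolute deviation of halves from the ideal
    half weight (%), and the standard deviation of those deviations (%)."""
    intact = np.asarray(intact_weights_mg, dtype=float)
    halves = np.asarray(half_weights_mg, dtype=float).reshape(-1, 2)  # two halves per tablet
    loss_pct = 100.0 * (intact - halves.sum(axis=1)) / intact
    target = intact.mean() / 2.0                                      # ideal half weight
    dev_pct = 100.0 * (halves - target) / target
    return loss_pct.mean(), np.abs(dev_pct).mean(), dev_pct.std(ddof=1)

intact = [250.1, 249.6, 250.4]                                # mg, hypothetical
halves = [[123.0, 125.8], [120.5, 127.9], [124.9, 124.2]]     # mg, hypothetical
mean_loss, mean_abs_dev, dev_sd = splitting_stats(intact, halves)
```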

  15. Iterative Splitting Methods for Differential Equations

    CERN Document Server

    Geiser, Juergen

    2011-01-01

    Iterative Splitting Methods for Differential Equations explains how to solve evolution equations via novel iterative-based splitting methods that efficiently use computational and memory resources. It focuses on systems of parabolic and hyperbolic equations, including convection-diffusion-reaction equations, heat equations, and wave equations. In the theoretical part of the book, the author discusses the main theorems and results of the stability and consistency analysis for ordinary differential equations. He then presents extensions of the iterative splitting methods to partial differential

  16. Tourism Destination Benchmarking: Evaluation and Selection of the Benchmarking Partners

    Directory of Open Access Journals (Sweden)

    Luštický Martin

    2012-03-01

    Full Text Available Tourism development has an irreplaceable role in the regional policy of almost all countries. This is due to its undeniable benefits for the local population with regard to the economic, social and environmental spheres. Tourist destinations compete for visitors in the tourism market and subsequently get into a relatively sharp competitive struggle. The main goal of regional governments and destination management institutions is to succeed in this struggle by increasing the competitiveness of their destination. The quality of strategic planning and final strategies is a key factor of competitiveness. Even though the tourism sector is not the typical field where benchmarking methods are widely used, such approaches can be successfully applied. The paper focuses on a key phase of the benchmarking process which lies in the search for suitable referencing partners. The partners are consequently selected to meet general requirements to ensure the quality of strategies. Following from this, some specific characteristics are developed according to the SMART approach. The paper tests this procedure with an expert evaluation of eight selected regional tourism strategies of regions in the Czech Republic, Slovakia and Great Britain. In this way it validates the selected criteria in the frame of the international environment. Hence, it makes it possible to find strengths and weaknesses of selected strategies and at the same time facilitates the discovery of suitable benchmarking partners.

  17. Statistical benchmarking in utility regulation: Role, standards and methods

    International Nuclear Information System (INIS)

    Newton Lowry, Mark; Getachew, Lullit

    2009-01-01

    Statistical benchmarking is being used with increasing frequency around the world in utility rate regulation. We discuss how and where benchmarking is in use for this purpose and the pros and cons of regulatory benchmarking. We then discuss alternative performance standards and benchmarking methods in regulatory applications. We use these to propose guidelines for the appropriate use of benchmarking in the rate setting process. The standards, which we term the competitive market and frontier paradigms, have a bearing on method selection. These along with regulatory experience suggest that benchmarking can either be used for prudence review in regulation or to establish rates or rate setting mechanisms directly

  18. Development of a California commercial building benchmarking database

    Energy Technology Data Exchange (ETDEWEB)

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources including modeled data and individual buildings to expand the database.

  19. 2-Photon tandem device for water splitting

    DEFF Research Database (Denmark)

    Seger, Brian; Castelli, Ivano Eligio; Vesborg, Peter Christian Kjærgaard

    2014-01-01

    Within the field of photocatalytic water splitting there are several strategies to achieve the goal of efficient and cheap photocatalytic water splitting. This work examines one particular strategy by focusing on monolithically stacked, two-photon photoelectrochemical cells. The overall aim...... for photocatalytic water splitting by using a large bandgap photocathode and a low bandgap photoanode with attached protection layers.

  20. Raising Quality and Achievement. A College Guide to Benchmarking.

    Science.gov (United States)

    Owen, Jane

    This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…

  1. Prismatic Core Coupled Transient Benchmark

    International Nuclear Information System (INIS)

    Ortensi, J.; Pope, M.A.; Strydom, G.; Sen, R.S.; DeHart, M.D.; Gougar, H.D.; Ellis, C.; Baxter, A.; Seker, V.; Downar, T.J.; Vierow, K.; Ivanov, K.

    2011-01-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  2. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means revealed performance, i.e. how well the firm performs in its actual market environment given the basic characteristics of the firm and its markets that are expected to drive profitability (firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality and work organization; other factors can also be a cause even if they are not directly observed by the researcher. The critical need for managers to continuously improve their company's efficiency and effectiveness, and to know the success factors and competitiveness determinants, consequently determines which performance measures are most critical in assessing their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical interdependent activities. Firm-level variables, used to infer performance, are often interdependent due to operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other types of profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.

  3. S-wave spectroscopy and Hyperfine splitting of Bc meson

    International Nuclear Information System (INIS)

    Shah, Manan; Bhavsar, Tanvi; Vinodkumar, P.C.

    2017-01-01

    The Bc meson is the only heavy meson with two open flavours. This system is also interesting because it cannot annihilate into gluons. The mass spectra and hyperfine splitting of the Bc meson are investigated in the Dirac framework with the help of a linear plus constant potential. The spin-spin interactions are also included in the calculation of the pseudoscalar and vector meson masses. Our computed results for the Bc meson are in very good agreement with experimental results as well as other available theoretical results. Decay properties are also interesting because decays of the Bc meson are expected to proceed into neutral mesons. We hope our theoretical results are helpful for future experimental observations

  4. Benchmarking of refinery emissions performance : Executive summary

    International Nuclear Information System (INIS)

    2003-07-01

    This study was undertaken to collect emissions performance data for Canadian and comparable American refineries. The objective was to examine parameters that affect refinery air emissions performance and develop methods or correlations to normalize emissions performance. Another objective was to correlate and compare the performance of Canadian refineries to comparable American refineries. For the purpose of this study, benchmarking involved the determination of levels of emission performance that are being achieved for generic groups of facilities. A total of 20 facilities were included in the benchmarking analysis, and 74 American refinery emission correlations were developed. The recommended benchmarks, and the application of those correlations for comparison between Canadian and American refinery performance, were discussed. The benchmarks were: sulfur oxides, nitrogen oxides, carbon monoxide, particulate, volatile organic compounds, ammonia and benzene. For each refinery in Canada, benchmark emissions were developed. Several factors can explain differences in Canadian and American refinery emission performance. 4 tabs., 7 figs

  5. Semi-strong split domination in graphs

    Directory of Open Access Journals (Sweden)

    Anwar Alwardi

    2014-06-01

    Full Text Available Given a graph $G = (V, E)$, a dominating set $D \subseteq V$ is called a semi-strong split dominating set of $G$ if $|V \setminus D| \geq 1$ and the maximum degree of the subgraph induced by $V \setminus D$ is 1. The minimum cardinality of a semi-strong split dominating set (SSSDS) of $G$ is the semi-strong split domination number of $G$, denoted $\gamma_{sss}(G)$. In this work, we introduce the concept and prove several results regarding it.
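
    As a small illustration of the definition (not from the paper), the brute-force sketch below computes the semi-strong split domination number of a toy graph. The condition "maximum degree ... is 1" is read here as "at most 1", i.e. isolated leftover vertices are allowed; that reading is an assumption.

```python
from itertools import combinations

def is_sssd(adj, D):
    """D is taken to be semi-strong split dominating if D dominates the graph,
    V-D is non-empty, and every vertex of V-D has at most one neighbour inside V-D."""
    V = set(adj)
    rest = V - set(D)
    if not rest:
        return False
    dominated = all(v in D or adj[v] & set(D) for v in V)
    max_deg_rest = max((len(adj[v] & rest) for v in rest), default=0)
    return dominated and max_deg_rest <= 1

def sssd_number(adj):
    V = list(adj)
    for size in range(1, len(V) + 1):
        for D in combinations(V, size):
            if is_sssd(adj, D):
                return size, set(D)

# 4-cycle a-b-c-d-a: no single vertex dominates it, so the number is 2.
cycle4 = {'a': {'b', 'd'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'a', 'c'}}
print(sssd_number(cycle4))   # e.g. (2, {'a', 'b'}): only the edge c-d remains outside D
```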

  6. ZZ ECN-BUBEBO, ECN-Petten Burnup Benchmark Book, Inventories, Afterheat

    International Nuclear Information System (INIS)

    Kloosterman, Jan Leen

    1999-01-01

    Description of program or function: Contains experimental benchmarks which can be used for the validation of burnup code systems and accompanying data libraries. Although the benchmarks presented here are thoroughly described in the literature, it is in many cases not straightforward to retrieve unambiguously the correct input data and corresponding results from the benchmark descriptions. Furthermore, results which can easily be measured are sometimes difficult to calculate because of conversions that have to be made. Therefore, emphasis has been put on clarifying the input of the benchmarks and on presenting the benchmark results in such a way that they can easily be calculated and compared. For more thorough descriptions of the benchmarks themselves, the literature referred to here should be consulted. This benchmark book is divided into 11 chapters/files containing the following in text and tabular form: chapter 1: Introduction; chapter 2: Burnup Credit Criticality Benchmark Phase 1-B; chapter 3: Yankee-Rowe Core V Fuel Inventory Study; chapter 4: H.B. Robinson Unit 2 Fuel Inventory Study; chapter 5: Turkey Point Unit 3 Fuel Inventory Study; chapter 6: Turkey Point Unit 3 Afterheat Power Study; chapter 7: Dickens Benchmark on Fission Product Energy Release of U-235; chapter 8: Dickens Benchmark on Fission Product Energy Release of Pu-239; chapter 9: Yarnell Benchmark on Decay Heat Measurements of U-233; chapter 10: Yarnell Benchmark on Decay Heat Measurements of U-235; chapter 11: Yarnell Benchmark on Decay Heat Measurements of Pu-239

  7. Numerical investigation of a dual-loop EGR split strategy using a split index and multi-objective Pareto optimization

    International Nuclear Information System (INIS)

    Park, Jungsoo; Song, Soonho; Lee, Kyo Seung

    2015-01-01

    Highlights: • Model-based control of a dual-loop EGR system is performed. • An EGR split index is developed to provide a non-dimensional index for optimization. • EGR rates are calibrated using the EGR split index at specific operating conditions. • Multi-objective Pareto optimization is performed to minimize NOx and BSFC. • Optimum split strategies are suggested with an LP-rich dual-loop EGR at high load. - Abstract: A proposed dual-loop exhaust-gas recirculation (EGR) system that combines the features of high-pressure (HP) and low-pressure (LP) systems is considered a key technology for improving the combustion behavior of diesel engines. The fraction of HP and LP flows, known as the EGR split, for a given dual-loop EGR rate plays an important role in determining the engine performance and emission characteristics. Therefore, identifying the proper EGR split is important for the engine optimization and calibration processes, which affect the EGR response and deNOx efficiencies. The objective of this research was to develop a dual-loop EGR split strategy using numerical analysis and one-dimensional (1D) cycle simulation. A control system was modeled by coupling the 1D cycle simulation and the control logic. An EGR split index was developed to investigate the HP/LP split effects on the engine performance and emissions. Using the model-based control system, a multi-objective Pareto (MOP) analysis was used to minimize the NOx formation and fuel consumption through optimized engine operating parameters. The MOP analysis was performed using a response surface model extracted from Latin hypercube sampling as a fractional factorial design of experiment. By using an LP-rich dual-loop EGR, a high EGR rate was attained at low, medium, and high engine speeds, increasing the applicable load ranges compared to base conditions
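
    The paper's optimiser itself is not reproduced in the abstract; the sketch below is only a generic non-dominated (Pareto) filter for two minimisation objectives such as NOx and BSFC, applied here to made-up candidate points from a hypothetical design-of-experiments sweep.

```python
import numpy as np

def pareto_front(points):
    """Return the rows not dominated by any other row (smaller is better in every column)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]

rng = np.random.default_rng(0)
# Hypothetical candidates: column 0 = NOx (g/kWh), column 1 = BSFC (g/kWh).
candidates = rng.uniform([0.5, 200.0], [3.0, 260.0], size=(200, 2))
front = pareto_front(candidates)
```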

  8. Analysis of Benchmark 2 results

    International Nuclear Information System (INIS)

    Bacha, F.; Lefievre, B.; Maillard, J.; Silva, J.

    1994-01-01

    The code GEANT315 has been compared to different codes in two benchmarks. We analyze its performances through our results, especially in the thick target case. In spite of gaps in nucleus-nucleus interaction theories at intermediate energies, benchmarks allow possible improvements of physical models used in our codes. Thereafter, a scheme of radioactive waste burning system is studied. (authors). 4 refs., 7 figs., 1 tab

  9. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  10. HPCG Benchmark Technical Specification

    Energy Technology Data Exchange (ETDEWEB)

    Heroux, Michael Allen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States)

    2013-10-01

    The High Performance Conjugate Gradient (HPCG) benchmark [cite SNL, UTK reports] is a tool for ranking computer systems based on a simple additive Schwarz, symmetric Gauss-Seidel preconditioned conjugate gradient solver. HPCG is similar to the High Performance Linpack (HPL), or Top 500, benchmark [1] in its purpose, but HPCG is intended to better represent how today’s applications perform. In this paper we describe the technical details of HPCG: how it is designed and implemented, what code transformations are permitted and how to interpret and report results.
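
    Not the HPCG reference implementation: the sketch below is a dense NumPy illustration of the solver class the benchmark times, i.e. conjugate gradient preconditioned with one symmetric Gauss-Seidel sweep, applied to a small 1-D Laplacian test problem.

```python
import numpy as np
from scipy.linalg import solve_triangular

def sym_gs_apply(A, r):
    """z = M^{-1} r with M = (D+L) D^{-1} (D+U), where A = L + D + U."""
    D = np.diag(np.diag(A))
    w = solve_triangular(np.tril(A), r, lower=True)           # (D+L) w = r
    return solve_triangular(np.triu(A), D @ w, lower=False)   # (D+U) z = D w

def pcg(A, b, tol=1e-8, maxit=500):
    x = np.zeros_like(b)
    r = b - A @ x
    z = sym_gs_apply(A, r)
    p, rz = z.copy(), r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = sym_gs_apply(A, r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small symmetric positive definite test: 1-D Laplacian.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x = pcg(A, np.ones(n))
```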

  11. [Do you mean benchmarking?].

    Science.gov (United States)

    Bonnet, F; Solignac, S; Marty, J

    2008-03-01

    The purpose of benchmarking is to set up improvement processes by comparing activities against quality standards. The proposed methodology is illustrated by benchmark business cases performed in healthcare facilities on items such as nosocomial infections or the organization of surgical facilities. Moreover, the authors have built a specific graphic tool, enhanced with balanced-score figures and mappings, so that comparison between different anesthesia and intensive care departments willing to start an improvement program is easy and relevant. This ready-made application becomes even more accurate when detailed tariffs of activities are implemented.

  12. Benchmark problem for IAEA coordinated research program (CRP-3) on GCR afterheat removal. 1

    International Nuclear Information System (INIS)

    Takada, Shoji; Shiina, Yasuaki; Inagaki, Yoshiyuki; Hishida, Makoto; Sudo, Yukio

    1995-08-01

    In this report, the detailed data necessary for the benchmark analysis of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP-3) on 'Heat Transport and Afterheat Removal for Gas-cooled Reactors under Accident Conditions' are described, concerning the configuration and dimensions of the cooling panel test apparatus, the experimental data and the thermal properties. The test section of the apparatus is composed of a pressure vessel (max. 450°C) containing an electric heater (max. 100 kW, 600°C) and cooling panels surrounding the pressure vessel. The gas pressure inside the pressure vessel is varied from vacuum to 1.0 MPa. Two experimental cases are selected as benchmark problems on afterheat removal of the HTGR: (1) vacuum inside the pressure vessel with a heater output of 13.14 kW, and (2) helium gas at 0.73 MPa inside the pressure vessel with a heater output of 28.79 kW. The benchmark problems are to calculate the temperature distribution on the outer surface of the pressure vessel and the heat transferred to the cooling panels using the experimental data. Using the JAERI computational code THANPACST2, the analytical temperature distribution on the pressure vessel was within +38°C/-29°C of the experimental data, and the analytical heat transferred from the surface of the pressure vessel to the cooling panel differed by at most -11.4% from the experimental result. (author)

  13. Benchmarking in digital circuit design automation

    NARCIS (Netherlands)

    Jozwiak, L.; Gawlowski, D.M.; Slusarczyk, A.S.

    2008-01-01

    This paper focuses on benchmarking, which is the main experimental approach to the design method and EDA-tool analysis, characterization and evaluation. We discuss the importance and difficulties of benchmarking, as well as the recent research effort related to it. To resolve several serious

  14. Benchmarking, Total Quality Management, and Libraries.

    Science.gov (United States)

    Shaughnessy, Thomas W.

    1993-01-01

    Discussion of the use of Total Quality Management (TQM) in higher education and academic libraries focuses on the identification, collection, and use of reliable data. Methods for measuring quality, including benchmarking, are described; performance measures are considered; and benchmarking techniques are examined. (11 references) (MES)

  15. Toxicological benchmarks for wildlife: 1994 Revision

    International Nuclear Information System (INIS)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II.

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report

  16. Toxicological benchmarks for wildlife: 1994 Revision

    Energy Technology Data Exchange (ETDEWEB)

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report.

  17. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  18. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    Energy Technology Data Exchange (ETDEWEB)

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-09-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined and discussed. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR’06 are highlighted, and the future of the two projects is discussed.

  19. INTEGRAL BENCHMARKS AVAILABLE THROUGH THE INTERNATIONAL REACTOR PHYSICS EXPERIMENT EVALUATION PROJECT AND THE INTERNATIONAL CRITICALITY SAFETY BENCHMARK EVALUATION PROJECT

    International Nuclear Information System (INIS)

    J. Blair Briggs; Lori Scott; Enrico Sartori; Yolanda Rugama

    2008-01-01

    Interest in high-quality integral benchmark data is increasing as efforts to quantify and reduce calculational uncertainties accelerate to meet the demands of next generation reactor and advanced fuel cycle concepts. The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) continue to expand their efforts and broaden their scope to identify, evaluate, and provide integral benchmark data for method and data validation. Benchmark model specifications provided by these two projects are used heavily by the international reactor physics, nuclear data, and criticality safety communities. Thus far, 14 countries have contributed to the IRPhEP, and 20 have contributed to the ICSBEP. The status of the IRPhEP and ICSBEP is discussed in this paper, and the future of the two projects is outlined and discussed. Selected benchmarks that have been added to the IRPhEP and ICSBEP handbooks since PHYSOR-06 are highlighted, and the future of the two projects is discussed

  20. Splitting Functions at High Transverse Momentum

    CERN Document Server

    Moutafis, Rhea Penelope; CERN. Geneva. TH Department

    2017-01-01

    Among the production channels of the Higgs boson, one contribution that could become significant at high transverse momentum is the radiation of a Higgs boson off another particle. This note focuses on the calculation of splitting functions and cross sections for such processes. The calculation is first carried out for the example $e\rightarrow e\gamma$ to illustrate how splitting functions are computed. The splitting function of $e\rightarrow eh$ is then calculated in a similar fashion. The procedure generalizes readily to processes such as $q\rightarrow qh$ or $g\rightarrow gh$.
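    For orientation, the collinear factorization underlying such calculations can be written schematically as below for the QED example; this is the standard textbook form (normalization conventions vary between references) and is quoted here only for context, not taken from the note itself.

```latex
% Schematic collinear factorization for photon radiation off an electron,
% with z the momentum fraction carried by the photon:
\mathrm{d}\sigma_{e\to e\gamma + X} \;\simeq\;
  \mathrm{d}\sigma_{e\to X}\,\frac{\alpha}{2\pi}\,
  P_{\gamma e}(z)\,\ln\!\frac{p_T^{2}}{m_e^{2}}\,\mathrm{d}z,
\qquad
P_{\gamma e}(z) \;=\; \frac{1+(1-z)^{2}}{z}.
```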

  1. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This dataset compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measures the City's debt ratio and bond ratings....

  2. Analysis of a molten salt reactor benchmark

    International Nuclear Information System (INIS)

    Ghosh, Biplab; Bajpai, Anil; Degweker, S.B.

    2013-01-01

    This paper discusses results of our studies of an IAEA molten salt reactor (MSR) benchmark. The benchmark, proposed by Japan, involves burnup calculations of a single lattice cell of a MSR for burning plutonium and other minor actinides. We have analyzed this cell with in-house developed burnup codes BURNTRAN and McBURN. This paper also presents a comparison of the results of our codes and those obtained by the proposers of the benchmark. (author)

  3. SplitRacer - a semi-automatic tool for the analysis and interpretation of teleseismic shear-wave splitting

    Science.gov (United States)

    Reiss, Miriam Christina; Rümpker, Georg

    2017-04-01

    We present a semi-automatic, graphical user interface tool for the analysis and interpretation of teleseismic shear-wave splitting in MATLAB. Shear wave splitting analysis is a standard tool to infer seismic anisotropy, which is often interpreted as due to lattice-preferred orientation of e.g. mantle minerals or shape-preferred orientation caused by cracks or alternating layers in the lithosphere, and hence provides a direct link to the earth's kinematic processes. The increasing number of permanent stations and temporary experiments results in comprehensive studies of seismic anisotropy world-wide. Their successive comparison with a growing number of global models of mantle flow further advances our understanding of the earth's interior. However, increasingly large data sets pose the inevitable question of how to process them. Well-established routines and programs are accurate but often slow and impractical for analyzing large amounts of data. Additionally, shear wave splitting results are seldom evaluated using the same quality criteria, which complicates a straightforward comparison. SplitRacer consists of several processing steps: i) download of data via FDSNWS; ii) direct reading of miniSEED files and an initial screening and categorizing of XKS waveforms using a pre-set SNR threshold; iii) analysis of the particle motion of selected phases and successive correction of the sensor misalignment based on the long axis of the particle motion; iv) splitting analysis of selected events, in which seismograms are first rotated into radial and transverse components and the energy-minimization method is then applied to obtain the polarization and delay time of the phase, with errors estimated by repeating the analysis for different randomly chosen time windows; v) joint splitting analysis of all events for one station, where the energy content of all phases is inverted simultaneously, which decreases the influence of noise and increases the robustness of the measurement.
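    To make the energy-minimization step in iv) concrete, the sketch below performs a brute-force grid search over fast-axis angle and delay time and picks the pair that minimizes transverse energy after correction. It is a simplified illustration of that class of method, not the SplitRacer implementation; the windowing, wrap-around shift and sign conventions are deliberately naive.

```python
# Minimal sketch of transverse-energy minimization for shear-wave splitting:
# grid search over fast-axis angle phi and delay time; illustration only.
import numpy as np

def splitting_grid_search(north, east, baz_deg, dt, max_delay=4.0, dphi=1.0):
    """Return (phi_deg, delay_s) minimizing transverse energy after correction.

    north, east : horizontal-component seismograms (equal length, sampling dt)
    baz_deg     : back-azimuth in degrees, used to form the transverse trace
    """
    best = (None, None, np.inf)
    baz = np.deg2rad(baz_deg)
    n_lags = int(max_delay / dt)
    for phi_deg in np.arange(-90.0, 90.0, dphi):
        phi = np.deg2rad(phi_deg)
        # rotate N/E into the candidate fast/slow frame
        fast = np.cos(phi) * north + np.sin(phi) * east
        slow = -np.sin(phi) * north + np.cos(phi) * east
        for lag in range(1, n_lags):
            # advance the slow component by the candidate delay
            # (np.roll wraps around; a real code would window the traces)
            slow_corr = np.roll(slow, -lag)
            # rotate corrected traces back to N/E, then to transverse
            n_corr = np.cos(phi) * fast - np.sin(phi) * slow_corr
            e_corr = np.sin(phi) * fast + np.cos(phi) * slow_corr
            transverse = -np.sin(baz) * n_corr + np.cos(baz) * e_corr
            energy = np.sum(transverse ** 2)
            if energy < best[2]:
                best = (phi_deg, lag * dt, energy)
    return best[0], best[1]
```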

  4. Improved continuum lowering calculations in screened hydrogenic model with l-splitting for high energy density systems

    Science.gov (United States)

    Ali, Amjad; Shabbir Naz, G.; Saleem Shahzad, M.; Kouser, R.; Aman-ur-Rehman; Nasim, M. H.

    2018-03-01

    The energy states of the bound electrons in high energy density systems (HEDS) are significantly affected by the electric field of the neighboring ions. Because of this effect, bound electrons require less energy to become free and move into the continuum. This reduction in the binding potential is termed ionization potential depression (IPD) or continuum lowering (CL). The foremost parameter reflecting this change is the average charge state; accurate modeling of CL is therefore essential when generating atomic data for the computation of radiative and thermodynamic properties of HEDS. In this paper, we present an improved model of CL within the screened hydrogenic model with l-splitting (SHML) proposed by G. Faussurier, C. Blancard and P. Renaudin [High Energy Density Physics 4 (2008) 114], and its effect on the average charge state. We propose a level-charge-dependent calculation of the CL potential energy and the inclusion of exchange and correlation energy in the SHML. This makes the model more relevant to HEDS and free of an empirical CL parameter tied to the plasma environment. We have implemented both the original and the modified SHML in our code, named OPASH, and benchmarked our results against experiments and other state-of-the-art simulation codes. We compared our average charge state results for carbon, beryllium, aluminum, iron and germanium against the published literature and found very reasonable agreement.
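    As a point of reference for the magnitudes involved, the sketch below evaluates the simple ion-sphere estimate of continuum lowering. It is a textbook approximation, not the modified SHML/OPASH model described in the record, and the example charge state and electron density are arbitrary.

```python
# Textbook ion-sphere estimate of continuum lowering (IPD). Order-of-magnitude
# reference only; NOT the modified SHML model implemented in OPASH, and the
# example inputs below are arbitrary.
import math

E_H = 27.2114      # Hartree energy in eV
A0 = 5.29177e-9    # Bohr radius in cm

def ipd_ion_sphere(z_ion, n_e):
    """Continuum lowering (eV) for an ion of residual charge z_ion embedded
    in a free-electron density n_e (cm^-3), ion-sphere model."""
    # ion-sphere radius chosen so that the sphere is charge neutral
    r0 = (3.0 * z_ion / (4.0 * math.pi * n_e)) ** (1.0 / 3.0)
    # Delta I = (3/2) * z * e^2 / (4*pi*eps0*r0), expressed via atomic units
    return 1.5 * z_ion * E_H * (A0 / r0)

# example: a 4+ charge state at a solid-density-like electron density
print(ipd_ion_sphere(z_ion=4, n_e=1.0e23))   # roughly a few tens of eV
```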

  5. Reactor fuel depletion benchmark of TINDER

    International Nuclear Information System (INIS)

    Martin, W.J.; Oliveira, C.R.E. de; Hecht, A.A.

    2014-01-01

    Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO 2 PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work
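    For readers unfamiliar with the predictor-corrector burnup scheme mentioned above, the toy sketch below advances a two-nuclide system with matrix-exponential solves at beginning-of-step and predicted end-of-step rates and averages the results. It only illustrates the idea; the nuclides, rates and constant-rate shortcut are invented, and it bears no relation to the TINDER/CINDER2008 data or coupling.

```python
# Toy predictor-corrector depletion step for a two-nuclide chain
# (parent captures into daughter, daughter decays). All constants are made up;
# this illustrates the scheme, not the TINDER/CINDER2008 solver.
import numpy as np
from scipy.linalg import expm

def burnup_matrix(flux, sigma_capture_cm2, decay_const):
    """Assemble the 2x2 burnup matrix (units of 1/s)."""
    a01 = flux * sigma_capture_cm2          # parent -> daughter capture rate
    return np.array([[-a01, 0.0],
                     [a01, -decay_const]])

def predictor_corrector_step(n0, dt, rates_at):
    """Deplete with beginning-of-step rates (predictor), re-evaluate rates at
    the predicted end-of-step composition, deplete again (corrector), and
    average the two end-of-step compositions."""
    n_pred = expm(rates_at(n0) * dt) @ n0
    n_corr = expm(rates_at(n_pred) * dt) @ n0
    return 0.5 * (n_pred + n_corr)

# rates are held constant here for brevity; in a real coupling they would come
# from a fresh transport solve (e.g. MCNP6) at the current composition
rates = lambda n: burnup_matrix(flux=1.0e14, sigma_capture_cm2=50.0e-24,
                                decay_const=1.0e-9)
n_end = predictor_corrector_step(np.array([1.0e22, 0.0]), dt=30 * 86400.0,
                                 rates_at=rates)
print(n_end)
```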

  6. Benchmarking: contexts and details matter.

    Science.gov (United States)

    Zheng, Siyuan

    2017-07-05

    Benchmarking is an essential step in the development of computational tools. We take this opportunity to pitch in our opinions on tool benchmarking, in light of two correspondence articles published in Genome Biology. Please see related Li et al. and Newman et al. correspondence articles: www.dx.doi.org/10.1186/s13059-017-1256-5 and www.dx.doi.org/10.1186/s13059-017-1257-4.

  7. Benchmark Evaluation of Plutonium Nitrate Solution Arrays

    International Nuclear Information System (INIS)

    Marshall, M.A.; Bess, J.D.

    2011-01-01

    In October and November of 1981, thirteen approach-to-critical experiments were performed on a remote split table machine (RSTM) in the Critical Mass Laboratory of Pacific Northwest Laboratory (PNL) in Richland, Washington, using planar arrays of polyethylene bottles filled with plutonium (Pu) nitrate solution. Arrays of up to sixteen bottles were used to measure the critical number of bottles and the critical array spacing with a tight-fitting Plexiglas(R) reflector on all sides of the arrays except the top. Some experiments used Plexiglas shells fitted around each bottle to determine the effect of moderation on criticality. Each bottle contained approximately 2.4 L of Pu(NO3)4 solution with a Pu content of 105 g Pu/L and a free acid molarity H+ of 5.1. The plutonium was of low 240Pu (2.9 wt.%) content. These experiments were performed to fill a gap in experimental data regarding criticality limits for storing and handling arrays of Pu solution in reprocessing facilities. Of the thirteen approach-to-critical experiments, eleven resulted in extrapolations to critical configurations. Four of the approaches were extrapolated to the critical number of bottles; these were not evaluated further due to the large uncertainty associated with the modeling of a fraction of a bottle. The remaining seven approaches were extrapolated to the critical array spacing of 3-4 and 4-4 arrays; these seven critical configurations were evaluated for inclusion as acceptable benchmark experiments in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Detailed and simple models of these configurations were created, and the associated bias of these simplifications was determined to range from 0.00116 to 0.00162 ± 0.00006 Δkeff. Monte Carlo analysis of all models was completed using MCNP5 with ENDF/B-VII.0 neutron cross section libraries. A thorough uncertainty analysis of all critical, geometric, and material parameters was performed using parameter

  8. Benchmark analysis of MCNP™ ENDF/B-VI iron

    International Nuclear Information System (INIS)

    Court, J.D.; Hendricks, J.S.

    1994-12-01

    The MCNP ENDF/B-VI iron cross-section data was subjected to four benchmark studies as part of the Hiroshima/Nagasaki dose re-evaluation for the National Academy of Science and the Defense Nuclear Agency. The four benchmark studies were: (1) the iron sphere benchmarks from the Lawrence Livermore Pulsed Spheres; (2) the Oak Ridge National Laboratory Fusion Reactor Shielding Benchmark; (3) a 76-cm diameter iron sphere benchmark done at the University of Illinois; (4) the Oak Ridge National Laboratory Benchmark for Neutron Transport through Iron. MCNP4A was used to model each benchmark and computational results from the ENDF/B-VI iron evaluations were compared to ENDF/B-IV, ENDF/B-V, the MCNP Recommended Data Set (which includes Los Alamos National Laboratory Group T-2 evaluations), and experimental data. The results show that the ENDF/B-VI iron evaluations are as good as, or better than, previous data sets

  9. Analysis of an OECD/NEA high-temperature reactor benchmark

    International Nuclear Information System (INIS)

    Hosking, J. G.; Newton, T. D.; Koeberl, O.; Morris, P.; Goluoglu, S.; Tombakoglu, T.; Colak, U.; Sartori, E.

    2006-01-01

    This paper describes analyses of the OECD/NEA HTR benchmark organized by the 'Working Party on the Scientific Issues of Reactor Systems (WPRS)', formerly the 'Working Party on the Physics of Plutonium Fuels and Innovative Fuel Cycles'. The benchmark was specifically designed to provide inter-comparisons for plutonium and thorium fuels when used in HTR systems. Calculations considering uranium fuel have also been included in the benchmark, in order to identify any increased uncertainties when using plutonium or thorium fuels. The benchmark consists of five phases, which include cell and whole-core calculations. Analysis of the benchmark has been performed by a number of international participants, who have used a range of deterministic and Monte Carlo code schemes. For each of the benchmark phases, neutronics parameters have been evaluated. Comparisons are made between the results of the benchmark participants, as well as comparisons between the predictions of the deterministic calculations and those from detailed Monte Carlo calculations. (authors)

  10. Generalized field-splitting algorithms for optimal IMRT delivery efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Kamath, Srijit [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States); Sahni, Sartaj [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Li, Jonathan [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States); Ranka, Sanjay [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Palta, Jatinder [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States)

    2007-09-21

    Intensity-modulated radiation therapy (IMRT) uses radiation beams of varying intensities to deliver varying doses of radiation to different areas of the tissue. The use of IMRT has allowed the delivery of higher doses of radiation to the tumor and lower doses to the surrounding healthy tissue. It is not uncommon for head and neck tumors, for example, to have large treatment widths that are not deliverable using a single field. In such cases, the intensity matrix generated by the optimizer needs to be split into two or three matrices, each of which may be delivered using a single field. Existing field-splitting algorithms use a pre-specified split line or region, and the intensity matrix is split along a column, i.e., all rows of the matrix are split at the same column (with or without overlapping of the split fields, i.e., feathering); if three fields result, the two splits are along the same two columns for all rows. In this paper we study the problem of splitting a large field into two or three subfields with the field width as the only constraint, allowing an arbitrary overlap of the split fields, so that the total MU efficiency of delivering the split fields is maximized. A proof of optimality is provided for the proposed algorithm. An average decrease of 18.8% in total MUs is found when compared to the split generated by a commercial treatment planning system, and of 10% when compared to the split generated by our previously published algorithm. For more information on this article, see medicalphysicsweb.org.
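    As a minimal point of comparison for the problem being solved, the sketch below performs the most naive kind of split: it simply cuts a wide intensity matrix into contiguous column blocks that respect a maximum field width, with no overlap and no MU optimization. It is emphatically not the optimal row-wise algorithm proposed in the paper; the matrix size and width limit are invented.

```python
# Naive illustration of splitting a wide intensity matrix into subfields that
# each respect a maximum deliverable field width (in leaf-pair columns).
# NOT the MU-optimal algorithm of the paper, which allows arbitrary row-wise
# overlaps; this simply cuts the matrix into contiguous column blocks.
import numpy as np

def split_field(intensity, max_width):
    """Split a 2-D intensity matrix column-wise into sub-matrices whose
    widths never exceed max_width. Returns a list of (start_col, block)."""
    n_cols = intensity.shape[1]
    blocks = []
    start = 0
    while start < n_cols:
        stop = min(start + max_width, n_cols)
        blocks.append((start, intensity[:, start:stop]))
        start = stop
    return blocks

# usage: a 40-column field with a hypothetical 29-column width limit
field = np.random.randint(0, 10, size=(10, 40))
for start, block in split_field(field, max_width=29):
    print(start, block.shape)
```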

  11. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  12. Split-plot fractional designs: Is minimum aberration enough?

    DEFF Research Database (Denmark)

    Kulahci, Murat; Ramirez, Jose; Tobias, Randy

    2006-01-01

    Split-plot experiments are commonly used in industry for product and process improvement. Recent articles on designing split-plot experiments concentrate on minimum aberration as the design criterion. Minimum aberration has been criticized as a design criterion for completely randomized fractional...... factorial design and alternative criteria, such as the maximum number of clear two-factor interactions, are suggested (Wu and Hamada (2000)). The need for alternatives to minimum aberration is even more acute for split-plot designs. In a standard split-plot design, there are several types of two...... for completely randomized designs. Consequently, we provide a modified version of the maximum number of clear two-factor interactions design criterion to be used for split-plot designs....

  13. MoMaS reactive transport benchmark using PFLOTRAN

    Science.gov (United States)

    Park, H.

    2017-12-01

    The MoMaS benchmark was developed to enhance numerical simulation capability for reactive transport modeling in porous media. The benchmark was published in late September of 2009; it is not taken from a real chemical system, but provides realistic and numerically challenging tests. PFLOTRAN is a state-of-the-art, massively parallel subsurface flow and reactive transport code that is being used in multiple nuclear waste repository projects at Sandia National Laboratories, including the Waste Isolation Pilot Plant and Used Fuel Disposition. The MoMaS benchmark has three independent tests of easy, medium, and hard chemical complexity. This paper demonstrates how PFLOTRAN is applied to this benchmark exercise and shows results for the easy benchmark test case, which includes mixing of aqueous components and surface complexation. The surface complexation consists of monodentate and bidentate reactions, which introduces difficulty in defining the selectivity coefficient if the reaction applies to a bulk reference volume. The selectivity coefficient becomes porosity dependent for the bidentate reaction in heterogeneous porous media. The benchmark is solved with PFLOTRAN with minimal modification to address this issue, and unit conversions were made to suit PFLOTRAN.

  14. Criteria of benchmark selection for efficient flexible multibody system formalisms

    Directory of Open Access Journals (Sweden)

    Valášek M.

    2007-10-01

    Full Text Available The paper deals with the selection process of benchmarks for testing and comparing efficient flexible multibody formalisms. The existing benchmarks are briefly summarized. The purposes of benchmark selection are investigated. The result of this analysis is the formulation of criteria for benchmark selection for flexible multibody formalisms. Based on these criteria, an initial set of suitable benchmarks is described. In addition, the evaluation measures are revised and extended.

  15. Nanocrystals and Nanoclusters as Cocatalysts for Photocatalytic Water Splitting

    KAUST Repository

    Sinatra, Lutfan

    2016-12-04

    Worldwide energy consumption has increased along with the growth of the population and the economy. Finding an alternative way to satisfy the energy demand is one of the great challenges for the future of humanity, especially given the limited supply of fossil fuels and their effect on global warming. Hydrogen is very attractive as an alternative fuel for the future. Compared to traditional methods, such as the steam reforming of natural gas or coal gasification, photocatalytic water splitting (PWS) is considered to be the most sustainable alternative for producing hydrogen as a future fuel. PWS usually relies on a semiconductor material that transforms absorbed solar photons into photogenerated electrons and holes, creating a photopotential which can drive the electrochemical production of molecular hydrogen from the reduction of water. Despite its promising applications, semiconductor-based PWS usually suffers from low carrier mobility and short diffusion lengths. Furthermore, recombination of the photogenerated electrons and holes can occur, especially if no suitable reaction sites are available on the surface of the semiconductor. In order to facilitate the catalytic reactions on the surface of the semiconductor, the presence of a cocatalyst is necessary to obtain more efficient PWS processes. To this day, noble metals and noble-metal oxides such as Pt, Pd, RuO2 and IrO2 are still the benchmark cocatalysts for PWS. Nevertheless, due to their high cost and limited supply, it is necessary to develop suitable strategies and to identify more efficient materials. Therefore, within the framework of this dissertation, novel cocatalysts and strategies that can improve the efficiency of the photocatalytic water splitting processes have been developed. Firstly, we developed a cocatalyst combining noble metals and semiconductors by means of partial galvanic replacement of Cu2O nanocrystals with Au. The deposition of this cocatalyst on TiO2 was

  16. 26 CFR 1.7872-15 - Split-dollar loans.

    Science.gov (United States)

    2010-04-01

    ...'s death benefit proceeds, the policy's cash surrender value, or both. (ii) Payments that are only... regarding certain split-dollar term loans payable on the death of an individual, certain split-dollar term... insurance arrangement make a representation—(i) Requirement. An otherwise noncontingent payment on a split...

  17. Criticality safety benchmarking of PASC-3 and ECNJEF1.1

    International Nuclear Information System (INIS)

    Li, J.

    1992-09-01

    To validate the code system PASC-3 and the multigroup cross section library ECNJEF1.1 for various applications, many benchmarks are required. This report presents the results of criticality safety benchmarking for five calculational and four experimental benchmarks. These benchmarks are related to transport packages for fissile materials such as spent fuel. The fissile nuclides in these benchmarks are 235 U and 239 Pu. The modules of PASC-3 used for the calculations are BONAMI, NITAWL and KENO.5A. The final results for the experimental benchmarks agree well with the experimental data. For the calculational benchmarks, the results presented here are in reasonable agreement with the results from other investigations. (author). 8 refs.; 20 figs.; 5 tabs

  18. Project W-320 thermal hydraulic model benchmarking and baselining

    International Nuclear Information System (INIS)

    Sathyanarayana, K.

    1998-01-01

    Project W-320 will be retrieving waste from Tank 241-C-106 and transferring the waste to Tank 241-AY-102. Waste in both tanks must be maintained below applicable thermal limits during and following the waste transfer. Thermal hydraulic process control models will be used for process control of the thermal limits. This report documents the process control models and presents a benchmarking of the models with data from Tanks 241-C-106 and 241-AY-102. Revision 1 of this report will provide a baselining of the models in preparation for the initiation of sluicing

  19. Revaluering benchmarking - A topical theme for the construction industry

    OpenAIRE

    Rasmussen, Grane Mikael Gregaard

    2011-01-01

    Over the past decade, benchmarking has increasingly gained a foothold in the construction industry. The predominant research on, perceptions of and uses of benchmarking are valued so strongly and uniformly that what may seem valuable actually keeps researchers and practitioners from studying and questioning the concept objectively. This paper addresses the underlying nature of benchmarking and accounts for the importance of focusing attention on the sociological impacts benchmarking has in...

  20. Numerical methods: Analytical benchmarking in transport theory

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1988-01-01

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered

  1. Benchmarking and validation activities within JEFF project

    Directory of Open Access Journals (Sweden)

    Cabellos O.

    2017-01-01

    Full Text Available The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus, requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.

  2. Benchmarking and validation activities within JEFF project

    Science.gov (United States)

    Cabellos, O.; Alvarez-Velarde, F.; Angelone, M.; Diez, C. J.; Dyrda, J.; Fiorito, L.; Fischer, U.; Fleming, M.; Haeck, W.; Hill, I.; Ichou, R.; Kim, D. H.; Klix, A.; Kodeli, I.; Leconte, P.; Michel-Sendis, F.; Nunnenmann, E.; Pecchia, M.; Peneliau, Y.; Plompen, A.; Rochman, D.; Romojaro, P.; Stankovskiy, A.; Sublet, J. Ch.; Tamagno, P.; Marck, S. van der

    2017-09-01

    The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library, with all needed data and covariances, and ensure that it is robust and reliable for a variety of applications. Within an evaluation effort, benchmarking activities play an important role in validating proposed libraries. The Joint Evaluated Fission and Fusion (JEFF) Project aims to provide such a nuclear data library, and thus, requires a coherent and efficient benchmarking process. The aim of this paper is to present the activities carried out by the new JEFF Benchmarking and Validation Working Group, and to describe the role of the NEA Data Bank in this context. The paper will also review the status of preliminary benchmarking for the next JEFF-3.3 candidate cross-section files.

  3. Benchmarking

    OpenAIRE

    Beretta Sergio; Dossi Andrea; Grove Hugh

    2000-01-01

    Due to their particular nature, benchmarking methodologies tend to exceed the boundaries of management techniques and to enter the territory of managerial culture. This culture is also destined to break into the accounting area, not only by strongly supporting the possibility of fixing targets and of measuring and comparing performance (an aspect that is already innovative and worthy of attention), but also by questioning one of the principles (or taboos) of the accounting or...

  4. A guiding oblique osteotomy cut to prevent bad split in sagittal split ramus osteotomy: a technical note

    Directory of Open Access Journals (Sweden)

    Gururaj Arakeri

    2015-06-01

    Full Text Available Aim: To present a simple technical modification of a medial osteotomy cut which prevents its misdirection and overcomes various anatomical variations as well as technical problems. Methods: The medial osteotomy cut is modified in the posterior half at an angle of 15°-20° following novel landmarks. Results: The proposed cut exclusively directs the splitting forces downwards to create a favorable lingual fracture, preventing the possibility of an upwards split which would cause a coronoid or condylar fracture. Conclusion: This modification has proven to be successful to date without encountering the complications of a bad split or nerve damage.

  5. Splitting and non-splitting air pollution models of photochemical reactions in the urban areas of greater Tehran

    International Nuclear Information System (INIS)

    Heidarinasab, A.; Dabir, B.; Sahimi, M.; Badii, Kh.

    2003-01-01

    During the past years, one of the most important problems has been air pollution in urban areas. In this regard, ozone, as one of the major products of photochemical reactions, is of great importance. The term 'photochemical' is applied to a number of secondary pollutants that appear as a result of sun-related reactions, ozone being the most important one. So far, various models have been suggested to predict these pollutants. In this paper, we extended the model introduced by Dabir, et al. [4], in which more than 48 chemical species and 114 chemical reactions are involved. The results of this development showed good to excellent agreement across the region for compounds such as O 3 , NO, NO 2 , CO, and SO 2 with regard to VOC and NMHC. The results of the simulation were compared with previous work [4] and the effects of increasing the number of components and reactions were evaluated. The results of the operator splitting method were compared with the non-splitting solution method. The results showed that the splitting method with one-tenth the time step coincided with the non-splitting method (Crank-Nicolson with under-relaxation iteration, without splitting of the equation terms). The one-dimensional model was then extended to 3-D and the results were compared with experimental data
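    To illustrate the kind of comparison being made, the sketch below contrasts one Strang-split step (chemistry, transport, chemistry) with an unsplit explicit step for a toy 1-D advection plus first-order decay problem. It is only a schematic of the operator-splitting idea; the scheme, grid and rate constant are invented and have nothing to do with the 48-species Tehran model or the Crank-Nicolson solver discussed above.

```python
# Toy 1-D advection + first-order chemistry example contrasting an operator-
# splitting step with an unsplit (simultaneous) explicit step.
import numpy as np

def advect(c, u, dx, dt):
    """First-order upwind advection step (assumes u > 0, periodic domain)."""
    return c - u * dt / dx * (c - np.roll(c, 1))

def react(c, k, dt):
    """Exact step for linear decay dc/dt = -k c."""
    return c * np.exp(-k * dt)

def split_step(c, u, k, dx, dt):
    """Strang splitting: half chemistry, full transport, half chemistry."""
    c = react(c, k, 0.5 * dt)
    c = advect(c, u, dx, dt)
    return react(c, k, 0.5 * dt)

def unsplit_step(c, u, k, dx, dt):
    """Both processes applied simultaneously (explicit, first order)."""
    return c - u * dt / dx * (c - np.roll(c, 1)) - k * dt * c

# illustrative comparison on a Gaussian puff
x = np.linspace(0.0, 1.0, 200)
c0 = np.exp(-((x - 0.3) / 0.05) ** 2)
c_split, c_unsplit = c0.copy(), c0.copy()
for _ in range(100):
    c_split = split_step(c_split, u=1.0, k=0.5, dx=x[1] - x[0], dt=1e-3)
    c_unsplit = unsplit_step(c_unsplit, u=1.0, k=0.5, dx=x[1] - x[0], dt=1e-3)
print(np.max(np.abs(c_split - c_unsplit)))   # small splitting error
```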

  6. Benchmarking in Identifying Priority Directions of Development of Telecommunication Operators

    Directory of Open Access Journals (Sweden)

    Zaharchenko Lolita A.

    2013-12-01

    Full Text Available The article analyses the evolution and the possibilities of application of benchmarking in the telecommunication sphere. It studies the essence of benchmarking by generalising the approaches of different scholars to the definition of this notion. In order to improve the activity of telecommunication operators, the article identifies the benchmarking technology and the main factors that determine the success of an operator in the modern market economy, as well as the mechanism of benchmarking and the component stages of carrying out benchmarking by a telecommunication operator. It analyses the telecommunication market and identifies the dynamics of its development and tendencies in the changing composition of telecommunication operators and providers. Having generalised the existing experience of benchmarking application, the article identifies the main types of benchmarking of telecommunication operators by the following features: by the level of conduct (branch, inter-branch and international benchmarking); by participation in the conduct (competitive and joint); and with respect to the enterprise environment (internal and external).

  7. Thermal reactor benchmark tests on JENDL-2

    International Nuclear Information System (INIS)

    Takano, Hideki; Tsuchihashi, Keichiro; Yamane, Tsuyoshi; Akino, Fujiyoshi; Ishiguro, Yukio; Ido, Masaru.

    1983-11-01

    A group constant library for the thermal reactor standard nuclear design code system SRAC was produced using the evaluated nuclear data JENDL-2. Furthermore, the group constants for 235 U were also calculated from ENDF/B-V. Thermal reactor benchmark calculations were performed using the produced group constant library. The selected benchmark cores are two water-moderated lattices (TRX-1 and 2), two heavy-water-moderated cores (DCA and ETA-1), two graphite-moderated cores (SHE-8 and 13) and eight critical experiments for criticality safety. The effective multiplication factors and lattice cell parameters were calculated and compared with the experimental values. The results are summarized as follows. (1) Effective multiplication factors: The results obtained with JENDL-2 are considerably improved in comparison with those from ENDF/B-IV. The best agreement is obtained by using JENDL-2 together with ENDF/B-V (for 235 U only). (2) Lattice cell parameters: For rho 28 (the ratio of epithermal to thermal 238 U captures) and C* (the ratio of 238 U captures to 235 U fissions), the values calculated with JENDL-2 are in good agreement with the experimental values. The delta 28 (the ratio of 238 U to 235 U fissions) is overestimated, as was also found for the fast reactor benchmarks. The rho 02 (the ratio of epithermal to thermal 232 Th captures) calculated with JENDL-2 or ENDF/B-IV is considerably underestimated. The functions of the SRAC system have continued to be extended according to the needs of its users. A brief description is given, in Appendix B, of the extended parts of the SRAC system together with the input specification. (author)

  8. Handbook of critical experiments benchmarks

    International Nuclear Information System (INIS)

    Durst, B.M.; Bierman, S.R.; Clayton, E.D.

    1978-03-01

    Data from critical experiments have been collected together for use as benchmarks in evaluating calculational techniques and nuclear data. These benchmarks have been selected from the numerous experiments performed on homogeneous plutonium systems. No attempt has been made to reproduce all of the data that exists. The primary objective in the collection of these data is to present representative experimental data defined in a concise, standardized format that can easily be translated into computer code input

  9. Benchmark results in radiative transfer

    International Nuclear Information System (INIS)

    Garcia, R.D.M.; Siewert, C.E.

    1986-02-01

    Several aspects of the F N method are reported, and the method is used to solve accurately some benchmark problems in radiative transfer in the field of atmospheric physics. The method was modified to treat cases of pure scattering, and an improved process was developed for computing the radiation intensity. An algorithm for computing several quantities used in the F N method was developed. An improved scheme to evaluate certain integrals relevant to the method is given, and a two-term recursion relation that has proved useful for the numerical evaluation of matrix elements, basic to the method, is presented. The methods used to solve the resulting linear algebraic equations are discussed, and the numerical results are evaluated. (M.C.K.) [pt

  10. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Stueben, H.; Wegner, P.; Wettig, T.; Wittig, H.

    2004-01-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC.

  11. Tune splitting in the presence of linear coupling

    International Nuclear Information System (INIS)

    Parzen, G.

    1991-01-01

    The presence of random skew quadrupole field errors will couple the x and y motions. The x and y motions are then each given by the sum of two normal modes with tunes v 1 and v 2 , which may differ appreciably from v x and v y , the unperturbed tunes. This is often called tune splitting, since |v 1 - v 2 | is usually larger than |v x - v y |. This tune splitting may be large in proton accelerators using superconducting magnets, because of the relatively large random skew quadrupole field errors that are expected in these magnets. The effect is also increased by the insertions required in proton colliders, which generate large β-functions in the insertion region. This tune splitting has been studied for the RHIC accelerator. For RHIC, a tune splitting as large as 0.2 was found in one worst case. A correction system using two families of skew quadrupole correctors has been developed for correcting this large tune splitting. It has been found that this correction system corrects most of the large tune splitting, but an appreciable residual tune splitting remains. This paper discusses the corrections to this residual tune splitting
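    For context, the textbook relation between the measured normal-mode tunes and the linear (difference-resonance) coupling strength is shown below; this is a standard result quoted only for orientation, not taken from the RHIC analysis itself.

```latex
% Normal-mode tunes for unperturbed tunes \nu_x, \nu_y, tune distance
% \Delta = \nu_x - \nu_y and difference-resonance coupling strength |C^-|:
\nu_{1,2} \;=\; \frac{\nu_x + \nu_y}{2} \;\pm\; \frac{1}{2}\sqrt{\Delta^{2} + |C^{-}|^{2}},
\qquad
\left.\,|\nu_1 - \nu_2|\,\right|_{\Delta = 0} \;=\; |C^{-}|.
```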

  12. OECD benchmark A of MOX fueled PWR unit cells using SAS2H, TRITON and MOCUP

    International Nuclear Information System (INIS)

    Ganda, F.; Greenspan, A.

    2005-01-01

    Three code systems are tested by applying them to the OECD PWR MOX unit cell benchmark A. The codes tested are the SAS2H sequence of the SCALE5 code package using a 44-group library, MOCUP (MCNP4C + ORIGEN2), and the new TRITON depletion sequence of SCALE5 using 238-group cross sections generated with CENTRM from continuous-energy cross sections. The burnup-dependent k ∞ and actinide concentrations calculated by all three code systems were found to be in good agreement with the OECD benchmark average results. Limited results were also calculated with the WIMS-ANL code package. WIMS-ANL was found to significantly under-predict k ∞ as well as the concentration of Pu 242 , consistent with the predictions of WIMS-LWR reported by two of the OECD benchmark participants. Additionally, SAS2H is benchmarked against MOCUP for a hydride-fuel-containing unit cell, giving very satisfactory agreement. (authors)

  13. Benchmarking computer platforms for lattice QCD applications

    International Nuclear Information System (INIS)

    Hasenbusch, M.; Jansen, K.; Pleiter, D.; Wegner, P.; Wettig, T.

    2003-09-01

    We define a benchmark suite for lattice QCD and report on benchmark results from several computer platforms. The platforms considered are apeNEXT, CRAY T3E, Hitachi SR8000, IBM p690, PC-Clusters, and QCDOC. (orig.)

  14. A simplified 2D HTTR benchmark problem

    International Nuclear Information System (INIS)

    Zhang, Z.; Rahnema, F.; Pounders, J. M.; Zhang, D.; Ougouag, A.

    2009-01-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of relevant whole-core configurations. In this paper we have created a numerical benchmark problem in a 2D configuration typical of a high temperature gas cooled prismatic core. This problem was derived from the HTTR start-up experiment. For code-to-code verification, complex details of the geometry and material specification of the physical experiments are not necessary. To this end, the benchmark problem presented here is derived by simplifications that remove the unnecessary details while retaining the heterogeneity and the major physics properties from the neutronics viewpoint. Also included here is a six-group material (macroscopic) cross section library for the benchmark problem. This library was generated using the lattice depletion code HELIOS. Using this library, benchmark-quality Monte Carlo solutions are provided for three different configurations (all-rods-in, partially-controlled and all-rods-out). The reference solutions include the core eigenvalue, block (assembly) averaged fuel pin fission density distributions, and the absorption rate in absorbers (burnable poison and control rods). (authors)

  15. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    Bulej, Lubomír

    2005-01-01

    The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and the methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking to a real software project and conclude with a glimpse at the challenges for the fu...

  16. Second benchmark problem for WIPP structural computations

    International Nuclear Information System (INIS)

    Krieg, R.D.; Morgan, H.S.; Hunter, T.O.

    1980-12-01

    This report describes the second benchmark problem for comparison of the structural codes used in the WIPP project. The first benchmark problem consisted of heated and unheated drifts at a depth of 790 m, whereas this problem considers a shallower level (650 m) more typical of the repository horizon. But more important, the first problem considered a homogeneous salt configuration, whereas this problem considers a configuration with 27 distinct geologic layers, including 10 clay layers - 4 of which are to be modeled as possible slip planes. The inclusion of layering introduces complications in structural and thermal calculations that were not present in the first benchmark problem. These additional complications will be handled differently by the various codes used to compute drift closure rates. This second benchmark problem will assess these codes by evaluating the treatment of these complications

  17. Split-plot designs for robotic serial dilution assays.

    Science.gov (United States)

    Buzas, Jeffrey S; Wager, Carrie G; Lansky, David M

    2011-12-01

    This article explores effective implementation of split-plot designs in serial dilution bioassay using robots. We show that the shortest path for a robot to fill plate wells for a split-plot design is equivalent to the shortest common supersequence problem in combinatorics. We develop an algorithm for finding the shortest common supersequence, provide an R implementation, and explore the distribution of the number of steps required to implement split-plot designs for bioassay through simulation. We also show how to construct collections of split plots that can be filled in a minimal number of steps, thereby demonstrating that split-plot designs can be implemented with nearly the same effort as strip-plot designs. Finally, we provide guidelines for modeling data that result from these designs. © 2011, The International Biometric Society.
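    Since the paper reduces the robot's well-filling path to the shortest common supersequence (SCS) problem, the sketch below shows the standard dynamic-programming solution for two sequences. This is the textbook two-sequence version of the problem rather than the paper's multi-sequence bioassay setting, and the example well-visit orders are hypothetical.

```python
# Standard dynamic-programming solution of the shortest common supersequence
# (SCS) of two sequences, the combinatorial problem the plate-filling path is
# mapped onto; illustration only, with hypothetical well-visit orders.
def shortest_common_supersequence(a, b):
    m, n = len(a), len(b)
    # L[i][j] = length of the SCS of a[:i] and b[:j]
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        for j in range(n + 1):
            if i == 0:
                L[i][j] = j
            elif j == 0:
                L[i][j] = i
            elif a[i - 1] == b[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = 1 + min(L[i - 1][j], L[i][j - 1])
    # backtrack to recover one SCS (built back-to-front, reversed at the end)
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif L[i - 1][j] < L[i][j - 1]:
            out.append(a[i - 1]); i -= 1
        else:
            out.append(b[j - 1]); j -= 1
    out.extend(reversed(a[:i])); out.extend(reversed(b[:j]))
    return list(reversed(out))

# two hypothetical well-visit orders for sub-plots of a split-plot design
print(shortest_common_supersequence("ACBD", "ABDC"))
```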

  18. Benchmarks: The Development of a New Approach to Student Evaluation.

    Science.gov (United States)

    Larter, Sylvia

    The Toronto Board of Education Benchmarks are libraries of reference materials that demonstrate student achievement at various levels. Each library contains video benchmarks, print benchmarks, a staff handbook, and summary and introductory documents. This book is about the development and the history of the benchmark program. It has taken over 3…

  19. Aerodynamic benchmarking of the DeepWind design

    DEFF Research Database (Denmark)

    Bedon, Gabriele; Schmidt Paulsen, Uwe; Aagaard Madsen, Helge

    The aerodynamic benchmarking for the DeepWind rotor is conducted by comparing different rotor geometries and solutions, keeping the comparison as fair as possible. The objective of the benchmarking is to find the most suitable configuration in order to maximize the power production and minimize...... the blade solicitation and the cost of energy. Different parameters are considered for the benchmarking study. The DeepWind blade is characterized by a shape similar to the Troposkien geometry but asymmetric between the top and bottom parts. The blade shape is considered as a fixed parameter...

  20. Integral benchmarks with reference to thorium fuel cycle

    International Nuclear Information System (INIS)

    Ganesan, S.

    2003-01-01

    This is a PowerPoint presentation about the Indian participation in the CRP 'Evaluated Data for the Thorium-Uranium fuel cycle'. The plans and scope of the Indian participation are to provide selected integral experimental benchmarks for nuclear data validation, including Indian thorium burnup benchmarks, post-irradiation examination studies, comparison of basic evaluated data files, and analysis of selected benchmarks for the Th-U fuel cycle

  1. 7 CFR 51.2731 - U.S. Spanish Splits.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false U.S. Spanish Splits. 51.2731 Section 51.2731... STANDARDS) United States Standards for Grades of Shelled Spanish Type Peanuts Grades § 51.2731 U.S. Spanish Splits. “U.S. Spanish Splits” consists of shelled Spanish type peanut kernels which are split or broken...

  2. Synthesis of the turbulent mixing in a rod bundle with vaned spacer grids based on the OECD-KAERI CFD benchmark exercise

    International Nuclear Information System (INIS)

    Lee, Jae Ryong; Kim, Jungwoo; Song, Chul-Hwa

    2014-01-01

    Highlights: • OECD/KAERI international CFD benchmark exercise was operated by KAERI. • The purpose is to validate relevant CFD codes based on the MATiS-H experiments. • Blind calculation results were synthesized in terms of mean velocity and RMS. • Quality of control volume rather than the number of it was emphasized. • Major findings were followed OECD/NEA CSNI report. - Abstract: The second international CFD benchmark exercise on turbulent mixing in a rod bundle has been launched by OECD/NEA, to validate relevant CFD (Computational Fluid Dynamics) codes and develop problem-specific Best Practice Guidelines (BPG) based on the KAERI (Korea Atomic Energy Research Institute) MATiS-H experiments on the turbulent mixing in a 5 × 5 rod array having two different types of vaned spacer grids: split and swirl types. For this 2nd international benchmark exercise (IBE-2), the MATiS-H testing provided a unique set of experimental data such as axial and lateral velocity components, turbulent intensity, and vorticity information. Blind CFD calculation results were submitted by twenty-five (25) participants to KAERI, who is the host organization of the IBE-2, and then analyzed and synthesized by comparing them with the MATiS-H data. Based on the synthesis of the results from both the experiments and blind CFD calculations for the IBE-2, and also by comparing with the IBE-1 benchmark exercise on the mixing in a T-junction, useful information for simulating this kind of complicated physical problem in a rod bundle was obtained. And some additional Best Practice Guidelines (BPG) are newly proposed. A summary of the synthesis results obtained in the IBE-2 is presented in this paper

  3. Synthesis of the turbulent mixing in a rod bundle with vaned spacer grids based on the OECD-KAERI CFD benchmark exercise

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Ryong; Kim, Jungwoo; Song, Chul-Hwa, E-mail: chsong@kaeri.re.kr

    2014-11-15

    Highlights: • OECD/KAERI international CFD benchmark exercise was operated by KAERI. • The purpose is to validate relevant CFD codes based on the MATiS-H experiments. • Blind calculation results were synthesized in terms of mean velocity and RMS. • Quality of control volume rather than the number of it was emphasized. • Major findings were followed OECD/NEA CSNI report. - Abstract: The second international CFD benchmark exercise on turbulent mixing in a rod bundle has been launched by OECD/NEA, to validate relevant CFD (Computational Fluid Dynamics) codes and develop problem-specific Best Practice Guidelines (BPG) based on the KAERI (Korea Atomic Energy Research Institute) MATiS-H experiments on the turbulent mixing in a 5 × 5 rod array having two different types of vaned spacer grids: split and swirl types. For this 2nd international benchmark exercise (IBE-2), the MATiS-H testing provided a unique set of experimental data such as axial and lateral velocity components, turbulent intensity, and vorticity information. Blind CFD calculation results were submitted by twenty-five (25) participants to KAERI, who is the host organization of the IBE-2, and then analyzed and synthesized by comparing them with the MATiS-H data. Based on the synthesis of the results from both the experiments and blind CFD calculations for the IBE-2, and also by comparing with the IBE-1 benchmark exercise on the mixing in a T-junction, useful information for simulating this kind of complicated physical problem in a rod bundle was obtained. And some additional Best Practice Guidelines (BPG) are newly proposed. A summary of the synthesis results obtained in the IBE-2 is presented in this paper.

  4. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    International Nuclear Information System (INIS)

    Bess, John D.; Montierth, Leland; Köberl, Oliver

    2014-01-01

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the 235 U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of k eff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of k eff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments

  5. Split scheduling with uniform setup times.

    NARCIS (Netherlands)

    F. Schalekamp; R.A. Sitters (René); S.L. van der Ster; L. Stougie (Leen); V. Verdugo; A. van Zuylen

    2015-01-01

    We study a scheduling problem in which jobs may be split into parts, where the parts of a split job may be processed simultaneously on more than one machine. Each part of a job requires a setup time, however, on the machine where the job part is processed. During setup, a

  6. Numerical simulation and experiment on multilayer stagger-split die.

    Science.gov (United States)

    Liu, Zhiwei; Li, Mingzhe; Han, Qigang; Yang, Yunfei; Wang, Bolong; Sui, Zhou

    2013-05-01

    A novel ultra-high pressure device, multilayer stagger-split die, has been constructed based on the principle of "dividing dies before cracking." Multilayer stagger-split die includes an encircling ring and multilayer assemblages, and the mating surfaces of the multilayer assemblages are mutually staggered between adjacent layers. In this paper, we investigated the stressing features of this structure through finite element techniques, and the results were compared with those of the belt type die and single split die. The contrast experiments were also carried out to test the bearing pressure performance of multilayer stagger-split die. It is concluded that the stress distributions are reasonable and the materials are utilized effectively for multilayer stagger-split die. And experiments indicate that the multilayer stagger-split die can bear the greatest pressure.

  7. Tensor products of higher almost split sequences

    OpenAIRE

    Pasquali, Andrea

    2015-01-01

    We investigate how the higher almost split sequences over a tensor product of algebras are related to those over each factor. Herschend and Iyama gave a precise criterion for when the tensor product of an $n$-representation finite algebra and an $m$-representation finite algebra is $(n+m)$-representation finite. In this case we give a complete description of the higher almost split sequences over the tensor product by expressing every higher almost split sequence as the mapping cone of a suit...

  8. The extent of benchmarking in the South African financial sector

    Directory of Open Access Journals (Sweden)

    W Vermeulen

    2014-06-01

    Benchmarking is the process of identifying, understanding and adapting outstanding practices from within the organisation or from other businesses, to help improve performance. The importance of benchmarking as an enabler of business excellence has necessitated an in-depth investigation into the current state of benchmarking in South Africa. This research project highlights the fact that respondents realise the importance of benchmarking, but that various problems hinder the effective implementation of benchmarking. Based on the research findings, recommendations for achieving success are suggested.

  9. Sunspot splitting triggering an eruptive flare

    Science.gov (United States)

    Louis, Rohan E.; Puschmann, Klaus G.; Kliem, Bernhard; Balthasar, Horst; Denker, Carsten

    2014-02-01

    Aims: We investigate how the splitting of the leading sunspot and associated flux emergence and cancellation in active region NOAA 11515 caused an eruptive M5.6 flare on 2012 July 2. Methods: Continuum intensity, line-of-sight magnetogram, and dopplergram data of the Helioseismic and Magnetic Imager were employed to analyse the photospheric evolution. Filtergrams in Hα and He I 10830 Å of the Chromospheric Telescope at the Observatorio del Teide, Tenerife, track the evolution of the flare. The corresponding coronal conditions were derived from 171 Å and 304 Å images of the Atmospheric Imaging Assembly. Local correlation tracking was utilized to determine shear flows. Results: Emerging flux formed a neutral line ahead of the leading sunspot and new satellite spots. The sunspot splitting caused a long-lasting flow towards this neutral line, where a filament formed. Further flux emergence, partly of mixed polarity, as well as episodes of flux cancellation occurred repeatedly at the neutral line. Following a nearby C-class precursor flare with signs of interaction with the filament, the filament erupted nearly simultaneously with the onset of the M5.6 flare and evolved into a coronal mass ejection. The sunspot stretched without forming a light bridge, splitting unusually fast (within about a day, complete ≈6 h after the eruption) in two nearly equal parts. The front part separated strongly from the active region to approach the neighbouring active region where all its coronal magnetic connections were rooted. It also rotated rapidly (by 4.9° h-1) and caused significant shear flows at its edge. Conclusions: The eruption resulted from a complex sequence of processes in the (sub-)photosphere and corona. The persistent flows towards the neutral line likely caused the formation of a flux rope that held the filament. These flows, their associated flux cancellation, the emerging flux, and the precursor flare all contributed to the destabilization of the flux rope. We

  10. Clean Energy Manufacturing Analysis Center Benchmark Report: Framework and Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Sandor, Debra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keyser, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mann, Margaret [National Renewable Energy Lab. (NREL), Golden, CO (United States); Engel-Cox, Jill [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-05-23

    This report documents the CEMAC methodologies for developing and reporting annual global clean energy manufacturing benchmarks. The report reviews previously published manufacturing benchmark reports and foundational data, establishes a framework for benchmarking clean energy technologies, describes the CEMAC benchmark analysis methodologies, and describes the application of the methodologies to the manufacturing of four specific clean energy technologies.

  11. The level 1 and 2 specification for parallel benchmark and a benchmark test of scalar-parallel computer SP2 based on the specifications

    International Nuclear Information System (INIS)

    Orii, Shigeo

    1998-06-01

    A benchmark specification for the performance evaluation of parallel computers for numerical analysis is proposed. The Level 1 benchmark, a conventional benchmark based on processing time, measures the performance of a computer running a code. The Level 2 benchmark proposed in this report is intended to explain the reasons for the measured performance. As an example, the scalar-parallel computer SP2 is evaluated with this benchmark specification for a molecular dynamics code. As a result, the main factors limiting parallel performance are found to be the maximum bandwidth and the start-up time of communication between nodes. In particular, the start-up time is proportional not only to the number of processors but also to the number of particles. (author)
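
    The Level 2 diagnosis reported here rests on the usual latency/bandwidth model of a message, t = t_startup + size / bandwidth, together with the observation that the start-up term grows with both the processor count and the particle count. The sketch below spells that model out; the coefficients and machine parameters are assumptions for illustration, not measured SP2 values.

```python
# Toy communication-cost model in the spirit of the Level 2 benchmark analysis:
# t_message = startup + message_size / bandwidth, with a start-up term that is
# assumed (for illustration only) to grow linearly with processors and particles.

def message_time(bytes_sent: float, bandwidth: float, startup: float) -> float:
    """Classic latency/bandwidth model for a single point-to-point message."""
    return startup + bytes_sent / bandwidth

def assumed_startup(n_procs: int, n_particles: int,
                    t0: float = 40e-6, a: float = 1e-6, b: float = 1e-9) -> float:
    """Hypothetical start-up time that increases with processors and particles."""
    return t0 + a * n_procs + b * n_particles

if __name__ == "__main__":
    for procs in (8, 32, 128):
        t = message_time(bytes_sent=64_000,
                         bandwidth=40e6,          # bytes/s, assumed value
                         startup=assumed_startup(procs, n_particles=100_000))
        print(f"{procs:4d} processors: {t * 1e3:.3f} ms per message")
```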

  12. Benchmarking to improve the quality of cystic fibrosis care.

    Science.gov (United States)

    Schechter, Michael S

    2012-11-01

    Benchmarking involves the ascertainment of healthcare programs with most favorable outcomes as a means to identify and spread effective strategies for delivery of care. The recent interest in the development of patient registries for patients with cystic fibrosis (CF) has been fueled in part by an interest in using them to facilitate benchmarking. This review summarizes reports of how benchmarking has been operationalized in attempts to improve CF care. Although certain goals of benchmarking can be accomplished with an exclusive focus on registry data analysis, benchmarking programs in Germany and the United States have supplemented these data analyses with exploratory interactions and discussions to better understand successful approaches to care and encourage their spread throughout the care network. Benchmarking allows the discovery and facilitates the spread of effective approaches to care. It provides a pragmatic alternative to traditional research methods such as randomized controlled trials, providing insights into methods that optimize delivery of care and allowing judgments about the relative effectiveness of different therapeutic approaches.

  13. Split of the superconducting transition and magnetism in UPt3

    International Nuclear Information System (INIS)

    Marikhin, V.G.

    1992-01-01

    A possible reason for the splitting of the superconducting phase transition in UPt 3 is discussed. Strong coupling of the conduction electrons to the magnetic moments of the uranium atoms may be such a cause. The assertion is based on a simple model described by a Ginzburg-Landau functional with a two-component order parameter φ. Without the coupling, the Ginzburg-Landau functional has the full symmetry D 6h of the hexagonal crystal. Due to the presence of the uranium magnetic moments M, the symmetry is broken locally by the coupling term γ|Mφ| 2 in the Ginzburg-Landau functional. Averaging over the configurations of the vector M, with a finite correlation radius a, is performed. Since a is small compared with the coherence length ξ, the averaged functional recovers the D 6h symmetry; this means that in a real crystal the hexagonal symmetry is not broken at scales larger than ξ. Within this theory, expressions for the specific heat jumps and an equation relating the upper critical field H c2 and the phase transition splitting ΔT c to the pressure variation are obtained. The difficulties connected with the small experimental magnitude of the uranium magnetic moments are discussed.

  14. Conditional Toxin Splicing Using a Split Intein System.

    Science.gov (United States)

    Alford, Spencer C; O'Sullivan, Connor; Howard, Perry L

    2017-01-01

    Protein toxin splicing mediated by split inteins can be used as a strategy for conditional cell ablation. The approach requires artificial fragmentation of a potent protein toxin and tethering each toxin fragment to a split intein fragment. The toxin-intein fragments are, in turn, fused to dimerization domains, such that addition of a dimerizing agent reconstitutes the split intein. These chimeric toxin-intein fusions remain nontoxic until the dimerizer is added, resulting in activation of intein splicing and ligation of toxin fragments to form an active toxin. Considerations for the engineering and implementation of conditional toxin splicing (CTS) systems include: choice of toxin split site, split site (extein) chemistry, and temperature sensitivity. The following method outlines design criteria and implementation notes for CTS using a previously engineered system for splicing a toxin called sarcin, as well as for developing alternative CTS systems.

  15. Split Scheduling with Uniform Setup Times

    NARCIS (Netherlands)

    Schalekamp, F.; Sitters, R.A.; van der Ster, S.L.; Stougie, L.; Verdugo, V.; van Zuylen, A.

    2015-01-01

    We study a scheduling problem in which jobs may be split into parts, where the parts of a split job may be processed simultaneously on more than one machine. Each part of a job requires a setup time, however, on the machine where the job part is processed. During setup, a machine cannot process or
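
    To make the trade-off concrete: splitting a job over k machines can shorten its completion time, but every part pays its own setup on its machine, so splitting also inflates the total machine time consumed. The numbers in the sketch below are arbitrary examples, not data or algorithms from the paper.

```python
# Illustration of the trade-off behind job splitting: a job of length p split
# evenly over k machines finishes at time s + p/k, but consumes k*s + p
# machine-time in total because every part pays its own setup s.
# Values are arbitrary examples, not data from the paper.

def split_job(p: float, s: float, k: int) -> tuple[float, float]:
    """Return (completion time, total machine time) for an even k-way split."""
    return s + p / k, k * s + p

if __name__ == "__main__":
    p, s = 12.0, 2.0   # processing time and per-part setup time (example values)
    for k in range(1, 7):
        finish, load = split_job(p, s, k)
        print(f"{k} part(s): finishes at {finish:5.2f}, total machine time {load:5.2f}")
```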

  16. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques...

  17. Hematite photoanode co-functionalized with self-assembling melanin and C-phycocyanin for solar water splitting at neutral pH

    Energy Technology Data Exchange (ETDEWEB)

    Schrantz, Krisztina; Wyss, Pradeep P.; Ihssen, Julian; Toth, Rita; Bora, Debajeet K.; Vitol, Elina A.; Rozhkova, Elena A.; Pieles, Uwe; Thöny-Meyer, Linda; Braun, Artur

    2017-04-01

    Nature provides functional units which can be integrated in inorganic solar cell materials, such as light-harvesting antenna proteins and photosynthetic molecular machineries, and thus help in advancing artificial photosynthesis. Their integration needs to address mechanical adhesion, light capture, charge transfer and corrosion resistance. We showed recently how enzymatic polymerization of melanin can immobilize the cyanobacterial light harvesting protein C-phycocyanin on the surface of hematite, a prospective metal oxide photoanode for solar hydrogen production by water splitting in photoelectrochemical cells. After the optimization of the functionalization procedure, in this work we show reproducible hydrogen production, measured parallel to the photocurrent on this bio-hybrid electrode in benign neutral pH phosphate. Over 90% increase compared to the photocurrent of the pristine hematite could be achieved. The hydrogen evolution was monitored during the photoelectrochemical measurement in an improved photoelectrochemical cell. The C-phycocyanin-melanin coating on the hematite was shown to exhibit a comb-like fractal pattern. Raman spectroscopy supported the presence of the protein on the hematite anode surface. The stability of the protein coating is demonstrated during the 2 h GC measurement and the 24 h operando current density measurement.

  18. Benchmarking multi-dimensional large strain consolidation analyses

    International Nuclear Information System (INIS)

    Priestley, D.; Fredlund, M.D.; Van Zyl, D.

    2010-01-01

    Analyzing the consolidation of tailings slurries and dredged fills requires a more extensive formulation than is used for common (small strain) consolidation problems. Large strain consolidation theories have traditionally been limited to 1-D formulations. SoilVision Systems has developed the capacity to analyze large strain consolidation problems in 2 and 3-D. The benchmarking of such formulations is not a trivial task. This paper presents several examples of modeling large strain consolidation in the beta versions of the new software. These examples were taken from the literature and were used to benchmark the large strain formulation used by the new software. The benchmarks reported here are: a comparison to the consolidation software application CONDES0, Townsend's Scenario B and a multi-dimensional analysis of long-term column tests performed on oil sands tailings. All three of these benchmarks were attained using the SVOffice suite. (author)

  19. JENDL-4.0 benchmarking for fission reactor applications

    International Nuclear Information System (INIS)

    Chiba, Go; Okumura, Keisuke; Sugino, Kazuteru; Nagaya, Yasunobu; Yokoyama, Kenji; Kugo, Teruhiko; Ishikawa, Makoto; Okajima, Shigeaki

    2011-01-01

    Benchmark testing for the newly developed Japanese evaluated nuclear data library JENDL-4.0 is carried out by using a huge amount of integral data. Benchmark calculations are performed with a continuous-energy Monte Carlo code and with the deterministic procedure, which has been developed for fast reactor analyses in Japan. Through the present benchmark testing using a wide range of benchmark data, significant improvement in the performance of JENDL-4.0 for fission reactor applications is clearly demonstrated in comparison with the former library JENDL-3.3. Much more accurate and reliable prediction for neutronic parameters for both thermal and fast reactors becomes possible by using the library JENDL-4.0. (author)

  20. NNLO time-like splitting functions in QCD

    International Nuclear Information System (INIS)

    Moch, S.; Vogt, A.

    2008-07-01

    We review the status of the calculation of the time-like splitting functions for the evolution of fragmentation functions to the next-to-next-to-leading order in perturbative QCD. By employing relations between space-like and time-like deep-inelastic processes, all quark-quark and the gluon-gluon time-like splitting functions have been obtained to three loops. The corresponding quantities for the quark-gluon and gluon-quark splitting at this order are presently still unknown except for their second Mellin moments. (orig.)
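
    For readers who want the equation behind this, the time-like splitting functions govern the scale evolution of the fragmentation functions through the time-like DGLAP equation, written here schematically in standard textbook notation (not a formula specific to this report):

```latex
\frac{\partial\, D_i^h(x,\mu^2)}{\partial \ln\mu^2}
  = \sum_j \int_x^1 \frac{dz}{z}\,
    P_{ji}^{\,T}\!\bigl(z,\alpha_s(\mu^2)\bigr)\,
    D_j^h\!\Bigl(\frac{x}{z},\mu^2\Bigr),
\qquad
P^{\,T} = a_s\,P^{(0)T} + a_s^2\,P^{(1)T} + a_s^3\,P^{(2)T} + \dots,
\quad a_s \equiv \frac{\alpha_s}{4\pi}.
```

    The next-to-next-to-leading order discussed in the abstract corresponds to the three-loop term P^(2)T of this expansion.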

  1. Reviews and syntheses: Field data to benchmark the carbon cycle models for tropical forests

    Science.gov (United States)

    Clark, Deborah A.; Asao, Shinichi; Fisher, Rosie; Reed, Sasha; Reich, Peter B.; Ryan, Michael G.; Wood, Tana E.; Yang, Xiaojuan

    2017-10-01

    For more accurate projections of both the global carbon (C) cycle and the changing climate, a critical current need is to improve the representation of tropical forests in Earth system models. Tropical forests exchange more C, energy, and water with the atmosphere than any other class of land ecosystems. Further, tropical-forest C cycling is likely responding to the rapid global warming, intensifying water stress, and increasing atmospheric CO2 levels. Projections of the future C balance of the tropics vary widely among global models. A current effort of the modeling community, the ILAMB (International Land Model Benchmarking) project, is to compile robust observations that can be used to improve the accuracy and realism of the land models for all major biomes. Our goal with this paper is to identify field observations of tropical-forest ecosystem C stocks and fluxes, and of their long-term trends and climatic and CO2 sensitivities, that can serve this effort. We propose criteria for reference-level field data from this biome and present a set of documented examples from old-growth lowland tropical forests. We offer these as a starting point towards the goal of a regularly updated consensus set of benchmark field observations of C cycling in tropical forests.

  2. Stylized whole-core benchmark of the Integral Inherently Safe Light Water Reactor (I2S-LWR) concept

    International Nuclear Information System (INIS)

    Hon, Ryan; Kooreman, Gabriel; Rahnema, Farzad; Petrovic, Bojan

    2017-01-01

    Highlights: • A stylized benchmark specification of the I2S-LWR core. • A library of cross sections were generated in both 8 and 47 groups. • Monte Carlo solutions generated for the 8 group library using MCNP5. • Cross sections and pin fission densities provided in journal’s repository. - Abstract: The Integral, Inherently Safe Light Water Reactor (I 2 S-LWR) is a pressurized water reactor (PWR) concept under development by a multi-institutional team led by Georgia Tech. The core is similar in size to small 2-loop PWRs while having the power level of current large reactors (∼1000 MWe) but using uranium silicide fuel and advanced stainless steel cladding. A stylized benchmark specification of the I 2 S-LWR core has been developed in order to test whole-core neutronics codes and methods. For simplification the core was split into 57 distinct material regions for cross section generation. Cross sections were generated using the lattice physics code HELIOS version 1.10 in both 8 and 47 groups. Monte Carlo solutions, including eigenvalue and pin fission densities, were generated for the 8 group library using MCNP5. Due to space limitations in this paper, the full cross section library and normalized pin fission density results are provided in the journal’s electronic repository.

  3. Benchmark problems for numerical implementations of phase field models

    International Nuclear Information System (INIS)

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; Warren, J.; Heinonen, O. G.

    2016-01-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
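
    To give a flavour of what the spinodal-decomposition benchmark exercises, a minimal explicit Cahn-Hilliard update on a periodic grid is sketched below. This is a generic illustration with assumed parameters and a simple double-well free energy, not the CHiMaD/NIST benchmark specification itself.

```python
import numpy as np

# Minimal explicit Cahn-Hilliard update on a periodic 2-D grid, as a generic
# illustration of spinodal decomposition; the parameters are assumed and are
# not the CHiMaD/NIST benchmark values.

def laplacian(f: np.ndarray, dx: float) -> np.ndarray:
    """Periodic 5-point Laplacian."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

def cahn_hilliard_step(c, dx=1.0, dt=0.01, M=1.0, kappa=1.0, W=1.0):
    """One explicit Euler step of dc/dt = M * lap( df/dc - kappa * lap(c) )."""
    # df/dc for the double-well free energy f(c) = W * c^2 * (1 - c)^2
    mu = 2.0 * W * c * (1.0 - c) * (1.0 - 2.0 * c) - kappa * laplacian(c, dx)
    return c + dt * M * laplacian(mu, dx)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    c = 0.5 + 0.05 * (rng.random((64, 64)) - 0.5)   # near-critical initial noise
    for _ in range(200):
        c = cahn_hilliard_step(c)
    print(f"composition range after 200 steps: {c.min():.3f} .. {c.max():.3f}")
```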

  4. Recent Progress in Energy-Driven Water Splitting.

    Science.gov (United States)

    Tee, Si Yin; Win, Khin Yin; Teo, Wee Siang; Koh, Leng-Duei; Liu, Shuhua; Teng, Choon Peng; Han, Ming-Yong

    2017-05-01

    Hydrogen is readily obtained from renewable and non-renewable resources via water splitting by using thermal, electrical, photonic and biochemical energy. The majority of hydrogen production is currently generated from thermal energy through steam reforming/gasification of fossil fuels. As the commonly used non-renewable resources will be depleted in the long run, there is great demand to utilize renewable energy resources for hydrogen production. Most of the renewable resources may be used to produce electricity for driving water splitting, while challenges remain to improve cost-effectiveness. Since solar energy is the most abundant energy resource, its direct conversion to hydrogen is considered the most sustainable energy production method, causing no pollution to the environment. Overall, this review briefly summarizes thermolytic, electrolytic, photolytic and biolytic water splitting. It highlights photonic and electrical driven water splitting together with photovoltaic-integrated solar-driven water electrolysis.

  5. A 3D stylized half-core CANDU benchmark problem

    International Nuclear Information System (INIS)

    Pounders, Justin M.; Rahnema, Farzad; Serghiuta, Dumitru; Tholammakkil, John

    2011-01-01

    A 3D stylized half-core Canadian deuterium uranium (CANDU) reactor benchmark problem is presented. The benchmark problem is comprised of a heterogeneous lattice of 37-element natural uranium fuel bundles, heavy water moderated, heavy water cooled, with adjuster rods included as reactivity control devices. Furthermore, a 2-group macroscopic cross section library has been developed for the problem to increase the utility of this benchmark for full-core deterministic transport methods development. Monte Carlo results are presented for the benchmark problem in cooled, checkerboard void, and full coolant void configurations.

  6. Mini-Split Heat Pumps Multifamily Retrofit Feasibility Study

    Energy Technology Data Exchange (ETDEWEB)

    Dentz, Jordan [ARIES Collaborative, New York, NY (United States); Podorson, David [ARIES Collaborative, New York, NY (United States); Varshney, Kapil [ARIES Collaborative, New York, NY (United States)

    2014-05-01

    Mini-split heat pumps can provide space heating and cooling in many climates and are relatively affordable. These and other features make them potentially suitable for retrofitting into multifamily buildings in cold climates to replace electric resistance heating or other outmoded heating systems. This report investigates the suitability of mini-split heat pumps for multifamily retrofits. Various technical and regulatory barriers are discussed and modeling was performed to compare long-term costs of substituting mini-splits for a variety of other heating and cooling options. A number of utility programs have retrofit mini-splits in both single family and multifamily residences. Two such multifamily programs are discussed in detail.

  7. Communication: Tunnelling splitting in the phosphine molecule

    Science.gov (United States)

    Sousa-Silva, Clara; Tennyson, Jonathan; Yurchenko, Sergey N.

    2016-09-01

    Splitting due to tunnelling via the potential energy barrier has played a significant role in the study of molecular spectra since the early days of spectroscopy. The observation of the ammonia doublet led to attempts to find a phosphine analogue, but these have so far failed due to its considerably higher barrier. Full dimensional, variational nuclear motion calculations are used to predict splittings as a function of excitation energy. Simulated spectra suggest that such splittings should be observable in the near infrared via overtones of the ν2 bending mode, starting with 4ν2.

  8. International handbook of evaluated criticality safety benchmark experiments

    International Nuclear Information System (INIS)

    2010-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in October of 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) became an official activity of the Organization for Economic Cooperation and Development - Nuclear Energy Agency (OECD-NEA) in 1995. This handbook contains criticality safety benchmark specifications that have been derived from experiments performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculational techniques used to establish minimum subcritical margins for operations with fissile material and to determine criticality alarm requirement and placement. Many of the specifications are also useful for nuclear data testing. Example calculations are presented; however, these calculations do not constitute a validation of the codes or cross section data. The evaluated criticality safety benchmark data are given in nine volumes. These volumes span over 55,000 pages and contain 516 evaluations with benchmark specifications for 4,405 critical, near critical, or subcritical configurations, 24 criticality alarm placement / shielding configurations with multiple dose points for each, and 200 configurations that have been categorized as fundamental physics measurements that are relevant to criticality safety applications. Experiments that are found unacceptable for use as criticality safety benchmark experiments are discussed in these evaluations; however, benchmark specifications are not derived for such experiments (in some cases models are provided in an appendix). Approximately 770 experimental configurations are categorized as unacceptable for use as criticality safety benchmark experiments. Additional evaluations are in progress and will be

  9. Repeated Results Analysis for Middleware Regression Benchmarking

    Czech Academy of Sciences Publication Activity Database

    Bulej, Lubomír; Kalibera, T.; Tůma, P.

    2005-01-01

    Roč. 60, - (2005), s. 345-358 ISSN 0166-5316 R&D Projects: GA ČR GA102/03/0672 Institutional research plan: CEZ:AV0Z10300504 Keywords : middleware benchmarking * regression benchmarking * regression testing Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.756, year: 2005

  10. On the additive splitting procedures and their computer realization

    DEFF Research Database (Denmark)

    Farago, I.; Thomsen, Per Grove; Zlatev, Z.

    2008-01-01

    Two additive splitting procedures are defined and studied in this paper. It is shown that these splitting procedures have good stability properties. Some other splitting procedures, which are traditionally used in mathematical models used in many scientific and engineering fields, are sketched. All...
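
    As a concrete, hedged illustration of the generic idea (not of the specific procedures defined and analysed in the paper): for du/dt = (A + B)u, a first-order additive splitting advances the two sub-problems from the same state over one step and adds the increments.

```python
import numpy as np
from scipy.linalg import expm

# First-order additive splitting for du/dt = (A + B) u: advance the two
# sub-problems from the same state over one step and add the increments.
# This is a generic illustration of the idea, not the specific procedures
# defined and analysed in the paper.

def additive_split_step(u: np.ndarray, A: np.ndarray, B: np.ndarray, tau: float) -> np.ndarray:
    uA = expm(tau * A) @ u      # sub-step driven by A alone
    uB = expm(tau * B) @ u      # sub-step driven by B alone
    return uA + uB - u          # additive combination of the two increments

if __name__ == "__main__":
    A = np.array([[-1.0, 0.0], [0.0, -2.0]])    # e.g. a damping term
    B = np.array([[0.0, 1.0], [-1.0, 0.0]])     # e.g. a rotation term
    u0 = np.array([1.0, 0.0])
    tau, steps = 0.01, 100
    u = u0.copy()
    for _ in range(steps):
        u = additive_split_step(u, A, B, tau)
    print("split solution :", u)
    print("exact solution :", expm(steps * tau * (A + B)) @ u0)
```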

  11. Splitting methods in communication, imaging, science, and engineering

    CERN Document Server

    Osher, Stanley; Yin, Wotao

    2016-01-01

    This book is about computational methods based on operator splitting. It consists of twenty-three chapters written by recognized splitting method contributors and practitioners, and covers a vast spectrum of topics and application areas, including computational mechanics, computational physics, image processing, wireless communication, nonlinear optics, and finance. Therefore, the book presents very versatile aspects of splitting methods and their applications, motivating the cross-fertilization of ideas. .

  12. ZZ WPPR, Pu Recycling Benchmark Results

    International Nuclear Information System (INIS)

    Lutz, D.; Mattes, M.; Delpech, Marc; Juanola, Marc

    2002-01-01

    Description of program or function: The NEA NSC Working Party on Physics of Plutonium Recycling has commissioned a series of benchmarks covering: - Plutonium recycling in pressurized-water reactors; - Void reactivity effect in pressurized-water reactors; - Fast Plutonium-burner reactors: beginning of life; - Plutonium recycling in fast reactors; - Multiple recycling in advanced pressurized-water reactors. The results have been published (see references). ZZ-WPPR-1-A/B contains graphs and tables relative to the PWR Mox pin cell benchmark, representing typical fuel for plutonium recycling, one corresponding to a first cycle, the second for a fifth cycle. These computer readable files contain the complete set of results, while the printed report contains only a subset. ZZ-WPPR-2-CYC1 are the results from cycle 1 of the multiple recycling benchmarks

  13. Interior beam searchlight semi-analytical benchmark

    International Nuclear Information System (INIS)

    Ganapol, Barry D.; Kornreich, Drew E.

    2008-01-01

    Multidimensional semi-analytical benchmarks to provide highly accurate standards to assess routine numerical particle transport algorithms are few and far between. Because of the well-established 1D theory for the analytical solution of the transport equation, it is sometimes possible to 'bootstrap' a 1D solution to generate a more comprehensive solution representation. Here, we consider the searchlight problem (SLP) as a multidimensional benchmark. A variation of the usual SLP is the interior beam SLP (IBSLP) where a beam source lies beneath the surface of a half space and emits directly towards the free surface. We consider the establishment of a new semi-analytical benchmark based on a new FN formulation. This problem is important in radiative transfer experimental analysis to determine cloud absorption and scattering properties. (authors)

  14. Benchmark referencing of neutron dosimetry measurements

    International Nuclear Information System (INIS)

    Eisenhauer, C.M.; Grundl, J.A.; Gilliam, D.M.; McGarry, E.D.; Spiegel, V.

    1980-01-01

    The concept of benchmark referencing involves interpretation of dosimetry measurements in applied neutron fields in terms of similar measurements in benchmark fields whose neutron spectra and intensity are well known. The main advantage of benchmark referencing is that it minimizes or eliminates many types of experimental uncertainties such as those associated with absolute detection efficiencies and cross sections. In this paper we consider the cavity external to the pressure vessel of a power reactor as an example of an applied field. The pressure vessel cavity is an accessible location for exploratory dosimetry measurements aimed at understanding embrittlement of pressure vessel steel. Comparisons with calculated predictions of neutron fluence and spectra in the cavity provide a valuable check of the computational methods used to estimate pressure vessel safety margins for pressure vessel lifetimes

  15. Atomic Energy Research benchmark activity

    International Nuclear Information System (INIS)

    Makai, M.

    1998-01-01

    The test problems utilized in the validation and verification of computer programs in Atomic Energy Research are collected into a single set. This is the first step towards issuing a volume in which tests for VVER are collected, along with reference solutions and a number of solutions. The benchmarks do not include the ZR-6 experiments because they have been published, along with a number of comparisons, in the Final reports of TIC. The present collection focuses on operational and mathematical benchmarks which cover almost the entire range of reactor calculations. (Author)

  16. Irradiation-induced amorphization in split-dislocation cores

    International Nuclear Information System (INIS)

    Ovid'ko, I.A.; Rejzis, A.B.

    1999-01-01

    A model describing the special splitting of lattice and grain-boundary dislocations as one of the micromechanisms of solid-phase amorphization in irradiated crystals is proposed. The energy characteristics of this special dislocation splitting process are calculated.

  17. Calculation of spin-spin zero-field splitting within periodic boundary conditions: Towards all-electron accuracy

    Science.gov (United States)

    Biktagirov, Timur; Schmidt, Wolf Gero; Gerstmann, Uwe

    2018-03-01

    For high-spin centers, one of the key spectroscopic fingerprints is the zero-field splitting (ZFS) addressable by electron paramagnetic resonance. In this paper, an implementation of the spin-spin contribution to the ZFS tensor within the projector augmented-wave (PAW) formalism is reported. We use a single-determinant approach proposed by M. J. Rayson and P. R. Briddon [Phys. Rev. B 77, 035119 (2008), 10.1103/PhysRevB.77.035119], and complete it by adding a PAW reconstruction term which has not been taken into account before. We benchmark the PAW approach against a well-established all-electron method for a series of diatomic radicals and defects in diamond and cubic silicon carbide. While for some of the defect centers the PAW reconstruction is found to be almost negligible, in agreement with the common assumption, we show that in general it significantly improves the calculated ZFS towards the all-electron results.
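
    For context, in standard EPR notation (not notation specific to this paper) the ZFS tensor D enters the effective spin Hamiltonian as

```latex
\hat{H}_{\mathrm{ZFS}} = \hat{\mathbf{S}}\cdot\mathbf{D}\cdot\hat{\mathbf{S}}
  = D\left[\hat{S}_z^2 - \tfrac{1}{3}S(S+1)\right] + E\left(\hat{S}_x^2 - \hat{S}_y^2\right),
\qquad
D = \tfrac{3}{2}\,D_{zz}, \quad E = \tfrac{1}{2}\left(D_{xx} - D_{yy}\right),
```

    where the spin-spin (dipolar) part of D is the contribution whose PAW implementation, including the reconstruction term, is the subject of the paper.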

  18. Infrared rovibrational spectroscopy of OH–C2H2 in 4He nanodroplets: Parity splitting due to partially quenched electronic angular momentum

    International Nuclear Information System (INIS)

    Douberly, Gary E.; Liang, Tao; Raston, Paul L.; Marshall, Mark D.

    2015-01-01

    The T-shaped OH–C 2 H 2 complex is formed in helium droplets via the sequential pick-up and solvation of the monomer fragments. Rovibrational spectra of the a-type OH stretch and b-type antisymmetric CH stretch vibrations contain resolved parity splitting that reveals the extent to which electronic angular momentum of the OH moiety is quenched upon complex formation. The energy difference between the spin-orbit coupled 2 B 1 (A″) and 2 B 2 (A′) electronic states is determined spectroscopically to be 216 cm −1 in helium droplets, which is 13 cm −1 larger than in the gas phase [Marshall et al., J. Chem. Phys. 121, 5845 (2004)]. The effect of the helium is rationalized as a difference in the solvation free energies of the two electronic states. This interpretation is motivated by the separation between the Q(3/2) and R(3/2) transitions in the infrared spectrum of the helium-solvated 2 Π 3/2 OH radical. Despite the expectation of a reduced rotational constant, the observed Q(3/2) to R(3/2) splitting is larger than in the gas phase by ≈0.3 cm −1 . This observation can be accounted for quantitatively by assuming the energetic separation between 2 Π 3/2 and 2 Π 1/2 manifolds is increased by ≈40 cm −1 upon helium solvation

  19. LDA merging and splitting with applications to multiagent cooperative learning and system alteration.

    Science.gov (United States)

    Pang, Shaoning; Ban, Tao; Kadobayashi, Youki; Kasabov, Nikola K

    2012-04-01

    To adapt linear discriminant analysis (LDA) to real-world applications, there is a pressing need to equip it with an incremental learning ability to integrate knowledge presented by one-pass data streams, a functionality to join multiple LDA models to make the knowledge sharing between independent learning agents more efficient, and a forgetting functionality to avoid reconstruction of the overall discriminant eigenspace caused by some irregular changes. To this end, we introduce two adaptive LDA learning methods: LDA merging and LDA splitting. These provide the benefits of ability of online learning with one-pass data streams, retained class separability identical to the batch learning method, high efficiency for knowledge sharing due to condensed knowledge representation by the eigenspace model, and more preferable time and storage costs than traditional approaches under common application conditions. These properties are validated by experiments on a benchmark face image data set. By a case study on the application of the proposed method to multiagent cooperative learning and system alternation of a face recognition system, we further clarified the adaptability of the proposed methods to complex dynamic learning tasks.
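
    One way to picture the merging operation is through the sufficient statistics LDA is built from: per-class counts, means, and scatter matrices. Two independently trained agents can pool these statistics exactly and re-derive the discriminant without revisiting the raw data. The sketch below shows that pooling step generically; it is not the eigenspace-model merging and splitting algorithm of the paper.

```python
import numpy as np

# Generic illustration of pooling per-class sufficient statistics (count, mean,
# scatter) from two learning agents, as used when re-deriving an LDA projection.
# This is not the eigenspace-merging algorithm of the paper, just the underlying idea.

def class_stats(X: np.ndarray):
    n = len(X)
    mean = X.mean(axis=0)
    scatter = (X - mean).T @ (X - mean)
    return n, mean, scatter

def merge_stats(s1, s2):
    (n1, m1, S1), (n2, m2, S2) = s1, s2
    n = n1 + n2
    m = (n1 * m1 + n2 * m2) / n
    d = (m1 - m2).reshape(-1, 1)
    S = S1 + S2 + (n1 * n2 / n) * (d @ d.T)   # exact pooled scatter
    return n, m, S

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    Xa = rng.normal(0.0, 1.0, size=(100, 3))   # agent 1's data for one class
    Xb = rng.normal(0.5, 1.0, size=(80, 3))    # agent 2's data for the same class
    merged = merge_stats(class_stats(Xa), class_stats(Xb))
    direct = class_stats(np.vstack([Xa, Xb]))
    print("pooled scatter matches batch scatter:", np.allclose(merged[2], direct[2]))
```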

  20. Standard Model Particles from Split Octonions

    Directory of Open Access Journals (Sweden)

    Gogberashvili M.

    2016-01-01

    We model physical signals using elements of the algebra of split octonions over the field of real numbers. Elementary particles correspond to special elements of the algebra that nullify octonionic norms (zero divisors). It is shown that the standard model particle spectrum naturally follows from the classification of the independent primitive zero divisors of split octonions.

  1. An experimental study on the alteration of thermal enhancement ratio by combination of split dose hyperthermia irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sun Ok; Kim, Hee Seup [Ewha Womens University College of Medicine, Seoul (Korea, Republic of)

    1983-06-15

    The study was undertaken to evaluate the alteration of the thermal enhancement ratio (TER) as a function of the time interval between two split dose hyperthermia treatments followed by irradiation. For the experiments, 330 mice were divided into 3 groups: the first group of 72 mice was used to evaluate the heat reaction from single dose hyperthermia and the heat resistance from split dose hyperthermia; the second group of 36 mice was used to evaluate the radiation reaction from irradiation only; and the third group of 222 mice was used to observe the TER from the combination of single dose hyperthermia and irradiation and the TER alteration from the combination of split dose hyperthermia and irradiation. For each group, the skin reaction score of the mouse tail was used to observe and evaluate the effects of heat and irradiation. The results obtained are summarized as follows: 1. The heating time resulting in 50% necrosis (ND 50) was 101 minutes at 43 °C and 24 minutes at 45 °C, indicating a reciprocal relationship between temperature and heating time. 2. Development of heat resistance was observed after split dose hyperthermia. 3. The degree of skin reaction from irradiation only increased in proportion to the radiation dose, and the calculated radiation dose corresponding to a skin score of 1.5 (D 1.5) was 4,137 rads. 4. The thermal enhancement ratio obtained by combining single dose hyperthermia and irradiation increased in proportion to heating time. 5. The thermal enhancement ratio was decreased by the combination of split dose hyperthermia and irradiation; this decrease was less intense and lasted longer than the development of heat resistance. In summary, these studies indicate that the alteration of the thermal enhancement ratio is related to the heat resistance induced by split dose hyperthermia and irradiation.

  2. The CMSSW benchmarking suite: Using HEP code to measure CPU performance

    International Nuclear Information System (INIS)

    Benelli, G

    2010-01-01

    The demanding computing needs of the CMS experiment require thoughtful planning and management of its computing infrastructure. A key factor in this process is the use of realistic benchmarks when assessing the computing power of the different architectures available. In recent years a discrepancy has been observed between the CPU performance estimates given by the reference benchmark for HEP computing (SPECint) and actual performances of HEP code. Making use of the CPU performance tools from the CMSSW performance suite, comparative CPU performance studies have been carried out on several architectures. A benchmarking suite has been developed and integrated in the CMSSW framework, to allow computing centers and interested third parties to benchmark architectures directly with CMSSW. The CMSSW benchmarking suite can be used out of the box, to test and compare several machines in terms of CPU performance and report with the wanted level of detail the different benchmarking scores (e.g. by processing step) and results. In this talk we describe briefly the CMSSW software performance suite, and in detail the CMSSW benchmarking suite client/server design, the performance data analysis and the available CMSSW benchmark scores. The experience in the use of HEP code for benchmarking will be discussed and CMSSW benchmark results presented.

  3. PMLB: a large benchmark suite for machine learning evaluation and comparison.

    Science.gov (United States)

    Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H

    2017-01-01

    The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
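
    The resource described here is, to the best of recollection, accompanied by a small Python client (the pmlb package) whose fetch_data helper returns a named benchmark dataset; the exact import path, function name and arguments in the sketch below are assumptions to verify against the current package documentation.

```python
# Hedged usage sketch for the PMLB benchmark resource. The pmlb package and its
# fetch_data(name, return_X_y=...) helper are assumed from memory; verify the
# exact API against the package documentation before relying on it.

from pmlb import fetch_data                      # assumed import path
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = fetch_data("mushroom", return_X_y=True)   # dataset name is just an example
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"mushroom, logistic regression: {scores.mean():.3f} +/- {scores.std():.3f}")
```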

  4. Benchmarking set for domestic smart grid management

    NARCIS (Netherlands)

    Bosman, M.G.C.; Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    In this paper we propose a benchmark for domestic smart grid management. It consists of an in-depth description of a domestic smart grid, in which local energy consumers, producers and buffers can be controlled. First, from this description a general benchmark framework is derived, which can be used

  5. The extent of benchmarking in the South African financial sector

    OpenAIRE

    W Vermeulen

    2014-01-01

    Benchmarking is the process of identifying, understanding and adapting outstanding practices from within the organisation or from other businesses, to help improve performance. The importance of benchmarking as an enabler of business excellence has necessitated an in-depth investigation into the current state of benchmarking in South Africa. This research project highlights the fact that respondents realise the importance of benchmarking, but that various problems hinder the effective impleme...

  6. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting (RWH) and grey water (GW)) and including "external" gardening demands is investigated. This includes the impacts (in isolation and combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are presented throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn on the robustness of the proposed system.

  7. The Splitting Loope

    Science.gov (United States)

    Wilkins, Jesse L. M.; Norton, Anderson

    2011-01-01

    Teaching experiments have generated several hypotheses concerning the construction of fraction schemes and operations and relationships among them. In particular, researchers have hypothesized that children's construction of splitting operations is crucial to their construction of more advanced fractions concepts (Steffe, 2002). The authors…

  8. A benchmark simulation model to describe plant-wide phosphorus transformations in WWTPs

    DEFF Research Database (Denmark)

    Flores-Alsina, Xavier; Ikumi, D.; Kazadi-Mbamba, C.

    It is more than 10 years since the publication of the BSM1 technical report (Copp, 2002). The main objective of BSM1 was to create a platform for benchmarking C and N removal strategies in activated sludge systems. The initial platform evolved into BSM1_LT and BSM2, which allowed for the evaluati...

  9. Splitting Strip Detector Clusters in Dense Environments

    CERN Document Server

    Nachman, Benjamin Philip; The ATLAS collaboration

    2018-01-01

    Tracking in high density environments, particularly in high energy jets, plays an important role in many physics analyses at the LHC. In such environments, there is significant degradation of track reconstruction performance. Between runs 1 and 2, ATLAS implemented an algorithm that splits pixel clusters originating from multiple charged particles, using charge information, resulting in the recovery of much of the lost efficiency. However, no attempt was made in prior work to split merged clusters in the Semi Conductor Tracker (SCT), which does not measure charge information. In spite of the lack of charge information in SCT, a cluster-splitting algorithm has been developed in this work. It is based primarily on the difference between the observed cluster width and the expected cluster width, which is derived from track incidence angle. The performance of this algorithm is found to be competitive with the existing pixel cluster splitting based on track information.
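
    The heart of the algorithm is a geometric comparison: from the track incidence angle, sensor thickness and strip pitch one expects a cluster roughly thickness·|tan θ|/pitch + 1 strips wide, and a cluster much wider than that is a candidate for splitting. The sketch below illustrates only this comparison; the threshold and detector parameters are assumptions, not the ATLAS SCT implementation.

```python
import math

# Illustrative width-based split decision for strip clusters. The geometry
# formula is the usual expected-width estimate; the threshold and detector
# parameters are assumed for illustration and are not the ATLAS SCT values.

def expected_width_in_strips(incidence_angle_rad: float,
                             thickness_um: float = 285.0,
                             pitch_um: float = 80.0) -> float:
    """Expected cluster width from the track's projected path length in the sensor."""
    return thickness_um * abs(math.tan(incidence_angle_rad)) / pitch_um + 1.0

def should_split(observed_width_strips: int,
                 incidence_angle_rad: float,
                 margin_strips: float = 1.5) -> bool:
    """Flag clusters much wider than a single track can explain."""
    return observed_width_strips > expected_width_in_strips(incidence_angle_rad) + margin_strips

if __name__ == "__main__":
    for width in (2, 4, 6):
        print(f"{width} strips at 12 deg incidence -> split candidate:",
              should_split(width, math.radians(12.0)))
```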

  10. ZZ-PBMR-400, OECD/NEA PBMR Coupled Neutronics/Thermal Hydraulics Transient Benchmark - The PBMR-400 Core Design

    International Nuclear Information System (INIS)

    Reitsma, Frederik

    2007-01-01

    Description of benchmark: This international benchmark concerns Pebble-Bed Modular Reactor (PBMR) coupled neutronics/thermal hydraulics transients based on the PBMR-400 MW design. The deterministic neutronics, thermal-hydraulics and transient analysis tools and methods available to design and analyse PBMRs lag, in many cases, behind the state of the art compared to other reactor technologies. This has motivated the testing of existing methods for HTGRs but also the development of more accurate and efficient tools to analyse the neutronics and thermal-hydraulic behaviour for the design and safety evaluations of the PBMR. In addition to the development of new methods, this includes defining appropriate benchmarks to verify and validate the new methods in computer codes. The scope of the benchmark is to establish well-defined problems, based on a common given set of cross sections, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events through a set of multi-dimensional computational test problems. The benchmark exercise has the following objectives: - Establish a standard benchmark for coupled codes (neutronics/thermal-hydraulics) for PBMR design; - Code-to-code comparison using a common cross section library; - Obtain a detailed understanding of the events and the processes; - Benefit from different approaches, understanding limitations and approximations. Major Design and Operating Characteristics of the PBMR (PBMR Characteristic and Value): Installed thermal capacity: 400 MW(t); Installed electric capacity: 165 MW(e); Load following capability: 100-40-100%; Availability: ≥ 95%; Core configuration: Vertical with fixed centre graphite reflector; Fuel: TRISO ceramic coated U-235 in graphite spheres; Primary coolant: Helium; Primary coolant pressure: 9 MPa; Moderator: Graphite; Core outlet temperature: 900 °C; Core inlet temperature: 500 °C; Cycle type: Direct; Number of circuits: 1; Cycle

  11. Numisheet2005 Benchmark Analysis on Forming of an Automotive Deck Lid Inner Panel: Benchmark 1

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao Jian

    2005-01-01

    Numerical simulations of sheet metal forming processes have been a very challenging topic in industry. There are many computer codes and modeling techniques existing today. However, there are many unknowns affecting the prediction accuracy. Systematic benchmark tests are needed to accelerate future implementations and to serve as a reference. This report presents an international cooperative benchmark effort for an automotive deck lid inner panel. Predictions from simulations are analyzed and discussed against the corresponding experimental results. The correlations between the accuracies of the parameters of interest are discussed in this report.

  12. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Science.gov (United States)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.

  13. Bench-marking beam-beam simulations using coherent quadrupole effects

    International Nuclear Information System (INIS)

    Krishnagopal, S.; Chin, Y.H.

    1992-06-01

    Computer simulations are used extensively in the study of the beam-beam interaction. The proliferation of such codes raises the important question of their reliability, and motivates the development of a dependable set of bench-marks. We argue that rather than detailed quantitative comparisons, the ability of different codes to predict the same qualitative physics should be used as a criterion for such bench-marks. We use the striking phenomenon of coherent quadrupole oscillations as one such bench-mark, and demonstrate that our codes do indeed observe this behaviour. We also suggest some other tests that could be used as bench-marks

  14. Bench-marking beam-beam simulations using coherent quadrupole effects

    International Nuclear Information System (INIS)

    Krishnagopal, S.; Chin, Y.H.

    1992-01-01

    Computer simulations are used extensively in the study of the beam-beam interaction. The proliferation of such codes raises the important question of their reliability, and motivates the development of a dependable set of bench-marks. We argue that rather than detailed quantitative comparisons, the ability of different codes to predict the same qualitative physics should be used as a criterion for such bench-marks. We use the striking phenomenon of coherent quadrupole oscillations as one such bench-mark, and demonstrate that our codes do indeed observe this behavior. We also suggest some other tests that could be used as bench-marks

  15. Fee Splitting among General Practitioners: A Cross-Sectional Study in Iran.

    Science.gov (United States)

    Parsa, Mojtaba; Larijani, Bagher; Aramesh, Kiarash; Nedjat, Saharnaz; Fotouhi, Akbar; Yekaninejad, Mir Saeed; Ebrahimian, Nejatollah; Kandi, Mohamad Jafar

    2016-12-01

    Fee splitting is a process whereby a physician refers a patient to another physician or a healthcare facility and receives a portion of the charge in return. This survey was conducted to study general practitioners' (GPs) attitudes toward fee splitting as well as the prevalence, causes, and consequences of this process. This is a cross-sectional study on 223 general practitioners in 2013. Concerning the causes and consequences of fee splitting, an unpublished qualitative study was conducted by interviewing a number of GPs and specialists and the questionnaire options were the results of the information obtained from this study. Of the total 320 GPs, 247 returned the questionnaires. The response rate was 77.18%. Of the 247 returned questionnaires, 223 fulfilled the inclusion criteria. Among the participants, 69.1% considered fee splitting completely wrong and 23.2% (frequently or rarely) practiced fee splitting. The present study showed that the prevalence of fee splitting among physicians who had positive attitudes toward fee splitting was 4.63 times higher than those who had negative attitudes. In addition, this study showed that, compared to private hospitals, fee splitting is less practiced in public hospitals. The major cause of fee splitting was found to be unrealistic/unfair tariffs and the main consequence of fee splitting was thought to be an increase in the number of unnecessary patient referrals. Fee splitting is an unethical act, contradicts the goals of the medical profession, and undermines patient's best interest. In Iran, there is no code of ethics on fee splitting, but in this study, it was found that the majority of GPs considered it unethical. However, among those who had negative attitudes toward fee splitting, there were physicians who did practice fee splitting. The results of the study showed that physicians who had a positive attitude toward fee splitting practiced it more than others. Therefore, if physicians consider fee splitting unethical

  16. Validation of full core geometry model of the NODAL3 code in the PWR transient Benchmark problems

    International Nuclear Information System (INIS)

    T-M Sembiring; S-Pinem; P-H Liem

    2015-01-01

    The coupled neutronic and thermal-hydraulic (T/H) code NODAL3 has been validated against several PWR static benchmarks and the NEACRP PWR transient benchmark cases. However, the NODAL3 code has not yet been validated for the transient benchmark cases of a control rod assembly (CR) ejection at the core periphery using a full core geometry model, the C1 and C2 cases. This work therefore validates the accuracy of the NODAL3 code for the ejection of one CR, or of an unsymmetrical group of CRs. The calculations with the NODAL3 code have been carried out using the adiabatic method (AM) and the improved quasistatic method (IQS). All transient parameters calculated by the NODAL3 code were compared with the reference results of the PANTHER code. The maximum relative difference, 16%, occurs in the calculated time of maximum power when using the IQS method, while the relative difference of the AM method is 4% for the C2 case. The calculation results show no systematic differences, indicating that the neutronic and T/H modules adopted in the code are implemented correctly. Therefore, all results obtained with the NODAL3 code are in very good agreement with the reference results. (author)

  17. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  18. RISKIND verification and benchmark comparisons

    International Nuclear Information System (INIS)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models

  19. An Arbitrary Benchmark CAPM: One Additional Frontier Portfolio is Sufficient

    OpenAIRE

    Ekern, Steinar

    2008-01-01

    First draft: July 16, 2008 This version: October 7, 2008 The benchmark CAPM linearly relates the expected returns on an arbitrary asset, an arbitrary benchmark portfolio, and an arbitrary MV frontier portfolio. The benchmark is not required to be on the frontier and may be non-perfectly correlated with the frontier portfolio. The benchmark CAPM extends and generalizes previous CAPM formulations, including the zero beta, two correlated frontier portfolios, riskless augmented frontier, an...

  20. Benchmarking the implementation of E-Commerce A Case Study Approach

    OpenAIRE

    von Ettingshausen, C. R. D. Freiherr

    2009-01-01

    The purpose of this thesis was to develop a guideline to support the implementation of E-Commerce with E-Commerce benchmarking. Because of its importance as an interface with the customer, web-site benchmarking has been a widely researched topic. However, limited research has been conducted on benchmarking E-Commerce across other areas of the value chain. Consequently this thesis aims to extend benchmarking into E-Commerce related subjects. The literature review examined ...

  1. Benchmarking of EPRI-cell epithermal methods with the point-energy discrete-ordinates code (OZMA)

    International Nuclear Information System (INIS)

    Williams, M.L.; Wright, R.Q.; Barhen, J.; Rothenstein, W.

    1982-01-01

    The purpose of the present study is to benchmark E-C resonance-shielding and cell-averaging methods against a rigorous deterministic solution on a fine-group level (approx. 30 groups between 1 eV and 5.5 keV). The benchmark code used is OZMA, which solves the space-dependent slowing-down equations using continuous-energy discrete ordinates or integral transport theory to produce fine-group cross sections. Results are given for three water-moderated lattices - a mixed oxide, a uranium metal, and a tight-pitch high-conversion uranium oxide configuration. The latter two lattices were chosen because of the strong self-shielding of the 238U resonances

  2. The Split sudâmja

    Directory of Open Access Journals (Sweden)

    Petar Šimunović

    1991-12-01

    The name of the Split feast Sudamja/Sudajma ("festa sancti Domnii") has not yet been adequately explained. The author believes that the name originated from the Old Dalmatian adjective san(ctu) + Domnĭu. In the adjective santu the cluster /an/ in front of a consonant gave in Croatian the back nasal /ǫ/, pronounced until the end of the 10th century and giving /u/ after that. In this way the forms *Sudumja and similar originated. The short stressed /u/ in the closed syllable was perceived by the Croatian folk as their semivowel, which later gave /a/ = Sudamja. The author connects this feature with that in the toponym Makar ( /jm/ is well known in Croatian dialectology (sumja > sujma), and it resembles the metathesis which occurs in the Split toponyms: Sukošjân > Sukojšãn (< *santu Cassianu), Pojišân/Pojšiin (< *pasianu < Pansianu). The author finds the same feature in the toponym Dumjača (: *Dumi- + -ača). He considers these features as Croatian popular adaptations which have not occurred in the personal name Dujam, the toponym Dujmovača "terrae s. Domnii", or in the adjective sandujamski, because the link with the saint's name Domnio/Duymo etc., which has been well liked and is frequent as a name of Split Romans as well as Croats since the foundation of Split, has never been broken.

  3. Communication: Tunnelling splitting in the phosphine molecule

    Energy Technology Data Exchange (ETDEWEB)

    Sousa-Silva, Clara; Tennyson, Jonathan; Yurchenko, Sergey N. [Department of Physics and Astronomy, University College London, London WC1E 6BT (United Kingdom)

    2016-09-07

    Splitting due to tunnelling via the potential energy barrier has played a significant role in the study of molecular spectra since the early days of spectroscopy. The observation of the ammonia doublet led to attempts to find a phosphine analogue, but these have so far failed due to its considerably higher barrier. Full-dimensional, variational nuclear motion calculations are used to predict splittings as a function of excitation energy. Simulated spectra suggest that such splittings should be observable in the near infrared via overtones of the ν2 bending mode, starting with 4ν2.

  4. Splitting Descartes

    DEFF Research Database (Denmark)

    Schilhab, Theresa

    2007-01-01

    Kognition og Pædagogik vol. 48:10-18, 2003. Short description: The cognitivistic paradigm and Descartes' view of embodied knowledge. Abstract: That the philosopher Descartes separated the mind from the body is hardly news: he did it so effectively that his name is forever tied to that division. But what exactly is Descartes' point? How does the Cartesian split hold up to recent biologically based learning theories?

  5. The Splitting Group

    Science.gov (United States)

    Norton, Anderson; Wilkins, Jesse L. M.

    2012-01-01

    Piagetian theory describes mathematical development as the construction and organization of mental operations within psychological structures. Research on student learning has identified the vital roles that two particular operations--splitting and units coordination--play in students' development of advanced fractions knowledge. Whereas Steffe and…

  6. One-loop triple collinear splitting amplitudes in QCD

    Energy Technology Data Exchange (ETDEWEB)

    Badger, Simon; Buciuni, Francesco; Peraro, Tiziano [Higgs Centre for Theoretical Physics, School of Physics and Astronomy, University of Edinburgh,Edinburgh EH9 3JZ, Scotland (United Kingdom)

    2015-09-28

    We study the factorisation properties of one-loop scattering amplitudes in the triple collinear limit and extract the universal splitting amplitudes for processes initiated by a gluon. The splitting amplitudes are derived from the analytic Higgs plus four partons amplitudes. We present compact results for primitive helicity splitting amplitudes making use of super-symmetric decompositions. The universality of the collinear factorisation is checked numerically against the full colour six parton squared matrix elements.

  7. Split ring containment attachment device

    International Nuclear Information System (INIS)

    Sammel, A.G.

    1996-01-01

    A containment attachment device is described for operatively connecting a glovebag to plastic sheeting covering hazardous material. The device includes an inner split ring member connected on one end to a middle ring member wherein the free end of the split ring member is inserted through a slit in the plastic sheeting to captively engage a generally circular portion of the plastic sheeting. A collar portion having an outer ring portion is provided with fastening means for securing the device together wherein the glovebag is operatively connected to the collar portion. 5 figs

  8. Split NMSSM with electroweak baryogenesis

    Energy Technology Data Exchange (ETDEWEB)

    Demidov, S.V.; Gorbunov, D.S. [Institute for Nuclear Research of the Russian Academy of Sciences, 60th October Anniversary prospect 7a, Moscow 117312 (Russian Federation); Moscow Institute of Physics and Technology,Institutsky per. 9, Dolgoprudny 141700 (Russian Federation); Kirpichnikov, D.V. [Institute for Nuclear Research of the Russian Academy of Sciences, 60th October Anniversary prospect 7a, Moscow 117312 (Russian Federation)

    2016-11-24

    In light of the Higgs boson discovery and other results of the LHC we reconsider generation of the baryon asymmetry in the split Supersymmetry model with an additional singlet superfield in the Higgs sector (non-minimal split SUSY). We find that successful baryogenesis during the first order electroweak phase transition is possible within a phenomenologically viable part of the model parameter space. We discuss several phenomenological consequences of this scenario, namely, predictions for the electric dipole moments of electron and neutron and collider signatures of light charginos and neutralinos.

  9. Mass splitting induced by gravitation

    International Nuclear Information System (INIS)

    Maia, M.D.

    1982-08-01

    The exact combination of internal and geometrical symmetries and the associated mass splitting problem is discussed. A 10-parameter geometrical symmetry is defined in a curved space-time in such a way that it is a combination of de Sitter groups. In the flat limit it reproduces the Poincare-group and its Lie algebra has a nilpotent action on the combined symmetry only in that limit. An explicit mass splitting expression is derived and an estimation of the order of magnitude for spin-zero mesons is made. (author)

  10. Splitting of turbulent spot in transitional pipe flow

    Science.gov (United States)

    Wu, Xiaohua; Moin, Parviz; Adrian, Ronald J.

    2017-11-01

    A recent study (Wu et al., PNAS, 1509451112, 2015) demonstrated the feasibility and accuracy of direct computation of the Osborne Reynolds pipe transition problem without the unphysical, axially periodic boundary condition. Here we use this approach to study the splitting of turbulent spots in transitional pipe flow, a feature first discovered by E.R. Lindgren (Arkiv Fysik 15, 1959). It has been widely believed that spot splitting is a mysterious stochastic process that has general implications on the lifetime and sustainability of wall turbulence. We address the following two questions: (1) What is the dynamics of turbulent spot splitting in pipe transition? Specifically, we look into any possible connection between the instantaneous strain rate field and the spot splitting. (2) How does the passive scalar field behave during the process of pipe spot splitting? In this study, the turbulent spot is introduced at the inlet plane through a sixty-degree-wide numerical wedge within which fully developed turbulent profiles are assigned over a short time interval; the simulation Reynolds numbers are 2400 for a 500-radii-long pipe and 2300 for a 1000-radii-long pipe, respectively. Numerical dye is tagged on the imposed turbulent spot at the inlet. Splitting of the imposed turbulent spot is detected very easily. Preliminary analysis of the DNS results suggests that turbulent spot splitting can be easily understood based on the instantaneous strain rate field, and that such spot splitting may not be relevant in external flows such as the flat-plate boundary layer.

  11. Introducing a Generic Concept for an Online IT-Benchmarking System

    OpenAIRE

    Ziaie, Pujan; Ziller, Markus; Wollersheim, Jan; Krcmar, Helmut

    2014-01-01

    While IT benchmarking has grown considerably in the last few years, conventional benchmarking tools have not been able to adequately respond to the rapid changes in technology and paradigm shifts in IT-related domains. This paper aims to review benchmarking methods and leverage design science methodology to present design elements for a novel software solution in the field of IT benchmarking. The solution, which introduces a concept for generic (service-independent) indicators is based on and...

  12. Strategic behaviour under regulatory benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Jamasb, T. [Cambridge Univ. (United Kingdom). Dept. of Applied Economics; Nillesen, P. [NUON NV (Netherlands); Pollitt, M. [Cambridge Univ. (United Kingdom). Judge Inst. of Management

    2004-09-01

    In order to improve the efficiency of electricity distribution networks, some regulators have adopted incentive regulation schemes that rely on performance benchmarking. Although regulation benchmarking can influence the "regulation game," the subject has received limited attention. This paper discusses how strategic behaviour can result in inefficient behaviour by firms. We then use the Data Envelopment Analysis (DEA) method with US utility data to examine implications of illustrative cases of strategic behaviour reported by regulators. The results show that gaming can have significant effects on the measured performance and profitability of firms. (author)
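
    The abstract above applies Data Envelopment Analysis (DEA) to utility data. As a rough illustration of the technique (not the authors' model, data, or variable choices), the sketch below solves the input-oriented CCR envelopment linear program for each decision-making unit using synthetic inputs and outputs; all numbers are invented.

    ```python
    # Minimal sketch of an input-oriented CCR DEA model solved as a linear program.
    # Generic illustration of the DEA method; the utility inputs/outputs are made up.
    import numpy as np
    from scipy.optimize import linprog

    # Rows = DMUs (e.g., distribution utilities); columns = inputs / outputs.
    X = np.array([[100.0, 30.0], [120.0, 35.0], [80.0, 25.0], [150.0, 50.0]])  # inputs
    Y = np.array([[500.0], [520.0], [450.0], [700.0]])                          # outputs

    def ccr_efficiency(o: int) -> float:
        n, m = X.shape[0], X.shape[1]   # n DMUs, m inputs
        s = Y.shape[1]                  # s outputs
        # Decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.r_[1.0, np.zeros(n)]
        # Input constraints: sum_j lambda_j * x_ij - theta * x_io <= 0
        A_in = np.c_[-X[o, :].reshape(m, 1), X.T]
        # Output constraints: -sum_j lambda_j * y_rj <= -y_ro
        A_out = np.c_[np.zeros((s, 1)), -Y.T]
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[o, :]]
        bounds = [(None, None)] + [(0, None)] * n
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        return res.x[0]

    for o in range(X.shape[0]):
        print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
    ```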

  13. 3-D neutron transport benchmarks

    International Nuclear Information System (INIS)

    Takeda, T.; Ikeda, H.

    1991-03-01

    A set of 3-D neutron transport benchmark problems proposed by the Osaka University to NEACRP in 1988 has been calculated by many participants and the corresponding results are summarized in this report. The results of k-eff, control rod worth and region-averaged fluxes for the four proposed core models, calculated by using various 3-D transport codes, are compared and discussed. The calculational methods used were: Monte Carlo, Discrete Ordinates (Sn), Spherical Harmonics (Pn), Nodal Transport and others. The solutions of the four core models are quite useful as benchmarks for checking the validity of 3-D neutron transport codes

  14. Guidelines to Develop Efficient Photocatalysts for Water Splitting

    KAUST Repository

    Garcia Esparza, Angel T.

    2016-01-01

    Photocatalytic overall water splitting is the only viable solar-to-fuel conversion technology. The research discloses an investigation process wherein by dissecting the photocatalytic water splitting device, electrocatalysts, and semiconductor

  15. Introduction to 'International Handbook of Criticality Safety Benchmark Experiments'

    International Nuclear Information System (INIS)

    Komuro, Yuichi

    1998-01-01

    The Criticality Safety Benchmark Evaluation Project (CSBEP) was initiated in 1992 by the United States Department of Energy. The project quickly became an international effort as scientists from other interested countries became involved. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) is now an official activity of the Organization for Economic Cooperation and Development-Nuclear Energy Agency (OECD-NEA). 'International Handbook of Criticality Safety Benchmark Experiments' was prepared and is updated year by year by the working group of the project. This handbook contains criticality safety benchmark specifications that have been derived from experiments that were performed at various nuclear critical facilities around the world. The benchmark specifications are intended for use by criticality safety engineers to validate calculation techniques used. The author briefly introduces the informative handbook and would like to encourage Japanese engineers who are in charge of nuclear criticality safety to use the handbook. (author)

  16. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias; Smith, Neil; Ghanem, Bernard

    2016-01-01

    In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs. (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx.). © Springer International Publishing AG 2016.

  17. A Benchmark and Simulator for UAV Tracking

    KAUST Repository

    Mueller, Matthias

    2016-09-16

    In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as a photorealistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs. (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx.). © Springer International Publishing AG 2016.

  18. Benchmarking CRISPR on-target sgRNA design.

    Science.gov (United States)

    Yan, Jifang; Chuai, Guohui; Zhou, Chi; Zhu, Chenyu; Yang, Jing; Zhang, Chao; Gu, Feng; Xu, Han; Wei, Jia; Liu, Qi

    2017-02-15

    CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats)-based gene editing has been widely implemented in various cell types and organisms. A major challenge in the effective application of the CRISPR system is the need to design highly efficient single-guide RNA (sgRNA) with minimal off-target cleavage. Several tools are available for sgRNA design, but only a limited number of tools have been compared. In our opinion, benchmarking the performance of the available tools and indicating their applicable scenarios are important issues. Moreover, whether the reported sgRNA design rules are reproducible across different sgRNA libraries, cell types and organisms remains unclear. In our study, a systematic and unbiased benchmark of sgRNA prediction efficacy was performed on nine representative on-target design tools, based on six benchmark data sets covering five different cell types. The benchmark study presented here provides novel quantitative insights into the available CRISPR tools. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
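
    As an illustration of what such a tool benchmark typically computes, the sketch below rank-correlates each tool's predicted on-target scores with measured sgRNA efficacies. The metric (Spearman correlation), the tool names, and the data are assumptions for illustration, not the exact protocol or datasets of the study.

    ```python
    # Illustrative sketch of one common way to benchmark on-target sgRNA design
    # tools: rank-correlate each tool's predicted scores with measured efficacies.
    # Tool names, scores, and measured values are hypothetical.
    from scipy.stats import spearmanr

    measured = [0.82, 0.10, 0.55, 0.33, 0.91, 0.47]        # hypothetical measured efficacies
    predictions = {
        "tool_A": [0.80, 0.20, 0.50, 0.40, 0.85, 0.45],    # hypothetical tool outputs
        "tool_B": [0.30, 0.70, 0.20, 0.60, 0.40, 0.50],
    }

    for tool, scores in predictions.items():
        rho, pval = spearmanr(scores, measured)
        print(f"{tool}: Spearman rho = {rho:.2f} (p = {pval:.3f})")
    ```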

  19. Analytical three-dimensional neutron transport benchmarks for verification of nuclear engineering codes. Final report

    International Nuclear Information System (INIS)

    Ganapol, B.D.; Kornreich, D.E.

    1997-01-01

    Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation as modified in the second year renewal application includes the following three primary tasks. Task 1 on two dimensional neutron transport is divided into (a) single medium searchlight problem (SLP) and (b) two-adjacent half-space SLP. Task 2 on three-dimensional neutron transport covers (a) point source in arbitrary geometry, (b) single medium SLP, and (c) two-adjacent half-space SLP. Task 3 on code verification, includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade

  20. Exposing the QCD Splitting Function with CMS Open Data.

    Science.gov (United States)

    Larkoski, Andrew; Marzani, Simone; Thaler, Jesse; Tripathee, Aashish; Xue, Wei

    2017-09-29

    The splitting function is a universal property of quantum chromodynamics (QCD) which describes how energy is shared between partons. Despite its ubiquitous appearance in many QCD calculations, the splitting function cannot be measured directly, since it always appears multiplied by a collinear singularity factor. Recently, however, a new jet substructure observable was introduced which asymptotes to the splitting function for sufficiently high jet energies. This provides a way to expose the splitting function through jet substructure measurements at the Large Hadron Collider. In this Letter, we use public data released by the CMS experiment to study the two-prong substructure of jets and test the 1→2 splitting function of QCD. To our knowledge, this is the first ever physics analysis based on the CMS Open Data.
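
    For orientation, the 1→2 splitting function being probed has, for a quark emitting a gluon, the familiar leading-order textbook form shown below; it is quoted here as standard QCD background rather than as a result of the Letter, and the z→1 soft divergence is left unregularized.

    ```latex
    % Leading-order q -> qg splitting function (textbook form; the z -> 1 soft
    % divergence is left unregularized). C_F = 4/3 is the quark colour factor.
    P_{qq}(z) \;=\; C_F\,\frac{1 + z^{2}}{1 - z}, \qquad C_F = \tfrac{4}{3}
    ```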

  1. Benchmarking a signpost to excellence in quality and productivity

    CERN Document Server

    Karlof, Bengt

    1993-01-01

    According to the authors, benchmarking exerts a powerful leverage effect on an organization and they consider some of the factors which justify their claim. Describes how to implement benchmarking and exactly what to benchmark. Explains benchlearning which integrates education, leadership development and organizational dynamics with the actual work being done and how to make it work more efficiently in terms of quality and productivity.

  2. Split-plot designs for multistage experimentation

    DEFF Research Database (Denmark)

    Kulahci, Murat; Tyssedal, John

    2016-01-01

    at the same time will be more efficient. However, there have been only a few attempts in the literature to provide an adequate and easy-to-use approach for this problem. In this paper, we present a novel methodology for constructing two-level split-plot and multistage experiments. The methodology is based… be accommodated in each stage. Furthermore, split-plot designs for multistage experiments with good projective properties are also provided…

  3. Standard test method for splitting tensile strength for brittle nuclear waste forms

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1989-01-01

    1.1 This test method is used to measure the static splitting tensile strength of cylindrical specimens of brittle nuclear waste forms. It provides splitting tensile-strength data that can be used to compare the strength of waste forms when tests are done on one size of specimen. 1.2 The test method is applicable to glass, ceramic, and concrete waste forms that are sufficiently homogeneous (Note 1) but not to coated-particle, metal-matrix, bituminous, or plastic waste forms, or concretes with large-scale heterogeneities. Cementitious waste forms with heterogeneities >1 to 2 mm and 5 mm can be tested using this procedure provided the specimen size is increased from the reference size of 12.7 mm diameter by 6 mm length, to 51 mm diameter by 100 mm length, as recommended in Test Method C 496 and Practice C 192. Note 1—Generally, the specimen structural or microstructural heterogeneities must be less than about one-tenth the diameter of the specimen. 1.3 This test method can be used as a quality control chec...

  4. Quality management benchmarking: FDA compliance in pharmaceutical industry.

    Science.gov (United States)

    Jochem, Roland; Landgraf, Katja

    2010-01-01

    By analyzing and comparing industry and business best practice, processes can be optimized and become more successful mainly because efficiency and competitiveness increase. This paper aims to focus on some examples. Case studies are used to show knowledge exchange in the pharmaceutical industry. Best practice solutions were identified in two companies using a benchmarking method and five-stage model. Despite large administrations, there is much potential regarding business process organization. This project makes it possible for participants to fully understand their business processes. The benchmarking method gives an opportunity to critically analyze value chains (a string of companies or players working together to satisfy market demands for a special product). Knowledge exchange is interesting for companies that like to be global players. Benchmarking supports information exchange and improves competitive ability between different enterprises. Findings suggest that the five-stage model improves efficiency and effectiveness. Furthermore, the model increases the chances for reaching targets. The method gives security to partners that did not have benchmarking experience. The study identifies new quality management procedures. Process management and especially benchmarking is shown to support pharmaceutical industry improvements.

  5. Towards benchmarking an in-stream water quality model

    Directory of Open Access Journals (Sweden)

    2007-01-01

    A method of model evaluation is presented which utilises a comparison with a benchmark model. The proposed benchmarking concept is one that can be applied to many hydrological models but, in this instance, is implemented in the context of an in-stream water quality model. The benchmark model is defined in such a way that it is easily implemented within the framework of the test model, i.e. the approach relies on two applications of the same model code rather than the application of two separate model codes. This is illustrated using two case studies from the UK, the Rivers Aire and Ouse, with the objective of simulating a water quality classification, general quality assessment (GQA), which is based on dissolved oxygen, biochemical oxygen demand and ammonium. Comparisons between the benchmark and test models are made based on GQA, as well as a step-wise assessment against the components required in its derivation. The benchmarking process yields a great deal of important information about the performance of the test model and raises issues about a priori definition of the assessment criteria.
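
    A minimal sketch of the comparison step described above: map each model run's dissolved oxygen, BOD, and ammonium output to a quality class and compare the classifications of the test and benchmark runs. The class thresholds and values below are invented for illustration and are not the actual GQA scheme.

    ```python
    # Sketch of the benchmarking idea: run the same water quality model code twice
    # (test and benchmark configurations), map each run's dissolved oxygen, BOD and
    # ammonium output to a quality class, and compare the resulting classifications.
    # The thresholds below are invented and are NOT the actual UK GQA scheme.

    def quality_class(do_mgl: float, bod_mgl: float, nh4_mgl: float) -> str:
        if do_mgl > 8 and bod_mgl < 2.5 and nh4_mgl < 0.25:
            return "A"
        if do_mgl > 6 and bod_mgl < 4 and nh4_mgl < 0.6:
            return "B"
        return "C"

    test_run = {"do": 7.2, "bod": 3.1, "nh4": 0.4}        # hypothetical test-model output
    benchmark_run = {"do": 8.4, "bod": 2.0, "nh4": 0.2}   # hypothetical benchmark-model output

    for name, run in (("test", test_run), ("benchmark", benchmark_run)):
        print(name, quality_class(run["do"], run["bod"], run["nh4"]))
    ```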

  6. Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data

    Science.gov (United States)

    Murata, Isao; Ohta, Masayuki; Kusaka, Sachie; Sato, Fuminobu; Miyamaru, Hiroyuki

    2017-09-01

    Many benchmark experiments with DT neutrons have been carried out so far, especially aiming at fusion reactor development. These integral experiments seemed only vaguely to validate the nuclear data below 14 MeV, and no precise studies exist to date. The author's group therefore started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase and to generalize the discussion, the energy range was expanded to the entire region. In this study, thought experiments with finer energy bins have been conducted to discuss how to estimate the performance of benchmark experiments in general. The thought experiments with a point detector show that the sensitivity to a discrepancy appearing in the benchmark analysis is "equally" due not only to the contribution conveyed directly to the detector, but also to the indirect contribution of the neutrons (A) that produce the neutrons conveying that contribution, the indirect contribution of the neutrons (B) that produce the neutrons (A), and so on. From this concept, a sensitivity analysis performed in advance would make clear how well, and at which energies, nuclear data could be benchmarked with a given benchmark experiment.

  7. Thought Experiment to Examine Benchmark Performance for Fusion Nuclear Data

    Directory of Open Access Journals (Sweden)

    Murata Isao

    2017-01-01

    Many benchmark experiments with DT neutrons have been carried out so far, especially aiming at fusion reactor development. These integral experiments seemed only vaguely to validate the nuclear data below 14 MeV, and no precise studies exist to date. The author's group therefore started to examine how well benchmark experiments with DT neutrons can play a benchmarking role for energies below 14 MeV. Recently, as a next phase and to generalize the discussion, the energy range was expanded to the entire region. In this study, thought experiments with finer energy bins have been conducted to discuss how to estimate the performance of benchmark experiments in general. The thought experiments with a point detector show that the sensitivity to a discrepancy appearing in the benchmark analysis is “equally” due not only to the contribution conveyed directly to the detector, but also to the indirect contribution of the neutrons (A) that produce the neutrons conveying that contribution, the indirect contribution of the neutrons (B) that produce the neutrons (A), and so on. From this concept, a sensitivity analysis performed in advance would make clear how well, and at which energies, nuclear data could be benchmarked with a given benchmark experiment.

  8. Hydrogen Production from Semiconductor-based Photocatalysis via Water Splitting

    Directory of Open Access Journals (Sweden)

    Jeffrey C. S. Wu

    2012-10-01

    Hydrogen is the ideal fuel for the future because it is clean, energy efficient, and abundant in nature. While various technologies can be used to generate hydrogen, only some of them can be considered environmentally friendly. Recently, solar hydrogen generated via photocatalytic water splitting has attracted tremendous attention and has been extensively studied because of its great potential for low-cost and clean hydrogen production. This paper gives a comprehensive review of the development of photocatalytic water splitting for generating hydrogen, particularly under visible-light irradiation. The topics covered include an introduction to hydrogen production technologies, a review of photocatalytic water splitting over titania and non-titania based photocatalysts, a discussion of the types of photocatalytic water-splitting approaches, and a conclusion on the current challenges and future prospects of photocatalytic water splitting. Based on the literature reviewed here, the development of highly stable visible-light-active photocatalytic materials and the design of efficient, low-cost photoreactor systems are the key for the advancement of solar-hydrogen production via photocatalytic water splitting in the future.

  9. Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides

    Energy Technology Data Exchange (ETDEWEB)

    Nowell, Lisa H., E-mail: lhnowell@usgs.gov [U.S. Geological Survey, California Water Science Center, Placer Hall, 6000 J Street, Sacramento, CA 95819 (United States); Norman, Julia E., E-mail: jnorman@usgs.gov [U.S. Geological Survey, Oregon Water Science Center, 2130 SW 5th Avenue, Portland, OR 97201 (United States); Ingersoll, Christopher G., E-mail: cingersoll@usgs.gov [U.S. Geological Survey, Columbia Environmental Research Center, 4200 New Haven Road, Columbia, MO 65021 (United States); Moran, Patrick W., E-mail: pwmoran@usgs.gov [U.S. Geological Survey, Washington Water Science Center, 934 Broadway, Suite 300, Tacoma, WA 98402 (United States)

    2016-04-15

    Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n = 3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical/chemical characteristics

  10. Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides

    International Nuclear Information System (INIS)

    Nowell, Lisa H.; Norman, Julia E.; Ingersoll, Christopher G.; Moran, Patrick W.

    2016-01-01

    Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n = 3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical/chemical characteristics
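
    A minimal sketch of the benchmark-quotient screening described in the two records above: divide each detected pesticide concentration by its sediment benchmark and sum the quotients for the mixture. The compound names, concentrations, and benchmark values are hypothetical placeholders, not values from the study.

    ```python
    # Sketch of the benchmark-quotient idea: divide each detected pesticide
    # concentration by its sediment benchmark and sum the quotients as a
    # screening indicator for the mixture. All numbers are hypothetical.

    benchmarks_ug_per_kg = {          # e.g., Threshold Effect Benchmarks (TEBs)
        "bifenthrin": 0.5,
        "chlorpyrifos": 1.0,
        "fipronil": 0.3,
    }
    detected_ug_per_kg = {"bifenthrin": 0.2, "fipronil": 0.45}

    quotient_sum = sum(
        conc / benchmarks_ug_per_kg[name] for name, conc in detected_ug_per_kg.items()
    )
    print(f"Sum of benchmark quotients: {quotient_sum:.2f}")
    if quotient_sum > 1.0:
        print("Potential toxicity flagged for this mixture (screening-level only).")
    ```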

  11. Optimal field splitting for large intensity-modulated fields

    International Nuclear Information System (INIS)

    Kamath, Srijit; Sahni, Sartaj; Ranka, Sanjay; Li, Jonathan; Palta, Jatinder

    2004-01-01

    The multileaf travel range limitations on some linear accelerators require the splitting of a large intensity-modulated field into two or more adjacent abutting intensity-modulated subfields. The abutting subfields are then delivered as separate treatment fields. This workaround not only increases the treatment delivery time but it also increases the total monitor units (MU) delivered to the patient for a given prescribed dose. It is imperative that the cumulative intensity map of the subfields is exactly the same as the intensity map of the large field generated by the dose optimization algorithm, while satisfying hardware constraints of the delivery system. In this work, we describe field splitting algorithms that split a large intensity-modulated field into two or more intensity-modulated subfields with and without feathering, with optimal MU efficiency while satisfying the hardware constraints. Compared to a field splitting technique (without feathering) used in a commercial planning system, our field splitting algorithm (without feathering) shows a decrease in total MU of up to 26% on clinical cases and up to 63% on synthetic cases
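
    To make the problem setup concrete, the naive sketch below cuts a one-dimensional fluence row into abutting subfields that respect a maximum field width while summing exactly to the original profile. It only illustrates the splitting constraint; it does not implement the MU-optimal or feathering algorithms described in the paper.

    ```python
    # Naive sketch of large-field splitting: cut a fluence (intensity) profile into
    # abutting subfields whose widths respect a maximum field width, so that the
    # subfields together reproduce the original profile exactly. This illustrates
    # the problem setup only, not the MU-optimal algorithms of the paper.

    def split_profile(intensity: list, max_width: int) -> list:
        """Split a 1-D intensity row into adjacent chunks of at most max_width columns."""
        return [intensity[i:i + max_width] for i in range(0, len(intensity), max_width)]

    row = [0, 2, 5, 7, 7, 6, 4, 3, 3, 1, 0, 0, 2, 4]   # hypothetical fluence row
    subfields = split_profile(row, max_width=6)
    assert sum(map(len, subfields)) == len(row)          # cumulative intensity map preserved
    print(subfields)
    ```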

  12. An Engineered Split Intein for Photoactivated Protein Trans-Splicing.

    Directory of Open Access Journals (Sweden)

    Stanley Wong

    Protein splicing is mediated by inteins that auto-catalytically join two separated protein fragments with a peptide bond. Here we engineered a genetically encoded synthetic photoactivatable intein (named LOVInC), by using the light-sensitive LOV2 domain from Avena sativa as a switch to modulate the splicing activity of the split DnaE intein from Nostoc punctiforme. Periodic blue light illumination of LOVInC induced protein splicing activity in mammalian cells. To demonstrate the broad applicability of LOVInC, synthetic protein systems were engineered for the light-induced reassembly of several target proteins such as fluorescent protein markers, a dominant positive mutant of RhoA, caspase-7, and the genetically encoded Ca2+ indicator GCaMP2. Spatial precision of LOVInC was demonstrated by targeting activity to specific mammalian cells. Thus, LOVInC can serve as a general platform for engineering light-based control for modulating the activity of many different proteins.

  13. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stay confidential; the banks, as clients, learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping

  14. The national hydrologic bench-mark network

    Science.gov (United States)

    Cobb, Ernest D.; Biesecker, J.E.

    1971-01-01

    The United States is undergoing a dramatic growth of population and demands on its natural resources. The effects are widespread and often produce significant alterations of the environment. The hydrologic bench-mark network was established to provide data on stream basins which are little affected by these changes. The network is made up of selected stream basins which are not expected to be significantly altered by man. Data obtained from these basins can be used to document natural changes in hydrologic characteristics with time, to provide a better understanding of the hydrologic structure of natural basins, and to provide a comparative base for studying the effects of man on the hydrologic environment. There are 57 bench-mark basins in 37 States. These basins are in areas having a wide variety of climate and topography. The bench-mark basins and the types of data collected in the basins are described.

  15. Energy benchmarking of South Australian WWTPs.

    Science.gov (United States)

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.
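
    A minimal sketch of the basic comparison underlying such energy benchmarking: express each plant's electricity use as specific consumption per population equivalent and compare it with a benchmark target. The plant data and the target value below are invented; real benchmark values depend on plant size class and process type.

    ```python
    # Sketch of the basic benchmarking comparison: specific electricity consumption
    # per population equivalent, kWh/(PE*a), compared with a benchmark target.
    # Plant data and the target are invented for illustration.

    plants = {
        "plant_A": {"kwh_per_year": 1_200_000, "population_equivalent": 40_000},
        "plant_B": {"kwh_per_year": 3_500_000, "population_equivalent": 60_000},
    }
    benchmark_kwh_per_pe = 35.0   # hypothetical target value

    for name, p in plants.items():
        specific = p["kwh_per_year"] / p["population_equivalent"]
        gap = specific - benchmark_kwh_per_pe
        print(f"{name}: {specific:.1f} kWh/(PE*a)  (gap to benchmark: {gap:+.1f})")
    ```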

  16. Iterative group splitting algorithm for opportunistic scheduling systems

    KAUST Repository

    Nam, Haewoon; Alouini, Mohamed-Slim

    2014-01-01

    An efficient feedback algorithm for opportunistic scheduling systems based on iterative group splitting is proposed in this paper. Similar to the opportunistic splitting algorithm, the proposed algorithm adjusts (or lowers) the feedback threshold
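
    Since the record above is truncated, the sketch below illustrates only the generic threshold-splitting idea that such feedback algorithms build on: iteratively adjust an SNR threshold window until a single best user remains. It is a textbook-style illustration, not the specific algorithm proposed in the paper.

    ```python
    # Generic illustration of threshold-based opportunistic splitting feedback:
    # narrow an SNR threshold interval until exactly one user responds. This is a
    # textbook-style sketch, not the paper's proposed algorithm.
    import random

    def splitting_select(snrs, low=0.0, high=30.0, max_slots=20):
        threshold = (low + high) / 2.0
        for _ in range(max_slots):
            above = [i for i, s in enumerate(snrs) if s >= threshold]
            if len(above) == 1:
                return above[0], threshold
            if len(above) == 0:
                high = threshold            # no feedback: lower the threshold
            else:
                low = threshold             # collision: raise the threshold (split the group)
            threshold = (low + high) / 2.0
        return max(range(len(snrs)), key=lambda i: snrs[i]), threshold  # fallback

    random.seed(1)
    users = [random.uniform(0, 30) for _ in range(16)]
    winner, thr = splitting_select(users)
    print(f"selected user {winner} with SNR {users[winner]:.1f} dB (final threshold {thr:.1f} dB)")
    ```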

  17. Benchmarking in the globalised world and its impact on South ...

    African Journals Online (AJOL)

    In order to understand the potential impact of international benchmarking on South African institutions, it is important to explore the future role of benchmarking on the international level. In this regard, examples of transnational benchmarking activities will be considered. As a result of the involvement of South African ...

  18. Electrodeposition of Ni-Mo alloy coatings for water splitting reaction

    Science.gov (United States)

    Shetty, Akshatha R.; Hegde, Ampar Chitharanjan

    2018-04-01

    The present study reports the development of Ni-Mo alloy coatings for water splitting applications using a citrate bath. The inducing effect of Mo (a reluctant metal) on electrodeposition and its relationship with the electrocatalytic efficiency of the coatings were studied. The alkaline water splitting efficiency of the Ni-Mo alloy coatings, for both the hydrogen evolution reaction (HER) and the oxygen evolution reaction (OER), was tested using cyclic voltammetry (CV) and chronopotentiometry (CP) techniques. Moreover, the practical utility of these electrode materials was evaluated by measuring the amount of H2 and O2 gas evolved. The variation in electrocatalytic activity with composition, structure, and morphology of the coatings was examined using XRD, SEM, and EDS analyses. The experimental results showed that the Ni-Mo alloy coating is the best electrode material for the alkaline HER and OER, at lower and higher deposition current densities (c.d.'s) respectively. This behavior is attributed to the decreased Mo and increased Ni content of the alloy coating and to the number of electroactive centers.

  19. Benchmarking in Foodservice Operations

    National Research Council Canada - National Science Library

    Johnson, Bonnie

    1998-01-01

    .... The design of this study included two parts: (1) eleven expert panelists involved in a Delphi technique to identify and rate importance of foodservice performance measures and rate the importance of benchmarking activities, and (2...

  20. Neither dichotomous nor split, but schema-related negative interpersonal evaluations characterize borderline patients.

    Science.gov (United States)

    Sieswerda, Simkje; Barnow, Sven; Verheul, Roel; Arntz, Arnoud

    2013-02-01

    Cognitive models explain extreme thoughts, affects, and behaviors of patients with Borderline Personality Disorder (BPD) by specific mal-adaptive schemas and dichotomous thinking. Psychodynamic theories ascribe these to splitting. This study expanded the study of Veen and Arntz (2000) and investigated whether extreme evaluations in BPD are (1) dichotomous, negativistic, or split; (2) limited to specific (schema-related) interpersonal situations; and (3) related to traumatic childhood experiences. BPD (n = 18), cluster C personality disorder (n = 16), and nonpatient (n = 17) groups were asked to judge 16 characters portrayed in film fragments in a specific or nonspecific context and with negative, positive, or neutral roles on visual analogue scales. These scales were divided in negative-positive trait opposites related to BPD schemas, negative-positive trait opposites unrelated to BPD schemas, and neutral trait opposites. Interpersonal evaluations of patients with BPD were (1) negativistic; (2) schema related; and (3) partially related to traumatic childhood experiences. Negative evaluations of caring characters in an intimate context particularly characterized BPD. No evidence was found for dichotomous thinking or splitting in BPD.

  1. Experimental Assessment of residential split type air-conditioning systems using alternative refrigerants to R-22 at high ambient temperatures

    International Nuclear Information System (INIS)

    Joudi, Khalid A.; Al-Amir, Qusay R.

    2014-01-01

    Highlights: • R290, R407C and R410A in residential split A/C units at high ambient. • 1 and 2 TR residential air conditioners with R22 alternatives at high ambient. • Residential split unit performance at ambients up to 55 °C with R22 alternatives. - Abstract: Steady-state performance of residential air conditioning systems using R22 and the alternatives R290, R407C and R410A, at high ambient temperatures, has been investigated experimentally. System performance parameters such as optimum refrigerant charge, coefficient of performance, cooling capacity, power consumption, pressure ratio, power per ton of refrigeration and TEWI environmental factor have been determined. All refrigerants were tested in cooling mode operation under high ambient air temperatures, up to 55 °C, to determine their suitability. Two split-type air conditioners of 1 and 2 TR capacity were used. A psychrometric test facility was constructed consisting of a conditioned cool compartment and an environmental duct serving the condenser. Air inside the conditioned compartment was maintained at 25 °C dry bulb and 19 °C wet bulb for all tests. In the environmental duct, the ambient air temperature was varied from 35 °C to 55 °C in 5 °C increments. The study showed that R290 is the best candidate to replace R22 under high ambient air temperatures. It has lower TEWI values and a better coefficient of performance than the other refrigerants tested. It is suitable as a drop-in refrigerant. R407C has the closest performance to R22, followed by R410A
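
    For reference, the reported performance parameters follow from the raw measurements as sketched below: COP is cooling capacity over electrical input power, and power per ton follows from 1 TR = 3.517 kW of cooling. The sample numbers are illustrative, not measurements from the study.

    ```python
    # Sketch relating the reported performance parameters to raw measurements.
    # The sample numbers are made-up illustrations, not data from the study.

    TON_OF_REFRIGERATION_KW = 3.517

    def cop(cooling_kw: float, power_kw: float) -> float:
        """Coefficient of performance: cooling capacity over electrical input power."""
        return cooling_kw / power_kw

    def kw_per_ton(cooling_kw: float, power_kw: float) -> float:
        """Electrical power per ton of refrigeration delivered."""
        return power_kw / (cooling_kw / TON_OF_REFRIGERATION_KW)

    # Hypothetical test point at 45 °C ambient for a nominal 2 TR unit:
    cooling_kw, power_kw = 6.1, 2.3
    print(f"COP = {cop(cooling_kw, power_kw):.2f}")
    print(f"Power per ton = {kw_per_ton(cooling_kw, power_kw):.2f} kW/TR")
    ```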

  2. Point-splitting regularization of composite operators and anomalies

    International Nuclear Information System (INIS)

    Novotny, J.; Schnabl, M.

    2000-01-01

    The point-splitting regularization technique for composite operators is discussed in connection with anomaly calculation. We present a pedagogical and self-contained review of the topic with an emphasis on the technical details. We also develop simple algebraic tools to handle the path ordered exponential insertions used within the covariant and non-covariant version of the point-splitting method. The method is then applied to the calculation of the chiral, vector, trace, translation and Lorentz anomalies within diverse versions of the point-splitting regularization and a connection between the results is described. As an alternative to the standard approach we use the idea of deformed point-split transformation and corresponding Ward-Takahashi identities rather than an application of the equation of motion, which seems to reduce the complexity of the calculations. (orig.)
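
    For orientation, the covariant point-splitting prescription inserts a path-ordered exponential (Wilson line) between the split points to preserve gauge invariance; a standard textbook form for the axial current is shown below. Conventions for signs and normalizations vary, and this is not necessarily the exact form used in the paper.

    ```latex
    % Covariant point-split axial current with a Wilson-line insertion between the
    % split points (standard textbook form; sign and normalization conventions vary).
    j_{5}^{\mu}(x) \;=\; \lim_{\epsilon \to 0}\,
      \bar{\psi}\!\left(x + \tfrac{\epsilon}{2}\right) \gamma^{\mu}\gamma_{5}\,
      \exp\!\left(-\,i e \int_{x-\epsilon/2}^{\,x+\epsilon/2} A_{\alpha}(y)\,\mathrm{d}y^{\alpha}\right)
      \psi\!\left(x - \tfrac{\epsilon}{2}\right)
    ```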

  3. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    Science.gov (United States)

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  4. Identification of a novel SPLIT-HULL (SPH) gene associated with hull splitting in rice (Oryza sativa L.).

    Science.gov (United States)

    Lee, Gileung; Lee, Kang-Ie; Lee, Yunjoo; Kim, Backki; Lee, Dongryung; Seo, Jeonghwan; Jang, Su; Chin, Joong Hyoun; Koh, Hee-Jong

    2018-07-01

    The split-hull phenotype caused by reduced lemma width and low lignin content is under the control of SPH, which encodes a type-2 13-lipoxygenase, and contributes to high dehulling efficiency. Rice hulls consist of two bract-like structures, the lemma and palea. The hull is an important organ that helps to protect seeds from environmental stress, determines seed shape, and ensures grain filling. Achieving optimal hull size and morphology is beneficial for seed development. We characterized the split-hull (sph) mutant in rice, which exhibits hull splitting in the interlocking part between lemma and palea and/or the folded part of the lemma during the grain filling stage. Morphological and chemical analysis revealed that the reduction in the width of the lemma and in the lignin content of the hull in the sph mutant might be the cause of hull splitting. Genetic analysis indicated that the mutant phenotype was controlled by a single recessive gene, sph (Os04g0447100), which encodes a type-2 13-lipoxygenase. SPH knockout and knockdown transgenic plants displayed the same split-hull phenotype as in the mutant. The sph mutant showed significantly higher linoleic and linolenic acid (substrates of lipoxygenase) contents in spikelets compared to the wild type, probably due to the genetic defect of SPH and the subsequent decrease in lipoxygenase activity. In a dehulling experiment, the sph mutant showed high dehulling efficiency even with a weak tearing force in a dehulling machine. Collectively, the results provide a basis for understanding the functional role of lipoxygenase in the structure and maintenance of hulls, and would facilitate the breeding of easy-dehulling rice.

  5. 15 CFR 30.28 - “Split shipments” by air.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false “Split shipments” by air. 30.28... Transactions § 30.28 “Split shipments” by air. When a shipment by air covered by a single EEI submission is... showing the portion of the split shipment carried on that flight, a notation will be made showing the air...

  6. THE IMPORTANCE OF BENCHMARKING IN MAKING MANAGEMENT DECISIONS

    Directory of Open Access Journals (Sweden)

    Adriana-Mihaela IONESCU

    2016-06-01

    Launching a new business or project leads managers to make decisions and choose strategies that they will then apply in their company. Most often, they take decisions only on instinct, but there are also companies that use benchmarking studies. Benchmarking is a highly effective management tool and is useful in the new competitive environment that has emerged from the need of organizations to constantly improve their performance in order to be competitive. Using this benchmarking process, organizations try to find the best practices applied in a business, learn from famous leaders and identify ways to increase their performance and competitiveness. Thus, managers gather information about market trends and about competitors, especially about the leaders in the field, and use this information to find ideas and set guidelines for development. Benchmarking studies are often used in commerce, real estate, industry, and high-tech software businesses.

  7. The International Criticality Safety Benchmark Evaluation Project (ICSBEP)

    International Nuclear Information System (INIS)

    Briggs, J.B.

    2003-01-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organisation for Economic Cooperation and Development (OECD) - Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Yugoslavia, Kazakhstan, Israel, Spain, and Brazil are now participating. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled 'International Handbook of Evaluated Criticality Safety Benchmark Experiments.' The 2003 Edition of the Handbook contains benchmark model specifications for 3070 critical or subcritical configurations that are intended for validating computer codes that calculate effective neutron multiplication and for testing basic nuclear data. (author)

  8. Benchmark calculations for fusion blanket development

    International Nuclear Information System (INIS)

    Sawan, M.E.; Cheng, E.T.

    1985-01-01

    Benchmark problems representing the leading fusion blanket concepts are presented. Benchmark calculations for self-cooled Li17Pb83 and helium-cooled blankets were performed. Multigroup data libraries generated from ENDF/B-IV and V files using the NJOY and AMPX processing codes with different weighting functions were used. The sensitivity of the TBR to group structure and weighting spectrum increases as the Li enrichment decreases, with up to 20% discrepancies for thin natural Li17Pb83 blankets

  9. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Parallel Discrete Event Simulation (PDES) with the conservative synchronization method can be efficiently used for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost-effective execution platform may be built by using single board computers (SBCs), which offer relatively high computation capacity compared to their price or power consumption and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster and the performance of the cluster is tested, too.

  10. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes (''Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Cask,'' R.E. Glass, Sandia National Laboratories, 1985; ''Sample Problem Manual for Benchmarking of Cask Analysis Codes,'' R.E. Glass, Sandia National Laboratories, 1988; ''Standard Thermal Problem Set for the Evaluation of Heat Transfer Codes Used in the Assessment of Transportation Packages,'' R.E. Glass, et al., Sandia National Laboratories, 1988) used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in benchmarking the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed, as described in ''Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks,'' R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, showing the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem. 6 refs., 5 figs

  11. Tantalum-based semiconductors for solar water splitting.

    Science.gov (United States)

    Zhang, Peng; Zhang, Jijie; Gong, Jinlong

    2014-07-07

    Solar energy utilization is one of the most promising solutions to the energy crisis. Among all the possible means of using solar energy, solar water splitting is remarkable since it accomplishes the conversion of solar energy into chemical energy. The hydrogen produced is clean and sustainable and can be used in many areas. Over the past decades, numerous efforts have been put into this research area, with many important achievements. Improving the overall efficiency and stability of semiconductor photocatalysts is the research focus for solar water splitting. Tantalum-based semiconductors, including tantalum oxide, tantalates, and tantalum (oxy)nitrides, are among the most important photocatalysts. Tantalum oxide has a band gap energy that is suitable for overall solar water splitting. The more negative conduction band minimum of tantalum oxide provides photogenerated electrons with a higher potential for the hydrogen generation reaction. Tantalates, with tunable compositions, show high activities owing to their layered perovskite structure. (Oxy)nitrides, especially TaON and Ta3N5, have small band gaps that respond to visible light, yet they can still realize overall solar water splitting thanks to the proper positions of their conduction band minimum and valence band maximum. This review describes recent progress regarding the improvement of the photocatalytic activities of tantalum-based semiconductors. Basic concepts and principles of solar water splitting are discussed in the introduction section, followed by three main categories corresponding to the different types of tantalum-based semiconductors. In each category, synthetic methodologies, factors influencing the photocatalytic activities, strategies to enhance the efficiencies of photocatalysts, and morphology control of tantalum-based materials are discussed in detail. Future directions for further exploring the research area of tantalum-based semiconductors for solar water splitting are also outlined.
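
    As a reminder of the thermodynamics behind the band-edge requirements mentioned above (a standard textbook relation, not a quotation from the review), the overall reaction and the minimum driving force can be written as

        \[
          2\,\mathrm{H_2O} \;\rightarrow\; 2\,\mathrm{H_2} + \mathrm{O_2},
          \qquad \Delta G^{\circ} \approx 237\ \mathrm{kJ\,mol^{-1}}\ (\text{per mole of } \mathrm{H_2}),
          \qquad E^{\circ} = \frac{\Delta G^{\circ}}{nF} \approx 1.23\ \mathrm{V}\ (n = 2),
        \]

    so a single photocatalyst needs a band gap larger than about 1.23 eV, with a conduction band minimum more negative than the H+/H2 potential and a valence band maximum more positive than the O2/H2O potential.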

  12. Benchmark matrix and guide: Part II.

    Science.gov (United States)

    1991-01-01

    In the last issue of the Journal of Quality Assurance (September/October 1991, Volume 13, Number 5, pp. 14-19), the benchmark matrix developed by Headquarters Air Force Logistics Command was published. Five horizontal levels on the matrix delineate progress in TQM: business as usual, initiation, implementation, expansion, and integration. The six vertical categories that are critical to the success of TQM are leadership, structure, training, recognition, process improvement, and customer focus. In this issue, "Benchmark Matrix and Guide: Part II" will show specifically how to apply the categories of leadership, structure, and training to the benchmark matrix progress levels. At the intersection of each category and level, specific behavior objectives are listed with supporting behaviors and guidelines. Some categories will have objectives that are relatively easy to accomplish, allowing quick progress from one level to the next. Other categories will take considerable time and effort to complete. In the next issue, Part III of this series will focus on recognition, process improvement, and customer focus.

  13. Supermarket Refrigeration System - Benchmark for Hybrid System Control

    DEFF Research Database (Denmark)

    Sloth, Lars Finn; Izadi-Zamanabadi, Roozbeh; Wisniewski, Rafal

    2007-01-01

    This paper presents a supermarket refrigeration system as a benchmark for the development of new ideas and the comparison of methods for hybrid systems modeling and control. The benchmark features switch dynamics and a discrete-valued input, making it a hybrid system; furthermore, the outputs are subjected...

  14. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  15. High efficiency beam splitting for H- accelerators

    International Nuclear Information System (INIS)

    Kramer, S.L.; Stipp, V.; Krieger, C.; Madsen, J.

    1985-01-01

    Beam splitting for high energy accelerators has typically involved significant beam loss and radiation. This paper reports on a new method of splitting beams for H⁻ accelerators. The technique uses a high intensity flash of light to strip a fraction of the H⁻ beam to H⁰ atoms, which are then easily separated by a small bending magnet. A system using a 900-watt (average electrical power) flashlamp and a highly efficient collector will provide 10⁻³ to 10⁻² splitting of a 50 MeV H⁻ beam. Results on the operation and comparisons with stripping cross sections are presented. Also discussed is the possibility of developing this system to yield a higher stripping fraction.
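
    The abstract does not state the underlying relation, but the stripping fraction achievable by photodetachment is commonly estimated from the textbook expression (not taken from the paper)

        \[
          f \;=\; 1 - e^{-\sigma \Phi} \;\approx\; \sigma \Phi \quad (\sigma \Phi \ll 1),
        \]

    where \sigma is the H⁻ photodetachment cross section at the photon energy used and \Phi is the photon fluence seen by the ions while crossing the light field; the 10⁻³ to 10⁻² splitting quoted above corresponds to this small-fluence regime, in which the split fraction scales roughly linearly with lamp power.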

  16. Optimal space-energy splitting in MCNP with the DSA

    International Nuclear Information System (INIS)

    Dubi, A.; Gurvitz, N.

    1990-01-01

    The Direct Statistical Approach (DSA) to particle transport theory is based on the possibility of obtaining exact explicit expressions for the dependence of the second moment and the calculation time on the splitting parameters. This allows automatic optimization of the splitting parameters by ''learning'' the bulk parameters from which the problem-dependent coefficients of the quality function (second moment × time) are constructed. This procedure was exploited to implement an automatic optimization of the splitting parameters in the Monte Carlo Neutron Photon (MCNP) code. This was done in a number of steps. In the first instance, only spatial surface splitting was considered; here the major obstacle has been the truncation of an infinite series of ''products'' of ''surface paths'' leading from the source to the detector. Encouraging results from the first phase led to the inclusion of full space/energy phase-space splitting. (author)
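
    The DSA derives the optimum analytically, but the quantity it minimizes can be illustrated with a brute-force toy. The sketch below is an assumption-laden stand-in (a made-up two-slab penetration game, not MCNP and not the DSA itself): it estimates the variance and run time of a simple surface-splitting scheme for several splitting factors and keeps the factor with the smallest variance × time product.

        # Toy illustration of the quality function: pick the surface-splitting factor
        # that minimizes (empirical variance of the tally) * (CPU time), the same
        # figure of merit the DSA optimizes analytically.
        import random
        import statistics
        import time

        P_SHIELD = 0.1  # penetration probability of each of two slabs (invented model)

        def history(n_split: int) -> float:
            """One source particle; split into n_split copies after the first slab."""
            if random.random() >= P_SHIELD:      # absorbed in slab 1
                return 0.0
            weight = 1.0 / n_split               # fair-game weight of each copy
            return sum(weight for _ in range(n_split) if random.random() < P_SHIELD)

        def quality(n_split: int, histories: int = 100_000, seed: int = 1) -> float:
            random.seed(seed)
            start = time.perf_counter()
            scores = [history(n_split) for _ in range(histories)]
            elapsed = time.perf_counter() - start
            return statistics.variance(scores) * elapsed  # variance * run time

        if __name__ == "__main__":
            best = min(range(1, 17), key=quality)
            print("best splitting factor:", best)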

  17. Simplified two and three dimensional HTTR benchmark problems

    International Nuclear Information System (INIS)

    Zhang Zhan; Rahnema, Farzad; Zhang Dingkang; Pounders, Justin M.; Ougouag, Abderrafi M.

    2011-01-01

    To assess the accuracy of diffusion or transport methods for reactor calculations, it is desirable to create heterogeneous benchmark problems that are typical of whole core configurations. In this paper we have created two and three dimensional numerical benchmark problems typical of high temperature gas cooled prismatic cores. Additionally, single cell and single block benchmark problems are included. These problems were derived from the HTTR start-up experiment. Since the primary utility of the benchmark problems is in code-to-code verification, minor details regarding the geometry and material specification of the original experiment have been simplified, while retaining the heterogeneity and the major physics properties of the core from a neutronics viewpoint. A six-group material (macroscopic) cross section library has been generated for the benchmark problems using the lattice depletion code HELIOS. Using this library, Monte Carlo solutions are presented for three configurations (all-rods-in, partially-controlled, and all-rods-out) for both the 2D and 3D problems. These solutions include the core eigenvalues, the block (assembly) averaged fission densities, local peaking factors, the absorption densities in the burnable poison and control rods, and the pin fission density distributions for selected blocks. Also included are the solutions for the single cell and single block problems.
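
    For readers unfamiliar with the quantities tabulated in such benchmarks, the short sketch below (illustrative only; the densities are invented numbers, not HTTR data) shows how normalized pin powers and a block's local peaking factor are typically obtained from raw pin fission densities.

        # Illustrative only: normalize raw pin fission densities so the block average
        # is 1.0, then report the local peaking factor (hottest pin / block average).
        def normalize_pin_powers(raw):
            mean = sum(raw) / len(raw)
            normalized = [p / mean for p in raw]  # block-averaged power becomes 1.0
            peaking_factor = max(normalized)      # local peaking factor
            return normalized, peaking_factor

        if __name__ == "__main__":
            raw_densities = [0.92, 1.01, 1.10, 0.97, 1.25, 0.88]  # invented numbers
            norm, peak = normalize_pin_powers(raw_densities)
            print([round(p, 3) for p in norm], "peaking factor:", round(peak, 3))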

  18. A large-scale benchmark of gene prioritization methods.

    Science.gov (United States)

    Guala, Dimitri; Sonnhammer, Erik L L

    2017-04-21

    In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for the identification and diagnosis of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually provide only internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized in the construction of robust benchmarks that are objective with respect to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion (NetRank and two implementations of Random Walk with Restart), and MaxLink, which utilizes the network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand, and helping to guide the development of more accurate methods.
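
    Since several of the benchmarked tools are built on Random Walk with Restart, a compact sketch of that algorithm may help readers follow the comparison. The toy network and parameter values below are invented for illustration; they are not the FunCoup network or any benchmarked tool's actual implementation.

        # Random Walk with Restart (RWR) on a small toy gene network.
        # Genes close to the seed gene(s) in the network receive higher scores.
        import numpy as np

        def rwr(adj, seeds, restart=0.3, tol=1e-9, max_iter=1000):
            """Return steady-state visiting probabilities for every node."""
            w = adj / adj.sum(axis=0, keepdims=True)  # column-normalized transition matrix
            p0 = np.zeros(adj.shape[0])
            p0[seeds] = 1.0 / len(seeds)              # restart distribution over seed genes
            p = p0.copy()
            for _ in range(max_iter):
                p_next = (1.0 - restart) * (w @ p) + restart * p0
                if np.abs(p_next - p).sum() < tol:
                    break
                p = p_next
            return p

        if __name__ == "__main__":
            # Symmetric adjacency matrix of a 5-gene toy network (invented edges).
            adj = np.array([[0, 1, 1, 0, 0],
                            [1, 0, 1, 1, 0],
                            [1, 1, 0, 0, 0],
                            [0, 1, 0, 0, 1],
                            [0, 0, 0, 1, 0]], dtype=float)
            scores = rwr(adj, seeds=[0])              # gene 0 is the known disease gene
            print("prioritization order:", np.argsort(-scores).tolist())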

  19. Decoys Selection in Benchmarking Datasets: Overview and Perspectives

    Science.gov (United States)

    Réau, Manon; Langenfeld, Florent; Zagury, Jean-François; Lagarde, Nathalie; Montes, Matthieu

    2018-01-01

    Virtual Screening (VS) is designed to prospectively help identify potential hits, i.e., compounds capable of interacting with a given target and potentially modulating its activity, out of large compound collections. Among the variety of methodologies, it is crucial to select the protocol that is best adapted to the query/target system under study and that yields the most reliable output. To this aim, the performance of VS methods is commonly evaluated and compared by computing their ability to retrieve active compounds from benchmarking datasets. The benchmarking datasets contain a subset of known active compounds together with a subset of decoys, i.e., assumed non-active molecules. The composition of both the active and the decoy compound subsets is critical to limiting the biases in the evaluation of VS methods. In this review, we focus on the selection of decoy compounds, which has changed considerably over the years, from randomly selected compounds to highly customized or experimentally validated negative compounds. We first outline the evolution of decoy selection in benchmarking databases, as well as current benchmarking databases that tend to minimize the introduction of biases, and secondly, we propose recommendations for the selection and the design of benchmarking datasets. PMID:29416509

  20. Developing a benchmark for emotional analysis of music.

    Science.gov (United States)

    Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field has expanded rapidly in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of these new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons, with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures, and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent neural network based approaches combined with large feature sets work best for dynamic MER.