WorldWideScience

Sample records for core performance benchmarking

  1. Benchmarking Pthreads performance

    Energy Technology Data Exchange (ETDEWEB)

    May, J M; de Supinski, B R

    1999-04-27

    The importance of the performance of threads libraries is growing as clusters of shared memory machines become more popular. POSIX threads, or Pthreads, is an industry-standard threads library. We have implemented the first Pthreads benchmark suite. In addition to measuring basic thread functions, such as thread creation, we apply the LogP model to standard Pthreads communication mechanisms. We present the results of our tests for several hardware platforms. These results demonstrate that the performance of existing Pthreads implementations varies widely; parts of nearly all of these implementations could be further optimized. Since hardware differences do not fully explain these performance variations, optimizations could improve the implementations. 2. Incorporating Threads Benchmarks into SKaMPI: SKaMPI is an MPI benchmark suite that provides a general framework for performance analysis [7]. SKaMPI does not exhaustively test the MPI standard. Instead, it
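
    To make the kind of measurement described above concrete, here is a minimal thread-creation timing loop. It uses Python's threading module purely for illustration (the authors' suite targets C Pthreads, and the loop count and reporting format here are assumptions, not theirs):

        # Minimal thread-creation microbenchmark sketch; a real Pthreads
        # suite would time pthread_create() from C instead.
        import threading
        import time

        def time_thread_creation(n=1000):
            """Mean wall-clock seconds to create, start, and join one no-op thread."""
            def worker():
                pass
            start = time.perf_counter()
            for _ in range(n):
                t = threading.Thread(target=worker)
                t.start()
                t.join()
            return (time.perf_counter() - start) / n

        if __name__ == "__main__":
            print(f"mean create+join: {time_thread_creation() * 1e6:.1f} microseconds")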

  2. Benchmarking and Performance Management

    Directory of Open Access Journals (Sweden)

    Adrian TANTAU

    2010-12-01

    Full Text Available The relevance of the chosen topic is explained by the meaning of the firm efficiency concept: firm efficiency means revealed performance (how well the firm performs in the actual market environment given the basic characteristics of the firm and its market that are expected to drive profitability: firm size, market power, etc.). This complex and relative performance could be due to such things as product innovation, management quality, or work organization; other factors can also play a role even if they are not directly observed by the researcher. The critical need for managers to continuously improve their firm's efficiency and effectiveness, and to know the success factors and competitiveness determinants, consequently determines which performance measures are most critical to their firm's overall success. Benchmarking, when done properly, can accurately identify both successful companies and the underlying reasons for their success. Innovation and benchmarking firm-level performance are critical, interdependent activities. Firm-level variables used to infer performance are often interdependent for operational reasons. Hence, managers need to take the dependencies among these variables into account when forecasting and benchmarking performance. This paper studies firm-level performance using financial ratios and other profitability measures. It uses econometric models to describe performance and then proposes a method to forecast and benchmark it.

  3. Thermal Performance Benchmarking (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  4. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    Science.gov (United States)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single-core processor systems. Analyses of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.
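
    A toy illustration of the contention effect described above: the same bandwidth-bound sweep is run on 1, 2, and 4 worker processes, and a falling per-worker rate as workers are added is the memory-contention signature. This is a hedged sketch, not the paper's benchmark; the buffer size is an assumption chosen to exceed typical caches:

        # Each worker sweeps its own large buffer; if memory bandwidth is
        # shared and saturated, per-worker throughput drops with concurrency.
        import multiprocessing as mp
        import time

        NBYTES = 64_000_000  # 64 MB per worker (assumption)

        def stream(_):
            buf = bytearray(NBYTES)
            start = time.perf_counter()
            total = sum(buf)  # sequential sweep over the whole buffer
            return NBYTES / (time.perf_counter() - start)

        if __name__ == "__main__":
            for workers in (1, 2, 4):
                with mp.Pool(workers) as pool:
                    rates = pool.map(stream, range(workers))
                per_worker = sum(rates) / len(rates)
                print(f"{workers} workers: {per_worker / 1e6:.0f} MB/s per worker")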

  5. Benchmark for Strategic Performance Improvement.

    Science.gov (United States)

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  6. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    Research on relative performance measures, transfer pricing, beyond-budgeting initiatives, target costing, piece-rate systems, and value-based management has for decades underlined the importance of external benchmarking in performance management. Research conceptualises external benchmarking...... the conditions upon which the market mechanism is performing within organizations. This paper aims to contribute to research by providing more insight into the conditions for the use of external benchmarking as an element in performance management in organizations. Our study explores a particular type of external...... towards the conditions for the use of the external benchmarks we provide more insight into some of the issues and challenges that are related to using this mechanism for performance management and advancing competitiveness in organizations....

  7. Performance Targets and External Benchmarking

    DEFF Research Database (Denmark)

    Friis, Ivar; Hansen, Allan; Vámosi, Tamás S.

    as a market mechanism that can be brought inside the firm to provide incentives for continuous improvement and the development of competitive advantages. However, whereas extant research primarily has focused on the importance and effects of using external benchmarks, less attention has been directed...... towards the conditions for the use of the external benchmarks we provide more insight into some of the issues and challenges that are related to using this mechanism for performance management and advancing competitiveness in organizations....

  8. Benchmark Evaluation of the NRAD Reactor LEU Core Startup Measurements

    Energy Technology Data Exchange (ETDEWEB)

    J. D. Bess; T. L. Maddock; M. A. Marshall

    2011-09-01

    The Neutron Radiography (NRAD) reactor is a 250-kW TRIGA (Training, Research, Isotope Production, General Atomics) conversion-type reactor at the Idaho National Laboratory; it is primarily used for neutron radiography analysis of irradiated and unirradiated fuels and materials. The NRAD reactor was converted from HEU to LEU fuel with 60 fuel elements and brought critical on March 31, 2010. This configuration of the NRAD reactor has been evaluated as an acceptable benchmark experiment and is available in the 2011 editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) and the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Significant effort went into precisely characterizing all aspects of the reactor core dimensions and material properties; detailed analyses of reactor parameters minimized experimental uncertainties. The largest contributors to the total benchmark uncertainty were the 234U, 236U, Er, and Hf content in the fuel; the manganese content in the stainless steel cladding; and the unknown level of water saturation in the graphite reflector blocks. A simplified benchmark model of the NRAD reactor was prepared with a keff of 1.0012 ± 0.0029 (1σ). Monte Carlo calculations with MCNP5 and KENO-VI and various neutron cross section libraries were performed and compared with the benchmark eigenvalue for the 60-fuel-element core configuration; all calculated eigenvalues are between 0.3 and 0.8% greater than the benchmark value. Benchmark evaluations of the NRAD reactor are beneficial in understanding biases and uncertainties affecting criticality safety analyses of storage, handling, or transportation applications with LEU-Er-Zr-H fuel.
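
    As a worked form of the eigenvalue comparison quoted above, the relative bias (C - E)/E of a calculated keff (C) against the benchmark value (E = 1.0012 ± 0.0029) can be computed directly; the two sample C values below are assumptions chosen inside the reported 0.3-0.8% band, not results from the evaluation:

        # Relative bias of a calculated eigenvalue against the benchmark,
        # expressed in percent and in multiples of the 1-sigma uncertainty.
        K_BENCH, K_SIGMA = 1.0012, 0.0029

        def relative_bias_pct(k_calc):
            return 100.0 * (k_calc - K_BENCH) / K_BENCH

        for k_calc in (1.0045, 1.0090):  # hypothetical calculated values
            sigmas = (k_calc - K_BENCH) / K_SIGMA
            print(f"C = {k_calc:.4f}: bias {relative_bias_pct(k_calc):+.2f}% "
                  f"({sigmas:.1f} sigma)")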

  9. Thermal Performance Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Xuhui; Moreno, Gilbert; Bennion, Kevin

    2016-06-07

    The goal for this project is to thoroughly characterize the thermal performance of state-of-the-art (SOA) in-production automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The thermal performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY16, the 2012 Nissan LEAF power electronics and 2014 Honda Accord Hybrid power electronics thermal management system were characterized. Comparison of the two power electronics thermal management systems was also conducted to provide insight into the various cooling strategies to understand the current SOA in thermal management for automotive power electronics and electric motors.

  10. Thermal Performance Benchmarking: Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  11. TREAT Transient Analysis Benchmarking for the HEU Core

    Energy Technology Data Exchange (ETDEWEB)

    Kontogeorgakos, D. C. [Argonne National Lab. (ANL), Argonne, IL (United States); Connaway, H. M. [Argonne National Lab. (ANL), Argonne, IL (United States); Wright, A. E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-05-01

    This work was performed to support the feasibility study on the potential conversion of the Transient Reactor Test Facility (TREAT) at Idaho National Laboratory from the use of high enriched uranium (HEU) fuel to the use of low enriched uranium (LEU) fuel. The analyses were performed by the GTRI Reactor Conversion staff at the Argonne National Laboratory (ANL). The objective of this study was to benchmark the transient calculations against temperature-limited transients performed in the final operating HEU TREAT core configuration. The MCNP code was used to evaluate steady-state neutronics behavior, and the point kinetics code TREKIN was used to determine core power and energy during transients. The first part of the benchmarking process was to calculate with MCNP all the neutronic parameters required by TREKIN to simulate the transients: the transient rod-bank worth, the prompt neutron generation lifetime, the temperature reactivity feedback as a function of total core energy, and the core-average temperature and peak temperature as functions of total core energy. The results of these calculations were compared against measurements or against reported values as documented in the available TREAT reports. The heating of the fuel was simulated as an adiabatic process. The reported values were extracted from ANL reports, intra-laboratory memos, and experiment logsheets, and in some cases it was not clear whether the values were based on measurements, on calculations, or on a combination of both. Therefore, it was decided to use the term “reported” values when referring to such data. The methods and results from the HEU core transient analyses will be used for the potential LEU core configurations to predict the converted (LEU) core’s performance.

  12. Benchmarking NWP Kernels on Multi- and Many-core Processors

    Science.gov (United States)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc., (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.

  13. Benchmarking: More Aspects of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Ravindrudu, Rahul [Iowa State Univ., Ames, IA (United States)

    2004-01-01

    The original HPL algorithm makes the assumption that all data can fit entirely in main memory. This assumption will obviously give good performance due to the absence of disk I/O. However, not all applications can fit their entire data in memory. Applications which require a fair amount of I/O to move data between main memory and secondary storage are more indicative of the usage of a Massively Parallel Processor (MPP) system. Given this scenario, a well-designed I/O architecture will play a significant part in the performance of the MPP system on regular jobs, and this is not represented in the current benchmark. The modified HPL algorithm is hoped to be a step toward filling this void. The most important factor in the performance of out-of-core algorithms is the actual I/O operations performed and their efficiency in transferring data between main memory and disk. Various methods were introduced in the report for performing I/O operations. The I/O method to use depends on the design of the out-of-core algorithm. Conversely, the performance of the out-of-core algorithm is affected by the choice of I/O operations. This implies that good performance is achieved when I/O efficiency is closely tied to the out-of-core algorithm, which must therefore be designed with I/O in mind from the start. It is easily observed in the timings for the various plots that I/O plays a significant part in the overall execution time. This leads to an important conclusion: retrofitting an existing code may not be the best choice. The right-looking algorithm selected for the LU factorization is a recursive algorithm and performs well when the entire dataset is in memory. At each stage of the loop the entire trailing submatrix is read into memory panel by panel. This gives a polynomial number of I/O reads and writes. If the left-looking algorithm were selected for the main loop, the number of I/O operations involved would be linear in the number of columns. This is due to the data access
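
    The scaling argument above can be made concrete with back-of-the-envelope panel-read counts; this sketch merely illustrates the abstract's claim (quadratic-order reads for right-looking, linear for left-looking) and is not the modified HPL code:

        # Panel-read counts for the two out-of-core LU variants, with the
        # matrix size n expressed in panels.
        def right_looking_panel_reads(n):
            # At step k the whole trailing submatrix is re-read, panel by panel.
            return sum(n - k for k in range(n))   # n + (n-1) + ... + 1 ~ n^2/2

        def left_looking_panel_reads(n):
            # Per the abstract's claim: I/O linear in the number of columns.
            return n

        for n in (8, 64, 512):
            print(f"n={n:4}: right-looking {right_looking_panel_reads(n):7}, "
                  f"left-looking {left_looking_panel_reads(n):4}")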

  14. Developing integrated benchmarks for DOE performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome measures in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  15. Benchmarking: A tool to enhance performance

    Energy Technology Data Exchange (ETDEWEB)

    Munro, J.F. [Oak Ridge National Lab., TN (United States); Kristal, J. [USDOE Assistant Secretary for Environmental Management, Washington, DC (United States); Thompson, G.; Johnson, T. [Los Alamos National Lab., NM (United States)

    1996-12-31

    The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered, with Headquarters providing facilitatory and integrative functions on an "as needed" basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors in developing the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff (the ones closest to the work) must take ownership of the studies. This avoids the "check the box" mentality associated with some third-party studies. This workshop will provide participants with a basic understanding of why the EM benchmarking team was developed and the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team is using to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This "introduction to benchmarking" is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.

  16. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    provision to the chief physician of the respective department. Professional performance is publicly disclosed due to regulatory requirements. At the same time, chief physicians typically receive bureaucratic benchmarking information from the administration. We find that more frequent bureaucratic...

  17. VENUS-F: A fast lead critical core for benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Kochetkov, A.; Wagemans, J.; Vittiglio, G. [SCK.CEN, Boeretang 200, 2400 Mol (Belgium)

    2011-07-01

    The zero-power thermal-neutron water-moderated facility VENUS at SCK.CEN has been extensively used for benchmarking in the past. In accordance with GEN-IV design tasks (fast reactor systems and accelerator-driven systems), the VENUS facility was modified in 2007-2010 into the fast-neutron facility VENUS-F with solid core components. This paper introduces the projects GUINEVERE and FREYA, which are being conducted at the VENUS-F facility, and it presents the measurement results obtained at the first critical core. Throughout the projects, other fast lead benchmarks will also be investigated. The measurement results of the different configurations can all be used as fast-neutron benchmarks. (authors)

  18. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Prior research documents positive effects of benchmarking information provision on performance and attributes this to social comparisons. However, the effects on professional recipients are unclear. Studies of professional control indicate that professional recipients often resist bureaucratic...... controls because of organizational-professional conflicts. We therefore analyze the association between bureaucratic benchmarking information provision and professional performance and suggest that the association is more positive if prior professional performance was low. We test our hypotheses based...... and professional performance but only if prior professional performance was low. Supplemental analyses support the robustness of our results. Findings indicate conditions under which bureaucratic benchmarking information may affect professional performance and advance research on professional control and social...

  19. Performance Benchmarking Student Transportation Operations.

    Science.gov (United States)

    Forsyth, Andy

    2001-01-01

    Student transportation complexities make evaluating a program's cost and quality very difficult. The first step in measuring performance is defining an operation's functional components: level of service delivery, units of service, and cost of services. Other considerations include routing, logistics, and fleet maintenance and support operations.…

  20. Performance Benchmarking of Fast Multipole Methods

    KAUST Repository

    Al-Harthi, Noha A.

    2013-06-01

    The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity – vector length, core count, and MPI process. Intel’s Xeon Phi coprocessor, NVIDIA’s Kepler GPU, and IBM’s BlueGene/Q all have a Byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing such as FFT are continuously evolving to keep up with this trend in hardware. In the meantime it is also necessary to invest in novel algorithms that are more suitable for computer architectures of the future. The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex-based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm while having an operational intensity that is higher than a matrix-matrix multiplication. In fact, the FMM can reduce the requirement of Byte/flop to around 0.01, which means that it will remain compute bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q. This study aims to provide a comprehensive benchmark of a state-of-the-art FMM code “exaFMM” on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning for certain problem-size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than the asymptotics themselves, and therefore are strongly implementation and hardware
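
    A roofline-style reading of the Byte/flop figures above: a kernel that needs fewer bytes per flop than the machine can supply stays compute bound. The 0.2 machine balance and the ~0.01 FMM requirement come from the abstract, while the absolute peak and bandwidth numbers below are hypothetical:

        # Attainable throughput is capped either by peak flop/s or by how
        # fast memory can feed the kernel's byte-per-flop requirement.
        def attainable_gflops(peak_gflops, bw_gbs, bytes_per_flop):
            return min(peak_gflops, bw_gbs / bytes_per_flop)

        PEAK, BW = 1000.0, 200.0   # hypothetical machine: B/F = 200/1000 = 0.2
        for kernel, bf in (("stencil-like", 1.0), ("dense matmul", 0.1), ("FMM", 0.01)):
            print(f"{kernel:>12}: {attainable_gflops(PEAK, BW, bf):6.0f} Gflop/s attainable")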

  1. Professional Performance and Bureaucratic Benchmarking Information

    DEFF Research Database (Denmark)

    Schneider, Melanie L.; Mahlendorf, Matthias D.; Schäffer, Utz

    Professionals are often expected to be reluctant with regard to bureaucratic controls because of assumed conflicting values and goals of the organization vis-à-vis the profession. We suggest, however, that the provision of bureaucratic benchmarking information is positively associated with professional performance...... for 191 orthopaedics departments of German hospitals matched with survey data on bureaucratic benchmarking information provision to the chief physician of the respective department. Professional performance is publicly disclosed due to regulatory requirements. At the same time, chief physicians typically...

  2. The PROOF benchmark suite measuring PROOF performance

    Science.gov (United States)

    Ryu, S.; Ganis, G.

    2012-06-01

    The PROOF benchmark suite is a new utility suite for PROOF that measures performance and scalability. The primary goal of the benchmark suite is to determine optimal configuration parameters for a set of machines to be used as a PROOF cluster. The suite measures the performance of the cluster for a set of standard tasks as a function of the number of effective processes. Cluster administrators can use the suite to measure the performance of the cluster and find optimal configuration parameters. PROOF developers can also utilize the suite to help them measure performance, identify problems, and improve their software. In this paper, the new tool is explained in detail and use cases are presented to illustrate its application.

  3. VENUS-2 MOX Core Benchmark: Results of ORNL Calculations Using HELIOS-1.4

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, RJ

    2001-02-02

    The Task Force on Reactor-Based Plutonium Disposition, now an Expert Group, was set up through the Organization for Economic Cooperation and Development/Nuclear Energy Agency to facilitate technical assessments of burning weapons-grade plutonium mixed-oxide (MOX) fuel in U.S. pressurized-water reactors and Russian VVER nuclear reactors. More than ten countries participated in a major initiative of the Task Force: a blind benchmark study to compare code calculations against experimental data for the VENUS-2 MOX core at SCK-CEN in Mol, Belgium. At the Oak Ridge National Laboratory, the HELIOS-1.4 code was used to perform a comprehensive study of pin-cell and core calculations for the VENUS-2 benchmark.

  4. Benchmarking Performance of Web Service Operations

    OpenAIRE

    Zhang, Shuai

    2011-01-01

    Web services are often used for retrieving data from servers providing information of different kinds. A data-providing web service operation returns collections of objects for a given set of arguments without any side effects. In this project a web service benchmark (WSBENCH) is developed to simulate the performance of web service calls. Web service operations are specified as SQL statements. The function generator of WSBENCH converts user-specified SQL queries into functions and automatical...

  5. VENUS-2 MOX Core Benchmark: Results of ORNL Calculations Using HELIOS-1.4 - Revised Report

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, RJ

    2001-06-01

    The Task Force on Reactor-Based Plutonium Disposition (TFRPD) was formed by the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) to study reactor physics, fuel performance, and fuel cycle issues related to the disposition of weapons-grade (WG) plutonium as mixed-oxide (MOX) reactor fuel. To advance the goals of the TFRPD, 10 countries and 12 institutions participated in a major TFRPD activity: a blind benchmark study to compare code calculations to experimental data for the VENUS-2 MOX core at SCK-CEN in Mol, Belgium. At Oak Ridge National Laboratory, the HELIOS-1.4 code system was used to perform the comprehensive study of pin-cell and MOX core calculations for the VENUS-2 MOX core benchmark study.

  6. EVA Health and Human Performance Benchmarking Study

    Science.gov (United States)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as shirtsleeves using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits and different suit configurations (e.g., varied pressure, mass, CG) may be reliably compared in subsequent tests. Results will also inform fitness-for-duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  7. Benchmark Evaluation of the HTR-PROTEUS Absorber Rod Worths (Core 4)

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess; Leland M. Montierth

    2014-06-01

    PROTEUS was a zero-power research reactor at the Paul Scherrer Institute (PSI) in Switzerland. The critical assembly was constructed from a large graphite annulus surrounding a central cylindrical cavity. Various experimental programs were investigated in PROTEUS; during the years 1992 through 1996, it was configured as a pebble-bed reactor and designated HTR-PROTEUS. Various critical configurations were assembled, each accompanied by an assortment of reactor physics experiments including differential and integral absorber rod measurements, kinetics, reaction rate distributions, water ingress effects, and small sample reactivity effects [1]. Four benchmark reports were previously prepared and included in the March 2013 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook) [2], evaluating eleven critical configurations. A summary of that effort was previously provided [3], and an analysis of absorber rod worth measurements for Cores 9 and 10 was performed prior to this analysis and included in PROTEUS-GCR-EXP-004 [4]. In the current benchmark effort, absorber rod worths measured for Core Configuration 4, which was the only core with a randomly packed pebble loading, have been evaluated for inclusion as a revision to the HTR-PROTEUS benchmark report PROTEUS-GCR-EXP-002.

  8. Benchmarking database performance for genomic data.

    Science.gov (United States)

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites, and epigenetic modifications. Performing various genomic operations, such as identifying overlapping/non-overlapping regions or nearest gene annotations, is a common research need. The data can be saved in a database system for easy management; however, no comprehensive built-in database algorithm currently exists to identify overlapping regions. Therefore I have developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although the general searching capability of both databases was almost equivalent. In addition, applying the algorithm pair-wise, overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, were reported, and it was found that HNF4G significantly co-locates with cohesin subunit STAG1 (SA1).
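
    For illustration, an interval-overlap join of the kind RegMap performs can be expressed in plain SQL (shown here via sqlite3). The table layout and data are assumptions for the sketch; the actual RegMap algorithm and schema are not reproduced:

        # Two intervals on the same chromosome overlap iff each starts
        # before the other ends.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE peaks (chrom TEXT, start INT, stop INT, name TEXT);
        CREATE TABLE genes (chrom TEXT, start INT, stop INT, name TEXT);
        """)
        con.executemany("INSERT INTO peaks VALUES (?,?,?,?)",
                        [("chr1", 100, 200, "p1"), ("chr1", 500, 600, "p2")])
        con.executemany("INSERT INTO genes VALUES (?,?,?,?)",
                        [("chr1", 150, 400, "g1"), ("chr1", 700, 800, "g2")])

        rows = con.execute("""
        SELECT p.name, g.name FROM peaks p JOIN genes g
        ON p.chrom = g.chrom AND p.start < g.stop AND g.start < p.stop
        """).fetchall()
        print(rows)   # [('p1', 'g1')]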

  9. Towards Systematic Benchmarking of Climate Model Performance

    Science.gov (United States)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. Making the results from routine

  10. Performance benchmarks for a next generation numerical dynamo model

    Science.gov (United States)

    Matsui, Hiroaki; Heien, Eric; Aubert, Julien; Aurnou, Jonathan M.; Avery, Margaret; Brown, Ben; Buffett, Bruce A.; Busse, Friedrich; Christensen, Ulrich R.; Davies, Christopher J.; Featherstone, Nicholas; Gastine, Thomas; Glatzmaier, Gary A.; Gubbins, David; Guermond, Jean-Luc; Hayashi, Yoshi-Yuki; Hollerbach, Rainer; Hwang, Lorraine J.; Jackson, Andrew; Jones, Chris A.; Jiang, Weiyuan; Kellogg, Louise H.; Kuang, Weijia; Landeau, Maylis; Marti, Philippe; Olson, Peter; Ribeiro, Adolfo; Sasaki, Youhei; Schaeffer, Nathanaël.; Simitev, Radostin D.; Sheyko, Andrey; Silva, Luis; Stanley, Sabine; Takahashi, Futoshi; Takehiro, Shin-ichi; Wicht, Johannes; Willis, Ashley P.

    2016-05-01

    Numerical simulations of the geodynamo have successfully represented many observable characteristics of the geomagnetic field, yielding insight into the fundamental processes that generate magnetic fields in the Earth's core. Because of limited spatial resolution, however, the diffusivities in numerical dynamo models are much larger than those in the Earth's core, and consequently, questions remain about how realistic these models are. The typical strategy used to address this issue has been to continue to increase the resolution of these quasi-laminar models with increasing computational resources, thus pushing them toward more realistic parameter regimes. We assess which methods are most promising for the next generation of supercomputers, which will offer access to O(10^6) processor cores for large problems. Here we report performance and accuracy benchmarks from 15 dynamo codes that employ a range of numerical and parallelization methods. Computational performance is assessed on the basis of weak and strong scaling behavior up to 16,384 processor cores. Extrapolations of our weak-scaling results indicate that dynamo codes that employ two-dimensional or three-dimensional domain decompositions can perform efficiently on up to ~10^6 processor cores, paving the way for more realistic simulations in the next model generation.
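
    The weak-scaling bookkeeping behind such an extrapolation is simple: with fixed work per core, efficiency at p cores is the baseline time divided by the time at p. The sample timings below are purely illustrative, not data from the 15 benchmarked codes:

        # Weak-scaling efficiency: ideal scaling keeps the runtime constant
        # as cores (and total problem size) grow together.
        timings = {16: 10.0, 128: 10.4, 1024: 11.1, 16384: 12.5}  # seconds (made up)

        base = timings[16]
        for cores, t in sorted(timings.items()):
            eff = base / t
            print(f"{cores:>6} cores: efficiency {eff:5.1%}")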

  11. Benchmark calculation for water reflected STACY cores containing low enriched uranyl nitrate solution

    Energy Technology Data Exchange (ETDEWEB)

    Miyoshi, Yoshinori; Yamamoto, Toshihiro; Nakamura, Takemi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-08-01

    In order to validate the availability of criticality calculation codes and related nuclear data libraries, a series of fundamental benchmark experiments on low-enriched uranyl nitrate solution have been performed with the Static Experiment Criticality Facility (STACY) at JAERI. The basic core, composed of a single tank with a water reflector, was used for accumulating systematic data with well-known experimental uncertainties. This paper presents the outline of the core configurations of STACY, the standard calculation model, and calculation results with a Monte Carlo code and the JENDL 3.2 nuclear data library. (author)

  12. Ground truth and benchmarks for performance evaluation

    Science.gov (United States)

    Takeuchi, Ayako; Shneier, Michael; Hong, Tsai Hong; Chang, Tommy; Scrapper, Christopher; Cheok, Geraldine S.

    2003-09-01

    Progress in algorithm development and transfer of results to practical applications such as military robotics requires the setup of standard tasks and of standard qualitative and quantitative measurements for performance evaluation and validation. Although the evaluation and validation of algorithms have been discussed for over a decade, the research community still faces a lack of well-defined and standardized methodology. The range of fundamental problems includes a lack of quantifiable measures of performance, a lack of data from state-of-the-art sensors in calibrated real-world environments, and a lack of facilities for conducting realistic experiments. In this research, we propose three methods for creating ground truth databases and benchmarks using multiple sensors. The databases and benchmarks will provide researchers with high quality data from suites of sensors operating in complex environments representing real problems of great relevance to the development of autonomous driving systems. At NIST, we have prototyped a High Mobility Multi-purpose Wheeled Vehicle (HMMWV) system with a suite of sensors including a Riegl ladar, GDRS ladar, stereo CCD, several color cameras, Global Position System (GPS), Inertial Navigation System (INS), pan/tilt encoders, and odometry. All sensors are calibrated with respect to each other in space and time. This allows a database of features and terrain elevation to be built. Ground truth for each sensor can then be extracted from the database. The main goal of this research is to provide ground truth databases for researchers and engineers to evaluate algorithms for effectiveness, efficiency, reliability, and robustness, thus advancing the development of algorithms.

  13. Test Nationally, Benchmark Locally: Using Local DIBELS Benchmarks to Predict Performance on the Pssa

    Science.gov (United States)

    Ferchalk, Matthew R.

    2013-01-01

    The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) benchmarks are frequently used to make important decisions regarding student performance. More information, however, is needed to understand whether the nationally derived benchmarks created by the DIBELS system provide the most accurate criterion for evaluating reading proficiency. The…

  14. Large Core Code Evaluation Working Group Benchmark Problem Four: neutronics and burnup analysis of a large heterogeneous fast reactor. Part 1. Analysis of benchmark results. [LMFBR]

    Energy Technology Data Exchange (ETDEWEB)

    Cowan, C.L.; Protsik, R.; Lewellen, J.W. (eds.)

    1984-01-01

    The Large Core Code Evaluation Working Group Benchmark Problem Four was specified to provide a stringent test of the current methods used in the nuclear design and analysis process. The benchmark specifications provided a basis for performing detailed burnup calculations over the first two irradiation cycles for a large heterogeneous fast reactor. Particular emphasis was placed on the techniques for modeling the three-dimensional benchmark geometry, and sensitivity studies were carried out to determine the performance parameter sensitivities to changes in the neutronics and burnup specifications. The results of the Benchmark Four calculations indicated that a linked RZ-XY (Hex) two-dimensional representation of the benchmark model geometry can be used to predict mass balance data, power distributions, regionwise fuel exposure data, and burnup reactivities with good accuracy when compared with the results of direct three-dimensional computations. Most of the small differences in the results of the benchmark analyses by the different participants were attributed to ambiguities in carrying out the regionwise flux renormalization calculations throughout the burnup step.

  15. BENCHMARK EVALUATION OF THE START-UP CORE REACTOR PHYSICS MEASUREMENTS OF THE HIGH TEMPERATURE ENGINEERING TEST REACTOR

    Energy Technology Data Exchange (ETDEWEB)

    John Darrell Bess

    2010-05-01

    The benchmark evaluation of the start-up core reactor physics measurements performed with Japan’s High Temperature Engineering Test Reactor, in support of the Next Generation Nuclear Plant Project and Very High Temperature Reactor Program activities at the Idaho National Laboratory, has been completed. The evaluation was performed using MCNP5 with ENDF/B-VII.0 nuclear data libraries and according to guidelines provided for inclusion in the International Reactor Physics Experiment Evaluation Project Handbook. Results provided include an updated evaluation of the initial six critical core configurations (five annular and one fully loaded). The calculated keff eigenvalues agree within 1σ of the benchmark values. Reactor physics measurements that were evaluated include reactivity effects measurements, such as excess reactivity during the core loading process and shutdown margins for the fully-loaded core; four isothermal temperature reactivity coefficient measurements for the fully-loaded core; and axial reaction rate measurements in the instrumentation columns of three core configurations. The calculated values agree well with the benchmark experiment measurements. Fully subcritical and warm critical configurations of the fully-loaded core were also assessed. The calculated keff eigenvalues for these two configurations also agree within 1σ of the benchmark values. The reactor physics measurement data can be used in the validation and design development of future High Temperature Gas-cooled Reactor systems.

  16. Benchmarking spin-state chemistry in starless core models

    CERN Document Server

    Sipilä, O; Harju, J

    2015-01-01

    Aims. We aim to present simulated chemical abundance profiles for a variety of important species, with special attention given to spin-state chemistry, in order to provide reference results against which present and future models can be compared. Methods. We employ gas-phase and gas-grain models to investigate chemical abundances in physical conditions corresponding to starless cores. To this end, we have developed new chemical reaction sets for both gas-phase and grain-surface chemistry, including the deuterated forms of species with up to six atoms and the spin-state chemistry of light ions and of the species involved in the ammonia and water formation networks. The physical model is kept simple in order to facilitate straightforward benchmarking of other models against the results of this paper. Results. We find that the ortho/para ratios of ammonia and water are similar in both gas-phase and gas-grain models, at late times in particular, implying that the ratios are determined by gas-phase processes. We d...

  17. DEVELOPMENT OF A MARKET BENCHMARK PRICE FOR AGMAS PERFORMANCE EVALUATIONS

    OpenAIRE

    Good, Darrel L.; Irwin, Scott H.; Jackson, Thomas E.

    1998-01-01

    The purpose of this research report is to identify the appropriate market benchmark price to use to evaluate the pricing performance of market advisory services that are included in the annual AgMAS pricing performance evaluations. Five desirable properties of market benchmark prices are identified. Three potential specifications of the market benchmark price are considered: the average price received by Illinois farmers, the harvest cash price, and the average cash price over a two-year crop...

  18. BIM quickscan: benchmark of BIM performance in the Netherlands

    NARCIS (Netherlands)

    Berlo, L.A.H.M. van; Dijkmans, T.J.A.; Hendriks, H.; Spekkink, D.; Pel, W.

    2012-01-01

    In 2009 a “BIM QuickScan” for benchmarking BIM performance was created in the Netherlands (Sebastian, Berlo 2010). This instrument aims to provide insight into the current BIM performance of a company. The benchmarking instrument combines quantitative and qualitative assessments of the ‘hard’ and ‘soft’ aspects.

  19. Developing Benchmarks to Measure Teacher Candidates' Performance

    Science.gov (United States)

    Frazier, Laura Corbin; Brown-Hobbs, Stacy; Palmer, Barbara Martin

    2013-01-01

    This paper traces the development of teacher candidate benchmarks at one liberal arts institution. Begun as a classroom assessment activity over ten years ago, the benchmarks, through collaboration with professional development school partners, now serve as a primary measure of teacher candidates' performance in the final phases of the…

  1. The State of Energy and Performance Benchmarking for Enterprise Servers

    Science.gov (United States)

    Fanara, Andrew; Haines, Evan; Howard, Arthur

    To address the server industry’s marketing focus on performance, benchmarking organizations have played a pivotal role in developing techniques to determine the maximum achievable performance level of a system. Generally missing has been an assessment of energy use to achieve that performance. The connection between performance and energy consumption is becoming necessary information for designers and operators as they grapple with power constraints in the data center. While industry and policy makers continue to strategize about a universal metric to holistically measure IT equipment efficiency, existing server benchmarks for various workloads could provide an interim proxy to assess the relative energy efficiency of general servers. This paper discusses ideal characteristics a future energy-performance benchmark might contain, suggests ways in which current benchmarks might be adapted to provide a transitional step to this end, and notes the need for multiple workloads to provide a holistic proxy for a universal metric.
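
    A minimal numeric sketch of the energy-performance idea discussed above: report throughput per watt at several load levels rather than a single peak number, in the spirit of graduated-load server benchmarks. All figures are invented placeholders:

        # (utilization, ops/s, watts) -- hypothetical measurements
        loads = [
            (1.00, 100_000, 300.0),
            (0.50,  50_000, 210.0),
            (0.10,  10_000, 140.0),
        ]

        for util, ops, watts in loads:
            print(f"{util:4.0%} load: {ops / watts:8.1f} ops/s per watt")
        # Idle power keeps low-load efficiency poor, which is why a single
        # peak-load number can misrepresent real data-center energy use.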

  2. Benchmarking the CRBLASTER Computational Framework on the 350-MHz 49-core Maestro Development Board

    Science.gov (United States)

    Mighell, K. J.

    2012-09-01

    I describe the performance of the CRBLASTER computational framework on a 350-MHz 49-core Maestro Development Board (MDB). The 49-core Interim Test Chip (ITC) was developed by the U.S. Government and is based on the intellectual property of the 64-core TILE64 processor of the Tilera Corporation. The Maestro processor is intended for use in the high-radiation environments found in space; the ITC was fabricated using IBM 90-nm CMOS 9SF technology and Radiation-Hardening-by-Design (RHBD) rules. CRBLASTER is a parallel-processing cosmic-ray rejection application based on a simple computational framework that uses the high-performance computing industry standard Message Passing Interface (MPI) library. CRBLASTER was designed to be used by research scientists to easily port image-analysis programs based on embarrassingly-parallel algorithms to a parallel-processing environment such as a multi-node Beowulf cluster or multi-core processors using MPI. I describe my experience of porting CRBLASTER to the 64-core TILE64 processor, the Maestro simulator, and finally the 49-core Maestro processor itself. Performance comparisons using the ITC are presented between emulating all floating-point operations in software and doing all floating-point operations with hardware assist from an IEEE-754 compliant Aurora FPU (floating point unit) attached to each of the 49 cores. Benchmarking of the CRBLASTER computational framework using the memory-intensive L.A.COSMIC cosmic ray rejection algorithm and a compute-intensive Poisson noise generator reveals subtleties of the Maestro hardware design. Lastly, I describe the importance of using real scientific applications during the testing phase of next-generation computer hardware; complex real-world scientific applications can stress hardware in novel ways that may not necessarily be revealed while executing simple applications or unit tests.

  3. Performance Benchmarks for Screening Breast MR Imaging in Community Practice.

    Science.gov (United States)

    Lee, Janie M; Ichikawa, Laura; Valencia, Elizabeth; Miglioretti, Diana L; Wernli, Karen; Buist, Diana S M; Kerlikowske, Karla; Henderson, Louise M; Sprague, Brian L; Onega, Tracy; Rauscher, Garth H; Lehman, Constance D

    2017-10-01

    Purpose To compare screening magnetic resonance (MR) imaging performance in the Breast Cancer Surveillance Consortium (BCSC) with Breast Imaging Reporting and Data System (BI-RADS) benchmarks. Materials and Methods This study was approved by the institutional review board and compliant with HIPAA and included BCSC screening MR examinations collected between 2005 and 2013 from 5343 women (8387 MR examinations) linked to regional Surveillance, Epidemiology, and End Results program registries, state tumor registries, and pathologic information databases that identified breast cancer cases and tumor characteristics. Clinical, demographic, and imaging characteristics were assessed. Performance measures were calculated according to the BI-RADS fifth edition and included cancer detection rate (CDR), positive predictive value of biopsy recommendation (PPV2), sensitivity, and specificity. Results The median patient age was 52 years; 52% of MR examinations were performed in women with a first-degree family history of breast cancer, 46% in women with a personal history of breast cancer, and 15% in women with both risk factors. Screening MR imaging depicted 146 cancers, and 35 interval cancers were identified (181 total: 54 in situ, 125 invasive, and two of unknown status). The CDR was 17 per 1000 screening examinations (95% confidence interval [CI]: 15, 20 per 1000 screening examinations; BI-RADS benchmark, 20-30 per 1000 screening examinations). PPV2 was 19% (95% CI: 16%, 22%; benchmark, 15%). Sensitivity was 81% (95% CI: 75%, 86%; benchmark, >80%), and specificity was 83% (95% CI: 82%, 84%; benchmark, 85%-90%). The median tumor size of invasive cancers was 10 mm; 88% were node negative. Conclusion The interpretative performance of screening MR imaging in the BCSC meets most BI-RADS benchmarks and approaches benchmark levels for the remaining measures. Clinical practice performance data can inform ongoing benchmark development and help identify areas for quality improvement. © RSNA
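
    The reported measures follow directly from standard screening definitions. The sketch below recomputes them from the abstract's counts where given (8387 examinations, 146 screen-detected and 35 interval cancers); the true-negative split is back-calculated from the reported specificity, and the biopsy-recommendation count used for PPV2 is a hypothetical value, since the abstract does not give it:

        # CDR = cancers detected per 1000 exams; sensitivity = TP/(TP+FN);
        # specificity = TN/(TN+FP); PPV2 = TP / biopsies recommended.
        exams, tp, fn = 8387, 146, 35
        non_cancer = exams - (tp + fn)       # 8206 exams without cancer
        tn = round(0.83 * non_cancer)        # implied by the reported 83% specificity
        fp = non_cancer - tn
        biopsy_recommended = 768             # hypothetical count giving PPV2 ~ 19%

        print(f"CDR:         {1000 * tp / exams:.0f} per 1000 exams")
        print(f"sensitivity: {tp / (tp + fn):.0%}")
        print(f"specificity: {tn / (tn + fp):.0%}")
        print(f"PPV2:        {tp / biopsy_recommended:.0%}")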

  4. Performance and Scalability of the NAS Parallel Benchmarks in Java

    Science.gov (United States)

    Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Several features make Java an attractive choice for scientific applications. In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for scientific applications.

  5. The Army Pollution Prevention Program: Improving Performance Through Benchmarking.

    Science.gov (United States)

    1995-06-01

    questionnaire, which had 30 questions addressing 12 key maintenance performance measures. The measures were selected to represent a balanced scorecard of...in practice. While the continuing concern for the hazardous waste stream is genuine and well-founded, the Army must seek a more balanced approach...exactly what the benchmarking process entails. For example, benchmarking is commonly confused with industrial tourism, simply visiting a partner’s site

  6. An automated protocol for performance benchmarking a widefield fluorescence microscope.

    Science.gov (United States)

    Halter, Michael; Bier, Elianna; DeRose, Paul C; Cooksey, Gregory A; Choquette, Steven J; Plant, Anne L; Elliott, John T

    2014-11-01

    Widefield fluorescence microscopy is a highly used tool for visually assessing biological samples and for quantifying cell responses. Despite its widespread use in high content analysis and other imaging applications, few published methods exist for evaluating and benchmarking the analytical performance of a microscope. Easy-to-use benchmarking methods would facilitate the use of fluorescence imaging as a quantitative analytical tool in research applications, and would aid the determination of instrumental method validation for commercial product development applications. We describe and evaluate an automated method to characterize a fluorescence imaging system's performance by benchmarking the detection threshold, saturation, and linear dynamic range to a reference material. The benchmarking procedure is demonstrated using two different materials as the reference material, uranyl-ion-doped glass and Schott 475 GG filter glass. Both are suitable candidate reference materials that are homogeneously fluorescent and highly photostable, and the Schott 475 GG filter glass is currently commercially available. In addition to benchmarking the analytical performance, we also demonstrate that the reference materials provide for accurate day to day intensity calibration. Published 2014 Wiley Periodicals Inc.
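
    A hypothetical sketch of the kind of computation such a protocol automates, assuming a linear camera response to a photostable reference; all numbers, names, and thresholds below are illustrative, not the authors' implementation.

        import numpy as np

        # Simulated mean ROI intensities of a photostable reference imaged at
        # increasing exposure times: linear response clipped at saturation.
        rng = np.random.default_rng(1)
        exposure_ms = np.array([1., 2., 5., 10., 20., 50., 100., 200., 500.])
        full_scale = 4095.0                          # 12-bit camera (assumed)
        signal = np.minimum(20.0 * exposure_ms, full_scale)
        signal += rng.normal(0.0, 3.0, signal.size)

        # Detection threshold: 3 sigma above the dark level (from dark frames).
        dark_mean, dark_sd = 100.0, 3.0
        threshold = dark_mean + 3.0 * dark_sd

        # Linear dynamic range: span between the threshold and the onset of
        # saturation (taken here as 90% of full scale).
        in_linear = (signal > threshold) & (signal < 0.9 * full_scale)
        dynamic_range = signal[in_linear].max() / threshold
        print(f"linear dynamic range ~ {dynamic_range:.0f}:1")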

  7. Benchmarking the financial performance of local councils in Ireland

    Directory of Open Access Journals (Sweden)

    Robbins Geraldine

    2016-05-01

    Full Text Available It was over a quarter of a century ago that information from the financial statements was used to benchmark the efficiency and effectiveness of local government in the US. With the global adoption of New Public Management ideas, benchmarking practice spread to the public sector and has been employed to drive reforms aimed at improving performance and, ultimately, service delivery and local outcomes. The manner in which local authorities in OECD countries compare and benchmark their performance varies widely. The methodology developed in this paper to rate the relative financial performance of Irish city and county councils is adapted from an earlier assessment tool used to measure the financial condition of small cities in the US. Using our financial performance framework and the financial data in the audited annual financial statements of Irish local councils, we calculate composite scores for each of the thirty-four local authorities for the years 2007–13. This paper contributes composite scores that measure the relative financial performance of local councils in Ireland, as well as a full set of yearly results for a seven-year period in which local governments witnessed significant changes in their financial health. The benchmarking exercise is useful in highlighting those councils that, in relative financial performance terms, are the best/worst performers.
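
    A minimal sketch of the composite-scoring idea in Python; the councils, indicator names, values, and equal weighting are invented for illustration and are not the paper's actual framework.

        import numpy as np

        # Rows are councils, columns are financial indicators; all values
        # are hypothetical.
        councils = ["Council A", "Council B", "Council C"]
        X = np.array([[1.2, 0.40, 0.05],    # e.g. liquidity, debt ratio, surplus
                      [0.9, 0.70, -0.02],
                      [1.5, 0.30, 0.08]])
        higher_is_better = np.array([True, False, True])

        # Z-score each indicator across councils, flip "lower is better"
        # columns, then average into a single relative score.
        Z = (X - X.mean(axis=0)) / X.std(axis=0)
        Z[:, ~higher_is_better] *= -1.0
        composite = Z.mean(axis=1)

        for name, score in sorted(zip(councils, composite), key=lambda p: -p[1]):
            print(f"{name}: {score:+.2f}")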

  8. Benchmarking the performance of daily temperature homogenisation algorithms

    Science.gov (United States)

    Warren, Rachel; Bailey, Trevor; Jolliffe, Ian; Willett, Kate

    2015-04-01

    This work explores the creation of realistic synthetic data and its use as a benchmark for comparing the performance of different homogenisation algorithms on daily temperature data. Four different regions in the United States have been selected and three different inhomogeneity scenarios explored for each region. These benchmark datasets are beneficial as, unlike in the real world, the underlying truth is known a priori, thus allowing definite statements to be made about the performance of the algorithms run on them. Performance can be assessed in terms of the ability of algorithms to detect changepoints and also their ability to correctly remove inhomogeneities. The focus is on daily data, thus presenting new challenges in comparison to monthly data and pushing the boundaries of previous studies. The aims of this work are to evaluate and compare the performance of various homogenisation algorithms, aiding their improvement and enabling a quantification of the uncertainty remaining in the data even after they have been homogenised. An important outcome is also to evaluate how realistic the created benchmarks are. It is essential that any weaknesses in the benchmarks are taken into account when judging algorithm performance against them. This information in turn will help to improve future versions of the benchmarks. I intend to present a summary of this work including the method of benchmark creation, details of the algorithms run and some preliminary results. This work forms a three year PhD and feeds into the larger project of the International Surface Temperature Initiative which is working on a global scale and with monthly instead of daily data.

  9. Benchmarking the environmental performances of farms

    NARCIS (Netherlands)

    Snoo, de G.R.

    2006-01-01

    Background, Aim and Scope The usual route for improvement of agricultural practice towards sustainability runs via labelling schemes for products or farm practices. In most approaches requirements are set in absolute terms, disregarding the variation in environmental performance of farms. Another ap

  10. Atmospheric fluidized bed combustion (AFBC) plants: A performance benchmarking study

    Energy Technology Data Exchange (ETDEWEB)

    Fuller, J. A.; Beavers, H.; Bonk, D. [West Virginia University, College of Business and Economics, Division of Business Administration, Morgantown, WV (United States)

    2004-03-31

    Data from a fluidized bed boiler survey distributed during the spring of 2000 to gather data for developing atmospheric fluidized bed combustion (AFBC) performance benchmarks are analyzed. The survey was sent to members of the Council of Industrial Boiler Owners; 35 surveys were usable for analysis. A total of 18 benchmarks were considered. While the results were not such as to permit a definitive set of conclusions, the survey was successful in providing practical information to assist plant owners, operators and developers to understand their operations and to assess potential solutions or to establish preventative maintenance programs. 36 refs., 2 tabs.

  11. How Are You Doing? Key Performance Indicators and Benchmarking

    Science.gov (United States)

    Fahey, John P.

    2011-01-01

    School business officials need to "know and show" that their operations are well managed. To do so, they ask themselves questions, such as "How are they doing? How do they compare with others? Are they making progress fast enough? Are they using the best practices?" Using key performance indicators (KPIs) and benchmarking as regular parts of their…

  12. Benchmarking Benchmarks

    NARCIS (Netherlands)

    D.C. Blitz (David)

    2011-01-01

    textabstractBenchmarking benchmarks is a bundle of six studies that are inspired by the prevalence of benchmarking in academic finance research as well as in investment practice. Three studies examine if current benchmark asset pricing models adequately describe the cross-section of stock returns.

  14. A physician's due: measuring physician billing performance, benchmarking results.

    Science.gov (United States)

    Woodcock, Elizabeth W; Browne, Robert C; Jenkins, Jennifer L

    2008-07-01

    A 2008 study focused on four key performance indicators (KPIs) and staffing levels to benchmark the FY07 performance of physician group billing operations. A comparison of the change in the KPIs from FY03 to FY07 for a number of these billing operations disclosed across-the-board improvements. Billing operations did not show significant changes in staffing levels during this time, pointing to the existence of obstacles that prevent staff reductions in this area.

  15. Mars/master coupled system calculation of the OECD MSLB benchmark exercise 3 with refined core thermal-hydraulic nodalization

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, J.J.; Joo, H.G.; Cho, B.O.; Zee, S.Q.; Lee, W.J. [Korea Atomic Energy Research Inst., Daejeon (Korea, Republic of)

    2001-07-01

    To assess the performance of the KAERI coupled multi-dimensional system thermal-hydraulics (T/H) and three-dimensional (3-D) kinetics code, MARS/MASTER, Exercise III of the OECD main steam line break benchmark problem is solved. The coupled code is capable of employing an individual flow channel for each fuel assembly as well as lumped ones. The basic analysis model of the reference plant consists of four major components: a 3-D core neutronics model, a 3-D thermal-hydraulic model for the reactor vessel employing lumped flow channels, a refined core T/H model and a 1-D T/H model for the coolant system. Calculations were performed with and without the refined core T/H model. The results of the basic calculation performed without the refined core T/H model show that the core power distribution evolves to a highly localized shape due to the presence of a stuck rod, as well as asymmetric flow distribution in the reactor core. The results of the refined core T/H model indicate that the local peaking factor can be reduced by as much as 22% through accurate representation of the local T/H feedback effects. Nonetheless, the global transient behaviors are not significantly affected. (author)

  16. Performance Evaluation and Benchmarking of Intelligent Systems

    Energy Technology Data Exchange (ETDEWEB)

    del Pobil, Angel [Jaume-I University]; Madhavan, Raj [ORNL]; Bonsignorio, Fabio [Heron Robots, Italy]

    2009-10-01

    Performance Evaluation and Benchmarking of Intelligent Systems presents research dedicated to the subject of performance evaluation and benchmarking of intelligent systems by drawing from the experiences and insights of leading experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. This contributed volume offers a detailed and coherent picture of state-of-the-art, recent developments, and further research areas in intelligent systems. The chapters cover a broad range of applications, such as assistive robotics, planetary surveying, urban search and rescue, and line tracking for automotive assembly. Subsystems or components described in this book include human-robot interaction, multi-robot coordination, communications, perception, and mapping. Chapters are also devoted to simulation support and open source software for cognitive platforms, providing examples of the type of enabling underlying technologies that can help intelligent systems to propagate and increase in capabilities. Performance Evaluation and Benchmarking of Intelligent Systems serves as a professional reference for researchers and practitioners in the field. This book is also applicable to advanced courses for graduate level students and robotics professionals in a wide range of engineering and related disciplines including computer science, automotive, healthcare, manufacturing, and service robotics.

  17. Defining core elements and outstanding practice in Nutritional Science through collaborative benchmarking.

    Science.gov (United States)

    Samman, Samir; McCarthur, Jennifer O; Peat, Mary

    2006-01-01

    Benchmarking has been adopted by educational institutions as a potentially sensitive tool for improving learning and teaching. To date there has been limited application of benchmarking methodology in the Discipline of Nutritional Science. The aim of this survey was to define core elements and outstanding practice in Nutritional Science through collaborative benchmarking. Questionnaires that aimed to establish proposed core elements for Nutritional Science, and inquired about definitions of "good" and "outstanding" practice, were posted to named representatives at eight Australian universities. Seven respondents identified core elements that included knowledge of nutrient metabolism and requirement, food production and processing, modern biomedical techniques that could be applied to understanding nutrition, and social and environmental issues as related to Nutritional Science. Four of the eight institutions who agreed to participate in the present survey identified the integration of teaching with research as an indicator of outstanding practice. Nutritional Science is a rapidly evolving discipline. Further and more comprehensive surveys are required to consolidate and update the definition of the discipline, and to identify the optimal way of teaching it. Global ideas and specific regional requirements also need to be considered.

  18. Quantitative Performance Analysis of the SPEC OMPM2001 Benchmarks

    Directory of Open Access Journals (Sweden)

    Vishal Aslot

    2003-01-01

    Full Text Available The state of modern computer systems has evolved to allow easy access to multiprocessor systems by supporting multiple processors on a single physical package. As the multiprocessor hardware evolves, new ways of programming it are also developed. Some inventions may merely be adopting and standardizing the older paradigms. One such evolving standard for programming shared-memory parallel computers is the OpenMP API. The Standard Performance Evaluation Corporation (SPEC) has created a suite of parallel programs called SPEC OMP to compare and evaluate modern shared-memory multiprocessor systems using the OpenMP standard. We have studied these benchmarks in detail to understand their performance on a modern architecture. In this paper, we present detailed measurements of the benchmarks. We organize, summarize, and display our measurements using a Quantitative Model. We present a detailed discussion and derivation of the model. Also, we discuss the important loops in the SPEC OMPM2001 benchmarks and the reasons for less than ideal speedup on our platform.

  19. Career performance trajectories of Olympic swimmers: benchmarks for talent development.

    Science.gov (United States)

    Allen, Sian V; Vandenbogaerde, Tom J; Hopkins, William G

    2014-01-01

    The age-related progression of elite athletes to their career-best performances can provide benchmarks for talent development. The purpose of this study was to model career performance trajectories of Olympic swimmers to develop these benchmarks. We searched the Web for annual best times of swimmers who were top 16 in pool events at the 2008 or 2012 Olympics, from each swimmer's earliest available competitive performance through to 2012. There were 6959 times in the 13 events for each sex, for 683 swimmers, with 10 ± 3 performances per swimmer (mean ± s). Progression to peak performance was tracked with individual quadratic trajectories derived using a mixed linear model that included adjustments for better performance in Olympic years and for the use of full-body polyurethane swimsuits in 2009. Analysis of residuals revealed appropriate fit of quadratic trends to the data. The trajectories provided estimates of age of peak performance and the duration of the age window of trivial improvement and decline around the peak. Men achieved peak performance later than women (24.2 ± 2.1 vs. 22.5 ± 2.4 years), while peak performance occurred at later ages for the shorter distances for both sexes (∼1.5-2.0 years between sprint and distance-event groups). Men and women had a similar duration in the peak-performance window (2.6 ± 1.5 years) and similar progressions to peak performance over four years (2.4 ± 1.2%) and eight years (9.5 ± 4.8%). These data provide performance targets for swimmers aiming to achieve elite-level performance.
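
    The core of the trajectory modelling can be illustrated with a single-athlete simplification: fit a quadratic to annual best times and take the parabola's vertex as the age of peak performance. The sketch below uses invented data; the published analysis fitted individual quadratic trajectories within a mixed linear model, with adjustments for Olympic years and the 2009 polyurethane swimsuits.

        import numpy as np

        # Invented annual best times (seconds) for one swimmer by age.
        age = np.array([16., 17., 18., 19., 20., 21., 22., 23., 24., 25.])
        best_time = np.array([55.1, 54.2, 53.6, 53.1, 52.8, 52.6,
                              52.5, 52.5, 52.6, 52.8])

        # Quadratic trend: time = c2*age^2 + c1*age + c0; the vertex of the
        # parabola estimates the age of peak (fastest) performance.
        c2, c1, c0 = np.polyfit(age, best_time, 2)
        peak_age = -c1 / (2.0 * c2)
        print(f"estimated age of peak performance: {peak_age:.1f} years")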

  20. Thermal Performance Benchmarking; NREL (National Renewable Energy Laboratory)

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilbert

    2015-06-09

    This project proposes to seek out the SOA power electronics and motor technologies to thermally benchmark their performance. The benchmarking will focus on the thermal aspects of the system. System metrics including the junction-to-coolant thermal resistance and the parasitic power consumption (i.e., coolant flow rates and pressure drop performance) of the heat exchanger will be measured. The type of heat exchanger (i.e., channel flow, brazed, folded-fin) and any enhancement features (i.e., enhanced surfaces) will be identified and evaluated to understand their effect on performance. Additionally, the thermal resistance/conductivity of the power module’s passive stack and motor’s laminations and copper winding bundles will also be measured. The research conducted will allow insight into the various cooling strategies to understand which heat exchangers are most effective in terms of thermal performance and efficiency. Modeling analysis and fluid-flow visualization may also be carried out to better understand the heat transfer and fluid dynamics of the systems.

  1. Benchmarking and Performance Improvement at Rocky Flats Environmental Technology Site

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C. [Kaiser-Hill Co., LLC, Golden, CO (United States)], Doyle, D. [USDOE Rocky Flats Office, Golden, CO (United States)], Featherman, W.D. [Project Performance Corp., Sterling, VA (United States)]

    1997-12-31

    The Rocky Flats Environmental Technology Site (RFETS) has initiated a major work process improvement campaign using the tools of formalized benchmarking and streamlining. This paper provides insights into some of the process improvement activities performed at Rocky Flats from November 1995 through December 1996. It reviews the background, motivation, methodology, results, and lessons learned from this ongoing effort. The paper also presents important gains realized through process analysis and improvement including significant cost savings, productivity improvements, and an enhanced understanding of site work processes.

  2. Regression Benchmarking: An Approach to Quality Assurance in Performance

    OpenAIRE

    2005-01-01

    The paper presents a short summary of our work in the area of regression benchmarking and its application to software development. Specifically, we explain the concept of regression benchmarking, the requirements for employing regression testing in a software project, and methods used for analyzing the vast amounts of data resulting from repeated benchmarking. We present the application of regression benchmarking on a real software project and conclude with a glimpse at the challenges for the fu...

  3. Benchmarking road safety performance: Identifying a meaningful reference (best-in-class).

    Science.gov (United States)

    Chen, Faan; Wu, Jiaorong; Chen, Xiaohong; Wang, Jianjun; Wang, Di

    2016-01-01

    For road safety improvement, comparing and benchmarking performance are widely advocated as the emerging and preferred approaches. However, there is currently no universally agreed upon approach for the process of road safety benchmarking, and performing the practice successfully is by no means easy. This is especially true for its two core activities: (1) developing a set of road safety performance indicators (SPIs) and combining them into a composite index; and (2) identifying a meaningful reference (best-in-class), one that has already attained outstanding road safety practices. To this end, a scientific technique that can combine the multi-dimensional safety performance indicators (SPIs) into an overall index, and subsequently identify the 'best-in-class', is urgently required. In this paper, the Entropy-embedded RSR (Rank-sum ratio), an innovative, scientific and systematic methodology, is investigated with the aim of conducting the above two core tasks in an integrative and concise procedure, more specifically in a 'one-stop' way. Using a combination of results from other methods (e.g. the SUNflower approach) and other measures (e.g. the Human Development Index) as a relevant reference, a given set of European countries are robustly ranked and grouped into several classes based on the composite Road Safety Index. Within each class the 'best-in-class' is then identified. By benchmarking road safety performance, the results serve to promote best practice, encourage the adoption of successful road safety strategies and measures and, more importantly, inspire the kind of political leadership needed to create a road transport system that maximizes safety.
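
    A minimal sketch of an entropy-weighted rank-sum ratio computation in the spirit described above; the countries and SPI values are invented, every indicator is assumed "higher = safer", and the paper's exact formulation may differ.

        import numpy as np

        # Rows are countries, columns are safety performance indicators (SPIs).
        countries = ["C1", "C2", "C3", "C4"]
        X = np.array([[0.8, 0.6, 0.9],
                      [0.5, 0.9, 0.7],
                      [0.9, 0.4, 0.6],
                      [0.3, 0.7, 0.8]])
        m, n = X.shape

        # Entropy weights: indicators that discriminate more between
        # countries receive larger weights.
        P = X / X.sum(axis=0)
        entropy = -(P * np.log(P)).sum(axis=0) / np.log(m)
        w = (1.0 - entropy) / (1.0 - entropy).sum()

        # Weighted rank-sum ratio: rank each indicator (1 = worst), combine.
        R = X.argsort(axis=0).argsort(axis=0) + 1
        wrsr = (R * w).sum(axis=1) / m      # higher = closer to best-in-class

        for country, score in sorted(zip(countries, wrsr), key=lambda p: -p[1]):
            print(f"{country}: {score:.3f}")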

  4. How to achieve and prove performance improvement - 15 years of experience in German wastewater benchmarking.

    Science.gov (United States)

    Bertzbach, F; Franz, T; Möller, K

    2012-01-01

    This paper shows the results of performance improvement, which have been achieved in benchmarking projects in the wastewater industry in Germany over the last 15 years. A huge number of changes in operational practice and also in achieved annual savings can be shown, induced in particular by benchmarking at process level. Investigation of this question produces some general findings for the inclusion of performance improvement in a benchmarking project and for the communication of its results. Thus, we elaborate on the concept of benchmarking at both utility and process level, which is still a necessary distinction for the integration of performance improvement into our benchmarking approach. To achieve performance improvement via benchmarking it should be made quite clear that this outcome depends, on one hand, on a well conducted benchmarking programme and, on the other, on the individual situation within each participating utility.

  5. NODAL3 Sensitivity Analysis for NEACRP 3D LWR Core Transient Benchmark (PWR)

    Directory of Open Access Journals (Sweden)

    Surian Pinem

    2016-01-01

    Full Text Available This paper reports the results of a sensitivity analysis of the multidimensional, multigroup neutron diffusion code NODAL3 for the NEACRP 3D LWR core transient benchmarks (PWR). The code input parameters covered in the sensitivity analysis are the radial and axial node sizes (the number of radial nodes per fuel assembly and the number of axial layers), the heat conduction node size in the fuel pellet and cladding, and the maximum time step. The output parameters considered in this analysis follow the above-mentioned core transient benchmarks, that is, power peak, time of power peak, power, averaged Doppler temperature, maximum fuel centerline temperature, and coolant outlet temperature at the end of simulation (5 s). The sensitivity analysis results showed that the radial node size and maximum time step have a significant effect on the transient parameters, especially the time of power peak, for the HZP and HFP conditions. The number of ring divisions for fuel pellet and cladding has a negligible effect on the transient solutions. For practical PWR transient analysis work, based on the present sensitivity analysis results, we recommend NODAL3 users to use 2×2 radial nodes per assembly, 1×18 axial layers per assembly, a maximum time step of 10 ms, and 9 and 1 ring divisions for fuel pellet and cladding, respectively.

  6. Benchmarking Global Food Safety Performances: The Era of Risk Intelligence.

    Science.gov (United States)

    Valleé, Jean-Charles Le; Charlebois, Sylvain

    2015-10-01

    Food safety data segmentation and limitations hamper the world's ability to select, build up, monitor, and evaluate food safety performance. Currently, there is no metric that captures the entire food safety system, and performance data are not collected strategically on a global scale. Therefore, food safety benchmarking is essential not only to help monitor ongoing performance but also to inform continued food safety system design, adoption, and implementation toward more efficient and effective food safety preparedness, responsiveness, and accountability. This comparative study identifies and evaluates common elements among global food safety systems. It provides an overall world ranking of food safety performance for 17 Organisation for Economic Co-Operation and Development (OECD) countries, illustrated by 10 indicators organized across three food safety risk governance domains: risk assessment (chemical risks, microbial risks, and national reporting on food consumption), risk management (national food safety capacities, food recalls, food traceability, and radionuclides standards), and risk communication (allergenic risks, labeling, and public trust). Results show all countries have very high food safety standards, but Canada and Ireland, followed by France, earned excellent grades relative to their peers. However, any subsequent global ranking study should consider the development of survey instruments to gather adequate and comparable national evidence on food safety.

  7. Neutronic performance of a benchmark 1-MW LPSS

    Energy Technology Data Exchange (ETDEWEB)

    Russell, G.J.; Pitcher, E.J.; Ferguson, P.D. [Los Alamos National Laboratory, NM (United States)

    1995-12-31

    We used split-target/flux-trap-moderator geometry in our 1-MW LPSS computational benchmark performance calculations because the simulation models were readily available. Also, this target/moderator arrangement is a proven LANSCE design and a good neutronic performer. The model has four moderator viewed surfaces, each with a 13×13 cm field-of-view. For our scoping neutronic-performance calculations, we attempted to get as much engineering realism into the target-system mockup as possible. In our present model, we account for target/reflector dilution by cooling; the D₂O coolant fractions are adequate for 1 MW of 800-MeV protons (1.25 mA). We have incorporated a proton beam entry window and target canisters into the model, as well as (partial) moderator and vacuum canisters. The model does not account for target and moderator cooling lines and baffles, entire moderator canisters, and structural material in the reflector.

  8. Mutual Fund Style, Characteristic-Matched Performance Benchmarks and Activity Measures: A New Approach

    OpenAIRE

    Daniel Buncic; Jon E. Eggins; Robert J. Hill

    2010-01-01

    We propose a new approach for measuring mutual fund style and constructing characteristic-matched performance benchmarks that requires only portfolio holdings and two reference portfolios in each style dimension. The characteristic-matched performance benchmark literature typically follows a bottom-up approach by first matching individual stocks with benchmarks and then obtaining a portfolio’s excess return as a weighted average of the excess returns on each of its constituent stocks. Our app...

  9. 3-D core modelling of RIA transient: the TMI-1 benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Ferraresi, P. [CEA Cadarache, Institut de Protection et de Surete Nucleaire, Dept. de Recherches en Securite, 13 - Saint Paul Lez Durance (France)]; Studer, E. [CEA Saclay, Dept. Modelisation de Systemes et Structures, 91 - Gif sur Yvette (France)]; Avvakumov, A.; Malofeev, V. [Nuclear Safety Institute of Russian Research Center, Kurchatov Institute, Moscow (Russian Federation)]; Diamond, D.; Bromley, B. [Nuclear Energy and Infrastructure Systems Div., Brookhaven National Lab., BNL, Upton, NY (United States)]

    2001-07-01

    The increase of fuel burnup in core management raises the problem of evaluating the energy deposited during Reactivity Insertion Accidents (RIA). In order to evaluate this energy precisely, 3-D approaches are used more and more frequently in core calculations. This 'best-estimate' approach requires the evaluation of code uncertainties. To contribute to this evaluation, a code benchmark has been launched. A 3-D modelling of the TMI-1 central Ejected Rod Accident with zero and intermediate initial powers was carried out with three different methods of calculation for an inserted reactivity fixed at 1.2 $ and 1.26 $, respectively. The studies implemented by the neutronics codes PARCS (BNL) and CRONOS (IPSN/CEA) describe a homogeneous assembly, whereas the BARS (KI) code allows a pin-by-pin representation (CRONOS has both possibilities). All the calculations are consistent, the variation in figures resulting mainly from the method used to build cross sections and reflector constants. The maximum rise in enthalpy for the intermediate initial power (33% P_N) calculation is, for this academic calculation, about 30 cal/g. This work will be completed in a next step by an evaluation of the uncertainty induced by the uncertainty on model parameters, and a sensitivity study of the key parameters for a peripheral Rod Ejection Accident. (authors)

  10. Network's cardiology data help member groups benchmark performance, market services.

    Science.gov (United States)

    1997-11-01

    Data-driven cardiac network improves outcomes, reduces costs. Thirty-seven high-volume network member hospitals are using detailed demographic, procedure, and outcomes data in benchmarking and marketing efforts, and network physicians are using the aggregate data on 120,000 angioplasty and bypass procedures in research studies. Here are the details, plus sample reports.

  11. A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison

    Science.gov (United States)

    Jaboulay, J.-C.; Damian, F.; Douce, S.; Lopez, F.; Guenaut, C.; Aggery, A.; Poinot-Salanon, C.

    2014-06-01

    Physical analyses of the LWR potential performances with regard to fuel utilization require an important part of the work to be dedicated to the validation of the deterministic models used for these analyses. Advances in both codes and computer technology give the opportunity to perform the validation of these models on complex 3D core configurations close to the physical situations encountered (both steady-state and transient configurations). In this paper, we used the Monte Carlo transport code TRIPOLI-4® to describe a whole 3D large-scale and highly-heterogeneous LWR core. The aim of this study is to validate the deterministic CRONOS2 code against the Monte Carlo code TRIPOLI-4® in a relevant PWR core configuration. As a consequence, a 3D pin-by-pin model with a consistent number of volumes (4.3 million) and media (around 23,000) is established to precisely characterize the core at equilibrium cycle, namely using refined burn-up and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high-conversion core with fissile (MOX fuel) and fertile zones (depleted uranium). Furthermore, a tight pitch lattice is selected (to increase conversion of 238U into 239Pu), which leads to a harder neutron spectrum compared to a standard PWR assembly. In these conditions two main subjects will be discussed: the Monte Carlo variance calculation and the assessment of the diffusion operator with two energy groups for the core calculation.

  12. Performance of Artificial Intelligence Workloads on the Intel Core 2 Duo Series Desktop Processors

    Directory of Open Access Journals (Sweden)

    Abdul Kareem PARCHUR

    2010-12-01

    Full Text Available As processor architectures have become more advanced, Intel introduced its Intel Core 2 Duo series processors. The performance impact on Intel Core 2 Duo processors is analyzed using SPEC CPU INT 2006 performance numbers. This paper studies the behavior of Artificial Intelligence (AI) benchmarks on Intel Core 2 Duo series processors. Moreover, we estimated the task completion time (TCT) at 1 GHz, 2 GHz and 3 GHz Intel Core 2 Duo series processor frequencies. Our results show the performance scalability of Intel Core 2 Duo series processors. Even though the AI benchmarks have similar execution times, they have dissimilar characteristics, which are identified using principal component analysis and a dendrogram. As the processor frequency increased from 1.8 GHz to 3.167 GHz, the execution time decreased by ~370 sec for AI workloads. In the case of Physics/Quantum Computing programs it was ~940 sec.
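
    The characterization step mentioned above (principal component analysis plus a dendrogram) can be sketched as follows; the benchmark names and the feature matrix are random placeholders, not SPEC measurements.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, dendrogram

        # Rows are benchmarks, columns are per-benchmark characteristics
        # (e.g. IPC, cache miss rates); random stand-in data.
        rng = np.random.default_rng(0)
        names = ["bench1", "bench2", "bench3", "bench4", "bench5"]
        features = rng.normal(size=(5, 8))

        # PCA via SVD on the centred matrix; rows of `scores` are the
        # benchmarks' coordinates on the principal components.
        centred = features - features.mean(axis=0)
        U, S, Vt = np.linalg.svd(centred, full_matrices=False)
        scores = U * S

        # Hierarchical clustering on the first two components; print the
        # dendrogram's leaf order (plot it with matplotlib if desired).
        Z = linkage(scores[:, :2], method="ward")
        print(dendrogram(Z, labels=names, no_plot=True)["ivl"])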

  13. Benchmarking CRBLASTER on the 350-MHz 49-core Maestro Development Board

    CERN Document Server

    Mighell, Kenneth J

    2012-01-01

    I describe the performance of the CRBLASTER computational framework on a 350-MHz 49-core Maestro Development Board (MDB). The 49-core Interim Test Chip (ITC) was developed by the U.S. Government and is based on the intellectual property of the 64-core TILE64 processor of the Tilera Corporation. The Maestro processor is intended for use in the high radiation environments found in space; the ITC was fabricated using IBM 90-nm CMOS 9SF technology and Radiation-Hardening-by-Design (RHBD) rules. CRBLASTER is a parallel-processing cosmic-ray rejection application based on a simple computational framework that uses the high-performance computing industry standard Message Passing Interface (MPI) library. CRBLASTER was designed to be used by research scientists to easily port image-analysis programs based on embarrassingly-parallel algorithms to a parallel-processing environment such as a multi-node Beowulf cluster or multi-core processors using MPI. I describe my experience of porting CRBLASTER to the 64-core TILE64 ...

  14. Water management in the European hospitality sector: Best practice, performance benchmarks and improvement potential

    OpenAIRE

    Styles, David, 1979-; Harald SCHOENBERGER; GALVEZ MARTOS JOSE LUIS

    2015-01-01

    Water stress is a major environmental challenge for many tourism destinations. This paper presents a synthesis of best practice, key performance indicators and performance benchmarks for water management in hospitality enterprises. Widely applicable best practices and associated performance benchmarks were derived at the process level based on techno-economic assessment of commercial options, validated through consultation with expert stakeholders and site visits to observe commercial impleme...

  15. Performance Benchmarking of Tsunami-HySEA Model for NTHMP's Inundation Mapping Activities

    Science.gov (United States)

    Macías, Jorge; Castro, Manuel J.; Ortega, Sergio; Escalante, Cipriano; González-Vida, José Manuel

    2017-08-01

    The Tsunami-HySEA model is used to perform some of the numerical benchmark problems proposed and documented in the "Proceedings and results of the 2011 NTHMP Model Benchmarking Workshop". The final aim is to obtain the approval for Tsunami-HySEA to be used in projects funded by the National Tsunami Hazard Mitigation Program (NTHMP). Therefore, this work contains the numerical results and comparisons for the five benchmark problems (1, 4, 6, 7, and 9) required for such aim. This set of benchmarks considers analytical, laboratory, and field data test cases. In particular, the analytical solution of a solitary wave runup on a simple beach, and its laboratory counterpart, two more laboratory tests: the runup of a solitary wave on a conically shaped island and the runup onto a complex 3D beach (Monai Valley) and, finally, a field data benchmark based on data from the 1993 Hokkaido Nansei-Oki tsunami.

  16. SAT Benchmarks: Development of a College Readiness Benchmark and Its Relationship to Secondary and Postsecondary School Performance. Research Report 2011-5

    Science.gov (United States)

    Wyatt, Jeffrey; Kobrin, Jennifer; Wiley, Andrew; Camara, Wayne J.; Proestler, Nina

    2011-01-01

    The current study was part of an ongoing effort at the College Board to establish college readiness benchmarks on the SAT[R], PSAT/NMSQT[R], and ReadiStep[TM] as well as to provide schools, districts, and states with a view of their students' college readiness. College readiness benchmarks were established based on SAT performance, using a…

  17. Evaluating the Effect of Labeled Benchmarks on Children's Number Line Estimation Performance and Strategy Use.

    Science.gov (United States)

    Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen

    2017-01-01

    Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children's strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line positively affects third and fifth graders' NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values would have a positive effect on younger children's NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders' NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. These findings imply that children's benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that

  18. Performance Assessment of Flight Simulator Servo System Based on LQG Performance Benchmark

    Directory of Open Access Journals (Sweden)

    Liu Huibo

    2015-01-01

    Full Text Available The flight simulator is an important piece of semi-physical simulation equipment in the aerospace field. As it requires very high control precision and stability, performance assessment of the flight simulator servo system is especially important. Traditional research on flight simulator control performance indices has focused on dynamic output tracking features, with little attention to input characteristics and effects. Based on the Linear Quadratic Gaussian (LQG) performance benchmark, this paper analyses a high-precision flight simulator under three kinds of controllers, considering the influence of input and output signals on the controllers. After processing the input and output data and applying linear fitting, the LQG performance trade-off curve is obtained. By comparing each controller's actual performance with the optimal performance, the controller's performance index and improvement potential are obtained.
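
    For context, a common textbook formulation of the LQG benchmark (our summary, not necessarily this paper's notation) computes, for each weight \lambda > 0, the minimum achievable quadratic cost

        J(\lambda) = \min_{u}\; \mathrm{E}\!\left[y^2(t)\right] + \lambda\,\mathrm{E}\!\left[u^2(t)\right],

    and plots the optimal pairs (\mathrm{E}[u^2], \mathrm{E}[y^2]) as a trade-off curve. Any realizable controller's measured input/output variances lie on or above this curve, and the distance to the curve yields the kind of performance index and improvement potential described above.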

  19. Creation of a Full-Core HTR Benchmark with the Fort St. Vrain Initial Core and Assessment of Uncertainties in the FSV Fuel Composition and Geometry

    Energy Technology Data Exchange (ETDEWEB)

    Martin, William R.; Lee, John C.; Baxter, Alan; Wemple, Chuck

    2012-03-31

    Information and measured data from the initial Fort St. Vrain (FSV) high temperature gas reactor core are used to develop a benchmark configuration to validate computational methods for analysis of a full-core, commercial HTR configuration. Large uncertainties in the geometry and composition data for the FSV fuel and core are identified, including: (1) the relative numbers of fuel particles for the four particle types, (2) the distribution of fuel kernel diameters for the four particle types, (3) the Th:U ratio in the initial FSV core, and (4) the buffer thickness for the fissile and fertile particles. Sensitivity studies were performed to assess each of these uncertainties. A number of methods were developed to assist in these studies, including: (1) the automation of MCNP5 input files for FSV using Python scripts, (2) a simple method to verify isotopic loadings in MCNP5 input files, (3) an automated procedure to conduct a coupled MCNP5-RELAP5 analysis for a full-core FSV configuration with thermal-hydraulic feedback, and (4) a methodology for sampling kernel diameters from arbitrary power law and Gaussian PDFs that preserved fuel loading and packing factor constraints. A reference FSV fuel configuration was developed based on having a single diameter kernel for each of the four particle types, preserving known uranium and thorium loadings and packing factor (58%). Three fuel models were developed, based on representing the fuel as a mixture of kernels with two diameters, four diameters, or a continuous range of diameters. The fuel particles were put into a fuel compact using either a lattice-based approach or a stochastic packing methodology from RPI, and simulated with MCNP5. The results of the sensitivity studies indicated that the uncertainties in the relative numbers and sizes of fissile and fertile kernels were not important nor were the distributions of kernel diameters within their diameter ranges. The uncertainty in the Th:U ratio in the initial FSV core was

  20. Review of California and National Methods for Energy Performance Benchmarking of Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Matson, Nance E.; Piette, Mary Ann

    2005-09-05

    This benchmarking review has been developed to support benchmarking planning and tool development under discussion by the California Energy Commission (CEC), Lawrence Berkeley National Laboratory (LBNL) and others in response to the Governor's Executive Order S-20-04 (2004). The Executive Order sets a goal of benchmarking and improving the energy efficiency of California's existing commercial building stock. The Executive Order requires the CEC to propose "a simple building efficiency benchmarking system for all commercial buildings in the state". This report summarizes and compares two currently available commercial building energy-benchmarking tools. One tool is the U.S. Environmental Protection Agency's Energy Star National Energy Performance Rating System, which is a national regression-based benchmarking model (referred to in this report as Energy Star). The second is Lawrence Berkeley National Laboratory's Cal-Arch, which is a California-based distributional model (referred to as Cal-Arch). Prior to the time Cal-Arch was developed in 2002, there were several other benchmarking tools available to California consumers but none that were based solely on California data. The Energy Star and Cal-Arch benchmarking tools both provide California with unique and useful methods to benchmark the energy performance of California's buildings. Rather than determine which model is "better", the purpose of this report is to understand and compare the underlying data, information systems, assumptions, and outcomes of each model.

  1. The physics benchmark processes for the detector performance studies used in CLIC CDR Volume 3

    CERN Document Server

    Allanach, B.J.; Desch, K.; Ellis, J.; Giudice, G.; Grefe, C.; Kraml, S.; Lastovicka, T.; Linssen, L.; Marschall, J.; Martin, S.P.; Muennich, A.; Poss, S.; Roloff, P.; Simon, F.; Strube, J.; Thomson, M.; Wells, J.D.

    2012-01-01

    This note describes the detector benchmark processes used in volume 3 of the CLIC conceptual design report (CDR), which explores a staged construction and operation of the CLIC accelerator. The goal of the detector benchmark studies is to assess the performance of the CLIC ILD and CLIC SiD detector concepts for different physics processes and at a few CLIC centre-of-mass energies.

  2. Towards a benchmark simulation model for plant-wide control strategy performance evaluation of WWTPs

    DEFF Research Database (Denmark)

    Jeppsson, Ulf; Rosen, Christian; Alex, Jens;

    2006-01-01

    The COST/IWA benchmark simulation model has been available for seven years. Its primary purpose has been to create a platform for control strategy benchmarking of activated sludge processes. The fact that the benchmark has resulted in more than 100 publications, not only in Europe but also worldwide, demonstrates the interest in such a tool within the research community. In this paper, an extension of the benchmark simulation model no 1 (BSM1) is proposed. This extension aims at facilitating control strategy development and performance evaluation at a plant-wide level and, consequently, ... the changes, the evaluation period has been extended to one year. A prolonged evaluation period allows for long-term control strategies to be assessed and enables the use of control handles that cannot be evaluated in a realistic fashion in the one-week BSM1 evaluation period. In the paper, the extended plant

  3. RZBENCH: Performance evaluation of current HPC architectures using low-level and application benchmarks

    CERN Document Server

    Hager, Georg; Zeiser, Thomas; Wellein, Gerhard

    2007-01-01

    RZBENCH is a benchmark suite that was specifically developed to reflect the requirements of scientific supercomputer users at the University of Erlangen-Nuremberg (FAU). It comprises a number of application and low-level codes under a common build infrastructure that fosters maintainability and expandability. This paper reviews the structure of the suite and briefly introduces the most relevant benchmarks. In addition, some widely known standard benchmark codes are reviewed in order to emphasize the need for a critical review of often-cited performance results. Benchmark data is presented for the HLRB-II at LRZ Munich and a local InfiniBand Woodcrest cluster as well as two uncommon system architectures: A bandwidth-optimized InfiniBand cluster based on single socket nodes ("Port Townsend") and an early version of Sun's highly threaded T2 architecture ("Niagara 2").

  4. Memory Benchmarks for SMP-Based High Performance Parallel Computers

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, A B; de Supinski, B; Mueller, F; Mckee, S A

    2001-11-20

    As the speed gap between CPU and main memory continues to grow, memory accesses increasingly dominate the performance of many applications. The problem is particularly acute for symmetric multiprocessor (SMP) systems, where the shared memory may be accessed concurrently by a group of threads running on separate CPUs. Unfortunately, several key issues governing memory system performance in current systems are not well understood. Complex interactions between the levels of the memory hierarchy, buses or switches, DRAM back-ends, system software, and application access patterns can make it difficult to pinpoint bottlenecks and determine appropriate optimizations, and the situation is even more complex for SMP systems. To partially address this problem, we formulated a set of multi-threaded microbenchmarks for characterizing and measuring the performance of the underlying memory system in SMP-based high-performance computers. We report our use of these microbenchmarks on two important SMP-based machines. This paper has four primary contributions. First, we introduce a microbenchmark suite to systematically assess and compare the performance of different levels in SMP memory hierarchies. Second, we present a new tool based on hardware performance monitors to determine a wide array of memory system characteristics, such as cache sizes, quickly and easily; by using this tool, memory performance studies can be targeted to the full spectrum of performance regimes with many fewer data points than is otherwise required. Third, we present experimental results indicating that the performance of applications with large memory footprints remains largely constrained by memory. Fourth, we demonstrate that thread-level parallelism further degrades memory performance, even for the latest SMPs with hardware prefetching and switch-based memory interconnects.
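
    A toy analogue of such a multi-threaded memory microbenchmark, sketched in Python with NumPy (whose ufunc loops typically release the GIL on large arrays, letting threads stream memory concurrently); the array sizes, the operation, and the bandwidth accounting are simplistic placeholders, not the paper's suite.

        import threading
        import time
        import numpy as np

        # Time a memory-bound operation at increasing thread counts and report
        # aggregate bandwidth; saturation as threads are added hints at a
        # shared-memory bottleneck.
        N = 10_000_000                              # 80 MB per float64 array
        src = [np.ones(N) for _ in range(4)]
        dst = [np.empty(N) for _ in range(4)]

        def stream(i):
            np.multiply(src[i], 2.0, out=dst[i])    # reads 8*N bytes, writes 8*N

        for nthreads in (1, 2, 4):
            t0 = time.perf_counter()
            workers = [threading.Thread(target=stream, args=(i,))
                       for i in range(nthreads)]
            for w in workers:
                w.start()
            for w in workers:
                w.join()
            elapsed = time.perf_counter() - t0
            gbytes = nthreads * 16 * N / 1e9
            print(f"{nthreads} thread(s): {gbytes / elapsed:.1f} GB/s aggregate")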

  5. Web Server Benchmark Application WiiBench using Erlang/OTP R11 and Fedora-Core Linux 5.0

    CERN Document Server

    Mutiara, A B

    2007-01-01

    As the web grows and the amount of traffic on the web server increases, problems related to performance begin to appear. Some of these problems concern the number of users that can access the server simultaneously, the number of requests that can be handled by the server per second (requests per second), bandwidth consumption, and hardware utilization such as memory and CPU. To give a better quality of service (QoS), web hosting providers, as well as the system administrators and network administrators who manage the server, need a benchmark application to measure the capabilities of their servers. The application is intended to work under Linux/Unix-like platforms and is built using Erlang/OTP R11 as a concurrency-oriented language under Fedora Core Linux 5.0. WiiBench is divided into two main parts, the controller section and the launcher section. The controller is the core of the application. It has several duties, such as reading the benchmark scenario file, configuring the program b...

  6. Library Benchmarking

    Directory of Open Access Journals (Sweden)

    Wiji Suwarno

    2017-02-01

    Full Text Available The term benchmarking is encountered in the implementation of total quality management (TQM, termed holistic quality management in Indonesian), because benchmarking is a tool for finding ideas and learning from other libraries. Benchmarking is a systematic and continuous process of measuring and comparing an organization's business processes in order to obtain information that can help the organization improve its performance.

  7. Financial Benchmarking

    OpenAIRE

    2012-01-01

    This bachelor's thesis is focused on the financial benchmarking of TULIPA PRAHA s.r.o. The aim of this work is to evaluate the financial situation of the company, identify its strengths and weaknesses, and find out how efficiently the company performs in comparison with top companies in the same field, using the INFA benchmarking diagnostic system of financial indicators. The theoretical part includes the characteristics of financial analysis, which financial benchmarking is based on a...

  8. Benchmarking the performance of fixed-image receptor digital radiography systems. Part 2: system performance metric.

    Science.gov (United States)

    Lee, Kam L; Bernardo, Michael; Ireland, Timothy A

    2016-06-01

    This is part two of a two-part study in benchmarking system performance of fixed digital radiographic systems. The study compares the system performance of seven fixed digital radiography systems based on quantitative metrics like modulation transfer function (sMTF), normalised noise power spectrum (sNNPS), detective quantum efficiency (sDQE) and entrance surface air kerma (ESAK). It was found that the most efficient image receptors (greatest sDQE) were not necessarily operating at the lowest ESAK. In part one of this study, sMTF is shown to depend on system configuration while sNNPS is shown to be relatively consistent across systems. Systems are ranked on their signal-to-noise ratio efficiency (sDQE) and their ESAK. Systems using the same equipment configuration do not necessarily have the same system performance. This implies radiographic practice at the site will have an impact on the overall system performance. In general, systems are more dose efficient at low dose settings.
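
    For reference, the frequency-dependent detective quantum efficiency underlying the sDQE metric is conventionally computed from the measured MTF and the normalised noise power spectrum, along the lines of the standard definition (our paraphrase, not this paper's exact formula):

        \mathrm{DQE}(f) = \frac{\mathrm{MTF}^2(f)}{q \cdot \mathrm{NNPS}(f)},

    where q is the incident photon fluence for the beam quality used. At equal DQE, a lower ESAK indicates a more dose-efficient operating point, which is why the two measures are reported together.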

  9. Benchmark for Performance: Geothermal Applications in Lincoln Public Schools

    Energy Technology Data Exchange (ETDEWEB)

    Durfee, D.J.; Hughes, P.J.; Martin, M.A.; Sharp, A.T.; Shonder, J.A.

    1999-06-19

    Vertical-bore geothermal heat pumps (GHPs) have been providing heating and cooling to four new elementary schools located in Lincoln, Nebraska, since 1995. According to representatives of the local utility and school district, the systems are providing a comfortable, complaint-free environment with utility costs that are nearly half those of other schools in the district. Performance data collected from on-site energy management systems and district billing and utility records for all fifty schools in the Lincoln district indicate that only five consume less energy than the best performing GHP school; however, these five cool less than 10% of their total floor area, while the GHP schools cool 100% of their floor area. When compared to other new schools (with similar ventilation loads), the GHP schools used approximately 26% less source energy per square foot of floor area. Variations in annual energy performance are evident among the four GHP schools; however, together they still consume less source energy than 70% of all schools in the district. These variations are most likely due to operational differences rather than installed equipment, building orientation, or environmental (bore field) conditions.

  10. Maximizing Use of Extension Beef Cattle Benchmarks Data Derived from Cow Herd Appraisal Performance Software

    Science.gov (United States)

    Ramsay, Jennifer M.; Hanna, Lauren L. Hulsman; Ringwall, Kris A.

    2016-01-01

    One goal of Extension is to provide practical information that makes a difference to producers. Cow Herd Appraisal Performance Software (CHAPS) has provided beef producers with production benchmarks for 30 years, creating a large historical data set. Many such large data sets contain useful information but are underutilized. Our goal was to create…

  12. Analysis of Network Performance for Computer Communication Systems with Benchmark

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper introduces a performance evaluation approach for computer communication systems based on simulation and measurement technology, and discusses its evaluation models. The results of our experiments showed that the outcome of practical measurement on Ether-LAN fitted in well with the theoretical analysis. The approach we present can be used to define various kinds of artificially simulated load models conveniently, to build all kinds of network application environments in a flexible way, and to exploit fully both the generality and high precision of traditional simulation technology and the realism, reliability, and adaptability of measurement technology.

  13. Performance of Landslide-HySEA tsunami model for NTHMP benchmarking validation process

    Science.gov (United States)

    Macias, Jorge

    2017-04-01

    In its FY2009 Strategic Plan, the NTHMP required that all numerical tsunami inundation models be verified as accurate and consistent through a model benchmarking process. This was completed in 2011, but only for seismic tsunami sources and in a limited manner for idealized solid underwater landslides. Recent work by various NTHMP states, however, has shown that landslide tsunami hazard may be dominant along significant parts of the US coastline, as compared to hazards from other tsunamigenic sources. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks. The Landslide-HySEA model participated in the workshop that was organized at Texas A&M University - Galveston, on January 9-11, 2017. The aim of this presentation is to show some of the numerical results obtained for Landslide-HySEA in the framework of this benchmarking validation/verification effort. Acknowledgements. This research has been partially supported by the Junta de Andalucía research project TESELA (P11-RNM7069), the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  14. Performance benchmarking of four cell-free protein expression systems.

    Science.gov (United States)

    Gagoski, Dejan; Polinkovsky, Mark E; Mureev, Sergey; Kunert, Anne; Johnston, Wayne; Gambin, Yann; Alexandrov, Kirill

    2016-02-01

    Over the last half century, a range of cell-free protein expression systems based on pro- and eukaryotic organisms have been developed and have found a range of applications, from structural biology to directed protein evolution. While it is generally accepted that significant differences in performance among systems exist, there is a paucity of systematic experimental studies supporting this notion. Here, we took advantage of the species-independent translation initiation sequence to express and characterize 87 N-terminally GFP-tagged human cytosolic proteins of different sizes in E. coli, wheat germ (WGE), HeLa, and Leishmania-based (LTE) cell-free systems. Using a combination of single-molecule fluorescence spectroscopy, SDS-PAGE, and Western blot analysis, we assessed the expression yields, the fraction of full-length translation product, and aggregation propensity for each of these systems. Our results demonstrate that the E. coli system has the highest expression yields. However, we observe that high expression levels are accompanied by production of truncated species, which is particularly pronounced in the case of proteins larger than 70 kDa. Furthermore, proteins produced in the E. coli system display high aggregation propensity, with only 10% of tested proteins being produced in predominantly monodispersed form. The WGE system was the most productive among the eukaryotic systems tested. Finally, HeLa and LTE show comparable protein yields that are considerably lower than the ones achieved in the E. coli and WGE systems. The protein products produced in the HeLa system display slightly higher integrity, whereas the LTE-produced proteins have the lowest aggregation propensity among the systems analyzed. The high quality of HeLa- and LTE-produced proteins enables their analysis without purification and makes them suitable for analysis of multi-domain eukaryotic proteins.

  15. Kaiser Permanente's performance improvement system, Part 1: From benchmarking to executing on strategic priorities.

    Science.gov (United States)

    Schilling, Lisa; Chase, Alide; Kehrli, Sommer; Liu, Amy Y; Stiefel, Matt; Brentari, Ruth

    2010-11-01

    By 2004, senior leaders at Kaiser Permanente, the largest not-for-profit health plan in the United States, recognizing variations across service areas in quality, safety, service, and efficiency, began developing a performance improvement (PI) system to realize best-in-class quality performance across all 35 medical centers. MEASURING SYSTEMWIDE PERFORMANCE: In 2005, a Web-based data dashboard, "Big Q," which tracks the performance of each medical center and service area against external benchmarks and internal goals, was created. PLANNING FOR PI AND BENCHMARKING PERFORMANCE: In 2006, Kaiser Permanente's national and regional leaders continued planning the PI system, and in 2007, quality, medical group, operations, and information technology leaders benchmarked five high-performing organizations to identify the capabilities required to achieve consistent best-in-class organizational performance. THE PI SYSTEM: The PI system addresses six capabilities: leadership priority setting, a systems approach to improvement, measurement capability, a learning organization, improvement capacity, and a culture of improvement. PI "deep experts" (mentors) consult with national, regional, and local leaders, and more than 500 improvement advisors are trained to manage portfolios of 90-120 day improvement initiatives at medical centers. Between the second quarter of 2008 and the first quarter of 2009, performance across all Kaiser Permanente medical centers improved on the Big Q metrics. The lessons learned in implementing and sustaining PI as it becomes fully integrated into all levels of Kaiser Permanente can be generalized to other health care systems, hospitals, and other health care organizations.

  16. Performance benchmarking and incentive regulation. Considerations of directing signals for electricity distribution companies

    Energy Technology Data Exchange (ETDEWEB)

    Honkapuro, S.

    2008-07-01

    After the restructuring process of the power supply industry, which for instance in Finland took place in the mid-1990s, free competition was introduced for the production and sale of electricity. Nevertheless, natural monopolies are found to be the most efficient form of production in the transmission and distribution of electricity, and therefore such companies remained franchised monopolies. To prevent the misuse of the monopoly position and to guarantee the rights of the customers, regulation of these monopoly companies is required. One of the main objectives of the restructuring process has been to increase the cost efficiency of the industry. Simultaneously, demands for the service quality are increasing. Therefore, many regulatory frameworks are being, or have been, reshaped so that companies are provided with stronger incentives for efficiency and quality improvements. Performance benchmarking has in many cases a central role in the practical implementation of such incentive schemes. Economic regulation with performance benchmarking attached to it provides companies with directing signals that tend to affect their investment and maintenance strategies. Since the asset lifetimes in the electricity distribution are typically many decades, investment decisions have far-reaching technical and economic effects. This doctoral thesis addresses the directing signals of incentive regulation and performance benchmarking in the field of electricity distribution. The theory of efficiency measurement and the most common regulation models are presented. The chief contributions of this work are (1) a new kind of analysis of the regulatory framework, so that the actual directing signals of the regulation and benchmarking for the electricity distribution companies are evaluated, (2) developing the methodology and a software tool for analysing the directing signals of the regulation and benchmarking in the electricity distribution sector, and (3) analysing the real

  17. MPI performance evaluation and characterization using a compact application benchmark code

    Energy Technology Data Exchange (ETDEWEB)

    Worley, P.H.

    1996-06-01

    In this paper the parallel benchmark code PSTSWM is used to evaluate the performance of the vendor-supplied implementations of the MPI message-passing standard on the Intel Paragon, IBM SP2, and Cray Research T3D. This study is meant to complement the performance evaluation of individual MPI commands by providing information on the practical significance of MPI performance on the execution of a communication-intensive application code. In particular, three performance questions are addressed: how important is the communication protocol in determining performance when using MPI, how does MPI performance compare with that of the native communication library, and how efficient are the collective communication routines.

  18. Generation IV benchmarking of TRISO fuel performance models under accident conditions: Modeling input data

    Energy Technology Data Exchange (ETDEWEB)

    Collin, Blaise P. [Idaho National Laboratory (INL), Idaho Falls, ID (United States)

    2014-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and, the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, thereafter named NCC (Numerical Calculation Case), is derived from "Case 5" of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. "Case 5" of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to "effects of the numerical calculation method rather than the physical model" [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison

  19. GEN-IV BENCHMARKING OF TRISO FUEL PERFORMANCE MODELS UNDER ACCIDENT CONDITIONS MODELING INPUT DATA

    Energy Technology Data Exchange (ETDEWEB)

    Collin, Blaise Paul [Idaho National Laboratory

    2016-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release. • The modeling of the AGR-1 and HFR-EU1bis safety testing experiments. • The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, thereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read

  1. Benchmark calculation of no-core Monte Carlo shell model in light nuclei

    CERN Document Server

    Abe, T; Otsuka, T; Shimizu, N; Utsuno, Y; Vary, J P; 10.1063/1.3584062

    2011-01-01

    The Monte Carlo shell model is applied for the first time to no-core shell model calculations in light nuclei. The results are compared with those of the full configuration interaction. The agreement between them is within a few percent at most.

  2. The Alpha consensus meeting on cryopreservation key performance indicators and benchmarks: proceedings of an expert meeting.

    Science.gov (United States)

    2012-08-01

    This proceedings report presents the outcomes from an international workshop designed to establish consensus on: definitions for key performance indicators (KPIs) for oocyte and embryo cryopreservation, using either slow freezing or vitrification; minimum performance level values for each KPI, representing basic competency; and aspirational benchmark values for each KPI, representing best practice goals. This report includes general presentations about current practice and factors for consideration in the development of KPIs. A total of 14 KPIs were recommended and benchmarks for each are presented. No recommendations were made regarding specific cryopreservation techniques or devices, or whether vitrification is 'better' than slow freezing, or vice versa, for any particular stage or application, as this was considered to be outside the scope of this workshop.

  3. Performance evaluation of tile-based Fisher Ratio analysis using a benchmark yeast metabolome dataset.

    Science.gov (United States)

    Watson, Nathanial E; Parsons, Brendon A; Synovec, Robert E

    2016-08-12

    Performance of tile-based Fisher ratio (F-ratio) data analysis, recently developed for discovery-based studies using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC×GC-TOFMS), is evaluated with a metabolomics dataset that had previously been analyzed in great detail, albeit using a brute-force approach. The previously analyzed data (referred to herein as the benchmark dataset) were intracellular extracts from Saccharomyces cerevisiae (yeast), either metabolizing glucose (repressed) or ethanol (derepressed), which define the two classes in the discovery-based analysis to find metabolites that differ significantly in concentration between the two classes. Beneficially, this previously analyzed dataset provides a concrete means to validate the tile-based F-ratio software. Herein, we demonstrate and validate the significant benefits of applying tile-based F-ratio analysis. The yeast metabolomics data are analyzed much more rapidly, in about one week versus one year for the prior studies with this dataset. Furthermore, a null distribution analysis is implemented to statistically determine an adequate F-ratio threshold, whereby variables with F-ratio values below the threshold can be ignored as not class distinguishing, giving the analyst confidence when analyzing the hit table. Forty-six of the fifty-four benchmarked changing metabolites were discovered by the new methodology, while all but one of the nineteen previously identified false-positive metabolites were consistently excluded.
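
    As a rough sketch of the F-ratio statistic and null-distribution thresholding described here (per-variable rather than per-tile, with synthetic signals standing in for GC×GC-TOFMS data; all array names and sizes are invented for illustration):

    ```python
    import numpy as np

    def f_ratio(x_a, x_b):
        """F-ratio for one variable: between-class over within-class variance."""
        n_a, n_b = len(x_a), len(x_b)
        grand = np.mean(np.concatenate([x_a, x_b]))
        between = n_a * (x_a.mean() - grand) ** 2 + n_b * (x_b.mean() - grand) ** 2
        within = (np.sum((x_a - x_a.mean()) ** 2) +
                  np.sum((x_b - x_b.mean()) ** 2)) / (n_a + n_b - 2)
        return between / within

    rng = np.random.default_rng(0)
    repressed = rng.normal(1.0, 0.2, size=(6, 200))     # 6 replicates, 200 variables
    derepressed = rng.normal(1.0, 0.2, size=(6, 200))
    derepressed[:, :20] += 0.8                          # 20 truly changing variables

    f_obs = np.array([f_ratio(repressed[:, j], derepressed[:, j]) for j in range(200)])

    # Null distribution: permute class labels, keep the max F per permutation
    pooled = np.vstack([repressed, derepressed])
    null_max = []
    for _ in range(100):
        idx = rng.permutation(12)
        a, b = pooled[idx[:6]], pooled[idx[6:]]
        null_max.append(max(f_ratio(a[:, j], b[:, j]) for j in range(200)))
    threshold = np.quantile(null_max, 0.95)

    hits = np.flatnonzero(f_obs > threshold)
    print(f"F-ratio threshold {threshold:.1f}; {hits.size} class-distinguishing variables")
    ```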

  4. Predictors of Student Performance in Grades 7 and 8 Mathematics: The Correlation between Benchmark Tests and Performance on the Texas Assessment of Knowledge and Skills (TAKS) Math Tests

    Science.gov (United States)

    Allen, Timothy Dale

    2012-01-01

    School districts throughout Texas have used archived Texas Assessment of Knowledge and Skills (TAKS) tests as a benchmark to predict student performance on future TAKS tests without substantial quantitative evidence that these types of benchmark tests are valid predictors of student performance. The purpose of this quantitative correlational study…

  5. A simulation benchmark to evaluate the performance of advanced control techniques in biological wastewater treatment plants

    Directory of Open Access Journals (Sweden)

    Sotomayor O.A.Z.

    2001-01-01

    Full Text Available Wastewater treatment plants (WWTPs) are complex systems that incorporate a large number of biological, physicochemical and biochemical processes. They are large and nonlinear systems subject to great disturbances in incoming loads. The primary goal of a WWTP is to reduce pollutants, and the second goal is disturbance rejection, in order to obtain good effluent quality. Modeling and computer simulations are key tools in the achievement of these two goals. They are essential to describe, predict and control the complicated interactions of the processes. Numerous control techniques (algorithms) and control strategies (structures) have been suggested to regulate WWTPs; however, it is difficult to make a discerning performance evaluation due to the nonuniformity of the simulated plants used. The main objective of this paper is to present a benchmark of an entire biological wastewater treatment plant in order to evaluate, through simulations, different control techniques. This benchmark plays the role of an activated sludge process used for removal of organic matter and nitrogen from domestic effluents. The development of this simulator is based on models widely accepted by the international community and is implemented on the Matlab/Simulink (The MathWorks, Inc.) platform. The benchmark considers plant layout and the effects of influent characteristics. It also includes a test protocol for analyzing the open- and closed-loop responses of the plant. Examples of control applications in the benchmark are implemented employing conventional PI controllers. The following common control strategies are tested: dissolved oxygen (DO) concentration-based control, respirometry-based control and nitrate concentration-based control.
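
    As a minimal sketch of the kind of conventional PI loop tested on such a benchmark, for example dissolved oxygen control, the snippet below implements a discrete-time PI controller against a toy first-order process; the gains, setpoint, and process model are illustrative assumptions, not BSM1 values.

    ```python
    # Discrete-time PI control of a toy first-order DO process.
    # Gains, setpoint, and process constants below are invented for illustration.
    kp, ki, dt = 5.0, 2.0, 1.0 / 60.0     # proportional/integral gains, step (hours)
    setpoint, do_conc, integral = 2.0, 0.5, 0.0   # DO setpoint and state in mg/L

    for step in range(240):               # simulate 4 hours
        error = setpoint - do_conc
        integral += error * dt
        aeration = max(0.0, kp * error + ki * integral)   # actuator cannot go negative
        # Hypothetical first-order DO response: oxygen transfer minus respiration
        do_conc += dt * (aeration * (8.0 - do_conc) - 30.0)
        do_conc = max(0.0, do_conc)

    print(f"final DO = {do_conc:.2f} mg/L (setpoint {setpoint})")
    ```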

  6. Dataset size and composition impact the reliability of performance benchmarks for peptide-MHC binding predictions

    DEFF Research Database (Denmark)

    Kim, Yohan; Sidney, John; Buus, Søren;

    2014-01-01

    Background: It is important to accurately determine the performance of peptide:MHC binding predictions, as this enables users to compare and choose between different prediction methods and provides estimates of the expected error rate. Two common approaches to determine prediction performance are cross-validation, in which all available data are iteratively split into training and testing data, and the use of blind sets generated separately from the data used to construct the predictive method. In the present study, we have compared cross-validated prediction performances generated on our last benchmark dataset from 2009 with prediction performances generated on data subsequently added to the Immune Epitope Database (IEDB), which served as a blind set. Results: We found that cross-validated performances systematically overestimated performance on the blind set. This was found not to be due…
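
    A toy sketch of the methodological comparison, assuming scikit-learn is available: the same model is scored by cross-validation on a benchmark set and then on a later blind set whose distribution has drifted. All data and the classifier are synthetic stand-ins, not the IEDB setup.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)

    # Synthetic stand-ins: a "2009 benchmark" set and a later "blind" set whose
    # distribution has drifted slightly, mimicking newly deposited data.
    X_bench = rng.normal(0, 1, (400, 20))
    y_bench = (X_bench[:, 0] + 0.5 * X_bench[:, 1] + rng.normal(0, 1, 400)) > 0
    X_blind = rng.normal(0.3, 1.2, (200, 20))
    y_blind = (X_blind[:, 0] + 0.5 * X_blind[:, 1] + rng.normal(0, 1, 200)) > 0

    model = LogisticRegression(max_iter=1000)

    cv_acc = cross_val_score(model, X_bench, y_bench, cv=5).mean()
    blind_acc = model.fit(X_bench, y_bench).score(X_blind, y_blind)

    # Cross-validation typically looks better than the blind set here, because
    # training and test folds share the benchmark's composition.
    print(f"cross-validated accuracy: {cv_acc:.3f}")
    print(f"blind-set accuracy:       {blind_acc:.3f}")
    ```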

  7. EVA Human Health and Performance Benchmarking Study Overview and Development of a Microgravity Protocol

    Science.gov (United States)

    Norcross, Jason; Jarvis, Sarah; Bekdash, Omar; Cupples, Scott; Abercromby, Andrew

    2017-01-01

    The primary objective of this study is to develop a protocol to reliably characterize human health and performance metrics for individuals working inside various EVA suits under realistic spaceflight conditions. Expected results and methodologies developed during this study will provide the baseline benchmarking data and protocols with which future EVA suits and suit configurations (e.g., varied pressure, mass, center of gravity [CG]) and different test subject populations (e.g., deconditioned crewmembers) may be reliably assessed and compared. Results may also be used, in conjunction with subsequent testing, to inform fitness-for-duty standards, as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  8. Do physiological measures predict selected CrossFit® benchmark performance?

    Science.gov (United States)

    Butcher, Scotty J; Neyedly, Tyler J; Horvey, Karla J; Benko, Chad R

    2015-01-01

    CrossFit® is a new but extremely popular method of exercise training and competition that involves constantly varied functional movements performed at high intensity. Despite the popularity of this training method, the physiological determinants of CrossFit performance have not yet been reported. The purpose of this study was to determine whether physiological and/or muscle strength measures could predict performance on three common CrossFit "Workouts of the Day" (WODs). Fourteen CrossFit Open or Regional athletes completed, on separate days, the WODs "Grace" (30 clean and jerks for time), "Fran" (three rounds of thrusters and pull-ups for 21, 15, and nine repetitions), and "Cindy" (20 minutes of rounds of five pull-ups, ten push-ups, and 15 bodyweight squats), as well as the "CrossFit Total" (1 repetition max [1RM] back squat, overhead press, and deadlift), maximal oxygen consumption (VO2max), and Wingate anaerobic power/capacity testing. Performance of Grace and Fran was related to whole-body strength (CrossFit Total) (r=-0.88 and -0.65, respectively) and anaerobic threshold (r=-0.61 and -0.53, respectively); however, whole-body strength was the only variable to survive the prediction regression for both of these WODs (R (2)=0.77 and 0.42, respectively). There were no significant associations or predictors for Cindy. CrossFit benchmark WOD performance cannot be predicted by VO2max, Wingate power/capacity, or either respiratory compensation or anaerobic thresholds. Of the data measured, only whole-body strength can partially explain performance on Grace and Fran, although anaerobic threshold also exhibited association with performance. Along with their typical training, CrossFit athletes should likely ensure an adequate level of strength and aerobic endurance to optimize performance on at least some benchmark WODs.

  9. Comparative Neutronics Analysis of DIMPLE S06 Criticality Benchmark with Contemporary Reactor Core Analysis Computer Code Systems

    Directory of Open Access Journals (Sweden)

    Wonkyeong Kim

    2015-01-01

    Full Text Available A high-leakage core has been known to be a challenging problem not only for a two-step homogenization approach but also for a direct heterogeneous approach. In this paper the DIMPLE S06 core, which is a small high-leakage core, has been analyzed by a direct heterogeneous modeling approach and by a two-step homogenization modeling approach, using contemporary code systems developed for reactor core analysis. The focus of this work is a comprehensive comparative analysis of the conventional approaches and codes with a small core design, the DIMPLE S06 critical experiment. The calculation procedure for the two approaches is explicitly presented in this paper. A comprehensive comparative analysis is performed on neutronics parameters: the multiplication factor and the assembly power distribution. Comparison of the two-group homogenized cross sections from each lattice physics code shows that the generated transport cross section differs significantly according to the transport approximation used to treat the anisotropic scattering effect. The necessity of the assembly discontinuity factor (ADF) to correct the discontinuity at the assembly interfaces is clearly demonstrated by the flux distributions and the result of the two-step approach. Finally, the two approaches show consistent results for all codes, while the comparison with the reference generated by MCNP shows significant error except for the other Monte Carlo code, SERPENT2.

  10. The grout/glass performance assessment code system (GPACS) with verification and benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Piepho, M.G.; Sutherland, W.H.; Rittmann, P.D.

    1994-12-01

    GPACS is a computer code system for calculating water flow (unsaturated or saturated), solute transport, and human doses due to the slow release of contaminants from a waste form (in particular grout or glass) through an engineered system and through a vadose zone to an aquifer, well and river. This dual-purpose document is intended to serve as a user's guide and verification/benchmark document for the Grout/Glass Performance Assessment Code system (GPACS). GPACS can be used for low-level-waste (LLW) glass performance assessment and many other applications, including other low-level-waste performance assessments and risk assessments. Based on all the cases presented, GPACS is adequate (verified) for calculating water flow and contaminant transport in unsaturated-zone sediments and for calculating human doses via the groundwater pathway.

  11. Comparison of Processor Performance of SPECint2006 Benchmarks of some Intel Xeon Processors

    Directory of Open Access Journals (Sweden)

    Abdul Kareem PARCHUR

    2012-08-01

    Full Text Available High performance is a critical requirement for all microprocessor manufacturers. This paper compares the performance of two main Intel Xeon series (Type A: Intel Xeon X5260, X5460, E5450 and L5320; Type B: Intel Xeon X5140, 5130, 5120 and E5310). The microarchitecture of these processors is implemented on the basis of a new family of processors from Intel starting with the Pentium 4 processor, and they can provide a performance boost for many key application areas in the modern generation. The scaling of performance in the two series has been analyzed using the performance numbers of 12 CPU2006 integer benchmarks, which exhibit significant differences in performance. The results and analysis can be used by performance engineers, scientists and developers to better understand performance scaling in modern-generation processors.

  12. Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation

    Energy Technology Data Exchange (ETDEWEB)

    Pecchia, M.; D' Auria, F. [San Piero A Grado Nuclear Research Group GRNSPG, Univ. of Pisa, via Diotisalvi, 2, 56122 - Pisa (Italy); Mazzantini, O. [Nucleo-electrica Argentina Societad Anonima NA-SA, Buenos Aires (Argentina)

    2012-07-01

    Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore, core models of the Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of obliquely inserted control rods on the neutron flux, in order to validate the RELAP5-3D©/NESTLE three-dimensional neutron kinetic coupled thermal-hydraulic model, applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the Atucha-2 FSAR. (authors)

  13. Performance evaluation of firefly algorithm with variation in sorting for non-linear benchmark problems

    Science.gov (United States)

    Umbarkar, A. J.; Balande, U. T.; Seth, P. D.

    2017-06-01

    The field of nature-inspired computing and optimization has evolved to solve difficult optimization problems in diverse fields of engineering, science and technology. The firefly attraction process is mimicked in the algorithm for solving optimization problems. In the Firefly Algorithm (FA), fireflies are ranked using a sorting algorithm; the original FA was proposed with bubble sort for this ranking. In this paper, quick sort replaces bubble sort to decrease the time complexity of FA. The dataset used comprises unconstrained benchmark functions from CEC 2005 [22]. FA using bubble sort and FA using quick sort are compared with respect to best, worst and mean fitness, standard deviation, number of comparisons and execution time. The experimental results show that FA using quick sort requires fewer comparisons but more execution time. Increasing the number of fireflies helps convergence to the optimal solution, and the algorithm performed better at lower dimensions than at higher ones.
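
    To make the sorting trade-off concrete, here is a sketch of the ranking step under a toy minimization objective: bubble sort is written out so its O(n^2) comparison count is explicit, while Python's built-in sorted (an O(n log n) Timsort, standing in here for quick sort) produces the same ranking with far fewer comparisons. The objective and swarm are invented stand-ins for the CEC 2005 functions.

    ```python
    import random

    def bubble_sort_rank(fireflies, key):
        """Rank fireflies by brightness with bubble sort, counting comparisons."""
        ranked, comparisons = list(fireflies), 0
        for i in range(len(ranked) - 1):
            for j in range(len(ranked) - 1 - i):
                comparisons += 1
                if key(ranked[j]) > key(ranked[j + 1]):
                    ranked[j], ranked[j + 1] = ranked[j + 1], ranked[j]
        return ranked, comparisons

    def sphere(x):
        """Toy objective: minimize the sum of squares."""
        return sum(v * v for v in x)

    random.seed(0)
    swarm = [[random.uniform(-5, 5) for _ in range(10)] for _ in range(50)]

    ranked_bubble, n_cmp = bubble_sort_rank(swarm, key=sphere)
    ranked_fast = sorted(swarm, key=sphere)   # O(n log n) replacement

    assert ranked_bubble == ranked_fast
    print(f"bubble sort used {n_cmp} comparisons for {len(swarm)} fireflies")
    ```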

  14. Mathematical modeling and modification of an activated sludge benchmark process evaluated by multiple performance criteria

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Wenliang; Yao, Chonghua [East China University of Science and Technology, Shanghai (China); Lu, Xiwu [Southeast University, Nanjing (China)

    2014-08-15

    Optimal modification of an activated sludge process (ASP) evaluated by multiple performance criteria was studied. A benchmark process from BSM1 was taken as the target process. Four indexes, percentage of effluent violation (PEV), energy consumption (OCI), total volume of the tanks (TV) and total suspended solids in tank 5 (TSSa5), were the criteria, and eleven process parameters were the decision variables, making up the multiple criteria optimization model, which was solved by the non-dominated sorting genetic algorithm II (NSGA-II) in MATLAB. Pareto solutions were obtained; one solution (opt1) was selected based on the authors' judgment for further analysis. Results show that the process with the opt1 strategy exhibits much better PEV and OCI performance than with the default, improving them by 74.17% and 9.97%, respectively, under dry influent and without control. These results indicate that the multiple criteria optimization method is very useful for the modification of an ASP.
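
    For reference, a minimal sketch of the Pareto-dominance test at the heart of NSGA-II-style selection, with all objectives minimized; the candidate (PEV, OCI) pairs are invented and this is not the authors' MATLAB setup.

    ```python
    def dominates(a, b):
        """True if solution a Pareto-dominates b (all objectives minimized)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(points):
        """Return the non-dominated subset of a list of objective vectors."""
        return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

    # Hypothetical (PEV, OCI) pairs for four candidate parameter sets
    candidates = [(12.0, 1650.0), (9.5, 1700.0), (12.5, 1655.0), (10.0, 1620.0)]
    print(pareto_front(candidates))   # -> [(9.5, 1700.0), (10.0, 1620.0)]
    ```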

  15. Interactive benchmarking

    DEFF Research Database (Denmark)

    Lawson, Lartey; Nielsen, Kurt

    2005-01-01

    distance functions. The frontier is given by an explicit quantile, e.g. “the best 90 %”. Using the explanatory model of the inefficiency, the user can adjust the frontiers by submitting state variables that influence the inefficiency. An efficiency study of Danish dairy farms is implemented......We discuss individual learning by interactive benchmarking using stochastic frontier models. The interactions allow the user to tailor the performance evaluation to preferences and explore alternative improvement strategies by selecting and searching the different frontiers using directional...... in the suggested benchmarking tool. The study investigates how different characteristics on dairy farms influences the technical efficiency....

  16. Enhancing Global Competitiveness: Benchmarking Airline Operational Performance in Highly Regulated Environments

    Science.gov (United States)

    Bowen, Brent D.; Headley, Dean E.; Kane, Karisa D.

    1998-01-01

    Enhancing competitiveness in the global airline industry is at the forefront of attention for airlines, governments, and the flying public. The seemingly unchecked growth of major airline alliances is heralded as an enhancement to global competition. However, like many mega-conglomerates, mega-airlines will face complications driven by size, regardless of the many recitations of enhanced efficiency. Outlined herein is a conceptual model to serve as a decision tool for policy-makers, managers, and consumers of airline services. This model is developed using public data for the United States (U.S.) major airline industry available from the U.S. Department of Transportation, the Federal Aviation Administration, the National Aeronautics and Space Administration, the National Transportation Safety Board, and other public and private sector sources. Data points include number of accidents, pilot deviations, operational performance indicators, flight problems, and other factors. Data from these sources provide the opportunity to develop a model based on the dot product of two vectors: a row vector of importance weights, assigned by a key informant panel of government, industry, and consumer experts, and a column vector of factor values. The resulting equation, known as the Airline Quality Rating (AQR), where Q is quality, C_i is the weight of factor i, and V_i is the value of that factor, is stated Q = sum of C_i x V_i over i = 1, ..., 19. Looking at historical patterns of AQR results provides the basis for establishing an industry benchmark for the purpose of enhancing airline operational performance. A 7-year average of overall operational performance provides the resulting benchmark indicator. Applications from this example can be applied to the many competitive environments of the global industry and can assist policy-makers faced with rapidly changing regulatory challenges.
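
    Read as a plain dot product, the AQR reduces to a one-line computation; the sketch below uses invented weights and factor values purely to make the arithmetic concrete (the actual 19 weights and factors are not reproduced here).

    ```python
    # Hypothetical illustration of the AQR dot product over 19 weighted factors,
    # with negative weights for undesirable factors such as accident counts.
    weights = [9.0, -8.1, 7.5] + [1.0] * 16      # C_i: panel-assigned (invented)
    values  = [0.82, 0.02, 0.91] + [0.5] * 16    # V_i: observed factor values (invented)

    aqr = sum(c * v for c, v in zip(weights, values))
    print(f"AQR = {aqr:.3f}")
    ```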

  17. Benchmarking a DSP processor

    OpenAIRE

    Lennartsson, Per; Nordlander, Lars

    2002-01-01

    This Master thesis describes the benchmarking of a DSP processor. Benchmarking means measuring the performance in some way. In this report, we have focused on the number of instruction cycles needed to execute certain algorithms. The algorithms we have used in the benchmark are all very common in signal processing today. The results we have reached in this thesis have been compared to benchmarks for other processors, performed by Berkeley Design Technology, Inc. The algorithms were programm...

  18. Design and performance benchmark of various architectures of a piezoelectric bimetallic strip heat engine

    Science.gov (United States)

    Boughaleb, J.; Arnaud, A.; Monfray, S.; Cottinet, P. J.; Quenard, S.; Boeuf, F.; Guyomar, D.; Skotnicki, T.

    2016-06-01

    This paper deals with the investigation of a thermal energy harvester based on the coupling of a piezoelectric membrane and a bimetallic strip heat engine. The general working principle of the device is a double conversion mechanism: thermal energy is first converted into mechanical energy by means of a bimetallic strip, and the mechanical energy is then converted into electricity with a piezoelectric membrane. The paper focuses on the study and optimization of the harvester's design. First, the piezoelectric membrane configuration is studied to find the most efficient way to convert mechanical energy into electricity. A benchmark of various piezoelectric materials is then presented to point out the most efficient ones. Finally, our study focuses on the bimetallic strip's properties: the effects of its dimensions and of its thermal hysteresis on the harvester's performance are studied and compared. Thanks to these different steps, we were able to point out the best configuration for efficiently converting thermal heat flux into electricity.

  1. CALiPER Benchmark Report: Performance of Incandescent A Type and Decorative Lamps and LED Replacements

    Energy Technology Data Exchange (ETDEWEB)

    Lingard, R. D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Myer, M. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Paget, M. L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2008-11-01

    This benchmark report addresses common omnidirectional incandescent lamps - A-type and small decorative, candelabra-type lamps - and their commercially available light-emitting diode (LED) replacements.

  2. FINANCIAL AUDIT AND BENCHMARKING IN THE CONSTRUCTION INDUSTRY - A STEP TOWARDS PERFORMANCE

    Directory of Open Access Journals (Sweden)

    GRIGORE MARIAN

    2015-07-01

    Full Text Available Knowledge of, and the professional application of, the legislation and professional reasoning related to control in the field, and of control methods and procedures, is one of the essential premises that ensure efficiency and finality in auditing the patrimony of an entity in the construction field. A financial audit, which aims at an integrated control, provides conclusions on the entire activity. It fully characterizes the efforts and the results, and it can also reveal faults, deficiencies and frauds in their entirety. Stocks cannot simply be taken at their balance-sheet value to conclude straight away that the entity has sufficient stocks and is performing well. A stock audit is necessary in order to verify the accordance between the records and the physical stocks or their movement. The same reasoning also applies to outstanding debts, purchases/sales of real estate, rents of real estate, verification of contractual obligations, declaration and actual payment of taxes to the state budget and the state social security budget, etc. The relationship between auditing and benchmarking is given precisely by the final result of a performance evaluation: in order to reach a correct result, correct data and financial indicators are necessary. Otherwise, the risk is to evaluate an entity as performing well shortly before it goes into bankruptcy. Benchmarking is a support instrument for decision-making, a continuous evaluation process, a means of looking for the best-performing methods to carry out a given activity. It is a system of information that allows an entity to shape its development strategy, a technique for determining its competitive advantages and for learning about its products, services and operations by comparing them with the best ones. This instrument is part of the flexible management techniques that are based on learning, on initiative, together with ABM

  3. Performance Monitoring of the Data-driven Subspace Predictive Control Systems Based on Historical Objective Function Benchmark

    Institute of Scientific and Technical Information of China (English)

    WANG Lu; LI Ning; LI Shao-Yuan

    2013-01-01

    In this paper, a historical objective function benchmark is proposed to monitor the performance of data-driven subspace predictive control systems. A new criterion for the selection of the historical data set can be used to monitor the controller's performance, instead of traditional methods based on prior knowledge. Under this monitoring framework, users can define their own index based on different demands and can also obtain a historical benchmark with better sensitivity. Finally, a distillation column simulation example is used to illustrate the validity of the proposed algorithms.
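
    A rough sketch of the flavor of index such a historical benchmark enables, assuming the benchmark is the objective function value attained over a well-performing historical window; the quadratic cost, window lengths, and alarm threshold are all illustrative assumptions, not the authors' design.

    ```python
    import numpy as np

    def objective(errors, inputs, lam=0.1):
        """Quadratic MPC-style cost over a window: tracking error + input effort."""
        return float(np.sum(errors ** 2) + lam * np.sum(inputs ** 2))

    rng = np.random.default_rng(2)
    # Historical (well-tuned) and current closed-loop data, synthetic stand-ins
    hist_cost = objective(rng.normal(0, 0.5, 500), rng.normal(0, 1.0, 500))
    curr_cost = objective(rng.normal(0, 0.9, 500), rng.normal(0, 1.0, 500))

    index = hist_cost / curr_cost          # 1 = benchmark-level performance
    if index < 0.7:                        # user-defined alarm threshold
        print(f"performance degraded: index = {index:.2f}")
    ```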

  4. MEASURING THE PERFORMANCE OF GUYANA’S CONSTRUCTION INDUSTRY USING A SET OF PROJECT PERFORMANCE BENCHMARKING METRICS

    Directory of Open Access Journals (Sweden)

    Christopher J. Willis

    2011-10-01

    Full Text Available A study measuring the performance of Guyana's construction industry using a set of project performance benchmarking metrics was recently completed. The underlying premise of the study was that the aggregated performance of construction projects provides a realistic assessment of the performance of the construction industry, on the basis that construction projects are the mechanism through which the construction industry creates its tangible products. The fact that an influential government agency acted as owner of the study was critical to the data collection phase. The best approach for collecting project performance data in Guyana involves the utilisation of a researcher or team of researchers mining electronic and hard copy project documents. This study analysed approximately 270 construction projects to obtain an indication of the performance of Guyana's construction industry. It was found that sea defence projects performed the worst, whereas health facility projects performed the best. The main implication of this is that sea defence projects are likely to be the least efficient and, given their critical nature, there is an argument for urgent performance improvement interventions.

  5. The COST Benchmark

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Tiesyte, Dalia; Tradisauskas, Nerius

    2006-01-01

    Several spatio-temporal indexes have been proposed, and more are underway. As a result, there is an increasing need for an independent benchmark for spatio-temporal indexes. This paper characterizes the spatio-temporal indexing problem and proposes a benchmark for the performance evaluation and comparison of spatio-temporal indexes. Notably, the benchmark…

  6. Multi-core processing and scheduling performance in CMS

    Science.gov (United States)

    Hernández, J. M.; Evans, D.; Foulkes, S.

    2012-12-01

    Commodity hardware is going many-core. We might soon not be able to satisfy the per-core job memory needs in the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources might significantly affect processing performance. It will be essential to effectively utilize the multi-core architecture. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry and conditions data, resulting in much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model of computing resource allocation, departing from the standard single-core allocation for a job. The experiment job management system needs control over a larger quantum of resources, since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging), but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present an evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues, compared to the standard single-core processing workflows.

  7. Numerics of High Performance Computers and Benchmark Evaluation of Distributed Memory Computers

    Directory of Open Access Journals (Sweden)

    H. S. Krishna

    2004-07-01

    Full Text Available The internal representation of numerical data and the speed with which they are manipulated to generate the desired result, through efficient utilisation of the central processing unit, memory, and communication links, are essential aspects of all high-performance scientific computation. Machine parameters, in particular, reveal the accuracy and error bounds of computation required for performance tuning of codes. This paper reports the diagnosis of machine parameters, the measurement of the computing power of several workstations and serial and parallel computers, and a component-wise test procedure for distributed memory computers. The hierarchical memory structure is illustrated by block copying and unrolling techniques. Locality of reference for cache reuse of data is amply demonstrated by fast Fourier transform codes. Cache- and register-blocking techniques result in their optimum utilisation, with a consequent gain in throughput during vector-matrix operations. Implementation of these memory management techniques reduces the cache inefficiency loss, which is known to be proportional to the number of processors. Of the Linux clusters ANUP16, HPC22 and HPC64, it has been found from the measurement of intrinsic parameters and from an application benchmark test run of a multi-block Euler code that ANUP16 is suitable for problems that exhibit fine-grained parallelism. The delivered performance of ANUP16 is of immense utility for developing high-end PC clusters like HPC64 and customised parallel computers, with the added advantage of speed and a high degree of parallelism.
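
    As a small illustration of the cache-blocking idea described here, sketched for a 2D transpose: copying one tile at a time keeps both the source and destination tiles resident in cache together. The payoff is realized in compiled code; the block size below is an assumption to be tuned per cache level.

    ```python
    import numpy as np

    def transpose_blocked(a, block=64):
        """Cache-blocked 2D transpose: copy one block x block tile at a time."""
        n, m = a.shape
        out = np.empty((m, n), dtype=a.dtype)
        for i in range(0, n, block):
            for j in range(0, m, block):
                tile = a[i:i + block, j:j + block]
                out[j:j + block, i:i + block] = tile.T
        return out

    a = np.arange(2048 * 2048, dtype=np.float64).reshape(2048, 2048)
    assert np.array_equal(transpose_blocked(a), a.T)
    ```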

  8. Performance Benchmarks for Scholarly Metrics Associated with Fisheries and Wildlife Faculty.

    Directory of Open Access Journals (Sweden)

    Robert K Swihart

    our methods, illustrate their use when applied to new data, and suggest future improvements. Our benchmarking approach may provide a useful tool to augment detailed, qualitative assessment of performance.

  9. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Peiyuan [Univ. of Colorado, Boulder, CO (United States); Brown, Timothy [Univ. of Colorado, Boulder, CO (United States); Fullmer, William D. [Univ. of Colorado, Boulder, CO (United States); Hauser, Thomas [Univ. of Colorado, Boulder, CO (United States); Hrenya, Christine [Univ. of Colorado, Boulder, CO (United States); Grout, Ray [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sitaraman, Hariswaran [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-29

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations, and cover a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
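
    For readers unfamiliar with the metric, weak scaling holds the work per core fixed while the core count grows, so ideal runtime stays flat; a minimal sketch of the efficiency calculation, with invented timings:

    ```python
    # Weak scaling: problem size grows with core count, so ideal runtime is flat.
    # Efficiency at N cores is E(N) = T(1) / T(N). Timings below are invented.
    timings = {1: 100.0, 8: 104.0, 64: 112.0, 512: 131.0, 4096: 190.0}

    t1 = timings[1]
    for cores, t in sorted(timings.items()):
        print(f"{cores:5d} cores: efficiency = {t1 / t:.2f}")
    ```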

  10. Funnel plot control limits to identify poorly performing healthcare providers when there is uncertainty in the value of the benchmark.

    Science.gov (United States)

    Manktelow, Bradley N; Seaton, Sarah E; Evans, T Alun

    2014-04-17

    There is an increasing use of statistical methods, such as funnel plots, to identify poorly performing healthcare providers. Funnel plots comprise the construction of control limits around a benchmark, and providers with outcomes falling outside the limits are investigated as potential outliers. The benchmark is usually estimated from observed data, but uncertainty in this estimate is usually ignored when constructing control limits. In this paper, the use of funnel plots in the presence of uncertainty in the value of the benchmark is reviewed for outcomes from a Binomial distribution. Two methods to derive the control limits are shown: (i) prediction intervals; (ii) tolerance intervals. Tolerance intervals formally include the uncertainty in the value of the benchmark, while prediction intervals do not. The probability properties of 95% control limits derived using each method were investigated through hypothesised scenarios. Neither prediction intervals nor tolerance intervals produce funnel plot control limits that satisfy the nominal probability characteristics when there is uncertainty in the value of the benchmark. This is not to say that funnel plots have no role to play in healthcare, but without the development of intervals satisfying the nominal probability characteristics they must be interpreted with care.
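
    A minimal sketch of prediction-interval control limits for a Binomial outcome around a benchmark proportion treated as known, which is precisely the simplification the paper questions; scipy is assumed, and the benchmark p0, caseloads, and flagged provider are invented.

    ```python
    import numpy as np
    from scipy.stats import binom

    p0 = 0.08                      # benchmark event proportion, treated as known
    n = np.arange(20, 2001)        # provider caseloads

    # 95% prediction-interval limits for the observed proportion; plotted
    # against n, these trace the characteristic funnel around p0.
    lower = binom.ppf(0.025, n, p0) / n
    upper = binom.ppf(0.975, n, p0) / n

    # Flag a hypothetical provider: 14 events in 90 cases
    cases, events = 90, 14
    rate = events / cases
    hi_limit = binom.ppf(0.975, cases, p0) / cases
    print(f"observed {rate:.3f} vs upper limit {hi_limit:.3f}",
          "-> investigate" if rate > hi_limit else "-> in control")
    ```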

  11. Performance Characteristics of Hybrid MPI/OpenMP Implementations of NAS Parallel Benchmarks SP and BT on Large-Scale Multicore Clusters

    KAUST Repository

    Wu, X.

    2011-07-18

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore clusters provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used for data sharing among the cores that comprise a node, and MPI can be used for communication between nodes. In this paper, we use the Scalar Pentadiagonal (SP) and Block Tridiagonal (BT) benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore clusters: Intrepid (BlueGene/P) at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on Intrepid and Jaguar. We also use performance tools and MPI trace libraries available on these clusters to further investigate the performance characteristics of the hybrid SP and BT.
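
    The hybrid pattern itself can be sketched compactly, here in Python assuming mpi4py is installed: MPI ranks communicate between nodes while a thread pool shares each rank's data among its cores, standing in for OpenMP. This is a generic illustration of the pattern, not the NPB code.

    ```python
    # Hybrid parallelism sketch: MPI between processes, threads within each one.
    # Assumes mpi4py; run with e.g. `mpiexec -n 4 python hybrid.py`.
    from concurrent.futures import ThreadPoolExecutor

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each rank owns a slab of the global array (shared within the "node")
    local = np.full(1_000_000, rank, dtype=np.float64)

    def partial_sum(chunk):
        return float(np.sum(chunk))

    # Intra-node parallelism: threads (standing in for OpenMP) over the shared slab
    with ThreadPoolExecutor(max_workers=4) as pool:
        local_sum = sum(pool.map(partial_sum, np.array_split(local, 4)))

    # Inter-node parallelism: MPI reduction across ranks
    total = comm.allreduce(local_sum, op=MPI.SUM)
    if rank == 0:
        print(f"global sum over {size} ranks = {total:.0f}")
    ```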

  12. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-03-29

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used for data sharing among the cores that comprise a node and MPI can be used for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.

  13. HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Mark; Brown, Jed; Shalf, John; Straalen, Brian Van; Strohmaier, Erich; Williams, Sam

    2014-05-05

    This document provides an overview of the benchmark HPGMG for ranking large-scale general-purpose computers for use on the Top500 list [8]. We provide a rationale for the need for a replacement for the current metric, HPL, some background on the Top500 list and the challenges of developing such a metric, a discussion of our design philosophy and methodology, and an overview of the specification of the benchmark. The primary documentation, with maintained details on the specification, can be found at hpgmg.org, and the Wiki and the benchmark code itself can be found in the repository https://bitbucket.org/hpgmg/hpgmg.

  14. Benchmarking in the process of donation after brain death: a methodology to identify best performer hospitals.

    Science.gov (United States)

    Matesanz, R; Coll, E; Domínguez-Gil, B; de la Rosa, G; Marazuela, R; Arráez, V; Elorrieta, P; Fernández-García, A; Fernández-Renedo, C; Galán, J; Gómez-Marinero, P; Martín-Delagebasala, C; Martín-Jiménez, S; Masnou, N; Salamero, P; Sánchez-Ibáñez, J; Serna, E; Martínez-Soba, F; Pastor-Rodríguez, A; Bouzas, E; Castro, P

    2012-09-01

    A benchmarking approach was developed in Spain to identify and spread critical success factors in the process of donation after brain death. This paper describes the methodology used to identify the best performer hospitals in the period 2003-2007, with 106 hospitals throughout the country participating in the project. The process of donation after brain death was structured into three phases: referral of possible donors after brain death (DBD) to critical care units (CCUs) from outside units, management of possible DBDs within the CCUs, and obtaining consent for organ donation. Indicators to assess performance in each phase were constructed, and the factors influencing these indicators were studied so that comparable groups of hospitals could be established. Availability of neurosurgery and CCU resources had a positive impact on the referral of possible DBDs to CCUs, and hospitals with fewer annual potential DBDs more frequently achieved 100% consent rates. Hospitals were grouped for each subprocess according to the influencing factors. Hospitals with the best results were identified for each phase and hospital group. The subsequent study of their practices will lead to the identification of critical success factors which, implemented in an adapted way, should lead to increased organ availability. © Copyright 2012 The American Society of Transplantation and the American Society of Transplant Surgeons.

  15. EFFECT OF DIVIDED CORE ON THE BENDING PERFORMANCES OF TEXTILE REINFORCED FOAM CORE SANDWICH COMPOSITES

    Directory of Open Access Journals (Sweden)

    ALPYILDIZ Tuba

    2016-05-01

    Sandwich composites are widely used in marine applications, wind turbines, and space and aircraft vehicles owing to their high bending rigidity combined with light weight. The objective of this study is to investigate the effect of a divided foam core with an interlayer sheet of glass fabric on the bending performance of sandwich composites manufactured with glass fabrics as the facesheets/interlayer sheets and PVC foam as the core material. Sandwich composites with single and divided cores were manufactured and their flexural behaviour compared via three-point bending tests. It was found that the divided core enhances bending performance without changing the character of the composite's response to bending deformation. For certain applications, dividing the core is recommended over perforating it as a way to increase the bending stiffness and strength of textile-reinforced sandwich composites, because core material can be purchased in any thickness and no additional process such as perforation is needed. The proposed approach can enhance bending performance without altering the weight or cost of the sandwich composites, which are preferred for their high bending rigidity relative to their light weight.

  16. Ward based community road safety performance benchmarking, monitoring and intervention programmes in the City of Johannesburg

    CSIR Research Space (South Africa)

    Ribbens, H

    2008-07-01

    ...benchmarking, monitoring and intervention programme. Community road safety needs in the respective wards are articulated through the ward councillor. The rationale is that the community knows exactly where these problem areas are, because they suffer as a...

  17. THE EPS OF THE IFRS AS A BENCHMARK OF CORPORATE PERFORMANCE

    Directory of Open Access Journals (Sweden)

    Kiss Agota

    2015-07-01

    The measurement of corporate performance and of the efficient and effective use of resources plays an increasing role nowadays. In a globalizing and strongly competitive market environment, adequate, up-to-date, reliable and accurate information is indispensable for companies to operate efficiently. Accounting is the part of the corporate information system whose primary objective is to capture economic changes and to present their effect on the wealth and income of companies. Corporate performance is interpreted in many ways, and there is an extensive literature discussing performance measurement; depending on the objective and the interested parties, there are several methods, from simple indicators to more complicated models. According to the most frequently used definition, performance measurement is the process of measuring the effectiveness and efficiency of activities (Neely et al., 1995). Based on companies' accounting information, many performance indicators can be constructed that serve as useful benchmarks. Companies listed on a stock exchange must put special emphasis on measuring their performance and presenting it in the financial statements, compared to non-listed companies, as investors' primary aim is to maximize the returns on their investments. National-level regulations in this area are not unified; hence the comparison of companies is problematic. Listed companies must present their financial statements in accordance with the International Financial Reporting Standards (IFRS). The standard setters recognised the lack of comparability resulting from non-unified performance measurement. In their view, earnings per share (EPS) is a comparable indicator that gives a consistent picture of companies' earnings, so in 1997 they issued the IAS 33 'Earnings per share' standard. IAS 33 provides a standardized method to calculate the EPS that
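
    Under IAS 33, basic EPS in essence divides the profit attributable to ordinary shareholders (net income less preferred dividends) by the time-weighted average number of ordinary shares outstanding. The sketch below, with entirely hypothetical figures and a simplified day-count convention, shows the mechanics.

    ```python
    from datetime import date

    def weighted_average_shares(events, year_start, year_end_exclusive):
        """Time-weighted ordinary shares outstanding over [year_start,
        year_end_exclusive). `events` lists (effective_date, shares) changes."""
        days = (year_end_exclusive - year_start).days
        total = 0.0
        for i, (start, shares) in enumerate(events):
            end = events[i + 1][0] if i + 1 < len(events) else year_end_exclusive
            total += shares * (end - start).days / days
        return total

    # Hypothetical issuer: 1,000,000 shares all year, 200,000 more issued 1 July
    events = [(date(2014, 1, 1), 1_000_000), (date(2014, 7, 1), 1_200_000)]
    wavg = weighted_average_shares(events, date(2014, 1, 1), date(2015, 1, 1))
    net_income, preferred_dividends = 3_300_000, 300_000
    print(f"basic EPS: {(net_income - preferred_dividends) / wavg:.2f}")
    ```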

  18. Performance Benchmarking of tsunami-HySEA for NTHMP Inundation Mapping Activities

    Science.gov (United States)

    González Vida, Jose M.; Castro, Manuel J.; Ortega Acosta, Sergio; Macías, Jorge; Millán, Alejandro

    2016-04-01

    According to the 2006 USA Tsunami Warning and Education Act, the tsunami inundation models used in National Tsunami Hazard Mitigation Program (NTHMP) projects must be validated against existing standard problems (see [OAR-PMEL-135], [Proceedings of the 2011 NTHMP Model Benchmarking Workshop]). These Benchmark Problems (BPs) cover different tsunami processes related to the inundation stage that the models must reproduce to achieve approval by the NTHMP Mapping and Modeling Subcommittee (MMS). Tsunami-HySEA solves the two-dimensional shallow-water system using a high-order path-conservative finite volume method. Values of h, qx and qy in each grid cell represent cell averages of the water depth and momentum components. The numerical scheme is conservative for both mass and momentum in flat bathymetries and, in general, is mass preserving for arbitrary bathymetries. Tsunami-HySEA implements a PVM-type method that uses the fastest and the slowest wave speeds, similar to the HLL method (see [Castro et al, 2012]). A general overview of the derivation of the high-order methods is given in [Castro et al, 2009]. For very large domains, Tsunami-HySEA also implements a two-step scheme similar to leap-frog for the propagation step and a second-order TVD-WAF flux-limiter scheme, described in [de la Asunción et al, 2013], for the inundation step. Here, we present the results obtained by the Tsunami-HySEA model on the proposed BPs: BP1, solitary wave on a simple beach (non-breaking, analytic experiment); BP4, solitary wave on a simple beach (breaking, laboratory experiment); BP6, solitary wave on a conical island (laboratory experiment); BP7, runup on Monai Valley beach (laboratory experiment); and BP9, Okushiri Island tsunami (field experiment). The analysis and results of the Tsunami-HySEA model are presented, concluding that the model meets the required objectives for all the proposed BPs. References - Castro M.J., E.D. Fernández, A.M. Ferreiro, A. García, C. Parés (2009
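
    For readers unfamiliar with the conservative finite-volume structure the abstract refers to, here is a deliberately simple 1D analogue: a first-order local Lax-Friedrichs scheme for the shallow-water equations on a dam-break problem. Tsunami-HySEA's actual PVM/HLL-type path-conservative solver is far more sophisticated; everything below is an illustrative toy.

    ```python
    import numpy as np

    # One first-order finite-volume update for the 1D shallow-water equations
    # U = (h, q), flux F = (q, q^2/h + g h^2/2), with the local Lax-Friedrichs
    # numerical flux -- simpler than HySEA's solvers, same conservative form.
    g = 9.81

    def flux(U):
        h, q = U
        return np.array([q, q**2 / h + 0.5 * g * h**2])

    def llf_step(U, dx, dt):
        c = np.abs(U[1] / U[0]) + np.sqrt(g * U[0])     # wave speed bound per cell
        a = np.maximum(c[:-1], c[1:])
        FL, FR = flux(U[:, :-1]), flux(U[:, 1:])
        F = 0.5 * (FL + FR) - 0.5 * a * (U[:, 1:] - U[:, :-1])
        Unew = U.copy()
        Unew[:, 1:-1] -= dt / dx * (F[:, 1:] - F[:, :-1])
        return Unew

    # Dam-break initial condition on [0, 1]
    N = 200
    dx = 1.0 / N
    h = np.where(np.arange(N) < N // 2, 2.0, 1.0)
    U = np.vstack([h, np.zeros(N)])
    for _ in range(100):
        U = llf_step(U, dx, dt=0.2 * dx / np.sqrt(g * 2.0))
    print("depth range:", U[0].min(), "-", U[0].max())
    ```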

  19. A minimum variance benchmark to measure the performance of pension funds in Mexico

    Directory of Open Access Journals (Sweden)

    Oscar V. De la Torre Torres

    2015-01-01

    In this article we propose the minimum-variance portfolio as the weighting method for a benchmark to measure the performance of pension funds in Mexico. This portfolio was contrasted against those obtained either with the maximum Sharpe ratio or with a linear combination of both methods. This was done with three discrete-event simulations using daily data from January 2002 to May 2013. Using the Sharpe ratio, the significance test of Jensen's alpha, and the spanning test of Huberman and Kandel (1987), we found that the simulated portfolios perform similarly. Applying the criteria of risk exposure, representativeness of the markets targeted for investment, and level of rebalancing proposed by Bailey (1992), we find that the minimum-variance method is preferable for measuring the performance of pension funds in Mexico.
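
    The minimum-variance weighting the authors propose has a standard closed form, w = S⁻¹1 / (1ᵀS⁻¹1), when short sales are unrestricted. The sketch below applies it to synthetic daily returns; a real pension-fund benchmark would add the regulatory and no-short-sale constraints the paper works under, so this is only the unconstrained textbook case.

    ```python
    import numpy as np

    def min_variance_weights(returns):
        """Closed-form minimum-variance weights w = S^{-1} 1 / (1' S^{-1} 1),
        where S is the sample covariance of asset returns (no constraints)."""
        S = np.cov(returns, rowvar=False)
        ones = np.ones(S.shape[0])
        w = np.linalg.solve(S, ones)
        return w / w.sum()

    rng = np.random.default_rng(0)
    daily = rng.normal(0.0003, 0.01, size=(2800, 5))   # ~11 years of fake daily returns
    print("benchmark weights:", np.round(min_variance_weights(daily), 3))
    ```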

  20. Benchmarking techniques for evaluation of compression transform performance in ATR applications

    Science.gov (United States)

    Schmalz, Mark S.

    2004-10-01

    Image compression is increasingly employed in applications such as medical imaging, to reduce data storage requirements, and Internet video transmission, to effectively increase channel bandwidth. Similarly, military applications such as automated target recognition (ATR) often employ compression to achieve storage and communication efficiencies, particularly to enhance the effective bandwidth of communication channels whose throughput suffers, for example, from overhead due to error correction/detection or encryption. In the majority of cases, lossy compression is employed due to the resultant low bit rates (high compression ratio). However, lossy compression produces artifacts in decompressed imagery that can confound ATR processes applied to such imagery, thereby reducing the probability of detection (Pd) and possibly increasing the rate or number of false alarms (Rfa or Nfa). In this paper, the authors' previous research on performance measurement of compression transforms is extended to include (a) benchmarking algorithms and software tools, (b) a suite of error exemplars designed to elicit compression transform behavior in an operationally relevant context, and (c) a posteriori analysis of performance data. The following transforms are applied to a suite of 64 error exemplars: Visual Pattern Image Coding (VPIC [1]), Vector Quantization with a fast codebook search algorithm (VQ [2,3]), JPEG and a preliminary implementation of JPEG 2000 [4,5], and EBLAST [6-8]. Compression ratios range from 2:1 to 200:1, and various noise levels and types are added to the error exemplars to produce a database of 7,680 synthetic test images. Several global and local (e.g., featural) distortion measures are applied to the decompressed test imagery to provide a basis for rate-distortion and rate-performance analysis as a function of noise and compression transform type.

  1. Core design and optimization of high performance low sodium void 1000 MWe heterogeneous oxide LMFBR cores

    Energy Technology Data Exchange (ETDEWEB)

    Barthold, W.P.; Orechwa, Y.; Su, S.F.; Beitel, J.C.; Turski, R.; Lam, P.S.K.; Fuller, E.L.

    1979-01-01

    Radially heterogeneous core configurations are an effective means to reduce sodium void reactivity. In general, radially heterogeneous cores can be designed as tightly or loosely coupled cores with center-core or center-blanket arrangements. Core height, number of core regions and number of fuel pins per assembly are additional variables in an optimization of basic heterogeneous core configurations. An extensive study was carried out to optimize the core configurations for 1000 MWe LMFBRs. All cores were subject to a common set of nuclear, mechanical, and thermal-hydraulic design assumptions. They were constrained by an upper sodium void reactivity limit of $2.50 and a doubling time of approximately 15 to 18 years. The screening and optimization procedures employed led to two core layouts, both tightly coupled. A complete nuclear analysis of these two cores (derived from a loosely coupled configuration/derived from a tightly coupled configuration) determined the fissile inventories (4268.4/4213.4 kg at BOEC), burnups (83.90/100.7 MWd/t peak), reactivity swings (0.49/1.8% Δk total), power and flux distributions for different control insertion patterns, the breeding performance (15.7/15.3 yr CSDT), the safety parameters, such as sodium void reactivity ($2.38/$2.23 at EOEC), isothermal Doppler coefficients for both sodium-in (T dk/dT = 45.6/46.1 × 10^-4 for the core at EOEC) and sodium-out conditions (T dk/dT = 28.6/28.2 × 10^-4 for the core at EOEC), and the transient behavior, which shows very little space dependence during a 60-cent reactivity step insertion.

  2. Implications of the Trauma Quality Improvement Project inclusion of nonsurvivable injuries in performance benchmarking.

    Science.gov (United States)

    Heaney, Jiselle Bock; Schroll, Rebecca; Turney, Jennifer; Stuke, Lance; Marr, Alan B; Greiffenstein, Patrick; Robledo, Rosemarie; Theriot, Amanda; Duchesne, Juan; Hunt, John

    2017-10-01

    The Trauma Quality Improvement Project (TQIP) uses an injury prediction model for performance benchmarking. We hypothesized that at a Level I high-volume penetrating trauma center, performance outcomes would be biased by the inclusion of patients with nonsurvivable injuries. A retrospective chart review was conducted for all patients included in the institutional TQIP analysis from 2013 to 2014 with a length of stay (LOS) of less than 1 day, to determine the survivability of their injuries. Observed/expected (O/E) mortality ratios were calculated before and after exclusion of these patients, and the completeness of the data reported to TQIP was examined. Eight hundred twenty-six patients were reported to TQIP, including 119 deaths. Nonsurvivable injuries accounted for 90.9% of the deaths in patients with an LOS of 1 day or less. The O/E mortality ratio for all patients was 1.061; after excluding all patients with LOS less than 1 day found to have nonsurvivable injuries, it was 0.895. Data for key variables were missing in 63.3% of patients who died in the emergency department, 50% of those taken to the operating room, and 0% of those admitted to the intensive care unit. Charts for patients who died with an LOS of less than 1 day were significantly more likely than those of patients who lived to be missing crucial data. This study shows that TQIP inclusion of patients with nonsurvivable injuries biases outcomes at an urban trauma center. Missing data result in imputation of values, increasing inaccuracy. Further investigation is needed to determine whether these findings exist at other institutions, and whether the current TQIP model needs revision to accurately identify and exclude patients with nonsurvivable injuries. Prognostic and epidemiological, level III.

  3. Core size effects on safety performances of LMRs

    Energy Technology Data Exchange (ETDEWEB)

    Na, Byung Chan; Hahn, Do Hee [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    A small oxide-fuel core (1200 MWt) was analyzed in comparison with a large core (3600 MWt) in order to evaluate the effect of core size on the transient safety performance of liquid-metal reactors (LMRs). In the first part of the study, the main static safety parameters (i.e., Doppler coefficient, sodium void effect, etc.) of the two cores were characterized; the second part focused on the dynamic behavior of the cores in two representative transient events: the unprotected loss-of-flow (ULOF) and the unprotected transient overpower (UTOP). Margins to fuel melting and sodium boiling were evaluated for these representative transients. Results show that the small core has a generally better or equivalent level of safety performance during these events. 6 refs., 4 figs., 2 tabs. (Author)

  4. BioBenchmark Toyama 2012: an evaluation of the performance of triple stores on biological data.

    Science.gov (United States)

    Wu, Hongyan; Fujiwara, Toyofumi; Yamamoto, Yasunori; Bolleman, Jerven; Yamaguchi, Atsuko

    2014-01-01

    Biological databases vary enormously in size and data complexity, from small databases that contain a few million Resource Description Framework (RDF) triples to large databases that contain billions of triples. In this paper, we evaluate whether RDF native stores can meet the needs of a biological database provider. Prior evaluations have used synthetic data of limited database size. For example, the largest BSBM benchmark uses 1 billion synthetic e-commerce RDF triples on a single node. However, real-world biological data differ considerably from such simple synthetic data, and it is difficult to determine whether synthetic e-commerce data are representative enough to stand in for biological databases. Therefore, for this evaluation, we used five real data sets from biological databases. We evaluated five triple stores, 4store, Bigdata, Mulgara, Virtuoso, and OWLIM-SE, with five biological data sets, Cell Cycle Ontology, Allie, PDBj, UniProt, and DDBJ, ranging in size from approximately 10 million to 8 billion triples. For each database, we loaded all the data onto a single node and prepared the database for use in a classical data warehouse scenario. Then, we ran a series of SPARQL queries against each endpoint and recorded the execution time and the accuracy of the query responses. Our paper shows that, with appropriate configuration, Virtuoso and OWLIM-SE can satisfy the basic requirements for loading and querying biological data of up to roughly 8 billion triples on a single node, with simultaneous access by 64 clients. OWLIM-SE performs best for databases with approximately 11 million triples; for data sets containing 94 million and 590 million triples, OWLIM-SE and Virtuoso perform best, without an overwhelming advantage over each other; for data over 4 billion triples, Virtuoso works best. 4store performs well on small data sets with limited features when the number of triples is less than 100 million, but our test shows its scalability is poor; Bigdata
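
    The measurement loop itself is simple: load data, issue a SPARQL query, record elapsed time and result count. The sketch below does this in-memory with rdflib against a hypothetical local file; the paper's evaluation, of course, ran against native store endpoints (4store, Bigdata, Mulgara, Virtuoso, OWLIM-SE), which at billions of triples cannot be replaced by an in-memory graph.

    ```python
    import time
    from rdflib import Graph

    # Load a (small) RDF file and time a SPARQL query against it, recording
    # elapsed time and result count as in a warehouse-style evaluation.
    g = Graph()
    g.parse("sample.ttl", format="turtle")   # hypothetical local data file

    query = """
    SELECT ?s ?o WHERE { ?s ?p ?o } LIMIT 100
    """

    t0 = time.perf_counter()
    rows = list(g.query(query))
    elapsed = time.perf_counter() - t0
    print(f"{len(rows)} rows in {elapsed * 1000:.1f} ms")
    ```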

  5. Mathematical errors made by high performing candidates writing the National Benchmark Tests

    Directory of Open Access Journals (Sweden)

    Carol A. Bohlmann

    2017-04-01

    When the National Benchmark Tests (NBTs) were first considered, it was suggested that the results would assess entry-level students' academic and quantitative literacy and mathematical competence, assess the relationships between higher education entry-level requirements and school-level exit outcomes, provide a service to higher education institutions with regard to selection and placement, and assist with curriculum development, particularly in relation to foundation and augmented courses. We recognise there is a need for better communication of the findings arising from analysis of test data, in order to inform teaching and learning and thus attempt to narrow the gap between basic education outcomes and higher education requirements. Specifically, we focus on the identification of mathematical errors made by those who performed in the upper third of the cohort of test candidates. This information may help practitioners in basic and higher education. The NBTs became operational in 2009, and data have been systematically accumulated and analysed since. Here, we provide some background to the data, discuss some of the issues relevant to mathematics, present some of the common errors and problems in conceptual understanding identified in data collected from the Mathematics (MAT) tests in 2012 and 2013, and suggest how this could be used to inform mathematics teaching and learning. While teachers may anticipate some of these issues, it is important to note that the identified problems are exhibited by the top third of those who wrote the Mathematics NBTs. This group will constitute a large proportion of first-year students in mathematically demanding programmes. Our aim here is to raise awareness in higher education and at school level of the extent of the common errors and problems in conceptual understanding of mathematics. We cannot analyse all possible interventions that could be put in place to remediate the identified mathematical problems, but we do

  6. Benchmark Comparison of Dual- and Quad-Core Processor Linux Clusters with Two Global Climate Modeling Workloads

    Science.gov (United States)

    McGalliard, James

    2008-01-01

    This viewgraph presentation details the science and systems environments that the NASA High-End Computing program serves, including a discussion of the workload involved in global climate modeling. The Goddard Earth Observing System Model, Version 5 (GEOS-5) is a system of models integrated using the Earth System Modeling Framework (ESMF). The GEOS-5 system was used for the benchmark tests, and the results of the tests are shown and discussed. Tests were also run for the Cubed Sphere system; results for these tests are also shown.

  7. Performance implications from sizing a VM on multi-core systems: A data analytics application's view

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Seung-Hwan [ORNL; Horey, James L [ORNL; Begoli, Edmon [ORNL; Yao, Yanjun [University of Tennessee, Knoxville (UTK); Cao, Qing [University of Tennessee, Knoxville (UTK)

    2013-01-01

    In this paper, we present a quantitative performance analysis of data analytics applications running on multi-core virtual machines. Such environments form the core of cloud computing. In addition, data analytics applications, such as Cassandra and Hadoop, are becoming increasingly popular on cloud computing platforms. This convergence necessitates a better understanding of the performance and cost implications of such hybrid systems. For example, the very first step in hosting applications in virtualized environments requires the user to configure the number of virtual processors and the size of memory. To understand the performance implications of this step, we benchmarked three Yahoo Cloud Serving Benchmark (YCSB) workloads in a virtualized multi-core environment. Our measurements indicate that the performance of Cassandra on YCSB workloads does not depend heavily on the processing capacity of the system, whereas the size of the data set relative to allocated memory is critical to performance. We also identified a strong relationship between the running time of workloads and various hardware events (last-level cache loads, misses, and CPU migrations). From this analysis, we provide several suggestions to improve the performance of data analytics applications running in cloud computing environments.

  8. Benchmarking in University Toolbox

    Directory of Open Access Journals (Sweden)

    Katarzyna Kuźmicz

    2015-06-01

    In the face of global competition and the rising challenges that higher education institutions (HEIs) meet, it is imperative to increase the innovativeness and efficiency of their management. Benchmarking can be the appropriate tool in the search for a point of reference necessary to assess an institution's competitive position and to learn from the best in order to improve. The primary purpose of the paper is to present an in-depth analysis of benchmarking applications in HEIs worldwide. The study indicates the premises for using benchmarking in HEIs and contains a detailed examination of the types, approaches and scope of benchmarking initiatives. This thorough insight into benchmarking applications enabled the development of a classification of benchmarking undertakings in HEIs. The paper includes a review of the most recent benchmarking projects, relating them to the classification according to the elaborated criteria (geographical range, scope, type of data, subject, support and continuity). The presented examples were chosen to exemplify different approaches to benchmarking in the higher education setting. The study was performed on the basis of published reports from benchmarking projects, the scientific literature, and the author's experience of active participation in benchmarking projects. The paper concludes with recommendations, derived from the conducted analysis, for university managers undertaking benchmarking.

  9. Concepts for benchmarking of homogenisation algorithm performance on the global scale

    Directory of Open Access Journals (Sweden)

    K. Willett

    2014-06-01

    The International Surface Temperature Initiative (ISTI) is striving towards substantively improving our ability to robustly understand historical land surface air temperature change at all scales. A key recently completed first step has been collating all available records into a comprehensive, open-access, traceable and version-controlled databank. The crucial next step is to maximise the value of the collated data through a robust international framework of benchmarking and assessment for product intercomparison and uncertainty estimation. We focus on uncertainties arising from the presence of inhomogeneities in monthly surface temperature data and from the varied methodological choices made by various groups in building homogeneous temperature products. The central facet of the benchmarking process is the creation of global-scale synthetic analogs to the real-world database in which both the "true" series and the inhomogeneities are known (a luxury the real-world data do not afford us), so that algorithmic strengths and weaknesses can be meaningfully quantified and conditional inferences made about the real-world climate system. Here we discuss the framework necessary for developing an international homogenisation benchmarking system for monthly mean temperatures on the global scale. The value of this framework is critically dependent upon the number of groups taking part, and so we strongly advocate involvement in the benchmarking exercise from as many data analyst groups as possible, to make the best use of this substantial effort.

  10. Benchmarking the environmental performance of specialized dairy production systems: selection of a set of indicators

    NARCIS (Netherlands)

    Mu, W.; Middelaar, van C.E.; Bloemhof, J.M.; Boer, de I.J.M.

    2014-01-01

    Benchmarking the environmental impacts of dairy production systems across the world can provide insights into their potential for improvement. However, collection of high-quality data for an environmental impact assessment can be difficult and time consuming. Based on a dataset of 55 dairy farms

  11. Benchmarking in Mobarakeh Steel Company

    OpenAIRE

    Sasan Ghasemi; Mohammad Nazemi; Mehran Nejati

    2008-01-01

    Benchmarking is considered as one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, as the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and illustrate how th...

  12. Benchmarking in Mobarakeh Steel Company

    Directory of Open Access Journals (Sweden)

    Sasan Ghasemi

    2008-05-01

    Benchmarking is considered one of the most effective ways of improving performance in companies. Although benchmarking in business organizations is a relatively new concept and practice, it has rapidly gained acceptance worldwide. This paper introduces the benchmarking project conducted in Esfahan's Mobarakeh Steel Company, the first systematic benchmarking project conducted in Iran. It aims to share the process deployed for the benchmarking project in this company and to illustrate how the project's systematic implementation led to success.

  13. Multi-core processing and scheduling performance in CMS

    CERN Document Server

    CERN. Geneva

    2012-01-01

    Commodity hardware is going many-core. We may soon be unable to satisfy the per-core job memory needs of the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware without sharing resources may significantly affect processing performance. It will be essential to utilize the multi-core architecture effectively. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry and conditions data, resulting in much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model of computing resource allocation, departing from the standard single-core allocation for a job. The experiment job management system needs to have control over a larger quantum of resource since multi-...

  14. High Performance Homes That Use 50% Less Energy Than the DOE Building America Benchmark Building

    Energy Technology Data Exchange (ETDEWEB)

    Christian, J.

    2011-01-01

    This document describes lessons learned from designing, building, and monitoring five affordable, energy-efficient test houses in a single development in the Tennessee Valley Authority (TVA) service area. This work was done through a collaboration of Habitat for Humanity Loudon County, the US Department of Energy (DOE), TVA, and Oak Ridge National Laboratory (ORNL). The houses were designed by a team led by ORNL and were constructed by Habitat's volunteers in Lenoir City, Tennessee. ZEH5, a two-story house and the last of the five test houses to be built, provided an excellent model for conducting research on affordable high-performance houses. The impressively low energy bills for this house have generated considerable interest from builders and homeowners around the country who want a similar home design that can be adapted to different climates. Because a design developed without the project constraints of ZEH5 would have more appeal for the mass market, plans for two houses were developed from ZEH5: a one-story design (ZEH6) and a two-story design (ZEH7). This report focuses on ZEH6, identical to ZEH5 except that the geothermal heat pump is replaced with a SEER 16 air-source unit (like that used in ZEH4). The report also contains plans for the ZEH6 house. ZEH5 and ZEH6 both use 50% less energy than the DOE Building America protocol for energy-efficient buildings. ZEH5 is a 4-bedroom, 2.5-bath, 2632 ft2 house with a home energy rating system (HERS) index of 43, which qualifies it for federal energy-efficiency incentives (a HERS rating of 0 is a zero-energy house, and a conventional new house would have a HERS rating of 100). This report is intended to help builders and homeowners build similar high-performance houses. Detailed specifications for the envelope and the equipment used in ZEH5 are compared with the Building America Benchmark building, and detailed drawings, specifications, and lessons learned in the construction and analysis of data gleaned

  15. Toward Alternative Metrics for Measuring Performance within Operational Contracting Squadrons: An Application of Benchmarking Techniques

    Science.gov (United States)

    1993-09-01

    Cooper (1991) states the sampling technique selected should depend upon the requirements of the project, its objectives, and the availability of funds... it was drawn (Emory, 1991:275). Quota sampling is a form of nonprobability sampling in which certain criteria are imposed. For this study, the... CONTRACTING SQUADRONS: AN APPLICATION OF BENCHMARKING TECHNIQUES THESIS Mark W. Fahrenkamp Mark P. Garst Captain, USAF Captain, USAF AFIT/GCM/LSM/93S-4 94

  16. High performance carbon nanotube-Si core-shell wires with a rationally structured core for lithium ion battery anodes.

    Science.gov (United States)

    Fan, Yu; Zhang, Qing; Lu, Congxiang; Xiao, Qizhen; Wang, Xinghui; Tay, Beng Kang

    2013-02-21

    Core-shell Si nanowires are very promising anode materials. Here, we synthesize vertically aligned carbon nanotubes (CNTs) with relatively large diameters and large inter-wire spacing as core wires and demonstrate a CNT-Si core-shell wire composite as a lithium-ion battery (LIB) anode. Owing to the rationally engineered core structure, the composite shows good capacity retention and rate performance, superior to most core-shell nanowires previously reported.

  17. Comparative performance of some popular artificial neural network algorithms on benchmark and function approximation problems

    Indian Academy of Sciences (India)

    V K Dhar; A K Tickoo; R Koul; B P Dubey

    2010-02-01

    We report an inter-comparison of some popular algorithms within the artificial neural network domain (viz., local search algorithms, global search algorithms, higher-order algorithms and the hybrid algorithms) by applying them to standard benchmarking problems like the IRIS data, XOR/N-bit parity and two-spiral problems. Apart from giving a brief description of these algorithms, the results obtained for the above benchmark problems are presented in the paper. The results suggest that while the Levenberg–Marquardt algorithm yields the lowest RMS error for the N-bit parity and the two-spiral problems, the higher-order neuron algorithm gives the best results for the IRIS data problem. The best results for the XOR problem are obtained with the neuro-fuzzy algorithm. The above algorithms were also applied to several regression problems such as cos(x) and a few special functions like the gamma function, the complementary error function and the upper-tail cumulative χ²-distribution function. The results of these regression problems indicate that, among all the ANN algorithms used in the present study, the Levenberg–Marquardt algorithm yields the best results. Keeping in view the highly non-linear behaviour and the wide dynamic range of these functions, it is suggested that these functions can also be considered as standard benchmark problems for function approximation using artificial neural networks.

  18. The Performance of ORC with NAS Benchmarks

    Institute of Scientific and Technical Information of China (English)

    林海波; 汤志忠

    2003-01-01

    Itanium is the first-generation processor based on the IA-64 architecture. ORC (Open Research Compiler) provides an open-source IPF (Itanium Processor Family) research compiler infrastructure. We have compiled and run the NAS Benchmarks on an Itanium machine. This paper briefly describes the performance of orcc, sgicc and gcc in three respects: execution time, compilation time, and executable file size. The results show that orcc has nearly the same performance as sgicc, which is about twice as fast as gcc in terms of execution time. We also find that even with the best-optimized program, the utilization ratio of processor resources is no more than 70%.

  19. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600

  20. A Survey of Benchmark Performance Testing

    Institute of Scientific and Technical Information of China (English)

    王良

    2006-01-01

    Benchmark testing is a widely applied and complex testing technique, and is currently the principal technique for evaluating information system performance. This paper summarizes benchmark testing specifications and methods, offers recommendations for selecting benchmark tests, and outlines the problems that must be solved when developing benchmark testing specifications. Finally, representative benchmark specifications and test suites are introduced.

  1. The Effect of Core Configuration on Thermal Barrier Thermal Performance

    Science.gov (United States)

    DeMange, Jeffrey J.; Bott, Robert H.; Druesedow, Anne S.

    2015-01-01

    Thermal barriers and seals are integral components in the thermal protection systems (TPS) of nearly all aerospace vehicles. They are used to minimize heat transfer through interfaces and gaps and protect underlying temperature-sensitive components. The core insulation has a significant impact on both the thermal and mechanical properties of compliant thermal barriers. Proper selection of an appropriate core configuration to mitigate conductive, convective and radiative heat transfer through the thermal barrier is challenging. Additionally, optimization of the thermal barrier for thermal performance may have counteracting effects on mechanical performance. Experimental evaluations have been conducted to better understand the effect of insulation density on permeability and leakage performance, which can significantly impact the resistance to convective heat transfer. The effect of core density on mechanical performance was also previously investigated and will be reviewed. Simple thermal models were also developed to determine the impact of various core parameters on downstream temperatures. An extended understanding of these factors can improve the ability to design and implement these critical TPS components.

  2. Analysing Student Performance Using Sparse Data of Core Bachelor Courses

    Science.gov (United States)

    Saarela, Mirka; Karkkainen, Tommi

    2015-01-01

    Curricula for Computer Science (CS) degrees are characterized by the strong occupational orientation of the discipline. In the BSc degree structure, with clearly separate CS core studies, the learning skills for these and other required courses may vary a lot, which is shown in students' overall performance. To analyze this situation, we apply…

  3. Comparative performance of some popular ANN algorithms on benchmark and function approximation problems

    CERN Document Server

    Dhar, V K; Tickoo, A K; Koul, R; Dubey, B P

    2009-01-01

    We report an inter-comparison of some popular algorithms within the artificial neural network domain (viz., Local search algorithms, global search algorithms, higher order algorithms and the hybrid algorithms) by applying them to the standard benchmarking problems like the IRIS data, XOR/N-Bit parity and Two Spiral. Apart from giving a brief description of these algorithms, the results obtained for the above benchmark problems are presented in the paper. The results suggest that while Levenberg-Marquardt algorithm yields the lowest RMS error for the N-bit Parity and the Two Spiral problems, Higher Order Neurons algorithm gives the best results for the IRIS data problem. The best results for the XOR problem are obtained with the Neuro Fuzzy algorithm. The above algorithms were also applied for solving several regression problems such as cos(x) and a few special functions like the Gamma function, the complimentary Error function and the upper tail cumulative $\\chi^2$-distribution function. The results of these ...

  4. Benchmarking of the construct of dimensionless correlations regarding batch bubble columns with suspended solids: Performance of the Pressure Transform Approach

    CERN Document Server

    Hristov, Jordan

    2010-01-01

    Benchmarking of dimensionless data correlations pertinent to batch bubble columns (BC) with suspended solids has been performed by the pressure transform approach (PTA). The main effort has addressed the correct definition of dimensionless groups, given that the solids dynamics and the bubble dynamics have different velocity and length scales. The correct definition of the initial set of variables in classical dimensional analysis depends mainly on the experience of the investigator, whereas the pressure transform approach avoids errors at this initial stage: PTA addresses the physics of the phenomena occurring in complex systems involving many phases and allows straightforward definitions of dimensionless numbers.

  5. Energy efficiency benchmarks and the performance of LEED rated buildings for Information Technology facilities in Bangalore, India

    Energy Technology Data Exchange (ETDEWEB)

    Sabapathy, Ashwin; Ragavan, Santhosh K.V.; Vijendra, Mahima; Nataraja, Anjana G. [Enzen Global Solutions Pvt Ltd, 90, Hosur Road, Madiwala, Bangalore 560 068 (India)

    2010-11-15

    This paper provides a summary of an energy benchmarking study that uses performance data from a sample of Information Technology facilities in Bangalore. Information provided by the sample of occupiers was used to develop an Energy Performance Index (EPI) and an Annual Average hourly Energy Performance Index (AAhEPI), which takes into account the variation in operating hours and days across these facilities. The EPI and AAhEPI were modelled to identify the factors that influence energy efficiency. Employment density, size of facility, operating hours per week, type of chiller and age of facility were found to be significant factors in regression models with EPI and AAhEPI as dependent variables. Employment density, size of facility and operating hours per week were standardised and used in a separate regression analysis, and the parameter estimates from this regression were used to normalize the EPI and AAhEPI for variance in the independent variables. Three benchmark ranges - the bottom third, middle third and top third - were developed for the two normalised indices. The normalised EPI and AAhEPI of the LEED-rated buildings, which were also part of the sample, indicate that, on average, LEED-rated buildings outperform the other buildings. (author)
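
    The normalization step described here amounts to regressing the index on the standardized drivers and subtracting their fitted contribution before banding. A sketch follows under synthetic data and made-up coefficients (the paper reports neither), purely to show the mechanics.

    ```python
    import numpy as np

    # Fit EPI on standardized drivers and subtract their fitted contribution,
    # mirroring a normalization for employment density, facility size and
    # weekly operating hours. All data and coefficients are synthetic.
    rng = np.random.default_rng(1)
    n = 60
    X = rng.normal(size=(n, 3))                  # already-standardized drivers
    epi = 180 + X @ np.array([12.0, -8.0, 15.0]) + rng.normal(0, 10, n)

    A = np.column_stack([np.ones(n), X])         # add intercept
    beta, *_ = np.linalg.lstsq(A, epi, rcond=None)
    epi_norm = epi - X @ beta[1:]                # remove driver effects

    # Tertile benchmark bands on the normalized index
    low, high = np.quantile(epi_norm, [1 / 3, 2 / 3])
    print(f"benchmark bands: <{low:.0f}, {low:.0f}-{high:.0f}, >{high:.0f}")
    ```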

  6. Relationship between core stability, functional movement, and performance.

    Science.gov (United States)

    Okada, Tomoko; Huxel, Kellie C; Nesser, Thomas W

    2011-01-01

    The purpose of this study was to determine the relationship between core stability, functional movement, and performance. Twenty-eight healthy individuals (age = 24.4 ± 3.9 yr, height = 168.8 ± 12.5 cm, mass = 70.2 ± 14.9 kg) performed several tests in 3 categories: core stability (flexion [FLEX], extension [EXT], right and left lateral [LATr/LATl]), functional movement screen (FMS) (deep squat [DS], trunk-stability push-up [PU], right and left hurdle step [HSr/HSl], in-line lunge [ILLr/ILLl], shoulder mobility [SMr/SMl], active straight leg raise [ASLRr/ASLRl], and rotary stability [RSr/RSl]), and performance tests (backward medicine ball throw [BOMB], T-run [TR], and single leg squat [SLS]). Statistical significance was set at p ≤ 0.05. There were significant correlations between SLS and FLEX (r = 0.500), LATr (r = 0.495), and LATl (r = 0.498). The TR correlated significantly with both LATr (r = 0.383) and LATl (r = 0.448). Of the FMS, BOMB was significantly correlated with HSr (r = 0.415), SMr (r = 0.388), PU (r = 0.407), and RSr (r = 0.391). The TR was significantly related with HSr (r = 0.518), ILLl (r = 0.462) and SMr (r = 0.392). The SLS only correlated significantly with SMr (r = 0.446). There were no significant correlations between core stability and FMS. The moderate to weak correlations identified suggest that core stability and FMS are not strong predictors of performance. In addition, existing assessments do not satisfactorily confirm the importance of core stability for functional movement. Despite the emphasis fitness professionals have placed on functional movement and core training for increased performance, our results suggest otherwise. Although training for core and functional movement are important to include in a fitness program, especially for injury prevention, they should not be the primary emphasis of any training program.

  7. Benchmark Results and Theoretical Treatments for Valence-to-Core X-ray Emission Spectroscopy in Transition Metal Compounds

    Energy Technology Data Exchange (ETDEWEB)

    Mortensen, Devon R.; Seidler, Gerald T.; Kas, Joshua J.; Govind, Niranjan; Schwartz, Craig; Pemmaraju, Das; Prendergast, David

    2017-09-20

    We report measurements of the valence-to-core (VTC) region of the K-shell x-ray emission spectra from several Zn and Fe inorganic compounds, and their critical comparison with several existing theoretical treatments. We find generally good agreement between the respective theories and experiment, and in particular find an important admixture of dipole and quadrupole character for Zn materials that is much weaker in Fe-based systems. These results, on materials whose simple crystal structures should not, a priori, pose deep challenges to theory, will prove useful in guiding the further development of DFT and time-dependent DFT methods for VTC-XES predictions and their comparison to experiment.

  8. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems and, for a broad range of data-intensive problems, to deliver an order-of-magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063, Storage Intensive Supercomputing, during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared software-only performance with GPU-accelerated performance. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40 GB parallel NAND Flash disk array, the Fusion-io. The Fusion system specs are as follows

  9. Research on the Benchmark Performance of the Hadoop Platform

    Institute of Scientific and Technical Information of China (English)

    张新玲; 颜秉珩

    2015-01-01

    With the rise of big data, software systems based on the open-source distributed computing framework Hadoop have reached deep into many areas of social life. Hadoop, an open-source big-data platform under Apache, is characterized by distribution, virtualization, high reliability, high scalability and generality. Over its eight years of development, from 2006 to the present, its integrated components have been upgraded from version 1.0 to 2.0. Starting from the Hadoop architecture, this paper analyzes and compares the benchmark performance of Hadoop platforms 1.0 and 2.0 through benchmark tests of TestDFSIO, YARN and Hive. By benchmarking the upgraded platform, the advantages of 2.0 are identified, providing a reference for integrating Hadoop platforms.

  10. Benchmark selection

    DEFF Research Database (Denmark)

    Hougaard, Jens Leth; Tvede, Mich

    2002-01-01

    Within a production-theoretic framework, this paper considers an axiomatic approach to benchmark selection. It is shown that two simple and weak axioms, efficiency and comprehensive monotonicity, characterize a natural family of benchmarks which typically becomes unique. Further axioms are added...

  11. Effect of operating temperature on LMFBR core performance

    Energy Technology Data Exchange (ETDEWEB)

    Noyes, R.C.; Bergeron, R.J.; di Lauro, G.F.; Kulwich, M.R.; Stuteville, D.W.

    1977-04-11

    The purpose of the study is to provide an engineering evaluation of high- and low-temperature LMFBR core designs. The study was conducted by C-E, supported by HEDL expertise in the areas of materials behavior, fuel performance, and fabrication/fuel cycle cost. The evaluation is based primarily on designs and analyses prepared by AI, GE and WARD during Phase I of the PLBR studies.

  12. Performance and Benchmarking of Multisurface UHF RFID Tags for Readability and Reliability

    Directory of Open Access Journals (Sweden)

    Joshua Bolton

    2017-01-01

    As the price of passive radio frequency identification (RFID) tags continues to decrease, more and more companies are considering item-level tagging. Although the use of RFID is simple, its proper application should be studied to achieve maximum efficiency and utilization in industry. This paper presents the test results of various multisurface UHF tags from different manufacturers for their readability under varying conditions, such as the orientation of tags with respect to the reader, the distance of the tag from the reader, and the materials used for embedding tags. These conditions can affect the reliability of RFID systems used for varied applications. In this paper, we implement a Design for Six Sigma Research (DFSS-R) methodology that allows for reliability testing of RFID systems. We showcase our results on the benchmarking of UHF RFID tags and put forward an important observation about the blind spots observed at different distances and orientations along different surfaces, which are primarily due to the polarization of the chosen antenna.

  13. Performance of FORTRAN and C GPU Extensions for a Benchmark Suite of Fourier Pseudospectral Algorithms

    CERN Document Server

    Cloutier, B; Rigge, P

    2012-01-01

    A comparison of PGI OpenACC, FORTRAN CUDA, and Nvidia CUDA pseudospectral methods on a single GPU, and of GCC FORTRAN on single and multiple CPU cores, is reported. The GPU implementations use CuFFT and the CPU implementations use FFTW. Porting pre-existing FORTRAN codes to utilize a GPU is efficient and easy to implement with OpenACC and CUDA FORTRAN. Example programs are provided.
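
    The kernel such a suite spends its time in is the FFT-based spectral derivative. A minimal NumPy version is shown below purely to illustrate the algorithmic pattern being timed; the benchmarked codes do this in FORTRAN with FFTW on CPUs and CuFFT on GPUs.

    ```python
    import numpy as np

    # Pseudospectral derivative via FFT: transform, multiply by ik, invert.
    N = 256
    x = 2 * np.pi * np.arange(N) / N
    u = np.sin(3 * x)
    ik = 1j * np.fft.fftfreq(N, d=1.0 / N)          # spectral wavenumbers ik
    du = np.real(np.fft.ifft(ik * np.fft.fft(u)))   # du/dx
    print("max error:", np.abs(du - 3 * np.cos(3 * x)).max())
    ```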

  14. How to Advance TPC Benchmarks with Dependability Aspects

    Science.gov (United States)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require measuring their performances. While TPC-E measures the recovery time of some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.

  15. Enhancing Fatigue Performance of Sandwich Composites with Nanophased Core

    Directory of Open Access Journals (Sweden)

    S. Zainuddin

    2010-01-01

    We report the fatigue performance of sandwich composites with a nanophased core under shear load. The nanophased core was made from polyurethane foam dispersed with carbon nanofiber (CNF). CNFs were dispersed into part A of the liquid polyurethane through a sonication process at a nanoparticle loading of 1.0 wt%. After dispersion, part A was mixed with part B, cast into a mold, and allowed to cure. The nanophased foam was then used to fabricate sandwich composites. Static shear tests revealed that the strength and modulus of the nanophased foams were 33% and 19% higher, respectively, than those of unreinforced (neat) foams. Next, shear fatigue tests were conducted at a frequency of 3 Hz and a stress ratio (R) of 0.1. S-N curves were generated and fatigue performances compared. The number of cycles to failure for the nanophased sandwich was significantly higher than that of the neat one; for example, at 57% of the ultimate shear strength, the nanophased sandwich survived about 400,000 cycles more than its neat counterpart. SEM micrographs indicated stronger cell structures in the nanophased foams. These stronger cells strengthened the sub-interface zones underneath the actual core-skin interface, and the high toughness of the sub-interface layer delayed the initiation of fatigue cracks, thereby increasing the fatigue life of the nanophased sandwich composites.

  16. Benchmarking the performance of pairwise homogenization of surface temperatures in the United States

    Science.gov (United States)

    Menne, M. J.; Williams, C. N.; Thorne, P. W.

    2013-09-01

    Changes in the circumstances behind in situ temperature measurements often lead to shifts in individual station records that can cause over- or under-estimates of the local and regional temperature trends. Since these shifts are comparable in magnitude to climate change signals, homogeneity "corrections" are necessary to make the records suitable for climate analysis. To quantify the effectiveness of surface temperature homogenization in the United States, a randomized perturbed ensemble of the pairwise homogenization algorithm was run against a suite of benchmark analogs to real monthly temperature data from the United States Cooperative Observer Program, which includes the subset of stations known as the United States Historical Climatology Network (USHCN). Results indicate that all randomized versions of the algorithm consistently produce homogenized data closer to the true climate signal in the presence of widespread systematic shifts in the data. When applied to the real-world observations, the randomized ensemble reinforces previous understanding that the two dominant sources of shifts in the U.S. temperature records are caused by changes to time of observation (spurious cooling in minimum and maximum) and conversion to electronic resistance thermometers (spurious cooling in maximum and warming in minimum). Trend bounds defined by the ensemble output indicate that maximum temperature trends are positive for the past 30, 50 and 100 years, and that these maximums contain pervasive negative shifts that cause the unhomogenized (raw) trends to fall below the lowest of the ensemble of homogenized trends. Moreover, because the residual impact of undetected/uncorrected shifts in the homogenized analogs is one-tailed when the imposed shifts have a positive or negative sign preference, it is likely that maximum temperature trends have been underestimated in the real-world homogenized temperature data from the USHCN. Trends for minimum temperature are also positive
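
    The pairwise algorithm is far more elaborate than an abstract can convey, but its core idea is that a shift at one station stands out as a step in the target-minus-neighbor difference series, because the shared regional signal cancels. A hypothetical toy illustration with synthetic data, not the USHCN algorithm itself:

      # Toy illustration of the first step of pairwise homogenization: a shift
      # in the target station shows up as a step in its difference series with
      # a neighbor, because the shared regional signal cancels. Synthetic data.
      import numpy as np

      rng = np.random.default_rng(0)
      months = 600
      regional = rng.normal(0.0, 0.5, months)          # shared climate signal
      target = regional + rng.normal(0.0, 0.2, months)
      neighbor = regional + rng.normal(0.0, 0.2, months)
      target[300:] -= 0.8                              # instrument change at month 300

      diff = target - neighbor                         # regional signal cancels here
      # crude step detector: split point with the largest jump in means
      scores = [abs(diff[:i].mean() - diff[i:].mean())
                for i in range(24, months - 24)]
      print("estimated breakpoint month:", 24 + int(np.argmax(scores)))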

  17. Study and Implementation of OLAP Performance Benchmark

    Institute of Scientific and Technical Information of China (English)

    赵博; 叶晓俊

    2011-01-01

    With the expansion of the business intelligence (BI) market, usability evaluation of on-line analytical processing (OLAP) systems has become a hot research issue in database applications. Evaluating OLAP system performance, as an efficiency characteristic, requires a performance benchmark. Building on the APB-1 benchmark introduced by the OLAP Council, a cube model for multidimensional databases and corresponding multidimensional expression (MDX) query templates were first designed, fixing shortcomings of the APB-1 ROLAP star model during the cube model design. Then, under identical test data and test parameters, query results from the designed MOLAP model were compared with those from the ROLAP model, confirming the correctness of the MOLAP model and MDX query template design. An OLAP performance testing workflow is then given, together with a tool framework supporting both ROLAP and MOLAP performance testing and its main modules. Finally, concurrent query experiments on a commercial database management system with both ROLAP and MOLAP validated the effectiveness of the framework. The proposed methods and their implementation provide multidimensional data model, business model, and tool support for future testing and evaluation of OLAP product performance.

  18. Food safety performance indicators to benchmark food safety output of food safety management systems

    NARCIS (Netherlands)

    Jacxsens, L.; Uyttendaele, M.; Devlieghere, F.; Rovira, J.; Oses Gomez, S.; Luning, P.A.

    2010-01-01

    There is a need to measure the food safety performance in the agri-food chain without performing actual microbiological analysis. A food safety performance diagnosis, based on seven indicators and corresponding assessment grids, has been developed and validated in nine European food businesses. Validation was conducted on the basis of an extensive microbiological assessment scheme (MAS).

  20. Food safety performance indicators to benchmark food safety output of food safety management systems.

    Science.gov (United States)

    Jacxsens, L; Uyttendaele, M; Devlieghere, F; Rovira, J; Gomez, S Oses; Luning, P A

    2010-07-31

    There is a need to measure the food safety performance in the agri-food chain without performing actual microbiological analysis. A food safety performance diagnosis, based on seven indicators and corresponding assessment grids, has been developed and validated in nine European food businesses. Validation was conducted on the basis of an extensive microbiological assessment scheme (MAS). The assumption behind the food safety performance diagnosis is that food businesses which evaluate the performance of their food safety management system in a more structured way and according to very strict and specific criteria will have a better insight into their actual microbiological food safety performance, because food safety problems will be more systematically detected. The diagnosis can be a useful tool to have a first indication about the microbiological performance of a food safety management system present in a food business. Moreover, the diagnosis can be used in quantitative studies to get insight into the effect of interventions on sector or governmental level.

  1. High Performance Ethernet Packet Processor Core for Next Generation Networks

    Directory of Open Access Journals (Sweden)

    Raja Jitendra Nayaka

    2012-10-01

    As the demand for high-speed Internet is significantly increasing to meet the requirements of large data transfers, real-time communication and High Definition (HD) multimedia transfer over IP, the architecture of IP-based network products must evolve and change. Application-specific processors require high performance, low power and a high degree of programmability, which is the limitation in many general processor-based applications. This paper describes the design of an Ethernet packet processor for system-on-chip (SoC) which performs all core packet processing functions, including segmentation and reassembly, packetization, classification, and route and queue management, which will speed up switching/routing performance, making it more suitable for Next Generation Networks (NGN). The Ethernet packet processor design can be configured for use with multiple projects targeted to an FPGA device; the system is designed to support 1/10/20/40/100 Gigabit links with a speed and performance advantage. VHDL has been used to implement and simulate the required functions in FPGA.

  2. Managing for Results in America's Great City Schools 2014: Results from Fiscal Year 2012-13. A Report of the Performance Measurement and Benchmarking Project

    Science.gov (United States)

    Council of the Great City Schools, 2014

    2014-01-01

    In 2002 the "Council of the Great City Schools" and its members set out to develop performance measures that could be used to improve business operations in urban public school districts. The Council launched the "Performance Measurement and Benchmarking Project" to achieve these objectives. The purposes of the project were to:…

  3. Numerical performance and throughput benchmark for electronic structure calculations in PC-Linux systems with new architectures, updated compilers, and libraries.

    Science.gov (United States)

    Yu, Jen-Shiang K; Hwang, Jenn-Kang; Tang, Chuan Yi; Yu, Chin-Hui

    2004-01-01

    A number of recently released numerical libraries, including the Automatically Tuned Linear Algebra Subroutines (ATLAS) library, Intel Math Kernel Library (MKL), GOTO numerical library, and AMD Core Math Library (ACML) for AMD Opteron processors, are linked against the executables of the Gaussian 98 electronic structure calculation package, which is compiled by updated versions of Fortran compilers such as Intel Fortran compiler (ifc/efc) 7.1 and PGI Fortran compiler (pgf77/pgf90) 5.0. The ifc 7.1 delivers about 3% improvement on 32-bit machines compared to the former version 6.0. The performance improvement from pgf77 3.3 to 5.0 is also around 3% when utilizing the original unmodified optimization options of the compiler enclosed in the software. Nevertheless, if extensive compiler tuning options are used, the speed can be further accelerated by about 25%. The performances of these fully optimized numerical libraries are similar. The double-precision floating-point (FP) instruction sets (SSE2) are also functional on AMD Opteron processors operated in 32-bit compilation, and the Intel Fortran compiler has performed better optimization. Hardware-level tuning is able to improve memory bandwidth by adjusting the DRAM timing, and the efficiency in the CL2 mode is further accelerated by 2.6% compared to that of the CL2.5 mode. The FP throughput is measured by simultaneous execution of two identical copies of each of the test jobs. The resultant performance impact suggests that the IA64 and AMD64 architectures are able to fulfill significantly higher throughput than the IA32, which is consistent with the SpecFPrate2000 benchmarks.
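
    The library effect the study measures can be reproduced in miniature by timing a double-precision matrix multiply through NumPy, which dispatches to whatever BLAS it was built against (MKL, OpenBLAS, and so on). A hypothetical sketch, not taken from the paper:

      # Times DGEMM through NumPy's linked BLAS; relinking NumPy against a
      # different BLAS changes this number, which is the effect the paper
      # measured for Gaussian 98 executables.
      import time
      import numpy as np

      n = 2000
      a = np.random.rand(n, n)
      b = np.random.rand(n, n)
      np.dot(a, b)                       # warm-up pass

      t0 = time.perf_counter()
      c = np.dot(a, b)
      dt = time.perf_counter() - t0
      print(f"DGEMM {n}x{n}: {dt:.3f} s, {2 * n**3 / dt / 1e9:.1f} GFLOP/s")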

  4. Toxicological Benchmarks for Wildlife

    Energy Technology Data Exchange (ETDEWEB)

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red
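
    The tier-1 screening logic described above reduces to comparing measured concentrations against benchmark values and retaining exceedances for further study. A schematic sketch; the chemical names are real, but every number below is invented for illustration:

      # Schematic tier-1 screen: retain a chemical as a contaminant of potential
      # concern (COPC) when its measured concentration exceeds the NOAEL-based
      # benchmark. All numbers below are invented for illustration.
      benchmarks_mg_per_kg = {"cadmium": 0.77, "zinc": 8.8, "toluene": 26.0}
      measured_mg_per_kg = {"cadmium": 1.5, "zinc": 3.1, "toluene": 40.2}

      copcs = [chem for chem, conc in measured_mg_per_kg.items()
               if conc > benchmarks_mg_per_kg[chem]]
      print("retain for baseline risk assessment:", copcs)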

  5. Elevations in core and muscle temperature impairs repeated sprint performance

    DEFF Research Database (Denmark)

    Drust, B.; Rasmussen, P.; Mohr, Magni

    2005-01-01

    ...on a cycle ergometer in normal (approximately 20 degrees C, control) and hot (40 degrees C, hyperthermia) environments. RESULTS: Completion of the intermittent protocol in the heat elevated core and muscle temperatures (39.5 +/- 0.2 degrees C; 40.2 +/- 0.4 degrees C), heart rate (178 +/- 11 beats min(-1)), rating of perceived exertion (RPE) (18 +/- 1) and noradrenaline (38.9 +/- 13.2 micromol l(-1)) (all P ...). ... power output were similar across the environmental conditions. However, mean power over the last four sprints declined to a larger extent following the hyperthermic sprints compared to control. CONCLUSION: Although an elevated muscle temperature is expected to promote sprint performance, power output during the repeated sprints was reduced by hyperthermia. The impaired performance does not seem to relate to the accumulation of recognized...

  6. Low cost satellite data for monthly irrigation performance monitoring: benchmarks from Nilo Coelho, Brazil

    NARCIS (Netherlands)

    Bastiaanssen, W.G.M.; Brito, R.A.L.; Bos, M.; Souza, R.A.; Cavalcanti, E.B.; Bakker, M.M.

    2001-01-01

    Irrigation performance indicators can help water managers to understand how an irrigation scheme operates under actual circumstances. The new contribution of remote sensing data is the opportunity to study the crop growing conditions at scales ranging from individual fields to scheme level. Public

  7. User-Centric Approach for Benchmark RDF Data Generator in Big Data Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Purohit, Sumit; Paulson, Patrick R.; Rodriguez, Luke R.

    2016-02-05

    This research focuses on a user-centric approach to building such tools and proposes a flexible, extensible, and easy-to-use framework to support performance analysis of Big Data systems. Finally, case studies from two different domains are presented to validate the framework.

  8. Assessing IT Management's Performance : A Design Theory for Strategic IT Benchmarking

    NARCIS (Netherlands)

    Ebner, Katharina; Mueller, Benjamin; Urbach, Nils; Riempp, Gerold; Krcmar, Helmut

    2016-01-01

    Given the continued economic pressure on information technology (IT) organizations, the effective and efficient delivery of IT remains a crucial issue for IT executives in order to optimize their department's performance. Due to company specifics, however, an absolute assessment of IT organizations'

  9. Performance of T12 and T8 Fluorescent Lamps and Troffers and LED Linear Replacement Lamps CALiPER Benchmark Report

    Energy Technology Data Exchange (ETDEWEB)

    Myer, Michael; Paget, Maria L.; Lingard, Robert D.

    2009-01-16

    The Department of Energy (DOE) Commercially Available LED Product Evaluation and Reporting (CALiPER) Program was established in 2006 to investigate the performance of light-emitting diode (LED) based luminaires and replacement lamps. To help users better compare LED products with conventional lighting technologies, CALiPER has also performed benchmark research and testing of traditional (i.e., non-LED) lamps and fixtures. This benchmark report addresses standard 4-foot fluorescent lamps (i.e., T12 and T8) and the 2-foot by 4-foot recessed troffers in which they are commonly used. This report also examines available LED replacements for T12 and T8 fluorescent lamps, and their application in fluorescent troffers. The construction and operation of linear fluorescent lamps and troffers are discussed, as well as fluorescent lamp and fixture performance, based on manufacturer data and CALiPER benchmark testing. In addition, the report describes LED replacements for linear fluorescent lamps, and compares their bare lamp and in situ performance with fluorescent benchmarks on a range of standard lighting measures, including power usage, light output and distribution, efficacy, correlated color temperature, and the color rendering index. Potential performance and application issues indicated by CALiPER testing results are also examined.

  10. Performance evaluation of open core gasifier on multi-fuels

    Energy Technology Data Exchange (ETDEWEB)

    Bhoi, P.R.; Singh, R.N.; Sharma, A.M.; Patel, S.R. [Thermo Chemical Conversion Division, Sardar Patel Renewable Energy Research Institute (SPRERI), Vallabh Vidyanagar 388 120, Gujarat (India)

    2006-06-15

    The Sardar Patel Renewable Energy Research Institute (SPRERI) has designed and developed an open core, throat-less, down draft gasifier and installed it at the institute. The gasifier was designed for loose agricultural residues like groundnut shells. The purpose of the study is to evaluate the gasifier on multi-fuels such as babul wood (Prosopis juliflora), groundnut shell briquettes, groundnut shell, mixture of wood (Prosopis juliflora) and groundnut shell in the ratio of 1:1, and cashew nut shell. The gasifier performance was evaluated in terms of fuel consumption rate, calorific value of producer gas and gasification efficiency. Gasification efficiencies of babul wood (Prosopis juliflora), groundnut shell briquettes, groundnut shell, mixture of Prosopis juliflora and groundnut shell in the ratio of 1:1, and cashew nut shell were 72%, 66%, 70%, 64% and 70%, respectively. The study revealed that babul wood (Prosopis juliflora), groundnut shell briquettes, groundnut shell, mixture of wood (Prosopis juliflora) and groundnut shell in the ratio of 1:1 and cashew nut shell were satisfactorily gasified in the open core down draft gasifier. The study also showed that there was a flow problem with groundnut shell. (author)

  11. Applications of Integral Benchmark Data

    Energy Technology Data Exchange (ETDEWEB)

    Giuseppe Palmiotti; Teruhiko Kugo; Fitz Trumble; Albert C. (Skip) Kahler; Dale Lancaster

    2014-10-09

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) provide evaluated integral benchmark data that may be used for validation of reactor physics / nuclear criticality safety analytical methods and data, nuclear data testing, advanced modeling and simulation, and safety analysis licensing activities. The handbooks produced by these programs are used in over 30 countries. Five example applications are presented in this paper: (1) Use of IRPhEP Data in Uncertainty Analyses and Cross Section Adjustment, (2) Uncertainty Evaluation Methods for Reactor Core Design at JAEA Using Reactor Physics Experimental Data, (3) Application of Benchmarking Data to a Broad Range of Criticality Safety Problems, (4) Cross Section Data Testing with ICSBEP Benchmarks, and (5) Use of the International Handbook of Evaluated Reactor Physics Benchmark Experiments to Support the Power Industry.

  12. The Telemedicine benchmark--a general tool to measure and compare the performance of video conferencing equipment in the telemedicine area.

    Science.gov (United States)

    Klutke, P J; Mattioli, P; Baruffaldi, F; Toni, A; Englmeier, K H

    1999-09-01

    In this paper, we describe the 'Telemedicine Benchmark' (TMB), which is a set of standard procedures, protocols and measurements to test reliability and levels of performance of data exchange in a telemedicine session. We have put special emphasis on medical imaging, i.e. digital image transfer, joint viewing and editing and 3D manipulation. With the TMB, we can compare the aptitude of different video conferencing software systems for telemedicine issues and the effect of different network technologies (ISDN, xDSL, ATM, Ethernet). The evaluation criteria used are length of delays and functionality. For the application of the TMB, a data set containing radiological images and medical reports was set up. Considering the Benchmark protocol, this data set has to be exchanged between the partners of the session. The Benchmark covers file transfer, whiteboard usage, application sharing and volume data analysis and compression. The TMB has proven to be a useful tool in several evaluation issues.

  13. Relationship between the TCAP and the Pearson Benchmark Assessment in Elementary Students' Reading and Math Performance in a Northeastern Tennessee School District

    Science.gov (United States)

    Dugger-Roberts, Cherith A.

    2014-01-01

    The purpose of this quantitative study was to determine if there was a relationship between the TCAP test and Pearson Benchmark assessment in elementary students' reading and language arts and math performance in a northeastern Tennessee school district. This study involved 3rd, 4th, 5th, and 6th grade students. The study focused on the following…

  14. OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE--A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING

    Energy Technology Data Exchange (ETDEWEB)

    Arnis Judzis

    2004-07-01

    This document details the progress to date on the "OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE--A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING" contract for the quarter starting April 2004 through June 2004. The DOE and TerraTek continue to wait for Novatek on the optimization portion of the testing program (they are completely rebuilding their fluid hammer). The latest indication is that the Novatek tool would be ready for retesting only 4Q 2004 or later. Smith International's hammer was tested in April of 2004 (2Q 2004 report). Accomplishments included the following: (1) TerraTek re-tested the "optimized" fluid hammer provided by Smith International during April 2004. Many improvements in mud hammer rates of penetration were noted over Phase 1 benchmark testing from November 2002. (2) Shell Exploration and Production in The Hague was briefed on various drilling performance projects including Task 8 "Cutter Impact Testing". Shell interest and willingness to assist in the test matrix as an Industry Advisor is appreciated. (3) TerraTek participated in a DOE/NETL Review meeting at Morgantown on April 15, 2004. The discussions were very helpful and a program related to the Mud Hammer optimization project was noted--Terralog modeling work on percussion tools. (4) Terralog's Dr. Gang Han witnessed some of the full-scale optimization testing of the Smith International hammer in order to familiarize him with downhole tools. TerraTek recommends that modeling first start with single cutters/inserts and progress in complexity. (5) The final equipment problem on the impact testing task was resolved through the acquisition of a high data rate laser based displacement instrument. (6) TerraTek provided Novatek much engineering support for the future re-testing of their optimized tool. Work was conducted on slip ring [electrical] specifications and tool collar sealing in the

  15. Reliability and Practicality of the Core Score: Four Dynamic Core Stability Tests Performed in a Physician Office Setting.

    Science.gov (United States)

    Friedrich, Jason; Brakke, Rachel; Akuthota, Venu; Sullivan, William

    2017-07-01

    Pilot study to determine the practicality and inter-rater reliability of the "Core Score," a composite measure of 4 clinical core stability tests. Repeated measures. Academic hospital physician clinic. 23 healthy volunteers with mean age of 32 years (12 females, 11 males). All subjects performed 4 core stability maneuvers under direct observation from 3 independent physicians in sequence. Inter-rater reliability and time necessary to perform examination. The Core Score scale is 0 to 12, with 12 reflecting the best core stability. The mean composite score of all 4 tests for all subjects was 9.54 (SD, 1.897; range, 4-12). The intraclass correlation coefficients (ICC 1,1) for inter-rater reliability for the composite Core Score and 4 individual tests were 0.68 (Core Score), 0.14 (single-leg squat), 0.40 (supine bridge), 0.69 (side bridge), and 0.46 (prone bridge). The time required for a single examiner to assess a given subject's core stability in all 4 maneuvers averaged 4 minutes (range, 2-6 minutes). Even without specialized equipment, a clinically practical and moderately reliable measure of core stability may be possible. Further research is necessary to optimize this measure for clinical application. Despite the known value of core stability to athletes and patients with low back pain, there is currently no reliable and practical means for rating core stability in a typical office-based practice. This pilot study provides a starting point for future reliability research on clinical core stability assessments.
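
    The ICC(1,1) statistic reported above is the one-way random-effects, single-rater intraclass correlation. Assuming the standard ANOVA formulation, it can be computed directly from a subjects-by-raters matrix; the ratings below are invented values on the 0-12 Core Score scale:

      # ICC(1,1): one-way random-effects, single-rater reliability, computed
      # from a subjects-by-raters matrix. Ratings are invented values on the
      # 0-12 Core Score scale (4 subjects, 3 raters).
      import numpy as np

      ratings = np.array([[9.0, 10.0,  9.0],
                          [7.0,  8.0,  8.0],
                          [11.0, 12.0, 11.0],
                          [6.0,  6.0,  7.0]])
      n, k = ratings.shape
      grand_mean = ratings.mean()
      ms_between = k * ((ratings.mean(axis=1) - grand_mean) ** 2).sum() / (n - 1)
      ms_within = ((ratings - ratings.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
      icc_1_1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
      print(f"ICC(1,1) = {icc_1_1:.2f}")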

  16. Organizational Benchmarks for Test Utilization Performance: An Example Based on Positivity Rates for Genetic Tests.

    Science.gov (United States)

    Rudolf, Joseph; Jackson, Brian R; Wilson, Andrew R; Smock, Kristi J; Schmidt, Robert L

    2017-04-01

    Health care organizations are under increasing pressure to deliver value by improving test utilization management. Many factors, including organizational factors, could affect utilization performance. Past research has focused on the impact of specific interventions in single organizations. The impact of organizational factors is unknown. The objective of this study is to determine whether testing patterns are subject to organizational effects, ie, are utilization patterns for individual tests correlated within organizations. Comparative analysis of ordering patterns (positivity rates for three genetic tests) across 659 organizations. Hierarchical regression was used to assess the impact of organizational factors after controlling for test-level factors (mutation prevalence) and hospital bed size. Test positivity rates were correlated within organizations. Organizations have a statistically significant impact on the positivity rate of three genetic tests.

  17. PERFORMANCE ANALYSIS OF MESSAGE PASSING INTERFACE COLLECTIVE COMMUNICATION ON INTEL XEON QUAD-CORE GIGABIT ETHERNET AND INFINIBAND CLUSTERS

    Directory of Open Access Journals (Sweden)

    Roswan Ismail

    2013-01-01

    The performance of MPI implementation operations still presents critical issues for high performance computing systems, particularly for more advanced processor technology. Consequently, this study concentrates on benchmarking MPI implementation on multi-core architecture by measuring the performance of Open MPI collective communication on Intel Xeon dual quad-core Gigabit Ethernet and InfiniBand clusters using SKaMPI. It focuses on well known collective communication routines such as MPI-Bcast, MPI-AlltoAll, MPI-Scatter and MPI-Gather. From the collection of results, MPI collective communication on InfiniBand clusters had distinctly better performance in terms of latency and throughput. The analysis indicates that the algorithm used for collective communication performed very well for all message sizes except for MPI-Bcast and MPI-Alltoall operation of inter-node communication. However, InfiniBand provides the lowest latency for all operations since it provides applications with an easy to use messaging service, compared to Gigabit Ethernet, which still requests the operating system for access to one of the server communication resources with the complex dance between an application and a network.
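
    A minimal sketch of this kind of collective-communication timing, written with mpi4py rather than the SKaMPI tool the study actually used, so the harness below is illustrative only:

      # Hypothetical mini-benchmark in the spirit of SKaMPI's collective tests.
      # Run with e.g.: mpirun -n 8 python bcast_bench.py
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      for size in (1 << 10, 1 << 16, 1 << 20):        # message sizes in bytes
          buf = np.zeros(size, dtype=np.uint8)
          comm.Barrier()                              # synchronize before timing
          t0 = MPI.Wtime()
          for _ in range(100):
              comm.Bcast(buf, root=0)                 # collective under test
          comm.Barrier()
          dt = (MPI.Wtime() - t0) / 100
          if rank == 0:
              print(f"Bcast {size:>8} B: {dt * 1e6:9.2f} us/op")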

  18. Benchmarking the performance of fixed-image receptor digital radiographic systems part 1: a novel method for image quality analysis.

    Science.gov (United States)

    Lee, Kam L; Ireland, Timothy A; Bernardo, Michael

    2016-06-01

    This is the first part of a two-part study in benchmarking the performance of fixed digital radiographic general X-ray systems. This paper concentrates on reporting findings related to quantitative analysis techniques used to establish comparative image quality metrics. A systematic technical comparison of the evaluated systems is presented in part two of this study. A novel quantitative image quality analysis method is presented with technical considerations addressed for peer review. The novel method was applied to seven general radiographic systems with four different makes of radiographic image receptor (12 image receptors in total). For the System Modulation Transfer Function (sMTF), the use of grid was found to reduce veiling glare and decrease roll-off. The major contributor in sMTF degradation was found to be focal spot blurring. For the System Normalised Noise Power Spectrum (sNNPS), it was found that all systems examined had similar sNNPS responses. A mathematical model is presented to explain how the use of stationary grid may cause a difference between horizontal and vertical sNNPS responses.
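
    Of the two spectral metrics named, the noise power spectrum is the easier to sketch: it is estimated by Fourier-transforming mean-subtracted flat-field regions and normalising by the mean signal. A simplified, hypothetical version of that calculation (the paper's system-level method involves more corrections and real detector data):

      # Simplified NNPS estimate from a synthetic uniform exposure: scale the
      # squared FFT magnitude of the mean-subtracted image by the pixel area,
      # then normalise by the squared mean signal. Illustrative only.
      import numpy as np

      pixel_pitch_mm = 0.143
      rng = np.random.default_rng(1)
      flat = 1000.0 + rng.normal(0.0, 10.0, (256, 256))   # synthetic flat field

      roi = flat - flat.mean()
      nps = np.abs(np.fft.fftshift(np.fft.fft2(roi))) ** 2
      nps *= pixel_pitch_mm ** 2 / roi.size
      nnps = nps / flat.mean() ** 2
      print(f"NNPS near zero frequency: {nnps[128, 129]:.3e} mm^2")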

  19. Benchmarking: applications to transfusion medicine.

    Science.gov (United States)

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured continuous collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal.

  20. Comparison of the Predictive Performance and Interpretability of Random Forest and Linear Models on Benchmark Data Sets.

    Science.gov (United States)

    Marchese Robinson, Richard L; Palczewska, Anna; Palczewski, Jan; Kidley, Nathan

    2017-08-28

    The ability to interpret the predictions made by quantitative structure-activity relationships (QSARs) offers a number of advantages. While QSARs built using nonlinear modeling approaches, such as the popular Random Forest algorithm, might sometimes be more predictive than those built using linear modeling approaches, their predictions have been perceived as difficult to interpret. However, a growing number of approaches have been proposed for interpreting nonlinear QSAR models in general and Random Forest in particular. In the current work, we compare the performance of Random Forest to those of two widely used linear modeling approaches: linear Support Vector Machines (SVMs) (or Support Vector Regression (SVR)) and partial least-squares (PLS). We compare their performance in terms of their predictivity as well as the chemical interpretability of the predictions using novel scoring schemes for assessing heat map images of substructural contributions. We critically assess different approaches for interpreting Random Forest models as well as for obtaining predictions from the forest. We assess the models on a large number of widely employed public-domain benchmark data sets corresponding to regression and binary classification problems of relevance to hit identification and toxicology. We conclude that Random Forest typically yields comparable or possibly better predictive performance than the linear modeling approaches and that its predictions may also be interpreted in a chemically and biologically meaningful way. In contrast to earlier work looking at interpretation of nonlinear QSAR models, we directly compare two methodologically distinct approaches for interpreting Random Forest models. The approaches for interpreting Random Forest assessed in our article were implemented using open-source programs that we have made available to the community. These programs are the rfFC package (https://r-forge.r-project.org/R/?group_id=1725) for the R statistical
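
    A minimal sketch of the predictivity side of such a comparison, using scikit-learn on a public regression set. The paper's own data sets, descriptors, and heat-map interpretation scoring are not reproduced here:

      # Minimal predictivity comparison: Random Forest vs. linear SVR vs. PLS,
      # scored by cross-validated R^2 on a public regression set (not the
      # paper's data sets or its interpretation scoring).
      from sklearn.datasets import load_diabetes
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.svm import LinearSVR
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_score

      X, y = load_diabetes(return_X_y=True)
      models = {
          "Random Forest": RandomForestRegressor(n_estimators=200, random_state=0),
          "linear SVR": LinearSVR(C=1.0, max_iter=10000, random_state=0),
          "PLS": PLSRegression(n_components=5),
      }
      for name, model in models.items():
          r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
          print(f"{name:>13}: mean CV R^2 = {r2:.3f}")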

  1. HS06 Benchmark for an ARM Server

    Science.gov (United States)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  2. HS06 Benchmark for an ARM Server

    CERN Document Server

    Kluth, Stefan

    2013-01-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  3. Liver tumour segmentation using contrast-enhanced multi-detector CT data: performance benchmarking of three semiautomated methods

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Jia-Yin [National University of Singapore, Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, Singapore (Singapore); Agency for Science, Technology and Research, Institute for Infocomm Research, Singapore (Singapore); Wong, Damon W.K.; Tian, Qi; Xiong, Wei; Liu, Jimmy J. [Agency for Science, Technology and Research, Institute for Infocomm Research, Singapore (Singapore); Ding, Feng; Venkatesh, Sudhakar K.; Qi, Ying-Yi [National University of Singapore, Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, Singapore (Singapore); Leow, Wee-Kheng [National University of Singapore, School of Computing, Singapore (Singapore)

    2010-07-15

    Automatic tumour segmentation and volumetry is useful in cancer staging and treatment outcome assessment. This paper presents a performance benchmarking study on liver tumour segmentation for three semiautomatic algorithms: 2D region growing with knowledge-based constraints (A1), 2D voxel classification with propagational learning (A2) and Bayesian rule-based 3D region growing (A3). CT data from 30 patients were studied, and 47 liver tumours were isolated and manually segmented by experts to obtain the reference standard. Four datasets with ten tumours were used for algorithm training and the remaining 37 tumours for testing. Three evaluation metrics, relative absolute volume difference (RAVD), volumetric overlap error (VOE) and average symmetric surface distance (ASSD), were computed based on computerised and reference segmentations. A1, A2 and A3 obtained mean/median RAVD scores of 17.93/10.53%, 17.92/9.61% and 34.74/28.75%, mean/median VOEs of 30.47/26.79%, 25.70/22.64% and 39.95/38.54%, and mean/median ASSDs of 2.05/1.41 mm, 1.57/1.15 mm and 4.12/3.41 mm, respectively. For each metric, we obtained significantly lower values of A1 and A2 than A3 (P < 0.01), suggesting that A1 and A2 outperformed A3. Compared with the reference standard, the overall performance of A1 and A2 is promising. Further development and validation is necessary before reliable tumour segmentation and volumetry can be widely used clinically. (orig.)
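
    Two of the three evaluation metrics can be computed directly from binary masks; ASSD additionally needs surface extraction and is omitted. A hypothetical sketch on synthetic masks:

      # Volumetric overlap error (VOE) and relative absolute volume difference
      # (RAVD) between a computerised segmentation and a reference mask, both
      # expressed in percent as in the paper. Masks below are synthetic.
      import numpy as np

      def voe(seg, ref):
          inter = np.logical_and(seg, ref).sum()
          union = np.logical_or(seg, ref).sum()
          return 100.0 * (1.0 - inter / union)

      def ravd(seg, ref):
          return 100.0 * abs(seg.sum() - ref.sum()) / ref.sum()

      ref = np.zeros((50, 50, 50), dtype=bool)
      ref[10:40, 10:40, 10:40] = True
      seg = np.zeros_like(ref)
      seg[12:40, 10:38, 10:40] = True
      print(f"VOE = {voe(seg, ref):.1f}%, RAVD = {ravd(seg, ref):.1f}%")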

  4. Preliminary Benchmark Evaluation of Japan’s High Temperature Engineering Test Reactor

    Energy Technology Data Exchange (ETDEWEB)

    John Darrell Bess

    2009-05-01

    A benchmark model of the initial fully-loaded start-up core critical of Japan’s High Temperature Engineering Test Reactor (HTTR) was developed to provide data in support of ongoing validation efforts of the Very High Temperature Reactor Program using publicly available resources. The HTTR is a 30 MWt test reactor utilizing graphite moderation, helium coolant, and prismatic TRISO fuel. The benchmark was modeled using MCNP5 with various neutron cross-section libraries. An uncertainty evaluation was performed by perturbing the benchmark model and comparing the resultant eigenvalues. The calculated eigenvalues are approximately 2-3% greater than expected with an uncertainty of ±0.70%. The primary sources of uncertainty are the impurities in the core and reflector graphite. The release of additional HTTR data could effectively reduce the benchmark model uncertainties and bias. Sensitivity of the results to the graphite impurity content might imply that further evaluation of the graphite content could significantly improve calculated results. Proper characterization of graphite for future Next Generation Nuclear Power reactor designs will improve computational modeling capabilities. Current benchmarking activities include evaluation of the annular HTTR cores and assessment of the remaining start-up core physics experiments, including reactivity effects, reactivity coefficient, and reaction-rate distribution measurements. Long term benchmarking goals might include analyses of the hot zero-power critical, rise-to-power tests, and other irradiation, safety, and technical evaluations performed with the HTTR.

  5. Doing Good While Performing Well at Flemish Universities: Benchmarking Higher Education Institutions in Terms of Social Inclusion and Market Performance

    Science.gov (United States)

    Haezendonck, Elvira; Willems, Kim; Hillemann, Jenny

    2017-01-01

    Universities, and higher education institutions in general, are ever more influenced by output-driven performance indicators and models that originally stem from the profit-organisational context. As a result, universities are increasingly considering management tools that support them in the (decision) process for attaining their strategic goals.…

  7. Multi-core processing and scheduling performance in CMS

    CERN Document Server

    Hernandez Calama , Jose

    2012-01-01

    ...than standard single-core independent jobs. Exploiting this new processing model requires a new model in computing resource allocation, departing from the standard single-core allocation for a job. The experiment job management system needs to have control over a larger quantum of resource since mul...

  8. Nanostructure Core Fiber With Enhanced Performances: Design, Fabrication and Devices

    DEFF Research Database (Denmark)

    Yu, X.; Yan, Min; Ren, G.B.

    2009-01-01

    We report a new type of silica-based all-solid fiber with a 2-D nanostructure core. The nanostructure core fiber (NCF) is formed by a 2-D array of high-index rods of sub-wavelength dimensions. We theoretically study the birefringence property of such fibers over a large wavelength range. Large...

  9. PNNL Information Technology Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  10. Process Reengineering of Benchmarking-based Performance Management

    Institute of Scientific and Technical Information of China (English)

    严长远

    2012-01-01

    Benchmarking is an advanced method in the practice of enterprise management reform. It is an important approach suited to current conditions in China and has been applied with good results in enterprise management both at home and abroad. This paper first analyzes the content, features and value of benchmarking, then illustrates the practice of a benchmarking-based performance management process, and finally raises the issues that should be noted when implementing benchmarking, in the hope of providing a reference for related enterprises.

  11. HPCS HPCchallenge Benchmark Suite

    Science.gov (United States)

    2007-11-02

    Measured HPCchallenge Benchmark performance on various HPC architectures — from Cray X1s to Beowulf clusters — is reported in the presentation and paper, using the updated results at http://icl.cs.utk.edu/hpcc/hpcc_results.cgi. Even a small percentage of random...

  12. Benchmarking Danish Industries

    DEFF Research Database (Denmark)

    Gammelgaard, Britta; Bentzen, Eric; Aagaard Andreassen, Mette

    2003-01-01

    ...compatible survey. The International Manufacturing Strategy Survey (IMSS) does bring up the question of supply chain management, but unfortunately, we did not have access to the database. Data from the members of the SCOR-model, in the form of benchmarked performance data, may exist, but are nonetheless...

  13. Evaluation of PWR and BWR assembly benchmark calculations. Status report of EPRI computational benchmark results, performed in the framework of the Netherlands' PINK programme (Joint project of ECN, IRI, KEMA and GKN)

    Energy Technology Data Exchange (ETDEWEB)

    Gruppelaar, H. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Klippel, H.T. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Kloosterman, J.L. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Hoogenboom, J.E. [Technische Univ. Delft (Netherlands). Interfacultair Reactor Instituut; Leege, P.F.A. de [Technische Univ. Delft (Netherlands). Interfacultair Reactor Instituut; Verhagen, F.C.M. [Keuring van Electrotechnische Materialen NV, Arnhem (Netherlands); Bruggink, J.C. [Gemeenschappelijke Kernenergiecentrale Nederland N.V., Dodewaard (Netherlands)

    1993-11-01

    Benchmark results of the Dutch PINK working group on calculational benchmarks on single pin cell and multipin assemblies as defined by EPRI are presented and evaluated. First a short update of methods used by the various institutes involved is given as well as an update of the status with respect to previously performed pin-cell calculations. Problems detected in previous pin-cell calculations are inspected more closely. A detailed discussion of the results of the multipin assembly calculations is given. The assembly consists of 9 pins in a multicell square lattice in which the central pin is filled differently, i.e. a Gd pin for the BWR assembly and a control rod/guide tube for the PWR assembly. The results for pin cells showed a rather good overall agreement between the four participants, although BWR pins with a high void fraction turned out to be difficult to calculate. With respect to burnup calculations good overall agreement for the reactivity swing was obtained, provided that a fine time grid is used. (orig.)

  14. Benchmarking and accounting for the (private) cloud

    Science.gov (United States)

    Belleman, J.; Schwickerath, U.

    2015-12-01

    During the past two years large parts of the CERN batch farm have been moved to virtual machines running on the CERN internal cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, were converted into hypervisors. Due to the large spread of the per-core performance in the farm, caused by its heterogeneous nature, it is necessary to have a good knowledge of the performance of the virtual machines. This information is used both for scheduling in the batch system and for accounting. While in the previous setup worker nodes were classified and benchmarked based on the purchase order number, for virtual batch worker nodes this is no longer possible; the information is now either hidden or hard to retrieve. Therefore we developed a new scheme to classify worker nodes according to their performance. The new scheme is flexible enough to be usable both for virtual and physical machines in the batch farm. With the new classification it is possible to have an estimation of the performance of worker nodes also in a very dynamic farm with worker nodes coming and going at a high rate, without the need to benchmark each new node again. An extension to public cloud resources is possible if all conditions under which the benchmark numbers have been obtained are fulfilled.
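
    The abstract does not spell out the classification mechanics, but the idea of scoring a node with a short calibration run and bucketing the score can be sketched as follows. Everything here, including the workload and the thresholds, is invented for illustration:

      # Toy version of performance-class assignment for a (virtual) worker node:
      # time a fixed synthetic workload once and bucket the result, instead of
      # relying on purchase-order information. Thresholds are invented.
      import time

      def calibrate(iterations=2_000_000):
          t0 = time.perf_counter()
          s = 0
          for i in range(iterations):                # fixed integer workload
              s += i * i % 7
          return iterations / (time.perf_counter() - t0)  # iterations per second

      score = calibrate()
      for threshold, node_class in ((2e7, "fast"), (1e7, "medium"), (0.0, "slow")):
          if score >= threshold:
              print(f"node class: {node_class} ({score:.2e} it/s)")
              break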

  15. Critical review of the impact of core stability on upper extremity athletic injury and performance.

    Science.gov (United States)

    Silfies, Sheri P; Ebaugh, David; Pontillo, Marisa; Butowicz, Courtney M

    2015-09-01

    Programs designed to prevent or rehabilitate athletic injuries or improve athletic performance frequently focus on core stability. This approach is based upon the theory that poor core stability increases the risk of poor performance and/or injury. Despite the widespread use of core stability training amongst athletes, the question of whether or not sufficient evidence exists to support this practice remains to be answered. 1) Open a dialogue on the definition and components of core stability. 2) Provide an overview of current science linking core stability to musculoskeletal injuries of the upper extremity. 3) Provide an overview of evidence for the association between core stability and athletic performance. Core stability is the ability to control the position and movement of the trunk for optimal production, transfer, and control of forces to and from the upper and lower extremities during functional activities. Muscle capacity and neuromuscular control are critical components of core stability. A limited body of evidence provides some support for a link between core stability and upper extremity injuries amongst athletes who participate in baseball, football, or swimming. Likewise, few studies exist to support a relationship between core stability and athletic performance. A limited body of evidence exists to support the use of core stability training in injury prevention or performance enhancement programs for athletes. Clearly more research is needed to inform decision making when it comes to inclusion or emphasis of core training when designing injury prevention and rehabilitation programs for athletes.

  16. Critical review of the impact of core stability on upper extremity athletic injury and performance

    Directory of Open Access Journals (Sweden)

    Sheri P. Silfies

    2015-10-01

    BACKGROUND: Programs designed to prevent or rehabilitate athletic injuries or improve athletic performance frequently focus on core stability. This approach is based upon the theory that poor core stability increases the risk of poor performance and/or injury. Despite the widespread use of core stability training amongst athletes, the question of whether or not sufficient evidence exists to support this practice remains to be answered. OBJECTIVES: 1) Open a dialogue on the definition and components of core stability. 2) Provide an overview of current science linking core stability to musculoskeletal injuries of the upper extremity. 3) Provide an overview of evidence for the association between core stability and athletic performance. DISCUSSION: Core stability is the ability to control the position and movement of the trunk for optimal production, transfer, and control of forces to and from the upper and lower extremities during functional activities. Muscle capacity and neuromuscular control are critical components of core stability. A limited body of evidence provides some support for a link between core stability and upper extremity injuries amongst athletes who participate in baseball, football, or swimming. Likewise, few studies exist to support a relationship between core stability and athletic performance. CONCLUSIONS: A limited body of evidence exists to support the use of core stability training in injury prevention or performance enhancement programs for athletes. Clearly more research is needed to inform decision making when it comes to inclusion or emphasis of core training when designing injury prevention and rehabilitation programs for athletes.

  17. Benchmarking of LSTM Networks

    OpenAIRE

    Breuel, Thomas M.

    2015-01-01

    LSTM (Long Short-Term Memory) recurrent neural networks have been highly successful in a number of application areas. This technical report describes the use of the MNIST and UW3 databases for benchmarking LSTM networks and explores the effect of different architectural and hyperparameter choices on performance. Significant findings include: (1) LSTM performance depends smoothly on learning rates, (2) batching and momentum has no significant effect on performance, (3) softmax training outperf...

  18. A pilot study of core stability and athletic performance: is there a relationship?

    Science.gov (United States)

    Sharrock, Chris; Cropper, Jarrod; Mostad, Joel; Johnson, Matt; Malone, Terry

    2011-06-01

    Correlation study. To objectively evaluate the relationship between core stability and athletic performance measures in male and female collegiate athletes. The relationship between core stability and athletic performance has yet to be quantified in the available literature. The current literature does not demonstrate whether or not core strength relates to functional performance. Questions remain regarding the most important components of core stability, the role of sport specificity, and the measurement of core stability in relation to athletic performance. A sample of 35 volunteer student athletes from Asbury College (NAIA Division II) provided informed consent. Participants performed a series of five tests: double leg lowering (core stability test), the forty yard dash, the T-test, vertical jump, and a medicine ball throw. Participants performed three trials of each test in a randomized order. Correlations between the core stability test and each of the other four performance tests were determined using a General Linear Model. Medicine ball throw negatively correlated to the core stability test (r = -0.389, p = 0.023). Participants that performed better on the core stability test had a stronger negative correlation to the medicine ball throw (r = -0.527). Gender was the most strongly correlated variable to core strength, males with a mean measurement of double leg lowering of 47.43 degrees compared to females having a mean of 54.75 degrees. There appears to be a link between a core stability test and athletic performance tests; however, more research is needed to provide a definitive answer on the nature of this relationship. Ideally, specific performance tests will be able to better define and to examine relationships to core stability. Future studies should also seek to determine if there are specific sub-categories of core stability which are most important to allow for optimal training and performance for individual sports.

  19. Characterizing the impact of using spare-cores on application performance

    Energy Technology Data Exchange (ETDEWEB)

    Sancho Pitarch, Jose Carlos [Los Alamos National Laboratory; Kerbyson, Darren J [Los Alamos National Laboratory; Lang, Mike [Los Alamos National Laboratory

    2010-01-01

    Increased parallelism on a single processor is driving improvements in peak performance at both the node and system levels. However, achievable performance, in particular from production scientific applications, is not always directly proportional to the core count. Performance is often limited by constraints in the memory hierarchy and also by node interconnectivity. Even on state-of-the-art processors, containing between four and eight cores, many applications cannot take full advantage of the compute-performance of all cores. This trend is expected to increase on future processors as the core count per processor increases. In this work we characterize the use of spare-cores, cores that do not provide any improvements in application performance, on current multi-core processors. By using a pulse-width modulation method, we examine the possible performance profile of using a spare-core and quantify under what situations its use will not impact application performance. We show that, for current AMD and Intel multi-core processors, spare-cores can be used for substantial computational tasks but can impact application performance when using shared caches or when significantly accessing main memory.
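
    The pulse-width modulation method amounts to occupying the spare core for a controllable fraction of each period. A minimal sketch, assuming a simple busy-spin/sleep duty cycle (the abstract does not give the authors' exact implementation):

      # Minimal sketch of the pulse-width-modulation idea: occupy a spare core
      # for a fixed fraction (duty cycle) of each period and sleep for the rest,
      # so interference with the application cores can be dialed up and down.
      import time

      def pwm_load(duty=0.25, period=0.010, duration=5.0):
          end = time.perf_counter() + duration
          while time.perf_counter() < end:
              busy_until = time.perf_counter() + duty * period
              while time.perf_counter() < busy_until:
                  pass                               # busy-spin: the "on" part
              time.sleep((1.0 - duty) * period)      # the "off" part

      pwm_load(duty=0.25)   # roughly 25% load on whichever core this lands on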

  20. Metrics for Success: Strategies for Enabling Core Facility Performance and Assessing Outcomes.

    Science.gov (United States)

    Turpen, Paula B; Hockberger, Philip E; Meyn, Susan M; Nicklin, Connie; Tabarini, Diane; Auger, Julie A

    2016-04-01

    Core Facilities are key elements in the research portfolio of academic and private research institutions. Administrators overseeing core facilities (core administrators) require assessment tools for evaluating the need and effectiveness of these facilities at their institutions. This article discusses ways to promote best practices in core facilities as well as ways to evaluate their performance across the following 8 categories: general management, research and technical staff, financial management, customer base and satisfaction, resource management, communications, institutional impact, and strategic planning. For each category, we provide lessons learned that we believe contribute to the effective and efficient overall management of core facilities. If done well, we believe that encouraging best practices and evaluating performance in core facilities will demonstrate and reinforce the importance of core facilities in the research and educational mission of institutions. It will also increase job satisfaction of those working in core facilities and improve the likelihood of sustainability of both facilities and personnel.

  1. Performance modeling and analysis of parallel Gaussian elimination on multi-core computers

    Directory of Open Access Journals (Sweden)

    Fadi N. Sibai

    2014-01-01

    Gaussian elimination is used in many applications and in particular in the solution of systems of linear equations. This paper presents mathematical performance models and analysis of four parallel Gaussian elimination methods (precisely, the Original method and the new Meet in the Middle (MiM) algorithms and their variants with SIMD vectorization) on multi-core systems. Analytical performance models of the four methods are formulated and presented, followed by evaluations of these models with modern multi-core systems' operation latencies. Our results reveal that the four methods generally exhibit good performance scaling with increasing matrix size and number of cores. SIMD vectorization only makes a large difference in performance for low numbers of cores. For a large matrix size (n ⩾ 16 K), the performance difference between the MiM and Original methods falls from 16× with four cores to 4× with 16 K cores. The efficiencies of all four methods are low with 1 K cores or more, stressing a major problem of multi-core systems where the network-on-chip and memory latencies are too high in relation to basic arithmetic operations. Thus Gaussian elimination can greatly benefit from the resources of multi-core systems, but higher performance gains can be achieved if multi-core systems can be designed with lower memory operation, synchronization, and interconnect communication latencies, requirements of utmost importance and challenge in the exascale computing age.
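
    The operation count that such analytical models are built on (roughly 2n^3/3 floating-point operations for the elimination phase) can be seen in a plain serial baseline. A hypothetical sketch of that baseline, not the paper's parallel implementation:

      # Serial Gaussian elimination baseline (no pivoting). Its roughly 2n^3/3
      # floating-point operations are what analytical models of this kind count;
      # timing it for growing n shows the cubic scaling those models predict.
      import time
      import numpy as np

      def gaussian_eliminate(a, b):
          a, b = a.astype(float).copy(), b.astype(float).copy()
          n = len(b)
          for k in range(n - 1):                     # forward elimination
              factors = a[k + 1:, k] / a[k, k]
              a[k + 1:, k:] -= np.outer(factors, a[k, k:])
              b[k + 1:] -= factors * b[k]
          x = np.empty(n)
          for i in range(n - 1, -1, -1):             # back substitution
              x[i] = (b[i] - a[i, i + 1:] @ x[i + 1:]) / a[i, i]
          return x

      for n in (256, 512, 1024):
          a = np.random.rand(n, n) + n * np.eye(n)   # diagonally dominant, so no pivoting needed
          b = np.random.rand(n)
          t0 = time.perf_counter()
          x = gaussian_eliminate(a, b)
          print(f"n={n:5d}: {time.perf_counter() - t0:.3f} s, "
                f"max residual {np.abs(a @ x - b).max():.1e}")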

  2. CORE

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Hansen, Jonas; Hundebøll, Martin

    2013-01-01

    different flows. Instead of maintaining these approaches separately, we propose a protocol (CORE) that brings together these coding mechanisms. Our protocol uses random linear network coding (RLNC) for intra-session coding but allows nodes in the network to set up inter-session coding regions where flows...... intersect. Routes for unicast sessions are agnostic to other sessions and set up beforehand; CORE will then discover and exploit intersecting routes. Our approach allows the inter-session regions to leverage RLNC to compensate for losses or failures in the overhearing or transmitting process. Thus, we...... increase the benefits of XORing by exploiting the underlying RLNC structure of individual flows. This goes beyond providing additional reliability to each individual session and beyond exploiting coding opportunistically. Our numerical results show that CORE outperforms both forwarding and COPE...
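
    To make the intra-session coding idea concrete, the following toy sketch performs random linear network coding over GF(2): coded packets are random XOR combinations of the source packets, and a receiver decodes by Gauss-Jordan elimination on the coefficient vectors. Deployed RLNC systems typically work over GF(2^8); this is only a minimal illustration of the principle, not the CORE protocol.

```python
# Toy RLNC over GF(2): encode by XORing a random subset of packets,
# decode by Gauss-Jordan elimination on [coefficients | payload] rows.
import random

def encode(packets):
    """Return (coefficients, payload): one random XOR combination."""
    k = len(packets)
    coeffs = [random.randint(0, 1) for _ in range(k)]
    if not any(coeffs):
        coeffs[random.randrange(k)] = 1      # skip the useless all-zero mix
    payload = bytes(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            payload = bytes(a ^ b for a, b in zip(payload, p))
    return coeffs, payload

def decode(coded, k):
    """Recover the k source packets, or raise if the set is dependent."""
    rows = [(c[:], p) for c, p in coded]
    for col in range(k):
        piv = next((i for i in range(col, len(rows)) if rows[i][0][col]), None)
        if piv is None:
            raise ValueError("need another coded packet")
        rows[col], rows[piv] = rows[piv], rows[col]
        for i, (c, p) in enumerate(rows):
            if i != col and c[col]:
                pc, pp = rows[col]
                rows[i] = ([a ^ b for a, b in zip(c, pc)],
                           bytes(a ^ b for a, b in zip(p, pp)))
    return [p for _, p in rows[:k]]

source = [b"flow-1a", b"flow-2a", b"flow-3a"]
received = [encode(source) for _ in range(len(source))]
while True:
    try:
        print(decode(received, len(source)))
        break
    except ValueError:                        # dependent set: collect more
        received.append(encode(source))
```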

  4. Performance of the widely used Minnesota density functionals for the prediction of heat of formations, ionization potentials of some benchmarked first row transition metal complexes.

    Science.gov (United States)

    Shil, Suranjan; Bhattacharya, Debojit; Sarkar, Sonali; Misra, Anirban

    2013-06-13

    We have computed and investigated the performance of Minnesota density functionals, especially the M05, M06, and M08 suites of complementary density functionals, for the prediction of the heats of formation (HOFs) and the ionization potentials (IPs) of various benchmark complexes containing nine different first row transition metals. The eight functionals of the M0X family, namely, the M05, M05-2X, M06-L, M06, M06-2X, M06-HF, M08-SO, and M08-HX, are taken for the computation of the above-mentioned physical properties of such metal complexes, along with the popular Los Alamos National Laboratory 2 double-ζ (LANL2DZ) basis set. A total of 54 benchmark systems are taken for the HOF calculation, whereas 47 of these benchmark complexes are chosen for the calculation of IPs because of the lack of experimental results for the remaining seven systems. The computed values of HOFs and IPs are compared with the experimental results obtained from the literature. The deviation of these computed values from the actual experimental results is calculated for each of the eight different M0X functionals to judge their performance in evaluating these properties. Finally, a clear relationship is drawn between the exchange correlation energy of the eight M0X functionals and their efficiency in predicting the different physical properties.

  5. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance; the performance impact of optimization was studied in the context of our methodology for CPU performance characterization, based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments are summarized more specifically in this report, as well as those smaller in magnitude supported by this grant.
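
    The merging step admits a one-line formalization: estimated runtime is the dot product of the program's dynamic operation counts with the machine's measured per-operation times, T ≈ Σᵢ nᵢtᵢ. The sketch below shows the arithmetic with invented numbers; the operation categories and latencies are placeholders, not values from the grant reports.

```python
# Abstract-machine estimate: runtime ~ sum over operation types of
# (program's dynamic count) x (machine's measured time per operation).
# All counts and latencies below are invented for illustration.
machine_ns = {"fp_add": 1.2, "fp_mul": 1.6, "mem_load": 4.0, "branch": 0.8}
program_counts = {"fp_add": 5e9, "fp_mul": 3e9, "mem_load": 2e9, "branch": 1e9}

estimate_s = sum(program_counts[op] * machine_ns[op]
                 for op in program_counts) / 1e9
print(f"estimated execution time: {estimate_s:.1f} s")
```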

  6. Critical review of the impact of core stability on upper extremity athletic injury and performance

    OpenAIRE

    Silfies,Sheri P.; David Ebaugh; Marisa Pontillo; Butowicz,Courtney M.

    2015-01-01

    BACKGROUND: Programs designed to prevent or rehabilitate athletic injuries or improve athletic performance frequently focus on core stability. This approach is based upon the theory that poor core stability increases the risk of poor performance and/or injury. Despite the widespread use of core stability training amongst athletes, the question of whether or not sufficient evidence exists to support this practice remains to be answered. OBJECTIVES: 1) Open a dialogue on the definition and comp...

  8. Design Principles for Synthesizable Processor Cores

    DEFF Research Database (Denmark)

    Schleuniger, Pascal; McKee, Sally A.; Karlsson, Sven

    2012-01-01

    As FPGAs get more competitive, synthesizable processor cores become an attractive choice for embedded computing. Currently popular commercial processor cores do not fully exploit current FPGA architectures. In this paper, we propose general design principles to increase instruction throughput... We show through the use of micro-benchmarks that our principles guide the design of a processor core that improves performance by an average of 38% over a similar Xilinx MicroBlaze configuration.

  9. A study on improving the performance of a research reactor's equilibrium core

    Directory of Open Access Journals (Sweden)

    Muhammad Atta

    2013-01-01

    Full Text Available Utilizing low enriched uranium silicide fuel (U3Si2-Al) of existing uranium density (3.285 g/cm3), different core configurations have been studied in search of an equilibrium core with an improved performance for the Pakistan Research Reactor-1. Furthermore, we have extended our analysis to the performance of higher-density silicide fuels with uranium densities of 4.0 and 4.8 g/cm3. The criterion used in selecting the best performing core was that of “unit flux time cycle length per 235U mass per cycle”. In order to analyze core performance by improving neutron moderation when utilizing higher-density fuel, the effect of the coolant channel width was also studied by reducing the number of plates in the standard/control fuel element. Calculations employing the computer codes WIMSD/4 and CITATION were performed. A ten energy group structure for fission neutrons was used for the generation of microscopic cross-sections through WIMSD/4. To search for the equilibrium core, two-dimensional core modelling was performed in CITATION. Performance indicators have shown that the higher-density uranium silicide-fuelled core (U density 4.8 g/cm3), without any changes in standard/control fuel elements and comprising 15 standard and 4 control fuel elements, is the best performing of all analyzed cores.

  10. Performance of High-frequency High-flux Magnetic Cores at Cryogenic Temperatures

    Science.gov (United States)

    Gerber, Scott S.; Hammoud, Ahmad; Elbuluk, Malik E.; Patterson, Richard L.

    2002-01-01

    Three magnetic powder cores and one ferrite core, which are commonly used in inductor and transformer design for switch-mode power supplies, were selected for investigation at cryogenic temperatures. The powder cores are the Molypermalloy Core (MPC), the High Flux Core (HFC), and the Kool Mu Core (KMC). The performance of four inductors utilizing these cores has been evaluated as a function of temperature from 20 °C to -180 °C. All cores were wound with the same wire type and gauge to obtain equal values of inductance at room temperature. Each inductor was evaluated in terms of its inductance, quality (Q) factor, resistance, and dynamic hysteresis characteristics (B-H loop) as a function of temperature and frequency. Both sinusoidal and square wave excitations were used in these investigations. Measured inductance data showed that both the MPC and HFC cores maintain a constant inductance value, whereas the KMC and ferrite cores hold a steady inductance value with frequency but decrease as temperature is decreased. All cores exhibited, to varying degrees, a dependency of their quality factor and resistance on test frequency and temperature. Except for the ferrite, all cores exhibited good stability in the investigated properties with temperature as well as frequency. Details of the experimental procedures and test results are presented and discussed in the paper.

  11. Quantitative benchmark - Production companies

    DEFF Research Database (Denmark)

    Sørensen, Ole H.; Andersen, Vibeke

    Report presenting the results of a quantitative benchmark of the production companies in the VIPS project.

  12. Benchmarking in Student Affairs.

    Science.gov (United States)

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  13. PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms

    CERN Document Server

    Dreher, Patrick; Hill, Chris; Gadepally, Vijay; Kuszmaul, Bradley; Kepner, Jeremy

    2016-01-01

    The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many real-world big data processing systems. The PageRank pipeline benchmark builds on existing prior scalable benchmarks (Graph500, Sort, and PageRank) to create a holistic benchmark with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. The linear algebraic nature of PageRank makes it well suited to being implemented using the GraphBLAS standard. The computations are simple enough that performance predictio...
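
    As a reference point for the third kernel, here is a minimal dense PageRank by power iteration; the benchmark itself targets large sparse graphs (e.g., via the GraphBLAS standard mentioned above), so treat this only as a statement of the computation.

```python
# Minimal PageRank by power iteration on a tiny dense adjacency matrix.
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10):
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    out[out == 0] = 1                    # avoid divide-by-zero (dangling)
    P = adj / out                        # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    while True:
        r_next = (1 - d) / n + d * (P.T @ r)
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

adj = np.array([[0, 1, 1],
                [0, 0, 1],
                [1, 0, 0]], dtype=float)
print(pagerank(adj))                     # ranks sum to ~1
```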

  14. ORNL instrumentation performance for Slab Core Test Facility (SCTF)-Core I Reflood Test Facility

    Energy Technology Data Exchange (ETDEWEB)

    Hardy, J E; Hess, R A; Hylton, J O

    1983-11-01

    Instrumentation was developed for making measurements in experimental refill-reflood test facilities. These unique instrumentation systems were designed to survive the severe environmental conditions that exist during a simulated pressurized water reactor loss-of-coolant accident (LOCA). Measurements of in-vessel fluid phenomena such as two-phase flow velocity, void fraction, film thickness, and film velocity are required for a better understanding of reactor behavior during LOCAs. The Advanced Instrumentation for Reflood Studies (AIRS) Program fabricated and delivered instrumentation systems and data reduction software algorithms that allowed the above measurements to be made. Data produced by AIRS sensors during three experimental runs in the Japanese Slab Core Test Facility are presented. Although many of the sensors failed before any useful data could be obtained, the remaining probes gave encouraging and useful results. These results are the first of their kind produced during the simulated refill-reflood stage of a LOCA near actual thermohydrodynamic conditions.

  15. High performance multi-core iron oxide nanoparticles for magnetic hyperthermia: microwave synthesis, and the role of core-to-core interactions

    Science.gov (United States)

    Blanco-Andujar, C.; Ortega, D.; Southern, P.; Pankhurst, Q. A.; Thanh, N. T. K. (joint last authors)

    2015-01-01

    The adoption of magnetic hyperthermia as either a stand-alone or adjunct therapy for cancer is still far from being optimised due to the variable performance found in many iron oxide nanoparticle systems, including commercially available formulations. Herein, we present a reproducible and potentially scalable microwave-based method to make stable citric acid coated multi-core iron oxide nanoparticles, with exceptional magnetic heating parameters, viz. intrinsic loss parameters (ILPs) of up to 4.1 nH m2 kg-1, 35% better than the best commercial equivalents. We also probe the core-to-core magnetic interactions in the particles via remanence-derived Henkel and ΔM plots. These reveal a monotonic dependence of the ILP on the magnetic interaction field Hint, and show that the interactions are demagnetising in nature, and act to hinder the magnetic heating mechanism. Electronic supplementary information (ESI) available: reproducibility studies and additional characterisation data including SQUID magnetometry, TEM, ATR-FTIR, XRD and Mössbauer spectroscopy. See DOI: 10.1039/c4nr06239f

  16. Benchmarking in academic pharmacy departments.

    Science.gov (United States)

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments, are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately to plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is also used internally to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather these data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  17. Core self-evaluations and job performance: the role of the perceived work environment.

    Science.gov (United States)

    Kacmar, K Michele; Collins, Brian J; Harris, Kenneth J; Judge, Timothy A

    2009-11-01

    Using trait activation theory as a framework, the authors examined the moderating role of two situational variables-perceptions of organizational politics and perceptions of leader effectiveness-on the relationship between core self-evaluations and job performance. Results from two samples (N = 137 and N = 226) indicate that employee perceptions of their work environment moderated the relationship between their core self-evaluations and supervisor ratings of their performance. In particular, those with higher core self-evaluations received higher performance ratings in environments perceived as favorable than in environments perceived as unfavorable.

  18. Optimization of Deep Drilling Performance--Development and Benchmark Testing of Advanced Diamond Product Drill Bits & HP/HT Fluids to Significantly Improve Rates of Penetration

    Energy Technology Data Exchange (ETDEWEB)

    Alan Black; Arnis Judzis

    2003-10-01

    This document details the progress to date on the OPTIMIZATION OF DEEP DRILLING PERFORMANCE--DEVELOPMENT AND BENCHMARK TESTING OF ADVANCED DIAMOND PRODUCT DRILL BITS AND HP/HT FLUIDS TO SIGNIFICANTLY IMPROVE RATES OF PENETRATION contract for the year starting October 2002 through September 2003. The industry cost-shared program aims to benchmark drilling rates of penetration in selected simulated deep formations and to significantly improve ROP through a team development of aggressive diamond product drill bit--fluid system technologies. Overall the objectives are as follows: Phase 1--Benchmark "best in class" diamond and other product drilling bits and fluids and develop concepts for a next level of deep drilling performance; Phase 2--Develop advanced smart bit--fluid prototypes and test at large scale; and Phase 3--Field trial smart bit--fluid concepts, modify as necessary and commercialize products. Accomplishments to date include the following: 4Q 2002--Project started; the industry team was assembled; a kick-off meeting was held at DOE Morgantown. 1Q 2003--An engineering meeting was held at Hughes Christensen, The Woodlands, Texas, to prepare preliminary plans for development and testing and to review equipment needs; operators started sending information regarding their needs for deep drilling challenges and priorities for the large-scale testing experimental matrix; Aramco joined the industry team as DEA 148 objectives paralleled the DOE project. 2Q 2003--Engineering and planning for high pressure drilling at TerraTek commenced. 3Q 2003--Continuation of engineering and design work for high pressure drilling at TerraTek; Baker Hughes INTEQ Drilling Fluids and Hughes Christensen commenced planning for Phase 1 testing--recommendations for bits and fluids.

  19. An evaluation of the metal fuel core performance for commercial FBR requirements

    Energy Technology Data Exchange (ETDEWEB)

    Ohta, H.; Yokoo, T. [Central Research Inst. of Electric Power Industry, Komae, Tokyo (Japan)

    2001-07-01

    Neutronic and thermal-hydraulic design studies are conducted on 3,900 MWt and 800 MWt metal fuel fast breeder reactor (FBR) cores that achieve a high burnup of 150 GWd/t, which is considered to be one of the goals of future commercial FBRs. The results show that a large-scale metal fuel core with a homogeneous configuration is a promising design for future commercial FBRs from the viewpoint of economics, core safety, effective use of uranium resources, and reduction of environmental load. In small-scale high-burnup cores, a radial heterogeneous design that can improve neutronic performance and safety parameters is suitable. (author)

  20. The effects of isolated and integrated 'core stability' training on athletic performance measures: a systematic review.

    Science.gov (United States)

    Reed, Casey A; Ford, Kevin R; Myer, Gregory D; Hewett, Timothy E

    2012-08-01

    Core stability training, operationally defined as training focused on improving trunk and hip control, is an integral part of athletic development, yet little is known about its direct relation to athletic performance. This systematic review focuses on identification of the association between core stability and sports-related performance measures. A secondary objective was to identify difficulties encountered when trying to train core stability with the goal of improving athletic performance. A systematic search was employed to capture all articles related to athletic performance and core stability training that were identified using the electronic databases MEDLINE, CINAHL and SPORTDiscus™ (1982-June 2011). A systematic approach was used to evaluate 179 articles identified for initial review. Studies that performed an intervention targeted toward the core and measured an outcome related to athletic or sport performance were included, while studies with a participant population aged 65 years or older were excluded. Twenty-four in total met the inclusion criteria for review. Studies were evaluated using the Physical Therapy Evidence Database (PEDro) scale. The 24 articles were separated into three groups, general performance (n = 8), lower extremity (n = 10) and upper extremity (n = 6), for ease of discussion. In the majority of studies, core stability training was utilized in conjunction with more comprehensive exercise programmes. As such, many studies saw improvements in skills of general strength such as maximum squat load and vertical leap. Surprisingly, not all studies reported measurable increases in specific core strength and stability measures following training. Additionally, investigations that targeted the core as the primary goal for improved outcome of training had mixed results. Core stability is rarely the sole component of an athletic development programme, making it difficult to directly isolate its effect on athletic performance.

  1. Relation Between Buffer Size and RISC Core Performance

    Institute of Scientific and Technical Information of China (English)

    ZHOU Li; YAO Qingdong; LIU Peng; LI Dongxiao

    2003-01-01

    In a high definition television (HDTV) source decoder system, all data and instructions processed by function models are inputted and outputted via a bus which is controlled by a bus arbitration unit (BAU). Efficient bus architecture plays an important role in optimizing system performance. This paper proposes two models to evaluate how buffer size and the priority scheme in the BAU can affect HDTV decoder system performance. The store probability and buffer size (SP-BS) model splits the time of a write into two parts, determined by the write buffer size and the priority scheme respectively, and obtains the quantitative relation between system performance and write buffer size for different priority schemes. The capability performance formula (CPF) model is used to investigate the influence of read buffer size on system performance, which is in direct proportion to the number of read requests arriving at the BAU. According to the models, the optimal buffer size with the highest performance/cost ratio for benchmarks can be obtained.

  2. Benchmarking in ICT

    OpenAIRE

    Blecher, Jan

    2009-01-01

    The aim of this paper is to describe the benefits of benchmarking IT in a wider context and the scope of benchmarking in general. I specify benchmarking as a process and mention basic rules and guidelines. Further, I define IT benchmarking domains and describe the possibilities for their use. The best-known type of IT benchmark is the cost benchmark, which represents only a subset of benchmarking opportunities. In this paper, the cost benchmark is rather an imaginary first step toward benchmarking's contribution to the company. IT benchmark...

  3. 20 CFR 666.140 - Which individuals receiving services are included in the core indicators of performance?

    Science.gov (United States)

    2010-04-01

    ... included in the core indicators of performance? 666.140 Section 666.140 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR PERFORMANCE ACCOUNTABILITY UNDER TITLE I OF THE WORKFORCE... the core indicators of performance? (a)(1) The core indicators of performance apply to all...

  4. Benchmarking Query Execution Robustness

    Science.gov (United States)

    Wiener, Janet L.; Kuno, Harumi; Graefe, Goetz

    Benchmarks that focus on running queries on a well-tuned database system ignore a long-standing problem: adverse runtime conditions can cause database system performance to vary widely and unexpectedly. When the query execution engine does not exhibit resilience to these adverse conditions, addressing the resultant performance problems can contribute significantly to the total cost of ownership for a database system in over-provisioning, lost efficiency, and increased human administrative costs. For example, focused human effort may be needed to manually invoke workload management actions or fine-tune the optimization of specific queries.

  5. PREDICTION OF FUNCTIONAL MOVEMENT SCREEN™ PERFORMANCE FROM LOWER EXTREMITY RANGE OF MOTION AND CORE TESTS.

    Science.gov (United States)

    Chimera, Nicole J; Knoeller, Shelby; Cooper, Ron; Kothe, Nicholas; Smith, Craig; Warren, Meghan

    2017-04-01

    There are varied reports in the literature regarding the association of the Functional Movement Screen™ (FMS™) with injury. The FMS™ has been correlated with hamstring range of motion and plank hold times; however, limited research is available on the predictability of lower extremity range of motion (ROM) and core function on FMS™ performance. The purpose of this study was to examine whether active lower extremity ROM measurements and core functional tests predict FMS™ performance. The authors hypothesized that lower extremity ROM and core functional tests would predict FMS™ composite score (CS) and performance on individual FMS™ fundamental movement patterns. Descriptive cohort study. Forty recreationally active participants had active lower extremity ROM measured, performed two core functional tests, the single leg wall sit hold (SLWS) and the repetitive single leg squat (RSLS), and performed the FMS™. Independent t tests were used to assess differences between right and left limb ROM measures and outcomes of core functional tests. Linear and ordinal logistic regressions were used to determine the best predictors of FMS™ CS and fundamental movement patterns, respectively. On the left side, reduced DF and SLWS significantly predicted lower FMS™ CS. On the right side only reduced DF significantly predicted lower FMS™ CS. Ordinal logistic regression models for the fundamental movement patterns demonstrated that reduced DF ROM was significantly associated with lower performance on deep squat. Reduced left knee extension was significantly associated with better performance in left straight leg raise; while reduced right hip flexion was significantly associated with reduced right straight leg raise. Lower SLWS was associated with reduced trunk stability performance. FMS™ movement patterns were affected by lower extremity ROM and core function. Researchers should consider lower FMS™ performance as indicative of underlying issues in ROM and

  6. DSP Platform Benchmarking

    OpenAIRE

    Xinyuan, Luo

    2009-01-01

    Benchmarking of DSP kernel algorithms was conducted in this thesis on a DSP processor used for teaching in the course TESA26 in the Department of Electrical Engineering. It includes benchmarking of cycle count and memory usage. The goal of the thesis is to evaluate the quality of a single-MAC DSP instruction set and accordingly provide suggestions for further improvement of the instruction set architecture. The scope of the thesis is limited to benchmarking the processor based only on assembly coding. The...

  7. Photonic-Networks-on-Chip for High Performance Radiation Survivable Multi-Core Processor Systems

    Science.gov (United States)

    2013-12-01

    TR-14-7: Photonic-Networks-on-Chip for High Performance Radiation Survivable Multi-Core Processor Systems. Contract DTRA01-03-D-0026, Prof. Luke Lester and Prof. Ganesh... Approved for public release; distribution is unlimited. The University of New Mexico has undertaken a study to determine the effects of radiation on Quantum Dot Photonic...

  8. Performance Analysis of an EDFA Utilizing a Partially Doped Core Fiber (PDCF)

    Science.gov (United States)

    Ahad, M. A.; Paul, M. C.; Muhd-Yassin, S. Z.; Mansoor, A.; Abdul-Rashid, H. A.

    2016-09-01

    The effect of transversal design on the gain and noise figure performance of Erbium-doped fiber amplifiers is illustrated in this work. We experimentally investigate a single-pass 980 nm pumped EDFA with a partially doped core fiber (PDCF), which has the core only partially doped with Erbium ions. The results for the PDCF are then compared with a standard fully doped EDF having a similar Erbium ion doping concentration. The PDCF amplifier's gain and noise figure performance is studied against different pump powers and signal powers at different operating wavelengths. The noise figure shows an improvement due to reduced spontaneous emission from the un-doped region of the core.

  9. Developing a theory of the strategic core of teams: a role composition model of team performance.

    Science.gov (United States)

    Humphrey, Stephen E; Morgeson, Frederick P; Mannor, Michael J

    2009-01-01

    Although numerous models of team performance have been articulated over the past 20 years, these models have primarily focused on the individual attribute approach to team composition. The authors utilized a role composition approach, which investigates how the characteristics of a set of role holders impact team effectiveness, to develop a theory of the strategic core of teams. Their theory suggests that certain team roles are most important for team performance and that the characteristics of the role holders in the "core" of the team are more important for overall team performance. This theory was tested in 778 teams drawn from 29 years of major league baseball (1974-2002). Results demonstrate that although high levels of experience and job-related skill are important predictors of team performance, the relationships between these constructs and team performance are significantly stronger when the characteristics are possessed by core role holders (as opposed to non-core role holders). Further, teams that invest more of their financial resources in these core roles are able to leverage such investments into significantly improved performance. These results have implications for team composition models, as they suggest a new method for considering individual contributions to a team's success that shifts the focus onto core roles. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  10. Relationship of core self-evaluations to goal setting, motivation, and performance.

    Science.gov (United States)

    Erez, A; Judge, T A

    2001-12-01

    A newly developed personality taxonomy suggests that self-esteem, locus of control, generalized self-efficacy, and neuroticism form a broad personality trait termed core self-evaluations. The authors hypothesized that this broad trait is related to motivation and performance. To test this hypothesis, 3 studies were conducted. Study 1 showed that the 4 dispositions loaded on 1 higher order factor. Study 2 demonstrated that the higher order trait was related to task motivation and performance in a laboratory setting. Study 3 showed that the core trait was related to task activity, productivity as measured by sales volume, and the rated performance of insurance agents. Results also revealed that the core self-evaluations trait was related to goal-setting behavior. In addition, when the 4 core traits were investigated as 1 nomological network, they proved to be more consistent predictors of job behaviors than when used in isolation.

  11. Benchmarking of hospital information systems – a comparative analysis of German-speaking benchmarking clusters

    Directory of Open Access Journals (Sweden)

    Jahn, Franziska

    2015-08-01

    Full Text Available Benchmarking is a method of strategic information management used by many hospitals today. During the last years, several benchmarking clusters have been established within the German-speaking countries. They support hospitals in comparing and positioning their information system's and information management's costs, performance and efficiency against other hospitals. In order to differentiate between these benchmarking clusters and to provide decision support in selecting an appropriate benchmarking cluster, a classification scheme is developed. The classification scheme considers both the general conditions and the examined contents of the benchmarking clusters. It is applied to seven benchmarking clusters which have been active in the German-speaking countries within the last years. Currently, performance benchmarking is the most frequent benchmarking type, whereas the observed benchmarking clusters differ in the number of benchmarking partners and their cooperation forms. The benchmarking clusters also deal with different benchmarking subjects. Assessing the costs and quality of application systems, physical data processing systems, organizational structures of information management, and IT service processes are the most frequent benchmarking subjects. There is still potential for further activities within the benchmarking clusters to measure strategic and tactical information management, IT governance and the quality of data and data-processing processes. Based on the classification scheme and the comparison of the benchmarking clusters, we derive general recommendations for the benchmarking of hospital information systems.

  12. The impact of cell site re-homing on the performance of umts core networks

    CERN Document Server

    Ouyang, Ye; 10.5121/ijngn.2010.2105

    2010-01-01

    Mobile operators currently prefer optimizing their radio networks via re-homing or cutting over the cell sites in 2G or 3G networks. The core network, as the parent of the radio network, is inevitably impacted by re-homing in the radio domain. This paper introduces cell site re-homing in the radio network and analyzes its impact on the performance of the GSM/UMTS core network. Possible re-homing models are created and analyzed for core networks. The paper concludes that appropriate re-homing in the radio domain, using correct algorithms, not only optimizes the radio network but also helps improve the QoS of the core network and saves the carriers' OPEX and CAPEX on their core networks.

  13. Targeted proteomics coming of age - SRM, PRM and DIA performance evaluated from a core facility perspective.

    Science.gov (United States)

    Kockmann, Tobias; Trachsel, Christian; Panse, Christian; Wahlander, Asa; Selevsek, Nathalie; Grossmann, Jonas; Wolski, Witold E; Schlapbach, Ralph

    2016-08-01

    Quantitative mass spectrometry is a rapidly evolving methodology applied in a large number of omics-type research projects. During the past years, new designs of mass spectrometers have been developed and launched as commercial systems while in parallel new data acquisition schemes and data analysis paradigms have been introduced. Core facilities provide access to such technologies, but also actively support the researchers in finding and applying the best-suited analytical approach. In order to implement a solid fundament for this decision making process, core facilities need to constantly compare and benchmark the various approaches. In this article we compare the quantitative accuracy and precision of current state of the art targeted proteomics approaches single reaction monitoring (SRM), parallel reaction monitoring (PRM) and data independent acquisition (DIA) across multiple liquid chromatography mass spectrometry (LC-MS) platforms, using a readily available commercial standard sample. All workflows are able to reproducibly generate accurate quantitative data. However, SRM and PRM workflows show higher accuracy and precision compared to DIA approaches, especially when analyzing low concentrated analytes.

  14. 2001 benchmarking guide.

    Science.gov (United States)

    Hoppszallern, S

    2001-01-01

    Our fifth annual guide to benchmarking under managed care presents data that is a study in market dynamics and adaptation. New this year are financial indicators on HMOs exiting the market and those remaining. Hospital financial ratios and details on department performance are included. The physician group practice numbers show why physicians are scrutinizing capitated payments. Overall, hospitals in markets with high managed care penetration are more successful in managing labor costs and show productivity gains in imaging services, physical therapy and materials management.

  15. Relationships between core factors of knowledge management in hospital nursing organisations and outcomes of nursing performance.

    Science.gov (United States)

    Lee, Eun Ju; Kim, Hong Soon; Kim, Hye Young

    2014-12-01

    The study was conducted to investigate the levels of implementation of knowledge management and outcomes of nursing performance, to examine the relationships between core knowledge management factors and nursing performance outcomes, and to identify core knowledge management factors affecting these outcomes. Effective knowledge management is very important for achieving strong organisational performance. The success or failure of knowledge management depends on how effectively an organisation's members share and use their knowledge. Because knowledge management plays a key role in enhancing nursing performance, identifying the core factors and investigating the level of knowledge management in a given hospital are priorities to ensure a high quality of nursing for patients. The study employed a descriptive research procedure. The study sample consisted of 192 nurses registered in three large healthcare organisations in South Korea. The variables examined in this study were demographic characteristics, the implementation of core knowledge management factors, and outcomes of nursing performance. The relationships between the core knowledge management factors and outcomes of nursing performance, as well as the factors affecting the performance outcomes, were investigated. A knowledge-sharing culture and organisational learning were found to be core factors affecting nursing performance. The study results provide basic data that can be used to formulate effective knowledge management strategies for enhancing nursing performance in hospital nursing organisations. In particular, prioritising the adoption of a knowledge-sharing culture and organisational learning in knowledge management systems might be one method for organisations to more effectively manage their knowledge resources and thus to enhance the outcomes of nursing performance and achieve greater business competitiveness. The study results can contribute to the development of effective and efficient

  16. [Benchmarking in health care: conclusions and recommendations].

    Science.gov (United States)

    Geraedts, Max; Selbmann, Hans-Konrad

    2011-01-01

    The German Health Ministry funded 10 demonstration projects and accompanying research of benchmarking in health care. The accompanying research work aimed to infer generalisable findings and recommendations. We performed a meta-evaluation of the demonstration projects and analysed national and international approaches to benchmarking in health care. It was found that the typical benchmarking sequence is hardly ever realised. Most projects lack a detailed analysis of structures and processes of the best performers as a starting point for the process of learning from and adopting best practice. To tap the full potential of benchmarking in health care, participation in voluntary benchmarking projects should be promoted that have been demonstrated to follow all the typical steps of a benchmarking process.

  17. Benchmarking the GENE and GYRO codes through the relative roles of electromagnetic and E  ×  B stabilization in JET high-performance discharges

    Science.gov (United States)

    Bravenec, R.; Citrin, J.; Candy, J.; Mantica, P.; Görler, T.; JET contributors

    2016-12-01

    Nonlinear gyrokinetic simulations using the GENE code have previously predicted a significant nonlinear enhanced electromagnetic stabilization in certain JET discharges with high neutral-beam power and low core magnetic shear (Citrin et al 2013 Phys. Rev. Lett. 111 155001, 2015 Plasma Phys. Control. Fusion 57 014032). This dominates over the impact of E  ×  B flow shear in these discharges. Furthermore, fast ions were shown to be a major contributor to the electromagnetic stabilization. These conclusions were based on results from the GENE gyrokinetic turbulence code. In this work we verify these results using the GYRO code. Comparing results (linear frequencies, eigenfunctions, and nonlinear fluxes) from different gyrokinetic codes as a means of verification (benchmarking) is only convincing if the codes agree for more than one discharge. Otherwise, agreement may simply be fortuitous. Therefore, we analyze three discharges, all with a carbon wall: a simplified, two-species, circular geometry case based on an actual JET discharge; an L-mode discharge with a significant fast-ion pressure fraction; and a low-triangularity high-β hybrid discharge. All discharges were analyzed at normalized toroidal flux coordinate ρ  =  0.33 where significant ion temperature peaking is observed. The GYRO simulations support the conclusion that electromagnetic stabilization is strong, and dominates E  ×  B shear stabilization.

  18. Aquatic Life Benchmarks

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aquatic Life Benchmarks is an EPA-developed set of criteria for freshwater species. These benchmarks are based on toxicity values reviewed by EPA and used in the...

  19. The ITER core imaging x-ray spectrometer: x-ray calorimeter performance.

    Science.gov (United States)

    Beiersdorfer, P; Brown, G V; Clementson, J; Dunn, J; Morris, K; Wang, E; Kelley, R L; Kilbourne, C A; Porter, F S; Bitter, M; Feder, R; Hill, K W; Johnson, D; Barnsley, R

    2010-10-01

    We describe the anticipated performance of an x-ray microcalorimeter instrument on ITER. As part of the core imaging x-ray spectrometer, the instrument will augment the imaging crystal spectrometers by providing a survey of the concentration of heavy ion plasma impurities in the core and possibly ion temperature values from the emission lines of different elemental ions located at various radial positions.

  20. Parallel Performance of MPI Sorting Algorithms on Dual-Core Processor Windows-Based Systems

    CERN Document Server

    Elnashar, Alaa Ismail

    2011-01-01

    Message Passing Interface (MPI) is widely used to implement parallel programs. Although Windows-based architectures provide the facilities of parallel execution and multi-threading, little attention has been focused on using MPI on these platforms. In this paper we use a dual-core Windows-based platform to study the effect of the number of parallel processes and the number of cores on the performance of three MPI parallel implementations of some sorting algorithms.
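
    A common shape for such MPI sorting experiments is scatter, local sort, then gather-and-merge; the mpi4py sketch below follows that pattern and times it, so the process count can be varied with mpiexec. It is a generic illustration under the assumption that the process count divides the array length evenly, not one of the paper's three implementations.

```python
# Scatter / local sort / gather-and-merge with mpi4py, timed so the
# process count can be varied:  mpiexec -n 4 python mpi_sort_sketch.py
import heapq
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 1_000_000
data = np.random.default_rng(0).integers(0, 10**9, n) if rank == 0 else None

t0 = MPI.Wtime()
chunk = np.empty(n // size, dtype=np.int64)
comm.Scatter(data, chunk, root=0)    # distribute equal-sized chunks
chunk.sort()                         # each process sorts its chunk locally
runs = comm.gather(chunk, root=0)    # collect the sorted runs at the root
if rank == 0:
    merged = np.fromiter(heapq.merge(*runs), dtype=np.int64, count=n)
    ok = bool((np.diff(merged) >= 0).all())
    print(f"{size} processes: {MPI.Wtime() - t0:.3f} s, sorted={ok}")
```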

  1. An Effective Approach for Benchmarking Implementation

    OpenAIRE

    B. M. Deros; Tan, J.; M.N.A. Rahman; N. A.Q.M. Daud

    2011-01-01

    Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually to increase their competitiveness in the market. The study begins with a literature review on benchmarking definitions, barriers and advantages from the implementation, and a study of benchmarking frameworks. Approach: Thirty res...

  2. Perceptual hashing algorithms benchmark suite

    Institute of Scientific and Technical Information of China (English)

    Zhang Hui; Schmucker Martin; Niu Xiamu

    2007-01-01

    Numerous perceptual hashing algorithms have been developed for identification and verification of multimedia objects in recent years. Many application schemes have been adopted for various commercial objects. Developers and users are looking for a benchmark tool to compare and evaluate their current algorithms or technologies. In this paper, a novel benchmark platform is presented. PHABS provides an open framework and lets its users define their own test strategy, perform tests, collect and analyze test data. With PHABS, various performance parameters of algorithms can be tested, and different algorithms or algorithms with different parameters can be evaluated and compared easily.
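
    For readers unfamiliar with the algorithms such a suite exercises, the sketch below implements a toy average hash (aHash) over an 8×8 grayscale block and compares hashes by Hamming distance, the kind of robustness-to-distortion measurement a benchmark platform would automate. It illustrates perceptual hashing generally, not PHABS itself.

```python
# Toy average hash (aHash): threshold an 8x8 grayscale block at its mean
# and compare hashes by Hamming distance. Illustrative only, not PHABS.
import numpy as np

def average_hash(gray):
    """64-bit hash: 1 where a pixel is brighter than the block mean."""
    return (gray > gray.mean()).astype(np.uint8).ravel()

def hamming(h1, h2):
    return int(np.count_nonzero(h1 != h2))

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (8, 8)).astype(float)
noisy = np.clip(img + rng.normal(0, 8, (8, 8)), 0, 255)  # mild distortion

print("bits differing after distortion:",
      hamming(average_hash(img), average_hash(noisy)))
```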

  3. The relationship of core strength and activation and performance on three functional movement screens.

    Science.gov (United States)

    Johnson, Caleb D; Whitehead, Paul N; Pletcher, Erin R; Faherty, Mallory S; Lovaleka, Mita T; Eagle, Shawn R; Keenan, Karen A

    2017-04-18

    Current measures of core stability utilized by clinicians and researchers suffer from several shortcomings. Three functional movement screens appear, at face-value, to be dependent on the ability to activate and control core musculature. These three screens may present a viable alternative to current measures of core stability. Thirty-nine subjects completed a deep squat, trunk stability push-up, and rotary stability screen. Scores on the three screens were summed to calculate a composite score (COMP). During the screens, muscle activity was collected to determine the length of time that the bilateral erector spinae, rectus abdominus, external oblique, and gluteus medius muscles were active. Strength was assessed for core muscles (trunk flexion/extension, trunk rotation, hip abduction/adduction) and accessory muscles (knee flexion/extension, and pectoralis major). Two ordinal logistic regression equations were calculated with COMP as the outcome variable, and: 1) core strength and accessory strength, 2) only core strength. The first model was significant in predicting COMP (p=.004) (Pearson's Chi-Square=149.132, p=.435; Nagelkerke's R-Squared=.369). The second model was significant in predicting COMP (p=.001) (Pearson's Chi Square=148.837, p=.488; Nagelkerke's R-Squared=.362). The core muscles were found to be active for the majority of screens, with percentages of "time active" for each muscle ranging from 54%-86%. In conclusion, performance on the three screens is predicted by core strength, even when accounting for "accessory" strength variables. Further, it appears the screens elicit wide-ranging activation of core muscles. While more investigation is needed, these screens, collectively, appear to be a good assessment of core strength.

  4. Benchmarking concentrating photovoltaic systems

    Science.gov (United States)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

    Integral to photovoltaics is the need to provide improved economic viability. To achieve this goal, photovoltaic technology has to be able to harness more light at less cost. A large variety of concentrating photovoltaic concepts has provided cause for pursuit. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. To save time and capital, a way to estimate the cost-performance of a complete solar energy system is to use computer aided modeling. In this work a benchmark tool is introduced based on a modular programming concept. The overall implementation is done in MATLAB whereas Advanced Systems Analysis Program (ASAP) is used for ray tracing calculations. This allows for a flexible and extendable structuring of all important modules, namely an advanced source modeling including time and local dependence, and an advanced optical system analysis of various optical designs to obtain an evaluation of the figure of merit. An important figure of merit: the energy yield for a given photovoltaic system at a geographical position over a specific period, can be calculated.
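
    The figure of merit named at the end reduces, in its crudest form, to integrating direct irradiance times aperture area times system efficiency over the period of interest. The sketch below shows that arithmetic with invented hourly irradiance values and assumed efficiencies; a real tool would, as described, ray-trace the optics and model the source's time and location dependence.

```python
# Crude energy-yield figure of merit: hourly DNI x aperture x efficiency,
# summed over the period. All numbers are invented placeholders.
dni_w_per_m2 = [0, 150, 420, 700, 850, 900, 820, 610, 300, 40]  # one day
aperture_m2 = 10.0     # assumed collector aperture
optical_eff = 0.82     # assumed concentrator optics efficiency
cell_eff = 0.38        # assumed multi-junction cell efficiency

yield_kwh = sum(g * aperture_m2 * optical_eff * cell_eff
                for g in dni_w_per_m2) / 1000.0   # 1-hour steps -> kWh
print(f"energy yield over the period: {yield_kwh:.1f} kWh")
```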

  5. A core competency-based objective structured clinical examination (OSCE) can predict future resident performance.

    Science.gov (United States)

    Wallenstein, Joshua; Heron, Sheryl; Santen, Sally; Shayne, Philip; Ander, Douglas

    2010-10-01

    This study evaluated the ability of an objective structured clinical examination (OSCE) administered in the first month of residency to predict future resident performance in the Accreditation Council for Graduate Medical Education (ACGME) core competencies. Eighteen Postgraduate Year 1 (PGY-1) residents completed a five-station OSCE in the first month of postgraduate training. Performance was graded in each of the ACGME core competencies. At the end of 18 months of training, faculty evaluations of resident performance in the emergency department (ED) were used to calculate a cumulative clinical evaluation score for each core competency. The correlations between OSCE scores and clinical evaluation scores at 18 months were assessed on an overall level and in each core competency. There was a statistically significant correlation between overall OSCE scores and overall clinical evaluation scores (R = 0.48, p < 0.05) and in the core competency of patient care (R = 0.49, p < 0.05), but not in the other core competencies. An early-residency OSCE has the ability to predict future postgraduate performance on a global level and in specific core competencies. Used appropriately, such information can be a valuable tool for program directors in monitoring residents' progress and providing more tailored guidance. © 2010 by the Society for Academic Emergency Medicine.

  6. Effect of core strength and endurance training on performance in college students: randomized pilot study.

    Science.gov (United States)

    Schilling, Jim F; Murphy, Jeff C; Bonney, John R; Thich, Jacob L

    2013-07-01

    Core training continues to be emphasized with the proposed intent of improving athletic performance. The purpose of this investigation was to discover whether core isometric endurance exercises were superior to core isotonic strengthening exercises and whether either influenced specific endurance, strength, and performance measures. Ten untrained students were randomly assigned to core isometric endurance training (n = 5) or core isotonic strength training (n = 5). Each performed three exercises, two times per week for six weeks. A repeated measures ANOVA was used to compare the measurements for the dependent variables, and significance was determined by Bonferroni post-hoc testing. The training protocols were compared using a 2 × 3 mixed model ANOVA. Improvement in trunk flexor and extensor endurance (p < 0.05) and strength (p < 0.05) was found with the strength group. Improvement in trunk flexor and right lateral endurance (p < 0.05) and strength in the squat (p < 0.05) was found with the endurance group. Neither training protocol claimed superiority and both were ineffective in improving performance. Published by Elsevier Ltd.

  7. Benchmarking semantic web technology

    CERN Document Server

    García-Castro, R

    2009-01-01

    This book addresses the problem of benchmarking Semantic Web Technologies; first, from a methodological point of view, proposing a general methodology to follow in benchmarking activities over Semantic Web Technologies and, second, from a practical point of view, presenting two international benchmarking activities that involved benchmarking the interoperability of Semantic Web technologies using RDF(S) as the interchange language in one activity and OWL in the other.The book presents in detail how the different resources needed for these interoperability benchmarking activities were defined:

  8. Establishing benchmarks and metrics for utilization management.

    Science.gov (United States)

    Melanson, Stacy E F

    2014-01-01

    The changing environment of healthcare reimbursement is rapidly leading to a renewed appreciation of the importance of utilization management in the clinical laboratory. The process of benchmarking of laboratory operations is well established for comparing organizational performance to other hospitals (peers) and for trending data over time through internal benchmarks. However, there are relatively few resources available to assist organizations in benchmarking for laboratory utilization management. This article will review the topic of laboratory benchmarking with a focus on the available literature and services to assist in managing physician requests for laboratory testing. © 2013.

  9. The contextual benchmark method: benchmarking e-government services

    NARCIS (Netherlands)

    Jansen, Jurjen; Vries, de Sjoerd; Schaik, van Paul

    2010-01-01

    This paper offers a new method for benchmarking e-Government services. Government organizations no longer doubt the need to deliver their services online. Instead, the question that is more relevant is how well the electronic services offered by a particular organization perform in comparison with

  10. Adaptive Integration and Optimization of Automated and Neural Processing Systems - Establishing Neural and Behavioral Benchmarks of Optimized Performance

    Science.gov (United States)

    2012-07-01

    [Recovered figure captions: Figure 29, ERP image plot showing the P3 amplitude for each target trial sorted by reaction time performance; correlation between RT performance and P3 peak latency (Q1 = first quartile, Q4 = last quartile).] ... partnership that capitalizes on the strengths of the ICB, SAIC, and ARL/HRED co-investigators. Critically, this work has already helped support High

  11. Performance Test of Core Protection and Monitoring Algorithm with DLL for SMART Simulator Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Koo, Bonseung; Hwang, Daehyun; Kim, Keungkoo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    A multi-purpose best-estimate simulator for SMART is being established, which is intended to be used as a tool to evaluate the impacts of design changes on safety performance, and to improve and/or optimize the operating procedure of SMART. In keeping with these intentions, a real-time model of the digital core protection and monitoring systems was developed and the real-time performance of the models was verified for various simulation scenarios. In this paper, a performance test of the core protection and monitoring algorithms with a DLL file for the SMART simulator implementation was performed. A DLL file of the simulator version of the application code was made, and several real-time evaluation tests were conducted for steady-state and transient conditions and various scenarios with the DLL file and simulated system variables. The results of all test cases showed good agreement with the reference results, and some features caused by algorithm changes were properly reflected in the DLL results. Therefore, it was concluded that the SCOPS_SSIM and SCOMS_SSIM algorithms and calculational capabilities are appropriate for the core protection and monitoring program in the SMART simulator.

  12. Randomized benchmarking of multiqubit gates.

    Science.gov (United States)

    Gaebler, J P; Meier, A M; Tan, T R; Bowler, R; Lin, Y; Hanneke, D; Jost, J D; Home, J P; Knill, E; Leibfried, D; Wineland, D J

    2012-06-29

    We describe an extension of single-qubit gate randomized benchmarking that measures the error of multiqubit gates in a quantum information processor. This platform-independent protocol evaluates the performance of Clifford unitaries, which form a basis of fault-tolerant quantum computing. We implemented the benchmarking protocol with trapped ions and found an error per random two-qubit Clifford unitary of 0.162±0.008, thus setting the first benchmark for such unitaries. By implementing a second set of sequences with an extra two-qubit phase gate inserted after each step, we extracted an error per phase gate of 0.069±0.017. We conducted these experiments with transported, sympathetically cooled ions in a multizone Paul trap, a system that can in principle be scaled to larger numbers of ions.
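
    As a minimal illustration of how an error per Clifford is extracted in randomized benchmarking, the sketch below fits the standard zeroth-order decay model F(m) = A·p^m + B to sequence-averaged fidelities and converts the depolarizing parameter p to the average error r = (d-1)(1-p)/d, with d = 4 for two qubits. The data arrays are made-up placeholders, not the trapped-ion measurements.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(m, A, B, p):
        """Zeroth-order randomized-benchmarking model F(m) = A*p**m + B."""
        return A * p**m + B

    # Placeholder data: sequence lengths and sequence-averaged fidelities.
    lengths = np.array([1, 3, 5, 8, 12, 20])
    fidelity = np.array([0.80, 0.55, 0.42, 0.31, 0.26, 0.25])

    (A, B, p), _ = curve_fit(decay, lengths, fidelity, p0=(0.75, 0.25, 0.8))

    d = 4  # Hilbert-space dimension for two qubits
    r = (d - 1) * (1 - p) / d  # average error per random Clifford
    print(f"p = {p:.3f}, error per Clifford r = {r:.3f}")
    ```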

  13. Benchmarking monthly homogenization algorithms

    Directory of Open Access Journals (Sweden)

    V. K. C. Venema

    2011-08-01

    Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

    Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve…
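
    The construction of the simulated part of such a benchmark can be sketched as follows: breakpoints arrive as a Poisson process, each shifting the remainder of the series by a normally distributed amount. The break rate and magnitude below are illustrative choices, not the HOME settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def insert_breaks(series, rate_per_year=0.2, break_sd=0.8):
        """Add random step-type inhomogeneities to a monthly series.

        The number of breaks is Poisson with `rate_per_year` breaks per
        year on average; each break shifts the rest of the series by a
        N(0, break_sd) amount.
        """
        months = len(series)
        n_breaks = rng.poisson(rate_per_year * months / 12)
        positions = rng.integers(1, months, size=n_breaks)
        corrupted = series.copy()
        for pos in positions:
            corrupted[pos:] += rng.normal(0.0, break_sd)
        return corrupted, sorted(positions)

    homogeneous = rng.normal(0.0, 1.0, size=50 * 12)  # 50 years, monthly
    inhomogeneous, breaks = insert_breaks(homogeneous)
    print(f"inserted {len(breaks)} breaks at months {breaks}")
    ```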

  14. Engine Benchmarking - Final CRADA Report

    Energy Technology Data Exchange (ETDEWEB)

    Wallner, Thomas [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-01

    Detailed benchmarking of the powertrains of three light-duty vehicles was performed. Results were presented and provided to CRADA partners. The vehicles included a MY2011 Audi A4, a MY2012 Mini Cooper and a MY2014 Nissan Versa.

  15. Reaction rate distribution measurement and the core performance evaluation in the prototype FBR Monju

    Energy Technology Data Exchange (ETDEWEB)

    Usami, S.; Suzuoki, Z.; Deshimaru, T. [Monju Construction Office, Japan Nuclear Cycle Development Institute, Fukui-ken (Japan); Nakashima, F. [Tsuruga Head Office, Japan Nuclear Cycle Development Institute, Fukui-ken (Japan)

    2001-07-01

    Monju is a prototype fast breeder reactor designed to have an electrical output of 280 MW (714 MWt), fueled with mixed oxides of plutonium and uranium and cooled by liquid sodium. The principal data on plant design and performance are shown in Table 1. Monju attained initial criticality in April 1994, and the reactor physics tests were carried out from May through November 1994. The reaction rate distribution measurement by the foil activation method was one of these tests and was carried out in order to verify the core performance and to contribute to the development of the core design methods. On the basis of the reaction rate measurement data, the Monju initial core breeding ratio and the power distribution were evaluated. (author)
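
    For reference, the relation underlying the foil activation method can be written down generically (this is the standard activation equation, not a formula quoted from the paper): the reaction rate per target nucleus R is inferred from the measured foil activity A by correcting for activity build-up during irradiation and decay before counting,

    ```latex
    R = \frac{A}{N \left(1 - e^{-\lambda t_{\mathrm{irr}}}\right) e^{-\lambda t_{\mathrm{cool}}}}
    ```

    where N is the number of target nuclei in the foil, \lambda is the decay constant of the activation product, t_irr is the irradiation time, and t_cool is the cooling time between the end of irradiation and counting.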

  16. Enhanced Device and Circuit-Level Performance Benchmarking of Graphene Nanoribbon Field-Effect Transistor against a Nano-MOSFET with Interconnects

    Directory of Open Access Journals (Sweden)

    Huei Chaeng Chin

    2014-01-01

    Full Text Available Comparative benchmarking of a graphene nanoribbon field-effect transistor (GNRFET) and a nanoscale metal-oxide-semiconductor field-effect transistor (nano-MOSFET) for applications in ultralarge-scale integration (ULSI) is reported. GNRFET is found to be distinctly superior in the circuit-level architecture. The remarkable transport properties of GNR propel it into an alternative technology to circumvent the limitations imposed by silicon-based electronics. Budding GNRFET, modeled using the circuit-level simulation software SPICE, exhibits enriched performance for digital logic gates in 16 nm process technology. The assessment of these performance metrics includes the energy-delay product (EDP) and power-delay product (PDP) of inverter, NOR and NAND gates, forming the building blocks for ULSI. The evaluation of EDP and PDP is carried out for interconnect lengths that range up to 100 μm. An analysis, based on the drain and gate current-voltage characteristics (Id-Vd and Id-Vg), of subthreshold swing (SS), drain-induced barrier lowering (DIBL), and current on/off ratio for circuit implementation is given. GNRFET can overcome the short-channel effects that are prevalent in sub-100 nm Si MOSFETs. GNRFET provides an EDP and PDP roughly one order of magnitude lower than those of a MOSFET. Even though the GNRFET is energy efficient, the circuit performance of the device is limited by the interconnect capacitances.
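
    Since the abstract leans on PDP and EDP as figures of merit, it may help to recall how they are computed: PDP = P_avg·t_d (the energy per switching event) and EDP = PDP·t_d. The sketch below evaluates both for made-up inverter numbers; it is generic, not tied to the 16 nm SPICE models used in the paper.

    ```python
    def pdp_edp(avg_power_w, delay_s):
        """Power-delay product (J) and energy-delay product (J*s)."""
        pdp = avg_power_w * delay_s
        edp = pdp * delay_s
        return pdp, edp

    # Illustrative inverter numbers only (not from the paper).
    pdp, edp = pdp_edp(avg_power_w=2e-6, delay_s=5e-12)
    print(f"PDP = {pdp:.3e} J, EDP = {edp:.3e} J*s")
    ```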

  17. Design and Performance of South Ukraine Nuclear Power Plant Mixed Cores

    Energy Technology Data Exchange (ETDEWEB)

    Abdullayev, A. M.; Baydulin, V.; Zhukov, A. I.; Latorre, Richard

    2011-09-24

    In 2010, 42 Westinghouse fuel assemblies (WFAs) were loaded into the core of South Ukraine Nuclear Power Plant (SUNPP) Unit 3 after four successful cycles with 6 Westinghouse Lead Test Assemblies. The scope of safety substantiating documents required for the regulatory approval of this mixed core was extended considerably, particularly with development and implementation of new methodologies and 3-D kinetic codes. Additional verification for all employed codes was also performed. Despite the inherent hydraulic non-uniformity of a mixed core, it was possible to demonstrate that all design and operating restrictions for three different types of fuel (TVS-M, TVSA and WFA) loaded in the core were conservatively met. This paper provides the main results from the first year of operation of the core loaded with 42 WFAs, the predicted parameters for the transition and equilibrium cycles with WFAs, comparisons of predicted versus measured core parameters, as well as the acceptable margin evaluation results for reactivity accidents using the 3-D kinetic codes. To date WFA design parameters have been confirmed by operation experience.

  18. The design and performance of IceCube DeepCore

    Science.gov (United States)

    Abbasi, R.; Abdou, Y.; Abu-Zayyad, T.; Ackermann, M.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Allen, M. M.; Altmann, D.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Bazo Alba, J. L.; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; BenZvi, S.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bindig, D.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohm, C.; Bose, D.; Böser, S.; Botner, O.; Brown, A. M.; Buitink, S.; Caballero-Mora, K. S.; Carson, M.; Chirkin, D.; Christy, B.; Clevermann, F.; Cohen, S.; Colnard, C.; Cowen, D. F.; Cruz Silva, A. H.; D'Agostino, M. V.; Danninger, M.; Daughhetee, J.; Davis, J. C.; De Clercq, C.; Degner, T.; Demirörs, L.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; DeYoung, T.; Díaz-Vélez, J. C.; Dierckxsens, M.; Dreyer, J.; Dumm, J. P.; Dunkman, M.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Fedynitch, A.; Feintzeig, J.; Feusels, T.; Filimonov, K.; Finley, C.; Fischer-Wasels, T.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Gerhardt, L.; Gladstone, L.; Glüsenkamp, T.; Goldschmidt, A.; Goodman, J. A.; Góra, D.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gurtner, M.; Ha, C.; Haj Ismail, A.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Heinen, D.; Helbing, K.; Hellauer, R.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Hoffmann, B.; Homeier, A.; Hoshina, K.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Ishihara, A.; Jacobi, E.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Köhne, J.-H.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Koskinen, D. J.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Kroll, G.; Kurahashi, N.; Kuwabara, T.; Labare, M.; Laihem, K.; Landsman, H.; Larson, M. J.; Lauer, R.; Lünemann, J.; Madsen, J.; Marotta, A.; Maruyama, R.; Mase, K.; Matis, H. S.; Meagher, K.; Merck, M.; Mészáros, P.; Meures, T.; Miarecki, S.; Middell, E.; Milke, N.; Miller, J.; Montaruli, T.; Morse, R.; Movit, S. M.; Nahnhauer, R.; Nam, J. W.; Naumann, U.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; O'Murchadha, A.; Panknin, S.; Paul, L.; Pérez de los Heros, C.; Petrovic, J.; Piegsa, A.; Pieloth, D.; Porrata, R.; Posselt, J.; Price, P. B.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Richman, M.; Rodrigues, J. P.; Rothmaier, F.; Rott, C.; Ruhe, T.; Rutledge, D.; Ruzybayev, B.; Ryckbosch, D.; Sander, H.-G.; Santander, M.; Sarkar, S.; Schatto, K.; Schmidt, T.; Schönwald, A.; Schukraft, A.; Schultes, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stezelberger, T.; Stokstad, R. G.; Stößl, A.; Strahler, E. A.; Ström, R.; Stüer, M.; Sullivan, G. W.; Swillens, Q.; Taavola, H.; Taboada, I.; Tamburro, A.; Tepe, A.; Ter-Antonyan, S.; Tilav, S.; Toale, P. A.; Toscano, S.; Tosi, D.; van Eijndhoven, N.; Vandenbroucke, J.; Van Overloop, A.; van Santen, J.; Vehring, M.; Voge, M.; Walck, C.; Waldenmaier, T.; Wallraff, M.; Walter, M.; Weaver, Ch.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebe, K.; Wiebusch, C. H.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Wolf, M.; Wood, T. R.; Woschnagg, K.; Xu, C.; Xu, D. L.; Xu, X. W.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Zarzhitsky, P.; Zoll, M.

    2012-05-01

    The IceCube neutrino observatory in operation at the South Pole, Antarctica, comprises three distinct components: a large buried array for ultrahigh energy neutrino detection, a surface air shower array, and a new buried component called DeepCore. DeepCore was designed to lower the IceCube neutrino energy threshold by over an order of magnitude, to energies as low as about 10 GeV. DeepCore is situated primarily 2100 m below the surface of the icecap at the South Pole, at the bottom center of the existing IceCube array, and began taking physics data in May 2010. Its location takes advantage of the exceptionally clear ice at those depths and allows it to use the surrounding IceCube detector as a highly efficient active veto against the principal background of downward-going muons produced in cosmic-ray air showers. DeepCore has a module density roughly five times higher than that of the standard IceCube array, and uses photomultiplier tubes with a new photocathode featuring a quantum efficiency about 35% higher than standard IceCube PMTs. Taken together, these features of DeepCore will increase IceCube's sensitivity to neutrinos from WIMP dark matter annihilations, atmospheric neutrino oscillations, galactic supernova neutrinos, and point sources of neutrinos in the northern and southern skies. In this paper we describe the design and initial performance of DeepCore.

  19. Optimization of Mud Hammer Drilling Performance--A Program to Benchmark the Viability of Advanced Mud Hammer Drilling

    Energy Technology Data Exchange (ETDEWEB)

    Arnis Judzis

    2006-03-01

    Operators continue to look for ways to improve hard rock drilling performance through emerging technologies. A consortium of Department of Energy, operator and industry participants put together an effort to test and optimize mud driven fluid hammers as one emerging technology that has shown promise to increase penetration rates in hard rock. The thrust of this program has been to test and record the performance of fluid hammers in full scale test conditions including hard formations at simulated depth, high density/high solids drilling muds, and realistic fluid power levels. This paper details the testing, and the results, of two 7 3/4 inch diameter mud hammers with 8 1/2 inch hammer bits. A Novatek MHN5 and an SDS Digger FH185 mud hammer were tested with several bit types, with performance being compared to a conventional (IADC Code 537) tricone bit. These tools functionally operated in all of the simulated downhole environments. The performance was in the range of the baseline tricone or better at lower borehole pressures, but at higher borehole pressures the performance was in the lower range or below that of the baseline tricone bit. A new drilling mode was observed while operating the MHN5 mud hammer. This mode was noticed as the weight on bit (WOB) was in transition from low to high applied load. During this new "transition drilling mode", performance was substantially improved and in some cases outperformed the tricone bit. Improvements were noted for the SDS tool while drilling with a more aggressive bit design. Future work includes the optimization of these or the next generation tools for operating in higher density and higher borehole pressure conditions and improving bit design and technology based on the knowledge gained from this test program.

  20. Managing for Results in America's Great City Schools. A Report of the Performance Measurement and Benchmarking Project

    Science.gov (United States)

    Council of the Great City Schools, 2012

    2012-01-01

    "Managing for Results in America's Great City Schools, 2012" is presented by the Council of the Great City Schools to its members and the public. The purpose of the project was and is to develop performance measures that can improve the business operations of urban public school districts nationwide. This year's report includes data from 61 of the…

  1. SAT Participation and Performance and the Attainment of College and Career Readiness Benchmark Scores for the Class of 2013. Memorandum

    Science.gov (United States)

    Sanderson, Geoffrey T.

    2013-01-01

    This memorandum describes the SAT participation and performance for the Montgomery County (Maryland) Public Schools (MCPS) Class of 2013 compared with the graduating seniors in Maryland and the nation. Detailed results of SAT and ACT by high school and student group for graduates in 2011-2013 are included. MCPS students continue to outperform the…

  2. Managing for Results in America's Great City Schools. A Report of the Performance Measurement and Benchmarking Project, Spring 2008

    Science.gov (United States)

    Council of the Great City Schools, 2008

    2008-01-01

    This report describes statistical indicators developed by the Council of the Great City Schools and its member districts to measure big-city school performance on a range of operational functions in business, finance, human resources and technology. The report also presents data city-by-city on those indicators. This is the second time that…

  3. Managing for Results in America's Great City Schools. A Report of the Performance Measurement and Benchmarking Project

    Science.gov (United States)

    Council of the Great City Schools, 2008

    2008-01-01

    This report describes statistical indicators developed by the Council of the Great City Schools and its member districts to measure big-city school performance on a range of operational functions in business, finance, human resources and technology. The report also presents data city-by-city on those indicators. This is the second time that…

  4. Study of Core Competency Elements and Factors Affecting Performance Efficiency of Government Teachers in Northeastern Thailand

    Science.gov (United States)

    Chansirisira, Pacharawit

    2012-01-01

    The research aimed to investigate the core competency elements and the factors affecting the performance efficiency of the civil service teachers in the northeastern region, Thailand. The research procedure consisted of two steps. In the first step, the data were collected using a questionnaire with the reliability (Cronbach's Alpha) of 0.90. The…
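
    The reliability figure cited above is Cronbach's alpha. For reference, the standard formula is alpha = k/(k-1) · (1 - Σ item variances / variance of total scores); the sketch below computes it on a tiny made-up example, and is not code from the study.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: 2-D array, rows = respondents, columns = questionnaire items."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    # Tiny made-up example: 4 respondents answering 3 items.
    scores = [[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2]]
    print(f"alpha = {cronbach_alpha(scores):.2f}")
    ```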

  5. Energy key performance indicators : a european benchmark and assessment of meaningful indicators for the use of energy in large corporations

    OpenAIRE

    Friedrichs, Katja

    2013-01-01

    This study aims to identify and analyze energy key performance indicators among large European companies. Energy usage has become a very meaningful topic for both internal management as well as external stakeholders of a company. A review of current literature suggests that while environmental indicators in general have found broad attention and plenty of theories concerning good and meaningful indicators are published, no study investigating actually applied energy indicators ...

  6. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint US/Russian Progress Report for Fiscal 1997. Volume 3 - Calculations Performed in the Russian Federation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-06-01

    This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the Russian Federation during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks that the United States and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors.

  7. Highest performance in 3D metal cutting at smallest footprint: benchmark of a robot based system vs. parameters of gantry systems

    Science.gov (United States)

    Scheller, Torsten; Bastick, André; Michel-Triller, Robert; Manzella, Christon

    2014-02-01

    In the automotive industry as well as in other industries, ecological aspects regarding energy savings are driving new technologies and materials, e.g. lightweight materials such as aluminium or press-hardened steels. For processing such parts, especially complex 3D-shaped parts, laser manufacturing has become the key process, offering the highest efficiency. The most established systems for 3D cutting applications are based on gantry systems. The disadvantage of those systems is their huge footprint, needed to realize the required stability and work envelope. Alternatively, a robot-based system might be of advantage if its accuracy, speed and overall performance were capable of processing automotive parts. With the BIM "beam in motion" system, JENOPTIK Automatisierungstechnik GmbH has developed a modular robot-based laser processing machine which meets all OEM specs for processing press-hardened steel parts. A benchmark of the BIM versus a gantry system was done regarding all parameters required to fulfil OEM specifications for press-hardened steel parts. As a result, a highly productive, accurate and efficient system can be described, based on one or multiple robot modules working simultaneously together. The paper presents the improvements on the robot machine concept BIM addressed in 2012 [1], leading to an industrially proven system approach for the automotive industry. It further compares the performance and the parameters for 3D cutting applications of the BIM system versus a gantry system using samples of applied parts. Finally, an overview of suitable applications for processing complex 3D parts with high productivity at a small footprint is given.

  8. Performance evaluation of the Hermite scheme on many-core accelerators

    Science.gov (United States)

    Nakasato, Naohito

    2016-02-01

    We are developing a software library to calculate gravitational interaction for the Hermite scheme on parallel computing systems supported by OpenCL API. Our library is partly compatible with a standard GRAPE-6A interface and is easily usable in existing N-body codes. Since our library is based on OpenCL standard API, our library is working on many parallel computing systems such as a multi-core CPU, a GPU, and a many-core architecture. We report the performance evaluation of our library on computing platforms from various vendors.
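
    To indicate what a Hermite-scheme interface computes, here is a minimal sketch of the fourth-order Hermite predictor step (positions and velocities extrapolated with accelerations and jerks). It follows the textbook form of the scheme and is not the library's OpenCL kernel code.

    ```python
    import numpy as np

    def hermite_predict(x, v, a, jerk, dt):
        """Fourth-order Hermite predictor for an N-body step.

        x, v, a, jerk: (N, 3) arrays of positions, velocities,
        accelerations, and their time derivatives; dt: time step.
        """
        xp = x + v * dt + a * dt**2 / 2 + jerk * dt**3 / 6
        vp = v + a * dt + jerk * dt**2 / 2
        return xp, vp

    # Two-body toy example with made-up values.
    x = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    v = np.array([[0.0, 0.5, 0.0], [0.0, -0.5, 0.0]])
    a = np.zeros((2, 3)); jerk = np.zeros((2, 3))
    xp, vp = hermite_predict(x, v, a, jerk, dt=0.01)
    ```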

  9. The Conic Benchmark Format

    DEFF Research Database (Denmark)

    Friberg, Henrik A.

    This document constitutes the technical reference manual of the Conic Benchmark Format with file extension .cbf or .CBF. It unifies linear, second-order cone (also known as conic quadratic) and semidefinite optimization with mixed-integer variables. The format has been designed with benchmark libraries in mind, and therefore focuses on compact and easily parsable representations. The problem structure is separated from the problem data, and the format moreover facilitates benchmarking of hotstart capability through sequences of changes.

  10. Toward the 5nm technology: layout optimization and performance benchmark for logic/SRAMs using lateral and vertical GAA FETs

    Science.gov (United States)

    Huynh-Bao, Trong; Ryckaert, Julien; Sakhare, Sushil; Mercha, Abdelkarim; Verkest, Diederik; Thean, Aaron; Wambacq, Piet

    2016-03-01

    In this paper, we present a layout and performance analysis of logic and SRAM circuits for vertical and lateral GAA FETs using 5nm (iN5) design rules. Extreme ultra-violet lithography (EUVL) processes are exploited to print the critical features: 32 nm gate pitch and 24 nm metal pitch. Layout architectures and patterning compromises for enabling the 5nm node are discussed in detail. A distinct standard-cell template for vertical FETs is proposed and elaborated for the first time. To assess electrical performance, a BSIM-CMG model has been developed and calibrated with TCAD simulations, which accounts for the quasi-ballistic transport in the nanowire channel. The results show that the inbound power rail layout construct for vertical devices achieves the highest density, while the interleaving diffusion template maximizes port accessibility. Using a representative critical-path circuit of generic low-power SoCs, it is shown that the VFET-based circuit is 40% more energy efficient than LFET designs at iso-performance. Regarding SRAMs, the vertical channel orientation of VFETs reduces the SRAM area by 20%~30% compared to lateral SRAMs. A double exposure with an EUV scanner is needed to reach a minimum tip-to-tip (T2T) of 16 nm for middle-of-line (MOL) layers. To enable HD SRAMs with two metal layers, a fully self-aligned gate contact for LFETs and 2D routing of the top electrode for VFETs are required. The standby leakage of vertical SRAMs is 4~6X lower than that of LFET-based SRAMs at iso-performance and iso-area. The minimum operating voltage (Vmin) of vertical SRAMs is 170 mV lower than that of lateral SRAMs. A high-density SRAM bitcell of 0.014 um2 can be obtained for the iN5 technology node, which fully follows the SRAM scaling trend from the 45nm node onward.

  11. Hypervelocity Impact Performance of Open Cell Foam Core Sandwich Panel Structures

    Science.gov (United States)

    Ryan, Shannon; Christiansen, Eric; Lear, Dana

    2009-01-01

    Metallic foams are a relatively new class of materials with low density and novel physical, mechanical, thermal, electrical and acoustic properties. Although incompletely characterized, they offer comparable mechanical performance to traditional spacecraft structural materials (i.e. honeycomb sandwich panels) without detrimental through-thickness channeling cells. There are two competing types of metallic foams: open cell and closed cell. Open cell foams are considered the more promising technology due to their lower weight and higher degree of homogeneity. Leading micrometeoroid and orbital debris (MMOD) shields incorporate thin plates separated by a void space (i.e. the Whipple shield). Inclusion of intermediate fabric layers, or multiple bumper plates, has led to significant performance enhancements, yet these shields require additional non-ballistic mass for installation (fasteners, supports, etc.) that can consume up to 35% of the total shield weight [1]. Structural panels, such as open cell foam core sandwich panels, that are also capable of providing sufficient MMOD protection represent a significant potential for increased efficiency in hypervelocity impact shielding from a systems perspective, through a reduction in required non-ballistic mass. In this paper, the results of an extensive impact test program on aluminum foam core sandwich panels are reported. The effects of pore density and core thickness on shielding performance have been evaluated over impact velocities ranging from 2.2 - 9.3 km/s at various angles. A number of additional tests on alternate sandwich panel configurations of comparable weight have also been performed, including aluminum honeycomb sandwich panels (see Figure 1), Nomex honeycomb core sandwich panels, and 3D aluminum honeycomb sandwich panels. A total of 70 hypervelocity impact tests are reported, from which an empirical ballistic limit equation (BLE) has been derived. The BLE is in the standard form suitable for implementation in…

  12. Radiation Detection Computational Benchmark Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  13. Sensitivity Analysis of OECD Benchmark Tests in BISON

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schmidt, Rodney C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williamson, Richard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
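
    The correlation measures named above are straightforward to reproduce on any sample matrix. The sketch below computes Pearson and Spearman coefficients between each input column and a response column with SciPy, on random placeholder data rather than the actual 300 BISON samples.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_samples, n_inputs = 300, 17

    # Placeholder inputs and a placeholder linear-plus-noise response.
    X = rng.normal(size=(n_samples, n_inputs))
    y = X @ rng.normal(size=n_inputs) + rng.normal(size=n_samples)

    for j in range(n_inputs):
        pearson, _ = stats.pearsonr(X[:, j], y)
        spearman, _ = stats.spearmanr(X[:, j], y)
        print(f"input {j:2d}: Pearson {pearson:+.2f}, Spearman {spearman:+.2f}")
    ```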

  14. Highly efficient photocatalytic performance of graphene-ZnO quasi-shell-core composite material.

    Science.gov (United States)

    Bu, Yuyu; Chen, Zhuoyuan; Li, Weibing; Hou, Baorong

    2013-12-11

    In the present paper, the graphene-ZnO composite with quasi-shell-core structure was successfully prepared using a one-step wet chemical method. The photocatalytic Rhodamine B degradation property and the photoelectrochemical performance of the graphene-ZnO quasi-shell-core composite depend on the amount of graphene oxide added. When the amount of graphene oxide added is 10 mg, the graphene-ZnO quasi-shell-core composite possesses the optimal photocatalytic degradation efficiency and the best photoelectrochemical performance. An efficient interfacial electric field is established at the interface between the graphene and ZnO, which significantly improves the separation efficiency of the photogenerated electron-hole pairs and thus dramatically increases its photoelectrochemical performance. In addition to the excellent photocatalytic and photoelectrochemical properties, the electron migration ability of the graphene-ZnO quasi-shell-core composite is significantly enhanced due to the graphene coating on the ZnO surface; therefore, this material has great potential for application as a substrate material to accept electrons in dye-sensitized solar cells and in narrow-bandgap semiconductor quantum-dot-sensitized solar cells.

  15. 20 CFR 641.730 - How will the Department assist grantees in the transition to the new core performance indicators?

    Science.gov (United States)

    2010-04-01

    ... the transition to the new core performance indicators? 641.730 Section 641.730 Employees' Benefits... EMPLOYMENT PROGRAM Performance Accountability § 641.730 How will the Department assist grantees in the transition to the new core performance indicators? (a) General transition provision. As soon as...

  16. Impact of structural distortions on the performance of hollow-core photonic bandgap fibers.

    Science.gov (United States)

    Fokoua, Eric Numkam; Richardson, David J; Poletti, Francesco

    2014-02-10

    We present a generic model for studying numerically the performance of hollow-core photonic bandgap fibers (HC-PBGFs) with arbitrary cross-sectional distortions. Fully vectorial finite element simulations reveal that distortions beyond the second ring of air holes have an impact on the leakage loss and bandwidth of the fiber, but do not significantly alter its surface scattering loss which remains the dominant contribution to the overall fiber loss (providing that a sufficient number of rings of air holes (≥ 5) are used). We have found that while most types of distortions in the first two rings are generally detrimental, enlarging the core defect while keeping equidistant and on a circular boundary the glass nodes surrounding the core may produce losses half those compared to "idealized" fiber designs and with no penalty in terms of the transmission bandwidth.

  17. Neuromuscular and athletic performance following core strength training in elite youth soccer: Role of instability.

    Science.gov (United States)

    Prieske, O; Muehlbauer, T; Borde, R; Gube, M; Bruhn, S; Behm, D G; Granacher, U

    2016-01-01

    Cross-sectional studies revealed that inclusion of unstable elements in core-strengthening exercises produced increases in trunk muscle activity and thus potential extra stimuli to induce more pronounced performance enhancements in youth athletes. Thus, the purpose of the study was to investigate changes in neuromuscular and athletic performance following core strength training performed on unstable (CSTU) compared with stable surfaces (CSTS) in youth soccer players. Thirty-nine male elite soccer players (age: 17 ± 1 years) were assigned to two groups performing a progressive core strength-training program for 9 weeks (2-3 times/week) in addition to regular in-season soccer training. CSTS group conducted core exercises on stable (i.e., floor, bench) and CSTU group on unstable (e.g., Thera-Band® Stability Trainer, Togu© Swiss ball) surfaces. Measurements included tests for assessing trunk muscle strength/activation, countermovement jump height, sprint time, agility time, and kicking performance. Statistical analysis revealed significant main effects of test (pre vs post) for trunk extensor strength (5%, P < 0.05, d = 0.86), 10-20-m sprint time (3%, P < 0.05, d = 2.56), and kicking performance (1%, P < 0.01, d = 1.28). No significant Group × test interactions were observed for any variable. In conclusion, trunk muscle strength, sprint, and kicking performance improved following CSTU and CSTS when conducted in combination with regular soccer training.

  18. Methodology for Benchmarking IPsec Gateways

    Directory of Open Access Journals (Sweden)

    Adam Tisovský

    2012-08-01

    Full Text Available The paper analyses the forwarding performance of an IPsec gateway over the range of offered loads. It focuses on the forwarding rate and packet loss, particularly at the gateway's performance peak and in the state of gateway overload. It explains the possible performance degradation when the gateway is overloaded by excessive offered load. The paper further evaluates different approaches for obtaining forwarding performance parameters: the widely used throughput described in RFC 1242, the maximum forwarding rate with zero packet loss, and our proposed equilibrium throughput. According to our observations, equilibrium throughput might be the most universal parameter for benchmarking security gateways, as the others may depend on the duration of test trials. Employing equilibrium throughput would also greatly shorten the time required for benchmarking. Lastly, the paper presents a methodology and a hybrid step/binary search algorithm for obtaining the value of equilibrium throughput.
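
    The binary-search half of such an algorithm can be sketched generically: probe the device at a trial rate, observe loss, and bisect between the highest loss-free rate and the lowest lossy rate until the interval is narrow. This is a generic zero-loss search in the spirit of the paper's hybrid step/binary algorithm, not the authors' exact procedure; `measure_loss()` is a stand-in for a real traffic-generator trial.

    ```python
    def find_zero_loss_rate(measure_loss, low, high, tol=1.0):
        """Bisect for the highest offered load (Mbit/s) with zero packet loss.

        measure_loss(rate) -> fraction of packets lost at that offered load;
        `low` must be loss-free and `high` lossy for the search to be valid.
        """
        while high - low > tol:
            mid = (low + high) / 2
            if measure_loss(mid) == 0.0:
                low = mid   # still loss-free: search upward
            else:
                high = mid  # losses observed: search downward
        return low

    # Stand-in for a traffic-generator trial against the gateway under test.
    def fake_measure_loss(rate, capacity=480.0):
        return 0.0 if rate <= capacity else (rate - capacity) / rate

    print(find_zero_loss_rate(fake_measure_loss, low=10.0, high=1000.0))
    ```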

  19. Benchmarking Variable Selection in QSAR.

    Science.gov (United States)

    Eklund, Martin; Norinder, Ulf; Boyer, Scott; Carlsson, Lars

    2012-02-01

    Variable selection is important in QSAR modeling since it can improve model performance and transparency, as well as reduce the computational cost of model fitting and predictions. Which variable selection methods perform well in QSAR settings is largely unknown. To address this question we, in a total of 1728 benchmarking experiments, rigorously investigated how eight variable selection methods affect the predictive performance and transparency of random forest models fitted to seven QSAR datasets covering different endpoints, descriptor sets, types of response variables, and numbers of chemical compounds. The results show that univariate variable selection methods are suboptimal and that the number of variables in the benchmarked datasets can be reduced by about 60% without significant loss in model performance when using multivariate adaptive regression splines (MARS) and forward selection.
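
    As an indication of how forward selection around a random-forest model might look in practice, here is a generic scikit-learn sketch on synthetic data. It is not the authors' 1728-experiment protocol, and the feature counts are illustrative (20 of 50 kept, echoing the roughly 60% reduction reported).

    ```python
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.feature_selection import SequentialFeatureSelector

    # Synthetic stand-in for a QSAR descriptor matrix.
    X, y = make_regression(n_samples=200, n_features=50, n_informative=10,
                           random_state=0)

    selector = SequentialFeatureSelector(
        RandomForestRegressor(n_estimators=100, random_state=0),
        n_features_to_select=20,   # illustrative ~60% reduction
        direction="forward",
        cv=3,
    )
    selector.fit(X, y)
    print("kept descriptors:", selector.get_support().nonzero()[0])
    ```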

  20. In silico target predictions: defining a benchmarking data set and comparison of performance of the multiclass Naïve Bayes and Parzen-Rosenblatt window.

    Science.gov (United States)

    Koutsoukas, Alexios; Lowe, Robert; Kalantarmotamedi, Yasaman; Mussa, Hamse Y; Klaffke, Werner; Mitchell, John B O; Glen, Robert C; Bender, Andreas

    2013-08-26

    In this study, two probabilistic machine-learning algorithms were compared for in silico target prediction of bioactive molecules, namely the well-established Laplacian-modified Naïve Bayes classifier (NB) and the more recently introduced (to Cheminformatics) Parzen-Rosenblatt Window. Both classifiers were trained in conjunction with circular fingerprints on a large data set of bioactive compounds extracted from ChEMBL, covering 894 human protein targets with more than 155,000 ligand-protein pairs. This data set is also provided as a benchmark data set for future target prediction methods due to its size as well as the number of bioactivity classes it contains. In addition to evaluating the methods, different performance measures were explored. This is not as straightforward as in binary classification settings, due to the number of classes, the possibility of multiple class memberships, and the need to translate model scores into "yes/no" predictions for assessing model performance. Both algorithms achieved a recall of correct targets that exceeds 80% in the top 1% of predictions. Performance depends significantly on the underlying diversity and size of a given class of bioactive compounds, with small classes and low structural similarity affecting both algorithms to different degrees. When tested on an external test set extracted from WOMBAT covering more than 500 targets by excluding all compounds with Tanimoto similarity above 0.8 to compounds from the ChEMBL data set, the current methodologies achieved a recall of 63.3% and 66.6% among the top 1% for Naïve Bayes and Parzen-Rosenblatt Window, respectively. While those numbers seem to indicate lower performance, they are also more realistic for settings where protein targets need to be established for novel chemical substances.
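
    The headline metric, recall of the correct target among the top 1% of ranked predictions, is easy to state in code. The sketch below assumes one known target per compound and a dense score matrix, which is a simplification of the multi-label setting described; all data are placeholders.

    ```python
    import numpy as np

    def recall_at_top_fraction(scores, true_idx, fraction=0.01):
        """scores: (n_compounds, n_targets) model scores;
        true_idx: (n_compounds,) index of the known target per compound."""
        n_targets = scores.shape[1]
        k = max(1, int(round(fraction * n_targets)))
        # Indices of the k highest-scoring targets per compound.
        top_k = np.argsort(-scores, axis=1)[:, :k]
        hits = (top_k == true_idx[:, None]).any(axis=1)
        return hits.mean()

    rng = np.random.default_rng(1)
    scores = rng.random((1000, 894))            # placeholder scores
    true_idx = rng.integers(0, 894, size=1000)  # placeholder true targets
    print(f"recall@1% = {recall_at_top_fraction(scores, true_idx):.3f}")
    ```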

  1. Demonstrate VERA Core Simulator Performance Improvements L2:PHI.P13.03

    Energy Technology Data Exchange (ETDEWEB)

    Collins, Benjamin S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hamilton, Steven P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jarrett, Michael G. [Univ. of Michigan, Ann Arbor, MI (United States); Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kochunas, Brendan [Univ. of Michigan, Ann Arbor, MI (United States); Liu, Yuxuan [Univ. of Michigan, Ann Arbor, MI (United States); Palmtag, Scott [Core Physics, Inc., Cary, NC (United States); Salko, Robert K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Stimpson, Shane G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Toth, Alex [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Yee, Ben [Univ. of Michigan, Ann Arbor, MI (United States)

    2016-08-31

    This report describes the performance improvements made to the VERA Core Simulator (VERA-CS) during FY2016. The development of the VERA Core Simulator has focused on the capability needed to deplete physical reactors and help solve various problems; this capability required the accurate simulation of many operating cycles of a nuclear power plant. The first section of this report introduces two test problems used to assess the run-time performance of VERA-CS using a source dated February 2016. The next section provides a brief overview of the major modifications made to decrease the computational cost. Following the descriptions of the major improvements, the run-time for each improvement is shown. Conclusions on the work are presented, and further follow-on performance improvements are suggested.

  2. Benchmark Performance of Global Switching versus Local Switching for Trajectory Surface Hopping Molecular Dynamics Simulation: Cis↔Trans Azobenzene Photoisomerization.

    Science.gov (United States)

    Yue, Ling; Yu, Le; Xu, Chao; Lei, Yibo; Liu, Yajun; Zhu, Chaoyuan

    2017-05-19

    A newly developed global switching algorithm that does not require calculation of nonadiabatic coupling vectors reduces computational costs significantly. However, the accuracy of this simplest nonadiabatic molecular dynamics method has not been extensively compared with the conventional Tully's fewest switches, so it is necessary to demonstrate the accuracy of the global switching algorithm. An extensive comparison between local and global switching on-the-fly trajectory surface hopping molecular dynamics is performed for cis-to-trans (800 sampling trajectories) and trans-to-cis (600 sampling trajectories) azobenzene photoisomerization at the OM2/MRCI level. The global switching algorithm is coded into the Newton-X program package. Excellent agreement between the two switching algorithms is obtained not only for highly averaged quantities such as quantum yields and lifetimes, but also for detailed contour patterns of product distributions, hopping spot distributions, and hopping directions in terms of conical intersections between the ground and first excited states. Therefore, the global switching trajectory surface hopping method can be applied to larger complex systems in which nonadiabatic coupling is not available for excited-state molecular dynamics simulations.

  3. Restaurant Energy Use Benchmarking Guideline

    Energy Technology Data Exchange (ETDEWEB)

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
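
    A minimal version of the proposed use of utility data might look like this: normalize each store's annual energy use by a business driver (floor area here) and flag stores far above the portfolio median. The normalization choice and the 20% flag threshold are illustrative assumptions, not values from the guideline.

    ```python
    from statistics import median

    # Placeholder utility data: store -> (annual kWh, floor area in sq ft).
    stores = {
        "store_A": (310_000, 2_400),
        "store_B": (280_000, 2_500),
        "store_C": (420_000, 2_300),
    }

    intensity = {name: kwh / area for name, (kwh, area) in stores.items()}
    benchmark = median(intensity.values())

    for name, eui in sorted(intensity.items(), key=lambda kv: -kv[1]):
        flag = "  <-- review" if eui > 1.2 * benchmark else ""
        print(f"{name}: {eui:.1f} kWh/sqft vs median {benchmark:.1f}{flag}")
    ```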

  4. Benchmarking ENDF/B-VII.0

    Science.gov (United States)

    van der Marck, Steven C.

    2006-12-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). For testing delayed neutron data more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257

  5. Combining self- and cross-docking as benchmark tools: the performance of DockBench in the D3R Grand Challenge 2

    Science.gov (United States)

    Salmaso, Veronica; Sturlese, Mattia; Cuzzolin, Alberto; Moro, Stefano

    2017-08-01

    Molecular docking is a powerful tool in the field of computer-aided molecular design. In particular, it is the technique of choice for the prediction of a ligand pose within its target binding site. A multitude of docking methods is available nowadays, whose performance may vary depending on the data set. Therefore, some non-trivial choices should be made before starting a docking simulation. In the same framework, the selection of the target structure to use could be challenging, since the number of available experimental structures is increasing. Both issues have been explored within this work. The pose prediction of a pool of 36 compounds provided by D3R Grand Challenge 2 organizers was preceded by a pipeline to choose the best protein/docking-method couple for each blind ligand. An integrated benchmark approach including ligand shape comparison and cross-docking evaluations was implemented inside our DockBench software. The results are encouraging and show that bringing attention to the choice of the docking simulation fundamental components improves the results of the binding mode predictions.

  6. Nanocellulose Derivative/Silica Hybrid Core-Shell Chiral Stationary Phase: Preparation and Enantioseparation Performance

    Directory of Open Access Journals (Sweden)

    Xiaoli Zhang

    2016-05-01

    Full Text Available Core-shell silica microspheres with a nanocellulose derivative in the hybrid shell were successfully prepared as a chiral stationary phase by a layer-by-layer self-assembly method. The hybrid shell assembled on the silica core was formed using a surfactant as template by the copolymerization reaction of tetraethyl orthosilicate and the nanocellulose derivative bearing triethoxysilyl and 3,5-dimethylphenyl groups. The resulting nanocellulose hybrid core-shell chiral packing materials (CPMs) were characterized and packed into columns, and their enantioseparation performance was evaluated by high performance liquid chromatography. The results showed that CPMs exhibited uniform surface morphology and core-shell structures. Various types of chiral compounds were efficiently separated under normal and reversed phase mode. Moreover, chloroform and tetrahydrofuran as mobile phase additives could obviously improve the resolution during the chiral separation processes. CPMs still have good chiral separation properties when eluted with solvent systems with a high content of tetrahydrofuran and chloroform, which proved the high solvent resistance of this new material.

  7. Relationship between core strength and key variables of performance in elite rink hockey players.

    Science.gov (United States)

    Hoppe, M W; Freiwald, J; Baumgart, C; Born, D P; Reed, J L; Sperlich, B

    2015-03-01

    The aim of this study was to test the hypothesis that a significant relationship exists between the level of core strength-endurance and key variables of endurance, strength, power, speed, and agility performance in male elite rink hockey players. Ten male elite rink hockey players of the German national team were tested for 1) time to exhaustion, maximum oxygen uptake, and running economy, 2) one repetition maximum bench press and half squat, 3) counter movement jump height, 4) 5 m, 10 m, and 20 m speed, and 5) 22 m agility. The rink hockey players were also tested for 6) ventral, lateral-left, lateral-right, and dorsal core strength-endurance using concentric-eccentric muscle tests. The level of total and ventral core strength-endurance was very largely correlated with maximum oxygen uptake (r=0.74 and r=0.71, both P<0.05), and a large correlation was found between core strength-endurance and time to exhaustion (r=0.66, P<0.05); no further significant correlations were observed (P>0.05). The findings from this study suggest that the level of core strength-endurance is largely to very largely correlated with key variables of endurance performance, but not significantly with strength, power, speed, or agility indicators in male elite rink hockey players. These findings should be noted by coaches and scientists when testing physical fitness or planning strength and conditioning programs for male elite rink hockey players.

  8. Performance assessment of excitation system based on minimum variance benchmark

    Institute of Scientific and Technical Information of China (English)

    张虹; 徐滨; 高健; 庞健

    2014-01-01

    Step-response tests are generally used to evaluate synchronous generator excitation system performance, but this method cannot be applied online. A method for evaluating excitation system performance against the minimum variance control benchmark is therefore proposed. The output performance of the system under a minimum variance controller is taken as the upper bound of achievable performance, and the ratio of this output performance to the actual output performance of the system is defined as the performance index. To avoid expanding the Diophantine equation, the filtering and correlation analysis (FCOR) algorithm is introduced. The analysis shows that this method requires only the synchronous generator output voltage data and a priori knowledge of the system dead time d. Simulation results show that the method simplifies the calculation process and evaluates the performance of the excitation control system online in a timely and accurate manner.
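
    The flavor of an FCOR-style computation can be sketched generically (this is a standard Harris-type minimum-variance index estimated from routine output data, not the authors' code): whiten the measured output with an AR model to estimate the innovations, then sum the squared output-innovation cross-correlations over the dead time d.

    ```python
    import numpy as np

    def fcor_index(y, d, ar_order=20):
        """Harris-type performance index via the FCOR idea.

        y: measured output (deviation from setpoint); d: process dead time
        in samples. Returns eta in (0, 1]; eta near 1 means the loop is
        close to minimum-variance performance.
        """
        y = np.asarray(y, float) - np.mean(y)
        N = len(y)
        # Fit an AR model by least squares to estimate the innovations a(t).
        X = np.column_stack(
            [y[ar_order - k - 1:N - k - 1] for k in range(ar_order)])
        coef, *_ = np.linalg.lstsq(X, y[ar_order:], rcond=None)
        a = y[ar_order:] - X @ coef
        yt = y[ar_order:]
        # Cross-correlations between y(t) and a(t - i) for i = 0..d-1.
        eta = 0.0
        for i in range(d):
            rho = np.corrcoef(yt[i:], a[:len(a) - i])[0, 1]
            eta += rho**2
        return eta

    # Example with a synthetic loop output and a dead time of 3 samples.
    rng = np.random.default_rng(0)
    print(f"eta = {fcor_index(rng.normal(size=2000), d=3):.2f}")
    ```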

  9. A framework for benchmarking land models

    Science.gov (United States)

    Luo, Y. Q.; Randerson, J. T.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, P.; Dalmonech, D.; Fisher, J. B.; Fisher, R.; Friedlingstein, P.; Hibbard, K.; Hoffman, F.; Huntzinger, D.; Jones, C. D.; Koven, C.; Lawrence, D.; Li, D. J.; Mahecha, M.; Niu, S. L.; Norby, R.; Piao, S. L.; Qi, X.; Peylin, P.; Prentice, I. C.; Riley, W.; Reichstein, M.; Schwalm, C.; Wang, Y. P.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-10-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models
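
    One simple realization of the proposed "scoring system" idea is to map each data-model mismatch onto a bounded score and combine the scores with weights. The exponential mapping of normalized RMSE and the equal weights below are illustrative choices, not ones prescribed by the paper.

    ```python
    import numpy as np

    def variable_score(model, obs):
        """Map a normalized RMSE onto a (0, 1] score: 1 = perfect match."""
        model, obs = np.asarray(model, float), np.asarray(obs, float)
        nrmse = np.sqrt(np.mean((model - obs) ** 2)) / np.std(obs)
        return float(np.exp(-nrmse))

    def overall_score(scores, weights):
        """Weighted combination of per-variable benchmark scores."""
        w = np.asarray(weights, float)
        return float(np.dot(scores, w / w.sum()))

    # Illustrative: score a model against observed GPP and latent heat flux.
    obs_gpp, mod_gpp = np.array([2.1, 3.0, 4.2]), np.array([2.4, 2.7, 4.6])
    obs_le, mod_le = np.array([60.0, 80.0, 95.0]), np.array([55.0, 86.0, 90.0])
    s = [variable_score(mod_gpp, obs_gpp), variable_score(mod_le, obs_le)]
    print(f"overall benchmark score = {overall_score(s, [0.5, 0.5]):.2f}")
    ```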

  10. A framework for benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-10-01

    Full Text Available Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models…

  11. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and thereby to explore…

  12. Manual for the VO (secondary education) benchmark

    NARCIS (Netherlands)

    Blank, j.l.t.

    2008-01-01

    Research report, 25 November 2008, published by IPSE Studies, written by J.L.T. Blank. A guide to reading the …

  13. Benchmarking of vocational education programmes

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Wittrup, Jesper

    In this working paper we discuss how the Danish vocational schools can be benchmarked, and we present the results of a number of calculation models. Benchmarking the vocational schools is conceptually complicated: the schools offer a wide range of different programmes, which makes it difficult...

  14. Benchmarking of municipal case processing

    DEFF Research Database (Denmark)

    Amilon, Anna

    From 2007, Ankestyrelsen (the Danish National Social Appeals Board) is to carry out benchmarking of the quality of municipal case processing. The purpose of the benchmarking is to develop the design of the practice reviews with a view to better follow-up and to improve the municipalities' case processing. This working paper discusses methods for benchmarking...

  15. Chemical insights into the roles of nanowire cores on the growth and supercapacitor performances of Ni-Co-O/Ni(OH)2 core/shell electrodes

    Science.gov (United States)

    Yin, Xuesong; Tang, Chunhua; Zhang, Liuyang; Yu, Zhi Gen; Gong, Hao

    2016-02-01

    Nanostructured core/shell electrodes have been experimentally demonstrated promising for high-performance electrochemical energy storage devices. However, chemical insights into the significant roles of nanowire cores on the growth of shells and their supercapacitor behaviors still remain as a research shortfall. In this work, by substituting 1/3 cobalt in the Co3O4 nanowire core with nickel, a 61% enhancement of the specific mass-loading of the Ni(OH)2 shell, a tremendous 93% increase of the volumetric capacitance and a superior cyclability were achieved in a novel NiCo2O4/Ni(OH)2 core/shell electrode in contrast to a Co3O4/Ni(OH)2 one. A comparative study suggested that not only the growth of Ni(OH)2 shells but also the contribution of cores were attributed to the overall performances. Importantly, their chemical origins were revealed through a theoretical simulation of the core/shell interfacial energy changes. Besides, asymmetric supercapacitor devices and applications were also explored. The scientific clues and practical potentials obtained in this work are helpful for the design and analysis of alternative core/shell electrode materials.

  16. An Effective Approach for Benchmarking Implementation

    Directory of Open Access Journals (Sweden)

    B. M. Deros

    2011-01-01

    Full Text Available Problem statement: The purpose of this study is to present a benchmarking guideline, conceptual framework and computerized mini program to assist companies in achieving better performance in terms of quality, cost, delivery and supply chain, and eventually to increase their competitiveness in the market. The study begins with a literature review on benchmarking definitions, barriers to and advantages from implementation, and benchmarking frameworks. Approach: Thirty respondents were involved in the case study. They comprised industrial practitioners who assessed the usability and practicability of the guideline, conceptual framework and computerized mini program. Results: A guideline and template were proposed to simplify the adoption of benchmarking techniques. A conceptual framework was proposed by integrating Deming's PDCA and Six Sigma DMAIC theory; it provided a step-by-step method to simplify implementation and to optimize the benchmarking results. A computerized mini program was suggested to assist users in adopting the technique as part of an improvement project. As a result of the assessment test, the respondents found that the implementation method gave companies an idea of how to initiate benchmarking and guided them toward the desired goal as set in a benchmarking project. Conclusion: The results obtained and discussed in this study can be applied to implementing benchmarking in a more systematic way and ensuring its success.

  17. How Activists Use Benchmarks

    DEFF Research Database (Denmark)

    Seabrooke, Leonard; Wigan, Duncan

    2015-01-01

    Non-governmental organisations use benchmarks as a form of symbolic violence to place political pressure on firms, states, and international organisations. The development of benchmarks requires three elements: (1) salience, that the community of concern is aware of the issue and views… are put to the test. The first is a reformist benchmarking cycle where organisations defer to experts to create a benchmark that conforms with the broader system of politico-economic norms. The second is a revolutionary benchmarking cycle driven by expert-activists that seek to contest strong vested… interests and challenge established politico-economic norms. Differentiating these cycles provides insights into how activists work through organisations and with expert networks, as well as how campaigns on complex economic issues can be mounted and sustained…

  18. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  19. Polydopamine and MnO2 core-shell composites for high-performance supercapacitors

    Science.gov (United States)

    Hou, Ding; Tao, Haisheng; Zhu, Xuezhen; Li, Maoguo

    2017-10-01

    Polydopamine and MnO2 core-shell composites (PDA@MnO2) for high-performance supercapacitors have been successfully synthesized by a facile and fast method. The morphology, crystalline phase and chemical composition of the PDA@MnO2 composites were characterized using SEM, TEM, XRD, EDS and XPS. The performance of the PDA@MnO2 composites was further investigated by cyclic voltammetry, galvanostatic charge-discharge and electrochemical impedance spectroscopy in 1 M Na2SO4 electrolyte. The PDA@MnO2 core-shell nanostructured composites exhibit a high capacitance of 193 F g-1 at a current density of 1 A g-1 and retain over 81.2% of their initial capacitance after 2500 charge-discharge cycles at 2 A g-1. The results indicate that the PDA@MnO2 composites can potentially be applied in supercapacitors.
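
    For orientation, gravimetric capacitance figures such as the 193 F g-1 above are conventionally extracted from the galvanostatic discharge curve; a standard relation (our addition, not spelled out in the record) is

        C_m = \frac{I\,\Delta t}{m\,\Delta V}

    where I is the discharge current, \Delta t the discharge time, m the mass of active material, and \Delta V the potential window after the IR drop.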

  20. DANCE, BALANCE AND CORE MUSCLE PERFORMANCE MEASURES ARE IMPROVED FOLLOWING A 9-WEEK CORE STABILIZATION TRAINING PROGRAM AMONG COMPETITIVE COLLEGIATE DANCERS

    Science.gov (United States)

    Graning, Jessica; McPherson, Sue; Carter, Elizabeth; Edwards, Joshuah; Melcher, Isaac; Burgess, Taylor

    2017-01-01

    Background Dance performance requires not only lower extremity muscle strength and endurance, but also sufficient core stabilization during dynamic dance movements. While previous studies have identified a link between core muscle performance and lower extremity injury risk, what has not been determined is if an extended core stabilization training program will improve specific measures of dance performance. Hypothesis/Purpose This study examined the impact of a nine-week core stabilization program on indices of dance performance, balance measures, and core muscle performance in competitive collegiate dancers. Study Design Within-subject repeated measures design. Methods A convenience sample of 24 female collegiate dance team members (age = 19.7 ± 1.1 years, height = 164.3 ± 5.3 cm, weight 60.3 ± 6.2 kg, BMI = 22.5 ± 3.0) participated. The intervention consisted of a supervised and non-supervised core (trunk musculature) exercise training program designed specifically for dance team participants performed three days/week for nine weeks in addition to routine dance practice. Prior to the program implementation and following initial testing, transversus abdominis (TrA) activation training was completed using the abdominal draw-in maneuver (ADIM) including ultrasound imaging (USI) verification and instructor feedback. Paired t tests were conducted regarding the nine-week core stabilization program on dance performance and balance measures (pirouettes, single leg balance in passé relevé position, and star excursion balance test [SEBT]) and on tests of muscle performance. A repeated measures (RM) ANOVA examined four TrA instruction conditions of activation: resting baseline, self-selected activation, immediately following ADIM training and four days after completion of the core stabilization training program. Alpha was set at 0.05 for all analysis. Results Statistically significant improvements were seen on single leg balance in passé relevé and bilateral anterior reach for the SEBT.

  1. DANCE, BALANCE AND CORE MUSCLE PERFORMANCE MEASURES ARE IMPROVED FOLLOWING A 9-WEEK CORE STABILIZATION TRAINING PROGRAM AMONG COMPETITIVE COLLEGIATE DANCERS.

    Science.gov (United States)

    Watson, Todd; Graning, Jessica; McPherson, Sue; Carter, Elizabeth; Edwards, Joshuah; Melcher, Isaac; Burgess, Taylor

    2017-02-01

    Dance performance requires not only lower extremity muscle strength and endurance, but also sufficient core stabilization during dynamic dance movements. While previous studies have identified a link between core muscle performance and lower extremity injury risk, what has not been determined is if an extended core stabilization training program will improve specific measures of dance performance. This study examined the impact of a nine-week core stabilization program on indices of dance performance, balance measures, and core muscle performance in competitive collegiate dancers. Within-subject repeated measures design. A convenience sample of 24 female collegiate dance team members (age = 19.7 ± 1.1 years, height = 164.3 ± 5.3 cm, weight 60.3 ± 6.2 kg, BMI = 22.5 ± 3.0) participated. The intervention consisted of a supervised and non-supervised core (trunk musculature) exercise training program designed specifically for dance team participants performed three days/week for nine weeks in addition to routine dance practice. Prior to the program implementation and following initial testing, transversus abdominis (TrA) activation training was completed using the abdominal draw-in maneuver (ADIM) including ultrasound imaging (USI) verification and instructor feedback. Paired t tests were conducted regarding the nine-week core stabilization program on dance performance and balance measures (pirouettes, single leg balance in passé relevé position, and star excursion balance test [SEBT]) and on tests of muscle performance. A repeated measures (RM) ANOVA examined four TrA instruction conditions of activation: resting baseline, self-selected activation, immediately following ADIM training and four days after completion of the core stabilization training program. Alpha was set at 0.05 for all analysis. Statistically significant improvements were seen on single leg balance in passé relevé and bilateral anterior reach for the SEBT (both p ≤ 0

  2. Impact of structural distortions on the performance of hollow-core photonic bandgap fibers

    OpenAIRE

    2014-01-01

    We present a generic model for studying numerically the performance of hollow-core photonic bandgap fibers (HC-PBGFs) with arbitrary cross-sectional distortions. Fully vectorial finite element simulations reveal that distortions beyond the second ring of air holes have an impact on the leakage loss and bandwidth of the fiber, but do not significantly alter its surface scattering loss which remains the dominant contribution to the overall fiber loss (providing that a sufficient number of rings...

  3. "Functional" Inspiratory and Core Muscle Training Enhances Running Performance and Economy.

    Science.gov (United States)

    Tong, Tomas K; McConnell, Alison K; Lin, Hua; Nie, Jinlei; Zhang, Haifeng; Wang, Jiayuan

    2016-10-01

    Tong, TK, McConnell, AK, Lin, H, Nie, J, Zhang, H, and Wang, J. "Functional" inspiratory and core muscle training enhances running performance and economy. J Strength Cond Res 30(10): 2942-2951, 2016-We compared the effects of two 6-week high-intensity interval training interventions. Under the control condition (CON), only interval training was undertaken, whereas under the intervention condition (ICT), interval training sessions were followed immediately by core training, which was combined with simultaneous inspiratory muscle training (IMT)-"functional" IMT. Sixteen recreational runners were allocated to either ICT or CON groups. Before the intervention phase, both groups undertook a 4-week program of "foundation" IMT to control for the known ergogenic effect of IMT (30 inspiratory efforts at 50% maximal static inspiratory pressure [P0] per set, 2 sets per day, 6 days per week). The subsequent 6-week interval running training phase consisted of 3-4 sessions per week. In addition, the ICT group undertook 4 inspiratory-loaded core exercises (10 repetitions per set, 2 sets per day, inspiratory load set at 50% post-IMT P0) immediately after each interval training session. The CON group received neither core training nor functional IMT. After the intervention phase, global inspiratory and core muscle functions increased in both groups (p ≤ 0.05), as evidenced by P0 and a sport-specific endurance plank test (SEPT) performance, respectively. Compared with CON, the ICT group showed larger improvements in SEPT, running economy at the speed of the onset of blood lactate accumulation, and 1-hour running performance (3.04% vs. 1.57%, p ≤ 0.05). The changes in these variables were interindividually correlated (r ≥ 0.57, n = 16, p ≤ 0.05). Such findings suggest that the addition of inspiratory-loaded core conditioning into a high-intensity interval training program augments the influence of the interval program on endurance running performance and that this may be

  4. Implementation of NAS Parallel Benchmarks in Java

    Science.gov (United States)

    Frumkin, Michael; Schultz, Matthew; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

    A number of features make Java an attractive but debatable choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvements in Java compiler technology and in Java thread implementation would move Java closer to Fortran in the competition for CFD applications.

  5. Simple Benchmark Specifications for Space Radiation Protection

    Science.gov (United States)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specifications start with simple, monoenergetic, mono-directional particles on slabs and progress to human models in spacecraft. This report specifies the models and sources needed, and what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  6. Full sphere hydrodynamic and dynamo benchmarks

    KAUST Repository

    Marti, P.

    2014-01-26

    Convection in planetary cores can generate fluid flow and magnetic fields, and a number of sophisticated codes exist to simulate the dynamic behaviour of such systems. We report on the first community activity to compare numerical results of computer codes designed to calculate fluid flow within a whole sphere. The flows are incompressible and rapidly rotating and the forcing of the flow is either due to thermal convection or due to moving boundaries. All problems defined have solutions that allow easy comparison, since they are either steady, slowly drifting or perfectly periodic. The first two benchmarks are defined based on uniform internal heating within the sphere under the Boussinesq approximation with boundary conditions that are uniform in temperature and stress-free for the flow. Benchmark 1 is purely hydrodynamic, and has a drifting solution. Benchmark 2 is a magnetohydrodynamic benchmark that can generate oscillatory, purely periodic, flows and magnetic fields. In contrast, Benchmark 3 is a hydrodynamic rotating bubble benchmark using no slip boundary conditions that has a stationary solution. Results from a variety of types of code are reported, including codes that are fully spectral (based on spherical harmonic expansions in angular coordinates and polynomial expansions in radius), mixed spectral and finite difference, finite volume, finite element and also a mixed Fourier-finite element code. There is good agreement between codes. It is found that in Benchmarks 1 and 2, the approximation of a whole sphere problem by a domain that is a spherical shell (a sphere possessing an inner core) does not represent an adequate approximation to the system, since the results differ from whole sphere results. © The Authors 2014. Published by Oxford University Press on behalf of The Royal Astronomical Society.

  7. Clinical performance of a lithia disilicate-based core ceramic for three-unit posterior FPDs.

    Science.gov (United States)

    Esquivel-Upshaw, Josephine F; Anusavice, Kenneth J; Young, Henry; Jones, Jack; Gibbs, Charles

    2004-01-01

    The purpose of this research project was to determine the clinical success rate of a lithia disilicate-based core ceramic for use in posterior fixed partial dentures (FPD) as a function of bite force, cement type, connector height, and connector width. Thirty ceramic FPD core frameworks were prepared using a heat-pressing technique and a lithia disilicate-based core ceramic. The maximum clenching force was measured for each patient prior to tooth preparation. Connector height and width were measured for each FPD. Patients were recalled yearly after cementation for 2 years and evaluated using 11 clinical criteria. All FPDs were examined by two independent clinicians, and rankings from 1 to 4 were made for each criterion (4 = excellent; 1 = unacceptable). Two of the 30 ceramic FPDs fractured within the 2-year evaluation period, representing a 93% success rate. One fracture was associated with a low occlusal force and short connector height (2.9 mm). The other fracture was associated with the greatest occlusal force (1,031 N) and adequate connector height. All criteria were ranked good to excellent during the 2-year recall for all remaining FPDs. The performance of the experimental core ceramic in posterior FPDs was promising, with only a 7% fracture rate after 2 years. Because of the limited sample size, it is not possible to identify the maximum clenching force that is allowable to prevent fracture caused by interocclusal forces.

  8. Improving performance portability for GPU-specific OpenCL kernels on multi-core/many-core CPUs by analysis-based transformations

    Institute of Scientific and Technical Information of China (English)

    Mei WEN; Da-fei HUANG; Chang-qing XUN; Dong CHEN

    2015-01-01

    OpenCL is an open heterogeneous programming framework. Although OpenCL programs are functionally portable, they do not provide performance portability, so code transformation often plays an irreplaceable role. When adapting GPU-specific OpenCL kernels to run on multi-core/many-core CPUs, coarsening the thread granularity is necessary and thus has been extensively used. However, locality concerns exposed in GPU-specific OpenCL code are usually inherited without analysis, which may have side effects on CPU performance. Typically, the use of OpenCL's local memory on multi-core/many-core CPUs may lead to an opposite performance effect, because local-memory arrays no longer match well with the hardware and the associated synchronizations are costly. To solve this dilemma, we actively analyze the memory access patterns using array-access descriptors derived from GPU-specific kernels, which can thus be adapted for CPUs by (1) removing all the unwanted local-memory arrays together with the obsolete barrier statements and (2) optimizing the coalesced kernel code with vectorization and locality re-exploitation. Moreover, we have developed an automated tool chain that performs this transformation of GPU-specific OpenCL kernels into a CPU-friendly form, accompanied by a scheduler that forms a new OpenCL runtime. Experiments show that the automated transformation can improve OpenCL kernel performance on a multi-core CPU by an average factor of 3.24. Satisfactory performance improvements are also achieved on Intel's many-integrated-core coprocessor. The resultant performance on both architectures is better than or comparable with the corresponding OpenMP performance.
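
    To make the coarsening transformation concrete, here is a minimal sketch in C with OpenMP (illustrative only; the paper's tool chain, kernel names and exact rewriting rules are not given in this record). The per-work-item body of a GPU-style kernel becomes one iteration of a loop per CPU thread, while the local-memory staging and its barrier are dropped so the compiler can vectorize:

        #include <stddef.h>

        /* Coarsened CPU form of a GPU kernel that computed
         * out[i] = s * in[i] once per work-item: the former
         * work-items become iterations of a single loop, the
         * local-memory copy and its barrier are simply removed,
         * and "parallel for simd" restores thread- and SIMD-level
         * parallelism in a CPU-friendly way. */
        void scale_coarsened(const float *in, float *out,
                             float s, size_t n)
        {
            #pragma omp parallel for simd
            for (size_t i = 0; i < n; ++i)
                out[i] = s * in[i];
        }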

  9. Performance of the CORE-10 and YP-CORE measures in a sample of youth engaging with a community mental health service.

    Science.gov (United States)

    O'Reilly, Aileen; Peiper, Nicholas; O'Keeffe, Lynsey; Illback, Robert; Clayton, Richard

    2016-12-01

    This article assesses the performance and psychometric properties of two versions of the Clinical Outcomes in Routine Evaluation (CORE) measures that assess psychological distress: the Young Person's CORE (YP-CORE) for 11-16 year olds and the CORE-10 for those 17 or older. The sample comprised 1592 young people aged 12-25 who completed the YP-CORE and CORE-10 during their initial engagement with an early intervention service. Total and average scores were examined for both measures. Gender and age differences were evaluated using t-tests and analysis of variance. The factor structures were assessed with principal axis and confirmatory factor analyses. Multigroup confirmatory factor analyses were then employed to evaluate measurement invariance across age and gender. Analyses were supportive of the CORE measures as reliable instruments to assess distress in 12-25 year olds. Based upon eigenvalues in combination with the comparative fit index, the Tucker-Lewis Index, and the root-mean-square error of approximation, both measures were unidimensional. Analysis indicated the factor structure, loadings, item thresholds, and residuals were invariant across age and gender, although partial support for strict invariance was found for gender among 12-16 year olds. Results are compared to previous studies and discussed in the context of program planning, service delivery, and evaluation. Copyright © 2016 John Wiley & Sons, Ltd.

  10. A framework of benchmarking land models

    Science.gov (United States)

    Luo, Y. Q.; Randerson, J.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, P.; Dalmonech, D.; Fisher, J.; Fisher, R.; Friedlingstein, P.; Hibbard, K.; Hoffman, F.; Huntzinger, D.; Jones, C. D.; Koven, C.; Lawrence, D.; Li, D. J.; Mahecha, M.; Niu, S. L.; Norby, R.; Piao, S. L.; Qi, X.; Peylin, P.; Prentice, I. C.; Riley, W.; Reichstein, M.; Schwalm, C.; Wang, Y. P.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-02-01

    Land models, which have been developed by the modeling community in the past two decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure and evaluate performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land models. The framework includes (1) targeted aspects of model performance to be evaluated; (2) a set of benchmarks as defined references to test model performance; (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies; and (4) model improvement. Component 4 may or may not be involved in a benchmark analysis but is an ultimate goal of general modeling research. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and the land-surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics across timescales in response to both weather and climate change. Benchmarks that are used to evaluate models generally consist of direct observations, data-model products, and data-derived patterns and relationships. Metrics of measuring mismatches between models and benchmarks may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance for future improvement. Iterations between model evaluation and improvement via benchmarking shall demonstrate progress of land modeling and help establish confidence in land models for their predictions of future states of ecosystems and climate.
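
    The scoring idea can be made concrete with a generic aggregate (purely illustrative; this is not the authors' formulation). Per-process mismatch scores s_i in [0, 1] are combined with weights w_i reflecting the importance of each process and scale:

        S = \sum_i w_i\, s_i, \qquad \sum_i w_i = 1, \qquad s_i = \exp\!\left(-\,\mathrm{RMSE}_i / \sigma_i\right)

    where RMSE_i is the data-model mismatch for process i and \sigma_i a reference observational variability, so that a perfect match scores 1 and large mismatches decay toward 0.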

  11. A framework of benchmarking land models

    Directory of Open Access Journals (Sweden)

    Y. Q. Luo

    2012-02-01

    Full Text Available Land models, which have been developed by the modeling community in the past two decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure and evaluate performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land models. The framework includes (1) targeted aspects of model performance to be evaluated; (2) a set of benchmarks as defined references to test model performance; (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies; and (4) model improvement. Component 4 may or may not be involved in a benchmark analysis but is an ultimate goal of general modeling research. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and the land-surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics across timescales in response to both weather and climate change. Benchmarks that are used to evaluate models generally consist of direct observations, data-model products, and data-derived patterns and relationships. Metrics of measuring mismatches between models and benchmarks may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance for future improvement. Iterations between model evaluation and improvement via benchmarking shall demonstrate progress of land modeling and help establish confidence in land models for their predictions of future states of ecosystems and climate.

  12. Medicare Program; Medicare Shared Savings Program; Accountable Care Organizations--Revised Benchmark Rebasing Methodology, Facilitating Transition to Performance-Based Risk, and Administrative Finality of Financial Calculations. Final rule.

    Science.gov (United States)

    2016-06-10

    Under the Medicare Shared Savings Program (Shared Savings Program), providers of services and suppliers that participate in an Accountable Care Organization (ACO) continue to receive traditional Medicare fee-for-service (FFS) payments under Parts A and B, but the ACO may be eligible to receive a shared savings payment if it meets specified quality and savings requirements. This final rule addresses changes to the Shared Savings Program, including: Modifications to the program's benchmarking methodology, when resetting (rebasing) the ACO's benchmark for a second or subsequent agreement period, to encourage ACOs' continued investment in care coordination and quality improvement; an alternative participation option to encourage ACOs to enter performance-based risk arrangements earlier in their participation under the program; and policies for reopening of payment determinations to make corrections after financial calculations have been performed and ACO shared savings and shared losses for a performance year have been determined.

  13. A Comparative Study on Performance Benefits of Multi-core CPUs using OpenMP

    Directory of Open Access Journals (Sweden)

    Vijayalakshmi Saravanan

    2012-01-01

    Full Text Available Achieving scalable parallelism from general programs has not been successful to this point. Extracting parallelism from programs has become the key focus of interest on multi-core CPUs. There are many techniques and programming models, such as MPI, CUDA and OpenMP, adopted in order to exploit more performance, but there remains a need to find the parallel programming techniques that benefit performance the most. This article shows how the parallel programming model can outperform the sequential programming model. To support our claim, we analyze performance in terms of execution time for both sequential and parallel implementations of naive matrix multiplication versus Strassen's matrix multiplication algorithm using OpenMP. Our results show that optimizing the code using OpenMP increases performance over the sequential implementation and performs well with parallel algorithms.
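
    A minimal version of the experiment described (our sketch, not the article's code) times a naive matrix multiplication whose outer loop is parallelized with a single OpenMP pragma; compile with -fopenmp and run with OMP_NUM_THREADS=1 versus all cores to approximate the sequential/parallel comparison:

        #include <stdio.h>
        #include <stdlib.h>
        #include <omp.h>

        #define N 512

        /* Naive O(N^3) matrix multiply; the pragma splits the outer
         * loop's iterations across the available cores. */
        static void matmul(const double *a, const double *b, double *c)
        {
            #pragma omp parallel for
            for (int i = 0; i < N; ++i)
                for (int j = 0; j < N; ++j) {
                    double sum = 0.0;
                    for (int k = 0; k < N; ++k)
                        sum += a[i * N + k] * b[k * N + j];
                    c[i * N + j] = sum;
                }
        }

        int main(void)
        {
            double *a = malloc(N * N * sizeof *a);
            double *b = malloc(N * N * sizeof *b);
            double *c = malloc(N * N * sizeof *c);
            if (!a || !b || !c) return 1;
            for (int i = 0; i < N * N; ++i) { a[i] = 1.0; b[i] = 2.0; }

            double t0 = omp_get_wtime();
            matmul(a, b, c);
            printf("%.3f s, c[0] = %g\n", omp_get_wtime() - t0, c[0]);
            free(a); free(b); free(c);
            return 0;
        }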

  14. Composing a core set of performance indicators for public mental health care: a modified Delphi procedure.

    Science.gov (United States)

    Lauriks, Steve; de Wit, Matty A S; Buster, Marcel C A; Arah, Onyebuchi A; Klazinga, Niek S

    2014-09-01

    Public mental health care (PMHC) systems are responsible for the wellbeing of vulnerable groups that cope with complex psychosocial problems. This article describes the development of a set of performance indicators that are feasible, meaningful, and useful to assess the quality of the PMHC system in Amsterdam, the Netherlands. Performance indicators were selected from an international inventory and presented to stakeholders of the PMHC system in a modified Delphi procedure. Characteristics of indicators were judged individually, before consensus on a core set was reached during a plenary discussion. Involving stakeholders at early stages of development increases support for quality assessment.

  15. Measuring NUMA effects with the STREAM benchmark

    CERN Document Server

    Bergstrom, Lars

    2011-01-01

    Modern high-end machines feature multiple processor packages, each of which contains multiple independent cores and integrated memory controllers connected directly to dedicated physical RAM. These packages are connected via a shared bus, creating a system with a heterogeneous memory hierarchy. Since this shared bus has less bandwidth than the sum of the links to memory, aggregate memory bandwidth is higher when parallel threads all access memory local to their processor package than when they access memory attached to a remote package. But, the impact of this heterogeneous memory architecture is not easily understood from vendor benchmarks. Even where these measurements are available, they provide only best-case memory throughput. This work presents a series of modifications to the well-known STREAM benchmark to measure the effects of NUMA on both a 48-core AMD Opteron machine and a 32-core Intel Xeon machine.
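
    The flavour of such modifications can be sketched as follows (an illustration of first-touch NUMA placement, not the authors' actual benchmark code). Initializing each array with the same static thread schedule later used for measurement places pages in the accessing thread's local memory; replacing the initialization with a serial loop places everything on one package and exposes the remote-access penalty:

        #include <stdio.h>
        #include <stdlib.h>
        #include <omp.h>

        #define N (1L << 24)   /* elements per array, ~128 MiB each */

        int main(void)
        {
            double *a = malloc(N * sizeof *a);
            double *b = malloc(N * sizeof *b);
            double *c = malloc(N * sizeof *c);
            if (!a || !b || !c) return 1;

            /* First touch: each thread touches the pages it will use
             * below, so the OS places them in that thread's local
             * NUMA domain. */
            #pragma omp parallel for schedule(static)
            for (long i = 0; i < N; ++i) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

            double t0 = omp_get_wtime();
            #pragma omp parallel for schedule(static)
            for (long i = 0; i < N; ++i)     /* STREAM-style triad */
                a[i] = b[i] + 3.0 * c[i];
            double t = omp_get_wtime() - t0;

            /* 3 arrays x N elements x 8 bytes moved per pass */
            printf("triad: %.2f GB/s\n", 3.0 * N * sizeof(double) / t / 1e9);
            free(a); free(b); free(c);
            return 0;
        }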

  16. Study on Scientific Research Performance Appraisal of Universities Based on Benchmarking and DEA

    Institute of Scientific and Technical Information of China (English)

    姜彤彤

    2012-01-01

    Through benchmarking and continuous learning, an organization can identify its gaps and close them until it succeeds. In this process, choosing the right benchmarking partner plays a decisive role in whether benchmarking succeeds. Combining the benchmarking method commonly used in business management with DEA can help universities find gaps through learning, formulate step-by-step plans for continuous improvement of research performance, and implement them. The empirical results show that the two methods complement each other, and that the combined approach is also feasible for government and enterprises.

  17. The Isprs Benchmark on Indoor Modelling

    Science.gov (United States)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in the literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  18. State of the art: benchmarking microprocessors for embedded automotive applications

    Directory of Open Access Journals (Sweden)

    Adnan Shaout

    2016-09-01

    Full Text Available Benchmarking microprocessors provides a way for consumers to evaluate the performance of processors, using either synthetic or real-world applications. A number of benchmarks exist today to assist consumers in evaluating the vast number of microprocessors available on the market. In this paper, the various benchmarks available for evaluating microprocessors for embedded automotive applications are investigated. We provide an overview of the following benchmarks: Whetstone, Dhrystone, Linpack, Standard Performance Evaluation Corporation (SPEC) CPU2006, Embedded Microprocessor Benchmark Consortium (EEMBC) AutoBench and MiBench. Existing benchmarks are then compared on the basis of characteristics relevant to automotive applications, leading to recommendations for benchmarking processors for such applications.

  19. Benchmarking expert system tools

    Science.gov (United States)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were the Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test was a Zetalisp version of the benchmark, along with four versions of the benchmark written in the Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  20. Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh; Manzano Franco, Joseph B.; Tumeo, Antonino

    2015-05-20

    Optimizing applications simultaneously for energy and performance is a complex problem. High performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.

  1. RISKIND verification and benchmark comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated by using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with those from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  2. Performance Analysis for EDMA Based on TIC6678Multi-core DSP

    Institute of Scientific and Technical Information of China (English)

    Yun Xu; Yimin Ouyang; Renjie Niu

    2015-01-01

    Frequent data exchange among all kinds of memories has become an inevitable phenomenon in modern embedded software design. In order to improve an embedded system's data throughput and computation capability, most embedded devices introduce Enhanced Direct Memory Access (EDMA) data transfer technology. TMS320C6678 is a multi-core DSP produced by Texas Instruments (TI). There are ten EDMA transmission controllers in the chip available for configuration, and data transmissions are allowed to be performed between any two pieces of storage at the same time. This paper expounds the working mechanism of EDMA based on the multi-core DSP TMS320C6678. At the same time, multiple data sets are provided, and the bottleneck limiting data throughput is analyzed and resolved.

  3. Surveying and benchmarking techniques to analyse DNA gel fingerprint images.

    Science.gov (United States)

    Heras, Jónathan; Domínguez, César; Mata, Eloy; Pascual, Vico

    2016-11-01

    DNA fingerprinting is a genetic typing technique that allows the analysis of the genomic relatedness between samples, and the comparison of DNA patterns. The analysis of DNA gel fingerprint images usually consists of five consecutive steps: image pre-processing, lane segmentation, band detection, normalization and fingerprint comparison. In this article, we firstly survey the main methods that have been applied in the literature in each of these stages. Secondly, we focus on lane-segmentation and band-detection algorithms (as they are the steps that usually require user intervention) and identify the seven core algorithms used for both tasks. Subsequently, we present a benchmark that includes a data set of images, the gold standards associated with those images and the tools to measure the performance of lane-segmentation and band-detection algorithms. Finally, we implement the core algorithms used both for lane segmentation and band detection, and evaluate their performance using our benchmark. As a conclusion of that study, we find that the average profile algorithm is the best starting point for lane segmentation and band detection.
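
    As a rough sketch of what an average-profile approach involves (our reading of the technique, not the authors' implementation): intensities are averaged along one image axis to form a 1-D profile, and lane boundaries or bands are then read off as extrema of that profile:

        #include <stdio.h>

        /* Collapse a grayscale gel image (rows x cols, row-major)
         * into a per-column average-intensity profile; dark bands or
         * lane gaps appear as local minima of this profile. */
        void average_profile(const unsigned char *img, int rows,
                             int cols, double *profile)
        {
            for (int x = 0; x < cols; ++x) {
                double sum = 0.0;
                for (int y = 0; y < rows; ++y)
                    sum += img[y * cols + x];
                profile[x] = sum / rows;
            }
        }

        /* Report columns that are local minima below a threshold,
         * i.e. candidate lane boundaries or band positions. */
        int find_minima(const double *p, int n, double thresh, int *out)
        {
            int k = 0;
            for (int x = 1; x + 1 < n; ++x)
                if (p[x] < thresh && p[x] <= p[x - 1] && p[x] <= p[x + 1])
                    out[k++] = x;
            return k;
        }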

  4. Financial Integrity Benchmarks

    Data.gov (United States)

    City of Jackson, Mississippi — This data compiles standard financial integrity benchmarks that allow the City to measure its financial standing. It measure the City's debt ratio and bond ratings....

  5. GeodeticBenchmark_GEOMON

    Data.gov (United States)

    Vermont Center for Geographic Information — The GeodeticBenchmark_GEOMON data layer consists of geodetic control monuments (points) that have a known position or spatial reference. The locations of these...

  6. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  7. On Big Data Benchmarking

    OpenAIRE

    Han, Rui; Lu, Xiaoyi

    2014-01-01

    Big data systems address the challenges of capturing, storing, managing, analyzing, and visualizing big data. Within this context, developing benchmarks to evaluate and compare big data systems has become an active topic for both research and industry communities. To date, most of the state-of-the-art big data benchmarks are designed for specific types of systems. Based on our experience, however, we argue that considering the complexity, diversity, and rapid evolution of big data systems, fo...

  8. Benchmarking in Foodservice Operations.

    Science.gov (United States)

    2007-11-02

    Benchmarking studies lasted from nine to twelve months, and could extend beyond that time for numerous reasons. Benchmarking was not simply data comparison, a fad, a means for reducing resources, a quick-fix program, or industrial tourism; it was a complete process.

  9. XWeB: The XML Warehouse Benchmark

    Science.gov (United States)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  10. XWeB: the XML Warehouse Benchmark

    CERN Document Server

    Mahboubi, Hadj

    2011-01-01

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  11. Benchmarking and accounting for the (private) cloud

    CERN Document Server

    Belleman, J

    2015-01-01

    During the past two years large parts of the CERN batch farm have been moved to virtual machines running on the CERN internal cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, were converted into hypervisors. Due to the large spread of the per-core performance in the farm, caused by its heterogeneous nature, it is necessary to have a good knowledge of the performance of the virtual machines. This information is used both for scheduling in the batch system and for accounting. While in the previous setup worker nodes were classified and benchmarked based on the purchase order number, for virtual batch worker nodes this is no longer possible: the information is now either hidden or hard to retrieve. Therefore we developed a new scheme to classify worker nodes according to their performance. The new scheme is flexible enough to be usable both for virtual and physical machines in the batch farm. With the new classification it is possible to ...

  12. Benchmarking biofuels

    Energy Technology Data Exchange (ETDEWEB)

    Croezen, H.; Kampman, B.; Bergsma, G.

    2012-03-15

    A sustainability benchmark for transport biofuels has been developed and used to evaluate the various biofuels currently on the market. For comparison, electric vehicles, hydrogen vehicles and petrol/diesel vehicles were also included. A range of studies as well as growing insight are making it ever clearer that biomass-based transport fuels may have just as big a carbon footprint as fossil fuels like petrol or diesel, or even bigger. At the request of Greenpeace Netherlands, CE Delft has brought together current understanding on the sustainability of fossil fuels, biofuels and electric vehicles, with particular focus on the performance of the respective energy carriers on three sustainability criteria, with the first weighing the heaviest: (1) Greenhouse gas emissions; (2) Land use; and (3) Nutrient consumption

  13. The MCNP6 Analytic Criticality Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Codes Group

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
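
    Schematically, the verification comparison such analytic suites support reduces to checking the computed multiplication factor against the exact value within a preset tolerance (our notation; the record quotes no specific tolerance):

        \left| \frac{k_{\mathrm{code}} - k_{\mathrm{analytic}}}{k_{\mathrm{analytic}}} \right| < \varepsilon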

  14. Performance of the MTR core with MOX fuel using the MCNP4C2 code.

    Science.gov (United States)

    Shaaban, Ismail; Albarhoum, Mohamad

    2016-08-01

    The MCNP4C2 code was used to simulate the MTR-22 MW research reactor and perform the neutronic analysis for a new fuel, namely a MOX (U3O8&PuO2) fuel dispersed in an Al matrix, for One Neutronic Trap (ONT) and Three Neutronic Traps (TNTs) in its core. Its new characteristics were compared to its original characteristics based on the U3O8-Al fuel. Experimental data for the neutronic parameters, including criticality, of the MTR-22 MW reactor with the original U3O8-Al fuel at nominal power were used to validate the calculated values and were found acceptable. The achieved results seem to confirm that the use of MOX fuel in the MTR-22 MW will not degrade the safe operational conditions of the reactor. In addition, the use of MOX fuel in the MTR-22 MW core reduces the (235)U enrichment of the uranium fuel and the amount of (235)U loaded in the core by about 34.84% and 15.21% for the ONT and TNTs cases, respectively.

  15. Benchmarking of performance of Mexican states with effective coverage

    Directory of Open Access Journals (Sweden)

    Rafael Lozano

    2007-01-01

    …is a key tool for the stewardship of the health system. By adopting this approach, other countries will be able to choose interventions on the basis of criteria of accessibility, effect on population health, effect on health inequalities, and the capacity to measure these effects. For this kind of subnational benchmarking of performance to succeed, the national institutions that carry it out must have sufficient authority, technical skills, resources, and independence. Benchmarking of the performance of states, provinces, or districts in a decentralised health system is important for fostering of accountability, monitoring of progress, identification of determinants of success and failure, and creation of a culture of evidence. The Mexican Ministry of Health has, since 2001, used a benchmarking approach based on the World Health Organization (WHO) concept of effective coverage of an intervention, which is defined as the proportion of potential health gain deliverable by the health system that is actually delivered. Using data collection systems, including state representative examination surveys, vital registration, and hospital discharge registries, we have monitored the delivery of 14 interventions for 2005-06. Overall effective coverage ranges from 54.0% in Chiapas, a poor state, to 65.1% in the Federal District. Effective coverage for maternal and child health interventions is substantially higher than that for interventions that target other health problems. Effective coverage for the lowest wealth quintile is 52% compared with 61% for the highest quintile. Effective coverage is closely related to public-health spending per head across states; this relation is stronger for interventions that are not related to maternal and child health than those for maternal and child health. Considerable variation also exists in effective coverage at similar amounts of spending. We discuss the
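
    Written as a formula (our paraphrase of the WHO-style definition quoted above, not notation from the paper):

        \mathrm{EC} = \frac{\text{health gain actually delivered}}{\text{potential health gain deliverable by the system}}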

  16. Benchmarking File System Benchmarking: It *IS* Rocket Science

    OpenAIRE

    Seltzer, Margo I.; Tarasov, Vasily; Bhanage, Saumitra; Zadok, Erez

    2011-01-01

    The quality of file system benchmarking has not improved in over a decade of intense research spanning hundreds of publications. Researchers repeatedly use a wide range of poorly designed benchmarks, and in most cases, develop their own ad-hoc benchmarks. Our community lacks a definition of what we want to benchmark in a file system. We propose several dimensions of file system benchmarking and review the wide range of tools and techniques in widespread use. We experimentally show that even t...

  17. Atmospheric circulation of tidally-locked exoplanets: a suite of benchmark tests for dynamical solvers

    CERN Document Server

    Heng, Kevin; Phillipps, Peter J

    2010-01-01

    The complexity of atmospheric modelling and its inherent non-linearity, together with the limited amount of data of exoplanets available, motivate model intercomparisons and benchmark tests. In the geophysical community, the Held-Suarez test is a standard benchmark for comparing dynamical core simulations of the Earth's atmosphere with different solvers, based on statistically-averaged flow quantities. In the present study, we perform analogues of the Held-Suarez test for tidally-locked exoplanets with the GFDL-Princeton Flexible Modeling System (FMS) by subjecting both the spectral and finite difference dynamical cores to a suite of tests, including the standard benchmark for Earth, a hypothetical tidally-locked Earth, a "shallow" hot Jupiter model and a "deep" model of HD 209458b. We find qualitative and quantitative agreement between the solvers for the Earth, tidally-locked Earth and shallow hot Jupiter benchmarks, but the agreement is less than satisfactory for the deep model of HD 209458b. Further inves...

  18. The KMAT: Benchmarking Knowledge Management.

    Science.gov (United States)

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  19. High performance in silico virtual drug screening on many-core processors.

    Science.gov (United States)

    McIntosh-Smith, Simon; Price, James; Sessions, Richard B; Ibarra, Amaurys A

    2015-05-01

    Drug screening is an important part of the drug development pipeline for the pharmaceutical industry. Traditional, lab-based methods are increasingly being augmented with computational methods, ranging from simple molecular similarity searches through more complex pharmacophore matching to more computationally intensive approaches, such as molecular docking. The latter simulates the binding of drug molecules to their targets, typically protein molecules. In this work, we describe BUDE, the Bristol University Docking Engine, which has been ported to the OpenCL industry standard parallel programming language in order to exploit the performance of modern many-core processors. Our highly optimized OpenCL implementation of BUDE sustains 1.43 TFLOP/s on a single Nvidia GTX 680 GPU, or 46% of peak performance. BUDE also exploits OpenCL to deliver effective performance portability across a broad spectrum of different computer architectures from different vendors, including GPUs from Nvidia and AMD, Intel's Xeon Phi and multi-core CPUs with SIMD instruction sets.
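
    A quick consistency check on the figures quoted above: sustained throughput divided by the fraction of peak implies the device's peak rate. A sketch in Python using only the two numbers given in the record (the implied peak is back-computed, not a vendor specification):

        # BUDE on a GTX 680: 1.43 TFLOP/s reported as 46% of peak.
        sustained_tflops = 1.43
        fraction_of_peak = 0.46
        peak_tflops = sustained_tflops / fraction_of_peak
        print(f"implied single-precision peak: {peak_tflops:.2f} TFLOP/s")  # ~3.11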

  20. Real-time Performance Verification of Core Protection and Monitoring System with Integrated Model for SMART Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Koo, Bon-Seung; Kim, Sung-Jin; Hwang, Dae-Hyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    In keeping with these purposes, a real-time model of the digital core protection and monitoring systems for simulator implementation was developed on the basis of the SCOPS and SCOMS algorithms. In addition, important features of the software models were explained for application to the SMART simulator, and the real-time performance of the models linked via DLL was examined for various simulation scenarios. In this paper, this performance verification of the core protection and monitoring software is performed with the integrated simulator model. Various DLL connection tests were done for software algorithm changes. In addition, typical accident scenarios of SMART were simulated with 3KEYMASTER, and the simulated results were compared with those of the DLL-linked core protection and monitoring software. Each calculational result showed good agreement.

  1. Building with Benchmarks: The Role of the District in Philadelphia's Benchmark Assessment System

    Science.gov (United States)

    Bulkley, Katrina E.; Christman, Jolley Bruce; Goertz, Margaret E.; Lawrence, Nancy R.

    2010-01-01

    In recent years, interim assessments have become an increasingly popular tool in districts seeking to improve student learning and achievement. Philadelphia has been at the forefront of this change, implementing a set of Benchmark assessments aligned with its Core Curriculum district-wide in 2004. In this article, we examine the overall context…

  2. ASSESSING THE PERFORMANCE AND ENERGY USAGE OF MULTI-CPUS, MULTI-CORE AND MANYCORE SYSTEMS: THE MMP IMAGE ENCODER CASE STUDY

    Directory of Open Access Journals (Sweden)

    Pedro M.M. Pereira

    2016-09-01

    This paper studies the performance and energy consumption of several multi-core, multi-CPUs and manycore hardware platforms and software stacks for parallel programming. It uses the Multimedia Multiscale Parser (MMP, a computationally demanding image encoder application, which was ported to several hardware and software parallel environments as a benchmark. Hardware-wise, the study assesses NVIDIA's Jetson TK1 development board, the Raspberry Pi 2, and a dual Intel Xeon E5-2620/v2 server, as well as NVIDIA's discrete GPUs GTX 680, Titan Black Edition and GTX 750 Ti. The assessed parallel programming paradigms are OpenMP, Pthreads and CUDA, and a single-thread sequential version, all running in a Linux environment. While the CUDA-based implementation delivered the fastest execution, the Jetson TK1 proved to be the most energy efficient platform, regardless of the used parallel software stack. Although it has the lowest power demand, the Raspberry Pi 2 energy efficiency is hindered by its lengthy execution times, effectively consuming more energy than the Jetson TK1. Surprisingly, OpenMP delivered twice the performance of the Pthreads-based implementation, proving the maturity of the tools and libraries supporting OpenMP.
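
    The energy-efficiency ranking above follows from energy-to-solution being power multiplied by execution time, so a low-power board can still lose on energy if it runs long enough. A toy illustration in Python; the wattages and runtimes below are invented for the example, not measurements from the paper:

        # Energy-to-solution = average power (W) x execution time (s).
        platforms = {
            "jetson_tk1":     {"power_w": 10.0, "runtime_s": 100.0},
            "raspberry_pi_2": {"power_w": 4.0,  "runtime_s": 400.0},
        }
        for name, p in platforms.items():
            print(name, "energy:", p["power_w"] * p["runtime_s"], "J")
        # The slower board consumes more energy despite its lower power draw.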

  3. Strain-induced structural defects and their effects on the electrochemical performances of silicon core/germanium shell nanowire heterostructures.

    Science.gov (United States)

    Lin, Yung-Chen; Kim, Dongheun; Li, Zhen; Nguyen, Binh-Minh; Li, Nan; Zhang, Shixiong; Yoo, Jinkyoung

    2017-01-19

    We report on strain-induced structural defect formation in core Si nanowires of a Si/Ge core/shell nanowire heterostructure and the influence of the structural defects on the electrochemical performances in lithium-ion battery anodes based on Si/Ge core/shell nanowire heterostructures. The induced structural defects consisting of stacking faults and dislocations in the core Si nanowire were observed for the first time. The generation of stacking faults in the Si/Ge core/shell nanowire heterostructure is observed to prefer settling in either only the Ge shell region or in both the Ge shell and Si core regions and is associated with the increase of the shell volume fraction. The relaxation of the misfit strain in the [112] oriented core/shell nanowire heterostructure leads to subsequent gliding of Shockley partial dislocations, preferentially forming the twins. The observation of crossover of defect formation is of great importance for understanding heteroepitaxy in radial heterostructures at the nanoscale and for building three dimensional heterostructures for the various applications. Furthermore, the effect of the defect formation on the nanomaterial's functionality is investigated using electrochemical performance tests. The Si/Ge core/shell nanowire heterostructures enhance the gravimetric capacity of lithium ion battery anodes under fast charging/discharging rates compared to Si nanowires. However, the induced structural defects hamper lithiation of the Si/Ge core/shell nanowire heterostructure.

  4. Diagnostic performance and safety of a three-dimensional 14-core systematic biopsy method.

    Science.gov (United States)

    Takeshita, Hideki; Kawakami, Satoru; Numao, Noboru; Sakura, Mizuaki; Tatokoro, Manabu; Yamamoto, Shinya; Kijima, Toshiki; Komai, Yoshinobu; Saito, Kazutaka; Koga, Fumitaka; Fujii, Yasuhisa; Fukui, Iwao; Kihara, Kazunori

    2015-03-01

    To investigate the diagnostic performance and safety of a three-dimensional 14-core biopsy (3D14PBx) method, which is a combination of the transrectal six-core and transperineal eight-core biopsy methods. Between December 2005 and August 2010, 1103 men underwent 3D14PBx at our institutions and were analysed prospectively. Biopsy criteria included a PSA level of 2.5-20 ng/mL or abnormal digital rectal examination (DRE) findings, or both. The primary endpoint of the study was diagnostic performance and the secondary endpoint was safety. We applied recursive partitioning to the entire study cohort to delineate the unique contribution of each sampling site to overall and clinically significant cancer detection. Prostate cancer was detected in 503 of the 1103 patients (45.6%). Age, family history of prostate cancer, DRE, PSA, percentage of free PSA and prostate volume were significantly and independently associated with positive biopsy results. Of the 503 cancers detected, 39 (7.8%) were clinically locally advanced (≥cT3a), 348 (69%) had a biopsy Gleason score (GS) of ≥7, and 463 (92%) met the definition of biopsy-based significant cancer. Recursive partitioning analysis showed that each sampling site contributed uniquely to both the overall and the biopsy-based significant cancer detection rate of the 3D14PBx method. The overall cancer-positive rate of each sampling site ranged from 14.5% in the transrectal far lateral base to 22.8% in the transrectal far lateral apex. As of August 2010, 210 patients (42%) had undergone radical prostatectomy, of whom 55 (26%) were found to have pathologically non-organ-confined disease, 174 (83%) had prostatectomy GS ≥7 and 185 (88%) met the definition of prostatectomy-based significant cancer. This is the first prospective analysis of the diagnostic performance of an extended biopsy method, which is a simplified version of the somewhat redundant super-extended three-dimensional 26-core biopsy. As expected, each sampling

  5. Performance evaluation of PSO and GA in PWR core loading pattern optimization

    Energy Technology Data Exchange (ETDEWEB)

    Khoshahval, F., E-mail: f_khoshahval@sbu.ac.i [Engineering Department, Shahid Beheshti University, G.C., P.O. Box 1983963113, Velenjak, Tehran (Iran, Islamic Republic of); Minuchehr, H. [Engineering Department, Shahid Beheshti University, G.C., P.O. Box 1983963113, Velenjak, Tehran (Iran, Islamic Republic of); Zolfaghari, A., E-mail: a-zolfaghari@sbu.ac.i [Engineering Department, Shahid Beheshti University, G.C., P.O. Box 1983963113, Velenjak, Tehran (Iran, Islamic Republic of)

    2011-03-15

    Research highlights: The performance of both the GA and PSO methods in optimizing a PWR core is adequate. GA appears to arrive at its final parameter value in fewer generations than PSO. The computation time for GA is higher than for PSO. The GA-2 and PSO-CFA algorithms perform better than GA-1 and PSO-IWA. - Abstract: The efficient operation and fuel management of PWRs are of utmost importance. Recently, genetic algorithm (GA) and particle swarm optimization (PSO) techniques have attracted considerable attention among modern heuristic optimization techniques. GA is a powerful optimization technique based upon the principles of natural selection and species evolution, and is gaining popularity as a design tool because of its versatility, intuitiveness and ability to solve highly non-linear, mixed-integer optimization problems. PSO refers to a relatively new family of algorithms, mainly inspired by the social behaviour patterns of organisms that live within large groups. This study addresses the application and performance comparison of the PSO and GA optimization methods for the nuclear fuel loading pattern problem. Flattening of the power inside the reactor core of the Bushehr nuclear power plant (WWER-1000 type) is chosen as the objective function to prove the validity of the algorithms. In addition, the performance of both optimization techniques is compared in terms of convergence rate and computational time. It is found that, from an evolutionary point of view, the performance of both GA and PSO is quite adequate, but GA seems to arrive at its final parameter value in fewer generations than PSO. It is also noticed that the computation time of the GA implemented in this work is considerably higher than that of PSO.
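
    For reference, the per-particle update at the heart of PSO, sketched in Python with an inertia weight w (the IWA variant named above); the coefficient values are common textbook defaults, not the settings used in the paper:

        import random

        # v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x <- x + v
        def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
            r1, r2 = random.random(), random.random()
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            return x + v, v

        x, v = pso_step(x=0.0, v=0.1, pbest=0.5, gbest=1.0)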

  6. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    Science.gov (United States)

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting for or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when selecting such a benchmark. The GCC center for infection control took some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons.

  7. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint US/Russian Progress Report for Fiscal Year 1997, Volume 4, part 4-ESADA Plutonium Program Critical Experiments: Single-Region Core Configurations

    Energy Technology Data Exchange (ETDEWEB)

    Akkurt, H.; Abdurrahman, N.M.

    1999-05-01

    The purpose of this study is to simulate and assess the findings from selected ESADA experiments. It is presented in the format prescribed by the Nuclear Energy Agency Nuclear Science Committee for material to be included in the International Handbook of Evaluated Criticality Safety Benchmark Experiments.

  8. Westinghouse Fuel Assemblies Performance after Operation in South-Ukraine NPP Mixed Core

    Energy Technology Data Exchange (ETDEWEB)

    Abdullayev, A. M.; Kulish, G. V.; Slyeptsov, O.; Slyeptsov, S.; Aleshin, Y.; Sparrow, S.; Lashevych, P.; Sokolov, D.; Latorre, Richard

    2013-09-14

    The evaluation of WWER-1000 Westinghouse fuel performance was done using the results of post-irradiation examinations of six LTAs and the WFA reload batches that have operated normally in mixed cores at South-Ukraine NPP, Unit-3 and Unit-2. The data on WFA/LTA elongation, FR growth and bow, WFA bow and twist, RCCA drag force and drag work, RCCA drop time, and FR cladding integrity, as well as the visual observation of fuel assemblies obtained during the 2006-2012 outages, was utilized. The analysis of the measured data showed that assembly growth, FR bow, irradiation growth, and Zr-1%Nb grid and ZIRLO cladding corrosion lie within the design limits. The RCCA drop time measured for the LTA/WFA is about 1.9 s at BOC and practically does not change at EOC. The measured WFA bow and twist, and the data on drag work at RCCA insertion, showed that WFA deformation in the mixed core is mostly controlled by the distortion of the Russian FAs (TVSA), which have the higher lateral stiffness. The visual inspection of WFAs carried out during the 2012 outages revealed some damage to the Zr-1%Nb grid outer strap for some WFAs during the loading sequence. The fundamental investigations performed identified the root cause of the grid outer strap deformation and led to proposed WFA design modifications for preventing damage to the SG at a 225 kg handling trip limit.

  9. Hypervelocity Impact Performance of Open Cell Foam Core Sandwich Panel Structures

    Science.gov (United States)

    Ryan, S.; Ordonez, E.; Christiansen, E. L.; Lear, D. M.

    2010-01-01

    Open cell metallic foam core sandwich panel structures are of interest for application in spacecraft micrometeoroid and orbital debris shields due to their novel form and advantageous structural and thermal performance. Repeated shocking as a result of secondary impacts upon individual foam ligaments during the penetration process acts to raise the thermal state of impacting projectiles, resulting in fragmentation, melting, and vaporization at lower velocities than with traditional shielding configurations (e.g. Whipple shield). In order to characterize the protective capability of these structures, an extensive experimental campaign was performed by the Johnson Space Center Hypervelocity Impact Technology Facility, the results of which are reported in this paper. Although not capable of competing against the protection levels achievable with leading heavy shields in use on modern high-risk vehicles (i.e. International Space Station modules), metallic foam core sandwich panels are shown to provide a substantial improvement over comparable structural panels and traditional low weight shielding alternatives such as honeycomb sandwich panels and metallic Whipple shields. A ballistic limit equation, generalized in terms of panel geometry, is derived and presented in a form suitable for application in risk assessment codes.

  10. Performance of a core of transversal skills: self-perceptions of undergraduate medical students.

    Science.gov (United States)

    Ribeiro, Laura; Severo, Milton; Ferreira, Maria Amélia

    2016-01-15

    There is a growing trend towards integrating scientific research training into undergraduate medical education. Communication, research and organisational/learning skills are core competences acquired through scientific research activity. The aim of this study was to assess the perceived performance of a core of transversal skills, related to scientific research, by Portuguese medical students. A cross-sectional study was conducted in 611 Portuguese students attending the first, fourth and sixth years of the medical course, during the same academic year. A validated questionnaire was applied for this purpose. Medical students felt confident regarding the majority of the analyzed transversal skills, particularly regarding teamwork capacity (72.7% perceived their own capacity as good). On the other hand, the perceived ability to manage information technology, manage time and search literature was classified only as sufficient by many of them. Progression over the medical course and participation in research activities were associated with increasing odds of a good perceived performance of skills such as writing skills (research activity: OR = 2.00; 95% CI: 1.34-2.97) and English proficiency (research activity: OR = 1.59; 95% CI: 1.06-2.38/final year medical students: OR = 3.63; 95% CI: 2.42-5.45). In this light, early exposure to research activities throughout undergraduate medical education is an added value for students, and the implementation of an integrated research program in the medical curriculum should be considered.

  11. Optimizing Performance of Combustion Chemistry Solvers on Intel's Many Integrated Core (MIC) Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Sitaraman, Hariswaran [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Grout, Ray W [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-06-09

    This work investigates novel algorithm designs and optimization techniques for restructuring chemistry integrators in zero- and multi-dimensional combustion solvers, which can then be effectively used on the emerging generation of Intel's Many Integrated Core/Xeon Phi processors. These processors offer increased computing performance via a large number of lightweight cores at relatively low clock speeds compared to the traditional processors (e.g. Intel Sandybridge/Ivybridge) used in current supercomputers. This style of processor can be productively used for chemistry integrators, which form a costly part of computational combustion codes, in spite of the relatively low clock speeds. Performance commensurate with traditional processors is achieved here through the combination of careful memory layout, exposing multiple levels of fine-grain parallelism, and extensive use of vendor-supported libraries (Cilk Plus and Math Kernel Libraries). Important optimization techniques for efficient memory usage and vectorization have been identified and quantified. These optimizations resulted in a speed-up by a factor of ~3 with the Intel 2013 compiler and ~1.5 with the Intel 2017 compiler for large chemical mechanisms, compared to the unoptimized version on the Intel Xeon Phi. The strategies, especially with respect to memory usage and vectorization, should also be beneficial for general-purpose computational fluid dynamics codes.

  12. Enhanced Device and Circuit-Level Performance Benchmarking of Graphene Nanoribbon Field-Effect Transistor against a Nano-MOSFET with Interconnects

    OpenAIRE

    Huei Chaeng Chin; Cheng Siong Lim; Weng Soon Wong; Danapalasingam, Kumeresan A.; Arora, Vijay K.; Michael Loong Peng Tan

    2014-01-01

    Comparative benchmarking of a graphene nanoribbon field-effect transistor (GNRFET) and a nanoscale metal-oxide-semiconductor field-effect transistor (nano-MOSFET) for applications in ultralarge-scale integration (ULSI) is reported. GNRFET is found to be distinctly superior in the circuit-level architecture. The remarkable transport properties of GNR propel it into an alternative technology to circumvent the limitations imposed by the silicon-based electronics. Budding GNRFET, using the circui...

  13. NEUTRON RADIOGRAPHY (NRAD) REACTOR 64-ELEMENT CORE UPGRADE

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2014-03-01

    The neutron radiography (NRAD) reactor is a 250 kW TRIGA® (Training, Research, Isotopes, General Atomics) Mark II tank-type research reactor currently located in the basement, below the main hot cell, of the Hot Fuel Examination Facility (HFEF) at the Idaho National Laboratory (INL). It is equipped with two beam tubes with separate radiography stations for the performance of neutron radiography irradiation on small test components. The interim critical configuration developed during the core upgrade, which contains only 62 fuel elements, has been evaluated as an acceptable benchmark experiment. The final 64-fuel-element operational core configuration of the NRAD LEU TRIGA reactor has also been evaluated as an acceptable benchmark experiment. Calculated eigenvalues differ significantly (approximately ±1%) from the benchmark eigenvalue and have demonstrated sensitivity to the thermal scattering treatment of hydrogen in the U-Er-Zr-H fuel.

  14. ASBench: benchmarking sets for allosteric discovery.

    Science.gov (United States)

    Huang, Wenkang; Wang, Guanqiao; Shen, Qiancheng; Liu, Xinyi; Lu, Shaoyong; Geng, Lv; Huang, Zhimin; Zhang, Jian

    2015-08-01

    Allostery allows for the fine-tuning of protein function. Targeting allosteric sites is gaining increasing recognition as a novel strategy in drug design. The key challenge in the discovery of allosteric sites has strongly motivated the development of computational methods and thus high-quality, publicly accessible standard data have become indispensable. Here, we report benchmarking data for experimentally determined allosteric sites through a complex process, including a 'Core set' with 235 unique allosteric sites and a 'Core-Diversity set' with 147 structurally diverse allosteric sites. These benchmarking sets can be exploited to develop efficient computational methods to predict unknown allosteric sites in proteins and reveal unique allosteric ligand-protein interactions to guide allosteric drug design.

  15. A human benchmark for language recognition

    NARCIS (Netherlands)

    Orr, R.; Leeuwen, D.A. van

    2009-01-01

    In this study, we explore a human benchmark in language recognition, for the purpose of comparing human performance to machine performance in the context of the NIST LRE 2007. Humans are categorised in terms of language proficiency, and performance is presented per proficiency. The main challenge in

  16. Deviating From the Benchmarks

    DEFF Research Database (Denmark)

    Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela

    survival? The analysis is based on a matched employer-employee dataset and covers about 17,500 startups in manufacturing and services. We adopt a new procedure to estimate individual benchmarks for the quantity and quality of initial human resources, acknowledging correlations between hiring decisions... the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be a close to irreversible...

  17. Benchmarking for Best Practice

    CERN Document Server

    Zairi, Mohamed

    1998-01-01

    Benchmarking for Best Practice uses up-to-the-minute case-studies of individual companies and industry-wide quality schemes to show how and why implementation has succeeded. For any practitioner wanting to establish best practice in a wide variety of business areas, this book makes essential reading. .It is also an ideal textbook on the applications of TQM since it describes concepts, covers definitions and illustrates the applications with first-hand examples. Professor Mohamed Zairi is an international expert and leading figure in the field of benchmarking. His pioneering work in this area l

  18. DYNAMICO, an atmospheric dynamical core for high-performance climate modeling

    Science.gov (United States)

    Dubos, Thomas; Meurdesoif, Yann; Spiga, Aymeric; Millour, Ehouarn; Fita, Lluis; Hourdin, Frédéric; Kageyama, Masa; Traore, Abdoul-Khadre; Guerlet, Sandrine; Polcher, Jan

    2017-04-01

    Institut Pierre Simon Laplace has developed a very scalable atmospheric dynamical core, DYNAMICO, based on energy-conserving finite-difference/finite-volume numerics on a quasi-uniform icosahedral-hexagonal mesh. Scalability is achieved by combining hybrid MPI/OpenMP parallelism with asynchronous I/O. This dynamical core has been coupled to radiative transfer physics tailored to the atmosphere of Saturn, allowing unprecedented simulations of the climate of this giant planet. For terrestrial climate studies, DYNAMICO is being integrated into the IPSL Earth System Model IPSL-CM. Preliminary aquaplanet and AMIP-style simulations yield reasonable results when compared to outputs from IPSL-CM5. The observed performance suggests that an order of magnitude may be gained with respect to IPSL-CM CMIP5 simulations, either in the duration of simulations or in their resolution. Longer simulations would be of interest for the study of paleoclimate, while higher resolution could improve certain aspects of the modeled climate, such as extreme events, as will be explored in the HighResMIP project. Following IPSL's strategic vision of building a unified global-regional modelling system, a fully-compressible, non-hydrostatic prototype of DYNAMICO has been developed, enabling future convection-resolving simulations. Work supported by ANR project "HEAT", grant number CE23_2014_HEAT. Dubos, T., Dubey, S., Tort, M., Mittal, R., Meurdesoif, Y., and Hourdin, F.: DYNAMICO-1.0, an icosahedral hydrostatic dynamical core designed for consistency and versatility, Geosci. Model Dev., 8, 3131-3150, doi:10.5194/gmd-8-3131-2015, 2015.

  19. The Effect of Performing Bi- and Unilateral Row Exercises on Core Muscle Activation.

    Science.gov (United States)

    Saeterbakken, A; Andersen, V; Brudeseth, A; Lund, H; Fimland, M S

    2015-11-01

    The purpose of the study was to compare core muscle activation in 3 different row exercises (free-weight bent-over row, seated cable row and machine row) performed unilaterally and bilaterally, at matched effort levels. 15 resistance-trained men (26.0±4.4 years, 81.0±9.5 kg, 1.81±0.07 m) performed the exercises in randomized order. For erector spinae and multifidus, EMG activities in unilateral machine- and cable row were 60-63% and 74-78% of the bilateral performance (P≤0.036). For external oblique, the EMG activities recorded during bilateral exercises were 37-41% of the unilateral performance (P≤0.010). In unilateral cable- and machine rows, the EMG activities in external oblique and multifidus were 50-57% and 70-73% of the free-weight row (P≤0.002). In bilateral free-weight row, EMG activity in erector spinae was greater than bilateral machine- (+34%, P=0.004) and unilateral free-weight rows (+12%, P=0.016). For rectus abdominis there were no significant differences between conditions. In conclusion, 1) free-weight row provided greater EMG activity in erector spinae (bilaterally and unilaterally) and multifidus (unilaterally) than machine row; 2) unilateral performance of exercises activated the external oblique more than bilateral performance, regardless of exercise; and 3) generally bilateral performance of exercises provided higher erector spinae and multifidus EMG activity compared to unilateral performance.

  20. Performance of the Dual-frequency Precipitation Radar on the GPM core satellite

    Science.gov (United States)

    Iguchi, Toshio; Seto, Shinta; Awaka, Jun; Meneghini, Robert; Kubota, Takuji; Oki, Riko; Chandra, Venkatchalam; Kawamoto, Nozomi

    2016-04-01

    The GPM core satellite was launched on February 28, 2014. This paper describes some of the results of precipitation measurements with the Dual-Frequency Precipitation Radar (DPR) on the GPM core satellite. The DPR, which was developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), consists of two radars: the Ku-band precipitation radar (KuPR) and the Ka-band radar (KaPR). The performance of the DPR is evaluated by comparing the level 2 products with the corresponding TRMM/PR data and surface rain measurements. The scanning geometry and footprint size of KuPR and those of PR are nearly identical. The major differences between them are the sensitivity, visiting frequency, and the rain retrieval algorithm. KuPR's sensitivity is twice as good as that of PR. The increase in sensitivity reduces the cases of missed light rain. Since relatively light rain prevails in Japan, the difference in sensitivity may cause a bias of a few percentage points. Comparisons of the rain estimates by GPM/DPR with AMeDAS rain gauge data over Japan show that annual KuPR estimates over Japan agree quite well with the rain gauge estimates, although the monthly and local statistics of these two kinds of data scatter substantially. KuPR's estimates are closer to the gauge estimates than those of TRMM/PR. Possible sources of the differences, including sampling errors, sensitivity, and the algorithm, are examined.

  1. The Lateral Compressive Buckling Performance of Aluminum Honeycomb Panels for Long-Span Hollow Core Roofs

    Directory of Open Access Journals (Sweden)

    Caiqi Zhao

    2016-06-01

    To solve the problem of critical buckling in the structural analysis and design of the new long-span hollow core roof architecture proposed in this paper (referred to as a “honeycomb panel structural system” (HSSS)), lateral compression tests and finite element analyses were employed in this study to examine the lateral compressive buckling performance of this new type of honeycomb panel with different length-to-thickness ratios. The results led to two main conclusions: (1) Under the experimental conditions that were used, honeycomb panels with the same planar dimensions but different thicknesses had the same compressive stiffness immediately before buckling, while the lateral compressive buckling load-bearing capacity initially increased rapidly with an increasing honeycomb core thickness and then approached the same limiting value; (2) The compressive stiffnesses of test pieces with the same thickness but different lengths were different, while the maximum lateral compressive buckling loads were very similar. Overall instability failure is prone to occur in long and flexible honeycomb panels. In addition, the errors between the lateral compressive buckling loads from the experiment and the finite element simulations are within 6%, which demonstrates the effectiveness of the nonlinear finite element analysis and provides a theoretical basis for future analysis and design for this new type of spatial structure.

  2. Improving the performance of heterogeneous multi-core processors by modifying the cache coherence protocol

    Science.gov (United States)

    Fang, Juan; Hao, Xiaoting; Fan, Qingwen; Chang, Zeqing; Song, Shuying

    2017-05-01

    In a heterogeneous multi-core architecture, the CPU and GPU are integrated on the same chip, which poses a new challenge for last-level cache (LLC) management. In this architecture, CPU applications and GPU applications execute concurrently and both access the last-level cache. CPU and GPU have different memory access characteristics, and therefore differ in their sensitivity to LLC capacity. For many CPU applications, a reduced share of the LLC can lead to significant performance degradation. By contrast, GPU applications can tolerate an increase in memory access latency when there is sufficient thread-level parallelism. Exploiting this latency tolerance of GPU programs, this paper presents a method that lets GPU applications access memory directly, bypassing the LLC and leaving most of the LLC space to CPU applications; this improves the performance of CPU applications without affecting the performance of GPU applications. When the CPU application is cache-sensitive and the GPU application is insensitive to the cache, the overall performance of the system is improved significantly.
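
    A toy routing policy in the spirit of that idea, sketched in Python; the interface and the sensitivity flag are invented for illustration, not taken from the paper's coherence-protocol modification:

        # Let latency-tolerant GPU requests bypass the shared LLC so that
        # cache-sensitive CPU requests keep the capacity.
        def route_request(source, gpu_cache_sensitive=False):
            """Return which level services the request: 'llc' or 'memory'."""
            if source == "gpu" and not gpu_cache_sensitive:
                return "memory"  # GPU hides the latency via thread-level parallelism
            return "llc"

        assert route_request("cpu") == "llc"
        assert route_request("gpu") == "memory"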

  3. Parallel Processing Performance on Multi-Core PC Cluster Distributing Communication Load to Multiple Paths

    Science.gov (United States)

    Fukunaga, Takafumi

    With the advent of powerful multi-core PC clusters, the computation performance of each node has increased dramatically, and this trend will continue. On the other hand, powerful network systems (Myrinet, Infiniband, etc.) are expensive, tend to increase the difficulty of programming, and degrade portability because they need dedicated libraries and protocol stacks. This paper proposes a relatively simple method to improve bandwidth-oriented parallel applications by improving communication performance without such dedicated hardware, libraries and protocol stacks, and without IEEE802.3ad (LACP). Although this proposal resembles IEEE802.3ad in its use of multiple Ethernet ports, it performs equal to or better than IEEE802.3ad without LACP switches and drivers. Moreover, while the performance of LACP is influenced by the environment (MAC addresses, IP addresses, etc.) because its distribution algorithm uses these parameters, the proposed method shows the same effect regardless of them.
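
    The load-spreading idea reduces to assigning message chunks across the available ports. A minimal sketch in Python; the port names and the round-robin policy are illustrative placeholders (the paper's actual scheduling may differ), and socket setup is omitted:

        # Round-robin assignment of message chunks to per-NIC send queues.
        def distribute(chunks, ports):
            queues = {p: [] for p in ports}
            for i, chunk in enumerate(chunks):
                queues[ports[i % len(ports)]].append(chunk)
            return queues

        print(distribute(["c0", "c1", "c2", "c3", "c4"], ["eth0", "eth1"]))
        # {'eth0': ['c0', 'c2', 'c4'], 'eth1': ['c1', 'c3']}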

  4. User and Performance Impacts from Franklin Upgrades

    Energy Technology Data Exchange (ETDEWEB)

    He, Yun (Helen)

    2009-05-10

    The NERSC flagship computer, the Cray XT4 system "Franklin", has gone through three major upgrades during the past year: a quad-core upgrade, a CLE 2.1 upgrade, and an I/O upgrade. In this paper, we discuss the various aspects of the user impact of these upgrades, such as user access, the user environment, and user issues. The performance impacts on kernel benchmarks and selected application benchmarks are also presented.

  5. Benchmarks: WICHE Region 2012

    Science.gov (United States)

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  6. Surveys and Benchmarks

    Science.gov (United States)

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  7. The IPE Database: providing information on plant design, core damage frequency and containment performance

    Energy Technology Data Exchange (ETDEWEB)

    Lehner, J.R.; Lin, C.C.; Pratt, W.T. [Brookhaven National Lab., Upton, NY (United States); Su, T.; Danziger, L. [U.S. Nuclear Regulartory Commission, No. Bethesda, MD (United States)

    1996-08-01

    A database, called the IPE Database, has been developed that stores data obtained from the Individual Plant Examinations (IPEs) which licensees of nuclear power plants have conducted in response to the Nuclear Regulatory Commission's (NRC) Generic Letter GL88-20. The IPE Database is a collection of linked files which store information about plant design, core damage frequency (CDF), and containment performance in a uniform, structured way. The information contained in the various files is based on data contained in the IPE submittals. The information extracted from the submittals and entered into the IPE Database can be manipulated so that queries regarding individual or groups of plants can be answered using the IPE Database.

  8. Family incivility and job performance: a moderated mediation model of psychological distress and core self-evaluation.

    Science.gov (United States)

    Lim, Sandy; Tai, Kenneth

    2014-03-01

    This study extends the stress literature by exploring the relationship between family incivility and job performance. We examine whether psychological distress mediates the link between family incivility and job performance. We also investigate how core self-evaluation might moderate this mediated relationship. Data from a 2-wave study indicate that psychological distress mediates the relationship between family incivility and job performance. In addition, core self-evaluation moderates the relationship between family incivility and psychological distress but not the relationship between psychological distress and job performance. The results hold while controlling for general job stress, family-to-work conflict, and work-to-family conflict. The findings suggest that family incivility is linked to poor performance at work, and psychological distress and core self-evaluation are key mechanisms in the relationship.

  9. DWEB: A Data Warehouse Engineering Benchmark

    CERN Document Server

    Darmont, Jérôme; Boussaïd, Omar

    2005-01-01

    Data warehouse architectural choices and optimization techniques are critical to decision support query performance. To facilitate these choices, the performance of the designed data warehouse must be assessed. This is usually done with the help of benchmarks, which can either help system users compare the performance of different systems, or help system engineers test the effect of various design choices. While the TPC standard decision support benchmarks address the first point, they are not tuneable enough to address the second one and fail to model different data warehouse schemas. By contrast, our Data Warehouse Engineering Benchmark (DWEB) allows various ad-hoc synthetic data warehouses and workloads to be generated. DWEB is fully parameterized to fulfill data warehouse design needs. However, two levels of parameterization keep it relatively easy to tune. Finally, DWEB is implemented as free Java software that can be interfaced with most existing relational database management systems. A sample usag...
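
    The core of such a benchmark is a parameterized generator. A minimal sketch in Python of a synthetic star-schema fact generator (DWEB itself is a Java tool and far richer; the parameter names here are invented):

        import random

        # Row count and dimension cardinalities are the tunable parameters.
        def generate_facts(n_rows, dim_cardinalities):
            for _ in range(n_rows):
                row = {dim: random.randrange(card)
                       for dim, card in dim_cardinalities.items()}
                row["measure"] = random.random()
                yield row

        for row in generate_facts(3, {"store_id": 100, "product_id": 1000}):
            print(row)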

  10. Comparison of the Structural Performance of Monolithic and Precast Reinforced Concrete Core Walls

    OpenAIRE

    Nakachi, Tadaharu

    2014-01-01

    In the core wall system in high-rise buildings, the four L-shaped core walls at the center effectively reduce seismic vibration. On the other hand, precast core walls are effective for construction because they can be built more quickly than cast-in-place core walls. In this study, a lateral loading test was conducted on a monolithic wall column simulating the corner and the area near the corner of an L-shaped core wall. The test results were compared with those of a precast wall column teste...

  11. Architecture, Design and Implementation of RC64, a Many-Core High-Performance DSP for Space Applications

    Science.gov (United States)

    Ginosar, Ran; Aviely, Peleg; Liran, Tuvia; Alon, Dov; Dobkin, Reuven; Goldberg, Michael

    2013-08-01

    RC64, a novel 64-core many-core signal processing chip, targets DSP performance of 12.8 GIPS, 100 GOPS and 12.8 single-precision GFLOPS while dissipating only 3 Watts. RC64 employs advanced DSP cores, a multi-bank shared memory and a hardware scheduler, supports DDR2 memory and communicates over five proprietary 6.4 Gbps channels. The programming model employs sequential fine-grain tasks and a separate task map to define task dependencies. RC64 is implemented as a 200 MHz ASIC on Tower 130nm CMOS technology, assembled in a hermetically sealed ceramic QFP package and qualified to the highest space standards.

  12. Seismic Performance of Precast Reinforced Concrete Core Wall with Horizontal Tied Rebars at Mid Height Level of First Story

    OpenAIRE

    Nakachi, Tadaharu

    2013-01-01

    Precast core walls are considered effective for construction because they can be built more quickly than cast-in-place core walls. Previously, we conducted a lateral loading test on a full precast wall column simulating the area near the corner of an L-shaped core wall in order to examine the seismic performance. The wall column was divided into precast columns, and horizontal tied rebars were concentrated at the second and third floor levels to connect the precast columns. In this study, a l...

  13. Concave Pd-Pt Core-Shell Nanocrystals with Ultrathin Pt Shell Feature and Enhanced Catalytic Performance.

    Science.gov (United States)

    Zhang, Ying; Bu, Lingzheng; Jiang, Kezhu; Guo, Shaojun; Huang, Xiaoqing

    2016-02-10

    One-pot creation of unique concave Pd-Pt core-shell polyhedra has been developed for the first time using an efficient approach. Due to the concave feature and ultrathin Pt shell, the created Pd-Pt core-shell polyhedra exhibit enhanced catalytic performance in both the electrooxidation of methanol and hydrogenation of nitrobenzene, as compared with commercial Pt black and Pd black catalysts.

  14. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stay confidential; the banks as clients learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much...

  15. Benchmarking health IT among OECD countries: better data for better policy.

    Science.gov (United States)

    Adler-Milstein, Julia; Ronchi, Elettra; Cohen, Genna R; Winn, Laura A Pannella; Jha, Ashish K

    2014-01-01

    To develop benchmark measures of health information and communication technology (ICT) use to facilitate cross-country comparisons and learning. The effort is led by the Organisation for Economic Co-operation and Development (OECD). Approaches to definition and measurement within four ICT domains were compared across seven OECD countries in order to identify functionalities in each domain. These informed a set of functionality-based benchmark measures, which were refined in collaboration with representatives from more than 20 OECD and non-OECD countries. We report on progress to date and remaining work to enable countries to begin to collect benchmark data. The four benchmarking domains include provider-centric electronic record, patient-centric electronic record, health information exchange, and tele-health. There was broad agreement on functionalities in the provider-centric electronic record domain (e.g., entry of core patient data, decision support), and less agreement in the other three domains, in which country representatives worked to select benchmark functionalities. Many countries are working to implement ICTs to improve healthcare system performance. Although many countries are looking to others as potential models, the lack of consistent terminology and approach has made cross-national comparisons and learning difficult. As countries develop and implement strategies to increase the use of ICTs to promote health goals, there is a historic opportunity to enable cross-country learning. To facilitate this learning and reduce the chances that individual countries flounder, a common understanding of health ICT adoption and use is needed. The OECD-led benchmarking process is a crucial step towards achieving this.

  16. THE EFFECT OF SELF-SET GRADE GOALS AND CORE SELF-EVALUATIONS ON ACADEMIC PERFORMANCE: A DIARY STUDY.

    Science.gov (United States)

    Bipp, Tanja; Kleingeld, Ad; Van Den Tooren, Marieke; Schinkel, Sonja

    2015-12-01

    The aim of this diary study was to examine the effect of self-set grade goals and core self-evaluations on academic performance. Data were collected among 59 university students (M age = 18.4 yr., SD = 0.8) in a 2-wk. exam period on up to five exam days. Multilevel analyses revealed that the individual grade goals students set for their exams were positively related to the grades they obtained for these exams. However, the goal-performance relationship only applied to students scoring high on core self-evaluations. The results of this study contribute to the understanding of the effect of self-set grade goals and core self-evaluations on academic performance and imply important practical applications to enhance academic performance.

  17. Benchmark Generation and Simulation at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Lagadapati, Mahesh [North Carolina State University (NCSU), Raleigh; Mueller, Frank [North Carolina State University (NCSU), Raleigh; Engelmann, Christian [ORNL

    2016-01-01

    The path to extreme scale high-performance computing (HPC) poses several challenges related to power, performance, resilience, productivity, programmability, data movement, and data management. Investigating the performance of parallel applications at scale on future architectures and the performance impact of different architectural choices is an important component of HPC hardware/software co-design. Simulations using models of future HPC systems and communication traces from applications running on existing HPC systems can offer an insight into the performance of future architectures. This work targets technology developed for scalable application tracing of communication events. It focuses on extreme-scale simulation of HPC applications and their communication behavior via lightweight parallel discrete event simulation for performance estimation and evaluation. Instead of simply replaying a trace within a simulator, this work promotes the generation of a benchmark from traces. This benchmark is subsequently exposed to simulation using models to reflect the performance characteristics of future-generation HPC systems. This technique provides a number of benefits, such as eliminating the data intensive trace replay and enabling simulations at different scales. The presented work features novel software co-design aspects, combining the ScalaTrace tool to generate scalable trace files, the ScalaBenchGen tool to generate the benchmark, and the xSim tool to assess the benchmark characteristics within a simulator.

  18. Benchmarking i den offentlige sektor

    DEFF Research Database (Denmark)

    Bukh, Per Nikolaj; Dietrichson, Lars; Sandalgaard, Niels

    2008-01-01

    In this article, we briefly discuss the need for benchmarking in the absence of traditional market mechanisms. We then explain in more detail what benchmarking is, drawing on four different applications of benchmarking. The regulation of utility companies is then discussed, after which...

  19. Investigating the Influences of Core Self-Evaluations, Job Autonomy, and Intrinsic Motivation on In-Role Job Performance

    Science.gov (United States)

    Joo, Baek-Kyoo; Jeung, Chang-Wook; Yoon, Hea Jun

    2010-01-01

    This study investigates the effects of core self-evaluations, job autonomy, and intrinsic motivation on employees' perceptions of their in-role job performance, based on a cross-sectional survey of 283 employees in a Fortune Global 100 company in Korea. The results suggest that employees perceived higher in-role job performance when they had…

  20. Confidential benchmarking based on multiparty computation

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Damgård, Kasper Lyneborg; Nielsen, Kurt;

    We report on the design and implementation of a system that uses multiparty computation to enable banks to benchmark their customers' confidential performance data against a large representative set of confidential performance data from a consultancy house. The system ensures that both the banks' and the consultancy house's data stay confidential; the banks as clients learn nothing but the computed benchmarking score. In the concrete business application, the developed prototype helps Danish banks to find the most efficient customers among a large and challenging group of agricultural customers with too much debt. We propose a model based on linear programming for doing the benchmarking and implement it using the SPDZ protocol by Damgård et al., which we modify using a new idea that allows clients to supply data and get output without having to participate in the preprocessing phase and without keeping...
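
    One plausible reading of "benchmarking via linear programming" is a DEA-style efficiency score. A sketch in Python/SciPy of an input-oriented DEA LP for one unit; the data matrix and the exact formulation are illustrative assumptions, and the multiparty-computation layer (SPDZ) is not reproduced here:

        import numpy as np
        from scipy.optimize import linprog

        X = np.array([[2.0, 3.0, 4.0],   # inputs, one column per unit
                      [1.0, 2.0, 1.0]])
        Y = np.array([[1.0, 2.0, 1.5]])  # outputs, one column per unit
        n = X.shape[1]
        c = np.zeros(n + 1); c[0] = 1.0                  # minimize theta
        A_ub = np.vstack([
            np.hstack([-X[:, [0]], X]),                  # sum lam*x_j <= theta*x_0
            np.hstack([np.zeros((Y.shape[0], 1)), -Y]),  # sum lam*y_j >= y_0
        ])
        b_ub = np.concatenate([np.zeros(X.shape[0]), -Y[:, 0]])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
        print("efficiency score of unit 0:", round(res.x[0], 3))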

  1. Radiography benchmark 2014

    Energy Technology Data Exchange (ETDEWEB)

    Jaenisch, G.-R., E-mail: Gerd-Ruediger.Jaenisch@bam.de; Deresch, A., E-mail: Gerd-Ruediger.Jaenisch@bam.de; Bellon, C., E-mail: Gerd-Ruediger.Jaenisch@bam.de [Federal Institute for Materials Research and Testing, Unter den Eichen 87, 12205 Berlin (Germany); Schumm, A.; Lucet-Sanchez, F.; Guerin, P. [EDF R and D, 1 avenue du Général de Gaulle, 92141 Clamart (France)

    2015-03-31

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, only evaluating the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  2. Benchmark for Evaluating Moving Object Indexes

    DEFF Research Database (Denmark)

    Chen, Su; Jensen, Christian Søndergaard; Lin, Dan

    2008-01-01

    that targets techniques for the indexing of the current and near-future positions of moving objects. This benchmark enables the comparison of existing and future indexing techniques. It covers important aspects of such indexes that have not previously been covered by any benchmark. Notable aspects covered include update efficiency, query efficiency, concurrency control, and storage requirements. Next, the paper applies the benchmark to half a dozen notable moving-object indexes, thus demonstrating the viability of the benchmark and offering new insight into the performance properties of the indexes.
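
    The two headline metrics named above suggest a simple harness shape. A sketch in Python; 'index' stands for any object exposing update() and range_query(), and both the interface and the workload mix are placeholders, not the benchmark's actual specification:

        import time

        def run_benchmark(index, updates, queries):
            t0 = time.perf_counter()
            for oid, pos in updates:
                index.update(oid, pos)      # position updates
            t1 = time.perf_counter()
            for box in queries:
                index.range_query(box)      # range queries
            t2 = time.perf_counter()
            return {"updates_per_s": len(updates) / (t1 - t0),
                    "queries_per_s": len(queries) / (t2 - t1)}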

  3. Features and technology of enterprise internal benchmarking

    Directory of Open Access Journals (Sweden)

    A.V. Dubodelova

    2013-06-01

    The aim of the article. The aim of the article is to generalize the characteristics, objectives, and advantages of internal benchmarking, and to formulate the sequence of stages of internal benchmarking technology, which is focused on continuous improvement of enterprise processes by implementing existing best practices. The results of the analysis. In a crisis business environment, domestic enterprises have to focus on the best success factors of their structural units, using standardized assessment of their performance and of their innovative experience in practice. Internal benchmarking is a modern method of satisfying those needs; according to Bain & Co, internal benchmarking is one of the three most common methods of business management. The features and benefits of benchmarking are defined in the article, and the sequence and methodology of implementation of the individual stages of benchmarking projects are formulated. The authors define benchmarking as a strategic orientation toward best achievement by comparing performance and working methods with a standard. It covers the study of the research, production organization, distribution, management and marketing methods of reference objects in order to identify innovative practices and implement them in a particular business. Developing benchmarking at domestic enterprises requires analysis of its theoretical foundations and practical experience. Selecting the best experience helps to develop recommendations for its application in practice. It is also essential to classify its types, identify its characteristics, study appropriate areas of use, and develop an implementation methodology. The objectives of internal benchmarking include: promoting research into, and the establishment of, minimum acceptable levels of efficiency for the processes and activities of the enterprise; and identification of current problems and areas that need improvement without the involvement of outside experience

  4. Heat Removal Performance of Hybrid Control Rod for Passive In-Core Cooling System

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyung Mo; Jeong, Yeong Shin; Kim, In Guk; Bang, In Cheol [UNIST, Ulsan (Korea, Republic of)

    2015-10-15

    Two-phase closed heat transfer devices can be divided into thermosyphon heat pipes and capillary wicked heat pipes, which use gravitational force or capillary pumping pressure, respectively, as the driving force for circulating the working fluid. If there is a temperature difference between the reactor core and the ultimate heat sink, decay heat removal and reactor shutdown are possible in any accident condition without external power sources. To apply the hybrid control rod to commercial nuclear power plants, modelling its behaviour over various parameters is the most important task. Its geometry is also unique: neutron absorber material and working fluid coexist within a cladding material having an annular vapor path. Although thermosyphon heat pipes (THP) and wicked heat pipes (WHP) show high heat transfer coefficients for a limited space, their maximum heat removal capacity is restricted by several phenomena arising from their heat transfer mechanism. The existing correlations must therefore be validated for the annular vapor path thermosyphon (ATHP), which has a different wetted perimeter and heated diameter. The effects of the inner structure and the fill ratio of the working fluid on the thermal performance of the heat pipe have not been investigated. As a first step in the development of the hybrid heat pipe, an ATHP containing a neutron absorber in a concentric thermosyphon (CTHP) was prepared, and the thermal performance of the annular thermosyphon was studied experimentally. The heat transfer characteristics and flooding limit of the annular vapor path thermosyphon were studied experimentally to model the performance of the hybrid control rod. The following results were obtained: (1) The annular vapor path thermosyphon showed better evaporation heat transfer due to the enhanced convection between the adiabatic and condenser sections. (2) The effect of fill ratio on the heat transfer characteristics was negligible. (3) Existing correlations for the flooding limit of a thermosyphon could not reflect the annular vapor

  5. Verification of ARES transport code system with TAKEDA benchmarks

    Science.gov (United States)

    Zhang, Liang; Zhang, Bin; Zhang, Penghe; Chen, Mengteng; Zhao, Jingchang; Zhang, Shun; Chen, Yixue

    2015-10-01

    Neutron transport modeling and simulation are central to many areas of nuclear technology, including reactor core analysis, radiation shielding and radiation detection. In this paper the series of TAKEDA benchmarks are modeled to verify the criticality calculation capability of ARES, a discrete ordinates neutral particle transport code system. The SALOME platform is coupled with ARES to provide geometry modeling and mesh generation functions. The Koch-Baker-Alcouffe parallel sweep algorithm is applied to accelerate the traditional transport calculation process. The results show that the eigenvalues calculated by ARES are in excellent agreement with the reference values presented in NEACRP-L-330, with a difference of less than 30 pcm except for the first case of model 3. Additionally, ARES provides accurate flux distributions compared to reference values, with a deviation of less than 2% for region-averaged fluxes in all cases. All of this confirms the feasibility of the ARES-SALOME coupling and demonstrates that ARES performs well in criticality calculations.
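
    For reference, a pcm (per cent mille) is 1e-5 in reactivity; a minimal sketch of the comparison convention used above (one common definition, not code from ARES):

        def diff_pcm(k_calc, k_ref):
            """Relative eigenvalue difference in pcm (1 pcm = 1e-5)."""
            return (k_calc - k_ref) / k_ref * 1e5

        # e.g. a calculated k-eff of 0.97710 against a reference of 0.97740
        print(f"{diff_pcm(0.97710, 0.97740):+.1f} pcm")  # about -30.7 pcm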

  6. Effect of Core Training Program on Physical Functional Performance in Female Soccer Players

    Science.gov (United States)

    Taskin, Cengiz

    2016-01-01

    The purpose of this study was to determine the effect of core training program on speed, acceleration, vertical jump, and standing long jump in female soccer players. A total of 40 female soccer players volunteered to participate in this study. They were divided randomly into 1 of 2 groups: core training group (CTG; n = 20) and control group (CG;…

  7. Improvement of Core Performance by Introduction of Moderators in a Blanket Region of Fast Reactors

    Directory of Open Access Journals (Sweden)

    Toshio Wakabayashi

    2013-01-01

    Full Text Available An application of a deuteride moderator to fast reactor cores is proposed for power flattening, which can mitigate thermal spikes and alleviate the decrease in breeding ratio that sometimes occurs when a hydrogen moderator is applied. Zirconium deuteride is employed in the form of pin arrays at the innermost rows of the radial blanket fuel assemblies, which work as a reflector to flatten the radial power distribution in the outer core region of MONJU. The power flattening can be utilized to increase the core-average burn-up by extending operational time. The core characteristics have been evaluated with the continuous-energy Monte Carlo code MVP and the JENDL-3.3 cross-section library. The results indicate that the discharged fuel burn-up can be increased by about 7% relative to the case with no moderator in the blanket region, owing to the power flattening, when the number of deuteride moderator pins is 61. Core characteristics and core safety parameters such as void reactivity, Doppler coefficient, and the reactivity insertion occurring upon dissolution of the deuteride were evaluated. No serious drawback appeared from the viewpoints of core characteristics and core safety.

  8. Benchmarking ICRF simulations for ITER

    Energy Technology Data Exchange (ETDEWEB)

    R. V. Budny, L. Berry, R. Bilato, P. Bonoli, M. Brambilla, R.J. Dumont, A. Fukuyama, R. Harvey, E.F. Jaeger, E. Lerche, C.K. Phillips, V. Vdovin, J. Wright, and members of the ITPA-IOS

    2010-09-28

    Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high-performance baseline (5.3 T, 15 MA) DT H-mode plasma. The others are for half-field, half-current plasmas of interest for the pre-activation phase, with the bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by seven groups to predict the ICRF electromagnetic fields and heating profiles. Approximate agreement is achieved for the predicted heating power partitions for the DT and He4 cases. Profiles of the heating powers and electromagnetic fields are compared.

  9. MoS2/Carbon Nanotube Core-Shell Nanocomposites for Enhanced Nonlinear Optical Performance.

    Science.gov (United States)

    Zhang, Xiaoyan; Selkirk, Andrew; Zhang, Saifeng; Huang, Jiawei; Li, Yuanxin; Xie, Yafeng; Dong, Ningning; Cui, Yun; Zhang, Long; Blau, Werner J; Wang, Jun

    2017-03-08

    Nanocomposites of layered MoS2 and multi-walled carbon nanotubes (CNTs) with core-shell structure were prepared by a simple solvothermal method. The formation of MoS2 nanosheets on the surface of coaxial CNTs has been confirmed by scanning electron microscopy, transmission electron microscopy, absorption spectrum, Raman spectroscopy, and X-ray photoelectron spectroscopy. Enhanced third-order nonlinear optical performances were observed for both femtosecond and nanosecond laser pulses over a broad wavelength range from the visible to the near infrared, compared to those of MoS2 and CNTs alone. The enhancement can be ascribed to the strong coupling effect and the photoinduced charge transfer between MoS2 and CNTs. This work affords an efficient way to fabricate novel CNTs based nanocomposites for enhanced nonlinear light-matter interaction. The versatile nonlinear properties imply a huge potential of the nanocomposites in the development of nanophotonic devices, such as mode-lockers, optical limiters, or optical switches.

  10. Performance Analysis of FEM Algorithms on GPU and Many-Core Architectures

    KAUST Repository

    Khurram, Rooh

    2015-04-27

    The roadmaps of the leading supercomputer manufacturers are based on hybrid systems, which consist of a mix of conventional processors and accelerators. This trend is mainly due to the fact that the power consumption cost of future CPU-only exascale systems would be unsustainable; thus accelerators such as graphics processing units (GPUs) and many-integrated-core (MIC) devices will likely be an integral part of the TOP500 (http://www.top500.org/) supercomputers beyond 2020. The emerging supercomputer architecture will bring new challenges for code developers. Continuum mechanics codes will be particularly affected, because traditional synchronous implicit solvers will probably not scale on hybrid exascale machines. In a previous study [1], we reported on the performance of a conjugate gradient based mesh motion algorithm [2] on Sandy Bridge, Xeon Phi, and K20c. In the present study we report on a comparative study of finite element codes, using the PETSc and AmgX solvers on CPUs and GPUs, respectively [3,4]. We believe this study will be a good starting point for FEM code developers who are contemplating a CPU-to-accelerator transition.
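
    For context, here is a minimal NumPy sketch of the (unpreconditioned) conjugate gradient recurrence at the heart of the solvers benchmarked above; production libraries such as PETSc and AmgX implement the same iteration with preconditioning and architecture-specific data structures.

        import numpy as np

        def cg(A, b, tol=1e-8, max_iter=1000):
            """Solve A x = b for symmetric positive definite A."""
            x = np.zeros_like(b)
            r = b - A @ x                  # residual
            p = r.copy()                   # search direction
            rs = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        # small SPD test problem: 1-D Laplacian
        n = 100
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        x = cg(A, np.ones(n))
        print(np.linalg.norm(A @ x - np.ones(n)))   # ~1e-8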

  11. IPE Data Base: Plant design, core damage frequency and containment performance information

    Energy Technology Data Exchange (ETDEWEB)

    Lehner, J.; Lin, C.C.; Pratt, W.T. [Brookhaven National Lab., Upton, NY (United States); Su, T.; Danziger, L. [Nuclear Regulatory Commission, Rockville, MD (United States)

    1995-12-31

    This data base stores data obtained from the Individual Plant Examinations (IPEs) which licensees of nuclear power plants have conducted in response to NRC's Generic Letter GL88-20. The IPE Data Base is a collection of linked files which store information about plant design, core damage frequency, and containment performance in a uniform, structured way. The information contained in the various files is based on data contained in the IPE submittals. The information extracted from the submittals and entered into the IPE Data Base can be manipulated so that queries regarding individual plants or groups of plants can be answered using the IPE Data Base. The IPE Data Base supports detailed inquiries into the characteristics of individual plants or classes of plants. Progress has been made on the IPE Data Base and it is largely complete. Recent focus has been the development of a user-friendly version which is menu driven and allows the user to pose queries of varying complexity easily, without the need to become familiar with particular data base formats or conventions such as those of DBase IV or Microsoft Access. The user can obtain the desired information by quickly moving through a series of on-screen menus and "clicking" on appropriate choices. In this way even a first-time user can benefit from the large amount of information stored in the IPE Data Base without the need of a learning period.

  12. Study of Various Factors Affecting Performance of Multi-Core Processors

    Directory of Open Access Journals (Sweden)

    Nitin Chaturvedi

    2013-07-01

    Full Text Available Advances in integrated circuit processing allow for more microprocessor design options. As chip multiprocessor (CMP) systems become the predominant topology for leading microprocessors, critical components of the system are now integrated on a single chip. This enables sharing of computation resources that was not previously possible. In addition, the virtualization of these computation resources exposes the system to a mix of diverse and competing workloads. On-chip cache memory is a resource of primary concern, as it can be dominant in controlling overall throughput. This paper presents an analysis of various parameters affecting the performance of multi-core architectures: varying the number of cores, changing the L2 cache size, and varying the directory size from 64 to 2048 entries on 4-node, 8-node, 16-node and 64-node chip multiprocessors. This in turn presents an open area of research on multi-core processors with private/shared last-level caches, as the future trend seems to be toward tiled architectures executing multiple parallel applications with optimized silicon area utilization and excellent performance.

  13. Uncertainties of the KIKO3D-ATHLET calculations using the Kalinin-3 benchmark (Phase II) data

    Energy Technology Data Exchange (ETDEWEB)

    Panka, Istvan; Hegyi, Gyoergy; Maraczy, Csaba; Kereszturi, Andras [Hungarian Academy of Sciences, Centre for Energy Research, Budapest (Hungary). Reactor Analysis Dept.

    2016-09-15

    The best-estimate simulation of three-dimensional phenomena in nuclear reactor cores requires the use of coupled neutron physics and thermal-hydraulics calculations. However, these analyses should be supplemented by a survey of the corresponding uncertainties. In this paper the uncertainties of the coupled KIKO3D-ATHLET calculations are presented for a VVER-1000 type core using the OECD NEA Kalinin-3 (Phase II) benchmark data, although only the neutronic uncertainties are considered and further simplifications are applied and discussed. Additionally, this study has been performed in conjunction with the OECD NEA UAM benchmark. In the first part of the paper, the uncertainties of the effective multiplication factor, the assembly-wise radial power distribution, the axial power distribution, the rod worth, etc. are presented at steady state. After that, some uncertainties of the transient calculations are discussed for the considered transient, the switch-off of one Main Circulation Pump (MCP).
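
    The record does not spell out the propagation method, but uncertainty surveys of this kind are commonly performed by statistical sampling: perturb the uncertain input data, rerun the coupled model, and report the spread of the outputs. A schematic sketch, with run_core_model standing in (hypothetically) for the coupled calculation:

        import numpy as np

        rng = np.random.default_rng(42)

        def run_core_model(xs_perturbation):
            # placeholder for one coupled neutronics/thermal-hydraulics run
            return 1.0000 + 0.5 * xs_perturbation   # k-eff response, illustrative

        n_samples = 200
        samples = rng.normal(0.0, 0.005, n_samples)  # e.g. cross-section factors
        keff = np.array([run_core_model(s) for s in samples])
        print(f"k-eff = {keff.mean():.5f} +/- {keff.std(ddof=1):.5f} (1 sigma)")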

  14. Benchmarking of Heavy Ion Transport Codes

    Energy Technology Data Exchange (ETDEWEB)

    Remec, Igor [ORNL; Ronningen, Reginald M. [Michigan State University, East Lansing; Heilbronn, Lawrence [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in designing and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  15. Selecting indicators for international benchmarking of radiotherapy centres

    NARCIS (Netherlands)

    van Lent, W.A.M.; de Beer, R. D.; van Triest, B.; van Harten, Willem H.

    2013-01-01

    Introduction: Benchmarking can be used to improve hospital performance. It is however not easy to develop a concise and meaningful set of indicators on aspects related to operations management. We developed an indicator set for managers and evaluated its use in an international benchmark of radiotherapy centres.

  16. Selecting indicators for international benchmarking of radiotherapy centres

    NARCIS (Netherlands)

    Lent, van W.A.M.; Beer, de R. D.; Triest, van B.; Harten, van W.H.

    2013-01-01

    Introduction: Benchmarking can be used to improve hospital performance. It is however not easy to develop a concise and meaningful set of indicators on aspects related to operations management. We developed an indicator set for managers and evaluated its use in an international benchmark of radiotherapy centres.

  17. A Protein Classification Benchmark collection for machine learning

    NARCIS (Netherlands)

    Sonego, P.; Pacurar, M.; Dhir, S.; Kertész-Farkas, A.; Kocsor, A.; Gáspári, Z.; Leunissen, J.A.M.; Pongor, S.

    2007-01-01

    Protein classification by machine learning algorithms is now widely used in structural and functional annotation of proteins. The Protein Classification Benchmark collection (http://hydra.icgeb.trieste.it/benchmark) was created in order to provide standard datasets on which the performance of machine learning methods can be compared.

  18. Energy benchmarking of South Australian WWTPs.

    Science.gov (United States)

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.
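
    A toy sketch of the benchmarking logic described above: express each plant's consumption per population equivalent (kWh/PE/a) and compare it against target and guide values. The thresholds and plant figures below are placeholders, not the German benchmark values.

        def specific_energy(annual_kwh, population_equivalents):
            return annual_kwh / population_equivalents

        def rate(kwh_per_pe, target=25.0, guide=35.0):   # assumed thresholds
            if kwh_per_pe <= target:
                return "meets target value"
            if kwh_per_pe <= guide:
                return "meets guide value - some optimisation potential"
            return "above guide value - detailed energy assessment advised"

        plants = {"Plant A": (2.1e6, 60000), "Plant B": (5.5e6, 110000)}
        for name, (kwh, pe) in plants.items():
            s = specific_energy(kwh, pe)
            print(f"{name}: {s:.1f} kWh/PE/a -> {rate(s)}")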

  19. Development of High Performance Composite Foam Insulation with Vacuum Insulation Cores

    Energy Technology Data Exchange (ETDEWEB)

    Biswas, Kaushik [ORNL; Desjarlais, Andre Omer [ORNL; SmithPhD, Douglas [NanoPore, Inc.; LettsPhD, John [Firestone Building Products; YaoPhD, Jennifer [Firestone Building Products

    2016-01-01

    Development of a high-performance thermal insulation (thermal resistance or R-value per inch of R-12 hr-ft2-°F/Btu-in or greater), with twice the thermal resistance of state-of-the-art commercial insulation materials (about R-6/inch for foam insulation), promises a transformational impact in the area of building insulation. In 2010, in the US, the building envelope-related primary energy consumption was 15.6 quads, of which 5.75 quads were due to opaque wall and roof sections; the total US consumption (building, industrial and transportation) was 98 quads. In other words, the wall and roof contribution was almost 6% of the entire US primary energy consumption. Building energy modeling analyses have shown that adding insulation to increase the R-value of the external walls of residential buildings by R-10 to R-20 (hr-ft2-°F/Btu) can yield savings of 38-50% in wall-generated heating and cooling loads. Adding R-20 will require substantial thicknesses of current commercial insulation materials, often requiring significant (and sometimes cost-prohibitive) alterations to existing buildings. This article describes the development of a next-generation composite insulation with a target thermal resistance of R-25 for a 2-inch-thick board (R-12/inch or higher). The composite insulation will contain vacuum insulation cores, which are nominally R-35-40/inch, encapsulated in polyisocyanurate foam. A recently developed variant of vacuum insulation, called modified atmosphere insulation (MAI), was used in this research. Some background information on the thermal performance and distinguishing features of MAI is provided. Technical details of the composite insulation development and manufacturing as well as laboratory evaluation of prototype insulation boards are presented.
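
    A back-of-envelope sketch of the series (centre-of-panel) R-value of such a board, a vacuum-insulation core sandwiched in polyiso foam; the layer thicknesses and per-inch R-values are illustrative assumptions, and real boards lose some performance to edge effects and foam borders.

        def series_r(layers):
            """layers: (thickness_in, r_per_inch) pairs; total R in hr-ft2-F/Btu."""
            return sum(t * r for t, r in layers)

        board = [
            (0.5, 6.0),    # polyiso facing
            (1.0, 37.0),   # MAI core (nominally R-35-40/inch)
            (0.5, 6.0),    # polyiso facing
        ]
        total = series_r(board)
        thickness = sum(t for t, _ in board)
        print(f"R-{total:.0f} over {thickness:.0f} in -> R-{total / thickness:.1f}/inch")
        # ~R-43 over 2 in at panel centre, comfortably above the R-25 target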

  20. Algorithm and Architecture Independent Benchmarking with SEAK

    Energy Technology Data Exchange (ETDEWEB)

    Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.; Kang, Seung-Hwa; Kerbyson, Darren J.; Hoisie, Adolfy; Cross, Joseph

    2016-05-23

    Many applications of high-performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation of those solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future-proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.

  1. Operational benchmarking of Japanese and Danish hospitals

    DEFF Research Database (Denmark)

    Traberg, Andreas; Itoh, Kenji; Jacobsen, Peter

    2010-01-01

    This benchmarking model is designed as an integration of three organizational dimensions suited to the healthcare sector. The model incorporates posterior operational indicators and evaluates performance upon aggregation. The model is tested on seven cases from Japan and Denmark. Japanese...

  2. Benchmark Experiment for Beryllium Slab Samples

    Institute of Scientific and Technical Information of China (English)

    NIE; Yang-bo; BAO; Jie; HAN; Rui; RUAN; Xi-chao; REN; Jie; HUANG; Han-xiong; ZHOU; Zu-ying

    2015-01-01

    In order to validate the evaluated nuclear data on beryllium, a benchmark experiment has been performed at the China Institute of Atomic Energy (CIAE). Neutron leakage spectra from pure beryllium slab samples (10 cm × 10 cm × 11 cm) were measured at 61° and 121° using the time-of-flight method.

  3. Economic performance assessment of model predictive control (MPC) based on LQG benchmarking

    Institute of Scientific and Technical Information of China (English)

    赵超; 张登峰; 许巧玲; 李学来

    2012-01-01

    With the goals of optimal performance, energy conservation and cost-effectiveness of process operations in industry, economic performance assessment of advanced process control has received great attention in both academia and industry. Controller performance monitoring and assessment are necessary to assure the effectiveness of model predictive control systems and consequently safe and profitable plant operation. An approach to economic performance assessment of a model predictive control system is presented. The method builds on steady-state economic optimization techniques and uses the linear quadratic Gaussian (LQG) benchmark, rather than conventional minimum variance control (MVC), to estimate the potential for variance reduction. LQG control is a more practical performance benchmark than MVC for performance assessment, since it considers both input variance and output variance, and it thus provides a desirable basis for determining the theoretical maximum economic benefit potential arising from variability reduction. Combining the LQG benchmark directly with the benefit potential of the MPC control system, both the economic benefits and the optimal operating condition can be obtained by solving the economic optimization problem. The proposed algorithm is illustrated by a simulated example of the Shell standard problem.
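
    A compact numerical sketch (illustrative, not the paper's algorithm) of how an LQG benchmark curve can be traced for a linear plant: for each input weight lambda, solve the control and estimation Riccati equations and evaluate the closed-loop input and output variances; the resulting (Var u, Var y) pairs bound the achievable performance. The plant matrices and noise covariances below are made-up numbers.

        import numpy as np
        from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

        A = np.array([[0.9, 0.2], [0.0, 0.7]])   # stable SISO plant
        B = np.array([[0.0], [1.0]])
        C = np.array([[1.0, 0.0]])
        W = 0.01 * np.eye(2)                     # process noise covariance
        V = np.array([[0.01]])                   # measurement noise covariance

        def lqg_variances(lam):
            # LQR gain for J = E[y^2] + lam * E[u^2]
            P = solve_discrete_are(A, B, C.T @ C, lam * np.eye(1))
            K = np.linalg.inv(lam * np.eye(1) + B.T @ P @ B) @ (B.T @ P @ A)
            # steady-state Kalman (predictor) gain
            S = solve_discrete_are(A.T, C.T, W, V)
            L = A @ S @ C.T @ np.linalg.inv(C @ S @ C.T + V)
            # closed loop in coordinates z = [x; x - xhat]
            F = np.block([[A - B @ K, B @ K], [np.zeros((2, 2)), A - L @ C]])
            N = np.block([[W, W], [W, W + L @ V @ L.T]])
            Sig = solve_discrete_lyapunov(F, N)
            Sxx, Sxe, See = Sig[:2, :2], Sig[:2, 2:], Sig[2:, 2:]
            var_y = (C @ Sxx @ C.T).item()
            var_u = (K @ (Sxx - Sxe - Sxe.T + See) @ K.T).item()
            return var_u, var_y

        for lam in (0.1, 1.0, 10.0):             # sweeping lam traces the curve
            vu, vy = lqg_variances(lam)
            print(f"lambda={lam:5.1f}: Var(u)={vu:.4f}, Var(y)={vy:.4f}")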

  4. Engineering high-performance Pd core-MgO porous shell nanocatalysts via heterogeneous gas-phase synthesis.

    Science.gov (United States)

    Singh, Vidyadhar; Cassidy, Cathal; Abild-Pedersen, Frank; Kim, Jeong-Hwan; Aranishi, Kengo; Kumar, Sushant; Lal, Chhagan; Gspan, Christian; Grogger, Werner; Sowwan, Mukhles

    2015-08-28

    We report on the design and synthesis of high performance catalytic nanoparticles with a robust geometry via magnetron-sputter inert-gas condensation. Sputtering of Pd and Mg from two independent neighbouring targets enabled heterogeneous condensation and growth of nanoparticles with controlled Pd core-MgO porous shell structure. The thickness of the shell and the number of cores within each nanoparticle could be tailored by adjusting the respective sputtering powers. The nanoparticles were directly deposited on glassy carbon electrodes, and their catalytic activity towards methanol oxidation was examined by cyclic voltammetry. The measurements indicated that the catalytic activity was superior to conventional bare Pd nanoparticles. As confirmed by electron microscopy imaging and supported by density-functional theory (DFT) calculations, we attribute the improved catalytic performance primarily to inhibition of Pd core sintering during the catalytic process by the metal-oxide shell.

  5. Kinetic parameters study based on burn-up for improving the performance of research reactor equilibrium core

    Directory of Open Access Journals (Sweden)

    Muhammad Atta

    2014-01-01

    Full Text Available In this study the kinetic parameters, effective delayed neutron fraction (βeff) and prompt neutron generation time (Λ), have been investigated at different burn-up stages for a research reactor equilibrium core utilizing low-enriched-uranium high-density fuel (U3Si2-Al fuel with 4.8 g/cm3 of uranium). Results have been compared with the reference operating core of Pakistan Research Reactor-1. It was observed that with increasing fuel burn-up, the effective delayed neutron fraction decreases while the prompt neutron generation time increases; overall, the ratio βeff/Λ decreases with increasing burn-up. The prompt neutron generation time Λ in the core under study is lower than in the reference operating core at all burn-up steps due to the harder spectrum, while βeff is larger than in the reference operating core due to the smaller core size. Calculations were performed with the help of the computer codes WIMSD/4 and CITATION.

  6. Site-specific carbon deposition for hierarchically ordered core/shell-structured graphitic carbon with remarkable electrochemical performance.

    Science.gov (United States)

    Lv, Yingying; Wu, Zhangxiong; Qian, Xufang; Fang, Yin; Feng, Dan; Xia, Yongyao; Tu, Bo; Zhao, Dongyuan

    2013-10-01

    A fascinating core-shell-structured graphitic carbon material composed of an ordered microporous core and a uniform mesoporous shell is fabricated for the first time through a site-specific chemical vapor deposition process, using a nanozeolite@mesostructured silica composite molecular sieve as the template. The mesostructure-directing agent cetyltrimethylammonium bromide in the shell of the template can be either burned off or carbonized, so that it is successfully utilized as a pore switch to turn the shell of the template "on" or "off" to allow selective carbon deposition. The preferred carbon deposition process can be performed only in the inner microporous zeolite cores or just within the outer mesoporous shells, resulting in a zeolite-like ordered microporous carbon or a hollow mesoporous carbon. Full carbon deposition in the template leads to the new core-shell-structured microporous@mesoporous carbon with a nanographene-constructed framework for fast electron transport, a microporous nanocore with large surface area for high-capacity storage of lithium ions, and a mesoporous shell with highly opened mesopores serving as a transport layer for lithium ions and as electron channels to access the inner cores. The ordered micropores are protected by the mesoporous shell, avoiding pore blockage upon the formation of solid electrolyte interphase layers. Such a unique core-shell-structured microporous@mesoporous carbon material represents a newly established lithium ion storage model, demonstrating high reversible energy storage, excellent rate capability, and long cyclic stability.

  7. Validation of VHTRC calculation benchmark of critical experiment using the MCB code

    Directory of Open Access Journals (Sweden)

    Stanisz Przemysław

    2016-01-01

    Full Text Available The calculation benchmark problem Very High Temperature Reactor Critical (VHTRC), a pin-in-block type core critical assembly, has been investigated with the Monte Carlo Burnup (MCB) code in order to validate the latest version of the nuclear data library based on the ENDF format. The benchmark was executed on the basis of the VHTRC benchmark available from the International Handbook of Evaluated Reactor Physics Benchmark Experiments. This benchmark is useful for verifying the discrepancies in keff values between various libraries and experimental values, which helps to improve the accuracy of neutron transport calculations and may assist in designing high-performance commercial VHTRs. Almost all safety parameters depend on the accuracy of the neutron transport calculation results, which in turn depend on the accuracy of the nuclear data libraries; thus, evaluation of the libraries' applicability to VHTR modelling is an important subject. We compared the numerical experiment results with experimental measurements using two versions of the available nuclear data (ENDF-B-VII.1 and JEFF-3.2) prepared for the required temperatures. Calculations have been performed with the MCB code, which allows a very precise representation of the complex VHTR geometry, including the double heterogeneity of the fuel element. In this paper, together with the impact of nuclear data, we also discuss the impact of different lattice modelling inside the fuel pins. The observed discrepancies in keff show good agreement with each other and with the experimental data within the 1 sigma range of the experimental uncertainty. Because some propagated discrepancies were observed, we proposed appropriate corrections to the experimental constants which can improve the reactivity coefficient dependency. The obtained results confirm the accuracy of the new nuclear data libraries.

  8. Benchmarking and Regulation

    DEFF Research Database (Denmark)

    Agrell, Per J.; Bogetoft, Peter

    Benchmarking methods, and in particular Data Envelopment Analysis (DEA), have become well-established and informative tools for economic regulation. DEA is now routinely used by European regulators to set reasonable revenue caps for energy transmission and distribution system operators. The application of benchmarking in regulation, however, requires specific steps in terms of data validation, model specification and outlier detection that are not systematically documented in open publications, leading to discussions about regulatory stability and economic feasibility of these techniques. In this paper, we review the modern foundations for frontier-based regulation and we discuss its actual use in several jurisdictions.
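
    As a concrete illustration of the frontier technique named above, here is a minimal sketch of the input-oriented CCR DEA model, solved as one linear program per unit; the data are made up, and real regulatory models add many inputs, outputs and constraints.

        import numpy as np
        from scipy.optimize import linprog

        X = np.array([[2.0, 3.0, 6.0, 9.0]])   # inputs, shape (m, n)
        Y = np.array([[1.0, 4.0, 6.0, 7.0]])   # outputs, shape (s, n)
        m, n = X.shape
        s = Y.shape[0]

        def ccr_efficiency(o):
            c = np.r_[1.0, np.zeros(n)]                  # minimise theta
            A_in = np.hstack([-X[:, [o]], X])            # sum lam*x <= theta*x_o
            A_out = np.hstack([np.zeros((s, 1)), -Y])    # sum lam*y >= y_o
            b_ub = np.r_[np.zeros(m), -Y[:, o]]
            bounds = [(None, None)] + [(0, None)] * n    # theta free, lam >= 0
            res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b_ub,
                          bounds=bounds, method="highs")
            return res.x[0]

        for o in range(n):
            print(f"unit {o}: efficiency = {ccr_efficiency(o):.3f}")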

  9. High-performance core-shell PdPt@Pt/C catalysts via decorating PdPt alloy cores with Pt

    Science.gov (United States)

    Wu, Yan-Ni; Liao, Shi-Jun; Liang, Zhen-Xing; Yang, Li-Jun; Wang, Rong-Fang

    A core-shell structured low-Pt catalyst, PdPt@Pt/C, with high performance towards both methanol anodic oxidation and oxygen cathodic reduction, as well as in a single hydrogen/air fuel cell, is prepared by a novel two-step colloidal approach. For the anodic oxidation of methanol, the catalyst shows three times higher activity than the commercial Tanaka 50 wt% Pt/C catalyst; furthermore, the ratio of forward current (If) to backward current (Ib) is as high as 1.04, whereas for typical platinum catalysts the ratio is only ca. 0.70, indicating that this PdPt@Pt/C catalyst has high activity towards methanol anodic oxidation and good tolerance to the intermediates of methanol oxidation. The catalyst is characterized by X-ray diffraction (XRD), transmission electron microscopy (TEM), and X-ray photoelectron spectroscopy (XPS). The core-shell structure of the catalyst is revealed by XRD and TEM, and is also supported by underpotential deposition of hydrogen (UPDH). The high performance of the PdPt@Pt/C catalyst may make it a promising and competitive low-Pt catalyst for hydrogen-fueled polymer electrolyte membrane fuel cell (PEMFC) or direct methanol fuel cell (DMFC) applications.

  10. Unlocking the Origin of Superior Performance of a Si-Ge Core-Shell Nanowire Quantum Dot Field Effect Transistor.

    Science.gov (United States)

    Dhungana, Kamal B; Jaishi, Meghnath; Pati, Ranjit

    2016-07-13

    The sustained advancement in semiconducting core-shell nanowire technology has unlocked a tantalizing route for making next generation field effect transistor (FET). Understanding how to control carrier mobility of these nanowire channels by applying a gate field is the key to developing a high performance FET. Herein, we have identified the switching mechanism responsible for the superior performance of a Si-Ge core-shell nanowire quantum dot FET over its homogeneous Si counterpart. A quantum transport approach is used to investigate the gate-field modulated switching behavior in electronic current for ultranarrow Si and Si-Ge core-shell nanowire quantum dot FETs. Our calculations reveal that for the ON state, the gate-field induced transverse localization of the wave function restricts the carrier transport to the outer (shell) layer with the pz orbitals providing the pathway for tunneling of electrons in the channels. The higher ON state current in the Si-Ge core-shell nanowire FET is attributed to the pz orbitals that are distributed over the entire channel; in the case of Si nanowire, the participating pz orbital is restricted to a few Si atoms in the channel resulting in a smaller tunneling current. Within the gate bias range considered here, the transconductance is found to be substantially higher in the case of a Si-Ge core-shell nanowire FET than in a Si nanowire FET, which suggests a much higher mobility in the Si-Ge nanowire device.

  11. Multi-Core Technology for Fault Tolerant High-Performance Spacecraft Computer Systems

    Science.gov (United States)

    Behr, Peter M.; Haulsen, Ivo; Van Kampenhout, J. Reinier; Pletner, Samuel

    2012-08-01

    The current architectural trends in the field of multi-core processors can provide an enormous increase in processing power by exploiting the parallelism available in many applications. In particular because of their high energy efficiency, it is obvious that multi-core processor-based systems will also be used in future space missions. In this paper we present the system architecture of a powerful optical sensor system based on the eight core multi-core processor P4080 from Freescale. The fault tolerant structure and the highly effective FDIR concepts implemented on different hardware and software levels of the system are described in detail. The space application scenario and thus the main requirements for the sensor system have been defined by a complex tracking sensor application for autonomous landing or docking manoeuvres.

  12. Performance Flexibility Architecture of Core Service Platform for Next-Generation Network

    Institute of Scientific and Technical Information of China (English)

    YANG Menghui; YANG Weikang; WANG Xiaoge; LIAO Jianxin; CHEN Junliang

    2008-01-01

    The hardware and software architectures of core service platforms for next-generation networks were analyzed to compute the minimum-cost hardware configuration of a core service platform. This method gives a closed-form expression for the optimized hardware cost configuration based on the service requirements, the processing features of the computers running the core service platform software, and the processing capabilities of the common object request broker architecture (CORBA) middleware. Three simulation scenarios were used to evaluate the model. The input includes the number of servers for the protocol mapping (PM), Parlay gateway (PG), application server (AS), and communication handling (CH) functions. The simulation results show that the mean delay meets requirements; when the numbers of servers for the PM, PG, AS, and CH functions were not properly selected, the mean delay was excessive. The simulation results show that the model is valid and can be used to optimize investments in core service platforms.
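
    The record gives no formulas, so purely as a generic illustration of this kind of server-sizing calculation (not necessarily the closed form derived in the paper), the sketch below uses the classic M/M/c (Erlang C) queueing model to find the smallest server count that keeps the mean delay under a bound.

        import math

        def mmc_mean_delay(lam, mu, c):
            """Mean sojourn time in an M/M/c queue (arrivals lam, service rate mu)."""
            a = lam / mu                        # offered load in Erlangs
            if a >= c:
                return math.inf                 # unstable configuration
            tail = a ** c / (math.factorial(c) * (1 - a / c))
            p0_inv = sum(a ** k / math.factorial(k) for k in range(c)) + tail
            p_wait = tail / p0_inv              # Erlang C waiting probability
            return p_wait / (c * mu - lam) + 1 / mu

        lam, mu, target = 800.0, 50.0, 0.05     # req/s, req/s per server, 50 ms
        c = 1
        while mmc_mean_delay(lam, mu, c) > target:
            c += 1
        print(f"{c} servers -> mean delay {mmc_mean_delay(lam, mu, c) * 1e3:.1f} ms")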

  13. KUGEL: a thermal, hydraulic, fuel performance, and gaseous fission product release code for pebble bed reactor core analysis

    Energy Technology Data Exchange (ETDEWEB)

    Shamasundar, B.I.; Fehrenbach, M.E.

    1981-05-01

    The KUGEL computer code is designed to perform thermal/hydraulic analysis and coated-fuel particle performance calculations for axisymmetric pebble bed reactor (PBR) cores. This computer code was developed as part of a Department of Energy (DOE)-funded study designed to verify the published core performance data on PBRs. The KUGEL code is designed to interface directly with the 2DB code, a two-dimensional neutron diffusion code, to obtain distributions of thermal power, fission rate, fuel burnup, and fast neutron fluence, which are needed for thermal/hydraulic and fuel performance calculations. The code is variably dimensioned so that problem size can be easily varied. An interpolation routine allows variable mesh size to be used between the 2DB output and the two-dimensional thermal/hydraulic calculations.

  14. Diagnostic performance of core needle biopsy in identifying breast phyllodes tumors

    Science.gov (United States)

    Zhou, Zhi-Rui; Wang, Chen-Chen; Sun, Xiang-Jie; Yang, Zhao-Zhi

    2016-01-01

    Background: A retrospective analysis of diagnoses was performed in patients with phyllodes tumors of the breast (PTB) who received preoperative core needle biopsy (CNB) and had breast surgery at Fudan University Shanghai Cancer Center from January 1, 2002 to April 1, 2013. The resulting data allowed us to compare the accordance between CNB and excision diagnoses of PTB patients and evaluate the accuracy of CNB in preoperative diagnosis. Methods: Data from 128 patients with PTB who had undergone preoperative CNB and breast surgery were retrospectively analyzed. We reviewed the medical history, clinical follow-up data, and CNB diagnostic data. A diagnostic test was used to evaluate the sensitivity and specificity of CNB in diagnosing benign, borderline, and malignant phyllodes tumors. Results: The accuracy of CNB for diagnosing PTB was 13.3% (17/128). Of the remaining patients, 98 (75.5% of the PTB patients) were diagnosed with fibroadenoma or fibroepithelial lesions. The sensitivities of CNB in diagnosing benign, borderline, and malignant phyllodes tumors were 4.9% (2/41), 4.2% (3/71), and 25.0% (4/16), respectively, whereas the corresponding specificities were 92.0%, 98.2%, and 100%, respectively. Some clinical features, such as large tumor size, rapid growth, or a surgical history of fibroadenomas, were indicative of an increased possibility of PTB. Conclusions: CNB provides a pathological basis for the preoperative diagnosis of PTB, but it has poor accuracy and offers limited guidance for surgical decisions. Considering CNB along with multiple histologic features may improve the ability to accurately diagnose PTB. An integrated assessment using CNBs in combination with clinical data and imaging features is suggested as a reliable strategy to assist PTB diagnosis. PMID:28066593
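
    The metrics quoted above follow the usual definitions (sensitivity = TP/(TP + FN), specificity = TN/(TN + FP)); a quick check against the reported counts:

        for label, tp, total in [("benign", 2, 41), ("borderline", 3, 71),
                                 ("malignant", 4, 16)]:
            print(f"{label}: sensitivity = {tp}/{total} = {tp / total:.1%}")
        print(f"overall CNB accuracy = 17/128 = {17 / 128:.1%}")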

  15. Photoelectrochemical performance of NiO-coated ZnO–CdS core-shell photoanode

    Science.gov (United States)

    Iyengar, Pranit; Das, Chandan; Balasubramaniam, K. R.

    2017-03-01

    A nano-structured core-shell ZnO–CdS photoanode device with a mesoporous NiO co-catalyst layer was fabricated using solution-processing methods. The growth of the sparse ZnO nano-rod film with a thickness of ca. 930 nm was achieved by optimizing parameters such as the thickness of the ZnO seed layer, the choice of Zn precursor salt and the salt concentration. CdS was then coated by a combination of spin coating and spin SILAR (Successive Ionic Layer Adsorption and Reaction) methods to completely fill the interspace of the ZnO nano-rods. The uniform CdS surface facilitated the growth of a continuous mesoporous NiO layer. Upon illumination with 100 mW·cm-2 AM 1.5 G radiation the device exhibits stable photocurrents of 2.15 mA·cm-2 at 1.23 V and 0.92 mA·cm-2 at 0.00 V versus RHE, significantly higher than those of the bare ZnO–CdS device. The excellent performance of the device can be ascribed to the higher visible-region absorption by CdS, and effective separation of the photogenerated charge carriers due to the suitable band alignment and nanostructuring. Additionally, the mesoporous NiO overlayer offered a larger contact area with the electrolyte and promoted the kinetics, enabling a higher and stable photocurrent even at the 35th minute of testing.

  16. Benchmarking the ERG valve tip and MRI Interventions Smart Flow neurocatheter convection-enhanced delivery system's performance in a gel model of the brain: employing infusion protocols proposed for gene therapy for Parkinson's disease

    Science.gov (United States)

    Sillay, Karl; Schomberg, Dominic; Hinchman, Angelica; Kumbier, Lauren; Ross, Chris; Kubota, Ken; Brodsky, Ethan; Miranpuri, Gurwattan

    2012-04-01

    Convection-enhanced delivery (CED) is an advanced infusion technique used to deliver therapeutic agents into the brain. CED has shown promise in recent clinical trials. Independent verification of published parameters is warranted, with benchmark testing in applicable models such as gel phantoms, ex vivo tissue, and in vivo non-human animal models, to effectively inform planned and future clinical therapies. In the current study, specific performance characteristics of two CED infusion catheter systems, such as backflow, infusion cloud morphology, the ratio of the volume of distribution (mm3) to the infused volume (mm3) (Vd/Vi), rate of infusion (µl min-1) and pressure (mmHg), were examined against published performance standards for the ERG valve-tip (VT) catheter. We tested the hypothesis that the ERG VT catheter with a steady 1 µl min-1 infusion protocol performs comparably to the newly FDA-approved MRI Interventions Smart Flow (SF) catheter with the UCSF infusion protocol in an agarose gel model. In the gel phantom models, no significant difference was found in performance parameters between the VT and SF catheters. We report, for the first time, such benchmark characteristics in CED between these two otherwise similar single-end-port infusion systems, one with a stylet (VT) and one without (SF). Results of the current study in agarose gel models suggest that the performance of the VT catheter is comparable to the SF catheter and warrants further investigation as a tool in the armamentarium of CED techniques for eventual clinical use and application.

  17. Performance limitation and the role of core temperature when wearing light-weight workwear under moderate thermal conditions.

    Science.gov (United States)

    Kofler, Philipp; Burtscher, Martin; Heinrich, Dieter; Bottoni, Giuliamarta; Caven, Barnaby; Bechtold, Thomas; Teresa Herten, Anne; Hasler, Michael; Faulhaber, Martin; Nachbauer, Werner

    2015-01-01

    The objective of this investigation was to understand the relationship between heat stress and performance limitation when wearing a two-layer fire-resistant light-weight workwear (full-clothed ensemble) compared to a one-layer short sports gear (semi-clothed ensemble) in an exhaustive, stressful situation under a moderate thermal condition (25°C). Ten well-trained male subjects performed a strenuous walking protocol with both clothing ensembles until exhaustion occurred in a climatic chamber. Wearing workwear reduced endurance performance by 10% (p=0.007) and evaporation by 21% (p=0.003), caused a more pronounced rise in core temperature during submaximal walking (0.7±0.3 vs. 1.2±0.4°C; p≤0.001) and from start till exhaustion (1.4±0.3 vs. 1.8±0.5°C; p=0.008), accelerated sweat loss (13±2 vs. 15±3 g min-1; p=0.007), and led to a significantly higher heart rate at the end of cool-down (103±6 vs. 111±7 bpm; p=0.004). Correlation analysis revealed that core temperature development during submaximal walking and evaporation may play important roles in endurance performance. However, a critical core temperature of 40°C, which is stated to be a crucial factor for central fatigue and performance limitation, was reached with neither the semi-clothed nor the full-clothed ensemble (38.3±0.4 vs. 38.4±0.5°C). Additionally, perceived exertion did not increase in parallel with the rising core temperature to a greater extent with workwear, which would have substantiated the critical core temperature theory. In conclusion, the increased heat stress led to cardiovascular exercise limitation rather than central fatigue. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Entropy-based benchmarking methods

    OpenAIRE

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs of the original series. We show that the widely used variants of the Denton (1971) method and the growth preservation method of Causey and Trager (1981) may violate this principle, while its requirements are explicitly taken into account in the proposed entropy-based benchmarking methods. Our illustrati...
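
    For readers new to this literature, "benchmarking" here means adjusting a high-frequency series so that it agrees with low-frequency totals. A minimal sketch of the movement-preservation idea, using an additive first-difference criterion in the Denton spirit (the paper's entropy-based methods replace this quadratic criterion):

        import numpy as np

        def denton_additive(indicator, benchmarks):
            """Adjust `indicator` so each year of quarters sums to its benchmark."""
            T, Y = len(indicator), len(benchmarks)
            q = T // Y
            D = np.eye(T - 1, T, k=1) - np.eye(T - 1, T)   # first differences
            Agg = np.kron(np.eye(Y), np.ones(q))           # annual aggregation
            H = 2 * D.T @ D
            KKT = np.block([[H, Agg.T], [Agg, np.zeros((Y, Y))]])
            rhs = np.r_[H @ indicator, benchmarks]
            return np.linalg.solve(KKT, rhs)[:T]

        ind = np.array([10., 11., 12., 13., 12., 13., 14., 15.])  # quarterly
        bm = np.array([50., 58.])                                 # annual totals
        x = denton_additive(ind, bm)
        print(np.round(x, 2), x[:4].sum(), x[4:].sum())           # sums hit 50, 58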

  19. Benchmarking Asteroid-Deflection Experiment

    Science.gov (United States)

    Remington, Tane; Bruck Syal, Megan; Owen, John Michael; Miller, Paul L.

    2016-10-01

    An asteroid impacting Earth could have devastating consequences. In preparation to deflect or disrupt one before it reaches Earth, it is imperative to have modeling capabilities that adequately simulate the deflection actions. Code validation is key to ensuring full confidence in simulation results used in an asteroid-mitigation plan. We are benchmarking well-known impact experiments using Spheral, an adaptive smoothed-particle hydrodynamics code, to validate our modeling of asteroid deflection. We describe our simulation results, compare them with experimental data, and discuss what we have learned from our work. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-695540

  20. A Benchmarking System for Domestic Water Use

    Directory of Open Access Journals (Sweden)

    Dexter V. L. Hunt

    2014-05-01

    Full Text Available The national demand for water in the UK is predicted to increase, exacerbated by a growing UK population and home-grown demands for energy and food. When set against the context of overstretched existing supply sources vulnerable to droughts, particularly in increasingly dense city centres, the delicate balance of matching minimal demands with resource-secure supplies becomes critical. When making changes to "internal" demands, the role of technological efficiency and user behaviour cannot be ignored, yet existing benchmarking systems traditionally do not consider the latter. This paper investigates the practicalities of adopting a domestic benchmarking system (using a band rating) that allows individual users to assess their current water use performance against what is possible. The benchmarking system allows users to achieve higher benchmarks through any approach that reduces water consumption. The sensitivity of the water use benchmarks is investigated by making changes to user behaviour and technology. The impact of adopting localised supplies (i.e., rainwater harvesting, RWH, and grey water, GW) and including "external" gardening demands is investigated. This includes the impacts (in isolation and in combination) of the following: occupancy rates (1 to 4); roof size (12.5 m2 to 100 m2); garden size (25 m2 to 100 m2); and geographical location (North West, Midlands and South East, UK) with yearly temporal effects (i.e., rainfall and temperature). Lessons learnt from analysis of the proposed benchmarking system are presented throughout this paper, in particular its compatibility with the existing Code for Sustainable Homes (CSH) accreditation system. Conclusions are subsequently drawn on the robustness of the proposed system.
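
    An illustrative sketch of the band-rating idea: score a household's per-capita consumption against banded thresholds. The band edges here are placeholders, not the paper's calibrated values.

        BANDS = [(80, "A"), (100, "B"), (120, "C"), (140, "D"), (160, "E")]

        def water_band(litres_per_day, occupants):
            per_capita = litres_per_day / occupants
            for limit, band in BANDS:
                if per_capita <= limit:
                    return band, per_capita
            return "F", per_capita

        band, pc = water_band(litres_per_day=420, occupants=4)
        print(f"{pc:.0f} l/person/day -> band {band}")   # 105 -> band C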

  1. HPC Benchmark Suite NMx Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Intelligent Automation Inc., (IAI) and University of Central Florida (UCF) propose to develop a comprehensive numerical test suite for benchmarking current and...

  2. Preliminary Results for the OECD/NEA Time Dependent Benchmark using Rattlesnake, Rattlesnake-IQS and TDKENO

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, Mark D. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mausolff, Zander [Univ. of Florida, Gainesville, FL (United States); Weems, Zach [Univ. of Florida, Gainesville, FL (United States); Popp, Dustin [Univ. of Florida, Gainesville, FL (United States); Smith, Kristin [Univ. of Florida, Gainesville, FL (United States); Shriver, Forrest [Univ. of Florida, Gainesville, FL (United States); Goluoglu, Sedat [Univ. of Florida, Gainesville, FL (United States); Prince, Zachary [Texas A & M Univ., College Station, TX (United States); Ragusa, Jean [Texas A & M Univ., College Station, TX (United States)

    2016-08-01

    One goal of the MAMMOTH M&S project is to validate the analysis capabilities within MAMMOTH. Historical data has shown limited value for validation of full three-dimensional (3D) multi-physics methods. Initial analysis considered the TREAT startup minimum critical core and one of the startup transient tests. At present, validation is focusing on measurements taken during the M8CAL test calibration series. These exercises will be valuable in a preliminary assessment of the ability of MAMMOTH to perform coupled multi-physics calculations; calculations performed to date are being used to validate the neutron transport solver Rattlesnake and the fuels performance code BISON. Other validation projects outside of TREAT are available for single-physics benchmarking. Because the transient solution capability of Rattlesnake is one of the key attributes that makes it unique for TREAT transient simulations, validating the transient solution of Rattlesnake against other time-dependent kinetics benchmarks has considerable value. The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has recently developed a computational benchmark for transient simulations. This benchmark considered both two-dimensional (2D) and 3D configurations for a total of 26 different transients. All are negative reactivity insertions, typically returning to the critical state after some time.

  3. Performance of an MPI-only semiconductor device simulator on a quad socket/quad core InfiniBand platform.

    Energy Technology Data Exchange (ETDEWEB)

    Shadid, John Nicolas; Lin, Paul Tinphone

    2009-01-01

    This preliminary study considers the scaling and performance of a finite element (FE) semiconductor device simulator on a capacity cluster with 272 compute nodes based on a homogeneous multicore node architecture utilizing 16 cores. The inter-node communication backbone for this Tri-Lab Linux Capacity Cluster (TLCC) machine is comprised of an InfiniBand interconnect. The nonuniform memory access (NUMA) nodes consist of 2.2 GHz quad socket/quad core AMD Opteron processors. The performance results for this study are obtained with a FE semiconductor device simulation code (Charon) that is based on a fully-coupled Newton-Krylov solver with domain decomposition and multilevel preconditioners. Scaling and multicore performance results are presented for large-scale problems of 100+ million unknowns on up to 4096 cores. A parallel scaling comparison is also presented with the Cray XT3/4 Red Storm capability platform. The results indicate that an MPI-only programming model for utilizing the multicore nodes is reasonably efficient on all 16 cores per compute node. However, the results also indicated that the multilevel preconditioner, which is critical for large-scale capability type simulations, scales better on the Red Storm machine than the TLCC machine.

  4. Core-shell diode array for high performance particle detectors and imaging sensors: status of the development

    Science.gov (United States)

    Jia, G.; Hübner, U.; Dellith, J.; Dellith, A.; Stolz, R.; Plentz, J.; Andrä, G.

    2017-02-01

    We propose a novel high-performance radiation detector and imaging sensor based on a ground-breaking core-shell diode array design. These novel core-shell diode arrays are expected to simultaneously offer superior performance with respect to ultrahigh radiation hardness, high sensitivity, low power consumption, fast signal response and high spatial resolution. These properties are highly desired in fundamental research such as high energy physics (HEP) at CERN, astronomy, and future X-ray based protein crystallography at X-ray free electron lasers (XFEL). Such detectors will provide solutions for fundamental research fields currently limited by instrumentation. In this work, we report our progress on the development of core-shell diode arrays for applications as high-performance imaging sensors and particle detectors. We mainly present our results on the preparation of high-aspect-ratio regular silicon rods by metal-assisted wet chemical etching. Channels nearly 200 μm deep and 2 μm wide, with high aspect ratio, have been etched into silicon. This result will open many applications not only for the core-shell diode array, but also for high-density integration of 3D microelectronic devices.

  5. High performance of SDC and GDC core shell type composite electrolytes using methane as a fuel for low temperature SOFC

    Directory of Open Access Journals (Sweden)

    Muneeb Irshad

    2016-02-01

    Full Text Available Nanocomposites of samarium-doped ceria (SDC), gadolinium-doped ceria (GDC), core-shell SDC/amorphous Na2CO3 (SDCC) and core-shell GDC/amorphous Na2CO3 (GDCC) were synthesized using a co-precipitation method and then compared to obtain better solid oxide electrolyte materials for low-temperature solid oxide fuel cells (SOFCs). The comparison is made in terms of structure, crystallinity, thermal stability, conductivity and cell performance. In the present work, XRD analysis confirmed proper doping of Sm and Gd in both the single-phase (SDC, GDC) and dual-phase core-shell (SDCC, GDCC) electrolyte materials. EDX analysis validated the presence of Sm and Gd in both single- and dual-phase electrolyte materials, also confirming the presence of amorphous Na2CO3 in SDCC and GDCC. From TGA analysis, a steep weight loss is observed for SDCC and GDCC when the temperature rises above 725 °C, while SDC and GDC do not show any loss. The ionic conductivity and cell performance of the single-phase SDC and GDC nanocomposites were compared with the core-shell GDC/amorphous Na2CO3 and SDC/amorphous Na2CO3 nanocomposites using methane fuel. It is observed that the dual-phase core-shell electrolyte materials (SDCC, GDCC) show better performance in the low-temperature range than their corresponding single-phase electrolyte materials (SDC, GDC) with methane fuel.

  6. High performance of SDC and GDC core shell type composite electrolytes using methane as a fuel for low temperature SOFC

    Energy Technology Data Exchange (ETDEWEB)

    Irshad, Muneeb; Siraj, Khurram, E-mail: razahussaini786@gmail.com, E-mail: khurram.uet@gmail.com; Javed, Fayyaz; Ahsan, Muhammad; Rafique, Muhammad Shahid [Department of Physics, University of Engineering and Technology, Lahore (Pakistan); Raza, Rizwan, E-mail: razahussaini786@gmail.com, E-mail: khurram.uet@gmail.com [Department of Physics, COMSATS Institute of Information Technology, Lahore (Pakistan); Shakir, Imran [Deanship of scientific research, College of Engineering, PO Box 800, King Saud University, Riyadh 11421 (Saudi Arabia)

    2016-02-15

    Nanocomposites of samarium-doped ceria (SDC), gadolinium-doped ceria (GDC), core-shell SDC/amorphous Na2CO3 (SDCC) and core-shell GDC/amorphous Na2CO3 (GDCC) were synthesized using a co-precipitation method and then compared to obtain better solid oxide electrolyte materials for low-temperature solid oxide fuel cells (SOFCs). The comparison is made in terms of structure, crystallinity, thermal stability, conductivity and cell performance. In the present work, XRD analysis confirmed proper doping of Sm and Gd in both the single-phase (SDC, GDC) and dual-phase core-shell (SDCC, GDCC) electrolyte materials. EDX analysis validated the presence of Sm and Gd in both single- and dual-phase electrolyte materials, also confirming the presence of amorphous Na2CO3 in SDCC and GDCC. From TGA analysis, a steep weight loss is observed for SDCC and GDCC when the temperature rises above 725 °C, while SDC and GDC do not show any loss. The ionic conductivity and cell performance of the single-phase SDC and GDC nanocomposites were compared with the core-shell GDC/amorphous Na2CO3 and SDC/amorphous Na2CO3 nanocomposites using methane fuel. It is observed that the dual-phase core-shell electrolyte materials (SDCC, GDCC) show better performance in the low-temperature range than their corresponding single-phase electrolyte materials (SDC, GDC) with methane fuel.

  7. GPUs benchmarking in subpixel image registration algorithm

    Science.gov (United States)

    Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferrerira, Carlos; Granero, Luis; Garcia, Javier

    2015-05-01

    Image registration techniques are used across different scientific fields, like medical imaging or optical metrology. The most straightforward way to calculate the shift between two images is to use the cross-correlation, taking the highest value of the correlation image. The shift resolution is then given in whole pixels, which may not be enough for certain applications. Better results can be achieved by interpolating both images, as finely as the desired resolution requires, and applying the same technique as before, but the memory needed by the system is then significantly higher. To avoid this memory consumption we implement a subpixel shifting method based on the FFT. Starting from the original images, subpixel shifts can be achieved by multiplying their discrete Fourier transforms by a linear phase with different slopes. This method is highly time-consuming, because checking each candidate shift means new calculations. The algorithm, being highly parallelizable, is very suitable for high-performance computing systems. GPU (Graphics Processing Unit) accelerated computing became very popular more than ten years ago because GPUs provide hundreds of computational cores on a reasonably cheap card. In our case, we register the shift between two images, obtaining a first approximation by FFT-based correlation and then refining it with the subpixel approach described before. We consider it a 'brute force' method. We therefore present a benchmark of the algorithm consisting of a first approach (pixel resolution) followed by subpixel refinement, decreasing the shift step in every loop to achieve high resolution in a few steps. This program is executed on three different computers. At the end, we present the results of the computation with different kinds of CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of the use of GPUs.
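
    A condensed sketch of the brute-force search described above (CPU version; the benchmarked implementations parallelize the candidate-shift evaluations on the GPU): a whole-pixel estimate from the FFT cross-correlation peak is refined by scoring linear phase ramps of ever finer slope.

        import numpy as np

        def register_subpixel(ref, img, levels=4):
            """Estimate (dy, dx) such that img(y, x) ~ ref(y - dy, x - dx)."""
            ny, nx = ref.shape
            F = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
            ky = np.fft.fftfreq(ny)[:, None]
            kx = np.fft.fftfreq(nx)[None, :]

            def score(dy, dx):   # zero-lag correlation after a trial phase ramp
                return np.abs(np.sum(F * np.exp(-2j * np.pi * (ky * dy + kx * dx))))

            corr = np.abs(np.fft.ifft2(F))                 # whole-pixel estimate
            py, px = np.unravel_index(corr.argmax(), corr.shape)
            dy = -(py if py <= ny // 2 else py - ny)
            dx = -(px if px <= nx // 2 else px - nx)

            step = 0.5
            for _ in range(levels):                        # 0.5, 0.25, 0.125, ...
                cands = [(dy + a, dx + b) for a in (-step, 0, step)
                                          for b in (-step, 0, step)]
                dy, dx = max(cands, key=lambda d: score(*d))
                step /= 2
            return dy, dx

        # self-test: shift a random image by (1.25, -0.5) via an FFT phase ramp
        rng = np.random.default_rng(0)
        ref = rng.random((64, 64))
        ky = np.fft.fftfreq(64)[:, None]; kx = np.fft.fftfreq(64)[None, :]
        img = np.fft.ifft2(np.fft.fft2(ref)
                           * np.exp(-2j * np.pi * (ky * 1.25 + kx * -0.5))).real
        print(register_subpixel(ref, img))                 # ~ (1.25, -0.5)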

  8. Benchmarking of methods for genomic taxonomy.

    Science.gov (United States)

    Larsen, Mette V; Cosentino, Salvatore; Lukjancenko, Oksana; Saputra, Dhany; Rasmussen, Simon; Hasman, Henrik; Sicheritz-Pontén, Thomas; Aarestrup, Frank M; Ussery, David W; Lund, Ole

    2014-05-01

    One of the first issues that emerges when a prokaryotic organism of interest is encountered is the question of what it is--that is, which species it is. The 16S rRNA gene formed the basis of the first method for sequence-based taxonomy and has had a tremendous impact on the field of microbiology. Nevertheless, the method has been found to have a number of shortcomings. In the current study, we trained and benchmarked five methods for whole-genome sequence-based prokaryotic species identification on a common data set of complete genomes: (i) SpeciesFinder, which is based on the complete 16S rRNA gene; (ii) Reads2Type, which searches for species-specific 50-mers in either the 16S rRNA gene or the gyrB gene (for the Enterobacteriaceae family); (iii) the ribosomal multilocus sequence typing (rMLST) method that samples up to 53 ribosomal genes; (iv) TaxonomyFinder, which is based on species-specific functional protein domain profiles; and finally (v) KmerFinder, which examines the number of co-occurring k-mers (substrings of k nucleotides in DNA sequence data). The performances of the methods were subsequently evaluated on three data sets of short sequence reads or draft genomes from public databases. In total, the evaluation sets constituted sequence data from more than 11,000 isolates covering 159 genera and 243 species. Our results indicate that methods that sample only chromosomal, core genes have difficulties in distinguishing closely related species that only recently diverged. The KmerFinder method had the overall highest accuracy and correctly identified from 93% to 97% of the isolates in the evaluation sets.
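
    The principle behind k-mer identification (method v) is simple to sketch. The toy code below is not KmerFinder itself -- the published tool works against a large reference database with a more elaborate score -- but it shows how shared k-mer content picks a species; the sequences, the scoring function and k = 16 are illustrative choices:

      from collections import Counter

      def kmer_profile(seq, k=16):
          # All overlapping k-mers of a DNA string.
          return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

      def shared_kmer_score(query, reference, k=16):
          # Fraction of the query's distinct k-mers that also occur in the
          # reference; the reference with the highest score is the prediction.
          q = set(kmer_profile(query, k))
          r = set(kmer_profile(reference, k))
          return len(q & r) / len(q) if q else 0.0

      references = {"species_A": "ACGT" * 50, "species_B": "TTGACA" * 40}  # toy data
      read = "ACGTACGTACGTACGTACGT"
      print(max(references, key=lambda name: shared_kmer_score(read, references[name])))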

  9. Fabrication of In2O3@In2S3 core-shell nanocubes for enhanced photoelectrochemical performance

    Science.gov (United States)

    Li, Haohua; Chen, Cong; Huang, Xinyou; Leng, Yang; Hou, Mengnan; Xiao, Xiaogu; Bao, Jie; You, Jiali; Zhang, Wenwen; Wang, Yukun; Song, Juan; Wang, Yaping; Liu, Qinqin; Hope, Gregory A.

    2014-02-01

    Herein, we report the facile synthesis of In2O3@In2S3 core-shell nanocubes and their improved photoelectrochemical properties. In2O3@In2S3 core-shell nanocubes are grown on a F-doped SnO2 (FTO) glass substrate by a two-step process, which involves the electrodeposition of In2O3 nanocubes and a subsequent ion-exchange treatment. The improved light-harvesting ability and the suitable band alignment of the In2O3@In2S3 core-shell nanocubes generate a remarkable photocurrent density of 6.19 mA cm-2 (at 0 V vs. Ag/AgCl), which is substantially higher than that of the pristine In2O3 nanocubes. These results provide new insight into the design of a high-performance photoanode for photoelectrochemical water splitting.

  10. Nanowire-based three-dimensional hierarchical core/shell heterostructured electrodes for high performance proton exchange membrane fuel cells

    Energy Technology Data Exchange (ETDEWEB)

    Saha, Madhu Sudan; Li, Ruying; Sun, Xueliang [Department of Mechanical and Materials Engineering, The University of Western Ontario, London, Ontario N6A 5B9 (Canada); Cai, Mei [General Motors Research and Development Center, Warren, MI 48090-9055 (United States)

    2008-12-01

    In order to utilize expensive Pt effectively in fuel cell electrocatalysts and improve the durability of PEM fuel cells, new catalyst supports with a three-dimensional (3D) open structure are highly desirable. Here, we report the fabrication of a 3D core/shell heterostructure consisting of a tin nanowire core and a carbon nanotube shell (SnC) grown directly onto fuel cell backing (here carbon paper) as a Pt catalyst support for PEM fuel cells. Compared with the conventional Pt/C membrane electrode assembly (MEA), the SnC nanowire-based MEA shows significantly higher oxygen reduction performance and better CO tolerance as well as excellent stability in PEM fuel cells. The results demonstrate that the core/shell nanowire-based composites are very promising supports for making cost-effective and durable electrocatalysts for fuel cell applications. (author)

  11. Core-shell amorphous silicon-carbon nanoparticles for high performance anodes in lithium ion batteries

    Science.gov (United States)

    Sourice, Julien; Bordes, Arnaud; Boulineau, Adrien; Alper, John P.; Franger, Sylvain; Quinsac, Axelle; Habert, Aurélie; Leconte, Yann; De Vito, Eric; Porcher, Willy; Reynaud, Cécile; Herlin-Boime, Nathalie; Haon, Cédric

    2016-10-01

    Core-shell silicon-carbon nanoparticles are attractive candidates as active material to increase the capacity of Li-ion batteries while mitigating the detrimental effects of volume expansion upon lithiation. However, crystalline silicon suffers from amorphization upon the first charge/discharge cycle, and improved stability is expected when starting from amorphous silicon. Here we report the synthesis, in a single-step process, of amorphous silicon nanoparticles coated with a carbon shell (a-Si@C), via a two-stage laser pyrolysis where decomposition of silane and ethylene is conducted in two successive reaction zones. Control of the experimental conditions mitigates silicon core crystallization as well as formation of silicon carbide. Auger electron spectroscopy and scanning transmission electron microscopy show a carbon shell about 1 nm in thickness, which prevents detrimental oxidation of the a-Si cores. Cyclic voltammetry demonstrates that the core-shell composite reaches its maximal lithiation during the first sweep, thanks to its amorphous core. After 500 charge/discharge cycles, it retains a capacity of 1250 mAh g-1 at a C/5 rate and 800 mAh g-1 at 2C, with an outstanding coulombic efficiency of 99.95%. Moreover, post-mortem observations show an electrode volume expansion of less than 20% and preservation of the nanostructure.

  12. Design of Super-Paramagnetic Core-Shell Nanoparticles for Enhanced Performance of Inverted Polymer Solar Cells.

    Science.gov (United States)

    Jaramillo, Johny; Boudouris, Bryan W; Barrero, César A; Jaramillo, Franklin

    2015-11-18

    Controlling the nature and transfer of excited states in organic photovoltaic (OPV) devices is of critical concern because exciton transport and separation can dictate the final performance of the system. One effective method to accomplish improved charge separation in organic electronic materials is to control the spin state of the photogenerated charge-carrying species. To this end, nanoparticles with unique iron oxide (Fe3O4) cores and zinc oxide (ZnO) shells were synthesized in a controlled manner. Then, the structural and magnetic properties of these core-shell nanoparticles (Fe3O4@ZnO) were tuned to ensure superior performance when they were incorporated into the active layers of OPV devices. Specifically, small loadings of the core-shell nanoparticles were blended with the previously well-characterized OPV active layer of poly(3-hexylthiophene) (P3HT) and [6,6]-phenyl-C61-butyric acid methyl ester (PCBM). Upon addition of the core-shell nanoparticles, the performance of the OPV devices was increased up to 25% relative to P3HT-PCBM active layer devices that contained no nanoparticles; this increase was a direct result of an increase in the short-circuit current densities of the devices. Furthermore, it was demonstrated that the increase in photocurrent was not due to enhanced absorption of the active layer in the presence of the Fe3O4@ZnO core-shell nanoparticles. In fact, this increase in device performance occurred because of the superparamagnetic Fe3O4 in the core of the nanoparticles, as incorporation of ZnO-only nanoparticles did not alter the device performance. Importantly, however, the ZnO shell of the nanoparticles mitigated the negative optical effect of Fe3O4, which has been observed previously. This allowed the core-shell nanoparticles to outperform bare Fe3O4 nanoparticles when the single-layer nanoparticles were incorporated into the active layer of OPV devices. As such, the new materials described here present a

  13. Preparation and electrochemical performances of carbon sphere@ZnO core-shell nanocomposites for supercapacitor applications

    Science.gov (United States)

    Xiao, Xuechun; Han, Bingqian; Chen, Gang; Wang, Lihong; Wang, Yude

    2017-01-01

    Carbon sphere (CS)@ZnO core-shell nanocomposites were successfully prepared through a facile low-temperature water-bath method without annealing treatment. The morphology and microstructure of the samples were characterized by transmission electron microscopy (TEM), X-ray diffraction (XRD) and X-ray photoelectron spectroscopy (XPS), respectively. ZnO nanoparticles several nanometers in size decorated the surface of the carbon spheres and formed a core-shell structure. Electrochemical performances of the CS@ZnO core-shell nanocomposite electrodes were investigated by cyclic voltammetry (CV) and galvanostatic charge/discharge (GCD). The CS@ZnO core-shell nanocomposite electrodes exhibit a much larger specific capacitance, and their cycling stability is improved significantly compared with the pure ZnO electrode. The CS@ZnO core-shell nanocomposite with a mole ratio of 1:1 achieves a specific capacitance of 630 F g−1 at a current density of 2 A g−1. The present work might provide a new route for fabricating carbon sphere and transition metal oxide composite materials as electrodes for application in supercapacitors. PMID:28057915
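
    For readers unfamiliar with how such figures are obtained: in galvanostatic testing the specific capacitance follows from C_sp = I·Δt/(m·ΔV). The electrode mass, discharge time and voltage window below are hypothetical (the abstract does not give them), chosen only so the formula reproduces the reported 630 F g−1 at 2 A g−1:

      def specific_capacitance(current_a, discharge_time_s, mass_g, window_v):
          # C_sp = I * dt / (m * dV), in farads per gram.
          return current_a * discharge_time_s / (mass_g * window_v)

      # Hypothetical numbers: a 1 mg electrode cycled at 2 A/g (i.e. 2 mA) over
      # a 1 V window would need a 315 s discharge to give the reported 630 F/g.
      print(specific_capacitance(2e-3, 315.0, 1e-3, 1.0))   # -> 630.0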

  14. Benchmarking foreign electronics technologies

    Energy Technology Data Exchange (ETDEWEB)

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  15. IAEA coordinated research project (CRP) on 'Analytical and experimental benchmark analyses of accelerator driven systems'

    Energy Technology Data Exchange (ETDEWEB)

    Abanades, Alberto [Universidad Politecnica de Madrid (Spain); Aliberti, Gerardo; Gohar, Yousry; Talamo, Alberto [ANL, Argonne (United States); Bornos, Victor; Kiyavitskaya, Anna [Joint Institute of Power Eng. and Nucl. Research 'Sosny', Minsk (Belarus); Carta, Mario [ENEA, Casaccia (Italy); Janczyszyn, Jerzy [AGH-University of Science and Technology, Krakow (Poland); Maiorino, Jose [IPEN, Sao Paulo (Brazil); Pyeon, Cheolho [Kyoto University (Japan); Stanculescu, Alexander [IAEA, Vienna (Austria); Titarenko, Yury [ITEP, Moscow (Russian Federation); Westmeier, Wolfram [Wolfram Westmeier GmbH, Ebsdorfergrund (Germany)

    2008-07-01

    In December 2005, the International Atomic Energy Agency (IAEA) started a Coordinated Research Project (CRP) on 'Analytical and Experimental Benchmark Analyses of Accelerator Driven Systems'. The overall objective of the CRP, performed within the framework of the Technical Working Group on Fast Reactors (TWGFR) of the IAEA's Nuclear Energy Department, is to increase the capability of interested Member States in developing and applying advanced reactor technologies in the area of long-lived radioactive waste utilization and transmutation. The specific objective of the CRP is to improve the present understanding of the coupling of an external neutron source (e.g. spallation source) with a multiplicative sub-critical core. The participants are performing computational and experimental benchmark analyses using integrated calculation schemes and simulation methods. The CRP aims at integrating some of the planned experimental demonstration projects of the coupling between a sub-critical core and an external neutron source (e.g. YALINA Booster in Belarus, and Kyoto University's Critical Assembly (KUCA)). The objective of these experimental programs is to validate computational methods, obtain high energy nuclear data, characterize the performance of sub-critical assemblies driven by external sources, and to develop and improve techniques for sub-criticality monitoring. The paper summarizes preliminary results obtained to date for some of the CRP benchmarks. (authors)

  16. 47 CFR 54.805 - Zone and study area above benchmark revenues calculated by the Administrator.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 Zone and study area above benchmark revenues... Mechanism § 54.805 Zone and study area above benchmark revenues calculated by the Administrator. (a) The following steps shall be performed by the Administrator to determine Zone Above Benchmark Revenues for...

  17. Neutronics Benchmarks for the Utilization of Mixed-Oxide Fuel: Joint U.S./Russian Progress Report for Fiscal Year 1997 Volume 2-Calculations Performed in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Primm III, RT

    2002-05-29

    This volume of the progress report provides documentation of reactor physics and criticality safety studies conducted in the US during fiscal year 1997 and sponsored by the Fissile Materials Disposition Program of the US Department of Energy. Descriptions of computational and experimental benchmarks for the verification and validation of computer programs for neutron physics analyses are included. All benchmarks include either plutonium, uranium, or mixed uranium and plutonium fuels. Calculated physics parameters are reported for all of the computational benchmarks and for those experimental benchmarks that the US and Russia mutually agreed in November 1996 were applicable to mixed-oxide fuel cycles for light-water reactors.

  18. Benchmark job – Watch out!

    CERN Multimedia

    Staff Association

    2017-01-01

    On 12 December 2016, in Echo No. 259, we already discussed at length the MERIT and benchmark jobs. Still, we find that a couple of issues warrant further discussion. Benchmark job – administrative decision on 1 July 2017 On 12 January 2017, the HR Department informed all staff members of a change to the effective date of the administrative decision regarding benchmark jobs. The benchmark job title of each staff member will be confirmed on 1 July 2017, instead of 1 May 2017 as originally announced in HR’s letter on 18 August 2016. Postponing the administrative decision by two months will leave a little more time to address the issues related to incorrect placement in a benchmark job. Benchmark job – discuss with your supervisor, at the latest during the MERIT interview In order to rectify an incorrect placement in a benchmark job, it is essential that the supervisor and the supervisee go over the assigned benchmark job together. In most cases, this placement has been done autom...

  19. Internal Benchmarking for Institutional Effectiveness

    Science.gov (United States)

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  1. Entropy-based benchmarking methods

    NARCIS (Netherlands)

    Temurshoev, Umed

    2012-01-01

    We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs in the original series. We show that the widely used variants of the Denton (1971) method and the growth pre

  2. Psychiatric OSCE Performance of Students with and without a Previous Core Psychiatry Clerkship

    Science.gov (United States)

    Goisman, Robert M.; Levin, Robert M.; Krupat, Edward; Pelletier, Stephen R.; Alpert, Jonathan E.

    2010-01-01

    Objective: The OSCE has been demonstrated to be a reliable and valid method by which to assess students' clinical skills. An OSCE station was used to determine whether or not students who had completed a core psychiatry clerkship demonstrated skills that were superior to those who had not taken the clerkship and which areas discriminated between…

  3. Design and performance of a pulse transformer based on Fe-based nanocrystalline core.

    Science.gov (United States)

    Yi, Liu; Xibo, Feng; Lin, Fuchang

    2011-08-01

    A dry-type pulse transformer based on a Fe-based nanocrystalline core, with a load of 0.88 nF, an output voltage of more than 65 kV, and a winding ratio of 46, is designed and constructed. The dynamic characteristics of the Fe-based nanocrystalline core under impulses with pulse widths of several microseconds were studied. The pulse width and incremental flux density have an important effect on the pulse permeability, so the pulse permeability is measured at a given pulse width and incremental flux density. The minimal volume of the toroidal pulse transformer core is determined by the coupling coefficient, the capacitances of the resonant charging circuit, the incremental flux density, and the pulse permeability. The factors governing the charging time, ratio, and energy transmission efficiency in the resonant charging circuit based on a magnetic-core pulse transformer are analyzed. Experimental results for the pulse transformer are in good agreement with the theoretical calculation. When the primary capacitor is 3.17 μF and the charge voltage is 1.8 kV, a voltage with a peak value of 68.5 kV and a rise time (10%-90%) of 1.80 μs is obtained across the 0.88 nF secondary capacitor.
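
    The reported numbers allow a quick energy-balance check of the resonant charging circuit. Assuming the usual capacitor energy formula E = CV²/2 (the values are taken from the abstract; the interpretation as an end-to-end transfer efficiency is ours):

      C1, V1 = 3.17e-6, 1.8e3    # primary capacitance (F) and charge voltage (V)
      C2, V2 = 0.88e-9, 68.5e3   # secondary capacitance (F) and measured peak (V)
      n = 46                     # winding ratio

      E1 = 0.5 * C1 * V1**2      # energy initially stored on the primary: ~5.1 J
      E2 = 0.5 * C2 * V2**2      # energy on the secondary at the peak:    ~2.1 J
      print(E2 / E1)             # energy transfer efficiency, roughly 0.40
      print(V2 / (n * V1))       # ~0.83 of the ideal ratio n*V1 = 82.8 kV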

  4. Performance of heterogeneous earthfill dams under earthquakes: optimal location of the impervious core

    Directory of Open Access Journals (Sweden)

    S. López-Querol

    2008-01-01

    Earthfill dams are man-made geostructures which may be especially damaged by seismic loading, because the soil skeleton they are made of suffers remarkable modifications of its mechanical properties, as well as changes of pore water pressure and flow of this water inside the pores, when subjected to vibrations. The most extreme situation is dam failure due to soil liquefaction. Coupled finite element numerical codes are a useful tool to assess the safety of these dams. In this paper the application of a fully coupled numerical model, previously developed and validated by the authors, to a set of theoretical cross sections of earthfill dams with an impervious core is presented. All these dams have the same height and the same volume of impervious material at the core. The influence of the core location inside the dam on its response to seismic loading is numerically explored. The dams are designed to be strictly stable under static loads. As a result of this research, a design recommendation on the location of the impervious core is obtained for this type of earth dam, on the basis of minimizing liquefaction risk, soil degradation during the earthquake, and crest settlement.

  5. The Design and Performance of IceCube DeepCore

    CERN Document Server

    2011-01-01

    The IceCube neutrino observatory in operation at the South Pole, Antarctica, comprises three distinct components: a large buried array for ultrahigh energy neutrino detection, a surface air shower array, and a new buried component called DeepCore. DeepCore was designed to lower the IceCube neutrino energy threshold by over an order of magnitude, to energies as low as about 10 GeV. DeepCore is situated primarily 2100 m below the surface of the icecap at the South Pole, at the bottom center of the existing IceCube array, and began taking physics data in May 2010. Its location takes advantage of the exceptionally clear ice at those depths and allows it to use the surrounding IceCube detector as a highly efficient active veto against the principal background of downward-going muons produced in cosmic-ray air showers. DeepCore has a module density roughly five times higher than that of the standard IceCube array, and uses photomultiplier tubes with a new photocathode featuring a quantum efficiency about 35% higher...

  6. International benchmarking of specialty hospitals. A series of case studies on comprehensive cancer centres

    OpenAIRE

    van Lent Wineke AM; de Beer Relinde D; van Harten Wim H

    2010-01-01

    Background: Benchmarking is one of the methods used in business that is applied to hospitals to improve the management of their operations. International comparison between hospitals can explain performance differences. As there is a trend towards specialization of hospitals, this study examines the benchmarking process and the success factors of benchmarking in international specialized cancer centres. Methods: Three independent international benchmarking studies on operations managem...

  7. Common Core: Victory Is Yours!

    Science.gov (United States)

    Fink, Jennifer L. W.

    2012-01-01

    In this article, the author discusses how to implement the Common Core State Standards in the classroom. She presents examples and activities that will leave teachers feeling "rosy" about tackling the new standards. She breaks down important benchmarks and shows how other teachers are doing the Core--and loving it!

  8. Benchmarking & European Sustainable Transport Policies

    DEFF Research Database (Denmark)

    Gudmundsson, H.

    2003-01-01

    Benchmarking is one of the management tools that have recently been introduced in the transport sector. It is rapidly being applied to a wide range of transport operations, services and policies. This paper is a contribution to the discussion of the role of benchmarking in the future efforts to support Sustainable European Transport Policies. The key message is that transport benchmarking has not yet been developed to cope with the challenges of this task. Rather than backing down completely, the paper suggests some critical conditions for applying and adopting benchmarking for this purpose. One way forward is to ensure a higher level of environmental integration in transport policy benchmarking. To this effect the paper will discuss the possible role of the so-called Transport and Environment Reporting Mechanism developed by the European Environment Agency. The paper provides an independent

  9. Benchmarking and Sustainable Transport Policy

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik; Wyatt, Andrew; Gordon, Lucy

    2004-01-01

    In order to learn from the best, in 2000 the European Commission initiated research to explore benchmarking as a tool to promote policies for 'sustainable transport'. This paper reports findings and recommendations on how to address this challenge. The findings suggest that benchmarking is a valuable tool that may indeed help to move forward the transport policy agenda. However, there are major conditions and limitations. First of all it is not always so straightforward to delimit, measure and compare transport services in order to establish a clear benchmark. Secondly 'sustainable transport' ... is generally not advised. Several other ways in which benchmarking and policy can support one another are identified in the analysis. This leads to a range of recommended initiatives to exploit the benefits of benchmarking in transport while avoiding some of the lurking pitfalls and dead ends.

  10. Simple benchmark for complex dose finding studies.

    Science.gov (United States)

    Cheung, Ying Kuen

    2014-06-01

    While a general goal of early phase clinical studies is to identify an acceptable dose for further investigation, modern dose finding studies and designs are highly specific to individual clinical settings. In addition, as outcome-adaptive dose finding methods often involve complex algorithms, it is crucial to have diagnostic tools to evaluate the plausibility of a method's simulated performance and the adequacy of the algorithm. In this article, we propose a simple technique that provides an upper limit, or a benchmark, of accuracy for dose finding methods for a given design objective. The proposed benchmark is nonparametric optimal in the sense of O'Quigley et al. (2002, Biostatistics 3, 51-56), and is demonstrated by examples to be a practical accuracy upper bound for model-based dose finding methods. We illustrate the implementation of the technique in the context of phase I trials that consider multiple toxicities and phase I/II trials where dosing decisions are based on both toxicity and efficacy, and apply the benchmark to several clinical examples considered in the literature. By comparing the operating characteristics of a dose finding method to that of the benchmark, we can form quick initial assessments of whether the method is adequately calibrated and evaluate its sensitivity to the dose-outcome relationships.
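
    The benchmark itself is easy to simulate. The sketch below follows the complete-information idea of O'Quigley et al. (2002) for a single-toxicity phase I setting; the dose-toxicity curve, target rate and sample size are illustrative, and the published benchmark further handles the multiple-outcome extensions discussed in the article:

      import numpy as np

      def benchmark_selection(true_tox, target=0.25, n_patients=30,
                              n_sim=2000, seed=0):
          # Each simulated patient receives a latent tolerance u ~ U(0,1) that
          # fixes their hypothetical toxicity outcome at *every* dose at once
          # (toxic at dose d iff u < p_d).  With this complete information,
          # pick the dose whose empirical toxicity rate is closest to the
          # target; the selection frequencies bound what any design can do.
          rng = np.random.default_rng(seed)
          p = np.asarray(true_tox)
          picks = np.zeros(len(p))
          for _ in range(n_sim):
              u = rng.uniform(size=(n_patients, 1))
              tox_rate = (u < p).mean(axis=0)
              picks[np.argmin(np.abs(tox_rate - target))] += 1
          return picks / n_sim

      # Selection frequency per dose; the middle dose has true toxicity 0.25.
      print(benchmark_selection([0.05, 0.12, 0.25, 0.40, 0.55]))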

  11. Cleanroom Energy Efficiency: Metrics and Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

    2010-07-07

    Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.
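
    Two of the system-level metrics named above reduce to one-line formulas, sketched here with hypothetical cleanroom numbers (not values from the LBNL dataset):

      def air_changes_per_hour(airflow_cfm, volume_ft3):
          # ACH = (cfm * 60 min/h) / room volume.
          return airflow_cfm * 60.0 / volume_ft3

      def watts_per_cfm(fan_power_kw, airflow_cfm):
          # Air-handling efficiency: fan power drawn per unit of airflow moved.
          return fan_power_kw * 1000.0 / airflow_cfm

      # Hypothetical bay: 20,000 ft^3, 120,000 cfm recirculated, 90 kW fan power.
      print(air_changes_per_hour(120_000, 20_000))   # 360 ACH
      print(watts_per_cfm(90, 120_000))              # 0.75 W/cfm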

  12. Advocacy for Benchmarking in the Nigerian Institute of Advanced ...

    African Journals Online (AJOL)

    and the implications of the adoption of the benchmarking strategy. The paper ... procedure in NIALS library as a tool of improving library business performance. .... seminars, internet fora and use of online databases & websites. (Business ...

  13. Implementation of the NAS Parallel Benchmarks in Java

    Science.gov (United States)

    Frumkin, Michael A.; Schultz, Matthew; Jin, Haoqiang; Yan, Jerry; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Several features make Java an attractive choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS (NASA Advanced Supercomputing) Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would position Java closer to Fortran in the competition for CFD applications.

  14. Benchmarking of collimation tracking using RHIC beam loss data.

    Energy Technology Data Exchange (ETDEWEB)

    Robert-Demolaize, G.; Drees, A.

    2008-06-23

    State-of-the-art tracking tools were recently developed at CERN to study the cleaning efficiency of the Large Hadron Collider (LHC) collimation system. In order to estimate the prediction accuracy of these tools, benchmarking studies can be performed using actual beam loss measurements from a machine that already uses a similar multistage collimation system. This paper reviews the main results from benchmarking studies performed with specific data collected from operations at the Relativistic Heavy Ion Collider (RHIC).

  15. Material combinations and parametric study of thermal and mechanical performance of pyramidal core sandwich panels used for hypersonic aircrafts

    Science.gov (United States)

    Zhang, Ruiping; Zhang, Xiaoqing; Lorenzini, Giulio; Xie, Gongnan

    2016-11-01

    A novel kind of lightweight integrated thermal protection system, named the pyramidal core sandwich panel, is proposed as a good safeguard for hypersonic aircraft in the current study. Such a system serves not only as an insulation structure but also as a load-bearing structure. In the context of design for hypersonic aircraft, efficient optimization deserves close attention. This paper concerns the homogenization of the proposed pyramidal core sandwich panel using a two-dimensional model in subsequent research for material selection. According to the required insulation performance and thermal-mechanical properties, several suitable material combinations are chosen as candidates for the pyramidal core sandwich panel by adopting finite element analysis and approximate response surfaces. To obtain a lightweight structure with an excellent capability of heat insulation and load-bearing, an investigation of some specific design variables, which are significant for the thermal-mechanical properties of the structure, is performed. Finally, a good balance between insulation performance, load-bearing capability and light weight is attained.

  16. Analyzing the trade-off between multiple memory controllers and memory channels on multi-core processor performance

    Energy Technology Data Exchange (ETDEWEB)

    Sancho Pitarch, Jose Carlos [Los Alamos National Laboratory; Kerbyson, Darren [Los Alamos National Laboratory; Lang, Mike [Los Alamos National Laboratory

    2010-01-01

    Increasing the core count on current and future processors is posing critical challenges to the memory subsystem to efficiently handle concurrent memory requests. The current trend to cope with this challenge is to increase the number of memory channels available to the processor's memory controller. In this paper we investigate the effectiveness of this approach on the performance of parallel scientific applications. Specifically, we explore the trade-off between employing multiple memory channels per memory controller and the use of multiple memory controllers. Experiments conducted on two current state-of-the-art multicore processors, a 6-core AMD Istanbul and a 4-core Intel Nehalem-EP, for a wide range of production applications show that there is a diminishing return when increasing the number of memory channels per memory controller. In addition, we show that this performance degradation can be efficiently addressed by increasing the ratio of memory controllers to channels while keeping the number of memory channels constant. Significant performance improvements, up to 28%, can be achieved by using two memory controllers, each with one channel, compared with one controller with two memory channels.
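
    The paper's measurements come from production applications on real hardware, but the underlying quantity -- sustained memory bandwidth -- can be probed with a simple STREAM-triad-style loop. The sketch below is a toy, single-process probe; pinning one copy per socket or memory domain (e.g. with numactl) is what would expose controller and channel scaling, and the array size is an assumption:

      import time
      import numpy as np

      def triad_bandwidth_gbs(n=50_000_000, reps=5):
          # STREAM-triad-like probe: a = b + s*c streams three large arrays, so
          # effective bandwidth is roughly 3 arrays * 8 bytes * n / elapsed time.
          # (The temporary created by 3.0*c adds extra traffic, so treat the
          # result as an approximate lower bound.)
          a = np.zeros(n)
          b = np.random.rand(n)
          c = np.random.rand(n)
          best = float("inf")
          for _ in range(reps):
              t0 = time.perf_counter()
              np.add(b, 3.0 * c, out=a)
              best = min(best, time.perf_counter() - t0)
          return 3 * 8 * n / best / 1e9

      print(f"{triad_bandwidth_gbs():.1f} GB/s per process")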

  17. Improved performances of polymer-based dielectric by using inorganic/organic core-shell nanoparticles

    Science.gov (United States)

    Benhadjala, W.; Bord-Majek, I.; Béchou, L.; Suhir, E.; Buet, M.; Rougé, F.; Gaud, V.; Plano, B.; Ousten, Y.

    2012-10-01

    BaTiO3/hyperbranched polyester/methacrylate core-shell nanoparticles were studied by varying the shell thickness and the methacrylate ratio. We demonstrated that the coalescence typically observed in traditional composites employing polymer matrices is significantly reduced. By modifying the shell thickness, the equivalent filler fraction was tuned from 7 wt. % to 41 wt. %. The obtained permittivities were compared with reported models for two-phase mixtures. The nonlinear behavior of the dielectric constant as a function of the equivalent filler fraction has been fitted with the Bruggeman equation. Methacrylate groups reduce the loss factor by a decade by improving nanoparticle adhesion. A permittivity reaching 85 at 1 kHz makes these core-shell nanoparticles a promising material for embedded capacitors.
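
    For reference, the symmetric Bruggeman effective-medium relation used in such fits has a closed-form solution for two phases with spherical inclusions. The permittivity values in the example are illustrative stand-ins, not the paper's measured values:

      import numpy as np

      def bruggeman_permittivity(eps_filler, eps_matrix, f):
          # Symmetric Bruggeman rule for spherical inclusions,
          #   f*(ef - e)/(ef + 2e) + (1 - f)*(em - e)/(em + 2e) = 0,
          # rearranged into a quadratic in e; keep the positive root.
          b = (3 * f - 1) * eps_filler + (2 - 3 * f) * eps_matrix
          return (b + np.sqrt(b**2 + 8 * eps_filler * eps_matrix)) / 4

      # Illustrative values only (a high-permittivity core in a polymer shell):
      for f in (0.07, 0.20, 0.41):
          print(f, bruggeman_permittivity(300.0, 4.0, f))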

  18. Yield performance of the European Union Maize Landrace Core Collection under multiple corn borer infestations

    OpenAIRE

    Malvar Pintos, Rosa Ana; Butrón Gómez, Ana María; Álvarez Rodríguez, Ángel; Padilla Alonso, Guillermo; Cartea González, María Elena; Revilla Temiño, Pedro; Ordás Pérez, Amando

    2007-01-01

    In Europe, corn borer attack is the main biotic stressor for the maize (Zea mays L.) crop. The European corn borer (Ostrinia nubilalis Hbn.) is the most important maize pest in central and northern Europe, while the pink stem borer (Sesamia nonagrioides Lef.) is predominant in warmer areas of southern Europe. The objective of this study was the evaluation of the European Union Maize Landrace Core Collection (EUMLCC) for yield under infestation with European corn borer (O. nubilalis) and pink stem borer ...

  19. Scalable High-Performance Parallel Design for Network Intrusion Detection Systems on Many-Core Processors

    OpenAIRE

    Jiang, Hayang; Xie, Gaogang; Salamatian, Kavé; Mathy, Laurent

    2013-01-01

    Network Intrusion Detection Systems (NIDSes) face significant challenges coming from the relentless network link speed growth and increasing complexity of threats. Both hardware accelerated and parallel software-based NIDS solutions, based on commodity multi-core and GPU processors, have been proposed to overcome these challenges.

  20. Structural performance of complex core systems for FRP-balsa composite sandwich bridge decks

    OpenAIRE

    Osei-Antwi, Michael

    2014-01-01

    Based on current fiber-reinforced polymer (FRP) composite construction principles, FRP decks fall into two categories: pultruded decks and sandwich decks. Sandwich decks comprise face sheets and either honeycombs or foams reinforced with internal FRP webs for shear resistance. The honeycomb structure and the webs cause debonding between the upper face sheets and the core due to the uneven support of the former. An alternative material that has high shear capacity and can provide uniform ...

  1. Field Performance of Three-Phase Amorphous Metal Core Distribution Transformers at Pearl Harbor, Hawaii

    Science.gov (United States)

    1990-08-01

    ...utility systems at Ford Island, Barbers Point Naval Air Station, the Naval Shipyard, and the Naval Supply Center at Pearl Harbor, Hawaii. The main ... Power Meter connected to a three-phase 4-wire amorphous core transformer under test at Barbers Point Naval Air Station. This testing procedure was

  2. New Nanocrystalline Core Performance Versus Finemet(Registered) for High-power Inductors

    Science.gov (United States)

    2008-12-01

    ...electric ground vehicle systems. The design and development of compact, high-power, and high-temperature inductors for a 150 kW ... nanocrystalline core material is compared.

  3. Design and Performance Improvements of the Prototype Open Core Flywheel Energy Storage System

    Science.gov (United States)

    Pang, D.; Anand, D. K. (Editor); Kirk, J. A. (Editor)

    1996-01-01

    A prototype magnetically suspended composite flywheel energy storage (FES) system is operating at the University of Maryland. This system, designed for spacecraft applications, incorporates recent advances in the technologies of composite materials, magnetic suspension, and permanent magnet brushless motor/generator. The current system is referred to as an Open Core Composite Flywheel (OCCF) energy storage system. This paper will present design improvements for enhanced and robust performance. Initially, when the OCCF prototype was spun above its first critical frequency of 4,500 RPM, the rotor movement would exceed the space available in the magnetic suspension gap and touchdown on the backup mechanical bearings would occur. On some occasions it was observed that, after touchdown, the rotor was unable to re-suspend as the speed decreased. Additionally, it was observed that the rotor would exhibit unstable oscillations when the control system was initially turned on. Our analysis suggested that the following problems existed: (1) The linear operating range of the magnetic bearings was limited due to electrical and magnetic saturation; (2) The inductance of the magnetic bearings was affecting the transient response of the system; (3) The flywheel was confined to a small movement because mechanical components could not be held to a tight tolerance; and (4) The location of the touchdown bearing magnifies the motion at the pole faces of the magnetic bearings when the linear range is crucial. In order to correct these problems an improved design of the flywheel energy storage system was undertaken. The magnetic bearings were re-designed to achieve a large linear operating range and to withstand load disturbances of at least 1 g. The external position transducers were replaced by a unique design which were resistant to magnetic field noise and allowed cancellation of the radial growth of the flywheel at high speeds. A central rod was utilized to ensure the concentricity

  4. I won't let you down... or will I? Core self-evaluations, other-orientation, anticipated guilt and gratitude, and job performance.

    Science.gov (United States)

    Grant, Adam M; Wrzesniewski, Amy

    2010-01-01

    Although core self-evaluations have been linked to higher job performance, research has shown variability in the strength of this relationship. We propose that high core self-evaluations are more likely to increase job performance for other-oriented employees, who tend to anticipate feelings of guilt and gratitude. We tested these hypotheses across 3 field studies using different operationalizations of both performance and other-orientation (prosocial motivation, agreeableness, and duty). In Study 1, prosocial motivation strengthened the association between core self-evaluations and the performance of professional university fundraisers. In Study 2, agreeableness strengthened the association between core self-evaluations and supervisor ratings of initiative among public service employees. In Study 3, duty strengthened the association between core self-evaluations and the objective productivity of call center employees, and this moderating relationship was mediated by feelings of anticipated guilt and gratitude. We discuss implications for theory and research on personality and job performance.
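
    The moderation claims above correspond statistically to an interaction term in a regression of performance on core self-evaluations and other-orientation. A minimal sketch on synthetic data (the variable names and effect sizes are invented for illustration, not taken from the studies):

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 300
      df = pd.DataFrame({"cse": rng.normal(size=n),      # core self-evaluations
                         "other": rng.normal(size=n)})   # other-orientation
      # Toy data in which the CSE -> performance slope grows with other-orientation.
      df["perf"] = (0.2 * df.cse + 0.1 * df.other
                    + 0.3 * df.cse * df.other + rng.normal(size=n))

      # Moderation shows up as a significant interaction term in the regression.
      fit = smf.ols("perf ~ cse * other", data=df).fit()
      print(fit.params[["cse", "other", "cse:other"]])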

  5. Benchmarking of energy time series

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, M.A.

    1990-04-01

    Benchmarking consists of the adjustment of time series data from one source in order to achieve agreement with similar data from a second source. The data from the latter source are referred to as the benchmark(s), and often differ in that they are observed at a lower frequency, represent a higher level of temporal aggregation, and/or are considered to be of greater accuracy. This report provides an extensive survey of benchmarking procedures which have appeared in the statistical literature, and reviews specific benchmarking procedures currently used by the Energy Information Administration (EIA). The literature survey includes a technical summary of the major benchmarking methods and their statistical properties. Factors influencing the choice and application of particular techniques are described and the impact of benchmark accuracy is discussed. EIA applications and procedures are reviewed and evaluated for residential natural gas deliveries series and coal production series. It is found that the current method of adjusting the natural gas series is consistent with the behavior of the series and the methods used in obtaining the initial data. As a result, no change is recommended. For the coal production series, a staged approach based on a first differencing technique is recommended over the current procedure. A comparison of the adjustments produced by the two methods is made for the 1987 Indiana coal production series. 32 refs., 5 figs., 1 tab.
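
    A standard movement-preserving adjustment of this kind is the proportional first-difference Denton method. The sketch below states it as a small equality-constrained quadratic program solved through its KKT system; the monthly indicator and annual benchmarks are toy values:

      import numpy as np

      def denton_pfd(indicator, benchmarks, p=12):
          # Proportional first-difference Denton benchmarking: choose x minimizing
          #   sum_t ( x_t/z_t - x_{t-1}/z_{t-1} )^2
          # subject to each block of p values of x summing to its benchmark,
          # i.e. adjust the high-frequency series while preserving its movement.
          z = np.asarray(indicator, dtype=float)
          b = np.asarray(benchmarks, dtype=float)
          n, m = len(z), len(b)
          D = np.diff(np.eye(n), axis=0)          # first-difference operator
          A = np.kron(np.eye(m), np.ones(p))      # block (e.g. annual) sums of x
          C = A * z                               # same constraints on w = x/z
          K = np.block([[2 * D.T @ D, C.T],
                        [C, np.zeros((m, m))]])   # KKT system of the QP
          w = np.linalg.solve(K, np.concatenate([np.zeros(n), b]))[:n]
          return z * w                            # benchmarked series

      monthly = 100 + np.sin(np.arange(24) / 3) * 10   # toy indicator
      annual = np.array([1260.0, 1175.0])              # toy benchmarks
      x = denton_pfd(monthly, annual)
      print(x.reshape(2, 12).sum(axis=1))              # -> [1260. 1175.]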

  6. Correlational effect size benchmarks.

    Science.gov (United States)

    Bosco, Frank A; Aguinis, Herman; Singh, Kulraj; Field, James G; Pierce, Charles A

    2015-03-01

    Effect size information is essential for the scientific enterprise and plays an increasingly central role in the scientific process. We extracted 147,328 correlations and developed a hierarchical taxonomy of variables reported in Journal of Applied Psychology and Personnel Psychology from 1980 to 2010 to produce empirical effect size benchmarks at the omnibus level, for 20 common research domains, and for an even finer grained level of generality. Results indicate that the usual interpretation and classification of effect sizes as small, medium, and large bear almost no resemblance to findings in the field, because distributions of effect sizes exhibit tertile partitions at values approximately one-half to one-third those intuited by Cohen (1988). Our results offer information that can be used for research planning and design purposes, such as producing better informed non-nil hypotheses and estimating statistical power and planning sample size accordingly. We also offer information useful for understanding the relative importance of the effect sizes found in a particular study in relationship to others and which research domains have advanced more or less, given that larger effect sizes indicate a better understanding of a phenomenon. Also, our study offers information about research domains for which the investigation of moderating effects may be more fruitful and provide information that is likely to facilitate the implementation of Bayesian analysis. Finally, our study offers information that practitioners can use to evaluate the relative effectiveness of various types of interventions. PsycINFO Database Record (c) 2015 APA, all rights reserved.
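
    Operationally, such empirical benchmarks are just percentiles of the observed |r| distribution. The sketch below uses a synthetic corpus in place of the study's 147,328 extracted correlations:

      import numpy as np

      def empirical_benchmarks(correlations):
          # Tertile boundaries of the |r| distribution: below the first cut
          # reads as "small", between the cuts "medium", above "large".
          r = np.abs(np.asarray(correlations))
          return np.percentile(r, [100 / 3, 200 / 3])

      # A toy corpus standing in for the extracted correlations:
      rs = np.random.default_rng(2).beta(1.5, 6.0, size=10_000)
      small_medium, medium_large = empirical_benchmarks(rs)
      print(small_medium, medium_large)   # far below Cohen's 0.30 / 0.50 cut-offs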

  7. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    The CASL neutronics simulator MPACT is under development for neutronics and T-H coupled simulation of pressurized water reactors. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. It is a challenge to validate the depletion capability because of insufficient measured data. One indirect way to validate it is to perform a code-to-code comparison on benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  8. Assessing and benchmarking multiphoton microscopes for biologists.

    Science.gov (United States)

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F

    2014-01-01

    Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to its superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found few standard ways described in the literature to distinguish between microscopes or to benchmark existing microscopes by measuring their overall quality and efficiency. Here, we discuss some simple parameters and methods that can be used either within a multiphoton facility or by a prospective purchaser to benchmark performance. This can assist both in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs.

  9. Physics benchmarks of the VELO upgrade

    CERN Document Server

    Eklund, Lars

    2017-01-01

    The LHCb Experiment at the LHC is successfully performing precision measurements primarily in the area of flavour physics. The collaboration is preparing an upgrade that will start taking data in 2021 with a trigger-less readout at five times the current luminosity. The vertex locator has been crucial in the success of the experiment and will continue to be so for the upgrade. It will be replaced by a hybrid pixel detector and this paper discusses the performance benchmarks of the upgraded detector. Despite the challenging experimental environment, the vertex locator will maintain or improve upon its benchmark figures compared to the current detector. Finally the long term plans for LHCb, beyond those of the upgrade currently in preparation, are discussed.

  10. Current Status of the Transmutation Reactor Technology and Preliminary Evaluation of Transmutation Performance of the KALIMER Core

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Ser Gi; Sim, Yoon Sub; Kim, Yeong Il; Kim, Young Gyum; Lee, Byung Woon; Song, Hoon; Lee, Ki Bog; Jang, Jin Wook; Lee, Dong Uk

    2005-08-15

    ... devised. It has been considered that the degradation of core performance resulting from an increase of the transmutation rate is a very important problem. From the analysis of the state-of-the-art of nuclear transmutation technology, the following technical research topics are identified as solution paths for the future development and enhancement of the transmutation technology: 1) improvement of core safety through the reduction of the coolant void reactivity worth by using a void duct assembly, 2) design of a reference transmutation reactor for future transmutation research through the change of the KALIMER-600 reactor core into a transmutation reactor and its core performance analysis, 3) an optimization study of the hybrid loading of uranium-free fuel and uranium fuel to improve the transmutation rate and the core safety parameters. Finally, the feasibility of the transmutation core suggested above, where void duct assemblies are devised to improve the sodium void reactivity worth and to achieve power flattening under a single fuel enrichment and a single type of fuel assembly, is analyzed and assessed. The results show that this core has a sodium coolant void reactivity worth of less than 3$ and can transmute the TRU nuclides discharged from two LWRs of the same thermal power.

  11. Benchmarking in water project analysis

    Science.gov (United States)

    Griffin, Ronald C.

    2008-11-01

    The with/without principle of cost-benefit analysis is examined for the possible bias that it brings to water resource planning. Theory and examples for this question are established. Because benchmarking against the demonstrably low without-project hurdle can detract from economic welfare and can fail to promote efficient policy, improvement opportunities are investigated. In lieu of the traditional, without-project benchmark, a second-best-based "difference-making benchmark" is proposed. The project authorizations and modified review processes instituted by the U.S. Water Resources Development Act of 2007 may provide for renewed interest in these findings.

  12. Core-Shell Diamond as a Support for Solid-Phase Extraction and High-Performance Liquid Chromatography

    Energy Technology Data Exchange (ETDEWEB)

    Saini, Gaurav; Jensen, David S.; Wiest, Landon A.; Vail, Michael A.; Dadson, Andrew; Lee, Milton L.; Shutthanandan, V.; Linford, Matthew R.

    2010-06-01

    We report the formation of core-shell diamond particles for solid phase extraction (SPE) and high performance liquid chromatography (HPLC) made by layer-by-layer (LbL) deposition. Their synthesis begins with the amine functionalization of microdiamond by its immersion in an aqueous solution of a primary amine-containing polymer (polyallylamine (PAAm)). The amine-terminated microdiamond is then immersed in an aqueous suspension of nanodiamond, which leads to adsorption of the nanodiamond. Alternating (self-limiting) immersions in the solutions of the amine-containing polymer and the suspension of nanodiamond are continued until the desired number of nanodiamond layers is formed around the microdiamond. Finally, the core-shell particles are cross-linked with 1,2,5,6-diepoxycyclooctane or reacted with 1,2-epoxyoctadecane. Layer-by-layer deposition of PAAm and nanodiamond is also studied on planar Si/SiO2 surfaces, which were characterized by SEM, Rutherford backscattering spectrometry (RBS) and nuclear reaction analysis (NRA). Core-shell particles are characterized by diffuse reflectance infrared Fourier transform spectroscopy (DRIFT), environmental scanning electron microscopy (ESEM), and Brunauer-Emmett-Teller (BET) surface area and pore size measurements. Larger (ca. 50 μm) core-shell diamond particles have much higher surface areas and analyte loading capacities in SPE than nonporous solid diamond particles. Smaller (ca. 3 μm), normal and reversed phase, core-shell diamond particles have been used for HPLC, with 36,300 plates per meter for mesitylene in a separation of benzene and alkyl benzenes on a C18 adsorbent, and 54,800 plates per meter for diazinon in a similar separation of two pesticides.
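
    Plates-per-meter figures like those quoted are conventionally derived from the half-height plate-count formula N = 5.54 (t_R / w_1/2)^2, normalized by column length. The peak parameters below are hypothetical, not taken from the paper:

      def plates_per_meter(t_r, w_half, column_length_m):
          # Half-height plate count N = 5.54 * (t_R / w_1/2)^2, per metre.
          n_plates = 5.54 * (t_r / w_half) ** 2
          return n_plates / column_length_m

      # Hypothetical peak: retention time 2.1 min, half-height width 0.05 min,
      # on a 10 cm column:
      print(plates_per_meter(2.1, 0.05, 0.10))   # ~98,000 plates per meter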

  13. Investigating the effectiveness of many-core network processors for high performance cyber protection systems. Part I, FY2011.

    Energy Technology Data Exchange (ETDEWEB)

    Wheeler, Kyle Bruce; Naegle, John Hunt; Wright, Brian J.; Benner, Robert E., Jr.; Shelburg, Jeffrey Scott; Pearson, David Benjamin; Johnson, Joshua Alan; Onunkwo, Uzoma A.; Zage, David John; Patel, Jay S.

    2011-09-01

    This report documents our first year of efforts to address the use of many-core processors for high performance cyber protection. As demands grow for higher bandwidth (beyond 1 Gbit/sec) on network connections, the need for faster and more efficient solutions to cyber security grows. Fortunately, in recent years, the development of many-core network processors has seen increased interest. Prior working experience with many-core processors has led us to investigate their effectiveness for cyber protection tools, with particular emphasis on high performance firewalls. Although advanced algorithms for smarter cyber protection of high-speed network traffic are being developed, these advanced analysis techniques require significantly more computational capability than static techniques. Moreover, many locations where cyber protections are deployed have limited power, space and cooling resources. This makes the use of traditionally large computing systems impractical for the front-end systems that process large network streams; hence the drive for this study, which could potentially yield a highly reconfigurable and rapidly scalable solution.

  14. Core-shell nanoparticles optical sensors - Rational design of zinc ions fluorescent nanoprobes of improved analytical performance

    Science.gov (United States)

    Woźnica, Emilia; Gasik, Joanna; Kłucińska, Katarzyna; Kisiel, Anna; Maksymiuk, Krzysztof; Michalska, Agata

    2017-10-01

    In this work the effect of the affinity of an analyte to a receptor on the response of nanostructural fluorimetric probes is discussed. Core-shell nanoparticle sensors are prepared that benefit from the properties of the phases involved, leading to improved analytical performance. The optical transduction system chosen is independent of pH, thus a change of sample pH can be used to control the analyte-receptor affinity through the 'conditional' binding constant prevailing within the lipophilic phase. It is shown that by affecting the 'conditional' binding constant the performance of the sensor can be fine-tuned. As expected, an increase in the 'conditional' affinity of the ligand embedded in the lipophilic phase to the analyte results in higher sensitivity over a narrow concentration range - a bulk reaction and a sigmoidal response of emission intensity vs. the logarithm of concentration. To induce a linear dependence of emission intensity vs. the logarithm of analyte concentration covering a broad concentration range, a spatial confinement of the reaction zone is proposed, using core-shell nanostructures. The core material, polypyrrole nanospheres, is effectively not permeable to the analyte-ligand complex, thus the reaction is limited to the outer shell layer of the polymer prepared from poly(maleic anhydride-alt-1-octadecene). For the system introduced herein, a linear dependence of emission intensity vs. the logarithm of Zn2+ concentration was obtained within the range from 10-7 to 10-1 M.

  15. Durable polydopamine-coated porous sulfur core-shell cathode for high performance lithium-sulfur batteries

    Science.gov (United States)

    Deng, Yuanfu; Xu, Hui; Bai, Zhaowen; Huang, Baoling; Su, Jingyang; Chen, Guohua

    2015-12-01

    Lithium-sulfur batteries show fascinating potential for advanced energy systems due to their high specific capacity, low cost, and environmental benignity. However, their wide application has been plagued by low coulombic efficiency, fast capacity fading, and poor rate performance. Herein, a facile method for the preparation of S@PDA (PDA = polydopamine) composites with a core-shell structure and good electrochemical performance is reported, together with first-principles calculations on the interactions between PDA and polysulfides. Taking advantage of the core-shell structure with its porous sulfur core, the high mechanical flexibility of PDA for accommodating the volumetric variation during the discharge/charge processes, the good lithium-ion conductivity, and the strong chemical interactions between the lone-pair-bearing nitrogen/oxygen atoms and lithium polysulfides, which alleviate their dissolution, the S@PDA composites exhibit high discharge capacities at different current densities (1048 and 869 mAh g⁻¹ at 0.2 and 0.8 A g⁻¹, respectively) and excellent capacity retention. A capacity decay as low as 0.021% per cycle and an average coulombic efficiency of 98.5% are observed over long-term cycling of 890 cycles at 0.8 A g⁻¹. The S@PDA electrode thus has great potential as a low-cost cathode in high-energy Li-S batteries.
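
    As a quick consistency check on the cycling figures, compounding the quoted per-cycle decay over the full test (assuming the 0.021% figure is a per-cycle multiplicative loss) gives the overall retention:

        # Compound the reported 0.021% per-cycle capacity decay over 890
        # cycles, assuming a multiplicative (geometric) decay model.
        decay_per_cycle = 0.00021
        cycles = 890
        retention = (1 - decay_per_cycle) ** cycles
        print(f"retained capacity after {cycles} cycles: {retention:.1%}")
        # prints roughly 83% of the initial capacity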

  16. Coupled analysis of core thermal hydraulics and fuel performance to evaluate a thermally induced fuel failure in an SFR subassembly

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Sun Rock; Chang, Doo Soo; Kim, Sang Ji [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    A limiting-factor analysis in a core thermal design is highly important to assure the safe and reliable operation of a reactor system. In a sodium-cooled fast reactor (SFR), the coolant thermal conductivity is roughly a hundred times greater than that of water. Moreover, the coolant boiling temperature in an SFR is around 900 °C, much higher than that of the water coolant in a PWR. Considering typical operating temperatures, an SFR has about a 300 °C thermal margin to its boiling point. Therefore, instead of the DNBR (departure from nucleate boiling ratio) used in a PWR, the core thermal design of an SFR must assure proper fuel performance and safety, with design limits closely tied to the temperature distribution and material behavior under various operating conditions. Typical limiting factors in SFRs are the thermal component of the plastic hoop strain, the radial primary hoop stress, and the cumulative damage factor during normal operation. However, previous fuel performance codes evaluate only a single fuel pin, which neglects the radial peaking factors and yields overly conservative results. In this work, a multi-physics analysis is performed using coupled thermal-hydraulic and fuel performance codes.
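
    The coupling described above - exchanging fields between a core thermal-hydraulics code and a fuel performance code - can be sketched as a fixed-point (Picard) iteration. Both one-line "solvers" below are invented stand-ins; only the iteration structure reflects the approach:

        # Sketch of coupled analysis as a fixed-point iteration: the
        # thermal-hydraulics step sets a cladding temperature from the pin
        # power, and the fuel-performance step updates the effective pin
        # power from that temperature. Both models are invented one-liners;
        # real codes resolve full subassembly and pin-level fields.

        def thermal_hydraulics(pin_power_kw):
            """Stand-in: cladding temperature rises with pin power."""
            return 400.0 + 5.0 * pin_power_kw        # degrees C, illustrative

        def fuel_performance(clad_temp_c):
            """Stand-in: temperature feedback slightly shifts pin power."""
            return 30.0 * (1.0 + 1e-4 * (clad_temp_c - 400.0))

        power = 30.0                                  # initial guess, kW/pin
        for it in range(50):
            temp = thermal_hydraulics(power)
            new_power = fuel_performance(temp)
            if abs(new_power - power) < 1e-6:
                break
            power = new_power
        print(f"converged after {it} iterations: T_clad = {temp:.1f} C")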

  17. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment that is dominated by radiative effects; in this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during the development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed, each with a simple scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to reproduce both the behaviour of established and widely used codes and the results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
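
    The Jacobian-free Newton-Krylov machinery referred to above is available off the shelf, which makes the idea easy to demonstrate on a toy problem. The residual below (the 1-D Bratu problem) is arbitrary, and no preconditioner is applied, unlike MUSIC's physics-based one:

        # Minimal Jacobian-free Newton-Krylov demonstration using SciPy's
        # newton_krylov, which builds Jacobian-vector products from finite
        # differences of the residual instead of forming the Jacobian.
        # The toy residual is u'' + exp(u) = 0 with u = 0 at both ends.
        import numpy as np
        from scipy.optimize import newton_krylov

        N = 50
        h = 1.0 / (N - 1)

        def residual(u):
            r = np.empty_like(u)
            r[0], r[-1] = u[0], u[-1]            # Dirichlet boundaries
            r[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2 + np.exp(u[1:-1])
            return r

        u = newton_krylov(residual, np.zeros(N), method="lgmres")
        print("max |residual|:", np.max(np.abs(residual(u))))
        print("peak of solution:", u.max())      # about 0.14 for this case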

  18. Exploring performance and energy tradeoffs for irregular applications: A case study on the Tilera many-core architecture

    Energy Technology Data Exchange (ETDEWEB)

    Panyala, Ajay; Chavarría-Miranda, Daniel; Manzano, Joseph B.; Tumeo, Antonino; Halappanavar, Mahantesh

    2017-06-01

    High performance, parallel applications with irregular data accesses are becoming a critical workload class for modern systems. In particular, the execution of such workloads on emerging many-core systems is expected to be a significant component of applications in data mining, machine learning, scientific computing, and graph analytics. However, power and energy constraints limit the capabilities of the individual cores, memory hierarchy, and on-chip interconnect of such systems, leading to architectural and software trade-offs that must be understood in the context of the intended application's behavior. Irregular applications are notoriously hard to optimize given their data-dependent access patterns, lack of structured locality, and complex data structures and code patterns. We have ported two irregular applications, graph community detection using the Louvain method (Grappolo) and high-performance conjugate gradient (HPCCG), to the Tilera many-core system and have conducted a detailed study of platform-independent and platform-specific optimizations that improve their performance as well as reduce their overall energy consumption. To conduct this study, we employ an auto-tuning based approach that explores the optimization design space along three dimensions - memory layout schemes, GCC compiler flag choices, and OpenMP loop scheduling options. We leverage MIT's OpenTuner auto-tuning framework to explore and recommend energy-optimal choices for different combinations of parameters. We then conduct an in-depth architectural characterization to understand the memory behavior of the selected workloads. Finally, we perform a correlation study to demonstrate the interplay between the hardware behavior and application characteristics. Using auto-tuning, we demonstrate whole-node energy savings and performance improvements of up to 49.6% and 60% relative to a baseline instantiation, and up to 31% and 45.4% relative to manually optimized variants.
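
    The three-dimensional search space named above (memory layout x compiler flags x OpenMP schedule) is small enough per dimension that even naive enumeration shows the structure of such a tuner. The candidate values and the measure() stub are placeholders; the study itself drives this loop with OpenTuner's ensemble search, whose API is not reproduced here:

        # Structure of a naive auto-tuner over the three dimensions named in
        # the study: memory layout, GCC flag set, and OpenMP loop schedule.
        # measure() is a stub; a real tuner would compile and run the
        # workload and read hardware energy counters.
        import itertools
        import random

        LAYOUTS   = ["row-major", "col-major", "blocked"]
        GCC_FLAGS = ["-O2", "-O3", "-O3 -funroll-loops"]
        SCHEDULES = ["static", "dynamic,64", "guided"]

        def measure(layout, flags, schedule):
            """Stub: return (runtime_s, energy_j) for one configuration."""
            random.seed(hash((layout, flags, schedule)))  # stable within a run
            return random.uniform(1.0, 2.0), random.uniform(50.0, 120.0)

        best = min(
            itertools.product(LAYOUTS, GCC_FLAGS, SCHEDULES),
            key=lambda cfg: measure(*cfg)[1],             # energy-optimal choice
        )
        runtime, energy = measure(*best)
        print(f"energy-optimal config: {best}  ({runtime:.2f} s, {energy:.1f} J)")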

  19. The Performance of the PedsQL™ Generic Core Scales in Children with Sickle Cell Disease

    OpenAIRE

    2008-01-01

    The objective of this study was to determine the feasibility, reliability and validity of the Pediatric Quality of Life Inventory™ generic core scales (PedsQL™ questionnaire) in children with sickle cell disease. This was a cross-sectional study of children from an urban hospital-based sickle cell disease clinic and an urban primary care clinic. The study participants were children ages 2 to 18 years who presented to clinic for a routine visit. Health-related quality of life (HRQL) was the ma...

  20. IAEA GT-MHR Benchmark Calculations Using the HELIOS/MASTER Two-Step Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kyung Hoon; Kim, Kang Seog; Cho, Jin Young; Song, Jae Seung; Noh, Jae Man; Lee, Chung Chan; Zee, Sung Quun

    2007-05-15

    A new two-step procedure based on the HELIOS/MASTER code system has been developed for prismatic VHTR physics analysis. This procedure employs the HELIOS code for the transport lattice calculation to generate few-group constants, and the MASTER code for the 3-dimensional core calculation to perform the reactor physics analysis. The double heterogeneity effect due to the random distribution of the particulate fuel is handled with the recently developed reactivity-equivalent physical transformation (RPT) method. The strong spectral effects of the graphite-moderated reactor core are addressed both by optimizing the number of energy groups and the group boundaries, and by employing a partial-core model instead of a single-block one to generate the few-group cross sections. Burnable poisons in the inner reflector and the asymmetrically located large control rod are treated by adopting equivalence theory applied to multi-block models to generate surface-dependent discontinuity factors. Effective reflector cross sections were generated by using a simple mini-core model and equivalence theory. In this study, the IAEA GT-MHR benchmark problems with plutonium fuel were analyzed using the HELIOS/MASTER code package and the Monte Carlo code MCNP. The benchmark problems include pin, block, and core models. The computational results of the HELIOS/MASTER code system were compared with those of MCNP and other participants. The results show that the two-step procedure using HELIOS/MASTER can be applied to reactor physics analysis of the prismatic VHTR with good accuracy.
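
    The division of labor in such a two-step scheme is easy to illustrate: the lattice code condenses the transport physics into few-group constants, and the core code then works only with those constants. A minimal sketch, computing a two-group infinite-medium multiplication factor from homogenized constants (all cross-section values are invented and resemble no particular GT-MHR composition; fast-to-thermal downscatter only):

        # Illustration of the "two-step" idea: the lattice calculation yields
        # homogenized few-group constants, and the core-level calculation
        # uses only those. Here, a two-group infinite-medium k_inf with
        # downscatter only; all numbers are invented for illustration.
        nu_sf = [0.006, 0.110]   # nu * Sigma_f per group (1/cm)
        sig_a = [0.010, 0.080]   # absorption cross sections (1/cm)
        sig_s12 = 0.020          # fast -> thermal scattering (1/cm)

        # Fast flux normalized to 1; thermal flux from the downscatter
        # balance sig_a2 * phi2 = sig_s12 * phi1.
        phi1 = 1.0
        phi2 = sig_s12 * phi1 / sig_a[1]

        # Production over removal from the fast group (fission neutrons
        # are assumed born fast).
        k_inf = (nu_sf[0] * phi1 + nu_sf[1] * phi2) / ((sig_a[0] + sig_s12) * phi1)
        print(f"k_inf = {k_inf:.4f}")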