WorldWideScience

Sample records for high performance requirements

  1. High performance sealing - meeting nuclear and aerospace requirements

    Wensel, R.; Metcalfe, R.

    1994-11-01

    Although high performance sealing is required in many places, two industries lead all others in terms of their demand: nuclear and aerospace. The factors that govern the high reliability and integrity of seals, particularly elastomer seals, for both industries are discussed. Aerospace requirements include low structural weight and a broad range of conditions, from the cold vacuum of space to the hot, high pressures of rocket motors. It is shown, by example, how a seal can be made an integral part of a structure in order to improve performance, rather than using a conventional handbook design. Typical processes are then described for selection, specification and procurement of suitable elastomers, functional and accelerated performance testing, database development and service-life prediction. Methods for quality assurance of elastomer seals are summarized. Potentially catastrophic internal defects are a particular problem for conventional non-destructive inspection techniques. A new method of elastodynamic testing for these is described. (author)

  2. Investigation of high-alpha lateral-directional control power requirements for high-performance aircraft

    Foster, John V.; Ross, Holly M.; Ashley, Patrick A.

    1993-01-01

    Designers of the next-generation fighter and attack airplanes are faced with the requirements of good high angle-of-attack maneuverability as well as efficient high speed cruise capability with low radar cross section (RCS) characteristics. As a result, they are challenged with the task of making critical design trades to achieve the desired levels of maneuverability and performance. This task has highlighted the need for comprehensive, flight-validated lateral-directional control power design guidelines for high angles of attack. A joint NASA/U.S. Navy study has been initiated to address this need and to investigate the complex flight dynamics characteristics and controls requirements for high angle-of-attack lateral-directional maneuvering. A multi-year research program is underway which includes ground-based piloted simulation and flight validation. This paper will give a status update on this program, including a program overview, a description of the test methodology, and preliminary results.

  3. Requirements for high performance computing for lattice QCD. Report of the ECFA working panel

    Jegerlehner, F.; Kenway, R.D.; Martinelli, G.; Michael, C.; Pene, O.; Petersson, B.; Petronzio, R.; Sachrajda, C.T.; Schilling, K.

    2000-01-01

    This report, prepared at the request of the European Committee for Future Accelerators (ECFA), contains an assessment of the High Performance Computing resources which will be required in coming years by European physicists working in Lattice Field Theory and a review of the scientific opportunities which these resources would open. (orig.)

  4. High Performance Computing and Storage Requirements for Nuclear Physics: Target 2017

    Gerber, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wasserman, Harvey [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-04-30

    In April 2014, NERSC, ASCR, and the DOE Office of Nuclear Physics (NP) held a review to characterize high performance computing (HPC) and storage requirements for NP research through 2017. This review is the 12th in a series of reviews held by NERSC and Office of Science program offices that began in 2009. It is the second for NP, and the final in the second round of reviews that covered the six Office of Science program offices. This report is the result of that review.

  5. Bearings for high performance requirements in two-stroke and four-stroke diesel engines

    Ederer, U.G.

    1983-11-01

    Most measures to reduce fuel consumption in diesel engines lead, directly or indirectly, to more severe operating conditions for the engine bearings. In ever more instances the bearings become the components which limit useful engine life and the time between overhauls. Bearings with improved performance characteristics are required. During recent years, Miba Gleitlager AG has developed several solutions to meet these requirements. They consist of either material improvements, such as a cast white metal (SnSb 12Cu 3 NiCd) with higher fatigue strength, or an electroplated overlay (PbSn 18 Cu) with improved fatigue and wear resistance. New design solutions found included the steel-Al Sn 6-WM 85 bearing for two-stroke engines, the steel-Al Sn 6 PbSn 18 Cu bearing applied to two-stroke crosshead bearings, the steel-AlZn 4,5 PbSn 18 Cu bearing for high bearing loads in four-stroke engines, and the Miba-Rillenlager with its radically new running-surface structure for extreme load and wear conditions. The application potential of these bearings and the operating experience with them are discussed in this article.

  6. Design study of technology requirements for high performance single-propeller-driven business airplanes

    Kohlman, D. L.; Hammer, J.

    1985-01-01

    Developments in aerodynamic, structural and propulsion technologies which influence the potential for significant improvements in performance and fuel efficiency of general aviation business airplanes are discussed. The advancements include such technologies as natural laminar flow, composite materials, and advanced intermittent combustion engines. The design goal for this parametric design study is a range of 1300 nm at 300 knots true airspeed with a payload of 1200 lbs at 35,000 ft cruise altitude. The individual and synergistic effects of various advanced technologies on the optimization of this class of high performance, single engine, propeller driven business airplanes are identified.

  7. Solder bond requirement for large, built-up, high-performance conductors

    Willig, R.L.

    1981-01-01

    Some large built-up conductors fabricated for large superconducting magnets are designed to operate above the maximum recovery current. Because the stability of these conductors is sensitive to the quality of the solder bond joining the composite superconductor to the high-conductivity substrate, a minimum bond requirement is necessary. The present analysis finds that the superconductor is unstable and becomes abruptly resistive when there are temperature excursions into the current sharing region of a poorly bonded conductor. This abrupt transition produces eddy current heating in the vicinity of the superconducting filaments and causes a sharp reduction in the minimum propagating zone (MPZ) energy. This sensitivity of the MPZ energy to the solder bond contact area is used to specify a minimum bond requirement. For the superconducting MHD magnet built for the Component Development Integration Facility (CDIF), the minimum bonded surface area is 0.68 cm²/cm, which is 44% of the composite perimeter. 5 refs
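
    As a quick consistency check of the CDIF figures quoted above (a back-of-the-envelope reading of the numbers in this record, not a value taken from the original report): a bonded area of 0.68 cm² per cm of conductor length is a bonded width of 0.68 cm, so a 44% bond fraction implies a composite perimeter of roughly

        P_{\text{composite}} = \frac{0.68\ \text{cm}}{0.44} \approx 1.5\ \text{cm per cm of conductor length}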

  8. AREVA's fuel assemblies addressing high performance requirements of the worldwide PWR fleet

    Anniel, Marc; Bordy, Michel-Aristide

    2009-01-01

    Taking advantage of its presence in fuel activities since the start of commercial nuclear operation worldwide, AREVA is continuing to support its customers with reliability as the priority, to: > participate in plant operational performance for in-core fuel reliability, with Zero Tolerance for Failure (ZTF) as a continuous improvement target and the minimisation of manufacturing/quality troubles, > guarantee the supply chain proven product stability and continuous availability, > support performance improvements with proven design and technology for fuel management updating and cycle cost optimization, > support licensing assessments for fuel assemblies and reloads (data/methodologies/services), > meet regulatory challenges regarding new phenomena, addressing emergent performance issues and emerging industry challenges for changing operating regimes. This capacity is based on AREVA supplies accumulating very large experience both in manufacturing and in plant operation, which is demonstrated by: > manufacturing locations in 4 countries, including 9 fuel factories in the USA, Germany, Belgium and France; up to now about 120,000 fuel assemblies and 8,000 RCCAs have been released to PWR nuclear countries from AREVA's European factories, > irradiation performed or in progress in about half of the world's PWR nuclear plants. Our optimum performances cover rod burn-ups of up to 82 GWd/tU and fuel assemblies successfully operated under various worldwide fuel management types. AREVA's experience, which is the largest in the world, has the extensive support of well-known fuel components such as the M5™ cladding, the MONOBLOC™ guide tube, and the HTP™ and HMP™ structure components, and of the comprehensive services provided in engineering, irradiation and post-irradiation fields. All of AREVA's fuel knowledge is devoted to extending the definition of fuel reliability to cover the whole scope of fuel vendor support. Our Top Reliability and Quality provide customers with continuous

  9. High Performance Computing and Storage Requirements for Biological and Environmental Research Target 2017

    Gerber, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Wasserman, Harvey [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC)

    2013-05-01

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,500 users working on some 650 projects that involve nearly 600 codes in a wide variety of scientific disciplines. In addition to large-scale computing and storage resources, NERSC provides support and expertise that help scientists make efficient use of its systems. The latest review revealed several key requirements, in addition to achieving its goal of characterizing BER computing and storage needs.

  10. A regulatory perspective on design and performance requirements for engineered systems in high-level waste

    Bernero, R.M.

    1992-01-01

    For engineered systems, this paper gives an overview of some of the current activities at the U.S. Nuclear Regulatory Commission (NRC), with the intent of elucidating how the regulatory process works in the management of high-level waste (HLW). Throughout the waste management cycle, starting with packaging and transportation, and continuing to final closure of a repository, these activities are directed at taking advantage of the prelicensing consultation period, a period in which the NRC, DOE and others can interact in ways that will reduce regulatory, technical and institutional uncertainties, and open the path to development and construction of a deep geologic repository for permanent disposal of HLW. Needed interactions in the HLW program are highlighted. Examples of HLW regulatory activities are given in discussions of a multipurpose-cask concept and of current NRC work on the meaning of the term substantially complete containment

  11. Responsive design high performance

    Els, Dewald

    2015-01-01

    This book is ideal for developers who have experience in developing websites or possess minor knowledge of how responsive websites work. No experience of high-level website development or performance tweaking is required.

  12. Organizing Performance Requirements For Dynamical Systems

    Malchow, Harvey L.; Croopnick, Steven R.

    1990-01-01

    Paper describes methodology for establishing performance requirements for complicated dynamical systems. Uses top-down approach. In series of steps, makes connections between high-level mission requirements and lower-level functional performance requirements. Provides systematic delineation of elements accommodating design compromises.

  13. Memory controllers for high-performance and real-time MPSoCs : requirements, architectures, and future trends

    Akesson, K.B.; Huang, Po-Chun; Clermidy, F.; Dutoit, D.; Goossens, K.G.W.; Chang, Yuan-Hao; Kuo, Tei-Wei; Vivet, P.; Wingard, D.

    2011-01-01

    Designing memory controllers for complex real-time and high-performance multi-processor systems-on-chip is challenging, since sufficient capacity and (real-time) performance must be provided in a reliable manner at low cost and with low power consumption. This special session contains four

  14. The FAIR timing master: a discussion of performance requirements and architectures for a high-precision timing system

    Kreider, M.

    2012-01-01

    Production chains in a particle accelerator are complex structures with many inter-dependencies and multiple paths to consider. This ranges from system initialization and synchronization of numerous machines to interlock handling and appropriate contingency measures like beam dump scenarios. The FAIR facility will employ White Rabbit, a time-based system which delivers an instruction and a corresponding execution time to a machine. In order to meet the deadlines in any given production chain, instructions need to be sent out ahead of time. For this purpose, code execution and message delivery times need to be known in advance. The FAIR Timing Master needs to be reliably capable of satisfying these timing requirements as well as being fault tolerant. Event sequences of recorded production chains indicate that low reaction times to internal and external events and fast, parallel execution are required. This suggests a slim architecture, especially devised for this purpose. Using the thread model of an OS or other high level programs on a generic CPU would be counterproductive when trying to achieve deterministic processing times. This paper deals with the analysis of said requirements as well as a comparison of known processor and virtual machine architectures and the possibilities of parallelization in programmable hardware. In addition, existing proposals at GSI will be checked against these findings. The final goal will be to determine the best instruction set for modeling any given production chain and devising a suitable architecture to execute these models. (authors)
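
    A minimal sketch of the time-based control idea described above (illustrative only: the class, field names and timing margins are assumptions, not the actual White Rabbit or FAIR interfaces). Each machine receives an instruction together with an absolute execution timestamp, and the timing master may only dispatch a message while the worst-case processing and delivery times still fit before that deadline:

        from dataclasses import dataclass

        @dataclass
        class TimingMessage:
            """An instruction paired with an absolute execution time, as in a time-based control system."""
            instruction: str
            execute_at_ns: int  # absolute execution deadline in nanoseconds

        # Hypothetical worst-case bounds; real values would come from hardware and network analysis.
        WORST_CASE_PROCESSING_NS = 2_000   # bounded code execution time in the timing master
        WORST_CASE_DELIVERY_NS = 10_000    # bounded message delivery time to the target machine

        def latest_dispatch_time(msg: TimingMessage) -> int:
            """Latest moment the master can start processing and still meet the deadline."""
            return msg.execute_at_ns - (WORST_CASE_PROCESSING_NS + WORST_CASE_DELIVERY_NS)

        def dispatch(msg: TimingMessage, now_ns: int) -> bool:
            """Send only while the deadline is still reachable; otherwise a contingency must be triggered."""
            if now_ns <= latest_dispatch_time(msg):
                # hand the message to the timing network here (omitted)
                return True
            return False  # too late: interlock / beam-dump handling would take over instead

        # Example: a command due at t = 1 ms can still be dispatched at 980 us but not at 999 us.
        msg = TimingMessage("kicker_fire", execute_at_ns=1_000_000)
        print(dispatch(msg, now_ns=980_000), dispatch(msg, now_ns=999_000))

    The point of the sketch is that the arithmetic is bounded and data-independent: a slim, dedicated architecture can guarantee such bounds, whereas the thread model of a generic OS cannot.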

  15. Clojure high performance programming

    Kumar, Shantanu

    2013-01-01

    This is a short, practical guide that will teach you everything you need to know to start writing high performance Clojure code. This book is ideal for intermediate Clojure developers who are looking to get a good grip on how to achieve optimum performance. You should already have some experience with Clojure and it would help if you already know a little bit of Java. Knowledge of performance analysis and engineering is not required. For hands-on practice, you should have access to Clojure REPL with Leiningen.

  16. Lightweight high-performance 1-4 meter class spaceborne mirrors: emerging technology for demanding spaceborne requirements

    Hull, Tony; Hartmann, Peter; Clarkson, Andrew R.; Barentine, John M.; Jedamzik, Ralf; Westerhoff, Thomas

    2010-07-01

    Pending critical spaceborne requirements, including coronagraphic detection of exoplanets, require exceptionally smooth mirror surfaces, aggressive lightweighting, and low-risk cost-effective optical manufacturing methods. Simultaneous development at Schott for production of aggressively lightweighted (>90%) Zerodur® mirror blanks, and at L-3 Brashear for producing ultra-smooth surfaces on Zerodur®, will be described. New L-3 techniques for large-mirror optical fabrication include Computer Controlled Optical Surfacing (CCOS) pioneered at L-3 Tinsley, and the world's largest MRF machine in place at L-3 Brashear. We propose that exceptional mirrors for the most critical spaceborne applications can now be produced with the technologies described.

  17. Quantitative evaluation of yeast's requirement for glycerol formation in very high ethanol performance fed-batch process

    Nevoigt Elke

    2010-05-01

    Abstract Background Glycerol is the major by-product accounting for up to 5% of the carbon in Saccharomyces cerevisiae ethanolic fermentation. Decreasing glycerol formation may redirect part of the carbon toward ethanol production. However, abolishment of glycerol formation strongly affects yeast's robustness towards different types of stress occurring in an industrial process. In order to assess whether glycerol production can be reduced to a certain extent without jeopardising growth and stress tolerance, the yeast's capacity to synthesize glycerol was adjusted by fine-tuning the activity of the rate-controlling enzyme glycerol 3-phosphate dehydrogenase (GPDH). Two engineered strains whose specific GPDH activity was significantly reduced by two different degrees were comprehensively characterized in a previously developed Very High Ethanol Performance (VHEP) fed-batch process. Results The prototrophic strain CEN.PK113-7D was chosen for decreasing glycerol formation capacity. The fine-tuned reduction of specific GPDH activity was achieved by replacing the native GPD1 promoter in the yeast genome by previously generated well-characterized TEF promoter mutant versions in a gpd2Δ background. Two TEF promoter mutant versions were selected for this study, resulting in a residual GPDH activity of 55 and 6%, respectively. The corresponding strains are referred to here as TEFmut7 and TEFmut2. The genetic modifications were accompanied by a strong reduction in glycerol yield on glucose; the level of reduction compared to the wild-type was 61% in TEFmut7 and 88% in TEFmut2. The overall ethanol production yield on glucose was improved from 0.43 g g-1 in the wild type to 0.44 g g-1 measured in TEFmut7 and 0.45 g g-1 in TEFmut2. Although maximal growth rate in the engineered strains was reduced by 20 and 30%, for TEFmut7 and TEFmut2 respectively, strains' ethanol stress robustness was hardly affected; i.e. values for final ethanol concentration (117 ± 4 g

  18. Sequestration Coating Performance Requirements for ...

    Symposium paper. The EPA’s National Homeland Security Research Center (NHSRC), in collaboration with ASTM International, developed performance standards for materials which could be applied to exterior surfaces contaminated by an RDD to mitigate the spread and migration of radioactive contamination.

  19. High performance sapphire windows

    Bates, Stephen C.; Liou, Larry

    1993-02-01

    High-quality, wide-aperture optical access is usually required for the advanced laser diagnostics that can now make a wide variety of non-intrusive measurements of combustion processes. Specially processed and mounted sapphire windows are proposed to provide this optical access to extreme environments. Through surface treatments and proper thermal stress design, single crystal sapphire can be a mechanically equivalent replacement for high strength steel. A prototype sapphire window and mounting system have been developed in a successful NASA SBIR Phase 1 project. A large and reliable increase in sapphire design strength (as much as 10x) has been achieved, and the initial specifications necessary for these gains have been defined. Failure testing of small windows has conclusively demonstrated the increased sapphire strength, indicating that a nearly flawless surface polish is the primary cause of strengthening, while an unusual mounting arrangement also significantly contributes to a larger effective strength. Phase 2 work will complete specification and demonstration of these windows, and will fabricate a set for use at NASA. The enhanced capabilities of these high performance sapphire windows will lead to many diagnostic capabilities not previously possible, as well as new applications for sapphire.

  20. High performance homes

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    Can prefabrication contribute to the development of high performance homes? To answer this question, this chapter defines high performance in more broadly inclusive terms, acknowledging the technical, architectural, social and economic conditions under which energy consumption and production occur....... Consideration of all these factors is a precondition for a truly integrated practice and as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy...

  1. NWTS program criteria for mined geologic disposal of nuclear waste: functional requirements and performance criteria for waste packages for solidified high-level waste and spent fuel

    1982-07-01

    The Department of Energy (DOE) has primary federal responsibility for the development and implementation of safe and environmentally acceptable nuclear waste disposal methods. Currently, the principal emphasis in the program is on emplacement of nuclear wastes in mined geologic repositories well beneath the earth's surface. A brief description of the mined geologic disposal system is provided. The National Waste Terminal Storage (NWTS) program was established under DOE's predecessor, the Energy Research and Development Administration, to provide facilities for the mined geologic disposal of radioactive wastes. The NWTS program includes both the development and the implementation of the technology necessary for designing, constructing, licensing, and operating repositories. The program does not include the management of processing radioactive wastes or of transporting the wastes to repositories. The NWTS-33 series, of which this document is a part, provides guidance for the NWTS program in the development and implementation of licensed mined geologic disposal systems for solidified high-level and transuranic (TRU) wastes. This document presents the functional requirements and performance criteria for waste packages for solidified high-level waste and spent fuel. A separate document to be developed, NWTS-33(4b), will present the requirements and criteria for waste packages for TRU wastes. The hierarchy and application of these requirements and criteria are discussed in Section 2.2

  2. A high-performance liquid chromatography-based radiometric assay for sucrose-phosphate synthase and other UDP-glucose requiring enzymes

    Salvucci, M.E.; Crafts-Brandner, S.J.

    1991-01-01

    A method for product analysis that eliminates a problematic step in the radiometric sucrose-phosphate synthase assay is described. The method uses chromatography on a boronate-derivatized high-performance liquid chromatography column to separate the labeled product, [14C]sucrose phosphate, from unreacted uridine 5'-diphosphate-[14C]glucose (UDP-Glc). Direct separation of these compounds eliminates the need for treatment of the reaction mixtures with alkaline phosphatase, thereby avoiding the problem of high background caused by contaminating phosphodiesterase activity in alkaline phosphatase preparations. The method presented in this paper can be applied to many UDP-Glc requiring enzymes; here the authors show its use for determining the activities of sucrose-phosphate synthase, sucrose synthase, and uridine diphosphate-glucose pyrophosphorylase in plant extracts

  3. High Performance Marine Vessels

    Yun, Liang

    2012-01-01

    High Performance Marine Vessels (HPMVs) range from the Fast Ferries to the latest high speed Navy Craft, including competition power boats and hydroplanes, hydrofoils, hovercraft, catamarans and other multi-hull craft. High Performance Marine Vessels covers the main concepts of HPMVs and discusses historical background, design features, services that have been successful and not so successful, and some sample data of the range of HPMVs to date. Included is a comparison of all HPMVs craft and the differences between them and descriptions of performance (hydrodynamics and aerodynamics). Readers will find a comprehensive overview of the design, development and building of HPMVs. In summary, this book: Focuses on technology at the aero-marine interface Covers the full range of high performance marine vessel concepts Explains the historical development of various HPMVs Discusses ferries, racing and pleasure craft, as well as utility and military missions High Performance Marine Vessels is an ideal book for student...

  4. High performance systems

    Vigil, M.B. [comp.]

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing given at the High Speed Computing Conference, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  5. RavenDB high performance

    Ritchie, Brian

    2013-01-01

    RavenDB High Performance is a comprehensive yet concise tutorial that developers can use to... This book is for developers & software architects who are designing systems in order to achieve high performance right from the start. A basic understanding of RavenDB is recommended, but not required. While the book focuses on advanced topics, it does not assume that the reader has a great deal of prior knowledge of working with RavenDB.

  6. High Performance Macromolecular Material

    Forest, M

    2002-01-01

    .... In essence, most commercial high-performance polymers are processed through fiber spinning, following Nature and spider silk, which is still pound-for-pound the toughest liquid crystalline polymer...

  7. Requirements on high resolution detectors

    Koch, A. [European Synchrotron Radiation Facility, Grenoble (France)]

    1997-02-01

    For a number of microtomography applications X-ray detectors with a spatial resolution of 1 µm are required. This high spatial resolution will influence and degrade other parameters of secondary importance like detective quantum efficiency (DQE), dynamic range, linearity and frame rate. This note summarizes the most important arguments for and against those detector systems which could be considered. This article discusses the mutual dependencies between the various figures which characterize a detector, and tries to give some ideas on how to proceed in order to improve present technology.

  8. Python high performance programming

    Lanaro, Gabriele

    2013-01-01

    An exciting, easy-to-follow guide illustrating the techniques to boost the performance of Python code, and their applications with plenty of hands-on examples. If you are a programmer who likes the power and simplicity of Python and would like to use this language for performance-critical applications, this book is ideal for you. All that is required is a basic knowledge of the Python programming language. The book will cover basic and advanced topics so will be great for you whether you are a new or a seasoned Python developer.

  9. High Performance Concrete

    Traian Oneţ

    2009-01-01

    The paper presents the latest studies and research accomplished in Cluj-Napoca related to high performance concrete, high strength concrete and self compacting concrete. The purpose of this paper is to highlight the advantages and drawbacks of using each particular concrete type. Two concrete recipes are presented, namely one for the concrete used in rigid road pavements and another for self-compacting concrete.

  10. High performance polymeric foams

    Gargiulo, M.; Sorrentino, L.; Iannace, S.

    2008-01-01

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide and polyethylene naphthalate). Two different methods have been used to prepare the foam samples: high temperature expansion and a two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed by using scanning electron microscopy.

  11. High performance conductometry

    Saha, B.

    2000-01-01

    Inexpensive but high performance systems have emerged progressively for basic and applied measurements in physical and analytical chemistry on one hand, and for on-line monitoring and leak detection in plants and facilities on the other. Salient features of the developments will be presented with specific examples

  12. Danish High Performance Concretes

    Nielsen, M. P.; Christoffersen, J.; Frederiksen, J.

    1994-01-01

    In this paper the main results obtained in the research program High Performance Concretes in the 90's are presented. This program was financed by the Danish government and was carried out in cooperation between The Technical University of Denmark, several private companies, and Aalborg University...... concretes, workability, ductility, and confinement problems....

  13. High performance homes

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    . Consideration of all these factors is a precondition for a truly integrated practice and as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy...

  14. High performance in software development

    CERN. Geneva; Haapio, Petri; Liukkonen, Juha-Matti

    2015-01-01

    What are the ingredients of high-performing software? Software development, especially for large high-performance systems, is one of the most complex tasks mankind has ever tried. Technological change leads to huge opportunities but challenges our old ways of working. Processing large data sets, possibly in real time or with other tight computational constraints, requires an efficient solution architecture. Efficiency requirements span from the distributed storage and large-scale organization of computation and data onto the lowest level of processor and data bus behavior. Integrating performance behavior over these levels is especially important when the computation is resource-bounded, as it is in numerics: physical simulation, machine learning, estimation of statistical models, etc. For example, memory locality and utilization of vector processing are essential for harnessing the computing power of modern processor architectures due to the deep memory hierarchies of modern general-purpose computers. As a r...

  15. High-Performance Networking

    CERN. Geneva

    2003-01-01

    The series will start with an historical introduction about what people saw as high performance message communication in their time and how that developed into what is today known as "standard computer network communication". It will be followed by a far more technical part that uses the High Performance Computer Network standards of the 90's, with 1 Gbit/sec systems, as an introduction to an in-depth explanation of the three new 10 Gbit/s network and interconnect technology standards that exist already or are emerging. If necessary for a good understanding, some sidesteps will be included to explain important protocols as well as some necessary details of the Wide Area Network (WAN) standards concerned, including some basics of wavelength multiplexing (DWDM). Some remarks will be made concerning the rapidly expanding applications of networked storage.

  16. High performance data transfer

    Cottrell, R.; Fang, C.; Hanushevsky, A.; Kreuger, W.; Yang, W.

    2017-10-01

    The exponentially increasing need for high speed data transfer is driven by big data and cloud computing, together with the needs of data intensive science, High Performance Computing (HPC), defense, the oil and gas industry, etc. We report on the Zettar ZX software. This has been developed since 2013 to meet these growing needs by providing high performance data transfer and encryption in a scalable, balanced, easy to deploy and use way while minimizing power and space utilization. In collaboration with several commercial vendors, Proofs of Concept (PoC) consisting of clusters have been put together using off-the-shelf components to test the ZX scalability and ability to balance services using multiple cores and links. The PoCs are based on SSD flash storage that is managed by a parallel file system. Each cluster occupies 4 rack units. Using the PoCs, between clusters we have achieved almost 200 Gbps memory to memory over two 100 Gbps links, and 70 Gbps parallel file to parallel file with encryption over a 5000 mile 100 Gbps link.
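
    To give a sense of why the 5000 mile, 100 Gbps path quoted above is demanding, here is a back-of-the-envelope estimate of its bandwidth-delay product, i.e. the amount of data that must be in flight to keep such a link full (my own illustration under stated assumptions, not a figure from the Zettar work):

        # Rough bandwidth-delay product for a 5000 mile, 100 Gbps link.
        # Assumptions: propagation at ~2/3 the speed of light in fibre, ideal routing, no queuing.
        distance_m = 5000 * 1609.34        # 5000 miles in metres
        v_fibre = 2.0e8                    # ~2/3 c, typical for optical fibre (m/s)
        rtt_s = 2 * distance_m / v_fibre   # round-trip time, roughly 0.08 s
        link_bps = 100e9                   # 100 Gbit/s
        bdp_bytes = link_bps * rtt_s / 8   # data in flight needed to fill the pipe

        print(f"RTT ~ {rtt_s * 1e3:.0f} ms, bandwidth-delay product ~ {bdp_bytes / 1e9:.1f} GB")

    Keeping on the order of a gigabyte in flight end to end is what pushes such transfers toward parallel streams, parallel file systems and careful buffer management.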

  17. INL High Performance Building Strategy

    Jennifer D. Morton

    2010-02-01

    (LEED®) Green Building Rating System (LEED 2009). The document employs a two-level approach for high performance building at INL. The first level identifies the requirements of the Guiding Principles for Sustainable New Construction and Major Renovations, and the second level recommends which credits should be met when LEED Gold certification is required.

  18. Comparison of energy performance requirements levels

    Spiekman, Marleen; Thomsen, Kirsten Engelund; Rose, Jørgen

    This summary report provides a synthesis of the work within the EU SAVE project ASIEPI on developing a method to compare the energy performance (EP) requirement levels among the countries of Europe. Comparing EP requirement levels constitutes a major challenge. From the comparison of, for instance, the present Dutch requirement level (EPC) of 0.8 with the present Flemish level of E80, it can easily be seen that direct comparison is not possible. The conclusions and recommendations of the study are presented in part A. These constitute the most important result of the project. Part B gives an overview of all other project material related to that topic, which makes it easy to identify the most pertinent information. Part C lists the project partners and sponsors.

  19. R high performance programming

    Lim, Aloysius

    2015-01-01

    This book is for programmers and developers who want to improve the performance of their R programs by making them run faster with large data sets or who are trying to solve a pesky performance problem.

  20. Frequency Control Performance Measurement and Requirements

    Illian, Howard F.

    2010-12-20

    Frequency control is an essential requirement of reliable electric power system operations. Determination of frequency control depends on frequency measurement and the practices based on these measurements that dictate acceptable frequency management. This report chronicles the evolution of these measurements and practices. As technology progresses from analog to digital for calculation, communication, and control, the technical basis for frequency control measurement and practices to determine acceptable performance continues to improve. Before the introduction of digital computing, practices were determined largely by prior experience. In anticipation of mandatory reliability rules, practices evolved from a focus primarily on commercial and equity issues to an increased focus on reliability. This evolution is expected to continue and place increased requirements for more precise measurements and a stronger scientific basis for future frequency management practices in support of reliability.

  1. High performance work practices, innovation and performance

    Jørgensen, Frances; Newton, Cameron; Johnston, Kim

    2013-01-01

    Research spanning nearly 20 years has provided considerable empirical evidence for relationships between High Performance Work Practices (HPWPs) and various measures of performance including increased productivity, improved customer service, and reduced turnover. What stands out from......, and Africa to examine these various questions relating to the HPWP-innovation-performance relationship. Each paper discusses a practice that has been identified in HPWP literature and potential variables that can facilitate or hinder the effects of these practices of innovation- and performance...

  2. High performance germanium MOSFETs

    Saraswat, Krishna [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States)]. E-mail: saraswat@stanford.edu; Chui, Chi On [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Krishnamohan, Tejas [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Kim, Donghyun [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Nayfeh, Ammar [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Pethe, Abhijit [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States)

    2006-12-15

    Ge is a very promising material as future channel materials for nanoscale MOSFETs due to its high mobility and thus a higher source injection velocity, which translates into higher drive current and smaller gate delay. However, for Ge to become main-stream, surface passivation and heterogeneous integration of crystalline Ge layers on Si must be achieved. We have demonstrated growth of fully relaxed smooth single crystal Ge layers on Si using a novel multi-step growth and hydrogen anneal process without any graded buffer SiGe layer. Surface passivation of Ge has been achieved with its native oxynitride (GeOxNy) and high-permittivity (high-k) metal oxides of Al, Zr and Hf. High mobility MOSFETs have been demonstrated in bulk Ge with high-k gate dielectrics and metal gates. However, due to their smaller bandgap and higher dielectric constant, most high mobility materials suffer from large band-to-band tunneling (BTBT) leakage currents and worse short channel effects. We present novel, Si and Ge based heterostructure MOSFETs, which can significantly reduce the BTBT leakage currents while retaining high channel mobility, making them suitable for scaling into the sub-15 nm regime. Through full band Monte-Carlo, Poisson-Schrodinger and detailed BTBT simulations we show a dramatic reduction in BTBT and excellent electrostatic control of the channel, while maintaining very high drive currents in these highly scaled heterostructure DGFETs. Heterostructure MOSFETs with varying strained-Ge or SiGe thickness, Si cap thickness and Ge percentage were fabricated on bulk Si and SOI substrates. The ultra-thin (~2 nm) strained-Ge channel heterostructure MOSFETs exhibited >4x mobility enhancements over bulk Si devices and >10x BTBT reduction over surface channel strained SiGe devices.

  3. High performance germanium MOSFETs

    Saraswat, Krishna; Chui, Chi On; Krishnamohan, Tejas; Kim, Donghyun; Nayfeh, Ammar; Pethe, Abhijit

    2006-01-01

    Ge is a very promising material as future channel materials for nanoscale MOSFETs due to its high mobility and thus a higher source injection velocity, which translates into higher drive current and smaller gate delay. However, for Ge to become main-stream, surface passivation and heterogeneous integration of crystalline Ge layers on Si must be achieved. We have demonstrated growth of fully relaxed smooth single crystal Ge layers on Si using a novel multi-step growth and hydrogen anneal process without any graded buffer SiGe layer. Surface passivation of Ge has been achieved with its native oxynitride (GeO x N y ) and high-permittivity (high-k) metal oxides of Al, Zr and Hf. High mobility MOSFETs have been demonstrated in bulk Ge with high-k gate dielectrics and metal gates. However, due to their smaller bandgap and higher dielectric constant, most high mobility materials suffer from large band-to-band tunneling (BTBT) leakage currents and worse short channel effects. We present novel, Si and Ge based heterostructure MOSFETs, which can significantly reduce the BTBT leakage currents while retaining high channel mobility, making them suitable for scaling into the sub-15 nm regime. Through full band Monte-Carlo, Poisson-Schrodinger and detailed BTBT simulations we show a dramatic reduction in BTBT and excellent electrostatic control of the channel, while maintaining very high drive currents in these highly scaled heterostructure DGFETs. Heterostructure MOSFETs with varying strained-Ge or SiGe thickness, Si cap thickness and Ge percentage were fabricated on bulk Si and SOI substrates. The ultra-thin (∼2 nm) strained-Ge channel heterostructure MOSFETs exhibited >4x mobility enhancements over bulk Si devices and >10x BTBT reduction over surface channel strained SiGe devices

  4. High Performance Computing Multicast

    2012-02-01


  5. NGINX high performance

    Sharma, Rahul

    2015-01-01

    System administrators, developers, and engineers looking for ways to achieve maximum performance from NGINX will find this book beneficial. If you are looking for solutions such as how to handle more users from the same system or load your website pages faster, then this is the book for you.

  6. High Performance Proactive Digital Forensics

    Alharbi, Soltan; Traore, Issa; Moa, Belaid; Weber-Jahnke, Jens

    2012-01-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, the next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real-time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events and continuously do so (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.

  7. Predicting sample size required for classification performance

    Figueroa Rosa L

    2012-02-01

    Abstract Background Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness-of-fit measures. As control we used an un-weighted fitting method. Results A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method (p Conclusions This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
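
    The fitting step described in this record can be sketched roughly as follows (an illustrative reconstruction, not the authors' code: the particular inverse power law form y = a - b*x^(-c), the weighting scheme and the example data points are assumptions consistent with, but not taken from, the abstract):

        import numpy as np
        from scipy.optimize import curve_fit

        def inverse_power_law(x, a, b, c):
            """Learning-curve model: performance approaches the plateau a as sample size x grows."""
            return a - b * np.power(x, -c)

        # Points of a small observed learning curve: (annotated sample size, classifier performance).
        x_obs = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
        y_obs = np.array([0.62, 0.70, 0.76, 0.80, 0.82])
        weights = np.sqrt(x_obs)                 # assumed scheme: trust later, more stable points more

        params, pcov = curve_fit(
            inverse_power_law, x_obs, y_obs,
            p0=[0.9, 1.0, 0.5],
            sigma=1.0 / weights,                 # curve_fit takes per-point sigmas; small sigma = high weight
            maxfev=10_000,
        )

        # Extrapolate the fitted curve to predict performance at larger annotation budgets.
        for n in (2_000, 5_000):
            print(f"predicted performance at {n} samples: {inverse_power_law(n, *params):.3f}")

    A confidence interval for the prediction, as mentioned in the abstract, would additionally use the parameter covariance matrix (pcov) returned by the fit.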

  8. Generating units performances: power system requirements

    Fourment, C; Girard, N; Lefebvre, H

    1994-08-01

    The role of generating units within the power system is more than providing power and energy. Their performance is not measured only by their energy efficiency and availability. Namely, there is a strong interaction between the generating units and the power system. The units are essential components of the system: for a given load profile the frequency variation follows directly from the behaviour of the units and their ability to adapt their power output. In the same way, the voltages at the units' terminals are the key points to which the voltage profile at each node of the network is linked through the active and especially the reactive power flows. Therefore, the customer will experience the frequency and voltage variations induced by the units' behaviour. Moreover, in case of adverse conditions, if the units do not operate as well as expected or trip, a portion of the system, or possibly the whole system, may collapse. The limitation of the performance of a unit has two kinds of consequences. Firstly, it may result in an increased amount of energy not supplied or in a higher loss-of-load probability: for example, if the primary reserve is not sufficient, a generator tripping may lead to an abnormal frequency deviation, and load may have to be shed to restore the balance. Secondly, the limitation of a unit's performance results in an economic over-cost for the system: for instance, if not enough `cheap` units are capable of load-following, other units with higher operating costs have to be started up. We would like to stress the interest, for the operators and design teams of the units on the one hand, and the operators and design teams of the system on the other hand, of dialogue and information exchange, in operation but also at the conception stage, in order to find a satisfactory compromise between the system requirements and the consequences for the generating units. (authors). 11 refs., 4 figs.

  9. High performance proton accelerators

    Favale, A.J.

    1989-01-01

    In concert with this theme, this paper briefly outlines how Grumman, over the past 4 years, has evolved from a company that designed and fabricated a Radio Frequency Quadrupole (RFQ) accelerator from the Los Alamos National Laboratory (LANL) physics and specifications, to a company that, as prime contractor, is designing, fabricating, assembling and commissioning the US Army Strategic Defense Command's (USA SDC) Continuous Wave Deuterium Demonstrator (CWDD) accelerator as a turn-key operation. In the case of the RFQ, LANL scientists performed the physics analysis, established the specifications, supported Grumman on the mechanical design, conducted the RFQ tuning and tested the RFQ at their laboratory. For the CWDD Program, Grumman has the responsibility for the physics and engineering designs, assembly, testing and commissioning, albeit with the support of consultants from LANL, Lawrence Berkeley Laboratory (LBL) and Brookhaven National Laboratory. In addition, Culham Laboratory and LANL are team members on CWDD. LANL scientists have reviewed the physics design, as has a USA SDC review board. 9 figs

  10. High performance liquid chromatographic determination of ...


    2010-02-08

    ) high performance liquid chromatography (HPLC) grade .... applications. These are important requirements if the reagent is to be applicable to on-line pre or post column derivatisation in a possible automation of the analytical.

  11. Dosimeter characteristics and service performance requirements

    Ambrosi, P.; Bartlett, D.T.

    1999-01-01

    The requirements for personal dosimeters and dosimetry services given by ICRP 26, ICRP 35, ICRP 60 and ICRP 75 are summarised and compared with the requirements given in relevant international standards. Most standards could be made more relevant to actual workplace conditions. In some standards, the required tests of energy and angular dependence of the response are not sufficient, or requirements on overall uncertainty are lacking. (author)

  12. Concept development and needs identification for intelligent network flow optimization (INFLO) : functional and performance requirements, and high-level data and communication needs.

    2012-11-01

    The purpose of this project is to develop for the Intelligent Network Flow Optimization (INFLO), which is one collection (or bundle) of high-priority transformative applications identified by the United States Department of Transportation (USDOT) Mob...

  13. Development of high performance cladding

    Kiuchi, Kiyoshi

    2003-01-01

    The development of superior next-generation light water reactors is requested on the basis of general viewpoints such as improved safety and economics, reduced radioactive waste and effective utilization of plutonium, by 2030, when conventional reactor plants should be renovated. At the Japan Atomic Energy Research Institute, work is being carried out on improving stainless steel cladding for conventional high burn-up reactors to more than 100 GWd/t, on developing manufacturing technology for the reduced-moderation light water reactor (RMWR) with a breeding ratio beyond 1.0, and on research into water-materials interaction in the supercritical-pressure water-cooled reactor. Stable austenitic stainless steel has been selected for the fuel element cladding of the advanced boiling water reactor (ABWR); austenitic stainless steel offers superior irradiation resistance, corrosion resistance and mechanical strength. A hard neutron spectrum, with energies above 0.1 MeV, occurs in the core of the reduced-moderation light water reactor, as in the liquid metal fast breeder reactor (LMFBR). High performance cladding for the RMWR fuel elements is likewise required to provide irradiation resistance, corrosion resistance and mechanical strength. Slow strain rate tests (SSRT) of SUS 304 and SUS 316 are carried out to study stress corrosion cracking (SCC). Irradiation tests in an LMFBR are intended to obtain irradiation damage data for the cladding materials. (M. Suetake)

  14. Towards performance requirements for structural connections

    Stark, J.W.B.

    1999-01-01

    There is a tendency in the Construction Industry to move from solution driven specifications towards performance specifications. Traditionally structural specifications including those for steel construction used to be mainly solution driven. In this paper the position of the draft European

  15. High Performance Networks for High Impact Science

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  16. Material requirements for the High Speed Civil Transport

    Stephens, Joseph R.; Hecht, Ralph J.; Johnson, Andrew M.

    1993-01-01

    Under NASA-sponsored High Speed Research (HSR) programs, the materials and processing requirements have been identified for overcoming the environmental and economic barriers of the next generation High Speed Civil Transport (HSCT) propulsion system. The long (2 to 5 hours) supersonic cruise portion of the HSCT cycle will place additional durability requirements on all hot section engine components. Low emissions combustor designs will require high temperature ceramic matrix composite liners to meet an emission goal of less than 5 g NOx per kg of fuel burned. Large axisymmetric and two-dimensional exhaust nozzle designs are now under development to meet or exceed FAR 36 Stage III noise requirements, and will require lightweight, high temperature metallic, intermetallic, and ceramic matrix composites to reduce nozzle weight and meet structural and acoustic component performance goals. This paper describes and discusses the turbomachinery, combustor, and exhaust nozzle requirements of the High Speed Civil Transport propulsion system.

  17. High performance light water reactor

    Squarer, D.; Schulenberg, T.; Struwe, D.; Oka, Y.; Bittermann, D.; Aksan, N.; Maraczy, C.; Kyrki-Rajamaeki, R.; Souyri, A.; Dumaz, P.

    2003-01-01

    The objective of the high performance light water reactor (HPLWR) project is to assess the merit and economic feasibility of a high efficiency LWR operating in the thermodynamically supercritical regime. An efficiency of approximately 44% is expected. To accomplish this objective, a highly qualified team of European research institutes and industrial partners together with the University of Tokyo is assessing the major issues pertaining to a new reactor concept, under the co-sponsorship of the European Commission. The assessment has emphasized the recent advancement achieved in this area by Japan. Additionally, it accounts for advanced European reactor design requirements, recent improvements, practical design aspects, availability of plant components and the availability of high temperature materials. The final objective of this project is to reach a conclusion on the potential of the HPLWR to help sustain the nuclear option, by supplying competitively priced electricity, as well as to continue the nuclear competence in LWR technology. The following is a brief summary of the main project achievements: - A state-of-the-art review of supercritical water-cooled reactors has been performed for the HPLWR project. - Extensive studies have been performed in the last 10 years by the University of Tokyo. Therefore, a 'reference design', developed by the University of Tokyo, was selected in order to assess the available technological tools (i.e. computer codes, analyses, advanced materials, water chemistry, etc.). Design data and results of the analysis were supplied by the University of Tokyo. A benchmark problem, based on the 'reference design', was defined for neutronics calculations and several partners of the HPLWR project carried out independent analyses. The results of these analyses, which in addition help to 'calibrate' the codes, have guided the assessment of the core and the design of an improved HPLWR fuel assembly. Preliminary selection was made for the HPLWR scale

  18. Cost optimal levels for energy performance requirements

    Thomsen, Kirsten Engelund; Aggerholm, Søren; Kluttig-Erhorn, Heike

    This report summarises the work done within the Concerted Action EPBD from December 2010 to April 2011 in order to feed into the European Commission's proposal for a common European procedure for a Cost-Optimal methodology under the Directive on the Energy Performance of Buildings (recast) 2010/3...

  19. High-Performance Operating Systems

    Sharp, Robin

    1999-01-01

    Notes prepared for the DTU course 49421 "High Performance Operating Systems". The notes deal with quantitative and qualitative techniques for use in the design and evaluation of operating systems in computer systems for which performance is an important parameter, such as real-time applications, communication systems and multimedia systems.

  20. Design requirements and performance requirements for reactor fuel recycle manipulator systems

    Grundmann, J.G.

    1975-01-01

    The development of a new generation of remote handling devices for remote production work in support of reactor fuel recycle systems is discussed. These devices require greater mobility, speed and visual capability than remote handling systems used in research activities. An upgraded manipulator system proposed for a High-Temperature Gas-Cooled Reactor fuel refabrication facility is described. Design and performance criteria for the manipulators, cranes, and TV cameras in the proposed system are enumerated

  1. High performance inertial fusion targets

    Nuckolls, J.H.; Bangerter, R.O.; Lindl, J.D.; Mead, W.C.; Pan, Y.L.

    1977-01-01

    Inertial confinement fusion (ICF) designs are considered which may have very high gains (approximately 1000) and low power requirements (<100 TW) for input energies of approximately one megajoule. These include targets having very low density shells, ultra thin shells, central ignitors, magnetic insulation, and non-ablative acceleration

  2. High performance inertial fusion targets

    Nuckolls, J.H.; Bangerter, R.O.; Lindl, J.D.; Mead, W.C.; Pan, Y.L.

    1978-01-01

    Inertial confinement fusion (ICF) target designs are considered which may have very high gains (approximately 1000) and low power requirements (< 100 TW) for input energies of approximately one megajoule. These include targets having very low density shells, ultra thin shells, central ignitors, magnetic insulation, and non-ablative acceleration

  3. Strategy Guideline. Partnering for High Performance Homes

    Prahl, Duncan [IBACOS, Inc., Pittsburgh, PA (United States)

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties associated in the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate to continually improve the energy efficiency and durability of new houses.

  4. Functional High Performance Financial IT

    Berthold, Jost; Filinski, Andrzej; Henglein, Fritz

    2011-01-01

    The world of finance faces the computational performance challenge of massively expanding data volumes, extreme response time requirements, and compute-intensive complex (risk) analyses. Simultaneously, new international regulatory rules require considerably more transparency and external auditability of financial institutions, including their software systems. To top it off, increased product variety and customisation necessitates shorter software development cycles and higher development productivity. In this paper, we report about HIPERFIT, a recently established strategic research center at the University of Copenhagen that attacks this triple challenge of increased performance, transparency and productivity in the financial sector by a novel integration of financial mathematics, domain-specific language technology, parallel functional programming, and emerging massively parallel hardware.

  5. Identifying High Performance ERP Projects

    Stensrud, Erik; Myrtveit, Ingunn

    2002-01-01

    Learning from high performance projects is crucial for software process improvement. Therefore, we need to identify outstanding projects that may serve as role models. It is common to measure productivity as an indicator of performance. It is vital that productivity measurements deal correctly with variable returns to scale and multivariate data. Software projects generally exhibit variable returns to scale, and the output from ERP projects is multivariate. We propose to use Data Envelopment ...

  6. Toward High Performance in Industrial Refrigeration Systems

    Thybo, C.; Izadi-Zamanabadi, Roozbeh; Niemann, H.

    2002-01-01

    Achieving high performance in complex industrial systems requires information manipulation at different system levels. The paper shows how different models of the same subsystems, using different qualities of information/data, are used for fault diagnosis as well as robust control design...

  7. Towards high performance in industrial refrigeration systems

    Thybo, C.; Izadi-Zamanabadi, R.; Niemann, Hans Henrik

    2002-01-01

    Achieving high performance in complex industrial systems requires information manipulation at different system levels. The paper shows how different models of the same subsystems, using different qualities of information/data, are used for fault diagnosis as well as robust control design...

  8. Designing a High Performance Parallel Personal Cluster

    Kapanova, K. G.; Sellier, J. M.

    2016-01-01

    Today, many scientific and engineering areas require high performance computing to perform computationally intensive experiments. For example, many advances in transport phenomena, thermodynamics, material properties, computational chemistry and physics are possible only because of the availability of such large scale computing infrastructures. Yet many challenges are still open. The cost of energy consumption, cooling, and competition for resources have been some of the reasons why the scientifi...

  9. High performance fuel technology development

    Koon, Yang Hyun; Kim, Keon Sik; Park, Jeong Yong; Yang, Yong Sik; In, Wang Kee; Kim, Hyung Kyu [KAERI, Daejeon (Korea, Republic of)

    2012-01-15

    - Development of High Plasticity and Annular Pellet: development of strong candidates for ultra high burn-up fuel pellets as a PCI remedy; development of fabrication technology for annular fuel pellets.
    - Development of High Performance Cladding Materials: irradiation testing of HANA claddings in the Halden research reactor and evaluation of their in-pile performance; development of the final candidates for the next generation cladding materials; development of the manufacturing technology for the dual-cooled fuel cladding tubes.
    - Irradiated Fuel Performance Evaluation Technology Development: development of a performance analysis code system for the dual-cooled fuel; development of fuel performance-proving technology.
    - Feasibility Studies on Dual-Cooled Annular Fuel Core: analysis of the properties of a reactor core with dual-cooled fuel; feasibility evaluation of the dual-cooled fuel core.
    - Development of Design Technology for Dual-Cooled Fuel Structure: definition of technical issues and invention of concepts for the dual-cooled fuel structure; basic design and development of main structure components for dual-cooled fuel; basic design of a dual-cooled fuel rod.

  10. High Performance Bulk Thermoelectric Materials

    Ren, Zhifeng [Boston College, Chestnut Hill, MA (United States)

    2013-03-31

    Over more than 13 years, we have carried out research on the electron pairing symmetry of superconductors, the growth and field emission properties of carbon nanotubes and semiconducting nanowires, high performance thermoelectric materials, and other interesting materials. As a result of this research, we have published 104 papers and have educated six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  11. Optimizing the design of very high power, high performance converters

    Edwards, R.J.; Tiagha, E.A.; Ganetis, G.; Nawrocky, R.J.

    1980-01-01

    This paper describes how various technologies are used to achieve the desired performance in a high current magnet power converter system. It is hoped that the discussions of the design approaches taken will be applicable to other power supply systems where stringent requirements in stability, accuracy and reliability must be met

  12. High gain requirements and high field Tokamak experiments

    Cohn, D.R.

    1994-01-01

    Operation at sufficiently high gain (ratio of fusion power to external heating power) is a fundamental requirement for tokamak power reactors. For typical reactor concepts, the gain is greater than 25. Self-heating from alpha particles in deuterium-tritium plasmas can greatly reduce nτ/temperature requirements for high gain. A range of high gain operating conditions is possible with different values of alpha-particle efficiency (fraction of alpha-particle power that actually heats the plasma) and with different ratios of self heating to external heating. At one extreme, there is ignited operation, where all of the required plasma heating is provided by alpha particles and the alpha-particle efficiency is 100%. At the other extreme, there is the case of no heating contribution from alpha particles. nτ/temperature requirements for high gain are determined as a function of alpha-particle heating efficiency. Possibilities for high gain experiments in deuterium-tritium, deuterium, and hydrogen plasmas are discussed
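
    As a rough aid to the quantities discussed in this abstract, the relations below write out the gain definition and a steady-state power balance with a variable alpha-heating efficiency. The D-T energy split (the alpha particle carries 3.5 MeV of the 17.6 MeV released per reaction) is standard physics assumed here rather than taken from the paper, so treat this as an illustrative sketch.

```latex
\[
Q \equiv \frac{P_{\mathrm{fus}}}{P_{\mathrm{ext}}}, \qquad
P_\alpha \simeq \frac{P_{\mathrm{fus}}}{5}
\quad \text{(D-T: 3.5 MeV of 17.6 MeV per reaction)} .
\]
\[
\eta_\alpha P_\alpha + P_{\mathrm{ext}} = P_{\mathrm{loss}} = \frac{W}{\tau_E}
\;\;\Longrightarrow\;\;
Q = \frac{P_{\mathrm{fus}}}{\,W/\tau_E - \eta_\alpha P_{\mathrm{fus}}/5\,}.
\]
```

    The gain diverges (ignition) when the alpha term alone balances the losses with full efficiency, and reduces to the no-alpha-heating case when the efficiency is zero; for a fixed target gain such as 25, a lower alpha-heating efficiency therefore raises the required nτ/temperature.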

  13. Neo4j high performance

    Raj, Sonal

    2015-01-01

    If you are a professional or enthusiast who has a basic understanding of graphs or has basic knowledge of Neo4j operations, this is the book for you. Although it is targeted at an advanced user base, this book can be used by beginners as it touches upon the basics. So, if you are passionate about taming complex data with the help of graphs and building high performance applications, you will be able to get valuable insights from this book.

  14. 14 CFR 171.269 - Marker beacon performance requirements.

    2010-01-01

    Title 14, Aeronautics and Space; Federal Aviation Administration, Department of ...; Interim Standard Microwave Landing System (ISMLS); § 171.269 Marker beacon performance requirements. ISMLS marker beacon equipment...

  15. Scaling of neck performance requirements in side impacts

    Wismans, J.S.H.M.; Meijer, R.; Rodarius, C.; Been, B.W.

    2008-01-01

    Neck biofidelity performance requirements for different sized crash dummies and human body computer models are usually based on scaling of performance requirements derived for a 50th percentile body size. The objective of this study is to investigate the validity of the currently used scaling laws

  16. Performance Requirements for the Double Shell Tank (DST) System

    SMITH, D.F.

    2001-01-01

    This document identifies the upper-level Double-Shell Tank (DST) System functions and bounds the associated performance requirements. The functions and requirements are provided along with supporting bases. These functions and requirements, in turn, will be incorporated into specifications for the DST System

  17. Development and testing of the methodology for performance requirements

    Rivers, J.D.

    1989-01-01

    The U.S. Department of Energy (DOE) is in the process of implementing a set of materials control and accountability (MC&A) performance requirements. These graded requirements set a uniform level of performance for similar materials at various facilities against the threat of an insider adversary stealing special nuclear material (SNM). These requirements are phrased in terms of detecting the theft of a goal quantity of SNM within a specified time period and with a probability greater than or equal to a specified value, and include defense-in-depth requirements. The DOE has conducted an extensive effort over the last 2 1/2 yr to develop a practical methodology to be used in evaluating facility performance against the performance requirements specified in DOE Order 5633.3. The major participants in the development process have been the Office of Safeguards and Security (OSS), Brookhaven National Laboratory, and Los Alamos National Laboratory. The process has included careful reviews of related evaluation systems, a review of the intent of the requirements in the order, and site visits to most of the major facilities in the DOE complex. As a result of this extensive effort to develop guidance for the MC&A performance requirements, OSS was able to provide a practical method that will allow facilities to evaluate the performance of their safeguards systems against the performance requirements. In addition, the evaluations can be validated by the cognizant operations offices in a systematic manner

  18. Performance concerns for high duty fuel cycle

    Esposito, V.J.; Gutierrez, J.E.

    1999-01-01

    One of the goals of the nuclear industry is to achieve economic performance such that nuclear power plants are competitive in a de-regulated market. The manner in which nuclear fuel is designed and operated lies at the heart of economic viability. In this sense, reliability, operating flexibility and low costs are the three major requirements of the NPP today. The translation of these three requirements into the design is part of our work. The challenge today is to produce a fuel design which will operate with long operating cycles, high discharge burnup and power up-rating, while still maintaining all design and safety margins. European Fuel Group (EFG) understands that to achieve the required performance, high duty/energy fuel designs are needed. The concerns for high duty design include, among other items, core design methods, advanced safety analysis methodologies, performance models, advanced materials and operational strategies. The operational aspects require the trade-off and evaluation of various parameters including coolant chemistry control, material corrosion, boiling duty, boron level impacts, etc. In this environment, MAEF is the design that EFG is now offering, based on ZIRLO alloy and a robust skeleton. This new design is able to achieve 70 GWd/tU, and Lead Test Programs are being executed to demonstrate this capability. A number of performance issues which have been a concern with current designs, such as cladding corrosion and incomplete RCCA insertion (IRI), have been resolved. As the core duty becomes more aggressive, other new issues such as Axial Offset Anomaly need to be addressed. These new issues are being addressed by a combination of the new design and advanced methodologies to meet the demanding needs of NPPs. This paper discusses the ability and strategy to meet high duty core requirements and operational flexibility while maintaining an acceptable balance among all technical issues. (authors)

  19. A performance requirements analysis of the SSC control system

    Hunt, S.M.; Low, K.

    1992-01-01

    This paper presents the results of an analysis of the performance requirements of the Superconducting Super Collider Control System. We quantify the performance requirements of the system in terms of response time, throughput and reliability. We then examine the effects of distance and traffic patterns on control system performance, how these factors influence the implementation of the control network architecture, and how the proposed system compares against those criteria. (author)

  20. Nuclear fuels with high burnup: safety requirements

    Phuc Tran Dai

    2016-01-01

    The Vietnamese authorities plan to build 3 reactors of Russian design (VVER AES 2006) by 2030. In order to prepare the preliminary safety analysis report, the Vietnamese Agency for Radioprotection and Safety has launched an investigation of the behaviour of nuclear fuels at the high burnups (up to 60 GWd/tU) expected in the new plants. This study deals mainly with the behaviour of the fuel assemblies in case of a loss of coolant accident (LOCA). It appears that for an average burnup of 50 GWd/tU and for the advanced design of the fuel assembly (cladding and materials), safety requirements are fulfilled. For an average burnup of 60 GWd/tU, a list of issues remains to be assessed, among which are the impact of clad bursting and the hydrogen embrittlement of the advanced zirconium alloys. (A.C.)

  1. High performance MEAs. Final report

    NONE

    2012-07-15

    The aim of the present project is, through modeling, material and process development, to obtain significantly better MEA performance and to attain the technology necessary to fabricate stable catalyst materials, thereby providing a viable alternative to the current industry standard. This project primarily focused on the development and characterization of novel catalyst materials for use in high temperature (HT) and low temperature (LT) proton-exchange membrane fuel cells (PEMFC). New catalysts are needed in order to improve fuel cell performance and reduce the cost of fuel cell systems. Additional tasks were the development of new, durable sealing materials to be used in PEMFC as well as the computational modeling of heat and mass transfer processes, predominantly in LT PEMFC, in order to improve fundamental understanding of the multi-phase flow issues and liquid water management in fuel cells. An improved fundamental understanding of these processes will lead to improved fuel cell performance and hence will also result in a reduced catalyst loading to achieve the same performance. The consortium has obtained significant research results and progress for new catalyst materials and substrates with promising enhanced performance, and has fabricated the materials using novel methods. However, the new materials and synthesis methods explored are still in the early research and development phase. The project has contributed to improved MEA performance using less precious metal, as has been demonstrated for LT-PEM, DMFC and HT-PEM applications. The novel approach and the progress of the modelling activities have been extremely satisfactory, with numerous conference and journal publications along with two potential inventions concerning the catalyst layer. (LN)

  2. Engineered Barrier System performance requirements systems study report. Revision 02

    Balady, M.A.

    1997-01-14

    This study evaluates the current design concept for the Engineered Barrier System (EBS), in concert with the current understanding of the geologic setting to assess whether enhancements to the required performance of the EBS are necessary. The performance assessment calculations are performed by coupling the EBS with the geologic setting based on the models (some of which were updated for this study) and assumptions used for the 1995 Total System Performance Assessment (TSPA). The need for enhancements is determined by comparing the performance assessment results against the EBS related performance requirements. Subsystem quantitative performance requirements related to the EBS include the requirement to allow no more than 1% of the waste packages (WPs) to fail before 1,000 years after permanent closure of the repository, as well as a requirement to control the release rate of radionuclides from the EBS. The EBS performance enhancements considered included additional engineered components as well as evaluating additional performance available from existing design features but for which no performance credit is currently being taken.

  3. Engineered Barrier System performance requirements systems study report. Revision 02

    Balady, M.A.

    1997-01-01

    This study evaluates the current design concept for the Engineered Barrier System (EBS), in concert with the current understanding of the geologic setting to assess whether enhancements to the required performance of the EBS are necessary. The performance assessment calculations are performed by coupling the EBS with the geologic setting based on the models (some of which were updated for this study) and assumptions used for the 1995 Total System Performance Assessment (TSPA). The need for enhancements is determined by comparing the performance assessment results against the EBS related performance requirements. Subsystem quantitative performance requirements related to the EBS include the requirement to allow no more than 1% of the waste packages (WPs) to fail before 1,000 years after permanent closure of the repository, as well as a requirement to control the release rate of radionuclides from the EBS. The EBS performance enhancements considered included additional engineered components as well as evaluating additional performance available from existing design features but for which no performance credit is currently being taken

  4. The contribution of material control to meeting performance requirements

    Rivers, J.D.

    1989-01-01

    The U.S. Dept. of Energy (DOE) is in the process of implementing a set of performance requirements for material control and accountability (MC&A). These graded requirements set a uniform level of performance for similar materials at various facilities with respect to the threat of an insider adversary stealing special nuclear material (SNM). These requirements are phrased in terms of detecting the theft of a goal quantity of SNM within a specified time period and with a probability greater than or equal to a specified value and include defense-in-depth requirements

  5. Strength-toughness requirements for thick walled high pressure vessels

    Kapp, J.A.

    1990-01-01

    The strength and toughness requirements of materials for use in high pressure vessels have been the subject of some discussion in the meetings of the Materials Task Group of the Special Working Group on High Pressure Vessels. A fracture mechanics analysis has been performed to theoretically establish the required toughness for a high pressure vessel. The analysis is based on the validity requirement for plane strain fracture of fracture toughness test specimens: at the fracture event, the crack length, uncracked ligament, and vessel length must each be greater than fifty times the crack tip plastic zone size for brittle fracture to occur. For high pressure piping applications, the limiting physical dimension is the uncracked ligament, as it can be assumed that the other dimensions are always greater than fifty times the crack tip plastic zone. To perform the fracture mechanics analysis, several parameters must be known, including vessel dimensions, material strength, degree of autofrettage, and design pressure. Results of the analysis show, remarkably, that the effects of radius ratio, pressure and degree of autofrettage can be ignored when establishing strength and toughness requirements for code purposes. The only parameters that enter into the calculation are yield strength, toughness and vessel thickness. The final results can easily be represented as a graph of yield strength against toughness on which several curves, one for each vessel thickness, are plotted
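
    To make the "fifty times the plastic zone" criterion concrete, the relations below use Irwin's plane-strain plastic zone estimate; the exact formulation used in the paper may differ, so this is only an illustrative sketch of the reasoning.

```latex
\[
r_p \simeq \frac{1}{6\pi}\left(\frac{K_{Ic}}{\sigma_y}\right)^{2},
\qquad
b \;\ge\; 50\,r_p \;\approx\; 2.65\left(\frac{K_{Ic}}{\sigma_y}\right)^{2},
\]
\[
\text{so brittle (plane-strain) fracture requires}\quad
K_{Ic} \;\lesssim\; \sigma_y\sqrt{\frac{t}{2.65}},
\]
```

    where the limiting dimension b (the uncracked ligament) is bounded by the wall thickness t and sigma_y is the yield strength. This is why the final result can be plotted as yield strength versus toughness, with one curve per vessel thickness.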

  6. High performance soft magnetic materials

    2017-01-01

    This book provides comprehensive coverage of the current state-of-the-art in soft magnetic materials and related applications, with particular focus on amorphous and nanocrystalline magnetic wires and ribbons and sensor applications. Expert chapters cover preparation, processing, tuning of magnetic properties, modeling, and applications. Cost-effective soft magnetic materials are required in a range of industrial sectors, such as magnetic sensors and actuators, microelectronics, cell phones, security, automobiles, medicine, health monitoring, aerospace, informatics, and electrical engineering. This book presents both fundamentals and applications to enable academic and industry researchers to pursue further developments of these key materials. This highly interdisciplinary volume represents essential reading for researchers in materials science, magnetism, electrodynamics, and modeling who are interested in working with soft magnets. Covers magnetic microwires, sensor applications, amorphous and nanocrystalli...

  7. Input data requirements for performance modelling and monitoring of photovoltaic plants

    Gavriluta, Anamaria Florina; Spataru, Sergiu; Sera, Dezso

    2018-01-01

    This work investigates the input data requirements in the context of performance modeling of thin-film photovoltaic (PV) systems. The analysis focuses on the PVWatts performance model, well suited for on-line performance monitoring of PV strings due to its low number of parameters and high ... Modelling the performance of the PV modules at high irradiances requires a dataset of only a few hundred samples in order to obtain a power estimation accuracy of ~1-2%.
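
    Since the abstract centres on the PVWatts model's small parameter set, here is a minimal sketch of the PVWatts DC power equation as it might be used for string-level monitoring. The numerical values below (nameplate power, temperature coefficient, NOCT-style cell-temperature estimate) are illustrative assumptions, not values taken from the study.

```python
# Minimal PVWatts-style DC power estimate for a PV string:
#   P_dc = (G_poa / G_ref) * P_dc0 * (1 + gamma * (T_cell - T_ref))

G_REF = 1000.0   # reference plane-of-array irradiance [W/m^2]
T_REF = 25.0     # reference cell temperature [degC]

def cell_temperature(g_poa, t_ambient, noct=45.0):
    """Simple NOCT-style cell temperature estimate (an assumption added here,
    not part of the PVWatts power equation itself)."""
    return t_ambient + (noct - 20.0) / 800.0 * g_poa

def pvwatts_dc_power(g_poa, t_ambient, p_dc0=3000.0, gamma=-0.002):
    """Scale the nameplate DC power p_dc0 [W] by irradiance and correct for
    cell temperature with coefficient gamma [1/degC]."""
    t_cell = cell_temperature(g_poa, t_ambient)
    return (g_poa / G_REF) * p_dc0 * (1.0 + gamma * (t_cell - T_REF))

if __name__ == "__main__":
    # A high-irradiance operating point, the regime the study finds easiest to fit.
    print(f"{pvwatts_dc_power(950.0, 20.0):.1f} W")
```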

  8. High-Performance Data Converters

    Steensgaard-Madsen, Jesper

    ...-resolution internal D/A converters are required. Unit-element mismatch-shaping D/A converters are analyzed, and the concept of mismatch-shaping is generalized to include scaled-element D/A converters. Several types of scaled-element mismatch-shaping D/A converters are proposed. Simulations show that, when implemented ... in a standard CMOS technology, they can be designed to yield 100 dB performance at 10 times oversampling. The proposed scaled-element mismatch-shaping D/A converters are well suited for use as the feedback stage in oversampled delta-sigma quantizers. It is, however, not easy to make full use of their potential ... -order difference of the output signal from the loop filter's first integrator stage. This technique avoids the need for accurate matching of analog and digital filters that characterizes the MASH topology, and it preserves the signal-band suppression of quantization errors. Simulations show that quantizers...
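
    As background to the unit-element mismatch-shaping converters mentioned above, the sketch below implements data-weighted averaging (DWA), one standard first-order unit-element selection scheme. It is offered as a generic illustration only, not as the specific scaled-element architectures proposed in this work.

```python
# Data-weighted averaging (DWA): rotate through the unit elements so that each
# element's usage is equalized over time, first-order shaping the mismatch error.

def dwa_select(code, n_elements, pointer):
    """Return the unit-element indices to enable for this sample
    (0 <= code <= n_elements) and the updated rotation pointer."""
    selected = [(pointer + i) % n_elements for i in range(code)]
    return selected, (pointer + code) % n_elements

def dwa_dac(codes, element_values):
    """Convert a sequence of integer codes using mismatched unit elements."""
    n = len(element_values)
    pointer = 0
    out = []
    for code in codes:
        sel, pointer = dwa_select(code, n, pointer)
        out.append(sum(element_values[i] for i in sel))
    return out

if __name__ == "__main__":
    # Eight nominally-unit elements with a small, arbitrary mismatch.
    elements = [1.00, 1.02, 0.99, 1.01, 0.98, 1.00, 1.03, 0.97]
    print(dwa_dac([3, 5, 2, 7, 4], elements))
```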

  9. Can Knowledge of the Characteristics of "High Performers" Be Generalised?

    McKenna, Stephen

    2002-01-01

    Two managers described as high performing constructed complexity maps of their organization/world. The maps suggested that high performance is socially constructed and negotiated in specific contexts and management competencies associated with it are context specific. Development of high performers thus requires personalized coaching more than…

  10. Cost-optimal levels for energy performance requirements

    Thomsen, Kirsten Engelund; Aggerholm, Søren; Kluttig-Erhorn, Heike

    2011-01-01

    The CA conducted a study on experiences and challenges for setting cost optimal levels for energy performance requirements. The results were used as input by the EU Commission in their work of establishing the Regulation on a comparative methodology framework for calculating cost optimal levels of minimum energy performance requirements. In addition to the summary report released in August 2011, the full detailed report on this study is now also made available, just as the EC is about to publish its proposed Regulation for MS to apply in their process to update national building requirements.

  11. Input data required for specific performance assessment codes

    Seitz, R.R.; Garcia, R.S.; Starmer, R.J.; Dicke, C.A.; Leonard, P.R.; Maheras, S.J.; Rood, A.S.; Smith, R.W.

    1992-02-01

    The Department of Energy's National Low-Level Waste Management Program at the Idaho National Engineering Laboratory generated this report on input data requirements for computer codes to assist States and compacts in their performance assessments. This report gives generators, developers, operators, and users some guidelines on what input data is required to satisfy 22 common performance assessment codes. Each of the codes is summarized and a matrix table is provided to allow comparison of the various input required by the codes. This report does not determine or recommend which codes are preferable

  12. Working group 4B - human intrusion: Design/performance requirements

    Channell, J.

    1993-01-01

    There is no summary of the progress made by working group 4B (Human Intrusion: Design/performance Requirements) during the Electric Power Research Institute's EPRI Workshop on the technical basis of EPA HLW Disposal Criteria, March 1993. This group was to discuss the waste disposal standard, 40 CFR Part 191, in terms of the design and performance requirements of human intrusion. Instead, because there were so few members, they combined with working group 4A and studied the three-tier approach to evaluating postclosure performance

  13. High Performance Commercial Fenestration Framing Systems

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

    A major objective of the U.S. Department of Energy is to have a zero energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope, as they control over 55% of building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e. windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherently good structural properties and the long service life required of commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainable features. However, from an energy efficiency point of view, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys are therefore less effective barriers to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the material of choice for commercial framing systems and dominates the commercial/architectural fenestration market for the reasons mentioned above. In addition, there is no other cost effective and energy efficient replacement material available to take the place of aluminum in the commercial/architectural market. Hence it is imperative to improve the performance of aluminum framing systems in order to improve the energy performance of commercial fenestration systems and, in turn, reduce the energy consumption of commercial buildings and achieve zero energy buildings by 2025. The objective of this project was to develop high performance, energy efficient commercial

  14. Strategy Guideline: Partnering for High Performance Homes

    Prahl, D.

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants and where relationships are, in general, adversarial as opposed to cooperative, the chances of any one building system failing are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter, progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  15. Integrated plasma control for high performance tokamaks

    Humphreys, D.A.; Deranian, R.D.; Ferron, J.R.; Johnson, R.D.; LaHaye, R.J.; Leuer, J.A.; Penaflor, B.G.; Walker, M.L.; Welander, A.S.; Jayakumar, R.J.; Makowski, M.A.; Khayrutdinov, R.R.

    2005-01-01

    Sustaining high performance in a tokamak requires controlling many equilibrium shape and profile characteristics simultaneously with high accuracy and reliability, while suppressing a variety of MHD instabilities. Integrated plasma control, the process of designing high-performance tokamak controllers based on validated system response models and confirming their performance in detailed simulations, provides a systematic method for achieving and ensuring good control performance. For present-day devices, this approach can greatly reduce the need for machine time traditionally dedicated to control optimization, and can allow determination of high-reliability controllers prior to ever producing the target equilibrium experimentally. A full set of tools needed for this approach has recently been completed and applied to present-day devices including DIII-D, NSTX and MAST. This approach has proven essential in the design of several next-generation devices including KSTAR, EAST, JT-60SC, and ITER. We describe the method, results of design and simulation tool development, and recent research producing novel approaches to equilibrium and MHD control in DIII-D. (author)

  16. 4D Dynamic Required Navigation Performance Final Report

    Finkelsztein, Daniel M.; Sturdy, James L.; Alaverdi, Omeed; Hochwarth, Joachim K.

    2011-01-01

    New advanced four dimensional trajectory (4DT) procedures under consideration for the Next Generation Air Transportation System (NextGen) require an aircraft to precisely navigate relative to a moving reference such as another aircraft. Examples are Self-Separation for enroute operations and Interval Management for in-trail and merging operations. The current construct of Required Navigation Performance (RNP), defined for fixed-reference-frame navigation, is not sufficiently specified to be applicable to defining performance levels of such air-to-air procedures. An extension of RNP to air-to-air navigation would enable these advanced procedures to be implemented with a specified level of performance. The objective of this research effort was to propose new 4D Dynamic RNP constructs that account for the dynamic spatial and temporal nature of Interval Management and Self-Separation, develop mathematical models of the Dynamic RNP constructs, "Required Self-Separation Performance" and "Required Interval Management Performance," and to analyze the performance characteristics of these air-to-air procedures using the newly developed models. This final report summarizes the activities led by Raytheon, in collaboration with GE Aviation and SAIC, and presents the results from this research effort to expand the RNP concept to a dynamic 4D frame of reference.

  17. High performance current controller for particle accelerator magnets supply

    Maheshwari, Ram Krishan; Bidoggia, Benoit; Munk-Nielsen, Stig

    2013-01-01

    The electromagnets in modern particle accelerators require high performance power supplies whose output must track the current reference with very high accuracy (down to 50 ppm). This demands a very high bandwidth controller design. A converter based on the buck converter topology is used...
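
    To give a feel for what a 50 ppm tracking accuracy implies for the measurement chain of such a controller, the short calculation below converts the ppm figure into an equivalent number of bits. This is generic arithmetic added here for illustration, not a design detail from the paper.

```python
import math

def bits_for_ppm(ppm):
    """Minimum resolution (in bits) whose LSB, as a fraction of full scale,
    is no larger than the required accuracy in parts per million."""
    return math.ceil(math.log2(1e6 / ppm))

if __name__ == "__main__":
    print(bits_for_ppm(50))   # -> 15 bits (a 15-bit LSB is ~30.5 ppm of full scale)
```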

  18. Management issues for high performance storage systems

    Louis, S. [Lawrence Livermore National Lab., CA (United States); Burris, R. [Oak Ridge National Lab., TN (United States)

    1995-03-01

    Managing distributed high-performance storage systems is complex and, although sharing common ground with traditional network and systems management, presents unique storage-related issues. Integration technologies and frameworks exist to help manage distributed network and system environments. Industry-driven consortia provide open forums where vendors and users cooperate to leverage solutions. But these new approaches to open management fall short of addressing the needs of scalable, distributed storage. We discuss the motivation and requirements for storage system management (SSM) capabilities and describe how SSM manages distributed servers and storage resource objects in the High Performance Storage System (HPSS), a new storage facility for data-intensive applications and large-scale computing. Modern storage systems, such as HPSS, require many SSM capabilities, including server and resource configuration control, performance monitoring, quality of service, flexible policies, file migration, file repacking, accounting, and quotas. We present results of initial HPSS SSM development including design decisions and implementation trade-offs. We conclude with plans for follow-on work and provide storage-related recommendations for vendors and standards groups seeking enterprise-wide management solutions.

  19. A Linux Workstation for High Performance Graphics

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party, 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware, our goal was to provide a free, redistributable, and fully-compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.

  20. Biomedical Requirements for High Productivity Computing Systems

    2005-04-01

    differences in heart muscle structure between normal and brittle-boned mice suffering from osteogenesis imperfecta (OI) because of a deficiency in the protein ... reached. In a typical comparative modeling exercise one would use a heuristic algorithm to determine possible sequences of interest, then the Smith ... example exercise, require a description of the cellular events that create demands for oxygen. Having cellular level equations together with

  1. A high performance architecture for accelerator controls

    Allen, M.; Hunt, S.M; Lue, H.; Saltmarsh, C.G.; Parker, C.R.C.B.

    1991-01-01

    The demands placed on the Superconducting Super Collider (SSC) control system due to large distances, high bandwidth and fast response time required for operation will require a fresh approach to the data communications architecture of the accelerator. The prototype design effort aims at providing deterministic communication across the accelerator complex with a response time of < 100 ms and total bandwidth of 2 Gbits/sec. It will offer a consistent interface for a large number of equipment types, from vacuum pumps to beam position monitors, providing appropriate communications performance for each equipment type. It will consist of highly parallel links to all equipment: those with computing resources, non-intelligent direct control interfaces, and data concentrators. This system will give each piece of equipment a dedicated link of fixed bandwidth to the control system. Application programs will have access to all accelerator devices which will be memory mapped into a global virtual addressing scheme. Links to devices in the same geographical area will be multiplexed using commercial Time Division Multiplexing equipment. Low-level access will use reflective memory techniques, eliminating processing overhead and complexity of traditional data communication protocols. The use of commercial standards and equipment will enable a high performance system to be built at low cost
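
    As a rough illustration of the numbers quoted above (sub-100 ms response over accelerator-scale distances with 2 Gbit/s aggregate bandwidth), the sketch below divides the aggregate bandwidth over a fixed number of TDM device links and compares fibre propagation delay with the response budget. The device count and distance used are assumptions chosen for illustration only.

```python
# Rough budget check for a TDM-based, fixed-bandwidth-per-device control network.

C_FIBRE_KM_PER_MS = 200.0   # approximate speed of light in fibre: 200 km per millisecond

def per_device_bandwidth(total_bps, n_devices):
    """Fixed bandwidth available to each device link under pure time-division multiplexing."""
    return total_bps / n_devices

def propagation_delay_ms(distance_km):
    """One-way propagation delay over optical fibre."""
    return distance_km / C_FIBRE_KM_PER_MS

if __name__ == "__main__":
    total = 2e9          # 2 Gbit/s aggregate (from the text)
    n_devices = 20000    # assumed device count, for illustration
    distance = 40.0      # assumed one-way path length in km
    print(f"{per_device_bandwidth(total, n_devices) / 1e3:.0f} kbit/s per device")
    print(f"{2 * propagation_delay_ms(distance):.2f} ms round-trip propagation "
          f"out of a 100 ms response budget")
```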

  2. A high performance architecture for accelerator controls

    Allen, M.; Hunt, S.M.; Lue, H.; Saltmarsh, C.G.; Parker, C.R.C.B.

    1991-03-01

    The demands placed on the Superconducting Super Collider (SSC) control system due to large distances, high bandwidth and fast response time required for operation will require a fresh approach to the data communications architecture of the accelerator. The prototype design effort aims at providing deterministic communication across the accelerator complex with a response time of <100 ms and total bandwidth of 2 Gbits/sec. It will offer a consistent interface for a large number of equipment types, from vacuum pumps to beam position monitors, providing appropriate communications performance for each equipment type. It will consist of highly parallel links to all equipment: those with computing resources, non-intelligent direct control interfaces, and data concentrators. This system will give each piece of equipment a dedicated link of fixed bandwidth to the control system. Application programs will have access to all accelerator devices which will be memory mapped into a global virtual addressing scheme. Links to devices in the same geographical area will be multiplexed using commercial Time Division Multiplexing equipment. Low-level access will use reflective memory techniques, eliminating processing overhead and complexity of traditional data communication protocols. The use of commercial standards and equipment will enable a high performance system to be built at low cost.

  3. High Performance OLED Panel and Luminaire

    Spindler, Jeffrey [OLEDWorks LLC, Rochester, NY (United States)

    2017-02-20

    In this project, OLEDWorks developed and demonstrated the technology required to produce OLED lighting panels with high energy efficiency and excellent light quality. OLED panels developed in this program produce high quality warm white light with CRI greater than 85 and efficacy up to 80 lumens per watt (LPW). An OLED luminaire employing 24 of the high performance panels produces practical levels of illumination for general lighting, with a flux of over 2200 lumens at 60 LPW. This is a significant advance in the state of the art for OLED solid-state lighting (SSL), which is expected to be a complementary light source to the more advanced LED SSL technology that is rapidly replacing all other traditional forms of lighting.

  4. 42 CFR 84.103 - Man tests; performance requirements.

    2010-10-01

    Title 42, Public Health; Public Health Service, Department of Health and Human Services; Occupational Safety and Health Research and Related Activities; Approval of Respiratory Protective Devices; Self-Contained Breathing Apparatus; § 84.103 Man tests;...

  5. 19 CFR 143.5 - System performance requirements.

    2010-04-01

    Title 19, Customs Duties; U.S. Customs and Border Protection, Department of Homeland Security; Department of ... must demonstrate that his system can interface directly with the Customs computer and ensure accurate...

  6. Performance requirements for the single-shell tank

    GRENARD, C.E.

    1999-01-01

    This document provides performance requirements for the waste storage and waste feed delivery functions of the Single-Shell Tank (SST) System. The requirements presented herein will be used as a basis for evaluating the ability of the system to complete the single-shell tank waste feed delivery mission. They will also be used to select the technology or technologies for retrieving waste from the tanks selected for the single-shell tank waste feed delivery mission, assumed to be 241-C-102 and 241-C-104. This revision of the Performance Requirements for the SST is based on the findings of the SST Functional Analysis and is reflected in the current System Specification for the SST System

  7. Radioactive material package test standards and performance requirements - public perception

    Pope, R.B.; Shappert, L.B.; Rawl, R.R.

    1992-01-01

    This paper addresses issues related to the public perception of the regulatory test standards and performance requirements for packaging and transporting radioactive material. Specifically, it addresses the adequacy of the package performance standards and testing for Type B packages, which are those packages designed for transporting the most hazardous quantities and forms of radioactive material. Type B packages are designed to withstand accident conditions in transport. To improve public perception, the public needs to better understand: (a) the regulatory standards and requirements themselves, (b) the extensive history underlying their development, and (c) the soundness of the technical foundation. The public needs to be fully informed on studies, tests, and analyses that have been carried out worldwide and form the basis of the regulatory standards and requirements. This paper provides specific information aimed at improving the public perception of packages test standards

  8. Performance demonstration requirements for eddy current steam generator tube inspection

    Kurtz, R.J.; Heasler, P.G.; Anderson, C.M.

    1992-10-01

    This paper describes the methodology used for developing performance demonstration tests for steam generator tube eddy current (ET) inspection systems. The methodology is based on statistical design principles. Implementation of a performance demonstration test based on these design principles will help to ensure that field inspection systems have a high probability of detecting and correctly sizing tube degradation. The technical basis for the ET system performance thresholds is presented. Probability of detection and flaw sizing tests are described
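
    One common statistical-design element behind such performance demonstration tests is choosing the number of flawed test specimens so that a successful demonstration bounds the probability of detection (POD) from below at a given confidence. The sketch below shows that calculation for the zero-miss case; it is an illustrative assumption, not the acceptance criterion actually used in the referenced methodology.

```python
import math

def specimens_for_pod(pod_target, confidence):
    """Smallest number n of flawed specimens such that detecting all n
    demonstrates POD >= pod_target at the given one-sided confidence.
    Derived from requiring pod_target**n <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(pod_target))

if __name__ == "__main__":
    # Classic example: demonstrating 90% POD at 95% confidence.
    print(specimens_for_pod(0.90, 0.95))   # -> 29 (the "29 of 29" rule)
```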

  9. Required performance to the concrete structure of the accelerator facilities

    Irie, Masaaki; Yoshioka, Masakazu; Miyahara, Masanobu

    2006-01-01

    Accelerator facilities are often built as underground concrete structures, both to provide radiation shielding and to ensure structural stability. The performance required of the concrete structures of an accelerator facility is largely the same as for general civil infrastructure, but the target performance differs in important respects. This paper describes those differences and presents an approach to construction management of the concrete structures of an accelerator facility, from ordering through design, supervision and operation. Future prospects for structural analysis of concrete materials using accelerator-based neutron sources are also outlined. (author)

  10. Learning Apache Solr high performance

    Mohan, Surendra

    2014-01-01

    This book is an easy-to-follow guide, full of hands-on, real-world examples. Each topic is explained and demonstrated in a specific and user-friendly flow, from search optimization using Solr to deployment of ZooKeeper applications. This book is ideal for Apache Solr developers who want to learn different techniques to optimize Solr performance with utmost efficiency, along with effectively troubleshooting the problems that usually occur while trying to boost performance. Familiarity with search servers and database querying is expected.

  11. High-performance composite chocolate

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-07-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with the material selection process. In a competition-based practical, first-year undergraduate students design, cost and cast composite chocolate samples to maximize a particular performance criterion. The same activity could be adapted for any level of education to introduce the subject of materials properties and their effects on the material chosen for specific applications.

  12. Do talento ao alto rendimento: indicadores de acesso à excelência no handebol From talent to a high level of performance: key requirements to access the excellence in handball

    Luís Massuça

    2010-12-01

    Talent is a key requirement for attaining excellence in competitive sport, and its identification is the first step of a long process of specialization that allows the right subjects to be selected. To understand which variables coaches judge most influential for the success of the (male) handball athlete, a questionnaire was administered to 71 handball coaches ("Questionnaire to Handball Coaches - QTA"; MASSUÇA, 2007). The coaches were asked to rate the importance of each factor and performance indicator for the success of the handball player in general (A) and to do the same for each specific playing position (wing, P; backward left/right, L; backward centre, C; pivot, Pi; goalkeeper, GR). The results show that there is not one handball player profile, but several. It can thus be concluded that success in handball can be achieved by athletes with different characteristics. In addition, the inventory presented (of the qualities required of the high-level handball athlete) may serve as a reference for the construction of a talent selection model.

  13. High-Performance Composite Chocolate

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-01-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with…

  14. Toward High-Performance Organizations.

    Lawler, Edward E., III

    2002-01-01

    Reviews management changes that companies have made over time in adopting or adapting four approaches to organizational performance: employee involvement, total quality management, re-engineering, and knowledge management. Considers future possibilities and defines a new view of what constitutes effective organizational design in management.…

  15. High performance Mo adsorbent PZC

    Anon,

    1998-10-01

    We have developed Mo adsorbents for a natural Mo(n,γ)99Mo-99mTc generator. Among them, the highest performance adsorbent, which we call PZC, can adsorb about 250 mg-Mo/g. In this report, we show the structure, the Mo adsorption mechanism, and other properties of PZC that are useful when examining Mo adsorption and elution of 99mTc. (author)

  16. Indoor Air Quality in High Performance Schools

    High performance schools are facilities that improve the learning environment while saving energy, resources, and money. The key is understanding the lifetime value of high performance schools and effectively managing priorities, time, and budget.

  17. High performance nuclear fuel element

    Mordarski, W.J.; Zegler, S.T.

    1980-01-01

    A fuel-pellet composition is disclosed for use in fast breeder reactors. Uranium carbide particles are mixed with a powder of uranium-plutonium carbides having a stable microstructure. The resulting mixture is formed into fuel pellets. The pellets thus produced exhibit a relatively low propensity to swell while maintaining a high density

  18. High Performance JavaScript

    Zakas, Nicholas

    2010-01-01

    If you're like most developers, you rely heavily on JavaScript to build interactive and quick-responding web applications. The problem is that all of those lines of JavaScript code can slow down your apps. This book reveals techniques and strategies to help you eliminate performance bottlenecks during development. You'll learn how to improve execution time, downloading, interaction with the DOM, page life cycle, and more. Yahoo! frontend engineer Nicholas C. Zakas and five other JavaScript experts -- Ross Harmes, Julien Lecomte, Steven Levithan, Stoyan Stefanov, and Matt Sweeney -- demonstra

  19. Performance Evaluation and Requirements Assessment for Gravity Gradient Referenced Navigation

    Jisun Lee

    2015-07-01

    In this study, simulation tests for gravity gradient referenced navigation (GGRN) are conducted to verify the effects of various factors such as database (DB) and sensor errors, flight altitude, DB resolution, initial errors, and measurement update rates on the navigation performance. Based on the simulation results, requirements for GGRN are established for position determination with certain target accuracies. It is found that DB and sensor errors and flight altitude have strong effects on the navigation performance. In particular, a DB and sensor with accuracies of 0.1 E and 0.01 E, respectively, are required to determine the position more accurately than or at a level similar to the navigation performance of terrain referenced navigation (TRN). In most cases, the horizontal position error of GGRN is less than 100 m. However, the navigation performance of GGRN is similar to or worse than that of a pure inertial navigation system when the DB and sensor errors are 3 E or 5 E each and the flight altitude is 3000 m. Considering that the accuracy of currently available gradiometers is about 3 E or 5 E, GGRN does not show much advantage over TRN at present. However, GGRN is expected to exhibit much better performance in the near future when accurate DBs and gravity gradiometers are available.

  20. Carpet Aids Learning in High Performance Schools

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  1. High-Performance Wireless Telemetry

    Griebeler, Elmer; Nawash, Nuha; Buckley, James

    2011-01-01

    Prior technology for machinery data acquisition used slip rings, FM radio communication, or non-real-time digital communication. Slip rings are often noisy, require much space that may not be available, and require access to the shaft, which may not be possible. FM radio is not accurate or stable, and is limited in the number of channels, often with channel crosstalk, and intermittent as the shaft rotates. Non-real-time digital communication is very popular, but complex, with long development time, and objections from users who need continuous waveforms from many channels. This innovation extends the amount of information conveyed from a rotating machine to a data acquisition system while keeping the development time short and keeping the rotating electronics simple, compact, stable, and rugged. The data are all real time. The product of the number of channels, times the bit resolution, times the update rate, gives a data rate higher than available by older methods. The telemetry system consists of a data-receiving rack that supplies magnetically coupled power to a rotating instrument amplifier ring in the machine being monitored. The ring digitizes the data and magnetically couples the data back to the rack, where it is made available. The transformer is generally a ring positioned around the axis of rotation with one side of the transformer free to rotate and the other side held stationary. The windings are laid in the ring; this gives the data immunity to any rotation that may occur. A medium-frequency sine-wave power source in a rack supplies power through a cable to a rotating ring transformer that passes the power on to a rotating set of electronics. The electronics power a set of up to 40 sensors and provides instrument amplifiers for the sensors. The outputs from the amplifiers are filtered and multiplexed into a serial ADC. The output from the ADC is connected to another rotating ring transformer that conveys the serial data from the rotating section to
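
    The abstract notes that the product of channel count, bit resolution, and update rate sets the telemetry data rate. The one-line calculation below works that product out for the 40-channel case mentioned in the text; the bit resolution and update rate used are illustrative assumptions.

```python
def telemetry_bit_rate(channels, bits_per_sample, updates_per_second):
    """Raw payload bit rate = channels x resolution x update rate."""
    return channels * bits_per_sample * updates_per_second

if __name__ == "__main__":
    # 40 sensors (from the text); 16-bit samples at 10 kHz are assumptions.
    rate = telemetry_bit_rate(40, 16, 10_000)
    print(f"{rate / 1e6:.1f} Mbit/s before framing overhead")   # -> 6.4 Mbit/s
```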

  2. Building Trust in High-Performing Teams

    Aki Soudunsaari

    2012-06-01

    Full Text Available Facilitation of growth is more about good, trustworthy contacts than capital. Trust is a driving force for business creation, and to create a global business you need to build a team that is capable of meeting the challenge. Trust is a key factor in team building and a needed enabler for cooperation. In general, trust building is a slow process, but it can be accelerated with open interaction and good communication skills. The fast-growing and ever-changing nature of global business sets demands for cooperation and team building, especially for startup companies. Trust building needs personal knowledge and regular face-to-face interaction, but it also requires empathy, respect, and genuine listening. Trust increases communication, and rich and open communication is essential for the building of high-performing teams. Other building materials are a shared vision, clear roles and responsibilities, willingness for cooperation, and supporting and encouraging leadership. This study focuses on trust in high-performing teams. It asks whether it is possible to manage trust and which tools and operation models should be used to speed up the building of trust. In this article, preliminary results from the authors’ research are presented to highlight the importance of sharing critical information and having a high level of communication through constant interaction.

  3. High performance electromagnetic simulation tools

    Gedney, Stephen D.; Whites, Keith W.

    1994-10-01

    Army Research Office Grant #DAAH04-93-G-0453 has supported the purchase of 24 additional compute nodes that were installed in the Intel iPSC/860 hypercube at the University of Kentucky (UK), rendering a 32-node multiprocessor. This facility has allowed the investigators to explore and extend the boundaries of electromagnetic simulation for important areas of defense concern, including microwave monolithic integrated circuit (MMIC) design/analysis and electromagnetic materials research and development. The iPSC/860 has also provided an ideal platform for MMIC circuit simulations. A number of parallel methods based on direct time-domain solutions of Maxwell's equations have been developed on the iPSC/860, including a parallel finite-difference time-domain (FDTD) algorithm, and a parallel planar generalized Yee-algorithm (PGY). The iPSC/860 has also provided an ideal platform on which to develop a 'virtual laboratory' to numerically analyze, scientifically study and develop new types of materials with beneficial electromagnetic properties. These materials simulations are capable of assembling hundreds of microscopic inclusions from which an electromagnetic full-wave solution will be obtained in toto. This powerful simulation tool has enabled full-wave analysis of complex multicomponent MMIC devices and of the electromagnetic properties of many types of materials to be performed numerically rather than strictly in the laboratory.
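
    For context on the FDTD method named above, the sketch below is a minimal, serial one-dimensional FDTD leapfrog update in Python. It is a textbook illustration only, not the parallel iPSC/860 codes developed under the grant, and the grid size, time steps, and source parameters are arbitrary.

```python
import numpy as np

# Minimal 1-D FDTD in free space (normalized units): leapfrog update of E and H.
# Purely illustrative; not the parallel iPSC/860 implementation from the abstract.
nz, nt = 200, 500          # grid cells, time steps (arbitrary)
ez = np.zeros(nz)          # electric field
hy = np.zeros(nz)          # magnetic field

for n in range(nt):
    # Update H from the spatial derivative of E
    hy[:-1] += 0.5 * (ez[1:] - ez[:-1])
    # Update E from the spatial derivative of H
    ez[1:] += 0.5 * (hy[1:] - hy[:-1])
    # Soft Gaussian source injected near the center of the grid
    ez[nz // 2] += np.exp(-((n - 30) ** 2) / 100.0)

print("peak |Ez| =", float(np.max(np.abs(ez))))
```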

  4. High performance polyethylene nanocomposite fibers

    A. Dorigato

    2012-12-01

    Full Text Available A high density polyethylene (HDPE) matrix was melt compounded with 2 vol% of dimethyldichlorosilane treated fumed silica nanoparticles. Nanocomposite fibers were prepared by melt spinning through a co-rotating twin screw extruder and drawing at 125°C in air. Thermo-mechanical and morphological properties of the resulting fibers were then investigated. The introduction of nanosilica improved the drawability of the fibers, allowing the achievement of higher draw ratios with respect to the neat matrix. The elastic modulus and creep stability of the fibers were remarkably improved upon nanofiller addition, with a retention of the pristine tensile properties at break. Transmission electron microscope (TEM) images showed that the original morphology of the silica aggregates was disrupted by the applied drawing.

  5. academic performance of less endowed high school students

    User

    girls) who obtained the basic requirements for courses that they ... Academic performance of students from less endowed senior high ... 106 ... only pay academic facility user fees. The second ..... certificate education, Pro is senior executive.

  6. HIGH-PERFORMANCE COATING MATERIALS

    SUGAMA,T.

    2007-01-01

    Corrosion, erosion, oxidation, and fouling by scale deposits impose critical issues in selecting the metal components used at geothermal power plants operating at brine temperatures up to 300 C. Replacing these components is very costly and time consuming. Currently, components made of titanium alloy and stainless steel commonly are employed for dealing with these problems. However, another major consideration in using these metals is not only that they are considerably more expensive than carbon steel, but also the susceptibility of corrosion-preventing passive oxide layers that develop on their outermost surface sites to reactions with brine-induced scales, such as silicate, silica, and calcite. Such reactions lead to the formation of strong interfacial bonds between the scales and oxide layers, causing the accumulation of multiple layers of scales, and the impairment of the plant component's function and efficacy; furthermore, a substantial amount of time is entailed in removing them. This cleaning operation essential for reusing the components is one of the factors causing the increase in the plant's maintenance costs. If inexpensive carbon steel components could be coated and lined with cost-effective high-hydrothermal temperature stable, anti-corrosion, -oxidation, and -fouling materials, this would improve the power plant's economic factors by engendering a considerable reduction in capital investment, and a decrease in the costs of operations and maintenance through optimized maintenance schedules.

  7. MYRRHA cryogenic system study on performances and reliability requirements

    Junquera, T.; Chevalier, N.R.; Thermeau, J.P.; Medeiros Romao, L.; Vandeplassche, D.

    2015-01-01

    A precise evaluation of the cryogenic requirements for an accelerator-driven system such as the MYRRHA project has been performed. In particular, operation temperature, thermal losses, and required cryogenic power have been evaluated. A preliminary architecture of the cryogenic system, including all its major components as well as the principles for the cryogenic fluids distribution, has been proposed. A detailed study on the reliability aspects has also been initiated. This study is based on the reliability of large cryogenic systems used for accelerators like HERA, LHC or the SNS Linac. The requirements to guarantee good cryogenic system availability can be summarised as follows: 1) Mean Time Between Maintenance (MTBM) should be > 8,000 hours; 2) Valves, heat exchangers and turbines are particularly sensitive to impurities (dust, oil, gases), and improvements are necessary to keep these at a minimal level in such components; 3) Redundancy studies for all elements containing moving/vibrating parts (turbines, compressors, including their respective bearings and seal shafts) are necessary; 4) Periodic maintenance is mandatory: oil checks, control of screw compressors every 10,000-15,000 hours, a vibration surveillance programme, etc; 5) Special control and maintenance of utilities equipment (supply of cooling water, compressed air and electrical supply) is necessary; 6) Periodic vacuum checks to identify leaks, such as in the insulation vacuum of transfer lines and distribution boxes, are necessary; 7) Easily exchangeable cold compressors are required
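
    The availability implied by the MTBM target can be estimated with the standard steady-state relation A = MTBM / (MTBM + MDT). The 8,000 h MTBM target is from the abstract; the mean downtime per maintenance intervention used below is purely an assumption for illustration, not a figure from the MYRRHA study.

```python
# Rough steady-state availability estimate, A = MTBM / (MTBM + MDT).
# MTBM target (8,000 h) is from the abstract; the mean downtime per
# maintenance event (MDT) is an assumed value for illustration only.
mtbm_hours = 8_000   # mean time between maintenance (requirement from the abstract)
mdt_hours = 72       # assumed mean downtime per maintenance intervention

availability = mtbm_hours / (mtbm_hours + mdt_hours)
print(f"Estimated cryogenic-system availability: {availability:.3%}")
```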

  8. Application of systems engineering to determine performance requirements for repository waste packages

    Aitken, E.A.; Stimmell, G.L.

    1987-01-01

    The waste package for a nuclear waste repository in salt must contribute substantially to the performance objectives defined by the Salt Repository Project (SRP) general requirements document governing disposal of high-level waste. The waste package is one of the engineered barriers providing containment. In establishing the performance requirements for a project focused on design and fabrication of the waste package, the systems engineering methodology has been used to translate the hierarchy requirements for the repository system to specific performance requirements for design and fabrication of the waste package, a subsystem of the repository. This activity is ongoing and requires a methodology that provides traceability and is capable of iteration as baseline requirements are refined or changed. The purpose of this summary is to describe the methodology being used and the way it can be applied to similar activities in the nuclear industry
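
    A hedged sketch of the kind of traceability the abstract describes is shown below: each requirement records its parent, so a waste-package performance requirement can be traced back up the repository-level hierarchy and the structure can be iterated as baselines change. The class and the example requirement texts are hypothetical and are not taken from the SRP documents.

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class Requirement:
    """One node in a requirements hierarchy, with upward traceability."""
    rid: str
    text: str
    parent: Optional["Requirement"] = None
    children: List["Requirement"] = field(default_factory=list)

    def derive(self, rid: str, text: str) -> "Requirement":
        """Create a lower-level requirement traced to this one."""
        child = Requirement(rid, text, parent=self)
        self.children.append(child)
        return child

    def trace(self) -> List[str]:
        """Return the chain of requirement IDs from this node up to the root."""
        node, chain = self, []
        while node is not None:
            chain.append(node.rid)
            node = node.parent
        return chain

# Hypothetical example of tracing a waste-package requirement to the system level.
system = Requirement("SYS-1", "Repository shall isolate high-level waste")
barrier = system.derive("BAR-2", "Engineered barriers shall provide containment")
package = barrier.derive("WP-7", "Waste package shall remain intact for the containment period")
print(" -> ".join(package.trace()))   # WP-7 -> BAR-2 -> SYS-1
```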

  9. Delivering high performance BWR fuel reliably

    Schardt, J.F.

    1998-01-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel, which can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  10. High-performance vertical organic transistors.

    Kleemann, Hans; Günther, Alrun A; Leo, Karl; Lüssem, Björn

    2013-11-11

    Vertical organic thin-film transistors (VOTFTs) are promising devices to overcome the transconductance and cut-off frequency restrictions of horizontal organic thin-film transistors. The basic physical mechanisms of VOTFT operation, however, are not well understood, and VOTFTs often require complex patterning techniques using self-assembly processes, which impedes future large-area production. In this contribution, high-performance vertical organic transistors comprising pentacene for p-type operation and C60 for n-type operation are presented. The static current-voltage behavior as well as the fundamental scaling laws of such transistors are studied, disclosing a remarkable transistor operation with a behavior limited by injection of charge carriers. The transistors are manufactured by photolithography, in contrast to other VOTFT concepts using self-assembled source electrodes. Fluorinated photoresist and solvent compounds allow for photolithographic patterning directly onto the organic materials, simplifying the fabrication protocol and making VOTFTs a prospective candidate for future high-performance applications of organic transistors. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Cognition and procedure representational requirements for predictive human performance models

    Corker, K.

    1992-01-01

    Models and modeling environments for human performance are becoming significant contributors to early system design and analysis procedures. Issues of levels of automation, physical environment, informational environment, and manning requirements are being addressed by such man/machine analysis systems. The research reported here investigates the close interaction between models of human cognition and models that describe procedural performance. We describe a methodology for the decomposition of aircrew procedures that supports interaction with models of cognition on the basis of procedures observed; that serves to identify cockpit/avionics information sources and crew information requirements; and that provides the structure to support methods for function allocation among crew and aiding systems. Our approach is to develop an object-oriented, modular, executable software representation of the aircrew, the aircraft, and the procedures necessary to satisfy flight-phase goals. We then encode, in a time-based language, taxonomies of the conceptual, relational, and procedural constraints among the cockpit avionics and control system and the aircrew. We have designed and implemented a goals/procedures hierarchic representation sufficient to describe procedural flow in the cockpit. We then execute the procedural representation in simulation software and calculate the values of the flight instruments, aircraft state variables and crew resources using the constraints available from the relationship taxonomies. The system provides a flexible, extensible, manipulable and executable representation of aircrew and procedures that is generally applicable to crew/procedure task-analysis. The representation supports developed methods of intent inference, and is extensible to include issues of information requirements and functional allocation. We are attempting to link the procedural representation to models of cognitive functions to establish several intent inference methods
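
    As a rough illustration of the goals/procedures hierarchy described above, the sketch below composes procedures under flight-phase goals and "executes" them against a simple state dictionary. All class names, goals, steps, and values are hypothetical and do not reflect the actual object-oriented representation built in the study.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Procedure:
    """A named crew procedure made up of ordered steps (hypothetical)."""
    name: str
    steps: List[Callable[[dict], None]]

    def execute(self, state: dict) -> None:
        for step in self.steps:
            step(state)

@dataclass
class Goal:
    """A flight-phase goal satisfied by one or more procedures."""
    name: str
    procedures: List[Procedure] = field(default_factory=list)

    def execute(self, state: dict) -> None:
        for proc in self.procedures:
            proc.execute(state)

# Hypothetical example: a descent goal updating a simple aircraft/crew state.
state = {"altitude_ft": 10_000, "flaps": 0}
descend = Procedure("initiate_descent", [lambda s: s.update(altitude_ft=s["altitude_ft"] - 2_000)])
configure = Procedure("configure_flaps", [lambda s: s.update(flaps=1)])
approach = Goal("prepare_approach", [descend, configure])
approach.execute(state)
print(state)   # {'altitude_ft': 8000, 'flaps': 1}
```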

  12. Turboelectric Aircraft Drive Key Performance Parameters and Functional Requirements

    Jansen, Ralph H.; Brown, Gerald V.; Felder, James L.; Duffy, Kirsten P.

    2016-01-01

    The purpose of this paper is to propose specific power and efficiency as the key performance parameters for a turboelectric aircraft power system and investigate their impact on the overall aircraft. Key functional requirements are identified that impact the power system design. Breguet range equations for a base aircraft and a turboelectric aircraft are found. The benefits and costs that may result from the turboelectric system are enumerated. A break-even analysis is conducted to find the minimum allowable electric drive specific power and efficiency that can preserve the range, initial weight, operating empty weight, and payload weight of the base aircraft.
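
    For reference, the standard jet-aircraft form of the Breguet range equation, which the paper adapts separately for the base and turboelectric configurations, is

    \[ R = \frac{V}{\mathrm{TSFC}}\,\frac{L}{D}\,\ln\!\left(\frac{W_{\mathrm{initial}}}{W_{\mathrm{final}}}\right) \]

    where V is cruise speed, TSFC is the thrust-specific fuel consumption, L/D is the lift-to-drag ratio, and W_initial and W_final are the start- and end-of-cruise weights. The specific turboelectric variants derived in the paper, which introduce electric-drive specific power and efficiency, are not reproduced here.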

  13. A Study on Performance Requirements for Advanced Alarm System

    Seong, Duk Hyun; Jeong, Jae Hoon; Sim, Young Rok; Ko, Jong Hyun; Kim, Jung Seon; Jang, Gwi Sook; Park, Geun Ok

    2005-01-01

    A design goal of the advanced alarm system is to provide advanced alarm information to the operator in the main control room. To achieve this, we applied a computer-based system to the alarm system, because it must support data management and advanced alarm processing (i.e., a database management system and software modules for alarm processing), which are not possible in an analog-based alarm system. Previous research examples were likewise implemented on digital computers. We built digital systems for testing the advanced alarm system, and tested and studied them with test equipment from the viewpoint of system performance, stability and security. In this paper, we describe the general software architecture of the previous research examples, as well as the CPU performance and the system software requirements needed to accommodate it, together with stability and security

  14. High-performance laboratories and cleanrooms; TOPICAL

    Tschudi, William; Sartor, Dale; Mills, Evan; Xu, Tengfang

    2002-01-01

    The California Energy Commission sponsored this roadmap to guide energy efficiency research and deployment for high performance cleanrooms and laboratories. Industries and institutions utilizing these building types (termed high-tech buildings) have played an important part in the vitality of the California economy. This roadmap's key objective is to present a multi-year agenda to prioritize and coordinate research efforts. It also addresses delivery mechanisms to get the research products into the market. Because of the importance to the California economy, it is appropriate and important for California to take the lead in assessing the energy efficiency research needs, opportunities, and priorities for this market. In addition to the importance to California's economy, energy demand for this market segment is large and growing (estimated at 9,400 GWh for 1996, Mills et al. 1996). With their 24-hour continuous operation, high tech facilities are a major contributor to the peak electrical demand. Laboratories and cleanrooms constitute the high tech building market, and although each building type has its unique features, they are similar in that they are extremely energy intensive, involve special environmental considerations, have very high ventilation requirements, and are subject to regulations (primarily safety driven) that tend to have adverse energy implications. High-tech buildings have largely been overlooked in past energy efficiency research. Many industries and institutions utilize laboratories and cleanrooms. There are many industries operating cleanrooms in California. These include semiconductor manufacturing, semiconductor suppliers, pharmaceutical, biotechnology, disk drive manufacturing, flat panel displays, automotive, aerospace, food, hospitals, medical devices, universities, and federal research facilities

  15. Development of high performance ODS alloys

    Shao, Lin [Texas A & M Univ., College Station, TX (United States); Gao, Fei [Univ. of Michigan, Ann Arbor, MI (United States); Garner, Frank [Texas A & M Univ., College Station, TX (United States)

    2018-01-29

    This project aims to capitalize on insights developed from recent high-dose self-ion irradiation experiments in order to develop and test the next generation of optimized ODS alloys needed to meet the nuclear community's need for high strength, radiation-tolerant cladding and core components, especially with enhanced resistance to void swelling. Two of these insights are that ferrite grains swell earlier than tempered martensite grains, and that oxide dispersions, currently produced only in ferrite grains, require a high level of uniformity and stability to be successful. An additional insight is that ODS particle stability is dependent on as-yet unidentified compositional combinations of dispersoid and alloy matrix; for example, dispersoids are stable in MA957 to doses greater than 200 dpa but dissolve in MA956 at doses less than 200 dpa. These findings focus attention on candidate next-generation alloys which address these concerns. Collaboration with two Japanese groups provides this project with two sets of first-round candidate alloys that have already undergone extensive development and testing for unirradiated properties, but have not yet been evaluated for their irradiation performance. The first set of candidate alloys are dual phase (ferrite + martensite) ODS alloys with oxide particles uniformly distributed in both ferrite and martensite phases. The second set of candidate alloys are ODS alloys containing non-standard dispersoid compositions with controllable oxide particle sizes, phases and interfaces.

  16. Ultra high performance concrete dematerialization study

    NONE

    2004-03-01

    Concrete is the most widely used building material in the world and its use is expected to grow. It is well recognized that the production of portland cement results in the release of large amounts of carbon dioxide, a greenhouse gas (GHG). The main challenge facing the industry is to produce concrete in an environmentally sustainable manner. Reclaimed industrial by-products such as fly ash, silica fume and slag can reduce the amount of portland cement needed to make concrete, thereby reducing the amount of GHGs released to the atmosphere. The use of these supplementary cementing materials (SCM) can also enhance the long-term strength and durability of concrete. The intention of the EcoSmart™ Concrete Project is to develop sustainable concrete through innovation in supply, design and construction. In particular, the project focuses on finding a way to minimize the GHG signature of concrete by maximizing the replacement of portland cement in the concrete mix with SCM while improving the cost, performance and constructability. This paper describes the use of Ductal® Ultra High Performance Concrete (UHPC) for ramps in a condominium. It examined the relationship between the selection of UHPC and the overall environmental performance, cost, constructability, maintenance and operational efficiency as it relates to the EcoSmart Program. The advantages and challenges of using UHPC were outlined. In addition to its very high strength, UHPC has been shown to have very good potential for GHG emission reduction due to the reduced material requirements, reduced transport costs and increased SCM content. refs., tabs., figs.

  17. High performance carbon nanocomposites for ultracapacitors

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  18. Strategies and Experiences Using High Performance Fortran

    Shires, Dale

    2001-01-01

    .... High Performance Fortran (HPF) is a relatively new addition to the Fortran dialect. It is an attempt to provide an efficient high-level Fortran parallel programming language for the latest generation of ... been debatable...

  19. High Performance Grinding and Advanced Cutting Tools

    Jackson, Mark J

    2013-01-01

    High Performance Grinding and Advanced Cutting Tools discusses the fundamentals and advances in high performance grinding processes, and provides a complete overview of newly-developing areas in the field. Topics covered are grinding tool formulation and structure, grinding wheel design and conditioning and applications using high performance grinding wheels. Also included are heat treatment strategies for grinding tools, using grinding tools for high speed applications, laser-based and diamond dressing techniques, high-efficiency deep grinding, VIPER grinding, and new grinding wheels.

  20. Strategy Guideline: High Performance Residential Lighting

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  1. Carbon nanomaterials for high-performance supercapacitors

    Tao Chen; Liming Dai

    2013-01-01

    Owing to their high energy density and power density, supercapacitors exhibit great potential as high-performance energy sources for advanced technologies. Recently, carbon nanomaterials (especially, carbon nanotubes and graphene) have been widely investigated as effective electrodes in supercapacitors due to their high specific surface area, excellent electrical and mechanical properties. This article summarizes the recent progresses on the development of high-performance supercapacitors bas...

  2. Team Development for High Performance Management.

    Schermerhorn, John R., Jr.

    1986-01-01

    The author examines a team development approach to management that creates shared commitments to performance improvement by focusing the attention of managers on individual workers and their task accomplishments. It uses the "high-performance equation" to help managers confront shared beliefs and concerns about performance and develop realistic…

  3. Satellite Ocean Color Sensor Design Concepts and Performance Requirements

    McClain, Charles R.; Meister, Gerhard; Monosmith, Bryan

    2014-01-01

    In late 1978, the National Aeronautics and Space Administration (NASA) launched the Nimbus-7 satellite with the Coastal Zone Color Scanner (CZCS) and several other sensors, all of which provided major advances in Earth remote sensing. The inspiration for the CZCS is usually attributed to an article in Science by Clarke et al. who demonstrated that large changes in open ocean spectral reflectance are correlated to chlorophyll-a concentrations. Chlorophyll-a is the primary photosynthetic pigment in green plants (marine and terrestrial) and is used in estimating primary production, i.e., the amount of carbon fixed into organic matter during photosynthesis. Thus, accurate estimates of global and regional primary production are key to studies of the earth's carbon cycle. Because the investigators used an airborne radiometer, they were able to demonstrate the increased radiance contribution of the atmosphere with altitude that would be a major issue for spaceborne measurements. Since 1978, there has been much progress in satellite ocean color remote sensing such that the technique is well established and is used for climate change science and routine operational environmental monitoring. Also, the science objectives and accompanying methodologies have expanded and evolved through a succession of global missions, e.g., the Ocean Color and Temperature Sensor (OCTS), the Seaviewing Wide Field-of-view Sensor (SeaWiFS), the Moderate Resolution Imaging Spectroradiometer (MODIS), the Medium Resolution Imaging Spectrometer (MERIS), and the Global Imager (GLI). With each advance in science objectives, new and more stringent requirements for sensor capabilities (e.g., spectral coverage) and performance (e.g., signal-to-noise ratio, SNR) are established. The CZCS had four bands for chlorophyll and aerosol corrections. The Ocean Color Imager (OCI) recommended for the NASA Pre-Aerosol, Cloud, and Ocean Ecosystems (PACE) mission includes 5 nanometers hyperspectral coverage from 350 to

  4. Delivering high performance BWR fuel reliably

    Schardt, J.F. [GE Nuclear Energy, Wilmington, NC (United States)

    1998-07-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel, which can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  5. HPTA: High-Performance Text Analytics

    Vandierendonck, Hans; Murphy, Karen; Arif, Mahwish; Nikolopoulos, Dimitrios S.

    2017-01-01

    One of the main targets of data analytics is unstructured data, which primarily involves textual data. High-performance processing of textual data is non-trivial. We present the HPTA library for high-performance text analytics. The library helps programmers to map textual data to a dense numeric representation, which can be handled more efficiently. HPTA encapsulates three performance optimizations: (i) efficient memory management for textual data, (ii) parallel computation on associative dat...
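
    The core idea of mapping text to a dense numeric representation can be illustrated with the minimal bag-of-words sketch below. It is plain Python written for illustration only and does not use or reflect the HPTA library's actual API, memory management, or parallel optimizations.

```python
import numpy as np

# Minimal bag-of-words mapping from documents to a dense numeric matrix.
# Illustration only; this is NOT the HPTA API described in the abstract.
docs = ["high performance text analytics",
        "text analytics maps text to numbers"]

# Build a vocabulary index, then count term occurrences per document.
vocab = {w: i for i, w in enumerate(sorted({w for d in docs for w in d.split()}))}
matrix = np.zeros((len(docs), len(vocab)), dtype=np.int32)
for row, doc in enumerate(docs):
    for word in doc.split():
        matrix[row, vocab[word]] += 1

print(vocab)
print(matrix)   # each row is the dense term-count vector of one document
```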

  6. NCI's Transdisciplinary High Performance Scientific Data Platform

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable for different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment to access this data, through the NCI supercomputer; a private cloud that supports both domain focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as their future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), agile enough to incorporate new technological advances and

  7. Large Scale Computing and Storage Requirements for High Energy Physics

    Gerber, Richard A.; Wasserman, Harvey

    2010-01-01

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes

  8. Large Scale Computing and Storage Requirements for High Energy Physics

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years

  9. The need for high performance breeder reactors

    Vaughan, R.D.; Chermanne, J.

    1977-01-01

    It can be easily demonstrated, on the basis of realistic estimates of continued high oil costs, that an increasing portion of the growth in energy demand must be supplied by nuclear power, and that nuclear power might account for 20% of all energy production by the end of the century. Such assumptions lead very quickly to the conclusion that the discovery, extraction and processing of uranium will not be able to follow the demand; the bottleneck will essentially be related to the rate at which the ore can be discovered and extracted, and not to the existing quantities nor their grade. Figures as high as 150,000 t/annum and more would quickly be reached, and it must already be asked whether enough capital can be attracted to meet these requirements. There is only one solution to this problem: improve the conversion ratio of the nuclear system and quickly reach breeding; this would reduce natural uranium consumption by a factor of about 50. However, this condition is not sufficient; the commercial breeder must have a breeding gain as high as possible, because the Pu out-of-pile time and the Pu losses in the cycle could lead to an unacceptable doubling time for the system if the breeding gain is too low. That is the reason why it is vital to develop high performance breeder reactors. The present paper indicates how the Gas-cooled Breeder Reactor [GBR] can meet the problems mentioned above, on the basis of recent and realistic studies. It briefly describes the present status of GBR development, from the predecessors in the gas cooled reactor line, particularly the AGR. It shows how the GBR fuel profits greatly from the LMFBR fuel irradiation experience. It compares the GBR performance on a consistent basis with that of the LMFBR. The GBR capital and fuel cycle costs are compared with those of thermal and fast reactors respectively. The conclusion is, based on a cost-benefit study, that the GBR must be quickly developed in order
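
    One way to make the doubling-time concern concrete is the usual first-order (linear) definition, under which out-of-pile inventory and reprocessing losses lengthen the doubling time exactly as the abstract argues. This is the generic textbook relation, not a formula taken from the paper:

    \[ t_D \approx \frac{M_0}{\dot{M}_{\mathrm{net}}} \]

    where \(M_0\) is the total fissile (plutonium) inventory committed to the system, in-pile plus out-of-pile, and \(\dot{M}_{\mathrm{net}}\) is the net rate of fissile production after cycle losses; a low breeding gain or a large out-of-pile inventory therefore drives \(t_D\) up.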

  10. 5 CFR 9901.405 - Performance management system requirements.

    2010-01-01

    ...) Holds supervisors and managers accountable for effectively managing the performance of employees under... and communicating performance expectations, monitoring performance and providing feedback, and... (b) of this section, supervisors and managers will— (1) Clearly communicate performance expectations...

  11. High-performance ceramics. Fabrication, structure, properties

    Petzow, G.; Tobolski, J.; Telle, R.

    1996-01-01

    The program ''Ceramic High-performance Materials'' pursued the objective to understand the chaining of cause and effect in the development of high-performance ceramics. This chain of problems begins with the chemical reactions for the production of powders, comprises the characterization, processing, shaping and compacting of powders, structural optimization, heat treatment, production and finishing, and leads to issues of materials testing and of a design appropriate to the material. The program ''Ceramic High-performance Materials'' has resulted in contributions to the understanding of fundamental interrelationships in terms of materials science, which are summarized in the present volume - broken down into eight special aspects. (orig./RHM)

  12. High Burnup Fuel Performance and Safety Research

    Bang, Je Keun; Lee, Chan Bok; Kim, Dae Ho (and others)

    2007-03-15

    The worldwide trend in nuclear fuel development is toward high-burnup, high-performance nuclear fuel with high economy and safety. Because the fuel performance evaluation code INFRA is patented, and its superior prediction of fuel performance was proven through the IAEA CRP FUMEX-II program, the code can be utilized commercially in industry. INFRA has been provided to and used by domestic universities and relevant institutes, and it has served as a reference code in industry for the development of the intrinsic fuel rod design code.

  13. High performance liquid chromatography in pharmaceutical analyses

    Branko Nikolin

    2004-05-01

    serum contains numerous endogenous compounds often present in concentrations much greater than those of the analyte. Analyte concentrations are often low, and in the case of drugs, the endogenous compounds are sometimes structurally very similar to the drug to be measured. Binding of drugs to plasma proteins may also occur, which decreases the amount of free compound that is measured. To undertake the analysis of drugs and metabolites in body fluids, the analyst is faced with several problems. The first is the complex nature of the body fluid: the drugs must be isolated by an extraction technique, which ideally should provide a relatively clean extract, and the separation system must be capable of resolving the drugs of interest from co-extractives. All of this means that high performance liquid chromatography requires a good selection of detectors, a good stationary phase, suitable eluents and an adequate program during separation. The UV/VIS detector is the most versatile detector used in high performance liquid chromatography, but it is not always ideal, since its lack of specificity means that high resolution of the analyte may be required. UV detection is preferred since it offers excellent linearity, and rapid quantitative analyses can be performed against a single standard of the drug being determined. Diode array and rapid scanning detectors are useful for peak identification and for monitoring peak purity, but they are somewhat less sensitive than single-wavelength detectors. In liquid chromatography some components may have a poor UV chromophore, if UV detection is being used, or may be completely retained on the liquid chromatography column. Fluorescence and electrochemical detectors are not only considerably more sensitive toward appropriate analytes but also more selective than UV detectors for many compounds. Where applicable, fluorescence detectors are sensitive, stable, selective and easy to operate. Their selectivity shows itself in the lack of frontal

  14. ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS

    WONG, CPC; MALANG, S; NISHIO, S; RAFFRAY, R; SAGARA, S

    2002-01-01

    OAK A271 ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS. First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket components lifetime and availability

  15. Precision cosmology with time delay lenses: high resolution imaging requirements

    Meng, Xiao-Lei; Liao, Kai [Department of Astronomy, Beijing Normal University, 19 Xinjiekouwai Street, Beijing, 100875 (China); Treu, Tommaso; Agnello, Adriano [Department of Physics, University of California, Broida Hall, Santa Barbara, CA 93106 (United States); Auger, Matthew W. [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Marshall, Philip J., E-mail: xlmeng919@gmail.com, E-mail: tt@astro.ucla.edu, E-mail: aagnello@physics.ucsb.edu, E-mail: mauger@ast.cam.ac.uk, E-mail: liaokai@mail.bnu.edu.cn, E-mail: dr.phil.marshall@gmail.com [Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, 452 Lomita Mall, Stanford, CA 94305 (United States)

    2015-09-01

    Lens time delays are a powerful probe of cosmology, provided that the gravitational potential of the main deflector can be modeled with sufficient precision. Recent work has shown that this can be achieved by detailed modeling of the host galaxies of lensed quasars, which appear as "Einstein Rings" in high resolution images. The distortion of these arcs and counter-arcs, as measured over a large number of pixels, provides tight constraints on the difference in the gravitational potential between the quasar image positions, and thus on cosmology in combination with the measured time delay. We carry out a systematic exploration of the high resolution imaging required to exploit the thousands of lensed quasars that will be discovered by current and upcoming surveys within the next decade. Specifically, we simulate realistic lens systems as imaged by the Hubble Space Telescope (HST), James Webb Space Telescope (JWST), and ground based adaptive optics images taken with Keck or the Thirty Meter Telescope (TMT). We compare the performance of these pointed observations with that of images taken by the Euclid (VIS), Wide-Field Infrared Survey Telescope (WFIRST) and Large Synoptic Survey Telescope (LSST) surveys. We use as our metric the precision with which the slope γ' of the total mass density profile ρ_tot ∝ r^(−γ') for the main deflector can be measured. Ideally, we require that the statistical error on γ' be less than 0.02, such that it is subdominant to other sources of random and systematic uncertainties. We find that survey data will likely have sufficient depth and resolution to meet the target only for the brighter gravitational lens systems, comparable to those discovered by the SDSS survey. For fainter systems that will be discovered by current and future surveys, targeted follow-up will be required. However, the exposure time required with upcoming facilities such as JWST, the Keck Next Generation

  16. Precision cosmology with time delay lenses: High resolution imaging requirements

    Meng, Xiao -Lei [Beijing Normal Univ., Beijing (China); Univ. of California, Santa Barbara, CA (United States); Treu, Tommaso [Univ. of California, Santa Barbara, CA (United States); Univ. of California, Los Angeles, CA (United States); Agnello, Adriano [Univ. of California, Santa Barbara, CA (United States); Univ. of California, Los Angeles, CA (United States); Auger, Matthew W. [Univ. of Cambridge, Cambridge (United Kingdom); Liao, Kai [Beijing Normal Univ., Beijing (China); Univ. of California, Santa Barbara, CA (United States); Univ. of California, Los Angeles, CA (United States); Marshall, Philip J. [Stanford Univ., Stanford, CA (United States)

    2015-09-28

    Lens time delays are a powerful probe of cosmology, provided that the gravitational potential of the main deflector can be modeled with sufficient precision. Recent work has shown that this can be achieved by detailed modeling of the host galaxies of lensed quasars, which appear as "Einstein Rings" in high resolution images. The distortion of these arcs and counter-arcs, as measured over a large number of pixels, provides tight constraints on the difference in the gravitational potential between the quasar image positions, and thus on cosmology in combination with the measured time delay. We carry out a systematic exploration of the high resolution imaging required to exploit the thousands of lensed quasars that will be discovered by current and upcoming surveys within the next decade. Specifically, we simulate realistic lens systems as imaged by the Hubble Space Telescope (HST), James Webb Space Telescope (JWST), and ground based adaptive optics images taken with Keck or the Thirty Meter Telescope (TMT). We compare the performance of these pointed observations with that of images taken by the Euclid (VIS), Wide-Field Infrared Survey Telescope (WFIRST) and Large Synoptic Survey Telescope (LSST) surveys. We use as our metric the precision with which the slope γ' of the total mass density profile ρ_tot ∝ r^(−γ') for the main deflector can be measured. Ideally, we require that the statistical error on γ' be less than 0.02, such that it is subdominant to other sources of random and systematic uncertainties. We find that survey data will likely have sufficient depth and resolution to meet the target only for the brighter gravitational lens systems, comparable to those discovered by the SDSS survey. For fainter systems that will be discovered by current and future surveys, targeted follow-up will be required. Furthermore, the exposure time required with upcoming facilities such as JWST, the Keck Next Generation Adaptive

  17. Analog circuit design designing high performance amplifiers

    Feucht, Dennis

    2010-01-01

    The third volume Designing High Performance Amplifiers applies the concepts from the first two volumes. It is an advanced treatment of amplifier design/analysis emphasizing both wideband and precision amplification.

  18. High-performance computing using FPGAs

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware–in the form of Field Programmable Gate Arrays (FPGAs)–in High Performance Computing (HPC) applications. It presents the latest developments in this field from the applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation e.g. computational fluid dynamics and seismic modeling, cryptanalysis, astronomical N-body simulation, and circuit simulation. Seven architecture chapters which...

  19. Embedded High Performance Scalable Computing Systems

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  20. Gradient High Performance Liquid Chromatography Method ...

    Purpose: To develop a gradient high performance liquid chromatography (HPLC) method for the simultaneous determination of phenylephrine (PHE) and ibuprofen (IBU) in solid ..... nimesulide, phenylephrine. Hydrochloride, chlorpheniramine maleate and caffeine anhydrous in pharmaceutical dosage form. Acta Pol.

  1. Leadership in organizations with high security and reliability requirements

    Gonzalez, F.

    2013-01-01

    Developing leadership skills in organizations is key to ensuring the sustainability of excellent results in industries with high safety and reliability requirements. In order to have a leadership development model specific to this type of organization, Tecnatom initiated an internal project in 2011 to find and adapt a competency model to these requirements.

  2. L-Band Digital Aeronautical Communications System Engineering - Concepts of Use, Systems Performance, Requirements, and Architectures

    Zelkin, Natalie; Henriksen, Stephen

    2010-01-01

    This NASA Contractor Report summarizes and documents the work performed to develop concepts of use (ConUse) and high-level system requirements and architecture for the proposed L-band (960 to 1164 MHz) terrestrial en route communications system. This work was completed as a follow-on to the technology assessment conducted by NASA Glenn Research Center and ITT for the Future Communications Study (FCS). ITT assessed air-to-ground (A/G) communications concepts of use and operations presented in relevant NAS-level, international, and NAS-system-level documents to derive the appropriate ConUse relevant to potential A/G communications applications and services for domestic continental airspace. ITT also leveraged prior concepts of use developed during the earlier phases of the FCS. A middle-out functional architecture was adopted by merging the functional system requirements identified in the bottom-up assessment of existing requirements with those derived as a result of the top-down analysis of ConUse and higher level functional requirements. Initial end-to-end system performance requirements were derived to define system capabilities based on the functional requirements and on NAS-SR-1000 and the Operational Performance Assessment conducted as part of the COCR. A high-level notional architecture of the L-DACS supporting A/G communication was derived from the functional architecture and requirements.

  3. Highlighting High Performance: Whitman Hanson Regional High School; Whitman, Massachusetts

    2006-06-01

    This brochure describes the key high-performance building features of the Whitman-Hanson Regional High School. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar and wind energy, building envelope, heating and cooling systems, water conservation, and acoustics. Energy cost savings are also discussed.

  4. High performance computing in Windows Azure cloud

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower costs of maintenance have essentially contributed to the growing popularity of cloud computing in all spheres of life, especially in business. In fact cloud computing offers even more than this. With usage of virtual computing clusters a runtime environment for high performance computing can be efficiently implemented also in a cloud. There are many advantages but also some disadvantages of cloud computing, some ...

  5. High-performance computing — an overview

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.
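
    As a minimal illustration of the message-passing model mentioned above, the sketch below uses mpi4py, a common Python binding to MPI; the choice of Python and mpi4py is only for illustration, since the overview itself is language-agnostic.

```python
# Minimal message-passing example using mpi4py.
# Run with, e.g.: mpiexec -n 2 python demo.py
# Chosen purely to illustrate the message-passing model discussed in the overview.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    comm.send({"payload": [1, 2, 3]}, dest=1, tag=0)   # process 0 sends an object
    print("rank 0 sent data")
elif rank == 1:
    data = comm.recv(source=0, tag=0)                  # process 1 receives it
    print("rank 1 received", data)
```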

  6. Governance among Malaysian high performing companies

    Asri Marsidi

    2016-07-01

    Full Text Available Well-performing companies have always been linked with effective governance, which is generally reflected in an effective board of directors. However, many issues concerning the attributes of an effective board of directors remain unresolved. Nowadays, diversity is perceived as being able to influence corporate performance due to the likelihood of meeting the variety of needs and demands of diverse customers and clients. The study therefore aims to provide a fundamental understanding of governance among high performing companies in Malaysia.

  7. High-performance OPCPA laser system

    Zuegel, J.D.; Bagnoud, V.; Bromage, J.; Begishev, I.A.; Puth, J.

    2006-01-01

    Optical parametric chirped-pulse amplification (OPCPA) is ideally suited for amplifying ultra-fast laser pulses since it provides broadband gain across a wide range of wavelengths without many of the disadvantages of regenerative amplification. A high-performance OPCPA system has been demonstrated as a prototype for the front end of the OMEGA Extended Performance (EP) Laser System. (authors)

  8. High-performance OPCPA laser system

    Zuegel, J.D.; Bagnoud, V.; Bromage, J.; Begishev, I.A.; Puth, J. [Rochester Univ., Lab. for Laser Energetics, NY (United States)

    2006-06-15

    Optical parametric chirped-pulse amplification (OPCPA) is ideally suited for amplifying ultra-fast laser pulses since it provides broadband gain across a wide range of wavelengths without many of the disadvantages of regenerative amplification. A high-performance OPCPA system has been demonstrated as a prototype for the front end of the OMEGA Extended Performance (EP) Laser System. (authors)

  9. Comparing Dutch and British high performing managers

    Waal, A.A. de; Heijden, B.I.J.M. van der; Selvarajah, C.; Meyer, D.

    2016-01-01

    National cultures have a strong influence on the performance of organizations and should be taken into account when studying the traits of high performing managers. At the same time, many studies that focus upon the attributes of successful managers show that there are attributes that are similar

  10. Assessment of Performance-based Requirements for Structural Design

    Hertz, Kristian Dahl

    2005-01-01

    and for a detailed assessment of the requirements. The design requirements to be used for a factory producing elements for industrial housing for unknown costumers are discussed, and a fully developed fire is recommended as a common requirement for domestic houses, hotels, offices, schools and hospitals. In addition...

  11. High Energy Physics and Nuclear Physics Network Requirements

    Dart, Eli; Bauerdick, Lothar; Bell, Greg; Ciuffo, Leandro; Dasu, Sridhara; Dattoria, Vince; De, Kaushik; Ernst, Michael; Finkelson, Dale; Gottleib, Steven; Gutsche, Oliver; Habib, Salman; Hoeche, Stefan; Hughes-Jones, Richard; Ibarra, Julio; Johnston, William; Kisner, Theodore; Kowalski, Andy; Lauret, Jerome; Luitz, Steffen; Mackenzie, Paul; Maguire, Chales; Metzger, Joe; Monga, Inder; Ng, Cho-Kuen; Nielsen, Jason; Price, Larry; Porter, Jeff; Purschke, Martin; Rai, Gulshan; Roser, Rob; Schram, Malachi; Tull, Craig; Watson, Chip; Zurawski, Jason

    2014-03-02

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements needed by instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In August 2013, ESnet and the DOE SC Offices of High Energy Physics (HEP) and Nuclear Physics (NP) organized a review to characterize the networking requirements of the programs funded by the HEP and NP program offices. Several key findings resulted from the review. Among them: 1. The Large Hadron Collider's ATLAS (A Toroidal LHC Apparatus) and CMS (Compact Muon Solenoid) experiments are adopting remote input/output (I/O) as a core component of their data analysis infrastructure. This will significantly increase their demands on the network from both a reliability perspective and a performance perspective. 2. The Large Hadron Collider (LHC) experiments (particularly ATLAS and CMS) are working to integrate network awareness into the workflow systems that manage the large number of daily analysis jobs (1 million analysis jobs per day for ATLAS), which are an integral part of the experiments. Collaboration with networking organizations such as ESnet, and the consumption of performance data (e.g., from perfSONAR [PERformance Service Oriented Network monitoring Architecture]) are critical to the success of these efforts. 3. The international aspects of HEP and NP collaborations continue to expand. This includes the LHC experiments, the Relativistic Heavy Ion Collider (RHIC) experiments, the Belle II Collaboration, the Large Synoptic Survey Telescope (LSST), and others. The international nature of these collaborations makes them heavily

  12. High Performance Work Systems for Online Education

    Contacos-Sawyer, Jonna; Revels, Mark; Ciampa, Mark

    2010-01-01

    The purpose of this paper is to identify the key elements of a High Performance Work System (HPWS) and explore the possibility of implementation in an online institution of higher learning. With the projected rapid growth of the demand for online education and its importance in post-secondary education, providing high quality curriculum, excellent…

  13. Teacher Accountability at High Performing Charter Schools

    Aguirre, Moises G.

    2016-01-01

    This study will examine the teacher accountability and evaluation policies and practices at three high performing charter schools located in San Diego County, California. Charter schools are exempted from many laws, rules, and regulations that apply to traditional school systems. By examining the teacher accountability systems at high performing…

  14. Advanced high performance solid wall blanket concepts

    Wong, C.P.C.; Malang, S.; Nishio, S.; Raffray, R.; Sagara, A.

    2002-01-01

First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course, the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape-off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, and consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts, the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket component lifetime and availability

  15. 5 CFR 9701.405 - Performance management system requirements.

    2010-01-01

    ... feedback, and developing, rating, and rewarding performance; and (6) Specify the criteria and procedures to... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Performance management system... HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM Performance Management § 9701.405 Performance...

  16. Evaluation of the Trade Space Between UAS Maneuver Performance and SAA System Performance Requirements

    Jack, Devin P.; Hoffler, Keith D.; Johnson, Sally C.

    2014-01-01

A need exists to safely integrate Unmanned Aircraft Systems (UAS) into the National Airspace System. Replacing manned aircraft's see-and-avoid capability in the absence of an onboard pilot is one of the key challenges associated with safe integration. Sense-and-avoid (SAA) systems will have to achieve yet-to-be-determined required separation distances for a wide range of encounters. They will also need to account for the maneuver performance of the UAS they are paired with. The work described in this paper is aimed at developing an understanding of the trade space between UAS maneuver performance and SAA system performance requirements. An assessment of current manned and unmanned aircraft performance was used to establish potential UAS performance test matrix bounds. Then, near-term UAS integration work was used to narrow down the scope. A simulator was developed with sufficient fidelity to assess SAA system performance requirements for a wide range of encounters. The simulator generates closest-point-of-approach (CPA) data from the wide range of UAS performance models maneuvering against a single intruder with various encounter geometries. The simulator is described herein and has both a graphical user interface and a batch interface to support detailed analysis of individual UAS encounters and macro analysis of a very large set of UAS and encounter models, respectively. Results from the simulator using approximate performance data from a well-known manned aircraft are presented to provide insight into the problem and as verification and validation of the simulator. Analysis of climb, descent, and level turn maneuvers to avoid a collision is presented. Noting the diversity of backgrounds in the UAS community, a description of the UAS aerodynamic and propulsive design and performance parameters is included. Initial attempts to model the results made it clear that developing maneuver performance groups is required. Discussion of the performance groups developed and how
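
The closest-point-of-approach computation at the core of such an encounter simulator can be illustrated with a short sketch. The snippet below is a minimal, hedged example rather than the simulator described in the record: it assumes straight-line, constant-velocity segments for the UAS and the intruder and computes the time and distance of closest approach analytically; all positions, velocities, and names are invented for illustration.

```python
import numpy as np

def closest_point_of_approach(p_uas, v_uas, p_intruder, v_intruder):
    """Time and distance of closest approach for two constant-velocity tracks.

    p_* are initial positions (m) and v_* are velocities (m/s), given as
    3-element sequences in a common Cartesian frame.
    """
    dp = np.asarray(p_intruder, float) - np.asarray(p_uas, float)   # relative position
    dv = np.asarray(v_intruder, float) - np.asarray(v_uas, float)   # relative velocity
    dv2 = np.dot(dv, dv)
    # If the relative speed is ~0 the separation is constant; CPA is "now".
    t_cpa = 0.0 if dv2 < 1e-12 else max(0.0, -np.dot(dp, dv) / dv2)
    d_cpa = np.linalg.norm(dp + dv * t_cpa)
    return t_cpa, d_cpa

# Example: near head-on encounter with a 300 m lateral offset.
t, d = closest_point_of_approach(
    p_uas=[0.0, 0.0, 1200.0],           v_uas=[60.0, 0.0, 0.0],
    p_intruder=[8000.0, 300.0, 1200.0], v_intruder=[-80.0, 0.0, 0.0])
print(f"CPA in {t:.1f} s at {d:.0f} m separation")
```

In a batch study of the kind described above, a function like this would be evaluated over a grid of encounter geometries and maneuver profiles to map the CPA distribution for each performance group.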

  17. High performance bio-integrated devices

    Kim, Dae-Hyeong; Lee, Jongha; Park, Minjoon

    2014-06-01

In recent years, personalized electronics for medical applications have attracted much attention with the rise of smartphones, because coupling such devices with smartphones enables continuous health monitoring in patients' daily lives. In particular, high-performance biomedical electronics integrated with the human body are expected to open new opportunities in ubiquitous healthcare. However, the mechanical and geometrical constraints inherent in all standard forms of high-performance rigid wafer-based electronics raise unique integration challenges with biotic entities. Here, we describe materials and design constructs for high-performance skin-mountable bio-integrated electronic devices, which incorporate arrays of single-crystalline inorganic nanomembranes. The resulting devices include flexible and stretchable electrophysiology electrodes and sensors coupled with active electronic components. These advances in bio-integrated systems create new directions in personalized health monitoring and human-machine interfaces.

  18. vSphere high performance cookbook

    Sarkar, Prasenjit

    2013-01-01

vSphere High Performance Cookbook is written in a practical, helpful style with numerous recipes focusing on answering and providing solutions to common, and not-so-common, performance issues and problems. The book is primarily written for technical professionals with system administration skills and some VMware experience who wish to learn about advanced optimization and the configuration features and functions for vSphere 5.1.

  19. High performance parallel I/O

    Prabhat

    2014-01-01

Gain Critical Insight into the Parallel I/O Ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har

  20. Optimizing cementitious content in concrete mixtures for required performance.

    2012-01-01

    "This research investigated the effects of changing the cementitious content required at a given water-to-cement ratio (w/c) on workability, strength, and durability of a concrete mixture. : An experimental program was conducted in which 64 concrete ...

  1. High performance VLSI telemetry data systems

    Chesney, J.; Speciale, N.; Horner, W.; Sabia, S.

    1990-01-01

NASA's deployment of major space complexes such as Space Station Freedom (SSF) and the Earth Observing System (EOS) will demand increased functionality and performance from ground-based telemetry acquisition systems, well above current system capabilities. Adaptation of space telemetry data transport and processing standards, such as those specified by the Consultative Committee for Space Data Systems (CCSDS), and of those required for commercial ground distribution of telemetry data will drive these functional and performance requirements. In addition, budget limitations will force the requirement for higher modularity, flexibility, and interchangeability at lower cost in new ground telemetry data system elements. At NASA's Goddard Space Flight Center (GSFC), the design and development of generic ground telemetry data system elements over the last five years has resulted in significant solutions to these problems. This solution, referred to as the functional components approach, includes both hardware and software components ready for end-user application. The hardware functional components consist of modern data flow architectures utilizing Application Specific Integrated Circuits (ASICs) developed specifically to support NASA's telemetry data system needs and designed to meet a range of data rate requirements up to 300 Mbps. Real-time operating system software components support both embedded local software intelligence and overall system control, status, processing, and interface requirements. These components, hardware and software, form the superstructure upon which project-specific elements are added to complete a telemetry ground data system installation. This paper describes the functional components approach, some specific component examples, and a project example of the evolution from VLSI component, to basic board-level functional component, to integrated telemetry data system.

  2. High-performance commercial building systems

    Selkowitz, Stephen

    2003-10-01

This report summarizes key technical accomplishments resulting from the three-year PIER-funded R&D program, "High Performance Commercial Building Systems" (HPCBS). The program targets the commercial building sector in California, an end-use sector that accounts for about one-third of all California electricity consumption and an even larger fraction of peak demand, at a cost of over $10B/year. Commercial buildings also have a major impact on occupant health, comfort and productivity. Building design and operations practices that influence energy use are deeply engrained in a fragmented, risk-averse industry that is slow to change. Although California's aggressive standards efforts have resulted in new buildings designed to use less energy than those constructed 20 years ago, the actual savings realized are still well below technical and economic potentials. The broad goal of this program is to develop and deploy a set of energy-saving technologies, strategies, and techniques, and improve processes for designing, commissioning, and operating commercial buildings, while improving health, comfort, and performance of occupants, all in a manner consistent with sound economic investment practices. Results are to be broadly applicable to the commercial sector for different building sizes and types, e.g. offices and schools, for different classes of ownership, both public and private, and for owner-occupied as well as speculative buildings. The program aims to facilitate significant electricity use savings in the California commercial sector by 2015, while assuring that these savings are affordable and promote high quality indoor environments. The five linked technical program elements contain 14 projects with 41 distinct R&D tasks. Collectively they form a comprehensive Research, Development, and Demonstration (RD&D) program with the potential to capture large savings in the commercial building sector, providing significant economic benefits to

  3. 10 CFR 63.114 - Requirements for performance assessment.

    2010-01-01

    ... processes of engineered barriers in the performance assessment, including those processes that would adversely affect the performance of natural barriers. Degradation, deterioration, or alteration processes of... surrounding region to the extent necessary, and information on the design of the engineered barrier system...

  4. Intelligent Facades for High Performance Green Buildings

    Dyson, Anna [Rensselaer Polytechnic Inst., Troy, NY (United States)

    2017-03-01

Progress Towards Net-Zero and Net-Positive-Energy Commercial Buildings and Urban Districts Through Intelligent Building Envelope Strategies. Previous research and development of intelligent facade systems has been limited in its contribution towards national goals for achieving on-site net-zero buildings, because this R&D has failed to couple the many qualitative requirements of building envelopes (such as the provision of daylighting, access to exterior views, and satisfying aesthetic and cultural characteristics) with the quantitative metrics of energy harvesting, storage and redistribution. To achieve energy self-sufficiency from on-site solar resources, building envelopes can and must address this gamut of concerns simultaneously. With this project, we have undertaken a high-performance building-integrated combined-heat-and-power concentrating photovoltaic system with high temperature thermal capture, storage and transport towards multiple applications (BICPV/T). The critical contribution offered with the Integrated Concentrating Solar Façade (ICSF) is conceived to improve daylighting quality for occupant health and to mitigate solar heat gain while maximally capturing and transferring on-site solar energy. The ICSF accomplishes this multi-functionality by intercepting only the direct-normal component of solar energy (which is responsible for elevated cooling loads), thereby transforming a previously problematic source of energy into a high quality resource that can be applied to building demands such as heating, cooling, dehumidification, domestic hot water, and possible further augmentation of electrical generation through organic Rankine cycles. With the ICSF technology, our team is addressing the global challenge in transitioning commercial and residential building stock towards on-site clean energy self-sufficiency, by fully integrating innovative environmental control systems strategies within an intelligent and responsively dynamic building

  5. Long-term bridge performance high priority bridge performance issues.

    2014-10-01

Bridge performance is a multifaceted issue involving performance of materials and protective systems, performance of individual components of the bridge, and performance of the structural system as a whole. The Long-Term Bridge Performance (LTBP)...

  6. Validated High Performance Liquid Chromatography Method for ...

Purpose: To develop a simple, rapid and sensitive high performance liquid chromatography (HPLC) method for the determination of cefadroxil monohydrate in human plasma. Methods: A Shimadzu HPLC with LC solution software was used with a Waters Spherisorb C18 (5 μm, 150 mm × 4.5 mm) column. The mobile phase ...

  7. An Introduction to High Performance Fortran

    John Merlin

    1995-01-01

High Performance Fortran (HPF) is an informal standard for extensions to Fortran 90 to assist its implementation on parallel architectures, particularly for data-parallel computation. Among other things, it includes directives for specifying data distribution across multiple memories, and concurrent execution features. This article provides a tutorial introduction to the main features of HPF.

  8. High performance computing on vector systems

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  9. High Performance Electronics on Flexible Silicon

    Sevilla, Galo T.

    2016-09-01

Over the last few years, flexible electronic systems have gained increased attention from researchers around the world because of their potential to create new applications such as flexible displays, flexible energy harvesters, artificial skin, and health monitoring systems that cannot be integrated with conventional wafer-based complementary metal oxide semiconductor processes. Most of the current efforts to create flexible high performance devices are based on the use of organic semiconductors. However, inherent material limitations make them unsuitable for big data processing and high speed communications. The objective of my doctoral dissertation is to develop integration processes that allow the transformation of rigid high performance electronics into flexible ones while maintaining their performance and cost. In this work, two different techniques to transform inorganic complementary metal-oxide-semiconductor electronics into flexible ones have been developed using industry-compatible processes. Furthermore, these techniques were used to realize flexible discrete devices and circuits, which include metal-oxide-semiconductor field-effect transistors, the first demonstration of flexible Fin-field-effect transistors, and metal-oxide-semiconductor-based circuits. Finally, this thesis presents a new technique to package, integrate, and interconnect flexible high performance electronics using low-cost additive manufacturing techniques such as 3D printing and inkjet printing. This thesis contains in-depth studies on electrical, mechanical, and thermal properties of the fabricated devices.

  10. Debugging a high performance computing program

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
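
The grouping idea summarized in this record (clustering threads by the addresses of their calling instructions so that anomalous threads stand out) can be sketched briefly. The example below is a hypothetical illustration, not the disclosed implementation: it groups threads by captured call-stack addresses and lists the smallest groups first, since a thread stuck at an unusual address is a natural candidate for a defective thread. The address values and thread ids are invented.

```python
from collections import defaultdict

def group_threads_by_callstack(stacks):
    """Map each distinct call stack (tuple of instruction addresses) to the thread ids sharing it."""
    groups = defaultdict(list)
    for thread_id, addresses in stacks.items():
        groups[tuple(addresses)].append(thread_id)
    return groups

# Hypothetical per-thread call-stack samples (addresses of calling instructions).
stacks = {
    0: [0x400A10, 0x400B33, 0x401C7F],
    1: [0x400A10, 0x400B33, 0x401C7F],
    2: [0x400A10, 0x400B33, 0x401C7F],
    3: [0x400A10, 0x400B33, 0x40FE02],   # one thread stuck elsewhere: suspicious
}

groups = group_threads_by_callstack(stacks)
for addresses, threads in sorted(groups.items(), key=lambda kv: len(kv[1])):
    print(f"{len(threads):3d} thread(s) at {[hex(a) for a in addresses]}: {threads}")
```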

  11. Technology Leadership in Malaysia's High Performance School

    Yieng, Wong Ai; Daud, Khadijah Binti

    2017-01-01

The headmaster, as leader of the school, also plays a role as a technology leader. This applies to the headmasters of high performance schools (HPS) as well. The HPS excel in all aspects of education. In this study, the researcher is interested in examining the role of the headmaster as a technology leader through interviews with three headmasters of high…

  12. Validated high performance liquid chromatographic (HPLC) method ...

    2010-02-22

A specific and accurate high performance liquid chromatographic method for the determination of ZER in micro-volumes is reported; the method is suitable for preclinical pharmacokinetic studies.

  13. Validated High Performance Liquid Chromatography Method for ...

Purpose: To develop a simple, rapid and sensitive high performance liquid chromatography method for the determination of cefadroxil monohydrate in human plasma. The variation in response, tailing factor and resolution over six replicate injections was < 3%. Keywords: cefadroxil monohydrate, human plasma, pharmacokinetics, bioequivalence.

  14. Project materials [Commercial High Performance Buildings Project

    None

    2001-01-01

The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE's Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefit of buildings that are designed, built and operated to be energy efficient, environmentally sustainable, superior quality, and cost effective.

  15. High performance structural ceramics for nuclear industry

    Pujari, Vimal K.; Faker, Paul

    2006-01-01

A family of Saint-Gobain structural ceramic materials and products produced by its High Performance Refractory Division is described. Over the last fifty years or so, Saint-Gobain has been a leader in developing non-oxide ceramic based novel materials, processes and products for application in the nuclear, chemical, automotive, defense and mining industries

  16. A new high performance current transducer

    Tang Lijun; Lu Songlin; Li Deming

    2003-01-01

A DC-100 kHz current transducer has been developed using a new technique based on the zero-flux detection principle. The new current transducer offers high performance: its magnetic core need not be selected very stringently, and it is easy to manufacture

  17. Strategy Guideline. High Performance Residential Lighting

    Holton, J. [IBACOS, Inc., Pittsburgh, PA (United States)

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  18. Architecting Web Sites for High Performance

    Arun Iyengar

    2002-01-01

Web site applications are some of the most challenging high-performance applications currently being developed and deployed. The challenges emerge from the specific combination of high variability in workload characteristics and of high performance demands regarding the service level, scalability, availability, and costs. In recent years, a large body of research has addressed the Web site application domain, and a host of innovative software and hardware solutions have been proposed and deployed. This paper is an overview of recent solutions concerning the architectures and the software infrastructures used in building Web site applications. The presentation emphasizes three of the main functions in a complex Web site: the processing of client requests, the control of service levels, and the interaction with remote network caches.

  19. High performance anode for advanced Li batteries

    Lake, Carla [Applied Sciences, Inc., Cedarville, OH (United States)

    2015-11-02

The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI's Si-CNF high-performance anode by creating a framework for large volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon deposited on a nano-scale carbon filament to achieve the benefits of high cycle life and high charge capacity without the fading of, or failure in, capacity that results from stress-induced fracturing of the Si particles and their de-coupling from the electrode. ASI's patented coating process distinguishes itself from others in that it is highly reproducible, readily scalable, and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded Si-CNF interface that significantly improves cycling stability and enhances adhesion of silicon to the carbon fiber support. In Phase I, the team demonstrated that production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor to a fluidized bed reactor. In addition, ASI made significant progress in the development of low-cost, quick testing methods that can be performed on silicon-coated CNFs as a means of quality control. To date, weight change, density, and cycling performance were the key metrics used to validate the high performance anode material. Under this effort, ASI made strides to establish a quality control protocol for the large volume production of Si-CNFs and has identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high volume and low-cost production of Si-CNF material for anodes in Li-ion batteries.

  20. NINJA: Java for High Performance Numerical Computing

    José E. Moreira

    2002-01-01

When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

  1. High performance image processing of SPRINT

    DeGroot, T. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    This talk will describe computed tomography (CT) reconstruction using filtered back-projection on SPRINT parallel computers. CT is a computationally intensive task, typically requiring several minutes to reconstruct a 512x512 image. SPRINT and other parallel computers can be applied to CT reconstruction to reduce computation time from minutes to seconds. SPRINT is a family of massively parallel computers developed at LLNL. SPRINT-2.5 is a 128-node multiprocessor whose performance can exceed twice that of a Cray-Y/MP. SPRINT-3 will be 10 times faster. Described will be the parallel algorithms for filtered back-projection and their execution on SPRINT parallel computers.
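
Filtered back-projection itself is simple to express, which is why it parallelizes well across projection angles on machines like SPRINT. The sketch below is a serial, NumPy-only illustration under simplifying assumptions (parallel-beam geometry, ideal ramp filter, linear interpolation); it is not the SPRINT code, and the synthetic sinogram at the end is only a smoke test.

```python
import numpy as np

def filtered_back_projection(sinogram, angles_deg):
    """Parallel-beam FBP. sinogram has shape (n_angles, n_detectors)."""
    n_angles, n_det = sinogram.shape

    # Ramp filter applied per projection in the frequency domain.
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    # Back-project each filtered view over the reconstruction grid.
    image = np.zeros((n_det, n_det))
    centre = (n_det - 1) / 2.0
    y, x = np.mgrid[0:n_det, 0:n_det] - centre
    det_axis = np.arange(n_det) - centre
    for view, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = x * np.cos(theta) + y * np.sin(theta)   # detector coordinate of each pixel
        image += np.interp(t, det_axis, view)       # linear interpolation along detector
    return image * np.pi / n_angles

# Smoke test with a crude synthetic sinogram (a single bright detector bin).
angles = np.arange(0.0, 180.0, 1.0)
sino = np.zeros((len(angles), 65))
sino[:, 40] = 1.0
recon = filtered_back_projection(sino, angles)
print(recon.shape, float(recon.max()))
```

On a parallel machine, the loop over views is the natural unit of work: each node filters and back-projects a subset of the projection angles and the partial images are summed, which is what reduces reconstruction time from minutes to seconds.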

  2. Development of high performance cladding materials

    Park, Jeong Yong; Jeong, Y. H.; Park, S. Y.

    2010-04-01

The irradiation test for HANA claddings was conducted, and a series of evaluations of the next HANA claddings, together with their in-pile and out-of-pile performance tests, was also carried out at the Halden research reactor. The 6th irradiation test has been completed successfully in the Halden research reactor. As a result, HANA claddings showed high performance, such as corrosion resistance increased by 40% compared to Zircaloy-4. The high performance of HANA claddings in the Halden test has enabled a lead test rod program as the first step toward the commercialization of HANA claddings. A database has been established for thermal and LOCA-related properties. It was confirmed from the thermal shock test that the integrity of HANA claddings was maintained over a wider region than the criteria regulated by the NRC. The manufacturing process of strips was established in order to apply HANA alloys, which were originally developed for the claddings, to the spacer grids. 250 kinds of model alloys for the next-generation claddings were designed and manufactured over 4 times and used to select the preliminary candidate alloys for the next-generation claddings. The selected candidate alloys showed 50% better corrosion resistance and 20% improved high temperature oxidation resistance compared to the foreign advanced claddings. We established the manufacturing conditions controlling the performance of the dual-cooled claddings by changing the reduction rate in the cold working steps

  3. The path toward HEP High Performance Computing

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  4. Fracture toughness of ultra high performance concrete by flexural performance

    Manolova Emanuela

    2016-01-01

This paper describes the fracture toughness of an innovative structural material, Ultra High Performance Concrete (UHPC), evaluated by flexural performance. To determine the material behaviour under static loading, adapted standard test methods for the flexural performance of fiber-reinforced concrete (ASTM C 1609 and ASTM C 1018) are used. Fracture toughness is estimated from various deformation parameters derived from the load-deflection curve, obtained by testing a simply supported beam under third-point loading with a servo-controlled testing system. This method is used to estimate the contribution of the embedded fiber reinforcement to the improved fracture behaviour of UHPC through changes in crack-resisting capacity, fracture toughness and energy absorption capacity by various mechanisms. The position of the first crack has been determined from the P-δ (load-deflection) response and the P-ε (load versus longitudinal deformation in the tensile zone) response, which are used to calculate the two toughness indices I5 and I10. The combination of steel fibres of different dimensions leads to a composite having, at the same time, increased crack resistance, first-crack formation, ductility and post-peak residual strength.
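
A hedged sketch of how ASTM C1018-style toughness indices such as those mentioned here are obtained from a load-deflection record: the indices compare the area under the P-δ curve up to multiples of the first-crack deflection with the area up to the first crack (I5 up to 3 times, I10 up to 5.5 times the first-crack deflection). The data, units, and first-crack point below are invented purely for illustration.

```python
import numpy as np

def trapezoid(y, x):
    """Area under y(x) by the trapezoidal rule (x monotonically increasing)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def toughness_indices(deflection, load, first_crack_deflection):
    """ASTM C1018-style indices I5 and I10 from a load-deflection curve."""
    def area_up_to(limit):
        mask = deflection <= limit
        return trapezoid(load[mask], deflection[mask])
    a1 = area_up_to(first_crack_deflection)
    i5 = area_up_to(3.0 * first_crack_deflection) / a1
    i10 = area_up_to(5.5 * first_crack_deflection) / a1
    return i5, i10

# Invented example data: elastic rise to first crack, then a ductile plateau.
d = np.linspace(0.0, 0.6, 601)                                 # deflection, mm
p = np.where(d <= 0.1, 300.0 * d, 30.0 + 5.0 * (d - 0.1))      # load, kN (illustrative)
print(toughness_indices(d, p, first_crack_deflection=0.1))     # near (5, 10) for this shape
```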

  5. High-Performance Tiled WMS and KML Web Server

    Plesea, Lucian

    2007-01-01

    This software is an Apache 2.0 module implementing a high-performance map server to support interactive map viewers and virtual planet client software. It can be used in applications that require access to very-high-resolution geolocated images, such as GIS, virtual planet applications, and flight simulators. It serves Web Map Service (WMS) requests that comply with a given request grid from an existing tile dataset. It also generates the KML super-overlay configuration files required to access the WMS image tiles.
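
The idea of serving only WMS requests "that comply with a given request grid" can be illustrated with a small sketch: given the grid origin, tile size, and resolution, a request bounding box either snaps exactly onto tile boundaries (and can be answered from the tile store) or is rejected. The function, grid parameters, and tile size below are hypothetical and are not the actual module's API.

```python
def tile_indices(bbox, origin=(-180.0, 90.0), tile_deg=0.5625):
    """Return (col, row) of the tile whose bounds exactly match bbox, else None.

    bbox is (min_lon, min_lat, max_lon, max_lat) in degrees; the grid starts at
    `origin` (upper-left corner) with square tiles of `tile_deg` degrees.
    """
    min_lon, min_lat, max_lon, max_lat = bbox
    eps = 1e-9
    if abs((max_lon - min_lon) - tile_deg) > eps or abs((max_lat - min_lat) - tile_deg) > eps:
        return None                         # wrong size: not a grid-aligned request
    col = (min_lon - origin[0]) / tile_deg
    row = (origin[1] - max_lat) / tile_deg
    if abs(col - round(col)) > eps or abs(row - round(row)) > eps:
        return None                         # not aligned to tile boundaries
    return int(round(col)), int(round(row))

print(tile_indices((-180.0, 89.4375, -179.4375, 90.0)))   # (0, 0): answerable from the tile store
print(tile_indices((-180.1, 89.4, -179.5, 90.0)))         # None: misaligned request
```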

  6. IT Requirements Integration in High-Rise Construction Design Projects

    Levina, Anastasia; Ilin, Igor; Esedulaev, Rustam

    2018-03-01

    The paper discusses the growing role of IT support for the operation of modern high-rise buildings, due to the complexity of managing engineering systems of buildings and the requirements of consumers for the IT infrastructure. The existing regulatory framework for the development of design documentation for construction, including high-rise buildings, is analyzed, and the lack of coherence in the development of this documentation with the requirements for the creation of an automated management system and the corresponding IT infrastructure is stated. The lack of integration between these areas is the cause of delays and inefficiencies both at the design stage and at the stage of putting the building into operation. The paper proposes an approach to coordinate the requirements of the IT infrastructure of high-rise buildings and design documentation for construction. The solution to this problem is possible within the framework of the enterprise architecture concept by coordinating the requirements of the IT and technological layers at the design stage of the construction.

  7. Frequency selective surfaces based high performance microstrip antenna

    Narayan, Shiv; Jha, Rakesh Mohan

    2016-01-01

This book focuses on performance enhancement of printed antennas using frequency selective surface (FSS) technology. The growing demand for stealth technology in strategic areas requires high-performance low-RCS (radar cross section) antennas. Such requirements may be met by incorporating FSS into the antenna structure, either in its ground plane or as the superstrate, due to the filter characteristics of the FSS structure. In view of this, a novel approach based on FSS technology is presented in this book to enhance the performance of printed antennas, including out-of-band structural RCS reduction. In this endeavor, the EM design of microstrip patch antennas (MPA) loaded with (i) an FSS-based high impedance surface (HIS) ground plane and (ii) FSS-based superstrates is discussed in detail. The EM analysis of the proposed FSS-based antenna structures has been carried out using transmission line analogy, in combination with the reciprocity theorem. Further, various types of novel FSS structures are considered in desi...

  8. HIGH PERFORMANCE CERIA BASED OXYGEN MEMBRANE

    2014-01-01

The invention describes a new class of highly stable mixed conducting materials based on acceptor-doped cerium oxide (CeO2-δ) in which the limiting electronic conductivity is significantly enhanced by co-doping with a second element or co-dopant, such as Nb, W and Zn, so that cerium and the co-dopant have an ionic size ratio between 0.5 and 1. These materials can thereby improve the performance and extend the range of operating conditions of oxygen permeation membranes (OPM) for different high temperature membrane reactor applications. The invention also relates to the manufacturing of supported...

  9. Playa: High-Performance Programmable Linear Algebra

    Victoria E. Howle

    2012-01-01

This paper introduces Playa, a high-level user interface layer for composing algorithms for complex multiphysics problems out of objects from other Trilinos packages. Among other features, Playa provides very high-performance overloaded operators implemented through an expression template mechanism. In this paper, we give an overview of the central Playa objects from a user's perspective, show application to a sequence of increasingly complex solver algorithms, provide timing results for Playa's overloaded operators and other functions, and briefly survey some of the implementation issues involved.

  10. ITER-EDA physics design requirements and plasma performance assessments

    Uckan, N.A.; Galambos, J.; Wesley, J.; Boucher, D.; Perkins, F.; Post, D.; Putvinski, S.

    1996-01-01

Physics design guidelines, plasma performance estimates, and sensitivity of performance to changes in physics assumptions are presented for the ITER-EDA Interim Design. The overall ITER device parameters have been derived from the performance goals using physics guidelines based on the physics R&D results. The ITER-EDA design has a single-null divertor configuration (divertor at the bottom) with a nominal plasma current of 21 MA, magnetic field of 5.68 T, major and minor radius of 8.14 m and 2.8 m, and a plasma elongation (at the 95% flux surface) of ∼1.6 that produces a nominal fusion power of ∼1.5 GW for an ignited burn pulse length of ≥1000 s. The assessments have shown that ignition at 1.5 GW of fusion power can be sustained in ITER for 1000 s given present extrapolations of H-mode confinement (τE = 0.85 × τITER93H), helium exhaust (τ*He/τE = 10), representative plasma impurities (nBe/ne = 2%), and beta limit [βN = β(%)/(I/aB) ≤ 2.5]. The provision of 100 MW of auxiliary power, necessary to access the H-mode during the approach to ignition, provides for the possibility of driven burn operations at Q = 15. This enables ITER to fulfill its mission of fusion power (∼1-1.5 GW) and fluence (∼1 MWa/m2) goals if confinement, impurity levels, or operational (density, beta) limits prove to be less favorable than present projections. The power threshold for the H-L transition, confinement uncertainties, and operational limits (Greenwald density limit and beta limit) are potential performance-limiting issues. Improvement of the helium exhaust (τ*He/τE ≤ 5) and potential operation in reverse-shear mode significantly improve ITER performance

  11. Design of JMTR high-performance fuel element

    Sakurai, Fumio; Shimakawa, Satoshi; Komori, Yoshihiro; Tsuchihashi, Keiichiro; Kaminaga, Fumito

    1999-01-01

For test and research reactors, core conversion to low-enriched uranium fuel is required from the viewpoint of non-proliferation of nuclear weapon material. Improvements in core performance are also required in order to respond to recent advanced utilization needs. To meet both requirements, a high-performance fuel element of high uranium density, with Cd wires as burnable absorbers, was adopted for the JMTR core conversion to low-enriched uranium fuel. Examination of the suitability of few-group constants generated by a conventional transport-theory calculation with an isotropic scattering approximation for a few-group diffusion-theory core calculation showed that the depletion of the Cd wires could not be predicted accurately using group constants generated by the conventional method. Therefore, a new method of generating few-group constants that takes into account the incident neutron spectrum at the Cd wire was developed. As a result, the most suitable high-performance fuel element for the JMTR was designed successfully, allowing the operation duration without refueling to be extended to almost twice as long and offering an irradiation field with constant neutron flux. (author)

  12. Robust High Performance Aquaporin based Biomimetic Membranes

    Helix Nielsen, Claus; Zhao, Yichun; Qiu, C.

    2013-01-01

Aquaporins are water channel proteins with high water permeability and solute rejection, which makes them promising for preparing high-performance biomimetic membranes. Despite the growing interest in aquaporin-based biomimetic membranes (ABMs), it is challenging to produce robust and defect-free ABMs on top of a support membrane. Control membranes, either without aquaporins or with the inactive AqpZ R189A mutant aquaporin, served as controls. The separation performance of the membranes was evaluated by cross-flow forward osmosis (FO) and reverse osmosis (RO) tests. In RO the ABM achieved a water permeability of ~ 4 L/(m2 h bar) with a NaCl rejection > 97% at an applied hydraulic pressure of 5 bar. The water permeability was ~40% higher compared to a commercial brackish water RO membrane (BW30) and an order of magnitude higher compared to a seawater RO membrane (SW30HR). In FO, the ABMs had > 90
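
For orientation, the permeability figure quoted above enters the usual solution-diffusion expression for RO water flux; the LaTeX statement below is a textbook relation added here for context, not text from the record.

```latex
% Water flux through an RO membrane (solution-diffusion model):
% A is the water permeability coefficient (about 4 L m^{-2} h^{-1} bar^{-1}
% for the ABM reported above), \Delta P the applied hydraulic pressure and
% \Delta\pi the osmotic pressure difference across the membrane.
\[
  J_w \;=\; A\,(\Delta P - \Delta \pi)
\]
```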

  13. Evaluation of high-performance computing software

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  14. High performance cloud auditing and applications

    Choi, Baek-Young; Song, Sejun

    2014-01-01

    This book mainly focuses on cloud security and high performance computing for cloud auditing. The book discusses emerging challenges and techniques developed for high performance semantic cloud auditing, and presents the state of the art in cloud auditing, computing and security techniques with focus on technical aspects and feasibility of auditing issues in federated cloud computing environments.   In summer 2011, the United States Air Force Research Laboratory (AFRL) CyberBAT Cloud Security and Auditing Team initiated the exploration of the cloud security challenges and future cloud auditing research directions that are covered in this book. This work was supported by the United States government funds from the Air Force Office of Scientific Research (AFOSR), the AFOSR Summer Faculty Fellowship Program (SFFP), the Air Force Research Laboratory (AFRL) Visiting Faculty Research Program (VFRP), the National Science Foundation (NSF) and the National Institute of Health (NIH). All chapters were partially suppor...

  15. Monitoring SLAC High Performance UNIX Computing Systems

    Lettsome, Annette K.

    2005-01-01

Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process taken in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface
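
As a rough illustration of the kind of script-driven relational store described here, the sketch below writes time-stamped metric samples into a relational table. It uses SQLite only to stay self-contained and runnable; the table name, columns, and sample metrics are hypothetical and are not the schema of the SLAC database.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")        # stand-in for a MySQL server
conn.execute("""CREATE TABLE IF NOT EXISTS metrics (
                    host TEXT, metric TEXT, value REAL, ts INTEGER)""")

def record_sample(host, metric, value, ts=None):
    """Insert one monitoring sample (e.g. parsed from Ganglia's XML output)."""
    conn.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)",
                 (host, metric, value, int(ts or time.time())))
    conn.commit()

record_sample("node01.example.org", "load_one", 0.42)
record_sample("node01.example.org", "mem_free", 12582912)

for row in conn.execute("SELECT host, metric, value FROM metrics"):
    print(row)
```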

  16. High performance parallel computers for science

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost-effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors, each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 Mflops (peak), 10 MByte single-board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5 GFlop system is under construction

  17. High performance repairing of reinforced concrete structures

    Iskhakov, I.; Ribakov, Y.; Holschemacher, K.; Mueller, T.

    2013-01-01

Highlights: Steel fibered high strength concrete is effective for repairing concrete elements. By changing the fibers' content, the required ductility of the repaired element is achieved. Experiments prove previously developed design concepts for two-layer beams. Abstract: Steel fibered high strength concrete (SFHSC) is an effective material that can be used for repairing concrete elements. Design of normal strength concrete (NSC) elements that are to be repaired using SFHSC can be based on general concepts for the design of two-layer beams, consisting of SFHSC in the compressed zone and NSC without fibers in the tensile zone. It was previously reported that such elements are effective when their section carries rather large bending moments. Steel fibers added to high strength concrete increase its ultimate deformations due to the additional energy dissipation potential contributed by the fibers. By changing the fibers' content, a required ductility level of the repaired element can be achieved. Providing proper ductility is important for the design of structures for dynamic loadings. The current study discusses experimental results that form a basis for finding the optimal fiber content, yielding the highest Poisson coefficient and ductility of the repaired elements' sections. Some technological issues, as well as the distribution of fibers in the cross section of two-layer bending elements, are investigated. The experimental results obtained in the frame of this study form a basis for general technological provisions related to the repair of NSC beams and slabs using SFHSC.

  18. Toward a theory of high performance.

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts--and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts--including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart-have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance.

  19. High-performance phase-field modeling

    Vignal, Philippe; Sarmiento, Adel; Cortes, Adriano Mauricio; Dalcin, L.; Collier, N.; Calo, Victor M.

    2015-01-01

and phase-field crystal equation will be presented, which corroborate the theoretical findings, and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.
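
For readers unfamiliar with the models named in this record, the Cahn-Hilliard equation takes the standard form below; this is a textbook statement added for orientation, not notation taken from the paper.

```latex
% Cahn-Hilliard equation for a conserved order parameter \phi:
% M is a mobility, f(\phi) a double-well free-energy density and
% \kappa a gradient-energy coefficient.
\[
  \frac{\partial \phi}{\partial t} \;=\; \nabla \cdot \bigl( M \, \nabla \mu \bigr),
  \qquad
  \mu \;=\; f'(\phi) \;-\; \kappa \, \nabla^{2} \phi .
\]
```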

  20. AHPCRC - Army High Performance Computing Research Center

    2010-01-01

Army High Performance Computing Research Center (www.ahpcrc.org). Of particular interest is the ability of a distributed jamming network (DJN) to jam signals in all or part of a sensor or communications network.

  1. DURIP: High Performance Computing in Biomathematics Applications

    2017-05-10

The goal of this award was to enhance the capabilities of the Department of Applied Mathematics and Statistics (AMS) at the University of California, Santa Cruz (UCSC) to conduct research and research-related education in areas of high performance computing in biomathematics applications.

  2. High Performance Computing Operations Review Report

    Cupps, Kimberly C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  3. Planning for high performance project teams

    Reed, W.; Keeney, J.; Westney, R.

    1997-01-01

    Both industry-wide research and corporate benchmarking studies confirm the significant savings in cost and time that result from early planning of a project. Amoco's Team Planning Workshop combines long-term strategic project planning and short-term tactical planning with team building to provide the basis for high performing project teams, better project planning, and effective implementation of the Amoco Common Process for managing projects

  4. Computational Biology and High Performance Computing 2000

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  5. Long duration performance of high temperature irradiation resistant thermocouples

    Rempe, J.; Knudson, D.; Condie, K.; Cole, J.; Wilkins, S.C.

    2007-01-01

Many advanced nuclear reactor designs require new fuel, cladding, and structural materials. Data are needed to characterize the performance of these new materials in high temperature, radiation conditions. However, traditional methods for measuring temperature in-pile degrade at temperatures above 1100 C. To address this instrumentation need, the Idaho National Laboratory (INL) developed and evaluated the performance of a high temperature irradiation-resistant thermocouple that contains alloys of molybdenum and niobium. To verify the performance of INL's recommended thermocouple design, a series of high temperature (1200 to 1800 C), long duration (up to six months) tests has been initiated. This paper summarizes results from the tests that have been completed. Data are presented from 4000-hour tests conducted at 1200 and 1400 C that demonstrate the stability of this thermocouple (less than 2% drift). In addition, post-test metallographic examinations are discussed which confirm the compatibility of the thermocouple materials throughout these long duration, high temperature tests. (authors)

  6. Research on high performance mirrors for free electron lasers

    Kitatani, Fumito

    1996-01-01

For stable operation of a free electron laser, high-performance optical elements are required because of the laser's characteristics. In particular, since the gain of a short-wavelength free electron laser is low, optical elements with very high reflectivity are required. In addition, since high-energy noise light is present in a free electron laser, the optical elements must have high optical breaking strength. At present, the Power Reactor and Nuclear Fuel Development Corporation is carrying out research to improve the performance of dielectric multilayer film elements for short wavelengths. Manufacturing such high-performance elements requires the development of new vapor deposition materials, new vapor deposition processes, and techniques for accurate substrate polishing and inspection. Diamond-like carbon (DLC) film is a material that satisfies these requirements, and its properties are explained. Regarding the manufacture of DLC films for short-wavelength optics, the test equipment for forming the DLC films, the film-forming tests, the change of film quality with gas conditions, discharge conditions and substrate materials, and the measurement of the optical breaking strength are reported. (K.I.)

  7. Development of DSRC device and communication system performance measures recommendations for DSRC OBE performance and security requirements.

    2016-05-22

This report presents recommendations for minimum DSRC device communication performance and security requirements to ensure effective operation of the DSRC system. The team identified recommended DSRC communications requirements aligned to use cas...

  8. High performance separation of lanthanides and actinides

    Sivaraman, N.; Vasudeva Rao, P.R.

    2011-01-01

The major advantage of High Performance Liquid Chromatography (HPLC) is its ability to provide rapid, high-performance separations. It is evident from the Van Deemter curve relating particle size to resolution that packing materials with particle sizes below 2 μm provide better resolution for high-speed separations and for resolving complex mixtures than 5 μm based supports. In the recent past, monolith-based chromatographic support materials have been studied extensively at our laboratory. A monolith column consists of a single piece of porous, rigid material containing mesopores and micropores, which provide fast analyte mass transfer. A monolith support provides significantly higher separation efficiency than particle-packed columns. A clear advantage of monoliths is that they can be operated at higher flow rates with lower back pressure owing to their higher column permeability, which drastically reduces analysis time while retaining high separation efficiency. The fast separation methods developed were applied to assay lanthanides and actinides in dissolver solutions of nuclear reactor fuels
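
The particle-size argument referenced here follows from the Van Deemter relation; the statement below is the standard textbook form, added for orientation rather than quoted from the record.

```latex
% Van Deemter equation: plate height H as a function of linear velocity u.
% A (eddy diffusion) and C (mass-transfer resistance) both decrease with
% particle size, so sub-2-micron packings and monoliths give smaller H
% (better resolution) at the high flow rates used for fast separations.
\[
  H \;=\; A \;+\; \frac{B}{u} \;+\; C\,u
\]
```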

  9. Borehole sealing literature review of performance requirements and materials

    Piccinin, D.; Hooton, R.D.

    1985-02-01

    To ensure the safe disposal of nuclear wastes, all potential pathways for radionuclide release to the biosphere must be effectively sealed. This report presents a summary of the literature up to August 1982 and outlines the placement, mechanical property and durability-stability requirements for borehole sealing. An outline of the materials that have been considered for possible use in borehole sealing is also included. Cement grouts are recommended for further study since it is indicated in the literature that cement grouts offer the best opportunity of effectively sealing boreholes employing present technology. However, new and less well known materials should also be researched to ensure that the best possible borehole plugging system is developed. 78 refs

  10. The path toward HEP High Performance Computing

    Apostolakis, John; Brun, René; Gheata, Andrei; Wenzel, Sandro; Carminati, Federico

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelising at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in memory consumption compared to the single-threaded version, together with sub-optimal handling of event-processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grained parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit
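
    As an illustration of the basket-level scheduling idea described above, the following minimal Python sketch groups particles into fixed-size "baskets" and hands them to a pool of workers. It is not the Geant-V code; the Particle type, the propagation step, the basket size and the worker count are hypothetical placeholders.

        # Minimal sketch of basket-level parallelism, loosely inspired by the idea
        # described above (not the Geant-V framework itself). Particles are grouped
        # into fixed-size baskets so that each worker operates on a vector of similar
        # work items instead of a single particle at a time.
        from concurrent.futures import ProcessPoolExecutor
        from dataclasses import dataclass
        from typing import List
        import random

        @dataclass
        class Particle:                    # hypothetical, heavily simplified state
            energy: float
            position: float

        def propagate_basket(basket: List[Particle]) -> List[Particle]:
            """Toy 'transport' step applied to a whole basket (vectorisable)."""
            return [Particle(p.energy * 0.9, p.position + 1.0) for p in basket]

        def make_baskets(particles: List[Particle], basket_size: int):
            for i in range(0, len(particles), basket_size):
                yield particles[i:i + basket_size]

        if __name__ == "__main__":
            particles = [Particle(random.uniform(1.0, 100.0), 0.0) for _ in range(10000)]
            with ProcessPoolExecutor(max_workers=4) as pool:      # arbitrary worker count
                baskets_out = list(pool.map(propagate_basket, make_baskets(particles, 256)))
            print(sum(len(b) for b in baskets_out), "particles propagated")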

  11. A High Performance COTS Based Computer Architecture

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so large that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the behavior of the COTS components. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high performance processing. The rest of the paper is organized as follows: in a first section we recapitulate the interests and constraints of using COTS components for space applications; then we briefly describe existing fault mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  12. Operational requirements of spherical HTR fuel elements and their performance

    Roellig, K.; Theymann, W.

    1985-01-01

    The German development of spherical fuel elements with coated fuel particles led to a product design which fulfils the operational requirements for all HTR applications with mean gas exit temperatures from 700 deg C (electricity and steam generation) up to 950 deg C (supply of nuclear process heat). In spite of this relatively wide span for a parameter with strong impact on fuel element behaviour, almost identical fuel specifications can be used for the different reactor purposes. For pebble bed reactors with relatively low gas exit temperatures of 700 deg C, the ample design margins of the fuel elements offer the possibility to enlarge the scope of their in-service duties and, simultaneously, to improve fuel cycle economics. This is demonstrated for the HTR-500, an electricity and steam generating 500 MWel eq plant presently proposed as follow-up project to the THTR-300. Due to the low operating temperatures of the HTR-500 core, the fuel can be concentrated in about 70% of the pebbles of the core thus saving fuel cycle costs. Under all design accident conditions fuel temperatures are maintained below 1250 deg C. This allows a significant reduction in the engineered activity barriers outside the primary circuit, in particular for the loss of coolant accident. Furthermore, access to major primary circuit components and the reuse of the fuel elements after any design accident are possible. (author)

  13. Cryogenic propellant management: Integration of design, performance and operational requirements

    Worlund, A. L.; Jamieson, J. R., Jr.; Cole, T. W.; Lak, T. I.

    1985-01-01

    The integration of the design features of the Shuttle elements into a cryogenic propellant management system is described. The implementation and verification of the design/operational changes resulting from design deficiencies and/or element incompatibilities encountered subsequent to the critical design reviews are emphasized. Major topics include: subsystem designs to provide liquid oxygen (LO2) tank pressure stabilization, LO2 facility vent for ice prevention, liquid hydrogen (LH2) feedline high point bleed, pogo suppression on the Space Shuttle Main Engine (SSME), LO2 low level cutoff, Orbiter/engine propellant dump, and LO2 main feedline helium injection for geyser prevention.

  14. Automatic Energy Schemes for High Performance Applications

    Sundriyal, Vaibhav [Iowa State Univ., Ames, IA (United States)]

    2013-01-01

    Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers significantly affect their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), the power consumption may be controlled in software. Additionally, the network interconnect, such as Infiniband, may be exploited to maximize energy savings, while the application performance loss and frequency-switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy saving strategies on a per-call basis. Next, it targets point-to-point communications, grouping them into phases and applying frequency scaling to them to save energy by exploiting architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases, and applies throttling to them in addition to DVFS to maximize energy savings. The experimental results are presented for NAS parallel benchmark problems as well as for realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.
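
    A minimal sketch of the per-call DVFS idea described above is given below: the CPU frequency cap is lowered around a communication phase and restored afterwards. It assumes the common Linux cpufreq sysfs layout (paths and permissible values vary by system and require elevated privileges), and the communication phase is a placeholder sleep rather than an actual MPI call.

        # Illustrative sketch only: lower the CPU frequency cap around a communication
        # phase and restore it afterwards, in the spirit of the per-call DVFS strategies
        # described above. The sysfs path follows the common Linux cpufreq layout but
        # varies by system and requires elevated privileges; the "communication" here is
        # a placeholder sleep, not an actual MPI collective or point-to-point call.
        import contextlib
        import time

        CPUFREQ = "/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_max_freq"

        def set_max_freq(cpu: int, khz: int) -> None:
            with open(CPUFREQ.format(cpu=cpu), "w") as f:         # needs root
                f.write(str(khz))

        @contextlib.contextmanager
        def scaled_down(cpu: int, low_khz: int, high_khz: int):
            """Run the enclosed block with a reduced frequency cap."""
            set_max_freq(cpu, low_khz)
            try:
                yield
            finally:
                set_max_freq(cpu, high_khz)

        def communication_phase():
            time.sleep(0.1)                # stand-in for a blocking communication call

        if __name__ == "__main__":
            with scaled_down(cpu=0, low_khz=1200000, high_khz=2400000):
                communication_phase()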

  15. High-performance computing in seismology

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  16. High performance computing in linear control

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both the theory and applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory; control theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and become available in recent years. These supercomputers offer very high speed in computations. Highly efficient software, based on powerful algorithms, has been developed for these advanced computers and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments

  17. Improving UV Resistance of High Performance Fibers

    Hassanin, Ahmed

    High performance fibers are characterized by their superior properties compared to traditional textile fibers. High strength fibers have high moduli, high strength-to-weight ratios, high chemical resistance, and usually high temperature resistance. They are used in applications where superior properties are needed, such as bulletproof vests, ropes and cables, cut-resistant products, load tendons for giant scientific balloons, fishing rods, tennis racket strings, parachute cords, adhesives and sealants, protective apparel and tire cords. Unfortunately, ultraviolet (UV) radiation causes serious degradation to most high performance fibers. UV light, either natural or artificial, causes organic compounds to decompose and degrade, because the energy of UV photons is high enough to break chemical bonds, causing chain scission. This work aims at achieving maximum protection of high performance fibers using sheathing approaches. The proposed sheaths are lightweight in order to maintain the key advantage of high performance fibers, namely their high strength-to-weight ratio. This study involves developing three different types of sheathing. The product of interest that needs to be protected from UV is a braid of PBO. The first approach is extruding a sheath of Low Density Polyethylene (LDPE) loaded with different percentages of rutile TiO2 nanoparticles around the PBO braid. The results of this approach showed that the LDPE sheath loaded with 10% TiO2 by weight achieved the highest protection compared to 0% and 5% TiO2, the protection here being judged by the strength loss of the PBO. This trend was observed in different weathering environments, where the sheathed samples were exposed to UV-VIS radiation in different weatherometer equipment as well as to a high altitude environment using a NASA BRDL balloon. The second approach focuses on developing a protective porous membrane of polyurethane loaded with rutile TiO2 nanoparticles. Membrane from polyurethane loaded with 4

  18. Intel Xeon Phi coprocessor high performance programming

    Jeffers, James

    2013-01-01

    Authors Jim Jeffers and James Reinders spent two years helping educate customers about the prototype and pre-production hardware before Intel introduced the first Intel Xeon Phi coprocessor. They have distilled their own experiences coupled with insights from many expert customers, Intel Field Engineers, Application Engineers and Technical Consulting Engineers, to create this authoritative first book on the essentials of programming for this new architecture and these new products. This book is useful even before you ever touch a system with an Intel Xeon Phi coprocessor. To ensure that your applications run at maximum efficiency, the authors emphasize key techniques for programming any modern parallel computing system whether based on Intel Xeon processors, Intel Xeon Phi coprocessors, or other high performance microprocessors. Applying these techniques will generally increase your program performance on any system, and better prepare you for Intel Xeon Phi coprocessors and the Intel MIC architecture. It off...

  19. Development of high-performance blended cements

    Wu, Zichao

    2000-10-01

    This thesis presents the development of high-performance blended cements from industrial by-products. To overcome the low early strength of blended cements, several chemicals were studied as activators for cement hydration. Sodium sulfate was found to be the best activator. The blending proportions were optimized by Taguchi experimental design. The optimized blended cements containing up to 80% fly ash performed better than Type I cement in strength development and durability. Maintaining a constant cement content, concrete produced from the optimized blended cements had equal or higher strength and higher durability than that produced from Type I cement alone. The key to the activation mechanism was the reaction between the added SO4^2- ions and the Ca^2+ dissolved from cement hydration products.

  20. 13 CFR 126.700 - What are the performance of work requirements for HUBZone contracts?

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 What are the performance of work... ADMINISTRATION HUBZONE PROGRAM Contract Performance Requirements § 126.700 What are the performance of work... meet the performance of work requirements set forth in § 125.6(c) of this chapter. (b) In addition to...

  1. Performance requirements of the MedAustron beam delivery system

    AUTHOR|(CDS)2073034

    The Austrian hadron therapy center MedAustron is currently under construction with patient treatment planned to commence in 2015. Tumors will be irradiated using proton and carbon ions, for which the steeply rising Bragg curve and finite range offer a better conformity of the dose to the geometrical shape of the tumor compared to conventional photon irradiation. The current trend is to move from passive scattering toward active scanning using a narrow pencil beam in order to reach an even better dose conformation and limit the need of patient specific hardware. The quality of the deposited dose will ultimately depend on the performance of the beam delivery chain: beam profile and extraction stability of the extracted beam, accuracy and ramp rate of the scanning magnet power supplies, and precision of the beam monitors used for verifying the delivered dose. With a sharp lateral penumbra, the transverse dose fall-off can be minimized. This is of particular importance in situations where the lesion is adjace...

  2. Utilities for high performance dispersion model PHYSIC

    Yamazawa, Hiromi

    1992-09-01

    The description and usage of the utilities for the dispersion calculation model PHYSIC are summarized. The model was developed in the study on developing a high performance SPEEDI, with the purpose of introducing a meteorological forecast function into the environmental emergency response system. The PHYSIC calculation procedure consists of three steps: preparation of the relevant files, creation and submission of the JCL, and graphic output of the results. A user can carry out this procedure with the help of the Geographical Data Processing Utility, the Model Control Utility, and the Graphic Output Utility. (author)

  3. An integrated high performance fastbus slave interface

    Christiansen, J.; Ljuslin, C.

    1992-01-01

    A high performance Fastbus slave interface ASIC is presented. The Fastbus slave integrated circuit (FASIC) is a programmable device, enabling its direct use in many different applications. The FASIC acts as an interface between Fastbus and a 'standard' processor/memory bus. It can work stand-alone or together with a microprocessor. A set of address mapping windows can map Fastbus addresses to convenient memory addresses and at the same time act as address decoding logic. Data rates of 100 MBytes/s to Fastbus can be obtained using an internal FIFO buffer in the FASIC. (orig.)

  4. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    V. N. Adrov

    2012-07-01

    Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent bits as input and produce independent bits as output. This independence comes from the nature of such algorithms, since images, stereopairs or small image blocks can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human intervention. Photogrammetric workstations can perform tie-point measurement, DTM calculation, orthophoto construction, mosaicking and many other service operations in parallel using distributed calculations. Distributed calculations save time, reducing computations that would take several days to several hours. Modern trends in computer technology show an increase in the number of CPU cores in workstations and in local network speed and, as a result, a drop in the price of supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in a DPW is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since huge amounts of large raster images have to be processed.
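
    The embarrassingly parallel pattern described above can be sketched as follows: independent image tiles are dispatched to a pool of worker processes, standing in for tie-point measurement, DTM or orthophoto tiles on a cluster node. The tile size, worker count and per-tile operation are illustrative assumptions, not part of the cited work.

        # Minimal sketch of the embarrassingly parallel pattern described above:
        # independent image tiles are processed by a pool of workers, standing in for
        # tie-point measurement, DTM or orthophoto tiles. Tile size, worker count and
        # the per-tile operation are illustrative assumptions only.
        from multiprocessing import Pool
        import numpy as np

        def process_tile(tile: np.ndarray) -> float:
            """Placeholder per-tile computation (e.g. a matching or filtering step)."""
            return float(tile.mean())

        def split_into_tiles(image: np.ndarray, tile: int):
            h, w = image.shape
            return [image[r:r + tile, c:c + tile]
                    for r in range(0, h, tile)
                    for c in range(0, w, tile)]

        if __name__ == "__main__":
            image = np.random.rand(2048, 2048)        # stand-in for a raster image
            tiles = split_into_tiles(image, 256)
            with Pool(processes=4) as pool:           # scales with available cores/nodes
                results = pool.map(process_tile, tiles)
            print(len(results), "tiles processed")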

  5. High performance visual display for HENP detectors

    McGuigan, M; Spiletic, J; Fine, V; Nevski, P

    2001-01-01

    A high end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detector. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long even on a powerful work station. To visualize the HENP detectors with maximal performance we have developed software with the following characteristics. We develop a visual display of HENP detectors on BNL multiprocessor visualization server at multiple level of detail. We work with general and generic detector framework consistent with ROOT, GAUDI etc, to avoid conflicting with the many graphic development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of detector and events by directly running the analysis in BNL stereoscopic theatre. We construct enhanced interactiv...

  6. High-Performance Vertical Organic Electrochemical Transistors.

    Donahue, Mary J; Williamson, Adam; Strakosas, Xenofon; Friedlein, Jacob T; McLeod, Robert R; Gleskova, Helena; Malliaras, George G

    2018-02-01

    Organic electrochemical transistors (OECTs) are promising transducers for biointerfacing due to their high transconductance, biocompatibility, and availability in a variety of form factors. Most OECTs reported to date, however, utilize rather large channels, limiting the transistor performance and resulting in a low transistor density. This is typically a consequence of limitations associated with traditional fabrication methods and with 2D substrates. Here, the fabrication and characterization of OECTs with vertically stacked contacts, which overcome these limitations, is reported. The resulting vertical transistors exhibit a reduced footprint, increased intrinsic transconductance of up to 57 mS, and a geometry-normalized transconductance of 814 S m -1 . The fabrication process is straightforward and compatible with sensitive organic materials, and allows exceptional control over the transistor channel length. This novel 3D fabrication method is particularly suited for applications where high density is needed, such as in implantable devices. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Discussion on sealing performance required in disposal system. Hydraulic analysis of tunnel intersections

    Sugita, Yutaka; Takahashi, Yoshiaki; Uragami, Manabu; Kitayama, Kazumi; Fujita, Tomoo; Kawakami, Susumu; Yui, Mikazu; Umeki, Hiroyuki; Miyamoto, Yoichi

    2005-09-01

    The sealing performance of a repository must be considered in the safety assessment of the geological disposal system for high-level radioactive waste. NUMO and JNC established the 'Technical Commission on Sealing Technology of Repository' based on their cooperation agreement. The objectives of this commission are to present the concept of the sealing performance required in the disposal system and to develop the direction of the future R and D programme for the design requirements of the closure components (backfilling material, clay plug, etc.) in the presented concept. In the first phase of this commission, the current status of domestic and international sealing technologies was reviewed, and repository components and repository environments were summarized. Subsequently, a hydraulic analysis of tunnel intersections, where a main tunnel and a disposal tunnel in a disposal panel meet, was performed, considering the components in and around the engineered barrier system (EBS). Since all tunnels are connected in the underground facility, understanding the hydraulic behaviour of tunnel intersections is an important issue for estimating the migration of radionuclides from the EBS and evaluating the required sealing performance of the disposal system. The analytical results showed that the direction of the hydraulic gradient, the hydraulic conductivities of the concrete and backfilling materials, and the position of the clay plug have an impact on the flow conditions around the EBS. (author)

  8. New monomers for high performance polymers

    Gratz, Roy F.

    1993-01-01

    This laboratory has been concerned with the development of new polymeric materials with high thermo-oxidative stability for use in the aerospace and electronics industries. Currently, there is special emphasis on developing matrix resins and composites for the high speed civil transport (HSCT) program. This application requires polymers that have service lifetimes of 60,000 hr at 350 F (177 C) and that are readily processible into void-free composites, preferably by melt-flow or powder techniques that avoid the use of high boiling solvents. Recent work has focused on copolymers which have thermally stable imide groups separated by flexible arylene ether linkages, some with trifluoromethyl groups attached to the aromatic rings. The presence of trifluoromethyl groups in monomers and polymers often improves their solubility and processibility. The goal of this research was to synthesize several new monomers containing pendant trifluoromethyl groups and to incorporate these monomers into new imide/arylene ether copolymers. Initially, work was begun on the synthesis of three target compounds. The first two, 3,5-dihydroxybenzo trifluoride and 3-amino 5-hydroxybenzo trifluoride, are intermediates in the synthesis of more complex monomers. The third, 3,5-bis (3-amino-phenoxy) benzotrifluoride, is an interesting diamine that could be incorporated into a polyimide directly.

  9. High Performance Data Distribution for Scientific Community

    Tirado, Juan M.; Higuero, Daniel; Carretero, Jesus

    2010-05-01

    Institutions such as NASA, ESA or JAXA must find solutions to distribute data from their missions to the scientific community and to their long-term archives. This is a complex problem, as it involves a vast amount of data, several geographically distributed archives, heterogeneous architectures with heterogeneous networks, and users spread around the world. We propose a novel architecture (HIDDRA) that addresses this problem, aiming to reduce user intervention in data acquisition and processing. HIDDRA is a modular system that provides a highly efficient parallel multiprotocol download engine, using a publish/subscribe policy which helps the end user obtain data of interest transparently. Our system can deal simultaneously with multiple protocols (HTTP, HTTPS, FTP and GridFTP, among others) to obtain the maximum bandwidth, reducing the workload on the data servers and increasing flexibility. It can also provide high reliability and fault tolerance, as several sources of data can be used to perform a single file download. The HIDDRA architecture can be arranged into a data distribution network deployed on several sites that cooperate to provide the above features. HIDDRA was highlighted by the 2009 e-IRG Report on Data Management as a promising initiative for data interoperability. Our first prototype has been evaluated in collaboration with the ESAC centre in Villafranca del Castillo (Spain), showing high scalability and performance and opening a wide spectrum of opportunities. Some preliminary results have been published in the Journal of Astrophysics and Space Science [1]. [1] D. Higuero, J.M. Tirado, J. Carretero, F. Félix, and A. de La Fuente. HIDDRA: a highly independent data distribution and retrieval architecture for space observation missions. Astrophysics and Space Science, 321(3):169-175, 2009
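
    The multi-source download idea behind such an engine can be sketched as follows (this is not HIDDRA itself): disjoint byte ranges of one file are fetched in parallel from several mirror URLs, so that bandwidth is aggregated and a slow mirror only delays its own chunks. The mirror URLs and chunk size are hypothetical placeholders.

        # Illustrative sketch (not HIDDRA itself): fetch disjoint byte ranges of one file
        # from several mirror URLs in parallel, the basic idea behind aggregating
        # bandwidth across sources and tolerating a slow or failed server.
        # The mirror URLs and chunk size are hypothetical placeholders.
        from concurrent.futures import ThreadPoolExecutor
        import urllib.request

        MIRRORS = ["https://mirror-a.example.org/data.bin",
                   "https://mirror-b.example.org/data.bin"]
        CHUNK = 1 << 20                                # 1 MiB per request

        def fetch_range(args):
            url, start, end = args
            req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
            with urllib.request.urlopen(req) as resp:
                return start, resp.read()

        def parallel_download(size: int) -> bytes:
            """Download 'size' bytes, alternating chunks across the mirrors."""
            ranges = [(MIRRORS[i % len(MIRRORS)], off, min(off + CHUNK, size) - 1)
                      for i, off in enumerate(range(0, size, CHUNK))]
            buf = bytearray(size)
            with ThreadPoolExecutor(max_workers=2 * len(MIRRORS)) as pool:
                for start, data in pool.map(fetch_range, ranges):
                    buf[start:start + len(data)] = data
            return bytes(buf)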

  10. Performance of Superconducting Cavities as Required for the SPL

    Weingarten, Wolfgang

    2008-01-01

    This document outlines an optimisation analysis for the RF cavities of the planned Superconducting Proton Linac (SPL) at CERN with regard to the operating frequency and temperature. The analysis is based on a phenomenological assessment of the field-dependent Q-value, as taken from published test results for RF cavities of various proveniences. It turns out that the design Q-value of 1·10^10 at an accelerating gradient of 25 MV/m (β = 1 cavity) at 704 (1408) MHz is attainable at 1.9 (1.6) K, respectively, however, with the present state-of-the-art manufacturing, at the expense of some reprocessing. The optimum of the total electrical grid power consumption (composed of RF and cryogenics) is estimated as a function of frequency and operating temperature for both the low and high power SPL.

  11. Transport in JET high performance plasmas

    2001-01-01

    Two types of high performance scenario have been produced in JET during the DTE1 campaign. One of them is the well known ELM-free hot ion H-mode scenario, extensively used in the past, which has two distinct regions: the plasma core and the edge transport barrier. The results obtained during the DTE1 campaign with D, DT and pure T plasmas confirm our previous conclusion that the core transport scales as gyroBohm in the inner half of the plasma volume, recovers its Bohm nature closer to the separatrix, and behaves as ion neoclassical in the transport barrier. Measurements on the top of the barrier suggest that the width of the barrier depends on the isotope and, moreover, suggest that fast ions play a key role. The other high performance scenario is the relatively recently developed Optimised Shear Scenario, with small or slightly negative magnetic shear in the plasma core. Different mechanisms of Internal Transport Barrier (ITB) formation have been tested by predictive modelling and the results are compared with experimentally observed phenomena. The experimentally observed non-penetration of heavy impurities through the strong ITB, which contradicts the prediction of conventional neoclassical theory, is discussed. (author)

  12. High-performance computing for airborne applications

    Quinn, Heather M.; Manuzatto, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-01-01

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  13. Transport in JET high performance plasmas

    1999-01-01

    Two types of high performance scenario have been produced in JET during the DTE1 campaign. One of them is the well known ELM-free hot ion H-mode scenario, extensively used in the past, which has two distinct regions: the plasma core and the edge transport barrier. The results obtained during the DTE1 campaign with D, DT and pure T plasmas confirm our previous conclusion that the core transport scales as gyroBohm in the inner half of the plasma volume, recovers its Bohm nature closer to the separatrix, and behaves as ion neoclassical in the transport barrier. Measurements on the top of the barrier suggest that the width of the barrier depends on the isotope and, moreover, suggest that fast ions play a key role. The other high performance scenario is the relatively recently developed Optimised Shear Scenario, with small or slightly negative magnetic shear in the plasma core. Different mechanisms of Internal Transport Barrier (ITB) formation have been tested by predictive modelling and the results are compared with experimentally observed phenomena. The experimentally observed non-penetration of heavy impurities through the strong ITB, which contradicts the prediction of conventional neoclassical theory, is discussed. (author)

  14. Performance of the CMS High Level Trigger

    Perrotta, Andrea

    2015-01-01

    The CMS experiment has been designed with a 2-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increases in center-of-mass energy and luminosity will raise the event rate to a level challenging for the HLT algorithms. The increase in the number of interactions per bunch crossing, on average 25 in 2012, and expected to be around 40 in Run II, will be an additional complication. We present here the expected performance of the main triggers that will be used during the 2015 data taking campaign, paying particular attention to the new approaches that have been developed to cope with the challenges of the new run. This includes improvements in HLT electron and photon reconstruction as well as better performing muon triggers. We will also present the performance of the improved trac...

  15. Development of a High Performance Spacer Grid

    Song, Kee Nam; Song, K. N.; Yoon, K. H. (and others)

    2007-03-15

    A spacer grid in a LWR fuel assembly is a key structural component to support fuel rods and to enhance the heat transfer from the fuel rod to the coolant. In this research, the main research items are the development of inherent and high performance spacer grid shapes, the establishment of mechanical/structural analysis and test technology, and the set-up of basic test facilities for the spacer grid. The main research areas and results are as follows. 1. 18 different spacer grid candidates have been invented and applied for domestic and US patents. Among the candidates 16 are chosen from the patent. 2. Two kinds of spacer grids are finally selected for the advanced LWR fuel after detailed performance tests on the candidates and commercial spacer grids from a mechanical/structural point of view. According to the test results the features of the selected spacer grids are better than those of the commercial spacer grids. 3. Four kinds of basic test facilities are set up and the relevant test technologies are established. 4. Mechanical/structural analysis models and technology for spacer grid performance are developed and the analysis results are compared with the test results to enhance the reliability of the models.

  16. Low cost high performance uncertainty quantification

    Bekas, C.

    2009-01-01

    Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques, which employ matrix factorizations, incur a cubic cost that quickly becomes intractable with the current explosion of data sizes. In this work we reduce this complexity to quadratic through the synergy of two algorithms that gracefully complement each other and lead to a radically different approach. First, we turn to stochastic estimation of the diagonal. This allows us to cast the problem as a linear system with a relatively small number of multiple right-hand sides. Second, for this linear system we develop a novel, mixed-precision, iterative refinement scheme, which uses iterative solvers instead of matrix factorizations. We demonstrate that the new framework not only achieves the much needed quadratic cost but in addition offers excellent opportunities for scaling in massively parallel environments. We based our implementation on BLAS 3 kernels that ensure very high processor performance. We achieved a peak performance of 730 TFlops on 72 BG/P racks, with a sustained performance of 73% of the theoretical peak. We stress that the techniques presented in this work are quite general and applicable to several other important applications. Copyright © 2009 ACM.
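
    The stochastic diagonal estimation step mentioned above can be illustrated with a generic textbook scheme (not the authors' mixed-precision BLAS-3 implementation): the diagonal of A^-1 is estimated from Rademacher probing vectors, each requiring one iterative solve with A instead of a factorization.

        # Generic textbook sketch of stochastic diagonal estimation (not the authors'
        # mixed-precision BLAS-3 implementation): diag(A^-1) is estimated from
        # Rademacher probe vectors v_k as sum_k v_k*(A^-1 v_k) / sum_k v_k*v_k
        # (element-wise), with each solve done iteratively instead of by factorization.
        import numpy as np
        from scipy.sparse.linalg import cg

        def estimate_inverse_diagonal(A, num_probes: int = 64, seed: int = 0):
            rng = np.random.default_rng(seed)
            n = A.shape[0]
            num = np.zeros(n)
            den = np.zeros(n)
            for _ in range(num_probes):
                v = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
                x, info = cg(A, v)                    # iterative solve of A x = v
                num += v * x
                den += v * v
            return num / den                          # element-wise estimate of diag(A^-1)

        if __name__ == "__main__":
            n = 200
            B = np.random.default_rng(1).standard_normal((n, n))
            A = B @ B.T + n * np.eye(n)               # SPD test matrix (covariance-like)
            est = estimate_inverse_diagonal(A, num_probes=200)
            exact = np.diag(np.linalg.inv(A))
            print("max abs error of the estimate:", float(np.max(np.abs(est - exact))))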

  17. Energy Efficient Graphene Based High Performance Capacitors.

    Bae, Joonwon; Kwon, Oh Seok; Lee, Chang-Soo

    2017-07-10

    Graphene (GRP) is an interesting class of nano-structured electronic materials for various cutting-edge applications. To date, extensive research has been performed on the investigation of the diverse properties of GRP. The incorporation of this elegant material can be very beneficial in terms of practical applications in energy storage/conversion systems. Among those various systems, high performance electrochemical capacitors (ECs) have become popular due to the recent need for energy-efficient and portable devices. Therefore, in this article, the application of GRP to capacitors is described succinctly. In particular, a summary of previous research activities on GRP-based capacitors is also covered extensively. It was revealed that many secondary materials, such as polymers and metal oxides, have been introduced to improve the performance. Also, diverse devices have been combined with capacitors for better use. More importantly, recent patents related to the preparation and application of GRP-based capacitors are also introduced briefly. This article can provide essential information for future study. Copyright© Bentham Science Publishers.

  18. SISYPHUS: A high performance seismic inversion factory

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In recent years, massively parallel high performance computers have become the standard instruments for solving forward and inverse problems in seismology. The respective software packages dedicated to forward and inverse waveform modelling, specially designed for such computers (SPECFEM3D, SES3D), have become mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve problems of bigger size at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset the performance benefits provided by even the most powerful modern supercomputers. Furthermore, the typical system architecture of modern supercomputing platforms is oriented towards maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for modern massively parallel high performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with

  19. Common display performance requirements for military and commercial aircraft product lines

    Hoener, Steven J.; Behrens, Arthur J.; Flint, John R.; Jacobsen, Alan R.

    2001-09-01

    Obtaining high quality Active Matrix Liquid Crystal Display (AMLCD) glass to meet the needs of the commercial and military aerospace business is a major challenge, at best. With the demise of all domestic sources of AMLCD substrate glass, the industry is now focused on overseas sources, which are primarily producing glass for consumer electronics. Previous experience with ruggedizing commercial glass leads to the expectation that the aerospace industry can leverage off the commercial market. The problem remains that, while the commercial industry is continually changing and improving its products, the commercial and military aerospace industries require stable and affordable supplies of AMLCD glass for upwards of 20 years to support production and maintenance operations. The Boeing Engineering and Supplier Management Process Councils have chartered a group of displays experts from multiple aircraft product divisions within the Boeing Company, the Displays Process Action Team (DPAT), to address this situation from an overall corporate perspective. The DPAT has formulated a set of Common Displays Performance Requirements for use across the corporate line of commercial and military aircraft products. Though focused on the AMLCD problem, the proposed common requirements are largely independent of display technology. This paper describes the strategy being pursued within the Boeing Company to address the AMLCD supply problem and details the proposed implementation process, centered on common requirements for both commercial and military aircraft displays. Highlighted in this paper are proposed common, or standard, display sizes and the other major requirements established by the DPAT, along with the rationale for these requirements.

  20. JT-60U high performance regimes

    Ishida, S.

    1999-01-01

    High performance regimes of JT-60U plasmas are presented with an emphasis upon the results from the use of a semi-closed pumped divertor with W-shaped geometry. Plasma performance in transient and quasi-steady states has been significantly improved in the reversed shear and high-βp regimes. The reversed shear regime transiently elevated the equivalent Q_DT^eq up to 1.25 (n_D(0)·τ_E·T_i(0) = 8.6x10^20 m^-3·s·keV) in a reactor-relevant, thermonuclear-dominant regime. Long sustainment of enhanced confinement with internal transport barriers (ITBs) and a fully non-inductive current drive in a reversed shear discharge was successfully demonstrated with LH wave injection. Performance sustainment has been extended in the high-βp regime with a high triangularity, achieving long sustainment of plasma conditions equivalent to Q_DT^eq ~ 0.16 (n_D(0)·τ_E·T_i(0) ~ 1.4x10^20 m^-3·s·keV) for ~4.5 s with a large non-inductive current drive fraction of 60-70% of the plasma current. Thermal and particle transport analyses show a significant reduction of the thermal and particle diffusivities around the ITB, resulting in a strong Er shear in the ITB region. The W-shaped divertor is effective for He ash exhaust, demonstrating a steady exhaust capability of τ_He*/τ_E ~ 3-10 in support of ITER. Suppression of neutral back flow and of the chemical sputtering effect has been observed, while the MARFE onset density is rather decreased. Negative-ion based neutral beam injection (N-NBI) experiments have produced a clear H-mode transition. An enhanced ionization cross-section due to multi-step ionization processes was confirmed, as theoretically predicted. The current density profile driven by N-NBI is measured in good agreement with theoretical prediction. N-NBI induced TAE modes, characterized as persistent and bursting oscillations, have been observed from a low hot beta of β_h >~ 0.1-0.2% without a significant loss of fast ions. (author)

  1. Optical interconnection networks for high-performance computing systems

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  2. High-performance phase-field modeling

    Vignal, Philippe

    2015-04-27

    Many processes in engineering and the sciences involve the evolution of interfaces. Among the mathematical frameworks developed to model these types of problems, the phase-field method has emerged as a possible solution. Phase-fields nonetheless lead to complex, nonlinear, high-order partial differential equations, whose solution poses mathematical and computational challenges. Guaranteeing some of the physical properties of the equations has led to the development of efficient algorithms and discretizations capable of recovering said properties by construction [2, 5]. This work builds on these ideas and proposes novel discretization strategies that guarantee numerical energy dissipation for both conserved and non-conserved phase-field models. The temporal discretization is based on a novel method which relies on Taylor series and ensures strong energy stability. It is second-order accurate, and can also be rendered linear to speed up the solution process [4]. The spatial discretization relies on Isogeometric Analysis, a finite element method that possesses the k-refinement technology and enables the generation of high-order, high-continuity basis functions. These basis functions are well suited to handle the high-order operators present in phase-field models. Two-dimensional and three-dimensional results for the Allen-Cahn, Cahn-Hilliard, Swift-Hohenberg and phase-field crystal equations will be presented, which corroborate the theoretical findings and illustrate the robustness of the method. Results for more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.
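
    As a concrete example of the class of equations involved, the (non-conserved) Allen-Cahn model can be written as the L2 gradient flow of a Ginzburg-Landau free energy, and the energy-dissipation property that such discretizations are built to preserve is the last inequality below. This is the standard form of the model, not transcribed from the thesis.

        % Allen-Cahn model as the L^2 gradient flow of a Ginzburg--Landau free energy;
        % the last inequality is the energy-dissipation property preserved by construction.
        \[
          \mathcal{E}(\phi) \;=\; \int_{\Omega} \Big( \tfrac{1}{4}\big(\phi^{2}-1\big)^{2}
              \;+\; \tfrac{\epsilon^{2}}{2}\,\lvert\nabla\phi\rvert^{2} \Big)\,\mathrm{d}\Omega ,
          \qquad
          \frac{\partial \phi}{\partial t}
              \;=\; -\frac{\delta \mathcal{E}}{\delta \phi}
              \;=\; \phi - \phi^{3} + \epsilon^{2}\,\Delta\phi ,
          \qquad
          \frac{\mathrm{d}}{\mathrm{d}t}\,\mathcal{E}\big(\phi(t)\big) \;\le\; 0 .
        \]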

  3. Environmentally friendly, high-performance generation

    Kalmari, A.

    2003-01-01

    The project developer, owner, and operator of the new 45 MWth BFB-based cogeneration plant in Iisalmi is Termia Oy, part of the Atro Group (formerly Savon Voima Oy). The plant, fired on peat and wood waste and handed over to the customer in November 2002, sells its electrical output to the parent company and its heat locally to customers in Iisalmi. When the construction decision was made, one of the main objectives was to utilise as high a share of indigenous fuels (peat and biomass) as possible, at a high level of efficiency. An environmental impact analysis was carried out, taking into account the impact of the various fuels and emissions in terms of combustion and logistics. One main benefit of the type of plant ultimately selected was that the bulk of the fuel can be supplied from the surrounding area. This is very important in terms of fuel supply security and local employment. The government provided a EUR 2.7 million grant for the project, equivalent to 13% of the total EUR 21 million investment budget. Before the plant was built, Termia used approximately 95 GWh of indigenous fuels annually. Today, this figure is 220 GWh. The main fuel used is milled peat. Up to 30% green chips from logging residues can be used. Recycled waste fuel can cover up to 3% of the total fuel requirement

  4. Liquid Argon Calorimeter performance at High Rates

    Seifert, F; The ATLAS collaboration

    2013-01-01

    The expected increase in luminosity at the HL-LHC by a factor of ten with respect to LHC luminosities has serious consequences for the signal reconstruction, radiation hardness requirements and operation of the ATLAS liquid argon calorimeters in the endcap and forward regions. Small modules of each type of calorimeter have been built and exposed to a high intensity 50 GeV proton beam at IHEP/Protvino. The beam is extracted via the bent crystal technique, offering the unique opportunity to cover intensities ranging from 10^6 p/s up to 3x10^11 p/s. This exceeds the energy deposited per unit time expected at the HL-LHC by more than a factor of 100. The correlation between beam intensity and the read-out signal has been studied. The data show clear indications of pulse shape distortion due to the high ionization build-up, in agreement with MC expectations. This is also confirmed by the dependence of the HV currents on the beam intensity.

  5. High performance visual display for HENP detectors

    McGuigan, Michael; Smith, Gordon; Spiletic, John; Fine, Valeri; Nevski, Pavel

    2001-01-01

    A high end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detector. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long even on a powerful work station. To visualize the HENP detectors with maximal performance we have developed software with the following characteristics. We develop a visual display of HENP detectors on BNL multiprocessor visualization server at multiple level of detail. We work with general and generic detector framework consistent with ROOT, GAUDI etc, to avoid conflicting with the many graphic development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of detector and events by directly running the analysis in BNL stereoscopic theatre. We construct enhanced interactive control, including the ability to slice, search and mark areas of the detector. We incorporate the ability to make a high quality still image of a view of the detector and the ability to generate animations and a fly through of the detector and output these to MPEG or VRML models. We develop data compression hardware and software so that remote interactive visualization will be possible among dispersed collaborators. We obtain real time visual display for events accumulated during simulations

  6. Low-Cost High-Performance MRI

    Sarracanie, Mathieu; Lapierre, Cristen D.; Salameh, Najat; Waddington, David E. J.; Witzel, Thomas; Rosen, Matthew S.

    2015-10-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet, to overcome the low sensitivity inherent in the inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5-3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm3 imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI set new standards for affordable (<$50,000) and robust portable devices.

  7. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  8. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  9. Highly automated driving, secondary task performance, and driver state.

    Merat, Natasha; Jamson, A Hamish; Lai, Frank C H; Carsten, Oliver

    2012-10-01

    A driving simulator study compared the effect of changes in workload on performance in manual and highly automated driving. Changes in driver state were also observed by examining variations in blink patterns. With the addition of a greater number of advanced driver assistance systems in vehicles, the driver's role is likely to alter in the future from an operator in manual driving to a supervisor of highly automated cars. Understanding the implications of such advancements on drivers and road safety is important. A total of 50 participants were recruited for this study and drove the simulator in both manual and highly automated mode. As well as comparing the effect of adjustments in driving-related workload on performance, the effect of a secondary Twenty Questions Task was also investigated. In the absence of the secondary task, drivers' response to critical incidents was similar in manual and highly automated driving conditions. The worst performance was observed when drivers were required to regain control of driving in the automated mode while distracted by the secondary task. Blink frequency patterns were more consistent for manual than automated driving but were generally suppressed during conditions of high workload. Highly automated driving did not have a deleterious effect on driver performance, when attention was not diverted to the distracting secondary task. As the number of systems implemented in cars increases, an understanding of the implications of such automation on drivers' situation awareness, workload, and ability to remain engaged with the driving task is important.

  10. Thermal interface pastes nanostructured for high performance

    Lin, Chuangang

    Thermal interface materials in the form of pastes are needed to improve thermal contacts, such as that between a microprocessor and a heat sink of a computer. High-performance and low-cost thermal pastes have been developed in this dissertation by using polyol esters as the vehicle and various nanoscale solid components. The proportion of a solid component needs to be optimized, as an excessive amount degrades the performance, due to the increase in the bond line thickness. The optimum solid volume fraction tends to be lower when the mating surfaces are smoother, and higher when the thermal conductivity is higher. Both a low bond line thickness and a high thermal conductivity help the performance. When the surfaces are smooth, a low bond line thickness can be even more important than a high thermal conductivity, as shown by the outstanding performance of the nanoclay paste of low thermal conductivity in the smooth case (0.009 μm), with the bond line thickness less than 1 μm, as enabled by low storage modulus G', low loss modulus G'' and high tan delta. However, for rough surfaces, the thermal conductivity is important. The rheology affects the bond line thickness, but it does not correlate well with the performance. This study found that the structure of carbon black is an important parameter that governs the effectiveness of a carbon black for use in a thermal paste. By using a carbon black with a lower structure (i.e., a lower DBP value), a thermal paste that is more effective than the previously reported carbon black paste was obtained. Graphite nanoplatelet (GNP) was found to be comparable in effectiveness to carbon black (CB) pastes for rough surfaces, but it is less effective for smooth surfaces. At the same filler volume fraction, GNP gives higher thermal conductivity than carbon black paste. At the same pressure, GNP gives higher bond line thickness than CB (Tokai or Cabot). The effectiveness of GNP is limited, due to the high bond line thickness. A

  11. Combining high productivity with high performance on commodity hardware

    Skovhede, Kenneth

    -like compiler for translating CIL bytecode on the CELL-BE. I then introduce a bytecode converter that transforms simple loops in Java bytecode to GPGPU capable code. I then introduce the numeric library for the Common Intermediate Language, NumCIL. I then utilize the vector programming model from NumCIL and map this to the Bohrium framework. The result is a complete system that gives the user a choice of high-level languages with no explicit parallelism, yet seamlessly performs efficient execution on a number of hardware setups....

  12. Integrating advanced facades into high performance buildings

    Selkowitz, Stephen E.

    2001-01-01

    Glass is a remarkable material but its functionality is significantly enhanced when it is processed or altered to provide added intrinsic capabilities. The overall performance of glass elements in a building can be further enhanced when they are designed to be part of a complete facade system. Finally, the facade system delivers the greatest performance to the building owner and occupants when it becomes an essential element of a fully integrated building design. This presentation examines the growing interest in incorporating advanced glazing elements into more comprehensive facade and building systems in a manner that increases comfort, productivity and amenity for occupants, reduces operating costs for building owners, and contributes to improving the health of the planet by reducing overall energy use and negative environmental impacts. We explore the role of glazing systems in dynamic and responsive facades that provide the following functionality: Enhanced sun protection and cooling load control while improving thermal comfort and providing most of the light needed with daylighting; Enhanced air quality and reduced cooling loads using natural ventilation schemes employing the facade as an active air control element; Reduced operating costs by minimizing lighting, cooling and heating energy use by optimizing the daylighting-thermal tradeoffs; Net positive contributions to the energy balance of the building using integrated photovoltaic systems; Improved indoor environments leading to enhanced occupant health, comfort and performance. In addressing these issues facade system solutions must, of course, respect the constraints of latitude, location, solar orientation, acoustics, earthquake and fire safety, etc. Since climate and occupant needs are dynamic variables, in a high performance building the facade solution must have the capacity to respond and adapt to these variable exterior conditions and to changing occupant needs. This responsive performance capability

  13. High performance magnet power supply optimization

    Jackson, L.T.

    1988-01-01

    The power supply system for the joint LBL-SLAC proposed accelerator PEP provides the opportunity to take a fresh look at the current techniques employed for controlling large amounts of dc power and the possibility of using a new one. A basic requirement of ±100 ppm regulation is placed on the guide field of the bending magnets and quadrupoles placed around the 2200 meter circumference of the accelerator. The optimization questions to be answered by this paper are threefold: Can a firing circuit be designed to reduce the combined effects of the harmonics and line voltage unbalance to less than 100 ppm in the magnet field? Given the ambiguity of the previous statement, is the addition of a transistor bank to a nominal SCR controlled system the way to go, or should one opt for an SCR chopper system running at 1 kHz where multiple supplies are fed from one large dc bus, and the cost-performance evaluation of the three possible systems

  14. High performance nano-composite technology development

    Kim, Whung Whoe; Rhee, C. K.; Kim, S. J.; Park, S. D. [KAERI, Taejon (Korea, Republic of); Kim, E. K.; Jung, S. Y.; Ryu, H. J. [KRICT, Taejon (Korea, Republic of); Hwang, S. S.; Kim, J. K.; Hong, S. M. [KIST, Taejon (Korea, Republic of); Chea, Y. B. [KIGAM, Taejon (Korea, Republic of); Choi, C. H.; Kim, S. D. [ATS, Taejon (Korea, Republic of); Cho, B. G.; Lee, S. H. [HGREC, Taejon (Korea, Republic of)

    1999-06-15

    New material development is being directed not only at high performance but also at environmental friendliness. Nano-composite materials in particular, which enhance the functional properties of components and extend component life, thereby reducing waste and environmental contamination, have a great effect on various industrial areas. The applications of nano-composites, depending on the polymer matrix and filler materials, range from semiconductors to the medical field. In spite of these merits, nano-composite studies are confined to a few special materials at laboratory scale because several technical difficulties remain unresolved. Therefore, the purpose of this study is to establish systematic planning for carrying out next-generation projects, in order to compete with other countries and overcome the protective policies of advanced countries, by grasping overseas development trends and our present status. (author).

  15. How to create high-performing teams.

    Lam, Samuel M

    2010-02-01

    This article is intended to discuss inspirational aspects on how to lead a high-performance team. Cogent topics discussed include how to hire staff through methods of "topgrading" with reference to Geoff Smart and "getting the right people on the bus" referencing Jim Collins' work. In addition, once the staff is hired, this article covers how to separate the "eagles from the ducks" and how to inspire one's staff by creating the right culture with suggestions for further reading by Don Miguel Ruiz (The four agreements) and John Maxwell (21 Irrefutable laws of leadership). In addition, Simon Sinek's concept of "Start with Why" is elaborated to help a leader know what the core element should be with any superior culture.

  18. High Performance with Prescriptive Optimization and Debugging

    Jensen, Nicklas Bo

    parallelization and automatic vectorization is attractive as it transparently optimizes programs. The thesis contributes an improved dependence analysis for explicitly parallel programs. These improvements lead to more loops being vectorized; on average we achieve a speedup of 1.46 over the existing dependence analysis and vectorizer in GCC. Automatic optimizations often fail for theoretical and practical reasons. When they fail we argue that a hybrid approach can be effective. Using compiler feedback, we propose to use the programmer’s intuition and insight to achieve high performance. Compiler feedback enlightens the programmer why a given optimization was not applied, and suggests how to change the source code to make it more amenable to optimizations. We show how this can yield significant speedups and achieve 2.4× faster execution on a real industrial use case. To aid in parallel debugging we propose...

  19. 14 CFR 151.53 - Performance of construction work: Labor requirements.

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Performance of construction work: Labor... § 151.53 Performance of construction work: Labor requirements. A sponsor who is required to include in a... during the performance of work under the contract, to the extent necessary to determine whether the...

  20. 14 CFR 171.321 - DME and marker beacon performance requirements.

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false DME and marker beacon performance... (MLS) § 171.321 DME and marker beacon performance requirements. (a) The DME equipment must meet the..._regulations/ibr_locations.html. (b) MLS marker beacon equipment must meet the performance requirements...

  1. Optimizing High Performance Self Compacting Concrete

    Raymond A Yonathan

    2017-01-01

    This paper’s objectives are to determine the effects of glass powder, silica fume, polycarboxylate ether, and gravel, and to optimize the composition of each factor in making high performance SCC. The Taguchi method is proposed as the best solution to minimize the number of specimen variations, which would otherwise exceed 80. Taguchi data analysis is applied to provide the composition, the optimization, and the effect of the contributing materials for nine specimen variations. The concrete's workability was analyzed using the slump flow, V-funnel, and L-box tests. Compressive strength and porosity tests were performed on the hardened state. Cylindrical specimens of 100×200 mm were cast for compressive testing at ages of 3, 7, 14, 21, and 28 days. The porosity test was conducted at 28 days. The results reveal that silica fume contributes greatly to slump flow and porosity, while coarse aggregate is the greatest contributing factor to the L-box and compressive tests. However, all factors show unclear results for the V-funnel test.

  2. A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations

    Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel; Buluc, Aydin; Shao, Meiyue

    2017-01-01

    As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMMᵀ by using the compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.
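    As an illustration of the block-eigensolver structure described above, the sketch below uses SciPy's LOBPCG on a random sparse symmetric matrix; the matrix, block size and tolerances are illustrative stand-ins rather than the nuclear CI problem or the authors' optimized kernels, but the dominant per-iteration cost is the same sparse-matrix times multiple-vector product (SpMM).

```python
# Minimal block-eigensolver sketch: extreme eigenpairs of a sparse symmetric
# matrix via LOBPCG, whose main kernel is the SpMM product A @ X.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

n, nev = 10_000, 8                      # matrix size and number of eigenpairs (toy values)
A = sp.random(n, n, density=1e-3, format="csr", random_state=0)
A = 0.5 * (A + A.T) + sp.diags(np.arange(1, n + 1, dtype=float))  # symmetrize, well-conditioned

X = np.random.default_rng(0).standard_normal((n, nev))  # block of starting vectors
# Each LOBPCG iteration applies A to the whole block at once (SpMM),
# plus tall-skinny inner products and linear combinations on X.
eigvals, eigvecs = lobpcg(A, X, largest=False, tol=1e-6, maxiter=200)
print(eigvals)
```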

  3. Mediaprocessors in medical imaging for high performance and flexibility

    Managuli, Ravi; Kim, Yongmin

    2002-05-01

    New high performance programmable processors, called mediaprocessors, have been emerging since the early 1990s for various digital media applications, such as digital TV, set-top boxes, desktop video conferencing, and digital camcorders. Modern mediaprocessors, e.g., TI's TMS320C64x and Hitachi/Equator Technologies MAP-CA, can offer high performance utilizing both instruction-level and data-level parallelism. During this decade, with continued performance improvement and cost reduction, we believe that the mediaprocessors will become a preferred choice in designing imaging and video systems due to their flexibility in incorporating new algorithms and applications via programming and faster-time-to-market. In this paper, we will evaluate the suitability of these mediaprocessors in medical imaging. We will review the core routines of several medical imaging modalities, such as ultrasound and DR, and present how these routines can be mapped to mediaprocessors and their resultant performance. We will analyze the architecture of several leading mediaprocessors. By carefully mapping key imaging routines, such as 2D convolution, unsharp masking, and 2D FFT, to the mediaprocessor, we have been able to achieve comparable (if not better) performance to that of traditional hardwired approaches. Thus, we believe that future medical imaging systems will benefit greatly from these advanced mediaprocessors, offering significantly increased flexibility and adaptability, reducing the time-to-market, and improving the cost/performance ratio compared to the existing systems while meeting the high computing requirements.
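    As a concrete example of one of the imaging kernels named above, the following sketch implements unsharp masking with NumPy/SciPy; it only shows the arithmetic that a mediaprocessor would map onto its parallel units, and the image size, sigma and amount parameters are illustrative assumptions.

```python
# Unsharp masking: sharpen an image by adding back a scaled high-pass component.
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image: np.ndarray, sigma: float = 2.0, amount: float = 1.5) -> np.ndarray:
    blurred = gaussian_filter(image, sigma=sigma)   # low-pass estimate
    highpass = image - blurred                      # detail (high-frequency) component
    return image + amount * highpass

frame = np.random.default_rng(0).random((512, 512))  # stand-in for an ultrasound/DR frame
sharpened = unsharp_mask(frame)
```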

  4. High Performance Circularly Polarized Microstrip Antenna

    Bondyopadhyay, Probir K. (Inventor)

    1997-01-01

    A microstrip antenna for radiating circularly polarized electromagnetic waves comprising a cluster array of at least four microstrip radiator elements, each of which is provided with dual orthogonal coplanar feeds in phase quadrature relation achieved by connection to an asymmetric T-junction power divider impedance notched at resonance. The dual fed circularly polarized reference element is positioned with its axis at a 45 deg angle with respect to the unit cell axis. The other three dual fed elements in the unit cell are positioned and fed with a coplanar feed structure with sequential rotation and phasing to enhance the axial ratio and impedance matching performance over a wide bandwidth. The centers of the radiator elements are disposed at the corners of a square with each side of a length d in the range of 0.7 to 0.9 times the free space wavelength of the antenna radiation and the radiator elements reside in a square unit cell area of sides equal to 2d and thereby permit the array to be used as a phased array antenna for electronic scanning and is realizable in a high temperature superconducting thin film material for high efficiency.

  5. Micromachined high-performance RF passives in CMOS substrate

    Li, Xinxin; Ni, Zao; Gu, Lei; Wu, Zhengzheng; Yang, Chen

    2016-01-01

    This review systematically addresses the micromachining technologies used for the fabrication of high-performance radio-frequency (RF) passives that can be integrated into low-cost complementary metal-oxide semiconductor (CMOS)-grade (i.e. low-resistivity) silicon wafers. With the development of various kinds of post-CMOS-compatible microelectromechanical systems (MEMS) processes, 3D structural inductors/transformers, variable capacitors, tunable resonators and band-pass/low-pass filters can be compatibly integrated into active integrated circuits to form monolithic RF system-on-chips. By using MEMS processes, including substrate modifying/suspending and LIGA-like metal electroplating, both the highly lossy substrate effect and the resistive loss can be largely eliminated and depressed, thereby meeting the high-performance requirements of telecommunication applications. (topical review)

  6. High Power Flex-Propellant Arcjet Performance

    Litchford, Ron J.

    2011-01-01

    implied nearly frozen flow in the nozzle and yielded performance ranges of 800-1100 sec for hydrogen and 400-600 sec for ammonia. Inferred thrust-to-power ratios were in the range of 30-10 lbf/MWe for hydrogen and 60-20 lbf/MWe for ammonia. Successful completion of this test series represents a fundamental milestone in the progression of high power arcjet technology, and it is hoped that the results may serve as a reliable touchstone for the future development of MW-class regeneratively-cooled flex-propellant plasma rockets.

  7. Silicon Photomultiplier Performance in High Electric Field

    Montoya, J.; Morad, J.

    2016-12-01

    Roughly 27% of the universe is thought to be composed of dark matter. The Large Underground Xenon (LUX) experiment relies on the emission of light from xenon atoms after a collision with a dark matter particle. After a particle interaction in the detector, two things can happen: the xenon will emit light and charge. The charge (electrons) in the liquid xenon needs to be pulled into the gas section so that it can interact with the gas and emit light. This allows LUX to convert a single electron into many photons. This is done by applying a high voltage across the liquid and gas regions, effectively ripping electrons out of the liquid xenon and into the gas. The current device used to detect photons is the photomultiplier tube (PMT). These devices are large and costly. In recent years, a new technology that is capable of detecting single photons has emerged, the silicon photomultiplier (SiPM). These devices are cheaper and smaller than PMTs. Their performance in high electric fields, such as those found in LUX, is unknown. It is possible that a large electric field could introduce noise on the SiPM signal, drowning the single photon detection capability. My hypothesis is that SiPMs will not observe a significant increase in noise at an electric field of roughly 10 kV/cm (an electric field within the range used in detectors like LUX). I plan to test this hypothesis by first rotating the SiPMs with no applied electric field between two metal plates roughly 2 cm apart, providing a control data set. Then, using the same angles, I will test the dark counts with a constant electric field applied. Possibly the most important aspect of LUX is the photon detector, because it is what detects the signals. Dark matter is detected in the experiment by looking at the ratio of photons to electrons emitted for a given interaction in the detector. Interactions with a low electron to photon ratio are more likely to be dark matter events than those with a high electron to photon ratio. The ability to

  8. High performance parallel backprojection on FPGA

    Pfanner, Florian; Knaup, Michael; Kachelriess, Marc [Erlangen-Nuernberg Univ., Erlangen (Germany). Inst. of Medical Physics (IMP)

    2011-07-01

    Reconstruction of tomographic images, i.e., images from a Computed Tomography scanner, is a very time consuming task. Most of the computational power is needed for the backprojection step. A closer inspection shows that the algorithm for backprojection is easy to parallelize. FPGAs are able to execute many operations at the same time, so a highly parallel algorithm is a requirement for a powerful acceleration. To maximize the data flow rate, we realized the backprojection in a pipelined structure with a data throughput of one clock cycle. Due to the hardware limitations of the FPGA, it is not possible to reconstruct the image as a whole, so it is necessary to split up the image and reconstruct the parts separately. Despite that, a reconstruction of 512 projections into a 512×512 image is calculated within 13 ms on a Virtex 5 FPGA. To save hardware resources we use fixed-point arithmetic with an accuracy of 23 bits for the calculation. A comparison of the resulting image with an image calculated with floating-point arithmetic on a CPU shows that there are no differences between these images. (orig.)
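    For readers unfamiliar with the backprojection step, the following NumPy sketch shows a pixel-driven parallel-beam backprojection loop in floating point; it is not the FPGA implementation (which uses a 23-bit fixed-point pipeline), it omits filtering, and the geometry and array sizes are illustrative assumptions.

```python
# Unfiltered parallel-beam backprojection: smear each projection back across
# the image grid along its view angle, with linear interpolation between bins.
import numpy as np

def backproject(sinogram: np.ndarray, angles: np.ndarray, size: int) -> np.ndarray:
    """sinogram: (n_angles, n_detectors); angles in radians."""
    n_det = sinogram.shape[1]
    xs = np.arange(size) - size / 2.0
    X, Y = np.meshgrid(xs, xs)
    image = np.zeros((size, size))
    for proj, theta in zip(sinogram, angles):
        # detector coordinate hit by each pixel for this view
        t = X * np.cos(theta) + Y * np.sin(theta) + n_det / 2.0
        t = np.clip(t, 0, n_det - 2)
        i0 = t.astype(int)
        w = t - i0
        image += (1.0 - w) * proj[i0] + w * proj[i0 + 1]
    return image

angles = np.linspace(0.0, np.pi, 512, endpoint=False)
img = backproject(np.random.rand(512, 730), angles, size=512)
```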

  9. Verifying cell loss requirements in high-speed communication networks

    Kerry W. Fendick

    1998-01-01

    In high-speed communication networks it is common to have requirements of very small cell loss probabilities due to buffer overflow. Losses are measured to verify that the cell loss requirements are being met, but it is not clear how to interpret such measurements. We propose methods for determining whether or not cell loss requirements are being met. A key idea is to look at the stream of losses as successive clusters of losses. Often clusters of losses, rather than individual losses, should be regarded as the important “loss events”. Thus we propose modeling the cell loss process by a batch Poisson stochastic process. Successive clusters of losses are assumed to arrive according to a Poisson process. Within each cluster, cell losses do not occur at a single time, but the distance between losses within a cluster should be negligible compared to the distance between clusters. Thus, for the purpose of estimating the cell loss probability, we ignore the spaces between successive cell losses in a cluster of losses. Asymptotic theory suggests that the counting process of losses initiating clusters often should be approximately a Poisson process even though the cell arrival process is not nearly Poisson. The batch Poisson model is relatively easy to test statistically and fit; e.g., the batch-size distribution and the batch arrival rate can readily be estimated from cell loss data. Since batch (cluster) sizes may be highly variable, it may be useful to focus on the number of batches instead of the number of cells in a measurement interval. We also propose a method for approximately determining the parameters of a special batch Poisson cell loss process with a geometric batch-size distribution from a queueing model of the buffer content. For this step, we use a reflected Brownian motion (RBM) approximation of a G/D/1/C queueing model. We also use the RBM model to estimate the input burstiness given the cell loss rate. In addition, we use the RBM model to
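    A minimal sketch of the fitting step described above is given below: losses are grouped into clusters by a gap threshold, and the Poisson cluster-arrival rate and mean (geometric) batch size are estimated from the resulting counts. The timestamps, the threshold and the simple moment estimate are illustrative assumptions, not the paper's exact procedure.

```python
# Group cell-loss timestamps into clusters and estimate batch Poisson parameters.
import numpy as np

def fit_batch_poisson(loss_times, observation_time, gap_threshold):
    """Return (cluster arrival rate per unit time, mean batch size)."""
    loss_times = np.sort(np.asarray(loss_times, dtype=float))
    gaps = np.diff(loss_times)
    # a new cluster starts wherever the gap exceeds the threshold
    starts = np.concatenate(([0], np.where(gaps > gap_threshold)[0] + 1))
    batch_sizes = np.diff(np.concatenate((starts, [len(loss_times)])))
    rate = len(starts) / observation_time   # Poisson rate of cluster arrivals
    mean_batch = batch_sizes.mean()         # geometric mean batch size = 1/(1-p)
    return rate, mean_batch

print(fit_batch_poisson([0.1, 0.1002, 0.1004, 5.3, 9.8, 9.8001], 10.0, 0.01))
```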

  10. The Role of Performance Management in the High Performance Organisation

    de Waal, André A.; van der Heijden, Beatrice I.J.M.

    2014-01-01

    The allegiance of partnering organisations and their employees to an Extended Enterprise performance is its proverbial sword of Damocles. Literature on Extended Enterprises focuses on collaboration, inter-organizational integration and learning to avoid diminishing or missing allegiance becoming an

  11. Towards High Performance Processing In Modern Java Based Control Systems

    Misiowiec, M; Buttner, M

    2011-01-01

    CERN controls software is often developed on a Java foundation. Some systems carry out a combination of data, network and processor intensive tasks within strict time limits. Hence, there is a demand for high-performing, quasi real-time solutions. Extensive prototyping of the new CERN monitoring and alarm software required us to address such expectations. The system must handle tens of thousands of data samples every second, along its three tiers, applying complex computations throughout. To accomplish the goal, a deep understanding of multithreading, memory management and interprocess communication was required. There are unexpected traps hidden behind excessive use of 64-bit memory and the severe impact of modern garbage collectors on the processing flow. Tuning the JVM configuration significantly affects the execution of the code. Even more important are the number of threads and the data structures used between them. Accurately dividing work into independent tasks might boost system performance. Thorough profili...

  13. A high-performance digital control system for TCV

    Lister, J.B.; Dutch, M.J. [Ecole Polytechnique Federale, Lausanne (Switzerland). Centre de Recherche en Physique des Plasma (CRPP); Milne, P.G. [Pentland System Ltd., Livingstone (United Kingdom); Means, R.W. [HNC Software Inc., San Diego, CA (United States)

    1997-10-01

    The TCV hybrid analogue-digital plasma control system has been superseded by a high performance Digital Plasma Control System, DPCS, made possible by recent advances in off the shelf technology. We discuss the basic requirements for such a control system and present the design and specifications which were laid down. The nominal and final performances are presented and the complete design is given in detail. The integration of the new system into the current operation of the TCV tokamak is described. The procurement of this system has required close collaboration between the end-users and two commercial suppliers with one of the latter taking full responsibility for the system integration. The impact of this approach on the design and commissioning costs for the TCV project is presented. New possibilities offered by this new system are discussed, including possible work relevant to ITER plasma control development. (author) 3 figs., 5 refs.

  14. High Performance Clocks and Gravity Field Determination

    Müller, J.; Dirkx, D.; Kopeikin, S. M.; Lion, G.; Panet, I.; Petit, G.; Visser, P. N. A. M.

    2018-02-01

    Time measured by an ideal clock crucially depends on the gravitational potential and velocity of the clock according to general relativity. Technological advances in manufacturing high-precision atomic clocks have rapidly improved their accuracy and stability over the last decade that approached the level of 10^{-18}. This notable achievement along with the direct sensitivity of clocks to the strength of the gravitational field make them practically important for various geodetic applications that are addressed in the present paper. Based on a fully relativistic description of the background gravitational physics, we discuss the impact of those highly-precise clocks on the realization of reference frames and time scales used in geodesy. We discuss the current definitions of basic geodetic concepts and come to the conclusion that the advances in clocks and other metrological technologies will soon require the re-definition of time scales or, at least, clarification to ensure their continuity and consistent use in practice. The relative frequency shift between two clocks is directly related to the difference in the values of the gravity potential at the points of clock's localization. According to general relativity the relative accuracy of clocks in 10^{-18} is equivalent to measuring the gravitational red shift effect between two clocks with the height difference amounting to 1 cm. This makes the clocks an indispensable tool in high-precision geodesy in addition to laser ranging and space geodetic techniques. We show how clock measurements can provide geopotential numbers for the realization of gravity-field-related height systems and can resolve discrepancies in classically-determined height systems as well as between national height systems. Another application of clocks is the direct use of observed potential differences for the improved recovery of regional gravity field solutions. Finally, clock measurements for space-borne gravimetry are analyzed along with
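    A quick back-of-the-envelope check of the statement above that a fractional frequency accuracy of 10^{-18} corresponds to about 1 cm of height difference near the Earth's surface, using the weak-field relation Δν/ν = gΔh/c²:

```python
# Gravitational redshift between two clocks separated by dh in height.
g = 9.81             # m/s^2, surface gravity
c = 299_792_458.0    # m/s, speed of light
dh = 0.01            # m, a 1 cm height difference
print(g * dh / c**2) # ~1.1e-18 fractional frequency shift, matching the quoted accuracy
```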

  15. High-performance computing in accelerating structure design and analysis

    Li Zenghai; Folwell, Nathan; Ge Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-01-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which put stringent requirements on the accelerating structures for efficiency and beam stability. While numerical modeling has been quite standard in accelerator R and D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high-performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed for meeting the computational challenges posed by the NLC as well as projects such as the PEP-II and RIA. Numerical results will be presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, either at the component level (single cell optimization), or on the scale of an entire structure (beam heating and long-range wakefields)

  16. Evaluating performance of high efficiency mist eliminators

    Waggoner, Charles A.; Parsons, Michael S.; Giffin, Paxton K. [Mississippi State University, Institute for Clean Energy Technology, 205 Research Blvd, Starkville, MS (United States)

    2013-07-01

    Processing liquid wastes frequently generates off gas streams with high humidity and liquid aerosols. Droplet laden air streams can be produced from tank mixing or sparging and processes such as reforming or evaporative volume reduction. Unfortunately these wet air streams represent a genuine threat to HEPA filters. High efficiency mist eliminators (HEME) are one option for removal of liquid aerosols with high dissolved or suspended solids content. HEMEs have been used extensively in industrial applications; however, they have not seen widespread use in the nuclear industry. Filtering efficiency data along with loading curves are not readily available for these units, and data that exist are not easily translated to operational parameters in liquid waste treatment plants. A specialized test stand has been developed to evaluate the performance of HEME elements under use conditions of a US DOE facility. HEME elements were tested at three volumetric flow rates using aerosols produced from an iron-rich waste surrogate. The challenge aerosol included submicron particles produced from Laskin nozzles and super micron particles produced from a hollow cone spray nozzle. Test conditions included ambient temperature and relative humidities greater than 95%. Data collected during testing HEME elements from three different manufacturers included volumetric flow rate, differential temperature across the filter housing, downstream relative humidity, and differential pressure (dP) across the filter element. Filter challenge was discontinued at three intermediate dPs to allow determining filter efficiency, first using dioctyl phthalate and then dry surrogate aerosols. Filtering efficiencies of the clean HEME, the clean HEME loaded with water, and the HEME at maximum dP were also collected using the two test aerosols. Results of the testing included differential pressure vs. time loading curves for the nine elements tested along with the mass of moisture and solid
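    For context, filtering efficiency in tests like these is typically computed from the upstream and downstream aerosol concentrations; the sketch below shows that calculation with purely illustrative numbers, not data from this test program.

```python
# Fractional collection efficiency from challenge-aerosol measurements.
def filtering_efficiency(upstream_conc: float, downstream_conc: float) -> float:
    """Efficiency = 1 - penetration."""
    return 1.0 - downstream_conc / upstream_conc

print(filtering_efficiency(upstream_conc=1.0e6, downstream_conc=250.0))  # 0.99975
```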

  17. Modeling the Non-functional Requirements in the Context of Usability, Performance, Safety and Security

    Sadiq, Mazhar

    2007-01-01

    Requirements engineering is the most significant part of the software development life cycle. Until now, great emphasis has been put on the maturity of functional requirements, but over time it has become clear that the success of software development does not depend on functional requirements alone; non-functional requirements should also be taken into consideration. Among the non-functional requirements, usability, performance, safety and security are considered important. ...

  18. Quantum Accelerators for High-performance Computing Systems

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system in managing these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  19. An integrated high performance Fastbus slave interface

    Christiansen, J.; Ljuslin, C.

    1993-01-01

    A high performance CMOS Fastbus slave interface ASIC (Application Specific Integrated Circuit) supporting all addressing and data transfer modes defined in the IEEE 960 - 1986 standard is presented. The FAstbus Slave Integrated Circuit (FASIC) is an interface between the asynchronous Fastbus and a clock synchronous processor/memory bus. It can work stand-alone or together with a 32 bit microprocessor. The FASIC is a programmable device enabling its direct use in many different applications. A set of programmable address mapping windows can map Fastbus addresses to convenient memory addresses and at the same time act as address decoding logic. Data rates of 100 MBytes/sec to Fastbus can be obtained using an internal FIFO in the FASIC to buffer data between the two buses during block transfers. Message passing from Fastbus to a microprocessor on the slave module is supported. A compact (70 mm x 170 mm) Fastbus slave piggy back sub-card interface including level conversion between ECL and TTL signal levels has been implemented using surface mount components and the 208 pin FASIC chip
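    The programmable address-mapping windows mentioned above can be pictured as a small table that both decodes an incoming Fastbus address and translates it to a local address. The Python model below is only an illustration of that idea with hypothetical window values; it is not the FASIC's actual register layout or decoding logic.

```python
# Toy model of programmable address-mapping windows acting as decode logic.
from dataclasses import dataclass

@dataclass
class Window:
    fastbus_base: int   # start of the decoded Fastbus address range
    size: int           # length of the range in bytes
    local_base: int     # corresponding local processor/memory address

WINDOWS = [Window(0x8000_0000, 0x1_0000, 0x0000_0000),
           Window(0x8010_0000, 0x0_4000, 0x0002_0000)]

def decode(fastbus_addr: int):
    """Return the translated local address if a window claims the access, else None."""
    for w in WINDOWS:
        if w.fastbus_base <= fastbus_addr < w.fastbus_base + w.size:
            return w.local_base + (fastbus_addr - w.fastbus_base)
    return None   # address not claimed by this slave

print(hex(decode(0x8000_0040)))  # -> 0x40
```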

  20. High Performance Graphene Oxide Based Rubber Composites

    Mao, Yingyan; Wen, Shipeng; Chen, Yulong; Zhang, Fazhong; Panine, Pierre; Chan, Tung W.; Zhang, Liqun; Liang, Yongri; Liu, Li

    2013-01-01

    In this paper, graphene oxide/styrene-butadiene rubber (GO/SBR) composites with complete exfoliation of GO sheets were prepared by aqueous-phase mixing of GO colloid with SBR latex and a small loading of butadiene-styrene-vinyl-pyridine rubber (VPR) latex, followed by their co-coagulation. During co-coagulation, VPR not only plays a key role in the prevention of aggregation of GO sheets but also acts as an interface-bridge between GO and SBR. The results demonstrated that the mechanical properties of the GO/SBR composite with 2.0 vol.% GO is comparable with those of the SBR composite reinforced with 13.1 vol.% of carbon black (CB), with a low mass density and a good gas barrier ability to boot. The present work also showed that GO-silica/SBR composite exhibited outstanding wear resistance and low-rolling resistance which make GO-silica/SBR very competitive for the green tire application, opening up enormous opportunities to prepare high performance rubber composites for future engineering applications. PMID:23974435

  1. Initial rheological description of high performance concretes

    Alessandra Lorenzetti de Castro

    2006-12-01

    Concrete is defined as a composite material and, in rheological terms, it can be understood as a concentrated suspension of solid particles (aggregates) in a viscous liquid (cement paste). On a macroscopic scale, concrete flows as a liquid. It is known that the rheological behavior of concrete is close to that of a Bingham fluid, and two rheological parameters are needed for its description: yield stress and plastic viscosity. The aim of this paper is to present the initial rheological description of high performance concretes using the modified slump test. According to the results, an increase of yield stress was observed over time, while a slight variation in plastic viscosity was noticed. The incorporation of silica fume showed changes in the rheological properties of fresh concrete. The behavior of these materials also varied with the mixing procedure employed in their production. The addition of superplasticizer meant that there was a large reduction in the mixture's yield stress, while plastic viscosity remained practically constant.
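    For reference, the Bingham model mentioned above is tau = tau0 + mu_p * gamma_dot, so the two rheological parameters can be read off a linear fit of a flow curve. The sketch below does this with synthetic data; the numbers are illustrative, not measurements from the modified slump test.

```python
# Fit yield stress (intercept) and plastic viscosity (slope) of a Bingham fluid.
import numpy as np

def fit_bingham(shear_rate, shear_stress):
    """Return (yield stress tau0 in Pa, plastic viscosity mu_p in Pa*s)."""
    mu_p, tau0 = np.polyfit(shear_rate, shear_stress, 1)  # slope, intercept
    return tau0, mu_p

gamma_dot = np.array([5.0, 10.0, 20.0, 40.0])   # 1/s, illustrative shear rates
tau = 150.0 + 45.0 * gamma_dot                  # Pa, synthetic Bingham flow curve
print(fit_bingham(gamma_dot, tau))              # ~ (150.0, 45.0)
```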

  2. High thermoelectric performance of graphite nanofibers.

    Tran, Van-Truong; Saint-Martin, Jérôme; Dollfus, Philippe; Volz, Sebastian

    2018-02-22

    Graphite nanofibers (GNFs) have been demonstrated to be a promising material for hydrogen storage and heat management in electronic devices. Here, by means of first-principles and transport simulations, we show that GNFs can also be an excellent material for thermoelectric applications thanks to the interlayer weak van der Waals interaction that induces low thermal conductance and a step-like shape in the electronic transmission with mini-gaps, which are necessary ingredients to achieve high thermoelectric performance. This study unveils that the platelet form of GNFs, in which graphite layers are perpendicular to the fiber axis, can exhibit outstanding thermoelectric properties with a figure of merit ZT reaching 3.55 in a 0.5 nm diameter fiber and 1.1 in a 1.1 nm diameter one. Interestingly, by introducing 14C isotope doping, ZT can even be enhanced up to more than 5, and more than 8 if we include the effect of finite phonon mean free path, which demonstrates the amazing thermoelectric potential of GNFs.
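    For readers unfamiliar with the figure of merit quoted above, ZT = S²σT/κ, where S is the Seebeck coefficient, σ the electrical conductivity, κ the total thermal conductivity and T the absolute temperature. The snippet below simply evaluates this formula for illustrative values, not GNF-specific data from the paper.

```python
# Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa.
def figure_of_merit(seebeck_V_per_K, sigma_S_per_m, kappa_W_per_mK, T_kelvin):
    return seebeck_V_per_K**2 * sigma_S_per_m * T_kelvin / kappa_W_per_mK

print(figure_of_merit(250e-6, 1.0e5, 0.5, 300.0))  # ~3.75, the order of magnitude reported
```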

  3. Durability of high performance concrete in seawater

    Amjad Hussain Memon; Salihuddin Radin Sumadi; Rabitah Handan

    2000-01-01

    This paper presents a report on the effects of blended cements on the durability of high performance concrete (HPC) in seawater. In this research the effect of seawater was investigated. The specimens were initially subjected to water curing for seven days inside the laboratory at room temperature, followed by seawater curing exposed to tidal zone until testing. In this study three levels of cement replacement (0%, 30% and 70%) were used. The combined use of chemical and mineral admixtures has resulted in a new generation of concrete called HPC. The HPC has been identified as one of the most important advanced materials necessary in the effort to build a nation's infrastructure. HPC opens new opportunities in the utilization of the industrial by-products (mineral admixtures) in the construction industry. As a matter of fact permeability is considered as one of the fundamental properties governing the durability of concrete in the marine environment. Results of this investigation indicated that the oxygen permeability values for the blended cement concretes at the age of one year are reduced by a factor of about 2 as compared to OPC control mix concrete. Therefore both blended cement concretes are expected to withstand in the seawater exposed to tidal zone without serious deterioration. (Author)

  4. Alternative High-Performance Ceramic Waste Forms

    Sundaram, S. K. [Alfred Univ., NY (United States)

    2017-02-01

    This final report (M5NU-12-NY-AU # 0202-0410) summarizes the results of the project titled “Alternative High-Performance Ceramic Waste Forms,” funded in FY12 by the Nuclear Energy University Program (NEUP Project # 12-3809) being led by Alfred University in collaboration with Savannah River National Laboratory (SRNL). The overall focus of the project is to advance fundamental understanding of crystalline ceramic waste forms and to demonstrate their viability as alternative waste forms to borosilicate glasses. We processed single- and multiphase hollandite waste forms based on simulated waste streams compositions provided by SRNL based on the advanced fuel cycle initiative (AFCI) aqueous separation process developed in the Fuel Cycle Research and Development (FCR&D). For multiphase simulated waste forms, oxide and carbonate precursors were mixed together via ball milling with deionized water using zirconia media in a polyethylene jar for 2 h. The slurry was dried overnight and then separated from the media. The blended powders were then subjected to melting or spark plasma sintering (SPS) processes. Microstructural evolution and phase assemblages of these samples were studied using x-ray diffraction (XRD), scanning electron microscopy (SEM), energy dispersion analysis of x-rays (EDAX), wavelength dispersive spectrometry (WDS), transmission electron spectroscopy (TEM), selective area x-ray diffraction (SAXD), and electron backscatter diffraction (EBSD). These results showed that the processing methods have significant effect on the microstructure and thus the performance of these waste forms. The Ce substitution into zirconolite and pyrochlore materials was investigated using a combination of experimental (in situ XRD and x-ray absorption near edge structure (XANES)) and modeling techniques to study these single phases independently. In zirconolite materials, a transition from the 2M to the 4M polymorph was observed with increasing Ce content. The resulting

  5. Low Cost High Performance Nanostructured Spectrally Selective Coating

    Jin, Sungho [Univ. of California, San Diego, CA (United States)

    2017-04-05

    Sunlight absorbing coating is a key enabling technology to achieve high-temperature high-efficiency concentrating solar power operation. A high-performance solar absorbing material must simultaneously meet all the following three stringent requirements: high thermal efficiency (usually measured by figure of merit), high-temperature durability, and oxidation resistance. The objective of this research is to employ a highly scalable process to fabricate and coat black oxide nanoparticles onto solar absorber surface to achieve ultra-high thermal efficiency. Black oxide nanoparticles have been synthesized using a facile process and coated onto absorber metal surface. The material composition, size distribution and morphology of the nanoparticle are guided by numeric modeling. Optical and thermal properties have been both modeled and measured. High temperature durability has been achieved by using nanocomposites and high temperature annealing. Mechanical durability on thermal cycling have also been investigated and optimized. This technology is promising for commercial applications in next-generation high-temperature concentration solar power (CSP) plants.

  6. Performance of a high efficiency high power UHF klystron

    Konrad, G.T.

    1977-03-01

    A 500 kW c-w klystron was designed for the PEP storage ring at SLAC. The tube operates at 353.2 MHz, 62 kV, a microperveance of 0.75, and a gain of approximately 50 dB. Stable operation is required for a VSWR as high as 2 : 1 at any phase angle. The design efficiency is 70%. To obtain this value of efficiency, a second harmonic cavity is used in order to produce a very tightly bunched beam in the output gap. At the present time it is planned to install 12 such klystrons in PEP. A tube with a reduced size collector was operated at 4% duty at 500 kW. An efficiency of 63% was observed. The same tube was operated up to 200 kW c-w for PEP accelerator cavity tests. A full-scale c-w tube reached 500 kW at 65 kV with an efficiency of 55%. In addition to power and phase measurements into a matched load, some data at various load mismatches are presented
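    The quoted numbers are internally consistent, as a quick check shows: with a microperveance of 0.75 the beam current follows from I = K·V^(3/2), and 500 kW of RF output at 62 kV then corresponds to roughly the 70% design efficiency.

```python
# Consistency check of beam current, beam power and efficiency from the abstract.
K = 0.75e-6          # perveance, A/V^1.5 (microperveance 0.75)
V = 62e3             # beam voltage, V
I = K * V**1.5       # beam current, ~11.6 A
P_beam = V * I       # beam power, ~0.72 MW
print(I, P_beam, 500e3 / P_beam)   # efficiency ~0.70
```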

  7. High performance coronagraphy for direct imaging of exoplanets

    Guyon O.

    2011-07-01

    Coronagraphy has recently been an extremely active field of research, with several high performance concepts proposed, and several new coronagraphs tested in laboratories and telescopes. Coronagraph concepts can be grouped into a few broad categories: Lyot-type coronagraphs, pupil apodization and nulling interferometers. Among existing coronagraph concepts, several approach the fundamental performance limit imposed by the physical nature of light. To achieve their full potential, coronagraphs require exquisite wavefront control and calibration. This has been, and still is, the main bottleneck for the scientifically productive use of coronagraphs on ground-based telescopes. New and promising wavefront sensing techniques suitable for high contrast imaging have however been developed in the last few years and are starting to be realized in laboratories. I will review some of these enabling technologies, and show that coronagraphs are now ready for “prime time” on existing and future telescopes.

  8. 29 CFR 1620.15 - Jobs requiring equal skill in performance.

    2010-07-01

    ... EQUAL PAY ACT § 1620.15 Jobs requiring equal skill in performance. (a) In general. The jobs to which the equal pay standard is applicable are jobs requiring equal skill in their performance. Where the amount... another job, the equal pay standard cannot apply even though the jobs may be equal in all other respects...

  9. 29 CFR 1620.16 - Jobs requiring equal effort in performance.

    2010-07-01

    ..., however, that men and women are working side by side on a line assembling parts. Suppose further that one... 29 Labor 4 2010-07-01 2010-07-01 false Jobs requiring equal effort in performance. 1620.16 Section... EQUAL PAY ACT § 1620.16 Jobs requiring equal effort in performance. (a) In general. The jobs to which...

  10. 40 CFR 158.2160 - Microbial pesticides product performance data requirements.

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Microbial pesticides product... AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Microbial Pesticides § 158.2160 Microbial pesticides product performance data requirements. Product performance data must be developed for...

  11. 40 CFR Table 5 to Subpart Xxxx of... - Requirements for Performance Tests

    2010-07-01

    ... 40 Protection of Environment 12 2010-07-01 2010-07-01 true Requirements for Performance Tests 5 Table 5 to Subpart XXXX of Part 63 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED.... XXXX, Table 5 Table 5 to Subpart XXXX of Part 63—Requirements for Performance Tests As stated in § 63...

  12. 45 CFR 2516.810 - What types of evaluations are grantees and subgrantees required to perform?

    2010-10-01

    ... Welfare (Continued) CORPORATION FOR NATIONAL AND COMMUNITY SERVICE SCHOOL-BASED SERVICE-LEARNING PROGRAMS...? All grantees and subgrantees are required to perform internal evaluations which are ongoing efforts to assess performance and improve quality. Grantees and subgrantees may, but are not required to, arrange...

  13. 42 CFR 456.245 - Number of studies required to be performed.

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Number of studies required to be performed. 456.245 Section 456.245 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Ur Plan: Medical Care Evaluation Studies § 456.245 Number of studies required to be performed. The...

  14. 42 CFR 456.145 - Number of studies required to be performed.

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Number of studies required to be performed. 456.145 Section 456.145 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN...: Medical Care Evaluation Studies § 456.145 Number of studies required to be performed. The hospital must...

  15. Strategies of high-performing paramedic educational programs.

    Margolis, Gregg S; Romero, Gabriel A; Fernandez, Antonio R; Studnek, Jonathan R

    2009-01-01

    To identify the specific educational strategies used by paramedic educational programs that have attained consistently high success rates on the National Registry of Emergency Medical Technicians (NREMT) examination. NREMT data from 2003-2007 were analyzed to identify consistently high-performing paramedic educational programs. Representatives from 12 programs that have maintained a 75% first-attempt pass rate for at least four of five years and had more than 20 graduates per year were invited to participate in a focus group. Using the nominal group technique (NGT), participants were asked to answer the following question: "What are specific strategies that lead to a successful paramedic educational program?" All 12 emergency medical services (EMS) educational programs meeting the eligibility requirements participated. After completing the seven-step NGT process, 12 strategies were identified as leading to a successful paramedic educational program: 1) achieve and maintain national accreditation; 2) maintain high-level entry requirements and prerequisites; 3) provide students with a clear idea of expectations for student success; 4) establish a philosophy and foster a culture that values continuous review and improvement; 5) create your own examinations, lesson plans, presentations, and course materials using multiple current references; 6) emphasize emergency medical technician (EMT)-Basic concepts throughout the class; 7) use frequent case-based classroom scenarios; 8) expose students to as many prehospital advanced life support (ALS) patient contacts as possible, preferably where they are in charge; 9) create and administer valid examinations that have been through a review process (such as qualitative analysis); 10) provide students with frequent detailed feedback regarding their performance (such as formal examination reviews); 11) incorporate critical thinking and problem solving into all testing; and 12) deploy predictive testing with analysis prior to

  16. High-Speed Maglev Trains; German Safety Requirements

    1991-12-31

    This document is a translation of technology-specific safety requirements developed for the German Transrapid Maglev technology. These requirements were developed by a working group composed of representatives of German Federal Railways (DB), Tes...

  17. Assessing students' performance in software requirements engineering education using scoring rubrics

    Mkpojiogu, Emmanuel O. C.; Hussain, Azham

    2017-10-01

    The study investigates how helpful the use of scoring rubrics is in the performance assessment of software requirements engineering students, and whether its use can lead to improvement in students' performance in the development of software requirements artifacts and models. Scoring rubrics were used by two instructors to assess the cognitive performance of a student in the design and development of software requirements artifacts. The study results indicate that the use of scoring rubrics is very helpful in objectively assessing the performance of software requirements or software engineering students. Furthermore, the results revealed that the use of scoring rubrics can also give a good indication of achievement, showing whether or not a student is improving across repeated or iterative assessments. In a nutshell, its use leads to the performance improvement of students. The results provided some insights for further investigation and will be beneficial to researchers, requirements engineers, system designers, developers and project managers.
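    As a simple illustration of rubric-based scoring (not the instrument used in the study), each criterion can be given a level on a fixed scale and a weight, and the weighted average tracked across iterations; the criteria, weights and scale below are hypothetical.

```python
# Weighted rubric score, normalized to 0..1, for one assessment iteration.
def rubric_score(levels: dict, weights: dict, max_level: int = 4) -> float:
    """levels: criterion -> awarded level; weights: criterion -> relative weight."""
    total_weight = sum(weights.values())
    weighted = sum(weights[c] * levels[c] / max_level for c in weights)
    return weighted / total_weight

iteration1 = {"completeness": 2, "consistency": 3, "traceability": 2}
weights = {"completeness": 0.4, "consistency": 0.3, "traceability": 0.3}
print(rubric_score(iteration1, weights))  # repeat on later iterations to track improvement
```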

  18. Improving the high performance concrete (HPC) behaviour in high temperatures

    Cattelan Antocheves De Lima, R.

    2003-12-01

    High performance concrete (HPC) is an interesting material that has long attracted the interest of the scientific and technical community, due to the clear advantages obtained in terms of mechanical strength and durability. Given these better characteristics, HPC, in its various forms, has been gradually replacing normal strength concrete, especially in structures exposed to severe environments. However, the very dense microstructure and low permeability typical of HPC can result in explosive spalling under certain thermal and mechanical conditions, such as when concrete is subject to rapid temperature rises during a fire. This behaviour is caused by the build-up of internal water pressure in the pore structure during heating, and by stresses originating from thermal deformation gradients. Although there are still a limited number of experimental programs in this area, some researchers have reported that the addition of polypropylene fibers to HPC is a suitable way to avoid explosive spalling under fire conditions. This change in behavior is derived from the fact that polypropylene fibers melt at high temperatures and leave a pathway for heated gas to escape the concrete matrix, therefore allowing the outward migration of water vapor and resulting in the reduction of internal pore pressure. The present research investigates the behavior of high performance concrete at high temperatures, especially when polypropylene fibers are added to the mix.

  19. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    Pop, Florin

    2014-01-01

    Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energy, and lower absolute temperature are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high accuracy analysis, experimental validation, and visualization. High performance computing support offers the possibility to run simulations at large scale, in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP) analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
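
    As an illustrative aside (a generic sketch, not code from the cited paper), the Monte Carlo building block referred to above is, at its core, an estimate of an expectation from random samples; a minimal Python example for a one-dimensional integral:

      # Plain Monte Carlo estimate of an integral; generic illustration,
      # not taken from the cited paper.
      import math
      import random

      def mc_integrate(f, a, b, n=100_000, seed=0):
          """Estimate the integral of f on [a, b] from n uniform samples; return (value, statistical error)."""
          rng = random.Random(seed)
          samples = [f(a + (b - a) * rng.random()) for _ in range(n)]
          mean = sum(samples) / n
          var = sum((s - mean) ** 2 for s in samples) / (n - 1)
          return (b - a) * mean, (b - a) * math.sqrt(var / n)  # error falls as 1/sqrt(n)

      print(mc_integrate(math.sin, 0.0, math.pi))  # exact value is 2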

  20. Research on high-performance mass storage system

    Cheng Yaodong; Wang Lu; Huang Qiulan; Zheng Wei

    2010-01-01

    With the enlargement of scientific experiments, more and more data will be produced, which brings great challenges to the storage system. Large storage capacity and high data access performance are both important to a mass storage system. This paper first reviews several popular storage systems, including network storage systems, SAN-based sharing systems, WAN file systems, object-based parallel file systems, hierarchical storage systems and cloud storage systems. Then some key technologies are presented. Finally, this paper takes the BES storage system as an example and introduces its requirements, architecture and operation results. (authors)

  1. Operating System Support for High-Performance Solid State Drives

    Bjørling, Matias

    of the operating system in reducing the gap, and enabling new forms of communication and even co-design between applications and high-performance SSDs. More specifically, we studied the storage layers within the Linux kernel. We explore the following issues: (i) what are the limitations of the legacy block...... a form of application-SSD co-design? What are the impacts on operating system design? (v) What would it take to provide quality of service for applications requiring millions of I/O per second? The dissertation consists of six publications covering these issues. Two of the main contributions...

  2. Nuclear forces and high-performance computing: The perfect match

    Luu, T; Walker-Loud, A

    2009-01-01

    High-performance computing is now enabling the calculation of certain hadronic interaction parameters directly from Quantum Chromodynamics, the quantum field theory that governs the behavior of quarks and gluons and is ultimately responsible for the nuclear strong force. In this paper we briefly describe the state of the field and show how other aspects of hadronic interactions will be ascertained in the near future. We give estimates of computational requirements needed to obtain these goals, and outline a procedure for incorporating these results into the broader nuclear physics community.

  3. Strategy Guideline: Advanced Construction Documentation Recommendations for High Performance Homes

    Lukachko, A.; Gates, C.; Straube, J.

    2011-12-01

    As whole house energy efficiency increases, new houses become less like conventional houses that were built in the past. New materials and new systems require greater coordination and communication between industry stakeholders. The Guideline for Construction Documents for High Performance Housing provides advice to address this need. The reader will be presented with four changes that are recommended to achieve improvements in energy efficiency, durability and health in Building America houses: create coordination drawings, improve specifications, improve detail drawings, and review drawings and prepare a Quality Control Plan.

  4. Design of Ultra High Performance Fiber Reinforced Concrete Shells

    Jepsen, Michael S.; Lambertsen, Søren Heide; Damkilde, Lars

    2013-01-01

    The paper treats the redesign of the float structure of the Wavestar wave energy converter. Previously it was designed as a glass fiber structure, but due to cost reduction requirements a redesign has been initiated. The new float structure will be designed as a double curved Ultra High Performance Fiber Reinforced Concrete shell. The major challenge in the design phase has been securing sufficient stiffness of the structure while keeping the weight at a minimum. The weight/stiffness issue has been investigated by means of the finite element method, to optimize the structure regarding overall......

  5. Spectrally high performing quantum cascade lasers

    Toor, Fatima

    Quantum cascade (QC) lasers are versatile semiconductor light sources that can be engineered to emit light of almost any wavelength in the mid- to far-infrared (IR) and terahertz region from 3 to 300 μm [1-5]. Furthermore QC laser technology in the mid-IR range has great potential for applications in environmental, medical and industrial trace gas sensing [6-10] since several chemical vapors have strong rovibrational frequencies in this range and are uniquely identifiable by their absorption spectra through optical probing of absorption and transmission. Therefore, having a wide range of mid-IR wavelengths in a single QC laser source would greatly increase the specificity of QC laser-based spectroscopic systems, and also make them more compact and field deployable. This thesis presents work on several different approaches to multi-wavelength QC laser sources that take advantage of band-structure engineering and the uni-polar nature of QC lasers. Also, since for chemical sensing, lasers with narrow linewidth are needed, work is presented on a single mode distributed feedback (DFB) QC laser. First, a compact four-wavelength QC laser source, which is based on a 2-by-2 module design, with two waveguides having QC laser stacks for two different emission wavelengths each, one with 7.0 μm/11.2 μm, and the other with 8.7 μm/12.0 μm is presented. This is the first design of a four-wavelength QC laser source with widely different emission wavelengths that uses minimal optics and electronics. Second, since there are still several unknown factors that affect QC laser performance, results on a first ever study conducted to determine the effects of waveguide side-wall roughness on QC laser performance using the two-wavelength waveguides are presented. The results are consistent with Rayleigh scattering effects in the waveguides, with roughness affecting shorter wavelengths more than longer wavelengths. Third, a versatile time-multiplexed multi-wavelength QC laser system that

  6. Cost optimal building performance requirements. Calculation methodology for reporting on national energy performance requirements on the basis of cost optimality within the framework of the EPBD

    Boermans, T.; Bettgenhaeuser, K.; Hermelink, A.; Schimschar, S. [Ecofys, Utrecht (Netherlands)]

    2011-05-15

    On the European level, the principles for the requirements for the energy performance of buildings are set by the Energy Performance of Buildings Directive (EPBD). Dating from December 2002, the EPBD has set a common framework from which the individual Member States in the EU developed or adapted their individual national regulations. The EPBD in 2008 and 2009 underwent a recast procedure, with final political agreement having been reached in November 2009. The new Directive was then formally adopted on May 19, 2010. Among other clarifications and new provisions, the EPBD recast introduces a benchmarking mechanism for national energy performance requirements for the purpose of determining cost-optimal levels to be used by Member States for comparing and setting these requirements. The previous EPBD set out a general framework to assess the energy performance of buildings and required Member States to define maximum values for energy delivered to meet the energy demand associated with the standardised use of the building. However it did not contain requirements or guidance related to the ambition level of such requirements. As a consequence, building regulations in the various Member States have been developed by the use of different approaches (influenced by different building traditions, political processes and individual market conditions) and resulted in different ambition levels where in many cases cost optimality principles could justify higher ambitions. The EPBD recast now requests that Member States shall ensure that minimum energy performance requirements for buildings are set 'with a view to achieving cost-optimal levels'. The cost optimum level shall be calculated in accordance with a comparative methodology. The objective of this report is to contribute to the ongoing discussion in Europe around the details of such a methodology by describing possible details on how to calculate cost optimal levels and pointing towards important factors and
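
    As an illustrative aside (all package names, costs, prices and energy figures below are assumed, not taken from the report), the comparative methodology boils down to computing a global cost for each efficiency package and reading the cost-optimal level off the minimum of that curve; a minimal Python sketch:

      # Illustrative cost-optimality comparison; every figure is assumed.
      # Global cost = investment + discounted energy cost over the calculation period.
      def global_cost(investment, annual_energy_kwh, energy_price=0.20,
                      years=30, discount_rate=0.03):
          npv = sum(1.0 / (1.0 + discount_rate) ** t for t in range(1, years + 1))
          return investment + annual_energy_kwh * energy_price * npv

      packages = [                        # (name, extra investment EUR, primary energy kWh/a)
          ("reference",        0,     20000),
          ("better windows",   8000,  16000),
          ("walls + windows",  20000, 11000),
          ("passive-level",    45000,  7000),
      ]
      costs = [(name, e, global_cost(inv, e)) for name, inv, e in packages]
      name, energy, cost = min(costs, key=lambda c: c[2])
      print(f"cost-optimal: {name}, {energy} kWh/a, global cost {cost:.0f} EUR")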

  7. Nova performance at ultra high fluence levels

    Hunt, J.T.

    1986-01-01

    Nova is a ten beam high power Nd:glass laser used for inertial confinement fusion research. It was operated in the high power, high energy regime following the completion of construction in December 1984. During this period several interesting nonlinear optical phenomena were observed. These phenomena are discussed in the text. 11 refs., 5 figs

  8. Spectral method and its high performance implementation

    Wu, Zedong

    2014-01-01

    We have presented a new method that can be dispersion free and unconditionally stable. Thus the computational cost and memory requirement will be reduced a lot. Based on this feature, we have implemented this algorithm on GPU-based CUDA for anisotropic reverse time migration. There is almost no communication between CPU and GPU. For the prestack wavefield extrapolation, it can combine all the shots together for migration. However, this requires solving a problem of larger dimension and more memory, which cannot fit into one GPU card. In this situation, we implement it based on a domain decomposition method and MPI for distributed memory systems.
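
    As an illustrative aside (a CPU-only NumPy sketch, not the CUDA/MPI implementation described above), the dispersion-free character of spectral differentiation can be seen by comparing an FFT-based derivative of a smooth periodic function with the exact result:

      # FFT-based spectral derivative: accurate to machine precision for smooth
      # periodic data, hence dispersion free. Not the cited GPU/MPI code.
      import numpy as np

      n = 256
      x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
      u = np.sin(3 * x)

      k = 2.0 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])     # angular wavenumbers
      du_spectral = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
      du_exact = 3 * np.cos(3 * x)
      print(np.max(np.abs(du_spectral - du_exact)))           # error at machine-precision level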

  9. High performance distributed objects in large hadron collider experiments

    Gutleber, J.

    1999-11-01

    This dissertation demonstrates how object-oriented technology can support the development of software that has to meet the requirements of high performance distributed data acquisition systems. The environment for this work is a system under planning for the Compact Muon Solenoid experiment at CERN that shall start its operation in the year 2005. The long operational phase of the experiment together with a tight and puzzling interaction with custom devices make the quest for an evolvable architecture that exhibits a high level of abstraction the driving issue. The question arises if an existing approach already fits our needs. The presented work casts light on these problems and as a result comprises the following novel contributions: - Application of object technology at hardware/software boundary. Software components at this level must be characterised by high efficiency and extensibility at the same time. - Identification of limitations when deploying commercial-off-the-shelf middleware for distributed object-oriented computing. - Capturing of software component properties in an efficiency model for ease of comparison and improvement. - Proof of feasibility that the encountered deficiencies in middleware can be avoided and that with the use of software components the imposed requirements can be met. - Design and implementation of an on-line software control system that allows to take into account the ever evolving requirements by avoiding hardwired policies. We conclude that state-of-the-art middleware cannot meet the required efficiency of the planned data acquisition system. Although new tool generations already provide a certain degree of configurability, the obligation to follow standards specifications does not allow the necessary optimisations. We identified the major limiting factors and argue that a custom solution following a component model with narrow interfaces can satisfy our requirements. This approach has been adopted for the current design

  10. High-precision performance testing of the LHC power converters

    Bastos, M; Dreesen, P; Fernqvist, G; Fournier, O; Hudson, G

    2007-01-01

    The magnet power converters for LHC were procured in three parts, power part, current transducers and control electronics, to enable a maximum of industrial participation in the manufacturing and still guarantee the very high precision (a few parts in 10⁻⁶) required by LHC. One consequence of this approach was several stages of system tests: factory reception tests, CERN reception tests, integration tests, short-circuit tests and commissioning on the final load in the LHC tunnel. The majority of the power converters for LHC have now been delivered and integrated into complete converters, and high-precision performance testing is well advanced. This paper presents the techniques used for high-precision testing and the results obtained.
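
    For orientation (the readings below are hypothetical), "a few parts in 10⁻⁶" translates directly into a relative-error check of a converter output against the reference transducer:

      # Relative error in parts per million (ppm); the readings are hypothetical.
      def ppm_error(measured, reference):
          return (measured - reference) / reference * 1e6

      print(ppm_error(11999.976, 12000.000))   # -2 ppm, i.e. a few parts in 1e-6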

  11. Corrosion resistance of high-performance materials titanium, tantalum, zirconium

    2012-01-01

    Corrosion resistance is the property of a material to resist corrosion attack in a particular aggressive environment. Although titanium, tantalum and zirconium are not noble metals, they are the best choice whenever high corrosion resistance is required. The exceptionally good corrosion resistance of these high-performance metals and their alloys results from the formation of a very stable, dense, highly adherent, and self-healing protective oxide film on the metal surface. This naturally occurring oxide layer prevents chemical attack of the underlying metal surface. This behavior also means, however, that high corrosion resistance can be expected only under neutral or oxidizing conditions. Under reducing conditions, a lower resistance must be reckoned with. Only very few inorganic and organic substances are able to attack titanium, tantalum or zirconium at ambient temperature. As the extraordinary corrosion resistance is coupled with an excellent formability and weldability these materials are very valua...

  12. Analysis of production factors in high performance concrete

    Gilberto Carbonari

    2003-01-01

    The incorporation of silica fume and superplasticizers in high strength and high performance concrete, along with a low water-cement ratio, leads to significant changes in the workability and the energy needed to homogenize and compact the concrete. Moreover, several aspects of concrete production that are not critical for conventional concrete are important for high strength concrete. This paper will discuss the need for controlling the humidity of the aggregates, optimizing the mixing sequence used in the fabrication, and the slump loss. The application of a silica fume concrete in typical building columns will be analyzed considering the required consolidation, the variability of the material strength within the structural element and the relation between core and molded specimen strength. Comparisons will also be made with conventional concrete.

  13. HIGH PERFORMANCE ADVANCED TOKAMAK REGIMES FOR NEXT-STEP EXPERIMENTS

    GREENFIELD, C.M.; MURAKAMI, M.; FERRON, J.R.; WADE, M.R.; LUCE, T.C.; PETTY, C.C.; MENARD, J.E; PETRIE, T.W.; ALLEN, S.L.; BURRELL, K.H.; CASPER, T.A; DeBOO, J.C.; DOYLE, E.J.; GAROFALO, A.M; GORELOV, Y.A; GROEBNER, R.J.; HOBIRK, J.; HYATT, A.W; JAYAKUMAR, R.J; KESSEL, C.E; LA HAYE, R.J; JACKSON, G.L; LOHR, J.; MAKOWSKI, M.A.; PINSKER, R.I.; POLITZER, P.A.; PRATER, R.; STRAIT, E.J.; TAYLOR, T.S; WEST, W.P.

    2003-01-01

    Advanced Tokamak (AT) research in DIII-D seeks to provide a scientific basis for steady-state high performance operation in future devices. These regimes require high toroidal beta to maximize fusion output and poloidal beta to maximize the self-driven bootstrap current. Achieving these conditions requires integrated, simultaneous control of the current and pressure profiles, and active magnetohydrodynamic (MHD) stability control. The building blocks for AT operation are in hand. Resistive wall mode stabilization via plasma rotation and active feedback with non-axisymmetric coils allows routine operation above the no-wall beta limit. Neoclassical tearing modes are stabilized by active feedback control of localized electron cyclotron current drive (ECCD). Plasma shaping and profile control provide further improvements. Under these conditions, bootstrap supplies most of the current. Steady-state operation requires replacing the remaining Ohmic current, mostly located near the half-radius, with noninductive external sources. In DIII-D this current is provided by ECCD, and nearly stationary AT discharges have been sustained with little remaining Ohmic current. Fast wave current drive is being developed to control the central magnetic shear. Density control, with divertor cryopumps, of AT discharges with edge localized moding (ELMing) H-mode edges facilitates high current drive efficiency at reactor relevant collisionalities. A sophisticated plasma control system allows integrated control of these elements. Close coupling between modeling and experiment is key to understanding the separate elements, their complex nonlinear interactions, and their integration into self-consistent high performance scenarios. Progress on this development, and its implications for next-step devices, will be illustrated by results of recent experiment and simulation efforts

  14. Durability and Performance of High Performance Infiltration Cathodes

    Samson, Alfred Junio; Søgaard, Martin; Hjalmarsson, Per

    2013-01-01

    The performance and durability of solid oxide fuel cell (SOFC) cathodes consisting of a porous Ce0.9Gd0.1O1.95 (CGO) infiltrated with nitrates corresponding to the nominal compositions La0.6Sr0.4Co1.05O3-δ (LSC), LaCoO3-δ (LC), and Co3O4 are discussed. At 600°C, the polarization resistance, Rp......, varied as: LSC (0.062 Ωcm2)...... cathode was found to depend on the infiltrate firing temperature and is suggested to originate...... of the infiltrate but also from a better surface exchange property. A 450h test of an LSC-infiltrated CGO cathode showed an Rp with final degradation rate of only 11mΩcm2kh-1. An SOFC with an LSC-infiltrated CGO cathode tested for 1,500h at 700°C and 0.5Acm-2 (60% fuel, 20% air utilization) revealed no measurable...

  15. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book makes it possible to compare performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  16. Case Study of Using High Performance Commercial Processors in Space

    Ferguson, Roscoe C.; Olivas, Zulema

    2009-01-01

    The purpose of the Space Shuttle Cockpit Avionics Upgrade project (1999-2004) was to reduce crew workload and improve situational awareness. The upgrade was to augment the Shuttle avionics system with new hardware and software. A major success of this project was the validation of the hardware architecture and software design. This was significant because the project incorporated new technology and approaches for the development of human rated space software. An early version of this system was tested at the Johnson Space Center for one month by teams of astronauts. The results were positive, but NASA eventually cancelled the project towards the end of the development cycle. The goal to reduce crew workload and improve situational awareness resulted in the need for high performance Central Processing Units (CPUs). The CPU selected was from the PowerPC family, which is a reduced instruction set computer (RISC) known for its high performance. However, the requirement for radiation tolerance resulted in the re-evaluation of the selected family member of the PowerPC line. Radiation testing revealed that the originally selected processor (PowerPC 7400) was too soft to meet mission objectives and an effort was established to perform trade studies and performance testing to determine a feasible candidate. At that time, the PowerPC RAD750s were radiation tolerant, but did not meet the required performance needs of the project. Thus, the final solution was to select the PowerPC 7455. This processor did not have a radiation tolerant version, but had some ability to detect failures. However, its cache tags did not provide parity and thus the project incorporated a software strategy to detect radiation failures. The strategy was to incorporate dual paths for software generating commands to the legacy Space Shuttle avionics to prevent failures due to the softness of the upgraded avionics.
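
    As an illustrative aside (the command computation and policy below are hypothetical, not the flight software), the dual-path strategy mentioned above amounts to computing each command twice along independent paths and issuing it only when both results agree, so an upset that corrupts one path is caught:

      # Hypothetical sketch of a dual-path command check; not the actual
      # Cockpit Avionics Upgrade software.
      def compute_command(state):
          # placeholder for the real command computation
          return ("SET_THROTTLE", round(state["throttle"], 3))

      def issue_if_agreeing(state):
          a = compute_command(state)          # path A
          b = compute_command(dict(state))    # path B, independent copy of inputs
          if a == b:
              return a        # forward the command to the legacy avionics
          return None         # mismatch: suspect a radiation upset, do not issue

      print(issue_if_agreeing({"throttle": 0.724}))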

  17. From adaptive to high-performance structures

    Teuffel, P.

    2011-01-01

    Multiple design aspects influence the building performance such as architectural criteria, various environmental impacts and user behaviour. Specific examples are sun, wind, temperatures, function, occupancy, socio-cultural aspects and other contextual aspects and needs. Even though these aspects

  18. Site safety requirements for high level waste disposal

    Chen Weiming; Wang Ju

    2006-01-01

    This paper outlines the content, status and trend of the site safety requirements of the International Atomic Energy Agency, America, France, Sweden, Finland and Japan. Site safety requirements are usually expressed as advantageous versus disadvantageous conditions, and potentially advantageous versus disadvantageous conditions, in aspects of geohydrology, geochemistry, lithology, climate and human intrusion, etc. A study framework and steps for site safety requirements for China are discussed from the viewpoint of systems science. (authors)

  19. High-performance-vehicle technology. [fighter aircraft propulsion

    Povinelli, L. A.

    1979-01-01

    Propulsion needs of high performance military aircraft are discussed. Inlet performance, nozzle performance and cooling, and afterburner performance are covered. It is concluded that nonaxisymmetric nozzles provide cleaner external lines and enhanced maneuverability, but the internal flows are more complex. Swirl afterburners show promise for enhanced performance in the high altitude, low Mach number region.

  20. Wear performance of garnet aluminium composites at high contact pressure

    Sharma, Anju; Arora, Rama; Kumar, Suresh; Singh, Gurmel; Pandey, O. P.

    2016-05-01

    To satisfy the needs of the engineering sector, researchers and material scientists in this area have adopted the development of composites with tailor-made properties to enhance efficiency and cost savings in the manufacturing sector. The technology of the mineral industry is shaping the supply and demand of mineral-derived materials. These composites are best classified as high performance materials: they have high strength-to-weight ratios and require controlled manufacturing environments for optimum performance. Natural mineral garnet was used as the reinforcement of the composite because of its satisfactory mechanical properties and because it is an attractive ecological alternative to other ceramics. For this purpose, samples have been prepared with different sizes of the garnet reinforcement using the mechanical stirring method to achieve a homogeneously dispersed strengthening phase. A systematic study of the effect of high contact pressure on the sliding wear behaviour of garnet reinforced LM13 alloy composites is presented in this paper. The SEM analysis of the worn samples and debris reveals clues about the wear mechanism. The drastic improvement in the wear resistance of the composites at high contact pressure shows the high potential of the material to be used in engineering applications.

  1. Performance of a full-scale ITER metal hydride storage bed in comparison with requirements

    Beloglazov, S.; Glugla, M.; Fanghaenel, E.; Perevezentsev, A.; Wagner, R.

    2008-01-01

    The storage of hydrogen isotopes as metal hydride is the technique chosen for the ITER Tritium Plant Storage and Delivery System (SDS). A full-scale prototype storage bed has been designed, manufactured and intensively tested at the Tritium Laboratory, addressing the main performance parameters specified for the ITER application. The main requirements for the hydrogen storage bed are a strict physical limitation of the tritium storage capacity (currently 70 g T2), a high supply flow rate of hydrogen isotopes, in-situ calorimetry capabilities with an accuracy of 1 g and a fully tritium compatible design. The pressure composition isotherm of the ZrCo hydrogen system, as a reference material for ITER, is characterised by a significant slope. As a result, technical implementation of the ZrCo hydride bed in the SDS system requires further considerations. The paper presents the experience from the operation of the ZrCo getter bed including loading/de-loading operation, calorimetric loop performance, and active gas cooling of the bed for fast absorption operation. The implications of hydride material characteristics on the SDS system configuration and design are discussed. (authors)

  2. Commercially-driven human interplanetary propulsion systems: Rationale, concept, technology, and performance requirements

    Williams, C.H.; Borowski, S.K.

    1996-01-01

    Previous studies of human interplanetary missions are largely characterized by long trip times, limited performance capabilities, and enormous costs. Until these missions become dramatically more "commercial-friendly", their funding source and rationale will be restricted to national governments and their political/scientific interests respectively. A rationale is discussed for human interplanetary space exploration predicated on the private sector. Space propulsion system requirements are identified for interplanetary transfer times of no more than a few weeks/months to and between the major outer planets. Nuclear fusion is identified as the minimum requisite space propulsion technology. A conceptual design is described and evolutionary catalyzed-DD to D-3He fuel cycles are proposed. Magnetic nozzles for direct thrust generation and quantifying the operational aspects of the energy exchange mechanisms between high energy reaction products and neutral propellants are identified as two of the many key supporting technologies essential to satisfying system performance requirements. Government support of focused, breakthrough technologies is recommended at funding levels appropriate to other ongoing federal research. copyright 1996 American Institute of Physics

  3. Device Characterization of High Performance Quantum Dot Comb Laser

    Rafi, Kazi

    2012-02-01

    The cost effective comb based laser sources are considered to be one of the prominent emitters used in optical communication (OC) and photonic integrated circuits (PIC). With the rising demand for delivering triple-play services (voice, data and video) in FTTH and FTTP-based WDM-PON networks, metropolitan area network (MAN), and short-reach rack-to-rack optical computer communications, a versatile and cost effective WDM transmitter design is required, where several DFB lasers can be replaced by a cost effective broadband comb laser to support on-chip optical signaling. Therefore, high performance quantum dot (Q.Dot) comb lasers need to satisfy several challenges before real system implementations. These challenges include a high uniform broadband gain spectrum from the active layer, small relative intensity noise with lower bit error rate (BER) and better temperature stability. Thus, such short wavelength comb lasers offering higher bandwidth can be a feasible solution to address these challenges. However, they still require thorough characterization before implementation. In this project, we briefly characterized the novel quantum dot comb laser using duty cycle based electrical injection and temperature variations where we have observed the presence of reduced thermal conductivity in the active layer. This phenomenon is responsible for the degradation of device performance. Hence, different performance trends, such as broadband emission and spectrum stability were studied with pulse and continuous electrical pumping. The tested comb laser is found to be an attractive solution for several applications but requires further experiments in order to be considered for photonic integrated circuits and to support next generation computer-communications.

  4. High-performance mass storage system for workstations

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    media, and the tapes are used as backup media. The storage system is managed by the IEEE mass storage reference model-based UniTree software package. UniTree software will keep track of all files in the system, will automatically migrate the lesser used files to archive media, and will stage the files when needed by the system. The user can access the files without knowledge of their physical location. The high-performance mass storage system developed by Loral AeroSys will significantly boost the system I/O performance and reduce the overall data storage cost. This storage system provides a highly flexible and cost-effective architecture for a variety of applications (e.g., realtime data acquisition with a signal and image processing requirement, long-term data archiving and distribution, and image analysis and enhancement).

  5. High Performance Lead-free Piezoelectric Materials

    Gupta, Shashaank

    2013-01-01

    Piezoelectric materials find applications in a number of devices requiring inter-conversion of mechanical and electrical energy. These devices include different types of sensors, actuators and energy harvesting devices. A number of lead-based perovskite compositions (PZT, PMN-PT, PZN-PT, etc.) have dominated the field in the last few decades owing to their giant piezoresponse and convenient application-relevant tunability. With increasing environmental concerns, in the last decade, focus has be...

  6. RISC Processors and High Performance Computing

    Bailey, David H.; Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    This tutorial will discuss the top five RISC microprocessors and the parallel systems in which they are used. It will provide a unique cross-machine comparison not available elsewhere. The effective performance of these processors will be compared by citing standard benchmarks in the context of real applications. The latest NAS Parallel Benchmarks, both absolute performance and performance per dollar, will be listed. The next generation of the NPB will be described. The tutorial will conclude with a discussion of future directions in the field. Technology Transfer Considerations: All of these computer systems are commercially available internationally. Information about these processors is available in the public domain, mostly from the vendors themselves. The NAS Parallel Benchmarks and their results have been previously approved numerous times for public release, beginning back in 1991.

  7. A high performance thermoacoustic Stirling-engine

    Tijani, M.E.H.; Spoelstra, S. [Energy research Centre of the Netherlands (ECN), PO Box 1, 1755 ZG Petten (Netherlands)]

    2011-11-10

    In thermoacoustic systems heat is converted into acoustic energy and vice versa. These systems use inert gases as the working medium and have no moving parts, which makes thermoacoustic technology a serious alternative for producing mechanical or electrical power, cooling power, and heating in a sustainable and environmentally friendly way. A thermoacoustic Stirling heat engine is designed and built which achieves a record performance of 49% of the Carnot efficiency. The design and performance of the engine are presented. The engine has no moving parts and is made up of a few simple components.
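
    To put the quoted figure in perspective (the temperatures below are assumed for illustration, not taken from the paper), 49% of the Carnot efficiency at a hot-end temperature of 873 K (600 °C) with heat rejection at 300 K corresponds to an absolute efficiency of roughly

      \eta \;=\; 0.49\,\eta_{\text{Carnot}} \;=\; 0.49\left(1 - \frac{T_c}{T_h}\right) \;=\; 0.49\left(1 - \frac{300\ \text{K}}{873\ \text{K}}\right) \;\approx\; 0.32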

  8. Psychological factors in developing high performance athletes

    Elbe, Anne-Marie; Wikman, Johan Michael

    2017-01-01

    calls for great efforts in dealing with competitive pressure and demands mental strength with regard to endurance, self-motivation and willpower. But while it is somewhat straightforward to specify the physical and physiological skills needed for top performance in a specific sport, it becomes less...... clear with regard to the psychological skills that are needed. Therefore, the main questions to be addressed in this chapter are: (1) which psychological skills are needed to reach top performance? And (2) (how) can these skills be developed in young talents?...

  9. High Performance Expectations: Concept and causes

    Andersen, Lotte Bøgh; Jacobsen, Christian Bøtcher

    2017-01-01

    literature research, HPE is defined as the degree to which leaders succeed in expressing ambitious expectations to their employees’ achievement of given performance criteria, and it is analyzed how leadership behavior affects employee-perceived HPE. This study applies a large-scale leadership field...... experiment with 3,730 employees nested in 471 organizations and finds that transformational leadership training as well as transactional and combined training of the leaders significantly increased employees’ HPE relative to a control group. Furthermore, transformational leadership and the use of pecuniary...... rewards seem to be important mechanisms. This implies that public leaders can actually affect HPE through their leadership and thus potentially organizational performance as well....

  10. Exploration of the Trade Space Between Unmanned Aircraft Systems Descent Maneuver Performance and Sense-and-Avoid System Performance Requirements

    Jack, Devin P.; Hoffler, Keith D.; Johnson, Sally C.

    2014-01-01

    A need exists to safely integrate Unmanned Aircraft Systems (UAS) into the United States' National Airspace System. Replacing manned aircraft's see-and-avoid capability in the absence of an onboard pilot is one of the key challenges associated with safe integration. Sense-and-avoid (SAA) systems will have to achieve yet-to-be-determined required separation distances for a wide range of encounters. They will also need to account for the maneuver performance of the UAS they are paired with. The work described in this paper is aimed at developing an understanding of the trade space between UAS maneuver performance and SAA system performance requirements, focusing on a descent avoidance maneuver. An assessment of current manned and unmanned aircraft performance was used to establish potential UAS performance test matrix bounds. Then, near-term UAS integration work was used to narrow down the scope. A simulator was developed with sufficient fidelity to assess SAA system performance requirements. The simulator generates closest-point-of-approach (CPA) data from the wide range of UAS performance models maneuvering against a single intruder with various encounter geometries. Initial attempts to model the results made it clear that developing maneuver performance groups is required. Discussion of the performance groups developed and how to know in which group an aircraft belongs for a given flight condition and encounter is included. The groups are airplane, flight condition, and encounter specific, rather than airplane-only specific. Results and methodology for developing UAS maneuver performance requirements are presented for a descent avoidance maneuver. Results for the descent maneuver indicate that a minimum specific excess power magnitude can assure a minimum CPA for a given time-to-go prediction. However, smaller amounts of specific excess power may achieve or exceed the same CPA if the UAS has sufficient speed to trade for altitude. The results of this study will
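
    As an illustrative aside (a constant-velocity kinematic sketch with made-up values, not the simulator described above), the closest point of approach between the UAS and a single intruder follows from straight-line relative motion:

      # Closest point of approach (CPA) for two aircraft on constant-velocity legs.
      # Kinematic illustration only; not the cited simulator.
      import numpy as np

      def cpa(p_uas, v_uas, p_intruder, v_intruder):
          """Return (time to CPA in s, separation at CPA in m) for constant velocities."""
          dp = np.asarray(p_intruder, float) - np.asarray(p_uas, float)
          dv = np.asarray(v_intruder, float) - np.asarray(v_uas, float)
          denom = float(np.dot(dv, dv))
          t = 0.0 if denom == 0.0 else max(0.0, -float(np.dot(dp, dv)) / denom)
          return t, float(np.linalg.norm(dp + dv * t))

      # nearly head-on encounter with a 200 m lateral offset (illustrative values, in m and m/s)
      print(cpa([0, 0, 0], [60, 0, 0], [5000, 200, 0], [-80, 0, 0]))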

  11. High Performance Systolic Array Core Architecture Design for DNA Sequencer

    Saiful Nurdin Dayana

    2018-01-01

    This paper presents a high performance systolic array (SA) core architecture design for a Deoxyribonucleic Acid (DNA) sequencer. The core implements the affine gap penalty score Smith-Waterman (SW) algorithm. This time-consuming local alignment algorithm guarantees optimal alignment between DNA sequences, but it requires quadratic computation time when performed on standard desktop computers. The use of a linear SA decreases the time complexity from quadratic to linear. In addition, with the exponential growth of DNA databases, the SA architecture is used to overcome the timing issue. In this work, the SW algorithm has been captured using the Verilog Hardware Description Language (HDL) and simulated using the Xilinx ISIM simulator. The proposed design has been implemented in a Xilinx Virtex-6 Field Programmable Gate Array (FPGA), achieving a 90% reduction in core area.
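
    For reference (a plain software sketch of the same affine-gap recurrence with illustrative scoring parameters, not the Verilog systolic-array design above), the quadratic-time computation that the hardware parallelizes looks like this:

      # Affine-gap Smith-Waterman local alignment score (Gotoh recurrences).
      # Software reference sketch; the cited work maps this recurrence onto a
      # linear systolic array in hardware.
      def smith_waterman_affine(a, b, match=2, mismatch=-1, gap_open=-2, gap_extend=-1):
          n, m = len(a), len(b)
          NEG = float("-inf")
          H = [[0.0] * (m + 1) for _ in range(n + 1)]   # best local score ending at (i, j)
          E = [[NEG] * (m + 1) for _ in range(n + 1)]   # alignments ending with a gap in a
          F = [[NEG] * (m + 1) for _ in range(n + 1)]   # alignments ending with a gap in b
          best = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  E[i][j] = max(E[i][j - 1] + gap_extend, H[i][j - 1] + gap_open)
                  F[i][j] = max(F[i - 1][j] + gap_extend, H[i - 1][j] + gap_open)
                  s = match if a[i - 1] == b[j - 1] else mismatch
                  H[i][j] = max(0.0, H[i - 1][j - 1] + s, E[i][j], F[i][j])
                  best = max(best, H[i][j])
          return best

      print(smith_waterman_affine("ACACACTA", "AGCACACA"))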

  12. High Rate Performing Li-ion Battery

    2015-02-09

  13. Engendering a high performing organisational culture through ...

    Concluding that Africa's poor organisational performances are attributable to some inadequacies in the cultural foundations of countries and organisations, this paper argues for internal branding as the way forward for African organisations. Through internal branding an African organization can use a systematic and ...

  14. Mastering JavaScript high performance

    Adams, Chad R

    2015-01-01

    If you are a JavaScript developer with some experience in development and want to increase the performance of JavaScript projects by building faster web apps, then this book is for you. You should know the basic concepts of JavaScript.

  15. Gamma and X-ray spectroscopy at high performance

    Borchert, G.L.

    1984-01-01

    The author determines that for many interesting problems in gamma and X-ray spectroscopy it is necessary to use crystal diffractometers. The basic features of such instruments are discussed and the special performance of crystal spectrometers is demonstrated by means of typical examples of various applications.

  16. High Performance Fortran for Aerospace Applications

    Mehrotra, Piyush

    2000-01-01

    .... HPF is a set of Fortran extensions designed to provide users with a high-level interface for programming data parallel scientific applications while delegating to the compiler/runtime system the task...

  17. High-Performance Computing Paradigm and Infrastructure

    Yang, Laurence T

    2006-01-01

    With hyperthreading in Intel processors, hypertransport links in next generation AMD processors, multi-core silicon in today's high-end microprocessors from IBM and emerging grid computing, parallel and distributed computers have moved into the mainstream

  18. High performance management at franchise supermarkets

    Sloot, Laurens; van Nierop, Erjen; de Waal, Andre

    This article presents a study of the extent to which franchise supermarkets meet the five factors of high performance organizations (HPO): high-quality management, high-quality employees, openness and action orientation, continuous improvement and renewal, and

  19. Practical experience and lessons learned through implementation of Appendix VIII performance demonstration requirements

    Ashwin, P.J.; Becker, F.L.; Latiolais, C.L.; Spanner, J.C.

    1996-01-01

    To provide the US nuclear industry with a uniform implementation of the Performance Demonstration requirements within the 1989 edition of ASME Section XI, Appendix VIII, representatives from all US nuclear utilities formed the Performance Demonstration Initiative (PDI). The PDI recognized the potential benefits that Appendix VIII offered the nuclear industry and initiated a proactive approach to implement the requirements. In doing so it was expected that performance demonstration of ultrasonic examination procedures would allow for improvement in the efficiency and credibility of inservice inspection to be realized. Explicit within the performance demonstration requirements of Appendix VIII is the need for a Performance Demonstration Administrator, a difficult requirement to fulfill. Not only must the administrator exhibit the attributes of understanding the demonstration requirements, but also have solid technical knowledge, integrity and be able to interface with the industry at all levels, from operations to regulatory. For the nuclear industry, the EPRI NDE Center is an obvious choice to fulfill this position. This paper provides a brief background of the PDI, a nuclear industry-wide initiative to implement the performance demonstration requirements of Appendix VIII. Even though the consensus approach adopted by the PDI is discussed, the paper's primary objective is to provide examples of the lessons learned by the Center through the specific requirements of Appendix VIII

  20. High performance fuel technology development : Development of high performance cladding materials

    Park, Jeongyong; Jeong, Y. H.; Park, S. Y.

    2012-04-01

    The superior in-pile performance of the HANA claddings have been verified by the successful irradiation test and in the Halden research reactor up to the high burn-up of 67GWD/MTU. The in-pile corrosion and creep resistances of HANA claddings were improved by 40% and 50%, respectively, over Zircaloy-4. HANA claddings have been also irradiated in the commercial reactor up to 2 reactor cycles, showing the corrosion resistance 40% better than that of ZIRLO in the same fuel assembly. Long-term out-of-pile performance tests for the candidates of the next generation cladding materials have produced the highly reliable test results. The final candidate alloys were selected and they showed the corrosion resistance 50% better than the foreign advanced claddings, which is beyond the original target. The LOCA-related properties were also improved by 20% over the foreign advanced claddings. In order to establish the optimal manufacturing process for the inner and outer claddings of the dual-cooled fuel, 18 different kinds of specimens were fabricated with various cold working and annealing conditions. Based on the performance tests and various out-of-pile test results obtained from the specimens, the optimal manufacturing process was established for the inner and outer cladding tubes of the dual-cooled fuel

  1. Analysis of Valve Requirements for High-Efficiency Digital Displacement Fluid Power Motors

    Rømer, Daniel; Johansen, Per; Pedersen, Henrik C.

    2013-01-01

    Digital displacement fluid power motors have been shown to enable high-efficiency operation in a wide operation range, including the part load range where conventional fluid power motors suffer from poor efficiencies. The use of these digital displacement motors sets new requirements for the valve...... transition time and flow-pressure coefficient are normalized, leading to a presentation of the general efficiency map of the digital displacement motor. Finally, the performance of existing commercial valves with respect to digital motors is commented on....

  2. Menhir: An Environment for High Performance Matlab

    Stéphane Chauveau

    1999-01-01

    In this paper we present Menhir, a compiler for generating sequential or parallel code from the Matlab language. The compiler has been designed in the context of using Matlab as a specification language. One of the major features of Menhir is its retargetability to generate parallel and sequential C or Fortran code. We present the compilation process and the target system description for Menhir. Preliminary performance results are given and compared with MCC, the MathWorks Matlab compiler.

  3. Inclusion control in high-performance steels

    Holappa, L.E.K.; Helle, A.S.

    1995-01-01

    Progress of clean steel production, fundamentals of oxide and sulphide inclusions as well as inclusion morphology in normal and calcium treated steels are described. Effects of cleanliness and inclusion control on steel properties are discussed. In many demanding constructional and engineering applications the nonmetallic inclusions have a quite decisive role in steel performance. An example of combining good mechanical properties and superior machinability by applying inclusion control is presented. (author)

  4. Emerging technologies for high performance infrared detectors

    Tan Chee Leong; Mohseni Hooman

    2018-01-01

    Infrared photodetectors (IRPDs) have become important devices in various applications such as night vision, military missile tracking, medical imaging, industrial defect imaging, environmental sensing, and exoplanet exploration. Mature semiconductor technologies such as mercury cadmium telluride and III–V material-based photodetectors have been dominating the industry. However, in the last few decades, significant funding and research have been focused on improving the performance of IRPDs such as...

  5. Fibre optic connectors with high-return-loss performance

    Knott, Michael P.; Johnson, R.; Cooke, K.; Longhurst, P. C.

    1990-09-01

    This paper describes the development of a single mode fibre optic connector with high return loss performance without the use of index matching. Partial reflection of incident light at a fibre optic connector interface is a recognised problem which can result in increased noise and waveform distortion. This is particularly important for video transmission in subscriber networks, which requires a high signal to noise ratio. A number of methods can be used to improve the return loss. The method described here uses a process which angles the connector endfaces. Measurements show typical return losses of -55 dB can be achieved for an end angle of 6 degrees. Insertion loss results are also presented.
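
    For reference, return loss expresses the reflected fraction of the incident power on a logarithmic scale; the quoted figure of -55 dB corresponds to only about 3 parts per million of the incident power being reflected:

      RL \;=\; 10\log_{10}\!\left(\frac{P_{\text{reflected}}}{P_{\text{incident}}}\right), \qquad RL = -55\ \text{dB} \;\Rightarrow\; \frac{P_{\text{reflected}}}{P_{\text{incident}}} = 10^{-5.5} \approx 3.2\times 10^{-6}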

  6. Development of a high performance liquid chromatography method ...

    Development of a high performance liquid chromatography method for simultaneous ... Purpose: To develop and validate a new low-cost high performance liquid chromatography (HPLC) method for ..... Several papers have reported the use of ...

  7. High Performance Home Building Guide for Habitat for Humanity Affiliates

    Lindsey Marburger

    2010-10-01

    This guide covers basic principles of high performance Habitat construction, steps to achieving high performance Habitat construction, resources to help improve building practices, materials, etc., and affiliate profiles and recommendations.

  8. [Precautions of physical performance requirements and test methods during product standard drafting process of medical devices].

    Song, Jin-Zi; Wan, Min; Xu, Hui; Yao, Xiu-Jun; Zhang, Bo; Wang, Jin-Hong

    2009-09-01

    The main purpose of this article is to discuss standardization and normalization of product standards for medical devices. It analyzes problems related to physical performance requirements and test methods during the product standard drafting process and makes corresponding suggestions.

  9. 77 FR 11995 - Passenger Vessel Operator Financial Responsibility Requirements for Non-Performance of...

    2012-02-28

    ... Vessel Operator Financial Responsibility Requirements for Non-Performance of Transportation AGENCY..., 2011, the Commission issued its Notice of Proposed Rulemaking (NPRM) to update its financial... cost of financial responsibility coverage because of the use of alternative coverage options. However...

  10. Influence of Basalt FRP Mesh Reinforcement on High-Performance Concrete Thin Plates at High Temperatures

    Hulin, Thomas; Lauridsen, Dan H.; Hodicky, Kamil

    2015-01-01

    A basalt fiber–reinforced polymer (BFRP) mesh was introduced as reinforcement in high-performance concrete (HPC) thin plates (20–30 mm) for implementation in precast sandwich panels. An experimental program studied the BFRP mesh influence on HPC exposed to high temperature. A set of standard...... furnace tests compared performances of HPC with and without BFRP mesh, assessing material behavior; another set including polypropylene (PP) fibers to avoid spalling compared the performance of BFRP mesh reinforcement to that of regular steel reinforcement, assessing mechanical properties......, requiring the use of steel. Microscope observations highlighted degradation of the HPC-BFRP mesh interface with temperature due to the melting polymer matrix of the mesh. These observations call for caution when using fiber-reinforced polymer (FRP) reinforcement in elements exposed to fire hazard....

  11. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  12. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  13. Development of High Performance Piezoelectric Polyimides

    Simpson, Joycelyn O.; St.Clair, Terry L.; Welch, Sharon S.

    1996-01-01

    In this work a series of polyimides are investigated which exhibit a strong piezoelectric response and polarization stability at temperatures in excess of 100 C. This work was motivated by the need to develop piezoelectric sensors suitable for use in high temperature aerospace applications.

  14. Tank waste remediation system high-level waste vitrification system development and testing requirements

    Calmus, R.B.

    1995-01-01

    This document provides the fiscal year (FY) 1995 recommended high-level waste melter system development and testing (D and T) requirements. The first phase of melter system testing (FY 1995) will focus on the feasibility of high-temperature operation of recommended high-level waste melter systems. These test requirements will be used to establish the basis for defining detailed testing work scope, cost, and schedules. This document includes a brief summary of the recommended technologies and technical issues associated with each technology. In addition, this document presents the key D and T activities and engineering evaluations to be performed for a particular technology or general melter system support feature. The strategy for testing in Phase 1 (FY 1995) is to pursue testing of the recommended high-temperature technologies, namely the high-temperature, ceramic-lined, joule-heated melter, referred to as the HTCM, and the high-frequency, cold-wall, induction-heated melter, referred to as the cold-crucible melter (CCM). This document provides a detailed description of the FY 1995 D and T needs and requirements relative to each of the high-temperature technologies

  15. Developing Flexible, High Performance Polymers with Self-Healing Capabilities

    Jolley, Scott T.; Williams, Martha K.; Gibson, Tracy L.; Caraccio, Anne J.

    2011-01-01

    Flexible, high performance polymers such as polyimides are often employed in aerospace applications. They typically find uses in areas where improved physical characteristics such as fire resistance, long term thermal stability, and solvent resistance are required. It is anticipated that such polymers could find uses in future long duration exploration missions as well. Their use would be even more advantageous if self-healing capability or mechanisms could be incorporated into these polymers. Such innovative approaches are currently being studied at the NASA Kennedy Space Center for use in high performance wiring systems or inflatable and habitation structures. Self-healing or self-sealing capability would significantly reduce maintenance requirements, and increase the safety and reliability performance of the systems into which these polymers would be incorporated. Many unique challenges need to be overcome in order to incorporate a self-healing mechanism into flexible, high performance polymers. Significant research into the incorporation of a self-healing mechanism into structural composites has been carried out over the past decade by a number of groups, notable among them being the University of Illinois [1]. Various mechanisms for the introduction of self-healing have been investigated. Examples of these are: 1) Microcapsule-based healant delivery. 2) Vascular network delivery. 3) Damage induced triggering of latent substrate properties. Successful self-healing has been demonstrated in structural epoxy systems with almost complete reestablishment of composite strength being achieved through the use of microencapsulation technology. However, the incorporation of a self-healing mechanism into a system in which the material is flexible, or a thin film, is much more challenging. In the case of using microencapsulation, healant core content must be small enough to reside in films less than 0.1 millimeters thick, and must overcome significant capillary and surface

  16. Powder metallurgical high performance materials. Proceedings. Volume 1: high performance P/M metals

    Kneringer, G.; Roedhammer, P.; Wildner, H.

    2001-01-01

    The proceedings of this sequence of seminars form an impressive chronicle of the continued progress in the understanding of refractory metals and cemented carbides and in their manufacture and application. There the ingenuity and assiduous work of thousands of scientists and engineers striving for progress in the field of powder metallurgy is documented in more than 2000 contributions covering some 30000 pages. The 15th Plansee Seminar was convened under the general theme 'Powder Metallurgical High Performance Materials'. Under this broadened perspective the seminar will strive to look beyond the refractory metals and cemented carbides, which remain at its focus, to novel classes of materials, such as intermetallic compounds, with potential for high temperature applications. (author)

  18. 40 CFR 80.815 - What are the gasoline toxics performance requirements for refiners and importers?

    2010-07-01

    ... toxics requirements of this subpart apply separately for each of the following types of gasoline produced...) The gasoline toxics performance requirements of this subpart apply to gasoline produced at a refinery... not apply to gasoline produced by a refinery approved under § 80.1334, pursuant to § 80.1334(c). (2...

  19. High performance flexible electronics for biomedical devices.

    Salvatore, Giovanni A; Münzenrieder, Niko; Zysset, Christoph; Kinkeldei, Thomas; Petti, Luisa; Tröster, Gerhard

    2014-01-01

    Plastic electronics is soft, deformable and lightweight, making it suitable for the realization of devices that can form an intimate interface with the body, be implanted, or be integrated into textiles for wearable and biomedical applications. Here, we present flexible electronics based on amorphous oxide semiconductors (a-IGZO) whose performance can reach MHz frequencies even when bent around a hair. We developed an assembly technique to integrate complex electronic functionalities into textiles while preserving the softness of the garment. All this and further developments can open up new opportunities in health monitoring, biotechnology and telemedicine.

  20. High-performance commercial building facades

    Lee, Eleanor; Selkowitz, Stephen; Bazjanac, Vladimir; Inkarojrit, Vorapat; Kohler, Christian

    2002-06-01

    This study focuses on advanced building facades that use daylighting, sun control, ventilation systems, and dynamic systems. A quick perusal of the leading architectural magazines, or a discussion in most architectural firms today will eventually lead to mention of some of the innovative new buildings that are being constructed with all-glass facades. Most of these buildings are appearing in Europe, although interestingly U.S. A/E firms often have a leading role in their design. This "emerging technology" of heavily glazed facades is often associated with buildings whose design goals include energy efficiency, sustainability, and a "green" image. While there are a number of new books on the subject with impressive photos and drawings, there is little critical examination of the actual performance of such buildings, and a generally poor understanding as to whether they achieve their performance goals, or even what those goals might be. Even if the building "works" it is often dangerous to take a design solution from one climate and location and transport it to a new one without a good causal understanding of how the systems work. In addition, there is a wide range of existing and emerging glazing and fenestration technologies in use in these buildings, many of which break new ground with respect to innovative structural use of glass. It is unclear as to how well many of these designs would work as currently formulated in California locations dominated by intense sunlight and seismic events. Finally, the costs of these systems are higher than normal facades, but claims of energy and productivity savings are used to justify some of them. Once again these claims, while plausible, are largely unsupported. There have been major advances in glazing and facade technology over the past 30 years and we expect to see continued innovation and product development. It is critical in this process to be able to

  1. Miniaturized high performance sensors for space plasmas

    Young, D.T.

    1996-01-01

    Operating under ever more constrained budgets, NASA has turned to a new paradigm for instrumentation and mission development in which smaller, faster, better, cheaper is of primary consideration for future space plasma investigations. The author presents several examples showing the influence of this new paradigm on sensor development and discusses certain implications for the scientific return from resource-constrained sensors. The author also discusses one way to improve space plasma sensor performance, which is to search out new technologies, measurement techniques and instrument analogs from related fields including, among others, laboratory plasma physics.

  2. High Performance Building Mockup in FLEXLAB

    McNeil, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kohler, Christian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lee, Eleanor S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Selkowitz, Stephen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-08-30

    Genentech has ambitious energy and indoor environmental quality performance goals for Building 35 (B35) being constructed by Webcor at the South San Francisco campus. Genentech and Webcor contracted with the Lawrence Berkeley National Laboratory (LBNL) to test building systems including lighting, lighting controls, shade fabric, and automated shading controls in LBNL’s new FLEXLAB facility. The goal of the testing is to ensure that the systems installed in the new office building will function in a way that reduces energy consumption and provides a comfortable work environment for employees.

  3. High performance computations using dynamical nucleation theory

    Windus, T L; Crosby, L D; Kathmann, S M

    2008-01-01

    Chemists continue to explore the use of very large computations to perform simulations that describe the molecular level physics of critical challenges in science. In this paper, we describe the Dynamical Nucleation Theory Monte Carlo (DNTMC) model - a model for determining molecular scale nucleation rate constants - and its parallel capabilities. The potential for bottlenecks and the challenges to running on future petascale or larger resources are delineated. A 'master-slave' solution is proposed to scale to the petascale and will be developed in the NWChem software. In addition, mathematical and data analysis challenges are described

  4. Pressurized planar electrochromatography, high-performance thin-layer chromatography and high-performance liquid chromatography--comparison of performance.

    Płocharz, Paweł; Klimek-Turek, Anna; Dzido, Tadeusz H

    2010-07-16

    Kinetic performance, measured by plate height, of High-Performance Thin-Layer Chromatography (HPTLC), High-Performance Liquid Chromatography (HPLC) and Pressurized Planar Electrochromatography (PPEC) was compared for systems with the adsorbent of the HPTLC RP18W plate from Merck as the stationary phase and a mobile phase composed of acetonitrile and buffer solution. The HPLC column was packed with the adsorbent, which was scraped from the chromatographic plate mentioned. An additional HPLC column was also packed with an adsorbent of 5 μm particle diameter, C18-type silica based (LiChrosorb RP-18 from Merck). The dependence of plate height of both the HPLC and PPEC separating systems on flow velocity of the mobile phase, and on migration distance of the mobile phase in the TLC system, was presented using a test solute (prednisolone succinate). The highest performance among the systems investigated was obtained for the PPEC system. The separation efficiency of the systems investigated in the paper was additionally confirmed by the separation of a test mixture composed of six hormones.

  5. High-Speed, High-Performance DQPSK Optical Links with Reduced Complexity VDFE Equalizers

    Maki Nanou

    2017-02-01

    Optical transmission technologies optimized for optical network segments sensitive to power consumption and cost comprise modulation formats with direct detection technologies. Specifically, non-return-to-zero differential quaternary phase shift keying (NRZ-DQPSK) in deployed fiber plants, combined with high-performance, low-complexity electronic equalizers to compensate residual impairments at the receiver end, can prove a viable solution for high-performance, high-capacity optical links. Joint processing of the constructive and the destructive signals at the single-ended DQPSK receiver provides improved performance compared to the balanced configuration, however, at the expense of higher hardware requirements, a fact that may not be neglected especially in the case of high-speed optical links. To overcome this bottleneck, the use of partially joint constructive/destructive DQPSK equalization is investigated in this paper. Symbol-by-symbol equalization is performed by means of Volterra decision feedback-type equalizers, driven by a reduced subset of signals selected from the constructive and the destructive ports of the optical detectors. The proposed approach offers a low-complexity alternative for electronic equalization, without sacrificing much of the performance compared to the fully-deployed counterpart. The efficiency of the proposed equalizers is demonstrated by means of computer simulation in a typical optical transmission scenario.

  6. A high performance totally ordered multicast protocol

    Montgomery, Todd; Whetten, Brian; Kaplan, Simon

    1995-01-01

    This paper presents the Reliable Multicast Protocol (RMP). RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service such as IP Multicasting. RMP is fully and symmetrically distributed so that no site bears an undue portion of the communication load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority resilient, and totally resilient atomic delivery. These QoS guarantees are selectable on a per packet basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, an implicit naming service, mutually exclusive handlers for messages, and mutually exclusive locks. It has commonly been held that a large performance penalty must be paid in order to implement total ordering -- RMP discounts this. On SparcStation 10's on a 1250 KB/sec Ethernet, RMP provides totally ordered packet delivery to one destination at 842 KB/sec throughput and with 3.1 ms packet latency. The performance stays roughly constant independent of the number of destinations. For two or more destinations on a LAN, RMP provides higher throughput than any protocol that does not use multicast or broadcast.

  7. High Performance, Three-Dimensional Bilateral Filtering

    Bethel, E. Wes

    2008-01-01

    Image smoothing is a fundamental operation in computer vision and image processing. This work has two main thrusts: (1) implementation of a bilateral filter suitable for use in smoothing, or denoising, 3D volumetric data; (2) implementation of the 3D bilateral filter in three different parallelization models, along with parallel performance studies on two modern HPC architectures. Our bilateral filter formulation is based upon the work of Tomasi [11], but extended to 3D for use on volumetric data. Our three parallel implementations use POSIX threads, the Message Passing Interface (MPI), and Unified Parallel C (UPC), a Partitioned Global Address Space (PGAS) language. Our parallel performance studies, which were conducted on a Cray XT4 supercomputer and a quad-socket, quad-core Opteron workstation, show our algorithm to have near-perfect scalability up to 120 processors. Parallel algorithms, such as the one we present here, will have an increasingly important role for use in production visual analysis systems as the underlying computational platforms transition from single- to multi-core architectures in the future.
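
    A 3D bilateral filter replaces each voxel with a weighted average of its neighbours, where the weights combine spatial closeness and intensity similarity. The sketch below is a minimal serial NumPy illustration of that idea only, not the parallel POSIX-threads/MPI/UPC implementations described above; the array name, window radius, and sigma values are illustrative assumptions.

        import numpy as np

        def bilateral_filter_3d(volume, radius=2, sigma_space=1.5, sigma_range=0.1):
            """Serial 3D bilateral filter over a volumetric array."""
            padded = np.pad(volume, radius, mode="edge")
            out = np.empty_like(volume, dtype=float)
            # Spatial (Gaussian) weights of the fixed (2r+1)^3 neighbourhood.
            ax = np.arange(-radius, radius + 1)
            dz, dy, dx = np.meshgrid(ax, ax, ax, indexing="ij")
            spatial_w = np.exp(-(dz**2 + dy**2 + dx**2) / (2 * sigma_space**2))
            for z in range(volume.shape[0]):
                for y in range(volume.shape[1]):
                    for x in range(volume.shape[2]):
                        patch = padded[z:z + 2 * radius + 1,
                                       y:y + 2 * radius + 1,
                                       x:x + 2 * radius + 1]
                        # Range weight penalizes neighbours with very different intensity.
                        range_w = np.exp(-((patch - volume[z, y, x]) ** 2) / (2 * sigma_range**2))
                        w = spatial_w * range_w
                        out[z, y, x] = (w * patch).sum() / w.sum()
            return out

    In parallel versions of this kind of filter, the outer z-loop is the natural axis to partition across threads or ranks, since each output voxel depends only on a read-only neighbourhood of the input.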

  8. High-performance sport, marijuana, and cannabimimetics.

    Hilderbrand, Richard L

    2011-11-01

    The prohibition on use of cannabinoids in sporting competitions has been widely debated and continues to be a contentious issue. Information continues to accumulate on the adverse health effects of smoked marijuana and the decrement of performance caused by the use of cannabinoids. The objective of this article is to provide an overview of cannabinoids and cannabimimetics that directly or indirectly impact sport, the rules of sport, and performance of the athlete. This article reviews some of the history of marijuana in Olympic and Collegiate sport, summarizes the guidelines by which a substance is added to the World Anti-Doping Agency Prohibited List, and updates information on the pharmacologic effects of cannabinoids and their mechanism of action. The recently marketed cannabimimetics Spice and K2 are included in the discussion as they activate the same receptors as are activated by THC. The article also provides a view as to why the World Anti-Doping Agency prohibits cannabinoid or cannabimimetic use in competition and should continue to do so.

  10. A high level language for a high performance computer

    Perrott, R. H.

    1978-01-01

    The proposed computational aerodynamic facility will join the ranks of the supercomputers due to its architecture and increased execution speed. At present, the languages used to program these supercomputers have been modifications of programming languages which were designed many years ago for sequential machines. A new programming language should be developed based on the techniques which have proved valuable for sequential programming languages and incorporating the algorithmic techniques required for these supercomputers. The design objectives for such a language are outlined.

  11. Australia's new high performance research reactor

    Miller, R.; Abbate, P.M.

    2003-01-01

    A contract for the design and construction of the Replacement Research Reactor was signed in July 2000 between ANSTO and INVAP from Argentina. Since then the detailed design has been completed, a construction authorization has been obtained, and construction has commenced. The reactor design embodies modern safety thinking together with innovative solutions to ensure a highly safe and reliable plant. Also, significant effort has been placed on providing the facility with diverse and ample facilities to maximize its use for irradiating material for radioisotope production as well as providing high neutron fluxes for neutron beam research. The project management organization and planning are commensurate with the complexity of the project and the number of players involved. (author)

  12. High Performance Single Nanowire Tunnel Diodes

    Wallentin, Jesper; Persson, Johan Mikael; Wagner, Jakob Birkedal

    ... is the tunnel (Esaki) diode, which provides a low-resistance connection between junctions. We demonstrate an InP-GaAs NW axial heterostructure with tunnel diode behavior. InP and GaAs can be readily n- and p-doped, respectively, and the heterointerface is expected to have an advantageous type II band alignment ... NWs were contacted in a NW-FET setup. Electrical measurements at room temperature display typical tunnel diode behavior, with a Peak-to-Valley Current Ratio (PVCR) as high as 8.2 and a peak current density as high as 329 A/cm2. Low-temperature measurements show an improved PVCR of up to 27.6.

  13. Future Vehicle Technologies : high performance transportation innovations

    Pratt, T. [Future Vehicle Technologies Inc., Maple Ridge, BC (Canada)

    2010-07-01

    Battery management systems (BMS) were discussed in this presentation, with particular reference to the basic BMS design considerations; safety; undisclosed information about BMS; the essence of BMS; and Future Vehicle Technologies' BMS solution. Basic BMS design considerations that were presented included the balancing methodology; prismatic/cylindrical cells; cell protection; accuracy; PCB design, size and components; communications protocol; cost of manufacture; and expandability. In terms of safety, the presentation addressed lithium fires; high voltage; high voltage ground detection; crash/rollover shutdown; complete pack shutdown capability; and heat shields, casings, and impact protection. BMS bus bar engineering considerations were discussed along with good chip design. It was concluded that FVT's advantage is a unique skill set in automotive technology together with development speed and cost effectiveness. tabs., figs.

  14. Radiation cured coatings for high performance products

    Parkins, J.C.; Teesdale, D.H.

    1984-01-01

    Development over the past ten years of radiation curable coating and lacquer systems and the means of curing them has led to new products in the packaging, flooring, furniture and other industries. Solventless lacquer systems formulated with acrylates and other resins enable high levels of durability, scuff resistance and gloss to be achieved. Ultra violet and electron beam radiation curing are used, the choice depending on the nature of the coating, the product and the scale of the operation. (author)

  15. High thermoelectric performance of graphite nanofibers

    Tran, Van-Truong; Saint-Martin, Jérôme; Dollfus, Philippe; Volz, Sebastian

    2017-01-01

    Graphite nanofibers (GNFs) have been demonstrated to be a promising material for hydrogen storage and heat management in electronic devices. Here, by means of first-principles and transport simulations, we show that GNFs can also be an excellent material for thermoelectric applications thanks to the interlayer weak van der Waals interaction that induces low thermal conductance and a step-like shape in the electronic transmission with mini-gaps, which are necessary ingredients to achieve high ...

  16. Information processing among high-performance managers

    S.C. Garcia-Santos

    2010-01-01

    The purpose of this study was to evaluate the information processing of 43 business managers with superior professional performance. The theoretical framework considers three models: Henry Mintzberg's Theory of Managerial Roles, the Theory of Information Processing, and John Exner's response process model for the Rorschach. The participants were evaluated with the Rorschach method. The results show that these managers are able to collect data, evaluate them and establish rankings properly. At the same time, they are capable of being objective and accurate in assessing problems. This information processing style permits an interpretation of the surrounding world on the basis of a very personal and characteristic way of processing, or cognitive style.

  17. High temperature performance of polymer composites

    Keller, Thomas

    2014-01-01

    The authors explain the changes in the thermophysical and thermomechanical properties of polymer composites under elevated temperatures and fire conditions. Using microscale physical and chemical concepts they allow researchers to find reliable solutions to their engineering needs on the macroscale. In a unique combination of experimental results and quantitative models, a framework is developed to realistically predict the behavior of a variety of polymer composite materials over a wide range of thermal and mechanical loads. In addition, the authors treat extreme fire scenarios up to more than 1000°C for two hours, presenting heat-protection methods to improve the fire resistance of composite materials and full-scale structural members, and discuss their performance after fire exposure. Thanks to the microscopic approach, the developed models are valid for a variety of polymer composites and structural members, making this work applicable to a wide audience, including materials scientists, polymer chemist...

  18. High performance concrete with blended cement

    Biswas, P.P.; Saraswati, S.; Basu, P.C.

    2012-01-01

    The principal objectives of the proposed project are twofold: firstly, to develop an HPC mix suitable for NPP structures with blended cement, and secondly, to study its durability, which is necessary for the desired long-term performance. The three grades of concrete to be considered in the proposed project are M35, M50 and M60, with two types of blended cement, i.e. Portland slag cement (PSC) and Portland pozzolana cement (PPC). Three types of mineral admixtures - silica fume, fly ash and ground granulated blast furnace slag - will be used. Concrete mixes with OPC and without any mineral admixture will be considered as the reference case. A durability study of these mixes will be carried out.

  19. High Performance Fuel Technology Development(I)

    Song, Kun Woo; Kim, Keon Sik; Bang, Jeong Yong; Park, Je Keon; Chen, Tae Hyun; Kim, Hyung Kyu

    2010-04-01

    The dual-cooled annular fuel has been investigated for the purpose of achieving a power uprate of 20% and decreasing pellet temperature by 30%. A 12x12 rod array and basic design were developed, which are mechanically compatible with the OPR-1000. A reactor core analysis has been performed using this design, and the results have shown that the criteria of nuclear, thermohydraulic and safety design are satisfied and that pellet temperature can be lowered by 40% even at 120% power. The basic design of the fuel components was developed and the cladding thickness was designed through analysis and experiments. Solutions have been proposed and analyzed for technical issues such as 'inner channel blockage' and 'imbalance between inner and outer coolant'. The annular pellet was fabricated with good control of shape and size; in particular, a new sintering technique has been developed to control the deviation of the inner diameter within ±5 μm. The irradiation test of annular pellets has been conducted up to 10 MWD/kgU to determine the densification and swelling behaviors. Eleven types of candidate materials have been developed for the PCI-endurance pellet, and the material containing the Mn-Al additive showed creep performance much better than that of UO2. The HANA cladding has been irradiated up to 61 MWD/kgU, and the results have shown that its oxidation resistance is better by 40% than that of Zircaloy. Thirty types of candidate materials for the next generation have been developed through alloy design and property tests.

  20. Carbon nanotubes for high-performance logic

    Chen, Zhihong; Wong, H.S. Phillip; Mitra, Subhasish; Bol, Aggeth; Peng, Lianmao; Hills, Gage; Thissen, Nick

    2014-01-01

    Single-wall carbon nanotubes (CNTs) were discovered in 1993 and have been an area of intense research since then. They offer the right dimensions to explore material science and physical chemistry at the nanoscale and are the perfect system to study low-dimensional physics and transport. In the past decade, more attention has been shifted toward making use of this unique nanomaterial in real-world applications. In this article, we focus on potential applications of CNTs in the high-performanc...

  1. Single High Fidelity Geometric Data Sets for LCM - Model Requirements

    2006-11-01

    ... designed specifically to withstand severe underwater explosion (UNDEX) loading caused by the detonation of weapons such as bombs, missiles, mines and ... Explosions (BLEVEs): the energy from a BLEVE is from a sudden change of phase of stored material. Tanks of liquids immersed in pool fires BLEVE when the ...

  2. High-performance silicon nanowire bipolar phototransistors

    Tan, Siew Li; Zhao, Xingyan; Chen, Kaixiang; Crozier, Kenneth B.; Dan, Yaping

    2016-07-01

    Silicon nanowires (SiNWs) have emerged as sensitive absorbing materials for photodetection at wavelengths ranging from ultraviolet (UV) to the near infrared. Most of the reports on SiNW photodetectors are based on photoconductor, photodiode, or field-effect transistor device structures. These SiNW devices each have their own advantages and trade-offs in optical gain, response time, operating voltage, and dark current noise. Here, we report on the experimental realization of single SiNW bipolar phototransistors on silicon-on-insulator substrates. Our SiNW devices are based on bipolar transistor structures with an optically injected base region and are fabricated using CMOS-compatible processes. The experimentally measured optoelectronic characteristics of the SiNW phototransistors are in good agreement with simulation results. The SiNW phototransistors exhibit significantly enhanced response to UV and visible light, compared with typical Si p-i-n photodiodes. The near infrared responsivities of the SiNW phototransistors are comparable to those of Si avalanche photodiodes but are achieved at much lower operating voltages. Compared with other reported SiNW photodetectors as well as conventional bulk Si photodiodes and phototransistors, the SiNW phototransistors in this work demonstrate the combined advantages of high gain, high photoresponse, low dark current, and low operating voltage.

  3. Generic algorithms for high performance scalable geocomputing

    de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek

    2016-04-01

    During the last decade, the characteristics of computing hardware have changed a lot. For example, instead of a single general purpose CPU core, personal computers nowadays contain multiple cores per CPU and often general purpose accelerators, like GPUs. Additionally, compute nodes are often grouped together to form clusters or a supercomputer, providing enormous amounts of compute power. For existing earth simulation models to be able to use modern hardware platforms, their compute intensive parts must be rewritten. This can be a major undertaking and may involve many technical challenges. Compute tasks must be distributed over CPU cores, offloaded to hardware accelerators, or distributed to different compute nodes. And ideally, all of this should be done in such a way that the compute task scales well with the hardware resources. This presents two challenges: 1) how to make good use of all the compute resources and 2) how to make these compute resources available for developers of simulation models, who may not (want to) have the required technical background for distributing compute tasks. The first challenge requires the use of specialized technology (e.g.: threads, OpenMP, MPI, OpenCL, CUDA). The second challenge requires the abstraction of the logic handling the distribution of compute tasks from the model-specific logic, hiding the technical details from the model developer. To assist the model developer, we are developing a C++ software library (called Fern) containing algorithms that can use all CPU cores available in a single compute node (distributing tasks over multiple compute nodes will be done at a later stage). The algorithms are grid-based (finite difference) and include local and spatial operations such as convolution filters. The algorithms handle distribution of the compute tasks to CPU cores internally. In the resulting model the low-level details of how this is done are separated from the model-specific logic representing the modeled system
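
    As a rough, hypothetical illustration of the abstraction described above (not the Fern C++ API itself), the sketch below hides the distribution of a simple 3x3 focal-mean grid operation over CPU cores behind a single function; the row-block partitioning, worker count, and function names are assumptions made for the example.

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def _focal_mean_rows(args):
            """Worker: 3x3 focal mean for one block of rows (block includes a 1-cell halo)."""
            block, n_rows = args
            out = np.empty((n_rows, block.shape[1] - 2))
            for i in range(n_rows):
                for j in range(out.shape[1]):
                    out[i, j] = block[i:i + 3, j:j + 3].mean()
            return out

        def focal_mean(grid, workers=4):
            """Distribute a focal (neighbourhood) operation over CPU cores;
            the caller never sees how the grid is partitioned."""
            padded = np.pad(grid, 1, mode="edge")
            bounds = np.linspace(0, grid.shape[0], workers + 1, dtype=int)
            tasks = [(padded[lo:hi + 2, :], hi - lo)
                     for lo, hi in zip(bounds[:-1], bounds[1:])]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                return np.vstack(list(pool.map(_focal_mean_rows, tasks)))

        if __name__ == "__main__":
            print(focal_mean(np.random.rand(200, 300)).shape)  # (200, 300)

    The point of the sketch is the separation of concerns: the model developer calls focal_mean on a grid, while the halo exchange and block partitioning stay inside the library routine.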

  4. 14 CFR 151.45 - Performance of construction work: General requirements.

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Performance of construction work: General... § 151.45 Performance of construction work: General requirements. (a) All construction work under a... work under a project until— (1) The sponsor has furnished three conformed copies of the contract to the...

  5. 14 CFR 151.49 - Performance of construction work: Contract requirements.

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Performance of construction work: Contract... § 151.49 Performance of construction work: Contract requirements. (a) Contract provisions. In addition to any other provisions necessary to ensure completion of the work in accordance with the grant...

  6. Air Force Officials did not Consistently Comply with Requirements for Assessing Contractor Performance

    2016-01-29

    ... agencies must perform frequent evaluation of compliance with reporting requirements so they can readily identify delinquent past performance efforts ...

  7. 40 CFR 63.344 - Performance test requirements and test methods.

    2010-07-01

    ... electroplating tanks or chromium anodizing tanks. The sampling time and sample volume for each run of Methods 306... Chromium Anodizing Tanks § 63.344 Performance test requirements and test methods. (a) Performance test... Emissions From Decorative and Hard Chromium Electroplating and Anodizing Operations,” appendix A of this...

  8. 42 CFR 457.710 - State plan requirements: Strategic objectives and performance goals.

    2010-10-01

    .... The State's strategic objectives, performance goals and performance measures must include a common... 42 Public Health 4 2010-10-01 2010-10-01 false State plan requirements: Strategic objectives and...) ALLOTMENTS AND GRANTS TO STATES Strategic Planning, Reporting, and Evaluation § 457.710 State plan...

  9. 46 CFR 160.037-3 - Materials, workmanship, construction, and performance requirements.

    2010-10-01

    ... 46 Shipping 6 2010-10-01 2010-10-01 false Materials, workmanship, construction, and performance...) EQUIPMENT, CONSTRUCTION, AND MATERIALS: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Hand Orange Smoke Distress Signals § 160.037-3 Materials, workmanship, construction, and performance requirements. (a...

  10. High performance embedded system for real-time pattern matching

    Sotiropoulou, C.-L.; Luciano, P.; Gkaitatzis, S.; Citraro, S.; Giannetti, P.; Dell'Orso, M.

    2017-01-01

    In this paper we present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton–proton collisions in hadron collider experiments. A miniaturized version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom designed Associative Memory chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post processing algorithms (e.g. pixel clustering) are also implemented on the FPGA. The pattern matching can be executed on a 2D or 3D space, on black and white or grayscale images, depending on the application and thus increasing exponentially the processing requirements of the system. We present the firmware implementation of the training and pattern matching algorithm, performance and results on a latest generation Xilinx Kintex Ultrascale FPGA device. - Highlights: • A high performance embedded system for real-time pattern matching is proposed. • It is based on a system developed for High Energy Physics experiment triggers. • It mimics the operation of the human brain (cognitive image processing). • The process can be executed on 2D and 3D, black and white or grayscale images. • The implementation uses FPGAs and custom designed associative memory (AM) chips.
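
    The associative-memory approach described above compares an incoming hit pattern against many stored reference patterns in parallel and returns the closest matches. As a purely conceptual, software-only stand-in (the real system uses a custom AM chip and FPGA firmware), the code below scores a binary input pattern against a bank of stored patterns by Hamming distance; the pattern width, bank size, and threshold are illustrative assumptions.

        import numpy as np

        def match_patterns(query, reference_bank, max_mismatches=2):
            """Indices of stored patterns within a Hamming-distance threshold of the query.
            The AM chip evaluates all rows concurrently; here it is a vectorized comparison."""
            distances = np.count_nonzero(reference_bank != query, axis=1)
            return np.flatnonzero(distances <= max_mismatches)

        rng = np.random.default_rng(0)
        bank = rng.integers(0, 2, size=(1000, 64), dtype=np.uint8)   # 1000 stored 64-bit patterns
        query = bank[42].copy()
        query[:2] ^= 1                         # flip two bits to simulate detector noise
        print(match_patterns(query, bank))     # expected to contain index 42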

  12. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provide a myriad of challenges when running on a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project we highlight a stack of tools our team utilizes and has developed to ensure that large scale simulation and analysis work are commonplace and provide operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases they have been applied to.
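
    Where the workflow above executes independent analysis tasks in a task-parallel way with MPI, one common pattern is a rank-strided division of a task list. The mpi4py sketch below illustrates that pattern only; the file names and the analyze() routine are placeholders, not part of the CASCADE tooling.

        from mpi4py import MPI

        def analyze(path):
            """Placeholder for a per-file analysis step (e.g., an extreme-value statistic)."""
            return len(path)  # stand-in result

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # Every rank sees the same (hypothetical) task list and takes every size-th item.
        files = [f"sim_output_{i:04d}.nc" for i in range(100)]
        local_results = [analyze(f) for f in files[rank::size]]

        # Collect per-rank results on rank 0 for the downstream publication step.
        all_results = comm.gather(local_results, root=0)
        if rank == 0:
            flat = [r for chunk in all_results for r in chunk]
            print(f"analyzed {len(flat)} files across {size} ranks")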

  13. Development of high performance hybrid rocket fuels

    Zaseck, Christopher R.

    ... In order to examine paraffin/additive combustion in a motor environment, I conducted experiments on well characterized aluminum based additives. In particular, I investigate the influence of aluminum, unpassivated aluminum, milled aluminum/polytetrafluoroethylene (PTFE), and aluminum hydride on the performance of paraffin fuels for hybrid rocket propulsion. I use an optically accessible combustor to examine the performance of the fuel mixtures in terms of characteristic velocity efficiency and regression rate. Each combustor test consumes a 12.7 cm long, 1.9 cm diameter fuel strand under 160 kg/m^2-s of oxygen at up to 1.4 MPa. The experimental results indicate that the addition of 5 wt.% 30 μm or 80 nm aluminum to paraffin increases the regression rate by approximately 15% compared to neat paraffin grains. At higher aluminum concentrations and nano-scale particle sizes, the increased melt layer viscosity causes slower regression. Alane and Al/PTFE at 12.5 wt.% increase the regression of paraffin by 21% and 32%, respectively. Finally, an aging study indicates that paraffin can protect air and moisture sensitive particles from oxidation. The opposed burner and aluminum/paraffin hybrid rocket experiments show that additives can alter bulk fuel properties, such as viscosity, that regulate entrainment. The general effect of melt layer properties on the entrainment and regression rate of paraffin is not well understood. Improved understanding of how solid additives affect the properties and regression of paraffin is essential to maximize performance. In this document I investigate the effect of melt layer properties on paraffin regression using inert additives. Tests are performed in the optical cylindrical combustor at ~1 MPa under a gaseous oxygen mass flux of ~160 kg/m^2-s. The experiments indicate that the regression rate is proportional to μ^0.08 ρ^0.38 κ^0.82. In addition, I explore how to predict fuel viscosity, thermal conductivity, and density prior to testing

  14. Emerging technologies for high performance infrared detectors

    Tan, Chee Leong; Mohseni, Hooman

    2018-01-01

    Infrared photodetectors (IRPDs) have become important devices in various applications such as night vision, military missile tracking, medical imaging, industry defect imaging, environmental sensing, and exoplanet exploration. Mature semiconductor technologies such as mercury cadmium telluride and III-V material-based photodetectors have been dominating the industry. However, in the last few decades, significant funding and research has been focused to improve the performance of IRPDs such as lowering the fabrication cost, simplifying the fabrication processes, increasing the production yield, and increasing the operating temperature by making use of advances in nanofabrication and nanotechnology. We will first review the nanomaterial with suitable electronic and mechanical properties, such as two-dimensional material, graphene, transition metal dichalcogenides, and metal oxides. We compare these with more traditional low-dimensional material such as quantum well, quantum dot, quantum dot in well, semiconductor superlattice, nanowires, nanotube, and colloid quantum dot. We will also review the nanostructures used for enhanced light-matter interaction to boost the IRPD sensitivity. These include nanostructured antireflection coatings, optical antennas, plasmonic, and metamaterials.

  16. Video performance for high security applications

    Connell, Jack C.; Norman, Bradley C.

    2010-01-01

    The complexity of physical protection systems has increased to address modern threats to national security and emerging commercial technologies. A key element of modern physical protection systems is the data presented to the human operator used for rapid determination of the cause of an alarm, whether false (e.g., caused by an animal, debris, etc.) or real (e.g., a human adversary). Alarm assessment, the human validation of a sensor alarm, primarily relies on imaging technologies and video systems. Developing measures of effectiveness (MOE) that drive the design or evaluation of a video system or technology becomes a challenge, given the subjectivity of the application (e.g., alarm assessment). Sandia National Laboratories has conducted empirical analysis using field test data and mathematical models such as binomial distribution and Johnson target transfer functions to develop MOEs for video system technologies. Depending on the technology, the task of the security operator and the distance to the target, the Probability of Assessment (PAs) can be determined as a function of a variety of conditions or assumptions. PAs used as an MOE allows the systems engineer to conduct trade studies, make informed design decisions, or evaluate new higher-risk technologies. This paper outlines general video system design trade-offs, discusses ways video can be used to increase system performance and lists MOEs for video systems used in subjective applications such as alarm assessment.
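
    As a hedged illustration of how a binomial model can turn field-test results into a probability-of-assessment MOE (the report's actual models, data, and Johnson-criteria parameters are not reproduced here), the sketch below computes a point estimate and an exact one-sided lower confidence bound on PA from the number of correct assessments in n trials; the trial counts and confidence level are invented for the example.

        from scipy.stats import beta

        def assessment_moe(successes, trials, confidence=0.95):
            """Point estimate and Clopper-Pearson style lower bound on the
            probability of assessment, treating each trial as Bernoulli."""
            p_hat = successes / trials
            alpha = 1.0 - confidence
            lower = beta.ppf(alpha, successes, trials - successes + 1) if successes else 0.0
            return p_hat, lower

        # Hypothetical field test: 47 of 50 alarms assessed correctly.
        print(assessment_moe(47, 50))   # roughly (0.94, 0.86)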

  17. High Dynamic Performance Nonlinear Source Emulator

    Nguyen-Duy, Khiem; Knott, Arnold; Andersen, Michael A. E.

    2016-01-01

    As research and development of renewable and clean energy based systems is advancing rapidly, the nonlinear source emulator (NSE) is becoming very essential for testing of maximum power point trackers or downstream converters. Renewable and clean energy sources play important roles in both terrestrial and nonterrestrial applications. However, most existing NSEs have only been concerned with simulating energy sources in terrestrial applications, which may not be fast enough for testing of nonterrestrial applications. In this paper, a high-bandwidth NSE is developed that is able to simulate ... change in the input source but also to a load step between nominal and open circuit. Moreover, all of these operation modes have a very fast settling time of only 10 μs, which is hundreds of times faster than that of existing works. This attribute allows for higher speed and a more efficient maximum...

  18. High-Performance Energy Applications and Systems

    Miller, Barton [Univ. of Wisconsin, Madison, WI (United States)

    2014-01-01

    The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) The analysis, instrumentation and control of binary programs. Work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale. Work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, “Foundational Tools for Petascale Computing”, SC0003922/FG02-10ER25940, UW PRJ27NU.

  19. Pursuit of a scalable high performance multi-petabyte database

    Hanushevsky, A

    1999-01-01

    When the BaBar experiment at the Stanford Linear Accelerator Center starts in April 1999, it will generate approximately 200 TB/year of data at a rate of 10 MB/sec for 10 years. A mere six years later, CERN, the European Laboratory for Particle Physics, will start an experiment whose data storage requirements are two orders of magnitude larger. In both experiments, all of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). The quantity and rate at which the data is produced requires the use of a high performance hierarchical mass storage system in place of a standard Unix file system. Furthermore, the distributed nature of the experiment, involving scientists from 80 Institutions in 10 countries, also requires an extended security infrastructure not commonly found in standard Unix file systems. The combination of challenges that must be overcome in order to effectively deal with a multi-petabyte object oriented database is substantial. Our particular approach...

  20. Performance Issues in High Performance Fortran Implementations of Sensor-Based Applications

    David R. O'hallaron

    1997-01-01

    Applications that get their inputs from sensors are an important and often overlooked application domain for High Performance Fortran (HPF). Such sensor-based applications typically perform regular operations on dense arrays, and often have latency and throughput requirements that can only be achieved with parallel machines. This article describes a study of sensor-based applications, including the fast Fourier transform, synthetic aperture radar imaging, narrowband tracking radar processing, multibaseline stereo imaging, and medical magnetic resonance imaging. The applications are written in a dialect of HPF developed at Carnegie Mellon, and are compiled by the Fx compiler for the Intel Paragon. The main results of the study are that (1) it is possible to realize good performance for realistic sensor-based applications written in HPF and (2) the performance of the applications is determined by the performance of three core operations: independent loops (i.e., loops with no dependences between iterations), reductions, and index permutations. The article discusses the implications for HPF implementations and introduces some simple tests that implementers and users can use to measure the efficiency of the loops, reductions, and index permutations generated by an HPF compiler.
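
    The three core operations identified above map directly onto array primitives in most data-parallel environments. The NumPy sketch below is only meant to make that taxonomy concrete (it is not HPF and not Fx compiler output): an independent loop expressed as an elementwise update, a reduction, and an index permutation such as the transpose between the two passes of a 2D FFT.

        import numpy as np

        a = np.random.rand(512, 512)

        # 1) Independent loop: no dependences between iterations, so the
        #    elementwise update can be split freely across processors.
        scaled = 2.0 * a + 1.0

        # 2) Reduction: combine all elements with an associative operator.
        total = scaled.sum()

        # 3) Index permutation: reorder data, here the transpose between the
        #    row-wise and column-wise passes of a 2D FFT.
        rows = np.fft.fft(a, axis=1)
        spectrum = np.fft.fft(rows.T, axis=1).T          # equals np.fft.fft2(a)

        print(total, np.allclose(spectrum, np.fft.fft2(a)))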

  1. Flexible and biocompatible high-performance solid-state micro-battery for implantable orthodontic system

    Kutbee, Arwa T.; Bahabry, Rabab R.; Alamoudi, Kholod O.; Ghoneim, Mohamed T.; Cordero, Marlon D.; Almuslem, Amani S.; Gumus, Abdurrahman; Diallo, Elhadj M.; Nassar, Joanna M.; Hussain, Aftab M.; Khashab, Niveen M.; Hussain, Muhammad Mustafa

    2017-01-01

    To augment the quality of our life, fully compliant personalized advanced health-care electronic system is pivotal. One of the major requirements to implement such systems is a physically flexible high-performance biocompatible energy storage

  2. Performance requirements for the double-shell tank system: Phase 1

    Claghorn, R.D.

    1998-01-01

    This document establishes performance requirements for the double-shell tank system. These requirements, in turn, will be incorporated in the System Specification for the Double-Shell Tank System (Grenard and Claghorn 1998). This version of the document establishes requirements that are applicable to the first phase (Phase 1) of the Tank Waste Remediation System (TWRS) mission described in the TWRS Mission Analysis Report (Acree 1998). It does not specify requirements for either the Phase 2 mission or the double-shell tank system closure period

  3. Saccharomyces cerevisiae vineyard strains have different nitrogen requirements that affect their fermentation performances.

    Lemos Junior, W J F; Viel, A; Bovo, B; Carlot, M; Giacomini, A; Corich, V

    2017-11-01

    In this work the fermentation performances of seven vineyard strains, together with the industrial strain EC1118, have been investigated at three differing yeast assimilable nitrogen (YAN) concentrations (300 mg N l^-1, 150 mg N l^-1 and 70 mg N l^-1) in synthetic musts. The results indicated that the response to different nitrogen levels is strain dependent. Most of the strains showed a dramatic decrease of the fermentation at 70 mg N l^-1 but no significant differences in CO2 production were found when fermentations at 300 mg N l^-1 and 150 mg N l^-1 were compared. Only one among the vineyard strains showed a decrease of the fermentation when 150 mg N l^-1 were present in the must. These results contribute to shed light on strain nitrogen requirements and offer new perspectives to manage the fermentation process during winemaking. Selected vineyard Saccharomyces cerevisiae strains can improve the quality and the complexity of local wines. Wine quality is also influenced by nitrogen availability that modulates yeast fermentation activity. In this work, yeast nitrogen assimilation was evaluated to clarify the nitrogen requirements of vineyard strains. Most of the strains needed high nitrogen levels to express the best fermentation performances. The results obtained indicate the critical nitrogen levels. When the nitrogen concentration was above the critical level, the fermentation process increased, but if the level of nitrogen was further increased no effect on the fermentation was found.

  4. High performance multiple stream data transfer

    Rademakers, F.; Saiz, P.

    2001-01-01

    The ALICE detector at LHC (CERN) will record raw data at a rate of 1.2 Gigabytes per second. Trying to analyse all this data at CERN will not be feasible. As originally proposed by the MONARC project, data collected at CERN will be transferred to remote centres to use their computing infrastructure. The remote centres will reconstruct and analyse the events, and make available the results. Therefore high-rate data transfer between computing centres (Tiers) will become of paramount importance. The authors will present several tests that have been made between CERN and remote centres in Padova (Italy), Torino (Italy), Catania (Italy), Lyon (France), Ohio (United States), Warsaw (Poland) and Calcutta (India). These tests consisted, in a first stage, of sending raw data from CERN to the remote centres and back, using an FTP method that allows connections of several streams at the same time. Thanks to these multiple streams, it is possible to increase the rate at which the data is transferred. While several 'multiple stream ftp solutions' already exist, the authors' method is based on a parallel socket implementation which allows, besides files, also objects (or any large message) to be sent in parallel. A prototype will be presented that is able to manage different transfers. This is the first step of a system to be implemented that will be able to take care of the connections with the remote centres to exchange data and monitor the status of the transfer.

  5. A CAD Open Platform for High Performance Reconfigurable Systems in the EXTRA Project

    Rabozzi, M.; Brondolin, R.; Natale, G.; Del Sozzo, E.; Huebner, M.; Brokalakis, A.; Ciobanu, C.; Stroobandt, D.; Santambrogio, M.D.; Hübner, M.; Reis, R.; Stan, M.; Voros, N.

    2017-01-01

    As the power wall has become one of the main limiting factors for the performance of general purpose processors, the trend in High Performance Computing (HPC) is moving towards application-specific accelerators in order to meet the stringent performance requirements for exascale computing while

  6. Some aspects related to the high performance in gamma spectrometry

    Vieru, Gheorghe; Mihaiu, Ramona; Nistor, Viorica

    2010-01-01

    Gamma spectroscopy is the science (or art) of identification and/or quantification of radionuclides through gamma-ray energy spectrum analysis. It is a well-established technique, illustrated by the following examples: environmental radioactivity monitoring, health physics personnel monitoring, reactor corrosion monitoring, nuclear materials safeguards and homeland security, as well as nuclear forensics, materials testing, nuclear medicine and radiopharmaceuticals, and industrial process monitoring. The Reliability and Testing Laboratory of INR Pitesti now operates such a high-performance detector, covering all aspects related to cooling, signal processing (with high resolution) and the dedicated software applications. At the present time, taking into account the 'nuclear renaissance', new experimental testing laboratories require a complete set-up rather than a mixture of old equipment. The paper presents mechanically cooled HPGe spectrometers, which are undergoing rapid evolution and eliminate traditional, labour-intensive liquid nitrogen management. Digital signal processing is also described, which virtually eliminates system drift and, at the same time, requires recalibration much less frequently. Another important aspect presented is related to the software applications, which cover a broad spectrum of nuclear measurement techniques. (authors)

  7. Technologies of high-performance thermography systems

    Breiter, R.; Cabanski, Wolfgang A.; Mauk, K. H.; Kock, R.; Rode, W.

    1997-08-01

    A family of 2 dimensional detection modules based on 256 by 256 and 486 by 640 platinum silicide (PtSi) focal planes, or 128 by 128 and 256 by 256 mercury cadmium telluride (MCT) focal planes for applications in either the 3 - 5 micrometer (MWIR) or 8 - 10 micrometer (LWIR) range was recently developed by AIM. A wide variety of applications is covered by the specific features unique for these two material systems. The PtSi units provide state of the art correctability with long term stable gain and offset coefficients. The MCT units provide extremely fast frame rates like 400 Hz with snapshot integration times as short as 250 microseconds and with a thermal resolution NETD less than 20 mK for e.g. the 128 by 128 LWIR module. The unique design idea general for all of these modules is the exclusively digital interface, using 14 bit analog to digital conversion to provide state of the art correctability, access to highly dynamic scenes without any loss of information and simplified exchangeability of the units. Device specific features like bias voltages etc. are identified during the final test and stored in a memory on the driving electronics. This concept allows an easy exchange of IDCAs of the same type without any need for tuning or e.g. the possibility to upgrade a PtSi based unit to an MCT module by just loading the suitable software. Miniaturized digital signal processor (DSP) based image correction units were developed for testing and operating the units with output data rates of up to 16 Mpixels/s. These boards provide the ability for freely programmable realtime functions like two point correction and various data manipulations in thermography applications.

  8. High energy permanent magnets - Solutions to high performance devices

    Ma, B.M.; Willman, C.J.

    1986-01-01

    Neodymium iron boron magnets are a special class of magnets providing the highest level of performance with the least amount of material. Crucible Research Center produced the highest-energy-product magnet, 45 MGOe - a world record. Commercialization of this development has already taken place. Crucible Magnetics Division, located in Elizabethtown, Kentucky, is currently manufacturing and marketing six different grades of NdFeB magnets. Permanent magnets find application in motors, speakers, and electron-beam focusing devices for military and 'Star Wars' applications. The new NdFeB magnets are of considerable interest for a wide range of applications

  9. An integrated framework for high level design of high performance signal processing circuits on FPGAs

    Benkrid, K.; Belkacemi, S.; Sukhsawas, S.

    2005-06-01

    This paper proposes an integrated framework for the high level design of high performance signal processing algorithms' implementations on FPGAs. The framework emerged from a constant need to rapidly implement increasingly complicated algorithms on FPGAs while maintaining the high performance needed in many real time digital signal processing applications. This is particularly important for application developers who often rely on iterative and interactive development methodologies. The central idea behind the proposed framework is to dynamically integrate high performance structural hardware description languages with higher level hardware languages in order to help satisfy the dual requirement of high level design and high performance implementation. The paper illustrates this by integrating two environments: Celoxica's Handel-C language, and HIDE, a structural hardware environment developed at the Queen's University of Belfast. On the one hand, Handel-C has been proven to be very useful in the rapid design and prototyping of FPGA circuits, especially control intensive ones. On the other hand, HIDE has been used extensively, and successfully, in the generation of highly optimised parameterisable FPGA cores. In this paper, this is illustrated in the construction of a scalable and fully parameterisable core for image algebra's five core neighbourhood operations, where fully floorplanned efficient FPGA configurations, in the form of EDIF netlists, are generated automatically for instances of the core. In the proposed combined framework, highly optimised data paths are invoked dynamically from within Handel-C, and are synthesized using HIDE. Although the idea might seem simple prima facie, it could have serious implications for the design of future generations of hardware description languages.

  10. High-quality thorium TRISO fuel performance in HTGRs

    Verfondern, Karl [Forschungszentrum Juelich GmbH (Germany); Allelein, Hans-Josef [Forschungszentrum Juelich GmbH (Germany); Technische Hochschule Aachen (Germany); Nabielek, Heinz; Kania, Michael J.

    2013-11-01

    Thorium as a nuclear fuel has received renewed interest, because of its widespread availability and the good irradiation performance of Th and mixed (Th,U) oxide compounds as fuels in nuclear power systems. Early HTGR development employed thorium together with high-enriched uranium (HEU). After 1980, HTGR fuel systems switched to low-enriched uranium (LEU). After completing fuel development for the AVR and the THTR with BISO coated particles, the German program expanded its efforts utilizing thorium and HEU TRISO coated particles in advanced HTGR concepts for process heat applications (PNP) and direct-cycle electricity production (HHT). The combination of low-temperature isotropic (LTI) inner and outer pyrocarbon layers surrounding a strong, stable SiC layer greatly improved manufacturing conditions and the subsequent contamination and defective particle fractions in production fuel elements. In addition, this combination provided improved mechanical strength and a higher degree of solid fission product retention, not known previously with high-temperature isotropic (HTI) BISO coatings. The improved performance of the HEU (Th, U)O{sub 2} TRISO fuel system was successfully demonstrated in three primary areas of development: manufacturing, irradiation testing under normal operating conditions, and accident simulation testing. In terms of demonstrating performance for advanced HTGR applications, the experimental failure statistics from manufacture and irradiation testing are significantly below the coated particle requirements specified for PNP and HHT designs at the time. Covering a range to 1300 C in normal operations and 1600 C in accidents, with burnups to 13% FIMA and fast fluences to 8 x 10{sup 25} n/m{sup 2} (E> 16 fJ), the performance results exceed the design limits on manufacturing and operational requirements for the German HTR-Modul concept, which are 6.5 x 10{sup -5} for manufacturing, 2 x 10{sup -4} for normal operating conditions, and 5 x 10{sup -4

  11. High-quality thorium TRISO fuel performance in HTGRs

    Verfondern, Karl; Allelein, Hans-Josef; Nabielek, Heinz; Kania, Michael J.

    2013-01-01

    Thorium as a nuclear fuel has received renewed interest, because of its widespread availability and the good irradiation performance of Th and mixed (Th,U) oxide compounds as fuels in nuclear power systems. Early HTGR development employed thorium together with high-enriched uranium (HEU). After 1980, HTGR fuel systems switched to low-enriched uranium (LEU). After completing fuel development for the AVR and the THTR with BISO coated particles, the German program expanded its efforts utilizing thorium and HEU TRISO coated particles in advanced HTGR concepts for process heat applications (PNP) and direct-cycle electricity production (HHT). The combination of low-temperature isotropic (LTI) inner and outer pyrocarbon layers surrounding a strong, stable SiC layer greatly improved manufacturing conditions and the subsequent contamination and defective particle fractions in production fuel elements. In addition, this combination provided improved mechanical strength and a higher degree of solid fission product retention, not known previously with high-temperature isotropic (HTI) BISO coatings. The improved performance of the HEU (Th, U)O2 TRISO fuel system was successfully demonstrated in three primary areas of development: manufacturing, irradiation testing under normal operating conditions, and accident simulation testing. In terms of demonstrating performance for advanced HTGR applications, the experimental failure statistics from manufacture and irradiation testing are significantly below the coated particle requirements specified for PNP and HHT designs at the time. Covering a range to 1300 C in normal operations and 1600 C in accidents, with burnups to 13% FIMA and fast fluences to 8 x 10^25 n/m^2 (E > 16 fJ), the performance results exceed the design limits on manufacturing and operational requirements for the German HTR-Modul concept, which are 6.5 x 10^-5 for manufacturing, 2 x 10^-4 for normal operating conditions, and 5 x 10^-4 for accident conditions. These

  12. Highlighting High Performance: Clearview Elementary School, Hanover, Pennsylvania

    2002-08-01

    Case study on high performance building features of Clearview Elementary School in Hanover, Pennsylvania. Clearview Elementary School in Hanover, Pennsylvania, is filled with natural light, not only in classrooms but also in unexpected, and traditionally dark, places like stairwells and hallways. The result is enhanced learning. Recent scientific studies conducted by the California Board for Energy Efficiency, involving 21,000 students, show test scores were 15% to 26% higher in classrooms with daylighting. Clearview's ventilation system also helps students and teachers stay healthy, alert, and focused on learning. The school's superior learning environment comes with annual average energy savings of about 40% over a conventional school. For example, with so much daylight, the school requires about a third less energy for electric lighting than a typical school. The school's innovative geothermal heating and cooling system uses the constant temperature of the Earth to cool and heat the building. The building and landscape designs work together to enhance solar heating in the winter, summer cooling, and daylighting all year long. Students and teachers have the opportunity to learn about high-performance design by studying their own school. At Clearview, the Hanover Public School District has shown that designing a school to save energy is affordable. Even with its many innovative features, the school's $6.35 million price tag is just $150,000 higher than average for elementary schools in Pennsylvania. Projected annual energy cost savings of approximately $18,000 mean a payback in 9 years. Reasonable construction costs demonstrate that other school districts can build schools that conserve energy, protect natural resources, and provide the educational and health benefits that come with high-performance buildings.
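
    The quoted nine-year payback follows directly from the incremental construction cost and the projected annual savings; a quick check of that arithmetic, using only the figures given in the case study above, is sketched below.

```python
# Simple-payback check using the figures quoted in the case study above.
incremental_cost = 150_000   # extra construction cost for the high-performance design ($)
annual_savings = 18_000      # projected annual energy cost savings ($/year)

payback_years = incremental_cost / annual_savings
print(f"Simple payback: {payback_years:.1f} years")  # ~8.3 years, i.e. roughly 9 years
```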

  13. Software Systems for High-performance Quantum Computing

    Humble, Travis S [ORNL; Britt, Keith A [ORNL

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  14. Power/energy use cases for high performance computing

    Laros, James H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kelly, Suzanne M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hammond, Steven [National Renewable Energy Lab. (NREL), Golden, CO (United States); Elmore, Ryan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Munch, Kristin [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    Power and energy have been identified as a first-order challenge for future extreme scale high performance computing (HPC) systems. In practice the breakthroughs will need to be provided by the hardware vendors, but making the best use of those solutions in an HPC environment will likely require periodic tuning by facility operators and software components. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide a common understanding to the HPC community of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers could utilize to steer power consumption.

  15. Figure and finish characterization of high performance metal mirrors

    Takacs, P.Z.; Church, E.L.

    1991-10-01

    Most metal mirrors currently used in synchrotron radiation (SR) beam lines to reflect soft x-rays are made of electroless nickel plate on an aluminum substrate. This material combination has allowed optical designers to incorporate exotic cylindrical aspheres into grazing incidence x-ray beam-handling systems by taking advantage of single-point diamond machining techniques. But the promise of high-quality electroless nickel surfaces has generally exceeded the performance. We will examine the evolution of electroless nickel surfaces through a study of the quality of mirrors delivered for use at the National Synchrotron Light Source over the past seven years. We have developed techniques to assess surface quality based on the measurement of surface roughness and figure errors with optical profiling instruments. It is instructive to see how the quality of the surface is related to the complexity of the machine operations required to produce it

  16. IMPULSE---an advanced, high performance nuclear thermal propulsion system

    Petrosky, L.J.; Disney, R.K.; Mangus, J.D.; Gunn, S.A.; Zweig, H.R.

    1993-01-01

    IMPULSE is an advanced nuclear propulsion engine for future space missions based on a novel conical fuel. Fuel assemblies are formed by stacking a series of truncated (U, Zr)C cones with non-fueled lips. Hydrogen flows radially inward between the cones to a central plenum connected to a high performance bell nozzle. The reference IMPULSE engine, rated at 75,000 lb thrust and 1800 MWt, weighs 1360 kg and is 3.65 meters in height and 81 cm in diameter. Specific impulse is estimated to be 1000 for a 15 minute life at full power. If longer lifetimes are required, the operating temperature can be reduced with a concomitant decrease in specific impulse. Advantages of this concept include: well-defined coolant paths without outlet flow restrictions; redundant orificing; very low thermal gradients, and hence thermal stresses, across the fuel elements; and reduced thermal stresses because of the truncated conical shape of the fuel elements
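
    Assuming the quoted specific impulse of 1000 is expressed in seconds (the conventional unit, though the abstract does not state it), the thrust and specific impulse figures imply a propellant mass flow of roughly 34 kg/s through the relation F = Isp * g0 * mdot; a quick order-of-magnitude check is sketched below.

```python
# Order-of-magnitude propellant flow implied by the quoted IMPULSE figures.
# Assumes the specific impulse of 1000 is expressed in seconds.
G0 = 9.80665                      # standard gravity, m/s^2
thrust_lbf = 75_000               # quoted thrust
thrust_n = thrust_lbf * 4.44822   # convert lbf -> N (~333.6 kN)
isp_s = 1000.0                    # quoted specific impulse, s (assumed)

mdot = thrust_n / (isp_s * G0)    # hydrogen mass flow, kg/s
print(f"Implied propellant flow: {mdot:.1f} kg/s")   # ~34 kg/s
```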

  17. DOE research in utilization of high-performance computers

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  18. High-Performance Management Practices and Employee Outcomes in Denmark

    Cristini, Annalisa; Eriksson, Tor; Pozzoli, Dario

    High-performance work practices are frequently considered to have positive effects on corporate performance, but what do they do for employees? After showing that organizational innovation is indeed positively associated with firm performance, we investigate whether high-involvement work practices...

  19. High Performance Fuel Design for Next Generation Pressurized Water Reactors

    Mujid S. Kazimi; Pavel Hejzlar

    2006-01-01

    The use of internally and externally cooled annular fuel rods for high power density Pressurized Water Reactors is assessed. The assessment included steady state and transient thermal conditions, neutronic and fuel management requirements, mechanical vibration issues, fuel performance issues, fuel fabrication methods and economic assessment. The investigation was conducted by a team from MIT, Westinghouse, Gamma Engineering, Framatome ANP, and AECL. The analyses led to the conclusion that raising the power density by 50% may be possible with this advanced fuel. Even at the 150% power level, the fuel temperature would be a few hundred degrees lower than the current fuel temperature. Significant economic and safety advantages can be obtained by using this fuel in new reactors. Switching to this type of fuel for existing reactors would yield safety advantages, but the economic return is dependent on the duration of plant shutdown to accommodate higher power production. The main feasibility issue for the high power performance appears to be the potential for uneven splitting of heat flux between the inner and outer fuel surfaces due to premature closure of the outer fuel-cladding gap. This could be overcome by using a very narrow gap for the inner fuel surface and/or the spraying of a crushable zirconium oxide film at the fuel pellet outer surface. An alternative fuel manufacturing approach using vibropacking was also investigated but appears to yield lower than desirable fuel density

  20. High-performance teams and the physician leader: an overview.

    Majmudar, Aalap; Jain, Anshu K; Chaudry, Joseph; Schwartz, Richard W

    2010-01-01

    The complexity of health care delivery within the United States continues to escalate in an exponential fashion driven by an explosion of medical technology, an ever-expanding research enterprise, and a growing emphasis on evidence-based practices. The delivery of care occurs on a continuum that spans across multiple disciplines, now requiring complex coordination of care through the use of novel clinical teams. The use of teams permeates the health care industry and has done so for many years, but confusion about the structure and role of teams in many organizations contributes to limited effectiveness and suboptimal outcomes. Teams are an essential component of graduate medical education training programs. The health care industry's relative lack of focus regarding the fundamentals of teamwork theory has contributed to ineffective team leadership at the physician level. As a follow-up to our earlier manuscripts on teamwork, this article clarifies a model of teamwork and discusses its application to high-performance teams in health care organizations. Emphasized in this discussion is the role played by the physician leader in ensuring team effectiveness. By educating health care professionals on the fundamentals of high-performance teamwork, we hope to stimulate the development of future physician leaders who use proven teamwork principles to achieve the goals of trainee education and excellent patient care. Copyright 2010 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  1. High Performance Fuel Design for Next Generation Pressurized Water Reactors

    Mujid S. Kazimi; Pavel Hejzlar

    2006-01-31

    The use of internally and externally cooled annular fuel rods for high power density Pressurized Water Reactors is assessed. The assessment included steady state and transient thermal conditions, neutronic and fuel management requirements, mechanical vibration issues, fuel performance issues, fuel fabrication methods and economic assessment. The investigation was conducted by a team from MIT, Westinghouse, Gamma Engineering, Framatome ANP, and AECL. The analyses led to the conclusion that raising the power density by 50% may be possible with this advanced fuel. Even at the 150% power level, the fuel temperature would be a few hundred degrees lower than the current fuel temperature. Significant economic and safety advantages can be obtained by using this fuel in new reactors. Switching to this type of fuel for existing reactors would yield safety advantages, but the economic return is dependent on the duration of plant shutdown to accommodate higher power production. The main feasibility issue for the high power performance appears to be the potential for uneven splitting of heat flux between the inner and outer fuel surfaces due to premature closure of the outer fuel-cladding gap. This could be overcome by using a very narrow gap for the inner fuel surface and/or the spraying of a crushable zirconium oxide film at the fuel pellet outer surface. An alternative fuel manufacturing approach using vibropacking was also investigated but appears to yield lower than desirable fuel density.

  2. High-Performance Ducts in Hot-Dry Climates

    Hoeschele, Marc [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Chitwood, Rick [National Renewable Energy Laboratory (NREL), Golden, CO (United States); German, Alea [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Weitzel, Elizabeth [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2015-07-30

    Duct thermal losses and air leakage have long been recognized as prime culprits in the degradation of heating, ventilating, and air-conditioning (HVAC) system efficiency. Both the U.S. Department of Energy’s Zero Energy Ready Home program and California’s proposed 2016 Title 24 Residential Energy Efficiency Standards require that ducts be installed within conditioned space or that other measures be taken to provide similar improvements in delivery effectiveness (DE). Pacific Gas & Electric Company commissioned a study to evaluate ducts in conditioned space and high-performance attics (HPAs) in support of the proposed codes and standards enhancements included in California’s 2016 Title 24 Residential Energy Efficiency Standards. The goal was to work with a select group of builders to design and install high-performance duct (HPD) systems, such as ducts in conditioned space (DCS), in one or more of their homes and to obtain test data to verify the improvement in DE compared to standard practice. Davis Energy Group (DEG) helped select the builders and led a team that provided information about HPD strategies to them. DEG also observed the construction process, completed testing, and collected cost data.

  3. Evaluation of High-Performance Network Technologies for ITER

    Zagar, K.; Kolaric, P.; Sabjan, R.; Zagar, A. [Cosylab d.d., Ljubljana (Slovenia); Hunt, S. [Alceli Hunt Beratung, Meisterschwanden (Switzerland)

    2009-07-01

    To facilitate fast feedback control of plasma, ITER's Control, Data Access and Communication system (CODAC) will need to provide a mechanism for hard real-time communication between its distributed nodes. In particular, four types of high-performance communication have been identified. Synchronous Databus Network (SDN) is to provide an ability to distribute parameters of plasma (estimated to about 5000 double-valued signals) across the system to allow for 1 ms control cycles. Event Distribution Network (EDN) and Time Communication Network (TCN) are to allow synchronization of node I/O operations to 10 ns. Finally, the Audio Video Network (AVN) is to provide sufficient bandwidth for streaming of surveillance and diagnostics video at a high resolution (1024*1024) and frame rate (30 Hz). In this article, we present some combinations of common off-the-shelf (COTS) technologies that allow the above requirements to be met. Also, we present the performances achieved in a practical (though small scale) technology demonstrator, which involved a real-time Linux operating system running on National Instruments' PXI platform, UDP communication implemented directly atop the Ethernet network adapter, CISCO switches, Micro Research Finland's timing and event solution, and GigE audio-video streaming. This document is composed of an abstract followed by the presentation transparencies. (authors)
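
    The SDN figures quoted above (roughly 5000 double-valued signals distributed every 1 ms control cycle) imply a modest raw payload bandwidth per node; a back-of-the-envelope check, assuming 8-byte IEEE-754 doubles and ignoring protocol overhead, is sketched below.

```python
# Rough SDN payload bandwidth implied by the quoted ITER CODAC figures.
signals = 5000          # double-valued signals per control cycle
bytes_per_signal = 8    # assuming IEEE-754 double precision
cycle_s = 1e-3          # 1 ms control cycle

payload_bytes_per_s = signals * bytes_per_signal / cycle_s
print(f"{payload_bytes_per_s / 1e6:.0f} MB/s "
      f"(~{payload_bytes_per_s * 8 / 1e6:.0f} Mbit/s) of raw payload, "
      "excluding headers and protocol overhead")
# -> 40 MB/s, i.e. about 320 Mbit/s per receiving node
```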

  4. Evaluation of high-performance network technologies for ITER

    Zagar, K., E-mail: klemen.zagar@cosylab.co [Cosylab d.d., 1000 Ljubljana (Slovenia); Hunt, S. [Alceli Hunt Beratung, 5616 Meisterschwanden (Switzerland); Kolaric, P.; Sabjan, R.; Zagar, A.; Dedic, J. [Cosylab d.d., 1000 Ljubljana (Slovenia)

    2010-07-15

    For the fast feedback plasma controllers, ITER's Control, Data Access and Communication system (CODAC) will need to provide a mechanism for hard real-time communication between its distributed nodes. In particular, the ITER CODAC team identified four types of high-performance communication applications. Synchronous Databus Network (SDN) is to provide an ability to distribute parameters of plasma (estimated to about 5000 double-valued signals) across the system to allow for 1 ms control cycles. Event Distribution Network (EDN) and Time Communication Network (TCN) are to allow synchronization of node I/O operations to 10 ns. Finally, the Audio-Video Network (AVN) is to provide sufficient bandwidth for streaming of surveillance and diagnostics video at a high resolution (1024 x 1024) and frame rate (30 Hz). In this article, we present some combinations of common-off-the-shelf (COTS) technologies that allow the above requirements to be met. Also, we present the performances achieved in a practical (though small scale) technology demonstrator, which involved a real-time Linux operating system running on National Instruments' PXI platform, UDP communication implemented directly atop the Ethernet network adapter, CISCO switches, Micro Research Finland's timing and event solution, and GigE audio-video streaming.

  5. Evaluation of high-performance network technologies for ITER

    Zagar, K.; Hunt, S.; Kolaric, P.; Sabjan, R.; Zagar, A.; Dedic, J.

    2010-01-01

    For the fast feedback plasma controllers, ITER's Control, Data Access and Communication system (CODAC) will need to provide a mechanism for hard real-time communication between its distributed nodes. In particular, the ITER CODAC team identified four types of high-performance communication applications. Synchronous Databus Network (SDN) is to provide an ability to distribute parameters of plasma (estimated to about 5000 double-valued signals) across the system to allow for 1 ms control cycles. Event Distribution Network (EDN) and Time Communication Network (TCN) are to allow synchronization of node I/O operations to 10 ns. Finally, the Audio-Video Network (AVN) is to provide sufficient bandwidth for streaming of surveillance and diagnostics video at a high resolution (1024 x 1024) and frame rate (30 Hz). In this article, we present some combinations of common-off-the-shelf (COTS) technologies that allow the above requirements to be met. Also, we present the performances achieved in a practical (though small scale) technology demonstrator, which involved a real-time Linux operating system running on National Instruments' PXI platform, UDP communication implemented directly atop the Ethernet network adapter, CISCO switches, Micro Research Finland's timing and event solution, and GigE audio-video streaming.

  6. High performance computing environment for multidimensional image analysis.

    Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo

    2007-07-10

    The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact on microscopy applications. We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478x speedup. Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets.
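
    The quoted 478x speedup is consistent with the two runtimes given above, and the described decomposition leaves each node only a small share of the dataset; a quick arithmetic check (an illustration, not the authors' code) is sketched below.

```python
# Consistency check of the quoted Blue Gene/L speedup and per-node workload.
serial_runtime_s = 2.5 * 3600     # two and a half hours on a single 2 GHz CPU
parallel_runtime_s = 18.8         # reported runtime on 1024 Blue Gene/L nodes
speedup = serial_runtime_s / parallel_runtime_s
print(f"Speedup: {speedup:.0f}x")                  # ~479x, matching the ~478x quoted

dataset_mb = 256
nodes = 1024
print(f"Per-node share: {dataset_mb / nodes * 1024:.0f} KB")   # 256 KB per node
```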

  7. High performance data acquisition with InfiniBand

    Adamczewski, Joern; Essel, Hans G.; Kurz, Nikolaus; Linev, Sergey

    2008-01-01

    For the new experiments at FAIR, new concepts of data acquisition systems have to be developed, such as the distribution of self-triggered, time-stamped data streams over high performance networks for event building. In this concept any data filtering is done behind the network. Therefore the network must achieve up to 1 GByte/s bi-directional data transfer per node. Detailed simulations have been done to optimize scheduling mechanisms for such event building networks. For real performance tests InfiniBand has been chosen as one of the fastest available network technologies. The measurements of network event building have been performed on different Linux clusters from four to over one hundred nodes. Several InfiniBand libraries have been tested, such as uDAPL, Verbs, and MPI. The tests have been integrated in the data acquisition backbone core software DABC, a general purpose data acquisition library. Detailed results are presented. In the worst cases (over one hundred nodes) 50% of the required bandwidth can already be achieved. It seems possible to improve these results by further investigations

  8. High performance real-time flight simulation at NASA Langley

    Cleveland, Jeff I., II

    1994-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be deterministic and be completed in as short a time as possible. This includes simulation mathematical model computations and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, personnel at NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to a standard input/output system to provide for high bandwidth, low latency data acquisition and distribution. The Computer Automated Measurement and Control technology (IEEE standard 595) was extended to meet the performance requirements for real-time simulation. This technology extension increased the effective bandwidth by a factor of ten and increased the performance of modules necessary for simulator communications. This technology is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications of this technology are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC have completed the development of the use of supercomputers for simulation mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and the development of specialized software and hardware for the CAMAC simulator network. This work, coupled with the use of an open systems software architecture, has advanced the state of the art in real-time flight simulation. The data acquisition technology innovation and experience with recent developments in this technology are described.

  9. Passive and Active Monitoring on a High Performance Research Network

    Matthews, Warren

    2001-01-01

    The bold network challenges described in ''Internet End-to-end Performance Monitoring for the High Energy and Nuclear Physics Community'' presented at PAM 2000 have been tackled by the intrepid administrators and engineers providing the network services. After less than a year, the BaBar collaboration has collected almost 100 million particle collision events in a database approaching 165 TB (Tera = 10^12). Around 20 TB has been exported via the Internet to the BaBar regional center at IN2P3 in Lyon, France, for processing, and around 40 TB of simulated events have been imported to SLAC from Lawrence Livermore National Laboratory (LLNL). An unforeseen challenge has arisen due to recent events and highlighted security concerns at DoE funded labs. New rules and regulations suggest it is only a matter of time before many active performance measurements may not be possible between many sites. Yet, at the same time, the importance of understanding every aspect of the network and eradicating packet loss for high throughput data transfers has become apparent. Work at SLAC to employ passive monitoring using netflow and OC3MON is underway, and techniques to supplement and possibly replace the active measurements are being considered. This paper will detail the special needs and traffic characterization of a remarkable research project, and how the networking hurdles have been resolved (or not) to achieve the required high data throughput. Results from active and passive measurements will be compared, and methods for achieving high throughput and the effect on the network will be assessed along with tools that directly measure throughput and applications used to actually transfer data

  10. Intelligent Facades for High Performance Green Buildings. Final Technical Report

    Dyson, Anna [Rensselaer Polytechnic Inst., Troy, NY (United States)

    2017-03-01

    Intelligent Facades for High Performance Green Buildings: Previous research and development of intelligent facade systems has been limited in its contribution towards national goals for achieving on-site net zero buildings, because this R&D has failed to couple the many qualitative requirements of building envelopes, such as the provision of daylighting, access to exterior views, and satisfying aesthetic and cultural characteristics, with the quantitative metrics of energy harvesting, storage and redistribution. To achieve energy self-sufficiency from on-site solar resources, building envelopes can and must address this gamut of concerns simultaneously. With this project, we have undertaken a high-performance building-integrated combined heat and power concentrating photovoltaic system with high-temperature thermal capture, storage and transport towards multiple applications (BICPV/T). The critical contribution we are offering with the Integrated Concentrating Solar Façade (ICSF) is conceived to improve daylighting quality for improved occupant health and to mitigate solar heat gain while maximally capturing and transferring on-site solar energy. The ICSF accomplishes this multi-functionality by intercepting only the direct-normal component of solar energy (which is responsible for elevated cooling loads), thereby transforming a previously problematic source of energy into a high-quality resource that can be applied to building demands such as heating, cooling, dehumidification, domestic hot water, and possibly further augmentation of electrical generation through organic Rankine cycles. With the ICSF technology, our team is addressing the global challenge of transitioning commercial and residential building stock towards on-site clean energy self-sufficiency, by fully integrating innovative environmental control systems strategies within an intelligent and responsively dynamic building envelope. The advantage of being able to use the entire solar spectrum for

  11. Passive and Active Monitoring on a High Performance Research Network.

    Matthews, Warren

    2001-05-01

    The bold network challenges described in ''Internet End-to-end Performance Monitoring for the High Energy and Nuclear Physics Community'' presented at PAM 2000 have been tackled by the intrepid administrators and engineers providing the network services. After less than a year, the BaBar collaboration has collected almost 100 million particle collision events in a database approaching 165TB (Tera=10{sup 12}). Around 20TB has been exported via the Internet to the BaBar regional center at IN2P3 in Lyon, France, for processing and around 40 TB of simulated events have been imported to SLAC from Lawrence Livermore National Laboratory (LLNL). An unforeseen challenge has arisen due to recent events and highlighted security concerns at DoE funded labs. New rules and regulations suggest it is only a matter of time before many active performance measurements may not be possible between many sites. Yet, at the same time, the importance of understanding every aspect of the network and eradicating packet loss for high throughput data transfers has become apparent. Work at SLAC to employ passive monitoring using netflow and OC3MON is underway and techniques to supplement and possibly replace the active measurements are being considered. This paper will detail the special needs and traffic characterization of a remarkable research project, and how the networking hurdles have been resolved (or not!) to achieve the required high data throughput. Results from active and passive measurements will be compared, and methods for achieving high throughput and the effect on the network will be assessed along with tools that directly measure throughput and applications used to actually transfer data.

  12. High-Level Synthesis: Productivity, Performance, and Software Constraints

    Yun Liang

    2012-01-01

    Full Text Available FPGAs are an attractive platform for applications with high computation demand and low energy consumption requirements. However, design effort for FPGA implementations remains high, often an order of magnitude larger than design effort using high-level languages. Instead of this time-consuming process, high-level synthesis (HLS) tools generate hardware implementations from algorithm descriptions in languages such as C/C++ and SystemC. Such tools reduce design effort: high-level descriptions are more compact and less error prone. HLS tools promise hardware development abstracted from software designer knowledge of the implementation platform. In this paper, we present an unbiased study of the performance, usability and productivity of HLS using AutoPilot (a state-of-the-art HLS tool). In particular, we first evaluate AutoPilot using popular embedded benchmark kernels. Then, to evaluate the suitability of HLS for real-world applications, we perform a case study of stereo matching, an active area of computer vision research that uses techniques also common to image denoising, image retrieval, feature matching, and face recognition. Based on our study, we provide insights on current limitations of mapping general-purpose software to hardware using HLS and some future directions for HLS tool development. We also offer several guidelines for hardware-friendly software design. For the popular embedded benchmark kernels, the designs produced by HLS achieve 4X to 126X speedup over the software version. The stereo matching algorithms achieve between 3.5X and 67.9X speedup over software (but still less than manual RTL designs), with a fivefold reduction in design effort versus manual RTL design.

  13. High ionization radiation field remote visualization device - shielding requirements

    Fernandez, Antonio P. Rodrigues; Omi, Nelson M.; Silveira, Carlos Gaia da; Calvo, Wilson A. Pajero

    2011-01-01

    The hot-cells used to manipulate high-activity sources have special, very thick leaded-glass windows. Such a window provides a single view of what is being manipulated inside the hot-cell. Surveillance cameras could replace the leaded-glass window, provide additional viewpoints and, using their zoom capability, show more detail of the manipulated pieces. Online remote manipulation could also be implemented. The limitation is their low resistance to ionizing radiation. This low resistance has also limited the useful life of robots built to explore or even repair problematic nuclear reactor cores, industrial gamma irradiators and highly radioactive leaks. This work is part of the development of a remote visualization device for high gamma fields using commercial surveillance cameras. These cameras are cheap enough to be discarded after some hours of use in an emergency application, or after some days or months in routine applications. A radiation shield can be used, but it cannot block the camera's line of sight, which is the weak point of the shield. Estimates of the resistance of the camera and its electronics can be made from the known behaviour of each component. This knowledge is also used to determine the type of optical sensor and the lens material. A better estimate will be obtained with the commercial cameras operating inside a high gamma field, like the one inside the IPEN Multipurpose Irradiator. The goal of this work is to establish the radiation shielding needed to extend the camera's useful life to hours, days or months, depending on the application needs. (author)

  14. Academic performance in high school as factor associated to academic performance in college

    Mileidy Salcedo Barragán

    2008-12-01

    Full Text Available This study intends to find the relationship between academic performance in High School and College, focusing on Natural Sciences and Mathematics. It is a descriptive correlational study, and the variables were academic performance in High School, performance indicators and educational history. The correlations between variables were established with Spearman’s correlation coefficient. Results suggest that there is a positive relationship between academic performance in High School and Educational History, and a very weak relationship between performance in Science and Mathematics in High School and performance in College.
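
    Spearman's rank correlation coefficient, the statistic used in the study above, can be computed directly from paired grade records; a minimal sketch with entirely hypothetical data, assuming SciPy's spearmanr is available, follows.

```python
from scipy.stats import spearmanr

# Hypothetical paired records: high-school grade vs. first-year college grade
# for the same students (illustration only, not the study's data).
high_school = [3.2, 3.8, 2.9, 3.5, 4.0, 2.7, 3.1, 3.9]
college     = [2.8, 3.6, 2.5, 3.0, 3.7, 2.6, 3.3, 3.5]

rho, p_value = spearmanr(high_school, college)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```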

  15. Determining required valve performance for discrete control of PTO cylinders for wave energy

    Hansen, Rico Hjerm; Andersen, Torben Ole; Pedersen, Henrik C.

    2012-01-01

    investigates the required valve performance to achieve this energy efficient operation, while meeting basic dynamic requirements. The components making up the total energy loss during shifting are identified by analytically expressing the losses from the governing differential equations. From the analysis...... a framework for evaluating the adequacy of a valve’s response is established, and the analysis shows the results may be normalised for a wider range of systems. Finally, the framework is successfully applied to the Wavestar converter....

  16. High performance GPU processing for inversion using uniform grid searches

    Venetis, Ioannis E.; Saltogianni, Vasso; Stiros, Stathis; Gallopoulos, Efstratios

    2017-04-01

    Many geophysical problems are described by systems of redundant, highly non-linear ordinary equations with constant terms deriving from measurements and hence representing stochastic variables. Solution (inversion) of such problems is based on numerical optimization methods, using Monte Carlo sampling or exhaustive searches in cases of two or even three "free" unknown variables. Recently the TOPological INVersion (TOPINV) algorithm, a grid-search-based technique in the R^n space, has been proposed. TOPINV is not based on the minimization of a certain cost function and involves only forward computations, hence avoiding computational errors. The basic concept is to transform observation equations into inequalities on the basis of an optimization parameter k and of their standard errors, and through repeated "scans" of n-dimensional search grids for decreasing values of k to identify the optimal clusters of gridpoints which satisfy the observation inequalities and by definition contain the "true" solution. Stochastic optimal solutions and their variance-covariance matrices are then computed as first and second statistical moments. Such exhaustive uniform searches produce an excessive computational load and are extremely time consuming for common CPU-based computers. An alternative is to use a computing platform based on a GPU, which nowadays is affordable to the research community and provides much higher computing performance. Using the CUDA programming language to implement TOPINV allows the investigation of the attained speedup in execution time on such a high-performance platform. Based on synthetic data we compared the execution time required for two typical geophysical problems, modeling magma sources and seismic faults, described with up to 18 unknown variables, on both CPU/FORTRAN and GPU/CUDA platforms. The same problems for several different sizes of search grids (up to 10^12 gridpoints) and numbers of unknown variables were solved on
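
    The core of the TOPINV scheme described above, turning each observation equation into an inequality |f_i(x) - d_i| <= k * sigma_i and keeping the gridpoints that satisfy all of them, can be sketched in a few lines of NumPy. The model function, data and search grids below are hypothetical placeholders, and a production GPU implementation would evaluate the same per-gridpoint test in parallel (e.g., across CUDA threads) rather than with vectorized NumPy.

```python
import numpy as np

def topinv_like_search(model, data, sigma, grids, k):
    """Keep gridpoints whose predictions satisfy all observation inequalities.

    model : callable mapping an (n_points, n_params) array of candidate
            parameter vectors to an (n_points, n_obs) array of predictions
    data  : (n_obs,) observed values
    sigma : (n_obs,) standard errors of the observations
    grids : list of 1-D arrays, one uniform search grid per unknown parameter
    k     : tolerance multiplier applied to the standard errors
    """
    # Build the full n-dimensional uniform search grid.
    mesh = np.meshgrid(*grids, indexing="ij")
    points = np.stack([m.ravel() for m in mesh], axis=1)

    # A point is accepted if every residual lies within k standard errors.
    residuals = np.abs(model(points) - data)
    accepted = points[np.all(residuals <= k * sigma, axis=1)]

    # First and second statistical moments of the accepted cluster.
    mean = accepted.mean(axis=0)
    cov = np.cov(accepted, rowvar=False)
    return accepted, mean, cov

# Toy example: two unknowns (a, b) observed through three linear "stations".
A = np.array([[1.0, 2.0], [0.5, -1.0], [2.0, 1.0]])
true_x = np.array([1.5, -0.5])
data = A @ true_x
sigma = np.array([0.05, 0.05, 0.05])
grids = [np.linspace(0.0, 3.0, 301), np.linspace(-2.0, 1.0, 301)]
_, mean, cov = topinv_like_search(lambda p: p @ A.T, data, sigma, grids, k=2.0)
print(mean)   # close to the true parameters [1.5, -0.5]
```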

  17. Superconductor Requirements and Characterization for High Field Accelerator Magnets

    Barzi, E.; Zlobin, A. V.

    2015-05-01

    The 2014 Particle Physics Project Prioritization Panel (P5) strategic plan for U.S. High Energy Physics (HEP) endorses a continued world leadership role in superconducting magnet technology for future Energy Frontier Programs. This includes 10 to 15 T Nb3Sn accelerator magnets for LHC upgrades and a future 100 TeV scale pp collider, and, as an ultimate goal, the development of magnet technologies above 20 T based on both High Temperature Superconductors (HTS) and Low Temperature Superconductors (LTS) for accelerator magnets. To achieve these objectives, a sound conductor development and characterization program is needed and is herein described. This program is intended to be conducted in close collaboration with U.S. and international labs, universities and industry.

  18. Symmetry and illumination uniformity requirements for high density laser-driven implosions

    Mead, W.C.; Lindl, J.D.

    1976-01-01

    As laser capabilities increase, implosions will be performed to achieve high densities. Criteria are discussed for formation of a low-density corona, preheated supersonically, which increases the tolerance of high convergence implosions to non-uniform illumination by utilizing thermal smoothing. We compare optimized double shell target designs without and with atmosphere production. Two significant penalties are incurred with atmosphere production using 1 μm laser light. First, a large initial shock at the ablation surface limits the pulse shaping flexibility, and degrades implosion performance. Second, the mass and heat capacity of the atmosphere reduce the energy delivered to the ablation surface and the driving pressures obtained for a given input energy. Improvement is possible using 2 μm light for the initial phase of the implosion. We present results of 2-D simulations which evaluate combined symmetry and stability requirements. At l = 8, the improvement produced in the example is a factor of 10, giving tolerance of 10 percent

  19. An Analysis of Testing Requirements for Fluoride Salt Cooled High Temperature Reactor Components

    Holcomb, David Eugene [ORNL; Cetiner, Sacit M [ORNL; Flanagan, George F [ORNL; Peretz, Fred J [ORNL; Yoder Jr, Graydon L [ORNL

    2009-11-01

    This report provides guidance on the component testing necessary during the next phase of fluoride salt-cooled high temperature reactor (FHR) development. In particular, the report identifies and describes the reactor component performance and reliability requirements, provides an overview of what information is necessary to provide assurance that components will adequately achieve the requirements, and then provides guidance on how the required performance information can efficiently be obtained. The report includes a system description of a representative test scale FHR reactor. The reactor parameters presented in this report should only be considered as placeholder values until an FHR test scale reactor design is completed. The report's scope is bounded by the interface between the reactor primary coolant salt and the fuel, and by the gas supply and return to the Brayton cycle power conversion system. The analysis is limited to component level testing and does not address system level testing issues. Further, the report is oriented as a bottom-up testing requirements analysis as opposed to having a top-down facility description focus.

  20. High performance computing in power and energy systems

    Khaitan, Siddhartha Kumar [Iowa State Univ., Ames, IA (United States); Gupta, Anshul (eds.) [IBM Watson Research Center, Yorktown Heights, NY (United States)

    2013-07-01

    The twin challenge of meeting global energy demands in the face of growing economies and populations and restricting greenhouse gas emissions is one of the most daunting ones that humanity has ever faced. Smart electrical generation and distribution infrastructure will play a crucial role in meeting these challenges. We would need to develop capabilities to handle large volumes of data generated by power system components like PMUs, DFRs and other data acquisition devices, as well as the capacity to process these data at high resolution via multi-scale and multi-period simulations, cascading and security analysis, interaction between hybrid systems (electric, transport, gas, oil, coal, etc.) and so on, to get meaningful information in real time to ensure a secure, reliable and stable power system grid. Advanced research on development and implementation of market-ready leading-edge high-speed enabling technologies and algorithms for solving real-time, dynamic, resource-critical problems will be required for dynamic security analysis targeted towards successful implementation of Smart Grid initiatives. This book aims to bring together some of the latest research developments as well as thoughts on the future research directions of high performance computing applications in electric power systems planning, operations, security, markets, and grid integration of alternate sources of energy, etc.

  1. Argonne National Laboratory high performance network support of APS experiments

    Knot, M.J.; McMahon, R.J.

    1996-01-01

    Argonne National Laboratory is currently positioned to provide access to high performance regional and national networks. Much of the impetus for this effort is the anticipated needs of the upcoming experimental program at the APS. Some APS collaborative access teams (CATs) are already pressing for network speed improvements and security enhancements. Requirements range from the need for high data rate, secure transmission of experimental data, to the desire to establish a ''virtual experimental environment'' at their home institution. In the near future, 155 megabit/sec (Mb/s) national and regional asynchronous transfer mode (ATM) networks will be operational and available to APS users. Full-video teleconferencing, virtual presence operation of experiments, and high speed, secure transmission of data are being tested and, in some cases, will be operational. We expect these efforts to enable a substantial improvement in the speed of processing experimental results as well as an increase in convenience to the APS experimentalist. copyright 1996 American Institute of Physics

  2. Communication Requirements and Interconnect Optimization for High-End Scientific Applications

    Kamil, Shoaib; Oliker, Leonid; Pinar, Ali; Shalf, John

    2007-11-12

    The path towards realizing peta-scale computing is increasingly dependent on building supercomputers with unprecedented numbers of processors. To prevent the interconnect from dominating the overall cost of these ultra-scale systems, there is a critical need for high-performance network solutions whose costs scale linearly with system size. This work makes several unique contributions towards attaining that goal. First, we conduct one of the broadest studies to date of high-end application communication requirements, whose computational methods include: finite-difference, lattice-Boltzmann, particle-in-cell, sparse linear algebra, particle mesh Ewald, and FFT-based solvers. To efficiently collect this data, we use the IPM (Integrated Performance Monitoring) profiling layer to gather detailed messaging statistics with minimal impact to code performance. Using the derived communication characterizations, we next present fit-tree interconnects, a novel approach for designing network infrastructure at a fraction of the component cost of traditional fat-tree solutions. Finally, we propose the Hybrid Flexibly Assignable Switch Topology (HFAST) infrastructure, which uses both passive (circuit) and active (packet) commodity switch components to dynamically reconfigure interconnects to suit the topological requirements of scientific applications. Overall our exploration leads to promising directions for practically addressing the interconnect requirements of future peta-scale systems.

  3. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    Kröner, Dietmar; Resch, Michael

    2009-01-01

    The discussions and plans on all scientific, advisory, and political levels to realize an even larger “European Supercomputer” in Germany, where the hardware costs alone will be hundreds of millions Euro – much more than in the past – are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructur...

  4. Inclusive vision for high performance computing at the CSIR

    Gazendam, A

    2006-02-01

    Full Text Available and computationally intensive applications. A number of different technologies and standards were identified as core to the open and distributed high-performance infrastructure envisaged...

  5. Is a Responsive Default Mode Network Required for Successful Working Memory Task Performance?

    Čeko, Marta; Gracely, John L.; Fitzcharles, Mary-Ann; Seminowicz, David A.; Schweinhardt, Petra

    2015-01-01

    In studies of cognitive processing using tasks with externally directed attention, regions showing increased (external-task-positive) and decreased or “negative” [default-mode network (DMN)] fMRI responses during task performance are dynamically responsive to increasing task difficulty. Responsiveness (modulation of fMRI signal by increasing load) has been linked directly to successful cognitive task performance in external-task-positive regions but not in DMN regions. To investigate whether a responsive DMN is required for successful cognitive performance, we compared healthy human subjects (n = 23) with individuals shown to have decreased DMN engagement (chronic pain patients, n = 28). Subjects performed a multilevel working-memory task (N-back) during fMRI. If a responsive DMN is required for successful performance, patients having reduced DMN responsiveness should show worsened performance; if performance is not reduced, their brains should show compensatory activation in external-task-positive regions or elsewhere. All subjects showed decreased accuracy and increased reaction times with increasing task level, with no significant group differences on either measure at any level. Patients had significantly reduced negative fMRI response (deactivation) of DMN regions (posterior cingulate/precuneus, medial prefrontal cortex). Controls showed expected modulation of DMN deactivation with increasing task difficulty. Patients showed significantly reduced modulation of DMN deactivation by task difficulty, despite their successful task performance. We found no evidence of compensatory neural recruitment in external-task-positive regions or elsewhere. Individual responsiveness of the external-task-positive ventrolateral prefrontal cortex, but not of DMN regions, correlated with task accuracy. These findings suggest that a responsive DMN may not be required for successful cognitive performance; a responsive external-task-positive network may be sufficient

  6. Is a Responsive Default Mode Network Required for Successful Working Memory Task Performance?

    Čeko, Marta; Gracely, John L; Fitzcharles, Mary-Ann; Seminowicz, David A; Schweinhardt, Petra; Bushnell, M Catherine

    2015-08-19

    In studies of cognitive processing using tasks with externally directed attention, regions showing increased (external-task-positive) and decreased or "negative" [default-mode network (DMN)] fMRI responses during task performance are dynamically responsive to increasing task difficulty. Responsiveness (modulation of fMRI signal by increasing load) has been linked directly to successful cognitive task performance in external-task-positive regions but not in DMN regions. To investigate whether a responsive DMN is required for successful cognitive performance, we compared healthy human subjects (n = 23) with individuals shown to have decreased DMN engagement (chronic pain patients, n = 28). Subjects performed a multilevel working-memory task (N-back) during fMRI. If a responsive DMN is required for successful performance, patients having reduced DMN responsiveness should show worsened performance; if performance is not reduced, their brains should show compensatory activation in external-task-positive regions or elsewhere. All subjects showed decreased accuracy and increased reaction times with increasing task level, with no significant group differences on either measure at any level. Patients had significantly reduced negative fMRI response (deactivation) of DMN regions (posterior cingulate/precuneus, medial prefrontal cortex). Controls showed expected modulation of DMN deactivation with increasing task difficulty. Patients showed significantly reduced modulation of DMN deactivation by task difficulty, despite their successful task performance. We found no evidence of compensatory neural recruitment in external-task-positive regions or elsewhere. Individual responsiveness of the external-task-positive ventrolateral prefrontal cortex, but not of DMN regions, correlated with task accuracy. These findings suggest that a responsive DMN may not be required for successful cognitive performance; a responsive external-task-positive network may be sufficient. We studied the

  7. High Performance Low Mass Nanowire Enabled Heatpipe, Phase II

    National Aeronautics and Space Administration — Heat pipes are widely used for passive, two-phase electronics cooling. As advanced high power, high performance electronics in space based and terrestrial...

  8. The association of students requiring remediation in the internal medicine clerkship with poor performance during internship.

    Hemann, Brian A; Durning, Steven J; Kelly, William F; Dong, Ting; Pangaro, Louis N; Hemmer, Paul A

    2015-04-01

    To determine whether the Uniformed Services University (USU) system of workplace performance assessment for students in the internal medicine clerkship at the USU continues to be a sensitive predictor of subsequent poor performance during internship, when compared with assessments in other USU third year clerkships. Utilizing Program Director survey results from 2007 through 2011 and U.S. Medical Licensing Examination (USMLE) Step 3 examination results as the outcomes of interest, we compared performance during internship for students who had less than passing performance in the internal medicine clerkship and required remediation, against students whose performance in the internal medicine clerkship was successful. We further analyzed internship ratings for students who received less than passing grades during the same time period on other third year clerkships such as general surgery, pediatrics, obstetrics and gynecology, family medicine, and psychiatry to evaluate whether poor performance on other individual clerkships were associated with future poor performance at the internship level. Results for this recent cohort of graduates were compared with previously published findings. The overall survey response rate for this 5 year cohort was 81% (689/853). Students who received a less than passing grade in the internal medicine clerkship and required further remediation were 4.5 times more likely to be given poor ratings in the domain of medical expertise and 18.7 times more likely to demonstrate poor professionalism during internship. Further, students requiring internal medicine remediation were 8.5 times more likely to fail USMLE Step 3. No other individual clerkship showed any statistically significant associations with performance at the intern level. On the other hand, 40% of students who successfully remediated and did graduate were not identified during internship as having poor performance. Unsuccessful clinical performance which requires remediation in

  9. Factors Affecting University Entrants' Performance in High-Stakes Tests: A Multiple Regression Analysis

    Uy, Chin; Manalo, Ronaldo A.; Cabauatan, Ronaldo R.

    2015-01-01

    In the Philippines, students seeking admission to a university are usually required to meet certain entrance requirements, including passing the entrance examinations with questions on IQ and English, mathematics, and science. This paper aims to determine the factors that affect the performance of entrants into business programmes in high-stakes…

  10. High performance leadership in unusually challenging educational circumstances

    Andy Hargreaves

    2015-04-01

    Full Text Available This paper draws on findings from the results of a study of leadership in high performing organizations in three sectors. Organizations were sampled and included on the basis of high performance in relation to no performance, past performance, performance among similar peers and performance in the face of limited resources or challenging circumstances. The paper concentrates on leadership in four schools that met the sample criteria. It draws connections to explanations of the high performance of Estonia on the OECD PISA tests of educational achievement. The article argues that leadership in these four schools that performed above expectations comprised more than a set of competencies. Instead, leadership took the form of a narrative or quest that pursued an inspiring dream with relentless determination; took improvement pathways that were more innovative than comparable peers; built collaboration and community including with competing schools; and connected short-term success to long-term sustainability.

  11. Construction products performances and basic requirements for fire safety of facades in energy rehabilitation of buildings

    Laban Mirjana Đ.

    2015-01-01

    Full Text Available A construction product means any product or kit which is produced and placed on the market for incorporation in a permanent manner in construction works, or parts thereof, and the performance of which has an effect on the performance of the construction works with respect to the basic requirements for construction works. Safety in case of fire and Energy economy and heat retention represent two of the seven basic requirements which a building has to meet according to contemporary technical rules on planning and construction. The performance of external wall building materials (particularly their reaction to fire) can significantly affect fire spread across the façade and to other parts of the building. Therefore, façade shaping and materialization in the building renewal process have to meet the fire safety requirement as well as the energy requirement. A brief survey of the development of fire protection regulations in Serbia is presented in the paper. Preventive measures for fire risk reduction in building façade energy renewal are proposed according to contemporary fire safety requirements.

  12. A Lithium-Air Battery Stably Working at High Temperature with High Rate Performance.

    Pan, Jian; Li, Houpu; Sun, Hao; Zhang, Ye; Wang, Lie; Liao, Meng; Sun, Xuemei; Peng, Huisheng

    2018-02-01

    Driven by the increasing requirements for energy supply in both modern life and the automobile industry, the lithium-air battery serves as a promising candidate due to its high energy density. However, organic solvents in electrolytes are likely to rapidly vaporize and form flammable gases at elevated temperatures, which can cause serious safety problems and great harm to people. Therefore, a lithium-air battery that can work stably at high temperature is desirable. Herein, through the use of an ionic liquid, aligned carbon nanotubes, and a fiber-shaped design, a new type of lithium-air battery that can effectively work at temperatures up to 140 °C is developed. Ionic liquids offer wide electrochemical windows and low vapor pressures, and provide high thermal stability for lithium-air batteries. The aligned carbon nanotubes have good electrical and thermal conductivity. Meanwhile, the fiber format offers both flexibility and weavability, and realizes rapid heat conduction and uniform heat distribution in the battery. In addition, the high temperature also largely improves the specific power by increasing the ionic conductivity and the catalytic activity of the cathode. Consequently, the lithium-air battery can work stably at 140 °C with a high specific current of 10 A g⁻¹ for 380 cycles, indicating high stability and good rate performance at high temperatures. This work may provide an effective paradigm for the development of high-performance energy storage devices. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. HIGH PERFORMANCE STATIONARY DISCHARGES IN THE DIII-D TOKAMAK

    Luce, T.C.; Wade, M.R.; Ferron, J.R.; Politzer, P.A.; Hyatt, A.W.; Sips, A.C.C.; Murakami, M.

    2003-01-01

    Recent experiments in the DIII-D tokamak [J.L. Luxon, Nucl. Fusion 42, 614 (2002)] have demonstrated high β with good confinement quality under stationary conditions. Two classes of stationary discharges are observed: low-q_95 discharges with sawteeth and higher-q_95 discharges without sawteeth. The discharges are deemed stationary when the plasma conditions are maintained for times greater than the current profile relaxation time. In both cases the normalized fusion performance (β_N H_89P/q_95^2) reaches or exceeds the value of this parameter projected for Q_fus = 10 in the International Thermonuclear Experimental Reactor (ITER) design [R. Aymar, et al., Plasma Phys. Control. Fusion 44, 519 (2002)]. The presence of sawteeth reduces the maximum achievable normalized β, while confinement quality (confinement time relative to scalings) is largely independent of q_95. Even with the reduced β limit, the normalized fusion performance maximizes at the lowest q_95. Projections to burning plasma conditions are discussed, including the methodology of the projection and the key physics issues which still require investigation.
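    Written out, the normalized fusion performance quoted above is the dimensionless combination below, where β_N is the normalized beta and H_89P is the confinement enhancement factor over the ITER89-P L-mode scaling; the ≈0.4 level shown for the ITER Q_fus = 10 design point is a commonly quoted reference value rather than a figure taken from this record.

      G \;=\; \frac{\beta_N \, H_{89\mathrm{P}}}{q_{95}^{2}},
      \qquad
      \beta_N \;=\; \frac{\beta\,[\%]}{I_p\,[\mathrm{MA}] \,/\, \big(a\,[\mathrm{m}]\, B_T\,[\mathrm{T}]\big)},
      \qquad
      G_{\mathrm{ITER},\; Q_{\mathrm{fus}}=10} \approx 0.4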

  14. Sympathetic Tone Induced by High Acoustic Tempo Requires Fast Respiration.

    Ken Watanabe

    Full Text Available Many studies have revealed the influences of music, and particularly its tempo, on the autonomic nervous system (ANS) and respiration patterns. Since there is the interaction between the ANS and the respiratory system, namely sympatho-respiratory coupling, it is possible that the effect of musical tempo on the ANS is modulated by the respiratory system. Therefore, we investigated the effects of the relationship between musical tempo and respiratory rate on the ANS. Fifty-two healthy people aged 18-35 years participated in this study. Their respiratory rates were controlled by using a silent electronic metronome and they listened to simple drum sounds with a constant tempo. We varied the respiratory rate-acoustic tempo combination. The respiratory rate was controlled at 15 or 20 cycles per minute (CPM) and the acoustic tempo was 60 or 80 beats per minute (BPM) or the environment was silent. Electrocardiograms and an elastic chest band were used to measure the heart rate and respiratory rate, respectively. The mean heart rate and heart rate variability (HRV) were regarded as indices of ANS activity. We observed a significant increase in the mean heart rate and the low (0.04-0.15 Hz) to high (0.15-0.40 Hz) frequency ratio of HRV, only when the respiratory rate was controlled at 20 CPM and the acoustic tempo was 80 BPM. We suggest that the effect of acoustic tempo on the sympathetic tone is modulated by the respiratory system.

  15. Sympathetic Tone Induced by High Acoustic Tempo Requires Fast Respiration.

    Watanabe, Ken; Ooishi, Yuuki; Kashino, Makio

    2015-01-01

    Many studies have revealed the influences of music, and particularly its tempo, on the autonomic nervous system (ANS) and respiration patterns. Since there is the interaction between the ANS and the respiratory system, namely sympatho-respiratory coupling, it is possible that the effect of musical tempo on the ANS is modulated by the respiratory system. Therefore, we investigated the effects of the relationship between musical tempo and respiratory rate on the ANS. Fifty-two healthy people aged 18-35 years participated in this study. Their respiratory rates were controlled by using a silent electronic metronome and they listened to simple drum sounds with a constant tempo. We varied the respiratory rate-acoustic tempo combination. The respiratory rate was controlled at 15 or 20 cycles per minute (CPM) and the acoustic tempo was 60 or 80 beats per minute (BPM) or the environment was silent. Electrocardiograms and an elastic chest band were used to measure the heart rate and respiratory rate, respectively. The mean heart rate and heart rate variability (HRV) were regarded as indices of ANS activity. We observed a significant increase in the mean heart rate and the low (0.04-0.15 Hz) to high (0.15-0.40 Hz) frequency ratio of HRV, only when the respiratory rate was controlled at 20 CPM and the acoustic tempo was 80 BPM. We suggest that the effect of acoustic tempo on the sympathetic tone is modulated by the respiratory system.
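    The low- to high-frequency HRV ratio used above as a sympathetic-tone index can be computed along the following lines; the 4 Hz resampling rate, the Welch segment length, and the synthetic RR series are assumptions of this sketch, not the processing choices of the study.

      # Sketch: LF/HF ratio of heart rate variability from RR intervals, using the
      # 0.04-0.15 Hz (LF) and 0.15-0.40 Hz (HF) bands named in the abstract.
      import numpy as np
      from scipy.interpolate import interp1d
      from scipy.signal import welch

      def lf_hf_ratio(rr_s, fs=4.0):
          """rr_s: successive RR intervals in seconds; returns LF/HF power ratio."""
          t = np.cumsum(rr_s)                          # beat times (s)
          t_even = np.arange(t[0], t[-1], 1.0 / fs)    # evenly spaced resampling grid
          rr_even = interp1d(t, rr_s, kind="cubic")(t_even)
          f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, len(rr_even)))
          df = f[1] - f[0]
          lf = pxx[(f >= 0.04) & (f < 0.15)].sum() * df    # low-frequency power
          hf = pxx[(f >= 0.15) & (f <= 0.40)].sum() * df   # high-frequency power
          return lf / hf

      # Synthetic RR series: ~75 bpm with respiratory modulation near 20 CPM (0.33 Hz).
      rng = np.random.default_rng(1)
      beats = np.arange(300) * 0.8
      rr = 0.8 + 0.03 * np.sin(2 * np.pi * 0.33 * beats) + 0.01 * rng.standard_normal(300)
      print(f"LF/HF = {lf_hf_ratio(rr):.2f}")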

  16. Estimation of waste package performance requirements for a nuclear waste repository in basalt

    Wood, B.J.

    1980-07-01

    A method of developing waste package performance requirements for specific nuclides is described. The method is based on federal regulations concerning permissible concentrations in solution at the point of discharge to the accessible environment, a simple and conservative transport model, and baseline and potential worst-case release scenarios.

  17. 40 CFR 158.2070 - Biochemical pesticides product performance data requirements.

    2010-07-01

    ... Title 40 (Protection of Environment), § 158.2070: Biochemical pesticides product performance data requirements. ... efficacy data unless the pesticide product bears a claim to control public health pests, such as pest...

  18. Defense Organization Officials Did Not Consistently Comply With Requirements for Assessing Contractor Performance

    2017-02-01

    must evaluate compliance with reporting requirements frequently so they can readily identify delinquent past performance reports. The FAR also... problems the contractor recovered from without impact to the contract/order. There should have been no significant weaknesses identified. A... contractor had trouble overcoming and state how it impacted the Government. A Marginal rating should be supported by referencing the management tool

  19. 20 CFR 641.879 - What are the fiscal and performance reporting requirements for recipients?

    2010-04-01

    ... Title 20 (Employees' Benefits), § 641.879: What are the fiscal and performance reporting requirements for recipients? EMPLOYMENT AND TRAINING ADMINISTRATION... Quarterly Progress Report (QPR) to the Department in electronic format via the Internet within 30 days after...

  20. 75 FR 76254 - Official Performance and Procedural Requirements for Grain Weighing Equipment and Related Grain...

    2010-12-08

    ... DEPARTMENT OF AGRICULTURE, Grain Inspection, Packers and Stockyards Administration, 7 CFR Part 802, [Docket GIPSA-2010-FGIS-0012], RIN 0580-AB19. Official Performance and Procedural Requirements for Grain Weighing Equipment and Related Grain Handling Systems. AGENCY: Grain Inspection, Packers and Stockyards...

  1. Comparison of exertion required to perform standard and active compression-decompression cardiopulmonary resuscitation.

    Shultz, J J; Mianulli, M J; Gisch, T M; Coffeen, P R; Haidet, G C; Lurie, K G

    1995-02-01

    Active compression-decompression (ACD) cardiopulmonary resuscitation (CPR) utilizes a hand-held suction device with a pressure gauge that enables the operator to compress as well as actively decompress the chest. This new CPR method improves hemodynamic and ventilatory parameters when compared with standard CPR. ACD-CPR is easy to perform but may be more labor intensive. The purpose of this study was to quantify and compare the work required to perform ACD and standard CPR. Cardiopulmonary testing was performed on six basic cardiac life support- and ACD-trained St. Paul, MN fire-fighter personnel during performance of 10 min each of ACD and standard CPR on a mannequin equipped with a compression gauge. The order of CPR techniques was determined randomly with > 1 h between each study. Each CPR method was performed at 80 compressions/min (timed with a metronome), to a depth of 1.5-2 inches, and with a 50% duty cycle. Baseline cardiopulmonary measurements were similar at rest prior to performance of both CPR methods. During standard and ACD-CPR, respectively, rate-pressure product was 18.2 +/- 3.0 vs. 23.8 +/- 1.7 (x 1000, P < ...), and significantly more work was required to perform ACD-CPR compared with standard CPR. Both methods require subanaerobic energy expenditure and can therefore be sustained for a sufficient length of time by most individuals to optimize resuscitation efforts. Due to the slightly higher work requirement, ACD-CPR may be more difficult to perform compared with standard CPR for long periods of time, particularly by individuals unaccustomed to the workload requirement of CPR, in general.
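    The rate-pressure product reported above is conventionally the product of heart rate and systolic blood pressure (quoted divided by 1000); a minimal sketch of that arithmetic follows, with heart-rate and blood-pressure values that are purely illustrative (chosen only so the products land near the reported means), not measurements from the study.

      # Sketch: rate-pressure product (RPP) = heart rate x systolic blood pressure,
      # reported as RPP/1000. The numbers below are illustrative, not study data.
      def rate_pressure_product(heart_rate_bpm: float, systolic_bp_mmhg: float) -> float:
          return heart_rate_bpm * systolic_bp_mmhg / 1000.0

      std_cpr = rate_pressure_product(115, 158)   # hypothetical rescuer values
      acd_cpr = rate_pressure_product(130, 183)
      print(f"standard CPR: {std_cpr:.1f}, ACD-CPR: {acd_cpr:.1f} (x1000 bpm*mmHg)")
      print(f"relative increase: {(acd_cpr - std_cpr) / std_cpr:.0%}")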

  2. Applying Required Navigation Performance Concept for Traffic Management of Small Unmanned Aircraft Systems

    Jung, Jaewoo; D'Souza, Sarah N.; Johnson, Marcus A.; Ishihara, Abraham K.; Modi, Hemil C.; Nikaido, Ben; Hasseeb, Hashmatullah

    2016-01-01

    In anticipation of a rapid increase in the number of civil Unmanned Aircraft System (UAS) operations, NASA is researching prototype technologies for a UAS Traffic Management (UTM) system that will investigate airspace integration requirements for enabling safe, efficient low-altitude operations. One aspect a UTM system must consider is the correlation between UAS operations (such as vehicles, operation areas and durations), UAS performance requirements, and the risk to people and property in the operational area. This paper investigates the potential application of the International Civil Aviation Organization's (ICAO) Required Navigation Performance (RNP) concept to relate operational risk to trajectory conformance requirements. The approach is to first define a method to quantify operational risk and then define the RNP level requirement as a function of that risk. Greater operational risk corresponds to a more accurate RNP level, that is, a smaller tolerable Total System Error (TSE). Data from 19 small UAS flights are used to develop and validate a formula that defines this relationship. An approach to assessing UAS-RNP conformance capability using vehicle modeling and wind field simulation is developed to investigate how this formula may be applied in a future UTM system. The results indicate the modeled vehicle's flight path is robust to the simulated wind variation and can meet the RNP level requirements calculated by the formula. The results also indicate how vehicle-modeling fidelity may be improved to adequately verify the assessed RNP level.
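    A minimal sketch of an RNP conformance check of the kind discussed above follows, using the ICAO convention that total system error must remain within the RNP value for at least 95% of flight time; the risk-to-RNP formula developed in the paper is not reproduced here, and the function name, units, and sample error data are assumptions.

      # Sketch: generic RNP conformance check. At least 95% of cross-track error
      # samples must lie within the RNP value. Units (nautical miles) and the sample
      # error data are assumptions; this is not the paper's risk-to-RNP formula.
      import numpy as np

      def meets_rnp(cross_track_error_nm: np.ndarray, rnp_nm: float) -> bool:
          containment = np.mean(np.abs(cross_track_error_nm) <= rnp_nm)
          return containment >= 0.95

      rng = np.random.default_rng(2)
      xte = rng.normal(0.0, 0.01, size=5000)      # ~18 m standard deviation of error
      print(meets_rnp(xte, rnp_nm=0.05))          # True: conforms to an RNP 0.05 requirement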

  3. High-performance liquid chromatographic radioenzymatic assay for plasma catecholamines

    Klaniecki, T.S.; Corder, C.N.; McDonald, R.H. Jr.; Feldman, J.A.

    1977-01-01

    A new assay method for plasma catecholamines (CA) requiring only 50 μl has been developed, which uses high performance liquid chromatography (HPLC). The norepinephrine (NE), dopamine (D), and epinephrine (E) compounds found in plasma are radioactively O-methylated with S-[methyl-3H]-adenosyl-L-methionine (3H-SAM) in a reaction catalyzed by catechol-O-methyltransferase (COMT). The reaction is terminated and a standard mixture of nonradioactive O-methylated analogues of NE, D, and E is added to act as a carrier. Following separation by HPLC, the D,L-normetanephrine (NMN), 3-methoxy-4-hydroxyphenylethylamine or 3-methoxytyramine (3-MOT), and metanephrine (MN) radioactive peaks are collected, which represent NE, D, and E, respectively. Then NMN and MN are oxidized to vanillin, and 3-MOT is acetylated. The products are subsequently separated by solvent extraction. This is necessary in order to avoid high radioactive blanks and to allow quantitation of the radioactivity by liquid scintillation spectrometry. The mean supine levels of NE, D, and E in normal subjects were respectively 182, 33, and 87 pg/ml of plasma. Similar assays on patients with pheochromocytoma revealed 797, 80, and 470 pg/ml.

  4. WARRIOR II, a high performance modular electric robot system

    Downton, G.C.

    1996-01-01

    A high performance electric robot, WARRIOR, was built for in-reactor welding at the Oldbury nuclear power plant in the United Kingdom in the mid 1980s. WARRIOR II has been developed as a lighter, smaller diameter articulated welding robot which can be deployed on its umbilical down a stand pipe for remote docking with the manipulator system which delivers it to its work site. A key feature of WARRIOR II has been the development of a prototype spherical modular joint. The module provides the drive torque necessary to motivate the robot arm, acts as the joint bearing, has standard mechanical interfaces for the limb sections, accurately measures the joint angle and has cable services running through the centre. It can act either as a bend or rotate joint and the interconnecting limb sections need only to be simple tubular sections. A wide range of manipulator configurations to suit the access constraints of particular problems can be achieved with a set of joint modules and limb sections. A general purpose motion controller has also been developed which is capable of kinematically controlling any configuration of WARRIOR II thus contributing to the realisation of the concept of a general purpose tool which can be used over and over again, at short notice, in any situation where a high precision, light weight, versatile manipulator is required. (UK)

  5. A High Performance Delta-Sigma Modulator for Neurosensing.

    Xu, Jian; Zhao, Menglian; Wu, Xiaobo; Islam, Md Kafiul; Yang, Zhi

    2015-08-07

    Recorded neural data are frequently corrupted by large amplitude artifacts that are triggered by a variety of sources, such as subject movements, organ motions, electromagnetic interferences and discharges at the electrode surface. To prevent the system from saturating and the electronics from malfunctioning due to these large artifacts, a wide dynamic range for data acquisition is demanded, which is quite challenging to achieve and would require excessive circuit area and power for implementation. In this paper, we present a high performance Delta-Sigma modulator along with several design techniques and enabling blocks to reduce circuit area and power. The modulator was fabricated in a 0.18-µm CMOS process. Powered by a 1.0-V supply, the chip can achieve an 85-dB peak signal-to-noise-and-distortion ratio (SNDR) and an 87-dB dynamic range when integrated over a 10-kHz bandwidth. The total power consumption of the modulator is 13 µW, which corresponds to a figure-of-merit (FOM) of 45 fJ/conversion step. These competitive circuit specifications make this design a good candidate for building high precision neurosensors.
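    The quoted figure of merit can be reproduced from the reported numbers under the common assumption that FOM = P / (2^ENOB · 2 · BW) with ENOB = (SNDR - 1.76)/6.02; whether this is the exact formula the authors used is an assumption of this sketch.

      # Sketch: back-of-envelope check of the reported 45 fJ/conversion-step, assuming
      # the common Walden figure of merit FOM = P / (2**ENOB * 2 * BW).
      power_w = 13e-6        # 13 uW total power
      sndr_db = 85.0         # peak SNDR
      bandwidth_hz = 10e3    # 10 kHz signal bandwidth

      enob = (sndr_db - 1.76) / 6.02
      fom_j = power_w / (2 ** enob * 2 * bandwidth_hz)
      print(f"ENOB = {enob:.1f} bits, FOM = {fom_j * 1e15:.0f} fJ/conversion-step")   # ~45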

  6. High performance graphics processors for medical imaging applications

    Goldwasser, S.M.; Reynolds, R.A.; Talton, D.A.; Walsh, E.S.

    1989-01-01

    This paper describes a family of high-performance graphics processors with special hardware for interactive visualization of 3D human anatomy. The basic architecture expands to multiple parallel processors, each processor using pipelined arithmetic and logical units for high-speed rendering of Computed Tomography (CT), Magnetic Resonance (MR) and Positron Emission Tomography (PET) data. User-selectable display alternatives include multiple 2D axial slices, reformatted images in sagittal or coronal planes and shaded 3D views. Special facilities support applications requiring color-coded display of multiple datasets (such as radiation therapy planning), or dynamic replay of time-varying volumetric data (such as cine-CT or gated MR studies of the beating heart). The current implementation is a single processor system which generates reformatted images in true real time (30 frames per second), and shaded 3D views in a few seconds per frame. It accepts full scale medical datasets in their native formats, so that minimal preprocessing delay exists between data acquisition and display.

  7. Lightweight Provenance Service for High-Performance Computing

    Dai, Dong; Chen, Yong; Carns, Philip; Jenkins, John; Ross, Robert

    2017-09-09

    Provenance describes detailed information about the history of a piece of data, containing the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of data. Provenance is key to supporting many data management functionalities that are increasingly important in operations such as identifying data sources, parameters, or assumptions behind a given result; auditing data usage; or understanding details about how inputs are transformed into outputs. Despite its importance, however, provenance support is largely underdeveloped in highly parallel architectures and systems. One major challenge is the demanding requirements of providing provenance service in situ. The need to remain lightweight and to be always on often conflicts with the need to be transparent and offer an accurate catalog of details regarding the applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrument mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases have confirmed its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.
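    As an illustration of the provenance relationships described above (users, jobs, processes, and files linked by relations such as "used" and "generated"), a minimal record structure is sketched below; the class, field names, and relation labels are hypothetical and are not the format LPS itself uses.

      # Sketch: a toy provenance graph relating users, jobs, processes, and files.
      # Field names and relations are hypothetical, not the storage format used by LPS.
      from dataclasses import dataclass, field
      from typing import List, Tuple

      @dataclass
      class ProvenanceGraph:
          nodes: List[str] = field(default_factory=list)
          edges: List[Tuple[str, str, str]] = field(default_factory=list)   # (source, relation, target)

          def record(self, source: str, relation: str, target: str) -> None:
              for n in (source, target):
                  if n not in self.nodes:
                      self.nodes.append(n)
              self.edges.append((source, relation, target))

      g = ProvenanceGraph()
      g.record("user:alice", "started", "job:42")
      g.record("job:42", "ran", "proc:1234")
      g.record("proc:1234", "used", "file:/data/input.nc")
      g.record("proc:1234", "generated", "file:/data/output.h5")
      print(g.edges)   # e.g. answers "which inputs produced /data/output.h5?"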

  8. High-Level software requirements specification for the TWRS controlled baseline database system

    Spencer, S.G.

    1998-01-01

    This Software Requirements Specification (SRS) is an as-built document that presents the Tank Waste Remediation System (TWRS) Controlled Baseline Database (TCBD) in its current state. It was originally known as the Performance Measurement Control System (PMCS). Conversion to the new system name has not occurred within the current production system. Therefore, for simplicity, all references to TCBD are equivalent to PMCS references. This SRS will reference the PMCS designator from this point forward to capture the as-built SRS. This SRS is written at a high level and is intended to provide the design basis for the PMCS. The PMCS was first released as the electronic data repository for cost, schedule, and technical administrative baseline information for the TWRS Program. During its initial development, the PMCS was accepted by the customer, TWRS Business Management, with no formal documentation to capture the initial requirements.

  9. Sex Differences in Mathematics Performance among Senior High ...

    This study explored sex differences in mathematics performance of students in the final year of high school and changes in these differences over a 3-year period in Ghana. A convenience sample of 182 students, 109 boys and 72 girls in three high schools in Ghana was used. Mathematics performance was assessed using ...

  10. Control switching in high performance and fault tolerant control

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2010-01-01

    The problem of reliability in high performance control and in fault tolerant control is considered in this paper. A feedback controller architecture for high performance and fault tolerance is considered. The architecture is based on the Youla-Jabr-Bongiorno-Kucera (YJBK) parameterization. By usi...

  11. Mechanical Properties of High Performance Cementitious Grout (II)

    Sørensen, Eigil V.

    The present report is an update of the report “Mechanical Properties of High Performance Cementitious Grout (I)” [1] and describes tests carried out on the high performance grout MASTERFLOW 9500, marked “WMG 7145 FP”, developed by BASF Construction Chemicals A/S and designed for use in grouted...

  12. Development of new high-performance stainless steels

    Park, Yong Soo

    2002-01-01

    This paper focused on high-performance stainless steels and their development status. The effect of nitrogen addition on super-stainless steel was discussed. Research activities at Yonsei University on austenitic and martensitic high-performance stainless steels, and on the next-generation duplex stainless steels, were introduced.

  13. Development of High-Performance Cast Crankshafts. Final Technical Report

    Bauer, Mark E [General Motors, Detroit, MI (United States)

    2017-03-31

    The objective of this project was to develop technologies that would enable the production of cast crankshafts that can replace high performance forged steel crankshafts. To achieve this, the Ultimate Tensile Strength (UTS) of the new material needs to be 850 MPa with a desired minimum Yield Strength (YS; 0.2% offset) of 615 MPa and at least 10% elongation. Perhaps more challenging, the cast material needs to be able to achieve sufficient local fatigue properties to satisfy the durability requirements in today’s high performance gasoline and diesel engine applications. The project team focused on the development of cast steel alloys for application in crankshafts to take advantage of the higher stiffness over other potential material choices. The material and process developed should be able to produce high-performance crankshafts at no more than 110% of the cost of current production cast units, perhaps the most difficult objective to achieve. To minimize costs, the primary alloy design strategy was to design compositions that can achieve the required properties with minimal alloying and post-casting heat treatments. An Integrated Computational Materials Engineering (ICME) based approach was utilized, rather than relying only on traditional trial-and-error methods, which has been proven to accelerate alloy development time. Prototype melt chemistries designed using ICME were cast as test specimens and characterized iteratively to develop an alloy design within a stage-gate process. Standard characterization and material testing was done to validate the alloy performance against design targets and provide feedback to material design and manufacturing process models. Finally, the project called for Caterpillar and General Motors (GM) to develop optimized crankshaft designs using the final material and manufacturing processing path developed. A multi-disciplinary effort was to integrate finite element analyses by engine designers and geometry-specific casting

  14. Micromagnetics on high-performance workstation and mobile computational platforms

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including a multi-core Intel central processing unit, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of the embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built as low-power computing clusters.

  15. High-Performance Secure Database Access Technologies for HEP Grids

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist’s computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, who states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications.” There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the

  16. High-Performance Secure Database Access Technologies for HEP Grids

    Vranicar, Matthew; Weicher, John

    2006-01-01

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, who states that 'Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications'. There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the secure

  17. Energy-Performance-Based Design-Build Process: Strategies for Procuring High-Performance Buildings on Typical Construction Budgets: Preprint

    Scheib, J.; Pless, S.; Torcellini, P.

    2014-08-01

    NREL experienced a significant increase in employees and facilities on our 327-acre main campus in Golden, Colorado over the past five years. To support this growth, researchers developed and demonstrated a new building acquisition method that successfully integrates energy efficiency requirements into the design-build requests for proposals and contracts. We piloted this energy performance based design-build process with our first new construction project in 2008. We have since replicated and evolved the process for large office buildings, a smart grid research laboratory, a supercomputer, a parking structure, and a cafeteria. Each project incorporated aggressive efficiency strategies using contractual energy use requirements in the design-build contracts, all on typical construction budgets. We have found that when energy efficiency is a core project requirement as defined at the beginning of a project, innovative design-build teams can integrate the most cost effective and high performance efficiency strategies on typical construction budgets. When the design-build contract includes measurable energy requirements and is set up to incentivize design-build teams to focus on achieving high performance in actual operations, owners can now expect their facilities to perform. As NREL completed the new construction in 2013, we have documented our best practices in training materials and a how-to guide so that other owners and owner's representatives can replicate our successes and learn from our experiences in attaining market viable, world-class energy performance in the built environment.

  18. High Performance Walls in Hot-Dry Climates

    Hoeschele, Marc [National Renewable Energy Lab. (NREL), Golden, CO (United States); Springer, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dakin, Bill [National Renewable Energy Lab. (NREL), Golden, CO (United States); German, Alea [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2015-01-01

    High performance walls represent a high priority measure for moving the next generation of new homes to the Zero Net Energy performance level. The primary goal in improving wall thermal performance revolves around increasing the wall framing from 2x4 to 2x6, adding more cavity and exterior rigid insulation, achieving insulation installation criteria meeting ENERGY STAR's thermal bypass checklist, and reducing the amount of wood penetrating the wall cavity.

  19. High Performance Computing Modernization Program Kerberos Throughput Test Report

    2017-10-26

    Naval Research Laboratory, Washington, DC 20375-5320. NRL/MR/5524--17-9751. High Performance Computing Modernization Program Kerberos Throughput Test Report. Daniel G. Gdula and ...

  20. High power valve regulated lead-acid batteries for new vehicle requirements

    Trinidad, Francisco; Sáez, Francisco; Valenciano, Jesús

    The performance of high power VRLA ORBITAL™ batteries is presented. These batteries have been designed with isolated cylindrical cells, providing high reliability to the recombination process, while maintaining, at the same time, a very high compression (>80 kPa) over the life of the battery. Hence, the resulting VRLA modules combine a high rate capability with a very good cycle performance. Two different electrochemically active material compositions have been developed: high porosity and low porosity for starting and deep cycle applications, respectively (depending on the power demand and depth of discharge). Although the initial performance of the starting version is higher, after a few cycles the active material of the deep cycle version is fully developed and achieves the same high rate capability. Both types are capable of supplying the necessary reliability for cranking at the lowest temperature (-40°C). Specific power of over 500 W/kg is achievable at a much lower cost than for nickel-metal hydride systems. Apart from the initial performance, an impressive behaviour of the cycling version has been found in deep cycle applications, due to the highly compressed and high density active material. When submitted to continuous discharge-charge cycles at 75% (IEC 896-2 specification) and 100% (BCI deep cycle) DoD, it has been found that the batteries are still healthy after more than 1000 and 700 cycles, respectively. However, it has been proven that the application of an IUi algorithm (up to 110% of overcharging) with a small constant current charging period at the end of the charge is absolutely necessary to achieve the above results. Without the final boosting period, the cycle life of the battery could be substantially shortened. The high specific power and reliability observed in the tests carried out would allow ORBITAL™ batteries to comply with the more demanding requirements that are being introduced in conventional and future hybrid electric