WorldWideScience

Sample records for high performance microprocessor

  1. Architectural and compiler techniques for energy reduction in high-performance microprocessors

    Bellas, Nikolaos

    1999-11-01

    The microprocessor industry has started viewing power, along with area and performance, as a decisive design factor in today's microprocessors. The increasing cost of packaging and cooling systems poses stringent requirements on the maximum allowable power dissipation. Most of the research in recent years has focused on the circuit, gate, and register-transfer (RT) levels of the design. In this research, we focus on the software running on a microprocessor and view the program as a power consumer. Our work concentrates on the role of the compiler in the construction of "power-efficient" code, and especially on its interaction with the hardware so that unnecessary processor activity is avoided. We propose techniques that use extra hardware features and compiler-driven code transformations that specifically target activity reduction in parts of the CPU known to be large power and energy consumers. Design for low power/energy at this level of abstraction yields larger energy gains than at the lower stages of the design hierarchy, where the design team has already made the most important design commitments. The role of the compiler in generating code that exploits the processor organization is also fundamental to energy minimization. Hence, we propose a hardware/software co-design paradigm and show which code transformations the compiler must perform so that "wasted" power in a modern microprocessor can be trimmed. More specifically, we propose a technique that uses an additional mini cache located between the instruction cache (I-Cache) and the CPU core; the mini cache buffers instructions that are nested within loops and are continuously fetched from the I-Cache. This mechanism can create very substantial energy savings, since the I-Cache unit is one of the main power consumers in most of today's high-performance microprocessors. Results are reported for the SPEC95 benchmarks on the R-4400 processor, which implements the MIPS2 instruction
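
    As a rough illustration of the loop-buffer idea described above (a sketch of the general mechanism only, not the authors' actual design; the buffer size, replacement policy and instruction trace are invented), the following C fragment models a tiny mini cache in front of the I-Cache and counts how many instruction fetches it absorbs for a looping address stream:

```c
#include <stdio.h>

#define BUF_ENTRIES 8          /* assumed mini-cache capacity (addresses)  */

static unsigned buf[BUF_ENTRIES];
static int buf_valid[BUF_ENTRIES];
static int buf_next;           /* FIFO replacement pointer                 */

/* Returns 1 if the fetch is served by the mini cache, 0 if it must go to
   the I-Cache (in which case the address is also installed in the buffer). */
static int fetch(unsigned addr)
{
    for (int i = 0; i < BUF_ENTRIES; i++)
        if (buf_valid[i] && buf[i] == addr)
            return 1;                      /* hit: I-Cache stays idle      */
    buf[buf_next] = addr;                  /* miss: install, FIFO victim   */
    buf_valid[buf_next] = 1;
    buf_next = (buf_next + 1) % BUF_ENTRIES;
    return 0;
}

int main(void)
{
    unsigned icache_accesses = 0, total = 0;
    /* Synthetic trace: a 6-instruction loop body executed 100 times. */
    for (int iter = 0; iter < 100; iter++)
        for (unsigned pc = 0x1000; pc < 0x1000 + 6 * 4; pc += 4) {
            total++;
            if (!fetch(pc))
                icache_accesses++;
        }
    printf("%u of %u fetches reached the I-Cache\n", icache_accesses, total);
    return 0;
}
```

    For this synthetic six-instruction loop only the first iteration reaches the I-Cache; every later fetch is served by the small buffer, which is the source of the claimed energy savings.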

  2. Sub-50 nm gate length SOI transistor development for high performance microprocessors

    Horstmann, M.; Greenlaw, D.; Feudel, Th.; Wei, A.; Frohberg, K.; Burbach, G.; Gerhardt, M.; Lenski, M.; Stephan, R.; Wieczorek, K.; Schaller, M.; Hohage, J.; Ruelke, H.; Klais, J.; Huebler, P.; Luning, S.; Bentum, R. van; Grasshoff, G.; Schwan, C.; Cheek, J.; Buller, J.; Krishnan, S.; Raab, M.; Kepler, N.

    2004-01-01

    Partially depleted (PD) SOI technologies have reached maturity for the production of high-speed, low-power microprocessors. The paper highlights several challenges found during the course of development in bringing 40 nm gate length (L_GATE) PD SOI transistors into volume manufacturing for high-speed microprocessors. The key innovations developed for this transistor to overcome classical gate oxide and L_GATE scaling limits are a unique differential triple-spacer structure, stressed overlayer films inducing strain in the silicon channel, and optimized junctions. This transistor structure yields an outstanding ring oscillator speed with an unloaded inverter delay of 5.5 ps. The improvements found are highly manufacturable and scalable for future device technologies such as FD SOI.

  3. Design Example of Useful Memory Latency for Developing a Hazard Preventive Pipeline High-Performance Embedded-Microprocessor

    Ching-Hwa Cheng

    2013-01-01

    The existence of structural, control, and data hazards presents a major challenge in designing an advanced pipelined/superscalar microprocessor. An efficient memory hierarchy (cache-RAM-disk) design greatly enhances the microprocessor's performance. However, there are complex relationships between the memory hierarchy and the functional units in the microprocessor. Most past architectural design simulations focus on the instruction hazard detection/prevention scheme from the viewpoint of the functional units. This paper emphasizes that additional on-board memory can be well utilized to handle hazard conditions. When an instruction encounters a hazard, the memory latency can be exploited to prevent the performance degradation caused by the hazard-prevention mechanism. By using the proposed technique, a better architectural design can be rapidly validated on an FPGA at the start of the design stage. In this paper, the simulation results show that the proposed methodology achieves better performance and lower power consumption than the conventional hazard-prevention technique.
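
    The record does not spell out the hazard-detection logic, but the classic load-use check that such a pipeline must perform can be sketched as follows (a minimal illustration in C, assuming a five-stage pipeline with ID and EX stages; register 0 is treated as "no register"):

```c
#include <stdbool.h>
#include <stdio.h>

/* Minimal sketch (not from the paper) of the classic load-use data-hazard
   check: if the instruction in EX is a load whose destination register is
   a source of the instruction in ID, the pipeline must stall (or, as the
   paper proposes, hide the bubble behind memory latency).                */
typedef struct {
    bool is_load;   /* instruction in EX is a load        */
    int  rd;        /* destination register of EX stage   */
} ExStage;

typedef struct {
    int rs1, rs2;   /* source registers of ID stage       */
} IdStage;

static bool load_use_hazard(const ExStage *ex, const IdStage *id)
{
    return ex->is_load && ex->rd != 0 &&
           (ex->rd == id->rs1 || ex->rd == id->rs2);
}

int main(void)
{
    ExStage ex = { .is_load = true, .rd = 5 };
    IdStage id = { .rs1 = 5, .rs2 = 2 };
    printf("stall needed: %s\n", load_use_hazard(&ex, &id) ? "yes" : "no");
    return 0;
}
```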

  4. Microprocessors

    Cornillie, O A R

    1985-01-01

    Microprocessors presents an overview of the state of the art in the field of microprocessors and illustrates, with the aid of patents, its utilization and application. Organized into six parts, the book begins with an introduction to the microprocessor, microcomputer, and software. Parts I-III focus on program control, digital control, and electrical motor control. Subsequent parts show the medical applications, measuring instruments, and treatment of data in microprocessors.

  5. Evaluation of the performance of microprocessor-based colorimeter

    Randhawa, S. S.; Gupta, R. C.; Bhandari, A. K.; Malhotra, P. S.

    1992-01-01

    Colorimetric estimations have an important role in quantitative studies. An inexpensive and portable microprocessor-based colorimeter developed by the authors is described in this paper. The colorimeter uses a light-emitting diode as the light source, a PIN photodiode as the detector, and an 8085A microprocessor. Blood urea, glucose, total protein, albumin and bilirubin from patient blood samples were analysed with the instrument and the results obtained were compared with assays of the same blood ...

  6. Evaluation of the performance of microprocessor-based colorimeter.

    Randhawa, S S; Gupta, R C; Bhandari, A K; Malhotra, P S

    1992-01-01

    Colorimetric estimations have an important role in quantitative studies. An inexpensive and portable microprocessor-based colorimeter developed by the authors is described in this paper. The colorimeter uses a light-emitting diode as the light source, a PIN photodiode as the detector, and an 8085A microprocessor. Blood urea, glucose, total protein, albumin and bilirubin from patient blood samples were analysed with the instrument and the results obtained were compared with assays of the same blood using a Spectronic 21. A good correlation was found between the results from the two instruments.
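
    For context, the relation below is the standard principle behind such colorimetric assays (not taken from the paper itself): the microprocessor essentially converts the detected light intensity into an absorbance and then into a concentration via the Beer-Lambert law, where I_0 and I are the incident and transmitted intensities, ε the molar absorptivity, l the path length, and c the concentration.

```latex
A \;=\; \log_{10}\frac{I_0}{I} \;=\; \varepsilon \, l \, c
\quad\Longrightarrow\quad
c \;=\; \frac{A}{\varepsilon \, l}
```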

  7. High-speed multiple-channel analog to digital data-acquisition module for microprocessor systems

    Ethridge, C.D.

    1977-01-01

    Intelligent data acquisition and instrumentation systems established by the incorporation of microprocessor technology require high-speed analog-to-digital conversion of multiple-channel input signals. Sophisticated data systems or subsystems are enabled by the flexibility of microprocessor software to establish adaptive input-data procedures. These adaptive procedures are enhanced by versatile, software-controlled interface circuitry.

  8. High speed serial link for UA1 microprocessor network

    Cittolin, S; Zurfluh, E

    1981-01-01

    The UA1 data acquisition system consists of a set of distributed microprocessor units. An interprocessor link, independent of the CAMAC data readout, has been developed in order to have continuous remote control and run-time data handling, e.g. transmission of calibration programs/parameters, equipment test/status and histogram accumulation. The data transmission system is designed to be used in a loop configuration equipped with transceivers for twisted pair cables (RS-422). As an economical system, it is running as an ancillary serial loop-link between microprocessors, like Data Acquisition Crate Controllers and systems with distributed intelligence. The software driver consists of a loop-controller package, which may run in a BAMBI Computer Language environment and a fully interrupt controlled program for all other secondary stations. A special single-character mode provides a handy link for remote debugging in a pseudo-full-duplex mode. The format is based on the HDLC protocol without sequence numbering. ...

  9. High speed serial link for UA1 microprocessor network

    Cittolin, S.; Loefstedt, B.; Zurfluh, E.

    1981-01-01

    The UA1 data acquisition system consists of a set of distributed microprocessor units. An interprocessor link, independent of the CAMAC data readout, has been developed in order to have continuous remote control and run-time data handling, e.g. transmission of calibration programs/parameters, equipment test/status and histogram accumulation. The data transmission system is designed to be used in a loop configuration equipped with transceivers for twisted pair cables (RS-422). As an economical system, it is running as an ancillary serial loop-link between microprocessors, like Data Acquisition Crate Controllers and systems with distributed intelligence. The software driver consists of a loop-controller package, which may run in a BAMBI Computer Language environment, and a fully interrupt-controlled program for all other secondary stations. A special single-character mode provides a handy link for remote debugging in a pseudo-full-duplex mode. The format is based on the HDLC protocol without sequence numbering. The MC6854 chip from Motorola, Inc. enables an implementation with few components. (orig.)

  10. High speed serial link for UA1 microprocessor network

    Cittolin, Sergio; Zurfluh, E

    1981-01-01

    The UA1 data acquisition system consists of a set of distributed microprocessor units. An interprocessor link, independent of the CAMAC data readout, has been developed in order to have continuous remote control and run-time data handling, e.g. transmission of calibration programs/parameters, equipment test/status and histogram accumulation. The data transmission system is designed to be used in a loop configuration equipped with transceivers for twisted pair cables (RS-422). As an economical system, it is running as an ancillary serial loop-link between microprocessors, like data acquisition crate controllers and systems with distributed intelligence. The software driver consists of a loop-controller package, which may run in a BAMBI computer language environment and a fully interrupt controlled program for all other secondary stations. A special single-character mode provides a handy link for remote debugging in a pseudo-full-duplex mode. The format is based on the HDLC protocol without sequence numbering. ...

  11. Designs and performance of three new microprocessor-controlled knee joints.

    Thiele, Julius; Schöllig, Christina; Bellmann, Malte; Kraft, Marc

    2018-02-09

    A crossover design study with a small group of subjects was used to evaluate the performance of three microprocessor-controlled exoprosthetic knee joints (MPKs): C-Leg 4, Plié 3 and Rheo Knee 3. Given that the mechanical designs and control algorithms of the joints determine the user outcome, the influence of these inherent differences on the functional characteristics was investigated in this study. The knee joints were evaluated during level-ground walking at different velocities in a motion analysis laboratory. Additionally, technical analyses using patents, technical documentation and X-ray computed tomography (CT) were performed for each knee joint. The technical analyses showed that only the C-Leg 4 and Rheo Knee 3 allow microprocessor-controlled adaptation of the joint resistances for different gait velocities. Furthermore, the Plié 3 is not able to provide stance extension damping. The biomechanical results showed that all known advantages of MPKs become apparent only if a knee joint adapts its flexion and extension resistances under microprocessor control. However, not all users may benefit from the examined functions, e.g. good accommodation to fast walking speeds or comfortable stance-phase flexion. Hence, a detailed comparison of user demands against the performance of the designated knee joint is mandatory to ensure the best possible user outcome.

  12. Small Microprocessor for ASIC or FPGA Implementation

    Kleyner, Igor; Katz, Richard; Blair-Smith, Hugh

    2011-01-01

    A small microprocessor, suitable for use in applications in which high reliability is required, was designed to be implemented in either an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA). The design is based on a commercial microprocessor architecture, making it possible to use available software development tools and thereby to implement the microprocessor at relatively low cost. The design features enhancements, including trapping during execution of illegal instructions. The internal structure of the design yields relatively high performance, with a significant decrease, relative to other microprocessors that perform the same functions, in the number of microcycles needed to execute macroinstructions. The problem to be solved in designing this microprocessor was to provide a modest level of computational capability in a general-purpose processor while adding as little as possible to the power demand, size, and weight of the system into which the microprocessor would be incorporated. As designed, this microprocessor consumes very little power and occupies only a small portion of a typical modern ASIC or FPGA. The microprocessor operates at a rate of about 4 million instructions per second with a clock frequency of 20 MHz.
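
    As a rough consistency check (an inference from the quoted figures, not a statement in the record), 4 million instructions per second at a 20 MHz clock corresponds to an average of about five clock cycles per instruction:

```latex
\mathrm{CPI} \;\approx\; \frac{f_{\mathrm{clk}}}{\mathrm{IPS}}
\;=\; \frac{20 \times 10^{6}\ \mathrm{Hz}}{4 \times 10^{6}\ \mathrm{instructions/s}}
\;=\; 5 \ \mathrm{cycles\ per\ instruction}
```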

  13. Microprocessor-controlled meter of high Q-values

    Bun'kov, S.N.; Konstantinov, V.I.; Masalov, V.L.; Sevrukova, L.M.; Tokarev, A.D.; Usiv, Yu.V.

    1990-01-01

    The paper describes the functional model of a high-precision, microcomputer-controlled test facility for studying the electrical and physical parameters of superconducting cavities. The basic unit of the test facility is a high-stability, retunable RF oscillator, designed around a frequency/phase tuning scheme built from standard equipment. The systematic error in measuring the loaded Q-value of reentrant cavities is not larger than 5%. A dedicated built-in microcomputer is used to control the measuring test facility and to perform the required switching. 2 refs.; 2 figs.

  14. Microprocessor interfacing

    Vears, R E

    2014-01-01

    Microprocessor Interfacing provides the coverage of the Business and Technician Education Council level NIII unit in Microprocessor Interfacing (syllabus U86/335). Composed of seven chapters, the book explains the foundation in microprocessor interfacing techniques in hardware and software that can be used for problem identification and solving. The book focuses on the 6502, Z80, and 6800/02 microprocessor families. The technique starts with signal conditioning, filtering, and cleaning before the signal can be processed. The signal conversion, from analog to digital or vice versa, is expl

  15. Radiation hardened COTS-based 32-bit microprocessor

    Haddad, N.; Brown, R.; Cronauer, T.; Phan, H.

    1999-01-01

    A high-performance, radiation-hardened 32-bit RISC microprocessor based upon a commercial single-chip CPU has been developed. This paper presents the features of the radiation-hardened microprocessor, the methods used to radiation-harden this device, and the results of radiation testing, and shows that the RAD6000 is well suited for the vast majority of space applications. (authors)

  16. The microprocessor boom

    Anon.

    1979-01-01

    The applications of microprocessors in high energy physics experiments are discussed. Many benefits are predicted for data acquisition and handling systems and for control and monitoring functions. (W.D.L.).

  17. Microprocessor engineering

    Holdsworth, B

    2013-01-01

    Microprocessor Engineering provides an insight in the structures and operating techniques of a small computer. The book is comprised of 10 chapters that deal with the various aspects of computing. The first two chapters tackle the basic arithmetic and logic processes. The third chapter covers the various memory devices, both ROM and RWM. Next, the book deals with the general architecture of microprocessor. The succeeding three chapters discuss the software aspects of machine operation, while the last remaining three chapters talk about the relationship of the microprocessor with the outside wo

  18. Microprocessor control of a wind turbine generator

    Gnecco, A. J.; Whitehead, G. T.

    1978-01-01

    This paper describes a microprocessor based system used to control the unattended operation of a wind turbine generator. The turbine and its microcomputer system are fully described with special emphasis on the wide variety of tasks performed by the microprocessor for the safe and efficient operation of the turbine. The flexibility, cost and reliability of the microprocessor were major factors in its selection.

  19. Microprocessor monitored Auger spectrometer

    Sapin, Michel; Ghaleb, Dominique; Pernot, Bernard.

    1982-05-01

    The operation of an Auger spectrometer, used for studying surface impurity diffusion, has been fully automated with the help of a microprocessor. The characteristics, performance and practical use of the system are described, together with its main advantages for the experimenter. [fr]

  20. Microprocessors in automatic chemical analysis

    Goujon de Beauvivier, M.; Perez, J.-J.

    1979-01-01

    The application of microprocessors to the programming and computation of solution chemical analyses by a sequential technique is examined. Safety, performance and reliability are compared with those of other methods. An example is given of uranium titration by spectrophotometry. [fr]

  1. Microprocessor based techniques at CESR

    Giannini, G.; Cornell Univ., Ithaca, NY

    1981-01-01

    Microprocessor-based systems successfully used in connection with the High Energy Physics experimental program at the Cornell Electron Storage Ring are described. The multiprocessor calibration system for the CUSB calorimeter is analyzed in view of present and future applications. (orig.)

  2. Microprocessorized message multiplexer

    Ejzman, S.; Guglielmi, L.; Jaeger, J.J.

    1980-07-01

    The 'Microprocessorized Message Multiplexer' is an elementary development tool used to create and debug the software of a target microprocessor (User Module: UM). It connects four devices: a terminal, a cassette recorder, the target microprocessor and a host computer on which a macro assembler and editor for the M6800 microprocessor are resident. [fr]

  3. A high resolution wire scanner beam profile monitor with a microprocessor data acquisition system

    Cutler, R.I.; Mohr, D.L.; Whittaker, J.K.; Yoder, N.R.

    1983-01-01

    A beam profile monitor has been constructed for the NBS-LANL Racetrack Microtron. The monitor consists of two perpendicular 30 μm diameter carbon wires that are driven through an electron beam by a pneumatic actuator. A long-lifetime, electroformed nickel bellows is used for the linear-motion vacuum feedthrough. Secondary emission current from the wires and a signal from a transducer measuring the position of the wires are simultaneously digitized by a microprocessor to yield beam current density profiles in two dimensions. The wire scanner is designed for use with both pulsed and cw beams

  4. Microprocessors principles and applications

    Debenham, Michael J

    1979-01-01

    Microprocessors: Principles and Applications deals with the principles and applications of microprocessors and covers topics ranging from computer architecture and programmed machines to microprocessor programming, support systems and software, and system design. A number of microprocessor applications are considered, including data processing, process control, and telephone switching. This book is comprised of 10 chapters and begins with a historical overview of computers and computing, followed by a discussion on computer architecture and programmed machines, paying particular attention to t

  5. Microprocessor aided data acquisition at VEDAS

    Ziem, P.; Drescher, B.; Kapper, K.; Kowallik, R.

    1985-01-01

    Three microprocessor systems have been developed to support data acquisition in nuclear physics multiparameter experiments. A bit-slice processor accumulates up to 256 one-dimensional spectra and 16 two-dimensional spectra. A microprocessor based on the AM29116 ALU performs a fast consistency check on the coincidence data. A VME-bus dual-processor system displays a colored scatterplot.

  6. Fermilab ACP multi-microprocessor project

    Gaines, I.; Areti, H.; Biel, J.; Bracker, S.; Case, G.; Fischler, M.; Husby, D.; Nash, T.

    1984-08-01

    We report on the status of the Fermilab Advanced Computer Program's project to provide more cost-effective computing engines for the high energy physics community. The project will exploit the cheap, but powerful, commercial microprocessors now available by constructing modular multi-microprocessor systems. A working test bed system as well as plans for the next stages of the project are described

  7. Microprocessors in detectors and analysis

    Siskind, E.J.

    1982-01-01

    The increasing need in high energy physics experiments for computation power for both online and offline applications, coupled with the current microprocessor revolution, has led to the examination of the use of microprocessors in various aspects of HEP computing. A brief (and admittedly somewhat biased) review is given of current hardware products, the costs of developing and producing hardware systems, and the costs of providing appropriate software support tools which allow one to make effective use of physicists' time, and the applicability of certain systems to the various needs of HEP computing

  8. Microprocessors in detectors and analysis

    Siskind, E.J.

    1982-01-01

    The increasing need in high energy physics experiments for computation power for both online and offline applications, coupled with the current microprocessor revolution, has led us to examine the use of microprocessors in various aspects of HEP computing. The following article is a brief (and admittedly somewhat biased) review of current hardware products, the costs of developing and producing hardware systems, and the costs of providing appropriate software support tools which allow one to make effective use of physicists' time, and the applicability of certain systems to the various needs of HEP computing

  9. Microprocessing in European High Energy Physics Experiments - ECFA Working Group on Data Processing Standards - Report of the Microprocessor Subgroup May 1982

    European Committee for Future Accelerators (ECFA)

    1982-01-01

    This document contains two reports on the use of microprocessors in European High-Energy Physics experiments. The first is a presentation of data collected by a sub-group of the ECFA working group on data processing standards. The working group is organised by E. Lillestol, University of Bergen, and E.M. Rimmer, CERN, DD Division; the Microprocessor sub-group organiser is L.O. Hertzberger, NIKHEF, Amsterdam. Data are given for projects numbered 81 - 194, and some CERN projects are included. Even though there is some duplication of information, a second report has been appended which covers a wider range of CERN projects. This was the result of a microprocessor survey made at CERN by P. Scharff-Hansen, DD Division, at the request of E. Gabathuler. The ECFA working group intends to have reports for all the sub-groups (10 in number) available in machine-readable form at the CERN computer centre. However, it was felt that the information herein is most valuable to designers and users of microprocessors, and that it...

  10. Architecture of 32 bit CISC (Complex Instruction Set Computer) microprocessors

    Jove, T.M.; Ayguade, E.; Valero, M.

    1988-01-01

    In this paper we describe the main architectural features of the best-known 32-bit CISC microprocessors: the i80386, the MC68000 family, the NS32000 series and the Z80000. We focus on high-level language support, operating system design facilities, memory management, techniques to speed up overall performance, and program debugging facilities. (Author)

  11. Newnes microprocessor pocket book

    Money, Steve

    2014-01-01

    Newnes Microprocessor Pocket Book explains the basic hardware operation of a microprocessor and describes the actions of the various types of instruction that can be executed. A summary of the characteristics of many of the popular microprocessors is presented. Apart from the popular 8- and 16-bit microprocessors, some details are also given of the popular single chip microcomputers and of the reduced instruction set computer (RISC) type processors such as the Transputer, Novix FORTH processor, and Acorn ARM processor.Comprised of 15 chapters, this book discusses the principles involved in bot

  12. CFD-simulation of radiator for air cooling of microprocessors in a limited space

    Trofimov V. E.

    2016-12-01

    One of the final stages of microprocessor development is heat testing. This procedure is performed on a special stand, the main element of which is a switching PCB with one or more mounted microprocessor sockets, chipsets, interfaces, jumpers and other components that provide various modes of microprocessor operation. The temperature of the microprocessor housing is typically controlled using a thermoelectric module. The cold surface of the module, with controlled temperature, is in direct thermal contact with the microprocessor housing designed for cooler installation. On the hot surface of the module a radiator is mounted; the radiator dissipates the cumulative heat flow from both the microprocessor and the module. High-density PCB layout, the requirement of free access to the jumpers and interfaces, and the presence of numerous sensors limit the space for radiator mounting and require the use of an extremely compact radiator, especially under air-cooling conditions. One possible solution to this problem is to reduce the area of the radiator's heat-transfer surfaces by sharply increasing the heat transfer coefficient without increasing the air flow rate. To ensure such a sharp increase of the heat transfer coefficient, one or more dead-end cavities can be made in the heat-transfer surface into which impinging air jets flow. A CFD simulation of this type of radiator has been conducted. The heat-aerodynamic characteristics and design recommendations for removing heat from microprocessors in a limited space have been determined.
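
    The reasoning behind shrinking the fins can be made explicit with Newton's law of cooling (a textbook relation, not a formula taken from the paper): for a fixed heat load Q and allowable temperature difference ΔT, the required heat-transfer area scales inversely with the heat-transfer coefficient h, so any dead-end-cavity/impinging-jet scheme that raises h permits a correspondingly smaller radiator at the same air flow rate.

```latex
Q = h \, A \, \Delta T
\quad\Longrightarrow\quad
A = \frac{Q}{h \, \Delta T}
```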

  13. Memory, microprocessor, and ASIC

    Chen, Wai-Kai

    2003-01-01

    System Timing. ROM/PROM/EPROM. SRAM. Embedded Memory. Flash Memories. Dynamic Random Access Memory. Low-Power Memory Circuits. Timing and Signal Integrity Analysis. Microprocessor Design Verification. Microprocessor Layout Method. Architecture. ASIC Design. Logic Synthesis for Field Programmable Gate Array (FPGA) Technology. Testability Concepts and DFT. ATPG and BIST. CAD Tools for BIST/DFT and Delay Faults.

  14. OS Friendly Microprocessor Architecture

    2017-04-01

    We present an introduction to the patented Operating System Friendly Microprocessor Architecture (OSFA). The software framework to support the hardware-level security features is currently patent ... (Jungwirth P, inventor; US Army, assignee. OS Friendly Microprocessor Architecture. United States Patent 9122610, 2015 Sep 2.) Note: Patrick La Fratta is now affiliated with Micron Technology, Inc., Boise, Idaho.

  15. Microprocessor controller for phasing the accelerator

    Howry, S.K.; Wilmunder, A.R.

    1977-03-01

    A microprocessor controller is being developed to perform automatic phasing of the SLAC accelerator. It will replace the existing relay/analog boxes which are ten years old. The new system is all solid state except for the stepping motors that drive the phase shifters. A description is given of the components of the system, the control algorithm, microprocessor hardware and software design and development, and interaction with SLAC's computer control system

  16. Multi-core Microprocessors

    Based on empirical data, Gordon Moore .... there are numerous models of the same Intel microprocessor such as Pentium. 3). ... returns. The limit on instruction and thread-level processing coupled with ..... This style of parallel programming is.

  17. Microprocessor hardware reliability

    Wright, R I

    1982-01-01

    Microprocessor-based technology has had an impact in nearly every area of industrial electronics and many applications have important safety implications. Microprocessors are being used for the monitoring and control of hazardous processes in the chemical, oil and power generation industries, for the control and instrumentation of aircraft and other transport systems and for the control of industrial machinery. Even in the field of nuclear reactor protection, where designers are particularly conservative, microprocessors are used to implement certain safety functions and may play increasingly important roles in protection systems in the future. Where microprocessors are simply replacing conventional hard-wired control and instrumentation systems no new hazards are created by their use. In the field of robotics, however, the microprocessor has opened up a totally new technology and with it has created possible new and as yet unknown hazards. The paper discusses some of the design and manufacturing techniques which may be used to enhance the reliability of microprocessor-based systems and examines the available reliability data on LSI/VLSI microcircuits. 12 references.

  18. Flexible nanoscale high-performance FinFETs

    Sevilla, Galo T.; Ghoneim, Mohamed T.; Fahad, Hossain M.; Rojas, Jhonathan Prieto; Hussain, Aftab M.; Hussain, Muhammad Mustafa

    2014-01-01

    With the emergence of the Internet of Things (IoT), flexible high-performance nanoscale electronics are more desired. At the moment, FinFET is the most advanced transistor architecture used in the state-of-the-art microprocessors. Therefore, we show

  19. Optimization of Reciprocals and Square Roots on the i860 Microprocessor

    Sinclair, Robert

    1996-01-01

    The i860 microprocessor lacks both a divide and a square root instruction. The consequences of this for code involving many reciprocal square roots, such as many-body simulations involving Coulomb-like potentials, are discussed with a particular emphasis on high performance.
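
    The record does not include the authors' code, but the standard software workaround on a processor without divide and square-root instructions is Newton-Raphson refinement from a cheap initial estimate. A minimal C sketch (the seed constant is the well-known "fast inverse square root" bit trick, used here purely for illustration; it assumes 32-bit IEEE-754 floats and x > 0):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Newton-Raphson refinement of a reciprocal square root: each step
   roughly doubles the number of correct bits of the estimate.       */
static float rsqrt_newton(float x)
{
    uint32_t i;
    float y;
    memcpy(&i, &x, sizeof i);            /* reinterpret bits of x           */
    i = 0x5f3759df - (i >> 1);           /* crude initial estimate of 1/sqrt */
    memcpy(&y, &i, sizeof y);
    y = y * (1.5f - 0.5f * x * y * y);   /* Newton step 1 */
    y = y * (1.5f - 0.5f * x * y * y);   /* Newton step 2 */
    return y;
}

/* For x > 0, 1/x can be recovered without a divide as (1/sqrt(x))^2. */
static float recip_newton(float x)
{
    float r = rsqrt_newton(x);
    return r * r;
}

int main(void)
{
    float x = 3.0f;
    printf("1/sqrt(%g) ~= %g\n", x, rsqrt_newton(x));
    printf("1/%g      ~= %g\n", x, recip_newton(x));
    return 0;
}
```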

  20. Real-time fetal ECG system design using embedded microprocessors

    Meyer-Baese, Uwe; Muddu, Harikrishna; Schinhaerl, Sebastian; Kumm, Martin; Zipf, Peter

    2016-05-01

    The emphasis of this project lies in the development and evaluation of new robust and high-fidelity fetal electrocardiogram (FECG) systems to determine the fetal heart rate (FHR). Recently, several powerful algorithms have been suggested to improve the FECG fidelity. Until now it has been unknown whether these algorithms allow real-time processing, whether they can be used in mobile (low-power) systems, and which algorithm produces the best error rate for a given system configuration. In this work we have developed high-performance, low-power, microprocessor-based biomedical systems that allow a fair comparison of proposed, state-of-the-art FECG algorithms. We evaluate different soft-core microprocessors and compare these solutions to commercial off-the-shelf (COTS) hard-core solutions in terms of price, size, power, and speed.

  1. Automated mixed traffic transit vehicle microprocessor controller

    Marks, R. A.; Cassell, P.; Johnston, A. R.

    1981-01-01

    An improved Automated Mixed Traffic Vehicle (AMTV) speed control system employing a microprocessor and a transistor-chopper motor current controller is described, and its performance is presented in terms of velocity-versus-time curves. The on-board computer hardware and software systems are described, as is the software development system. All of the programming used in this controller was implemented in FORTRAN. This microprocessor controller made possible a number of safety features and improved the comfort associated with starting and stopping. In addition, most of the vehicle's performance characteristics can be altered by simple program parameter changes. A failure analysis of the microprocessor controller was generated and the results are included. Flow diagrams for the speed control algorithms and complete FORTRAN code listings are also included.

  2. Application of multiwall carbon nanotubes for thermal dissipation in a micro-processor

    Bui Hung Thang; Phan Ngoc Hong; Phan Hong Khoi; Phan Ngoc Minh [Institute of Materials Science, Vietnam Academy of Science and Technology, 18 Hoang Quoc Viet Road, Cau Giay District, Hanoi (Viet Nam)], E-mail: minhpn@ims.vast.ac.vn

    2009-09-01

    One of the most valuable properties of carbon nanotube materials is their high thermal conductivity of about 2000 W/m·K (compared with a thermal conductivity of 419 W/m·K for Ag). This suggests applying CNTs as a thermal dissipation medium to improve the performance of computer processors and other high-power electronic devices. In this research, multiwall carbon nanotubes (MWCNTs) made by thermal chemical vapour deposition (CVD) at our laboratory were employed as the heat dissipation medium for the microprocessor of a personal computer with the following configuration: Intel Pentium IV 3.066 GHz, 512 MB of RAM and the Windows XP Service Pack 2 operating system. We directly measured the temperature of the microprocessor during the operation of the computer in two modes: 100% CPU usage mode and over-clocking mode. The measured results showed that when using our thermal dissipation medium (a mixture of the commercial thermal compound mentioned above and 2 wt.% MWCNTs), the temperature of the microprocessor decreased by 5 °C, and the time for the temperature of the microprocessor to rise was three times longer than when using the commercial thermal compound alone. In over-clocking mode, the processor speed reached 3.8 GHz with a 165 MHz system bus clock; this was 1.24 times higher than in non-over-clocking mode. The results confirm a promising way of using MWCNTs as a thermal dissipation medium for microprocessors and high-power electronic devices.

  3. Application of multiwall carbon nanotubes for thermal dissipation in a micro-processor

    Thang, Bui Hung; Hong, Phan Ngoc; Khoi, Phan Hong; Minh, Phan Ngoc

    2009-09-01

    One of the most valuable properties of carbon nanotube materials is their high thermal conductivity of about 2000 W/m·K (compared with a thermal conductivity of 419 W/m·K for Ag). This suggests applying CNTs as a thermal dissipation medium to improve the performance of computer processors and other high-power electronic devices. In this research, multiwall carbon nanotubes (MWCNTs) made by thermal chemical vapour deposition (CVD) at our laboratory were employed as the heat dissipation medium for the microprocessor of a personal computer with the following configuration: Intel Pentium IV 3.066 GHz, 512 MB of RAM and the Windows XP Service Pack 2 operating system. We directly measured the temperature of the microprocessor during the operation of the computer in two modes: 100% CPU usage mode and over-clocking mode. The measured results showed that when using our thermal dissipation medium (a mixture of the commercial thermal compound mentioned above and 2 wt.% MWCNTs), the temperature of the microprocessor decreased by 5 °C, and the time for the temperature of the microprocessor to rise was three times longer than when using the commercial thermal compound alone. In over-clocking mode, the processor speed reached 3.8 GHz with a 165 MHz system bus clock; this was 1.24 times higher than in non-over-clocking mode. The results confirm a promising way of using MWCNTs as a thermal dissipation medium for microprocessors and high-power electronic devices.

  4. Application of multiwall carbon nanotubes for thermal dissipation in a micro-processor

    Bui Hung Thang; Phan Ngoc Hong; Phan Hong Khoi; Phan Ngoc Minh

    2009-01-01

    One of the most valuable properties of carbon nanotube materials is their high thermal conductivity of about 2000 W/m·K (compared with a thermal conductivity of 419 W/m·K for Ag). This suggests applying CNTs as a thermal dissipation medium to improve the performance of computer processors and other high-power electronic devices. In this research, multiwall carbon nanotubes (MWCNTs) made by thermal chemical vapour deposition (CVD) at our laboratory were employed as the heat dissipation medium for the microprocessor of a personal computer with the following configuration: Intel Pentium IV 3.066 GHz, 512 MB of RAM and the Windows XP Service Pack 2 operating system. We directly measured the temperature of the microprocessor during the operation of the computer in two modes: 100% CPU usage mode and over-clocking mode. The measured results showed that when using our thermal dissipation medium (a mixture of the commercial thermal compound mentioned above and 2 wt.% MWCNTs), the temperature of the microprocessor decreased by 5 °C, and the time for the temperature of the microprocessor to rise was three times longer than when using the commercial thermal compound alone. In over-clocking mode, the processor speed reached 3.8 GHz with a 165 MHz system bus clock; this was 1.24 times higher than in non-over-clocking mode. The results confirm a promising way of using MWCNTs as a thermal dissipation medium for microprocessors and high-power electronic devices.

  5. High-Performance Computing Paradigm and Infrastructure

    Yang, Laurence T

    2006-01-01

    With hyperthreading in Intel processors, hypertransport links in next generation AMD processors, multi-core silicon in today's high-end microprocessors from IBM and emerging grid computing, parallel and distributed computers have moved into the mainstream

  6. Dynamic instruction set extension of microprocessors with embedded FPGAs

    Bauer, Heiner

    2017-01-01

    Increasingly complex applications and recent shifts in technology scaling have created a large demand for microprocessors that can perform tasks more quickly and more energy-efficiently. Conventional microarchitectures exploit multiple levels of parallelism to increase instruction throughput and use application-specific instruction sets or hardware accelerators to increase energy efficiency. Reconfigurable microprocessors adopt the same principle of providing application-specific hardware, how...

  7. An integrated high performance fastbus slave interface

    Christiansen, J.; Ljuslin, C.

    1992-01-01

    A high performance Fastbus slave interface ASIC is presented. The Fastbus slave integrated circuit (FASIC) is a programmable device, enabling its direct use in many different applications. The FASIC acts as an interface between Fastbus and a 'standard' processor/memory bus. It can work stand-alone or together with a microprocessor. A set of address mapping windows can map Fastbus addresses to convenient memory addresses and at the same time act as address decoding logic. Data rates of 100 MBytes/s to Fastbus can be obtained using an internal FIFO buffer in the FASIC. (orig.)

  8. CFD-simulation of radiator for air cooling of microprocessors in a limited space

    Trofimov V. E.; Pavlov A. L.; Mokrousova E. A.

    2016-01-01

    One of the final stages of microprocessor development is heat testing. This procedure is performed on a special stand, the main element of which is a switching PCB with one or more mounted microprocessor sockets, chipsets, interfaces, jumpers and other components that provide various modes of microprocessor operation. The temperature of the microprocessor housing is typically controlled using a thermoelectric module. The cold surface of the module with controlled temperature is in direct thermal c...

  9. Microprocessorized NMR measurement

    Rijllart, A.

    1984-01-01

    An MC68000 CAMAC microprocessor system for fast and accurate NMR signal measurement will be presented. A stand-alone CAMAC microprocessor system (MC68000 STAC) with a special purpose interface sweeps a digital frequency synthesizer and digitizes the NMR signal with a 16-bit ADC of 17 μs conversion time. It averages the NMR signal data over many sweeps and then transfers it through CAMAC to a computer for calculation of the signal parameters. The computer has full software control over the timing and sweep settings of this signal averager, and thus allows optimization of noise suppression. Several of these processor systems can be installed in the same crate for parallel processing, and the flexibility of the STAC also allows easy adaptation to other applications such as transient recording or phase-sensitive detection. (orig.)

  10. Process control by microprocessors

    Arndt, W [ed.

    1978-12-01

    Papers from the workshop Process Control by Microprocessors, organized by the Karlsruhe Nuclear Research Center, Project PDV, together with the VDI/VDE-Gesellschaft fuer Mess- und Regelungstechnik, are presented. The workshop was held on December 13 and 14, 1978 at the facilities of the Nuclear Research Center. The papers are arranged according to the topics of the workshop; one chapter deals with today's state of the art of microprocessor hardware and software technology, and five chapters are dedicated to applications. The report also contains papers which will not be presented at the workshop. Both the workshop and the report are expected to improve and spread the know-how about this modern technology.

  11. The micro-processor controlled process radiation monitoring system for reactor safety systems

    Mizuno, K.; Noguchi, A.; Kumagami, S.; Gotoh, Y.; Kumahara, T.; Arita, S.

    1986-01-01

    Digital computers are soon expected to be applied to various real-time safety and safety-related systems in nuclear power plants. Hitachi is now engaged in the development of a microprocessor-controlled process radiation monitoring system, which uses digital processing methods to implement a log ratemeter. A newly defined methodology of design and test procedures is being applied as a means of software program verification for these safety systems. Recently implemented microprocessor technology will help to achieve an advanced man-machine interface and highly reliable performance. (author)

  12. Microprocessor controller for stepping motors

    Strait, B.G.; Thuot, M.E.

    1977-01-01

    A new concept for digital computer control of multiple stepping motors which operate in a severe electromagnetic-pulse environment is presented. The motors position mirrors in the beam-alignment system of a 100-kJ CO2 laser. An asynchronous communications channel of a computer is used to send coded messages, containing the motor address and stepping-command information, to the stepping-motor controller in a bit-serial format over a fiber-optics communications link. The addressed controller responds by transmitting to the computer its address and other motor information, thus confirming the received message. Each controller is capable of controlling three stepping motors. The controller contains the fiber-optics interface, a microprocessor, and the stepping-motor driver circuits. The microprocessor program, which resides in an EPROM, decodes the received messages, transmits responses, performs the stepping-motor sequence logic, maintains motor-position information, and monitors the motor's reference switch. For multiple stepping-motor applications, the controllers are connected in a daisy chain, providing control of many motors from one asynchronous communications channel of the computer.
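
    The message format is not specified in the abstract; the sketch below simply illustrates the described scheme (a coded message carrying a motor address plus a stepping command, with the addressed controller echoing its address) using a made-up two-byte layout:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical two-byte message layout, invented for illustration only:
   byte 0 = motor address, byte 1 = step command (bit 7 direction,
   bits 6..0 step count).  Each controller handles three motors.        */
#define MY_FIRST_MOTOR        3      /* addresses 3, 4, 5 belong to us   */
#define MOTORS_PER_CONTROLLER 3

static void handle_message(const uint8_t msg[2])
{
    uint8_t addr = msg[0];
    if (addr < MY_FIRST_MOTOR ||
        addr >= MY_FIRST_MOTOR + MOTORS_PER_CONTROLLER)
        return;                                /* not for this controller */

    int direction = (msg[1] & 0x80) ? -1 : +1;
    int steps     =  msg[1] & 0x7F;
    printf("motor %u: %d step(s), direction %+d\n", addr, steps, direction);
    printf("reply: address %u acknowledged\n", addr);   /* echo to computer */
}

int main(void)
{
    uint8_t msg[2] = { 4, 0x85 };   /* motor 4, 5 steps in reverse */
    handle_message(msg);
    return 0;
}
```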

  13. Instrument for bone mineral measurement using a microprocessor as the control and arithmetic element

    Alberi, J.L.; Hardy, W.H. II.

    1975-11-01

    A self-contained instrument for the determination of bone mineral content by photon absorptometry is described. A high-resolution detection system allows measurements to be made at up to 16 photon energies. Control and arithmetic functions are performed by a microprocessor. Analysis capability and limitations are discussed

  14. High-performance computing for airborne applications

    Quinn, Heather M.; Manuzatto, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-01-01

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  15. The European Logarithmic Microprocessor

    Coleman, J. N.; Softley, C. I.; Kadlec, Jiří; Matoušek, R.; Tichý, Milan; Pohl, Zdeněk; Heřmánek, Antonín; Benschop, N. F.

    2008-01-01

    Vol. 57, No. 4 (2008), pp. 532-546. ISSN 0018-9340. Grant (other): European Commission (BE) ESPRIT 33544. Institutional research plan: CEZ:AV0Z10750506. Source of funding: R - EC framework project. Keywords: processor architecture; arithmetic unit; logarithmic arithmetic. Subject RIV: JC - Computer Hardware; Software. Impact factor: 2.611, year 2008. http://library.utia.cas.cz/separaty/2008/ZS/kadlec-the%20european%20logarithmic%20microprocessor.pdf

  16. TRIESTE: College on Microprocessors

    Anon.

    1981-01-01

    The International Centre for Theoretical Physics, set up at Trieste in 1964, has as its major task the provision of a stimulating intellectual environment for physicists from developing countries. This goal is furthered by a varied programme of courses for visiting scientists. Not all the courses remain in the rarefied atmosphere of theory and in September a very successful 'College on Microprocessors: Technology and Applications in Physics' was held. It was a prime example of the efforts being made to spread important modern technology into the developing countries

  17. Energy conservation applications of microprocessors

    Shih, James Y.

    1979-07-01

    A survey of the applications of microprocessors for industrial and commercial energy conservation has been made. Microprocessor applications for HVAC, chiller control, and automotive equipment are discussed. A case study of the successful replacement of a conventional cooling-plant control is recounted. The rapid advancement of microelectronic technology will bring more efficient energy control, more sophisticated control methodologies, and more investment in controls.

  18. A microarchitecture for resource-limited superscalar microprocessors

    Basso, Todd David

    1999-11-01

    Microelectronic components in space and satellite systems must be resistant to total-dose radiation, single-event upset, and latchup in order to accomplish their missions. The demand for inexpensive, high-volume, radiation-hardened (rad-hard) integrated circuits (ICs) is expected to increase dramatically as the communication market continues to expand. Motorola's Complementary Gallium Arsenide (CGaAs™) technology offers superior radiation tolerance compared to traditional CMOS processes, while being more economical than dedicated rad-hard CMOS processes. The goals of this dissertation are to optimize a superscalar microarchitecture suitable for CGaAs™ microprocessors, develop circuit techniques for such applications, and evaluate the potential of CGaAs™ for the development of digital VLSI circuits. Motorola's 0.5 μm CGaAs™ process is summarized and circuit techniques applicable to digital CGaAs™ are developed. Direct-coupled FET, complementary, and domino logic circuits are compared based on speed, power, area, and noise margins. These circuit techniques are employed in the design of a 600 MHz PowerPC™ arithmetic logic unit. The dissertation emphasizes CGaAs™-specific design considerations, specifically, low integration level. A baseline superscalar microarchitecture is defined and SPEC95 integer benchmark simulations are used to evaluate the applicability of advanced architectural features to microprocessors having low integration levels. The performance simulations center around the optimization of a simple superscalar core, small-scale branch prediction, instruction prefetching, and an off-chip primary data cache. The simulation results are used to develop a superscalar microarchitecture capable of outperforming a comparable sequential pipeline, while using only 500,000 transistors. The architecture, running at 200 MHz, is capable of achieving an estimated 153 MIPS, translating to a 27% performance increase over a comparable traditional pipelined

  19. An integrated high performance Fastbus slave interface

    Christiansen, J.; Ljuslin, C.

    1993-01-01

    A high-performance CMOS Fastbus slave interface ASIC (Application Specific Integrated Circuit) supporting all addressing and data transfer modes defined in the IEEE 960-1986 standard is presented. The FAstbus Slave Integrated Circuit (FASIC) is an interface between the asynchronous Fastbus and a clock-synchronous processor/memory bus. It can work stand-alone or together with a 32-bit microprocessor. The FASIC is a programmable device, enabling its direct use in many different applications. A set of programmable address-mapping windows can map Fastbus addresses to convenient memory addresses and at the same time act as address-decoding logic. Data rates of 100 MBytes/sec to Fastbus can be obtained using an internal FIFO in the FASIC to buffer data between the two buses during block transfers. Message passing from Fastbus to a microprocessor on the slave module is supported. A compact (70 mm x 170 mm) Fastbus slave piggy-back sub-card interface, including level conversion between ECL and TTL signal levels, has been implemented using surface-mount components and the 208-pin FASIC chip.
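
    The abstract describes programmable address-mapping windows that simultaneously decode Fastbus addresses and translate them to local memory addresses. The sketch below illustrates that idea in C; the window layout and numbers are invented, not the FASIC's actual register map:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical mapping window: a Fastbus address that falls inside
   [fb_base, fb_base + size) is decoded as "ours" and translated to
   local_base + offset on the processor/memory bus.                 */
typedef struct {
    uint32_t fb_base;     /* start of window in Fastbus address space */
    uint32_t size;        /* window length in bytes                   */
    uint32_t local_base;  /* corresponding local bus address          */
} MapWindow;

static const MapWindow windows[] = {
    { 0x00100000u, 0x00010000u, 0x8000u },
    { 0x00200000u, 0x00001000u, 0x0000u },
};

/* Returns 1 and writes the translated address if any window matches. */
static int decode(uint32_t fb_addr, uint32_t *local)
{
    for (size_t i = 0; i < sizeof windows / sizeof windows[0]; i++) {
        uint32_t off = fb_addr - windows[i].fb_base;
        if (fb_addr >= windows[i].fb_base && off < windows[i].size) {
            *local = windows[i].local_base + off;
            return 1;
        }
    }
    return 0;   /* address not decoded: the slave does not respond */
}

int main(void)
{
    uint32_t local;
    if (decode(0x00100040u, &local))
        printf("mapped to local address 0x%04X\n", local);
    return 0;
}
```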

  20. CAMAC multipurpose microprocessor controller

    Belyakova, M.P.; Nemesh, T.; Buj Zoan Chong.

    1978-01-01

    The use of CAMAC controllers in an autonomous system of data acquisition and measurement is considered. The system consists of an intelligent controller for overall control, memory modules, and user modules in the CAMAC standard. The controller and all the modules have an output onto the highway, which permits data exchange among them without using special external cables. To increase the servicing rate, an auxiliary controller, which has direct access to memory and controls the user modules, is additionally connected to the data acquisition and measurement system. In this case, the intelligent controller is passive. The data acquisition system can be realized in the form of a multiple system with branch usage. The controller module width is three units, and the controller incorporates an Intel 8080-type microprocessor and the following interfaces: the CAMAC highway, interrupts, memory bootstrap, and the data sequence channel.

  1. Intel Xeon Phi coprocessor high performance programming

    Jeffers, James

    2013-01-01

    Authors Jim Jeffers and James Reinders spent two years helping educate customers about the prototype and pre-production hardware before Intel introduced the first Intel Xeon Phi coprocessor. They have distilled their own experiences coupled with insights from many expert customers, Intel Field Engineers, Application Engineers and Technical Consulting Engineers, to create this authoritative first book on the essentials of programming for this new architecture and these new products. This book is useful even before you ever touch a system with an Intel Xeon Phi coprocessor. To ensure that your applications run at maximum efficiency, the authors emphasize key techniques for programming any modern parallel computing system whether based on Intel Xeon processors, Intel Xeon Phi coprocessors, or other high performance microprocessors. Applying these techniques will generally increase your program performance on any system, and better prepare you for Intel Xeon Phi coprocessors and the Intel MIC architecture. It off...

  2. A realtime feedback microprocessor for the TEVATRON

    Herrup, D.A.; Chapman, L.; Franck, A.; Groves, T.; Lublinsky, B.

    1993-01-01

    A feedback microprocessor has been built for the TEVATRON. Its inputs are realtime accelerator measurements, data describing the state of the TEVATRON, and ramp tables. The microprocessor includes a finite state machine. Each state corresponds to a specific TEVATRON operation. Transitions between states are initiated by the global TEVATRON clock. Each state includes a cyclic routine which is called periodically and in which all calculations are performed. The output corrections are inserted onto a fast TEVATRON-wide link from which the power supplies read the realtime corrections. The authors also store all of the input data and output corrections in a set of buffers which can easily be retrieved for diagnostic analysis. This talk describes the use of this device to control the TEVATRON tunes and discusses other uses.
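
    A skeleton of the scheme described above, in C (state names, events and routines are illustrative only, not the actual TEVATRON software): each state owns a cyclic routine that is called periodically, and transitions are driven by decoded clock events:

```c
#include <stdio.h>

/* Illustrative states and clock events; the real machine has
   accelerator-specific states and a global TEVATRON clock.       */
typedef enum { ST_INJECTION, ST_RAMP, ST_FLATTOP, N_STATES } State;
typedef enum { EV_START_RAMP, EV_END_RAMP, EV_NONE } ClockEvent;

static void cyclic_injection(void) { puts("injection: monitor tunes"); }
static void cyclic_ramp(void)      { puts("ramp: apply tune corrections"); }
static void cyclic_flattop(void)   { puts("flattop: hold corrections"); }

/* One cyclic routine per state, called periodically for the current state. */
static void (*const cyclic[N_STATES])(void) = {
    cyclic_injection, cyclic_ramp, cyclic_flattop
};

/* Clock-event-driven transition table. */
static State next_state(State s, ClockEvent ev)
{
    if (s == ST_INJECTION && ev == EV_START_RAMP) return ST_RAMP;
    if (s == ST_RAMP      && ev == EV_END_RAMP)   return ST_FLATTOP;
    return s;
}

int main(void)
{
    State s = ST_INJECTION;
    ClockEvent events[] = { EV_NONE, EV_START_RAMP, EV_NONE, EV_END_RAMP };
    for (unsigned i = 0; i < sizeof events / sizeof events[0]; i++) {
        s = next_state(s, events[i]);   /* decoded clock event           */
        cyclic[s]();                    /* periodic call, current state  */
    }
    return 0;
}
```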

  3. Multiple microprocessor based nuclear reactor power monitor

    Lewis, P.S.; Ethridge, C.D.

    1979-01-01

    The reactor power monitor is a portable, multiple-microprocessor-controlled data acquisition device being built for the International Atomic Energy Agency. Its function is to measure and record the hourly integrated operating thermal power level of a nuclear reactor for the purpose of detecting unannounced plutonium production. The monitor consists of a 3He proportional neutron detector, a write-only cassette tape drive and control electronics based on two Intel 8748 microprocessors. The reactor power monitor operates from house power supplied by the plant operator, but has eight hours of battery backup to cover power interruptions. Both the hourly power levels and any line power interruptions are recorded on tape and in memory. Intermediate dumps from the memory to a data terminal or strip chart recorder can be performed without interrupting data collection.

  4. A feedback microprocessor for hadron colliders

    Herrup, D.A.; Chapman, L.; Franck, A.; Groves, T.; Lublinsky, B.

    1992-12-01

    A feedback microprocessor has been built for the TEVATRON. It has been constructed to be applicable to hadron colliders in general. Its inputs are realtime accelerator measurements, data describing the state of the TEVATRON, and ramp tables. The microprocessor software includes a finite state machine. Each state corresponds to a specific TEVATRON operation and has a state-specific TEVATRON model. Transitions between states are initiated by the global TEVATRON clock. Each state includes a cyclic routine which is called periodically and where all calculations are performed. The output corrections are inserted onto a fast TEVATRON-wide link from which the power supplies will read the realtime corrections. We also store all of the input data and output corrections in a set of buffers which can easily be retrieved for diagnostic analysis. In this paper we will describe this device and its use to control the TEVATRON tunes as well as other possible applications

  5. Microprocessors in physics experiments at SLAC

    Rochester, L.S.

    1981-01-01

    The increasing size and complexity of high energy physics experiments is changing the way data are collected. To implement a trigger or event filter requires complex logic which may have to be modified as the experiment proceeds. Simply to monitor a detector, large amounts of data must be processed online. The use of microprocessors or other programmable devices can help to achieve these ends flexibly and economically. At SLAC, a number of microprocessor-based systems have been built and are in use in experimental setups, and others are now being developed. This talk is a review of existing systems and their use in experiments, and of developments in progress and future plans. (orig.)

  6. Microprocessors in physics experiments at SLAC

    Rochester, L.S.

    1981-04-01

    The increasing size and complexity of high energy physics experiments is changing the way data are collected. To implement a trigger or event filter requires complex logic which may have to be modified as the experiment proceeds. Simply to monitor a detector, large amounts of data must be processed on line. The use of microprocessors or other programmable devices can help to achieve these ends flexibly and economically. At SLAC, a number of microprocessor-based systems have been built and are in use in experimental setups, and others are now being developed. This talk is a review of existing systems and their use in experiments, and of developments in progress and future plans

  7. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows readers to compare the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  8. A microprocessor based on a two-dimensional semiconductor

    Wachter, Stefan; Polyushkin, Dmitry K.; Bethge, Ole; Mueller, Thomas

    2017-04-01

    The advent of microcomputers in the 1970s has dramatically changed our society. Since then, microprocessors have been made almost exclusively from silicon, but the ever-increasing demand for higher integration density and speed, lower power consumption and better integrability with everyday goods has prompted the search for alternatives. Germanium and III-V compound semiconductors are considered promising candidates for future high-performance processor generations, and chips based on thin-film plastic technology or carbon nanotubes could allow for embedding electronic intelligence into arbitrary objects for the Internet of Things. Here, we present a 1-bit implementation of a microprocessor using a two-dimensional semiconductor: molybdenum disulfide. The device can execute user-defined programs stored in an external memory, perform logical operations and communicate with its periphery. Our 1-bit design is readily scalable to multi-bit data. The device consists of 115 transistors and constitutes the most complex circuitry so far made from a two-dimensional material.

  9. Microprocessor multi-task monitor

    Ludemann, C.A.

    1983-01-01

    This paper describes a multi-task monitor program for microprocessors. Although written for the Intel 8085, it incorporates features that would be beneficial for implementation in other microprocessors used in controlling and monitoring experiments and accelerators. The monitor places permanent programs (tasks) arbitrarily located throughout ROM in a priority ordered queue. The programmer is provided with the flexibility to add new tasks or modified versions of existing tasks, without having to comply with previously defined task boundaries or having to reprogram all of ROM. Scheduling of tasks is triggered by timers, outside stimuli (interrupts), or inter-task communications. Context switching time is of the order of tenths of a millisecond

  10. The method of selection of the setpoint of high-speed feeder switches of 3.3 kV DC with microprocessor-based protection systems

    P. Ye. Mykhalichenko

    2009-10-01

    In the article a new procedure is described for choosing the minimum current jump that triggers the fast-acting switches of 3.3 kV DC traction substations, intended for use in the microprocessor protection system of feeders. This procedure is more refined than the existing one based on the current increment and uses the results of mathematical simulation of the traction power supply system.

  11. Satisfying STEM Education Using the Arduino Microprocessor in C Programming

    Hoffer, Brandyn M.

    There exists a need to promote better Science, Technology, Engineering and Math (STEM) education at the high school level. To satisfy this need, a series of hands-on laboratory assignments was created to accompany two educational trainers that contain various electronic components. This project provides an interdisciplinary, hands-on approach to teaching C programming that meets several standards defined by the Tennessee Board of Education. Together the trainers and lab assignments also introduce key concepts in math and science while giving students hands-on experience with various electronic components. This allows students to mimic real-world applications of the C programming language while exposing them to technology not currently introduced in many high school classrooms. The developed project is targeted at high school students performing at or above the junior level and uses the Arduino Mega open-source microprocessor and software as the primary control unit.

  12. Future microprocessor farms: Offline and online

    Areti, H.

    1990-01-01

    Microprocessor farms have been successfully employed in high energy physics for both offline analysis and online triggers. As the experiments continue to grow in size, so do the demands for processing power. The preliminary indications are that the large collider experiments will require at least a million VAX-11/780 equivalents of processing power for online trigger decisions and offline event reconstruction. This paper examines the current technology trends and projects the processing power that may be expected with the current farm architectures. 3 refs., 6 figs

  13. Automatic Energy Schemes for High Performance Applications

    Sundriyal, Vaibhav [Iowa State Univ., Ames, IA (United States)

    2013-01-01

    Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers affect significantly their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), the power consumption may be controlled in software. Additionally, the network interconnect, such as InfiniBand, may be exploited to maximize energy savings, while the application performance loss and frequency switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy saving strategies on a per-call basis. Next, it targets point-to-point communications, grouping them into phases and applying frequency scaling to them to save energy by exploiting architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases and applies throttling to them, in addition to DVFS, to maximize energy savings. The experimental results are presented for NAS parallel benchmark problems as well as for realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.
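
    As a hedged illustration of the frequency-scaling idea described above, the C sketch below lowers a core's frequency cap through the Linux cpufreq sysfs interface before a communication-dominated phase and restores it afterwards. The sysfs path is the standard cpufreq location on Linux, but writability depends on the platform, governor and privileges; the communication phase itself is only a placeholder sleep, and this is not the runtime system described in the work above.

      /* Sketch: cap a core's frequency during a communication-heavy phase via
         the Linux cpufreq sysfs interface, then restore it.  Requires root and
         a platform exposing cpufreq; the "communication phase" is a placeholder. */
      #include <stdio.h>
      #include <unistd.h>

      #define MAX_FREQ_FILE "/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq"

      static long read_khz(const char *path)
      {
          long khz = -1;
          FILE *f = fopen(path, "r");
          if (f) {
              if (fscanf(f, "%ld", &khz) != 1) khz = -1;
              fclose(f);
          }
          return khz;
      }

      static int write_khz(const char *path, long khz)
      {
          FILE *f = fopen(path, "w");
          if (!f) return -1;
          fprintf(f, "%ld\n", khz);
          return fclose(f);
      }

      int main(void)
      {
          long original = read_khz(MAX_FREQ_FILE);
          if (original < 0) { perror("cpufreq not available"); return 1; }

          /* Lower the frequency cap: the CPU mostly waits during communication,
             so the performance loss is small while the power draw drops. */
          write_khz(MAX_FREQ_FILE, original / 2);

          sleep(1);   /* placeholder for an all-to-all or other blocking phase */

          write_khz(MAX_FREQ_FILE, original);   /* restore for the compute phase */
          return 0;
      }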

  14. Microprocessors control of fermentation process

    Fawzy, A S; Hinton, O R

    1980-01-01

    This paper presents three schemes for the optimal control of a fermentation process. It also shows the advantages of using microprocessors in controlling and monitoring this process. A linear model of the system is considered. An optimal feedback controller is determined which maintains the states (substrate and organism concentrations) at desired values when the system is subjected to disturbances in the influent substrate and organism concentrations. Simulation results are presented for the three cases.

  15. Multichannel analyzer based on microprocessors

    Soares, M.

    1983-06-01

    A multichannel analyser for nuclear spectrometry, which would meet the needs of research laboratories and could be industrialized in Brazil, was developed. The design was based on INTEL 8080/85 microprocessors; other processors were also used to implement specific functions, such as sharing the bus through direct memory access. A prototype was developed and tested through simulation, using a nuclear spectrometry chain. The results were fully satisfactory. (Author) [pt]

  16. Microprocessor-controlled surface testing

    Droscha, H

    1982-09-01

    For quality inspection of continuously moving material webs with a transversely scanning laser beam, microprocessor control, realized here for the first time in combination with the appropriate units, represents considerable progress. Thanks to the electronics used, surface defects can be localized within the web according to their x-y position, quantitative analysis can be carried out, and automatic sorting and registration functions can be used.

  17. Microprocessor tester for the treat upgrade reactor trip system

    Lenkszus, F.R.; Bucher, R.G.

    1984-01-01

    The upgrading of the Transient Reactor Test (TREAT) Facility at ANL-Idaho has been designed to provide additional experimental capabilities for the study of core disruptive accident (CDA) phenomena. In addition, a programmable Automated Reactor Control System (ARCS) will permit high-power transients up to 11,000 MW having a controlled reactor period from 15 to 0.1 sec. These modifications to the core neutronics will improve simulation of LMFBR accident conditions. Finally, a sophisticated, multiply-redundant safety system, the Reactor Trip System (RTS), will provide safe operation for both steady state and transient production operating modes. To ensure that this complex safety system is functioning properly, a Dedicated Microprocessor Tester (DMT) has been implemented to perform a thorough checkout of the RTS prior to all TREAT operations

  18. Microprocessor-based data acquisition systems for Hera experiments

    Haynes, W.J.

    1989-09-01

    Sophisticated multi-microprocessor configurations are envisaged to cope with the technical challenges of the HERA electron-proton collider and the high data rates from the two large experiments H1 and ZEUS. These lecture notes concentrate on many of the techniques employed, with much emphasis being placed on the use of the IEEE standard VMEbus as a unifying element. The role of modern 32-bit CISC and RISC microprocessors, in the handling of data and the filtering of physics information, is highlighted together with the integration of personal computer stations for monitoring and control. (author)

  19. Flexible nanoscale high-performance FinFETs

    Sevilla, Galo T.

    2014-10-28

    With the emergence of the Internet of Things (IoT), flexible high-performance nanoscale electronics are increasingly desired. At the moment, the FinFET is the most advanced transistor architecture used in state-of-the-art microprocessors. Therefore, we show a soft-etch based substrate thinning process to transform silicon-on-insulator (SOI) based nanoscale FinFETs into flexible FinFETs and then conduct comprehensive electrical characterization under various bending conditions to understand their electrical performance. Our study shows that the back-etch based substrate thinning process is gentler than the traditional abrasive back-grinding process; it can attain ultraflexibility, and the electrical characteristics of the flexible nanoscale FinFET show no performance degradation compared to its rigid bulk counterpart, indicating its readiness to be used for flexible high-performance electronics.

  20. High performance homes

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    Can prefabrication contribute to the development of high performance homes? To answer this question, this chapter defines high performance in more broadly inclusive terms, acknowledging the technical, architectural, social and economic conditions under which energy consumption and production occur....... Consideration of all these factors is a precondition for a truly integrated practice and as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy...

  1. Application of microprocessor based controller in the Breeder Reactor Program

    Messick, N.C.; Lukas, M.P.

    1985-01-01

    This paper describes Argonne National Laboratory's experience with microprocessor-based controllers presently in use on several control loops within the EBR-II reactor facility, as well as tests being performed by these controllers. Also included is a discussion of the expandability, modularity, range of capabilities and higher level functions possible using such equipment

  2. Microprocessor-controlled CAMAC data link module

    Potter, J.M.

    1978-05-01

    Communication between the central control computer and remote, satellite data-acquisition/control stations at the Clinton P. Anderson Meson Physics Facility (LAMPF) is presently accomplished through the use of CAMAC-based Data Link modules. With the advent of the microprocessor, a new philosophy for digital data communications has evolved. Data Link modules containing microprocessor controllers provide link management and communication network protocol through algorithms executed in the Data Link microprocessor. 13 figures

  3. A microprocessor based mobile radiation survey system

    Gilbert, R.W.; McCormack, W.D.

    1984-01-01

    A microprocessor-based system has been designed and constructed to enhance the performance of routine radiation surveys on roads within the Hanford site. This device continually monitors system performance and output from four sodium iodide detectors mounted on the rear bumper of a 4-wheel drive truck. The gamma radiation count rate in counts-per-second is monitored, and a running average computed, with the results compared to predefined limits. If an abnormal instantaneous or average count rate is detected, an alarm is sounded with responsible data displayed on a liquid crystal panel in the cab of the vehicle. The system also has the capability to evaluate detector output using multiple time constants and to perform more complex tests and comparison of the data. Data can be archived for later analysis on conventional chart recorders or stored in digital form on magnetic tape or other digital storage media
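
    A minimal sketch of the alarm logic described above, assuming a fixed-length moving window for the running average and purely illustrative count-rate limits; the detector readout and the alarm action are placeholders rather than the actual Hanford implementation.

      /* Sketch of the alarm logic: keep a running average of counts-per-second
         from a detector and compare both the instantaneous reading and the
         average against predefined limits.  Limits and data are illustrative. */
      #include <stdio.h>

      #define WINDOW 8                 /* samples in the running average */

      static double update_average(double window[], int *idx, int *filled, double cps)
      {
          window[*idx] = cps;
          *idx = (*idx + 1) % WINDOW;
          if (*filled < WINDOW) (*filled)++;
          double sum = 0.0;
          for (int i = 0; i < *filled; i++) sum += window[i];
          return sum / *filled;
      }

      int main(void)
      {
          const double inst_limit = 500.0;   /* counts per second */
          const double avg_limit  = 200.0;
          double window[WINDOW] = {0};
          int idx = 0, filled = 0;

          /* Placeholder readings from one NaI detector. */
          double readings[] = { 120, 130, 125, 640, 150, 300, 310, 305, 320, 330 };

          for (int i = 0; i < 10; i++) {
              double avg = update_average(window, &idx, &filled, readings[i]);
              if (readings[i] > inst_limit || avg > avg_limit)
                  printf("sample %d: ALARM (cps=%.0f, avg=%.0f)\n", i, readings[i], avg);
          }
          return 0;
      }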

  4. Microprocessor based mobile radiation survey system

    Gilbert, R.W.; McCormack, W.D.

    1983-12-01

    A microprocessor-based system has been designed and constructed to enhance the performance of routine radiation surveys on roads within the Hanford site. This device continually monitors system performance and output from four sodium iodide detectors mounted on the rear bumper of a 4-wheel drive truck. The gamma radiation count rate in counts-per-second is monitored, and a running average computed, with the results compared to predefined limits. If an abnormal instantaneous or average count rate is detected, an alarm is sounded with responsible data displayed on a liquid crystal panel in the cab of the vehicle. The system also has the capability to evaluate detector output using multiple time constants and to perform more complex tests and comparison of the data. Data can be archived for later analysis on conventional chart recorders or stored in digital form on magnetic tape or other digital storage media. 4 figures

  5. Microprocessors applications in the nuclear industry

    Ethridge, C.D.

    1980-01-01

    Microprocessors in the nuclear industry, particularly at the Los Alamos Scientific Laboratory, have been and are being utilized in a wide variety of applications ranging from data acquisition and control for basic physics research to monitoring special nuclear material in long-term storage. Microprocessor systems have been developed to support weapons diagnostics measurements during underground weapons testing at the Nevada Test Site. Multiple single-component microcomputers are now controlling the measurement and recording of nuclear reactor operating power levels. The CMOS microprocessor data-acquisition instrumentation has operated on balloon flights to monitor power plant emissions. Target chamber mirror-positioning equipment for laser fusion facilities employs microprocessors

  6. Microprocessor based systems for the higher technician

    Vears, RE

    2013-01-01

    Microprocessor Based Systems for the Higher Technician provides coverage of the BTEC level 4 unit in Microprocessor Based Systems (syllabus U80/674). This book is composed of 10 chapters and concentrates on the development of 8-bit microcontrollers specifically constructed around the Z80 microprocessor. The design cycle for the development of such a microprocessor based system and the use of a disk-based development system (MDS) as an aid to design are both described in detail. The book deals with the Control Program Monitor (CP/M) operating system and gives background information on file hand

  7. High Performance Marine Vessels

    Yun, Liang

    2012-01-01

    High Performance Marine Vessels (HPMVs) range from the Fast Ferries to the latest high speed Navy Craft, including competition power boats and hydroplanes, hydrofoils, hovercraft, catamarans and other multi-hull craft. High Performance Marine Vessels covers the main concepts of HPMVs and discusses historical background, design features, services that have been successful and not so successful, and some sample data of the range of HPMVs to date. Included is a comparison of all HPMVs craft and the differences between them and descriptions of performance (hydrodynamics and aerodynamics). Readers will find a comprehensive overview of the design, development and building of HPMVs. In summary, this book: Focuses on technology at the aero-marine interface Covers the full range of high performance marine vessel concepts Explains the historical development of various HPMVs Discusses ferries, racing and pleasure craft, as well as utility and military missions High Performance Marine Vessels is an ideal book for student...

  8. Use of a microprocessor in a remote working level monitor

    Keffe, D.J.; McDowell, W.P.; Groer, P.G.

    1975-01-01

    A remote working level monitor was designed to measure short-lived radon-daughter concentrations in sealed chambers having potentially high radiation levels (up to 2000 WL). The system is comprised of surface barrier detectors, multiplexer and buffers, microprocessor and teletype

  9. High performance systems

    Vigil, M.B. [comp.]

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing given at the High Speed Computing Conference, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  10. Microprocessor controlled digital period meter

    Keefe, D.J.; McDowell, W.P.; Rusch, G.K.

    1980-01-01

    A microprocessor controlled digital period meter has been developed and tested operationally on a reactor at Argonne National Laboratory. The principle of operation is the mathematical relationship between asymptotic periods and pulse counting circuitry. This relationship is used to calculate and display the reactor periods over a range of ±1 second to ±999 seconds. The time interval required to update each measurement automatically varies from 8 seconds at the lowest counting rates to 2 seconds at higher counting rates. The paper will describe hardware and software design details and show the advantages of this type of Period Meter over the conventional circuits. 1 ref
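
    As a sketch of the relationship such a meter can exploit, assume the count rate of a reactor on an asymptotic period T grows as exp(t/T); two successive counts C1 and C2 taken over equal intervals Δt then give T = Δt / ln(C2/C1). The C fragment below applies this formula to illustrative numbers only and is not the Argonne instrument's algorithm.

      /* Sketch: estimate the asymptotic reactor period from two successive
         counts taken over equal intervals, T = dt / ln(C2 / C1).  A nearly
         constant count rate gives a very long (effectively infinite) period. */
      #include <math.h>
      #include <stdio.h>

      static double period_seconds(double c1, double c2, double dt)
      {
          double ratio = c2 / c1;
          if (fabs(ratio - 1.0) < 1e-6) return INFINITY;   /* steady state */
          return dt / log(ratio);
      }

      int main(void)
      {
          double dt = 2.0;                              /* counting interval, s */
          double counts[] = { 1000, 1105, 1221, 1350 }; /* illustrative data */

          for (int i = 1; i < 4; i++) {
              double T = period_seconds(counts[i - 1], counts[i], dt);
              printf("interval %d: period %+.1f s\n", i, T);
          }
          return 0;
      }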

  11. Responsive design high performance

    Els, Dewald

    2015-01-01

    This book is ideal for developers who have experience in developing websites or possess minor knowledge of how responsive websites work. No experience of high-level website development or performance tweaking is required.

  12. High Performance Macromolecular Material

    Forest, M

    2002-01-01

    .... In essence, most commercial high-performance polymers are processed through fiber spinning, following Nature and spider silk, which is still pound-for-pound the toughest liquid crystalline polymer...

  13. Small private key MQPKS on an embedded microprocessor.

    Seo, Hwajeong; Kim, Jihyun; Choi, Jongseok; Park, Taehwan; Liu, Zhe; Kim, Howon

    2014-03-19

    Multivariate quadratic (MQ) cryptography requires the use of long public and private keys to ensure a sufficient security level, but this is not favorable to embedded systems, which have limited system resources. Recently, various approaches to MQ cryptography using reduced public keys have been studied. As a result of this, at CHES2011 (Cryptographic Hardware and Embedded Systems, 2011), a small public key MQ scheme was proposed, and its feasible implementation on an embedded microprocessor was reported at CHES2012. However, the implementation of a small private key MQ scheme was not reported. For efficient implementation, random number generators can contribute to reducing the key size, but using a random number generator is much more costly than computing MQ on modern microprocessors. Therefore, no feasible results have been reported on embedded microprocessors. In this paper, we propose a feasible implementation on embedded microprocessors of a small private key MQ scheme using a pseudo-random number generator and a hash function based on a block cipher exploiting a hardware Advanced Encryption Standard (AES) accelerator. To speed up the performance, we apply various implementation methods, including parallel computation, on-the-fly computation, optimized logarithm representation, vinegar monomials and assembly programming. The proposed method reduces the private key size by about 99.9% and speeds up signature generation and verification by 5.78% and 12.19%, respectively, compared with the previous results in CHES2012.

  14. Small Private Key MQPKS on an Embedded Microprocessor

    Hwajeong Seo

    2014-03-01

    Multivariate quadratic (MQ) cryptography requires the use of long public and private keys to ensure a sufficient security level, but this is not favorable to embedded systems, which have limited system resources. Recently, various approaches to MQ cryptography using reduced public keys have been studied. As a result of this, at CHES2011 (Cryptographic Hardware and Embedded Systems, 2011), a small public key MQ scheme was proposed, and its feasible implementation on an embedded microprocessor was reported at CHES2012. However, the implementation of a small private key MQ scheme was not reported. For efficient implementation, random number generators can contribute to reducing the key size, but using a random number generator is much more costly than computing MQ on modern microprocessors. Therefore, no feasible results have been reported on embedded microprocessors. In this paper, we propose a feasible implementation on embedded microprocessors of a small private key MQ scheme using a pseudo-random number generator and a hash function based on a block cipher exploiting a hardware Advanced Encryption Standard (AES) accelerator. To speed up the performance, we apply various implementation methods, including parallel computation, on-the-fly computation, optimized logarithm representation, vinegar monomials and assembly programming. The proposed method reduces the private key size by about 99.9% and speeds up signature generation and verification by 5.78% and 12.19%, respectively, compared with the previous results in CHES2012.

  15. Small Private Key MQPKS on an Embedded Microprocessor

    Seo, Hwajeong; Kim, Jihyun; Choi, Jongseok; Park, Taehwan; Liu, Zhe; Kim, Howon

    2014-01-01

    Multivariate quadratic (MQ) cryptography requires the use of long public and private keys to ensure a sufficient security level, but this is not favorable to embedded systems, which have limited system resources. Recently, various approaches to MQ cryptography using reduced public keys have been studied. As a result of this, at CHES2011 (Cryptographic Hardware and Embedded Systems, 2011), a small public key MQ scheme was proposed, and its feasible implementation on an embedded microprocessor was reported at CHES2012. However, the implementation of a small private key MQ scheme was not reported. For efficient implementation, random number generators can contribute to reducing the key size, but using a random number generator is much more costly than computing MQ on modern microprocessors. Therefore, no feasible results have been reported on embedded microprocessors. In this paper, we propose a feasible implementation on embedded microprocessors of a small private key MQ scheme using a pseudo-random number generator and a hash function based on a block cipher exploiting a hardware Advanced Encryption Standard (AES) accelerator. To speed up the performance, we apply various implementation methods, including parallel computation, on-the-fly computation, optimized logarithm representation, vinegar monomials and assembly programming. The proposed method reduces the private key size by about 99.9% and speeds up signature generation and verification by 5.78% and 12.19%, respectively, compared with the previous results in CHES2012. PMID:24651722

  16. Microprocessor-based integrated LMFBR core surveillance

    Gmeiner, L.

    1984-06-01

    This report results from a joint study of KfK and INTERATOM. The aim of this study is to explore the advantages of microprocessors and microelectronics for a more sophisticated core surveillance, which is based on the integration of separate surveillance techniques. Due to new developments in microelectronics and related software, an approach to LMFBR core surveillance can be conceived that combines a number of measurements into a more intelligent decision-making data processing system. The following techniques are considered to contribute essentially to an integrated core surveillance system: subassembly state and thermal-hydraulics performance monitoring; temperature noise analysis; acoustic core surveillance; failure characterization and failure prediction based on DND and cover-gas signals; and flux-tilting techniques. Starting from a description of these techniques it is shown that by combination and correlation of these individual techniques a higher degree of cost-effectiveness, reliability and accuracy can be achieved. (orig./GL) [de]

  17. Proceedings of the meeting on applications of microprocessors in accelerator controls and physics experiments, Tsukuba, March 15, 1978

    Shibata, Shinkichi; Katoh, Tadahiko

    1978-05-01

    The microprocessor was first made public in 1971. In the ensuing few years its performance has risen, its cost has fallen and its interfaces have increasingly been integrated on chip, so it is now easily incorporated in instrumentation and control. Since it is used as an electronic component, unlike a minicomputer, its influence is much larger. It differs from conventional electronic components in that software is required. In the National Laboratory for High Energy Physics, microprocessors are used to improve the performance of measuring and control instruments and to save labor. So that this new component does not introduce other new problems, support systems and standardization are proceeding for the development of its utilization. The present meeting was intended for discussions by people in the fields of usage and planning, and on means of joint use of software and hardware. (Mori, K.)

  18. Clojure high performance programming

    Kumar, Shantanu

    2013-01-01

    This is a short, practical guide that will teach you everything you need to know to start writing high performance Clojure code.This book is ideal for intermediate Clojure developers who are looking to get a good grip on how to achieve optimum performance. You should already have some experience with Clojure and it would help if you already know a little bit of Java. Knowledge of performance analysis and engineering is not required. For hands-on practice, you should have access to Clojure REPL with Leiningen.

  19. High Performance Concrete

    Traian Oneţ

    2009-01-01

    The paper presents the latest studies and research carried out in Cluj-Napoca related to high performance concrete, high strength concrete and self compacting concrete. The purpose of this paper is to point out the advantages and drawbacks when a particular concrete type is used. Two concrete recipes are presented, namely one for the concrete used in rigid pavement for roads and another one for self-compacting concrete.

  20. High performance polymeric foams

    Gargiulo, M.; Sorrentino, L.; Iannace, S.

    2008-01-01

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide and polyethylene naphthalate). Two different methods have been used to prepare the foam samples: high temperature expansion and a two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed by using scanning electron microscopy

  1. High performance conductometry

    Saha, B.

    2000-01-01

    Inexpensive but high performance systems have emerged progressively for basic and applied measurements in physical and analytical chemistry on one hand, and for on-line monitoring and leak detection in plants and facilities on the other. Salient features of the developments will be presented with specific examples

  2. Danish High Performance Concretes

    Nielsen, M. P.; Christoffersen, J.; Frederiksen, J.

    1994-01-01

    In this paper the main results obtained in the research program High Performance Concretes in the 90's are presented. This program was financed by the Danish government and was carried out in cooperation between The Technical University of Denmark, several private companies, and Aalborg University...... concretes, workability, ductility, and confinement problems....

  3. High performance homes

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    . Consideration of all these factors is a precondition for a truly integrated practice and as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy...

  4. Microprocessor based image processing system

    Mirza, M.I.; Siddiqui, M.N.; Rangoonwala, A.

    1987-01-01

    Rapid developments in the production of integrated circuits and the introduction of sophisticated 8-, 16- and now 32-bit microprocessor-based computers have set new trends in computer applications. Nowadays users, by investing much less money, can make optimal use of smaller systems by having them custom-tailored to their requirements. During the past decade there have been great advances in the field of computer graphics and, consequently, 'Image Processing' has emerged as a separate, independent field. Image Processing is being used in a number of disciplines. In the Medical Sciences, it is used to construct pseudo-colour images from computer aided tomography (CAT) or positron emission tomography (PET) scanners. Art, advertising and publishing people use pseudo-colours in pursuit of more effective graphics. Structural engineers use Image Processing to examine weld X-rays to search for imperfections. Photographers use Image Processing for various enhancements which are difficult to achieve in a conventional darkroom. (author)
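
    As a minimal sketch of the pseudo-colour idea mentioned in the abstract, the C fragment below pushes an 8-bit grey level through a simple blue-to-red lookup ramp to obtain an RGB triple. The mapping is purely illustrative and is not the palette used by any particular CAT or PET display.

      /* Sketch: map an 8-bit grey level to an RGB pseudo-colour through a
         simple blue-to-red ramp.  Real systems use carefully tuned lookup
         tables rather than this toy mapping. */
      #include <stdio.h>

      typedef struct { unsigned char r, g, b; } rgb_t;

      static rgb_t pseudo_colour(unsigned char grey)
      {
          rgb_t c;
          c.r = grey;                         /* red grows with intensity  */
          c.b = (unsigned char)(255 - grey);  /* blue fades with intensity */
          c.g = (unsigned char)(grey < 128 ? 2 * grey : 2 * (255 - grey));  /* green peaks mid-scale */
          return c;
      }

      int main(void)
      {
          unsigned char samples[] = { 0, 64, 128, 192, 255 };
          for (int i = 0; i < 5; i++) {
              rgb_t c = pseudo_colour(samples[i]);
              printf("grey %3u -> (%3u, %3u, %3u)\n", samples[i], c.r, c.g, c.b);
          }
          return 0;
      }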

  5. A microprocessor-based power control data acquisition system

    Greenberg, S.

    1982-10-01

    The project reported here deals with one aspect of power plant control and management. To perform optimal distribution of power and load switching, one has to solve a specific optimization problem, which in turn requires collecting current and power expenditure data from a large number of channels and processing them. This procedure is defined as data acquisition and constitutes the main topic of this project. A microprocessor-based data acquisition system for power management is investigated and developed. The current and power data of about 100 analog channels are sampled and collected in real time. These data are subsequently processed to calculate the power factor (cos φ) for each channel and the maximum demand. The data are processed by an AMD 9511 Arithmetic Processing Unit and the whole system is controlled by an Intel 8080A CPU. All this information is then transferred to a universal computer through a synchronized communication channel; the optimization computations would be performed by this high-level computer. Different ways of performing the search of data over a large number of channels have been investigated. A particular software solution to overcome the gain and offset drift of the A/D converter has been proposed. The 8080A supervises the collection and routing of data in real time, while the 9511 performs calculations using these data. (Author)
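
    As a hedged sketch of the per-channel calculation mentioned above, the C fragment below derives the power factor from synchronously sampled voltage and current, cos φ = P / (V_rms · I_rms), with the real power P taken as the mean of the instantaneous product. The waveforms are synthetic and the code is not the AMD 9511 routine used in the project.

      /* Sketch: compute RMS voltage, RMS current, real power and power factor
         (cos phi) from synchronously sampled waveforms.  Data are synthetic. */
      #include <math.h>
      #include <stdio.h>

      #define N 64                          /* samples per mains cycle */

      int main(void)
      {
          const double PI = 3.141592653589793;
          double v[N], i[N];
          double phi = 30.0 * PI / 180.0;   /* synthetic 30-degree lag */

          for (int k = 0; k < N; k++) {
              double wt = 2.0 * PI * k / N;
              v[k] = 325.0 * sin(wt);        /* ~230 V RMS */
              i[k] = 14.1 * sin(wt - phi);   /* ~10 A RMS, lagging */
          }

          double p = 0.0, v2 = 0.0, i2 = 0.0;
          for (int k = 0; k < N; k++) {
              p  += v[k] * i[k];
              v2 += v[k] * v[k];
              i2 += i[k] * i[k];
          }
          p /= N;
          double v_rms = sqrt(v2 / N), i_rms = sqrt(i2 / N);
          double cos_phi = p / (v_rms * i_rms);

          printf("P = %.1f W, Vrms = %.1f V, Irms = %.2f A, cos phi = %.3f\n",
                 p, v_rms, i_rms, cos_phi);
          return 0;
      }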

  6. High-Performance Networking

    CERN. Geneva

    2003-01-01

    The series will start with a historical introduction about what people saw as high performance message communication in their time and how that developed into what is today known as "standard computer network communication". It will be followed by a far more technical part that uses the High Performance Computer Network standards of the 90's, with 1 Gbit/sec systems, as an introduction to an in-depth explanation of the three new 10 Gbit/s network and interconnect technology standards that already exist or are emerging. If necessary for a good understanding, some sidesteps will be included to explain important protocols as well as some necessary details of the Wide Area Network (WAN) standards concerned, including some basics of wavelength multiplexing (DWDM). Some remarks will be made concerning the rapidly expanding applications of networked storage.

  7. Mold heating and cooling microprocessor conversion

    Hoffman, D. P.

    1995-07-01

    Conversion of the microprocessors and software for the Mold Heating and Cooling (MHAC) pump package control systems was initiated to allow required system enhancements and provide data communications capabilities with the Plastics Information and Control System (PICS). The existing microprocessor-based control systems for the pump packages use an Intel 8088-based microprocessor board with a maximum of 64 Kbytes of program memory. The requirements for the system conversion were developed, and hardware has been selected to allow maximum reuse of existing hardware and software while providing the required additional capabilities and capacity. The new hardware will incorporate an Intel 80286-based microprocessor board with an 80287 math coprocessor, the system includes additional memory, I/O, and RS232 communication ports.

  8. Microprocessor Protection of Power Reducing Transformers

    F. A. Romanuk

    2011-01-01

    The paper contains an analysis of the advantages and disadvantages of existing differential protection terminals for power reducing transformers. The paper shows that there are good reasons to develop a microprocessor protection for power reducing transformers which contains the required functions and settings and which is based on Belarusian principles of relay protection system construction. The paper presents the functional structure of the microprocessor terminal for power reducing transformers which has been developed.

  9. Microprocessor Protection of Power Reducing Transformers

    F. A. Romanuk; S. P. Korolev; M. S. Loman

    2011-01-01

    The paper contains an analysis of the advantages and disadvantages of existing differential protection terminals for power reducing transformers. The paper shows that there are good reasons to develop a microprocessor protection for power reducing transformers which contains the required functions and settings and which is based on Belarusian principles of relay protection system construction. The paper presents the functional structure of the microprocessor terminal for power reducing transformers which has been developed.

  10. ''MICRO'' microprogramming language for sectional microprocessors

    Semenov, Yu.A.; Chudakov, V.N.

    1982-01-01

    The ''MICRO'' microprogramming input language developed for sectional microprocessors is described. The structure of the microinstruction, the purpose of its particular fields, the corresponding mnemonic codes and the requirements they have to meet are considered. A program for signed integer division written in the ''MICRO'' language is given as an example. The possibilities of modifying the translator for its adaptation to different types of processor and microprocessor sets are analyzed

  11. High performance data transfer

    Cottrell, R.; Fang, C.; Hanushevsky, A.; Kreuger, W.; Yang, W.

    2017-10-01

    The exponentially increasing need for high speed data transfer is driven by big data and cloud computing, together with the needs of data intensive science, High Performance Computing (HPC), defense, the oil and gas industry, etc. We report on the Zettar ZX software. This has been developed since 2013 to meet these growing needs by providing high performance data transfer and encryption in a scalable, balanced, easy to deploy and use way while minimizing power and space utilization. In collaboration with several commercial vendors, Proofs of Concept (PoC) consisting of clusters have been put together using off-the-shelf components to test the ZX scalability and its ability to balance services using multiple cores and links. The PoCs are based on SSD flash storage that is managed by a parallel file system. Each cluster occupies 4 rack units. Using the PoCs, between clusters we have achieved almost 200Gbps memory to memory over two 100Gbps links, and 70Gbps parallel file to parallel file with encryption over a 5000 mile 100Gbps link.

  12. Fermilab advanced computer program multi-microprocessor project

    Nash, T.; Areti, H.; Biel, J.

    1985-06-01

    Fermilab's Advanced Computer Program is constructing a powerful 128 node multi-microprocessor system for data analysis in high-energy physics. The system will use commercial 32-bit microprocessors programmed in Fortran-77. Extensive software supports easy migration of user applications from a uniprocessor environment to the multiprocessor and provides sophisticated program development, debugging, and error handling and recovery tools. This system is designed to be readily copied, providing computing cost effectiveness of below $2200 per VAX 11/780 equivalent. The low cost, commercial availability, compatibility with off-line analysis programs, and high data bandwidths (up to 160 MByte/sec) make the system an ideal choice for applications to on-line triggers as well as an offline data processor

  13. Microprocessor event analysis in parallel with Camac data acquisition

    Cords, D.; Eichler, R.; Riege, H.

    1981-01-01

    The Plessey MIPROC-16 microprocessor (16 bits, 250 ns execution time) has been connected to a Camac System (GEC-ELLIOTT System Crate) and shares the Camac access with a Nord-10S computer. Interfaces have been designed and tested for execution of Camac cycles, communication with the Nord-10S computer and DMA-transfer from Camac to the MIPROC-16 memory. The system is used in the JADE data-acquisition-system at PETRA where it receives the data from the detector in parallel with the Nord-10S computer via DMA through the indirect-data-channel mode. The microprocessor performs an on-line analysis of events and the result of various checks is appended to the event. In case of spurious triggers or clear beam gas events, the Nord-10S buffer will be reset and the event omitted from further processing. (orig.)

  14. Microprocessor event analysis in parallel with CAMAC data acquisition

    Cords, D; Riege, H

    1981-01-01

    The Plessey MIPROC-16 microprocessor (16 bits, 250 ns execution time) has been connected to a CAMAC System (GEC-ELLIOTT System Crate) and shares the CAMAC access with a Nord-10S computer. Interfaces have been designed and tested for execution of CAMAC cycles, communication with the Nord-10S computer and DMA-transfer from CAMAC to the MIPROC-16 memory. The system is used in the JADE data-acquisition-system at PETRA where it receives the data from the detector in parallel with the Nord-10S computer via DMA through the indirect-data-channel mode. The microprocessor performs an on-line analysis of events and the results of various checks are appended to the event. In case of spurious triggers or clear beam gas events, the Nord-10S buffer will be reset and the event omitted from further processing. (5 refs).

  15. Design analysis and microprocessor based control of a nuclear reactor

    Sabbakh, N.J.

    1988-01-01

    The object of this thesis is to design and test a microprocessor-based controller for a simulated nuclear reactor system. The mathematical model chosen describes the dynamics of a typical nuclear reactor with a one-group delayed-neutron approximation and temperature feedback. A digital computer program has been developed for the design and analysis of the simulated model based on the concept of state-variable feedback, in order to meet a desired system response with a maximum overshoot of 3.4% and a settling time of 4 sec. The state-variable feedback coefficients are designed for the continuous system, and an approximation is then used to obtain the state-variable feedback vector for the discrete system. System control was implemented utilizing Direct Digital Control (DDC) of the simulated nuclear reactor model through a control algorithm executed on a microprocessor-based system. The controller performance was satisfactorily tested by exciting the reactor system with a transient reactivity disturbance and by a step change in power demand. Direct digital control, when implemented on a microprocessor, adds versatility and flexibility in system design, with the added advantage of the possible use of optimal control algorithms. 6 tabs.; 30 figs.; 46 refs.; 6 apps
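
    A minimal sketch of a direct-digital-control loop of the kind described, assuming a two-state discrete plant and a precomputed state-feedback gain vector acting on deviations from the operating point; the model matrices and gains below are illustrative placeholders, not those of the thesis.

      /* Sketch: direct digital control with state feedback u[k] = -K x[k],
         where x holds deviations from the operating point (e.g. power and
         temperature).  The discrete model and gains are placeholders. */
      #include <stdio.h>

      int main(void)
      {
          double A[2][2] = { { 0.95, 0.10 }, { 0.00, 0.90 } };  /* x[k+1] = A x + B u */
          double B[2]    = { 0.00, 0.10 };
          double K[2]    = { 2.0,  1.5  };                      /* precomputed gains */

          double x[2] = { 0.20, 0.00 };   /* initial deviation after a disturbance */

          for (int k = 0; k < 20; k++) {
              double u = -(K[0] * x[0] + K[1] * x[1]);          /* control law */

              double x0 = A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u;
              double x1 = A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u;
              x[0] = x0;
              x[1] = x1;

              if (k % 5 == 0)
                  printf("k=%2d  u=%+.3f  deviation=(%+.3f, %+.3f)\n", k, u, x[0], x[1]);
          }
          return 0;
      }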

  16. Radiation-hardened bulk Si-gate CMOS microprocessor family

    Stricker, R.E.; Dingwall, A.G.F.; Cohen, S.; Adams, J.R.; Slemmer, W.C.

    1979-01-01

    RCA and Sandia Laboratories jointly developed a radiation-hardened bulk Si-gate CMOS technology which is used to fabricate the CDP-1800 series microprocessor family. Total dose hardness of 1 × 10⁶ rads (Si) and transient upset hardness of 5 × 10⁸ rads (Si)/sec with no latch-up at any transient level was achieved. Radiation-hardened parts manufactured to date include the CDP-1802 microprocessor, the CDP-1834 ROM, the CDP-1852 8-bit I/O port, the CDP-1856 N-bit 1 of 8 decoder, and the TCC-244 256 x 4 Static RAM. The paper is divided into three parts. In the first section, the basic fundamentals of the non-hardened C²L technology used for the CDP-1800 series microprocessor parts is discussed along with the primary reasons for hardening this technology. The second section discusses the major changes in the fabrication sequence that are required to produce radiation-hardened devices. The final section details the electrical performance characteristics of the hardened devices as well as the effects of radiation on device performance. Also included in this section is a discussion of the TCC-244 256 x 4 Static RAM designed jointly by RCA and Sandia Laboratories for this application

  17. High performance sapphire windows

    Bates, Stephen C.; Liou, Larry

    1993-02-01

    High-quality, wide-aperture optical access is usually required for the advanced laser diagnostics that can now make a wide variety of non-intrusive measurements of combustion processes. Specially processed and mounted sapphire windows are proposed to provide this optical access in extreme environments. Through surface treatments and proper thermal stress design, single crystal sapphire can be a mechanically equivalent replacement for high strength steel. A prototype sapphire window and mounting system have been developed in a successful NASA SBIR Phase 1 project. A large and reliable increase in sapphire design strength (as much as 10x) has been achieved, and the initial specifications necessary for these gains have been defined. Failure testing of small windows has conclusively demonstrated the increased sapphire strength, indicating that a nearly flawless surface polish is the primary cause of strengthening, while an unusual mounting arrangement also significantly contributes to a larger effective strength. Phase 2 work will complete specification and demonstration of these windows, and will fabricate a set for use at NASA. The enhanced capabilities of these high performance sapphire windows will lead to many diagnostic capabilities not previously possible, as well as new applications for sapphire.

  18. 14th annual Results and Review Workshop on High Performance Computing in Science and Engineering

    Nagel, Wolfgang E; Resch, Michael M; Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2011; High Performance Computing in Science and Engineering '11

    2012-01-01

    This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2011. The reports cover all fields of computational science and engineering, ranging from CFD to computational physics and chemistry, to computer science, with a special emphasis on industrially relevant applications. Presenting results for both vector systems and microprocessor-based systems, the book allows readers to compare the performance levels and usability of various architectures. As HLRS

  19. Commercialization issues and funding opportunities for high-performance optoelectronic computing modules

    Hessenbruch, John M.; Guilfoyle, Peter S.

    1997-01-01

    Low power, optoelectronic integrated circuits are being developed for high speed switching and data processing applications. These high performance optoelectronic computing modules consist of three primary components: vertical cavity surface emitting lasers, diffractive optical interconnect elements, and detector/amplifier/laser driver arrays. Following the design and fabrication of an HPOC module prototype, selected commercial funding sources will be evaluated to support a product development stage. These include the formation of a strategic alliance with one or more microprocessor or telecommunications vendors, and/or equity investment from one or more venture capital firms.

  20. G-cueing microcontroller (a microprocessor application in simulators)

    Horattas, C. G.

    1980-01-01

    A g-cueing microcontroller is described which consists of a tandem pair of microprocessors dedicated to the task of simulating pilot-sensed cues caused by gravity effects. This task includes execution of a g-cueing model which drives actuators that alter the configuration of the pilot's seat. The g-cueing microcontroller receives acceleration commands from the aerodynamics model in the main computer and creates the stimuli that produce the physical acceleration effects of the aircraft seat on the pilot's anatomy. One of the two microprocessors is a fixed instruction processor that performs all control and interface functions. The other, a specially designed bipolar bit-slice microprocessor, is a microprogrammable processor dedicated to all arithmetic operations. The two processors communicate with each other through a shared memory. The g-cueing microcontroller contains its own dedicated I/O conversion modules for interfacing with the seat actuators and controls, and a DMA controller for interfacing with the simulation computer. Any application which can be microcoded within the available memory, the available real time and the available I/O channels could be implemented in the same controller.

  1. The first IA-64 microprocessor

    Rusu, S

    2000-01-01

    The first implementation of the IA-64 architecture achieves high performance by using a highly parallel execution core, while maintaining binary compatibility with the IA-32 instruction set. Explicitly parallel instruction computing (EPIC) design maximizes performance through hardware and software synergy. The processor contains 25.4 million transistors and operates at 800 MHz. The chip is fabricated in a 0.18-μm CMOS process with six metal layers and packaged in a 1012-pad organic land grid array using C4 (flip chip) assembly technology. A core-speed back-side bus connects the processor to a 4-MB L3 cache. (6 refs).

  2. RISC Processors and High Performance Computing

    Bailey, David H.; Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    This tutorial will discuss the top five RISC microprocessors and the parallel systems in which they are used. It will provide a unique cross-machine comparison not available elsewhere. The effective performance of these processors will be compared by citing standard benchmarks in the context of real applications. The latest NAS Parallel Benchmarks, both absolute performance and performance per dollar, will be listed. The next generation of the NPB will be described. The tutorial will conclude with a discussion of future directions in the field. Technology Transfer Considerations: All of these computer systems are commercially available internationally. Information about these processors is available in the public domain, mostly from the vendors themselves. The NAS Parallel Benchmarks and their results have been previously approved numerous times for public release, beginning back in 1991.

  3. An INTEL 8080 microprocessor development system

    Horne, P.J.

    1977-01-01

    The INTEL 8080 has become one of the two most widely used microprocessors at CERN, the other being the MOTOROLA 6800. Even though this is the case, there have been, to date, only rudimentary facilities available for aiding the development of application programs for this microprocessor. An ideal development system is one which has a sophisticated editing and filing system, an assembler/compiler, and access to the microprocessor application. In many instances access to a PROM programmer is also required, as the application may utilize only PROMs for program storage. With these thoughts in mind, an INTEL 8080 microprocessor development system was implemented in the Proton Synchrotron (PS) Division. This system utilizes a PDP 11/45 as the editing and file-handling machine, and an MSC 8/MOD 80 microcomputer for assembling, PROM programming and debugging user programs at run time. The two machines are linked by an existing CAMAC crate system which will also provide the means of access to microprocessor applications in CAMAC and the interface of the development system to any other application. (Auth.)

  4. R high performance programming

    Lim, Aloysius

    2015-01-01

    This book is for programmers and developers who want to improve the performance of their R programs by making them run faster with large data sets or who are trying to solve a pesky performance problem.

  5. Real time computer system with distributed microprocessors

    Heger, D.; Steusloff, H.; Syrbe, M.

    1979-01-01

    The usual centralized structure of computer systems, especially of process computer systems, cannot sufficiently exploit the progress of very large-scale integrated semiconductor technology with respect to increasing reliability and performance and to decreasing the cost, especially of the external periphery. This, together with the increasing demands on process control systems, has led the authors to examine the structure of such systems in general and to adapt it to the new environment. Computer systems with distributed, optical-fibre-coupled microprocessors allow very favourable problem-solving with decentrally controlled bus lines and functional redundancy with automatic fault diagnosis and reconfiguration. A suitable programming system supports these hardware properties: PEARL for multicomputer systems, a dynamic loader, and processor and network operating systems. The necessary design principles are proved mainly theoretically and by value analysis. An optimal overall system of this new generation of process control systems was established, supported by the results of two PDV projects (modular operating systems, input/output colour screen system as control panel), and tested by applying the system to the control of 28 pit furnaces of a steel works. (orig.) [de]

  6. Trust versus confidence: Microprocessors and personnel monitoring

    Chiaro, P.J. Jr.

    1993-01-01

    Due to recent technological advances, substantial improvements have been made in personnel contamination monitoring. In all likelihood, these advances will close out the days of manually frisking personnel for radioactive contamination. Unfortunately, as microprocessor-based monitors become more widely used, not only at commercial power reactors but also at government facilities, questions concerning their trustworthiness arise. Algorithms make decisions that were previously made by technicians. Trust is placed not in technicians but in machines. In doing this it is assumed that the machine never misses. Inevitably, this trust drops, due largely to "false alarms". This is especially true when monitoring for alpha contamination. What is a "false alarm"? Do these machines and their algorithms that we put our trust in make mistakes? An analysis was performed on half-body and hand-and-foot monitors at Oak Ridge National Laboratory (ORNL) in order to justify the suggested confidence level used for alarm point determination. Sources used in this analysis had activities approximating ORNL's contamination limits
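
    As a sketch of how a confidence level can translate into an alarm point, assume the background counts in one counting interval are Poisson distributed; the alarm setpoint can then be placed k standard deviations above the expected background, with k chosen from the desired false-alarm probability. The background value and k below are illustrative and this is not the ORNL procedure itself.

      /* Sketch: derive an alarm setpoint from a mean background count assuming
         Poisson statistics; the setpoint sits k sigma above background, where k
         follows from the desired false-alarm probability (k ~ 3.09 for ~0.1%).
         Values are illustrative, not actual ORNL settings. */
      #include <math.h>
      #include <stdio.h>

      static double alarm_setpoint(double mean_background, double k)
      {
          return mean_background + k * sqrt(mean_background);  /* sigma = sqrt(mean) */
      }

      int main(void)
      {
          double background = 25.0;   /* mean counts in one counting interval */
          double k = 3.09;            /* one-sided ~99.9% confidence */
          double setpoint = alarm_setpoint(background, k);

          printf("alarm if counts > %.1f\n", setpoint);

          /* A gross count above the setpoint is flagged; everything below is
             treated as background fluctuation. */
          double observed = 41.0;
          printf("observed %.0f -> %s\n", observed,
                 observed > setpoint ? "ALARM" : "clean");
          return 0;
      }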

  7. Trust versus confidence: Microprocessors and personnel monitoring

    Chiaro, P.J. Jr.

    1993-01-01

    Due to recent technological advances, substantial improvements have been made in personnel contamination monitoring. In all likelihood, these advances will close out the days of manually frisking personnel for radioactive contamination. Unfortunately, as microprocessor-based monitors become more widely used, not only at commercial power reactors but also at government facilities, questions concerning their trustworthiness arise. Algorithms make decisions that were previously made by technicians. Trust is placed not in technicians but in machines. In doing this it is assumed that the machine never misses. Inevitably, this trust drops, due largely to ''false alarms''. This is especially true when monitoring for alpha contamination. What is a ''false alarm''? Do these machines and their algorithms that we put our trust in make mistakes? An analysis was performed on half-body and hand-and-foot monitors at Oak Ridge National Laboratory (ORNL) in order to justify the suggested confidence level used for alarm point determination. Sources used in this analysis had activities approximating ORNL's contamination limits

  8. Trust versus confidence: Microprocessors and personnel monitoring

    Chiaro, P.J. Jr.

    1994-01-01

    Due to recent technological advances, substantial improvements have been made in personnel contamination monitoring. In all likelihood, these advances will close out the days of manually frisking personnel for radioactive contamination. Unfortunately, as microprocessor-based monitors become more widely used, not only at commercial power reactors but also at government facilities, questions concerning their trustworthiness arise. Algorithms make decisions that were previously made by technicians. Trust is placed not in technicians but in machines. In doing this it is assumed that the machine never misses. Inevitably, this trust drops, due largely to ''false alarms''. This is especially true when monitoring for alpha contamination. What is a ''false alarm''? Do these machines and their algorithms that we put our trust in make mistakes? An analysis was performed on half-body and hand-and-foot monitors at Oak Ridge National Laboratory (ORNL) in order to justify the suggested confidence level used for alarm point determination. Sources used in this analysis had activities approximating ORNL's contamination limits

  9. High performance work practices, innovation and performance

    Jørgensen, Frances; Newton, Cameron; Johnston, Kim

    2013-01-01

    Research spanning nearly 20 years has provided considerable empirical evidence for relationships between High Performance Work Practices (HPWPs) and various measures of performance including increased productivity, improved customer service, and reduced turnover. What stands out from......, and Africa to examine these various questions relating to the HPWP-innovation-performance relationship. Each paper discusses a practice that has been identified in HPWP literature and potential variables that can facilitate or hinder the effects of these practices of innovation- and performance...

  10. Microprocessor system design a practical introduction

    Spinks, Michael J

    2013-01-01

    Microprocessor System Design: A Practical Introduction describes the concepts and techniques incorporated into the design of electronic circuits, particularly microprocessor boards and their peripherals. The book reviews the basic building blocks of the electronic systems composed of digital (logic levels, gate output circuitry) and analog components (resistors, capacitors, diodes, transistors). The text also describes operational amplifiers (op-amp) that use a negative feedback technique to improve the parameters of the op-amp. The design engineer can use programmable array logic (PAL) to rep

  11. Application of microprocessors to radiation protection measurements

    Zappe, D.; Meldes, C.

    1982-01-01

    In radiation protection measurements signals from radiation detectors or dosemeters have to be transformed into quantities relevant to radiation protection. In most cases this can only be done by taking into account various parameters (e.g. the quality factor). Moreover, the characteristics of the statistical laws of nuclear radiation emission have to be considered. These problems can properly be solved by microprocessors. After reviewing the main properties of microprocessors, some typical examples of applying them to problems of radiation protection measurement are given. (author)

  12. Microprocessor protection relays: new prospects or new problems?

    Gurevich, Vladimir

    2006-01-01

    The internal architecture and principles of operation of microprocessor-based devices including so-called "microprocessor protective relays" have little in common with devices called "electric relays". But microprocessor-based relay protection devices are gradually driving out the traditional electromechanical and even electronic relay protection from virtually all fields of power and electrical engineering. Advantages of microprocessor-based protection means over traditional ones are far ...

  13. A microprocessor based area monitor system for neutron and gamma radiation

    Wilhelm, R.; Heusser, G.

    1980-01-01

    The conventional electronics of the area monitors at the MPI-Heidelberg accelerators have been replaced by a microprocessor system consisting of individual detector-microprocessors and a central microcomputer. The detector microprocessors convert the count rates of BF3 and GM counter tubes into dose rates and control three different radiation thresholds (failure, low and high level). Different warning signals are operated directly by the detector processors, whereas the dose rates are transferred to the central microcomputer. Here the data are processed for recording on tape and displaying on TV monitors. The detector as well as the central processors have been developed on the basis of a 16-bit microprocessor. In the control rooms the dose rates of the individual monitors are displayed and on an indicator board showing the different locations, the high radiation level and the state of the doors (open, locked, and closed, locked but open) are signaled by different LEDs. If a high radiation threshold is surpassed, the doors adjacent to that area can be locked either by switches on the indicator board or automatically. Within the experimental area, the low and high radiation level is indicated by acoustic and light signals. The whole concept permits keeping the absorbed doses of the personnel as low as possible without affecting the flexibility of the experimental operations. The independence of the microprocessor driven area monitors guarantees a high reliability. Compared to conventional electronics the advantages of the system are its reliability and cost. (Author)

  14. The HXR80M-balloon experiment: a microprocessor-controlled transatlantic payload

    Ubertini, P.; Bazzano, A.; Boccaccini, L.

    1980-01-01

    Following the results obtained from the successful transatlantic flight launched during the summer of 1976 from the CNR Milo Base, Sicily, the Laboratorio di Astrofisica Spaziale has started a new program in the hard X-ray astronomy field. It basically consists of the development of high resolution, large area Multiwire Proportional Chambers to be employed in long duration balloon flights to study and monitor galactic and extragalactic sources. This note will describe the flight configuration and performance of the HXR80M payload. The experiment is expected to fly during July 1980 from the Milo Base in the framework of the CNR experimental balloon campaign. The note will analyze the main characteristics of the detectors employed, of the data handling electronics and in particular of the hardware and the software of the on-board microprocessor controlled multichannel analyzer. In fact the limitations due to the low bit rate HF link (1.2 kbit/s) and the long flight duration (about one week) make imperative the use of an on-board microprocessor system to handle and select in real time the scientific data and to control the housekeeping and the telecommand systems

  15. Python high performance programming

    Lanaro, Gabriele

    2013-01-01

    An exciting, easy-to-follow guide illustrating the techniques to boost the performance of Python code, and their applications with plenty of hands-on examples. If you are a programmer who likes the power and simplicity of Python and would like to use this language for performance-critical applications, this book is ideal for you. All that is required is a basic knowledge of the Python programming language. The book will cover basic and advanced topics, so it will be great for you whether you are a new or a seasoned Python developer.

  16. High performance germanium MOSFETs

    Saraswat, Krishna [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States)]. E-mail: saraswat@stanford.edu; Chui, Chi On [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Krishnamohan, Tejas [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Kim, Donghyun [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Nayfeh, Ammar [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Pethe, Abhijit [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States)

    2006-12-15

    Ge is a very promising material as future channel materials for nanoscale MOSFETs due to its high mobility and thus a higher source injection velocity, which translates into higher drive current and smaller gate delay. However, for Ge to become main-stream, surface passivation and heterogeneous integration of crystalline Ge layers on Si must be achieved. We have demonstrated growth of fully relaxed smooth single crystal Ge layers on Si using a novel multi-step growth and hydrogen anneal process without any graded buffer SiGe layer. Surface passivation of Ge has been achieved with its native oxynitride (GeO x N y ) and high-permittivity (high-k) metal oxides of Al, Zr and Hf. High mobility MOSFETs have been demonstrated in bulk Ge with high-k gate dielectrics and metal gates. However, due to their smaller bandgap and higher dielectric constant, most high mobility materials suffer from large band-to-band tunneling (BTBT) leakage currents and worse short channel effects. We present novel, Si and Ge based heterostructure MOSFETs, which can significantly reduce the BTBT leakage currents while retaining high channel mobility, making them suitable for scaling into the sub-15 nm regime. Through full band Monte-Carlo, Poisson-Schrodinger and detailed BTBT simulations we show a dramatic reduction in BTBT and excellent electrostatic control of the channel, while maintaining very high drive currents in these highly scaled heterostructure DGFETs. Heterostructure MOSFETs with varying strained-Ge or SiGe thickness, Si cap thickness and Ge percentage were fabricated on bulk Si and SOI substrates. The ultra-thin (∼2 nm) strained-Ge channel heterostructure MOSFETs exhibited >4x mobility enhancements over bulk Si devices and >10x BTBT reduction over surface channel strained SiGe devices.

  17. High performance germanium MOSFETs

    Saraswat, Krishna; Chui, Chi On; Krishnamohan, Tejas; Kim, Donghyun; Nayfeh, Ammar; Pethe, Abhijit

    2006-01-01

    Ge is a very promising material as future channel materials for nanoscale MOSFETs due to its high mobility and thus a higher source injection velocity, which translates into higher drive current and smaller gate delay. However, for Ge to become main-stream, surface passivation and heterogeneous integration of crystalline Ge layers on Si must be achieved. We have demonstrated growth of fully relaxed smooth single crystal Ge layers on Si using a novel multi-step growth and hydrogen anneal process without any graded buffer SiGe layer. Surface passivation of Ge has been achieved with its native oxynitride (GeO x N y ) and high-permittivity (high-k) metal oxides of Al, Zr and Hf. High mobility MOSFETs have been demonstrated in bulk Ge with high-k gate dielectrics and metal gates. However, due to their smaller bandgap and higher dielectric constant, most high mobility materials suffer from large band-to-band tunneling (BTBT) leakage currents and worse short channel effects. We present novel, Si and Ge based heterostructure MOSFETs, which can significantly reduce the BTBT leakage currents while retaining high channel mobility, making them suitable for scaling into the sub-15 nm regime. Through full band Monte-Carlo, Poisson-Schrodinger and detailed BTBT simulations we show a dramatic reduction in BTBT and excellent electrostatic control of the channel, while maintaining very high drive currents in these highly scaled heterostructure DGFETs. Heterostructure MOSFETs with varying strained-Ge or SiGe thickness, Si cap thickness and Ge percentage were fabricated on bulk Si and SOI substrates. The ultra-thin (∼2 nm) strained-Ge channel heterostructure MOSFETs exhibited >4x mobility enhancements over bulk Si devices and >10x BTBT reduction over surface channel strained SiGe devices

  18. High Performance Computing Multicast

    2012-02-01

    Record text recovered only in fragments: a cited reference ("A History of the Virtual Synchrony Replication Model," in Replication: Theory and Practice, Charron-Bost, B., Pedone, F., and Schiper, A., Eds.) and an acronym list (IP/IPv4, Internet Protocol version 4.0; IPMC, Internet Protocol Multicast; LAN, Local Area Network; MCMD, Dr. Multicast; MPI).

  19. NGINX high performance

    Sharma, Rahul

    2015-01-01

    System administrators, developers, and engineers looking for ways to achieve maximum performance from NGINX will find this book beneficial. If you are looking for solutions such as how to handle more users from the same system or load your website pages faster, then this is the book for you.

  20. Microprocessors: From basic chips to complete systems

    Dobinson, R.W.

    1985-01-01

    These lectures aim to present and explain in general terms some of the characteristics of microprocessor chips and associated components. They show how systems are synthesized from the basic integrated circuit building blocks which are currently available; processor, memory, input-output (I/0) devices, etc. (orig./HSI)

  1. Microprocessor Controlled Capacitor Bank Switching System for ...

    In this work, analysis and development of a microprocessor controlled capacitor bank switching system for deployment in a smart distribution network were carried out. This system was implemented by the use of discrete components such as resistors, capacitors, transistors, diodes and an automatic voltage regulator, with the ...

  2. Low power and high accuracy spike sorting microprocessor with on-line interpolation and re-alignment in 90 nm CMOS process.

    Chen, Tung-Chien; Ma, Tsung-Chuan; Chen, Yun-Yu; Chen, Liang-Gee

    2012-01-01

    Accurate spike sorting is an important issue for neuroscientific and neuroprosthetic applications. The sorting of spikes depends on the features extracted from the neural waveforms, and a better sorting performance usually comes with a higher sampling rate (SR). However, for long duration experiments on free-moving subjects, miniaturized and wireless neural recording ICs are the current trend, and sorting accuracy is usually compromised by adopting a lower SR to reduce power consumption. In this paper, we implement an on-chip spike sorting processor with integrated interpolation hardware in order to improve the performance in terms of power versus accuracy. According to the fabrication results in a 90 nm process, if the interpolation is appropriately performed during spike sorting, a system operated at an SR of 12.5 k samples per second (sps) can outperform one without interpolation running at 25 ksps on both accuracy and power.
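
    The idea behind the abstract above is that interpolating a low-sampling-rate spike waveform before alignment and feature extraction can recover much of the accuracy lost to the lower rate. A minimal sketch of that idea in Python (cubic interpolation and peak re-alignment; the upsampling factor, window length and synthetic spike are illustrative assumptions, not the paper's parameters):

      import numpy as np
      from scipy.interpolate import CubicSpline

      def upsample_and_realign(spike: np.ndarray, factor: int = 4) -> np.ndarray:
          """Upsample a spike snippet by `factor` and re-align it on its peak."""
          n = len(spike)
          t = np.arange(n)
          t_fine = np.linspace(0, n - 1, (n - 1) * factor + 1)
          fine = CubicSpline(t, spike)(t_fine)
          peak = int(np.argmax(np.abs(fine)))      # align on the largest excursion
          center = len(fine) // 2
          return np.roll(fine, center - peak)      # crude circular re-alignment

      # Illustrative use: a noisy synthetic spike sampled at a low rate.
      rng = np.random.default_rng(0)
      spike = np.exp(-0.5 * ((np.arange(32) - 13.3) / 2.0) ** 2) + 0.05 * rng.standard_normal(32)
      aligned = upsample_and_realign(spike)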

  3. A fastbus master based on a risc microprocessor

    Cerrito, L.; Chorowicz, V.; Lebbolo, H.; Vallereau, A.

    1990-01-01

    SISIFUS is a general purpose Fastbus Master and Slave able to perform any operation on both Fastbus segments. Master operations are directed either by the processor or by two fast sequencers. A Block Mover function is implemented allowing direct data block transfers between two Slaves. SISIFUS uses the AM 29000 RISC microprocessor which can execute every assembler instruction in 40ns. The on-board monitor/debugger allows programs to be written in assembler from a terminal connected to the module or written in C and cross compiled on a host computer (PC)

  4. Microprocessor based beam loss monitor system for the AGS

    Witkover, R.L.

    1979-01-01

    An array of 120 long radiation monitors (LRM) has been installed around the AGS. Each monitor is an extended coaxial ion chamber, 5 meters long, made from hollow core coaxial transmission cable pressurized with argon. The LRMs are each connected to a low current preamplifier and voltage-to-frequency converter (VFC). The digital output of each channel is fed to a 16 bit counter chip which bridges the bus of an 8085 microprocessor. This circuit is connected to the AGS PD-10 for data taking or may function as a stand-alone unit. Various operating modes can be selected for data readout. System design and operating performance are described
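
    Since each channel is a VFC feeding a 16-bit counter, the microprocessor's per-cycle work reduces to differencing successive counter readings (modulo 2^16) and applying a calibration constant. A minimal Python sketch of that bookkeeping; the calibration factor and readout interval are invented for illustration and are not AGS values.

      def loss_rate(prev_count: int, curr_count: int,
                    interval_s: float, rad_per_count: float) -> float:
          """Convert two successive 16-bit counter readings to a loss rate.

          Handles a single counter wrap-around; rad_per_count and interval_s
          are illustrative calibration assumptions.
          """
          delta = (curr_count - prev_count) % (1 << 16)   # unsigned 16-bit difference
          return delta * rad_per_count / interval_s

      print(loss_rate(prev_count=65000, curr_count=120, interval_s=1.0,
                      rad_per_count=1.0e-6))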

  5. High performance proton accelerators

    Favale, A.J.

    1989-01-01

    In concert with this theme this paper briefly outlines how Grumman, over the past 4 years, has evolved from a company that designed and fabricated a Radio Frequency Quadrupole (RFQ) accelerator from the Los Alamos National Laboratory (LANL) physics and specifications to a company which, as prime contractor, is designing, fabricating, assembling and commissioning the US Army Strategic Defense Command's (USA SDC) Continuous Wave Deuterium Demonstrator (CWDD) accelerator as a turn-key operation. In the case of the RFQ, LANL scientists performed the physics analysis, established the specifications, supported Grumman on the mechanical design, conducted the RFQ tuning and tested the RFQ at their laboratory. For the CWDD Program Grumman has the responsibility for the physics and engineering designs, assembly, testing and commissioning, albeit with the support of consultants from LANL, Lawrence Berkeley Laboratory (LBL) and Brookhaven National Laboratory. In addition, Culham Laboratory and LANL are team members on CWDD. The physics design has been reviewed by LANL scientists as well as by a USA SDC review board. 9 figs

  6. Neutron beam irradiation study of workload dependence of SER in a microprocessor

    Michalak, Sarah E. [Los Alamos National Laboratory]; Graves, Todd L. [Los Alamos National Laboratory]; Hong, Ted [Stanford]; Ackaret, Jerry [IBM]; Rao, Sonny [IBM]; Mitra, Subhasish [Stanford]; Sanda, Pia [IBM]

    2009-01-01

    It is known that workloads are an important factor in soft error rates (SER), but it is proving difficult to find differentiating workloads for microprocessors. We have performed neutron beam irradiation studies of a commercial microprocessor under a wide variety of workload conditions, from idle, performing no operations, to very busy workloads resembling real HPC, graphics, and business applications. There is evidence that the mean times to first indication of failure (MTFIF, defined in Section II) may be different for some of the applications.
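
    One way to read the comparison of per-workload MTFIF values is as a comparison of sample means with uncertainty attached; under a simplifying exponential assumption the mean of n observed times has relative standard error 1/sqrt(n). A minimal Python sketch of that comparison; the workload names, observation counts and beam-hours below are invented for illustration and are not the paper's data.

      import math

      def mtfif_with_error(times_to_first_failure):
          """Mean time to first indication of failure and its 1-sigma standard error,
          assuming (as a simplification) exponentially distributed times."""
          n = len(times_to_first_failure)
          mean = sum(times_to_first_failure) / n
          return mean, mean / math.sqrt(n)

      observations = {                 # invented beam-hours, for illustration only
          "idle":    [12.0, 15.5, 9.8, 14.1],
          "hpc_mix": [6.2, 4.9, 7.4, 5.5],
      }
      for workload, times in observations.items():
          m, err = mtfif_with_error(times)
          print(f"{workload}: MTFIF = {m:.1f} +/- {err:.1f} beam-hours")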

  7. FPGAs Emulate Microprocessors-A Successful Case for HFC NPP Digital I and C Upgrade

    Hsu, Allen; Crow, Ivan; Reese, Carl; Kim, Jong; Yang, Steve

    2014-01-01

    Field Programmable Gate Arrays (FPGAs), as programmable logic devices (PLDs), have gained a great deal of interest for implementing safety I and C applications in nuclear power plants (NPPs), largely owing to the FPGAs' potential advantage over the currently more common microprocessor-based digital I and C applications. First of all, FPGAs have adequate capabilities for most digital I and C applications in NPPs. Secondly, FPGAs provide products with longer lifetime, improve testability, and reduce the drift which occurs in analog-based systems, from a hardware perspective. Thirdly, FPGAs, from a software perspective, can be made simpler, less reliant on complex software such as operating systems, which should make FPGAs easier to qualify for nuclear safety applications. Fourthly, FPGAs are less vulnerable to cyber attacks when FPGAs implement the I and C systems that do not contain high-level, general purpose software that may be easily subjected to malicious modifications. Finally, FPGAs can bring cost reduction in an I and C digital upgrade because FPGAs can provide a simpler licensing process than microprocessor-based digital I and C, and FPGAs can be implemented more efficiently. This paper will present one successful case for YGN Unit I and C upgrade using FPGA-based components to replace the obsolete Intel 8085 Microprocessor-based controllers. In this case, FPGAs emulated the process of the existing microprocessors and interpreted the execution of CPU processing. More than 160 of the FPGA-based SBC-01 controllers replacing the Intel 8085 Microprocessor-based Printed Circuit Boards have been installed and running successfully for safety I and C applications over the last five years. In this upgrade, the new FPGA-based controller board SBC-01 emulated the functions of the Intel 8085 microprocessor correctly. It is a successful and cost-effective upgrade. In this paper, lifecycle design and implementation process and rigorous V and V activities that were used in the

  8. FPGAs Emulate Microprocessors-A Successful Case for HFC NPP Digital I and C Upgrade

    Hsu, Allen; Crow, Ivan; Reese, Carl; Kim, Jong; Yang, Steve [Doosan HF Controls Corp, Carrollton (United States)]

    2014-08-15

    Field Programmable Gate Arrays (FPGAs), as programmable logic devices (PLDs), have gained a great deal of interest for implementing safety I and C applications in nuclear power plants (NPPs), largely owing to the FPGAs' potential advantage over the currently more common microprocessor-based digital I and C applications. First of all, FPGAs have adequate capabilities for most digital I and C applications in NPPs. Secondly, FPGAs provide products with longer lifetime, improve testability, and reduce the drift which occurs in analog-based systems, from a hardware perspective. Thirdly, FPGAs, from a software perspective, can be made simpler, less reliant on complex software such as operating systems, which should make FPGAs easier to qualify for nuclear safety applications. Fourthly, FPGAs are less vulnerable to cyber attacks when FPGAs implement the I and C systems that do not contain high-level, general purpose software that may be easily subjected to malicious modifications. Finally, FPGAs can bring cost reduction in an I and C digital upgrade because FPGAs can provide a simpler licensing process than microprocessor-based digital I and C, and FPGAs can be implemented more efficiently. This paper will present one successful case for YGN Unit I and C upgrade using FPGA-based components to replace the obsolete Intel 8085 Microprocessor-based controllers. In this case, FPGAs emulated the process of the existing microprocessors and interpreted the execution of CPU processing. More than 160 of the FPGA-based SBC-01 controllers replacing the Intel 8085 Microprocessor-based Printed Circuit Boards have been installed and running successfully for safety I and C applications over the last five years. In this upgrade, the new FPGA-based controller board SBC-01 emulated the functions of the Intel 8085 microprocessor correctly. It is a successful and cost-effective upgrade. In this paper, lifecycle design and implementation process and rigorous V and V activities that were used in the

  9. Design description of a microprocessor based Engine Monitoring and Control unit (EMAC) for small turboshaft

    Baez, A. N.

    1985-01-01

    Research programs have demonstrated that digital electronic controls are more suitable for advanced aircraft/rotorcraft turbine engine systems than hydromechanical controls. Commercially available microprocessors are believed to have the speed and computational capability required for implementing advanced digital control algorithms. Thus, it is desirable to demonstrate that off-the-shelf microprocessors are indeed capable of performing real time control of advanced gas turbine engines. The engine monitoring and control (EMAC) unit was designed and fabricated specifically to meet the requirements of an advanced gas turbine engine control system. The EMAC unit is fully operational in the Army/NASA small turboshaft engine digital research program.

  10. Microprocessor-controlled scanning densitometer system

    Shurtliff, R.W.

    1980-04-01

    An Automated Scanning Densitometer System has been developed by uniting a microprocessor with a low energy x-ray densitometer system. The microprocessor controls the detector movement, provides self-calibration, compensates raw readings to provide time-linear output, controls both data storage and the host computer interface, and provides measurement output in engineering units for immediate reading. The densitometer, when used in a scanning mode, is a precision reference instrument that provides chordal average density measurements over the cross section of a pipe under steady-state flow conditions. Results have shown an improvement over the original densitometer in reliability and repeatability of the system, and a factor-of-five improvement in accuracy

  11. Cardiac output measurement instruments controlled by microprocessors

    Spector, M.; Barritault, L.; Boeri, C.; Fauchet, M.; Gambini, D.; Vernejoul, P. de

    The nuclear medicine and biophysics laboratory of the Necker-Enfants malades University Hospital Centre has built a microprocessor-controlled cardiac flowmeter. The principle of the cardiac output measurement from a radiocardiogram is well established. After injection of a radioactive indicator upstream from the heart cavities, the dilution curve is obtained by the use of a gamma-ray precordial detector. This curve normally displays two peaks due to passage of the indicator into the right and left sides of the heart respectively. The output is then obtained from the Stewart-Hamilton principle once recirculation is eliminated. The graphic method used for the calculation, however, is long and tedious. The decreasing fraction of the dilution curve is projected in logarithmic space in order to eliminate recirculation by determining the mean straight line from which the decreasing exponential is obtained. The principle of the use of microprocessors is explained (electronics, logic) [fr
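
    The calculation described above is the Stewart-Hamilton relation: cardiac output equals the injected activity divided by the area under the first-pass dilution curve, with recirculation removed by fitting the downslope as a straight line in log space and extrapolating the exponential tail. A minimal Python sketch of that procedure on a synthetic curve; the sampling interval, fit window and curve shape are assumptions for illustration, not the laboratory's implementation.

      import numpy as np

      def cardiac_output(activity_injected: float, counts: np.ndarray, dt: float,
                         fit_start: int, fit_stop: int) -> float:
          """Stewart-Hamilton estimate: injected activity / area under the first-pass
          curve, the tail being extrapolated from a log-space straight-line fit."""
          t = np.arange(len(counts)) * dt
          # Linear fit of log(counts) on downslope samples that precede recirculation.
          slope, intercept = np.polyfit(t[fit_start:fit_stop],
                                        np.log(counts[fit_start:fit_stop]), 1)
          t_end = t[fit_stop - 1]
          # Measured area up to t_end (simple rectangle rule) plus the analytic tail.
          area = float(np.sum(counts[:fit_stop]) * dt)
          area += np.exp(intercept + slope * t_end) / (-slope)
          return activity_injected / area

      # Illustrative synthetic first-pass curve with a recirculation bump (arbitrary units).
      t = np.arange(0, 30, 0.5)
      curve = 100.0 * np.exp(-0.5 * ((t - 8) / 2.5) ** 2) + 30.0 * np.exp(-((t - 20) / 4) ** 2)
      print(cardiac_output(activity_injected=500.0, counts=curve, dt=0.5,
                           fit_start=22, fit_stop=30))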

  12. Microprocessor based data acquisition system for Moessbauer spectrometer

    Patwardhan, P.K.; Indurkar, V.S.

    1981-01-01

    A data acquisition system for Moessbauer spectrometers and other probability-distribution spectra is described. It combines the flexible analytical capability of a microcomputer with the speed of a hard-wired MCS unit that updates channel contents by DMA. Holbourn, Player and Woodhams have recently described a microprocessor-controlled Moessbauer spectrometer in which the microprocessor performs the task of updating channel contents, requiring about 60 microseconds in interrupt mode. This imposes restrictions on increasing the channel number and on increasing the velocity scan frequency in order to cover higher velocity ranges. The system described in this article performs data acquisition by faster direct memory access. It is a two-module system, (1) an MCS module and (2) a microcomputer module, arranged around common address, data and control buses. The microcomputer module has access to the system data during flyback periods and can be programmed to monitor the progress of the experiment and to carry out the various control operations needed during the experiment. The system firmware includes: (1) MONITOR (2) BLOCK-TRANSFER (3) DATA-SMOOTHING (4) DECIMAL-CONVERTER (5) MATH. The scope of this firmware is briefly described. (author)

  13. Hardware math for the 6502 microprocessor

    Kissel, R.; Currie, J.

    1985-01-01

    A floating-point arithmetic unit is described which is being used in the Ground Facility of Large Space Structures Control Verification (GF/LSSCV). The experiment uses two complete inertial measurement units and a set of three gimbal torquers in a closed loop to control the structural vibrations in a flexible test article (beam). A 6502 (8-bit) microprocessor controls four AMD 9511A floating-point arithmetic units to do all the computation in 20 milliseconds.

  14. Microprocessor-based stepping motor driver

    Halbig, J.K.; Klosterbuer, S.F.

    1979-09-01

    The Pion Generation for Medical Irradiations (PIGMI) program at the Los Alamos Scientific Laboratory requires a versatile stepping motor driver to do beam diagnostic measurements. A driver controlled by a microprocessor that can move eight stepping motors simultaneously was designed. The driver can monitor and respond to clockwise- and counterclockwise-limit switches, and it can monitor a 0- to 10-V dc position signal. The software controls start and stop ramping and maximum stepping rates. 2 figures, 1 table

  15. The specifications of a multichannel analyser using a microprocessor

    Pontes, E.W.

    The idea of a small nuclear data acquisition system (a stand-alone CAMAC system) used for spectroscopy is presented. The system is composed of an autonomous controller with a microprocessor, one fast programmable unit (1-2 microseconds per CAMAC instruction) and modules with general functions such as: CAMAC memory, an interface for video, an interface for an analog-to-digital converter and timing. (E.G.) [pt

  16. Some software algorithms for microprocessor ratemeters

    Savic, Z.

    1991-01-01

    After a review of the basic theoretical ratemeter problem and a general discussion of microprocessor ratemeters, a short insight into their hardware organization is given. Three software algorithms are described: two established ones, the quasi-exponential and the floating-mean algorithms, and a new weighted moving average algorithm. The equations for the statistical characterization of the new algorithm are given and an intercomparison is made. It is concluded that the new algorithm has statistical advantages over the old ones. (orig.)

  17. Some software algorithms for microprocessor ratemeters

    Savic, Z. (Military Technical Inst., Belgrade (Yugoslavia))

    1991-03-15

    After a review of the basic theoretical ratemeter problem and a general discussion of microprocessor ratemeters, a short insight into their hardware organization is given. Three software algorithms are described: two established ones, the quasi-exponential and the floating-mean algorithms, and a new weighted moving average algorithm. The equations for the statistical characterization of the new algorithm are given and an intercomparison is made. It is concluded that the new algorithm has statistical advantages over the old ones. (orig.).
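
    The two classes of ratemeter algorithm named above can each be stated in a few lines: an exponentially weighted (quasi-exponential) ratemeter updates its estimate with a fixed smoothing constant, while a weighted moving average ratemeter averages the count rates of the last N intervals with chosen weights. A minimal Python sketch of both, with illustrative parameters rather than the paper's; it is not the authors' implementation.

      from collections import deque

      def exponential_ratemeter(counts_per_interval, dt: float, tau: float):
          """Quasi-exponential ratemeter: rate += (c/dt - rate) * dt / tau."""
          rate = 0.0
          alpha = dt / tau
          for c in counts_per_interval:
              rate += alpha * (c / dt - rate)
              yield rate

      def weighted_moving_average_ratemeter(counts_per_interval, dt: float, weights):
          """Weighted moving average over the last len(weights) intervals."""
          window = deque(maxlen=len(weights))
          for c in counts_per_interval:
              window.append(c / dt)
              w = weights[-len(window):]      # newest sample gets the last weight
              yield sum(wi * xi for wi, xi in zip(w, window)) / sum(w)

      counts = [3, 5, 4, 6, 2, 5, 30, 28, 31]   # synthetic counts per 1 s interval
      print(list(exponential_ratemeter(counts, dt=1.0, tau=5.0))[-1])
      print(list(weighted_moving_average_ratemeter(counts, dt=1.0,
                                                   weights=[1, 2, 3, 4]))[-1])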

  18. Mass storage for microprocessor farms

    Areti, H.

    1990-01-01

    Experiments in high energy physics require high density and high speed mass storage. Mass storage is needed for data logging during the online data acquisition, data retrieval and storage during the event reconstruction and data manipulation during the physics analysis. This paper examines the storage and speed requirements at the first two stages of the experiments and suggests a possible starting point to deal with the problem. 3 refs., 3 figs

  19. High Performance Networks for High Impact Science

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  20. Microprocessor control unit of thyristor regulator of microhydroelectric power station ballast load

    Nomokonova, Yu; Bogdanov, E

    2014-01-01

    The operational principle of the ballast load of a microhydroelectric power station is presented. A comparative overview of mathematical modeling methods is given. From an analysis of the regulator's operating regimes, the ranges in which the thyristors work optimally are shown. The necessity of regulating the ballast load in a microhydroelectric power station with the help of the developed microprocessor control algorithm is shown

  1. The use of distributed microprocessors for control devices

    Lejon, J.C.

    1978-01-01

    The use of distributed individual microprocessors provided the basis for the development of the μZ system, which is a modular numerical control device which in its main part contains no elements whatever with multiple functions. With this system, total availability of control is achieved and the failure of any individual element causes loss of automatic control only over one actuator or over a small group of interdependent actuators. The human operator, who cannot be omitted even with an inherently safe control system, can operate the single faulty channel manually. The microprocessors have a free program format with which any algorithm within the limits of the memory size of the various cards can be performed. This program can be loaded either in random access memory (RAM) or in read-only memory (ROM). The configuration is made either by assembling software modules in a hard-copy dialogue without any knowledge of data processing being necessary, or from a program written in Fortran. If the user does not have a configurator he can use read-only memories supplied by the manufacturer, either in the standard form or in a requested design. The parameters are loaded by means of a portable microconsole whose keyboard and displays can be used for a hard-copy dialogue with the regulating cards. Manual control and indications can be carried out from three completely independent configurations which can be used separately or in parallel: individual station, multiple-function station or cathode colour console. (author)

  2. New On-board Microprocessors

    Weigand, R.

    Two new processor devices have been developed for use on board spacecraft. An 8-bit 8032 microcontroller targets typical control applications in instruments and sub-systems, or could be used as a main processor on small satellites, whereas the LEON 32-bit SPARC processor can be used for high-performance control and data processing tasks. The ADV80S32 is fully compliant to the Intel 80x1 architecture and instruction set, extended by additional peripherals, 512 bytes of on-chip RAM and a bootstrap PROM, which allows downloading the application software using the CCSDS PacketWire protocol. The memory controller provides a de-multiplexed address/data bus, and allows access to up to 16 MB of data and 8 MB of program RAM. The peripherals have been designed for the specific needs of a spacecraft, such as serial interfaces compatible with RS232, PacketWire and TTC-B-01, counters/timers for extended duration and a CRC calculation unit accelerating the CCSDS TM/TC protocol. The 0.5 um Atmel manufacturing technology (MG2RT) provides latch-up and total dose immunity; SEU fault immunity is implemented by using SEU hardened Flip-Flops and EDAC protection of internal and external memories. The maximum clock frequency of 20 MHz allows a processing power of 3 MIPS. Engineering samples are available. For SW development, various SW packages for the 8051 architecture are on the market. The LEON processor implements a 32-bit SPARC V8 architecture, including all the multiply and divide instructions, complemented by a floating-point unit (FPU). It includes several standard peripherals, such as timers/watchdog, interrupt controller, UARTs, parallel I/Os and a memory controller, allowing the use of 8-, 16- and 32-bit PROM, SRAM or memory mapped I/O. With on-chip separate instruction and data caches, almost one instruction per clock cycle can be reached in some applications. A 33-MHz 32-bit PCI master/target interface and a PCI arbiter allow operating the device in a plug-in card

  3. CAMAC based computer--computer communications via microprocessor data links

    Potter, J.M.; Machen, D.R.; Naivar, F.J.; Elkins, E.P.; Simmonds, D.D.

    1976-01-01

    Communications between the central control computer and remote, satellite data acquisition/control stations at The Clinton P. Anderson Meson Physics Facility (LAMPF) are presently accomplished through the use of CAMAC based Data Link Modules. With the advent of the microprocessor, a new philosophy for digital data communications has evolved. Data Link modules containing microprocessor controllers provide link management and communication network protocol through algorithms executed in the Data Link microprocessor

  4. SNOOP module CAMAC interface to the 168/E microprocessor

    Bernstein, D.; Carroll, J.T.; Mitnick, V.H.; Paffrath, L.; Parker, D.B.

    1979-10-01

    A pair of 168/E microprocessors will be used to meet the realtime computing requirements of the SLAC Hybrid Facility. A SNOOP module and 168/E Interface provide the link between the host computer and the microprocessors. By eavesdropping on normal CAMAC read operations, the SNOOP provides a direct data transfer from CAMAC to microprocessor memory. The host computer controls the processors using standard CAMAC programmed I/O to the SNOOP

  5. Microprocessor Controlled Maximum Power Point Tracker for Photovoltaic Application

    Jiya, J. D.; Tahirou, G.

    2002-01-01

    This paper presents a microprocessor controlled maximum power point tracker for a photovoltaic module. Input current and voltage are measured and multiplied within the microprocessor, which contains an algorithm to seek the maximum power point. The duty cycle of the DC-DC converter at which the maximum power occurs is obtained, noted and adjusted. The microprocessor constantly seeks to improve the obtained power by varying the duty cycle
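
    The seek-the-maximum behaviour described above is essentially a perturb-and-observe loop: measure V and I, compute P = V*I, and keep stepping the duty cycle in the direction that increased the power, reversing when the power falls. A minimal Python sketch of that loop; the step size, limits and the stand-in measurement functions are placeholders, not the paper's implementation.

      def perturb_and_observe(read_voltage, read_current, set_duty,
                              duty: float = 0.5, step: float = 0.01,
                              iterations: int = 100) -> float:
          """Track the maximum power point by perturbing the converter duty cycle."""
          last_power = read_voltage() * read_current()
          direction = +1
          for _ in range(iterations):
              duty = min(max(duty + direction * step, 0.05), 0.95)
              set_duty(duty)
              power = read_voltage() * read_current()
              if power < last_power:      # stepped past the peak: reverse direction
                  direction = -direction
              last_power = power
          return duty

      # Illustrative stand-in for a PV module plus converter: power peaks near duty = 0.62.
      _duty = [0.5]
      def _set(d): _duty[0] = d
      def _v(): return 20.0 * (1.0 - abs(_duty[0] - 0.62))
      def _i(): return 5.0 * (1.0 - abs(_duty[0] - 0.62))
      print(perturb_and_observe(_v, _i, _set))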

  6. LSI microprocessor circuit families based on integrated injection logic. Mikroprotsessornyye komplekty bis na osnove integral'noy inzhektsionnoy logiki

    Borisov, V.S.; Vlasov, F.S.; Kaloshkin, E.P.; Serzhanovich, D.S.; Sukhoparov, A.I.

    1984-01-01

    Progress in developing microprocessor computer hardware is based on progress and improvement in systems engineering, circuit engineering and manufacturing-process methods for the design and development of large-scale integrated (LSI) circuits. The development of these methods, with widespread use of computer-aided design (CAD) systems, has allowed the development of 4- and 8-bit microprocessor families (MPK) of LSI circuits based on integrated injection logic (I2L), characterized by relatively high speed and low dissipated power. The emergence of LSI and VLSI microprocessor circuits required computer system developers to make changes to the theory and practice of computer system design. Progress in technology upset the established relation between hardware and software component development costs in systems being designed. A characteristic feature of using LSI circuits is also the necessity of building devices from standard modules of large functional complexity. The existing approaches to forming compositions of LSI microprocessor families allow the system developer to choose a particular design methodology, proceeding from the efficiency function and field of application of the system being designed. The efficiency of using microprocessor families is largely governed by the user's in-depth understanding of the structure of LSI microprocessor family circuits and of the features of using them to implement the broad class of computer devices and modules being developed. This book is devoted to solving this problem.

  7. RavenDB high performance

    Ritchie, Brian

    2013-01-01

    RavenDB High Performance is a comprehensive yet concise tutorial that developers can use to... This book is for developers & software architects who are designing systems in order to achieve high performance right from the start. A basic understanding of RavenDB is recommended, but not required. While the book focuses on advanced topics, it does not assume that the reader has a great deal of prior knowledge of working with RavenDB.

  8. Reliability of microprocessor-based relay protection devices: Myths and reality

    Gurevich Vladimir

    2009-01-01

    The article examines four basic theses about the ostensibly extremely high reliability of microprocessor-based relay protection (MP) touted by supporters of MP. A detailed analysis based on many references shows that these theses rest on widespread myths, and that the actual reliability of MP is lower than that of electromechanical and electronic protective relays built from discrete components.

  9. High-Performance Operating Systems

    Sharp, Robin

    1999-01-01

    Notes prepared for the DTU course 49421 "High Performance Operating Systems". The notes deal with quantitative and qualitative techniques for use in the design and evaluation of operating systems in computer systems for which performance is an important parameter, such as real-time applications, communication systems and multimedia systems.

  10. Thermal interface pastes nanostructured for high performance

    Lin, Chuangang

    Thermal interface materials in the form of pastes are needed to improve thermal contacts, such as that between a microprocessor and a heat sink of a computer. High-performance and low-cost thermal pastes have been developed in this dissertation by using polyol esters as the vehicle and various nanoscale solid components. The proportion of a solid component needs to be optimized, as an excessive amount degrades the performance, due to the increase in the bond line thickness. The optimum solid volume fraction tends to be lower when the mating surfaces are smoother, and higher when the thermal conductivity is higher. Both a low bond line thickness and a high thermal conductivity help the performance. When the surfaces are smooth, a low bond line thickness can be even more important than a high thermal conductivity, as shown by the outstanding performance of the nanoclay paste of low thermal conductivity in the smooth case (0.009 μm), with the bond line thickness less than 1 μm, as enabled by low storage modulus G', low loss modulus G" and high tan delta. However, for rough surfaces, the thermal conductivity is important. The rheology affects the bond line thickness, but it does not correlate well with the performance. This study found that the structure of carbon black is an important parameter that governs the effectiveness of a carbon black for use in a thermal paste. By using a carbon black with a lower structure (i.e., a lower DBP value), a thermal paste that is more effective than the previously reported carbon black paste was obtained. Graphite nanoplatelet (GNP) was found to be comparable in effectiveness to carbon black (CB) pastes for rough surfaces, but it is less effective for smooth surfaces. At the same filler volume fraction, GNP gives higher thermal conductivity than carbon black paste. At the same pressure, GNP gives higher bond line thickness than CB (Tokai or Cabot). The effectiveness of GNP is limited, due to the high bond line thickness. A

  11. Supply system with microprocessor control for electron gun

    Duplin, N.I.; Sergeev, N.N.

    1988-01-01

    A precision supply system for the electron gun used in an Auger spectrometer is described. The supply system consists of a control part and a high-voltage part, made as separate units. The high-voltage supply unit includes a system supply module, a filament module to supply the electron gun cathode and six high-voltage modules to supply the accelerating, modulating and three focusing electrodes of the gun. The high-voltage modules have the following characteristics: output voltage U = (100-1000) V, stability 5x10^-5 x U, pulsation amplitude 10^-5 x U, and filament current range J = (0-5) A with stability 10^-4 x J. The control unit, which includes a microprocessor, timer and storage devices, forms the control voltages for all modules and regulates the voltage and current of the filament at the electrodes

  12. Microprocessor-controlled portable neutron spectrometer

    Hunt, G.F.; Kaifer, R.C.; Slaughter, D.R.; Strout, R.E. II; Rueppel, D.W.

    1979-01-01

    A neutron spectrometer that acquires and unfolds data in the field has been developed for use in the energy range from 1 to 20 MeV. The system includes an NE213 organic scintillation detector, automatic gain stabilization, automatically stabilized pulse-shape discrimination, an LSI-11 microprocessor for control and data reduction, and a multichannel analyzer for data acquisition. The system, with the exception of the multichannel analyzer, is mounted in a suitcase 47 by 66 by 23.5 cm. The mass is 23.5 kg

  13. System architecture for microprocessor based protection system

    Gallagher, J.M. Jr.; Lilly, G.M.

    1976-01-01

    This paper discusses the architectural design features to be employed by Westinghouse in the application of distributed digital processing techniques to the protection system. While the title of the paper makes specific reference to microprocessors, this is only one (and the newest) of the building blocks which constitute a distributed digital processing system. The actual system structure (as realized through utilization of the various building blocks) is established through considerations of reliability, licensability, and cost. It is the intent of the paper to address these considerations as they relate to the architectural design features. (orig.) [de

  14. Microprocessor-based accelerating power level detector

    Nagpal, M.; Zarecki, W.; Albrecht, J.C.

    1994-01-01

    An accelerating power level detector was built using state-of-the-art microprocessor technology at Powertech Labs Inc. The detector will monitor the real power flowing in two 300 kV transmission lines out of Kemano Hydroelectric Generating Station and will detect any sudden loss of load due to a fault on either line under certain pre-selected power flow conditions. This paper discusses the criteria of operation for the detector and its implementation details, including digital processing, hardware, and software.
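
    The detection logic described above reduces to: continuously measure the real power on the monitored lines, and declare a sudden loss of load when the power drops by more than a set fraction within a short window while the pre-fault flow exceeded an arming threshold. A minimal Python sketch of that decision rule; the arming level, drop fraction and window length are illustrative assumptions, not Powertech's settings.

      from collections import deque

      def loss_of_load_detector(power_samples_mw, arm_level_mw: float = 400.0,
                                drop_fraction: float = 0.3, window: int = 5):
          """Yield True when the power drops by more than drop_fraction of the recent
          value within `window` samples and the pre-fault flow exceeded arm_level_mw."""
          history = deque(maxlen=window)
          for p in power_samples_mw:
              if history:
                  ref = max(history)
                  yield ref > arm_level_mw and (ref - p) > drop_fraction * ref
              else:
                  yield False
              history.append(p)

      samples = [450, 452, 449, 448, 300, 295]     # synthetic MW readings
      print(list(loss_of_load_detector(samples)))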

  15. Microprocessor architectures RISC, CISC and DSP

    Heath, Steve

    1995-01-01

    'Why are there all these different processor architectures and what do they all mean? Which processor will I use? How should I choose it?' Given the task of selecting an architecture or design approach, both engineers and managers require a knowledge of the whole system and an explanation of the design tradeoffs and their effects. This is information that rarely appears in data sheets or user manuals. This book fills that knowledge gap.Section 1 provides a primer and history of the three basic microprocessor architectures. Section 2 describes the ways in which the architectures react with the

  16. Microprocessor-controlled, programmable ramp voltage generator

    Hopwood, J.

    1978-11-01

    A special-purpose voltage generator has been developed for driving the quadrupole mass filter of a residual gas analyzer. The generator is microprocessor-controlled with desired ramping parameters programmed by setting front-panel digital thumb switches. The start voltage, stop voltage, and time of each excursion are selectable. A maximum of five start-stop levels may be pre-selected for each program. The ramp voltage is 0 to 10 volts with sweep times from 0.1 to 999.99 seconds
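
    The generator's behaviour, up to five start/stop voltage levels each swept over a programmed time, amounts to turning a table of segments into a sequence of DAC set points at a fixed update period. A minimal Python sketch of that segment interpolation; the update period, the example program and the print-out stand in for the real DAC write and are illustrative only.

      def ramp_setpoints(segments, update_period_s: float = 0.01):
          """Yield (time, voltage) pairs for a list of (start_v, stop_v, duration_s)
          segments, as a programmable ramp generator might step its output."""
          t = 0.0
          for start_v, stop_v, duration_s in segments:
              steps = max(1, int(duration_s / update_period_s))
              for k in range(steps + 1):
                  yield t + k * update_period_s, start_v + (stop_v - start_v) * k / steps
              t += duration_s

      # Illustrative program: 0 -> 10 V in 2 s, hold at 10 V for 1 s, 10 -> 0 V in 5 s.
      program = [(0.0, 10.0, 2.0), (10.0, 10.0, 1.0), (10.0, 0.0, 5.0)]
      points = list(ramp_setpoints(program))
      print(points[0], points[-1])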

  17. Identifying High Performance ERP Projects

    Stensrud, Erik; Myrtveit, Ingunn

    2002-01-01

    Learning from high performance projects is crucial for software process improvement. Therefore, we need to identify outstanding projects that may serve as role models. It is common to measure productivity as an indicator of performance. It is vital that productivity measurements deal correctly with variable returns to scale and multivariate data. Software projects generally exhibit variable returns to scale, and the output from ERP projects is multivariate. We propose to use Data Envelopment ...

  18. INL High Performance Building Strategy

    Jennifer D. Morton

    2010-02-01

    High performance buildings, also known as sustainable buildings and green buildings, are resource efficient structures that minimize the impact on the environment by using less energy and water, reduce solid waste and pollutants, and limit the depletion of natural resources while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009], EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007], and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design

  19. A new design approach for control circuits of pipelined single-flux-quantum microprocessors

    Yamanashi, Y; Akimoto, A; Yoshikawa, N; Tanaka, M; Kawamoto, T; Kamiya, Y; Fujimaki, A; Terai, H; Yorozu, S

    2006-01-01

    A novel method of design for controllers of pipelined microprocessors using single-flux-quantum (SFQ) logic has been proposed. The proposed design approach is based on one-hot encoding and is very suitable for designing a finite state machine using SFQ logic circuits, where each internal state of the microprocessor is represented by a flip-flop. In this approach, decoding of the internal state can be performed instantaneously, in contrast to the case in the conventional method using a binary state register. Moreover, pipelining is effectively implemented without increasing the circuit size because no pipeline registers are required in the one-hot encoding. By using this method, we have designed a controller for our new SFQ microprocessors, which employs pipelining. The number of Josephson junctions of the newly designed controller is 1067, while the previous version without pipelining contains 1721 Josephson junctions. These results indicate that the proposed design approach is very effective for pipelined SFQ microprocessors. We have implemented a new controller using the NEC 2.5 kA/cm² Nb standard process and confirmed its correct operation experimentally
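
    The key idea above, one-hot state encoding, can be illustrated outside SFQ hardware: each state owns one flip-flop (one bit), so testing the current state is a single-bit check and advancing is a shift, whereas a binary-encoded register must be compared against a code. A small Python sketch contrasting the two for a generic 4-state pipeline controller; this is not the authors' design, just the encoding idea.

      STATES = ["FETCH", "DECODE", "EXECUTE", "WRITEBACK"]

      # Binary encoding: the state register holds an index that must be decoded.
      def is_state_binary(state_reg: int, name: str) -> bool:
          return state_reg == STATES.index(name)

      # One-hot encoding: one bit per state; testing a state is a single-bit check.
      ONE_HOT = {name: 1 << i for i, name in enumerate(STATES)}

      def is_state_one_hot(state_reg: int, name: str) -> bool:
          return bool(state_reg & ONE_HOT[name])

      def next_state_one_hot(state_reg: int) -> int:
          """Advance the one-hot register: a rotate, with no decode/encode step."""
          n = len(STATES)
          return ((state_reg << 1) | (state_reg >> (n - 1))) & ((1 << n) - 1)

      reg = ONE_HOT["FETCH"]
      for _ in range(5):
          print([s for s in STATES if is_state_one_hot(reg, s)][0])
          reg = next_state_one_hot(reg)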

  20. Software tools for microprocessor based systems

    Halatsis, C.

    1981-01-01

    After a short review of the hardware and/or software tools for the development of single-chip, fixed instruction set microprocessor-based systems we focus on the software tools for designing systems based on microprogrammed bit-sliced microprocessors. Emphasis is placed on meta-microassemblers and simulation facilities at the register-transfer level and architecture level. We review available meta-microassemblers, giving their most important features, advantages and disadvantages. We also consider extensions to higher-level microprogramming languages and associated systems specifically developed for bit-slices. In the area of simulation facilities we first discuss the simulation objectives and the criteria for choosing the right simulation language. We concentrate on simulation facilities already used in bit-slice projects and discuss the experience gained. We conclude by describing the way the Signetics meta-microassembler and the ISPS simulation tool have been employed in the design of a fast microprogrammed machine, called MICE, made out of ECL bit-slices. (orig.)

  1. Technology transfer of military space microprocessor developments

    Gorden, C.; King, D.; Byington, L.; Lanza, D.

    1999-01-01

    Over the past 13 years the Air Force Research Laboratory (AFRL) has led the development of microprocessors and computers for USAF space and strategic missile applications. As a result of these Air Force development programs, advanced computer technology is available for use by civil and commercial space customers as well. The Generic VHSIC Spaceborne Computer (GVSC) program began in 1985 at AFRL to remedy a deficiency in the availability of space-qualified data and control processors. GVSC developed a radiation hardened multi-chip version of the 16-bit, Mil-Std 1750A microprocessor. The follow-on to GVSC, the Advanced Spaceborne Computer Module (ASCM) program, was initiated by AFRL to establish two industrial sources for complete, radiation-hardened 16-bit and 32-bit computers and microelectronic components. Development of the Control Processor Module (CPM), the first of two ASCM contract phases, concluded in 1994 with the availability of two sources for space-qualified, 16-bit Mil-Std-1750A computers, cards, multi-chip modules, and integrated circuits. The second phase of the program, the Advanced Technology Insertion Module (ATIM), was completed in December 1997. ATIM developed two single board computers based on 32-bit reduced instruction set computer (RISC) processors. GVSC, CPM, and ATIM technologies are flying or baselined into the majority of today's DoD, NASA, and commercial satellite systems.

  2. Microprocessor-based integrated LMFBR core surveillance. Pt. 2

    Elies, V.

    1985-12-01

    This report is the result of the KfK part of a joint study of KfK and INTERATOM. The aim of this study is to explore the advantages of microprocessors and microelectronics for a more sophisticated core surveillance, which is based on the integration of separate surveillance techniques. After a description of the experimental results gained with the different surveillance techniques so far, it is shown which kinds of correlation can be done using the evaluation results obtained from the individual surveillance systems. The main part of this report contains the systems analysis of a microcomputer-based system integrating different surveillance methods. After an analysis of the hardware requirements, a hardware structure for the integrated system is proposed. The software structure is then described for the subsystem performing the different surveillance algorithms as well as for the system which does the correlation, thus deriving additional information from the individual results. (orig.) [de

  3. High performance fuel technology development

    Koon, Yang Hyun; Kim, Keon Sik; Park, Jeong Yong; Yang, Yong Sik; In, Wang Kee; Kim, Hyung Kyu [KAERI, Daejeon (Korea, Republic of)]

    2012-01-15

    o Development of High Plasticity and Annular Pellet - Development of strong candidates of ultra high burn-up fuel pellets for a PCI remedy - Development of fabrication technology of annular fuel pellet o Development of High Performance Cladding Materials - Irradiation test of HANA claddings in Halden research reactor and the evaluation of the in-pile performance - Development of the final candidates for the next generation cladding materials. - Development of the manufacturing technology for the dual-cooled fuel cladding tubes. o Irradiated Fuel Performance Evaluation Technology Development - Development of performance analysis code system for the dual-cooled fuel - Development of fuel performance-proving technology o Feasibility Studies on Dual-Cooled Annular Fuel Core - Analysis on the property of a reactor core with dual-cooled fuel - Feasibility evaluation on the dual-cooled fuel core o Development of Design Technology for Dual-Cooled Fuel Structure - Definition of technical issues and invention of concept for dual-cooled fuel structure - Basic design and development of main structure components for dual-cooled fuel - Basic design of a dual-cooled fuel rod.

  4. High Performance Bulk Thermoelectric Materials

    Ren, Zhifeng [Boston College, Chestnut Hill, MA (United States)]

    2013-03-31

    Over 13-plus years, we have carried out research on the electron pairing symmetry of superconductors, the growth and field-emission properties of carbon nanotubes and semiconducting nanowires, high performance thermoelectric materials and other interesting materials. As a result of this research, we have published 104 papers and have educated six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  5. High performance in software development

    CERN. Geneva; Haapio, Petri; Liukkonen, Juha-Matti

    2015-01-01

    What are the ingredients of high-performing software? Software development, especially for large high-performance systems, is one of the most complex tasks mankind has ever attempted. Technological change leads to huge opportunities but challenges our old ways of working. Processing large data sets, possibly in real time or with other tight computational constraints, requires an efficient solution architecture. Efficiency requirements span from the distributed storage and large-scale organization of computation and data down to the lowest level of processor and data bus behavior. Integrating performance behavior over these levels is especially important when the computation is resource-bounded, as it is in numerics: physical simulation, machine learning, estimation of statistical models, etc. For example, memory locality and utilization of vector processing are essential for harnessing the computing power of modern processor architectures due to the deep memory hierarchies of modern general-purpose computers. As a r...

  6. Neo4j high performance

    Raj, Sonal

    2015-01-01

    If you are a professional or enthusiast who has a basic understanding of graphs or has basic knowledge of Neo4j operations, this is the book for you. Although it is targeted at an advanced user base, this book can be used by beginners as it touches upon the basics. So, if you are passionate about taming complex data with the help of graphs and building high performance applications, you will be able to get valuable insights from this book.

  7. Microprocessor system for temperature regulation and stabilization

    Nguyen Nhi Dien; Rodionov, K.G.

    1989-01-01

    A microprocessor-based system for regulating and stabilizing the temperature of an external object is described. The system uses a direct-current amplifier operating on the modulator-demodulator principle, with a selectable overall gain of 100, 1000 or 2000 and a maximum output signal of ±10 V. The power amplifier is thyristor-based, operating from a 220 V, 50 Hz line, with an output power of 0-2 kVA. The microcontroller has a remote display terminal, 8 data inputs and one data output; input and output voltages are ±(0-10) V. The preselectable stabilization time ranges from 1 s to 18 h. The program algorithm is given. 5 figs.; 1 tab
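The abstract states only that the program algorithm is given; as a rough illustration of the kind of loop such a regulator runs, here is a minimal proportional-integral (PI) stabilization sketch in C. The gains, the clamping range and the read_temperature()/set_thyristor_power() interfaces are hypothetical placeholders, not details from the paper.

```c
/* Minimal PI temperature-stabilization loop (illustrative sketch only). */

extern double read_temperature(void);          /* sensor reading, degrees C      */
extern void   set_thyristor_power(double p);   /* 0.0 .. 1.0 of the 2 kVA output */
extern void   wait_seconds(double s);          /* platform-specific delay        */

void regulate(double setpoint_c, double dt_s)
{
    const double kp = 0.05, ki = 0.002;        /* hypothetical gains */
    double integral = 0.0;

    for (;;) {
        double error = setpoint_c - read_temperature();
        integral += error * dt_s;

        double power = kp * error + ki * integral;
        if (power < 0.0) power = 0.0;          /* the heater cannot cool */
        if (power > 1.0) power = 1.0;          /* clamp to full output   */

        set_thyristor_power(power);
        wait_seconds(dt_s);                    /* fixed control period   */
    }
}
```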

  8. Low cost design of microprocessor EDAC circuit

    Hao Li; Yu Lixin; Peng Heping; Zhuang Wei

    2015-01-01

    An optimization method for error detection and correction (EDAC) circuit design is proposed. The method involves selecting or constructing EDAC codes with low hardware cost, scheduling their implementation on a 2-input XOR gate structure, and applying two measures for reducing hardware cells, which together effectively reduce the delay penalty and area cost of the EDAC circuit. A 32-bit EDAC circuit was implemented as a prototype in a 180 nm process, and its delay penalty and area cost were evaluated. The results show that, for EDAC codes with the same detection and correction capability, the time penalty and area cost of the circuitry depend on the choice of parity-check matrix and on the hardware implementation. The method can serve as a guide for low-cost radiation-hardened microprocessor EDAC circuit design and for more advanced technologies. (paper)
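For readers unfamiliar with how parity-check matrices map onto 2-input XOR gates, the sketch below computes the check bits of a standard Hamming(12,8) single-error-correcting code in software; the matrix is the textbook one, not the optimized low-cost code developed in the paper, and adding one overall parity bit would extend it to SEC-DED.

```c
#include <stdint.h>

/* Each row of H selects the data bits XOR-ed into one check bit; in
 * hardware each row becomes a tree of 2-input XOR gates.            */
static const uint8_t H[4] = {
    0x5B,   /* check 0 covers data bits 0,1,3,4,6 */
    0x6D,   /* check 1 covers data bits 0,2,3,5,6 */
    0x8E,   /* check 2 covers data bits 1,2,3,7   */
    0xF0    /* check 3 covers data bits 4,5,6,7   */
};

uint8_t edac_check_bits(uint8_t data)
{
    uint8_t check = 0;
    for (int row = 0; row < 4; ++row) {
        uint8_t bits = data & H[row];
        uint8_t parity = 0;
        while (bits) {              /* fold the selected bits with XOR */
            parity ^= bits & 1u;
            bits >>= 1;
        }
        check |= (uint8_t)(parity << row);
    }
    return check;
}

/* On read-back: a zero syndrome means no detected error; a non-zero
 * syndrome identifies the single flipped bit for correction.        */
uint8_t edac_syndrome(uint8_t data, uint8_t stored_check)
{
    return (uint8_t)(edac_check_bits(data) ^ stored_check);
}
```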

  9. High performance MEAs. Final report

    NONE

    2012-07-15

    The aim of the present project is, through modeling and material and process development, to obtain significantly better MEA performance and to attain the technology necessary to fabricate stable catalyst materials, thereby providing a viable alternative to the current industry standard. This project primarily focused on the development and characterization of novel catalyst materials for use in high temperature (HT) and low temperature (LT) proton-exchange membrane fuel cells (PEMFC). New catalysts are needed in order to improve fuel cell performance and reduce the cost of fuel cell systems. Additional tasks were the development of new, durable sealing materials for use in PEMFC as well as computational modeling of heat and mass transfer processes, predominantly in LT PEMFC, in order to improve fundamental understanding of the multi-phase flow issues and liquid water management in fuel cells. An improved fundamental understanding of these processes will lead to improved fuel cell performance and hence also to a reduced catalyst loading for the same performance. The consortium has obtained significant research results and progress for new catalyst materials and substrates with promising enhanced performance, fabricated using novel methods. However, the new materials and synthesis methods explored are still in the early research and development phase. The project has contributed to improved MEA performance using less precious metal, demonstrated for LT-PEM, DMFC and HT-PEM applications. The novel approach and the progress of the modelling activities have been highly satisfactory, with numerous conference and journal publications along with two potential inventions concerning the catalyst layer. (LN)

  10. A microprocessor based picture analysis system for automatic track measurements

    Heinrich, W.; Trakowski, W.; Beer, J.; Schucht, R.

    1982-01-01

    In the last few years picture analysis has become a powerful technique for measuring nuclear tracks in plastic detectors. Rather expensive commercial systems are available for this purpose. Two inexpensive microprocessor-based systems with different resolutions were developed. The video pictures of particles seen through a microscope are digitized in real time and the picture analysis is done in software. The microscopes are equipped with stages driven by stepping motors, which are controlled by separate microprocessors. A PDP 11/03 supervises the operation of all microprocessors and stores the measured data on its mass storage devices. (author)
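To make the "picture analysis is done in software" step concrete, here is a minimal C sketch of the first stage such a system performs on a digitized frame: binarizing against a threshold and counting candidate track pixels. The frame layout, the threshold convention (etched tracks darker than background) and the function name are illustrative assumptions, not details of the described systems, which go on to group pixels into individual tracks and measure them.

```c
#include <stddef.h>
#include <stdint.h>

/* Count pixels darker than the threshold in an 8-bit grey-scale frame
 * stored row by row; these are the candidate nuclear-track pixels.    */
size_t count_track_pixels(const uint8_t *frame, size_t width, size_t height,
                          uint8_t threshold)
{
    size_t count = 0;
    for (size_t y = 0; y < height; ++y)
        for (size_t x = 0; x < width; ++x)
            if (frame[y * width + x] < threshold)
                ++count;
    return count;
}
```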

  11. The engineering of microprocessor systems guidelines on system development

    1979-01-01

    The Engineering of Microprocessor Systems: Guidelines on System Development provides economical and technical guidance for use when incorporating microprocessors in products or production processes and assesses the alternatives that are available. This volume is part of Project 0251 undertaken by The Electrical Research Association, which aims to give managers and development engineers advice and comment on the development process and the hardware and software needed to support the engineering of microprocessor systems. The results of Phase 1 of the five-phase project are contained in this fir

  12. A light-powered sub-threshold microprocessor

    Liu Ming; Chen Hong; Zhang Chun; Li Changmeng; Wang Zhihua, E-mail: lium02@mails.tsinghua.edu.cn [Institute of Microelectronics, Tsinghua University, Beijing 100084 (China)

    2010-11-15

    This paper presents an 8-bit sub-threshold microprocessor that can be powered by an integrated photosensitive diode. With a custom-designed sub-threshold standard cell library and a 1 kbit sub-threshold SRAM design, a leakage power of 58 nW, a dynamic power of 385 nW at 165 kHz, an EDP of 13 pJ/inst and an operating voltage of 350 mV are achieved. Under illumination of about 150 klux, the microprocessor can run at up to 500 kHz. The microprocessor can be used for wireless-sensor-network nodes.
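A quick, hedged consistency check of these figures (our own back-of-the-envelope arithmetic, not taken from the paper): the dynamic energy per clock cycle is

\[ E_{\mathrm{cycle}} = \frac{P_{\mathrm{dyn}}}{f} = \frac{385\ \mathrm{nW}}{165\ \mathrm{kHz}} \approx 2.3\ \mathrm{pJ}, \]

so the quoted 13 pJ/inst is consistent with an instruction taking roughly five clock cycles once the 58 nW leakage is included (about 2.7 pJ per cycle in total).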

  13. High Performance Proactive Digital Forensics

    Alharbi, Soltan; Traore, Issa; Moa, Belaid; Weber-Jahnke, Jens

    2012-01-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009) and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, the next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our earlier work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events and continuously do so (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.

  14. Front-end data processing using the bit-sliced microprocessor

    Machen, D.R.

    1979-01-01

    A state-of-the-art computing device, based upon the high-speed bit-sliced microprocessor, was developed into hardware for front-end data processing in both control and experiment applications at the Los Alamos Scientific Laboratory. The CAMAC Instrumentation Standard provides the framework for the high-speed hardware, allowing data acquisition and processing to take place at the data source in a CAMAC crate. 5 figures

  15. Recent applications of microprocessor-based instruments in nuclear power stations

    Cash, N.R.; Dennis, U.E.

    1988-01-01

    The incorporation of microprocessors in the design of nuclear power plant instrumentation has led to levels of measurement and control not available previously. In addition to the expected expansion of functional (system) capability, numerous desirable features are now possible. The added ability to both self-calibrate and perform compensation algorithms has led to dramatic improvements in accuracies, response times, and noise rejection. Automated performance checking and self-testing simplify troubleshooting and required periodic surveillance. Alphanumeric displays allow both menu-driven operation and user prompting, which, in turn, contribute to mistake avoidance. New features of these microprocessor-based instruments are of specific benefit in nuclear power reactors, where safety is of prime concern. Greater reliability and accuracy can be provided. Shortened calibration, surveillance, and repair times reduce the exposure to unnecessary challenges of the plant's protection systems that can arise from spurious noise signals

  16. A microprocessor-controlled assay for the estimation of human placental lactogen

    Adam, T.; Roulston, J.E.; Bagshawe, K.D.

    1979-01-01

    A radioimmunoassay for human placental lactogen (HPL) is described using the KEMTEK 3000, which is a modular radioimmunoassay apparatus controlled by a microprocessor. Operation of the KEMTEK 3000 is largely automatic and it requires minimal intervention from the operator. It is capable of 300 reactions per hour so that a large number of estimations can readily be performed. HPL was assayed by a double antibody method on serum samples from pregnant women and patients with trophoblastic tumours. (Auth.)

  17. Microprocessor protection devices: The present and the future

    Gurevich Vladimir

    2008-01-01

    The paper analyses the basic design shortcomings of present-day microprocessor-based protective devices (MBR) and offers the basic principles for creating new MBR that can be used in newly constructed devices.

  18. A Fault-tolerant RISC Microprocessor for Spacecraft Applications

    Timoc, Constantin; Benz, Harry

    1990-01-01

    Viewgraphs on a fault-tolerant RISC microprocessor for spacecraft applications are presented. Topics covered include: reduced instruction set computer; fault tolerant registers; fault tolerant ALU; and double rail CMOS logic.

  19. High performance light water reactor

    Squarer, D.; Schulenberg, T.; Struwe, D.; Oka, Y.; Bittermann, D.; Aksan, N.; Maraczy, C.; Kyrki-Rajamaeki, R.; Souyri, A.; Dumaz, P.

    2003-01-01

    The objective of the high performance light water reactor (HPLWR) project is to assess the merit and economic feasibility of a high efficiency LWR operating in the thermodynamically supercritical regime. An efficiency of approximately 44% is expected. To accomplish this objective, a highly qualified team of European research institutes and industrial partners, together with the University of Tokyo, is assessing the major issues pertaining to a new reactor concept under the co-sponsorship of the European Commission. The assessment has emphasized the recent advancement achieved in this area by Japan. Additionally, it accounts for advanced European reactor design requirements, recent improvements, practical design aspects, availability of plant components and the availability of high temperature materials. The final objective of this project is to reach a conclusion on the potential of the HPLWR to help sustain the nuclear option, by supplying competitively priced electricity, as well as to continue the nuclear competence in LWR technology. The following is a brief summary of the main project achievements:
    - A state-of-the-art review of supercritical water-cooled reactors has been performed for the HPLWR project.
    - Extensive studies have been performed in the last 10 years by the University of Tokyo. Therefore, a 'reference design' developed by the University of Tokyo was selected in order to assess the available technological tools (i.e. computer codes, analyses, advanced materials, water chemistry, etc.). Design data and results of the analysis were supplied by the University of Tokyo.
    - A benchmark problem based on the 'reference design' was defined for neutronics calculations, and several partners of the HPLWR project carried out independent analyses. The results of these analyses, which in addition help to 'calibrate' the codes, have guided the assessment of the core and the design of an improved HPLWR fuel assembly.
    - Preliminary selection was made for the HPLWR scale

  20. An SEU rate prediction method for microprocessors of space applications

    Gao Jie; Li Qiang

    2012-01-01

    In this article, the relationship between the static SEU (Single Event Upset) rate and the dynamic SEU rate in microprocessors for satellites is studied using the process duty cycle concept and a fault injection technique. The results are compared with in-orbit flight monitoring data. They show that the dynamic SEU rate obtained from the process duty cycle gives a reasonable estimate of the in-orbit SEU rate of a microprocessor, and that fault injection is a workable method for estimating the SEU rate. (authors)
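The duty-cycle approach summarized above is commonly written as scaling the static upset rate of each on-chip resource by the fraction of time the application is actually sensitive to it (a generic formulation, not necessarily the authors' exact expression):

\[ R_{\mathrm{dyn}} \;\approx\; \sum_i D_i \, R_{\mathrm{static},i}, \]

where \(R_{\mathrm{static},i}\) is the static SEU rate of resource \(i\) (registers, cache, etc.) and \(D_i\) is its process duty cycle, i.e. the fraction of time during which an upset in that resource would actually affect the running program.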

  1. Development of high performance cladding

    Kiuchi, Kiyoshi

    2003-01-01

    Development of superior next-generation light water reactors is required, from the general viewpoints of improved safety, economics, reduced radioactive waste and effective utilization of plutonium, by around 2030, when conventional reactor plants are due for replacement. At the Japan Atomic Energy Research Institute, work is carried out on improving stainless steel cladding for conventional high burn-up reactors to more than 100 GWd/t, on developing manufacturing technology for the reduced-moderation light water reactor (RMWR) with a breeding ratio beyond 1.0, and on studying water-material interactions in the supercritical-pressure water-cooled reactor. A stable austenitic stainless steel has been selected for the fuel element cladding of the advanced boiling water reactor (ABWR); austenitic stainless steel offers superior irradiation resistance, corrosion resistance and mechanical strength. A hard neutron spectrum, with energies above 0.1 MeV, occurs in the core of the reduced-moderation light water reactor, as in the liquid-metal fast breeder reactor (LMFBR). High performance cladding for RMWR fuel elements must likewise provide irradiation resistance, corrosion resistance and mechanical strength. Slow strain rate tests (SSRT) of SUS 304 and SUS 316 are carried out to study stress corrosion cracking (SCC), and irradiation tests in an LMFBR are intended to provide data on the irradiation damage of the cladding materials. (M. Suetake)

  2. Leak detection system with distributed microprocessor in the primary containment vessel

    Inahara, K.; Yoshioka, K.; Tomizawa, T.

    1980-01-01

    In response to demands for improved safety monitoring, lower public radiation exposure and higher plant availability, measuring and control systems in nuclear power plants have undergone many improvements. Leak detection systems are likewise required to give earlier warning, better accuracy and continuous monitoring. This paper describes a drywell sump leakage detection system utilizing a distributed microprocessor, a successful application owing to its versatile functions and ease of installation. The microprocessor performs various functions such as computation of the rate of level change, conversion to leakage flow rate, initiation of alarms, and sump pump control. The system has already been applied to three operating BWR plants, demonstrating its effectiveness. (auth)
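As an illustration of the "rate of level change" and "conversion to leakage flow rate" functions mentioned above, the C sketch below turns two successive sump level readings into an inflow rate and an alarm flag. The sump area, sampling period and alarm limit are invented placeholders, not plant parameters.

```c
/* Illustrative drywell-sump leak-rate computation (all constants are
 * hypothetical; a real system would also handle pump-down cycles).   */

#define SUMP_AREA_M2        2.0     /* free surface area of the sump    */
#define SAMPLE_PERIOD_S     60.0    /* level sampled once per minute    */
#define LEAK_LIMIT_M3_PER_H 0.2     /* unidentified-leakage alarm limit */

/* Levels in metres; returns the inferred inflow in m^3/h and sets *alarm
 * when the pump is idle and the inflow exceeds the limit.               */
double leak_rate_m3_per_h(double level_now, double level_prev,
                          int pump_running, int *alarm)
{
    double dlevel = level_now - level_prev;               /* m per sample */
    double inflow = dlevel * SUMP_AREA_M2 *
                    (3600.0 / SAMPLE_PERIOD_S);           /* m^3 per hour */

    *alarm = (!pump_running && inflow > LEAK_LIMIT_M3_PER_H);
    return inflow;
}
```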

  3. INVESTIGATION OF MICROPROCESSOR CURRENT PROTECTION LINES WITH IMPROVED INDICES OF TECHNICAL PERFECTION

    E. V. Buloichyk

    2014-01-01

    The technical performance of microprocessor-based current protection for distribution network lines is improved by adding asymmetrical-fault detection and fault-location functions to its operating algorithm. The basic indices of the technical performance of the current protection were obtained through computational experiments. The paper demonstrates the high efficiency of the proposed methods, which ensure selective and correct operation in the different modes of the protected line.
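For context, single-ended fault location in such relays is typically based on the apparent impedance seen from the relaying point; a generic reactance-method estimate (illustrative, not the specific algorithm investigated in the paper) is

\[ \ell \;\approx\; \frac{\operatorname{Im}\left( \underline{U}_{\mathrm{relay}} / \underline{I}_{\mathrm{relay}} \right)}{x_1}, \]

where \(x_1\) is the positive-sequence line reactance per kilometre, so \(\ell\) is the estimated distance to the fault.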

  4. Microprocessors & their operating systems a comprehensive guide to 8, 16 & 32 bit hardware, assembly language & computer architecture

    Holland, R C

    1989-01-01

    Provides a comprehensive guide to all of the major microprocessor families (8, 16 and 32 bit). The hardware aspects and software implications are described, giving the reader an overall understanding of microcomputer architectures. The internal processor operation of each microprocessor device is presented, followed by descriptions of the instruction set and applications for the device. Software considerations are expanded with descriptions and examples of the main high level programming languages (BASIC, Pascal and C). The book also includes detailed descriptions of the three main operatin

  5. Design of a microprocessor-based Control, Interface and Monitoring (CIM) unit for turbine engine controls research

    Delaat, J. C.; Soeder, J. F.

    1983-01-01

    High-speed minicomputers have been used in the past to implement advanced digital control algorithms for turbine engines. These minicomputers are typically large and expensive. For a number of reasons it is desirable to use microprocessor-based systems for future controls research: they are relatively compact and inexpensive, and they are representative of the hardware that would be used for actual engine-mounted controls. The Control, Interface, and Monitoring (CIM) unit contains a microprocessor-based controls computer, the necessary interface hardware, and a system for monitoring the controller while it is running an engine. It is presently being used to evaluate an advanced turbofan engine control algorithm.

  6. Nuclear criticality evacuation with telemonitoring and microprocessors

    Fergus, R.W.; Moe, H.J. Sr.

    1979-01-01

    At Argonne National Laboratory, criticality alarms are required at widely separated locations to evacuate personnel in case of accident while emergency teams or maintenance personnel respond from a central location. The system functions have been divided in a similar manner. The alarm site hardware can independently detect a criticality and sound the evacuation signal while general monitoring and routine tests are handled by a communication link to a central monitoring station. The radiation detectors and evacuation sounders at each site are interconnected by a common two conductor cable in a unique telemonitoring format. This format allows both control and data information to be received or transmitted at any point on the cable which can be up to 3000 meters total length. The site microprocessor maintains a current data table, detects several faults, drives a printer, and communicates with the central telemonitoring station. The radiation detectors are made with plastic scintillators and photomultiplier tubes operated in a constant current mode with a 4 decade measurement range. The detectors also respond within microseconds to the criticality radiation burst. These characteristics can be tested with an internal light emitting diode either completely with a manual procedure or routinely with a system test initiated by the central monitoring station. Although the system was developed for a criticality alarm which requires reliable and redundant features, the basic techniques are useable for other monitoring and instrumentation applications

  7. MicroShell Minimalist Shell for Xilinx Microprocessors

    Werne, Thomas A.

    2011-01-01

    MicroShell is a lightweight shell environment for engineers and software developers working with embedded microprocessors in Xilinx FPGAs. (MicroShell has also been successfully ported to run on ARM Cortex-M1 microprocessors in Actel ProASIC3 FPGAs, but without project-integration support.) MicroShell decreases the time spent performing initial tests of field-programmable gate array (FPGA) designs, simplifies running customizable one-time-only experiments, and provides a familiar-feeling command-line interface. The program comes with a collection of useful functions and enables the designer to add an unlimited number of custom commands, which are callable from the command line. The commands are parameterizable (using the C-based command-line parameter idiom), so the designer can use one function to exercise hardware with different values. Also, since many hardware peripherals instantiated in FPGAs have reasonably simple register-mapped I/O interfaces, the engineer can edit and view hardware parameter settings at any time without stopping the processor. MicroShell comes with a set of support scripts that interface seamlessly with Xilinx's EDK tool. Adding an instance of MicroShell to a project is as simple as marking a check box in a library configuration dialog box and specifying a software project directory. The support scripts then examine the hardware design, build design-specific functions, conditionally include processor-specific functions, and complete the compilation process. For code-size-constrained designs, most of the stock functionality can be excluded from the compiled library. When all of the configurable options are removed from the binary, MicroShell has an unoptimized memory footprint of about 4.8 kB and a size-optimized footprint of about 2.3 kB. Since MicroShell allows unfettered access to all processor-accessible memory locations, it is possible to perform live patching on a running system. This can be useful, for instance, if a bug is
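To make the register-mapped peek/poke use case concrete, here is a hedged sketch of the kind of command-table dispatch a minimal embedded shell can use. The command names, table layout and argument conventions are illustrative assumptions, not MicroShell's actual API.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef int (*cmd_fn)(int argc, char **argv);

/* Read one 32-bit memory-mapped register: peek <hex-addr> */
static int cmd_peek(int argc, char **argv)
{
    if (argc != 2) return -1;
    volatile uint32_t *addr =
        (volatile uint32_t *)(uintptr_t)strtoul(argv[1], NULL, 16);
    printf("0x%08lx = 0x%08lx\n",
           (unsigned long)(uintptr_t)addr, (unsigned long)*addr);
    return 0;
}

/* Write one 32-bit memory-mapped register: poke <hex-addr> <hex-value> */
static int cmd_poke(int argc, char **argv)
{
    if (argc != 3) return -1;
    volatile uint32_t *addr =
        (volatile uint32_t *)(uintptr_t)strtoul(argv[1], NULL, 16);
    *addr = (uint32_t)strtoul(argv[2], NULL, 16);
    return 0;
}

/* Command table: adding a command is one new row. */
static const struct { const char *name; cmd_fn fn; } commands[] = {
    { "peek", cmd_peek },
    { "poke", cmd_poke },
};

int dispatch(int argc, char **argv)
{
    for (size_t i = 0; i < sizeof commands / sizeof commands[0]; ++i)
        if (strcmp(argv[0], commands[i].name) == 0)
            return commands[i].fn(argc, argv);
    printf("unknown command: %s\n", argv[0]);
    return -1;
}
```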

  8. Sectional microprocessor based microcomputer and its application to express analysis using interactive language

    Lang, I.; Leveleki, L.; Salai, M.; Turani, D.

    1984-01-01

    A minicomputer based on the TPA-L/128H sectional (bit-slice) microprocessor, part of the TPA-8 computer family, has been developed. A substantial increase in the computer's operating speed is attained through microprogram control. The central processor is built from AM2900 bit-slice elements. The TPA-L/128H computer is program-compatible with the TPA-8 and is fully equipped with software: high-level languages as well as OS/L, COS/H, RTS/H, PAL/128, WPS, TEASYS-8 and IL 128, supporting statistical data processing, automation of physical experiments and interactive processing of experimental data. Real-time tasks and the monitoring of CAMAC devices are handled efficiently.

  9. FY1995 study of design methodology and environment of high-performance processor architectures; 1995 nendo koseino processor architecture sekkeiho to sekkei kankyo no kenkyu

    NONE

    1997-03-01

    The aim of our project is to develop high-performance processor architectures for both general-purpose and application-specific use. We also plan to develop basic software, such as compilers, and various design aid tools for those architectures. We are particularly interested in performance evaluation at the architecture design phase, design optimization, automatic generation of compilers from processor designs, and architecture design methodologies combined with circuit layout. We have investigated both microprocessor architectures and design methodologies and environments for the processors. Our goal is to establish design technologies for high-performance, low-power, low-cost and highly reliable systems in the system-on-silicon era. We have proposed the PPRAM architecture for high-performance systems using mixed DRAM and logic technology, the Softcore processor architecture for special-purpose processors in embedded systems, and the Power-Pro architecture for low-power systems. We have also developed design methodologies and design environments for the above architectures as well as a new method for design verification of microprocessors. (NEDO)

  10. Genomic analysis suggests that mRNA destabilization by the microprocessor is specialized for the auto-regulation of Dgcr8.

    Archana Shenoy

    2009-09-01

    The Microprocessor, containing the RNA binding protein Dgcr8 and the RNase III enzyme Drosha, is responsible for processing primary microRNAs to precursor microRNAs. The Microprocessor regulates its own levels by cleaving hairpins in the 5'UTR and coding region of the Dgcr8 mRNA, thereby destabilizing the mature transcript. To determine whether the Microprocessor has a broader role in directly regulating other coding mRNA levels, we integrated results from expression profiling and ultra high-throughput deep sequencing of small RNAs. Expression analysis of mRNAs in wild-type, Dgcr8 knockout, and Dicer knockout mouse embryonic stem (ES) cells uncovered mRNAs that were specifically upregulated in the Dgcr8 null background. A number of these transcripts had evolutionarily conserved predicted hairpin targets for the Microprocessor. However, analysis of deep sequencing data of 18 to 200 nt small RNAs in mouse ES, HeLa, and HepG2 cells indicates that exonic sequence reads that map in a pattern consistent with Microprocessor activity are unique to Dgcr8. We conclude that the Microprocessor's role in directly destabilizing coding mRNAs is likely specifically targeted to Dgcr8 itself, suggesting a specialized cellular mechanism for gene auto-regulation.

  11. Microprocessor Card for Cuban Series polarimeters Laserpol

    Arista Romeu, E.; Mora Mazorra, W.

    2012-01-01

    We present the design of a card based on an 8-bit microprocessor that, together with new software components, allows new services to be delivered and expands the possibilities of using the LASERPOL series polarimeters in other applications, such as polarimetric detection. Given the limitations of the original card, it was necessary to introduce a series of changes to address new user requirements and expand the possible applications of the instruments. The capacity of the EPROM and RAM memory was expanded, the memory-map decoder circuit was implemented using a programmable integrated circuit, and a real-time clock with non-volatile RAM was introduced. These features are exploited to introduce new functions, such as calibration of the polarimeter by the user from a reference sample or calibration standard, and the addition of time and date to the measurement reports required by industry for quality-control processes. The resulting card, together with the rest of the components, is pin-to-pin compatible with the LASERPOL 101M, 3M and LP4 polarimeters, which facilitates its 'in situ' incorporation into polarimeters operating in industry as a replacement for cards of previous models, and extends the possibilities for statistical processing and the precision and accuracy of the instruments. The improved measurements in industry result in significant savings by eliminating losses in production and raw materials. The reading response speed of the LASERPOL polarimeters and polarimetric detectors is also improved. (Author)

  12. Microprocessor Activity Controls Differential miRNA Biogenesis In Vivo

    Thomas Conrad

    2014-10-01

    In miRNA biogenesis, pri-miRNA transcripts are converted into pre-miRNA hairpins. The in vivo properties of this process remain enigmatic. Here, we determine in vivo transcriptome-wide pri-miRNA processing using next-generation sequencing of chromatin-associated pri-miRNAs. We identify a distinctive Microprocessor signature in the transcriptome profile from which efficiency of the endogenous processing event can be accurately quantified. This analysis reveals differential susceptibility to Microprocessor cleavage as a key regulatory step in miRNA biogenesis. Processing is highly variable among pri-miRNAs and a better predictor of miRNA abundance than primary transcription itself. Processing is also largely stable across three cell lines, suggesting a major contribution of sequence determinants. On the basis of differential processing efficiencies, we define functionality for short sequence features adjacent to the pre-miRNA hairpin. In conclusion, we identify Microprocessor as the main hub for diversified miRNA output and suggest a role for uncoupling miRNA biogenesis from host gene expression.

  13. Patmos: a time-predictable microprocessor

    Schoeberl, Martin; Puffitsch, Wolfgang; Hepp, Stefan

    2018-01-01

    Patmos is a processor designed for time-predictability and worst-case execution time (WCET) analysis rather than for high average-case performance. Patmos is a dual-issue, statically scheduled RISC processor. A method cache serves as the cache for the instructions and a split cache organization simplifies the WCET analysis of the data cache. To fill the dual-issue pipeline with enough useful...

  14. Learning Apache Solr high performance

    Mohan, Surendra

    2014-01-01

    This book is an easy-to-follow guide, full of hands-on, real-world examples. Each topic is explained and demonstrated in a specific and user-friendly flow, from search optimization using Solr to deployment of Zookeeper applications. This book is ideal for Apache Solr developers who want to learn different techniques to optimize Solr performance with utmost efficiency, along with effectively troubleshooting the problems that usually occur while trying to boost performance. Familiarity with search servers and database querying is expected.

  15. High-performance composite chocolate

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-07-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with the material selection process. In a competition-based practical, first-year undergraduate students design, cost and cast composite chocolate samples to maximize a particular performance criterion. The same activity could be adapted for any level of education to introduce the subject of materials properties and their effects on the material chosen for specific applications.

  16. Microprocessor-controlled wide-range streak camera

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  17. Microprocessor-controlled, wide-range streak camera

    Amy E. Lewis; Craig Hollabaugh

    2006-01-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized

  18. EOSCOR: a light weight, microprocessor controlled solar neutron detector

    Koga, R.; Albats, P.; Frye, G.M. Jr.; Schindler, S.M.; Denehy, B.V.; Hopper, V.D.; Mace, O.B.

    1979-01-01

    A lightweight high-energy neutron detector with a vertical detection efficiency of 0.005 at 40 MeV and a 1.4 m² sensitive area has been developed for long-duration super-pressure balloon flight observations of solar neutrons and gamma rays. It consists of two sets of four plastic scintillator hodoscopes separated by a 1 m time-of-flight path to observe n-p, C(n,p), and C(n,d) interactions. Neutron interactions are separated from gamma-ray events through TOF measurements. For a large flare, the signal from solar neutrons is expected to be an order of magnitude greater than the atmospheric background. The microprocessor controls the data acquisition, the accumulation of histograms, and the encoding of data for the telemetry systems. A test flight of the detector was made with a zero-pressure balloon. The expected many-week duration of a super-pressure balloon flight would significantly increase the probability of observing 20-150 MeV neutrons from a medium or large flare. (Auth.)

  19. Microprocessor-controlled, wide-range streak camera

    Amy E. Lewis, Craig Hollabaugh

    2006-09-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera’s user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  20. High-Performance Composite Chocolate

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-01-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with…

  1. Toward High-Performance Organizations.

    Lawler, Edward E., III

    2002-01-01

    Reviews management changes that companies have made over time in adopting or adapting four approaches to organizational performance: employee involvement, total quality management, re-engineering, and knowledge management. Considers future possibilities and defines a new view of what constitutes effective organizational design in management.…

  2. Functional High Performance Financial IT

    Berthold, Jost; Filinski, Andrzej; Henglein, Fritz

    2011-01-01

    The world of finance faces the computational performance challenge of massively expanding data volumes, extreme response time requirements, and compute-intensive complex (risk) analyses. Simultaneously, new international regulatory rules require considerably more transparency and external auditability of financial institutions, including their software systems. To top it off, increased product variety and customisation necessitates shorter software development cycles and higher development productivity. In this paper, we report about HIPERFIT, a recently established strategic research center at the University of Copenhagen that attacks this triple challenge of increased performance, transparency and productivity in the financial sector by a novel integration of financial mathematics, domain-specific language technology, parallel functional programming, and emerging massively parallel hardware.

  3. High performance Mo adsorbent PZC

    Anon,

    1998-10-01

    We have developed Mo adsorbents for a natural Mo(n,γ)99Mo-99mTc generator. Among them, the highest-performance adsorbent, which we call PZC, can adsorb about 250 mg-Mo/g. In this report, we show the structure, the Mo adsorption mechanism, and other properties of PZC that are useful when carrying out Mo adsorption tests and elution of 99mTc. (author)

  4. Indoor Air Quality in High Performance Schools

    High performance schools are facilities that improve the learning environment while saving energy, resources, and money. The key is understanding the lifetime value of high performance schools and effectively managing priorities, time, and budget.

  5. Single event effect testing of the Intel 80386 family and the 80486 microprocessor

    Moran, A.; LaBel, K.; Gates, M.; Seidleck, C.; McGraw, R.; Broida, M.; Firer, J.; Sprehn, S.

    1996-01-01

    The authors present single event effect test results for the Intel 80386 microprocessor, the 80387 coprocessor, the 82380 peripheral device, and the 80486 microprocessor. Both single event upset and latchup conditions were monitored

  6. Design of microprocessor-based hardware for number theoretic transform implementation

    Anwar Ahmed Shamim

    1985-01-01

    The Winograd (1976) Fourier Transform Algorithm (WFTA) was implemented on a TMS9900 microprocessor to compute NTTs. Since multiplication modulo m is very time consuming, a special-purpose external hardware modular multiplier was designed, constructed and interfaced with the TMS9900 microprocessor. This external modular multiplier reduced the transform execution time. Computation time may be reduced further by employing several microprocessors. Taking advantage of the inherent parallelism of the WFTA, a dedicated parallel microprocessor system was designed and constructed to implement a 15-point WFTA in parallel. Benchmark programs were written to choose a suitable microprocessor for the parallel system. A master (host) microprocessor controls the parallel microprocessor system and provides an interface to the outside world. Analogue-to-digital (A/D) and digital-to-analogue (D/A) converters allow real-time digital signal processing.
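The bottleneck named here, multiplication modulo m, is the operation the external hardware multiplier accelerates. The C sketch below (written for a modern compiler with a 64-bit intermediate, and using the Fermat prime 65537 purely as an illustrative NTT modulus) shows the software equivalent together with the basic NTT butterfly it feeds.

```c
#include <stdint.h>

#define NTT_MODULUS 65537u   /* illustrative Fermat-prime modulus */

/* Modular multiplication, the step the external hardware performed. */
static uint32_t mul_mod(uint32_t a, uint32_t b)
{
    return (uint32_t)(((uint64_t)a * b) % NTT_MODULUS);
}

/* One NTT butterfly: the elementary step repeated throughout the
 * transform, combining two coefficients with a power w of the root. */
void butterfly(uint32_t *x, uint32_t *y, uint32_t w)
{
    uint32_t t    = mul_mod(*y, w);
    uint32_t sum  = (*x + t) % NTT_MODULUS;
    uint32_t diff = (*x + NTT_MODULUS - t) % NTT_MODULUS;
    *x = sum;
    *y = diff;
}
```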

  7. High performance inertial fusion targets

    Nuckolls, J.H.; Bangerter, R.O.; Lindl, J.D.; Mead, W.C.; Pan, Y.L.

    1977-01-01

    Inertial confinement fusion (ICF) designs are considered which may have very high gains (approximately 1000) and low power requirements (<100 TW) for input energies of approximately one megajoule. These include targets having very low density shells, ultra thin shells, central ignitors, magnetic insulation, and non-ablative acceleration

  8. High performance inertial fusion targets

    Nuckolls, J.H.; Bangerter, R.O.; Lindl, J.D.; Mead, W.C.; Pan, Y.L.

    1978-01-01

    Inertial confinement fusion (ICF) target designs are considered which may have very high gains (approximately 1000) and low power requirements (< 100 TW) for input energies of approximately one megajoule. These include targets having very low density shells, ultra thin shells, central ignitors, magnetic insulation, and non-ablative acceleration

  9. High performance nuclear fuel element

    Mordarski, W.J.; Zegler, S.T.

    1980-01-01

    A fuel-pellet composition is disclosed for use in fast breeder reactors. Uranium carbide particles are mixed with a powder of uranium-plutonium carbides having a stable microstructure. The resulting mixture is formed into fuel pellets. The pellets thus produced exhibit a relatively low propensity to swell while maintaining a high density

  10. Cross software for microprocessor program development at CERN

    Eicken, H. von; Montuelle, J.; Willers, I.; Blake, J.

    1981-01-01

    Programs for a variety of microprocessors (including Intel 8080; Motorola 6800 and 6809 and 68000; and Texas Instruments 9900) can be prepared on different host computers (such as IBM 370, CDC 6000, and Nord 10) using portable programs developed at CERN. The range of cross software consists of: an assembler for each target microprocessor, a single linkage editor, a single object module librarian, and a variety of pre-loaders which convert object modules from CERN's format (CUFOM) into manufacturers' formats. The programs are written in BCPL and PASCAL, programming languages which are available on a wide range of computers. (orig.)

  11. Concept report: Microprocessor control of electrical power system

    Perry, E.

    1977-01-01

    An electrical power system which uses a microprocessor for system control and monitoring is described. The microprocessor-controlled system permits real-time modification of system parameters for optimizing the system configuration, especially in the event of an anomaly. By reducing the component count, assembly and testing of the unit are simplified and reliability is increased. A reusable modular power conversion system capable of satisfying a large percentage of space application requirements is examined along with the programmable power processor. The PC global controller, which handles system control and external communication, is analyzed, and a software description is given. A systems application summary is also included.

  12. High Performance JavaScript

    Zakas, Nicholas

    2010-01-01

    If you're like most developers, you rely heavily on JavaScript to build interactive and quick-responding web applications. The problem is that all of those lines of JavaScript code can slow down your apps. This book reveals techniques and strategies to help you eliminate performance bottlenecks during development. You'll learn how to improve execution time, downloading, interaction with the DOM, page life cycle, and more. Yahoo! frontend engineer Nicholas C. Zakas and five other JavaScript experts -- Ross Harmes, Julien Lecomte, Steven Levithan, Stoyan Stefanov, and Matt Sweeney -- demonstra

  13. Carpet Aids Learning in High Performance Schools

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  14. Quo vadis: Hydrologic inverse analyses using high-performance computing and a D-Wave quantum annealer

    O'Malley, D.; Vesselinov, V. V.

    2017-12-01

    Classical microprocessors have had a dramatic impact on hydrology for decades, due largely to the exponential growth in computing power predicted by Moore's law. However, this growth is not expected to continue indefinitely and has already begun to slow. Quantum computing is an emerging alternative to classical microprocessors. Here, we demonstrated cutting edge inverse model analyses utilizing some of the best available resources in both worlds: high-performance classical computing and a D-Wave quantum annealer. The classical high-performance computing resources are utilized to build an advanced numerical model that assimilates data from O(10^5) observations, including water levels, drawdowns, and contaminant concentrations. The developed model accurately reproduces the hydrologic conditions at a Los Alamos National Laboratory contamination site, and can be leveraged to inform decision-making about site remediation. We demonstrate the use of a D-Wave 2X quantum annealer to solve hydrologic inverse problems. This work can be seen as an early step in quantum-computational hydrology. We compare and contrast our results with an early inverse approach in classical-computational hydrology that is comparable to the approach we use with quantum annealing. Our results show that quantum annealing can be useful for identifying regions of high and low permeability within an aquifer. While the problems we consider are small-scale compared to the problems that can be solved with modern classical computers, they are large compared to the problems that could be solved with early classical CPUs. Further, the binary nature of the high/low permeability problem makes it well-suited to quantum annealing, but challenging for classical computers.
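For context, a D-Wave annealer minimizes a quadratic objective over binary variables, so a binary high/low permeability field of the kind described above has to be encoded in the standard QUBO/Ising form (this is the generic annealer formulation, not the authors' specific mapping):

\[ E(q) \;=\; \sum_i h_i q_i \;+\; \sum_{i<j} J_{ij}\, q_i q_j, \qquad q_i \in \{0,1\}, \]

where each \(q_i\) flags one model cell as high- or low-permeability and the coefficients \(h_i\) and \(J_{ij}\) are chosen so that low energy corresponds to good agreement with the observations.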

  15. How to harness the performance potential of current multi-core processors

    Jarp, Sverre; Lazzaro, Alfio; Leduc, Julien; Nowak, Andrzej

    2011-01-01

    Leakage currents have put a stop to the semiconductor industry's ability to increase processor frequency in order to enhance the performance of new microprocessors. Instead, we observe a slew of changes inside the micro-architecture with an aim of enhancing the performance. Several of these changes, however, do not translate into automatic speed improvements for the software. This paper discusses the increased complexity of modern microprocessors by separating out into dimensions each feature that impacts performance and mentions briefly ways of improving software, in particular that of the High Energy Physics community, to take full advantage.
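One of those performance dimensions is data-level parallelism: compilers use the vector units automatically only when a loop has no cross-iteration dependence. The hedged C illustration below (our own example, not code from the paper) contrasts a trivially vectorizable loop with one whose loop-carried dependence forces an algorithmic rewrite before SIMD can help.

```c
#include <stddef.h>

/* Independent iterations: a vectorizing compiler can map this directly
 * onto SIMD units with no source changes.                              */
void scale_add(float *restrict y, const float *restrict x, float a, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

/* Loop-carried dependence (y[i] needs acc from iteration i-1): this
 * blocks straightforward auto-vectorization, illustrating why hardware
 * improvements alone do not speed up unmodified software.              */
void prefix_sum(float *y, const float *x, size_t n)
{
    float acc = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        acc += x[i];
        y[i] = acc;
    }
}
```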

  16. High performance electromagnetic simulation tools

    Gedney, Stephen D.; Whites, Keith W.

    1994-10-01

    Army Research Office Grant #DAAH04-93-G-0453 has supported the purchase of 24 additional compute nodes that were installed in the Intel iPSC/860 hypercube at the University of Kentucky (UK), rendering a 32-node multiprocessor. This facility has allowed the investigators to explore and extend the boundaries of electromagnetic simulation for important areas of defense concern, including microwave monolithic integrated circuit (MMIC) design/analysis and electromagnetic materials research and development. The iPSC/860 has also provided an ideal platform for MMIC circuit simulations. A number of parallel methods based on direct time-domain solutions of Maxwell's equations have been developed on the iPSC/860, including a parallel finite-difference time-domain (FDTD) algorithm, and a parallel planar generalized Yee-algorithm (PGY). The iPSC/860 has also provided an ideal platform on which to develop a 'virtual laboratory' to numerically analyze, scientifically study and develop new types of materials with beneficial electromagnetic properties. These materials simulations are capable of assembling hundreds of microscopic inclusions from which an electromagnetic full-wave solution will be obtained in toto. This powerful simulation tool has enabled research of the full-wave analysis of complex multicomponent MMIC devices and the electromagnetic properties of many types of materials to be performed numerically rather than strictly in the laboratory.

  17. High-Performance Data Converters

    Steensgaard-Madsen, Jesper

    High-resolution internal D/A converters are required. Unit-element mismatch-shaping D/A converters are analyzed, and the concept of mismatch-shaping is generalized to include scaled-element D/A converters. Several types of scaled-element mismatch-shaping D/A converters are proposed. Simulations show that, when implemented ... in a standard CMOS technology, they can be designed to yield 100 dB performance at 10 times oversampling. The proposed scaled-element mismatch-shaping D/A converters are well suited for use as the feedback stage in oversampled delta-sigma quantizers. It is, however, not easy to make full use of their potential ... -order difference of the output signal from the loop filter's first integrator stage. This technique avoids the need for accurate matching of analog and digital filters that characterizes the MASH topology, and it preserves the signal-band suppression of quantization errors. Simulations show that quantizers

  18. High performance soft magnetic materials

    2017-01-01

    This book provides comprehensive coverage of the current state-of-the-art in soft magnetic materials and related applications, with particular focus on amorphous and nanocrystalline magnetic wires and ribbons and sensor applications. Expert chapters cover preparation, processing, tuning of magnetic properties, modeling, and applications. Cost-effective soft magnetic materials are required in a range of industrial sectors, such as magnetic sensors and actuators, microelectronics, cell phones, security, automobiles, medicine, health monitoring, aerospace, informatics, and electrical engineering. This book presents both fundamentals and applications to enable academic and industry researchers to pursue further developments of these key materials. This highly interdisciplinary volume represents essential reading for researchers in materials science, magnetism, electrodynamics, and modeling who are interested in working with soft magnets. Covers magnetic microwires, sensor applications, amorphous and nanocrystalli...

  19. High performance polyethylene nanocomposite fibers

    A. Dorigato

    2012-12-01

    A high density polyethylene (HDPE) matrix was melt compounded with 2 vol% of dimethyldichlorosilane-treated fumed silica nanoparticles. Nanocomposite fibers were prepared by melt spinning through a co-rotating twin screw extruder and drawing at 125°C in air. The thermo-mechanical and morphological properties of the resulting fibers were then investigated. The introduction of nanosilica improved the drawability of the fibers, allowing higher draw ratios than the neat matrix. The elastic modulus and creep stability of the fibers were remarkably improved upon nanofiller addition, with retention of the pristine tensile properties at break. Transmission electron microscope (TEM) images showed that the original morphology of the silica aggregates was disrupted by the applied drawing.

  20. HIGH-PERFORMANCE COATING MATERIALS

    SUGAMA,T.

    2007-01-01

    Corrosion, erosion, oxidation, and fouling by scale deposits pose critical issues in the selection of the metal components used at geothermal power plants operating at brine temperatures up to 300°C. Replacing these components is very costly and time consuming. Currently, components made of titanium alloy and stainless steel are commonly employed for dealing with these problems. However, another major consideration in using these metals is not only that they are considerably more expensive than carbon steel, but also the susceptibility of the corrosion-preventing passive oxide layers that develop on their outermost surface sites to reactions with brine-induced scales, such as silicate, silica, and calcite. Such reactions lead to the formation of strong interfacial bonds between the scales and oxide layers, causing the accumulation of multiple layers of scales and the impairment of the plant component's function and efficacy; furthermore, a substantial amount of time is entailed in removing them. This cleaning operation, essential for reusing the components, is one of the factors causing the increase in the plant's maintenance costs. If inexpensive carbon steel components could be coated and lined with cost-effective, high-hydrothermal-temperature-stable, anti-corrosion, -oxidation, and -fouling materials, this would improve the power plant's economic factors by engendering a considerable reduction in capital investment, and a decrease in the costs of operations and maintenance through optimized maintenance schedules.

  1. Digital Fractional Order Controllers Realized by PIC Microprocessor: Experimental Results

    Petras, I.; Grega, S.; Dorcak, L.

    2003-01-01

    This paper deals with fractional-order controllers and their possible hardware realization based on a PIC microprocessor and a numerical algorithm coded in PIC Basic. The mathematical description of digital fractional-order controllers and their approximation in the discrete domain are presented. An example realization of a particular digital fractional-order PID controller is shown and described.
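Digital realizations of fractional-order controllers usually rest on a discrete approximation of the fractional derivative; the Grünwald-Letnikov form commonly used in such microprocessor implementations (quoted here as the standard textbook expression, not necessarily the exact formula coded in PIC Basic) is

\[ {}_{0}D_{t}^{\alpha} f(t) \;\approx\; \frac{1}{h^{\alpha}} \sum_{j=0}^{\lfloor t/h \rfloor} (-1)^{j} \binom{\alpha}{j} f(t - jh), \]

where \(h\) is the sampling period and \(\alpha\) the (generally non-integer) order; truncating the sum to a fixed memory length keeps the per-sample computation small enough for a small microprocessor.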

  2. Design and implementation of a microprocessor based room ...

    This paper describes the development of a microprocessor-based room illumination control system that offers the advantages of improved efficiency in the use of electrical energy and reduced electricity cost compared with manually controlled lighting systems. The system is designed to regulate the intensity of light from direct current ...

  3. Microprocessor controlled dual parameter ADC system with a CAMAC interface

    Perry, D G; Nickell, Jr, J D [Los Alamos Scientific Lab., NM (USA)

    1978-09-01

    Presented here is the design of a dual parameter ADC system which is controlled by a microprocessor and also interfaced to CAMAC. The system was designed to be mobile in that it may work wherever there is a CAMAC crate. In such cases where the CAMAC system is inoperative, the system may operate in a stand-alone mode.

  4. The Microprocessor controls the activity of mammalian retrotransposons

    Heras, Sara R.; Macias, Sara; Plass, Mireya

    2013-01-01

    The Microprocessor complex, best known for its role in microRNA biogenesis, also recognizes and binds RNAs derived from human long interspersed element 1 (LINE-1), Alu and SVA retrotransposons. Expression analyses demonstrate that cells lacking a functional Microprocessor accumulate LINE-1 mRNA and encoded proteins. Furthermore, we show that structured regions...

  5. A microprocessor based multiscaling data acquisition system for moessbauer spectroscopy

    Bohm, C.; Ekdahl, T.

    1985-01-01

    A microprocessor based data acquisition system is described, which was developed for use in Moessbauer spectroscopy. It is designed to record two spectra simultaneously, one of which could be a calibration spectrum. It is autonomous, but uses a host computer for initialization and permanent storage of data. The host communication software is also described. (Author)

  6. Delivering high performance BWR fuel reliably

    Schardt, J.F.

    1998-01-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel that can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  7. A microprocessor-based gamma-ray spectrometer with gain stabilized single-channel analyzers

    Borg, P.J.; Huppert, P.; Phillips, P.L.; Waddington, P.J.

    1985-01-01

    The design and performance of a self-contained microprocessor-based gamma-ray spectrometer for use in geophysical measurements using nuclear techniques are described. The instrument uses single-channel analyzers, which are inherently simpler and faster than Wilkinson or successive-approximation ADCs. A novel technique of gain stabilization together with a simple means of energy calibration has been developed. The modular design of the equipment makes it suitable for multidetector use, as required in a number of nucleonic gauges for the quantitative measurement of chemical constituents. (orig.)

  8. Microprocessor-controlled data-acquisition instrument for neutron-activation measurements

    Jones, B.A.

    1981-01-01

    This paper describes a microprocessor-controlled data acquisition instrument designed at Lawrence Livermore National Laboratory to provide experimenters with a diagnostic tool for measuring the performance of laser-imploded fusion targets via neutron activation techniques. The instrument can count four independent inputs simultaneously while providing a front panel readout of these inputs, plus a time-of-day clock. A hardcopy printout of the data is also provided by a built-in thermal printer. All running modes and parameters are user selectable via a front panel keypad, and a complete set of internal self-test diagnostics is available for debugging

  9. Development of a microprocessor controller for stand-alone photovoltaic power systems

    Millner, A. R.; Kaufman, D. L.

    1984-01-01

    A controller for stand-alone photovoltaic systems has been developed using a low-power CMOS microprocessor. It performs battery state-of-charge estimation, array control, load management, instrumentation, automatic testing, and communications functions. Array control options are sequential subarray switching and maximum power control. A calculator keypad and LCD display provide manual control, fault diagnosis, and digital multimeter functions. An RS-232 port provides data logging or remote control capability. A prototype 5 kW unit has been built and tested successfully. The controller is expected to be useful in village photovoltaic power systems, large solar water pumping installations, and other battery management applications.
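
    One of the functions listed above, battery state-of-charge estimation, can be sketched as simple Coulomb counting; the capacity, charging efficiency and current samples below are hypothetical, and the controller's actual estimator is not described in the abstract.

        # Minimal Coulomb-counting state-of-charge (SOC) sketch.
        # Capacity, charge efficiency and the current trace are hypothetical;
        # the real controller's estimation algorithm is not given in the abstract.

        def update_soc(soc, current_a, dt_s, capacity_ah, charge_eff=0.95):
            """Advance SOC by one sample; current > 0 means charging."""
            delta_ah = current_a * dt_s / 3600.0
            if current_a > 0:
                delta_ah *= charge_eff          # losses while charging
            soc += delta_ah / capacity_ah
            return min(max(soc, 0.0), 1.0)      # clamp to [0, 1]

        soc = 0.50                               # assumed initial state of charge
        for current in [10.0, 10.0, -5.0, -5.0]: # amps, one sample per minute
            soc = update_soc(soc, current, dt_s=60.0, capacity_ah=100.0)
        print(f"SOC = {soc:.3f}")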

  10. Analysis of the Intel 386 and i486 microprocessors for the Space Station Freedom Data Management System

    Liu, Yuan-Kwei

    1991-01-01

    The feasibility is analyzed of upgrading the Intel 386 microprocessor, which has been proposed as the baseline processor for the Space Station Freedom (SSF) Data Management System (DMS), to the more advanced i486 microprocessor. The items compared between the two processors include the instruction set architecture, power consumption, the MIL-STD-883C Class S (Space) qualification schedule, and performance. The advantages of the i486 over the 386 are (1) lower power consumption and (2) higher floating point performance. The i486 on-chip cache does not have parity check or error detection and correction circuitry. The i486 with on-chip cache disabled, however, has lower integer performance than the 386 without cache, which is the current DMS design choice. Adding cache to the 386/386 DX memory hierarchy appears to be the most beneficial change to the current DMS design at this time.

  11. A low-cost high-performance embedded platform for accelerator controls

    Cleva, Stefano; Bogani, Alessio Igor; Pivetta, Lorenzo

    2012-01-01

    Over the last years the mobile and hand-held device market has seen a dramatic performance improvement of the microprocessors employed for these systems. As an interesting side effect, this brings the opportunity of adopting these microprocessors to build small low-cost embedded boards, featuring lots of processing power and input/output capabilities. Moreover, being capable of running a full featured operating system such as Gnu/Linux, and even a control system toolkit such as Tango, these boards can also be used in control systems as front-end or embedded computers. In order to evaluate the feasibility of this idea, an activity has started at Elettra to select, evaluate and validate a commercial embedded device able to guarantee production grade reliability, competitive costs and an open source platform. The preliminary results of this work are presented. (author)

  12. High performance carbon nanocomposites for ultracapacitors

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  13. Strategies and Experiences Using High Performance Fortran

    Shires, Dale

    2001-01-01

    ... High Performance Fortran (HPF) is a relatively new addition to the Fortran dialect. It is an attempt to provide an efficient, high-level Fortran parallel programming language for the latest generation of parallel machines, though its success has been debatable...

  14. High Performance Grinding and Advanced Cutting Tools

    Jackson, Mark J

    2013-01-01

    High Performance Grinding and Advanced Cutting Tools discusses the fundamentals and advances in high performance grinding processes, and provides a complete overview of newly-developing areas in the field. Topics covered are grinding tool formulation and structure, grinding wheel design and conditioning and applications using high performance grinding wheels. Also included are heat treatment strategies for grinding tools, using grinding tools for high speed applications, laser-based and diamond dressing techniques, high-efficiency deep grinding, VIPER grinding, and new grinding wheels.

  15. Strategy Guideline: High Performance Residential Lighting

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  16. Carbon nanomaterials for high-performance supercapacitors

    Tao Chen; Liming Dai

    2013-01-01

    Owing to their high energy density and power density, supercapacitors exhibit great potential as high-performance energy sources for advanced technologies. Recently, carbon nanomaterials (especially, carbon nanotubes and graphene) have been widely investigated as effective electrodes in supercapacitors due to their high specific surface area, excellent electrical and mechanical properties. This article summarizes the recent progresses on the development of high-performance supercapacitors bas...

  17. High Performance Programming Using Explicit Shared Memory Model on Cray T3D1

    Simon, Horst D.; Saini, Subhash; Grassi, Charles

    1994-01-01

    The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message passing using PVM, and the explicit shared memory model) are available to users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented and illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times lower than that obtained using the explicit shared memory model. A similar degradation is seen on the CM-5, where applications using the native message-passing library CMMD run about 4 to 5 times slower than those using data parallel methods. The issues involved in programming in the explicit shared memory model (such as barriers, synchronization, invalidating the data cache, and aligning the data cache) are discussed. Comparative performance of the NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, and IBM-SP1 is presented.
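
    The gap the abstract reports between message passing and the explicit shared memory model can be illustrated, very loosely, with standard Python multiprocessing primitives; this is only a conceptual analogue of the two programming models, not of PVM, CMMD or the CRAFT model, and the array size and timing harness are arbitrary.

        # Toy contrast between message passing and shared memory, as a loose analogue
        # of the programming-model distinction discussed above (not PVM/CRAFT itself).
        import multiprocessing as mp
        import time

        N = 100_000

        def via_queue(q_in, q_out):
            data = q_in.get()                # data is copied through the queue
            q_out.put(sum(data))

        def via_shared(shared, result):
            with result.get_lock():
                result.value = sum(shared)   # data read in place, no copy

        if __name__ == "__main__":
            data = list(range(N))

            # Message passing: the list is serialized and sent to the worker.
            q_in, q_out = mp.Queue(), mp.Queue()
            p = mp.Process(target=via_queue, args=(q_in, q_out))
            t0 = time.perf_counter()
            p.start()
            q_in.put(data)
            total_mp = q_out.get()
            p.join()
            t_msg = time.perf_counter() - t0

            # Shared memory: the worker reads the same buffer directly.
            shared = mp.Array('q', data, lock=False)
            result = mp.Value('q', 0)
            p = mp.Process(target=via_shared, args=(shared, result))
            t0 = time.perf_counter()
            p.start()
            p.join()
            t_shm = time.perf_counter() - t0

            assert total_mp == result.value
            print(f"message passing: {t_msg:.4f}s, shared memory: {t_shm:.4f}s")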

  18. Team Development for High Performance Management.

    Schermerhorn, John R., Jr.

    1986-01-01

    The author examines a team development approach to management that creates shared commitments to performance improvement by focusing the attention of managers on individual workers and their task accomplishments. It uses the "high-performance equation" to help managers confront shared beliefs and concerns about performance and develop realistic…

  19. Delivering high performance BWR fuel reliably

    Schardt, J.F. [GE Nuclear Energy, Wilmington, NC (United States)]

    1998-07-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel that can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  20. HPTA: High-Performance Text Analytics

    Vandierendonck, Hans; Murphy, Karen; Arif, Mahwish; Nikolopoulos, Dimitrios S.

    2017-01-01

    One of the main targets of data analytics is unstructured data, which primarily involves textual data. High-performance processing of textual data is non-trivial. We present the HPTA library for high-performance text analytics. The library helps programmers to map textual data to a dense numeric representation, which can be handled more efficiently. HPTA encapsulates three performance optimizations: (i) efficient memory management for textual data, (ii) parallel computation on associative dat...
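
    To make concrete what "mapping textual data to a dense numeric representation" means in general (this is not the HPTA API, whose interfaces are not given in the abstract), here is a minimal bag-of-words vectorization sketch; the vocabulary-building helper and the sample documents are invented for illustration.

        # Minimal bag-of-words sketch of mapping text to a dense numeric representation.
        # This is a generic illustration, not the HPTA library's API.

        def build_vocabulary(documents):
            """Assign every distinct token a column index (hypothetical helper)."""
            vocab = {}
            for doc in documents:
                for token in doc.lower().split():
                    vocab.setdefault(token, len(vocab))
            return vocab

        def vectorize(doc, vocab):
            """Return a dense term-frequency vector for one document."""
            vec = [0] * len(vocab)
            for token in doc.lower().split():
                if token in vocab:
                    vec[vocab[token]] += 1
            return vec

        docs = ["high performance text analytics",
                "text analytics on dense vectors"]
        vocab = build_vocabulary(docs)
        matrix = [vectorize(d, vocab) for d in docs]
        print(vocab)
        print(matrix)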

  1. Total dose and dose rate radiation characterization of EPI-CMOS radiation hardened memory and microprocessor devices

    Gingerich, B.L.; Hermsen, J.M.; Lee, J.C.; Schroeder, J.E.

    1984-01-01

    The process, circuit description, and total dose radiation characteristics are presented for two second-generation hardened 4K EPI-CMOS RAMs and a first-generation 80C85 microprocessor. Total dose radiation performance is presented to 10 Mrad(Si), and the effects of biasing and operating conditions are discussed. The dose rate sensitivity of the 4K RAMs is also presented, along with single event upset (SEU) test data

  2. Strategy Guideline. Partnering for High Performance Homes

    Prahl, Duncan [IBACOS, Inc., Pittsburgh, PA (United States)]

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties associated in the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate to continually improve the energy efficiency and durability of new houses.

  3. Dual photon absorptiometer utilizing a HpGe detector and microprocessor controller

    Ellis, K.J.; Vartsky, D.; Pearlstein, T.B.; Alberi, J.L.; Cohn, S.H.

    1978-01-01

    The analysis of bone mineral content (BMC) using a single-energy photon beam assumes that only two materials are present: bone mineral and a uniform soft tissue component. The uncertainty in the BMC value increases with varying adipose tissue content in the transmitted beam. These errors, however, are reduced by the dual-energy technique, and extension to additional energies further identifies the separate constituents of the soft tissue component. A multi-energy bone scanning apparatus with data acquisition and analysis capability sufficient to perform multi-energy analysis of bone mineral content was designed and developed. The present work reports on the development of the device operated in the dual-energy mode. The high-purity germanium (HpGe) detector is an integral component of the scanner. Errors in BMC due to multiple small-angle scatters are reduced by the excellent energy resolution of the detector (530 eV at 60 keV), and the need to filter the source or add collimation at the detector is eliminated. A new dual-source holder was designed using 200 mCi of 125I and 100 mCi of 241Am. The active areas of the two source capsules are aligned on a common axis, and the congruence of the dual source was verified by measuring the collimator response function. This holder design ensures that the same tissue mass simultaneously attenuates both sources. The controller portion of the microprocessor allows variation of the total scan length, step size, and counting time per step. These options allow multiple measurements without changes to the detector, source, or collimator. The system has been used successfully to determine the BMC of different bones
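
    The dual-energy principle the abstract relies on can be sketched as solving two attenuation equations for two unknown areal densities (bone mineral and soft tissue); the attenuation coefficients and intensities below are made-up numbers, not values from this instrument.

        # Sketch of the dual-energy bone-mineral calculation: two Beer-Lambert
        # attenuation equations solved for bone-mineral and soft-tissue areal density.
        # All coefficients and intensities are hypothetical illustration values.
        import math

        # Mass attenuation coefficients (cm^2/g), rows = energy (low, high),
        # columns = material (bone mineral, soft tissue).  Hypothetical values.
        MU = [[0.60, 0.25],   # low-energy photons (125I-like)
              [0.30, 0.20]]   # high-energy photons (241Am-like)

        def dual_energy_bmc(I0_low, I_low, I0_high, I_high):
            """Solve the 2x2 linear system for (bone, soft) areal density in g/cm^2."""
            y1 = math.log(I0_low / I_low)    # = mu_b1*m_b + mu_s1*m_s
            y2 = math.log(I0_high / I_high)  # = mu_b2*m_b + mu_s2*m_s
            a, b = MU[0]
            c, d = MU[1]
            det = a * d - b * c
            m_bone = (y1 * d - y2 * b) / det
            m_soft = (y2 * a - y1 * c) / det
            return m_bone, m_soft

        # Hypothetical incident/transmitted count rates at the two energies
        m_b, m_s = dual_energy_bmc(I0_low=1e5, I_low=2.0e4, I0_high=1e5, I_high=3.5e4)
        print(f"bone mineral: {m_b:.2f} g/cm^2, soft tissue: {m_s:.2f} g/cm^2")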

  4. Design and Implementation of O/C relay using Microprocessor

    Dr. Abdul-Sattar H. Jasim

    2012-03-01

    Full Text Available This work presents the design and implementation of a versatile digital overcurrent (O/C) relay using a single microprocessor. The relay is implemented by a combination of a look-up table and a counter. The software development and hardware testing are done using a microcomputer module based on an 8-bit microprocessor. The digital processing of the measured currents enables separate setting of the operating values and selection of all types of inverse-time or constant-time overcurrent protection characteristics. This protection provides reasonably fast tripping, even at terminals close to the power source where the most severe faults can occur, while excluding transient conditions. The method therefore offers an excellent compromise between accuracy, hardware and speed
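
    As a rough sketch of the look-up-table-plus-counter structure described above (the actual table contents, sampling rate and characteristic used in the paper are not given), the following code trips when a counter driven by a precomputed inverse-time table crosses a threshold; the curve constants and thresholds are assumptions.

        # Rough sketch of an inverse-time overcurrent relay built from a look-up table
        # and a counter, as described above.  The curve constants, table resolution,
        # sampling interval and pickup threshold are hypothetical.

        DT = 0.01           # sampling interval in seconds (assumed)
        PICKUP = 1.0        # pickup current in multiples of the set current
        TRIP_COUNT = 1.0    # trip when the accumulated count reaches this value

        def build_table(tms=0.1, steps=100, max_multiple=20.0):
            """Precompute per-sample increments for current multiples above pickup.

            Uses a standard inverse-time shape t = tms * 0.14 / (M**0.02 - 1) purely
            as an example curve; the paper's characteristics may differ.
            """
            table = {}
            for i in range(1, steps + 1):
                m = PICKUP + i * (max_multiple - PICKUP) / steps
                trip_time = tms * 0.14 / (m ** 0.02 - 1.0)
                table[i] = DT / trip_time    # fraction of a trip accumulated per sample
            return table

        def index_of(multiple, steps=100, max_multiple=20.0):
            i = int((multiple - PICKUP) / (max_multiple - PICKUP) * steps)
            return max(1, min(steps, i))

        def relay(samples, table):
            """Return the sample index at which the relay trips, or None."""
            counter = 0.0
            for k, m in enumerate(samples):
                if m > PICKUP:
                    counter += table[index_of(m)]
                else:
                    counter = 0.0            # reset below pickup
                if counter >= TRIP_COUNT:
                    return k
            return None

        table = build_table()
        fault = [0.8] * 50 + [5.0] * 2000    # current as multiples of the set current
        print("tripped at sample", relay(fault, table))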

  5. General-purpose microprocessor-based control chassis

    Halbig, J.K.; Klosterbuer, S.F.; Swenson, D.A.

    1979-12-01

    The objective of the Pion Generation for Medical Irradiations (PIGMI) program at the Los Alamos Scientific Laboratory is to develop the technology to build smaller, less expensive, and more reliable proton linear accelerators for medical applications. For this program, a powerful, simple, inexpensive, and reliable control and data acquisition system was developed. The system has a NOVA 3D computer with a real time disk-operating system (RDOS) that communicates with distributed microprocessor-based controllers which directly control data input/output chassis. At the heart of the controller is a microprocessor crate which was conceived at the Fermi National Accelerator Laboratory. This idea was applied to the design of the hardware and software of the controller

  6. Applications of microprocessors in upgrading of accelerator controls

    Mallory, K.B.

    1977-03-01

    Experience at SLAC demonstrates that the criteria for selection and use of microprocessors in modifying an existing control system may differ from the criteria that apply during installation of the control system of a new accelerator. Considerations such as cost of individual projects, progressive installation without disruption of operations and training of on-board personnel can outweigh "obvious" goals such as standardization of hardware, uniformity of software, or even a rigid specification of link protocols with the main computer system

  7. Small Private Key PKS on an Embedded Microprocessor

    Seo, Hwajeong; Kim, Jihyun; Choi, Jongseok; Park, Taehwan; Liu, Zhe; Kim, Howon

    2014-01-01

    Multivariate quadratic (MQ) cryptography requires the use of long public and private keys to ensure a sufficient security level, but this is not favorable to embedded systems, which have limited system resources. Recently, various approaches to MQ cryptography using reduced public keys have been studied. As a result, at CHES2011 (Cryptographic Hardware and Embedded Systems, 2011) a small public key MQ scheme was proposed, and its feasible implementation on an embedded microprocessor...
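
    For context, the core public-key operation in MQ cryptography is evaluating a system of multivariate quadratic polynomials over a small finite field; the toy sketch below does this over GF(2) with a randomly generated system and is unrelated to the specific CHES 2011 scheme or to the paper's embedded implementation.

        # Toy evaluation of a multivariate quadratic (MQ) public key over GF(2).
        # The random system below merely illustrates the core operation; it is not
        # the CHES 2011 scheme or the implementation discussed in the paper.
        import random

        random.seed(1)
        N_VARS, N_EQS = 8, 6     # hypothetical system size

        def random_equation(n):
            """Quadratic terms a[(i,j)] (i<=j), linear b[i], constant c, all in GF(2)."""
            a = {(i, j): random.randint(0, 1) for i in range(n) for j in range(i, n)}
            b = [random.randint(0, 1) for _ in range(n)]
            c = random.randint(0, 1)
            return a, b, c

        def evaluate(equation, x):
            a, b, c = equation
            acc = c
            for (i, j), coeff in a.items():
                acc ^= coeff & x[i] & x[j]       # x_i * x_j mod 2
            for i, coeff in enumerate(b):
                acc ^= coeff & x[i]
            return acc

        public_key = [random_equation(N_VARS) for _ in range(N_EQS)]
        x = [random.randint(0, 1) for _ in range(N_VARS)]
        print("P(x) =", [evaluate(eq, x) for eq in public_key])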

  8. Overview of real-time operating systems on microprocessor platforms

    Luong, T.T.

    1994-01-01

    This paper surveys real-time operating systems on microprocessor platforms in the field of experimental physics facility controls. The key issues regarding operating systems as well as standards and development environments are discussed. As an illustration, some current industrial products are indicated. Real-time systems operating in some institutes of the EPS/EPCS inter-divisional group are also reviewed. (author). 3 refs., 4 figs

  9. High-performance ceramics. Fabrication, structure, properties

    Petzow, G.; Tobolski, J.; Telle, R.

    1996-01-01

    The program "Ceramic High-performance Materials" pursued the objective of understanding the chain of cause and effect in the development of high-performance ceramics. This chain of problems begins with the chemical reactions for the production of powders; comprises the characterization, processing, shaping and compacting of powders, structural optimization, heat treatment, production and finishing; and leads to issues of materials testing and of design appropriate to the material. The program "Ceramic High-performance Materials" has produced contributions to the understanding of fundamental interrelationships in materials science, which are summarized in the present volume, broken down into eight special aspects. (orig./RHM)

  10. High Burnup Fuel Performance and Safety Research

    Bang, Je Keun; Lee, Chan Bok; Kim, Dae Ho (and others)

    2007-03-15

    The worldwide trend in nuclear fuel development is toward high-burnup, high-performance fuel with high economy and safety. Because the fuel performance evaluation code INFRA is patented, and its superiority in predicting fuel performance was proven through the IAEA CRP FUMEX-II program, the INFRA code can be utilized commercially in industry. The INFRA code has been provided to, and used productively by, domestic universities and relevant institutes, and it has been used as a reference code in industry for the development of the intrinsic fuel rod design code.

  11. ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS

    WONG, CPC; MALANG, S; NISHIO, S; RAFFRAY, R; SAGARA, S

    2002-01-01

    OAK A271 ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS. First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket components lifetime and availability

  12. Auxiliary/Master microprocessor CAMAC Crate Controller applications

    Barsotti, E.

    1975-01-01

    The need for further sophistication of an already complex serial CAMAC control system at Fermilab led to the development of an Auxiliary/Master CAMAC Crate Controller. The controller contains a Motorola 6800 microprocessor, 2K bytes of RAM, and 8K bytes of PROM memory. Bussed dataway lines are time shared with CAMAC signals to provide memory expansion and direct addressing of peripheral devices without the need of external cabling. The Auxiliary/Master Crate Controller (A/MCC) can function either as a Master (i.e., stand-alone) crate controller or as an Auxiliary controller to Fermilab's Serial Crate Controller (SCC). Two modules, one single- and one double-width, make up an A/MCC. The microprocessor has one nonmaskable and one maskable vectored interrupt. Time sharing the dataway between SCC-programmed and block-transfer-generated dataway cycles and A/MCC operations still allows a 99 percent microprocessor CPU busy time. Since the conception of the A/MCC, there has been an increasing number of proposed control system-related projects which would not have been possible or would have been very difficult to implement without such a device. The first such application now in use at Fermilab is a stand-alone control system for a mass spectrometer experiment in the Main Ring Internal Target Area. This application, in addition to other proposed A/MCC applications, both stand-alone and auxiliary, is discussed

  13. High performance liquid chromatographic determination of ...

    STORAGESEVER

    2010-02-08

    ) high performance liquid chromatography (HPLC) grade .... applications. These are important requirements if the reagent is to be applicable to on-line pre or post column derivatisation in a possible automation of the analytical.

  14. Analog circuit design designing high performance amplifiers

    Feucht, Dennis

    2010-01-01

    The third volume Designing High Performance Amplifiers applies the concepts from the first two volumes. It is an advanced treatment of amplifier design/analysis emphasizing both wideband and precision amplification.

  15. High-performance computing using FPGAs

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), in High Performance Computing (HPC) applications. It presents the latest developments in this field from the applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes thirteen application chapters, which present the most important application areas tackled by high performance reconfigurable computers, namely financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g., computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation, and seven architecture chapters which...

  16. Embedded High Performance Scalable Computing Systems

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  17. Gradient High Performance Liquid Chromatography Method ...

    Purpose: To develop a gradient high performance liquid chromatography (HPLC) method for the simultaneous determination of phenylephrine (PHE) and ibuprofen (IBU) in solid ..... nimesulide, phenylephrine. Hydrochloride, chlorpheniramine maleate and caffeine anhydrous in pharmaceutical dosage form. Acta Pol.

  18. Highlighting High Performance: Whitman Hanson Regional High School; Whitman, Massachusetts

    2006-06-01

    This brochure describes the key high-performance building features of the Whitman-Hanson Regional High School. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar and wind energy, building envelope, heating and cooling systems, water conservation, and acoustics. Energy cost savings are also discussed.

  19. High performance computing in Windows Azure cloud

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower maintenance costs have contributed substantially to the growing popularity of cloud computing in all spheres of life, especially in business. In fact, cloud computing offers even more than this. By using virtual computing clusters, a runtime environment for high performance computing can also be implemented efficiently in a cloud. There are many advantages but also some disadvantages of cloud computing, some ...

  20. High-performance computing — an overview

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  1. Governance among Malaysian high performing companies

    Asri Marsidi

    2016-07-01

    Full Text Available Well-performing companies have always been linked with effective governance, which is generally reflected in an effective board of directors. However, many issues concerning the attributes of an effective board of directors remain unresolved. Diversity is now perceived as able to influence corporate performance because of the likelihood of meeting the varied needs and demands of diverse customers and clients. The study therefore aims to provide a fundamental understanding of governance among high performing companies in Malaysia.

  2. High-performance OPCPA laser system

    Zuegel, J.D.; Bagnoud, V.; Bromage, J.; Begishev, I.A.; Puth, J.

    2006-01-01

    Optical parametric chirped-pulse amplification (OPCPA) is ideally suited for amplifying ultra-fast laser pulses since it provides broadband gain across a wide range of wavelengths without many of the disadvantages of regenerative amplification. A high-performance OPCPA system has been demonstrated as a prototype for the front end of the OMEGA Extended Performance (EP) Laser System. (authors)

  3. High-performance OPCPA laser system

    Zuegel, J.D.; Bagnoud, V.; Bromage, J.; Begishev, I.A.; Puth, J. [Rochester Univ., Lab. for Laser Energetics, NY (United States)]

    2006-06-15

    Optical parametric chirped-pulse amplification (OPCPA) is ideally suited for amplifying ultra-fast laser pulses since it provides broadband gain across a wide range of wavelengths without many of the disadvantages of regenerative amplification. A high-performance OPCPA system has been demonstrated as a prototype for the front end of the OMEGA Extended Performance (EP) Laser System. (authors)

  4. Comparing Dutch and British high performing managers

    Waal, A.A. de; Heijden, B.I.J.M. van der; Selvarajah, C.; Meyer, D.

    2016-01-01

    National cultures have a strong influence on the performance of organizations and should be taken into account when studying the traits of high performing managers. At the same time, many studies that focus upon the attributes of successful managers show that there are attributes that are similar

  5. A microprocessor based exchange data collection and analysis terminal application to A.E.A. PABX

    Mohammed, F.A.; Ezzat, A.K.; Ayad, N.M.A.

    1978-01-01

    The traffic data acquisition and analysis system comprises microprocessor-based data collection terminals (MBDCT) and a centralized computer. The MBDCTs can communicate with the computer through a data set system. Each remote MBDCT terminal is connected to about two hundred subscriber lines. It scans the trunk lines to detect the on/off hook states and to calculate the call time and the called number. If the called subscriber is not among the 200 local lines, its status has to be detected through the computer's communication with the two terminals. The data collected by the terminal can be partially analysed using the microprocessor's programming capability. Moreover, short quality-performance reports can be printed on a printer interfaced to the microprocessor, and data can be transmitted to the central computer for further traffic investigation. The analysis outcome can be utilized for telephone line maintenance and reorganization. This report is concerned with the terminal details as applied to the A.E.A. PABX, which consists mainly of five external lines and about 300 internal lines

  6. High Performance Work Systems for Online Education

    Contacos-Sawyer, Jonna; Revels, Mark; Ciampa, Mark

    2010-01-01

    The purpose of this paper is to identify the key elements of a High Performance Work System (HPWS) and explore the possibility of implementation in an online institution of higher learning. With the projected rapid growth of the demand for online education and its importance in post-secondary education, providing high quality curriculum, excellent…

  7. Teacher Accountability at High Performing Charter Schools

    Aguirre, Moises G.

    2016-01-01

    This study will examine the teacher accountability and evaluation policies and practices at three high performing charter schools located in San Diego County, California. Charter schools are exempted from many laws, rules, and regulations that apply to traditional school systems. By examining the teacher accountability systems at high performing…

  8. Advanced high performance solid wall blanket concepts

    Wong, C.P.C.; Malang, S.; Nishio, S.; Raffray, R.; Sagara, A.

    2002-01-01

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket components lifetime and availability

  9. Hierarchically structured distributed microprocessor network for control

    Greenwood, J.R.; Holloway, F.W.; Rupert, P.R.; Ozarski, R.G.; Suski, G.J.

    1979-01-01

    To satisfy a broad range of control-analysis and data-acquisition requirements for Shiva, a hierarchical, computer-based, modular-distributed control system was designed. This system handles the more than 3000 control elements and 1000 data acquisition units in a severe high-voltage, high-current environment. The control system design gives one a flexible and reliable configuration to meet the development milestones for Shiva within critical time limits

  10. Introduction to 6800/6802 microprocessor systems hardware, software and experimentation

    Simpson, Robert J

    1987-01-01

    Introduction to 6800/6802 Microprocessor Systems: Hardware, Software and Experimentation introduces the reader to the features, characteristics, operation, and applications of the 6800/6802 microprocessor and associated family of devices. Many worked examples are included to illustrate the theoretical and practical aspects of the 6800/6802 microprocessor.Comprised of six chapters, this book begins by presenting several aspects of digital systems before introducing the concepts of fetching and execution of a microprocessor instruction. Details and descriptions of hardware elements (MPU, RAM, RO

  11. High performance bio-integrated devices

    Kim, Dae-Hyeong; Lee, Jongha; Park, Minjoon

    2014-06-01

    In recent years, personalized electronics for medical applications, particularly, have attracted much attention with the rise of smartphones because the coupling of such devices and smartphones enables the continuous health-monitoring in patients' daily life. Especially, it is expected that the high performance biomedical electronics integrated with the human body can open new opportunities in the ubiquitous healthcare. However, the mechanical and geometrical constraints inherent in all standard forms of high performance rigid wafer-based electronics raise unique integration challenges with biotic entities. Here, we describe materials and design constructs for high performance skin-mountable bio-integrated electronic devices, which incorporate arrays of single crystalline inorganic nanomembranes. The resulting electronic devices include flexible and stretchable electrophysiology electrodes and sensors coupled with active electronic components. These advances in bio-integrated systems create new directions in the personalized health monitoring and/or human-machine interfaces.

  12. Designing a High Performance Parallel Personal Cluster

    Kapanova, K. G.; Sellier, J. M.

    2016-01-01

    Today, many scientific and engineering areas require high performance computing to perform computationally intensive experiments. For example, many advances in transport phenomena, thermodynamics, material properties, computational chemistry and physics are possible only because of the availability of such large scale computing infrastructures. Yet many challenges are still open. The cost of energy consumption, cooling, competition for resources have been some of the reasons why the scientifi...

  13. vSphere high performance cookbook

    Sarkar, Prasenjit

    2013-01-01

    vSphere High Performance Cookbook is written in a practical, helpful style with numerous recipes focusing on answering and providing solutions to common, and not-so common, performance issues and problems.The book is primarily written for technical professionals with system administration skills and some VMware experience who wish to learn about advanced optimization and the configuration features and functions for vSphere 5.1.

  14. High performance parallel I/O

    Prabhat

    2014-01-01

    Gain Critical Insight into the Parallel I/O EcosystemParallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem.The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har

  15. Strategy Guideline: Partnering for High Performance Homes

    Prahl, D.

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants and where relationships are, in general, adversarial as opposed to cooperative, the chances of any one building system to fail are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  16. Long-term bridge performance high priority bridge performance issues.

    2014-10-01

    Bridge performance is a multifaceted issue involving performance of materials and protective systems, : performance of individual components of the bridge, and performance of the structural system as a whole. The : Long-Term Bridge Performance (LTBP)...

  17. Validated High Performance Liquid Chromatography Method for ...

    Purpose: To develop a simple, rapid and sensitive high performance liquid chromatography (HPLC) method for the determination of cefadroxil monohydrate in human plasma. Methods: Schimadzu HPLC with LC solution software was used with Waters Spherisorb, C18 (5 μm, 150mm × 4.5mm) column. The mobile phase ...

  18. An Introduction to High Performance Fortran

    John Merlin

    1995-01-01

    Full Text Available High Performance Fortran (HPF) is an informal standard for extensions to Fortran 90 to assist its implementation on parallel architectures, particularly for data-parallel computation. Among other things, it includes directives for specifying data distribution across multiple memories, and concurrent execution features. This article provides a tutorial introduction to the main features of HPF.

  19. High performance computing on vector systems

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  20. High Performance Electronics on Flexible Silicon

    Sevilla, Galo T.

    2016-09-01

    Over the last few years, flexible electronic systems have gained increased attention from researchers around the world because of their potential to create new applications such as flexible displays, flexible energy harvesters, artificial skin, and health monitoring systems that cannot be integrated with conventional wafer based complementary metal oxide semiconductor processes. Most of the current efforts to create flexible high performance devices are based on the use of organic semiconductors. However, inherent material limitations make them unsuitable for big data processing and high speed communications. The objective of my doctoral dissertation is to develop integration processes that allow the transformation of rigid high performance electronics into flexible ones while maintaining their performance and cost. In this work, two different techniques to transform inorganic complementary metal-oxide-semiconductor electronics into flexible ones have been developed using industry compatible processes. Furthermore, these techniques were used to realize flexible discrete devices and circuits which include metal-oxide-semiconductor field-effect-transistors, the first demonstration of flexible Fin-field-effect-transistors, and metal-oxide-semiconductors-based circuits. Finally, this thesis presents a new technique to package, integrate, and interconnect flexible high performance electronics using low cost additive manufacturing techniques such as 3D printing and inkjet printing. This thesis contains in depth studies on electrical, mechanical, and thermal properties of the fabricated devices.

  1. Debugging a high performance computing program

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
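
    The grouping step described in this abstract can be illustrated with a small sketch that clusters threads by the addresses of their calling instructions; the thread data and address values below are invented for illustration and do not come from the patented tool.

        # Sketch of grouping threads by the addresses of their calling instructions,
        # as in the debugging approach described above.  The sampled call addresses
        # below are invented; a real tool would gather them from the running program.
        from collections import defaultdict

        # thread id -> chain of calling-instruction addresses (hypothetical data)
        call_addresses = {
            0: [0x4005f0, 0x400a10, 0x401234],
            1: [0x4005f0, 0x400a10, 0x401234],
            2: [0x4005f0, 0x400a10, 0x401234],
            3: [0x4005f0, 0x400b44, 0x40ffff],   # the odd one out
        }

        def group_threads(addresses):
            """Group thread ids that share an identical chain of calling addresses."""
            groups = defaultdict(list)
            for tid, chain in addresses.items():
                groups[tuple(chain)].append(tid)
            return groups

        for chain, tids in group_threads(call_addresses).items():
            pretty = " -> ".join(hex(a) for a in chain)
            print(f"threads {tids}: {pretty}")
        # A group containing very few threads (here thread 3) is a candidate defect.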

  2. Technology Leadership in Malaysia's High Performance School

    Yieng, Wong Ai; Daud, Khadijah Binti

    2017-01-01

    Headmaster as leader of the school also plays a role as a technology leader. This applies to the high performance schools (HPS) headmaster as well. The HPS excel in all aspects of education. In this study, researcher is interested in examining the role of the headmaster as a technology leader through interviews with three headmasters of high…

  3. Toward High Performance in Industrial Refrigeration Systems

    Thybo, C.; Izadi-Zamanabadi, Roozbeh; Niemann, H.

    2002-01-01

    Achieving high performance in complex industrial systems requires information manipulation at different system levels. The paper shows how different models of the same subsystems, using different qualities of information/data, are used for fault diagnosis as well as robust control design...

  4. Towards high performance in industrial refrigeration systems

    Thybo, C.; Izadi-Zamanabadi, R.; Niemann, Hans Henrik

    2002-01-01

    Achieving high performance in complex industrial systems requires information manipulation at different system levels. The paper shows how different models of the same subsystems, using different qualities of information/data, are used for fault diagnosis as well as robust control design...

  5. Validated high performance liquid chromatographic (HPLC) method ...

    STORAGESEVER

    2010-02-22

    ... specific and accurate high performance liquid chromatographic method for determination of ZER in micro-volumes ... tional medicine as a cure for swelling, sores, loss of appetite and ... Receptor Activator for Nuclear Factor κB Ligand ... The effect of ... be suitable for preclinical pharmacokinetic studies.

  6. Validated High Performance Liquid Chromatography Method for ...

    Purpose: To develop a simple, rapid and sensitive high performance liquid ... response, tailing factor and resolution of six replicate injections was < 3 %. ... Cefadroxil monohydrate, Human plasma, Pharmacokinetics Bioequivalence ... Drug-free plasma was obtained from the local .... Influence of probenicid on the renal.

  7. Integrated plasma control for high performance tokamaks

    Humphreys, D.A.; Deranian, R.D.; Ferron, J.R.; Johnson, R.D.; LaHaye, R.J.; Leuer, J.A.; Penaflor, B.G.; Walker, M.L.; Welander, A.S.; Jayakumar, R.J.; Makowski, M.A.; Khayrutdinov, R.R.

    2005-01-01

    Sustaining high performance in a tokamak requires controlling many equilibrium shape and profile characteristics simultaneously with high accuracy and reliability, while suppressing a variety of MHD instabilities. Integrated plasma control, the process of designing high-performance tokamak controllers based on validated system response models and confirming their performance in detailed simulations, provides a systematic method for achieving and ensuring good control performance. For present-day devices, this approach can greatly reduce the need for machine time traditionally dedicated to control optimization, and can allow determination of high-reliability controllers prior to ever producing the target equilibrium experimentally. A full set of tools needed for this approach has recently been completed and applied to present-day devices including DIII-D, NSTX and MAST. This approach has proven essential in the design of several next-generation devices including KSTAR, EAST, JT-60SC, and ITER. We describe the method, results of design and simulation tool development, and recent research producing novel approaches to equilibrium and MHD control in DIII-D. (author)
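
    As a loose, toy-scale illustration of the workflow described above (design a controller against a system response model, then confirm it in simulation), the sketch below tunes a PI controller for an assumed first-order model and checks tracking in a simple Euler simulation; the model, gains and setpoint are hypothetical and are unrelated to the actual tokamak control tools discussed in the paper.

        # Toy "design on a model, confirm in simulation" workflow.  The first-order
        # response model x' = A*x + B*u, the gain heuristic and the setpoint are all
        # hypothetical; this is not the integrated plasma control toolset itself.

        A, B = -2.0, 4.0        # assumed system-response model
        DT, STEPS = 0.001, 5000

        def design_pi(bandwidth=20.0):
            """Pick PI gains so the closed loop has roughly the requested bandwidth."""
            kp = (bandwidth - A) / B
            ki = bandwidth * 5.0 / B
            return kp, ki

        def simulate(kp, ki, setpoint=1.0):
            """Confirm the design by simulating the closed loop; return final error."""
            x, integral = 0.0, 0.0
            for _ in range(STEPS):
                error = setpoint - x
                integral += error * DT
                u = kp * error + ki * integral
                x += (A * x + B * u) * DT       # explicit Euler step of the model
            return abs(setpoint - x)

        kp, ki = design_pi()
        print(f"kp={kp:.2f}, ki={ki:.2f}, final tracking error={simulate(kp, ki):.4f}")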

  8. Project materials [Commercial High Performance Buildings Project

    None

    2001-01-01

    The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE'S Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefit of buildings that are designed, built and operated to be energy efficient, environmentally sustainable, superior quality, and cost effective.

  9. High performance structural ceramics for nuclear industry

    Pujari, Vimal K.; Faker, Paul

    2006-01-01

    A family of Saint-Gobain structural ceramic materials and products produced by its High Performance Refractory Division is described. Over the last fifty years or so, Saint-Gobain has been a leader in developing novel non-oxide-ceramic-based materials, processes and products for application in the nuclear, chemical, automotive, defense and mining industries

  10. A new high performance current transducer

    Tang Lijun; Lu Songlin; Li Deming

    2003-01-01

    A DC to 100 kHz current transducer has been developed using a new technique based on the zero-flux detection principle. The new transducer shows high performance, its magnetic core need not be selected very stringently, and it is easy to manufacture

  11. Unified microprocessor CAMAC module for preliminary data processing

    Zaushitsin, V.L.; Kulik, O.V.; Repin, V.M.

    1984-01-01

    The UP-80 unified active module is described. It is built to the CAMAC standard around the K580IK80 microprocessor, which allows the rate of large-volume experimental spectroscopic data processing to be increased by an order of magnitude. Five different data-processing programs can be loaded. Data from the internal memory, with a capacity of 1K 8-bit words, are written and read out through the CAMAC dataway (single-word exchange mode is possible) or through the external-line connector

  12. Practical design of digital circuits basic logic to microprocessors

    Kampel, Ian

    1983-01-01

    Practical Design of Digital Circuits: Basic Logic to Microprocessors demonstrates the practical aspects of digital circuit design. The intention is to give the reader sufficient confidence to embark upon his own design projects utilizing digital integrated circuits as soon as possible. The book is organized into three parts. Part 1 teaches the basic principles of practical design, and introduces the designer to his "tools" - or rather, the range of devices that can be called upon. Part 2 shows the designer how to put these together into viable designs. It includes two detailed descriptio

  13. Use of a microprocessor in a remote working level monitor

    Keefe, D.J.; McDowell, W.P.; Groer, P.G.

    1976-01-01

    The instrument described measures the short-lived 222Rn-daughter concentrations and the Working Level (WL) in sealed "hot chambers" located in uranium mines. Radiation-induced pulses from two separate sensors are transmitted through 500 ft. cables to a microprocessor, which processes the pulses and controls the operation of the system. A read-only memory stores a fixed program which is used to calculate the desired concentrations. The results are printed as pCi/l (Rn-daughter concentrations) and WL

  14. Microprocessor-assisted calibration for a remote working level monitor

    McDowell, W.P.; Keefe, D.J.; Groer, P.G.; Witek, R.T.

    1977-01-01

    A method is described for calibrating a Remote Working Level Monitor, an instrument which measures Working Level and Rn-daughter concentrations in the atmosphere. The method makes use of a microprocessor to calculate beta efficiencies for RaB and RaC from the counts accumulated in the RaA, Ra(B + C) and RaC' channels of the instrument. Both the alpha spectroscopic and total-alpha methods are used to determine the Rn-daughter concentrations. These methods require the processor to solve systems of linear equations with several unknowns. No assumptions about Rn-daughter equilibrium are made
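
    The final step mentioned here, solving a small system of linear equations for the daughter concentrations, can be sketched generically as follows; the detector response matrix and channel counts are hypothetical, since the instrument's real efficiencies and channel definitions are not given in the abstract.

        # Generic sketch of the calibration step: recover Rn-daughter concentrations
        # from channel counts by solving a small linear system  R * c = counts.
        # The response matrix R (counts per unit concentration per channel) and the
        # measured counts are hypothetical; the instrument's real efficiencies differ.

        def solve(matrix, rhs):
            """Solve a small dense linear system by Gaussian elimination with pivoting."""
            n = len(rhs)
            a = [row[:] + [rhs[i]] for i, row in enumerate(matrix)]
            for col in range(n):
                pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
                a[col], a[pivot] = a[pivot], a[col]
                for r in range(col + 1, n):
                    f = a[r][col] / a[col][col]
                    for c in range(col, n + 1):
                        a[r][c] -= f * a[col][c]
            x = [0.0] * n
            for r in reversed(range(n)):
                x[r] = (a[r][n] - sum(a[r][c] * x[c] for c in range(r + 1, n))) / a[r][r]
            return x

        # Rows: RaA, Ra(B + C), RaC' channels; columns: RaA, RaB, RaC concentrations.
        R = [[4.2, 0.3, 0.1],
             [0.5, 3.8, 3.1],
             [0.0, 0.2, 2.9]]              # hypothetical counts per pCi/l
        counts = [850.0, 1900.0, 600.0]
        conc = solve(R, counts)
        print([round(c, 1) for c in conc])  # estimated RaA, RaB, RaC in pCi/l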

  15. Microprocessor-assisted calibration for a remote working level monitor

    McDowell, W.P.; Keefe, D.J.; Groer, P.G.; Witek, R.T.

    1976-01-01

    A method is described for calibrating a Remote Working Level Monitor, an instrument which measures Working Level and Rn-daughter concentrations in the atmosphere. The method makes use of a microprocessor to calculate beta efficiencies for RaB and RaC from the counts accumulated in the RaA, Ra(B + C) and RaC' channels of the instrument. Both the alpha spectroscopic and total-alpha methods are used to determine the Rn-daughter concentrations. These methods require the processor to solve systems of linear equations with several unknowns. No assumptions about Rn-daughter equilibrium are made

  16. Microprocessor Control Design for a Low-Head Crossflow Turbine.

    1985-03-01

    Controllers For a Typical 10 KW Hydroturbine ............ 1-5 I-1 Ely’s Crossflow Turbine . ........ 11-2 11-2 Basic Turbine * * 0 * 0 11-5 11-3 Turbine...the systems. For example, a 25 kilowatt hydroturbine built and installed by Bell Hydroelectric would cost approximately $20,000 in 1978 (6:49). The...O Manual Controller S2 E- Microprocessor Controller 1 2 3 4 5 6 7 8 YEARS Fig. 1-2 Comparative Costs of Controllers For a Typical 10 KW Hydroturbine

  17. Monitoring with new microprocessor cuts cost of control system

    Maehling, K L

    1985-08-01

    Programmable logic controllers (PLC) were originally developed as an alternative to relays, counters and timers for sequential and interlock control systems. They are now also used as part of distributive control systems which include diagnostic monitoring functions. The paper describes how a wiring scheme can be simplified and installation costs reduced by incorporating a newly-developed microprocessor-based monitoring device as an interface between remote devices and a PLC. An industrial application, the 400 tph coal handling facility at Bowater Southern Paper Co's mill in Calhoun, Tennessee, is considered. The control system design is outlined, the micro-monitor is described and the benefits of simplicity are stated in the paper.

  18. Strategy Guideline. High Performance Residential Lighting

    Holton, J. [IBACOS, Inc., Pittsburgh, PA (United States)]

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  19. Architecting Web Sites for High Performance

    Arun Iyengar

    2002-01-01

    Full Text Available Web site applications are some of the most challenging high-performance applications currently being developed and deployed. The challenges emerge from the specific combination of high variability in workload characteristics and of high performance demands regarding the service level, scalability, availability, and costs. In recent years, a large body of research has addressed the Web site application domain, and a host of innovative software and hardware solutions have been proposed and deployed. This paper is an overview of recent solutions concerning the architectures and the software infrastructures used in building Web site applications. The presentation emphasizes three of the main functions in a complex Web site: the processing of client requests, the control of service levels, and the interaction with remote network caches.

  20. High performance anode for advanced Li batteries

    Lake, Carla [Applied Sciences, Inc., Cedarville, OH (United States)]

    2015-11-02

    The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI’s Si-CNF high-performance anode by creating a framework for large volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon which is deposited on a nano-scale carbon filament to achieve the benefits of high cycle life and high charge capacity without the consequent fading of, or failure in the capacity resulting from stress-induced fracturing of the Si particles and de-coupling from the electrode. ASI’s patented coating process distinguishes itself from others, in that it is highly reproducible, readily scalable and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded interface at the Si-CNF interface that significantly improve cycling stability and enhances adhesion of silicon to the carbon fiber support. In Phase I, the team demonstrated the production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor into a fluidized bed reactor. In addition, ASI made significant progress in the development of low cost, quick testing methods which can be performed on silicon coated CNFs as a means of quality control. To date, weight change, density, and cycling performance were the key metrics used to validate the high performance anode material. Under this effort, ASI made strides to establish a quality control protocol for the large volume production of Si-CNFs and has identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high volume and low-cost production of SI-CNF material for anodes in Li-ion batteries.

  1. NINJA: Java for High Performance Numerical Computing

    José E. Moreira

    2002-01-01

    Full Text Available When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

  2. Development of high performance cladding materials

    Park, Jeong Yong; Jeong, Y. H.; Park, S. Y.

    2010-04-01

    The irradiation test for HANA claddings was conducted, and a series of evaluations of next-generation HANA claddings, including in-pile and out-of-pile performance tests, was also carried out at the Halden research reactor. The 6th irradiation test has been completed successfully in the Halden research reactor. As a result, HANA claddings showed high performance, with corrosion resistance increased by 40% compared to Zircaloy-4. The high performance of HANA claddings in the Halden test has enabled a lead test rod program as the first step toward the commercialization of HANA claddings. A database has been established for thermal and LOCA-related properties. The thermal shock test confirmed that the integrity of HANA claddings was maintained over a wider region than the criteria regulated by the NRC. The manufacturing process for strips was established in order to apply the HANA alloys, which were originally developed for claddings, to spacer grids. 250 kinds of model alloys for the next-generation claddings were designed and manufactured over four iterations and used to select the preliminary candidate alloys for the next-generation claddings. The selected candidate alloys showed 50% better corrosion resistance and 20% improved high-temperature oxidation resistance compared to foreign advanced claddings. We established the manufacturing conditions controlling the performance of the dual-cooled claddings by varying the reduction rate in the cold working steps.

  3. A Linux Workstation for High Performance Graphics

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party, 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware our goal was to provide a free, redistributable, and fully-compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC-class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.

  4. The path toward HEP High Performance Computing

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of peak. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  5. High Performance Commercial Fenestration Framing Systems

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

    A major objective of the U.S. Department of Energy is to have a zero energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope, as they control over 55% of the building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e., windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherently good structural properties and the long service life required of commercial and architectural frames. At the same time, they are lightweight and durable, require very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainability benefits. From an energy efficiency point of view, however, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys therefore perform poorly as barriers to energy transfer (heat loss or gain). Despite this lower energy performance, aluminum is the material of choice for commercial framing systems and dominates the commercial/architectural fenestration market for the reasons mentioned above. In addition, no other cost-effective and energy-efficient replacement material is available to take the place of aluminum in the commercial/architectural market. Hence it is imperative to improve the thermal performance of aluminum framing systems in order to improve the energy performance of commercial fenestration systems, reduce the energy consumption of commercial buildings, and achieve a zero energy building by 2025. The objective of this project was to develop high performance, energy efficient commercial

  6. Fracture toughness of ultra high performance concrete by flexural performance

    Manolova Emanuela

    2016-01-01

    Full Text Available This paper describes the fracture toughness of an innovative structural material, Ultra High Performance Concrete (UHPC), evaluated by flexural performance. Adapted standard test methods for the flexural performance of fiber-reinforced concrete (ASTM C 1609 and ASTM C 1018) are used to determine the material behaviour under static loading. Fracture toughness is estimated from various deformation parameters derived from the load-deflection curve, obtained by testing a simply supported beam under third-point loading using a servo-controlled testing system. The method is used to estimate the contribution of the embedded fiber reinforcement to the improvement of the fracture behaviour of UHPC, through changes in crack-resistance capacity, fracture toughness, and energy absorption capacity by various mechanisms. The position of the first crack has been determined from the P-δ (load-deflection) response and the P-ε (load-longitudinal deformation in the tensile zone) response, which are used for calculation of the two toughness indices I5 and I10. The combination of steel fibres with different dimensions leads to a composite having, at the same time, increased crack resistance, first crack formation, ductility and post-peak residual strength.

  7. Microprocessor system to recover data from a self-scanning photodiode array

    Koppel, L.N.; Gadd, T.J.

    1975-01-01

    A microprocessor system developed at Lawrence Livermore Laboratory has expedited the recovery of data describing the low energy x-ray spectra radiated by laser-fusion targets. An Intel microprocessor controls the digitization and scanning of the data stream of an x-ray-sensitive self-scanning photodiode array incorporated in a crystal diffraction spectrometer

  8. A microprocessor controlled read out system for drift chambers

    Centro, Sandro; Cittolin, Sergio; Dreesen, P; Petrolo, E; Rubbia, Carlo; Schinzel, D

    1981-01-01

    Summary form only given, as follows. A General Purpose Microprocessor Controller (GPMC) has been developed for applications where CAMAC modules with complex control functions are needed. Each application requires an appropriate Interface Module (IM) to be connected to the GPMC. The GPMC consists of a 6800 microprocessor, 16K EPROM, 2K RAM, CAMAC I/O ports and interface, an RS 232C serial interface, an Advanced Data Link controller and a port for controlling the IM; the GPMC and IM are housed in a 2-U wide CAMAC module. A special IM has been designed, which has 1K byte of RAM with its own control and which allows autonomous setting and reading of analog voltages through a DAC and ADC. The GPMC can take control of the IM memory and set new voltages. This system is used to control pedestals and gains of a drift-chamber readout system, which is housed in a 5-U wide CAMAC module holding 24 data cards corresponding to 24 sense wires. The data card receives pulses from the left and right end of a sense wire, amplifies and int...
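
    As a rough illustration of the pedestal-and-gain control described above, the sketch below injects known DAC levels and reads back ADC values for each sense-wire channel; set_dac, read_adc, and the simulated response are hypothetical stand-ins for the CAMAC/IM accesses, not the GPMC firmware.

```python
# Hedged sketch: calibrate pedestal and gain per channel by stepping a DAC and
# reading an ADC. The hardware calls are simulated so the script runs anywhere.
import random

def set_dac(channel, volts):          # placeholder for an IM DAC write
    set_dac.current = volts

def read_adc(channel):                # placeholder for an IM ADC read
    true_gain, true_pedestal = 100.0, 12.0
    return true_gain * set_dac.current + true_pedestal + random.gauss(0, 0.5)

def calibrate(channel, test_voltages=(0.0, 0.5, 1.0)):
    """Inject known DAC levels, read back ADC counts, estimate pedestal/gain."""
    readings = []
    for v in test_voltages:
        set_dac(channel, v)
        readings.append((v, read_adc(channel)))
    pedestal = readings[0][1]                                  # response at 0 V
    gain = (readings[-1][1] - pedestal) / test_voltages[-1]    # counts per volt
    return pedestal, gain

for wire in range(24):                # one entry per sense wire on a data card
    ped, gain = calibrate(wire)
    print(f"wire {wire:2d}: pedestal={ped:6.2f}  gain={gain:6.2f} counts/V")
```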

  9. Microprocessor controlled pulse charge and testing of batteries

    Kerezov, A.; Gishin, S.; Ivanov, Ratcho; Savov, S.

    2002-01-01

    The principle of the newly developed method for pulse charging of batteries is the use of current pulses whose period and amplitude are adjusted under microprocessor control according to the dynamically changing state of the electrochemical system. In order to realize this method, a programmable current source was developed. It is connected to a Personal Computer via an RS232 standard serial interface in order to control the electrochemical processes. The parameters to be set, the graphical presentation of the pulse current and voltage, and the quantity of electricity and electrical energy used for every pulse and for the process as a whole are shown on the PC display. In order to test dry-charged and wet-charged batteries, a specialized current generator was developed. It is also connected to a Personal Computer via an RS232 standard serial interface in order to control the testing of the starting capability of the batteries according to the requirements of the Bulgarian State Standard Ell 60095-1. (Author)
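
    A minimal sketch of the control idea only, not the published hardware or firmware: the pulse amplitude is tapered from the measured battery voltage and the charge delivered per pulse is accumulated, as the abstract describes. The battery model, thresholds and gains are invented; a real rig would command the programmable current source over the RS232 link instead of calling simulate_pulse().

```python
def simulate_pulse(voltage, amps, seconds):
    """Crude stand-in for the battery's voltage response to one current pulse."""
    return voltage + 0.0004 * amps * seconds

def pulse_charge(v_start=11.8, v_full=12.8, i_max=20.0, period=5.0):
    voltage, total_charge, pulses = v_start, 0.0, 0
    while voltage < v_full:
        pulses += 1
        # Taper the pulse amplitude as the cell approaches full charge.
        amps = i_max * max(0.1, (v_full - voltage) / (v_full - v_start))
        voltage = simulate_pulse(voltage, amps, period)
        total_charge += amps * period            # coulombs delivered this pulse
        if pulses % 10 == 0 or voltage >= v_full:
            print(f"pulse {pulses:3d}: {amps:5.2f} A  V={voltage:.3f}  "
                  f"Q_total={total_charge:7.1f} C")
    return total_charge

if __name__ == "__main__":
    pulse_charge()
```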

  10. HIGH PERFORMANCE CERIA BASED OXYGEN MEMBRANE

    2014-01-01

    The invention describes a new class of highly stable mixed conducting materials based on acceptor-doped cerium oxide (CeO2-δ) in which the limiting electronic conductivity is significantly enhanced by co-doping with a second element or co-dopant, such as Nb, W and Zn, so that cerium and the co-dopant have an ionic size ratio between 0.5 and 1. These materials can thereby improve the performance and extend the range of operating conditions of oxygen permeation membranes (OPM) for different high temperature membrane reactor applications. The invention also relates to the manufacturing of supported...

  11. Playa: High-Performance Programmable Linear Algebra

    Victoria E. Howle

    2012-01-01

    Full Text Available This paper introduces Playa, a high-level user interface layer for composing algorithms for complex multiphysics problems out of objects from other Trilinos packages. Among other features, Playa provides very high-performance overloaded operators implemented through an expression template mechanism. In this paper, we give an overview of the central Playa objects from a user's perspective, show application to a sequence of increasingly complex solver algorithms, provide timing results for Playa's overloaded operators and other functions, and briefly survey some of the implementation issues involved.

  12. Optimizing the design of very high power, high performance converters

    Edwards, R.J.; Tiagha, E.A.; Ganetis, G.; Nawrocky, R.J.

    1980-01-01

    This paper describes how various technologies are used to achieve the desired performance in a high current magnet power converter system. It is hoped that the discussions of the design approaches taken will be applicable to other power supply systems where stringent requirements in stability, accuracy and reliability must be met

  13. Robust High Performance Aquaporin based Biomimetic Membranes

    Helix Nielsen, Claus; Zhao, Yichun; Qiu, C.

    2013-01-01

    Aquaporins are water channel proteins with high water permeability and solute rejection, which makes them promising for preparing high-performance biomimetic membranes. Despite the growing interest in aquaporin-based biomimetic membranes (ABMs), it is challenging to produce robust and defect... on top of a support membrane. Control membranes, either without aquaporins or with the inactive AqpZ R189A mutant aquaporin, served as controls. The separation performance of the membranes was evaluated by cross-flow forward osmosis (FO) and reverse osmosis (RO) tests. In RO the ABM achieved a water permeability of ~ 4 L/(m2 h bar) with a NaCl rejection > 97% at an applied hydraulic pressure of 5 bar. The water permeability was ~40% higher compared to a commercial brackish water RO membrane (BW30) and an order of magnitude higher compared to a seawater RO membrane (SW30HR). In FO, the ABMs had > 90...

  14. Evaluation of high-performance computing software

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  15. High performance cloud auditing and applications

    Choi, Baek-Young; Song, Sejun

    2014-01-01

    This book mainly focuses on cloud security and high performance computing for cloud auditing. The book discusses emerging challenges and techniques developed for high performance semantic cloud auditing, and presents the state of the art in cloud auditing, computing and security techniques with focus on technical aspects and feasibility of auditing issues in federated cloud computing environments.   In summer 2011, the United States Air Force Research Laboratory (AFRL) CyberBAT Cloud Security and Auditing Team initiated the exploration of the cloud security challenges and future cloud auditing research directions that are covered in this book. This work was supported by the United States government funds from the Air Force Office of Scientific Research (AFOSR), the AFOSR Summer Faculty Fellowship Program (SFFP), the Air Force Research Laboratory (AFRL) Visiting Faculty Research Program (VFRP), the National Science Foundation (NSF) and the National Institute of Health (NIH). All chapters were partially suppor...

  16. Monitoring SLAC High Performance UNIX Computing Systems

    Lettsome, Annette K.

    2005-01-01

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process used in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface
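
    A minimal sketch of the script-driven approach: parse an XML metric dump of the kind gmond serves and insert each sample into a SQL table. The XML snippet and table schema are invented for illustration, and Python's built-in sqlite3 stands in for the MySQL database used in the actual work, so the example runs without any external services.

```python
import sqlite3
import time
import xml.etree.ElementTree as ET

# Invented gmond-style XML; a live setup would fetch this from the daemon.
SAMPLE_XML = """<GANGLIA_XML>
  <CLUSTER NAME="slac-unix">
    <HOST NAME="node01">
      <METRIC NAME="load_one" VAL="0.42" UNITS=""/>
      <METRIC NAME="mem_free" VAL="1834500" UNITS="KB"/>
    </HOST>
  </CLUSTER>
</GANGLIA_XML>"""

def store_metrics(xml_text, db_path=":memory:"):
    """Parse the metric dump and append one row per host/metric sample."""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS metrics
                    (ts REAL, host TEXT, name TEXT, value REAL, units TEXT)""")
    now = time.time()
    root = ET.fromstring(xml_text)
    for host in root.iter("HOST"):
        for metric in host.iter("METRIC"):
            conn.execute("INSERT INTO metrics VALUES (?, ?, ?, ?, ?)",
                         (now, host.get("NAME"), metric.get("NAME"),
                          float(metric.get("VAL")), metric.get("UNITS")))
    conn.commit()
    return conn

conn = store_metrics(SAMPLE_XML)
for row in conn.execute("SELECT host, name, value, units FROM metrics"):
    print(row)
```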

  17. High performance parallel computers for science

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 Mflops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop system is under construction.

  18. Toward a theory of high performance.

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts--and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts--including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart--have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance.

  19. High-performance phase-field modeling

    Vignal, Philippe; Sarmiento, Adel; Cortes, Adriano Mauricio; Dalcin, L.; Collier, N.; Calo, Victor M.

    2015-01-01

    and phase-field crystal equation will be presented, which corroborate the theoretical findings, and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.

  20. AHPCRC - Army High Performance Computing Research Center

    2010-01-01

    ...computing. Of particular interest is the ability of a distributed jamming network (DJN) to jam signals in all or part of a sensor or communications net...

  1. Performance concerns for high duty fuel cycle

    Esposito, V.J.; Gutierrez, J.E.

    1999-01-01

    One of the goals of the nuclear industry is to achieve economic performance such that nuclear power plants are competitive in a de-regulated market. The manner in which nuclear fuel is designed and operated lies at the heart of economic viability. In this sense, reliability, operating flexibility and low cost are the three major requirements of NPPs today. Translating these three requirements into the design is part of our work. The challenge today is to produce a fuel design which will operate with long operating cycles, high discharge burnup and power up-rating, while still maintaining all design and safety margins. European Fuel Group (EFG) understands that to achieve the required performance, high duty/energy fuel designs are needed. The concerns for high duty design include, among other items, core design methods, advanced Safety Analysis methodologies, performance models, advanced materials and operational strategies. The operational aspects require the trade-off and evaluation of various parameters including coolant chemistry control, material corrosion, boiling duty, boron level impacts, etc. In this environment, MAEF is the design that EFG is now offering, based on ZIRLO alloy and a robust skeleton. This new design is able to achieve 70 GWd/tU, and Lead Test Programs are being executed to demonstrate this capability. A number of performance issues which have been a concern with current designs, such as cladding corrosion and incomplete RCCA insertion (IRI), have been resolved. As the core duty becomes more aggressive, other new issues need to be addressed, such as Axial Offset Anomaly. These new issues are being addressed by combining the new design with advanced methodologies to meet the demanding needs of NPPs. The ability and strategy to meet high duty core requirements, maintain flexibility of operation and keep an acceptable balance of all technical issues are discussed in this paper. (authors)

  2. DURIP: High Performance Computing in Biomathematics Applications

    2017-05-10

    The goal of this award was to enhance the capabilities of the Department of Applied Mathematics and Statistics (AMS) at the University of California, Santa Cruz (UCSC) to conduct research and research-related education in areas of...

  3. High Performance Computing Operations Review Report

    Cupps, Kimberly C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  4. Planning for high performance project teams

    Reed, W.; Keeney, J.; Westney, R.

    1997-01-01

    Both industry-wide research and corporate benchmarking studies confirm the significant savings in cost and time that result from early planning of a project. Amoco's Team Planning Workshop combines long-term strategic project planning and short-term tactical planning with team building to provide the basis for high performing project teams, better project planning, and effective implementation of the Amoco Common Process for managing projects

  5. Integration in a nuclear physics experiment of a visualization unit managed by a microprocessor

    Lefebvre, M.

    1976-01-01

    A microprocessor (Intel 8080) is introduced into the equipment controlling the (e,e'p) experiment that will take place at the linear accelerator operating on the premises of the CEA (Orme des Merisiers, Gif-sur-Yvette, France). The purpose of the microprocessor is to handle the visualization tasks that are necessary for continuous control of the experiment. In this way, more time and more memory are left for data processing by the computer. In a later version of the system, control of the helium level in the target might also be assigned to the microprocessor. This work is divided into 7 main parts: 1) a presentation of the linear accelerator and its experimental facilities, 2) the Intel 8080 microprocessor and its programming, 3) the implementation of the microprocessor in the electronic system, 4) the management of the memory, 5) data acquisition, 6) the keyboard, and 7) the visualization unit [fr]

  6. Advanced Transport Operating System (ATOPS) color displays software description microprocessor system

    Slominski, Christopher J.; Plyler, Valerie E.; Dickson, Richard W.

    1992-01-01

    This document describes the software created for the Sperry Microprocessor Color Display System used for the Advanced Transport Operating Systems (ATOPS) project on the Transport Systems Research Vehicle (TSRV). The software delivery known as the 'baseline display system' is the one described in this document. Throughout this publication, module descriptions are presented in a standardized format which contains module purpose, calling sequence, detailed description, and global references. The global reference section includes procedures and common variables referenced by a particular module. The system described supports the Research Flight Deck (RFD) of the TSRV. The RFD contains eight cathode ray tubes (CRTs) which depict a Primary Flight Display, Navigation Display, System Warning Display, Takeoff Performance Monitoring System Display, and Engine Display.

  7. Computational Biology and High Performance Computing 2000

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  8. A rapid high-performance liquid-chromatographic method for simultaneously determining the concentrations of TFM and Bayer 73 in water during lampricide treatments

    Dawson, V.K.

    1982-01-01

    The high-performance liquid-chromatography (HPLC) procedure requires only minutes per sample, is specific, and is relatively sensitive (limit of detection ...). The sample is passed through a C18 disposable cartridge. The cartridge adsorbs and retains both the lampricides and the internal standard. The quantitative elution of the three chemicals from the cartridge with a small volume of methanol effectively concentrates the sample and provides sample cleanup. The methanol extract is then analyzed directly by HPLC on an MCH 10 reverse-phase column, using methanol:0.01 mol/L acetate buffer (87:13, v:v) as the mobile phase at 2 mL/min, with ultraviolet detection at 330 (or 254) nm. A microprocessor data system further facilitates the procedure by quantifying off-scale peaks and yielding results directly in units of concentration (mg/L).
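
    The data system's conversion from peak areas to mg/L can be illustrated with a standard internal-standard calculation; the sketch below is generic, and the peak areas, concentrations and response factor are invented numbers rather than values from this method.

```python
# Hedged sketch of internal-standard quantification as a data system might do it.
def relative_response_factor(area_analyte, conc_analyte, area_is, conc_is):
    """RRF from a calibration injection of known concentrations."""
    return (area_analyte / conc_analyte) / (area_is / conc_is)

def quantify(area_analyte, area_is, conc_is, rrf):
    """Concentration (mg/L) of the analyte in a sample injection."""
    return (area_analyte / area_is) * conc_is / rrf

# Calibration injection: 2.0 mg/L analyte spiked with 1.0 mg/L internal standard.
rrf = relative_response_factor(area_analyte=15400, conc_analyte=2.0,
                               area_is=8100, conc_is=1.0)

# Sample injection spiked with the same 1.0 mg/L internal standard.
conc = quantify(area_analyte=22800, area_is=7900, conc_is=1.0, rrf=rrf)
print(f"analyte concentration = {conc:.2f} mg/L")
```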

  9. High performance separation of lanthanides and actinides

    Sivaraman, N.; Vasudeva Rao, P.R.

    2011-01-01

    The major advantage of High Performance Liquid Chromatography (HPLC) is its ability to provide rapid, high-performance separations. It is evident from Van Deemter curves for different particle sizes that packing materials with particle sizes below 2 μm provide better resolution for high-speed separations and for resolving complex mixtures than 5 μm based supports. In the recent past, chromatographic support materials based on monoliths have been studied extensively at our laboratory. A monolith column consists of a single piece of porous, rigid material containing mesopores and micropores, which provide fast analyte mass transfer. Monolith supports provide significantly higher separation efficiency than particle-packed columns. A clear advantage of monoliths is that they can be operated at higher flow rates with lower back pressure. The higher column permeability allows higher operating flow rates, which drastically reduces analysis time while maintaining high separation efficiency. The fast separation methods developed above were applied to assay the lanthanides and actinides in dissolver solutions of nuclear reactor fuels.
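
    For reference, the Van Deemter relation behind the particle-size argument expresses plate height H (lower is better) as a function of mobile-phase linear velocity u; the A term grows with particle size and the C term with its square, which is why sub-2 μm particles (and the small domain sizes of monoliths) keep the curve flat at high flow rates.

```latex
% Van Deemter equation: plate height H versus mobile-phase linear velocity u.
% A: eddy diffusion, B: longitudinal diffusion, C: resistance to mass transfer.
H = A + \frac{B}{u} + C\,u
```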

  10. High Performance OLED Panel and Luminaire

    Spindler, Jeffrey [OLEDWorks LLC, Rochester, NY (United States)

    2017-02-20

    In this project, OLEDWorks developed and demonstrated the technology required to produce OLED lighting panels with high energy efficiency and excellent light quality. OLED panels developed in this program produce high quality warm white light with CRI greater than 85 and efficacy up to 80 lumens per watt (LPW). An OLED luminaire employing 24 of the high performance panels produces practical levels of illumination for general lighting, with a flux of over 2200 lumens at 60 LPW. This is a significant advance in the state of the art for OLED solid-state lighting (SSL), which is expected to be a complementary light source to the more advanced LED SSL technology that is rapidly replacing all other traditional forms of lighting.

  11. Thermal treatment system of hazardous residuals in three heating zones based on a microprocessor

    Luna H, C.L.

    1997-01-01

    The thermal treatment system consists of a high-power electric oven with three heating zones, each of which operates at up to 1200 °C; it is capable of raising the central zone temperature to 1000 °C in approximately 58 minutes. The three zones can be programmed to different temperatures and are digitally controlled by a control microprocessor, programmed in its own assembly language and implementing PID control. Other important functions are also based on this microprocessor: signal amplification, starting and shutdown of high-power step relays, activation and deactivation of both analog/digital and digital/analog converters, port activation, and basic data storage for the system. Two main characteristics were sought in this oven design: the first was the ability to control the temperature of the three zones, and the second was to reduce the heat-up and stabilization time under digital control. The principal function of the three-zone oven is to accelerate the degradation of hazardous residuals by oxidation rather than combustion, using relatively high temperatures (minimum 800 °C, maximum 1200 °C); this process reduces the production of ash and volatile particulates. The hazardous residuals are pumped into the degradation system and then atomized through a packed column; this step avoids direct contact of the residuals with the oven cores. These features make the system a closed process, which means that the residuals cannot leak into the working area, reducing the exposure risk to personnel. This three-zone oven is the first stage of the complete hazardous residual degradation system; after it, the flow goes into a cold plasma region where the process is completed, forming a closed system. (Author)
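
    A minimal sketch of the control scheme only: independent PID loops, one per heating zone, each with its own setpoint, with output saturation and simple anti-windup. The thermal model, gains and setpoints below are invented for illustration and are not the parameters of the system described above.

```python
class PID:
    """Textbook PID with conditional integration as a simple anti-windup."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd, self.setpoint = kp, ki, kd, setpoint
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, measured, dt):
        error = self.setpoint - measured
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        output = self.kp * error + self.ki * self.integral + self.kd * derivative
        if 0.0 < output < 100.0:              # freeze the integral while saturated
            self.integral += error * dt
        return min(100.0, max(0.0, output))   # heater power, percent

def simulate(minutes=60, dt=1.0):
    temps = [25.0, 25.0, 25.0]                # zone temperatures, deg C
    zones = [PID(2.0, 0.02, 5.0, sp) for sp in (800.0, 1000.0, 800.0)]
    for _ in range(int(minutes * 60 / dt)):
        for i, pid in enumerate(zones):
            power = pid.update(temps[i], dt)
            # Crude first-order thermal model of one zone.
            temps[i] += dt * (0.05 * power - 0.005 * (temps[i] - 25.0))
    return temps

print("zone temperatures after 60 min:", [round(t) for t in simulate()])
```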

  12. The path toward HEP High Performance Computing

    Apostolakis, John; Brun, René; Gheata, Andrei; Wenzel, Sandro; Carminati, Federico

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of peak. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single-threaded version, together with sub-optimal handling of event processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grain parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit
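
    Purely as an illustration of the basket-scheduling idea mentioned above (and not Geant-V code), the sketch below groups tracks into fixed-size vectors and dispatches them to a configurable pool of workers; propagate() is a placeholder for real transport physics and a thread pool stands in for arbitrary computing resources.

```python
from concurrent.futures import ThreadPoolExecutor
import random

BASKET_SIZE = 64

def propagate(basket):
    """Advance every track in the basket by one step (placeholder physics)."""
    return [(x + random.uniform(0.0, 1.0), e * 0.99) for x, e in basket]

def make_baskets(tracks, size=BASKET_SIZE):
    """Group tracks into fixed-size vectors ("baskets")."""
    return [tracks[i:i + size] for i in range(0, len(tracks), size)]

def transport(tracks, workers=4):
    """Schedule the baskets onto an arbitrary number of workers."""
    baskets = make_baskets(tracks)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(propagate, baskets))
    return [track for basket in results for track in basket]

if __name__ == "__main__":
    tracks = [(0.0, 10.0) for _ in range(1000)]      # (position, energy) pairs
    stepped = transport(tracks)
    print(len(stepped), "tracks stepped in", len(make_baskets(tracks)), "baskets")
```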

  13. A High Performance COTS Based Computer Architecture

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so large that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the behavior of COTS components. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the advantages and constraints of using COTS components for space applications; then we briefly describe existing fault mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  14. Management issues for high performance storage systems

    Louis, S. [Lawrence Livermore National Lab., CA (United States); Burris, R. [Oak Ridge National Lab., TN (United States)

    1995-03-01

    Managing distributed high-performance storage systems is complex and, although sharing common ground with traditional network and systems management, presents unique storage-related issues. Integration technologies and frameworks exist to help manage distributed network and system environments. Industry-driven consortia provide open forums where vendors and users cooperate to leverage solutions. But these new approaches to open management fall short addressing the needs of scalable, distributed storage. We discuss the motivation and requirements for storage system management (SSM) capabilities and describe how SSM manages distributed servers and storage resource objects in the High Performance Storage System (HPSS), a new storage facility for data-intensive applications and large-scale computing. Modern storage systems, such as HPSS, require many SSM capabilities, including server and resource configuration control, performance monitoring, quality of service, flexible policies, file migration, file repacking, accounting, and quotas. We present results of initial HPSS SSM development including design decisions and implementation trade-offs. We conclude with plans for follow-on work and provide storage-related recommendations for vendors and standards groups seeking enterprise-wide management solutions.

  15. High-performance computing in seismology

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  16. A high performance architecture for accelerator controls

    Allen, M.; Hunt, S.M; Lue, H.; Saltmarsh, C.G.; Parker, C.R.C.B.

    1991-01-01

    The demands placed on the Superconducting Super Collider (SSC) control system due to large distances, high bandwidth and fast response time required for operation will require a fresh approach to the data communications architecture of the accelerator. The prototype design effort aims at providing deterministic communication across the accelerator complex with a response time of < 100 ms and total bandwidth of 2 Gbits/sec. It will offer a consistent interface for a large number of equipment types, from vacuum pumps to beam position monitors, providing appropriate communications performance for each equipment type. It will consist of highly parallel links to all equipment: those with computing resources, non-intelligent direct control interfaces, and data concentrators. This system will give each piece of equipment a dedicated link of fixed bandwidth to the control system. Application programs will have access to all accelerator devices which will be memory mapped into a global virtual addressing scheme. Links to devices in the same geographical area will be multiplexed using commercial Time Division Multiplexing equipment. Low-level access will use reflective memory techniques, eliminating processing overhead and complexity of traditional data communication protocols. The use of commercial standards and equipment will enable a high performance system to be built at low cost

  17. A high performance architecture for accelerator controls

    Allen, M.; Hunt, S.M.; Lue, H.; Saltmarsh, C.G.; Parker, C.R.C.B.

    1991-03-01

    The demands placed on the Superconducting Super Collider (SSC) control system due to large distances, high bandwidth and fast response time required for operation will require a fresh approach to the data communications architecture of the accelerator. The prototype design effort aims at providing deterministic communication across the accelerator complex with a response time of <100 ms and total bandwidth of 2 Gbits/sec. It will offer a consistent interface for a large number of equipment types, from vacuum pumps to beam position monitors, providing appropriate communications performance for each equipment type. It will consist of highly parallel links to all equipment: those with computing resources, non-intelligent direct control interfaces, and data concentrators. This system will give each piece of equipment a dedicated link of fixed bandwidth to the control system. Application programs will have access to all accelerator devices which will be memory mapped into a global virtual addressing scheme. Links to devices in the same geographical area will be multiplexed using commercial Time Division Multiplexing equipment. Low-level access will use reflective memory techniques, eliminating processing overhead and complexity of traditional data communication protocols. The use of commercial standards and equipment will enable a high performance system to be built at low cost. 1 fig
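
    The memory-mapped device model described above can be illustrated with a small, hedged sketch: a region of (here, file-backed) memory stands in for reflective memory, and a device's readings appear at a fixed offset that applications read with plain memory operations rather than a messaging protocol. The offset, layout and beam-position example are invented for illustration.

```python
import mmap
import struct
import tempfile

REGION_SIZE = 4096
BPM_OFFSET = 0x100          # hypothetical offset of a beam position monitor slot

with tempfile.TemporaryFile() as backing:
    backing.truncate(REGION_SIZE)
    region = mmap.mmap(backing.fileno(), REGION_SIZE)

    # A device (or the reflective-memory network) updates its slot in place...
    region[BPM_OFFSET:BPM_OFFSET + 8] = struct.pack("<ff", 0.12, -0.03)

    # ...and any application simply reads the mapped addresses.
    x, y = struct.unpack_from("<ff", region, BPM_OFFSET)
    print(f"BPM position: x={x:+.2f} mm, y={y:+.2f} mm")
    region.close()
```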

  18. High performance computing in linear control

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both theory and applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory. Control Theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and have been available in recent years. These supercomputers offer very high speed in computations. Highly efficient software, based on powerful algorithms, has been developed to use on these advanced computers, and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments

  19. Building Trust in High-Performing Teams

    Aki Soudunsaari

    2012-06-01

    Full Text Available Facilitation of growth is more about good, trustworthy contacts than capital. Trust is a driving force for business creation, and to create a global business you need to build a team that is capable of meeting the challenge. Trust is a key factor in team building and a needed enabler for cooperation. In general, trust building is a slow process, but it can be accelerated with open interaction and good communication skills. The fast-growing and ever-changing nature of global business sets demands for cooperation and team building, especially for startup companies. Trust building needs personal knowledge and regular face-to-face interaction, but it also requires empathy, respect, and genuine listening. Trust increases communication, and rich and open communication is essential for the building of high-performing teams. Other building materials are a shared vision, clear roles and responsibilities, willingness for cooperation, and supporting and encouraging leadership. This study focuses on trust in high-performing teams. It asks whether it is possible to manage trust and which tools and operation models should be used to speed up the building of trust. In this article, preliminary results from the authors’ research are presented to highlight the importance of sharing critical information and having a high level of communication through constant interaction.

  20. Improving UV Resistance of High Performance Fibers

    Hassanin, Ahmed

    High performance fibers are characterized by their superior properties compared to traditional textile fibers. High strength fibers have a high modulus, a high strength-to-weight ratio, high chemical resistance, and usually high temperature resistance. They are used in applications where superior properties are needed, such as bulletproof vests, ropes and cables, cut resistant products, load tendons for giant scientific balloons, fishing rods, tennis racket strings, parachute cords, adhesives and sealants, protective apparel and tire cords. Unfortunately, ultraviolet (UV) radiation causes serious degradation to most high performance fibers. UV light, either natural or artificial, causes organic compounds to decompose and degrade, because the energy of the photons of UV light is high enough to break chemical bonds, causing chain scission. This work aims at achieving maximum protection of high performance fibers using sheathing approaches. The proposed sheaths are lightweight, to maintain the high strength-to-weight ratio that is the advantage of high performance fibers. This study involves developing three different types of sheathing. The product of interest that needs to be protected from UV is a braid of PBO. The first approach is extruding a sheath of Low Density Polyethylene (LDPE) loaded with different percentages of rutile TiO2 nanoparticles around the PBO braid. The results of this approach showed that the LDPE sheath loaded with 10% TiO2 by weight achieved the highest protection compared to 0% and 5% TiO2. The protection here is judged by the strength loss of the PBO. This trend was observed in different weathering environments, where the sheathed samples were exposed to UV-VIS radiation in different weatherometer equipment as well as to a high altitude environment using a NASA BRDL balloon. The second approach focuses on developing a protective porous membrane from polyurethane loaded with rutile TiO2 nanoparticles. Membrane from polyurethane loaded with 4

  1. Development of high-performance blended cements

    Wu, Zichao

    2000-10-01

    This thesis presents the development of high-performance blended cements from industrial by-products. To overcome the low early strength of blended cements, several chemicals were studied as activators for cement hydration. Sodium sulfate was found to be the best activator. The blending proportions were optimized by Taguchi experimental design. The optimized blended cements containing up to 80% fly ash performed better than Type I cement in strength development and durability. Maintaining a constant cement content, concrete produced from the optimized blended cements had equal or higher strength and higher durability than that produced from Type I cement alone. The key to the activation mechanism was the reaction between the added SO4^2- and the Ca^2+ dissolved from cement hydration products.

  2. Utilities for high performance dispersion model PHYSIC

    Yamazawa, Hiromi

    1992-09-01

    The description and usage of the utilities for the dispersion calculation model PHYSIC are summarized. The model was developed in a study on developing a high performance SPEEDI, with the purpose of introducing a meteorological forecast function into the environmental emergency response system. The procedure of a PHYSIC calculation consists of three steps: preparation of the relevant files, creation and submission of the JCL, and graphic output of the results. A user can carry out the above procedure with the help of the Geographical Data Processing Utility, the Model Control Utility, and the Graphic Output Utility. (author)

  3. High performance visual display for HENP detectors

    McGuigan, M; Spiletic, J; Fine, V; Nevski, P

    2001-01-01

    A high end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detector. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long even on a powerful workstation. To visualize the HENP detectors with maximal performance we have developed software with the following characteristics. We develop a visual display of HENP detectors on the BNL multiprocessor visualization server at multiple levels of detail. We work with a general and generic detector framework consistent with ROOT, GAUDI, etc., to avoid conflicts with the many graphics development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of the detector and events by directly running the analysis in the BNL stereoscopic theatre. We construct enhanced interactiv...

  4. High-Performance Vertical Organic Electrochemical Transistors.

    Donahue, Mary J; Williamson, Adam; Strakosas, Xenofon; Friedlein, Jacob T; McLeod, Robert R; Gleskova, Helena; Malliaras, George G

    2018-02-01

    Organic electrochemical transistors (OECTs) are promising transducers for biointerfacing due to their high transconductance, biocompatibility, and availability in a variety of form factors. Most OECTs reported to date, however, utilize rather large channels, limiting the transistor performance and resulting in a low transistor density. This is typically a consequence of limitations associated with traditional fabrication methods and with 2D substrates. Here, the fabrication and characterization of OECTs with vertically stacked contacts, which overcome these limitations, is reported. The resulting vertical transistors exhibit a reduced footprint, increased intrinsic transconductance of up to 57 mS, and a geometry-normalized transconductance of 814 S m -1 . The fabrication process is straightforward and compatible with sensitive organic materials, and allows exceptional control over the transistor channel length. This novel 3D fabrication method is particularly suited for applications where high density is needed, such as in implantable devices. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. High Performance Data Distribution for Scientific Community

    Tirado, Juan M.; Higuero, Daniel; Carretero, Jesus

    2010-05-01

    Institutions such as NASA, ESA or JAXA need solutions to distribute data from their missions to the scientific community and to their long-term archives. This is a complex problem, as it involves a vast amount of data, several geographically distributed archives, heterogeneous architectures with heterogeneous networks, and users spread around the world. We propose a novel architecture (HIDDRA) that solves this problem, aiming to reduce user intervention in data acquisition and processing. HIDDRA is a modular system that provides a highly efficient parallel multiprotocol download engine, using a publish/subscribe policy which helps the final user obtain data of interest transparently. Our system can deal simultaneously with multiple protocols (HTTP, HTTPS, FTP, GridFTP, among others) to obtain the maximum bandwidth, reducing the workload on the data server and increasing flexibility. It can also provide high reliability and fault tolerance, as several sources of data can be used to perform one file download. The HIDDRA architecture can be arranged into a data distribution network deployed on several sites that cooperate to provide the aforementioned features. HIDDRA has been cited by the 2009 e-IRG Report on Data Management as a promising initiative for data interoperability. Our first prototype has been evaluated in collaboration with the ESAC centre in Villafranca del Castillo (Spain) and shows high scalability and performance, opening a wide spectrum of opportunities. Some preliminary results have been published in the Journal of Astrophysics and Space Science [1]. [1] D. Higuero, J.M. Tirado, J. Carretero, F. Félix, and A. de La Fuente. HIDDRA: a highly independent data distribution and retrieval architecture for space observation missions. Astrophysics and Space Science, 321(3):169-175, 2009
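
    As a hedged illustration of the multi-source, parallel download idea only (not HIDDRA's actual engine), the sketch below splits a file into segments, fetches them concurrently, and falls back to another mirror when a source fails; the in-memory mirrors and failure model replace real HTTP/FTP/GridFTP endpoints so the example runs offline.

```python
from concurrent.futures import ThreadPoolExecutor
import random

FILE_DATA = bytes(range(256)) * 64                 # the "remote" file (16 KiB)
MIRRORS = {"site-a": FILE_DATA, "site-b": FILE_DATA, "site-c": FILE_DATA}
SEGMENT = 4096

def fetch(mirror, start, end):
    """Return one byte range from a mirror; site-c is made unreliable."""
    if mirror == "site-c" and random.random() < 0.5:
        raise IOError(f"{mirror} unavailable")
    return MIRRORS[mirror][start:end]

def fetch_segment(rng):
    """Try mirrors in random order until one delivers the segment."""
    start, end = rng
    for mirror in random.sample(list(MIRRORS), len(MIRRORS)):
        try:
            return start, fetch(mirror, start, end)
        except IOError:
            continue                                # fall back to the next source
    raise IOError(f"all mirrors failed for bytes {start}-{end}")

def download(size, workers=4):
    """Fetch all segments in parallel and reassemble the file."""
    ranges = [(s, min(s + SEGMENT, size)) for s in range(0, size, SEGMENT)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = dict(pool.map(fetch_segment, ranges))
    return b"".join(parts[s] for s, _ in ranges)

assert download(len(FILE_DATA)) == FILE_DATA
print("reassembled", len(FILE_DATA), "bytes from multiple mirrors")
```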

  6. High-performance laboratories and cleanrooms; TOPICAL

    Tschudi, William; Sartor, Dale; Mills, Evan; Xu, Tengfang

    2002-01-01

    The California Energy Commission sponsored this roadmap to guide energy efficiency research and deployment for high performance cleanrooms and laboratories. Industries and institutions utilizing these building types (termed high-tech buildings) have played an important part in the vitality of the California economy. This roadmap's key objective is to present a multi-year agenda to prioritize and coordinate research efforts. It also addresses delivery mechanisms to get the research products into the market. Because of the importance to the California economy, it is appropriate and important for California to take the lead in assessing the energy efficiency research needs, opportunities, and priorities for this market. In addition to the importance to California's economy, energy demand for this market segment is large and growing (estimated at 9400 GWH for 1996, Mills et al. 1996). With their 24-hour continuous operation, high-tech facilities are a major contributor to peak electrical demand. Laboratories and cleanrooms constitute the high-tech building market, and although each building type has its unique features, they are similar in that they are extremely energy intensive, involve special environmental considerations, have very high ventilation requirements, and are subject to regulations, primarily safety driven, that tend to have adverse energy implications. High-tech buildings have largely been overlooked in past energy efficiency research. Many industries and institutions utilize laboratories and cleanrooms. As illustrated, there are many industries operating cleanrooms in California. These include semiconductor manufacturing, semiconductor suppliers, pharmaceutical, biotechnology, disk drive manufacturing, flat panel displays, automotive, aerospace, food, hospitals, medical devices, universities, and federal research facilities.

  7. Transport in JET high performance plasmas

    2001-01-01

    Two types of high performance scenario have been produced in JET during the DTE1 campaign. One of them is the well known ELM-free hot ion H-mode scenario, extensively used in the past, which has two distinct regions: the plasma core and the edge transport barrier. The results obtained during the DTE1 campaign with D, DT and pure T plasmas confirm our previous conclusion that the core transport scales as gyroBohm in the inner half of the plasma volume, recovers its Bohm nature closer to the separatrix and behaves as ion neoclassical in the transport barrier. Measurements on top of the barrier suggest that the width of the barrier depends on the isotope and, moreover, that fast ions play a key role. The other high performance scenario is the relatively recently developed Optimised Shear scenario with small or slightly negative magnetic shear in the plasma core. Different mechanisms of Internal Transport Barrier (ITB) formation have been tested by predictive modelling and the results are compared with experimentally observed phenomena. The experimentally observed non-penetration of heavy impurities through the strong ITB, which contradicts the prediction of conventional neo-classical theory, is discussed. (author)

  8. Transport in JET high performance plasmas

    1999-01-01

    Two types of high performance scenario have been produced in JET during the DTE1 campaign. One of them is the well known ELM-free hot ion H-mode scenario, extensively used in the past, which has two distinct regions: the plasma core and the edge transport barrier. The results obtained during the DTE1 campaign with D, DT and pure T plasmas confirm our previous conclusion that the core transport scales as gyroBohm in the inner half of the plasma volume, recovers its Bohm nature closer to the separatrix and behaves as ion neoclassical in the transport barrier. Measurements on top of the barrier suggest that the width of the barrier depends on the isotope and, moreover, that fast ions play a key role. The other high performance scenario is the relatively recently developed Optimised Shear scenario with small or slightly negative magnetic shear in the plasma core. Different mechanisms of Internal Transport Barrier (ITB) formation have been tested by predictive modelling and the results are compared with experimentally observed phenomena. The experimentally observed non-penetration of heavy impurities through the strong ITB, which contradicts the prediction of conventional neo-classical theory, is discussed. (author)

  9. High-performance vertical organic transistors.

    Kleemann, Hans; Günther, Alrun A; Leo, Karl; Lüssem, Björn

    2013-11-11

    Vertical organic thin-film transistors (VOTFTs) are promising devices to overcome the transconductance and cut-off frequency restrictions of horizontal organic thin-film transistors. The basic physical mechanisms of VOTFT operation, however, are not well understood, and VOTFTs often require complex patterning techniques using self-assembly processes, which impedes future large-area production. In this contribution, high-performance vertical organic transistors comprising pentacene for p-type operation and C60 for n-type operation are presented. The static current-voltage behavior as well as the fundamental scaling laws of such transistors are studied, disclosing a remarkable transistor operation whose behavior is limited by the injection of charge carriers. The transistors are manufactured by photolithography, in contrast to other VOTFT concepts using self-assembled source electrodes. Fluorinated photoresist and solvent compounds allow for photolithographic patterning directly and strongly onto the organic materials, simplifying the fabrication protocol and making VOTFTs a prospective candidate for future high-performance applications of organic transistors. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Performance of the CMS High Level Trigger

    Perrotta, Andrea

    2015-01-01

    The CMS experiment has been designed with a 2-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increases in center-of-mass energy and luminosity will raise the event rate to a level challenging for the HLT algorithms. The increase in the number of interactions per bunch crossing, on average 25 in 2012, and expected to be around 40 in Run II, will be an additional complication. We present here the expected performance of the main triggers that will be used during the 2015 data taking campaign, paying particular attention to the new approaches that have been developed to cope with the challenges of the new run. This includes improvements in HLT electron and photon reconstruction as well as better performing muon triggers. We will also present the performance of the improved trac...

  11. Development of a High Performance Spacer Grid

    Song, Kee Nam; Song, K. N.; Yoon, K. H. (and others)

    2007-03-15

    A spacer grid in a LWR fuel assembly is a key structural component that supports the fuel rods and enhances the heat transfer from the fuel rod to the coolant. In this research, the main research items are the development of inherent and high performance spacer grid shapes, the establishment of mechanical/structural analysis and test technology, and the set-up of basic test facilities for the spacer grid. The main research areas and results are as follows. 1. 18 different spacer grid candidates have been invented and applied for domestic and US patents; among the candidates, 16 have been patented. 2. Two kinds of spacer grids are finally selected for the advanced LWR fuel after detailed performance tests on the candidates and on commercial spacer grids from a mechanical/structural point of view. According to the test results, the features of the selected spacer grids are better than those of the commercial spacer grids. 3. Four kinds of basic test facilities are set up and the relevant test technologies are established. 4. Mechanical/structural analysis models and technology for spacer grid performance are developed and the analysis results are compared with the test results to enhance the reliability of the models.

  12. Low cost high performance uncertainty quantification

    Bekas, C.

    2009-01-01

    Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques that employ matrix factorizations incur a cubic cost which quickly becomes intractable with the current explosion of data sizes. In this work we reduce this complexity to quadratic with the synergy of two algorithms that gracefully complement each other and lead to a radically different approach. First, we turned to stochastic estimation of the diagonal. This allowed us to cast the problem as a linear system with a relatively small number of multiple right hand sides. Second, for this linear system we developed a novel, mixed precision, iterative refinement scheme, which uses iterative solvers instead of matrix factorizations. We demonstrate that the new framework not only achieves the much needed quadratic cost but in addition offers excellent opportunities for scaling in massively parallel environments. We based our implementation on BLAS 3 kernels that ensure very high processor performance. We achieved a peak performance of 730 TFlops on 72 BG/P racks, with a sustained performance of 73% of the theoretical peak. We stress that the techniques presented in this work are quite general and applicable to several other important applications. Copyright © 2009 ACM.
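
    A minimal sketch of the stochastic approach described above, under the assumption of a symmetric positive definite matrix: the diagonal of A^{-1} is estimated from Rademacher probe vectors, and every solve A x = v uses an iterative method (plain conjugate gradients here) instead of a factorization. The probe count, solver choice, preconditioning and the paper's mixed-precision refinement are application dependent and not shown.

      import numpy as np
      from scipy.sparse.linalg import cg

      def estimate_inverse_diagonal(A, num_probes=64, rng=None):
          # diag(A^{-1}) ~ sum_s v_s * (A^{-1} v_s) / sum_s v_s * v_s
          rng = np.random.default_rng(rng)
          n = A.shape[0]
          num = np.zeros(n)   # accumulates v * (A^{-1} v)
          den = np.zeros(n)   # accumulates v * v
          for _ in range(num_probes):
              v = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
              x, info = cg(A, v)                    # iterative solve of A x = v
              if info != 0:
                  raise RuntimeError("CG did not converge")
              num += v * x
              den += v * v
          return num / den

      # Small self-check on a diagonally dominant SPD matrix.
      A = np.diag([2.0, 4.0, 5.0]) + 0.1
      print(estimate_inverse_diagonal(A, num_probes=200, rng=0))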

  13. Energy Efficient Graphene Based High Performance Capacitors.

    Bae, Joonwon; Kwon, Oh Seok; Lee, Chang-Soo

    2017-07-10

    Graphene (GRP) is an interesting class of nano-structured electronic materials for various cutting-edge applications. To date, extensive research activities have been performed on the investigation of the diverse properties of GRP. The incorporation of this elegant material can be very lucrative in terms of practical applications in energy storage/conversion systems. Among those various systems, high performance electrochemical capacitors (ECs) have become popular due to the recent need for energy efficient and portable devices. Therefore, in this article, the application of GRP for capacitors is described succinctly. In particular, a concise summary of previous research activities regarding GRP based capacitors is provided. It was revealed that many secondary materials such as polymers and metal oxides have been introduced to improve the performance. Also, diverse devices have been combined with capacitors to broaden their use. More importantly, recent patents related to the preparation and application of GRP based capacitors are also introduced briefly. This article can provide essential information for future study. Copyright© Bentham Science Publishers.

  14. SISYPHUS: A high performance seismic inversion factory

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In the recent years the massively parallel high performance computers became the standard instruments for solving the forward and inverse problems in seismology. The respective software packages dedicated to forward and inverse waveform modelling specially designed for such computers (SPECFEM3D, SES3D) became mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve problems of bigger size at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset performance benefits provided by even the most powerful modern supercomputers. Furthermore, a typical system architecture of modern supercomputing platforms is oriented towards the maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for the modern massively parallel high performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with
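
    The abstract is cut off before the database layout is described; purely as a hypothetical illustration (names and fields are invented, not taken from SISYPHUS), an in-memory record for tracing one inversion iteration, its per-event misfits and adjoint-source status could look like this:

      from dataclasses import dataclass, field
      from typing import Dict, Optional

      @dataclass
      class EventRecord:
          event_id: str
          misfit: Optional[float] = None     # total misfit for this event, once computed
          adjoint_sources_ready: bool = False

      @dataclass
      class IterationState:
          index: int
          model_tag: str                     # e.g. version label of the 3D model used
          events: Dict[str, EventRecord] = field(default_factory=dict)

          def total_misfit(self) -> float:
              return sum(e.misfit for e in self.events.values() if e.misfit is not None)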

  15. A Fourier transform with speed improvements for microprocessor applications

    Lokerson, D. C.; Rochelle, R.

    1980-01-01

    A fast Fourier transform algorithm for the RCA 1802 microprocessor was developed for spacecraft instrument applications. The computations were tailored to the restrictions an eight-bit machine imposes. The algorithm incorporates some aspects of Walsh function sequency to improve operational speed. Before each data sample is considered for computation, the method adds to a register a value proportional to the period of the band being processed. If the result overflows into the DF register, the data sample is used in the computation; otherwise the computation is skipped. This operation is repeated for each of the 64 data samples, and the technique is used for both the sine and cosine portions of the computation. The processing uses eight-bit data, but because the many computations can increase the size of the coefficients, floating point form is used. A method to reduce the alias problem in the lower bands is also described.
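
    A loose Python emulation of the sample-gating idea (the original is RCA 1802 assembly, and the weighting of the selected samples below is an assumption): an 8-bit accumulator is bumped by an increment proportional to the analysed band before each of the 64 samples, and only when the addition overflows -- the role played by the 1802's DF flag -- is the sample folded into the running sum, here with a crude square-wave (Walsh-like) sign.

      def band_correlation(samples, increment):
          acc = 0            # 8-bit accumulator; the carry plays the role of DF
          total = 0
          sign = 1
          for x in samples:
              acc += increment
              carry, acc = divmod(acc, 256)   # carry == 1 when the 8-bit add overflows
              if carry:
                  total += sign * x           # sample selected for this band
                  sign = -sign                # alternating (Walsh-like) weighting
          return total

      # Example: correlate a 64-sample frame against one band.
      print(band_correlation(list(range(64)), increment=96))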

  16. Application of a microprocessor system to stream monitoring

    Oakes, T.W.; Shank, K.E.

    1978-01-01

    Low-level liquid wastes originating from the Oak Ridge National Laboratory (ORNL) are discharged, after treatment, into White Oak Creek, which is a small tributary of the Clinch River located in East Tennessee. Samples of White Oak Creek discharges are collected at White Oak Dam by a continuous digital proportional water sampler and analyzed weekly for radioactivity. The sampler contains a control system with a microprocessor that has been programmed to solve nonlinear weir equations. This system was designed and installed at ORNL by the Instrumentation and Controls Division and was tested by the Environmental Surveillance and Evaluation Section of the Industrial Safety and Applied Health Physics Division. The control system was designed to measure water flow rates from 0 to 334 ft³/sec to within 0.1%. Results of our test program and possible applications to other liquid sampling needs are discussed
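
    The abstract does not give the weir geometry or coefficients used at White Oak Dam; as a generic illustration of the kind of nonlinear level-to-flow relation such a controller evaluates, a rectangular sharp-crested weir follows Q = Cd * (2/3) * sqrt(2g) * L * H^1.5:

      import math

      def rectangular_weir_flow(head_m, crest_length_m, cd=0.62, g=9.81):
          # Discharge in m^3/s for a measured head (m) over a crest of length L (m).
          if head_m <= 0.0:
              return 0.0
          return cd * (2.0 / 3.0) * math.sqrt(2.0 * g) * crest_length_m * head_m ** 1.5

      # Example: 5 cm of head over a 3 m crest.
      print(rectangular_weir_flow(0.05, 3.0))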

  17. A low cost, microprocessor-based battery charge controller

    Pulfrey, D L; Hacker, J [Pulfrey Solar Inc., Vancouver, BC (Canada)

    1990-01-01

    This report describes the design, construction, testing, and evaluation of a microprocessor-based battery charge controller that uses charge integration as the method of battery state-of-charge estimation. The controller is intended for use in medium-size (100-1000W) photovoltaic systems that employ 12V lead-acid batteries for charge storage. The controller regulates the charge flow to the battery and operates in three, automatically-determined modes, namely: charge, equalize, and float. The prototype controller is modular in nature and can handle charge/discharge currents of magnitude up to 80A, depending on the number of circuit boards employed. Evaluation tests and field trials have shown the controller to be very accurate and reliable. Based on the cost of the prototype, it appears that an original equipment manufacturer's selling price of $400 for a 40A (500W) unit may be realistic. 18 figs., 2 tabs.
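
    A minimal sketch of the charge-integration (amp-hour counting) idea with the three modes named in the abstract; the capacity, thresholds and mode-selection rules below are illustrative assumptions, not the values used in the actual controller.

      class ChargeController:
          def __init__(self, capacity_ah=100.0, soc=0.5):
              self.capacity_ah = capacity_ah
              self.charge_ah = soc * capacity_ah
              self.mode = "charge"

          def update(self, battery_current_a, dt_s):
              # Integrate battery current (A, positive = charging) over dt_s seconds.
              self.charge_ah += battery_current_a * dt_s / 3600.0
              self.charge_ah = min(max(self.charge_ah, 0.0), self.capacity_ah)
              soc = self.charge_ah / self.capacity_ah
              # Assumed rules: float once full, equalize near full, otherwise charge.
              if soc >= 1.0:
                  self.mode = "float"
              elif soc >= 0.95:
                  self.mode = "equalize"
              else:
                  self.mode = "charge"
              return soc, self.mode

      # Example: one hour of 20 A charging in 60 s steps.
      ctrl = ChargeController()
      for _ in range(60):
          soc, mode = ctrl.update(20.0, 60.0)
      print(soc, mode)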

  18. Ultra high performance concrete dematerialization study

    NONE

    2004-03-01

    Concrete is the most widely used building material in the world and its use is expected to grow. It is well recognized that the production of portland cement results in the release of large amounts of carbon dioxide, a greenhouse gas (GHG). The main challenge facing the industry is to produce concrete in an environmentally sustainable manner. Reclaimed industrial by-products such as fly ash, silica fume and slag can reduce the amount of portland cement needed to make concrete, thereby reducing the amount of GHGs released to the atmosphere. The use of these supplementary cementing materials (SCM) can also enhance the long-term strength and durability of concrete. The intention of the EcoSmart™ Concrete Project is to develop sustainable concrete through innovation in supply, design and construction. In particular, the project focuses on finding a way to minimize the GHG signature of concrete by maximizing the replacement of portland cement in the concrete mix with SCM while improving the cost, performance and constructability. This paper describes the use of Ductal® Ultra High Performance Concrete (UHPC) for ramps in a condominium. It examined the relationship between the selection of UHPC and the overall environmental performance, cost, constructability, maintenance and operational efficiency as it relates to the EcoSmart Program. The advantages and challenges of using UHPC were outlined. In addition to its very high strength, UHPC has been shown to have very good potential for GHG emission reduction due to the reduced material requirements, reduced transport costs and increased SCM content. refs., tabs., figs.

  19. JT-60U high performance regimes

    Ishida, S.

    1999-01-01

    High performance regimes of JT-60U plasmas are presented with an emphasis upon the results from the use of a semi-closed pumped divertor with W-shaped geometry. Plasma performance in transient and quasi steady states has been significantly improved in reversed shear and high-βp regimes. The reversed shear regime elevated an equivalent Q_DT^eq transiently up to 1.25 (n_D(0)τ_E T_i(0) = 8.6x10^20 m^-3·s·keV) in a reactor-relevant thermonuclear dominant regime. Long sustainment of enhanced confinement with internal transport barriers (ITBs) with a fully non-inductive current drive in a reversed shear discharge was successfully demonstrated with LH wave injection. Performance sustainment has been extended in the high-βp regime with a high triangularity, achieving a long sustainment of plasma conditions equivalent to Q_DT^eq ∼0.16 (n_D(0)τ_E T_i(0) ∼1.4x10^20 m^-3·s·keV) for ∼4.5 s with a large non-inductive current drive fraction of 60-70% of the plasma current. Thermal and particle transport analyses show significant reduction of thermal and particle diffusivities around the ITB, resulting in a strong E_r shear in the ITB region. The W-shaped divertor is effective for He ash exhaust, demonstrating steady exhaust capability of τ_He*/τ_E ∼3-10 in support of ITER. Suppression of neutral back flow and a chemical sputtering effect have been observed, while the MARFE onset density is rather decreased. Negative-ion based neutral beam injection (N-NBI) experiments have created a clear H-mode transition. Enhanced ionization cross-section due to multi-step ionization processes was confirmed as theoretically predicted. The current density profile driven by N-NBI is measured in good agreement with theoretical prediction. N-NBI induced TAE modes, characterized as persistent and bursting oscillations, have been observed from a low hot beta of β_h ≳0.1-0.2% without a significant loss of fast ions. (author)

  20. High-performance phase-field modeling

    Vignal, Philippe

    2015-04-27

    Many processes in engineering and sciences involve the evolution of interfaces. Among the mathematical frameworks developed to model these types of problems, the phase-field method has emerged as a possible solution. Phase-fields nonetheless lead to complex nonlinear, high-order partial differential equations, whose solution poses mathematical and computational challenges. Guaranteeing some of the physical properties of the equations has led to the development of efficient algorithms and discretizations capable of recovering said properties by construction [2, 5]. This work builds on these ideas, and proposes novel discretization strategies that guarantee numerical energy dissipation for both conserved and non-conserved phase-field models. The temporal discretization is based on a novel method which relies on Taylor series and ensures strong energy stability. It is second-order accurate, and can also be rendered linear to speed up the solution process [4]. The spatial discretization relies on Isogeometric Analysis, a finite element method that possesses the k-refinement technology and enables the generation of high-order, high-continuity basis functions. These basis functions are well suited to handle the high-order operators present in phase-field models. Two-dimensional and three-dimensional results of the Allen-Cahn, Cahn-Hilliard, Swift-Hohenberg and phase-field crystal equations will be presented, which corroborate the theoretical findings, and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.
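
    The Taylor-series integrator of the thesis is not reproduced here; as a simpler, classical point of comparison, the sketch below advances the 1D Allen-Cahn equation u_t = eps^2 u_xx - (u^3 - u) with a standard stabilized semi-implicit step (energy stable for a sufficiently large stabilization constant S; S = 2 suffices while |u| <= 1), solved in Fourier space on a periodic grid.

      import numpy as np

      def allen_cahn_step(u, dt, dx, eps=0.05, S=2.0):
          # Solve (1/dt + S - eps^2 d_xx) u_new = (1/dt + S) u - (u^3 - u) via FFT.
          k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)       # angular wavenumbers
          rhs = (1.0 / dt + S) * u - (u ** 3 - u)               # explicit part
          u_hat = np.fft.fft(rhs) / (1.0 / dt + S + eps ** 2 * k ** 2)
          return np.real(np.fft.ifft(u_hat))

      # Example: relax a small random field toward the +/-1 phases.
      x = np.linspace(0.0, 1.0, 256, endpoint=False)
      u = 0.1 * np.random.default_rng(0).standard_normal(x.size)
      for _ in range(200):
          u = allen_cahn_step(u, dt=1e-3, dx=x[1] - x[0])
      print(u.min(), u.max())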

  1. High performance visual display for HENP detectors

    McGuigan, Michael; Smith, Gordon; Spiletic, John; Fine, Valeri; Nevski, Pavel

    2001-01-01

    A high end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detector. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long even on a powerful work station. To visualize the HENP detectors with maximal performance we have developed software with the following characteristics. We develop a visual display of HENP detectors on the BNL multiprocessor visualization server at multiple levels of detail. We work with a general and generic detector framework consistent with ROOT, GAUDI, etc., to avoid conflicting with the many graphic development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of the detector and events by directly running the analysis in the BNL stereoscopic theatre. We construct enhanced interactive control, including the ability to slice, search and mark areas of the detector. We incorporate the ability to make a high quality still image of a view of the detector, and the ability to generate animations and a fly-through of the detector and output these to MPEG or VRML models. We develop data compression hardware and software so that remote interactive visualization will be possible among dispersed collaborators. We obtain real-time visual display of events accumulated during simulations

  2. Development of high performance ODS alloys

    Shao, Lin [Texas A & M Univ., College Station, TX (United States); Gao, Fei [Univ. of Michigan, Ann Arbor, MI (United States); Garner, Frank [Texas A & M Univ., College Station, TX (United States)

    2018-01-29

    This project aims to capitalize on insights developed from recent high-dose self-ion irradiation experiments in order to develop and test the next generation of optimized ODS alloys needed to meet the nuclear community's need for high strength, radiation-tolerant cladding and core components, especially with enhanced resistance to void swelling. Two of these insights are that ferrite grains swell earlier than tempered martensite grains, and oxide dispersions currently produced only in ferrite grains require a high level of uniformity and stability to be successful. An additional insight is that ODS particle stability is dependent on as-yet unidentified compositional combinations of dispersoid and alloy matrix, such that dispersoids are stable in MA957 to doses greater than 200 dpa but dissolve in MA956 at doses less than 200 dpa. These findings focus attention on candidate next-generation alloys which address these concerns. Collaboration with two Japanese groups provides this project with two sets of first-round candidate alloys that have already undergone extensive development and testing for unirradiated properties, but have not yet been evaluated for their irradiation performance. The first set of candidate alloys are dual phase (ferrite + martensite) ODS alloys with oxide particles uniformly distributed in both ferrite and martensite phases. The second set of candidate alloys are ODS alloys containing non-standard dispersoid compositions with controllable oxide particle sizes, phases and interfaces.

  3. Low-Cost High-Performance MRI

    Sarracanie, Mathieu; Lapierre, Cristen D.; Salameh, Najat; Waddington, David E. J.; Witzel, Thomas; Rosen, Matthew S.

    2015-10-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5-3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1M per tesla of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm³ imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI can set new standards for affordable (<$50,000) and robust portable devices.

  4. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  5. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  6. High performance liquid chromatography in pharmaceutical analyses

    Branko Nikolin

    2004-05-01

    Full Text Available In pre-marketing testing and control of drugs over the last ten years, high performance liquid chromatography has replaced numerous spectroscopic methods and gas chromatography in quantitative and qualitative analysis. In the first period of HPLC application it was thought that it would become a complementary method to gas chromatography; today, however, it has nearly completely replaced gas chromatography in pharmaceutical analysis. The use of a liquid mobile phase, with the possibility of changing its polarity during chromatography and of other modifications depending upon the characteristics of the substances being tested, is a great advantage in the separation process in comparison to other methods. The wide choice of stationary phases is a further factor enabling good separation. The separation column connected to specific and sensitive detection systems - spectrofluorimetric, diode-array and electrochemical detectors, as well as hyphenated systems such as HPLC-MS and HPLC-NMR - forms the basis of the wide and effective application of the HPLC method. The purpose of high performance liquid chromatography (HPLC) analysis of any drug is to confirm the identity of the drug and provide quantitative results, and also to monitor the progress of the therapy of a disease.1 The measurement presented in Fig. 1 is a chromatogram obtained from the plasma of depressed patients 12 h before oral administration of dexamethasone. HPLC may also be used to further our understanding of normal and disease processes in the human body through biomedical and therapeutic research during pre-registration investigation of drugs. The analysis of drugs and metabolites in biological fluids, particularly plasma, serum or urine, is one of the most demanding but also one of the most common uses of high performance liquid chromatography. Blood, plasma or

  7. Combining high productivity with high performance on commodity hardware

    Skovhede, Kenneth

    -like compiler for translating CIL bytecode on the CELL-BE. I then introduce a bytecode converter that transforms simple loops in Java bytecode to GPGPU capable code. I then introduce the numeric library for the Common Intermediate Language, NumCIL. I can then utilize the vector programming model from NumCIL and map it to the Bohrium framework. The result is a complete system that gives the user a choice of high-level languages with no explicit parallelism, yet seamlessly performs efficient execution on a number of hardware setups.
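
    NumCIL gives .NET languages a NumPy-like vector model; the sketch below uses NumPy itself only to illustrate that programming style -- whole-array expressions with no explicit parallelism -- which a backend such as Bohrium is then free to execute on multicore CPUs or GPUs.

      import numpy as np

      def smooth(grid):
          # One averaging sweep written purely as whole-array operations
          # (periodic boundaries via np.roll); no element loops, no explicit threads.
          return 0.25 * (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
                         np.roll(grid, 1, 1) + np.roll(grid, -1, 1))

      grid = np.zeros((64, 64))
      grid[0, :] = 1.0
      for _ in range(100):
          grid = smooth(grid)
      print(grid.mean())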

  8. Integrating advanced facades into high performance buildings

    Selkowitz, Stephen E.

    2001-01-01

    Glass is a remarkable material but its functionality is significantly enhanced when it is processed or altered to provide added intrinsic capabilities. The overall performance of glass elements in a building can be further enhanced when they are designed to be part of a complete facade system. Finally, the facade system delivers the greatest performance to the building owner and occupants when it becomes an essential element of a fully integrated building design. This presentation examines the growing interest in incorporating advanced glazing elements into more comprehensive facade and building systems in a manner that increases comfort, productivity and amenity for occupants, reduces operating costs for building owners, and contributes to improving the health of the planet by reducing overall energy use and negative environmental impacts. We explore the role of glazing systems in dynamic and responsive facades that provide the following functionality: Enhanced sun protection and cooling load control while improving thermal comfort and providing most of the light needed with daylighting; Enhanced air quality and reduced cooling loads using natural ventilation schemes employing the facade as an active air control element; Reduced operating costs by minimizing lighting, cooling and heating energy use by optimizing the daylighting-thermal tradeoffs; Net positive contributions to the energy balance of the building using integrated photovoltaic systems; Improved indoor environments leading to enhanced occupant health, comfort and performance. In addressing these issues, facade system solutions must, of course, respect the constraints of latitude, location, solar orientation, acoustics, earthquake and fire safety, etc. Since climate and occupant needs are dynamic variables, in a high performance building the facade solution must have the capacity to respond and adapt to these variable exterior conditions and to changing occupant needs. This responsive performance capability

  9. The need for high performance breeder reactors

    Vaughan, R.D.; Chermanne, J.

    1977-01-01

    It can easily be demonstrated, on the basis of realistic estimates of continued high oil costs, that an increasing portion of the growth in energy demand must be supplied by nuclear power, and that nuclear power might account for 20% of all energy production by the end of the century. Such assumptions lead very quickly to the conclusion that the discovery, extraction and processing of uranium will not be able to follow the demand; the bottleneck will essentially be related to the rate at which the ore can be discovered and extracted, not to the existing quantities or their grade. Figures as high as 150,000 T/annum and more would be reached quickly, and it is already necessary to ask whether enough capital can be attracted to meet these requirements. There is only one solution to this problem: improve the conversion ratio of the nuclear system and quickly reach breeding; this would reduce natural uranium consumption by a factor of about 50. However, this condition is not sufficient; the commercial breeder must have a breeding gain as high as possible, because the Pu out-of-pile time and the Pu losses in the cycle could lead to an unacceptable doubling time for the system if the breeding gain is too low. That is the reason why it is vital to develop high performance breeder reactors. The present paper indicates how the Gas-cooled Breeder Reactor [GBR] can meet the problems mentioned above, on the basis of recent and realistic studies. It briefly describes the present status of GBR development, from the predecessors in the gas cooled reactor line, particularly the AGR. It shows how the GBR fuel benefits greatly from the LMFBR fuel irradiation experience. It compares the GBR performance on a consistent basis with that of the LMFBR. The GBR capital and fuel cycle costs are compared with those of thermal and fast reactors respectively. The conclusion, based on a cost-benefit study, is that the GBR must be quickly developed in order

  10. High performance nano-composite technology development

    Kim, Whung Whoe; Rhee, C. K.; Kim, S. J.; Park, S. D. [KAERI, Taejon (Korea, Republic of); Kim, E. K.; Jung, S. Y.; Ryu, H. J. [KRICT, Taejon (Korea, Republic of); Hwang, S. S.; Kim, J. K.; Hong, S. M. [KIST, Taejon (Korea, Republic of); Chea, Y. B. [KIGAM, Taejon (Korea, Republic of); Choi, C. H.; Kim, S. D. [ATS, Taejon (Korea, Republic of); Cho, B. G.; Lee, S. H. [HGREC, Taejon (Korea, Republic of)

    1999-06-15

    The trend in new material development is toward not only high performance but also environmental friendliness. In particular, nano composite materials, which enhance the functional properties of components and extend component life, thereby reducing waste and environmental contamination, have a great effect on various industrial areas. Depending on the polymer matrix and filler materials, nano composites find applications ranging from semiconductors to the medical field. In spite of these merits, nano composite studies are confined to a few special materials at laboratory scale, because several technical difficulties remain unresolved. Therefore, the purpose of this study is to establish systematic planning for carrying out next-generation projects, in order to compete with other countries and overcome the protective policies of advanced countries, by grasping overseas development trends and our present status. (author).

  11. How to create high-performing teams.

    Lam, Samuel M

    2010-02-01

    This article is intended to discuss inspirational aspects on how to lead a high-performance team. Cogent topics discussed include how to hire staff through methods of "topgrading" with reference to Geoff Smart and "getting the right people on the bus" referencing Jim Collins' work. In addition, once the staff is hired, this article covers how to separate the "eagles from the ducks" and how to inspire one's staff by creating the right culture with suggestions for further reading by Don Miguel Ruiz (The four agreements) and John Maxwell (21 Irrefutable laws of leadership). In addition, Simon Sinek's concept of "Start with Why" is elaborated to help a leader know what the core element should be with any superior culture. Thieme Medical Publishers.

  12. High performance nano-composite technology development

    Kim, Whung Whoe; Rhee, C. K.; Kim, S. J.; Park, S. D. [KAERI, Taejon (Korea, Republic of); Kim, E. K.; Jung, S. Y.; Ryu, H. J. [KRICT, Taejon (Korea, Republic of); Hwang, S. S.; Kim, J. K.; Hong, S. M. [KIST, Taejon (Korea, Republic of); Chea, Y. B. [KIGAM, Taejon (Korea, Republic of); Choi, C. H.; Kim, S. D. [ATS, Taejon (Korea, Republic of); Cho, B. G.; Lee, S. H. [HGREC, Taejon (Korea, Republic of)

    1999-06-15

    The trend in new material development is toward not only high performance but also environmental friendliness. In particular, nano composite materials, which enhance the functional properties of components and extend component life, thereby reducing waste and environmental contamination, have a great effect on various industrial areas. Depending on the polymer matrix and filler materials, nano composites find applications ranging from semiconductors to the medical field. In spite of these merits, nano composite studies are confined to a few special materials at laboratory scale, because several technical difficulties remain unresolved. Therefore, the purpose of this study is to establish systematic planning for carrying out next-generation projects, in order to compete with other countries and overcome the protective policies of advanced countries, by grasping overseas development trends and our present status. (author).

  13. High performance nano-composite technology development

    Kim, Whung Whoe; Rhee, C. K.; Kim, S. J.; Park, S. D.; Kim, E. K.; Jung, S. Y.; Ryu, H. J.; Hwang, S. S.; Kim, J. K.; Hong, S. M.; Chea, Y. B.; Choi, C. H.; Kim, S. D.; Cho, B. G.; Lee, S. H.

    1999-06-01

    The trend in new material development is toward not only high performance but also environmental friendliness. In particular, nano composite materials, which enhance the functional properties of components and extend component life, thereby reducing waste and environmental contamination, have a great effect on various industrial areas. Depending on the polymer matrix and filler materials, nano composites find applications ranging from semiconductors to the medical field. In spite of these merits, nano composite studies are confined to a few special materials at laboratory scale, because several technical difficulties remain unresolved. Therefore, the purpose of this study is to establish systematic planning for carrying out next-generation projects, in order to compete with other countries and overcome the protective policies of advanced countries, by grasping overseas development trends and our present status. (author).

  14. High Performance with Prescriptive Optimization and Debugging

    Jensen, Nicklas Bo

    parallelization and automatic vectorization is attractive as it transparently optimizes programs. The thesis contributes an improved dependence analysis for explicitly parallel programs. These improvements lead to more loops being vectorized; on average we achieve a speedup of 1.46 over the existing dependence analysis and vectorizer in GCC. Automatic optimizations often fail for theoretical and practical reasons. When they fail we argue that a hybrid approach can be effective. Using compiler feedback, we propose to use the programmer’s intuition and insight to achieve high performance. Compiler feedback enlightens the programmer as to why a given optimization was not applied, and suggests how to change the source code to make it more amenable to optimizations. We show how this can yield significant speedups and achieve 2.4 times faster execution on a real industrial use case. To aid in parallel debugging we propose

  15. Optimizing High Performance Self Compacting Concrete

    Raymond A Yonathan

    2017-01-01

    Full Text Available This paper’s objectives are to study the effects of glass powder, silica fume, polycarboxylate ether, and gravel, and to optimize the composition of each factor in making High Performance SCC. The Taguchi method is proposed in this paper as the best solution to minimize the number of specimen variations, which would otherwise exceed 80. Taguchi data analysis is applied to provide the composition, the optimization, and the effect of the contributing materials for nine specimen variations. The concrete’s workability was analyzed using the slump flow test, V-funnel test, and L-box test. Compressive and porosity tests were performed for the hardened state. Cylindrical specimens with dimensions of 100×200 mm were cast for compressive testing at ages of 3, 7, 14, 21 and 28 days. The porosity test was conducted at 28 days. It is revealed that silica fume contributes greatly to slump flow and porosity, while coarse aggregate is the greatest contributing factor in the L-box and compressive tests. However, all factors show unclear results for the V-funnel test.

  16. High Performance Circularly Polarized Microstrip Antenna

    Bondyopadhyay, Probir K. (Inventor)

    1997-01-01

    A microstrip antenna for radiating circularly polarized electromagnetic waves comprising a cluster array of at least four microstrip radiator elements, each of which is provided with dual orthogonal coplanar feeds in phase quadrature relation achieved by connection to an asymmetric T-junction power divider impedance notched at resonance. The dual fed circularly polarized reference element is positioned with its axis at a 45 deg angle with respect to the unit cell axis. The other three dual fed elements in the unit cell are positioned and fed with a coplanar feed structure with sequential rotation and phasing to enhance the axial ratio and impedance matching performance over a wide bandwidth. The centers of the radiator elements are disposed at the corners of a square with each side of a length d in the range of 0.7 to 0.9 times the free space wavelength of the antenna radiation and the radiator elements reside in a square unit cell area of sides equal to 2d and thereby permit the array to be used as a phased array antenna for electronic scanning and is realizable in a high temperature superconducting thin film material for high efficiency.

  17. NCI's Transdisciplinary High Performance Scientific Data Platform

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable for different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment for accessing this data, through the NCI supercomputer; a private cloud that supports both domain focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as their future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), agile enough to incorporate new technological advances and

  18. High Power Flex-Propellant Arcjet Performance

    Litchford, Ron J.

    2011-01-01

    implied nearly frozen flow in the nozzle and yielded performance ranges of 800-1100 sec for hydrogen and 400-600 sec for ammonia. Inferred thrust-to-power ratios were in the range of 30-10 lbf/MWe for hydrogen and 60-20 lbf/MWe for ammonia. Successful completion of this test series represents a fundamental milestone in the progression of high power arcjet technology, and it is hoped that the results may serve as a reliable touchstone for the future development of MW-class regeneratively-cooled flex-propellant plasma rockets.

  19. Silicon Photomultiplier Performance in High Electric Field

    Montoya, J.; Morad, J.

    2016-12-01

    Roughly 27% of the universe is thought to be composed of dark matter. The Large Underground Xenon (LUX) experiment relies on the emission of light from xenon atoms after a collision with a dark matter particle. After a particle interaction in the detector, the xenon emits both light and charge. The charge (electrons) in the liquid xenon needs to be pulled into the gas region so that it can interact with the gas and emit light. This allows LUX to convert a single electron into many photons. This is done by applying a high voltage across the liquid and gas regions, effectively ripping electrons out of the liquid xenon and into the gas. The current device used to detect photons is the photomultiplier tube (PMT). These devices are large and costly. In recent years, a new technology capable of detecting single photons has emerged: the silicon photomultiplier (SiPM). These devices are cheaper and smaller than PMTs. Their performance in high electric fields, such as those found in LUX, is unknown. It is possible that a large electric field could introduce noise on the SiPM signal, drowning out the single photon detection capability. My hypothesis is that SiPMs will not observe a significant increase in noise at an electric field of roughly 10 kV/cm (an electric field within the range used in detectors like LUX). I plan to test this hypothesis by first rotating the SiPMs with no applied electric field between two metal plates roughly 2 cm apart, providing a control data set, and then, using the same angles, testing the dark counts with the constant electric field applied. Possibly the most important aspect of LUX is the photon detector, because it is what detects the signals. Dark matter is detected in the experiment by looking at the ratio of photons to electrons emitted for a given interaction in the detector. Interactions with a low electron-to-photon ratio are more likely to be dark matter events than those with a high electron-to-photon ratio. The ability to

  20. Microprocessor-controlled system for automatic acquisition of potentiometric data and their non-linear least-squares fit in equilibrium studies.

    Gampp, H; Maeder, M; Zuberbühler, A D; Kaden, T A

    1980-06-01

    A microprocessor-controlled potentiometric titration apparatus for equilibrium studies is described. The microprocessor controls the stepwise addition of reagent, monitors the pH until it becomes constant and stores the constant value. The data are recorded on magnetic tape by a cassette recorder with an RS232 input-output interface. A non-linear least-squares program based on Marquardt's modification of the Newton-Gauss method is discussed and its performance in the calculation of equilibrium constants is exemplified. An HP 9821 desk-top computer accepts the data from the magnetic tape recorder. In addition to a fully automatic fitting procedure, the program allows manual adjustment of the parameters. Three examples are discussed with regard to performance and reproducibility.
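
    A generic sketch of Marquardt's modification of the Newton-Gauss method named above (the published program ran on an HP 9821; the residual function below is a placeholder, not their equilibrium-constant model): the normal equations are damped by a factor lambda that shrinks when a step reduces the sum of squares and grows when it does not.

      import numpy as np

      def levenberg_marquardt(residuals, p0, n_iter=50, lam=1e-3):
          p = np.asarray(p0, dtype=float)
          for _ in range(n_iter):
              r = residuals(p)
              # Forward-difference Jacobian of the residual vector.
              J = np.empty((r.size, p.size))
              for j in range(p.size):
                  dp = np.zeros_like(p)
                  dp[j] = 1e-6 * max(abs(p[j]), 1.0)
                  J[:, j] = (residuals(p + dp) - r) / dp[j]
              A = J.T @ J
              g = J.T @ r
              step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -g)
              if np.sum(residuals(p + step) ** 2) < np.sum(r ** 2):
                  p, lam = p + step, lam * 0.3   # accept: move toward Newton-Gauss
              else:
                  lam *= 10.0                    # reject: damp more strongly
          return p

      # Toy usage: fit y = a * exp(-b * x) to noisy data.
      x = np.linspace(0.0, 4.0, 40)
      y = 2.0 * np.exp(-1.3 * x) + 0.01 * np.random.default_rng(1).standard_normal(x.size)
      print(levenberg_marquardt(lambda p: p[0] * np.exp(-p[1] * x) - y, [1.0, 1.0]))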

  1. The Role of Performance Management in the High Performance Organisation

    de Waal, André A.; van der Heijden, Beatrice I.J.M.

    2014-01-01

    The allegiance of partnering organisations and their employees to an Extended Enterprise performance is its proverbial sword of Damocles. Literature on Extended Enterprises focuses on collaboration, inter-organizational integration and learning to avoid diminishing or missing allegiance becoming an

  2. Evaluating performance of high efficiency mist eliminators

    Waggoner, Charles A.; Parsons, Michael S.; Giffin, Paxton K. [Mississippi State University, Institute for Clean Energy Technology, 205 Research Blvd, Starkville, MS (United States)

    2013-07-01

    Processing liquid wastes frequently generates off gas streams with high humidity and liquid aerosols. Droplet-laden air streams can be produced from tank mixing or sparging and processes such as reforming or evaporative volume reduction. Unfortunately these wet air streams represent a genuine threat to HEPA filters. High efficiency mist eliminators (HEME) are one option for removal of liquid aerosols with high dissolved or suspended solids content. HEMEs have been used extensively in industrial applications; however, they have not seen widespread use in the nuclear industry. Filtering efficiency data along with loading curves are not readily available for these units and data that exist are not easily translated to operational parameters in liquid waste treatment plants. A specialized test stand has been developed to evaluate the performance of HEME elements under use conditions of a US DOE facility. HEME elements were tested at three volumetric flow rates using aerosols produced from an iron-rich waste surrogate. The challenge aerosol included submicron particles produced from Laskin nozzles and super micron particles produced from a hollow cone spray nozzle. Test conditions included ambient temperature and relative humidities greater than 95%. Data collected during testing HEME elements from three different manufacturers included volumetric flow rate, differential temperature across the filter housing, downstream relative humidity, and differential pressure (dP) across the filter element. Filter challenge was discontinued at three intermediate dPs to allow the filter efficiency to be determined using dioctyl phthalate and then with dry surrogate aerosols. Filtering efficiencies of the clean HEME, the clean HEME loaded with water, and the HEME at maximum dP were also collected using the two test aerosols. Results of the testing included differential pressure vs. time loading curves for the nine elements tested along with the mass of moisture and solid

  3. Very High-Performance Embedded Computing Will Allow Ambitious Space Science Investigation

    Pignol, Michel

    2005-01-01

    .... developed on radiation tolerant technologies. Unfortunately, the microprocessors available today in such technologies offer only the computing throughput that was available about 10 years ago on the commercial market...

  4. The Effect of a Microprocessor Prosthetic Foot on Function and Quality of Life in Transtibial Amputees Who Are Limited Community Ambulators

    2017-09-01

    motion and active power, will translate into improved functional performance, ambulatory safety (risk of falls) and quality of life in trans-tibial... clinical trial designed to determine if a microprocessor controlled prosthetic foot (MPF), with greater range of motion and active power, will... contact over a 6 month period of time and receive physical therapy training to minimize deviations resulting from habit or lack of training, education

  5. Microprocessor system for data acquisition and processing for the Flora device

    Klimov, V.M.

    1986-01-01

    The ''VEhFORMIKA'' microprocessor system for data collection and processing in experiments at the ''Flora'' device is described and its application is substantiated. The complex allows one to conduct investigations using multichannel methods and to exercise electrophysical control of the device

  6. High Performance Graphene Oxide Based Rubber Composites

    Mao, Yingyan; Wen, Shipeng; Chen, Yulong; Zhang, Fazhong; Panine, Pierre; Chan, Tung W.; Zhang, Liqun; Liang, Yongri; Liu, Li

    2013-01-01

    In this paper, graphene oxide/styrene-butadiene rubber (GO/SBR) composites with complete exfoliation of GO sheets were prepared by aqueous-phase mixing of GO colloid with SBR latex and a small loading of butadiene-styrene-vinyl-pyridine rubber (VPR) latex, followed by their co-coagulation. During co-coagulation, VPR not only plays a key role in the prevention of aggregation of GO sheets but also acts as an interface-bridge between GO and SBR. The results demonstrated that the mechanical properties of the GO/SBR composite with 2.0 vol.% GO is comparable with those of the SBR composite reinforced with 13.1 vol.% of carbon black (CB), with a low mass density and a good gas barrier ability to boot. The present work also showed that GO-silica/SBR composite exhibited outstanding wear resistance and low-rolling resistance which make GO-silica/SBR very competitive for the green tire application, opening up enormous opportunities to prepare high performance rubber composites for future engineering applications. PMID:23974435

  7. Initial rheological description of high performance concretes

    Alessandra Lorenzetti de Castro

    2006-12-01

    Full Text Available Concrete is defined as a composite material and, in rheological terms, it can be understood as a concentrated suspension of solid particles (aggregates) in a viscous liquid (cement paste). On a macroscopic scale, concrete flows as a liquid. It is known that the rheological behavior of concrete is close to that of a Bingham fluid, and two rheological parameters are needed for its description: yield stress and plastic viscosity. The aim of this paper is to present an initial rheological description of high performance concretes using the modified slump test. According to the results, an increase of yield stress was observed over time, while only a slight variation in plastic viscosity was noticed. The incorporation of silica fume changed the rheological properties of the fresh concrete. The behavior of these materials also varied with the mixing procedure employed in their production. The addition of superplasticizer produced a large reduction in the mixture's yield stress, while the plastic viscosity remained practically constant.
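
    For reference, the Bingham relation the abstract appeals to (standard form, not specific to these mixes) links the shear stress to the shear rate through exactly the two parameters measured here, the yield stress and the plastic viscosity:

      \tau = \tau_0 + \mu_{pl}\,\dot{\gamma} \quad \text{for } \tau > \tau_0, \qquad \dot{\gamma} = 0 \ \text{otherwise}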

  8. High thermoelectric performance of graphite nanofibers.

    Tran, Van-Truong; Saint-Martin, Jérôme; Dollfus, Philippe; Volz, Sebastian

    2018-02-22

    Graphite nanofibers (GNFs) have been demonstrated to be a promising material for hydrogen storage and heat management in electronic devices. Here, by means of first-principles and transport simulations, we show that GNFs can also be an excellent material for thermoelectric applications thanks to the interlayer weak van der Waals interaction that induces low thermal conductance and a step-like shape in the electronic transmission with mini-gaps, which are necessary ingredients to achieve high thermoelectric performance. This study unveils that the platelet form of GNFs in which graphite layers are perpendicular to the fiber axis can exhibit outstanding thermoelectric properties with a figure of merit ZT reaching 3.55 in a 0.5 nm diameter fiber and 1.1 in a 1.1 nm diameter one. Interestingly, by introducing 14C isotope doping, ZT can even be enhanced up to more than 5, and more than 8 if we include the effect of finite phonon mean free path, which demonstrates the amazing thermoelectric potential of GNFs.
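
    For reference, the dimensionless figure of merit quoted above is the standard combination of the Seebeck coefficient S, the electrical conductivity sigma, the thermal conductivity kappa and the absolute temperature T:

      ZT = \frac{S^{2}\,\sigma\,T}{\kappa}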

  9. Durability of high performance concrete in seawater

    Amjad Hussain Memon; Salihuddin Radin Sumadi; Rabitah Handan

    2000-01-01

    This paper reports on the effects of blended cements on the durability of high performance concrete (HPC) in seawater. In this research the effect of seawater was investigated. The specimens were initially subjected to water curing for seven days inside the laboratory at room temperature, followed by seawater curing exposed to the tidal zone until testing. In this study three levels of cement replacement (0%, 30% and 70%) were used. The combined use of chemical and mineral admixtures has resulted in a new generation of concrete called HPC. HPC has been identified as one of the most important advanced materials necessary in the effort to build a nation's infrastructure, and it opens new opportunities for the utilization of industrial by-products (mineral admixtures) in the construction industry. Permeability is considered one of the fundamental properties governing the durability of concrete in the marine environment. Results of this investigation indicated that the oxygen permeability values for the blended cement concretes at the age of one year are reduced by a factor of about 2 compared to the OPC control mix concrete. Therefore both blended cement concretes are expected to withstand seawater exposure in the tidal zone without serious deterioration. (Author)

  10. Fuzzy Concurrent Object Oriented Expert System for Fault Diagnosis in 8085 Microprocessor Based System Board

    Mr. D. V. Kodavade; Dr. Mrs. S. D. Apte

    2014-01-01

    With the acceptance of artificial intelligence paradigm, a number of successful artificial intelligence systems were created. Fault diagnosis in microprocessor based boards needs lot of empirical knowledge and expertise and is a true artificial intelligence problem. Research on fault diagnosis in microprocessor based system boards using new fuzzy-object oriented approach is presented in this paper. There are many uncertain situations observed during fault diagnosis. These uncertain situations...

  11. Nonconformance in electromechanical output relays of microprocessor-based protection devices under actual operating conditions

    Gurevich, Vladimir

    2006-01-01

    Microprocessor-based protection relays are gradually driving out traditional electromechanical and even electronic protection devices from virtually all fields of power and electrical engineering. In this paper, one of many problems of microprocessor-based relays is discussed: nonconformance of miniature electromechanical output relays under actual operation conditions: switching inductive loads (with tripping CB coils or lockout relay coils) at 220 VDC, and "dry" switching of some control ci...

  12. Alternative High-Performance Ceramic Waste Forms

    Sundaram, S. K. [Alfred Univ., NY (United States)

    2017-02-01

    This final report (M5NU-12-NY-AU # 0202-0410) summarizes the results of the project titled “Alternative High-Performance Ceramic Waste Forms,” funded in FY12 by the Nuclear Energy University Program (NEUP Project # 12-3809) and led by Alfred University in collaboration with Savannah River National Laboratory (SRNL). The overall focus of the project is to advance fundamental understanding of crystalline ceramic waste forms and to demonstrate their viability as alternative waste forms to borosilicate glasses. We processed single- and multiphase hollandite waste forms based on simulated waste stream compositions provided by SRNL, based on the advanced fuel cycle initiative (AFCI) aqueous separation process developed in the Fuel Cycle Research and Development (FCR&D) program. For multiphase simulated waste forms, oxide and carbonate precursors were mixed together via ball milling with deionized water using zirconia media in a polyethylene jar for 2 h. The slurry was dried overnight and then separated from the media. The blended powders were then subjected to melting or spark plasma sintering (SPS) processes. Microstructural evolution and phase assemblages of these samples were studied using x-ray diffraction (XRD), scanning electron microscopy (SEM), energy dispersive analysis of x-rays (EDAX), wavelength dispersive spectrometry (WDS), transmission electron microscopy (TEM), selected area x-ray diffraction (SAXD), and electron backscatter diffraction (EBSD). These results showed that the processing methods have a significant effect on the microstructure and thus the performance of these waste forms. The Ce substitution into zirconolite and pyrochlore materials was investigated using a combination of experimental (in situ XRD and x-ray absorption near edge structure (XANES)) and modeling techniques to study these single phases independently. In zirconolite materials, a transition from the 2M to the 4M polymorph was observed with increasing Ce content. The resulting

  13. Intelligent Facades for High Performance Green Buildings

    Dyson, Anna [Rensselaer Polytechnic Inst., Troy, NY (United States)

    2017-03-01

    Progress Towards Net-Zero and Net-Positive-Energy Commercial Buildings and Urban Districts Through Intelligent Building Envelope Strategies Previous research and development of intelligent facades systems has been limited in their contribution towards national goals for achieving on-site net zero buildings, because this R&D has failed to couple the many qualitative requirements of building envelopes such as the provision of daylighting, access to exterior views, satisfying aesthetic and cultural characteristics, with the quantitative metrics of energy harvesting, storage and redistribution. To achieve energy self-sufficiency from on-site solar resources, building envelopes can and must address this gamut of concerns simultaneously. With this project, we have undertaken a high-performance building integrated combined-heat and power concentrating photovoltaic system with high temperature thermal capture, storage and transport towards multiple applications (BICPV/T). The critical contribution we are offering with the Integrated Concentrating Solar Façade (ICSF) is conceived to improve daylighting quality for improved health of occupants and mitigate solar heat gain while maximally capturing and transferring onsite solar energy. The ICSF accomplishes this multi-functionality by intercepting only the direct-normal component of solar energy (which is responsible for elevated cooling loads) thereby transforming a previously problematic source of energy into a high quality resource that can be applied to building demands such as heating, cooling, dehumidification, domestic hot water, and possible further augmentation of electrical generation through organic Rankine cycles. With the ICSF technology, our team is addressing the global challenge in transitioning commercial and residential building stock towards on-site clean energy self-sufficiency, by fully integrating innovative environmental control systems strategies within an intelligent and responsively dynamic building

  14. High-performance commercial building systems

    Selkowitz, Stephen

    2003-10-01

    This report summarizes key technical accomplishments resulting from the three year PIER-funded R&D program, ''High Performance Commercial Building Systems'' (HPCBS). The program targets the commercial building sector in California, an end-use sector that accounts for about one-third of all California electricity consumption and an even larger fraction of peak demand, at a cost of over $10B/year. Commercial buildings also have a major impact on occupant health, comfort and productivity. Building design and operations practices that influence energy use are deeply engrained in a fragmented, risk-averse industry that is slow to change. Although California's aggressive standards efforts have resulted in new buildings designed to use less energy than those constructed 20 years ago, the actual savings realized are still well below technical and economic potentials. The broad goal of this program is to develop and deploy a set of energy-saving technologies, strategies, and techniques, and improve processes for designing, commissioning, and operating commercial buildings, while improving health, comfort, and performance of occupants, all in a manner consistent with sound economic investment practices. Results are to be broadly applicable to the commercial sector for different building sizes and types, e.g. offices and schools, for different classes of ownership, both public and private, and for owner-occupied as well as speculative buildings. The program aims to facilitate significant electricity use savings in the California commercial sector by 2015, while assuring that these savings are affordable and promote high quality indoor environments. The five linked technical program elements contain 14 projects with 41 distinct R&D tasks. Collectively they form a comprehensive Research, Development, and Demonstration (RD&D) program with the potential to capture large savings in the commercial building sector, providing significant economic benefits to

  15. Improving the high performance concrete (HPC) behaviour in high temperatures

    Cattelan Antocheves De Lima, R.

    2003-12-01

    Full Text Available High performance concrete (HPC) is an interesting material that has long been attracting the interest of the scientific and technical community, due to the clear advantages obtained in terms of mechanical strength and durability. Given these better characteristics, HPC, in its various forms, has been gradually replacing normal strength concrete, especially in structures exposed to severe environments. However, the very dense microstructure and low permeability typical of HPC can result in explosive spalling under certain thermal and mechanical conditions, such as when concrete is subject to rapid temperature rises during a fire. This behaviour is caused by the build-up of internal water pressure in the pore structure during heating, and by stresses originating from thermal deformation gradients. Although there are still a limited number of experimental programs in this area, some researchers have reported that the addition of polypropylene fibers to HPC is a suitable way to avoid explosive spalling under fire conditions. This change in behavior is derived from the fact that polypropylene fibers melt at high temperatures and leave a pathway for heated gas to escape the concrete matrix, therefore allowing the outward migration of water vapor and resulting in a reduction of internal pore pressure. The present research investigates the behavior of high performance concrete at high temperatures, especially when polypropylene fibers are added to the mix.

    High performance concrete (HPC) is a material of great interest to the scientific and technical community, owing to the clear advantages obtained in terms of mechanical strength and durability. Because of these characteristics, HPC, in its various forms, is in some applications gradually replacing normal strength concrete, especially in structures exposed to severe environments. However, the very dense microstructure and the low permeability

  16. Biosorption of gold from computer microprocessor leachate solutions using chitin.

    Côrtes, Letícia N; Tanabe, Eduardo H; Bertuol, Daniel A; Dotto, Guilherme L

    2015-11-01

    The biosorption of gold from discarded computer microprocessor (DCM) leachate solutions was studied using chitin as a biosorbent. The DCM components were leached with thiourea solutions, and two procedures were tested for recovery of gold from the leachates: (1) biosorption and (2) precipitation followed by biosorption. For each procedure, the biosorption was evaluated considering kinetic, equilibrium, and thermodynamic aspects. The general order model was able to represent the kinetic behavior, and the equilibrium was well represented by the BET model. The maximum biosorption capacities were around 35 mg g(-1) for both procedures. The biosorption of gold on chitin was a spontaneous, favorable, and exothermic process. It was found that precipitation followed by biosorption resulted in the best gold recovery, because other species were removed from the leachate solution in the precipitation step. This method enabled about 80% of the gold to be recovered, using 20 g L(-1) of chitin at 298 K for 4 h.
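
    The study fitted a general order kinetic model; as a simpler, widely used stand-in, the sketch below fits a pseudo-second-order uptake curve q(t) = k qe^2 t / (1 + k qe t) to invented data, just to illustrate how kinetic parameters are extracted from contact-time measurements.

        # Illustrative kinetic fit only; the data points and the pseudo-second-order
        # form are stand-ins, not the paper's general order model or measurements.
        import numpy as np
        from scipy.optimize import curve_fit

        def pseudo_second_order(t, qe, k):
            """Uptake q(t) in mg/g for equilibrium capacity qe and rate constant k."""
            return k * qe ** 2 * t / (1.0 + k * qe * t)

        t_min = np.array([10.0, 30.0, 60.0, 120.0, 240.0])   # contact time, minutes
        q_obs = np.array([12.0, 22.0, 28.0, 32.0, 34.0])     # gold uptake, mg/g

        (qe_fit, k_fit), _ = curve_fit(pseudo_second_order, t_min, q_obs, p0=[35.0, 0.01])
        print(f"qe ~ {qe_fit:.1f} mg/g, k ~ {k_fit:.4f} g/(mg*min)")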

  17. Model based design introduction: modeling game controllers to microprocessor architectures

    Jungwirth, Patrick; Badawy, Abdel-Hameed

    2017-04-01

    We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. It is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Model based design's philosophy is to solve a problem a step at a time; the approach can be compared to a series of steps that converge to a solution. A block diagram simulation tool allows a design to be simulated with real world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded. The digital control algorithm can then be simulated with the real world sensor data, and the output from the simulated digital control system can be compared to that of the old analog control system (see the sketch below). Model based design can be compared to Agile software development. The Agile goal is to deliver working software in incremental steps, with progress measured in completed and tested code units; in model based design, progress is measured in completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on RISC-V.
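
    As a minimal sketch of that replay workflow (nothing here is from the paper), the following Python code feeds recorded sensor samples to a toy discrete PI controller and compares its output against a stand-in for the logged analog controller output.

        # Minimal replay sketch: recorded sensor samples drive a candidate digital
        # controller, whose output is compared with the logged output of the legacy
        # analog controller. Controller gains and all data are purely illustrative.
        import numpy as np

        def digital_pi(setpoint, measurement, integral, kp=2.0, ki=0.5, dt=0.01):
            """Toy discrete PI controller; `integral` carries state between samples."""
            error = setpoint - measurement
            integral += error * dt
            return kp * error + ki * integral, integral

        sensor_log = 0.1 * np.sin(np.linspace(0.0, 2.0 * np.pi, 200))  # recorded input
        analog_out_log = -2.0 * sensor_log                             # recorded reference

        digital_out, integral = [], 0.0
        for sample in sensor_log:
            u, integral = digital_pi(0.0, sample, integral)
            digital_out.append(u)

        rms = np.sqrt(np.mean((np.array(digital_out) - analog_out_log) ** 2))
        print(f"RMS difference between digital and analog outputs: {rms:.4f}")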

  18. GPS/MEMS IMU/Microprocessor Board for Navigation

    Gender, Thomas K.; Chow, James; Ott, William E.

    2009-01-01

    A miniaturized instrumentation package comprising (1) a Global Positioning System (GPS) receiver, (2) an inertial measurement unit (IMU) consisting largely of surface-micromachined sensors of the microelectromechanical systems (MEMS) type, and (3) a microprocessor, all residing on a single circuit board, is part of the navigation system of a compact robotic spacecraft intended to be released from a larger spacecraft [e.g., the International Space Station (ISS)] for exterior visual inspection of the larger spacecraft. Variants of the package may also be useful in terrestrial collision-detection and -avoidance applications. The navigation solution obtained by integrating the IMU outputs is fed back to a correlator in the GPS receiver to aid in tracking GPS signals. The raw GPS and IMU data are blended in a Kalman filter to obtain an optimal navigation solution, which can be supplemented by range and velocity data obtained by use of (1) a stereoscopic pair of electronic cameras aboard the robotic spacecraft and/or (2) a laser dynamic range imager aboard the ISS. The novelty of the package lies mostly in those aspects of the design of the MEMS IMU that pertain to controlling mechanical resonances and stabilizing scale factors and biases.
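
    The "blended in a Kalman filter" step can be pictured with a toy one-dimensional example (this is not the flight code): the IMU acceleration propagates a [position, velocity] state, and a GPS position fix corrects it. All matrices and numbers below are illustrative.

        # Toy 1-D GPS/IMU blend, purely illustrative. State x = [position, velocity];
        # the IMU acceleration drives the prediction, the GPS position fix the update.
        import numpy as np

        dt = 0.1                                    # time step, s
        F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition
        B = np.array([[0.5 * dt ** 2], [dt]])       # acceleration input matrix
        H = np.array([[1.0, 0.0]])                  # GPS measures position only
        Q = np.eye(2) * 1e-3                        # process noise (tuning value)
        R = np.array([[4.0]])                       # GPS position variance, m^2

        def kalman_step(x, P, imu_accel, gps_pos):
            x = F @ x + B * imu_accel               # predict with IMU measurement
            P = F @ P @ F.T + Q
            innovation = np.array([[gps_pos]]) - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
            x = x + K @ innovation                  # correct with GPS fix
            P = (np.eye(2) - K @ H) @ P
            return x, P

        x, P = np.zeros((2, 1)), np.eye(2)
        x, P = kalman_step(x, P, imu_accel=0.2, gps_pos=0.1)
        print(x.ravel())                            # blended position and velocity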

  19. Different microprocessor controlled devices for ITU TRIGA Mark II reactor

    Can, B.; Omuz, S.; Uzun, S.; Apan, H.

    1990-01-01

    In this paper the design of a microprocessor-controlled period meter and multichannel thermometer for use at the ITU TRIGA Mark-II Reactor is presented. The system works as a simple microcomputer, comprising a CPU, an EPROM, a RAM, a CTC, a PIO, a PIA, a keyboard and displays, and is programmed in assembly language. The period meter can work either with a pulse signal or with an analog signal, depending on the demand of the user. The period is calculated in software and its range is -99.9 sec to +2.1 sec. When the period drops below +3 sec, the system raises an alarm by illuminating a LED. The multichannel thermometer has eight temperature channels, which can be selected manually or automatically, and the channel selection time can be adjusted. The thermometer raises an alarm, illuminating a LED, when the temperature rises to 600 C. Temperature data are stored in the RAM and shown on a display. This system allows us to use four spare thermocouples in the reactor. (orig.)
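
    The quantity computed by such an instrument is the reactor period, n / (dn/dt), i.e. the time for the flux to change by a factor e. The sketch below (plain Python, not the instrument's assembly firmware) estimates it from two successive flux samples and mimics the short-period alarm mentioned above.

        # Reactor period from sampled neutron flux: period = dt / (ln(n2) - ln(n1)).
        # Not the instrument's firmware; just the arithmetic behind it.
        import math

        def reactor_period(n_prev, n_curr, dt_s):
            dln = math.log(n_curr) - math.log(n_prev)
            return float("inf") if dln == 0.0 else dt_s / dln

        print(round(reactor_period(1.00, 1.05, 0.5), 1))   # flux up 5% in 0.5 s -> ~10 s period

        # Alarm condition analogous to the one described in the abstract
        if 0.0 < reactor_period(1.00, 1.20, 0.5) < 3.0:
            print("short-period alarm")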

  20. Environmental dose measurement with microprocessor based portable TLD reader

    Deme, S.; Apathy, I.; Feher, I.

    1996-01-01

    Application of the TL method to environmental gamma-radiation dosimetry involves uncertainty caused by the dose collected during transport from the point of annealing to the place of exposure and back to the place of evaluation. Should an accident occur, readout is delayed by the need to transport the dosemeters to a laboratory equipped with a TLD reader. A portable reader capable of reading out the TL dosemeter at the place of exposure (an 'in situ TLD reader') eliminates the above mentioned disadvantages. We have developed a microprocessor based portable TLD reader for monitoring environmental gamma-radiation doses and for on-board readout of doses on space stations. The first version of our portable, battery operated reader (named Pille - 'butterfly') was made at the beginning of the 80s. These devices used CaSO4 bulb dosemeters, and the evaluation technique was based on analogue timing circuits and analogue to digital conversion of the photomultiplier current, with a readout precision of 1 μGy and a measuring range up to 10 Gy. The measured values were displayed and manually recorded. The version with an external power supply was used for space dosimetry as an onboard TLD reader

  1. Spectrally high performing quantum cascade lasers

    Toor, Fatima

    Quantum cascade (QC) lasers are versatile semiconductor light sources that can be engineered to emit light of almost any wavelength in the mid- to far-infrared (IR) and terahertz region from 3 to 300 μm [1-5]. Furthermore QC laser technology in the mid-IR range has great potential for applications in environmental, medical and industrial trace gas sensing [6-10] since several chemical vapors have strong rovibrational frequencies in this range and are uniquely identifiable by their absorption spectra through optical probing of absorption and transmission. Therefore, having a wide range of mid-IR wavelengths in a single QC laser source would greatly increase the specificity of QC laser-based spectroscopic systems, and also make them more compact and field deployable. This thesis presents work on several different approaches to multi-wavelength QC laser sources that take advantage of band-structure engineering and the uni-polar nature of QC lasers. Also, since lasers with narrow linewidth are needed for chemical sensing, work is presented on a single mode distributed feedback (DFB) QC laser. First, a compact four-wavelength QC laser source is presented, based on a 2-by-2 module design with two waveguides having QC laser stacks for two different emission wavelengths each, one with 7.0 μm/11.2 μm and the other with 8.7 μm/12.0 μm. This is the first design of a four-wavelength QC laser source with widely different emission wavelengths that uses minimal optics and electronics. Second, since there are still several unknown factors that affect QC laser performance, results of a first-ever study conducted to determine the effects of waveguide side-wall roughness on QC laser performance using the two-wavelength waveguides are presented. The results are consistent with Rayleigh scattering effects in the waveguides, with roughness affecting shorter wavelengths more than longer wavelengths. Third, a versatile time-multiplexed multi-wavelength QC laser system that

  2. Nova performance at ultra high fluence levels

    Hunt, J.T.

    1986-01-01

    Nova is a ten beam high power Nd:glass laser used for inertial confinement fusion research. It was operated in the high power, high energy regime following the completion of construction in December 1984. During this period several interesting nonlinear optical phenomena were observed. These phenomena are discussed in the text. 11 refs., 5 figs

  3. Durability and Performance of High Performance Infiltration Cathodes

    Samson, Alfred Junio; Søgaard, Martin; Hjalmarsson, Per

    2013-01-01

    The performance and durability of solid oxide fuel cell (SOFC) cathodes consisting of a porous Ce0.9Gd0.1O1.95 (CGO) infiltrated with nitrates corresponding to the nominal compositions La0.6Sr0.4Co1.05O3-δ (LSC), LaCoO3-δ (LC), and Co3O4 are discussed. At 600°C, the polarization resistance, Rp, varied as: LSC (0.062 Ω cm2) ... cathode was found to depend on the infiltrate firing temperature and is suggested to originate ... of the infiltrate but also from a better surface exchange property. A 450 h test of an LSC-infiltrated CGO cathode showed an Rp with a final degradation rate of only 11 mΩ cm2 kh-1. An SOFC with an LSC-infiltrated CGO cathode tested for 1,500 h at 700°C and 0.5 A cm-2 (60% fuel, 20% air utilization) revealed no measurable...

  4. From adaptive to high-performance structures

    Teuffel, P.

    2011-01-01

    Multiple design aspects influence the building performance such as architectural criteria, various environmental impacts and user behaviour. Specific examples are sun, wind, temperatures, function, occupancy, socio-cultural aspects and other contextual aspects and needs. Even though these aspects

  5. High-performance-vehicle technology. [fighter aircraft propulsion

    Povinelli, L. A.

    1979-01-01

    Propulsion needs of high performance military aircraft are discussed. Inlet performance, nozzle performance and cooling, and afterburner performance are covered. It is concluded that nonaxisymmetric nozzles provide cleaner external lines and enhanced maneuverability, but the internal flows are more complex. Swirl afterburners show promise for enhanced performance in the high altitude, low Mach number region.

  6. Regulation of Plant Microprocessor Function in Shaping microRNA Landscape

    Jakub Dolata

    2018-06-01

    Full Text Available MicroRNAs are small molecules (∼21 nucleotides long that are key regulators of gene expression. They originate from long stem–loop RNAs as a product of cleavage by a protein complex called Microprocessor. The core components of the plant Microprocessor are the RNase type III enzyme Dicer-Like 1 (DCL1, the zinc finger protein Serrate (SE, and the double-stranded RNA binding protein Hyponastic Leaves 1 (HYL1. Microprocessor assembly and its processing of microRNA precursors have been reported to occur in discrete nuclear bodies called Dicing bodies. The accessibility of and modifications to Microprocessor components affect microRNA levels and may have dramatic consequences in plant development. Currently, numerous lines of evidence indicate that plant Microprocessor activity is tightly regulated. The cellular localization of HYL1 is dependent on a specific KETCH1 importin, and the E3 ubiquitin ligase COP1 indirectly protects HYL1 from degradation in a light-dependent manner. Furthermore, proper localization of HYL1 in Dicing bodies is regulated by MOS2. On the other hand, the Dicing body localization of DCL1 is regulated by NOT2b, which also interacts with SE in the nucleus. Post-translational modifications are substantial factors that contribute to protein functional diversity and provide a fine-tuning system for the regulation of protein activity. The phosphorylation status of HYL1 is crucial for its activity/stability and is a result of the interplay between kinases (MPK3 and SnRK2 and phosphatases (CPL1 and PP4. Additionally, MPK3 and SnRK2 are known to phosphorylate SE. Several other proteins (e.g., TGH, CDF2, SIC, and RCF3 that interact with Microprocessor have been found to influence its RNA-binding and processing activities. In this minireview, recent findings on the various modes of Microprocessor activity regulation are discussed.

  7. A high performance thermoacoustic Stirling-engine

    Tijani, M.E.H.; Spoelstra, S. [Energy research Centre of the Netherlands (ECN), PO Box 1, 1755 ZG Petten (Netherlands)

    2011-11-10

    In thermoacoustic systems heat is converted into acoustic energy and vice versa. These systems use inert gases as working medium and have no moving parts which makes the thermoacoustic technology a serious alternative to produce mechanical or electrical power, cooling power, and heating in a sustainable and environmentally friendly way. A thermoacoustic Stirling heat engine is designed and built which achieves a record performance of 49% of the Carnot efficiency. The design and performance of the engine is presented. The engine has no moving parts and is made up of few simple components.

  8. Psychological factors in developing high performance athletes

    Elbe, Anne-Marie; Wikman, Johan Michael

    2017-01-01

    calls for great efforts in dealing with competitive pressure and demands mental strength with regard to endurance, self-motivation and willpower. But while it is somewhat straightforward to specify the physical and physiological skills needed for top performance in a specific sport, it becomes less...... clear with regard to the psychological skills that are needed. Therefore, the main questions to be addressed in this chapter are: (1) which psychological skills are needed to reach top performance? And (2) (how) can these skills be developed in young talents?...

  9. High Performance Expectations: Concept and causes

    Andersen, Lotte Bøgh; Jacobsen, Christian Bøtcher

    2017-01-01

    literature research, HPE is defined as the degree to which leaders succeed in expressing ambitious expectations to their employees’ achievement of given performance criteria, and it is analyzed how leadership behavior affects employee-perceived HPE. This study applies a large-scale leadership field...... experiment with 3,730 employees nested in 471 organizations and finds that transformational leadership training as well as transactional and combined training of the leaders significantly increased employees’ HPE relative to a control group. Furthermore, transformational leadership and the use of pecuniary...... rewards seem to be important mechanisms. This implies that public leaders can actually affect HPE through their leadership and thus potentially organizational performance as well....

  10. High Rate Performing Li-ion Battery

    2015-02-09

  11. Engendering a high performing organisational culture through ...

    Concluding that Africa's poor organisational performances are attributable to some inadequacies in the cultural foundations of countries and organisations, this paper argues for internal branding as the way forward for African organisations. Through internal branding an African organization can use a systematic and ...

  12. Mastering JavaScript high performance

    Adams, Chad R

    2015-01-01

    If you are a JavaScript developer with some experience in development and want to increase the performance of JavaScript projects by building faster web apps, then this book is for you. You should know the basic concepts of JavaScript.

  13. Gamma and Xray spectroscopy at high performance

    Borchert, G.L.

    1984-01-01

    The author determines that for many interesting problems in gamma and Xray spectroscopy it is necessary to use crystal diffractometers. The basic features of such instruments are discussed and the special performance of crystal spectrometers is demonstrated by means of typical examples of various applications

  14. High Performance Fortran for Aerospace Applications

    Mehrotra, Piyush

    2000-01-01

    .... HPF is a set of Fortran extensions designed to provide users with a high-level interface for programming data parallel scientific applications while delegating to the compiler/runtime system the task...

  15. High performance management bij franchise-supermarkten

    Sloot, Laurens; van Nierop, Erjen; de Waal, Andre

    This article presents a study of the extent to which franchise supermarkets satisfy the five factors of high performance organisations (HPO): high-quality management, high-quality employees, openness and action orientation, continuous improvement and renewal, and

  16. High performance fuel technology development : Development of high performance cladding materials

    Park, Jeongyong; Jeong, Y. H.; Park, S. Y.

    2012-04-01

    The superior in-pile performance of the HANA claddings has been verified by successful irradiation testing in the Halden research reactor up to the high burn-up of 67 GWD/MTU. The in-pile corrosion and creep resistances of HANA claddings were improved by 40% and 50%, respectively, over Zircaloy-4. HANA claddings have also been irradiated in a commercial reactor for up to 2 reactor cycles, showing corrosion resistance 40% better than that of ZIRLO in the same fuel assembly. Long-term out-of-pile performance tests for the candidate next generation cladding materials have produced highly reliable results. The final candidate alloys were selected; they showed corrosion resistance 50% better than the foreign advanced claddings, which is beyond the original target. The LOCA-related properties were also improved by 20% over the foreign advanced claddings. In order to establish the optimal manufacturing process for the inner and outer claddings of the dual-cooled fuel, 18 different kinds of specimens were fabricated with various cold working and annealing conditions. Based on the performance tests and various out-of-pile test results obtained from these specimens, the optimal manufacturing process was established for the inner and outer cladding tubes of the dual-cooled fuel.

  17. Menhir: An Environment for High Performance Matlab

    Stéphane Chauveau

    1999-01-01

    Full Text Available In this paper we present Menhir, a compiler for generating sequential or parallel code from the Matlab language. The compiler has been designed in the context of using Matlab as a specification language. One of the major features of Menhir is its retargetability to generate parallel and sequential C or Fortran code. We present the compilation process and the target system description for Menhir. Preliminary performance results are given and compared with MCC, the MathWorks Matlab compiler.

  18. Inclusion control in high-performance steels

    Holappa, L.E.K.; Helle, A.S.

    1995-01-01

    The progress of clean steel production, the fundamentals of oxide and sulphide inclusions, and inclusion morphology in normal and calcium treated steels are described. Effects of cleanliness and inclusion control on steel properties are discussed. In many demanding constructional and engineering applications, nonmetallic inclusions play a decisive role in steel performance. An example of combining good mechanical properties with superior machinability by applying inclusion control is presented. (author)

  19. Emerging technologies for high performance infrared detectors

    Tan Chee Leong; Mohseni Hooman

    2018-01-01

    Infrared photodetectors (IRPDs) have become important devices in various applications such as night vision, military missile tracking, medical imaging, industrial defect imaging, environmental sensing, and exoplanet exploration. Mature semiconductor technologies such as mercury cadmium telluride and III–V material-based photodetectors have been dominating the industry. However, in the last few decades, significant funding and research has been focused on improving the performance of IRPDs such as...

  20. Development of a high performance liquid chromatography method ...

    Development of a high performance liquid chromatography method for simultaneous ... Purpose: To develop and validate a new low-cost high performance liquid chromatography (HPLC) method for ..... Several papers have reported the use of ...

  1. High Performance Home Building Guide for Habitat for Humanity Affiliates

    Lindsey Marburger

    2010-10-01

    This guide covers basic principles of high performance Habitat construction, steps to achieving high performance Habitat construction, resources to help improve building practices, materials, etc., and affiliate profiles and recommendations.

  2. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  3. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  4. Development of High Performance Piezoelectric Polyimides

    Simpson, Joycelyn O.; St.Clair, Terry L.; Welch, Sharon S.

    1996-01-01

    In this work a series of polyimides are investigated which exhibit a strong piezoelectric response and polarization stability at temperatures in excess of 100 C. This work was motivated by the need to develop piezoelectric sensors suitable for use in high temperature aerospace applications.

  5. Powder metallurgical high performance materials. Proceedings. Volume 1: high performance P/M metals

    Kneringer, G.; Roedhammer, P.; Wildner, H.

    2001-01-01

    The proceedings of this sequence of seminars form an impressive chronicle of the continued progress in the understanding of refractory metals and cemented carbides and in their manufacture and application. There the ingenuity and assiduous work of thousands of scientists and engineers striving for progress in the field of powder metallurgy is documented in more than 2000 contributions covering some 30000 pages. The 15th Plansee Seminar was convened under the general theme 'Powder Metallurgical High Performance Materials'. Under this broadened perspective the seminar will strive to look beyond the refractory metals and cemented carbides, which remain at its focus, to novel classes of materials, such as intermetallic compounds, with potential for high temperature applications. (author)

  7. High performance flexible electronics for biomedical devices.

    Salvatore, Giovanni A; Munzenrieder, Niko; Zysset, Christoph; Kinkeldei, Thomas; Petti, Luisa; Troster, Gerhard

    2014-01-01

    Plastic electronics is soft, deformable and lightweight and it is suitable for the realization of devices which can form an intimate interface with the body, be implanted or integrated into textile for wearable and biomedical applications. Here, we present flexible electronics based on amorphous oxide semiconductors (a-IGZO) whose performance can achieve MHz frequency even when bent around hair. We developed an assembly technique to integrate complex electronic functionalities into textile while preserving the softness of the garment. All this and further developments can open up new opportunities in health monitoring, biotechnology and telemedicine.

  8. High performance image processing of SPRINT

    DeGroot, T. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    This talk will describe computed tomography (CT) reconstruction using filtered back-projection on SPRINT parallel computers. CT is a computationally intensive task, typically requiring several minutes to reconstruct a 512x512 image. SPRINT and other parallel computers can be applied to CT reconstruction to reduce computation time from minutes to seconds. SPRINT is a family of massively parallel computers developed at LLNL. SPRINT-2.5 is a 128-node multiprocessor whose performance can exceed twice that of a Cray-Y/MP. SPRINT-3 will be 10 times faster. Described will be the parallel algorithms for filtered back-projection and their execution on SPRINT parallel computers.
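
    The SPRINT implementation itself is not reproduced here, but filtered back-projection for a single 2-D slice can be sketched with scikit-image; since each slice (and each projection angle) is independent, the work distributes naturally across many processors.

        # Filtered back-projection sketch for one slice using scikit-image
        # (default ramp filter); the SPRINT-specific parallel code is not shown.
        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon, resize

        image = resize(shepp_logan_phantom(), (128, 128))       # test slice
        theta = np.linspace(0.0, 180.0, 180, endpoint=False)    # projection angles

        sinogram = radon(image, theta=theta)                    # simulated projections
        reconstruction = iradon(sinogram, theta=theta)          # filtered back-projection

        print("mean reconstruction error:", float(np.abs(reconstruction - image).mean()))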

  9. High-performance commercial building facades

    Lee, Eleanor; Selkowitz, Stephen; Bazjanac, Vladimir; Inkarojrit, Vorapat; Kohler, Christian

    2002-06-01

    This study focuses on advanced building facades that use daylighting, sun control, ventilation systems, and dynamic systems. A quick perusal of the leading architectural magazines, or a discussion in most architectural firms today will eventually lead to mention of some of the innovative new buildings that are being constructed with all-glass facades. Most of these buildings are appearing in Europe, although interestingly U.S. A/E firms often have a leading role in their design. This ''emerging technology'' of heavily glazed facades is often associated with buildings whose design goals include energy efficiency, sustainability, and a ''green'' image. While there are a number of new books on the subject with impressive photos and drawings, there is little critical examination of the actual performance of such buildings, and a generally poor understanding as to whether they achieve their performance goals, or even what those goals might be. Even if the building ''works'' it is often dangerous to take a design solution from one climate and location and transport it to a new one without a good causal understanding of how the systems work. In addition, there is a wide range of existing and emerging glazing and fenestration technologies in use in these buildings, many of which break new ground with respect to innovative structural use of glass. It is unclear as to how well many of these designs would work as currently formulated in California locations dominated by intense sunlight and seismic events. Finally, the costs of these systems are higher than normal facades, but claims of energy and productivity savings are used to justify some of them. Once again these claims, while plausible, are largely unsupported. There have been major advances in glazing and facade technology over the past 30 years and we expect to see continued innovation and product development. It is critical in this process to be able to

  10. Miniaturized high performance sensors for space plasmas

    Young, D.T.

    1996-01-01

    Operating under ever more constrained budgets, NASA has turned to a new paradigm for instrumentation and mission development in which smaller, faster, better, cheaper is of primary consideration for future space plasma investigations. The author presents several examples showing the influence of this new paradigm on sensor development and discusses certain implications for the scientific return from resource constrained sensors. The author also discusses one way to improve space plasma sensor performance, which is to search out new technologies, measurement techniques and instrument analogs from related fields including, among others, laboratory plasma physics

  11. High Performance Building Mockup in FLEXLAB

    McNeil, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kohler, Christian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lee, Eleanor S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Selkowitz, Stephen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-08-30

    Genentech has ambitious energy and indoor environmental quality performance goals for Building 35 (B35) being constructed by Webcor at the South San Francisco campus. Genentech and Webcor contracted with the Lawrence Berkeley National Laboratory (LBNL) to test building systems including lighting, lighting controls, shade fabric, and automated shading controls in LBNL’s new FLEXLAB facility. The goal of the testing is to ensure that the systems installed in the new office building will function in a way that reduces energy consumption and provides a comfortable work environment for employees.

  12. High performance computations using dynamical nucleation theory

    Windus, T L; Crosby, L D; Kathmann, S M

    2008-01-01

    Chemists continue to explore the use of very large computations to perform simulations that describe the molecular level physics of critical challenges in science. In this paper, we describe the Dynamical Nucleation Theory Monte Carlo (DNTMC) model - a model for determining molecular scale nucleation rate constants - and its parallel capabilities. The potential for bottlenecks and the challenges to running on future petascale or larger resources are delineated. A 'master-slave' solution is proposed to scale to the petascale and will be developed in the NWChem software. In addition, mathematical and data analysis challenges are described
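
    The proposed master-slave scheme itself lives in NWChem; a generic version of the pattern can be sketched in Python with multiprocessing, where a master process hands independent Monte Carlo work units to workers and aggregates the partial results. The "simulation" inside each worker is a trivial stand-in.

        # Generic master-worker sketch (not NWChem code): a master hands out
        # independent Monte Carlo work units and collects partial results.
        import random
        from multiprocessing import Process, Queue

        def worker(tasks, results):
            while True:
                item = tasks.get()
                if item is None:                 # sentinel: no more work
                    break
                seed, n_samples = item
                rng = random.Random(seed)
                # stand-in "simulation": average of random draws
                results.put(sum(rng.random() for _ in range(n_samples)) / n_samples)

        if __name__ == "__main__":
            tasks, results = Queue(), Queue()
            n_workers, n_tasks = 4, 16
            procs = [Process(target=worker, args=(tasks, results)) for _ in range(n_workers)]
            for p in procs:
                p.start()
            for seed in range(n_tasks):
                tasks.put((seed, 10_000))
            for _ in range(n_workers):
                tasks.put(None)                  # one sentinel per worker
            partials = [results.get() for _ in range(n_tasks)]
            for p in procs:
                p.join()
            print("combined estimate:", sum(partials) / len(partials))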

  13. Pressurized planar electrochromatography, high-performance thin-layer chromatography and high-performance liquid chromatography--comparison of performance.

    Płocharz, Paweł; Klimek-Turek, Anna; Dzido, Tadeusz H

    2010-07-16

    Kinetic performance, measured by plate height, of High-Performance Thin-Layer Chromatography (HPTLC), High-Performance Liquid Chromatography (HPLC) and Pressurized Planar Electrochromatography (PPEC) was compared for systems with the adsorbent of the HPTLC RP18W plate from Merck as the stationary phase and a mobile phase composed of acetonitrile and buffer solution. The HPLC column was packed with the adsorbent, which was scraped from the chromatographic plate mentioned. An additional HPLC column was also packed with an adsorbent of 5 microm particle diameter, C18 type silica based (LiChrosorb RP-18 from Merck). The dependence of plate height on flow velocity of the mobile phase in the HPLC and PPEC systems, and on migration distance of the mobile phase in the TLC system, was presented using a test solute (prednisolone succinate). The highest performance amongst the systems investigated was obtained for the PPEC system. The separation efficiency of the systems investigated in the paper was additionally confirmed by the separation of a test mixture composed of six hormones.
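
    Plate height is obtained from the plate number in the usual way, N = 5.54 (t_R / w_1/2)^2 and H = L / N; the worked numbers below are made up purely to show the arithmetic.

        # Worked illustration of the efficiency measures compared in the abstract.
        # Retention time, peak width at half height and bed length are invented.
        def plate_number(t_retention, w_half_height):
            return 5.54 * (t_retention / w_half_height) ** 2

        def plate_height_mm(bed_length_mm, n_plates):
            return bed_length_mm / n_plates

        N = plate_number(t_retention=6.2, w_half_height=0.15)       # same time units
        H = plate_height_mm(bed_length_mm=45.0, n_plates=N)
        print(f"N ~ {N:.0f} plates, H ~ {H * 1000:.1f} micrometres")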

  14. Can Knowledge of the Characteristics of "High Performers" Be Generalised?

    McKenna, Stephen

    2002-01-01

    Two managers described as high performing constructed complexity maps of their organization/world. The maps suggested that high performance is socially constructed and negotiated in specific contexts and management competencies associated with it are context specific. Development of high performers thus requires personalized coaching more than…

  15. Equipment calibration with a microprocessor connected to a time-sharing system

    Fontaine, G.; Guglielmi, L.; Jaeger, J.J.; Szafran, S.

    1981-01-01

    In H.E.P., it is common practice to test and calibrate equipment at different stages (design, construction checks, setting up and running periods) with a dedicated mini or micro-computer (such as CERN CAVIAR). An alternative solution has been developed in which such tasks are split between a microprocessor (Motorola 6800) and a host computer; this allows an easy and cheap multiplication of independent testing set-ups. The local processor is limited to CAMAC data acquisition, histogramming and simple processing, but its computing power is enhanced by a connection to a host time-sharing system via a MUMM multiplexor described in a separate paper. It is thus possible to perform sophisticated computations (fits etc...) and to use the host disk space to store calibration results for later use. In spite of the use of assembly language, a software structure has been devised to ease the construction of an application program. This is achieved by the interplay of three levels of facilities: macro-instructions, a library of subroutines, and Patchy controlled pieces of programs. A comprehensive collection of these is kept in the form of PAM files on the host computer. This system has been used to test calorimeter modules for the UA 1 experiment. (orig.)

  16. Stair ascent with an innovative microprocessor-controlled exoprosthetic knee joint.

    Bellmann, Malte; Schmalz, Thomas; Ludwigs, Eva; Blumentritt, Siegmar

    2012-12-01

    Climbing stairs can pose a major challenge for above-knee amputees as a result of compromised motor performance and limitations to prosthetic design. A new, innovative microprocessor-controlled prosthetic knee joint, the Genium, incorporates a function that allows an above-knee amputee to climb stairs step over step. To execute this function, a number of different sensors and complex switching algorithms were integrated into the prosthetic knee joint. The function is intuitive for the user. A biomechanical study was conducted to assess objective gait measurements and calculate joint kinematics and kinetics as subjects ascended stairs. Results demonstrated that climbing stairs step over step is more biomechanically efficient for an amputee using the Genium prosthetic knee than the previously possible conventional method where the extended prosthesis is trailed as the amputee executes one or two steps at a time. There is a natural amount of stress on the residual musculoskeletal system, and it has been shown that the healthy contralateral side supports the movements of the amputated side. The mechanical power that the healthy contralateral knee joint needs to generate during the extension phase is also reduced. Similarly, there is near normal loading of the hip joint on the amputated side.

  17. SEU simulation and testing of resistor-hardened D-latches in the SA3300 microprocessor

    Sexton, F.W.; Corbett, W.T.; Treece, R.K.; Hass, K.J.; Axness, C.L.; Hash, G.L.; Shaneyfelt, M.R.; Wunsch, T.F.; Hughes, K.L.

    1991-01-01

    In this paper the SEU tolerance of the SA3300 microprocessor with feedback resistors is presented and compared to the SA3300 without feedback resistors and to the commercial version (NS32016). The upset threshold at room temperature increased to 23 MeV-cm2/mg and 180 MeV-cm2/mg with feedback resistors of 50 kΩ and 160 kΩ, respectively. The performance goal of 10 MHz over the full temperature range of -55 degrees C to +125 degrees C is exceeded for feedback resistors of 160 kΩ and less. Error rate calculations for this design predict an error rate of less than once every 100 years when 50 kΩ feedback resistors are used in the D-latch design. Analysis of the SEU response using a lumped-parameter circuit simulator implies a charge collection depth of 4.5 μm. This is much deeper than the authors would expect for prompt collection in the epi and funnel regions and has been explained in terms of diffusion current in the heavily doped substrate

  18. A high performance totally ordered multicast protocol

    Montgomery, Todd; Whetten, Brian; Kaplan, Simon

    1995-01-01

    This paper presents the Reliable Multicast Protocol (RMP). RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service such as IP Multicasting. RMP is fully and symmetrically distributed so that no site bears an undue portion of the communication load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority resilient, and totally resilient atomic delivery. These QoS guarantees are selectable on a per packet basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, an implicit naming service, mutually exclusive handlers for messages, and mutually exclusive locks. It has commonly been held that a large performance penalty must be paid in order to implement total ordering -- RMP discounts this. On SparcStation 10's on a 1250 KB/sec Ethernet, RMP provides totally ordered packet delivery to one destination at 842 KB/sec throughput and with 3.1 ms packet latency. The performance stays roughly constant independent of the number of destinations. For two or more destinations on a LAN, RMP provides higher throughput than any protocol that does not use multicast or broadcast.
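
    RMP's own distributed ordering algorithm is not reproduced here; the sketch below uses a much simpler central-sequencer scheme only to illustrate what "totally ordered delivery" means: every receiver delivers messages in the single global order of sequence numbers, buffering anything that arrives early.

        # Central-sequencer total ordering, shown only to illustrate the concept;
        # this is deliberately not RMP's fully distributed algorithm.
        import heapq
        import itertools

        class Sequencer:
            """Assigns a single global sequence number to every multicast message."""
            def __init__(self):
                self._counter = itertools.count()
            def stamp(self, msg):
                return (next(self._counter), msg)

        class Receiver:
            """Delivers messages strictly in sequence order, buffering any gaps."""
            def __init__(self):
                self._next = 0
                self._pending = []
            def on_packet(self, seq, msg):
                heapq.heappush(self._pending, (seq, msg))
                delivered = []
                while self._pending and self._pending[0][0] == self._next:
                    delivered.append(heapq.heappop(self._pending)[1])
                    self._next += 1
                return delivered

        seq, rx = Sequencer(), Receiver()
        packets = [seq.stamp(m) for m in ("a", "b", "c")]
        print(rx.on_packet(*packets[1]))   # [] -- "b" buffered until "a" arrives
        print(rx.on_packet(*packets[0]))   # ['a', 'b']
        print(rx.on_packet(*packets[2]))   # ['c']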

  19. High Performance, Three-Dimensional Bilateral Filtering

    Bethel, E. Wes

    2008-01-01

    Image smoothing is a fundamental operation in computer vision and image processing. This work has two main thrusts: (1) implementation of a bilateral filter suitable for use in smoothing, or denoising, 3D volumetric data; (2) implementation of the 3D bilateral filter in three different parallelization models, along with parallel performance studies on two modern HPC architectures. Our bilateral filter formulation is based upon the work of Tomasi [11], but extended to 3D for use on volumetric data. Our three parallel implementations use POSIX threads, the Message Passing Interface (MPI), and Unified Parallel C (UPC), a Partitioned Global Address Space (PGAS) language. Our parallel performance studies, which were conducted on a Cray XT4 supercomputer and aquad-socket, quad-core Opteron workstation, show our algorithm to have near-perfect scalability up to 120 processors. Parallel algorithms, such as the one we present here, will have an increasingly important role for use in production visual analysis systems as the underlying computational platforms transition from single- to multi-core architectures in the future.
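
    The serial kernel being parallelized can be written directly in NumPy; the brute-force sketch below is not the paper's threaded/MPI/UPC code, but it shows the per-voxel work (a product of a spatial and a range Gaussian) that gets partitioned across processors.

        # Direct (brute-force) 3-D bilateral filter on a small volume, illustrative only.
        import numpy as np

        def bilateral_filter_3d(vol, radius=2, sigma_s=1.5, sigma_r=0.1):
            out = np.empty_like(vol)
            pad = np.pad(vol, radius, mode="edge")
            # Precompute the spatial (domain) Gaussian weights for the window
            ax = np.arange(-radius, radius + 1)
            dz, dy, dx = np.meshgrid(ax, ax, ax, indexing="ij")
            w_spatial = np.exp(-(dz**2 + dy**2 + dx**2) / (2 * sigma_s**2))
            for z in range(vol.shape[0]):
                for y in range(vol.shape[1]):
                    for x in range(vol.shape[2]):
                        win = pad[z:z + 2*radius + 1, y:y + 2*radius + 1, x:x + 2*radius + 1]
                        # Range (photometric) weights relative to the center voxel
                        w_range = np.exp(-((win - vol[z, y, x])**2) / (2 * sigma_r**2))
                        w = w_spatial * w_range
                        out[z, y, x] = np.sum(w * win) / np.sum(w)
            return out

        noisy = np.random.default_rng(0).normal(0.5, 0.05, size=(16, 16, 16))
        print(bilateral_filter_3d(noisy).std() < noisy.std())   # smoother output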

  20. High-performance sport, marijuana, and cannabimimetics.

    Hilderbrand, Richard L

    2011-11-01

    The prohibition on use of cannabinoids in sporting competitions has been widely debated and continues to be a contentious issue. Information continues to accumulate on the adverse health effects of smoked marijuana and the decrement of performance caused by the use of cannabinoids. The objective of this article is to provide an overview of cannabinoids and cannabimimetics that directly or indirectly impact sport, the rules of sport, and performance of the athlete. This article reviews some of the history of marijuana in Olympic and Collegiate sport, summarizes the guidelines by which a substance is added to the World Anti-Doping Agency Prohibited List, and updates information on the pharmacologic effects of cannabinoids and their mechanism of action. The recently marketed cannabimimetics Spice and K2 are included in the discussion as they activate the same receptors as are activated by THC. The article also provides a view as to why the World Anti-Doping Agency prohibits cannabinoid or cannabimimetic use in competition and should continue to do so.

  2. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    V. N. Adrov

    2012-07-01

    Full Text Available Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent bits as input and produce independent bits as output. The independence of bits comes from the nature of such algorithms, since images, stereopairs or small image blocks can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human interference. Photogrammetric workstations can perform tie point measurements, DTM calculations, orthophoto construction, mosaicing and many other service operations in parallel using distributed calculations. Distributed calculations save time, reducing computations that would take several days to several hours. Modern trends in computer technology show an increase in the number of CPU cores in workstations, speed increases in local networks, and, as a result, a drop in the price of supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in a DPW is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since the processing of huge amounts of large raster images is needed.
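
    The "independent bits in, independent bits out" structure maps directly onto a process pool; the sketch below (a stand-in, not any DPW's actual code) splits an image into tiles and processes them on all available CPU cores, with a trivial per-tile operation in place of real photogrammetric work.

        # Embarrassingly parallel tile processing with a process pool; the per-tile
        # job is a placeholder for matching, filtering, orthorectification, etc.
        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def process_tile(tile):
            """Stand-in per-tile job."""
            return float(tile.mean())

        def split_into_tiles(image, tile=256):
            h, w = image.shape
            return [image[r:r + tile, c:c + tile]
                    for r in range(0, h, tile) for c in range(0, w, tile)]

        if __name__ == "__main__":
            image = np.random.default_rng(1).integers(0, 255, size=(1024, 1024)).astype(np.float32)
            tiles = split_into_tiles(image)
            with ProcessPoolExecutor() as pool:            # one worker per CPU core
                results = list(pool.map(process_tile, tiles))
            print(f"{len(tiles)} tiles processed, mean of means = {np.mean(results):.2f}")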

  3. Australia's new high performance research reactor

    Miller, R.; Abbate, P.M.

    2003-01-01

    A contract for the design and construction of the Replacement Research Reactor was signed in July 2000 between ANSTO and INVAP from Argentina. Since then the detailed design has been completed, a construction authorization has been obtained, and construction has commenced. The reactor design embodies modern safety thinking together with innovative solutions to ensure a highly safe and reliable plant. Significant effort has also been placed on providing the facility with diverse and ample capabilities to maximize its use for irradiating material for radioisotope production as well as providing high neutron fluxes for neutron beam research. The project management organization and planning are commensurate with the complexity of the project and the number of players involved. (author)

  4. High Performance Single Nanowire Tunnel Diodes

    Wallentin, Jesper; Persson, Johan Mikael; Wagner, Jakob Birkedal

    NWs were contacted in a NW-FET setup. Electrical measurements at room temperature display typical tunnel diode behavior, with a Peak-to-Valley Current Ratio (PVCR) as high as 8.2 and a peak current density as high as 329 A/cm2. Low temperature measurements show improved PVCR of up to 27.6....... is the tunnel (Esaki) diode, which provides a low-resistance connection between junctions. We demonstrate an InP-GaAs NW axial heterostructure with tunnel diode behavior. InP and GaAs can be readily n- and p-doped, respectively, and the heterointerface is expected to have an advantageous type II band alignment...

  5. Future Vehicle Technologies : high performance transportation innovations

    Pratt, T. [Future Vehicle Technologies Inc., Maple Ridge, BC (Canada)

    2010-07-01

    Battery management systems (BMS) were discussed in this presentation, with particular reference to basic BMS design considerations; safety; undisclosed information about BMS; the essence of BMS; and Future Vehicle Technologies' BMS solution. Basic BMS design considerations that were presented included the balancing methodology; prismatic/cylindrical cells; cell protection; accuracy; PCB design, size and components; communications protocol; cost of manufacture; and expandability. In terms of safety, the presentation addressed lithium fires; high voltage; high voltage ground detection; crash/rollover shutdown; complete pack shutdown capability; and heat shields, casings, and impact protection. BMS bus bar engineering considerations were discussed along with good chip design. It was concluded that FVT's advantage is a unique skillset in automotive technology and its development speed and cost effectiveness. tabs., figs.

  6. Radiation cured coatings for high performance products

    Parkins, J.C.; Teesdale, D.H.

    1984-01-01

    Development over the past ten years of radiation-curable coating and lacquer systems and the means of curing them has led to new products in the packaging, flooring, furniture and other industries. Solventless lacquer systems formulated with acrylates and other resins enable high levels of durability, scuff resistance and gloss to be achieved. Ultraviolet and electron beam radiation curing are used, the choice depending on the nature of the coating, the product and the scale of the operation. (author)

  7. High thermoelectric performance of graphite nanofibers

    Tran, Van-Truong; Saint-Martin, Jérôme; Dollfus, Philippe; Volz, Sebastian

    2017-01-01

    Graphite nanofibers (GNFs) have been demonstrated to be a promising material for hydrogen storage and heat management in electronic devices. Here, by means of first-principles and transport simulations, we show that GNFs can also be an excellent material for thermoelectric applications thanks to the interlayer weak van der Waals interaction that induces low thermal conductance and a step-like shape in the electronic transmission with mini-gaps, which are necessary ingredients to achieve high ...

  8. New monomers for high performance polymers

    Gratz, Roy F.

    1993-01-01

    This laboratory has been concerned with the development of new polymeric materials with high thermo-oxidative stability for use in the aerospace and electronics industries. Currently, there is special emphasis on developing matrix resins and composites for the high speed civil transport (HSCT) program. This application requires polymers that have service lifetimes of 60,000 hr at 350 F (177 C) and that are readily processible into void-free composites, preferably by melt-flow or powder techniques that avoid the use of high boiling solvents. Recent work has focused on copolymers which have thermally stable imide groups separated by flexible arylene ether linkages, some with trifluoromethyl groups attached to the aromatic rings. The presence of trifluoromethyl groups in monomers and polymers often improves their solubility and processibility. The goal of this research was to synthesize several new monomers containing pendant trifluoromethyl groups and to incorporate these monomers into new imide/arylene ether copolymers. Initially, work was begun on the synthesis of three target compounds. The first two, 3,5-dihydroxybenzotrifluoride and 3-amino-5-hydroxybenzotrifluoride, are intermediates in the synthesis of more complex monomers. The third, 3,5-bis(3-aminophenoxy)benzotrifluoride, is an interesting diamine that could be incorporated into a polyimide directly.

  9. High performance repairing of reinforced concrete structures

    Iskhakov, I.; Ribakov, Y.; Holschemacher, K.; Mueller, T.

    2013-01-01

    Highlights: ► Steel fibered high strength concrete is effective for repairing concrete elements. ► By changing the fiber content, the required ductility of the repaired element is achieved. ► Experiments prove previously developed design concepts for two-layer beams. -- Abstract: Steel fibered high strength concrete (SFHSC) is an effective material that can be used for repairing concrete elements. Design of normal strength concrete (NSC) elements that should be repaired using SFHSC can be based on general concepts for design of two-layer beams, consisting of SFHSC in the compressed zone and NSC without fibers in the tensile zone. It was previously reported that such elements are effective when their section carries rather large bending moments. Steel fibers, added to high strength concrete, increase its ultimate deformations due to the additional energy dissipation potential contributed by the fibers. By changing the fiber content, a required ductility level of the repaired element can be achieved. Providing proper ductility is important for the design of structures subjected to dynamic loading. The current study discusses experimental results that form a basis for finding the optimal fiber content, yielding the highest Poisson coefficient and ductility of the repaired elements' sections. Some technological issues as well as the distribution of fibers in the cross section of two-layer bending elements are investigated. The experimental results, obtained within the framework of this study, form a basis for general technological provisions related to the repair of NSC beams and slabs using SFHSC.

  10. Information processing among high-performance managers

    S.C. Garcia-Santos

    2010-01-01

    The purpose of this study was to evaluate the information processing of 43 business managers with superior professional performance. The theoretical framework considers three models: the Theory of Managerial Roles of Henry Mintzberg, the Theory of Information Processing, and the Rorschach Response Process Model of John Exner. The participants were evaluated with the Rorschach method. The results show that these managers are able to collect data, evaluate them and establish rankings properly. At the same time, they are capable of being objective and accurate in assessing problems. This information-processing style permits an interpretation of the surrounding world on the basis of a very personal and characteristic processing manner or cognitive style.

  11. High temperature performance of polymer composites

    Keller, Thomas

    2014-01-01

    The authors explain the changes in the thermophysical and thermomechanical properties of polymer composites under elevated temperatures and fire conditions. Using microscale physical and chemical concepts they allow researchers to find reliable solutions to their engineering needs on the macroscale. In a unique combination of experimental results and quantitative models, a framework is developed to realistically predict the behavior of a variety of polymer composite materials over a wide range of thermal and mechanical loads. In addition, the authors treat extreme fire scenarios up to more than 1000°C for two hours, presenting heat-protection methods to improve the fire resistance of composite materials and full-scale structural members, and discuss their performance after fire exposure. Thanks to the microscopic approach, the developed models are valid for a variety of polymer composites and structural members, making this work applicable to a wide audience, including materials scientists, polymer chemist...

  12. High performance concrete with blended cement

    Biswas, P.P.; Saraswati, S.; Basu, P.C.

    2012-01-01

    The principal objectives of the proposed project are twofold: first, to develop HPC mixes suitable for NPP structures with blended cement, and second, to study the durability necessary for the desired long-term performance. The three grades of concrete to be considered in the proposed project are M35, M50 and M60, with two types of blended cement, i.e. Portland slag cement (PSC) and Portland pozzolana cement (PPC). Three types of mineral admixtures - silica fume, fly ash and ground granulated blast furnace slag - will be used. Concrete mixes with OPC and without any mineral admixture will be considered as the reference case. A durability study of these mixes will be carried out.

  13. High performance VLSI telemetry data systems

    Chesney, J.; Speciale, N.; Horner, W.; Sabia, S.

    1990-01-01

    NASA's deployment of major space complexes such as Space Station Freedom (SSF) and the Earth Observing System (EOS) will demand increased functionality and performance from ground based telemetry acquisition systems well above current system capabilities. Adaptation of space telemetry data transport and processing standards such as those specified by the Consultative Committee for Space Data Systems (CCSDS) standards and those required for commercial ground distribution of telemetry data, will drive these functional and performance requirements. In addition, budget limitations will force the requirement for higher modularity, flexibility, and interchangeability at lower cost in new ground telemetry data system elements. At NASA's Goddard Space Flight Center (GSFC), the design and development of generic ground telemetry data system elements, over the last five years, has resulted in significant solutions to these problems. This solution, referred to as the functional components approach includes both hardware and software components ready for end user application. The hardware functional components consist of modern data flow architectures utilizing Application Specific Integrated Circuits (ASIC's) developed specifically to support NASA's telemetry data systems needs and designed to meet a range of data rate requirements up to 300 Mbps. Real-time operating system software components support both embedded local software intelligence, and overall system control, status, processing, and interface requirements. These components, hardware and software, form the superstructure upon which project specific elements are added to complete a telemetry ground data system installation. This paper describes the functional components approach, some specific component examples, and a project example of the evolution from VLSI component, to basic board level functional component, to integrated telemetry data system.

  14. High Performance Fuel Technology Development(I)

    Song, Kun Woo; Kim, Keon Sik; Bang, Jeong Yong; Park, Je Keon; Chen, Tae Hyun; Kim, Hyung Kyu

    2010-04-01

    The dual-cooled annular fuel has been investigated with the aim of achieving a power uprate of 20% while decreasing pellet temperature by 30%. A 12x12 rod array and basic design were developed that are mechanically compatible with the OPR-1000. The reactor core analysis has been performed using this design, and the results have shown that the nuclear, thermohydraulic and safety design criteria are satisfied and that pellet temperature can be lowered by 40% even at 120% power. The basic design of the fuel components was developed and the cladding thickness was designed through analysis and experiments. Solutions have been proposed and analyzed for technical issues such as 'inner channel blockage' and 'imbalance between inner and outer coolant'. The annular pellet was fabricated with good control of shape and size and, in particular, a new sintering technique has been developed to control the deviation of the inner diameter to within ±5 μm. The irradiation test of annular pellets has been conducted up to 10 MWD/kgU to determine the densification and swelling behaviors. Eleven types of candidate materials have been developed for the PCI-endurance pellet, and the material containing the Mn-Al additive showed much better creep performance than the UO2 material. The HANA cladding has been irradiated up to 61 MWD/kgU, and the results have shown that its oxidation resistance is better by 40% than that of Zircaloy. Thirty types of candidate materials for the next generation have been developed through alloy design and property tests.

  15. Carbon nanotubes for high-performance logic

    Chen, Zhihong; Wong, H.S. Phillip; Mitra, Subhasish; Bol, Aggeth; Peng, Lianmao; Hills, Gage; Thissen, Nick

    2014-01-01

    Single-wall carbon nanotubes (CNTs) were discovered in 1993 and have been an area of intense research since then. They offer the right dimensions to explore material science and physical chemistry at the nanoscale and are the perfect system to study low-dimensional physics and transport. In the past decade, more attention has been shifted toward making use of this unique nanomaterial in real-world applications. In this article, we focus on potential applications of CNTs in the high-performanc...

  16. Environmentally friendly, high-performance generation

    Kalmari, A.

    2003-01-01

    The project developer, owner, and operator of the new 45 MWth BFB-based cogeneration plant in Iisalmi is Termia Oy, part of the Atro Group (formerly Savon Voima Oy). Fired on peat and wood waste and handed over to the customer in November 2002, the plant sells its electrical output to the parent company and heat locally to customers in Iisalmi. When the construction decision was made, one of the main objectives was to utilise as high a level of indigenous fuels (peat and biomass) as possible, at a high level of efficiency. An environmental impact analysis was carried out, taking into account the impact of various fuels and emissions in terms of combustion and logistics. One main benefit of the type of plant ultimately selected was that the bulk of the fuel can be supplied from the surrounding area. This is very important in terms of fuel supply security and local employment. The government provided a EUR 2.7 million grant for the project, equivalent to 13% of the total EUR 21 million investment budget. Before the plant was built, Termia used approximately 95 GWh of indigenous fuels annually. Today, this figure is 220 GWh. The main fuel used is milled peat. Up to 30% green chips from logging residues can be used. Recycled waste fuel can cover up to 3% of the total fuel requirement.

  17. Liquid Argon Calorimeter performance at High Rates

    Seifert, F; The ATLAS collaboration

    2013-01-01

    The expected increase of luminosity at the HL-LHC by a factor of ten with respect to LHC luminosities has serious consequences for the signal reconstruction, radiation hardness requirements and operation of the ATLAS liquid argon calorimeters in the endcap and forward regions. Small modules of each type of calorimeter have been built and exposed to a high-intensity 50 GeV proton beam at IHEP/Protvino. The beam is extracted via the bent-crystal technique, offering the unique opportunity to cover intensities ranging from $10^6$ p/s up to $3\cdot10^{11}$ p/s. This exceeds the energy deposited per unit time expected at the HL-LHC by more than a factor of 100. The correlation between beam intensity and the read-out signal has been studied. The data show clear indications of pulse-shape distortion due to the high ionization build-up, in agreement with MC expectations. This is also confirmed by the dependence of the HV currents on beam intensity.

  18. High-performance silicon nanowire bipolar phototransistors

    Tan, Siew Li; Zhao, Xingyan; Chen, Kaixiang; Crozier, Kenneth B.; Dan, Yaping

    2016-07-01

    Silicon nanowires (SiNWs) have emerged as sensitive absorbing materials for photodetection at wavelengths ranging from ultraviolet (UV) to the near infrared. Most of the reports on SiNW photodetectors are based on photoconductor, photodiode, or field-effect transistor device structures. These SiNW devices each have their own advantages and trade-offs in optical gain, response time, operating voltage, and dark current noise. Here, we report on the experimental realization of single SiNW bipolar phototransistors on silicon-on-insulator substrates. Our SiNW devices are based on bipolar transistor structures with an optically injected base region and are fabricated using CMOS-compatible processes. The experimentally measured optoelectronic characteristics of the SiNW phototransistors are in good agreement with simulation results. The SiNW phototransistors exhibit significantly enhanced response to UV and visible light, compared with typical Si p-i-n photodiodes. The near infrared responsivities of the SiNW phototransistors are comparable to those of Si avalanche photodiodes but are achieved at much lower operating voltages. Compared with other reported SiNW photodetectors as well as conventional bulk Si photodiodes and phototransistors, the SiNW phototransistors in this work demonstrate the combined advantages of high gain, high photoresponse, low dark current, and low operating voltage.

  19. High Performance Clocks and Gravity Field Determination

    Müller, J.; Dirkx, D.; Kopeikin, S. M.; Lion, G.; Panet, I.; Petit, G.; Visser, P. N. A. M.

    2018-02-01

    Time measured by an ideal clock crucially depends on the gravitational potential and velocity of the clock according to general relativity. Technological advances in manufacturing high-precision atomic clocks have rapidly improved their accuracy and stability over the last decade, approaching the level of 10^{-18}. This notable achievement, along with the direct sensitivity of clocks to the strength of the gravitational field, makes them practically important for various geodetic applications that are addressed in the present paper. Based on a fully relativistic description of the background gravitational physics, we discuss the impact of those highly precise clocks on the realization of reference frames and time scales used in geodesy. We discuss the current definitions of basic geodetic concepts and come to the conclusion that the advances in clocks and other metrological technologies will soon require the re-definition of time scales or, at least, clarification to ensure their continuity and consistent use in practice. The relative frequency shift between two clocks is directly related to the difference in the values of the gravity potential at the clocks' locations. According to general relativity, a relative clock accuracy of 10^{-18} is equivalent to measuring the gravitational red shift effect between two clocks whose height difference amounts to 1 cm (see the relation written out below). This makes the clocks an indispensable tool in high-precision geodesy in addition to laser ranging and space geodetic techniques. We show how clock measurements can provide geopotential numbers for the realization of gravity-field-related height systems and can resolve discrepancies in classically-determined height systems as well as between national height systems. Another application of clocks is the direct use of observed potential differences for the improved recovery of regional gravity field solutions. Finally, clock measurements for space-borne gravimetry are analyzed along with
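
    The 1 cm figure can be checked with the standard weak-field relation between fractional frequency shift and potential difference; the numbers below are a back-of-the-envelope illustration, not taken from the paper.

```latex
\[
\frac{\Delta f}{f} \;\simeq\; \frac{\Delta U}{c^{2}} \;\simeq\; \frac{g\,\Delta h}{c^{2}}
\qquad\Longrightarrow\qquad
\frac{\Delta f}{f} \;\approx\;
\frac{9.81\ \mathrm{m\,s^{-2}} \times 0.01\ \mathrm{m}}{\left(3\times10^{8}\ \mathrm{m\,s^{-1}}\right)^{2}}
\;\approx\; 1.1\times10^{-18}
\quad\text{for } \Delta h = 1\ \text{cm}.
\]
```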

  20. Development of high performance hybrid rocket fuels

    Zaseck, Christopher R.

    In order to examine paraffin/additive combustion in a motor environment, I conducted experiments on well-characterized aluminum-based additives. In particular, I investigate the influence of aluminum, unpassivated aluminum, milled aluminum/polytetrafluoroethylene (PTFE), and aluminum hydride on the performance of paraffin fuels for hybrid rocket propulsion. I use an optically accessible combustor to examine the performance of the fuel mixtures in terms of characteristic velocity efficiency and regression rate. Each combustor test consumes a 12.7 cm long, 1.9 cm diameter fuel strand under 160 kg/(m²·s) of oxygen at up to 1.4 MPa. The experimental results indicate that the addition of 5 wt.% of 30 μm or 80 nm aluminum to paraffin increases the regression rate by approximately 15% compared to neat paraffin grains. At higher aluminum concentrations and nano-scale particle sizes, the increased melt-layer viscosity causes slower regression. Alane and Al/PTFE at 12.5 wt.% increase the regression of paraffin by 21% and 32%, respectively. Finally, an aging study indicates that paraffin can protect air- and moisture-sensitive particles from oxidation. The opposed-burner and aluminum/paraffin hybrid rocket experiments show that additives can alter bulk fuel properties, such as viscosity, that regulate entrainment. The general effect of melt-layer properties on the entrainment and regression rate of paraffin is not well understood. Improved understanding of how solid additives affect the properties and regression of paraffin is essential to maximize performance. In this document I investigate the effect of melt-layer properties on paraffin regression using inert additives. Tests are performed in the optical cylindrical combustor at ~1 MPa under a gaseous oxygen mass flux of ~160 kg/(m²·s). The experiments indicate that the regression rate is proportional to μ^0.08 ρ^0.38 κ^0.82 (written out below). In addition, I explore how to predict fuel viscosity, thermal conductivity, and density prior to testing.
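
    Written out explicitly, the empirical melt-layer correlation quoted above reads as follows (μ viscosity, ρ density, κ thermal conductivity); the ratio form on the right is merely an illustration of how the exponents would be applied when comparing two candidate fuels, not a result from the study.

```latex
\[
\dot{r} \;\propto\; \mu^{0.08}\,\rho^{0.38}\,\kappa^{0.82}
\qquad\Longrightarrow\qquad
\frac{\dot{r}_{2}}{\dot{r}_{1}}
 = \left(\frac{\mu_{2}}{\mu_{1}}\right)^{0.08}
   \left(\frac{\rho_{2}}{\rho_{1}}\right)^{0.38}
   \left(\frac{\kappa_{2}}{\kappa_{1}}\right)^{0.82}.
\]
```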

  1. Emerging technologies for high performance infrared detectors

    Tan, Chee Leong; Mohseni, Hooman

    2018-01-01

    Infrared photodetectors (IRPDs) have become important devices in various applications such as night vision, military missile tracking, medical imaging, industry defect imaging, environmental sensing, and exoplanet exploration. Mature semiconductor technologies such as mercury cadmium telluride and III-V material-based photodetectors have been dominating the industry. However, in the last few decades, significant funding and research have been focused on improving the performance of IRPDs by lowering the fabrication cost, simplifying the fabrication processes, increasing the production yield, and increasing the operating temperature, making use of advances in nanofabrication and nanotechnology. We will first review the nanomaterials with suitable electronic and mechanical properties, such as two-dimensional materials, graphene, transition metal dichalcogenides, and metal oxides. We compare these with more traditional low-dimensional materials such as quantum wells, quantum dots, quantum dots in wells, semiconductor superlattices, nanowires, nanotubes, and colloidal quantum dots. We will also review the nanostructures used for enhanced light-matter interaction to boost IRPD sensitivity. These include nanostructured antireflection coatings, optical antennas, plasmonics, and metamaterials.

  3. Video performance for high security applications

    Connell, Jack C.; Norman, Bradley C.

    2010-01-01

    The complexity of physical protection systems has increased to address modern threats to national security and emerging commercial technologies. A key element of modern physical protection systems is the data presented to the human operator used for rapid determination of the cause of an alarm, whether false (e.g., caused by an animal, debris, etc.) or real (e.g., a human adversary). Alarm assessment, the human validation of a sensor alarm, primarily relies on imaging technologies and video systems. Developing measures of effectiveness (MOE) that drive the design or evaluation of a video system or technology becomes a challenge, given the subjectivity of the application (e.g., alarm assessment). Sandia National Laboratories has conducted empirical analysis using field test data and mathematical models such as binomial distribution and Johnson target transfer functions to develop MOEs for video system technologies. Depending on the technology, the task of the security operator and the distance to the target, the Probability of Assessment (PAs) can be determined as a function of a variety of conditions or assumptions. PAs used as an MOE allows the systems engineer to conduct trade studies, make informed design decisions, or evaluate new higher-risk technologies. This paper outlines general video system design trade-offs, discusses ways video can be used to increase system performance and lists MOEs for video systems used in subjective applications such as alarm assessment.
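
    As a hedged illustration of how a binomial model can be turned into a probability-of-assessment figure of merit, the sketch below computes the chance that an operator receives at least k usable frames out of n; the function name and the per-frame probability, n and k values are hypothetical and are not Sandia parameters.

```python
# Illustrative only: a binomial probability-of-assessment model.
# p_frame, n_frames and k_needed are hypothetical parameters, not Sandia data.
from math import comb


def prob_assessment(n_frames: int, k_needed: int, p_frame: float) -> float:
    """P(at least k_needed of n_frames frames are usable), each frame being
    usable independently with probability p_frame."""
    return sum(comb(n_frames, k) * p_frame**k * (1.0 - p_frame)**(n_frames - k)
               for k in range(k_needed, n_frames + 1))


if __name__ == "__main__":
    # Example trade study: how the assessment probability varies with frame quality.
    for p in (0.5, 0.7, 0.9):
        pa = prob_assessment(n_frames=10, k_needed=3, p_frame=p)
        print(f"p_frame={p:.1f}  PA={pa:.3f}")
```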

  4. High performance magnet power supply optimization

    Jackson, L.T.

    1988-01-01

    The power supply system for the jointly proposed LBL-SLAC accelerator PEP provides the opportunity to take a fresh look at the techniques currently employed for controlling large amounts of dc power, and at the possibility of using a new one. A basic requirement of ±100 ppm regulation is placed on the guide field of the bending magnets and quadrupoles placed around the 2200 meter circumference of the accelerator. The optimization questions to be answered by this paper are threefold: Can a firing circuit be designed to reduce the combined effects of the harmonics and line voltage unbalance to less than 100 ppm in the magnet field? Given the ambiguity of the previous statement, is the addition of a transistor bank to a nominal SCR-controlled system the way to go, or should one opt for an SCR chopper system running at 1 kHz, where multiple supplies are fed from one large dc bus? And what is the cost-performance evaluation of the three possible systems?

  5. High Dynamic Performance Nonlinear Source Emulator

    Nguyen-Duy, Khiem; Knott, Arnold; Andersen, Michael A. E.

    2016-01-01

    As research and development of renewable and clean energy based systems is advancing rapidly, the nonlinear source emulator (NSE) is becoming essential for the testing of maximum power point trackers or downstream converters. Renewable and clean energy sources play important roles in both terrestrial and nonterrestrial applications. However, most existing NSEs have only been concerned with simulating energy sources in terrestrial applications, which may not be fast enough for testing of nonterrestrial applications. In this paper, a high-bandwidth NSE is developed that is able to respond not only to a change in the input source but also to a load step between nominal and open circuit. Moreover, all of these operation modes have a very fast settling time of only 10 μs, which is hundreds of times faster than that of existing works. This attribute allows for higher speed and a more efficient maximum …

  6. High-Performance Energy Applications and Systems

    Miller, Barton [Univ. of Wisconsin, Madison, WI (United States)

    2014-01-01

    The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) The analysis, instrumentation and control of binary programs. Work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale. Work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, “Foundational Tools for Petascale Computing”, SC0003922/FG02-10ER25940, UW PRJ27NU.

  7. High performance multiple stream data transfer

    Rademakers, F.; Saiz, P.

    2001-01-01

    The ALICE detector at the LHC (CERN) will record raw data at a rate of 1.2 gigabytes per second. Analysing all of this data at CERN will not be feasible. As originally proposed by the MONARC project, data collected at CERN will be transferred to remote centres to use their computing infrastructure. The remote centres will reconstruct and analyse the events and make the results available. High-rate data transfer between computing centres (Tiers) will therefore become of paramount importance. The authors present several tests that have been made between CERN and remote centres in Padova (Italy), Torino (Italy), Catania (Italy), Lyon (France), Ohio (United States), Warsaw (Poland) and Calcutta (India). These tests consisted, in a first stage, of sending raw data from CERN to the remote centres and back, using an ftp method that allows several streams to be connected at the same time. Thanks to these multiple streams, it is possible to increase the rate at which the data is transferred. While several 'multiple stream ftp' solutions already exist, the authors' method is based on a parallel socket implementation which allows not only files but also objects (or any large message) to be sent in parallel. A prototype able to manage different transfers will be presented. This is the first step of a system that will take care of the connections with the remote centres to exchange data and monitor the status of the transfers.
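
    The multi-stream idea can be sketched in a few dozen lines; the loopback demo below is illustrative only and is unrelated to the authors' parallel-socket implementation (host, ports, stream count and payload are arbitrary demo values).

```python
# Loopback sketch of multi-stream transfer: one payload is split into chunks,
# each chunk travels over its own TCP connection, and the receiver reassembles
# them.  Not the ALICE/ROOT parallel-socket code; all values are demo values.
import socket
import threading

HOST, BASE_PORT, N_STREAMS = "127.0.0.1", 9500, 4
payload = bytes(range(256)) * 4096          # ~1 MiB of dummy "raw data"

# Pre-compute the slice carried by each stream.
chunk = (len(payload) + N_STREAMS - 1) // N_STREAMS
slices = [payload[i * chunk:(i + 1) * chunk] for i in range(N_STREAMS)]

# Bind and listen in the main thread so the senders cannot connect too early.
listeners = []
for i in range(N_STREAMS):
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, BASE_PORT + i))
    srv.listen(1)
    listeners.append(srv)

received = [b""] * N_STREAMS


def receive(idx):
    """Accept one connection and collect everything sent on this stream."""
    conn, _ = listeners[idx].accept()
    with conn:
        parts = []
        while True:
            data = conn.recv(65536)
            if not data:
                break
            parts.append(data)
    received[idx] = b"".join(parts)
    listeners[idx].close()


def send(idx):
    """Push one chunk over its own TCP connection."""
    with socket.create_connection((HOST, BASE_PORT + idx)) as c:
        c.sendall(slices[idx])


threads = [threading.Thread(target=receive, args=(i,)) for i in range(N_STREAMS)]
threads += [threading.Thread(target=send, args=(i,)) for i in range(N_STREAMS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert b"".join(received) == payload
print(f"reassembled {len(payload)} bytes over {N_STREAMS} parallel streams")
```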

  8. High performance parallel backprojection on FPGA

    Pfanner, Florian; Knaup, Michael; Kachelriess, Marc [Erlangen-Nuernberg Univ., Erlangen (Germany). Inst. of Medical Physics (IMP)

    2011-07-01

    Reconstruction of tomographic images, i.e., images from a computed tomography scanner, is a very time-consuming task, and most of the computational power is needed for the backprojection step. A closer inspection shows that the backprojection algorithm is easy to parallelize. FPGAs are able to execute many operations at the same time, so a highly parallel algorithm is a requirement for powerful acceleration. To maximize the data flow rate, we realized the backprojection in a pipelined structure that accepts new data every clock cycle. Due to the hardware limitations of the FPGA, it is not possible to reconstruct the image as a whole, so it is necessary to split up the image and reconstruct the parts separately. Despite that, a reconstruction of 512 projections into a 512 x 512 image is calculated within 13 ms on a Virtex 5 FPGA. To save hardware resources, we use fixed-point arithmetic with an accuracy of 23 bits. A comparison of the resulting image with an image calculated with floating-point arithmetic on a CPU shows no differences between the two images. (orig.)
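
    For readers unfamiliar with the step being accelerated, a naive parallel-beam backprojection in floating point might look like the NumPy sketch below; it is a teaching aid with simplistic geometry and no filtering, not the fixed-point FPGA design described above.

```python
# Naive parallel-beam backprojection, illustrating the per-pixel work that a
# pipelined FPGA datapath would perform once per clock cycle.  Floating point,
# no filtering, simplistic geometry; a teaching sketch only.
import numpy as np


def backproject(sinogram, angles_deg, size):
    """sinogram: (n_angles, n_detectors); returns a size x size image."""
    n_angles, n_det = sinogram.shape
    # Pixel coordinates centred on the image.
    xs = np.arange(size) - (size - 1) / 2.0
    X, Y = np.meshgrid(xs, xs)
    image = np.zeros((size, size))
    for sino_row, theta in zip(sinogram, np.deg2rad(angles_deg)):
        # Detector coordinate of every pixel for this projection angle.
        t = X * np.cos(theta) + Y * np.sin(theta) + (n_det - 1) / 2.0
        t0 = np.clip(np.floor(t).astype(int), 0, n_det - 2)
        w = t - t0                              # linear interpolation weight
        image += (1.0 - w) * sino_row[t0] + w * sino_row[t0 + 1]
    return image * np.pi / (2.0 * n_angles)


if __name__ == "__main__":
    angles = np.linspace(0.0, 180.0, 512, endpoint=False)
    sino = np.random.rand(512, 729)             # dummy projection data
    print(backproject(sino, angles, size=512).shape)
```

    The FPGA design replaces this floating-point inner loop with 23-bit fixed-point arithmetic evaluated by a fully pipelined datapath.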

  9. Technologies of high-performance thermography systems

    Breiter, R.; Cabanski, Wolfgang A.; Mauk, K. H.; Kock, R.; Rode, W.

    1997-08-01

    A family of two-dimensional detection modules based on 256 by 256 and 486 by 640 platinum silicide (PtSi) focal planes, or 128 by 128 and 256 by 256 mercury cadmium telluride (MCT) focal planes, for applications in either the 3-5 micrometer (MWIR) or 8-10 micrometer (LWIR) range was recently developed by AIM. A wide variety of applications is covered by the specific features unique to these two material systems. The PtSi units provide state-of-the-art correctability with long-term stable gain and offset coefficients. The MCT units provide extremely fast frame rates such as 400 Hz, with snapshot integration times as short as 250 microseconds and a thermal resolution (NETD) of less than 20 mK for, e.g., the 128 by 128 LWIR module. A design idea common to all of these modules is the exclusively digital interface, using 14-bit analog-to-digital conversion to provide state-of-the-art correctability, access to highly dynamic scenes without any loss of information, and simplified exchangeability of the units. Device-specific settings such as bias voltages are identified during the final test and stored in a memory on the driving electronics. This concept allows easy exchange of IDCAs of the same type without any need for tuning, and makes it possible, for example, to upgrade a PtSi-based unit to an MCT module by simply loading the suitable software. Miniaturized digital signal processor (DSP) based image correction units were developed for testing and operating the units with output data rates of up to 16 Mpixels/s. These boards provide freely programmable real-time functions such as two-point correction and various data manipulations in thermography applications.
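
    Two-point correction, named above as one of the real-time functions, amounts to a per-pixel gain/offset normalisation derived from two uniform reference scenes; the sketch below shows the arithmetic on synthetic data and is in no way AIM's DSP firmware (array size, reference flux levels and the noiseless response model are hypothetical).

```python
# Two-point (gain/offset) non-uniformity correction on synthetic data.
# Illustrative arithmetic only; not the DSP firmware described above.
import numpy as np

rng = np.random.default_rng(0)
H, W = 128, 128                       # e.g. a 128 x 128 LWIR focal plane

# Hypothetical per-pixel response: counts = gain * flux + offset.
true_gain = 1.0 + 0.05 * rng.standard_normal((H, W))
true_offset = 100.0 + 10.0 * rng.standard_normal((H, W))


def raw(flux):
    """Simulated detector output for a uniform scene of the given flux."""
    return true_gain * flux + true_offset


# Calibration: two uniform reference scenes (e.g. cold and hot black bodies).
flux_lo, flux_hi = 1000.0, 4000.0
ref_lo, ref_hi = raw(flux_lo), raw(flux_hi)

# Per-pixel correction coefficients, stored once and applied in real time.
gain_coeff = (flux_hi - flux_lo) / (ref_hi - ref_lo)
offset_coeff = flux_lo - gain_coeff * ref_lo


def correct(frame):
    """Apply the stored two-point correction to one raw frame."""
    return gain_coeff * frame + offset_coeff


scene_flux = 2500.0
corrected = correct(raw(scene_flux))
print("residual non-uniformity:", float(np.abs(corrected - scene_flux).max()))
```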

  10. High energy permanent magnets - Solutions to high performance devices

    Ma, B.M.; Willman, C.J.

    1986-01-01

    Neodymium iron boron magnets are a special class of magnets providing the highest level of performance with the least amount of material. Crucible Research Center produced the highest energy product magnet of 45 MGOe - a world record. Commercialization of this development has already taken place. Crucible Magnetics Division, located in Elizabethtown, Kentucky, is currently manufacturing and marketing six different grades of NdFeB magnets. Permanent magnets find application in motors, speakers, and electron-beam focusing devices for military and 'Star Wars' programs. The new NdFeB magnets are of considerable interest for a wide range of applications.

  11. Microprocessor, Setx, Xrn2, and Rrp6 Co-operate to Induce Premature Termination of Transcription by RNAPII

    Wagschal, Alexandre; Rousset, Emilie; Basavarajaiah, Poornima; Contreras, Xavier; Harwig, Alex; Laurent-Chabalier, Sabine; Nakamura, Mirai; Chen, Xin; Zhang, Ke; Meziane, Oussama; Boyer, Frédéric; Parrinello, Hugues; Berkhout, Ben; Terzian, Christophe; Benkirane, Monsef; Kiernan, Rosemary

    2012-01-01

    Transcription elongation is increasingly recognized as an important mechanism of gene regulation. Here, we show that Microprocessor controls gene expression in an RNAi-independent manner. Microprocessor orchestrates the recruitment of the termination factors Setx and Xrn2, and the 3′-5′ exoribonuclease,

  12. DOE research in utilization of high-performance computers

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  13. Distributed Microprocessor Automation Network for Synthesizing Radiotracers Used in Positron Emission Tomography [PET

    Russell, J. A. G.; Alexoff, D. L.; Wolf, A. P.

    1984-09-01

    This presentation describes an evolving distributed microprocessor network for automating the routine production synthesis of radiotracers used in Positron Emission Tomography. We first present a brief overview of the PET method for measuring biological function, and then outline the general procedure for producing a radiotracer. The paper identifies several reasons for our automating the syntheses of these compounds. There is a description of the distributed microprocessor network architecture chosen and the rationale for that choice. Finally, we speculate about how this network may be exploited to extend the power of the PET method from the large university or National Laboratory to the biomedical research and clinical community at large. (DT)

  15. Application of a 16-bit microprocessor to the digital control of machine tools

    Issaly, Alain

    1979-01-01

    After an overview of machine tools (the various types, definitions and standardization, and the associated motor and position-sensor technologies), this research thesis describes the principles of computer-based digital control: classification of machine-tool command systems, machining programming, programming languages, the dialog function, the interpolation function, the servo-control function, and the tool compensation function. The author then reports the application of a 16-bit microprocessor to the computer-based digital control of a machine tool: feasibility, selection of the microprocessor, hardware description, software development and description, machining mode, and translation-loading mode.
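
    As a rough illustration of the interpolation function listed above (not the thesis implementation), the sketch below generates per-tick axis setpoints for a straight two-axis move at a constant feed rate; the tick period, feed rate and axis pairing are hypothetical.

```python
# Illustrative linear interpolation for a two-axis machine-tool move:
# generate X/Y setpoints once per control tick at a constant feed rate.
# Tick period and feed values are hypothetical, not taken from the thesis.
from typing import Iterator, Tuple
import math


def linear_interpolation(start: Tuple[float, float],
                         end: Tuple[float, float],
                         feed_mm_s: float,
                         tick_s: float = 0.001) -> Iterator[Tuple[float, float]]:
    """Yield (x, y) setpoints from start to end at the requested feed rate."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    length = math.hypot(dx, dy)
    n_ticks = max(1, int(round(length / (feed_mm_s * tick_s))))
    for k in range(n_ticks + 1):
        s = k / n_ticks                      # fraction of the path completed
        yield start[0] + s * dx, start[1] + s * dy


if __name__ == "__main__":
    path = list(linear_interpolation((0.0, 0.0), (10.0, 5.0), feed_mm_s=20.0))
    print(len(path), "setpoints, last =", path[-1])
```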

  16. High-Performance Management Practices and Employee Outcomes in Denmark

    Cristini, Annalisa; Eriksson, Tor; Pozzoli, Dario

    High-performance work practices are frequently considered to have positive effects on corporate performance, but what do they do for employees? After showing that organizational innovation is indeed positively associated with firm performance, we investigate whether high-involvement work practices...

  17. Contribution to the automatic command in robotics - Application to the command by microprocessors of the articulated systems

    Al Mouhamed, Mayez

    1982-01-01

    The first part of the present paper deals with the main methods of coordinate transformation for a general articulated system. After defining the coordinate transformation, we propose a coordination system designed for easy programming of movements; its key feature is that it permits action at any point on the manipulated object. The second part deals with the force-regulation problem. For this purpose we have developed a general force sensor; the information delivered by the sensor is used by force regulators intended for the automatic assembly of subsystems. In the third part the dynamics problem of articulated systems is presented. We present a new method which makes it possible to determine the dynamic parameters from appropriate motions of the robot. These parameters are then used to implement dynamic control. Several applications, using the powerful INTEL 8086 microprocessor and its 8087 arithmetic coprocessor, are presented in order to demonstrate the performance gained. (author) [fr]
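
    To make the notion of a coordinate change for an articulated system concrete, here is a generic planar two-link forward-kinematics sketch using homogeneous transforms (standard textbook material, not the author's formulation); the link lengths and joint angles are arbitrary.

```python
# Generic coordinate change for a planar two-link arm: joint angles -> the
# homogeneous transform of the end effector in the base frame.  Textbook
# kinematics for illustration; link lengths are arbitrary, not the thesis'.
import math
from typing import List


def rot_trans(theta: float, length: float) -> List[List[float]]:
    """Homogeneous transform: rotate by theta, then translate along x by length."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, length * c],
            [s,  c, length * s],
            [0.0, 0.0, 1.0]]


def matmul(a, b):
    """3x3 matrix product for composing homogeneous transforms."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]


def end_effector(theta1: float, theta2: float, l1: float = 0.5, l2: float = 0.3):
    """Base-frame transform of the tool for joint angles theta1, theta2 (rad)."""
    return matmul(rot_trans(theta1, l1), rot_trans(theta2, l2))


if __name__ == "__main__":
    T = end_effector(math.radians(30.0), math.radians(45.0))
    # T[0][2], T[1][2] are the tool x, y coordinates in the base frame.
    print("tool position: x=%.3f m, y=%.3f m" % (T[0][2], T[1][2]))
```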

  18. Academic performance in high school as factor associated to academic performance in college

    Mileidy Salcedo Barragán

    2008-12-01

    This study intends to find the relationship between academic performance in High School and College, focusing on Natural Sciences and Mathematics. It is a descriptive correlational study, and the variables were academic performance in High School, performance indicators and educational history. The correlations between variables were established with Spearman’s correlation coefficient. Results suggest that there is a positive relationship between academic performance in High School and Educational History, and a very weak relationship between performance in Science and Mathematics in High School and performance in College.

  19. Performance of a high efficiency high power UHF klystron

    Konrad, G.T.

    1977-03-01

    A 500 kW c-w klystron was designed for the PEP storage ring at SLAC. The tube operates at 353.2 MHz, 62 kV, a microperveance of 0.75, and a gain of approximately 50 dB. Stable operation is required for a VSWR as high as 2 : 1 at any phase angle. The design efficiency is 70%. To obtain this value of efficiency, a second harmonic cavity is used in order to produce a very tightly bunched beam in the output gap. At the present time it is planned to install 12 such klystrons in PEP. A tube with a reduced size collector was operated at 4% duty at 500 kW. An efficiency of 63% was observed. The same tube was operated up to 200 kW c-w for PEP accelerator cavity tests. A full-scale c-w tube reached 500 kW at 65 kV with an efficiency of 55%. In addition to power and phase measurements into a matched load, some data at various load mismatches are presented
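
    As a back-of-the-envelope consistency check (not part of the paper), the usual definition of perveance, K = I/V^{3/2}, ties the quoted operating point to the stated 70% design efficiency:

```latex
\[
I_\mathrm{beam} = K\,V^{3/2}
 = 0.75\times10^{-6}\ \mathrm{A/V^{3/2}} \times (62\,000\ \mathrm{V})^{3/2}
 \approx 11.6\ \mathrm{A},
\qquad
P_\mathrm{beam} = V\,I_\mathrm{beam} \approx 62\ \mathrm{kV} \times 11.6\ \mathrm{A} \approx 0.72\ \mathrm{MW},
\]
\[
\eta_\mathrm{design} \approx \frac{500\ \mathrm{kW}}{0.72\ \mathrm{MW}} \approx 70\%.
\]
```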

  20. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    Kröner, Dietmar; Resch, Michael

    2009-01-01

    The discussions and plans at all scientific, advisory, and political levels to realize an even larger “European Supercomputer” in Germany, where the hardware costs alone will be hundreds of millions of euros – much more than in the past – are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructur...