WorldWideScience

Sample records for high computational power

  1. High Performance Computing - Power Application Programming Interface Specification.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H.; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
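
    A minimal sketch of the kind of portable interface such a proposal describes: named objects in a system hierarchy (platform, node, socket) expose power and energy attributes that higher software layers can read for measurement and write for control. This is an illustrative assumption only; the class, attribute and function names below are not the actual Power API types or signatures defined in the specification.

      # Illustrative sketch only: the Power API defines its own types and
      # function names; everything below is a hypothetical stand-in.
      import time
      from dataclasses import dataclass, field

      @dataclass
      class PowerObject:
          """A node in the system hierarchy (platform, cabinet, node, socket, ...)."""
          name: str
          attrs: dict = field(default_factory=dict)      # e.g. {"POWER": 95.0, "POWER_CAP": 120.0}
          children: list = field(default_factory=list)

          def get_attr(self, attr):
              """Measurement side: read an attribute together with a timestamp."""
              return self.attrs[attr], time.time()

          def set_attr(self, attr, value):
              """Control side: e.g. lower a power cap requested by the facility manager."""
              self.attrs[attr] = value

      # A toy hierarchy: one node with two sockets.
      socket0 = PowerObject("node0.socket0", {"POWER": 95.0, "POWER_CAP": 120.0})
      socket1 = PowerObject("node0.socket1", {"POWER": 88.0, "POWER_CAP": 120.0})
      node0 = PowerObject("node0", {"POWER_CAP": 300.0}, [socket0, socket1])

      # A runtime layer could measure each socket and tighten the caps when
      # the node approaches its budget.
      readings = [s.get_attr("POWER")[0] for s in node0.children]
      if sum(readings) > 0.9 * node0.attrs["POWER_CAP"]:
          for s in node0.children:
              s.set_attr("POWER_CAP", 100.0)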

  2. High Performance Computing - Power Application Programming Interface Specification Version 2.0.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Levenhagen, Michael J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Olivier, Stephen Lecler [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ward, H. Lee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  3. High performance computing in power and energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Khaitan, Siddhartha Kumar [Iowa State Univ., Ames, IA (United States); Gupta, Anshul (eds.) [IBM Watson Research Center, Yorktown Heights, NY (United States)

    2013-07-01

    The twin challenge of meeting global energy demands in the face of growing economies and populations while restricting greenhouse gas emissions is one of the most daunting that humanity has ever faced. Smart electrical generation and distribution infrastructure will play a crucial role in meeting these challenges. Capabilities will be needed to handle the large volumes of data generated by power system components such as PMUs, DFRs and other data acquisition devices, and to process these data at high resolution via multi-scale and multi-period simulations, cascading and security analysis, the interaction between hybrid systems (electric, transport, gas, oil, coal, etc.) and so on, in order to extract meaningful information in real time and ensure a secure, reliable and stable power grid. Advanced research on the development and implementation of market-ready, leading-edge, high-speed enabling technologies and algorithms for solving real-time, dynamic, resource-critical problems will be required for the dynamic security analysis needed for successful implementation of Smart Grid initiatives. This book aims to bring together some of the latest research developments, as well as thoughts on future research directions, for high performance computing applications in electric power system planning, operations, security, markets, and grid integration of alternative energy sources.

  4. Computer-aided analysis of power-electronic systems simulation of a high-voltage power converter

    International Nuclear Information System (INIS)

    Bordry, F.; Isch, H.W.; Proudlock, P.

    1987-01-01

    In the study of semiconductor devices, simulation methods play an important role in both the design of systems and the analysis of their operation. The authors describe a new and efficient computer-aided analysis package for general power-electronic systems. The main difficulty in taking non-linear elements such as semiconductors into account lies in determining the existence and the relations of the elementary sequences defined by the conduction or non-conduction of these components. The method does not require a priori knowledge of the state sequences of the semiconductors nor of the commutation instants, but only the circuit structure, its parameters and the commands to the controlled switches. The simulation program automatically computes both transient and steady-state waveforms for any circuit configuration. The simulation of a high-voltage power converter is presented, both for steady-state and transient overload conditions. This 100 kV (4 MW) power converter will feed two klystrons in parallel
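
    The central idea, that each semiconductor's conducting or blocking state is determined from the circuit conditions at every step rather than from a pre-programmed switching sequence, can be illustrated with a toy example (my own sketch with assumed values, not the authors' package): an ideal-diode half-wave rectifier whose diode state is re-evaluated at each time step.

      # Toy illustration (not the authors' package): an ideal-diode half-wave
      # rectifier whose diode state is determined at every time step from the
      # circuit conditions, not from a pre-programmed switching sequence.
      import math

      V_PEAK, FREQ, R_LOAD = 100.0, 50.0, 10.0   # illustrative values
      DT, T_END = 1e-5, 0.04                     # two mains periods

      t, samples = 0.0, []
      while t < T_END:
          v_src = V_PEAK * math.sin(2 * math.pi * FREQ * t)
          # Tentatively assume the diode conducts, then check consistency:
          i_load = v_src / R_LOAD
          conducting = i_load > 0.0              # a negative current is impossible,
          if not conducting:                     # so the diode must be blocking
              i_load = 0.0
          samples.append((t, conducting, i_load))
          t += DT

      on_fraction = sum(1 for _, c, _ in samples if c) / len(samples)
      print(f"diode conducts during {on_fraction:.0%} of the simulated interval")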

  5. Power/energy use cases for high performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kelly, Suzanne M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hammond, Steven [National Renewable Energy Lab. (NREL), Golden, CO (United States); Elmore, Ryan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Munch, Kristin [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    Power and energy have been identified as a first order challenge for future extreme scale high performance computing (HPC) systems. In practice, the breakthroughs will need to be provided by the hardware vendors, but making the best use of their solutions in an HPC environment will likely require periodic tuning by facility operators and software components. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide a common understanding to the HPC community of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers could use to steer power consumption.

  6. High Performance Computing - Power Application Programming Interface Specification Version 1.4

    Energy Technology Data Exchange (ETDEWEB)

    Laros III, James H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); DeBonis, David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kelly, Suzanne M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Levenhagen, Michael J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Olivier, Stephen Lecler [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  7. High degree utilization of computers for design of nuclear power plants

    International Nuclear Information System (INIS)

    Masui, Takao; Sawada, Takashi

    1992-01-01

    Nuclear power plants are large-scale systems in which many technologies are combined, and a high level of safety is demanded of them. Therefore, in the design of nuclear power plants, it is necessary to base the design on a thorough grasp of plant behavior and to confirm safety by accurate design evaluations that assume various operational conditions, and as the indispensable tool for this analysis and evaluation, the most advanced computers of each era have been utilized. Computers are used in the fields of design, analysis and evaluation, in the support of design work, and also in operation control. The utilization of computers for the core design, thermal-hydraulic design, core structure design, safety analysis and structural analysis of PWR plants, for the nuclear design, safety analysis and heat flow analysis of FBR plants, for the support of design, and for operation control is explained. (K.I.)

  8. Power-efficient computer architectures: recent advances

    CERN Document Server

    Själander, Magnus; Kaxiras, Stefanos

    2014-01-01

    As Moore's Law and Dennard scaling trends have slowed, the challenges of building high-performance computer architectures while maintaining acceptable power efficiency levels have heightened. Over the past ten years, architecture techniques for power efficiency have shifted from primarily focusing on module-level efficiencies, toward more holistic design styles based on parallelism and heterogeneity. This work highlights and synthesizes recent techniques and trends in power-efficient computer architecture. Table of Contents: Introduction / Voltage and Frequency Management / Heterogeneity and Sp

  9. Ultra-low power high precision magnetotelluric receiver array based customized computer and wireless sensor network

    Science.gov (United States)

    Chen, R.; Xi, X.; Zhao, X.; He, L.; Yao, H.; Shen, R.

    2016-12-01

    Dense 3D magnetotelluric (MT) data acquisition has the benefit of suppressing static shift and topography effects, and can achieve high-precision, high-resolution inversion of underground structure. This method may play an important role in mineral exploration, geothermal resources exploration, and hydrocarbon exploration. For large-scale 3D MT data acquisition it is necessary to greatly reduce the power consumption of the MT signal receiver, while using a sensor network to monitor the data quality of the deployed receivers. We adopted a series of technologies to realize this goal. First, we designed a low-power embedded computer that couples tightly with the other parts of the MT receiver and supports a wireless sensor network. The power consumption of our embedded computer is less than 1 watt. Then we designed a 4-channel data acquisition subsystem that supports 24-bit analog-to-digital conversion, GPS synchronization, and real-time digital signal processing. Furthermore, we developed the power supply and power management subsystem for the MT receiver. Finally, a suite of software supporting data acquisition, calibration, the wireless sensor network, and testing was developed. The software, which runs on a personal computer, can monitor and control over 100 MT receivers in the field for data acquisition and quality control. The total power consumption of the receiver is about 2 watts at full operation. The standby power consumption is less than 0.1 watt. Our testing showed that the MT receiver can acquire good-quality data in the field with an electrical dipole length of 3 m. Over 100 MT receivers were made and used for large-scale geothermal exploration in China with great success.

  10. Computer control of the high-voltage power supply for the DIII-D electron cyclotron heating system

    International Nuclear Information System (INIS)

    Clow, D.D.; Kellman, D.H.

    1992-01-01

    This paper reports on the DIII-D Electron Cyclotron Heating (ECH) high-voltage power supply, which is controlled by a computer. Operational control is input via keyboard and mouse, and the computer/power supply interface is accomplished with a Computer Assisted Monitoring and Control (CAMAC) system. User-friendly tools allow the design and layout of simulated control panels on the computer screen. Panel controls and indicators can be changed, added or deleted, and simple editing of user-specific processes can quickly modify control and fault logic. Databases can be defined, and control panel functions are easily referred to various data channels. User-specific processes are written and linked using Fortran to manage control and data acquisition through CAMAC. The resulting control system has significant advantages over the hardware it emulates: changes in logic, layout, and function are quickly and easily incorporated; data storage, retrieval, and processing are flexible and simply accomplished; physical components subject to wear and degradation are minimized. In addition, the system can be expanded to multiplex control of several power supplies, each with its own database, through a single computer console

  11. Computer control of the high-voltage power supply for the DIII-D Electron Cyclotron Heating system

    International Nuclear Information System (INIS)

    Clow, D.D.; Kellman, D.H.

    1991-10-01

    The DIII-D Electron Cyclotron Heating (ECH) high voltage power supply is controlled by a computer. Operational control is input via keyboard and mouse, and the computer/power supply interface is accomplished with a Computer Assisted Monitoring and Control (CAMAC) system. User-friendly tools allow the design and layout of simulated control panels on the computer screen. Panel controls and indicators can be changed, added or deleted, and simple editing of user-specific processes can quickly modify control and fault logic. Databases can be defined, and control panel functions are easily referred to various data channels. User-specific processes are written and linked using Fortran to manage control and data acquisition through CAMAC. The resulting control system has significant advantages over the hardware it emulates: changes in logic, layout, and function are quickly and easily incorporated; data storage, retrieval, and processing are flexible and simply accomplished; physical components subject to wear and degradation are minimized. In addition, the system can be expanded to multiplex control of several power supplies, each with its own database, through a single computer and console. 5 refs., 4 figs., 1 tab

  12. Analysis of Application Power and Schedule Composition in a High Performance Computing Environment

    Energy Technology Data Exchange (ETDEWEB)

    Elmore, Ryan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gruchalla, Kenny [National Renewable Energy Lab. (NREL), Golden, CO (United States); Phillips, Caleb [National Renewable Energy Lab. (NREL), Golden, CO (United States); Purkayastha, Avi [National Renewable Energy Lab. (NREL), Golden, CO (United States); Wunder, Nick [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-05

    As the capacity of high performance computing (HPC) systems continues to grow, small changes in energy management have the potential to produce significant energy savings. In this paper, we employ an extensive informatics system for aggregating and analyzing real-time performance and power use data to evaluate the energy footprints of jobs running in an HPC data center. We look at the effects of algorithmic choices for a given job on the resulting energy footprint, analyze application-specific power consumption, and summarize average power use in the aggregate. All of these views reveal meaningful power variance between classes of applications as well as between the methods chosen for a given job. Using these data, we discuss energy-aware cost-saving strategies based on reordering the HPC job schedule. Using historical job and power data, we present a hypothetical job schedule reordering that: (1) reduces the facility's peak power draw and (2) manages power in conjunction with a large-scale photovoltaic array. Lastly, we leverage these data to understand the practical limits on predicting key power use metrics at the time of submission.
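
    The schedule-reordering idea can be sketched with a generic balancing heuristic (an illustrative assumption, not the specific method or data of the paper): given historical average power per job, spread job starts across slots so that no slot's total draw dominates, which lowers the facility's peak.

      # Hypothetical sketch of the reordering idea: place jobs with known
      # (historical) average power draw into start slots so that no slot's total
      # power dominates. This is a generic balancing heuristic, not the method
      # used in the paper; job names and power numbers are made up.
      import heapq

      jobs = [("cfd", 120), ("md", 95), ("viz", 20), ("ml", 150), ("qc", 60), ("io", 15)]  # (name, kW)

      def reorder(jobs, n_slots):
          """Greedy: always place the next-largest job into the least-loaded slot."""
          slots = [(0.0, i, []) for i in range(n_slots)]   # (total kW, slot id, jobs)
          heapq.heapify(slots)
          for name, power in sorted(jobs, key=lambda j: -j[1]):
              total, i, assigned = heapq.heappop(slots)
              assigned.append(name)
              heapq.heappush(slots, (total + power, i, assigned))
          return sorted(slots, key=lambda s: s[1])

      for total, i, assigned in reorder(jobs, n_slots=3):
          print(f"slot {i}: {assigned} -> {total:.0f} kW")
      # Peak slot power is ~155 kW here, versus 460 kW if all jobs started together.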

  13. High-power klystrons

    Science.gov (United States)

    Siambis, John G.; True, Richard B.; Symons, R. S.

    1994-05-01

    Novel emerging applications in advanced linear collider accelerators, ionospheric and atmospheric sensing and modification, and a wide spectrum of industrial processing applications have resulted in microwave tube requirements that call for further development of high power klystrons in the range from S-band to X-band. In the present paper we review recent progress in high power klystron development and discuss some of the issues and scaling laws for successful design. We also discuss recent progress in electron guns with potential-grading electrodes for high-voltage operation with short and long pulses, via computer simulations obtained from the code DEMEOS as well as preliminary experimental results. We present designs for high power beam collectors.

  14. High average power solid state laser power conditioning system

    International Nuclear Information System (INIS)

    Steinkraus, R.F.

    1987-01-01

    The power conditioning system for the High Average Power Laser program at Lawrence Livermore National Laboratory (LLNL) is described. The system has been operational for two years. It is high voltage, high power, fault protected, and solid state. The power conditioning system drives flashlamps that pump solid state lasers. The flashlamps are driven by silicon controlled rectifier (SCR) switched, resonantly charged (LC) discharge pulse forming networks (PFNs). The system uses fiber optics for control and diagnostics. Energy and thermal diagnostics are monitored by computers

  15. Predicting the Noise of High Power Fluid Targets Using Computational Fluid Dynamics

    Science.gov (United States)

    Moore, Michael; Covrig Dusa, Silviu

    The 2.5 kW liquid hydrogen (LH2) target used in the Qweak parity violation experiment is the highest power LH2 target in the world and the first to be designed with Computational Fluid Dynamics (CFD) at Jefferson Lab. The Qweak experiment determined the weak charge of the proton by measuring the parity-violating elastic scattering asymmetry of longitudinally polarized electrons from unpolarized liquid hydrogen at small momentum transfer (Q2 = 0.025 GeV2). This target satisfied its design goals, and the CFD predictions were benchmarked with the Qweak target data. This work is an essential component in future designs of very high power low noise targets like MOLLER (5 kW, target noise asymmetry contribution < 25 ppm) and MESA (4.5 kW).

  16. High performance computing in linear control

    International Nuclear Information System (INIS)

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both the theory and applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory. Control theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and have become available in recent years. These supercomputers offer very high speed in computations. Highly efficient software, based on powerful algorithms, has been developed for use on these advanced computers and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments

  17. Power throttling of collections of computing elements

    Science.gov (United States)

    Bellofatto, Ralph E [Ridgefield, CT; Coteus, Paul W [Yorktown Heights, NY; Crumley, Paul G [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Gooding, Thomas M [Rochester, MN; Haring, Rudolf A [Cortlandt Manor, NY; Megerian, Mark G [Rochester, MN; Ohmacht, Martin [Yorktown Heights, NY; Reed, Don D [Mantorville, MN; Swetz, Richard A [Mahopac, NY; Takken, Todd [Brewster, NY

    2011-08-16

    An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.
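
    A minimal sketch of the control structure the abstract describes, with assumed names, power numbers and policy: per-computer sensors report power to a local control device, which throttles the highest-drawing nodes whenever the group exceeds its power budget.

      # Hypothetical sketch of the described control structure: per-computer
      # power sensors feed a local control device, which throttles nodes when
      # the group exceeds its power budget. Names, levels and the policy are
      # illustrative assumptions, not taken from the patent.

      P_STATES = [1.0, 0.8, 0.6]          # relative performance levels (1.0 = full speed)

      class Node:
          def __init__(self, name, base_power):
              self.name, self.base_power, self.p_state = name, base_power, 0

          def read_sensor(self):          # power roughly scales with the p-state here
              return self.base_power * P_STATES[self.p_state]

          def throttle(self):
              self.p_state = min(self.p_state + 1, len(P_STATES) - 1)

      def control_step(nodes, budget_watts):
          """One pass of the local control device."""
          while sum(n.read_sensor() for n in nodes) > budget_watts:
              hottest = max(nodes, key=Node.read_sensor)
              if hottest.p_state == len(P_STATES) - 1:
                  break                   # everything is already at the lowest level
              hottest.throttle()

      cluster = [Node("n0", 400), Node("n1", 380), Node("n2", 450)]
      control_step(cluster, budget_watts=1000)
      print([(n.name, n.p_state, round(n.read_sensor())) for n in cluster])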

  18. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact not only on scientific research in advanced organizations but also on the computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  19. Use of computers at nuclear power plants

    International Nuclear Information System (INIS)

    Sen'kin, V.I.; Ozhigano, Yu.V.

    1974-01-01

    Applications of information and control computers in reactor control systems in Great Britain, the Federal Republic of Germany, France, Canada, and the USA are surveyed. For the purpose of increasing the reliability of the computers, effective means were designed for emergency operation and automatic computerized control, and highly reliable micromodule modifications were developed. Numerical data units were handled along with the development of methods and circuits for converting analog values to numerical values, in accordance with modern requirements. Some data are presented on computer reliability in nuclear power plants that are operating, proposed, or under construction. It is concluded that in foreign nuclear power stations informational and computational computers are finding increasingly wide use. Rapid response, the ability to handle a large number of parameters, and increasing computer reliability are speeding up the introduction of computers in atomic energy and the broadening of their functions. (V.P.)

  20. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), in High Performance Computing (HPC) applications. It presents the latest developments in this field from the applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g. computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation. Seven architecture chapters which...

  1. High power communication satellites power systems study

    International Nuclear Information System (INIS)

    Josloff, A.T.; Peterson, J.R.

    1994-01-01

    This paper discusses a DOE-funded study to evaluate the commercial attractiveness of high power communication satellites and assesses the attributes of both conventional photovoltaic and reactor power systems. The study brings together a preeminent US industry/Russian team to cooperate on the role of high power communication satellites in the rapidly expanding communications revolution. These high power satellites play a vital role in assuring the availability of universally accessible, wide bandwidth communications for high definition TV, supercomputer networks and other services. Satellites are ideally suited to provide the wide bandwidths and data rates required and are unique in their ability to provide services directly to the users. As new or relocated markets arise, satellites offer a flexibility that conventional distribution services cannot match, and it is no longer necessary to be near population centers to take advantage of the telecommunication revolution. The geopolitical implications of these substantially enhanced communications capabilities will be significant

  2. Computer-aided engineering in High Energy Physics

    International Nuclear Information System (INIS)

    Bachy, G.; Hauviller, C.; Messerli, R.; Mottier, M.

    1988-01-01

    Computing, long a standard tool in the High Energy Physics community, is slowly being introduced at CERN in the mechanical engineering field. The first major application was structural analysis, followed by Computer-Aided Design (CAD). Development work is now progressing towards Computer-Aided Engineering around a powerful data base. This paper gives examples of the power of this approach applied to engineering for accelerators and detectors

  3. Computer-aided power systems analysis

    CERN Document Server

    Kusic, George

    2008-01-01

    Computer applications yield more insight into system behavior than is possible by using hand calculations on system elements. Computer-Aided Power Systems Analysis: Second Edition is a state-of-the-art presentation of basic principles and software for power systems in steady-state operation. Originally published in 1985, this revised edition explores power systems from the point of view of the central control facility. It covers the elements of transmission networks, bus reference frame, network fault and contingency calculations, power flow on transmission networks, generator base power setti

  4. A computer control system for the PNC high power cw electron linac. Concept and hardware

    Energy Technology Data Exchange (ETDEWEB)

    Emoto, T.; Hirano, K.; Takei, Hayanori; Nomura, Masahiro; Tani, S. [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center; Kato, Y.; Ishikawa, Y.

    1998-06-01

    Design and construction of a high power cw (Continuous Wave) electron linac for studying the feasibility of nuclear waste transmutation started in 1989 at PNC. The PNC accelerator (10 MeV, 20 mA average current, 4 ms pulse width, 50 Hz repetition) is a machine dedicated to developing the high-current acceleration technology that will be needed in the future. The computer control system is responsible for accelerator control and for supporting experiments on high power operation. The features of the system are simultaneous measurement of the accelerator status and modularity of software and hardware, so that the system can easily be modified or expanded. A high-speed network (SCRAMNet, approximately 15 MB/s), Ethernet, and front-end processors (digital signal processors) were employed for high-speed data taking and control. The system was designed around standard modules and a software-implemented man-machine interface. Thanks to the graphical user interface and object-oriented programming, the software development environment allows straightforward programming and maintenance. (author)

  5. High power communication satellites power systems study

    Science.gov (United States)

    Josloff, Allan T.; Peterson, Jerry R.

    1995-01-01

    This paper discusses a planned study to evaluate the commercial attractiveness of high power communication satellites and assesses the attributes of both conventional photovoltaic and reactor power systems. These high power satellites can play a vital role in assuring the availability of universally accessible, wide bandwidth communications for high definition TV, supercomputer networks and other services. Satellites are ideally suited to provide the wide bandwidths and data rates required and are unique in their ability to provide services directly to the users. As new or relocated markets arise, satellites offer a flexibility that conventional distribution services cannot match, and it is no longer necessary to be near population centers to take advantage of the telecommunication revolution. The geopolitical implications of these substantially enhanced communications capabilities can be significant.

  6. Automated System Tests High-Power MOSFET's

    Science.gov (United States)

    Huston, Steven W.; Wendt, Isabel O.

    1994-01-01

    A computer-controlled system tests metal-oxide-semiconductor field-effect transistors (MOSFETs) at high voltages and currents. It measures seven parameters characterizing the performance of a MOSFET, with a view toward obtaining an early indication that a MOSFET is defective. Use of the test system prior to installation of a power MOSFET in a high-power circuit saves time and money.

  7. High energy physics computing in Japan

    International Nuclear Information System (INIS)

    Watase, Yoshiyuki

    1989-01-01

    A brief overview of the computing provision for high energy physics in Japan is presented. Most of the computing power for high energy physics is concentrated in KEK. Here there are two large scale systems: one providing a general computing service including vector processing and the other dedicated to TRISTAN experiments. Each university group has a smaller sized mainframe or VAX system to facilitate both their local computing needs and the remote use of the KEK computers through a network. The large computer system for the TRISTAN experiments is described. An overview of a prospective future large facility is also given. (orig.)

  8. Balancing computation and communication power in power constrained clusters

    Science.gov (United States)

    Piga, Leonardo; Paul, Indrani; Huang, Wei

    2018-05-29

    Systems, apparatuses, and methods for balancing computation and communication power in power constrained environments. A data processing cluster with a plurality of compute nodes may perform parallel processing of a workload in a power constrained environment. Nodes that finish tasks early may be power-gated based on one or more conditions. In some scenarios, a node may predict a wait duration and go into a reduced power consumption state if the wait duration is predicted to be greater than a threshold. The power saved by power-gating one or more nodes may be reassigned for use by other nodes. A cluster agent may be configured to reassign the unused power to the active nodes to expedite workload processing.
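
    The described mechanism, in which a node that finishes early predicts its wait, power-gates itself if the predicted wait exceeds a threshold, and the cluster agent reassigns the saved budget to the active nodes, can be sketched as follows; names, thresholds and numbers are assumptions for illustration, not taken from the patent.

      # Illustrative sketch (assumed names and numbers) of the described idea:
      # a node that has finished its task power-gates itself when its predicted
      # wait exceeds a threshold, and the freed power budget is handed to the
      # still-active nodes to expedite the stragglers.

      WAIT_THRESHOLD_S = 5.0
      GATED_POWER_W = 15.0          # residual draw of a power-gated node (assumed)

      def rebalance(nodes, now):
          """nodes: dict name -> {'predicted_finish': t, 'budget_w': watts, 'gated': bool}"""
          latest_finish = max(n["predicted_finish"] for n in nodes.values())
          freed = 0.0
          for n in nodes.values():
              done = n["predicted_finish"] <= now
              long_wait = latest_finish - now > WAIT_THRESHOLD_S
              if done and long_wait and not n["gated"]:
                  freed += n["budget_w"] - GATED_POWER_W
                  n["budget_w"], n["gated"] = GATED_POWER_W, True
          active = [n for n in nodes.values() if not n["gated"]]
          for n in active:          # reassign the saved power to the active nodes
              n["budget_w"] += freed / len(active)

      nodes = {
          "n0": {"predicted_finish": 10.0, "budget_w": 300.0, "gated": False},
          "n1": {"predicted_finish": 42.0, "budget_w": 300.0, "gated": False},
          "n2": {"predicted_finish": 40.0, "budget_w": 300.0, "gated": False},
      }
      rebalance(nodes, now=12.0)
      print({k: round(v["budget_w"]) for k, v in nodes.items()})   # n0 gated, n1/n2 boosted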

  9. Fault analysis and strategy of high pulsed power supply for high power laser

    International Nuclear Information System (INIS)

    Liu Kefu; Qin Shihong; Li Jin; Pan Yuan; Yao Zonggan; Zheng Wanguo; Guo Liangfu; Zhou Peizhang; Li Yizheng; Chen Dehuai

    2001-01-01

    According to the requirements for driving flashlamps, a high pulsed power supply (PPS) based on capacitors as energy storage elements was designed. The authors analyze in detail the faults of a high pulsed power supply for a high power laser, such as an internal short circuit in a capacitor, breakdown of the main bus to ground, and a sudden short or break of a flashlamp. Fault current and voltage waveforms were obtained from circuit simulations. Based on this analysis and computation, a protection strategy using fast fuses and ZnO was put forward, which can reduce damage to the PPS to the lowest extent and protect personnel and collateral property from all of these threats. Preliminary experiments demonstrated that the design of the PPS can satisfy the project requirements

  10. SWITCHING POWER FAN CONTROL OF COMPUTER

    Directory of Open Access Journals (Sweden)

    Oleksandr I. Popovskyi

    2010-10-01

    The relevance of the material presented in the article is due to the extensive use of high-performance computers in creating modern information systems, including those of the NAPS of Ukraine. Most computers in the NAPS of Ukraine run on Intel Pentium processors at speeds from 600 MHz to 3 GHz and release a lot of heat, which requires installing 2-3 additional fans in the system unit. The fans always run at full power, which leads to rapid wear and a high noise level (up to 50 dB). In order to meet ergonomic requirements, it is proposed to install in the computer system unit an additional fan control unit, allowing independent control of each fan. The solution is applied in the creation of Internet-based information systems for research planning at the National Academy of Pedagogical Sciences of Ukraine.
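
    The proposal amounts to replacing constant full-power operation with a per-fan control law driven by its own temperature sensor. The sketch below is purely illustrative (assumed set-points, fan names and a linear duty-cycle ramp), not the article's actual control unit.

      # Illustrative sketch only (assumed temperature set-points and fan names):
      # instead of running every fan at 100 % all the time, each fan's duty
      # cycle follows its own temperature sensor, reducing wear and noise.

      def duty_cycle(temp_c, t_low=35.0, t_high=65.0, floor=0.25):
          """Linear ramp: `floor` below t_low, 100 % above t_high."""
          if temp_c <= t_low:
              return floor
          if temp_c >= t_high:
              return 1.0
          return floor + (1.0 - floor) * (temp_c - t_low) / (t_high - t_low)

      sensors = {"cpu_fan": 58.0, "psu_fan": 41.0, "case_fan": 33.0}   # degrees C
      for fan, temp in sensors.items():
          print(f"{fan}: {duty_cycle(temp):.0%} of full power")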

  11. Future Computing Platforms for Science in a Power Constrained Era

    International Nuclear Information System (INIS)

    Abdurachmanov, David; Eulisse, Giulio; Elmer, Peter; Knight, Robert

    2015-01-01

    Power consumption will be a key constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics (HEP). This makes performance-per-watt a crucial metric for selecting cost-efficient computing solutions. For this paper, we have done a wide survey of current and emerging architectures becoming available on the market including x86-64 variants, ARMv7 32-bit, ARMv8 64-bit, Many-Core and GPU solutions, as well as newer System-on-Chip (SoC) solutions. We compare performance and energy efficiency using an evolving set of standardized HEP-related benchmarks and power measurement techniques we have been developing. We evaluate the potential for use of such computing solutions in the context of DHTC systems, such as the Worldwide LHC Computing Grid (WLCG). (paper)
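
    As a worked illustration of the figure of merit (with made-up numbers, not the paper's measurements), performance-per-watt is simply benchmark throughput divided by the average power drawn during the run, and energy-to-solution for a fixed job follows as power times time.

      # Worked example with made-up numbers: performance-per-watt as the figure
      # of merit for comparing candidate platforms on the same benchmark.
      platforms = {
          # name: (benchmark events per second, average power in watts during the run)
          "x86-64 server":   ( 5200.0, 380.0),
          "ARMv8 SoC":       (  900.0,  45.0),
          "GPU-accelerated": (21000.0, 650.0),
      }

      for name, (events_per_s, watts) in platforms.items():
          print(f"{name:>16}: {events_per_s / watts:6.1f} events/s per watt")
      # Energy-to-solution for a fixed job follows directly: joules = watts * seconds.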

  12. Power-Efficient Computing: Experiences from the COSA Project

    Directory of Open Access Journals (Sweden)

    Daniele Cesini

    2017-01-01

    Energy consumption is today one of the most relevant issues in operating HPC systems for scientific applications. The use of unconventional computing systems is therefore of great interest for several scientific communities looking for a better tradeoff between time-to-solution and energy-to-solution. In this context, the performance assessment of processors with a high ratio of performance per watt is necessary to understand how to realize energy-efficient computing systems for scientific applications using this class of processors. Computing On SOC Architecture (COSA) is a three-year project (2015–2017) funded by the Scientific Commission V of the Italian Institute for Nuclear Physics (INFN), which aims to investigate the performance and the total cost of ownership offered by computing systems based on commodity low-power Systems on Chip (SoCs) and highly energy-efficient systems based on GP-GPUs. In this work, we present the results of the project, analyzing the performance of several scientific applications on several GPU- and SoC-based systems. We also describe the methodology we have used to measure energy performance and the tools we have implemented to monitor the power drained by applications while running.

  13. GPU-based high-performance computing for radiation therapy

    International Nuclear Information System (INIS)

    Jia, Xun; Jiang, Steve B; Ziegenhein, Peter

    2014-01-01

    Recent developments in radiotherapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous amount of study has been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms will also be presented. (topical review)

  14. Wind power systems. Applications of computational intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Lingfeng [Toledo Univ., OH (United States). Dept. of Electrical Engineering and Computer Science; Singh, Chanan [Texas A and M Univ., College Station, TX (United States). Electrical and Computer Engineering Dept.; Kusiak, Andrew (eds.) [Iowa Univ., Iowa City, IA (United States). Mechanical and Industrial Engineering Dept.

    2010-07-01

    Renewable energy sources such as wind power have attracted much attention because they are environmentally friendly, do not produce carbon dioxide and other emissions, and can enhance a nation's energy security. For example, recently more significant amounts of wind power are being integrated into conventional power grids. Therefore, it is necessary to address various important and challenging issues related to wind power systems, which are significantly different from the traditional generation systems. This book is a resource for engineers, practitioners, and decision-makers interested in studying or using the power of computational intelligence based algorithms in handling various important problems in wind power systems at the levels of power generation, transmission, and distribution. Researchers have been developing biologically-inspired algorithms in a wide variety of complex large-scale engineering domains. Distinguished from the traditional analytical methods, the new methods usually accomplish the task through their computationally efficient mechanisms. Computational intelligence methods such as evolutionary computation, neural networks, and fuzzy systems have attracted much attention in electric power systems. Meanwhile, modern electric power systems are becoming more and more complex in order to meet the growing electricity market. In particular, the grid complexity is continuously enhanced by the integration of intermittent wind power as well as the current restructuring efforts in electricity industry. Quite often, the traditional analytical methods become less efficient or even unable to handle this increased complexity. As a result, it is natural to apply computational intelligence as a powerful tool to deal with various important and pressing problems in the current wind power systems. This book presents the state-of-the-art development in the field of computational intelligence applied to wind power systems by reviewing the most up

  15. A Multilevel Introspective Dynamic Optimization System For Holistic Power-Aware Computing

    DEFF Research Database (Denmark)

    Venkatachalam, Vasanth; Probst, Christian; Franz, Michael

    2005-01-01

    Power consumption is rapidly becoming the dominant limiting factor for further improvements in computer design. Curiously, this applies both at the “high-end” of workstations and servers and the “low end” of handheld devices and embedded computers. At the high-end, the challenge lies in dealing with exponentially growing power densities. At the low-end, there is a demand to make mobile devices more powerful and longer lasting, but battery technology is not improving at the same rate that power consumption is rising. Traditional power-management research is fragmented; techniques are being developed at specific levels, without fully exploring their synergy with other levels. Most software techniques target either operating systems or compilers but do not explore the interaction between the two layers. These techniques also have not fully explored the potential of virtual machines for power management ..., including that of applications and the virtual machine itself. We believe this introspective, holistic approach enables more informed power-management decisions.

  16. CUDA/GPU Technology : Parallel Programming For High Performance Scientific Computing

    OpenAIRE

    YUHENDRA; KUZE, Hiroaki; JOSAPHAT, Tetuko Sri Sumantyo

    2009-01-01

    Graphics processing units (GPUs), originally designed for computer video cards, have emerged as the most powerful chips in a high-performance workstation. In terms of high-performance computation capabilities, GPUs deliver much higher performance than conventional CPUs by means of parallel processing. In 2007, the birth of the Compute Unified Device Architecture (CUDA) and CUDA-enabled GPUs from NVIDIA Corporation brought a revolution in the general purpose GPU a...

  17. Computer controlled high voltage system

    Energy Technology Data Exchange (ETDEWEB)

    Kunov, B; Georgiev, G; Dimitrov, L [and others

    1996-12-31

    A multichannel computer-controlled high-voltage power supply system is developed. The basic technical parameters of the system are: output voltage, 100-3000 V; output current, 0-3 mA; maximum number of channels in one crate, 78. 3 refs.

  18. Designing high power targets with computational fluid dynamics (CFD)

    International Nuclear Information System (INIS)

    Covrig, S. D.

    2013-01-01

    High power liquid hydrogen (LH2) targets, up to 850 W, have been widely used at Jefferson Lab for the 6 GeV physics program. The typical luminosity loss of a 20 cm long LH2 target was 20% for a beam current of 100 μA rastered on a square of side 2 mm on the target. The 35 cm long, 2500 W LH2 target for the Qweak experiment had a luminosity loss of 0.8% at 180 μA beam rastered on a square of side 4 mm at the target. The Qweak target was the highest power liquid hydrogen target in the world and had the lowest noise figure. The Qweak target was the first one designed with CFD at Jefferson Lab. A CFD facility is being established at Jefferson Lab to design, build and test a new generation of low noise high power targets

  19. Designing high power targets with computational fluid dynamics (CFD)

    Energy Technology Data Exchange (ETDEWEB)

    Covrig, S. D. [Thomas Jefferson National Laboratory, Newport News, VA 23606 (United States)

    2013-11-07

    High power liquid hydrogen (LH2) targets, up to 850 W, have been widely used at Jefferson Lab for the 6 GeV physics program. The typical luminosity loss of a 20 cm long LH2 target was 20% for a beam current of 100 μA rastered on a square of side 2 mm on the target. The 35 cm long, 2500 W LH2 target for the Qweak experiment had a luminosity loss of 0.8% at 180 μA beam rastered on a square of side 4 mm at the target. The Qweak target was the highest power liquid hydrogen target in the world and had the lowest noise figure. The Qweak target was the first one designed with CFD at Jefferson Lab. A CFD facility is being established at Jefferson Lab to design, build and test a new generation of low noise high power targets.

  20. Leveraging the Power of High Performance Computing for Next Generation Sequencing Data Analysis: Tricks and Twists from a High Throughput Exome Workflow

    Science.gov (United States)

    Wonczak, Stephan; Thiele, Holger; Nieroda, Lech; Jabbari, Kamel; Borowski, Stefan; Sinha, Vishal; Gunia, Wilfried; Lang, Ulrich; Achter, Viktor; Nürnberg, Peter

    2015-01-01

    Next generation sequencing (NGS) has been a great success and is now a standard method of research in the life sciences. With this technology, dozens of whole genomes or hundreds of exomes can be sequenced in rather short time, producing huge amounts of data. Complex bioinformatics analyses are required to turn these data into scientific findings. In order to run these analyses fast, automated workflows implemented on high performance computers are state of the art. While providing sufficient compute power and storage to meet the NGS data challenge, high performance computing (HPC) systems require special care when utilized for high throughput processing. This is especially true if the HPC system is shared by different users. Here, stability, robustness and maintainability are as important for automated workflows as speed and throughput. To achieve all of these aims, dedicated solutions have to be developed. In this paper, we present the tricks and twists that we utilized in the implementation of our exome data processing workflow. It may serve as a guideline for other high throughput data analysis projects using a similar infrastructure. The code implementing our solutions is provided in the supporting information files. PMID:25942438
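
    One generic robustness pattern of the kind the authors allude to, retrying a transiently failing step with back-off before declaring a sample failed so that shared-filesystem or scheduler hiccups do not abort a whole run, is sketched below. This is an illustrative pattern with a hypothetical command, not the authors' pipeline code.

      # Generic robustness pattern (illustrative, not the authors' pipeline):
      # retry a transiently failing workflow step with exponential back-off so
      # that shared-resource hiccups do not abort a whole exome run.
      import subprocess
      import time

      def run_step(cmd, retries=3, base_delay_s=30):
          """Run one pipeline step, retrying on a non-zero exit status."""
          for attempt in range(1, retries + 1):
              result = subprocess.run(cmd, capture_output=True, text=True)
              if result.returncode == 0:
                  return result.stdout
              time.sleep(base_delay_s * 2 ** (attempt - 1))   # 30 s, 60 s, 120 s, ...
          raise RuntimeError(f"step {cmd[0]} failed after {retries} attempts")

      # Hypothetical usage: align one sample's reads (command is an assumption).
      # run_step(["bwa", "mem", "ref.fa", "sample_R1.fq.gz", "sample_R2.fq.gz"])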

  1. Application of a Statistical Linear Time-Varying System Model of High Grazing Angle Sea Clutter for Computing Interference Power

    Science.gov (United States)

    2017-12-08

    Only fragments of the report body are indexed in place of an abstract. They indicate that the report develops a statistical linear time-varying system model of high grazing angle sea clutter for computing interference power, approximates one of the sinc factors by the Dirichlet kernel to facilitate computation of the interference-power integral, obtains the resultant autocorrelation by substituting intermediate results, and provides the Python code used to generate its figures.
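
    For reference, the Dirichlet kernel referred to in the fragments is, in one common normalization, the periodic (aliased) sinc function, which reduces to the ordinary sinc for small arguments; the report's own argument and normalization (involving W and B) are not recoverable from the indexed text.

      % Textbook form of the periodic-sinc (Dirichlet) kernel; the report's own
      % normalization and argument are not recoverable from the fragments.
      D_N(x) \;=\; \frac{\sin(N x / 2)}{N \sin(x / 2)},
      \qquad
      D_N(x) \;\approx\; \operatorname{sinc}\!\left(\frac{N x}{2\pi}\right)
      \quad \text{for } |x| \ll \pi .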

  2. DOE research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  3. Parallel Computing: Some Activities in High Energy Physics

    Science.gov (United States)

    Willers, Ian

    This paper examines some activities in High Energy Physics that utilise parallel computing. The topic includes all computing from the proposed SIMD front end detectors, the farming applications, high-powered RISC processors and the large machines in the computer centers. We start by looking at the motivation behind using parallelism for general purpose computing. The developments around farming are then described from its simplest form to the more complex system in Fermilab. Finally, there is a list of some developments that are happening close to the experiments.

  4. Power plant process computer

    International Nuclear Information System (INIS)

    Koch, R.

    1982-01-01

    The concept of instrumentation and control in nuclear power plants incorporates the use of process computers for tasks which are on-line with respect to real-time requirements but do not involve closed-loop control. The general scope of tasks includes alarm annunciation on CRTs, data logging, data recording for post-trip reviews and plant behaviour analysis, nuclear data computation, and graphic displays. Process computers are additionally used for dedicated tasks such as the aeroball measuring system and the turbine stress evaluator. Further applications are personal dose supervision and access monitoring. (orig.)

  5. Application of computational intelligence in emerging power systems

    African Journals Online (AJOL)

    ... in the electrical engineering applications. This paper highlights the application of computational intelligence methods to power system problems. The various types of CI methods which are widely used in power systems are also discussed in brief. Keywords: Power systems, computational intelligence, artificial intelligence.

  6. Process computers automate CERN power supply installations

    International Nuclear Information System (INIS)

    Ullrich, H.; Martin, A.

    1974-01-01

    Higher standards of performance and reliability in the power plants of large particle accelerators necessitate increasing use of automation. CERN (the European Nuclear Research Centre) in Geneva started to employ process computers for plant automation at an early stage in its history. The great complexity and extent of the plants for high-energy physics first led to the setting-up of decentralized automatic systems, which are now being increasingly combined into one interconnected automation system. One of these automatic systems controls and monitors the extensive power supply installations for the main ring magnets in the experimental zones. (orig.) [de]

  7. High-performance computing for structural mechanics and earthquake/tsunami engineering

    CERN Document Server

    Hori, Muneo; Ohsaki, Makoto

    2016-01-01

    Huge earthquakes and tsunamis have caused serious damage to important structures such as civil infrastructure elements, buildings and power plants around the globe. To quantitatively evaluate such damage processes and to design effective prevention and mitigation measures, the latest high-performance computational mechanics technologies, which include terascale to petascale computers, can offer powerful tools. The phenomena covered in this book include seismic wave propagation in the crust and soil, the seismic response of infrastructure elements such as tunnels considering soil-structure interactions, the seismic response of high-rise buildings, the seismic response of nuclear power plants, tsunami run-up over coastal towns and tsunami inundation considering fluid-structure interactions. The book provides all necessary information for addressing these phenomena, ranging from the fundamentals of high-performance computing for finite element methods, key algorithms of accurate dynamic structural analysis, fluid flows ...

  8. Optical interconnection networks for high-performance computing systems

    International Nuclear Information System (INIS)

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  9. Trend of computer-based console for nuclear power plants

    International Nuclear Information System (INIS)

    Wajima, Tsunetaka; Serizawa, Michiya

    1975-01-01

    The amount of information to be watched by operators in the central operation room has increased with the growing capacity of nuclear power plants, and the necessity of computer-based consoles, which compile this information and rationalize the interface between operators and plants by introducing CRT displays and process computers, has come to be recognized. The integrated monitoring and controlling system is explained briefly by taking Dungeness B Nuclear Power Station in Britain as a typical example. This power station comprises two AGRs, and these two plants can be controlled in one central control room, each by a single operator. Three computers, including one on stand-by, are installed. Each computer has a core memory of 16 K words (24 bits/word), and four magnetic drums of 256 K words are installed as external memory. The peripheral equipment for each plant comprises 12 CRT displays, 6 typewriters, a high-speed tape reader and a tape punch. The display and recording of plant data, the analysis, display and recording of alarms, the control of the plants including the reactors, and post-incident recording are assigned to the computers. At Hitachi Ltd. in Japan, the introduction of color CRTs and the development of operating consoles, new data-access methods, and consoles for maintenance management are in progress. (Kako, I.)

  10. Computer-based control systems of nuclear power plants

    International Nuclear Information System (INIS)

    Kalashnikov, V.K.; Shugam, R.A.; Ol'shevsky, Yu.N.

    1975-01-01

    Computer-based control systems of nuclear power plants may be classified into those using computers for data acquisition only, those using computers for data acquisition and data processing, and those using computers for process control. In the present paper a brief review is given of the functions the above-mentioned systems perform, their applications in different nuclear power plants, and some of their characteristics. The trend towards hierarchical systems using control computers with back-up reserves already becomes clear when considering the control systems applied in the Canadian nuclear power plants, which are among the first equipped with process computers. The control system now under development for the large Soviet WWER-type reactors will also be based on the use of control computers. That part of the system concerned with controlling the reactor assembly is described in detail

  11. Plant computer system in nuclear power station

    International Nuclear Information System (INIS)

    Kato, Shinji; Fukuchi, Hiroshi

    1991-01-01

    In nuclear power stations, centralized monitoring has been adopted, and large quantities of information and operating equipment are concentrated in the central control rooms, which therefore become the important place of communication between plants and operators. More recently, owing to increased unit capacity, strengthened safety requirements, man-machine interface problems and so on, it has become important to concentrate information and to automate and simplify machinery and equipment in order to improve the operational environment, reliability and so on. Concerning the relation between nuclear power stations and computer systems, to which attention has recently been paid as the man-machine interface, the example of the Tsuruga Power Station of the Japan Atomic Power Co. is described. The No. 2 plant at the Tsuruga Power Station is a domestically built, standardized PWR plant with 1160 MWe output, and the computer system adopted there is explained. The fundamental concept of the central control board, the process computer system, the design policy, the basic system configuration, reliability and maintenance, the CRT displays, and the computer system for the No. 1 BWR 357 MW plant are reported. (K.I.)

  12. Micromagnetics on high-performance workstation and mobile computational platforms

    Science.gov (United States)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built as low-power computing clusters.

  13. Wirelessly powered sensor networks and computational RFID

    CERN Document Server

    2013-01-01

    The Wireless Identification and Sensing Platform (WISP) is the first of a new class of RF-powered sensing and computing systems.  Rather than being powered by batteries, these sensor systems are powered by radio waves that are either deliberately broadcast or ambient.  Enabled by ongoing exponential improvements in the energy efficiency of microelectronics, RF-powered sensing and computing is rapidly moving along a trajectory from impossible (in the recent past), to feasible (today), toward practical and commonplace (in the near future). This book is a collection of key papers on RF-powered sensing and computing systems including the WISP.  Several of the papers grew out of the WISP Challenge, a program in which Intel Corporation donated WISPs to academic applicants who proposed compelling WISP-based projects.  The book also includes papers presented at the first WISP Summit, a workshop held in Berkeley, CA in association with the ACM Sensys conference, as well as other relevant papers. The book provides ...

  14. New high power linacs and beam physics

    International Nuclear Information System (INIS)

    Wangler, T.P.; Gray, E.R.; Nath, S.; Crandall, K.R.; Hasegawa, K.

    1997-01-01

    New high-power proton linacs must be designed to control beam loss, which can lead to radioactivation of the accelerator. The threat of beam loss is increased significantly by the formation of beam halo. Numerical simulation studies have identified the space-charge interactions, especially those that occur in rms mismatched beams, as a major concern for halo growth. The maximum-amplitude predictions of the simulation codes must be subjected to independent tests to confirm the validity of the results. Consequently, the authors compare predictions from the particle-core halo models with computer simulations to test their understanding of the halo mechanisms that are incorporated in the computer codes. They present and discuss scaling laws that provide guidance for high-power linac design

  15. Computational Power of Symmetry-Protected Topological Phases.

    Science.gov (United States)

    Stephen, David T; Wang, Dong-Sheng; Prakash, Abhishodh; Wei, Tzu-Chieh; Raussendorf, Robert

    2017-07-07

    We consider ground states of quantum spin chains with symmetry-protected topological (SPT) order as resources for measurement-based quantum computation (MBQC). We show that, for a wide range of SPT phases, the computational power of ground states is uniform throughout each phase. This computational power, defined as the Lie group of executable gates in MBQC, is determined by the same algebraic information that labels the SPT phase itself. We prove that these Lie groups always contain a full set of single-qubit gates, thereby affirming the long-standing conjecture that general SPT phases can serve as computationally useful phases of matter.

  16. Computational methods in power system analysis

    CERN Document Server

    Idema, Reijer

    2014-01-01

    This book treats state-of-the-art computational methods for power flow studies and contingency analysis. In the first part the authors present the relevant computational methods and mathematical concepts. In the second part, power flow and contingency analysis are treated. Furthermore, traditional methods to solve such problems are compared to modern solvers, developed using the knowledge of the first part of the book. Finally, these solvers are analyzed both theoretically and experimentally, clearly showing the benefits of the modern approach.

  17. Computer Aided Modeling and Analysis of Five-Phase PMBLDC Motor Drive for Low Power High Torque Application

    Directory of Open Access Journals (Sweden)

    M. A. Inayathullaah

    2014-01-01

    In order to achieve high torque at low power with high efficiency, a new five-phase permanent magnet brushless DC (PMBLDC) motor design was analyzed and optimized. A similar three-phase motor having the same D/L ratio (inner diameter D to stator length L) is compared with the designed five-phase PMBLDC motor in terms of maximum torque and torque ripple. Maxwell software was used to build a finite element simulation model of the motor. The complicated internal magnetic field distribution and the dynamic performance were simulated at different rotor positions. No-load and load characteristics of the five-phase PMBLDC motor were simulated, and the power consumption of the materials was computed. The consistency of the final simulation results indicates that this method can provide a theoretical basis for further optimal design of this new type of motor and its drive, so as to improve the starting torque and reduce the torque ripple of the motor.

  18. Computer system for nuclear power plant parameter display

    International Nuclear Information System (INIS)

    Stritar, A.; Klobuchar, M.

    1990-01-01

    The computer system for efficient, cheap and simple presentation of data on the screen of the personal computer is described. The display is in alphanumerical or graphical form. The system can be used for the man-machine interface in the process monitoring system of the nuclear power plant. It represents the third level of the new process computer system of the Nuclear Power Plant Krsko. (author)

  19. Computer Architecture Techniques for Power-Efficiency

    CERN Document Server

    Kaxiras, Stefanos

    2008-01-01

    In the last few years, power dissipation has become an important design constraint, on par with performance, in the design of new computer systems. Whereas in the past, the primary job of the computer architect was to translate improvements in operating frequency and transistor count into performance, now power efficiency must be taken into account at every step of the design process. While for some time, architects have been successful in delivering 40% to 50% annual improvement in processor performance, costs that were previously brushed aside eventually caught up. The most critical of these

  20. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    CERN Document Server

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  1. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    Science.gov (United States)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  2. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    International Nuclear Information System (INIS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Muzaffar, Shahzad; Knight, Robert

    2015-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG). (paper)

  3. Hot Chips and Hot Interconnects for High End Computing Systems

    Science.gov (United States)

    Saini, Subhash

    2005-01-01

    I will discuss several processors: 1. the Cray proprietary processor used in the Cray X1; 2. the IBM Power 3 and Power 4 used in IBM SP 3 and SP 4 systems; 3. the Intel Itanium and Xeon, used in SGI Altix systems and clusters respectively; 4. the IBM System-on-a-Chip used in IBM BlueGene/L; 5. the HP Alpha EV68 processor used in the DOE ASCI Q cluster; 6. the SPARC64 V processor, which is used in the Fujitsu PRIMEPOWER HPC2500; 7. an NEC proprietary processor, which is used in the NEC SX-6/7; 8. the Power 4+ processor, which is used in the Hitachi SR11000; 9. the NEC proprietary processor used in the Earth Simulator. The IBM POWER5 and Red Storm computing systems will also be discussed. The architectures of these processors will first be presented, followed by the interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF and hybrid MPI + OpenMP). The tutorial will conclude with a discussion of general trends in the field of high performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).

  4. Computing trends using graphic processor in high energy physics

    CERN Document Server

    Niculescu, Mihai

    2011-01-01

    One of the main challenges in High Energy Physics is to perform fast analysis of large amounts of experimental and simulated data. At LHC-CERN one p-p event is approximately 1 MB in size. The time taken to analyze the data and obtain results quickly depends on high computational power. The main advantage of GPU (Graphic Processor Unit) programming over traditional CPU programming is that graphics cards bring a lot of computing power at a very low price. Today a huge number of applications (scientific, financial, etc.) have begun to be ported to or developed for GPUs, including Monte Carlo tools and data analysis tools for High Energy Physics. In this paper, we present the current status and trends of GPU use in HEP.

  5. Analysis of three-phase power-supply systems using computer-aided design programs

    International Nuclear Information System (INIS)

    Oberst, E.F.

    1977-01-01

    A major concern of every designer of large, three-phase power-supply systems is the protection of system components from overvoltage transients. At present, three computer-aided circuit design programs are available in the Magnetic Fusion Energy (MFE) National Computer Center that can be used to analyze three-phase power systems: MINI SCEPTRE, SPICE I, and SPICE II. These programs have been used at Lawrence Livermore Laboratory (LLL) to analyze the operation of a 200-kV dc, 20-A acceleration power supply for the High Voltage Test Stand. Various overvoltage conditions are simulated and the effectiveness of system protective devices is observed. The simulated overvoltage conditions include such things as circuit breaker openings, pulsed loading, and commutation voltage surges in the rectifiers. These examples are used to illustrate the use of the computer-aided, circuit-design programs discussed in this paper

  6. Energy efficiency of computer power supply units - Final report

    Energy Technology Data Exchange (ETDEWEB)

    Aebischer, B. [cepe - Centre for Energy Policy and Economics, Swiss Federal Institute of Technology Zuerich, Zuerich (Switzerland); Huser, H. [Encontrol GmbH, Niederrohrdorf (Switzerland)

    2002-11-15

    This final report for the Swiss Federal Office of Energy (SFOE) takes a look at the efficiency of computer power supply units, which decreases rapidly during average computer use. The background and the purpose of the project are examined. The power supplies for personal computers are discussed and the testing arrangement used is described. Efficiency, power-factor and operating points of the units are examined. Potentials for improvement and measures to be taken are discussed. Also, action to be taken by those involved in the design and operation of such power units is proposed. Finally, recommendations for further work are made.

  7. Department of Energy research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-08-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models, the execution of which is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex, and consequently it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  8. GRID : unlimited computing power on your desktop Conference MT17

    CERN Multimedia

    2001-01-01

    The Computational GRID is an analogy to the electrical power grid for computing resources. It decouples the provision of computing, data, and networking from its use, it allows large-scale pooling and sharing of resources distributed world-wide. Every computer, from a desktop to a mainframe or supercomputer, can provide computing power or data for the GRID. The final objective is to plug your computer into the wall and have direct access to huge computing resources immediately, just like plugging-in a lamp to get instant light. The GRID will facilitate world-wide scientific collaborations on an unprecedented scale. It will provide transparent access to major distributed resources of computer power, data, information, and collaborations.

  9. Profiling an application for power consumption during execution on a compute node

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E

    2013-09-17

    Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
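    The profiling flow in this record lends itself to a compact illustration. Below is a minimal sketch of the idea, assuming a hypothetical hardware power profile (watts per class of processing operation) and a simple time-share summary of the application; none of the names or figures come from the patent itself.

```python
# Minimal sketch of the profiling flow (all names and figures are invented).
# The hardware profile gives power draw per class of processing operation;
# the application is summarized by the share of time it spends in each class.

hardware_power_profile = {   # watts, hypothetical
    "compute": 95.0,
    "memory": 40.0,
    "network": 25.0,
    "idle": 15.0,
}

application_time_share = {   # fraction of runtime per operation class
    "compute": 0.60,
    "memory": 0.25,
    "network": 0.10,
    "idle": 0.05,
}

def application_power_profile(hw_profile, time_share):
    """Determine the application's power profile from both inputs."""
    return {op: hw_profile[op] * share for op, share in time_share.items()}

def report(profile):
    """Report the per-operation contribution and the resulting average power."""
    for op, watts in sorted(profile.items(), key=lambda kv: -kv[1]):
        print(f"{op:8s} {watts:6.1f} W")
    print(f"average  {sum(profile.values()):6.1f} W")

report(application_power_profile(hardware_power_profile, application_time_share))
```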

  10. Computational engineering applied to the concentrating solar power technology

    International Nuclear Information System (INIS)

    Giannuzzi, Giuseppe Mauro; Miliozzi, Adio

    2006-01-01

    Solar power plants based on parabolic-trough collectors present innumerable thermo-structural problems related, on the one hand, to the high temperatures of the heat transfer fluid and, on the other, to the need for highly precise aiming and structural resistance. Devising an engineering response to these problems implies analysing generally unconventional solutions. At present, computational engineering is the principal investigation tool; it speeds the design of prototype installations and significantly reduces the necessary but costly experimental programmes.

  11. Fundamentals of power integrity for computer platforms and systems

    CERN Document Server

    DiBene, Joseph T

    2014-01-01

    An all-encompassing text that focuses on the fundamentals of power integrity Power integrity is the study of power distribution from the source to the load and the system level issues that can occur across it. For computer systems, these issues can range from inside the silicon to across the board and may egress into other parts of the platform, including thermal, EMI, and mechanical. With a focus on computer systems and silicon level power delivery, this book sheds light on the fundamentals of power integrity, utilizing the author's extensive background in the power integrity industry and un

  12. Computer-based control of nuclear power information systems at international level

    International Nuclear Information System (INIS)

    Boniface, Ekechukwu; Okonkwo, Obi

    2011-01-01

    In most highly industrialized countries information plays a major role in anti-nuclear campaigns. Information and discussions on nuclear power need critical and objective analysis before structured information is presented to the public, to avoid biased anti-nuclear information on one side and neglect of the real risks of nuclear power on the other. This research develops a computer-based information system for the control of nuclear power information at the international level. The system is to provide easy and fast information highways for the following: (1) low regulatory dose and activity limits as indicators of high danger for individuals and the public; (2) provision of relevant technical and scientific education among the information carriers in the nuclear power countries. The research is a fact-oriented investigation of radioactivity, and it also deals with fact-oriented education about nuclear accidents and safety. A standard procedure for disseminating the latest findings through technical and scientific experts in nuclear technology is developed. The information highway clearly analyzes the factual information about radiation risk and nuclear energy. Radiation cannot be removed from our environment, and the necessity of using radiation makes nuclear energy a two-edged sword. It is therefore possible to use a computer-based information system to project and disseminate expert knowledge about nuclear technology in a positive way, and also to guide the public on the safety and control of nuclear energy. The computer-based information highway for nuclear energy technology is intended to assist scientific research and technological development at the international level. (author)

  13. Usage of super high speed computer for clarification of complex phenomena

    International Nuclear Information System (INIS)

    Sekiguchi, Tomotsugu; Sato, Mitsuhisa; Nakata, Hideki; Tatebe, Osami; Takagi, Hiromitsu

    1999-01-01

    This study aims to construct an efficient application environment for super-high-speed computer systems, suited to parallel distributed systems and easily ported to computer systems of different types and sizes, by conducting research and development on the application technology needed to elucidate complicated phenomena in the nuclear power field through computational science. To realize such an environment, the Electrotechnical Laboratory has developed Ninf, a network numerical information library. The Ninf system can supply a global network infrastructure for worldwide high-performance computing over wide-area distributed networks. (G.K.)

  14. Axial power deviation control strategy and computer simulation for Daya Bay Nuclear Power Station

    International Nuclear Information System (INIS)

    Liao Yehong; Zhou Xiaoling; Xiao Min

    2004-01-01

    Daya Bay Nuclear Power Station has a very tight operating diagram, especially on its right side, so successful control of the axial power deviation is crucial to the nuclear safety of this PWR. After analyzing the effect of various core characteristics on the axial power distribution, several axial power deviation control strategies have been proposed to cover different power-varying operation scenarios. Application and computer simulation of the strategies have shown that the predicted evolution of the axial power deviation is comparable to the measured values, and that the control strategies are effective. Engineering experience shows that applying this methodology can accurately predict axial power deviation transients, and it has therefore become a useful tool for reactor operation and safety control. This paper presents the axial power control characteristics, the reactor operation strategy research, the computer simulation, and the comparison with measurements at Daya Bay Nuclear Power Station. (author)
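    The quantity being regulated here, the axial power deviation (axial offset), is conventionally defined from the powers produced in the top and bottom halves of the core. A minimal sketch with made-up detector fractions (not Daya Bay data) is shown below.

```python
# Axial offset (axial power deviation) from top/bottom core power fractions.
# The detector readings below are made up for illustration.
def axial_offset(p_top, p_bottom):
    """Axial power deviation as a fraction of total power."""
    return (p_top - p_bottom) / (p_top + p_bottom)

# A bottom-skewed power shape gives a negative offset.
print(f"AO = {axial_offset(p_top=0.47, p_bottom=0.53):+.2%}")
```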

  15. The computer simulation of the resonant network for the B-factory model power supply

    International Nuclear Information System (INIS)

    Zhou, W.; Endo, K.

    1993-07-01

    A high-repetition-rate model power supply and its resonant magnet network are simulated on a computer in order to check and improve the design of the power supply for the B-factory booster. The emphasis is on the transient behavior of the power supply and the resonant magnet network. The results of the simulation are given. (author)

  16. Architectural analysis for wirelessly powered computing platforms

    NARCIS (Netherlands)

    Kapoor, A.; Pineda de Gyvez, J.

    2013-01-01

    We present a design framework for wirelessly powered generic computing platforms that takes into account various system parameters in response to a time-varying energy source. These parameters are the charging profile of the energy source, computing speed (fclk), digital supply voltage (VDD), energy
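    The trade-off among the listed parameters can be illustrated with the usual CMOS dynamic-power relation P ~ C_eff * VDD^2 * fclk, which bounds the clock frequency a harvested power budget can sustain. The sketch below uses assumed values for the effective switched capacitance and the harvested power; it is not taken from the paper.

```python
# Energy balance for a wirelessly powered load: dynamic power
# P ~ C_eff * VDD^2 * fclk must stay within the harvested power budget.
def max_fclk(c_eff_farad, vdd_volt, p_harvested_watt):
    """Highest clock frequency the harvested power can sustain (leakage ignored)."""
    return p_harvested_watt / (c_eff_farad * vdd_volt ** 2)

# Hypothetical numbers: 1 nF effective switched capacitance, 5 mW harvested.
for vdd in (0.9, 0.6):
    print(f"VDD = {vdd:.1f} V -> fclk <= {max_fclk(1e-9, vdd, 5e-3) / 1e6:.1f} MHz")
```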

  17. Modeling high-power RF accelerator cavities with SPICE

    International Nuclear Information System (INIS)

    Humphries, S. Jr.

    1992-01-01

    The dynamical interactions between RF accelerator cavities and high-power beams can be treated on personal computers using a lumped circuit element model and the SPICE circuit analysis code. Applications include studies of wake potentials, two-beam accelerators, microwave sources, and transverse mode damping. This report describes the construction of analogs for TMmn0 modes and the creation of SPICE input for cylindrical cavities. The models were used to study continuous generation of kA electron beam pulses from a vacuum cavity driven by a high-power RF source
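    A lumped-element analog of this kind typically maps each cavity mode, characterized by its resonant frequency, R/Q and quality factor, onto a parallel RLC branch. The sketch below performs that mapping under the circuit convention R/Q = sqrt(L/C); accelerator conventions may differ by a factor of two, and the numerical values are hypothetical.

```python
import math

# Map a cavity mode (f0, R/Q, Q) onto a parallel RLC branch, using the circuit
# convention R/Q = sqrt(L/C). The numbers below are hypothetical.
def mode_to_rlc(f0_hz, r_over_q_ohm, q_factor):
    w0 = 2 * math.pi * f0_hz
    inductance = r_over_q_ohm / w0            # henries
    capacitance = 1.0 / (w0 * r_over_q_ohm)   # farads
    resistance = q_factor * r_over_q_ohm      # ohms (shunt resistance)
    return resistance, inductance, capacitance

r, l, c = mode_to_rlc(805e6, 100.0, 2e4)
print(f"R = {r:.3e} ohm, L = {l:.3e} H, C = {c:.3e} F")
```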

  18. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Science.gov (United States)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements, and chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs of the future. Most software applications benefited from the increased computing power in the same way that increases in clock speed had helped applications run faster. For Computational ElectroMagnetics (CEM) software developers, however, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation undertaken to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as the CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the

  19. Power Conversion Study for High Temperature Gas-Cooled Reactors

    International Nuclear Information System (INIS)

    Chang Oh; Richard Moore; Robert Barner

    2005-01-01

    The Idaho National Laboratory (INL) is investigating a Brayton cycle efficiency improvement on a high temperature gas-cooled reactor (HTGR) as part of the Generation IV nuclear engineering research initiative. There are some technical issues to be resolved before the selection of the final design of the high temperature gas-cooled reactor, called the Next Generation Nuclear Plant (NGNP), which is to be built at the INEEL by 2017. The technical issues include the selection of the working fluid, direct versus indirect cycle, the power cycle type, the optimal number of intercoolers, and others. In this paper, we investigated a number of working fluids for the power conversion loop, direct versus indirect cycles, the effect of intercoolers, and other thermal-hydraulic issues; however, we present only part of the results obtained so far. The HYSYS computer code was used along with a computer model developed in the Visual Basic language.

  20. Parallel computing for event reconstruction in high-energy physics

    International Nuclear Information System (INIS)

    Wolbers, S.

    1993-01-01

    Parallel computing has been recognized as a solution to large computing problems. In High Energy Physics offline event reconstruction of detector data is a very large computing problem that has been solved with parallel computing techniques. A review of the parallel programming package CPS (Cooperative Processes Software) developed and used at Fermilab for offline reconstruction of Terabytes of data requiring the delivery of hundreds of Vax-Years per experiment is given. The Fermilab UNIX farms, consisting of 180 Silicon Graphics workstations and 144 IBM RS6000 workstations, are used to provide the computing power for the experiments. Fermilab has had a long history of providing production parallel computing starting with the ACP (Advanced Computer Project) Farms in 1986. The Fermilab UNIX Farms have been in production for over 2 years with 24 hour/day service to experimental user groups. Additional tools for management, control and monitoring these large systems will be described. Possible future directions for parallel computing in High Energy Physics will be given

  1. High performance computing in science and engineering '09: transactions of the High Performance Computing Center, Stuttgart (HLRS) 2009

    National Research Council Canada - National Science Library

    Nagel, Wolfgang E; Kröner, Dietmar; Resch, Michael

    2010-01-01

    ...), NIC/JSC (Jülich), and LRZ (Munich). As part of that strategic initiative, NIC/JSC already installed the first phase of the GCS HPC Tier-0 resources in May 2009, an IBM Blue Gene/P with roughly 300,000 cores, this time in Jülich. With that, the GCS provides the most powerful high-performance computing infrastructure in Europe alread...

  2. Improvement of nuclear power plant monitor and control equipment. Computer application backfitting

    International Nuclear Information System (INIS)

    Hayakawa, H.; Kawamura, A.; Suto, O.; Kinoshita, Y.; Toda, Y.

    1985-01-01

    This paper describes the application of advanced computer technology to existing Japanese Boiling Water Reactor (BWR) nuclear power plants for backfitting. First we review the background and objectives of backfitting. Features of backfitting, such as the restrictions and constraints imposed by the existing equipment, are discussed, and we describe how these restrictions are overcome by introducing new technology such as highly efficient data transmission using multiplexing and compact, space-saving computer systems. The role of the computer system in a reliable nuclear power station is described, drawing on a wide spectrum of TOSHIBA backfitting computer system application experience. (author)

  3. A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.

    Science.gov (United States)

    Wehner, M. F.; Oliker, L.; Shalf, J.

    2008-12-01

    Exascale computers would allow routine ensemble modeling of the global climate system at the cloud system resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra high resolution climate modeling. These power efficient processors, used in consumer electronic devices such as mobile phones, portable music players, cameras, etc., can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer scale climate model a thousand times faster than real time could be designed and built in a five year time scale for US$75M with a power consumption of 3MW. This is cheaper, more power efficient and sooner than any other existing technology.

  4. Re-Form: FPGA-Powered True Codesign Flow for High-Performance Computing In The Post-Moore Era

    Energy Technology Data Exchange (ETDEWEB)

    Cappello, Franck; Yoshii, Kazutomo; Finkel, Hal; Cong, Jason

    2016-11-14

    Multicore scaling will end soon because of practical power limits. Dark silicon is becoming a major issue even more than the end of Moore’s law. In the post-Moore era, the energy efficiency of computing will be a major concern. FPGAs could be a key to maximizing the energy efficiency. In this paper we address severe challenges in the adoption of FPGA in HPC and describe “Re-form,” an FPGA-powered codesign flow.

  5. Practical computer analysis of switch mode power supplies

    CERN Document Server

    Bennett, Johnny C

    2006-01-01

    When designing switch-mode power supplies (SMPSs), engineers need much more than simple "recipes" for analysis. Such plug-and-go instructions are not at all helpful for simulating larger and more complex circuits and systems. Offering more than merely a "cookbook," Practical Computer Analysis of Switch Mode Power Supplies provides a thorough understanding of the essential requirements for analyzing SMPS performance characteristics. It demonstrates the power of the circuit averaging technique when used with powerful computer circuit simulation programs. The book begins with SMPS fundamentals and the basics of circuit averaging models, reviewing most basic topologies and explaining all of their various modes of operation and control. The author then discusses the general analysis requirements of power supplies and how to develop the general types of SMPS models, demonstrating the use of SPICE for analysis. He examines the basic first-order analyses generally associated with SMPS performance along with more pra...
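    As a flavor of what circuit averaging produces, the sketch below gives the averaged (DC) model of an ideal buck converter in continuous conduction, where the switch network reduces to an ideal transformer of ratio D. This is a generic textbook example, not one drawn from the book.

```python
# Averaged (DC) model of an ideal buck converter in continuous conduction:
# the switch network averages to an ideal DC transformer of ratio D.
def buck_averaged(v_in, duty, i_load):
    v_out = duty * v_in    # averaged output voltage
    i_in = duty * i_load   # averaged input current (lossless assumption)
    return v_out, i_in

v_out, i_in = buck_averaged(v_in=12.0, duty=0.275, i_load=3.0)
print(f"Vout = {v_out:.2f} V, Iin = {i_in:.2f} A")
```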

  6. Computer program analyzes and monitors electrical power systems (POSIMO)

    Science.gov (United States)

    Jaeger, K.

    1972-01-01

    Requirements to monitor and/or simulate electric power distribution, power balance, and charge budget are discussed. Computer program to analyze power system and generate set of characteristic power system data is described. Application to status indicators to denote different exclusive conditions is presented.

  7. Computing power on the move

    CERN Multimedia

    Joannah Caborn Wengler

    2012-01-01

    You might sit right next to your computer as you work, use the GRID’s computing power sitting in another part of the world or share CPU time with the Cloud: actual and virtual machines communicate and exchange information, and the place where they are located is a detail of only marginal importance. CERN’s new remote computer centre will open in Hungary in 2013.   Artist's impression of the new Wigner Data Centre. (Image: Wigner). CERN’s computing department has been aiming to minimise human contact with the machines for a while now. “The problem is that people going in creates dust, and simply touching things may cause damage,” explains Wayne Salter, Leader of the IT Computing Facilities Group. A first remote centre on the other side of Geneva was opened in June 2010 and a new one will open in Hungary next year. “Once the centre in Budapest is running, we will not be going there to operate it. As far as possible, w...

  8. Fault tolerant embedded computers and power electronics for nuclear robotics

    International Nuclear Information System (INIS)

    Giraud, A.; Robiolle, M.

    1995-01-01

    To meet the requirements of the nuclear industry, it is necessary to use embedded rad-tolerant electronics with a high level of safety. In this paper, we first describe a computer architecture called MICADO designed for the French nuclear industry. We then present ongoing projects in our industry. Particular attention is paid to power electronics for remote-operated and legged robots. (authors). 7 refs., 2 figs

  9. Fault tolerant embedded computers and power electronics for nuclear robotics

    Energy Technology Data Exchange (ETDEWEB)

    Giraud, A.; Robiolle, M.

    1995-12-31

    To meet the requirements of the nuclear industry, it is necessary to use embedded rad-tolerant electronics with a high level of safety. In this paper, we first describe a computer architecture called MICADO designed for the French nuclear industry. We then present ongoing projects in our industry. Particular attention is paid to power electronics for remote-operated and legged robots. (authors). 7 refs., 2 figs.

  10. High Efficiency Power Converter for Low Voltage High Power Applications

    DEFF Research Database (Denmark)

    Nymand, Morten

    The topic of this thesis is the design of high-efficiency power electronic dc-to-dc converters for high-power, low-input-voltage to high-output-voltage applications. These converters are increasingly required for emerging sustainable energy systems such as fuel cell, battery or photovoltaic based ..., and remote power generation for light towers, camper vans, boats, beacons, and buoys etc. A review of the current state of the art is presented. The best performing converters achieve moderately high peak efficiencies at high input voltage and medium power level. However, system dimensioning and cost are often...

  11. A computational modeling approach of the jet-like acoustic streaming and heat generation induced by low frequency high power ultrasonic horn reactors.

    Science.gov (United States)

    Trujillo, Francisco Javier; Knoerzer, Kai

    2011-11-01

    High power ultrasound reactors have gained a lot of interest in the food industry given the effects that can arise from ultrasonic-induced cavitation in liquid foods. However, most of the new food processing developments have been based on empirical approaches. Thus, there is a need for mathematical models which help to understand, optimize, and scale up ultrasonic reactors. In this work, a computational fluid dynamics (CFD) model was developed to predict the acoustic streaming and induced heat generated by an ultrasonic horn reactor. In the model it is assumed that the horn tip is a fluid inlet, where a turbulent jet flow is injected into the vessel. The hydrodynamic momentum rate of the incoming jet is assumed to be equal to the total acoustic momentum rate emitted by the acoustic power source. CFD velocity predictions show excellent agreement with the experimental data for power densities W0/V ≥ 25 kW/m³. This model successfully describes hydrodynamic fields (streaming) generated by low-frequency-high-power ultrasound. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
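    One rough way to translate the emitted acoustic power into an equivalent inlet jet velocity, in the spirit of the momentum-rate balance described above, is to equate the jet momentum rate rho*A*v^2 to the acoustic momentum rate, of order P/c. The sketch below uses assumed values for the horn tip diameter and the delivered power; the paper's exact formulation may differ.

```python
import math

# Rough equivalent inlet jet velocity: equate the jet momentum rate
# rho * A * v^2 to the acoustic momentum rate, taken here as P / c.
# All values are assumed for illustration (water at room temperature).
def jet_inlet_velocity(p_acoustic_w, tip_diameter_m, rho=1000.0, c=1482.0):
    area = math.pi * (tip_diameter_m / 2.0) ** 2
    return math.sqrt(p_acoustic_w / (rho * c * area))

# 100 W delivered through a 13 mm horn tip into water.
print(f"v ~ {jet_inlet_velocity(100.0, 0.013):.2f} m/s")
```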

  12. Toward High-Power Klystrons With RF Power Conversion Efficiency on the Order of 90%

    CERN Document Server

    Baikov, Andrey Yu; Syratchev, Igor

    2015-01-01

    The increase in efficiency of RF power generation for future large accelerators is considered a high priority issue. The vast majority of the existing commercial high-power RF klystrons operates in the electronic efficiency range between 40% and 55%. Only a few klystrons available on the market are capable of operating with 65% efficiency or above. In this paper, a new method to achieve 90% RF power conversion efficiency in a klystron amplifier is presented. The essential part of this method is a new bunching technique - bunching with bunch core oscillations. Computer simulations confirm that the RF production efficiency above 90% can be reached with this new bunching method. The results of a preliminary study of an L-band, 20-MW peak RF power multibeam klystron for Compact Linear Collider with the efficiency above 85% are presented.

  13. Computer techniques for experimental work in GDR nuclear power plants with WWER

    International Nuclear Information System (INIS)

    Stemmler, G.

    1985-01-01

    Nuclear power plant units with WWER reactors are increasingly being equipped with high-performance, programmable process control computers. There are, however, good reasons to advance the development of computer-aided measuring systems further, in particular for experimental work. A special structure for such systems, based on dividing the work into relatively rigid data registration and primary handling on one side and further processing written in a high-level programming language on the other, has proved useful in the GDR. (author)

  14. Profiling an application for power consumption during execution on a plurality of compute nodes

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2012-08-21

    Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.

  15. Computer aided method of low voltage power distribution networks protection system against lightning and electromagnetic pulse generated by high altitude nuclear burst

    International Nuclear Information System (INIS)

    Laroubine, J.

    1989-01-01

    Lightning creates an electromagnetic field that produces a long-duration, high-energy current pulse on low-voltage power distribution networks. A high-altitude nuclear burst, on the other hand, generates an electromagnetic pulse which causes fast and intense interference. We describe here the specifications of a passive filter that can reject these interferences. A computer-aided simulation method was used to create a prototype. Experimental results confirm the validity of the model used for the simulation.
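    A passive rejection filter of the kind described is, in its simplest form, an LC low-pass section that passes the 50/60 Hz mains component and attenuates the fast transients. The sketch below only computes the cut-off frequency of such a section with illustrative component values; it is not the filter specified in the report.

```python
import math

# Cut-off frequency of a single LC low-pass section; component values are
# illustrative only, not the specification developed in the report.
def lc_cutoff_hz(l_henry, c_farad):
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

# A 100 uH series inductor with a 10 uF shunt capacitor.
print(f"fc = {lc_cutoff_hz(100e-6, 10e-6):.0f} Hz")
```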

  16. Application of a High-Power Reversible Converter in a Hybrid Traction Power Supply System

    Directory of Open Access Journals (Sweden)

    Gang Zhang

    2017-03-01

    A high-power reversible converter can achieve a variety of functions, such as recovering regenerative braking energy, expanding traction power capacity, and improving the alternating current (AC) grid power factor. A new hybrid traction power supply scheme, which consists of a high-power reversible converter and two 12-pulse diode rectifiers, is proposed. A droop control method based on load current feed-forward is adopted to realize the load distribution between the reversible converter and the existing 12-pulse diode rectifiers. The direct current (DC) short-circuit characteristics of the reversible converter are studied, and the relationship between the peak fault current and the circuit parameters is obtained from theoretical calculations and validated by computer simulation. The first two sets of 2 MW reversible converters have been successfully applied in Beijing Metro Line 10; the proposed hybrid application scheme and coordinated control strategy are verified, and an average energy saving of 11.15% is reached.
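    The droop idea can be summarized in one line: the converter's DC voltage reference is lowered in proportion to the fed-forward load current, so that the diode rectifiers naturally pick up more of the load as current grows. The sketch below uses illustrative setpoints and gain, not the values of the Beijing Metro installation.

```python
# Droop characteristic with load-current feed-forward: the converter's DC
# voltage reference drops in proportion to the measured load current, so the
# diode rectifiers share more of the load as current rises. Setpoints and
# gain are illustrative, not the installed values.
def droop_voltage_ref(v_no_load, droop_gain, i_load):
    return v_no_load - droop_gain * i_load

for i_load in (0.0, 1000.0, 2000.0):  # amps
    v_ref = droop_voltage_ref(v_no_load=836.0, droop_gain=0.01, i_load=i_load)
    print(f"I = {i_load:6.0f} A -> Vref = {v_ref:.1f} V")
```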

  17. High-average-power solid state lasers

    International Nuclear Information System (INIS)

    Summers, M.A.

    1989-01-01

    In 1987, a broad-based, aggressive R&D program was initiated, aimed at developing the technologies necessary to make possible the use of solid state lasers capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and application of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high-quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wavefront aberrations in zig-zag slabs, understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs, and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs

  18. Proceedings of national symposium on computer applications in power plants

    International Nuclear Information System (INIS)

    1992-01-01

    The National Symposium on Computer Applications in Power Plants was organized to help promote exchange of views among scientists and engineers engaged in design, engineering, operation and maintenance of computer based systems in nuclear power plants, conventional power plants, heavy water plants, nuclear fuel cycle facilities and allied industries. About one hundred papers were presented at the Symposium. Those falling within the subject scope of INIS have been processed separately. (author)

  19. Budget-based power consumption for application execution on a plurality of compute nodes

    Science.gov (United States)

    Archer, Charles J; Inglett, Todd A; Ratterman, Joseph D

    2012-10-23

    Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.
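    A minimal sketch of the budget-based scheme is given below: applications run in priority order at an initial power level, and a conservation action (here, simple throttling) is applied once an assumed energy budget is crossed. The accounting model and numbers are invented for illustration.

```python
# Budget-based execution sketch: run applications in priority order at an
# initial power level; once the assumed energy budget is crossed, apply a
# conservation action (here, throttling the per-node power). All numbers
# are invented for illustration.
def run_with_budget(apps, power_per_node_w, n_nodes, budget_wh):
    consumed_wh = 0.0
    for app in sorted(apps, key=lambda a: a["priority"]):
        consumed_wh += power_per_node_w * n_nodes * app["runtime_s"] / 3600.0
        if consumed_wh >= budget_wh:
            print(f"budget reached during {app['name']}: applying throttling")
            power_per_node_w *= 0.7  # example power conservation action
    return consumed_wh

apps = [{"name": "solver", "priority": 0, "runtime_s": 1800},
        {"name": "postproc", "priority": 1, "runtime_s": 600}]
total = run_with_budget(apps, power_per_node_w=200.0, n_nodes=64, budget_wh=5000.0)
print(f"total consumption ~ {total:.0f} Wh")
```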

  20. Transitions in the computational power of thermal states for measurement-based quantum computation

    International Nuclear Information System (INIS)

    Barrett, Sean D.; Bartlett, Stephen D.; Jennings, David; Doherty, Andrew C.; Rudolph, Terry

    2009-01-01

    We show that the usefulness of the thermal state of a specific spin-lattice model for measurement-based quantum computing exhibits a transition between two distinct 'phases' - one in which every state is a universal resource for quantum computation, and another in which any local measurement sequence can be simulated efficiently on a classical computer. Remarkably, this transition in computational power does not coincide with any phase transition, classical or quantum, in the underlying spin-lattice model.

  1. High Power RF Test Facility at the SNS

    CERN Document Server

    Kang, Yoon W; Campisi, Isidoro E; Champion, Mark; Crofford, Mark; Davis, Kirk; Drury, Michael A; Fuja, Ray E; Gurd, Pamela; Kasemir, Kay-Uwe; McCarthy, Michael P; Powers, Tom; Shajedul Hasan, S M; Stirbet, Mircea; Stout, Daniel; Tang, Johnny Y; Vassioutchenko, Alexandre V; Wezensky, Mark

    2005-01-01

    An RF Test Facility has been completed in the SNS project at ORNL to support the testing and conditioning of RF subsystems and components. The system consists of two transmitters for two klystrons powered by a common high-voltage pulsed converter modulator that can provide power to two independent RF systems. The waveguides are configured in WR2100 and WR1150 sizes for the presently used frequencies of 402.5 MHz and 805 MHz. Both the 402.5 MHz and 805 MHz systems have circulator-protected klystrons that can be powered by the modulator, which is capable of delivering 11 MW peak and 1 MW average power. The facility has been equipped with computer control for various RF processing tasks and complete dual-frequency operation. More than forty 805 MHz fundamental power couplers for the SNS superconducting linac (SCL) cavities have been RF conditioned in this facility. The facility provides more than 1000 ft2 of floor area for various test setups. The facility also has a shielded cave area that can support high power tests of normal conducti...

  2. Computing and cognition in future power plant operations

    International Nuclear Information System (INIS)

    Kisner, R.A.; Sheridan, T.B.

    1983-01-01

    The intent of this paper is to speculate on the nature of future interactions between people and computers in the operation of power plants. In particular, the authors offer a taxonomy for examining the differing functions of operators in interacting with the plant and its computers, and the differing functions of the computers in interacting with the plant and its operators

  3. Computing and cognition in future power-plant operations

    International Nuclear Information System (INIS)

    Kisner, R.A.; Sheridan, T.B.

    1983-01-01

    The intent of this paper is to speculate on the nature of future interactions between people and computers in the operation of power plants. In particular, the authors offer a taxonomy for examining the differing functions of operators in interacting with the plant and its computers, and the differing functions of the computers in interacting with the plant and its operators

  4. Assessing Power Monitoring Approaches for Energy and Power Analysis of Computers

    OpenAIRE

    El Mehdi Diouria, Mohammed; Dolz Zaragozá, Manuel Francisco; Glückc, Olivier; Lefèvre, Laurent; Alonso, Pedro; Catalán Pallarés, Sandra; Mayo, Rafael; Quintana Ortí, Enrique S.

    2014-01-01

    Large-scale distributed systems (e.g., datacenters, HPC systems, clouds, large-scale networks, etc.) consume and will continue to consume enormous amounts of energy. Therefore, accurately monitoring the power dissipation and energy consumption of these systems is increasingly unavoidable. The main novelty of this contribution is the analysis and evaluation of different external and internal power monitoring devices tested on two different computing systems, a server and a desktop machine. Furthermore, we prov...

  5. Reducing power consumption during execution of an application on a plurality of compute nodes

    Science.gov (United States)

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-06-05

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: executing, by each compute node, an application, the application including power consumption directives corresponding to one or more portions of the application; identifying, by each compute node, the power consumption directives included within the application during execution of the portions of the application corresponding to those identified power consumption directives; and reducing power, by each compute node, to one or more components of that compute node according to the identified power consumption directives during execution of the portions of the application corresponding to those identified power consumption directives.
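    The directive mechanism can be pictured as scoped power hints that each compute node honors while the corresponding portion of the application executes. The sketch below uses an invented directive form (a Python context manager over a mock power-state table); the patent does not prescribe this interface.

```python
from contextlib import contextmanager

# Invented directive form: a scoped hint that lowers one component's power
# state while the corresponding portion of the application executes.
POWER_STATE = {"cpu": "full", "network": "full"}

@contextmanager
def power_directive(component, level):
    previous = POWER_STATE[component]
    POWER_STATE[component] = level          # reduce power for this component
    try:
        yield
    finally:
        POWER_STATE[component] = previous   # restore once the region ends

with power_directive("network", "low"):
    # compute-only phase: the network interface can be slowed down here
    print("inside directive:", POWER_STATE)
print("after directive: ", POWER_STATE)
```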

  6. A high performance, low power computational platform for complex sensing operations in smart cities

    KAUST Repository

    Jiang, Jiming; Claudel, Christian

    2017-01-01

    This paper presents a new wireless platform designed for an integrated traffic/flash flood monitoring system. The sensor platform is built around a 32-bit ARM Cortex M4 microcontroller and a 2.4 GHz 802.15.4 ISM compliant radio module. It can be interfaced with fixed traffic sensors, or receive data from vehicle transponders. This platform is specifically designed for solar-powered, low bandwidth, high computational performance wireless sensor network applications. A self-recovering unit is designed to increase reliability and allow periodic hard resets, an essential requirement for sensor networks. A radio monitoring circuitry is proposed to monitor incoming and outgoing transmissions, simplifying software debugging. We illustrate the performance of this wireless sensor platform on complex problems arising in smart cities, such as traffic flow monitoring, machine-learning-based flash flood monitoring or Kalman-filter based vehicle trajectory estimation. All design files have been uploaded and shared in an open science framework, and can be accessed from [1]. The hardware design is under CERN Open Hardware License v1.2.

  7. A high performance, low power computational platform for complex sensing operations in smart cities

    KAUST Repository

    Jiang, Jiming

    2017-02-02

    This paper presents a new wireless platform designed for an integrated traffic/flash flood monitoring system. The sensor platform is built around a 32-bit ARM Cortex M4 microcontroller and a 2.4 GHz 802.15.4 ISM compliant radio module. It can be interfaced with fixed traffic sensors, or receive data from vehicle transponders. This platform is specifically designed for solar-powered, low bandwidth, high computational performance wireless sensor network applications. A self-recovering unit is designed to increase reliability and allow periodic hard resets, an essential requirement for sensor networks. A radio monitoring circuitry is proposed to monitor incoming and outgoing transmissions, simplifying software debugging. We illustrate the performance of this wireless sensor platform on complex problems arising in smart cities, such as traffic flow monitoring, machine-learning-based flash flood monitoring or Kalman-filter based vehicle trajectory estimation. All design files have been uploaded and shared in an open science framework, and can be accessed from [1]. The hardware design is under CERN Open Hardware License v1.2.

  8. Interactive simulation of nuclear power systems using a dedicated minicomputer - computer graphics facility

    International Nuclear Information System (INIS)

    Tye, C.; Sezgen, A.O.

    1980-01-01

    The design of control systems and operational procedures for large-scale nuclear power plant poses a difficult optimization problem requiring a lot of computational effort. Plant dynamic simulation using digital minicomputers offers the prospect of relatively low-cost computing and, when combined with graphical input/output, provides a powerful tool for studying such problems. The paper discusses the results obtained from a simulation study carried out at the Computer Graphics Unit of the University of Manchester using a typical station control model for an Advanced Gas Cooled reactor. Particular emphasis is placed on the use of computer graphics for information display, parameter and control system optimization, and techniques for using graphical input to define and/or modify the control system topology. Experience gained from this study has shown that a relatively modest minicomputer system can be used for simulating large-scale dynamic systems and that highly interactive computer graphics can be used to advantage to relieve the designer of many of the tedious aspects of simulation, leaving him free to concentrate on the more creative aspects of his work. (author)

  9. WORK SYSTEM ANALYSIS OF POWER SUPPLY IN OPTIMIZING ELECTRICITY ON PERSONAL COMPUTER (PC)

    Directory of Open Access Journals (Sweden)

    Sudarmaji Sudarmaji

    2017-12-01

    The power supply is the energy source that allows a computer to operate. It converts 110 V, 60 Hz or 220 V, 50 Hz AC mains power into DC voltages of +3.3 V, +5 V and +12 V. The power supply must deliver a good, stable DC supply so that the system can run well. Some devices run on voltages supplied by onboard voltage regulators; for example, RIMM memory modules require 2.5 volts while AGP cards require 1.5 volts, both supplied by regulators on the motherboard. In addition to supplying power, the power supply prevents the computer from starting until its output voltages are within the specified range. Power Good is a special test signal sent to the motherboard indicating that the supply is active, usually marked by a green light when the power button is pressed. The current delivered by the power supply is direct current (DC), with rated outputs of 200, 250, 300, 350, 400 and up to 600 watts. Computers with Intel Pentium 4 or later processors use power supplies rated from 380 to 450 watts. Keywords: Power Supply, Computer, DC, Power Good, and volt

  10. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  11. Modeling and numerical techniques for high-speed digital simulation of nuclear power plants

    International Nuclear Information System (INIS)

    Wulff, W.; Cheng, H.S.; Mallen, A.N.

    1987-01-01

    Conventional computing methods are contrasted with newly developed high-speed and low-cost computing techniques for simulating normal and accidental transients in nuclear power plants. Six principles are formulated for cost-effective high-fidelity simulation with emphasis on modeling of transient two-phase flow coolant dynamics in nuclear reactors. Available computing architectures are characterized. It is shown that the combination of the newly developed modeling and computing principles with the use of existing special-purpose peripheral processors is capable of achieving low-cost and high-speed simulation with high-fidelity and outstanding user convenience, suitable for detailed reactor plant response analyses

  12. Dynamic stability calculations for power grids employing a parallel computer

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, K

    1982-06-01

    The aim of dynamic contingency calculations in power systems is to estimate the effects of assumed disturbances, such as loss of generation. Due to the large dimensions of the problem these simulations require considerable computing time and costs, with the result that they are at present only used at the planning stage but not for routine checks in power control stations. In view of the homogeneity of the problem, where a multitude of equal generator models, having different parameters, are to be integrated simultaneously, the use of a parallel computer looks very attractive. The results of this study employing a prototype parallel computer (SMS 201) are presented. It consists of up to 128 equal microcomputers bus-connected to a control computer. Each of the modules is programmed to simulate a node of the power grid. Generators with their associated control are represented by models of 13 states each. Passive nodes are complemented by 'phantom' generators, so that the whole power grid is homogeneous, thus removing the need for load-flow iterations. Programming of the microcomputers is essentially performed in FORTRAN.
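
    The one-node-per-processor decomposition described above can be sketched as follows. The classical swing-equation generator model, its parameters and the use of a Python process pool are illustrative assumptions standing in for the SMS 201's FORTRAN-programmed microcomputers; the inter-node network coupling handled by the real system is omitted for brevity.

        # Sketch: each worker integrates one generator node model over a time interval,
        # mirroring the one-node-per-microcomputer decomposition described above.
        # The swing-equation model and all parameters are illustrative assumptions.
        from multiprocessing import Pool
        import math

        def integrate_node(args):
            """Advance one generator node (delta, omega) by explicit Euler over [0, t_end]."""
            delta, omega, p_mech, p_max, inertia, damping, dt, t_end = args
            steps = int(t_end / dt)
            for _ in range(steps):
                p_elec = p_max * math.sin(delta)
                d_omega = (p_mech - p_elec - damping * omega) / inertia
                delta += omega * dt
                omega += d_omega * dt
            return delta, omega

        if __name__ == "__main__":
            # 128 nodes with slightly different parameters, integrated in parallel.
            nodes = [(0.1, 0.0, 0.8, 1.2 + 0.001 * i, 6.0, 0.05, 0.001, 1.0)
                     for i in range(128)]
            with Pool() as pool:
                results = pool.map(integrate_node, nodes)
            print("node 0 final (delta, omega):", results[0])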

  13. Energy Use and Power Levels in New Monitors and Personal Computers; TOPICAL

    International Nuclear Information System (INIS)

    Roberson, Judy A.; Homan, Gregory K.; Mahajan, Akshay; Nordman, Bruce; Webber, Carrie A.; Brown, Richard E.; McWhinney, Marla; Koomey, Jonathan G.

    2002-01-01

    Our research was conducted in support of the EPA ENERGY STAR Office Equipment program, whose goal is to reduce the amount of electricity consumed by office equipment in the U.S. The most energy-efficient models in each office equipment category are eligible for the ENERGY STAR label, which consumers can use to identify and select efficient products. As the efficiency of each category improves over time, the ENERGY STAR criteria need to be revised accordingly. The purpose of this study was to provide reliable data on the energy consumption of the newest personal computers and monitors that the EPA can use to evaluate revisions to current ENERGY STAR criteria as well as to improve the accuracy of ENERGY STAR program savings estimates. We report the results of measuring the power consumption and power management capabilities of a sample of new monitors and computers. These results will be used to improve estimates of program energy savings and carbon emission reductions, and to inform revisions of the ENERGY STAR criteria for these products. Our sample consists of 35 monitors and 26 computers manufactured between July 2000 and October 2001; it includes cathode ray tube (CRT) and liquid crystal display (LCD) monitors, Macintosh and Intel-architecture computers, desktop and laptop computers, and integrated computer systems, in which power consumption of the computer and monitor cannot be measured separately. For each machine we measured power consumption when off, on, and in each low-power level. We identify trends in and opportunities to reduce power consumption in new personal computers and monitors. Our results include a trend among monitor manufacturers to provide a single very low low-power level, well below the current ENERGY STAR criteria for sleep power consumption. These very low sleep power results mean that energy consumed when monitors are off or in active use has become more important in terms of contribution to the overall unit energy consumption (UEC

  14. Problem-Oriented Simulation Packages and Computational Infrastructure for Numerical Studies of Powerful Gyrotrons

    International Nuclear Information System (INIS)

    Damyanova, M; Sabchevski, S; Vasileva, E; Balabanova, E; Zhelyazkov, I; Dankov, P; Malinov, P

    2016-01-01

    Powerful gyrotrons are necessary as sources of strong microwaves for electron cyclotron resonance heating (ECRH) and electron cyclotron current drive (ECCD) of magnetically confined plasmas in various reactors (most notably ITER) for controlled thermonuclear fusion. Adequate physical models and efficient problem-oriented software packages are essential tools for numerical studies, analysis, optimization and computer-aided design (CAD) of such high-performance gyrotrons operating in a CW mode and delivering output power of the order of 1-2 MW. In this report we present the current status of our simulation tools (physical models, numerical codes, pre- and post-processing programs, etc.) as well as the computational infrastructure on which they are being developed, maintained and executed. (paper)

  15. High fidelity thermal-hydraulic analysis using CFD and massively parallel computers

    International Nuclear Information System (INIS)

    Weber, D.P.; Wei, T.Y.C.; Brewster, R.A.; Rock, Daniel T.; Rizwan-uddin

    2000-01-01

    Thermal-hydraulic analyses play an important role in design and reload analysis of nuclear power plants. These analyses have historically relied on early generation computational fluid dynamics capabilities, originally developed in the 1960s and 1970s. Over the last twenty years, however, dramatic improvements in both computational fluid dynamics codes in the commercial sector and in computing power have taken place. These developments offer the possibility of performing large scale, high fidelity, core thermal hydraulics analysis. Such analyses will allow a determination of the conservatism employed in traditional design approaches and possibly justify the operation of nuclear power systems at higher powers without compromising safety margins. The objective of this work is to demonstrate such a large scale analysis approach using a state of the art CFD code, STAR-CD, and the computing power of massively parallel computers, provided by IBM. A high fidelity representation of a current generation PWR was analyzed with the STAR-CD CFD code and the results were compared to traditional analyses based on the VIPRE code. Current design methodology typically involves a simplified representation of the assemblies, where a single average pin is used in each assembly to determine the hot assembly from a whole core analysis. After determining this assembly, increased refinement is used in the hot assembly, and possibly some of its neighbors, to refine the analysis for purposes of calculating DNBR. This latter calculation is performed with sub-channel codes such as VIPRE. The modeling simplifications that are used involve the approximate treatment of surrounding assemblies and coarse representation of the hot assembly, where the subchannel is the lowest level of discretization. In the high fidelity analysis performed in this study, both restrictions have been removed. Within the hot assembly, several hundred thousand to several million computational zones have been used, to

  16. Computer network for electric power control systems (Chubu Electric Power Co., Inc. power system control computer network)

    Energy Technology Data Exchange (ETDEWEB)

    Tsuneizumi, T. (Chubu Electric Power Co. Inc., Nagoya (Japan)); Shimomura, S.; Miyamura, N. (Fuji Electric Co. Ltd., Tokyo (Japan))

    1992-06-03

    A computer network for electric power control systems was developed based on the Open Systems Interconnection (OSI) international standard for communication protocols. In structuring the OSI network, the session layer is accessed directly by the operation functions when high-speed, small-volume information is transmitted, while file transfer, access and control functions, which collectively transfer large volumes of data, are applied when low-speed, large-volume information is transmitted. A verification test of the real-time computer network (RCN) implementation specification was conducted on a verification model using a minicomputer, and the results satisfied practical performance requirements. As application interfaces, kernel, health-check and dual-route transmission functions were provided for connection control, together with a transmission verification function and a late-arrival discard function. For the system implementation, a dualized communication server (CS) structure was adopted; the hardware may either incorporate the CS function in a host computer or install it as a separate system. 5 figs., 6 tabs.

  17. Analysis and control of high power synchronous rectifier

    Energy Technology Data Exchange (ETDEWEB)

    Singh Tejinder.

    1993-01-01

    The description, steady-state/dynamic analysis and control design of a high-power synchronous rectifier are presented. The proposed rectifier system exploits selective harmonic elimination modulation techniques to minimize filtering requirements and overcomes the dc voltage limitations of prior-art equipment. A detailed derivation of the optimum pulse width modulation switching patterns in the low-frequency range for high-power applications is presented. A general mathematical model of the rectifier is established which is non-linear and time-invariant. Reference-frame transformation and small-signal linearization techniques are used to obtain closed-form solutions from the mathematical model. The modelling procedure is verified by computer simulation. The closed-loop design of the synchronous rectifier based on a phase and amplitude control strategy is investigated, and the transfer functions derived from this analysis are used for the design of the regulators. The steady-state and dynamic results predicted by computer simulation are verified with PECAN. A systematic design procedure is developed and a detailed design example of a 1 MVA rectifier system is presented. 23 refs., 33 figs.

  18. Advanced Output Coupling for High Power Gyrotrons

    Energy Technology Data Exchange (ETDEWEB)

    Read, Michael [Calabazas Creek Research, Inc., San Mateo, CA (United States); Ives, Robert Lawrence [Calabazas Creek Research, Inc., San Mateo, CA (United States); Marsden, David [Calabazas Creek Research, Inc., San Mateo, CA (United States); Collins, George [Calabazas Creek Research, Inc., San Mateo, CA (United States); Temkin, Richard [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Guss, William [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Lohr, John [General Atomics, La Jolla, CA (United States); Neilson, Jeffrey [Lexam Research, Redwood City, CA (United States); Bui, Thuc [Calabazas Creek Research, Inc., San Mateo, CA (United States)

    2016-11-28

    The Phase II program developed an internal RF coupler that transforms the whispering gallery RF mode produced in gyrotron cavities to an HE11 waveguide mode propagating in corrugated waveguide. This power is extracted from the vacuum using a broadband, chemical vapor deposited (CVD) diamond, Brewster angle window capable of transmitting more than 1.5 MW CW of RF power over a broad range of frequencies. This coupling system eliminates the Mirror Optical Units now required to externally couple Gaussian output power into corrugated waveguide, significantly reducing system cost and increasing efficiency. The program simulated the performance using a broad range of advanced computer codes to optimize the design. Both a direct coupler and Brewster angle window were built and tested at low and high power. Test results confirmed the performance of both devices and demonstrated they are capable of achieving the required performance for scientific, defense, industrial, and medical applications.

  19. Design of EAST LHCD high power supply feedback control system based on PLC

    International Nuclear Information System (INIS)

    Hu Huaichuan; Shan Jiafang

    2009-01-01

    The design of the EAST LHCD -35 kV/5.6 MW high-power-supply feedback control system based on a PLC is described. An industrial computer and a PLC are used to control the high-power supply. A PID algorithm is adopted to achieve feedback control of the power supply voltage. The software runs on the QNX real-time operating system. The experimental results demonstrate that the feedback control system has good control properties and reliable protective properties. (authors)
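
    A discrete PID voltage loop of the kind described can be sketched as follows. The gains, sample time, limits and the read/write interface functions are assumptions for illustration only, not the EAST LHCD PLC implementation.

        # Minimal discrete PID loop for power-supply voltage regulation.
        # Gains, limits and the read/write functions are illustrative assumptions.
        class PID:
            def __init__(self, kp, ki, kd, dt, out_min, out_max):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.out_min, self.out_max = out_min, out_max
                self.integral = 0.0
                self.prev_error = 0.0

            def update(self, setpoint, measured):
                error = setpoint - measured
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                out = self.kp * error + self.ki * self.integral + self.kd * derivative
                # Clamp output and apply a simple anti-windup back-off.
                if out > self.out_max:
                    out = self.out_max
                    self.integral -= error * self.dt
                elif out < self.out_min:
                    out = self.out_min
                    self.integral -= error * self.dt
                return out

        pid = PID(kp=0.8, ki=2.0, kd=0.0, dt=0.01, out_min=0.0, out_max=100.0)

        def read_voltage_kv():          # hypothetical sensor read, voltage magnitude in kV
            return 30.0

        def write_control_percent(u):   # hypothetical actuator command, 0..100 %
            pass

        setpoint_kv = 35.0
        for _ in range(10):             # ten cycles of the 100 Hz control loop
            u = pid.update(setpoint_kv, read_voltage_kv())
            write_control_percent(u)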

  20. Integrated State Estimation and Contingency Analysis Software Implementation using High Performance Computing Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yousu; Glaesemann, Kurt R.; Rice, Mark J.; Huang, Zhenyu

    2015-12-31

    Power system simulation tools are traditionally developed in sequential mode and codes are optimized for single core computing only. However, the increasing complexity in the power grid models requires more intensive computation. The traditional simulation tools will soon not be able to meet the grid operation requirements. Therefore, power system simulation tools need to evolve accordingly to provide faster and better results for grid operations. This paper presents an integrated state estimation and contingency analysis software implementation using high performance computing techniques. The software is able to solve large size state estimation problems within one second and achieve a near-linear speedup of 9,800 with 10,000 cores for contingency analysis application. The performance evaluation is presented to show its effectiveness.
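
    For orientation, the reported speedup corresponds to a parallel efficiency of roughly 98% under the standard definition (this check is ours, not the paper's):

        E = S / p = 9,800 / 10,000 = 0.98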

  1. High performance computing in science and engineering Garching/Munich 2016

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Siegfried; Bode, Arndt; Bruechle, Helmut; Brehm, Matthias (eds.)

    2016-11-01

    Computer simulations are the well-established third pillar of natural sciences along with theory and experimentation. Particularly high performance computing is growing fast and constantly demands more and more powerful machines. To keep pace with this development, in spring 2015, the Leibniz Supercomputing Centre installed the high performance computing system SuperMUC Phase 2, only three years after the inauguration of its sibling SuperMUC Phase 1. Thereby, the compute capabilities were more than doubled. This book covers the time-frame June 2014 until June 2016. Readers will find many examples of outstanding research in the more than 130 projects that are covered in this book, with each one of these projects using at least 4 million core-hours on SuperMUC. The largest scientific communities using SuperMUC in the last two years were computational fluid dynamics simulations, chemistry and material sciences, astrophysics, and life sciences.

  2. High Efficiency Power Converter for Low Voltage High Power Applications

    DEFF Research Database (Denmark)

    Nymand, Morten

    The topic of this thesis is the design of high-efficiency power electronic dc-to-dc converters for high-power, low-input-voltage to high-output-voltage applications. These converters are increasingly required for emerging sustainable energy systems such as fuel cell, battery or photovoltaic based...

  3. A new computational method for reactive power market clearing

    International Nuclear Information System (INIS)

    Zhang, T.; Elkasrawy, A.; Venkatesh, B.

    2009-01-01

    After deregulation of electricity markets, ancillary services such as reactive power supply are priced separately. However, unlike real power supply, procedures for costing and pricing reactive power supply are still evolving and spot markets for reactive power do not exist as of now. Further, traditional formulations proposed for clearing reactive power markets use a non-linear mixed integer programming formulation that is difficult to solve. This paper proposes a new reactive power supply market clearing scheme. The novelty of this formulation lies in a pricing scheme that rewards transformers for tap shifting while participating in this market. The proposed model is a non-linear mixed integer programming problem. A significant portion of the manuscript is devoted to the development of a new successive mixed integer linear programming (MILP) technique to solve this formulation. The successive MILP method is computationally robust and fast. The IEEE 6-bus and 300-bus systems are used to test the proposed method, demonstrating its computational speed and rigor. (author)

  4. Chaos in high-power high-frequency gyrotrons

    International Nuclear Information System (INIS)

    Airila, M.

    2004-01-01

    Gyrotron interaction is a complex nonlinear dynamical process, which may turn chaotic in certain circumstances. The emergence of chaos renders dynamical systems unpredictable and causes bandwidth broadening of signals. Such effects would jeopardize the prospect of advanced gyrotrons in fusion. Therefore, it is important to be aware of the possibility of chaos in gyrotrons. There are three different chaos scenarios closely related to the development of high-power gyrotrons: First, the onset of chaos in electron trajectories would lead to difficulties in the design and efficient operation of depressed potential collectors, which are used for efficiency enhancement. Second, the radio-frequency signal could turn chaotic, decreasing the output power and the spectral purity of the output signal. As a result, mode conversion, transmission, and absorption efficiencies would be reduced. Third, spatio-temporal chaos in the resonator field structure can set a limit for the use of large-diameter interaction cavities and high-order TE modes (large azimuthal index) allowing higher generated power. In this thesis, the issues above are addressed with numerical modeling. It is found that chaos in electron residual energies is practically absent in the parameter region corresponding to high efficiency. Accordingly, depressed collectors are a feasible solution also in advanced high-power gyrotrons. A new method is presented for straightforward numerical solution of the one-dimensional self-consistent time-dependent gyrotron equations, and the method is generalized to two dimensions. In 1D, a chart of gyrotron oscillations is calculated. It is shown that the regions of stationary oscillations, automodulation, and chaos have a complicated topology in the plane of generalized gyrotron variables. The threshold current for chaotic oscillations exceeds typical operating currents by a factor of ten. However, reflection of the output signal may significantly lower the threshold. 2D

  5. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2009-01-01

    The discussions and plans on all scientific, advisory, and political levels to realize an even larger “European Supercomputer” in Germany, where the hardware costs alone will be hundreds of millions of Euro – much more than in the past – are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructure...

  6. Concept of a computer network architecture for complete automation of nuclear power plants

    International Nuclear Information System (INIS)

    Edwards, R.M.; Ray, A.

    1990-01-01

    The state of the art in automation of nuclear power plants has been largely limited to computerized data acquisition, monitoring, display, and recording of process signals. Complete automation of nuclear power plants, which would include plant operations, control, and management, fault diagnosis, and system reconfiguration with efficient and reliable man/machine interactions, has been projected as a realistic goal. This paper presents the concept of a computer network architecture that would use a high-speed optical data highway to integrate diverse, interacting, and spatially distributed functions that are essential for a fully automated nuclear power plant

  7. VLab: A Science Gateway for Distributed First Principles Calculations in Heterogeneous High Performance Computing Systems

    Science.gov (United States)

    da Silveira, Pedro Rodrigo Castro

    2014-01-01

    This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…

  8. Evolutionary Computing for Intelligent Power System Optimization and Control

    DEFF Research Database (Denmark)

    This new book focuses on how evolutionary computing techniques benefit engineering research and development tasks by converting practical problems of growing complexity into simple formulations, thus largely reducing development efforts. The book begins with an overview of optimization theory and modern evolutionary computing techniques, and goes on to cover specific applications of evolutionary computing to power system optimization and control problems....

  9. Computational Analysis of Powered Lift Augmentation for the LEAPTech Distributed Electric Propulsion Wing

    Science.gov (United States)

    Deere, Karen A.; Viken, Sally A.; Carter, Melissa B.; Viken, Jeffrey K.; Wiese, Michael R.; Farr, Norma L.

    2017-01-01

    A computational study of a distributed electric propulsion wing with a 40deg flap deflection has been completed using FUN3D. Two lift-augmentation power conditions were compared with the power-off configuration on the high-lift wing (40deg flap) at a 73 mph freestream flow and for a range of angles of attack from -5 degrees to 14 degrees. The computational study also included investigating the benefit of corotating versus counter-rotating propeller spin direction to powered-lift performance. The results indicate a large benefit in lift coefficient, over the entire range of angle of attack studied, by using corotating propellers that all spin counter to the wingtip vortex. For the landing condition, 73 mph, the unpowered 40deg flap configuration achieved a maximum lift coefficient of 2.3. With high-lift blowing the maximum lift coefficient increased to 5.61. Therefore, the lift augmentation is a factor of 2.4. Taking advantage of the fullspan lift augmentation at similar performance means that a wing powered with the distributed electric propulsion system requires only 42 percent of the wing area of the unpowered wing. This technology will allow wings to be 'cruise optimized', meaning that they will be able to fly closer to maximum lift over drag conditions at the design cruise speed of the aircraft.
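
    The quoted augmentation factor and wing-area fraction follow directly from the two maximum lift coefficients reported above (a simple check using only the numbers in the abstract):

        5.61 / 2.3 ≈ 2.44  (lift augmentation factor, quoted as 2.4)
        2.3 / 5.61 ≈ 0.41  (required wing-area fraction, in line with the reported figure of about 42 percent)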

  10. High-Degree Neurons Feed Cortical Computations.

    Directory of Open Access Journals (Sweden)

    Nicholas M Timme

    2016-05-01

    Full Text Available Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to
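
    A plain-count estimator of the transfer entropy used above can be sketched in a few lines. The history length of one and the toy spike trains are illustrative assumptions, and this is not the analysis code used in the study (which also applies the partial information decomposition).

        # Plug-in estimator of transfer entropy TE(X -> Y) for binary spike trains,
        # with history length 1. Toy series only; illustrative, not the study's code.
        from collections import Counter
        from math import log2

        def transfer_entropy(x, y):
            """TE from source series x to target series y (equal-length 0/1 lists)."""
            triples = Counter()    # counts of (y_next, y_now, x_now)
            pairs_yx = Counter()   # counts of (y_now, x_now)
            pairs_yy = Counter()   # counts of (y_next, y_now)
            singles_y = Counter()  # counts of (y_now,)
            n = len(y) - 1
            for t in range(n):
                triples[(y[t + 1], y[t], x[t])] += 1
                pairs_yx[(y[t], x[t])] += 1
                pairs_yy[(y[t + 1], y[t])] += 1
                singles_y[(y[t],)] += 1
            te = 0.0
            for (y_next, y_now, x_now), c in triples.items():
                p_joint = c / n
                p_cond_full = c / pairs_yx[(y_now, x_now)]
                p_cond_hist = pairs_yy[(y_next, y_now)] / singles_y[(y_now,)]
                te += p_joint * log2(p_cond_full / p_cond_hist)
            return te

        # Target y copies source x with a one-step delay, so TE(x -> y) is large
        # while TE(y -> x) should be much smaller.
        x = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1]
        y = [0] + x[:-1]
        print("TE(x -> y):", round(transfer_entropy(x, y), 3))
        print("TE(y -> x):", round(transfer_entropy(y, x), 3))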

  11. V-1 nuclear power plant standby RPP-16S computer software

    International Nuclear Information System (INIS)

    Suchy, R.

    1988-01-01

    The software structure and the functions of the program modules of the RPP-16S standby computer, which is part of the information system of the V-1 Bohunice nuclear power plant, are described. The multitasking AMOS operating system is used to organize the programs in the computer. The program modules are classified into five groups by function: modules for the periodic collection of values and for the measurement of process quantities for both nuclear power plant units; for the primary processing of the values; for monitoring exceedance of preset limits; and for unit operators' communication with the computer. The fifth group consists of user program modules. The standby computer software was tested under the actual operating conditions of the V-1 power plant. The results showed it operated correctly; minor shortcomings were removed. (Z.M.). 1 fig

  12. A solar powered wireless computer mouse: industrial design concepts

    NARCIS (Netherlands)

    Reich, N.H.; Veefkind, M.; van Sark, W.G.J.H.M.; Alsema, E.A.; Turkenburg, W.C.; Silvester, S.

    2009-01-01

    A solar powered wireless computer mouse (SPM) was chosen to serve as a case study for the evaluation and optimization of industrial design processes of photovoltaic (PV) powered consumer systems. As the design process requires expert knowledge in various technical fields, we assessed and compared

  13. Highly parallel machines and future of scientific computing

    International Nuclear Information System (INIS)

    Singh, G.S.

    1992-01-01

    The computing requirements of large-scale scientific computing have always been ahead of what state-of-the-art hardware could supply in the form of the supercomputers of the day, and for any single-processor system the limit on growth in computing power was recognized a few years back. Now, with the advent of parallel computing systems, the availability of machines with the required computing power seems a reality. In this paper the author tries to visualize the future of large-scale scientific computing in the penultimate decade of the present century. The author summarizes trends in parallel computers and emphasizes the need for a better programming environment and software tools for optimal performance. The author concludes the paper with a critique of parallel architectures, software tools and algorithms. (author). 10 refs., 2 tabs

  14. Use of computer codes to improve nuclear power plant operation

    International Nuclear Information System (INIS)

    Misak, J.; Polak, V.; Filo, J.; Gatas, J.

    1985-01-01

    For safety and economic reasons, the scope for carrying out experiments on operational nuclear power plants (NPPs) is very limited and any changes in technical equipment and operating parameters or conditions have to be supported by theoretical calculations. In the Nuclear Power Plant Scientific Research Institute (NIIAEhS), computer codes are systematically used to analyse actual operating events, assess safety aspects of changes in equipment and operating conditions, optimize the conditions, preparation and analysis of NPP startup trials and review and amend operating instructions. In addition, calculation codes are gradually being introduced into power plant computer systems to perform real time processing of the parameters being measured. The paper describes a number of specific examples of the use of calculation codes for the thermohydraulic analysis of operating and accident conditions aimed at improving the operation of WWER-440 units at the Jaslovske Bohunice V-1 and V-2 nuclear power plants. These examples confirm that computer calculations are an effective way of solving operating problems and of further increasing the level of safety and economic efficiency of NPP operation. (author)

  15. High power klystrons for efficient reliable high power amplifiers

    Science.gov (United States)

    Levin, M.

    1980-11-01

    This report covers the design of reliable high efficiency, high power klystrons which may be used in both existing and proposed troposcatter radio systems. High Power (10 kW) klystron designs were generated in C-band (4.4 GHz to 5.0 GHz), S-band (2.5 GHz to 2.7 GHz), and L-band or UHF frequencies (755 MHz to 985 MHz). The tubes were designed for power supply compatibility and use with a vapor/liquid phase heat exchanger. Four (4) S-band tubes were developed in the course of this program along with two (2) matching focusing solenoids and two (2) heat exchangers. These tubes use five (5) tuners with counters which are attached to the focusing solenoids. A reliability mathematical model of the tube and heat exchanger system was also generated.

  16. Axial power difference control strategy and computer simulation for GNPS during stretch-out and power decrease

    International Nuclear Information System (INIS)

    Liao Yehong; Xiao Min; Li Xianfeng; Zhu Minhong

    2004-01-01

    Successful control of the axial power difference for PWR is crucial to nuclear safety. After analyzing various elements' effect on the axial power distribution, different axial power deviation control strategies have been proposed to comply with different power decrease scenarios. Application of the strategy to computer simulation shows that our prediction of axial power deviation evolution is comparable to the measurement value, and that our control strategy is effective

  17. Future trends in power plant process computer techniques

    International Nuclear Information System (INIS)

    Dettloff, K.

    1975-01-01

    The development of new process computer concepts has advanced in great steps in three areas: hardware, software, and the application concept. In hardware, new computers with new peripherals, e.g. colour layer equipment, have been developed. In software, a decisive step has been made in the area of automation software. Through these components, a step forward has also been made in integrating the process computer into the structure of the overall power plant control system. (orig./LH) [de

  18. Utilization of logistic computer programs in the power plant piping industry

    International Nuclear Information System (INIS)

    Motzel, E.

    1982-01-01

    Starting from the general situation of the power plant piping industry, the utilization of computer programs, and the particular complexity connected with project realisation, the necessity of using logistic computer programs, especially in the case of nuclear power plants, is explained. The term logistics and the logistic data are described. The practical use of such programs is shown using the example of the nuclear power plant KRB II, Gundremmingen, Block B/C. Planning, scheduling and supervision are carried out computer-aided by means of network techniques. Material management, prefabrication and installation, including the management of certificates for welding and testing activities, are likewise planned and controlled by computer programs. For the installed piping systems a complete erection work documentation is available, which also serves as the basis for billing the client. The budgeted costs are continuously controlled by means of a cost control program. Finally, further developments in computer-supported control of piping contracts are described with regard to software, hardware and the organisational structure. Furthermore, the concept of a self-supporting field computer is introduced for the first time. (orig.) [de

  19. Computing in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Watase, Yoshiyuki

    1991-09-15

    The increasingly important role played by computing and computers in high energy physics is displayed in the 'Computing in High Energy Physics' series of conferences, bringing together experts in different aspects of computing - physicists, computer scientists, and vendors.

  20. Utilizing the Double-Precision Floating-Point Computing Power of GPUs for RSA Acceleration

    Directory of Open Access Journals (Sweden)

    Jiankuo Dong

    2017-01-01

    Full Text Available Asymmetric cryptographic algorithm (e.g., RSA and Elliptic Curve Cryptography) implementations on Graphics Processing Units (GPUs) have been researched for over a decade. The basic idea of most previous contributions is to exploit the highly parallel GPU architecture and port the integer-based algorithms from general-purpose CPUs to GPUs to offer high performance. However, the great potential cryptographic computing power of GPUs, especially of the more powerful floating-point instructions, has in fact not been comprehensively investigated. In this paper, we fully exploit the floating-point computing power of GPUs through various designs, including a floating-point-based Montgomery multiplication/exponentiation algorithm and a Chinese Remainder Theorem (CRT) implementation on the GPU. For practical usage of the proposed algorithm, a new method is used to convert the input/output between octet strings and floating-point numbers, fully utilizing GPUs and further improving the overall performance by about 5%. The performance of RSA-2048/3072/4096 decryption on an NVIDIA GeForce GTX TITAN reaches 42,211/12,151/5,790 operations per second, respectively, which is 13 times the performance of the previous fastest floating-point-based implementation (published at Eurocrypt 2009). RSA-4096 decryption exceeds the existing fastest integer-based result by 23%.
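
    For orientation, the standard integer form of Montgomery multiplication and exponentiation, which the paper re-casts into floating-point limbs on the GPU, is sketched below; the small modulus and exponent are illustrative only, and the floating-point/CRT machinery of the paper is not reproduced here.

        # Reference (integer) Montgomery multiplication and exponentiation.
        # The paper accelerates this with floating-point limbs on GPUs; the small
        # odd modulus and exponent below are illustrative only.
        def montgomery_setup(n, r_bits):
            r = 1 << r_bits
            n_prime = (-pow(n, -1, r)) % r            # n * n' ≡ -1 (mod R)
            return r, n_prime

        def mont_mul(a, b, n, r_bits, r, n_prime):
            """Return a*b*R^-1 mod n, where R = 2^r_bits and a, b are in Montgomery form."""
            t = a * b
            m = ((t & (r - 1)) * n_prime) & (r - 1)   # m = (t mod R) * n' mod R
            u = (t + m * n) >> r_bits                 # exact division by R
            return u - n if u >= n else u

        def mont_pow(base, exp, n, r_bits=64):
            r, n_prime = montgomery_setup(n, r_bits)
            base_m = (base * r) % n                   # convert base to Montgomery form
            acc_m = r % n                             # Montgomery form of 1
            while exp:
                if exp & 1:
                    acc_m = mont_mul(acc_m, base_m, n, r_bits, r, n_prime)
                base_m = mont_mul(base_m, base_m, n, r_bits, r, n_prime)
                exp >>= 1
            return mont_mul(acc_m, 1, n, r_bits, r, n_prime)  # convert back

        n = 0xC96D191CF6F6AEA7                        # odd 64-bit modulus (illustrative)
        assert mont_pow(3, 65537, n) == pow(3, 65537, n)
        print("Montgomery exponentiation matches pow():", True)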

  1. The computer program system for structural design of nuclear power plants

    International Nuclear Information System (INIS)

    Aihara, S.; Atsumi, K.; Sasagawa, K.; Satoh, S.

    1979-01-01

    In recent years, the design of nuclear power plants has become more complex than in the past. The finite element method (FEM), applied for the analysis of nuclear power plants, especially requires more computer use. Recent computers have made remarkable progress, so that the manpower and time necessary for analysis in design work have been reduced considerably; instead, however, the volume of output to be arranged has increased tremendously. Therefore, a computer program system was developed to perform all of the processes, from data preparation to output arrangement and rebar evaluation. This report introduces the computer program system pertaining to the design flow of the Reactor Building. (orig.)

  2. Turning a $10 Computer into a Powerful DIY Data Logger

    Science.gov (United States)

    Schilperoort, B.

    2017-12-01

    Due to the rapid advance of consumer electronics, much more powerful and cheaper options are available for DIY projects. The $10 `Raspberry Pi Zero W' computer, with capabilities like WiFi, Bluetooth, HDMI video output, and large, cheap memory, can be used for data logging purposes. The computer has a range of input and output pins on the board, with which virtually every type of digital sensor communication is possible. With an extra component, analog measurements can also be made. An additional option is a camera, which can be connected straight to the board. However, due to its relatively high power consumption (0.5 - 0.7 Watt), the `Zero W' is not optimal for off-the-grid locations. For ease of use, the collected data can be downloaded over a local WiFi network using a smartphone or a laptop. No extra software or skills are needed: it is as simple as visiting a webpage and pressing download, making data collection a quick and easy task. With simple step-by-step instructions you can set up your own data logger to collect data from sensors ranging from simple temperature and water level sensors to sonic anemometers.
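
    A minimal logging loop of the kind described might look like the sketch below. The sensor-reading function, file path and sampling interval are illustrative assumptions rather than the authors' published instructions.

        # Minimal data-logger sketch for a Raspberry Pi Zero W (or any Linux board).
        # read_temperature_c() is a hypothetical placeholder for a real sensor driver;
        # the CSV path and 60 s interval are illustrative assumptions.
        import csv
        import os
        import random
        import time
        from datetime import datetime, timezone

        LOG_PATH = "measurements.csv"       # assumed location
        INTERVAL_S = 60                     # assumed sampling interval [s]

        def read_temperature_c():
            """Placeholder sensor read; replace with a real 1-Wire/I2C/SPI driver call."""
            return 20.0 + random.uniform(-0.5, 0.5)

        def main():
            new_file = not os.path.exists(LOG_PATH)
            with open(LOG_PATH, "a", newline="") as f:
                writer = csv.writer(f)
                if new_file:                # write the header once
                    writer.writerow(["timestamp_utc", "temperature_c"])
                while True:
                    writer.writerow([datetime.now(timezone.utc).isoformat(),
                                     round(read_temperature_c(), 3)])
                    f.flush()               # keep the log intact across hard resets
                    time.sleep(INTERVAL_S)

        if __name__ == "__main__":
            main()

    The collected CSV can then be fetched from a phone or laptop on the same WiFi network, for example by running python3 -m http.server in the log directory and opening the Pi's address in a browser.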

  3. Computing in high energy physics

    International Nuclear Information System (INIS)

    Watase, Yoshiyuki

    1991-01-01

    The increasingly important role played by computing and computers in high energy physics is displayed in the 'Computing in High Energy Physics' series of conferences, bringing together experts in different aspects of computing - physicists, computer scientists, and vendors

  4. Design and Characterization of High Power Targets for RIB Generation

    International Nuclear Information System (INIS)

    Zhang, Y.

    2001-01-01

    In this article, thermal modeling techniques are used to simulate ISOL targets irradiated with high power proton beams. Beam scattering effects, nuclear reactions and beam power deposition distributions in the target were computed with the Monte Carlo simulation code, GEANT4. The power density information was subsequently used as input to the finite element thermal analysis code, ANSYS, for extracting temperature distribution information for a variety of target materials. The principal objective of the studies was to evaluate techniques for more uniformly distributing beam deposited heat over the volumes of targets to levels compatible with their irradiation with the highest practical primary-beam power, and to use the preferred technique to design high power ISOL targets. The results suggest that radiation cooling, in combination, with primary beam manipulation, can be used to control temperatures in practically sized targets, to levels commensurate with irradiation with 1 GeV, 100 kW proton beams

  5. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    Science.gov (United States)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) Comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to meteorological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation systems); (3

  6. Fast Performance Computing Model for Smart Distributed Power Systems

    Directory of Open Access Journals (Sweden)

    Umair Younas

    2017-06-01

    Full Text Available Plug-in Electric Vehicles (PEVs) are becoming a more prominent solution than fossil-fuel car technology due to their significant role in Greenhouse Gas (GHG) reduction, flexible storage, and ancillary service provision as a Distributed Generation (DG) resource in Vehicle to Grid (V2G) regulation mode. However, large-scale penetration of PEVs and the growing demand of energy-intensive Data Centers (DCs) bring undesirably high load peaks in electricity demand and hence impose supply-demand imbalances and threaten the reliability of the wholesale and retail power markets. In order to overcome the aforementioned challenges, the proposed research considers a smart Distributed Power System (DPS) comprising conventional sources, renewable energy, V2G regulation, and flexible energy storage resources. Moreover, price- and incentive-based Demand Response (DR) programs are implemented to sustain the balance between net demand and available generating resources in the DPS. In addition, we adopt a novel strategy to implement the computationally intensive jobs of the proposed DPS model, including incoming load profiles, V2G regulation, battery State of Charge (SOC) indication, and fast computation in a decision-based automated DR algorithm, using the Fast Performance Computing resources of DCs. In response, the DPS provides economical and stable power to DCs under strict power quality constraints. Finally, the improved results are verified using a case study of ISO California integrated with hybrid generation.

  7. Development of computer-aided design and production system for nuclear power plant

    International Nuclear Information System (INIS)

    Ishii, Masanori

    1983-01-01

    Technical requirements related to the design and production of nuclear power stations have tended to increase from the viewpoint of safety and reliability, and handling these requirements skillfully is indispensable for rationalizing design and production and for constructing highly reliable plants. Ishikawajima-Harima Heavy Industries Co., Ltd. has developed a computer-aided design data information and engineering system that performs interactive design and drawing; as a result, an integrated design-production system has been developed that consistently handles stress analysis, production design, production management and the output of data for numerically controlled machine tools. This paper mainly outlines the integrated system for plant design, centering on piping, as well as the computer system for the design of vessels and other equipment. The features of design work for nuclear power plants, the rationalization of the design and production management of piping and vessels, and the application of the CAD system to other general equipment and improvement works are reported. This system is a powerful means of meeting requirements for higher quality and lower cost. (Kako, I.)

  8. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    Science.gov (United States)

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Cambridge, MA; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
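
    The mechanism summarized above can be sketched at application level as follows; the power-control hook is hypothetical, and the patented method operates in the system software rather than in user code. An MPI non-blocking barrier (assuming mpi4py is available) stands in here for the blocking operation whose start all nodes must reach.

        # Sketch: reduce power while waiting for all compute nodes to begin a
        # blocking (barrier-style) operation, then restore it once all have begun.
        # set_component_power() is a hypothetical hook (e.g. CPU frequency scaling).
        from mpi4py import MPI
        import time

        def set_component_power(level):
            """Hypothetical placeholder for lowering/raising node hardware power."""
            pass

        comm = MPI.COMM_WORLD

        # Each node reaches this point asynchronously and begins the blocking operation.
        request = comm.Ibarrier()      # non-blocking barrier: "this node has begun"
        set_component_power("low")     # reduce power while the other nodes catch up

        while not request.Test():      # poll until every node has begun
            time.sleep(0.001)

        set_component_power("normal")  # all nodes have begun: restore power
        # ... continue with the rest of the parallel application ...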

  9. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    Science.gov (United States)

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-01-10

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.

  10. Switching power converters medium and high power

    CERN Document Server

    Neacsu, Dorin O

    2013-01-01

    An examination of all of the multidisciplinary aspects of medium- and high-power converter systems, including basic power electronics, digital control and hardware, sensors, analog preprocessing of signals, protection devices and fault management, and pulse-width-modulation (PWM) algorithms, Switching Power Converters: Medium and High Power, Second Edition discusses the actual use of industrial technology and its related subassemblies and components, covering facets of implementation otherwise overlooked by theoretical textbooks. The updated Second Edition contains many new figures, as well as

  11. Careful determination of inservice inspection of piping by computer analysis in nuclear power plant

    International Nuclear Information System (INIS)

    Lim, H. T.; Lee, S. L.; Lee, J. P.; Kim, B. C.

    1992-01-01

    Stress analysis of the pressurizer surge line has been performed using the computer program ANSYS in order to predict the possibility of crack generation due to thermal stratification phenomena in pipes connected to the reactor coolant system of nuclear power plants. The area most vulnerable to crack generation has been identified by fatigue analysis of the thermal stresses in the pressurizer surge line. Such results help to choose the locations requiring intensive attention during inservice inspection of nuclear power plants.

  12. The Plant-Window System: A framework for an integrated computing environment at advanced nuclear power plants

    International Nuclear Information System (INIS)

    Wood, R.T.; Mullens, J.A.; Naser, J.A.

    1997-01-01

    Power plant data, and the information that can be derived from it, provide the link to the plant through which the operations, maintenance and engineering staff understand and manage plant performance. The extensive use of computer technology in advanced reactor designs provides the opportunity to greatly expand the capability to obtain, analyze, and present data about the plant to station personnel. However, to support highly efficient and increasingly safe operation of nuclear power plants, it is necessary to transform the vast quantity of available data into clear, concise, and coherent information that can be readily accessed and used throughout the plant. This need can be met by an integrated computer workstation environment that provides the necessary information and software applications, in a manner that can be easily understood and used, to the proper users throughout the plant. As part of a Cooperative Research and Development Agreement with the Electric Power Research Institute, the Oak Ridge National Laboratory has developed functional requirements for a Plant-Wide Integrated Environment Distributed On Workstations (Plant-Window) System. The Plant-Window System (PWS) can serve the needs of operations, engineering, and maintenance personnel at nuclear power stations by providing integrated data and software applications within a common computing environment. The PWS requirements identify functional capabilities and provide guidelines for standardized hardware, software, and display interfaces so as to define a flexible computing environment for both current generation nuclear power plants and advanced reactor designs

  13. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that supports the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enable accurate climate projections. NCI makes available 10+ PB major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  14. EMI Evaluation on Wireless Computer Devices in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Lee, Jae Ki; JI Yeong Hwa; Sung, Chan Ho

    2011-01-01

    Wireless computer devices, for example mice and keyboards, are widely used in various industries. However, I and C (instrumentation and control) equipment in nuclear power plants is very susceptible to EMI (electromagnetic interference), and there are concerns regarding EMI-induced transients caused by wireless computer devices, which emit electromagnetic waves for communication. In this paper, industrial practices and nuclear-related international standards are investigated to verify the requirements on wireless devices. In addition, actual measurement and evaluation of the EMI intensity of some commercially available wireless devices are performed to verify their compatibility in terms of EMI. Finally, we suggest an appropriate method of using wireless computer devices in nuclear power plant control rooms to improve the working environment of operators

  15. Reducing power consumption while performing collective operations on a plurality of compute nodes

    Science.gov (United States)

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2011-10-18

    Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.
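
    The selection step can be pictured with a short Python sketch; the profile table, implementation names and energy figures below are hypothetical and not taken from the patent:

        # Hypothetical per-call energy estimates: collective type -> {implementation: joules}.
        POWER_PROFILE = {
            "allreduce": {"recursive_doubling": 2.4, "ring": 1.8, "tree": 2.1},
            "broadcast": {"binomial_tree": 0.9, "scatter_allgather": 1.2},
        }

        def select_collective(collective_type):
            """Pick the implementation with the lowest estimated energy cost."""
            candidates = POWER_PROFILE[collective_type]
            return min(candidates, key=candidates.get)

        print(select_collective("allreduce"))   # -> "ring"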

  16. Computing in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Sarah; Devenish, Robin [Nuclear Physics Laboratory, Oxford University (United Kingdom)

    1989-07-15

    Computing in high energy physics has changed over the years from being something one did on a slide-rule, through early computers, then a necessary evil to the position today where computers permeate all aspects of the subject from control of the apparatus to theoretical lattice gauge calculations. The state of the art, as well as new trends and hopes, were reflected in this year's 'Computing In High Energy Physics' conference held in the dreamy setting of Oxford's spires. The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions. In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference's aim – 'to bring together high energy physicists and computer scientists'.

  17. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both the private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project aim to address all of these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also performed testing of HA2lloc in a simulation environment and found that the approach is capable of preventing common memory vulnerabilities.

  18. High-level language computer architecture

    CERN Document Server

    Chu, Yaohan

    1975-01-01

    High-Level Language Computer Architecture offers a tutorial on high-level language computer architecture, including von Neumann architecture and syntax-oriented architecture as well as direct and indirect execution architecture. Design concepts of Japanese-language data processing systems are discussed, along with the architecture of stack machines and the SYMBOL computer system. The conceptual design of a direct high-level language processor is also described.Comprised of seven chapters, this book first presents a classification of high-level language computer architecture according to the pr

  19. Computational Error Estimate for the Power Series Solution of Odes ...

    African Journals Online (AJOL)

    This paper compares the error estimation of the power series solution with the recursive Tau method for solving ordinary differential equations. From the computational viewpoint, the power series using zeros of the Chebyshev polynomial is effective, accurate and easy to use. Keywords: Lanczos Tau method, Chebyshev polynomial, ...
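
    As a hedged illustration of why evaluation at Chebyshev zeros is attractive (this is not the Tau-method code of the paper), the Python sketch below compares a truncated power (Taylor) series of exp(x) with a polynomial of the same degree fitted at the Chebyshev zeros on [-1, 1]:

        import numpy as np
        from numpy.polynomial import chebyshev as C
        from math import factorial

        degree = 5
        x = np.linspace(-1.0, 1.0, 1001)

        # Truncated power series of exp about 0: sum_k x**k / k!
        taylor = sum(x**k / factorial(k) for k in range(degree + 1))

        # Polynomial of the same degree fitted at the zeros of T_{degree+1}.
        nodes = C.chebpts1(degree + 1)
        cheb_approx = C.chebval(x, C.chebfit(nodes, np.exp(nodes), degree))

        print("max error, power series :", np.max(np.abs(np.exp(x) - taylor)))
        print("max error, Chebyshev fit:", np.max(np.abs(np.exp(x) - cheb_approx)))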

  20. Cloud Computing and the Power to Choose

    Science.gov (United States)

    Bristow, Rob; Dodds, Ted; Northam, Richard; Plugge, Leo

    2010-01-01

    Some of the most significant changes in information technology are those that have given the individual user greater power to choose. The first of these changes was the development of the personal computer. The PC liberated the individual user from the limitations of the mainframe and minicomputers and from the rules and regulations of centralized…

  1. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  2. Simple, parallel, high-performance virtual machines for extreme computations

    International Nuclear Information System (INIS)

    Chokoufe Nejad, Bijan; Ohl, Thorsten; Reuter, Jurgen

    2014-11-01

    We introduce a high-performance virtual machine (VM) written in a numerically fast language like Fortran or C to evaluate very large expressions. We discuss the general concept of how to perform computations in terms of a VM and present specifically a VM that is able to compute tree-level cross sections for any number of external legs, given the corresponding byte code from the optimal matrix element generator, O'Mega. Furthermore, this approach makes it possible to formulate the parallel computation of a single phase space point in a simple and obvious way. We analyze the scaling behaviour with multiple threads as well as the benefits and drawbacks that are introduced with this method. Our implementation of a VM can run faster than the corresponding native, compiled code for certain processes and compilers, especially for very high multiplicities, and in general has runtimes of the same order of magnitude. By avoiding the tedious compile and link steps, which may fail for source code files of gigabyte sizes, new processes or complex higher order corrections that are currently out of reach could be evaluated with a VM given enough computing power.
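
    The idea of evaluating byte code in a VM rather than compiling generated source can be illustrated with a toy stack machine in Python; the real VM described above is written in a fast compiled language and evaluates matrix-element byte code from O'Mega, which this sketch does not attempt:

        def run(bytecode, memory):
            """bytecode: list of (op, arg) tuples; memory: dict of named inputs."""
            stack = []
            for op, arg in bytecode:
                if op == "LOAD":          # push a named input value
                    stack.append(memory[arg])
                elif op == "CONST":       # push a literal
                    stack.append(arg)
                elif op == "ADD":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a + b)
                elif op == "MUL":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a * b)
                else:
                    raise ValueError("unknown opcode: " + op)
            return stack.pop()

        # Evaluate (x + 2) * y without generating or compiling any source code.
        program = [("LOAD", "x"), ("CONST", 2), ("ADD", None), ("LOAD", "y"), ("MUL", None)]
        print(run(program, {"x": 3.0, "y": 4.0}))   # -> 20.0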

  3. Careful Determination of Inservice Inspection of piping by Computer Analysis in Nuclear Power Plant

    International Nuclear Information System (INIS)

    Lim, H. T.; Lee, S. L.; Lee, J. P.; Kim, B. C.

    1992-01-01

    Stress analysis has been performed using the computer program ANSYS on the pressurizer surge line, in accordance with ASME Sec. III, in order to predict the possibility of fatigue failure due to thermal stratification phenomena in pipes connected to the reactor coolant system of nuclear power plants. Areas highly vulnerable to crack initiation have been identified by the analysis of fatigue due to thermal stress in the pressurizer surge line. This kind of result is helpful in choosing the locations requiring intensive attention during inservice inspection of nuclear power plants.

  4. Computing in high energy physics

    International Nuclear Information System (INIS)

    Smith, Sarah; Devenish, Robin

    1989-01-01

    Computing in high energy physics has changed over the years from being something one did on a slide-rule, through early computers, then a necessary evil to the position today where computers permeate all aspects of the subject from control of the apparatus to theoretical lattice gauge calculations. The state of the art, as well as new trends and hopes, were reflected in this year's 'Computing In High Energy Physics' conference held in the dreamy setting of Oxford's spires. The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions. In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference's aim – 'to bring together high energy physicists and computer scientists'

  5. Power Consumption Evaluation of Distributed Computing Network Considering Traffic Locality

    Science.gov (United States)

    Ogawa, Yukio; Hasegawa, Go; Murata, Masayuki

    When computing resources are consolidated in a few huge data centers, a massive amount of data is transferred to each data center over a wide area network (WAN). This results in increased power consumption in the WAN. A distributed computing network (DCN), such as a content delivery network, can reduce the traffic from/to the data center, thereby decreasing the power consumed in the WAN. In this paper, we focus on the energy-saving aspect of the DCN and evaluate its effectiveness, especially considering traffic locality, i.e., the amount of traffic related to the geographical vicinity. We first formulate the problem of optimizing the DCN power consumption and describe the DCN in detail. Then, numerical evaluations show that, when there is strong traffic locality and the router has ideal energy proportionality, the system's power consumption is reduced to about 50% of the power consumed in the case where a DCN is not used; moreover, this advantage becomes even larger (up to about 30%) when the data center is located farthest from the center of the network topology.

  6. High Powered Rocketry: Design, Construction, and Launching Experience and Analysis

    Science.gov (United States)

    Paulson, Pryce; Curtis, Jarret; Bartel, Evan; Cyr, Waycen Owens; Lamsal, Chiranjivi

    2018-01-01

    In this study, the nuts and bolts of designing and building a high powered rocket have been presented. A computer simulation program called RockSim was used to design the rocket. Simulation results are consistent with time variations of altitude, velocity, and acceleration obtained in the actual flight. The actual drag coefficient was determined…

  7. Low Power Design with High-Level Power Estimation and Power-Aware Synthesis

    CERN Document Server

    Ahuja, Sumit; Shukla, Sandeep Kumar

    2012-01-01

    Low-power ASIC/FPGA based designs are important due to the need for extended battery life, reduced form factor, and lower packaging and cooling costs for electronic devices. These products require fast turnaround time because of the increasing demand for handheld electronic devices such as cell-phones, PDAs and high performance machines for data centers. To achieve short time to market, design flows must facilitate a much shortened time-to-product requirement. High-level modeling, architectural exploration and direct synthesis of design from high level description enable this design process. This book presents novel research techniques, algorithms,methodologies and experimental results for high level power estimation and power aware high-level synthesis. Readers will learn to apply such techniques to enable design flows resulting in shorter time to market and successful low power ASIC/FPGA design. Integrates power estimation and reduction for high level synthesis, with low-power, high-level design; Shows spec...

  8. Analog computing

    CERN Document Server

    Ulmann, Bernd

    2013-01-01

    This book is a comprehensive introduction to analog computing. As most textbooks about this powerful computing paradigm date back to the 1960s and 1970s, it fills a void and forges a bridge from the early days of analog computing to future applications. The idea of analog computing is not new. In fact, this computing paradigm is nearly forgotten, although it offers a path to both high-speed and low-power computing, which are in even more demand now than they were back in the heyday of electronic analog computers.

  9. A personal computer code for seismic evaluations of nuclear power plant facilities

    International Nuclear Information System (INIS)

    Xu, J.; Graves, H.

    1991-01-01

    In the process of review and evaluation of licensing issues related to nuclear power plants, it is essential to understand the behavior of seismic loading, foundation and structural properties and their impact on the overall structural response. In most cases, such knowledge can be obtained by using simplified engineering models which, when properly implemented, capture the essential parameters describing the physics of the problem. Such models do not require execution on large computer systems and can be implemented through a personal computer (PC) based capability. Recognizing the need for a PC software package that can perform the structural response computations required for typical licensing reviews, the US Nuclear Regulatory Commission sponsored the development of the PC-operated computer software package CARES (Computer Analysis for Rapid Evaluation of Structures). This development was undertaken by Brookhaven National Laboratory (BNL) during FY 1988 and 1989. A wide range of computer programs and modeling approaches are often used to justify the safety of nuclear power plants. It is often difficult to assess the validity and accuracy of the results submitted by various utilities without developing comparable computer solutions. Taking this into consideration, CARES was designed as an integrated computational system which can perform rapid evaluations of structural behavior and examine the capability of nuclear power plant facilities; thus CARES may be used by the NRC to determine the validity and accuracy of the analysis methodologies employed for structural safety evaluations of nuclear power plants. CARES has been designed to operate on a PC, to have a user-friendly input/output interface, and to provide quick turnaround. This paper describes the various features which have been implemented into the seismic module of CARES version 1.0.

  10. The modernization of the process computer of the Trillo Nuclear Power Plant

    International Nuclear Information System (INIS)

    Martin Aparicio, J.; Atanasio, J.

    2011-01-01

    The paper describes the modernization of the process computer of the Trillo Nuclear Power Plant. The process computer functions have been incorporated into the non-safety I&C platform selected at Trillo NPP: the Siemens SPPA-T2000 OM690 (formerly known as Teleperm XP). The upgrade of the human-machine interface of the control room has been included in the project. The modernization project has followed the same development process used in the upgrade of the process computers of German PWR nuclear power plants. (Author)

  11. Applications of parallel computer architectures to the real-time simulation of nuclear power systems

    International Nuclear Information System (INIS)

    Doster, J.M.; Sills, E.D.

    1988-01-01

    In this paper the authors report on efforts to utilize parallel computer architectures for the thermal-hydraulic simulation of nuclear power systems, and on current research toward the development of advanced reactor operator aids and control systems based on this new technology. Many aspects of reactor thermal-hydraulic calculations are inherently parallel, and the computationally intensive portions of these calculations can be effectively implemented on modern computers. Timing studies indicate that faster-than-real-time, high-fidelity physics models can be developed when the computational algorithms are designed to take advantage of the computer's architecture. These capabilities allow for the development of novel control systems and advanced reactor operator aids. Coupled with an integral real-time data acquisition system, evolving parallel computer architectures can provide operators and control room designers with improved control and protection capabilities. Research efforts are currently under way in this area.

  12. Cyber Security on Nuclear Power Plant's Computer Systems

    International Nuclear Information System (INIS)

    Shin, Ick Hyun

    2010-01-01

    Computer systems are used in many different fields of industry. Most of us take great advantage of computer systems. Because of the effectiveness and great performance of computer systems, we are becoming very dependent on them. But the more we depend on a computer system, the greater the risk we face when the computer system becomes unavailable, inaccessible or uncontrollable. SCADA (Supervisory Control And Data Acquisition) systems are broadly used for critical infrastructure such as transportation, electricity and water management, and if a SCADA system is vulnerable to cyber attack, it can lead to a national disaster. Especially if a nuclear power plant's main control systems are attacked by cyber terrorists, the consequences may be huge: the release of radioactive material could be the terrorists' main purpose without the use of physical force. In this paper, different types of cyber attacks are described, and a possible structure of an NPP's computer network system is presented. The paper also provides possible ways in which the NPP's computer system could be damaged, along with some suggestions for protection against cyber attacks.

  13. High energy physics and grid computing

    International Nuclear Information System (INIS)

    Yu Chuansong

    2004-01-01

    The status of the new generation computing environment of the high energy physics experiments is introduced briefly in this paper. The development of the high energy physics experiments and the new computing requirements by the experiments are presented. The blueprint of the new generation computing environment of the LHC experiments, the history of the Grid computing, the R and D status of the high energy physics grid computing technology, the network bandwidth needed by the high energy physics grid and its development are described. The grid computing research in Chinese high energy physics community is introduced at last. (authors)

  14. Computer science of the high performance; Informatica del alto rendimiento

    Energy Technology Data Exchange (ETDEWEB)

    Moraleda, A.

    2008-07-01

    High performance computing is taking shape as a powerful accelerator of the innovation process, drastically reducing the waiting time for access to results and findings in a growing number of processes and activities as complex and important as medicine, genetics, pharmacology, the environment, natural resources management and the simulation of complex processes in a wide variety of industries. (Author)

  15. Quasi-optical converters for high-power gyrotrons: a brief review of physical models, numerical methods and computer codes

    International Nuclear Information System (INIS)

    Sabchevski, S; Zhelyazkov, I; Benova, E; Atanassov, V; Dankov, P; Thumm, M; Arnold, A; Jin, J; Rzesnicki, T

    2006-01-01

    Quasi-optical (QO) mode converters are used to transform electromagnetic waves of complex structure and polarization generated in gyrotron cavities into a linearly polarized, Gaussian-like beam suitable for transmission. The efficiency of this conversion, as well as the maintenance of a low level of diffraction losses, is crucial for the implementation of powerful gyrotrons as radiation sources for electron-cyclotron-resonance heating of fusion plasmas. The use of adequate physical models, efficient numerical schemes and up-to-date computer codes can provide the high accuracy necessary for the design and analysis of these devices. In this review, we briefly sketch the most commonly used QO converters, the mathematical basis on which they have been treated and the basic features of the numerical schemes used. Further on, we discuss the applicability of several commercially available and free software packages, and their advantages and drawbacks, for solving QO-related problems.

  16. Elucidation of complicated phenomena in nuclear power field by computation science techniques

    International Nuclear Information System (INIS)

    Takahashi, Ryoichi

    1996-01-01

    In this crossover research, the complicated phenomena encountered in the nuclear power field are elucidated and, to connect them to engineering application research, high-speed computer utilization technology is developed and large-scale numerical simulations making use of it are carried out. In terms of the scale of calculation, the aim is to realize three-dimensional numerical simulations of the largest scale in the world, of about 100 million mesh cells, and to carry the results over into engineering research. In the nuclear power plants of the next generation, further improvement of economic efficiency is demanded together with assured safety, so it is important that the design window be large. Confirming quantitatively the size of the design window is not easy, and it is very difficult to separate observed phenomena into elementary events. As a method of forecasting and reproducing complicated phenomena and quantifying the design window, large-scale numerical simulation is promising. The roles of theory, experiment and computational science are discussed. The organization for executing this crossover research is described. (K.I.)

  17. CSTI High Capacity Power

    International Nuclear Information System (INIS)

    Winter, J.M.

    1989-01-01

    The SP-100 program was established in 1983 by DOD, DOE, and NASA as a joint program to develop the technology necessary for space nuclear power systems for military and civil application. During FY-86 and 87, the NASA SP-100 Advanced Technology Program was devised to maintain the momentum of promising technology advancement efforts started during Phase 1 of SP-100 and to strengthen, in key areas, the chances for successful development and growth capability of space nuclear reactor power systems for future space applications. In FY-88, the Advanced Technology Program was incorporated into NASA's new Civil Space Technology Initiative (CSTI). The CSTI Program was established to provide the foundation for technology development in automation and robotics, information, propulsion, and power. The CSTI High Capacity Power Program builds on the technology efforts of the SP-100 program, incorporates the previous NASA SP-100 Advanced Technology project, and provides a bridge to NASA Project Pathfinder. The elements of CSTI High Capacity Power development include Conversion Systems, Thermal Management, Power Management, System Diagnostics, and Environmental Interactions. Technology advancement in all areas, including materials, is required to assure the high reliability and 7 to 10 year lifetime demanded for future space nuclear power systems. The overall program will develop and demonstrate the technology base required to provide a wide range of modular power systems as well as allowing mission independence from solar and orbital attitude requirements. Several recent advancements in CSTI High Capacity power development will be discussed

  18. The ongoing investigation of high performance parallel computing in HEP

    CERN Document Server

    Peach, Kenneth J; Böck, R K; Dobinson, Robert W; Hansroul, M; Norton, Alan Robert; Willers, Ian Malcolm; Baud, J P; Carminati, F; Gagliardi, F; McIntosh, E; Metcalf, M; Robertson, L; CERN. Geneva. Detector Research and Development Committee

    1993-01-01

    Past and current exploitation of parallel computing in High Energy Physics is summarized and a list of R & D projects in this area is presented. The applicability of new parallel hardware and software to physics problems is investigated, in the light of the requirements for computing power of LHC experiments and the current trends in the computer industry. Four main themes are discussed (possibilities for a finer grain of parallelism; fine-grain communication mechanism; usable parallel programming environment; different programming models and architectures, using standard commercial products). Parallel computing technology is potentially of interest for offline and vital for real time applications in LHC. A substantial investment in applications development and evaluation of state of the art hardware and software products is needed. A solid development environment is required at an early stage, before mainline LHC program development begins.

  19. Solid state high power amplifier for driving the SLC injector klystron

    International Nuclear Information System (INIS)

    Judkins, J.G.; Clendenin, J.E.; Schwarz, H.D.

    1985-03-01

    The SLC injector klystron rf drive is now provided by a recently developed solid-state amplifier. The high gain of the amplifier permits the use of a fast low-power electronic phase shifter. Thus the SLC computer control system can be used to shift the phase of the high-power rf rapidly during the fill time of the injector accelerator section. These rapid phase shifts are used to introduce a phase-energy relationship in the accelerated electron pulse in conjunction with the operation of the injector bunch compressor. The amplifier, the method of controlling the rf phase, and the operational characteristics of the system are described. 5 refs., 4 figs

  20. A Computer Program for Short Circuit Analysis of Electric Power ...

    African Journals Online (AJOL)

    The Short Circuit Analysis Program (SCAP) is to be used to assess the composite effects of unbalanced and balanced faults on the overall reliability of electric power system. The program uses the symmetrical components method to compute all phase and sequence quantities for any bus or branch of a given power network ...
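
    The symmetrical-components decomposition the abstract relies on can be sketched in a few lines of Python; this is only the Fortescue transform, not the SCAP program itself:

        import numpy as np

        a = np.exp(2j * np.pi / 3)   # 120-degree rotation operator
        # Phases are synthesised from sequence components as [Va, Vb, Vc] = A @ [V0, V1, V2].
        A = np.array([[1, 1,    1],
                      [1, a**2, a],
                      [1, a,    a**2]], dtype=complex)

        def to_sequence(phases):
            """phases: [Va, Vb, Vc] complex phasors -> [V0, V1, V2]."""
            return np.linalg.solve(A, np.asarray(phases, dtype=complex))

        # A balanced set has only a positive-sequence component.
        print(np.round(to_sequence([1.0, a**2, a]), 6))   # ~ [0, 1, 0]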

  1. High to ultra-high power electrical energy storage.

    Science.gov (United States)

    Sherrill, Stefanie A; Banerjee, Parag; Rubloff, Gary W; Lee, Sang Bok

    2011-12-14

    High power electrical energy storage systems are becoming critical devices for advanced energy storage technology. This is true in part due to their high rate capabilities and moderate energy densities which allow them to capture power efficiently from evanescent, renewable energy sources. High power systems include both electrochemical capacitors and electrostatic capacitors. These devices have fast charging and discharging rates, supplying energy within seconds or less. Recent research has focused on increasing power and energy density of the devices using advanced materials and novel architectural design. An increase in understanding of structure-property relationships in nanomaterials and interfaces and the ability to control nanostructures precisely has led to an immense improvement in the performance characteristics of these devices. In this review, we discuss the recent advances for both electrochemical and electrostatic capacitors as high power electrical energy storage systems, and propose directions and challenges for the future. We assess the opportunities in nanostructure-based high power electrical energy storage devices, including electrochemical and electrostatic capacitors, for their potential to open the door to a new regime of power and energy.

  2. High energy physics and cloud computing

    International Nuclear Information System (INIS)

    Cheng Yaodong; Liu Baoxu; Sun Gongxing; Chen Gang

    2011-01-01

    High Energy Physics (HEP) has been a strong promoter of computing technology, for example the WWW (World Wide Web) and grid computing. In the new era of cloud computing, HEP still has strong demands, and major international high energy physics laboratories have launched a number of projects to do research on cloud computing technologies and applications. This paper describes the current developments in cloud computing and its applications in high energy physics. Some ongoing projects in the institutes of high energy physics of the Chinese Academy of Sciences, including cloud storage, virtual computing clusters, and the BESⅢ elastic cloud, are also described briefly in the paper. (authors)

  3. Modeling of mode purity in high power gyrotrons

    International Nuclear Information System (INIS)

    Cai, S.Y.; Antonsen, T.M. Jr.; Saraph, G.P.

    1993-01-01

    Spurious mode generation at the same frequency as the operational mode in a high power gyrotron can significantly reduce the power handling capability and the stability of a gyrotron oscillator, because these modes are usually not matched at the output window and thus have high absorption and reflection rates. To study the generation of this kind of mode, the authors developed a numerical model based on an existing multimode, self-consistent, time-dependent computer code. This model includes both TE and TM modes and accounts for mode transformations due to the waveguide inhomogeneity. With this new tool, they study the mode transformation in the gyrotron and the possibility of excitation of parasitic TE and TM modes in the up-taper section due to the gyroklystron mechanism. Their preliminary results show moderate excitation of both TE and TM modes at the same frequency as the main operating mode at locations near their cutoff. Details of the model and further simulation results will be presented.

  4. Saving Energy and Money: A Lesson in Computer Power Management

    Science.gov (United States)

    Lazaros, Edward J.; Hua, David

    2012-01-01

    In this activity, students will develop an understanding of the economic impact of technology by estimating the cost savings of power management strategies in the classroom. Students will learn how to adjust computer display settings to influence the impact that the computer has on the financial burden to the school. They will use mathematics to…
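
    A back-of-the-envelope Python sketch of the kind of estimate the activity asks for; every number here (wattage, idle hours, tariff, fleet size) is an assumption for illustration only:

        ACTIVE_WATTS = 120       # desktop plus display while awake
        SLEEP_WATTS = 5          # display off / machine asleep
        IDLE_HOURS_PER_DAY = 6   # hours the machine is on but unused
        DAYS_PER_YEAR = 180      # school days
        TARIFF = 0.12            # dollars per kWh
        COMPUTERS = 30           # machines in the classroom

        kwh_saved = (ACTIVE_WATTS - SLEEP_WATTS) * IDLE_HOURS_PER_DAY \
                    * DAYS_PER_YEAR * COMPUTERS / 1000.0
        print("Energy saved: %.0f kWh/year" % kwh_saved)
        print("Money saved : $%.2f/year" % (kwh_saved * TARIFF))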

  5. [Restoration filtering based on projection power spectrum for single-photon emission computed tomography].

    Science.gov (United States)

    Kubo, N

    1995-04-01

    To improve the quality of single-photon emission computed tomographic (SPECT) images, a restoration filter has been developed. This filter was designed according to practical "least squares filter" theory. It is necessary to know the object power spectrum and the noise power spectrum. The power spectrum is estimated from the power spectrum of a projection, when the high-frequency power spectrum of a projection is adequately approximated as a polynomial exponential expression. A study of the restoration with the filter based on a projection power spectrum was conducted, and compared with that of the "Butterworth" filtering method (cut-off frequency of 0.15 cycles/pixel), and "Wiener" filtering (signal-to-noise power spectrum ratio was a constant). Normalized mean-squared errors (NMSE) of the phantom, two line sources located in a 99mTc filled cylinder, were used. NMSE of the "Butterworth" filter, "Wiener" filter, and filtering based on a power spectrum were 0.77, 0.83, and 0.76 respectively. Clinically, brain SPECT images utilizing this new restoration filter improved the contrast. Thus, this filter may be useful in diagnosis of SPECT images.

  6. Restoration filtering based on projection power spectrum for single-photon emission computed tomography

    International Nuclear Information System (INIS)

    Kubo, Naoki

    1995-01-01

    To improve the quality of single-photon emission computed tomographic (SPECT) images, a restoration filter has been developed. This filter was designed according to practical 'least squares filter' theory. It is necessary to know the object power spectrum and the noise power spectrum. The power spectrum is estimated from the power spectrum of a projection, when the high-frequency power spectrum of a projection is adequately approximated as a polynomial exponential expression. A study of the restoration with the filter based on a projection power spectrum was conducted, and compared with that of the 'Butterworth' filtering method (cut-off frequency of 0.15 cycles/pixel), and 'Wiener' filtering (signal-to-noise power spectrum ratio was a constant). Normalized mean-squared errors (NMSE) of the phantom, two line sources located in a 99mTc filled cylinder, were used. NMSE of the 'Butterworth' filter, 'Wiener' filter, and filtering based on a power spectrum were 0.77, 0.83, and 0.76 respectively. Clinically, brain SPECT images utilizing this new restoration filter improved the contrast. Thus, this filter may be useful in diagnosis of SPECT images. (author)
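
    Both restoration-filter records above rank filters by normalized mean-squared error (NMSE). A minimal Python sketch of one common NMSE definition (the exact normalisation used in the papers may differ):

        import numpy as np

        def nmse(restored, reference):
            """Normalized mean-squared error between a restored and a reference image."""
            restored = np.asarray(restored, dtype=float)
            reference = np.asarray(reference, dtype=float)
            return np.sum((restored - reference) ** 2) / np.sum(reference ** 2)

        # Usage: nmse(filtered_slice, true_phantom_slice) -> smaller is better.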

  7. Computation of the Mutual Inductance between Air-Cored Coils of Wireless Power Transformer

    International Nuclear Information System (INIS)

    Anele, A O; Hamam, Y; Djouani, K; Chassagne, L; Alayli, Y; Linares, J

    2015-01-01

    Wireless power transfer is a modern technology which allows the transfer of electric power between the air-cored coils of its transformer via high frequency magnetic fields. However, due to coil separation distance and misalignment, maximum power transfer is not guaranteed. Based on a more efficient and general model available in the literature, rederived mathematical models for evaluating the mutual inductance between circular coils with and without lateral and angular misalignment are presented. Rather than being reported only numerically, the computed results are plotted using MATLAB code. The results are compared with published ones and clarification regarding the errors made is presented. In conclusion, this study shows that the power transfer efficiency of the system can be improved if a higher frequency alternating current is supplied to the primary coil, the reactive parts of the coils are compensated with capacitors, and ferrite cores are added to the coils. (paper)
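
    For the simplest special case, two coaxial circular loops with no misalignment, the mutual inductance follows from the classical elliptic-integral formula. The Python sketch below covers only that case and is not the generalized, misalignment-capable model of the paper:

        import numpy as np
        from scipy.special import ellipk, ellipe   # both take the parameter m = k**2

        MU0 = 4e-7 * np.pi

        def mutual_inductance(r1, r2, d):
            """r1, r2: loop radii [m]; d: axial separation [m]; returns M [H]."""
            k2 = 4.0 * r1 * r2 / ((r1 + r2) ** 2 + d ** 2)
            k = np.sqrt(k2)
            return MU0 * np.sqrt(r1 * r2) * ((2.0 / k - k) * ellipk(k2)
                                             - (2.0 / k) * ellipe(k2))

        print(mutual_inductance(0.10, 0.10, 0.05))   # two 10 cm loops, 5 cm apart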

  8. Recent computer applications in boiling water reactor power plants

    International Nuclear Information System (INIS)

    Hiraga, Shoji; Joge, Toshio; Kiyokawa, Kazuhiro; Kato, Kanji; Nigawara, Seiitsu

    1976-01-01

    Process computers in boiling water reactor power plants have established themselves as important equipment for the calculation of core and plant performance and for data logging, and their application is expanding every year. Here, two systems are introduced: the plant diagnostic system and the computerized control panel. The plant diagnostic system consists of a part processing the signals from the plant, an operation part mainly composed of a computer that diagnoses the operating condition of each system component using the input signals, and a result display (CRT or typewriter). The concept underlying the indications on control panels in nuclear power plants is changing from 'plant parameters are to be indicated on panel meters as much as possible' to 'only the data required for operation are to be indicated.' Thus the computerized control panel, in which a process computer for processing the operating information and a CRT display are introduced, is attracting attention. The experimental model of this panel comprises an operator's console and a chief watchman's console. Its functions are interactive data access and the automatic selection of high-priority information. (Wakatsuki, Y.)

  9. LHC Computing Grid Project Launches into Action with International Support. A thousand times more computing power by 2006

    CERN Multimedia

    2001-01-01

    The first phase of the LHC Computing Grid project was approved at an extraordinary meeting of the Council on 20 September 2001. CERN is preparing for the unprecedented avalanche of data that will be produced by the Large Hadron Collider experiments. A thousand times more computer power will be needed by 2006! CERN's need for a dramatic advance in computing capacity is urgent. As from 2006, the four giant detectors observing trillions of elementary particle collisions at the LHC will accumulate over ten million Gigabytes of data, equivalent to the contents of about 20 million CD-ROMs, each year of its operation. A thousand times more computing power will be needed than is available to CERN today. The strategy the collabortations have adopted to analyse and store this unprecedented amount of data is the coordinated deployment of Grid technologies at hundreds of institutes which will be able to search out and analyse information from an interconnected worldwide grid of tens of thousands of computers and storag...

  10. Computing Confidence Bounds for Power and Sample Size of the General Linear Univariate Model

    OpenAIRE

    Taylor, Douglas J.; Muller, Keith E.

    1995-01-01

    The power of a test, the probability of rejecting the null hypothesis in favor of an alternative, may be computed using estimates of one or more distributional parameters. Statisticians frequently fix mean values and calculate power or sample size using a variance estimate from an existing study. Hence computed power becomes a random variable for a fixed sample size. Likewise, the sample size necessary to achieve a fixed power varies randomly. Standard statistical practice requires reporting ...
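
    A minimal Python sketch of that idea, using an F test whose noncentrality depends on an estimated variance; the layout and numbers are invented for illustration and the exact formulation in the paper may differ. A chi-square confidence interval for the variance is propagated through the noncentral-F power function to give bounds on power:

        from scipy import stats

        def f_power(effect_ss, sigma2, df1, df2, alpha=0.05):
            """Power of an F test with noncentrality effect_ss / sigma2."""
            fcrit = stats.f.ppf(1.0 - alpha, df1, df2)
            return stats.ncf.sf(fcrit, df1, df2, effect_ss / sigma2)

        sigma2_hat, df_est = 4.0, 20          # variance estimate from a prior study
        effect_ss, df1, df2 = 40.0, 2, 27     # hypothesis sum of squares, test dfs

        # 95% confidence limits for sigma^2 based on the chi-square distribution.
        ci_low = df_est * sigma2_hat / stats.chi2.ppf(0.975, df_est)
        ci_high = df_est * sigma2_hat / stats.chi2.ppf(0.025, df_est)

        print("point estimate of power:", f_power(effect_ss, sigma2_hat, df1, df2))
        print("lower bound on power   :", f_power(effect_ss, ci_high, df1, df2))
        print("upper bound on power   :", f_power(effect_ss, ci_low, df1, df2))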

  11. Modelling switching-time effects in high-frequency power conditioning networks

    Science.gov (United States)

    Owen, H. A.; Sloane, T. H.; Rimer, B. H.; Wilson, T. G.

    1979-01-01

    Power transistor networks which switch large currents in highly inductive environments are beginning to find application in the hundred-kilohertz switching frequency range. Recent developments in the fabrication of metal-oxide-semiconductor field-effect transistors in the power device category have accelerated the movement toward higher switching frequencies. Models of the switching devices and of the circuits in which they are embedded are required to properly characterize the mechanisms responsible for turn-on and turn-off effects. Easily interpreted results in the form of oscilloscope-like plots assist in understanding the effects of parametric studies using topology-oriented computer-aided analysis methods.

  12. The numerical computation of seismic fragility of base-isolated Nuclear Power Plants buildings

    International Nuclear Information System (INIS)

    Perotti, Federico; Domaneschi, Marco; De Grandis, Silvia

    2013-01-01

    Highlights:
    • Seismic fragility of structural components in a base-isolated NPP is computed.
    • Dynamic integration, Response Surface, FORM and Monte Carlo simulation are adopted.
    • A refined approach for modeling the non-linear behavior of the isolators is proposed.
    • Beyond-design conditions are addressed.
    • The preliminary design of the isolated IRIS is the application of the procedure.
    Abstract: The research work described here is devoted to the development of a numerical procedure for the computation of seismic fragilities for equipment and structural components in Nuclear Power Plants; in particular, reference is made, in the present paper, to the case of isolated buildings. The proposed procedure for fragility computation makes use of the Response Surface Methodology to model the influence of the random variables on the dynamic response. To account for stochastic loading, the latter is computed by means of a simulation procedure. Given the Response Surface, the Monte Carlo method is used to compute the failure probability. The procedure is applied here to the preliminary design of the Nuclear Power Plant reactor building within the International Reactor Innovative and Secure international project; the building is equipped with a base isolation system based on the introduction of High Damping Rubber Bearing elements showing a markedly non-linear mechanical behavior. The fragility analysis is performed assuming that the isolation devices become the critical elements in terms of seismic risk and that, once base isolation is introduced, the dynamic behavior of the building can be captured by low-dimensional numerical models.
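
    As a hedged sketch of the final Monte Carlo step described above (the response surface, random-variable distributions and capacity value below are invented, not taken from the paper):

        import numpy as np

        rng = np.random.default_rng(0)
        N = 200_000

        # Illustrative random variables, e.g. a stiffness factor and a demand measure.
        x1 = rng.normal(1.0, 0.10, N)
        x2 = rng.lognormal(mean=-0.5, sigma=0.4, size=N)

        # Fitted response surface (displacement demand) and a deterministic capacity.
        demand = 0.12 + 0.05 * x1 + 0.30 * x2 + 0.08 * x2**2
        capacity = 0.55                      # allowable isolator displacement [m]

        p_fail = np.mean(demand > capacity)
        print("estimated failure probability:", p_fail)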

  13. High power microwave emission and diagnostics of microsecond electron beams

    Energy Technology Data Exchange (ETDEWEB)

    Gilgenbach, R; Hochman, J M; Jayness, R; Rintamaki, J I; Lau, Y Y; Luginsland, J; Lash, J S [Univ. of Michigan, Ann Arbor, MI (United States). Intense Electron Beam Interaction Lab.; Spencer, T A [Air Force Phillips Lab., Kirtland AFB, NM (United States)

    1997-12-31

    Experiments were performed to generate high power, long-pulse microwaves by the gyrotron mechanism in rectangular cross-section interaction cavities. Long-pulse electron beams are generated by MELBA (Michigan Electron Long Beam Accelerator), which operates with parameters: -0.8 MV, 1-10 kA, and 0.5-1 microsecond pulse length. Microwave power levels are in the megawatt range. Polarization control is being studied by adjustment of the solenoidal magnetic field. Initial results show polarization power ratios up to a factor of 15. Electron beam dynamics (V⊥/V∥) are being measured by radiation darkening on glass plates. Computer modeling utilizes the MAGIC Code for electromagnetic waves and a single electron orbit code that includes a distribution of angles. (author). 4 figs., 4 refs.

  14. 8th International Workshop on Parallel Tools for High Performance Computing

    CERN Document Server

    Gracia, José; Knüpfer, Andreas; Resch, Michael; Nagel, Wolfgang

    2015-01-01

    Numerical simulation and modelling using High Performance Computing has evolved into an established technique in academic and industrial research. At the same time, the High Performance Computing infrastructure is becoming ever more complex. For instance, most of the current top systems around the world use thousands of nodes in which classical CPUs are combined with accelerator cards in order to enhance their compute power and energy efficiency. This complexity can only be mastered with adequate development and optimization tools. Key topics addressed by these tools include parallelization on heterogeneous systems, performance optimization for CPUs and accelerators, debugging of increasingly complex scientific applications, and optimization of energy usage in the spirit of green IT. This book represents the proceedings of the 8th International Parallel Tools Workshop, held October 1-2, 2014 in Stuttgart, Germany – which is a forum to discuss the latest advancements in the parallel tools.

  15. An Adaptive and Integrated Low-Power Framework for Multicore Mobile Computing

    Directory of Open Access Journals (Sweden)

    Jongmoo Choi

    2017-01-01

    Employing multicore processors in mobile computing, such as smartphones and IoT (Internet of Things) devices, is a double-edged sword. It provides the ample computing capability required by recent intelligent mobile services, including voice recognition, image processing, big data analysis and deep learning. However, it consumes a great deal of power, which creates thermal hot spots and puts pressure on the energy resources of a mobile device. In this paper, we propose a novel framework that integrates two well-known low-power techniques, DPM (Dynamic Power Management) and DVFS (Dynamic Voltage and Frequency Scaling), for energy efficiency in multicore mobile systems. The key feature of the proposed framework is adaptability. By monitoring online resource usage such as CPU utilization and power consumption, the framework can orchestrate diverse DPM and DVFS policies according to workload characteristics. Experiments based on real implementations on three mobile devices have shown that it can reduce power consumption by between 22% and 79%, while negligibly affecting the performance of the workloads.
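
    A toy Python sketch of the adaptive idea, choosing a DVFS/DPM action from monitored utilisation; the thresholds and policy names are invented, and a real implementation would drive platform interfaces such as cpufreq and CPU hotplug:

        def choose_policy(cpu_utilisation, queue_length):
            """Return (frequency action, core action) for the next control interval."""
            if cpu_utilisation < 0.2 and queue_length == 0:
                return ("scale_freq_min", "power_gate_idle_cores")   # DVFS + DPM
            if cpu_utilisation < 0.6:
                return ("scale_freq_mid", "keep_cores_online")       # DVFS only
            return ("scale_freq_max", "wake_all_cores")              # performance

        print(choose_policy(0.15, 0))   # light load -> deep power saving
        print(choose_policy(0.75, 4))   # heavy load -> full performance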

  16. Computer model for large-scale offshore wind-power systems

    Energy Technology Data Exchange (ETDEWEB)

    Dambolena, I G [Bucknell Univ., Lewisburg, PA; Rikkers, R F; Kaminsky, F C

    1977-01-01

    A computer-based planning model has been developed to evaluate the cost and simulate the performance of offshore wind-power systems. In these systems, the electricity produced by wind generators either directly satisfies demand or produces hydrogen by water electrolysis. The hydrogen is stored and later used to produce electricity in fuel cells. Using as inputs the basic characteristics of the system and historical or computer-generated time series for wind speed and electricity demand, the model simulates system performance over time. A history of the energy produced and the discounted annual cost of the system are used to evaluate alternatives. The output also contains information which is useful in pointing toward more favorable design alternatives. Use of the model to analyze a specific wind-power system for New England indicates that electric energy could perhaps be generated at a competitive cost.
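
    A minimal Python sketch of the kind of hourly energy balance such a model steps through; the efficiencies, storage capacity and the two short time series are assumptions for illustration only:

        def simulate(wind_kw, demand_kw, electrolyser_eff=0.7, fuel_cell_eff=0.5,
                     storage_kwh=500.0):
            """Step through hourly wind and demand; return final H2 store and unserved energy."""
            h2_store, unserved = 0.0, 0.0
            for wind, demand in zip(wind_kw, demand_kw):
                if wind >= demand:
                    surplus = wind - demand
                    h2_store = min(storage_kwh, h2_store + surplus * electrolyser_eff)
                else:
                    deficit = demand - wind
                    from_cells = min(deficit, h2_store * fuel_cell_eff)
                    h2_store -= from_cells / fuel_cell_eff
                    unserved += deficit - from_cells
            return h2_store, unserved

        print(simulate(wind_kw=[300, 500, 100, 50], demand_kw=[200, 200, 250, 250]))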

  17. Practical application of computer graphics in nuclear power plant engineering

    International Nuclear Information System (INIS)

    Machiba, Hiroshi; Kawamura, Hirobumi; Sasaki, Norio

    1992-01-01

    A nuclear power plant is composed of a vast amount of equipment, piping, and so on, and six or seven years are required to complete the design and engineering from the initial planning stage to the time of commercial operation. Furthermore, operating plants must be continually maintained and improved for a long period. Computer graphics were first applied to the composite arrangement design of nuclear power plants in the form of 3-dimensional CAD. Subsequently, as the introduction of CAE has progressed, a huge assortment of information has been accumulated in database, and measures have been sought that would permit the convenient utilization of this information. Using computer graphics technologies, improvement of the interface between the user and such databases has recently been accomplished. In response to the growth in environmental consciousness, photo-realistic simulations for artistic design of the interior and overviews showing harmony with the surroundings have been achieved through the application of computer graphics. (author)

  18. Soft computing for fault diagnosis in power plants

    International Nuclear Information System (INIS)

    Ciftcioglu, O.; Turkcan, E.

    1998-01-01

    Considering the advancements in AI technology, a new concept known as soft computing has arisen. It can be defined as the processing of uncertain information with AI methods, referring explicitly to methods using neural networks, fuzzy logic and evolutionary algorithms. In this respect, soft computing is a new dimension in information processing technology in which linguistic information can also be processed, in contrast with the classical stochastic and deterministic treatments of data. On one hand it can process uncertain or incomplete information and on the other hand it can deal with the non-linearity of large-scale systems, where uncertainty is particularly relevant with respect to linguistic information and incompleteness is related to fault tolerance in fault diagnosis. In this perspective, the potential role of soft computing in power plant operation is presented. (author)

  19. Computer-assisted power plant management

    International Nuclear Information System (INIS)

    Boettcher, D.

    1990-01-01

    Operating a power plant and keeping it operational is ensured by a multiplicity of technical management subtasks which are cross-referenced and based on an extensive inventory of descriptive and operational plant data. These data stocks are still registered in an isolated mode and managed and updated manually, a labor-intensive, error-prone procedure. In this situation, the introduction of a computer-assisted plant management system, whose core is a quality-assured database common to all activities and which contains standardized processing aids planned for the subtasks occurring in the plant, is likely to achieve a considerable improvement in the quality of plant management and to relieve the staff of administrative activities. (orig.)

  20. Development of superconductor electronics technology for high-end computing

    Energy Technology Data Exchange (ETDEWEB)

    Silver, A [Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109-8099 (United States); Kleinsasser, A [Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109-8099 (United States); Kerber, G [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States); Herr, Q [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States); Dorojevets, M [Department of Electrical and Computer Engineering, SUNY-Stony Brook, NY 11794-2350 (United States); Bunyk, P [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States); Abelson, L [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States)

    2003-12-01

    This paper describes our programme to develop and demonstrate ultra-high performance single flux quantum (SFQ) VLSI technology that will enable superconducting digital processors for petaFLOPS-scale computing. In the hybrid technology, multi-threaded architecture, the computational engine to power a petaFLOPS machine at affordable power will consist of 4096 SFQ multi-chip processors, with 50 to 100 GHz clock frequency and associated cryogenic RAM. We present the superconducting technology requirements, progress to date and our plan to meet these requirements. We improved SFQ Nb VLSI by two generations, to an 8 kA cm⁻², 1.25 μm junction process, incorporated new CAD tools into our methodology, demonstrated methods for recycling the bias current and data communication at speeds up to 60 Gb s⁻¹, both on and between chips through passive transmission lines. FLUX-1 is the most ambitious project implemented in SFQ technology to date, a prototype general-purpose 8 bit microprocessor chip. We are testing the FLUX-1 chip (5K gates, 20 GHz clock) and designing a 32 bit floating-point SFQ multiplier with vector-register memory. We report correct operation of the complete stripline-connected gate library with large bias margins, as well as several larger functional units used in FLUX-1. The next stage will be an SFQ multi-processor machine. Important challenges include further reducing chip supply current and on-chip power dissipation, developing at least 64 kbit, sub-nanosecond cryogenic RAM chips, developing thermally and electrically efficient high data rate cryogenic-to-ambient input/output technology and improving Nb VLSI to increase gate density.

  1. Development of superconductor electronics technology for high-end computing

    International Nuclear Information System (INIS)

    Silver, A; Kleinsasser, A; Kerber, G; Herr, Q; Dorojevets, M; Bunyk, P; Abelson, L

    2003-01-01

    This paper describes our programme to develop and demonstrate ultra-high performance single flux quantum (SFQ) VLSI technology that will enable superconducting digital processors for petaFLOPS-scale computing. In the hybrid technology, multi-threaded architecture, the computational engine to power a petaFLOPS machine at affordable power will consist of 4096 SFQ multi-chip processors, with 50 to 100 GHz clock frequency and associated cryogenic RAM. We present the superconducting technology requirements, progress to date and our plan to meet these requirements. We improved SFQ Nb VLSI by two generations, to an 8 kA cm⁻², 1.25 μm junction process, incorporated new CAD tools into our methodology, demonstrated methods for recycling the bias current and data communication at speeds up to 60 Gb s⁻¹, both on and between chips through passive transmission lines. FLUX-1 is the most ambitious project implemented in SFQ technology to date, a prototype general-purpose 8 bit microprocessor chip. We are testing the FLUX-1 chip (5K gates, 20 GHz clock) and designing a 32 bit floating-point SFQ multiplier with vector-register memory. We report correct operation of the complete stripline-connected gate library with large bias margins, as well as several larger functional units used in FLUX-1. The next stage will be an SFQ multi-processor machine. Important challenges include further reducing chip supply current and on-chip power dissipation, developing at least 64 kbit, sub-nanosecond cryogenic RAM chips, developing thermally and electrically efficient high data rate cryogenic-to-ambient input/output technology and improving Nb VLSI to increase gate density

  2. Computer model of the MFTF-B neutral beam Accel dc power supply

    International Nuclear Information System (INIS)

    Wilson, J.H.

    1983-01-01

    Using the SCEPTRE circuit modeling code, a computer model was developed for the MFTF Neutral Beam Power Supply System (NBPSS) Accel dc Power Supply (ADCPS). The ADCPS provides 90 kV, 88 A, to the Accel Modulator. Because of the complex behavior of the power supply, use of the computer model is necessary to adequately understand the power supply's behavior over a wide range of load conditions and faults. The model developed includes all the circuit components and parameters, and some of the stray values. The model has been well validated for transients with times on the order of milliseconds, and with one exception, for steady-state operation. When using a circuit modeling code for a system with a wide range of time constants, it can become impossible to obtain good solutions for all time ranges at once. The present model concentrates on the millisecond-range transients because the compensating capacitor bank tends to isolate the power supply from the load for faster transients. Attempts to include stray circuit elements with time constants in the microsecond and shorter range have had little success because of huge increases in computing time that result. The model has been successfully extended to include the accel modulator

  3. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  4. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  5. Nanoelectromechanical Switches for Low-Power Digital Computing

    Directory of Open Access Journals (Sweden)

    Alexis Peschot

    2015-08-01

    Full Text Available The need for more energy-efficient solid-state switches beyond complementary metal-oxide-semiconductor (CMOS) transistors has become a major concern as the power consumption of electronic integrated circuits (ICs) steadily increases with technology scaling. Nano-Electro-Mechanical (NEM) relays control current flow by nanometer-scale motion to make or break physical contact between electrodes, and offer advantages over transistors for low-power digital logic applications: virtually zero leakage current for negligible static power consumption; the ability to operate with very small voltage signals for low dynamic power consumption; and robustness against harsh environments such as extreme temperatures. Therefore, NEM logic switches (relays) have been investigated by several research groups during the past decade. Circuit simulations calibrated to experimental data indicate that scaled relay technology can overcome the energy-efficiency limit of CMOS technology. This paper reviews recent progress toward this goal, providing an overview of the different relay designs and experimental results achieved by various research groups, as well as of relay-based IC design principles. Remaining challenges for realizing the promise of nano-mechanical computing, and ongoing efforts to address these, are discussed.

  6. High power microwaves

    CERN Document Server

    Benford, James; Schamiloglu, Edl

    2016-01-01

    Following in the footsteps of its popular predecessors, High Power Microwaves, Third Edition continues to provide a wide-angle, integrated view of the field of high power microwaves (HPMs). This third edition includes significant updates in every chapter as well as a new chapter on beamless systems that covers nonlinear transmission lines. Written by an experimentalist, a theorist, and an applied theorist, respectively, the book offers complementary perspectives on different source types. The authors address: * How HPM relates historically and technically to the conventional microwave field * The possible applications for HPM and the key criteria that HPM devices have to meet in order to be applied * How high power sources work, including their performance capabilities and limitations * The broad fundamental issues to be addressed in the future for a wide variety of source types The book is accessible to several audiences. Researchers currently in the field can widen their understanding of HPM. Present or pot...

  7. Development and application of project management computer system in nuclear power station

    International Nuclear Information System (INIS)

    Chen Junpu

    2000-01-01

    Based on the experience gained in the construction of the Daya Bay and Lingao nuclear power plants, the necessity of using computers for management and their application in nuclear power engineering projects are explained.

  8. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Ashley D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bland, Arthur S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Gary, Jeff D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Hack, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; McNally, Stephen T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Rogers, James H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Straatsma, T. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Sukumar, Sreenivas Rangan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Thach, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Tichenor, Suzy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility

    2016-03-01

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for Southern

  9. Effect of high entropy magnetic regenerator materials on power of the GM refrigerator

    International Nuclear Information System (INIS)

    Hashimoto, Takasu; Yabuki, Masanori; Eda, Tatsuji; Kuriyama, Toru; Nakagome, Hideki

    1994-01-01

    In previous work the authors have proved that heavy rare earth compounds with a low magnetic transition temperature Tc are very useful as regenerator materials in the low temperature range. Applying particles of the magnetic material Er3Ni to the 2nd regenerator of the GM refrigerator, they were able to reach the 2 K range but could not obtain high refrigeration power at 4.2 K. This is thought to be due to the temperature dependence of the magnetic specific heat. They present here a method by which high refrigeration power is obtained at low temperature. The simplest means of obtaining high power is a hybrid-structure regenerator composed of two kinds of magnetic materials, high-Tc and low-Tc materials. Computer simulation and experiments were carried out to verify the superiority of the hybrid regenerator. The authors succeeded experimentally in obtaining a high power of ∼1.1 W at 4.2 K. They report other detailed results and discuss ways of developing the magnetic regenerator in the future.

  10. High-End Scientific Computing

    Science.gov (United States)

    EPA uses high-end scientific computing, geospatial services and remote sensing/imagery analysis to support EPA's mission. The Center for Environmental Computing (CEC) assists the Agency's program offices and regions to meet staff needs in these areas.

  11. Computer Security for Commercial Nuclear Power Plants - Literature Review for Korea Hydro Nuclear Power Central Research Institute

    Energy Technology Data Exchange (ETDEWEB)

    Duran, Felicia Angelica [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Security Systems Analysis Dept.; Waymire, Russell L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Security Systems Analysis Dept.

    2013-10-01

    Sandia National Laboratories (SNL) is providing training and consultation activities on security planning and design for the Korea Hydro and Nuclear Power Central Research Institute (KHNPCRI). As part of this effort, SNL performed a literature review on computer security requirements, guidance and best practices that are applicable to an advanced nuclear power plant. This report documents the review of reports generated by SNL and other organizations [U.S. Nuclear Regulatory Commission, Nuclear Energy Institute, and International Atomic Energy Agency] related to protection of information technology resources, primarily digital controls and computer resources and their data networks. Copies of the key documents have also been provided to KHNP-CRI.

  12. Computer Security for Commercial Nuclear Power Plants - Literature Review for Korea Hydro Nuclear Power Central Research Institute

    International Nuclear Information System (INIS)

    Duran, Felicia Angelica; Waymire, Russell L.

    2013-01-01

    Sandia National Laboratories (SNL) is providing training and consultation activities on security planning and design for the Korea Hydro and Nuclear Power Central Research Institute (KHNPCRI). As part of this effort, SNL performed a literature review on computer security requirements, guidance and best practices that are applicable to an advanced nuclear power plant. This report documents the review of reports generated by SNL and other organizations [U.S. Nuclear Regulatory Commission, Nuclear Energy Institute, and International Atomic Energy Agency] related to protection of information technology resources, primarily digital controls and computer resources and their data networks. Copies of the key documents have also been provided to KHNP-CRI.

  13. Development of computer program for safety of nuclear power plant against tsunami

    International Nuclear Information System (INIS)

    Jin, S. B.; Choi, K. R.; Lee, S. K.; Cho, Y. S.

    2001-01-01

    The main objective of this study is the development of a computer program to check the safety of nuclear power plants along the coastline of the Korean Peninsula. The computer program describes the propagation and associated run-up process of tsunamis by solving linear and nonlinear shallow-water equations with finite difference methods. The computer program has been applied to several ideal and simplified problems. The obtained numerical solutions are compared with existing and available solutions and measurements. Very good agreement between the numerical solutions and existing measurements is observed. The computer program developed in this study can be used in the safety analysis of nuclear power plants against tsunamis. The program can also be used to study the propagation of tsunamis over long distances, and the associated run-up and run-down processes along a shoreline. Furthermore, the computer program can be used to provide proper design criteria for coastal facilities and structures.
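
    As a hedged illustration of the finite-difference approach mentioned above (not the program developed in the study), the sketch below integrates the 1D linear shallow-water equations on a staggered grid; the depth, domain and initial hump are arbitrary example values.

```python
# Minimal 1D linear shallow-water solver: forward-backward time stepping on a
# staggered grid. Illustrative only; depth, grid and initial condition are
# arbitrary choices, not values from the study.
import numpy as np

g, h = 9.81, 1000.0            # gravity, constant water depth (m)
L, nx = 400e3, 400             # domain length (m), number of eta cells
dx = L / nx
c = np.sqrt(g * h)             # long-wave speed
dt = 0.5 * dx / c              # CFL-limited time step

x = (np.arange(nx) + 0.5) * dx
eta = np.exp(-((x - L / 2) / 20e3) ** 2)   # initial free-surface hump (m)
u = np.zeros(nx + 1)                       # velocities at cell faces

for _ in range(2000):
    # momentum: du/dt = -g d(eta)/dx  (interior faces only)
    u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])
    u[0] = u[-1] = 0.0                     # reflective walls
    # continuity: d(eta)/dt = -h du/dx
    eta -= h * dt / dx * (u[1:] - u[:-1])

print("max surface elevation after run: %.3f m" % eta.max())
```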

  14. A gateway for phylogenetic analysis powered by grid computing featuring GARLI 2.0.

    Science.gov (United States)

    Bazinet, Adam L; Zwickl, Derrick J; Cummings, Michael P

    2014-09-01

    We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  15. Modeling of the dynamics of wind to power conversion including high wind speed behavior

    DEFF Research Database (Denmark)

    Litong-Palima, Marisciel; Bjerge, Martin Huus; Cutululis, Nicolaos Antonio

    2016-01-01

    This paper proposes and validates an efficient, generic and computationally simple dynamic model for the conversion of the wind speed at hub height into the electrical power by a wind turbine. This proposed wind turbine model was developed as a first step to simulate wind power time series for power system studies. This paper focuses on describing and validating the single wind turbine model, and is therefore neither describing wind speed modeling nor aggregation of contributions from a whole wind farm or a power system area. The state-of-the-art is to use static power curves for the purpose of power system studies, but the idea of the proposed wind turbine model is to include the main dynamic effects in order to have a better representation of the fluctuations in the output power and of the fast power ramping especially because of high wind speed shutdowns of the wind turbine. The high wind ...
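
    To make the idea concrete, a minimal sketch of such a dynamic wind-to-power model is given below. It is not the validated model from the paper: the power-curve shape, cut-in/cut-out speeds, restart hysteresis and lag time constant are all illustrative assumptions.

```python
# Illustrative wind-speed-to-power model: static power curve + first-order lag
# + high-wind shutdown with restart hysteresis. All parameters are assumptions.
import numpy as np

RATED_P = 2.0e6                 # rated power (W)
V_IN, V_RATED = 4.0, 12.0       # cut-in and rated wind speed (m/s)
V_OUT, V_RESTART = 25.0, 22.0   # cut-out and (lower) restart speed (m/s)
TAU = 5.0                       # first-order lag time constant (s)

def static_curve(v):
    """Static power curve between cut-in and rated speed (cubic ramp)."""
    if v < V_IN:
        return 0.0
    if v < V_RATED:
        return RATED_P * ((v - V_IN) / (V_RATED - V_IN)) ** 3
    return RATED_P

def simulate(wind, dt=1.0):
    """Return an electrical power series for a hub-height wind speed series."""
    p, online = 0.0, True
    out = []
    for v in wind:
        if online and v >= V_OUT:
            online = False            # high-wind shutdown
        elif not online and v <= V_RESTART:
            online = True             # restart only below the hysteresis band
        target = static_curve(v) if online else 0.0
        p += dt / TAU * (target - p)  # first-order dynamic response
        out.append(p)
    return np.array(out)

# Example: gusty wind ramping through the cut-out speed and back
t = np.arange(0, 1200.0)
wind = 15 + 12 * np.sin(2 * np.pi * t / 600) + np.random.normal(0, 0.5, t.size)
power = simulate(wind)
print("peak %.2f MW, final %.2f MW" % (power.max() / 1e6, power[-1] / 1e6))
```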

  16. Application of control computer system TESLA RPP-16 in the Bohunice nuclear power plant

    International Nuclear Information System (INIS)

    Spetko, V.

    1976-01-01

    The reasons are given for the installation of a computer at the A-1 nuclear power plant in Czechoslovakia with regard to applied research. The configuration, placement, and software of the computer system is described. The programmes are written in the SAM and FORTRAN-IV languages. The knowledge acquired in the course of tests and the prospect of the future installation of computer control equipment in the A-1 nuclear power plant are described. (J.P.)

  17. Topic 14+16: High-performance and scientific applications and extreme-scale computing (Introduction)

    KAUST Repository

    Downes, Turlough P.

    2013-01-01

    As our understanding of the world around us increases it becomes more challenging to make use of what we already know, and to increase our understanding still further. Computational modeling and simulation have become critical tools in addressing this challenge. The requirements of high-resolution, accurate modeling have outstripped the ability of desktop computers and even small clusters to provide the necessary compute power. Many applications in the scientific and engineering domains now need very large amounts of compute time, while other applications, particularly in the life sciences, frequently have large data I/O requirements. There is thus a growing need for a range of high performance applications which can utilize parallel compute systems effectively, which have efficient data handling strategies and which have the capacity to utilise current and future systems. The High Performance and Scientific Applications topic aims to highlight recent progress in the use of advanced computing and algorithms to address the varied, complex and increasing challenges of modern research throughout both the "hard" and "soft" sciences. This necessitates being able to use large numbers of compute nodes, many of which are equipped with accelerators, and to deal with difficult I/O requirements. © 2013 Springer-Verlag.

  18. Improving the Eco-Efficiency of High Performance Computing Clusters Using EECluster

    Directory of Open Access Journals (Sweden)

    Alberto Cocaña-Fernández

    2016-03-01

    Full Text Available As data and supercomputing centres increase their performance to improve service quality and target more ambitious challenges every day, their carbon footprint also continues to grow, and has already reached the magnitude of the aviation industry. Also, high power consumption is becoming a remarkable economic bottleneck for the expansion of these infrastructures due to the unavailability of sufficient energy sources. A substantial part of the problem is caused by the current energy consumption of High Performance Computing (HPC) clusters. To alleviate this situation, we present in this work EECluster, a tool that integrates with multiple open-source Resource Management Systems to significantly reduce the carbon footprint of clusters by improving their energy efficiency. EECluster implements a dynamic power management mechanism based on Computational Intelligence techniques by learning a set of rules through multi-criteria evolutionary algorithms. This approach enables cluster operators to find the optimal balance between a reduction in cluster energy consumption, service quality, and the number of reconfigurations. Experimental studies using both synthetic and actual workloads from a real world cluster support the adoption of this tool to reduce the carbon footprint of HPC clusters.
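
    A drastically simplified sketch of the kind of on/off rule such a tool might apply is shown below; the thresholds and node model are hypothetical and are not the rules evolved by EECluster.

```python
# Toy rule-based power management for an HPC cluster: switch idle nodes off
# after a threshold idle time, and wake nodes when the queue backlog grows.
# Thresholds and the node model are hypothetical, not EECluster's learned rules.
from dataclasses import dataclass

IDLE_OFF_SECONDS = 600      # rule 1: power down nodes idle longer than this
QUEUE_WAKE_THRESHOLD = 4    # rule 2: wake one node per N queued jobs

@dataclass
class Node:
    name: str
    powered_on: bool = True
    busy: bool = False
    idle_seconds: int = 0

def apply_rules(nodes, queued_jobs):
    """Return (nodes_to_power_off, nodes_to_wake) for one scheduling cycle."""
    to_off = [n for n in nodes
              if n.powered_on and not n.busy and n.idle_seconds > IDLE_OFF_SECONDS]
    wanted = queued_jobs // QUEUE_WAKE_THRESHOLD
    sleeping = [n for n in nodes if not n.powered_on]
    to_wake = sleeping[:wanted]
    return to_off, to_wake

# Example cycle: two long-idle nodes, one sleeping node, and a growing queue
cluster = [Node("n01", busy=True),
           Node("n02", idle_seconds=900),
           Node("n03", idle_seconds=1200),
           Node("n04", powered_on=False)]
off, wake = apply_rules(cluster, queued_jobs=8)
print("power off:", [n.name for n in off], "| wake:", [n.name for n in wake])
```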

  19. CERN-group conceptual design of a fast neutron operated high power energy amplifier

    International Nuclear Information System (INIS)

    Rubbia, C.; Rubio, J.A.; Buono, S.

    1997-01-01

    The practical feasibility of an Energy Amplifier (EA) with power and power density which are comparable to the ones of the present generation of large PWR is discussed in this paper. This is only possible with fast neutrons. Schemes are described which offer a high gain, a large maximum power density and an extended burn-up, well in excess of 100 GW x d/t corresponding to about five years at full power operation with no intervention on the fuel core. The following topics are discussed: physics considerations and parameter definition, the accelerator complex, the energy amplifier unit, computer simulated operation, and fuel cycle closing

  20. CERN-group conceptual design of a fast neutron operated high power energy amplifier

    Energy Technology Data Exchange (ETDEWEB)

    Rubbia, C; Rubio, J A [European Organization for Nuclear Research, CERN, Geneva (Switzerland); Buono, S [Laboratoire du Cyclotron, Nice (France); and others

    1997-11-01

    The practical feasibility of an Energy Amplifier (EA) with power and power density which are comparable to the ones of the present generation of large PWR is discussed in this paper. This is only possible with fast neutrons. Schemes are described which offer a high gain, a large maximum power density and an extended burn-up, well in excess of 100 GW x d/t corresponding to about five years at full power operation with no intervention on the fuel core. The following topics are discussed: physics considerations and parameter definition, the accelerator complex, the energy amplifier unit, computer simulated operation, and fuel cycle closing. 84 refs, figs, tabs.

  1. High Power High Efficiency Diode Laser Stack for Processing

    Science.gov (United States)

    Gu, Yuanyuan; Lu, Hui; Fu, Yueming; Cui, Yan

    2018-03-01

    High-power diode lasers based on GaAs semiconductor bars are well established as reliable and highly efficient laser sources. Because diode lasers are simple in structure, small in size, long-lived and low in price, they are widely used in industrial processing, such as heat treating, welding, hardening, cladding and so on. Their rectangular beam patterns, which are suitable for making fine beads with less power, also make practical applications possible. At this power level, they have many important applications, such as surgery, welding of polymers, soldering, coatings and surface treatment of metals. But there are some applications which require much higher power and brightness, e.g. hardening, keyhole welding, cutting and metal welding. In addition, high power diode lasers also have important applications in the military field. So all developed countries have attached great importance to high-power diode laser systems and their applications. This is mainly due to their low performance. In this paper we introduce the structure and the principle of the high power diode stack.

  2. A personal computer code for seismic evaluations of nuclear power plant facilities

    International Nuclear Information System (INIS)

    Xu, J.; Graves, H.

    1990-01-01

    A wide range of computer programs and modeling approaches are often used to justify the safety of nuclear power plants. It is often difficult to assess the validity and accuracy of the results submitted by various utilities without developing comparable computer solutions. Taking this into consideration, CARES is designed as an integrated computational system which can perform rapid evaluations of structural behavior and examine the capability of nuclear power plant facilities; thus CARES may be used by the NRC to determine the validity and accuracy of analysis methodologies employed for structural safety evaluations of nuclear power plants. CARES has been designed to operate on a PC, have a user-friendly input/output interface, and have quick turnaround. The CARES program is structured in a modular format. Each module performs a specific type of analysis. The basic modules of the system are associated with capabilities for static, seismic and nonlinear analyses. This paper describes the various features which have been implemented into the Seismic Module of CARES version 1.0. In Section 2 a description of the Seismic Module is provided. The methodologies and computational procedures thus far implemented into the Seismic Module are described in Section 3. Finally, a complete demonstration of the computational capability of CARES in a typical soil-structure interaction analysis is given in Section 4 and conclusions are presented in Section 5. 5 refs., 4 figs

  3. Fast computation of the roots of polynomials over the ring of power series

    DEFF Research Database (Denmark)

    Neiger, Vincent; Rosenkilde, Johan; Schost, Éric

    2017-01-01

    We give an algorithm for computing all roots of polynomials over a univariate power series ring over an exact field K. More precisely, given a precision d, and a polynomial Q whose coefficients are power series in x, the algorithm computes a representation of all power series f(x) such that Q(f(x)) = 0 mod x^d. The algorithm works unconditionally, in particular also with multiple roots, where Newton iteration fails. Our main motivation comes from coding theory where instances of this problem arise and multiple roots must be handled. The cost bound for our algorithm matches the worst-case input ...
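
    For the simple-root case (where Newton iteration does work), root lifting mod x^d can be sketched in a few lines of SymPy; this is only an illustration of the idea, not the paper's algorithm, which also handles multiple roots.

```python
# Lift a simple root of Q(y) mod x to a power-series root mod x**d by Newton
# iteration (precision doubles each step). Illustrative sketch only; the
# paper's algorithm additionally handles multiple roots, where this fails.
import sympy as sp

x, y = sp.symbols('x y')

def lift_simple_root(Q, r0, d):
    """Return f(x) with Q(f(x)) = 0 mod x**d, given a simple root r0 mod x."""
    Qy = sp.diff(Q, y)
    f = sp.Integer(r0)
    prec = 1
    while prec < d:
        prec = min(2 * prec, d)
        num = Q.subs(y, f)
        den = Qy.subs(y, f)
        # truncated power-series division: expand num/den to order prec
        corr = sp.series(num / den, x, 0, prec).removeO()
        f = sp.expand(f - corr)
        f = sp.series(f, x, 0, prec).removeO()   # truncate f to order prec
    return f

# Example: Q(y) = y**2 - (1 + x); the lifted root is the expansion of sqrt(1+x)
Q = y**2 - (1 + x)
f = lift_simple_root(Q, 1, 6)
print(f)                                   # 1 + x/2 - x**2/8 + x**3/16 - ...
print(sp.series(Q.subs(y, f), x, 0, 6))    # residual is O(x**6)
```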

  4. A first accident simulation for Angra-1 power plant using the ALMOD computer code

    International Nuclear Information System (INIS)

    Camargo, C.T.M.

    1981-02-01

    The acquisition of the Almod computer code from GRS-Munich by CNEN has permitted the calculation of transients in PWR nuclear power plants in which no loss of coolant occurs. The implementation of the German computer code Almod and its application to the calculation of Angra-1, a nuclear power plant different from the KWU power plants, demanded study and adaptation of models; for economic reasons, simplifications and optimizations were also necessary. The first results define the analytical potential of the computer code, confirm the adequacy of the adaptations made and provide relevant conclusions about the Angra-1 safety analysis, showing at the same time areas in which the model can be applied or simply improved. (Author) [pt

  5. High power CW linac in PNC

    International Nuclear Information System (INIS)

    Toyama, S.; Wang, Y.L.; Emoto, T.

    1994-01-01

    The Power Reactor and Nuclear Fuel Development Corporation (PNC) is developing a high power electron linac for various applications. The electron beam is accelerated in CW operation to reach a maximum beam current of 100 mA and an energy of 10 MeV. Crucial components such as a high power L-band klystron and high power traveling wave resonant ring (TWRR) accelerator guides were designed and manufactured, and their performance was examined. The design and the results of the recent high power RF tests are described in this paper. (author)

  6. A Study on Gas Insulation Characteristics for Design Optimization of High Voltage Power Apparatus

    Energy Technology Data Exchange (ETDEWEB)

    Kim, I S; Kim, M K; Seo, K S; Moon, I W; Choi, C K [Korea Electrotechnology Research Institute (Korea, Republic of)

    1996-12-01

    This study aims at obtaining basic data on gas insulation in high voltage apparatus and at investigating the breakdown characteristics in the uniform and non-uniform fields that represent the geometric construction of practical power apparatus. In this study, earlier published research results on insulation technology are reviewed, and the basic data for an optimum design of a high voltage apparatus are obtained through experiment and computer simulation using a uniform field. The main results are summarized as follows: (A) Investigation of the insulation technology in a large-capacity power apparatus. (B) Investigation of the breakdown characteristics under particle-contaminated conditions. (C) Investigation of the design by computer simulation. (D) Investigation of the simulation technology for breakdown characteristics. (E) Investigation of breakdown characteristics in the non-uniform field and experiment. (author). refs., figs., tabs.

  7. A high power cross-field amplifier at X-Band

    International Nuclear Information System (INIS)

    Eppley, K.; Feinstein, J.; Ko, K.; Kroll, N.; Lee, T.; Nelson, E.

    1991-05-01

    A high power cross-field amplifier is under development at SLAC with the objective of providing sufficient peak power to feed a section of an X-Band (11.424 GHz) accelerator without the need for pulse compression. The CFA being designed employs a conventional distributed secondary emission cathode but a novel anode structure which consists of an array of vane resonators alternately coupled to a rectangular waveguide. The waveguide impedance (width) is tapered linearly from input to output so as to provide a constant RF voltage at the vane tips, leading to uniform power generation along the structure. The nominal design for this tube calls for 300 MW output power, 20 dB gain, DC voltage 142 kV, magnetic field 5 kG, anode-cathode gap 3.6 mm and a total interaction length of about 60 cm. These specifications have been supported by computer simulations of both the RF slow-wave structure and the electron space-charge wave interaction. We have used ARGUS to model the cold circuit properties and CONDOR to model the electronic power conversion. An efficiency of 60 percent can be expected. We will discuss the details of the design effort. 5 refs., 6 figs

  8. Experimental and computational analysis of a 1.2 kW PEMFC designed for communications backup power applications

    International Nuclear Information System (INIS)

    Krishnan, K.J.; Claveria, J.; Varadharajan, L.; Kalam, A.

    2011-01-01

    Fuel cells, which combine H2 and O2 electrochemically to produce electricity with H2O as the by-product, will become widespread in the near future owing to their high power density, low greenhouse gas emissions, quality, reliability and portability. Among all types of fuel cells, the Proton Exchange Membrane Fuel Cell (PEMFC) is the most attractive for residential and automotive use due to its low operating temperature, silent operation, quick start-up characteristics and better performance. The T-1000 1.2 kW PEMFC is mainly used for communications backup power applications because of its high reliability, simplicity and ease of maintenance in the telecommunication sector, utilities, government, etc. This paper discusses the features of the T-1000 PEMFC, the production losses due to power outages in the US and in different parts of the globe, and the advantages of using it in different sectors to reduce the production losses caused by power outages. This work focuses on the experimental data and the computational data for load, P, V, A and H2 consumed under laboratory conditions at the Power Lab in Victoria University, Melbourne. The paper also describes various load, P, V and A curves recorded at regular intervals for both the experimental and computational data. The work notably shows the benefit of using the T-1000 1.2 kW PEMFC for the residential, automobile, government and telecom sectors. (author)

  9. Power absorption of high-frequency electromagnetic waves in a partially ionized magnetized plasma

    International Nuclear Information System (INIS)

    Guo Bin; Wang Xiaogang

    2005-01-01

    Power absorption of high-frequency electromagnetic waves in a uniformly magnetized plasma layer covering a highly conducting surface is studied under atmospheric conditions. It is assumed that the system consists not only of electrons and positive ions but of negative ions as well. Using a general formula derived in our previous work [B. Guo and X. G. Wang, Plasma Sci. Tech. 7, 2645 (2005)], the total power absorption in the plasma layer with multiple reflections between an air-plasma interface and the conducting surface is computed. The results show that although the existence of negative ions greatly reduces the total power absorption, the magnetization of the plasma can, however, partially enhance it. The parameter dependence of these effects is calculated and discussed.

  10. Computing in high energy physics

    International Nuclear Information System (INIS)

    Hertzberger, L.O.; Hoogland, W.

    1986-01-01

    This book deals with advanced computing applications in physics, and in particular in high energy physics environments. The main subjects covered are networking; vector and parallel processing; and embedded systems. Also examined are topics such as operating systems, future computer architectures and commercial computer products. The book presents solutions that are foreseen as coping, in the future, with computing problems in experimental and theoretical High Energy Physics. In the experimental environment the large amounts of data to be processed offer special problems on-line as well as off-line. For on-line data reduction, embedded special purpose computers, which are often used for trigger applications are applied. For off-line processing, parallel computers such as emulator farms and the cosmic cube may be employed. The analysis of these topics is therefore a main feature of this volume

  11. A powerful way of cooling computer chip using liquid metal with low melting point as the cooling fluid

    Energy Technology Data Exchange (ETDEWEB)

    Li Teng; Lv Yong-Gang [Chinese Academy of Sciences, Beijing (China). Cryogenic Lab.; Chinese Academy of Sciences, Beijing (China). Graduate School; Liu Jing; Zhou Yi-Xin [Chinese Academy of Sciences, Beijing (China). Cryogenic Lab.

    2006-12-15

    With the improvement of computational speed, thermal management has become a serious concern in computer systems. CPU chips are being squeezed into tighter and tighter spaces with no more room for heat to escape. Total power-dissipation levels now reside at about 110 W, and peak power densities are reaching 400-500 W/mm² and are still steadily climbing. As a result, higher performance and greater reliability are extremely tough to attain. Since the standard conduction and forced-air convection techniques are no longer able to provide adequate cooling for sophisticated electronic systems, new solutions are being looked into: liquid cooling, thermoelectric cooling, heat pipes, and vapor chambers. In this paper, we investigate a novel method to significantly lower the chip temperature using a liquid metal with a low melting point as the cooling fluid. Liquid gallium was adopted to test the feasibility of this cooling approach, owing to its low melting point of 29.7 °C and its high thermal conductivity and heat capacity. A series of experiments with different flow rates and heat dissipation rates were performed. The cooling capacity and reliability of the liquid metal were compared with those of water-cooling, and very attractive results were obtained. Finally, a general criterion was introduced to evaluate the difference in cooling performance between liquid metal cooling and water-cooling. The results indicate that the temperature of the computer chip can be significantly reduced with increasing flow rate of liquid gallium, which suggests that an even higher power dissipation density can be achieved with a large flow of liquid gallium and a large heat dissipation area. The concept discussed in this paper is expected to provide a powerful cooling strategy for notebook PCs, desktop PCs and large computers. It can also be extended to wider areas involving thermal management at high heat generation rates. (orig.)

  12. Computing Active Power Losses Using a Mathematical Model of a Regulated Street Luminaire

    Directory of Open Access Journals (Sweden)

    Roman Sikora

    2018-05-01

    Full Text Available Before the use of regulated street luminaires with variable power and luminous flux, computations were performed using constant values for their electrical and photometric parameters. At present, where such lighting is in use, it is no longer possible to base calculations on such assumptions. Computations of energy and power losses, for example, need to be performed for all dimming levels and based on the applied regulation algorithm. Based on measurements carried out on regulated luminaires, it was found that certain electrical parameters have a nonlinear dependence on the dimming level. Electrical parameters were also observed to depend on the value of the supply voltage. The results of the measurements are presented in this article. Failure to take account of power losses in computations of the energy efficiency of street lighting in accordance with the applicable EN 13201 standard causes values of energy efficiency indicators to be overstated. Power loss computations are presented in this article for a sample street lighting system with regulated luminaires, for the whole range of dimming levels and additionally for fluctuations of ±10% in the supply voltage. In addition, a mathematical model of a regulated luminaire is constructed with the use of regression methods, and a practical application of that model is described.
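
    A minimal sketch of how a regression model of luminaire power versus dimming level could be built is shown below; the data points are synthetic placeholders, not the article's measurements, and the quadratic form is just one plausible choice.

```python
# Fit a simple regression model of luminaire active power vs. dimming level.
# The data points below are synthetic placeholders (NOT the article's
# measurements); a quadratic polynomial is used as one plausible model form.
import numpy as np

dim_level = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])      # dimming level (p.u.)
power_w   = np.array([9.5, 16.0, 30.5, 46.0, 62.5, 80.0])  # measured power (W)

coeffs = np.polyfit(dim_level, power_w, deg=2)   # quadratic fit P(d)
model = np.poly1d(coeffs)

for d in (0.25, 0.5, 0.75):
    print("dimming %.2f -> estimated power %.1f W" % (d, model(d)))

# Line losses can then be evaluated per dimming level, e.g. P_loss = I**2 * R
# with I = model(d) / (U * power_factor) for an assumed line resistance R,
# supply voltage U and power factor.
```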

  13. Powering the High-Luminosity Triplets

    Science.gov (United States)

    Ballarino, A.; Burnet, J. P.

    The powering of the magnets in the LHC High-Luminosity Triplets requires production and transfer of more than 150 kA of DC current. High precision power converters will be adopted, and novel High Temperature Superconducting (HTS) current leads and MgB2 based transfer lines will provide the electrical link between the power converters and the magnets. This chapter gives an overview of the systems conceived in the framework of the LHC High-Luminosity upgrade for feeding the superconducting magnet circuits. The focus is on requirements, challenges and novel developments.

  14. Situation awareness and trust in computer-based procedures in nuclear power plant operations

    Energy Technology Data Exchange (ETDEWEB)

    Throneburg, E. B.; Jones, J. M. [AREVA NP Inc., 7207 IBM Drive, Charlotte, NC 28262 (United States)

    2006-07-01

    Situation awareness and trust are two issues that need to be addressed in the design of computer-based procedures for nuclear power plants. Situation awareness, in relation to computer-based procedures, concerns the operators' knowledge of the plant's state while following the procedures. Trust concerns the amount of faith that the operators put into the automated procedures, which can affect situation awareness. This paper first discusses the advantages and disadvantages of computer-based procedures. It then discusses the known aspects of situation awareness and trust as applied to computer-based procedures in nuclear power plants. An outline of a proposed experiment is then presented that includes methods of measuring situation awareness and trust so that these aspects can be analyzed for further study. (authors)

  15. Situation awareness and trust in computer-based procedures in nuclear power plant operations

    International Nuclear Information System (INIS)

    Throneburg, E. B.; Jones, J. M.

    2006-01-01

    Situation awareness and trust are two issues that need to be addressed in the design of computer-based procedures for nuclear power plants. Situation awareness, in relation to computer-based procedures, concerns the operators' knowledge of the plant's state while following the procedures. Trust concerns the amount of faith that the operators put into the automated procedures, which can affect situation awareness. This paper first discusses the advantages and disadvantages of computer-based procedures. It then discusses the known aspects of situation awareness and trust as applied to computer-based procedures in nuclear power plants. An outline of a proposed experiment is then presented that includes methods of measuring situation awareness and trust so that these aspects can be analyzed for further study. (authors)

  16. Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments

    Directory of Open Access Journals (Sweden)

    Jose M. Moya

    2012-08-01

    Full Text Available Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, require constantly increasing high computational demands in order to process data and offer services to users. The nature of these applications imply the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSNs infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.

  17. Ubiquitous green computing techniques for high demand applications in Smart environments.

    Science.gov (United States)

    Zapater, Marina; Sanchez, Cesar; Ayala, Jose L; Moya, Jose M; Risco-Martín, José L

    2012-01-01

    Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, require constantly increasing high computational demands in order to process data and offer services to users. The nature of these applications imply the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSNs infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.
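
    The sketch below illustrates the general idea of heterogeneity-aware, energy-minimizing task placement (offloading light tasks to idle low-power nodes); the node figures and greedy policy are assumptions for illustration, not the allocation algorithm of the paper.

```python
# Toy energy-aware task placement: assign each task to the feasible node with
# the lowest incremental energy. Node power figures and the greedy policy are
# illustrative assumptions, not the paper's allocation algorithm.
from dataclasses import dataclass

@dataclass
class NodeType:
    name: str
    capacity: float          # compute capacity (normalized ops/s)
    active_power: float      # W when executing
    load: float = 0.0        # currently assigned demand

def incremental_energy(node, task_ops):
    runtime = task_ops / node.capacity
    return node.active_power * runtime      # J spent running this task

def assign(tasks, nodes):
    placement = {}
    for name, (demand, ops) in tasks.items():
        feasible = [n for n in nodes if n.load + demand <= n.capacity]
        best = min(feasible, key=lambda n: incremental_energy(n, ops))
        best.load += demand
        placement[name] = best.name
    return placement

nodes = [NodeType("hpc-blade",  capacity=100.0, active_power=300.0),
         NodeType("gateway",    capacity=10.0,  active_power=8.0),
         NodeType("sensor-hub", capacity=2.0,   active_power=1.5)]

tasks = {"video-analytics":  (60.0, 6000.0),   # (demand, total ops) per task
         "data-aggregation": (1.5, 30.0),
         "alert-filtering":  (0.5, 10.0)}

print(assign(tasks, nodes))   # light tasks end up on the low-power nodes
```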

  18. On the Super-Turing Computational Power of Non-Uniform Families of Neuromata

    Czech Academy of Sciences Publication Activity Database

    Wiedermann, Jiří

    2002-01-01

    Roč. 12, č. 5 (2002), s. 509-516 ISSN 1210-0552. [SOFSEM 2002 Workshop on Soft Computing. Milovy, 28.11.2002-29.11.2002] R&D Projects: GA ČR GA201/00/1489 Institutional research plan: AV0Z1030915 Keywords : neuromata * Turing machines with advice * non-uniform computational complexity * super-Turing computational power Subject RIV: BA - General Mathematics

  19. Review of Power System Stability with High Wind Power Penetration

    DEFF Research Database (Denmark)

    Hu, Rui; Hu, Weihao; Chen, Zhe

    2015-01-01

    analyzing methods and stability improvement approaches. With increasing wind power penetration, system balancing and the reduced inertia may cause a big threaten for stable operation of power systems. To mitigate or eliminate the wind impacts for high wind penetration systems, although the practical......This paper presents an overview of researches on power system stability with high wind power penetration including analyzing methods and improvement approaches. Power system stability issues can be classified diversely according to different considerations. Each classified issue has special...... and reliable choices currently are the strong outside connections or sufficient reserve capacity constructions, many novel theories and approaches are invented to investigate the stability issues, looking forward to an extra-high penetration or totally renewable resource based power systems. These analyzing...

  20. Parallel Backprojection: A Case Study in High-Performance Reconfigurable Computing

    Directory of Open Access Journals (Sweden)

    Cordes Ben

    2009-01-01

    Full Text Available High-performance reconfigurable computing (HPRC) is a novel approach to provide large-scale computing power to modern scientific applications. Using both general-purpose processors and FPGAs allows application designers to exploit fine-grained and coarse-grained parallelism, achieving high degrees of speedup. One scientific application that benefits from this technique is backprojection, an image formation algorithm that can be used as part of a synthetic aperture radar (SAR) processing system. We present an implementation of backprojection for SAR on an HPRC system. Using simulated data taken at a variety of ranges, our implementation runs over 200 times faster than a similar software program, with an overall application speedup better than 50x. The backprojection application is easily parallelizable, achieving near-linear speedup when run on multiple nodes of a clustered HPRC system. The results presented can be applied to other systems and other algorithms with similar characteristics.
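
    A stripped-down, CPU-only sketch of the backprojection kernel itself (not the FPGA implementation described above) is given below; the geometry and radar parameters are arbitrary illustrative values, and a single point target is simulated and then imaged.

```python
# Minimal time-domain SAR backprojection demo with one simulated point target.
# All geometry and radar parameters are arbitrary illustrative values.
import numpy as np

c = 3e8
fc = 10e9                                   # carrier frequency (Hz)
B = 500e6                                   # bandwidth -> range res. c/(2B)
apertures = np.linspace(-50, 50, 128)       # platform positions along x (m)
target = np.array([5.0, 1000.0])            # point target (x, y) in metres

# Fast-time range bins of the range-compressed signal
r_bins = np.linspace(950.0, 1050.0, 1024)

# Simulate range-compressed returns: sinc envelope + carrier phase
data = np.zeros((apertures.size, r_bins.size), dtype=complex)
for k, xa in enumerate(apertures):
    R = np.hypot(target[0] - xa, target[1])
    env = np.sinc(2 * B / c * (r_bins - R))
    data[k] = env * np.exp(-4j * np.pi * fc * R / c)

# Backprojection onto an image grid: interpolate each range profile at the
# pixel's round-trip range and remove the carrier phase before summing
xs = np.linspace(-10, 20, 200)
ys = np.linspace(980, 1020, 200)
img = np.zeros((ys.size, xs.size), dtype=complex)
for k, xa in enumerate(apertures):
    R_pix = np.hypot(xs[None, :] - xa, ys[:, None])
    samp = np.interp(R_pix.ravel(), r_bins, data[k].real) + \
           1j * np.interp(R_pix.ravel(), r_bins, data[k].imag)
    img += samp.reshape(R_pix.shape) * np.exp(4j * np.pi * fc * R_pix / c)

peak = np.unravel_index(np.abs(img).argmax(), img.shape)
print("image peak near x=%.1f m, y=%.1f m" % (xs[peak[1]], ys[peak[0]]))
```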

  1. Parallel Backprojection: A Case Study in High-Performance Reconfigurable Computing

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available High-performance reconfigurable computing (HPRC) is a novel approach to provide large-scale computing power to modern scientific applications. Using both general-purpose processors and FPGAs allows application designers to exploit fine-grained and coarse-grained parallelism, achieving high degrees of speedup. One scientific application that benefits from this technique is backprojection, an image formation algorithm that can be used as part of a synthetic aperture radar (SAR) processing system. We present an implementation of backprojection for SAR on an HPRC system. Using simulated data taken at a variety of ranges, our implementation runs over 200 times faster than a similar software program, with an overall application speedup better than 50x. The backprojection application is easily parallelizable, achieving near-linear speedup when run on multiple nodes of a clustered HPRC system. The results presented can be applied to other systems and other algorithms with similar characteristics.

  2. A Heterogeneous High-Performance System for Computational and Computer Science

    Science.gov (United States)

    2016-11-15

    expand the research infrastructure at the institution but also to enhance the high-performance computing training provided to both undergraduate and... cloud computing, supercomputing, and the availability of cheap memory and storage led to enormous amounts of data to be sifted through in forensic... High-Performance Computing (HPC) tools that can be integrated with existing curricula and support our research to modernize and dramatically advance

  3. On Computational Power of Quantum Read-Once Branching Programs

    Directory of Open Access Journals (Sweden)

    Farid Ablayev

    2011-03-01

    Full Text Available In this paper we review our current results concerning the computational power of quantum read-once branching programs. First of all, based on the circuit presentation of quantum branching programs and our variant of the quantum fingerprinting technique, we show that any Boolean function with a linear polynomial presentation can be computed by a quantum read-once branching program using a relatively small (usually logarithmic in the size of the input) number of qubits. Then we show that the described class of Boolean functions is closed under polynomial projections.

  4. Bringing high-performance computing to the biologist's workbench: approaches, applications, and challenges

    International Nuclear Information System (INIS)

    Oehmen, C S; Cannon, W R

    2008-01-01

    Data-intensive and high-performance computing are poised to significantly impact the future of biological research, which is increasingly driven by the prevalence of high-throughput experimental methodologies for genome sequencing, transcriptomics, proteomics, and other areas. Large centers (such as NIH's National Center for Biotechnology Information, The Institute for Genomic Research, and the DOE's Joint Genome Institute) have made extensive use of multiprocessor architectures to deal with some of the challenges of processing, storing and curating exponentially growing genomic and proteomic datasets, thus enabling users to rapidly access a growing public data source, as well as use analysis tools transparently on high-performance computing resources. Applying this computational power to single-investigator analysis, however, often relies on users to provide their own computational resources, forcing them to endure the learning curve of porting, building, and running software on multiprocessor architectures. Solving the next generation of large-scale biology challenges using multiprocessor machines, from small clusters to emerging petascale machines, can most practically be realized if this learning curve can be minimized through a combination of workflow management, data management and resource allocation as well as intuitive interfaces and compatibility with existing common data formats.

  5. Computation of Lie transformations from a power series: Bounds and optimum truncation

    International Nuclear Information System (INIS)

    Gjaja, I.

    1996-01-01

    The problem considered is the computation of an infinite product (composition) of Lie transformations generated by homogeneous polynomials of increasing order from a given asymptotic power series. Bounds are computed for the infinitesimal form of the Lie transformations and for the domain of analyticity of the first n of them. Even when the power series is convergent, the estimates exhibit a factorial-type growth, and thus do not guarantee convergence of the product. The optimum truncation is determined by minimizing the remainder after the first n Lie transformations have been applied

  6. Power enhancement of piezoelectric transformers for power supplies

    DEFF Research Database (Denmark)

    Ekhtiari, Marzieh; Steenstrup, Anders Resen; Zhang, Zhe

    2016-01-01

    This paper studies power enhancement of piezoelectric transformers to be used in inductorless, half-bridge, piezoelectric-based switch mode power supplies for driving a piezo actuator motor system in a high strength magnetic environment for magnetic resonance imaging and computed tomography applications. A new multi-element piezo transformer solution is proposed along with a dual mode piezo transformer, providing power scaling and potentially improving the internal heat-up of a high power piezo transformer system.

  7. The impact of changing computing technology on EPRI [Electric Power Research Institute] nuclear analysis codes

    International Nuclear Information System (INIS)

    Breen, R.J.

    1988-01-01

    The Nuclear Reload Management Program of the Nuclear Power Division (NPD) of the Electric Power Research Institute (EPRI) has the responsibility for initiating and managing applied research in selected nuclear engineering analysis functions for nuclear utilities. The computer systems that result from the research projects consist of large FORTRAN programs containing elaborate computational algorithms used to assess such areas as core physics, fuel performance, thermal hydraulics, and transient analysis. This paper summarizes a study of computing technology trends sponsored by the NPD. The approach taken was to interview hardware and software vendors, industry observers, and utility personnel focusing on expected changes that will occur in the computing industry over the next 3 to 5 yr. Particular emphasis was placed on how these changes will impact engineering/scientific computer code development, maintenance, and use. In addition to the interviews, a workshop was held with attendees from EPRI, Power Computing Company, industry, and utilities. The workshop provided a forum for discussing issues and providing input into EPRI's long-term computer code planning process

  8. High performance computing and communications: Advancing the frontiers of information technology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  9. Mesh influence on the fire computer modeling in nuclear power plants

    Directory of Open Access Journals (Sweden)

    D. Lázaro

    2018-04-01

    Full Text Available Fire computer models allow the consequences of real fire scenarios to be studied. Their use in nuclear power plants has increased with the new regulations to apply risk-informed, performance-based methods for the analysis and design of fire safety solutions. The selection of the cell size is very important in these kinds of models. The mesh must establish a compromise between the geometry adjustment, the resolution of the equations and the computation time. This paper studies the impact of several cell sizes, using the fire computer model FDS, to evaluate their relative effect on the final simulation results. To that end, we have employed several scenarios of interest for nuclear power plants. The conclusions offer relevant data for users and show some cell sizes that can be selected to guarantee the quality of the simulations and reduce the uncertainty of the results.
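
    As one concrete example of how cell size is chosen in practice, the sketch below computes the characteristic fire diameter D* and the cell-size band corresponding to the frequently cited D*/δx ratio of 4 to 16; this is a general rule of thumb drawn from the FDS user guidance, not a result of the article, and the 1 MW fire is an arbitrary example.

```python
# Rule-of-thumb FDS mesh sizing: characteristic fire diameter D* and the
# cell-size band given by the commonly cited resolution ratio D*/dx = 4..16.
# Ambient properties are standard values; the 1 MW fire is an arbitrary example.
import math

def characteristic_fire_diameter(q_kw, rho=1.204, cp=1.005, t_inf=293.0, g=9.81):
    """D* = (Q / (rho * cp * T_inf * sqrt(g)))**(2/5), with Q in kW."""
    return (q_kw / (rho * cp * t_inf * math.sqrt(g))) ** 0.4

def cell_size_range(q_kw, ratios=(4, 16)):
    d_star = characteristic_fire_diameter(q_kw)
    coarse = d_star / ratios[0]   # D*/dx = 4  -> coarsest recommended cell
    fine = d_star / ratios[1]     # D*/dx = 16 -> finest of the usual band
    return d_star, coarse, fine

d_star, coarse, fine = cell_size_range(1000.0)      # 1 MW design fire
print("D* = %.2f m -> cell size between %.2f m (coarse) and %.2f m (fine)"
      % (d_star, coarse, fine))
```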

  10. Autonomously managed high power systems

    International Nuclear Information System (INIS)

    Weeks, D.J.; Bechtel, R.T.

    1985-01-01

    The need for autonomous power management capabilities will increase as the power levels of spacecraft increase into the multi-100 kW range. The quantity of labor-intensive ground and crew support consumed by the 9 kW Skylab cannot be afforded in support of a 75-300 kW Space Station or high power earth orbital and interplanetary spacecraft. Marshall Space Flight Center is managing a program to develop the necessary technologies for high power system autonomous management. To date a reference electrical power system and automation approaches have been defined. A test facility for evaluation and verification of management algorithms and hardware has been designed, with the capability of the first of the three power channels nearing completion.

  11. ICAN: High power neutral beam generation

    International Nuclear Information System (INIS)

    Moustaizis, S.D.; Lalousis, P.; Perrakis, K.; Auvray, P.; Larour, J.; Ducret, J.E.; Balcou, P.

    2015-01-01

    During the last few years there has been increasing interest in the development of alternative new high power negative ion sources for Tokamak applications. The proposed new neutral beam device presents a number of advantages with respect to the current density, the acceleration voltage, the relatively compact dimensions of the negative ion source, and the coupling of a high power laser beam for photo-neutralization of the negative ion beam. Here we numerically investigate, using a multi-fluid 1-D code, the acceleration and extraction of a high power ion beam from a Magnetically Insulated Diode (MID). The diode configuration will be coupled to a high power device capable of extracting a current up to a few kA with an accelerating voltage up to MeV. An efficiency of up to 92% of the coupling of the laser beam is required in order to obtain a high power, up to GW, neutral beam. The new high energy, high average power, high efficiency (up to 30%) ICAN fiber laser is proposed for both the plasma generation and the photo-neutralizer configuration. (authors)

  12. Charging system of ECRH high-voltage power supply and its control system

    International Nuclear Information System (INIS)

    Hu Guofu; Ding Tonghai; Liu Baohua; Jiang Shufang

    2003-01-01

    The high-voltage power supply (HVPS) of Electron Cyclotron Resonance Heating (ECRH) for HT-7 and HT-7U is presently being constructed. The high voltage (100 kV) energy of the HVPS is stored in capacitor banks, which can power one or two gyrotrons. All operation of the charging system is carried out by the control system, in which the field signals are interfaced to a programmable logic controller (PLC). The use of a PLC not only simplifies the control system but also enhances its reliability. The software, written using configuration software installed in the master computer, allows remote and multiple-operator control, and status and data information are also remotely available.

  13. Green computing: power optimisation of vfi-based real-time multiprocessor dataflow applications

    NARCIS (Netherlands)

    Ahmad, W.; Holzenspies, P.K.F.; Stoelinga, Mariëlle Ida Antoinette; van de Pol, Jan Cornelis

    2015-01-01

    Execution time is no longer the only performance metric for computer systems. In fact, a trend is emerging to trade raw performance for energy savings. Techniques like Dynamic Power Management (DPM, switching to a low-power state) and Dynamic Voltage and Frequency Scaling (DVFS, throttling processor frequency) …
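
    A minimal sketch (Python) of the DVFS trade-off this record refers to, assuming the usual dynamic-power model P = C_eff * V^2 * f; the operating points and effective capacitance below are hypothetical.

        def task_energy(cycles, freq_hz, volt, cap_eff=1e-9):
            """Dynamic energy (J) to run a fixed number of cycles at a DVFS point.

            With P = C_eff * V^2 * f and runtime t = cycles / f, the frequency
            cancels and E = C_eff * V^2 * cycles: lowering the voltage saves
            energy, at the cost of a longer execution time."""
            return cap_eff * volt ** 2 * freq_hz * (cycles / freq_hz)

        # Hypothetical operating points: (frequency, voltage)
        for f, v in [(2.0e9, 1.2), (1.0e9, 0.9)]:
            t = 1e9 / f
            print(f"{f/1e9:.1f} GHz @ {v} V: E = {task_energy(1e9, f, v):.2f} J, t = {t:.2f} s")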

  14. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  15. High Temperature, High Power Piezoelectric Composite Transducers

    Science.gov (United States)

    Lee, Hyeong Jae; Zhang, Shujun; Bar-Cohen, Yoseph; Sherrit, Stewart

    2014-01-01

    Piezoelectric composites are a class of functional materials consisting of piezoelectric active materials and non-piezoelectric passive polymers, mechanically attached together to form different connectivities. These composites have several advantages compared to conventional piezoelectric ceramics and polymers, including improved electromechanical properties, mechanical flexibility and the ability to tailor properties by using several different connectivity patterns. These advantages have led to the improvement of overall transducer performance, such as transducer sensitivity and bandwidth, resulting in rapid implementation of piezoelectric composites in medical ultrasound imaging and other acoustic transducers. Recently, new piezoelectric composite transducers have been developed with optimized composite components that have improved thermal stability and mechanical quality factors, making them promising candidates for high temperature, high power transducer applications, such as therapeutic ultrasound, high power ultrasonic wirebonding, high temperature non-destructive testing, and downhole energy harvesting. This paper will present recent developments of piezoelectric composite technology for high temperature and high power applications. The concerns and limitations of using piezoelectric composites will also be discussed, and the expected future research directions will be outlined. PMID:25111242

  16. A 380 V High Efficiency and High Power Density Switched-Capacitor Power Converter using Wide Band Gap Semiconductors

    DEFF Research Database (Denmark)

    Fan, Lin; Knott, Arnold; Jørgensen, Ivan Harald Holger

    2018-01-01

    State-of-the-art switched-capacitor DC-DC power converters mainly focus on low voltage and/or high power applications. However, at high voltage and low power levels, new designs are anticipated to emerge, and a power converter that has both high efficiency and high power density is highly desirable. This paper presents such a high voltage, low power switched-capacitor DC-DC converter with an input voltage up to 380 V (compatible with rectified European mains) and an output power experimentally validated up to 21.3 W. The wide band gap semiconductor devices, GaN switches and SiC diodes, are combined to compose the proposed power stage. Their switching and loss characteristics are analyzed with transient waveforms and thermal images. Different isolated driving circuits are compared and a compact isolated half-bridge driving circuit is proposed. Full-load efficiencies of 98.3% and 97.6% are achieved …

  17. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  18. A Switched Capacitor Based AC/DC Resonant Converter for High Frequency AC Power Generation

    Directory of Open Access Journals (Sweden)

    Cuidong Xu

    2015-09-01

    A switched-capacitor-based AC-DC resonant power converter is proposed for converting the output of high frequency power generation. This converter is suitable for small scale, high frequency wind power generation. It has a high conversion ratio, providing a step down from high voltage to low voltage for easy use. The voltage conversion ratio of conventional switched-capacitor power converters is fixed at n, 1/n or −1/n (where n is the number of switched-capacitor cells). In this paper, a circuit which can provide voltage conversion ratios of n, 1/n and 2n/m is presented (n is the number of step-up switched-capacitor cells, m the number of step-down cells). The conversion ratio can be changed greatly by using only two switches. A resonant tank is used to assist zero-current switching, and hence the current spike that usually exists in a classical switched-capacitor converter can be eliminated. Both easy operation and high efficiency are possible. Principles of operation, computer simulations and experimental results of the proposed circuit are presented, together with general analysis and design methods. The experimental results verify the theoretical analysis of high frequency AC power generation.
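
    A tiny sketch (Python) of the ideal conversion ratios quoted above (n, 1/n and 2n/m, neglecting losses); the function name and the example cell counts are hypothetical.

        def output_voltage(v_in, mode, n=1, m=1):
            """Ideal output voltage for the three conversion ratios quoted in the record."""
            ratios = {"step_up": n, "step_down": 1.0 / n, "mixed": 2.0 * n / m}
            return v_in * ratios[mode]

        # e.g. a 400 V high-frequency source stepped down with n = 8 cells
        print(output_voltage(400.0, "step_down", n=8))  # 50.0 V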

  19. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress so far in harmonising the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially …

  20. High-speed extended-term time-domain simulation for online cascading analysis of power system

    Science.gov (United States)

    Fu, Chuan

    A high-speed extended-term (HSET) time domain simulator (TDS), intended to become part of an energy management system (EMS), has been newly developed for use in online extended-term dynamic cascading analysis of power systems. HSET-TDS includes the following attributes for providing situational awareness of high-consequence events: (i) online analysis, including n-1 and n-k events, (ii) the ability to simulate both fast and slow dynamics for 1-3 hours in advance, (iii) inclusion of rigorous protection-system modeling, (iv) intelligence for corrective action identification, storage, and fast retrieval, and (v) high-speed execution. Very fast on-line computational capability is the most desired attribute of this simulator. Based on the process of solving the differential algebraic equations describing the dynamics of a power system, HSET-TDS seeks computational efficiency at each of the following hierarchical levels: (i) hardware, (ii) strategies, (iii) integration methods, (iv) nonlinear solvers, and (v) linear solver libraries. This thesis first describes the Hammer-Hollingsworth 4 (HH4) implicit integration method. Like the trapezoidal rule, HH4 is symmetrically A-stable, but it possesses greater high-order precision (h^4) than the trapezoidal rule. Such precision enables larger integration steps and therefore improves simulation efficiency for variable step size implementations. This thesis provides the underlying theory on which we advocate the use of HH4 over other numerical integration methods for power system time-domain simulation. Second, motivated by the need to perform high speed extended-term time domain simulation (HSET-TDS) for on-line purposes, this thesis presents principles for designing numerical solvers of differential algebraic systems associated with power system time-domain simulation, including DAE construction strategies (Direct Solution Method), integration methods (HH4), nonlinear solvers (Very Dishonest Newton), and linear solvers (SuperLU). We have …
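
    The HH4 scheme mentioned above is commonly identified with the two-stage Gauss-Legendre implicit Runge-Kutta method of order 4 attributed to Hammer and Hollingsworth; that identification, and the simple fixed-point stage solver in the Python sketch below (a production DAE solver would use a Newton-type scheme such as the Very Dishonest Newton method named in the record), are assumptions for illustration only.

        import math

        # Butcher tableau of the two-stage Gauss-Legendre IRK (order 4)
        S3 = math.sqrt(3.0)
        A = [[0.25, 0.25 - S3 / 6.0],
             [0.25 + S3 / 6.0, 0.25]]
        B = [0.5, 0.5]
        C = [0.5 - S3 / 6.0, 0.5 + S3 / 6.0]

        def gauss2_step(f, t, y, h, iters=50):
            """One implicit step; stage slopes solved by fixed-point iteration."""
            k = [f(t, y), f(t, y)]
            for _ in range(iters):
                k = [f(t + C[i] * h, y + h * sum(A[i][j] * k[j] for j in range(2)))
                     for i in range(2)]
            return y + h * sum(B[i] * k[i] for i in range(2))

        # Mildly stiff scalar test problem y' = -50 (y - cos t), y(0) = 0
        f = lambda t, y: -50.0 * (y - math.cos(t))
        t, y = 0.0, 0.0
        for _ in range(50):
            y = gauss2_step(f, t, y, 0.02)
            t += 0.02
        print(t, y)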

  1. High Power Density Power Electronic Converters for Large Wind Turbines

    DEFF Research Database (Denmark)

    Senturk, Osman Selcuk

    In large wind turbines (in the MW and multi-MW ranges), which are extensively utilized in wind power plants, full-scale medium voltage (MV) multi-level (ML) voltage source converters (VSCs) are nowadays increasingly employed for interfacing these wind turbines with electricity grids. For these VSCs, high power density is required due to limited turbine nacelle space. Also, high reliability is required since the maintenance cost of these remotely located wind turbines is quite high and the turbines operate under harsh conditions. In order to select a high power density and high reliability VSC solution for wind turbines, first, the VSC topology and the switch technology to be employed should be specified such that the highest possible power density and reliability are attained. Then, this qualitative approach should be complemented with the power density and reliability …

  2. How to compute the power of a steam turbine with condensation, knowing the steam quality of saturated steam in the turbine discharge

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez Albarran, Manuel Jaime; Krever, Marcos Paulo Souza [Braskem, Sao Paulo, SP (Brazil)

    2009-07-01

    To compute the power and the thermodynamic performance of a steam turbine with condensation, it is necessary to know the quality of the steam in the turbine discharge, together with process variables that permit the enthalpy of the saturated steam to be identified with high precision. This paper proposes installing an operational device that expands steam from a high pressure point on the turbine shell to atmosphere, with pressure and temperature measured at both points. Locating these values on the Mollier chart gives the steam quality, and with this data the enthalpy of the saturated steam can be computed. With the support of this small instrument, using the ASME correlations to determine the equilibrium temperature and knowing the discharge pressure at the inlet of the surface condenser, the absolute enthalpy of the discharged steam can be computed with high precision and used to determine the power and thermodynamic efficiency of the turbine. (author)
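
    A minimal sketch (Python) of the calculation described above: the wet-steam enthalpy follows from the quality, h = hf + x*hfg, and the shaft power from the enthalpy drop. The saturation properties and flow rate below are rough illustrative numbers, not values from the paper.

        def wet_steam_enthalpy(h_f, h_fg, quality):
            """Specific enthalpy of wet steam, h = h_f + x * h_fg (kJ/kg)."""
            return h_f + quality * h_fg

        def turbine_power_kw(m_dot_kg_s, h_inlet, h_exhaust):
            """Ideal shaft power (kW) from the enthalpy drop across the turbine."""
            return m_dot_kg_s * (h_inlet - h_exhaust)

        # Illustrative numbers: exhaust at ~0.1 bar (h_f ~ 192, h_fg ~ 2392 kJ/kg), quality x = 0.90
        h_out = wet_steam_enthalpy(192.0, 2392.0, 0.90)   # ~2345 kJ/kg
        h_in = 3350.0                                     # superheated inlet enthalpy, kJ/kg
        print(f"{turbine_power_kw(50.0, h_in, h_out) / 1e3:.1f} MW")  # ~50 MW at 50 kg/s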

  3. Several problems of algorithmization in integrated computation programs on third generation computers for short circuit currents in complex power networks

    Energy Technology Data Exchange (ETDEWEB)

    Krylov, V.A.; Pisarenko, V.P.

    1982-01-01

    Methods of modeling complex power networks with short circuits are described. The methods are implemented in integrated programs for computing short-circuit currents and network equivalents in electrical networks with a large number of branch points (up to 1000) on a computer with a limited online memory capacity (M equals 4030 for the computer).

  4. A computer model of the MFTF-B neutral beam accel dc power supply

    International Nuclear Information System (INIS)

    Wilson, J.H.

    1983-01-01

    Using the SCEPTRE circuit modeling code, a computer model was developed for the MFTF Neutral Beam Power Supply System (NBPSS) Accel DC Power Supply (ADCPS). The ADCPS provides 90 kV, 88 A, to the Accel Modulator. Because of the complex behavior of the power supply, use of the computer model is necessary to adequately understand the power supply's behavior over a wide range of load conditions and faults. The model developed includes all the circuit components and parameters, and some of the stray values. The model has been well validated for transients with times on the order of milliseconds and, with one exception, for steady-state operation. When using a circuit modeling code for a system with a wide range of time constants, it can become impossible to obtain good solutions for all time ranges at once. The present model concentrates on the millisecond-range transients because the compensating capacitor bank tends to isolate the power supply from the load for faster transients. Attempts to include stray circuit elements with time constants in the microsecond and shorter range have had little success because of the huge increases in computing time that result. The model has been successfully extended to include the accel modulator

  5. Radiation physics of high power spallation targets. State of the art simulation methods and experiments, the 'European Spallation Source' (ESS)

    International Nuclear Information System (INIS)

    Filges, D.; Cloth, P.; Neef, R.D.; Schaal, H.

    1998-01-01

    Particle transport and nuclear interactions in planned high power spallation targets with GeV proton beams can be simulated using well-developed Monte Carlo transport methods. This includes the available high energy radiation transport codes and the low energy systems developed earlier for reactor physics and fusion technology. Monte Carlo simulation codes and applied methods are discussed. The capabilities of the existing worldwide state-of-the-art computer code systems are demonstrated. Results of computational studies for the 'European Spallation Source' (ESS) mercury high power target station are given. The needs for spallation-related data and planned experiments are shown. (author)

  6. Commercialization issues and funding opportunities for high-performance optoelectronic computing modules

    Science.gov (United States)

    Hessenbruch, John M.; Guilfoyle, Peter S.

    1997-01-01

    Low power, optoelectronic integrated circuits are being developed for high speed switching and data processing applications. These high performance optoelectronic computing modules consist of three primary components: vertical cavity surface emitting lasers, diffractive optical interconnect elements, and detector/amplifier/laser driver arrays. Following the design and fabrication of an HPOC module prototype, selected commercial funding sources will be evaluated to support a product development stage. These include the formation of a strategic alliance with one or more microprocessor or telecommunications vendors, and/or equity investment from one or more venture capital firms.

  7. What Physicists Should Know About High Performance Computing - Circa 2002

    Science.gov (United States)

    Frederick, Donald

    2002-08-01

    High Performance Computing (HPC) is a dynamic, cross-disciplinary field that traditionally has involved applied mathematicians, computer scientists, and others primarily from the various disciplines that have been major users of HPC resources - physics, chemistry, engineering, with increasing use by those in the life sciences. There is a technological dynamic that is powered by economic as well as by technical innovations and developments. This talk will discuss practical ideas to be considered when developing numerical applications for research purposes. Even with the rapid pace of development in the field, the author believes that these concepts will not become obsolete for a while, and will be of use to scientists who either are considering, or who have already started down the HPC path. These principles will be applied in particular to current parallel HPC systems, but there will also be references of value to desktop users. The talk will cover such topics as: computing hardware basics, single-cpu optimization, compilers, timing, numerical libraries, debugging and profiling tools and the emergence of Computational Grids.

  8. A high-power versatile wireless power transfer for biomedical implants.

    Science.gov (United States)

    Jiang, Hao; Zhang, Jun Min; Liou, Shy Shenq; Fechter, Richard; Hirose, Shinjiro; Harrison, Michael; Roy, Shuvo

    2010-01-01

    Implantable biomedical actuators are highly desired in modern medicine. However, how to power these biomedical implants remains a challenge, since most of them need more than several hundred mW of power. The air-core radio-frequency transformer (two face-to-face inductive coils) has been the only non-toxic and non-invasive power source for implants for the last three decades [1]. Owing to various technical constraints, the maximum power delivered by this approach is limited; the highest delivered power reported is 275 mW over a 1 cm distance [2]. Also, the delivered power is highly vulnerable to the coils' geometrical arrangement and the electrical properties of the medium around them. In this paper, a novel rotating-magnet based wireless power transfer that can deliver ∼10 W over 1 cm is demonstrated. The delivered power is significantly higher than the existing state of the art. Further, the new method is versatile since there is no need for the impedance matching networks that are highly susceptible to the operating frequency, the coil arrangement and the environment.

  9. Application of computational intelligence techniques for load shedding in power systems: A review

    International Nuclear Information System (INIS)

    Laghari, J.A.; Mokhlis, H.; Bakar, A.H.A.; Mohamad, Hasmaini

    2013-01-01

    Highlights: • The power system blackout history of the last two decades is presented. • Conventional load shedding techniques, their types and limitations are presented. • Applications of intelligent techniques in load shedding are presented. • Intelligent techniques include ANN, fuzzy logic, ANFIS, genetic algorithms and PSO. • A discussion and comparison of these techniques is provided. - Abstract: Recent blackouts around the world call into question the reliability of conventional and adaptive load shedding techniques in avoiding such power outages. To address this issue, reliable techniques are required to provide fast and accurate load shedding to prevent collapse of the power system. Computational intelligence techniques, due to their robustness and flexibility in dealing with complex non-linear systems, could be an option for addressing this problem. Computational intelligence includes techniques like artificial neural networks, genetic algorithms, fuzzy logic control, adaptive neuro-fuzzy inference systems, and particle swarm optimization. Research in these techniques is being undertaken in order to discover means for more efficient and reliable load shedding. This paper provides an overview of these techniques as applied to load shedding in a power system, and compares the advantages of computational intelligence techniques over conventional load shedding techniques. Finally, the paper discusses the limitations of computational intelligence techniques, which restrict their use in real-time load shedding
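
    For contrast with the intelligent techniques surveyed above, the sketch below (Python) shows the conventional staged under-frequency load-shedding logic that such reviews take as their baseline; the frequency thresholds and shed fractions are hypothetical settings, not values from the paper.

        # Hypothetical staged UFLS table: (frequency threshold in Hz, fraction of remaining load shed)
        STAGES = [(49.0, 0.10), (48.8, 0.15), (48.5, 0.20)]

        def load_after_shedding(freq_hz, load_mw):
            """Each stage whose threshold is crossed sheds a fixed fraction of the remaining load."""
            for threshold, fraction in STAGES:
                if freq_hz < threshold:
                    load_mw *= (1.0 - fraction)
            return load_mw

        print(load_after_shedding(48.7, 1000.0))  # first two stages trip -> 765.0 MW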

  10. Computational models for residual creep life prediction of power plant components

    International Nuclear Information System (INIS)

    Grewal, G.S.; Singh, A.K.; Ramamoortry, M.

    2006-01-01

    All high temperature, high pressure power plant components are prone to irreversible visco-plastic deformation by the phenomenon of creep. The steady state creep response as well as the total creep life of a material is related to the operational component temperature through, respectively, exponential and inverse exponential relationships. Minor increases in the component temperature can thus have serious consequences for the creep life and dimensional stability of a plant component. In high temperature steam tubing in power plants, one mechanism by which a significant temperature rise can occur is the growth of a thermally insulating oxide film on the steam side surface. In the present paper, an elegantly simple and computationally efficient technique is presented for predicting the residual creep life of steel components subjected to continual steam side oxide film growth. Similarly, fabrication of high temperature power plant components involves extensive use of welding as the fabrication process of choice, so issues related to the creep life of weldments have to be seriously addressed for safe and continual operation of the welded plant component. Unfortunately, a typical weldment in an engineering structure is a zone of complex microstructural gradation comprising a number of distinct sub-zones, each with distinct meso-scale and micro-scale morphology of the phases and even chemistry, and its creep life prediction presents considerable challenges. The present paper presents a stochastic algorithm which can be used for developing experimental creep-cavitation intensity versus residual life correlations for welded structures. Apart from estimates of the residual life in a mean field sense, the model can be used for predicting the reliability of the plant component in a rigorous probabilistic setting. (author)
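
    The exponential temperature dependence of creep life described above is often captured with the Larson-Miller parameter, LMP = T(C + log10 t_r). The sketch below (Python) uses that textbook relation with hypothetical constants; it is not the paper's oxide-growth or stochastic weldment model.

        import math

        def larson_miller(temp_k, time_h, c=20.0):
            """Larson-Miller parameter: T * (C + log10(t_r)), T in kelvin, t_r in hours."""
            return temp_k * (c + math.log10(time_h))

        def rupture_time_h(temp_k, lmp, c=20.0):
            """Invert the LMP relation to estimate rupture life at a given metal temperature."""
            return 10.0 ** (lmp / temp_k - c)

        # Hypothetical tube rated for 100,000 h at 823 K; a 15 K rise (e.g. from
        # steam-side oxide growth) cuts the rupture-life basis dramatically.
        lmp = larson_miller(823.0, 1.0e5)
        print(f"{rupture_time_h(838.0, lmp):.0f} h")   # roughly a third of the original life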

  11. 77 FR 50720 - Test Documentation for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Science.gov (United States)

    2012-08-22

    ... Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear Regulatory Commission. ACTION: Draft... Computer Software used in Safety Systems of Nuclear Power Plants.'' The DG-1207 is proposed Revision 1 of... for Digital Computer Software Used in Safety Systems of Nuclear Power Plants'' is temporarily...

  12. 10th International Workshop on Parallel Tools for High Performance Computing

    CERN Document Server

    Gracia, José; Hilbrich, Tobias; Knüpfer, Andreas; Resch, Michael; Nagel, Wolfgang

    2017-01-01

    This book presents the proceedings of the 10th International Parallel Tools Workshop, held October 4-5, 2016 in Stuttgart, Germany – a forum to discuss the latest advances in parallel tools. High-performance computing plays an increasingly important role in numerical simulation and modelling in academic and industrial research. At the same time, using large-scale parallel systems efficiently is becoming more difficult. A number of tools addressing parallel program development and analysis have emerged from the high-performance computing community over the last decade, and what may have started as a collection of small helper scripts has now matured into production-grade frameworks. Powerful user interfaces and an extensive body of documentation allow easy usage by non-specialists.

  13. Computer models and simulations of IGCC power plants with Canadian coals

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, L.; Furimsky, E.

    1999-07-01

    In this paper, three steady-state computer models for the simulation of IGCC power plants with Shell, Texaco and BGL (British Gas Lurgi) gasifiers are presented. All the models were based on a study by Bechtel for the Nova Scotia Power Corporation. They were built using the Advanced System for Process Engineering (ASPEN) steady-state simulation software together with Fortran programs developed in house. Each model was integrated from several sections which can be simulated independently, such as coal preparation, gasification, gas cooling, acid gas removal, sulfur recovery, gas turbine, heat recovery steam generation, and steam cycle. A general description of each process, each model's overall structure, capability, testing results, and background references is given. The performance of some Canadian coals on these models is discussed as well. The authors also built a computer model of an IGCC power plant with a Kellogg-Rust-Westinghouse gasifier; however, due to the limited length of the paper, it is not presented here.

  14. CEGB philosophy and experience with fault-tolerant micro-computer application for power plant controls

    International Nuclear Information System (INIS)

    Clinch, D.A.L.

    1986-01-01

    From the mid-1960s until the late 1970s, automatic modulating control of the main boiler plant on CEGB fossil-fired power stations was largely implemented with hard wired electronic equipment. Mid-way through this period, the CEGB formulated a set of design requirements for this type of equipment; these laid particular emphasis on the fault tolerance of a control system and specified the nature of the interfaces with a control desk and with plant regulators. However, the automatic control of an Advanced Gas Cooled Reactor (AGR) is based upon measured values which are derived by processing a large number of thermocouple signals. This is more readily implemented digitally than with hard-wired equipment. Essential to the operation of an AGR power station is a data processing (DP) computer for monitoring the plant; so the first group of AGR power stations, designed in the 1960s, employed their DP computers for modulating control. Since the late 1970s, automatic modulating control of major plants, for new power stations and for re-fits on established power stations, has been implemented with micro-computers. Wherever practicable, the policy formulated earlier for hard-wired equipment has been retained, particularly in respect of the interfaces. This policy forms the foundation of the fault tolerance of these micro-computer systems

  15. Electromagnetic Modeling of Human Body Using High Performance Computing

    Science.gov (United States)

    Ng, Cho-Kuen; Beall, Mark; Ge, Lixin; Kim, Sanghoek; Klaas, Ottmar; Poon, Ada

    Realistic simulation of electromagnetic wave propagation in the actual human body can expedite the investigation of implanted devices harvesting wireless power coupled from external sources. The parallel electromagnetics code suite ACE3P developed at SLAC National Accelerator Laboratory is based on the finite element method for high fidelity accelerator simulation, and can be enhanced to model electromagnetic wave propagation in the human body. Starting with a CAD model of a human phantom that is characterized by a number of tissues, a finite element mesh representing the complex geometries of the individual tissues is built for simulation. Employing an optimal power source with a specific pattern of field distribution, the propagation and focusing of electromagnetic waves in the phantom has been demonstrated. Substantial speedup of the simulation is achieved by using multiple compute cores on supercomputers.

  16. Nuclear power reactor analysis, methods, algorithms and computer programs

    International Nuclear Information System (INIS)

    Matausek, M.V

    1981-01-01

    For a developing country buying its first nuclear power plants from a foreign supplier, regardless of the type and scope of the contract, there is a certain number of activities which have to be performed by local staff and domestic organizations. This particularly applies to the choice of the nuclear fuel cycle strategy and the choice of the type and size of the reactors, to bid parameter specification, bid evaluation and final safety analysis report evaluation, as well as to in-core fuel management activities. In the Nuclear Engineering Department of the Boris Kidric Institute of Nuclear Sciences (NET IBK), continual work is going on related to the following topics: cross section and resonance integral calculations, spectrum calculations, generation of group constants, lattice and cell problems, criticality and global power distribution search, fuel burnup analysis, in-core fuel management procedures, cost analysis and power plant economics, safety and accident analysis, shielding problems and environmental impact studies, etc. The present paper gives the details of the methods developed and the results achieved, with particular emphasis on the NET IBK computer program package for the needs of planning, construction and operation of nuclear power plants. The main problems encountered so far were related to a small working team, the lack of large and powerful computers, the absence of reliable basic nuclear data and a shortage of experimental and empirical results for testing theoretical models. Some of these difficulties have been overcome thanks to bilateral and multilateral cooperation with developed countries, mostly through the IAEA. It is the authors' opinion, however, that mutual cooperation of developing countries, having similar problems and similar goals, could lead to significant results. Some activities of this kind are suggested and discussed. (author)

  17. Computer based training simulator for Hunterston Nuclear Power Station

    International Nuclear Information System (INIS)

    Bowden, R.S.M.; Hacking, D.

    1978-01-01

    For reasons which are stated, the Hunterston-B nuclear power station automatic control system includes a manual over-ride facility. It is therefore essential for the station engineers to be trained to recognise and control all feasible modes of plant and logic malfunction. A training simulator has been built which consists of a replica of the shutdown monitoring panel in the Central Control Room and is controlled by a mini-computer. This paper highlights the computer aspects of the simulator and relevant derived experience, under the following headings: engineering background; shutdown sequence equipment; simulator equipment; features; software; testing; maintenance. (U.K.)

  18. INSPIRED High School Computing Academies

    Science.gov (United States)

    Doerschuk, Peggy; Liu, Jiangjiang; Mann, Judith

    2011-01-01

    If we are to attract more women and minorities to computing we must engage students at an early age. As part of its mission to increase participation of women and underrepresented minorities in computing, the Increasing Student Participation in Research Development Program (INSPIRED) conducts computing academies for high school students. The…

  19. Pulsed high-power beams

    International Nuclear Information System (INIS)

    Reginato, L.L.; Birx, D.L.

    1988-01-01

    The marriage of induction linac technology with nonlinear magnetic modulators has produced some unique capabilities. It is now possible to produce short-pulse electron beams with average currents measured in amperes, at gradients approaching 1-MeV/m, and with power efficiencies exceeding 50%. This paper reports on a 70-MeV, 3-kA induction accelerator (ETA II) constructed at the Lawrence Livermore National Laboratory that incorporates the pulse technology concepts that have evolved over the past several years. The ETA II is a linear induction accelerator and provides a test facility for demonstration of the high-average-power components and high-brightness sources used in such accelerators. The pulse drive of the accelerator is based on state-of-the-art magnetic pulse compressors with very high peak-power capability, repetition rates exceeding 1 kHz, and excellent reliability

  20. Computer Assisted Fluid Power Instruction: A Comparison of Hands-On and Computer-Simulated Laboratory Experiences for Post-Secondary Students

    Science.gov (United States)

    Wilson, Scott B.

    2005-01-01

    The primary purpose of this study was to examine the effectiveness of utilizing a combination of lecture and computer resources to train personnel to assume roles as hydraulic system technicians and specialists in the fluid power industry. This study compared computer simulated laboratory instruction to traditional hands-on laboratory instruction,…

  1. A Computational Fluid Dynamics Algorithm on a Massively Parallel Computer

    Science.gov (United States)

    Jespersen, Dennis C.; Levit, Creon

    1989-01-01

    The discipline of computational fluid dynamics is demanding ever-increasing computational power to deal with complex fluid flow problems. We investigate the performance of a finite-difference computational fluid dynamics algorithm on a massively parallel computer, the Connection Machine. Of special interest is an implicit time-stepping algorithm; to obtain maximum performance from the Connection Machine, it is necessary to use a nonstandard algorithm to solve the linear systems that arise in the implicit algorithm. We find that the Connection Machine can achieve very high computation rates on both explicit and implicit algorithms. The performance of the Connection Machine puts it in the same class as today's most powerful conventional supercomputers.

  2. Reviewing computer capabilities in nuclear power plants

    International Nuclear Information System (INIS)

    1990-06-01

    The OSART programme of the IAEA has become an effective vehicle for promoting international co-operation for the enhancement of plant operational safety. In order to maintain consistency in the OSART reviews, OSART Guidelines have been developed which are intended to ensure that the reviewing process is comprehensive. Computer technology is an area in which rapid development is taking place and new applications may be computerized to further enhance safety and the effectiveness of the plant. Supplementary guidance and reference material is needed to help attain comprehensiveness and consistency in OSART reviews. This document is devoted to the utilization of on-site and off-site computers in such a way that the safe operation of the plant is supported. In addition to the main text, there are several annexes illustrating adequate practices as found at various operating nuclear power plants. Refs, figs and tabs

  3. A battery-powered high-current power supply for superconductors

    CERN Document Server

    Wake, M; Suda, K

    2002-01-01

    Since superconductors do not require voltages, a high-current power supply could run with low power if the voltage is sufficiently reduced. Even a battery-powered power supply could give as much as 2,000A for a superconductor. To demonstrate this hypothesis, a battery-powered 2,000A power supply was constructed. It uses an IGBT chopper and Schottky diode together with a specially arranged transformer to produce a high current with low voltage. Testing of 2,000A operation was performed for about 1.5 hr using 10 car batteries. Charging time for this operation was 8 hr. Ramping control was smooth and caused no trouble. Although the IGBT frequency ripple of 16.6 kHz was easily removed using a passive filter, spike noise remained in the output voltage. This ripple did not cause any trouble in operating a pancake-type inductive superconducting load. (author)

  4. High-field, high-density tokamak power reactor

    International Nuclear Information System (INIS)

    Cohn, D.R.; Cook, D.L.; Hay, R.D.; Kaplan, D.; Kreischer, K.; Lidskii, L.M.; Stephany, W.; Williams, J.E.C.; Jassby, D.L.; Okabayashi, M.

    1977-11-01

    A conceptual design of a compact (R0 = 6.0 m) high power density (average Pf = 7.7 MW/m^3) tokamak demonstration power reactor has been developed. High magnetic field (Bt = 7.4 T) and moderate elongation (b/a = 1.6) permit operation at the high density (n(0) approximately 5 x 10^14 cm^-3) needed for ignition in a relatively small plasma, with a spatially-averaged toroidal beta of only 4%. A unique design for the Nb3Sn toroidal-field magnet system reduces the stress in the high-field trunk region, and allows modularization for simpler disassembly. The modest value of toroidal beta permits a simple, modularized plasma-shaping coil system, located inside the TF coil trunk. Heating of the dense central plasma is attained by the use of ripple-assisted injection of 120-keV D0 beams. The ripple-coil system also affords dynamic control of the plasma temperature during the burn period. A FLIBE-lithium blanket is designed especially for high-power-density operation in a high-field environment, and gives an overall tritium breeding ratio of 1.05 in the slowly pumped lithium

  5. Hybrid simulation of electrode plasmas in high-power diodes

    International Nuclear Information System (INIS)

    Welch, Dale R.; Rose, David V.; Bruner, Nichelle; Clark, Robert E.; Oliver, Bryan V.; Hahn, Kelly D.; Johnston, Mark D.

    2009-01-01

    New numerical techniques for simulating the formation and evolution of cathode and anode plasmas have been successfully implemented in a hybrid code. The dynamics of expanding electrode plasmas has long been recognized as a limiting factor in the impedance lifetimes of high-power vacuum diodes and magnetically insulated transmission lines. Realistic modeling of such plasmas is being pursued to aid in understanding the operating characteristics of these devices as well as establishing scaling relations for reliable extrapolation to higher voltages. Here, in addition to kinetic and fluid modeling, a hybrid particle-in-cell technique is described that models high density, thermal plasmas as an inertial fluid which transitions to kinetic electron or ion macroparticles above a prescribed energy. The hybrid technique is computationally efficient and does not require resolution of the Debye length. These techniques are first tested on a simple planar diode then applied to the evolution of both cathode and anode plasmas in a high-power self-magnetic pinch diode. The impact of an intense electron flux on the anode surface leads to rapid heating of contaminant material and diode impedance loss.

  6. Assessment of computer codes for VVER-440/213-type nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Szabados, L.; Ezsol, Gy.; Perneczky [Atomic Energy Research Institute, Budapest (Hungary)

    1995-09-01

    Nuclear power plants of the VVER-440/213 type designed by the former USSR have a number of special features. As a consequence of these features, the transient behaviour of such a reactor system differs from that of a PWR system. To study the transient behaviour of the Hungarian Paks Nuclear Power Plant of VVER-440/213 type, both analytical and experimental activities have been performed. The experimental basis of the research is the PMK-2 integral-type test facility, which is a scaled-down model of the plant. Experiments performed on this facility have been used to assess thermal-hydraulic system codes. Four tests were selected for the "Standard Problem Exercises" of the International Atomic Energy Agency. Results of the 4th Exercise, of high international interest, are presented in the paper, focusing on the essential findings of the assessment of the computer codes.

  7. Applications of computer based safety systems in Korea nuclear power plants

    International Nuclear Information System (INIS)

    Won Young Yun

    1998-01-01

    With the progress of computer technology, the applications of computer based safety systems in Korean nuclear power plants have increased rapidly in recent decades. The main purpose of this movement is to take advantage of modern computer technology so as to improve the operability and maintainability of the plants. In practice, however, there have been many controversies between the regulatory body and the nuclear utility in Korea over the safety of computer based systems. The Korea Institute of Nuclear Safety (KINS), the technical support organization for nuclear plant licensing, is currently under pressure to set up well-defined domestic regulatory requirements in this area. This paper presents the current status of, and the regulatory activities related to, the applications of computer based safety systems in Korea. (author)

  8. Cyber Security on Nuclear Power Plant's Computer Systems

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Ick Hyun [Korea Institute of Nuclear Nonproliferation and Control, Daejeon (Korea, Republic of)

    2010-10-15

    Computer systems are used in many different fields of industry, and most of us take great advantage of them. Because of the effectiveness and performance of computer systems, we have become very dependent on them. But the more dependent we are on a computer system, the greater the risk we face when the system is unavailable, inaccessible or uncontrollable. SCADA (Supervisory Control And Data Acquisition) systems are broadly used for critical infrastructure such as transportation, electricity and water management, and if a SCADA system is vulnerable to cyber attack, the result could be a national disaster. Especially if a nuclear power plant's main control systems are attacked by cyber terrorists, the consequences may be huge: leaking radioactive material would be the terrorists' main purpose without using physical force. In this paper, different types of cyber attacks are described, and a possible structure of an NPP's computer network system is presented. The paper also discusses possible ways the NPP's computer system could be compromised, along with some suggestions for protection against cyber attacks

  9. EBR-II high-ramp transients under computer control

    International Nuclear Information System (INIS)

    Forrester, R.J.; Larson, H.A.; Christensen, L.J.; Booty, W.F.; Dean, E.M.

    1983-01-01

    During reactor run 122, EBR-II was subjected to 13 computer-controlled overpower transients at ramps of 4 MWt/s to qualify the facility and fuel for transient testing of LMFBR oxide fuels as part of the EBR-II operational-reliability-testing (ORT) program. A computer-controlled automatic control-rod drive system (ACRDS), designed by EBR-II personnel, permitted automatic control on demand power during the transients

  10. Applications of high power microwaves

    International Nuclear Information System (INIS)

    Benford, J.; Swegle, J.

    1993-01-01

    The authors address a number of applications for HPM technology. There is a strong symbiotic relationship between a developing technology and its emerging applications. New technologies can generate new applications; conversely, applications can demand development of new technological capability. High-power microwave generating systems come with size and weight penalties and problems associated with the x-radiation and collection of the electron beam. Acceptance of these difficulties requires the identification of a set of applications for which high-power operation is either demanded or results in significant improvements in performance. The authors identify the following applications, and discuss their requirements and operational issues: (1) high-energy RF acceleration; (2) atmospheric modification (both to produce artificial ionospheric mirrors for radio waves and to save the ozone layer); (3) radar; (4) electronic warfare; and (5) laser pumping. In addition, they discuss several applications requiring high average power that border on HPM: power beaming and plasma heating

  11. An application of the process computer and CRT display system in BWR nuclear power station

    International Nuclear Information System (INIS)

    Goto, Seiichiro; Aoki, Retsu; Kawahara, Haruo; Sato, Takahisa

    1975-01-01

    A color CRT display system was combined with a process computer in some BWR nuclear power plants in Japan. Although the present control system uses the CRT display system only as an output device of the process computer, it has various advantages over a conventional control panel as an efficient plant-operator interface. The various graphic displays are classified into four categories. The first is operational guidance, which includes the display of the control rod worth minimizer and that of the rod block monitor. The second is the display of the results of core performance calculations, which include the axial and radial distributions of power output, exit quality, channel flow rate, CHFR (critical heat flux ratio), FLPD (fraction of linear power density), etc. The third is the display of process variables and corresponding computed values. The readings of the LPRMs, control rod positions and the process data concerning the turbines and feed water system are included in this category. The fourth category includes the differential axial power distribution between the base power distribution (obtained from TIP) and the reading of each LPRM detector, and the display of various input parameters being used by the process computer. Many photographs are presented to show examples of those applications. (Aoki, K.)

  12. When power does not corrupt: superior individuation processes among powerful perceivers.

    Science.gov (United States)

    Overbeck, J R; Park, B

    2001-10-01

    To examine whether powerful people fail to individuate the less powerful, the authors assigned participants to either a high-power or low-power role for a computer e-mail role play. In 3 studies, participants in the high-power role made decisions and determined the outcomes of interactions; low-power role players had no power and relied on high-power targets for outcome decisions. Studies 1 and 2 found that high-power perceivers better individuated low-power targets. Study 3 demonstrated that high-power role players' superior judgment can be impaired by including a task that directs their responsibility toward organizational rather than interpersonal concerns. In all, the results suggest that the effect of power on social judgment may be more complex and multifaceted than has previously been acknowledged.

  13. Computer program for afterheat temperature distribution for mobile nuclear power plant

    Science.gov (United States)

    Parker, W. G.; Vanbibber, L. E.

    1972-01-01

    The ESATA computer program was developed to analyze the thermal safety aspects of post-impacted mobile nuclear power plants. The program is written in FORTRAN 4 and designed for the IBM 7094/7044 direct-coupled system.

  14. Computational aspects in high intensity ultrasonic surgery planning.

    Science.gov (United States)

    Pulkkinen, A; Hynynen, K

    2010-01-01

    Therapeutic ultrasound treatment planning is discussed and computational aspects regarding it are reviewed. Nonlinear ultrasound simulations were solved with a combined frequency domain Rayleigh and KZK model. The ultrasonic simulations were combined with thermal simulations and were used to compute heating of muscle tissue in vivo for four different focused ultrasound transducers. The simulations were compared with measurements and good agreement was found for large F-number transducers. However, at F# 1.9 the simulated rate of temperature rise was approximately a factor of 2 higher than the measured one. The power levels used with the F# 1 transducer were too low to show any nonlinearity. The simulations were used to investigate the importance of nonlinearities generated in the coupling water, and also the importance of including skin in the simulations. Ignoring either of these in the model would lead to larger errors. Most notably, the nonlinearities generated in the water can enhance the focal temperature by more than 100%. The simulations also demonstrated that pulsed high power sonications may provide an opportunity to significantly (up to a factor of 3) reduce the treatment time. In conclusion, nonlinear propagation can play an important role in shaping the energy distribution during a focused ultrasound treatment and it should not be ignored in planning. However, the current simulation methods are accurate only with relatively large F-numbers and better models need to be developed for sharply focused transducers. Copyright 2009 Elsevier Ltd. All rights reserved.
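
    A closely related planning quantity (not necessarily the one used in this paper) is the Sapareto-Dewey thermal dose, cumulative equivalent minutes at 43 C; a minimal sketch in Python, with a hypothetical focal temperature history:

        def cem43(temps_c, dt_s):
            """Cumulative equivalent minutes at 43 C for a sampled temperature history."""
            dose_min = 0.0
            for t in temps_c:
                r = 0.5 if t >= 43.0 else 0.25
                dose_min += (dt_s / 60.0) * r ** (43.0 - t)
            return dose_min

        # Hypothetical 20 s exposure at a steady 50 C focal temperature
        print(f"{cem43([50.0] * 20, 1.0):.1f} CEM43 minutes")   # ~42.7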

  15. Advanced digital computers, controls, and automation technologies for power plants: Proceedings

    International Nuclear Information System (INIS)

    Bhatt, S.C.

    1992-08-01

    This document is a compilation of the papers that were presented at an EPRI workshop on Advances in Computers, Controls, and Automation Technologies for Power Plants. The workshop, sponsored by EPRI's Nuclear Power Division, took place in February 1992. It was attended by 157 representatives from electric utilities, equipment manufacturers, engineering consulting organizations, universities, national laboratories, government agencies and international utilities. More than 40% of the attendees were from utilities, the largest single group; about 30% were from equipment manufacturers and engineering consulting organizations, and participants from government agencies, universities, and national laboratories each accounted for about 10%. The workshop included a keynote address, 35 technical papers, and vendors' equipment demonstrations. The technical papers described the state of the art in recent utility digital upgrades such as digital feedwater controllers, steam generator level controllers, integrated plant computer systems, computer aided diagnostics, automated testing and surveillance, and other applications. A group of technical papers presented the ongoing B&W PWR integrated plant control system prototype developments with the triple redundant advanced digital control system. Several international papers from France, Japan and the UK presented their programs on advanced power plant design and applications. Significant advances in control and automation technologies such as adaptive controls, self-tuning methods, neural networks and expert systems were presented by developers, universities, and national laboratories. Individual papers are indexed separately

  16. Proposal for grid computing for nuclear applications

    International Nuclear Information System (INIS)

    Faridah Mohamad Idris; Wan Ahmad Tajuddin Wan Abdullah; Zainol Abidin Ibrahim; Zukhaimira Zolkapli

    2013-01-01

    The use of computer clusters for the computational sciences, including computational physics, is vital as it provides the computing power needed to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has now become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process. (author)
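
    The kind of compute-intensive Monte Carlo workload described above is embarrassingly parallel, which is what makes it a natural fit for pooled grid or cluster resources. The sketch below (Python) uses the standard multiprocessing module as a stand-in for grid middleware; the worker counts and sample sizes are arbitrary.

        import random
        from multiprocessing import Pool

        def count_hits(args):
            """Count random points that fall inside the unit quarter-circle."""
            n, seed = args
            rng = random.Random(seed)
            return sum(1 for _ in range(n) if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

        if __name__ == "__main__":
            workers, n_each = 4, 250_000
            with Pool(workers) as pool:
                hits = sum(pool.map(count_hits, [(n_each, s) for s in range(workers)]))
            print("pi ~", 4.0 * hits / (workers * n_each))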

  17. Computer-based systems for nuclear power stations

    International Nuclear Information System (INIS)

    Humble, P.J.; Welbourne, D.; Belcher, G.

    1995-01-01

    The published intentions of vendors are for extensive touch-screen control and computer-based protection. The software features needed for acceptance in the UK are indicated. The defence in depth needed is analyzed. Current practice in aircraft flight control systems and the software methods available are discussed. Software partitioning and mathematically formal methods are appropriate for the structures and simple logic needed for nuclear power applications. The potential for claims of diversity and independence between two computer-based subsystems of a protection system is discussed. Features needed to meet a single failure criterion applied to software are discussed. Conclusions are given on the main factors which a design should allow for. The work reported was done for the Health and Safety Executive of the UK (HSE), and acknowledgement is given to them, to NNC Ltd and to GEC-Marconi Avionics Ltd for permission to publish. The opinions and recommendations expressed are those of the authors and do not necessarily reflect those of HSE. (Author)

  18. HEDPIN: a computer program to estimate pinwise power density

    International Nuclear Information System (INIS)

    Cappiello, M.W.

    1976-05-01

    A description is given of the digital computer program, HEDPIN. This program, modeled after a previously developed program, POWPIN, provides a means of estimating the pinwise power density distribution in fast reactor triangular pitched pin bundles. The capability also exists for computing any reaction rate of interest at the respective pin positions within an assembly. HEDPIN was developed in support of FTR fuel and test management as well as fast reactor core design and core characterization planning and analysis. The results of a test devised to check out HEDPIN's computational method are given, and the realm of application is discussed. Nearly all programming is in FORTRAN IV. Variable dimensioning is employed to make efficient use of core memory and maintain short running time for small problems. Input instructions, sample problem, and a program listing are also given
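
    A minimal sketch (Python) of the kind of quantity such a program reports: pin powers normalized to the bundle average to give local peaking factors. The seven-pin power list is hypothetical and purely illustrative.

        def peaking_factors(pin_powers_w):
            """Pin power peaking factors: each pin power divided by the bundle average."""
            avg = sum(pin_powers_w) / len(pin_powers_w)
            return [p / avg for p in pin_powers_w]

        # Hypothetical 7-pin mini-bundle (W per pin)
        pins = [101.0, 98.5, 103.2, 99.0, 100.4, 97.8, 105.1]
        print([round(f, 3) for f in peaking_factors(pins)])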

  19. Thermoelectric cooling of microelectronic circuits and waste heat electrical power generation in a desktop personal computer

    International Nuclear Information System (INIS)

    Gould, C.A.; Shammas, N.Y.A.; Grainger, S.; Taylor, I.

    2011-01-01

    Thermoelectric cooling and micro-power generation from waste heat within a standard desktop computer has been demonstrated. A thermoelectric test system has been designed and constructed, with typical test results presented for thermoelectric cooling and micro-power generation when the computer is executing a number of different applications. A thermoelectric module, operating as a heat pump, can lower the operating temperature of the computer's microprocessor and graphics processor to temperatures below ambient conditions. A small amount of electrical power, typically in the micro-watt or milli-watt range, can be generated by a thermoelectric module attached to the outside of the computer's standard heat sink assembly, when a secondary heat sink is attached to the other side of the thermoelectric module. Maximum electrical power can be generated by the thermoelectric module when a water cooled heat sink is used as the secondary heat sink, as this produces the greatest temperature difference between both sides of the module.
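
    A rough estimate (Python) of the milliwatt-scale harvest reported above, using the matched-load relation P_max = (S * dT)^2 / (4R); the module Seebeck coefficient, internal resistance and temperature difference are hypothetical.

        def te_max_power_w(seebeck_v_per_k, delta_t_k, internal_resistance_ohm):
            """Maximum power into a matched load: P = (S * dT)^2 / (4 * R)."""
            v_oc = seebeck_v_per_k * delta_t_k            # open-circuit voltage
            return v_oc ** 2 / (4.0 * internal_resistance_ohm)

        # Hypothetical module: S = 0.05 V/K, R = 3 ohm, 5 K across the module
        print(f"{te_max_power_w(0.05, 5.0, 3.0) * 1e3:.1f} mW")   # ~5.2 mW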

  20. High performance computing in Windows Azure cloud

    OpenAIRE

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower maintenance costs have contributed substantially to the growing popularity of cloud computing in all spheres of life, especially in business. In fact, cloud computing offers even more than this. With the use of virtual computing clusters, a runtime environment for high performance computing can be efficiently implemented in a cloud as well. There are many advantages but also some disadvantages of cloud computing, some ...

  1. Optics assembly for high power laser tools

    Science.gov (United States)

    Fraze, Jason D.; Faircloth, Brian O.; Zediker, Mark S.

    2016-06-07

    There is provided a high power laser rotational optical assembly for use with, or in high power laser tools for performing high power laser operations. In particular, the optical assembly finds applications in performing high power laser operations on, and in, remote and difficult to access locations. The optical assembly has rotational seals and bearing configurations to avoid contamination of the laser beam path and optics.

  2. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance FELs with an average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier that is being designed to operate on the 0.5 ampere Energy Recovery Linac under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  3. Application of high power microwave vacuum electron devices

    International Nuclear Information System (INIS)

    Ding Yaogen; Liu Pukun; Zhang Zhaochuan; Wang Yong; Shen Bin

    2011-01-01

    High power microwave vacuum electron devices can work at high frequency and at high peak and average power. They have been widely used in military and civil microwave electronic systems, such as radar, communication, countermeasures, TV broadcast, particle accelerators, plasma heating devices for fusion, microwave sensing and microwave heating. In scientific research, high power microwave vacuum electron devices are used mainly in high energy particle accelerators and fusion research. The devices include the high peak power klystron, the CW and long-pulse high power klystron, the multi-beam klystron, and the high power gyrotron. In the national economy, high power microwave vacuum electron devices are used mainly in weather and navigation radar, medical and radiation accelerators, and TV broadcast and communication systems. The devices include the high power pulsed and CW klystron, the extended interaction klystron, the traveling wave tube (TWT), the magnetron and the inductive output tube (IOT). The state of the art, common technology problems and trends of high power microwave vacuum electron devices are introduced in this paper. (authors)

  4. High-performance computing — an overview

    Science.gov (United States)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  5. Application of the Fuzzy Computational Intelligence in Power Quality Data Management

    Directory of Open Access Journals (Sweden)

    Hoda Farag

    2017-03-01

    In electrical power distribution systems, sustained availability and quality of electric power are the main challenges to be satisfied, so overcoming power quality (PQ) degradation has become essential. This paper addresses load management using computational techniques: the system data are analyzed, taking into account the density and distribution of the feeding nodes as well as the classification of major power quality degradations such as poor power factor and harmonics. The methodology is illustrated, simulated and evaluated using a fuzzy technique for clustering the data together with an artificial neural network (ANN) to achieve optimum utilization of the energy loads and effective load management and optimization. Simulation results demonstrate the effectiveness of the proposed algorithm in reducing power and energy losses and improving the quality of the electric power system.
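
    To make the clustering step concrete, the sketch below implements a minimal fuzzy c-means routine in NumPy and applies it to a few hypothetical per-node power quality features (power factor and voltage THD). It is a generic illustration of fuzzy clustering, not the authors' implementation or their data.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means clustering: returns cluster centres and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                 # memberships of each sample sum to 1
    for _ in range(n_iter):
        um = u ** m
        centres = (um.T @ x) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(x[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        ratio = dist[:, :, None] / dist[:, None, :]   # d_ij / d_ik for all cluster pairs
        u = 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)
    return centres, u

# Hypothetical per-node features: [power factor, voltage THD in %]; not real feeder data.
features = np.array([[0.98, 1.2], [0.96, 1.8], [0.72, 8.5],
                     [0.70, 9.1], [0.86, 4.0], [0.88, 3.6]])
centres, memberships = fuzzy_c_means(features, n_clusters=2)
print("cluster centres:\n", centres.round(2))
print("memberships:\n", memberships.round(2))
```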

  6. Computation On dP Type power System Stabilizer Using Fuzzy Logic

    International Nuclear Information System (INIS)

    Iskandar, M.A.; Irwan, R.; Husdi; Riza; Mardhana, E.; Triputranto, A.

    1997-01-01

    Power system stabilizers (PSS) are widely applied to power generators to damp power oscillations caused by disturbances and thereby increase the power supply capacity. PSS design often suffers from the difficulty of periodically setting its parameters, namely the gain and the compensators, in order to obtain an optimal damping characteristic. This paper proposes a method to determine the parameters of a dP-type PSS by implementing fuzzy logic rules in a computer program, so as to obtain the appropriate synchronous torque and damping torque characteristics. A PSS with the calculated parameters is investigated in a simulation using a non-linear model of a thermal generator connected to an infinite bus. Simulation results show that a great improvement in the damping characteristic and an enhancement of the stability margin of the electric power system are obtained with the proposed PSS.
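
    The following sketch illustrates, under stated assumptions, how fuzzy rules can map an estimated damping ratio of the dominant oscillation mode to a stabilizer gain. The membership functions, breakpoints and gain levels are invented for illustration and are not the rules or dP-type PSS parameters from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def pss_gain_from_damping(zeta):
    """Zero-order Sugeno-style rules: low damping -> high gain, good damping -> low gain.
    Breakpoints and gain levels are illustrative assumptions only."""
    mu_low  = tri(zeta, -0.05, 0.00, 0.10)   # oscillation barely damped
    mu_mid  = tri(zeta,  0.05, 0.15, 0.25)   # moderately damped
    mu_high = tri(zeta,  0.20, 0.30, 0.60)   # well damped
    gains = {"low": 25.0, "mid": 12.0, "high": 5.0}
    num = mu_low * gains["low"] + mu_mid * gains["mid"] + mu_high * gains["high"]
    den = mu_low + mu_mid + mu_high
    return num / den if den > 0 else gains["mid"]

print(pss_gain_from_damping(0.04))   # poorly damped mode -> larger stabilizer gain
print(pss_gain_from_damping(0.28))   # well damped mode   -> smaller gain
```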

  7. 14 CFR 101.25 - Operating limitations for Class 2-High Power Rockets and Class 3-Advanced High Power Rockets.

    Science.gov (United States)

    2010-01-01

    ... Power Rockets and Class 3-Advanced High Power Rockets. 101.25 Section 101.25 Aeronautics and Space... OPERATING RULES MOORED BALLOONS, KITES, AMATEUR ROCKETS AND UNMANNED FREE BALLOONS Amateur Rockets § 101.25 Operating limitations for Class 2-High Power Rockets and Class 3-Advanced High Power Rockets. When operating...

  8. Implementing Molecular Dynamics for Hybrid High Performance Computers - 1. Short Range Forces

    International Nuclear Information System (INIS)

    Brown, W. Michael; Wang, Peng; Plimpton, Steven J.; Tharrington, Arnold N.

    2011-01-01

    The use of accelerators such as general-purpose graphics processing units (GPGPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high performance computers, machines with more than one type of floating-point processor, are now becoming more prevalent due to these advantages. In this work, we discuss several important issues in porting a large molecular dynamics code for use on parallel hybrid machines - (1) choosing a hybrid parallel decomposition that works on central processing units (CPUs) with distributed memory and accelerator cores with shared memory, (2) minimizing the amount of code that must be ported for efficient acceleration, (3) utilizing the available processing power from both many-core CPUs and accelerators, and (4) choosing a programming model for acceleration. We present our solution to each of these issues for short-range force calculation in the molecular dynamics package LAMMPS. We describe algorithms for efficient short range force calculation on hybrid high performance machines. We describe a new approach for dynamic load balancing of work between CPU and accelerator cores. We describe the Geryon library that allows a single code to compile with both CUDA and OpenCL for use on a variety of accelerators. Finally, we present results on a parallel test cluster containing 32 Fermi GPGPUs and 180 CPU cores.
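
    To illustrate the idea of dynamic CPU/accelerator load balancing in general terms (this is not the LAMMPS or Geryon implementation), the sketch below adjusts the fraction of particles assigned to the GPU from measured per-step times so that both sides finish at roughly the same time.

```python
def rebalance(frac_gpu, t_gpu, t_cpu, damping=0.5):
    """Update the fraction of work given to the accelerator so that both sides
    finish at the same time. Generic illustration, not the LAMMPS algorithm itself.

    frac_gpu : current fraction of particles handled by the GPU (0..1)
    t_gpu    : measured time the GPU spent on its share
    t_cpu    : measured time the CPU cores spent on the remaining share
    """
    rate_gpu = frac_gpu / max(t_gpu, 1e-12)          # work units per second on the GPU
    rate_cpu = (1.0 - frac_gpu) / max(t_cpu, 1e-12)  # work units per second on the CPUs
    ideal = rate_gpu / (rate_gpu + rate_cpu)         # split that equalizes finish times
    return frac_gpu + damping * (ideal - frac_gpu)   # damped update for stability

frac = 0.5
for t_gpu, t_cpu in [(0.8, 2.0), (1.0, 1.6), (1.1, 1.3)]:   # measured step times (s)
    frac = rebalance(frac, t_gpu, t_cpu)
    print(f"new GPU fraction: {frac:.3f}")
```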

  9. Computer programs for the in-core fuel management of power reactors

    International Nuclear Information System (INIS)

    1981-08-01

    This document gives a survey of the presently tested and used computer programs applicable to the in-core fuel management of light and heavy water moderated nuclear power reactors. Each computer program is described (provided that enough information was supplied) such that the nature of the physical problem solved and the basic mathematical or calculational approach are evident. In addition, further information regarding computer requirements, up-to-date applications and experiences and specific details concerning implementation, staff requirements, etc., are provided. Program procurement conditions, possible program implementation assistance and commercial conditions (where applicable) are given. (author)

  10. High-frequency high-voltage high-power DC-to-DC converters

    Science.gov (United States)

    Wilson, T. G.; Owen, H. A.; Wilson, P. M.

    1982-09-01

    A simple analysis of the current and voltage waveshapes associated with the power transistor and the power diode in an example current-or-voltage step-up (buck-boost) converter is presented. The purpose of the analysis is to provide an overview of the problems and design trade-offs which must be addressed as high-power high-voltage converters are operated at switching frequencies in the range of 100 kHz and beyond. Although the analysis focuses on the current-or-voltage step-up converter as the vehicle for discussion, the basic principles presented are applicable to other converter topologies as well.
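
    As a companion to the waveform discussion, the sketch below computes the first-order operating point and device stresses of an ideal continuous-conduction buck-boost stage; it also shows why raising the switching frequency shrinks the inductor ripple for a given inductance. The input/output voltages, power, frequency and inductance are assumed example values, not figures from the paper.

```python
# Illustrative operating point for a current-or-voltage step-up (buck-boost) stage;
# the numbers are assumptions, not values from the cited analysis.
v_in, v_out, p_out = 28.0, 120.0, 500.0      # V, V, W
f_sw, inductance = 100e3, 150e-6             # Hz, H

duty = v_out / (v_in + v_out)                # ideal duty cycle in continuous conduction
i_out = p_out / v_out
i_l_avg = i_out / (1.0 - duty)               # average inductor current
di_l = v_in * duty / (inductance * f_sw)     # peak-to-peak inductor current ripple
i_sw_peak = i_l_avg + di_l / 2.0             # peak transistor/diode current
v_sw_off = v_in + v_out                      # voltage stress across the switch when off

print(f"duty cycle             : {duty:.3f}")
print(f"avg inductor current   : {i_l_avg:.2f} A")
print(f"ripple (peak-to-peak)  : {di_l:.2f} A")
print(f"peak switch current    : {i_sw_peak:.2f} A")
print(f"switch off-state volts : {v_sw_off:.0f} V")
```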

  11. Polymer waveguides for electro-optical integration in data centers and high-performance computers.

    Science.gov (United States)

    Dangel, Roger; Hofrichter, Jens; Horst, Folkert; Jubin, Daniel; La Porta, Antonio; Meier, Norbert; Soganci, Ibrahim Murat; Weiss, Jonas; Offrein, Bert Jan

    2015-02-23

    To satisfy the intra- and inter-system bandwidth requirements of future data centers and high-performance computers, low-cost low-power high-throughput optical interconnects will become a key enabling technology. To tightly integrate optics with the computing hardware, particularly in the context of CMOS-compatible silicon photonics, optical printed circuit boards using polymer waveguides are considered as a formidable platform. IBM Research has already demonstrated the essential silicon photonics and interconnection building blocks. A remaining challenge is electro-optical packaging, i.e., the connection of the silicon photonics chips with the system. In this paper, we present a new single-mode polymer waveguide technology and a scalable method for building the optical interface between silicon photonics chips and single-mode polymer waveguides.

  12. Computer- Aided Design in Power Engineering Application of Software Tools

    CERN Document Server

    Stojkovic, Zlatan

    2012-01-01

    This textbook demonstrates the application of software tools in solving a series of problems from the field of designing power system structures and systems. It contains four chapters: The first chapter leads the reader through all the phases necessary in the procedures of computer-aided modeling and simulation, guiding through complex problems on the basis of eleven original examples. The second chapter presents the application of software tools in power system calculations and in the design of power system equipment; several design example calculations are carried out using engineering tools such as MATLAB, EMTP/ATP, Excel & Access, AutoCAD and Simulink. The third chapter focuses on graphical documentation using a collection of software tools (AutoCAD, EPLAN, SIMARIS SIVACON, SIMARIS DESIGN) which enable the complete automation of the development of graphical documentation of power systems. In the fourth chapter, the application of software tools in the project management in power systems ...

  13. Simulation of nuclear fuel rods by using process computer-controlled power for indirect electrically heated rods

    International Nuclear Information System (INIS)

    Malang, S.

    1975-11-01

    An investigation was carried out to determine how the simulation of nuclear fuel rods with indirect electrically heated rods could be improved by use of a computer to control the electrical power during a loss-of-coolant accident (LOCA). To aid in the experiment, a new version of the HETRAP code was developed which simulates a LOCA with heater rod power controlled by a computer that adjusts rod power during a blowdown to minimize the difference in heat flux of the fuel and heater rods. Results show that without computer control of heater rod power, only the part of a blowdown up to the time when the heat transfer mode changes from nucleate boiling to transition or film boiling can be simulated well and then only for short times. With computer control, the surface heat flux and temperature of an electrically heated rod can be made nearly identical to that of a reactor fuel rod with the same cooling conditions during much of the LOCA. A small process control computer can be used to achieve close simulation of a nuclear fuel rod with an indirect electrically heated rod

  14. Integrated computer network high-speed parallel interface

    International Nuclear Information System (INIS)

    Frank, R.B.

    1979-03-01

    As the number and variety of computers within Los Alamos Scientific Laboratory's Central Computer Facility grows, the need for a standard, high-speed intercomputer interface has become more apparent. This report details the development of a High-Speed Parallel Interface from conceptual through implementation stages to meet current and future needs for large-scale network computing within the Integrated Computer Network. 4 figures

  15. High-power, high-efficiency FELs

    International Nuclear Information System (INIS)

    Sessler, A.M.

    1989-04-01

    High power, high efficiency FELs require tapering, as the particles lose energy, so as to maintain resonance between the electromagnetic wave and the particles. They also require focusing of the particles (usually done with curved pole faces) and focusing of the electromagnetic wave (i.e., optical guiding). In addition, one must avoid transverse beam instabilities (primarily resistive wall) and longitudinal instabilities (i.e., sidebands). 18 refs., 7 figs., 3 tabs

  16. Advanced in-core monitoring system for high-power reactors

    International Nuclear Information System (INIS)

    Mitin, V.I.; Alekseev, A.N.; Golovanov, M.N.; Zorin, A.V.; Kalinushkin, A.E.; Kovel, A.I.; Milto, N.V.; Musikhin, A.M.; Tikhonova, N.V.; Filatov, V.P.

    2006-01-01

    This paper covers the objectives, conception and engineering solutions for the construction of an advanced in-core instrumentation system (ICIS) for high-power reactors, including the WWER-1000. The main task of the ICIS is on-line monitoring of the power distribution and related functionals independently of design programs, so as to avoid common-cause errors. The paper shows how the power distribution is recovered from the signals of in-core neutron detectors or temperature sensors. On the basis of the measured and processed data, preventive and emergency protection signals on local parameters (linear power of the most loaded fuel rods, departure from nucleate boiling ratio, peaking factor) are generated automatically. The paper presents the detection technology and processing methods for signals from SPNDs and TCs, the ICIS composition and structure, the computer hardware, and the system and application software. The structure, composition and adopted decisions allow class 1E tasks to be combined with class B and C tasks in accordance with international standards on separation and safety categorization. ICIS-M is thus a system capable of ensuring monitoring, safety, information display and diagnostics functions, which secure a real increase in the quality, reliability and safety of operation of the nuclear fuel and power units, while reducing the negative influence of the human factor on thermal-technical reliability during operation. (Authors)

  17. Cellular computational generalized neuron network for frequency situational intelligence in a multi-machine power system.

    Science.gov (United States)

    Wei, Yawei; Venayagamoorthy, Ganesh Kumar

    2017-09-01

    To prevent a large interconnected power system from a cascading failure, brownout or even blackout, grid operators require access to faster-than-real-time information to make appropriate just-in-time control decisions. However, the communication and computational limitations of the currently used supervisory control and data acquisition (SCADA) systems mean they can only deliver delayed information. In contrast, the deployment of synchrophasor measurement devices makes it possible to capture and visualize, in near real time, grid operational data with extra granularity. In this paper, a cellular computational network (CCN) approach for frequency situational intelligence (FSI) in a power system is presented. The distributed and scalable computing units of the CCN framework make it particularly flexible for customization to a particular set of prediction requirements. Two soft-computing algorithms have been implemented in the CCN framework: a cellular generalized neuron network (CCGNN) and a cellular multi-layer perceptron network (CCMLPN), for the purpose of providing multi-timescale frequency predictions, ranging from 16.67 ms to 2 s. These two developed CCGNN and CCMLPN systems were then implemented on two power systems of different scales, one of which includes a large photovoltaic plant. A real-time power system simulator and a weather station within the Real-Time Power and Intelligent Systems (RTPIS) laboratory at Clemson, SC, were then used to derive typical FSI results. Copyright © 2017 Elsevier Ltd. All rights reserved.
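
    The following toy sketch conveys the cellular decomposition idea on synthetic data: one "cell" predicts its local bus frequency a few steps ahead from its neighbours' recent measurements. A plain least-squares linear predictor stands in for the generalized-neuron and MLP cells used in the paper, and the data are synthetic rather than synchrophasor measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic frequency deviations (Hz) at 3 neighbouring buses, 500 samples.
# Purely illustrative; a real CCN cell would be driven by synchrophasor data.
t = np.arange(500)
f_neighbors = 0.02 * np.sin(0.05 * t)[:, None] + 0.005 * rng.standard_normal((500, 3))
f_local = 0.8 * f_neighbors.mean(axis=1) + 0.003 * rng.standard_normal(500)

horizon = 5                     # predict the local frequency 'horizon' steps ahead
X = f_neighbors[:-horizon]      # inputs: neighbours' current deviations
y = f_local[horizon:]           # target: local deviation a few steps later

# One "cell": a linear predictor fitted by least squares (a stand-in for the
# generalized-neuron / multi-layer perceptron cells described in the paper).
A = np.c_[X, np.ones(len(X))]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
print("RMS prediction error (Hz):", np.sqrt(np.mean((pred - y) ** 2)).round(4))
```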

  18. Water Vapour Propulsion Powered by a High-Power Laser-Diode

    Science.gov (United States)

    Minami, Y.; Uchida, S.

    Most of the laser propulsion schemes now being proposed and developed assume neither power supplies nor on-board laser devices and are therefore tied to remote laser stations via a laser beam, like a kite to its string. This is a fatal disadvantage for a space vehicle that must fly freely, even though it is often said that not having to install an energy source is an advantage of a laser propulsion scheme. The possibility of an independent laser propulsion space vehicle that carries a laser source and a power supply on board is discussed. This is mainly enabled by the latest developments in high power laser diode (LD) technology. Both a high-specific-impulse/low-thrust mode and a high-thrust/low-specific-impulse mode can be selected by controlling the laser output, using water or water vapour as the propellant. This mode change can be performed by switching between a high power continuous wave (cw) LD engine for the high-thrust, low-specific-impulse mode and a high power LD-pumped Q-switched Nd:YAG laser engine for the low-thrust, high-specific-impulse mode. This paper describes an Orbital Transfer Vehicle equipped with the above-mentioned laser engine system and a fuel cell that flies to the Moon from a space platform or space hotel in Earth orbit, with cargo shipment from lunar orbit to the surface of the Moon, including the possibility of a sightseeing trip.

  19. Power cycle heat balance software for personal computer (PC)²™

    International Nuclear Information System (INIS)

    Bockh, P. von; Rodriguez, H.

    1996-01-01

    This paper describes the PC-based power cycle balance-of-plant software (PC)²™ (Power Cycle on Personal Computer). It is designed to assist nuclear, fossil, and industrial power plants so that steam cycles can be simulated, analyzed and optimized. First, the cycle model is developed on the screen. The elements of the power cycle are taken from a tool box containing all components of a modern power cycle. The elements are connected by using a mouse. The next step is the input of the design values of the components or of data taken from performance tests. This entire input sequence is guided by the program. Based on the input data, the physical behavior of each component is simulated according to established physical rules. Part-load operation or other off-design conditions can be calculated. The program is designed for use by power plant engineers and power engineering firms to optimize new power cycles, perform problem-solving analyses, optimize component retrofits, and train power plant engineers and operators. It also can be used by universities to educate engineering students.
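
    A heat-balance tool of this kind is built from component-level energy balances. The minimal sketch below balances a turbine/condenser pair; the flow rate, enthalpy values and generator efficiency are illustrative placeholders, whereas a real program would evaluate the enthalpies from steam tables at the cycle state points.

```python
# Minimal component-level heat balance for a turbine/condenser pair, in the spirit
# of cycle-balance tools. All numbers are illustrative placeholders only.
m_dot = 500.0           # kg/s steam flow
h_turbine_in = 2770.0   # kJ/kg, steam entering the turbine (illustrative)
h_turbine_out = 2100.0  # kJ/kg, wet steam at condenser pressure (illustrative)
h_condensate = 170.0    # kJ/kg, condensate leaving the condenser (illustrative)
eta_generator = 0.985   # generator efficiency (illustrative)

shaft_power_mw = m_dot * (h_turbine_in - h_turbine_out) / 1000.0
electrical_mw = shaft_power_mw * eta_generator
condenser_duty_mw = m_dot * (h_turbine_out - h_condensate) / 1000.0

print(f"gross electrical output   : {electrical_mw:.1f} MW")
print(f"heat rejected in condenser: {condenser_duty_mw:.1f} MW")
```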

  20. Enabling the ATLAS Experiment at the LHC for High Performance Computing

    CERN Document Server

    AUTHOR|(CDS)2091107; Ereditato, Antonio

    In this thesis, I studied the feasibility of running computer data analysis programs from the Worldwide LHC Computing Grid, in particular large-scale simulations of the ATLAS experiment at the CERN LHC, on current general purpose High Performance Computing (HPC) systems. An approach for integrating HPC systems into the Grid is proposed, which has been implemented and tested on the „Todi” HPC machine at the Swiss National Supercomputing Centre (CSCS). Over the course of the test, more than 500000 CPU-hours of processing time have been provided to ATLAS, which is roughly equivalent to the combined computing power of the two ATLAS clusters at the University of Bern. This showed that current HPC systems can be used to efficiently run large-scale simulations of the ATLAS detector and of the detected physics processes. As a first conclusion of my work, one can argue that, in perspective, running large-scale tasks on a few large machines might be more cost-effective than running on relatively small dedicated com...

  1. High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations

    Science.gov (United States)

    Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.

    2003-01-01

    Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.

  2. Quality assurance of analytical, scientific, and design computer programs for nuclear power plants

    International Nuclear Information System (INIS)

    1994-06-01

    This Standard applies to the design and development, modification, documentation, execution, and configuration management of computer programs used to perform analytical, scientific, and design computations during the design and analysis of safety-related nuclear power plant equipment, systems, structures, and components as identified by the owner. 2 figs

  3. Quality assurance of analytical, scientific, and design computer programs for nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-06-01

    This Standard applies to the design and development, modification, documentation, execution, and configuration management of computer programs used to perform analytical, scientific, and design computations during the design and analysis of safety-related nuclear power plant equipment, systems, structures, and components as identified by the owner. 2 figs.

  4. Computational Analysis of Nanoparticles-Molten Salt Thermal Energy Storage for Concentrated Solar Power Systems

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Vinod [Univ. of Texas, El Paso, TX (United States)

    2017-05-05

    High fidelity computational models of thermocline-based thermal energy storage (TES) were developed. The research goal was to advance the understanding of a single-tank nanofluidized molten salt based thermocline TES system under various concentrations and sizes of the particle suspension. Our objective was to utilize sensible heat storage that operates with the least irreversibility by exploiting nanoscale physics. This was achieved by performing computational analysis of several storage designs, analyzing storage efficiency and estimating cost effectiveness for the TES systems under a concentrating solar power (CSP) scheme using molten salt as the storage medium. Since TES is one of the most costly but important components of a CSP plant, an efficient TES system has the potential to make the electricity generated from solar technologies cost competitive with conventional sources of electricity.

  5. Optimize the Power Consumption and SNR of the 3D Photonic High-Radix Switch Architecture Based on Extra Channels and Redundant Rings

    Directory of Open Access Journals (Sweden)

    Jie Jian

    2018-01-01

    The demand from exascale computing has made the design of high-radix switch chips an attractive and challenging research field in EHPC (exascale high-performance computing). The static power, due to the thermal sensitivity and process variation of the microresonator rings, and the crosstalk noise of the optical network are the main bottlenecks for the network's scalability. This paper proposes analysis models of the trimming power, the process-variation power and the signal-to-noise ratio (SNR) for Graphein-based high-radix optical switch networks, and uses extra channels and redundant rings to decrease the trimming power and the process-variation power. The paper also explores the SNR under different configurations. Simulation results show that using 8 extra channels in a 64×64 crossbar optical network reduces the trimming power by almost 80%, while adding 16 redundant rings to the same network decreases the process-variation power by 65%; these schemes have little influence on the SNR. In addition, a greater channel spacing is advantageous for decreasing the static power and increasing the SNR of the optical network.

  6. Virginia Power's computer-based interactive videodisc training: a prototype for the future

    International Nuclear Information System (INIS)

    Seigler, G.G.; Adams, R.H.

    1987-01-01

    Virginia Power has developed a system and internally produced a prototype for computer-based interactive videodisc (CBIV) training. Two programs have been developed using the CBIV instructional methodology: Fire Team Retraining and General Employee Training (practical factors). In addition, the company developed a related program for conducting a videodisc tour of their nuclear power stations using a videodisc information management system (VIMS)

  7. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  8. Volunteer Computing for Science Gateways

    OpenAIRE

    Anderson, David

    2017-01-01

    This poster offers information about volunteer computing for science gateways that provide high-throughput computing services. Volunteer computing can be used to obtain additional computing power; this increases the visibility of the gateway to the general public as well as increasing computing capacity at little cost.

  9. Aeroelastic modelling without the need for excessive computing power

    Energy Technology Data Exchange (ETDEWEB)

    Infield, D. [Loughborough Univ., Centre for Renewable Energy Systems Technology, Dept. of Electronic and Electrical Engineering, Loughborough (United Kingdom)

    1996-09-01

    The aeroelastic model presented here was developed specifically to represent a wind turbine manufactured by Northern Power Systems which features a passive pitch control mechanism. It was considered that this particular turbine, which also has low-solidity flexible blades and is free yawing, would provide a stringent test of modelling approaches. It was believed that blade element aerodynamic modelling would not be adequate to properly describe the combination of yawed flow, dynamic inflow and unsteady aerodynamics; consequently a wake modelling approach was adopted. In order to keep computation time limited, a highly simplified, semi-free wake approach (developed in previous work) was used. A similarly simple structural model was adopted, with up to only six degrees of freedom in total. In order to take account of blade (flapwise) flexibility a simple finite element sub-model is used. Good quality data from the turbine has recently been collected and it is hoped to undertake model validation in the near future. (au)

  10. Computing in high-energy physics

    International Nuclear Information System (INIS)

    Mount, Richard P.

    2016-01-01

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Lastly, I describe recent developments aimed at improving the overall coherence of high-energy physics software

  11. Computing in high-energy physics

    Science.gov (United States)

    Mount, Richard P.

    2016-04-01

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Finally, I describe recent developments aimed at improving the overall coherence of high-energy physics software.

  12. High speed micromachining with high power UV laser

    Science.gov (United States)

    Patel, Rajesh S.; Bovatsek, James M.

    2013-03-01

    Increasing demand for creating fine features with high accuracy in the manufacturing of electronic mobile devices has fueled growth for lasers in manufacturing. High power, high repetition rate ultraviolet (UV) lasers provide an opportunity to implement a cost-effective, high quality, high throughput micromachining process in a 24/7 manufacturing environment. The energy available per pulse and the pulse repetition frequency (PRF) of diode pumped solid state (DPSS) nanosecond UV lasers have increased steadily over the years. Efficient use of the available energy from a laser is important to generate accurate fine features at high speed with high quality. To achieve maximum material removal and minimal thermal damage for any laser micromachining application, use of the optimal process parameters, including energy density or fluence (J/cm2), pulse width, and repetition rate, is important. In this study we present a new high power, high PRF Quasar® 355-40 laser from Spectra-Physics with TimeShift™ technology for unique software-adjustable pulse width, pulse splitting, and pulse shaping capabilities. The benefits of these features for micromachining include improved throughput and quality. A specific example and results of silicon scribing are described to demonstrate the processing benefits of the Quasar's available power, PRF, and TimeShift technology.
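
    Selecting the process parameters mentioned above usually starts from the per-pulse fluence. The sketch below derives pulse energy and fluence from average power, repetition rate and spot size; the numbers are assumptions representative of a tens-of-watts nanosecond UV source, not measured process data from the paper.

```python
import math

# Back-of-the-envelope process parameters for a nanosecond UV laser.
# Average power, repetition rate and spot size are assumed example values.
avg_power_w = 40.0        # average output power of a tens-of-watts UV source
prf_hz = 200e3            # pulse repetition frequency
spot_radius_cm = 10e-4    # 10 um spot radius, expressed in cm

pulse_energy_j = avg_power_w / prf_hz
spot_area_cm2 = math.pi * spot_radius_cm**2
fluence_j_cm2 = pulse_energy_j / spot_area_cm2

print(f"pulse energy : {pulse_energy_j * 1e6:.1f} uJ")
print(f"fluence      : {fluence_j_cm2:.1f} J/cm^2")
```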

  13. Power affects performance when the pressure is on: evidence for low-power threat and high-power lift.

    Science.gov (United States)

    Kang, Sonia K; Galinsky, Adam D; Kray, Laura J; Shirako, Aiwa

    2015-05-01

    The current research examines how power affects performance in pressure-filled contexts. We present low-power-threat and high-power-lift effects, whereby performance in high-stakes situations suffers or is enhanced depending on one's power; that is, the power inherent to a situational role can produce effects similar to stereotype threat and lift. Three negotiations experiments demonstrate that role-based power affects outcomes but only when the negotiation is diagnostic of ability and, therefore, pressure-filled. We link these outcomes conceptually to threat and lift effects by showing that (a) role power affects performance more strongly when the negotiation is diagnostic of ability and (b) underperformance disappears when the low-power negotiator has an opportunity to self-affirm. These results suggest that stereotype threat and lift effects may represent a more general phenomenon: When the stakes are raised high, relative power can act as either a toxic brew (stereotype/low-power threat) or a beneficial elixir (stereotype/high-power lift) for performance. © 2015 by the Society for Personality and Social Psychology, Inc.

  14. Computer-aided load monitoring system for nuclear power plant steel framing structures

    International Nuclear Information System (INIS)

    Skaczylo, A.T.; Fung, S-J; Hooks, R.W.

    1984-01-01

    The design of nuclear power plant steel framing structures is a long and involved process. It is often complicated by numerous changes in design loads as a result of additions, deletions and modifications of HVAC hangers, cable tray hangers, electric conduit hangers, and small bore and large bore mechanical component supports. Manual tracking of load changes of thousands of supports and their impact to the structural steel design adequacy is very time-consuming and is susceptible to errors. This paper presents a computer-aided load monitoring system using the latest technology of data base management and interactive computer software. By linking the data base to analysis and investigation computer programs, the engineer has a very powerful tool to monitor not only the load revisions but also their impact on the steel structural floor framing members and connections. Links to reporting programs allow quick information retrieval in the form of comprehensive reports. Drawing programs extract data from the data base to draw hanger load system drawings on a computer-aided drafting system. These capabilities allow engineers to minimize modifications by strategically locating new hangers or rearranging auxiliary steel configuration

  15. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows one to compare the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  16. Profiling high performance dense linear algebra algorithms on multicore architectures for power and energy efficiency

    KAUST Repository

    Ltaief, Hatem

    2011-08-31

    This paper presents the power profile of two high performance dense linear algebra libraries i.e., LAPACK and PLASMA. The former is based on block algorithms that use the fork-join paradigm to achieve parallel performance. The latter uses fine-grained task parallelism that recasts the computation to operate on submatrices called tiles. In this way tile algorithms are formed. We show results from the power profiling of the most common routines, which permits us to clearly identify the different phases of the computations. This allows us to isolate the bottlenecks in terms of energy efficiency. Our results show that PLASMA surpasses LAPACK not only in terms of performance but also in terms of energy efficiency. © 2011 Springer-Verlag.
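
    Power profiling of this kind ultimately reduces to combining timing and power measurements into an energy-efficiency figure. The sketch below shows the arithmetic for a Cholesky-like routine using made-up measurements; it is not output from the LAPACK/PLASMA experiments.

```python
# Turning raw timing and power measurements into the energy-efficiency metric
# (GFLOP/s per watt) used when comparing libraries such as LAPACK and PLASMA.
# The numbers below are made-up measurements for illustration.
n = 8000                        # matrix order of a double-precision Cholesky factorization
flops = n**3 / 3.0              # leading-order operation count for Cholesky
elapsed_s = 4.2                 # measured wall-clock time
avg_power_w = 180.0             # measured average package power during the run

gflops = flops / elapsed_s / 1e9
gflops_per_watt = gflops / avg_power_w
energy_j = avg_power_w * elapsed_s

print(f"performance       : {gflops:.1f} GFLOP/s")
print(f"energy efficiency : {gflops_per_watt:.2f} GFLOP/s per watt")
print(f"energy to solution: {energy_j:.0f} J")
```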

  17. Advanced High Voltage Power Device Concepts

    CERN Document Server

    Baliga, B Jayant

    2012-01-01

    Advanced High Voltage Power Device Concepts describes devices utilized in power transmission and distribution equipment, and for very high power motor control in electric trains and steel mills. Since these devices must be capable of supporting more than 5000 volts in the blocking mode, this book covers the operation of devices rated at 5,000 V, 10,000 V and 20,000 V. Advanced concepts (the MCT, the BRT, and the EST) that enable MOS-gated control of power thyristor structures are described and analyzed in detail. In addition, detailed analyses of the silicon IGBT, as well as the silicon carbide MOSFET and IGBT, are provided for comparison purposes. Throughout the book, analytical models are generated to give a better understanding of the physics of operation for all the structures. This book provides readers with: The first comprehensive treatment of high voltage (over 5000-volt) power devices suitable for the power distribution, traction, and motor-control markets;  Analytical formulations for all the device ...

  18. Management of the high-level nuclear power facilities

    International Nuclear Information System (INIS)

    Preda, Marin

    2003-05-01

    of energy produced in computer-assisted high-power facilities. A final chapter summarizes the concluding remarks and recommendations concerning high-performance management of high-power nuclear stations, evidencing the original results of the research presented in this PhD thesis. An annex presents practical NPP management decision making for ensuring safe operation regimes. The experiments were conducted on the 14 MW TRIGA SSR reactor at INR Pitesti. The concepts developed in this thesis were applied to the Cernavoda NPP, with special emphasis on nuclear installation monitoring. In conclusion, the following items can be pointed out as achieved in this work: 1. Evidencing of nuclear facility operational monitoring policies concerning primarily preventive maintenance and NPP safety assurance; 2. Analysis of nuclear accidents within the framework of risk-catastrophe-chaos theories, highlighting operative measures for preventing hazardous events and quality-assurance monitoring of nuclear reactor components; 3. Development of hybrid neuro-expert systems (with extensions to neuro-fuzzy and fuzzy models) involving automated process programming; this combined system is currently undergoing a patent procedure, as it provides an innovative structure of intelligent hardware-software systems devoted to the safe operation of power systems with nuclear injection; 4. Establishing the descriptors of high-performance management for the analysis of specific activities relating to nuclear processes; 5. Modelling of nuclear power systems within an operational approach to managing operators, for instance market and system operators, human resource and quality operators, economic-financial operators and decision-communication operators; 6. Development of an experimental decision-making system for NPP monitoring based on the 14 MW TRIGA SSR reactor at INR Pitesti. (authors)

  19. Identification and simulation of the power quality problems using computer models

    International Nuclear Information System (INIS)

    Abro, M.R.; Memon, A.P.; Memon, Z.A.

    2005-01-01

    Power quality has become a major factor in our lives. If the quality of power delivered over the electrical power network is degraded, serious problems can arise within the modern social structure and its conveniences. The nonlinear characteristics of various office and industrial equipment connected to the power grid can cause electrical disturbances leading to poor power quality. In many cases the electric power consumed is first converted to a different form, and such conversion processes introduce harmonic pollution into the grid. These electrical disturbances can destroy certain sensitive equipment connected to the grid or, in some cases, cause it to malfunction. In a large power network, identifying the source of such disturbances without interrupting the supply is a significant problem. This paper studies the power quality problems caused by typical loads using computer models, paving the way to identifying the source of the problem. The PSB (Power System Blockset) toolbox of MATLAB, which is designed to provide modern tools for rapidly and easily building models and simulating power systems, is used in this paper. The blockset uses the Simulink environment, allowing a model to be built using simple click-and-drag procedures. (author)

  20. Can We Build a Truly High Performance Computer Which is Flexible and Transparent?

    KAUST Repository

    Rojas, Jhonathan Prieto

    2013-09-10

    State-of-the-art computers need high-performance transistors, which consume ultra-low power, resulting in longer battery lifetime. Billions of transistors are integrated neatly using a matured silicon fabrication process to maintain the performance-per-cost advantage. In that context, low-cost mono-crystalline bulk silicon (100) based high-performance transistors are considered the heart of today's computers. One limitation is silicon's rigidity and brittleness. Here we show a generic batch process to convert high-performance silicon electronics into flexible and semi-transparent electronics while retaining their performance, process compatibility, integration density and cost. We demonstrate high-k/metal gate stack based p-type metal oxide semiconductor field effect transistors on a 4 inch silicon fabric released from bulk silicon (100) wafers with a sub-threshold swing of 80 mV dec⁻¹ and an on/off ratio of nearly 10⁴ within 10% device uniformity, with a minimum bending radius of 5 mm and an average transmittance of ~7% in the visible spectrum.

  1. Fiber facet gratings for high power fiber lasers

    Science.gov (United States)

    Vanek, Martin; Vanis, Jan; Baravets, Yauhen; Todorov, Filip; Ctyroky, Jiri; Honzatko, Pavel

    2017-12-01

    We numerically investigated the properties of diffraction gratings designated for fabrication on the facet of an optical fiber. The gratings are intended to be used in high-power fiber lasers as mirrors either with a low or high reflectivity. The modal reflectance of low reflectivity polarizing grating has a value close to 3% for TE mode while it is significantly suppressed for TM mode. Such a grating can be fabricated on laser output fiber facet. The polarizing grating with high modal reflectance is designed as a leaky-mode resonant diffraction grating. The grating can be etched in a thin layer of high index dielectric which is sputtered on fiber facet. We used refractive index of Ta2O5 for such a layer. We found that modal reflectance can be close to 0.95 for TE polarization and polarization extinction ratio achieves 18 dB. Rigorous coupled wave analysis was used for fast optimization of grating parameters while aperiodic rigorous coupled wave analysis, Fourier modal method and finite difference time domain method were compared and used to compute modal reflectance of designed gratings.

  2. High-power density miniscale power generation and energy harvesting systems

    International Nuclear Information System (INIS)

    Lyshevski, Sergey Edward

    2011-01-01

    This paper reports design, analysis, evaluations and characterization of miniscale self-sustained power generation systems. Our ultimate objective is to guarantee highly-efficient mechanical-to-electrical energy conversion, ensure premier wind- or hydro-energy harvesting capabilities, enable electric machinery and power electronics solutions, stabilize output voltage, etc. By performing the advanced scalable power generation system design, we enable miniscale energy sources and energy harvesting technologies. The proposed systems integrate: (1) turbine which rotates a radial- or axial-topology permanent-magnet synchronous generator at variable angular velocity depending on flow rate, speed and load, and, (2) power electronic module with controllable rectifier, soft-switching converter and energy storage stages. These scalable energy systems can be utilized as miniscale auxiliary and self-sustained power units in various applications, such as, aerospace, automotive, biotechnology, biomedical, and marine. The proposed systems uniquely suit various submersible and harsh environment applications. Due to operation in dynamic rapidly-changing envelopes (variable speed, load changes, etc.), sound solutions are researched, proposed and verified. We focus on enabling system organizations utilizing advanced developments for various components, such as generators, converters, and energy storage. Basic, applied and experimental findings are reported. The prototypes of integrated power generation systems were tested, characterized and evaluated. It is documented that high-power density, high efficiency, robustness and other enabling capabilities are achieved. The results and solutions are scalable from micro (∼100 μW) to medium (∼100 kW) and heavy-duty (sub-megawatt) auxiliary and power systems.

  3. The computer code system for reactor radiation shielding in design of nuclear power plant

    International Nuclear Information System (INIS)

    Li Chunhuai; Fu Shouxin; Liu Guilian

    1995-01-01

    The computer code system used in reactor radiation shielding design of nuclear power plants includes source term codes, discrete ordinates transport codes, Monte Carlo and albedo Monte Carlo codes, kernel integration codes, an optimization code, a temperature field code, a skyshine code, coupling calculation codes and some processing codes for data libraries. This computer code system offers a satisfactory variety of codes and complete data libraries. It is widely used in reactor radiation shielding design and safety analysis of nuclear power plants and other nuclear facilities.

  4. A High Performance COTS Based Computer Architecture

    Science.gov (United States)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so large that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the behavior of the COTS components. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the benefits and constraints of using COTS components for space applications; then we briefly describe existing fault-mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  5. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    DEFF Research Database (Denmark)

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

    Internationally, there is a growing awareness on the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thi...

  6. Application of ESER computers to operation management of nuclear power plants

    International Nuclear Information System (INIS)

    Kuhne, E.; Poetter, K.F.; Suschok, G.

    1990-01-01

    Operation management of nuclear reactors is essentially supported by calculational studies in which large computers have to be employed. A system of programs is presented that supports the solution of those tasks related to refuelling and stationary operation of WWER-440 type reactors. This system is applied at the Greifswald nuclear power plant 'Bruno Leuschner' using access to the ESER computers at the Neubrandenburg Data Processing Centre in teleprocessing mode. The system solution and the hardware used are described. (author)

  7. Power MOSFET Linearizer of a High-Voltage Power Amplifier for High-Frequency Pulse-Echo Instrumentation.

    Science.gov (United States)

    Choi, Hojong; Woo, Park Chul; Yeom, Jung-Yeol; Yoon, Changhan

    2017-04-04

    A power MOSFET linearizer is proposed for a high-voltage power amplifier (HVPA) used in high-frequency pulse-echo instrumentation. The power MOSFET linearizer is composed of a DC bias-controlled series power MOSFET shunt with parallel inductors and capacitors. The proposed scheme is designed to improve the gain deviation characteristics of the HVPA at higher input powers. By controlling the MOSFET bias voltage in the linearizer, the gain reduction into the HVPA was compensated, thereby reducing the echo harmonic distortion components generated by the ultrasonic transducers. In order to verify the performance improvement of the HVPA implementing the power MOSFET linearizer, we measured and found that the gain deviation of the power MOSFET linearizer integrated with HVPA under 10 V DC bias voltage was reduced (-1.8 and -0.96 dB, respectively) compared to that of the HVPA without the power MOSFET linearizer (-2.95 and -3.0 dB, respectively) when 70 and 80 MHz, three-cycle, and 26 dBm input pulse waveforms are applied, respectively. The input 1-dB compression point (an index of linearity) of the HVPA with power MOSFET linearizer (24.17 and 26.19 dBm at 70 and 80 MHz, respectively) at 10 V DC bias voltage was increased compared to that of HVPA without the power MOSFET linearizer (22.03 and 22.13 dBm at 70 and 80 MHz, respectively). To further verify the reduction of the echo harmonic distortion components generated by the ultrasonic transducers, the pulse-echo responses in the pulse-echo instrumentation were compared when using HVPA with and without the power MOSFET linearizer. When three-cycle 26 dBm input power was applied, the second, third, fourth, and fifth harmonic distortion components of a 75 MHz transducer driven by the HVPA with power MOSFET linearizer (-48.34, -44.21, -48.34, and -46.56 dB, respectively) were lower than that of the HVPA without the power MOSFET linearizer (-45.61, -41.57, -45.01, and -45.51 dB, respectively). When five-cycle 20 dBm input
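
    The input 1-dB compression point quoted above can be extracted from a swept-power gain measurement by interpolating for the drive level at which the gain has dropped 1 dB below its small-signal value. The sketch below shows this calculation on hypothetical data, not the cited HVPA measurements.

```python
import numpy as np

def input_p1db(p_in_dbm, gain_db):
    """Return the input 1 dB compression point: the input power at which the
    measured gain has fallen 1 dB below its small-signal value."""
    small_signal_gain = gain_db[0]             # gain at the lowest drive level
    compression = small_signal_gain - gain_db  # how far the gain has dropped (dB)
    return float(np.interp(1.0, compression, p_in_dbm))

# Hypothetical swept-power measurement of an amplifier (not the cited HVPA data):
p_in = np.array([0, 5, 10, 15, 20, 22, 24, 26, 28])                      # dBm
gain = np.array([40.0, 40.0, 39.9, 39.7, 39.3, 39.1, 38.8, 38.4, 37.8])  # dB

print(f"input P1dB ~ {input_p1db(p_in, gain):.1f} dBm")
```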

  8. Assessing different parameters estimation methods of Weibull distribution to compute wind power density

    International Nuclear Information System (INIS)

    Mohammadi, Kasra; Alavi, Omid; Mostafaeipour, Ali; Goudarzi, Navid; Jalilvand, Mahdi

    2016-01-01

    Highlights: • The effectiveness of six numerical methods is evaluated for determining wind power density. • The more appropriate method for computing the daily wind power density is identified. • Four windy stations located in the southern part of Alberta, Canada are investigated. • The more appropriate parameter estimation method was not identical among all examined stations. - Abstract: In this study, the effectiveness of six numerical methods is evaluated to determine the shape (k) and scale (c) parameters of the Weibull distribution function for the purpose of calculating the wind power density. The selected methods are the graphical method (GP), the empirical method of Justus (EMJ), the empirical method of Lysen (EML), the energy pattern factor method (EPF), the maximum likelihood method (ML) and the modified maximum likelihood method (MML). The purpose of this study is to identify the more appropriate method for computing the wind power density at four stations distributed across the Alberta province of Canada, namely Edmonton City Center Awos, Grande Prairie A, Lethbridge A and Waterton Park Gate. To provide a complete analysis, the evaluations are performed on both daily and monthly scales. The results indicate that the precision of the computed wind power density values changes when different parameter estimation methods are used to determine the k and c parameters. The four methods EMJ, EML, EPF and ML perform very favorably, while the GP method shows weak ability for all stations. However, it is found that the most effective method is not the same for all stations, owing to differences in the wind characteristics.
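
    As an illustration of one of the evaluated estimators, the sketch below applies the empirical method of Justus (EMJ) to a small set of invented wind speeds and converts the fitted Weibull parameters into a mean wind power density; the speeds are placeholders, not data from the Alberta stations.

```python
import math

def weibull_emj(speeds):
    """Empirical method of Justus (EMJ): estimate the Weibull shape k and scale c
    from the sample mean and standard deviation of the wind speeds."""
    n = len(speeds)
    mean = sum(speeds) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in speeds) / (n - 1))
    k = (std / mean) ** -1.086                 # shape parameter
    c = mean / math.gamma(1.0 + 1.0 / k)       # scale parameter (m/s)
    return k, c

def wind_power_density(k, c, rho=1.225):
    """Mean wind power density (W/m^2) implied by a Weibull(k, c) speed distribution."""
    return 0.5 * rho * c**3 * math.gamma(1.0 + 3.0 / k)

# Illustrative daily-mean wind speeds (m/s); not data from the examined stations.
speeds = [4.2, 6.8, 5.1, 7.9, 3.6, 8.4, 6.1, 5.5, 9.2, 4.8]
k, c = weibull_emj(speeds)
print(f"k = {k:.2f}, c = {c:.2f} m/s, WPD = {wind_power_density(k, c):.1f} W/m^2")
```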

  9. High Power Orbit Transfer Vehicle

    National Research Council Canada - National Science Library

    Gulczinski, Frank

    2003-01-01

    ... from Virginia Tech University and Aerophysics, Inc. to examine propulsion requirements for a high-power orbit transfer vehicle using thin-film voltaic solar array technologies under development by the Space Vehicles Directorate (dubbed PowerSail...

  10. High impact data visualization with Power View, Power Map, and Power BI

    CERN Document Server

    Aspin, Adam

    2014-01-01

    High Impact Data Visualization with Power View, Power Map, and Power BI helps you take business intelligence delivery to a new level that is interactive, engaging, even fun, all while driving commercial success through sound decision-making. Learn to harness the power of Microsoft's flagship, self-service business intelligence suite to deliver compelling and interactive insight with remarkable ease. Learn the essential techniques needed to enhance the look and feel of reports and dashboards so that you can seize your audience's attention and provide them with clear and accurate information. Al

  11. Eighth CW and High Average Power RF Workshop

    CERN Document Server

    2014-01-01

    We are pleased to announce the next Continuous Wave and High Average RF Power Workshop, CWRF2014, to take place at Hotel NH Trieste, Trieste, Italy from 13 to 16 May, 2014. This is the eighth in the CWRF workshop series and will be hosted by Elettra - Sincrotrone Trieste S.C.p.A. (www.elettra.eu). CWRF2014 will provide an opportunity for designers and users of CW and high average power RF systems to meet and interact in a convivial environment to share experiences and ideas on applications which utilize high-power klystrons, gridded tubes, combined solid-state architectures, high-voltage power supplies, high-voltage modulators, high-power combiners, circulators, cavities, power couplers and tuners. New ideas for high-power RF system upgrades and novel ways of RF power generation and distribution will also be discussed. CWRF2014 sessions will start on Tuesday morning and will conclude on Friday lunchtime. A visit to Elettra and FERMI will be organized during the workshop. ORGANIZING COMMITTEE (OC): Al...

  12. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing, as well as policy planners, with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles of the Scopus database from Elsevier covering the time period 2004-2013. We ranked authors in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.
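A minimal sketch of the kind of co-authorship network analysis described above, assuming each paper is available as a plain list of author names; the records below are invented placeholders, not the Scopus data used in the study.

```python
import itertools
import networkx as nx

# Invented placeholder records: each paper is a list of author names.
papers = [
    ["Author A", "Author B", "Author C"],
    ["Author A", "Author C"],
    ["Author B", "Author D"],
]

G = nx.Graph()
for authors in papers:
    for a, b in itertools.combinations(sorted(set(authors)), 2):
        # Accumulate a co-authorship weight for every author pair on the paper.
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Rank authors by number of distinct co-authors (degree), a simple activity proxy.
for author, degree in sorted(G.degree, key=lambda item: -item[1]):
    print(author, degree)
```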

  13. Improved Structure and Fabrication of Large, High-Power KHPS Rotors - Final Scientific/Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Corren, Dean [Verdant Power, Inc.; Colby, Jonathan [Verdant Power, Inc.; Adonizio, Mary Ann [Verdant Power, Inc.

    2013-01-29

    Verdant Power, Inc, working in partnership with the National Renewable Energy Laboratory (NREL), Sandia National Laboratories (SNL), and the University of Minnesota St. Anthony Falls Laboratory (SAFL), among other partners, used evolving Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) models and techniques to improve the structure and fabrication of large, high-power composite Kinetic Hydropower System (KHPS) rotor blades. The objectives of the project were to: design; analyze; develop for manufacture and fabricate; and thoroughly test, in the lab and at full scale in the water, the improved KHPS rotor blade.

  14. The Power of Computer-aided Tomography to Investigate Marine Benthic Communities

    Science.gov (United States)

    Computer-aided tomography (CT) technology is a powerful tool to investigate benthic communities in aquatic systems. In this presentation, we will attempt to summarize our 15 years of experience in developing specific CT methods and applications to marine benthic co...

  15. Designing fault-tolerant real-time computer systems with diversified bus architecture for nuclear power plants

    International Nuclear Information System (INIS)

    Behera, Rajendra Prasad; Murali, N.; Satya Murty, S.A.V.

    2014-01-01

    Fault-tolerant real-time computer (FT-RTC) systems are widely used to perform safe operation of nuclear power plants (NPP) and safe shutdown in the event of any untoward situation. Such systems require high reliability, availability, computational ability for measurement via sensors and control action via actuators, data communication, and a human interface via keyboard or display. All these attributes of FT-RTC systems are required to be implemented using best known methods, such as redundant system design using a diversified bus architecture to avoid common cause failure, fail-safe design to avoid unsafe failure, and diagnostic features to validate system operation. In this context, the system designer must select an efficient as well as highly reliable diversified bus architecture in order to realize a fault-tolerant system design. This paper presents a comparative study between the CompactPCI bus and the Versa Module Eurocard (VME) bus architecture for designing FT-RTC systems with a switch over logic system (SOLS) for NPP. (author)

  16. OPC Server and BridgeView Application for High Voltage Power Supply Lecroy 1458

    CERN Document Server

    Swoboda, D; CERN. Geneva

    2000-01-01

    Abstract The aim of this project was to develop an OPC server to communicate over an RS232 serial line. This communication medium is commonly used with commercial instruments. The development was made for a High Voltage power supply in the context of the Alice [1] experiment. In addition, the structured modular concept will allow changing the transmission medium or power supply type with little effort. The high voltage power supply should be accessible remotely through a network. OPC[2] is an acronym for OLE[3] for Process Control. OPC is based on the DCOM [3] communication protocol, which allows communication with any computer running a Windows based OS. This standard is widely used in industry to access device data through Windows applications. The concept is based on the client-server architecture. The hardware and the software architecture are described. Subsequently, details of the implemented programs are given, with emphasis on the possibility to replace parts of the software in order to use differ...

  17. The first accident simulation of Angra-1 power plant using the ALMOD computer code

    International Nuclear Information System (INIS)

    Camargo, C.T.M.

    1981-01-01

    The implementation of the German computer code ALMOD and its application to the analysis of Angra-1, a nuclear power plant different from the KWU power plants, demanded study and adaptation of models; for economic reasons, simplifications and optimizations were also necessary. The first results define the analytical potential of the computer code, confirm the adequacy of the adaptations made and provide relevant conclusions about the Angra-1 safety analysis, showing at the same time areas in which the model can be applied or simply improved. (E.G.) [pt

  18. Computer-aided design in power engineering. Application of software tools

    International Nuclear Information System (INIS)

    Stojkovic, Zlatan

    2012-01-01

    Demonstrates the use of software tools in the practice of design in the field of power systems. Presents many applications of design in the field of power systems. Useful for educational purposes and practical work. This textbook demonstrates the application of software tools in solving a series of problems from the field of designing power system structures and systems. It contains four chapters: The first chapter leads the reader through all the phases necessary in the procedures of computer aided modeling and simulation. It guides the reader through complex problems, presented on the basis of eleven original examples. The second chapter presents the application of software tools in power system calculations and power system equipment design. Several design example calculations are carried out using standard engineering tools such as MATLAB, EMTP/ATP, Excel and Access, AutoCAD and Simulink. The third chapter focuses on graphical documentation, using a collection of software tools (AutoCAD, EPLAN, SIMARIS SIVACON, SIMARIS DESIGN) which enable the complete automation of the development of the graphical documentation of a power system. In the fourth chapter, the application of software tools in project management in power systems is discussed. Here, the emphasis is put on the standard software MS Excel and MS Project.

  19. Computer-aided design in power engineering. Application of software tools

    Energy Technology Data Exchange (ETDEWEB)

    Stojkovic, Zlatan

    2012-07-01

    Demonstrates the use of software tools in the practice of design in the field of power systems. Presents many applications of design in the field of power systems. Useful for educational purposes and practical work. This textbook demonstrates the application of software tools in solving a series of problems from the field of designing power system structures and systems. It contains four chapters: The first chapter leads the reader through all the phases necessary in the procedures of computer aided modeling and simulation. It guides the reader through complex problems, presented on the basis of eleven original examples. The second chapter presents the application of software tools in power system calculations and power system equipment design. Several design example calculations are carried out using standard engineering tools such as MATLAB, EMTP/ATP, Excel and Access, AutoCAD and Simulink. The third chapter focuses on graphical documentation, using a collection of software tools (AutoCAD, EPLAN, SIMARIS SIVACON, SIMARIS DESIGN) which enable the complete automation of the development of the graphical documentation of a power system. In the fourth chapter, the application of software tools in project management in power systems is discussed. Here, the emphasis is put on the standard software MS Excel and MS Project.

  20. Series-Tuned High Efficiency RF-Power Amplifiers

    DEFF Research Database (Denmark)

    Vidkjær, Jens

    2008-01-01

    An approach to high efficiency RF-power amplifier design is presented. It addresses simultaneously efficiency optimization and peak voltage limitations when transistors are pushed towards their power limits.

  1. GRID computing for experimental high energy physics

    International Nuclear Information System (INIS)

    Moloney, G.R.; Martin, L.; Seviour, E.; Taylor, G.N.; Moorhead, G.F.

    2002-01-01

    Full text: The Large Hadron Collider (LHC), to be completed at the CERN laboratory in 2006, will generate 11 petabytes of data per year. The processing of this large data stream requires a large, distributed computing infrastructure. A recent innovation in high performance distributed computing, the GRID, has been identified as an important tool in data analysis for the LHC. GRID computing has actual and potential application in many fields which require computationally intensive analysis of large, shared data sets. The Australian experimental High Energy Physics community has formed partnerships with the High Performance Computing community to establish a GRID node at the University of Melbourne. Through Australian membership of the ATLAS experiment at the LHC, Australian researchers have an opportunity to be involved in the European DataGRID project. This presentation will include an introduction to the GRID, and its application to experimental High Energy Physics. We will present the results of our studies, including participation in the first LHC data challenge

  2. High-power density miniscale power generation and energy harvesting systems

    Energy Technology Data Exchange (ETDEWEB)

    Lyshevski, Sergey Edward [Department of Electrical and Microelectronics Engineering, Rochester Institute of Technology, Rochester, NY 14623-5603 (United States)

    2011-01-15

    This paper reports the design, analysis, evaluation and characterization of miniscale self-sustained power generation systems. Our ultimate objective is to guarantee highly-efficient mechanical-to-electrical energy conversion, ensure premier wind- or hydro-energy harvesting capabilities, enable electric machinery and power electronics solutions, stabilize output voltage, etc. By performing the advanced scalable power generation system design, we enable miniscale energy sources and energy harvesting technologies. The proposed systems integrate: (1) a turbine which rotates a radial- or axial-topology permanent-magnet synchronous generator at variable angular velocity depending on flow rate, speed and load, and (2) a power electronic module with controllable rectifier, soft-switching converter and energy storage stages. These scalable energy systems can be utilized as miniscale auxiliary and self-sustained power units in various applications, such as aerospace, automotive, biotechnology, biomedical, and marine. The proposed systems uniquely suit various submersible and harsh environment applications. Due to operation in dynamic, rapidly-changing envelopes (variable speed, load changes, etc.), sound solutions are researched, proposed and verified. We focus on enabling system organizations utilizing advanced developments for various components, such as generators, converters, and energy storage. Basic, applied and experimental findings are reported. The prototypes of integrated power generation systems were tested, characterized and evaluated. It is documented that high power density, high efficiency, robustness and other enabling capabilities are achieved. The results and solutions are scalable from micro (~100 μW) to medium (~100 kW) and heavy-duty (sub-megawatt) auxiliary and power systems. (author)

  3. High power factor hybrid rectifier

    African Journals Online (AJOL)

    eobe

    increase in the number of electrical loads that some kind of ... components in the AC power system. Thus, suppl ... al output power; assuring reliability in ... distribution systems. This can be ... Thesis - California Institute of Technology, Capitulo.

  4. Efficient Adjoint Computation of Hybrid Systems of Differential Algebraic Equations with Applications in Power Systems

    Energy Technology Data Exchange (ETDEWEB)

    Abhyankar, Shrirang [Argonne National Lab. (ANL), Argonne, IL (United States); Anitescu, Mihai [Argonne National Lab. (ANL), Argonne, IL (United States); Constantinescu, Emil [Argonne National Lab. (ANL), Argonne, IL (United States); Zhang, Hong [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-03-31

    Sensitivity analysis is an important tool to describe power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this work, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating trajectory sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as DC exciters, by deriving and implementing the adjoint jump conditions that arise from state and time-dependent discontinuities. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach.
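The key claim above, that one adjoint sweep yields sensitivities with respect to all parameters at a cost independent of their number, can be illustrated on a toy linear discrete-time system. This is only a sketch of the general idea, not the authors' hybrid-DAE implementation; all matrices and sizes below are random placeholders.

```python
import numpy as np

# Toy discrete dynamics x_{k+1} = A x_k + B p with objective J = c^T x_N.
rng = np.random.default_rng(1)
n, m, N = 4, 3, 50
A = 0.9 * np.eye(n) + 0.01 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
c = rng.standard_normal(n)
p = rng.standard_normal(m)

def objective(p):
    x = np.zeros(n)
    for _ in range(N):
        x = A @ x + B @ p
    return c @ x

# Discrete adjoint sweep: lam_N = c, lam_k = A^T lam_{k+1},
# dJ/dp = sum_k B^T lam_{k+1}.  One backward pass gives the whole gradient,
# regardless of how many parameters p contains.
lam = c.copy()
grad = np.zeros(m)
for _ in range(N):
    grad += B.T @ lam
    lam = A.T @ lam

# Finite-difference check on the first parameter.
eps = 1e-6
dp = np.zeros(m)
dp[0] = eps
fd = (objective(p + dp) - objective(p)) / eps
print(grad[0], fd)   # the two values should agree closely
```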

  5. Electronic DC transformer with high power density

    NARCIS (Netherlands)

    Pavlovský, M.

    2006-01-01

    This thesis is concerned with the possibilities of increasing the power density of high-power dc-dc converters with galvanic isolation. Three cornerstones for reaching high power densities are identified as: size reduction of passive components, reduction of losses particularly in active components

  6. Ground-glass opacity: High-resolution computed tomography and 64-multi-slice computed tomography findings comparison

    International Nuclear Information System (INIS)

    Sergiacomi, Gianluigi; Ciccio, Carmelo; Boi, Luca; Velari, Luca; Crusco, Sonia; Orlacchio, Antonio; Simonetti, Giovanni

    2010-01-01

    Objective: Comparative evaluation of ground-glass opacity using the conventional high-resolution computed tomography technique and volumetric computed tomography with a 64-row multi-slice scanner, verifying the advantages of the volumetric acquisition and post-processing techniques allowed by the 64-row CT scanner. Methods: Thirty-four patients, in whom a ground-glass opacity pattern had been assessed by previous high-resolution computed tomography during a clinical-radiological follow-up for their lung disease, were studied by means of 64-row multi-slice computed tomography. A comparative evaluation of image quality was done for both CT modalities. Results: Good inter-observer agreement (k value 0.78-0.90) was reported in the detection of ground-glass opacity with the high-resolution computed tomography technique and the volumetric computed tomography acquisition, with a moderate increase of intra-observer agreement (k value 0.46) using volumetric computed tomography rather than high-resolution computed tomography. Conclusions: In our experience, volumetric computed tomography with a 64-row scanner shows good accuracy in the detection of ground-glass opacity, providing better spatial and temporal resolution and more advanced post-processing techniques than high-resolution computed tomography.

  7. Green computing: power optimisation of VFI-based real-time multiprocessor dataflow applications (extended version)

    NARCIS (Netherlands)

    Ahmad, W.; Holzenspies, P.K.F.; Stoelinga, Mariëlle Ida Antoinette; van de Pol, Jan Cornelis

    2015-01-01

    Execution time is no longer the only performance metric for computer systems. In fact, a trend is emerging to trade raw performance for energy savings. Techniques like Dynamic Power Management (DPM, switching to low power state) and Dynamic Voltage and Frequency Scaling (DVFS, throttling processor
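For orientation only, the classic CMOS dynamic-power model that motivates DVFS/DPM reasoning is P_dyn ≈ C_eff·V²·f. The sketch below uses made-up constants and is background material, not the VFI model of the paper above.

```python
# Classic CMOS dynamic-power model behind DVFS reasoning: P_dyn ~ C_eff * V^2 * f.
# For a fixed amount of work (in cycles), execution time is work / f, so the
# dynamic energy is C_eff * V^2 * work: lowering the voltage saves energy.
def dyn_energy_joules(work_cycles, volts, freq_hz, c_eff=1e-9):
    power_w = c_eff * volts**2 * freq_hz
    time_s = work_cycles / freq_hz
    return power_w * time_s

work = 2e9  # cycles of work, made-up figure
print("fast, high V :", dyn_energy_joules(work, volts=1.1, freq_hz=2.0e9), "J")
print("slow, low V  :", dyn_energy_joules(work, volts=0.9, freq_hz=1.2e9), "J")
```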

  8. Sexual aggression when power is new: Effects of acute high power on chronically low-power individuals.

    Science.gov (United States)

    Williams, Melissa J; Gruenfeld, Deborah H; Guillory, Lucia E

    2017-02-01

    Previous theorists have characterized sexually aggressive behavior as an expression of power, yet evidence that power causes sexual aggression is mixed. We hypothesize that power can indeed create opportunities for sexual aggression-but that it is those who chronically experience low power who will choose to exploit such opportunities. Here, low-power men placed in a high-power role showed the most hostility in response to a denied opportunity with an attractive woman (Studies 1 and 2). Chronically low-power men and women given acute power were the most likely to say they would inappropriately pursue an unrequited workplace attraction (Studies 3 and 4). Finally, having power over an attractive woman increased harassment behavior among men with chronic low, but not high, power (Study 5). People who see themselves as chronically denied power appear to have a stronger desire to feel powerful and are more likely to use sexual aggression toward that end. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. A self-powered thin-film radiation detector using intrinsic high-energy current

    Energy Technology Data Exchange (ETDEWEB)

    Zygmanski, Piotr, E-mail: pzygmanski@LROC.HARVARD.EDU, E-mail: Erno-Sajo@uml.edu [Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 (United States); Sajo, Erno, E-mail: pzygmanski@LROC.HARVARD.EDU, E-mail: Erno-Sajo@uml.edu [Department of Physics and Applied Physics, Medical Physics Program, University of Massachusetts Lowell, Lowell, Massachusetts 01854 (United States)

    2016-01-15

    Purpose: The authors introduce a radiation detection method that relies on high-energy current (HEC) formed by secondary charged particles in the detector material, which induces conduction current in an external readout circuit. Direct energy conversion of the incident radiation powers the signal formation without the need for external bias voltage or amplification. The detector the authors consider is a thin-film multilayer device, composed of alternating disparate electrically conductive and insulating layers. The optimal design of HEC detectors consists of microscopic or nanoscopic structures. Methods: Theoretical and computational developments are presented to illustrate the salient properties of the HEC detector and to demonstrate its feasibility. In this work, the authors examine single-sandwiched and periodic layers of Cu and Al, and Au and Al, ranging in thickness from 100 nm to 300 μm and separated by similarly sized dielectric gaps, exposed to 120 kVp x-ray beam (half-value thickness of 4.1 mm of Al). The energy deposition characteristics and the high-energy current were determined using radiation transport computations. Results: The authors found that in a dual-layer configuration, the signal is in the measurable range. For a defined total detector thickness in a multilayer structure, the signal sharply increases with decreasing thickness of the high-Z conductive layers. This paper focuses on the computational results while a companion paper reports the experimental findings. Conclusions: Significant advantages of the device are that it does not require external power supply and amplification to create a measurable signal; it can be made in any size and geometry, including very thin (sub-millimeter to submicron) flexible curvilinear forms, and it is inexpensive. Potential applications include medical dosimetry (both in vivo and external), radiation protection, and other settings where one or more of the above qualities are desired.

  10. Computational models of an inductive power transfer system for electric vehicle battery charge

    Science.gov (United States)

    Anele, A. O.; Hamam, Y.; Chassagne, L.; Linares, J.; Alayli, Y.; Djouani, K.

    2015-09-01

    One of the issues to be solved for electric vehicles (EVs) to become a success is the technical solution of their charging system. In this paper, computational models of an inductive power transfer (IPT) system for EV battery charge are presented. Based on the fundamental principles behind IPT systems, 3 kW single phase and 22 kW three phase IPT systems for Renault ZOE are designed in MATLAB/Simulink. The results obtained based on the technical specifications of the lithium-ion battery and charger type of Renault ZOE show that the models are able to provide the total voltage required by the battery. Also, considering the charging time for each IPT model, they are capable of delivering the electricity needed to power the ZOE. In conclusion, this study shows that the designed computational IPT models may be employed as a support structure needed to effectively power any viable EV.

  11. Computational models of an inductive power transfer system for electric vehicle battery charge

    International Nuclear Information System (INIS)

    Anele, A O; Hamam, Y; Djouani, K; Chassagne, L; Alayli, Y; Linares, J

    2015-01-01

    One of the issues to be solved for electric vehicles (EVs) to become a success is the technical solution of their charging system. In this paper, computational models of an inductive power transfer (IPT) system for EV battery charge are presented. Based on the fundamental principles behind IPT systems, 3 kW single phase and 22 kW three phase IPT systems for Renault ZOE are designed in MATLAB/Simulink. The results obtained based on the technical specifications of the lithium-ion battery and charger type of Renault ZOE show that the models are able to provide the total voltage required by the battery. Also, considering the charging time for each IPT model, they are capable of delivering the electricity needed to power the ZOE. In conclusion, this study shows that the designed computational IPT models may be employed as a support structure needed to effectively power any viable EV. (paper)
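A deliberately simplified first-order estimate of the power an IPT link can deliver, assuming ideal series-series compensation operated at resonance so that the induced EMF ωMI_p drives the load directly. This is an illustrative sketch with made-up numbers, not the MATLAB/Simulink Renault ZOE models of the records above.

```python
import math

# Idealized series-series compensated IPT link at resonance: the secondary
# reactances cancel, the induced EMF w*M*I_p drives the load, and the power
# delivered to R_load is (w*M*I_p)^2 / R_load (coil and converter losses ignored).
def ipt_power_w(freq_hz, mutual_h, i_primary_a_rms, r_load_ohm):
    w = 2.0 * math.pi * freq_hz
    emf = w * mutual_h * i_primary_a_rms      # induced secondary EMF (V rms)
    return emf ** 2 / r_load_ohm

# Made-up operating point, roughly in the few-kW range of on-board chargers.
print(f"{ipt_power_w(85e3, 10e-6, 40.0, 5.0) / 1e3:.1f} kW")
```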

  12. A Case Study on Neural Inspired Dynamic Memory Management Strategies for High Performance Computing.

    Energy Technology Data Exchange (ETDEWEB)

    Vineyard, Craig Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Verzi, Stephen Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    As high performance computing architectures pursue more computational power there is a need for increased memory capacity and bandwidth as well. A multi-level memory (MLM) architecture addresses this need by combining multiple memory types with different characteristics as varying levels of the same architecture. How to efficiently utilize this memory infrastructure is an unknown challenge, and in this research we sought to investigate whether neural inspired approaches can meaningfully help with memory management. In particular we explored neurogenesis inspired resource allocation, and were able to show a neural inspired mixed controller policy can beneficially impact how MLM architectures utilize memory.

  13. Highly reliable computer network for real time system

    International Nuclear Information System (INIS)

    Mohammed, F.A.; Omar, A.A.; Ayad, N.M.A.; Madkour, M.A.I.; Ibrahim, M.K.

    1988-01-01

    Many computer networks have been studied, with different trends regarding the network architecture and the various protocols that govern data transfers and guarantee reliable communication among all nodes. A hierarchical network structure has been proposed to provide a simple and inexpensive way for the realization of a reliable real-time computer network. In such an architecture, all computers at the same level are connected to a common serial channel through intelligent nodes that collectively control data transfers over the serial channel. This level of computer network can be considered a local area computer network (LACN) that can be used in a nuclear power plant control system, since such a plant has geographically dispersed subsystems. Network expansion would be straightforward, requiring only connection to the common channel for each added computer (HOST). All the nodes are designed around a microprocessor chip to provide the required intelligence. The node can be divided into two sections, namely a common section that interfaces with the serial data channel and a private section to interface with the host computer. The latter would naturally tend to have some variations in the hardware details to match the requirements of individual host computers. (fig 7)

  14. High-performance computing for airborne applications

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Manuzatto, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-01-01

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  15. An Efficient Approach for Fast and Accurate Voltage Stability Margin Computation in Large Power Grids

    Directory of Open Access Journals (Sweden)

    Heng-Yi Su

    2016-11-01

    This paper proposes an efficient approach for the computation of the voltage stability margin (VSM) in a large-scale power grid. The objective is to accurately and rapidly determine the load power margin which corresponds to voltage collapse phenomena. The proposed approach is based on the impedance-match-based technique and the model-based technique. It combines the Thevenin equivalent (TE) network method with a cubic spline extrapolation technique and the continuation technique to achieve fast and accurate VSM computation for a bulk power grid. Moreover, the generator Q limits are taken into account for practical applications. Extensive case studies carried out on Institute of Electrical and Electronics Engineers (IEEE) benchmark systems and the Taiwan Power Company (Taipower, Taipei, Taiwan) system are used to demonstrate the effectiveness of the proposed approach.
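The notion of a load power margin ending at a voltage-collapse (nose) point can be illustrated on the smallest possible case, a two-bus system with a source E behind a reactance X feeding a unity-power-factor load. The sketch below is only that illustration, not the Thevenin-equivalent/cubic-spline method proposed in the paper.

```python
import numpy as np

# Two-bus illustration: source E behind reactance X, unity-power-factor load P.
# The load-bus voltage satisfies V^4 - E^2 V^2 + (X P)^2 = 0, so real solutions
# exist only up to the nose point P_max = E^2 / (2 X); beyond it the voltage collapses.
E, X = 1.0, 0.1   # per-unit values, chosen for illustration

def load_voltage(p):
    disc = E**4 - 4.0 * (X * p) ** 2
    if disc < 0:
        return None                                  # past the nose point
    return np.sqrt((E**2 + np.sqrt(disc)) / 2.0)     # upper (stable) branch

for p in np.arange(0.0, 6.0, 0.5):
    v = load_voltage(p)
    print(f"P = {p:4.1f} pu  V = " + ("collapse" if v is None else f"{v:.3f} pu"))
print(f"loadability limit (margin from P = 0): {E**2 / (2 * X):.2f} pu")
```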

  16. Switching transients in high-frequency high-power converters using power MOSFET's

    Science.gov (United States)

    Sloane, T. H.; Owen, H. A., Jr.; Wilson, T. G.

    1979-01-01

    The use of MOSFETs in a high-frequency high-power dc-to-dc converter is investigated. Consideration is given to the phenomena associated with the paralleling of MOSFETs and to the effect of stray circuit inductances on the converter circuit performance. Analytical relationships between various time constants during the turning-on and turning-off intervals are derived which provide estimates of plateau and peak levels during these intervals.

  17. High Power Electron Accelerator Prototype

    CERN Document Server

    Tkachenko, Vadim; Cheskidov, Vladimir; Korobeynikov, G I; Kuznetsov, Gennady I; Lukin, A N; Makarov, Ivan; Ostreiko, Gennady; Panfilov, Alexander; Sidorov, Alexey; Tarnetsky, Vladimir V; Tiunov, Michael A

    2005-01-01

    In recent years, new powerful industrial electron accelerators have appeared on the market. This has increased interest in radiation technologies using high energy X-rays due to their high penetration ability. However, because of the low efficiency of X-ray conversion for electrons with energy below 5 MeV, the intensity of X-rays required for some industrial applications can be achieved only when the beam power exceeds 300 kW. The report describes the ILU-12 industrial electron accelerator project, for electron energies up to 5 MeV and beam power up to 300 kW, specially designed for use in industrial applications. In the first stage of work we plan to use the existing generator designed for the ILU-8 accelerator. It is based on the GI-50A triode and provides pulse power of up to 1.5-2 MW and up to 20-30 kW of average power. The report presents the basic concepts and the current status of the project.

  18. A personal computer code for seismic evaluations of nuclear power plants facilities

    International Nuclear Information System (INIS)

    Xu, J.; Philippacopoulos, A.J.; Graves, H.

    1990-01-01

    The program CARES (Computer Analysis for Rapid Evaluation of Structures) is an integrated computational system being developed by Brookhaven National Laboratory (BNL) for the U.S. Nuclear Regulatory Commission. It is specifically designed to be a personal computer (PC) operated package which may be used to determine the validity and accuracy of analysis methodologies used for structural safety evaluations of nuclear power plants. CARES is structured in a modular format. Each module performs a specific type of analysis i.e., static or dynamic, linear or nonlinear, etc. This paper describes the various features which have been implemented into the Seismic Module of CARES

  19. Computation and experiment results of the grounding model of Three Gorges Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Wen Xishan; Zhang Yuanfang; Yu Jianhui; Chen Cixuan [Wuhan University of Hydraulic and Electrical Engineering (China); Qin Liming; Xu Jun; Shu Lianfu [Yangtze River Water Resources Commission, Wuhan (China)

    1999-07-01

    A model for the computation of the grounding parameters of the grids of the Three Gorges Power Plant (TGPP) on the Yangtze River is presented in this paper. Using this model, computation and analysis of the grounding grids are carried out. The results show that the reinforcing grid of the dam is the main body of current dissipation; it must be reliably welded to form a good grounding grid. The experimental results show that the method and program of the computations are correct. (UK)

  20. Some Aspects of Process Computers Configuration Control in Nuclear Power Plant Krsko - Process Computer Signal Configuration Database (PCSCDB)

    International Nuclear Information System (INIS)

    Mandic, D.; Kocnar, R.; Sucic, B.

    2002-01-01

    During the operation of NEK and other nuclear power plants it has been recognized that certain issues related to the usage of digital equipment and associated software in NPP technological process protection, control and monitoring are not adequately addressed in the existing programs and procedures. The term and the process of Process Computers Configuration Control join three 10CFR50 Appendix B quality requirements of Process Computers application in NPP: Design Control, Document Control, and Identification and Control of Materials, Parts and Components. This paper describes the Process Computer Signal Configuration Database (PCSCDB), which was developed and implemented in order to resolve some aspects of Process Computer Configuration Control related to the signals or database points that exist in the life cycle of different Process Computer Systems (PCS) in Nuclear Power Plant Krsko. PCSCDB is a controlled, master database related to the definition and description of the configurable database points associated with all Process Computer Systems in NEK. PCSCDB holds attributes related to the configuration of addressable and configurable real-time database points, and attributes related to the signal life cycle references and history data, such as: Input/Output signals; manually input database points; program constants; setpoints; calculated (by application program or SCADA calculation tools) database points; control flags (for example, enabling/disabling a certain program feature); signal acquisition design references to the DCM (Document Control Module, application software for document control within the Management Information System - MIS) and MECL (Master Equipment and Component List, MIS application software for identification and configuration control of plant equipment and components); usage of a particular database point in particular application software packages and in the man-machine interface features (display mimics, printout reports, ...); signals history (EEAR Engineering

  1. A neural network based computational model to predict the output power of different types of photovoltaic cells.

    Science.gov (United States)

    Xiao, WenBo; Nazario, Gina; Wu, HuaMing; Zhang, HuaMing; Cheng, Feng

    2017-01-01

    In this article, we introduced an artificial neural network (ANN) based computational model to predict the output power of three types of photovoltaic cells: mono-crystalline (mono-), multi-crystalline (multi-), and amorphous (amor-) crystalline. The prediction results are very close to the experimental data, and are also influenced by the number of hidden neurons. The order of the solar power output's sensitivity to external conditions, from smallest to largest, is: multi-, mono-, and amorphous crystalline silicon cells. In addition, the dependence of the power prediction on the number of hidden neurons was studied. For multi-crystalline and amorphous cells, three or four hidden-layer units resulted in high correlation coefficients and low MSEs. For mono-crystalline cells, the best results were achieved with eight hidden-layer units.

  2. A neural network based computational model to predict the output power of different types of photovoltaic cells.

    Directory of Open Access Journals (Sweden)

    WenBo Xiao

    In this article, we introduced an artificial neural network (ANN) based computational model to predict the output power of three types of photovoltaic cells: mono-crystalline (mono-), multi-crystalline (multi-), and amorphous (amor-) crystalline. The prediction results are very close to the experimental data, and are also influenced by the number of hidden neurons. The order of the solar power output's sensitivity to external conditions, from smallest to largest, is: multi-, mono-, and amorphous crystalline silicon cells. In addition, the dependence of the power prediction on the number of hidden neurons was studied. For multi-crystalline and amorphous cells, three or four hidden-layer units resulted in high correlation coefficients and low MSEs. For mono-crystalline cells, the best results were achieved with eight hidden-layer units.
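A minimal sketch of the modelling idea in the two records above: a small feed-forward ANN with a handful of hidden neurons mapping operating conditions to output power. The synthetic irradiance/temperature data and the scikit-learn setup are illustrative assumptions, not the papers' measured data or tooling.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Synthetic (irradiance W/m^2, module temperature C) -> output power (W) samples,
# standing in for measured photovoltaic data.
rng = np.random.default_rng(42)
irr = rng.uniform(100.0, 1000.0, 500)
temp = rng.uniform(10.0, 60.0, 500)
power = 0.15 * irr * (1.0 - 0.004 * (temp - 25.0)) + rng.normal(0.0, 2.0, 500)

X = np.column_stack([irr, temp])
X_tr, X_te, y_tr, y_te = train_test_split(X, power, test_size=0.2, random_state=0)

# Four hidden neurons, echoing the papers' finding that 3-4 units suffice for
# multi-crystalline and amorphous cells (8 for mono-crystalline).
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(4,), max_iter=5000,
                                   random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```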

  3. High power rf amplifiers for accelerator applications: The large orbit gyrotron and the high current, space charge enhanced relativistic klystron

    International Nuclear Information System (INIS)

    Stringfield, R.M.; Fazio, M.V.; Rickel, D.G.; Kwan, T.J.T.; Peratt, A.L.; Kinross-Wright, J.; Van Haaften, F.W.; Hoeberling, R.F.; Faehl, R.; Carlsten, B.; Destler, W.W.; Warner, L.B.

    1991-01-01

    Los Alamos is investigating a number of high power microwave (HPM) sources for their potential to power advanced accelerators. Included in this investigation are the large orbit gyrotron amplifier and oscillator (LOG) and the relativistic klystron amplifier (RKA). LOG amplifier development is newly underway. Electron beam power levels of 3 GW, 70 ns duration, are planned, with anticipated conversion efficiencies into RF on the order of 20 percent. Ongoing investigations on this device include experimental improvement of the electron beam optics (to allow injection of a suitable fraction of the electron beam born in the gun into the amplifier structure), and computational studies of resonator design and RF extraction. Recent RKA studies have operated at electron beam powers into the device of 1.35 GW in microsecond duration pulses. The device has yielded modulated electron beam power approaching 300 MW using 3-5 kW of RF input drive. RF powers extracted into waveguide have been up to 70 MW, suggesting that more power is available from the device than has been converted to-date in the extractor

  4. Resonant High Power Combiners

    CERN Document Server

    Langlois, Michel; Peillex-Delphe, Guy

    2005-01-01

    Particle accelerators need radio frequency sources. Above 300 MHz, the amplifiers used have mostly been high power klystrons developed for this sole purpose. As with military equipment, users are drawn to buy "off the shelf" components rather than dedicated devices. IOTs have replaced most klystrons in TV transmitters and are finding their way into particle accelerators. They are less bulky, easier to replace and more efficient at reduced power. They are also far less powerful. What is the benefit of very compact sources if huge 3 dB couplers are needed to combine the power? To alleviate this drawback, we investigated a resonant combiner, operating in TM010 mode, able to combine 3 to 5 IOTs. Since our IOTs can deliver 80 kW C.W. apiece, the combined power would reach 400 kW minus the minor insertion loss. Values for matching and insertion loss are given. The behavior of the system in case of IOT failure is analyzed.
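A back-of-envelope check of the combining budget quoted above ("400 kW minus the minor insertion loss"), with the insertion loss as an assumed placeholder value.

```python
# N IOTs at P_unit each, reduced by the combiner's insertion loss (in dB).
def combined_power_kw(n_units, p_unit_kw, insertion_loss_db):
    return n_units * p_unit_kw * 10.0 ** (-insertion_loss_db / 10.0)

# 5 IOTs x 80 kW CW with an assumed 0.1 dB insertion loss (placeholder figure).
print(f"{combined_power_kw(5, 80.0, 0.1):.1f} kW")   # ~391 kW, i.e. "400 kW minus losses"
```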

  5. High Average Power, High Energy Short Pulse Fiber Laser System

    Energy Technology Data Exchange (ETDEWEB)

    Messerly, M J

    2007-11-13

    Recently, continuous wave fiber laser systems with output powers in excess of 500 W with good beam quality have been demonstrated [1]. High energy, ultrafast, chirped pulse fiber laser systems have achieved record output energies of 1 mJ [2]. However, these high-energy systems have not been scaled beyond a few watts of average output power. Fiber laser systems are attractive for many applications because they offer the promise of high efficiency, compact, robust systems that are turnkey. Applications such as cutting, drilling and materials processing; front-end systems for high energy pulsed lasers (such as petawatts); and laser-based sources of high spatial coherence, high flux x-rays all require high energy short pulses, and two of these three applications also require high average power. The challenge in creating a high energy chirped pulse fiber laser system is to find a way to scale the output energy while avoiding nonlinear effects and maintaining good beam quality in the amplifier fiber. To this end, our 3-year LDRD program sought to demonstrate a high energy, high average power fiber laser system. This work included exploring designs of large mode area optical fiber amplifiers for high energy systems as well as understanding the issues associated with chirped pulse amplification in optical fiber amplifier systems.

  6. FPGA Based Low Power Router Design Using High Speed Transeceiver Logic IO Standard

    DEFF Research Database (Denmark)

    Thind, Vandana; Hussain, Dil muhammed Akbar

    2015-01-01

    and information. A router, the main component of computer networks, is an intelligent device used to transfer data packets between various computer networks. A router must consume low power to perform its work in an efficient manner. To achieve this, work has been done to make an FPGA based low power design using...

  7. Power Constrained High-Level Synthesis of Battery Powered Digital Systems

    DEFF Research Database (Denmark)

    Nielsen, Sune Fallgaard; Madsen, Jan

    2003-01-01

    We present a high-level synthesis algorithm solving the combined scheduling, allocation and binding problem minimizing area under both latency and maximum power per clock-cycle constraints. Our approach eliminates the large power spikes, resulting in an increased battery lifetime, a property of utmost importance for battery powered embedded systems. Our approach extends the partial-clique partitioning algorithm by introducing power awareness through a heuristic algorithm which bounds the design space to that of power-feasible schedules. We have applied our algorithm on a set of dataflow graphs...

  8. Improving Quality of Service and Reducing Power Consumption with WAN accelerator in Cloud Computing Environments

    OpenAIRE

    Shin-ichi Kuribayashi

    2013-01-01

    The widespread use of cloud computing services is expected to degrade the Quality of Service and to increase the power consumption of ICT devices, since the distance to a server becomes longer than before. Migration of virtual machines over a wide area can solve many problems such as load balancing and power saving in cloud computing environments. This paper proposes to dynamically apply a WAN accelerator within the network when a virtual machine is moved to a distant center, in order to preve...

  9. High stability, high current DC-power supplies

    International Nuclear Information System (INIS)

    Hosono, K.; Hatanaka, K.; Itahashi, T.

    1995-01-01

    Improvements to the power supplies and the control system of the AVF cyclotron, which is used as an injector for the ring cyclotron, and to the beam transport system to the ring cyclotron were made in order to obtain a higher quality and more stable beam. The power supply of the main coil of the AVF cyclotron was replaced with a new one. The old DCCTs (zero-flux current transformers) used for the power supplies of the trim coils of the AVF cyclotron were replaced with new DCCTs to obtain more stability. The potentiometers used for the reference voltages in the other power supplies of the AVF cyclotron and the transport system were replaced by a temperature-controlled DAC method for numerical-value settings. This paper presents the results of the improvements. (author)

  10. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system to manage these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  11. Computations of steam flow and heat transfer in nuclear power plant condensers

    International Nuclear Information System (INIS)

    Yuan, A.

    1997-01-01

    To improve the performance of its PWR nuclear power plants, Electricite de France has developed a performance monitoring system that simultaneously checks the operation of the components of the secondary system. The performance monitoring system is based on the computational software CITER for steady-state runs. A one-dimensional condenser model has been developed. Application of this code to a nuclear power plant condenser shows that the predicted values are in good agreement with the design values

  12. High-Performance Computing Paradigm and Infrastructure

    CERN Document Server

    Yang, Laurence T

    2006-01-01

    With hyperthreading in Intel processors, hypertransport links in next generation AMD processors, multi-core silicon in today's high-end microprocessors from IBM and emerging grid computing, parallel and distributed computers have moved into the mainstream

  13. Highly Parallel Computing Architectures by using Arrays of Quantum-dot Cellular Automata (QCA): Opportunities, Challenges, and Recent Results

    Science.gov (United States)

    Fijany, Amir; Toomarian, Benny N.

    2000-01-01

    There has been significant improvement in the performance of VLSI devices, in terms of size, power consumption, and speed, in recent years, and this trend may continue for some time. However, it is a well known fact that there are major obstacles, i.e., the physical limitation of feature size reduction and the ever increasing cost of foundries, that would prevent the long term continuation of this trend. This has motivated the exploration of some fundamentally new technologies that are not dependent on the conventional feature size approach. Such technologies are expected to enable scaling to continue to the ultimate level, i.e., molecular and atomistic size. Quantum computing, quantum dot-based computing, DNA based computing, biologically inspired computing, etc., are examples of such new technologies. In particular, quantum-dot based computing using Quantum-dot Cellular Automata (QCA) has recently been intensely investigated as a promising new technology capable of offering significant improvement over conventional VLSI in terms of reduction of feature size (and hence increase in integration level), reduction of power consumption, and increase of switching speed. Quantum dot-based computing and memory in general, and QCA specifically, are intriguing to NASA due to their high packing density (10^11 - 10^12 per square cm), low power consumption (no transfer of current) and potentially higher radiation tolerance. Under the Revolutionary Computing Technology (RTC) Program at the NASA/JPL Center for Integrated Space Microelectronics (CISM), we have been investigating the potential applications of QCA for the space program. To this end, exploiting the intrinsic features of QCA, we have designed novel QCA-based circuits for co-planar (i.e., single layer) and compact implementation of a class of data permutation matrices, a class of interconnection networks, and a bit-serial processor. Building upon these circuits, we have developed novel algorithms and QCA

  14. Thread selection according to predefined power characteristics during context switching on compute nodes

    Science.gov (United States)

    None, None

    2013-06-04

    Methods, apparatus, and products are disclosed for thread selection during context switching on a plurality of compute nodes that includes: executing, by a compute node, an application using a plurality of threads of execution, including executing one or more of the threads of execution; selecting, by the compute node from a plurality of available threads of execution for the application, a next thread of execution in dependence upon power characteristics for each of the available threads; determining, by the compute node, whether criteria for a thread context switch are satisfied; and performing, by the compute node, the thread context switch if the criteria for a thread context switch are satisfied, including executing the next thread of execution.
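A rough sketch of the idea claimed above (choosing the next thread at a context switch from predefined power characteristics), written as ordinary user-level logic rather than the patented node-level implementation; the thread table and power budget are invented placeholders.

```python
# Each candidate thread carries a predefined power characteristic (estimated watts
# while running). At a context switch, pick the ready thread that best fits the
# node's current power budget.
threads = [
    {"tid": 1, "power_w": 18.0, "ready": True},
    {"tid": 2, "power_w": 9.5,  "ready": True},
    {"tid": 3, "power_w": 14.0, "ready": False},
]

def select_next_thread(threads, power_budget_w):
    ready = [t for t in threads if t["ready"]]
    fitting = [t for t in ready if t["power_w"] <= power_budget_w]
    if fitting:
        # Prefer the most capable (highest-power) thread that still fits the budget.
        return max(fitting, key=lambda t: t["power_w"])
    # Otherwise fall back to the least power-hungry ready thread.
    return min(ready, key=lambda t: t["power_w"])

print(select_next_thread(threads, power_budget_w=15.0))   # -> thread 2 in this example
```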

  15. Could running experience on SPMD computers contribute to the architectural choices for future dedicated computers for high energy physics simulation

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Silva, J.; Auguin, M.; Boeri, F.

    1989-01-01

    Results obtained on a strongly coupled parallel computer are reported. They concern Monte-Carlo simulation and pattern recognition. Though the calculations were made on an experimental computer of rather low processing power, it is believed that the quoted figures could give useful indications on architectural choices for dedicated computers

  16. Brain Computation Is Organized via Power-of-Two-Based Permutation Logic

    Science.gov (United States)

    Xie, Kun; Fox, Grace E.; Liu, Jun; Lyu, Cheng; Lee, Jason C.; Kuang, Hui; Jacobs, Stephanie; Li, Meng; Liu, Tianming; Song, Sen; Tsien, Joe Z.

    2016-01-01

    There is considerable scientific interest in understanding how cell assemblies—the long-presumed computational motif—are organized so that the brain can generate intelligent cognition and flexible behavior. The Theory of Connectivity proposes that the origin of intelligence is rooted in a power-of-two-based permutation logic (N = 2^i – 1), producing specific-to-general cell-assembly architecture capable of generating specific perceptions and memories, as well as generalized knowledge and flexible actions. We show that this power-of-two-based permutation logic is widely used in cortical and subcortical circuits across animal species and is conserved for the processing of a variety of cognitive modalities including appetitive, emotional and social information. However, modulatory neurons, such as dopaminergic (DA) neurons, use a simpler logic despite their distinct subtypes. Interestingly, this specific-to-general permutation logic remained largely intact although NMDA receptors—the synaptic switch for learning and memory—were deleted throughout adulthood, suggesting that the logic is developmentally pre-configured. Moreover, this computational logic is implemented in the cortex via combining a random-connectivity strategy in superficial layers 2/3 with nonrandom organizations in deep layers 5/6. This randomness of layers 2/3 cliques—which preferentially encode specific and low-combinatorial features and project inter-cortically—is ideal for maximizing cross-modality novel pattern-extraction, pattern-discrimination and pattern-categorization using sparse code, consequently explaining why it requires hippocampal offline-consolidation. In contrast, the nonrandomness in layers 5/6—which consists of few specific cliques but a higher portion of more general cliques projecting mostly to subcortical systems—is ideal for feedback-control of motivation, emotion, consciousness and behaviors. These observations suggest that the brain’s basic computational

  17. Brain computation is organized via power-of-two-based permutation logic

    Directory of Open Access Journals (Sweden)

    Kun Xie

    2016-11-01

    There is considerable scientific interest in understanding how cell assemblies - the long-presumed computational motif - are organized so that the brain can generate cognitive behavior. The Theory of Connectivity proposes that the origin of intelligence is rooted in a power-of-two-based permutation logic (N = 2^i – 1), giving rise to the specific-to-general cell-assembly organization capable of generating specific perceptions and memories, as well as generalized knowledge and flexible actions. We show that this power-of-two-based computational logic is widely used in cortical and subcortical circuits across animal species and is conserved for the processing of a variety of cognitive modalities including appetitive, emotional and social cognitions. However, modulatory neurons, such as dopaminergic neurons, use a simpler logic despite their distinct subtypes. Interestingly, this specific-to-general permutation logic remained largely intact even though the NMDA receptors – the synaptic switch for learning and memory – were deleted throughout adulthood, suggesting that it is likely developmentally pre-configured. Moreover, this logic is implemented in the cortex vertically via combining a random-connectivity strategy in superficial layers 2/3 with nonrandom organizations in deep layers 5/6. This randomness of layers 2/3 cliques – which preferentially encode specific and low-combinatorial features and project inter-cortically – is ideal for maximizing cross-modality novel pattern-extraction, pattern-discrimination, and pattern-categorization using sparse code, consequently explaining why it requires hippocampal offline-consolidation. In contrast, the non-randomness in layers 5/6 - which consists of few specific cliques but a higher portion of more general cliques projecting mostly to subcortical systems – is ideal for robust feedback-control of motivation, emotion, consciousness, and behaviors. These observations suggest that the brain’s basic
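The count N = 2^i – 1 in the two records above is simply the number of non-empty subsets of i distinct inputs (each subset being one candidate cell-assembly clique), which a few lines of enumeration confirm.

```python
from itertools import combinations

# Number of non-empty subsets of i distinct inputs; each subset is one candidate clique.
def clique_count(i):
    inputs = range(i)
    return sum(1 for r in range(1, i + 1) for _ in combinations(inputs, r))

for i in range(1, 7):
    print(i, clique_count(i), 2**i - 1)   # the enumeration matches 2^i - 1
```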

  18. Analysis and modeling of social influence in high performance computing workloads

    KAUST Repository

    Zheng, Shuai

    2011-01-01

    Social influence among users (e.g., collaboration on a project) creates bursty behavior in the underlying high performance computing (HPC) workloads. Using representative HPC and cluster workload logs, this paper identifies, analyzes, and quantifies the level of social influence across HPC users. We show the existence of a social graph that is characterized by a pattern of dominant users and followers. This pattern also follows a power-law distribution, which is consistent with those observed in mainstream social networks. Given its potential impact on HPC workloads prediction and scheduling, we propose a fast-converging, computationally-efficient online learning algorithm for identifying social groups. Extensive evaluation shows that our online algorithm can (1) quickly identify the social relationships by using a small portion of incoming jobs and (2) can efficiently track group evolution over time. © 2011 Springer-Verlag.

  19. Application of parallel connected power-MOSFET elements to high current d.c. power supply

    International Nuclear Information System (INIS)

    Matsukawa, Tatsuya; Shioyama, Masanori; Shimada, Katsuhiro; Takaku, Taku; Neumeyer, Charles; Tsuji-Iio, Shunji; Shimada, Ryuichi

    2001-01-01

    The low aspect ratio spherical torus (ST), which has a single-turn toroidal field coil, requires an extremely high d.c. current, as high as 20 MA, to energize the coil. Considering the ratings of such extremely high current and low voltage, the power-MOSFET element is employed as the switching device for the a.c./d.c. converter of the power supply. One of the advantages of power-MOSFET elements is their low on-state resistance, which suits high-current, low-voltage operation. Recently, the capacity of power-MOSFET elements has increased and their on-state resistance has decreased, so that the possibility of constructing a high-current, low-voltage a.c./d.c. converter with parallel-connected power-MOSFET elements has been growing. With the aim of developing a high-current d.c. power supply using power-MOSFETs, the basic characteristics of parallel operation of power-MOSFET elements are experimentally investigated, and synchronous rectifier type and bi-directional self-commutated type a.c./d.c. converters using parallel-connected power-MOSFET elements are proposed
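A first-order sanity check of why paralleling low on-state-resistance MOSFETs suits high-current, low-voltage conversion, assuming ideal current sharing; the device count, on-resistance and total current are placeholder values, not the converter's actual ratings.

```python
# N MOSFETs in parallel with ideal current sharing: effective on-resistance is
# R_on / N, each device carries I / N, and conduction loss is I^2 * R_on / N.
def parallel_mosfet_conduction(total_current_a, r_on_ohm, n_parallel):
    r_eff = r_on_ohm / n_parallel
    loss_w = total_current_a ** 2 * r_eff
    return loss_w, total_current_a / n_parallel

loss_w, i_per_device = parallel_mosfet_conduction(2000.0, 1e-3, 100)
print(f"conduction loss ~ {loss_w:.0f} W, {i_per_device:.0f} A per device")
```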

  20. Quo vadis: Hydrologic inverse analyses using high-performance computing and a D-Wave quantum annealer

    Science.gov (United States)

    O'Malley, D.; Vesselinov, V. V.

    2017-12-01

    Classical microprocessors have had a dramatic impact on hydrology for decades, due largely to the exponential growth in computing power predicted by Moore's law. However, this growth is not expected to continue indefinitely and has already begun to slow. Quantum computing is an emerging alternative to classical microprocessors. Here, we demonstrated cutting edge inverse model analyses utilizing some of the best available resources in both worlds: high-performance classical computing and a D-Wave quantum annealer. The classical high-performance computing resources are utilized to build an advanced numerical model that assimilates data from O(10^5) observations, including water levels, drawdowns, and contaminant concentrations. The developed model accurately reproduces the hydrologic conditions at a Los Alamos National Laboratory contamination site, and can be leveraged to inform decision-making about site remediation. We demonstrate the use of a D-Wave 2X quantum annealer to solve hydrologic inverse problems. This work can be seen as an early step in quantum-computational hydrology. We compare and contrast our results with an early inverse approach in classical-computational hydrology that is comparable to the approach we use with quantum annealing. Our results show that quantum annealing can be useful for identifying regions of high and low permeability within an aquifer. While the problems we consider are small-scale compared to the problems that can be solved with modern classical computers, they are large compared to the problems that could be solved with early classical CPUs. Further, the binary nature of the high/low permeability problem makes it well-suited to quantum annealing, but challenging for classical computers.
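
    The binary high/low permeability structure mentioned above maps naturally onto a quadratic unconstrained binary optimization (QUBO), the problem class a quantum annealer samples from. The brute-force sketch below uses a tiny synthetic Q matrix (an assumption, not the authors' formulation) and shows the classical baseline against which annealer samples could be checked.

    from itertools import product

    def solve_qubo_brute_force(Q):
        """Minimize x^T Q x over binary vectors x by exhaustive search (small n only)."""
        n = len(Q)
        best_x, best_e = None, float("inf")
        for bits in product((0, 1), repeat=n):
            energy = sum(Q[i][j] * bits[i] * bits[j] for i in range(n) for j in range(n))
            if energy < best_e:
                best_x, best_e = bits, energy
        return best_x, best_e

    if __name__ == "__main__":
        # Toy 4-cell permeability field: negative diagonal terms favour labelling a cell
        # "high"; positive off-diagonal terms penalize adjacent cells both being "high".
        Q = [[-1.0,  0.5,  0.0,  0.0],
             [ 0.0, -1.0,  0.5,  0.0],
             [ 0.0,  0.0,  0.3,  0.5],
             [ 0.0,  0.0,  0.0,  0.3]]
        labels, energy = solve_qubo_brute_force(Q)
        print(labels, energy)                  # 1 = high permeability, 0 = low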

  1. Highly-stabilized power supply for synchrotron accelerators. High speed, low ripple power supply

    Energy Technology Data Exchange (ETDEWEB)

    Sato, Kenji [Osaka Univ., Ibaraki (Japan). Research Center for Nuclear Physics; Kumada, Masayuki; Fukami, Kenji; Koseki, Shoichiro; Kubo, Hiroshi; Kanazawa, Toru

    1997-02-01

    In synchrotron accelerators, in order to utilize the high energy beam effectively, operation consists of repeated acceleration and extraction over short periods. In order to keep the beam orbit stable during acceleration, the power supply is required to track the current reference with an error of less than 10^-3. Further, in order to maintain the intensity and uniformity of the beam when it is extracted, very low ripple is required in the output current. Power supplies having such characteristics have been developed and applied to the HIMAC and SPring-8. As examples of synchrotron applications, accelerators for medical treatment and for the generation of synchrotron radiation are described. For the power supplies of the deflection magnets and quadrupole magnets of synchrotron accelerators, the specifications of the main power supply, the method of reducing ripple, the method of improving tracking, and active filter control are reported. As test results, measurements of current ripple and tracking error are shown. The reduction of ripple was enabled by a common-mode filter and the symmetrical connection of the electromagnets, and high-speed response was realized by compensating for the delay with an active filter. (K.I.)

  2. Could running experience on SPMD computers contribute to the architectural choices for future dedicated computers for high energy physics simulation?

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Silva, J.; Auguin, M.; Boeri, F.

    1989-01-01

    Results obtained on a strongly coupled parallel computer are reported. They concern Monte-Carlo simulation and pattern recognition. Though the calculations were made on an experimental computer of rather low processing power, it is believed that the quoted figures could give useful indications on architectural choices for dedicated computers. (orig.)

  3. Grid computing in high energy physics

    CERN Document Server

    Avery, P

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software r...

  4. A solar powered wireless computer mouse. Industrial design concepts

    Energy Technology Data Exchange (ETDEWEB)

    Reich, N.H.; Van Sark, W.G.J.H.M.; Alsema, E.A.; Turkenburg, W.C. [Department of Science, Technology and Society, Copernicus Institute, Utrecht University, Heidelberglaan 2, 3584 CS Utrecht (Netherlands); Veefkind, M.; Silvester, S. [Industrial Design Engineering, Delft University of Technology, Landbergstraat 15, 2628 CE Delft (Netherlands)

    2009-02-15

    A solar powered wireless computer mouse (SPM) was chosen to serve as a case study for the evaluation and optimization of industrial design processes of photovoltaic (PV) powered consumer systems. As the design process requires expert knowledge in various technical fields, we assessed and compared the following: appropriate selection of the integrated PV type, battery capacity and type, possible electronic circuitries for PV-battery coupling, and material properties concerning the mechanical incorporation of PV into the enclosure. Besides technical requirements, ergonomic aspects and design aesthetics with respect to good 'sun-harvesting' properties influenced the design process. This is particularly important as simulations show users can positively influence energy balances by 'sun-bathing' the PV mouse. A total of 15 SPM prototypes were manufactured and tested by actual users. Although user satisfaction proved the SPM concept to be feasible, future research still needs to address in greater detail user acceptance related to product dimensions and user willingness to pro-actively 'sun-bathe' PV powered products. (author)

  5. A new VME based high voltage power supply for large experiments

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, S.C.; Angstadt, R.D.; Droege, T.F.; Johnson, M.E.; MacKinnon, B.A.; McNulty, S.E.; Shea, M.F.; Thompson, R.N.; Watson, M.M. (Fermi National Accelerator Lab., Batavia, IL (United States)); Franzini, P. (Columbia Univ., New York, NY (United States)); Jones, A.A. (Superconducting Super Collider Lab., Dallas, TX (United States)); Lopez, M.L. (La Plata Univ. Nacional (Argentina)); Wimpenny, S.J.; Yang, M.J

    1991-11-01

    A new VME based high voltage power supply has been developed for the D0 experiment at Fermilab. There are three types of supplies delivering up to ±5.6 kV at 1.0 mA or +2.0 kV at 3.0 mA with a set accuracy of 1.5 V and extremely low voltage ripple. Complete computer control has allowed many special features to be developed for the supply, including user-defined control and monitor groups, variable ramp rates, and advanced histogram and graphic functions. 3 refs.

  6. A new VME based high voltage power supply for large experiments

    International Nuclear Information System (INIS)

    Ahn, S.C.; Angstadt, R.D.; Droege, T.F.; Johnson, M.E.; MacKinnon, B.A.; McNulty, S.E.; Shea, M.F.; Thompson, R.N.; Watson, M.M.; Franzini, P.; Jones, A.A.; Lopez, M.L.; Wimpenny, S.J.; Yang, M.J.

    1991-11-01

    A new VME based high voltage power supply has been developed for the D0 experiment at Fermilab. There are three types of supplies delivering up to ±5.6 kV at 1.0 mA or +2.0 kV at 3.0 mA with a set accuracy of 1.5 V and extremely low voltage ripple. Complete computer control has allowed many special features to be developed for the supply, including user-defined control and monitor groups, variable ramp rates, and advanced histogram and graphic functions. 3 refs

  7. Cluster-state quantum computing enhanced by high-fidelity generalized measurements.

    Science.gov (United States)

    Biggerstaff, D N; Kaltenbaek, R; Hamel, D R; Weihs, G; Rudolph, T; Resch, K J

    2009-12-11

    We introduce and implement a technique to extend the quantum computational power of cluster states by replacing some projective measurements with generalized quantum measurements (POVMs). As an experimental demonstration we fully realize an arbitrary three-qubit cluster computation by implementing a tunable linear-optical POVM, as well as fast active feedforward, on a two-qubit photonic cluster state. Over 206 different computations, the average output fidelity is 0.9832 ± 0.0002; furthermore, the error contribution from our POVM device and feedforward is only of order 10^-3, less than some recent thresholds for fault-tolerant cluster computing.

  8. Reactor G1: high power experiments

    International Nuclear Information System (INIS)

    Laage, F. de; Teste du Baillet, A.; Veyssiere, A.; Wanner, G.

    1957-01-01

    The experiments carried out in the start-up programme of the reactor G1 comprised a series of tests at high power, which allowed the following points to be studied: 1- Effect of poisoning by xenon (absolute value, evolution). 2- Temperature coefficients of the uranium and graphite for a temperature distribution corresponding to heating by fission. 3- Effect of the pressure (due to the cooling system) on the reactivity. 4- Calibration of the safety rods as a function of their position in the pile (1). 5- Temperature distribution of the graphite, the sheathing, the uranium and the air leaving the channels, in a pile running normally at high power. 6- Neutron flux distribution in a pile running normally at high power. 7- Determination of the power by nuclear and thermodynamic methods. These experiments were carried out under two very different pile conditions. From 1 to 15 August 1956, a series of power increases, followed by periods of stabilisation, was induced in a pile containing uranium only, in 457 channels, amounting to about 34 tons of fuel. Knowledge of the efficiency of the control rods in such a pile made it possible to measure with good accuracy the principal effects at high temperatures, that is, to deal with points 1, 2, 3 and 5. Flux charts giving information on the variations of the material Laplacian and extrapolation lengths in the reflector have been drawn up. Finally the thermodynamic power was measured under good conditions, in spite of some installation difficulties. On September 16, the pile had its final charge of 100 tons. All the channels were loaded, 1,234 with uranium and 53 (i.e. exactly 4 per cent of the total number) with thorium uniformly distributed in a square lattice of 100 cm side. Since technical difficulties prevented the calibration of the control rods, the measurements were limited to the determination of the thermodynamic power and the temperature distributions (points 5 and 7). This report will

  9. HIGH-FIDELITY SIMULATION-DRIVEN MODEL DEVELOPMENT FOR COARSE-GRAINED COMPUTATIONAL FLUID DYNAMICS

    Energy Technology Data Exchange (ETDEWEB)

    Hanna, Botros N.; Dinh, Nam T.; Bolotnov, Igor A.

    2016-06-01

    Nuclear reactor safety analysis requires identifying various credible accident scenarios and determining their consequences. For full-scale nuclear power plant system behavior, it is impossible to obtain sufficient experimental data for a broad range of risk-significant accident scenarios. In single-phase flow convective problems, Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) can provide high fidelity results when physical data are unavailable. However, these methods are computationally expensive and cannot be afforded for the simulation of long transient scenarios in nuclear accidents, despite extraordinary advances in high performance scientific computing over the past decades. The major issue is the inability to parallelize the transient computation, which makes the number of time steps required by high-fidelity methods unaffordable for long transients. In this work, we propose to apply a high fidelity simulation-driven approach to model the sub-grid scale (SGS) effect in Coarse-Grained Computational Fluid Dynamics (CG-CFD). This approach aims to develop a statistical surrogate model instead of a deterministic SGS model. We chose to start with a turbulent natural convection case with volumetric heating in a horizontal fluid layer with a rigid, insulated lower boundary and an isothermal (cold) upper boundary. This scenario of unstable stratification is relevant to turbulent natural convection in a molten corium pool during a severe nuclear reactor accident, as well as in containment mixing and passive cooling. The presented approach demonstrates how to create a correction for the CG-CFD solution by modifying the energy balance equation. A global correction for the temperature equation proves to achieve a significant improvement in the prediction of the steady-state temperature distribution through the fluid layer.
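
    As a hedged illustration of the statistical-surrogate idea described above (not the authors' model), the sketch below fits a linear correction to a coarse-grid temperature field from paired coarse/high-fidelity samples with ordinary least squares; the feature choice and the synthetic data are assumptions made for the example.

    import numpy as np

    def fit_linear_correction(coarse_T, coarse_gradT, fine_T):
        """Least-squares surrogate: predict the coarse-grid error from coarse-grid features."""
        error = fine_T - coarse_T                              # discrepancy to be modelled
        X = np.column_stack([np.ones_like(coarse_T), coarse_T, coarse_gradT])
        coeffs, *_ = np.linalg.lstsq(X, error, rcond=None)
        return coeffs                                          # [bias, w_T, w_gradT]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        coarse_T = rng.uniform(300.0, 400.0, size=200)         # synthetic coarse temperatures [K]
        coarse_gradT = rng.uniform(-50.0, 50.0, size=200)      # synthetic coarse gradients
        fine_T = coarse_T + 0.05 * coarse_gradT + 2.0 + rng.normal(0, 0.5, 200)
        coeffs = fit_linear_correction(coarse_T, coarse_gradT, fine_T)
        X = np.column_stack([np.ones_like(coarse_T), coarse_T, coarse_gradT])
        corrected = coarse_T + X @ coeffs                      # corrected coarse solution
        print(coeffs)                                          # roughly [2.0, 0.0, 0.05]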

  10. The super-Turing computational power of plastic recurrent neural networks.

    Science.gov (United States)

    Cabessa, Jérémie; Siegelmann, Hava T

    2014-12-01

    We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model, so the nature of the updates is assumed to be unconstrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of precisely the same super-Turing computational power as static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and moreover, irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.

  11. Voltage generators of high voltage high power accelerators

    International Nuclear Information System (INIS)

    Svinin, M.P.

    1981-01-01

    High voltage electron accelerators are widely used in modern radiation installations for industrial purposes. In the near future their power may be increased further, which will raise the efficiency of known radiation processes and enable new power-intensive production processes to be mastered in industry. Improving HV generators by increasing their power and efficiency is one of the many scientific and engineering problems whose successful solution will support the further development of these accelerators and their technical parameters. The subject is discussed in detail. (author)

  12. Optimal Operation of Plug-In Electric Vehicles in Power Systems with High Wind Power Penetrations

    DEFF Research Database (Denmark)

    Hu, Weihao; Su, Chi; Chen, Zhe

    2013-01-01

    The Danish power system has a large penetration of wind power. The wind fluctuation causes a high variation in the power generation, which must be balanced by other sources. The battery storage based Plug-In Electric Vehicles (PEV) may be a possible solution to balance the wind power variations in the power systems with high wind power penetrations. In this paper, the integration of plug-in electric vehicles in the power systems with high wind power penetrations is proposed and discussed. Optimal operation strategies of PEV in the spot market are proposed in order to decrease the energy cost for PEV owners. Furthermore, the application of battery storage based aggregated PEV is analyzed as a regulation services provider in the power system with high wind power penetrations. The western Danish power system, where the total share of annual wind power production is more than 27% of the electrical energy...
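
    A minimal, hedged sketch of the spot-market idea above (not the paper's operation strategy): schedule a single vehicle's charging against hourly spot prices with SciPy's linear-programming solver, subject to an energy requirement and a charger power limit. Prices and limits are illustrative.

    import numpy as np
    from scipy.optimize import linprog

    def schedule_charging(prices, energy_need_kwh, max_charge_kw):
        """Minimize charging cost: sum(price[t] * charge[t]) s.t. total energy is met."""
        n = len(prices)
        A_eq = np.ones((1, n))                   # sum of hourly charging equals the demand
        b_eq = [energy_need_kwh]
        bounds = [(0.0, max_charge_kw)] * n      # charger power limit each hour
        res = linprog(c=prices, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
        return res.x, res.fun

    if __name__ == "__main__":
        prices = np.array([0.30, 0.28, 0.12, 0.10, 0.11, 0.25])   # EUR/kWh, illustrative
        schedule, cost = schedule_charging(prices, energy_need_kwh=10.0, max_charge_kw=3.7)
        print(np.round(schedule, 2), round(cost, 2))              # cheapest hours are used first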

  13. Research & Implementation of AC - DC Converter with High Power Factor & High Efficiency

    Directory of Open Access Journals (Sweden)

    Hsiou-Hsian Nien

    2014-05-01

    Full Text Available In this paper, we design and develop a high power factor, high efficiency two-stage AC-DC power converter. The first stage is a boost active power factor correction circuit; the second stage is a near-constant-frequency LLC resonant converter. In addition to the high-efficiency advantages of a traditional LLC stage, the light-load conversion efficiency of this power converter is improved, and its high power factor and near-constant-frequency operation significantly reduce electromagnetic interference. This paper first discusses the main structure and control method of the power factor correction circuit, then analyzes the circuit using the equivalent model of the LLC resonant converter to determine the important parameters of the converter circuit elements, and finally designs a variable-frequency resonant tank whose resonant frequency changes automatically with the load to achieve near-constant-frequency operation and high efficiency. Finally, an AC-DC power converter with a 190 W output is designed and built to verify the characteristics and feasibility of this converter. The experimental results show that at very light load (9.5 W) the efficiency is as high as 81%, the highest efficiency is 88% (at 90 W), and the full-load efficiency is 87%. As the power changes from 19 W to 190 W, the operating frequency changes by only 0.4 kHz (AC 110 V) and 0.3 kHz (AC 220 V).
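
    A small, hedged sketch of the resonant-tank arithmetic implied above (component values are illustrative, not taken from the paper): the series resonant frequency of an LLC tank is f_r = 1 / (2*pi*sqrt(Lr*Cr)).

    import math

    def llc_resonant_frequency(l_r_henry, c_r_farad):
        """Series resonant frequency of an LLC tank: f_r = 1 / (2*pi*sqrt(Lr*Cr))."""
        return 1.0 / (2.0 * math.pi * math.sqrt(l_r_henry * c_r_farad))

    if __name__ == "__main__":
        # Illustrative values: Lr = 60 uH, Cr = 47 nF.
        f_r = llc_resonant_frequency(60e-6, 47e-9)
        print(f"{f_r/1e3:.1f} kHz")    # about 94.8 kHz for these assumed values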

  14. High-Bandwidth, High-Efficiency Envelope Tracking Power Supply for 40W RF Power Amplifier Using Paralleled Bandpass Current Sources

    DEFF Research Database (Denmark)

    Høyerby, Mikkel Christian Wendelboe; Andersen, Michael Andreas E.

    2005-01-01

    This paper presents a high-performance power conversion scheme for power supply applications that require very high output voltage slew rates (dV/dt). The concept is to parallel two switching bandpass current sources, each optimized for its passband frequency space and the expected load current. The principle is demonstrated with a power supply designed for supplying a 40 W linear RF power amplifier for efficient amplification of a 16-QAM modulated data stream...

  15. High-power microwave LDMOS transistors for wireless data transmission technologies (Review)

    International Nuclear Information System (INIS)

    Kuznetsov, E. V.; Shemyakin, A. V.

    2010-01-01

    The fields of application, structure, fabrication, and packaging technology of high-power microwave LDMOS transistors and the main advantages of these devices are analyzed. Basic physical parameters and some technology factors are matched for optimum device operation. Solid-state microwave electronics has been actively developed for the last 10-15 years. Simultaneously with the improvement of existing devices, new devices and structures are being adopted and developed, and new semiconductor materials are being commercialized. Microwave LDMOS technology is in demand in such fields as avionics, civil and military radars, repeaters, base stations of cellular communication systems, television and broadcasting transmitters, and transceivers for high-speed wireless computer networks (the promising Wi-Fi and Wi-Max standards).

  16. Small high cooling power space cooler

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, T. V.; Raab, J.; Durand, D.; Tward, E. [Northrop Grumman Aerospace Systems Redondo Beach, Ca, 90278 (United States)

    2014-01-29

    The small High Efficiency pulse tube Cooler (HEC), which has been produced and flown on a number of space infrared instruments, was originally designed to provide cooling of 10 W @ 95 K. It achieved its goal with >50% margin when limited by the 180 W output ac power of its flight electronics. It has also been produced in 2-stage configurations, typically for simultaneous cooling of focal planes to temperatures as low as 35 K and of optics at higher temperatures. The need for even higher cooling power in such a low-mass cryocooler is motivated by the advent of large focal plane arrays. With the current availability at NGAS of much larger power cryocooler flight electronics, reliable long-term operation in space with much larger cooling powers is now possible with the flight-proven 4 kg HEC mechanical cooler. Even though the single-stage cooler design can be re-qualified for those larger input powers without design change, we redesigned both the linear and coaxial version passive pulse tube cold heads to re-optimize them for high-power cooling at temperatures above 130 K while rejecting heat to 300 K. Small changes to the regenerator packing, the re-optimization of the tuned inertance and no change to the compressor resulted in the increased performance at 150 K. The cooler operating at 290 W input power achieves 35 W @ 150 K, corresponding to a specific cooling power at 150 K of 8.25 W/W and a very high specific power of 72.5 W/kg. At these powers the cooler still maintains large stroke, thermal and current margins. In this paper we will present the measured data and the changes to this flight-proven cooler that were made to achieve this increased performance.

  17. Grid Computing in High Energy Physics

    International Nuclear Information System (INIS)

    Avery, Paul

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software resources, regardless of location); (4) collaboration (providing tools that allow members full and fair access to all collaboration resources and enable distributed teams to work effectively, irrespective of location); and (5) education, training and outreach (providing resources and mechanisms for training students and for communicating important information to the public). It is believed that computing infrastructures based on Data Grids and optical networks can meet these challenges and can offer data intensive enterprises in high energy physics and elsewhere a comprehensive, scalable framework for collaboration and resource sharing. A number of Data Grid projects have been underway since 1999. Interestingly, the most exciting and far ranging of these projects are led by collaborations of high energy physicists, computer scientists and scientists from other disciplines in support of experiments with massive, near-term data needs. I review progress in this

  18. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  19. Optimizing the roles of man and computer in nuclear power plant control

    International Nuclear Information System (INIS)

    Colley, R.W.; Seeman, S.E.

    1983-10-01

    We are presently participating in a program to optimize the functional man-machine interface for Liquid Metal-Cooled Fast Breeder Reactors. The overall objective of this program is to enhance operational safety; that is, to accommodate plant incidents through optimal integration of man and machine in performing the functions required to safely control a plant during both normal and off-normal conditions. The purpose of this talk is to describe an approach to determining the optimal roles of man and computer in the control of nuclear power plants. The purpose of this session was to bring together people who are working on understanding how operators control plants and on developing new aids for these operators. We were asked to explain how our modeling and the approach we are taking will lead to an optimization of the roles of the man and the computer in the control of nuclear power plants. Our emphasis was to be on the functions required for plant control, and on how the attributes of the human operator and the attributes of the computer can be optimally used to enhance operational safety in performing these functions

  20. Gate Drive For High Speed, High Power IGBTs

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, M.N.; Cassel, R.L.; de Lamare, J.E.; Pappas, G.C.; /SLAC

    2007-06-18

    A new gate drive for high-voltage, high-power IGBTs has been developed for the SLAC NLC (Next Linear Collider) Solid State Induction Modulator. This paper describes the design and implementation of a driver that allows an IGBT module rated at 800 A/3300 V to switch up to 3000 A at 2200 V in 3 µs with a rate of current rise of more than 10,000 A/µs, while still being short circuit protected. Issues regarding fast turn on, high de-saturation voltage detection, and low short circuit peak current will be presented. A novel approach is also used to counter the effect of unequal current sharing between parallel chips inside most high-power IGBT modules. It effectively reduces the collector-emitter peak current, and thus protects the IGBT from being destroyed during soft short circuit conditions at high di/dt.

  1. Gate Drive For High Speed, High Power IGBTs

    International Nuclear Information System (INIS)

    Nguyen, M.N.; Cassel, R.L.; de Lamare, J.E.; Pappas, G.C.; SLAC

    2007-01-01

    A new gate drive for high-voltage, high-power IGBTs has been developed for the SLAC NLC (Next Linear Collider) Solid State Induction Modulator. This paper describes the design and implementation of a driver that allows an IGBT module rated at 800 A/3300 V to switch up to 3000 A at 2200 V in 3 µs with a rate of current rise of more than 10,000 A/µs, while still being short circuit protected. Issues regarding fast turn on, high de-saturation voltage detection, and low short circuit peak current will be presented. A novel approach is also used to counter the effect of unequal current sharing between parallel chips inside most high-power IGBT modules. It effectively reduces the collector-emitter peak current, and thus protects the IGBT from being destroyed during soft short circuit conditions at high di/dt

  2. CHEP95: Computing in high energy physics. Abstracts

    International Nuclear Information System (INIS)

    1995-01-01

    These proceedings cover the technical papers on computation in High Energy Physics, including computer codes, computer devices, control systems, simulations, and data acquisition systems. New approaches to computer architectures are also discussed

  3. Formulation, computation and improvement of steady state security margins in power systems. Part II: Results

    International Nuclear Information System (INIS)

    Echavarren, F.M.; Lobato, E.; Rouco, L.; Gomez, T.

    2011-01-01

    A steady state security margin for a particular operating point can be defined as the distance from this initial point to the secure operating limits of the system. Four of the most used steady state security margins are the power flow feasibility margin, the contingency feasibility margin, the load margin to voltage collapse, and the total transfer capability between system areas. This is the second part of a two part paper. Part I has proposed a novel framework of a general model able to formulate, compute and improve any steady state security margin. In Part II the performance of the general model is validated by solving a variety of practical situations in modern real power systems. Actual examples of the Spanish power system will be used for this purpose. The same computation and improvement algorithms outlined in Part I have been applied for the four security margins considered in the study, outlining the convenience of defining a general framework valid for the four of them. The general model is used here in Part II to compute and improve: (a) the power flow feasibility margin (assessing the influence of the reactive power generation limits in the Spanish power system), (b) the contingency feasibility margin (assessing the influence of transmission and generation capacity in maintaining a correct voltage profile), (c) the load margin to voltage collapse (assessing the location and quantity of loads that must be shed in order to be far away from voltage collapse) and (d) the total transfer capability (assessing the export import pattern of electric power between different areas of the Spanish system). (author)

  4. Formulation, computation and improvement of steady state security margins in power systems. Part II: Results

    Energy Technology Data Exchange (ETDEWEB)

    Echavarren, F.M.; Lobato, E.; Rouco, L.; Gomez, T. [School of Engineering of Universidad Pontificia Comillas, C/Alberto Aguilera, 23, 28015 Madrid (Spain)

    2011-02-15

    A steady state security margin for a particular operating point can be defined as the distance from this initial point to the secure operating limits of the system. Four of the most used steady state security margins are the power flow feasibility margin, the contingency feasibility margin, the load margin to voltage collapse, and the total transfer capability between system areas. This is the second part of a two part paper. Part I has proposed a novel framework of a general model able to formulate, compute and improve any steady state security margin. In Part II the performance of the general model is validated by solving a variety of practical situations in modern real power systems. Actual examples of the Spanish power system will be used for this purpose. The same computation and improvement algorithms outlined in Part I have been applied for the four security margins considered in the study, outlining the convenience of defining a general framework valid for the four of them. The general model is used here in Part II to compute and improve: (a) the power flow feasibility margin (assessing the influence of the reactive power generation limits in the Spanish power system), (b) the contingency feasibility margin (assessing the influence of transmission and generation capacity in maintaining a correct voltage profile), (c) the load margin to voltage collapse (assessing the location and quantity of loads that must be shed in order to be far away from voltage collapse) and (d) the total transfer capability (assessing the export import pattern of electric power between different areas of the Spanish system). (author)
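
    As a deliberately tiny, hedged illustration of the load margin to voltage collapse defined in the two records above (not the authors' general framework): for a two-bus system with a unity-power-factor load P fed through a series reactance X from a 1.0 p.u. source V1, the receiving-end voltage satisfies V2^2 = V1^2/2 + sqrt(V1^4/4 - P^2 X^2) on the upper branch, so solutions vanish at P_max = V1^2/(2X). The sketch steps the load until the nose of the PV curve is reached; all values are illustrative.

    import math

    def pv_curve_voltage(p_load, v1=1.0, x=0.1):
        """High-voltage solution of the two-bus PV curve (unity power factor),
        or None past the nose point (voltage collapse)."""
        disc = v1**4 / 4.0 - (p_load * x) ** 2
        if disc < 0.0:
            return None
        return math.sqrt(v1**2 / 2.0 + math.sqrt(disc))

    def load_margin(p_base, step=0.01, v1=1.0, x=0.1):
        """Distance (in p.u. load) from the base case to the last solvable operating point."""
        p = p_base
        while pv_curve_voltage(p + step, v1, x) is not None:
            p += step
        return p - p_base

    if __name__ == "__main__":
        p_base = 2.0                                  # base-case load in p.u.
        margin = load_margin(p_base)
        print(f"load margin ~ {margin:.2f} p.u.")     # analytic limit is 1/(2*0.1) = 5.0 p.u.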

  5. Computationally Efficient Power Allocation Algorithm in Multicarrier-Based Cognitive Radio Networks: OFDM and FBMC Systems

    Directory of Open Access Journals (Sweden)

    Shaat Musbah

    2010-01-01

    Full Text Available Cognitive Radio (CR) systems have been proposed to increase the spectrum utilization by opportunistically accessing the unused spectrum. Multicarrier communication systems are promising candidates for CR systems. Due to its high spectral efficiency, filter bank multicarrier (FBMC) can be considered as an alternative to conventional orthogonal frequency division multiplexing (OFDM) for transmission over CR networks. This paper addresses the problem of resource allocation in multicarrier-based CR networks. The objective is to maximize the downlink capacity of the network under both a total power constraint and a constraint on the interference introduced to the primary users (PUs). The optimal solution has high computational complexity, which makes it unsuitable for practical applications, and hence a low-complexity suboptimal solution is proposed. The proposed algorithm utilizes the spectrum holes in the PU bands as well as the active PU bands. The performance of the proposed algorithm is investigated for OFDM and FBMC based CR systems. Simulation results illustrate that the proposed resource allocation algorithm with low computational complexity achieves near-optimal performance and proves the efficiency of using FBMC in the CR context.
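
    As a hedged sketch of the kind of power allocation the record above refers to (not the authors' interference-constrained algorithm), classical water-filling distributes a total power budget across subcarriers with different channel gains; a per-subcarrier cap stands in loosely for an interference limit. All values are illustrative.

    def water_filling(gains, total_power, p_cap, noise=1.0):
        """Capped water-filling: p_i = min(p_cap, max(0, mu - noise/gain_i)), with the
        water level mu found by bisection so that sum(p_i) meets the power budget.
        Assumes total_power <= len(gains) * p_cap."""
        def allocated(mu):
            return [min(p_cap, max(0.0, mu - noise / g)) for g in gains]

        lo, hi = 0.0, total_power + noise / min(gains) + p_cap
        for _ in range(100):                       # bisection on the water level
            mu = 0.5 * (lo + hi)
            if sum(allocated(mu)) > total_power:
                hi = mu
            else:
                lo = mu
        return allocated(0.5 * (lo + hi))

    if __name__ == "__main__":
        gains = [2.0, 1.0, 0.5, 0.25]              # illustrative subcarrier channel gains
        powers = water_filling(gains, total_power=4.0, p_cap=2.0)
        print([round(p, 3) for p in powers], round(sum(powers), 3))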

  6. Overview on the high power excimer laser technology

    Science.gov (United States)

    Liu, Jingru

    2013-05-01

    High power excimer lasers have essential applications in the fields of high energy density physics, inertial fusion energy and industry owing to their advantages such as short wavelength, high gain, wide bandwidth, energy scalability and repetitive operation capability. This overview is aimed at an introduction to and evaluation of the enormous endeavor of the international high power excimer laser community over the last 30 years. The main technologies of high power excimer lasers are reviewed, including pumping source technology, angular multiplexing and pulse compression, beam smoothing and homogeneous irradiation, and high-efficiency repetitive operation. A high power XeCl laser system developed at NINT of China is described in detail.

  7. Numerical analysis of high-power broad-area laser diode with improved heat sinking structure using epitaxial liftoff technique

    Science.gov (United States)

    Kim, Younghyun; Sung, Yunsu; Yang, Jung-Tack; Choi, Woo-Young

    2018-02-01

    The characteristics of high-power broad-area laser diodes with an improved heat-sinking structure are numerically analyzed by a technology computer-aided design based self-consistent electro-thermal-optical simulation. The high-power laser diodes consist of a separate confinement heterostructure with a compressively strained InGaAsP quantum well and GaInP optical cavity layers, a 100-μm-wide rib, and a 2000-μm-long cavity. In order to overcome the performance deterioration of high-power laser diodes caused by self-heating, such as thermal rollover and thermal blooming, we propose a high-power broad-area laser diode with an improved heat-sinking structure, in which an additional effective heat-sinking path toward the substrate side is created by removing the bulk substrate. This is achieved by removing the 400-μm-thick GaAs substrate with an AlAs sacrificial layer, utilizing well-known epitaxial liftoff techniques. In this study, we present the performance improvement of the high-power laser diode with this heat-sinking structure obtained by suppressing thermal effects. It is found that both the lateral far-field angle and the quantum well temperature are expected to be improved by the proposed heat-sinking structure, which benefits beam quality and optical output power, respectively.

  8. High Power Fiber Laser Test Bed

    Data.gov (United States)

    Federal Laboratory Consortium — This facility, unique within DoD, power-combines numerous cutting-edge fiber-coupled laser diode modules (FCLDM) to integrate pumping of high power rare earth-doped...

  9. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  10. Recent trends in grid computing

    International Nuclear Information System (INIS)

    Miura, Kenichi

    2004-01-01

    Grid computing is a technology which allows uniform and transparent access to geographically dispersed computational resources, such as computers, databases, and experimental and observational equipment, via high-speed, high-bandwidth networking. The commonly used analogy is that of the electrical power grid, whereby household electricity is made available from outlets on the wall and little thought needs to be given to where the electricity is generated and how it is transmitted. The usage of grids also includes distributed parallel computing, high-throughput computing, data-intensive computing (data grids) and collaborative computing. This paper reviews the historical background, software structure, current status and on-going grid projects, including applications of grid technology to nuclear fusion research. (author)

  11. Application of Nearly Linear Solvers to Electric Power System Computation

    Science.gov (United States)

    Grant, Lisa L.

    To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real-time, then system operators would have situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is a greater computational complexity involved in solving these large linear systems within reasonable time. This project expands on the current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power system specific methods that can be solved in nearly-linear run times. The work explores a new theoretical method that is based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.
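
    The record above compares a spanning-tree-preconditioned chain solver against LU factorization. The hedged sketch below substitutes a much simpler diagonal (Jacobi) preconditioner on a five-point-stencil test matrix, only to show the basic LU-versus-preconditioned-CG comparison workflow with SciPy; it is not the chain method described in the work.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def grid_system(n):
        """Five-point-stencil, symmetric diagonally dominant test matrix,
        shifted slightly to be positive definite."""
        main = 4.0 * np.ones(n * n)
        A = sp.diags([main, -1.0, -1.0, -1.0, -1.0],
                     [0, 1, -1, n, -n], shape=(n * n, n * n), format="csr")
        return A + 1e-6 * sp.eye(n * n, format="csr")

    if __name__ == "__main__":
        A = grid_system(50)
        b = np.ones(A.shape[0])

        x_lu = spla.splu(A.tocsc()).solve(b)               # direct LU factorization

        M = sp.diags(1.0 / A.diagonal())                   # Jacobi preconditioner (stand-in)
        x_cg, info = spla.cg(A, b, M=M)                    # preconditioned conjugate gradient

        print(info, np.linalg.norm(x_lu - x_cg) / np.linalg.norm(x_lu))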

  12. Design of The High Efficiency Power Factor Correction Circuit for Power Supply

    Directory of Open Access Journals (Sweden)

    Atiye Hülya OBDAN

    2017-12-01

    Full Text Available Designing power factor correction circuits for switched power supplies has become important in recent years for the efficient use of energy. Power factor correction techniques play a significant role in achieving high power density and energy efficiency. For these purposes, bridgeless PFC topologies and control strategies have been developed alongside basic boost PFC circuits. The power density can be increased using bridgeless structures by reducing losses in the circuit. This article examines bridgeless PFC structures and compares their performance in terms of losses and power factor. A semi-bridgeless PFC, which is widely used at high power levels, was analyzed and simulated. The designed circuit was simulated in the PSIM program using the current-mode control method. A prototype of a 900 W semi-bridgeless PFC circuit was implemented and the results obtained from the circuit are presented

  13. Simulating elastic light scattering using high performance computing methods

    NARCIS (Netherlands)

    Hoekstra, A.G.; Sloot, P.M.A.; Verbraeck, A.; Kerckhoffs, E.J.H.

    1993-01-01

    The Coupled Dipole method, as originally formulated by Purcell and Pennypacker, is a very powerful method to simulate the Elastic Light Scattering from arbitrary particles. This method, which is a particle simulation model for Computational Electromagnetics, has one major drawback: if the size of the

  14. Mixed-mode distribution systems for high average power electron cyclotron heating

    International Nuclear Information System (INIS)

    White, T.L.; Kimrey, H.D.; Bigelow, T.S.

    1984-01-01

    The ELMO Bumpy Torus-Scale (EBT-S) experiment consists of 24 simple magnetic mirrors joined end-to-end to form a torus of closed magnetic field lines. In this paper, we first describe an 80% efficient mixed-mode unpolarized heating system which couples 28-GHz microwave power to the midplane of the 24 EBT-S cavities. The system consists of two radiused bends feeding a quasi-optical mixed-mode toroidal distribution manifold. Balancing power to the 24 cavities is determined by detailed computer ray tracing. A second 28-GHz electron cyclotron heating (ECH) system using a polarized grid high field launcher is described. The launcher penetrates the fundamental ECH resonant surface without a vacuum window with no observable breakdown up to 1 kW/cm² (source limited) with 24 kW delivered to the plasma. This system uses the same mixed-mode output as the first system but polarizes the launched power by using a grid of WR42 apertures. The efficiency of this system is 32%, but can be improved by feeding multiple launchers from a separate distribution manifold

  15. People powerComputer games in the classroom

    Directory of Open Access Journals (Sweden)

    Ivan Hilliard

    2014-03-01

    Full Text Available This article presents a case study in the use of the computer simulation game People Power, developed by the International Center on Nonviolent Conflict. The principal objective of the activity was to offer students an opportunity to understand the dynamics of social conflicts in a format not possible in a traditional classroom setting. Due to the game's complexity, it was decided to play it in a day-long (8-hour) workshop format. A computer lab was prepared several weeks beforehand, which meant that each team of four students had access to a number of computers and was able to have the game open on several monitors at the same time, playing on one while using the others to constantly revise information as their strategy and tactics evolved. At the end of the workshop, and after handing in a group report, the 24 participants (6 groups) were asked to complete a short survey of the activity. The survey was divided into three areas: the game itself, skill development, and the workshop organization. Results showed a strong relationship between the activity and the course content, skills and competencies development, and practical know-how and leadership, as well as a strong feeling that it works well as a learning tool and is enjoyable. DOI: 10.18870/hlrc.v4i1.200

  16. A high frequency, high power CARM proposal for the DEMO ECRH system

    International Nuclear Information System (INIS)

    Mirizzi, Francesco; Spassovsky, Ivan; Ceccuzzi, Silvio; Dattoli, Giuseppe; Di Palma, Emanuele; Doria, Andrea; Gallerano, Gianpiero; Lampasi, Alessandro; Maffia, Giuseppe; Ravera, GianLuca; Sabia, Elio; Tuccillo, Angelo Antonio; Zito, Pietro

    2015-01-01

    Highlights: • ECRH system for DEMO. • Cyclotron Auto-Resonance Maser (CARM) devices. • Relativistic electron beams. • Bragg reflectors. • High voltage pulse modulators. - Abstract: ECRH&CD systems are extensively used on tokamak plasmas due to their capability of highly tailored power deposition, allowing very localised heating and non-inductive current drive, useful for MHD and profiles control. The high electron temperatures expected in DEMO will require ECRH systems with operating frequency in the 200–300 GHz range, equipped with a reasonable number of high power (P ≥ 1 MW) CW RF sources, for allowing central RF power deposition. In this frame the ENEA Fusion Department (Frascati) is coordinating a task force aimed at the study and realisation of a suitable high power, high frequency reliable source.

  17. A high frequency, high power CARM proposal for the DEMO ECRH system

    Energy Technology Data Exchange (ETDEWEB)

    Mirizzi, Francesco, E-mail: francesco.mirizzi@enea.it [Consorzio CREATE, Via Claudio 21, I-80125 Napoli (Italy); Spassovsky, Ivan [Unità Tecnica Applicazioni delle Radiazioni – ENEA, C.R. Frascati, via E. Fermi 45, I-00044 Frascati (Italy); Ceccuzzi, Silvio [Unità Tecnica Fusione – ENEA C. R. Frascati, via E. Fermi 45, 00044 Frascati, Roma (Italy); Dattoli, Giuseppe; Di Palma, Emanuele; Doria, Andrea; Gallerano, Gianpiero [Unità Tecnica Applicazioni delle Radiazioni – ENEA, C.R. Frascati, via E. Fermi 45, I-00044 Frascati (Italy); Lampasi, Alessandro; Maffia, Giuseppe; Ravera, GianLuca [Unità Tecnica Fusione – ENEA C. R. Frascati, via E. Fermi 45, 00044 Frascati, Roma (Italy); Sabia, Elio [Unità Tecnica Applicazioni delle Radiazioni – ENEA, C.R. Frascati, via E. Fermi 45, I-00044 Frascati (Italy); Tuccillo, Angelo Antonio; Zito, Pietro [Unità Tecnica Fusione – ENEA C. R. Frascati, via E. Fermi 45, 00044 Frascati, Roma (Italy)

    2015-10-15

    Highlights: • ECRH system for DEMO. • Cyclotron Auto-Resonance Maser (CARM) devices. • Relativistic electron beams. • Bragg reflectors. • High voltage pulse modulators. - Abstract: ECRH&CD systems are extensively used on tokamak plasmas due to their capability of highly tailored power deposition, allowing very localised heating and non-inductive current drive, useful for MHD and profiles control. The high electron temperatures expected in DEMO will require ECRH systems with operating frequency in the 200–300 GHz range, equipped with a reasonable number of high power (P ≥ 1 MW) CW RF sources, for allowing central RF power deposition. In this frame the ENEA Fusion Department (Frascati) is coordinating a task force aimed at the study and realisation of a suitable high power, high frequency reliable source.

  18. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  19. High power gyrotrons: a close perspective

    International Nuclear Information System (INIS)

    Kartikeyan, M.V.

    2012-01-01

    Gyrotrons and their variants, popularly known as gyro-devices, are millimetre-wave sources that provide very high powers, from long-pulse to continuous wave (CW) operation, for various technological, scientific and industrial applications. From their conception (the monotron version) in the late fifties until their successful development for various applications, these devices have come a long way technologically and made an irreversible impact on both users and developers. The possible applications of high power millimeter and sub-millimeter waves from gyrotrons and their variants (gyro-devices) span a wide range of technologies. The plasma physics community has already taken advantage of the recent advances of gyrotrons in the areas of RF plasma production, heating, non-inductive current drive, plasma stabilization and active plasma diagnostics for magnetic confinement thermonuclear fusion research, such as lower hybrid current drive (LHCD) (8 GHz), electron cyclotron resonance heating (ECRH) (28-170-220 GHz), electron cyclotron current drive (ECCD), collective Thomson scattering (CTS), heat-wave propagation experiments, and space-power grid (SPG) applications. Other important applications of gyrotrons are electron cyclotron resonance (ECR) discharges for the generation of multi-charged ions and soft X-rays, as well as industrial materials processing and plasma chemistry. Submillimeter wave gyrotrons are employed in high frequency, broadband electron paramagnetic resonance (EPR) spectroscopy. Additional future applications await the development of novel high power gyro-amplifiers and devices for high resolution radar ranging and imaging in atmospheric and planetary science as well as deep space and specialized satellite communications, RF drivers for next generation high gradient linear accelerators (supercolliders), high resolution Doppler radar, radar ranging and imaging in atmospheric and planetary science, drivers for next-generation high-gradient linear accelerators

  20. Unified, Cross-Platform, Open-Source Library Package for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Kozacik, Stephen [EM Photonics, Inc., Newark, DE (United States)

    2017-05-15

    Compute power is continually increasing, but this increased performance is largely found in sophisticated computing devices and supercomputer resources that are difficult to use, resulting in under-utilization. We developed a unified set of programming tools that will allow users to take full advantage of the new technology by allowing them to work at a level abstracted away from the platform specifics, encouraging the use of modern computing systems, including government-funded supercomputer facilities.

  1. Homemade Buckeye-Pi: A Learning Many-Node Platform for High-Performance Parallel Computing

    Science.gov (United States)

    Amooie, M. A.; Moortgat, J.

    2017-12-01

    We report on the "Buckeye-Pi" cluster, the supercomputer developed in The Ohio State University School of Earth Sciences from 128 inexpensive Raspberry Pi (RPi) 3 Model B single-board computers. Each RPi is equipped with a fast quad-core 1.2 GHz ARMv8 64-bit processor, 1 GB of RAM, and a 32 GB microSD card for local storage. Therefore, the cluster has a total RAM of 128 GB distributed over the individual nodes and a flash capacity of 4 TB with 512 processor cores, while it benefits from low power consumption, easy portability, and low total cost. The cluster uses the Message Passing Interface protocol to manage the communications between the nodes. These features render our platform the most powerful RPi supercomputer to date and suitable for educational applications in high-performance computing (HPC) and the handling of large datasets. In particular, we use the Buckeye-Pi to implement optimized parallel codes in our in-house simulator for subsurface media flows with the goal of achieving a massively parallelized scalable code. We present benchmarking results for the computational performance across various numbers of RPi nodes. We believe our project could inspire scientists and students to consider the proposed unconventional cluster architecture as a mainstream and feasible learning platform for challenging engineering and scientific problems.
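
    As a hedged sketch of the MPI-based communication mentioned above (not the group's in-house simulator), the following mpi4py snippet sums partial results from every node with a collective reduction; on such a cluster it would be launched with something like "mpirun -n 512 python reduce_demo.py" (script name illustrative).

    from mpi4py import MPI

    def partial_work(rank, chunk=1_000_000):
        """Each process computes its own slice of a global sum (placeholder workload)."""
        start = rank * chunk
        return sum(range(start, start + chunk))

    if __name__ == "__main__":
        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()           # this process's id, 0 .. size-1
        size = comm.Get_size()           # total number of MPI processes

        local = partial_work(rank)
        total = comm.reduce(local, op=MPI.SUM, root=0)   # collect the result on rank 0

        if rank == 0:
            print(f"{size} processes, global sum = {total}")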

  2. Guidelines for design and development of computer/microprocessor based systems in research and power reactors

    International Nuclear Information System (INIS)

    Dhodapkar, S.D.; Chandra, A.K.

    1993-01-01

    Computer systems are being used in Indian research reactors and nuclear power plants in the areas of data acquisition, process monitoring and control, alarm annunciation and safety. The design and evaluation of these systems requires a special approach, particularly due to the unique nature of the software which is an essential constituent of these systems. It was decided to evolve guidelines for the design and review of computer/microprocessor based systems for use in nuclear power plants in India. The present document addresses the relevant issues and presents guidelines which are as comprehensive as possible, covering the design and development of computer based systems. These guidelines are expected to be useful to the specifiers, designers and reviewers of such systems. (author). 6 refs., 1 fig

  3. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  4. High voltage generator circuit with low power and high efficiency applied in EEPROM

    International Nuclear Information System (INIS)

    Liu Yan; Zhang Shilin; Zhao Yiqiang

    2012-01-01

    This paper presents a low power, high efficiency high voltage generator circuit embedded in electrically erasable programmable read-only memory (EEPROM). The power consumption is minimized by a capacitance divider circuit and a regulator circuit using the controlling clock switch technique. The high efficiency is dependent on the zero-threshold-voltage (V_th) MOSFET and the charge transfer switch (CTS) charge pump. The proposed high voltage generator circuit has been implemented in a 0.35 μm EEPROM CMOS process. Measured results show that the proposed high voltage generator circuit has a low power consumption of about 150.48 μW and a higher pumping efficiency (83.3%) than previously reported circuits. This high voltage generator circuit can also be widely used in low-power flash devices due to its high efficiency and low power dissipation. (semiconductor integrated circuits)

  5. Optical Fiber for High-Power Optical Communication

    Directory of Open Access Journals (Sweden)

    Kenji Kurokawa

    2012-09-01

    Full Text Available We examined optical fibers suitable for avoiding such problems as the fiber fuse phenomenon and failures at bends with a high power input. We found that the threshold power for fiber fuse propagation in photonic crystal fiber (PCF and hole-assisted fiber (HAF can exceed 18 W, which is more than 10 times that in conventional single-mode fiber (SMF. We considered this high threshold power in PCF and HAF to be caused by a jet of high temperature fluid penetrating the air holes. We showed examples of two kinds of failures at bends in conventional SMF when the input power was 9 W. We also observed the generation of a fiber fuse under a condition that caused a bend-loss induced failure. We showed that one solution for the failures at bends is to use optical fibers with a low bending loss such as PCF and HAF. Therefore, we consider PCF and HAF to be attractive solutions to the problems of the fiber fuse phenomenon and failures at bends with a high power input.

  6. Bringing together high energy physicist and computer scientist

    International Nuclear Information System (INIS)

    Bock, R.K.

    1989-01-01

    The Oxford Conference on Computing in High Energy Physics approached the physics and computing issues with the question, "Can computer science help?" always in mind. This summary is a personal recollection of what I considered to be the highlights of the conference: the parts which contributed to my own learning experience. It can be used as a general introduction to the following papers, or as a brief overview of the current state of computer science within high energy physics. (orig.)

  7. Plasmonic computing of spatial differentiation

    Science.gov (United States)

    Zhu, Tengfeng; Zhou, Yihan; Lou, Yijie; Ye, Hui; Qiu, Min; Ruan, Zhichao; Fan, Shanhui

    2017-05-01

    Optical analog computing offers high-throughput low-power-consumption operation for specialized computational tasks. Traditionally, optical analog computing in the spatial domain uses a bulky system of lenses and filters. Recent developments in metamaterials enable the miniaturization of such computing elements down to a subwavelength scale. However, the required metamaterial consists of a complex array of meta-atoms, and direct demonstration of image processing is challenging. Here, we show that the interference effects associated with surface plasmon excitations at a single metal-dielectric interface can perform spatial differentiation. We experimentally demonstrate edge detection of an image without any Fourier lens. This work points to a simple yet powerful mechanism for optical analog computing at the nanoscale.
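
    The spatial differentiation that the plasmonic interface performs optically is mathematically just a first-derivative operation on the image field, which is why edges show up as bright lines. The short sketch below is a purely digital analogue of that operation (NumPy, hypothetical test image); it is not a model of the surface-plasmon device itself.

```python
import numpy as np

def spatial_derivative(image: np.ndarray, axis: int = 1) -> np.ndarray:
    """Approximate the first spatial derivative of a 2-D field along one axis.

    This is a digital stand-in for the optical differentiation described in
    the abstract: edges (rapid intensity changes) map to large output values.
    """
    # Central-difference derivative along the chosen axis.
    return np.gradient(image.astype(float), axis=axis)

if __name__ == "__main__":
    # Hypothetical test image: a dark square on a bright background.
    img = np.ones((64, 64))
    img[20:44, 20:44] = 0.0
    edges = np.abs(spatial_derivative(img, axis=1))
    print("max response on the vertical edges:", edges.max())
```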

  8. Plasmonic computing of spatial differentiation.

    Science.gov (United States)

    Zhu, Tengfeng; Zhou, Yihan; Lou, Yijie; Ye, Hui; Qiu, Min; Ruan, Zhichao; Fan, Shanhui

    2017-05-19

    Optical analog computing offers high-throughput low-power-consumption operation for specialized computational tasks. Traditionally, optical analog computing in the spatial domain uses a bulky system of lenses and filters. Recent developments in metamaterials enable the miniaturization of such computing elements down to a subwavelength scale. However, the required metamaterial consists of a complex array of meta-atoms, and direct demonstration of image processing is challenging. Here, we show that the interference effects associated with surface plasmon excitations at a single metal-dielectric interface can perform spatial differentiation. We experimentally demonstrate edge detection of an image without any Fourier lens. This work points to a simple yet powerful mechanism for optical analog computing at the nanoscale.

  9. Workshop on High Power ICH Antenna Designs for High Density Tokamaks

    Science.gov (United States)

    Aamodt, R. E.

    1990-02-01

    A workshop on high power ICH antenna designs for high density tokamaks was held to: (1) review the data base relevant to the high power heating of high density tokamaks; (2) identify the important issues which need to be addressed in order to ensure the success of the ICRF programs on CIT and Alcator C-MOD; and (3) recommend approaches for resolving the issues in a timely, realistic manner. Some specific performance goals for the antenna system define a successful design effort. Simply stated, these goals are: couple the specified power per antenna into the desired ion species; produce no more than an acceptable level of RF auxiliary power induced impurities; and have a mechanical structure which safely survives the thermal, mechanical and radiation stresses in the relevant environment. These goals are intimately coupled, and difficult tradeoffs between scientific and engineering constraints have to be made.

  10. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
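
    The grouping step described in the abstract can be pictured as a dictionary keyed by calling-instruction address: threads sharing an address form one group, and a thread sitting at an unusual address stands out as a suspect. The sketch below illustrates only that grouping idea with hypothetical thread/address data; it is not the disclosed apparatus.

```python
from collections import defaultdict

def group_threads_by_call_address(call_addresses: dict[int, int]) -> dict[int, list[int]]:
    """Group thread ids by the address of the instruction they are calling from.

    call_addresses maps thread_id -> instruction address (hypothetical input,
    e.g. gathered by a debugger attached to every thread of the program).
    """
    groups: dict[int, list[int]] = defaultdict(list)
    for thread_id, address in call_addresses.items():
        groups[address].append(thread_id)
    return dict(groups)

if __name__ == "__main__":
    # Seven threads wait at 0x4005d0; thread 5 is stuck elsewhere -> suspect.
    sample = {t: 0x4005D0 for t in range(8)}
    sample[5] = 0x400A10
    for addr, threads in sorted(group_threads_by_call_address(sample).items()):
        print(hex(addr), "->", threads)
```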

  11. The NASA CSTI High Capacity Power Project

    International Nuclear Information System (INIS)

    Winter, J.; Dudenhoefer, J.; Juhasz, A.; Schwarze, G.; Patterson, R.; Ferguson, D.; Schmitz, P.; Vandersande, J.

    1992-01-01

    This paper describes the elements of NASA's CSTI High Capacity Power Project, which include Systems Analysis, Stirling Power Conversion, Thermoelectric Power Conversion, Thermal Management, Power Management, Systems Diagnostics, Environmental Interactions, and Material/Structural Development. Technology advancement in all elements is required to provide the growth capability, high reliability and 7 to 10 year lifetime demanded for future space nuclear power systems. The overall project will develop and demonstrate the technology base required to provide a wide range of modular power systems compatible with the SP-100 reactor which facilitates operation during lunar and planetary day/night cycles as well as allowing spacecraft operation at any attitude or distance from the sun. Significant accomplishments in all of the project elements will be presented, along with revised goals and project timelines recently developed.

  12. Process control in conventional power plants. The use of computer systems

    Energy Technology Data Exchange (ETDEWEB)

    Schievink, A; Woehrle, G

    1989-03-01

    To process information, operators can draw on their knowledge and experience. Both of these, however, permit only a slow flow of information (about 25 bit/s) to be processed. The flow of information that the staff of a modern 700 MW coal-fired power station has to face is about 5000 bits per second, i.e. 200 times as much as a single human brain can process. Modern computer-based process control systems are therefore needed to help the staff recognize and handle these complex and rapid processes efficiently. The man-machine interface is ergonomically improved by visual display units.

  13. NET IBK Computer code package for the needs of planning, construction and operation of nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Matausek, M V; Kocic, A; Marinkovic, N; Milosevic, M; Stancic, V [Boris Kidric Institute of nuclear sciences Vinca, Belgrade (Yugoslavia)

    1978-07-01

    Within the Nuclear Engineering Laboratory of the Boris Kidric Institute of Nuclear Sciences (NET IBK), systematic work has been performed on collecting nuclear data for reactor calculations, on developing in-house methods and computer programs for reactor calculations, and on adapting and applying foreign methods and codes. In this way a complete library of computer programs was formed for precise prediction of nuclear fuel burnup and depletion, for evaluating power distribution variations with irradiation, for computing the amount and number densities of produced plutonium, etc. Programs for location evaluation and for different types of safety and economic analysis have been developed as well. The aim of this paper is to present our ability to perform the complex computations needed for planning, constructing and operating nuclear power plants, by describing the NET IBK computer program package. (author)

  14. Data management problems with a distributed computer network on nuclear power stations

    International Nuclear Information System (INIS)

    Davis, I.

    1980-01-01

    It is generally accepted within the Central Electricity Generating Board that the centralized process computers at some nuclear power plants are going to be replaced with distributed systems. Work on the theoretical considerations involved in such a replacement, including the allocation of data within the system, is going on with the goal of developing a simple, pragmatic approach to the determination of the required system resilience. A flexible network architecture which can accommodate future expansion and can be understood by non-computer specialists can thus be built up. (LL)

  15. High-speed algorithm for calculating the neutron field in a reactor when working in dialog mode with a computer

    International Nuclear Information System (INIS)

    Afanas'ev, A.M.

    1987-01-01

    The large-scale construction of atomic power stations results in a need for trainers to instruct power-station personnel. The present work considers one problem of developing training computer software, associated with the development of a high-speed algorithm for calculating the neutron field after a control-rod (CR) shift by the operator. The case considered here is that in which training units are developed on the basis of small computers of SM-2 type, which fall significantly short of the BESM-6 and EC-type computers used for the design calculations in terms of speed and memory capacity. Depending on the apparatus used to solve the criticality problem in a two-dimensional single-group approximation, the physical-calculation programs require ∼ 1 min of machine time on a BESM-6 computer, which translates to ∼ 10 min on an SM-2 machine. In practice, this time is even longer, since ultimately it is necessary to determine not the effective multiplication factor K_ef, but rather the local perturbations of the emergency-control (EC) system (to reach criticality) and the change in the neutron field on shifting the CR and the EC rods. This long time makes it very problematic to use physical-calculation programs in dialog mode with a computer. The algorithm presented below allows the neutron field following a shift of the CR and EC rods to be calculated in a few seconds on a BESM-6 computer (tens of seconds on an SM-2 machine). This high speed is achieved through the preliminary calculation of the influence function (IF) for each CR. The IF may be calculated in advance on a high-speed computer; it is then stored in the external memory (EM) and, where necessary, used as the initial information.
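
    The speed-up described above rests on superposition: the flux response to each control rod is computed once, stored, and later combined instead of re-solving the diffusion problem at every rod move. A highly simplified sketch of that idea (first-order linear superposition with hypothetical arrays, not the authors' algorithm) is:

```python
import numpy as np

def perturbed_flux(base_flux: np.ndarray,
                   influence_functions: dict[str, np.ndarray],
                   rod_shifts: dict[str, float]) -> np.ndarray:
    """First-order estimate of the neutron flux after control-rod shifts.

    base_flux           : unperturbed 2-D flux map (precomputed offline)
    influence_functions : rod name -> precomputed flux response per unit shift
    rod_shifts          : rod name -> requested shift (same units as above)
    """
    flux = base_flux.copy()
    for rod, shift in rod_shifts.items():
        flux += shift * influence_functions[rod]   # simple linear superposition
    return flux

if __name__ == "__main__":
    nx, ny = 32, 32
    base = np.full((nx, ny), 1.0)
    # Hypothetical precomputed influence functions for two rods.
    ifs = {"CR-1": np.random.rand(nx, ny) * 1e-2,
           "CR-2": np.random.rand(nx, ny) * 1e-2}
    print(perturbed_flux(base, ifs, {"CR-1": 0.5, "CR-2": -0.2}).mean())
```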

  16. Computer study of isotope production for medical and industrial applications in high power accelerators

    Science.gov (United States)

    Mashnik, S. G.; Wilson, W. B.; Van Riper, K. A.

    2001-07-01

    Methods for radionuclide production calculation in a high power proton accelerator have been developed and applied to study production of 22 isotopes. These methods are readily applicable both to accelerator and reactor environments and to the production of other radioactive and stable isotopes. We have also developed methods for evaluating cross sections from a wide variety of sources into a single cross section set and have produced an evaluated library covering about a third of all natural elements that may be expanded to other reactions. A 684 page detailed report on this study, with 37 tables and 264 color figures, is available on the Web at http://t2.lanl.gov/publications/.
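
    For orientation, the basic quantity behind such production calculations is the buildup of a radionuclide produced at a constant rate R while decaying with constant λ, N(t) = (R/λ)(1 − e^(−λt)). The sketch below evaluates that textbook relation with invented numbers; it is not the evaluated-library methodology of the cited report.

```python
import math

def produced_atoms(rate_per_s: float, half_life_s: float, irradiation_s: float) -> float:
    """Atoms of a radionuclide present after irradiation at a constant
    production rate, using N(t) = (R / lam) * (1 - exp(-lam * t))."""
    lam = math.log(2.0) / half_life_s
    return (rate_per_s / lam) * (1.0 - math.exp(-lam * irradiation_s))

if __name__ == "__main__":
    # Hypothetical numbers: 1e12 atoms/s production, 6 h half-life, 24 h run.
    n = produced_atoms(1e12, 6 * 3600, 24 * 3600)
    activity_bq = n * math.log(2.0) / (6 * 3600)   # A = lam * N
    print(f"atoms: {n:.3e}, activity: {activity_bq:.3e} Bq")
```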

  17. Agglomeration Economies and the High-Tech Computer

    OpenAIRE

    Wallace, Nancy E.; Walls, Donald

    2004-01-01

    This paper considers the effects of agglomeration on the production decisions of firms in the high-tech computer cluster. We build upon an alternative definition of the high-tech computer cluster developed by Bardhan et al. (2003) and we exploit a new data source, the National Establishment Time-Series (NETS) Database, to analyze the spatial distribution of firms in this industry. An essential contribution of this research is the recognition that high-tech firms are heterogeneous collections ...

  18. A novel power source for high-precision, highly efficient micro w-EDM

    International Nuclear Information System (INIS)

    Chen, Shun-Tong; Chen, Chi-Hung

    2015-01-01

    The study presents the development of a novel power source for high-precision, highly efficient machining of micropart microstructures using micro wire electrical discharge machining (w-EDM). A novel power source based on a pluri resistance–capacitance (pRC) circuit that can generate a high-frequency, high-peak current with a short pulse train is proposed and designed to enhance the performance of micro w-EDM processes. Switching between transistors is precisely controlled in the designed power source to create a high-frequency short-pulse train current. Various microslot cutting tests in both aluminum and copper alloys are conducted. Experimental results demonstrate that the pRC power source creates instant spark erosion, resulting in markedly less material to be removed, a smaller discharge crater size, and consequently an improved surface finish. A new evaluation approach for spark erosion ability (SEA) to assess the merits of micro EDM power sources is also proposed. In addition to increasing the speed of micro w-EDM by raising the wire feed rate to 1.6 times its original value, the power source is more appropriate for machining micropart microstructures since there is less thermal wire breaking. Satisfactory cutting of an elaborate miniature hook-shaped structure and of a high-aspect-ratio microstructure with a squared-pillar array also reveals that the developed pRC power source is effective, and should be very useful in the manufacture of intricate microparts. (paper)

  19. Power levels in office equipment: Measurements of new monitors and personal computers

    International Nuclear Information System (INIS)

    Roberson, Judy A.; Brown, Richard E.; Nordman, Bruce; Webber, Carrie A.; Homan, Gregory H.; Mahajan, Akshay; McWhinney, Marla; Koomey, Jonathan G.

    2002-01-01

    Electronic office equipment has proliferated rapidly over the last twenty years and is projected to continue growing in the future. Efforts to reduce the growth in office equipment energy use have focused on power management to reduce the power consumption of electronic devices when not being used for their primary purpose. The EPA ENERGY STAR® program has been instrumental in gaining widespread support for power management in office equipment, and accurate information about the energy used by office equipment at all power levels is important to improving program design and evaluation. This paper presents the results of a field study conducted during 2001 to measure the power levels of new monitors and personal computers. We measured off, on, and low-power levels in about 60 units manufactured since July 2000. The paper summarizes the power data collected, explores differences within the sample (e.g., between CRT and LCD monitors), and discusses some issues that arise in metering office equipment. We also present conclusions to help improve the success of future power management programs. Our findings include a trend among monitor manufacturers to provide a single, very low low-power level, and a need to standardize methods for measuring monitor on-power in order to estimate more accurately the annual energy consumption of office equipment, as well as the actual and potential energy savings from power management.
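
    The annual energy consumption estimates mentioned above come down to multiplying the measured power in each mode by the hours spent in that mode. A minimal sketch of that arithmetic, with hypothetical duty-cycle figures rather than the study's data, is:

```python
def annual_energy_kwh(power_w: dict[str, float], hours_per_year: dict[str, float]) -> float:
    """Annual energy (kWh) from per-mode power draw and per-mode hours."""
    assert abs(sum(hours_per_year.values()) - 8760) < 1e-6, "hours must cover a year"
    return sum(power_w[mode] * hours_per_year[mode] for mode in power_w) / 1000.0

if __name__ == "__main__":
    # Hypothetical LCD monitor: 30 W on, 2 W low-power, 1 W off.
    power = {"on": 30.0, "low": 2.0, "off": 1.0}
    hours = {"on": 2000.0, "low": 1000.0, "off": 5760.0}
    print(f"{annual_energy_kwh(power, hours):.1f} kWh/year")
```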

  20. Temperature Stabilized Characterization of High Voltage Power Supplies

    CERN Document Server

    Krarup, Ole

    2017-01-01

    High precision measurements of the masses of nuclear ions in the ISOLTRAP experiment rely on an MR-ToF device. A major source of noise and drift is the instability of the high voltage power supplies employed. Electrical noise and temperature changes can broaden peaks in time-of-flight spectra and shift the position of peaks between runs. In this report we investigate how the noise and drift of high-voltage power supplies can be characterized. Results indicate that analog power supplies generally have better relative stability than digitally controlled ones, and that the high temperature coefficients of all power supplies merit efforts to stabilize them.

  1. Tracking and computing

    International Nuclear Information System (INIS)

    Niederer, J.

    1983-01-01

    This note outlines several ways in which large scale simulation computing and programming support may be provided to the SSC design community. One aspect of the problem is getting supercomputer power without the high cost and long lead times of large scale institutional computing. Another aspect is the blending of modern programming practices with more conventional accelerator design programs in ways that do not also swamp designers with the details of complicated computer technology

  2. High power diode lasers converted to the visible

    DEFF Research Database (Denmark)

    Jensen, Ole Bjarlin; Hansen, Anders Kragh; Andersen, Peter E.

    2017-01-01

    High power diode lasers have in recent years become available in many wavelength regions. However, some spectral regions are not well covered. In particular, the visible spectral range is lacking high power diode lasers with good spatial quality. In this paper, we highlight some of our recent results in nonlinear frequency conversion of high power near-infrared diode lasers to the visible spectral region.

  3. Measurement of high-power microwave pulse under intense ...

    Indian Academy of Sciences (India)

    The KALI-1000 pulse power system has been used to generate single-pulse, nanosecond-duration high-power microwaves (HPM) from a virtual cathode oscillator (VIRCATOR) device. HPM power measurements were carried out using a transmitting–receiving system in the presence of intense high frequency (a few ...

  4. PEAC: A Power-Efficient Adaptive Computing Technology for Enabling Swarm of Small Spacecraft and Deployable Mini-Payloads

    Data.gov (United States)

    National Aeronautics and Space Administration — This task is to develop and demonstrate a path-to-flight and power-adaptive avionics technology PEAC (Power Efficient Adaptive Computing). PEAC will enable emerging...

  5. Workshop on high power ICH antenna designs for high density tokamaks

    International Nuclear Information System (INIS)

    Aamodt, R.E.

    1990-01-01

    A workshop on high power ICH antenna designs for high density tokamaks was held in Boulder, Colorado, from January 31 through February 2, 1990. The purposes of the workshop were to: (1) review the data base relevant to the high power heating of high density tokamaks; (2) identify the important issues which need to be addressed in order to ensure the success of the ICRF programs on CIT and Alcator C-MOD; and (3) recommend approaches for resolving the issues in a timely, realistic manner. Some specific performance goals for the antenna system define a successful design effort. Simply stated, these goals are: couple the specified power per antenna into the desired ion species; produce no more than an acceptable level of rf auxiliary power induced impurities; and have a mechanical structure which safely survives the thermal, mechanical and radiation stresses in the relevant environment. These goals are intimately coupled, and difficult tradeoffs between scientific and engineering constraints have to be made.

  6. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; Britt, Keith A [ORNL

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present the quantum programming and execution models, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  7. Integration of distributed plant process computer systems to nuclear power generation facilities

    International Nuclear Information System (INIS)

    Bogard, T.; Finlay, K.

    1996-01-01

    Many operating nuclear power generation facilities are replacing their plant process computers. Such replacement projects are driven by equipment obsolescence issues and associated objectives to improve plant operability, increase plant information access, improve man-machine interface characteristics, and reduce operation and maintenance costs. This paper describes a few recently completed and on-going replacement projects with emphasis upon the application of integrated distributed plant process computer systems. By presenting a few recent projects, the variations of distributed systems design show how various configurations can address needs for flexibility, open architecture, and integration of technological advancements in instrumentation and control technology. Architectural considerations for optimal integration of the plant process computer with plant process instrumentation and control are evident from variations of design features.

  8. Study on 'Safety qualification of process computers used in safety systems of nuclear power plants'

    International Nuclear Information System (INIS)

    Bertsche, K.; Hoermann, E.

    1991-01-01

    The study aims at developing safety standards for the hardware and software of computer systems, which are increasingly used also for important safety systems in nuclear power plants. The survey of the present state of the art of safety requirements and specifications for safety-relevant systems and, additionally, for process computer systems has been compiled from national and foreign rules. In the Federal Republic of Germany the KTA safety guides and the BMI/BMU safety criteria have to be observed. For the design of future computer-aided systems in nuclear power plants it will be necessary to apply the guidelines in [DIN-880] and [DKE-714] together with [DIN-192]. With the aid of a risk graph the various functions of a system, or of a subsystem, can be evaluated with regard to their significance for safety engineering. (orig./HP)

  9. Core status computing system

    International Nuclear Information System (INIS)

    Yoshida, Hiroyuki.

    1982-01-01

    Purpose: To calculate power distribution, flow rate and the like in the reactor core with high accuracy in a BWR type reactor. Constitution: Total flow rate signals, traverse incore probe (TIP) signals as the neutron detector signals, thermal power signals and pressure signals are inputted into a process computer, where the power distribution and the flow rate distribution in the reactor core are calculated. A function generator connected to the process computer calculates the absolute flow rate passing through optional fuel assemblies using, as variables, flow rate signals from the introduction part for fuel assembly flow rate signals, data signals from the introduction part for the geometrical configuration data at the flow rate measuring site of fuel assemblies, total flow rate signals for the reactor core and the signals from the process computer. Numerical values thus obtained are given to the process computer as correction signals to perform correction for the experimental data. (Moriyama, K.)

  10. Computer sciences

    Science.gov (United States)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  11. High-power sputtering employed for film deposition

    International Nuclear Information System (INIS)

    Shapovalov, V I

    2017-01-01

    The features of high-power magnetron sputtering employed for film deposition are reviewed. The main physical phenomena accompanying high-power sputtering, including ion-electron emission, gas rarefaction, ionization of sputtered atoms, self-sputtering, ion sound waves and the impact of target heating, are described. (paper)

  12. GPS synchronized power system phase angle measurements

    Science.gov (United States)

    Wilson, Robert E.; Sterlina, Patrick S.

    1994-09-01

    This paper discusses the use of Global Positioning System (GPS) synchronized equipment for the measurement and analysis of key power system quantities. Two GPS-synchronized phasor measurement units (PMUs) were installed before testing. The PMUs recorded the dynamic response of the power system phase angles when the northern California power grid was excited by artificial short circuits. Power system planning engineers perform detailed computer-generated simulations of the dynamic response of the power system to naturally occurring short circuits. The computer simulations use models of transmission lines, transformers, circuit breakers, and other high voltage components. This work will compare computer simulations of the same event with the field measurements.
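
    Because the PMUs time-stamp their phasors against GPS, the phase-angle difference between two substations can be formed sample by sample and compared with simulation output. The sketch below shows that elementary step on synthetic phasor streams; it is not the utility's measurement software.

```python
import numpy as np

def angle_difference_deg(phasors_a: np.ndarray, phasors_b: np.ndarray) -> np.ndarray:
    """Per-sample phase-angle difference (degrees) between two GPS-synchronized
    complex phasor streams, wrapped to (-180, 180]."""
    return np.angle(phasors_a * np.conj(phasors_b), deg=True)

if __name__ == "__main__":
    t = np.arange(0, 1, 1 / 30)                      # 30 phasors per second
    bus_a = np.exp(1j * np.deg2rad(10 + 0 * t))      # constant 10 degrees
    bus_b = np.exp(1j * np.deg2rad(2 + 5 * t))       # slowly drifting angle
    print(angle_difference_deg(bus_a, bus_b)[:5])
```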

  13. Gingin High Optical Power Test Facility

    International Nuclear Information System (INIS)

    Zhao, C; Blair, D G; Barrigo, P

    2006-01-01

    The Australian Consortium for Gravitational Wave Astronomy (ACIGA) in collaboration with LIGO is developing a high optical power research facility at the AIGO site, Gingin, Western Australia. Research at the facility will provide solutions to the problems that advanced gravitational wave detectors will encounter with extremely high optical power. The problems include thermal lensing and parametric instabilities. This article will present the status of the facility and the plan for the future experiments

  14. Development of a Computational Tool for Measuring Organizational Competitiveness in the Photovoltaic Power Plants

    Directory of Open Access Journals (Sweden)

    Carmen B. Rosa

    2018-04-01

    Photovoltaic (PV) power generation is embedded in a globally competitive environment. This characteristic forces PV power plants to perform the processes most relevant to their competitiveness with maximum efficiency. From the managers’ point of view, evaluating the solar energy performance of installed plants is justified because it indicates their level of organizational competitiveness, which supports the decision-making process. This manuscript proposes a computational tool that graphically presents the level of competitiveness of PV power plant units based on performance indicators. The tool was developed using the Key Performance Indicators (KPIs) concept, which represents a set of measures focusing on the aspects most critical to the success of the organizations. The KPIs encompass four Fundamental Viewpoints (FV): Strategic Alliances, Solar Energy Monitoring, Management and Strategic Processes, and Power Generation Innovations. These four FVs were deployed into 26 Critical Success Factors (CSFs) and 39 KPIs. The tool was then applied to four solar generation plants, three of which presented a global organizational competitiveness level of “potentially competitive”. The proposed computational tool allows managers to assess the degree of organizational competitiveness and aids in prospecting future scenarios and in decision-making.
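
    The tool's core arithmetic, rolling KPI scores up through the Fundamental Viewpoints to a global competitiveness label, can be pictured as a weighted-average cascade. The sketch below uses invented weights, scales and thresholds purely as placeholders; it does not reproduce the authors' calibration of the 39 KPIs.

```python
def weighted_mean(scores: dict[str, float], weights: dict[str, float]) -> float:
    total_w = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_w

def global_competitiveness(kpi_scores: dict[str, dict[str, float]],
                           fv_weights: dict[str, float]) -> tuple[float, str]:
    """Aggregate KPI scores (0-10) per Fundamental Viewpoint, then globally."""
    fv_scores = {fv: sum(kpis.values()) / len(kpis) for fv, kpis in kpi_scores.items()}
    score = weighted_mean(fv_scores, fv_weights)
    # Hypothetical labelling threshold.
    label = "potentially competitive" if score >= 6.0 else "not yet competitive"
    return score, label

if __name__ == "__main__":
    kpis = {"Strategic Alliances": {"k1": 7, "k2": 6},
            "Solar Energy Monitoring": {"k3": 8, "k4": 7},
            "Management and Strategic Processes": {"k5": 5},
            "Power Generation Innovations": {"k6": 6}}
    weights = {fv: 1.0 for fv in kpis}
    print(global_competitiveness(kpis, weights))
```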

  15. E-beam high voltage switching power supply

    Science.gov (United States)

    Shimer, Daniel W.; Lange, Arnold C.

    1997-01-01

    A high power, solid state power supply is described for producing a controllable, constant high voltage output under varying and arcing loads suitable for powering an electron beam gun or other ion source. The present power supply is most useful for outputs in a range of about 100-400 kW or more. The power supply is comprised of a plurality of discrete switching type dc-dc converter modules, each comprising a voltage regulator, an inductor, an inverter for producing a high frequency square wave current of alternating polarity, an improved inverter voltage clamping circuit, a step up transformer, and an output rectifier for producing a dc voltage at the output of each module. The inputs to the converter modules are fed from a common dc rectifier/filter and are linked together in parallel through decoupling networks to suppress high frequency input interactions. The outputs of the converter modules are linked together in series and connected to the input of the transmission line to the load through a decoupling and line matching network. The dc-dc converter modules are phase activated such that for n modules, each module is activated equally 360°/n out of phase with respect to a successive module. The phased activation of the converter modules, combined with the square current waveforms out of the step up transformers, allows the power supply to operate with greatly reduced output capacitance values which minimizes the stored energy available for discharge into an electron beam gun or the like during arcing. The present power supply also provides dynamic response to varying loads by controlling the voltage regulator duty cycle using simulated voltage feedback signals and voltage feedback loops. Circuitry is also provided for sensing incipient arc currents reflected at the output of the power supply and for simultaneously decoupling the power supply circuitry from the arcing load.
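
    The 360°/n phasing rule in the abstract translates directly into a per-module switching delay of one n-th of the switching period. The sketch below computes only that timing for a hypothetical module count and switching frequency; it does not model the converter electronics.

```python
def phase_schedule(n_modules: int, switching_freq_hz: float) -> list[tuple[int, float, float]]:
    """(module index, phase offset in degrees, time delay in seconds) for n
    modules activated 360/n degrees apart."""
    period = 1.0 / switching_freq_hz
    schedule = []
    for k in range(n_modules):
        phase_deg = 360.0 * k / n_modules
        schedule.append((k, phase_deg, period * k / n_modules))
    return schedule

if __name__ == "__main__":
    # Hypothetical example: 8 modules switched at 20 kHz.
    for module, deg, delay in phase_schedule(8, 20e3):
        print(f"module {module}: {deg:5.1f} deg, delay {delay * 1e6:6.2f} us")
```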

  16. E-beam high voltage switching power supply

    International Nuclear Information System (INIS)

    Shimer, D.W.; Lange, A.C.

    1997-01-01

    A high power, solid state power supply is described for producing a controllable, constant high voltage output under varying and arcing loads suitable for powering an electron beam gun or other ion source. The present power supply is most useful for outputs in a range of about 100-400 kW or more. The power supply is comprised of a plurality of discrete switching type dc-dc converter modules, each comprising a voltage regulator, an inductor, an inverter for producing a high frequency square wave current of alternating polarity, an improved inverter voltage clamping circuit, a step up transformer, and an output rectifier for producing a dc voltage at the output of each module. The inputs to the converter modules are fed from a common dc rectifier/filter and are linked together in parallel through decoupling networks to suppress high frequency input interactions. The outputs of the converter modules are linked together in series and connected to the input of the transmission line to the load through a decoupling and line matching network. The dc-dc converter modules are phase activated such that for n modules, each module is activated equally 360 degree/n out of phase with respect to a successive module. The phased activation of the converter modules, combined with the square current waveforms out of the step up transformers, allows the power supply to operate with greatly reduced output capacitance values which minimizes the stored energy available for discharge into an electron beam gun or the like during arcing. The present power supply also provides dynamic response to varying loads by controlling the voltage regulator duty cycle using simulated voltage feedback signals and voltage feedback loops. Circuitry is also provided for sensing incipient arc currents reflected at the output of the power supply and for simultaneously decoupling the power supply circuitry from the arcing load. 7 figs

  17. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    Energy Technology Data Exchange (ETDEWEB)

    Mike Bockelie; Dave Swensen; Martin Denison; Adel Sarofim; Connie Senior

    2004-12-22

    , immersive environment. The Virtual Engineering Framework (VEF), in effect a prototype framework, was developed through close collaboration with NETL supported research teams from Iowa State University Virtual Reality Applications Center (ISU-VRAC) and Carnegie Mellon University (CMU). The VEF is open source, compatible across systems ranging from inexpensive desktop PCs to large-scale, immersive facilities and provides support for heterogeneous distributed computing of plant simulations. The ability to compute plant economics through an interface that coupled the CMU IECM tool to the VEF was demonstrated, and the ability to couple the VEF to Aspen Plus, a commercial flowsheet modeling tool, was demonstrated. Models were interfaced to the framework using VES-Open. Tests were performed for interfacing CAPE-Open-compliant models to the framework. Where available, the developed models and plant simulations have been benchmarked against data from the open literature. The VEF has been installed at NETL. The VEF provides simulation capabilities not available in commercial simulation tools. It provides DOE engineers, scientists, and decision makers with a flexible and extensible simulation system that can be used to reduce the time, technical risk, and cost to develop the next generation of advanced, coal-fired power systems that will have low emissions and high efficiency. Furthermore, the VEF provides a common simulation system that NETL can use to help manage Advanced Power Systems Research projects, including both combustion- and gasification-based technologies.

  18. DR2DI: a powerful computational tool for predicting novel drug-disease associations

    Science.gov (United States)

    Lu, Lu; Yu, Hua

    2018-05-01

    Finding new candidate diseases related to known drugs provides an effective method for fast and low-risk drug development. However, experimental identification of drug-disease associations is expensive and time-consuming. This motivates the need for developing in silico computational methods that can infer true drug-disease pairs with high confidence. In this study, we presented a novel and powerful computational tool, DR2DI, for accurately uncovering the potential associations between drugs and diseases using high-dimensional and heterogeneous omics data as information sources. Based on a unified and extended similarity kernel framework, DR2DI inferred the unknown relationships between drugs and diseases using a Regularized Kernel Classifier. Importantly, DR2DI employed a semi-supervised and global learning algorithm which can be applied to uncover the diseases (drugs) associated with known and novel drugs (diseases). In silico global validation experiments showed that DR2DI significantly outperforms two recent approaches for predicting drug-disease associations. Detailed case studies further demonstrated that the therapeutic indications and side effects of drugs predicted by DR2DI could be validated by existing database records and literature, suggesting that DR2DI can serve as a useful bioinformatics tool for identifying potential drug-disease associations and guiding drug repositioning. Our software and comparison codes are freely available at https://github.com/huayu1111/DR2DI.

  19. DR2DI: a powerful computational tool for predicting novel drug-disease associations

    Science.gov (United States)

    Lu, Lu; Yu, Hua

    2018-04-01

    Finding new candidate diseases related to known drugs provides an effective method for fast and low-risk drug development. However, experimental identification of drug-disease associations is expensive and time-consuming. This motivates the need for developing in silico computational methods that can infer true drug-disease pairs with high confidence. In this study, we presented a novel and powerful computational tool, DR2DI, for accurately uncovering the potential associations between drugs and diseases using high-dimensional and heterogeneous omics data as information sources. Based on a unified and extended similarity kernel framework, DR2DI inferred the unknown relationships between drugs and diseases using a Regularized Kernel Classifier. Importantly, DR2DI employed a semi-supervised and global learning algorithm which can be applied to uncover the diseases (drugs) associated with known and novel drugs (diseases). In silico global validation experiments showed that DR2DI significantly outperforms two recent approaches for predicting drug-disease associations. Detailed case studies further demonstrated that the therapeutic indications and side effects of drugs predicted by DR2DI could be validated by existing database records and literature, suggesting that DR2DI can serve as a useful bioinformatics tool for identifying potential drug-disease associations and guiding drug repositioning. Our software and comparison codes are freely available at https://github.com/huayu1111/DR2DI.
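
    For readers unfamiliar with regularized kernel classifiers, the snippet below sketches the generic kernel ridge (regularized least-squares) smoothing step that similarity-kernel association predictors typically perform on a known drug-disease matrix. It illustrates the general technique only; the released DR2DI code at the GitHub link above is the authoritative implementation.

```python
import numpy as np

def kernel_rls_scores(K_drug: np.ndarray, K_disease: np.ndarray,
                      Y: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Score all drug-disease pairs by kernel-ridge smoothing of the known 0/1
    association matrix Y (drugs x diseases), first over the drug similarity
    kernel, then over the disease similarity kernel."""
    n_drug, n_dis = K_drug.shape[0], K_disease.shape[0]
    A = np.linalg.solve(K_drug + lam * np.eye(n_drug), Y)        # drug side
    scores = K_drug @ A
    B = np.linalg.solve(K_disease + lam * np.eye(n_dis), scores.T)
    return (K_disease @ B).T                                     # disease side

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Kd = np.eye(5) + 0.1 * rng.random((5, 5)); Kd = (Kd + Kd.T) / 2
    Ks = np.eye(4) + 0.1 * rng.random((4, 4)); Ks = (Ks + Ks.T) / 2
    Y = (rng.random((5, 4)) > 0.7).astype(float)
    print(kernel_rls_scores(Kd, Ks, Y).round(2))
```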

  20. High Power laser power conditioning system new discharge circuit research

    CERN Document Server

    Li Yi; Peng Han Sheng; Zhou Pei Zhang; Zheng Wan Guo; Guo Lang Fu; Chen Li Hua; Chen De Hui; Lai Gui You; Luan Yong Ping

    2002-01-01

    A new discharge circuit for the power conditioning system of a high power laser is studied. The theoretical model of the main discharge circuit is established. The pre-ionization circuit is studied experimentally. In addition, the explosion energy of the new large xenon lamp is successfully measured. The conclusions have been applied to the 4 x 2 amplifier system.

  1. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1991-03-15

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour.

  2. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour

  3. A computer study of radionuclide production in high power accelerators for medical and industrial applications

    Science.gov (United States)

    Van Riper, K. A.; Mashnik, S. G.; Wilson, W. B.

    2001-05-01

    Methods for radionuclide production calculation in a high power proton accelerator have been developed and applied to study the production of 22 isotopes by high-energy protons and neutrons. These methods are readily applicable to accelerator and reactor environments other than the particular model we considered, and to the production of other radioactive and stable isotopes. We have also developed methods for evaluating cross sections from a wide variety of sources into a single cross section set and have produced an evaluated library covering about a third of all natural elements. These methods are also applicable to an expanded set of reactions. A 684 page detailed report on this study, with 37 tables and 264 color figures, is available on the Web at http://t2.lanl.gov/publications/publications.html, or, if not accessible, in hard copy from the authors.

  4. Development of distributed computer systems for future nuclear power plants

    International Nuclear Information System (INIS)

    Yan, G.; L'Archeveque, J.V.R.

    1978-01-01

    Dual computers have been used for direct digital control in CANDU power reactors since 1963. However, as reactor plants have grown in size and complexity, some drawbacks of centralized control have appeared, such as the surprisingly large amount of cabling required for information transmission. Dramatic changes in component costs and a desire to improve system performance have stimulated a broad-based research and development effort in distributed systems. This paper outlines work in this area.

  5. Integrated Computing, Communication, and Distributed Control of Deregulated Electric Power Systems

    Energy Technology Data Exchange (ETDEWEB)

    Bajura, Richard; Feliachi, Ali

    2008-09-24

    Restructuring of the electricity market has affected all aspects of the power industry from generation to transmission, distribution, and consumption. Transmission circuits, in particular, are often stressed beyond their stability limits because of the difficulty of building new transmission lines due to environmental concerns and financial risk. Deregulation has resulted in the need for tighter control strategies to maintain reliability even in the event of considerable structural changes, such as loss of a large generating unit or a transmission line, and changes in loading conditions due to the continuously varying power consumption. Our research efforts under the DOE EPSCoR Grant focused on Integrated Computing, Communication and Distributed Control of Deregulated Electric Power Systems. This research is applicable to operating and controlling modern electric energy systems. The controls developed by APERC provide for a more efficient, economical, reliable, and secure operation of these systems. Under this program, we developed distributed control algorithms suitable for large-scale geographically dispersed power systems and also economic tools to evaluate their effectiveness and impact on power markets. Progress was made in the development of distributed intelligent control agents for reliable and automated operation of integrated electric power systems. The methodologies employed combine information technology, control and communication, agent technology, and power systems engineering in the development of intelligent control agents for reliable and automated operation of integrated electric power systems. In the event of scheduled load changes or unforeseen disturbances, the power system is expected to minimize the effects and costs of disturbances and to maintain critical infrastructure operational.

  6. Quantitative description on structure-property relationships of Li-ion battery materials for high-throughput computations

    Science.gov (United States)

    Wang, Youwei; Zhang, Wenqing; Chen, Lidong; Shi, Siqi; Liu, Jianjun

    2017-12-01

    Li-ion batteries are a key technology for addressing the global challenge of clean renewable energy and environmental pollution. Their contemporary applications, for portable electronic devices, electric vehicles, and large-scale power grids, stimulate the development of high-performance battery materials with high energy density, high power, good safety, and long lifetime. High-throughput calculations provide a practical strategy to discover new battery materials and optimize currently known material performances. Most cathode materials screened by previous high-throughput calculations cannot meet the requirements of practical applications because only the bulk capacity, voltage and volume change were considered. It is important to include more structure-property relationships, such as point defects, surfaces and interfaces, doping and metal mixing, and nanosize effects, in high-throughput calculations. In this review, we establish a quantitative description of structure-property relationships in Li-ion battery materials in terms of intrinsic bulk parameters, which can be applied in future high-throughput calculations to screen Li-ion battery materials. Based on these parameterized structure-property relationships, a possible high-throughput computational screening flow path is proposed to obtain high-performance battery materials.
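
    In a high-throughput screening flow of the kind proposed, the parameterized structure-property relationships act as successive filters on a pool of candidate compounds. The sketch below shows such a filter with invented property names and thresholds; it is a schematic of the flow, not the descriptors derived in the review.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    voltage_v: float        # average intercalation voltage
    capacity_mah_g: float   # gravimetric capacity
    volume_change_pct: float

def screen(candidates: list[Candidate],
           min_voltage: float = 3.0,
           min_capacity: float = 150.0,
           max_volume_change: float = 10.0) -> list[Candidate]:
    """Keep only candidates meeting hypothetical bulk-property thresholds."""
    return [c for c in candidates
            if c.voltage_v >= min_voltage
            and c.capacity_mah_g >= min_capacity
            and c.volume_change_pct <= max_volume_change]

if __name__ == "__main__":
    pool = [Candidate("A", 3.8, 170.0, 6.5),
            Candidate("B", 2.4, 200.0, 4.0),
            Candidate("C", 3.9, 140.0, 2.1)]
    print([c.name for c in screen(pool)])
```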

  7. High-z objects and cold dark matter cosmogonies - Constraints on the primordial power spectrum on small scales

    Science.gov (United States)

    Kashlinsky, A.

    1993-01-01

    Modified cold dark matter (CDM) models were recently suggested to account for large-scale optical data, which fix the power spectrum on large scales, and the COBE results, which would then fix the bias parameter, b. We point out that all such models have a deficit of small-scale power where density fluctuations are presently nonlinear, and should then lead to late epochs of collapse of scales M between 10^9-10^10 solar masses and (1-5) x 10^14 solar masses. We compute the probabilities and comoving space densities of various scale objects at high redshifts according to the CDM models and compare these with observations of high-z QSOs, high-z galaxies and the protocluster-size object found recently by Uson et al. (1992) at z = 3.4. We show that the modified CDM models are inconsistent with the observational data on these objects. We thus suggest that in order to account for the high-z objects, as well as the large-scale and COBE data, one needs a power spectrum with more power on small scales than CDM models allow and an open universe.
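
    For reference, a standard way to quantify such collapse probabilities is the Gaussian threshold-crossing (Press-Schechter-type) fraction of mass in objects above scale M at redshift z, written below in the usual textbook form; it is quoted here as general background and is not necessarily the exact prescription used in the paper.

```latex
% Fraction of mass in collapsed objects of mass > M at redshift z,
% with delta_c ~ 1.69 the linear collapse threshold and sigma(M,z) the
% rms linear density fluctuation on scale M at that redshift.
F(>M, z) = \operatorname{erfc}\!\left(\frac{\delta_c}{\sqrt{2}\,\sigma(M, z)}\right)
```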

  8. The readout performance evaluation of PowerPC

    International Nuclear Information System (INIS)

    Chu Yuanping; Zhang Hongyu; Zhao Jingwei; Ye Mei; Tao Ning; Zhu Kejun; Tang Suqiu; Guo Yanan

    2003-01-01

    The PowerPC, a powerful low-cost embedded computer, has been one of the important research subjects in recent years within the BESIII data acquisition system project. Research on embedded systems and embedded computers has achieved many important results in the field of high energy physics, especially in data acquisition systems. One of the key points in designing an acquisition system based on the PowerPC is to evaluate its readout ability correctly. This paper introduces some tests of the PowerPC readout performance. (authors)

  9. Heads in the Cloud: A Primer on Neuroimaging Applications of High Performance Computing.

    Science.gov (United States)

    Shatil, Anwar S; Younas, Sohail; Pourreza, Hossein; Figley, Chase R

    2015-01-01

    With larger data sets and more sophisticated analyses, it is becoming increasingly common for neuroimaging researchers to push (or exceed) the limitations of standalone computer workstations. Nonetheless, although high-performance computing platforms such as clusters, grids and clouds are already in routine use by a small handful of neuroimaging researchers to increase their storage and/or computational power, the adoption of such resources by the broader neuroimaging community remains relatively uncommon. Therefore, the goal of the current manuscript is to: 1) inform prospective users about the similarities and differences between computing clusters, grids and clouds; 2) highlight their main advantages; 3) discuss when it may (and may not) be advisable to use them; 4) review some of their potential problems and barriers to access; and finally 5) give a few practical suggestions for how interested new users can start analyzing their neuroimaging data using cloud resources. Although the aim of cloud computing is to hide most of the complexity of the infrastructure management from end-users, we recognize that this can still be an intimidating area for cognitive neuroscientists, psychologists, neurologists, radiologists, and other neuroimaging researchers lacking a strong computational background. Therefore, with this in mind, we have aimed to provide a basic introduction to cloud computing in general (including some of the basic terminology, computer architectures, infrastructure and service models, etc.), a practical overview of the benefits and drawbacks, and a specific focus on how cloud resources can be used for various neuroimaging applications.

  10. Software for computer based systems important to safety in nuclear power plants. Safety guide

    International Nuclear Information System (INIS)

    2004-01-01

    Computer based systems are of increasing importance to safety in nuclear power plants as their use in both new and older plants is rapidly increasing. They are used both in safety related applications, such as some functions of the process control and monitoring systems, as well as in safety critical applications, such as reactor protection or actuation of safety features. The dependability of computer based systems important to safety is therefore of prime interest and should be ensured. With current technology, it is possible in principle to develop computer based instrumentation and control systems for systems important to safety that have the potential for improving the level of safety and reliability with sufficient dependability. However, their dependability can be predicted and demonstrated only if a systematic, fully documented and reviewable engineering process is followed. Although a number of national and international standards dealing with quality assurance for computer based systems important to safety have been or are being prepared, internationally agreed criteria for demonstrating the safety of such systems are not generally available. It is recognized that there may be other ways of providing the necessary safety demonstration than those recommended here. The basic requirements for the design of safety systems for nuclear power plants are provided in the Requirements for Design issued in the IAEA Safety Standards Series. The IAEA has issued a Technical Report to assist Member States in ensuring that computer based systems important to safety in nuclear power plants are safe and properly licensed. The report provides information on current software engineering practices and, together with relevant standards, forms a technical basis for this Safety Guide. The objective of this Safety Guide is to provide guidance on the collection of evidence and preparation of documentation to be used in the safety demonstration for the software for computer based

  11. Software for computer based systems important to safety in nuclear power plants. Safety guide

    International Nuclear Information System (INIS)

    2005-01-01

    Computer based systems are of increasing importance to safety in nuclear power plants as their use in both new and older plants is rapidly increasing. They are used both in safety related applications, such as some functions of the process control and monitoring systems, as well as in safety critical applications, such as reactor protection or actuation of safety features. The dependability of computer based systems important to safety is therefore of prime interest and should be ensured. With current technology, it is possible in principle to develop computer based instrumentation and control systems for systems important to safety that have the potential for improving the level of safety and reliability with sufficient dependability. However, their dependability can be predicted and demonstrated only if a systematic, fully documented and reviewable engineering process is followed. Although a number of national and international standards dealing with quality assurance for computer based systems important to safety have been or are being prepared, internationally agreed criteria for demonstrating the safety of such systems are not generally available. It is recognized that there may be other ways of providing the necessary safety demonstration than those recommended here. The basic requirements for the design of safety systems for nuclear power plants are provided in the Requirements for Design issued in the IAEA Safety Standards Series. The IAEA has issued a Technical Report to assist Member States in ensuring that computer based systems important to safety in nuclear power plants are safe and properly licensed. The report provides information on current software engineering practices and, together with relevant standards, forms a technical basis for this Safety Guide. The objective of this Safety Guide is to provide guidance on the collection of evidence and preparation of documentation to be used in the safety demonstration for the software for computer based

  12. Software for computer based systems important to safety in nuclear power plants. Safety guide

    International Nuclear Information System (INIS)

    2000-01-01

    Computer based systems are of increasing importance to safety in nuclear power plants as their use in both new and older plants is rapidly increasing. They are used both in safety related applications, such as some functions of the process control and monitoring systems, as well as in safety critical applications, such as reactor protection or actuation of safety features. The dependability of computer based systems important to safety is therefore of prime interest and should be ensured. With current technology, it is possible in principle to develop computer based instrumentation and control systems for systems important to safety that have the potential for improving the level of safety and reliability with sufficient dependability. However, their dependability can be predicted and demonstrated only if a systematic, fully documented and reviewable engineering process is followed. Although a number of national and international standards dealing with quality assurance for computer based systems important to safety have been or are being prepared, internationally agreed criteria for demonstrating the safety of such systems are not generally available. It is recognized that there may be other ways of providing the necessary safety demonstration than those recommended here. The basic requirements for the design of safety systems for nuclear power plants are provided in the Requirements for Design issued in the IAEA Safety Standards Series. The IAEA has issued a Technical Report to assist Member States in ensuring that computer based systems important to safety in nuclear power plants are safe and properly licensed. The report provides information on current software engineering practices and, together with relevant standards, forms a technical basis for this Safety Guide. The objective of this Safety Guide is to provide guidance on the collection of evidence and preparation of documentation to be used in the safety demonstration for the software for computer based

  13. On energy efficient power allocation for power-constrained systems

    KAUST Repository

    Sboui, Lokman

    2014-09-01

    Recently, energy efficiency (EE) has become an important factor when designing new wireless communication systems. Due to economic and environmental challenges, new trends and efforts are oriented toward “green” communication, especially for energy-constrained applications such as wireless sensor networks and cognitive radio. To this end, we analyze the power allocation scheme that maximizes the EE, defined as rate over the total power including circuit power. We derive an explicit expression of the optimal power with instantaneous channel gain based on the EE criterion. We show that the relation between the EE and the spectral efficiency (SE) when the optimal power is adopted is strictly increasing, in contrast with the SE-EE trade-off discussed in the literature. We also solve a non-convex problem and compute explicitly the optimal power for ergodic EE under either a peak or an average power constraint. When the instantaneous channel is not available, we provide the optimal power equation and compute a simple sub-optimal power. In the numerical results, we show that the sub-optimal solution is very close to the optimal solution. In addition, we show that the absence of channel state information (CSI) only affects the EE and SE performance in the high power regime compared to the full CSI case.
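
    With EE defined as rate over total consumed power, the instantaneous problem is to maximize log2(1 + g·p) / (p + Pc) over the transmit power p subject to a peak constraint. The sketch below solves it by brute-force grid search with invented numbers, only to make the objective concrete; the paper itself derives the optimum in closed form.

```python
import numpy as np

def best_ee_power(gain: float, p_circuit_w: float, p_peak_w: float,
                  n_grid: int = 10000) -> tuple[float, float]:
    """Grid search for the transmit power maximizing
    EE(p) = log2(1 + gain * p) / (p + p_circuit_w), with 0 < p <= p_peak_w."""
    p = np.linspace(p_peak_w / n_grid, p_peak_w, n_grid)
    ee = np.log2(1.0 + gain * p) / (p + p_circuit_w)
    k = int(np.argmax(ee))
    return float(p[k]), float(ee[k])

if __name__ == "__main__":
    # Hypothetical numbers: gain-to-noise ratio 5 (1/W), 100 mW circuit power.
    p_star, ee_star = best_ee_power(gain=5.0, p_circuit_w=0.1, p_peak_w=1.0)
    print(f"optimal power {p_star:.3f} W, EE {ee_star:.3f} bit/s/Hz per W")
```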

  14. Atmospheric Propagation and Combining of High-Power Lasers

    Science.gov (United States)

    2015-09-08

    (Abstract fragment) The report addresses phase-locking of high-power lasers, a challenge not encountered when phase-locking low-power (mW-level) lasers, and notes that the required technology does not currently exist. The fragment also cites “Brightness-scaling potential of actively phase-locked solid state laser arrays,” IEEE J. Sel. Topics Quantum Electron., vol. 13, no. 3, pp. 460–472.

  15. Computations in plasma physics

    International Nuclear Information System (INIS)

    Cohen, B.I.; Killeen, J.

    1984-01-01

    A review of computer applications in plasma physics is presented. The contribution of computers to the investigation of magnetic and inertial plasma confinement and of charged particle beam propagation is described. Typical uses of computers for the simulation and control of laboratory and space plasma experiments, and for data acquisition in these experiments, are considered. Basic computational methods applied in plasma physics are discussed. Future trends in computer utilization in plasma research are considered in terms of the increasing role of microprocessors and high-speed data plotters and the need for more powerful computers.

  16. A computer-aided system for the control of the E.D.F. 1400 MW nuclear power plants

    International Nuclear Information System (INIS)

    Beltranda, G.; Philipps, C.

    1988-01-01

    The future E.D.F. 1400 MW nuclear power plants (due to be commissioned in 1991 at CHOOZ) are provided with a control and instrumentation system comprising the following levels: sensors and actuators (LEVEL 0), the interface for the elementary acquisition and control signals; the programmable logical and numerical controllers (LEVEL 1), handling the logical control sequences and analog adjustment sequences for all the plant equipment; the control room (LEVEL 2), including the computer-aided operation system as well as the wall mimic diagram and the auxiliary panel directly connected to the controllers, which forms the processing and control conversational level; and the maintenance and site computer-aided systems (LEVEL 3). This paper describes the computer-aided operation system (called KIC N4), its main functions, its architecture and the solutions retained as regards its software and the high quality of data required. The realization of this system has been entrusted by EDF to the SEMA.METRA/CIMSA-SINTRA grouping, of which SEMA.METRA is the leading company.
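
    As an illustration only, the four-level hierarchy described above can be modelled as a simple data structure in which each level passes processed information to the level above it; class and field names below are hypothetical and not part of the KIC N4 design.

    ```python
    # Minimal illustrative model of the LEVEL 0..3 control and instrumentation
    # hierarchy described in the abstract (names are hypothetical).
    from dataclasses import dataclass, field

    @dataclass
    class ControlLevel:
        number: int
        name: str
        role: str
        feeds: list = field(default_factory=list)  # levels receiving its data

    levels = [
        ControlLevel(0, "Sensors and actuators", "elementary acquisition and control signals"),
        ControlLevel(1, "Programmable controllers", "logical control and analog adjustment sequences"),
        ControlLevel(2, "Control room (KIC N4)", "computer-aided operation, mimic diagram, auxiliary panel"),
        ControlLevel(3, "Maintenance and site systems", "computer-aided maintenance support"),
    ]

    # Each level forwards processed information to the level above it.
    for lower, upper in zip(levels, levels[1:]):
        lower.feeds.append(upper.number)

    for lvl in levels:
        print(f"Level {lvl.number}: {lvl.name} ({lvl.role}) -> feeds {lvl.feeds or 'none'}")
    ```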

  17. Transient analysis of the output short-circuit fault of high power and high voltage DC power supply

    International Nuclear Information System (INIS)

    Yang Zhigang; Zhang Jian; Huang Yiyun; Hao Xu; Sun Haozhang; Guo Fei

    2014-01-01

    The transient conditions of an output short-circuit fault of a high voltage DC power supply are introduced, and the energy injected by the power supply into the klystron during the protection process of the three-electrode gas switch is analyzed and calculated in detail for the case of electrode arc faults in the klystron load. The results of the calculation and simulation are consistent with the experimental results. When an output short-circuit fault of the high voltage power supply occurs, the switch can be shut off within microseconds and the short-circuit current can be limited to 200 A. This verifies the rapidity and reliability of the three-electrode gas switch protection, which has engineering application value. (authors)
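
    A rough, hypothetical back-of-envelope estimate of the energy deposited in the arc before the switch opens can be made as E ≈ V·I·t; the 200 A current limit and microsecond-scale shut-off time come from the abstract, while the supply voltage and exact timing below are assumed for illustration only.

    ```python
    # Hypothetical upper-bound estimate of the energy injected into a klystron
    # arc before the three-electrode gas switch diverts the fault current.
    def arc_energy(voltage_v, current_a, shutoff_time_s):
        """Upper-bound estimate E ~ V * I * t (assumes V and I roughly constant)."""
        return voltage_v * current_a * shutoff_time_s

    V_supply = 100e3   # assumed 100 kV-class high-voltage DC supply
    I_fault = 200.0    # short-circuit current limit quoted in the abstract (A)
    t_shutoff = 1e-6   # microsecond-scale switch turn-off time (s)

    E = arc_energy(V_supply, I_fault, t_shutoff)
    print(f"Estimated arc energy: {E:.1f} J")  # ~20 J for these assumed numbers
    ```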

  18. Design and development of power supplies for high power IOT based RF amplifier

    International Nuclear Information System (INIS)

    Kumar, Yashwant; Kumari, S.; Ghosh, M.K.; Bera, A.; Sadhukhan, A.; Pal, S.S.; Khare, V.K.; Tiwari, T.P.; Thakur, S.K.; Saha, S.

    2013-01-01

    The design, development, circuit topology, function of system components and key system specifications of the different power supplies for biasing the electrodes of a Thales Inductive Output Tube (IOT) based high power RF amplifier are presented in this paper. A high voltage power supply (-30 kV, 3.2 A dc) with a fast (∼microsecond) crowbar protection circuit has been designed, developed and commissioned at VECC for testing the complete setup. Other power supplies for biasing the grid electrode (300 V, 0.5 A dc) and the ion pump (3 kV, 0.1 mA dc) of the IOT have also been designed, developed and tested with the actual load. An HV deck (60 kV isolation) has been specially designed in house to hold these power supplies, which float at 30 kV. All these power supplies are powered by an isolation transformer (5 kVA, 60 kV isolation) designed and developed at VECC. (author)
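
    As a quick sanity check on the ratings quoted above, the DC output power of each supply follows from voltage times current; the sketch below simply tabulates these products (labels are descriptive only).

    ```python
    # Quick arithmetic check of the DC output ratings quoted in the abstract.
    supplies = {
        "IOT beam supply (-30 kV, 3.2 A)": (30e3, 3.2),
        "Grid bias supply (300 V, 0.5 A)": (300.0, 0.5),
        "Ion pump supply (3 kV, 0.1 mA)": (3e3, 0.1e-3),
    }

    for name, (v, i) in supplies.items():
        print(f"{name}: {v * i / 1e3:.4f} kW")
    # The beam supply works out to 96 kW, while the floating bias supplies on the
    # HV deck draw only ~150 W, consistent with a 5 kVA isolation transformer.
    ```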

  19. Index extraction for electromagnetic field evaluation of high power wireless charging system.

    Science.gov (United States)

    Park, SangWook

    2017-01-01

    This paper presents precise dosimetry for a highly resonant wireless power transfer (HR-WPT) system using an anatomically realistic human voxel model. The dosimetry for the HR-WPT system, designed to operate at 13.56 MHz (one of the ISM frequency bands), is conducted at various distances between the human model and the system, and for both aligned and misaligned transmitting and receiving circuits. The specific absorption rates in the human body are computed by a two-step approach: in the first step, the field generated by the HR-WPT system is calculated, and in the second step the specific absorption rates are computed with the scattered-field finite-difference time-domain method, using the fields obtained in the first step as the incident fields. Compliance with the international safety guidelines for non-uniform field exposure from the HR-WPT system is discussed. Furthermore, the coupling factor concept is employed to relax the maximum allowable transmitting power, and coupling factors derived from the dosimetry results are presented. In this calculation, the allowable external magnetic field from the HR-WPT system can be relaxed by approximately a factor of four using the coupling factor in the worst exposure scenario.
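
    The coupling-factor relaxation mentioned above amounts to scaling the guideline reference level by 1/k for localized, non-uniform exposure. A minimal sketch of that relation follows; the reference level and coupling factor values are assumed for illustration and are not taken from the paper or from any specific guideline.

    ```python
    # Hypothetical illustration of the coupling-factor relaxation: for non-uniform,
    # localized exposure the allowable external field can be scaled up by
    # 1/coupling_factor relative to the guideline reference level.
    def relaxed_reference_level(reference_level, coupling_factor):
        """Allowable external field when a coupling factor k < 1 is accounted for."""
        return reference_level / coupling_factor

    H_ref = 0.2   # assumed magnetic-field reference level (A/m), illustrative only
    k = 0.25      # assumed coupling factor (~4x relaxation, as in the abstract)

    print(f"Relaxed allowable H-field: {relaxed_reference_level(H_ref, k):.2f} A/m")
    ```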

  20. Driver Circuit For High-Power MOSFET's

    Science.gov (United States)

    Letzer, Kevin A.

    1991-01-01

    The driver circuit generates the rapid-voltage-transition pulses needed to switch high-power metal oxide/semiconductor field-effect transistor (MOSFET) modules rapidly between full "on" and full "off". Rapid switching reduces the time of overlap between appreciable current through and appreciable voltage across such modules, thereby increasing power efficiency.
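
    The efficiency argument above can be made concrete with the standard overlap-loss approximation E ≈ ½·V·I·t per switching edge; the bus voltage, load current, switching frequency and transition times in the sketch below are assumed values chosen only to show the scaling, not figures from the reported circuit.

    ```python
    # Back-of-envelope illustration (assumed values) of why fast gate drive matters:
    # during each transition the MOSFET briefly sees both high voltage and high
    # current, dissipating roughly E ~ 0.5 * V * I * t_transition per edge.
    def switching_loss(v_bus, i_load, t_transition, f_switch):
        """Approximate switching power loss for turn-on plus turn-off edges (W)."""
        energy_per_edge = 0.5 * v_bus * i_load * t_transition
        return 2.0 * energy_per_edge * f_switch

    V, I, f = 200.0, 10.0, 50e3      # assumed bus voltage, load current, switching frequency
    for t_tr in (1e-6, 100e-9):      # slow vs. fast gate drive
        print(f"transition {t_tr * 1e9:.0f} ns -> "
              f"switching loss {switching_loss(V, I, t_tr, f):.1f} W")
    # Cutting the transition time from 1 us to 100 ns reduces the overlap loss tenfold.
    ```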