WorldWideScience

Sample records for high computational power

  1. Computing High Accuracy Power Spectra with Pico

    CERN Document Server

    Fendt, William A

    2007-01-01

    This paper presents the second release of Pico (Parameters for the Impatient COsmologist). Pico is a general purpose machine learning code which we have applied to computing the CMB power spectra and the WMAP likelihood. For this release, we have made improvements to the algorithm as well as the data sets used to train Pico, leading to a significant improvement in accuracy. For the 9 parameter nonflat case presented here Pico can on average compute the TT, TE and EE spectra to better than 1% of cosmic standard deviation for nearly all $\ell$ values over a large region of parameter space. Performing a cosmological parameter analysis of current CMB and large scale structure data, we show that these power spectra give very accurate 1 and 2 dimensional parameter posteriors. We have extended Pico to allow computation of the tensor power spectrum and the matter transfer function. Pico runs about 1500 times faster than CAMB at the default accuracy and about 250,000 times faster at high accuracy. Training Pico can be...
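
    Pico's core idea, training a fast interpolator on precomputed outputs of an expensive code, can be illustrated with a toy emulator. Everything below is a stand-in: the target function replaces a real Boltzmann-code call, and a single global polynomial fit replaces Pico's local fits over a clustered parameter space.

```python
import numpy as np

# Hypothetical stand-in for an expensive Boltzmann-code call (e.g. CAMB);
# Pico's real training data are full CMB spectra, not this toy function.
def expensive_spectrum(theta):
    return np.sin(3 * theta) + 0.5 * theta**2

# "Training": evaluate the expensive code on a grid of parameter values.
train_x = np.linspace(0.0, 1.0, 50)
train_y = expensive_spectrum(train_x)

# Fit a cheap polynomial emulator in place of the expensive code.
coeffs = np.polyfit(train_x, train_y, deg=6)
emulator = np.poly1d(coeffs)

# Inside the training region the emulator tracks the expensive code
# to small absolute error, at a tiny fraction of the cost.
test_x = np.linspace(0.05, 0.95, 200)
err = np.max(np.abs(emulator(test_x) - expensive_spectrum(test_x)))
print(f"max emulation error: {err:.2e}")
```

    The speedups quoted in the abstract come from exactly this trade: polynomial evaluation is orders of magnitude cheaper than integrating the underlying physics.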

  2. High Performance Computing - Power Application Programming Interface Specification.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H.; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
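
    What such a portable measurement/control interface might look like can be sketched as follows. The class and method names (NodePowerInterface, read_power_watts, set_power_cap) are illustrative inventions, not the actual Power API types; the point is the uniform facade over vendor-specific backends.

```python
# A minimal sketch of a portable power API: one interface, pluggable
# vendor backends (e.g. RAPL, IPMI, or a vendor library).
class NodePowerInterface:
    """Uniform measurement/control facade over a vendor-specific backend."""

    def __init__(self, backend):
        self._backend = backend

    def read_power_watts(self):
        """Instantaneous power draw, delegated to the backend."""
        return self._backend.read_power()

    def set_power_cap(self, watts):
        """Request an upper bound on node power; the backend enforces it."""
        if watts <= 0:
            raise ValueError("power cap must be positive")
        self._backend.apply_cap(watts)

# A fake backend standing in for real measurement hardware.
class FakeBackend:
    def __init__(self):
        self.cap = None
    def read_power(self):
        return 250.0
    def apply_cap(self, watts):
        self.cap = watts

node = NodePowerInterface(FakeBackend())
node.set_power_cap(300.0)
print(node.read_power_watts())
```

    Any layer of the software stack, from a job scheduler to a facility dashboard, could program against the facade without knowing which backend is underneath; that portability is the proposal's goal.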

  3. Power/energy use cases for high performance computing.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H.; Kelly, Suzanne M.; Hammond, Steven; Elmore, Ryan; Munch, Kristin

    2013-12-01

    Power and energy have been identified as a first order challenge for future extreme scale high performance computing (HPC) systems. In practice the breakthroughs will need to be provided by the hardware vendors, but making the best use of their solutions in an HPC environment will likely require periodic tuning by facility operators and software components. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide the HPC community with a common understanding of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers can use to steer power consumption.

  4. High performance computing in power and energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Khaitan, Siddhartha Kumar [Iowa State Univ., Ames, IA (United States)]; Gupta, Anshul [IBM Watson Research Center, Yorktown Heights, NY (United States)] (eds.)

    2013-07-01

    The twin challenge of meeting global energy demands in the face of growing economies and populations while restricting greenhouse gas emissions is one of the most daunting that humanity has ever faced. Smart electrical generation and distribution infrastructure will play a crucial role in meeting these challenges. We will need capabilities to handle the large volumes of data generated by power system components such as PMUs, DFRs and other data acquisition devices, as well as the capacity to process these data at high resolution via multi-scale and multi-period simulations, cascading and security analysis, and the interaction between hybrid systems (electric, transport, gas, oil, coal, etc.), in order to extract meaningful information in real time and ensure a secure, reliable and stable power grid. Advanced research on the development and implementation of market-ready, leading-edge, high-speed enabling technologies and algorithms for solving real-time, dynamic, resource-critical problems will be required for the dynamic security analysis needed for successful implementation of Smart Grid initiatives. This book aims to bring together some of the latest research developments, as well as thoughts on future research directions, for high performance computing applications in electric power system planning, operations, security, markets, and grid integration of alternate energy sources.

  5. High performance computing in power and energy systems

    CERN Document Server

    Khaitan, Siddhartha Kumar

    2012-01-01

    The twin challenge of meeting global energy demands in the face of growing economies and populations while restricting greenhouse gas emissions is one of the most daunting that humanity has ever faced. Smart electrical generation and distribution infrastructure will play a crucial role in meeting these challenges. We will need capabilities to handle the large volumes of data generated by power system components such as PMUs, DFRs and other data acquisition devices, as well as the capacity to process these data at high resolution via multi-scale and multi-period simulations, casc

  6. Computer Modeling and Simulation Evaluation of High Power LED Sources for Secondary Optical Design

    Institute of Scientific and Technical Information of China (English)

    SU Hong-dong; WANG Ya-jun; DONG Ji-yang; CHEN Zhong

    2007-01-01

    Proposed and demonstrated is a novel computer modeling method for high power light emitting diodes (LEDs). It captures the geometrical structure and optical properties of a high power LED, as well as the definition of the LED dies with their spatial and angular distributions. The merits and shortcomings of traditional modeling methods when applied to high power LEDs for secondary optical design are discussed. Two commercial high power LEDs are simulated using the proposed computer modeling method. The correlation coefficient is proposed as the metric for comparing the simulation results against manufacturing specifications. The source model is validated by correlation coefficients above 99% for different surface incident angle intervals.
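
    The validation metric the abstract relies on, the correlation coefficient between a simulated angular intensity curve and a manufacturer's specification, can be computed as follows. The datasheet curve and noise level here are synthetic illustrations, not the paper's measurement data.

```python
import numpy as np

# Hypothetical datasheet curve: intensity vs. incident angle for an LED.
angles = np.linspace(-90, 90, 37)              # degrees
spec = np.cos(np.radians(angles)) ** 1.2       # invented specification curve

# "Simulation" output: the spec curve plus small modeling error.
rng = np.random.default_rng(0)
sim = spec + rng.normal(0.0, 0.005, spec.shape)

# Pearson correlation coefficient between simulation and specification.
r = np.corrcoef(spec, sim)[0, 1]
print(f"correlation coefficient: {r:.4f}")
```

    A coefficient above 0.99, as the abstract reports for the real model, indicates the simulated source reproduces the specified angular distribution almost exactly up to scale and offset.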

  7. Task scheduling for high performance low power embedded computing

    Science.gov (United States)

    Deniziak, Stanislaw; Dzitkowski, Albert

    2016-12-01

    In this paper we present a method of task scheduling for low-power real-time embedded systems. We assume that the system is specified as a task graph and implemented on a multi-core embedded processor with low-power processing capabilities. We propose a new scheduling method that creates an optimal schedule, where the goal of optimization is to minimize power consumption while satisfying all time constraints. We present experimental results, obtained for standard benchmarks, showing the advantages of our method.
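
    The general shape of such an optimization can be sketched with a toy model: tasks in a dependency graph may each be slowed to a lower-power frequency level, and a slowdown is kept only if the deadline still holds. The task times, frequency/power levels, deadline, and greedy pass below are all invented for illustration; the paper's actual algorithm and benchmarks differ.

```python
# Toy energy-aware scheduling on a task graph (unbounded cores assumed).
tasks = {"a": 2.0, "b": 3.0, "c": 1.0}   # execution times at full speed
deps = {"b": ["a"], "c": ["a"]}           # b and c both depend on a
levels = [(1.0, 4.0), (0.5, 1.5)]         # (speed factor, power draw)
deadline = 9.0
order = ["a", "b", "c"]                   # a topological order of the graph

def makespan(freqs):
    """Finish time of the last task given per-task speed factors."""
    finish = {}
    for t in order:
        start = max((finish[d] for d in deps.get(t, [])), default=0.0)
        finish[t] = start + tasks[t] / freqs[t]
    return max(finish.values())

# Greedy pass: slow each task down if the deadline still holds.
freqs = {t: levels[0][0] for t in order}
for t in order:
    trial = dict(freqs)
    trial[t] = levels[1][0]
    if makespan(trial) <= deadline:
        freqs = trial

power_of = dict(levels)                   # speed factor -> power draw
energy = sum(tasks[t] / freqs[t] * power_of[freqs[t]] for t in order)
print(freqs, f"energy = {energy:.1f}")
```

    In this toy run, tasks a and c are slowed while b stays at full speed, cutting energy relative to running everything at full frequency while still finishing within the deadline.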

  8. Computer simulation of effect of conditions on discharge-excited high power gas flow CO laser

    Science.gov (United States)

    Ochiai, Ryo; Iyoda, Mitsuhiro; Taniwaki, Manabu; Sato, Shunichi

    2017-01-01

    The authors have developed computer simulation codes to analyze the effects of operating conditions on the performance of a discharge-excited high power gas flow CO laser. Six different conditions can be analyzed. The simulation code, described and executed on Macintosh computers, consists of modules that calculate the kinetic processes. The detailed conditions, kinetic processes, results and discussions are presented in this paper.

  9. High Performance Computing - Power Application Programming Interface Specification Version 2.0.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Levenhagen, Michael J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Olivier, Stephen Lecler [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ward, H. Lee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  10. Power grid simulation applications developed using the GridPACK™ high performance computing framework

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Shuangshuang; Chen, Yousu; Diao, Ruisheng; Huang, Zhenyu (Henry); Perkins, William; Palmer, Bruce

    2016-12-01

    This paper describes the GridPACK™ software framework for developing power grid simulations that can run on high performance computing platforms, with several example applications (dynamic simulation, static contingency analysis, and dynamic contingency analysis) that have been developed using GridPACK.

  11. Comparative Implementation of High Performance Computing for Power System Dynamic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Shuangshuang; Huang, Zhenyu; Diao, Ruisheng; Wu, Di; Chen, Yousu

    2017-05-01

    Dynamic simulation for transient stability assessment is one of the most important, but computationally intensive, tasks in power system planning and operation. Present commercial software is mainly designed for sequential computation to run a single simulation, which is very time consuming on a single processor. The application of High Performance Computing (HPC) to dynamic simulations is very promising for accelerating the computing process by parallelizing its kernel algorithms while maintaining the same level of computational accuracy. This paper describes the comparative implementation of four parallel dynamic simulation schemes in two state-of-the-art HPC environments: Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). These implementations serve to match the application with dedicated multi-processor computing hardware and maximize the utilization and benefits of HPC during the development process.
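
    The coarse-grained end of this parallelization, running many independent simulations at once (e.g. one per contingency), can be sketched with a thread pool as a shared-memory analogue of the OpenMP scheme. Here step_simulation is a hypothetical stand-in for the real solver kernel.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a dynamic-simulation kernel: a trivial iteration that
# relaxes toward the fixed point x = 1 (replace with a real integrator).
def step_simulation(contingency_id, steps=10_000):
    x = float(contingency_id)
    for _ in range(steps):
        x = 0.999 * x + 0.001
    return contingency_id, x

# Dispatch independent contingencies to worker threads, collect results.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(step_simulation, range(8)))

print(f"contingency 0 settled at x = {results[0]:.4f}")
```

    In CPython these threads mainly illustrate the dispatch pattern rather than true speedup; the paper's kernels are compiled code, where OpenMP threads (or MPI ranks across nodes) execute genuinely in parallel.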

  12. Nonlinear dynamics of high-power ultrashort laser pulses: exaflop computations on a laboratory computer station and subcycle light bullets

    Science.gov (United States)

    Voronin, A. A.; Zheltikov, A. M.

    2016-09-01

    The propagation of high-power ultrashort light pulses involves intricate nonlinear spatio-temporal dynamics where various spectral-temporal field transformation effects are strongly coupled to the beam dynamics, which, in turn, varies from the leading to the trailing edge of the pulse. Analysis of this nonlinear dynamics, accompanied by spatial instabilities, beam breakup into multiple filaments, and unique phenomena leading to the generation of extremely short optical field waveforms, is equivalent in its computational complexity to a simulation of the time evolution of a few billion-dimensional physical system. Such an analysis requires exaflops of computational operations and is usually performed on high-performance supercomputers. Here, we present methods of physical modeling and numerical analysis that allow problems of this class to be solved on a laboratory computer boosted by a cluster of graphic accelerators. Exaflop computations performed with the application of these methods reveal new unique phenomena in the spatio-temporal dynamics of high-power ultrashort laser pulses. We demonstrate that unprecedentedly short light bullets can be generated as a part of that dynamics, providing optical field localization in both space and time through a delicate balance between dispersion and nonlinearity with simultaneous suppression of diffraction-induced beam divergence due to the joint effect of Kerr and ionization nonlinearities.

  13. Guest Editorial High Performance Computing (HPC) Applications for a More Resilient and Efficient Power Grid

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Zhenyu Henry; Tate, Zeb; Abhyankar, Shrirang; Dong, Zhaoyang; Khaitan, Siddhartha; Min, Liang; Taylor, Gary

    2017-05-01

    The power grid has been evolving over the last 120 years, but it is seeing more changes in this decade and the next than it has seen over the past century. In particular, the widespread deployment of intermittent renewable generation, smart loads and devices, hierarchical and distributed control technologies, phasor measurement units, energy storage, and widespread usage of electric vehicles will require fundamental changes in methods and tools for the operation and planning of the power grid. The resulting new dynamic and stochastic behaviors will demand the inclusion of more complexity in modeling the power grid. Solving such complex models in the traditional computing environment will be a major challenge. Along with the increasing complexity of power system models, the increasing complexity of smart grid data further adds to the prevailing challenges. As the number of smart sensors and meters in the power grid increases by multiple orders of magnitude, so do the volume and speed of the data. The information infrastructure will need to change drastically to support the exchange of enormous amounts of data, as smart grid applications will need the capability to collect, assimilate, analyze and process the data to meet real-time grid functions. High performance computing (HPC) holds the promise to enhance these functions, but it remains a resource that has not been fully explored and adopted in the power grid domain.

  14. High Performance Power Spectrum Analysis Using a FPGA Based Reconfigurable Computing Platform

    CERN Document Server

    Abhyankar, Yogindra; Agarwal, Yogesh; Subrahmanya, C. R.; Prasad, Peeyush. DOI: 10.1109/RECONF.2006.307786

    2011-01-01

    Power-spectrum analysis is an important tool providing critical information about a signal. The range of applications extends from communication systems to DNA sequencing. If interference is present on a transmitted signal, it could be due to a natural cause or superimposed deliberately. In the latter case, its early detection and analysis become important. In such situations, with only a small observation window, a quick look at the power spectrum can reveal a great deal of information, including the frequency and source of the interference. In this paper, we present our design of an FPGA-based reconfigurable platform for high performance power-spectrum analysis. This allows for real-time data acquisition and processing of samples of the incoming signal in a small time frame. The processing consists of the computation of power, its average and peak, over a set of input values. This platform sustains simultaneous data streams on each of the four input channels.
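
    The per-channel processing chain described, power spectrum plus its average and peak over a set of samples, looks like this in software. The signal, sample rate, and tone frequencies are illustrative: a 50 Hz carrier with a weaker interfering tone at 120 Hz, which the spectrum immediately exposes.

```python
import numpy as np

fs = 1024                                  # sample rate, Hz (illustrative)
t = np.arange(fs) / fs                     # one second of samples
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 120 * t)

# Power spectrum over the observation window, plus average and peak,
# mirroring the per-channel computation done on the FPGA.
spectrum = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

peak_freq = freqs[np.argmax(spectrum)]
avg_power = spectrum.mean()
print(f"peak at {peak_freq:.0f} Hz, average bin power = {avg_power:.3f}")
```

    On hardware the same chain runs as a streaming FFT with running average and peak registers, which is what lets the platform keep up with four channels in real time.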

  15. Analysis of Application Power and Schedule Composition in a High Performance Computing Environment

    Energy Technology Data Exchange (ETDEWEB)

    Elmore, Ryan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gruchalla, Kenny [National Renewable Energy Lab. (NREL), Golden, CO (United States); Phillips, Caleb [National Renewable Energy Lab. (NREL), Golden, CO (United States); Purkayastha, Avi [National Renewable Energy Lab. (NREL), Golden, CO (United States); Wunder, Nick [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-05

    As the capacity of high performance computing (HPC) systems continues to grow, small changes in energy management have the potential to produce significant energy savings. In this paper, we employ an extensive informatics system for aggregating and analyzing real-time performance and power use data to evaluate energy footprints of jobs running in an HPC data center. We look at the effects of algorithmic choices for a given job on the resulting energy footprints, and analyze application-specific power consumption, and summarize average power use in the aggregate. All of these views reveal meaningful power variance between classes of applications as well as chosen methods for a given job. Using these data, we discuss energy-aware cost-saving strategies based on reordering the HPC job schedule. Using historical job and power data, we present a hypothetical job schedule reordering that: (1) reduces the facility's peak power draw and (2) manages power in conjunction with a large-scale photovoltaic array. Lastly, we leverage this data to understand the practical limits on predicting key power use metrics at the time of submission.
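
    The peak-reduction idea can be illustrated with a toy reordering, assuming jobs run two at a time and the goal is to avoid co-scheduling two high-power jobs. The job power numbers are invented for illustration, not NREL data, and real schedulers must also respect queue order, node counts, and runtimes.

```python
# Jobs with known average power draws (kW), to be run two at a time.
job_power = [90, 70, 20, 40, 60, 30]
slots = 3

# Reordered schedule: pair the highest-power remaining job with the
# lowest-power one, flattening the facility's peak draw.
ordered = sorted(job_power, reverse=True)
pairs = [(ordered[i], ordered[-1 - i]) for i in range(slots)]
peak = max(a + b for a, b in pairs)

# Naive schedule: run jobs in submission order, two per slot.
naive_pairs = [(job_power[2 * i], job_power[2 * i + 1]) for i in range(slots)]
naive_peak = max(a + b for a, b in naive_pairs)

print(f"naive peak {naive_peak} kW -> reordered peak {peak} kW")
```

    The same pairing logic extends naturally to the paper's photovoltaic case: shift the high-power pairs into the hours when the array is producing, so the grid-facing peak is what gets flattened.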

  16. High Performance Computing - Power Application Programming Interface Specification Version 1.4

    Energy Technology Data Exchange (ETDEWEB)

    Laros III, James H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); DeBonis, David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kelly, Suzanne M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Levenhagen, Michael J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Olivier, Stephen Lecler [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  17. CATARACT: Computer code for improving power calculations at NREL's high-flux solar furnace

    Science.gov (United States)

    Scholl, K.; Bingham, C.; Lewandowski, A.

    1994-01-01

    The High-Flux Solar Furnace (HFSF), operated by the National Renewable Energy Laboratory, uses a camera-based, flux-mapping system to analyze the distribution and to determine total power at the focal point. The flux-mapping system consists of a diffusively reflecting plate with seven circular foil calorimeters, a charge-coupled device (CCD) camera, an IBM-compatible personal computer with a frame-grabber board, and commercial image analysis software. The calorimeters provide flux readings that are used to scale the image captured from the plate by the camera. The image analysis software can estimate total power incident on the plate by integrating under the 3-dimensional image. Because of the physical layout of the HFSF, the camera is positioned at a 20° angle to the flux mapping plate normal. The foreshortening of the captured images that results represents a systematic error in the power calculations because the software incorrectly assumes the image is parallel to the camera's array. We have written a FORTRAN computer program called CATARACT (camera/target angle correction) that we use to transform the original flux-mapper image to a plane that is normal to the camera's optical axis. A description of the code and the results of experiments performed to verify it are presented. Also presented are comparisons of the total power available from the HFSF as determined from the flux mapping system and theoretical considerations.
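
    A first-order sketch of the systematic error CATARACT corrects: a plate viewed at angle θ off its normal is foreshortened by cos θ along one axis, so naive integration over the raw image underestimates total power by that factor. The real code applies a full plane-to-plane image transform, and the power value here is illustrative.

```python
import math

theta = math.radians(20)                 # camera angle off plate normal
true_power = 10.0                        # incident power on the plate (illustrative)

# Foreshortening shrinks the apparent plate area by cos(theta), so the
# naive image integral sees proportionally less power.
measured = true_power * math.cos(theta)

# First-order correction: divide the foreshortening factor back out.
corrected = measured / math.cos(theta)
print(f"raw {measured:.2f} -> corrected {corrected:.2f}")
```

    At 20° the cosine factor is about 0.94, i.e. a roughly 6% systematic underestimate, which is why correcting it mattered for the HFSF power calibration.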

  18. GridPACK™ : A Framework for Developing Power Grid Simulations on High-Performance Computing Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Palmer, Bruce J.; Perkins, William A.; Chen, Yousu; Jin, Shuangshuang; Callahan, David; Glass, Kevin A.; Diao, Ruisheng; Rice, Mark J.; Elbert, Stephen T.; Vallem, Mallikarjuna R.; Huang, Zhenyu

    2016-05-01

    This paper describes the GridPACK™ framework, which is designed to help power grid engineers develop modeling software capable of running on high performance computers. The framework makes extensive use of software templates to provide high level functionality while at the same time allowing developers the freedom to express whatever models and algorithms they are using. GridPACK™ contains modules for setting up distributed power grid networks, assigning buses and branches with arbitrary behaviors to the network, creating distributed matrices and vectors and using parallel linear and non-linear solvers to solve algebraic equations. It also provides mappers to create matrices and vectors based on properties of the network and functionality to support IO and to mana

  19. A computer control system for the PNC high power cw electron linac. Concept and hardware

    Energy Technology Data Exchange (ETDEWEB)

    Emoto, T.; Hirano, K.; Takei, Hayanori; Nomura, Masahiro; Tani, S. [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center; Kato, Y.; Ishikawa, Y.

    1998-06-01

    Design and construction of a high power cw (Continuous Wave) electron linac for studying the feasibility of nuclear waste transmutation was started in 1989 at PNC. The PNC accelerator (10 MeV, 20 mA average current, 4 ms pulse width, 50 Hz repetition) is a dedicated machine for developing the high current acceleration technology needed in the future. The computer control system is responsible for accelerator control and for supporting the experiments on high power operation. Key features of the system are simultaneous measurement of the accelerator status and modularity of software and hardware, so that the system is easily modified or expanded. A high speed network (SCRAMNet, ~15 MB/s), Ethernet, and front end processors (Digital Signal Processors) were employed for high speed data taking and control. The system was designed around standard modules and a software-implemented man-machine interface. Thanks to the graphical user interface and object-oriented programming, programming and maintenance in this development environment are straightforward. (author)

  20. Platform computing powers enterprise grid

    CERN Multimedia

    2002-01-01

    Platform Computing, today announced that the Stanford Linear Accelerator Center is using Platform LSF 5, to carry out groundbreaking research into the origins of the universe. Platform LSF 5 will deliver the mammoth computing power that SLAC's Linear Accelerator needs to process the data associated with intense high-energy physics research (1 page).

  1. Powered Tate Pairing Computation

    Science.gov (United States)

    Kang, Bo Gyeong; Park, Je Hong

    In this letter, we provide a simple proof of bilinearity for the eta pairing. Based on it, we show an efficient method to compute the powered Tate pairing as well. Although the efficiency of our method is equivalent to that of the eta pairing approach to the Tate pairing, ours is more general in principle.

  2. A high performance, low power computational platform for complex sensing operations in smart cities

    KAUST Repository

    Jiang, Jiming

    2017-02-02

    This paper presents a new wireless platform designed for an integrated traffic/flash flood monitoring system. The sensor platform is built around a 32-bit ARM Cortex M4 microcontroller and a 2.4 GHz 802.15.4 ISM compliant radio module. It can be interfaced with fixed traffic sensors, or receive data from vehicle transponders. This platform is specifically designed for solar-powered, low bandwidth, high computational performance wireless sensor network applications. A self-recovering unit is designed to increase reliability and allow periodic hard resets, an essential requirement for sensor networks. Radio monitoring circuitry is proposed to monitor incoming and outgoing transmissions, simplifying software debugging. We illustrate the performance of this wireless sensor platform on complex problems arising in smart cities, such as traffic flow monitoring, machine-learning-based flash flood monitoring and Kalman-filter-based vehicle trajectory estimation. All design files have been uploaded and shared in an open science framework, and can be accessed from [1]. The hardware design is under CERN Open Hardware License v1.2.

  3. Computer Simulation Of A CO2 High Power Laser With Folded Resonator

    Science.gov (United States)

    Meisterhofer, E.; Lippitsch, M. E.

    1984-03-01

    Based on the iterative solution of a generalized Kirchhoff-Fresnel integral equation we have developed a computer model for realistic simulation of arbitrary linear or folded resonators. With known parameters of the active medium (small signal gain, saturation intensity, volume) we can determine the optimal parameters for the resonator (e.g. out-put mirror transmission, radius of curvature of mirrors, diameter and place of diaphragms, length of resonator) to get highest output power with a certain mode pattern. The model is tested for linear as well as folded resonators.

  4. Piezoelectronics: a novel, high-performance, low-power computer switching technology

    Science.gov (United States)

    Newns, D. M.; Martyna, G. J.; Elmegreen, B. G.; Liu, X.-H.; Theis, T. N.; Trolier-McKinstry, S.

    2012-06-01

    Current switching speeds in CMOS technology have saturated since 2003 due to power constraints arising from the inability of line voltage to be lowered further in CMOS below about 1V. We are developing a novel switching technology based on piezoelectrically transducing the input or gate voltage into an acoustic wave which compresses a piezoresistive (PR) material forming the device channel. Under pressure the PR undergoes an insulator-to-metal transition which makes the channel conducting, turning on the device. A piezoelectric (PE) transducer material with a high piezoelectric coefficient, e.g. a domain-engineered relaxor piezoelectric, is needed to achieve low voltage operation. Suitable channel materials manifesting a pressure-induced metal-insulator transition can be found amongst rare earth chalcogenides, transition metal oxides, etc. Mechanical requirements include a high PE/PR area ratio to step up pressure, a rigid surround material to constrain the PE and PR external boundaries normal to the strain axis, and a void space to enable free motion of the component side walls. Using static mechanical modeling and dynamic electroacoustic simulations, we optimize device structure and materials and predict performance. The device, termed a PiezoElectronic Transistor (PET), can be used to build complete logic circuits including inverters, flip-flops, and gates. This "Piezotronic" logic is predicted to have a combination of low power and high speed operation.

  5. Fundamental algorithm and computational codes for the light beam propagation in high power laser system

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The fundamental algorithm of light beam propagation in high power laser systems is investigated and the corresponding computational codes are given. It is shown that the number of modulation rings due to diffraction is related to the size of the pinhole in the spatial filter (in terms of the times of diffraction limitation, i.e. TDL) and the Fresnel number of the laser system; for a complex laser system with multiple spatial filters and free space, the system can be investigated by the reciprocal rule of operators.
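
    For reference, the Fresnel number the abstract invokes is N_F = a²/(λL) for an aperture of radius a, wavelength λ, and propagation distance L. A quick computation with illustrative values (the wavelength is the common Nd:glass line; the aperture and distance are invented):

```python
wavelength = 1.053e-6   # m, Nd:glass laser line (illustrative choice)
aperture_radius = 0.1   # m (illustrative)
distance = 20.0         # m propagation path (illustrative)

# Fresnel number: a^2 / (lambda * L). Large N_F means near-field
# propagation, where diffraction ring structure is pronounced.
fresnel_number = aperture_radius**2 / (wavelength * distance)
print(f"Fresnel number N_F = {fresnel_number:.0f}")
```

    Large Fresnel numbers like this one put high power laser chains deep in the near field, which is why ring-count bookkeeping against the spatial-filter pinhole size matters in these codes.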

  6. High-power graphic computers for visual simulation: a real-time--rendering revolution

    Science.gov (United States)

    Kaiser, M. K.

    1996-01-01

    Advances in high-end graphics computers in the past decade have made it possible to render visual scenes of incredible complexity and realism in real time. These new capabilities make it possible to manipulate and investigate the interactions of observers with their visual world in ways once only dreamed of. This paper reviews how these developments have affected two preexisting domains of behavioral research (flight simulation and motion perception) and have created a new domain (virtual environment research) which provides tools and challenges for the perceptual psychologist. Finally, the current limitations of these technologies are considered, with an eye toward how perceptual psychologists might shape future developments.

  7. Computational fluid dynamic modeling of gas flow characteristics of the high-power CW CO2 laser

    Institute of Scientific and Technical Information of China (English)

    Hongyau Huang; Youqing Wang

    2011-01-01

    To increase the photoelectronic conversion efficiency of the single discharge tube and to meet the requirements of the laser cutting system, optimization of the discharge tube structure and gas flow field is necessary. We present a computational fluid dynamic model to predict the gas flow characteristics of a high-power fast-axial-flow CO2 laser. A set of differential equations is used to describe the operation of the laser, and the gas flow characteristics are calculated. The effects of gas velocity and turbulence intensity on discharge stability are studied. Computational results are compared with experimental values, and good agreement is observed. The method presented and the results obtained can make the design process more efficient.

  8. Power-efficient computer architectures recent advances

    CERN Document Server

    Själander, Magnus; Kaxiras, Stefanos

    2014-01-01

    As Moore's Law and Dennard scaling trends have slowed, the challenges of building high-performance computer architectures while maintaining acceptable power efficiency levels have heightened. Over the past ten years, architecture techniques for power efficiency have shifted from primarily focusing on module-level efficiencies, toward more holistic design styles based on parallelism and heterogeneity. This work highlights and synthesizes recent techniques and trends in power-efficient computer architecture.Table of Contents: Introduction / Voltage and Frequency Management / Heterogeneity and Sp

  9. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    Science.gov (United States)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill; Feiereisen, William (Technical Monitor)

    2000-01-01

    The term "Grid" refers to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks that will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focused primarily on two types of users: the scientist / design engineer whose primary interest is problem solving (e.g., determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks; and the tool designer: the computational scientist who converts physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. This paper describes the current state of IPG (the operational testbed), the set of capabilities being put into place for the operational prototype IPG, as well as some of the longer term R&D tasks.

  10. Power throttling of collections of computing elements

    Science.gov (United States)

    Bellofatto, Ralph E.; Coteus, Paul W.; Crumley, Paul G.; Gara, Alan G.; Giampapa, Mark E.; Gooding, Thomas M.; Haring, Rudolf A.; Megerian, Mark G.; Ohmacht, Martin; Reed, Don D.; Swetz, Richard A.; Takken, Todd

    2011-08-16

    An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.
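
    The control scheme the abstract describes (sensors report per-computer power draw, a control device limits usage) can be sketched as a simple proportional cap. The function, node names, and wattage values below are illustrative assumptions for this sketch, not details from the patent.

```python
# Hypothetical sketch of collection-level power throttling: scale every
# node's power cap down proportionally when the aggregate draw exceeds a
# global budget. All names and numbers are invented for illustration.

def throttle(readings_w, budget_w, min_cap_w=50.0):
    """Return a per-node power cap (W) keeping total power within budget_w.

    readings_w: dict mapping node name -> measured power draw in watts.
    """
    total = sum(readings_w.values())
    if total <= budget_w:
        # Under budget: cap each node at its current draw (no throttling).
        return dict(readings_w)
    # Over budget: scale all nodes down proportionally, but never below a
    # minimum cap that keeps each node responsive.
    scale = budget_w / total
    return {node: max(p * scale, min_cap_w) for node, p in readings_w.items()}

caps = throttle({"n0": 400.0, "n1": 300.0, "n2": 300.0}, budget_w=800.0)
print(caps)
```

    A real controller would enforce the caps through hardware knobs (frequency and voltage scaling) rather than returning abstract per-node limits.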

  11. Computational intelligence in power engineering

    Energy Technology Data Exchange (ETDEWEB)

    Panigrahi, Bijaya Ketan [Indian Institute of Technology, New Delhi (India). Dept. of Electronical Engineering; Abraham, Ajith [Norwegian Univ. of Science and Technology, Trondheim (Norway). Center of Excellence for Quantifiable Quality of Service; Das, Swagatam (eds.) [Jadavpur Univ. Calcutta (IN). Dept. of Electronics and Telecommunication Engineering (ETCE)

    2010-07-01

    Computational Intelligence (CI) is one of the most powerful tools for research in the diverse fields of the engineering sciences, ranging from the traditional fields of civil and mechanical engineering to vast sections of electrical, electronics and computer engineering, and above all the biological and pharmaceutical sciences. The field has its origin in the functioning of the human brain in processing information, recognizing patterns, learning from observations and experiments, and storing and retrieving information from memory. In particular, with the power industry on the verge of epochal change due to deregulation, power engineers require computational intelligence tools for proper planning, operation and control of the power system. Most CI tools are suitably formulated as some sort of optimization or decision making problem. These CI techniques provide the power utilities with innovative solutions for efficient analysis, optimal operation and control, and intelligent decision making. This edited volume deals with different CI techniques for solving real world power industry problems. The technical contents will be extremely helpful for researchers as well as practicing engineers in the power industry. (orig.)

  12. Kinetic modeling of a high power fast-axial-flow CO2 laser with computational fluid dynamics method

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A new computational fluid dynamics (CFD) method for the simulation of fast-axial-flow CO2 lasers is developed. The model, which is solved by CFD software, uses a set of dynamic differential equations to describe the dynamic process in one discharge tube. The velocity, temperature, pressure and turbulence energy distributions in the discharge passage are presented. There is good agreement between the theoretical predictions and the experimental results. This result indicates that the parameters of the laser have a significant effect on the flow distribution in the discharge passage. It is helpful to optimize the output of a high power CO2 laser by mastering its kinetic characteristics.

  13. Computer Aided Modeling and Analysis of Five-Phase PMBLDC Motor Drive for Low Power High Torque Application

    Directory of Open Access Journals (Sweden)

    M. A. Inayathullaah

    2014-01-01

    In order to achieve high torque at low power with high efficiency, a new five-phase permanent magnet brushless DC (PMBLDC) motor design was analyzed and optimized. A similar three-phase motor having the same D/L ratio (inner diameter (D) and length of the stator (L)) is compared against the designed five-phase PMBLDC motor for maximum torque and torque ripple. Maxwell software was used to build a finite element simulation model of the motor. The internal complicated magnetic field distribution and dynamic performance simulation were obtained at different positions. No-load and load characteristics of the five-phase PMBLDC motor were simulated, and the power consumption of materials was computed. The conformity of the final simulation results indicates that this method can be used to provide a theoretical basis for further optimal design of this new type of motor with its drive, so as to improve the starting torque and reduce the torque ripple of the motor.

  14. Changing computing paradigms towards power efficiency.

    Science.gov (United States)

    Klavík, Pavel; Malossi, A Cristiano I; Bekas, Costas; Curioni, Alessandro

    2014-06-28

    Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications.
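
    The combination of low- and high-precision arithmetic for linear systems that the abstract describes is classically realized as mixed-precision iterative refinement: solve cheaply in single precision, then correct the residual in double precision. The sketch below illustrates that generic technique with NumPy; it is not the authors' implementation, and the matrix is an invented well-conditioned example.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Solve Ax = b by float32 solves refined with float64 residuals."""
    A32 = A.astype(np.float32)
    # Initial cheap solve entirely in single precision.
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                     # residual in float64
        dx = np.linalg.solve(A32, r.astype(np.float32))   # cheap correction
        x = x + dx.astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100)) + 100 * np.eye(100)   # well conditioned
x_true = rng.standard_normal(100)
b = A @ x_true
x = mixed_precision_solve(A, b)
print(np.max(np.abs(x - x_true)))  # error near float64 accuracy
```

    For well-conditioned systems, each refinement step shrinks the error by roughly the single-precision unit roundoff times the condition number, so a few iterations recover double-precision accuracy while the expensive factorization work stays in the cheaper, lower-power format.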

  15. Cambridge-Cranfield High Performance Computing Facility (HPCF) purchases ten Sun Fire(TM) 15K servers to dramatically increase power of eScience research

    CERN Multimedia

    2002-01-01

    "The Cambridge-Cranfield High Performance Computing Facility (HPCF), a collaborative environment for data and numerical intensive computing privately run by the University of Cambridge and Cranfield University, has purchased 10 Sun Fire(TM) 15K servers from Sun Microsystems, Inc. The total investment, which includes more than $40 million in Sun technology, will dramatically increase the computing power, reliability, availability and scalability of the HPCF" (1 page).

  16. Re-Form: FPGA-Powered True Codesign Flow for High-Performance Computing In The Post-Moore Era

    Energy Technology Data Exchange (ETDEWEB)

    Cappello, Franck; Yoshii, Kazutomo; Finkel, Hal; Cong, Jason

    2016-11-14

    Multicore scaling will end soon because of practical power limits. Dark silicon is becoming an even greater issue than the end of Moore's law. In the post-Moore era, the energy efficiency of computing will be a major concern. FPGAs could be a key to maximizing energy efficiency. In this paper we address severe challenges in the adoption of FPGAs in HPC and describe "Re-form," an FPGA-powered codesign flow.

  17. Leveraging the power of high performance computing for next generation sequencing data analysis: tricks and twists from a high throughput exome workflow.

    Science.gov (United States)

    Kawalia, Amit; Motameny, Susanne; Wonczak, Stephan; Thiele, Holger; Nieroda, Lech; Jabbari, Kamel; Borowski, Stefan; Sinha, Vishal; Gunia, Wilfried; Lang, Ulrich; Achter, Viktor; Nürnberg, Peter

    2015-01-01

    Next generation sequencing (NGS) has been a great success and is now a standard method of research in the life sciences. With this technology, dozens of whole genomes or hundreds of exomes can be sequenced in rather short time, producing huge amounts of data. Complex bioinformatics analyses are required to turn these data into scientific findings. In order to run these analyses fast, automated workflows implemented on high performance computers are state of the art. While providing sufficient compute power and storage to meet the NGS data challenge, high performance computing (HPC) systems require special care when utilized for high throughput processing. This is especially true if the HPC system is shared by different users. Here, stability, robustness and maintainability are as important for automated workflows as speed and throughput. To achieve all of these aims, dedicated solutions have to be developed. In this paper, we present the tricks and twists that we utilized in the implementation of our exome data processing workflow. It may serve as a guideline for other high throughput data analysis projects using a similar infrastructure. The code implementing our solutions is provided in the supporting information files.

  19. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    Science.gov (United States)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) A comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems.
    Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to meteorological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation system); (3

  1. Power-aware applications for scientific cluster and distributed computing

    CERN Document Server

    Abdurachmanov, David; Eulisse, Giulio; Grosso, Paola; Hillegas, Curtis; Holzman, Burt; Klous, Sander; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    The aggregate power use of computing hardware is an important cost factor in scientific cluster and distributed computing systems. The Worldwide LHC Computing Grid (WLCG) is a major example of such a distributed computing system, used primarily for high throughput computing (HTC) applications. It has a computing capacity and power consumption rivaling that of the largest supercomputers. The computing capacity required from this system is also expected to grow over the next decade. Optimizing the power utilization and cost of such systems is thus of great interest. A number of trends currently underway will provide new opportunities for power-aware optimizations. We discuss how power-aware software applications and scheduling might be used to reduce power consumption, both as autonomous entities and as part of a (globally) distributed system. As concrete examples of computing centers we provide information on the large HEP-focused Tier-1 at FNAL, and the Tigress High Performance Computing Center at Princeton U...

  2. Computing power on the move

    CERN Multimedia

    Joannah Caborn Wengler

    2012-01-01

    You might sit right next to your computer as you work, use the GRID’s computing power sitting in another part of the world or share CPU time with the Cloud: actual and virtual machines communicate and exchange information, and the place where they are located is a detail of only marginal importance. CERN’s new remote computer centre will open in Hungary in 2013.   Artist's impression of the new Wigner Data Centre. (Image: Wigner). CERN’s computing department has been aiming to minimise human contact with the machines for a while now. “The problem is that people going in creates dust, and simply touching things may cause damage,” explains Wayne Salter, Leader of the IT Computing Facilities Group. A first remote centre on the other side of Geneva was opened in June 2010 and a new one will open in Hungary next year. “Once the centre in Budapest is running, we will not be going there to operate it. As far as possible, w...

  3. Research and implementation of power supply for a high-performance computer system

    Institute of Scientific and Technical Information of China (English)

    姚信安; 宋飞; 胡世平

    2013-01-01

    To meet the power supply requirements of a high-performance computer system, a 12 V DC bus distributed power system is adopted in this paper, and the power supply schemes of the computing cabinet and the computing motherboard are described. The power supply for the processor on the computing motherboard is analyzed in detail, and the parameter design methods for loop gain, compensation network and output filter are presented. The application results show that this power supply fully meets the power supply requirements of the high-performance computer system.

  4. Computations of longitudinal electron dynamics in the recirculating cw RF accelerator-recuperator for the high average power FEL

    Science.gov (United States)

    Sokolov, A. S.; Vinokurov, N. A.

    1994-03-01

    The use of optimal longitudinal phase-energy motion conditions for bunched electrons in a recirculating RF accelerator gives the possibility to increase the final electron peak current and, correspondingly, the FEL gain. The computer code RECFEL, developed for simulations of the longitudinal compression of electron bunches with high average current, essentially loading the cw RF cavities of the recirculator-recuperator, is briefly described and illustrated by some computational results.

  5. High Power Factor Power Design

    Directory of Open Access Journals (Sweden)

    Zhang Jing-yi

    2013-07-01

    The PFC circuit takes the UCC28019 made by TI as the core of system control to realize the power factor correction function, and the circuit's power factor can be measured through a variety of detection circuits with the support of SCM control. An output voltage of 30V~36V can be set and regulated; over-current protection is included, with automatic recovery. Output current, voltage and other measured values are shown by display modules.
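
    The quantity a PFC design like this improves, power factor, is the ratio of average real power to the product of RMS voltage and RMS current. A minimal sketch of measuring it from sampled waveforms (the sample count and the 60-degree phase lag below are illustrative, not from the paper):

```python
import math

def power_factor(v_samples, i_samples):
    """Power factor = average real power / (Vrms * Irms)."""
    n = len(v_samples)
    p_avg = sum(v * i for v, i in zip(v_samples, i_samples)) / n
    v_rms = math.sqrt(sum(v * v for v in v_samples) / n)
    i_rms = math.sqrt(sum(i * i for i in i_samples) / n)
    return p_avg / (v_rms * i_rms)

# One full cycle, current lagging voltage by 60 degrees: PF = cos(60 deg).
n = 1000
v = [math.sin(2 * math.pi * k / n) for k in range(n)]
i = [math.sin(2 * math.pi * k / n - math.pi / 3) for k in range(n)]
print(round(power_factor(v, i), 3))  # 0.5
```

    A correction stage reshapes the current so it tracks the voltage waveform, pushing this ratio toward 1.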

  6. Final report for "High performance computing for advanced national electric power grid modeling and integration of solar generation resources", LDRD Project No. 149016.

    Energy Technology Data Exchange (ETDEWEB)

    Reno, Matthew J.; Riehm, Andrew Charles; Hoekstra, Robert John; Munoz-Ramirez, Karina; Stamp, Jason Edwin; Phillips, Laurence R.; Adams, Brian M.; Russo, Thomas V.; Oldfield, Ron A.; McLendon, William Clarence, III; Nelson, Jeffrey Scott; Hansen, Clifford W.; Richardson, Bryan T.; Stein, Joshua S.; Schoenwald, David Alan; Wolfenbarger, Paul R.

    2011-02-01

    Design and operation of the electric power grid (EPG) relies heavily on computational models. High-fidelity, full-order models are used to study transient phenomena on only a small part of the network. Reduced-order dynamic and power flow models are used when analyses involving thousands of nodes are required, due to the computational demands of simulating large numbers of nodes. The level of complexity of the future EPG will dramatically increase due to large-scale deployment of variable renewable generation, active load and distributed generation resources, adaptive protection and control systems, and price-responsive demand. High-fidelity modeling of this future grid will require significant advances in coupled, multi-scale tools and their use on high performance computing (HPC) platforms. This LDRD report demonstrates SNL's capability to apply HPC resources to three tasks: (1) high-fidelity, large-scale modeling of power system dynamics; (2) statistical assessment of grid security via Monte-Carlo simulations of cyber attacks; and (3) development of models to predict the variability of solar resources at locations where little or no ground-based measurement is available.
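
    A toy sketch in the spirit of task (2): sample random compromise scenarios and estimate the fraction that leave the grid short of load. The unit capacities, load level, compromise probability, and the simple shortfall criterion are all invented for illustration; the SNL study used far richer grid and attack models.

```python
import random

def estimate_shortfall_probability(gen_capacities_mw, load_mw,
                                   p_compromise, trials=20000, seed=1):
    """Monte-Carlo estimate of P(surviving generation < load) when each
    generating unit is independently knocked out with probability
    p_compromise."""
    rng = random.Random(seed)
    shortfalls = 0
    for _ in range(trials):
        surviving = sum(c for c in gen_capacities_mw
                        if rng.random() >= p_compromise)
        if surviving < load_mw:
            shortfalls += 1
    return shortfalls / trials

gens = [500, 500, 300, 300, 200, 200]   # unit capacities in MW, 2000 total
prob = estimate_shortfall_probability(gens, load_mw=1500, p_compromise=0.1)
print(prob)
```

    The same sampling skeleton scales naturally on HPC platforms, since trials are independent and trivially parallel.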

  7. High Performance Computing on a Fast-Lock Delay Locked Loop with Low Power State and Simultaneous Switching Noise Reduction

    Directory of Open Access Journals (Sweden)

    V. J.S. Kumar

    2012-01-01

    Problem statement: In any multimedia processor, the controller may consume most of the on-chip memory resources. The memory requirement depends directly on the algorithm shared by different blocks, which can lead to failures in the system models. Approach: This study presents the implementation of a DLL unit used for memory optimization. Various aspects of the underlying coarse lock detector are explored and modifications are made against a software reference implementation. The whole system is implemented in 0.18 μm CMOS technology, where an input reference clock to an outgoing data clock is monitored and true locking is initialized with 50% duty cycle correction. Results: From the measured DLL operation, the output clock jitter is analysed. Power consumption of the DLL, including the large output buffer, is about a few mW. Conclusion: The great challenge in this implementation is communication bandwidth, which has motivated the process variation and power state reduction techniques. In addition, inefficiency of computing capacity and simultaneous switching noise are reduced in real-time applications.

  8. High power fiber lasers

    Institute of Scientific and Technical Information of China (English)

    LOU Qi-hong; ZHOU Jun

    2007-01-01

    In this review article, the development of the double cladding optical fiber for high power fiber lasers is reviewed. The main technologies for high power fiber lasers, including laser diode beam shaping, fiber laser pumping techniques, and amplification systems, are discussed in detail. 1050 W CW output and 133 W pulsed output are obtained in Shanghai Institute of Optics and Fine Mechanics, China. Finally, the applications of fiber lasers in industry are also reviewed.

  9. Resonant High Power Combiners

    CERN Document Server

    Langlois, Michel; Peillex-Delphe, Guy

    2005-01-01

    Particle accelerators need radio frequency sources. Above 300 MHz, the amplifiers used have mostly been high power klystrons developed for this sole purpose. As with military equipment, users are drawn to buy "off the shelf" components rather than dedicated devices. IOTs have replaced most klystrons in TV transmitters and are finding their way into particle accelerators. They are less bulky, easier to replace, and more efficient at reduced power. They are also far less powerful. What is the benefit of very compact sources if huge 3 dB couplers are needed to combine the power? To alleviate this drawback, we investigated a resonant combiner, operating in the TM010 mode, able to combine 3 to 5 IOTs. Our IOTs being able to deliver 80 kW C.W. apiece, the combined power would reach 400 kW minus the minor insertion loss. Values for matching and insertion loss are given. The behavior of the system in case of IOT failure is analyzed.

  10. High power microwaves

    CERN Document Server

    Benford, James; Schamiloglu, Edl

    2016-01-01

    Following in the footsteps of its popular predecessors, High Power Microwaves, Third Edition continues to provide a wide-angle, integrated view of the field of high power microwaves (HPMs). This third edition includes significant updates in every chapter as well as a new chapter on beamless systems that covers nonlinear transmission lines. Written by an experimentalist, a theorist, and an applied theorist, respectively, the book offers complementary perspectives on different source types. The authors address: * How HPM relates historically and technically to the conventional microwave field * The possible applications for HPM and the key criteria that HPM devices have to meet in order to be applied * How high power sources work, including their performance capabilities and limitations * The broad fundamental issues to be addressed in the future for a wide variety of source types The book is accessible to several audiences. Researchers currently in the field can widen their understanding of HPM. Present or pot...

  11. Computation of bicycle wheel power

    Institute of Scientific and Technical Information of China (English)

    尚寿亭; 吴龙; 薛立军; 徐吉杰

    2001-01-01

    Presents a model of the drag resistance to be overcome, discusses the equations used for calculating the power and force of spoked and solid wheels, gives a table of power output under a given condition for comparison of the two types of wheels, and suggests a scheme to estimate power on a specific track; the speed and the time spent on a certain track are compared to illustrate the effects of the parameters.
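
    The core of such a calculation is that riding power equals the sum of the resistive forces times ground speed. The sketch below uses the standard aerodynamic-drag plus rolling-resistance form with typical illustrative coefficients; these numbers are assumptions, not values from the paper.

```python
# Power to overcome aerodynamic drag and rolling resistance on a flat
# road with no wind: P = (0.5 * rho * CdA * v^2 + Crr * m * g) * v.
# Coefficient values below are typical illustrative choices.

RHO = 1.225   # air density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def riding_power(v, cd_a=0.30, crr=0.004, mass=85.0):
    """Riding power in watts at ground speed v (m/s)."""
    f_aero = 0.5 * RHO * cd_a * v ** 2   # aerodynamic drag force, N
    f_roll = crr * mass * G              # rolling resistance force, N
    return (f_aero + f_roll) * v

print(round(riding_power(10.0), 1))  # power at 10 m/s (36 km/h)
```

    Since the drag term grows with the cube of speed, small reductions in the effective drag area (e.g., a solid versus spoked wheel) matter most at race speeds.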

  12. Framework Resources Multiply Computing Power

    Science.gov (United States)

    2010-01-01

    As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California, (now FusionGeo Inc., of The Woodlands, Texas) to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.

  13. Switching power converters medium and high power

    CERN Document Server

    Neacsu, Dorin O

    2013-01-01

    An examination of all of the multidisciplinary aspects of medium- and high-power converter systems, including basic power electronics, digital control and hardware, sensors, analog preprocessing of signals, protection devices and fault management, and pulse-width-modulation (PWM) algorithms, Switching Power Converters: Medium and High Power, Second Edition discusses the actual use of industrial technology and its related subassemblies and components, covering facets of implementation otherwise overlooked by theoretical textbooks. The updated Second Edition contains many new figures, as well as

  14. Computer-aided power systems analysis

    CERN Document Server

    Kusic, George

    2008-01-01

    Computer applications yield more insight into system behavior than is possible by using hand calculations on system elements. Computer-Aided Power Systems Analysis: Second Edition is a state-of-the-art presentation of basic principles and software for power systems in steady-state operation. Originally published in 1985, this revised edition explores power systems from the point of view of the central control facility. It covers the elements of transmission networks, bus reference frame, network fault and contingency calculations, power flow on transmission networks, generator base power setti

  15. High Power Cryogenic Targets

    Energy Technology Data Exchange (ETDEWEB)

    Gregory Smith

    2011-08-01

    The development of high power cryogenic targets for use in parity violating electron scattering has been a crucial ingredient in the success of those experiments. As we chase the precision frontier, the demands and requirements for these targets have grown accordingly. We discuss the state of the art, and describe recent developments and strategies in the design of the next generation of these targets.

  16. High-frequency power within the QRS complex in ischemic cardiomyopathy patients with ventricular arrhythmias: Insights from a clinical study and computer simulation of cardiac fibrous tissue.

    Science.gov (United States)

    Tsutsumi, Takeshi; Okamoto, Yoshiwo; Takano, Nami; Wakatsuki, Daisuke; Tomaru, Takanobu; Nakajima, Toshiaki

    2017-08-01

    The distribution of frequency power (DFP) within the QRS complex (QRS) is unclear. This study aimed to investigate the DFP within the QRS in ischemic cardiomyopathy (ICM) with lethal ventricular arrhythmias (L-VA). A computer simulation was performed to explore the mechanism of abnormal frequency power. The study included 31 ICM patients with and without L-VA (n = 10 and 21, respectively). We applied the continuous wavelet transform to measure the time-frequency power within the QRS. Integrated time-frequency power (ITFP) was measured within the frequency range of 5-300 Hz. The simulation model consisted of two-dimensional myocardial tissue intermingled with fibroblasts. We examined the relation between the frequency power calculated from the simulated QRS and the fibroblast-to-myocyte ratio (r) of the model. The frequency powers significantly increased from 180 to 300 Hz and from 5 to 15 Hz, and decreased from 45 to 80 Hz, in patients with ICM and L-VA compared with normal individuals. They increased from 110 to 250 Hz in ICM alone. In the simulation, the high-frequency power increased when the ratio (r) was 2.0-2.5. Functional reentry was initiated if the ratio (r) increased to 2.0. Abnormal higher-frequency power (180-300 Hz) may provide arrhythmogenic signals in ICM with L-VA that may be associated with fibrous tissue proliferation. Copyright © 2017 Elsevier Ltd. All rights reserved.
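
    The study integrates time-frequency power within bands such as 5-15 Hz and 180-300 Hz inside the QRS. As a simplified stand-in for the paper's continuous-wavelet analysis, the sketch below estimates band power from the FFT of a sampled signal; the test signal and its component amplitudes are illustrative assumptions.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Sum of FFT power in the band [f_lo, f_hi] Hz (one-sided spectrum)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[mask].sum()

fs = 1000.0                       # sampling rate, Hz
t = np.arange(1000) / fs          # one second of samples
# Toy signal: a strong 10 Hz component plus a weak 200 Hz component.
sig = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 200 * t)
low = band_power(sig, fs, 5, 15)      # captures the 10 Hz component
high = band_power(sig, fs, 180, 300)  # captures the 200 Hz component
print(low > high)
```

    A wavelet transform adds time localization on top of this, which is what lets the study attribute band power to the QRS interval specifically.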

  17. Exploring human inactivity in computer power consumption

    Science.gov (United States)

    Candrawati, Ria; Hashim, Nor Laily Binti

    2016-08-01

Managing computer power consumption has become an important challenge as computer systems become ever more central to modern life while demand for computing power and functionality grows continuously. Unfortunately, previous approaches are inadequately designed to handle the power consumption problem because a system's workload is unpredictable, driven by unpredictable human behavior. This stems from a lack of knowledge within the software system, and software self-adaptation is one approach to dealing with this source of uncertainty. Human inactivity is handled by adapting to the behavioral changes of the users. This paper observes human inactivity during computer usage and finds that computer power usage can be reduced if idle periods can be intelligently sensed from user activities. The study introduces a Control, Learn and Knowledge model that adapts the Monitor, Analyze, Plan, Execute control loop, integrated with a Q-learning algorithm that learns human inactivity periods to minimize computer power consumption. An experiment to evaluate this model was conducted using three case studies with the same activities. The results show that the proposed model reduced power consumption in 5 out of 12 activities compared with the alternatives.
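The learning idea above can be sketched with tabular Q-learning. This is a hypothetical illustration, not the paper's model: the activity states, the two power-management actions, and all power and penalty values are invented for demonstration.

```python
import random

# Coarse "user activity" states and power-management actions (all illustrative)
STATES = ["typing", "reading", "idle"]
ACTIONS = ["stay_active", "low_power"]
POWER = {"stay_active": 60.0, "low_power": 5.0}   # assumed draws in watts

def reward(state, action):
    # Penalise wasted power when idle, and heavily penalise sleeping on an active user
    if action == "low_power" and state != "idle":
        return -100.0
    return -POWER[action]

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.2, 0.1
random.seed(1)
for _ in range(5000):
    s = random.choice(STATES)
    a = random.choice(ACTIONS) if random.random() < epsilon else \
        max(ACTIONS, key=lambda x: Q[(s, x)])
    # Single-step (bandit-style) update; a full agent would also bootstrap
    # on the next state's value.
    Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

After learning, the policy enters the low-power state only when the sensed activity is idle, which is the behavior the paper's model aims to learn from real usage traces.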

  18. High Power Switching Transistor

    Science.gov (United States)

    Hower, P. L.; Kao, Y. C.; Carnahan, D. C.

    1983-01-01

Improved switching transistors handle 400-A peak currents and up to 1,200 V. Using large-diameter silicon wafers with twice the effective area of the D60T, they form the basis for the D7 family of power switching transistors. The package includes an npn wafer, emitter preform, and base-contact insert. Applications are: 25- to 50-kilowatt high-frequency dc/dc inverters, VSCF converters, and motor controllers for electric vehicles.

  19. Computer-aided design of the RF-cavity for a high-power S-band klystron

    Science.gov (United States)

    Kant, D.; Bandyopadhyay, A. K.; Pal, D.; Meena, R.; Nangru, S. C.; Joshi, L. M.

    2012-08-01

This article describes the computer-aided design of the RF cavity for an S-band klystron operating at 2856 MHz. The state-of-the-art electromagnetic simulation tools SUPERFISH, CST Microwave Studio, HFSS and MAGIC have been used for the cavity design. After finalising the geometrical details of the cavity through simulation, it was fabricated and characterised through cold testing. Detailed results of the computer-aided simulation and cold measurements are presented in this article.

  20. BiForce Toolbox: powerful high-throughput computational analysis of gene-gene interactions in genome-wide association studies.

    Science.gov (United States)

    Gyenesei, Attila; Moody, Jonathan; Laiho, Asta; Semple, Colin A M; Haley, Chris S; Wei, Wen-Hua

    2012-07-01

    Genome-wide association studies (GWAS) have discovered many loci associated with common disease and quantitative traits. However, most GWAS have not studied the gene-gene interactions (epistasis) that could be important in complex trait genetics. A major challenge in analysing epistasis in GWAS is the enormous computational demands of analysing billions of SNP combinations. Several methods have been developed recently to address this, some using computers equipped with particular graphical processing units, most restricted to binary disease traits and all poorly suited to general usage on the most widely used operating systems. We have developed the BiForce Toolbox to address the demand for high-throughput analysis of pairwise epistasis in GWAS of quantitative and disease traits across all commonly used computer systems. BiForce Toolbox is a stand-alone Java program that integrates bitwise computing with multithreaded parallelization and thus allows rapid full pairwise genome scans via a graphical user interface or the command line. Furthermore, BiForce Toolbox incorporates additional tests of interactions involving SNPs with significant marginal effects, potentially increasing the power of detection of epistasis. BiForce Toolbox is easy to use and has been applied in multiple studies of epistasis in large GWAS data sets, identifying interesting interaction signals and pathways.
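The bitwise trick behind this kind of fast epistasis scanning can be sketched briefly. This is an assumed illustration of the general technique, not BiForce's actual (Java) implementation: each SNP stores one bitmask per genotype class over all individuals, so a 3x3 pairwise contingency table reduces to bitwise ANDs plus population counts.

```python
# Toy genotype data (0/1/2 copies of the minor allele) for 8 individuals
genotypes_a = [0, 1, 2, 1, 0, 2, 1, 0]   # SNP A
genotypes_b = [1, 1, 0, 2, 0, 2, 1, 1]   # SNP B, same individuals

def masks(genotypes):
    """One bitmask per genotype class: bit i set iff individual i has that genotype."""
    m = [0, 0, 0]
    for i, g in enumerate(genotypes):
        m[g] |= 1 << i
    return m

ma, mb = masks(genotypes_a), masks(genotypes_b)
# 3x3 joint genotype counts via AND + popcount, the core of a pairwise scan
table = [[bin(ma[i] & mb[j]).count("1") for j in range(3)] for i in range(3)]
```

An interaction test statistic (e.g. a chi-square on this table against the marginal expectations) can then be computed from the counts; scanning billions of SNP pairs amounts to repeating these cheap bit operations.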

  1. High-power electronics

    CERN Document Server

    Kapitsa, Petr Leonidovich

    1966-01-01

    High-Power Electronics, Volume 2 presents the electronic processes in devices of the magnetron type and electromagnetic oscillations in different systems. This book explores the problems of electronic energetics.Organized into 11 chapters, this volume begins with an overview of the motion of electrons in a flat model of the magnetron, taking into account the in-phase wave and the reverse wave. This text then examines the processes of transmission of electromagnetic waves of various polarization and the wave reflection from grids made of periodically distributed infinite metal conductors. Other

  2. High Power Dye Lasers

    Science.gov (United States)

    1975-09-30

[Abstract and table text garbled in extraction.] The recoverable fragments describe state-of-the-art capabilities of developmental hydrogen thyratrons and solid-state thyristors, and Table II-1, a list of high-power switches (an ignitron, GE GL-37207; a hydrogen thyratron; EG&G HY-5 and GHT9 thyratrons; and a developmental-model thyristor) with ratings for peak current (kA), RMS current (A) and maximum repetition rate.

  3. High power coaxial ubitron

    Science.gov (United States)

    Balkcum, Adam J.

    In the ubitron, also known as the free electron laser, high power coherent radiation is generated from the interaction of an undulating electron beam with an electromagnetic signal and a static periodic magnetic wiggler field. These devices have experimentally produced high power spanning the microwave to x-ray regimes. Potential applications range from microwave radar to the study of solid state material properties. In this dissertation, the efficient production of high power microwaves (HPM) is investigated for a ubitron employing a coaxial circuit and wiggler. Designs for the particular applications of an advanced high gradient linear accelerator driver and a directed energy source are presented. The coaxial ubitron is inherently suited for the production of HPM. It utilizes an annular electron beam to drive the low loss, RF breakdown resistant TE01 mode of a large coaxial circuit. The device's large cross-sectional area greatly reduces RF wall heat loading and the current density loading at the cathode required to produce the moderate energy (500 keV) but high current (1-10 kA) annular electron beam. Focusing and wiggling of the beam is achieved using coaxial annular periodic permanent magnet (PPM) stacks without a solenoidal guide magnetic field. This wiggler configuration is compact, efficient and can propagate the multi-kiloampere electron beams required for many HPM applications. The coaxial PPM ubitron in a traveling wave amplifier, cavity oscillator and klystron configuration is investigated using linear theory and simulation codes. A condition for the dc electron beam stability in the coaxial wiggler is derived and verified using the 2-1/2 dimensional particle-in-cell code, MAGIC. New linear theories for the cavity start-oscillation current and gain in a klystron are derived. A self-consistent nonlinear theory for the ubitron-TWT and a new nonlinear theory for the ubitron oscillator are presented. These form the basis for simulation codes which, along

  4. Associative Memory computing power and its simulation.

    CERN Document Server

    Ancu, L S; Britzger, D; Giannetti, P; Howarth, J W; Luongo, C; Pandini, C; Schmitt, S; Volpi, G

    2015-01-01

An important step in the ATLAS upgrade program is the installation of a tracking processor, the Fast Tracker (FTK), whose goal is to identify the tracks generated by charged particles originating from the LHC 14 TeV proton-proton collisions. The collisions will generate thousands of hits in each layer of the silicon tracker detector, making track identification a very challenging computational problem. At the core of the FTK is an associative memory (AM) system, made of hundreds of AM ASIC chips, specifically designed to allow pattern identification in high-density environments at very high speed. This component organizes the following steps of track identification, providing huge computing power for a specific application. The AM system will in fact be able to reconstruct tracks in tens of microseconds. Within the FTK team there has also been a constant effort to maintain a detailed emulation of the system, to predict the impact of single component features on the final performance and on the ATLAS da...

  5. Computer Simulation of Interactions between High-Power Electromagnetic Fields and Electronic Systems in a Complex Environment.

    Science.gov (United States)

    1997-05-01

[Abstract text garbled in extraction.] The recoverable fragments concern the monostatic radar cross section (RCS) of a circular cylinder (radius 5λ) with a protrusion 1λ wide and 1λ high; high-frequency scattering from trihedral corner reflectors; and cylindrically conformal waveguide-fed slot arrays, including the effects of curvature, slot thickness, and waveguide termination on the radar cross section.

  6. High power beam analysis

    Science.gov (United States)

    Aharon, Oren

    2014-02-01

In various modern scientific and industrial laser applications, beam-shaping optics manipulate the laser spot size and its intensity distribution. However, the delivered laser spot frequently deviates from the design goal due to real-life imperfections and effects such as input laser distortions, optical distortion, heating, overall instabilities, and non-linear effects. Lasers provide the ability to deliver large amounts of energy to a target area with very high accuracy. Thus monitoring beam size, power and location is of high importance for high-quality, repeatable results. Depending on the combination of wavelength, beam size and pulse duration, laser energy is absorbed by the material surface, driving processes such as cutting, welding, surface treatment, brazing and many other applications. This article covers laser beam measurement, especially at the focal point where it matters most. A brief introduction to material-processing interactions is followed by fundamentals of laser beam propagation, novel measurement techniques, actual measurements and brief conclusions.

  7. Computer Architecture Techniques for Power-Efficiency

    CERN Document Server

    Kaxiras, Stefanos

    2008-01-01

    In the last few years, power dissipation has become an important design constraint, on par with performance, in the design of new computer systems. Whereas in the past, the primary job of the computer architect was to translate improvements in operating frequency and transistor count into performance, now power efficiency must be taken into account at every step of the design process. While for some time, architects have been successful in delivering 40% to 50% annual improvement in processor performance, costs that were previously brushed aside eventually caught up. The most critical of these

  8. Cloud Computing and the Power to Choose

    Science.gov (United States)

    Bristow, Rob; Dodds, Ted; Northam, Richard; Plugge, Leo

    2010-01-01

    Some of the most significant changes in information technology are those that have given the individual user greater power to choose. The first of these changes was the development of the personal computer. The PC liberated the individual user from the limitations of the mainframe and minicomputers and from the rules and regulations of centralized…

  10. High-throughput computing in the sciences.

    Science.gov (United States)

    Morgan, Mark; Grimshaw, Andrew

    2009-01-01

While it is true that the modern computer is many orders of magnitude faster than that of yesteryear, this tremendous growth in CPU clock rates is now over. Unfortunately, the growth in demand for computational power has not abated; whereas researchers a decade ago could simply wait for computers to get faster, today the only solution to the growing need for more powerful computational resources lies in the exploitation of parallelism. Software parallelization falls generally into two broad categories--"true parallel" and high-throughput computing. This chapter focuses on the latter of these two types of parallelism. With high-throughput computing, users can run many copies of their software at the same time across many different computers. This technique for achieving parallelism is powerful in its ability to provide high degrees of parallelism, yet simple in its conceptual implementation. This chapter covers various patterns of high-throughput computing usage and the skills and techniques necessary to take full advantage of them. By utilizing numerous examples and sample codes and scripts, we hope to provide the reader not only with a deeper understanding of the principles behind high-throughput computing, but also with a set of tools and references that will prove invaluable as she explores software parallelism with her own software applications and research.
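A minimal sketch of the high-throughput pattern described above: many independent copies of the same computation farmed out to a pool of workers. This is an assumed example, not one of the chapter's scripts; a real CPU-bound research code would use separate processes or machines, but threads keep the sketch self-contained.

```python
from concurrent.futures import ThreadPoolExecutor

def one_run(seed):
    """Stand-in for a single independent run of an analysis code."""
    x = seed
    for _ in range(1000):
        x = (1103515245 * x + 12345) % (2**31)   # toy LCG "workload"
    return seed, x % 100

# Launch 32 independent runs; each (seed, result) pair is collected as it completes.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = dict(pool.map(one_run, range(32)))
```

The same shape scales from one machine to a cluster by swapping the executor for a batch scheduler (e.g. HTCondor) that runs each `one_run(seed)` as a separate job.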

  11. Wirelessly powered sensor networks and computational RFID

    CERN Document Server

    2013-01-01

    The Wireless Identification and Sensing Platform (WISP) is the first of a new class of RF-powered sensing and computing systems.  Rather than being powered by batteries, these sensor systems are powered by radio waves that are either deliberately broadcast or ambient.  Enabled by ongoing exponential improvements in the energy efficiency of microelectronics, RF-powered sensing and computing is rapidly moving along a trajectory from impossible (in the recent past), to feasible (today), toward practical and commonplace (in the near future). This book is a collection of key papers on RF-powered sensing and computing systems including the WISP.  Several of the papers grew out of the WISP Challenge, a program in which Intel Corporation donated WISPs to academic applicants who proposed compelling WISP-based projects.  The book also includes papers presented at the first WISP Summit, a workshop held in Berkeley, CA in association with the ACM Sensys conference, as well as other relevant papers. The book provides ...

  12. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relative low power of reconfigurable hardware–in the form Field Programmable Gate Arrays (FPGAs)–in High Performance Computing (HPC) applications. It presents the latest developments in this field from applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community.  The book includes:  Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation e.g. computational fluid dynamics and seismic modeling, cryptanalysis, astronomical N-body simulation, and circuit simulation.     Seven architecture chapters which...

  13. High power evaluation of X-band high power loads

    CERN Document Server

    Matsumoto, Shuji; Syratchev, Igor; Riddone, Germana; Wuensch, Walter

    2010-01-01

Several types of X-band high-power loads designed for the several-tens-of-MW range were fabricated and used for high-power tests at the X-band facility of KEK. Some of them have been used for many years, and a few units showed possible deterioration of RF performance. Recently, revised-design loads were made by CERN and their high-power evaluation was performed at KEK. In this paper, the main requirements are recalled, together with the design features. The high-power test results are analysed and presented.

  14. High Efficiency Power Converter for Low Voltage High Power Applications

    DEFF Research Database (Denmark)

    Nymand, Morten

The topic of this thesis is the design of high efficiency power electronic dc-to-dc converters for high-power, low-input-voltage to high-output-voltage applications. These converters are increasingly required for emerging sustainable energy systems such as fuel cell, battery or photo voltaic based, and remote power generation for light towers, camper vans, boats, beacons, and buoys etc. A review of current state-of-the-art is presented. The best performing converters achieve moderately high peak efficiencies at high input voltage and medium power level. However, system dimensioning and cost are often determined by the performance at the system worst case operating point which is usually at minimum input voltage and maximum power. Except for the non-regulating V6 converters, all published solutions exhibit a very significant drop in conversion efficiency at minimum input voltage and maximum output power...

  15. Future Computing Platforms for Science in a Power Constrained Era

    Science.gov (United States)

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert

    2015-12-01

    Power consumption will be a key constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics (HEP). This makes performance-per-watt a crucial metric for selecting cost-efficient computing solutions. For this paper, we have done a wide survey of current and emerging architectures becoming available on the market including x86-64 variants, ARMv7 32-bit, ARMv8 64-bit, Many-Core and GPU solutions, as well as newer System-on-Chip (SoC) solutions. We compare performance and energy efficiency using an evolving set of standardized HEP-related benchmarks and power measurement techniques we have been developing. We evaluate the potential for use of such computing solutions in the context of DHTC systems, such as the Worldwide LHC Computing Grid (WLCG).

  16. Future Computing Platforms for Science in a Power Constrained Era

    CERN Document Server

    Abdurachmanov, David; Eulisse, Giulio; Knight, Robert

    2015-01-01

    Power consumption will be a key constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics (HEP). This makes performance-per-watt a crucial metric for selecting cost-efficient computing solutions. For this paper, we have done a wide survey of current and emerging architectures becoming available on the market including x86-64 variants, ARMv7 32-bit, ARMv8 64-bit, Many-Core and GPU solutions, as well as newer System-on-Chip (SoC) solutions. We compare performance and energy efficiency using an evolving set of standardized HEP-related benchmarks and power measurement techniques we have been developing. We evaluate the potential for use of such computing solutions in the context of DHTC systems, such as the Worldwide LHC Computing Grid (WLCG).

  17. High Efficiency Power Converter for Low Voltage High Power Applications

    DEFF Research Database (Denmark)

    Nymand, Morten

The topic of this thesis is the design of high efficiency power electronic dc-to-dc converters for high-power, low-input-voltage to high-output-voltage applications. These converters are increasingly required for emerging sustainable energy systems such as fuel cell, battery or photo voltaic based, and remote power generation for light towers, camper vans, boats, beacons, and buoys etc. In chapter 2, a review of current state-of-the-art is presented. The best performing converters achieve moderately high peak efficiencies at high input voltage and medium power level. However, system dimensioning…

  18. Shifted power method for computing tensor eigenpairs.

    Energy Technology Data Exchange (ETDEWEB)

    Mayo, Jackson R.; Kolda, Tamara Gibson

    2010-10-01

Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m-1) = λx subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
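The iteration can be sketched for an order-3 symmetric tensor. This is a hedged illustration of the SS-HOPM idea rather than the paper's reference implementation: the shift chosen below is deliberately conservative (the paper derives much sharper convergence conditions), and the tensor is a small random example.

```python
import numpy as np
from itertools import permutations

def ss_hopm(A, alpha, iters=10000, seed=0):
    """Shifted symmetric higher-order power method for an order-3 tensor A."""
    x = np.random.default_rng(seed).standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = np.einsum("ijk,j,k->i", A, x, x) + alpha * x   # shifted update
        x = y / np.linalg.norm(y)
    lam = np.einsum("ijk,i,j,k->", A, x, x, x)             # eigenvalue at x
    return lam, x

# Build a small random symmetric tensor by symmetrising a random one.
rng = np.random.default_rng(42)
T = rng.standard_normal((4, 4, 4))
A = sum(np.transpose(T, p) for p in permutations(range(3))) / 6.0

# Conservative shift for this illustration; large enough to force convergence.
lam, x = ss_hopm(A, alpha=float(np.abs(A).sum()))
# At a fixed point, A x^2 = lam * x, so this residual should be tiny.
residual = np.linalg.norm(np.einsum("ijk,j,k->i", A, x, x) - lam * x)
```

Larger shifts guarantee convergence but slow it down, which is why the paper's analysis of how small the shift can safely be made matters in practice.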

  19. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  20. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    J C Travers

    2010-11-01

The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium. The most common experimental arrangements are described, including both continuous wave fibre laser systems with over 100 W pump power, and picosecond mode-locked, master oscillator power fibre amplifier systems with over 10 kW peak pump power. These systems can produce broadband supercontinua with over 50 and 1 mW/nm average spectral power, respectively. Techniques for numerical modelling of the supercontinuum sources are presented and used to illustrate some supercontinuum dynamics. Some recent experimental results are presented.

  1. High Performance Computing Today

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, Jack; Meuer,Hans; Simon,Horst D.; Strohmaier,Erich

    2000-04-01

In the last 50 years, the field of scientific computing has seen rapid change in vendors, architectures, technologies and the usage of systems. Despite all these changes, the evolution of performance on a large scale seems to be a very steady and continuous process. Moore's Law is often cited in this context. If the authors plot the peak performance of the various computers of the last five decades that could have been called the supercomputers of their time (Figure 1), they indeed see how well this law holds for almost the complete lifespan of modern computing. On average they see an increase in performance of two orders of magnitude every decade.

  2. High-powered manoeuvres

    CERN Multimedia

    Anaïs Schaeffer

    2013-01-01

    This week, CERN received the latest new transformers for the SPS. Stored in pairs in 24-tonne steel containers, these transformers will replace the old models, which have been in place since 1981.     The transformers arrive at SPS's access point 4 (BA 4). During LS1, the TE-EPC Group will be replacing all of the transformers for the main converters of the SPS. This renewal campaign is being carried out as part of the accelerator consolidation programme, which began at the start of April and will come to an end in November. It involves 80 transformers: 64 with a power of 2.6 megavolt-amperes (MVA) for the dipole magnets, and 16 with 1.9 MVA for the quadrupoles. These new transformers were manufactured by an Italian company and are being installed outside the six access points of the SPS by the EN-HE Group, using CERN's 220-tonne crane. They will contribute to the upgrade of the SPS, which should thus continue to operate as the injector for the LHC until 2040....

  3. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  5. TRANSISTOR HIGH VOLTAGE POWER SUPPLY

    Science.gov (United States)

    Driver, G.E.

    1958-07-15

High voltage, direct current power supplies are described for use with battery-powered nuclear detection equipment. The particular advantages of the power supply described are increased efficiency and reduced size and weight, brought about by the use of transistors in the circuit. An important feature resides in the employment of a pair of transistors in an alternate-firing oscillator circuit having a coupling transformer and other circuit components which are used for interconnecting the various electrodes of the transistors.

  6. Computer controlled MHD power consolidation and pulse generation system

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, R.; Marcotte, K.; Donnelly, M.

    1990-01-01

The major goal of this research project is to establish the feasibility of a power conversion technology which will permit the direct synthesis of computer-programmable pulse power. Feasibility has been established in this project by demonstration of direct synthesis of commercial-frequency power by means of computer control. The power input to the conversion system is assumed to be a Faraday-connected MHD generator, which may be viewed as a multi-terminal dc source and is simulated for the purpose of this demonstration by a set of dc power supplies. This consolidation/inversion (CI) process will be referred to subsequently as Pulse Amplitude Synthesis and Control (PASC). A secondary goal is to deliver a controller subsystem consisting of a computer, software, and computer interface board which can serve as one of the building blocks for a possible phase II prototype system. This report summarizes the accomplishments and covers the high points of the two-year project. 6 refs., 41 figs.

  7. Modular High Voltage Power Supply

    Energy Technology Data Exchange (ETDEWEB)

    Newell, Matthew R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-05-18

    The goal of this project is to develop a modular high voltage power supply that will meet the needs of safeguards applications and provide a modular plug and play supply for use with standard electronic racks.

  8. High Power Betavoltaic Technology Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed innovation will dramatically improve the performance of tritium-powered betavoltaic batteries through the development of a high-aspect ratio, expanded...

  9. EURISOL High Power Targets

    CERN Document Server

    Kadi, Y; Lindroos, M; Ridikas, D; Stora, T; Tecchio, L; CERN. Geneva. BE Department

    2009-01-01

    Modern Nuclear Physics requires access to higher yields of rare isotopes, that relies on further development of the In-flight and Isotope Separation On-Line (ISOL) production methods. The limits of the In-Flight method will be applied via the next generation facilities FAIR in Germany, RIKEN in Japan and RIBF in the USA. The ISOL method will be explored at facilities including ISAC-TRIUMF in Canada, SPIRAL-2 in France, SPES in Italy, ISOLDE at CERN and eventually at the very ambitious multi-MW EURISOL facility. ISOL and in-flight facilities are complementary entities. While in-flight facilities excel in the production of very short lived radioisotopes independently of their chemical nature, ISOL facilities provide high Radioisotope Beam (RIB) intensities and excellent beam quality for 70 elements. Both production schemes are opening vast and rich fields of nuclear physics research. In this article we will introduce the targets planned for the EURISOL facility and highlight some of the technical and safety cha...

  10. Low Power Dynamic Scheduling for Computing Systems

    CERN Document Server

    Neely, Michael J

    2011-01-01

    This paper considers energy-aware control for a computing system with two states: "active" and "idle." In the active state, the controller chooses to perform a single task using one of multiple task processing modes. The controller then saves energy by choosing an amount of time for the system to be idle. These decisions affect processing time, energy expenditure, and an abstract attribute vector that can be used to model other criteria of interest (such as processing quality or distortion). The goal is to optimize time average system performance. Applications of this model include a smart phone that makes energy-efficient computation and transmission decisions, a computer that processes tasks subject to rate, quality, and power constraints, and a smart grid energy manager that allocates resources in reaction to a time varying energy price. The solution methodology of this paper uses the theory of optimization for renewal systems developed in our previous work. This paper is written in tutorial form and devel...

  11. High assurance services computing

    CERN Document Server

    2009-01-01

    Covers service-oriented technologies in different domains, including high assurance systems. Assists software engineers from industry and government laboratories who develop mission-critical software, while providing academia with a practitioner's outlook on the problems of high-assurance software development.

  12. Flash on disk for low-power multimedia computing

    Science.gov (United States)

    Singleton, Leo; Nathuji, Ripal; Schwan, Karsten

    2007-01-01

    Mobile multimedia computers require large amounts of data storage, yet must consume low power in order to prolong battery life. Solid-state storage offers low power consumption, but its capacity is an order of magnitude smaller than the hard disks needed for high-resolution photos and digital video. In order to create a device with the space of a hard drive, yet the low power consumption of solid-state storage, hardware manufacturers have proposed using flash memory as a write buffer on mobile systems. This paper evaluates the power savings of such an approach and also considers other possible flash allocation algorithms, using both hardware- and software-level flash management. Its contributions also include a set of typical multimedia-rich workloads for mobile systems and power models based upon current disk and flash technology. Based on these workloads, we demonstrate an average power savings of 267 mW (53% of disk power) using hardware-only approaches. Next, we propose another algorithm, termed Energy-efficient Virtual Storage using Application-Level Framing (EVS-ALF), which uses both hardware and software for power management. By collecting information from the applications and using this metadata to perform intelligent flash allocation and prefetching, EVS-ALF achieves an average power savings of 307 mW (61%), another 8% improvement over hardware-only techniques.
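
The savings mechanism described above can be illustrated with a toy energy model (all parameter values below are illustrative assumptions, not measurements from the paper): buffering writes in flash lets the disk stay spun down, so average power drops roughly in proportion to the fraction of time the disk can sleep.

```python
# Toy model: average storage power with and without a flash write buffer.
# All numbers are illustrative assumptions, not values from the paper.

DISK_ACTIVE_MW = 500.0   # disk power while spinning (mW), assumed
DISK_SLEEP_MW = 50.0     # disk power while spun down (mW), assumed
FLASH_MW = 30.0          # flash power while absorbing writes (mW), assumed

def avg_power_mw(disk_active_fraction: float, use_flash_buffer: bool) -> float:
    """Average storage-subsystem power for a workload that needs the
    disk active for `disk_active_fraction` of the time."""
    if not use_flash_buffer:
        # Without a buffer the disk must stay spinning the whole time.
        return DISK_ACTIVE_MW
    sleep_fraction = 1.0 - disk_active_fraction
    return (disk_active_fraction * DISK_ACTIVE_MW
            + sleep_fraction * (DISK_SLEEP_MW + FLASH_MW))

baseline = avg_power_mw(0.2, use_flash_buffer=False)
buffered = avg_power_mw(0.2, use_flash_buffer=True)
print(f"savings: {baseline - buffered:.0f} mW")   # prints: savings: 336 mW
```

The real paper's algorithms (including EVS-ALF) additionally decide *which* blocks go to flash; this sketch only captures why any effective buffering policy saves power.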

  13. High power ferrite microwave switch

    Science.gov (United States)

    Bardash, I.; Roschak, N. K.

    1975-01-01

    A high power ferrite microwave switch was developed, along with associated electronic driver circuits, for operation in a spaceborne high power microwave transmitter in geostationary orbit. Three units were built and tested in a space environment to demonstrate conformance to the required performance characteristics. Each unit consisted of an input magic-tee hybrid, two non-reciprocal latching ferrite phase shifters, an output short-slot 3 dB quadrature coupler, a dual driver electronic circuit, and input logic interface circuitry. The basic mode of operation of the high power ferrite microwave switch is identical to that of a four-port, differential phase shift, switchable circulator. By appropriately designing the phase shifters and electronic driver circuits to operate in the flux-transfer magnetization mode, power- and temperature-insensitive operation was achieved. A list of the realized characteristics of the developed units is given.

  14. Energy and Power Aware Computing Through Management of Computational Entropy

    Science.gov (United States)

    2008-01-01

    ...unit of SNR degradation. These savings stem from a novel method of voltage scaling, which we refer to as biased voltage scaling (or BIVOS), that is...
    • PowerPC cores: high performance, reconfigurable, scalable, real-time
    • CBEA (Cell Broadband Engine Architecture): eight Synergistic Processor Elements...
    • Supercomputing, IBM Blue Gene/L (IBM): fundamental science simulation
    • Head End AVC HD Encoder (Scientific Atlanta): real-time HD encoder for HDTV

  15. Dawning4000A high performance computer

    Institute of Scientific and Technical Information of China (English)

    SUN Ninghui; MENG Dan

    2007-01-01

    Dawning4000A is an AMD Opteron-based Linux cluster with 11.2 Tflops peak performance and 8.06 Tflops Linpack performance. It was developed for the Shanghai Supercomputer Center (SSC) as one of the computing power stations of the China National Grid (CNGrid) project. The Massively Cluster Computer (MCC) architecture is proposed to add value to the industry-standard system. Several grid-enabling components were developed to support the running environment of the CNGrid. It is an achievement for a high-performance computer built with a low-cost approach.

  16. High Power Amplifier and Power Supply

    Science.gov (United States)

    Duong, Johnny; Stride, Scot; Harvey, Wayne; Haque, Inam; Packard, Newton; Ng, Quintin; Ispirian, Julie Y.; Waian, Christopher; Janes, Drew

    2008-01-01

    A document discusses the creation of a high-voltage power supply (HVPS) that can contain voltages up to -20 kV, keep electrical field strengths below 200 V/mil (approximately equal to 7.87 kV/mm), and provide a 200-nanosecond rise/fall-time focus modulator swinging between cathode potentials of 16.3 kV and -19.3 kV. This HVPS can protect the 95-GHz, pulsed extended interaction klystron (EIK) from arcs/discharges from all sources, including those from within the EIK's vacuum envelope. This innovation has a multi-winding pulse transformer design, which uses new winding techniques to provide the same delays and rise/fall times (less than 10 nanoseconds) at different potential levels ranging from -20 kV to -16 kV. Another feature involves a high-voltage printed-wiring board that was corona-free at -20 kV DC with a 3-kV AC swing. The corona-free multilayer high-voltage board is used to simulate fields of less than 200 V/mil (approximately equal to 7.87 kV/mm) at 20 kV DC. Drive techniques for the modulator FETs (field-effect transistors) (four to 10 in series) were created to change states (3,000-V swing) without abrupt steps, while still maintaining the required delays and transition times. The packaging scheme includes a potting mold to house a ten-stage modulator in the space that, in the past, only housed a four-stage modulator. Heat dissipation problems were solved using an aluminum oxide substrate in the high-voltage section to limit temperature rise to less than 10 while withstanding -20 kV DC and remaining corona-free.

  17. Computer system for monitoring power boiler operation

    Energy Technology Data Exchange (ETDEWEB)

    Taler, J.; Weglowski, B.; Zima, W.; Duda, P.; Gradziel, S.; Sobota, T.; Cebula, A.; Taler, D. [Cracow University of Technology, Krakow (Poland). Inst. for Process & Power Engineering

    2008-02-15

    The computer-based boiler performance monitoring system was developed to perform thermal-hydraulic computations of the boiler working parameters in an on-line mode. Measurements of temperatures, heat flux, pressures, mass flowrates, and gas analysis data were used to perform the heat transfer analysis in the evaporator, furnace, and convection pass. A new construction technique of heat flux tubes for determining the heat flux absorbed by membrane water-walls is also presented. The current paper presents the results of heat flux measurement in coal-fired steam boilers. During changes of the boiler load, the limits of natural water circulation must not be exceeded. A rapid increase of pressure may cause fading of the boiling process in water-wall tubes, whereas a rapid decrease of pressure leads to water boiling in all elements of the boiler's evaporator - water-wall tubes and downcomers. Both cases can cause flow stagnation in the water circulation, leading to pipe cracking. Two flowmeters were assembled on central downcomers, and an investigation of natural water circulation in an OP-210 boiler was carried out. On the basis of these measurements, the maximum rates of pressure change in the boiler evaporator were determined. The on-line computation of the conditions in the combustion chamber allows for real-time determination of the heat flowrate transferred to the power boiler evaporator. Furthermore, with a quantitative indication of surface cleanliness, selective sootblowing can be directed at specific problem areas. A boiler monitoring system is also incorporated to provide details of changes in boiler efficiency and operating conditions following sootblowing, so that the effects of a particular sootblowing sequence can be analysed and optimized at a later stage.

  18. High-Average Power Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Dowell, David H.; /SLAC; Power, John G.; /Argonne

    2012-09-05

    There has been significant progress in the development of high-power facilities in recent years, yet major challenges remain. The task of WG4 was to identify which facilities are capable of addressing the outstanding R&D issues presently preventing high-power operation. To this end, information from each of the facilities represented at the workshop was tabulated, and the results are presented herein. A brief description of the major challenges is given; the detailed elaboration can be found in the other three working group summaries.

  19. Power coal plasma gasification. Computation and experiment

    Energy Technology Data Exchange (ETDEWEB)

    N.A. Bastyrev; V.I. Golysh; M.A. Gorokhovski; Yu.E. Karpenko; V.G. Lukiaschenko; V.E. Messerle; A.O. Nagibin; E.F. Osadchaya; S.F. Osadchy; I.G. Stepanov; K.A. Umbetkaliev; A.B. Ustimenko [Combustion Problems Institute, Almaty (Kazakhstan)

    2005-07-01

    Results of a complex experimental and numerical investigation of coal plasma gasification in steam and air are presented. The universal thermodynamic calculation code TERRA was used for the numerical analysis; its database contains thermodynamic properties for 3500 individual components over the temperature interval from 300 to 6000 K. Experiments were performed at an original installation for coal plasma gasification. The nominal power of the plasma gasifier is 100 kW and the total reagent consumption is up to 25 kg/h. High integral indexes of the gasification processes were achieved. Comparison of the numerical and experimental results showed satisfactory agreement. 7 refs., 7 figs., 3 tabs.

  20. Factors Affecting Computer Anxiety in High School Computer Science Students.

    Science.gov (United States)

    Hayek, Linda M.; Stephens, Larry

    1989-01-01

    Examines factors related to computer anxiety measured by the Computer Anxiety Index (CAIN). Achievement in two programing courses was inversely related to computer anxiety. Students who had a home computer and had computer experience before high school had lower computer anxiety than those who had not. Lists 14 references. (YP)

  1. High power neutron production targets

    Energy Technology Data Exchange (ETDEWEB)

    Wender, S. [Los Alamos National Lab., NM (United States)

    1996-06-01

    The author describes issues of concern in the design of targets and associated systems for high power neutron production facilities. Applications include neutron scattering, accelerator-driven transmutation, accelerator production of tritium, short pulse spallation sources, and long pulse spallation sources. Each of these applications requires a source with different design needs and consequently a different implementation in practice.

  2. Associative Memory computing power and its simulation

    CERN Document Server

    Ancu, L S; The ATLAS collaboration; Britzger, D; Giannetti, P; Howarth, J W; Luongo, C; Pandini, C; Schmitt, S; Volpi, G

    2014-01-01

    The associative memory (AM) system is a computing device made of hundreds of AM ASIC chips designed to perform “pattern matching” at very high speed. Since each AM chip stores a database of 130,000 pre-calculated patterns and large numbers of chips can easily be assembled together, it is possible to produce huge AM banks. Speed and size of the system are crucial for real-time High Energy Physics applications, such as the ATLAS Fast TracKer (FTK) Processor. Using 80 million channels of the ATLAS tracker, FTK finds tracks within 100 microseconds. The simulation of such a parallelized system is an extremely complex task if executed on commercial computers based on normal CPUs. The algorithm performance is limited due to the lack of parallelism, and in addition the memory requirement is very large. In fact, the AM chip uses a content addressable memory (CAM) architecture. Any data inquiry is broadcast to all memory elements simultaneously, thus data retrieval time is independent of the database size. The gr...
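
The look-up behaviour described above can be mimicked in software, though without the true parallelism of CAM hardware. A minimal sketch (the patterns are hypothetical, not the FTK bank format): a hash-based set keyed by pattern plays the role of a software content-addressable memory, so average look-up time does not grow with the number of stored patterns, whereas a hardware CAM additionally compares the input against every pattern simultaneously.

```python
# Software sketch of an associative-memory pattern bank (illustrative only).
# A "pattern" is a tuple of coarse detector-layer addresses (hypothetical format).
pattern_bank = {
    (1, 4, 7, 9),
    (2, 4, 6, 9),
    (1, 3, 7, 8),
}

def match(hits: tuple) -> bool:
    """Set membership stands in for the CAM broadcast: average look-up
    cost is independent of how many patterns the bank stores."""
    return hits in pattern_bank

print(match((2, 4, 6, 9)))   # True: a stored road
print(match((2, 4, 6, 8)))   # False: not in the bank
```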

  3. Associative Memory Computing Power and Its Simulation

    CERN Document Server

    Volpi, G; The ATLAS collaboration

    2014-01-01

    The associative memory (AM) system is a computing device made of hundreds of AM ASIC chips designed to perform “pattern matching” at very high speed. Since each AM chip stores a database of 130,000 pre-calculated patterns and large numbers of chips can easily be assembled together, it is possible to produce huge AM banks. Speed and size of the system are crucial for real-time High Energy Physics applications, such as the ATLAS Fast TracKer (FTK) Processor. Using 80 million channels of the ATLAS tracker, FTK finds tracks within 100 microseconds. The simulation of such a parallelized system is an extremely complex task if executed on commercial computers based on normal CPUs. The algorithm performance is limited due to the lack of parallelism, and in addition the memory requirement is very large. In fact, the AM chip uses a content addressable memory (CAM) architecture. Any data inquiry is broadcast to all memory elements simultaneously, thus data retrieval time is independent of the database size. The gr...

  4. Computation of loss allocation in electric power networks using loss ...

    African Journals Online (AJOL)

    Computation of loss allocation in electric power networks using loss vector. ... The losses to be allocated are derived from load flow of a specified power network and operating conditions. Loss vectors associated with demand ...

  5. Design and implementation of a power supply for an embedded single-board computer with high power consumption

    Institute of Scientific and Technical Information of China (English)

    刘宝明; 苏培培

    2012-01-01

    To satisfy the power supply requirements of an embedded single-board computer based on the high-performance, high-power dual-core MPC8641D processor, the supply requirements of the MPC8641D and the overall circuit functions of the computer are analyzed, and power supply circuits based on several DC-DC conversion chips are designed. By implementing a power-on sequence controller in a CPLD, sequencing of the various supply rails and reset management are achieved. Practical application shows that the design is stable, reliable and flexible.

  6. High-performance computers for unmanned vehicles

    Science.gov (United States)

    Toms, David; Ettinger, Gil J.

    2005-10-01

    The present trend of increasing functionality onboard unmanned vehicles is made possible by rapid advances in high-performance computers (HPCs). An HPC is characterized by very high computational capability (100s of billions of operations per second) contained in lightweight, rugged, low-power packages. HPCs are critical to the processing of sensor data onboard these vehicles. Operations such as radar image formation, target tracking, target recognition, signal intelligence signature collection and analysis, electro-optic image compression, and onboard data exploitation are provided by these machines. The net effect of an HPC is to minimize communication bandwidth requirements and maximize mission flexibility. This paper focuses on new and emerging technologies in the HPC market. Emerging capabilities include new lightweight, low-power computing systems: multi-mission computing (using a common computer to support several sensors); onboard data exploitation; and large image data storage capacities. These new capabilities will enable an entirely new generation of deployed capabilities at reduced cost. New software tools and architectures available to unmanned vehicle developers will enable them to rapidly develop optimum solutions with maximum productivity and return on investment. These new technologies effectively open the trade space for unmanned vehicle designers.

  7. High temperature, high power piezoelectric composite transducers.

    Science.gov (United States)

    Lee, Hyeong Jae; Zhang, Shujun; Bar-Cohen, Yoseph; Sherrit, Stewart

    2014-08-08

    Piezoelectric composites are a class of functional materials consisting of piezoelectric active materials and non-piezoelectric passive polymers, mechanically attached together to form different connectivities. These composites have several advantages compared to conventional piezoelectric ceramics and polymers, including improved electromechanical properties, mechanical flexibility and the ability to tailor properties by using several different connectivity patterns. These advantages have led to the improvement of overall transducer performance, such as transducer sensitivity and bandwidth, resulting in rapid implementation of piezoelectric composites in medical imaging ultrasounds and other acoustic transducers. Recently, new piezoelectric composite transducers have been developed with optimized composite components that have improved thermal stability and mechanical quality factors, making them promising candidates for high temperature, high power transducer applications, such as therapeutic ultrasound, high power ultrasonic wirebonding, high temperature non-destructive testing, and downhole energy harvesting. This paper will present recent developments of piezoelectric composite technology for high temperature and high power applications. The concerns and limitations of using piezoelectric composites will also be discussed, and the expected future research directions will be outlined.

  8. High Temperature, High Power Piezoelectric Composite Transducers

    Directory of Open Access Journals (Sweden)

    Hyeong Jae Lee

    2014-08-01

    Piezoelectric composites are a class of functional materials consisting of piezoelectric active materials and non-piezoelectric passive polymers, mechanically attached together to form different connectivities. These composites have several advantages compared to conventional piezoelectric ceramics and polymers, including improved electromechanical properties, mechanical flexibility and the ability to tailor properties by using several different connectivity patterns. These advantages have led to the improvement of overall transducer performance, such as transducer sensitivity and bandwidth, resulting in rapid implementation of piezoelectric composites in medical imaging ultrasounds and other acoustic transducers. Recently, new piezoelectric composite transducers have been developed with optimized composite components that have improved thermal stability and mechanical quality factors, making them promising candidates for high temperature, high power transducer applications, such as therapeutic ultrasound, high power ultrasonic wirebonding, high temperature non-destructive testing, and downhole energy harvesting. This paper will present recent developments of piezoelectric composite technology for high temperature and high power applications. The concerns and limitations of using piezoelectric composites will also be discussed, and the expected future research directions will be outlined.

  9. High-Power, Computer-Controlled, Light-Emitting Diode–Based Light Sources for Fluorescence Imaging and Image-Guided Surgery

    Directory of Open Access Journals (Sweden)

    Sylvain Gioux

    2009-05-01

    Optical imaging requires appropriate light sources. For image-guided surgery, in particular fluorescence-guided surgery, a high fluence rate, a long working distance, computer control, and precise control of wavelength are required. In this article, we describe the development of light-emitting diode (LED)-based light sources that meet these criteria. These light sources are enabled by a compact LED module that includes an integrated linear driver, heat dissipation technology, and real-time temperature monitoring. Measuring only 27 mm wide by 29 mm high and weighing only 14.7 g, each module provides up to 6,500 lx of white (400–650 nm) light and up to 157 mW of filtered fluorescence excitation light while maintaining an operating temperature ≤ 50°C. We also describe software that can be used to design multimodule light housings and an embedded processor that permits computer control and temperature monitoring. With these tools, we constructed a 76-module, sterilizable, three-wavelength surgical light source capable of providing up to 40,000 lx of white light, 4.0 mW/cm2 of 670 nm near-infrared (NIR) fluorescence excitation light, and 14.0 mW/cm2 of 760 nm NIR fluorescence excitation light over a 15 cm diameter field of view. Using this light source, we demonstrated NIR fluorescence-guided surgery in a large-animal model.

  10. Fundamentals of power integrity for computer platforms and systems

    CERN Document Server

    DiBene, Joseph T

    2014-01-01

    An all-encompassing text that focuses on the fundamentals of power integrity Power integrity is the study of power distribution from the source to the load and the system level issues that can occur across it. For computer systems, these issues can range from inside the silicon to across the board and may egress into other parts of the platform, including thermal, EMI, and mechanical. With a focus on computer systems and silicon level power delivery, this book sheds light on the fundamentals of power integrity, utilizing the author's extensive background in the power integrity industry and un

  11. Frequency control in power systems with high wind power penetration

    Energy Technology Data Exchange (ETDEWEB)

    Tarnowski, German Claudio [Technical Univ. of Denmark (Denmark). Centre for Electric Technology; Vestas Wind Systems A/S, Alsve (Denmark); Kjaer, Philip Carne [Vestas Wind Systems A/S, Alsve (Denmark); Oestergaard, Jacob [Technical Univ. of Denmark (Denmark). Centre for Electric Technology; Soerensen, Poul E. [Risoe National Laboratory for Sustainable Energy, Roskilde (Denmark). Wind Energy Dept.

    2010-07-01

    The fluctuating nature of wind power introduces several challenges to reliable operation of power systems. With high wind power penetration, conventional power plants are displaced and wind speed fluctuations introduce large power imbalances, which lead to power system frequency control and operational problems. This paper analyses the impact of wind power on the frequency control of power systems for different amounts of controllable variable-speed wind turbines. Real measurements from short-term wind power penetration tests in a power system are shown and used to study the amount of total regulating power needed from conventional power plants. Dynamic simulations with a validated model of the power system support the studies. The paper also presents control concepts for wind power plants necessary to achieve frequency response and active power balancing characteristics similar to those of conventional power plants, thereby allowing higher wind power penetration. As the power system's dependency on wind power increases, wind power generation has to contribute dynamic response and control actions similarly to conventional power plants. (orig.)

  12. GRID : unlimited computing power on your desktop Conference MT17

    CERN Document Server

    2001-01-01

    The Computational GRID is an analogy to the electrical power grid for computing resources. It decouples the provision of computing, data, and networking from its use, and allows large-scale pooling and sharing of resources distributed world-wide. Every computer, from a desktop to a mainframe or supercomputer, can provide computing power or data for the GRID. The final objective is to plug your computer into the wall and have direct access to huge computing resources immediately, just like plugging in a lamp to get instant light. The GRID will facilitate world-wide scientific collaborations on an unprecedented scale. It will provide transparent access to major distributed resources of computer power, data, information, and collaborations.

  13. Computer memory power control for the Galileo spacecraft

    Science.gov (United States)

    Detwiler, R. C.

    1983-01-01

    The developmental history, major design drivers, and final topology of the computer memory power system on the Galileo spacecraft are described. A unique method of generating memory backup power directly from the fault current drawn during a spacecraft power overload or fault condition allows this system to provide continuous memory power. This concept provides a unique solution to the problem of volatile memory loss without the use of a battery or other large energy storage elements usually associated with uninterruptible power supply designs.

  14. Computer aided power flow software engineering and code generation

    Energy Technology Data Exchange (ETDEWEB)

    Bacher, R. [Swiss Federal Inst. of Tech., Zuerich (Switzerland)

    1996-02-01

    In this paper a software engineering concept is described which permits the automatic solution of a non-linear set of network equations. The power flow equation set can be seen as a defined subset of a network equation set. The automated solution process is the numerical Newton-Raphson solution process of the power flow equations where the key code parts are the numeric mismatch and the numeric Jacobian term computation. It is shown that both the Jacobian and the mismatch term source code can be automatically generated in a conventional language such as Fortran or C. Thereby one starts from a high level, symbolic language with automatic differentiation and code generation facilities. As a result of this software engineering process an efficient, very high quality Newton-Raphson solution code is generated which allows easier implementation of network equation model enhancements and easier code maintenance as compared to hand-coded Fortran or C code.
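
The mismatch/Jacobian structure that the paper generates automatically can be seen in a hand-coded miniature: a two-bus system (slack bus plus one PQ bus over a purely reactive line) solved by Newton-Raphson. The network data below are invented for illustration; in the paper both the mismatch and the Jacobian code would be produced by symbolic differentiation rather than written by hand.

```python
import math

# Two-bus Newton-Raphson power flow sketch (illustrative data, not from the paper).
# Bus 1: slack, V1 = 1.0 p.u., theta1 = 0. Bus 2: PQ bus, line reactance x = 0.1 p.u.
B = 10.0                      # line susceptance magnitude, 1/x
P_SPEC, Q_SPEC = -0.5, -0.2   # specified injections at bus 2 (p.u., load is negative)

theta, v = 0.0, 1.0           # flat start
for _ in range(20):
    # Numeric mismatch terms (the part the paper's tool generates).
    p_calc = B * v * math.sin(theta)
    q_calc = -B * v * math.cos(theta) + B * v * v
    f1, f2 = p_calc - P_SPEC, q_calc - Q_SPEC
    if max(abs(f1), abs(f2)) < 1e-10:
        break
    # Numeric Jacobian terms, here differentiated by hand.
    j11 = B * v * math.cos(theta)            # dP/dtheta
    j12 = B * math.sin(theta)                # dP/dV
    j21 = B * v * math.sin(theta)            # dQ/dtheta
    j22 = -B * math.cos(theta) + 2 * B * v   # dQ/dV
    det = j11 * j22 - j12 * j21
    # Newton step: solve J * delta = -f for the 2x2 system by Cramer's rule.
    theta -= (j22 * f1 - j12 * f2) / det
    v -= (-j21 * f1 + j11 * f2) / det

print(f"theta2 = {theta:.4f} rad, V2 = {v:.4f} p.u.")
```

The automated approach scales this pattern to full networks: each new device model contributes symbolic equations, and the mismatch and Jacobian code are regenerated rather than re-derived by hand.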

  15. Computer aided power flow software engineering and code generation

    Energy Technology Data Exchange (ETDEWEB)

    Bacher, R. [Swiss Federal Inst. of Tech., Zuerich (Switzerland)

    1995-12-31

    In this paper a software engineering concept is described which permits the automatic solution of a non-linear set of network equations. The power flow equation set can be seen as a defined subset of a network equation set. The automated solution process is the numerical Newton-Raphson solution process of the power flow equations where the key code parts are the numeric mismatch and the numeric Jacobian term computation. It is shown that both the Jacobian and the mismatch term source code can be automatically generated in a conventional language such as Fortran or C. Thereby one starts from a high level, symbolic language with automatic differentiation and code generation facilities. As a result of this software engineering process an efficient, very high quality Newton-Raphson solution code is generated which allows easier implementation of network equation model enhancements and easier code maintenance as compared to hand-coded Fortran or C code.

  16. "Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"

    Science.gov (United States)

    Konstantopoulos, Spyros

    2009-01-01

    Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…
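
As a concrete instance of such a computation, a normal-approximation sketch: power for a two-sided test at level α is approximately Φ(λ − z₁₋α/₂), where λ is the noncentrality parameter, and for a cluster-randomized design λ shrinks with the intraclass correlation ρ through the usual design effect 1 + (n − 1)ρ. This formula is the standard textbook approximation, not something taken from this article.

```python
import math

def phi(z: float) -> float:
    """Standard normal CDF via erf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def cluster_power(delta: float, clusters: int, n: int, rho: float) -> float:
    """Approximate power of a two-sided alpha = 0.05 test in a balanced
    cluster-randomized design, using the design effect 1 + (n-1)*rho."""
    z_crit = 1.959963984540054          # z_{0.975}, hard-coded to stay stdlib-only
    design_effect = 1.0 + (n - 1) * rho
    lam = delta * math.sqrt(clusters * n / (4.0 * design_effect))
    return phi(lam - z_crit)

# Clustering erodes power: same 600 students, rho = 0 vs rho = 0.2.
print(round(cluster_power(0.3, clusters=20, n=30, rho=0.0), 3))
print(round(cluster_power(0.3, clusters=20, n=30, rho=0.2), 3))
```

This is the effect the article's multilevel tables capture: ignoring the nesting (ρ = 0) badly overstates power whenever outcomes cluster within schools or classrooms.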

  17. Energy efficiency of computer power supply units - Final report

    Energy Technology Data Exchange (ETDEWEB)

    Aebischer, B. [cepe - Centre for Energy Policy and Economics, Swiss Federal Institute of Technology Zuerich, Zuerich (Switzerland); Huser, H. [Encontrol GmbH, Niederrohrdorf (Switzerland)

    2002-11-15

    This final report for the Swiss Federal Office of Energy (SFOE) takes a look at the efficiency of computer power supply units, which decreases rapidly during average computer use. The background and the purpose of the project are examined. The power supplies for personal computers are discussed and the testing arrangement used is described. Efficiency, power-factor and operating points of the units are examined. Potentials for improvement and measures to be taken are discussed. Also, action to be taken by those involved in the design and operation of such power units is proposed. Finally, recommendations for further work are made.
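
The operating-point dependence the report measures is also what efficiency labels check. A sketch of such a check (the thresholds are those of the basic 80 PLUS level, 80% efficiency at 20%, 50% and 100% of rated load; the measured curves below are invented for illustration):

```python
# Check a measured efficiency curve against the base 80 PLUS thresholds.
# The measured values below are invented for illustration.

THRESHOLDS = {0.20: 0.80, 0.50: 0.80, 1.00: 0.80}

def meets_80plus(measured: dict) -> bool:
    """measured maps load fraction -> efficiency at that operating point."""
    return all(measured.get(load, 0.0) >= eff for load, eff in THRESHOLDS.items())

good_psu = {0.20: 0.82, 0.50: 0.86, 1.00: 0.81}
weak_psu = {0.20: 0.65, 0.50: 0.83, 1.00: 0.80}   # poor light-load efficiency

print(meets_80plus(good_psu))   # True
print(meets_80plus(weak_psu))   # False
```

The second curve is the failure mode the report highlights: efficiency that collapses at the light loads typical of average computer use, even when full-load efficiency looks acceptable.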

  18. The Computational Power of Minkowski Spacetime

    CERN Document Server

    Biamonte, Jacob D

    2009-01-01

    The Lorentzian length of a timelike curve connecting both endpoints of a classical computation is a function of the path taken through Minkowski spacetime. The associated runtime difference is due to time-dilation: the phenomenon whereby an observer finds that another's physically identical ideal clock has ticked at a different rate than their own clock. Using ideas appearing in the framework of computational complexity theory, time-dilation is quantified as an algorithmic resource by relating relativistic energy to an $n$th order polynomial time reduction at the completion of an observer's journey. These results enable a comparison between the optimal quadratic \\emph{Grover speedup} from quantum computing and an $n=2$ speedup using classical computers and relativistic effects. The goal is not to propose a practical model of computation, but to probe the ultimate limits physics places on computation.
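
The time-dilation bookkeeping in the abstract rests on a standard special-relativity relation, stated here for orientation (generic notation, not necessarily the paper's): an ideal clock moving at constant speed $v$ accumulates proper time

```latex
\tau = \frac{t}{\gamma},
\qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},
```

so an observer who completes a journey with Lorentz factor $\gamma = n$ sees an external computer run for coordinate time $t = n\,\tau$: an effective $n$-fold speedup, paid for by the relativistic energy $E = \gamma m c^{2}$ required to sustain that $\gamma$.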

  19. IBM Cloud Computing Powering a Smarter Planet

    Science.gov (United States)

    Zhu, Jinzy; Fang, Xing; Guo, Zhe; Niu, Meng Hua; Cao, Fan; Yue, Shuang; Liu, Qin Yu

    With the increasing need for intelligent systems supporting the world's businesses, Cloud Computing has emerged as a dominant trend providing the dynamic infrastructure that makes such intelligence possible. This article introduces how to build a smarter planet with cloud computing technology. First, it explains why the cloud is needed and traces the evolution of cloud technology. Second, it analyzes the value of cloud computing and how to apply cloud technology. Finally, it predicts the future of the cloud in the smarter planet.

  20. Development of computer science in the power industry

    Energy Technology Data Exchange (ETDEWEB)

    Klos, A.; Nowakowski, R.; Staniszewska, E.

    1987-02-01

    This report discusses development of computerized control systems and computer calculations in the Polish power industry from 1960 to 1985. Three development periods are comparatively evaluated: 1960-1965 (pioneer period), 1965-1975 (period of intensive development), 1975-1985 (period of stagnation). From 1980 to 1985 the number of computers used in the power industry only slightly increased. The following computer types were in use in 1985: 2 units of the Odra 1204, 4 units of the Odra 1304, 21 units of the Odra 1305, 16 units of the Odra 1325, 5 units of the R 32 Ryad computers, 5 on-line control systems. The computers were used for planning, design optimization, computerized power system control, and computer calculations in management. Types of control systems used in the power industry and names of research team members are given.

  1. Power Load Management as a Computational Market

    NARCIS (Netherlands)

    Ygge, Fredrik; Akkermans, Hans; Akkermans, J.M.

    1997-01-01

    Power load management enables energy utilities to reduce peak loads and thereby save money. Due to the large number of different loads, power load management is a complicated optimization problem. We present a new decentralized approach to this problem by modeling direct load management as a

  2. Power Load Management as a Computational Market

    NARCIS (Netherlands)

    Ygge, F.; Akkermans, J.M.

    1996-01-01

    Power load management enables energy utilities to reduce peak loads and thereby save money. Due to the large number of different loads, power load management is a complicated optimization problem. We present a new decentralized approach to this problem by modeling direct load management as a computational market.

  3. Power Load Management as a Computational Market

    NARCIS (Netherlands)

    Ygge, Fredrik; Akkermans, Hans

    1997-01-01

    Power load management enables energy utilities to reduce peak loads and thereby save money. Due to the large number of different loads, power load management is a complicated optimization problem. We present a new decentralized approach to this problem by modeling direct load management as a computational market.

  4. Abstraction Power in Computer Science Education

    DEFF Research Database (Denmark)

    Bennedsen, Jens Benned; Caspersen, Michael Edelgaard

    2006-01-01

    The paper is a discussion of the hypothesis that a person’s abstraction power (or ability) has a positive influence on their ability to program.

  5. Computer technology in education and issues of power and equity

    Directory of Open Access Journals (Sweden)

    Alper Kesten

    2010-05-01

    Full Text Available This study aims to use the ‘techniques of power’ classified by Gore (based on Foucault’s work) in order to illustrate power relations between supporters (or non-supporters) of computer technology and teachers. For this purpose, six of the eight techniques of power (surveillance, normalization, exclusion, classification, distribution and regulation) are used in formulating thoughts about computer technology and issues of power and equity. In this study, these techniques of power are discussed in more detail, both to exemplify how supporters (or non-supporters) of computer technology exercise power over teachers (preservice or inservice) through the major techniques of power and to show how they relate to the issue of equity.

  6. Evolutionary Computing for Intelligent Power System Optimization and Control

    DEFF Research Database (Denmark)

    This new book focuses on how evolutionary computing techniques benefit engineering research and development tasks by converting practical problems of growing complexities into simple formulations, thus largely reducing development efforts. The book begins with an overview of optimization theory and modern evolutionary computing techniques, and goes on to cover specific applications of evolutionary computing to power system optimization and control problems.

  7. The Power of Language in Computer-Mediated Groups.

    Science.gov (United States)

    Adkins, Mark; Brashers, Dale E.

    1995-01-01

    Discusses an experiment to find the effects of "powerful" and "powerless" language on small computer-mediated groups. Explains that subjects were asked to communicate via computer in a decision-making context. Describes the three conditions. Finds that language style has significant impact on impression formation in computer groups and that…

  8. High-power, computer-controlled, light-emitting diode-based light sources for fluorescence imaging and image-guided surgery.

    Science.gov (United States)

    Gioux, Sylvain; Kianzad, Vida; Ciocan, Razvan; Gupta, Sunil; Oketokoun, Rafiou; Frangioni, John V

    2009-01-01

    Optical imaging requires appropriate light sources. For image-guided surgery, in particular fluorescence-guided surgery, a high fluence rate, a long working distance, computer control, and precise control of wavelength are required. In this article, we describe the development of light-emitting diode (LED)-based light sources that meet these criteria. These light sources are enabled by a compact LED module that includes an integrated linear driver, heat dissipation technology, and real-time temperature monitoring. Measuring only 27 mm wide by 29 mm high and weighing only 14.7 g, each module provides up to 6,500 lx of white (400-650 nm) light and up to 157 mW of filtered fluorescence excitation light while maintaining a stable operating temperature. The light source delivers ... mW/cm2 of 670 nm near-infrared (NIR) fluorescence excitation light and 14.0 mW/cm2 of 760 nm NIR fluorescence excitation light over a 15 cm diameter field of view. Using this light source, we demonstrated NIR fluorescence-guided surgery in a large-animal model.

  9. High power, high beam quality regenerative amplifier

    Science.gov (United States)

    Hackel, L.A.; Dane, C.B.

    1993-08-24

    A regenerative laser amplifier system generates high peak power and high energy per pulse output beams enabling generation of X-rays used in X-ray lithography for manufacturing integrated circuits. The laser amplifier includes a ring shaped optical path with a limited number of components including a polarizer, a passive 90 degree phase rotator, a plurality of mirrors, a relay telescope, and a gain medium, the components being placed close to the image plane of the relay telescope to reduce diffraction or phase perturbations in order to limit high peak intensity spiking. In the ring, the beam makes two passes through the gain medium for each transit of the optical path to increase the amplifier gain to loss ratio. A beam input into the ring makes two passes around the ring, is diverted into an SBS phase conjugator and proceeds out of the SBS phase conjugator back through the ring in an equal but opposite direction for two passes, further reducing phase perturbations. A master oscillator inputs the beam through an isolation cell (Faraday or Pockels) which transmits the beam into the ring without polarization rotation. The isolation cell rotates polarization only in beams proceeding out of the ring to direct the beams out of the amplifier. The diffraction limited quality of the input beam is preserved in the amplifier so that a high power output beam having nearly the same diffraction limited quality is produced.

  10. High-power pulsed lasers

    Energy Technology Data Exchange (ETDEWEB)

    Holzrichter, J.F.

    1980-04-02

    The ideas that led to the successful construction and operation of large multibeam fusion lasers at the Lawrence Livermore Laboratory are reviewed. These lasers are based on the use of Nd:glass laser materials. However, most of the concepts are applicable to any laser being designed for fusion experimentation. This report is a summary of lectures given by the author at the 20th Scottish University Summer School in Physics, on Laser Plasma Interaction. This report includes basic concepts of the laser plasma system, a discussion of lasers that are useful for short-pulse, high-power operation, laser design constraints, optical diagnostics, and system organization.

  11. Debugging of High-voltage Power Supply, Focusing Power Supply and Magnetic Field Power Supply

    Institute of Scientific and Technical Information of China (English)

    TU; Rui

    2015-01-01

    High-voltage power supply, focusing power supply and magnetic field power supply are the main parts of the power supply system of the EMIS (Electro-Magnetic Isotope Separator) supplying the ion source. In 2015, a high-voltage power supply, power supply for focusing and

  12. The computational power of interactive recurrent neural networks.

    Science.gov (United States)

    Cabessa, Jérémie; Siegelmann, Hava T

    2012-04-01

    In classical computation, rational- and real-weighted recurrent neural networks were shown to be respectively equivalent to and strictly more powerful than the standard Turing machine model. Here, we study the computational power of recurrent neural networks in a more biologically oriented computational framework, capturing the aspects of sequential interactivity and persistence of memory. In this context, we prove that so-called interactive rational- and real-weighted neural networks show the same computational powers as interactive Turing machines and interactive Turing machines with advice, respectively. A mathematical characterization of each of these computational powers is also provided. It follows from these results that interactive real-weighted neural networks can perform uncountably many more translations of information than interactive Turing machines, making them capable of super-Turing capabilities.

  13. The computational power of Benenson automata

    OpenAIRE

    Soloveichik, David; Winfree, Erik

    2005-01-01

    The development of autonomous molecular computers capable of making independent decisions in vivo regarding local drug administration may revolutionize medical science. Recently Benenson et al. [An autonomous molecular computer for logical control of gene expression, Nature 429 (2004) 423–429.] have envisioned one form such a “smart drug” may take by implementing an in vitro scheme, in which a long DNA state molecule is cut repeatedly by a restriction enzyme in a manner dependent upon the pre...

  14. Computing lifetimes for battery-powered devices

    OpenAIRE

    Jongerden, Marijn; Haverkort, Boudewijn

    2010-01-01

    The battery lifetime of mobile devices depends on the usage pattern of the battery, next to the discharge rate and the battery capacity. Therefore, it is important to include the usage pattern in battery lifetime computations. We do this by combining a stochastic workload, modeled as a continuous-time Markov model, with a well-known battery model. For this combined model, we provide new algorithms to efficiently compute the expected lifetime and the distribution and expected value of the deli...
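The combination described here, a stochastic workload driving a battery model, can be sketched as follows. This is a minimal illustrative Monte Carlo simulation, not the authors' algorithms: a two-state continuous-time Markov workload modulates the discharge current of a Kinetic Battery Model (KiBaM), and the lifetime is the time until the available-charge well empties. All parameter values are made up for the example.

```python
import random

def kibam_lifetime(i_states, rates, c=0.625, k=4.5e-5,
                   capacity=5.0 * 3600, dt=1.0, seed=0):
    """Simulate one battery lifetime (seconds).

    i_states -- discharge current (A) in each workload state
    rates    -- exit rate (1/s) of each state of the 2-state CTMC
    c, k     -- KiBaM well split and diffusion rate (illustrative values)
    """
    rng = random.Random(seed)
    y1, y2 = c * capacity, (1 - c) * capacity   # available / bound charge (C)
    state = 0
    dwell = rng.expovariate(rates[state])       # exponential sojourn time
    t = 0.0
    while y1 > 0:
        i = i_states[state]
        # Euler step of the KiBaM equations: charge diffuses from the
        # bound well into the available well as their "heights" diverge.
        flow = k * (y2 / (1 - c) - y1 / c)
        y1 += (-i + flow) * dt
        y2 += -flow * dt
        t += dt
        dwell -= dt
        if dwell <= 0:                          # CTMC jumps to the other state
            state = 1 - state
            dwell = rng.expovariate(rates[state])
    return t

life = kibam_lifetime(i_states=[0.5, 0.05], rates=[1 / 60, 1 / 120])
```

Averaging such runs over many seeds would estimate the expected lifetime; the paper's point is that the bursty usage pattern, not just the average current, determines the result.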

  15. Reducing Total Power Consumption Method in Cloud Computing Environments

    CERN Document Server

    Kuribayashi, Shin-ichi

    2012-01-01

    The widespread use of cloud computing services is expected to rapidly increase the power consumed by ICT equipment in cloud computing environments. This paper first identifies the need for collaboration among servers, the communication network and the power network in order to reduce the total power consumption of the ICT equipment in cloud computing environments. Five fundamental policies for the collaboration are proposed and the algorithm to realize each collaboration policy is outlined. Next, this paper proposes possible signaling sequences for exchanging information on power consumption between the network and servers, in order to realize the proposed collaboration policies. Then, in order to reduce the power consumption of the network, this paper proposes a simple method of estimating the power consumed by all network devices and assigning it to individual users.
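The last idea in the abstract, attributing aggregate network power to individual users, can be sketched as a simple proportional split. This is an illustrative assumption (proportionality to traffic), not necessarily the paper's exact estimation method, and all names and figures are invented.

```python
def assign_network_power(total_network_watts, user_traffic_mbps):
    """Split the estimated aggregate network power across users
    in proportion to each user's traffic volume."""
    total_traffic = sum(user_traffic_mbps.values())
    return {user: total_network_watts * t / total_traffic
            for user, t in user_traffic_mbps.items()}

# Hypothetical example: 1200 W of network equipment, three users.
share = assign_network_power(
    1200.0, {"alice": 300.0, "bob": 100.0, "carol": 200.0})
```

A proportional rule like this is attractive precisely because it only needs one aggregate power estimate plus per-user traffic counters, both of which are cheap to obtain.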

  16. Condor-COPASI: high-throughput computing for biochemical networks

    OpenAIRE

    Kent Edward; Hoops Stefan; Mendes Pedro

    2012-01-01

    Abstract Background Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise.

  17. Grid Computing - A Tool For Enhancing The Computing Power

    Directory of Open Access Journals (Sweden)

    Manjula K A

    2010-06-01

    Full Text Available With the enormous increase in the demand for computing capacity, solutions with the least investment have to be found. In this direction, Grid technology is finding its way out of the academic incubator and entering commercial environments. Here, geographically distributed resources, such as storage devices, data sources, and supercomputers, are interconnected and exploited by users around the world as a single, unified resource. This helps to use the idle time of these resources, which is otherwise lost. This article discusses grid computing briefly, along with its benefits. Application of this technology is increasing, and this article also looks at some of these applications.

  18. Computational Power of Symmetry-Protected Topological Phases

    Science.gov (United States)

    Stephen, David T.; Wang, Dong-Sheng; Prakash, Abhishodh; Wei, Tzu-Chieh; Raussendorf, Robert

    2017-07-01

    We consider ground states of quantum spin chains with symmetry-protected topological (SPT) order as resources for measurement-based quantum computation (MBQC). We show that, for a wide range of SPT phases, the computational power of ground states is uniform throughout each phase. This computational power, defined as the Lie group of executable gates in MBQC, is determined by the same algebraic information that labels the SPT phase itself. We prove that these Lie groups always contain a full set of single-qubit gates, thereby affirming the long-standing conjecture that general SPT phases can serve as computationally useful phases of matter.

  19. Computer-Controlled High-Precision, High-Voltage Pulse Generator

    Institute of Scientific and Technical Information of China (English)

    但果; 邹积岩; 丛吉远; 董恩源

    2003-01-01

    A high-precision, high-voltage pulse generator built from high-power IGBTs and pulse transformers and controlled by a computer is described. The simple main-circuit topology employed in this pulse generator reduces cost while still meeting the special requirements of pulsed electric fields (PEFs) in food processing. The pulse generator utilizes a complex programmable logic device (CPLD) to generate trigger signals. Pulse frequency, pulse width and pulse number are controlled by a computer via the RS232 bus. The generator is well suited to the application of non-thermal treatment of fluid food with pulsed electric fields, since its output can be increased and decreased in steps of 1.

  20. Computing lifetimes for battery-powered devices

    NARCIS (Netherlands)

    Jongerden, M.R.; Haverkort, Boudewijn R.H.M.

    The battery lifetime of mobile devices depends on the usage pattern of the battery, next to the discharge rate and the battery capacity. Therefore, it is important to include the usage pattern in battery lifetime computations. We do this by combining a stochastic workload, modeled as a continuous-time Markov model, with a well-known battery model.

  1. Computing lifetimes for battery-powered devices

    NARCIS (Netherlands)

    Jongerden, Marijn; Haverkort, Boudewijn

    2010-01-01

    The battery lifetime of mobile devices depends on the usage pattern of the battery, next to the discharge rate and the battery capacity. Therefore, it is important to include the usage pattern in battery lifetime computations. We do this by combining a stochastic workload, modeled as a continuous-time Markov model, with a well-known battery model.

  2. High-Degree Neurons Feed Cortical Computations.

    Directory of Open Access Journals (Sweden)

    Nicholas M Timme

    2016-05-01

    Full Text Available Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input.
These are the first results to show that the extent to

  3. Computer controlled MHD power consolidation and pulse-generation system

    Science.gov (United States)

    Johnson, R.

    The major goal of this project is to establish the feasibility of a power conversion technology which will permit the direct synthesis of computer-programmable pulse power. Feasibility will be established in this project by demonstration of direct synthesis of commercial-frequency power by means of computer control. The power input to the conversion system is assumed to be a magnetohydrodynamic (MHD) Faraday-connected generator, which may be viewed as a multi-terminal d.c. source. This consolidation/inversion process is referred to subsequently as Pulse-Amplitude-Synthesis-and-Control (PASC). A secondary goal is to deliver a controller subsystem consisting of a computer, software, and a computer interface board which can serve as one of the building blocks for a possible Phase 2 prototype system. This report covers the initial six-month portion of the project and includes discussions of the following areas: (1) selection of a control computer and software tool kit for development of the PASC controller, a contract requirement; (2) problem formulation considerations for simulation of the PASC technique on digital computers; (3) initial simulation results for the PASC transformer, including simulation results obtained using SPICE and the INTEG program; (4) a survey of available gate-turn-off thyristors (GTOs), power semiconductors, power field-effect transistors (PFETs), and fiber-optic signal cabling and transducers.

  4. Optics assembly for high power laser tools

    Science.gov (United States)

    Fraze, Jason D.; Faircloth, Brian O.; Zediker, Mark S.

    2016-06-07

    There is provided a high power laser rotational optical assembly for use with, or in high power laser tools for performing high power laser operations. In particular, the optical assembly finds applications in performing high power laser operations on, and in, remote and difficult to access locations. The optical assembly has rotational seals and bearing configurations to avoid contamination of the laser beam path and optics.

  5. Optics assembly for high power laser tools

    Energy Technology Data Exchange (ETDEWEB)

    Fraze, Jason D.; Faircloth, Brian O.; Zediker, Mark S.

    2016-06-07

    There is provided a high power laser rotational optical assembly for use with, or in high power laser tools for performing high power laser operations. In particular, the optical assembly finds applications in performing high power laser operations on, and in, remote and difficult to access locations. The optical assembly has rotational seals and bearing configurations to avoid contamination of the laser beam path and optics.

  6. Low Power Dendritic Computation for Wordspotting

    Directory of Open Access Journals (Sweden)

    Stephen Nease

    2013-05-01

    Full Text Available In this paper, we demonstrate how a network of dendrites can be used to build the state decoding block of a wordspotter similar to a Hidden Markov Model (HMM) classifier structure. We present simulation and experimental data for a single-line dendrite and also experimental results for a dendrite-based classifier structure. This work builds on previously demonstrated building blocks of a neural network: the channel, synapses and dendrites, using CMOS circuits. These structures can be used for speech and pattern recognition. The computational efficiency of such a system is >10 MMACs/μW, compared to digital systems, which achieve about 10 MMACs/mW.

  7. Coefficient of variation and Power Pen's parade computation

    OpenAIRE

    Sadefo Kamdem, Jules

    2011-01-01

    Under the assumption that income y is a power function of its rank among n individuals, we approximate the coefficient of variation and Gini index as functions of the power degree of the Pen's parade. Reciprocally, for a given coefficient of variation or Gini index, we propose an analytic expression for the degree of the power Pen's parade; we can then compute the Pen's parade.
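The setup in this record admits clean closed forms. Under the continuous approximation y(p) = p^k for rank fraction p ∈ [0, 1] (an assumption matching the abstract, not a quote from the paper), one gets mean = 1/(k+1), CV = k/√(2k+1), and Gini = k/(k+2). The sketch below checks these against a large discrete power Pen's parade.

```python
from math import sqrt

def power_parade(k, n=100_000):
    """Discrete Pen's parade: income as a power of rank, sorted ascending."""
    return [(i / n) ** k for i in range(1, n + 1)]

def cv(ys):
    """Coefficient of variation: std dev / mean."""
    m = sum(ys) / len(ys)
    var = sum((y - m) ** 2 for y in ys) / len(ys)
    return sqrt(var) / m

def gini(ys):
    """Gini index of an ascending-sorted income list."""
    n = len(ys)
    cum = sum((i + 1) * y for i, y in enumerate(ys))
    return 2 * cum / (n * sum(ys)) - (n + 1) / n

k = 2.0
ys = power_parade(k)
cv_closed = k / sqrt(2 * k + 1)    # = 0.894... for k = 2
gini_closed = k / (k + 2)          # = 0.5 for k = 2
```

The inverse direction in the abstract follows directly: given a target Gini G, solve G = k/(k+2) for k = 2G/(1-G) and the parade is recovered.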

  8. Review of Power System Stability with High Wind Power Penetration

    DEFF Research Database (Denmark)

    Hu, Rui; Hu, Weihao; Chen, Zhe

    2015-01-01

    This paper presents an overview of research on power system stability with high wind power penetration, including analyzing methods and improvement approaches. Power system stability issues can be classified diversely according to different considerations, and each classified issue has special analyzing methods and stability improvement approaches. With increasing wind power penetration, system balancing and the reduced inertia may pose a serious threat to the stable operation of power systems. Although the practical and reliable choices for mitigating or eliminating wind impacts in high-penetration systems currently are strong outside connections or sufficient reserve capacity, many novel theories and approaches have been invented to investigate the stability issues, looking forward to extra-high-penetration or totally renewable-resource-based power systems. These analyzing...

  9. ULTRA HIGH POWER TRANSMISSION LINE TECHNIQUES

    Science.gov (United States)

    The ultra-high power transmission line techniques, including both failure mechanisms and component design, are discussed. Failures resulting from...a waveguide. In view of the many advantages of the low-loss mode in circular waveguide for ultra-high power levels, a mode transducer and a two...percent of the peak power of a standard rectangular waveguide. Water cooling is provided for high average power operation. Analysis of mode suppression

  10. Computer vision for high content screening.

    Science.gov (United States)

    Kraus, Oren Z; Frey, Brendan J

    2016-01-01

    High Content Screening (HCS) technologies that combine automated fluorescence microscopy with high throughput biotechnology have become powerful systems for studying cell biology and drug screening. These systems can produce more than 100 000 images per day, making their success dependent on automated image analysis. In this review, we describe the steps involved in quantifying microscopy images and different approaches for each step. Typically, individual cells are segmented from the background using a segmentation algorithm. Each cell is then quantified by extracting numerical features, such as area and intensity measurements. As these feature representations are typically high dimensional (>500), modern machine learning algorithms are used to classify, cluster and visualize cells in HCS experiments. Machine learning algorithms that learn feature representations, in addition to the classification or clustering task, have recently advanced the state of the art on several benchmarking tasks in the computer vision community. These techniques have also recently been applied to HCS image analysis.
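The pipeline steps this review describes (segment cells from background, then extract per-cell numerical features) can be sketched on a toy image. This is an illustrative stdlib-only sketch with invented data, not an HCS tool; real pipelines use dedicated software and far richer feature sets.

```python
def segment(img, thresh):
    """Threshold segmentation + 4-connected component labeling.

    Returns {label: [(row, col), ...]} for each foreground 'cell'.
    """
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    cells, label = {}, 0
    for r in range(rows):
        for c in range(cols):
            if img[r][c] > thresh and not seen[r][c]:
                label += 1
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:                      # flood fill one component
                    pr, pc = stack.pop()
                    pixels.append((pr, pc))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = pr + dr, pc + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and img[nr][nc] > thresh and not seen[nr][nc]):
                            seen[nr][nc] = True
                            stack.append((nr, nc))
                cells[label] = pixels
    return cells

def features(img, pixels):
    """Per-cell feature vector: area and mean intensity."""
    area = len(pixels)
    return {"area": area,
            "mean_intensity": sum(img[r][c] for r, c in pixels) / area}

img = [                       # tiny synthetic fluorescence image
    [0, 9, 9, 0, 0],
    [0, 9, 9, 0, 7],
    [0, 0, 0, 0, 7],
]
cells = segment(img, thresh=5)
feats = [features(img, px) for px in cells.values()]
```

The resulting feature vectors are what downstream machine learning classifiers or clustering algorithms consume, as the review describes.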

  11. Progress and Challenges in High Performance Computer Technology

    Institute of Scientific and Technical Information of China (English)

    Xue-Jun Yang; Yong Dou; Qing-Feng Hu

    2006-01-01

    High performance computers provide strategic computing power for the construction of the national economy and defense, and have become one of the symbols of a country's overall strength. Over 30 years, with government support, high performance computer technology has developed rapidly: computing performance has increased nearly 3 million times and processor counts have expanded more than a million times. To solve the critical issues related to parallel efficiency and scalability, scientific researchers pursued extensive theoretical studies and technical innovations. The paper briefly looks back at the course of building high performance computer systems both at home and abroad, and summarizes the significant breakthroughs in international high performance computer technology. We also overview the technology progress of China in the areas of parallel computer architecture, parallel operating systems and resource management, parallel compilers and performance optimization, and environments for parallel programming and network computing. Finally, we examine the challenging issues, the "memory wall", system scalability and the "power wall", and discuss high productivity computers, which are the trend in building next-generation high performance computers.

  12. Minimizing Power Consumption by Personal Computers: A Technical Survey

    Directory of Open Access Journals (Sweden)

    P. K. Gupta

    2012-09-01

    Full Text Available Recently, the demand for "Green Computing", which represents an environmentally responsible way of reducing power consumption and involves various environmental issues such as waste management and greenhouse gases, has been increasing explosively. We lay great emphasis on the need to minimize power consumption and heat dissipation by computer systems, as well as the requirement for changing the current power scheme options in their operating systems (OS). In this paper, we provide a comprehensive technical review of the existing, though challenging, work on minimizing power consumption by computer systems through various approaches, with emphasis on the software approach using dynamic power management, as it is used by most OSs in their power scheme configurations, seeking a better understanding of power management schemes, current issues, and future directions in this field. Herein, we review the various approaches and techniques, including hardware, software, central processing unit (CPU) usage and algorithmic approaches to power economy. On the basis of analysis and observations, we found that this area still requires a lot of work and needs to focus on new intelligent approaches, so that idle periods of computer systems can be handled intelligently.
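The dynamic power management the survey emphasizes is often a fixed-timeout policy: the machine drops to a low-power state once it has been idle longer than a timeout, paying a wake-up penalty on the next request. The sketch below is an illustrative energy model with invented power numbers, not a policy from the survey.

```python
def dpm_energy(idle_periods, timeout, p_active=40.0, p_sleep=2.0,
               wake_energy=20.0):
    """Energy (J) spent across the given idle periods (seconds)
    under a fixed-timeout dynamic power management policy."""
    total = 0.0
    for idle in idle_periods:
        if idle <= timeout:
            total += idle * p_active             # timeout never expired
        else:
            total += timeout * p_active          # waited out the timeout
            total += (idle - timeout) * p_sleep  # slept the remainder
            total += wake_energy                 # state-transition cost
    return total

idle_periods = [1, 30, 2, 120, 5, 300]           # seconds between requests
no_dpm = sum(idle_periods) * 40.0                # always-on baseline
with_dpm = dpm_energy(idle_periods, timeout=10.0)
```

Sweeping the timeout in such a model exposes the trade-off the survey discusses: too short a timeout wastes energy on wake-up transitions, too long a timeout wastes it staying active.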

  13. Electronic DC transformer with high power density

    NARCIS (Netherlands)

    Pavlovský, M.

    2006-01-01

    This thesis is concerned with the possibilities of increasing the power density of high-power dc-dc converters with galvanic isolation. Three cornerstones for reaching high power densities are identified as: size reduction of passive components, reduction of losses particularly in active components

  14. Aeroelastic modelling without the need for excessive computing power

    Energy Technology Data Exchange (ETDEWEB)

    Infield, D. [Loughborough Univ., Centre for Renewable Energy Systems Technology, Dept. of Electronic and Electrical Engineering, Loughborough (United Kingdom)

    1996-09-01

    The aeroelastic model presented here was developed specifically to represent a wind turbine manufactured by Northern Power Systems which features a passive pitch control mechanism. It was considered that this particular turbine, which also has low-solidity flexible blades and is free yawing, would provide a stringent test of modelling approaches. It was believed that blade element aerodynamic modelling would not be adequate to properly describe the combination of yawed flow, dynamic inflow and unsteady aerodynamics; consequently a wake modelling approach was adopted. In order to keep computation time limited, a highly simplified, semi-free wake approach (developed in previous work) was used. A similarly simple structural model was adopted, with up to only six degrees of freedom in total. In order to take account of blade (flapwise) flexibility, a simple finite element sub-model is used. Good quality data from the turbine has recently been collected and it is hoped to undertake model validation in the near future. (au)

  15. High-Performance Cloud Computing: A View of Scientific Applications

    CERN Document Server

    Vecchiola, Christian; Buyya, Rajkumar

    2009-01-01

    Scientific computing often requires the availability of a massive number of computers for performing large scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and supercomputers, which are difficult to set up, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, as well as applications, can be dynamically provisioned (and integrated within the existing infrastructure) on a pay-per-use basis. These resources can be released when they are no longer needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensures the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. Its flexible and service-based infrastructure...

  16. Low Power Floating Point Computation Sharing Multiplier for Signal Processing Applications

    OpenAIRE

    Sivanantham S; Jagannadha Naidu K; Balamurugan S; Bhuvana Phaneendra D

    2013-01-01

    The design of low-power, high-performance digital signal processing elements is a major requirement in ultra-deep sub-micron technology. This paper presents an IEEE-754 standard compatible single-precision Floating-point Computation SHaring Multiplier (FCSHM) scheme suitable for low-power and high-speed signal processing applications. The floating-point multiplier used at the filter taps effectively uses the computation re-use concept. Experimental results on a 10-tap programmable FIR filter...

  17. High Power Fiber Laser Test Bed

    Data.gov (United States)

    Federal Laboratory Consortium — This facility, unique within DoD, power-combines numerous cutting-edge fiber-coupled laser diode modules (FCLDM) to integrate pumping of high power rare earth-doped...

  18. High power RF solid state power amplifier system

    Science.gov (United States)

    Sims, III, William Herbert (Inventor); Chavers, Donald Gregory (Inventor); Richeson, James J. (Inventor)

    2011-01-01

    A high power, high frequency, solid state power amplifier system includes a plurality of input multiple port splitters for receiving a high-frequency input and for dividing the input into a plurality of outputs and a plurality of solid state amplifier units. Each amplifier unit includes a plurality of amplifiers, and each amplifier is individually connected to one of the outputs of multiport splitters and produces a corresponding amplified output. A plurality of multiport combiners combine the amplified outputs of the amplifiers of each of the amplifier units to a combined output. Automatic level control protection circuitry protects the amplifiers and maintains a substantial constant amplifier power output.

  19. High Energy Computed Tomographic Inspection of Munitions

    Science.gov (United States)

    2016-11-01

    Technical Report AREIS-TR-16006: High Energy Computed Tomographic Inspection of Munitions, final report, November 2016. High energy computed tomography enables inspections that could not otherwise be accomplished by other nondestructive testing methods. Subject terms: radiography, high energy, computed tomography (CT).

  20. High Power Performance of Rod Fiber Amplifiers

    DEFF Research Database (Denmark)

    Johansen, Mette Marie; Michieletto, Mattia; Kristensen, Torben

    2015-01-01

    An improved version of the DMF rod fiber is tested in a high power setup delivering 360 W of stable signal power. Multiple testing degrades the fiber, reducing the transverse modal instability threshold from >360 W to ~290 W.

  1. Associative Memory computing power and its simulation.

    CERN Document Server

    Volpi, G; The ATLAS collaboration

    2014-01-01

    The associative memory (AM) chip is an ASIC device specifically designed to perform ``pattern matching'' at very high speed and with parallel access to memory locations. The most extensive use for such a device will be the ATLAS Fast Tracker (FTK) processor, where more than 8000 chips will be installed in 128 VME boards specifically designed for high throughput in order to exploit the chip's features. Each AM chip will store a database of about 130000 pre-calculated patterns, allowing FTK to use about 1 billion patterns for the whole system, with any data inquiry broadcast to all memory elements simultaneously within the same clock cycle (10 ns); data retrieval time is thus independent of the database size. Speed and size of the system are crucial for real-time High Energy Physics applications, such as the ATLAS FTK processor. Using 80 million channels of the ATLAS tracker, FTK finds tracks within 100 $\mathrm{\mu s}$. The simulation of such a parallelized system is an extremely complex task when executed in comm...

  2. GPU-based high-performance computing for radiation therapy.

    Science.gov (United States)

    Jia, Xun; Ziegenhein, Peter; Jiang, Steve B

    2014-02-21

    Recent developments in radiation therapy demand high computation power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development, and a tremendous amount of study has been conducted in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we first give a brief introduction to the GPU hardware structure and programming model. We then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms is also presented.

  3. High-speed Power Line Communications

    Directory of Open Access Journals (Sweden)

    Matthew N. O. Sadiku,

    2015-11-01

    Full Text Available Power line communication is the idea of using existing power lines for communication purposes. Power line communications (PLC) enables network communication of voice, data, and video directly over power lines. High-speed PLC involves data rates in excess of 10 Mbps. PLC has attracted much attention and has lately become an interesting subject of research.

  4. Repeatable, accurate, and high speed multi-level programming of memristor 1T1R arrays for power efficient analog computing applications.

    Science.gov (United States)

    Merced-Grafals, Emmanuelle J; Dávila, Noraica; Ge, Ning; Williams, R Stanley; Strachan, John Paul

    2016-09-09

    Beyond use as high density non-volatile memories, memristors have potential as synaptic components of neuromorphic systems. We investigated the suitability of tantalum oxide (TaOx) transistor-memristor (1T1R) arrays for such applications, particularly the ability to accurately, repeatedly, and rapidly reach arbitrary conductance states. Programming is performed by applying an adaptive pulsed algorithm that utilizes the transistor gate voltage to control the SET switching operation and increase programming speed of the 1T1R cells. We show the capability of programming 64 conductance levels with well-controlled programming speed and error. The algorithm is also utilized to program 16 conductance levels on a population of cells in the 1T1R array, showing robustness to cell-to-cell variability. In general, the proposed algorithm results in approximately 10× improvement in programming speed over standard algorithms that do not use the transistor gate to control memristor switching. In addition, after only two programming pulses (an initialization pulse followed by a programming pulse), the resulting conductance values are within 12% of the target values in all cases. Finally, endurance of more than 10^6 cycles is shown through open-loop (single pulses) programming across multiple conductance levels using the optimized gate voltage of the transistor. These results are relevant for applications that require high speed, accurate, and repeatable programming of the cells, such as in neural networks and analog data processing.
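    The closed-loop idea this abstract describes can be sketched in a few lines. Everything below is an illustrative assumption for a toy simulation: the cell model, threshold, voltages, and step sizes are invented, not the paper's actual TaOx device parameters or algorithm constants.

```python
class ToyCell:
    """Hypothetical 1T1R cell: each SET pulse raises conductance by an
    amount that grows with the transistor gate voltage (toy model)."""
    def __init__(self):
        self.g = 10e-6  # initial conductance in siemens (assumed)

    def set_pulse(self, v_gate):
        # Toy SET response: no change below an assumed 0.5 V threshold
        self.g += 40e-6 * max(0.0, v_gate - 0.5)

def program(cell, target_g, tol=0.12, v_gate=0.6, v_step=0.02, max_pulses=100):
    """Closed-loop programming: apply a SET pulse, read back the
    conductance, and nudge the gate voltage up after every miss."""
    for pulse in range(1, max_pulses + 1):
        cell.set_pulse(v_gate)
        if abs(cell.g - target_g) / target_g <= tol:
            return pulse, cell.g  # reached the tolerance band
        v_gate += v_step  # adaptive step on the transistor gate
    return max_pulses, cell.g

cell = ToyCell()
pulses, g = program(cell, target_g=100e-6)
print(pulses, g)  # reaches the 12% band after a small number of pulses
```

    In a real array, the pulse and read-back operations would address hardware through the transistor gate; the toy cell merely stands in so the control loop's convergence behavior can be observed.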

  5. Repeatable, accurate, and high speed multi-level programming of memristor 1T1R arrays for power efficient analog computing applications

    Science.gov (United States)

    Merced-Grafals, Emmanuelle J.; Dávila, Noraica; Ge, Ning; Williams, R. Stanley; Strachan, John Paul

    2016-09-01

    Beyond use as high density non-volatile memories, memristors have potential as synaptic components of neuromorphic systems. We investigated the suitability of tantalum oxide (TaOx) transistor-memristor (1T1R) arrays for such applications, particularly the ability to accurately, repeatedly, and rapidly reach arbitrary conductance states. Programming is performed by applying an adaptive pulsed algorithm that utilizes the transistor gate voltage to control the SET switching operation and increase programming speed of the 1T1R cells. We show the capability of programming 64 conductance levels, and the algorithm is also utilized to program 16 conductance levels on a population of cells in the 1T1R array, showing robustness to cell-to-cell variability. In general, the proposed algorithm results in approximately 10× improvement in programming speed over standard algorithms that do not use the transistor gate to control memristor switching. In addition, after only two programming pulses (an initialization pulse followed by a programming pulse), the resulting conductance values are within 12% of the target values in all cases. Finally, endurance of more than 10^6 cycles is shown through open-loop (single pulses) programming across multiple conductance levels using the optimized gate voltage of the transistor. These results are relevant for applications that require high speed, accurate, and repeatable programming of the cells, such as in neural networks and analog data processing.

  6. Computing support for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Avery, P.; Yelton, J. [Univ. of Florida, Gainesville, FL (United States)

    1996-12-01

    This computing proposal (Task S) is submitted separately but in support of the High Energy Experiment (CLEO, Fermilab, CMS) and Theory tasks. The authors have built a very strong computing base at Florida over the past 8 years. In fact, computing has been one of the main contributions to their experimental collaborations, involving not just computing capacity for running Monte Carlo simulations and data reduction, but participation in many computing initiatives, industrial partnerships, computing committees, and collaborations. These facts justify the submission of a separate computing proposal.

  7. Computer- Aided Design in Power Engineering Application of Software Tools

    CERN Document Server

    Stojkovic, Zlatan

    2012-01-01

    This textbook demonstrates the application of software tools in solving a series of problems from the field of designing power system structures and systems. It contains four chapters: The first chapter leads the reader through all the phases necessary in the procedure of computer-aided modeling and simulation, guiding through complex problems on the basis of eleven original examples. The second chapter presents the application of software tools in power system calculations and power system equipment design. Several design example calculations are carried out using tools such as MATLAB, EMTP/ATP, Excel & Access, AutoCAD and Simulink. The third chapter focuses on graphical documentation using a collection of software tools (AutoCAD, EPLAN, SIMARIS SIVACON, SIMARIS DESIGN) which enable the complete automation of developing the graphical documentation of power systems. The fourth chapter covers the application of software tools in project management in power systems ...

  8. Load flow computations in hybrid transmission - distributed power systems

    NARCIS (Netherlands)

    Wobbes, E.D.; Lahaye, D.J.P.

    2013-01-01

    We interconnect transmission and distribution power systems and perform load flow computations in the hybrid network. In the largest example we managed to build, fifty copies of a distribution network consisting of fifteen nodes are connected to the UCTE study model, resulting in a system consisting...
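    As background, a load flow computation solves for the complex bus voltages that balance injected power against the network admittances. The minimal Gauss-Seidel sketch below, on an invented two-bus network, illustrates the kind of computation being performed on the hybrid network; it is not the authors' solver, and the network data are assumptions for illustration.

```python
import numpy as np

def gauss_seidel_loadflow(Y, S, iters=200):
    """Minimal Gauss-Seidel load flow. Bus 0 is the slack bus held at
    1.0 p.u.; every other bus is a PQ bus with complex injection S[i].
    Iterates V_i = (conj(S_i / V_i) - sum_{k != i} Y_ik V_k) / Y_ii."""
    n = Y.shape[0]
    V = np.ones(n, dtype=complex)
    for _ in range(iters):
        for i in range(1, n):
            others = sum(Y[i, k] * V[k] for k in range(n) if k != i)
            V[i] = (np.conj(S[i] / V[i]) - others) / Y[i, i]
    return V

# Invented two-bus network: one line of 0.1 p.u. reactance,
# a 0.5 + j0.2 p.u. load at bus 1 (load = negative injection).
y = 1 / 0.1j
Y = np.array([[y, -y], [-y, y]])
S = np.array([0.0, -(0.5 + 0.2j)])
V = gauss_seidel_loadflow(Y, S)
print(abs(V[1]))  # load-bus voltage magnitude sags slightly below 1 p.u.
```

    Production solvers for networks of this size use Newton-Raphson with sparse linear algebra; Gauss-Seidel is shown only because it exposes the underlying power-balance equations most directly.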

  9. Low Power Floating Point Computation Sharing Multiplier for Signal Processing Applications

    Directory of Open Access Journals (Sweden)

    Sivanantham S

    2013-04-01

    Full Text Available Design of low power, higher performance digital signal processing elements is a major requirement in ultra deep sub-micron technology. This paper presents an IEEE-754 standard compatible single precision Floating-point Computation SHaring Multiplier (FCSHM) scheme suitable for low-power and high-speed signal processing applications. The floating-point multiplier used at the filter taps effectively uses the computation re-use concept. Experimental results on a 10-tap programmable FIR filter show that the proposed multiplier scheme can provide a power reduction of 39.7% and significant improvements in performance compared to conventional floating-point carry-save array multiplier implementations.
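    The computation re-use idea behind computation-sharing multipliers can be illustrated with integers: precompute the products of the input sample with a small "alphabet" of values once, then assemble every coefficient product from shifted table entries. This is a hedged, integer-only illustration of the concept, not the paper's IEEE-754 floating-point FCSHM design; the alphabet width and helper names are invented.

```python
def shared_products(x, coeffs, bits=4):
    """Computation-sharing sketch: multiply one input sample x by many
    non-negative integer coefficients, reusing a single precomputed
    table of x * a for every `bits`-bit alphabet value a."""
    table = [a * x for a in range(1 << bits)]  # shared precomputation
    products = []
    for c in coeffs:
        acc, shift = 0, 0
        while c:
            nibble = c & ((1 << bits) - 1)  # next alphabet symbol of c
            acc += table[nibble] << shift   # reuse the shared product
            c >>= bits
            shift += bits
        products.append(acc)
    return products

print(shared_products(7, [13, 300, 4095]))  # → [91, 2100, 28665]
```

    Each multiplication now costs only table lookups, shifts, and adds; the products shared in the table are computed once for all taps, which is the general source of the power savings the abstract reports (applied, in hardware, to floating-point significands rather than plain integers).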

  10. High Average Power Yb:YAG Laser

    Energy Technology Data Exchange (ETDEWEB)

    Zapata, L E; Beach, R J; Payne, S A

    2001-05-23

    We are working on a composite thin-disk laser design that can be scaled as a source of high brightness laser power for tactical engagement and other high average power applications. The key component is a diffusion-bonded composite comprising a thin gain-medium and thicker cladding that is strikingly robust and resolves prior difficulties with high average power pumping/cooling and the rejection of amplified spontaneous emission (ASE). In contrast to high power rods or slabs, the one-dimensional nature of the cooling geometry and the edge-pump geometry scale gracefully to very high average power. The crucial design ideas have been verified experimentally. Progress this last year included: extraction with high beam quality using a telescopic resonator, a heterogeneous thin film coating prescription that meets the unusual requirements demanded by this laser architecture, thermal management with our first generation cooler. Progress was also made in design of a second-generation laser.

  11. High Power/High Temperature Battery Development

    Science.gov (United States)

    1992-09-01

    The bipolar configuration permits cells to be closely packed, share cell walls, and combine the functions of wall and intercell connector. ...

  12. Very High Power THz Radiation Sources

    OpenAIRE

    Carr, G.L.; Martin, M. C.; McKinney, W.R.; Jordan, K.; Neil, G. R.; Williams, G. P.

    2003-01-01

    We report the production of high power (20 watts average, ~1 megawatt peak) broadband THz light based on coherent emission from relativistic electrons. Such sources are ideal for imaging, for high power damage studies and for studies of non-linear phenomena in this spectral range. We describe the source, presenting theoretical calculations and their experimental verification. For clarity we compare this source with one based on ultrafast laser techniques.

  13. Packaging of high power semiconductor lasers

    CERN Document Server

    Liu, Xingsheng; Xiong, Lingling; Liu, Hui

    2014-01-01

    This book introduces high power semiconductor laser packaging design. The characteristics and challenges of the design and various packaging, processing, and testing techniques are detailed by the authors. New technologies, in particular thermal technologies, current applications, and trends in high power semiconductor laser packaging are described at length and assessed.

  14. High power laser perforating tools and systems

    Science.gov (United States)

    Zediker, Mark S; Rinzler, Charles C; Faircloth, Brian O; Koblick, Yeshaya; Moxley, Joel F

    2014-04-22

    Systems, devices and methods for the transmission of 1 kW or more of laser energy deep into the earth and for the suppression of associated nonlinear phenomena, and systems, devices and methods for the laser perforation of a borehole in the earth. These systems can deliver high power laser energy down a deep borehole while maintaining the high power needed to perforate such boreholes.

  15. Evolution of Very High Frequency Power Supplies

    DEFF Research Database (Denmark)

    Knott, Arnold; Andersen, Toke Meyer; Kamby, Peter

    2013-01-01

    The ongoing demand for smaller and lighter power supplies is driving the motivation to increase the switching frequencies of power converters. Drastic increases, however, come along with new challenges, namely the increase of switching losses in all components. The application of power circuits used in radio frequency transmission equipment helps to overcome those. However, those circuits were not designed to meet the same requirements as power converters. This paper summarizes the contributions of recent years in the application of very high frequency (VHF) technologies in power electronics, shows results...

  16. High-power optics lasers and applications

    CERN Document Server

    Apollonov, Victor V

    2015-01-01

    This book covers the basics, realization and materials for high power laser systems and high power radiation interaction with  matter. The physical and technical fundamentals of high intensity laser optics and adaptive optics and the related physical processes in high intensity laser systems are explained. A main question discussed is: What is power optics? In what way is it different from ordinary optics widely used in cameras, motion-picture projectors, i.e., for everyday use? An undesirable consequence of the thermal deformation of optical elements and surfaces was discovered during studies of the interaction with powerful incident laser radiation. The requirements to the fabrication, performance and quality of optical elements employed within systems for most practical applications are also covered. The high-power laser performance is generally governed by the following: (i) the absorption of incident optical radiation (governed primarily by various absorption mechanisms), (ii) followed by a temperature ...

  17. High temperature power electronics for space

    Science.gov (United States)

    Hammoud, Ahmad N.; Baumann, Eric D.; Myers, Ira T.; Overton, Eric

    1991-01-01

    A high temperature electronics program at NASA Lewis Research Center focuses on dielectric and insulating materials research, development and testing of high temperature power components, and integration of the developed components and devices into a demonstrable 200 C power system, such as an inverter. An overview of the program and a description of the in-house high temperature facilities, along with experimental data obtained on high temperature materials, are presented.

  18. High-level language computer architecture

    CERN Document Server

    Chu, Yaohan

    1975-01-01

    High-Level Language Computer Architecture offers a tutorial on high-level language computer architecture, including von Neumann architecture and syntax-oriented architecture as well as direct and indirect execution architecture. Design concepts of Japanese-language data processing systems are discussed, along with the architecture of stack machines and the SYMBOL computer system. The conceptual design of a direct high-level language processor is also described.Comprised of seven chapters, this book first presents a classification of high-level language computer architecture according to the pr

  19. High-Productivity Computing in Computational Physics Education

    Science.gov (United States)

    Tel-Zur, Guy

    2011-03-01

    We describe the development of a new course in Computational Physics at Ben-Gurion University. This elective course for 3rd-year undergraduates and MSc students is taught during one semester. Computational Physics is by now well accepted as the Third Pillar of Science. This paper's claim is that modern Computational Physics education should also deal with High-Productivity Computing. The traditional approach to teaching Computational Physics emphasizes ``Correctness'' and then ``Accuracy,'' and we add ``Performance.'' Along with topics in Mathematical Methods and case studies in Physics, the course devotes a significant amount of time to ``Mini-Courses'' on topics such as: High-Throughput Computing - Condor, Parallel Programming - MPI and OpenMP, How to Build a Beowulf, Visualization, and Grid and Cloud Computing. The course intends to teach neither new physics nor new mathematics; it is focused on an integrated approach to solving problems, starting from the physics problem, through the corresponding mathematical solution and the numerical scheme, to writing an efficient computer code and finally analysis and visualization.

  20. Powering the High-Luminosity Triplets

    CERN Document Server

    Ballarino, A

    2015-01-01

    The powering of the magnets in the LHC High-Luminosity Triplets requires production and transfer of more than 150 kA of DC current. High precision power converters will be adopted, and novel High Temperature Superconducting (HTS) current leads and MgB2 based transfer lines will provide the electrical link between the power converters and the magnets. This chapter gives an overview of the systems conceived in the framework of the LHC High-Luminosity upgrade for feeding the superconducting magnet circuits. The focus is on requirements, challenges and novel developments.

  1. High power solid state switches

    Science.gov (United States)

    Gundersen, Martin

    1991-11-01

    We have successfully produced an optically triggered thyristor based on Gallium Arsenide, developed a model for breakdown, and are developing two related devices, including a Gallium Arsenide based static induction thyristor. We are getting at the basic limitations of Gallium Arsenide for these applications and are developing models for the physical processes that will determine device limitations. The previously supported gas phase work - resulting in the back-lighted thyratron (BLT) - has actually resulted in a very changed view of how switching can be accomplished, and this is impacting the design of important machines. The BLT is being studied internationally: in Japan for laser fusion and laser isotope separation. ITT has built a BLT that has switched 30 kA at 60 kV in testing at NSWC Dahlgren, and the device is being commercialized by another American company. Versions of the switch are now being tested for excimer laser and other applications. Basically, the switch, which arose from pulsed power physics studies at USC, can switch more current faster (higher di/dt), with less housekeeping, and with other advantageous properties. There are a large number of other new applications, including kinetic energy weapons, pulsed microwave sources and RF accelerators.

  2. High-performance scientific computing

    CERN Document Server

    Berry, Michael W; Gallopoulos, Efstratios

    2012-01-01

    This book presents the state of the art in parallel numerical algorithms, applications, architectures, and system software. The book examines various solutions for issues of concurrency, scale, energy efficiency, and programmability, which are discussed in the context of a diverse range of applications. Features: includes contributions from an international selection of world-class authorities; examines parallel algorithm-architecture interaction through issues of computational capacity-based codesign and automatic restructuring of programs using compilation techniques; reviews emerging applic

  3. High-Temperature Passive Power Electronics

    Science.gov (United States)

    1997-01-01

    In many future NASA missions - such as deep-space exploration, the National AeroSpace Plane, minisatellites, integrated engine electronics, and ion or arcjet thrusters - high-power electrical components and systems must operate reliably and efficiently in high-temperature environments. The high-temperature power electronics program at the NASA Lewis Research Center focuses on dielectric and insulating material research, the development and characterization of high-temperature components, and the integration of the developed components into a demonstrable 200 C power system - such as an inverter. NASA Lewis has developed high-temperature power components through collaborative efforts with the Air Force Wright Laboratory, Northrop Grumman, and the University of Wisconsin. Ceramic and film capacitors, molypermalloy powder inductors, and a coaxially wound transformer were designed, developed, and evaluated for high-temperature operation.

  4. Low Power Design with High-Level Power Estimation and Power-Aware Synthesis

    CERN Document Server

    Ahuja, Sumit; Shukla, Sandeep Kumar

    2012-01-01

    Low-power ASIC/FPGA based designs are important due to the need for extended battery life, reduced form factor, and lower packaging and cooling costs for electronic devices. These products require fast turnaround time because of the increasing demand for handheld electronic devices such as cell phones and PDAs and for high performance machines for data centers. To achieve short time to market, design flows must facilitate a much shortened time-to-product requirement. High-level modeling, architectural exploration and direct synthesis of designs from high-level descriptions enable this design process. This book presents novel research techniques, algorithms, methodologies and experimental results for high-level power estimation and power-aware high-level synthesis. Readers will learn to apply such techniques to enable design flows resulting in shorter time to market and successful low-power ASIC/FPGA designs. Integrates power estimation and reduction for high level synthesis, with low-power, high-level design; Shows spec...

  5. High Power Picosecond Laser Pulse Recirculation

    Energy Technology Data Exchange (ETDEWEB)

    Shverdin, M Y; Jovanovic, I; Semenov, V A; Betts, S M; Brown, C; Gibson, D J; Shuttlesworth, R M; Hartemann, F V; Siders, C W; Barty, C P

    2010-04-12

    We demonstrate a nonlinear crystal-based short pulse recirculation cavity for trapping the second harmonic of an incident high power laser pulse. This scheme aims to increase the efficiency and flux of Compton-scattering based light sources. We demonstrate up to 36x average power enhancement of frequency doubled sub-millijoule picosecond pulses, and 17x average power enhancement of 177 mJ, 10 ps, 10 Hz pulses.

  6. High-power picosecond laser pulse recirculation.

    Science.gov (United States)

    Shverdin, M Y; Jovanovic, I; Semenov, V A; Betts, S M; Brown, C; Gibson, D J; Shuttlesworth, R M; Hartemann, F V; Siders, C W; Barty, C P J

    2010-07-01

    We demonstrate a nonlinear crystal-based short pulse recirculation cavity for trapping the second harmonic of an incident high-power laser pulse. This scheme aims to increase the efficiency and flux of Compton-scattering-based light sources. We demonstrate up to 40x average power enhancement of frequency-doubled submillijoule picosecond pulses, and 17x average power enhancement of 177 mJ, 10 ps, 10 Hz pulses.

  7. On Computational Power of Quantum Read-Once Branching Programs

    Directory of Open Access Journals (Sweden)

    Farid Ablayev

    2011-03-01

    Full Text Available In this paper we review our current results concerning the computational power of quantum read-once branching programs. First of all, based on the circuit presentation of quantum branching programs and our variant of the quantum fingerprinting technique, we show that any Boolean function with a linear polynomial presentation can be computed by a quantum read-once branching program using a relatively small (usually logarithmic in the size of the input) number of qubits. Then we show that the described class of Boolean functions is closed under polynomial projections.

  8. Introduction to High Performance Scientific Computing

    OpenAIRE

    2016-01-01

    The field of high performance scientific computing lies at the crossroads of a number of disciplines and skill sets; correspondingly, for someone to be successful at using high performance computing in science, at least elementary knowledge of and skills in all these areas are required. Computations stem from an application context, so some acquaintance with physics and engineering sciences is desirable. Then, problems in these application areas are typically translated into linear algebraic, ...

  9. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    Energy Technology Data Exchange (ETDEWEB)

    Mike Bockelie; Dave Swensen; Martin Denison; Adel Sarofim; Connie Senior

    2004-12-22

    ... immersive environment. The Virtual Engineering Framework (VEF), in effect a prototype framework, was developed through close collaboration with NETL supported research teams from Iowa State University Virtual Reality Applications Center (ISU-VRAC) and Carnegie Mellon University (CMU). The VEF is open source, compatible across systems ranging from inexpensive desktop PCs to large-scale, immersive facilities and provides support for heterogeneous distributed computing of plant simulations. The ability to compute plant economics through an interface that coupled the CMU IECM tool to the VEF was demonstrated, and the ability to couple the VEF to Aspen Plus, a commercial flowsheet modeling tool, was demonstrated. Models were interfaced to the framework using VES-Open. Tests were performed for interfacing CAPE-Open-compliant models to the framework. Where available, the developed models and plant simulations have been benchmarked against data from the open literature. The VEF has been installed at NETL. The VEF provides simulation capabilities not available in commercial simulation tools. It provides DOE engineers, scientists, and decision makers with a flexible and extensible simulation system that can be used to reduce the time, technical risk, and cost to develop the next generation of advanced, coal-fired power systems that will have low emissions and high efficiency. Furthermore, the VEF provides a common simulation system that NETL can use to help manage Advanced Power Systems Research projects, including both combustion- and gasification-based technologies.

  10. Multi-Functional Micro Projection Device as Screen Substitute for Low Power Consumption Computing

    Directory of Open Access Journals (Sweden)

    Zeev Zalevsky

    2012-03-01

    Full Text Available One of the major power consuming components in a computer is its display unit. On average the screen consumes ten times more power than the DSP processor itself. Thus, reducing display power consumption should be one of the most important tasks in the development of low power consumption computing systems. In this paper we present one possible solution involving a micro projection device based upon lasers and a digital light processing (DLP) matrix, a matrix of electrically controllable mirrors capable of translating an electrical signal into a time-varying projected image. It can serve as a screen substitute and consumes ten times less power than a conventional screen. The described device is a multifunctional, highly efficient, customized DLP light engine capable of serving as an image projector while simultaneously supporting range and topography estimation measurements.

  11. High Power Co-Axial Coupler

    Energy Technology Data Exchange (ETDEWEB)

    Neubauer, M. [Muons, Inc.; Dudas, A. [Muons, Inc.; Rimmer, Robert A. [JLAB; Guo, Jiquan [JLAB; Williams, R. Scott [JLAB

    2013-12-01

    A very high power coax RF coupler (MW-level) is very desirable for a number of accelerator and commercial applications. For example, the development of such a coupler operating at 1.5 GHz may permit the construction of a higher-luminosity version of the Electron-Ion Collider (EIC) being planned at JLab. Muons, Inc. is currently funded by a DOE STTR grant to develop a 1.5-GHz high-power double-window coax coupler with JLab (about 150 kW). Excellent progress has been made on this R&D project, so we propose an extension of this development to build a very high power coax coupler (MW-level peak power and a maximum duty factor of about 4%). The dimensions of the current coax coupler will be scaled up to provide higher power capability.

  12. High-power atomic xenon laser

    NARCIS (Netherlands)

    Witteman, W.J.; Peters, P.J.M.; Botma, H.; Botma, H.; Tskhai, S.N.; Udalov, Yu.B.; Mei, Q.C.; Mei, Qi-Chu; Ochkin, V.N.

    1995-01-01

    The high pressure atomic xenon laser is becoming the most promising light source in the wavelength region of a few microns. The merits are high efficiency (so far up to 8 percent), high output energies (15 J/liter at 9 bar), high continuous output power (more than 200 W/liter), no gas dissociation a...

  13. High average-power induction linacs

    Energy Technology Data Exchange (ETDEWEB)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.

    1989-03-15

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ~50-ns duration pulses to >100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs.

  14. China's High Performance Computer Standard Commission Established

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    China's High Performance Computer Standard Commission was established on March 28, 2007, under the guidance of the Science and Technology Bureau of the Ministry of Information Industry. It will prepare relevant professional standards on high performance computers to break the monopoly in the field held by foreign manufacturers and vendors.

  15. Advances in Very High Frequency Power Conversion

    DEFF Research Database (Denmark)

    Kovacevic, Milovan

    Excellent performance and small size of magnetic components and capacitors at very high frequencies, along with constant advances in the performance of power semiconductor devices, suggest a sizable shift of the consumer power supply market into this area in the near future. To operate dc-dc converter power ... to be applied, especially at low power levels where gating loss becomes a significant percentage of the total loss budget. Various resonant gate drive methods have been proposed to address this design challenge, with varying size, cost, and complexity. This dissertation presents a self-oscillating resonant gate ...

  16. Computer science of the high performance; Informatica del alto rendimiento

    Energy Technology Data Exchange (ETDEWEB)

    Moraleda, A.

    2008-07-01

    High performance computing is taking shape as a powerful accelerator of innovation, drastically reducing waiting times for results and findings in a growing number of processes and activities as complex and important as medicine, genetics, pharmacology, the environment, natural resource management, and the simulation of complex processes in a wide variety of industries. (Author)

  17. Highly-efficient high-power pumps for fiber lasers

    Science.gov (United States)

    Gapontsev, V.; Moshegov, N.; Berezin, I.; Komissarov, A.; Trubenko, P.; Miftakhutdinov, D.; Berishev, I.; Chuyanov, V.; Raisky, O.; Ovtchinnikov, A.

    2017-02-01

    We report on high efficiency multimode pumps that enable ultra-high efficiency high power ECO Fiber Lasers. We discuss chip and packaged pump design and performance. Peak out-of-fiber power efficiency of ECO Fiber Laser pumps was reported to be as high as 68% and was achieved with passive cooling. For applications that do not require Fiber Lasers with ultimate power efficiency, we have developed passively cooled pumps with out-of-fiber power efficiency greater than 50%, maintained at operating current up to 22A. We report on approaches to diode chip and packaged pump design that possess such performance.

  18. Hot Chips and Hot Interconnects for High End Computing Systems

    Science.gov (United States)

    Saini, Subhash

    2005-01-01

    I will discuss several processors: 1. the Cray proprietary processor used in the Cray X1; 2. the IBM Power3 and Power4 used in IBM SP3 and SP4 systems; 3. the Intel Itanium and Xeon, used in SGI Altix systems and clusters, respectively; 4. the IBM System-on-a-Chip used in IBM BlueGene/L; 5. the HP Alpha EV68 processor used in the DOE ASCI Q cluster; 6. the SPARC64 V processor, used in the Fujitsu PRIMEPOWER HPC2500; 7. an NEC proprietary processor used in the NEC SX-6/7; 8. the Power4+ processor used in the Hitachi SR11000; and 9. the NEC proprietary processor used in the Earth Simulator. The IBM POWER5 and Red Storm computing systems will also be discussed. The architectures of these processors will first be presented, followed by the interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF, and hybrid MPI + OpenMP). The tutorial will conclude with a discussion of general trends in the field of high performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).

  19. Overview of space power electronic's technology under the CSTI High Capacity Power Program

    Science.gov (United States)

    Schwarze, Gene E.

    1994-01-01

    The Civilian Space Technology Initiative (CSTI) is a NASA program targeted at the development of specific technologies in the areas of transportation, operations, and science. Each of these three areas consists of major elements; one of the operations elements is the High Capacity Power element. The goal of this element is to develop the technology base needed to meet the long-duration, high-capacity power requirements of future NASA initiatives. The High Capacity Power element is broken down into several subelements that include energy conversion (the free-piston Stirling power converter and thermoelectrics), thermal management, power management, system diagnostics, and environmental compatibility and system lifetime. A recent overview of the CSTI High Capacity Power element and a description of each of the program's subelements is given by Winter (1989). The goals of the Power Management subelement are twofold. The first is to develop, test, and demonstrate high-temperature, radiation-resistant power and control components and circuits that will be needed in the Power Conditioning, Control and Transmission (PCCT) subsystem of a space nuclear power system. The results obtained under this goal will also be applicable to the instrumentation and control subsystem of a space nuclear reactor. These components and circuits must perform reliably for lifetimes of 7-10 years. The second goal is to develop analytical models for use in computer simulations of candidate PCCT subsystems. Circuits required for a specific PCCT subsystem will be designed and built to demonstrate their performance and also to validate the analytical models and simulations. The tasks under the Power Management subelement are described in terms of objectives, approach, and present status of work.


  1. High power density carbonate fuel cell

    Energy Technology Data Exchange (ETDEWEB)

    Yuh, C.; Johnsen, R.; Doyon, J.; Allen, J. [Energy Research Corp., Danbury, CT (United States)

    1996-12-31

    The carbonate fuel cell is a highly efficient and environmentally clean source of power generation, and many organizations worldwide are actively pursuing development of the technology. Field demonstration of a multi-MW power plant was initiated in 1996, a step toward commercialization before the turn of the century. Energy Research Corporation (ERC) is planning to introduce a 2.85-MW commercial fuel cell power plant with an efficiency of 58%, which is quite attractive for distributed power generation. However, to further expand its competitive edge over alternative systems and to achieve wider market penetration, ERC is exploring advanced carbonate fuel cells having significantly higher power densities. A more compact power plant would also stimulate interest in new markets, such as ships and submarines, where space limitations exist. The activities focused on reducing cell polarization and internal resistance as well as on advanced thin cell components.

  2. A Latency-Tolerant Partitioner for Distributed Computing on the Information Power Grid

    Science.gov (United States)

    Das, Sajal K.; Harvey, Daniel J.; Biwas, Rupak; Kwak, Dochan (Technical Monitor)

    2001-01-01

    NASA's Information Power Grid (IPG) is an infrastructure designed to harness the power of geographically distributed computers, databases, and human expertise in order to solve large-scale realistic computational problems. This type of meta-computing environment is necessary to present a unified virtual machine to application developers, hiding the intricacies of a highly heterogeneous environment while maintaining adequate security. In this paper, we present a novel partitioning scheme, called MinEX, that dynamically balances processor workloads while minimizing data movement and runtime communication for applications executed in a parallel distributed fashion on the IPG. We also analyze the conditions required for the IPG to be an effective tool for such distributed computations. Our results show that MinEX is a viable load balancer provided the nodes of the IPG are connected by a high-speed asynchronous interconnection network.
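
    As a caricature of the partitioning problem MinEX addresses, the sketch below greedily assigns weighted tasks to processors while charging a penalty for communication edges cut across processors. The cost model, names, and greedy strategy are invented for illustration; they are not the published MinEX algorithm.

```python
# Hypothetical greedy partitioner in the spirit of MinEX (illustrative only):
# balance compute load while penalizing runtime communication across nodes.
def partition(tasks, edges, n_procs):
    """tasks: {task: compute weight}; edges: {(a, b): communication weight}."""
    assign, load = {}, [0.0] * n_procs
    for t in sorted(tasks, key=tasks.get, reverse=True):  # heaviest first
        best, best_cost = None, None
        for p in range(n_procs):
            # charge communication for neighbors already placed elsewhere;
            # unplaced neighbors are optimistically assumed co-located
            comm = sum(w for (a, b), w in edges.items()
                       if (a == t and assign.get(b, p) != p)
                       or (b == t and assign.get(a, p) != p))
            cost = load[p] + tasks[t] + comm
            if best_cost is None or cost < best_cost:
                best, best_cost = p, cost
        assign[t] = best
        load[best] = best_cost
    return assign, load
```

    With four independent tasks of weights 4, 3, 2, and 1 on two processors, the greedy pass ends with both processors equally loaded.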

  3. High Power Density Power Electronic Converters for Large Wind Turbines

    DEFF Research Database (Denmark)

    Senturk, Osman Selcuk

    In large wind turbines (in the MW and multi-MW ranges), which are extensively utilized in wind power plants, full-scale medium-voltage (MV) multilevel (ML) voltage source converters (VSCs) are nowadays increasingly preferred for interfacing these wind turbines with electricity grids ... assessments of these specific VSCs so that their power densities and reliabilities are quantitatively determined, which requires extensive utilization of the electro-thermal models of the VSCs under investigation. In this thesis, the three-level neutral-point-clamped VSCs (3L-NPC-VSCs), which are classified ...-HB-VSCs). As the switch technology for realizing these 3L-VSCs, press-pack IGBTs are chosen to ensure high power density and reliability. Based on the selected 3L-VSCs and switch technology, the converter electro-thermal models are developed comprehensively, implemented practically, and validated via a full-scale 3L...

  4. High Power Helicon Plasma Propulsion Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed work seeks to develop and optimize an electrode-less plasma propulsion system that is based on a high power helicon (HPH) that is being developed...

  5. High Power Helicon Plasma Propulsion Project

    Data.gov (United States)

    National Aeronautics and Space Administration — A new thruster has been conceived and tested that is based on a high power helicon (HPH) plasma wave. In this new method of propulsion, an antenna generates and...

  6. Drivers for High Power Laser Diodes

    Institute of Scientific and Technical Information of China (English)

    Yankov P; Todorov D; Saramov E

    2006-01-01

    During the last year, high power laser diodes jumped over the 1 kW level of CW power for a stack, and commercial 1 cm bars reached 100 W output optical power at the standard wavelengths around 800 nm and 980 nm. Prices are reaching industry-acceptable levels, and all Nd:YAG and fiber industrial laser manufacturers have developed kW prototypes. These achievements have set new requirements for power supply manufacturers: high and stable output current, and fast control of the driving current while keeping the expensive laser diode safe. Fast switching frequencies also allow long-range free-space communications and optical range finding, as well as the design of a high-resolution 3D laser radar and other military applications. The prospects for direct laser diode micromachining are also attractive.

  7. Coupling output of multichannel high power microwaves

    Science.gov (United States)

    Li, Guolin; Shu, Ting; Yuan, Chengwei; Zhang, Jun; Yang, Jianhua; Jin, Zhenxing; Yin, Yi; Wu, Dapeng; Zhu, Jun; Ren, Heming; Yang, Jie

    2010-12-01

    The coupling output of multichannel high power microwaves is a promising technique for the development of high power microwave technologies, as it can enhance the output capacities of presently studied devices. Based on investigations of the spatial filtering method and the waveguide filtering method, the hybrid filtering method is proposed for the coupling output of multichannel high power microwaves. As an example, a specific structure is designed for the coupling output of S/X/X-band three-channel high power microwaves and investigated with the hybrid filtering method. In the experiments, a pulse of 4 GW X-band beat waves and a pulse of 1.8 GW S-band microwave were obtained.

  8. High power regenerative laser amplifier

    Science.gov (United States)

    Miller, J.L.; Hackel, L.A.; Dane, C.B.; Zapata, L.E.

    1994-02-08

    A regenerative amplifier design capable of operating at high energy per pulse (for instance, 20-100 J) at moderate repetition rates (for instance, 5-20 Hz) is provided. The laser amplifier comprises a gain medium and a source of pump energy coupled with the gain medium; a Pockels cell, which rotates an incident beam in response to application of a control signal; an optical relay system defining a first relay plane near the gain medium and a second relay plane near the rotator; and a plurality of reflectors configured to define an optical path through the gain medium, optical relay, and Pockels cell, such that each transit of the optical path includes at least one pass through the gain medium and only one pass through the Pockels cell. An input coupler and an output coupler are provided, implemented by a single polarizer. A control circuit coupled to the Pockels cell generates the control signal in timed relationship with the input pulse so that the input pulse is captured by the input coupler and proceeds through at least one transit of the optical path; the control signal is then applied to rotate the pulse to a polarization reflected by the polarizer, after which the captured pulse passes through the gain medium at least once more and is reflected out of the optical path by the polarizer before passing through the rotator again, providing an amplified pulse. 7 figures.

  9. High Voltage Power Transmission for Wind Energy

    Science.gov (United States)

    Kim, Young il

    The high wind speeds and wide available area at sea have recently increased interest in offshore wind farms in the U.S.A. As offshore wind farms become larger and are placed further from shore, power transmission to the onshore grid becomes a key feature. Power transmission from an offshore wind farm, where good wind conditions and a larger installation area than at an onshore site are available, requires the use of submarine cable systems; an underground power cable system therefore poses unique design and installation challenges not found in the overhead power cable environment. This paper presents an analysis of the benefits and drawbacks of three different transmission solutions, HVAC and LCC/VSC HVDC, for the grid connection of offshore wind farms, and also analyzes the electrical characteristics of underground cables. In particular, the losses of the high-voltage (HV) subsea transmission cables were evaluated by Brakelmann's theory, taking into account the distributions of current and temperature.
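
    The temperature dependence at the heart of Brakelmann-style loss evaluation can be illustrated with the ohmic part alone: the conductor resistance is corrected from 20 degC to the operating temperature before computing the I^2*R loss. This is a deliberate simplification (dielectric and sheath/armour losses are ignored), and the function and constants below are illustrative rather than taken from the paper.

```python
# Simplified ohmic loss of a three-phase HV subsea cable per metre of route,
# with copper resistance corrected to the conductor operating temperature.
# (Illustrative sketch; dielectric and sheath/armour losses are ignored.)
ALPHA_CU = 0.00393  # 1/K, temperature coefficient of copper resistance

def conductor_loss_w_per_m(current_a, r20_ohm_per_m, conductor_temp_c,
                           n_phases=3):
    # resistance at operating temperature, linear correction from 20 degC
    r_theta = r20_ohm_per_m * (1.0 + ALPHA_CU * (conductor_temp_c - 20.0))
    return n_phases * current_a ** 2 * r_theta
```

    At 1000 A with a 20 degC conductor, a cable with 10 micro-ohm/m per phase dissipates 30 W/m over three phases; the same cable at 90 degC dissipates noticeably more.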

  10. Efficient Capacity Computation and Power Optimization for Relay Networks

    CERN Document Server

    Parvaresh, Farzad

    2011-01-01

    The capacity, or approximations to the capacity, of various single-source single-destination relay network models have been characterized in terms of the cut-set upper bound. In principle, a direct computation of this bound requires evaluating the cut capacity over exponentially many cuts. We show that the minimum cut capacity of a relay network under some special assumptions can be cast as the minimization of a submodular function and, as a result, can be computed efficiently. We use this result to show that the capacity, or an approximation to the capacity within a constant gap, for the Gaussian, wireless erasure, and Avestimehr-Diggavi-Tse deterministic relay network models can be computed in polynomial time. We present empirical results showing that computing constant-gap approximations to the capacity of Gaussian relay networks with around 300 nodes can be done on the order of minutes. For Gaussian networks, cut-set capacities are also functions of the powers assigned to the nodes. We consider a family of power o...
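
    The "exponentially many cuts" the abstract refers to can be made concrete on a toy network in which each directed link has an independent capacity (a simplification relative to the Gaussian model). The brute-force enumeration below is exactly the computation that casting the cut capacity as a submodular function avoids; the network and names are illustrative.

```python
# Brute-force cut-set bound for a toy relay network: minimize, over all
# vertex sets containing the source but not the destination, the total
# capacity of links leaving the set. Exponential in the number of relays.
from itertools import combinations

def min_cut(nodes, src, dst, cap):
    """cap: {(u, v): capacity of the directed link u -> v}."""
    relays = [n for n in nodes if n not in (src, dst)]
    best = float("inf")
    for r in range(len(relays) + 1):
        for subset in combinations(relays, r):
            s_side = {src, *subset}
            cut = sum(c for (u, v), c in cap.items()
                      if u in s_side and v not in s_side)
            best = min(best, cut)
    return best
```

    For a diamond network s -> {1, 2} -> d, the minimum is attained by the cut {s, 2}.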

  11. High Power Short Wavelength Laser Development

    Science.gov (United States)

    1977-11-01

    NRTC-77-43R, High Power Short Wavelength Laser Development, November 1977, D. B. Cohn and W. B. Lacina. ARPA-sponsored report; the remainder of the scanned report documentation page is illegible.

  12. Computer proficiency questionnaire: assessing low and high computer proficient seniors.

    Science.gov (United States)

    Boot, Walter R; Charness, Neil; Czaja, Sara J; Sharit, Joseph; Rogers, Wendy A; Fisk, Arthur D; Mitzner, Tracy; Lee, Chin Chin; Nair, Sankaran

    2015-06-01

    Computers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies from noncomputer users to extremely skilled users. To assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ. The CPQ demonstrated excellent reliability (Cronbach's α = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions. The CPQ and CPQ-12 are useful tools to gauge computer proficiency for training and research purposes, even among low computer proficient older adults.
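
    The reliability statistic reported here, Cronbach's alpha, is computed from the per-item score variances and the variance of the total score: alpha = k/(k-1) * (1 - sum(var_i)/var_total). A minimal sketch, using population variances throughout:

```python
# Cronbach's alpha for k questionnaire items scored by the same respondents;
# items is a list of k per-item score lists, aligned across respondents.
from statistics import pvariance

def cronbach_alpha(items):
    k = len(items)
    item_var_sum = sum(pvariance(scores) for scores in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))
```

    Two perfectly correlated items yield alpha = 1.0; uncorrelated items drive alpha toward 0.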

  13. PRCA:A highly efficient computing architecture

    Institute of Scientific and Technical Information of China (English)

    Luo Xingguo

    2014-01-01

    Applications typically reach only 8%-15% utilization on modern computer systems. There are many obstacles to improving system efficiency; the root cause is the conflict between the fixed general-purpose computer architecture and the variable requirements of applications. Proactive reconfigurable computing architecture (PRCA) is proposed to improve computing efficiency. PRCA dynamically constructs an efficient computing architecture for a specific application via reconfigurable technology by perceiving the requirements, workload, and utilization of computing resources. Proactive decision support system (PDSS), hybrid reconfigurable computing array (HRCA), and reconfigurable interconnect (RIC) are intensively researched as the key technologies. The principles of PRCA have been verified with four applications on a test bed. It is shown that PRCA is feasible and highly efficient.

  14. Toward High-Power Klystrons With RF Power Conversion Efficiency on the Order of 90%

    CERN Document Server

    Baikov, Andrey Yu; Syratchev, Igor

    2015-01-01

    The increase in efficiency of RF power generation for future large accelerators is considered a high-priority issue. The vast majority of existing commercial high-power RF klystrons operate in the electronic efficiency range between 40% and 55%; only a few klystrons available on the market are capable of operating with 65% efficiency or above. In this paper, a new method to achieve 90% RF power conversion efficiency in a klystron amplifier is presented. The essential part of this method is a new bunching technique: bunching with bunch core oscillations. Computer simulations confirm that RF production efficiency above 90% can be reached with this new bunching method. The results of a preliminary study of an L-band, 20-MW peak RF power multibeam klystron for the Compact Linear Collider with efficiency above 85% are presented.

  15. NASA High-End Computing Program Website

    Science.gov (United States)

    Cohen, Jarrett S.

    2008-01-01

    If you are a NASA-sponsored scientist or engineer, computing time is available to you at the High-End Computing (HEC) Program's NASA Advanced Supercomputing (NAS) Facility and NASA Center for Computational Sciences (NCCS). The Science Mission Directorate will select from requests submitted to the e-Books online system for awards beginning on May 1. Current projects set to expire on April 30 must have a request in e-Books to be considered for renewal.


  17. Small high cooling power space cooler

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, T. V.; Raab, J.; Durand, D.; Tward, E. [Northrop Grumman Aerospace Systems Redondo Beach, Ca, 90278 (United States)

    2014-01-29

    The small High Efficiency pulse tube Cooler (HEC), which has been produced and flown on a number of space infrared instruments, was originally designed to provide cooling of 10 W @ 95 K. It achieved its goal with >50% margin when limited by the 180 W output ac power of its flight electronics. It has also been produced in two-stage configurations, typically for simultaneous cooling of focal planes to temperatures as low as 35 K and of optics at higher temperatures. The need for even higher cooling power in such a low-mass cryocooler is motivated by the advent of large focal plane arrays. With the current availability at NGAS of much larger power cryocooler flight electronics, reliable long-term operation in space with much larger cooling powers is now possible with the flight-proven 4 kg HEC mechanical cooler. Even though the single-stage cooler design can be re-qualified for those larger input powers without design change, we redesigned both the linear and coaxial versions of the passive pulse tube cold head to re-optimize them for high-power cooling at temperatures above 130 K while rejecting heat at 300 K. Small changes to the regenerator packing, re-optimization of the tuned inertance, and no change to the compressor resulted in increased performance at 150 K. Operating at 290 W input power, the cooler achieves 35 W @ 150 K, corresponding to a specific cooling power at 150 K of 8.25 W/W and a very high specific power of 72.5 W/kg. At these powers the cooler still maintains large stroke, thermal, and current margins. In this paper we present the measured data and the changes made to this flight-proven cooler to achieve this increased performance.

  18. Advances in high power semiconductor diode lasers

    Science.gov (United States)

    Ma, Xiaoyu; Zhong, Li

    2008-03-01

    High power semiconductor lasers have broad applications in the fields of military and industry. Recent advances in high power semiconductor lasers are reviewed mainly in two aspects: improvements of diode lasers performance and optimization of packaging architectures of diode laser bars. Factors which determine the performance of diode lasers, such as power conversion efficiency, temperature of operation, reliability, wavelength stabilization etc., result from a combination of new semiconductor materials, new diode structures, careful material processing of bars. The latest progress of today's high-power diode lasers at home and abroad is briefly discussed and typical data are presented. The packaging process is of decisive importance for the applicability of high-power diode laser bars, not only technically but also economically. The packaging techniques include the material choosing and the structure optimizing of heat-sinks, the bonding between the array and the heat-sink, the cooling and the fiber coupling, etc. The status of packaging techniques is stressed. There are basically three different diode package architectural options according to the integration grade. Since the package design is dominated by the cooling aspect, different effective cooling techniques are promoted by different package architectures and specific demands. The benefit and utility of each package are strongly dependent upon the fundamental optoelectronic properties of the individual diode laser bars. Factors which influence these properties are outlined and comparisons of packaging approaches for these materials are made. Modularity of package for special application requirements is an important developing tendency for high power diode lasers.

  19. Reducing power consumption during execution of an application on a plurality of compute nodes

    Science.gov (United States)

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-06-05

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: executing, by each compute node, an application, the application including power consumption directives corresponding to one or more portions of the application; identifying, by each compute node, the power consumption directives included within the application during execution of the portions of the application corresponding to those identified power consumption directives; and reducing power, by each compute node, to one or more components of that compute node according to the identified power consumption directives during execution of the portions of the application corresponding to those identified power consumption directives.
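
    The mechanism claimed above, directives embedded in the application that cap component power during tagged portions of execution, can be sketched in a few lines. The classes, names, and wattages below are invented for illustration; the patent abstract does not specify this interface.

```python
# Hypothetical sketch of directive-driven power reduction on a compute node.
from dataclasses import dataclass

@dataclass
class Directive:
    component: str     # e.g. "cpu" or "memory"
    power_cap_w: float  # cap to apply while the tagged portion executes

class ComputeNode:
    def __init__(self):
        self.power = {"cpu": 100.0, "memory": 40.0}  # illustrative draws

    def apply(self, d):
        # reduce the named component to its directive's cap
        self.power[d.component] = min(self.power[d.component], d.power_cap_w)

def run_application(node, portions):
    """portions: list of (name, [directives]) making up the application."""
    log = []
    for name, directives in portions:
        for d in directives:   # identify directives for this portion
            node.apply(d)      # reduce power accordingly
        log.append((name, sum(node.power.values())))
    return log
```

    A portion tagged as I/O-bound can cap the CPU while it waits, cutting the node draw for that portion without touching the compute-bound portions that ran before it.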

  20. Profiling an application for power consumption during execution on a plurality of compute nodes

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2012-08-21

    Methods, apparatus, and products are disclosed for profiling an application for power consumption during execution on a compute node that include: receiving an application for execution on a compute node; identifying a hardware power consumption profile for the compute node, the hardware power consumption profile specifying power consumption for compute node hardware during performance of various processing operations; determining a power consumption profile for the application in dependence upon the application and the hardware power consumption profile for the compute node; and reporting the power consumption profile for the application.
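
    The profiling step combines a per-node hardware power consumption profile with the time the application spends in each class of processing operation. The sketch below shows one plausible reading of that combination; all operation classes and wattages are invented for illustration.

```python
# Hypothetical derivation of an application power profile from a hardware
# power consumption profile (watts drawn per operation class; illustrative).
HARDWARE_PROFILE_W = {"flop": 80.0, "mem": 45.0, "net": 30.0, "idle": 20.0}

def application_power_profile(op_seconds):
    """op_seconds: {operation class: seconds the application spends in it}."""
    energy_j = {op: HARDWARE_PROFILE_W[op] * s for op, s in op_seconds.items()}
    avg_w = sum(energy_j.values()) / sum(op_seconds.values())  # time-weighted
    return energy_j, avg_w
```

    The reported profile then pairs per-class energy with the application's average draw, which is what a facility would use for scheduling or capping decisions.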

  1. Silver based batteries for high power applications

    Science.gov (United States)

    Karpinski, A. P.; Russell, S. J.; Serenyi, J. R.; Murphy, J. P.

    The present status of silver oxide-zinc technology and applications has been described by Karpinski et al. [A.P. Karpinski, B. Makovetski, S.J. Russell, J.R. Serenyi, D.C. Williams, Silver-Zinc: status of technology and applications, Journal of Power Sources, 80 (1999) 53-60]: the silver-zinc couple remains the preferred choice where high specific energy/energy density, coupled with high specific power/power density, is important for high-rate, weight- or size/configuration-sensitive applications. The silver oxide cathode can be considered one of the most versatile electrode materials. When coupled with other anodes and a corresponding electrolyte management system, the silver electrode provides a wide array of electrochemical systems that can be tailored to meet the most demanding high-power requirements. Besides zinc, the most notable anodes include cadmium, iron, metal hydride, and the hydrogen electrode for secondary systems, while primary systems include lithium and aluminum. Silver alloys are also available, such as silver chloride, which when coupled with magnesium or aluminum is primarily used in many seawater applications. The selection and use of these couples is normally the result of a trade-off of many factors, including performance, safety, risk, reliability, and cost. When high power is required, silver oxide-zinc, silver oxide-aluminum, and silver oxide-lithium are the most energetic; for moderate performance (i.e., lower power), silver oxide-zinc or silver-cadmium would be the system of choice. This paper summarizes the suitability of the silver-based couples, with an emphasis on the silver-zinc system, as primary or rechargeable power sources for high-energy/power applications.

  2. Challenges of high dam construction to computational mechanics

    Institute of Scientific and Technical Information of China (English)

    ZHANG Chuhan

    2007-01-01

    The current situation and growing prospects of China's hydropower development and high dam construction are reviewed, with emphasis on key issues for the safety evaluation of large dams and hydropower plants, especially those associated with the application of state-of-the-art computational mechanics. These include, but are not limited to: stress and stability analysis of dam foundations under external loads; earthquake behavior of dam-foundation-reservoir systems; mechanical properties of mass concrete for dams; high-velocity flow and energy dissipation for high dams; scientific and technical problems of hydropower plants and underground structures; and newly developed dam types, the Roller-Compacted Concrete (RCC) dam and the Concrete-Faced Rockfill (CFR) dam. Some examples demonstrating successful applications of computational mechanics in high dam engineering are given, including seismic nonlinear analysis of arch dam foundations, nonlinear fracture analysis of arch dams under reservoir loads, and failure analysis of arch dam foundations. To make greater use of computational mechanics in high dam engineering, much future research is needed on computational methods, numerical models and solution schemes, and verification through experimental tests and field measurements.

  3. High Average Power Optical FEL Amplifiers

    CERN Document Server

    Ben-Zvi, I; Litvinenko, V

    2005-01-01

    Historically, the first demonstration of the FEL was in an amplifier configuration at Stanford University. There were other notable instances of amplifying a seed laser, such as the LLNL amplifier and the BNL ATF High-Gain Harmonic Generation FEL. For the most part, however, FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet in wavelength regimes where a conventional seed laser is available, the FEL can serve as an amplifier. One promising application is very high average power generation, for instance a 100 kW average power FEL. The high electron beam power, high brightness and high efficiency achievable with photoinjectors and superconducting energy recovery linacs combine well with the high-gain FEL amplifier to produce FELs of unprecedented average power. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier that is being designed to operate on the 0.5 ampere Energy Recovery Li...

  4. Protection Related to High-power Targets

    CERN Document Server

    Plum, M.A.

    2016-01-01

    Target protection is an important part of machine protection. The beam power in high-intensity accelerators is high enough that a single wayward pulse can cause serious damage. Today's high-power targets operate at the limit of available technology, and are designed for a very narrow range of beam parameters. If the beam pulse is too far off centre, or if the beam size is not correct, or if the beam density is too high, the target can be seriously damaged. We will start with a brief introduction to high-power targets and then move to a discussion of what can go wrong, and what are the risks. Next we will discuss how to control the beam-related risk, followed by examples from a few different accelerator facilities. We will finish with a detailed example of the Oak Ridge Spallation Neutron Source target tune up and target protection.

  5. Computational Simulation of Explosively Generated Pulsed Power Devices

    Science.gov (United States)

    2013-03-21

    physics models for magnetohydrodynamics, and ALEGRA-HEDP, which builds on the ALEGRA-MHD version and adds physics models that allow simulation of high energy...development, there is a genuine need for more theory-based research and an accurate computer modeling capability. One of the programs that has done...developed by Sandia National Laboratories, to develop a computer model that can accurately represent an FEG and that can be verified against existing

  6. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
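The grouping step this record describes — collect each thread's calling-instruction addresses, then bucket threads whose call paths match — can be sketched in a few lines. This is an illustrative reconstruction, not the patented implementation; the `thread_stacks` snapshot and all addresses below are hypothetical.

```python
from collections import defaultdict

def group_threads_by_callsite(thread_stacks):
    """Bucket thread IDs by their tuple of calling-instruction addresses.
    In a hung parallel program, small outlier groups often point at the
    defective threads (e.g., one stuck at a barrier nobody else reached)."""
    groups = defaultdict(list)
    for tid, addresses in sorted(thread_stacks.items()):
        groups[tuple(addresses)].append(tid)
    return dict(groups)

# Hypothetical snapshot of four threads' call paths:
stacks = {
    0: [0x4005D0, 0x400710],
    1: [0x4005D0, 0x400710],
    2: [0x4005D0, 0x400710],
    3: [0x4005D0, 0x400A90],  # diverged: likely the defective thread
}
for path, tids in group_threads_by_callsite(stacks).items():
    print([hex(a) for a in path], "->", tids)
```

In a real debugger the address lists would come from stack unwinding on each compute node; the grouping collapses thousands of threads into a handful of distinct call paths that fit on one screen.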

  7. A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.

    Science.gov (United States)

    Wehner, M. F.; Oliker, L.; Shalf, J.

    2008-12-01

    Exascale computers would allow routine ensemble modeling of the global climate system at the cloud-system-resolving scale. Power and cost requirements of traditional-architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra-high-resolution climate modeling. These power-efficient processors, used in consumer electronic devices such as mobile phones, portable music players, and cameras, can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer-scale climate model a thousand times faster than real time could be designed and built on a five-year time scale for US$75M, with a power consumption of 3 MW. This is cheaper, more power-efficient, and achievable sooner than with any other existing technology.

  8. High Power Test for Klystron Stability

    Energy Technology Data Exchange (ETDEWEB)

    Seol, Kyung-Tae; Kim, Seong-Gu; Kwon, Hyeok-Jung; Kim, Han-Sung; Cho, Yong-Sub [Korea Atomic Energy Research Institute, Gyeongju (Korea, Republic of)

    2015-10-15

    The 100-MeV linac consists of a 50-keV proton injector based on a microwave ion source, a 3-MeV RFQ with a four-vane structure, and a 100-MeV DTL. Nine 1-MW klystrons have been operated for the KOMAC 100-MeV proton linac; the klystron filament heating time reached approximately 5700 hours in 2014. During high-power operation, unstable RF waveforms appeared at the klystron output at specific power levels. To diagnose the problem, we checked the cavity frequency adjustment, the magnet and heater currents, reflection from the circulator, klystron operation without a circulator, and the frequency spectrum at the unstable RF. The instability appears to be caused by harmonic power trapped between the klystron and the circulator, and a waveguide-type harmonic filter has been designed to eliminate this harmonic power.

  9. High power, high efficiency millimeter wavelength traveling wave tubes for high rate communications from deep space

    Science.gov (United States)

    Dayton, James A., Jr.

    1991-01-01

    The high-power transmitters needed for high data rate communications from deep space will require a new class of compact, high efficiency traveling wave tubes (TWT's). Many of the recent TWT developments in the microwave frequency range are generically applicable to mm wave devices, in particular much of the technology of computer aided design, cathodes, and multistage depressed collectors. However, because TWT dimensions scale approximately with wavelength, mm wave devices will be physically much smaller with inherently more stringent fabrication tolerances and sensitivity to thermal dissipation.

  10. Diagnostics for High Power Targets and Dumps

    CERN Document Server

    Gschwendtner, E

    2012-01-01

    High power targets are generally used for neutrino, antiproton, neutron and secondary beam production whereas dumps are needed in beam waste management. In order to guarantee an optimized and safe use of these targets and dumps, reliable instrumentation is needed; the diagnostics in high power beams around targets and dumps is reviewed. The suite of beam diagnostics devices used in such extreme environments is discussed, including their role in commissioning and operation. The handling and maintenance of the instrumentation components in high radiation areas is also addressed.

  11. Federal High End Computing (HEC) Information Portal

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This portal provides information about opportunities to engage in U.S. Federal government high performance computing activities, including supercomputer use,...

  12. High School Physics and the Affordable Computer.

    Science.gov (United States)

    Harvey, Norman L.

    1978-01-01

    Explains how the computer was used in a high school physics course; Project Physics program and individualized study PSSC physics program. Evaluates the capabilities and limitations of a $600 microcomputer system. (GA)

  13. Advanced High Voltage Power Device Concepts

    CERN Document Server

    Baliga, B Jayant

    2012-01-01

    Advanced High Voltage Power Device Concepts describes devices utilized in power transmission and distribution equipment, and for very high power motor control in electric trains and steel mills. Since these devices must be capable of supporting more than 5000 volts in the blocking mode, this book covers the operation of devices rated at 5,000 V, 10,000 V and 20,000 V. Advanced concepts (the MCT, the BRT, and the EST) that enable MOS-gated control of power thyristor structures are described and analyzed in detail. In addition, detailed analyses of the silicon IGBT, as well as the silicon carbide MOSFET and IGBT, are provided for comparison purposes. Throughout the book, analytical models are generated to give a better understanding of the physics of operation for all the structures. This book provides readers with: The first comprehensive treatment of high voltage (over 5000-volt) power devices suitable for the power distribution, traction, and motor-control markets;  Analytical formulations for all the device ...

  14. Advances in industrial high-power lasers

    Science.gov (United States)

    Schlueter, Holger

    2005-03-01

    Four major types of laser sources are used for material processing. Excluding excimer lasers, this paper focuses on advances in high-power CO2 lasers, solid-state lasers and diode lasers. Because of its unrivaled cost-to-brightness ratio, the fast axial flow CO2 laser remains the standard for flat-sheet laser cutting. Adding approximately a kilowatt of output power every four years, this laser type has been propelling the entire sheet-metal fabrication industry for the last two decades. Very robust, diffusion-cooled annular-discharge CO2 lasers with 2 kW output power have enabled robot-mounted lasers for 3D applications. Solid-state lasers are chosen mainly for the option of fiber delivery. Industrial applications still rely on lamp-pumped Nd:YAG lasers with guaranteed output powers of 4.5 kW at the workpiece. The introduction of the 4.5 kW diode-pumped Thin Disc Laser enables new applications such as the Programmable Focus Optics. Pumping the Thin Disc Laser requires highly reliable high-power diode lasers; the necessary reliability can only be achieved in a modern, automated semiconductor manufacturing facility. For diode lasers, electro-optical efficiencies above 65% are as important as the passivation of the facets to avoid burn-in power degradation.

  15. High power infrared QCLs: advances and applications

    Science.gov (United States)

    Patel, C. Kumar N.

    2012-01-01

    QCLs are becoming the most important sources of laser radiation in the midwave infrared (MWIR) and longwave infrared (LWIR) regions because of their size, weight, power and reliability advantages over other laser sources in the same spectral regions. The availability of multiwatt room-temperature (RT) QCLs from 3.5 μm to >16 μm with wall-plug efficiency (WPE) of 10% or higher is hastening the replacement of traditional sources such as OPOs and OPSELs in many applications, and QCLs can replace CO2 lasers in many low-power applications. Of the two leading groups driving improvements in QCL performance, Pranalytica is the commercial organization that has been supplying the highest-performance QCLs to various customers for over four years. Using a new QCL design concept, non-resonant extraction [1], we have achieved CW/RT power of >4.7 W and WPE of >17% in the 4.4 μm - 5.0 μm region. In the LWIR region, we have recently demonstrated QCLs with CW/RT power exceeding 1 W and WPE of nearly 10% in the 7.0 μm - 10.0 μm region. In general, high-power CW/RT operation requires thermoelectric coolers (TECs) to maintain QCLs at appropriate operating temperatures. However, TECs consume additional electrical power, which is undesirable for handheld, battery-operated applications, where system power conversion efficiency matters more than QCL chip-level power conversion efficiency. In high-duty-cycle pulsed (quasi-CW) mode, QCLs can be operated without TECs and have produced nearly the same average power as that available in CW mode with TECs. Multiwatt average powers are obtained even at ambient temperatures above 70°C, with true electrical-to-optical power conversion efficiency above 10%. Because QCLs are available with multiwatt power outputs over a spectral range from ~3.5 μm to >16 μm, they have found immediate acceptance in a multitude of defense and homeland security applications, including laser sources for infrared

  16. The Jefferson Lab High Power Light Source

    Energy Technology Data Exchange (ETDEWEB)

    James R. Boyce

    2006-01-01

    Jefferson Lab has designed, built and operated two high average power free-electron lasers (FELs) using superconducting RF (SRF) technology and energy recovery techniques. Between 1999 and 2001, Jefferson Lab operated the IR Demo FEL. This device produced over 2 kW in the mid-infrared, in addition to producing world-record average powers for tunable, short-pulse (< ps) light in the visible (50 W), ultraviolet (10 W) and terahertz (50 W) ranges. This FEL was the first high-power demonstration of an accelerator configuration that is being exploited for a number of new accelerator-driven light source facilities currently under design or construction. The driver accelerator for the IR Demo FEL uses an Energy Recovered Linac (ERL) configuration that improves energy efficiency and lowers both the capital and operating cost of such devices by recovering most of the power in the spent electron beam after optical power is extracted from it. The IR Demo FEL was decommissioned in late 2001 to build an upgraded FEL extending the IR power to over 10 kW and the ultraviolet power to over 1 kW. The FEL Upgrade achieved 10 kW of average power in the mid-IR (6 microns) in July 2004, and its IR operation is currently being extended down to 1 micron. In addition, we have demonstrated the capability of on/off cycling and recovering over a megawatt of electron beam power without diminishing machine performance. A complementary UV FEL will come on-line within the next year. This paper presents a summary of the FEL characteristics, user community accomplishments with the IR Demo, and planned user experiments.

  17. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    Energy Technology Data Exchange (ETDEWEB)

    Mike Bockelie; Dave Swensen; Martin Denison; Connie Senior; Adel Sarofim; Bene Risio

    2002-07-28

    This is the seventh Quarterly Technical Report for DOE Cooperative Agreement No. DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, good progress has been made on the development of the IGCC workbench. A series of parametric CFD simulations for single-stage and two-stage generic gasifier configurations have been performed. An advanced flowing slag model has been implemented into the CFD-based gasifier model. A literature review has been performed on published gasification kinetics. Reactor models have been developed and implemented into the workbench for the majority of the heat exchangers, the gas clean-up system and the power generation system for the Vision 21 reference configuration. Modifications to the software infrastructure of the workbench have commenced to allow interfacing to the workbench reactor models that utilize the CAPE-Open software interface protocol.

  18. The Impact of High Speed Machining on Computing and Automation

    Institute of Scientific and Technical Information of China (English)

    KKB Hon; BT Hang Tuah Baharudin

    2006-01-01

    Machine tool technologies, especially Computer Numerical Control (CNC) High Speed Machining (HSM), have emerged as effective mechanisms for Rapid Tooling and Manufacturing applications. These new technologies are attractive for competitive manufacturing because of their technical advantages, i.e. a significant reduction in lead-time, high product accuracy, and good surface finish. However, HSM not only stimulates advancements in cutting tools and materials, it also demands increasingly sophisticated CAD/CAM software and powerful CNC controllers that require more support technologies. This paper explores the computational requirements and impact of HSM on CNC controllers, wear detection, look-ahead programming, simulation, and tool management.

  19. Application of computed radiography for power plant tube

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Hee Jun; Kim, Sun Je; Yang, Yun Sick; Kim, Jong Duck; Lim, Sung Hee [Doosan Heavy Industries and Construction Co., Ltd., Changwon (Korea, Republic of); Park, Young Ha [Shinki Commercial Co., Ltd., Seoul (Korea, Republic of)

    2005-05-15

    For radiographic examination, the characteristics of the recording medium must be determined so that optimal test conditions can be established and the results verified, by analyzing its response to different radiation sources and by standardizing characteristics such as sensitivity, exposure thickness range (latitude), and maximum and minimum exposure dose. The research presented in this paper evaluated the radiation sensitivity characteristics of the imaging plate used in computed radiography, in lieu of film, for various radiation sources, in order to apply this newly developed technology in other industrial fields, and in particular the power plant industry. Several possibilities and improvements were identified when substituting computed radiography for conventional film radiographic testing.

  20. Nanoelectromechanical Switches for Low-Power Digital Computing

    Directory of Open Access Journals (Sweden)

    Alexis Peschot

    2015-08-01

    The need for more energy-efficient solid-state switches beyond complementary metal-oxide-semiconductor (CMOS) transistors has become a major concern as the power consumption of electronic integrated circuits (ICs) steadily increases with technology scaling. Nano-Electro-Mechanical (NEM) relays control current flow by nanometer-scale motion to make or break physical contact between electrodes, and offer advantages over transistors for low-power digital logic applications: virtually zero leakage current for negligible static power consumption; the ability to operate with very small voltage signals for low dynamic power consumption; and robustness against harsh environments such as extreme temperatures. Therefore, NEM logic switches (relays) have been investigated by several research groups during the past decade. Circuit simulations calibrated to experimental data indicate that scaled relay technology can overcome the energy-efficiency limit of CMOS technology. This paper reviews recent progress toward this goal, providing an overview of the different relay designs and experimental results achieved by various research groups, as well as of relay-based IC design principles. Remaining challenges for realizing the promise of nano-mechanical computing, and ongoing efforts to address these, are discussed.

  1. Optical interconnection networks for high-performance computing systems.

    Science.gov (United States)

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization, we demonstrate the feasibility of waveguides, modulators, switches and photodetectors, as well as systems that combine many functionalities simultaneously to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  2. High Power Diode Lasers Technology and Applications

    CERN Document Server

    Bachmann, Friedrich; Poprawe, Reinhart

    2007-01-01

    In a very comprehensive way this book covers all aspects of high power diode laser technology for materials processing. Basics as well as new application-oriented results obtained in a government-funded national German research project are described in detail. Following the technological chain, after a short introduction, the second chapter discusses diode laser bar technology with regard to structure, manufacturing technology and metrology. The third chapter illuminates all aspects of mounting and cooling, whereas chapter four gives wide-spanning details on beam forming, beam guiding and beam combination, which are essential topics for incoherently coupled multi-emitter-based high power diode lasers. Metrology, standards and safety aspects are the theme of chapter five. As an outcome of all the knowledge from chapters two to four, various system configurations of high power diode lasers are described in chapter six; not only systems focussed on best available beam quality but especially also so called "modular" set...

  3. Power quality in high-tech campus: a case study

    Energy Technology Data Exchange (ETDEWEB)

    Moreno-Munoz, A.; Redel, M.; Gonzalez, M. [Universidad de Cordoba (Spain). Departamento de Electrotecnia y Electronica

    2006-07-01

    This paper presents preliminary results from a power-quality audit conducted at a high-tech campus over the last year. Voltage and current were measured at various R and D buildings; it was found that the main problems for the equipment installed were voltage sags and surges. The paper examines the causes and effects of power disturbances that affect computer or any other microprocessor-based equipment and analyses the auto-protection capabilities of modern power supplies. The convenience of 'enhanced power supply' or 'low-cost customer-side' protection solutions is also discussed. Finally, it addresses the role of the standards on the protection of electronic equipment and the implications for the final customer. (author)

  4. Computing trends using graphic processor in high energy physics

    CERN Document Server

    Niculescu, Mihai

    2011-01-01

    One of the main challenges in High Energy Physics is the fast analysis of large amounts of experimental and simulated data. At LHC-CERN, one p-p event is approximately 1 MB in size, and the time taken to analyze the data and obtain results quickly depends on the available computational power. The main advantage of GPU (Graphics Processing Unit) programming over traditional CPU programming is that graphics cards bring a lot of computing power at a very low price. Today a huge number of applications (scientific, financial, etc.) have begun to be ported to or developed for the GPU, including Monte Carlo tools and data analysis tools for High Energy Physics. In this paper, we present the current status and trends in HEP computing using GPUs.
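The sizing claim in this record (one p-p event at roughly 1 MB) lends itself to a quick throughput estimate. The sketch below is back-of-envelope only; the function name and all rates are illustrative assumptions, not measured LHC figures.

```python
def analysis_time_s(n_events, event_size_mb=1.0, throughput_mb_s=1000.0):
    """Wall-clock seconds to stream n_events through an analysis
    pipeline whose bottleneck is throughput_mb_s (illustrative model)."""
    return n_events * event_size_mb / throughput_mb_s

# Ten million 1-MB events through a hypothetical 1 GB/s pipeline:
seconds = analysis_time_s(10_000_000)
print(seconds / 3600.0, "hours")

# The same workload with a hypothetical 10x GPU-accelerated throughput:
print(analysis_time_s(10_000_000, throughput_mb_s=10_000.0) / 3600.0, "hours")
```

The point of the arithmetic is the linear scaling: at fixed event size, any speedup in effective throughput translates directly into analysis turnaround time, which is the motivation the abstract gives for moving HEP tools to GPUs.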

  5. E3000 High Power SADM development

    Science.gov (United States)

    Bamford, Steve G.; McMahon, Paul

    2003-09-01

    Astrium UK has been actively involved in the study, design, development, manufacture and test of Solar Array Drive Mechanisms (SADMs) and Bearing and Power Transfer Assemblies (BAPTAs) since the early 1970s having delivered 105 of these mechanisms to 22 spacecraft programs. As a result Astrium UK has accumulated in excess of 700 years of failure free SADM operation in-orbit. During that period power transfer requirements have grown steadily from below 1kW to 9.9kW and beyond. With this increase in power handling capability comes the associated problem of handling and dissipating the heat being generated within the SADM. The Eurostar 2000 family of SADMs were designed to handle up to 5.6kW for the E2000 family of spacecraft but the High Power SADM was conceived to meet the needs of the much bigger Eurostar 3000 family of spacecraft that could potentially grow to 15kW.

  6. Technology development for high power induction accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Birx, D.L.; Reginato, L.L.

    1985-06-11

    The marriage of Induction Linac technology with Nonlinear Magnetic Modulators has produced some unique capabilities. It appears possible to produce electron beams with average currents measured in amperes, at gradients exceeding 1 MeV/meter, and with power efficiencies approaching 50%. A 2 MeV, 5 kA electron accelerator has been constructed at the Lawrence Livermore National Laboratory (LLNL) to demonstrate these concepts and to provide a test facility for high brightness sources. The pulse drive for the accelerator is based on state-of-the-art magnetic pulse compressors with very high peak power capability, repetition rates exceeding a kilohertz and excellent reliability.

  7. Compact high-power terahertz radiation source

    Directory of Open Access Journals (Sweden)

    G. A. Krafft

    2004-06-01

    In this paper a new type of THz radiation source, based on recirculating an electron beam through a high gradient superconducting radio frequency cavity, and using this beam to drive a standard electromagnetic undulator on the return leg, is discussed. Because the beam is recirculated and not stored, short bunches may be produced that radiate coherently in the undulator, yielding exceptionally high average THz power for relatively low average beam power. Deceleration from the coherent emission, and the detuning it causes, limits the charge-per-bunch possible in such a device.

  8. Operation of Power Grids with High Penetration of Wind Power

    Science.gov (United States)

    Al-Awami, Ali Taleb

    The integration of wind power into the power grid poses many challenges due to its highly uncertain nature. This dissertation involves two main components related to the operation of power grids with high penetration of wind energy: wind-thermal stochastic dispatch and wind-thermal coordinated bidding in short-term electricity markets. In the first part, a stochastic dispatch (SD) algorithm is proposed that takes into account the stochastic nature of the wind power output. The uncertainty associated with wind power output given the forecast is characterized using conditional probability density functions (CPDF). Several functions are examined to characterize wind uncertainty including Beta, Weibull, Extreme Value, Generalized Extreme Value, and Mixed Gaussian distributions. The unique characteristics of the Mixed Gaussian distribution are then utilized to facilitate the speed of convergence of the SD algorithm. A case study is carried out to evaluate the effectiveness of the proposed algorithm. Then, the SD algorithm is extended to simultaneously optimize the system operating costs and emissions. A modified multi-objective particle swarm optimization algorithm is suggested to identify the Pareto-optimal solutions defined by the two conflicting objectives. A sensitivity analysis is carried out to study the effect of changing load level and imbalance cost factors on the Pareto front. In the second part of this dissertation, coordinated trading of wind and thermal energy is proposed to mitigate risks due to those uncertainties. The problem of wind-thermal coordinated trading is formulated as a mixed-integer stochastic linear program. The objective is to obtain the optimal tradeoff bidding strategy that maximizes the total expected profits while controlling trading risks. For risk control, a weighted term of the conditional value at risk (CVaR) is included in the objective function. 
The CVaR aims to maximize the expected profits of the least profitable scenarios, thus
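The weighted-CVaR risk control described in this record can be sketched on profit scenario samples. This is a generic sample-based formulation, not the dissertation's exact model; the function names, the risk weight `beta`, and the confidence level `alpha` are illustrative assumptions.

```python
import math

def cvar_profit(profits, alpha=0.95):
    """Sample CVaR of profit: the mean over the worst (1 - alpha)
    fraction of scenarios, i.e., the left tail of the distribution."""
    ordered = sorted(profits)
    k = max(1, math.ceil((1.0 - alpha) * len(ordered)))
    return sum(ordered[:k]) / k

def risk_adjusted_objective(profits, beta=0.5, alpha=0.95):
    """Convex combination of expected profit and CVaR, as in
    risk-averse coordinated bidding (beta weights the tail term)."""
    mean = sum(profits) / len(profits)
    return (1.0 - beta) * mean + beta * cvar_profit(profits, alpha)

# Four hypothetical profit scenarios (arbitrary currency units):
scenarios = [10.0, 20.0, 30.0, 40.0]
print(cvar_profit(scenarios, alpha=0.5))                  # mean of worst half
print(risk_adjusted_objective(scenarios, beta=0.5, alpha=0.5))
```

Since CVaR never exceeds the mean, raising `beta` always lowers the objective for risky scenario sets, steering the optimizer toward bids whose worst-case scenarios are less severe.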

  9. High-power microwave development in Russia

    Science.gov (United States)

    Gauthier, Sylvain

    1995-03-01

    This is a survey of Russian research and development in high-power microwave (HPM) sources. It emphasizes sources with nanosecond pulse durations, which have potential weapon as well as radar applications. It does not cover the whole range of Russian HPM research and development but concentrates on those aspects which may lead to military applications. Russian investigators have achieved many world firsts in HPM generation; for example, a multiwave Cerenkov generator with a peak output power of 15 gigawatts. Their successes are based on their impressive capability in pulsed power technology, which has yielded high-current generators of terawatt peak power. They have transformed the energy of these currents into microwave radiation using tubes of both conventional and novel designs exploiting relativistic electron beams. Recently, the development of high-current mini-accelerators has moved relativistic electron-beam (REB) HPM generation out of the laboratory and enabled the development of deployable military systems with peak powers in the gigawatt range. As a result, Russian researchers now see the development of REB-based radar systems as one of the most promising directions in radar. Details of such a system are described and the implications for HPM weapons are considered.

  10. High impact data visualization with Power View, Power Map, and Power BI

    CERN Document Server

    Aspin, Adam

    2014-01-01

    High Impact Data Visualization with Power View, Power Map, and Power BI helps you take business intelligence delivery to a new level that is interactive, engaging, even fun, all while driving commercial success through sound decision-making. Learn to harness the power of Microsoft's flagship, self-service business intelligence suite to deliver compelling and interactive insight with remarkable ease. Learn the essential techniques needed to enhance the look and feel of reports and dashboards so that you can seize your audience's attention and provide them with clear and accurate information. Al

  11. High power collimated diode laser stack

    Institute of Scientific and Technical Information of China (English)

    LIU Yuan-yuan; FANG Gao-zhan; MA Xiao-yu; LIU Su-ping; FENG Xiao-ming

    2006-01-01

    A high power collimated diode laser stack has been developed based on fast-axis collimation and stack packaging techniques. The module includes ten typical continuous wave (cw) bars, and the total output power can be up to 368 W at 48.6 A. Using cylindrical lenses as the collimation elements, the fast-axis and slow-axis divergences are reduced to 0.9264° and 8.206°, respectively. The light-emitting area is limited to an area of 18.3 mm × 11 mm. The module has the advantage of high power density and offers wide potential applications in pumping and material processing.

  12. High Power Disk Loaded Guide Load

    Energy Technology Data Exchange (ETDEWEB)

    Farkas, Z.D.; /SLAC

    2006-02-22

    A method to design a matching section from a smooth guide to a disk-loaded guide, using a variation of broadband matching [1, 2], is described. Using this method, we show how to design high power loads. The load consists of a disk-loaded coaxial guide operating in the TE01 mode. We use this mode because it has no electric field terminating on a conductor, no axial currents, and no current at the cylinder-disk interface. A high power load design with -35 dB reflection and a 200 MHz, -20 dB bandwidth is presented. It is expected that it will carry the 600 MW output peak power of the pulse compression network. We use a coaxial geometry and stainless steel material to increase the attenuation per cell.

  13. Grid computing in high energy physics

    CERN Document Server

    Avery, P

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software r...

  14. Inductance effects in the high-power transmitter crowbar system

    Science.gov (United States)

    Daeges, J.; Bhanji, A.

    1987-01-01

    The effective protection of a klystron in a high-power transmitter requires the diversion of all stored energy in the protected circuit through an alternate low-impedance path, the crowbar, such that less than 1 joule of energy is dumped into the klystron during an internal arc. A scheme of adding a bypass inductor in the crowbar-protected circuit of the high-power transmitter was tested using computer simulations and actual measurements under a test load. Although this scheme has several benefits, including less power dissipation in the resistor, the tests show that the presence of inductance in the portion of the circuit to be protected severely hampers effective crowbar operation.

  15. A solar powered wireless computer mouse. Industrial design concepts

    Energy Technology Data Exchange (ETDEWEB)

    Reich, N.H.; Van Sark, W.G.J.H.M.; Alsema, E.A.; Turkenburg, W.C. [Department of Science, Technology and Society, Copernicus Institute, Utrecht University, Heidelberglaan 2, 3584 CS Utrecht (Netherlands); Veefkind, M.; Silvester, S. [Industrial Design Engineering, Delft University of Technology, Landbergstraat 15, 2628 CE Delft (Netherlands)

    2009-02-15

A solar powered wireless computer mouse (SPM) was chosen to serve as a case study for the evaluation and optimization of industrial design processes of photovoltaic (PV) powered consumer systems. As the design process requires expert knowledge in various technical fields, we assessed and compared the following: appropriate selection of integrated PV type, battery capacity and type, possible electronic circuitries for PV-battery coupling, and material properties concerning mechanical incorporation of PV into the encasing. Besides technical requirements, ergonomic aspects and design aesthetics with respect to good 'sun-harvesting' properties influenced the design process. This is particularly important as simulations show users can positively influence energy balances by 'sun-bathing' the PV mouse. A total of 15 SPM prototypes were manufactured and tested by actual users. Although user satisfaction proved the SPM concept to be feasible, future research still needs to address user acceptance related to product dimensions and user willingness to pro-actively 'sun-bathe' PV powered products in greater detail. (author)

  16. An Embedded System for applying High Performance Computing in Educational Learning Activity

    OpenAIRE

    Irene Erlyn Wina Rachmawan; Nurul Fahmi; Edi Wahyu Widodo; Samsul Huda; M. Unggul Pamenang; M. Choirur Roziqin; Andri Permana W.; Stritusta Sukaridhoto; Dadet Pramadihanto

    2016-01-01

HPC (High Performance Computing) has become more popular in the last few years. With the benefit of high computational power, HPC has an impact on industry, scientific research and educational activities. Implementing HPC in university curricula can consume substantial resources, because well-known HPC systems are built from personal computers or servers; using PCs as the practical modules requires considerable resources and space. This paper presents an innovative high performance computing c...

  17. People powerComputer games in the classroom

    Directory of Open Access Journals (Sweden)

    Ivan Hilliard

    2014-03-01

Full Text Available This article presents a case study in the use of the computer simulation game People Power, developed by the International Center on Nonviolent Conflict. The principal objective of the activity was to offer students an opportunity to understand the dynamics of social conflicts, in a format not possible in a traditional classroom setting. Due to the game's complexity, it was decided to play it in a day-long (8 hour) workshop format. A computer lab was prepared several weeks beforehand, which meant that each team of four students had access to a number of computers, being able to have the game open on several monitors at the same time, playing on one while using the others to constantly revise information as their strategy and tactics evolved. At the end of the workshop, and after handing in a group report, the 24 participants (6 groups) were asked to complete a short survey of the activity. The survey was divided into three areas: the game itself, skill development, and the workshop organization. Results showed a strong relationship between the activity and the course content, skills and competencies development, and practical know-how and leadership, as well as a strong feeling that it works well as a learning tool and is enjoyable. DOI: 10.18870/hlrc.v4i1.200

  18. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    Energy Technology Data Exchange (ETDEWEB)

    Mike Bockelie; Dave Swensen; Martin Denison; Zumao Chen; Temi Linjewile; Mike Maguire; Adel Sarofim; Connie Senior; Changguan Yang; Hong-Shig Shim

    2004-04-28

This is the fourteenth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a Virtual Engineering-based framework for simulating the performance of Advanced Power Systems. Within the last quarter, good progress has been made on all aspects of the project. Software development efforts have focused primarily on completing a prototype detachable user interface for the framework and on integrating Carnegie Mellon University's IECM model core with the computational engine. In addition to this work, progress has been made on several other development and modeling tasks for the program. These include: (1) improvements to the infrastructure code of the computational engine, (2) enhancements to the model interfacing specifications, (3) additional development to increase the robustness of all framework components, (4) enhanced coupling of the computational and visualization engine components, (5) a series of detailed simulations studying the effects of gasifier inlet conditions on the heat flux to the gasifier injector, and (6) creation of detailed plans for implementing models for mercury capture for both warm and cold gas cleanup.

  19. Materials for high average power lasers

    Energy Technology Data Exchange (ETDEWEB)

    Marion, J.E.; Pertica, A.J.

    1989-01-01

    Unique materials properties requirements for solid state high average power (HAP) lasers dictate a materials development research program. A review of the desirable laser, optical and thermo-mechanical properties for HAP lasers precedes an assessment of the development status for crystalline and glass hosts optimized for HAP lasers. 24 refs., 7 figs., 1 tab.

  20. High power Ka band TWT amplifier

    Energy Technology Data Exchange (ETDEWEB)

    Golkowski, C.; Ivers, J.D.; Nation, J.A.; Wang, P.; Schachter, L.

    1999-07-01

Two high power 35 GHz TWT amplifiers driven by a relativistic pencil electron beam (850 kV, 200 A) have been assembled and tested. The first had a dielectric slow wave structure and was primarily used to develop diagnostics and to gain experience in working with high power systems in Ka band. The source of the input power for the amplifier was a magnetron producing a 30 kW, 200 ns long pulse, of which 10 kW was delivered to the experiment. The 30 cm long dielectric (Teflon) amplifier produced output power levels of about 1 MW with a gain of about 23 dB. These results are consistent with expectations from PIC code simulations for this arrangement. The second amplifier, which is a single stage disk loaded slow wave structure, has been designed. It consists of one hundred uniform cells with two sets of ten tapered cells at the ends to lower the reflection coefficient. The phase advance per cell is {pi}/2. The amplifier passband extends from 28 to 40 GHz. It is designed to increase the output power to about 20 MW. The amplifier is under construction and will be tested in the near future. Details of the design of both systems will be provided and initial results from the new amplifier presented.

  1. Computer-integrated quality management system for power stations. Computer-integriertes Qualitaetsmanagementsystem fuer Kraftwerke

    Energy Technology Data Exchange (ETDEWEB)

    Durst, K.H.; Scheurer, K.; Meinhardt, H. (Siemens AG, Offenbach (Germany). Abt. Qualitaetssicherung)

    1993-03-01

Conventional CAQ systems, which were developed for monitoring mass production, are not very suitable for quality assurance in the construction and operation of plant and power stations. Plant and power station construction involves long-lived products, manufactured in small batches or individually, that must be monitored. So that the quality of these products can be monitored and assured economically and reliably by preventive maintenance measures, it is necessary to combine the plant documentation, repeated tests, and repair or replacement measures in a 'computer-integrated quality management system'. For large complex plants, such as power stations, an operation guidance system was developed which includes all important plant information and makes it available in a user-friendly way to the company's management. The article introduces this system. (orig.).

  2. CVD Diamond Sink Application in High Power 3D MCMs

    Institute of Scientific and Technical Information of China (English)

    XIE Kuo-jun; JIANG Chang-shun; LI Cheng-yue

    2005-01-01

As electronic packages become more compact, run at faster speeds and dissipate more heat, package designers need more effective thermal management materials. CVD diamond, because of its high thermal conductivity, low dielectric loss and great mechanical strength, is an excellent material for three dimensional (3D) multichip modules (MCMs) in the next generation of compact high speed computers and high power microwave components. In this paper, we have synthesized large-area freestanding diamond films and substrates and polished the diamond substrates, making the MCM diamond film heat sink a reality.

  3. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2009-01-01

The discussions and plans on all scientific, advisory, and political levels to realize an even larger “European Supercomputer” in Germany, where the hardware costs alone will be hundreds of millions of Euro – much more than in the past – are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructur...

  4. High Efficiency Microwave Power Amplifier (HEMPA) Design

    Science.gov (United States)

    Sims, W. Herbert

    2004-01-01

This paper will focus on developing an exotic switching technique that enhances the DC-to-RF conversion efficiency of microwave power amplifiers. For years, switching techniques implemented in the 10 kHz to 30 MHz region have resulted in DC-to-RF conversion efficiencies of 90-95 percent. Currently, amplifier conversion efficiency in the 2-3 GHz region approaches 10-20 percent. Using a combination of analytical modeling and hardware testing, a High Efficiency Microwave Power Amplifier was built that demonstrated conversion efficiencies four to five times higher than the current state of the art.

  5. PRaVDA: High Energy Physics towards proton Computed Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Price, T., E-mail: t.price@bham.ac.uk

    2016-07-11

Proton radiotherapy is an increasingly popular modality for treating cancers of the head and neck, and in paediatrics. To maximise the potential of proton radiotherapy it is essential to know the distribution, and more importantly the proton stopping powers, of the body tissues between the proton beam and the tumour. A stopping power map could be measured directly, and uncertainties in the treatment vastly reduced, if the patient were imaged with protons instead of conventional x-rays. Here we outline the application of technologies developed for High Energy Physics to provide clinical-quality proton Computed Tomography, thereby reducing range uncertainties and enhancing the treatment of cancer.

  6. Budget-based power consumption for application execution on a plurality of compute nodes

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Peters, Amanda E; Ratterman, Joseph D; Smith, Brian E

    2013-02-05

    Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.
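The scheme described above (priority assignment, execution at an initial power level, conservation actions once a consumption threshold is crossed) can be sketched in a few lines. This is our own illustration of the idea, not the patented implementation; all names (`run_with_budget`, `halve_power`) are hypothetical.

```python
# Hedged sketch of budget-based power-capped execution: applications run
# in priority order at an initial power level; once cumulative consumption
# crosses the threshold, conservation actions lower the power level.
# All names here are illustrative, not taken from the patent.

def run_with_budget(apps, initial_power, threshold, conservation_actions):
    """apps: list of (name, priority, energy_per_run) tuples."""
    apps = sorted(apps, key=lambda a: a[1], reverse=True)  # high priority first
    consumed = 0.0
    power_level = initial_power
    log = []
    for name, _priority, energy in apps:
        if consumed >= threshold:
            # Budget exceeded: apply each conservation action
            # (e.g. frequency scaling) to reduce the power level.
            for action in conservation_actions:
                power_level = action(power_level)
        # Energy drawn scales with the current power level.
        consumed += energy * (power_level / initial_power)
        log.append((name, power_level))
    return consumed, log

halve_power = lambda p: p / 2  # one possible conservation action
```

With three equal-cost jobs and a threshold of 60 units, the two highest-priority jobs would run at full power and the last at the reduced level, mirroring the claim that conservation actions apply only after the threshold is reached.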

  7. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    Science.gov (United States)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will utilize the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  8. High Power Experiments in VX-10

    Science.gov (United States)

    Squire, Jared; Chang-Diaz, Franklin; Araya-Chacon, Gonzalo; Jacobson, Verlin; Glover, Tim; McCaskill, Greg; Vera, Jerry; Baity, Wally; Carter, Mark; Goulding, Rick

    2004-11-01

In the Advanced Space Propulsion Laboratory VASIMR experiment (VX-10) we have measured a plasma flux to input gas rate ratio near 100% at power levels up to 10 kW. The plasma source is being developed to supply a dense target with a high degree of ionization for ICRF acceleration of the flow in an expanding magnetic field. An upgrade to 20 kW helicon operation is underway. Recent results at Oak Ridge National Laboratory show enhanced efficiency operation with a high power density, over 5 kW in a 5 cm diameter tube. Our helicon is presently 9 cm in diameter, so comparable power densities will be achieved in VX-10. We have operated with a Boswell double-saddle antenna design with a magnetic cusp just upstream of the antenna. Recently we have converted to a double-helix half-turn antenna. ICRF experiments have been performed at 1.5 kW that show significant plasma flow acceleration, doubling the flow velocity. A 10 kW ICRF upgrade is underway. Results from high total power ( ˜ 30 kW) experiments with this new helicon antenna and ICRF acceleration are presented.

  9. High Power RF Test Facility at the SNS

    CERN Document Server

    Kang, Yoon W; Campisi, Isidoro E; Champion, Mark; Crofford, Mark; Davis, Kirk; Drury, Michael A; Fuja, Ray E; Gurd, Pamela; Kasemir, Kay-Uwe; McCarthy, Michael P; Powers, Tom; Shajedul Hasan, S M; Stirbet, Mircea; Stout, Daniel; Tang, Johnny Y; Vassioutchenko, Alexandre V; Wezensky, Mark

    2005-01-01

The RF Test Facility has been completed in the SNS project at ORNL to support test and conditioning operation of RF subsystems and components. The system consists of two transmitters for two klystrons powered by a common high voltage pulsed converter modulator that can provide power to two independent RF systems. The waveguides are configured with WR2100 and WR1150 sizes for the presently used frequencies: 402.5 MHz and 805 MHz. Both the 402.5 MHz and 805 MHz systems have circulator-protected klystrons that can be powered by the modulator, which is capable of delivering 11 MW peak and 1 MW average power. The facility has been equipped with computer control for various RF processing tasks and complete dual frequency operation. More than forty 805 MHz fundamental power couplers for the SNS superconducting linac (SCL) cavities have been RF conditioned in this facility. The facility provides more than 1000 ft² of floor area for various test setups. The facility also has a shielded cave area that can support high power tests of normal conducti...

  10. High-performance computing for airborne applications

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Heather M [Los Alamos National Laboratory; Manuzzato, Andrea [Los Alamos National Laboratory; Fairbanks, Tom [Los Alamos National Laboratory; Dallmann, Nicholas [Los Alamos National Laboratory; Desgeorges, Rose [Los Alamos National Laboratory

    2010-06-28

Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  11. Numerics of High Performance Computers and Benchmark Evaluation of Distributed Memory Computers

    Directory of Open Access Journals (Sweden)

    H. S. Krishna

    2004-07-01

Full Text Available The internal representation of numerical data, their speed of manipulation to generate the desired result through efficient utilisation of central processing unit, memory, and communication links are essential steps of all high performance scientific computations. Machine parameters, in particular, reveal accuracy and error bounds of computation, required for performance tuning of codes. This paper reports diagnosis of machine parameters, measurement of computing power of several workstations, serial and parallel computers, and a component-wise test procedure for distributed memory computers. Hierarchical memory structure is illustrated by block copying and unrolling techniques. Locality of reference for cache reuse of data is amply demonstrated by fast Fourier transform codes. The cache and register-blocking technique results in their optimum utilisation, with a consequent gain in throughput during vector-matrix operations. Implementation of these memory management techniques reduces cache inefficiency loss, which is known to be proportional to the number of processors. Of the Linux clusters ANUP16, HPC22 and HPC64, it has been found from the measurement of intrinsic parameters and from an application benchmark test run of a multi-block Euler code that ANUP16 is suitable for problems that exhibit fine-grained parallelism. The delivered performance of ANUP16 is of immense utility for developing high-end PC clusters like HPC64 and customised parallel computers with the added advantage of speed and a high degree of parallelism.
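The cache-blocking idea mentioned above (processing data in tiles small enough to stay cache-resident so each tile is reused many times) can be illustrated with a blocked matrix-vector product. This is a minimal sketch of ours, not code from the paper; `BLOCK` is a hypothetical tuning parameter chosen to match cache size.

```python
# Sketch of cache blocking for y = A*x: the column loop is tiled so each
# slice of x is reused across every row while still cache-resident.
# BLOCK is a hypothetical tuning parameter, not a value from the paper.

BLOCK = 64

def matvec_blocked(A, x, block=BLOCK):
    n, m = len(A), len(x)
    y = [0.0] * n
    for jj in range(0, m, block):        # walk x in cache-sized tiles
        hi = min(jj + block, m)
        for i in range(n):               # reuse the tile for every row
            s = 0.0
            for j in range(jj, hi):
                s += A[i][j] * x[j]
            y[i] += s
    return y
```

In Python the restructuring is purely illustrative; in a compiled language the same loop tiling reduces cache misses, which is the effect the benchmark measurements above quantify.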

  12. Condor-COPASI: high-throughput computing for biochemical networks

    Directory of Open Access Journals (Sweden)

    Kent Edward

    2012-07-01

    Full Text Available Abstract Background Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage.
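The split-and-gather pattern described above, where a large task is transparently split into smaller parts, submitted as independent jobs, and the outputs merged, can be sketched as follows. A thread pool stands in for a Condor pool here, and `simulate()` is a placeholder for one COPASI model evaluation; none of these names come from the Condor-COPASI API.

```python
# Sketch of the split-and-gather pattern: a parameter scan is cut into
# chunks, each chunk runs as an independent job, and outputs are merged
# in submission order. ThreadPoolExecutor is a local stand-in for a
# Condor pool; simulate() is a placeholder for a COPASI model run.

from concurrent.futures import ThreadPoolExecutor

def simulate(param):
    return param * param          # placeholder model evaluation

def chunks(seq, size):
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def run_chunk(params):
    # One "job": evaluate every parameter set in the chunk.
    return [simulate(p) for p in params]

def parallel_scan(params, chunk_size=4):
    with ThreadPoolExecutor() as pool:
        partial_results = pool.map(run_chunk, chunks(params, chunk_size))
    # Merge: Executor.map preserves submission order.
    return [r for part in partial_results for r in part]
```

The user-facing gain Condor-COPASI provides is exactly this transparency: the caller sees one scan, while the pool sees many small jobs.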

  13. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  14. High-Power, High-Thrust Ion Thruster (HPHTion)

    Science.gov (United States)

    Peterson, Peter Y.

    2015-01-01

    Advances in high-power photovoltaic technology have enabled the possibility of reasonably sized, high-specific power solar arrays. At high specific powers, power levels ranging from 50 to several hundred kilowatts are feasible. Ion thrusters offer long life and overall high efficiency (typically greater than 70 percent efficiency). In Phase I, the team at ElectroDynamic Applications, Inc., built a 25-kW, 50-cm ion thruster discharge chamber and fabricated a laboratory model. This was in response to the need for a single, high-powered engine to fill the gulf between the 7-kW NASA's Evolutionary Xenon Thruster (NEXT) system and a notional 25-kW engine. The Phase II project matured the laboratory model into a protoengineering model ion thruster. This involved the evolution of the discharge chamber to a high-performance thruster by performance testing and characterization via simulated and full beam extraction testing. Through such testing, the team optimized the design and built a protoengineering model thruster. Coupled with gridded ion thruster technology, this technology can enable a wide range of missions, including ambitious near-Earth NASA missions, Department of Defense missions, and commercial satellite activities.

  15. Reduced filamentation in high power semiconductor lasers

    DEFF Research Database (Denmark)

    Skovgaard, Peter M. W.; McInerney, John; O'Brien, Peter

    1999-01-01

    High brightness semiconductor lasers have applications in fields ranging from material processing to medicine. The main difficulty associated with high brightness is that high optical power densities cause damage to the laser facet and thus require large apertures. This, in turn, results in spatio...... in the optical field causes spatial hole-burning and thus filamentation. To reduce filamentation we propose a new, relatively simple design based on inhomogeneous pumping in which the injected current has a gradual transverse profile. We confirm the improved laser performance theoretically and experimentally...

  16. High Power UV LED Industrial Curing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Karlicek, Robert, F., Jr; Sargent, Robert

    2012-05-14

UV curing is a green technology that is largely underutilized because UV radiation sources like Hg lamps are unreliable and difficult to use. High power UV LEDs are now efficient enough to replace Hg lamps, and offer significantly improved performance relative to Hg lamps. In this study, a modular, scalable high power UV LED curing system was designed and tested, performing well in industrial coating evaluations. In order to achieve mechanical form factors similar to commercial Hg lamp systems, a new patent-pending design was employed enabling high irradiance at long working distances. While high power UV LEDs are currently only available at longer UVA wavelengths, rapid progress on UVC LEDs and the development of new formulations designed specifically for use with UV LED sources will converge to drive more rapid adoption of UV curing technology. An assessment of the environmental impact of replacing Hg lamp systems with UV LED systems was performed. Since UV curing is used in only a small portion of the industrial printing, painting and coating markets, the ease of use of UV LED systems should increase the use of UV curing technology. Even a small penetration of the significant number of industrial applications still using oven curing and drying will lead to significant reductions in energy consumption and in the emission of greenhouse gases and solvents.

  17. Website Design Guidelines: High Power Distance and High Context Culture

    Directory of Open Access Journals (Sweden)

    Tanveer Ahmed

    2009-06-01

Full Text Available This paper aims to address the question of offering a culturally adapted website for a local audience. So far, in the website design arena the vast majority of studies have examined mainly Western and American (low power distance and low context) cultures, disregarding possible cultural discrepancies. This study fills this gap and explores the key cultural parameters that are likely to have an impact on local website design for Asian-Eastern cultures (high power distance and high context), correlating with both Hofstede’s and Hall’s cultural dimensions. It also reviews how website localisation may be accomplished more effectively by extracting the guidelines from two different yet compatible cultural dimensions: high power distance and high context.

  18. Linear algebra on high-performance computers

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, J.J.; Sorensen, D.C.

    1986-01-01

    This paper surveys work recently done at Argonne National Laboratory in an attempt to discover ways to construct numerical software for high-performance computers. The numerical algorithms are taken from several areas of numerical linear algebra. We discuss certain architectural features of advanced-computer architectures that will affect the design of algorithms. The technique of restructuring algorithms in terms of certain modules is reviewed. This technique has proved successful in obtaining a high level of transportability without severe loss of performance on a wide variety of both vector and parallel computers. The module technique is demonstrably effective for dense linear algebra problems. However, in the case of sparse and structured problems it may be difficult to identify general modules that will be as effective. New algorithms have been devised for certain problems in this category. We present examples in three important areas: banded systems, sparse QR factorization, and symmetric eigenvalue problems. 32 refs., 10 figs., 6 tabs.
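The module technique described above, restructuring an algorithm so that all heavy work passes through a small set of kernels that can be tuned per machine, can be illustrated with a matrix product assembled from a single block-update module. This is a hedged sketch of the idea only; the names are ours, not from the Argonne report.

```python
# Illustration of the "module" idea: the full product C = A*B is expressed
# as repeated calls to one small kernel, so only that kernel needs
# machine-specific tuning. Names are illustrative, not from the report.

def gemm_module(C, A, B):
    """Portable core: C += A*B for small dense blocks (lists of lists)."""
    for i in range(len(A)):
        for k in range(len(B)):
            aik = A[i][k]
            for j in range(len(B[0])):
                C[i][j] += aik * B[k][j]

def matmul(A, B, bs=2):
    """Full product assembled from block-column updates via the module."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for kk in range(0, m, bs):
        A_blk = [row[kk:kk + bs] for row in A]   # block of columns of A
        B_blk = B[kk:kk + bs]                    # matching rows of B
        gemm_module(C, A_blk, B_blk)
    return C
```

Porting such code to a new machine means retuning only `gemm_module`, which is the transportability-without-performance-loss property the survey credits to the technique.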

  19. A Computational Workbench Environment For Virtual Power Plant Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Bockelie, Michael J.; Swensen, David A.; Denison, Martin K.; Sarofim, Adel F.

    2001-11-06

    In this paper we describe our progress toward creating a computational workbench for performing virtual simulations of Vision 21 power plants. The workbench provides a framework for incorporating a full complement of models, ranging from simple heat/mass balance reactor models that run in minutes to detailed models that can require several hours to execute. The workbench is being developed using the SCIRun software system. To leverage a broad range of visualization tools the OpenDX visualization package has been interfaced to the workbench. In Year One our efforts have focused on developing a prototype workbench for a conventional pulverized coal fired power plant. The prototype workbench uses a CFD model for the radiant furnace box and reactor models for downstream equipment. In Year Two and Year Three, the focus of the project will be on creating models for gasifier based systems and implementing these models into an improved workbench. In this paper we describe our work effort for Year One and outline our plans for future work. We discuss the models included in the prototype workbench and the software design issues that have been addressed to incorporate such a diverse range of models into a single software environment. In addition, we highlight our plans for developing the energyplex based workbench that will be developed in Year Two and Year Three.

  20. The computational power of astrocyte mediated synaptic plasticity

    Directory of Open Access Journals (Sweden)

    Rogier Min

    2012-11-01

    Full Text Available Research in the last two decades has made clear that astrocytes play a crucial role in the brain beyond their functions in energy metabolism and homeostasis. Many studies have shown that astrocytes can dynamically modulate neuronal excitability and synaptic plasticity, and might participate in higher brain functions like learning and memory. With the plethora of astrocyte-mediated signaling processes described in the literature today, the current challenge is to identify which of these processes happen under what physiological condition, and how this shapes information processing and, ultimately, behavior. Answering these questions will require a combination of advanced physiological, genetic and behavioral experiments. Additionally, mathematical modeling will prove crucial for testing predictions on the possible functions of astrocytes in neuronal networks, and to generate novel ideas as to how astrocytes can contribute to the complexity of the brain. Here, we aim to provide an outline of how astrocytes can interact with neurons. We do this by reviewing recent experimental literature on astrocyte-neuron interactions, discussing the dynamic effects of astrocytes on neuronal excitability and short- and long-term synaptic plasticity. Finally, we will outline the potential computational functions that astrocyte-neuron interactions can serve in the brain. We will discuss how astrocytes could govern metaplasticity in the brain, how they might organize the clustering of synaptic inputs, and how they could function as memory elements for neuronal activity. We conclude that astrocytes can enhance the computational power of neuronal networks in previously unexpected ways.
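
    As a flavor of the kind of mathematical modeling the review calls for, the toy sketch below (my own construction, not a model from the paper) lets a slow astrocyte variable set the threshold of a BCM-like plasticity rule, one simple way to express astrocyte-governed metaplasticity:

```python
import numpy as np

def simulate(steps=1000, dt=1.0, seed=0):
    """Toy metaplasticity sketch (illustrative, not from the review):
    an astrocyte variable 'a' low-pass filters presynaptic activity on
    a slow timescale and serves as the sliding threshold 'theta' of a
    Hebbian weight update, so recent activity history gates plasticity."""
    rng = np.random.default_rng(seed)
    w, a = 0.5, 0.0
    tau_a = 200.0                        # astrocyte timescale >> dt
    for _ in range(steps):
        pre = rng.random()               # presynaptic activity in [0, 1)
        post = w * pre                   # linear postsynaptic response
        a += dt * (pre - a) / tau_a      # slow astrocytic integration
        theta = a                        # activity-dependent threshold
        w += 0.001 * pre * (post - theta)  # BCM-like weight update
        w = min(max(w, 0.0), 1.0)        # keep the weight bounded
    return w, a
```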

  1. The computational power of astrocyte mediated synaptic plasticity.

    Science.gov (United States)

    Min, Rogier; Santello, Mirko; Nevian, Thomas

    2012-01-01

    Research in the last two decades has made clear that astrocytes play a crucial role in the brain beyond their functions in energy metabolism and homeostasis. Many studies have shown that astrocytes can dynamically modulate neuronal excitability and synaptic plasticity, and might participate in higher brain functions like learning and memory. With the plethora of astrocyte-mediated signaling processes described in the literature today, the current challenge is to identify which of these processes happen under what physiological condition, and how this shapes information processing and, ultimately, behavior. Answering these questions will require a combination of advanced physiological, genetic, and behavioral experiments. Additionally, mathematical modeling will prove crucial for testing predictions on the possible functions of astrocytes in neuronal networks, and to generate novel ideas as to how astrocytes can contribute to the complexity of the brain. Here, we aim to provide an outline of how astrocytes can interact with neurons. We do this by reviewing recent experimental literature on astrocyte-neuron interactions, discussing the dynamic effects of astrocytes on neuronal excitability and short- and long-term synaptic plasticity. Finally, we will outline the potential computational functions that astrocyte-neuron interactions can serve in the brain. We will discuss how astrocytes could govern metaplasticity in the brain, how they might organize the clustering of synaptic inputs, and how they could function as memory elements for neuronal activity. We conclude that astrocytes can enhance the computational power of neuronal networks in previously unexpected ways.

  2. The computational power of astrocyte mediated synaptic plasticity

    Science.gov (United States)

    Min, Rogier; Santello, Mirko; Nevian, Thomas

    2012-01-01

    Research in the last two decades has made clear that astrocytes play a crucial role in the brain beyond their functions in energy metabolism and homeostasis. Many studies have shown that astrocytes can dynamically modulate neuronal excitability and synaptic plasticity, and might participate in higher brain functions like learning and memory. With the plethora of astrocyte-mediated signaling processes described in the literature today, the current challenge is to identify which of these processes happen under what physiological condition, and how this shapes information processing and, ultimately, behavior. Answering these questions will require a combination of advanced physiological, genetic, and behavioral experiments. Additionally, mathematical modeling will prove crucial for testing predictions on the possible functions of astrocytes in neuronal networks, and to generate novel ideas as to how astrocytes can contribute to the complexity of the brain. Here, we aim to provide an outline of how astrocytes can interact with neurons. We do this by reviewing recent experimental literature on astrocyte-neuron interactions, discussing the dynamic effects of astrocytes on neuronal excitability and short- and long-term synaptic plasticity. Finally, we will outline the potential computational functions that astrocyte-neuron interactions can serve in the brain. We will discuss how astrocytes could govern metaplasticity in the brain, how they might organize the clustering of synaptic inputs, and how they could function as memory elements for neuronal activity. We conclude that astrocytes can enhance the computational power of neuronal networks in previously unexpected ways. PMID:23125832

  3. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    Energy Technology Data Exchange (ETDEWEB)

    Mike Bockelie; Dave Swensen; Martin Denison; Connie Senior; Zumao Chen; Temi Linjewile; Adel Sarofim; Bene Risio

    2003-04-25

    This is the tenth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, good progress has been made on all aspects of the project. Calculations for a full Vision 21 plant configuration have been performed for two gasifier types. An improved process model for simulating entrained flow gasifiers has been implemented into the workbench. Model development has focused on: a pre-processor module to compute global gasification parameters from standard fuel properties and intrinsic rate information; a membrane-based water-gas-shift reactor; and reactors to oxidize fuel cell exhaust gas. The data visualization capabilities of the workbench have been extended by implementing the VTK visualization software, which supports advanced visualization methods, including inexpensive Virtual Reality techniques. The ease-of-use, functionality and plug-and-play features of the workbench were highlighted through demonstrations at a DOE sponsored coal utilization conference. A white paper has been completed that contains recommendations on the use of component architectures, model interface protocols and software frameworks for developing a Vision 21 plant simulator.

  4. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    Energy Technology Data Exchange (ETDEWEB)

    Mike Bockelie; Dave Swensen; Martin Denison; Connie Senior; Zumao Chen; Temi Linjewile; Adel Sarofim; Bene Risio

    2003-01-25

    This is the eighth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, good progress has been made on all aspects of the project. Calculations for a full Vision 21 plant configuration have been performed for two coal types and two gasifier types. Good agreement with DOE computed values has been obtained for the Vision 21 configuration under "baseline" conditions. Additional model verification has been performed for the flowing slag model that has been implemented into the CFD-based gasifier model. Comparisons of the slag, wall and syngas conditions predicted by our model with values from predictive models published by other researchers show good agreement. The software infrastructure of the Vision 21 workbench has been modified to use a recently released, upgraded version of SCIRun.

  5. User manual for PACTOLUS: a code for computing power costs.

    Energy Technology Data Exchange (ETDEWEB)

    Huber, H.D.; Bloomster, C.H.

    1979-02-01

    PACTOLUS is a computer code for calculating the cost of generating electricity. Through appropriate definition of the input data, PACTOLUS can calculate the cost of generating electricity from a wide variety of power plants, including nuclear, fossil, geothermal, solar, and other types of advanced energy systems. The purpose of PACTOLUS is to develop cash flows and calculate the unit busbar power cost (mills/kWh) over the entire life of a power plant. The cash flow information is calculated by two principal models: the Fuel Model and the Discounted Cash Flow Model. The Fuel Model is an engineering cost model which calculates the cash flow for the fuel cycle costs over the project lifetime based on input data defining the fuel material requirements, the unit costs of fuel materials and processes, the process lead and lag times, and the schedule of the capacity factor for the plant. For nuclear plants, the Fuel Model calculates the cash flow for the entire nuclear fuel cycle. For fossil plants, the Fuel Model calculates the cash flow for the fossil fuel purchases. The Discounted Cash Flow Model combines the fuel costs generated by the Fuel Model with input data on the capital costs, capital structure, licensing time, construction time, rates of return on capital, tax rates, operating costs, and depreciation method of the plant to calculate the cash flow for the entire lifetime of the project. The financial and tax structure for both investor-owned utilities and municipal utilities can be simulated by varying the rates of return on equity and debt, the debt-equity ratios, and tax rates. The Discounted Cash Flow Model uses the principle that the present worth of the revenues will be equal to the present worth of the expenses, including the return on investment, over the economic life of the project. This manual explains how to prepare the input data, execute cases, and interpret the output results. (RWR)
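
    The present-worth equality described above determines a levelized busbar cost. A minimal sketch, assuming yearly cost and generation streams (the function name and inputs are illustrative, not PACTOLUS's actual interface):

```python
def levelized_busbar_cost(annual_costs, annual_kwh, discount_rate):
    """Levelized power cost (mills/kWh) from the present-worth
    equality: PV(revenues) = PV(expenses) over the project life.

    annual_costs : yearly expenses in dollars (capital recovery,
                   fuel, O&M, taxes) for years 1..N
    annual_kwh   : yearly net generation in kWh for years 1..N
    """
    def pv(xs):
        # Present value with end-of-year discounting.
        return sum(x / (1 + discount_rate) ** (t + 1)
                   for t, x in enumerate(xs))
    dollars_per_kwh = pv(annual_costs) / pv(annual_kwh)
    return dollars_per_kwh * 1000.0  # dollars/kWh -> mills/kWh
```

    With constant yearly costs of $50M and generation of 10^9 kWh, the result is 50 mills/kWh regardless of the discount rate, since the same discount factors appear in numerator and denominator.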

  6. Power Supplies for High Energy Particle Accelerators

    Science.gov (United States)

    Dey, Pranab Kumar

    2016-06-01

    The on-going research and development projects with the Large Hadron Collider at CERN, Geneva, Switzerland have generated enormous enthusiasm and interest in the ultimate findings on the 'God's Particle'. This paper attempts to unfold the power supply requirements and the methodology adopted to meet the stringent demands of such high energy particle accelerators during the initial stages of the search for the ultimate particles. An attempt has also been made to highlight the present status of power supply requirements in some high energy accelerators, so that precautionary measures drawn from earlier experience can be applied during design and development, which will be of help for the proposed third generation synchrotron to be installed in India at great cost.

  7. High-Power Wind Turbine: Performance Calculation

    Directory of Open Access Journals (Sweden)

    Goldaev Sergey V.

    2015-01-01

    Full Text Available The paper is devoted to high-power wind turbine performance calculation using Pearson's chi-squared test of the statistical hypothesis that the population of air velocities follows a Weibull-Gnedenko distribution. The distribution parameters are found by numerical solution of a transcendental equation, with the gamma function evaluated by an interpolation formula. Values of the operating characteristic of the incomplete gamma function are obtained by numerical integration using Weddle's rule. Comparison of results calculated with the proposed methodology against those obtained by other authors revealed significant differences in the values of the sample variance and the empirical Pearson statistic. The influence of the initial and maximum wind speeds on the performance of a high-power wind turbine is analyzed.
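
    The procedure of fitting a Weibull-Gnedenko distribution by numerically solving a transcendental equation, then checking the fit with Pearson's chi-squared statistic, can be sketched as follows (a maximum-likelihood variant under my own assumptions, not necessarily the authors' exact formulation):

```python
import numpy as np

def fit_weibull_mle(v, k_lo=0.1, k_hi=20.0, tol=1e-10):
    """Maximum-likelihood Weibull-Gnedenko fit for wind speeds v > 0.

    The shape parameter k solves the transcendental equation
        sum(v^k ln v)/sum(v^k) - 1/k - mean(ln v) = 0,
    which is bracketed and solved here by bisection."""
    lv = np.log(v)
    def g(k):
        vk = v ** k
        return (vk * lv).sum() / vk.sum() - 1.0 / k - lv.mean()
    lo, hi = k_lo, k_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:      # g is increasing in k
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    lam = np.mean(v ** k) ** (1.0 / k)   # scale parameter
    return k, lam

def pearson_chi2(v, k, lam, nbins=10):
    """Pearson chi-squared statistic comparing binned speeds with the
    fitted Weibull CDF F(x) = 1 - exp(-(x/lam)^k)."""
    edges = np.linspace(0.0, v.max() * 1.001, nbins + 1)
    obs, _ = np.histogram(v, bins=edges)
    cdf = 1.0 - np.exp(-(edges / lam) ** k)
    exp = len(v) * np.diff(cdf)
    mask = exp > 0
    return ((obs[mask] - exp[mask]) ** 2 / exp[mask]).sum()
```

    For a large sample drawn from a Weibull distribution, `fit_weibull_mle` recovers the shape and scale closely and the chi-squared statistic stays near its degrees of freedom.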

  8. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    Energy Technology Data Exchange (ETDEWEB)

    Mike Bockelie; Dave Swensen; Martin Denison; Zumao Chen; Mike Maguire; Adel Sarofim; Changguan Yang; Hong-Shig Shim

    2004-01-28

    This is the thirteenth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a Virtual Engineering-based framework for simulating the performance of Advanced Power Systems. Within the last quarter, good progress has been made on all aspects of the project. Software development efforts have focused on a preliminary detailed software design for the enhanced framework. Given the complexity of the individual software tools from each team (i.e., Reaction Engineering International, Carnegie Mellon University, Iowa State University), a robust, extensible design is required for the success of the project. In addition to achieving a preliminary software design, significant progress has been made on several development tasks for the program. These include: (1) the enhancement of the controller user interface to support detachment from the Computational Engine and support for multiple computer platforms, (2) modification of the Iowa State University interface-to-kernel communication mechanisms to meet the requirements of the new software design, (3) decoupling of the Carnegie Mellon University computational models from their parent IECM (Integrated Environmental Control Model) user interface for integration with the new framework and (4) development of a new CORBA-based model interfacing specification. A benchmarking exercise to compare process and CFD based models for entrained flow gasifiers was completed. A summary of our work on intrinsic kinetics for modeling coal gasification has been completed. Plans for implementing soot and tar models into our entrained flow gasifier models are outlined. Plans for implementing a model for mercury capture based on conventional capture technology, but applied to an IGCC system, are outlined.

  9. A Low-Power Scalable Stream Compute Accelerator for General Matrix Multiply (GEMM)

    Directory of Open Access Journals (Sweden)

    Antony Savich

    2014-01-01

    play an important role in determining the performance of such applications. This paper proposes a novel, efficient, highly scalable hardware accelerator that is of equivalent performance to a 2 GHz quad core PC but can be used in low-power applications targeting embedded systems requiring high performance computation. Power, performance, and resource consumption are demonstrated on a fully-functional prototype. The proposed hardware accelerator is 36× more energy efficient per unit of computation compared to a state-of-the-art Xeon processor of equal vintage and is 14× more efficient as a stand-alone platform with equivalent performance. An important comparison between simulated system estimates and real system performance is carried out.
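
    The blocking that such a GEMM accelerator exploits can be shown with a reference tiled multiply (an illustrative sketch, not the accelerator's datapath): each tile, once loaded, is reused across a whole block of output, which is what cuts memory traffic and energy per operation relative to a naive triple loop:

```python
import numpy as np

def tiled_gemm(A, B, tile=32):
    """Reference tiled GEMM, C = A @ B.

    Streaming fixed-size tiles through the accumulation mirrors the
    data reuse a stream-compute GEMM accelerator relies on: each
    (A-tile, B-tile) pair contributes a full tile-sized update to C."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must agree"
    C = np.zeros((m, n), dtype=A.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                # NumPy slicing clips cleanly at the matrix edges.
                C[i:i+tile, j:j+tile] += (
                    A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
                )
    return C
```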

  10. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  11. High power/large area PV systems

    Science.gov (United States)

    Wise, Joseph; Baraona, Cosmo

    1987-01-01

    The major photovoltaic power system technology drivers for a wide variety of mission types were ranked. Each technology driver was ranked on a scale of high, medium, or low in terms of importance to each particular mission type. The rankings were then compiled to determine the overall importance of each driver over the entire range of space missions. In each case cost was ranked the highest.

  12. Theoretical computation and analysis of benefits of wood cutting power

    Institute of Scientific and Technical Information of China (English)

    MA Yan; YANG Chunmei; ZHAN Li

    2006-01-01

    This paper addresses the problem of high energy waste in wood fiber processing in the wood-based panel industry. In light of the energy economy principle, a cutting theory for micron-scale, long-slice wood fiber is put forward. By analyzing the power wasted in traditional processing, a series of analytical approaches, including cytology, ultra-precision machining theory, and fiber processing, were applied to the formation of micron wood fiber, and the concept of cutting micron-scale, long-slice wood fiber was proposed, moving the study of such fiber to the microstructural level. The paper explains why traditional wood fiber processing consumes more energy while yielding lower fiber quality. In a worked example, the cutting power for micron-scale, long-slice wood fiber is calculated and compared with the traditional cutting power. The result shows that the energy consumed by machining at the micron scale is much lower than by hot grinding, while high-quality, long-slice wood fiber is obtained. This represents a significant step forward for the paper-making and wood-based panel industries of China.

  13. Power management systems for sediment microbial fuel cells in high power and continuous power applications

    Science.gov (United States)

    Donovan, Conrad Koble

    The objective of this dissertation was to develop power management systems (PMS) for sediment microbial fuel cells (SMFCs) for high-power and continuous applications. The first part of this dissertation covers a new method for testing the performance of SMFCs: a device called the microbial fuel cell tester, developed to automatically test the power generation of a PMS. The second part focuses on a PMS capable of delivering high power in burst mode, meaning that for a short time a large amount of power, up to 2.5 W, can be delivered from an SMFC generating only mW-level power. The third part is aimed at developing a multi-potentiostat laboratory tool that measures the performance of microbial fuel cells at fixed cell potentials so that I can optimize them for use with the PMS. This tool is capable of controlling the anode or cathode potential and measuring the current of six separate SMFCs simultaneously. By operating multiple potentiostats, I was able to run experiments that find ideal operating conditions for the sediment microbial fuel cells, and also optimize the power management system for these conditions. The fourth part of the dissertation targets a PMS able to operate a sensor continuously, powered by an SMFC. In previous applications involving SMFCs, the PMS operated in batch mode. In this PMS, the firmware on the submersible ultrasonic receiver (SUR) was modified for use with my PMS. This integration of PMS and SUR allowed continuous operation of the SUR without using a battery. Finally, the last part of the dissertation recommends a scale-up power management system to overcome the linear scale-up issue of SMFCs as future work. Concluding remarks are added to summarize the goal and focus of this dissertation.

  14. The future of high power laser techniques

    Science.gov (United States)

    Poprawe, Reinhart; Loosen, Peter; Hoffmann, Hans-Dieter

    2007-05-01

    High power lasers have been used for years in corresponding applications. New areas and new processes are constantly being demonstrated, developed and transferred to fruitful use in industry. With the advent of diode-pumped solid state lasers in the multi-kW power regime at beam qualities not far from the diffraction limit, a new area of applicability has opened. In welding applications, speeds could be increased and systems could be developed with higher efficiency, leading also to new perspectives for increased productivity, e.g. in combined processing. Quality control is increasingly demanded by the applying industries; however, applications are still rare. Higher resolution of coaxial process control systems in time and space, combined with new strategies in signal processing, could give rise to new applications. The general approach described in this paper emphasizes that laser applications can be developed more efficiently, more precisely and with higher quality if the laser radiation is tailored properly to the corresponding application. In applying laser sources, the applicable parameter ranges are far wider and more flexible than those of heat, mechanical or even electrical energy. The time frame ranges from several fs to continuous wave, spanning approximately 15 orders of magnitude. Spatially, the foci range from several µm to cm, and the resulting intensities suitable for materials processing span eight orders of magnitude, from 10^3 to 10^11 W/cm^2. In addition to space (power, intensity) and time (pulse), the wavelength can be chosen as a further parameter of optimization. As a consequence, the resulting new applications are vast and can be utilized in almost every market segment of our global economy (Fig. 1). In the past, and only partly today, however, this flexibility of laser technology has not been exploited in full in materials processing, basically because in the high power regime the lasers with tailored beam properties are not

  15. CIDER: Enabling Robustness-Power Tradeoffs on a Computational Eyeglass.

    Science.gov (United States)

    Mayberry, Addison; Tun, Yamin; Hu, Pan; Smith-Freedman, Duncan; Ganesan, Deepak; Marlin, Benjamin; Salthouse, Christopher

    2015-09-01

    The human eye offers a fascinating window into an individual's health, cognitive attention, and decision making, but we lack the ability to continually measure these parameters in the natural environment. The challenges lie in: a) handling the complexity of continuous high-rate sensing from a camera and processing the image stream to estimate eye parameters, and b) dealing with the wide variability in illumination conditions in the natural environment. This paper explores the power-robustness tradeoffs inherent in the design of a wearable eye tracker, and proposes a novel staged architecture that enables graceful adaptation across the spectrum of real-world illumination. We propose CIDER, a system that operates in a highly optimized low-power mode under indoor settings by using a fast Search-Refine controller to track the eye, but detects when the environment switches to more challenging outdoor sunlight and switches models to operate robustly under this condition. Our design is holistic and tackles a) power consumption in digitizing pixels, estimating pupillary parameters, and illuminating the eye via near-infrared, b) error in estimating pupil center and pupil dilation, and c) model training procedures that involve zero effort from a user. We demonstrate that CIDER can estimate pupil center with error less than two pixels (0.6°), and pupil diameter with error of one pixel (0.22 mm). Our end-to-end results show that we can operate at power levels of roughly 7 mW at a 4 Hz eye tracking rate, or roughly 32 mW at rates upwards of 250 Hz.
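
    The staged architecture can be sketched with a toy controller (function names, the darkest-pixel "detector", and the lux threshold are my illustrative assumptions, not the paper's API): a cheap path searches only near the previous pupil estimate, and a robust full-frame path takes over when illumination suggests outdoor sunlight:

```python
import numpy as np

def robust_detect(frame):
    """Robust path: full-frame scan, darkest pixel as a stand-in pupil."""
    return np.unravel_index(np.argmin(frame), frame.shape)

def search_refine(frame, prev, win=8):
    """Low-power path: scan only a small window around the last estimate."""
    r, c = prev
    r0, c0 = max(r - win, 0), max(c - win, 0)
    sub = frame[r0:r + win + 1, c0:c + win + 1]
    dr, dc = np.unravel_index(np.argmin(sub), sub.shape)
    return (r0 + dr, c0 + dc)

def track_frame(frame, ambient_lux, state, lux_threshold=10_000):
    """Staged controller in the spirit of CIDER: stay on the cheap
    search-refine path indoors, fall back to the robust path when the
    light level jumps or no previous estimate exists."""
    if ambient_lux < lux_threshold and state.get("pupil") is not None:
        pupil = search_refine(frame, state["pupil"])
    else:
        pupil = robust_detect(frame)
    state["pupil"] = pupil
    return pupil
```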

  16. High performance computing and communications panel report

    Energy Technology Data Exchange (ETDEWEB)

    1992-12-01

    In FY92, a presidential initiative entitled High Performance Computing and Communications (HPCC) was launched, aimed at securing U.S. preeminence in high performance computing and related communication technologies. The stated goal of the initiative is threefold: extend U.S. technological leadership in high performance computing and computer communications; provide wide dissemination and application of the technologies; and spur gains in U.S. productivity and industrial competitiveness, all within the context of the mission needs of federal agencies. Because of the importance of the HPCC program to the national well-being, especially its potential implications for industrial competitiveness, the Assistant to the President for Science and Technology has asked that the President's Council of Advisors on Science and Technology (PCAST) establish a panel to advise PCAST on the strengths and weaknesses of the HPCC program. The report presents a program analysis based on strategy, balance, management, and vision. Both constructive recommendations for program improvement and positive reinforcement of successful program elements are contained within the report.

  17. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Science.gov (United States)

    2010-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...

  18. 47 CFR 15.32 - Test procedures for CPU boards and computer power supplies.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Test procedures for CPU boards and computer... FREQUENCY DEVICES General § 15.32 Test procedures for CPU boards and computer power supplies. Power supplies and CPU boards used with personal computers and for which separate authorizations are required to be...

  19. High-power LEDs for plant cultivation

    Science.gov (United States)

    Tamulaitis, Gintautas; Duchovskis, Pavelas; Bliznikas, Zenius; Breive, Kestutis; Ulinskaite, Raimonda; Brazaityte, Ausra; Novickovas, Algirdas; Zukauskas, Arturas; Shur, Michael S.

    2004-10-01

    We report on a high-power solid-state lighting facility for cultivation of greenhouse vegetables and on the results of a study of the control of photosynthetic activity and growth morphology of radish and lettuce imposed by variation of the spectral composition of illumination. Experimental lighting modules (useful area of 0.22 m2) were designed based on 4 types of high-power light-emitting diodes (LEDs) with emission peaked in red at the wavelengths of 660 nm and 640 nm (predominantly absorbed by chlorophyll a and b for photosynthesis, respectively), in blue at 455 nm (phototropic function), and in far-red at 735 nm (important for photomorphology). Morphological characteristics and chlorophyll and phytohormone concentrations in radish and lettuce grown in phytotron chambers under lighting with different spectral composition of the LED-based illuminator and under illumination by high pressure sodium lamps with an equivalent photosynthetic photon flux density were compared. A well-balanced solid-state lighting was found to enhance production of green mass and to ensure healthy morphogenesis of plants compared to those grown using conventional lighting. We observed that the plant morphology and the concentrations of morphologically active phytohormones are strongly affected by the spectral composition of light in the red region. Commercial application of LED-based illumination for large-scale plant cultivation is discussed. This technology is favorable from the point of view of energy consumption, controllable growth, and food safety, but is hindered by the high cost of the LEDs. Large scale manufacturing of high-power red AlInGaP-based LEDs emitting at 650 nm and a further decrease of the photon price for LEDs emitting in the vicinity of the absorption peak of chlorophylls have to be achieved to promote horticulture applications.

  20. High-Precision Computation and Mathematical Physics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Borwein, Jonathan M.

    2008-11-03

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
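
    A minimal illustration of why precision beyond IEEE 64-bit arithmetic matters, using Python's standard decimal module as a stand-in for the high-precision packages the paper surveys: at magnitudes above 2^53 the double-precision spacing exceeds 1, so adding 1 is silently lost, while a 50-digit working precision preserves it.

```python
from decimal import Decimal, getcontext

# IEEE double: 1e16 + 1 rounds back to 1e16, so the difference is 0.
double_result = (1e16 + 1.0) - 1e16      # catastrophic loss of the +1

# 50 significant decimal digits keep the unit intact.
getcontext().prec = 50
x = Decimal(10) ** 16
high_prec_result = (x + 1) - x           # exactly 1
```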

  1. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1 B (US). Large scale (many m2) processing of materials requires the economical production of laser powers in the tens of kilowatts, and therefore such processes are not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  2. High-power LED package requirements

    Science.gov (United States)

    Wall, Frank; Martin, Paul S.; Harbers, Gerard

    2004-01-01

    Power LEDs have evolved from simple indicators into illumination devices. For general lighting applications, where the objective is to light up an area, white LED arrays have been utilized to serve that function. Cost constraints will soon drive the industry to provide a discrete lighting solution. Early on, that will mean increasing power densities while quantum efficiencies are addressed. For applications such as automotive headlamps & projection, where light needs to be tightly collimated or controlled, arrays of die or LEDs will not be able to satisfy the requirements & limitations defined by etendue. Ultimately, whether a luminaire requires a small source with high luminance, or light spread over a general area, economics will force the evolution of the illumination LED into a compact discrete high power package. How the customer interfaces with this new package should be an important element considered early in the design cycle. If an LED footprint of adequate size is not provided, it may prove impossible for the customer, or end user, to get rid of the heat in a manner sufficient to prevent premature LED light output degradation. Therefore it is critical, for maintaining expected LED lifetime & light output, that thermal performance parameters be defined, by design, at the system level, which includes heat sinking methods & interface materials or methodology.

  3. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    Science.gov (United States)

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-01-10

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
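
The mechanism claimed above (each node throttles itself on entering a blocking operation and restores power only once every node has begun) maps naturally onto a barrier with a completion action. The sketch below is a minimal illustration of that idea, not the patented implementation; the `power_state` table and the node count are hypothetical stand-ins for real hardware controls.

```python
import threading

NUM_NODES = 4
# Hypothetical per-node power states; a real system would throttle
# hardware components (CPU, network links) rather than set a string.
power_state = {n: "full" for n in range(NUM_NODES)}
lock = threading.Lock()

def restore_all():
    # Runs once, when every node has begun the blocking operation:
    # restore power to all components that were reduced.
    with lock:
        for n in power_state:
            power_state[n] = "full"

barrier = threading.Barrier(NUM_NODES, action=restore_all)

def compute_node(node_id):
    # Each node begins the blocking operation asynchronously and
    # immediately reduces power to some of its components...
    with lock:
        power_state[node_id] = "reduced"
    # ...then blocks until all nodes have begun; the barrier action
    # restores power before any node proceeds.
    barrier.wait()

threads = [threading.Thread(target=compute_node, args=(n,)) for n in range(NUM_NODES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert all(s == "full" for s in power_state.values())
```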

  4. The JLab high power ERL light source

    Energy Technology Data Exchange (ETDEWEB)

    G.R. Neil; C. Behre; S.V. Benson; M. Bevins; G. Biallas; J. Boyce; J. Coleman; L.A. Dillon-Townes; D. Douglas; H.F. Dylla; R. Evans; A. Grippo; D. Gruber; J. Gubeli; D. Hardy; C. Hernandez-Garcia; K. Jordan; M.J. Kelley; L. Merminga; J. Mammosser; W. Moore; N. Nishimori; E. Pozdeyev; J. Preble; R. Rimmer; Michelle D. Shinn; T. Siggins; C. Tennant; R. Walker; G.P. Williams and S. Zhang

    2005-03-19

    A new THz/IR/UV photon source at Jefferson Lab is the first of a new generation of light sources based on an Energy-Recovered (superconducting) Linac (ERL). The machine has a 160 MeV electron beam and an average current of 10 mA in hundred-femtosecond bunches at a 75 MHz repetition rate. These electron bunches pass through a magnetic chicane and therefore emit synchrotron radiation. For wavelengths longer than the electron bunch, the electrons radiate coherently a broadband THz ~ half-cycle pulse whose average brightness is > 5 orders of magnitude higher than synchrotron IR sources. Previous measurements showed 20 W of average power extracted [1]. The new facility offers simultaneous synchrotron light from the visible through the FIR along with broadband THz production of 100 fs pulses with > 200 W of average power. The FELs also provide record-breaking laser power [2]: up to 10 kW of average power in the IR from 1 to 14 microns in 400 fs pulses at up to 74.85 MHz repetition rates, and soon will produce similar pulses of 300-1000 nm light at up to 3 kW of average power from the UV FEL. These ultrashort pulses are ideal for maximizing the interaction with material surfaces. The optical beams are Gaussian with nearly perfect beam quality. See www.jlab.org/FEL for details of the operating characteristics; a wide variety of pulse train configurations are feasible, from 10 microseconds long at high repetition rates to continuous operation. The THz and IR system has been commissioned; the UV system is to follow in 2005. The light is transported to user laboratories for basic and applied research. Additional lasers synchronized to the FEL are also available. Past activities have included production of carbon nanotubes, studies of vibrational relaxation of interstitial hydrogen in silicon, pulsed laser deposition and ablation, nitriding of metals, and energy flow in proteins. This paper will present the status of the system and discuss some of the discoveries we have made.

  5. Microstructured fibers for high power applications

    Science.gov (United States)

    Baggett, J. C.; Petrovich, M. N.; Hayes, J. R.; Finazzi, V.; Poletti, F.; Amezcua, R.; Broderick, N. G. R.; Richardson, D. J.; Monro, T. M.; Salter, P. L.; Proudley, G.; O'Driscoll, E. J.

    2005-10-01

    Fiber delivery of intense laser radiation is important for a broad range of application sectors, from medicine through to industrial laser processing of materials, and offers many practical system design and usage benefits relative to free space solutions. Optical fibers for high power transmission applications need to offer low optical nonlinearity and high damage thresholds. Single-mode guidance is also often a fundamental requirement for the many applications in which good beam quality is critical. In recent years, microstructured fiber technology has revolutionized the dynamic field of optical fibers, bringing with it a wide range of novel optical properties. These fibers, in which the cladding region is peppered with many small air holes, are separated into two distinct categories, defined by the way in which they guide light: (1) index-guiding holey fibers (HFs), in which the core is solid and light is guided by a modified form of total internal reflection, and (2) photonic band-gap fibers (PBGFs) in which guidance in a hollow core can be achieved via photonic band-gap effects. Both of these microstructured fiber types offer attractive qualities for beam delivery applications. For example, using HF technology, large-mode-area, pure silica fibers with robust single-mode guidance over broad wavelength ranges can be routinely fabricated. In addition, the ability to guide light in an air-core within PBGFs presents obvious power handling advantages. In this paper we review the fundamentals and current status of high power, high brightness, beam delivery in HFs and PBGFs, and speculate as to future prospects.

  6. PREFACE: High Performance Computing Symposium 2011

    Science.gov (United States)

    Talon, Suzanne; Mousseau, Normand; Peslherbe, Gilles; Bertrand, François; Gauthier, Pierre; Kadem, Lyes; Moitessier, Nicolas; Rouleau, Guy; Wittig, Rod

    2012-02-01

    HPCS (High Performance Computing Symposium) is a multidisciplinary conference that focuses on research involving High Performance Computing and its application. Attended by Canadian and international experts and renowned researchers in the sciences, all areas of engineering, the applied sciences, medicine and life sciences, mathematics, the humanities and social sciences, it is Canada's pre-eminent forum for HPC. The 25th edition was held in Montréal, at the Université du Québec à Montréal, from 15-17 June and focused on HPC in Medical Science. The conference was preceded by tutorials held at Concordia University, where 56 participants learned about HPC best practices, GPU computing, parallel computing, debugging and a number of high-level languages. 274 participants from six countries attended the main conference, which involved 11 invited and 37 contributed oral presentations, 33 posters, and an exhibit hall with 16 booths from our sponsors. The work that follows is a collection of papers presented at the conference covering HPC topics ranging from computer science to bioinformatics. They are divided here into four sections: HPC in Engineering, Physics and Materials Science, HPC in Medical Science, HPC Enabling to Explore our World and New Algorithms for HPC. We would once more like to thank the participants and invited speakers, the members of the Scientific Committee, the referees who spent time reviewing the papers and our invaluable sponsors. To hear the invited talks and learn about 25 years of HPC development in Canada visit the Symposium website: http://2011.hpcs.ca/lang/en/conference/keynote-speakers/ Enjoy the excellent papers that follow, and we look forward to seeing you in Vancouver for HPCS 2012! Gilles Peslherbe Chair of the Scientific Committee Normand Mousseau Co-Chair of HPCS 2011 Suzanne Talon Chair of the Organizing Committee UQAM Sponsors The PDF also contains photographs from the conference banquet.

  7. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  8. Computing an operating parameter of a unified power flow controller

    Science.gov (United States)

    Wilson, David G; Robinett, III, Rush D

    2015-01-06

    A Unified Power Flow Controller described herein comprises a sensor that outputs at least one sensed condition, a processor that receives the at least one sensed condition, a memory that comprises control logic that is executable by the processor; and power electronics that comprise power storage, wherein the processor causes the power electronics to selectively cause the power storage to act as one of a power generator or a load based at least in part upon the at least one sensed condition output by the sensor and the control logic, and wherein at least one operating parameter of the power electronics is designed to facilitate maximal transmittal of electrical power generated at a variable power generation system to a grid system while meeting power constraints set forth by the electrical power grid.
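
The control loop described in this record (a processor that, based on a sensed condition, makes the power storage act as either a generator or a load) can be sketched as a simple decision function. The patent does not specify which condition is sensed; the use of grid frequency, the nominal values, and the deadband below are all illustrative assumptions.

```python
# Hypothetical controller: switch the power storage between acting as a
# load and acting as a generator based on one sensed condition. Grid
# frequency is an assumed choice of sensed condition, not from the patent.
NOMINAL_HZ = 60.0
DEADBAND_HZ = 0.05   # assumed tolerance band around nominal

def storage_mode(sensed_frequency_hz):
    """Return 'generator', 'load', or 'idle' for the power storage."""
    if sensed_frequency_hz < NOMINAL_HZ - DEADBAND_HZ:
        return "generator"   # under-frequency: inject stored power
    if sensed_frequency_hz > NOMINAL_HZ + DEADBAND_HZ:
        return "load"        # over-frequency: absorb excess generation
    return "idle"            # within deadband: neither source nor sink

assert storage_mode(59.90) == "generator"
assert storage_mode(60.10) == "load"
assert storage_mode(60.00) == "idle"
```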

  9. Transportable high-energy high-power generator.

    Science.gov (United States)

    Novac, B M; Smith, I R; Senior, P; Parker, M; Louverdis, G

    2010-05-01

    High-power applications sometimes require a transportable, simple, and robust gigawatt pulsed power generator, and an analysis of various possible approaches shows that one based on a twin exploding wire array is extremely advantageous. A generator based on this technology and used with a high-energy capacitor bank has recently been developed at Loughborough University. An H-configuration circuit is used, with one pair of diagonally opposite arms each comprising a high-voltage ballast inductor and the other pair exploding wire arrays capable of generating voltages up to 300 kV. The two center points of the H configuration provide the output to the load, which is coupled through a high-voltage self-breakdown spark gap, with the entire autonomous source being housed in a metallic container. Experimentally, a load resistance of a few tens of Ohms is provided with an impulse of more than 300 kV, having a rise time of about 140 ns and a peak power of over 1.7 GW. Details of the experimental arrangement and typical results are presented and diagnostic measurements of the current and voltage output are shown to compare well with theoretical predictions based on detailed numerical modeling. Finally, the next stage toward developing a more powerful and energetic transportable source is outlined.
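
The quoted figures are internally consistent: for a resistive load, peak power is V^2/R. The text gives only "a few tens of Ohms", so the 50 ohm value below is an assumption chosen to illustrate the check.

```python
# Consistency check on the reported output, assuming a purely resistive
# load of ~50 ohms ("a few tens of Ohms"; the exact value is not given).
V_peak = 300e3        # output impulse, volts
R_load = 50.0         # assumed load resistance, ohms
P_peak = V_peak**2 / R_load
print(f"{P_peak / 1e9:.2f} GW")   # 1.80 GW, consistent with "over 1.7 GW"
```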

  10. Monitoring SLAC High Performance UNIX Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.

  11. Problem-Oriented Simulation Packages and Computational Infrastructure for Numerical Studies of Powerful Gyrotrons

    Science.gov (United States)

    Damyanova, M.; Sabchevski, S.; Zhelyazkov, I.; Vasileva, E.; Balabanova, E.; Dankov, P.; Malinov, P.

    2016-05-01

    Powerful gyrotrons are necessary as sources of strong microwaves for electron cyclotron resonance heating (ECRH) and electron cyclotron current drive (ECCD) of magnetically confined plasmas in various reactors (most notably ITER) for controlled thermonuclear fusion. Adequate physical models and efficient problem-oriented software packages are essential tools for numerical studies, analysis, optimization and computer-aided design (CAD) of such high-performance gyrotrons operating in a CW mode and delivering output power of the order of 1-2 MW. In this report we present the current status of our simulation tools (physical models, numerical codes, pre- and post-processing programs, etc.) as well as the computational infrastructure on which they are being developed, maintained and executed.

  12. High Efficiency Reversible Fuel Cell Power Converter

    DEFF Research Database (Denmark)

    Pittini, Riccardo

    The large scale integration of renewable energy sources requires suitable energy storage systems to balance energy production and demand in the electrical grid. Bidirectional fuel cells are an attractive technology for energy storage systems due to the high energy density of fuel. Compared to traditional unidirectional fuel cells, bidirectional fuel cells have increased operating voltage and current ranges. These characteristics increase the stresses on dc-dc and dc-ac converters in the electrical system, which require proper design and advanced optimization. This work is part of the PhD project entitled "High Efficiency Reversible Fuel Cell Power Converter" and it presents the design of a high efficiency dc-dc converter developed and optimized for bidirectional fuel cell applications. First, a brief overview of fuel cell and energy storage technologies is presented. Different system topologies...

  13. High Available COTS Based Computer for Space

    Science.gov (United States)

    Hartmann, J.; Magistrati, Giorgio

    2015-09-01

    The availability and reliability factors of a system are central requirements of a target application. From a simple fuel injection system used in cars up to the flight control system of an autonomously navigating spacecraft, each application defines its specific availability factor under the target application boundary conditions. Increasing quality requirements on data processing systems used in space flight applications call for new architectures to fulfill the availability and reliability requirements as well as the increase in required data processing power. In contrast to the increased quality requirements, simplification and the use of COTS components to decrease costs, while keeping interface compatibility with currently used system standards, are clear customer needs. Data processing system design is mostly dominated by strict fulfillment of the customer requirements, and reuse of available computer systems was not always possible because of obsolescence of EEE parts, insufficient IO capabilities, or the fact that available data processing systems did not provide the required scalability and performance.

  14. Hybrid high power femtosecond laser system

    Science.gov (United States)

    Trunov, V. I.; Petrov, V. V.; Pestryakov, E. V.; Kirpichnikov, A. V.

    2006-01-01

    The design of a high-power femtosecond laser system based on a hybrid chirped pulse amplification (CPA) technique developed by us is presented. The goal of the hybrid principle is the combined use of parametric and laser amplification methods in chirped pulse amplifiers. It makes it possible to amplify few-cycle pulses with a duration of <= fs to terawatt power with a high contrast and high conversion efficiency of the pump radiation. In the created system, a Ti:sapphire laser with 10 fs pulses at 810 nm and an output energy of about 1-3 nJ will be used as the seed source. The oscillator pulses were stretched to a duration of about 500 ps by an all-reflective grating stretcher. The stretched pulses are then injected into a nondegenerate noncollinear optical parametric amplifier (NOPA) on two BBO crystals. After amplification in the NOPA, the residual pump was used in a bow-tie four-pass amplifier with a hybrid active medium (based on Al2O3:Ti3+ and BeAl2O4:Ti3+ crystals). The final stage of the amplification system consists of two channels, namely NIR (820 nm) and short-VIS (410 nm). Numerical simulation has shown that the terawatt level of output power can also be achieved in the short-VIS channel by pumping the double-crystal BBO NOPA with the radiation of the fourth harmonic of the Nd:YAG laser at 266 nm. Experimentally, parametric amplification in BBO crystals of 30-50 fs pulses was investigated and optimized using the SPIDER technique and a single-shot autocorrelator, realizing a shortest duration of 40 fs.

  15. Photovoltaics for high capacity space power systems

    Science.gov (United States)

    Flood, Dennis J.

    1988-01-01

    The anticipated energy requirements of future space missions will grow by factors approaching 100 or more, particularly as a permanent manned presence is established in space. The advances that can be expected in solar array performance and lifetime, when coupled with advanced, high energy density storage batteries and/or fuel cells, will continue to make photovoltaic energy conversion a viable power generating option for the large systems of the future. The specific technologies required to satisfy any particular set of power requirements will vary from mission to mission. Nonetheless, in almost all cases the technology push will be toward lighter weight and higher efficiency, whether of solar arrays or storage devices. This paper will describe the content and direction of the current NASA program in space photovoltaic technology. The paper will also discuss projected system level capabilities of photovoltaic power systems in the context of some of the new mission opportunities under study by NASA, such as a manned lunar base, and a manned visit to Mars.

  16. High-power converters and AC drives

    CERN Document Server

    Wu, Bin

    2017-01-01

    This new edition reflects the recent technological advancements in the MV drive industry, such as advanced multilevel converters and drive configurations. It includes three new chapters, Control of Synchronous Motor Drives, Transformerless MV Drives, and Matrix Converter Fed Drives. In addition, there are extensively revised chapters on Multilevel Voltage Source Inverters and Voltage Source Inverter-Fed Drives. This book includes a systematic analysis of a variety of high-power multilevel converters, illustrates important concepts with simulations and experiments, introduces various megawatt drives produced by world leading drive manufacturers, and addresses practical problems and their mitigation methods.

  17. A COMPUTATIONAL WORKBENCH ENVIRONMENT FOR VIRTUAL POWER PLANT SIMULATION

    Energy Technology Data Exchange (ETDEWEB)

    Mike Bockelie; Dave Swensen; Martin Denison

    2002-01-31

    This is the fifth Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-00NT41047. The goal of the project is to develop and demonstrate a computational workbench for simulating the performance of Vision 21 Power Plant Systems. Within the last quarter, our efforts have become focused on developing an improved workbench for simulating a gasifier based Vision 21 energyplex. To provide for interoperability of models developed under Vision 21 and other DOE programs, discussions have been held with DOE and other organizations developing plant simulator tools to review the possibility of establishing a common software interface or protocol to use when developing component models. A component model that employs the CCA protocol has successfully been interfaced to our CCA enabled workbench. To investigate the software protocol issue, DOE has selected a gasifier based Vision 21 energyplex configuration for use in testing and evaluating the impacts of different software interface methods. A Memo of Understanding with the Cooperative Research Centre for Coal in Sustainable Development (CCSD) in Australia has been completed that will enable collaborative research efforts on gasification issues. Preliminary results have been obtained for a CFD model of a pilot scale, entrained flow gasifier. A paper was presented at the Vision 21 Program Review Meeting at NETL (Morgantown) that summarized our accomplishments for Year One and plans for Year Two and Year Three.

  18. VLab: A Science Gateway for Distributed First Principles Calculations in Heterogeneous High Performance Computing Systems

    Science.gov (United States)

    da Silveira, Pedro Rodrigo Castro

    2014-01-01

    This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…

  19. Computational Analysis of Powered Lift Augmentation for the LEAPTech Distributed Electric Propulsion Wing

    Science.gov (United States)

    Deere, Karen A.; Viken, Sally A.; Carter, Melissa B.; Viken, Jeffrey K.; Wiese, Michael R.; Farr, Norma L.

    2017-01-01

    A computational study of a distributed electric propulsion wing with a 40deg flap deflection has been completed using FUN3D. Two lift-augmentation power conditions were compared with the power-off configuration on the high-lift wing (40deg flap) at a 73 mph freestream flow and for a range of angles of attack from -5 degrees to 14 degrees. The computational study also included investigating the benefit of corotating versus counter-rotating propeller spin direction to powered-lift performance. The results indicate a large benefit in lift coefficient, over the entire range of angle of attack studied, by using corotating propellers that all spin counter to the wingtip vortex. For the landing condition, 73 mph, the unpowered 40deg flap configuration achieved a maximum lift coefficient of 2.3. With high-lift blowing the maximum lift coefficient increased to 5.61. Therefore, the lift augmentation is a factor of 2.4. Taking advantage of the full-span lift augmentation at similar performance means that a wing powered with the distributed electric propulsion system requires only 42 percent of the wing area of the unpowered wing. This technology will allow wings to be 'cruise optimized', meaning that they will be able to fly closer to maximum lift-over-drag conditions at the design cruise speed of the aircraft.
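
The two headline numbers in this abstract follow directly from the quoted lift coefficients: the augmentation factor is the ratio of maximum lift coefficients, and, since lift scales as wing area times lift coefficient at fixed dynamic pressure, the required wing area at equal lift scales as the inverse ratio.

```python
# Checking the lift-augmentation arithmetic quoted above.
CL_unpowered = 2.30   # max lift coefficient, 40 deg flap, power off
CL_blown = 5.61       # max lift coefficient with high-lift blowing

augmentation = CL_blown / CL_unpowered    # ~2.44, quoted as 2.4
# At equal lift (L = q * S * CL), required wing area scales as 1/CL:
area_fraction = CL_unpowered / CL_blown   # ~0.41, close to the quoted 42 percent
print(f"augmentation {augmentation:.2f}, area fraction {area_fraction:.0%}")
```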

  20. High Performance Low Cost Digitally Controlled Power Conversion Technology

    DEFF Research Database (Denmark)

    Jakobsen, Lars Tønnes

    2008-01-01

    Digital control of switch-mode power supplies and converters has within the last decade evolved from being an academic subject to an emerging market in the power electronics industry. This development has been pushed mainly by the computer industry that is looking towards digital power management...

  1. Analysis of Highly Wind Power Integrated Power System model performance during Critical Weather conditions

    DEFF Research Database (Denmark)

    Basit, Abdul; Hansen, Anca Daniela; Sørensen, Poul Ejnar

    2014-01-01

    Secure power system operation of a highly wind power integrated power system is always at risk during critical weather conditions, e.g. in extreme high winds. The risk is even higher when 50% of the total electricity consumption has to be supplied by wind power, as is the case for the future Danish power system in 2020. For this purpose, a power system model has been developed that represents the relevant dynamic features of power plants and compensates for power imbalances caused by the forecasting error during critical weather conditions. The regulating power plan, as an input time series for the developed power system model... This paper analyses and compares the performance of the future Danish power system during extreme wind speeds, where wind power plants are either controlled through a traditional High Wind Shut Down storm controller or a new High Wind Extended Production storm controller...

  2. A New Hard Switching Bidirectional Converter With High Power Density

    Directory of Open Access Journals (Sweden)

    Bahador Fani

    2010-01-01

    In this paper, a new isolated dc-dc bidirectional converter is proposed. This converter consists of two transformers (flyback and forward) and only one switch on the primary side and one switch on the secondary side of the transformers. In this converter, energy transfers to the output in both the on and off switch states, so the power density of the converter is high. The converter is controlled by a PWM signal. It also operates over a wide input voltage range. Theoretical analysis is presented, and computer simulation and experimental results verify the converter analysis.

  3. Study of Efficient Utilization of Power using green Computing

    OpenAIRE

    Ms. Dheera Jadhwani, Mr. Mayur Agrawal, Mr. Hemant Mande

    2012-01-01

    Green computing or green IT basically concerns environmentally sustainable computing or IT. The field of green computing is defined as "the knowledge and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems—which include printers, monitors, and networking, storage devices and communications systems—efficiently and effectively with minimal or no impact on the environment." This computing is similar to green chemistry, that is, minimum utilization o...

  4. Optimal Operation of Plug-In Electric Vehicles in Power Systems with High Wind Power Penetrations

    DEFF Research Database (Denmark)

    Hu, Weihao; Su, Chi; Chen, Zhe

    2013-01-01

    The Danish power system has a large penetration of wind power. The wind fluctuation causes a high variation in the power generation, which must be balanced by other sources. The battery storage based Plug-In Electric Vehicles (PEV) may be a possible solution to balance the wind power variations in power systems with high wind power penetrations. In this paper, the integration of plug-in electric vehicles in power systems with high wind power penetrations is proposed and discussed. Optimal operation strategies of PEV in the spot market are proposed in order to decrease the energy cost for PEV...

  5. Scale Law of the High Power Free Electron Laser

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The scale law and design procedure of the high power FEL are discussed. It is pointed out that the extraction efficiency, which is the critical factor of the output power besides the power of the electron

  6. Improved cooling design for high power waveguide system

    Science.gov (United States)

    Chen, W. C. J.; Hartop, R.

    1981-06-01

    Testing of X band high power components in a traveling wave resonator indicates that this improved cooling design reduces temperatures in the waveguide and flange. The waveguide power handling capability and power transmission reliability are increased substantially.

  7. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    CERN Document Server

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  8. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    Science.gov (United States)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  9. High power parallel ultrashort pulse laser processing

    Science.gov (United States)

    Gillner, Arnold; Gretzki, Patrick; Büsing, Lasse

    2016-03-01

    The class of ultra-short-pulse (USP) laser sources is used whenever high-precision, high-quality material processing is demanded. These laser sources deliver pulse durations in the range of ps to fs and are characterized by high peak intensities leading to direct vaporization of the material with minimal thermal damage. With the availability of industrial laser sources with an average power of up to 1000 W, the main challenge consists of effective energy distribution and disposition. Using lasers with high repetition rates in the MHz region can cause thermal issues like overheating, melt production and low ablation quality. In this paper, we will discuss different approaches to multibeam processing for the utilization of high pulse energies. The combination of diffractive optics and conventional galvanometer scanners can be used for high-throughput laser ablation, but is limited in optical quality. We will show which applications can benefit from this hybrid optic and which improvements in productivity are expected. In addition, the optical limitations of the system will be compiled in order to evaluate the suitability of this approach for any given application.

  10. Reducing power consumption while performing collective operations on a plurality of compute nodes

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2011-10-18

    Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.
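
The selection step described above (each node choosing, for a given collective type, the particular implementation with the best power-consumption characteristics) reduces to a lookup and a minimization. The sketch below is illustrative only: the operation names and power values are hypothetical, not from the patent.

```python
# Hypothetical power-consumption characteristics (e.g. joules per call)
# for several implementations of each collective operation type.
POWER_PROFILE = {
    "allreduce": {"recursive_doubling": 14.0, "ring": 9.5, "tree": 11.0},
    "broadcast": {"binomial_tree": 6.0, "scatter_allgather": 7.5},
}

def select_collective(op_type):
    """Pick the implementation with the lowest power-consumption figure."""
    candidates = POWER_PROFILE[op_type]
    return min(candidates, key=candidates.get)

assert select_collective("allreduce") == "ring"
assert select_collective("broadcast") == "binomial_tree"
```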

  11. Fibrous zinc anodes for high power batteries

    Science.gov (United States)

    Zhang, X. Gregory

    This paper introduces newly developed solid zinc anodes using fibrous material for high power applications in alkaline and large size zinc-air battery systems. The improved performance of the anodes in these two battery systems is demonstrated. The possibilities for control of electrode porosity and for anode/battery design using fibrous materials are discussed in light of experimental data. Because of its mechanical integrity and connectivity, the fibrous solid anode has good electrical conductivity, mechanical stability, and design flexibility for controlling mass distribution, porosity and effective surface area. Experimental data indicated that alkaline cells made of such anodes can have a larger capacity at high discharging currents than commercially available cells. It showed even greater improvement over commercial cells with a non-conventional cell design. Large capacity anodes for a zinc-air battery have also been made and have shown excellent material utilization at various discharge rates. The zinc-air battery was used to power an electric bicycle and demonstrated good results.

  12. Digitally Controlled High Availability Power Supply

    Energy Technology Data Exchange (ETDEWEB)

    MacNair, David; /SLAC

    2009-05-07

    This paper reports on the test results of a prototype 1320 W power module for a high-availability power supply. The module allows parallel operation for N+1 redundancy with hot-swap capability. The two-quadrant output of each module allows pairs of modules to provide four-quadrant (bipolar) operation. Each module employs a novel 4-FET buck regulator arranged in a bridge configuration. Each side of the bridge alternately conducts through a small saturable ferrite that limits the reverse current in the FET body diode during turn-off. This allows hard switching of the FETs with low switching losses. The module is designed with over-rated components to provide high reliability and better than 97% efficiency at full load. The modules use a Microchip DSP for control, monitoring, and fault detection. The switching FETs are driven by PWM modules in the DSP at 60 kHz. A dual CAN bus interface provides low-cost redundant control paths. The DSP also provides current sharing between modules, synchronized switching, and soft start-up for hot swapping. The input and output of each module have low-resistance FETs to allow hot swapping and isolation of faulted units.

  13. Design and characterization of a novel power over fiber system integrating a high power diode laser

    Science.gov (United States)

    Perales, Mico; Yang, Mei-huan; Wu, Cheng-liang; Hsu, Chin-wei; Chao, Wei-sheng; Chen, Kun-hsein; Zahuranec, Terry

    2017-02-01

    High-power 9xx nm diode lasers, along with MH GoPower's (MHGP's) flexible line of Photovoltaic Power Converters (PPCs), are spurring high-power applications for power over fiber (PoF), including powering remote sensors and sensors monitoring high-voltage equipment, powering high-voltage IGBT gate drivers, converters used in RF over Fiber (RFoF) systems, and system power applications, including powering UAVs. In PoF, laser power is transmitted over fiber and converted to electricity by photovoltaic cells (packaged into Photovoltaic Power Converters, or PPCs) which efficiently convert the laser light. In this research, we design a high-power multi-channel PoF system incorporating a high-power 976 nm diode laser, a cabling system with fiber-break detection, and a multichannel PPC module. We then characterize system features such as its response time to system commands, the PPC module's electrical output stability, the PPC module's thermal response, the fiber-break detection system response, and the diode laser optical output stability. The high-power PoF system and this research will serve as a scalable model for those interested in researching, developing, or deploying a high-power, voltage-isolated, and optically driven power source for high-reliability utility, communications, defense, and scientific applications.

  14. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  15. Computational Efficiency of Economic MPC for Power Systems Operation

    DEFF Research Database (Denmark)

    Standardi, Laura; Poulsen, Niels Kjølstad; Jørgensen, John Bagterp

    2013-01-01

    In this work, we propose an Economic Model Predictive Control (MPC) strategy to operate power systems that consist of independent power units. The controller balances the power supply and demand, minimizing production costs. The control problem is formulated as a linear program that is solved...
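The linear-program formulation described above can be illustrated with a toy dispatch problem. This is a sketch under stated assumptions, not the paper's controller: the costs, capacities, and demand profile are invented, and the receding-horizon (MPC) loop is omitted.

```python
import numpy as np
from scipy.optimize import linprog

# Toy sketch of the linear-program formulation described in the abstract:
# dispatch two independent power units over a three-step horizon so that
# supply meets demand at minimum production cost. Costs, capacities, and
# the demand profile below are invented for illustration.

T = 3                        # horizon length (steps)
costs = [10.0, 25.0]         # production cost per unit ($/MWh, assumed)
caps = [50.0, 40.0]          # capacity per unit (MW, assumed)
demand = [30.0, 60.0, 80.0]  # demand per step (MW, assumed)

n = len(costs)
c = np.tile(costs, T)                    # objective: total production cost
A_eq = np.zeros((T, T * n))              # one supply-demand balance row per step
for t in range(T):
    A_eq[t, t * n:(t + 1) * n] = 1.0
bounds = [(0.0, caps[u]) for _ in range(T) for u in range(n)]

res = linprog(c, A_eq=A_eq, b_eq=demand, bounds=bounds, method="highs")
plan = res.x.reshape(T, n)               # plan[t, u] = MW from unit u at step t
print(plan)                              # the cheap unit is dispatched first
```

In an MPC setting, only the first step of `plan` would be applied before the problem is re-solved with updated demand forecasts.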

  16. Series-Tuned High Efficiency RF-Power Amplifiers

    DEFF Research Database (Denmark)

    Vidkjær, Jens

    2008-01-01

    An approach to high efficiency RF-power amplifier design is presented. It simultaneously addresses efficiency optimization and peak voltage limitations when transistors are pushed towards their power limits.

  17. K-band high power latching switch

    Science.gov (United States)

    Mlinar, M. J.; Piotrowski, W. S.; Raue, J. E.

    1980-12-01

    A 19 GHz waveguide latching switch with a bandwidth of 1400 MHz and an exceptionally low insertion loss of 0.25 dB was demonstrated. The RF and driver ferrites are separate structures and can be optimized individually. This analysis for each structure is separately detailed. Basically, the RF section features a dual turnstile junction. The circulator consists of a dielectric tube which contains two ferrite rods, and a dielectric spacer separating the ferrite parts along the center of symmetry of the waveguide to form two turnstiles. This subassembly is indexed and locked in the center of symmetry of a uniform junction of three waveguides by the metallic transformers installed in the top and bottom walls of the housing. The switching junction and its actuating circuitry met all RF performance objectives and all shock and vibration requirements with no physical damage or performance degradation. It exceeds thermal requirements by operating over a 100 C temperature range (-44 C to +56 C) and has a high power handling capability allowing up to 100 W of CW input power.

  18. High Energy High Power Battery Exceeding PHEV40 Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Rempel, Jane [TIAX LLC, Lexington, MA (United States)

    2016-03-31

    TIAX has developed long-life lithium-ion cells that can meet and exceed the energy and power targets (200Wh/kg and 800W/kg pulse power) set out by DOE for PHEV40 batteries. To achieve these targets, we selected and scaled-up a high capacity version of our proprietary high energy and high power CAM-7® cathode material. We paired the cathode with a blended anode containing Si-based anode material capable of delivering high capacity and long life. Furthermore, we optimized the anode blend composition, cathode and anode electrode design, and selected binder and electrolyte compositions to achieve not only the best performance, but also long life. By implementing CAM-7 with a Si-based blended anode, we built and tested prototype 18650 cells that delivered measured specific energy of 198Wh/kg total energy and 845W/kg at 10% SOC (projected to 220Wh/kg in state-of-the-art 18650 cell hardware and 250Wh/kg in 15Ah pouch cells). These program demonstration cells achieved 90% capacity retention after 500 cycles in on-going cycle life testing. Moreover, we also tested the baseline CAM-7/graphite system in 18650 cells showing that 70% capacity retention can be achieved after ~4000 cycles (20 months of on-going testing). Ultimately, by simultaneously meeting the PHEV40 power and energy targets and providing long life, we have developed a Li-ion battery system that is smaller, lighter, and less expensive than current state-of-the-art Li-ion batteries.

  19. A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers

    Energy Technology Data Exchange (ETDEWEB)

    Potok, Thomas E [ORNL; Schuman, Catherine D [ORNL; Young, Steven R [ORNL; Patton, Robert M [ORNL; Spedalieri, Federico [University of Southern California, Information Sciences Institute; Liu, Jeremy [University of Southern California, Information Sciences Institute; Yao, Ke-Thia [University of Southern California, Information Sciences Institute; Rose, Garrett [University of Tennessee (UT); Chakma, Gangotree [University of Tennessee (UT)

    2016-01-01

    Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing units (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.

  20. The path toward HEP High Performance Computing

    Science.gov (United States)

    Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-06-01

    High Energy Physics code has been known for making poor use of high-performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "High Performance" implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in memory consumption compared to the single-threaded version, together with sub-optimal handling of event-processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grained parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit best from
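The particle-vector scheduling idea in this abstract can be sketched in miniature. This is an illustrative assumption-laden toy, not the Geant-V framework: the basket size, the worker count, and the `transport` step are all invented.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch of the scheduling idea in the abstract: particles are
# grouped into fixed-size vectors ("baskets") and dispatched to an arbitrary
# number of workers, instead of processing one event per thread.
# All names and numbers here are assumptions, not Geant-V code.

BASKET_SIZE = 16

def make_baskets(particles, size=BASKET_SIZE):
    """Split the particle list into fixed-size vectors (the last may be short)."""
    return [particles[i:i + size] for i in range(0, len(particles), size)]

def transport(basket):
    """Stand-in for a vectorized propagation step applied to one basket."""
    return [p * 1.01 for p in basket]  # e.g. advance each particle's energy

particles = list(range(100))
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(transport, make_baskets(particles)))

print(len(results))  # 7 baskets (6 full + 1 partial)
```

Grouping work into baskets is what lets each worker apply the same operation to a contiguous vector of particles, which is the prerequisite for exploiting instruction-level (SIMD) parallelism.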

  1. Application of modern computer technology to EPRI (Electric Power Research Institute) nuclear computer programs: Final report

    Energy Technology Data Exchange (ETDEWEB)

    Feinauer, L.R.

    1989-08-01

    Many of the nuclear analysis programs in use today were designed and developed well over a decade ago. Within this time frame, tremendous changes in hardware and software technologies have made it necessary to revise and/or restructure most of the analysis programs to take advantage of these changes. As computer programs mature from the development phase to being production programs, program maintenance and portability become very important issues. The maintenance costs associated with a particular computer program can generally be expected to exceed the total development costs by as much as a factor of two. Many of the problems associated with high maintenance costs can be traced back to either poorly designed coding structure, or ''quick fix'' modifications which do not preserve the original coding structure. The lack of standardization between hardware designs presents an obstacle to the software designer in providing 100% portable coding; however, conformance to certain guidelines can ensure portability between a wide variety of machines and operating systems. This report presents guidelines for upgrading EPRI nuclear computer programs to conform to current programming standards while maintaining flexibility for accommodating future hardware and software design trends. Guidelines for development of new computer programs are also presented. 22 refs., 10 figs.

  2. Study of Efficient Utilization of Power using green Computing

    Directory of Open Access Journals (Sweden)

    Ms. Dheera Jadhwani, Mr. Mayur Agrawal, Mr. Hemant Mande

    2012-12-01

    Full Text Available Green computing, or green IT, basically concerns environmentally sustainable computing or IT. The field of green computing is defined as "the knowledge and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems (including printers, monitors, and networking, storage devices, and communications systems) efficiently and effectively with minimal or no impact on the environment." This computing is similar to green chemistry: minimizing the use of hazardous materials, maximizing energy efficiency during the product's lifetime, and promoting the recyclability or biodegradability of defunct products and factory waste.

  3. Innovations in high power fiber laser applications

    Science.gov (United States)

    Beyer, Eckhard; Mahrle, Achim; Lütke, Matthias; Standfuss, Jens; Brückner, Frank

    2012-02-01

    Diffraction-limited high-power lasers represent a new generation of lasers for materials processing, whose characteristic traits are smaller, cost-effective systems and processing "on the fly". Of utmost importance is the high beam quality of fiber lasers, which enables us to reduce the size of the focusing head, including the scanning mirrors. The excellent beam quality of the fiber laser offers many new applications. In the field of remote cutting and welding, beam quality is the key parameter. By reducing the size of the focusing head, including the scanning mirrors, we can reach scanning frequencies up to 1.5 kHz, and in special configurations up to 4 kHz. Using these frequencies, very thin and deep welding seams can be generated, previously achieved only with electron beam welding. The excellent beam quality of the fiber laser offers high potential for developing new applications, from deep-penetration welding to high-speed cutting. Highly dynamic cutting systems with maximum speeds up to 300 m/min and accelerations up to 4 g reduce the time for cutting complex 2D parts. However, due to the inertia of such systems, the effective cutting speed is reduced in real applications. This is especially true if complex shapes or contours are cut. With the introduction of scanner-based remote cutting systems in the kilowatt range, the effective cutting speed on the contour can be dramatically increased. The presentation explains remote cutting of metal foils and sheets using high-brightness single-mode fiber lasers. It will also show the effect of optical feedback during cutting and welding with the fiber laser, how this feedback can be reduced, and how it has to be used to optimize the cutting or welding process.

  4. High power solid state laser modulator

    Science.gov (United States)

    Birx, Daniel L.; Ball, Don G.; Cook, Edward G.

    2004-04-27

    A multi-stage magnetic modulator provides a pulse train of ±40 kV electrical pulses at a 5-7 kHz repetition rate to a metal vapor laser. A fractional-turn transformer steps up the voltage by a factor of 80 to 1, and magnetic pulse compression is used to reduce the pulse width of the pulse train. The transformer is fabricated using a rod-and-plate stack type of construction to achieve a high packing factor. The pulses are controlled by an SCR stack in which a plurality of SCRs are electrically connected in parallel, each SCR electrically connected to a saturable inductor, all saturable inductors being wound on the same core of magnetic material for enhanced power-handling characteristics.

  5. High power coherent polarization locked laser diode.

    Science.gov (United States)

    Purnawirman; Phua, P B

    2011-03-14

    We have coherently combined a broad-area laser diode array to obtain high-power single-lobed output by using coherent polarization locking. The single-lobed coherent beam is achieved by spatially combining four diode emitters using walk-off crystals and waveplates while their phases are passively locked via polarization discrimination. While our previous work focused on coherent polarization locking of diodes with Gaussian beams, we demonstrate in this paper the feasibility of the same polarization discrimination for locking multimode beams from broad-area diode lasers. The resonator is designed to mitigate the loss from the smile effect by using retro-reflection feedback in the cavity. In a 980 nm diode array, we produced 7.2 W of coherent output with an M2 of 1.5 x 11.5. The brightness of the diode is improved by more than an order of magnitude.

  6. Splitting of high power, cw proton beams

    CERN Document Server

    Facco, Alberto; Berkovits, Dan; Yamane, Isao

    2007-01-01

    A simple method for splitting a high-power, continuous-wave (cw) proton beam into two or more branches with low losses has been developed in the framework of the EURISOL (European Isotope Separation On-Line Radioactive Ion Beam Facility) design study. The aim of the system is to deliver up to 4 MW of H- beam to the main radioactive ion beam production target, and up to 100 kW of proton beams to three more targets, simultaneously. A three-step method is used, which includes magnetic neutralization of a fraction of the main H- beam, magnetic splitting of H- and H0, and stripping of H0 to H+. The method allows slow raising and individual fine adjustment of the beam intensity in each branch.

  7. High-temperature alloys for high-power thermionic systems

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Kwang S.; Jacobson, D.L.; D' cruz, L.; Luo, Anhua; Chen, Bor-Ling.

    1990-08-01

    The need for structural materials with useful strength above 1600 K has stimulated interest in refractory-metal alloys. Tungsten possesses an extremely high modulus of elasticity as well as the highest melting temperature among metals, and hence is considered one of the most promising candidate materials for high-temperature structural applications such as space nuclear power systems. This report is divided into three chapters covering the following: (1) the processing of tungsten-base alloys; (2) the tensile properties of tungsten-base alloys; and (3) the creep behavior of tungsten-base alloys. Separate abstracts were prepared for each chapter. (SC)

  8. High Power High Efficiency Ka-Band Power Combiners for Solid-State Devices

    Science.gov (United States)

    Freeman, Jon C.; Wintucky, Edwin G.; Chevalier, Christine T.

    2006-01-01

    Wide-band power combining units for Ka-band are simulated for use in MMIC amplifier applications. Short-slot couplers as well as magic-tees are the basic elements of the combiners. Wide bandwidth (5 GHz), low insertion loss (approx. 0.2 dB), and high combining efficiencies (approx. 90 percent) are obtained.

  9. Method and apparatus for improved high power impulse magnetron sputtering

    Science.gov (United States)

    Anders, Andre

    2013-11-05

    A high power impulse magnetron sputtering apparatus and method using a vacuum chamber with a magnetron target and a substrate positioned in the vacuum chamber. A field coil is positioned between the magnetron target and the substrate, with a pulsed power supply and/or a coil bias power supply connected to the field coil. The pulsed power supply outputs power pulse widths of greater than 100 µs.

  10. Optimization Studies for ISOL Type High-Powered Targets

    Energy Technology Data Exchange (ETDEWEB)

    Remec, Igor [Oak Ridge National Laboratory; Ronningen, Reginald Martin [Michigan State University

    2013-09-24

    The research studied one-step and two-step Isotope Separation On-Line (ISOL) targets for future radioactive beam facilities with high driver-beam power through advanced computer simulations. Uranium carbide in the form of foils was used as the target material, because of the increasing demand for actinide targets in rare-isotope beam facilities and because such material was under development in ISAC at TRIUMF when this project started. Simulations of effusion were performed for one-step and two-step targets, and the effects of target dimensions and foil matrix were studied. Diffusion simulations were limited by the availability of diffusion parameters for UCx material at reduced density; however, the viability of the combined diffusion-effusion simulation methodology was demonstrated, and it could be used to extract physical parameters such as diffusion coefficients and effusion delay times from experimental isotope release curves. Dissipation of the heat from the isotope-producing targets is the limiting factor for high-power beam operation, both for the direct and two-step targets. Detailed target models were used to simulate proton beam interactions with the targets to obtain the fission rates and power deposition distributions, which were then applied in heat transfer calculations to study the performance of the targets. Results indicate that a direct target, with specifications matching the ISAC TRIUMF target, could operate in a 500-MeV proton beam at beam powers up to ~40 kW, producing ~8 x 10^13 fissions/s with a maximum temperature in UCx below 2200 °C. Targets with a larger radius allow higher beam powers and fission rates. For target radii in the range of 9 mm to 30 mm, the achievable fission rate increases almost linearly with target radius; however, the effusion delay time also increases linearly with target radius.

  12. High performance computing for beam physics applications

    Science.gov (United States)

    Ryne, R. D.; Habib, S.

    Several countries are now involved in efforts aimed at utilizing accelerator-driven technologies to solve problems of national and international importance. These technologies have both economic and environmental implications. The technologies include waste transmutation, plutonium conversion, neutron production for materials science and biological science research, neutron production for fusion materials testing, fission energy production systems, and tritium production. All of these projects require a high-intensity linear accelerator that operates with extremely low beam loss. This presents a formidable computational challenge: One must design and optimize over a kilometer of complex accelerating structures while taking into account beam loss to an accuracy of 10 parts per billion per meter. Such modeling is essential if one is to have confidence that the accelerator will meet its beam loss requirement, which ultimately affects system reliability, safety and cost. At Los Alamos, the authors are developing a capability to model ultra-low loss accelerators using the CM-5 at the Advanced Computing Laboratory. They are developing PIC, Vlasov/Poisson, and Langevin/Fokker-Planck codes for this purpose. With slight modification, they have also applied their codes to modeling mesoscopic systems and astrophysical systems. In this paper, they will first describe HPC activities in the accelerator community. Then they will discuss the tools they have developed to model classical and quantum evolution equations. Lastly they will describe how these tools have been used to study beam halo in high current, mismatched charged particle beams.

  13. High Power Combiner/Divider Design for Dual Band RF Power Amplifiers

    OpenAIRE

    Flattery, Kyle; Amin, Shoaib; Rönnow, Daniel; Mahamat, Yaya; Eroglu, Abdullah

    2015-01-01

    The design of a low-loss power divider/combiner with an enhanced thermal profile for high-power dual-band Radio Frequency (RF) power amplifier applications is given. The practical implementation, low loss, and substrate characteristics make this type of combiner ideal for high-power microwave applications. The combiner operational frequencies are chosen as 900 MHz and 2.14 GHz, which are common frequencies for concurrent dual-band RF power amplifiers. The analytical results are verified ...

  14. Evaluation of high-performance computing software

    Energy Technology Data Exchange (ETDEWEB)

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high-performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations that are difficult to find elsewhere.

  16. Ultra-high resolution computed tomography imaging

    Energy Technology Data Exchange (ETDEWEB)

    Paulus, Michael J. (Knoxville, TN); Sari-Sarraf, Hamed (Knoxville, TN); Tobin, Jr., Kenneth William (Harriman, TN); Gleason, Shaun S. (Knoxville, TN); Thomas, Jr., Clarence E. (Knoxville, TN)

    2002-01-01

    A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180°, and after each incremental rotation, repeating the irradiating, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.
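
The deconvolution step can be illustrated with a small 1-D sketch. The Gaussian transfer function and the regularization constant below are assumptions for illustration, not the patent's experimentally determined values:

```python
import numpy as np

# Sketch of the projection-correction step: deconvolve one projection
# with a known detector transfer function (Wiener-style regularization).
n = 256
x = np.zeros(n)
x[100:130] = 1.0                           # ideal projection (a slab)

# Assumed transfer function: Gaussian blur, sigma = 3 pixels
k = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)
k /= k.sum()
H = np.fft.fft(np.fft.ifftshift(k))        # transfer function in Fourier space

blurred = np.real(np.fft.ifft(np.fft.fft(x) * H))   # what the detector sees

# Regularized (Wiener-style) deconvolution: dividing by H directly would
# amplify noise where |H| is small, so damp those frequencies.
eps = 1e-3                                 # regularization constant, assumed
G = np.conj(H) / (np.abs(H) ** 2 + eps)
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * G))
```

The cone-beam reconstruction then consumes one such corrected projection per rotation increment.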

  17. Rotating Detonation Combustion: A Computational Study for Stationary Power Generation

    Science.gov (United States)

    Escobar, Sergio

    The increased availability of gaseous fossil fuels in the US has led to substantial growth in the use of stationary Gas Turbines (GT) for electrical power generation. In fact, from 2013 to 2014, of the 11 terawatt-hours per day produced from fossil fuels, approximately 27% was generated through the combustion of natural gas in stationary GT. The thermodynamic efficiency of simple-cycle GT has increased from 20% to 40% during the last six decades, mainly due to research and development in the fields of combustion science, materials science and machine design. However, additional improvements have become more costly and more difficult to obtain as the technology is further refined. An alternative way to improve GT thermal efficiency is the implementation of a combustion regime leading to a pressure gain, rather than a pressure loss, across the combustor. One concept being considered for this purpose is Rotating Detonation Combustion (RDC). RDC refers to a combustion regime in which a detonation wave propagates continuously in the azimuthal direction of a cylindrical annular chamber. In RDC, the fuel and oxidizer, injected from separate streams, are mixed near the injection plane and are then consumed by the detonation front traveling inside the annular gap of the combustion chamber. The detonation products then expand in the azimuthal and axial directions away from the detonation front and exit through the combustion chamber outlet. In the present study Computational Fluid Dynamics (CFD) is used to predict the performance of Rotating Detonation Combustion (RDC) at operating conditions relevant to GT applications. As part of this study, a modeling strategy for RDC simulations was developed. The validation of the model was performed using benchmark cases with different levels of complexity. First, 2D simulations of non-reactive shock tubes and detonation tubes were performed. The numerical predictions that were obtained using different modeling parameters were compared with

  18. Modeling of the dynamics of wind to power conversion including high wind speed behavior

    DEFF Research Database (Denmark)

    Litong-Palima, Marisciel; Bjerge, Martin Huus; Cutululis, Nicolaos Antonio

    2016-01-01

    This paper proposes and validates an efficient, generic and computationally simple dynamic model for the conversion of the wind speed at hub height into the electrical power by a wind turbine. This proposed wind turbine model was developed as a first step to simulate wind power time series for power system studies. This paper focuses on describing and validating the single wind turbine model, and therefore describes neither wind speed modeling nor the aggregation of contributions from a whole wind farm or a power system area. The state-of-the-art is to use static power curves for the purpose of power system studies, but the idea of the proposed wind turbine model is to include the main dynamic effects in order to have a better representation of the fluctuations in the output power and of the fast power ramping, especially because of high wind speed shutdowns of the wind turbine. The high wind...

  19. Software for computation of power losses in unbalanced and harmonic polluted industrial electric networks

    Energy Technology Data Exchange (ETDEWEB)

    Chindris, Mircea; Cziker, Andrei; Miron, Anca [Technical Univ. of Cluj, Napoca (Romania). Power Systems Dept.

    2007-07-01

    The electromagnetic phenomena in industrial electric power networks have reached a level of complexity so high that their accurate knowledge has required the development of complex software products, including expert systems, which can satisfy these expectations. In particular, unbalanced and non-sinusoidal working conditions have a negative impact on both the individual components of the electrical system and the whole system. Knowledge of distribution losses, which depend on the technical and operating characteristics of the electric network and on the distortion and unbalance degree of the current and/or voltage waveforms, is necessary in order to establish the network parameters and working state. Power loss calculations allow setting the electrical energy distribution cost, estimating the efficiency of loss reduction solutions, etc. To compute power losses in industrial electric networks, taking into account the complex mathematical calculations required and the number of parameters that influence these losses under unbalanced and non-sinusoidal conditions, the authors have developed an original software tool which satisfies this goal and quickly provides accurate results. The paper presents this software, describing its methodology and the mathematical equations used to determine the electric power losses for radial electric networks working in unbalanced and harmonic-polluted conditions.
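
As a minimal illustration of the kind of summation such a tool performs, the sketch below sums Joule losses per phase and per harmonic on a radial feeder. The skin-effect resistance model R(h) = R1·sqrt(h) and all numeric values are assumptions for illustration, not taken from the paper:

```python
import math

R1 = 0.2  # fundamental-frequency phase resistance [ohm], assumed

# RMS currents [A] per phase for harmonic orders 1, 3, 5 (unbalanced)
I = {
    "a": {1: 100.0, 3: 20.0, 5: 10.0},
    "b": {1: 80.0,  3: 25.0, 5: 8.0},
    "c": {1: 120.0, 3: 15.0, 5: 12.0},
}

def feeder_losses(currents, r1):
    """Sum R(h) * I^2 over phases and harmonics, with R(h) = r1*sqrt(h)."""
    total = 0.0
    for phase, spectrum in currents.items():
        for h, i_rms in spectrum.items():
            total += r1 * math.sqrt(h) * i_rms ** 2
    return total

p_loss = feeder_losses(I, R1)                # W, unbalanced + harmonics

# Balanced, sinusoidal reference carrying the same average fundamental
i_avg = (100.0 + 80.0 + 120.0) / 3.0
p_ref = 3 * R1 * i_avg ** 2                  # W
```

The real tool also handles voltage unbalance and network topology; the point here is only the structure of the per-phase, per-harmonic summation, and that the unbalanced, distorted case loses more than the balanced sinusoidal reference.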

  20. Fast Decoupled Power Flow for Power System with High Voltage Direct Current Transmission Line System

    Directory of Open Access Journals (Sweden)

    Prechanon Kumkratug

    2010-01-01

    Full Text Available Problem statement: High voltage direct current (HVDC) transmission line systems have been widely applied to control power flow in power systems. Power flow analysis is one of the most powerful tools by which a power system is analyzed, both for planning and for operation strategies. Approach: This study presented a method to analyze the power flow of a power system containing an HVDC system. The HVDC link was modeled as complex power injections, which were incorporated into an existing power flow program based on the fast decoupled method. The presented method was tested on a multimachine power system. Results: The transmission line losses of the system with and without HVDC were compared. Conclusion: From the simulation results, the HVDC link can reduce the transmission line losses of the power system.
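
A minimal sketch of the idea: the HVDC terminal is folded into the bus schedule as a complex power injection, and a single PQ bus connected to the slack bus by a lossless line is solved with fast decoupled half-iterations. All per-unit values are assumed for illustration:

```python
import math

x_line = 0.1                 # line reactance [p.u.]
b = 1.0 / x_line             # B' = B'' entry for the single PQ bus

p_load, q_load = -0.5, -0.2  # scheduled load at bus 2 (consumption < 0)
p_hvdc, q_hvdc = 0.2, 0.1    # HVDC terminal modelled as an extra injection
p_sched = p_load + p_hvdc
q_sched = q_load + q_hvdc

theta, v = 0.0, 1.0          # flat start at bus 2 (slack bus: 1.0 at 0 rad)
for _ in range(50):
    # Injections at bus 2 for a purely reactive line to the slack bus
    p_calc = v * b * math.sin(theta)
    q_calc = -v * b * math.cos(theta) + b * v * v
    dp, dq = p_sched - p_calc, q_sched - q_calc
    if max(abs(dp), abs(dq)) < 1e-10:
        break
    # Decoupled half-iterations: B' * dtheta = dP / V,  B'' * dV = dQ / V
    theta += dp / (v * b)
    v += dq / (v * b)
```

With the injection model, adding the HVDC terminal costs nothing beyond adjusting the scheduled P and Q at its buses, which is why it drops into an existing fast decoupled program so easily.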

  1. Integrating Reconfigurable Hardware-Based Grid for High Performance Computing

    Directory of Open Access Journals (Sweden)

    Julio Dondo Gazzano

    2015-01-01

    Full Text Available FPGAs have several characteristics that make them very attractive for high performance computing (HPC): the impressive speed-up factors they are able to achieve, their reduced power consumption, and the ease and flexibility of a design process with fast iterations between consecutive versions. However, some difficulties in using reconfigurable platforms as accelerators still need to be addressed: the need for an in-depth application study to identify potential acceleration, the lack of tools for deploying computational problems on distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. In addition, a set of services designed to facilitate application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for the deployment of high performance distributed applications, simplifying the development process.

  2. PowerGrid - A Computation Engine for Large-Scale Electric Networks

    Energy Technology Data Exchange (ETDEWEB)

    Chika Nwankpa; Jeremy Johnson; Karen Miu; Prawat Nagvajara; Dagmar Niebur; Sotirios Ziavras

    2003-12-31

    A scalable power system power-flow solution based on parallel computing using embedded multiprocessors on Field Programmable Gate Arrays (FPGAs) is studied. Two Nios embedded processors are used in a prototype, under centralized and decentralized control approaches, to execute the power-flow solution program and prove the viability of the approach.

  3. Scalable resource management in high performance computers.

    Energy Technology Data Exchange (ETDEWEB)

    Frachtenberg, E. (Eitan); Petrini, F. (Fabrizio); Fernandez Peinador, J. (Juan); Coll, S. (Salvador)

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and to distribute the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movements from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12MB on a 64 processor/32 node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with a background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  4. High power microwave system based on power combining and pulse compression of conventional klystrons

    CERN Document Server

    Xiong, Zheng-Feng; Cheng, Cheng; Ning, Hui; Tang, Chuan-Xiang

    2015-01-01

    A high power microwave system based on power combining and pulse compression of conventional klystrons is introduced in this paper. The system mainly consists of a pulse modulator, a power combiner, the klystron driving source and a pulse compressor. A solid state induction modulator and pulse transformer were used to drive two 50 MW S-band klystrons with 4 μs pulse widths in parallel; after power combining and pulse compression, the tested peak power reached about 210 MW with pulse widths of nearly 400 ns at 25 Hz, while the maximum output power in the experiment was limited only by the power capacity of the loads. This type of high power microwave system has wide application prospects in the RF systems of large scale particle accelerators, high power radar transmitters and high-level electromagnetic environment generators.
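
The quoted figures can be checked with simple arithmetic. The energy efficiency below assumes flat-top pulses and is inferred from the reported numbers, not reported in the abstract:

```python
# Back-of-the-envelope check of the figures quoted above.
n_klystrons = 2
p_klystron = 50e6      # W per tube
t_in = 4e-6            # s, klystron pulse width
p_out = 210e6          # W, measured compressed peak power
t_out = 400e-9         # s, compressed pulse width

p_combined = n_klystrons * p_klystron               # 100 MW before compression
power_gain = p_out / p_combined                     # peak-power gain from compression
energy_eff = (p_out * t_out) / (p_combined * t_in)  # fraction of RF energy kept
```

So pulse compression roughly doubles the peak power (gain ≈ 2.1) at the cost of keeping only about a fifth of the combined RF pulse energy, under the flat-top assumption.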

  5. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation, ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book makes it possible to compare the performance levels and usability of a variety of supercomputer architectures. It is therefore an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  6. Use of Transition Modeling to Enable the Computation of Losses for Variable-Speed Power Turbine

    Science.gov (United States)

    Ameri, Ali A.

    2012-01-01

    To investigate the penalties associated with using a variable speed power turbine (VSPT) in a rotorcraft capable of vertical takeoff and landing, various analysis tools are required. Such analysis tools must be able to model the flow accurately within the operating envelope of the VSPT. For power turbines, this envelope is characterized by low Reynolds numbers and a wide range of incidence angles, positive and negative, due to the variation in shaft speed at relatively fixed corrected flows. The flow in the turbine passage is expected to be transitional and separated at high incidence. The turbulence model of Walters and Leylek was implemented in the NASA Glenn-HT code to enable a more accurate analysis of such flows. Two-dimensional heat transfer predictions of flat plate flow, and two-dimensional and three-dimensional heat transfer predictions on a turbine blade, were performed and are reported herein. Heat transfer computations were performed because heat transfer is a good marker for transition. The final goal is to be able to compute the aerodynamic losses. Armed with the new transition model, total pressure losses for three-dimensional flow of an Energy Efficient Engine (E3) tip section cascade were computed for a range of incidence angles in anticipation of the experimental data. The results obtained form a loss bucket for the chosen blade.

  7. Laboratory Astrophysics on High Power Lasers and Pulsed Power Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Remington, B A

    2002-02-05

    Over the past decade a new genre of laboratory astrophysics has emerged, made possible by the new high energy density (HED) experimental facilities, such as large lasers, z-pinch generators, and high current particle accelerators. (Remington, 1999; 2000; Drake, 1998; Takabe, 2001) On these facilities, macroscopic collections of matter can be created in astrophysically relevant conditions, and its collective properties measured. Examples of processes and issues that can be experimentally addressed include compressible hydrodynamic mixing, strong shock phenomena, radiative shocks, radiation flow, high Mach-number jets, complex opacities, photoionized plasmas, equations of state of highly compressed matter, and relativistic plasmas. These processes are relevant to a wide range of astrophysical phenomena, such as supernovae and supernova remnants, astrophysical jets, radiatively driven molecular clouds, accreting black holes, planetary interiors, and gamma-ray bursts. These phenomena will be discussed in the context of laboratory astrophysics experiments possible on existing and future HED facilities.

  8. The SPES High Power ISOL production target

    Science.gov (United States)

    Andrighetto, A.; Corradetti, S.; Ballan, M.; Borgna, F.; Manzolaro, M.; Scarpa, D.; Monetti, A.; Rossignoli, M.; Silingardi, R.; Mozzi, A.; Vivian, G.; Boratto, E.; De Ruvo, L.; Sattin, N.; Meneghetti, G.; Oboe, R.; Guerzoni, M.; Margotti, A.; Ferrari, M.; Zenoni, A.; Prete, G.

    2016-11-01

    SPES (Selective Production of Exotic Species) is a facility under construction at INFN-LNL (Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali di Legnaro), aimed at producing intense neutron-rich radioactive ion beams (RIBs). These will be obtained using the ISOL (Isotope Separation On-Line) method, bombarding a uranium carbide target with a proton beam of 40 MeV energy and currents up to 200 μA. The target configuration was designed to obtain a high number of fissions, up to 10^13 per second, low power deposition and fast release of the produced isotopes. The exotic isotopes generated in the target are ionized, mass separated and re-accelerated by the ALPI superconducting LINAC at energies of 10A MeV and higher, for masses in the region of A = 130 amu, with an expected rate on the secondary target of up to 10^9 particles per second. In this work, recent results on the R&D activities regarding the SPES RIB production target-ion source system are reported.

  9. Test of a High Power Target Design

    CERN Multimedia

    2002-01-01

    IS343: A high power tantalum disc-foil target (RIST) has been developed for the proposed radioactive beam facility, SIRIUS, at the Rutherford Appleton Laboratory. The yield and release characteristics of the RIST target design have been measured at ISOLDE. The results indicate that the yields are at least as good as the best ISOLDE roll-foil targets and that the release curves are significantly faster in most cases. Both targets use 20-25 μm thick foils, but in a different internal geometry. Investigations have continued at ISOLDE with targets having different foil thicknesses and internal geometries in an attempt to understand the release mechanisms and in particular to maximise the yield of short-lived isotopes. A theoretical model has been developed which fits the release curves and gives physical values of the diffusion constants. The latest target is constructed from 2 μm thick tantalum foils (mass only 10 mg) and shows very short release times. The yield of ^11Li (half-life of ...

  10. High Precision Current Measurement for Power Converters

    CERN Document Server

    Cerqueira Bastos, M

    2015-01-01

    The accurate measurement of power converter currents is essential to controlling and delivering stable and repeatable currents to magnets in particle accelerators. This paper reviews the most commonly used devices for the measurement of power converter currents and discusses test and calibration methods.

  11. Assessment of computer codes for VVER-440/213-type nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Szabados, L.; Ezsol, Gy.; Perneczky [Atomic Energy Research Institute, Budapest (Hungary)

    1995-09-01

    Nuclear power plants of the VVER-440/213 type, designed in the former USSR, have a number of special features. As a consequence of these features, the transient behaviour of such a reactor system differs from that of PWR systems. To study the transient behaviour of the Hungarian Paks Nuclear Power Plant of the VVER-440/213 type, both analytical and experimental activities have been performed. The experimental basis of the research is the PMK-2 integral-type test facility, which is a scaled-down model of the plant. Experiments performed on this facility have been used to assess thermal-hydraulic system codes. Four tests were selected for "Standard Problem Exercises" of the International Atomic Energy Agency. Results of the 4th Exercise, of high international interest, are presented in the paper, focusing on the essential findings of the assessment of computer codes.

  12. Computational Fluid Dynamics Simulation Study of Active Power Control in Wind Plants

    Energy Technology Data Exchange (ETDEWEB)

    Fleming, Paul; Aho, Jake; Gebraad, Pieter; Pao, Lucy; Zhang, Yingchen

    2016-08-01

    This paper presents an analysis of a wind plant's ability to provide active power control services, performed using a high-fidelity computational fluid dynamics-based wind plant simulator. This approach allows examination of the impact of wind turbine wake interactions within a wind plant on the performance of the wind plant controller. The paper investigates several control methods for improving performance in waked conditions. One method uses wind plant wake controls, an active field of research in which wind turbine control systems are coordinated to account for their wakes, to improve overall performance. Results demonstrate the challenge of providing active power control in waked conditions but also point to methods for improving this performance.

  13. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available 10+ PB of major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  14. The path toward HEP High Performance Computing

    CERN Document Server

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  15. Computational Analysis of Nanoparticles-Molten Salt Thermal Energy Storage for Concentrated Solar Power Systems

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Vinod [Univ. of Texas, El Paso, TX (United States)

    2017-05-05

    High fidelity computational models of thermocline-based thermal energy storage (TES) were developed. The research goal was to advance the understanding of a single-tank, nanofluidized, molten salt based thermocline TES system under various concentrations and sizes of the suspended particles. Our objective was to utilize sensible-heat storage operating with the least irreversibility by exploiting nanoscale physics. This was achieved by performing computational analysis of several storage designs, analyzing storage efficiency and estimating cost effectiveness for TES systems under a concentrating solar power (CSP) scheme using molten salt as the storage medium. Since TES is one of the most costly but important components of a CSP plant, an efficient TES system has the potential to make the electricity generated from solar technologies cost competitive with conventional sources of electricity.

  16. Slovak High School Students' Attitudes toward Computers

    Science.gov (United States)

    Kubiatko, Milan; Halakova, Zuzana; Nagyova, Sona; Nagy, Tibor

    2011-01-01

    The pervasive involvement of information and communication technologies and computers in our daily lives influences changes of attitude toward computers. We focused on finding these ecological effects in the differences in computer attitudes as a function of gender and age. A questionnaire with 34 Likert-type items was used in our research. The…

  17. High speed and large scale scientific computing

    CERN Document Server

    Gentzsch, W; Joubert, GR

    2010-01-01

    Over the years parallel technologies have completely transformed mainstream computing. This book deals with the issues related to the area of cloud computing and discusses developments in grids, applications and information processing, as well as e-science. It is suitable for computer scientists, IT engineers and IT managers.

  18. High-power converters for space applications

    Science.gov (United States)

    Park, J. N.; Cooper, Randy

    1991-06-01

    Phase 1 was a concept definition effort to extend space-type dc/dc converter technology to the megawatt level with a weight of less than 0.1 kg/kW (220 lb./MW). Two system designs were evaluated in Phase 1. Each design operates from a 5 kV stacked fuel cell source and provides a voltage step-up to 100 kV at 10 A for charging capacitors (100 pps at a duty cycle of 17 min on, 17 min off). Both designs use an MCT-based, full-bridge inverter, gaseous hydrogen cooling, and crowbar fault protection. The GE-CRD system uses an advanced high-voltage transformer/rectifier filter in series with a resonant tank circuit, driven by an inverter operating at 20 to 50 kHz. Output voltage is controlled through frequency and phase shift control. Fast transient response and stability are ensured via optimal control. Super-resonant operation employing MCTs provides the advantages of lossless snubbing, no turn-on switching loss, use of medium-speed diodes, and intrinsic current limiting under load-fault conditions. Estimated weight of the GE-CRD system is 88 kg (1.5 cu ft). Efficiency is 94.4 percent and total system loss is 55.711 kW operating at 1 MW load power. The Maxwell system is based on a resonance transformer approach using a cascade of five LC resonant sections at 100 kHz. The 5 kV bus is converted to a square wave, stepped up to a 100 kV sine wave by the LC sections, rectified, and filtered. Output voltage is controlled with a special series regulator circuit. Estimated weight of the Maxwell system is 83.8 kg (4.0 cu ft). Efficiency is 87.2 percent and total system loss is 146.411 kW operating at 1 MW load power.
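
The quoted weights and losses can be checked against the Phase 1 target of less than 0.1 kg/kW. A simple sketch using only numbers from the text (the loss-based efficiencies come out close to the quoted 94.4% and 87.2%):

```python
# Check of the two Phase 1 designs against the 0.1 kg/kW (220 lb./MW) target.
p_load_kw = 1000.0     # 1 MW load power

designs = {
    "GE-CRD":  {"mass_kg": 88.0, "loss_kw": 55.711},
    "Maxwell": {"mass_kg": 83.8, "loss_kw": 146.411},
}

results = {}
for name, d in designs.items():
    results[name] = {
        "kg_per_kw": d["mass_kg"] / p_load_kw,                 # specific mass
        "efficiency": p_load_kw / (p_load_kw + d["loss_kw"]),  # loss-based
    }
```

Both designs meet the specific-mass target (0.088 and 0.0838 kg/kW), with the GE-CRD design trading a few extra kilograms for roughly half the losses.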

  19. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both the private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project address these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We tested HA2lloc in a simulation environment and found that the approach is capable of preventing common memory vulnerabilities.

  20. Advanced Gunn diode as high power terahertz source for a millimetre wave high power multiplier

    Science.gov (United States)

    Amir, F.; Mitchell, C.; Farrington, N.; Missous, M.

    2009-09-01

An advanced step-graded Gunn diode is reported, which has been developed through joint modelling and experimental work. The ~200 GHz fundamental-frequency devices were realized to test GaAs-based Gunn oscillators at sub-millimetre wavelengths for use as a high power (multi-mW) terahertz source, in conjunction with a mm-wave multiplier employing novel Schottky diodes. The epitaxial growth of both the Gunn diode and Schottky diode wafers was performed using an industrial-scale Molecular Beam Epitaxy (V100+) reactor. The Gunn diodes were then manufactured and packaged by e2v Technologies (UK) Plc. Physical models of the high power Gunn diode sources, presented here, are developed in SILVACO.

  1. Non-Equilibrium Phenomena in High Power Beam Materials Processing

    Science.gov (United States)

    Tosto, Sebastiano

    2004-03-01

The paper concerns some aspects of non-equilibrium materials processing with high power beams. Three examples show that the formation of metastable phases plays a crucial role in understanding the effects of beam-matter interaction: (i) modeling of pulsed laser induced thermal sputtering; (ii) formation of metastable phases during solidification of the melt pool; (iii) the possibility of carrying out heat treatments by low power irradiation ``in situ''. Case (i) deals with surface evaporation and boiling processes in the presence of superheating. A computer simulation model of thermal sputtering by vapor bubble nucleation in the molten phase shows that non-equilibrium processing enables the rise of large surface temperature gradients in the boiling layer and the possibility of a sub-surface temperature maximum. Case (ii) concerns the heterogeneous welding of Cu and AISI 304L stainless steel plates by electron beam irradiation. Microstructural investigation of the molten zone has shown that dwell times of the order of 10^-1-10^-3 s, consistent with moderate cooling rates in the range 10^3-10^5 K/s, entail the formation of metastable Cu-Fe phases. Case (iii) concerns electron beam welding and post-welding treatments of 2219 Al base alloy. Electron microscopy and positron annihilation have explained why post-weld heat transients induced by low power irradiation of specimens in the as-welded condition enable ageing effects usually expected only after some hours of treatment in a furnace. The problem of microstructural instability is particularly significant for a correct design of components manufactured with high power beam technologies and subjected to severe acceptance standards to ensure advanced performance during service life.

  2. The computational power of time dilation in special relativity

    Science.gov (United States)

    Biamonte, Jacob

    2014-03-01

    The Lorentzian length of a timelike curve connecting both endpoints of a classical computation is a function of the path taken through Minkowski spacetime. The associated runtime difference is due to time-dilation: the phenomenon whereby an observer finds that another's physically identical ideal clock has ticked at a different rate than their own clock. Using ideas appearing in the framework of computational complexity theory, time-dilation is quantified as an algorithmic resource by relating relativistic energy to an nth order polynomial time reduction at the completion of an observer's journey. These results enable a comparison between the optimal quadratic Grover speedup from quantum computing and an n=2 speedup using classical computers and relativistic effects. The goal is not to propose a practical model of computation, but to probe the ultimate limits physics places on computation. Parts of this talk are based on [J.Phys.Conf.Ser. 229:012020 (2010), arXiv:0907.1579]. Support is acknowledged from the Foundational Questions Institute (FQXi) and the Compagnia di San Paolo Foundation.
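
As a minimal illustration of the resource the abstract quantifies, the special-relativistic relation between an observer's coordinate time and a moving clock's proper time can be computed directly. This is a textbook sketch of time dilation, not the paper's polynomial-time reduction; the function name and the natural units (c = 1) are our own assumptions.

```python
import math

def proper_time(coordinate_time: float, v: float, c: float = 1.0) -> float:
    """Proper time elapsed on an ideal clock moving at constant speed v
    while `coordinate_time` passes for an observer at rest."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)  # Lorentz factor
    return coordinate_time / gamma

# A computer left at rest runs for the full 100 time units, while a
# traveller at 0.8c experiences only about 60 units in the same interval.
print(proper_time(100.0, 0.8))
```

The gap between the two elapsed times is the "runtime difference" the abstract treats as an algorithmic resource.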

  3. High power electronics package: from modeling to implementation

    NARCIS (Netherlands)

    Yuan, C.A.; Kregting, R.; Ye, H.; Driel, W. van; Gielen, A.W.J.; Zhang, G.Q.

    2011-01-01

Power electronics, such as high power RF components and high power LEDs, requires the combination of robust and reliable package structures, materials, and processes to guarantee their functional performance and lifetime. We started with the thermal and thermal-mechanical modeling of such components.

  4. Design concept and performance considerations for fast high power semiconductor switching for high repetition rate and high power excimer laser

    Science.gov (United States)

    Goto, Tatsumi; Kakizaki, Kouji; Takagi, Shigeyuki; Satoh, Saburoh; Shinohe, Takashi; Ohashi, Hiromichi; Endo, Fumihiko; Okamura, Katsuya; Ishii, Akira; Teranishi, Tsuneharu; Yasuoka, Koichi

    1997-07-01

A semiconductor switching power supply has been developed in which a novel-structure semiconductor device, the metal-oxide-semiconductor assisted gate-triggered thyristor (MAGT), was incorporated with a single-stage magnetic pulse compression circuit (MPC). The MAGT was specially designed to directly replace thyratrons in a power supply for a high repetition rate laser. Compared with conventional high power semiconductor switching devices, it was designed to enable fast switching while retaining a high blocking voltage, and to greatly reduce the transient turn-on power losses while enduring a higher peak current. A maximum peak current density of 32 kA/cm² and a current density rise rate di/dt of 142 kA/(cm²·μs) were obtained at the chip area with an applied anode voltage of 1.5 kV. A MAGT switching unit connecting 32 MAGTs in series was capable of switching more than 25 kV-300 A at a repetition rate of 5 kHz, which, coupled with the MPC, was equivalent to the capability of a high power thyratron. A high repetition rate, high power XeCl excimer laser was excited by the power supply. The results confirmed stable laser operation at repetition rates of up to 5 kHz, a world record to our knowledge. An average output power of 0.56 kW was obtained at 5 kHz, where the shortfall in the total discharge current was supplemented by a conventional power supply with seven parallel switching thyratrons working simultaneously, since the MAGT power supply could not switch a greater current than that switched by one thyratron. These excitations confirmed that the MAGT unit with the MPC could directly replace a high power commercial thyratron for excimer lasers. The switching stability was significantly superior to that of the thyratron in the high repetition rate region, judging from the discharge current waveforms. It should be possible for the MAGT unit, in the future, to directly switch the discharge current within a rise time of 0.1 μs with a magnetic assist.

  5. Unique Power Dense, Configurable, Robust, High-Voltage Power Supplies Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Princeton Power will develop and deliver three small, lightweight 50 W high-voltage power supplies that have a configurable output voltage range from 500 to 50 kVDC....

  6. Computer Aided Design of Ka-Band Waveguide Power Combining Architectures for Interplanetary Spacecraft

    Science.gov (United States)

    Vaden, Karl R.

    2006-01-01

    Communication systems for future NASA interplanetary spacecraft require transmitter power ranging from several hundred watts to kilowatts. Several hybrid junctions are considered as elements within a corporate combining architecture for high power Ka-band space traveling-wave tube amplifiers (TWTAs). This report presents the simulated transmission characteristics of several hybrid junctions designed for a low loss, high power waveguide based power combiner.

  7. High Power Photonic Crystal Fibre Raman Laser

    Institute of Scientific and Technical Information of China (English)

    YAN Pei-Guang; RUAN Shuang-Chen; YU Yong-Qin; GUO Chun-Yu; GUO Yuan; LIU Cheng-Xiang

    2006-01-01

A cw Raman laser based on a 100-m photonic crystal fibre is demonstrated with up to 3.8 W output power at an incident pump power of 12 W, corresponding to an optical-to-optical efficiency of about 31.6%. The second-order Stokes light, reported here for the first time in a cw photonic crystal fibre Raman laser, is obtained at 1183 nm with an output power of 1.6 W and a slope efficiency of about 45.7%.

  8. Thermoelectric Powered High Temperature Wireless Sensing

    Science.gov (United States)

    Kucukkomurler, Ahmet

This study describes the use of a thermoelectric power converter to transform waste heat into electrical energy to power an RF receiver and transmitter for use in harsh-environment wireless temperature sensing and telemetry. The sensing and transmitting module employs a DS-1820 low power digital temperature sensor to perform temperature-to-voltage conversion, an ATX-34 RF transmitter, an ARX-34 RF receiver module, and a PIC16F84A microcontroller to synchronize data communication between them. The unit has been tested in a laboratory environment, and promising results have been obtained for an actual automotive wireless under-hood temperature sensing and telemetry implementation.

  9. High efficiency fuel cell based uninterruptible power supply for digital equipment

    Science.gov (United States)

    Gonzales, James; Tamizhmani, Govindasamy

Eliminating the ac-dc converter (such as a computer's power supply) in a dc system using a fuel cell based uninterruptible power supply (UPS) serves several primary functions. Firstly, it eliminates the need for a dc-ac inverter, and secondly, it eliminates a usually highly inefficient component: the power supply. Multiple conversions result in multiple inefficiencies. By replacing the computer's ac power supply with a high efficiency dc power supply capable of operating directly from a fuel cell, thereby eliminating the inverter, the overall efficiency of the UPS can be increased by 50% or more. This is essential considering that the primary function of a fuel cell based UPS is long-term operation of the system, and poor efficiency equates to higher fuel consumption. Furthermore, inefficient systems have greater power demands, and therefore a larger fuel cell stack is needed to power them. At the present cost of fuel cell systems, this is a considerable problem. The easiest way to accomplish a direct dc UPS is to replace the computer's ac-dc power supply with a dc-dc power supply.
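
The efficiency argument above is just multiplication of cascaded stage efficiencies. A small sketch makes the claimed 50%-or-more gain concrete; the stage numbers below are illustrative assumptions, not measurements from the paper.

```python
def chain_efficiency(*stages):
    """Overall efficiency of cascaded power-conversion stages."""
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

# Illustrative (assumed) stage efficiencies:
ac_path = chain_efficiency(0.90, 0.70)  # fuel cell dc -> inverter -> typical ac-dc PC supply
dc_path = chain_efficiency(0.95)        # fuel cell dc -> single high-efficiency dc-dc supply
print(ac_path, dc_path)                 # ~0.63 versus 0.95
print(dc_path / ac_path - 1.0)          # relative gain, ~0.5 (about 50%)
```

With these assumed numbers the direct dc-dc path delivers roughly half again as much load power per watt of fuel cell output, which is the shape of the saving the abstract describes.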

  10. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  11. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  12. Computer Security for Commercial Nuclear Power Plants - Literature Review for Korea Hydro Nuclear Power Central Research Institute

    Energy Technology Data Exchange (ETDEWEB)

    Duran, Felicia Angelica [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Security Systems Analysis Dept.; Waymire, Russell L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Security Systems Analysis Dept.

    2013-10-01

Sandia National Laboratories (SNL) is providing training and consultation activities on security planning and design for the Korea Hydro and Nuclear Power Central Research Institute (KHNP-CRI). As part of this effort, SNL performed a literature review on computer security requirements, guidance, and best practices that are applicable to an advanced nuclear power plant. This report documents the review of reports generated by SNL and other organizations (the U.S. Nuclear Regulatory Commission, the Nuclear Energy Institute, and the International Atomic Energy Agency) related to the protection of information technology resources, primarily digital controls, computer resources, and their data networks. Copies of the key documents have also been provided to KHNP-CRI.

  13. A FAST BIT-LOADING ALGORITHM FOR HIGH SPEED POWER LINE COMMUNICATIONS

    Institute of Scientific and Technical Information of China (English)

    Zhang Shengqing; Zhao Li; Zou Cairong

    2012-01-01

Adaptive bit-loading is a key technology in high speed power line communications using Orthogonal Frequency Division Multiplexing (OFDM) modulation. Because the transmit power spectrum is limited in high speed power line communications, this paper explores adaptive bit-loading algorithms that maximize the number of transmitted bits while keeping the transmit power spectral density and bit error rate below their upper limits. Taking the characteristics of the power line channel into account, the paper first derives the optimal bit-loading algorithm and then provides an improved algorithm with reduced computational complexity. Building on this analysis, it offers a non-iterative bit allocation algorithm; simulations show that the new algorithm greatly reduces the computational complexity while producing bit allocations close to optimal.
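
A non-iterative allocation of the kind the abstract mentions can be sketched as follows: when each subcarrier is constrained only by the PSD mask and a target BER (folded into an SNR "gap" factor), the optimum decouples per carrier, so no iteration over a shared power budget is needed. This is a gap-approximation sketch under our own assumptions, not the authors' exact algorithm.

```python
import math

def bit_loading_psd_limited(gains, psd_limit, gap=1.0, max_bits=10):
    """Non-iterative bit allocation: with only a per-carrier PSD mask and a
    BER target (captured by `gap`), each subcarrier independently takes the
    largest constellation its SNR supports:
        b_i = floor(log2(1 + psd_limit * g_i / gap)), capped at max_bits.
    `gains` are per-carrier SNR gains |H_i|^2 / N_i (assumed known)."""
    return [min(max_bits, int(math.log2(1.0 + psd_limit * g / gap)))
            for g in gains]

# Strong carriers load more bits; weak carriers load fewer or none.
print(bit_loading_psd_limited([10.0, 1.0, 0.1], psd_limit=1.0))  # [3, 1, 0]
```

Each carrier is settled in one pass, which is why such closed-form allocations avoid the iterative water-filling loops of the classical greedy schemes.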

  14. Advanced Capacitors for High-Power Applications Project

    Data.gov (United States)

National Aeronautics and Space Administration — As consumer and industrial requirements for compact, high-power-density electrical power systems grow substantially over the next decade, there will be a...

  15. Computational tool for simulation of power and refrigeration cycles

    Science.gov (United States)

    Córdoba Tuta, E.; Reyes Orozco, M.

    2016-07-01

Small improvements in the thermal efficiency of power cycles bring huge cost savings in the production of electricity; for that reason, a simulation tool for power cycles makes it possible to model the changes needed for best performance. There is also growing research interest in the Organic Rankine Cycle (ORC), which aims to generate electricity at low power through cogeneration, with a refrigerant usually serving as the working fluid. A tool for designing the elements of an ORC cycle and selecting the working fluid is helpful, because cogeneration heat sources vary widely and each case calls for a custom design. This work presents the development of multiplatform software for the simulation of power and refrigeration cycles, implemented in C++ with a graphical interface built on the cross-platform Qt framework; it runs on Windows and Linux. The tool allows the design of custom power cycles and the selection of the working fluid (thermodynamic properties are calculated through the CoolProp library), calculates the plant efficiency, identifies the flow fractions in each branch, and finally generates a highly instructive report in PDF format via LaTeX.
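
As a flavour of the computations such a tool automates, the thermal efficiency of a simple Rankine cycle follows from four state enthalpies. The numbers below are assumed, illustrative values; a real tool would obtain them from a property library such as CoolProp.

```python
def rankine_efficiency(h1, h2, h3, h4):
    """Thermal efficiency of an ideal simple Rankine cycle from specific
    enthalpies in kJ/kg: 1->2 turbine expansion, 3->4 pump compression,
    boiler heat input q_in = h1 - h4.
        eta = (w_turbine - w_pump) / q_in
    """
    w_turbine = h1 - h2
    w_pump = h4 - h3
    q_in = h1 - h4
    return (w_turbine - w_pump) / q_in

# Illustrative (assumed) enthalpies, roughly a low-pressure steam cycle:
eta = rankine_efficiency(h1=3000.0, h2=2200.0, h3=190.0, h4=195.0)
print(f"{eta:.1%}")
```

The point of the abstract's tool is that even a fraction-of-a-percent change in such an efficiency figure, propagated over a plant's annual output, is a large cost difference.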

  16. Five Mass Power Transmission Line of a Ship Computer Modelling

    Directory of Open Access Journals (Sweden)

    Kazakoff Alexander Borisoff

    2016-03-01

The work presented in this paper is a natural continuation of previously reported work on the design of the power transmission line of a ship, but with a different multi-mass model. Some data from the previous investigations, mainly from the analytical frequency and modal analysis of a five-mass model of a ship's power transmission line, are used as reference data. In this paper, a profound dynamic analysis of a concrete five-mass dynamic model of the power transmission line of a ship is performed using Finite Element Analysis (FEA), based on the model recommended and investigated in the previous research. Thus, the five-mass model of a ship's power transmission line, partially validated by frequency analysis, is subjected to dynamic analysis. The objective of the work presented in this paper is the dynamic modelling of a five-mass transmission line of a ship, partial validation of the model, and von Mises stress calculation with the help of FEA, together with comparison of the derived results with the analytically calculated values. The partially validated five-mass model can be used to determine many dynamic parameters, particularly amplitudes of displacement, velocity, and acceleration, in both the time and frequency domains. The frequency behaviour of the model parameters is investigated in the frequency domain and corresponds to the predicted one.

  17. Controlled Compact High Voltage Power Lines

    Directory of Open Access Journals (Sweden)

    Postolati V.

    2016-04-01

Nowadays, modern overhead transmission line (OHL) constructions having several significant differences from conventional ones are being used in power grids more and more widely. Implementation of compact overhead lines equipped with FACTS devices, including phase angle regulators (compact controlled OHL), appears to be one of the most effective ways of developing the power grid. Compact controlled AC HV OHL represent a new generation of power transmission lines embodying recent advances in design solutions, including towers and insulation, together with interconnection schemes and control systems. This paper presents the results of comprehensive research and development on 110-500 kV compact controlled power transmission lines, together with the theoretical basis, substantiation, and methodological approaches to their practical application.

  18. Two applications of parallel processing in power system computation

    Energy Technology Data Exchange (ETDEWEB)

    Lemaitre, C.; Thomas, B. [Electricite de France, 92 - Clamart (France). Research and Development Div.

    1996-12-31

Performance improvements achieved in two power system software modules through the use of parallel processing techniques are discussed. The first module, EVARISTE, computes a voltage stability indicator for various power system situations. The second module, MEXICO, assesses power system reliability and operating costs by simulating a large number of contingencies for generation and transmission equipment. Both software modules are well suited to coarse-grain parallel processing. The first module was parallelized on a distributed-memory machine and the second on a shared-memory machine. The parallelization process used in each case is described, and the performance levels achieved are discussed, including aspects of programming, parameter selection, and machine characteristics. (author) 7 refs.
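
Contingency screening of the MEXICO kind is naturally coarse-grain parallel: each simulated outage is independent, so the contingency list can simply be partitioned across workers. Below is a minimal sketch using Python's multiprocessing; the outage representation and the scoring function are hypothetical placeholders, since a real module would rerun a load-flow solution per contingency.

```python
from multiprocessing import Pool

def assess_contingency(outage):
    """Placeholder for one contingency simulation (hypothetical model):
    a real implementation would solve a power flow with this element out."""
    equipment, severity = outage
    return equipment, severity ** 2  # dummy cost metric

def parallel_screening(contingencies, workers=2):
    # Coarse-grain parallelism: contingencies are independent, so they
    # are distributed across worker processes with no shared state.
    with Pool(workers) as pool:
        return dict(pool.map(assess_contingency, contingencies))

if __name__ == "__main__":
    outages = [("line-A", 1.0), ("line-B", 3.0), ("gen-7", 2.0)]
    print(parallel_screening(outages))
```

Because the per-contingency work dominates and there is no communication between tasks, speedup scales close to linearly with worker count, which is what makes this class of power system software a good fit for both distributed- and shared-memory machines.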

  19. High Intensity Tactical Power Sources for the 1990 Army.

    Science.gov (United States)

conceptual or physical study which may become feasible as high intensity power sources. These considerations include the present state of the art of...requirements, energy and power output capabilities, and fixed costs. From these tables, it may be seen that a variety of electrical power sources would be...required to satisfy diverse requirements, but an attempt is made to categorize possible high intensity power sources into their areas of optimum

  20. High power ring methods and accelerator driven subcritical reactor application

    Energy Technology Data Exchange (ETDEWEB)

    Tahar, Malek Haj [Univ. of Grenoble (France)

    2016-08-07

High power proton accelerators make it possible to provide, by spallation reaction, the neutron fluxes necessary for the synthesis of fissile material, starting from Uranium 238 or Thorium 232. This is the basis of the concept of sub-critical operation of a reactor, for energy production or nuclear waste transmutation, with the objective of achieving a cleaner, safer and more efficient process than today's technologies allow. Designing, building and operating a proton accelerator in the 500-1000 MeV energy range, CW regime, MW power class still remains a challenge today. A limited number of installations at present achieve beam characteristics in that class, e.g., PSI in Villigen (590 MeV CW beam from a cyclotron) and SNS at Oak Ridge (1 GeV pulsed beam from a linear accelerator), in addition to projects such as the ESS in Europe (a 5 MW beam from a linear accelerator). Furthermore, coupling an accelerator to a sub-critical nuclear reactor is a challenging proposition: some of the key issues and requirements are the design of a spallation target to withstand high power densities and ensuring the safety of the installation. These two domains are the grounds of the PhD work: the focus is on high power ring methods in the frame of the KURRI FFAG collaboration in Japan, where upgrading the installation towards high intensity is crucial to demonstrate the high beam power capability of FFAGs. Thus, modeling of the beam dynamics and benchmarking of different codes were undertaken to validate the simulation results. Experimental results revealed some major losses that need to be understood and eventually overcome. By developing analytical models that account for the field defects, major sources of imperfection in the design of scaling FFAGs were identified that explain the important tune variations resulting in the crossing of several betatron resonances. A new formula is derived to compute the tunes, and properties are established that characterize the effect of the field imperfections on the

  1. Efficient Computation of Power, Force, and Torque in BEM Scattering Calculations

    CERN Document Server

    Reid, M T Homer

    2013-01-01

We present concise, computationally efficient formulas for several quantities of interest -- including absorbed and scattered power, optical force (radiation pressure), and torque -- in scattering calculations performed using the boundary-element method (BEM) [also known as the method of moments (MOM)]. Our formulas compute the quantities of interest directly from the BEM surface currents with no need ever to compute the scattered electromagnetic fields. We derive our new formulas and demonstrate their effectiveness by computing power, force, and torque in a number of example geometries. Free, open-source software implementations of our formulas are available for download online.
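
For context, the quantities computed can be tied to Poynting's theorem: the time-averaged power absorbed by a body bounded by a surface S is the inward flux of the Poynting vector, and since the BEM surface currents determine the tangential fields on S, that flux is computable from the currents alone. The relation below is standard electromagnetics background, not the paper's specific formula.

```latex
% Time-averaged absorbed power as the inward Poynting flux through the
% bounding surface S, where E and H are the total fields on S:
P_{\mathrm{abs}} \;=\; -\frac{1}{2}\,\mathrm{Re}\oint_{S}
\left(\mathbf{E}\times\mathbf{H}^{*}\right)\cdot\hat{\mathbf{n}}\,\mathrm{d}A
% With the usual equivalent-current conventions
% \mathbf{K} = \hat{\mathbf{n}}\times\mathbf{H}, \quad
% \mathbf{N} = -\hat{\mathbf{n}}\times\mathbf{E},
% the tangential fields, and hence this flux, follow from the BEM currents.
```

Evaluating such surface integrals directly over the discretized currents is what lets the paper's formulas skip the far-field computation entirely.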

  2. Power Grid Islands Service Restoration Based on Cloud Computing

    Institute of Scientific and Technical Information of China (English)

    ZHANG Hao; HE Jinghan; YIN Hang; BO Zhiqian; B Kirby

    2011-01-01

    To reduce customer minutes loss (CML), service restoration is an increasingly important task for the power grid. The main objective in islands service restoration procedures is to restore as many loads as possible for healthy networks through the reconfiguration without violating the network operation constraints. The proposed method can be described by the following equation.

  3. Error Immune Logic for Low-Power Probabilistic Computing

    Directory of Open Access Journals (Sweden)

    Bo Marr

    2010-01-01

design for the maximum amount of energy savings for a given error rate. Spice simulation results using a commercially available and well-tested 0.25 μm technology are given, verifying the ultra-low power, probabilistic full-adder designs. Further, close to 6X energy savings are achieved for a probabilistic full-adder over the deterministic case.

  4. High Efficiency Microwave Power Amplifier: From the Lab to Industry

    Science.gov (United States)

    Sims, William Herbert, III; Bell, Joseph L. (Technical Monitor)

    2001-01-01

Since the beginnings of space travel, various microwave power amplifier designs have been employed, including Class-A, -B, and -C bias arrangements. However, a shared limitation of these topologies is the inherently high total consumption of input power associated with the generation of radio frequency (RF)/microwave power. The power amplifier has always been the largest drain on the limited power available on a spacecraft. Typically, the conversion efficiency of a microwave power amplifier is 10 to 20%: for a typical 20 W microwave power amplifier, input DC power of at least 100 W is required. Such a large demand for input power suggests that a better method of RF/microwave power generation is required. The price paid for using a linear amplifier where high linearity is unnecessary includes higher initial and operating costs, lower DC-to-RF conversion efficiency, higher power consumption, higher power dissipation and the accompanying need for higher-capacity heat removal, and an amplifier that is more prone to parasitic oscillation. The first use of a higher efficiency mode of power generation was described by Baxandall in 1959. This higher efficiency mode, Class-D, is achieved through distinct switching techniques that reduce the power losses associated with the switching, conduction, and gate drive of a given transistor.

  5. HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing.

    Science.gov (United States)

    Karimi, Ramin; Hajdu, Andras

    2016-01-01

Comprehensive efforts toward low-cost sequencing in the past few years have led to the growth of complete genome databases. In parallel with this effort, fast and cost-effective methods and applications have been developed to accelerate sequence analysis, of which identification is the very first step. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is highly desirable. As an alignment-free approach, DNA signatures have provided new opportunities for the rapid identification of species. In this paper, we present an effective pipeline, HTSFinder (high-throughput signature finder), with a corresponding k-mer generator, GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in arbitrarily selected target and non-target databases. Hadoop and MapReduce are used in this pipeline as parallel and distributed computing tools on commodity hardware. This approach brings the power of high-performance computing to ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. The considerable number of unique and common DNA signatures detected in the target database creates opportunities to improve the identification process not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis.
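
The core idea, stripped of the Hadoop/MapReduce distribution layer, fits in a few lines: a unique signature is a k-mer shared by all target genomes and absent from every non-target genome. The toy sequences and function names below are our own illustration, not part of the HTSFinder pipeline.

```python
def kmers(seq, k):
    """All overlapping substrings of length k in `seq`."""
    return (seq[i:i + k] for i in range(len(seq) - k + 1))

def unique_signatures(target_genomes, nontarget_genomes, k):
    """K-mers present in every target genome but in no non-target genome --
    the 'unique signature' notion described above, in miniature. The real
    pipeline distributes this counting across a Hadoop/MapReduce cluster."""
    target_sets = [set(kmers(g, k)) for g in target_genomes]
    common = set.intersection(*target_sets)
    background = set()
    for g in nontarget_genomes:
        background |= set(kmers(g, k))
    return common - background

sigs = unique_signatures(["ACGTACGT", "TACGTAC"], ["GGGTACGG"], k=4)
print(sorted(sigs))  # ['ACGT', 'CGTA']
```

At genome scale the sets no longer fit in one machine's memory, which is exactly why the pipeline maps k-mer extraction over genome chunks and reduces by k-mer key.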

  7. Systematic Approach for Design of Broadband, High Efficiency, High Power RF Amplifiers

    National Research Council Canada - National Science Library

    Mohadeskasaei, Seyed Alireza; An, Jianwei; Chen, Yueyun; Li, Zhi; Abdullahi, Sani Umar; Sun, Tie

    2017-01-01

    ...‐AB RF amplifiers with high gain flatness. It is usually difficult to simultaneously achieve a high gain flatness and high efficiency in a broadband RF power amplifier, especially in a high power design...

  8. On the Power of Correlated Randomness in Secure Computation

    DEFF Research Database (Denmark)

    Ishai, Yuval; Kushilevitz, Eyal; Meldgaard, Sigurd Torkel

    2013-01-01

We investigate the extent to which correlated secret randomness can help in secure computation with no honest majority. It is known that correlated randomness can be used to evaluate any circuit of size s with perfect security against semi-honest parties or statistical security against malicious parties...

  9. Chip-to-board interconnects for high-performance computing

    Science.gov (United States)

    Riester, Markus B. K.; Houbertz-Krauss, Ruth; Steenhusen, Sönke

    2013-02-01

    Super computing is reaching out to ExaFLOP processing speeds, creating fundamental challenges for the way that computing systems are designed and built. One governing topic is the reduction of power used for operating the system, and eliminating the excess heat generated from the system. Current thinking sees optical interconnects on most interconnect levels to be a feasible solution to many of the challenges, although there are still limitations to the technical solutions, in particular with regard to manufacturability. This paper explores drivers for enabling optical interconnect technologies to advance into the module and chip level. The introduction of optical links into High Performance Computing (HPC) could be an option to allow scaling the manufacturing technology to large volume manufacturing. This will drive the need for manufacturability of optical interconnects, giving rise to other challenges that add to the realization of this type of interconnection. This paper describes a solution that allows the creation of optical components on module level, integrating optical chips, laser diodes or PIN diodes as components much like the well known SMD components used for electrical components. The paper shows the main challenges and potential solutions to this challenge and proposes a fundamental paradigm shift in the manufacturing of 3-dimensional optical links for the level 1 interconnect (chip package).

  10. NINJA: Java for High Performance Numerical Computing

    Directory of Open Access Journals (Sweden)

    José E. Moreira

    2002-01-01

    Full Text Available When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java can now be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

  11. High power densities from high-temperature material interactions

    Energy Technology Data Exchange (ETDEWEB)

    Morris, J.F.

    1981-01-01

    Thermionic energy conversion (TEC) and metallic-fluid heat pipes (MFHPs) offer important and unique advantages in terrestrial and space energy processing. And they are well suited to serve together synergistically. TEC and MFHPs operate through working-fluid vaporization, condensation cycles that accept great thermal power densities at high temperatures. TEC and MFHPs have apparently simple, isolated performance mechanisms that are somewhat similar. And they also have obviously difficult, complected material problems that again are somewhat similar. Intensive investigation reveals that aspects of their operating cycles and material problems tend to merge: high-temperature material effects determine the level and lifetime of performance. Simplified equations verify the preceding statement for TEC and MFHPs. Material properties and interactions exert primary influences on operational effectiveness. And thermophysicochemical stabilities dictate operating temperatures which regulate the thermoemissive currents of TEC and the vaporization flow rates of MFHPs. Major high-temperature material problems of TEC and MFHPs have been solved. These solutions lead to productive, cost-effective applications of current TEC and MFHPs - and point to significant improvements with anticipated technological gains.

  12. Computer program for afterheat temperature distribution for mobile nuclear power plant

    Science.gov (United States)

    Parker, W. G.; Vanbibber, L. E.

    1972-01-01

    ESATA computer program was developed to analyze thermal safety aspects of post-impacted mobile nuclear power plants. Program is written in FORTRAN 4 and designed for IBM 7094/7044 direct coupled system.

  13. Efficient High Performance Computing on Heterogeneous Platforms

    NARCIS (Netherlands)

    Shen, J.

    2015-01-01

    Heterogeneous platforms are mixes of different processing units in a compute node (e.g., CPUs+GPUs, CPU+MICs) or a chip package (e.g., APUs). This type of platforms keeps gaining popularity in various computer systems ranging from supercomputers to mobile devices. In this context, improving their

  15. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Ashley D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bland, Arthur S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Gary, Jeff D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Hack, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; McNally, Stephen T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Rogers, James H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Straatsma, T. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Sukumar, Sreenivas Rangan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Thach, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Tichenor, Suzy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility

    2016-03-01

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark, set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example, Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by producing the highest-resolution CyberShake map for Southern

  16. Software Synthesis for High Productivity Exascale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bodik, Rastislav [Univ. of Washington, Seattle, WA (United States)

    2010-09-01

    Over the three years of our project, we accomplished three key milestones: We demonstrated how ideas from generative programming and software synthesis can help support the development of bulk-synchronous distributed memory kernels. These ideas are realized in a new language called MSL, a C-like language that combines synthesis features with high level notations for array manipulation and bulk-synchronous parallelism to simplify the semantic analysis required for synthesis. We also demonstrated that these high level notations map easily to low level C code and show that the performance of this generated code matches that of handwritten Fortran. Second, we introduced the idea of solver-aided domain-specific languages (SDSLs), which are an emerging class of computer-aided programming systems. SDSLs ease the construction of programs by automating tasks such as verification, debugging, synthesis, and non-deterministic execution. SDSLs are implemented by translating the DSL program into logical constraints. Next, we developed a symbolic virtual machine called Rosette, which simplifies the construction of such SDSLs and their compilers. We have used Rosette to build SynthCL, a subset of OpenCL that supports synthesis. Third, we developed novel numeric algorithms that move as little data as possible, either between levels of a memory hierarchy or between parallel processors over a network. We achieved progress in three aspects of this problem. First we determined lower bounds on communication. Second, we compared these lower bounds to widely used versions of these algorithms, and noted that these widely used algorithms usually communicate asymptotically more than is necessary. Third, we identified or invented new algorithms for most linear algebra problems that do attain these lower bounds, and demonstrated large speed-ups in theory and practice.
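
    The flavor of the communication lower bounds summarized in the third milestone can be illustrated with the classical bound for O(n^3) matrix multiplication (a standard result of the kind the abstract refers to, stated here for illustration; the sizes below are hypothetical):

```python
import math

def matmul_comm_lower_bound(n: int, fast_mem_words: int) -> float:
    """Classical O(n^3) matrix multiply must move at least ~ n^3 / sqrt(M)
    words between a fast memory of M words and slow memory (constant
    factors omitted)."""
    return n**3 / math.sqrt(fast_mem_words)

# Doubling the fast memory only reduces the required traffic by sqrt(2):
t1 = matmul_comm_lower_bound(4096, 2**20)
t2 = matmul_comm_lower_bound(4096, 2**21)
ratio = t1 / t2  # ~1.414
```

    Algorithms that attain such bounds ("communication-avoiding" algorithms) win precisely because data movement, not arithmetic, dominates cost at scale.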

  17. The Plant-Window System: A framework for an integrated computing environment at advanced nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Wood, R.T.; Mullens, J.A. [Oak Ridge National Lab., TN (United States); Naser, J.A. [Electric Power Research Inst., Palo Alto, CA (United States)

    1997-10-01

    Power plant data, and the information that can be derived from it, provide the link to the plant through which the operations, maintenance and engineering staff understand and manage plant performance. The extensive use of computer technology in advanced reactor designs provides the opportunity to greatly expand the capability to obtain, analyze, and present data about the plant to station personnel. However, to support highly efficient and increasingly safe operation of nuclear power plants, it is necessary to transform the vast quantity of available data into clear, concise, and coherent information that can be readily accessed and used throughout the plant. This need can be met by an integrated computer workstation environment that provides the necessary information and software applications, in a manner that can be easily understood and used, to the proper users throughout the plant. As part of a Cooperative Research and Development Agreement with the Electric Power Research Institute, the Oak Ridge National Laboratory has developed functional requirements for a Plant-Wide Integrated Environment Distributed On Workstations (Plant-Window) System. The Plant-Window System (PWS) can serve the needs of operations, engineering, and maintenance personnel at nuclear power stations by providing integrated data and software applications within a common computing environment. The PWS requirements identify functional capabilities and provide guidelines for standardized hardware, software, and display interfaces so as to define a flexible computing environment for both current-generation nuclear power plants and advanced reactor designs.

  18. Proceedings: Workshop on advanced mathematics and computer science for power systems analysis

    Energy Technology Data Exchange (ETDEWEB)

    Esselman, W.H.; Iveson, R.H. (Electric Power Research Inst., Palo Alto, CA (United States))

    1991-08-01

    The Mathematics and Computer Workshop on Power System Analysis was held February 21--22, 1989, in Palo Alto, California. The workshop was the first in a series sponsored by EPRI's Office of Exploratory Research as part of its effort to develop ways in which recent advances in mathematics and computer science can be applied to the problems of the electric utility industry. The purpose of this workshop was to identify research objectives in the field of advanced computational algorithms needed for the application of advanced parallel processing architecture to problems of power system control and operation. Approximately 35 participants heard six presentations on power flow problems, transient stability, power system control, electromagnetic transients, user-machine interfaces, and database management. In the discussions that followed, participants identified five areas warranting further investigation: system load flow analysis, transient power and voltage analysis, structural instability and bifurcation, control systems design, and proximity to instability. 63 refs.

  19. High-performance Scientific Computing using Parallel Computing to Improve Performance Optimization Problems

    Directory of Open Access Journals (Sweden)

    Florica Novăcescu

    2011-10-01

    Full Text Available HPC (High Performance Computing) has become essential for accelerating innovation, helping companies create new inventions, better models and more reliable products, and obtaining processes and services at low cost. This paper focuses particularly on describing the field of high-performance scientific computing, parallel computing, scientific computing, parallel computers, and trends in the HPC field; the trends presented here reveal important new directions toward the realization of a high-performance computational society. The practical part of the work is an example of using an HPC tool to accelerate the solution of an electrostatic optimization problem with the Parallel Computing Toolbox, which allows solving computational and data-intensive problems using MATLAB and Simulink on multicore and multiprocessor computers.
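
    How much a parallel run of this kind can gain is bounded by Amdahl's law, a standard result quoted here for context (not a formula from the paper; the parallel fractions and core counts are illustrative):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's law: maximum speedup on n workers when a fraction p of the
    runtime is perfectly parallelizable and the rest is serial."""
    return 1.0 / ((1.0 - p) + p / n)

s8 = amdahl_speedup(0.95, 8)           # ~5.9x on 8 cores when 95% parallelizes
ceiling = amdahl_speedup(0.95, 10**9)  # approaches the asymptote 1/(1-p) = 20x
```

    Even a small serial fraction caps the benefit of adding processors, which is why profiling the serial portion matters before scaling out.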

  20. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    DEFF Research Database (Denmark)

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

    Internationally, there is a growing awareness on the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thi...

  1. 157 W all-fiber high-power picosecond laser.

    Science.gov (United States)

    Song, Rui; Hou, Jing; Chen, Shengping; Yang, Weiqiang; Lu, Qisheng

    2012-05-01

    An all-fiber high-power picosecond laser is constructed in a master oscillator power amplifier configuration. The self-constructed fiber laser seed is passively mode locked by a semiconductor saturable absorber mirror. Average output power of 157 W is obtained after three stages of amplification at a fundamental repetition rate of 60 MHz. A short length of ytterbium double-clad fiber with a high doping level is used to suppress nonlinear effects. However, a stimulated Raman scattering (SRS) effect occurs owing to the 78 kW high peak power. A self-made all-fiber repetition rate increasing system is used to octuple the repetition rate and decrease the high peak power. Average output power of 156.6 W is obtained without SRS under the same pump power at a 480 MHz repetition rate with 0.6 nm line width.
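
    The quoted figures are mutually consistent, as a quick back-of-the-envelope check shows (a rectangular-pulse approximation is assumed; this is an illustration, not the authors' analysis):

```python
def pulse_energy_j(avg_power_w: float, rep_rate_hz: float) -> float:
    """Energy per pulse: average power divided by repetition rate."""
    return avg_power_w / rep_rate_hz

def implied_pulse_width_s(avg_power_w: float, rep_rate_hz: float,
                          peak_power_w: float) -> float:
    """Pulse duration implied by a rectangular-pulse approximation."""
    return pulse_energy_j(avg_power_w, rep_rate_hz) / peak_power_w

e_60mhz = pulse_energy_j(157.0, 60e6)             # ~2.6 uJ per pulse
width = implied_pulse_width_s(157.0, 60e6, 78e3)  # ~34 ps: "picosecond" checks out
e_480mhz = pulse_energy_j(156.6, 480e6)           # octupled rate -> ~8x less energy/pulse
```

    The last line shows why octupling the repetition rate suppresses SRS: at constant average power, energy (and hence peak power) per pulse drops by roughly the same factor of eight.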

  2. Contributions to Desktop Grid Computing : From High Throughput Computing to Data-Intensive Sciences on Hybrid Distributed Computing Infrastructures

    OpenAIRE

    Fedak, Gilles

    2015-01-01

    Since the mid 90’s, Desktop Grid Computing - i.e. the idea of using a large number of remote PCs distributed on the Internet to execute large parallel applications - has proved to be an efficient paradigm for providing large computational power at a fraction of the cost of a dedicated computing infrastructure. This document presents my contributions over the last decade to broaden the scope of Desktop Grid Computing. My research has followed three different directions. The first direction has ...

  3. High brilliance and high efficiency: optimized high power diode laser bars

    Science.gov (United States)

    Hülsewede, R.; Schulze, H.; Sebastian, J.; Schröder, D.; Meusel, J.; Hennig, P.

    2008-02-01

    The strongly growing laser market makes ongoing demands to reduce the costs of diode-laser-pumped systems. For that reason JENOPTIK Diode Lab GmbH (JDL) optimized the bar brilliance (small vertical far-field divergence) and bar efficiency (higher optical power operation) with respect to pump applications. High efficiency reduces the costs for mounting and cooling, and high brilliance increases the coupling efficiency. Both are carefully adjusted in the 9xx nm high-power diode laser bars for pump applications in disc and fiber lasers. Based on low-loss waveguide structures, high-brilliance bars with 19° fast-axis beam divergence (FWHM) at 58% maximum efficiency and 27° fast-axis beam divergence (FWHM) at 62% maximum efficiency have been developed. Mounted on conductively cooled heat sinks, high-power operation with lifetime > 20,000 hours is demonstrated at the 120 W output power level (50% filling-factor bars) and at 80 W (20% filling-factor bars). 808 nm bars used as pump sources for Nd:YAG solid-state lasers still dominate the market. With respect to the demands for high reliability at high-power operation, current results of a 100 W high-power lifetime test show more than 9000 hours of operation for passively cooled, packaged, high-efficiency 50% filling-factor bars. Measurement of the COMD level after this hard pulse lifetime test demonstrates very high power levels with no significant droop in the COMD power level, confirming the high facet stability of JDL's facet technology. New high-power diode laser bars with wavelengths of 825 nm and 885 nm are still under development, and first results are presented.

  4. High Fidelity Adiabatic Quantum Computation via Dynamical Decoupling

    CERN Document Server

    Quiroz, Gregory

    2012-01-01

    We introduce high-order dynamical decoupling strategies for open system adiabatic quantum computation. Our numerical results demonstrate that a judicious choice of high-order dynamical decoupling method, in conjunction with an encoding which allows computation to proceed alongside decoupling, can dramatically enhance the fidelity of adiabatic quantum computation in spite of decoherence.

  5. Computing Integer Powers in Floating-Point Arithmetic

    CERN Document Server

    Kornerup, Peter; Muller, Jean-Michel

    2007-01-01

    We introduce two algorithms for accurately evaluating powers to a positive integer in floating-point arithmetic, assuming a fused multiply-add (fma) instruction is available. We show that our log-time algorithm always produces faithfully-rounded results, discuss the possibility of getting correctly rounded results, and show that results correctly rounded in double precision can be obtained if extended precision is available with the possibility to round into double precision (with a single rounding).
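
    The log-time evaluation the authors analyze is built on binary (square-and-multiply) exponentiation. A minimal sketch follows; it deliberately omits the paper's fma-based error compensation, so it does not guarantee faithfully-rounded results:

```python
def ipow(x: float, n: int) -> float:
    """Evaluate x**n for a positive integer n with O(log n) multiplications
    by square-and-multiply. No fma error compensation: each multiply rounds,
    so accumulated error can exceed the faithful-rounding bound."""
    assert n >= 1
    result = 1.0
    base = x
    while n:
        if n & 1:          # current bit set: fold this power of the base in
            result *= base
        n >>= 1
        if n:              # square only while bits remain
            base *= base
    return result
```

    Ten multiplications suffice for n up to 1023, which is why the fma-compensated variant in the paper can afford to double-check each rounding step and still run in logarithmic time.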

  6. Computing Integer Powers in Floating-Point Arithmetic

    OpenAIRE

    Kornerup, Peter; Lefèvre, Vincent; Muller, Jean-Michel

    2007-01-01

    We introduce two algorithms for accurately evaluating powers to a positive integer in floating-point arithmetic, assuming a fused multiply-add (fma) instruction is available. We show that our log-time algorithm always produces faithfully-rounded results, discuss the possibility of getting correctly rounded results, and show that results correctly rounded in double precision can be obtained if extended precision is available with the possibility to round into double precision (with a single rou...

  7. Dynamic Computer Model of a Stirling Space Nuclear Power System

    Science.gov (United States)

    2006-05-04

    Indexed excerpt (fragments of the report's figure list and text): profiles of the Stirling converter; random fiber regenerator matrices with 80% and 88% porosity (Figure 4-3); ... ideal in shape (Figure 2-7). The main components of the Stirling converter are the heater, regenerator, cooler, displacer, power piston, and alternator. The heater and cooler provide a continuous heat source and sink, respectively, for the Stirling converter. The regenerator adds ...

  8. On the Power of Correlated Randomness in Secure Computation

    DEFF Research Database (Denmark)

    Ishai, Yuval; Kushilevitz, Eyal; Meldgaard, Sigurd Torkel

    2013-01-01

    We investigate the extent to which correlated secret randomness can help in secure computation with no honest majority. It is known that correlated randomness can be used to evaluate any circuit of size s with perfect security against semi-honest parties or statistical security against malicious...... positive and negative results on unconditionally secure computation with correlated randomness. Concretely, we obtain the following results. Minimizing communication. Any multiparty functionality can be realized, with perfect security against semi-honest parties or statistical security against malicious...... parties, where the communication complexity grows linearly with s. This leaves open two natural questions: (1) Can the communication complexity be made independent of the circuit size? (2) Is it possible to obtain perfect security against malicious parties? We settle the above questions, obtaining both...

  9. High-pressure (>1-bar) dielectric barrier discharge lamps generating short pulses of high-peak power vacuum ultraviolet radiation

    Energy Technology Data Exchange (ETDEWEB)

    Carman, R J; Mildren, R P; Ward, B K; Kane, D M [Short Wavelength Interactions with Materials (SWIM), Physics Department, Macquarie University, North Ryde, Sydney, NSW 2109 (Australia)

    2004-09-07

    We have investigated the scaling of peak vacuum ultraviolet output power from homogeneous Xe dielectric barrier discharges excited by short voltage pulses. Increasing the Xe fill pressure above 1 bar provides an increased output pulse energy, a shortened pulse duration and increases in the peak output power of two to three orders of magnitude. High peak power pulses of up to 6 W cm^-2 are generated with high efficiency for pulse rates up to 50 kHz. We show that the temporal pulse characteristics are in good agreement with results from detailed computer modelling of the discharge kinetics.

  10. Simulation of High Power Lasers (Preprint)

    Science.gov (United States)

    2010-06-01

    Indexed excerpt (only fragments of the report's reference list survive): Wilcox, D. C., Turbulence Modeling for CFD, DCW Industries, Inc., pp. 185-193 and pp. 294-296, July 1998; Menter, F. L.; Perram, G. P., Int. J. Chem. Kinet. 27, 817-28 (1995); Madden, T. J. and Solomon ...

  11. Coordinated Frequency Control of Wind Turbines in Power Systems with High Wind Power Penetration

    DEFF Research Database (Denmark)

    Tarnowski, Germán Claudio

    ... particular views. These models were developed and verified during this work, based around a particular manufacturer's wind turbine and on said isolated power system with wind power. The capability of variable speed wind turbines for providing Inertial Response is analysed. To perform this assessment, a control ... and the dynamic stability of the grid frequency under large disturbances would be compromised. The aim of this study is to investigate the integration of large scale wind power generation in power systems and its active power control. Novel methods and solutions dealing specifically with the electric frequency ... stability and high wind power penetration or in islanding situations are addressed. The review of relevant theoretical concepts is supported by measurements carried out on an isolated power system characterized by high wind power penetration. Different mathematical and simulation models are used in several ...
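
    The scale of the inertial response discussed above can be sketched with the standard swing-equation relation P = 2 * H * S * (df/dt) / f_n. The inertia constant, turbine rating and frequency gradient below are illustrative values, not data from the thesis:

```python
def inertial_power_mw(h_s: float, s_mva: float, rocof_hz_s: float,
                      f_nominal_hz: float = 50.0) -> float:
    """Active power drawn from rotor kinetic energy for a given rate of
    change of frequency (ROCOF), per the swing-equation relation
    P = 2 * H * S * (df/dt) / f_n."""
    return 2.0 * h_s * s_mva * rocof_hz_s / f_nominal_hz

# Hypothetical 3 MVA turbine with H = 4 s during a 0.5 Hz/s frequency dip:
p_mw = inertial_power_mw(h_s=4.0, s_mva=3.0, rocof_hz_s=0.5)  # 0.24 MW
```

    This is the physical budget a variable-speed turbine's controller can tap: the power comes out of rotor speed, so it must later be recovered, which is one reason coordinated control matters in isolated, high-penetration systems.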

  12. Computational power and generative capacity of genetic systems.

    Science.gov (United States)

    Igamberdiev, Abir U; Shklovskiy-Kordi, Nikita E

    2016-01-01

    Semiotic characteristics of genetic sequences are based on the general principles of linguistics formulated by Ferdinand de Saussure, such as the arbitrariness of sign and the linear nature of the signifier. Besides these semiotic features that are attributable to the basic structure of the genetic code, the principle of generativity of genetic language is important for understanding biological transformations. The problem of generativity in genetic systems arises to a possibility of different interpretations of genetic texts, and corresponds to what Alexander von Humboldt called "the infinite use of finite means". These interpretations appear in the individual development as the spatiotemporal sequences of realizations of different textual meanings, as well as the emergence of hyper-textual statements about the text itself, which underlies the process of biological evolution. These interpretations are accomplished at the level of the readout of genetic texts by the structures defined by Efim Liberman as "the molecular computer of cell", which includes DNA, RNA and the corresponding enzymes operating with molecular addresses. The molecular computer performs physically manifested mathematical operations and possesses both reading and writing capacities. Generativity paradoxically resides in the biological computational system as a possibility to incorporate meta-statements about the system, and thus establishes the internal capacity for its evolution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. Atmospheric Propagation and Combining of High-Power Lasers

    Science.gov (United States)

    2015-09-08

    Naval Research Laboratory, Washington, DC 20375-5320, report NRL/MR/6703--15-9646. Authors: W. Nelson, P. Sprangle. Keywords: turbulence; beam combining. From the abstract: in this paper we analyze the beam combining and atmospheric propagation of high-power lasers for directed-energy (DE ...

  14. High Temperature Power Converters for Military Hybrid Electric Vehicles

    Science.gov (United States)

    2011-08-09

    Mini-symposium, August 9-11, Dearborn, Michigan; public release. From the introduction: today, wide bandgap devices ...

  15. High-Speed Low Power Design in CMOS

    DEFF Research Database (Denmark)

    Ghani, Arfan; Usmani, S. H.; Stassen, Flemming

    2004-01-01

    Static CMOS design displays benefits such as low power consumption, dominated by dynamic power consumption. In contrast, MOS Current Mode Logic (MCML) displays static rather than dynamic power consumption. High-speed low-power design is one of the many application areas in VLSI that require ... the appropriate domains of performance and power requirements in which MCML presents benefits over standard CMOS. An optimized cell library is designed and implemented in both CMOS and MCML in order to make a comparison with reference to speed and power. Much more time is spent in order to understand ...

  16. Design and Construction of Low Cost High Voltage dc Power Supply for Constant Power Operation

    Science.gov (United States)

    Kumar, N. S.; Jayasankar, V.

    2013-06-01

    Pulsed load applications like laser-based systems need high-voltage dc power supplies with good regulation characteristics. This paper presents the design, construction and testing of a dc power supply with 1 kV output at the 300 W power level. The designed converter has a half-bridge switched-mode power supply (SMPS) configuration with 20 kHz switching. The paper covers the design of the half-bridge inverter, closed-loop control, the high-frequency transformer and other related electronics. The power supply incorporates a low-cost op-amp based feedback controller designed using small-signal modelling of the converter. The converter was constructed and found to work satisfactorily to its specifications.
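
    A few headline numbers follow directly from the stated specification. In the sketch below, the 325 V DC bus (rectified 230 V mains) and the 0.8 duty-cycle headroom are assumptions for illustration, not values from the paper:

```python
V_OUT, P_OUT, F_SW = 1000.0, 300.0, 20e3  # specification: 1 kV, 300 W, 20 kHz
V_BUS = 325.0   # assumed DC bus from rectified 230 V mains (not from the paper)
D_MAX = 0.8     # assumed duty-cycle headroom (not from the paper)

i_out = P_OUT / V_OUT             # full-load output current: 0.3 A
r_load = V_OUT / i_out            # equivalent full-load resistance: ~3.3 kohm
v_primary = V_BUS / 2.0           # a half bridge applies half the bus to the primary
turns_ratio = V_OUT / (v_primary * D_MAX)  # required secondary:primary ratio, ~7.7
t_switch = 1.0 / F_SW             # 50 us switching period
```

    Running the transformer at 20 kHz rather than line frequency is what keeps the magnetics small; the turns ratio then sets the secondary voltage stress the rectifier and feedback divider must handle.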

  17. Modelling aluminium wire bond reliability in high power OMP devices

    NARCIS (Netherlands)

    Kregting, R.; Yuan, C.A.; Xiao, A.; Bruijn, F. de

    2011-01-01

    In a RF power application such as the OMP, the wires are subjected to high current (because of the high power) and high temperature (because of the heat from IC and joule-heating from the wire itself). Moreover, the wire shape is essential to the RF performance. Hence, the aluminium wire is preferre

  18. Multidisciplinary Modelling Tools for Power Electronic Circuits:with Focus on High Power Modules

    OpenAIRE

    Bahman, Amir Sajjad

    2015-01-01

    This thesis presents multidisciplinary modelling techniques in a Design For Reliability (DFR) approach for power electronic circuits. With increasing penetration of renewable energy systems, the demand for reliable power conversion systems is becoming critical. Since a large part of electricity is processed through power electronics, highly efficient, sustainable, reliable and cost-effective power electronic devices are needed. Reliability of a product is defined as the ability to perform wit...

  19. Measurement of High Power Current-Stabilized Power Supply with High Stability

    Institute of Scientific and Technical Information of China (English)

    YanHuaihai; FengXiuming; BaiZhen; ZhouZhongzu

    2003-01-01

    The DC power supply system of HIRFL has been upgraded since 1999; the new power supplies are mainly high-frequency ZVS soft-switching converters or thyristor phase-controlled rectifiers. Each power supply is strictly tested before being put into operation, especially for long-term current stability, current ripple, efficiency, repeatability, EMI and so on. The test results indicate that the performance of the power supplies satisfies the requirements of HIRFL.

  20. High-performance computing for structural mechanics and earthquake/tsunami engineering

    CERN Document Server

    Hori, Muneo; Ohsaki, Makoto

    2016-01-01

    Huge earthquakes and tsunamis have caused serious damage to important structures such as civil infrastructure elements, buildings and power plants around the globe. To quantitatively evaluate such damage processes and to design effective prevention and mitigation measures, the latest high-performance computational mechanics technologies, which include terascale to petascale computers, can offer powerful tools. The phenomena covered in this book include seismic wave propagation in the crust and soil, seismic response of infrastructure elements such as tunnels considering soil-structure interactions, seismic response of high-rise buildings, seismic response of nuclear power plants, tsunami run-up over coastal towns and tsunami inundation considering fluid-structure interactions. The book provides all necessary information for addressing these phenomena, ranging from the fundamentals of high-performance computing for finite element methods, key algorithms of accurate dynamic structural analysis, fluid flows ...

  1. High specific power flexible integrated IMM photovoltaic blanket Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Originally designed for space applications, multi-junction solar cells have a high overall power conversion efficiency (>30%) which compares favorably to...

  2. High Efficiency Hall Thruster Discharge Power Converter Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Busek leveraged previous, internally sponsored, high power, Hall thruster discharge converter development which allowed it to design, build, and test new printed...

  3. Temperature Stabilized Characterization of High Voltage Power Supplies

    CERN Document Server

    Krarup, Ole

    2017-01-01

    High precision measurements of the masses of nuclear ions in the ISOLTRAP experiment rely on a multi-reflection time-of-flight (MR-ToF) device. A major source of noise and drift is the instability of the high voltage power supplies employed. Electrical noise and temperature changes can broaden peaks in time-of-flight spectra and shift the position of peaks between runs. In this report we investigate how the noise and drift of high-voltage power supplies can be characterized. Results indicate that analog power supplies generally have better relative stability than digitally controlled ones, and that the high temperature coefficients of all power supplies merit efforts to stabilize them.

  4. Energy Use and Power Levels in New Monitors and Personal Computers

    Energy Technology Data Exchange (ETDEWEB)

    Roberson, Judy A.; Homan, Gregory K.; Mahajan, Akshay; Nordman, Bruce; Webber, Carrie A.; Brown, Richard E.; McWhinney, Marla; Koomey, Jonathan G.

    2002-07-23

    Our research was conducted in support of the EPA ENERGY STAR Office Equipment program, whose goal is to reduce the amount of electricity consumed by office equipment in the U.S. The most energy-efficient models in each office equipment category are eligible for the ENERGY STAR label, which consumers can use to identify and select efficient products. As the efficiency of each category improves over time, the ENERGY STAR criteria need to be revised accordingly. The purpose of this study was to provide reliable data on the energy consumption of the newest personal computers and monitors that the EPA can use to evaluate revisions to current ENERGY STAR criteria as well as to improve the accuracy of ENERGY STAR program savings estimates. We report the results of measuring the power consumption and power management capabilities of a sample of new monitors and computers. These results will be used to improve estimates of program energy savings and carbon emission reductions, and to inform revisions of the ENERGY STAR criteria for these products. Our sample consists of 35 monitors and 26 computers manufactured between July 2000 and October 2001; it includes cathode ray tube (CRT) and liquid crystal display (LCD) monitors, Macintosh and Intel-architecture computers, desktop and laptop computers, and integrated computer systems, in which power consumption of the computer and monitor cannot be measured separately. For each machine we measured power consumption when off, on, and in each low-power level. We identify trends in and opportunities to reduce power consumption in new personal computers and monitors. Our results include a trend among monitor manufacturers to provide a single very low low-power level, well below the current ENERGY STAR criteria for sleep power consumption. These very low sleep power results mean that energy consumed when monitors are off or in active use has become more important in terms of contribution to the overall unit energy consumption (UEC).
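
    The unit energy consumption (UEC) figure discussed above is essentially a duty-cycle-weighted sum of the measured power levels. A minimal sketch of that arithmetic, using made-up power levels and usage hours rather than the study's measurements:

    ```python
    # Hypothetical illustration: annual unit energy consumption (UEC) of a monitor
    # from per-state power measurements and an assumed usage profile. The wattages
    # and duty cycles below are invented examples, not data from the study.

    def annual_uec_kwh(power_w, hours_per_day):
        """Duty-cycle-weighted energy across power states; 365 days/yr, in kWh."""
        assert set(power_w) == set(hours_per_day)
        assert abs(sum(hours_per_day.values()) - 24.0) < 1e-9
        wh_per_day = sum(power_w[s] * hours_per_day[s] for s in power_w)
        return wh_per_day * 365 / 1000.0

    # Example profile: 4 h active, 4 h sleep, 16 h off per day.
    power_w = {"on": 35.0, "sleep": 2.0, "off": 1.0}
    hours = {"on": 4.0, "sleep": 4.0, "off": 16.0}
    uec = annual_uec_kwh(power_w, hours)
    print(f"UEC ≈ {uec:.1f} kWh/yr")
    ```

    Under this illustrative profile the on-mode hours dominate the total, while an already-low sleep level contributes little, which mirrors the trend the abstract describes.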

  5. Phosphoric acid fuel cell power plant system performance model and computer program

    Science.gov (United States)

    Alkasab, K. A.; Lu, C. Y.

    1984-01-01

    A FORTRAN computer program was developed for analyzing the performance of phosphoric acid fuel cell power plant systems. Energy, mass, and electrochemical analyses of the reformer, the shift converters, the heat exchangers, and the fuel cell stack were combined to develop a mathematical model of the power plant for both atmospheric and pressurized conditions, and for several commercial fuels.

  6. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far to harmonise the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  7. Proceedings: Workshop on Advanced Mathematics and Computer Science for Power Systems Analysis

    Energy Technology Data Exchange (ETDEWEB)

    None

    1991-08-01

    EPRI's Office of Exploratory Research sponsors a series of workshops that explore how to apply recent advances in mathematics and computer science to the problems of the electric utility industry. In this workshop, participants identified research objectives that may significantly improve the mathematical methods and computer architecture currently used for power system analysis.

  8. Negative capacitance for ultra-low power computing

    Science.gov (United States)

    Khan, Asif Islam

    Owing to the fundamental physics of the Boltzmann distribution, the ever-increasing power dissipation in nanoscale transistors threatens an end to the almost-four-decade-old cadence of continued performance improvement in complementary metal-oxide-semiconductor (CMOS) technology. It is now agreed that the introduction of new physics into the operation of field-effect transistors---in other words, "reinventing the transistor"---is required to avert such a bottleneck. In this dissertation, we present the experimental demonstration of a novel physical phenomenon, called the negative capacitance effect in ferroelectric oxides, which could dramatically reduce power dissipation in nanoscale transistors. It was theoretically proposed in 2008 that by introducing a ferroelectric negative capacitance material into the gate oxide of a metal-oxide-semiconductor field-effect transistor (MOSFET), the subthreshold slope could be reduced below the fundamental Boltzmann limit of 60 mV/dec, which, in turn, could arbitrarily lower the power supply voltage and the power dissipation. The research presented in this dissertation establishes the theoretical concept of ferroelectric negative capacitance as an experimentally verified fact. The main results presented in this dissertation are threefold. To start, we present the first direct measurement of negative capacitance in isolated, single crystalline, epitaxially grown thin film capacitors of ferroelectric Pb(Zr0.2Ti0.8)O3. By constructing a simple resistor-ferroelectric capacitor series circuit, we show that, during ferroelectric switching, the ferroelectric voltage decreases, while the stored charge in it increases, which directly shows a negative slope in the charge-voltage characteristics of a ferroelectric capacitor. Such a situation is completely opposite to what would be observed in a regular resistor-positive capacitor series circuit. This measurement could serve as a canonical test for negative capacitance in any novel
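
    The 60 mV/dec Boltzmann limit cited above follows directly from thermal statistics: the minimum subthreshold swing of a conventional MOSFET is (kT/q)·ln(10). A quick numerical check of that figure:

    ```python
    # Back-of-envelope check of the Boltzmann limit cited in the abstract:
    # the minimum subthreshold swing of a conventional MOSFET is (kT/q)*ln(10),
    # about 60 mV per decade of drain current near room temperature.
    import math

    k = 1.380649e-23     # Boltzmann constant, J/K
    q = 1.602176634e-19  # elementary charge, C

    def subthreshold_limit_mv_per_dec(temp_kelvin):
        """Thermal limit on subthreshold swing, in mV per decade."""
        return (k * temp_kelvin / q) * math.log(10) * 1000.0

    print(f"{subthreshold_limit_mv_per_dec(300):.1f} mV/dec at 300 K")
    ```

    Negative-capacitance gate stacks aim to beat this bound; note the limit itself scales linearly with temperature.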

  9. Output beam analysis of high power COIL

    Institute of Scientific and Technical Information of China (English)

    Deli Yu(于德利); Fengting Sang(桑凤亭); Yuqi Jin(金玉奇); Yizhu Sun(孙以珠)

    2003-01-01

    As the output power of a chemical oxygen iodine laser (COIL) increases, output laser beam instability appears as far-field beam spot drift and deformation for the large-Fresnel-number unstable resonator. In order to interpret this phenomenon, an output beam mode simulation code was developed with the fast Fourier transform method. The calculation results show that the nonuniform gain in COIL produces a skewed output intensity distribution, which causes mirror tilt and bulge due to thermal expansion. As the output power of the COIL increases, the mirror surfaces, especially the back surface of the scraper mirror, absorb more and more heat, which causes serious drift and deformation of the far-field beam spot. The initial misalignment direction is an important factor in the far-field beam spot drift and deformation.

  10. High-power CSI-fed induction motor drive with optimal power distribution based control

    Science.gov (United States)

    Kwak, S.-S.

    2011-11-01

    In this article, a current source inverter (CSI) fed induction motor drive with an optimal power distribution control is proposed for high-power applications. The CSI-fed drive is configured with a six-step CSI along with a pulsewidth modulated voltage source inverter (PWM-VSI) and capacitors. Due to the PWM-VSI and the capacitor, sinusoidal motor currents and voltages with high quality as well as natural commutation of the six-step CSI can be obtained. Since this CSI-fed drive can deliver required output power through both the six-step CSI and PWM-VSI, this article shows that the kVA ratings of both the inverters can be reduced by proper real power distribution. The optimal power distribution under load requirements, based on power flow modelling of the CSI-fed drive, is proposed to not only minimise the PWM-VSI rating but also reduce the six-step CSI rating. The dc-link current control of the six-step CSI is developed to realise the optimal power distribution. Furthermore, a vector controlled drive for high-power induction motors is proposed based on the optimal power distribution. Experimental results verify the high-power CSI-fed drive with the optimal power distribution control.

  11. High-performance computing, high-speed networks, and configurable computing environments: progress toward fully distributed computing.

    Science.gov (United States)

    Johnston, W E; Jacobson, V L; Loken, S C; Robertson, D W; Tierney, B L

    1992-01-01

    The next several years will see the maturing of a collection of technologies that will enable fully and transparently distributed computing environments. Networks will be used to configure independent computing, storage, and I/O elements into "virtual systems" that are optimal for solving a particular problem. This environment will make the most powerful computing systems those that are logically assembled from network-based components and will also make those systems available to a widespread audience. Anticipating that the necessary technology and communications infrastructure will be available in the next 3 to 5 years, we are developing and demonstrating prototype applications that test and exercise the currently available elements of this configurable environment. The Lawrence Berkeley Laboratory (LBL) Information and Computing Sciences and Research Medicine Divisions have collaborated with the Pittsburgh Supercomputer Center to demonstrate one distributed application that illuminates the issues and potential of using networks to configure virtual systems. This application allows the interactive visualization of large three-dimensional (3D) scalar fields (voxel data sets) by using a network-based configuration of heterogeneous supercomputers and workstations. The specific test case is visualization of 3D magnetic resonance imaging (MRI) data. The virtual system architecture consists of a Connection Machine-2 (CM-2) that performs surface reconstruction from the voxel data, a Cray Y-MP that renders the resulting geometric data into an image, and a workstation that provides the display of the image and the user interface for specifying the parameters for the geometry generation and 3D viewing. These three elements are configured into a virtual system by using several different network technologies. This paper reviews the current status of the software, hardware, and communications technologies that are needed to enable this configurable environment. These

  12. Efficient Adjoint Computation of Hybrid Systems of Differential Algebraic Equations with Applications in Power Systems

    Energy Technology Data Exchange (ETDEWEB)

    Abhyankar, Shrirang [Argonne National Lab. (ANL), Argonne, IL (United States); Anitescu, Mihai [Argonne National Lab. (ANL), Argonne, IL (United States); Constantinescu, Emil [Argonne National Lab. (ANL), Argonne, IL (United States); Zhang, Hong [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-03-31

    Sensitivity analysis is an important tool to describe power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this work, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating trajectory sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as DC exciters, by deriving and implementing the adjoint jump conditions that arise from state and time-dependent discontinuities. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach.

  13. High Energy Density Capacitors for Pulsed Power Applications

    Science.gov (United States)

    2009-07-01

    Engineering bulletin from General Atomics Energy Products: High Energy Density Capacitors for Pulsed Power Applications (Fred MacDougall, Joel...). High-efficiency energy storage capacitors are available with energy densities as high as 3 J/cc for 1000 shots or...

  14. High-power AlGaAs channeled substrate planar diode lasers for spaceborne communications

    Science.gov (United States)

    Connolly, J. C.; Goldstein, B.; Pultz, G. N.; Slavin, S. E.; Carlin, D. B.; Ettenberg, M.

    1988-01-01

    A high power channeled substrate planar AlGaAs diode laser with an emission wavelength of 8600 to 8800 A was developed. The optoelectronic behavior (power-current characteristics, single spatial and spectral mode behavior, far-field characteristics, modulation, and astigmatism properties) and results of computer modeling studies on the performance of the laser are discussed. Lifetest data on these devices at high output power levels are also included. In addition, a new type of channeled substrate planar laser utilizing a Bragg grating to stabilize the longitudinal mode was demonstrated. The fabrication procedures and optoelectronic properties of this new diode laser are described.

  15. Investigation on Satellite-borne High-power Solid-state Power Amplifier Technology and Experiment

    OpenAIRE

    Wu Xiao-po; Zhao Hai-yang; Xi Song-tao

    2014-01-01

    Based on the research and development efforts of satellite-borne lumped solid-state transmitters, the design of a satellite-borne high-power microwave amplifier module is introduced. Focusing on satellite-borne applications, aspects of the high-power density thermal design, multipactor proof design, EMC design and so on, which are critical technologies for a solid-state power amplifier, are discussed. Subsequently, experiments are used to verify the concept.

  16. Investigation on Satellite-borne High-power Solid-state Power Amplifier Technology and Experiment

    Directory of Open Access Journals (Sweden)

    Wu Xiao-po

    2014-06-01

    Based on the research and development efforts of satellite-borne lumped solid-state transmitters, the design of a satellite-borne high-power microwave amplifier module is introduced. Focusing on satellite-borne applications, aspects of the high-power density thermal design, multipactor proof design, EMC design and so on, which are critical technologies for a solid-state power amplifier, are discussed. Subsequently, experiments are used to verify the concept.

  17. Lattice Boltzmann Method used for the aircraft characteristics computation at high angle of attack

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The traditional Finite Volume Method (FVM) and the Lattice Boltzmann Method (LBM) are both used to compute the high angle of attack aerodynamic characteristics of the benchmark aircraft model CT-1. Although the software nominally requires flow on the order of Ma<0.4, a simulation at Ma=0.5 was run in PowerFLOW after theoretical analysis. The consistency with the wind tunnel testing is satisfactory, especially for the LBM, which produces very good results at high angle of attack. PowerFLOW can accurately capture the details of the flow because it is inherently time-dependent and parallel and suits large-scale computation very well.
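
    The lattice Boltzmann approach mentioned above advances particle distribution functions through alternating streaming and collision steps rather than discretizing the Navier-Stokes equations directly. The sketch below reduces that stream-and-collide cycle to a D1Q2 diffusion toy problem (an assumed minimal model for illustration, not the CT-1 case or PowerFLOW's 3-D lattice):

    ```python
    # Minimal D1Q2 lattice Boltzmann sketch: two distributions stream along
    # lattice links, then relax toward a local equilibrium (BGK collision).
    # A density spike spreads diffusively while total mass is conserved.
    import numpy as np

    n, tau, steps = 64, 1.0, 200
    rho = np.zeros(n)
    rho[n // 2] = 1.0                      # initial density spike
    f = np.stack([rho / 2, rho / 2])       # f[0] moves right, f[1] moves left

    for _ in range(steps):
        f[0] = np.roll(f[0], 1)            # streaming step (periodic lattice)
        f[1] = np.roll(f[1], -1)
        rho = f.sum(axis=0)                # macroscopic density
        feq = rho / 2                      # D1Q2 equilibrium distribution
        f += -(f - feq) / tau              # BGK single-relaxation collision

    print(f"total mass {rho.sum():.6f}, peak density {rho.max():.4f}")
    ```

    Each lattice update touches only nearest neighbours, which is why the method parallelizes so naturally on large machines.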

  18. Self-commutating converters for high power applications

    CERN Document Server

    Arrillaga, Jos; Watson, Neville R; Murray, Nicholas J

    2010-01-01

    For very high voltage or very high current applications, the power industry still relies on thyristor-based Line Commutated Conversion (LCC), which limits the power controllability to two quadrant operation. However, the ratings of self-commutating switches such as the Insulated-Gate Bipolar Transistor (IGBT) and Integrated Gate-Commutated Thyristor (IGCT), are reaching levels that make the technology possible for very high power applications. This unique book reviews the present state and future prospects of self-commutating static power converters for applications requiring either ultr

  19. Design of High Power Density Amplifiers: Application to Ka Band

    Science.gov (United States)

    Passi, Davide; Leggieri, Alberto; Di Paolo, Franco; Bartocci, Marco; Tafuto, Antonio

    2017-06-01

    Recent developments in the design of high-power, high-frequency amplifiers are assessed in this paper through the analysis and measurement of a high power density amplifier operating in the Ka band. The design procedure is presented and a technical investigation is reported. The proposed device has shown over 23% useful frequency bandwidth. It is an ensemble of 16 monolithic solid-state power amplifiers that employs mixed technologies such as spatial and planar combiners. Tests performed have yielded a maximum delivered power of 47.2 dBm.

  20. High-power MUTC photodetectors for RF photonic links

    Science.gov (United States)

    Estrella, Steven; Johansson, Leif A.; Mashanovitch, Milan L.; Beling, Andreas

    2016-02-01

    High power photodiodes are needed for a range of applications. The high available power conversion efficiency makes these ideal for antenna remoting applications, including high power, low duty-cycle RF pulse generation. The compact footprint and fiber optic input allow densely packed RF aperture arrays with low cross-talk for phased high directionality emitters. Other applications include linear RF photonic links and other high dynamic range optical systems. Freedom Photonics has developed packaged modified uni-traveling carrier (MUTC) photodetectors for high-power applications. Both single and balanced photodetector pairs are mounted on a ceramic carrier, and packaged in a compact module optimized for high power operation. Representative results include greater than 100 mA photocurrent, >100 mW generated RF power and >20 GHz bandwidth. In this paper, we evaluate the saturation and bandwidth of these single-ended and balanced photodetectors for detector diameters in the 16 μm to 34 μm range. Packaged performance is compared to chip performance. Further new development towards the realization of <100 GHz packaged photodetector modules with optimized high power performance is described. Finally, incorporation of these photodetector structures in novel photonic integrated circuits (PICs) for high optical power application areas is outlined.

  1. Power computations in time series analyses for traffic safety interventions.

    Science.gov (United States)

    McLeod, A Ian; Vingilis, E R

    2008-05-01

    The evaluation of traffic safety interventions or other policies that can affect road safety often requires the collection of administrative time series data, such as monthly motor vehicle collision data that may be difficult and/or expensive to collect. Furthermore, since policy decisions may be based on the results found from the intervention analysis of the policy, it is important to ensure that the statistical tests have enough power, that is, that we have collected enough time series data both before and after the intervention so that a meaningful change in the series will likely be detected. In this short paper, we present a simple methodology for doing this. It is expected that the methodology presented will be useful for sample size determination in a wide variety of traffic safety intervention analysis applications. Our method is illustrated with a proposed traffic safety study that was funded by NIH.
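
    The sample-size reasoning described above can be illustrated with a Monte Carlo power computation. The sketch below is a simplified stand-in for the paper's method: it uses a two-sample z-test on white noise to estimate the probability of detecting a step change, with all effect sizes and sample counts invented for illustration (serial correlation in real collision series would lower these power estimates):

    ```python
    # Monte Carlo power estimate for detecting a level shift in a series:
    # simulate many before/after datasets, count how often the test rejects.
    import numpy as np

    def detection_power(n_before, n_after, effect, sigma=1.0, sims=4000, seed=1):
        """Probability a two-sided z-test at the 5% level detects a step
        change of size `effect` given n_before/n_after observations."""
        rng = np.random.default_rng(seed)
        pre = rng.normal(0.0, sigma, (sims, n_before))
        post = rng.normal(effect, sigma, (sims, n_after))
        se = sigma * np.sqrt(1 / n_before + 1 / n_after)
        z = (post.mean(axis=1) - pre.mean(axis=1)) / se
        return float(np.mean(np.abs(z) > 1.96))  # 1.96 = two-sided 5% cutoff

    print(detection_power(12, 12, effect=1.0))   # ~0.69: 12 months each side
    print(detection_power(36, 36, effect=1.0))   # power grows with series length
    ```

    Collecting three years of data on each side of the intervention, rather than one, moves this illustrative test from a coin-flip-plus chance of detection to near certainty.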

  2. Principal Component Analysis - A Powerful Tool in Computing Marketing Information

    Directory of Open Access Journals (Sweden)

    Constantin C.

    2014-12-01

    This paper describes an instrumental study of a powerful multivariate data analysis method that researchers can use to obtain valuable information for decision makers who need to solve the marketing problems a company faces. The literature stresses the need to avoid the multicollinearity phenomenon in multivariate analysis and highlights the ability of Principal Component Analysis (PCA) to reduce a set of variables that may be correlated with each other to a small number of uncorrelated principal components. In this respect, the paper presents step by step the process of applying PCA in marketing research when a large number of naturally collinear variables is used.
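
    The dimension-reduction step the abstract describes can be sketched with a plain SVD-based PCA on synthetic collinear data (the single latent factor, loadings, and noise level below are invented for illustration):

    ```python
    # PCA sketch: five collinear "survey" variables driven by one latent factor
    # collapse to a single dominant, uncorrelated principal component.
    import numpy as np

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 1))                 # one underlying factor
    X = latent @ rng.normal(size=(1, 5)) + 0.1 * rng.normal(size=(200, 5))

    Xc = X - X.mean(axis=0)                            # center variables first
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s**2 / np.sum(s**2)                    # variance share per PC
    scores = Xc @ Vt.T                                 # uncorrelated PC scores

    print(f"PC1 explains {explained[0]:.1%} of total variance")
    ```

    Because the observed variables share one underlying driver, the first component captures most of the variance and the component scores are mutually uncorrelated, which is exactly the multicollinearity remedy the paper discusses.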

  3. High power couplers for Project X

    Energy Technology Data Exchange (ETDEWEB)

    Kazakov, S.; Champion, M.S.; Yakovlev, V.P.; Kramp, M.; Pronitchev, O.; Orlov, Y.; /Fermilab

    2011-03-01

    Project X is a multi-megawatt proton source under development at Fermi National Accelerator Laboratory. The key element of the project is a superconducting (SC) 3 GV continuous-wave (CW) proton linac. The linac includes 5 types of SC accelerating cavities at two frequencies (325 and 650 MHz). The cavities consume up to 30 kW average RF power and need proper main couplers. Requirements and the approach to the coupler design are discussed in the report. New cost-effective schemes are described. Results of electrodynamic and thermal simulations are presented.

  4. The spectral density function for the Laplacian on high tensor powers of a line bundle

    OpenAIRE

    2001-01-01

    For a symplectic manifold with quantizing line bundle, a choice of almost complex structure determines a Laplacian acting on tensor powers of the bundle. For high tensor powers Guillemin-Uribe showed that there is a well-defined cluster of low-lying eigenvalues, whose distribution is described by a spectral density function. We give an explicit computation of the spectral density function, by constructing certain quasimodes on the associated principal bundle.

  5. Integration of thermoelectrics and photovoltaics as auxiliary power sources in mobile computing applications

    Energy Technology Data Exchange (ETDEWEB)

    Muhtaroglu, Ali; von Jouanne, Annette [School of Electrical Engineering and Computer Science, Oregon State University, Corvallis, OR 97331-5501 (United States); Yokochi, Alex [School of Chemical, Biological and Environmental Engineering, Oregon State University, Corvallis, OR 97331-2702 (United States)

    2008-02-15

    The inclusion of renewable technologies as auxiliary power sources in mobile computing platforms can lead to improved performance such as the extension of battery life. This paper presents sustainable power management characteristics and performance enhancement opportunities in mobile computing systems resulting from the integration of thermoelectric generators and photovoltaic units. Thermoelectric generators are employed for scavenging waste heat from processors or other significant components in the computer's chipset while the integration of photovoltaic units is demonstrated for generating power from environmental illumination. A scalable and flexible power architecture is also verified to effectively integrate these renewable energy sources. This paper confirms that battery life extension can be achieved through the appropriate integration of renewable sources such as thermoelectric and photovoltaic devices. (author)
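
    The waste-heat scavenging idea above can be bounded with the standard matched-load estimate for a thermoelectric generator. The Seebeck coefficient, temperature difference, and internal resistance below are assumed values for illustration, not figures from the paper:

    ```python
    # Matched-load power bound for a thermoelectric generator (TEG):
    # open-circuit voltage V = S*dT, maximum delivered power P = V^2 / (4*R)
    # when the load resistance equals the module's internal resistance.

    def teg_max_power(seebeck_v_per_k, delta_t, internal_resistance_ohm):
        """Maximum power a TEG delivers into a matched resistive load."""
        v_open = seebeck_v_per_k * delta_t
        return v_open**2 / (4 * internal_resistance_ohm)

    # Assumed small module: S = 0.05 V/K, 20 K across it, R = 2 ohm.
    p = teg_max_power(0.05, 20.0, 2.0)
    print(f"{p * 1000:.0f} mW")   # (0.05*20)^2 / (4*2) = 0.125 W
    ```

    Tens to hundreds of milliwatts is small next to a laptop's load, which is why the paper treats such sources as auxiliary supplements for extending battery life rather than primary supplies.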

  6. Adequacy of Frequency Reserves for High Wind Power Generation

    DEFF Research Database (Denmark)

    Das, Kaushik; Litong-Palima, Marisciel; Maule, Petr

    2016-01-01

    In this article, a new methodology is developed to assess the adequacy of frequency reserves to handle power imbalances caused by wind power forecast errors. The goal of this methodology is to estimate the adequate volume and speed of activation of frequency reserves required to handle power imbalances caused by high penetration of wind power. An algorithm is proposed and developed to estimate the power imbalances due to wind power forecast error following activation of different operating reserves. Frequency containment reserve requirements for mitigating these power imbalances are developed through this methodology. Furthermore, the probability of reducing this frequency containment reserve requirement is investigated through this methodology with activation of different volumes and speeds of frequency restoration reserve. Wind power generation for 2020 and 2030 scenarios
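
    The adequacy question above can be caricatured as a tail-probability calculation: given a distribution of net forecast-error imbalances, how often does a candidate reserve volume fall short? The Gaussian error model and MW figures below are assumptions for illustration, not the paper's scenario data:

    ```python
    # Monte Carlo sketch of reserve adequacy: sample forecast-error imbalances
    # and estimate the probability that they exceed a chosen reserve volume.
    import numpy as np

    def reserve_shortfall_prob(reserve_mw, error_std_mw, sims=100_000, seed=7):
        """Probability that |imbalance| exceeds the procured reserve volume."""
        rng = np.random.default_rng(seed)
        imbalance = rng.normal(0.0, error_std_mw, sims)  # net forecast error
        return float(np.mean(np.abs(imbalance) > reserve_mw))

    # Larger reserves -> lower risk of an uncovered imbalance.
    for reserve in (300, 600, 900):
        print(reserve, "MW:", reserve_shortfall_prob(reserve, error_std_mw=300.0))
    ```

    Sizing reserves then becomes picking the smallest volume whose shortfall probability meets a reliability target; the paper's method additionally accounts for how fast different reserve categories activate.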

  7. High density operation for reactor-relevant power exhaust

    Science.gov (United States)

    Wischmeier, M.

    2015-08-01

    With increasing size of a tokamak device and associated fusion power gain an increasing power flux density towards the divertor needs to be handled. A solution for handling this power flux is crucial for a safe and economic operation. Using purely geometric arguments in an ITER-like divertor this power flux can be reduced by approximately a factor 100. Based on a conservative extrapolation of current technology for an integrated engineering approach to remove power deposited on plasma facing components a further reduction of the power flux density via volumetric processes in the plasma by up to a factor of 50 is required. Our current ability to interpret existing power exhaust scenarios using numerical transport codes is analyzed and an operational scenario as a potential solution for ITER like divertors under high density and highly radiating reactor-relevant conditions is presented. Alternative concepts for risk mitigation as well as strategies for moving forward are outlined.

  8. High performance magnet power supply optimization

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, L.T.

    1988-01-01

    The power supply system for the joint LBL-SLAC proposed accelerator PEP provides the opportunity to take a fresh look at the current techniques employed for controlling large amounts of dc power, and at the possibility of using a new one. A basic requirement of ±100 ppm regulation is placed on the guide field of the bending magnets and quadrupoles placed around the 2200 meter circumference of the accelerator. The optimization questions to be answered by this paper are threefold: Can a firing circuit be designed to reduce the combined effects of the harmonics and line voltage unbalance to less than 100 ppm in the magnet field? Given the ambiguity of the previous statement, is the addition of a transistor bank to a nominal SCR-controlled system the way to go, or should one opt for an SCR chopper system running at 1 kHz where multiple supplies are fed from one large dc bus? And what is the cost-performance evaluation of the three possible systems?

  9. Low reflectance high power RF load

    Energy Technology Data Exchange (ETDEWEB)

    Ives, R. Lawrence; Mizuhara, Yosuke M.

    2016-02-02

    A load for traveling microwave energy has an absorptive volume defined by a cylindrical body enclosed by a first end cap and a second end cap. The first end cap has an aperture for the passage of an input waveguide with a rotating part that is coupled to a reflective mirror. The inner surfaces of the absorptive volume consist of a resistive material or are coated with a coating which absorbs a fraction of incident RF energy, while the remainder of the RF energy reflects. The angle of the reflector and end caps is selected such that reflected RF energy dissipates an increasing percentage of the remaining RF energy at each reflection, and the reflected RF energy which returns to the rotating mirror is directed to the back surface of the rotating reflector, and is not coupled to the input waveguide. Additionally, the reflector may have a surface which generates a more uniform power distribution function axially and laterally, to increase the power handling capability of the RF load. The input waveguide may be corrugated for HE11 mode input energy.

  10. 3-D Printed High Power Microwave Magnetrons

    Science.gov (United States)

    Jordan, Nicholas; Greening, Geoffrey; Exelby, Steven; Gilgenbach, Ronald; Lau, Y. Y.; Hoff, Brad

    2015-11-01

    The size, weight, and power requirements of HPM systems are critical constraints on their viability, and can potentially be improved through the use of additive manufacturing techniques, which are rapidly increasing in capability and affordability. Recent experiments on the UM Recirculating Planar Magnetron (RPM) have explored the use of 3-D printed components in an HPM system. The system was driven by MELBA-C, a Marx-Abramyan system which delivers a -300 kV voltage pulse for 0.3-1.0 μs, with a 0.15-0.3 T axial magnetic field applied by a pair of electromagnets. Anode blocks were printed from WaterShed XC 11122 photopolymer using a stereolithography process, and prepared with either a spray-coated or electroplated finish. Both manufacturing processes were compared against baseline data for a machined aluminum anode, noting any differences in power output, oscillation frequency, and mode stability. Evolution and durability of the 3-D printed structures were noted both visually and by tracking vacuum inventories via a residual gas analyzer. Research supported by AFOSR (grant #FA9550-15-1-0097) and AFRL.

  11. Power Constrained High-Level Synthesis of Battery Powered Digital Systems

    DEFF Research Database (Denmark)

    Nielsen, Sune Fallgaard; Madsen, Jan

    2003-01-01

    We present a high-level synthesis algorithm solving the combined scheduling, allocation and binding problem, minimizing area under both latency and maximum power-per-clock-cycle constraints. Our approach eliminates large power spikes, resulting in an increased battery lifetime, a property of utmost importance for battery-powered embedded systems. It extends the partial-clique partitioning algorithm by introducing power awareness through a heuristic algorithm which bounds the design space to power-feasible schedules. We have applied our algorithm to a set of dataflow graphs...
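
A minimal sketch (our illustration, not the paper's algorithm) of the feasibility test that such a power-aware synthesis approach relies on: checking that no clock cycle of a candidate schedule exceeds the maximum-power-per-cycle constraint. The schedule representation and numbers are hypothetical.

```python
from collections import defaultdict

def is_power_feasible(schedule, max_power_per_cycle):
    """schedule: list of (cycle, power) pairs, one per scheduled operation.
    Returns True if the summed power in every cycle stays under the cap."""
    per_cycle = defaultdict(float)
    for cycle, power in schedule:
        per_cycle[cycle] += power
    return all(p <= max_power_per_cycle for p in per_cycle.values())

# Two 6 mW multiplies in cycle 0 spike above a 10 mW cap;
# spreading them across cycles removes the spike.
spiky  = [(0, 6.0), (0, 6.0), (1, 2.0)]
spread = [(0, 6.0), (1, 6.0), (1, 2.0)]
assert not is_power_feasible(spiky, 10.0)
assert is_power_feasible(spread, 10.0)
```

Bounding the design space to schedules passing this test is what trades a little area or latency for the elimination of power spikes.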

  12. Improved cutting performance in high power laser cutting

    DEFF Research Database (Denmark)

    Olsen, Flemming Ove

    2003-01-01

    Recent results in high power laser cutting, especially with focus on cutting of mild grade steel types for shipbuilding, are described.

  13. In-volume heating using high-power laser diodes

    NARCIS (Netherlands)

    Denisenkov, V.S.; Kiyko, V.V.; Vdovin, G.V.

    2015-01-01

    High-power lasers are useful instruments suitable for applications in various fields; the most common industrial applications include cutting and welding. We propose a new application of high-power laser diodes as an in-bulk heating source for the food industry. Current heating processes use surface heating

  14. Linear and nonlinear filters under high power microwave conditions

    Directory of Open Access Journals (Sweden)

    F. Brauer

    2009-05-01

    The development of protection circuits against a variety of electromagnetic disturbances is important to assure the immunity of an electronic system. In this paper the behavior of linear and nonlinear filters is measured and simulated with high power microwave (HPM) signals to achieve comprehensive protection against different high power electromagnetic (HPEM) threats.

  15. Improved cutting performance in high power laser cutting

    DEFF Research Database (Denmark)

    Olsen, Flemming Ove

    2003-01-01

    Recent results in high power laser cutting, especially with focus on cutting of mild grade steel types for shipbuilding, are described.

  16. In-volume heating using high-power laser diodes

    NARCIS (Netherlands)

    Denisenkov, V.S.; Kiyko, V.V.; Vdovin, G.V.

    2015-01-01

    High-power lasers are useful instruments suitable for applications in various fields; the most common industrial applications include cutting and welding. We propose a new application of high-power laser diodes as an in-bulk heating source for the food industry. Current heating processes use surface heating

  17. Atmospheric propagation and combining of high power lasers: comment.

    Science.gov (United States)

    Goodno, Gregory D; Rothenberg, Joshua E

    2016-10-10

    Nelson et al. [Appl. Opt.55, 1757 (2016)APOPAI0003-693510.1364/AO.55.001757] recently concluded that coherent beam combining and remote phase locking of high-power lasers are fundamentally limited by the laser source linewidth. These conclusions are incorrect and not relevant to practical high-power coherently combined laser architectures.

  18. Proceedings CSR 2010 Workshop on High Productivity Computations

    CERN Document Server

    Ablayev, Farid; Vasiliev, Alexander; 10.4204/EPTCS.52

    2011-01-01

    This volume contains the proceedings of the Workshop on High Productivity Computations (HPC 2010) which took place on June 21-22 in Kazan, Russia. This workshop was held as a satellite workshop of the 5th International Computer Science Symposium in Russia (CSR 2010). HPC 2010 was intended to organize the discussions about high productivity computing means and models, including but not limited to high performance and quantum information processing.

  19. Phase noise measurement of high-power fiber amplifiers

    Institute of Scientific and Technical Information of China (English)

    Hu Xiao; Xiaolin Wang; Yanxing Ma; Bing He; Pu Zhou; Jun Zhou; Xiaojun Xu

    2011-01-01

    We measure the phase fluctuation in a high-power fiber amplifier using a multi-dithering technique. Its fluctuation property is qualitatively analyzed by the power spectral density and integrated spectral density. Low-frequency fluctuations caused by the environment are dominant in the phase fluctuations in an amplifier, whereas the high-frequency components related to laser power affect the control bandwidth. The bandwidth requirement of the active phase-locking is calculated to be 300 Hz, 670 Hz, 1.6 kHz, and 3.9 kHz at output powers of 25, 55, 125, and 180 W, respectively. The approximately linear relationship between the control bandwidth and laser power needs to be further investigated.
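
As a quick illustration of the "approximately linear" claim, a least-squares line can be fit through the four (power, bandwidth) pairs quoted in the abstract. The fit is our own check, not the authors' analysis.

```python
# (power W, required phase-locking bandwidth Hz) pairs from the abstract
powers = [25.0, 55.0, 125.0, 180.0]
bandwidths = [300.0, 670.0, 1600.0, 3900.0]

n = len(powers)
mean_p = sum(powers) / n
mean_b = sum(bandwidths) / n
slope = sum((p - mean_p) * (b - mean_b) for p, b in zip(powers, bandwidths)) \
        / sum((p - mean_p) ** 2 for p in powers)
intercept = mean_b - slope * mean_p
# slope ≈ 22 Hz/W: under this simple linear model, each extra watt of
# output power demands roughly 22 Hz more control bandwidth.
```

The residuals are not negligible (the 180 W point sits well above the line), which is consistent with the abstract's caveat that the relationship needs further investigation.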

  20. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    Directory of Open Access Journals (Sweden)

    V. N. Adrov

    2012-07-01

    Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent bits as input and produce independent bits as output; this independence comes from the nature of such algorithms, since images, stereopairs or small image blocks can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human interference. Photogrammetric workstations can perform tie-point measurements, DTM calculations, orthophoto construction, mosaicing and many other service operations in parallel using distributed calculations, reducing several days of computation to several hours. Modern trends in computer technology show an increase of CPU cores in workstations and speed increases in local networks, and as a result a dropping price for supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in DPW is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since the processing of huge amounts of large raster images is needed.
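
The "days to hours" gain from distributing such a pipeline can be sketched with Amdahl's law; the parallel fraction and node count below are hypothetical, chosen only to illustrate the shape of the trade-off.

```python
# Illustrative sketch (not from the paper): Amdahl's law estimate of the
# speedup a photogrammetric job can gain from distributed processing.
def amdahl_speedup(parallel_fraction: float, nodes: int) -> float:
    """Ideal speedup when `parallel_fraction` of the work scales across nodes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / nodes)

# Example: if 95% of an orthophoto pipeline parallelizes, a 100-node
# cluster yields roughly a 17x speedup, turning days into hours.
speedup = amdahl_speedup(0.95, 100)
```

The residual serial fraction (and, as the abstract notes, LAN and storage throughput) is what caps the benefit of adding more nodes.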

  1. 1.55 Micron High Peak Power Fiber Amplifier Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this proposal, we propose to demonstrate and build a 1.55 micron single frequency high energy and high peak power fiber amplifier by developing an innovative...

  2. 1.55 Micron High Peak Power Fiber Amplifier Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this proposal, we propose to demonstrate and build a 1.55 micron single frequency high energy and high peak power fiber amplifier by developing an innovative...

  3. On the Ongoing Evolution of Very High Frequency Power Supplies

    DEFF Research Database (Denmark)

    Knott, Arnold; Andersen, Toke Meyer; Kamby, Peter

    2013-01-01

    The ongoing demand for smaller and lighter power supplies is driving the motivation to increase the switching frequencies of power converters. Drastic increases, however, come along with new challenges, namely the increase of switching losses in all components. The application of power circuits used in radio frequency transmission equipment helps to overcome those; however, those circuits were not designed to meet the same requirements as power converters. This paper summarizes the contributions in recent years in the application of very high frequency (VHF) technologies in power electronics, describes...

  4. On the Computational Power of Spiking Neural P Systems with Self-Organization.

    Science.gov (United States)

    Wang, Xun; Song, Tao; Gong, Faming; Zheng, Pan

    2016-01-01

    Neural-like computing models are versatile computing mechanisms in the field of artificial intelligence. Spiking neural P systems (SN P systems for short) are one of the recently developed spiking neural network models inspired by the way neurons communicate. The communications among neurons are essentially achieved by spikes, i.e., short electrical pulses. In terms of motivation, SN P systems fall into the third generation of neural network models. In this study, a novel variant of SN P systems, namely SN P systems with self-organization, is introduced, and the computational power of the system is investigated and evaluated. It is proved that SN P systems with self-organization are capable of computing and accepting the family of sets of Turing computable natural numbers. Moreover, with 87 neurons the system can compute any Turing computable recursive function, thus achieving Turing universality. These results demonstrate promising initiatives to solve an open problem raised by Gh. Păun.

  5. On the Computational Power of Spiking Neural P Systems with Self-Organization

    Science.gov (United States)

    Wang, Xun; Song, Tao; Gong, Faming; Zheng, Pan

    2016-06-01

    Neural-like computing models are versatile computing mechanisms in the field of artificial intelligence. Spiking neural P systems (SN P systems for short) are one of the recently developed spiking neural network models inspired by the way neurons communicate. The communications among neurons are essentially achieved by spikes, i.e., short electrical pulses. In terms of motivation, SN P systems fall into the third generation of neural network models. In this study, a novel variant of SN P systems, namely SN P systems with self-organization, is introduced, and the computational power of the system is investigated and evaluated. It is proved that SN P systems with self-organization are capable of computing and accepting the family of sets of Turing computable natural numbers. Moreover, with 87 neurons the system can compute any Turing computable recursive function, thus achieving Turing universality. These results demonstrate promising initiatives to solve an open problem raised by Gh. Păun.

  6. SOLAR POWERING OF HIGH EFFICIENCY ABSORPTION CHILLER

    Energy Technology Data Exchange (ETDEWEB)

    Randy C. Gee

    2004-11-15

    This is the Final Report for two solar cooling projects under this Cooperative Agreement. The first solar cooling project is a roof-integrated solar cooling and heating system, called the Power Roof{trademark}, which began operation in Raleigh, North Carolina in late July 2002. This system provides 176 kW (50 ton) of solar-driven space cooling using a unique nonimaging concentrating solar collector. The measured performance of the system during its first months of operation is reported here, along with a description of the design and operation of this system. The second solar cooling system, with a 20-ton capacity, is being retrofitted to a commercial office building in Charleston, South Carolina but has not yet been completed.
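
The two capacity figures quoted above are the same quantity in different units; a quick arithmetic check (ours) confirms they agree.

```python
# One ton of refrigeration is about 3.517 kW of cooling capacity,
# so the 50-ton Power Roof rating matches the quoted 176 kW figure.
KW_PER_TON = 3.517
capacity_kw = 50 * KW_PER_TON  # ≈ 175.9 kW, consistent with "176 kW (50 ton)"
```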

  7. High power UV and VUV pulsed excilamps

    Science.gov (United States)

    Tarasenko, V.; Erofeev, M.; Lomaev, M.; Rybka, D.

    2008-07-01

    Emission characteristics of a nanosecond discharge in inert gases and their halogenides, without preionization of the gap from an auxiliary source, have been investigated. A volume discharge initiated by an avalanche electron beam (VDIAEB) was realized at pressures up to 12 atm. In xenon at a pressure of 1.2 atm, the energy of spontaneous radiation in the full solid angle was ~45 mJ/cm^3, and the FWHM of a radiation pulse was ~110 ns. A rise in spontaneous radiation power in xenon was observed at pressures up to 12 atm. The pulsed radiant exitance of inert-gas halogenides excited by VDIAEB was ~4.5 kW/cm^2 at an efficiency of up to 5.5%.

  8. High-Power-Density Organic Radical Batteries.

    Science.gov (United States)

    Friebe, Christian; Schubert, Ulrich S

    2017-02-01

    Batteries that are based on organic radical compounds possess superior charging times and discharging power capability in comparison to established electrochemical energy-storage technologies. They do not rely on metals and, hence, feature a favorable environmental impact. They furthermore offer the possibility of roll-to-roll processing through the use of different printing techniques, which enables the cost-efficient fabrication of mechanically flexible devices. In this review, organic radical batteries are presented with the focus on the hitherto developed materials and the key properties thereof, e.g., voltage, capacity, and cycle life. Furthermore, basic information, such as significant characteristics, housing approaches, and applied additives, are presented and discussed in the context of organic radical batteries.

  9. Laser Cooled High-Power Fiber Amplifier

    CERN Document Server

    Nemova, Galina

    2009-01-01

    A theoretical model for a laser-cooled continuous-wave fiber amplifier is presented. The amplification process takes place in the Tm3+-doped core of the fluoride ZBLAN (ZrF4-BaF2-LaF3-AlF3-NaF) glass fiber, while the cooling process takes place in the Yb3+:ZBLAN fiber cladding. It is shown that for each value of the pump power and the amplified signal there is a distribution of the Tm3+ concentration along the length of the fiber amplifier which provides athermal operation. The influence of a small deviation in the value of the amplified signal on the temperature of the fiber, with a fixed distribution of the Tm3+ ions in the fiber cladding, is investigated.

  10. Designing high efficient solar powered lighting systems

    DEFF Research Database (Denmark)

    Poulsen, Peter Behrensdorff; Thorsteinsson, Sune; Lindén, Johannes;

    2016-01-01

    Some major challenges in the development of L2L products are the lack of efficient converter electronics, modelling tools for dimensioning and, furthermore, characterization facilities to support the successful development of the products. We report the development of two Three-Port-Converters, for 1-10 Wp and 10-50 Wp respectively, with a peak efficiency of 97% at 1.8 W of PV power for the 10 Wp version. Furthermore, a modelling tool for L2L products has been developed, and a laboratory for feeding component data not available in the datasheets into the model is described.

  11. Eighth CW and High Average Power RF Workshop

    CERN Document Server

    2014-01-01

    We are pleased to announce the next Continuous Wave and High Average RF Power Workshop, CWRF2014, to take place at Hotel NH Trieste, Trieste, Italy from 13 to 16 May, 2014. This is the eighth in the CWRF workshop series and will be hosted by Elettra - Sincrotrone Trieste S.C.p.A. (www.elettra.eu). CWRF2014 will provide an opportunity for designers and users of CW and high average power RF systems to meet and interact in a convivial environment to share experiences and ideas on applications which utilize high-power klystrons, gridded tubes, combined solid-state architectures, high-voltage power supplies, high-voltage modulators, high-power combiners, circulators, cavities, power couplers and tuners. New ideas for high-power RF system upgrades and novel ways of RF power generation and distribution will also be discussed. CWRF2014 sessions will start on Tuesday morning and will conclude on Friday lunchtime. A visit to Elettra and FERMI will be organized during the workshop. ORGANIZING COMMITTEE (OC): Al...

  12. High-frequency high-voltage high-power DC-to-DC converters

    Science.gov (United States)

    Wilson, T. G.; Owen, H. A.; Wilson, P. M.

    1982-01-01

    A simple analysis of the current and voltage waveshapes associated with the power transistor and the power diode in an example current-or-voltage step-up (buck-boost) converter is presented. The purpose of the analysis is to provide an overview of the problems and design trade-offs which must be addressed as high-power high-voltage converters are operated at switching frequencies in the range of 100 kHz and beyond. Although the analysis focuses on the current-or-voltage step-up converter as the vehicle for discussion, the basic principles presented are applicable to other converter topologies as well.
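
The design trade-offs above revolve around the ideal steady-state transfer ratio of the current-or-voltage step-up (buck-boost) topology, |Vout| = Vin · D / (1 − D) for duty cycle D. The sketch below (ours, losses ignored) shows how one converter covers both step-down and step-up operation.

```python
def buck_boost_vout(vin: float, duty: float) -> float:
    """Ideal output-voltage magnitude of a buck-boost converter
    for duty cycle 0 < duty < 1 (switching losses ignored)."""
    return vin * duty / (1.0 - duty)

# D < 0.5 steps the voltage down, D > 0.5 steps it up:
low  = buck_boost_vout(100.0, 0.25)  # ≈ 33.3 V
high = buck_boost_vout(100.0, 0.75)  # 300.0 V
```

At high switching frequencies the real waveshapes deviate from this ideal, which is exactly the transistor/diode stress analysis the paper addresses.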

  13. High Input Voltage, Silicon Carbide Power Processing Unit Performance Demonstration

    Science.gov (United States)

    Bozak, Karin E.; Pinero, Luis R.; Scheidegger, Robert J.; Aulisio, Michael V.; Gonzalez, Marcelo C.; Birchenough, Arthur G.

    2015-01-01

    A silicon carbide brassboard power processing unit has been developed by the NASA Glenn Research Center in Cleveland, Ohio. The power processing unit operates from two sources: a nominal 300 Volt high voltage input bus and a nominal 28 Volt low voltage input bus. The design of the power processing unit includes four low voltage, low power auxiliary supplies, and two parallel 7.5 kilowatt (kW) discharge power supplies that are capable of providing up to 15 kilowatts of total power at 300 to 500 Volts (V) to the thruster. Additionally, the unit contains a housekeeping supply, high voltage input filter, low voltage input filter, and master control board, such that the complete brassboard unit is capable of operating a 12.5 kilowatt Hall effect thruster. The performance of the unit was characterized under both ambient and thermal vacuum test conditions, and the results demonstrate exceptional performance with full power efficiencies exceeding 97%. The unit was also tested with a 12.5 kW Hall effect thruster to verify compatibility and output filter specifications. With space-qualified silicon carbide or similar high voltage, high efficiency power devices, this would provide a design solution to address the need for high power electric propulsion systems.
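
A quick arithmetic check (ours) of what the reported 97% full-power efficiency implies at the 15 kW output rating:

```python
# At 97% efficiency, delivering 15 kW to the thruster implies the input
# power drawn from the buses and the waste heat the unit must reject.
output_kw = 15.0
efficiency = 0.97
input_kw = output_kw / efficiency      # ≈ 15.46 kW drawn from the input buses
dissipated_kw = input_kw - output_kw   # ≈ 0.46 kW rejected as heat
```

Keeping the dissipated power under half a kilowatt at 15 kW throughput is what makes the wide-bandgap (silicon carbide) devices attractive for thermal design.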

  14. SAMPSON Parallel Computation for Sensitivity Analysis of TEPCO's Fukushima Daiichi Nuclear Power Plant Accident

    Science.gov (United States)

    Pellegrini, M.; Bautista Gomez, L.; Maruyama, N.; Naitoh, M.; Matsuoka, S.; Cappello, F.

    2014-06-01

    On March 11th, 2011, a high-magnitude earthquake and consequent tsunami struck the east coast of Japan, resulting in a nuclear accident unprecedented in duration and extent. After scram started at all power stations affected by the earthquake, diesel generators began operation as designed until tsunami waves reached the power plants located on the east coast. This had a catastrophic impact on the availability of plant safety systems at TEPCO's Fukushima Daiichi, leading to station black-out conditions in units 1 to 3. In this article the accident scenario is studied with the SAMPSON code. SAMPSON is a severe accident computer code composed of hierarchical modules to account for the diverse physics involved in the various phases of the accident evolution. A preliminary parallelization analysis of the code was performed using state-of-the-art tools, and we demonstrate how this work can benefit nuclear safety analysis. This paper shows that inter-module parallelization can reduce the time to solution by more than 20%. Furthermore, the parallel code was applied to a sensitivity study of the alternative water injection into TEPCO's Fukushima Daiichi unit 3. Results show that the core melting progression is extremely sensitive to the amount and timing of water injection, resulting in a high probability of partial core melting for unit 3.

  15. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  16. Optimizing performance per watt on GPUs in High Performance Computing: temperature, frequency and voltage effects

    CERN Document Server

    Price, D C; Barsdell, B R; Babich, R; Greenhill, L J

    2014-01-01

    The magnitude of the real-time digital signal processing challenge attached to large radio astronomical antenna arrays motivates use of high performance computing (HPC) systems. The need for high power efficiency (performance per watt) at remote observatory sites parallels that in HPC broadly, where efficiency is an emerging critical metric. We investigate how the performance per watt of graphics processing units (GPUs) is affected by temperature, core clock frequency and voltage. Our results highlight how the underlying physical processes that govern transistor operation affect power efficiency. In particular, we show experimentally that GPU power consumption grows non-linearly with both temperature and supply voltage, as predicted by physical transistor models. We show lowering GPU supply voltage and increasing clock frequency while maintaining a low die temperature increases the power efficiency of an NVIDIA K20 GPU by up to 37-48% over default settings when running xGPU, a compute-bound code used in radio...
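
The measured behavior is consistent with the standard CMOS dynamic-power model, P ≈ C·V²·f. The sketch below (ours, with hypothetical constants; leakage and temperature terms omitted) shows why undervolting can raise performance per watt even while the clock goes up.

```python
def dynamic_power(c_eff: float, voltage: float, freq_ghz: float) -> float:
    """Switching power in watts: effective capacitance * V^2 * f."""
    return c_eff * voltage**2 * freq_ghz * 1e9

C_EFF = 1.5e-9  # hypothetical effective switched capacitance, farads

default = dynamic_power(C_EFF, 1.00, 0.70)  # stock voltage and clock
tuned   = dynamic_power(C_EFF, 0.90, 0.80)  # undervolted, overclocked

# Treating performance as proportional to clock frequency,
# performance per watt = f / P; the quadratic voltage dependence
# means the tuned point wins despite the higher clock.
ppw_gain = (0.80 / tuned) / (0.70 / default) - 1.0  # ≈ +23%
```

In this idealized model performance per watt reduces to 1/(C·V²), so the gain comes entirely from the voltage reduction; the paper's measured 37-48% improvement additionally reflects temperature-dependent leakage, which the model above deliberately leaves out.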

  17. Computationally Efficient Power Allocation Algorithm in Multicarrier-Based Cognitive Radio Networks: OFDM and FBMC Systems

    Science.gov (United States)

    Shaat, Musbah; Bader, Faouzi

    2010-12-01

    Cognitive Radio (CR) systems have been proposed to increase the spectrum utilization by opportunistically accessing the unused spectrum. Multicarrier communication systems are promising candidates for CR systems. Due to its high spectral efficiency, filter bank multicarrier (FBMC) can be considered as an alternative to conventional orthogonal frequency division multiplexing (OFDM) for transmission over CR networks. This paper addresses the problem of resource allocation in multicarrier-based CR networks. The objective is to maximize the downlink capacity of the network under both total power and interference introduced to the primary users (PUs) constraints. The optimal solution has high computational complexity which makes it unsuitable for practical applications, and hence a low complexity suboptimal solution is proposed. The proposed algorithm utilizes the spectrum holes in PU bands as well as active PU bands. The performance of the proposed algorithm is investigated for OFDM and FBMC based CR systems. Simulation results illustrate that the proposed resource allocation algorithm with low computational complexity achieves near optimal performance and proves the efficiency of using FBMC in the CR context.
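
For context, the textbook baseline for capacity maximization under a total-power constraint is the classical water-filling allocation. The sketch below (ours; the paper's low-complexity algorithm additionally handles the interference constraints toward primary users, omitted here) bisects on the water level.

```python
def water_filling(gains, total_power, steps=100):
    """Allocate `total_power` over subcarriers with channel gains `gains`
    (gain = |h|^2 / noise power) by bisecting on the water level mu.
    Each subcarrier k gets max(mu - 1/gains[k], 0)."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    for _ in range(steps):
        mu = (lo + hi) / 2.0
        used = sum(max(mu - 1.0 / g, 0.0) for g in gains)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    return [max(mu - 1.0 / g, 0.0) for g in gains]

# Stronger subcarriers receive more power; very weak ones may get none.
alloc = water_filling([4.0, 1.0, 0.25], total_power=2.0)
```

With gains 4.0, 1.0, 0.25 and 2 W of budget, the water level settles at 1.625, so the weakest subcarrier falls below the water line and is switched off entirely.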

  18. Computationally Efficient Power Allocation Algorithm in Multicarrier-Based Cognitive Radio Networks: OFDM and FBMC Systems

    Directory of Open Access Journals (Sweden)

    Shaat Musbah

    2010-01-01

    Cognitive Radio (CR) systems have been proposed to increase the spectrum utilization by opportunistically accessing the unused spectrum. Multicarrier communication systems are promising candidates for CR systems. Due to its high spectral efficiency, filter bank multicarrier (FBMC) can be considered as an alternative to conventional orthogonal frequency division multiplexing (OFDM) for transmission over CR networks. This paper addresses the problem of resource allocation in multicarrier-based CR networks. The objective is to maximize the downlink capacity of the network under both total power and interference introduced to the primary users (PUs) constraints. The optimal solution has high computational complexity which makes it unsuitable for practical applications, and hence a low complexity suboptimal solution is proposed. The proposed algorithm utilizes the spectrum holes in PU bands as well as active PU bands. The performance of the proposed algorithm is investigated for OFDM and FBMC based CR systems. Simulation results illustrate that the proposed resource allocation algorithm with low computational complexity achieves near optimal performance and proves the efficiency of using FBMC in the CR context.

  19. LHC Computing Grid Project Launches into Action with International Support. A thousand times more computing power by 2006

    CERN Multimedia

    2001-01-01

    The first phase of the LHC Computing Grid project was approved at an extraordinary meeting of the Council on 20 September 2001. CERN is preparing for the unprecedented avalanche of data that will be produced by the Large Hadron Collider experiments. A thousand times more computer power will be needed by 2006! CERN's need for a dramatic advance in computing capacity is urgent. As from 2006, the four giant detectors observing trillions of elementary particle collisions at the LHC will accumulate over ten million Gigabytes of data, equivalent to the contents of about 20 million CD-ROMs, each year of its operation. A thousand times more computing power will be needed than is available to CERN today. The strategy the collaborations have adopted to analyse and store this unprecedented amount of data is the coordinated deployment of Grid technologies at hundreds of institutes which will be able to search out and analyse information from an interconnected worldwide grid of tens of thousands of computers and storage...

  20. Intro - High Performance Computing for 2015 HPC Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Klitsner, Tom [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    The recent Executive Order creating the National Strategic Computing Initiative (NSCI) recognizes the value of high performance computing for economic competitiveness and scientific discovery and commits to accelerate delivery of exascale computing. The HPC programs at Sandia (the NNSA ASC program and Sandia's Institutional HPC Program) are focused on ensuring that Sandia has the resources necessary to deliver computation in the national interest.