WorldWideScience

Sample records for high computational efficiency

  1. Dimensioning storage and computing clusters for efficient High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Scientific experiments are producing huge amounts of data, and the size of their datasets and the total volume of data continue to increase. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of Scientific Data Centres has shifted from coping efficiently with PetaByte-scale storage to delivering high-quality data processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking, to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful s...

  2. Dimensioning storage and computing clusters for efficient high throughput computing

    International Nuclear Information System (INIS)

    Accion, E; Bria, A; Bernabeu, G; Caubet, M; Delfino, M; Espinal, X; Merino, G; Lopez, F; Martinez, F; Planas, E

    2012-01-01

    Scientific experiments are producing huge amounts of data, and the size of their datasets and the total volume of data continue to increase. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of scientific data centers has shifted from efficiently coping with PetaByte-scale storage to delivering high-quality data processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking, to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful data storage and processing service in an intensive HTC environment.

  3. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    Science.gov (United States)

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of data acquisition, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks presents a myriad of challenges when running in a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require considerable forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well-tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project, we highlight a stack of tools our team utilizes and has developed to make large-scale simulation and analysis work commonplace, providing operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable, task-parallel way (MPI). We highlight the use and benefit of these tools through several climate science analysis use cases to which they have been applied.

  4. High-efficiency photorealistic computer-generated holograms based on the backward ray-tracing technique

    Science.gov (United States)

    Wang, Yuan; Chen, Zhidong; Sang, Xinzhu; Li, Hui; Zhao, Linmin

    2018-03-01

    Holographic displays can provide the complete optical wave field of a three-dimensional (3D) scene, including depth perception. However, producing traditional computer-generated holograms (CGHs) takes a long computation time, even without complex, photorealistic rendering. The backward ray-tracing technique can render photorealistic, high-quality images, and its high degree of parallelism noticeably reduces the computation time. Here, a high-efficiency photorealistic computer-generated hologram method based on the backward ray-tracing technique is presented. Rays are launched and traced in parallel under different illuminations and circumstances. Experimental results demonstrate the effectiveness of the proposed method. Compared with the traditional point-cloud CGH, the computation time is decreased to 24 s to reconstruct a 3D object of 100 × 100 rays with continuous depth change.

  5. Low-cost, high-performance and efficiency computational photometer design

    Science.gov (United States)

    Siewert, Sam B.; Shihadeh, Jeries; Myers, Randall; Khandhar, Jay; Ivanov, Vitaly

    2014-05-01

    Researchers at the University of Alaska Anchorage and the University of Colorado Boulder have built a low-cost, high-performance and high-efficiency drop-in-place Computational Photometer (CP) to test in field applications ranging from port security and safety monitoring to environmental compliance monitoring and surveying. The CP integrates off-the-shelf visible-spectrum cameras with near-to-long-wavelength infrared detectors and high-resolution digital snapshots in a single device. The proof of concept combines three or more detectors into a single multichannel imaging system that can time-correlate read-out, capture, and image-process all of the channels concurrently with high performance and energy efficiency. The dual-channel continuous read-out is combined with a third high-definition digital snapshot capability and has been designed using an FPGA (Field Programmable Gate Array) to capture, decimate, down-convert, re-encode, and transform images from two standard-definition CCD (Charge Coupled Device) cameras at 30 Hz. The continuous stereo vision can be time-correlated to megapixel high-definition snapshots. This proof of concept has been fabricated as a four-layer PCB (Printed Circuit Board) suitable for use in education and research for low-cost, high-efficiency field monitoring applications that need multispectral and three-dimensional imaging capabilities. Initial testing is in progress and includes field testing in ports, potential test flights in unmanned aerial systems, and future planned missions to image harsh environments in the Arctic, including volcanic plumes, ice formation, and Arctic marine life.

  6. Efficient computation of hashes

    International Nuclear Information System (INIS)

    Lopes, Raul H C; Franqueira, Virginia N L; Hobson, Peter R

    2014-01-01

    The sequential computation of hashes at the core of many distributed storage systems, found for example in grid services, can hinder efficiency in service quality and even pose security challenges that can only be addressed by the use of parallel hash tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgård engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype for a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
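
    The hash-tree idea can be illustrated with a short sketch. Below is a minimal, hypothetical example (not the paper's Keccak prototype): it hashes fixed-size chunks in parallel with Python's standard hashlib/SHA-256 and folds the digests pairwise into a Merkle-style root; the 1 MiB chunk size is an arbitrary choice.

        import hashlib
        from concurrent.futures import ThreadPoolExecutor

        CHUNK = 1 << 20  # 1 MiB leaves; illustrative, not from the paper

        def merkle_root(data: bytes) -> bytes:
            chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)] or [b""]
            # leaf hashes are independent, so they can be computed in parallel;
            # hashlib releases the GIL for large buffers, so threads do help
            with ThreadPoolExecutor() as pool:
                level = list(pool.map(lambda c: hashlib.sha256(c).digest(), chunks))
            while len(level) > 1:  # pairwise reduction up to the root
                if len(level) % 2:
                    level.append(level[-1])  # duplicate the last node on odd levels
                level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                         for i in range(0, len(level), 2)]
            return level[0]

        print(merkle_root(b"x" * (4 * CHUNK)).hex())

    Unlike a sequential Merkle-Damgård chain, every level of the tree is embarrassingly parallel, which is the property the parallel hash tree modes exploit.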

  7. Computational design of high efficiency release targets for use at ISOL facilities

    CERN Document Server

    Liu, Y

    1999-01-01

    This report describes efforts made at the Oak Ridge National Laboratory to design high-efficiency-release targets that simultaneously incorporate the short diffusion lengths, high permeabilities, controllable temperatures, and heat-removal properties required for the generation of useful radioactive ion beam (RIB) intensities for nuclear physics and astrophysics research using the isotope separation on-line (ISOL) technique. Short diffusion lengths are achieved either by using thin fibrous target materials or by coating thin layers of selected target material onto low-density carbon fibers such as reticulated-vitreous-carbon fiber (RVCF) or carbon-bonded-carbon fiber (CBCF) to form highly permeable composite target matrices. Computational studies that simulate the generation and removal of primary beam deposited heat from target materials have been conducted to optimize the design of target/heat-sink systems for generating RIBs. The results derived from diffusion release-rate simulation studies for selected t...

  8. Computational model for a high temperature electrolyzer coupled to a HTTR for efficient nuclear hydrogen production

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, Daniel; Rojas, Leorlen; Rosales, Jesus; Castro, Landy; Gamez, Abel; Brayner, Carlos, E-mail: danielgonro@gmail.com [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil); Garcia, Lazaro; Garcia, Carlos; Torre, Raciel de la, E-mail: lgarcia@instec.cu [Instituto Superior de Tecnologias y Ciencias Aplicadas (InSTEC), La Habana (Cuba); Sanchez, Danny [Universidade Estadual de Santa Cruz (UESC), Ilheus, BA (Brazil)

    2015-07-01

    The high temperature electrolysis process coupled to a very high temperature reactor (VHTR) is one of the most promising methods for hydrogen production using a nuclear reactor as the primary heat source. However, the scientific literature contains no reference to a test facility that would allow evaluation of the efficiency of the process and of the other physical parameters that have to be taken into consideration for its accurate application as a massive production method in the hydrogen economy. Given this lack of experimental facilities, mathematical models are among the most used tools to study this process and its flowsheets, in which the electrolyzer is the most important component because of its complexity and importance in the process. A computational fluid dynamics (CFD) model for the evaluation and optimization of the electrolyzer of a high temperature electrolysis hydrogen production process flowsheet was developed using ANSYS FLUENT®. The electrolyzer's operational and design parameters will be optimized in order to obtain the maximum hydrogen production and the highest efficiency of the module. This optimized electrolyzer model will be incorporated into a chemical process simulation (CPS) code to study the overall high temperature flowsheet coupled to a high temperature accelerator driven system (ADS), which offers advantages in the transmutation of spent fuel. (author)

  9. Computational model for a high temperature electrolyzer coupled to a HTTR for efficient nuclear hydrogen production

    International Nuclear Information System (INIS)

    Gonzalez, Daniel; Rojas, Leorlen; Rosales, Jesus; Castro, Landy; Gamez, Abel; Brayner, Carlos; Garcia, Lazaro; Garcia, Carlos; Torre, Raciel de la; Sanchez, Danny

    2015-01-01

    The high temperature electrolysis process coupled to a very high temperature reactor (VHTR) is one of the most promising methods for hydrogen production using a nuclear reactor as the primary heat source. However, the scientific literature contains no reference to a test facility that would allow evaluation of the efficiency of the process and of the other physical parameters that have to be taken into consideration for its accurate application as a massive production method in the hydrogen economy. Given this lack of experimental facilities, mathematical models are among the most used tools to study this process and its flowsheets, in which the electrolyzer is the most important component because of its complexity and importance in the process. A computational fluid dynamics (CFD) model for the evaluation and optimization of the electrolyzer of a high temperature electrolysis hydrogen production process flowsheet was developed using ANSYS FLUENT®. The electrolyzer's operational and design parameters will be optimized in order to obtain the maximum hydrogen production and the highest efficiency of the module. This optimized electrolyzer model will be incorporated into a chemical process simulation (CPS) code to study the overall high temperature flowsheet coupled to a high temperature accelerator driven system (ADS), which offers advantages in the transmutation of spent fuel. (author)

  10. Exploring Infiniband Hardware Virtualization in OpenNebula towards Efficient High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Pais Pitta de Lacerda Ruivo, Tiago [IIT, Chicago; Bernabeu Altayo, Gerard [Fermilab; Garzoglio, Gabriele [Fermilab; Timm, Steven [Fermilab; Kim, Hyun-Woo [Fermilab; Noh, Seo-Young [KISTI, Daejeon; Raicu, Ioan [IIT, Chicago

    2014-11-11

    It has been widely accepted that software virtualization has a big negative impact on high-performance computing (HPC) application performance. This work explores the potential use of Infiniband hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an Infiniband network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and thereby minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SRIOV). Our solution spanned modifications to the Linux hypervisor as well as to the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR Infiniband network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).

  11. Computational screening of new inorganic materials for highly efficient solar energy conversion

    DEFF Research Database (Denmark)

    Kuhar, Korina

    2017-01-01

    Despite the vast amounts of energy at our disposal, we are not able to harvest this solar energy efficiently. Currently, there are a few ways of converting solar power into usable energy, such as photovoltaics (PV) or photoelectrochemical generation of fuels (PC). PV processes in solar cells convert solar energy into electricity, and PC uses harvested energy to conduct chemical reactions, such as splitting water into oxygen and, more importantly, hydrogen, also known as the fuel of the future. Further progress in both PV and PC fields is mostly limited by the flaws in the materials that we have access to... In this work a high-throughput computational search for suitable absorbers for PV and PC applications is presented. A set of descriptors has been developed, such that each descriptor targets an important property or issue of a good solar energy conversion material. The screening study...

  12. Improving the Eco-Efficiency of High Performance Computing Clusters Using EECluster

    Directory of Open Access Journals (Sweden)

    Alberto Cocaña-Fernández

    2016-03-01

    As data and supercomputing centres increase their performance to improve service quality and target more ambitious challenges every day, their carbon footprint also continues to grow, and has already reached the magnitude of the aviation industry. High power consumption is also becoming a remarkable bottleneck for the expansion of these infrastructures in economic terms, due to the unavailability of sufficient energy sources. A substantial part of the problem is caused by the current energy consumption of High Performance Computing (HPC) clusters. To alleviate this situation, we present in this work EECluster, a tool that integrates with multiple open-source Resource Management Systems to significantly reduce the carbon footprint of clusters by improving their energy efficiency. EECluster implements a dynamic power management mechanism based on Computational Intelligence techniques, learning a set of rules through multi-criteria evolutionary algorithms. This approach enables cluster operators to find the optimal balance between reduced cluster energy consumption, service quality, and the number of reconfigurations. Experimental studies using both synthetic and actual workloads from a real-world cluster support the adoption of this tool to reduce the carbon footprint of HPC clusters.

  13. Highly efficient separation materials created by computational approach. For the separation of lanthanides and actinides

    International Nuclear Information System (INIS)

    Goto, Masahiro; Uezu, Kazuya; Aoshima, Atsushi; Koma, Yoshikazu

    2002-05-01

    In this study, efficient separation materials have been created by a computational approach. Based on computational calculations, novel organophosphorus extractants, which have two functional moieties in the molecular structure, were developed for the recycle system of transuranium elements using liquid-liquid extraction. Furthermore, molecularly imprinted resins were prepared by the surface-imprint polymerization technique. Through this research project, we obtained two principal results: 1) the design of novel extractants by a computational approach, and 2) the preparation of highly selective resins by the molecular imprinting technique. The synthesized extractants showed extremely high extractability for rare earth metals compared with commercially available extractants. The results of extraction equilibrium suggested that the structural effect of the extractants is one of the key factors enhancing the selectivity and extractability in rare earth extraction. Furthermore, a computational analysis was carried out to evaluate the extraction properties for the extraction of rare earth metals by the synthesized extractants. The computer simulation was shown to be very useful for designing new extractants. The new concept of connecting functional moieties with a spacer is very useful and is a promising method for developing novel extractants for the treatment of nuclear fuel. In the second part, we propose a novel molecular imprinting technique (surface template polymerization) for the separation of lanthanides and actinides. A surface-templated resin is prepared by emulsion polymerization using an ion-binding (host) monomer, a resin matrix-forming monomer and the target Nd(III) metal ion. The host monomer, which has an amphiphilic nature, forms a complex with a metal ion at the interface, and the complex remains as it is. After the matrix is polymerized, the coordination structure is 'imprinted' at the resin interface. Adsorption of Nd(III) and La(III) ions onto the

  14. Highly efficient computer algorithm for identifying layer thickness of atomically thin 2D materials

    Science.gov (United States)

    Lee, Jekwan; Cho, Seungwan; Park, Soohyun; Bae, Hyemin; Noh, Minji; Kim, Beom; In, Chihun; Yang, Seunghoon; Lee, Sooun; Seo, Seung Young; Kim, Jehyun; Lee, Chul-Ho; Shim, Woo-Young; Jo, Moon-Ho; Kim, Dohun; Choi, Hyunyong

    2018-03-01

    The fields of layered-material research, such as transition-metal dichalcogenides (TMDs), have demonstrated that optical, electrical and mechanical properties strongly depend on the layer number N. Thus, efficient and accurate determination of N is the most crucial step before the associated device fabrication. The most widely used existing technique identifies N with an optical microscope. However, a critical drawback of this approach is that it relies on extensive laboratory experience to estimate N; it requires a very time-consuming image-searching task assisted by human eyes, plus secondary measurements such as atomic force microscopy and Raman spectroscopy to confirm N. In this work, we introduce a computer algorithm based on image analysis of a quantized optical contrast. We show that our algorithm applies to a wide variety of layered materials, including graphene, MoS2, and WS2, regardless of substrate. The algorithm largely consists of two parts. First, it sets up an appropriate boundary between the target flakes and the substrate. Second, to compute N, it automatically calculates the optical contrast using an adaptive RGB estimation process for each target, which results in a matrix of integer Ns and returns a map of N over the target flake positions. Using conventional desktop computing power, the time taken to display the final N matrix was 1.8 s on average for an image of 1280 × 960 pixels, with a high accuracy of 90% (six estimation errors among 62 samples) compared to the other methods. To show the effectiveness of our algorithm, we also apply it to TMD flakes transferred onto optically transparent c-axis sapphire substrates and obtain a similar accuracy of 94% (two estimation errors among 34 samples).
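
    The contrast-quantization step can be sketched in a few lines. The following is a rough illustration, not the authors' code: the per-layer contrast step and the clipping are invented for the sketch, and real materials require the adaptive RGB estimation described above.

        import numpy as np

        def estimate_layer_map(rgb_image, substrate_mask, contrast_step=0.06):
            """Quantize optical contrast into integer layer numbers N.
            rgb_image: float array (H, W, 3) in [0, 1]; substrate_mask: bool (H, W).
            contrast_step is a hypothetical per-layer contrast increment."""
            gray = rgb_image.mean(axis=2)
            substrate = gray[substrate_mask].mean()    # reference substrate intensity
            contrast = (substrate - gray) / substrate  # flake vs. substrate contrast
            n_layers = np.rint(contrast / contrast_step).astype(int)
            return np.clip(n_layers, 0, None)          # substrate/noise maps to N = 0

    Because the optical contrast is quantized, rounding to the nearest multiple of the single-layer step yields an integer N per pixel, which is the matrix-of-Ns output the abstract describes.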

  15. Open source acceleration of wave optics simulations on energy efficient high-performance computing platforms

    Science.gov (United States)

    Beck, Jeffrey; Bos, Jeremy P.

    2017-05-01

    We compare several modifications to the open-source wave-optics package WavePy intended to improve execution time. Specifically, we compare the relative performance of the Intel MKL, a CPU-based OpenCV distribution, and a GPU-based version. Performance is compared between distributions both on the same compute platform and between a fully featured computing workstation and the NVIDIA Jetson TX1 platform. Comparisons are drawn in terms of both execution time and power consumption. We have found that substituting the OpenCV implementation of the Fast Fourier Transform provides a marked improvement on all platforms. In addition, we show that embedded platforms offer some possibility for extensive improvement in efficiency compared with a fully featured workstation.

  16. A highly efficient parallel algorithm for solving the neutron diffusion nodal equations on shared-memory computers

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Kirk, B.L.

    1990-01-01

    Modern parallel computer architectures offer an enormous potential for reducing CPU and wall-clock execution times of large-scale computations commonly performed in various applications in science and engineering. Recently, several authors have reported their efforts in developing and implementing parallel algorithms for solving the neutron diffusion equation on a variety of shared- and distributed-memory parallel computers. Testing of these algorithms for a variety of two- and three-dimensional meshes showed significant speedup of the computation. Even for very large problems (i.e., three-dimensional fine meshes) executed concurrently on a few nodes in serial (nonvector) mode, however, the measured computational efficiency is very low (40 to 86%). In this paper, the authors present a highly efficient (∼85 to 99.9%) algorithm for solving the two-dimensional nodal diffusion equations on the Sequent Balance 8000 parallel computer. Also presented is a model for the performance, represented by the efficiency, as a function of problem size and the number of participating processors. The model is validated through several tests and then extrapolated to larger problems and more processors to predict the performance of the algorithm in more computationally demanding situations
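
    The efficiency figures quoted above follow the usual strong-scaling definition E(p) = T(1) / (p * T(p)). A minimal helper makes the arithmetic concrete; the timings below are made up for illustration.

        def speedup_and_efficiency(t_serial: float, t_parallel: float, p: int):
            """Classic strong-scaling metrics: speedup S = T1/Tp, efficiency E = S/p."""
            s = t_serial / t_parallel
            return s, s / p

        # hypothetical timings: 100 s serial, 13 s on 8 processors
        s, e = speedup_and_efficiency(100.0, 13.0, 8)
        print(f"speedup = {s:.2f}, efficiency = {e:.1%}")  # 7.69, 96.2%

    An efficiency near 100% means the processors are kept almost fully busy, which is the regime the ~85 to 99.9% figures above refer to.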

  17. Many-core technologies: The move to energy-efficient, high-throughput x86 computing (TFLOPS on a chip)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    With Moore's Law alive and well, more and more parallelism is introduced into all computing platforms at all levels of integration and programming to achieve higher performance and energy efficiency. Especially in the area of High-Performance Computing (HPC), users can entertain a combination of different hardware and software parallel architectures and programming environments. These technologies range from vectorization and SIMD computation through shared-memory multi-threading (e.g. OpenMP) to distributed-memory message passing (e.g. MPI) on cluster systems. We will discuss HPC industry trends and Intel's approach to them, from processor/system architectures and research activities to hardware and software tools technologies. This includes the recently announced Intel® Many Integrated Core (MIC) architecture for highly parallel workloads and general-purpose, energy-efficient TFLOPS performance, some of its architectural features and its programming environment. At the end we will have a br...

  18. Energy Efficiency Evaluation and Benchmarking of AFRL’s Condor High Performance Computer

    Science.gov (United States)

    2011-08-01

    This conference paper (post print, covering January to June 2011) evaluates the energy efficiency of AFRL's Condor high performance computer using the High Performance LINPACK (HPL) benchmark while also measuring the energy consumed to achieve such performance; supercomputers are ranked by... As an example of the measurements, for PlayStation 3 nodes executing the HPL benchmark: when idle, the two PS3s consume 188.49 W on average; at peak HPL performance, the nodes draw an average of...

  19. Interaction Entropy: A New Paradigm for Highly Efficient and Reliable Computation of Protein-Ligand Binding Free Energy.

    Science.gov (United States)

    Duan, Lili; Liu, Xiao; Zhang, John Z H

    2016-05-04

    Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. Extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems as well as a practical method for highly efficient calculation of this effect.
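
    From the abstract's description, the interaction entropy term is computed directly from the fluctuations of the protein-ligand interaction energy sampled along the MD trajectory, via -TΔS = kT ln⟨exp(βΔE_int)⟩ with ΔE_int = E_int - ⟨E_int⟩ (the exponential-averaging formula of the interaction entropy method). A small sketch of that average follows; the input file of per-frame interaction energies and the temperature are placeholders.

        import numpy as np
        from scipy.special import logsumexp

        KB = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

        def interaction_entropy(e_int: np.ndarray, temperature: float = 300.0) -> float:
            """-T*dS = kT * ln< exp((E_int - <E_int>) / kT) >, averaged over frames."""
            kt = KB * temperature
            de = e_int - e_int.mean()
            # logsumexp avoids overflow in exp(dE/kT) for large fluctuations
            log_avg = logsumexp(de / kt) - np.log(len(de))
            return kt * log_avg  # kcal/mol; added to <E_int> in the free energy

        energies = np.loadtxt("eint.dat")  # hypothetical per-frame interaction energies
        print(f"-T*dS = {interaction_entropy(energies):.2f} kcal/mol")

    Since the average is taken over existing MD frames, this term indeed adds no extra simulation cost, which is the "without any extra computational cost" claim above.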

  20. Power-efficient computer architectures recent advances

    CERN Document Server

    Själander, Magnus; Kaxiras, Stefanos

    2014-01-01

    As Moore's Law and Dennard scaling trends have slowed, the challenges of building high-performance computer architectures while maintaining acceptable power-efficiency levels have heightened. Over the past ten years, architecture techniques for power efficiency have shifted from primarily focusing on module-level efficiencies toward more holistic design styles based on parallelism and heterogeneity. This work highlights and synthesizes recent techniques and trends in power-efficient computer architecture. Table of Contents: Introduction / Voltage and Frequency Management / Heterogeneity and Sp

  1. Efficient computation of argumentation semantics

    CERN Document Server

    Liao, Beishui

    2013-01-01

    Efficient Computation of Argumentation Semantics addresses argumentation semantics and systems, introducing readers to cutting-edge decomposition methods that drive increasingly efficient logic computation in AI and intelligent systems. Such complex and distributed systems are increasingly used in the automation and transportation systems field, and particularly autonomous systems, as well as more generic intelligent computation research. The Series in Intelligent Systems publishes titles that cover state-of-the-art knowledge and the latest advances in research and development in intelligen

  2. Computer aided drug discovery of highly ligand efficient, low molecular weight imidazopyridine analogs as FLT3 inhibitors.

    Science.gov (United States)

    Frett, Brendan; McConnell, Nick; Smith, Catherine C; Wang, Yuanxiang; Shah, Neil P; Li, Hong-yu

    2015-04-13

    The FLT3 kinase represents an attractive target for the effective treatment of AML. Unfortunately, no FLT3-targeted therapeutic is currently approved. In line with our continued interest in treating kinase-related disease, and seeking anti-FLT3-mutant activity, we utilized pioneering synthetic methodology in combination with computer-aided drug discovery and identified low molecular weight, highly ligand-efficient FLT3 kinase inhibitors. Compounds were analyzed for biochemical inhibition, for their ability to selectively inhibit cell proliferation, for FLT3-mutant activity, and for preliminary aqueous solubility. Validated hits were discovered that can serve as starting platforms for lead candidates. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
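
    Ligand efficiency, the figure of merit emphasized here, is conventionally the binding free energy normalized by heavy-atom count, LE = -ΔG / N_heavy, often estimated from potency as LE ≈ 1.37 pIC50 / N_heavy (kcal/mol per heavy atom). A toy calculation with invented numbers, not the paper's compounds:

        import math

        def ligand_efficiency(ic50_nm: float, heavy_atoms: int, temp: float = 300.0) -> float:
            """LE = -RT ln(IC50) / N_heavy, in kcal/mol per heavy atom."""
            r = 0.0019872041  # gas constant, kcal/(mol*K)
            dg = r * temp * math.log(ic50_nm * 1e-9)  # negative for sub-molar IC50
            return -dg / heavy_atoms

        # hypothetical inhibitor: IC50 = 50 nM, 22 heavy atoms
        print(f"LE = {ligand_efficiency(50, 22):.2f} kcal/mol/HA")  # ~0.45

    Values above roughly 0.3 kcal/mol per heavy atom are commonly taken as "highly ligand efficient", which is why low molecular weight hits can score well despite modest potency.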

  3. An energy efficient and high speed architecture for convolution computing based on binary resistive random access memory

    Science.gov (United States)

    Liu, Chen; Han, Runze; Zhou, Zheng; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng

    2018-04-01

    In this work we present a novel convolution computing architecture based on metal oxide resistive random access memory (RRAM) to process image data stored in the RRAM arrays. The proposed image-storage architecture shows better speed and device-consumption efficiency than the previous kernel-storage architecture. We further improve the architecture for high-accuracy, low-power computing by utilizing binary storage and a series resistor. For a 28 × 28 image and 10 kernels of size 3 × 3, compared with the previous kernel-storage approach, the newly proposed architecture shows excellent performance, including: 1) almost 100% accuracy within 20% LRS variation and 90% HRS variation; 2) a more than 67-times speed boost; 3) 71.4% energy saving.

  4. High Efficiency Computation of the Variances of Structural Evolutionary Random Responses

    Directory of Open Access Journals (Sweden)

    J.H. Lin

    2000-01-01

    For structures subjected to stationary or evolutionary white/colored random noise, their various response variances satisfy algebraic or differential Lyapunov equations. The solution of these Lyapunov equations used to be very difficult. A precise integration method is proposed in the present paper, which solves such Lyapunov equations accurately and very efficiently.
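
    For the stationary case, the response covariance X of a linear system driven by white noise satisfies the algebraic Lyapunov equation A X + X Aᵀ + Q = 0. A quick check with an off-the-shelf solver (SciPy's, not the precise integration method of the paper; the matrices are illustrative):

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        # toy oscillator in state-space form: x' = A x + w, with E[w w'] = Q
        A = np.array([[0.0, 1.0],
                      [-4.0, -0.4]])   # stiffness 4, damping 0.4 (illustrative)
        Q = np.diag([0.0, 1.0])        # white-noise excitation on the velocity channel

        # solve_continuous_lyapunov solves A X + X A^T = C, so pass C = -Q
        X = solve_continuous_lyapunov(A, -Q)
        print(X)                                     # stationary response covariance
        print(np.allclose(A @ X + X @ A.T + Q, 0))   # residual check -> True

    The evolutionary (non-stationary) case replaces the algebraic equation with the differential Lyapunov equation, which is where the paper's precise integration scheme applies.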

  5. GATE: Improving the computational efficiency

    International Nuclear Information System (INIS)

    Staelens, S.; De Beenhouwer, J.; Kruecker, D.; Maigne, L.; Rannou, F.; Ferrer, L.; D'Asseler, Y.; Buvat, I.; Lemahieu, I.

    2006-01-01

    GATE is a software package dedicated to Monte Carlo simulations in Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET). An important disadvantage of these simulations is the fundamental burden of computation time. This manuscript describes three different techniques to improve the efficiency of such simulations. First, the implementation of variance reduction techniques (VRTs), more specifically the incorporation of geometrical importance sampling, is discussed. Next, the newly designed cluster version of the GATE software is described. The experiments have shown that GATE simulations scale very well on a cluster of homogeneous computers. Finally, an elaboration on the deployment of GATE on the Enabling Grids for E-Science in Europe (EGEE) grid concludes the description of efficiency enhancement efforts. The three aforementioned methods improve the efficiency of GATE to a large extent and make realistic patient-specific overnight Monte Carlo simulations achievable.

  6. Molecular design of highly efficient extractants for separation of lanthanides and actinides by computational chemistry

    International Nuclear Information System (INIS)

    Uezu, Kazuya; Yamagawa, Jun-ichiro; Goto, Masahiro

    2006-01-01

    Novel organophosphorus extractants, which have two functional moieties in the molecular structure, were developed for the recycle system of transuranium elements using liquid-liquid extraction. The synthesized extractants showed extremely high extractability for lanthanide elements compared with commercially available extractants. The results of extraction equilibrium suggested that the structural effect of the extractants is one of the key factors enhancing selectivity and extractability in lanthanide extraction. Furthermore, molecular modeling was carried out to evaluate the extraction properties for the extraction of lanthanides by the synthesized extractants. Molecular modeling was shown to be very useful for designing new extractants. The new concept of connecting functional moieties with a spacer is very useful and is a promising method for developing novel extractants for the treatment of nuclear fuel. (author)

  7. Megavoltage cone-beam computed tomography using a high-efficiency image receptor

    International Nuclear Information System (INIS)

    Seppi, Ed J.; Munro, Peter; Johnsen, Stan W.; Shapiro, Ed G.; Tognina, Carlo; Jones, Dan; Pavkovich, John M.; Webb, Chris; Mollov, Ivan; Partain, Larry D.; Colbeth, Rick E.

    2003-01-01

    Purpose: To develop an image receptor capable of forming high-quality megavoltage CT images using modest radiation doses. Methods and Materials: A flat-panel imaging system consisting of a conventional flat-panel sensor attached to a thick CsI scintillator has been fabricated. The scintillator consists of individual CsI crystals 8 mm thick on a 0.38 mm × 0.38 mm pitch. Five sides of each crystal are coated with a reflecting powder/epoxy mixture, and the sixth side is in contact with the flat-panel sensor. A timing interface coordinates acquisition by the imaging system and pulsing of the linear accelerator. With this interface, as little as one accelerator pulse (0.023 cGy at the isocenter) can be used to form projection images. Different CT phantoms irradiated by a 6-MV X-ray beam have been imaged to evaluate the performance of the imaging system. The phantoms have been mounted on a rotating stage and rotated while 360 projection images are acquired in 48 s. These projections have been reconstructed using the Feldkamp cone-beam CT reconstruction algorithm. Results and Discussion: Using an irradiation of 16 cGy (360 projections × 0.046 cGy/projection), the contrast resolution is ∼1% for large objects. High-contrast structures as small as 1.2 mm are clearly visible. The reconstructed CT values are linear (R² = 0.98) for electron densities between 0.001 and 2.16 g/cm³, and the reconstruction time for a 512 × 512 × 512 data set is 6 min. Images of an anthropomorphic phantom show that soft-tissue structures such as the heart, lung, kidneys, and liver are visible in the reconstructed images (16 cGy, 5-mm-thick slices). Conclusions: The acquisition of megavoltage CT images with soft-tissue contrast is possible with irradiations as small as 16 cGy

  8. Efficient computation of Laguerre polynomials

    NARCIS (Netherlands)

    A. Gil (Amparo); J. Segura (Javier); N.M. Temme (Nico)

    2017-01-01

    An efficient algorithm and a Fortran 90 module (LaguerrePol) for computing Laguerre polynomials L_n^(α)(z) are presented. The standard three-term recurrence relation satisfied by the polynomials and different types of asymptotic expansions, valid for n large and α small, are used
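
    The three-term recurrence referred to above is (n+1) L_{n+1}^(α)(x) = (2n+1+α-x) L_n^(α)(x) - (n+α) L_{n-1}^(α)(x), with L_0 = 1 and L_1 = 1+α-x. A direct Python transcription for illustration (the module itself is Fortran 90; this sketch only mirrors the recurrence, not the asymptotic expansions):

        def laguerre(n: int, alpha: float, x: float) -> float:
            """Generalized Laguerre polynomial L_n^(alpha)(x) via the standard
            three-term recurrence, iterated forward from L_0 and L_1."""
            if n == 0:
                return 1.0
            l_prev, l_curr = 1.0, 1.0 + alpha - x
            for k in range(1, n):
                l_prev, l_curr = l_curr, ((2*k + 1 + alpha - x) * l_curr
                                          - (k + alpha) * l_prev) / (k + 1)
            return l_curr

        # spot check against the closed form L_2^(0)(x) = 1 - 2x + x^2/2 at x = 1.5
        print(laguerre(2, 0.0, 1.5), 1 - 2*1.5 + 1.5**2 / 2)  # both -0.875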

  9. Hiding Electronic Patient Record (EPR) in medical images: A high capacity and computationally efficient technique for e-healthcare applications.

    Science.gov (United States)

    Loan, Nazir A; Parah, Shabir A; Sheikh, Javaid A; Akhoon, Jahangir A; Bhat, Ghulam M

    2017-09-01

    A high-capacity and semi-reversible data hiding scheme based on the Pixel Repetition Method (PRM) and hybrid edge detection for scalable medical images is proposed in this paper. PRM is used to scale up the small seed image, and hybrid edge detection ensures that no important edge information is missed. The scaled-up version of the seed image is divided into 2×2 non-overlapping blocks. In each block there is one seed pixel, whose status decides the number of bits to be embedded in the remaining three pixels of that block. The Electronic Patient Record (EPR) data are embedded using Least Significant Bit and Intermediate Significant Bit Substitution (ISBS). RC4 encryption is used to add an additional security layer for the embedded EPR data. The proposed scheme has been tested on various medical and general images and compared with some state-of-the-art techniques in the field. The experimental results reveal that the proposed scheme, besides being semi-reversible and computationally efficient, can handle a high payload, and as such can be used effectively for electronic healthcare applications. Copyright © 2017. Published by Elsevier Inc.
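
    The pixel-repetition step is easy to sketch: every seed pixel is replicated into a 2×2 block, after which the three non-seed pixels in each block become candidate carriers for EPR bits. A minimal illustration of only the upscaling (the embedding rules, edge detection, and RC4 layer of the actual scheme are omitted):

        import numpy as np

        def pixel_repetition_upscale(seed: np.ndarray) -> np.ndarray:
            """Scale an (H, W) seed image to (2H, 2W) by repeating each pixel
            into a 2x2 block, as in PRM; np.kron does the replication."""
            return np.kron(seed, np.ones((2, 2), dtype=seed.dtype))

        seed = np.array([[10, 20],
                         [30, 40]], dtype=np.uint8)
        print(pixel_repetition_upscale(seed))
        # in each 2x2 block, one pixel keeps the seed value; the other three
        # may be modified to carry EPR bits (embedding logic not shown)

    Because three of every four pixels are redundant copies, modifying them for payload leaves the seed image recoverable, which is what makes the scheme semi-reversible and high capacity.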

  10. Efficient Secure Multiparty Subset Computation

    Directory of Open Access Journals (Sweden)

    Sufang Zhou

    2017-01-01

    The secure subset problem is important in secure multiparty computation, which is a vital field in cryptography. Most of the existing protocols for this problem can only keep the elements of one set private, while leaking the elements of the other set. In other words, they cannot solve the secure subset problem perfectly. While a few studies have addressed actual secure subsets, these protocols were mainly based on oblivious polynomial evaluation, with inefficient computation. In this study, we first design an efficient secure subset protocol for sets whose elements are drawn from a known set, based on a new encoding method and a homomorphic encryption scheme. If the elements of the sets are taken from a large domain, the existing protocol is inefficient. Using a Bloom filter and a homomorphic encryption scheme, we further present an efficient protocol with linear computational complexity in the cardinality of the large set, which is considered practical for inputs consisting of a large number of data. However, the second protocol that we design may yield a false positive. This probability can be rapidly decreased by re-executing the protocol with different hash functions. Furthermore, we present experimental performance analyses of these protocols.
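
    The false-positive behavior of the second protocol comes from the underlying Bloom filter. A plain, non-cryptographic sketch of a Bloom-filter subset test (without the homomorphic encryption layer that makes the real protocol private) shows both the linear-time check and the false-positive risk:

        import hashlib

        class BloomFilter:
            def __init__(self, m: int = 1 << 20, k: int = 4):
                self.m, self.k, self.bits = m, k, bytearray(m)

            def _positions(self, item):
                for i in range(self.k):  # k positions via salted SHA-256
                    h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                    yield int.from_bytes(h[:8], "big") % self.m

            def add(self, item):
                for p in self._positions(item):
                    self.bits[p] = 1

            def might_contain(self, item):
                return all(self.bits[p] for p in self._positions(item))

        big = BloomFilter()
        for x in range(100_000):  # the large set
            big.add(x)
        small = {3, 42, 99_999}
        # subset check, linear in |small|; may (rarely) answer yes wrongly
        print(all(big.might_contain(x) for x in small))

    Re-running with different hash salts makes an erroneous "yes" unlikely to repeat, which is the re-execution remedy mentioned in the abstract.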

  11. Efficient GPU-based skyline computation

    DEFF Research Database (Denmark)

    Bøgh, Kenneth Sejdenfaden; Assent, Ira; Magnani, Matteo

    2013-01-01

    The skyline operator for multi-criteria search returns the most interesting points of a data set with respect to any monotone preference function. Existing work has almost exclusively focused on efficiently computing skylines on one or more CPUs, ignoring the high parallelism possible in GPUs. In...

  12. Efficient computation of spaced seeds

    Directory of Open Access Journals (Sweden)

    Ilie Silvana

    2012-02-01

    Background: The most frequently used tools in bioinformatics are those searching for similarities, or local alignments, between biological sequences. Since the exact dynamic programming algorithm is quadratic, linear-time heuristics such as BLAST are used. Spaced seeds are much more sensitive than the consecutive seed of BLAST, and using several seeds represents the current state of the art in approximate search for biological sequences. The most important aspect is computing highly sensitive seeds. Since the problem seems hard, heuristic algorithms are used. The leading software in the common Bernoulli model is the SpEED program. Findings: SpEED uses a hill-climbing method based on the overlap complexity heuristic. We propose a new algorithm for this heuristic that improves its speed by over one order of magnitude. We use the new implementation to compute improved seeds for several software programs. We also compute multiple seeds of the same weight as MegaBLAST that greatly improve its sensitivity. Conclusion: Multiple spaced seeds are being successfully used in bioinformatics software programs. Enabling researchers to compute very fast high-quality seeds will help expand the range of their applications.

  13. Efficient quantum computing with weak measurements

    International Nuclear Information System (INIS)

    Lund, A P

    2011-01-01

    Projective measurements with high quantum efficiency are often assumed to be required for efficient circuit-based quantum computing. We argue that this is not the case and show that this fact was actually known previously, though not deeply explored. We examine the issue by giving an example of how to perform the quantum order-finding algorithm efficiently using non-local weak measurements, considering that the measurements used are of bounded weakness and that some fixed but arbitrary probability of success less than unity is required. We also show that it is possible to perform the same computation with only local weak measurements, but this must necessarily introduce an exponential overhead.

  14. Computational efficiency for the surface renewal method

    Science.gov (United States)

    Kelley, Jason; Higgins, Chad

    2018-04-01

    Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and that were tested for sensitivity to the length of the flux-averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal-processing techniques and algebraic simplifications, demonstrating simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased speed of computation grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, in applied monitoring, and in novel field deployments.
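
    One signal-processing simplification of the kind alluded to above (my illustration, not necessarily the authors' algorithm) is evaluating the second-order structure function at every lag at once through an FFT-based correlation instead of looping over lags:

        import numpy as np

        def structure_function_all_lags(x: np.ndarray) -> np.ndarray:
            """S2(r) = mean over t of (x[t+r] - x[t])**2 for all lags r = 0..n-1,
            in O(n log n) via FFT correlation instead of an O(n^2) per-lag loop."""
            n = len(x)
            f = np.fft.rfft(x, 2 * n)                       # zero-pad: linear correlation
            c = np.fft.irfft(f * np.conj(f))[:n]            # c[r] = sum_t x[t] * x[t+r]
            p = np.concatenate(([0.0], np.cumsum(x ** 2)))  # prefix sums of squares
            r = np.arange(n)
            # sum (x[t+r]-x[t])^2 = sum_{t>=r} x^2 + sum_{t<n-r} x^2 - 2 c[r]
            pair_sums = (p[n] - p[r]) + p[n - r] - 2.0 * c
            return pair_sums / (n - r)                      # average per available pair

        rng = np.random.default_rng(0)
        x = np.cumsum(rng.standard_normal(10_000))          # random-walk test signal
        print(structure_function_all_lags(x)[:4])           # approx. [0, 1, 2, 3]

    Covering all lag timescales in a single O(n log n) pass is exactly the kind of modification that makes automated, iterated SR calibration affordable on 10 Hz records.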

  15. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    Science.gov (United States)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we propose a division method for large-scale 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPUs). We then design an imaging-point parallel strategy to achieve optimal parallel computing performance. Afterward, we adopt an asynchronous double-buffering scheme for multi-stream GPU/CPU parallel computing. Moreover, several key optimization strategies for computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared-memory optimization, register optimization and use of special function units (SFUs), greatly improved the efficiency. A numerical example employing real large-scale 3D seismic data showed that our scheme is nearly 80 times faster than the CPU QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D, high-resolution imaging for large-scale seismic data.

  16. A Practical Evaluation of a High-Security Energy-Efficient Gateway for IoT Fog Computing Applications.

    Science.gov (United States)

    Suárez-Albela, Manuel; Fernández-Caramés, Tiago M; Fraga-Lamas, Paula; Castedo, Luis

    2017-08-29

    Fog computing extends cloud computing to the edge of a network enabling new Internet of Things (IoT) applications and services, which may involve critical data that require privacy and security. In an IoT fog computing system, three elements can be distinguished: IoT nodes that collect data, the cloud, and interconnected IoT gateways that exchange messages with the IoT nodes and with the cloud. This article focuses on securing IoT gateways, which are assumed to be constrained in terms of computational resources, but that are able to offload some processing from the cloud and to reduce the latency in the responses to the IoT nodes. However, it is usually taken for granted that IoT gateways have direct access to the electrical grid, which is not always the case: in mission-critical applications like natural disaster relief or environmental monitoring, it is common to deploy IoT nodes and gateways in large areas where electricity comes from solar or wind energy that charge the batteries that power every device. In this article, how to secure IoT gateway communications while minimizing power consumption is analyzed. The throughput and power consumption of Rivest-Shamir-Adleman (RSA) and Elliptic Curve Cryptography (ECC) are considered, since they are really popular, but have not been thoroughly analyzed when applied to IoT scenarios. Moreover, the most widespread Transport Layer Security (TLS) cipher suites use RSA as the main public key-exchange algorithm, but the key sizes needed are not practical for most IoT devices and cannot be scaled to high security levels. In contrast, ECC represents a much lighter and scalable alternative. Thus, RSA and ECC are compared for equivalent security levels, and power consumption and data throughput are measured using a testbed of IoT gateways. The measurements obtained indicate that, in the specific fog computing scenario proposed, ECC is clearly a much better alternative than RSA, obtaining energy consumption reductions of up to
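
    A quick way to reproduce this kind of comparison on any gateway-class device is to time equivalent-strength signing operations, e.g. RSA-3072 versus ECDSA over P-256 (both roughly 128-bit security). A small benchmark sketch using the Python cryptography package; the message, curve choice and loop count are arbitrary, and power draw would still need an external meter as in the paper's testbed.

        import time
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa

        MSG = b"iot gateway telemetry sample"
        N = 200  # signatures per test; arbitrary

        rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
        ec_key = ec.generate_private_key(ec.SECP256R1())

        t0 = time.perf_counter()
        for _ in range(N):
            rsa_key.sign(MSG, padding.PKCS1v15(), hashes.SHA256())
        t_rsa = time.perf_counter() - t0

        t0 = time.perf_counter()
        for _ in range(N):
            ec_key.sign(MSG, ec.ECDSA(hashes.SHA256()))
        t_ec = time.perf_counter() - t0

        print(f"RSA-3072: {N / t_rsa:.0f} sig/s, ECDSA P-256: {N / t_ec:.0f} sig/s")

    On constrained hardware the ECDSA rate is typically far higher for the same security level, consistent with the article's conclusion that ECC is the lighter, more scalable choice.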

  17. A Practical Evaluation of a High-Security Energy-Efficient Gateway for IoT Fog Computing Applications

    Science.gov (United States)

    Castedo, Luis

    2017-01-01

    Fog computing extends cloud computing to the edge of a network enabling new Internet of Things (IoT) applications and services, which may involve critical data that require privacy and security. In an IoT fog computing system, three elements can be distinguished: IoT nodes that collect data, the cloud, and interconnected IoT gateways that exchange messages with the IoT nodes and with the cloud. This article focuses on securing IoT gateways, which are assumed to be constrained in terms of computational resources, but that are able to offload some processing from the cloud and to reduce the latency in the responses to the IoT nodes. However, it is usually taken for granted that IoT gateways have direct access to the electrical grid, which is not always the case: in mission-critical applications like natural disaster relief or environmental monitoring, it is common to deploy IoT nodes and gateways in large areas where electricity comes from solar or wind energy that charge the batteries that power every device. In this article, how to secure IoT gateway communications while minimizing power consumption is analyzed. The throughput and power consumption of Rivest–Shamir–Adleman (RSA) and Elliptic Curve Cryptography (ECC) are considered, since they are really popular, but have not been thoroughly analyzed when applied to IoT scenarios. Moreover, the most widespread Transport Layer Security (TLS) cipher suites use RSA as the main public key-exchange algorithm, but the key sizes needed are not practical for most IoT devices and cannot be scaled to high security levels. In contrast, ECC represents a much lighter and scalable alternative. Thus, RSA and ECC are compared for equivalent security levels, and power consumption and data throughput are measured using a testbed of IoT gateways. The measurements obtained indicate that, in the specific fog computing scenario proposed, ECC is clearly a much better alternative than RSA, obtaining energy consumption reductions of up

  18. Highly efficient high temperature electrolysis

    DEFF Research Database (Denmark)

    Hauch, Anne; Ebbesen, Sune; Jensen, Søren Højgaard

    2008-01-01

    High temperature electrolysis of water and steam may provide efficient, cost-effective and environmentally friendly production of H2 using electricity produced from sustainable, non-fossil energy sources. To achieve cost-competitive electrolysis cells that are both high performing, i.e. have minimum internal resistance, and long-term stable, it is critical to develop electrode materials that are optimal for steam electrolysis. In this article, electrolysis cells for the electrolysis of water or steam at temperatures above 200 °C for production of H2 are reviewed. High temperature electrolysis is favourable from a thermodynamic point of view, because part of the required energy can be supplied as thermal heat, and the activation barrier is lowered, increasing the H2 production rate. Only two types of cells operating at high temperature (above 200 °C) have been described

  19. HIGH EFFICIENCY TURBINE

    OpenAIRE

    VARMA, VIJAYA KRUSHNA

    2012-01-01

    Varma designed ultra-modern, high-efficiency turbines which can use gas, steam or fuels as feed to produce electricity or mechanical work for a wide range of usages and applications in industries or at work sites. Varma turbine engines can be used in all types of vehicles. These turbines can also be used in aircraft, ships, battle tanks, dredgers, mining equipment, earth-moving machines, etc. Salient features of Varma turbines: 1. Varma turbines are simple in design, easy to manufac...

  20. Efficient computation of k-Nearest Neighbour Graphs for large high-dimensional data sets on GPU clusters.

    Directory of Open Access Journals (Sweden)

    Ali Dashti

    This paper presents an implementation of brute-force exact k-Nearest Neighbor Graph (k-NNG) construction for ultra-large, high-dimensional data clouds. The proposed method uses Graphics Processing Units (GPUs) and is scalable with multiple levels of parallelism (between nodes of a cluster, between different GPUs on a single node, and within a GPU). The method is applicable to homogeneous computing clusters with a varying number of nodes and GPUs per node. We achieve a 6-fold speedup in data processing compared with an optimized method running on a cluster of CPUs and bring a hitherto impossible k-NNG generation for a dataset of twenty million images with 15k dimensionality into the realm of practical possibility.
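
    The brute-force kernel at the heart of k-NNG construction is a blocked distance computation plus a partial sort. A CPU-side NumPy sketch of one block follows (the paper's multi-GPU decomposition and exact tiling are not reproduced here; sizes are illustrative):

        import numpy as np

        def knn_graph_block(data: np.ndarray, queries: np.ndarray, k: int):
            """Exact k-NN of each query against `data` (squared Euclidean).
            data: (N, D), queries: (B, D). Returns (B, k) indices and distances."""
            # ||q - x||^2 = ||q||^2 - 2 q.x + ||x||^2, computed as one GEMM
            d2 = (np.sum(queries**2, axis=1)[:, None]
                  - 2.0 * queries @ data.T
                  + np.sum(data**2, axis=1)[None, :])
            idx = np.argpartition(d2, k, axis=1)[:, :k]   # unordered k smallest
            rows = np.arange(len(queries))[:, None]
            order = np.argsort(d2[rows, idx], axis=1)     # sort only the k candidates
            idx = idx[rows, order]
            return idx, d2[rows, idx]

        rng = np.random.default_rng(1)
        X = rng.standard_normal((5000, 64)).astype(np.float32)
        nbrs, dists = knn_graph_block(X, X[:128], k=10)   # first block of the graph
        print(nbrs.shape)  # (128, 10); nbrs[:, 0] is each point itself (distance ~0)

    Casting the distance matrix as a single matrix multiply is what makes the brute-force approach GPU-friendly; the multi-level parallelism in the paper distributes these blocks across GPUs and cluster nodes.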

  1. Energy efficient distributed computing systems

    CERN Document Server

    Lee, Young-Choon

    2012-01-01

    The energy consumption issue in distributed computing systems raises various monetary, environmental and system performance concerns. Electricity consumption in the US doubled from 2000 to 2005.  From a financial and environmental standpoint, reducing the consumption of electricity is important, yet these reforms must not lead to performance degradation of the computing systems.  These contradicting constraints create a suite of complex problems that need to be resolved in order to lead to 'greener' distributed computing systems.  This book brings together a group of outsta

  2. High-efficiency CARM

    Energy Technology Data Exchange (ETDEWEB)

    Bratman, V.L.; Kol'chugin, B.D.; Samsonov, S.V.; Volkov, A.B. [Institute of Applied Physics, Nizhny Novgorod (Russian Federation)]

    1995-12-31

    The Cyclotron Autoresonance Maser (CARM) is a well-known variety of FEM. Unlike the ubitron, in which electrons move in a periodic undulator field, in the CARM the particles move along helical trajectories in a uniform magnetic field. Since it is much simpler to generate strong homogeneous magnetic fields than periodic ones, for relatively low electron energies (≤1-3 MeV) the period of the particles' trajectories in the CARM can be considerably smaller than in the undulator, whose field, moreover, decreases rapidly in the transverse direction. In spite of this evident advantage, the number of papers on the CARM is an order of magnitude smaller than on the ubitron, which is apparently caused by the low (not more than 10%) CARM efficiency observed in experiments. At the same time, ubitrons operating in two rather complicated regimes - trapping and adiabatic deceleration of particles, and combined undulator and reversed guiding fields - yielded efficiencies of 34% and 27%, respectively. The aim of this work is to demonstrate that high efficiency can be reached even for the simplest version of the CARM. In order to reduce sensitivity to the axial velocity spread of the particles, a short interaction length, over which electrons undergo only 4-5 cyclotron oscillations, was used in this work. As in earlier experiments, a narrow anode outlet of a field-emission electron gun cut out the "most rectilinear" near-axis part of the electron beam. Additionally, the magnetic field of a small correcting coil compensated spurious electron oscillations pumped by the anode aperture. A kicker, in the form of a current-carrying frame sloping to the axis, provided a controlled value of rotary velocity with a small additional velocity spread. A simple cavity consisting of a cylindrical waveguide section bounded by a cut-off waveguide on the cathode side and by a Bragg reflector on the collector side was used as the CARM-oscillator microwave system.

  3. The EORTC computer-adaptive tests measuring physical functioning and fatigue exhibited high levels of measurement precision and efficiency

    DEFF Research Database (Denmark)

    Petersen, Morten Aa; Aaronson, Neil K; Arraras, Juan I

    2013-01-01

    The European Organisation for Research and Treatment of Cancer (EORTC) Quality of Life Group is developing a computer-adaptive test (CAT) version of the EORTC Quality of Life Questionnaire (QLQ-C30). We evaluated the measurement properties of the CAT versions of physical functioning (PF...

  4. The EORTC computer-adaptive tests measuring physical functioning and fatigue exhibited high levels of measurement precision and efficiency

    NARCIS (Netherlands)

    Petersen, M.A.; Aaronson, N.K.; Arraras, J.I.; Chie, W.C.; Conroy, T.; Constantini, A.; Giesinger, J.M.; Holzner, B.; King, M.T.; Singer, S.; Velikova, G.; Verdonck-de Leeuw, I.M.; Young, T.; Groenvold, M.

    2013-01-01

    Objectives The European Organisation for Research and Treatment of Cancer (EORTC) Quality of Life Group is developing a computer-adaptive test (CAT) version of the EORTC Quality of Life Questionnaire (QLQ-C30). We evaluated the measurement properties of the CAT versions of physical functioning (PF)

  5. The EORTC computer-adaptive tests measuring physical functioning and fatigue exhibited high levels of measurement precision and efficiency

    NARCIS (Netherlands)

    Petersen, M.A.; Aaronson, N.K.; Arraras, J.I.; Chie, W.C.; Conroy, T.; Costantini, A.; Giesinger, J.M.; Holzner, B.; King, M.T.; Singer, S.; Velikova, G.; de Leeuw, I.M.; Young, T.; Groenvold, M.

    2013-01-01

    Objectives: The European Organisation for Research and Treatment of Cancer (EORTC) Quality of Life Group is developing a computer-adaptive test (CAT) version of the EORTC Quality of Life Questionnaire (QLQ-C30). We evaluated the measurement properties of the CAT versions of physical functioning (PF)

  6. Efficient Resource Management in Cloud Computing

    OpenAIRE

    Rushikesh Shingade; Amit Patil; Shivam Suryawanshi; M. Venkatesan

    2015-01-01

    Cloud computing is a widely used technology for providing cloud services to users, who are charged for the services they receive. Given the very large number of resources involved, the performance of cloud resource management policies is difficult to evaluate and optimize efficiently. Different simulation toolkits are available for simulating and modelling the cloud computing environment, such as GridSim, CloudAnalyst, CloudSim, GreenCloud, CloudAuction, etc. In the proposed Efficient Resource Manage...

  7. Computing in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Watase, Yoshiyuki

    1991-09-15

    The increasingly important role played by computing and computers in high energy physics is displayed in the 'Computing in High Energy Physics' series of conferences, bringing together experts in different aspects of computing - physicists, computer scientists, and vendors.

  8. Refficientlib: an efficient load-rebalanced adaptive mesh refinement algorithm for high-performance computational physics meshes

    OpenAIRE

    Baiges Aznar, Joan; Bayona Roa, Camilo Andrés

    2017-01-01

    In this paper we present a novel algorithm for adaptive mesh refinement in computational physics meshes in a distributed memory parallel setting. The proposed method is developed for nodally based parallel domain partitions, where the nodes of the mesh belong to a single processor whereas the elements can belong to multiple processors. Some of the main features of the algorithm presented in this paper a...

  9. Computation of the efficiency distribution of a multichannel focusing collimator

    International Nuclear Information System (INIS)

    Balasubramanian, A.; Venkateswaran, T.V.

    1977-01-01

    This article describes two computer methods for calculating the point-source efficiency distribution functions of a focusing collimator with round tapered holes. The first method, which computes only the geometric efficiency distribution, is adequate for low-energy collimators, while the second, which computes both geometric and penetration efficiencies, can be used for medium- and high-energy collimators. The scatter contribution to the efficiency is not taken into account. In the first method the efficiency distribution of a single cone of the collimator is obtained, and the data are used for computing the distribution of the whole collimator. For high-energy collimators the entire detector region is divided into elemental areas; the efficiency of each elemental area is computed after suitably weighting for the penetration within the collimator septa, which is determined by three-dimensional geometric techniques. The method of computing the line-source efficiency distribution from the point-source distribution is also explained. The formulations have been tested by computing the efficiency distribution of several commercial collimators and collimators fabricated by us. (Auth.)
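
    The single-cone geometric computation described above can be illustrated with a small Monte Carlo sketch. The code below estimates the point-source geometric efficiency of one tapered (conical) hole for an on-axis source, ignoring septal penetration just as the article's low-energy method does; the hole dimensions, the Monte Carlo approach itself and all parameter values are illustrative assumptions, not the article's analytic formulation.

```python
import numpy as np

def geometric_efficiency(source_z, r_front, r_back, length, n=200_000, seed=0):
    """Monte Carlo estimate of the geometric efficiency of a single conical
    collimator hole for an on-axis point source (illustrative stand-in for
    the article's analytic single-cone computation; septal penetration and
    scatter are ignored).

    source_z : distance from the source to the front face of the hole
    r_front  : hole radius at the front (source-side) face
    r_back   : hole radius at the back (detector-side) face
    length   : hole length along the collimator axis
    """
    rng = np.random.default_rng(seed)
    # Isotropic emission: cos(theta) uniform on (0, 1] covers the forward hemisphere.
    cos_t = 1.0 - rng.uniform(0.0, 1.0, n)
    tan_t = np.sqrt(1.0 - cos_t**2) / cos_t
    # Radial offset of each ray where it crosses the front and back apertures.
    r1 = source_z * tan_t
    r2 = (source_z + length) * tan_t
    accepted = (r1 < r_front) & (r2 < r_back)
    # Fraction of the full 4*pi solid angle (forward hemisphere = 0.5).
    return 0.5 * accepted.mean()

print(geometric_efficiency(source_z=10.0, r_front=0.2, r_back=0.3, length=4.0))
```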

  10. High performance computing in Windows Azure cloud

    OpenAIRE

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower maintenance costs have contributed substantially to the growing popularity of cloud computing in all spheres of life, especially in business. In fact, cloud computing offers even more than this: using virtual computing clusters, a runtime environment for high performance computing can be implemented efficiently in a cloud as well. There are many advantages but also some disadvantages of cloud computing, some ...

  11. Complexity-aware high efficiency video coding

    CERN Document Server

    Correa, Guilherme; Agostini, Luciano; Cruz, Luis A da Silva

    2016-01-01

    This book discusses the computational complexity of High Efficiency Video Coding (HEVC) encoders, with coverage extending from the analysis of HEVC compression efficiency and computational complexity to the reduction and scaling of its encoding complexity. After an introduction to the topic and a review of the state-of-the-art research in the field, the authors provide a detailed analysis of the compression efficiency and computational complexity of the HEVC encoding tools. Readers will benefit from a set of algorithms for scaling the computational complexity of HEVC encoders, all of which take advantage of the flexibility of the frame partitioning structures allowed by the standard. The authors also provide a set of early termination methods based on data mining and machine learning techniques, which are able to reduce the computational complexity required to find the best frame partitioning structures. The applicability of the proposed methods is finally exemplified with an encoding time control system that emplo...

  12. High efficiency positron moderation

    International Nuclear Information System (INIS)

    Taqqu, D.

    1990-01-01

    A new positron moderation scheme is proposed. It makes use of electric and magnetic fields to confine the β+ particles emitted by a radioactive source, forcing them to slow down within a thin foil. A specific arrangement is described in which an intermediate slowed-down beam of energy below 10 keV is produced. By directing it towards a standard moderator, optimal conversion into slow positrons is achieved. This scheme is best applied to short-lived β+ emitters, for which a 25% moderation efficiency can be reached. Within state-of-the-art technology a slow positron source intensity exceeding 2 × 10^10 e+/sec is achievable. (orig.)

  13. Efficient Multi-Party Computation over Rings

    DEFF Research Database (Denmark)

    Cramer, Ronald; Fehr, Serge; Ishai, Yuval

    2003-01-01

    Secure multi-party computation (MPC) is an active research area, and a wide range of literature can be found nowadays suggesting improvements and generalizations of existing protocols in various directions. However, all current techniques for secure MPC apply to functions that are represented by (boolean or arithmetic) circuits over finite fields. We are motivated by two limitations of these techniques: – Generality. Existing protocols do not apply to computation over more general algebraic structures (except via a brute-force simulation of computation in these structures). – Efficiency. The best ... We demonstrate the usefulness of the above results by presenting a novel application of MPC over (non-field) rings to the round-efficient secure computation of the maximum function. Basic Research in Computer Science (www.brics.dk), funded by the Danish National Research Foundation.

  14. Numerical aspects for efficient welding computational mechanics

    Directory of Open Access Journals (Sweden)

    Aburuga Tarek Kh.S.

    2014-01-01

    Full Text Available The effect of residual stresses and strains is one of the most important parameters in structural integrity assessment. A finite element model is constructed in order to simulate the multi-pass mismatched submerged arc welding (SAW) used in the welded tensile test specimen. A sequentially coupled thermal-mechanical analysis is performed using the ABAQUS software to calculate the residual stresses and distortion due to welding. In this work, three main issues were studied in order to reduce the time consumed during welding simulation, which is the major problem in computational welding mechanics (CWM). The first issue is the dimensionality of the problem: both two- and three-dimensional models were constructed for the same analysis type, and shell elements in the two-dimensional simulation showed good performance compared with brick elements. The conventional method of calculating residual stress uses an implicit scheme, because the welding and cooling times are relatively long. In this work, the author shows that an explicit scheme with the mass scaling technique can be used instead, reducing the analysis time very efficiently. With this new technique it becomes possible to simulate relatively large three-dimensional structures.

  15. Computer Architecture Techniques for Power-Efficiency

    CERN Document Server

    Kaxiras, Stefanos

    2008-01-01

    In the last few years, power dissipation has become an important design constraint, on par with performance, in the design of new computer systems. Whereas in the past, the primary job of the computer architect was to translate improvements in operating frequency and transistor count into performance, now power efficiency must be taken into account at every step of the design process. While for some time, architects have been successful in delivering 40% to 50% annual improvement in processor performance, costs that were previously brushed aside eventually caught up. The most critical of these ...

  16. Computer Architecture for Energy Efficient SFQ

    Science.gov (United States)

    2014-08-27

    This report summarizes work accomplished during an ARO-sponsored project at IBM Research (T.J. Watson Research Laboratory, 1101 Kitchawan Road, Yorktown Heights, NY 10598) to identify and model an energy-efficient SFQ-based computer architecture, IBM Windsor Blue (WB). The basic building block of WB is a "tile" comprised of a 64-bit arithmetic logic unit ...

  17. Computing in high energy physics

    International Nuclear Information System (INIS)

    Watase, Yoshiyuki

    1991-01-01

    The increasingly important role played by computing and computers in high energy physics is displayed in the 'Computing in High Energy Physics' series of conferences, bringing together experts in different aspects of computing - physicists, computer scientists, and vendors.

  18. Efficient quantum computing using coherent photon conversion.

    Science.gov (United States)

    Langford, N K; Ramelow, S; Prevedel, R; Munro, W J; Milburn, G J; Zeilinger, A

    2011-10-12

    Single photons are excellent quantum information carriers: they were used in the earliest demonstrations of entanglement and in the production of the highest-quality entanglement reported so far. However, current schemes for preparing, processing and measuring them are inefficient. For example, down-conversion provides heralded, but randomly timed, single photons, and linear optics gates are inherently probabilistic. Here we introduce a deterministic process--coherent photon conversion (CPC)--that provides a new way to generate and process complex, multiquanta states for photonic quantum information applications. The technique uses classically pumped nonlinearities to induce coherent oscillations between orthogonal states of multiple quantum excitations. One example of CPC, based on a pumped four-wave-mixing interaction, is shown to yield a single, versatile process that provides a full set of photonic quantum processing tools. This set satisfies the DiVincenzo criteria for a scalable quantum computing architecture, including deterministic multiqubit entanglement gates (based on a novel form of photon-photon interaction), high-quality heralded single- and multiphoton states free from higher-order imperfections, and robust, high-efficiency detection. It can also be used to produce heralded multiphoton entanglement, create optically switchable quantum circuits and implement an improved form of down-conversion with reduced higher-order effects. Such tools are valuable building blocks for many quantum-enabled technologies. Finally, using photonic crystal fibres we experimentally demonstrate quantum correlations arising from a four-colour nonlinear process suitable for CPC and use these measurements to study the feasibility of reaching the deterministic regime with current technology. Our scheme, which is based on interacting bosonic fields, is not restricted to optical systems but could also be implemented in optomechanical, electromechanical and superconducting ...

  19. Computing with memory for energy-efficient robust systems

    CERN Document Server

    Paul, Somnath

    2013-01-01

    This book analyzes energy and reliability as major challenges faced by designers of computing frameworks in the nanometer technology regime. The authors describe the existing solutions to address these challenges and then reveal a new reconfigurable computing platform, which leverages high-density nanoscale memory for both data storage and computation to maximize the energy-efficiency and reliability. The energy and reliability benefits of this new paradigm are illustrated and the design challenges are discussed. Various hardware and software aspects of this exciting computing paradigm are de...

  20. New highly efficient piezoceramic materials

    International Nuclear Information System (INIS)

    Dantsiger, A.Ya.; Razumovskaya, O.N.; Reznichenko, L.A.; Grineva, L.D.; Devlikanova, R.U.; Dudkina, S.I.; Gavrilyachenko, S.V.; Dergunova, N.V.

    1993-01-01

    New highly efficient piezoceramic materials with various combinations of parameters, including a high Curie point for high-temperature transducers used in atomic power engineering, have been developed. They can be used in systems for nondestructive testing of heated materials, in controllers for various industrial power plants, and in other high-temperature equipment.

  1. A computationally efficient fuzzy control scheme

    Directory of Open Access Journals (Sweden)

    Abdel Badie Sharkawy

    2013-12-01

    Full Text Available This paper develops a decentralized fuzzy control scheme for MIMO nonlinear second-order systems, with application to robot manipulators, via a combination of genetic algorithms (GAs) and fuzzy systems. The controller for each degree of freedom (DOF) consists of a feedforward fuzzy torque-computing system and a feedback fuzzy PD system. The feedforward fuzzy system is trained and optimized off-line using GAs, where not only the parameters but also the structure of the fuzzy system is optimized. The feedback fuzzy PD system, on the other hand, is used to keep the closed loop stable. The rule base consists of only four rules per DOF. Furthermore, the fuzzy feedback system is decentralized and simplified, leading to a computationally efficient control scheme. The proposed control scheme has the following advantages: (1) it needs no exact dynamics of the system, and the computation is time-saving because of the simple structure of the fuzzy systems; and (2) the controller is robust against parameter and payload uncertainties. The computational complexity of the proposed control scheme has been analyzed and compared with previous works. Computer simulations show that this controller is effective in achieving the control goals.
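
    The "four rules per DOF" structure is compact enough to sketch directly. The following is a minimal, hedged illustration of a four-rule fuzzy PD law for a single joint; the membership shapes, gains and defuzzification choice are placeholder assumptions, not the GA-optimized design from the paper.

```python
import numpy as np

def mu_neg(v, w=1.0):
    """Membership degree of 'negative' (sigmoid shape; illustrative choice)."""
    return 1.0 / (1.0 + np.exp(4.0 * v / w))

def fuzzy_pd(e, de, kp=10.0, kd=2.0, w=1.0):
    """Four-rule fuzzy PD law for one DOF: rules pair the signs of the
    tracking error e and its rate de with crisp torque consequents."""
    ne, pe = mu_neg(e, w), 1.0 - mu_neg(e, w)      # error: negative / positive
    nde, pde = mu_neg(de, w), 1.0 - mu_neg(de, w)  # error rate: negative / positive
    # Firing strengths (product t-norm) paired with crisp consequents.
    rules = [(ne * nde, -(kp + kd)), (ne * pde, -(kp - kd)),
             (pe * nde, +(kp - kd)), (pe * pde, +(kp + kd))]
    num = sum(s * u for s, u in rules)
    den = sum(s for s, _ in rules)
    return num / den  # weighted-average defuzzification

print(fuzzy_pd(e=0.5, de=-0.1))  # positive torque pushing the error toward zero
```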

  2. Efficient computation method of Jacobian matrix

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1995-05-01

    As is well known, the elements of the Jacobian matrix are complex trigonometric functions of the joint angles, resulting in a matrix of staggering complexity when written out in full. This article shows how these difficulties can be overcome by using a velocity representation. The main point is that its recursive algorithm and computer algebra technologies allow analytical formulations to be derived with no human intervention. In particular, compared with previous results, the elements are greatly simplified through the effective use of frame transformations. Furthermore, in the case of a spherical wrist, the present approach is shown to be computationally the most efficient. Owing to these advantages, the proposed method is useful for studying kinematically peculiar properties such as singularity problems. (author)
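
    As a point of comparison for the symbolic formulation the abstract describes, a brute-force numerical Jacobian is easy to write down. The sketch below differentiates the forward kinematics of a hypothetical planar two-link arm by central differences; the arm, its link lengths and the finite-difference baseline are assumptions for illustration, not the paper's recursive velocity-representation algorithm.

```python
import numpy as np

def planar_fk(q, lengths=(1.0, 0.8)):
    """End-effector position of a planar two-link arm (illustrative model)."""
    l1, l2 = lengths
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jacobian_fd(f, q, eps=1e-6):
    """Central-difference Jacobian: the naive baseline that recursive or
    symbolic formulations aim to beat in both accuracy and cost."""
    q = np.asarray(q, dtype=float)
    J = np.zeros((f(q).size, q.size))
    for j in range(q.size):
        dq = np.zeros_like(q)
        dq[j] = eps
        J[:, j] = (f(q + dq) - f(q - dq)) / (2.0 * eps)
    return J

print(jacobian_fd(planar_fk, [0.3, 0.9]))
# At the fully stretched configuration (q2 = 0) the arm is singular:
# the smallest singular value of the Jacobian collapses toward zero.
print(np.linalg.svd(jacobian_fd(planar_fk, [0.3, 0.0]), compute_uv=False))
```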

  3. Computationally Efficient Prediction of Ionic Liquid Properties

    DEFF Research Database (Denmark)

    Chaban, V. V.; Prezhdo, O. V.

    2014-01-01

    Due to fundamental differences, room-temperature ionic liquids (RTILs) are significantly more viscous than conventional molecular liquids and require long simulation times. At the same time, RTILs remain in the liquid state over a much broader temperature range than ordinary liquids. We exploit ... to ambient temperatures. We numerically prove the validity of the proposed concept for the density and ionic diffusion of four different RTILs. This simple method enhances the computational efficiency of the existing simulation approaches as applied to RTILs by more than an order of magnitude.

  4. Structured Parallel Programming Patterns for Efficient Computation

    CERN Document Server

    McCool, Michael; Robison, Arch

    2012-01-01

    Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are primarily given using two of th...

  5. Octopus: embracing the energy efficiency of handheld multimedia computers

    NARCIS (Netherlands)

    Havinga, Paul J.M.; Smit, Gerardus Johannes Maria

    1999-01-01

    In the MOBY DICK project we develop and define the architecture of a new generation of mobile hand-held computers called Mobile Digital Companions. The Companions must meet several major requirements: high performance, energy efficiency, a notion of Quality of Service (QoS), small size, and low ...

  6. Energy efficiency indicators for high electric-load buildings

    Energy Technology Data Exchange (ETDEWEB)

    Aebischer, Bernard; Balmer, Markus A.; Kinney, Satkartar; Le Strat, Pascale; Shibata, Yoshiaki; Varone, Frederic

    2003-06-01

    Energy per unit of floor area is not an adequate indicator for energy efficiency in high electric-load buildings. For two activities, restaurants and computer centres, alternative indicators for energy efficiency are discussed.

  7. Energy Efficiency in Computing (1/2)

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    As manufacturers improve the silicon process, truly low energy computing is becoming a reality - both in servers and in the consumer space. This series of lectures covers a broad spectrum of aspects related to energy efficient computing - from circuits to datacentres. We will discuss common trade-offs and basic components, such as processors, memory and accelerators. We will also touch on the fundamentals of modern datacenter design and operation. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP and Google), as well as international research institutes, such as EPFL. Currently, Andrzej acts as a consultant on technology and innovation with TIK Services (http://tik.services), and runs a peer-to-peer lending start-up. NB! All Academic L...

  8. A primer on the energy efficiency of computing

    Energy Technology Data Exchange (ETDEWEB)

    Koomey, Jonathan G. [Research Fellow, Steyer-Taylor Center for Energy Policy and Finance, Stanford University (United States)

    2015-03-30

    The efficiency of computing at peak output has increased rapidly since the dawn of the computer age. This paper summarizes some of the key factors affecting the efficiency of computing in all usage modes. While there is still great potential for improving the efficiency of computing devices, we will need to alter how we do computing in the next few decades because we are finally approaching the limits of current technologies.

  9. Unconventional, High-Efficiency Propulsors

    DEFF Research Database (Denmark)

    Andersen, Poul

    1996-01-01

    The development of ship propellers has generally been characterized by the search for propellers with as high an efficiency as possible and, at the same time, low noise and vibration levels and little or no cavitation. This search has led to unconventional propulsors, like vane-wheel propulsors, contra-r...

  10. Energy Efficiency in Computing (2/2)

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    We will start the second day of our energy efficient computing series with a brief discussion of software and the impact it has on energy consumption. A second major point of this lecture will be the current state of research and a few future technologies, ranging from mainstream (e.g. the Internet of Things) to exotic. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP and Google), as well as international research institutes, such as EPFL. Currently, Andrzej acts as a consultant on technology and innovation with TIK Services (http://tik.services), and runs a peer-to-peer lending start-up. NB! All Academic Lectures are recorded. No webcast! Because of a problem of the recording equipment, this lecture will be repeated for recording pu...

  11. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  12. High-End Scientific Computing

    Science.gov (United States)

    EPA uses high-end scientific computing, geospatial services and remote sensing/imagery analysis to support EPA's mission. The Center for Environmental Computing (CEC) assists the Agency's program offices and regions to meet staff needs in these areas.

  13. Overview of Ecological Agriculture with High Efficiency

    OpenAIRE

    Huang, Guo-qin; Zhao, Qi-guo; Gong, Shao-lin; Shi, Qing-hua

    2012-01-01

    Covering the origin, connotation, characteristics, principles, patterns, and technologies of ecological agriculture with high efficiency, we conduct a comprehensive and systematic analysis and discussion of its theoretical and practical progress. (i) Ecological agriculture with high efficiency was first advanced in China in 1991. (ii) It highlights "high efficiency", "ecology", and "combination". (iii) Ecol...

  14. High-power, high-efficiency FELs

    International Nuclear Information System (INIS)

    Sessler, A.M.

    1989-04-01

    High power, high efficiency FELs require tapering, as the particles lose energy, so as to maintain resonance between the electromagnetic wave and the particles. They also require focusing of the particles (usually done with curved pole faces) and focusing of the electromagnetic wave (i.e., optical guiding). In addition, one must avoid transverse beam instabilities (primarily resistive wall) and longitudinal instabilities (i.e., sidebands). 18 refs., 7 figs., 3 tabs

  15. Computing in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Sarah; Devenish, Robin [Nuclear Physics Laboratory, Oxford University (United Kingdom)

    1989-07-15

    Computing in high energy physics has changed over the years from being something one did on a slide-rule, through early computers, then a necessary evil to the position today where computers permeate all aspects of the subject from control of the apparatus to theoretical lattice gauge calculations. The state of the art, as well as new trends and hopes, were reflected in this year's 'Computing In High Energy Physics' conference held in the dreamy setting of Oxford's spires. The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions. In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference's aim – 'to bring together high energy physicists and computer scientists'.

  16. Computing in high energy physics

    International Nuclear Information System (INIS)

    Smith, Sarah; Devenish, Robin

    1989-01-01

    Computing in high energy physics has changed over the years from being something one did on a slide-rule, through early computers, then a necessary evil to the position today where computers permeate all aspects of the subject from control of the apparatus to theoretical lattice gauge calculations. The state of the art, as well as new trends and hopes, were reflected in this year's 'Computing In High Energy Physics' conference held in the dreamy setting of Oxford's spires. The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions. In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference's aim – 'to bring together high energy physicists and computer scientists'

  17. High Performance Computing Multicast

    Science.gov (United States)

    2012-02-01

    Report fragment; the recoverable content is a reference, "A History of the Virtual Synchrony Replication Model," in Replication: Theory and Practice, Charron-Bost, B., Pedone, F., and Schiper, A. (Eds.), followed by acronym-glossary residue: HPC (High Performance Computing), IP/IPv4 (Internet Protocol version 4.0), IPMC (Internet Protocol Multicast), LAN (Local Area Network), MCMD (Dr. Multicast), MPI (Message Passing Interface).

  18. Implementation of a computationally efficient least-squares algorithm for highly under-determined three-dimensional diffuse optical tomography problems.

    Science.gov (United States)

    Yalavarthy, Phaneendra K; Lynch, Daniel R; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D

    2008-05-01

    Three-dimensional (3D) diffuse optical tomography is known to be a nonlinear, ill-posed and sometimes under-determined problem, where regularization is added to the minimization to allow convergence to a unique solution. In this work, a generalized least-squares (GLS) minimization method was implemented, which employs weight matrices for both data-model misfit and optical properties to include their variances and covariances, using a computationally efficient scheme. This allows inversion of a matrix whose dimension is dictated by the number of measurements, instead of by the number of imaging parameters. This increases the computation speed up to four times per iteration in most under-determined 3D imaging problems. An analytic derivation, using the Sherman-Morrison-Woodbury identity, is shown for this efficient alternative form, and it is proven to be equivalent, not only analytically, but also numerically. Equivalent alternative forms for other minimization methods, like Levenberg-Marquardt (LM) and Tikhonov, are also derived. Three-dimensional reconstruction results indicate that the poor recovery of quantitatively accurate values in 3D optical images can also be a characteristic of the reconstruction algorithm, along with the target size. Interestingly, usage of GLS reconstruction methods reduces error in the periphery of the image, as expected, and improves by 20% the ability to quantify local interior regions in terms of the recovered optical contrast, as compared to LM methods. Characterization of detector photo-multiplier tube noise has enabled the use of the GLS method for reconstructing experimental data and showed promise for better quantification of the target in 3D optical imaging. Use of these new alternative forms becomes effective when the number of imaging property parameters exceeds the number of measurements by a factor greater than 2.
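
    The algebra behind the reported speed-up can be made concrete with a toy example. The sketch below applies the same Sherman-Morrison-Woodbury rearrangement to a plain Tikhonov/LM update (without the GLS weight matrices, and with random stand-in data), so the factored system is m x m in measurement space rather than n x n in parameter space:

```python
import numpy as np

def tikhonov_update_dual(J, r, lam):
    """Solve (J^T J + lam*I) dx = J^T r via the identity
    (J^T J + lam*I)^-1 J^T = J^T (J J^T + lam*I)^-1,
    factoring an m x m system (m measurements) instead of n x n (n parameters)."""
    m = J.shape[0]
    y = np.linalg.solve(J @ J.T + lam * np.eye(m), r)  # m x m solve
    return J.T @ y                                      # lift back to parameter space

rng = np.random.default_rng(0)
J = rng.normal(size=(100, 10_000))   # 100 measurements, 10,000 voxel parameters
r = rng.normal(size=100)
dx = tikhonov_update_dual(J, r, lam=1e-2)

# Sanity check against the expensive primal form on a smaller problem.
Js = J[:, :200]
dx_primal = np.linalg.solve(Js.T @ Js + 1e-2 * np.eye(200), Js.T @ r)
assert np.allclose(dx_primal, tikhonov_update_dual(Js, r, lam=1e-2))
```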

  19. INSPIRED High School Computing Academies

    Science.gov (United States)

    Doerschuk, Peggy; Liu, Jiangjiang; Mann, Judith

    2011-01-01

    If we are to attract more women and minorities to computing we must engage students at an early age. As part of its mission to increase participation of women and underrepresented minorities in computing, the Increasing Student Participation in Research Development Program (INSPIRED) conducts computing academies for high school students. The…

  20. High Efficiency Room Air Conditioner

    Energy Technology Data Exchange (ETDEWEB)

    Bansal, Pradeep [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-01-01

    This project was undertaken as a CRADA project between UT-Battelle and General Electric Company, funded by the Department of Energy, to design and develop a high-efficiency room air conditioner. A number of novel elements were investigated to improve the energy efficiency of a state-of-the-art window air conditioner (WAC) with a base capacity of 10,000 BTU/h. One of the major modifications was downgrading its capacity from 10,000 BTU/h to 8,000 BTU/h by replacing the original compressor with a lower-capacity (8,000 BTU/h) but higher-efficiency compressor having an EER of 9.7, compared with 9.3 for the original; however, all heat exchangers from the original unit were retained to provide a higher EER. The other major modifications were: (i) the fan motor was replaced by a brushless high-efficiency ECM motor along with its fan housing, (ii) the capillary tube was replaced with a needle valve to better control the refrigerant flow and refrigerant set points, and (iii) the unit was tested with a drop-in environmentally friendly binary mixture of R32 (90% molar concentration)/R125 (10% molar concentration). The WAC was tested in the environmental chambers at ORNL at the AHAM/ASHRAE design rating conditions (outdoor: 95°F and 40% RH; indoor: 80°F and 51.5% RH). All these modifications together enhanced the EER of the WAC by up to 25%.

  1. High-efficient electron linacs

    International Nuclear Information System (INIS)

    Glavatskikh, K.V.; Zverev, B.V.; Kalyuzhnyj, V.E.; Morozov, V.L.; Nikolaev, S.V.; Plotnikov, S.N.; Sobenin, N.P.; Vovna, V.A.; Gryzlov, A.V.

    1993-01-01

    A comparative analysis is carried out of electron linacs (ELAs) based on travelling and standing waves, designed for 10 MeV energy and high efficiency. It is shown that, from the point of view of dimensions, an ELA with a standing wave, or one of combined type, is preferable. From the point of view of impedance characteristics, in any variant using a magnetron as the HF generator, special requirements on the accelerating structure must be met if no ferrite isolation is provided in the HF channel. 3 refs., 4 figs., 1 tab

  2. Quantum Computing and the Limits of the Efficiently Computable

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I'll discuss how computational complexity---the study of what can and can't be feasibly computed---has been interacting with physics in interesting and unexpected ways. I'll first give a crash course about computer science's P vs. NP problem, as well as about the capabilities and limits of quantum computers. I'll then touch on speculative models of computation that would go even beyond quantum computers, using (for example) hypothetical nonlinearities in the Schrodinger equation. Finally, I'll discuss BosonSampling ---a proposal for a simple form of quantum computing, which nevertheless seems intractable to simulate using a classical computer---as well as the role of computational complexity in the black hole information puzzle.

  3. Efficiently outsourcing multiparty computation under multiple keys

    NARCIS (Netherlands)

    Peter, Andreas; Tews, Erik; Katzenbeisser, Stefan

    2013-01-01

    Secure multiparty computation enables a set of users to evaluate certain functionalities on their respective inputs while keeping these inputs encrypted throughout the computation. In many applications, however, outsourcing these computations to an untrusted server is desirable, so that the server ...

  4. A Computationally Efficient Method for Polyphonic Pitch Estimation

    Directory of Open Access Journals (Sweden)

    Ruohua Zhou

    2009-01-01

    Full Text Available This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimate is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimates are removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate its high performance and computational efficiency.

  5. Power-Efficient Computing: Experiences from the COSA Project

    Directory of Open Access Journals (Sweden)

    Daniele Cesini

    2017-01-01

    Full Text Available Energy consumption is today one of the most relevant issues in operating HPC systems for scientific applications. The use of unconventional computing systems is therefore of great interest for several scientific communities looking for a better tradeoff between time-to-solution and energy-to-solution. In this context, the performance assessment of processors with a high ratio of performance per watt is necessary to understand how to realize energy-efficient computing systems for scientific applications using this class of processors. Computing On SOC Architecture (COSA) is a three-year project (2015-2017) funded by the Scientific Commission V of the Italian Institute for Nuclear Physics (INFN), which aims to investigate the performance and total cost of ownership offered by computing systems based on commodity low-power Systems on Chip (SoCs) and highly energy-efficient systems based on GP-GPUs. In this work, we present the results of the project, analyzing the performance of several scientific applications on several GPU- and SoC-based systems. We also describe the methodology we have used to measure energy performance and the tools we have implemented to monitor the power drained by applications while running.

  6. A computationally efficient approach for template matching-based ...

    Indian Academy of Sciences (India)

    In this paper, a new computationally efficient image registration method is ... The proposed method requires less computational time as compared to traditional methods.

  7. An energy-efficient failure detector for vehicular cloud computing.

    Science.gov (United States)

    Liu, Jiaxi; Wu, Zhibo; Dong, Jian; Wu, Jin; Wen, Dongxin

    2018-01-01

    Failure detectors are one of the fundamental components for maintaining high availability in vehicular cloud computing. In vehicular cloud computing, many RSUs are deployed along the road to improve connectivity. Many of them are equipped with solar batteries due to the unavailability or excessive expense of wired electrical power, so it is important to reduce the battery consumption of the RSUs. However, existing failure detection algorithms are not designed to save the battery consumption of RSUs. To solve this problem, a new energy-efficient failure detector, 2E-FD, is proposed specifically for vehicular cloud computing. 2E-FD not only provides acceptable failure detection service, but also saves the battery consumption of RSUs. Comparative experiments show that our failure detector has better performance in terms of speed, accuracy and battery consumption.
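
    For orientation, the basic mechanism underneath any such detector is a heartbeat timeout. The sketch below is a generic timeout-based detector only; the 2E-FD algorithm additionally adapts its behaviour to the RSU battery state, which this illustration does not model.

```python
import time

class HeartbeatDetector:
    """Generic heartbeat/timeout failure detector (illustrative baseline;
    not the 2E-FD algorithm, which also accounts for battery consumption)."""

    def __init__(self, timeout_s=3.0):
        self.timeout_s = timeout_s
        self.last_seen = {}

    def heartbeat(self, node_id):
        # Record the arrival time of a heartbeat from a monitored node.
        self.last_seen[node_id] = time.monotonic()

    def suspects(self):
        # Nodes whose last heartbeat is older than the timeout are suspected.
        now = time.monotonic()
        return [n for n, t in self.last_seen.items() if now - t > self.timeout_s]

fd = HeartbeatDetector(timeout_s=0.1)
fd.heartbeat("rsu-1")
fd.heartbeat("rsu-2")
time.sleep(0.15)
fd.heartbeat("rsu-2")   # rsu-2 stays fresh, rsu-1 times out
print(fd.suspects())    # ['rsu-1']
```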

  8. Computing in high energy physics

    International Nuclear Information System (INIS)

    Hertzberger, L.O.; Hoogland, W.

    1986-01-01

    This book deals with advanced computing applications in physics, and in particular in high energy physics environments. The main subjects covered are networking, vector and parallel processing, and embedded systems. Also examined are topics such as operating systems, future computer architectures and commercial computer products. The book presents solutions that are foreseen as coping, in the future, with computing problems in experimental and theoretical high energy physics. In the experimental environment, the large amounts of data to be processed pose special problems both on-line and off-line. For on-line data reduction, embedded special-purpose computers, often used for trigger applications, are applied. For off-line processing, parallel computers such as emulator farms and the cosmic cube may be employed. The analysis of these topics is therefore a main feature of this volume.

  9. High-efficiency photovoltaic cells

    Science.gov (United States)

    Yang, H.T.; Zehr, S.W.

    1982-06-21

    High efficiency solar converters comprised of a two-cell, non-lattice-matched, monolithically stacked semiconductor configuration, using optimum pairs of cells having bandgaps in the ranges 1.6 to 1.7 eV and 0.95 to 1.1 eV, and a method of fabrication thereof, are disclosed. The high-bandgap subcells are fabricated using metal organic chemical vapor deposition (MOCVD), liquid phase epitaxy (LPE) or molecular beam epitaxy (MBE) to produce the required AlGaAs layers of optimized composition, thickness and doping for high-performance, heteroface homojunction devices. The low-bandgap subcells are similarly fabricated from AlGa(As)Sb compositions by LPE, MBE or MOCVD. These subcells are then coupled to form a monolithic structure by an appropriate bonding technique which also forms the required transparent intercell ohmic contact (IOC) between the two subcells. Improved ohmic contacts to the high-bandgap semiconductor structure can be formed by vacuum evaporating two suitable metal or semiconductor materials which react during laser annealing to form a low-bandgap semiconductor that provides a low-contact-resistance structure.

  10. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model in which many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip utilizing four or more processors integrated into one die, each having full access to all system resources. This enables adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that maximizes the throughput of packet communications between nodes and minimizes latency.

  11. Convolutional networks for fast, energy-efficient neuromorphic computing.

    Science.gov (United States)

    Esser, Steven K; Merolla, Paul A; Arthur, John V; Cassidy, Andrew S; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J; McKinstry, Jeffrey L; Melano, Timothy; Barch, Davis R; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D; Modha, Dharmendra S

    2016-10-11

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

  12. Convolutional networks for fast, energy-efficient neuromorphic computing

    Science.gov (United States)

    Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.

    2016-01-01

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489

  13. Role of computational efficiency in process simulation

    Directory of Open Access Journals (Sweden)

    Kurt Strand

    1989-07-01

    Full Text Available It is demonstrated how efficient numerical algorithms may be combined to yield a powerful environment for analysing and simulating dynamic systems. The importance of using efficient numerical algorithms is emphasized and demonstrated through examples from the petrochemical industry.

  14. Efficient Skyline Computation in Structured Peer-to-Peer Systems

    DEFF Research Database (Denmark)

    Cui, Bin; Chen, Lijiang; Xu, Linhao

    2009-01-01

    An increasing number of large-scale applications exploit peer-to-peer network architecture to provide highly scalable and flexible services. Among these applications, data management in peer-to-peer systems is one of the interesting domains. In this paper, we investigate the multidimensional skyline computation problem on a structured peer-to-peer network. In order to achieve low communication cost and quick response time, we utilize the iMinMax(θ) method to transform high-dimensional data to one-dimensional values and distribute the data in a structured peer-to-peer network called BATON. Thereafter, we propose a progressive algorithm with an adaptive filter technique for efficient skyline computation in this environment. We further discuss some optimization techniques for the algorithm, and summarize the key principles of our algorithm into a query routing protocol with detailed analysis.
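
    The dimensionality-reducing step can be sketched compactly. The function below is one common statement of the iMinMax(theta) transform from the indexing literature (the exact tuning used in the paper may differ): each point in [0,1]^d is mapped to the "edge" dimension closest to it, offset by its value in that dimension.

```python
def iminmax(point, theta=0.0):
    """Map a point in [0,1]^d to a single value: dimension index + coordinate
    of the dominating (min or max) edge; theta biases the choice of edge."""
    d_min, x_min = min(enumerate(point), key=lambda kv: kv[1])
    d_max, x_max = max(enumerate(point), key=lambda kv: kv[1])
    if x_min + theta < 1.0 - x_max:
        return d_min + x_min   # point lies closest to its "min" edge
    return d_max + x_max       # point lies closest to its "max" edge

print(iminmax([0.1, 0.7, 0.4]))         # 0.1  (min edge, dimension 0)
print(iminmax([0.1, 0.7, 0.4], 0.5))    # 1.7  (max edge, dimension 1)
```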

  15. On efficiency of fire simulation realization: parallelization with greater number of computational meshes

    Science.gov (United States)

    Valasek, Lukas; Glasa, Jan

    2017-12-01

    Current fire simulation systems are capable of utilizing the advantages of available high-performance computer (HPC) platforms and can model fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC computer cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the computational resources of the cluster using a greater number of computational cores. Simulation results indicate that if the number of cores used is not a multiple of the total number of cores per cluster node, there are allocation strategies that provide more efficient calculations.

  16. Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications

    Science.gov (United States)

    Sun, Xian-He

    1997-01-01

    Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin 2000, have successfully delivered high-performance computing power for solving some of the so-called "grand-challenge" problems. Despite this initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan was: 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate the newly developed algorithms into actual simulation packages. This plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested, based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in the numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm ...

  17. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping

    2015-06-24

    © 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n^3). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.
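
    The core idea, replacing the full n-term basis expansion with a small response-informed subset, can be sketched in a few lines. The code below is a simplified stand-in: it stratifies the basis centers by response value and fits a ridge-penalized radial basis expansion, which captures the O(n k^2)-instead-of-O(n^3) structure but is not the authors' exact sampling scheme or spline penalty.

```python
import numpy as np

def fit_sampled_basis(x, y, n_basis=50, lam=1e-3, scale=0.1, seed=0):
    """Penalized regression on a reduced basis whose centers are sampled
    using the response values (stratified on y); a simplified stand-in
    for adaptive basis sampling."""
    rng = np.random.default_rng(seed)
    order = np.argsort(y)                          # stratify on the response
    strata = np.array_split(order, n_basis)
    centers = x[[rng.choice(s) for s in strata]]   # one center per stratum
    # Gaussian radial basis; solving the k x k normal equations costs
    # O(n k^2 + k^3) with k = n_basis, instead of O(n^3) for the full basis.
    Phi = np.exp(-(((x[:, None] - centers[None, :]) / scale) ** 2))
    coef = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_basis), Phi.T @ y)
    return centers, coef

x = np.linspace(0.0, 1.0, 5000)
y = np.sin(6 * np.pi * x) + 0.3 * np.random.default_rng(1).normal(size=x.size)
centers, coef = fit_sampled_basis(x, y)
yhat = np.exp(-(((x[:, None] - centers[None, :]) / 0.1) ** 2)) @ coef
```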

  18. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping; Huang, Jianhua Z.; Zhang, Nan

    2015-01-01

    © 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n^3). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.

  19. Efficient Parallel Engineering Computing on Linux Workstations

    Science.gov (United States)

    Lou, John Z.

    2010-01-01

    A C software module has been developed that creates lightweight processes (LWPs) dynamically to achieve parallel computing performance in a variety of engineering simulation and analysis applications to support NASA and DoD project tasks. The required interface between the module and the application it supports is simple, minimal and almost completely transparent to the user applications, and it can achieve nearly ideal computing speed-up on multi-CPU engineering workstations of all operating system platforms. The module can be integrated into an existing application (C, C++, Fortran and others) either as part of a compiled module or as a dynamically linked library (DLL).

  20. Computationally efficient prediction of area per lipid

    DEFF Research Database (Denmark)

    Chaban, Vitaly V.

    2014-01-01

    ... dynamics increases exponentially with respect to temperature. APL dependence on temperature is linear over the entire temperature range. I provide numerical evidence that the thermal expansion coefficient of a lipid bilayer can be computed at elevated temperatures and extrapolated to the temperature of interest...
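
    The extrapolation recipe is simple enough to state as code. The sketch below uses hypothetical area-per-lipid (APL) values at elevated temperatures, where sampling converges quickly, and exploits the stated linearity of APL in temperature to recover both the thermal expansion slope and the value at a lower temperature of interest; the numbers are placeholders, not data from the paper.

```python
import numpy as np

# Hypothetical APL values (nm^2 per lipid) from short simulations at
# elevated temperatures, where lipid dynamics are fast.
T_hot   = np.array([360.0, 380.0, 400.0, 420.0])   # K
apl_hot = np.array([0.655, 0.668, 0.681, 0.694])   # illustrative values

# APL is linear in T over the range of interest, so a first-degree fit
# yields the expansion slope and extrapolates to physiological temperature.
slope, intercept = np.polyfit(T_hot, apl_hot, 1)
apl_310 = slope * 310.0 + intercept
print(f"dAPL/dT = {slope:.5f} nm^2/K, APL(310 K) ~ {apl_310:.3f} nm^2")
```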

  1. Efficient multigrid computation of steady hypersonic flows

    NARCIS (Netherlands)

    Koren, B.; Hemker, P.W.; Murthy, T.K.S.

    1991-01-01

    In steady hypersonic flow computations, Newton iteration as a local relaxation procedure and nonlinear multigrid iteration as an acceleration procedure may both easily fail. In the present chapter, some remedies are presented for overcoming these problems. The equations considered are the steady ...

  2. Efficient Computations and Representations of Visible Surfaces.

    Science.gov (United States)

    1979-12-01

    ... position as stated. The smooth contour generator may lie along a sharp ridge, for instance. [The remainder of this record is OCR residue of references: "From understanding computation to understanding neural circuitry," Neurosci. Res. Prog. Bull. 13, 470-488; Metelli, F. 1970, "An algebraic development of ..."]

  3. Synthesis of Efficient Structures for Concurrent Computation.

    Science.gov (United States)

    1983-10-01

    ... formal presentation of these techniques, called virtualisation and aggregation, can be found in [King-83]. [The remainder of this record is OCR residue of a table of contents and figure list: Census Functions; User-Assisted Aggregation; Parallel ...; Simple Parallel Structure for Broadcasting; Internal Structure of a Prefix Computation Network.]

  4. Computationally efficient methods for digital control

    NARCIS (Netherlands)

    Guerreiro Tome Antunes, D.J.; Hespanha, J.P.; Silvestre, C.J.; Kataria, N.; Brewer, F.

    2008-01-01

    The problem of designing a digital controller is considered, with the novelty of explicitly taking into account the computational cost of the controller implementation. A class of controller emulation methods inspired by numerical analysis is proposed. Through various examples it is shown that these ...

  5. Towards highly efficient water photoelectrolysis

    Science.gov (United States)

    Elavambedu Prakasam, Haripriya

    ethylene glycol resulted in remarkable growth characteristics of titania nanotube arrays, hexagonally close-packed and up to 1 mm in length, with tube aspect ratios of approximately 10,000. For the first time, complete anodization of the starting titanium foil has been demonstrated, resulting in back-to-back nanotube array membranes ranging from 360 µm to 1 mm in length. The nanotubes exhibited growth rates of up to 15 µm/hr. A detailed study on the factors affecting the growth rate and nanotube dimensions is presented. It is suggested that faster high-field ionic conduction through a thinner barrier layer is responsible for the higher growth rates observed in electrolytes containing ethylene glycol. Methods to fabricate free-standing titania nanotube array membranes ranging in thickness from 50 µm to 1000 µm have also been an outcome of this dissertation. In an effort to combine the charge transport properties of titania with the light absorption properties of iron (III) oxide, films comprised of vertically oriented Ti-Fe-O nanotube arrays on FTO-coated glass substrates have been successfully synthesized in ethylene glycol electrolytes. Depending upon the Fe content, the bandgap of the resulting films varied from about 3.26 to 2.17 eV. The Ti-Fe oxide nanotube array films demonstrated a photocurrent of 2 mA/cm² under global AM 1.5 illumination with a 1.2% (two-electrode) photoconversion efficiency, demonstrating a sustained, time-energy normalized hydrogen evolution rate by water splitting of 7.1 mL/W·hr in a 1 M KOH solution with a platinum counter electrode under an applied bias of 0.7 V. The Ti-Fe-O material architecture demonstrates properties useful for hydrogen generation by water photoelectrolysis and, more importantly, this dissertation demonstrates that the general nanotube-array synthesis technique can be extended to other ternary oxide compositions of interest for water photoelectrolysis.

  6. Computer proficiency questionnaire: assessing low and high computer proficient seniors.

    Science.gov (United States)

    Boot, Walter R; Charness, Neil; Czaja, Sara J; Sharit, Joseph; Rogers, Wendy A; Fisk, Arthur D; Mitzner, Tracy; Lee, Chin Chin; Nair, Sankaran

    2015-06-01

    Computers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies, from non-computer users to extremely skilled users. To assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ. The CPQ demonstrated excellent reliability (Cronbach's α = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions. The CPQ and CPQ-12 are useful tools for gauging computer proficiency for training and research purposes, even among older adults with low computer proficiency.

  7. Efficient Computer Implementations of Fast Fourier Transforms.

    Science.gov (United States)

    1980-12-01

    fit in computer? Yes, continue. (9) Determine the fastest algorithm between the WFTA and the PFA from Table 4.6. For N = 420: WFTA, 1296 multiplications and 11352 additions; PFA, 2528 multiplications and 10956 additions. ... real adds = 24(tN/4) + 2(3tN/4) = 15tN/2 (G.8). All odd prime factors equal to or greater than 5 use the general transform section.
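
    For orientation, the baseline that the WFTA and PFA improve upon in multiplication count is the radix-2 Cooley-Tukey FFT; a textbook recursive version (not code from the report) is sketched below.

        import cmath

        def fft(x):
            # Radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
            n = len(x)
            if n == 1:
                return list(x)
            even, odd = fft(x[0::2]), fft(x[1::2])
            out = [0j] * n
            for k in range(n // 2):
                t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle factor
                out[k], out[k + n // 2] = even[k] + t, even[k] - t
            return out

        # An 8-point impulse transforms to a flat spectrum of ones.
        print(fft([1, 0, 0, 0, 0, 0, 0, 0]))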

  8. HIGH-EFFICIENCY INFRARED RECEIVER

    Directory of Open Access Journals (Sweden)

    A. K. Esman

    2016-01-01

    Recent research and development show promising use of high-performance solid-state receivers of electromagnetic radiation, based on low-barrier Schottky diodes. The approach of designing receivers around delta-doped low-barrier Schottky diodes with beam leads, operated without bias, is developing especially actively, because for uncooled receivers of microwave radiation these diodes have virtually no competition. The purpose of this work is to improve the main parameters and characteristics that determine the practical relevance of receivers of mid-infrared electromagnetic radiation at room temperature, by modifying the electrode configuration of the diode and optimizing the distance between the electrodes. The proposed design of an integrated receiver of mid-infrared radiation based on low-barrier Schottky diodes with beam leads makes it possible to tune its main parameters and characteristics effectively. Simulation of the electromagnetic characteristics of the proposed receiver, using the HFSS software package (a finite element solver for electromagnetic fields on arbitrary geometries with given material properties), has shown that when the inner parts of the electrodes of the low-barrier Schottky diode are given a concentric elliptical convex-concave shape, the reflection losses can be reduced to -57.75 dB and the standing wave ratio to 1.003, while increasing the directivity up to 23 at a wavelength of 6.09 µm. In this case, the rounding radii of the inner parts of the anode and cathode electrodes are 212 nm and 318 nm, respectively, and the gap between them is 106 nm. These parameters will improve the efficiency of prospective infrared optical and electronic equipment intended for work in the mid-infrared wavelength range.

  9. High-efficiency wind turbine

    Science.gov (United States)

    Hein, L. A.; Myers, W. N.

    1980-01-01

    A vertical-axis wind turbine incorporates several unique features to extract more energy from the wind, increasing efficiency by 20% over conventional propeller-driven units. The system also features devices that utilize solar energy or chimney effluents during periods of no wind.

  10. High efficiency, long life terrestrial solar panel

    Science.gov (United States)

    Chao, T.; Khemthong, S.; Ling, R.; Olah, S.

    1977-01-01

    The design of a high-efficiency, long-life terrestrial module was completed. It utilized 256 rectangular, high-efficiency solar cells to achieve high packing density and electrical output. Tooling for the fabrication of solar cells was in-house, and evaluation of the cell performance was begun. Based on the power output analysis, the goal of a 13%-efficiency module was achievable.

  11. High efficiency turbine blade coatings

    Energy Technology Data Exchange (ETDEWEB)

    Youchison, Dennis L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gallis, Michail A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-06-01

    The development of advanced thermal barrier coatings (TBCs) of yttria stabilized zirconia (YSZ) that exhibit lower thermal conductivity through better control of electron beam - physical vapor deposition (EB-PVD) processing is of prime interest to both the aerospace and power industries. This report summarizes the work performed under a two-year Lab-Directed Research and Development (LDRD) project (38664) to produce lower thermal conductivity, graded-layer thermal barrier coatings for turbine blades in an effort to increase the efficiency of high temperature gas turbines. This project was sponsored by the Nuclear Fuel Cycle Investment Area. Therefore, particular importance was given to the processing of the large blades required for industrial gas turbines proposed for use in the Brayton cycle of nuclear plants powered by high temperature gas-cooled reactors (HTGRs). During this modest (~1 full-time equivalent (FTE)) project, the processing technology was developed to create graded TBCs by coupling ion beam-assisted deposition (IBAD) with substrate pivoting in the alumina-YSZ system. The Electron Beam - 1200 kW (EB-1200) PVD system was used to deposit a variety of TBC coatings with micron layered microstructures and reduced thermal conductivity below 1.5 W/m·K. The use of IBAD produced fully stoichiometric coatings at a reduced substrate temperature of 600°C and a reduced oxygen background pressure of 0.1 Pa. IBAD was also used to successfully demonstrate the transitioning of amorphous PVD-deposited alumina to the α-phase alumina required as an oxygen diffusion barrier and for good adhesion to the substrate Ni2Al3 bondcoat. This process replaces the time-consuming thermally grown oxide formation required before the YSZ deposition. In addition to the process technology, Direct Simulation Monte Carlo plume modeling and spectroscopic characterization of the PVD plumes were performed. The project consisted of five tasks. These included the

  12. The Efficient Use of Vector Computers with Emphasis on Computational Fluid Dynamics : a GAMM-Workshop

    CERN Document Server

    Gentzsch, Wolfgang

    1986-01-01

    The GAMM Committee for Numerical Methods in Fluid Mechanics organizes workshops which should bring together experts of a narrow field of computational fluid dynamics (CFD) to exchange ideas and experiences in order to speed up the development in this field. In this sense it was suggested that a workshop should treat the solution of CFD problems on vector computers. Thus we organized a workshop with the title "The efficient use of vector computers with emphasis on computational fluid dynamics". The workshop took place at the Computing Centre of the University of Karlsruhe, March 13-15, 1985. Participation was restricted to 22 people from 7 countries, and 18 papers were presented. In the announcement of the workshop we wrote: "Fluid mechanics has actively stimulated the development of superfast vector computers like the CRAY's or CYBER 205. Now these computers in turn stimulate the development of new algorithms which result in a high degree of vectorization (scalar/vectorized execution-time). But w...

  13. Evaluating performance of high efficiency mist eliminators

    Energy Technology Data Exchange (ETDEWEB)

    Waggoner, Charles A.; Parsons, Michael S.; Giffin, Paxton K. [Mississippi State University, Institute for Clean Energy Technology, 205 Research Blvd, Starkville, MS (United States)

    2013-07-01

    material on each element at final dP. Plots of overall filtering efficiencies for DOP (spherical aerosol) and dry surrogate (aspherical aerosols) at specified dPs were computed for each filter. Filtering efficiencies were determined as a function of particle size. Curves were also generated showing the most penetrating particle size as a function of dP. A preliminary set of tests was conducted to evaluate spray location, duration, pressure, and wash volume for in-place cleaning the interior surface (reducing dP) of the HEME element. A variety of nozzle designs were evaluated and test results demonstrated the potential to overload the HEME (saturate filter medium) resulting in very high dPs and extensive drain times. At least one combination of spray nozzle design, spray location on the surface of the element, and spray time/pressure was successful in achieving extension of operational life. (authors)

  14. High accuracy ion optics computing

    International Nuclear Information System (INIS)

    Amos, R.J.; Evans, G.A.; Smith, R.

    1986-01-01

    Computer simulation of focused ion beams for surface analysis of materials by SIMS, or for microfabrication by ion beam lithography, plays an important role in the design of low energy ion beam transport and optical systems. Many computer packages currently available are limited in their applications, being inaccurate or inappropriate for a number of practical purposes. This work describes an efficient and accurate computer programme which has been developed and tested for use on medium-sized machines. The programme is written in Algol 68 and models the behaviour of a beam of charged particles through an electrostatic system. A variable grid finite difference method is used with a unique data structure to calculate the electric potential in an axially symmetric region, for arbitrarily shaped boundaries. Emphasis has been placed upon finding an economic method of solving the resulting set of sparse linear equations in the calculation of the electric field, and several such methods are described. Applications include individual ion lenses, extraction optics for ions in surface analytical instruments and the design of columns for ion beam lithography. Computational results have been compared with analytical calculations and with some data obtained from individual einzel lenses. (author)
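
    The programme itself (variable-grid finite differences in axisymmetric coordinates, written in Algol 68) is not shown in the record; the sketch below only illustrates the underlying idea in a much simpler setting, a uniform Cartesian grid relaxed by successive over-relaxation, and every geometry value in it is hypothetical.

        import numpy as np

        def solve_laplace(V, fixed, omega=1.8, tol=1e-4, max_iter=2000):
            # SOR iteration for Laplace's equation on a uniform 2D grid.
            # V holds potentials; `fixed` marks electrode/boundary nodes.
            for _ in range(max_iter):
                max_dv = 0.0
                for i in range(1, V.shape[0] - 1):
                    for j in range(1, V.shape[1] - 1):
                        if fixed[i, j]:
                            continue
                        avg = 0.25 * (V[i+1, j] + V[i-1, j] + V[i, j+1] + V[i, j-1])
                        dv = omega * (avg - V[i, j])
                        V[i, j] += dv
                        max_dv = max(max_dv, abs(dv))
                if max_dv < tol:
                    break
            return V

        # Usage: a crude electrostatic-lens-like geometry (volts).
        V = np.zeros((30, 60))
        fixed = np.zeros_like(V, dtype=bool)
        fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True  # grounded walls
        V[12:18, 25:35] = 1000.0            # hypothetical electrode at 1 kV
        fixed[12:18, 25:35] = True
        V = solve_laplace(V, fixed)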

  15. Computer controlled high voltage system

    Energy Technology Data Exchange (ETDEWEB)

    Kunov, B; Georgiev, G; Dimitrov, L [and others]

    1996-12-31

    A multichannel computer-controlled high-voltage power supply system is developed. The basic technical parameters of the system are: output voltage, 100-3000 V; output current, 0-3 mA; maximum number of channels in one crate, 78.

  16. High Efficiency Ka-Band Spatial Combiner

    Directory of Open Access Journals (Sweden)

    D. Passi

    2014-12-01

    A Ka-band, high-efficiency, small-size spatial combiner (SPC) is proposed in this paper, which uses innovative matched quadruple fin-line-to-microstrip (FLuS) transitions. To the authors' best knowledge, no such FLuS transitions have been reported in the literature before. These transitions are inserted into a WR28 waveguide T-junction in order to allow the integration of 16 monolithic microwave integrated circuit (MMIC) solid-state power amplifiers (SSPAs). A computational electromagnetic model using the finite element method has been implemented. A mean insertion loss of 2 dB is achieved, with a return loss better than 10 dB in the 31-37 GHz bandwidth.

  17. High efficiency motor selection handbook

    Science.gov (United States)

    McCoy, Gilbert A.; Litman, Todd; Douglass, John G.

    1990-10-01

    Substantial reductions in energy and operational costs can be achieved through the use of energy-efficient electric motors. A handbook was compiled to help industry identify opportunities for cost-effective application of these motors. It covers the economic and operational factors to be considered when motor purchase decisions are being made. Its audience includes plant managers, plant engineers, and others interested in energy management or preventative maintenance programs.

  18. Investigating the Multi-memetic Mind Evolutionary Computation Algorithm Efficiency

    Directory of Open Access Journals (Sweden)

    M. K. Sakharov

    2017-01-01

    In solving practically significant problems of global optimization, the objective function is often of high dimensionality and computational complexity, and of nontrivial landscape as well. Studies show that one optimization method is often not enough to solve such problems efficiently: hybridization of several optimization methods is necessary. One of the most promising contemporary trends in this field is memetic algorithms (MAs), which can be viewed as a combination of population-based search for a global optimum and procedures for local refinement of solutions (memes), provided by a synergy. Since there are relatively few theoretical studies concerning the MA configuration advisable for black-box optimization problems, many researchers tend towards adaptive algorithms, which select the most efficient local optimization methods for certain domains of the search space. The article proposes a multi-memetic modification of the simple SMEC algorithm, using random hyper-heuristics. It presents the software implementation of the algorithm and the memes used (the Nelder-Mead method, the method of random hyper-sphere surface search, and the Hooke-Jeeves method), and conducts a comparative study of the efficiency of the proposed algorithm depending on the set and the number of memes. The study has been carried out using the Rastrigin, Rosenbrock, and Zakharov multidimensional test functions; computational experiments have been carried out for all possible combinations of memes and for each meme individually. According to the results of the study, conducted by the multi-start method, the combinations of memes comprising the Hooke-Jeeves method were the most successful. These results reflect the rapid convergence of that method to a local optimum in comparison with the other memes, since all methods perform at most a fixed number of iterations. The analysis of the average number of iterations shows that using the most efficient sets of memes allows us to find the optimal
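
    A toy version of the multi-memetic loop is easy to state. The sketch below substitutes SciPy's Nelder-Mead and Powell methods for the paper's meme set (SciPy provides no Hooke-Jeeves or hyper-sphere search) and reduces the SMEC population dynamics to mutate-and-select, so it shows only the random hyper-heuristic idea, not the published algorithm.

        import numpy as np
        from scipy.optimize import minimize

        def rastrigin(x):
            return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

        def memetic_search(dim=5, pop=20, gens=50, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.uniform(-5.12, 5.12, (pop, dim))
            memes = ["Nelder-Mead", "Powell"]              # stand-in meme set
            for _ in range(gens):
                for i in range(pop):
                    child = X[i] + rng.normal(0.0, 0.3, dim)   # global exploration
                    meme = memes[rng.integers(len(memes))]     # random hyper-heuristic
                    res = minimize(rastrigin, child, method=meme,
                                   options={"maxiter": 50})    # local refinement (meme)
                    if res.fun < rastrigin(X[i]):              # elitist replacement
                        X[i] = res.x
            best = min(X, key=rastrigin)
            return best, rastrigin(best)

        print(memetic_search())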

  19. A Computational Framework for Efficient Low Temperature Plasma Simulations

    Science.gov (United States)

    Verma, Abhishek Kumar; Venkattraman, Ayyaswamy

    2016-10-01

    Over the past years, scientific computing has emerged as an essential tool for the investigation and prediction of low temperature plasma (LTP) applications, which include electronics, nanomaterial synthesis, metamaterials, etc. To further explore LTP behavior with greater fidelity, we present a computational toolbox developed to perform LTP simulations. This framework allows us to enhance our understanding of multiscale plasma phenomena using high performance computing tools, mainly based on the OpenFOAM FVM distribution. Although aimed at microplasma simulations, the modular framework is able to perform multiscale, multiphysics simulations of physical systems comprising LTPs. Salient introductory features include the capability to perform parallel, 3D simulations of LTP applications on unstructured meshes. Performance of the solver is tested on numerical results assessing the accuracy and efficiency of benchmark problems in microdischarge devices. Numerical simulation of a microplasma reactor at atmospheric pressure with hemispherical dielectric-coated electrodes will be discussed, providing an overview of the applicability and future scope of this framework.

  20. Energy efficiency of computer power supply units - Final report

    Energy Technology Data Exchange (ETDEWEB)

    Aebischer, B. [cepe - Centre for Energy Policy and Economics, Swiss Federal Institute of Technology Zuerich, Zuerich (Switzerland); Huser, H. [Encontrol GmbH, Niederrohrdorf (Switzerland)

    2002-11-15

    This final report for the Swiss Federal Office of Energy (SFOE) takes a look at the efficiency of computer power supply units, which decreases rapidly during average computer use. The background and the purpose of the project are examined. The power supplies for personal computers are discussed and the testing arrangement used is described. Efficiency, power-factor and operating points of the units are examined. Potentials for improvement and measures to be taken are discussed. Also, action to be taken by those involved in the design and operation of such power units is proposed. Finally, recommendations for further work are made.

  1. Teuchos C++ memory management classes, idioms, and related topics, the complete reference : a comprehensive strategy for safe and efficient memory management in C++ for high performance computing.

    Energy Technology Data Exchange (ETDEWEB)

    Bartlett, Roscoe Ainsworth

    2010-05-01

    The ubiquitous use of raw pointers in higher-level code is the primary cause of all memory usage problems and memory leaks in C++ programs. This paper describes what might be considered a radical approach to the problem which is to encapsulate the use of all raw pointers and all raw calls to new and delete in higher-level C++ code. Instead, a set of cooperating template classes developed in the Trilinos package Teuchos are used to encapsulate every use of raw C++ pointers in every use case where it appears in high-level code. Included in the set of memory management classes is the typical reference-counted smart pointer class similar to boost::shared_ptr (and therefore C++0x std::shared_ptr). However, what is missing in boost and the new standard library are non-reference counted classes for remaining use cases where raw C++ pointers would need to be used. These classes have a debug build mode where nearly all programmer errors are caught and gracefully reported at runtime. The default optimized build mode strips all runtime checks and allows the code to perform as efficiently as raw C++ pointers with reasonable usage. Also included is a novel approach for dealing with the circular references problem that imparts little extra overhead and is almost completely invisible to most of the code (unlike the boost and therefore C++0x approach). Rather than being a radical approach, encapsulating all raw C++ pointers is simply the logical progression of a trend in the C++ development and standards community that started with std::auto_ptr and is continued (but not finished) with std::shared_ptr in C++0x. Using the Teuchos reference-counted memory management classes allows one to remove unnecessary constraints in the use of objects by removing arbitrary lifetime ordering constraints which are a type of unnecessary coupling [23]. The code one writes with these classes will be more likely to be correct on first writing, will be less likely to contain silent (but deadly) memory

  2. Efficient Unsteady Flow Visualization with High-Order Access Dependencies

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru

    2016-04-19

    We present a novel model based on high-order access dependencies for efficient pathline computation in unsteady flow visualization. By taking longer access sequences into account to model more sophisticated data access patterns in particle tracing, our method greatly improves the accuracy and reliability of data access prediction. In our work, high-order access dependencies are calculated by tracing uniformly-seeded pathlines in both forward and backward directions in a preprocessing stage. The effectiveness of our proposed approach is demonstrated through a parallel particle tracing framework with high-order data prefetching. Results show that our method achieves higher data locality and hence improves the efficiency of pathline computation.
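
    The essence of a high-order access-dependency model is a table keyed by the last k accessed data blocks; a minimal sketch follows, in which the class name and every identifier are illustrative rather than the paper's API.

        from collections import Counter, defaultdict

        class AccessPredictor:
            # k-th order access-dependency table for data-block prefetching,
            # trained on block-access sequences recorded from seed pathlines.
            def __init__(self, order=3):
                self.order = order
                self.table = defaultdict(Counter)

            def train(self, sequence):
                k = self.order
                for i in range(len(sequence) - k):
                    self.table[tuple(sequence[i:i + k])][sequence[i + k]] += 1

            def predict(self, recent, n=2):
                # Up to n most likely next blocks given the recent history.
                counts = self.table.get(tuple(recent[-self.order:]), Counter())
                return [b for b, _ in counts.most_common(n)]

        # Usage: block ids visited by preprocessing pathlines.
        p = AccessPredictor(order=3)
        p.train([1, 2, 3, 4, 2, 3, 4, 5, 3, 4, 5, 6])
        print(p.predict([2, 3, 4]))   # blocks observed to follow (2, 3, 4)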

  3. High performance computing in linear control

    International Nuclear Information System (INIS)

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both theory and applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory. Control Theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and have been available in recent years. These supercomputers offer very high speed in computations. Highly efficient software, based on powerful algorithms, has been developed to use on these advanced computers, and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments

  4. Efficient universal computing architectures for decoding neural activity.

    Directory of Open Access Journals (Sweden)

    Benjamin I Rapoport

    The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain-machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain-machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than [Formula: see text]. We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion

  5. High efficiency focus neutron generator

    Science.gov (United States)

    Sadeghi, H.; Amrollahi, R.; Zare, M.; Fazelpour, S.

    2017-12-01

    In the present paper, a new idea for increasing the neutron yield of plasma focus devices is investigated and the results are presented. Many studies indicate that more than 90% of the neutrons in plasma focus devices are produced by beam-target interactions and only 10% by thermonuclear reactions. With the proposed idea, the number of collisions between deuteron ions and deuterium gas atoms is increased remarkably. COMSOL Multiphysics 5.2 was used to study the idea in 28 known plasma focus devices, and the neutron yield of each system was obtained and reported. It was found that in the ENEA device, operating at 1 Hz, 1.1 × 10^9 and 1.1 × 10^11 neutrons per second are produced by D-D and D-T reactions, respectively; in the NX2 device, operating at 16 Hz, the corresponding figures are 1.34 × 10^10 and 1.34 × 10^12 neutrons per second. The results show that, given their sizes and energies, these devices can be used as efficient neutron generators.

  6. Monitoring SLAC High Performance UNIX Computing Systems

    International Nuclear Information System (INIS)

    Lettsome, Annette K.

    2005-01-01

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. Monitoring such systems is advantageous in order to foresee possible problems or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed, since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process used in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.

  7. Positive Wigner functions render classical simulation of quantum computation efficient.

    Science.gov (United States)

    Mari, A; Eisert, J

    2012-12-07

    We show that quantum circuits where the initial state and all the following quantum operations can be represented by positive Wigner functions can be classically efficiently simulated. This is true both for continuous-variable as well as discrete variable systems in odd prime dimensions, two cases which will be treated on entirely the same footing. Noting the fact that Clifford and Gaussian operations preserve the positivity of the Wigner function, our result generalizes the Gottesman-Knill theorem. Our algorithm provides a way of sampling from the output distribution of a computation or a simulation, including the efficient sampling from an approximate output distribution in the case of sampling imperfections for initial states, gates, or measurements. In this sense, this work highlights the role of the positive Wigner function as separating classically efficiently simulable systems from those that are potentially universal for quantum computing and simulation, and it emphasizes the role of negativity of the Wigner function as a computational resource.

  8. High speed computer assisted tomography

    International Nuclear Information System (INIS)

    Maydan, D.; Shepp, L.A.

    1980-01-01

    X-ray generation and detection apparatus for use in a computer assisted tomography system which permits relatively high speed scanning. A large x-ray tube having a circular anode (3) surrounds the patient area. A movable electron gun (8) orbits adjacent to the anode. The anode directs into the patient area x-rays which are delimited into a fan beam by a pair of collimating rings (21). After passing through the patient, x-rays are detected by an array (22) of movable detectors. Detector subarrays (23) are synchronously movable out of the x-ray plane to permit the passage of the fan beam.

  9. Towards the Automatic Detection of Efficient Computing Assets in a Heterogeneous Cloud Environment

    OpenAIRE

    Iglesias, Jesus Omana; Stokes, Nicola; Ventresque, Anthony; Murphy, Liam, B.E.; Thorburn, James

    2013-01-01

    In a heterogeneous cloud environment, the manual grading of computing assets is the first step in the process of configuring IT infrastructures to ensure optimal utilization of resources. Grading the efficiency of computing assets is, however, a difficult, subjective and time consuming manual task. Thus, an automatic efficiency grading algorithm is highly desirable. In this paper, we compare the effectiveness of the different criteria used in the manual gr...

  10. Computer simulation at high pressure

    International Nuclear Information System (INIS)

    Alder, B.J.

    1977-11-01

    The use of either the Monte Carlo or molecular dynamics method to generate equations-of-state data for various materials at high pressure is discussed. Particular emphasis is given to phase diagrams, such as the generation of various types of critical lines for mixtures, melting, structural and electronic transitions in solids, two-phase ionic fluid systems of astrophysical interest, as well as a brief aside of possible eutectic behavior in the interior of the earth. Then the application of the molecular dynamics method to predict transport coefficients and the neutron scattering function is discussed with a view as to what special features high pressure brings out. Lastly, an analysis by these computational methods of the measured intensity and frequency spectrum of depolarized light and also of the deviation of the dielectric measurements from the constancy of the Clausius--Mosotti function is given that leads to predictions of how the electronic structure of an atom distorts with pressure

  11. On the efficient parallel computation of Legendre transforms

    NARCIS (Netherlands)

    Inda, M.A.; Bisseling, R.H.; Maslen, D.K.

    2001-01-01

    In this article, we discuss a parallel implementation of efficient algorithms for computation of Legendre polynomial transforms and other orthogonal polynomial transforms. We develop an approach to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the

  12. On the efficient parallel computation of Legendre transforms

    NARCIS (Netherlands)

    Inda, M.A.; Bisseling, R.H.; Maslen, D.K.

    1999-01-01

    In this article we discuss a parallel implementation of efficient algorithms for computation of Legendre polynomial transforms and other orthogonal polynomial transforms. We develop an approach to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the

  13. Computationally efficient clustering of audio-visual meeting data

    NARCIS (Netherlands)

    Hung, H.; Friedland, G.; Yeo, C.; Shao, L.; Shan, C.; Luo, J.; Etoh, M.

    2010-01-01

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors,

  14. Efficient Computation of Casimir Interactions between Arbitrary 3D Objects

    International Nuclear Information System (INIS)

    Reid, M. T. Homer; Rodriguez, Alejandro W.; White, Jacob; Johnson, Steven G.

    2009-01-01

    We introduce an efficient technique for computing Casimir energies and forces between objects of arbitrarily complex 3D geometries. In contrast to other recently developed methods, our technique easily handles nonspheroidal, nonaxisymmetric objects, and objects with sharp corners. Using our new technique, we obtain the first predictions of Casimir interactions in a number of experimentally relevant geometries, including crossed cylinders and tetrahedral nanoparticles.

  15. Computationally Efficient Clustering of Audio-Visual Meeting Data

    Science.gov (United States)

    Hung, Hayley; Friedland, Gerald; Yeo, Chuohao

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing that was already done for video compression. Then, we present a method of associating the audio-visual data together so that the content of each participant can be managed individually. The methods presented in this article can be used as a principal component that enables many higher-level semantic analysis tasks needed in search, retrieval, and navigation.

  16. High Performance Spaceflight Computing (HPSC)

    Data.gov (United States)

    National Aeronautics and Space Administration — Space-based computing has not kept up with the needs of current and future NASA missions. We are developing a next-generation flight computing system that addresses...

  17. High energy physics and grid computing

    International Nuclear Information System (INIS)

    Yu Chuansong

    2004-01-01

    The status of the new generation computing environment of the high energy physics experiments is introduced briefly in this paper. The development of the high energy physics experiments and the new computing requirements by the experiments are presented. The blueprint of the new generation computing environment of the LHC experiments, the history of Grid computing, the R&D status of high energy physics grid computing technology, and the network bandwidth needed by the high energy physics grid and its development are described. The grid computing research in the Chinese high energy physics community is introduced at last. (authors)

  18. Critical study of high efficiency deep grinding

    OpenAIRE

    Johnstone, lain

    2002-01-01

    In recent years, the aerospace industry in particular has embraced and actively pursued the development of stronger high performance materials, namely nickel-based superalloys and hard-wearing steels. This has resulted in a need for a more efficient method of machining, and this need was answered with the advent of High Efficiency Deep Grinding (HEDG). This relatively new process using cubic boron nitride (CBN) electroplated grinding wheels has been investigated through experim...

  19. Efficient computation of clipped Voronoi diagram for mesh generation

    KAUST Repository

    Yan, Dongming

    2013-04-01

    The Voronoi diagram is a fundamental geometric structure widely used in various fields, especially in computer graphics and geometry computing. For a set of points in a compact domain (i.e. a bounded and closed 2D region or a 3D volume), some Voronoi cells of their Voronoi diagram are infinite or partially outside of the domain, but in practice only the parts of the cells inside the domain are needed, as when computing the centroidal Voronoi tessellation. Such a Voronoi diagram confined to a compact domain is called a clipped Voronoi diagram. We present an efficient algorithm to compute the clipped Voronoi diagram for a set of sites with respect to a compact 2D region or a 3D volume. We also apply the proposed method to optimal mesh generation based on the centroidal Voronoi tessellation. Crown Copyright © 2011 Published by Elsevier Ltd. All rights reserved.
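
    The paper's algorithm itself is not reproduced in the record; for the special case of a 2D rectangular domain there is a well-known shortcut, sketched below: reflect the sites across each box edge, and the Voronoi cells of the original sites in the augmented set are exactly the clipped cells. Function and variable names are this sketch's own.

        import numpy as np
        from scipy.spatial import Voronoi

        def clipped_voronoi(points, xmin, xmax, ymin, ymax):
            # Mirror the sites across the four edges so every cell of the
            # original sites becomes finite and clipped to the box.
            pts = np.asarray(points, dtype=float)
            mirrored = [pts]
            for axis, lo, hi in ((0, xmin, xmax), (1, ymin, ymax)):
                for bound in (lo, hi):
                    m = pts.copy()
                    m[:, axis] = 2.0 * bound - m[:, axis]
                    mirrored.append(m)
            vor = Voronoi(np.vstack(mirrored))
            cells = []
            for i in range(len(pts)):            # original sites come first
                region = vor.regions[vor.point_region[i]]
                cells.append(vor.vertices[region])
            return cells

        cells = clipped_voronoi([(0.2, 0.3), (0.7, 0.8), (0.5, 0.2)], 0, 1, 0, 1)
        print(len(cells), "clipped cells")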

  20. Efficient computation of clipped Voronoi diagram for mesh generation

    KAUST Repository

    Yan, Dongming; Wang, Wen Ping; Lévy, Bruno L.; Liu, Yang

    2013-01-01

    The Voronoi diagram is a fundamental geometric structure widely used in various fields, especially in computer graphics and geometry computing. For a set of points in a compact domain (i.e. a bounded and closed 2D region or a 3D volume), some Voronoi cells of their Voronoi diagram are infinite or partially outside of the domain, but in practice only the parts of the cells inside the domain are needed, as when computing the centroidal Voronoi tessellation. Such a Voronoi diagram confined to a compact domain is called a clipped Voronoi diagram. We present an efficient algorithm to compute the clipped Voronoi diagram for a set of sites with respect to a compact 2D region or a 3D volume. We also apply the proposed method to optimal mesh generation based on the centroidal Voronoi tessellation. Crown Copyright © 2011 Published by Elsevier Ltd. All rights reserved.

  1. Efficient computation in adaptive artificial spiking neural networks

    NARCIS (Netherlands)

    D. Zambrano (Davide); R.B.P. Nusselder (Roeland); H.S. Scholte; S.M. Bohte (Sander)

    2017-01-01

    Artificial Neural Networks (ANNs) are bio-inspired models of neural computation that have proven highly effective. Still, ANNs lack a natural notion of time, and neural units in ANNs exchange analog values in a frame-based manner, a computationally and energetically inefficient form of

  2. High energy physics and cloud computing

    International Nuclear Information System (INIS)

    Cheng Yaodong; Liu Baoxu; Sun Gongxing; Chen Gang

    2011-01-01

    High Energy Physics (HEP) has been a strong promoter of computing technology, for example the World Wide Web (WWW) and grid computing. In the new era of cloud computing, HEP still has strong demands, and major international high energy physics laboratories have launched a number of projects to research cloud computing technologies and applications. This paper describes the current developments in cloud computing and its applications in high energy physics. Some ongoing projects in the institutes of high energy physics, Chinese Academy of Sciences, including cloud storage, virtual computing clusters, and the BESIII elastic cloud, are also described briefly. (authors)

  3. Efficient quantum circuits for one-way quantum computing.

    Science.gov (United States)

    Tanamoto, Tetsufumi; Liu, Yu-Xi; Hu, Xuedong; Nori, Franco

    2009-03-13

    While Ising-type interactions are ideal for implementing controlled phase flip gates in one-way quantum computing, natural interactions between solid-state qubits are most often described by either the XY or the Heisenberg models. We show an efficient way of generating cluster states directly using either the imaginary SWAP (iSWAP) gate for the XY model, or the sqrt[SWAP] gate for the Heisenberg model. Our approach thus makes one-way quantum computing more feasible for solid-state devices.

  4. A Distributed Snapshot Protocol for Efficient Artificial Intelligence Computation in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    JongBeom Lim

    2018-01-01

    Many artificial intelligence applications require a huge amount of computing resources. As a result, cloud computing adoption rates are increasing in the artificial intelligence field. To support the demand of artificial intelligence applications and guarantee service level agreements, cloud computing should provide not only computing resources but also fundamental mechanisms for efficient computing. In this regard, a snapshot protocol has been used to create a consistent snapshot of the global state in cloud computing environments. However, the existing snapshot protocols are not optimized in the context of artificial intelligence applications, where large-scale iterative computation is the norm. In this paper, we present a distributed snapshot protocol for efficient artificial intelligence computation in cloud computing environments. The proposed snapshot protocol is based on a distributed algorithm to run interconnected multiple nodes in a scalable fashion. Our snapshot protocol is able to deal with artificial intelligence applications in which a large number of computing nodes are running. We reveal that our distributed snapshot protocol guarantees correctness, safety, and liveness conditions.
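
    The record does not spell the protocol out, so the sketch below shows the classic Chandy-Lamport marker rule that distributed snapshot protocols build on, reduced to the receive logic of one process; the class and the toy channel naming are illustrative only, not the paper's design.

        class Process:
            def __init__(self, pid, channels_in):
                self.pid = pid
                self.state = 0                 # local application state
                self.recorded = None           # snapshot of local state
                self.recording = {c: [] for c in channels_in}     # channel states
                self.marker_seen = {c: False for c in channels_in}

            def start_snapshot(self, send):
                self.recorded = self.state     # record own state first,
                send(self.pid, "MARKER")       # then send markers downstream

            def on_message(self, channel, msg, send):
                if msg == "MARKER":
                    if self.recorded is None:  # first marker: record and relay
                        self.recorded = self.state
                        send(self.pid, "MARKER")
                    self.marker_seen[channel] = True
                    return
                if self.recorded is not None and not self.marker_seen[channel]:
                    self.recording[channel].append(msg)   # in flight at snapshot
                self.state += msg              # normal processing continues

        # Usage: exercise one process; the send callback is stubbed out.
        p = Process("B", channels_in=["A->B"])
        noop = lambda *_: None
        p.on_message("A->B", 10, noop)         # pre-snapshot message
        p.start_snapshot(noop)                 # records local state 10
        p.on_message("A->B", 7, noop)          # in-flight: goes to channel state
        p.on_message("A->B", "MARKER", noop)
        print(p.recorded, p.recording)         # 10 {'A->B': [7]}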

  5. Energy-efficient computing and networking. Revised selected papers

    Energy Technology Data Exchange (ETDEWEB)

    Hatziargyriou, Nikos; Dimeas, Aris [Ethnikon Metsovion Polytechneion, Athens (Greece); Weidlich, Anke (eds.) [SAP Research Center, Karlsruhe (Germany); Tomtsi, Thomai

    2011-07-01

    This book constitutes the postproceedings of the First International Conference on Energy-Efficient Computing and Networking, E-Energy, held in Passau, Germany in April 2010. The 23 revised papers presented were carefully reviewed and selected for inclusion in the post-proceedings. The papers are organized in topical sections on energy market and algorithms, ICT technology for the energy market, implementation of smart grid and smart home technology, microgrids and energy management, and energy efficiency through distributed energy management and buildings. (orig.)

  6. Secure Computation, I/O-Efficient Algorithms and Distributed Signatures

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Kölker, Jonas; Toft, Tomas

    2012-01-01

    values of the form (r, g^r) for random secret-shared r ∈ ℤ_q, with g^r in a group of order q. This costs a constant number of exponentiations per player per value generated, even if less than n/3 players are malicious. This can be used for efficient distributed computing of Schnorr signatures. We further develop the technique so that we can sign secret data in a distributed fashion at essentially the same cost.

  7. Improving computational efficiency of Monte Carlo simulations with variance reduction

    International Nuclear Information System (INIS)

    Turner, A.; Davis, A.

    2013-01-01

    CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)

  8. Efficient MATLAB computations with sparse and factored tensors.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)

    2006-12-01

    In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
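
    The coordinate (COO) scheme and its mode-n operations translate directly out of MATLAB; below is a small Python rendering of a mode-n tensor-times-matrix product on COO data. It returns a dense array for clarity, whereas the Tensor Toolbox keeps results structured, and all names here are this sketch's own.

        import numpy as np

        def ttm_coo(coords, vals, shape, U, mode):
            # Mode-`mode` product of a COO sparse tensor with matrix U.
            # coords: (nnz, d) indices; vals: (nnz,) values; U: (J, shape[mode]).
            out_shape = list(shape)
            out_shape[mode] = U.shape[0]
            out = np.zeros(out_shape)
            for idx, v in zip(coords, vals):
                sl = list(idx)
                for j in range(U.shape[0]):      # scatter v * U[j, idx[mode]]
                    sl[mode] = j
                    out[tuple(sl)] += v * U[j, idx[mode]]
            return out

        # Usage: a 3x3x3 tensor with two nonzeros, multiplied along mode 0.
        coords = np.array([[0, 1, 2], [2, 0, 1]])
        vals = np.array([3.0, -1.0])
        U = np.eye(2, 3)                         # maps mode-0 length 3 -> 2
        print(ttm_coo(coords, vals, (3, 3, 3), U, mode=0).shape)   # (2, 3, 3)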

  9. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Patrick [Oregon State Univ., Corvallis, OR (United States)

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient, VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  10. Progress of OLED devices with high efficiency at high luminance

    Science.gov (United States)

    Nguyen, Carmen; Ingram, Grayson; Lu, Zhenghong

    2014-03-01

    Organic light emitting diodes (OLEDs) have progressed significantly over the last two decades. For years, OLEDs have been promoted as the next generation technology for flat panel displays and solid-state lighting due to their potential for high energy efficiency and dynamic range of colors. Although high efficiency can readily be obtained at low brightness levels, a significant decline at high brightness is commonly observed. In this report, we will review various strategies for achieving highly efficient phosphorescent OLED devices at high luminance. Specifically, we will provide details regarding the performance and general working principles behind each strategy. We will conclude by looking at how some of these strategies can be combined to produce high efficiency white OLEDs at high brightness.

  11. Measure Guideline. High Efficiency Natural Gas Furnaces

    Energy Technology Data Exchange (ETDEWEB)

    Brand, L. [Partnership for Advanced Residential Retrofit (PARR), Des Plaines, IL (United States); Rose, W. [Partnership for Advanced Residential Retrofit (PARR), Des Plaines, IL (United States)

    2012-10-01

    This measure guideline covers installation of high-efficiency gas furnaces, including: when to install a high-efficiency gas furnace as a retrofit measure; how to identify and address risks; and the steps to be used in the selection and installation process. The guideline is written for Building America practitioners and HVAC contractors and installers. It includes a compilation of information provided by manufacturers, researchers, and the Department of Energy as well as recent research results from the Partnership for Advanced Residential Retrofit (PARR) Building America team.

  12. Measure Guideline: High Efficiency Natural Gas Furnaces

    Energy Technology Data Exchange (ETDEWEB)

    Brand, L.; Rose, W.

    2012-10-01

    This Measure Guideline covers installation of high-efficiency gas furnaces. Topics covered include when to install a high-efficiency gas furnace as a retrofit measure, how to identify and address risks, and the steps to be used in the selection and installation process. The guideline is written for Building America practitioners and HVAC contractors and installers. It includes a compilation of information provided by manufacturers, researchers, and the Department of Energy as well as recent research results from the Partnership for Advanced Residential Retrofit (PARR) Building America team.

  13. High-Performance Computing Paradigm and Infrastructure

    CERN Document Server

    Yang, Laurence T

    2006-01-01

    With hyperthreading in Intel processors, hypertransport links in next generation AMD processors, multi-core silicon in today's high-end microprocessors from IBM and emerging grid computing, parallel and distributed computers have moved into the mainstream

  14. Experiments on high efficiency aerosol filtration

    International Nuclear Information System (INIS)

    Mazzini, M.; Cuccuru, A.; Kunz, P.

    1977-01-01

    Research on high efficiency aerosol filtration by the Nuclear Engineering Institute of Pisa University and by CAMEN, in collaboration with CNEN, is outlined. HEPA filter efficiency was studied as a function of the type and size of the test aerosol, and as a function of flowrate (±50% of the nominal value), air temperature (up to 70 °C), relative humidity (up to 100%), and durability in a corrosive atmosphere (up to 140 hours in NaCl mist). In the selected experimental conditions these influences were appreciable, but not large enough to be significant in industrial HEPA filter applications. Planned future research is outlined: measurement of the efficiency of two HEPA filters in series using a fixed particle size; dependence of the efficiency on air temperatures up to 300-500 °C; performance when subject to smoke from burning organic materials (natural rubber, neoprene, miscellaneous plastics). Such studies are relevant to possible accidental fires in a plutonium laboratory.

  15. High energy physics computing in Japan

    International Nuclear Information System (INIS)

    Watase, Yoshiyuki

    1989-01-01

    A brief overview of the computing provision for high energy physics in Japan is presented. Most of the computing power for high energy physics is concentrated in KEK. Here there are two large scale systems: one providing a general computing service including vector processing and the other dedicated to TRISTAN experiments. Each university group has a smaller sized mainframe or VAX system to facilitate both their local computing needs and the remote use of the KEK computers through a network. The large computer system for the TRISTAN experiments is described. An overview of a prospective future large facility is also given. (orig.)

  16. High efficiency, variable geometry, centrifugal cryogenic pump

    International Nuclear Information System (INIS)

    Forsha, M.D.; Nichols, K.E.; Beale, C.A.

    1994-01-01

    A centrifugal cryogenic pump has been developed which has a basic design that is rugged and reliable with variable speed and variable geometry features that achieve high pump efficiency over a wide range of head-flow conditions. The pump uses a sealless design and rolling element bearings to achieve high reliability and the ruggedness to withstand liquid-vapor slugging. The pump can meet a wide range of variable head, off-design flow requirements and maintain design point efficiency by adjusting the pump speed. The pump also has features that allow the impeller and diffuser blade heights to be adjusted. The adjustable height blades were intended to enhance the pump efficiency when it is operating at constant head, off-design flow rates. For small pumps, the adjustable height blades are not recommended. For larger pumps, they could provide off-design efficiency improvements. This pump was developed for supercritical helium service, but the design is well suited to any cryogenic application where high efficiency is required over a wide range of head-flow conditions

  17. The path toward HEP High Performance Computing

    International Nuclear Information System (INIS)

    Apostolakis, John; Brun, René; Gheata, Andrei; Wenzel, Sandro; Carminati, Federico

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak one. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best usage of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single-threaded version, together with sub-optimal handling of event processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grain parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit

  18. Energy efficient hybrid computing systems using spin devices

    Science.gov (United States)

    Sharad, Mrigank

    Emerging spin-devices like magnetic tunnel junctions (MTJs), spin-valves and domain wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy-efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin-devices, to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque based devices possess several interesting properties that can be exploited for ultra-low power computation. The analog characteristics of spin currents facilitate non-Boolean computation, such as majority evaluation, which can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ~20 mV, resulting in very low computation power. Moreover, since nano-magnets inherently act as memory elements, these devices can facilitate integration of logic and memory in interesting ways. The spin based neurons can be integrated with CMOS and other emerging devices leading to different classes of neuromorphic/non-Von-Neumann architectures. The spin-based designs involve 'mixed-mode' processing and hence can provide very compact and ultra-low energy solutions for complex computation blocks, both digital as well as analog. Such low-power, hybrid designs can be suitable for various data processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications based on a device-circuit co-simulation framework predict more than ~100x improvement in computation energy as compared to state-of-the-art CMOS designs, for optimal spin-device parameters.
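
    As a purely functional illustration of the majority/thresholding behavior described above, the sketch below reduces a current-mode neuron to a weighted sum followed by a sign decision. All names, weights, and threshold values are invented; none of the device physics (spin-torque dynamics, switching currents) is modeled.

```python
import numpy as np

def spin_neuron(weights, inputs, threshold):
    """Toy functional model of a current-mode spin neuron.

    The device sums weighted input currents in the analog domain and the
    nanomagnet switches when the net spin torque crosses a threshold;
    here that is reduced to a weighted sum and a comparison.
    """
    net_current = np.dot(weights, inputs)  # analog current summation
    return 1 if net_current > threshold else 0

# Equal weights and a mid-range threshold implement a 3-input majority gate.
print(spin_neuron(np.ones(3), np.array([1, 1, 0]), threshold=1.5))  # -> 1
print(spin_neuron(np.ones(3), np.array([1, 0, 0]), threshold=1.5))  # -> 0
```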

  19. Efficient technique for computational design of thermoelectric materials

    Science.gov (United States)

    Núñez-Valdez, Maribel; Allahyari, Zahed; Fan, Tao; Oganov, Artem R.

    2018-01-01

    Efficient thermoelectric materials are highly desirable, and the quest for finding them has intensified as they could be promising alternatives to fossil energy sources. Here we present a general first-principles approach to predict, in multicomponent systems, efficient thermoelectric compounds. The method combines a robust evolutionary algorithm, a Pareto multiobjective optimization, density functional theory and a Boltzmann semi-classical calculation of thermoelectric efficiency. To test the performance and reliability of our overall framework, we use the well-known system Bi2Te3-Sb2Te3.
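
    The Pareto multiobjective step of such a pipeline is easy to isolate: among candidate compounds scored on competing objectives, keep only the non-dominated set. Below is a minimal sketch assuming two generic objectives that are both to be minimized; the candidate values are invented for illustration and do not come from the paper.

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated subset of a set of candidate solutions.

    `points` is an (n, m) array of m objective values per candidate, all
    to be minimized (e.g., negative figure of merit and energy above the
    convex hull; illustrative choices only).
    """
    points = np.asarray(points)
    keep = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points <= p, axis=1) &
                           np.any(points < p, axis=1))
        if not dominated:
            keep.append(i)
    return points[keep]

cands = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [3.0, 3.0]])
print(pareto_front(cands))  # the last candidate is dominated and dropped
```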

  20. Saving energy via high-efficiency fans.

    Science.gov (United States)

    Heine, Thomas

    2016-08-01

    Thomas Heine, sales and market manager for EC Upgrades, the retrofit arm of global provider of air movement solutions, ebm-papst A&NZ, discusses the retrofitting of high-efficiency fans to existing HVAC equipment to 'drastically reduce energy consumption'.

  1. Improving robustness and computational efficiency using modern C++

    International Nuclear Information System (INIS)

    Paterno, M; Kowalkowski, J; Green, C

    2014-01-01

    For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.

  2. Perspective: Memcomputing: Leveraging memory and physics to compute efficiently

    Science.gov (United States)

    Di Ventra, Massimiliano; Traversa, Fabio L.

    2018-05-01

    It is well known that physical phenomena may be of great help in computing some difficult problems efficiently. A typical example is prime factorization, which may be solved in polynomial time by exploiting quantum entanglement on a quantum computer. There are, however, other types of (non-quantum) physical properties that one may leverage to efficiently compute a wide range of hard problems. In this perspective, we discuss how to employ one such property, memory (time non-locality), in a novel physics-based approach to computation: Memcomputing. In particular, we focus on digital memcomputing machines (DMMs) that are scalable. DMMs can be realized with non-linear dynamical systems with memory. The latter property allows the realization of a new type of Boolean logic, one that is self-organizing. Self-organizing logic gates are "terminal-agnostic," namely, they do not distinguish between the input and output terminals. When appropriately assembled to represent a given combinatorial/optimization problem, the corresponding self-organizing circuit converges to the equilibrium points that express the solutions of the problem at hand. In doing so, DMMs take advantage of the long-range order that develops during the transient dynamics. This collective dynamical behavior, reminiscent of a phase transition, or even the "edge of chaos," is mediated by families of classical trajectories (instantons) that connect critical points of increasing stability in the system's phase space. The topological character of the solution search renders DMMs robust against noise and structural disorder. Since DMMs are non-quantum systems described by ordinary differential equations, not only can they be built in hardware with the available technology, they can also be simulated efficiently on modern classical computers. As an example, we will show the polynomial-time solution of the subset-sum problem for the worst cases, and point to other types of hard problems where simulations of DMMs

  3. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), in High Performance Computing (HPC) applications. It presents the latest developments in this field from applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g. computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation. Seven architecture chapters which…

  4. High Efficiency Reversible Fuel Cell Power Converter

    DEFF Research Database (Denmark)

    Pittini, Riccardo

    …as well as different dc-ac and dc-dc converter topologies are presented and analyzed. A new ac-dc topology for high efficiency data center applications is proposed and an efficiency characterization based on the fuel cell stack I-V characteristic curve is presented. The second part discusses the main… converter components. Wide bandgap power semiconductors are introduced due to their superior performance in comparison to traditional silicon power devices. The analysis presents a study based on switching loss measurements performed on Si IGBTs, SiC JFETs, SiC MOSFETs and their respective gate drivers…

  5. High Efficiency Power Converter for Low Voltage High Power Applications

    DEFF Research Database (Denmark)

    Nymand, Morten

    The topic of this thesis is the design of high efficiency power electronic dc-to-dc converters for high-power, low-input-voltage to high-output-voltage applications. These converters are increasingly required for emerging sustainable energy systems such as fuel cell, battery or photovoltaic-based systems…, and remote power generation for light towers, camper vans, boats, beacons, and buoys etc. A review of the current state-of-the-art is presented. The best performing converters achieve moderately high peak efficiencies at high input voltage and medium power level. However, system dimensioning and cost are often…

  6. Adding computationally efficient realism to Monte Carlo turbulence simulation

    Science.gov (United States)

    Campbell, C. W.

    1985-01-01

    Frequently in aerospace vehicle flight simulation, random turbulence is generated using the assumption that the craft is small compared to the length scales of turbulence. The turbulence is presumed to vary only along the flight path of the vehicle but not across the vehicle span. The addition of the realism of three-dimensionality is a worthy goal, but any such attempt will not gain acceptance in the simulator community unless it is computationally efficient. A concept for adding three-dimensional realism with a minimum of computational complexity is presented. The concept involves the use of close rational approximations to irrational spectra and cross-spectra so that systems of stable, explicit difference equations can be used to generate the turbulence.
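
    The recursion idea can be illustrated under simplifying assumptions: a first-order rational (Lorentzian-type) spectrum maps onto a stable, explicit one-step difference equation driven by white noise. The parameters below are illustrative, and this scalar filter is far simpler than the cross-spectral systems the paper describes.

```python
import numpy as np

def correlated_gusts(n, dt, tau, sigma, rng=None):
    """Generate turbulence-like noise with a rational (Lorentzian) spectrum.

    Irrational spectra (e.g., von Karman) are often approximated by
    rational forms (e.g., Dryden), which map onto stable explicit
    difference equations such as this first-order recursion.  tau is the
    correlation time, sigma the target standard deviation.
    """
    rng = np.random.default_rng() if rng is None else rng
    a = np.exp(-dt / tau)                    # stable pole, |a| < 1
    b = sigma * np.sqrt(1.0 - a * a)         # keeps the variance at sigma^2
    x = np.empty(n)
    x[0] = sigma * rng.standard_normal()
    for k in range(n - 1):
        x[k + 1] = a * x[k] + b * rng.standard_normal()
    return x

gust = correlated_gusts(n=10_000, dt=0.01, tau=0.5, sigma=1.0)
print(gust.std())  # close to 1.0
```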

  7. High Efficiency Centrifugal Compressor for Rotorcraft Applications

    Science.gov (United States)

    Medic, Gorazd; Sharma, Om P.; Jongwook, Joo; Hardin, Larry W.; McCormick, Duane C.; Cousins, William T.; Lurie, Elizabeth A.; Shabbir, Aamir; Holley, Brian M.; Van Slooten, Paul R.

    2017-01-01

    A centrifugal compressor research effort conducted by United Technologies Research Center under NASA Research Announcement NNC08CB03C is documented. The objectives were to identify key technical barriers to advancing the aerodynamic performance of high-efficiency, high work factor, compact centrifugal compressor aft-stages for turboshaft engines; to acquire measurements needed to overcome the technical barriers and inform future designs; and to design, fabricate, and test a new research compressor in which to acquire the requisite flow field data. A new High-Efficiency Centrifugal Compressor stage -- splittered impeller, splittered diffuser, 90 degree bend, and exit guide vanes -- with aerodynamically aggressive performance and configuration (compactness) goals was designed, fabricated, and subsequently tested at the NASA Glenn Research Center.

  8. High-Temperature High-Efficiency Solar Thermoelectric Generators

    Energy Technology Data Exchange (ETDEWEB)

    Baranowski, LL; Warren, EL; Toberer, ES

    2014-03-01

    Inspired by recent high-efficiency thermoelectric modules, we consider thermoelectrics for terrestrial applications in concentrated solar thermoelectric generators (STEGs). The STEG is modeled as two subsystems: a TEG, and a solar absorber that efficiently captures the concentrated sunlight and limits radiative losses from the system. The TEG subsystem is modeled using thermoelectric compatibility theory; this model does not constrain the material properties to be constant with temperature. Considering a three-stage TEG based on current record modules, this model suggests that 18% efficiency could be experimentally expected with a temperature gradient from 1000 °C to 100 °C. Achieving 15% overall STEG efficiency thus requires an absorber efficiency above 85%, and we consider two methods to achieve this: solar-selective absorbers and thermally insulating cavities. When the TEG and absorber subsystem models are combined, we expect that the STEG modeled here could achieve 15% efficiency with optical concentration between 250 and 300 suns.
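
    Because the subsystem efficiencies multiply, the 85% absorber requirement follows directly from the numbers quoted above; a one-line check using the abstract's own figures:

```python
# Subsystem efficiencies multiply; both numbers are taken from the abstract.
eta_teg = 0.18        # three-stage TEG, 1000 C hot side / 100 C cold side
eta_absorber = 0.85   # minimum absorber efficiency for the 15% system goal
print(f"overall STEG efficiency: {eta_teg * eta_absorber:.1%}")  # ~15.3%
```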

  9. High-level language computer architecture

    CERN Document Server

    Chu, Yaohan

    1975-01-01

    High-Level Language Computer Architecture offers a tutorial on high-level language computer architecture, including von Neumann architecture and syntax-oriented architecture as well as direct and indirect execution architecture. Design concepts of Japanese-language data processing systems are discussed, along with the architecture of stack machines and the SYMBOL computer system. The conceptual design of a direct high-level language processor is also described.Comprised of seven chapters, this book first presents a classification of high-level language computer architecture according to the pr

  10. High efficiency and broadband acoustic diodes

    Science.gov (United States)

    Fu, Congyi; Wang, Bohan; Zhao, Tianfei; Chen, C. Q.

    2018-01-01

    Energy transmission efficiency and working bandwidth are the two major factors limiting the application of current acoustic diodes (ADs). This letter presents a design of high efficiency and broadband acoustic diodes composed of a nonlinear frequency converter and a linear wave filter. The converter consists of two masses connected by a bilinear spring with asymmetric tension and compression stiffness. The wave filter is a linear mass-spring lattice (sonic crystal). Both numerical simulation and experiment show that the energy transmission efficiency of the acoustic diode can be improved by as much as two orders of magnitude, reaching about 61%. Moreover, the primary working bandwidth of the AD is about twice the cut-off frequency of the sonic crystal filter. The cut-off-frequency-dependent working band of the AD implies that the developed AD can be scaled up or down from macro-scale to micro- and nano-scale.

  11. Graphics processor efficiency for realization of rapid tabular computations

    International Nuclear Information System (INIS)

    Dudnik, V.A.; Kudryavtsev, V.I.; Us, S.A.; Shestakov, M.V.

    2016-01-01

    Capabilities of graphics processing units (GPU) and central processing units (CPU) have been investigated for the realization of fast-calculation algorithms based on tabulated functions. The implementation of tabulated functions is exemplified on GPU/CPU-architecture processors. The operating efficiencies of the GPU and CPU, when employed for tabular calculations, are compared under different conditions of use. Recommendations are formulated for the use of graphics and central processors to speed up scientific and engineering computations through the use of tabulated functions.
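
    The core of the tabular approach can be sketched independently of the GPU/CPU question: precompute an expensive function on a grid once, then answer queries by interpolation. Below is a minimal CPU-side sketch with an invented stand-in function; a GPU realization would move the same lookup into device memory or textures.

```python
import numpy as np

# Tabulate an "expensive" function once, then evaluate by interpolation.
grid = np.linspace(0.0, 10.0, 4096)          # table abscissae
table = np.exp(-grid) * np.sin(3.0 * grid)   # stand-in expensive function

def tab_eval(x):
    """Evaluate via the precomputed table with linear interpolation."""
    return np.interp(x, grid, table)

x = np.random.default_rng(0).uniform(0.0, 10.0, 1_000_000)
err = np.max(np.abs(tab_eval(x) - np.exp(-x) * np.sin(3.0 * x)))
print(f"max interpolation error: {err:.2e}")  # small for a dense table
```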

  12. Efficient quantum algorithm for computing n-time correlation functions.

    Science.gov (United States)

    Pedernales, J S; Di Candia, R; Egusquiza, I L; Casanova, J; Solano, E

    2014-07-11

    We propose a method for computing n-time correlation functions of arbitrary spinorial, fermionic, and bosonic operators, consisting of an efficient quantum algorithm that encodes these correlations in an initially added ancillary qubit for probe and control tasks. For spinorial and fermionic systems, the reconstruction of arbitrary n-time correlation functions requires the measurement of two ancilla observables, while for bosonic variables time derivatives of the same observables are needed. Finally, we provide examples applicable to different quantum platforms in the frame of the linear response theory.

  13. Computationally Efficient and Noise Robust DOA and Pitch Estimation

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2016-01-01

    Many natural signals, such as voiced speech and some musical instruments, are approximately periodic over short intervals. These signals are often described in mathematics by a sum of sinusoids (harmonics) with frequencies that are proportional to the fundamental frequency, or pitch. In sensor… a joint DOA and pitch estimator. In white Gaussian noise, we derive even more computationally efficient solutions which are designed using the narrowband power spectrum of the harmonics. Numerical results reveal the performance of the estimators in colored noise compared with the Cramér-Rao lower bound…
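
    A toy version of the harmonic signal model above: the pitch estimate is the fundamental whose first few harmonics collect the most periodogram energy. This grid-search sketch is far simpler than the filtering estimators developed in the thesis, and all parameter values are invented for illustration.

```python
import numpy as np

def harmonic_pitch(x, fs, f0_grid, n_harm=5):
    """Toy pitch estimator by harmonic summation of the periodogram.

    The signal is modeled as a sum of sinusoids at integer multiples of
    the fundamental f0; the estimate maximizes the spectral energy
    collected at the first n_harm harmonics.
    """
    nfft = 8192
    spec = np.abs(np.fft.rfft(x, nfft)) ** 2
    scores = []
    for f0 in f0_grid:
        bins = np.round(np.arange(1, n_harm + 1) * f0 * nfft / fs).astype(int)
        bins = bins[bins < len(spec)]
        scores.append(spec[bins].sum())
    return f0_grid[int(np.argmax(scores))]

fs, f0_true = 8000.0, 210.0
t = np.arange(2048) / fs
x = sum(np.sin(2 * np.pi * k * f0_true * t) for k in range(1, 4))
x += 0.1 * np.random.default_rng(1).standard_normal(t.size)
print(harmonic_pitch(x, fs, np.arange(100.0, 400.0, 1.0)))  # ~210.0
```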

  14. Efficient Use of Preisach Hysteresis Model in Computer Aided Design

    Directory of Open Access Journals (Sweden)

    IONITA, V.

    2013-05-01

    The paper presents a practical detailed analysis regarding the use of the classical Preisach hysteresis model, covering all the steps, from measuring the necessary data for the model identification to the implementation in a software code for Computer Aided Design (CAD) in Electrical Engineering. An efficient numerical method is proposed and the hysteresis modeling accuracy is tested on magnetic recording materials. The procedure includes the correction of the experimental data, which are used for the hysteresis model identification, taking into account the demagnetizing effect for the sample that is measured in an open-circuit device (a vibrating sample magnetometer).
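
    For readers unfamiliar with the model, the sketch below implements the classical scalar Preisach construction: a weighted superposition of rectangular hysterons on the half-plane alpha >= beta. Uniform weights are assumed purely for illustration; identification from measured data, which is the subject of the paper, would replace them.

```python
import numpy as np

class PreisachModel:
    """Minimal scalar Preisach model on a uniform hysteron grid.

    Each hysteron is a rectangular loop that switches up at alpha and
    down at beta (alpha >= beta); the output is the weighted sum of the
    hysteron states, which depends on the input history.
    """
    def __init__(self, n=100, h_sat=1.0):
        a, b = np.meshgrid(np.linspace(-h_sat, h_sat, n),
                           np.linspace(-h_sat, h_sat, n), indexing="ij")
        mask = a >= b                              # Preisach half-plane
        self.alpha, self.beta = a[mask], b[mask]
        self.state = -np.ones(self.alpha.size)     # negatively saturated
        self.weight = np.full(self.alpha.size, 1.0 / self.alpha.size)

    def apply(self, h):
        self.state[h >= self.alpha] = +1.0         # switch up
        self.state[h <= self.beta] = -1.0          # switch down
        return float(np.dot(self.weight, self.state))

model = PreisachModel()
for h in [0.8, -0.3, 0.5]:        # a turning-point sequence
    print(h, model.apply(h))      # output exhibits history dependence
```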

  15. IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report

    Energy Technology Data Exchange (ETDEWEB)

    William M. Bond; Salih Ersayin

    2007-03-30

    This project involved industrial scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of available software, Usim Pac, by developing mathematical models needed for accurate simulation of taconite plants. This project provided funding for this technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. Data were then balanced and provided a basis for assessing the efficiency of individual devices and the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production. A large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of makeup ball size was selected as the most feasible option for the target performance improvement. This was combined with replacement of existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming results of simulation. After the implementation of modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, concentrator energy improvement of 7% was obtained. Further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern

  16. High Efficiency, Low Emission Refrigeration System

    Energy Technology Data Exchange (ETDEWEB)

    Fricke, Brian A [ORNL; Sharma, Vishaldeep [ORNL

    2016-08-01

    Supermarket refrigeration systems account for approximately 50% of supermarket energy use, placing this class of equipment among the highest energy consumers in the commercial building domain. In addition, the commonly used refrigeration system in supermarket applications is the multiplex direct expansion (DX) system, which is prone to refrigerant leaks due to its long lengths of refrigerant piping. This leakage reduces the efficiency of the system and increases the impact of the system on the environment. The high Global Warming Potential (GWP) of the hydrofluorocarbon (HFC) refrigerants commonly used in these systems, coupled with the large refrigerant charge and the high refrigerant leakage rates leads to significant direct emissions of greenhouse gases into the atmosphere. Methods for reducing refrigerant leakage and energy consumption are available, but underutilized. Further work needs to be done to reduce costs of advanced system designs to improve market utilization. In addition, refrigeration system retrofits that result in reduced energy consumption are needed since the majority of applications address retrofits rather than new stores. The retrofit market is also of most concern since it involves large-volume refrigerant systems with high leak rates. Finally, alternative refrigerants for new and retrofit applications are needed to reduce emissions and reduce the impact on the environment. The objective of this Collaborative Research and Development Agreement (CRADA) between the Oak Ridge National Laboratory and Hill Phoenix is to develop a supermarket refrigeration system that reduces greenhouse gas emissions and has 25 to 30 percent lower energy consumption than existing systems. The outcomes of this project will include the design of a low emission, high efficiency commercial refrigeration system suitable for use in current U.S. supermarkets. In addition, a prototype low emission, high efficiency supermarket refrigeration system will be produced for

  17. High efficiency novel window air conditioner

    International Nuclear Information System (INIS)

    Bansal, Pradeep

    2015-01-01

    Highlights: • Use of novel refrigerant mixture of R32/R125 (85/15% molar conc.) to reduce global warming and improve energy efficiency. • Use of novel features such as electronically commuted motor (ECM) fan motor, slinger and sub-merged sub-cooler. • Energy savings of up to 0.1 Quads per year in USA and much more in Asia/Middle East where WACs are used in large numbers. • Payback period of only 1.4 years of the novel efficient WAC. - Abstract: This paper presents the results of an experimental and analytical evaluation of measures to raise the efficiency of window air conditioners (WAC). In order to achieve a higher energy efficiency ratio (EER), the original capacity of a baseline R410A unit was reduced by replacing the original compressor with a lower capacity but higher EER compressor, while all heat exchangers and the chassis from the original unit were retained. Subsequent major modifications included – replacing the alternating current fan motor with a brushless high efficiency electronically commutated motor (ECM) motor, replacing the capillary tube with a needle valve to better control the refrigerant flow and refrigerant set points, and replacing R410A with a ‘drop-in’ lower global warming potential (GWP) binary mixture of R32/R125 (85/15% molar concentration). All these modifications resulted in significant enhancement in the EER of the baseline WAC. Further, an economic analysis of the new WAC revealed an encouraging payback period

  18. High efficiency carbon nanotube thread antennas

    Science.gov (United States)

    Amram Bengio, E.; Senic, Damir; Taylor, Lauren W.; Tsentalovich, Dmitri E.; Chen, Peiyu; Holloway, Christopher L.; Babakhani, Aydin; Long, Christian J.; Novotny, David R.; Booth, James C.; Orloff, Nathan D.; Pasquali, Matteo

    2017-10-01

    Although previous research has explored the underlying theory of high-frequency behavior of carbon nanotubes (CNTs) and CNT bundles for antennas, there is a gap in the literature for direct experimental measurements of radiation efficiency. These measurements are crucial for any practical application of CNT materials in wireless communication. In this letter, we report a measurement technique to accurately characterize the radiation efficiency of λ/4 monopole antennas made from the CNT thread. We measure the highest absolute values of radiation efficiency for CNT antennas of any type, matching that of copper wire. To capture the weight savings, we propose a specific radiation efficiency metric and show that these CNT antennas exceed copper's performance by over an order of magnitude at 1 GHz and 2.4 GHz. We also report direct experimental observation that, contrary to metals, the radiation efficiency of the CNT thread improves significantly at higher frequencies. These results pave the way for practical applications of CNT thread antennas, particularly in the aerospace and wearable electronics industries where weight saving is a priority.

  19. High Efficiency Power Converter for Low Voltage High Power Applications

    DEFF Research Database (Denmark)

    Nymand, Morten

    The topic of this thesis is the design of high efficiency power electronic dc-to-dc converters for high-power, low-input-voltage to high-output-voltage applications. These converters are increasingly required for emerging sustainable energy systems such as fuel cell, battery or photo voltaic based...

  20. Efficient Computation of Transition State Resonances and Reaction Rates from a Quantum Normal Form

    NARCIS (Netherlands)

    Schubert, Roman; Waalkens, Holger; Wiggins, Stephen

    2006-01-01

    A quantum version of a recent formulation of transition state theory in phase space is presented. The theory developed provides an algorithm to compute quantum reaction rates and the associated Gamow-Siegert resonances with very high accuracy. The algorithm is especially efficient for

  1. A high performance scientific cloud computing environment for materials simulations

    Science.gov (United States)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.

  2. Efficient and Flexible Computation of Many-Electron Wave Function Overlaps.

    Science.gov (United States)

    Plasser, Felix; Ruckenbauer, Matthias; Mai, Sebastian; Oppel, Markus; Marquetand, Philipp; González, Leticia

    2016-03-08

    A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented.
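
    At the single-determinant level, the overlap reduces to a determinant of the occupied-orbital overlap matrix; the published algorithm generalizes this kernel to CI expansions by reusing recurring intermediates (minors). Below is a minimal sketch of just the determinant-level kernel, with invented dimensions and an orthonormal AO basis assumed for simplicity.

```python
import numpy as np

def determinant_overlap(C1, C2, S_ao):
    """Overlap of two single-determinant wave functions.

    C1, C2: (n_ao, n_occ) occupied MO coefficient matrices for the two
    wave functions; S_ao: atomic-orbital overlap matrix.  The overlap is
    det(C1^T S_ao C2).  The cited algorithm's machinery for CI
    expansions and recurring minors is omitted here.
    """
    return float(np.linalg.det(C1.T @ S_ao @ C2))

rng = np.random.default_rng(0)
n_ao, n_occ = 6, 3
S_ao = np.eye(n_ao)                     # orthonormal AO basis for simplicity
C = np.linalg.qr(rng.standard_normal((n_ao, n_occ)))[0]
print(determinant_overlap(C, C, S_ao))  # self-overlap of a normalized state -> 1
```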

  3. High efficiency lithium-thionyl chloride cell

    Science.gov (United States)

    Doddapaneni, N.

    1982-08-01

    The polarization characteristics and the specific cathode capacity of Teflon-bonded carbon electrodes in the Li/SOCl2 system have been evaluated. Doping with electrocatalysts such as cobalt and iron phthalocyanine complexes improved both cell voltage and cell rate capability. High efficiency Li/SOCl2 cells were thus achieved with catalyzed cathodes. The electrochemical reduction of SOCl2 seems to undergo modification at the catalyzed cathode. For example, the reduction of SOCl2 at an FePc-catalyzed cathode involves 2-1/2 e-/mole of SOCl2. Furthermore, the reduction mechanism is simplified and unwanted chemical species are eliminated by the catalyst. Thus a potentially safer, high efficiency Li/SOCl2 cell can be anticipated.

  4. Statistically and Computationally Efficient Estimating Equations for Large Spatial Datasets

    KAUST Repository

    Sun, Ying

    2014-11-07

    For Gaussian process models, likelihood-based methods are often difficult to use with large irregularly spaced spatial datasets, because exact calculations of the likelihood for n observations require O(n^3) operations and O(n^2) memory. Various approximation methods have been developed to address the computational difficulties. In this paper, we propose new unbiased estimating equations based on score equation approximations that are both computationally and statistically efficient. We replace the inverse covariance matrix that appears in the score equations by a sparse matrix to approximate the quadratic forms, then set the resulting quadratic forms equal to their expected values to obtain unbiased estimating equations. The sparse matrix is constructed by a sparse inverse Cholesky approach to approximate the inverse covariance matrix. The statistical efficiency of the resulting unbiased estimating equations is evaluated both in theory and by numerical studies. Our methods are applied to nearly 90,000 satellite-based measurements of water vapor levels over a region in the Southeast Pacific Ocean.

  5. Statistically and Computationally Efficient Estimating Equations for Large Spatial Datasets

    KAUST Repository

    Sun, Ying; Stein, Michael L.

    2014-01-01

    For Gaussian process models, likelihood-based methods are often difficult to use with large irregularly spaced spatial datasets, because exact calculations of the likelihood for n observations require O(n^3) operations and O(n^2) memory. Various approximation methods have been developed to address the computational difficulties. In this paper, we propose new unbiased estimating equations based on score equation approximations that are both computationally and statistically efficient. We replace the inverse covariance matrix that appears in the score equations by a sparse matrix to approximate the quadratic forms, then set the resulting quadratic forms equal to their expected values to obtain unbiased estimating equations. The sparse matrix is constructed by a sparse inverse Cholesky approach to approximate the inverse covariance matrix. The statistical efficiency of the resulting unbiased estimating equations is evaluated both in theory and by numerical studies. Our methods are applied to nearly 90,000 satellite-based measurements of water vapor levels over a region in the Southeast Pacific Ocean.

  6. High-performance computing — an overview

    Science.gov (United States)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  7. Bioblendstocks that Enable High Efficiency Engine Designs

    Energy Technology Data Exchange (ETDEWEB)

    McCormick, Robert L.; Fioroni, Gina M.; Ratcliff, Matthew A.; Zigler, Bradley T.; Farrell, John

    2016-11-03

    The past decade has seen a high level of innovation in production of biofuels from sugar, lipid, and lignocellulose feedstocks. As discussed in several talks at this workshop, ethanol blends in the E25 to E50 range could enable more highly efficient spark-ignited (SI) engines. This is because of their knock resistance properties that include not only high research octane number (RON), but also charge cooling from high heat of vaporization, and high flame speed. Emerging alcohol fuels such as isobutanol or mixed alcohols have desirable properties such as reduced gasoline blend vapor pressure, but also have lower RON than ethanol. These fuels may be able to achieve the same knock resistance benefits, but likely will require higher blend levels or higher RON hydrocarbon blendstocks. A group of very high RON (>150) oxygenates such as dimethyl furan, methyl anisole, and related compounds are also produced from biomass. While providing no increase in charge cooling, their very high octane numbers may provide adequate knock resistance for future highly efficient SI engines. Given this range of options for highly knock resistant fuels there appears to be a critical need for a fuel knock resistance metric that includes effects of octane number, heat of vaporization, and potentially flame speed. Emerging diesel fuels include highly branched long-chain alkanes from hydroprocessing of fats and oils, as well as sugar-derived terpenoids. These have relatively high cetane number (CN), which may have some benefits in designing more efficient CI engines. Fast pyrolysis of biomass can produce diesel boiling range streams that are high in aromatic, oxygen and acid contents. Hydroprocessing can be applied to remove oxygen and consequently reduce acidity, however there are strong economic incentives to leave up to 2 wt% oxygen in the product. This oxygen will primarily be present as low CN alkyl phenols and aryl ethers. While these have high heating value, their presence in diesel fuel

  8. Computationally efficient implementation of combustion chemistry in parallel PDF calculations

    International Nuclear Information System (INIS)

    Lu Liuyan; Lantz, Steven R.; Ren Zhuyin; Pope, Stephen B.

    2009-01-01

    In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel
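
    The effect of the distribution strategies can be imitated with a toy cost model: when expensive chemistry is spatially clustered (e.g., on the ranks holding the reacting region), purely local processing is limited by the overloaded ranks, while uniformly random redistribution balances the load at the price of message passing (not modeled here). All numbers below are invented for illustration.

```python
import numpy as np

def wall_clock(chunks):
    """Ensemble wall clock: every rank must finish, so the slowest rank wins."""
    return max(chunk.sum() for chunk in chunks)

rng = np.random.default_rng(2)
n_ranks, n_jobs = 8, 4000
# Locality in the flame zone: particles from the reacting quarter of the
# domain carry 10x the chemistry cost (purely illustrative numbers).
costs = np.ones(n_jobs)
costs[: n_jobs // 4] *= 10.0

plp = np.array_split(costs, n_ranks)                             # PLP: keep work local
uran = np.array_split(costs[rng.permutation(n_jobs)], n_ranks)   # URAN: shuffle

print("PLP  wall clock:", wall_clock(plp))    # ~5000: two ranks overloaded
print("URAN wall clock:", wall_clock(uran))   # ~1625: balanced on average
```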

  9. High efficiency inverter and ballast circuits

    International Nuclear Information System (INIS)

    Nilssen, O.K.

    1984-01-01

    A high efficiency push-pull inverter circuit employing a pair of relatively high power switching transistors is described. The switching on and off of the transistors is precisely controlled to minimize power losses due to common-mode conduction or due to transient conditions that occur in the process of turning a transistor on or off. Two current feed-back transformers are employed in the transistor base drives; one being saturable for providing a positive feedback, and the other being non-saturable for providing a subtractive feedback

  10. Optimization of a high efficiency FEL amplifier

    International Nuclear Information System (INIS)

    Schneidmiller, E.A.; Yurkov, M.V.

    2014-10-01

    The problem of increasing the efficiency of an FEL amplifier is now of great practical importance. The technique of undulator tapering in the post-saturation regime is used at the existing X-ray FELs LCLS and SACLA, and is planned for use at the European XFEL, Swiss FEL, and PAL XFEL. There are also discussions on the future of high peak and average power FELs for scientific and industrial applications. In this paper we perform a detailed analysis of tapering strategies for high power seeded FEL amplifiers. Application of similarity techniques allows us to derive a universal law of undulator tapering.

  11. Highly efficient fully transparent inverted OLEDs

    Science.gov (United States)

    Meyer, J.; Winkler, T.; Hamwi, S.; Schmale, S.; Kröger, M.; Görrn, P.; Johannes, H.-H.; Riedl, T.; Lang, E.; Becker, D.; Dobbertin, T.; Kowalsky, W.

    2007-09-01

    One of the unique selling propositions of OLEDs is their potential to realize devices that are highly transparent over the visible spectrum. This is because organic semiconductors provide a large Stokes shift and low intrinsic absorption losses. Hence, new areas of applications for displays and ambient lighting become accessible, for instance, the integration of OLEDs into the windshield or the ceiling of automobiles. The main challenge in the realization of fully transparent devices is the deposition of the top electrode. ITO is commonly used as the transparent bottom anode in a conventional OLED. To obtain uniform light emission over the entire viewing angle and a low series resistance, a TCO such as ITO is desirable as the top contact as well. However, sputter deposition of ITO on top of organic layers causes damage induced by high-energy particles and UV radiation. We have found an efficient process to protect the organic layers against the rf magnetron deposition of ITO for an inverted OLED (IOLED). The inverted structure allows the integration of OLEDs with the more powerful n-channel transistors used in active matrix backplanes. Employing the green electrophosphorescent material Ir(ppy)3 led to an IOLED with a current efficiency of 50 cd/A and a power efficiency of 24 lm/W at 100 cd/m2. The average transmittance exceeds 80% in the visible region. The onset voltage for light emission is lower than 3 V. In addition, by vertical stacking we achieved a very high current efficiency of more than 70 cd/A for transparent IOLEDs.

  12. Experimental high energy physics and modern computer architectures

    International Nuclear Information System (INIS)

    Hoek, J.

    1988-06-01

    The paper examines how experimental High Energy Physics can use modern computer architectures efficiently. In this connection parallel and vector architectures are investigated, and the types available at the moment for general use are discussed. A separate section briefly describes some architectures that are either a combination of both, or exemplify other architectures. In an appendix some directions in which computing seems to be developing in the USA are mentioned. (author)

  13. High Efficiency Colloidal Quantum Dot Phosphors

    Energy Technology Data Exchange (ETDEWEB)

    Kahen, Keith

    2013-12-31

    The project showed that non-Cd containing, InP-based nanocrystals (semiconductor materials with dimensions of ~6 nm) have high potential for enabling next-generation, nanocrystal-based, on chip phosphors for solid state lighting. Typical nanocrystals fall short of the requirements for on chip phosphors due to their loss of quantum efficiency under the operating conditions of LEDs, such as, high temperature (up to 150 °C) and high optical flux (up to 200 W/cm2). The InP-based nanocrystals invented during this project maintain high quantum efficiency (>80%) in polymer-based films under these operating conditions for emission wavelengths ranging from ~530 to 620 nm. These nanocrystals also show other desirable attributes, such as, lack of blinking (a common problem with nanocrystals which limits their performance) and no increase in the emission spectral width from room to 150 °C (emitters with narrower spectral widths enable higher efficiency LEDs). Prior to these nanocrystals, no nanocrystal system (regardless of nanocrystal type) showed this collection of properties; in fact, other nanocrystal systems are typically limited to showing only one desirable trait (such as high temperature stability) but being deficient in other properties (such as high flux stability). The project showed that one can reproducibly obtain these properties by generating a novel compositional structure inside of the nanomaterials; in addition, the project formulated an initial theoretical framework linking the compositional structure to the list of high performance optical properties. Over the course of the project, the synthetic methodology for producing the novel composition was evolved to enable the synthesis of these nanomaterials at a cost approximately equal to that required for forming typical conventional nanocrystals. Given the above results, the last major remaining step prior to scale up of the nanomaterials is to limit the oxidation of these materials during the tens of

  14. Highly Efficient Estimators of Multivariate Location with High Breakdown Point

    NARCIS (Netherlands)

    Lopuhaa, H.P.

    1991-01-01

    We propose an affine equivariant estimator of multivariate location that combines a high breakdown point and a bounded influence function with high asymptotic efficiency. This proposal is basically a location M-estimator based on the observations obtained after scaling with an affine equivariant

  15. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    Energy Technology Data Exchange (ETDEWEB)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik [School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana 47907 (United States)

    2013-12-21

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing-devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match with the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and “thresholding” operation of an artificial neuron with high energy-efficiency. Comparison with CMOS-based analog circuit-model of a neuron shows that “spin-neurons” (spin based circuit model of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. The application of spin-neurons can therefore be an attractive option for neuromorphic computers of future.

  16. Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

    Science.gov (United States)

    Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph

    2017-10-01

    In high dimensional settings, sparse structures are crucial for efficiency, both in terms of memory, computation, and performance. It is customary to consider the ℓ1 penalty to enforce sparsity in such scenarios. Sparsity enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimensionality. For efficiency, they rely on tuning a parameter that trades data fitting against sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter as well as over the noise level. This has been considered under several names in the literature, for instance Scaled-Lasso, Square-root Lasso, and Concomitant Lasso estimation, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties with the Concomitant Lasso formulation, we propose a modification we coined the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver leading to a computational cost no more expensive than the one for the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm, combined with safe screening rules to achieve speed efficiency, by eliminating early irrelevant features.
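
    A didactic re-implementation of the joint estimation idea, assuming the objective ||y - Xb||^2/(2 n sigma) + sigma/2 + lam ||b||_1 with the smoothing constraint sigma >= sigma0: coordinate descent for b, a closed-form update for sigma. This sketch omits the safe screening rules that give the authors' solver its speed.

```python
import numpy as np

def smoothed_concomitant_lasso(X, y, lam, sigma0, n_iter=50):
    """Sketch of joint (b, sigma) estimation by alternating updates.

    For fixed sigma the b-update is plain Lasso coordinate descent with
    an effective threshold lam * n * sigma; for fixed b the noise level
    is sigma = max(||r|| / sqrt(n), sigma0), the smoothing being the
    lower clip at sigma0.
    """
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    r = y - X @ b
    sigma = max(np.linalg.norm(r) / np.sqrt(n), sigma0)
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]                 # remove coordinate j
            rho = X[:, j] @ r
            thr = lam * n * sigma               # penalty scales with sigma
            b[j] = np.sign(rho) * max(abs(rho) - thr, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]
        sigma = max(np.linalg.norm(r) / np.sqrt(n), sigma0)
    return b, sigma

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 50))
beta = np.zeros(50); beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + 0.5 * rng.standard_normal(200)
b_hat, sig_hat = smoothed_concomitant_lasso(X, y, lam=0.1, sigma0=1e-2)
print(np.round(b_hat[:5], 2), round(sig_hat, 3))  # recovers the support
```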

  17. High efficiency motors; Motores de alta eficiencia

    Energy Technology Data Exchange (ETDEWEB)

    Uranga Favela, Ivan Jaime [Energia Controlada de Mexico, S. A. de C. V., Mexico, D. F. (Mexico)

    1993-12-31

    This paper is a technical-financial study of high efficiency and super-premium motors. As is widely known, more than 60% of the electrical energy generated in the country is used to drive motors, in industry as well as in commerce; hence the importance of motors for the efficient use of energy.

  18. High-efficiency organic glass scintillators

    Science.gov (United States)

    Feng, Patrick L.; Carlson, Joseph S.

    2017-12-19

    A new family of neutron/gamma discriminating scintillators is disclosed that comprises stable organic glasses that may be melt-cast into transparent monoliths. These materials have been shown to provide light yields greater than solution-grown trans-stilbene crystals and efficient PSD capabilities when combined with 0.01 to 0.05% by weight of the total composition of a wavelength-shifting fluorophore. Photoluminescence measurements reveal fluorescence quantum yields that are 2 to 5 times greater than conventional plastic or liquid scintillator matrices, which accounts for the superior light yield of these glasses. The unique combination of high scintillation light-yields, efficient neutron/gamma PSD, and straightforward scale-up via melt-casting distinguishes the developed organic glasses from existing scintillators.

  19. High efficiency motors; Motores de alta eficiencia

    Energy Technology Data Exchange (ETDEWEB)

    Uranga Favela, Ivan Jaime [Energia Controlada de Mexico, S. A. de C. V., Mexico, D. F. (Mexico)

    1992-12-31

    This paper is a technical-financial study of high efficiency and super-premium motors. As is widely known, more than 60% of the electrical energy generated in the country is used to drive motors, in industry as well as in commerce; hence the importance of motors for the efficient use of energy.

  20. A High Performance COTS Based Computer Architecture

    Science.gov (United States)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so large that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the COTS components' behavior. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; then we briefly describe existing fault mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  1. High efficiency beam splitting for H- accelerators

    International Nuclear Information System (INIS)

    Kramer, S.L.; Stipp, V.; Krieger, C.; Madsen, J.

    1985-01-01

    Beam splitting for high energy accelerators has typically involved a significant loss of beam and radiation. This paper reports on a new method of splitting beams for H- accelerators. This technique uses a high intensity flash of light to strip a fraction of the H- beam to H0 atoms, which are then easily separated by a small bending magnet. A system using a 900-watt (average electrical power) flashlamp and a highly efficient collector will provide 10^-3 to 10^-2 splitting of a 50 MeV H- beam. Results on the operation and comparisons with stripping cross sections are presented. Also discussed is the possibility of developing this system to yield a higher stripping fraction

  2. High Quantum Efficiency OLED Lighting Systems

    Energy Technology Data Exchange (ETDEWEB)

    Shiang, Joseph [General Electric (GE) Global Research, Fairfield, CT (United States)

    2011-09-30

    The overall goal of the program was to apply improvements in light outcoupling technology to a practical large area plastic luminaire, and thus enable the product vision of an extremely thin form factor, high efficiency, large area light source. The target substrate was plastic, and the baseline device was operating at 35 LPW at the start of the program. The program target was a >2x improvement in LPW efficacy, and the overall amount of light to be delivered was relatively high: 900 lumens. Despite the extremely difficult challenges associated with scaling up a wet solution process on plastic substrates, the program was able to make substantial progress. A small molecule wet solution process was successfully implemented on plastic substrates with almost no loss in efficiency in transitioning from laboratory scale glass to large area plastic substrates. By transitioning to a small molecule based process, the LPW entitlement increased from 35 LPW to 60 LPW. A further 10% improvement in outcoupling efficiency was demonstrated via the use of a highly reflecting cathode, which reduced absorptive loss in the OLED device. The calculated potential improvement is in some cases even larger, ~30%, so there is considerable room for optimism in improving the net light coupling efficacy, provided absorptive loss mechanisms are eliminated. Further improvements are possible if scattering schemes such as the silver nanowire based hard coat structure are fully developed. The wet coating processes were successfully scaled to large area plastic substrates, resulting in the construction of a 900-lumen luminaire device.

  3. High-efficiency concentrator silicon solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Sinton, R.A.; Cuevas, A.; King, R.R.; Swanson, R.M. (Stanford Univ., CA (USA). Solid-State Electronics Lab.)

    1990-11-01

    This report presents results from extensive process development in high-efficiency Si solar cells. An advanced design for a 1.56-cm² cell with front grids achieved 26% efficiency at 90 suns. This is especially significant since this cell does not require a prismatic cover glass. New designs for simplified backside-contact solar cells were advanced from a status of near-nonfunctionality to demonstrated 21--22% for one-sun cells in sizes up to 37.5 cm². An efficiency of 26% was achieved for similar 0.64-cm² concentrator cells at 150 suns. More fundamental work on dopant-diffused regions is also presented here. The recombination vs. various process and physical parameters was studied in detail for boron and phosphorous diffusions. Emitter-design studies based solidly upon these new data indicate the performance vs. design parameters for a variety of the cases of most interest to solar cell designers. Extractions of p-type bandgap narrowing and the surface recombination for p- and n-type regions from these studies have a generality that extends beyond solar cells into basic device modeling. 68 refs., 50 figs.

  4. Nanooptics for high efficient photon management

    Science.gov (United States)

    Wyrowski, Frank; Schimmel, Hagen

    2005-09-01

    Optical systems for photon management, that is, the generation of tailored electromagnetic fields, constitute one of the keys to innovation through photonics. An important subfield of photon management deals with the transformation of an incident light field into a field with a specified intensity distribution. In this paper we consider some basic aspects of the nature of systems for such light transformations. It turns out that the transversal redistribution of energy (TRE) is of central concern for achieving systems with high transformation efficiency. Besides established techniques, nanostructured optical elements (NOEs) are needed to implement transversal energy redistribution. This builds a bridge between the needs of photon management, optical engineering, and nanooptics.

  5. Developing a computationally efficient dynamic multilevel hybrid optimization scheme using multifidelity model interactions.

    Energy Technology Data Exchange (ETDEWEB)

    Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Castro, Joseph Pete Jr.; Giunta, Anthony Andrew

    2006-01-01

    Many engineering application problems use optimization algorithms in conjunction with numerical simulators to search for solutions. The formulation of relevant objective functions and constraints dictates the possible optimization algorithms. Often, a gradient-based approach is not possible since objective functions and constraints can be nonlinear, nonconvex, non-differentiable, or even discontinuous, and the simulations involved can be computationally expensive. Moreover, computational efficiency and accuracy are desirable and also influence the choice of solution method. With the advent and increasing availability of massively parallel computers, computational speed has increased tremendously. Unfortunately, the numerical and model complexities of many problems still demand significant computational resources. Moreover, in optimization, these expenses can be a limiting factor since obtaining solutions often requires the completion of numerous computationally intensive simulations. Therefore, we propose a multifidelity optimization algorithm (MFO) designed to improve the computational efficiency of an optimization method for a wide range of applications. In developing the MFO algorithm, we take advantage of the interactions between multifidelity models to develop a dynamic, computationally time-saving optimization algorithm. First, a direct search method is applied to the high fidelity model over a reduced design space. In conjunction with this search, a specialized oracle is employed to map the design space of this high fidelity model to that of a computationally cheaper low fidelity model using space mapping techniques. Then, in the low fidelity space, an optimum is obtained using gradient or non-gradient based optimization, and it is mapped back to the high fidelity space. In this paper, we describe the theory and implementation details of our MFO algorithm. We also demonstrate our MFO method on some example problems and on two applications: earth penetrators and
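
    A much-simplified stand-in for the space-mapping idea can be sketched as follows: optimize the cheap model plus a first-order additive correction, re-anchoring the correction at every new high-fidelity evaluation. Both model functions below are invented, and this is not the authors' oracle; the useful property is that a fixed point of the loop is a stationary point of the high-fidelity model.

```python
import numpy as np
from scipy.optimize import minimize

def f_hi(x):   # "expensive" high-fidelity model (an illustrative stand-in)
    return (x[0] - 1.0) ** 2 + 5.0 * (x[1] - 2.0) ** 2 + 0.05 * np.sin(5 * x[0])

def f_lo(x):   # cheap low-fidelity model: right trends, shifted optimum
    return (x[0] - 1.2) ** 2 + 5.0 * (x[1] - 1.8) ** 2

def grad(f, x, h=1e-5):
    """Central-difference gradient (stands in for expensive adjoints)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

# Optimize the corrected surrogate, re-anchoring at each high-fidelity call.
x = np.zeros(2)
for _ in range(10):
    d0 = f_hi(x) - f_lo(x)
    dg = grad(f_hi, x) - grad(f_lo, x)
    model = lambda z, x0=x.copy(), d0=d0, dg=dg: f_lo(z) + d0 + dg @ (z - x0)
    x = minimize(model, x, method="Nelder-Mead").x
print(x, f_hi(x))  # near the high-fidelity optimum around (1, 2)
```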

  6. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
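
    A minimal sketch of the grouping idea follows; the sampled call-site addresses are fabricated for illustration, whereas the patent targets tooling inside a running HPC job:

      from collections import defaultdict

      # Hypothetical call-site address lists, one tuple per thread.
      stacks = {
          0: (0x4005A0, 0x400B10, 0x400F3C),
          1: (0x4005A0, 0x400B10, 0x400F3C),
          2: (0x4005A0, 0x400B10, 0x401228),  # the outlier worth inspecting
          3: (0x4005A0, 0x400B10, 0x400F3C),
      }

      # Threads sharing an identical address list land in one group; the
      # small groups are often the defective or stuck threads.
      groups = defaultdict(list)
      for tid, addrs in stacks.items():
          groups[addrs].append(tid)

      for addrs, tids in sorted(groups.items(), key=lambda kv: len(kv[1])):
          trace = " -> ".join(hex(a) for a in addrs)
          print(f"{len(tids)} thread(s) at {trace}: {tids}")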

  7. Micromagnetics on high-performance workstation and mobile computational platforms

    Science.gov (United States)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of the embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built into low-power computing clusters.

  8. Efficient scatter model for simulation of ultrasound images from computed tomography data

    Science.gov (United States)

    D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.

    2015-12-01

    Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Given the high value of specialized low-cost training for healthcare professionals, there is growing interest in this technology and in the development of high-fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run on either notebooks or desktops using low-cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. The simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The generation of scattering maps was revised for improved computational efficiency. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe quality and performance metrics to validate these results; a performance of up to 55 fps was achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state of the art, showing negligible differences in its distribution.
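
    The scatter model as described reduces to two cheap array operations. A minimal sketch, using a separable Gaussian as a crude stand-in for the transducer PSF; the array shapes and parameters are assumptions, not the authors' calibrated values:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(0)

      # Hypothetical echogenicity map derived from CT intensities.
      echogenicity = rng.uniform(0.2, 1.0, size=(256, 256))

      # Multiplicative noise: random scatterer strength modulates the
      # local reflectivity.
      scatter = echogenicity * rng.rayleigh(scale=1.0, size=echogenicity.shape)

      # Convolution with a point spread function; the anisotropic sigma
      # mimics axial resolution being finer than lateral resolution.
      image = gaussian_filter(scatter, sigma=(1.0, 3.0))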

  9. Highly Efficient Compression Algorithms for Multichannel EEG.

    Science.gov (United States)

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques for single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictors, linear predictors, context-based error modeling, multivariate autoregression (MVAR), and a low-complexity bivariate model, have been examined and their performances compared. Furthermore, a high-compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that data storage and transmission bandwidth can be used effectively. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent root-mean-square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over state-of-the-art compression methods.
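
    For intuition, lossless predictive compression of a single EEG channel can be sketched as below, with a first-order predictor and zlib standing in for the paper's context-based entropy coders; the signal is synthetic, so the printed ratio is only illustrative:

      import numpy as np
      import zlib

      rng = np.random.default_rng(1)
      eeg = np.cumsum(rng.normal(0, 3, size=10_000)).astype(np.int16)

      # Predict each sample from its predecessor and code only the residual.
      residual = np.diff(eeg, prepend=eeg[:1])

      raw = eeg.tobytes()
      coded = zlib.compress(residual.tobytes(), level=9)
      print(f"relative compression ratio: {100 * (1 - len(coded) / len(raw)):.2f}%")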

  10. High efficiency double sided solar cells

    International Nuclear Information System (INIS)

    Seddik, M.M.

    1990-06-01

    The state of the art in single-crystalline silicon technology was given as limited to less than 20% efficiency. A new form of photovoltaic solar cell with high current and high efficiency, based on a double-sided structure, is proposed. The new forms could be n++pn++ or p++np++ double-sided junctions. The double-sided device can be understood as two solar cells connected back-to-back in a parallel electrical connection, in which the current is doubled if the cell is illuminated from both sides by a V-shaped reflector. The cell is mounted in the reflector such that each face is inclined at an angle of 45 deg to each side of the reflector. The advantages of the new structure are: a) high-power devices; b) ease of fabrication; c) the cells are used vertically instead of horizontally as regular solar cells are, which require a large area to install; this is very important in power stations and especially for satellite installation. If the proposal proves experimentally feasible, it would open a new era for photovoltaic solar cells, since it has already been extended to even higher currents. The suggested structures can be stated as: n++pn++Vp++np++; n++pn++Vn++pn++; or p++np++Vp++np++. These structures are formed in a wedged shape to employ indirect illumination by parabolic, conic, or V-shaped reflectors. Their advantages are low cost, high power, smaller size and space, self-concentration, etc. If these proposals find their way to experimental realization, they would offer a short path to the commercial market and could have a considerable impact on solar cell technology and applications. (author). 12 refs, 5 figs

  11. Simple Motor Control Concept Results in High Efficiency at High Velocities

    Science.gov (United States)

    Starin, Scott; Engel, Chris

    2013-09-01

    The need for high-velocity motors in space applications for reaction wheels and detectors has stressed the limits of brushless permanent magnet motors (BPMM). Due to inherent hysteresis core losses, conventional BPMMs must balance the need for torque versus hysteresis losses. Cog-less motors have significantly lower hysteresis losses but suffer from lower efficiencies. Additionally, the inherently low inductance of cog-less motors results in high ripple currents or high switching frequencies, which lowers overall efficiency and increases performance demands on the control electronics. However, using a somewhat forgotten but fully qualified technology, Isotropic Magnet Motors (IMM), extremely high velocities may be achieved at low input power using conventional drive electronics. This paper will discuss the trade study efforts and empirical test data on a 34,000 RPM IMM.

  12. NINJA: Java for High Performance Numerical Computing

    Directory of Open Access Journals (Sweden)

    José E. Moreira

    2002-01-01

    When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java can now be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

  13. Progress of High Efficiency Centrifugal Compressor Simulations Using TURBO

    Science.gov (United States)

    Kulkarni, Sameer; Beach, Timothy A.

    2017-01-01

    Three-dimensional, time-accurate, and phase-lagged computational fluid dynamics (CFD) simulations of the High Efficiency Centrifugal Compressor (HECC) stage were generated using the TURBO solver. Changes to the TURBO Parallel Version 4 source code were made in order to properly model the no-slip boundary condition along the spinning hub region for centrifugal impellers. A startup procedure was developed to generate a converged flow field in TURBO. This procedure initialized computations on a coarsened mesh generated by the Turbomachinery Gridding System (TGS) and relied on a method of systematically increasing wheel speed and backpressure. Baseline design-speed TURBO results generally overpredicted total pressure ratio, adiabatic efficiency, and the choking flow rate of the HECC stage as compared with the design-intent CFD results of Code Leo. Including diffuser fillet geometry in the TURBO computation resulted in a 0.6 percent reduction in the choking flow rate and led to a better match with design-intent CFD. Diffuser fillets reduced annulus cross-sectional area but also reduced corner separation, and thus blockage, in the diffuser passage. It was found that the TURBO computations are somewhat insensitive to inlet total pressure changing from the TURBO default inlet pressure of 14.7 pounds per square inch (101.35 kilopascals) down to 11.0 pounds per square inch (75.83 kilopascals), the inlet pressure of the component test. Off-design tip clearance was modeled in TURBO in two computations: one in which the blade tip geometry was trimmed by 12 mils (0.3048 millimeters), and another in which the hub flow path was moved to reflect a 12-mil axial shift in the impeller hub, creating a step at the hub. The one-dimensional results of these two computations indicate non-negligible differences between the two modeling approaches.

  14. Federal High End Computing (HEC) Information Portal

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This portal provides information about opportunities to engage in U.S. Federal government high performance computing activities, including supercomputer use,...

  15. Embedded High Performance Scalable Computing Systems

    National Research Council Canada - National Science Library

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  16. High Efficiency, Illumination Quality OLEDs for Lighting

    Energy Technology Data Exchange (ETDEWEB)

    Joseph Shiang; James Cella; Kelly Chichak; Anil Duggal; Kevin Janora; Chris Heller; Gautam Parthasarathy; Jeffery Youmans; Joseph Shiang

    2008-03-31

    The goal of the program was to demonstrate a 45 lumen-per-watt white-light device based upon the use of multiple emission colors through solution processing. This performance level is a dramatic extension of the team's previous 15 LPW large-area illumination device. The fundamental material system was based upon commercial polymer materials. The team was largely able to achieve these goals, delivering to DOE a 90-lumen illumination source with an average performance of 34 LPW at 1000 cd/m{sup 2} and peak performance near 40 LPW. The average color temperature is 3200 K and the calculated CRI is 85. The device operated at a brightness of approximately 1000 cd/m{sup 2}. The use of multiple emission colors, particularly red and blue, provided additional degrees of design flexibility in achieving white light, but also required the use of a multilayered structure to separate the different recombination zones and prevent interconversion of blue emission to red emission. The use of commercial materials had the advantage that improvements by the chemical manufacturers in charge-transport efficiency, operating life, and material purity could be rapidly incorporated without the expenditure of additional effort. The program was designed to take maximum advantage of the known characteristics of these materials and proceeded in seven steps: (1) identify the most promising materials, (2) assemble them into multilayer structures to control excitation and transport within the OLED, (3) identify materials development needs that would optimize performance within multilayer structures, (4) build a prototype that demonstrates the potential entitlement of the novel multilayer OLED architecture, (5) integrate all of the developments to find the single best materials set to implement the novel multilayer architecture, (6) further optimize the best materials set, (7) make a large-area, high-illumination-quality white OLED. A photo of the final deliverable is shown

  17. Multiscale approaches to high efficiency photovoltaics

    Directory of Open Access Journals (Sweden)

    Connolly James Patrick

    2016-01-01

    While renewable energies are achieving parity around the globe, efforts to reach higher solar cell efficiencies become ever more difficult as they approach the limiting efficiency. The so-called third-generation concepts attempt to break this limit through a combination of novel physical processes and new materials and concepts in organic and inorganic systems. Some examples of semi-empirical modelling in the field are reviewed, in particular for multispectral solar cells on silicon (French ANR project MultiSolSi). Their achievements are outlined, and the limits of these approaches shown. This introduces the main topic of this contribution: the use of multiscale experimental and theoretical techniques to go beyond the semi-empirical understanding of these systems. This approach has already led to great advances in modelling and to widely known modelling software. Yet a survey of the topic reveals a fragmentation of efforts across disciplines, such as the organic and inorganic fields, but also between high-efficiency concepts such as hot-carrier cells and intermediate-band concepts. We show how this obstacle to the resolution of practical research problems may be lifted by interdisciplinary cooperation across length scales, across experimental and theoretical fields, and across materials systems. We present a European COST Action, "MultiscaleSolar", kicking off in early 2015, which brings together experimental and theoretical partners in order to develop multiscale research in organic and inorganic materials. The goal of this defragmentation and interdisciplinary collaboration is to develop understanding across length scales, which will enable the full potential of third-generation concepts to be evaluated in practice, for societal and industrial applications.

  18. Efficiently computing exact geodesic loops within finite steps.

    Science.gov (United States)

    Xin, Shi-Qing; He, Ying; Fu, Chi-Wing

    2012-06-01

    Closed geodesics, or geodesic loops, are crucial to the study of differential topology and differential geometry. Although the existence and properties of closed geodesics on smooth surfaces have been widely studied in the mathematics community, relatively little progress has been made on computing them on polygonal surfaces. Most existing algorithms simply consider the mesh as a graph, so the resulting loops are restricted to mesh edges, which are far from the actual geodesics. This paper is the first to prove the existence and uniqueness of a geodesic loop restricted to a closed face sequence; it also contributes an efficient algorithm to iteratively evolve an initial closed path on a given mesh into an exact geodesic loop within finitely many steps. Our proposed algorithm takes only O(k) space and, experimentally, O(mk) time, where m is the number of vertices in the region bounded by the initial loop and the resultant geodesic loop, and k is the average number of edges in the edge sequences that the evolving loop passes through. In contrast to existing geodesic curvature flow methods, which compute an approximate geodesic loop within a predefined threshold, our method is exact and applies directly to triangular meshes without needing to solve any differential equation with a numerical solver; it can run at interactive speed, e.g., on the order of milliseconds, for a mesh with around 50K vertices, and hence significantly outperforms existing algorithms. In fact, our algorithm can run at interactive speed even for larger meshes. Besides the complexity of the input mesh, the geometric shape can also affect the number of evolving steps, i.e., the performance. We motivate our algorithm with an interactive shape segmentation example shown later in the paper.

  19. High efficiency thin-film solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Schock, Hans-Werner [Helmholtz Zentrum Berlin (Germany). Solar Energy

    2012-11-01

    Production of photovoltaics is growing worldwide on a gigawatt scale. Among the thin-film technologies, Cu(In,Ga)(S,Se){sub 2} (CIS or CIGS) based solar cells have been the focus of more and more attention. This paper aims to analyze the success of CIGS-based solar cells and the potential of this technology for future large-scale photovoltaics production. Specific material properties make CIS unique and allow the preparation of the material with a wide range of processing options. The huge potential lies in the possibility of taking advantage of modern thin-film processing equipment and combining it with the very high efficiencies beyond 20% already achieved on the laboratory scale. A sustainable development of this technology could be realized by modifying the materials and replacing indium with abundant elements. (orig.)

  20. Design of a High-Efficiency MPPT Solar Inverter

    Directory of Open Access Journals (Sweden)

    Sunitha K. A.

    2017-01-01

    This work aims to design a high-efficiency maximum power point tracking (MPPT) solar inverter. A boost converter is designed in the system to boost the power from the photovoltaic panel. With this experimental setup, a room with a 500 W load (eight fluorescent tubes) is completely controlled, with the aim of decreasing maintenance cost. A microcontroller implements the perturb-and-observe (P&O) algorithm used for tracking the maximum power point. The duty cycle for the operation of the boost converter is optimally adjusted by the MPPT controller. An MPPT charge controller charges the battery and also feeds the inverter that runs the load. Both the P&O scheme with a fixed variation of the reference current and the intelligent MPPT algorithm were able to identify the global maximum power point; however, the performance of the MPPT algorithm was better.
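
    The core of the P&O scheme fits in a few lines. A minimal sketch follows; the read_voltage, read_current, and set_duty hooks are hypothetical hardware-access functions, and the step size and duty limits are illustrative rather than the values used in this work:

      def perturb_and_observe(read_voltage, read_current, set_duty,
                              duty=0.5, step=0.005, iterations=1000):
          """Perturb the boost-converter duty cycle and keep whichever
          direction increased the measured PV power."""
          prev_power = read_voltage() * read_current()
          direction = 1
          for _ in range(iterations):
              duty = min(max(duty + direction * step, 0.05), 0.95)
              set_duty(duty)
              power = read_voltage() * read_current()
              if power < prev_power:   # wrong way: reverse the perturbation
                  direction = -direction
              prev_power = power
          return duty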

  1. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen-Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2018-05-15

    A multi-petascale highly efficient parallel supercomputer of 100-petaflop scale includes node architectures based upon System-on-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provide global barrier and notification functions. The node design integrates a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that at the same time improves the soft error rate, and it supports DMA functionality, allowing for parallel message-passing.

  2. The CRRES high efficiency solar panel

    International Nuclear Information System (INIS)

    Trumble, T.M.

    1991-01-01

    This paper reports on the High Efficiency Solar Panel (HESP) experiment, which provides both engineering and scientific information concerning the effects of space radiation on advanced gallium arsenide (GaAs) solar cells. The HESP experiment consists of an ambient panel, an annealing panel, and a programmable load. This experiment, in conjunction with the radiation measurement experiments aboard the CRRES, provides the first opportunity to simultaneously measure the trapped radiation belts and the resulting radiation damage to solar cells. The engineering information will result in a design guide for selecting the optimum solar array characteristics for different orbits and different lifetimes. The scientific information will provide both a correlation of laboratory damage effects with space damage effects and a better model for predicting effective solar cell panel lifetimes

  3. Modeling Techniques for a Computational Efficient Dynamic Turbofan Engine Model

    Directory of Open Access Journals (Sweden)

    Rory A. Roberts

    2014-01-01

    A transient two-stream engine model has been developed. Individual component models developed exclusively in MATLAB/Simulink, including the fan, high-pressure compressor, combustor, high-pressure turbine, low-pressure turbine, plenum volumes, and exit nozzle, have been combined to investigate the behavior of a two-stream turbofan engine. Special attention has been paid to developing transient capabilities throughout the model, increasing the fidelity of the physics, eliminating algebraic constraints, and reducing simulation time by enabling the use of advanced numerical solvers. Reducing computation time is paramount for conducting future aircraft system-level design trade studies and optimization. The new engine model is simulated for a fuel perturbation and a specified mission while tracking critical parameters. These results, as well as the simulation times, are presented. The new approach significantly reduces the simulation time.
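
    As an illustration of the plenum-volume elements mentioned above, the standard lumped-volume pressure dynamics (a textbook form, not necessarily the exact formulation used in this model; all numbers are placeholders) can be stepped explicitly:

      # Lumped plenum: pressure responds to the mass-flow imbalance,
      # dp/dt = (gamma * R * T / V) * (mdot_in - mdot_out).
      GAMMA, R = 1.4, 287.0   # ratio of specific heats and gas constant, air

      def plenum_dpdt(mdot_in, mdot_out, T, V):
          return GAMMA * R * T / V * (mdot_in - mdot_out)

      # Explicit Euler stand-in for Simulink's variable-step solvers.
      p, dt = 101_325.0, 1e-4
      for _ in range(1000):
          p += dt * plenum_dpdt(mdot_in=10.0, mdot_out=9.8, T=300.0, V=0.05)
      print(f"plenum pressure after 0.1 s: {p:.0f} Pa")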

  4. Efficiency of High Order Spectral Element Methods on Petascale Architectures

    KAUST Repository

    Hutchinson, Maxwell; Heinecke, Alexander; Pabst, Hans; Henry, Greg; Parsani, Matteo; Keyes, David E.

    2016-01-01

    High order methods for the solution of PDEs expose a tradeoff between computational cost and accuracy on a per-degree-of-freedom basis. In many cases, the cost increases due to higher arithmetic intensity while affecting data movement minimally. As architectures tend towards wider vector instructions and expect higher arithmetic intensities, the best order for a particular simulation may change. This study highlights preferred orders by identifying the high order efficiency frontier of the spectral element method implemented in Nek5000 and NekBox: the set of orders and meshes that minimize computational cost at fixed accuracy. First, we extract Nek’s order-dependent computational kernels and demonstrate exceptional hardware utilization by hardware-aware implementations. Then, we perform production-scale calculations of the nonlinear single mode Rayleigh-Taylor instability on BlueGene/Q and Cray XC40-based supercomputers to highlight the influence of the architecture. Accuracy is defined with respect to physical observables, and computational costs are measured by the core-hour charge of the entire application. The total number of grid points needed to achieve a given accuracy is reduced by increasing the polynomial order. On the XC40 and BlueGene/Q, polynomial orders as high as 31 and 15 come at no marginal cost per timestep, respectively. Taken together, these observations lead to a strong preference for high order discretizations that use fewer degrees of freedom. From a performance point of view, we demonstrate up to 60% full application bandwidth utilization at scale and achieve ≈1 PFlop/s of compute performance in Nek’s most flop-intense methods.
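
    Selecting a point on such an efficiency frontier is a simple filter-and-minimize over measured runs. A sketch with fabricated (order, core-hour, error) samples, purely to show the selection logic:

      # Hypothetical (polynomial order, core-hours, observable error) runs.
      runs = [
          (7,  120.0, 3.0e-3),
          (11,  95.0, 9.0e-4),
          (15,  90.0, 2.5e-4),
          (31, 160.0, 4.0e-5),
      ]

      def cheapest_at_accuracy(runs, tol):
          """Lowest-cost run whose error meets the tolerance: the point on
          the efficiency frontier at this accuracy."""
          feasible = [r for r in runs if r[2] <= tol]
          return min(feasible, key=lambda r: r[1]) if feasible else None

      print(cheapest_at_accuracy(runs, tol=1e-3))  # -> (15, 90.0, 2.5e-4)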

  5. Efficiency of High Order Spectral Element Methods on Petascale Architectures

    KAUST Repository

    Hutchinson, Maxwell

    2016-06-14

    High order methods for the solution of PDEs expose a tradeoff between computational cost and accuracy on a per-degree-of-freedom basis. In many cases, the cost increases due to higher arithmetic intensity while affecting data movement minimally. As architectures tend towards wider vector instructions and expect higher arithmetic intensities, the best order for a particular simulation may change. This study highlights preferred orders by identifying the high order efficiency frontier of the spectral element method implemented in Nek5000 and NekBox: the set of orders and meshes that minimize computational cost at fixed accuracy. First, we extract Nek’s order-dependent computational kernels and demonstrate exceptional hardware utilization by hardware-aware implementations. Then, we perform production-scale calculations of the nonlinear single mode Rayleigh-Taylor instability on BlueGene/Q and Cray XC40-based supercomputers to highlight the influence of the architecture. Accuracy is defined with respect to physical observables, and computational costs are measured by the core-hour charge of the entire application. The total number of grid points needed to achieve a given accuracy is reduced by increasing the polynomial order. On the XC40 and BlueGene/Q, polynomial orders as high as 31 and 15 come at no marginal cost per timestep, respectively. Taken together, these observations lead to a strong preference for high order discretizations that use fewer degrees of freedom. From a performance point of view, we demonstrate up to 60% full application bandwidth utilization at scale and achieve ≈1 PFlop/s of compute performance in Nek’s most flop-intense methods.

  6. Zerodur polishing process for high surface quality and high efficiency

    International Nuclear Information System (INIS)

    Tesar, A.; Fuchs, B.

    1992-08-01

    Zerodur is a glass-ceramic composite of importance in applications where temperature instabilities influence optical and mechanical performance, such as in earthbound and spaceborne telescope mirror substrates. Polished Zerodur surfaces of high quality have been required for laser gyro mirrors. The polished surface quality of substrates affects the performance of high-reflection coatings. Thus, the interest in improving Zerodur polished surface quality has become more general. Beyond eliminating subsurface damage, high quality surfaces are produced by reducing the amount of hydrated material redeposited on the surface during polishing. With proper control of polishing parameters, such surfaces exhibit roughnesses of < 1 Angstrom rms. Zerodur polishing was studied to recommend a high-surface-quality polishing process which could be easily adapted to standard planetary continuous polishing machines and spindles. This summary contains information on a polishing process developed at LLNL which reproducibly provides high quality polished Zerodur surfaces at very high polishing efficiencies

  7. High-efficiency silicon solar cells for low-illumination applications

    OpenAIRE

    Glunz, S.W.; Dicker, J.; Esterle, M.; Hermle, M.; Isenberg, J.; Kamerewerd, F.; Knobloch, J.; Kray, D.; Leimenstoll, A.; Lutz, F.; Oßwald, D.; Preu, R.; Rein, S.; Schäffer, E.; Schetter, C.

    2002-01-01

    At Fraunhofer ISE the fabrication of high-efficiency solar cells was extended from a laboratory scale to a small pilot-line production. Primarily, the fabricated cells are used in small high-efficiency modules integrated in prototypes of solar-powered portable electronic devices such as cellular phones, handheld computers etc. Compared to other applications of high-efficiency cells such as solar cars and planes, the illumination densities found in these mainly indoor applications are signific...

  8. An accurate and computationally efficient small-scale nonlinear FEA of flexible risers

    OpenAIRE

    Rahmati, MT; Bahai, H; Alfano, G

    2016-01-01

    This paper presents a highly efficient small-scale, detailed finite-element modelling method for flexible risers which can be effectively implemented in a fully-nested (FE2) multiscale analysis based on computational homogenisation. By exploiting cyclic symmetry and applying periodic boundary conditions, only a small fraction of a flexible pipe is used for a detailed nonlinear finite-element analysis at the small scale. In this model, using three-dimensional elements, all layer components are...

  9. Smoothing the payoff for efficient computation of Basket option prices

    KAUST Repository

    Bayer, Christian

    2017-07-22

    We consider the problem of pricing basket options in a multivariate Black–Scholes or Variance-Gamma model. From a numerical point of view, pricing such options corresponds to moderate and high-dimensional numerical integration problems with non-smooth integrands. Due to this lack of regularity, higher order numerical integration techniques may not be directly available, requiring the use of methods like Monte Carlo specifically designed to work for non-regular problems. We propose to use the inherent smoothing property of the density of the underlying in the above models to mollify the payoff function by means of an exact conditional expectation. The resulting conditional expectation is unbiased and yields a smooth integrand, which is amenable to the efficient use of adaptive sparse-grid cubature. Numerical examples indicate that the high-order method may perform orders of magnitude faster than Monte Carlo or Quasi Monte Carlo methods in dimensions up to 35.
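
    For reference, the baseline that such smoothing accelerates is plain Monte Carlo with a kinked payoff. A minimal sketch under assumed parameters (5 assets, uniform correlation; not the paper's test cases), where the max(., 0) kink is exactly the non-smoothness the mollification removes:

      import numpy as np

      rng = np.random.default_rng(7)
      d, S0, K, r, sigma, T = 5, 100.0, 100.0, 0.02, 0.2, 1.0
      L = np.linalg.cholesky(np.full((d, d), 0.3) + 0.7 * np.eye(d))
      w = np.full(d, 1.0 / d)   # equal basket weights

      def mc_basket_call(n_paths=200_000):
          Z = rng.standard_normal((n_paths, d)) @ L.T
          ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
          payoff = np.maximum(ST @ w - K, 0.0)   # non-smooth kink
          return np.exp(-r * T) * payoff.mean()

      print(f"basket call price ~ {mc_basket_call():.3f}")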

  10. Computationally efficient model predictive control algorithms a neural network approach

    CERN Document Server

    Ławryńczuk, Maciej

    2014-01-01

    This book thoroughly discusses computationally efficient (suboptimal) Model Predictive Control (MPC) techniques based on neural models. The subjects treated include: a few types of suboptimal MPC algorithms in which a linear approximation of the model or of the predicted trajectory is successively calculated on-line and used for prediction; implementation details of the MPC algorithms for feedforward perceptron neural models, neural Hammerstein models, neural Wiener models and state-space neural models; the MPC algorithms based on neural multi-models (inspired by the idea of predictive control); the MPC algorithms with neural approximation with no on-line linearization; the MPC algorithms with guaranteed stability and robustness; and cooperation between the MPC algorithms and set-point optimization. Thanks to linearization (or neural approximation), the presented suboptimal algorithms do not require d...

  11. Computationally Efficient Nonlinear Bell Inequalities for Quantum Networks

    Science.gov (United States)

    Luo, Ming-Xing

    2018-04-01

    The correlations in quantum networks have attracted strong interest with new types of violations of locality. The standard Bell inequalities cannot characterize the multipartite correlations that are generated by multiple sources. The main problem is that no computationally efficient method is available for constructing useful Bell inequalities for general quantum networks. In this work, we show a significant improvement by presenting new, explicit Bell-type inequalities for general networks including cyclic networks. These nonlinear inequalities are related to the matching problem of an equivalent unweighted bipartite graph, which allows constructing a polynomial-time algorithm. For quantum resources consisting of bipartite entangled pure states and generalized Greenberger-Horne-Zeilinger (GHZ) states, we prove the generic nonmultilocality of quantum networks with multiple independent observers using the new Bell inequalities. The violations are maximal with respect to the presented Tsirelson's bound for Einstein-Podolsky-Rosen states and GHZ states. Moreover, these violations hold for Werner states and some general noisy states. Our results suggest that the presented Bell inequalities can be used to characterize experimental quantum networks.

  12. Highly efficient red electrophosphorescent devices at high current densities

    International Nuclear Information System (INIS)

    Wu Youzhi; Zhu Wenqing; Zheng Xinyou; Sun, Runguang; Jiang Xueyin; Zhang Zhilin; Xu Shaohong

    2007-01-01

    The efficiency decrease at high current densities in red electrophosphorescent devices is drastically restrained, compared with conventional electrophosphorescent devices, by using bis(2-methyl-8-quinolinato)4-phenylphenolate aluminum (BAlq) as a hole and exciton blocker. The Ir complex bis(2-(2'-benzo[4,5-α]thienyl)pyridinato-N,C{sup 3'})iridium(acetylacetonate) is used as the emitter; a maximum external quantum efficiency (QE) of 7.0% and a luminance of 10000 cd/m{sup 2} are obtained. The QE is still as high as 4.1% at the higher current density J = 100 mA/cm{sup 2}. The CIE 1931 coordinates are (0.672, 0.321). A carrier-trapping mechanism is revealed to dominate the electroluminescence process

  13. White LED with High Package Extraction Efficiency

    International Nuclear Information System (INIS)

    Yi Zheng; Stough, Matthew

    2008-01-01

    The goal of this project is to develop a high-efficiency phosphor-converting (white) Light Emitting Diode (pcLED) 1-Watt package through an increase in package extraction efficiency. A transparent/translucent monolithic phosphor is proposed to replace the powdered phosphor to reduce the scattering caused by phosphor particles. Additionally, a multi-layer thin-film selectively reflecting filter is proposed between the blue LED die and the phosphor layer to recover inward yellow emission. At the end of the project we expect to recycle approximately 50% of the unrecovered backward light in current package construction, and to develop a pcLED device with 80 lm/W{sub e} using our technology improvements and a commercially available chip/package source. The success of the project will benefit the luminous efficacy of white LEDs by increasing package extraction efficiency. In most phosphor-converting white LEDs, the white color is obtained by combining a blue LED die (or chip) with a powdered phosphor layer. The phosphor partially absorbs the blue light from the LED die and converts it into a broad green-yellow emission. The mixture of the transmitted blue light and the emerging green-yellow light gives white light. There are two major drawbacks of current pcLEDs in terms of package extraction efficiency. The first is light scattering caused by phosphor particles. When the blue photons from the chip strike the phosphor particles, some blue light will be scattered by the phosphor particles. Converted yellow emission photons are also scattered. A portion of the scattered light travels backward toward the die. The amount of this backward light varies and depends in part on the particle size of the phosphor. The other drawback is that yellow emission from phosphor powders is isotropic. Although some backward light can be recovered by the reflector in current LED packages, a portion of the backward light will still be absorbed inside the package and further converted to heat. Heat generated

  14. Grid computing in high energy physics

    CERN Document Server

    Avery, P

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software r...

  15. Tailored Materials for High Efficiency CIDI Engines

    Energy Technology Data Exchange (ETDEWEB)

    Grant, G.J.; Jana, S.

    2012-03-30

    The overall goal of the project, Tailored Materials for High Efficiency Compression Ignition Direct Injection (CIDI) Engines, is to enable the implementation of new combustion strategies, such as homogeneous charge compression ignition (HCCI), that have the potential to significantly increase the energy efficiency of current diesel engines and decrease fuel consumption and environmental emissions. These strategies, however, are increasing the demands on conventional engine materials, either from increases in peak cylinder pressure (PCP) or from increases in the temperature of operation. The specific objective of this project is to investigate the application of a new material processing technology, friction stir processing (FSP), to improve the thermal and mechanical properties of engine components. The concept is to modify the surfaces of conventional, low-cost engine materials. The project focused primarily on FSP in aluminum materials that are compositional analogs to the typical piston and head alloys seen in small- to mid-sized CIDI engines. Investigations have been primarily of two types over the duration of this project: (1) FSP of a cast hypoeutectic Al-Si-Mg (A356/357) alloy with no introduction of any new components, and (2) FSP of Al-Cu-Ni alloys (Alloy 339) by physically stirring-in various quantities of carbon nanotubes/nanofibers or carbon fibers. Experimental work to date on aluminum systems has shown significant increases in fatigue lifetime and stress-level performance in aluminum-silicon alloys using friction processing alone, but work to demonstrate the addition of carbon nanotubes and fibers into aluminum substrates has shown mixed results due primarily to the difficulty in achieving porosity-free, homogeneous distributions of the particulate. A limited effort to understand the effects of FSP on steel materials was also undertaken during the course of this project. Processed regions were created in high-strength, low-alloyed steels up to 0.5 in

  16. Computing in high-energy physics

    International Nuclear Information System (INIS)

    Mount, Richard P.

    2016-01-01

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Lastly, I describe recent developments aimed at improving the overall coherence of high-energy physics software

  17. Computing in high-energy physics

    Science.gov (United States)

    Mount, Richard P.

    2016-04-01

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Finally, I describe recent developments aimed at improving the overall coherence of high-energy physics software.

  18. High efficiency diffusion molecular retention tumor targeting.

    Directory of Open Access Journals (Sweden)

    Yanyan Guo

    Here we introduce diffusion molecular retention (DMR) tumor targeting, a technique that employs PEG-fluorochrome-shielded probes that, after a peritumoral (PT) injection, undergo slow vascular uptake and extensive interstitial diffusion, with tumor retention only through integrin molecular recognition. To demonstrate DMR, RGD (integrin-binding) and RAD (control) probes were synthesized bearing DOTA (for (111)In(3+)), a NIR fluorochrome, and 5 kDa PEG that endows the probes with a protein-like volume of 25 kDa and decreases non-specific interactions. With a GFP-BT-20 breast carcinoma model, tumor targeting by the DMR or i.v. methods was assessed by surface fluorescence, biodistribution of [(111)In]RGD and [(111)In]RAD probes, and whole-animal SPECT. After a PT injection, both probes rapidly diffused through the normal and tumor interstitium, with retention of the RGD probe due to integrin interactions. With PT injection and the [(111)In]RGD probe, SPECT indicated a highly tumor-specific uptake at 24 h post injection, with 352 %ID/g tumor obtained by DMR (vs 4.14 %ID/g by i.v.). The high-efficiency molecular targeting of DMR employed low probe doses (e.g. 25 ng as RGD peptide), which minimizes toxicity risks and facilitates clinical translation. DMR applications include the delivery of fluorochromes for intraoperative tumor margin delineation, the delivery of radioisotopes (e.g. toxic, short-range alpha emitters) for radiotherapy, or the delivery of photosensitizers to tumors accessible to light.

  19. High collection efficiency CVD diamond alpha detectors

    International Nuclear Information System (INIS)

    Bergonzo, P.; Foulon, F.; Marshall, R.D.; Jany, C.; Brambilla, A.; McKeag, R.D.; Jackman, R.B.

    1998-01-01

    Advances in Chemical Vapor Deposited (CVD) diamond have enabled the routine use of this material for sensor device fabrication, allowing exploitation of its unique combination of physical properties (low temperature susceptibility (> 500 C), high resistance to radiation damage (> 100 Mrad) and to corrosive media). A consequence of CVD diamond growth on silicon is the formation of polycrystalline films which has a profound influence on the physical and electronic properties with respect to those measured on monocrystalline diamond. The authors report the optimization of physical and geometrical device parameters for radiation detection in the counting mode. Sandwich and co-planar electrode geometries are tested and their performances evaluated with regard to the nature of the field profile and drift distances inherent in such devices. The carrier drift length before trapping was measured under alpha particles and values as high as 40% of the overall film thickness are reported. Further, by optimizing the device geometry, they show that a gain in collection efficiency, defined as the induced charge divided by the deposited charge within the material, can be achieved even though lower bias values are used

  20. AHPCRC - Army High Performance Computing Research Center

    Science.gov (United States)

    2010-01-01

    Of particular interest is the ability of a distributed jamming network (DJN) to jam signals in all or part of a sensor or communications network.

  1. DURIP: High Performance Computing in Biomathematics Applications

    Science.gov (United States)

    2017-05-10

    The goal of this award was to enhance the capabilities of the Department of Applied Mathematics and Statistics (AMS) at the University of California, Santa Cruz (UCSC) to conduct research and research-related education in areas of biomathematics applications.

  2. The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency

    Science.gov (United States)

    Oder, Karl; Pittman, Stephanie

    2015-01-01

    Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…

  3. High bandgap III-V alloys for high efficiency optoelectronics

    Energy Technology Data Exchange (ETDEWEB)

    Alberi, Kirstin; Mascarenhas, Angelo; Wanlass, Mark

    2017-01-10

    High bandgap alloys for high efficiency optoelectronics are disclosed. An exemplary optoelectronic device may include a substrate, at least one Al.sub.1-xIn.sub.xP layer, and a step-grade buffer between the substrate and at least one Al.sub.1-xIn.sub.xP layer. The buffer may begin with a layer that is substantially lattice matched to GaAs, and may then incrementally increase the lattice constant in each sequential layer until a predetermined lattice constant of Al.sub.1-xIn.sub.xP is reached.

  4. Performance of a high efficiency high power UHF klystron

    International Nuclear Information System (INIS)

    Konrad, G.T.

    1977-03-01

    A 500 kW c-w klystron was designed for the PEP storage ring at SLAC. The tube operates at 353.2 MHz, 62 kV, a microperveance of 0.75, and a gain of approximately 50 dB. Stable operation is required for a VSWR as high as 2 : 1 at any phase angle. The design efficiency is 70%. To obtain this value of efficiency, a second harmonic cavity is used in order to produce a very tightly bunched beam in the output gap. At the present time it is planned to install 12 such klystrons in PEP. A tube with a reduced size collector was operated at 4% duty at 500 kW. An efficiency of 63% was observed. The same tube was operated up to 200 kW c-w for PEP accelerator cavity tests. A full-scale c-w tube reached 500 kW at 65 kV with an efficiency of 55%. In addition to power and phase measurements into a matched load, some data at various load mismatches are presented
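
    The quoted design efficiency is consistent with the beam power implied by the stated perveance. A quick check using the standard definitions (the arithmetic, not the tube design, is the point here):

      # Beam current from perveance: I = K * V**1.5.
      V = 62e3      # beam voltage [V]
      K = 0.75e-6   # microperveance 0.75 [A/V^1.5]
      I = K * V**1.5                 # about 11.6 A
      P_beam = V * I                 # about 718 kW
      print(f"efficiency at 500 kW output: {500e3 / P_beam:.0%}")  # ~70%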

  5. GRID computing for experimental high energy physics

    International Nuclear Information System (INIS)

    Moloney, G.R.; Martin, L.; Seviour, E.; Taylor, G.N.; Moorhead, G.F.

    2002-01-01

    Full text: The Large Hadron Collider (LHC), to be completed at the CERN laboratory in 2006, will generate 11 petabytes of data per year. The processing of this large data stream requires a large, distributed computing infrastructure. A recent innovation in high performance distributed computing, the GRID, has been identified as an important tool in data analysis for the LHC. GRID computing has actual and potential application in many fields which require computationally intensive analysis of large, shared data sets. The Australian experimental High Energy Physics community has formed partnerships with the High Performance Computing community to establish a GRID node at the University of Melbourne. Through Australian membership of the ATLAS experiment at the LHC, Australian researchers have an opportunity to be involved in the European DataGRID project. This presentation will include an introduction to the GRID, and its application to experimental High Energy Physics. We will present the results of our studies, including participation in the first LHC data challenge

  6. Series-Tuned High Efficiency RF-Power Amplifiers

    DEFF Research Database (Denmark)

    Vidkjær, Jens

    2008-01-01

    An approach to high-efficiency RF power amplifier design is presented. It simultaneously addresses efficiency optimization and peak voltage limitations when transistors are pushed towards their power limits.

  7. Balancing Accuracy and Computational Efficiency for Ternary Gas Hydrate Systems

    Science.gov (United States)

    White, M. D.

    2011-12-01

    phase transitions. This paper describes and demonstrates a numerical solution scheme for ternary hydrate systems that seeks a balance between accuracy and computational efficiency. The scheme uses a generalized cubic equation of state, functional forms for the hydrate equilibria and cage occupancies, a variable-switching scheme for phase transitions, and kinetic exchange of hydrate formers (i.e., CH4, CO2, and N2) between the mobile phases (i.e., aqueous, liquid CO2, and gas) and the hydrate phase. Accuracy of the scheme will be evaluated by comparing property values and phase equilibria against experimental data. Computational efficiency of the scheme will be evaluated by comparing the base scheme against variants. The application of interest is the production of a natural gas hydrate deposit from a geologic formation using the guest-molecule exchange process, where a mixture of CO2 and N2 is injected into the formation. During the guest-molecule exchange, CO2 and N2 will predominantly replace CH4 in the large and small cages of the sI structure, respectively.

  8. A strategy for improved computational efficiency of the method of anchored distributions

    Science.gov (United States)

    Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram

    2013-06-01

    This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
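
    The bundling idea can be caricatured in a few lines: run the forward model once per bundle of similar parametrizations and let all members share that likelihood value. The sketch below uses a toy Gaussian likelihood with k-means bundling of prior draws; the forward model and every number are invented for illustration, and this is not the MAD algorithm itself:

      import numpy as np
      from scipy.cluster.vq import kmeans2

      rng = np.random.default_rng(3)

      def forward_model(theta):   # hypothetical expensive simulator
          return np.array([theta[0] + theta[1], theta[0] * theta[1]])

      obs, noise = np.array([1.5, 0.5]), 0.1
      samples = rng.normal(0.7, 0.3, size=(5000, 2))   # prior draws

      # Bundle similar parametrizations: 50 forward runs instead of 5000.
      centroids, label = kmeans2(samples, k=50, minit="points", seed=3)
      outputs = np.array([forward_model(c) for c in centroids])

      # Members of a bundle share the likelihood of its representative.
      resid = outputs[label] - obs
      loglik = -0.5 * np.sum((resid / noise) ** 2, axis=1)
      weights = np.exp(loglik - loglik.max())
      print(np.average(samples, axis=0, weights=weights))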

  9. Computationally efficient models of neuromuscular recruitment and mechanics.

    Science.gov (United States)

    Song, D; Raphael, G; Lan, N; Loeb, G E

    2008-06-01

    We have improved the stability and computational efficiency of a physiologically realistic, virtual muscle (VM 3.*) model (Cheng et al 2000 J. Neurosci. Methods 101 117-30) by a simpler structure of lumped fiber types and a novel recruitment algorithm. In the new version (VM 4.0), the mathematical equations are reformulated into state-space representation and structured into a CMEX S-function in SIMULINK. A continuous recruitment scheme approximates the discrete recruitment of slow and fast motor units under physiological conditions. This makes it possible to predict force output during smooth recruitment and derecruitment without having to simulate explicitly a large number of independently recruited units. We removed the intermediate state variable, effective length (Leff), which had been introduced to model the delayed length dependency of the activation-frequency relationship, but which had little effect and could introduce instability under physiological conditions of use. Both of these changes greatly reduce the number of state variables with little loss of accuracy compared to the original VM. The performance of VM 4.0 was validated by comparison with VM 3.1.5 for both single-muscle force production and a multi-joint task. The improved VM 4.0 model is more suitable for the analysis of neural control of movements and for design of prosthetic systems to restore lost or impaired motor functions. VM 4.0 is available via the internet and includes options to use the original VM model, which remains useful for detailed simulations of single motor unit behavior.
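
    The continuous recruitment idea can be sketched with two lumped fiber types whose activations ramp smoothly as the excitatory drive rises; the thresholds and maximal forces below are illustrative placeholders, not the VM 4.0 parameter set:

      import numpy as np

      def lumped_recruitment(u, u_fast=0.5):
          """Slow units recruit from zero drive; fast units join smoothly
          once drive u exceeds u_fast, so force rises without steps."""
          a_slow = np.clip(u, 0.0, 1.0)
          a_fast = np.clip((u - u_fast) / (1.0 - u_fast), 0.0, 1.0)
          return a_slow, a_fast

      def muscle_force(u, f_slow=30.0, f_fast=70.0):  # peak forces [N]
          a_slow, a_fast = lumped_recruitment(u)
          return a_slow * f_slow + a_fast * f_fast

      for u in (0.2, 0.5, 0.8, 1.0):
          print(f"drive {u:.1f} -> force {muscle_force(u):5.1f} N")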

  10. Computationally efficient models of neuromuscular recruitment and mechanics

    Science.gov (United States)

    Song, D.; Raphael, G.; Lan, N.; Loeb, G. E.

    2008-06-01

    We have improved the stability and computational efficiency of a physiologically realistic, virtual muscle (VM 3.*) model (Cheng et al 2000 J. Neurosci. Methods 101 117-30) by a simpler structure of lumped fiber types and a novel recruitment algorithm. In the new version (VM 4.0), the mathematical equations are reformulated into state-space representation and structured into a CMEX S-function in SIMULINK. A continuous recruitment scheme approximates the discrete recruitment of slow and fast motor units under physiological conditions. This makes it possible to predict force output during smooth recruitment and derecruitment without having to simulate explicitly a large number of independently recruited units. We removed the intermediate state variable, effective length (Leff), which had been introduced to model the delayed length dependency of the activation-frequency relationship, but which had little effect and could introduce instability under physiological conditions of use. Both of these changes greatly reduce the number of state variables with little loss of accuracy compared to the original VM. The performance of VM 4.0 was validated by comparison with VM 3.1.5 for both single-muscle force production and a multi-joint task. The improved VM 4.0 model is more suitable for the analysis of neural control of movements and for design of prosthetic systems to restore lost or impaired motor functions. VM 4.0 is available via the internet and includes options to use the original VM model, which remains useful for detailed simulations of single motor unit behavior.

  11. Energy Efficient Graphene Based High Performance Capacitors.

    Science.gov (United States)

    Bae, Joonwon; Kwon, Oh Seok; Lee, Chang-Soo

    2017-07-10

    Graphene (GRP) is an interesting class of nano-structured electronic materials for various cutting-edge applications. To date, extensive research has been performed on the investigation of the diverse properties of GRP. The incorporation of this elegant material can be very lucrative for practical applications in energy storage/conversion systems. Among those various systems, high performance electrochemical capacitors (ECs) have become popular due to the recent need for energy efficient and portable devices. Therefore, in this article, the application of GRP to capacitors is described succinctly. In particular, a summary of previous research activities on GRP based capacitors is provided. It was revealed that many secondary materials, such as polymers and metal oxides, have been introduced to improve performance. Diverse devices have also been combined with capacitors for better use. More importantly, recent patents related to the preparation and application of GRP based capacitors are introduced briefly. This article can provide essential information for future study.

  12. High-performance computing for airborne applications

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Manuzatto, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-01-01

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  13. A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.

    Science.gov (United States)

    Wehner, M. F.; Oliker, L.; Shalf, J.

    2008-12-01

    Exascale computers would allow routine ensemble modeling of the global climate system at the cloud-system-resolving scale. The power and cost requirements of traditional-architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra-high-resolution climate modeling. These power efficient processors, used in consumer electronic devices such as mobile phones, portable music players, and cameras, can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer-scale climate model a thousand times faster than real time could be designed and built within five years for US$75M, with a power consumption of 3 MW. This is cheaper, more power efficient, and available sooner than any other existing technology.

  14. Highly efficient silicon light emitting diode

    NARCIS (Netherlands)

    Le Minh, P.; Holleman, J.; Wallinga, Hans

    2002-01-01

    In this paper, we describe the fabrication, using standard silicon processing techniques, of silicon light-emitting diodes (LEDs) that efficiently emit photons with energy around the silicon bandgap. The improved efficiency has been explained by the spatial confinement of charge carriers due to a

  15. A novel high efficiency solar photovoltaic pump

    NARCIS (Netherlands)

    Diepens, J.F.L.; Smulders, P.T.; Vries, de D.A.

    1993-01-01

    The daily average overall efficiency of a solar pump system is not only influenced by the maximum efficiency of the components of the system, but just as much by the correct matching of the components to the local irradiation pattern. Normally centrifugal pumps are used in solar pump systems. The

  16. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    Science.gov (United States)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being carried out to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  17. Efficient computation of the joint sample frequency spectra for multiple populations.

    Science.gov (United States)

    Kamm, John A; Terhorst, Jonathan; Song, Yun S

    2017-01-01

    A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
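
    For orientation, the quantity being computed has a simple closed form in the most basic setting. The sketch below gives the expected unfolded SFS for a single neutral, panmictic, constant-size population, E[xi_i] = theta/i (Fu 1995); momi's contribution is computing the analogous joint quantity for many populations under complex demography, where no such closed form exists.

```python
import numpy as np

def expected_sfs(n, theta):
    """Expected unfolded site frequency spectrum for one neutral,
    constant-size population: E[xi_i] = theta / i, for i = 1..n-1
    derived-allele copies in a sample of n sequences."""
    i = np.arange(1, n)
    return theta / i

# sanity check: the spectrum sums to theta times the harmonic number a_{n-1}
sfs = expected_sfs(10, theta=1.0)
assert np.isclose(sfs.sum(), sum(1.0 / k for k in range(1, 10)))
```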

  18. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  19. High efficiency targets for high gain inertial confinement fusion

    International Nuclear Information System (INIS)

    Gardner, J.H.; Bodner, S.E.

    1986-01-01

    Rocket efficiencies as high as 15% are possible using short wavelength lasers and moderately high aspect ratio pellet designs. These designs are made possible by two recent breakthroughs in physics constraints. First is the development of the Induced Spatial Incoherence (ISI) technique which allows uniform illumination of the pellet and relaxes the constraint of thermal smoothing, permitting the use of short wavelength laser light. Second is the discovery that the Rayleigh-Taylor growth rate is considerably reduced at the short laser wavelengths. By taking advantage of the reduced constraints imposed by nonuniform laser illumination and Rayleigh-Taylor instability, pellets using 1/4 micron laser light and initial aspect ratios of about 10 (with in flight aspect ratios of about 150 to 200) may produce energy gains as high as 200 to 250

  20. High power klystrons for efficient reliable high power amplifiers

    Science.gov (United States)

    Levin, M.

    1980-11-01

    This report covers the design of reliable high efficiency, high power klystrons which may be used in both existing and proposed troposcatter radio systems. High Power (10 kW) klystron designs were generated in C-band (4.4 GHz to 5.0 GHz), S-band (2.5 GHz to 2.7 GHz), and L-band or UHF frequencies (755 MHz to 985 MHz). The tubes were designed for power supply compatibility and use with a vapor/liquid phase heat exchanger. Four (4) S-band tubes were developed in the course of this program along with two (2) matching focusing solenoids and two (2) heat exchangers. These tubes use five (5) tuners with counters which are attached to the focusing solenoids. A reliability mathematical model of the tube and heat exchanger system was also generated.

  1. The green computing book tackling energy efficiency at large scale

    CERN Document Server

    Feng, Wu-chun

    2014-01-01

    Contents: Low-Power, Massively Parallel, Energy-Efficient Supercomputers (The Blue Gene Team); Compiler-Driven Energy Efficiency (Mahmut Kandemir and Shekhar Srikantaiah); An Adaptive Run-Time System for Improving Energy Efficiency (Chung-Hsing Hsu, Wu-chun Feng, and Stephen W. Poole); Energy-Efficient Multithreading through Run-Time Adaptation; Exploring Trade-Offs between Energy Savings and Reliability in Storage Systems (Ali R. Butt, Puranjoy Bhattacharjee, Guanying Wang, and Chris Gniady); Cross-Layer Power Management (Zhikui Wang and Parthasarathy Ranganathan); Energy-Efficient Virtualized Systems (Ripal Nathuji and K

  2. The computational optimization of heat exchange efficiency in stack chimneys

    Energy Technology Data Exchange (ETDEWEB)

    Van Goch, T.A.J.

    2012-02-15

    For many industrial processes, the chimney is the final step before hot fumes, with high thermal energy content, are discharged into the atmosphere. Tapping into this energy and utilizing it for heating or cooling applications could improve sustainability and efficiency and/or reduce operational costs. Alternatively, an unused chimney, like the monumental chimney at the Eindhoven University of Technology, could serve as an 'energy channeler' once more; it can enhance free cooling by exploiting the stack effect. This study aims to identify design parameters that influence annual heat exchange in such stack chimney applications and to optimize these parameters for specific scenarios to maximize performance. Performance is defined by annual heat exchange, system efficiency, and costs. The ratio of the energy exchanged to the energy required for the water pump defines the system efficiency, which is expressed as an efficiency coefficient (EC). This study is an example of applying building performance simulation (BPS) tools for decision support in the early phase of the design process. In this study, BPS tools are used to provide design guidance, performance evaluation, and optimization. A general method for the optimization of simulation models is studied and applied in two case studies with different applications (heating/cooling), namely: (1) CERES case: 'the Eindhoven University of Technology monumental stack chimney, equipped with a heat exchanger, rejects heat to load the cold source of the aquifer system on the campus of the university and/or provides free cooling to the CERES building'; and (2) Industrial case: 'a heat exchanger in an industrial stack chimney recoups heat for use in e.g. absorption cooling'. The main research question, addressing the concerns of both cases, is expressed as follows: 'what is the optimal set of design parameters so heat exchange in stack chimneys is optimized annually for the cases in which a

  3. High performance parallel computers for science

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 Mflops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter and intra crate communication. The crates are connected in a hypercube. Site oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction

  4. High efficiency quasi-monochromatic infrared emitter

    Energy Technology Data Exchange (ETDEWEB)

    Brucoli, Giovanni; Besbes, Mondher; Benisty, Henri, E-mail: henri.benisty@institutoptique.fr; Greffet, Jean-Jacques [Laboratoire Charles Fabry, UMR 8501, Institut d’Optique, CNRS, Université Paris-Sud 11, 2, Avenue Augustin Fresnel, 91127 Palaiseau Cedex (France); Bouchon, Patrick; Haïdar, Riad [Office National d’Études et de Recherches Aérospatiales, Chemin de la Hunière, 91761 Palaiseau (France)

    2014-02-24

    Incandescent radiation sources are widely used as mid-infrared emitters owing to the lack of alternatives for compact and low cost sources. A drawback of miniature hot systems such as membranes is their low efficiency, e.g., for battery powered systems. For targeted narrow-band applications such as gas spectroscopy, the efficiency is even lower. In this paper, we introduce design rules valid for very generic membranes, demonstrating that their energy efficiency for use as incandescent infrared sources can be increased by two orders of magnitude.

  5. Efficient quantum computation in a network with probabilistic gates and logical encoding

    DEFF Research Database (Denmark)

    Borregaard, J.; Sørensen, A. S.; Cirac, J. I.

    2017-01-01

    An approach to efficient quantum computation with probabilistic gates is proposed and analyzed in both a local and nonlocal setting. It combines heralded gates previously studied for atom or atomlike qubits with logical encoding from linear optical quantum computation in order to perform high-fidelity quantum gates across a quantum network. The error-detecting properties of the heralded operations ensure high fidelity while the encoding makes it possible to correct for failed attempts such that deterministic and high-quality gates can be achieved. Importantly, this is robust to photon loss, which is typically the main obstacle to photonic-based quantum information processing. Overall this approach opens a path toward quantum networks with atomic nodes and photonic links.

  6. Dendritic nonlinearities are tuned for efficient spike-based computations in cortical circuits.

    Science.gov (United States)

    Ujfalussy, Balázs B; Makara, Judit K; Branco, Tiago; Lengyel, Máté

    2015-12-24

    Cortical neurons integrate thousands of synaptic inputs in their dendrites in highly nonlinear ways. It is unknown how these dendritic nonlinearities in individual cells contribute to computations at the level of neural circuits. Here, we show that dendritic nonlinearities are critical for the efficient integration of synaptic inputs in circuits performing analog computations with spiking neurons. We developed a theory that formalizes how a neuron's dendritic nonlinearity that is optimal for integrating synaptic inputs depends on the statistics of its presynaptic activity patterns. Based on their in vivo presynaptic population statistics (firing rates, membrane potential fluctuations, and correlations due to ensemble dynamics), our theory accurately predicted the responses of two different types of cortical pyramidal cells to patterned stimulation by two-photon glutamate uncaging. These results reveal a new computational principle underlying dendritic integration in cortical neurons by suggesting a functional link between cellular and systems-level properties of cortical circuits.

  7. An efficient method for computing the absorption of solar radiation by water vapor

    Science.gov (United States)

    Chou, M.-D.; Arking, A.

    1981-01-01

    Chou and Arking (1980) have developed a fast but accurate method for computing the IR cooling rate due to water vapor. Using a similar approach, the present investigation develops a method for computing the heating rates due to the absorption of solar radiation by water vapor in the wavelength range from 4 to 8.3 micrometers. The validity of the method is verified by comparison with line-by-line calculations. An outline is provided of an efficient method for transmittance and flux computations based upon actual line parameters. High speed is achieved by employing a one-parameter scaling approximation to convert an inhomogeneous path into an equivalent homogeneous path at suitably chosen reference conditions.
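
    The scaling step can be sketched generically. Below, an inhomogeneous vertical path is converted into an equivalent homogeneous absorber amount by weighting each layer's water vapor mass with a single pressure power law; the exponent and reference pressure are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def scaled_absorber_amount(q, p, m=0.8, p_ref=300e2, g=9.81):
    """One-parameter pressure scaling of a water vapor path.

    q : specific humidity per layer (kg/kg), top of atmosphere first
    p : layer pressures (Pa), same ordering
    Effective amount u* = (1/g) * sum q * (p / p_ref)**m * dp, so the
    inhomogeneous path behaves like a homogeneous one at p_ref.
    (m and p_ref are placeholder values for illustration.)
    """
    dp = np.diff(p, prepend=0.0)                   # layer thickness in pressure
    return np.sum(q * (p / p_ref) ** m * dp) / g   # kg m^-2
```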

  8. Grid Computing in High Energy Physics

    International Nuclear Information System (INIS)

    Avery, Paul

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software resources, regardless of location); (4) collaboration (providing tools that allow members full and fair access to all collaboration resources and enable distributed teams to work effectively, irrespective of location); and (5) education, training and outreach (providing resources and mechanisms for training students and for communicating important information to the public). It is believed that computing infrastructures based on Data Grids and optical networks can meet these challenges and can offer data intensive enterprises in high energy physics and elsewhere a comprehensive, scalable framework for collaboration and resource sharing. A number of Data Grid projects have been underway since 1999. Interestingly, the most exciting and far ranging of these projects are led by collaborations of high energy physicists, computer scientists and scientists from other disciplines in support of experiments with massive, near-term data needs. I review progress in this

  9. High-performance computing in accelerating structure design and analysis

    International Nuclear Information System (INIS)

    Li Zenghai; Folwell, Nathan; Ge Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-01-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which puts stringent requirements on the accelerating structures for efficiency and beam stability. While numerical modeling has been quite standard in accelerator R and D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high-performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed for meeting the computational challenges posed by the NLC as well as projects such as the PEP-II and RIA. Numerical results will be presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, either at the component level (single cell optimization), or on the scale of an entire structure (beam heating and long-range wakefields)

  10. Energy efficiency of high-rise buildings

    Science.gov (United States)

    Zhigulina, Anna Yu.; Ponomarenko, Alla M.

    2018-03-01

    The article is devoted to an analysis of tendencies and advanced technologies in the field of energy supply and energy efficiency of tall buildings, and to the history of the emergence of the concept of "efficiency" and its current interpretation. The article also shows the differences in the evaluation criteria of the leading rating systems LEED and BREEAM. The authors review the latest technologies applied in the construction of energy efficient buildings. A methodological approach to the design of tall buildings that takes energy efficiency into account needs to include primary energy saving; the production and storage of alternative electric energy by converting energy from the sun and wind with special technical devices; and the application of regenerative technologies.

  11. High Efficiency Refrigeration Process, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — It has been proposed by NASA JSC studies that the most mass-efficient (non-nuclear) method of lunar habitat cooling is via photovoltaic (PV) direct vapor...

  12. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  13. A High Performance VLSI Computer Architecture For Computer Graphics

    Science.gov (United States)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture consisting of multiple processors is presented in this paper to satisfy the demands of modern computer graphics, e.g. high resolution, realistic animation, real-time display, etc. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e. object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  14. High-Degree Neurons Feed Cortical Computations.

    Directory of Open Access Journals (Sweden)

    Nicholas M Timme

    2016-05-01

    Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to
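
    Of the two tools named, transfer entropy is simple enough to sketch. Below is a plug-in estimator for binary spike trains with a y-history of length k; it illustrates the quantity only, and the paper's analysis pipeline (and the partial information decomposition used to isolate computation) is not shown.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, k=1):
    """Plug-in transfer entropy TE_{X->Y} in bits for two equal-length
    binary spike trains: how much the previous bin of x reduces
    uncertainty about y beyond y's own k-bin history."""
    x, y = np.asarray(x), np.asarray(y)
    c_full, c_xpast, c_ypres, c_past = Counter(), Counter(), Counter(), Counter()
    for t in range(k, len(y)):
        y_past = tuple(y[t - k:t])
        c_full[(y[t], y_past, x[t - 1])] += 1
        c_xpast[(y_past, x[t - 1])] += 1
        c_ypres[(y[t], y_past)] += 1
        c_past[y_past] += 1
    total = len(y) - k
    te = 0.0
    for (yt, y_past, xp), n in c_full.items():
        p_joint = n / total
        p_cond_both = n / c_xpast[(y_past, xp)]               # p(y_t | y_hist, x_prev)
        p_cond_self = c_ypres[(yt, y_past)] / c_past[y_past]  # p(y_t | y_hist)
        te += p_joint * np.log2(p_cond_both / p_cond_self)
    return te

# independent random trains should give TE near zero
rng = np.random.default_rng(0)
print(transfer_entropy(rng.integers(0, 2, 10_000), rng.integers(0, 2, 10_000)))
```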

  15. High Power High Efficiency Diode Laser Stack for Processing

    Science.gov (United States)

    Gu, Yuanyuan; Lu, Hui; Fu, Yueming; Cui, Yan

    2018-03-01

    High-power diode lasers based on GaAs semiconductor bars are well established as reliable and highly efficient laser sources. Because diode lasers are simple in structure, small, long-lived, and inexpensive, they are widely used in industrial processing, such as heat treating, welding, hardening, cladding, and so on. In particular, the rectangular beam patterns of diode lasers are well suited to producing fine beads at low power, which makes such practical applications possible. At this power level, there are many important applications, such as surgery, welding of polymers, soldering, coatings, and surface treatment of metals. But some applications require much higher power and brightness, e.g. hardening, keyhole welding, cutting, and metal welding. High-power diode lasers also have important applications in the military field, so all developed countries have attached great importance to high-power diode laser systems and their applications. In this paper we introduce the structure and the principle of the high power diode stack.

  16. Efficient Backprojection-Based Synthetic Aperture Radar Computation with Many-Core Processors

    Directory of Open Access Journals (Sweden)

    Jongsoo Park

    2013-01-01

    Tackling computationally challenging problems with high efficiency often requires the combination of algorithmic innovation, advanced architecture, and thorough exploitation of parallelism. We demonstrate this synergy through synthetic aperture radar (SAR) via backprojection, an image reconstruction method that can require hundreds of TFLOPS. Computation cost is significantly reduced by our new algorithm of approximate strength reduction; data movement cost is economized by software locality optimizations facilitated by advanced architecture support; parallelism is fully harnessed in various patterns and granularities. We deliver over 35 billion backprojections per second throughput per compute node on an Intel® Xeon® processor E5-2670-based cluster, equipped with Intel® Xeon Phi™ coprocessors. This corresponds to processing a 3K×3K image within a second using a single node. Our study can be extended to other settings: backprojection is applicable elsewhere including medical imaging, approximate strength reduction is a general code transformation technique, and many-core processors are emerging as a solution to energy-efficient computing.
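
    A minimal, unoptimized backprojection loop conveys what is being accelerated: for every pulse, each pixel's round-trip delay is computed, the echo is interpolated at that delay, the carrier phase is removed, and the contributions are accumulated. The signature and parameters below are illustrative assumptions; the paper's contribution (approximate strength reduction, locality optimizations, many-core parallelization) is precisely about making this loop fast.

```python
import numpy as np

def backproject(pulses, fast_time, tx_pos, grid_x, grid_y, fc, c=3e8):
    """Sketch of backprojection image formation (scene assumed at z = 0).

    pulses    : (n_pulses, n_samples) complex range-compressed echoes
    fast_time : (n_samples,) two-way delay axis for the range samples (s)
    tx_pos    : (n_pulses, 3) antenna phase-center positions (m)
    grid_x/y  : 1-D image grid coordinates (m); fc: carrier frequency (Hz)
    """
    X, Y = np.meshgrid(grid_x, grid_y)
    img = np.zeros(X.shape, dtype=complex)
    for p in range(pulses.shape[0]):
        dx = X - tx_pos[p, 0]
        dy = Y - tx_pos[p, 1]
        dz = -tx_pos[p, 2]
        delay = 2.0 * np.sqrt(dx**2 + dy**2 + dz**2) / c   # two-way delay
        # interpolate each pixel's echo sample and remove the carrier phase
        echo = np.interp(delay, fast_time, pulses[p].real) \
             + 1j * np.interp(delay, fast_time, pulses[p].imag)
        img += echo * np.exp(2j * np.pi * fc * delay)
    return np.abs(img)
```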

  17. CHEP95: Computing in high energy physics. Abstracts

    International Nuclear Information System (INIS)

    1995-01-01

    These proceedings cover the technical papers on computation in High Energy Physics, including computer codes, computer devices, control systems, simulations, and data acquisition systems. New approaches to computer architectures are also discussed.

  18. High-Precision Computation and Mathematical Physics

    International Nuclear Information System (INIS)

    Bailey, David H.; Borwein, Jonathan M.

    2008-01-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.

  19. CFD application to advanced design for high efficiency spacer grid

    Energy Technology Data Exchange (ETDEWEB)

    Ikeda, Kazuo, E-mail: kazuo3_ikeda@ndc.mhi.co.jp

    2014-11-15

    Highlights: • A new LDV was developed to investigate the local velocity in a rod bundle and inside a spacer grid. • Design information for a high efficiency spacer grid has been obtained. • A CFD methodology that predicts the flow field in a PWR fuel assembly has been developed. • The high efficiency spacer grid was designed using the CFD methodology. - Abstract: Pressurized water reactor (PWR) fuels have been developed to meet the needs of the market. A spacer grid is a key component for improving the thermal hydraulic performance of a PWR fuel assembly. The mixing structures (vanes) of a spacer grid promote coolant mixing and enhance heat removal from fuel rods. A larger mixing vane would improve the mixing effect, which would increase the departure from nucleate boiling (DNB) benefit for fuel. However, the increased pressure loss at large mixing vanes would reduce the coolant flow at the mixed fuel core, which would reduce the DNB margin. The solution is to develop a spacer grid whose pressure loss is equal to or less than that of the current spacer grid and that has higher critical heat flux (CHF) performance. For this reason, the need for a design tool that predicts the pressure loss and CHF performance of spacer grids has increased. The author and co-workers have worked on the development of high efficiency spacer grids using Computational Fluid Dynamics (CFD) for nearly 20 years. A new laser Doppler velocimetry (LDV), which is miniaturized with fiber optics embedded in a fuel cladding, was developed to investigate the local velocity profile in a rod bundle and inside a spacer grid. The rod-embedded fiber LDV (rod LDV) can be inserted in an arbitrary grid cell instead of a fuel rod, and has the advantage of not disturbing the flow field since it has the same shape as a fuel rod. The probe volume of the rod LDV is small enough to measure the spatial velocity profile in a rod gap and inside a spacer grid. According to benchmark experiments such as flow velocity

  20. CFD application to advanced design for high efficiency spacer grid

    International Nuclear Information System (INIS)

    Ikeda, Kazuo

    2014-01-01

    Highlights: • A new LDV was developed to investigate the local velocity in a rod bundle and inside a spacer grid. • Design information for a high efficiency spacer grid has been obtained. • A CFD methodology that predicts the flow field in a PWR fuel assembly has been developed. • The high efficiency spacer grid was designed using the CFD methodology. - Abstract: Pressurized water reactor (PWR) fuels have been developed to meet the needs of the market. A spacer grid is a key component for improving the thermal hydraulic performance of a PWR fuel assembly. The mixing structures (vanes) of a spacer grid promote coolant mixing and enhance heat removal from fuel rods. A larger mixing vane would improve the mixing effect, which would increase the departure from nucleate boiling (DNB) benefit for fuel. However, the increased pressure loss at large mixing vanes would reduce the coolant flow at the mixed fuel core, which would reduce the DNB margin. The solution is to develop a spacer grid whose pressure loss is equal to or less than that of the current spacer grid and that has higher critical heat flux (CHF) performance. For this reason, the need for a design tool that predicts the pressure loss and CHF performance of spacer grids has increased. The author and co-workers have worked on the development of high efficiency spacer grids using Computational Fluid Dynamics (CFD) for nearly 20 years. A new laser Doppler velocimetry (LDV), which is miniaturized with fiber optics embedded in a fuel cladding, was developed to investigate the local velocity profile in a rod bundle and inside a spacer grid. The rod-embedded fiber LDV (rod LDV) can be inserted in an arbitrary grid cell instead of a fuel rod, and has the advantage of not disturbing the flow field since it has the same shape as a fuel rod. The probe volume of the rod LDV is small enough to measure the spatial velocity profile in a rod gap and inside a spacer grid. According to benchmark experiments such as flow velocity

  1. Computationally efficient SVM multi-class image recognition with confidence measures

    International Nuclear Information System (INIS)

    Makili, Lazaro; Vega, Jesus; Dormido-Canto, Sebastian; Pastor, Ignacio; Murari, Andrea

    2011-01-01

    Typically, machine learning methods produce unqualified estimates, i.e. the accuracy and reliability of the predictions are not provided. Transductive predictors are relatively recent classifiers able to provide, along with each prediction, a pair of values (confidence and credibility) that reflect its quality. A drawback of transductive techniques on huge, high-dimensional datasets is usually the high computation time. To overcome this issue, a more efficient classifier has been applied to a multi-class image classification problem in the TJ-II stellarator database. It is based on a hash function that generates several 'one versus the rest' classifiers for every class. Using Support Vector Machines as the underlying classifier, the pure transductive approach and the new method have been compared. In both cases the success rates are high, and the computation time of the new method is as little as 0.4 times that of the old one.
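
    A rough sketch of the ingredients, assuming scikit-learn's SVC and an inductive (rather than the paper's transductive) calibration split for simplicity: one-vs-rest decision values serve as nonconformity scores, and each prediction carries credibility (the p-value of the chosen label) and confidence (one minus the second-largest p-value). The names and the scoring rule are assumptions, not the authors' exact scheme.

```python
import numpy as np
from sklearn.svm import SVC

def fit_ovr_conformal(X_train, y_train, X_cal, y_cal):
    """Fit a one-vs-rest SVM and calibrate nonconformity scores on a
    held-out set (labels assumed to be integers 0..n_classes-1)."""
    clf = SVC(decision_function_shape="ovr").fit(X_train, y_train)
    d_cal = clf.decision_function(X_cal)
    cal_scores = -d_cal[np.arange(len(y_cal)), y_cal]  # score of the true class
    return clf, np.sort(cal_scores)

def predict_with_confidence(clf, cal_scores, x):
    d = clf.decision_function(x.reshape(1, -1))[0]
    # p-value per candidate label: fraction of calibration scores that are
    # at least as nonconforming as this one
    pvals = np.array([(cal_scores >= -d[c]).mean() for c in range(len(d))])
    label = int(np.argmax(pvals))
    credibility = pvals[label]                 # p-value of the chosen label
    confidence = 1.0 - np.sort(pvals)[-2]      # 1 - second-largest p-value
    return label, confidence, credibility
```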

  2. Designing high efficient solar powered lighting systems

    DEFF Research Database (Denmark)

    Poulsen, Peter Behrensdorff; Thorsteinsson, Sune; Lindén, Johannes

    2016-01-01

    Some major challenges in the development of L2L products are the lack of efficient converter electronics, of modelling tools for dimensioning, and of characterization facilities to support the successful development of the products. We report the development of 2 Three-Port-Converters respec

  3. Computationally efficient thermal-mechanical modelling of selective laser melting

    Science.gov (United States)

    Yang, Yabin; Ayas, Can

    2017-10-01

    Selective laser melting (SLM) is a powder based additive manufacturing (AM) method to produce high density metal parts with complex topology. However, part distortions and the accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is anticipated to be instrumental for understanding and predicting the development of the residual stress field during the build process. However, SLM process modelling requires determination of the heat transients within the part being built, which is coupled to a mechanical boundary value problem to calculate the displacement and residual stress fields. Thermal models associated with SLM are typically complex and computationally demanding. In this paper, we present a simple semi-analytical thermal-mechanical model, developed for SLM, that represents the effect of laser scanning vectors with line heat sources. The temperature field within the part being built is attained by superposition of the temperature field associated with line heat sources in a semi-infinite medium and a complementary temperature field which accounts for the actual boundary conditions. An analytical solution of a line heat source in a semi-infinite medium is first described, followed by the numerical procedure used for finding the complementary temperature field. This analytical description of the line heat sources is able to capture the steep temperature gradients in the vicinity of the laser spot, which is typically tens of micrometers. In turn, the semi-analytical thermal model allows for a relatively coarse discretisation of the complementary temperature field. The temperature history determined is used to calculate the thermal strain induced in the SLM part. Finally, a mechanical model governed by an elastic-plastic constitutive rule with isotropic hardening is used to predict the residual stresses.
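
    The analytical building block can be sketched with the classical continuous line heat source in an infinite medium, whose temperature rise involves the exponential integral; the paper superposes such solutions in a semi-infinite medium and adds a numerically computed complementary field, neither of which is shown here.

```python
import numpy as np
from scipy.special import exp1

def line_source_temperature(r, t, q_line, k, alpha):
    """Temperature rise (K) at radial distance r (m) and time t (s) from a
    continuous line heat source of strength q_line (W/m) in an infinite
    medium with conductivity k (W/m/K) and diffusivity alpha (m^2/s).

    Classical solution: dT = q' / (4 pi k) * E1(r^2 / (4 alpha t)),
    which captures the steep gradients near the source.
    """
    return q_line / (4.0 * np.pi * k) * exp1(r**2 / (4.0 * alpha * t))
```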

  4. Efficient one-way quantum computations for quantum error correction

    International Nuclear Information System (INIS)

    Huang Wei; Wei Zhaohui

    2009-01-01

    We show how to explicitly construct an O(nd) size and constant quantum depth circuit which encodes any given n-qubit stabilizer code with d generators. Our construction is derived using the graphic description for stabilizer codes and the one-way quantum computation model. Our result demonstrates how to use cluster states as scalable resources for many multi-qubit entangled states and how to use the one-way quantum computation model to improve the design of quantum algorithms.

  5. Accurate and efficient computation of synchrotron radiation functions

    International Nuclear Information System (INIS)

    MacLeod, Allan J.

    2000-01-01

    We consider the computation of three functions which appear in the theory of synchrotron radiation. These are $F(x) = x\int_x^\infty K_{5/3}(y)\,dy$, $F_p(x) = x K_{2/3}(x)$, and $G_p(x) = x^{1/3} K_{1/3}(x)$, where $K_\nu$ denotes a modified Bessel function. Chebyshev series coefficients are given which enable the functions to be computed with an accuracy of up to 15 significant figures.
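
    For reference, the three functions can be evaluated directly with a Bessel routine and quadrature, as in the sketch below; this brute-force route is far slower than the Chebyshev expansions the paper provides, but is handy for checking values.

```python
import numpy as np
from scipy.special import kv
from scipy.integrate import quad

def F(x):
    """Synchrotron function F(x) = x * integral_x^inf K_{5/3}(y) dy."""
    val, _ = quad(lambda y: kv(5.0 / 3.0, y), x, np.inf)
    return x * val

def Fp(x):
    """F_p(x) = x * K_{2/3}(x)."""
    return x * kv(2.0 / 3.0, x)

def Gp(x):
    """G_p(x) = x**(1/3) * K_{1/3}(x)."""
    return x ** (1.0 / 3.0) * kv(1.0 / 3.0, x)
```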

  6. An efficient algorithm for nucleolus and prekernel computation in some classes of TU-games

    NARCIS (Netherlands)

    Faigle, U.; Kern, Walter; Kuipers, J.

    1998-01-01

    We consider classes of TU-games. We show that we can efficiently compute an allocation in the intersection of the prekernel and the least core of the game if we can efficiently compute the minimum excess for any given allocation. In the case where the prekernel of the game contains exactly one core

  7. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  8. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  9. High Efficiency EBCOT with Parallel Coding Architecture for JPEG2000

    Directory of Open Access Journals (Sweden)

    Chiang Jen-Shiun

    2006-01-01

    This work presents a parallel context-modeling coding architecture and a matching arithmetic coder (MQ-coder) for the embedded block coding (EBCOT) unit of the JPEG2000 encoder. Tier-1 of the EBCOT consumes most of the computation time in a JPEG2000 encoding system. The proposed parallel architecture can increase the throughput rate of the context modeling. To match the high throughput rate of the parallel context-modeling architecture, an efficient pipelined architecture for the context-based adaptive arithmetic encoder is proposed. This encoder of JPEG2000 can work at 180 MHz to encode one symbol each cycle. Compared with previous context-modeling architectures, our parallel architectures can improve the throughput rate by up to 25%.

  10. Efficient and Highly Aldehyde Selective Wacker Oxidation

    KAUST Repository

    Teo, Peili; Wickens, Zachary K.; Dong, Guangbin; Grubbs, Robert H.

    2012-01-01

    A method for efficient and aldehyde-selective Wacker oxidation of aryl-substituted olefins using PdCl 2(MeCN) 2, 1,4-benzoquinone, and t-BuOH in air is described. Up to a 96% yield of aldehyde can be obtained, and up to 99% selectivity can be achieved with styrene-related substrates. © 2012 American Chemical Society.

  11. Efficient and Highly Aldehyde Selective Wacker Oxidation

    KAUST Repository

    Teo, Peili

    2012-07-06

    A method for efficient and aldehyde-selective Wacker oxidation of aryl-substituted olefins using PdCl 2(MeCN) 2, 1,4-benzoquinone, and t-BuOH in air is described. Up to a 96% yield of aldehyde can be obtained, and up to 99% selectivity can be achieved with styrene-related substrates. © 2012 American Chemical Society.

  12. High Performance Computing Operations Review Report

    Energy Technology Data Exchange (ETDEWEB)

    Cupps, Kimberly C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  13. Efficient CUDA Polynomial Preconditioned Conjugate Gradient Solver for Finite Element Computation of Elasticity Problems

    Directory of Open Access Journals (Sweden)

    Jianfei Zhang

    2013-01-01

    Graphics processing units (GPUs) have achieved great success in scientific computing thanks to their tremendous computational horsepower and very high memory bandwidth. This paper discusses an efficient way to implement a polynomial preconditioned conjugate gradient solver for the finite element computation of elasticity on NVIDIA GPUs using the compute unified device architecture (CUDA). The sliced block ELLPACK (SBELL) format is introduced to store the sparse matrix arising from the finite element discretization of elasticity with fewer padding zeros than traditional ELLPACK-based formats. Polynomial preconditioning methods have been investigated with respect to both convergence and running time. Based on overall performance, the least-squares (L-S) polynomial method is chosen as the preconditioner in the PCG solver for the finite element equations of elasticity, as it gives the best results on different example meshes. In the PCG solver, a mixed precision algorithm is used not only to reduce the overall computation, storage requirements, and bandwidth but also to make full use of the capacity of the GPU devices. With the SBELL format and the mixed precision algorithm, the GPU-based L-S preconditioned CG achieves a speedup of about 7–9 over the CPU implementation.
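
    The storage idea can be sketched in plain NumPy: ELLPACK pads every row to the longest row's length so values and column indices form dense rectangles (friendly to regular GPU memory access); SBELL applies the same idea per slice of rows, so padding only reaches the slice-local maximum. The sketch below shows plain ELLPACK only.

```python
import numpy as np

def ellpack_from_dense(A):
    """Pack a dense matrix into ELLPACK arrays: every row is padded with
    zeros to the length of the longest row, so values and column indices
    form dense rectangles with regular access patterns."""
    n = A.shape[0]
    rows = [np.nonzero(A[i])[0] for i in range(n)]
    width = max(len(r) for r in rows)
    cols = np.zeros((n, width), dtype=int)
    vals = np.zeros((n, width))
    for i, r in enumerate(rows):
        cols[i, :len(r)] = r
        vals[i, :len(r)] = A[i, r]
    return vals, cols

def ellpack_matvec(vals, cols, x):
    """y = A @ x in ELLPACK storage; padded zeros contribute nothing."""
    return (vals * x[cols]).sum(axis=1)

# slicing the rows (as in SBELL) would pad each slice only to its own maximum
A = np.array([[4.0, 0, 1], [0, 2.0, 0], [0, 0, 3.0]])
vals, cols = ellpack_from_dense(A)
assert np.allclose(ellpack_matvec(vals, cols, np.ones(3)), A @ np.ones(3))
```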

  14. Limits on efficient computation in the physical world

    Science.gov (United States)

    Aaronson, Scott Joel

    More than a speculative technology, quantum computing seems to challenge our most basic intuitions about how the physical world should behave. In this thesis I show that, while some intuitions from classical computer science must be jettisoned in the light of modern physics, many others emerge nearly unscathed; and I use powerful tools from computational complexity theory to help determine which are which. In the first part of the thesis, I attack the common belief that quantum computing resembles classical exponential parallelism, by showing that quantum computers would face serious limitations on a wider range of problems than was previously known. In particular, any quantum algorithm that solves the collision problem---that of deciding whether a sequence of n integers is one-to-one or two-to-one---must query the sequence $\Omega(n^{1/5})$ times. This resolves a question that was open for years; previously no lower bound better than constant was known. A corollary is that there is no "black-box" quantum algorithm to break cryptographic hash functions or solve the Graph Isomorphism problem in polynomial time. I also show that relative to an oracle, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform "quantum advice states"; and that any quantum algorithm needs $\Omega(2^{n/4}/n)$ queries to find a local minimum of a black-box function on the n-dimensional hypercube. Surprisingly, the latter result also leads to new classical lower bounds for the local search problem. Finally, I give new lower bounds on quantum one-way communication complexity, and on the quantum query complexity of total Boolean functions and recursive Fourier sampling. The second part of the thesis studies the relationship of the quantum computing model to physical reality. I first examine the arguments of Leonid Levin, Stephen Wolfram, and others who believe quantum computing to be fundamentally impossible. I find their arguments unconvincing without a "Sure
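
    For context, the collision problem itself and the classical birthday-paradox baseline are easy to sketch: sampling about sqrt(n) distinct positions reveals a repeated value with high probability exactly when the sequence is two-to-one, so classical randomized algorithms need on the order of sqrt(n) queries; the thesis shows quantum algorithms cannot beat $\Omega(n^{1/5})$ queries.

```python
import random

def looks_two_to_one(seq, num_queries=None):
    """Randomized classical test for the collision problem: distinguish a
    one-to-one sequence from a two-to-one one.  Sampling ~3*sqrt(n)
    distinct positions finds a repeated value with high probability iff
    the sequence is two-to-one (birthday bound); a one-to-one sequence
    can never produce a repeat across distinct positions."""
    n = len(seq)
    if num_queries is None:
        num_queries = min(n, int(3 * n ** 0.5) + 1)
    positions = random.sample(range(n), num_queries)
    values = [seq[i] for i in positions]   # each access counts as one query
    return len(set(values)) < len(values)  # a repeat implies two-to-one
```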

  15. High efficiency nebulization for helium inductively coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Jorabchi, Kaveh; McCormick, Ryan; Levine, Jonathan A.; Liu Huiying; Nam, S.-H.; Montaser, Akbar

    2006-01-01

    A pneumatically-driven, high efficiency nebulizer is explored for helium inductively coupled plasma mass spectrometry. The aerosol characteristics and analyte transport efficiencies of the high efficiency nebulizer for nebulization with helium are measured and compared to the results obtained with argon. Analytical performance indices of the helium inductively coupled plasma mass spectrometry are evaluated in terms of detection limits and precision. The helium inductively coupled plasma mass spectrometry detection limits obtained with the high efficiency nebulizer at 200 μL/min are higher than those achieved with the ultrasonic nebulizer consuming 2 mL/min of solution; however, precision is generally better with the high efficiency nebulizer (1-4% vs. 3-8% with the ultrasonic nebulizer). Detection limits with the high efficiency nebulizer at a 200 μL/min solution uptake rate approach those of the ultrasonic nebulizer upon efficient desolvation with a heated spray chamber followed by a Peltier-cooled multipass condenser.

  16. Recent Advances in High Efficiency Solar Cells

    Institute of Scientific and Technical Information of China (English)

    Yoshio Ohshita; Hidetoshi Suzuki; Kenichi Nishimura; Masafumi Yamaguchi

    2007-01-01

    1 Results The conversion efficiency of sunlight to electricity is limited to around 25% when single-junction solar cells are used. In single-junction cells, the major energy losses arise from spectrum mismatch. When photons excite carriers with energy well in excess of the bandgap, the excess energy is converted to heat by rapid thermalization. On the other hand, light with energy lower than the bandgap cannot be absorbed by the semiconductor, resulting in losses. One way...

  17. High efficiency cyclotron trap assisted positron moderator

    OpenAIRE

    Gerchow, L.; Cooke, D. A.; Braccini, S.; Döbeli, M.; Kirch, K.; Köster, U.; Müller, A.; Van Der Meulen, N. P.; Vermeulen, C.; Rubbia, A.; Crivelli, P.

    2017-01-01

    We report the realisation of a cyclotron trap assisted positron tungsten moderator for the conversion of positrons with a broad keV to few-MeV energy spectrum into a mono-energetic eV beam with an efficiency of 1.8(2)%, defined as the ratio of the slow positrons to the $\beta^+$ activity of the radioactive source. This is an improvement of almost two orders of magnitude compared to the state of the art of tungsten moderators. The simulation validated with this measurement suggests that usi...

  18. Design Strategies for Ultra-high Efficiency Photovoltaics

    Science.gov (United States)

    Warmann, Emily Cathryn

    While concentrator photovoltaic cells have shown significant improvements in efficiency in the past ten years, once these cells are integrated into concentrating optics, connected to a power conditioning system, and deployed in the field, the overall module efficiency drops to only 34 to 36%. This efficiency is impressive compared to conventional flat plate modules, but it is far short of the theoretical limits for solar energy conversion. A system capable of ultra-high efficiency of 50% or greater cannot be designed by refinement and iteration of current approaches. This thesis takes a systems approach to designing a photovoltaic system capable of 50% efficient performance using conventional diode-based solar cells. The effort began with an exploration of the limiting efficiency of spectrum splitting ensembles with 2 to 20 subcells in different electrical configurations. Incorporating realistic non-ideal performance with the computationally simple detailed balance approach resulted in practical limits that are useful for identifying specific cell performance requirements. This effort quantified the relative benefit of additional cells and concentration for system efficiency, which will help in designing practical optical systems. Efforts to improve the quality of the solar cells themselves focused on the development of tunable lattice constant epitaxial templates. Initially intended to enable lattice-matched multijunction solar cells, these templates would enable increased flexibility in band gap selection for spectrum splitting ensembles and enhanced radiative quality relative to metamorphic growth. The III-V material family is commonly used for multijunction solar cells both for its high radiative quality and for the ease of integrating multiple band gaps into one monolithic growth. The band gap flexibility is limited by the lattice constant of available growth templates. The virtual substrate consists of a thin III-V film with the desired
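
    The "computationally simple detailed balance approach" can be illustrated in its most basic form: a single junction under full concentration, where every above-gap photon yields one electron and the only loss is radiative recombination. The blackbody temperatures and voltage grid below are assumptions; the thesis extends this accounting to multi-cell spectrum-splitting ensembles with non-ideal cells.

```python
import numpy as np
from scipy.integrate import quad
from scipy.constants import e, h, c, k, sigma

T_SUN, T_CELL = 6000.0, 300.0      # assumed blackbody temperatures (K)

def photon_flux(e_gap_ev, T):
    """Hemispherical blackbody photon flux (photons m^-2 s^-1) above the gap."""
    def spectral(E):               # photons m^-2 s^-1 J^-1
        return 2.0 * np.pi * E**2 / (h**3 * c**2 * np.expm1(E / (k * T)))
    flux, _ = quad(spectral, e_gap_ev * e, 60.0 * k * T_SUN)
    return flux

def detailed_balance_efficiency(e_gap_ev):
    """Single-junction detailed-balance efficiency at full concentration:
    each absorbed above-gap photon yields one electron, and the only loss
    mechanism is radiative recombination at the cell temperature."""
    j_sc = e * photon_flux(e_gap_ev, T_SUN)    # photogenerated current (A/m^2)
    j_0 = e * photon_flux(e_gap_ev, T_CELL)    # radiative dark current
    v = np.linspace(0.0, e_gap_ev, 2000)       # candidate operating voltages
    j = j_sc - j_0 * np.expm1(e * v / (k * T_CELL))
    return np.max(j * v) / (sigma * T_SUN**4)  # P_out / incident blackbody power

# peaks near 40% around a ~1.1 eV gap for these assumed temperatures
print(detailed_balance_efficiency(1.1))
```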

  19. Computational methods for more fuel-efficient ship

    NARCIS (Netherlands)

    Koren, B.

    2008-01-01

    The flow of water around a ship powered by a combustion engine is a key factor in the ship's fuel consumption. The simulation of flow patterns around ship hulls is therefore an important aspect of ship design. While lengthy computations are required for such simulations, research by Jeroen Wackers

  20. Efficient computation in networks of spiking neurons: simulations and theory

    International Nuclear Information System (INIS)

    Natschlaeger, T.

    1999-01-01

    One of the most prominent features of biological neural systems is that individual neurons communicate via short electrical pulses, the so-called action potentials or spikes. In this thesis we investigate possible mechanisms which can in principle explain how complex computations in spiking neural networks (SNN) can be performed very fast, i.e. within a few tens of milliseconds. Some of these models are based on the assumption that relevant information is encoded by the timing of individual spikes (temporal coding). We will also discuss a model which is based on a population code and still is able to perform fast complex computations. In their natural environment, biological neural systems have to process signals with a rich temporal structure. Hence it is an interesting question how neural systems process time series. In this context we explore possible links between biophysical characteristics of single neurons (refractory behavior, connectivity, time course of postsynaptic potentials) and synapses (unreliability, dynamics) on the one hand, and possible computations on time series on the other hand. Furthermore we describe a general model of computation that exploits dynamic synapses. This model provides a general framework for understanding how neural systems process time-varying signals. (author)
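
    As a minimal illustration of the pulse-based signalling described above, the sketch below simulates a single leaky integrate-and-fire neuron, a standard textbook abstraction rather than any of the thesis's specific models; all constants are arbitrary illustrative choices:

    ```python
    # Leaky integrate-and-fire (LIF) neuron: membrane potential integrates the
    # input current, leaks toward rest, and emits a spike at threshold.
    dt, T = 0.1, 100.0                     # time step and duration [ms]
    tau, v_rest, v_th, v_reset = 10.0, 0.0, 1.0, 0.0
    I = 0.15                               # constant input current (arb. units)

    v, spikes = v_rest, []
    for step in range(int(T / dt)):
        v += dt * (-(v - v_rest) / tau + I)    # leaky integration of the input
        if v >= v_th:                          # threshold crossing: emit a spike
            spikes.append(step * dt)
            v = v_reset                        # reset after the action potential
    print(f"{len(spikes)} spikes, first at t = {spikes[0]:.1f} ms")
    ```

    With these constants the neuron first reaches threshold after roughly 10 ms, consistent with the few-tens-of-milliseconds timescale on which the thesis argues complex computations must be completed.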

  1. Efficient Numerical Methods for Stochastic Differential Equations in Computational Finance

    KAUST Repository

    Happola, Juho

    2017-09-19

    Stochastic Differential Equations (SDE) offer a rich framework to model the probabilistic evolution of the state of a system. Numerical approximation methods are typically needed in evaluating relevant Quantities of Interest arising from such models. In this dissertation, we present novel effective methods for evaluating Quantities of Interest relevant to computational finance when the state of the system is described by an SDE.

  2. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing.

    Science.gov (United States)

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-02-18

    Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions, but also raises the costs of cloud users. Therefore, multimedia cloud providers should try to minimize their energy consumption as much as possible while satisfying the consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we propose a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware (PA) algorithm to find proper hosts to shut down for energy saving. These two algorithms have been combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results show that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workloads to prevent hosts from overloading after VM placement and to reduce SLA violations dramatically.
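
    A minimal greedy sketch of the placement idea, assuming (hypothetically) that "remaining utilization" means the headroom a host keeps after accepting the VM; the paper's actual RUA and PA algorithms are more involved:

    ```python
    # Greedy remaining-utilization-aware placement sketch (illustrative only).
    def place_vm(hosts, vm_demand, safety_margin=0.1):
        """hosts: dict host_id -> utilization in [0, 1]; returns a host_id."""
        candidates = {
            h: 1.0 - u - vm_demand              # headroom left after placement
            for h, u in hosts.items()
            if u + vm_demand <= 1.0 - safety_margin   # never overload a host
        }
        if not candidates:
            return None                          # no host fits; power one on
        # Pick the host left with the *least* headroom (tight packing), so that
        # lightly loaded hosts can be drained and shut down to save energy.
        return min(candidates, key=candidates.get)

    hosts = {"h1": 0.55, "h2": 0.30, "h3": 0.80}
    print(place_vm(hosts, vm_demand=0.25))       # -> h1 (0.80 after placement)
    ```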

  3. Unwrapping ADMM: Efficient Distributed Computing via Transpose Reduction

    Science.gov (United States)

    2016-05-11

  4. Efficient Computation of Exposure Profiles for Counterparty Credit Risk

    NARCIS (Netherlands)

    de Graaf, C.S.L.; Feng, Q.; Kandhai, D.; Oosterlee, C.W.

    2014-01-01

    Three computational techniques for approximation of counterparty exposure for financial derivatives are presented. The exposure can be used to quantify so-called Credit Valuation Adjustment (CVA) and Potential Future Exposure (PFE), which are of utmost importance for modern risk management in the

  5. Efficient computation of exposure profiles for counterparty credit risk

    NARCIS (Netherlands)

    C.S.L. de Graaf (Kees); Q. Feng (Qian); B.D. Kandhai; C.W. Oosterlee (Cornelis)

    2014-01-01

    Three computational techniques for approximation of counterparty exposure for financial derivatives are presented. The exposure can be used to quantify so-called Credit Valuation Adjustment (CVA) and Potential Future Exposure (PFE), which are of utmost importance for modern risk

  6. Efficient Numerical Methods for Stochastic Differential Equations in Computational Finance

    KAUST Repository

    Happola, Juho

    2017-01-01

    Stochastic Differential Equations (SDE) offer a rich framework to model the probabilistic evolution of the state of a system. Numerical approximation methods are typically needed in evaluating relevant Quantities of Interest arising from such models. In this dissertation, we present novel effective methods for evaluating Quantities of Interest relevant to computational finance when the state of the system is described by an SDE.

  7. Using Weighted Graphs for Computationally Efficient WLAN Location Determination

    DEFF Research Database (Denmark)

    Thomsen, Bent; Hansen, Rene

    2007-01-01

    …use of existing WLAN infrastructures. The technique consists of building a radio map of signal strength measurements which is searched to determine a position estimate. While the fingerprinting technique has produced good positioning accuracy results, the technique incurs a substantial computational...

  8. A high-efficiency electromechanical battery

    Science.gov (United States)

    Post, Richard F.; Fowler, T. K.; Post, Stephen F.

    1993-03-01

    In our society there is a growing need for efficient, cost-effective means for storing electrical energy. The electric auto is a prime example. Storage systems for the electric utilities, and for wind or solar power, are other examples. While electrochemical cells could in principle supply these needs, the existing E-C batteries have well-known limitations. This article addresses an alternative, the electromechanical battery (EMB). An EMB is a modular unit consisting of an evacuated housing containing a fiber-composite rotor. The rotor is supported by magnetic bearings and contains an integrally mounted permanent magnet array. This article addresses design issues for EMBs with rotors made up of nested cylinders. Issues addressed include rotational stability, stress distributions, generator/motor power and efficiency, power conversion, and cost. It is concluded that the use of EMBs in electric autos could result in a fivefold reduction (relative to the IC engine) in the primary energy input required for urban driving, with a concomitant major positive impact on our economy and on air pollution.
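
    For scale, the energy stored in such a rotor follows directly from E = ½Iω², with the hollow-cylinder moment of inertia I = ½m(r_inner² + r_outer²); the numbers below are illustrative guesses, not figures from the article:

    ```python
    import math

    # Back-of-envelope energy stored in a hollow-cylinder flywheel rotor.
    m, r_i, r_o = 30.0, 0.15, 0.20            # rotor mass [kg] and radii [m]
    rpm = 40_000.0
    omega = rpm * 2.0 * math.pi / 60.0        # angular speed [rad/s]
    I = 0.5 * m * (r_i**2 + r_o**2)           # hollow-cylinder inertia [kg m^2]
    E = 0.5 * I * omega**2                    # stored kinetic energy [J]
    print(f"I = {I:.3f} kg m^2, E = {E / 3.6e6:.2f} kWh")
    ```

    A rotor of this size stores a couple of kilowatt-hours, which is the regime that makes modular EMBs interesting for vehicle traction.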

  9. Grid computing in high-energy physics

    International Nuclear Information System (INIS)

    Bischof, R.; Kuhn, D.; Kneringer, E.

    2003-01-01

    Full text: The future high energy physics experiments are characterized by an enormous amount of data delivered by the large detectors presently under construction, e.g. at the Large Hadron Collider, and by a large number of scientists (several thousands) requiring simultaneous access to the resulting experimental data. Since it seems unrealistic to provide the necessary computing and storage resources at one single place (e.g. CERN), the concept of grid computing, i.e. the use of distributed resources, was chosen. The DataGrid project (under the leadership of CERN) develops, based on the Globus toolkit, the software necessary for computation and analysis of shared large-scale databases in a grid structure. The high energy physics group Innsbruck participates with several resources in the DataGrid test bed. In this presentation our experience as grid users and resource providers is summarized. In cooperation with the local IT-center (ZID) we installed a flexible grid system which uses PCs (at the moment 162) in students' labs during nights, weekends and holidays, and which is especially used to compare different systems (local resource managers, other grid software, e.g. from the Nordugrid project) and to supply a test bed for the future Austrian Grid (AGrid). (author)

  10. Efficient Adjoint Computation of Hybrid Systems of Differential Algebraic Equations with Applications in Power Systems

    Energy Technology Data Exchange (ETDEWEB)

    Abhyankar, Shrirang [Argonne National Lab. (ANL), Argonne, IL (United States); Anitescu, Mihai [Argonne National Lab. (ANL), Argonne, IL (United States); Constantinescu, Emil [Argonne National Lab. (ANL), Argonne, IL (United States); Zhang, Hong [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-03-31

    Sensitivity analysis is an important tool to describe power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this work, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating trajectory sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as DC exciters, by deriving and implementing the adjoint jump conditions that arise from state and time-dependent discontinuities. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach.

  11. An efficient and general numerical method to compute steady uniform vortices

    Science.gov (United States)

    Luzzatto-Fegiz, Paolo; Williamson, Charles H. K.

    2011-07-01

    Steady uniform vortices are widely used to represent high Reynolds number flows, yet their efficient computation still presents some challenges. Existing Newton iteration methods become inefficient as the vortices develop fine-scale features; in addition, these methods cannot, in general, find solutions with specified Casimir invariants. On the other hand, available relaxation approaches are computationally inexpensive, but can fail to converge to a solution. In this paper, we overcome these limitations by introducing a new discretization, based on an inverse-velocity map, which radically increases the efficiency of Newton iteration methods. In addition, we introduce a procedure to prescribe Casimirs and remove the degeneracies in the steady vorticity equation, thus ensuring convergence for general vortex configurations. We illustrate our methodology by considering several unbounded flows involving one or two vortices. Our method enables the computation, for the first time, of steady vortices that do not exhibit any geometric symmetry. In addition, we discover that, as the limiting vortex state for each flow is approached, each family of solutions traces a clockwise spiral in a bifurcation plot consisting of a velocity-impulse diagram. By the recently introduced "IVI diagram" stability approach [Phys. Rev. Lett. 104 (2010) 044504], each turn of this spiral is associated with a loss of stability for the steady flows. Such spiral structure is suggested to be a universal feature of steady, uniform-vorticity flows.

  12. An efficient computational method for global sensitivity analysis and its application to tree growth modelling

    International Nuclear Information System (INIS)

    Wu, Qiong-Li; Cournède, Paul-Henry; Mathieu, Amélie

    2012-01-01

    Global sensitivity analysis has a key role to play in the design and parameterisation of functional–structural plant growth models which combine the description of plant structural development (organogenesis and geometry) and functional growth (biomass accumulation and allocation). We are particularly interested in this study in Sobol's method which decomposes the variance of the output of interest into terms due to individual parameters but also to interactions between parameters. Such information is crucial for systems with potentially high levels of non-linearity and interactions between processes, like plant growth. However, the computation of Sobol's indices relies on Monte Carlo sampling and re-sampling, whose costs can be very high, especially when model evaluation is also expensive, as for tree models. In this paper, we thus propose a new method to compute Sobol's indices inspired by Homma–Saltelli, which improves slightly their use of model evaluations, and then derive for this generic type of computational methods an estimator of the error estimation of sensitivity indices with respect to the sampling size. It allows the detailed control of the balance between accuracy and computing time. Numerical tests on a simple non-linear model are convincing and the method is finally applied to a functional–structural model of tree growth, GreenLab, whose particularity is the strong level of interaction between plant functioning and organogenesis. - Highlights: ► We study global sensitivity analysis in the context of functional–structural plant modelling. ► A new estimator based on Homma–Saltelli method is proposed to compute Sobol indices, based on a more balanced re-sampling strategy. ► The estimation accuracy of sensitivity indices for a class of Sobol's estimators can be controlled by error analysis. ► The proposed algorithm is implemented efficiently to compute Sobol indices for a complex tree growth model.
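
    A basic pick-and-freeze Monte Carlo estimator of first-order Sobol indices, written here in the Jansen form on the Ishigami test function; the paper's refined Homma-Saltelli estimator and its error analysis go beyond this sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def model(x):                      # Ishigami function, a standard benchmark
        return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
                + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

    n, d = 200_000, 3
    A = rng.uniform(-np.pi, np.pi, (n, d))     # two independent input samples
    B = rng.uniform(-np.pi, np.pi, (n, d))
    yA, yB = model(A), model(B)
    var_y = np.concatenate([yA, yB]).var()

    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # ABi takes column i from B, rest from A
        yABi = model(ABi)
        # Jansen first-order estimator: yB and yABi share only input x_i.
        S_i = 1.0 - np.mean((yB - yABi) ** 2) / (2.0 * var_y)
        print(f"S_{i+1} ~ {S_i:.3f}")  # expect ~0.31, ~0.44, ~0.00
    ```

    The re-sampling cost visible here, one extra set of n model runs per input factor, is exactly what makes such estimators expensive for tree models and motivates the paper's more economical use of model evaluations.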

  13. A Memory and Computation Efficient Sparse Level-Set Method

    NARCIS (Netherlands)

    Laan, Wladimir J. van der; Jalba, Andrei C.; Roerdink, Jos B.T.M.

    Since its introduction, the level set method has become the favorite technique for capturing and tracking moving interfaces, and found applications in a wide variety of scientific fields. In this paper we present efficient data structures and algorithms for tracking dynamic interfaces through the

  14. An Efficient Integer Coding and Computing Method for Multiscale Time Segment

    Directory of Open Access Journals (Sweden)

    TONG Xiaochong

    2016-12-01

    Full Text Available This article focuses on the problems and limitations of existing time segment coding and proposes a new approach: multi-scale time segment integer coding (MTSIC). The approach utilizes the tree structure and the ordering by size that integers naturally form, so that the codes reflect the relationships among multi-scale time segments: order, inclusion/containment, intersection, etc., and achieve a unified integer coding process for multi-scale time. On this foundation, the research also develops computing methods for calculating the time relationships of MTSIC, to support efficient calculation and query based on time segments, and preliminarily discusses the application methods and prospects of MTSIC. Tests indicate that the implementation of MTSIC is convenient and reliable, conversion between MTSIC and traditional representations is straightforward, and it achieves very high efficiency in query and calculation.
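
    The core idea, integers whose binary structure mirrors the segment tree, can be sketched as follows; this heap-style dyadic coding is an illustrative stand-in for the flavour of MTSIC, not the paper's actual scheme:

    ```python
    def encode(level, k):
        """Code for segment k (0-based) at scale `level` (0 = whole span)."""
        return (1 << level) + k

    def level_of(code):
        return code.bit_length() - 1

    def contains(coarse, fine):
        """True if segment `coarse` contains (or equals) segment `fine`."""
        dl = level_of(fine) - level_of(coarse)
        return dl >= 0 and (fine >> dl) == coarse

    year = encode(0, 0)        # the whole time span
    half = encode(1, 1)        # its second half
    quarter = encode(2, 3)     # the last quarter
    print(contains(year, quarter), contains(half, quarter), contains(quarter, half))
    # -> True True False: containment reduces to integer shifts and comparisons
    ```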

  15. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Science.gov (United States)

    Moon, Hongsik

    changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.

  16. Development of high efficiency neutron detectors

    International Nuclear Information System (INIS)

    Pickrell, M.M.; Menlove, H.O.

    1993-01-01

    The authors have designed a novel neutron detector system using conventional 3He detector tubes and composites of polyethylene and graphite. At this time the design consists entirely of MCNP simulations of different detector configurations and materials. These detectors are applicable to low-level passive and active neutron assay systems such as the passive add-a-source and the 252Cf shuffler. Monte Carlo simulations of these neutron detector designs achieved efficiencies of over 35% for assay chambers that can accommodate 55-gal. drums. Only slight increases in the number of detector tubes and helium pressure are required. The detectors also have reduced die-away times. Potential applications are coincident and multiplicity neutron counting for waste disposal and safeguards. The authors will present the general design philosophy, underlying physics, calculation mechanics, and results

  17. High efficient white organic light emitting diodes

    Energy Technology Data Exchange (ETDEWEB)

    Seidel, Stefan; Krause, Ralf [Department of Materials Science VI, University of Erlangen-Nuremberg (Germany); Siemens AG, CT MM 1, Erlangen (Germany); Kozlowski, Fryderyk; Schmid, Guenter; Hunze, Arvid [Siemens AG, CT MM 1, Erlangen (Germany); Winnacker, Albrecht [Department of Materials Science VI, University of Erlangen-Nuremberg (Germany)

    2007-07-01

    Due to the rapid progress of recent years, the performance of organic light emitting diodes (OLEDs) has reached a level at which general lighting presents a most interesting application target. We demonstrate how the color coordinates of the emission spectrum can be adjusted, using a combinatorial evaporation tool, to lie on the black body curve at the points representing cold and warm white, respectively. The evaluation includes phosphorescent and fluorescent dye approaches to optimize lifetime and efficiency simultaneously. Detailed results are presented with respect to variation of layer thicknesses and dopant concentrations of each layer within the OLED stack. The most promising approach combines phosphorescent red and green dyes with a fluorescent blue one, as blue phosphorescent dopants are not yet stable enough to achieve long lifetimes.

  18. High-Efficient Circuits for Ternary Addition

    Directory of Open Access Journals (Sweden)

    Reza Faghih Mirzaee

    2014-01-01

    Full Text Available New ternary adders, the fundamental components of ternary addition, are presented in this paper. They are based on a logic style which mostly generates binary signals, so static power dissipation is reduced to its minimum extent. Several different analyses are carried out to examine how efficient the new designs are. For instance, the ternary ripple adder constructed from the proposed ternary half and full adders consumes 2.33 μW less power than one implemented with the previous adder cells, and is almost twice as fast. Due to their unique characteristics, which are superior for ternary circuitry, carbon nanotube field-effect transistors are used to form the novel circuits, which are entirely suitable for practical applications.

  19. A high-efficiency superconductor distributed amplifier

    Energy Technology Data Exchange (ETDEWEB)

    Herr, Q P, E-mail: quentin.herr@ngc.co [Northrop Grumman Corporation, 7323 Aviation Boulevard, Baltimore, MD 21240 (United States)

    2010-02-15

    A superconductor output amplifier that converts single-flux-quantum signals to a non-return-to-zero pattern is reported, using a twelve-stage distributed amplifier configuration. The output amplitude is measured to be 1.75 mV over a wide bias current range of ±12%. The bit error rate is measured using a Delta-Sigma data pattern to be less than 1 × 10⁻⁹ at 10 Gb s⁻¹ per channel. Analysis of the eye diagram suggests that the actual bit error rate may be much lower. The amplifier has a power efficiency of 12% neglecting the termination resistor, which may be eliminated from the circuit with a small modification. (rapid communication)

  20. Efficiency using computer simulation of Reverse Threshold Model Theory on assessing a “One Laptop Per Child” computer versus desktop computer

    Directory of Open Access Journals (Sweden)

    Supat Faarungsang

    2017-04-01

    Full Text Available The Reverse Threshold Model Theory (RTMT) model was introduced based on limiting-factor concepts, but its efficiency compared to the Conventional Model (CM) has not been published. This investigation assessed the efficiency of RTMT compared to CM using computer simulation on the "One Laptop Per Child" computer and a desktop computer. Based on probability values, it was found that RTMT was more efficient than CM among eight treatment combinations, and an earlier study verified that RTMT gives complete elimination of random error. Furthermore, RTMT has several advantages over CM and is therefore proposed to be applied to most research data.

  1. Computationally efficient statistical differential equation modeling using homogenization

    Science.gov (United States)

    Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.

    2013-01-01

    Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.

  2. Wireless-Uplinks-Based Energy-Efficient Scheduling in Mobile Cloud Computing

    OpenAIRE

    Xing Liu; Chaowei Yuan; Zhen Yang; Enda Peng

    2015-01-01

    Mobile cloud computing (MCC) combines cloud computing and the mobile internet to improve the computational capabilities of resource-constrained mobile devices (MDs). In MCC, mobile users can not only improve the computational capability of MDs but also save operating energy consumption by offloading mobile applications to the cloud. However, MCC faces the problem of energy efficiency because of time-varying channels when the offloading is being executed. In this paper, we address the issue of ener...

  3. Efficient Strategy Computation in Zero-Sum Asymmetric Repeated Games

    KAUST Repository

    Li, Lichun

    2017-03-06

    Zero-sum asymmetric games model decision making scenarios involving two competing players who have different information about the game being played. A particular case is that of nested information, where one (informed) player has superior information over the other (uninformed) player. This paper considers the case of nested information in repeated zero-sum games and studies the computation of strategies for both the informed and uninformed players for finite-horizon and discounted infinite-horizon nested information games. For finite-horizon settings, we exploit the fact that, for both players, the security strategy, and also the opponent's corresponding best response, depend only on the informed player's history of actions. Using this property, we refine the sequence form and formulate an LP computation of player strategies that is linear in the size of the uninformed player's action set. For the infinite-horizon discounted game, we construct LP formulations to compute the approximated security strategies for both players, and provide a bound on the performance difference between the approximated security strategies and the security strategies. Finally, we illustrate the results on a network interdiction game between an informed system administrator and an uninformed intruder.
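
    To make the LP idea concrete in its simplest setting, the sketch below computes a security strategy for a plain zero-sum matrix game with scipy; the paper's sequence-form LPs for nested-information repeated games build on this same template:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def security_strategy(A):
        """Row player's maximin mixed strategy and value for payoff matrix A."""
        m, n = A.shape
        # Variables: x (mixed strategy over rows) and v (guaranteed value).
        # maximize v  s.t.  (A^T x)_j >= v for all j,  sum(x) = 1,  x >= 0.
        c = np.zeros(m + 1); c[-1] = -1.0            # linprog minimizes -v
        A_ub = np.hstack([-A.T, np.ones((n, 1))])    # v - (A^T x)_j <= 0
        b_ub = np.zeros(n)
        A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
        b_eq = np.array([1.0])
        bounds = [(0, None)] * m + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.x[:m], res.x[-1]

    x, v = security_strategy(np.array([[0.0, 2.0, -1.0], [1.0, -1.0, 0.0]]))
    print(np.round(x, 3), round(v, 3))               # -> [0.25 0.75] -0.25
    ```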

  4. High Efficiency Regenerative Helium Compressor, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Helium plays several critical roles in spacecraft propulsion. High pressure helium is commonly used to pressurize propellant fuel tanks. Helium cryocoolers can be...

  5. Novel Intermode Prediction Algorithm for High Efficiency Video Coding Encoder

    Directory of Open Access Journals (Sweden)

    Chan-seob Park

    2014-01-01

    Full Text Available The joint collaborative team on video coding (JCT-VC) is developing the next-generation video coding standard which is called high efficiency video coding (HEVC). In HEVC, there are three units in the block structure: coding unit (CU), prediction unit (PU), and transform unit (TU). The CU is the basic unit of region splitting, like the macroblock (MB). Each CU performs recursive splitting into four blocks of equal size, starting from the tree block. In this paper, we propose a fast CU depth decision algorithm for HEVC to reduce its computational complexity. In the 2N×2N PU, the proposed method compares the rate-distortion (RD) cost and determines the depth using the compared information. Moreover, in order to speed up the encoding time, an efficient merge SKIP detection method is additionally developed based on the contextual mode information of neighboring CUs. Experimental results show that the proposed algorithm achieves an average time saving of 44.84% in the random access (RA) Main profile configuration with the HEVC test model (HM) 10.0 reference software. Compared to the HM 10.0 encoder, a small BD-bitrate loss of 0.17% is also observed without significant loss of image quality.
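
    The depth decision can be pictured as a recursion that is cut short whenever the undivided block is already cheap, in the spirit of the early SKIP detection above. The cost function, penalty, and threshold below are hypothetical stand-ins for HM's rate-distortion machinery:

    ```python
    import numpy as np

    def split4(b):                          # quadtree split into four sub-CUs
        h, w = b.shape[0] // 2, b.shape[1] // 2
        return [b[:h, :w], b[:h, w:], b[h:, :w], b[h:, w:]]

    def rd_cost(b):                         # toy cost: SSE plus a per-CU "rate"
        return float(((b - b.mean()) ** 2).sum()) + 8.0

    def decide_cu(block, depth, max_depth, skip_threshold=16.0):
        cost_here = rd_cost(block)
        # Early termination: deepest level reached, or block already cheap
        # enough that further splitting is unlikely to pay off (SKIP-like).
        if depth == max_depth or cost_here < skip_threshold:
            return cost_here, [(depth, block.shape)]
        split_cost, layout = 0.0, []
        for sub in split4(block):
            c, l = decide_cu(sub, depth + 1, max_depth, skip_threshold)
            split_cost += c
            layout += l
        if split_cost < cost_here:          # keep whichever choice costs less
            return split_cost, layout
        return cost_here, [(depth, block.shape)]

    blk = np.random.default_rng(1).normal(size=(64, 64))
    blk[:32, :32] = 5.0                     # one flat, offset region
    cost, layout = decide_cu(blk, 0, max_depth=3)
    print(len(layout), "leaf CUs")          # the flat quadrant stops early
    ```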

  6. Highly Efficient, Durable Regenerative Solid Oxide Stack, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Precision Combustion, Inc. (PCI) proposes to develop a highly efficient regenerative solid oxide stack design. Novel structural elements allow direct internal...

  7. Energy-efficient cloud computing : autonomic resource provisioning for datacenters

    OpenAIRE

    Tesfatsion, Selome Kostentinos

    2018-01-01

    Energy efficiency has become an increasingly important concern in data centers because of issues associated with energy consumption, such as capital costs, operating expenses, and environmental impact. While energy loss due to suboptimal use of facilities and non-IT equipment has largely been reduced through the use of best-practice technologies, addressing energy wastage in IT equipment still requires the design and implementation of energy-aware resource management systems. This thesis focu...

  8. Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures

    Science.gov (United States)

    2017-10-04

    …mapped to the memory architectures of CPUs and GPUs to obtain good performance, with good memory behavior achieved through cache management. … The PI and students have developed new methods for path and ray tracing. … The efficiency of our method makes it a good candidate for forming hybrid schemes with wave-based models. One possibility is to couple the ray curve…

  9. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    CERN Document Server

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  10. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    Science.gov (United States)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  11. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    International Nuclear Information System (INIS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Muzaffar, Shahzad; Knight, Robert

    2015-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG). (paper)

  12. A computational study of high entropy alloys

    Science.gov (United States)

    Wang, Yang; Gao, Michael; Widom, Michael; Hawk, Jeff

    2013-03-01

    As a new class of advanced materials, high-entropy alloys (HEAs) exhibit a wide variety of excellent materials properties, including high strength, reasonable ductility with appreciable work-hardening, corrosion and oxidation resistance, wear resistance, and outstanding diffusion-barrier performance, especially at elevated and high temperatures. In this talk, we will explain our computational approach to the study of HEAs that employs the Korringa-Kohn-Rostoker coherent potential approximation (KKR-CPA) method. The KKR-CPA method uses Green's function technique within the framework of multiple scattering theory and is uniquely designed for the theoretical investigation of random alloys from the first principles. The application of the KKR-CPA method will be discussed as it pertains to the study of structural and mechanical properties of HEAs. In particular, computational results will be presented for AlxCoCrCuFeNi (x = 0, 0.3, 0.5, 0.8, 1.0, 1.3, 2.0, 2.8, and 3.0), and these results will be compared with experimental information from the literature.

  13. Computer simulation of high energy displacement cascades

    International Nuclear Information System (INIS)

    Heinisch, H.L.

    1990-01-01

    A methodology developed for modeling many aspects of high energy displacement cascades with molecular level computer simulations is reviewed. The initial damage state is modeled in the binary collision approximation (using the MARLOWE computer code), and the subsequent disposition of the defects within a cascade is modeled with a Monte Carlo annealing simulation (the ALSOME code). There are few adjustable parameters, and none are set to physically unreasonable values. The basic configurations of the simulated high energy cascades in copper, i.e., the number, size and shape of damage regions, compare well with observations, as do the measured numbers of residual defects and the fractions of freely migrating defects. The success of these simulations is somewhat remarkable, given the relatively simple models of defects and their interactions that are employed. The reason for this success is that the behavior of the defects is very strongly influenced by their initial spatial distributions, which the binary collision approximation adequately models. The MARLOWE/ALSOME system, with input from molecular dynamics and experiments, provides a framework for investigating the influence of high energy cascades on microstructure evolution. (author)

  14. High-resolution computer-aided moire

    Science.gov (United States)

    Sciammarella, Cesar A.; Bhat, Gopalakrishna K.

    1991-12-01

    This paper presents a high resolution computer assisted moire technique for the measurement of displacements and strains at the microscopic level. The detection of micro-displacements using a moire grid and the problem associated with the recovery of displacement field from the sampled values of the grid intensity are discussed. A two dimensional Fourier transform method for the extraction of displacements from the image of the moire grid is outlined. An example of application of the technique to the measurement of strains and stresses in the vicinity of the crack tip in a compact tension specimen is given.

  15. Usage of super high speed computer for clarification of complex phenomena

    International Nuclear Information System (INIS)

    Sekiguchi, Tomotsugu; Sato, Mitsuhisa; Nakata, Hideki; Tatebe, Osami; Takagi, Hiromitsu

    1999-01-01

    This study aims at constructing an efficient application environment for super high speed computers, one that supports parallel distributed systems and is easily ported to different computer systems and different processor counts, by conducting research and development on the super high speed computer application technology required for the elucidation of complicated phenomena in the nuclear power field by computational science methods. In order to realize such an environment, the Electrotechnical Laboratory has developed Ninf, a network numerical information library. The Ninf system can supply a global network infrastructure for high-performance worldwide computing over wide-area distributed networks. (G.K.)

  16. High-efficiency airfoil rudders applied to submarines

    Directory of Open Access Journals (Sweden)

    ZHOU Yimei

    2017-03-01

    Full Text Available Modern submarine design puts forward ever higher requirements for control surfaces, which requires designers to constantly innovate new types of rudder so as to improve the efficiency of control surfaces. Adopting a high-efficiency airfoil rudder is one of the most effective measures for improving this efficiency. In this paper, we put forward an optimization method for a high-efficiency airfoil rudder on the basis of a comparative analysis of the strengths and weaknesses of various airfoils, and a numerical calculation method is adopted to comparatively analyze the influence of the high-efficiency airfoil rudder and a conventional NACA rudder on the hydrodynamic characteristics and wake field; at the same time, a model load test in a towing tank was carried out, and the test results and simulation calculations showed good consistency: the error between them was less than 10%. The experimental results show that the steerage of the high-efficiency airfoil rudder is increased by more than 40% compared with the conventional rudder, while the total resistance is close: the difference is no more than 4%. Adopting a high-efficiency airfoil rudder therefore yields a gain in lift efficiency far larger than the added resistance to the boat. The results show that the high-efficiency airfoil rudder has obvious advantages for improving control efficiency, giving it good application prospects.

  17. Increasing the computational efficiency of digital cross correlation by a vectorization method

    Science.gov (United States)

    Chang, Ching-Yuan; Ma, Chien-Ching

    2017-08-01

    This study presents a vectorization method for use in MATLAB programming aimed at increasing the computational efficiency of digital cross correlation in sound and images, resulting in a speedup of 6.387 and 36.044 times compared with performance values obtained from looped expression. This work bridges the gap between matrix operations and loop iteration, preserving flexibility and efficiency in program testing. This paper uses numerical simulation to verify the speedup of the proposed vectorization method as well as experiments to measure the quantitative transient displacement response subjected to dynamic impact loading. The experiment involved the use of a high speed camera as well as a fiber optic system to measure the transient displacement in a cantilever beam under impact from a steel ball. Experimental measurement data obtained from the two methods are in excellent agreement in both the time and frequency domain, with discrepancies of only 0.68%. Numerical and experiment results demonstrate the efficacy of the proposed vectorization method with regard to computational speed in signal processing and high precision in the correlation algorithm. We also present the source code with which to build MATLAB-executable functions on Windows as well as Linux platforms, and provide a series of examples to demonstrate the application of the proposed vectorization method.
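
    A Python/NumPy analogue of the vectorization idea (the paper itself works in MATLAB): replace the per-lag loop with a sliding-window view and a single matrix-vector product.

    ```python
    import numpy as np

    def xcorr_loop(a, b):              # looped form: one dot product per lag
        n = len(a) - len(b) + 1
        out = np.empty(n)
        for i in range(n):
            out[i] = np.dot(a[i:i + len(b)], b)
        return out

    def xcorr_vectorized(a, b):
        # All sliding windows as one (n_lags x len(b)) view, then one mat-vec.
        windows = np.lib.stride_tricks.sliding_window_view(a, len(b))
        return windows @ b

    rng = np.random.default_rng(0)
    a, b = rng.normal(size=20_000), rng.normal(size=256)
    assert np.allclose(xcorr_loop(a, b), xcorr_vectorized(a, b))
    ```

    The two functions return identical results; the speedup of the vectorized form comes from moving the lag loop out of the interpreter and into a single optimized matrix product, the same mechanism the paper exploits in MATLAB.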

  18. Computationally Efficient Power Allocation Algorithm in Multicarrier-Based Cognitive Radio Networks: OFDM and FBMC Systems

    Directory of Open Access Journals (Sweden)

    Shaat Musbah

    2010-01-01

    Full Text Available Cognitive Radio (CR) systems have been proposed to increase spectrum utilization by opportunistically accessing unused spectrum. Multicarrier communication systems are promising candidates for CR systems. Due to its high spectral efficiency, filter bank multicarrier (FBMC) can be considered as an alternative to conventional orthogonal frequency division multiplexing (OFDM) for transmission over CR networks. This paper addresses the problem of resource allocation in multicarrier-based CR networks. The objective is to maximize the downlink capacity of the network under constraints on both the total power and the interference introduced to the primary users (PUs). The optimal solution has high computational complexity, which makes it unsuitable for practical applications, and hence a low complexity suboptimal solution is proposed. The proposed algorithm utilizes the spectrum holes in PU bands as well as active PU bands. The performance of the proposed algorithm is investigated for OFDM and FBMC based CR systems. Simulation results illustrate that the proposed resource allocation algorithm with low computational complexity achieves near optimal performance and proves the efficiency of using FBMC in the CR context.
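
    For reference, the classic water-filling allocation under the total power budget alone can be sketched as follows; the paper's suboptimal algorithm additionally enforces the interference limits toward the PU bands, which this sketch omits:

    ```python
    import numpy as np

    def waterfill(gains, p_total, noise=1.0):
        """Allocate p_total over subcarriers with channel power gains `gains`."""
        inv = noise / np.asarray(gains)          # per-carrier noise/gain floor
        inv_sorted = np.sort(inv)
        k = len(inv)                             # start with all carriers active
        while k > 0:
            mu = (p_total + inv_sorted[:k].sum()) / k   # candidate water level
            if mu > inv_sorted[k - 1]:
                break                            # all k carriers stay above water
            k -= 1                               # drop the worst carrier, retry
        return np.maximum(mu - inv, 0.0)

    g = np.array([2.0, 1.0, 0.25, 0.05])         # subcarrier gains
    p = waterfill(g, p_total=4.0)
    print(np.round(p, 3), p.sum())               # -> [2.25 1.75 0. 0.] 4.0
    ```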

  19. Energy-Efficient Office Buildings at High Latitudes

    Energy Technology Data Exchange (ETDEWEB)

    Lerum, V.

    1996-12-31

    This doctoral thesis describes a method for energy efficient office building design at high latitudes and in cold climates. The method combines daylighting, passive solar heating, solar protection, and ventilative cooling. The thesis focuses on optimal design of an equatorial-facing fenestration system. A spreadsheet framework linking existing simplified methods is used. The daylight analysis uses location specific data on frequency distribution of diffuse daylight on vertical surfaces to estimate energy savings from optimal window and room configurations in combination with a daylight-responsive electric lighting system. The passive solar heating analysis is a generalization of a solar load ratio method adapted to cold climates by combining it with the Norwegian standard NS3031 for winter months when the solar savings fraction is negative. The emphasis is on very high computational efficiency to permit rapid and comprehensive examination of a large number of options early in design. The procedure is illustrated for a location in Trondheim, Norway, testing the relative significance of various design improvement options relative to a base case. The method is also tested for two other locations in Norway, at latitudes 58 and 70 degrees North. The band of latitudes between these limits covers cities in Alaska, Canada, Greenland, Iceland, Scandinavia, Finland, Russia, and Northern Japan. A comprehensive study of the "whole building approach" shows the impact of integrated daylighting and low-energy design strategies. In general, consumption of lighting electricity may be reduced by 50-80%, even at extremely high latitudes. The reduced internal heat from electric lights is replaced by passive solar heating. 113 refs., 85 figs., 25 tabs.

  20. Work-Efficient Parallel Skyline Computation for the GPU

    DEFF Research Database (Denmark)

    Bøgh, Kenneth Sejdenfaden; Chester, Sean; Assent, Ira

    2015-01-01

    …offers the potential for parallelizing skyline computation across thousands of cores. However, attempts to port skyline algorithms to the GPU have prioritized throughput and failed to outperform sequential algorithms. In this paper, we introduce a new skyline algorithm, designed for the GPU, that uses a global, static partitioning scheme. With the partitioning, we can permit controlled branching to exploit transitive relationships and avoid most point-to-point comparisons. The result is a non-traditional GPU algorithm, SkyAlign, that prioritizes work-efficiency and respectable throughput, rather than...

  1. A new computationally-efficient two-dimensional model for boron implantation into single-crystal silicon

    International Nuclear Information System (INIS)

    Klein, K.M.; Park, C.; Yang, S.; Morris, S.; Do, V.; Tasch, F.

    1992-01-01

    We have developed a new computationally-efficient two-dimensional model for boron implantation into single-crystal silicon. The new model is based on the dual Pearson semi-empirical implant depth profile model and the UT-MARLOWE Monte Carlo boron ion implantation model. It can predict, with very high computational efficiency, two-dimensional as-implanted boron profiles as a function of energy, dose, tilt angle, rotation angle, masking edge orientation, and masking edge thickness

  2. An Efficient Computational Technique for Fractal Vehicular Traffic Flow

    Directory of Open Access Journals (Sweden)

    Devendra Kumar

    2018-04-01

    Full Text Available In this work, we examine a fractal vehicular traffic flow problem. The partial differential equations describing a fractal vehicular traffic flow are solved with the aid of the local fractional homotopy perturbation Sumudu transform scheme and the local fractional reduced differential transform method. Some illustrative examples are taken to describe the success of the suggested techniques. The results derived with the aid of the suggested schemes reveal that the present schemes are very efficient for obtaining the non-differentiable solution to fractal vehicular traffic flow problem.

  3. Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach

    Science.gov (United States)

    Warner, James E.; Hochhalter, Jacob D.

    2016-01-01

    This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
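
    The sampling step at the heart of this approach can be illustrated with a bare-bones random-walk Metropolis sampler on a toy one-parameter posterior; in the actual framework, DRAM replaces this simple proposal scheme and a sparse-grid surrogate replaces the expensive likelihood evaluation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def log_post(theta, data, sigma=0.5):
        # Toy choice: Gaussian likelihood around theta, standard-normal prior.
        return -0.5 * np.sum((data - theta) ** 2) / sigma**2 - 0.5 * theta**2

    data = rng.normal(1.2, 0.5, size=20)         # synthetic "sensor" data
    theta, lp = 0.0, log_post(0.0, data)
    samples = []
    for _ in range(20_000):
        prop = theta + 0.2 * rng.normal()        # random-walk proposal
        lp_prop = log_post(prop, data)
        if np.log(rng.uniform()) < lp_prop - lp: # Metropolis accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta)
    post = np.array(samples[5_000:])             # discard burn-in
    print(f"posterior mean ~ {post.mean():.3f} +/- {post.std():.3f}")
    ```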

  4. HAlign-II: efficient ultra-large multiple sequence alignment and phylogenetic tree reconstruction with distributed and parallel computing.

    Science.gov (United States)

    Wan, Shixiang; Zou, Quan

    2017-01-01

    Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme increase in next-generation sequencing has resulted in a shortage of efficient ultra-large biological sequence alignment approaches able to cope with different sequence types. Distributed and parallel computing represents a crucial technique for accelerating ultra-large (e.g. files of more than 1 GB) sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient HAlign-II tool to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on large-scale DNA and protein data sets, with files larger than 1 GB, showed that HAlign-II saves both time and space and outperforms current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences; it shows extremely high memory efficiency and scales well with increases in computing resources. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure. HAlign-II with open-source code and datasets is available at http://lab.malab.cn/soft/halign.

  5. High Efficiency, High Density Terrestrial Panel. [for solar cell modules

    Science.gov (United States)

    Wohlgemuth, J.; Wihl, M.; Rosenfield, T.

    1979-01-01

    Terrestrial panels were fabricated using rectangular cells. Packing densities in excess of 90% with panel conversion efficiencies greater than 13% were obtained. Higher density panels can be produced on a cost competitive basis with the standard salami panels.

  6. Automatic domain updating technique for improving computational efficiency of 2-D flood-inundation simulation

    Science.gov (United States)

    Tanaka, T.; Tachikawa, Y.; Ichikawa, Y.; Yorozu, K.

    2017-12-01

    Flood is one of the most hazardous disasters and causes serious damage to people and property around the world. To prevent/mitigate flood damage through early warning systems and/or river management planning, numerical modelling of flood-inundation processes is essential. In the literature, flood-inundation models have been extensively developed and improved to achieve flood flow simulation with complex topography at high resolution. With increasing demands on flood-inundation modelling, its computational burden is now one of the key issues. Improvements to the computational efficiency of the full shallow water equations have been made from various perspectives, such as approximations of the momentum equations, parallelization techniques, and coarsening approaches. To complement these techniques and further improve the computational efficiency of flood-inundation simulations, this study proposes an Automatic Domain Updating (ADU) method for 2-D flood-inundation simulation. The ADU method traces the wet and dry interface and automatically updates the simulation domain in response to the progress and recession of flood propagation. The updating algorithm is as follows: first, register the simulation cells potentially flooded at the initial stage (such as floodplains near river channels); then, whenever a registered cell is flooded, register its surrounding cells. The time for this additional process is kept small by checking only cells at the wet and dry interface. The computation time is reduced by skipping the processing of non-flooded areas. This algorithm is easily applied to any type of 2-D flood-inundation model. The proposed ADU method is implemented with the 2-D local inertial equations for the Yodo River basin, Japan. Case studies for two flood events show that the simulation finishes in a factor of two to ten less time while producing the same results as the simulation without the ADU method.
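
    The registration logic amounts to maintaining a set of active cells that grows only along the wet/dry interface, so dry areas cost nothing. A schematic sketch follows; the single-cell update is a toy smoothing stand-in, not the local inertial solver:

    ```python
    import numpy as np

    def step_with_adu(depth, active, update_cell, wet_eps=1e-6):
        """Advance one step, updating only registered (active) cells."""
        rows, cols = depth.shape
        new_depth = depth.copy()
        for (i, j) in list(active):
            new_depth[i, j] = update_cell(depth, i, j)
            if new_depth[i, j] > wet_eps:            # cell is wet:
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj           # register its neighbours
                    if 0 <= ni < rows and 0 <= nj < cols:
                        active.add((ni, nj))
        return new_depth, active

    def update_cell(d, i, j):   # toy stand-in for the shallow-water update
        up, dn = d[max(i - 1, 0), j], d[min(i + 1, d.shape[0] - 1), j]
        lt, rt = d[i, max(j - 1, 0)], d[i, min(j + 1, d.shape[1] - 1)]
        return 0.5 * d[i, j] + 0.125 * (up + dn + lt + rt)

    depth = np.zeros((50, 50)); depth[25, 25] = 1.0   # a point source of water
    active = {(25, 25)}
    for _ in range(10):
        depth, active = step_with_adu(depth, active, update_cell)
    print(len(active), "active cells of", depth.size)  # only the flooded patch
    ```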

  7. Evaluation of high-performance computing software

    Energy Technology Data Exchange (ETDEWEB)

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations that are difficult to find elsewhere.

  8. Efficient relaxed-Jacobi smoothers for multigrid on parallel computers

    Science.gov (United States)

    Yang, Xiang; Mittal, Rajat

    2017-03-01

    In this Technical Note, we present a family of Jacobi-based multigrid smoothers suitable for the solution of discretized elliptic equations. These smoothers are based on the idea of scheduled-relaxation Jacobi proposed recently by Yang & Mittal (2014) [18] and employ two or three successive relaxed Jacobi iterations with relaxation factors derived so as to maximize the smoothing property of these iterations. The performance of these new smoothers measured in terms of convergence acceleration and computational workload, is assessed for multi-domain implementations typical of parallelized solvers, and compared to the lexicographic point Gauss-Seidel smoother. The tests include the geometric multigrid method on structured grids as well as the algebraic grid method on unstructured grids. The tests demonstrate that unlike Gauss-Seidel, the convergence of these Jacobi-based smoothers is unaffected by domain decomposition, and furthermore, they outperform the lexicographic Gauss-Seidel by factors that increase with domain partition count.
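
    The building block is the relaxed Jacobi update u ← u + ω D⁻¹(f − Au), applied with two (or three) successive relaxation factors. The sketch below uses an illustrative under/over-relaxed pair on the 1-D Poisson problem, not the optimized factors derived in the paper:

    ```python
    import numpy as np

    def residual(u, f, h):            # r = f - A u for -u'' = f, Dirichlet BCs
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
        return r

    def relaxed_jacobi_sweep(u, f, h, omegas=(0.67, 1.33)):
        # Two successive relaxed Jacobi steps; D = (2/h^2) I for this stencil.
        for w in omegas:
            u = u + w * (h**2 / 2.0) * residual(u, f, h)
        return u

    n = 129
    h = 1.0 / (n - 1)
    f = np.ones(n)
    rng = np.random.default_rng(0)
    u = rng.normal(size=n); u[0] = u[-1] = 0.0    # rough (oscillatory) guess
    print(f"initial residual: {np.linalg.norm(residual(u, f, h)):.2e}")
    for _ in range(5):
        u = relaxed_jacobi_sweep(u, f, h)
    print(f"after 5 sweeps:   {np.linalg.norm(residual(u, f, h)):.2e}")
    ```

    Because each step reads only the previous iterate, the update is identical no matter how the grid is partitioned across processors, which is the domain-decomposition insensitivity highlighted above.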

  9. Efficient Computational Design of a Scaffold for Cartilage Cell Regeneration

    DEFF Research Database (Denmark)

    Tajsoleiman, Tannaz; Jafar Abdekhodaie, Mohammad; Gernaey, Krist V.

    2018-01-01

    Due to the sensitivity of mammalian cell cultures, understanding the influence of operating conditions during a tissue generation procedure is crucial. In this regard, a detailed study of scaffold-based cell culture under a perfusion flow is presented with the aid of mathematical modelling and computational fluid dynamics (CFD). With respect to the complexity of the case study, this work focuses solely on the effect of nutrient and metabolite concentrations, and the possible influence of fluid-induced shear stress on a targeted cell (cartilage) culture. The simulation setup gives the possibility of predicting the cell culture behavior under various operating conditions and scaffold designs. Thereby, the exploitation of the predictive simulation in a newly developed stochastic routine provides the opportunity of exploring improved scaffold geometry designs. This approach was applied on a common type...

  10. 40 CFR 761.71 - High efficiency boilers.

    Science.gov (United States)

    2010-07-01

    Title 40, Protection of Environment. PROHIBITIONS, Storage and Disposal, § 761.71 High efficiency boilers. (a) To burn mineral oil dielectric fluid containing a PCB concentration of ≥50 ppm, but <500 ppm, the boiler shall comply with the following...

  11. Efficient Sustainable Operation Mechanism of Distributed Desktop Integration Storage Based on Virtualization with Ubiquitous Computing

    Directory of Open Access Journals (Sweden)

    Hyun-Woo Kim

    2015-06-01

    Full Text Available Following the rapid growth of ubiquitous computing, many jobs that were previously manual have now been automated. This automation has increased the amount of time available for leisure, and diverse services are now being developed for this leisure time. In addition, with the development of small, portable devices like smartphones, diverse Internet services can be used regardless of time and place. Studies regarding diverse virtualization are currently in progress. These studies aim to determine ways to efficiently store and process the big data generated by the multitude of devices and services in use. One topic of such studies is desktop storage virtualization, which integrates distributed desktop resources and provides them to users by virtualizing distributed legacy desktops. In the case of desktop storage virtualization, high availability is necessary and important for providing reliability to users. Studies regarding hierarchical structures and resource integration are currently in progress, aiming at efficient data distribution and storage for distributed desktops based on resource integration environments. However, studies regarding efficient responses to server faults occurring in desktop-based resource integration environments have been insufficient. This paper proposes a mechanism for the sustainable operation of desktop storage (SODS) for high operational availability. It allows for the easy addition and removal of desktops in desktop-based integration environments. It also activates alternative servers when a fault occurs within a system.

  12. The Design of High Efficiency Crossflow Hydro Turbines: A Review and Extension

    Directory of Open Access Journals (Sweden)

    Ram Adhikari

    2018-01-01

    Full Text Available Efficiency is a critical consideration in the design of hydro turbines. The crossflow turbine is the cheapest and easiest hydro turbine to manufacture and so is commonly used in remote power systems for developing countries. A longstanding problem for practical crossflow turbines is their lower maximum efficiency compared to their more advanced counterparts, such as Pelton and Francis turbines. This paper reviews the experimental and computational studies relevant to the design of high efficiency crossflow turbines. We concentrate on the studies that have contributed to designs with efficiencies in the range of 88–90%. Many recent studies have been conducted on turbines of low maximum efficiency, which we believe is due to a misunderstanding of the design principles for achieving high efficiencies. We synthesize the key results of experimental and computational fluid dynamics studies to highlight the fundamental design principles for achieving efficiencies of about 90%, as well as future research and development areas to further improve the maximum efficiency. The main finding of this review is that the total conversion of head into kinetic energy in the nozzle and the matching of nozzle and runner designs are the two main requirements for the design of high efficiency turbines.
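
    The "total conversion of head into kinetic energy in the nozzle" corresponds to the standard ideal-jet relation (a textbook result, not a formula quoted from the reviewed paper):

        v_j = \sqrt{2\,g\,H},
        \qquad \text{e.g. } H = 10\ \mathrm{m}:\quad
        v_j = \sqrt{2 \times 9.81\ \mathrm{m\,s^{-2}} \times 10\ \mathrm{m}} \approx 14\ \mathrm{m\,s^{-1}}.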

  13. HIGH EFFICIENCY DESULFURIZATION OF SYNTHESIS GAS

    Energy Technology Data Exchange (ETDEWEB)

    Anirban Mukherjee; Kwang-Bok Yi; Elizabeth J. Podlaha; Douglas P. Harrison

    2001-11-01

    Mixed metal oxides containing CeO{sub 2} and ZrO{sub 2} are being studied as high temperature desulfurization sorbents capable of achieving the DOE Vision 21 target of 1 ppmv or less H{sub 2}S. The research is justified by recent results in this laboratory that showed that reduced CeO{sub 2}, designated CeO{sub n} (1.5 < n < 2.0), is capable of achieving the 1 ppmv target in highly reducing gas atmospheres. The addition of ZrO{sub 2} has improved the performance of oxidation catalysts and three-way automotive catalysts containing CeO{sub 2}, and should have similar beneficial effects on CeO{sub 2} desulfurization sorbents. An electrochemical method for synthesizing CeO{sub 2}-ZrO{sub 2} has been developed and the products have been characterized by XRD and TEM during year 01. Nanocrystalline particles having a diameter of about 5 nm and containing from approximately 10 mol% to 80 mol% ZrO{sub 2} have been prepared. XRD showed the product to be a solid solution at low ZrO{sub 2} contents with a separate ZrO{sub 2} phase emerging at higher ZrO{sub 2} levels. Phase separation did not occur when the solid solutions were heat treated at 700 C. A flow reactor system constructed of quartz and Teflon has been assembled, and a gas chromatograph equipped with a pulsed flame photometric detector (PFPD) suitable for measuring sub-ppmv levels of H{sub 2}S has been purchased with LSU matching funds. Preliminary desulfurization tests using commercial CeO{sub 2} and CeO{sub 2}-ZrO{sub 2} in highly reducing gas compositions have confirmed that CeO{sub 2}-ZrO{sub 2} is more effective than CeO{sub 2} in removing H{sub 2}S. At 700 C the product H{sub 2}S concentration using CeO{sub 2}-ZrO{sub 2} sorbent was near the 0.1 ppmv PFPD detection limit during the prebreakthrough period.

  14. Highly efficient induction of chirality in intramolecular

    Science.gov (United States)

    Cossio; Arrieta; Lecea; Alajarin; Vidal; Tovar

    2000-06-16

    Highly stereocontrolled, intramolecular [2 + 2] cycloadditions between ketenimines and imines leading to 1,2-dihydroazeto[2, 1-b]quinazolines have been achieved. The source of stereocontrol is a chiral carbon atom adjacent either to the iminic carbon or nitrogen atom. In the first case, the stereocontrol stems from the preference for the axial conformer in the first transition structure. In the second case, the origin of the stereocontrol lies in the two-electron stabilizing interaction between the C-C bond being formed and the sigma orbital corresponding to the polar C-X bond, X being an electronegative atom. These models can be extended to other related systems for predicting the stereochemical outcome of this intramolecular reaction.

  15. High Efficiency, Low Cost Scintillators for PET

    International Nuclear Information System (INIS)

    Kanai Shah

    2007-01-01

    Inorganic scintillation detectors coupled to PMTs are an important element of medical imaging applications such as positron emission tomography (PET). Performance as well as cost of these systems is limited by the properties of the scintillation detectors available at present. The Phase I project was aimed at demonstrating the feasibility of producing high performance scintillators using a low cost fabrication approach. Samples of these scintillators were produced and their performance was evaluated. Overall, the Phase I effort was very successful. The Phase II project will be aimed at advancing the new scintillation technology for PET. Large samples of the new scintillators will be produced and their performance will be evaluated. PET modules based on the new scintillators will also be built and characterized.

  16. Compact and highly efficient laser pump cavity

    Science.gov (United States)

    Chang, Jim J.; Bass, Isaac L.; Zapata, Luis E.

    1999-01-01

    A new, compact, side-pumped laser pump cavity design which uses non-conventional optics for injection of laser-diode light into a laser pump chamber includes a plurality of elongated light concentration channels. In one embodiment, the light concentration channels are compound parabolic concentrators (CPC) which have very small exit apertures so that light will not escape from the pumping chamber and will be multiply reflected through the laser rod. This new design effectively traps the pump radiation inside the pump chamber that encloses the laser rod. It enables more uniform laser pumping and highly effective recycling of pump radiation, leading to significantly improved laser performance. This new design also effectively widens the acceptable radiation wavelength range of the diodes, resulting in more reliable laser performance at lower cost.

  17. PVT: an efficient computational procedure to speed up next-generation sequence analysis.

    Science.gov (United States)

    Maji, Ranjan Kumar; Sarkar, Arijita; Khatua, Sunirmal; Dasgupta, Subhasis; Ghosh, Zhumur

    2014-06-04

    High-throughput Next-Generation Sequencing (NGS) techniques are advancing genomics and molecular biology research. This technology generates substantially large data volumes, which poses a major challenge for scientists seeking efficient, cost- and time-effective solutions for analysing such data. Further, the different types of NGS data share certain common, challenging analysis steps. Spliced alignment is one such fundamental step in NGS data analysis and is extremely computationally intensive as well as time consuming. Serious problems exist even with the most widely used spliced alignment tools. TopHat is one such widely used spliced alignment tool which, although it supports multithreading, does not efficiently utilize computational resources in terms of CPU utilization and memory. Here we introduce PVT (Pipelined Version of TopHat), in which we take a modular approach by breaking TopHat's serial execution into a pipeline of multiple stages, thereby increasing the degree of parallelization and computational resource utilization. We thus address the discrepancies in TopHat so as to analyse large NGS data efficiently. We analysed the SRA datasets SRX026839 and SRX026838, consisting of single-end reads, and the SRA dataset SRR1027730, consisting of paired-end reads. We used TopHat v2.0.8 to analyse these datasets and noted the CPU usage, memory footprint, and execution time during spliced alignment. With this basic information, we designed PVT, a pipelined version of TopHat that removes the redundant computational steps during spliced alignment and breaks the job into a pipeline of multiple stages (each comprising different steps) to improve its resource utilization, thus reducing the execution time. PVT provides an improvement over TopHat for spliced alignment of NGS data. PVT thus reduced the execution time to ~23% for the single-end read dataset. Further, PVT designed for paired-end reads showed an
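
    As an illustration of the pipelining idea only (not PVT's actual code), a minimal Python sketch that chains two worker stages through queues so that successive stages overlap in time; the stage functions are hypothetical stand-ins for steps of spliced alignment:

        import threading
        import queue

        def stage(fn, q_in, q_out):
            # Each stage pulls an item, applies its function, and hands the
            # result to the next stage; None is the shutdown sentinel.
            while True:
                item = q_in.get()
                if item is None:
                    if q_out is not None:
                        q_out.put(None)
                    break
                result = fn(item)
                if q_out is not None:
                    q_out.put(result)

        # Hypothetical stage functions standing in for alignment steps.
        def prepare(read_chunk):
            return read_chunk.lower()

        def align(read_chunk):
            return read_chunk[::-1]

        q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
        threads = [threading.Thread(target=stage, args=(prepare, q1, q2)),
                   threading.Thread(target=stage, args=(align, q2, q3))]
        for t in threads:
            t.start()
        for chunk in ["READ1", "READ2", "READ3"]:
            q1.put(chunk)            # chunks flow through both stages concurrently
        q1.put(None)
        for t in threads:
            t.join()
        while not q3.empty():
            result = q3.get()
            if result is not None:
                print(result)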

  18. Smoothing the payoff for efficient computation of Basket option prices

    KAUST Repository

    Bayer, Christian; Siebenmorgen, Markus; Tempone, Raul

    2017-01-01

    We consider the problem of pricing basket options in a multivariate Black–Scholes or Variance-Gamma model. From a numerical point of view, pricing such options corresponds to moderate and high-dimensional numerical integration problems with non
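
    To make the pricing problem concrete, a plain Monte Carlo sketch (Python/NumPy) for a European basket call in a multivariate Black–Scholes model; all parameter values are illustrative assumptions, and the payoff-smoothing and quadrature techniques of the paper itself are not shown:

        import numpy as np

        def basket_call_mc(s0, sigma, corr, weights, strike,
                           r=0.01, t=1.0, n_paths=100_000, seed=0):
            # Monte Carlo price of a European basket call under
            # correlated geometric Brownian motions.
            rng = np.random.default_rng(seed)
            chol = np.linalg.cholesky(corr)
            z = rng.standard_normal((n_paths, len(s0))) @ chol.T   # correlated normals
            st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
            payoff = np.maximum(st @ weights - strike, 0.0)        # the kink that motivates smoothing
            return np.exp(-r * t) * payoff.mean()

        # Example with assumed (illustrative) parameters:
        s0 = np.array([100.0, 100.0]); sigma = np.array([0.2, 0.3])
        corr = np.array([[1.0, 0.5], [0.5, 1.0]]); w = np.array([0.5, 0.5])
        print(basket_call_mc(s0, sigma, corr, w, strike=100.0))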

  19. The peak efficiency calibration of volume source using 152Eu point source in computer

    International Nuclear Information System (INIS)

    Shen Tingyun; Qian Jianfu; Nan Qinliang; Zhou Yanguo

    1997-01-01

    The authors describe a method for peak efficiency calibration of volume sources by means of a 152Eu point source for an HPGe γ spectrometer. The peak efficiency can be computed by Monte Carlo simulation after inputting the detector parameters. The computed results agree with the experimental results within an error of ±3.8%, with one exception of about ±7.4%.

  20. HIGH EFFICIENCY DESULFURIZATION OF SYNTHESIS GAS

    Energy Technology Data Exchange (ETDEWEB)

    Kwang-Bok Yi; Anirban Mukherjee; Elizabeth J. Podlaha; Douglas P. Harrison

    2004-03-01

    Mixed metal oxides containing ceria and zirconia have been studied as high temperature desulfurization sorbents with the objective of achieving the DOE Vision 21 target of 1 ppmv or less H{sub 2}S in the product gas. The research was justified by recent results in this laboratory that showed that reduced CeO{sub 2}, designated CeO{sub n} (1.5 < n < 2.0), is capable of achieving the 1 ppmv target in highly reducing gas atmospheres. The addition of ZrO{sub 2} has improved the performance of oxidation catalysts and three-way automotive catalysts containing CeO{sub 2}, and was postulated to have similar beneficial effects on CeO{sub 2} desulfurization sorbents. An electrochemical method for synthesizing CeO{sub 2}-ZrO{sub 2} mixtures was developed and the products were characterized by XRD and TEM during year 01. Nanocrystalline particles having a diameter of about 5 nm and containing from approximately 10 mol% to 80 mol% ZrO{sub 2} were prepared. XRD analysis showed the product to be a solid solution at low ZrO{sub 2} contents with a separate ZrO{sub 2} phase emerging at higher ZrO{sub 2} levels. Unfortunately, the quantity of CeO{sub 2}-ZrO{sub 2} that could be prepared electrochemically was too small to permit desulfurization testing. Also during year 01 a laboratory-scale fixed-bed reactor was constructed for desulfurization testing. All components of the reactor and analytical systems that were exposed to low concentrations of H{sub 2}S were constructed of quartz, Teflon, or silcosteel. Reactor product gas composition as a function of time was determined using a Varian 3800 gas chromatograph equipped with a pulsed flame photometric detector (PFPD) for measuring low H{sub 2}S concentrations from approximately 0.1 to 10 ppmv, and a thermal conductivity detector (TCD) for higher concentrations of H{sub 2}S. Larger quantities of CeO{sub 2}-ZrO{sub 2} mixtures from other sources, including mixtures prepared in this laboratory using a coprecipitation procedure, were obtained

  1. Computer code validation by high temperature chemistry

    International Nuclear Information System (INIS)

    Alexander, C.A.; Ogden, J.S.

    1988-01-01

    At least five of the computer codes utilized in the analysis of severe fuel damage-type events are directly dependent upon or can be verified by high temperature chemistry. These codes are ORIGEN, CORSOR, CORCON, VICTORIA, and VANESA. With the exception of CORCON and VANESA, it is necessary that verification experiments be performed on real irradiated fuel. For ORIGEN, the familiar Knudsen effusion cell is the best choice: a small piece of known mass and known burn-up is selected and volatilized completely into the mass spectrometer. The mass spectrometer is used in the integral mode to integrate the entire signal from preselected radionuclides, and from this integrated signal the total mass of the respective nuclides can be determined. For CORSOR and VICTORIA, flowing high-pressure hydrogen/steam must pass over the irradiated fuel and then enter the mass spectrometer. For these experiments, a high-pressure, high-temperature molecular beam inlet must be employed. Finally, in support of VANESA-CORCON, the very highest temperature and molten fuels must be contained and analyzed. Results from all types of experiments will be discussed, and their applicability to present and future code development will also be covered.

  2. An efficient network for interconnecting remote monitoring instruments and computers

    International Nuclear Information System (INIS)

    Halbig, J.K.; Gainer, K.E.; Klosterbuer, S.F.

    1994-01-01

    Remote monitoring instrumentation must be connected with computers and other instruments. The cost and intrusiveness of installing cables in new and existing plants present problems for the facility and the International Atomic Energy Agency (IAEA). The authors have tested a network that could accomplish this interconnection using mass-produced commercial components developed for use in industrial applications. Unlike components in the hardware of most networks, the components--manufactured and distributed in North America, Europe, and Asia--lend themselves to small and low-powered applications. The heart of the network is a chip with three microprocessors and proprietary network software contained in Read Only Memory. In addition to all nonuser levels of protocol, the software also contains message authentication capabilities. This chip can be interfaced to a variety of transmission media, for example, RS-485 lines, fiber optic cables, rf waves, and standard ac power lines. The use of power lines as the transmission medium in a facility could significantly reduce cabling costs.

  3. Efficient universal quantum channel simulation in IBM's cloud quantum computer

    Science.gov (United States)

    Wei, Shi-Jie; Xin, Tao; Long, Gui-Lu

    2018-07-01

    The study of quantum channels is an important field and promises a wide range of applications, because any physical process can be represented as a quantum channel that transforms an initial state into a final state. Inspired by the method of performing non-unitary operators by the linear combination of unitary operations, we proposed a quantum algorithm for the simulation of the universal single-qubit channel, described by a convex combination of "quasi-extreme" channels corresponding to four Kraus operators, and is scalable to arbitrary higher dimension. We demonstrated the whole algorithm experimentally using the universal IBM cloud-based quantum computer and studied the properties of different qubit quantum channels. We illustrated the quantum capacity of the general qubit quantum channels, which quantifies the amount of quantum information that can be protected. The behavior of quantum capacity in different channels revealed which types of noise processes can support information transmission, and which types are too destructive to protect information. There was a general agreement between the theoretical predictions and the experiments, which strongly supports our method. By realizing the arbitrary qubit channel, this work provides a universally- accepted way to explore various properties of quantum channels and novel prospect for quantum communication.
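
    A channel given by Kraus operators can be applied numerically as rho -> sum_i K_i rho K_i^dagger. A minimal NumPy sketch, using the single-qubit depolarizing channel as an example of a channel with four Kraus operators (the paper's linear-combination-of-unitaries circuit is not reproduced here):

        import numpy as np

        def apply_channel(rho, kraus_ops):
            # Kraus representation: rho -> sum_i K_i rho K_i^dagger.
            return sum(k @ rho @ k.conj().T for k in kraus_ops)

        # Example: depolarizing channel with probability p and four Kraus operators.
        p = 0.1
        i2 = np.eye(2, dtype=complex)
        x = np.array([[0, 1], [1, 0]], dtype=complex)
        y = np.array([[0, -1j], [1j, 0]])
        z = np.array([[1, 0], [0, -1]], dtype=complex)
        kraus = [np.sqrt(1 - 3 * p / 4) * i2,
                 np.sqrt(p / 4) * x, np.sqrt(p / 4) * y, np.sqrt(p / 4) * z]

        rho = np.array([[1, 0], [0, 0]], dtype=complex)   # the state |0><0|
        print(apply_channel(rho, kraus))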

  4. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  5. All passive architecture for high efficiency cascaded Raman conversion

    Science.gov (United States)

    Balaswamy, V.; Arun, S.; Chayran, G.; Supradeepa, V. R.

    2018-02-01

    Cascaded Raman fiber lasers have offered a convenient method to obtain scalable, high-power sources in various wavelength regions inaccessible with rare-earth doped fiber lasers. Previously, a limitation was the reduced efficiency of these lasers. Recently, new architectures have been proposed to enhance efficiency, but this came at the cost of enhanced complexity, requiring an additional low-power, cascaded Raman laser. In this work, we overcome this with a new, all-passive architecture for high-efficiency cascaded Raman conversion. We demonstrate our architecture with a fifth-order cascaded Raman converter from 1117 nm to 1480 nm with an output power of ~64 W and an efficiency of 60%.

  6. Toward High-Power Klystrons With RF Power Conversion Efficiency on the Order of 90%

    CERN Document Server

    Baikov, Andrey Yu; Syratchev, Igor

    2015-01-01

    The increase in efficiency of RF power generation for future large accelerators is considered a high priority issue. The vast majority of the existing commercial high-power RF klystrons operate in the electronic efficiency range between 40% and 55%. Only a few klystrons available on the market are capable of operating with 65% efficiency or above. In this paper, a new method to achieve 90% RF power conversion efficiency in a klystron amplifier is presented. The essential part of this method is a new bunching technique: bunching with bunch core oscillations. Computer simulations confirm that an RF production efficiency above 90% can be reached with this new bunching method. The results of a preliminary study of an L-band, 20-MW peak RF power multibeam klystron for the Compact Linear Collider with an efficiency above 85% are presented.

  7. Computer simulations of high pressure systems

    International Nuclear Information System (INIS)

    Wilkins, M.L.

    1977-01-01

    Numerical methods are capable of solving very difficult problems in solid mechanics and gas dynamics. In the design of engineering structures, critical decisions are possible if the behavior of materials is correctly described in the calculation. Problems of current interest require accurate analysis of stress-strain fields that range from very small elastic displacement to very large plastic deformation. A finite difference program is described that solves problems over this range and in two and three space-dimensions and time. A series of experiments and calculations serve to establish confidence in the plasticity formulation. The program can be used to design high pressure systems where plastic flow occurs. The purpose is to identify material properties, strength and elongation, that meet the operating requirements. An objective is to be able to perform destructive testing on a computer rather than on the engineering structure. Examples of topical interest are given

  8. Computationally efficient thermal-mechanical modelling of selective laser melting

    NARCIS (Netherlands)

    Yang, Y.; Ayas, C.; Brabazon, Dermot; Naher, Sumsun; Ul Ahad, Inam

    2017-01-01

    Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method to produce high-density metal parts with complex topology. However, part distortions and accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is

  9. Efficient Minimum-Phase Prefilter Computation Using Fast QL-Factorization

    DEFF Research Database (Denmark)

    Hansen, Morten; Christensen, Lars P.B.

    2009-01-01

    This paper presents a novel approach for computing both the minimum-phase filter and the associated all-pass filter in a computationally efficient way using the fast QL-factorization. A desirable property of this approach is that the complexity is independent of the size of the matrix which is QL...

  10. Energy-Efficient Abundant-Data Computing: The N3XT 1,000X

    OpenAIRE

    Aly Mohamed M. Sabry; Gao Mingyu; Hills Gage; Lee Chi-Shuen; Pinter Greg; Shulaker Max M.; Wu Tony F.; Asheghi Mehdi; Bokor Jeff; Franchetti Franz; Goodson Kenneth E.; Kozyrakis Christos; Markov Igor; Olukotun Kunle; Pileggi Larry

    2015-01-01

    Next generation information technologies will process unprecedented amounts of loosely structured data that overwhelm existing computing systems. N3XT improves the energy efficiency of abundant-data applications 1,000-fold by using new logic and memory technologies, 3D integration with fine-grained connectivity, and new architectures for computation immersed in memory.

  11. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation, ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows readers to compare the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  12. The path toward HEP High Performance Computing

    CERN Document Server

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of peak. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  13. Processing-Efficient Distributed Adaptive RLS Filtering for Computationally Constrained Platforms

    Directory of Open Access Journals (Sweden)

    Noor M. Khan

    2017-01-01

    Full Text Available In this paper, a novel processing-efficient architecture of a group of inexpensive and computationally constrained small platforms is proposed for a parallely distributed adaptive signal processing (PDASP) operation. The proposed architecture runs computationally expensive procedures, such as the complex adaptive recursive least squares (RLS) algorithm, cooperatively. The proposed PDASP architecture operates properly even if perfect time alignment among the participating platforms is not available. An RLS algorithm with the application of MIMO channel estimation is deployed on the proposed architecture. The complexity and processing time of the PDASP scheme with the MIMO RLS algorithm are compared with the sequentially operated MIMO RLS algorithm and the linear Kalman filter. It is observed that the PDASP scheme exhibits much lower computational complexity than the sequential MIMO RLS algorithm as well as the Kalman filter. Moreover, the proposed architecture provides improvements of 95.83% and 82.29% in processing time compared to the sequentially operated Kalman filter and MIMO RLS algorithm, respectively, for a low Doppler rate. Likewise, for a high Doppler rate, the proposed architecture attains improvements of 94.12% and 77.28% in processing time compared to the Kalman and RLS algorithms, respectively.
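
    For reference, a minimal sequential exponentially weighted RLS sketch (Python/NumPy) of the kind that the PDASP scheme distributes across platforms; the toy channel and all parameter values are assumptions for illustration:

        import numpy as np

        def rls_identify(x, d, order=4, lam=0.99, delta=100.0):
            # Exponentially weighted recursive least squares (sequential form).
            w = np.zeros(order)                    # filter weight estimates
            p = delta * np.eye(order)              # inverse correlation matrix estimate
            for n in range(order - 1, len(x)):
                u = x[n - order + 1:n + 1][::-1]   # regressor: newest sample first
                k = p @ u / (lam + u @ p @ u)      # gain vector
                e = d[n] - w @ u                   # a priori error
                w = w + k * e
                p = (p - np.outer(k, u @ p)) / lam
            return w

        # Toy usage: identify an assumed 4-tap channel from noisy observations.
        rng = np.random.default_rng(1)
        h_true = np.array([0.8, -0.4, 0.2, 0.1])
        x = rng.standard_normal(2000)
        d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
        print(rls_identify(x, d))                  # approaches h_true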

  14. Cross-scale Efficient Tensor Contractions for Coupled Cluster Computations Through Multiple Programming Model Backends

    Energy Technology Data Exchange (ETDEWEB)

    Ibrahim, Khaled Z. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Epifanovsky, Evgeny [Q-Chem, Inc., Pleasanton, CA (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Krylov, Anna I. [Univ. of Southern California, Los Angeles, CA (United States). Dept. of Chemistry

    2016-07-26

    Coupled-cluster methods provide highly accurate models of molecular structure by explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix-matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy efficient manner. We achieve up to 240× speedup compared with the best optimized shared memory implementation. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 & XC40, BlueGene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from being compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load-imbalance. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
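
    The tensor contractions that dominate such coupled-cluster codes reduce to (batched) matrix-matrix multiplications. A representative particle-particle ladder contraction, written with NumPy's einsum and toy dimensions (illustrative only, not the Libtensor API):

        import numpy as np

        # A contraction of the form t2_new[i,j,a,b] = sum_{c,d} v[a,b,c,d] * t2[i,j,c,d],
        # typical of the terms that dominate coupled-cluster iterations.
        no, nv = 4, 8                      # occupied / virtual orbital counts (toy sizes)
        t2 = np.random.rand(no, no, nv, nv)
        v = np.random.rand(nv, nv, nv, nv)
        t2_new = np.einsum("abcd,ijcd->ijab", v, t2)
        print(t2_new.shape)                # (4, 4, 8, 8)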

  15. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    Science.gov (United States)

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe usage of these methods to calculate free energies associated with (1) relative properties and (2) along reaction paths, using simple test cases with relevance to enzymes. © 2016 Elsevier Inc. All rights reserved.

  16. The thermodynamic efficiency of computations made in cells across the range of life

    Science.gov (United States)

    Kempes, Christopher P.; Wolpert, David; Cohen, Zachary; Pérez-Mercader, Juan

    2017-11-01

    Biological organisms must perform computation as they grow, reproduce and evolve. Moreover, ever since Landauer's bound was proposed, it has been known that all computation has some thermodynamic cost, and that the same computation can be achieved with greater or smaller thermodynamic cost depending on how it is implemented. Accordingly, an important issue concerning the evolution of life is assessing the thermodynamic efficiency of the computations performed by organisms. This issue is interesting both from the perspective of how close life has come to maximally efficient computation (presumably under the pressure of natural selection), and from the practical perspective of what efficiencies we might hope that engineered biological computers might achieve, especially in comparison with current computational systems. Here we show that the computational efficiency of translation, defined as free energy expended per amino acid operation, outperforms the best supercomputers by several orders of magnitude, and is only about an order of magnitude worse than the Landauer bound. However, this efficiency depends strongly on the size and architecture of the cell in question. In particular, we show that the useful efficiency of an amino acid operation, defined as the bulk energy per amino acid polymerization, decreases for increasing bacterial size and converges to the polymerization cost of the ribosome. This cost of the largest bacteria does not change in cells as we progress through the major evolutionary shifts to both single- and multicellular eukaryotes. However, the rates of total computation per unit mass are non-monotonic in bacteria with increasing cell size, and also change across different biological architectures, including the shift from unicellular to multicellular eukaryotes. This article is part of the themed issue 'Reconceptualizing the origins of life'.
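
    The Landauer bound referenced above sets the minimum free energy dissipated per bit of information erased; at roughly physiological temperature it evaluates to (standard values, not figures from the article):

        E_{\min} = k_{\mathrm B} T \ln 2
        \approx \left(1.38 \times 10^{-23}\ \mathrm{J\,K^{-1}}\right)
                \left(300\ \mathrm{K}\right)\left(0.693\right)
        \approx 2.9 \times 10^{-21}\ \mathrm{J\ per\ bit}.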

  17. Computational Study on the Effect of Shroud Shape on the Efficiency of the Gas Turbine Stage

    Science.gov (United States)

    Afanas'ev, I. V.; Granovskii, A. V.

    2018-03-01

    The last stages of high-power gas turbines play an important role in the power output and efficiency of the whole unit, as well as in the distribution of the flow parameters behind the last stage, which determines the efficient operation of the exhaust diffusers. Therefore, much attention is paid to improving the efficiency of the last stages of gas turbines as well as the distribution of flow parameters. Since the long blades of the last stages of multistage high-power gas turbines can fall into the resonance frequency range in the course of operation, which results in the destruction of the blades, damping wires or damping bolts are used for tuning out of resonance frequencies. However, these damping elements cause additional energy losses, leading to a reduction in the efficiency of the stage. To minimize these losses, damping shrouds are used instead of wires and bolts at the periphery of the working blades. However, because of strength problems, designers have to use, instead of the most efficient full shrouds, partial shrouds that do not provide for significantly reducing the losses in the tip clearance between the blade and the turbine housing. In this paper, a computational study is performed concerning the effect that the shroud design of the turbine working blade exerts on the flow structure in the vicinity of the shroud and on the efficiency of the stage as a whole. The analysis of the flow structure has shown that a significant part of the losses when using shrouds is associated with the formation of vortex zones in the cavities on the turbine housing before the shrouds, between the ribs of the shrouds, and in the cavities at the outlet behind the shrouds. All the investigated variants of partial shrouding are inferior in efficiency to stages with shrouds that completely cover the tip section of the working blade. The stage with an unshrouded working blade was most efficient at the values of the relative tip clearance

  18. Computational Properties of the Hippocampus Increase the Efficiency of Goal-Directed Foraging through Hierarchical Reinforcement Learning

    Directory of Open Access Journals (Sweden)

    Eric Chalmers

    2016-12-01

    Full Text Available The mammalian brain is thought to use a version of Model-based Reinforcement Learning (MBRL to guide goal-directed behavior, wherein animals consider goals and make plans to acquire desired outcomes. However, conventional MBRL algorithms do not fully explain animals’ ability to rapidly adapt to environmental changes, or learn multiple complex tasks. They also require extensive computation, suggesting that goal-directed behavior is cognitively expensive. We propose here that key features of processing in the hippocampus support a flexible MBRL mechanism for spatial navigation that is computationally efficient and can adapt quickly to change. We investigate this idea by implementing a computational MBRL framework that incorporates features inspired by computational properties of the hippocampus: a hierarchical representation of space, forward sweeps through future spatial trajectories, and context-driven remapping of place cells. We find that a hierarchical abstraction of space greatly reduces the computational load (mental effort required for adaptation to changing environmental conditions, and allows efficient scaling to large problems. It also allows abstract knowledge gained at high levels to guide adaptation to new obstacles. Moreover, a context-driven remapping mechanism allows learning and memory of multiple tasks. Simulating dorsal or ventral hippocampal lesions in our computational framework qualitatively reproduces behavioral deficits observed in rodents with analogous lesions. The framework may thus embody key features of how the brain organizes model-based RL to efficiently solve navigation and other difficult tasks.

  19. Stabilization void-fill encapsulation high-efficiency particulate filters

    International Nuclear Information System (INIS)

    Alexander, R.G.; Stewart, W.E.; Phillips, S.J.; Serkowski, M.M.; England, J.L.; Boynton, H.C.

    1994-05-01

    This report discusses high-efficiency particulate air (HEPA) filter systems that are contaminated with radionuclides; these systems are part of the nuclear fuel processing operations conducted by the US Department of Energy (DOE) and require replacement and safe, efficient disposal for plant safety. Two K-3 HEPA filters were removed from service, placed in burial boxes, buried, and safely and efficiently stabilized remotely, which reduced radiation exposure to personnel and the environment.

  20. Design of High Efficiency Illumination for LED Lighting

    OpenAIRE

    Chang, Yong-Nong; Cheng, Hung-Liang; Kuo, Chih-Ming

    2013-01-01

    A high efficiency illumination system for LED street lighting is proposed. For energy saving, this paper uses a Class-E resonant inverter as the main electric circuit to improve efficiency. In addition, single dimming control has the best efficiency, the simplest control scheme, and the lowest circuit cost among the various dimming techniques. Multiple serial-connected transformers are used to drive the LED strings, as they provide galvanic isolation and have the advantage of good current distribution against de...

  1. Scalable optical packet switch architecture for low latency and high load computer communication networks

    NARCIS (Netherlands)

    Calabretta, N.; Di Lucente, S.; Nazarathy, Y.; Raz, O.; Dorren, H.J.S.

    2011-01-01

    High-performance computers and data centers require PetaFlop/s processing speed and Petabyte storage capacity with thousands of low-latency short link interconnections between computer nodes. Switch matrices that operate transparently in the optical domain are a potential way to efficiently

  2. Efficient conjugate gradient algorithms for computation of the manipulator forward dynamics

    Science.gov (United States)

    Fijany, Amir; Scheid, Robert E.

    1989-01-01

    The applicability of conjugate gradient algorithms for computation of the manipulator forward dynamics is investigated. The redundancies in the previously proposed conjugate gradient algorithm are analyzed. A new version is developed which, by avoiding these redundancies, achieves a significantly greater efficiency. A preconditioned conjugate gradient algorithm is also presented. A diagonal matrix whose elements are the diagonal elements of the inertia matrix is proposed as the preconditioner. In order to increase the computational efficiency, an algorithm is developed which exploits the synergism between the computation of the diagonal elements of the inertia matrix and that required by the conjugate gradient algorithm.
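
    A minimal sketch (Python/NumPy) of a preconditioned conjugate gradient solver with the diagonal preconditioner proposed in the abstract; here it is run on a generic symmetric positive-definite matrix standing in for the manipulator inertia matrix:

        import numpy as np

        def pcg(a, b, tol=1e-10, max_iter=200):
            # Conjugate gradient preconditioned with M = diag(A) (Jacobi preconditioner).
            m_inv = 1.0 / np.diag(a)
            x = np.zeros_like(b)
            r = b - a @ x                      # residual
            z = m_inv * r                      # preconditioned residual
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                ap = a @ p
                alpha = rz / (p @ ap)
                x += alpha * p
                r -= alpha * ap
                if np.linalg.norm(r) < tol:
                    break
                z = m_inv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # Toy usage on a symmetric positive-definite "inertia-like" matrix:
        rng = np.random.default_rng(0)
        q = rng.standard_normal((6, 6))
        a = q @ q.T + 6 * np.eye(6)
        b = rng.standard_normal(6)
        print(np.allclose(a @ pcg(a, b), b))   # True on convergence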

  3. On efficiently computing multigroup multi-layer neutron reflection and transmission conditions

    International Nuclear Information System (INIS)

    Abreu, Marcos P. de

    2007-01-01

    In this article, we present an algorithm for efficient computation of multigroup discrete ordinates neutron reflection and transmission conditions, which replace a multi-layered boundary region in neutron multiplication eigenvalue computations with no spatial truncation error. In contrast to the independent layer-by-layer algorithm considered thus far in our computations, the algorithm here is based on an inductive approach developed by the present author for deriving neutron reflection and transmission conditions for a nonactive boundary region with an arbitrary number of arbitrarily thick layers. With this new algorithm, we were able to increase significantly the computational efficiency of our spectral diamond-spectral Green's function method for solving multigroup neutron multiplication eigenvalue problems with multi-layered boundary regions. We provide comparative results for a two-group reactor core model to illustrate the increased efficiency of our spectral method, and we conclude this article with a number of general remarks. (author)

  4. Efficient visualization of high-throughput targeted proteomics experiments: TAPIR.

    Science.gov (United States)

    Röst, Hannes L; Rosenberger, George; Aebersold, Ruedi; Malmström, Lars

    2015-07-15

    Targeted mass spectrometry comprises a set of powerful methods to obtain accurate and consistent protein quantification in complex samples. To fully exploit these techniques, a cross-platform and open-source software stack based on standardized data exchange formats is required. We present TAPIR, a fast and efficient Python visualization software for chromatograms and peaks identified in targeted proteomics experiments. The input formats are open, community-driven standardized data formats (mzML for raw data storage and TraML encoding the hierarchical relationships between transitions, peptides and proteins). TAPIR is scalable to proteome-wide targeted proteomics studies (as enabled by SWATH-MS), allowing researchers to visualize high-throughput datasets. The framework integrates well with existing automated analysis pipelines and can be extended beyond targeted proteomics to other types of analyses. TAPIR is available for all computing platforms under the 3-clause BSD license at https://github.com/msproteomicstools/msproteomicstools. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  5. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  6. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  7. The Energy Efficiency of High Intensity Proton Driver Concepts

    Energy Technology Data Exchange (ETDEWEB)

    Yakovlev, Vyacheslav [Fermilab; Grillenberger, Joachim [PSI, Villigen; Kim, Sang-Ho [ORNL, Oak Ridge (main); Seidel, Mike [PSI, Villigen; Yoshii, Masahito [JAEA, Ibaraki

    2017-05-01

    For MW-class proton driver accelerators, energy efficiency is an important aspect. The talk reviews the efficiency of different accelerator concepts, including superconducting and normal-conducting linacs, rapid cycling synchrotrons, and cyclotrons; the potential of these concepts for very high beam power is discussed.

  8. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both private and public sectors will require robust technologies to protect its computing infrastructure. The research outcomes from this project try to address all these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.

  9. FPGA Compute Acceleration for High-Throughput Data Processing in High-Energy Physics Experiments

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    The upgrades of the four large experiments of the LHC at CERN in the coming years will result in a huge increase in data bandwidth for each experiment, which needs to be processed very efficiently. For example, the LHCb experiment will upgrade its detector in 2019/2020 to a 'triggerless' readout scheme, where all of the readout electronics and several sub-detector parts will be replaced. The new readout electronics will be able to read out the detector at 40 MHz. This increases the data bandwidth from the detector down to the event filter farm to 40 TBit/s, which must be processed to select the interesting proton-proton collisions for later storage. Designing the architecture of a computing farm that can process this amount of data as efficiently as possible is a challenging task, and several compute accelerator technologies are being considered.    In the high performance computing sector more and more FPGA compute accelerators are being used to improve the compute performance and reduce the...

  10. Computer model of copper resistivity will improve the efficiency of field-compression devices

    International Nuclear Information System (INIS)

    Burgess, T.J.

    1977-01-01

    By detonating a ring of high explosive around an existing magnetic field, we can, under certain conditions, compress the field and multiply its strength tremendously. In this way, we can duplicate for a fraction of a second the extreme pressures that normally exist only in the interior of stars and planets. Under such pressures, materials may exhibit behavior that will confirm or alter current notions about the fundamental structure of matter and the ongoing processes in planetary interiors. However, we cannot design an efficient field-compression device unless we can calculate the electrical resistivity of certain basic metal components, which interact with the field. To aid in the design effort, we have developed a computer code that calculates the resistivity of copper and other metals over the wide range of temperatures and pressures found in a field-compression device

  11. High Efficiency Lighting with Integrated Adaptive Control (HELIAC), Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed project is the continued development of the High Efficiency Lighting with Integrated Adaptive Control (HELIAC) system. Solar radiation is not a viable...

  12. High Efficiency Lighting with Integrated Adaptive Control (HELIAC), Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation of the proposed project is the development of High Efficiency Lighting with Integrated Adaptive Control (HELIAC) systems to drive plant growth. Solar...

  13. Efficiency of poly-generating high temperature fuel cells

    Energy Technology Data Exchange (ETDEWEB)

    Margalef, Pere; Brown, Tim; Brouwer, Jacob; Samuelsen, Scott [National Fuel Cell Research Center (NFCRC), University of California, Irvine, CA 92697-3550 (United States)

    2011-02-15

    High temperature fuel cells can be designed and operated to poly-generate electricity, heat, and useful chemicals (e.g., hydrogen) in a variety of configurations. The highly integrated and synergistic nature of poly-generating high temperature fuel cells, however, precludes a simple definition of efficiency for analysis and comparison of performance to traditional methods. There is a need to develop and define a methodology to calculate each of the co-product efficiencies that is useful for comparative analyses. Methodologies for calculating poly-generation efficiencies are defined and discussed. The methodologies are applied to analysis of a Hydrogen Energy Station (H{sub 2}ES) showing that high conversion efficiency can be achieved for poly-generation of electricity and hydrogen. (author)

  14. An Improved, Highly Efficient Method for the Synthesis of Bisphenols

    Directory of Open Access Journals (Sweden)

    L. S. Patil

    2011-01-01

    Full Text Available An efficient synthesis of bisphenols by condensation of substituted phenols with the corresponding cyclic ketones in the presence of cetyltrimethylammonium chloride and 3-mercaptopropionic acid as a catalyst is described; the products are obtained in extremely high purity and yield.

  15. High Efficiency S-Band 20 Watt Amplifier

    Data.gov (United States)

    National Aeronautics and Space Administration — This project includes the design and build of a prototype 20 W, high efficiency, S-Band amplifier.   The design will incorporate the latest semiconductor technology,...

  16. Personal computers in high energy physics

    International Nuclear Information System (INIS)

    Quarrie, D.R.

    1987-01-01

    The role of personal computers within HEP is expanding as their capabilities increase and their cost decreases. Already they offer greater flexibility than many low-cost graphics terminals for a comparable cost and in addition they can significantly increase the productivity of physicists and programmers. This talk will discuss existing uses for personal computers and explore possible future directions for their integration into the overall computing environment. (orig.)

  17. Energy- and cost-efficient lattice-QCD computations using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Bach, Matthias

    2014-07-01

    Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large scale experiments like the Large Hadron Collider (LHC) at CERN and in the future at the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD for high energies. Studies from first principles are possible via a discretization onto a Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth limited. Thus - despite the complexity of LQCD applications - it led to the development of several specialized compute platforms and influenced the development of others. However, in recent years General-Purpose computation on Graphics Processing Units (GPGPU) came up as a new means for parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a mass-market product. This promises advantages in both the pace at which higher-performing hardware becomes available and its price. CL2QCD is an OpenCL based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision D kernel for a single GPU, achieving 120 GFLOPS. D, the most compute-intensive kernel in LQCD simulations, is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further, analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system. LQCD

  18. Energy- and cost-efficient lattice-QCD computations using graphics processing units

    International Nuclear Information System (INIS)

    Bach, Matthias

    2014-01-01

    Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large scale experiments like the Large Hadron Collider (LHC) at CERN and in the future at the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD for high energies. Studies from first principles are possible via a discretization onto a Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth limited. Thus - despite the complexity of LQCD applications - it led to the development of several specialized compute platforms and influenced the development of others. However, in recent years General-Purpose computation on Graphics Processing Units (GPGPU) came up as a new means for parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a mass-market product. This promises advantages in both the pace at which higher-performing hardware becomes available and its price. CL2QCD is an OpenCL based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision D kernel for a single GPU, achieving 120 GFLOPS. D, the most compute-intensive kernel in LQCD simulations, is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further, analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system. LQCD

  19. Process development for high-efficiency silicon solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Gee, J.M.; Basore, P.A.; Buck, M.E.; Ruby, D.S.; Schubert, W.K.; Silva, B.L.; Tingley, J.W.

    1991-12-31

    Fabrication of high-efficiency silicon solar cells in an industrial environment requires a different optimization than in a laboratory environment. Strategies are presented for process development of high-efficiency silicon solar cells, with a goal of simplifying technology transfer into an industrial setting. The strategies emphasize the use of statistical experimental design for process optimization, and the use of baseline processes and cells for process monitoring and quality control. 8 refs.

  20. Highly efficient procedure for the transesterification of vegetable oil

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Xuezheng; Gao, Shan; He, Mingyuan [Shanghai Key Laboratory of Green Chemistry and Chemical Process, Department of Chemistry, East China Normal University, Shanghai 200062 (China); Yang, Jianguo [Shanghai Key Laboratory of Green Chemistry and Chemical Process, Department of Chemistry, East China Normal University, Shanghai 200062 (China); Energy Institute, Department of Materials Science and Engineering, Pennsylvania State University, University Park, PA 16802 (United States)

    2009-10-15

    A highly efficient procedure has been developed for the synthesis of biodiesel from vegetable oil and methanol. KF/MgO was selected as the most efficient catalyst for the reaction, giving a yield of 99.3%. Operational simplicity, no need for purification of the raw vegetable oil, low cost of the catalyst, high activity, absence of saponification, and reusability are the key features of this methodology. (author)

  1. The photonic nanowire: A highly efficient single-photon source

    DEFF Research Database (Denmark)

    Gregersen, Niels

    2014-01-01

    The photonic nanowire represents an attractive platform for a quantum light emitter. However, careful optical engineering using the modal method, which elegantly allows access to all relevant physical parameters, is crucial to ensure high efficiency.

  2. Highly Efficient Spontaneous Emission from Self-Assembled Quantum Dots

    DEFF Research Database (Denmark)

    Johansen, Jeppe; Lund-Hansen, Toke; Hvam, Jørn Märcher

    2006-01-01

    We present time resolved measurements of spontaneous emission (SE) from InAs/GaAs quantum dots (QDs). The measurements are interpreted using Fermi's Golden Rule and from this analysis we establish the parameters for high quantum efficiency.

  3. Global climate change: Mitigation opportunities high efficiency large chiller technology

    Energy Technology Data Exchange (ETDEWEB)

    Stanga, M.V.

    1997-12-31

    This paper, comprising presentation viewgraphs, examines the impact of high-efficiency large chiller technology on world electricity consumption and carbon dioxide emissions. Background data are summarized, and sample calculations are presented. Calculations show that presently available high-energy-efficiency chiller technology has the ability to substantially reduce energy consumption from large chillers. If this technology is widely implemented on a global basis, it could reduce carbon dioxide emissions by 65 million tons by 2010.

  4. Use of Debye's series to determine the optimal edge-effect terms for computing the extinction efficiencies of spheroids.

    Science.gov (United States)

    Lin, Wushao; Bi, Lei; Liu, Dong; Zhang, Kejun

    2017-08-21

    The extinction efficiencies of atmospheric particles are essential to determining radiation attenuation and thus are fundamentally related to atmospheric radiative transfer. The extinction efficiencies can also be used to retrieve particle sizes or refractive indices through particle characterization techniques. This study first uses the Debye series to improve the accuracy of high-frequency extinction formulae for spheroids in the context of complex angular momentum theory by determining an optimal number of edge-effect terms. We show that the optimal edge-effect terms can be accurately obtained by comparing the results from the approximate formula with their counterparts computed from the invariant imbedding Debye series and T-matrix methods. An invariant imbedding T-matrix method is employed for particles with strong absorption, in which case the extinction efficiency is equivalent to two plus the edge-effect efficiency. For weakly absorptive or non-absorptive particles, the T-matrix results contain the interference between the diffraction and higher-order transmitted rays. Therefore, the Debye series was used to compute the edge-effect efficiency by separating the interference of the transmitted rays from the extinction efficiency. We found that the optimal number strongly depends on the refractive index and is relatively insensitive to the particle geometry and size parameter. By building a table of optimal numbers of edge-effect terms, we developed an efficient and accurate extinction simulator that has been fully tested for randomly oriented spheroids with various aspect ratios and a wide range of refractive indices.
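
    Schematically, the decomposition described above can be written as follows; this is a hedged sketch in which the coefficients and the optimal truncation N (which the paper tabulates against refractive index) are left abstract:

        % Extinction efficiency at large size parameter x: the geometric-optics
        % limit of 2 plus a truncated edge-effect series. The leading exponent 2/3
        % is the standard complex-angular-momentum result; the c_j, the higher
        % exponents, and N depend on the refractive index (schematic form only).
        Q_{\mathrm{ext}}(x) \approx 2 + Q_{\mathrm{edge}}(x),
        \qquad
        Q_{\mathrm{edge}}(x) \approx \sum_{j=1}^{N} c_j\, x^{-\gamma_j},
        \quad \gamma_1 = \tfrac{2}{3} < \gamma_2 < \cdots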

  5. Efficient high-precision matrix algebra on parallel architectures for nonlinear combinatorial optimization

    KAUST Repository

    Gunnels, John; Lee, Jon; Margulies, Susan

    2010-01-01

    We provide a first demonstration of the idea that matrix-based algorithms for nonlinear combinatorial optimization problems can be efficiently implemented. Such algorithms were mainly conceived by theoretical computer scientists for proving efficiency. We are able to demonstrate the practicality of our approach by developing an implementation on a massively parallel architecture, and exploiting scalable and efficient parallel implementations of algorithms for ultra high-precision linear algebra. Additionally, we have delineated and implemented the necessary algorithmic and coding changes required in order to address problems several orders of magnitude larger, dealing with the limits of scalability from memory footprint, computational efficiency, reliability, and interconnect perspectives. © Springer and Mathematical Programming Society 2010.

  6. Efficient high-precision matrix algebra on parallel architectures for nonlinear combinatorial optimization

    KAUST Repository

    Gunnels, John

    2010-06-01

    We provide a first demonstration of the idea that matrix-based algorithms for nonlinear combinatorial optimization problems can be efficiently implemented. Such algorithms were mainly conceived by theoretical computer scientists for proving efficiency. We are able to demonstrate the practicality of our approach by developing an implementation on a massively parallel architecture, and exploiting scalable and efficient parallel implementations of algorithms for ultra high-precision linear algebra. Additionally, we have delineated and implemented the necessary algorithmic and coding changes required in order to address problems several orders of magnitude larger, dealing with the limits of scalability from memory footprint, computational efficiency, reliability, and interconnect perspectives. © Springer and Mathematical Programming Society 2010.

  7. High efficiency USC power plant - present status and future potential

    Energy Technology Data Exchange (ETDEWEB)

    Blum, R. [Faelleskemikerne I/S Fynsvaerket (Denmark)]; Hald, J. [Elsam/Elkraft/TU Denmark (Denmark)]

    1998-12-31

    Increasing demand for energy production with low impact on the environment and minimised fuel consumption can be met by highly efficient coal-fired power plants with advanced steam parameters. An important key to this improvement is the development of high temperature materials with optimised mechanical strength. Based on the results of more than ten years of development, a coal-fired power plant with an efficiency above 50% can now be realised. Future developments focus on materials which enable an efficiency of 52-55%. (orig.) 25 refs.

  8. High efficiency USC power plant - present status and future potential

    Energy Technology Data Exchange (ETDEWEB)

    Blum, R [Faelleskemikerne I/S Fynsvaerket (Denmark)]; Hald, J [Elsam/Elkraft/TU Denmark (Denmark)]

    1999-12-31

    Increasing demand for energy production with low impact on the environment and minimised fuel consumption can be met by highly efficient coal-fired power plants with advanced steam parameters. An important key to this improvement is the development of high temperature materials with optimised mechanical strength. Based on the results of more than ten years of development, a coal-fired power plant with an efficiency above 50% can now be realised. Future developments focus on materials which enable an efficiency of 52-55%. (orig.) 25 refs.

  9. High resolution computed tomography of positron emitters

    International Nuclear Information System (INIS)

    Derenzo, S.E.; Budinger, T.F.; Cahoon, J.L.; Huesman, R.H.; Jackson, H.G.

    1976-10-01

    High resolution computed transaxial radionuclide tomography has been performed on phantoms containing positron-emitting isotopes. The imaging system consisted of two opposing groups of eight NaI(Tl) crystals, 8 mm x 30 mm x 50 mm deep, and the phantoms were rotated to measure coincident events along 8960 projection integrals as they would be measured by a 280-crystal ring system now under construction. The spatial resolution in the reconstructed images is 7.5 mm FWHM at the center of the ring and approximately 11 mm FWHM at a radius of 10 cm. We present measurements of imaging and background rates under various operating conditions. Based on these measurements, the full 280-crystal system will image 10,000 events per sec with 400 μCi in a section 1 cm thick and 20 cm in diameter. We show that 1.5 million events are sufficient to reliably image 3.5-mm hot spots with 14-mm center-to-center spacing and isolated 9-mm diameter cold spots in phantoms 15 to 20 cm in diameter.

  10. A primary study on the increasing of efficiency in the computer cooling system by means of external air

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S. H.; Kim, M. H. [Silla University, Busan (Korea, Republic of)

    2009-07-01

    In recent years, as the capabilities of personal computers have continued to increase (higher performance, high quality, and high-resolution imaging), the components of computer systems have come to produce large amounts of heat during operation. This study analyzes and investigates the capability and efficiency of the cooling system inside the computer, based on the Central Processing Unit (CPU) and power-supply cooling fans. The cooling capability was increased by means of a structure that produces different air pressures in an air-inflow tube. Consequently, compared with a general personal computer, the temperatures of the tested CPU, the interior of the case, and the heat sink were lower by 5 °C, 2.5 °C, and 7 °C, respectively. In addition, the fan speed was lower by 250 revolutions per minute (RPM) after 1 hour of operation. This research explored the possibility of enhancing the effective cooling of high-performance computer systems.

  11. Concept for high speed computer printer

    Science.gov (United States)

    Stephens, J. W.

    1970-01-01

    Printer uses Kerr cell as light shutter for controlling the print on photosensitive paper. Applied to output data transfer, the information transfer rate of graphic computer printers could be increased to speeds approaching the data transfer rate of computer central processors (5000 to 10,000 lines per minute).

  12. Some computational challenges of developing efficient parallel algorithms for data-dependent computations in thermal-hydraulics supercomputer applications

    International Nuclear Information System (INIS)

    Woodruff, S.B.

    1994-01-01

    The Transient Reactor Analysis Code (TRAC), which features a two-fluid treatment of thermal-hydraulics, is designed to model transients in water reactors and related facilities. One of the major computational costs associated with TRAC and similar codes is calculating constitutive coefficients. Although the formulations for these coefficients are local, the costs are flow-regime- or data-dependent; i.e., the computations needed for a given spatial node often vary widely as a function of time. Consequently, a fixed, uniform assignment of nodes to parallel processors will result in degraded computational efficiency due to the poor load balancing. A standard method for treating data-dependent models on vector architectures has been to use gather operations (or indirect addressing) to sort the nodes into subsets that (temporarily) share a common computational model. However, this method is not effective on distributed memory data parallel architectures, where indirect addressing involves expensive communication overhead. Another serious problem with this method involves software engineering challenges in the areas of maintainability and extensibility. For example, an implementation that was hand-tuned to achieve good computational efficiency would have to be rewritten whenever the decision tree governing the sorting was modified. Using an example based on the calculation of the wall-to-liquid and wall-to-vapor heat-transfer coefficients for three nonboiling flow regimes, we describe how the use of the Fortran 90 WHERE construct and automatic inlining of functions can be used to ameliorate this problem while improving both efficiency and software engineering. Unfortunately, a general automatic solution to the load-balancing problem associated with data-dependent computations is not yet available for massively parallel architectures. We discuss why developers should either wait for such solutions or consider alternative numerical algorithms, such as a neural network representation, that do not exhibit load-balancing problems.
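
    The Fortran 90 WHERE construct mentioned above evaluates a computation only where a mask holds, which keeps the instruction stream uniform across processors. The following NumPy sketch is an analogue of that idea, not the TRAC code itself; the flow-regime thresholds and coefficient formulas are invented for illustration.

        import numpy as np

        # Mask-based, data-parallel evaluation of flow-regime-dependent
        # coefficients, analogous to the Fortran 90 WHERE construct.
        void_fraction = np.random.rand(10_000)   # one value per spatial node
        h = np.empty_like(void_fraction)         # heat-transfer coefficient

        bubbly = void_fraction < 0.3
        slug = (void_fraction >= 0.3) & (void_fraction < 0.7)
        annular = void_fraction >= 0.7

        # Each regime's (invented) model is applied element-wise under its mask,
        # avoiding explicit gather/scatter of nodes into per-regime subsets.
        h[bubbly] = 1000.0 * (1.0 - void_fraction[bubbly])
        h[slug] = 600.0 + 200.0 * void_fraction[slug]
        h[annular] = 400.0 * void_fraction[annular] ** 2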

  13. An innovative computationally efficient hydromechanical coupling approach for fault reactivation in geological subsurface utilization

    Science.gov (United States)

    Adams, M.; Kempka, T.; Chabab, E.; Ziegler, M.

    2018-02-01

    Estimating the efficiency and sustainability of geological subsurface utilization, e.g., Carbon Capture and Storage (CCS), requires an integrated risk assessment approach that considers the coupled processes involved, among others the potential reactivation of existing faults. In this context, hydraulic and mechanical parameter uncertainties as well as different injection rates have to be considered and quantified to elaborate reliable environmental impact assessments. Consequently, the required sensitivity analyses consume significant computational time due to the high number of realizations that have to be carried out. Because of the high computational costs of two-way coupled simulations in large-scale 3D multiphase fluid flow systems, these are not applicable for the purpose of uncertainty and risk assessments. Hence, an innovative semi-analytical hydromechanical coupling approach for hydraulic fault reactivation is introduced. This approach determines the void ratio evolution in representative fault elements using one preliminary base simulation, considering one model geometry and one set of hydromechanical parameters. The void ratio development is then approximated and related to one reference pressure at the base of the fault. The parametrization of the resulting functions is directly implemented into a multiphase fluid flow simulator to carry out the semi-analytical coupling for the simulation of hydromechanical processes. Hereby, the iterative parameter exchange between the multiphase and mechanical simulators is omitted, since the update of porosity and permeability is controlled by one reference pore pressure at the fault base. The suggested procedure is capable of reducing the computational time required by coupled hydromechanical simulations of a multitude of injection rates by a factor of up to 15.
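
    A minimal sketch of the coupling idea described above, in Python: porosity and permeability of a fault element are updated from a single reference pore pressure through a void-ratio function fitted to one coupled base simulation. The fitted coefficients and the Kozeny-Carman-type permeability update are assumptions standing in for the paper's parametrization.

        # Semi-analytical one-way hydromechanical coupling (illustrative values).
        a, b = 0.30, 1.5e-8      # assumed linear fit: e(p) = a + b * (p - p0)
        p0 = 20.0e6              # reference pressure of the base run [Pa]
        k0, phi0 = 1e-15, 0.23   # initial permeability [m^2] and porosity

        def update_fault_properties(p_ref):
            """Porosity/permeability from the reference pressure at the fault base."""
            e = a + b * (p_ref - p0)    # fitted void ratio
            phi = e / (1.0 + e)         # porosity from void ratio
            k = k0 * (phi / phi0) ** 3 * ((1.0 - phi0) / (1.0 - phi)) ** 2
            return phi, k               # Kozeny-Carman-type update (assumption)

        print(update_fault_properties(22.5e6))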

  14. Efficient 2-D DCT Computation from an Image Representation Point of View

    OpenAIRE

    Papakostas, G.A.; Koulouriotis, D.E.; Karakasis, E.G.

    2009-01-01

    A novel methodology that enables the computation of 2-D DCT coefficients in gray-scale as well as binary images, at high computation rates, was presented in the previous sections. Through a new image representation scheme, called ISR (Image Slice Representation), the 2-D DCT coefficients can be computed in significantly reduced time, with the same accuracy.
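
    For reference, the coefficients in question are those of the standard 2-D DCT of an N x N image f(x, y); the ISR scheme changes how this double sum is evaluated, not its definition:

        % Standard 2-D DCT of an N x N image f(x, y).
        C(u,v) = \alpha(u)\,\alpha(v) \sum_{x=0}^{N-1}\sum_{y=0}^{N-1} f(x,y)\,
                 \cos\!\left[\frac{(2x+1)u\pi}{2N}\right]
                 \cos\!\left[\frac{(2y+1)v\pi}{2N}\right],
        \qquad
        \alpha(0)=\sqrt{1/N}, \quad \alpha(u>0)=\sqrt{2/N}.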

  15. Charge transport in highly efficient iridium cored electrophosphorescent dendrimers

    Science.gov (United States)

    Markham, Jonathan P. J.; Samuel, Ifor D. W.; Lo, Shih-Chun; Burn, Paul L.; Weiter, Martin; Bässler, Heinz

    2004-01-01

    Electrophosphorescent dendrimers are promising materials for highly efficient light-emitting diodes. They consist of a phosphorescent core onto which dendritic groups are attached. Here, we present an investigation into the optical and electronic properties of highly efficient phosphorescent dendrimers. The effect of dendrimer structure on charge transport and optical properties is studied using temperature-dependent charge-generation-layer time-of-flight measurements and current-voltage (I-V) analysis. A model is used to explain trends seen in the I-V characteristics. We demonstrate that fine-tuning the mobility by chemical structure is possible in these dendrimers and show that this can lead to highly efficient bilayer dendrimer light-emitting diodes with neat emissive layers. Power efficiencies of 20 lm/W were measured for devices containing a second-generation (G2) Ir(ppy)3 dendrimer with a 1,3,5-tris(2-N-phenylbenzimidazolyl)benzene electron transport layer.

  16. Very-High Efficiency, High Power Laser Diodes, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — AdTech Photonics, in collaboration with the Center for Advanced Studies in Photonics Research (CASPR) at UMBC, is pleased to submit this proposal entitled "Very-High...

  17. [Characteristics of phosphorus uptake and use efficiency of rice with high yield and high phosphorus use efficiency].

    Science.gov (United States)

    Li, Li; Zhang, Xi-Zhou; Li, Ting-Xuan; Yu, Hai-Ying; Ji, Lin; Chen, Guang-Deng

    2014-07-01

    A total of twenty-seven middle-maturing rice varieties were used as parent materials and divided into four types based on P use efficiency for grain yield, in a 2011 field experiment with normal phosphorus (P) application. A rice variety with high yield and high P efficiency was identified by a pot experiment with normal and low P applications, and the contribution rates of the various P efficiencies to yield were investigated in 2012. There were significant genotype differences in yield and P efficiency among the test materials. GRLu17/AiTTP//Lu17_2 (QR20) was identified as a variety with high yield and high P efficiency, and its yields at the low and normal rates of P application were 1.96 and 1.92 times that of Yuxiang B, respectively. The contribution rate of P accumulation to yield was greater than those of P grain production efficiency and P harvest index across the field and pot experiments. The contribution rates of P accumulation and P grain production efficiency to yield were not significantly different under the normal P condition, whereas obvious differences were observed under the low P condition (66.5% and 26.6%). The minimal contribution to yield was from the P harvest index (11.8%). Under the normal P condition, the contribution rates of P accumulation to yield and to P harvest index were the highest at the jointing-heading stage, at 93.4% and 85.7%, respectively. In addition, the contribution rate of P accumulation to grain production efficiency was 41.8%. Under the low P condition, the maximal contribution rates of P accumulation to yield and grain production efficiency were observed at the tillering-jointing stage, at 56.9% and 20.1%, respectively. Furthermore, the contribution rate of P accumulation to P harvest index was 16.0%. The yield, P accumulation, and P harvest index of QR20 significantly increased under the normal P condition by 20.6%, 18.1% and 18.2%, respectively, compared with those in the low P condition. The rank of the contribution rates of P

  18. High Efficiency of Two Efficient QSDC with Authentication Is at the Cost of Their Security

    International Nuclear Information System (INIS)

    Su-Juan, Qin; Qiao-Yan, Wen; Luo-Ming, Meng; Fu-Chen, Zhu

    2009-01-01

    Two efficient protocols of quantum secure direct communication with authentication [Chin. Phys. Lett. 25 (2008) 2354] were recently proposed by Liu et al. to improve the efficiency of two protocols presented in [Phys. Rev. A 75 (2007) 026301] by four Pauli operations. We show that the high efficiency of the two protocols comes at the expense of their security. The authenticator Trent can obtain half of the secret by a particular attack strategy in the first protocol. In the second protocol, not only Trent but also an outside eavesdropper can elicit half of the information about the secret from the public declaration.

  19. High-reliability computing for the smarter planet

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Graham, Paul; Manuzzato, Andrea; Dehon, Andre

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability increases.

  20. High-reliability computing for the smarter planet

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Heather M [Los Alamos National Laboratory]; Graham, Paul [Los Alamos National Laboratory]; Manuzzato, Andrea [UNIV OF PADOVA]; Dehon, Andre [UNIV OF PENN]; Carter, Nicholas [INTEL CORPORATION]

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability increases.

  1. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    DEFF Research Database (Denmark)

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

    Internationally, there is a growing awareness of the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thinking...

  2. Study of application technology of ultra-high speed computer to the elucidation of complex phenomena

    International Nuclear Information System (INIS)

    Sekiguchi, Tomotsugu

    1996-01-01

    As the first step in constructing application technology for applying ultra-high speed computers to the elucidation of complex phenomena, the basic design of a numerical information library in a decentralized computer network is explained. Establishment of the system makes it possible to construct an efficient application environment for ultra-high speed computer systems that is scalable across different computing systems. We named the system Ninf (Network Information Library for High Performance Computing). The application technology of the library is summarized as follows: the application technology of the library under a distributed environment, numeric constants, retrieval of values, a library of special functions, a computing library, the Ninf library interface, the Ninf remote library, and registration. With the system, users are able to use programs that concentrate numerical analysis technology with high precision, reliability and speed. (S.Y.)

  3. DOE research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  4. Toward efficient computation of the expected relative entropy for nonlinear experimental design

    International Nuclear Information System (INIS)

    Coles, Darrell; Prange, Michael

    2012-01-01

    The expected relative entropy between prior and posterior model-parameter distributions is a Bayesian objective function in experimental design theory that quantifies the expected gain in information of an experiment relative to a previous state of knowledge. The expected relative entropy is a preferred measure of experimental quality because it can handle nonlinear data-model relationships, an important fact due to the ubiquity of nonlinearity in science and engineering and its effects on post-inversion parameter uncertainty. This objective function does not necessarily yield experiments that mediate well-determined systems, but, being a Bayesian quality measure, it rigorously accounts for prior information which constrains model parameters that may be only weakly constrained by the optimized dataset. Historically, use of the expected relative entropy has been limited by the computing and storage requirements associated with high-dimensional numerical integration. Herein, a bifocal algorithm is developed that makes these computations more efficient. The algorithm is demonstrated on a medium-sized problem of sampling relaxation phenomena and on a large problem of source–receiver selection for a 2D vertical seismic profile. The method is memory intensive but workarounds are discussed. (paper)
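
    The objective described above can be prototyped directly: the expected relative entropy is the average, over data simulated from the prior predictive, of the Kullback-Leibler divergence between posterior and prior, and a brute-force nested Monte Carlo estimator makes the cost concrete. The toy linear-Gaussian model, prior, and sample sizes below are assumptions; the paper's bifocal algorithm is a more efficient replacement for exactly this kind of estimator.

        import numpy as np

        rng = np.random.default_rng(0)

        def expected_information_gain(g, sigma, n_outer=2000, n_inner=2000):
            """E[log p(y|theta) - log p(y)] for the toy design y = g*theta + noise."""
            theta = rng.normal(0.0, 1.0, n_outer)             # prior draws
            y = g * theta + rng.normal(0.0, sigma, n_outer)   # simulated data
            norm = sigma * np.sqrt(2.0 * np.pi)
            gain = 0.0
            for yi, ti in zip(y, theta):
                t_in = rng.normal(0.0, 1.0, n_inner)          # fresh prior draws
                like = np.exp(-0.5 * ((yi - g * t_in) / sigma) ** 2) / norm
                log_evidence = np.log(like.mean())            # inner MC for p(y)
                log_like = -0.5 * ((yi - g * ti) / sigma) ** 2 - np.log(norm)
                gain += (log_like - log_evidence) / n_outer
            return gain

        # A more informative design (larger |g|) yields a larger expected gain.
        print(expected_information_gain(0.5, 1.0), expected_information_gain(2.0, 1.0))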

  5. Computationally Efficient 2D DOA Estimation for L-Shaped Array with Unknown Mutual Coupling

    Directory of Open Access Journals (Sweden)

    Yang-Yang Dong

    2018-01-01

    Although an L-shaped array can provide good angle estimation performance and is easy to implement, its two-dimensional (2D) direction-of-arrival (DOA) estimation performance degrades greatly in the presence of mutual coupling. To deal with the mutual coupling effect, a novel 2D DOA estimation method for L-shaped arrays with low computational complexity is developed in this paper. First, we generalize the conventional mutual coupling model for L-shaped arrays and compensate the mutual coupling blindly by sacrificing a few sensors as auxiliary elements. Then we apply the propagator method twice to mitigate the effect of strong source-signal correlation. Finally, the estimates of azimuth and elevation angles are obtained simultaneously, without pair matching, via the complex eigenvalue technique. Compared with existing methods, the proposed method is computationally efficient, requiring no spectrum search or polynomial rooting, and also has fine angle estimation performance for highly correlated source signals. Theoretical analysis and simulation results demonstrate the effectiveness of the proposed method.

  6. Energy-Efficient FPGA-Based Parallel Quasi-Stochastic Computing

    Directory of Open Access Journals (Sweden)

    Ramu Seva

    2017-11-01

    The high performance of FPGAs (Field Programmable Gate Arrays) in image processing applications is justified by their flexible reconfigurability, their inherent parallel nature, and the availability of a large amount of internal memory. Lately, the Stochastic Computing (SC) paradigm has been found to be significantly advantageous in certain application domains, including image processing, because of its lower hardware complexity and power consumption. However, its viability is deemed to be limited due to its serial bitstream processing and excessive run-time requirements for convergence. To address these issues, a novel approach is proposed in this work in which an energy-efficient implementation of SC is accomplished by introducing fast-converging Quasi-Stochastic Number Generators (QSNGs) and parallel stochastic bitstream processing, which are well suited to leverage the FPGA's reconfigurability and abundant internal memory resources. The proposed approach has been tested on the Virtex-4 FPGA, and results have been compared with the serial and parallel implementations of conventional stochastic computation using the well-known SC edge detection and multiplication circuits. Results prove that with this approach, execution time as well as power consumption are decreased by factors of 3.5 and 4.5 for the edge detection and multiplication circuits, respectively.
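
    A hedged sketch of the multiplication circuit mentioned above: in unipolar stochastic computing, two probabilities encoded as bitstreams are multiplied by a bitwise AND, and the quasi-stochastic variant replaces the random number source with a low-discrepancy sequence so that fewer bits are needed for convergence. A van der Corput sequence stands in here for the paper's QSNGs.

        import numpy as np

        def van_der_corput(n, base=2):
            """Low-discrepancy sequence standing in for a quasi-stochastic generator."""
            seq = np.empty(n)
            for i in range(n):
                v, denom, x = i + 1, 1.0, 0.0
                while v:
                    v, r = divmod(v, base)
                    denom *= base
                    x += r / denom
                seq[i] = x
            return seq

        def sc_multiply(pa, pb, n_bits, quasi=True):
            """Encode pa and pb as unipolar bitstreams; an AND gate multiplies them."""
            if quasi:
                u1, u2 = van_der_corput(n_bits, 2), van_der_corput(n_bits, 3)
            else:
                rng = np.random.default_rng(0)
                u1, u2 = rng.random(n_bits), rng.random(n_bits)
            a, b = u1 < pa, u2 < pb       # stochastic bitstreams
            return np.mean(a & b)         # AND gate + counter ~ pa * pb

        # 0.6 * 0.7 = 0.42; the quasi-stochastic stream typically converges faster.
        print(sc_multiply(0.6, 0.7, 256, quasi=True),
              sc_multiply(0.6, 0.7, 256, quasi=False))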

  7. Computational Intelligence and Wavelet Transform Based Metamodel for Efficient Generation of Not-Yet Simulated Waveforms.

    Directory of Open Access Journals (Sweden)

    Gabriel Oltean

    The design and verification of complex electronic systems, especially the analog and mixed-signal ones, prove to be extremely time consuming tasks, if only circuit-level simulations are involved. A significant amount of time can be saved if a cost effective solution is used for the extensive analysis of the system, under all conceivable conditions. This paper proposes a data-driven method to build fast to evaluate, but also accurate metamodels capable of generating not-yet simulated waveforms as a function of different combinations of the parameters of the system. The necessary data are obtained by early-stage simulation of an electronic control system from the automotive industry. The metamodel development is based on three key elements: a wavelet transform for waveform characterization, a genetic algorithm optimization to detect the optimal wavelet transform and to identify the most relevant decomposition coefficients, and an artificial neural network to derive the relevant coefficients of the wavelet transform for any new parameters combination. The resulting metamodels for three different waveform families are fully reliable. They satisfy the required key points: high accuracy (a maximum mean squared error of 7.1x10-5 for the unity-based normalized waveforms), efficiency (fully affordable computational effort for metamodel build-up: maximum 18 minutes on a general purpose computer), and simplicity (less than 1 second for running the metamodel, the user only provides the parameters combination). The metamodels can be used for very efficient generation of new waveforms, for any possible combination of dependent parameters, offering the possibility to explore the entire design space. A wide range of possibilities becomes achievable for the user, such as: all design corners can be analyzed, possible worst-case situations can be investigated, extreme values of waveforms can be discovered, sensitivity analyses can be performed (the influence of each

  8. Computational Intelligence and Wavelet Transform Based Metamodel for Efficient Generation of Not-Yet Simulated Waveforms.

    Science.gov (United States)

    Oltean, Gabriel; Ivanciu, Laura-Nicoleta

    2016-01-01

    The design and verification of complex electronic systems, especially the analog and mixed-signal ones, prove to be extremely time consuming tasks, if only circuit-level simulations are involved. A significant amount of time can be saved if a cost effective solution is used for the extensive analysis of the system, under all conceivable conditions. This paper proposes a data-driven method to build fast to evaluate, but also accurate metamodels capable of generating not-yet simulated waveforms as a function of different combinations of the parameters of the system. The necessary data are obtained by early-stage simulation of an electronic control system from the automotive industry. The metamodel development is based on three key elements: a wavelet transform for waveform characterization, a genetic algorithm optimization to detect the optimal wavelet transform and to identify the most relevant decomposition coefficients, and an artificial neural network to derive the relevant coefficients of the wavelet transform for any new parameters combination. The resulting metamodels for three different waveform families are fully reliable. They satisfy the required key points: high accuracy (a maximum mean squared error of 7.1x10-5 for the unity-based normalized waveforms), efficiency (fully affordable computational effort for metamodel build-up: maximum 18 minutes on a general purpose computer), and simplicity (less than 1 second for running the metamodel, the user only provides the parameters combination). The metamodels can be used for very efficient generation of new waveforms, for any possible combination of dependent parameters, offering the possibility to explore the entire design space. A wide range of possibilities becomes achievable for the user, such as: all design corners can be analyzed, possible worst-case situations can be investigated, extreme values of waveforms can be discovered, sensitivity analyses can be performed (the influence of each parameter on the

  9. Computational Intelligence and Wavelet Transform Based Metamodel for Efficient Generation of Not-Yet Simulated Waveforms

    Science.gov (United States)

    Oltean, Gabriel; Ivanciu, Laura-Nicoleta

    2016-01-01

    The design and verification of complex electronic systems, especially the analog and mixed-signal ones, prove to be extremely time consuming tasks, if only circuit-level simulations are involved. A significant amount of time can be saved if a cost effective solution is used for the extensive analysis of the system, under all conceivable conditions. This paper proposes a data-driven method to build fast to evaluate, but also accurate metamodels capable of generating not-yet simulated waveforms as a function of different combinations of the parameters of the system. The necessary data are obtained by early-stage simulation of an electronic control system from the automotive industry. The metamodel development is based on three key elements: a wavelet transform for waveform characterization, a genetic algorithm optimization to detect the optimal wavelet transform and to identify the most relevant decomposition coefficients, and an artificial neural network to derive the relevant coefficients of the wavelet transform for any new parameters combination. The resulting metamodels for three different waveform families are fully reliable. They satisfy the required key points: high accuracy (a maximum mean squared error of 7.1x10-5 for the unity-based normalized waveforms), efficiency (fully affordable computational effort for metamodel build-up: maximum 18 minutes on a general purpose computer), and simplicity (less than 1 second for running the metamodel, the user only provides the parameters combination). The metamodels can be used for very efficient generation of new waveforms, for any possible combination of dependent parameters, offering the possibility to explore the entire design space. A wide range of possibilities becomes achievable for the user, such as: all design corners can be analyzed, possible worst-case situations can be investigated, extreme values of waveforms can be discovered, sensitivity analyses can be performed (the influence of each parameter on the
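
    A hedged sketch of the three-element pipeline these records describe, with PyWavelets and scikit-learn standing in for the authors' tooling and a synthetic waveform family replacing the automotive simulation data; the wavelet choice and the coefficient selection rule below are simplifications of the genetic-algorithm search.

        import numpy as np
        import pywt
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)

        # Synthetic "simulated waveforms" parameterized by (amplitude, decay).
        t = np.linspace(0.0, 1.0, 256)
        params = rng.uniform([0.5, 1.0], [2.0, 8.0], size=(200, 2))
        waves = np.array([a * np.exp(-d * t) * np.sin(12.0 * t) for a, d in params])

        # 1) Wavelet transform for waveform characterization.
        coeffs = [pywt.wavedec(w, "db4", level=4) for w in waves]
        shapes = [c.shape for c in coeffs[0]]
        flat = np.array([np.concatenate(c) for c in coeffs])

        # 2) Keep the most relevant coefficients (largest mean magnitude here,
        #    where the paper uses a genetic algorithm).
        keep = np.argsort(np.abs(flat).mean(axis=0))[-40:]

        # 3) Neural network maps parameter combinations to retained coefficients.
        net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=3000)
        net.fit(params, flat[:, keep])

        # Generate a not-yet simulated waveform for a new parameter combination.
        new_flat = np.zeros(flat.shape[1])
        new_flat[keep] = net.predict([[1.3, 4.2]])[0]
        parts = np.split(new_flat, np.cumsum([int(np.prod(s)) for s in shapes])[:-1])
        wave_hat = pywt.waverec([p.reshape(s) for p, s in zip(parts, shapes)], "db4")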

  10. Peregrine System | High-Performance Computing | NREL

    Science.gov (United States)

    Peregrine has several classes of nodes that users access. Login nodes: Peregrine has four login nodes, each of which has Intel E5 processors; besides the /scratch file systems, the /mss file system is mounted on all login nodes. Compute nodes: Peregrine has 2592

  11. A highly efficient multi-core algorithm for clustering extremely large datasets

    Directory of Open Access Journals (Sweden)

    Kraus Johann M

    2010-04-01

    Background: In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches to parallelizing algorithms largely rely on network communication protocols connecting, and requiring, multiple computers. One answer to this problem is to utilize the intrinsic capabilities of current multi-core hardware to distribute the tasks among the different cores of one computer. Results: We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms, based on the design principles of transactional memory, for clustering gene expression microarray-type data and categorical SNP data. Our new shared-memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis, employing repeated runs with slightly changed parameters. The computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets, while preserving computational accuracy, compared to single-core implementations and a recently published network-based parallelization. Conclusions: Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that, using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer.
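
    A hedged sketch of the shared-memory idea in Python (the authors' implementation is Java built on transactional-memory design principles; multiprocessing stands in here). The point is that the expensive assignment step splits cleanly across the cores of one machine, with no network protocol involved.

        import numpy as np
        from multiprocessing import Pool

        def assign_chunk(args):
            """Nearest-center assignment for one chunk of the data."""
            chunk, centers = args
            d = ((chunk[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            return d.argmin(axis=1)

        def parallel_kmeans(X, k, n_iter=20, n_cores=4, seed=0):
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), k, replace=False)]
            chunks = np.array_split(X, n_cores)
            with Pool(n_cores) as pool:
                for _ in range(n_iter):
                    labels = np.concatenate(
                        pool.map(assign_chunk, [(c, centers) for c in chunks]))
                    # Recompute centers; keep the old one if a cluster empties.
                    centers = np.array([X[labels == j].mean(axis=0)
                                        if np.any(labels == j) else centers[j]
                                        for j in range(k)])
            return labels, centers

        if __name__ == "__main__":
            X = np.random.default_rng(2).normal(size=(100_000, 10))
            labels, centers = parallel_kmeans(X, k=5)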

  12. Wireless-Uplinks-Based Energy-Efficient Scheduling in Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Xing Liu

    2015-01-01

    Mobile cloud computing (MCC) combines cloud computing and the mobile internet to improve the computational capabilities of resource-constrained mobile devices (MDs). In MCC, mobile users can not only improve the computational capability of their MDs but also save energy by offloading mobile applications to the cloud. However, MCC faces an energy-efficiency problem because of time-varying channels during offloading. In this paper, we address the issue of energy-efficient scheduling for the wireless uplink in MCC. By introducing Lyapunov optimization, we first propose a scheduling algorithm that can dynamically choose the channel on which to transmit data based on queue backlog and channel statistics. Then, we show that the proposed scheduling algorithm can make a tradeoff between queue backlog and energy consumption in a channel-aware MCC system. Simulation results show that the proposed scheduling algorithm can reduce the time-average energy consumption for offloading compared to the existing algorithm.
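
    A hedged sketch of the drift-plus-penalty rule that Lyapunov optimization yields for this kind of scheduler: each slot, transmit on the channel minimizing V*power - Q*rate (or stay idle if no channel scores below zero), where Q is the queue backlog and V trades energy against backlog. The channel statistics, arrival rate, and V below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(3)

        V = 50.0                           # energy/backlog tradeoff parameter
        powers = np.array([1.0, 1.8])      # transmit power per channel [W]
        Q, energy = 0.0, 0.0               # queue backlog [bits], energy used

        for t in range(10_000):
            arrivals = rng.poisson(3.0)                  # bits to offload this slot
            rates = rng.uniform([1.0, 2.0], [5.0, 9.0])  # time-varying channel rates
            scores = V * powers - Q * rates              # drift-plus-penalty scores
            best = int(np.argmin(scores))
            if scores[best] < 0.0:                       # idling has score 0
                Q -= min(Q, rates[best])
                energy += powers[best]
            Q += arrivals

        # Larger V lowers the average energy at the price of a larger backlog.
        print(Q, energy)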

  13. The emerging High Efficiency Video Coding standard (HEVC)

    International Nuclear Information System (INIS)

    Raja, Gulistan; Khan, Awais

    2013-01-01

    High definition video (HDV) is becoming popular day by day. This paper describes the performance analysis of the latest upcoming video standard, known as High Efficiency Video Coding (HEVC). HEVC is designed to fulfil all the requirements for future high definition videos. In this paper, three configurations (intra only, low delay and random access) of HEVC are analyzed using various 480p, 720p and 1080p high definition test video sequences. Simulation results show the superior objective and subjective quality of HEVC.

  14. An efficient and accurate method for computation of energy release rates in beam structures with longitudinal cracks

    DEFF Research Database (Denmark)

    Blasques, José Pedro Albergaria Amaral; Bitsche, Robert

    2015-01-01

    This paper proposes a novel, efficient, and accurate framework for fracture analysis of beam structures with longitudinal cracks. The three-dimensional local stress field is determined using a high-fidelity beam model incorporating a finite element based cross section analysis tool. The Virtual Crack Closure Technique is used for computation of strain energy release rates. The devised framework was employed for analysis of cracks in beams with different cross section geometries. The results show that the accuracy of the proposed method is comparable to that of conventional three-dimensional solid finite element models while using only a fraction of the computation time.

  15. Bringing together high energy physicist and computer scientist

    International Nuclear Information System (INIS)

    Bock, R.K.

    1989-01-01

    The Oxford Conference on Computing in High Energy Physics approached the physics and computing issues with the question, ''Can computer science help?'' always in mind. This summary is a personal recollection of what I considered to be the highlights of the conference: the parts which contributed to my own learning experience. It can be used as a general introduction to the following papers, or as a brief overview of the current state of computer science within high energy physics. (orig.)

  16. Integrated computer network high-speed parallel interface

    International Nuclear Information System (INIS)

    Frank, R.B.

    1979-03-01

    As the number and variety of computers within Los Alamos Scientific Laboratory's Central Computer Facility grows, the need for a standard, high-speed intercomputer interface has become more apparent. This report details the development of a High-Speed Parallel Interface from conceptual through implementation stages to meet current and future needs for large-scale network computing within the Integrated Computer Network. 4 figures

  17. High quality, high efficiency welding technology for nuclear power plants

    International Nuclear Information System (INIS)

    Aoki, Shigeyuki; Nagura, Yasumi

    1996-01-01

    Nuclear power plants are required to ensure safety with high reliability and to attain a high rate of operation. In the manufacture and installation of machinery and equipment, the underlying welding techniques exert a large influence on these goals. For the purpose of improving joint performance and excluding human errors, welding heat input and the number of passes have been reduced, the automation of welding has been advanced, and at present, narrow gap arc welding and high energy density welding such as electron beam welding and laser welding have been put to practical use. Also in the welding of pipings, automatic gas metal arc welding is employed. As for the welding of main machinery and equipment, there are the welding of joints that constitute pressure boundaries, the build-up welding on the internal surfaces of pressure vessels for separating primary water from them, and the sealing welding of heating tubes and tube plates in steam generators. These welding processes are explained, and the welding of pipings and the state of development and application of new welding methods are reported. (K.I.)

  18. High-concentration planar microtracking photovoltaic system exceeding 30% efficiency

    Science.gov (United States)

    Price, Jared S.; Grede, Alex J.; Wang, Baomin; Lipski, Michael V.; Fisher, Brent; Lee, Kyu-Tae; He, Junwen; Brulo, Gregory S.; Ma, Xiaokun; Burroughs, Scott; Rahn, Christopher D.; Nuzzo, Ralph G.; Rogers, John A.; Giebink, Noel C.

    2017-08-01

    Prospects for concentrating photovoltaic (CPV) power are growing as the market increasingly values high power conversion efficiency to leverage now-dominant balance of system and soft costs. This trend is particularly acute for rooftop photovoltaic power, where delivering the high efficiency of traditional CPV in the form factor of a standard rooftop photovoltaic panel could be transformative. Here, we demonstrate a fully automated planar microtracking CPV system achieving a 660× concentration ratio over a 140° full field of view. In outdoor testing over the course of two sunny days, the system operates automatically from sunrise to sunset, outperforming a 17%-efficient commercial silicon solar cell by generating >50% more energy per unit area per day in a direct head-to-head competition. These results support the technical feasibility of planar microtracking CPV to deliver a step change in the efficiency of rooftop solar panels at a commercially relevant concentration ratio.

  19. Development of high-efficiency solar cells on silicon web

    Science.gov (United States)

    Meier, D. L.; Greggi, J.; Okeeffe, T. W.; Rai-Choudhury, P.

    1986-01-01

    Work was performed to improve web base material with a goal of obtaining solar cell efficiencies in excess of 18% (AM1). Efforts in this program are directed toward identifying carrier loss mechanisms in web silicon, eliminating or reducing these mechanisms, designing a high efficiency cell structure with the aid of numerical models, and fabricating high efficiency web solar cells. Fabrication techniques must preserve or enhance carrier lifetime in the bulk of the cell and minimize recombination of carriers at the external surfaces. Three completed cells were viewed by cross-sectional transmission electron microscopy (TEM) in order to investigate further the relation between structural defects and electrical performance of web cells. Consistent with past TEM examinations, the cell with the highest efficiency (15.0%) had no dislocations but did have 11 twin planes.

  20. Efficient estimation for ergodic diffusions sampled at high frequency

    DEFF Research Database (Denmark)

    Sørensen, Michael

    A general theory of efficient estimation for ergodic diffusions sampled at high frequency is presented. High frequency sampling is now possible in many applications, in particular in finance. The theory is formulated in terms of approximate martingale estimating functions and covers a large class...

  1. High-Efficiency Klystron Design for the CLIC Project

    CERN Document Server

    Mollard, Antoine; Peauger, Franck; Plouin, Juliette; Beunas, Armel; Marchesin, Rodolphe

    2017-01-01

    The CLIC project requires a new type of RF source for the high power conditioning of the accelerating cavities. We are working on the development of a new kind of high-efficiency klystron to fulfill this need. This work is performed under the EuCARD-2 European program and involves theoretical and experimental study of a brand new klystron concept.

  2. Efficient estimation for high similarities using odd sketches

    DEFF Research Database (Denmark)

    Mitzenmacher, Michael; Pagh, Rasmus; Pham, Ninh Dang

    2014-01-01

    This means that Odd Sketches provide a highly space-efficient estimator for sets of high similarity, which is relevant in applications such as web duplicate detection, collaborative filtering, and association rule learning. The method extends to weighted Jaccard similarity, relevant e.g. for TF-IDF vector ... and web duplicate detection tasks.

  3. Design of High Efficiency Illumination for LED Lighting

    Directory of Open Access Journals (Sweden)

    Yong-Nong Chang

    2013-01-01

    A high-efficiency illumination system for LED street lighting is proposed. For energy saving, this paper uses a Class-E resonant inverter as the main electric circuit to improve efficiency. In addition, single dimming control has the best efficiency, simplest control scheme, and lowest circuit cost among dimming techniques. Multiple serial-connected transformers are used to drive the LED strings, as they provide galvanic isolation and have the advantage of good current distribution against device differences. Finally, a prototype circuit driving 112 W of LEDs in total was built and tested to verify the theoretical analysis.

  4. Some computational challenges of developing efficient parallel algorithms for data-dependent computations in thermal-hydraulics supercomputer applications

    International Nuclear Information System (INIS)

    Woodruff, S.B.

    1992-01-01

    The Transient Reactor Analysis Code (TRAC), which features a two-fluid treatment of thermal-hydraulics, is designed to model transients in water reactors and related facilities. One of the major computational costs associated with TRAC and similar codes is calculating constitutive coefficients. Although the formulations for these coefficients are local, the costs are flow-regime- or data-dependent; i.e., the computations needed for a given spatial node often vary widely as a function of time. Consequently, poor load balancing will degrade efficiency on either vector or data-parallel architectures when the data are organized according to spatial location. Unfortunately, a general automatic solution to the load-balancing problem associated with data-dependent computations is not yet available for massively parallel architectures. This document discusses why developers should consider alternative numerical algorithms, such as a neural net representation, that do not exhibit load-balancing problems.

  5. Development of a computer program to support an efficient non-regression test of a thermal-hydraulic system code

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jun Yeob; Jeong, Jae Jun [School of Mechanical Engineering, Pusan National University, Busan (Korea, Republic of); Suh, Jae Seung [System Engineering and Technology Co., Daejeon (Korea, Republic of); Kim, Kyung Doo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    During the development process of a thermal-hydraulic system code, a non-regression test (NRT) must be performed repeatedly in order to prevent software regression. The NRT process, however, is time-consuming and labor-intensive. Thus, automation of this process is an ideal solution. In this study, we have developed a program to support an efficient NRT for the SPACE code and demonstrated its usability. This results in a high degree of efficiency for code development. The program was developed using the Visual Basic for Applications and designed so that it can be easily customized for the NRT of other computer codes.

  6. High-Efficient Low-Cost Photovoltaics Recent Developments

    CERN Document Server

    Petrova-Koch, Vesselinka; Goetzberger, Adolf

    2009-01-01

    A bird's-eye view of the development and problems of recent photovoltaic cells and systems, and prospects for Si feedstock, is presented. Highly efficient, low-cost PV modules, making use of novel efficient solar cells (based on c-Si or III-V materials) and low-cost solar concentrators, are the focus of this book. Recent developments in organic photovoltaics, which is expected to overcome its difficulties and enter the market soon, are also included.

  7. Modeling the evolution of channel shape: Balancing computational efficiency with hydraulic fidelity

    Science.gov (United States)

    Wobus, C.W.; Kean, J.W.; Tucker, G.E.; Anderson, R. Scott

    2008-01-01

    The cross-sectional shape of a natural river channel controls the capacity of the system to carry water off a landscape, to convey sediment derived from hillslopes, and to erode its bed and banks. Numerical models that describe the response of a landscape to changes in climate or tectonics therefore require formulations that can accommodate evolution of channel cross-sectional geometry. However, fully two-dimensional (2-D) flow models are too computationally expensive to implement in large-scale landscape evolution models, while available simple empirical relationships between width and discharge do not adequately capture the dynamics of channel adjustment. We have developed a simplified 2-D numerical model of channel evolution in a cohesive, detachment-limited substrate subject to steady, unidirectional flow. Erosion is assumed to be proportional to boundary shear stress, which is calculated using an approximation of the flow field in which log-velocity profiles are assumed to apply along vectors that are perpendicular to the local channel bed. Model predictions of the velocity structure, peak boundary shear stress, and equilibrium channel shape compare well with predictions of a more sophisticated but more computationally demanding ray-isovel model. For example, the mean velocities computed by the two models are consistent to within ~3%, and the predicted peak shear stress is consistent to within ~7%. Furthermore, the shear stress distributions predicted by our model compare favorably with available laboratory measurements for prescribed channel shapes. A modification to our simplified code in which the flow includes a high-velocity core allows the model to be extended to estimate shear stress distributions in channels with large width-to-depth ratios. Our model is efficient enough to incorporate into large-scale landscape evolution codes and can be used to examine how channels adjust both cross-sectional shape and slope in response to tectonic and climatic
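
    The boundary-shear-stress approximation described above rests on the law of the wall; schematically, in notation of ours rather than the paper's exact discretization:

        % Log-velocity profile along the local bed-perpendicular coordinate n,
        % with n_0 the roughness length and kappa ~ 0.41 the von Karman constant;
        % the boundary shear stress follows from the shear velocity u_*, and the
        % erosion rate E is taken proportional to it.
        u(n) = \frac{u_*}{\kappa}\,\ln\!\frac{n}{n_0},
        \qquad
        \tau_b = \rho\, u_*^{2},
        \qquad
        E \propto \tau_b .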

  8. Efficient approach to compute melting properties fully from ab initio with application to Cu

    Science.gov (United States)

    Zhu, Li-Fang; Grabowski, Blazej; Neugebauer, Jörg

    2017-12-01

    Applying thermodynamic integration within an ab initio-based free-energy approach is a state-of-the-art method to calculate melting points of materials. However, the high computational cost and the reliance on a good reference system for calculating the liquid free energy have so far hindered a general application. To overcome these challenges, we propose the two-optimized references thermodynamic integration using Langevin dynamics (TOR-TILD) method in this work by extending the two-stage upsampled thermodynamic integration using Langevin dynamics (TU-TILD) method, which has been originally developed to obtain anharmonic free energies of solids, to the calculation of liquid free energies. The core idea of TOR-TILD is to fit two empirical potentials to the energies from density functional theory based molecular dynamics runs for the solid and the liquid phase and to use these potentials as reference systems for thermodynamic integration. Because the empirical potentials closely reproduce the ab initio system in the relevant part of the phase space, the convergence of the thermodynamic integration is very rapid. Therefore, the proposed approach improves significantly the computational efficiency while preserving the required accuracy. As a test case, we apply TOR-TILD to fcc Cu, computing not only the melting point but various other melting properties, such as the entropy and enthalpy of fusion and the volume change upon melting. The generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional and the local-density approximation (LDA) are used. Using both functionals gives a reliable ab initio confidence interval for the melting point, the enthalpy of fusion, and the entropy of fusion.
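
    The identity underlying both TU-TILD and TOR-TILD is standard thermodynamic integration from a cheap reference potential to the DFT system, applied here once per phase:

        % Free energy of the DFT system from a reference potential, integrating
        % over the coupling parameter lambda; fitting the reference closely to
        % the DFT system keeps the integrand small and the sampling cheap.
        F_{\mathrm{DFT}} = F_{\mathrm{ref}}
          + \int_{0}^{1} \big\langle E_{\mathrm{DFT}} - E_{\mathrm{ref}} \big\rangle_{\lambda}\,\mathrm{d}\lambda ,
        \qquad
        E_{\lambda} = (1-\lambda)\,E_{\mathrm{ref}} + \lambda\,E_{\mathrm{DFT}} .

    The melting point is then read off where the solid and liquid free-energy curves cross.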

  9. A synthetic visual plane algorithm for visibility computation in consideration of accuracy and efficiency

    Science.gov (United States)

    Yu, Jieqing; Wu, Lixin; Hu, Qingsong; Yan, Zhigang; Zhang, Shaoliang

    2017-12-01

    Visibility computation is of great interest to location optimization, environmental planning, ecology, and tourism. Many algorithms have been developed for visibility computation. In this paper, we propose a novel method of visibility computation, called synthetic visual plane (SVP), to achieve better performance with respect to efficiency, accuracy, or both. The method uses a global horizon, which is a synthesis of the line-of-sight information of all nearer points, to determine the visibility of a point, which makes it an accurate visibility method. We discretize the horizon to gain good efficiency. After discretization, the accuracy and efficiency of SVP depend on the scale of discretization (i.e., zone width): the method is more accurate at smaller zone widths, but this requires a longer operating time, so users must strike a balance between accuracy and efficiency at their discretion. According to our experiments, SVP is less accurate but more efficient than R2 if the zone width is set to one grid. However, SVP becomes more accurate than R2 when the zone width is set to 1/24 grid, while it continues to perform as fast or faster than R2. Although SVP performs worse than the reference-plane and depth-map algorithms with respect to efficiency, it is superior to both in accuracy.
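
    The horizon idea is easiest to see in one dimension: walking outward from the viewpoint, a sample is visible exactly when its elevation angle exceeds the running maximum over all nearer samples. A minimal sketch (my own illustrative code, not the authors'):

```python
def visible_along_profile(viewer_elev, elevs, dists):
    """1-D horizon scan along a ray leaving the viewpoint.

    A sample is visible iff its elevation angle, seen from the viewer,
    exceeds the horizon -- the maximum angle over all nearer samples.
    SVP synthesizes exactly this line-of-sight information across rays.
    """
    horizon = float("-inf")
    flags = []
    for e, d in zip(elevs, dists):   # samples ordered near -> far, d > 0
        tan_angle = (e - viewer_elev) / d
        flags.append(tan_angle > horizon)
        horizon = max(horizon, tan_angle)
    return flags

print(visible_along_profile(100.0, [98, 105, 103, 112], [1, 2, 3, 4]))
# [True, True, False, True]: the third sample lies below the horizon set earlier
```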

  10. Trends in high-performance computing for engineering calculations.

    Science.gov (United States)

    Giles, M B; Reguly, I

    2014-08-13

    High-performance computing has evolved remarkably over the past 20 years, and that progress is likely to continue. However, in recent years, this progress has been achieved through greatly increased hardware complexity with the rise of multicore and manycore processors, and this is affecting the ability of application developers to achieve the full potential of these systems. This article outlines the key developments on the hardware side, both in the recent past and in the near future, with a focus on two key issues: energy efficiency and the cost of moving data. It then discusses the much slower evolution of system software, and the implications of all of this for application developers.

  11. Game-Theoretic Rate-Distortion-Complexity Optimization of High Efficiency Video Coding

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Milani, Simone; Forchhammer, Søren

    2013-01-01

    This paper presents an algorithm for rate-distortion-complexity optimization for the emerging High Efficiency Video Coding (HEVC) standard, whose high computational requirements urge the need for low-complexity optimization algorithms. Optimization approaches need to specify different complexity profiles in order to tailor the computational load to the different hardware and power-supply resources of devices. In this work, we focus on optimizing the quantization parameter and partition depth in HEVC via a game-theoretic approach. The proposed rate control strategy alone provides 0.2 dB improvement...
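
    The abstract does not spell out the cost function, but mode decisions of this kind are conventionally framed as a Lagrangian trade-off; a generic stand-in (not the paper's actual game-theoretic allocation) looks like:

```python
def best_mode(candidates, lam, mu):
    """Pick the coding option minimizing J = D + lambda*R + mu*C, a generic
    rate-distortion-complexity cost. Each candidate is a dict holding the
    measured distortion D, bitrate R, and complexity C for one
    (QP, partition-depth) choice; lam and mu set the trade-offs."""
    return min(candidates, key=lambda c: c["D"] + lam * c["R"] + mu * c["C"])

# Illustrative numbers only:
modes = [{"D": 12.1, "R": 830, "C": 4.0}, {"D": 10.4, "R": 1010, "C": 7.5}]
print(best_mode(modes, lam=0.01, mu=0.1))   # picks the first candidate
```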

  12. A new computationally-efficient computer program for simulating spectral gamma-ray logs

    International Nuclear Information System (INIS)

    Conaway, J.G.

    1995-01-01

    Several techniques to improve the accuracy of radionuclide concentration estimates as a function of depth from gamma-ray logs have appeared in the literature. Much of that work was driven by interest in uranium as an economic mineral. More recently, the problem of mapping and monitoring artificial gamma-emitting contaminants in the ground has rekindled interest in improving the accuracy of radioelement concentration estimates from gamma-ray logs. We are looking at new approaches to accomplishing such improvements. The first step in this effort has been to develop a new computational model of a spectral gamma-ray logging sonde in a borehole environment. The model supports attenuation in any combination of materials arranged in 2-D cylindrical geometry, including any combination of attenuating materials in the borehole, formation, and logging sonde, and it can handle any distribution of sources in the formation. The model considers unscattered radiation only, as represented by the background-corrected area under a given spectral photopeak as a function of depth. Benchmark calculations against the standard Monte Carlo code MCNP show excellent agreement in total gamma flux estimates, at a computation time of about 0.01% of that required for the MCNP calculations. This model lacks the flexibility of MCNP, although for this application a great deal can be accomplished without that flexibility.
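
    Restricting the physics to unscattered photons is what buys the speed: each source-detector pair reduces to a point-kernel line integral instead of a full transport calculation. A sketch of such a kernel (illustrative only; the paper's 2-D cylindrical ray tracing supplies the attenuation pairs):

```python
import numpy as np

def unscattered_flux(activity, r, mu_t_pairs):
    """Unscattered photopeak flux from a point source at distance r.

    phi = A * exp(-sum_i mu_i * t_i) / (4 pi r^2), with one attenuation pair
    (linear coefficient mu_i, path length t_i) per material crossed along
    the ray: borehole fluid, sonde housing, formation, and so on.
    """
    attenuation = np.exp(-sum(mu * t for mu, t in mu_t_pairs))
    return activity * attenuation / (4.0 * np.pi * r**2)

# 1 MBq source 0.3 m away, through 0.1 m of water and 0.2 m of rock
# (mu values are rough, illustrative magnitudes in 1/m):
print(unscattered_flux(1e6, 0.3, [(8.0, 0.1), (15.0, 0.2)]))
```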

  13. Efficient computation of aerodynamic influence coefficients for aeroelastic analysis on a transputer network

    Science.gov (United States)

    Janetzke, David C.; Murthy, Durbha V.

    1991-01-01

    Aeroelastic analysis is multi-disciplinary and computationally expensive; hence, it can greatly benefit from parallel processing. As part of an effort to develop an aeroelastic capability on a distributed-memory transputer network, a parallel algorithm for the computation of aerodynamic influence coefficients is implemented on a network of 32 transputers. The aerodynamic influence coefficients are calculated using a 3-D unsteady aerodynamic model and a parallel discretization. Efficiencies up to 85 percent were demonstrated using 32 processors. The effects of subtask ordering, problem size, and network topology are presented. A comparison to results on a shared-memory computer indicates that higher speedup is achieved on the distributed-memory system.
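
    For reference, parallel efficiency is just speedup normalized by processor count, so the quoted 85 percent on 32 transputers corresponds to a speedup of about 27. A two-line helper makes the bookkeeping explicit:

```python
def speedup_and_efficiency(t_serial, t_parallel, n_procs):
    """Speedup S = T1/Tp and parallel efficiency E = S/p."""
    s = t_serial / t_parallel
    return s, s / n_procs

# E = 0.85 on 32 processors implies S = 0.85 * 32 = 27.2:
print(speedup_and_efficiency(t_serial=27.2, t_parallel=1.0, n_procs=32))
```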

  14. High efficiency heat transport and power conversion system for cascade

    International Nuclear Information System (INIS)

    Maya, I.; Bourque, R.F.; Creedon, R.L.; Schultz, K.R.

    1985-02-01

    The Cascade ICF reactor features a flowing blanket of solid BeO and LiAlO2 granules with very high temperature capability (up to approx. 2300 K). The authors present here the design of a high-temperature granule transport and heat exchange system, and two options for high-efficiency power conversion. The centrifugal-throw transport system uses the peripheral speed imparted to the granules by the rotating chamber to effect granule transport and requires no additional equipment. The heat exchanger design is a vacuum heat transfer concept utilizing gravity-induced flow of the granules over ceramic heat exchange surfaces. A reference Brayton power cycle is presented which achieves 55% net efficiency with a 1300 K peak helium temperature. A modified Field steam cycle (a hybrid Rankine/Brayton cycle) is presented as an alternate which achieves 56% net efficiency.

  15. Highly Efficient Coherent Optical Memory Based on Electromagnetically Induced Transparency

    Science.gov (United States)

    Hsiao, Ya-Fen; Tsai, Pin-Ju; Chen, Hung-Shiue; Lin, Sheng-Xiang; Hung, Chih-Chiao; Lee, Chih-Hsi; Chen, Yi-Hsin; Chen, Yong-Fan; Yu, Ite A.; Chen, Ying-Cheng

    2018-05-01

    Quantum memory is an important component of long-distance quantum communication based on the quantum repeater protocol. To outperform the direct transmission of photons with quantum repeaters, it is crucial to develop quantum memories with high fidelity, high efficiency, and a long storage time. Here, we achieve a storage efficiency of 92.0 (1.5)% for a coherent optical memory based on the electromagnetically induced transparency scheme in optically dense cold atomic media. We also obtain a useful time-bandwidth product of 1200, considering only storage where the retrieval efficiency remains above 50%. Both are the best records to date among all schemes for the realization of optical memory. Our work significantly advances the pursuit of a high-performance optical memory and should have important applications in quantum information science.

  16. High-efficiency white OLEDs based on small molecules

    Science.gov (United States)

    Hatwar, Tukaram K.; Spindler, Jeffrey P.; Ricks, M. L.; Young, Ralph H.; Hamada, Yuuhiko; Saito, N.; Mameno, Kazunobu; Nishikawa, Ryuji; Takahashi, Hisakazu; Rajeswaran, G.

    2004-02-01

    Eastman Kodak Company and SANYO Electric Co., Ltd. recently demonstrated a 15" full-color, organic light-emitting diode display (OLED) using a high-efficiency white emitter combined with a color-filter array. Although useful for display applications, white emission from organic structures is also under consideration for other applications, such as solid-state lighting, where high efficiency and good color rendition are important. By incorporating adjacent blue and orange emitting layers in a multi-layer structure, highly efficient, stable white emission has been attained. With suitable host and dopant combinations, a luminance yield of 20 cd/A and efficiency of 8 lm/W have been achieved at a drive voltage of less than 8 volts and luminance level of 1000 cd/m2. The estimated external efficiency of this device is 6.3% and a high level of operational stability is observed. To our knowledge, this is the highest performance reported so far for white organic electroluminescent devices. We will review white OLED technology and discuss the fabrication and operating characteristics of these devices.
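
    The quoted figures are mutually consistent: for a Lambertian emitter (my assumption here), luminous efficacy in lm/W is roughly pi times the current efficiency in cd/A divided by the drive voltage. A quick check on the reported numbers:

```python
import math

# Assuming Lambertian emission: efficacy (lm/W) ~ pi * (cd/A) / V.
# 20 cd/A at an 8 V drive gives ~7.9 lm/W, matching the reported 8 lm/W.
print(math.pi * 20 / 8)
```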

  17. Low Cost, High Efficiency, High Pressure Hydrogen Storage

    Energy Technology Data Exchange (ETDEWEB)

    Mark Leavitt

    2010-03-31

    A technical and design evaluation was carried out to meet DOE hydrogen fuel targets for 2010. These targets consisted of a system gravimetric capacity of 2.0 kWh/kg, a system volumetric capacity of 1.5 kWh/L, and a system cost of $4/kWh. In compressed hydrogen storage systems, the vast majority of the weight and volume is associated with the hydrogen storage tank. In order to meet gravimetric targets for compressed hydrogen tanks, 10,000 psi carbon-fiber resin composites were used to provide the required high strength at low weight. For the 10,000 psi tanks, carbon fiber accounts for the largest portion of the cost. Quantum Technologies is a tier-one hydrogen system supplier for automotive companies around the world. Over the course of the program, Quantum focused on developing technology to allow the compressed hydrogen storage tank to meet DOE goals. At the start of the program in 2004, Quantum was supplying systems with a specific energy of 1.1-1.6 kWh/kg, a volumetric capacity of 1.3 kWh/L, and a cost of $73/kWh. Given the gap between the DOE targets and Quantum's then-current capabilities, focus was placed first on cost reduction and second on weight reduction, both to be accomplished without reducing the fuel system's performance or reliability. Three distinct areas were investigated: optimization of composite structures; development of "smart tanks" that could monitor the health of the tank, allowing a lower design safety factor; and development of "Cool Fuel" technology to allow higher-density gas to be stored, thus allowing smaller, lower-pressure tanks to hold the required fuel supply. The second phase of the project deals with three additional distinct tasks focusing on composite structure optimization, liner optimization, and metal.
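
    To put the gravimetric target in perspective: taking hydrogen's lower heating value as roughly 33.3 kWh/kg (my assumption; the DOE targets are energy-based), 2.0 kWh/kg means the whole tank system must be about 6% hydrogen by weight:

```python
H2_LHV = 33.3   # kWh per kg of hydrogen (lower heating value)
target = 2.0    # DOE 2010 system target, kWh per kg of system
print(100 * target / H2_LHV)   # ~6.0 wt% hydrogen at the system level
```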

  18. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    Directory of Open Access Journals (Sweden)

    V. N. Adrov

    2012-07-01

    Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent bits as input and produce independent bits as output; the independence comes from the nature of such algorithms, since images, stereopairs, or small image blocks can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human interference. Photogrammetric workstations can perform tie-point measurements, DTM calculations, orthophoto construction, mosaicing, and many other service operations in parallel using distributed calculations. Distributed calculations save time, reducing several days of computation to several hours. Modern trends in computer technology show an increase in CPU cores in workstations, speed increases in local networks, and, as a result, falling prices for supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in DPW is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since the processing of huge amounts of large raster images is needed.

  19. High Efficient Bidirectional Battery Converter for residential PV Systems

    DEFF Research Database (Denmark)

    Pham, Cam; Kerekes, Tamas; Teodorescu, Remus

    2012-01-01

    Photovoltaic (PV) installation is suited to the residential environment, and the generation pattern follows the distribution of residential power consumption in daylight hours. In cases of unbalance between generation and demand, the Smart PV with its battery storage can absorb or inject power to balance it. A highly efficient bidirectional converter for the battery storage is required because of the high system cost and because the power is processed twice. A 1.5 kW prototype was designed and built with CoolMOS and SiC diodes; >95% efficiency has been obtained with 200 kHz hard switching....

  20. Formulation and Validation of an Efficient Computational Model for a Dilute, Settling Suspension Undergoing Rotational Mixing

    Energy Technology Data Exchange (ETDEWEB)

    Sprague, Michael A.; Stickel, Jonathan J.; Sitaraman, Hariswaran; Crawford, Nathan C.; Fischer, Paul F.

    2017-04-11

    Designing processing equipment for the mixing of settling suspensions is a challenging problem. Achieving low-cost mixing is especially difficult for the application of slowly reacting suspended solids because the cost of impeller power consumption becomes quite high due to the long reaction times (batch mode) or due to large-volume reactors (continuous mode). Further, the usual scale-up metrics for mixing, e.g., constant tip speed and constant power per volume, do not apply well for mixing of suspensions. As an alternative, computational fluid dynamics (CFD) can be useful for analyzing mixing at multiple scales and determining appropriate mixer designs and operating parameters. We developed a mixture model to describe the hydrodynamics of a settling cellulose suspension. The suspension motion is represented as a single velocity field in a computationally efficient Eulerian framework. The solids are represented by a scalar volume-fraction field that undergoes transport due to particle diffusion, settling, fluid advection, and shear stress. A settling model and a viscosity model, both functions of volume fraction, were selected to fit experimental settling and viscosity data, respectively. Simulations were performed with the open-source Nek5000 CFD program, which is based on the high-order spectral-finite-element method. Simulations were performed for the cellulose suspension undergoing mixing in a laboratory-scale vane mixer. The settled-bed heights predicted by the simulations were in semi-quantitative agreement with experimental observations. Further, the simulation results were in quantitative agreement with experimentally obtained torque and mixing-rate data, including a characteristic torque bifurcation. In future work, we plan to couple this CFD model with a reaction-kinetics model for the enzymatic digestion of cellulose, allowing us to predict enzymatic digestion performance for various mixing intensities and novel reactor designs.
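
    The vertical backbone of such a mixture model is a one-dimensional balance between particle diffusion and hindered settling for the solids volume fraction. A minimal explicit sketch of that balance (my own simplification; the paper solves the full 3-D problem with Nek5000's spectral elements):

```python
import numpy as np

def step_volume_fraction(phi, dz, dt, D, ws):
    """One explicit step of dphi/dt = -dF/dz with F = -D*dphi/dz - ws(phi)*phi,
    i.e. particle diffusion plus downward hindered settling (z points up).
    Zero-flux boundaries conserve total solids; dt must respect the usual
    explicit stability limits."""
    phi_face = 0.5 * (phi[:-1] + phi[1:])   # volume fraction at interior faces
    F = np.zeros(phi.size + 1)              # flux at every face, 0 at walls
    F[1:-1] = -D * np.diff(phi) / dz - ws(phi_face) * phi_face
    return phi - dt * np.diff(F) / dz

# Richardson-Zaki-style hindered settling, ws0 * (1 - phi)^n (made-up values):
ws = lambda p: 1e-4 * (1.0 - p) ** 5
phi = np.full(100, 0.05)                    # 5 vol% suspension column
phi = step_volume_fraction(phi, dz=1e-3, dt=0.1, D=1e-7, ws=ws)
```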

  1. Efficient physical embedding of topologically complex information processing networks in brains and computer circuits.

    Directory of Open Access Journals (Sweden)

    Danielle S Bassett

    2010-04-01

    Nervous systems are information processing networks that evolved by natural selection, whereas very large scale integrated (VLSI) computer circuits have evolved by commercially driven technology development. Here we follow the historic intuition that all physical information processing systems will share key organizational properties, such as modularity, that generally confer adaptivity of function. It has long been observed that modular VLSI circuits demonstrate an isometric scaling relationship between the number of processing elements and the number of connections, known as Rent's rule, which is related to the dimensionality of the circuit's interconnect topology and its logical capacity. We show that human brain structural networks, and the nervous system of the nematode C. elegans, also obey Rent's rule, and exhibit some degree of hierarchical modularity. We further show that the estimated Rent exponent of human brain networks, derived from MRI data, can explain the allometric scaling relations between gray and white matter volumes across a wide range of mammalian species, again suggesting that these principles of nervous system design are highly conserved. For each of these fractal modular networks, the dimensionality of the interconnect topology was greater than the 2 or 3 Euclidean dimensions of the space in which it was embedded. This relatively high complexity entailed extra cost in physical wiring: although all networks were economically or cost-efficiently wired, they did not strictly minimize wiring costs. Artificial and biological information processing systems may both evolve to optimize a trade-off between physical cost and topological complexity, resulting in the emergence of homologous principles of economical, fractal and modular design across many different kinds of nervous and computational networks.
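
    Rent's rule states that a partition containing N processing elements has on the order of E = k * N^p external connections; the exponent p is estimated as the slope of a log-log fit over partitions of increasing size. A sketch of that fit (illustrative code, not the authors' pipeline):

```python
import numpy as np

def rent_exponent(n_elements, n_boundary_edges):
    """Fit Rent's rule E = k * N^p by log-log least squares.

    n_elements[i] is the number of nodes inside partition i and
    n_boundary_edges[i] the number of connections crossing its boundary;
    the fitted slope p is the Rent exponent.
    """
    p, log_k = np.polyfit(np.log(n_elements), np.log(n_boundary_edges), 1)
    return p, np.exp(log_k)

# Synthetic partitions obeying E = 3 * N^0.75 recover p ~ 0.75:
N = np.array([8, 16, 32, 64, 128, 256])
E = 3 * N**0.75
print(rent_exponent(N, E))
```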

  2. An efficient algorithm to compute subsets of points in ℤ n

    OpenAIRE

    Pacheco Martínez, Ana María; Real Jurado, Pedro

    2012-01-01

    In this paper we present an algorithm, more efficient than that of [8], for computing subsets of points that are non-congruent under isometries. The algorithm can be used to reconstruct an object from its digital image. Both algorithms are compared, highlighting the improvements obtained in terms of CPU time.

  3. Defect correction and multigrid for an efficient and accurate computation of airfoil flows

    NARCIS (Netherlands)

    Koren, B.

    1988-01-01

    Results are presented for an efficient solution method for second-order accurate discretizations of the 2D steady Euler equations. The solution method is based on iterative defect correction. Several schemes are considered for the computation of the second-order defect. In each defect correction

  4. A computationally efficient 3D finite-volume scheme for violent liquid–gas sloshing

    CSIR Research Space (South Africa)

    Oxtoby, Oliver F

    2015-10-01

    We describe a semi-implicit volume-of-fluid free-surface-modelling methodology for flow problems involving violent free-surface motion. For efficient computation, a hybrid-unstructured edge-based vertex-centred finite volume discretisation...

  5. Innovative-Simplified Nuclear Power Plant Efficiency Evaluation with High-Efficiency Steam Injector System

    International Nuclear Information System (INIS)

    Shoji, Goto; Shuichi, Ohmori; Michitsugu, Mori

    2006-01-01

    It is possible to establish a simplified system with reduced space and total equipment weight by using high-efficiency Steam Injectors (SI) instead of low-pressure feedwater heaters in a Nuclear Power Plant (NPP). The SI works as a heat exchanger through direct contact between feedwater from the condensers and steam extracted from the turbines, and it can deliver a pressure higher than that of the supplied steam. Maintainability and reliability are also better than those of feedwater heaters because the SI has no moving parts. This paper describes the analysis of the heat balance, plant efficiency, and operation of this Innovative-Simplified NPP with high-efficiency SI. The plant efficiency and operation are compared between an 1100 MWe-class BWR system and the Innovative-Simplified BWR system with SI; the SI is represented in the heat-balance simulator by a simplified model. The results show that plant efficiencies of the Innovative-Simplified BWR system are almost equal to those of the original BWR. The present research is one of the projects carried out by Tokyo Electric Power Company, Toshiba Corporation, and six universities in Japan, funded by the Institute of Applied Energy (IAE) of Japan as a national public research-funded program. (authors)

  6. Inclusive vision for high performance computing at the CSIR

    CSIR Research Space (South Africa)

    Gazendam, A

    2006-02-01

    ...and computationally intensive applications. A number of different technologies and standards were identified as core to the open and distributed high-performance infrastructure envisaged...

  7. A metamaterial electromagnetic energy rectifying surface with high harvesting efficiency

    Directory of Open Access Journals (Sweden)

    Xin Duan

    2016-12-01

    A novel metamaterial rectifying surface (MRS) for electromagnetic energy capture and rectification with high harvesting efficiency is presented. It is fabricated on a three-layer printed circuit board, which comprises an array of periodic metamaterial particles in the shape of mirrored split rings, a metal ground, and integrated rectifiers employing Schottky diodes. Perfect impedance matching is engineered at two interfaces, i.e. one between free space and the surface, and the other between the metamaterial particles and the rectifiers, which are connected through optimally positioned vias. Therefore, the incident electromagnetic power is captured with almost no reflection by the metamaterial particles, then channeled maximally to the rectifiers, and finally converted to direct current efficiently. Moreover, the rectifiers are behind the metal ground, avoiding the disturbance of high power incident electromagnetic waves. Such a MRS working at 2.45 GHz is designed, manufactured and measured, achieving a harvesting efficiency up to 66.9% under an incident power density of 5 mW/cm2, compared with a simulated efficiency of 72.9%. This high harvesting efficiency makes the proposed MRS an effective receiving device in practical microwave power transmission applications.

  8. A metamaterial electromagnetic energy rectifying surface with high harvesting efficiency

    Science.gov (United States)

    Duan, Xin; Chen, Xing; Zhou, Lin

    2016-12-01

    A novel metamaterial rectifying surface (MRS) for electromagnetic energy capture and rectification with high harvesting efficiency is presented. It is fabricated on a three-layer printed circuit board, which comprises an array of periodic metamaterial particles in the shape of mirrored split rings, a metal ground, and integrated rectifiers employing Schottky diodes. Perfect impedance matching is engineered at two interfaces, i.e. one between free space and the surface, and the other between the metamaterial particles and the rectifiers, which are connected through optimally positioned vias. Therefore, the incident electromagnetic power is captured with almost no reflection by the metamaterial particles, then channeled maximally to the rectifiers, and finally converted to direct current efficiently. Moreover, the rectifiers are behind the metal ground, avoiding the disturbance of high power incident electromagnetic waves. Such a MRS working at 2.45 GHz is designed, manufactured and measured, achieving a harvesting efficiency up to 66.9% under an incident power density of 5 mW/cm2, compared with a simulated efficiency of 72.9%. This high harvesting efficiency makes the proposed MRS an effective receiving device in practical microwave power transmission applications.

  9. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  10. Efficiency Evaluation of Handling of Geologic-Geophysical Information by Means of Computer Systems

    Science.gov (United States)

    Nuriyahmetova, S. M.; Demyanova, O. V.; Zabirova, L. M.; Gataullin, I. I.; Fathutdinova, O. A.; Kaptelinina, E. A.

    2018-05-01

    Development of oil and gas resources under difficult geological, geographical, and economic conditions requires considerable financial outlay; careful justification of these costs and the application of the most promising directions and modern technologies are therefore necessary from the standpoint of cost efficiency. Ensuring high precision of regional and local forecasts and of reservoir modeling for hydrocarbon fields requires analyzing huge, constantly changing arrays of spatially distributed information. Solving this task calls for modern remote methods of surveying prospective oil-and-gas territories, the combined use of remote, nondestructive geologic-geophysical data and space-based Earth-sounding methods, and the most advanced technologies for processing them. In the article, the authors considered the experience of Russian and foreign companies in processing geologic-geophysical information by means of computer systems. The authors conclude that multidimensional analysis of the geologic-geophysical information space and effective planning and monitoring of exploration works require broad use of geoinformation technologies, as one of the most promising directions for achieving high profitability in the oil and gas industry.

  11. A computationally efficient OMP-based compressed sensing reconstruction for dynamic MRI

    International Nuclear Information System (INIS)

    Usman, M; Prieto, C; Schaeffter, T; Batchelor, P G; Odille, F; Atkinson, D

    2011-01-01

    Compressed sensing (CS) methods in MRI are computationally intensive. Thus, designing novel CS algorithms that can perform faster reconstructions is crucial for everyday applications. We propose a computationally efficient orthogonal matching pursuit (OMP)-based reconstruction, specifically suited to cardiac MR data. According to the energy distribution of a y-f space obtained from a sliding window reconstruction, we label the y-f space as static or dynamic. For static y-f space images, a computationally efficient masked OMP reconstruction is performed, whereas for dynamic y-f space images, standard OMP reconstruction is used. The proposed method was tested on a dynamic numerical phantom and two cardiac MR datasets. Depending on the field of view composition of the imaging data, compared to the standard OMP method, reconstruction speedup factors ranging from 1.5 to 2.5 are achieved. (note)
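
    A compact way to see where the savings come from: OMP repeatedly picks the atom most correlated with the residual and refits by least squares, so restricting the searchable atoms for static y-f regions shrinks the dominant correlation step. An illustrative sketch (my reading of "masked OMP", not the authors' code):

```python
import numpy as np

def omp(A, y, k, allowed=None):
    """Orthogonal matching pursuit: greedily add the atom most correlated
    with the residual, then least-squares refit on the selected support.
    `allowed` restricts the searchable atoms, mimicking a cheaper masked
    reconstruction for static regions."""
    if allowed is None:
        allowed = np.arange(A.shape[1])
    residual, support = y.astype(float), []
    for _ in range(k):
        j = allowed[np.argmax(np.abs(A[:, allowed].T @ residual))]
        support.append(j)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x = np.zeros(A.shape[1])
    x[support] = coeffs
    return x

# Recover a 2-sparse vector from a random 20x50 system:
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 7]] = [1.0, -2.0]
print(np.flatnonzero(omp(A, A @ x_true, k=2)))  # typically recovers [3 7]
```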

  12. Development of a computationally efficient algorithm for attitude estimation of a remote sensing satellite

    Science.gov (United States)

    Labibian, Amir; Bahrami, Amir Hossein; Haghshenas, Javad

    2017-09-01

    This paper presents a computationally efficient algorithm for attitude estimation of a remote sensing satellite. In this study, a gyro, a magnetometer, a sun sensor, and a star tracker are used in an Extended Kalman Filter (EKF) structure for the purpose of Attitude Determination (AD). However, utilizing all of the measurement data simultaneously in the EKF structure increases the computational burden: assuming n observation vectors, an inverse of a 3n×3n matrix is required for gain calculation. In order to solve this problem, an efficient version of the EKF, namely Murrell's version, is employed. This method utilizes the measurements separately at each sampling time for gain computation, so the inverse of a 3n×3n matrix is replaced by an inverse of a 3×3 matrix for each measurement vector. Moreover, gyro drifts over time can reduce the pointing accuracy; therefore, a calibration algorithm is utilized for estimating the main gyro parameters.
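
    The trick is purely algebraic: for uncorrelated sensors, processing one 3-vector measurement at a time yields the same update as the stacked filter while only ever inverting 3×3 innovation covariances. A minimal sketch under those assumptions (names are illustrative):

```python
import numpy as np

def sequential_update(x, P, measurements):
    """Murrell-style sequential EKF measurement update.

    measurements is an iterable of (z, h, H, R) per sensor: a 3-vector
    reading z, measurement function h, its 3xN Jacobian H, and 3x3 noise
    covariance R. Each sensor costs one 3x3 inverse instead of contributing
    to a single 3n x 3n inverse.
    """
    I = np.eye(len(x))
    for z, h, H, R in measurements:
        S = H @ P @ H.T + R               # 3x3 innovation covariance
        K = P @ H.T @ np.linalg.inv(S)    # Kalman gain, one small inverse
        x = x + K @ (z - h(x))
        P = (I - K @ H) @ P
    return x, P
```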

  13. A high performance scientific cloud computing environment for materials simulations

    OpenAIRE

    Jorissen, Kevin; Vila, Fernando D.; Rehr, John J.

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including...

  14. On the Computation of the Efficient Frontier of the Portfolio Selection Problem

    Directory of Open Access Journals (Sweden)

    Clara Calvo

    2012-01-01

    An easy-to-use procedure is presented for improving the ε-constraint method for computing the efficient frontier of the portfolio selection problem endowed with additional cardinality and semicontinuous variable constraints. The proposed method provides not only a numerical plotting of the frontier but also an analytical description of it, including the explicit equations of the arcs of parabola it comprises and the change points between them. This information is useful for performing a sensitivity analysis as well as for providing additional criteria to the investor in order to select an efficient portfolio. Computational results are provided to test the efficiency of the algorithm and to illustrate its applications. The procedure has been implemented in Mathematica.
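
    For orientation, the underlying unconstrained problem traces the frontier by minimizing portfolio variance at each target return; with only the two equality constraints it reduces to a small linear (KKT) system. A sketch of that baseline (the paper's contribution, cardinality and semicontinuous constraints, is deliberately omitted here):

```python
import numpy as np

def frontier_points(mu, Sigma, targets):
    """Markowitz frontier: for each target return r, solve
    min w'Sigma w  s.t.  w'mu = r, w'1 = 1  via its KKT linear system.
    Returns (risk, return) pairs; short positions are allowed."""
    n = len(mu)
    A = np.column_stack([mu, np.ones(n)])              # n x 2 constraint matrix
    KKT = np.block([[2 * Sigma, A], [A.T, np.zeros((2, 2))]])
    pts = []
    for r in targets:
        rhs = np.concatenate([np.zeros(n), [r, 1.0]])
        w = np.linalg.solve(KKT, rhs)[:n]              # optimal weights
        pts.append((np.sqrt(w @ Sigma @ w), r))
    return pts

mu = np.array([0.08, 0.12, 0.15])
Sigma = np.diag([0.04, 0.09, 0.16])                    # toy covariance
print(frontier_points(mu, Sigma, targets=[0.09, 0.11, 0.13]))
```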

  15. Efficient Machine Learning Approach for Optimizing Scientific Computing Applications on Emerging HPC Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Arumugam, Kamesh [Old Dominion Univ., Norfolk, VA (United States)

    2017-05-01

    Efficient parallel implementation of scientific applications on multi-core CPUs with accelerators such as GPUs and Xeon Phis is challenging. It requires exploiting the data-parallel architecture of the accelerator along with the vector pipelines of modern x86 CPU architectures, load balancing, and efficient memory transfer between different devices. It is relatively easy to meet these requirements for highly structured scientific applications. In contrast, a number of scientific and engineering applications are unstructured. Getting performance on accelerators for these applications is extremely challenging because many of them employ irregular algorithms which exhibit data-dependent control-flow and irregular memory accesses. Furthermore, these applications are often iterative with dependencies between steps, making it hard to parallelize across steps; as a result, parallelism in these applications is often limited to a single step. Numerical simulation of charged-particle beam dynamics is one such application, where the distribution of work and the memory access pattern at each time step are irregular. Applications with these properties tend to present significant branch and memory divergence, load imbalance between different processor cores, and poor compute and memory utilization. Prior research on parallelizing such irregular applications has focused on optimizing the irregular, data-dependent memory accesses and control-flow during a single step of the application independent of the other steps, with the assumption that these patterns are completely unpredictable. We observed that the structure of computation leading to control-flow divergence and irregular memory accesses in one step is similar to that in the next step; it is possible to predict this structure in the current step by observing the computation structure of previous steps. In this dissertation, we present novel machine learning based optimization techniques to address

  16. Highly efficient light management for perovskite solar cells.

    Science.gov (United States)

    Wang, Dong-Lin; Cui, Hui-Juan; Hou, Guo-Jiao; Zhu, Zhen-Gang; Yan, Qing-Bo; Su, Gang

    2016-01-06

    Organic-inorganic halide perovskite solar cells have enormous potential to impact the existing photovoltaic industry. As realizing a higher conversion efficiency of the solar cell is still the most crucial task, a great number of schemes have been proposed to minimize carrier loss by optimizing the electrical properties of perovskite solar cells. Here, we focus on another significant aspect: minimizing light loss by optimizing the light management in order to gain high efficiency for perovskite solar cells. In our scheme, slotted and inverted-prism-structured SiO2 layers are adopted to trap more light in the solar cells, and a better transparent conducting oxide layer is employed to reduce parasitic absorption. With such an implementation, the efficiency and the serviceable angle of the perovskite solar cell can be improved impressively. This proposal should shed new light on developing high-performance perovskite solar cells.

  17. A Low VSWR and High Efficiency Waveguide Feed Antenna Array

    Directory of Open Access Journals (Sweden)

    Zhao Xiao-Fang

    2018-01-01

    A low-VSWR, high-efficiency antenna array operating in the Ku band for satellite communications is presented in this paper. To achieve high radiation efficiency and broad enough bandwidth, all-metal radiation elements and a full-corporate waveguide feeding network are employed. As the general milling method is used in fabricating the multilayer antenna array, an E-plane waveguide feeding network is adopted here to suppress the wave leakage caused by imperfect connectivity between adjacent layers. A 4 × 8 element array prototype was fabricated and tested for verification. The measured results show a bandwidth of 6.9% (13.9-14.8 GHz) for VSWR < 1.5. Furthermore, an antenna gain higher than 22.2 dBi and an efficiency above 80% are also exhibited.

  18. Potential high efficiency solar cells: Applications from space photovoltaic research

    Science.gov (United States)

    Flood, D. J.

    1986-01-01

    NASA involvement in photovoltaic energy conversion research development and applications spans over two decades of continuous progress. Solar cell research and development programs conducted by the Lewis Research Center's Photovoltaic Branch have produced a sound technology base not only for the space program, but for terrestrial applications as well. The fundamental goals which have guided the NASA photovoltaic program are to improve the efficiency and lifetime, and to reduce the mass and cost of photovoltaic energy conversion devices and arrays for use in space. The major efforts in the current Lewis program are on high efficiency, single crystal GaAs planar and concentrator cells, radiation hard InP cells, and superlattice solar cells. A brief historical perspective of accomplishments in high efficiency space solar cells will be given, and current work in all of the above categories will be described. The applicability of space cell research and technology to terrestrial photovoltaics will be discussed.

  19. The thermodynamic characteristics of high efficiency, internal-combustion engines

    International Nuclear Information System (INIS)

    Caton, Jerald A.

    2012-01-01

    Highlights: The thermodynamics of an automotive engine are determined using a cycle simulation. The net indicated thermal efficiency increased from 37.0% to 53.9%. High compression ratio, lean mixtures and high EGR were the important features. Efficiency increased due to lower heat losses and increased work conversion. The nitric oxides were essentially zero due to the low combustion temperatures. Abstract: Recent advancements have demonstrated new combustion modes for internal combustion engines that exhibit low nitric oxide emissions and high thermal efficiencies. These new combustion modes involve various combinations of stratification, lean mixtures, high levels of EGR, multiple injections, variable valve timings, two fuels, and other such features. Although the exact combination of these features that provides the best design is not yet clear, the results (low emissions with high efficiencies) are of major interest. The current work is directed at determining some of the fundamental thermodynamic reasons for the relatively high efficiencies and at quantifying these factors. Both the first and second laws are used in this assessment. An automotive engine (5.7 l) which included some of the features mentioned above (e.g., high compression ratio, lean mixtures, and high EGR) was evaluated using a thermodynamic cycle simulation. These features were examined for a moderate-load (bmep = 900 kPa), moderate-speed (2000 rpm) condition. Through lean operation, high EGR levels, high compression ratio and other features, the net indicated thermal efficiency increased from 37.0% to 53.9%. These increases are explained in a step-by-step fashion. The major reasons for the improvements include the higher compression ratio and the dilute charge (lean mixture, high EGR). The dilute charge resulted in lower temperatures, which in turn resulted in lower heat loss. In addition, the lower temperatures resulted in higher ratios of the specific heats, which
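
    The two levers the abstract highlights, compression ratio and charge dilution, already appear in the air-standard Otto efficiency eta = 1 - r**(1 - gamma): dilute (lean, high-EGR) charges burn cooler and keep gamma, the ratio of specific heats, higher. A back-of-the-envelope illustration (idealized cycle with made-up operating points, not the paper's full simulation):

```python
def otto_efficiency(r, gamma):
    """Air-standard Otto efficiency: eta = 1 - r**(1 - gamma)."""
    return 1.0 - r ** (1.0 - gamma)

# Baseline-ish vs. high-compression dilute operation (illustrative values):
print(otto_efficiency(r=9, gamma=1.30))    # ~0.48
print(otto_efficiency(r=14, gamma=1.35))   # ~0.60
```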

  20. Low cost highly available digital control computer

    International Nuclear Information System (INIS)

    Silvers, M.W.

    1986-01-01

    When designing digital controllers for critical plant control, it is important to provide several features, among them reliability, availability, maintainability, environmental protection, and low cost. An examination of several applications has led to a design that can be produced for approximately $20,000 (1000 control points). This design is compatible with modern concepts in distributed and hierarchical control. The canonical controller element is a dual-redundant, self-checking computer that communicates with a cross-strapped, electrically isolated input/output system. The input/output subsystem comprises multiple intelligent input/output cards. These cards accept commands from the primary processor, which are validated, executed, and acknowledged. Each card may be hot-replaced to facilitate sparing. The implementation of the dual-redundant computer architecture is discussed. Called the FS-86, this computer can be used for a variety of applications. It has most recently found application in the upgrade of San Francisco's Bay Area Rapid Transit (BART) train control currently in progress and has been proposed for feedwater control in a boiling water reactor.

  1. High-throughput sample adaptive offset hardware architecture for high-efficiency video coding

    Science.gov (United States)

    Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin

    2018-03-01

    A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the High Efficiency Video Coding standard is presented. First, an implementation-friendly, simplified bitrate estimation method for the rate-distortion cost calculation is proposed to reduce the computational complexity of the SAO mode decision. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filter architecture to improve the system throughput and filtering speed. Experimental results show that the proposed architecture can achieve up to 48% higher throughput in comparison with prior work. The proposed architecture reaches an operating clock frequency of 297 MHz with a TSMC 65-nm library and meets the real-time requirement of the in-loop filters for the 8K × 4K video format at 132 fps.
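
    The quoted operating point implies a substantial per-cycle workload, which is why the parallel strategies matter. A quick sanity check on the numbers (assuming 7680 × 4320 luma samples per 8K frame):

```python
# 8K x 4K at 132 fps against a 297 MHz clock:
samples_per_second = 7680 * 4320 * 132   # ~4.38e9 luma samples/s
cycles_per_second = 297e6
print(samples_per_second / cycles_per_second)   # ~14.7 samples per clock cycle
```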

  2. A nuclear standard high-efficiency adsorber for iodine

    International Nuclear Information System (INIS)

    Wang Jianmin; Qian Yinge

    1988-08-01

    The structure of a nuclear-standard high-efficiency iodine adsorber, its adsorbent, and their performance are introduced. The performance and structure were compared with similar products from other firms. The results show that the leakage rate is less than 0.005%.

  3. Efficiency criteria for high reliability measured system structures

    International Nuclear Information System (INIS)

    Sal'nikov, N.L.

    2012-01-01

    Structural redundancy procedures are usually used to develop high-reliability measurement systems. To estimate the efficiency of such structures, criteria for comparing different systems have been developed. It is thus possible to develop a more exact system by inspecting the stochastic characteristics of the redundant system's data units in accordance with the developed criteria [ru]

  4. Optimization of high-efficiency components; Optimieren auf hohem Niveau

    Energy Technology Data Exchange (ETDEWEB)

    Neumann, Eva

    2009-07-01

    High efficiency is a common feature of modern inverters and is not a unique selling proposition. Other factors that influence the buyer's decision are cost reduction, reliability and service, optimum grid integration, and the challenges of competing thin-film technology. (orig.)

  5. Orion, a high efficiency 4π neutron detector

    International Nuclear Information System (INIS)

    Crema, E.; Piasecki, E.; Wang, X.M.; Doubre, H.; Galin, J.; Guerreau, D.; Pouthas, J.; Saint-Laurent, F.

    1990-01-01

    In intermediate-energy heavy-ion collisions, the multiplicity of emitted neutrons is strongly connected to energy dissipation and to impact parameter. We present the 4π detector ORION, a high-efficiency liquid scintillator detector which provides event-wise information on the multiplicity of the measured neutrons and on their spatial distribution [fr]

  6. High efficiency hydrodynamic DNA fragmentation in a bubbling system

    NARCIS (Netherlands)

    Li, Lanhui; Jin, Mingliang; Sun, Chenglong; Wang, Xiaoxue; Xie, Shuting; Zhou, Guofu; Van Den Berg, Albert; Eijkel, Jan C.T.; Shui, Lingling

    2017-01-01

    DNA fragmentation down to a precise fragment size is important for biomedical applications, disease determination, gene therapy and shotgun sequencing. In this work, a cheap, easy-to-operate and highly efficient DNA fragmentation method is demonstrated based on hydrodynamic shearing in a bubbling system.

  7. High efficiency confinement mode by electron cyclotron heating

    International Nuclear Information System (INIS)

    Funahashi, Akimasa

    1987-01-01

    At the medium-size nuclear fusion experiment facility JFT-2M of the Japan Atomic Energy Research Institute, research on high-efficiency plasma confinement modes has been advanced, and in an experiment in June 1987, the formation of a high-efficiency confinement mode was successfully controlled by electron cyclotron heating for the first time in the world. This result further advanced the control of the formation of a high-efficiency plasma confinement mode and the elucidation of the physical mechanism of that mode, and promoted the research and development of plasma heating by electron cyclotron heating. In this paper, the recent results of the research on high-efficiency confinement modes at the JFT-2M are reported, and the role of the JFT-2M and the experiments on the improvement of core plasma performance are outlined. Plasma temperatures exceeding 100 million deg C have now been attained in large tokamaks, and in medium-size facilities, various measures for improving confinement performance are to be brought forth and their scientific basis elucidated to assist the large facilities. The JFT-2M started operation in April 1983 and has accumulated results steadily since then. (Kako, I.)

  8. Super Boiler: First Generation, Ultra-High Efficiency Firetube Boiler

    Energy Technology Data Exchange (ETDEWEB)

    None

    2006-06-01

    This factsheet describes a research project whose goal is to develop and demonstrate a first-generation ultra-high-efficiency, ultra-low emissions, compact gas-fired package boiler (Super Boiler), and formulate a long-range RD&D plan for advanced boiler technology out to the year 2020.

  9. High-efficient solar cells with porous silicon

    International Nuclear Information System (INIS)

    Migunova, A.A.

    2002-01-01

    It has been shown that porous silicon is a multifunctional, high-efficiency coating for silicon solar cells: it modifies the surface and combines in itself antireflection and passivation properties. The different optoelectronic effects in solar cells with porous silicon were considered. Comparative parameters of uncoated photodetectors as well as of solar cells with porous silicon and with other coatings are presented. (author)

  10. Benefits of high aerodynamic efficiency to orbital transfer vehicles

    Science.gov (United States)

    Andrews, D. G.; Norris, R. B.; Paris, S. W.

    1984-01-01

    The benefits and costs of high aerodynamic efficiency on aeroassisted orbital transfer vehicles (AOTV) are analyzed. Results show that a high lift-to-drag (L/D) AOTV can achieve significant velocity savings relative to low-L/D aerobraked OTVs when traveling round trip between low Earth orbit (LEO) and alternate orbits as high as geosynchronous Earth orbit (GEO). Trajectory analysis is used to show the impact of thermal protection system technology and the importance of the lift loading coefficient on vehicle performance. Possible improvements in AOTV subsystem technologies are assessed and their impact on vehicle inert weight and performance noted. Finally, the performance of high-L/D AOTV concepts is compared with that of low-L/D aeroassisted and all-propulsive OTV concepts to assess the benefits of aerodynamic efficiency for this class of vehicle.

  11. High performance computing system in the framework of the Higgs boson studies

    CERN Document Server

    Belyaev, Nikita; The ATLAS collaboration

    2017-01-01

    Higgs boson physics is one of the most important and promising fields of study in modern High Energy Physics. To perform precision measurements of the Higgs boson properties, fast and efficient instruments for Monte Carlo event simulation are required. Due to the increasing amount of data and the growing complexity of the simulation software tools, the computing resources currently available for Monte Carlo simulation on the LHC GRID are not sufficient. One possibility to address this shortfall of computing resources is the use of institute computer clusters, commercial computing resources, and supercomputers. In this paper, a brief description of Higgs boson physics and of Monte Carlo generation and event simulation techniques is presented. A description of modern high performance computing systems and tests of their performance are also discussed. These studies have been performed on the Worldwide LHC Computing Grid and the Kurchatov Institute Data Processing Center, including Tier...

  12. High efficiency inductive output tubes with intense annular electron beams

    Science.gov (United States)

    Appanam Karakkad, J.; Matthew, D.; Ray, R.; Beaudoin, B. L.; Narayan, A.; Nusinovich, G. S.; Ting, A.; Antonsen, T. M.

    2017-10-01

    For mobile ionospheric heaters, it is necessary to develop highly efficient RF sources capable of delivering radiation in the frequency range from 3 to 10 MHz with an average power at the megawatt level. A promising source capable of offering these parameters is a grid-less version of the inductive output tube (IOT), also known as a klystrode. In this paper, studies analyzing the efficiency of grid-less IOTs are described. The basic trade-offs needed to reach high efficiency are investigated; in particular, the trade-off between the peak current and the duration of the current micro-pulse is analyzed. A particle-in-cell code is used to self-consistently calculate the distribution in axial and transverse momentum and in total electron energy from the cathode to the collector. The efficiency of IOTs with collectors of various configurations is examined. It is shown that the efficiency of IOTs can be in the 90% range even without using depressed collectors.

  13. How high are option values in energy-efficiency investments?

    International Nuclear Information System (INIS)

    Sanstad, A.H.; Blumstein, C.; Stoft, S.E.; California Univ., Berkeley, CA,

    1995-01-01

    High implicit discount rates in consumers' energy-efficiency investments have long been a source of controversy. In several recent papers, Hassett and Metcalf argue that the uncertainty and irreversibility attendant to such investments, and the resulting option value, account for this anomalously high implicit discounting. Using their model and data, we show that, to the contrary, their analysis falls well short of providing an explanation of this pattern. (author)

  14. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing, and policy planners, with useful bibliometric results for an assessment of research activities. To achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles of the Scopus database from Elsevier covering the time period of 2004-2013. We extracted the author rank in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.
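
    Constructing such a network is straightforward: authors are nodes, and every pair of coauthors on a paper gets an edge weighted by how often they publish together. A minimal sketch using networkx (illustrative; the paper does not state its tooling):

```python
import itertools
import networkx as nx

def coauthorship_graph(papers):
    """Build a weighted co-authorship network from a list of author lists."""
    G = nx.Graph()
    for authors in papers:
        for a, b in itertools.combinations(sorted(set(authors)), 2):
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1   # co-published again: bump the weight
            else:
                G.add_edge(a, b, weight=1)
    return G

G = coauthorship_graph([["Kim", "Lee"], ["Kim", "Lee", "Park"]])
print(G["Kim"]["Lee"]["weight"])   # 2: they co-published twice
```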

  15. High performance computations using dynamical nucleation theory

    International Nuclear Information System (INIS)

    Windus, T L; Crosby, L D; Kathmann, S M

    2008-01-01

    Chemists continue to explore the use of very large computations to perform simulations that describe the molecular-level physics of critical challenges in science. In this paper, we describe the Dynamical Nucleation Theory Monte Carlo (DNTMC) model - a model for determining molecular-scale nucleation rate constants - and its parallel capabilities. The potential for bottlenecks and the challenges of running on future petascale or larger resources are delineated. A 'master-slave' solution is proposed to scale to the petascale and will be developed in the NWChem software. In addition, mathematical and data analysis challenges are described.

  16. Computer-aided modeling framework for efficient model development, analysis and identification

    DEFF Research Database (Denmark)

    Heitzig, Martina; Sin, Gürkan; Sales Cruz, Mauricio

    2011-01-01

    Model-based computer-aided product-process engineering has attained increased importance in a number of industries, including pharmaceuticals, petrochemicals, fine chemicals, polymers, biotechnology, food, energy, and water. This trend is set to continue due to the substantial benefits computer-aided methods introduce. The key prerequisite of computer-aided product-process engineering is, however, the availability of models of different types, forms, and application modes. The development of the models required for the systems under investigation tends to be a challenging and time-consuming task. ... The methodology has been implemented into a computer-aided modeling framework, which combines the expert skills, tools, and database connections required for the different steps of the model development work-flow, with the goal of increasing the efficiency of the modeling process. The framework has two main...

  17. In silico evaluation of highly efficient organic light-emitting materials

    Science.gov (United States)

    Kwak, H. Shaun; Giesen, David J.; Hughes, Thomas F.; Goldberg, Alexander; Cao, Yixiang; Gavartin, Jacob; Dixon, Steve; Halls, Mathew D.

    2016-09-01

    Design and development of highly efficient organic and organometallic dopants is one of the central challenges in organic light-emitting diode (OLED) technology. Recent advances in computational materials science have made it possible to apply a computer-aided evaluation and screening framework directly to the design space of OLEDs. In this work, we showcase two major components of the latest in silico framework for the development of organometallic phosphorescent dopants: (1) rapid screening of dopants by machine-learned quantum mechanical models, and (2) phosphorescence lifetime predictions with spin-orbit coupled TDDFT calculations (SOC-TDDFT). The combined virtual screening and evaluation significantly widens the design space for highly efficient phosphorescent dopants, with unbiased measures to evaluate the performance of the materials from first principles.

  18. Toward high-efficiency and detailed Monte Carlo simulation study of the granular flow spallation target

    Science.gov (United States)

    Cai, Han-Jie; Zhang, Zhi-Lei; Fu, Fen; Li, Jian-Yang; Zhang, Xun-Chao; Zhang, Ya-Ling; Yan, Xue-Song; Lin, Ping; Xv, Jian-Ya; Yang, Lei

    2018-02-01

    The dense granular flow spallation target is a new target concept chosen for the Accelerator-Driven Subcritical (ADS) project in China. For the R&D of this target concept, a dedicated Monte Carlo (MC) program named GMT was developed to perform simulation studies of the beam-target interaction. Owing to the complexity of the target geometry, the MC simulation of particle tracks is computationally very expensive, so improving computational efficiency is essential for detailed MC simulation studies of the dense granular target. Here we present the special design of the GMT program and its high-efficiency performance. In addition, the speedup potential of the GPU-accelerated spallation models is discussed.

  19. A hybrid model for the computationally-efficient simulation of the cerebellar granular layer

    Directory of Open Access Journals (Sweden)

    Anna eCattani

    2016-04-01

    The aim of the present paper is to efficiently describe the membrane potential dynamics of neural populations formed by species having a high density difference in specific brain areas. We propose a hybrid model whose main ingredients are a conductance-based model (ODE system) and its continuous counterpart (PDE system), obtained through a limit process in which the number of neurons confined in a bounded region of the brain tissue is sent to infinity. Specifically, in the discrete model each cell is described by a set of time-dependent variables, whereas in the continuum model cells are grouped into populations that are described by a set of continuous variables. Communications between populations, which translate into interactions among the discrete and the continuous models, are the essence of the hybrid model we present here. The cerebellum and cerebellum-like structures show in their granular layer a large difference in the relative density of neuronal species, making them a natural testing ground for our hybrid model. By reconstructing the ensemble activity of the cerebellar granular layer network and by comparing our results to a more realistic computational network, we demonstrate that our description of the network activity, even though it is not biophysically detailed, is still capable of reproducing salient features of neural network dynamics. Our modeling approach yields a significant computational cost reduction, increasing the simulation speed at least 270 times. The hybrid model reproduces interesting dynamics such as local microcircuit synchronization, traveling waves, center-surround and time-windowing.

  20. Efficiency and Loading Evaluation of High Efficiency Mist Eliminators (HEME) - 12003

    Energy Technology Data Exchange (ETDEWEB)

    Giffin, Paxton K.; Parsons, Michael S.; Waggoner, Charles A. [Institute for Clean Energy Technology, Mississippi State University, 205 Research Blvd Starkville, MS 39759 (United States)

    2012-07-01

    High efficiency mist eliminators (HEME) are filters primarily used to remove moisture and/or liquid aerosols from an air stream. HEME elements are designed to reduce aerosol and particulate load on primary High Efficiency Particulate Air (HEPA) filters and to have a liquid particle removal efficiency of approximately 99.5% for aerosols down to sub-micron size particulates. The investigation presented here evaluates the loading capacity of the element in the absence of a water spray cleaning system. The theory is that without the cleaning system, the HEME element will suffer rapid buildup of solid aerosols, greatly reducing the particle loading capacity. Evaluation consists of challenging the element with a waste surrogate dry aerosol and di-octyl phthalate (DOP) at varying intervals of differential pressure to examine the filtering efficiency of three different element designs at three different media velocities. Also, the elements are challenged with a liquid waste surrogate using Laskin nozzles and large dispersion nozzles. These tests allow the loading capacity of the unit to be determined and the effectiveness of washing down the interior of the elements to be evaluated. (authors)