WorldWideScience

Sample records for high coding efficiency

  1. High efficiency video coding coding tools and specification

    CERN Document Server

    Wien, Mathias

    2015-01-01

The video coding standard High Efficiency Video Coding (HEVC) targets improved compression performance for video resolutions of HD and beyond, providing Ultra HD video at compressed bit rates similar to those of HD video encoded with the well-established video coding standard H.264/AVC. Based on known concepts, new coding structures and improved coding tools have been developed and specified in HEVC. The standard is expected to be taken up easily by established industry as well as new endeavors, answering the needs of today's connected and ever-evolving online world. This book presents the High Efficiency Video Coding standard and explains it in clear and coherent language. It provides a comprehensive and consistently written description, all of a piece. The book targets both newcomers to video coding and experts in the field. While providing sections with introductory text for the beginner, it also serves as a well-arranged reference book for the expert. The book provides a comprehensive reference for th...

  2. Complexity-aware high efficiency video coding

    CERN Document Server

    Correa, Guilherme; Agostini, Luciano; Cruz, Luis A da Silva

    2016-01-01

This book discusses the computational complexity of High Efficiency Video Coding (HEVC) encoders, with coverage extending from the analysis of HEVC compression efficiency and computational complexity to the reduction and scaling of its encoding complexity. After an introduction to the topic and a review of the state-of-the-art research in the field, the authors provide a detailed analysis of the compression efficiency and computational complexity of the HEVC encoding tools. Readers will benefit from a set of algorithms for scaling the computational complexity of HEVC encoders, all of which take advantage of the flexibility of the frame partitioning structures allowed by the standard. The authors also provide a set of early termination methods based on data mining and machine learning techniques, which reduce the computational complexity required to find the best frame partitioning structures. The applicability of the proposed methods is finally exemplified with an encoding time control system that emplo...

  3. The emerging High Efficiency Video Coding standard (HEVC)

    International Nuclear Information System (INIS)

    Raja, Gulistan; Khan, Awais

    2013-01-01

High definition video (HDV) is becoming more popular by the day. This paper describes a performance analysis of the latest video coding standard, High Efficiency Video Coding (HEVC). HEVC is designed to fulfil the requirements of future high definition video. In this paper, three configurations of HEVC (intra only, low delay, and random access) are analyzed using various 480p, 720p, and 1080p high definition test video sequences. Simulation results show the superior objective and subjective quality of HEVC.

  4. Improved entropy encoding for high efficient video coding standard

    Directory of Open Access Journals (Sweden)

    B.S. Sunil Kumar

    2018-03-01

The High Efficiency Video Coding (HEVC) standard has better coding efficiency, but its encoding performance has to be improved to meet the demands of growing multimedia applications. This paper improves the standard entropy encoding by introducing optimized weighting parameters, so that a higher rate of compression can be accomplished than with the standard entropy encoding. The optimization is performed using the recently introduced firefly algorithm. The experimentation is carried out using eight benchmark video sequences, and the PSNR for varying rates of data transmission is investigated. A comparative analysis based on the performance statistics is made against the standard entropy encoding. From the obtained results, it is clear that the fidelity of the decoded video sequence is preserved far better by the proposed method, even though the compression rate is increased. Keywords: Entropy, Encoding, HEVC, PSNR, Compression
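The abstract above evaluates PSNR across transmission rates. As a point of reference, PSNR is derived directly from the mean squared error; a minimal pure-Python sketch (the toy sample values are illustrative, not from the paper's test set):

```python
import math

def psnr(original, decoded, max_val=255):
    """Peak signal-to-noise ratio (dB) between two equal-length sample lists."""
    if len(original) != len(decoded):
        raise ValueError("sequences must have equal length")
    mse = sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)
    if mse == 0:
        return float("inf")  # identical signals: infinite PSNR
    return 10 * math.log10(max_val ** 2 / mse)

# Toy 8-bit samples and a slightly lossy reconstruction
ref = [52, 55, 61, 59, 79, 61, 76, 61]
dec = [52, 54, 61, 60, 78, 62, 76, 60]
quality = psnr(ref, dec)
```

Higher compression generally lowers this number, which is the trade-off the paper measures.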

  5. Novel Intermode Prediction Algorithm for High Efficiency Video Coding Encoder

    Directory of Open Access Journals (Sweden)

    Chan-seob Park

    2014-01-01

The Joint Collaborative Team on Video Coding (JCT-VC) has developed the next-generation video coding standard called High Efficiency Video Coding (HEVC). In HEVC, the block structure comprises three units: the coding unit (CU), prediction unit (PU), and transform unit (TU). The CU is the basic unit of region splitting, analogous to the macroblock (MB). Each CU is recursively split into four equally sized blocks, starting from the tree block. In this paper, we propose a fast CU depth decision algorithm for HEVC to reduce its computational complexity. For the 2N×2N PU, the proposed method compares rate-distortion (RD) costs and determines the depth using the compared information. Moreover, in order to speed up the encoding time, an efficient merge SKIP detection method is additionally developed based on the contextual mode information of neighboring CUs. Experimental results show that the proposed algorithm achieves an average time saving of 44.84% in the random access (RA) Main profile configuration with the HEVC test model (HM 10.0) reference software. Compared to the HM 10.0 encoder, a small BD-bitrate loss of 0.17% is also observed, without significant loss of image quality.
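To make the CU-level RD-cost comparison concrete, here is a toy recursive split decision in the spirit of the HEVC quadtree; the cost model `rd_cost` is entirely hypothetical, standing in for the encoder's actual RD evaluation:

```python
def decide_split(rd_cost, depth=0, max_depth=3):
    """Toy CU quadtree decision: at each depth, compare the RD cost of
    coding the block whole against the total cost of its four sub-blocks,
    and keep the cheaper option, as exhaustive HEVC mode decision does.
    Returns (best_cost, split_flag) for the block at this depth."""
    cost_whole = rd_cost(depth)
    if depth == max_depth:
        return cost_whole, False
    # Four equally sized children, each decided recursively one level deeper
    cost_split = sum(decide_split(rd_cost, depth + 1, max_depth)[0]
                     for _ in range(4))
    if cost_split < cost_whole:
        return cost_split, True   # splitting pays off at this depth
    return cost_whole, False

# Hypothetical cost model where finer partitions capture detail well
cost, split = decide_split(lambda d: 100.0 * 0.2 ** d)
```

Fast algorithms like the one in this record aim to skip most of these recursive evaluations while reaching (nearly) the same decisions.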

  6. High Efficiency EBCOT with Parallel Coding Architecture for JPEG2000

    Directory of Open Access Journals (Sweden)

    Chiang Jen-Shiun

    2006-01-01

This work presents a parallel context-modeling coding architecture and a matching arithmetic coder (MQ-coder) for the embedded block coding (EBCOT) unit of the JPEG2000 encoder. Tier-1 of the EBCOT consumes most of the computation time in a JPEG2000 encoding system. The proposed parallel architecture can increase the throughput rate of the context modeling. To match the high throughput rate of the parallel context-modeling architecture, an efficient pipelined architecture for the context-based adaptive arithmetic encoder is proposed. This encoder for JPEG2000 can work at 180 MHz, encoding one symbol each cycle. Compared with previous context-modeling architectures, our parallel architectures can improve the throughput rate by up to 25%.

  7. High efficiency video coding (HEVC) algorithms and architectures

    CERN Document Server

    Budagavi, Madhukar; Sullivan, Gary

    2014-01-01

    This book provides developers, engineers, researchers and students with detailed knowledge about the High Efficiency Video Coding (HEVC) standard. HEVC is the successor to the widely successful H.264/AVC video compression standard, and it provides around twice as much compression as H.264/AVC for the same level of quality. The applications for HEVC will not only cover the space of the well-known current uses and capabilities of digital video – they will also include the deployment of new services and the delivery of enhanced video quality, such as ultra-high-definition television (UHDTV) and video with higher dynamic range, wider range of representable color, and greater representation precision than what is typically found today. HEVC is the next major generation of video coding design – a flexible, reliable and robust solution that will support the next decade of video applications and ease the burden of video on world-wide network traffic. This book provides a detailed explanation of the various parts ...

  8. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    Science.gov (United States)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

The latest High Efficiency Video Coding (HEVC) standard significantly increases encoding complexity in exchange for improved coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Given the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme, chosen through offline statistics, is applied at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very wide complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, average gains of 0.63 and 0.17 dB in BD-PSNR are observed for 18 sequences when the target complexity is around 40%.

  9. Generation of Efficient High-Level Hardware Code from Dataflow Programs

    OpenAIRE

    Siret , Nicolas; Wipliez , Matthieu; Nezan , Jean François; Palumbo , Francesca

    2012-01-01

High-level synthesis (HLS) aims at reducing the time-to-market by providing an automated design process that interprets and compiles high-level abstraction programs into hardware. However, HLS tools still face limitations regarding the performance of the generated code, due to the difficulties of compiling input imperative languages into efficient hardware code. Moreover, the hardware code generated by HLS tools is usually target-dependent and at a low level of abstraction (i.e. gate-level...

  10. High-throughput sample adaptive offset hardware architecture for high-efficiency video coding

    Science.gov (United States)

    Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin

    2018-03-01

A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the High Efficiency Video Coding (HEVC) standard is presented. First, an implementation-friendly, simplified bitrate estimation method for rate-distortion cost calculation is proposed to reduce the computational complexity of the SAO mode decision. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filter architecture to improve the system throughput and filtering speed. Experimental results show that the proposed in-loop filter architecture can achieve up to 48% higher throughput in comparison with prior work. The proposed architecture can reach a high operating clock frequency of 297 MHz with a TSMC 65-nm library and meets the real-time requirement of the in-loop filters for the 8K×4K video format at 132 fps.
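For readers unfamiliar with SAO, its edge-offset mode classifies each sample against its two neighbours along a chosen direction into one of five categories, and each category receives its own correction offset. A minimal sketch of that classification rule (category numbering follows the HEVC specification):

```python
def sao_edge_category(left, cur, right):
    """SAO edge-offset classification of one sample against its two
    neighbours along the chosen direction; category 0 gets no offset."""
    if cur < left and cur < right:
        return 1  # local minimum (valley)
    if (cur < left and cur == right) or (cur == left and cur < right):
        return 2  # concave corner
    if (cur > left and cur == right) or (cur == left and cur > right):
        return 3  # convex corner
    if cur > left and cur > right:
        return 4  # local maximum (peak)
    return 0  # monotonic or flat: leave the sample unchanged
```

The hardware cost the paper addresses comes from running this per-sample test, for several candidate directions, over every block while also searching the best offsets.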

  11. A reduced complexity highly power/bandwidth efficient coded FQPSK system with iterative decoding

    Science.gov (United States)

    Simon, M. K.; Divsalar, D.

    2001-01-01

    Based on a representation of FQPSK as a trellis-coded modulation, this paper investigates the potential improvement in power efficiency obtained from the application of simple outer codes to form a concatenated coding arrangement with iterative decoding.

  12. Energy efficient rateless codes for high speed data transfer over free space optical channels

    Science.gov (United States)

    Prakash, Geetha; Kulkarni, Muralidhar; Acharya, U. S.

    2015-03-01

Terrestrial Free Space Optical (FSO) links transmit information using the atmosphere (free space) as a medium. In this paper, we investigate the use of Luby Transform (LT) codes as a means to mitigate the data corruption induced by an imperfect channel, which usually takes the form of lost or corrupted packets. LT codes, a class of Fountain codes, can be used independently of the channel rate, and as many codewords as required can be generated to recover all the message bits irrespective of channel performance. Achieving error-free high data rates with limited energy resources is possible with FSO systems if error correction codes with minimal power overhead are used. We also employ Binary Phase Shift Keying (BPSK), with provision for threshold modification, together with optimized LT codes using belief propagation for decoding. These techniques provide additional protection even under strong turbulence regimes. Automatic Repeat Request (ARQ) is another method of improving link reliability, but its performance is limited by the number of retransmissions and the corresponding time delay. We show through theoretical computations and simulations that LT codes consume less energy per bit. We validate the feasibility of using energy-efficient LT codes instead of ARQ for FSO links in optical wireless sensor networks within eye safety limits.
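The rateless idea behind LT codes is easy to sketch: each coded symbol is the XOR of a random subset of message blocks, and a peeling (belief-propagation) decoder recovers the message once enough symbols arrive, regardless of which ones were lost. A toy illustration; the degree distribution below is a simple stand-in, not the robust soliton distribution a real system would use:

```python
import random

def lt_encode(blocks, n_symbols, seed=0):
    """Generate rateless coded symbols: each is the XOR of a random subset
    of message blocks, tagged with the indices it covers."""
    rng = random.Random(seed)
    coded = []
    for _ in range(n_symbols):
        d = rng.choice([1, 1, 2, 2, 3])          # toy degree distribution
        idx = frozenset(rng.sample(range(len(blocks)), d))
        val = 0
        for i in idx:
            val ^= blocks[i]
        coded.append((idx, val))
    return coded

def lt_decode(n_blocks, coded):
    """Peeling decoder: repeatedly resolve symbols with exactly one
    unknown block, substituting recovered blocks into the rest."""
    known = {}
    progress = True
    while progress and len(known) < n_blocks:
        progress = False
        for idx, val in coded:
            unknown = [i for i in idx if i not in known]
            if len(unknown) == 1:
                v = val
                for i in idx:
                    if i in known:
                        v ^= known[i]  # cancel already-recovered blocks
                known[unknown[0]] = v
                progress = True
    return known  # with enough symbols, typically the whole message

msg = [0x3A, 0x7F, 0x01, 0x55]
recovered = lt_decode(len(msg), lt_encode(msg, 12))
```

Because any sufficiently large set of received symbols suffices, no retransmission protocol (ARQ) is needed, which is the energy argument the paper makes.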

  13. Investigating the structure preserving encryption of high efficiency video coding (HEVC)

    Science.gov (United States)

    Shahid, Zafar; Puech, William

    2013-02-01

This paper presents a novel method for the real-time protection of the new emerging High Efficiency Video Coding (HEVC) standard. Structure-preserving selective encryption is performed in the CABAC entropy coding module of HEVC, which differs significantly from the CABAC entropy coding of H.264/AVC. In the CABAC of HEVC, exponential Golomb coding is replaced by truncated Rice (TR) coding up to a specific value for the binarization of transform coefficients. Selective encryption is performed using the AES cipher in cipher feedback mode on a plaintext of binstrings in a context-aware manner. The encrypted bitstream has exactly the same bit rate and is format compliant. Experimental evaluation and security analysis of the proposed algorithm are performed on several benchmark video sequences containing different combinations of motion, texture, and objects.

  14. Bandwidth efficient coding

    CERN Document Server

    Anderson, John B

    2017-01-01

    Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.

  15. Efficient convolutional sparse coding

    Science.gov (United States)

    Wohlberg, Brendt

    2017-06-20

Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M³N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
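The frequency-domain trick this abstract describes rests on the convolution theorem: circular convolution becomes elementwise multiplication after an FFT, which diagonalizes the operators in ADMM's linear system and drives the cost from O(M³N) toward O(MN log N). A small numpy check of that identity (illustrative only, not the ADMM solver itself):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)   # signal
d = rng.standard_normal(N)   # one dictionary filter, zero-padded to length N

# Direct circular convolution: O(N^2) per filter
direct = np.array([sum(d[k] * x[(n - k) % N] for k in range(N))
                   for n in range(N)])

# Same result via FFT: O(N log N), the diagonalization ADMM exploits
via_fft = np.real(np.fft.ifft(np.fft.fft(d) * np.fft.fft(x)))

match = np.allclose(direct, via_fft)
```

With M filters, each ADMM iteration applies this per filter, giving the O(MN log N) total the patent cites.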

  16. Game-Theoretic Rate-Distortion-Complexity Optimization of High Efficiency Video Coding

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Milani, Simone; Forchhammer, Søren

    2013-01-01

This paper presents an algorithm for rate-distortion-complexity optimization for the emerging High Efficiency Video Coding (HEVC) standard, whose high computational requirements urge the need for low-complexity optimization algorithms. Optimization approaches need to specify different complexity profiles in order to tailor the computational load to the different hardware and power-supply resources of devices. In this work, we focus on optimizing the quantization parameter and partition depth in HEVC via a game-theoretic approach. The proposed rate control strategy alone provides 0.2 dB improvement...

  17. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    Science.gov (United States)

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model; the learning-based R-D model is intended to overcome the legacy "chicken-and-egg" dilemma in video coding. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted to give inter frames more bit resources, maintaining smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT-based RC method achieves much better R-D performance, quality smoothness, bit rate accuracy, buffer control, and subjective visual quality than other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limit of the FixedQP method.

  18. A hardware-oriented concurrent TZ search algorithm for High-Efficiency Video Coding

    Science.gov (United States)

    Doan, Nghia; Kim, Tae Sung; Rhee, Chae Eun; Lee, Hyuk-Jae

    2017-12-01

High-Efficiency Video Coding (HEVC) is the latest video coding standard, in which the compression performance is double that of its predecessor, the H.264/AVC standard, while the video quality remains unchanged. In HEVC, the test zone (TZ) search algorithm is widely used for integer motion estimation because it effectively finds a good-quality motion vector with a relatively small amount of computation. However, the complex computation structure of the TZ search algorithm makes it difficult to implement in hardware. This paper proposes a new integer motion estimation algorithm designed for hardware execution by modifying the conventional TZ search to allow parallel motion estimation of all prediction unit (PU) partitions. The algorithm consists of three phases: zonal, raster, and refinement searches. At the beginning of each phase, the algorithm obtains the search points required by the original TZ search for all PU partitions in a coding unit (CU). All redundant search points are then removed prior to the estimation of the motion costs, and the best search points are selected for all PUs. Compared to the conventional TZ search algorithm, experimental results show that the proposed algorithm decreases the Bjøntegaard Delta bitrate (BD-BR) by 0.84% and reduces the computational complexity by 54.54%.

  19. An Effective Transform Unit Size Decision Method for High Efficiency Video Coding

    Directory of Open Access Journals (Sweden)

    Chou-Chen Wang

    2014-01-01

High Efficiency Video Coding (HEVC) is the latest video coding standard. HEVC can achieve higher compression performance than previous standards such as MPEG-4, H.263, and H.264/AVC. However, HEVC requires enormous computational complexity in the encoding process due to its quadtree structure. In order to reduce the computational burden of the HEVC encoder, an early transform unit (TU) decision algorithm (ETDA) is adopted to prune the residual quadtree (RQT) at an early stage based on the number of nonzero DCT coefficients (called NNZ-ETDA) to accelerate the encoding process. However, the NNZ-ETDA cannot effectively reduce the computational load for sequences with active motion or rich texture. Therefore, in order to further improve the performance of NNZ-ETDA, we propose an adaptive RQT-depth decision for NNZ-ETDA (called ARD-NNZ-ETDA) that exploits the high temporal-spatial correlation present in natural video sequences. Simulation results show that the proposed method can achieve a time improving ratio (TIR) of about 61.26%~81.48% when compared to the HEVC test model 8.1 (HM 8.1), with insignificant loss of image quality. Compared with NNZ-ETDA, the proposed method can further achieve an average TIR of about 8.29%~17.92%.
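The NNZ criterion is simple to state: if a TU's residual already contains few nonzero transform coefficients, further RQT splitting is unlikely to pay off. A toy sketch of that early-termination test; the threshold and the per-depth coefficient lists are hypothetical stand-ins for the encoder's actual transform results:

```python
def count_nonzero(coeffs):
    """Number of nonzero transform coefficients in a TU's residual block."""
    return sum(1 for c in coeffs if c != 0)

def rqt_depth(tu_coeffs_by_depth, threshold=2, max_depth=2):
    """Toy NNZ-style early TU decision: descend the residual quadtree only
    while the residual stays dense; prune as soon as it is sparse enough."""
    for depth, coeffs in enumerate(tu_coeffs_by_depth[:max_depth + 1]):
        if count_nonzero(coeffs) <= threshold:
            return depth          # early termination: cheap to code as-is
    return max_depth              # dense all the way down: use full depth

# Residuals that stay dense until depth 2
chosen = rqt_depth([[9, 4, 0, 1], [5, 0, 3, 2], [0, 1, 0, 0]])
```

The adaptive variant in this record additionally tunes how deep this descent may go, using temporal-spatial correlation, which is what recovers the lost speed-up on high-motion content.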

  20. Exploration of depth modeling mode one lossless wedgelets storage strategies for 3D-high efficiency video coding

    Science.gov (United States)

    Sanchez, Gustavo; Marcon, César; Agostini, Luciano Volcan

    2018-01-01

3D-High Efficiency Video Coding (3D-HEVC) has introduced tools to obtain higher efficiency in 3-D video coding, most of which are related to depth map coding. Among these tools, depth modeling mode-1 (DMM-1) focuses on better encoding the edge regions of depth maps. The large memory required for storing all wedgelet patterns is one of the bottlenecks in the DMM-1 hardware design of both the encoder and the decoder, since many patterns must be stored. Three algorithms to reduce the DMM-1 memory requirements, and a hardware design targeting the most efficient of these algorithms, are presented. Experimental results demonstrate that the proposed solutions surpass related works, reducing the wedgelet memory by up to 78.8% without degrading the encoding efficiency. Synthesis results demonstrate that the proposed algorithm reduces power dissipation by almost 75% when compared to the standard approach.

  1. Transcoding method from H.264/AVC to high efficiency video coding based on similarity of intraprediction, interprediction, and motion vector

    Science.gov (United States)

    Liu, Mei-Feng; Zhong, Guo-Yun; He, Xiao-Hai; Qing, Lin-Bo

    2016-09-01

Currently, most online video resources are encoded in the H.264/AVC format. More fluent video transmission can be obtained if these resources are encoded in the newest international video coding standard: High Efficiency Video Coding (HEVC). In order to improve online video transmission and storage, a transcoding method from H.264/AVC to HEVC is proposed. In this transcoding algorithm, the coding information of intraprediction, interprediction, and motion vectors (MVs) in the H.264/AVC video stream is used to accelerate the coding in HEVC. It is found through experiments that the region of interprediction in HEVC overlaps that in H.264/AVC. Therefore, intraprediction for regions in HEVC that are interpredicted in H.264/AVC can be skipped to reduce coding complexity. Several macroblocks in H.264/AVC are combined into one prediction unit (PU) in HEVC when the MV difference between two of the macroblocks is lower than a threshold. This method selects only one coding unit depth and one PU mode to reduce the coding complexity. An MV interpolation method for the combined PU in HEVC is proposed according to the areas and distances between the center of one macroblock in H.264/AVC and that of the PU in HEVC. The predicted MV accelerates the motion estimation for HEVC coding. The simulation results show that our proposed algorithm achieves significant coding time reduction with little rate-distortion loss, compared to existing transcoding algorithms and normal HEVC coding.

  2. Efficient random access high resolution region-of-interest (ROI) image retrieval using backward coding of wavelet trees (BCWT)

    Science.gov (United States)

    Corona, Enrique; Nutter, Brian; Mitra, Sunanda; Guo, Jiangling; Karp, Tanja

    2008-03-01

Efficient retrieval of high quality Regions-Of-Interest (ROI) from high resolution medical images is essential for reliable interpretation and accurate diagnosis. Random access to high quality ROI from codestreams is becoming an essential feature in many still image compression applications, particularly in viewing diseased areas from large medical images. This feature is easier to implement in block-based codecs because of the inherent spatial independence of the code blocks. This independence implies that the decoding order of the blocks is unimportant as long as the position of each is properly identified. In contrast, wavelet-tree based codecs naturally use some interdependency that exploits the decaying spectrum model of the wavelet coefficients; thus one must keep track of the decoding order from level to level with such codecs. We have developed an innovative multi-rate image subband coding scheme using "Backward Coding of Wavelet Trees (BCWT)" which is fast, memory efficient, and resolution scalable. It offers far less complexity than many other existing codecs, including both wavelet-tree and block-based algorithms. The ROI feature in BCWT is implemented through a transcoder stage that generates a new BCWT codestream containing only the information associated with the user-defined ROI. This paper presents an efficient technique that locates a particular ROI within the BCWT coded domain and decodes it back to the spatial domain. This technique allows better access and proper identification of pathologies in high resolution images, since only a small fraction of the codestream needs to be transmitted and analyzed.

  3. Efficient Coding of Information: Huffman Coding -RE ...

    Indian Academy of Sciences (India)

    to a stream of equally-likely symbols so as to recover the original stream in the event of errors. The for- ... The source-coding problem is one of finding a mapping from U to a ... probability that the random variable X takes the value x written as ...

  4. On video formats and coding efficiency

    NARCIS (Netherlands)

    Bellers, E.B.; Haan, de G.

    2001-01-01

    This paper examines the efficiency of MPEG-2 coding for interlaced and progressive video, and compares de-interlacing and picture rate up-conversion before and after coding. We found receiver side de-interlacing and picture rate up-conversion (i.e. after coding) to give better image quality at a

  5. Testing efficiency transfer codes for equivalence

    International Nuclear Information System (INIS)

    Vidmar, T.; Celik, N.; Cornejo Diaz, N.; Dlabac, A.; Ewa, I.O.B.; Carrazana Gonzalez, J.A.; Hult, M.; Jovanovic, S.; Lepy, M.-C.; Mihaljevic, N.; Sima, O.; Tzika, F.; Jurado Vargas, M.; Vasilopoulou, T.; Vidmar, G.

    2010-01-01

    Four general Monte Carlo codes (GEANT3, PENELOPE, MCNP and EGS4) and five dedicated packages for efficiency determination in gamma-ray spectrometry (ANGLE, DETEFF, GESPECOR, ETNA and EFFTRAN) were checked for equivalence by applying them to the calculation of efficiency transfer (ET) factors for a set of well-defined sample parameters, detector parameters and energies typically encountered in environmental radioactivity measurements. The differences between the results of the different codes never exceeded a few percent and were lower than 2% in the majority of cases.
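The efficiency transfer (ET) principle these packages implement can be stated in one line: the full-energy-peak efficiency for a new counting geometry is the reference efficiency scaled by the ratio of effective solid angles, a quantity the dedicated codes compute numerically. A sketch of the bookkeeping; the numbers are purely illustrative, not from the intercomparison:

```python
def efficiency_transfer(eff_ref, omega_ref, omega_new):
    """ET applied to a reference full-energy-peak efficiency:
    eff_new = eff_ref * (omega_new / omega_ref), where the omegas are the
    effective solid angles of the two source-detector geometries."""
    return eff_ref * omega_new / omega_ref

# Hypothetical example: a point-source calibration efficiency of 0.050
# transferred to a geometry with a smaller effective solid angle
eff_new = efficiency_transfer(0.050, omega_ref=2.0, omega_new=1.5)
```

The intercomparison above checks that all nine codes agree on such ET factors to within a few percent across sample geometries and energies.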

  6. High Energy Transport Code HETC

    International Nuclear Information System (INIS)

    Gabriel, T.A.

    1985-09-01

    The physics contained in the High Energy Transport Code (HETC), in particular the collision models, are discussed. An application using HETC as part of the CALOR code system is also given. 19 refs., 5 figs., 3 tabs

  7. Efficient decoding of random errors for quantum expander codes

    OpenAIRE

    Fawzi , Omar; Grospellier , Antoine; Leverrier , Anthony

    2017-01-01

We show that quantum expander codes, a constant-rate family of quantum LDPC codes, with the quasi-linear time decoding algorithm of Leverrier, Tillich and Zémor, can correct a constant fraction of random errors with very high probability. This is the first construction of a constant-rate quantum LDPC code with an efficient decoding algorithm that can correct a linear number of random errors with a negligible failure probability. Finding codes with these properties is also motivated by Gottes...

  8. Efficient Power Allocation for Video over Superposition Coding

    KAUST Repository

    Lau, Chun Pong

    2013-03-01

In this paper we consider a wireless multimedia system that maps a scalable video coded (SVC) bit stream onto superposition coded (SPC) signals, referred to as the SVC-SPC architecture. Empirical experiments using a software-defined radio (SDR) emulator are conducted to gain a better understanding of its efficiency, specifically the impact on the received signal of different power allocation ratios. Our experimental results show that to maintain high video quality, the power allocated to the base layer should be approximately four times higher than the power allocated to the enhancement layer.

  9. Evaluation of the efficiency and fault density of software generated by code generators

    Science.gov (United States)

    Schreur, Barbara

    1993-01-01

Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. The software development requires the generation of a considerable amount of code. The engineers who generate the code make mistakes, and generating a large body of highly reliable code requires considerable time. Computer-aided software engineering (CASE) tools are available which generate code automatically from inputs supplied through graphical interfaces. These tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking: some check only the finished product, while some also allow checking of individual modules and combined sets of modules. Considering NASA's requirement for reliability, a comparison against in-house manually generated code is needed. Furthermore, automatically generated code is reputed to be as efficient as the best manually generated code when executed. In-house verification is warranted.

  10. Novel Area Optimization in FPGA Implementation Using Efficient VHDL Code

    Directory of Open Access Journals (Sweden)

Zulfikar

    2012-10-01

A new method for area efficiency in FPGA implementation is presented. The method is realized through the flexibility and wide capability of VHDL coding, and it targets arithmetic operations such as addition, subtraction, and others. The design technique aims to reduce the occupied area of multi-stage circuits by selecting a suitable range for all values involved in every step of the calculations. Conventional and efficient VHDL coding methods are presented and their synthesis results are compared. VHDL code which limits the range of integer values occupies less area than code which does not. This VHDL coding method is suitable for multi-stage circuits.

  11. Novel Area Optimization in FPGA Implementation Using Efficient VHDL Code

    Directory of Open Access Journals (Sweden)

Zulfikar

    2015-05-01

Full Text Available A novel method for area efficiency in FPGA implementation is presented. The method is realized through the flexibility and wide capability of VHDL coding, and it exposes arithmetic operations such as addition, subtraction and others. The design technique aims to reduce the occupied area of multi-stage circuits by selecting a suitable range for every value involved in each step of the calculations. Conventional and efficient VHDL coding methods are presented and their synthesis results are compared. The VHDL code which limits the range of integer values occupies less area than the code which does not. This VHDL coding method is suitable for multi-stage circuits.

  12. Efficient topology optimization in MATLAB using 88 lines of code

    DEFF Research Database (Denmark)

    Andreassen, Erik; Clausen, Anders; Schevenels, Mattias

    2011-01-01

The paper presents an efficient 88 line MATLAB code for topology optimization. It has been developed using the 99 line code presented by Sigmund (Struct Multidisc Optim 21(2):120–127, 2001) as a starting point. The original code has been extended by a density filter, and a considerable improvement ... of the basic code to include recent PDE-based and black-and-white projection filtering methods. The complete 88 line code is included as an appendix and can be downloaded from the web site www.topopt.dtu.dk.

  13. Memory-efficient decoding of LDPC codes

    Science.gov (United States)

    Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon

    2005-01-01

We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, we have shown by simulation that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating point decoder, and an optimized 4-bit quantizer operates with less than 0.1 dB quantization loss.
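The quantizer in the record is designed by maximizing mutual information; the sketch below only illustrates the mechanics of a non-uniform 3-bit message quantizer. The boundaries and reconstruction levels here are made up for illustration, not the optimized values from the paper:

```python
import bisect

# Illustrative (NOT the paper's optimized) non-uniform 3-bit quantizer:
# 7 decision boundaries define 8 cells, each mapped to one reconstruction
# level. Cells are finer near zero, where messages are least certain,
# and coarser in the saturated tails.
BOUNDARIES = [-4.0, -2.0, -0.8, 0.0, 0.8, 2.0, 4.0]
LEVELS = [-6.0, -3.0, -1.4, -0.4, 0.4, 1.4, 3.0, 6.0]

def quantize_llr(llr):
    """Map a real-valued LLR message to its 3-bit reconstruction level."""
    return LEVELS[bisect.bisect(BOUNDARIES, llr)]
```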

  14. Energy Efficiency Program Administrators and Building Energy Codes

    Science.gov (United States)

Explores how energy efficiency program administrators have helped advance building energy codes at federal, state, and local levels—using technical, institutional, financial, and other resources—and discusses potential next steps.

  15. High Order Modulation Protograph Codes

    Science.gov (United States)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

Digital communication coding methods for designing protograph-based bit-interleaved coded modulation that are general and apply to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
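The second lifting stage described above expands each edge of the intermediate protograph into a circulant permutation block. A minimal sketch, with a toy base matrix and made-up shift values (the actual protographs and shifts are not given in the record):

```python
def circulant(N, shift):
    """N x N identity matrix cyclically shifted by `shift` columns."""
    return [[1 if (r + shift) % N == c else 0 for c in range(N)]
            for r in range(N)]

def lift(proto, shifts, N):
    """Expand a base (proto)graph into a lifted parity-check matrix.

    proto[i][j]  : 1 if check node i connects variable node j, else 0
    shifts[i][j] : circulant shift used when proto[i][j] == 1
    """
    rows, cols = len(proto) * N, len(proto[0]) * N
    H = [[0] * cols for _ in range(rows)]
    for i, row in enumerate(proto):
        for j, e in enumerate(row):
            if e:
                blk = circulant(N, shifts[i][j])
                for r in range(N):
                    for c in range(N):
                        H[i * N + r][j * N + c] = blk[r][c]
    return H

# Toy 2x3 base matrix lifted by N = 4 gives an 8 x 12 parity-check matrix.
proto = [[1, 1, 0],
         [0, 1, 1]]
shifts = [[0, 1, 0],
          [0, 2, 3]]
H = lift(proto, shifts, 4)
```

Row weights of the lifted matrix match the protograph's row weights, since each circulant block contributes exactly one 1 per row.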

  16. Novel Area Optimization in FPGA Implementation Using Efficient VHDL Code

    OpenAIRE

    Zulfikar, Z

    2012-01-01

A novel method for area efficiency in FPGA implementation is presented. The method is realized through the flexibility and wide capability of VHDL coding, and it exposes arithmetic operations such as addition, subtraction and others. The design technique aims to reduce the occupied area of multi-stage circuits by selecting a suitable range for every value involved in each step of the calculations. Conventional and efficient VHDL coding methods are presented and their synthesis results are compared....

  17. Energy Efficiency Requirements in Building Codes, Energy Efficiency Policies for New Buildings. IEA Information Paper

    Energy Technology Data Exchange (ETDEWEB)

    Laustsen, Jens

    2008-03-15

The aim of this paper is to describe and analyse current approaches to encourage energy efficiency in building codes for new buildings. Based on this analysis the paper enumerates policy recommendations for enhancing how energy efficiency is addressed in building codes and other policies for new buildings. This paper forms part of the IEA work for the G8 Gleneagles Plan of Action. These recommendations reflect the study of different policy options for increasing energy efficiency in new buildings and examination of other energy efficiency requirements in standards or building codes, such as energy efficiency requirements for major renovation or refurbishment. In many countries, energy efficiency of buildings falls under the jurisdiction of the federal states. Different standards cover different regions or climatic conditions and different types of buildings, such as residential or simple buildings, commercial buildings and more complicated high-rise buildings. There are many different building codes in the world and the intention of this paper is not to cover all codes on each level in all countries. Instead, the paper surveys different regions of the world and different approaches to standards. In this paper we also evaluate good practices based on local traditions. This project does not seek to identify one best practice amongst the building codes and standards. Instead, different types of codes and different parts of the regulation have been illustrated together with examples of how they have been successfully addressed. To complement this discussion of efficiency standards, this study illustrates how energy efficiency can be improved through such initiatives as efficiency labelling or certification, best-practice buildings with extremely low or no energy consumption, and other policies to raise buildings' energy efficiency beyond minimum requirements. When referring to the energy saving potentials for buildings, this study uses the analysis of recent IEA

  18. Energy-Efficient Channel Coding Strategy for Underwater Acoustic Networks

    Directory of Open Access Journals (Sweden)

    Grasielli Barreto

    2017-03-01

    Full Text Available Underwater acoustic networks (UAN allow for efficiently exploiting and monitoring the sub-aquatic environment. These networks are characterized by long propagation delays, error-prone channels and half-duplex communication. In this paper, we address the problem of energy-efficient communication through the use of optimized channel coding parameters. We consider a two-layer encoding scheme employing forward error correction (FEC codes and fountain codes (FC for UAN scenarios without feedback channels. We model and evaluate the energy consumption of different channel coding schemes for a K-distributed multipath channel. The parameters of the FEC encoding layer are optimized by selecting the optimal error correction capability and the code block size. The results show the best parameter choice as a function of the link distance and received signal-to-noise ratio.
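The kind of trade-off being optimized above can be sketched as follows: a larger error-correction capability t adds parity overhead but raises the probability that a block is delivered without retransmission. The energy model, the Reed-Solomon-style redundancy k = n - 2t, and all numbers below are illustrative assumptions, not the paper's model:

```python
from math import comb

def block_success(n, t, p):
    """P(at most t of n symbols are in error), i.i.d. symbol error rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1))

def energy_per_info_bit(n, t, p, e_tx=1.0):
    """Expected transmit energy per delivered info bit for an (n, k) code
    with k = n - 2t redundancy, retransmitting failed blocks.
    Purely illustrative energy model."""
    k = n - 2 * t
    if k <= 0:
        return float("inf")
    return (e_tx * n / k) / block_success(n, t, p)

def best_t(n, p):
    """Error-correction capability minimizing energy per delivered bit."""
    return min(range(n // 2), key=lambda t: energy_per_info_bit(n, t, p))
```

Under this toy model the optimal t grows with the symbol error rate, mirroring the record's conclusion that the best FEC parameters depend on link distance and received SNR.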

  19. Relative efficiency calculation of a HPGe detector using MCNPX code

    International Nuclear Information System (INIS)

    Medeiros, Marcos P.C.; Rebello, Wilson F.; Lopes, Jose M.; Silva, Ademir X.

    2015-01-01

    High-purity germanium detectors (HPGe) are mandatory tools for spectrometry because of their excellent energy resolution. The efficiency of such detectors, quoted in the list of specifications by the manufacturer, frequently refers to the relative full-energy peak efficiency, related to the absolute full-energy peak efficiency of a 7.6 cm x 7.6 cm (diameter x height) NaI(Tl) crystal, based on the 1.33 MeV peak of a 60 Co source positioned 25 cm from the detector. In this study, we used MCNPX code to simulate a HPGe detector (Canberra GC3020), from Real-Time Neutrongraphy Laboratory of UFRJ, to survey the spectrum of a 60 Co source located 25 cm from the detector in order to calculate and confirm the efficiency declared by the manufacturer. Agreement between experimental and simulated data was achieved. The model under development will be used for calculating and comparison purposes with the detector calibration curve from software Genie2000™, also serving as a reference for future studies. (author)

  20. Highly efficient high temperature electrolysis

    DEFF Research Database (Denmark)

    Hauch, Anne; Ebbesen, Sune; Jensen, Søren Højgaard

    2008-01-01

    High temperature electrolysis of water and steam may provide an efficient, cost effective and environmentally friendly production of H-2 Using electricity produced from sustainable, non-fossil energy sources. To achieve cost competitive electrolysis cells that are both high performing i.e. minimum...... internal resistance of the cell, and long-term stable, it is critical to develop electrode materials that are optimal for steam electrolysis. In this article electrolysis cells for electrolysis of water or steam at temperatures above 200 degrees C for production of H-2 are reviewed. High temperature...... electrolysis is favourable from a thermodynamic point of view, because a part of the required energy can be supplied as thermal heat, and the activation barrier is lowered increasing the H-2 production rate. Only two types of cells operating at high temperature (above 200 degrees C) have been described...

  1. HIGH EFFICIENCY TURBINE

    OpenAIRE

    VARMA, VIJAYA KRUSHNA

    2012-01-01

Varma designed ultra-modern, high-efficiency turbines which can use gas, steam or fuels as feed to produce electricity or mechanical work for a wide range of usages and applications in industries or at work sites. Varma turbine engines can be used in all types of vehicles. These turbines can also be used in aircraft, ships, battle tanks, dredgers, mining equipment, earth moving machines, etc. Salient features of Varma turbines: 1. Varma turbines are simple in design, easy to manufac...

  2. Polynomial Batch Codes for Efficient IT-PIR

    Directory of Open Access Journals (Sweden)

    Henry Ryan

    2016-10-01

Full Text Available Private information retrieval (PIR) is a way for clients to query a remote database without the database holder learning the clients' query terms or the responses they generate. Compelling applications for PIR abound in the cryptographic and privacy research literature, yet existing PIR techniques are notoriously inefficient. Consequently, no such PIR-based application to date has seen real-world at-scale deployment. This paper proposes new "batch coding" techniques to help address PIR's efficiency problem. The new techniques exploit the connection between ramp secret sharing schemes and efficient information-theoretically secure PIR (IT-PIR) protocols. This connection was previously observed by Henry, Huang, and Goldberg (NDSS 2013), who used ramp schemes to construct efficient "batch queries" with which clients can fetch several database records for the same cost as fetching a single record using a standard, non-batch query. The new techniques in this paper generalize and extend those of Henry et al. to construct "batch codes" with which clients can fetch several records for only a fraction of the cost of fetching a single record using a standard non-batch query over an unencoded database. The batch codes are highly tuneable, providing a means to trade off (i) lower server-side computation cost, (ii) lower server-side storage cost, and/or (iii) lower uni- or bi-directional communication cost, in exchange for a comparatively modest decrease in resilience to Byzantine database servers.
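The ramp schemes mentioned above generalize polynomial secret sharing. For reference, a minimal Shamir threshold sharing sketch over a small prime field (the degenerate, single-secret case of a ramp scheme; the field size and parameters are illustrative):

```python
import random

P = 2**13 - 1  # 8191, a small prime field chosen only for illustration

def share(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):  # evaluate the random degree-(k-1) polynomial at x
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0 (the secret)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total
```

A ramp scheme packs several secrets into the low-order coefficients instead of one, which is the property the batch-query construction exploits.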

  3. An efficient adaptive arithmetic coding image compression technology

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei

    2011-01-01

This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding compression algorithm. The algorithm increases the image compression rate and ensures the quality of the decoded image by combining an adaptive probability model with predictive coding. The use of an adaptive model for each encoded image block dynamically estimates the probability of the relevant image block, and the decoded image block can accurately recover the encoded image according to the code book information. We adopt an adaptive arithmetic coding algorithm for image compression that greatly improves the image compression rate. The results show that it is an effective compression technology.
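As a reference point for how an adaptive probability model drives arithmetic coding, here is a toy adaptive arithmetic coder using exact rationals. It follows the textbook construction, not the paper's scheme, and real codecs replace the unbounded fractions with fixed-precision registers and renormalization:

```python
from fractions import Fraction

def encode(symbols, alphabet):
    """Adaptive arithmetic encoding with exact rationals (toy version)."""
    counts = {s: 1 for s in alphabet}          # adaptive frequency model
    low, width = Fraction(0), Fraction(1)
    for sym in symbols:
        total = sum(counts.values())
        cum = 0
        for s in alphabet:                     # locate sym's sub-interval
            if s == sym:
                low += width * Fraction(cum, total)
                width *= Fraction(counts[s], total)
                break
            cum += counts[s]
        counts[sym] += 1                       # update model after coding
    return low + width / 2                     # any point in the interval

def decode(code, n, alphabet):
    counts = {s: 1 for s in alphabet}          # decoder mirrors the model
    low, width = Fraction(0), Fraction(1)
    out = []
    for _ in range(n):
        total = sum(counts.values())
        target = (code - low) / width
        cum = Fraction(0)
        for s in alphabet:
            p = Fraction(counts[s], total)
            if cum <= target < cum + p:
                out.append(s)
                low += width * cum
                width *= p
                counts[s] += 1
                break
            cum += p
    return out
```

Because encoder and decoder update identical models from identical decisions, no probability table needs to be transmitted, which is what makes the adaptive scheme attractive for image blocks with varying statistics.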

  4. Synaptic E-I Balance Underlies Efficient Neural Coding.

    Science.gov (United States)

    Zhou, Shanglin; Yu, Yuguo

    2018-01-01

    Both theoretical and experimental evidence indicate that synaptic excitation and inhibition in the cerebral cortex are well-balanced during the resting state and sensory processing. Here, we briefly summarize the evidence for how neural circuits are adjusted to achieve this balance. Then, we discuss how such excitatory and inhibitory balance shapes stimulus representation and information propagation, two basic functions of neural coding. We also point out the benefit of adopting such a balance during neural coding. We conclude that excitatory and inhibitory balance may be a fundamental mechanism underlying efficient coding.

  5. High dynamic range coding imaging system

    Science.gov (United States)

    Wu, Renfan; Huang, Yifan; Hou, Guangqi

    2014-10-01

We present a high dynamic range (HDR) imaging system design scheme based on the coded aperture technique. This scheme can help us obtain HDR images with an extended depth of field. We adopt a sparse coding algorithm to design the coded patterns. Then we utilize the sensor unit to acquire coded images under different exposure settings. Guided by the multiple exposure parameters, a series of low dynamic range (LDR) coded images is reconstructed. We use existing algorithms to fuse those LDR images into an HDR image for display. We build an optical simulation model and obtain simulation images to verify the novel system.
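The fusion step can be illustrated with a standard multi-exposure merge, which is not necessarily the authors' algorithm: assuming a linear camera response, each LDR sample is converted to a radiance estimate by dividing by its exposure time, and a hat-shaped weight discounts under- and over-exposed pixels:

```python
def merge_exposures(images, times, max_val=255):
    """Merge LDR frames of one scene (exposure times `times`) into a
    relative-radiance map, assuming a linear camera response.
    `images` is a list of equally-sized 2D lists of pixel values."""
    def weight(z):                 # hat function: trust mid-range pixels
        return min(z, max_val - z) + 1e-6
    h, w = len(images[0]), len(images[0][0])
    hdr = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            num = den = 0.0
            for img, t in zip(images, times):
                z = img[r][c]
                num += weight(z) * (z / t)   # radiance estimate, this frame
                den += weight(z)
            hdr[r][c] = num / den
    return hdr
```

Two consistent exposures of the same scene point (e.g. value 100 at 1x exposure and 200 at 2x) merge to the same radiance estimate, while clipped pixels contribute almost nothing.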

  6. High-efficiency CARM

    Energy Technology Data Exchange (ETDEWEB)

Bratman, V.L.; Kol'chugin, B.D.; Samsonov, S.V.; Volkov, A.B. [Institute of Applied Physics, Nizhny Novgorod (Russian Federation)]

    1995-12-31

The Cyclotron Autoresonance Maser (CARM) is a well-known variety of FEMs. Unlike the ubitron, in which electrons move in a periodical undulator field, in the CARM the particles move along helical trajectories in a uniform magnetic field. Since it is much simpler to generate strong homogeneous magnetic fields than periodical ones, for relatively low electron energies (≤ 1-3 MeV) the period of the particles' trajectories in the CARM can be significantly smaller than in the undulator, in which, moreover, the field decreases rapidly in the transverse direction. In spite of this evident advantage, the number of papers on the CARM is an order of magnitude smaller than on the ubitron, which is apparently caused by the low (not more than 10%) CARM efficiency obtained in experiments. At the same time, ubitrons operating in two rather complicated regimes (trapping and adiabatic deceleration of particles, and combined undulator and reversed guiding fields) yielded efficiencies of 34% and 27%, respectively. The aim of this work is to demonstrate that high efficiency can be reached even for the simplest version of the CARM. In order to reduce sensitivity to the axial velocity spread of particles, a short interaction length, over which electrons underwent only 4-5 cyclotron oscillations, was used in this work. As in previous experiments, a narrow anode outlet of a field-emission electron gun cut out the "most rectilinear" near-axis part of the electron beam. Additionally, the magnetic field of a small correcting coil compensated spurious electron oscillations pumped by the anode aperture. A kicker, in the form of a current-carrying frame sloped to the axis, provided a controlled value of rotary velocity at a small additional velocity spread. A simple cavity consisting of a cylindrical waveguide section bounded by a cut-off waveguide on the cathode side and by a Bragg reflector on the collector side was used as the CARM-oscillator microwave system.

  7. Efficient visual object and word recognition relies on high spatial frequency coding in the left posterior fusiform gyrus: evidence from a case-series of patients with ventral occipito-temporal cortex damage.

    Science.gov (United States)

    Roberts, Daniel J; Woollams, Anna M; Kim, Esther; Beeson, Pelagie M; Rapcsak, Steven Z; Lambon Ralph, Matthew A

    2013-11-01

    Recent visual neuroscience investigations suggest that ventral occipito-temporal cortex is retinotopically organized, with high acuity foveal input projecting primarily to the posterior fusiform gyrus (pFG), making this region crucial for coding high spatial frequency information. Because high spatial frequencies are critical for fine-grained visual discrimination, we hypothesized that damage to the left pFG should have an adverse effect not only on efficient reading, as observed in pure alexia, but also on the processing of complex non-orthographic visual stimuli. Consistent with this hypothesis, we obtained evidence that a large case series (n = 20) of patients with lesions centered on left pFG: 1) Exhibited reduced sensitivity to high spatial frequencies; 2) demonstrated prolonged response latencies both in reading (pure alexia) and object naming; and 3) were especially sensitive to visual complexity and similarity when discriminating between novel visual patterns. These results suggest that the patients' dual reading and non-orthographic recognition impairments have a common underlying mechanism and reflect the loss of high spatial frequency visual information normally coded in the left pFG.

  8. Direct calculation of current drive efficiency in FISIC code

    International Nuclear Information System (INIS)

    Wright, J.C.; Phillips, C.K.; Bonoli, P.T.

    1996-01-01

    Two-dimensional RF modeling codes use a parameterization (1) of current drive efficiencies to calculate fast wave driven currents. This parameterization assumes a uniform quasi-linear diffusion coefficient and requires a priori knowledge of the wave polarizations. These difficulties may be avoided by a direct calculation of the quasilinear diffusion coefficient from the Kennel-Englemann form with the field polarizations calculated by the full wave code, FISIC (2). Current profiles are calculated using the adjoint formulation (3). Comparisons between the two formulations are presented. copyright 1996 American Institute of Physics

  9. Efficient coding schemes with power allocation using space-time-frequency spreading

    Institute of Scientific and Technical Information of China (English)

    Jiang Haining; Luo Hanwen; Tian Jifeng; Song Wentao; Liu Xingzhao

    2006-01-01

    An efficient space-time-frequency (STF) coding strategy for multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) systems is presented for high bit rate data transmission over frequency selective fading channels. The proposed scheme is a new approach to space-time-frequency coded OFDM (COFDM) that combines OFDM with space-time coding, linear precoding and adaptive power allocation to provide higher quality of transmission in terms of the bit error rate performance and power efficiency. In addition to exploiting the maximum diversity gain in frequency, time and space, the proposed scheme enjoys high coding advantages and low-complexity decoding. The significant performance improvement of our design is confirmed by corroborating numerical simulations.
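The record does not specify the power allocation rule; the textbook baseline such adaptive schemes build on is water-filling across subchannel gains, sketched here:

```python
def waterfill(gains, p_total):
    """Classic water-filling: allocate p_total across subchannels with
    noise-normalized gains `gains`; power_i = max(0, mu - 1/g_i)."""
    inv = sorted(1.0 / g for g in gains)
    # Find the water level, switching off the weakest channels if needed.
    for active in range(len(inv), 0, -1):
        mu = (p_total + sum(inv[:active])) / active
        if mu >= inv[active - 1]:      # all `active` channels above water
            break
    return [max(0.0, mu - 1.0 / g) for g in gains]
```

With gains [1, 2, 4] and a budget of 3, the strongest subchannel receives the most power; with a very small budget, weak subchannels are switched off entirely.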

  10. Efficient Enhancement for Spatial Scalable Video Coding Transmission

    Directory of Open Access Journals (Sweden)

    Mayada Khairy

    2017-01-01

Full Text Available Scalable Video Coding (SVC) is an international standard technique for video compression. It is an extension of H.264 Advanced Video Coding (AVC). In SVC encoding of video streams, it is suitable to employ the macroblock (MB) mode because it affords superior coding efficiency. However, the exhaustive mode decision technique that is usually used for SVC increases the computational complexity, resulting in a longer encoding time (ET). Many other algorithms have been proposed to solve this problem, but at the cost of increased transmission time (TT) across the network. To minimize both ET and TT, this paper introduces four efficient algorithms based on spatial scalability. The algorithms utilize the mode-distribution correlation between the base layer (BL) and enhancement layers (ELs) and interpolation between the EL frames. The proposed algorithms fall into two categories. Those of the first category are based on interlayer residual SVC spatial scalability. They employ two methods, namely, interlayer interpolation (ILIP) and the interlayer base mode (ILBM) method, and enable ET and TT savings of up to 69.3% and 83.6%, respectively. The algorithms of the second category are based on full-search SVC spatial scalability. They utilize two methods, namely, full interpolation (FIP) and the full-base mode (FBM) method, and enable ET and TT savings of up to 55.3% and 76.6%, respectively.
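In spatial scalability, the enhancement-layer prediction starts from an upsampled base-layer frame. A stand-in sketch using nearest-neighbor 2x upsampling (the standard defines specific poly-phase filters, and the record does not detail ILIP itself):

```python
def upsample2x(frame):
    """Nearest-neighbor 2x upsampling of a 2D luma frame -- a stand-in
    for the normative SVC poly-phase upsampling filters."""
    out = []
    for row in frame:
        wide = [v for v in row for _ in (0, 1)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                   # duplicate each row
    return out

# A 2x2 base-layer block becomes a 4x4 inter-layer prediction for the EL.
bl = [[10, 20],
      [30, 40]]
el_pred = upsample2x(bl)
```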

  11. High efficiency positron moderation

    International Nuclear Information System (INIS)

    Taqqu, D.

    1990-01-01

A new positron moderation scheme is proposed. It makes use of electric and magnetic fields to confine the β+ particles emitted by a radioactive source, forcing them to slow down within a thin foil. A specific arrangement is described in which an intermediary slowed-down beam of energy below 10 keV is produced. By directing it towards a standard moderator, optimal conversion into slow positrons is achieved. This scheme is best applied to short-lived β+ emitters, for which a 25% moderation efficiency can be reached. Within state-of-the-art technology a slow positron source intensity exceeding 2 × 10^10 e+/sec is achievable. (orig.)

  12. Energy Efficient Error-Correcting Coding for Wireless Systems

    NARCIS (Netherlands)

    Shao, X.

    2010-01-01

The wireless channel is a hostile environment. The transmitted signal suffers not only from multi-path fading but also from noise and interference from other users of the wireless channel. This causes unreliable communications. To achieve high-quality communications, error-correcting coding is required.

  13. An efficient chaotic source coding scheme with variable-length blocks

    International Nuclear Information System (INIS)

    Lin Qiu-Zhen; Wong Kwok-Wo; Chen Jian-Yong

    2011-01-01

An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. For infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In finite-precision implementation, it can be realized by encoding variable-length blocks using a piecewise linear chaotic map within the precision of register length. In the decoding process, the bit shift in the register can track the synchronization of the initial value and the corresponding block. Therefore, all the variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs well with high efficiency and minor compression loss when compared with traditional entropy coding.

  14. An Efficient Integer Coding and Computing Method for Multiscale Time Segment

    Directory of Open Access Journals (Sweden)

    TONG Xiaochong

    2016-12-01

Full Text Available This article focuses on the existing problems and status of current time segment coding and proposes a new approach: multi-scale time segment integer coding (MTSIC). The approach utilizes the tree structure and the size ordering formed among integers to reflect the relationships among multi-scale time segments: order, inclusion/containment, intersection, etc., and finally achieves a unified integer coding process for multi-scale time. On this foundation, this research also studies the computing method for calculating the time relationships of MTSIC, to support efficient calculation and query based on time segments, and preliminarily discusses the application methods and prospects of MTSIC. The tests indicated that the implementation of MTSIC is convenient and reliable, that transformation between it and the traditional method is convenient, and that it achieves very high efficiency in query and calculation.
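The record does not give the MTSIC construction itself; a simplified stand-in conveys the idea of integer-coded multi-scale segments. Here dyadic time segments are numbered heap-style, so containment between scales reduces to integer right-shifts (real MTSIC must also handle partially intersecting segments):

```python
def code_of(level, index):
    """Heap-style integer code for dyadic time segment `index` at `level`
    (level 0 = the whole span; each segment splits into two children)."""
    return (1 << level) + index

def contains(a, b):
    """Does segment coded `a` contain segment coded `b`?  In heap
    numbering, every ancestor is reached by right-shifting."""
    while b > a:
        b >>= 1
    return a == b

def relation(a, b):
    """Dyadic segments are either nested or disjoint."""
    return "nested" if contains(a, b) or contains(b, a) else "disjoint"
```

Because a single shift-and-compare loop answers containment queries, no interval endpoints ever need to be decoded, which is the efficiency argument the record makes for integer coding.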

  15. An Expert System for the Development of Efficient Parallel Code

    Science.gov (United States)

    Jost, Gabriele; Chun, Robert; Jin, Hao-Qiang; Labarta, Jesus; Gimenez, Judit

    2004-01-01

    We have built the prototype of an expert system to assist the user in the development of efficient parallel code. The system was integrated into the parallel programming environment that is currently being developed at NASA Ames. The expert system interfaces to tools for automatic parallelization and performance analysis. It uses static program structure information and performance data in order to automatically determine causes of poor performance and to make suggestions for improvements. In this paper we give an overview of our programming environment, describe the prototype implementation of our expert system, and demonstrate its usefulness with several case studies.

  16. Structured Low-Density Parity-Check Codes with Bandwidth Efficient Modulation

    Science.gov (United States)

    Cheng, Michael K.; Divsalar, Dariush; Duy, Stephanie

    2009-01-01

In this work, we study the performance of structured Low-Density Parity-Check (LDPC) Codes together with bandwidth efficient modulations. We consider protograph-based LDPC codes that facilitate high-speed hardware implementations and have minimum distances that grow linearly with block sizes. We cover various higher-order modulations such as 8-PSK, 16-APSK, and 16-QAM. During demodulation, a demapper transforms the received in-phase and quadrature samples into reliability information that feeds the binary LDPC decoder. We will compare various low-complexity demappers and provide simulation results for assorted coded-modulation combinations on the additive white Gaussian noise and independent Rayleigh fading channels.
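The demapper stage described above can be sketched with the common max-log approximation: each bit LLR is the difference between the squared distance to the nearest constellation point labelled 1 and the nearest labelled 0. The 8-PSK Gray labelling below is illustrative, not the mapping from any particular standard:

```python
import cmath
import math

def maxlog_llrs(y, constellation, labels, noise_var):
    """Max-log bit LLRs for one received complex sample `y`.

    constellation : list of complex symbols
    labels        : equal-length bit tuples, one per symbol
    LLR_k = (min_{b_k=1} |y - s|^2 - min_{b_k=0} |y - s|^2) / noise_var
    (sign convention: positive LLR favours bit value 0)
    """
    nbits = len(labels[0])
    llrs = []
    for k in range(nbits):
        d0 = min(abs(y - s) ** 2
                 for s, b in zip(constellation, labels) if b[k] == 0)
        d1 = min(abs(y - s) ** 2
                 for s, b in zip(constellation, labels) if b[k] == 1)
        llrs.append((d1 - d0) / noise_var)
    return llrs

# Gray-labelled 8-PSK (labelling chosen for illustration only)
PSK8 = [cmath.exp(1j * math.pi * i / 4) for i in range(8)]
GRAY = [(0, 0, 0), (0, 0, 1), (0, 1, 1), (0, 1, 0),
        (1, 1, 0), (1, 1, 1), (1, 0, 1), (1, 0, 0)]
```

A sample landing exactly on the symbol labelled (0,0,0) yields three positive LLRs, i.e. high confidence in all-zero bits.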

  17. Efficient Dual Domain Decoding of Linear Block Codes Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Ahmed Azouaoui

    2012-01-01

Full Text Available A computationally efficient algorithm for decoding block codes is developed using a genetic algorithm (GA). The proposed algorithm uses the dual code, in contrast to the existing genetic decoders in the literature that use the code itself. Hence, this new approach reduces the complexity of decoding codes of high rate. We simulated our algorithm over various transmission channels. The performance of this algorithm is investigated and compared with competitor decoding algorithms, including those of Maini and Shakeel. The results show that the proposed algorithm gives large gains over the Chase-2 decoding algorithm and reaches the performance of the OSD-3 for some quadratic residue (QR) codes. Further, we define a new crossover operator that exploits domain-specific information and compare it with uniform and two-point crossover. The complexity of this algorithm is also discussed and compared to other algorithms.
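The paper's domain-specific crossover is not described in the record; for reference, the two-point baseline it is compared against looks like this:

```python
import random

def two_point_crossover(parent_a, parent_b, rng=random):
    """Standard two-point crossover: pick two cut points and swap the
    middle slice between the two parent chromosomes."""
    n = len(parent_a)
    i, j = sorted(rng.sample(range(1, n), 2))
    child_a = parent_a[:i] + parent_b[i:j] + parent_a[j:]
    child_b = parent_b[:i] + parent_a[i:j] + parent_b[j:]
    return child_a, child_b
```

Whatever the cut points, the two children jointly preserve every gene of both parents, which the assertions below check on all-zero and all-one parents.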

  18. Efficient Work Team Scheduling: Using Psychological Models of Knowledge Retention to Improve Code Writing Efficiency

    Directory of Open Access Journals (Sweden)

    Michael J. Pelosi

    2014-12-01

    Full Text Available Development teams and programmers must retain critical information about their work during work intervals and gaps in order to improve future performance when work resumes. Despite time lapses, project managers want to maximize coding efficiency and effectiveness. By developing a mathematically justified, practically useful, and computationally tractable quantitative and cognitive model of learning and memory retention, this study establishes calculations designed to maximize scheduling payoff and optimize developer efficiency and effectiveness.
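The record does not state the retention model; the usual starting point for such scheduling calculations is an Ebbinghaus-style exponential forgetting curve, sketched here with a hypothetical stability parameter:

```python
import math

def retention(t_days, stability=5.0):
    """Ebbinghaus-style exponential forgetting: fraction of project
    knowledge retained after a gap of t_days (stability is hypothetical)."""
    return math.exp(-t_days / stability)

def effective_hours(hours, gap_days, stability=5.0):
    """Toy scheduling-payoff model: productive hours after a work gap,
    discounted by the forgotten fraction that must be re-acquired."""
    return hours * retention(gap_days, stability)
```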

  19. Optimal and efficient decoding of concatenated quantum block codes

    International Nuclear Information System (INIS)

    Poulin, David

    2006-01-01

    We consider the problem of optimally decoding a quantum error correction code--that is, to find the optimal recovery procedure given the outcomes of partial ''check'' measurements on the system. In general, this problem is NP hard. However, we demonstrate that for concatenated block codes, the optimal decoding can be efficiently computed using a message-passing algorithm. We compare the performance of the message-passing algorithm to that of the widespread blockwise hard decoding technique. Our Monte Carlo results using the five-qubit and Steane's code on a depolarizing channel demonstrate significant advantages of the message-passing algorithms in two respects: (i) Optimal decoding increases by as much as 94% the error threshold below which the error correction procedure can be used to reliably send information over a noisy channel; and (ii) for noise levels below these thresholds, the probability of error after optimal decoding is suppressed at a significantly higher rate, leading to a substantial reduction of the error correction overhead

  20. Efficient Power Allocation for Video over Superposition Coding

    KAUST Repository

    Lau, Chun Pong; Jamshaid, K.; Shihada, Basem

    2013-01-01

    are conducted to gain a better understanding of its efficiency, specifically, the impact of the received signal due to different power allocation ratios. Our experimental results show that to maintain high video quality, the power allocated to the base layer

  1. Efficiency transfer using the GEANT4 code of CERN for HPGe gamma spectrometry

    International Nuclear Information System (INIS)

    Chagren, S.; Tekaya, M.Ben; Reguigui, N.; Gharbi, F.

    2016-01-01

In this work we apply the GEANT4 code of CERN to calculate the peak efficiency in High Purity Germanium (HPGe) gamma spectrometry using three different procedures. The first is a direct calculation. The second corresponds to the usual case of efficiency transfer between two different configurations at constant emission energy, assuming a reference point detection configuration. The third, a new procedure, consists of transferring the peak efficiency between two detection configurations emitting the gamma ray at different energies, assuming a "virtual" reference point detection configuration. No pre-optimization of the detector geometrical characteristics was performed before the transfer, in order to test the ability of the efficiency transfer to reduce the effect of ignorance of their real magnitude on the quality of the transferred efficiency. The obtained and measured efficiencies were found to be in good agreement for the two investigated methods of efficiency transfer. The agreement obtained proves that the Monte Carlo method, and especially the GEANT4 code, constitutes an efficient tool for obtaining accurate detection efficiency values. The second investigated efficiency transfer procedure is useful for calibrating the HPGe gamma detector at any emission energy for a voluminous source, using one point-source detection efficiency at a different energy as the reference efficiency. The calculations performed in this work were applied to the measurement exercise of the EUROMET428 project, a measurement exercise in which the full energy peak efficiencies in the energy range 60–2000 keV were evaluated for a typical coaxial p-type HPGe detector and several types of source configuration: point sources located at various distances from the detector and a cylindrical box containing three matrices. - Highlights: • The GEANT4 code of CERN has been used to transfer the peak efficiency from point to points and voluminous detection configurations in HPGe gamma
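As a first-order reference for what efficiency transfer does, the geometric approximation below scales a known point-source peak efficiency by the ratio of subtended solid angles; the Monte Carlo transfer in the record replaces this factor with a full simulation, and all numbers here are illustrative:

```python
import math

def solid_angle(distance, radius):
    """Solid angle subtended by a disk detector face at an on-axis point."""
    return 2 * math.pi * (1 - distance / math.hypot(distance, radius))

def transfer_efficiency(eff_ref, d_ref, d_new, radius):
    """First-order efficiency transfer between two on-axis point-source
    distances at the same energy: scale the reference full-energy-peak
    efficiency by the solid-angle ratio. A geometric approximation only;
    it ignores attenuation and scattering that a Monte Carlo transfer
    (as in the record) accounts for."""
    return eff_ref * solid_angle(d_new, radius) / solid_angle(d_ref, radius)
```

Moving a point source farther from the crystal face reduces the transferred efficiency, as the solid-angle ratio drops below one.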

  2. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    Energy Technology Data Exchange (ETDEWEB)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K. [Cray Inc., St. Paul, MN 55101 (United States); Porter, D. [Minnesota Supercomputing Institute for Advanced Computational Research, Minneapolis, MN USA (United States); O’Neill, B. J.; Nolting, C.; Donnert, J. M. F.; Jones, T. W. [School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455 (United States); Edmon, P., E-mail: pjm@cray.com, E-mail: nradclif@cray.com, E-mail: kkandalla@cray.com, E-mail: oneill@astro.umn.edu, E-mail: nolt0040@umn.edu, E-mail: donnert@ira.inaf.it, E-mail: twj@umn.edu, E-mail: dhp@umn.edu, E-mail: pedmon@cfa.harvard.edu [Institute for Theory and Computation, Center for Astrophysics, Harvard University, Cambridge, MA 02138 (United States)

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.

  3. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    International Nuclear Information System (INIS)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K.; Porter, D.; O’Neill, B. J.; Nolting, C.; Donnert, J. M. F.; Jones, T. W.; Edmon, P.

    2017-01-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.

  4. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    Science.gov (United States)

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially-coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.
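
    The appeal of simple local codes can be illustrated with a toy sketch (ours, not the paper's construction): a (7,4) Hamming code used as a local component code, where syndrome decoding corrects any single bit error at negligible cost.

```python
# Parity-check matrix of the (7,4) Hamming code: column i is the binary
# expansion of i + 1, so a nonzero syndrome directly names the errored position.
H = [[(i >> b) & 1 for i in range(1, 8)] for b in (2, 1, 0)]

def syndrome(word):
    return [sum(h * c for h, c in zip(row, word)) % 2 for row in H]

def decode_local(word):
    """Correct any single bit error; this is the cheap local step a GLDPC
    decoder would repeat over many overlapping subsets of the bits."""
    s = syndrome(word)
    pos = 4 * s[0] + 2 * s[1] + s[2]   # 0 means no detectable error
    fixed = list(word)
    if pos:
        fixed[pos - 1] ^= 1            # flip the indicated bit
    return fixed
```

    A full GLDPC decoder would iterate such local decoders over the global code's overlapping bit subsets rather than decode one word in isolation.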

  5. SiNC: Saliency-injected neural codes for representation and efficient retrieval of medical radiographs.

    Directory of Open Access Journals (Sweden)

    Jamil Ahmad

    Full Text Available Medical image collections contain a wealth of information that can assist radiologists and medical experts in diagnosis and disease detection for making well-informed decisions. However, this objective can only be realized if efficient access is provided to semantically relevant cases from the ever-growing medical image repositories. In this paper, we present an efficient method for representing medical images by incorporating visual saliency and deep features obtained from a fine-tuned convolutional neural network (CNN) pre-trained on natural images. A saliency detector is employed to automatically identify regions of interest, like tumors, fractures, and calcified spots, in images prior to feature extraction. Neuronal activation features, termed neural codes, from different CNN layers are comprehensively studied to identify the most appropriate features for representing radiographs. This study revealed that neural codes from the last fully connected layer of the fine-tuned CNN are the most suitable for representing medical images. The neural codes extracted from the entire image and the salient part of the image are fused to obtain the saliency-injected neural codes (SiNC) descriptor, which is used for indexing and retrieval. Finally, locality sensitive hashing techniques are applied to the SiNC descriptor to acquire short binary codes for efficient retrieval in large scale image collections. Comprehensive experimental evaluations on the radiology images dataset reveal that the proposed framework achieves high retrieval accuracy and efficiency for scalable image retrieval applications and compares favorably with existing approaches.
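
    The final hashing step can be sketched in a few lines (a generic random-hyperplane LSH toy of our own, not the paper's exact scheme): each hyperplane contributes one bit, so similar descriptors receive binary codes at small Hamming distance.

```python
import random

# Toy random-hyperplane LSH: descriptors stand in for fused SiNC vectors.
random.seed(0)
DIM, BITS = 16, 32
PLANES = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def binary_code(desc):
    """One bit per hyperplane: which side of it the descriptor lies on."""
    return [int(sum(p * d for p, d in zip(plane, desc)) > 0)
            for plane in PLANES]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

query = [random.gauss(0, 1) for _ in range(DIM)]
near = [q + random.gauss(0, 0.05) for q in query]    # near-duplicate image
far = [random.gauss(0, 1) for _ in range(DIM)]       # unrelated image
```

    Retrieval then reduces to cheap Hamming-distance comparisons on the short codes instead of distance computations on the full descriptors.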

  6. New highly efficient piezoceramic materials

    International Nuclear Information System (INIS)

    Dantsiger, A.Ya.; Razumovskaya, O.N.; Reznichenko, L.A.; Grineva, L.D.; Devlikanova, R.U.; Dudkina, S.I.; Gavrilyachenko, S.V.; Dergunova, N.V.

    1993-01-01

    New highly efficient piezoceramic materials with various combinations of parameters, including a high Curie point, have been developed for high-temperature transducers used in nuclear power engineering. They can be used in systems for nondestructive testing of heated materials, in controllers for various industrial power plants, and in other high-temperature equipment

  7. Computer code validation by high temperature chemistry

    International Nuclear Information System (INIS)

    Alexander, C.A.; Ogden, J.S.

    1988-01-01

    At least five of the computer codes utilized in the analysis of severe fuel damage-type events are directly dependent upon, or can be verified by, high temperature chemistry. These codes are ORIGEN, CORSOR, CORCON, VICTORIA, and VANESA. With the exception of CORCON and VANESA, it is necessary that verification experiments be performed on real irradiated fuel. For ORIGEN, the familiar Knudsen effusion cell is the best choice: a small piece of known mass and known burn-up is selected and volatilized completely into the mass spectrometer. The mass spectrometer is used in the integral mode to integrate the entire signal from preselected radionuclides, and from this integrated signal the total mass of the respective nuclides can be determined. For CORSOR and VICTORIA, flowing high-pressure hydrogen/steam must pass over the irradiated fuel and then enter the mass spectrometer. For these experiments, a high-pressure, high-temperature molecular beam inlet must be employed. Finally, in support of VANESA-CORCON, the very highest temperature and molten fuels must be contained and analyzed. Results from all types of experiments will be discussed, and their applicability to present and future code development will also be covered

  8. Development of a computer program to support an efficient non-regression test of a thermal-hydraulic system code

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jun Yeob; Jeong, Jae Jun [School of Mechanical Engineering, Pusan National University, Busan (Korea, Republic of); Suh, Jae Seung [System Engineering and Technology Co., Daejeon (Korea, Republic of); Kim, Kyung Doo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    During the development process of a thermal-hydraulic system code, a non-regression test (NRT) must be performed repeatedly in order to prevent software regression. The NRT process, however, is time-consuming and labor-intensive, so automating it is an ideal solution. In this study, we have developed a program to support an efficient NRT for the SPACE code and demonstrated its usability, resulting in a high degree of efficiency in code development. The program was developed using Visual Basic for Applications and designed so that it can be easily customized for the NRT of other computer codes.
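
    The essential check inside any such NRT support tool can be sketched as follows (a minimal illustration of ours, not the actual SPACE/VBA implementation; the variable names are invented): compare each output of a new run against a stored baseline and report every deviation beyond a tolerance.

```python
import math

def non_regression(baseline, new_run, rtol=1e-6):
    """Return the names of output variables whose new values deviate from
    the stored baseline (or are missing); an empty list means no regression."""
    regressions = []
    for name, ref in baseline.items():
        val = new_run.get(name)
        if val is None or not math.isclose(val, ref, rel_tol=rtol):
            regressions.append(name)
    return regressions
```

    Running this over every stored case after each commit is the automated core of an NRT; the surrounding tool only has to collect the baselines and format the report.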

  9. High burnup models in computer code fair

    Energy Technology Data Exchange (ETDEWEB)

    Dutta, B K; Swami Prasad, P; Kushwaha, H S; Mahajan, S C; Kakodar, A [Bhabha Atomic Research Centre, Bombay (India)

    1997-08-01

    An advanced fuel analysis code FAIR has been developed for analyzing the behavior of fuel rods of water cooled reactors under severe power transients and high burnups. The code is capable of analyzing fuel pins with both collapsible clad, as in PHWRs, and free-standing clad, as in LWRs. The main emphasis in the development of this code is on evaluating fuel performance at extended burnups and modelling fuel rods for advanced fuel cycles. For this purpose, a number of suitable models have been incorporated in FAIR. For modelling fission gas release, three different models are implemented, namely a physically based mechanistic model, the standard ANS 5.4 model and the Halden model. Similarly, the pellet thermal conductivity can be modelled by the MATPRO equation, the SIMFUEL relation or the Halden equation. The flux distribution across the pellet is modelled using the model RADAR. For modelling pellet clad mechanical interaction (PCMI)/stress corrosion cracking (SCC) induced failure of the sheath, the necessary routines are provided in FAIR. The validation of the code FAIR is based on the analysis of fuel rods of the EPRI project "Light water reactor fuel rod modelling code evaluation" and also the analytical simulation of the threshold power ramp criteria of fuel rods of pressurized heavy water reactors. In the present work, a study is carried out by analysing three CRP-FUMEX rods to show the effect of various combinations of fission gas release models and pellet conductivity models on the fuel analysis parameters. The satisfactory performance of FAIR may be concluded from these case studies. (author). 12 refs, 5 figs.

  10. High burnup models in computer code fair

    International Nuclear Information System (INIS)

    Dutta, B.K.; Swami Prasad, P.; Kushwaha, H.S.; Mahajan, S.C.; Kakodar, A.

    1997-01-01

    An advanced fuel analysis code FAIR has been developed for analyzing the behavior of fuel rods of water cooled reactors under severe power transients and high burnups. The code is capable of analyzing fuel pins with both collapsible clad, as in PHWRs, and free-standing clad, as in LWRs. The main emphasis in the development of this code is on evaluating fuel performance at extended burnups and modelling fuel rods for advanced fuel cycles. For this purpose, a number of suitable models have been incorporated in FAIR. For modelling fission gas release, three different models are implemented, namely a physically based mechanistic model, the standard ANS 5.4 model and the Halden model. Similarly, the pellet thermal conductivity can be modelled by the MATPRO equation, the SIMFUEL relation or the Halden equation. The flux distribution across the pellet is modelled using the model RADAR. For modelling pellet clad mechanical interaction (PCMI)/stress corrosion cracking (SCC) induced failure of the sheath, the necessary routines are provided in FAIR. The validation of the code FAIR is based on the analysis of fuel rods of the EPRI project "Light water reactor fuel rod modelling code evaluation" and also the analytical simulation of the threshold power ramp criteria of fuel rods of pressurized heavy water reactors. In the present work, a study is carried out by analysing three CRP-FUMEX rods to show the effect of various combinations of fission gas release models and pellet conductivity models on the fuel analysis parameters. The satisfactory performance of FAIR may be concluded from these case studies. (author). 12 refs, 5 figs

  11. Ultra-high resolution coded wavefront sensor

    KAUST Repository

    Wang, Congli

    2017-06-08

    Wavefront sensors and more general phase retrieval methods have recently attracted a lot of attention in a host of application domains, ranging from astronomy to scientific imaging and microscopy. In this paper, we introduce a new class of sensor, the Coded Wavefront Sensor, which provides high spatio-temporal resolution using a simple masked sensor under white light illumination. Specifically, we demonstrate megapixel spatial resolution and phase accuracy better than 0.1 wavelengths at reconstruction rates of 50 Hz or more, thus opening up many new applications from high-resolution adaptive optics to real-time phase retrieval in microscopy.

  12. Efficiency transfer using the GEANT4 code of CERN for HPGe gamma spectrometry.

    Science.gov (United States)

    Chagren, S; Tekaya, M Ben; Reguigui, N; Gharbi, F

    2016-01-01

    In this work we apply the GEANT4 code of CERN to calculate the peak efficiency in High Purity Germanium (HPGe) gamma spectrometry using three different procedures. The first is a direct calculation. The second corresponds to the usual case of efficiency transfer between two different configurations at constant emission energy, assuming a reference point detection configuration. The third, a new procedure, consists of transferring the peak efficiency between two detection configurations emitting the gamma ray at different energies, assuming a "virtual" reference point detection configuration. No pre-optimization of the detector geometrical characteristics was performed before the transfer, in order to test the ability of the efficiency transfer to reduce the effect of uncertainty in their real magnitude on the quality of the transferred efficiency. The calculated and measured efficiencies were found to be in good agreement for the two investigated methods of efficiency transfer. This agreement shows that the Monte Carlo method, and especially the GEANT4 code, constitutes an efficient tool for obtaining accurate detection efficiency values. The second investigated efficiency transfer procedure is useful for calibrating the HPGe gamma detector at any emission energy for a voluminous source, using the detection efficiency of one point source emitting at a different energy as a reference. The calculations performed in this work were applied to the measurement exercise of the EUROMET428 project, in which the full energy peak efficiencies were evaluated in the energy range 60-2000 keV for a typical coaxial p-type HPGe detector and several types of source configuration: point sources located at various distances from the detector and a cylindrical box containing three matrices. Copyright © 2015 Elsevier Ltd. All rights reserved.
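
    The transfer principle itself is compact: a measured reference efficiency is rescaled by the ratio of two simulated efficiencies. The sketch below (ours) replaces GEANT4's full photon transport with bare solid-angle geometry just to show the bookkeeping; the reference efficiency and detector radius are invented numbers.

```python
import math
import random

random.seed(1)

def simulated_eff(distance, radius=3.0, n=100_000):
    """Fraction of isotropically emitted rays that hit a disc of `radius`
    centred `distance` below a point source (pure geometry, no absorption)."""
    hits = 0
    for _ in range(n):
        cos_t = random.uniform(-1.0, 1.0)          # isotropic direction
        if cos_t <= 0.0:
            continue                               # emitted away from the disc
        tan_t = math.sqrt(1.0 - cos_t ** 2) / cos_t
        if distance * tan_t <= radius:             # lands inside the disc
            hits += 1
    return hits / n

# Transfer: rescale a measured reference efficiency (invented value here)
# by the simulated ratio between the target and reference geometries.
eps_ref_measured = 0.05
transfer_factor = simulated_eff(10.0) / simulated_eff(5.0)
eps_target = eps_ref_measured * transfer_factor
```

    In the real procedure, GEANT4 supplies both simulated efficiencies with full photon transport, so detector imperfections common to both configurations largely cancel in the ratio.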

  13. Efficient Coding and Energy Efficiency Are Promoted by Balanced Excitatory and Inhibitory Synaptic Currents in Neuronal Network.

    Science.gov (United States)

    Yu, Lianchun; Shen, Zhou; Wang, Chen; Yu, Yuguo

    2018-01-01

    Selective pressure may drive neural systems to process as much information as possible at the lowest energy cost. Recent experimental evidence revealed that the ratio between synaptic excitation and inhibition (E/I) in local cortex is generally maintained at a certain value, which may influence the efficiency of energy consumption and information transmission in neural networks. To understand this issue more deeply, we constructed a typical recurrent Hodgkin-Huxley network model and studied the general principles that govern the relationship among the E/I synaptic current ratio, the energy cost and the total amount of information transmission. We observed in such a network that there exists an optimal E/I synaptic current ratio at which information transmission achieves its maximum with relatively low energy cost. The coding energy efficiency, defined as the mutual information divided by the energy cost, achieved its maximum with balanced synaptic currents. Although background noise degrades information transmission and imposes an additional energy cost, we find an optimal noise intensity that yields the largest information transmission and energy efficiency at this optimal E/I synaptic transmission ratio. The maximization of energy efficiency also requires a certain part of the energy cost to be associated with spontaneous spiking and synaptic activities. We further proved this finding with an analytical solution based on the response function of bistable neurons, and demonstrated that optimal net synaptic currents are capable of maximizing both the mutual information and the energy efficiency. These results revealed that the development of E/I synaptic current balance could lead a cortical network to operate at a highly efficient information transmission rate at a relatively low energy cost. The generality of the neuronal models and the recurrent network configuration used here suggest that the existence of an optimal E/I cell ratio for highly efficient energy
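
    The shape of the argument, information gain that saturates while energy cost keeps growing, can be reproduced with a deliberately simple stand-in model (ours, not the paper's Hodgkin-Huxley network): a binary channel whose error rate falls with synaptic drive while energy grows linearly, giving an interior optimum in bits per unit energy.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def efficiency(r, baseline=0.1):
    error = 0.5 * math.exp(-r)     # error rate falls with synaptic drive r
    info = 1.0 - h2(error)         # bits through a binary symmetric channel
    return info / (baseline + r)   # mutual information per unit energy

# Scan the drive strength: efficiency vanishes at both extremes and
# peaks at an intermediate value, the toy analogue of a balanced state.
best_eff, best_r = max((efficiency(i / 100), i / 100) for i in range(1, 1000))
```

    The interior optimum appears because weak drive transmits almost nothing while strong drive buys ever smaller information gains at linearly growing cost.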

  14. An efficient fractal image coding algorithm using unified feature and DCT

    International Nuclear Information System (INIS)

    Zhou Yiming; Zhang Chao; Zhang Zengke

    2009-01-01

    Fractal image compression is a promising technique to improve the efficiency of image storage and image transmission with a high compression ratio; however, the huge time consumption of fractal image coding is a great obstacle to practical applications. In order to improve fractal image coding, efficient fractal image coding algorithms using a special unified feature and a DCT coder are proposed in this paper. First, based on a necessary condition of the best-matching search rule in fractal image coding, a fast algorithm using a special unified feature (UFC) is presented; it reduces the search space considerably and excludes most inappropriate matching subblocks before the best-matching search. Second, on the basis of the UFC algorithm, a DCT coder is combined with it to construct a hybrid fractal image coding algorithm (DUFC) that improves the quality of the reconstructed image. Experimental results show that the proposed algorithms obtain good quality of the reconstructed images and need much less time than the baseline fractal coding algorithm.
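
    The pruning idea can be sketched generically (our simplification; the paper's UFC feature and necessary condition are more elaborate): a one-number feature filters domain blocks before the expensive contrast/brightness least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature(block):
    return block.std()                       # cheap one-number screening feature

def best_match(range_block, domains, tol=0.5):
    """Find the domain block that best maps onto `range_block` under an
    affine (contrast s, brightness o) transform, pruning by feature first."""
    f = feature(range_block)
    best, best_err = None, np.inf
    for i, d in enumerate(domains):
        if abs(feature(d) - f) > tol:        # prune: cannot match well
            continue
        # least-squares optimal contrast and brightness for this pair
        s = np.cov(d.ravel(), range_block.ravel())[0, 1] / d.var(ddof=1)
        o = range_block.mean() - s * d.mean()
        err = float(((s * d + o - range_block) ** 2).mean())
        if err < best_err:
            best, best_err = i, err
    return best, best_err

domains = [rng.random((4, 4)) for _ in range(20)]
target = 0.6 * domains[7] + 0.2              # affinely related to domain 7
```

    The pruning threshold trades search time against the risk of discarding the true best match, which is why the paper derives it from a necessary condition rather than picking it ad hoc.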

  15. Unconventional, High-Efficiency Propulsors

    DEFF Research Database (Denmark)

    Andersen, Poul

    1996-01-01

    The development of ship propellers has generally been characterized by the search for propellers with as high an efficiency as possible and, at the same time, low noise and vibration levels and little or no cavitation. This search has led to unconventional propulsors, like vane-wheel propulsors, contra-r...

  16. An Efficient Construction of Self-Dual Codes

    OpenAIRE

    Lee, Yoonjin; Kim, Jon-Lark

    2012-01-01

    We complete the building-up construction for self-dual codes by resolving the open cases over $GF(q)$ with $q \equiv 3 \pmod 4$, and over $\Z_{p^m}$ and Galois rings $\GR(p^m,r)$ with an odd prime $p$ satisfying $p \equiv 3 \pmod 4$ with $r$ odd. We also extend the building-up construction for self-dual codes to finite chain rings. Our building-up construction produces many new interesting self-dual codes. In particular, we construct 945 new extremal self-dual ternary $[32,16,9]$ codes, each ...

  17. Least reliable bits coding (LRBC) for high data rate satellite communications

    Science.gov (United States)

    Vanderaar, Mark; Budinger, James; Wagner, Paul

    1992-01-01

    LRBC, a bandwidth efficient multilevel/multistage block-coded modulation technique, is analyzed. LRBC uses simple multilevel component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Soft-decision multistage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Analytical expressions and tight performance bounds are used to show that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of BPSK. The relative simplicity of Galois field algebra vs the Viterbi algorithm and the availability of high-speed commercial VLSI for block codes indicates that LRBC using block codes is a desirable method for high data rate implementations.
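
    The unequal-protection principle behind such multilevel coding can be shown with a deliberately tiny example (ours, not the LRBC construction itself): the unreliable bit level gets a rate-1/3 repetition code with majority-vote decoding, while the reliable level stays uncoded, so the weak level's error rate drops from p to roughly 3p².

```python
import random

random.seed(2)

def channel(bit, flip_prob):
    """Binary symmetric channel: flip the bit with probability flip_prob."""
    return bit ^ (random.random() < flip_prob)

def send(msb, lsb, p_msb=0.01, p_lsb=0.2, reps=3):
    """MSB is a reliable modulated bit (uncoded); LSB is unreliable and is
    repetition-coded, then majority-voted at the receiver."""
    rx_msb = channel(msb, p_msb)
    rx_lsb = [channel(lsb, p_lsb) for _ in range(reps)]
    return rx_msb, int(sum(rx_lsb) > reps // 2)

# Empirical symbol error rate over many transmissions of the pair (1, 0).
errors = sum(send(1, 0) != (1, 0) for _ in range(10_000))
```

    With p = 0.2 the protected level's error rate falls to about 3p²(1−p) + p³ ≈ 0.10, while the overall code rate stays high because only the weak level pays the redundancy.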

  18. Preserving Envelope Efficiency in Performance Based Code Compliance

    Energy Technology Data Exchange (ETDEWEB)

    Thornton, Brian A. [Thornton Energy Consulting (United States); Sullivan, Greg P. [Efficiency Solutions (United States); Rosenberg, Michael I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Baechler, Michael C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-06-20

    The City of Seattle 2012 Energy Code (Seattle 2014), one of the most progressive in the country, is under revision for its 2015 edition. Additionally, city personnel participate in the development of the next generation of the Washington State Energy Code and the International Energy Code. Seattle has pledged carbon neutrality by 2050, including buildings, transportation and other sectors. The United States Department of Energy (DOE), through Pacific Northwest National Laboratory (PNNL), provided technical assistance to Seattle in order to understand the implications of one potential direction for its code development: limiting trade-offs in which long-lived building envelope components less stringent than the prescriptive code envelope requirements are offset by better-than-code but shorter-lived lighting and heating, ventilation, and air-conditioning (HVAC) components through the total building performance modeled energy compliance path. Weaker building envelopes can permanently limit building energy performance even as lighting and HVAC components are upgraded over time, because retrofitting the envelope is less likely and more expensive. Weaker building envelopes may also increase the required size, cost and complexity of HVAC systems and may adversely affect occupant comfort. This report presents the results of this technical assistance. The use of modeled energy code compliance to trade off envelope components against shorter-lived building components is not unique to Seattle, and the lessons and possible solutions described in this report have implications for other jurisdictions and energy codes.

  19. Overview of Ecological Agriculture with High Efficiency

    OpenAIRE

    Huang, Guo-qin; Zhao, Qi-guo; Gong, Shao-lin; Shi, Qing-hua

    2012-01-01

    From the presentation, connotation, characteristics, principles, patterns, and technologies of ecological agriculture with high efficiency, we conduct a comprehensive and systematic analysis and discussion of the theoretical and practical progress of ecological agriculture with high efficiency. (i) Ecological agriculture with high efficiency was first advanced in China in 1991. (ii) Ecological agriculture with high efficiency highlights "high efficiency", "ecology", and "combination". (iii) Ecol...

  20. Fuel analysis code FAIR and its high burnup modelling capabilities

    International Nuclear Information System (INIS)

    Prasad, P.S.; Dutta, B.K.; Kushwaha, H.S.; Mahajan, S.C.; Kakodkar, A.

    1995-01-01

    A computer code FAIR has been developed for analysing the performance of water cooled reactor fuel pins. It is capable of analysing high burnup fuels. This code has recently been used to analyse ten high burnup fuel rods irradiated at the Halden reactor. In the present paper, the code FAIR and its various high burnup models are described. The performance of the code FAIR in analysing high burnup fuels and its other applications are highlighted. (author). 21 refs., 12 figs

  1. High resolution tomography using analog coding

    International Nuclear Information System (INIS)

    Brownell, G.L.; Burnham, C.A.; Chesler, D.A.

    1985-01-01

    As part of a 30-year program in the development of positron instrumentation, the authors have developed a high resolution bismuth germanate (BGO) ring tomograph (PCR) employing 360 detectors and 90 photomultiplier tubes for one plane. The detectors are shaped as trapezoids and are 4 mm wide at the front end. When assembled, they form an essentially continuous cylindrical detector. Light from a scintillation in a detector is viewed through a cylindrical light pipe by the photomultiplier tubes. By use of an analog coding scheme, the detector emitting light is identified from the phototube signals; in effect, each phototube can identify four crystals. PCR is designed as a static device and does not use interpolative motion, which gives it a considerable advantage when performing dynamic studies. PCR is the positron tomography analog of the γ-camera widely used in nuclear medicine
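
    A toy version of the analog position coding (our illustration, with invented sharing fractions) shows how one pair of neighbouring phototubes can resolve four crystals: the light-sharing fraction is energy-independent, so the ratio of the two signals names the crystal.

```python
# Invented light-sharing fractions: the fraction of scintillation light
# that reaches tube B depends only on which crystal in the group fired.
SHARE = {0: 0.125, 1: 0.375, 2: 0.625, 3: 0.875}

def signals(crystal, energy=1.0):
    """Light split between the two tubes flanking a 4-crystal group."""
    f = SHARE[crystal]
    return (1.0 - f) * energy, f * energy        # (tube A, tube B)

def decode_crystal(a, b):
    f = b / (a + b)                              # ratio: energy cancels out
    return min(SHARE, key=lambda k: abs(SHARE[k] - f))
```

    Because the decoded quantity is a ratio, the same lookup works regardless of deposited energy, which is what lets 90 tubes resolve 360 crystals.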

  2. FY16 ASME High Temperature Code Activities

    Energy Technology Data Exchange (ETDEWEB)

    Swindeman, M. J. [Chromtech Inc., Oak Ridge, TN (United States); Jetter, R. I. [R. I Jetter Consulting, Pebble Beach, CA (United States); Sham, T. -L. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-09-01

    One of the objectives of the ASME high temperature Code activities is to develop and validate both improvements and the basic features of Section III, Division 5, Subsection HB, Subpart B (HBB). The overall scope of this task is to develop a computer program to be used to assess whether or not a specific component under specified loading conditions will satisfy the elevated temperature design requirements for Class A components in Section III, Division 5, Subsection HB, Subpart B (HBB). There are many features and alternative paths of varying complexity in HBB. The initial focus of this task is a basic path through the various options for a single reference material, 316H stainless steel. However, the program will be structured for eventual incorporation of all the features and permitted materials of HBB. Since this task has recently been initiated, this report focuses on the description of the initial path forward and an overall description of the approach to computer program development.

  3. High-power, high-efficiency FELs

    International Nuclear Information System (INIS)

    Sessler, A.M.

    1989-04-01

    High power, high efficiency FELs require tapering as the particles lose energy, so as to maintain resonance between the electromagnetic wave and the particles. They also require focusing of the particles (usually done with curved pole faces) and focusing of the electromagnetic wave (i.e., optical guiding). In addition, one must avoid transverse beam instabilities (primarily resistive wall) and longitudinal instabilities (i.e., sidebands). 18 refs., 7 figs., 3 tabs
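
    The role of tapering follows directly from the standard FEL resonance condition λ = λ_u/(2γ²)·(1 + K²/2). The short numerical sketch below (with arbitrary example values) shows that when γ drops, the undulator parameter K must be reduced to keep the same wavelength resonant.

```python
import math

def resonant_wavelength(lambda_u, gamma, K):
    """lambda = lambda_u / (2 gamma^2) * (1 + K^2 / 2)"""
    return lambda_u / (2 * gamma ** 2) * (1 + K ** 2 / 2)

def tapered_K(lambda_u, gamma, lam):
    """Undulator parameter that keeps `lam` resonant at a reduced gamma."""
    return math.sqrt(2 * (2 * gamma ** 2 * lam / lambda_u - 1))

lam = resonant_wavelength(0.03, 100.0, 1.0)   # design point (example numbers)
K_tapered = tapered_K(0.03, 95.0, lam)        # after the beam loses energy
```

    Tapering the undulator field (and hence K) along the device compensates for the energy the particles give up to the radiation, which is what enables high extraction efficiency.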

  4. Highly Efficient Compression Algorithms for Multichannel EEG.

    Science.gov (United States)

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques for single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model, have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent root-mean-square distortion, peak signal-to-noise ratio, root-mean-square error, and cross-correlation, show their superiority over the state-of-the-art compression methods.
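
    The benefit of a predictor stage can be demonstrated with a minimal stand-in (ours; zlib's entropy coder substitutes for the Huffman/arithmetic coders studied in the paper): first-order prediction turns a smooth signal into a small-alphabet residual stream that compresses far better, while remaining exactly invertible.

```python
import math
import zlib

# A smooth synthetic "EEG" trace, stored as bytes (mod 256).
samples = [int(100 * math.sin(i / 10)) & 0xFF for i in range(5000)]

# First-order linear predictor: keep the first sample, then deltas mod 256.
residuals = [samples[0]] + [(samples[i] - samples[i - 1]) & 0xFF
                            for i in range(1, len(samples))]

raw_size = len(zlib.compress(bytes(samples), 9))
pred_size = len(zlib.compress(bytes(residuals), 9))
```

    The residual stream concentrates probability mass on a few small symbols, which is exactly the situation Huffman and arithmetic coding exploit; the decoder inverts the predictor by cumulative summation, so the scheme is lossless.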

  5. Energy and Environment Guide to Action - Chapter 4.3: Building Codes for Energy Efficiency

    Science.gov (United States)

    Provides guidance and recommendations for establishing, implementing, and evaluating state building codes for energy efficiency, which improve energy efficiency in new construction and major renovations. State success stories are included for reference.

  6. An Efficient Method for Verifying Gyrokinetic Microstability Codes

    Science.gov (United States)

    Bravenec, R.; Candy, J.; Dorland, W.; Holland, C.

    2009-11-01

    Benchmarks for gyrokinetic microstability codes can be developed through successful ``apples-to-apples'' comparisons among them. Unlike previous efforts, we perform the comparisons for actual discharges, rendering the verification efforts relevant to existing experiments and future devices (ITER). The process requires i) assembling the experimental analyses at multiple times, radii, discharges, and devices, ii) creating the input files ensuring that the input parameters are faithfully translated code-to-code, iii) running the codes, and iv) comparing the results, all in an organized fashion. The purpose of this work is to automate this process as much as possible: At present, a python routine is used to generate and organize GYRO input files from TRANSP or ONETWO analyses. Another routine translates the GYRO input files into GS2 input files. (Translation software for other codes has not yet been written.) Other python codes submit the multiple GYRO and GS2 jobs, organize the results, and collect them into a table suitable for plotting. (These separate python routines could easily be consolidated.) An example of the process -- a linear comparison between GYRO and GS2 for a DIII-D discharge at multiple radii -- will be presented.
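
    The faithful-translation requirement in step ii) suggests one concrete safeguard, sketched here with invented parameter names (real GYRO and GS2 input variables differ): translate through an explicit mapping table and fail loudly on anything unmapped rather than dropping it silently.

```python
# Hypothetical name/scale mapping between two codes' input conventions.
MAPPING = {
    "RADIUS": ("rhoc", 1.0),
    "SAFETY_FACTOR": ("qinp", 1.0),
    "SHEAR": ("shat", 1.0),
}

def translate(params):
    """Convert one code's input dict into the other's; an unmapped key is
    an error, so no physics parameter can be lost in translation."""
    out = {}
    for key, value in params.items():
        if key not in MAPPING:
            raise KeyError(f"no recorded equivalent for {key!r}")
        name, scale = MAPPING[key]
        out[name] = value * scale
    return out
```

    Making the mapping explicit also documents every unit or sign convention difference between the codes in one reviewable place.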

  7. High Efficiency Room Air Conditioner

    Energy Technology Data Exchange (ETDEWEB)

    Bansal, Pradeep [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-01-01

    This project was undertaken as a CRADA project between UT-Battelle and General Electric Company, and was funded by the Department of Energy, to design and develop a high efficiency room air conditioner. A number of novel elements were investigated to improve the energy efficiency of a state-of-the-art WAC with a base capacity of 10,000 BTU/h. One of the major modifications was to downgrade its capacity from 10,000 BTU/hr to 8,000 BTU/hr by replacing the original compressor with a lower capacity (8,000 BTU/hr) but higher efficiency compressor having an EER of 9.7, as compared with 9.3 for the original compressor. However, all heat exchangers from the original unit were retained to provide a higher EER. The other major modifications included: (i) the AC fan motor was replaced by a brushless high efficiency ECM motor along with its fan housing, (ii) the capillary tube was replaced with a needle valve to better control the refrigerant flow and refrigerant set points, and (iii) the unit was tested with a drop-in environmentally friendly binary mixture of R32 (90% molar concentration)/R125 (10% molar concentration). The WAC was tested in the environmental chambers at ORNL at the design rating conditions of AHAM/ASHRAE (outdoor: 95°F and 40% RH; indoor: 80°F and 51.5% RH). All these modifications resulted in enhancing the EER of the WAC by up to 25%.

  8. Efficient coding and detection of ultra-long IDs for visible light positioning systems.

    Science.gov (United States)

    Zhang, Hualong; Yang, Chuanchuan

    2018-05-14

    Visible light positioning (VLP) is a promising technique to complement Global Navigation Satellite Systems (GNSS) such as the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS), offering low cost and high accuracy. VLP is especially valuable in indoor environments, where satellite signals are weak or even unavailable. Large-scale deployment of VLP implies a considerable number of light emitting diode (LED) IDs, which creates a need for long LED ID detection. In particular, to provision indoor localization globally, a convenient approach is to program a unique ID into each LED during manufacture. This poses a big challenge for image sensors, such as the CMOS camera in nearly everyone's pocket, since a long ID spans multiple frames. In this paper, we investigate the detection of ultra-long IDs using rolling shutter cameras. By analyzing the pattern of data loss in each frame, we propose a novel coding technique to improve the efficiency of LED ID detection. We study the performance of the Reed-Solomon (RS) code in this system and design a new coding method that trades off performance against decoding complexity. The coding technique decreases the number of frames needed in data processing, significantly reduces the detection time, and improves the accuracy of detection. Numerical and experimental results show that the detected LED ID can be much longer with the coding technique. Moreover, our proposed coding method is shown to achieve performance close to that of the RS code at much lower decoding complexity.
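The underlying problem, an ID spanning several camera frames with some frames lost, can be illustrated with a much simpler scheme than the RS-based methods studied in the paper: a single XOR parity frame that recovers any one lost frame. All names and sizes below are hypothetical.

```python
from functools import reduce

def split_id(led_id: bytes, frame_size: int) -> list:
    """Split a long LED ID into fixed-size frames (zero-padded at the end)."""
    n_frames = -(-len(led_id) // frame_size)          # ceiling division
    padded = led_id.ljust(n_frames * frame_size, b"\x00")
    return [padded[i:i + frame_size] for i in range(0, len(padded), frame_size)]

def xor_frames(frames) -> bytes:
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), frames)

def encode(led_id: bytes, frame_size: int = 4) -> list:
    """Data frames plus one XOR parity frame (a toy erasure code, not RS)."""
    frames = split_id(led_id, frame_size)
    return frames + [xor_frames(frames)]

def recover(frames: list, lost_index: int) -> bytes:
    """Rebuild the single frame erased at lost_index from all the others."""
    return xor_frames([f for i, f in enumerate(frames) if i != lost_index])

coded = encode(b"LED-0001-BLDG-7F", frame_size=4)   # hypothetical 16-byte ID
assert recover(coded, 2) == coded[2]                # any one lost frame comes back
```

An RS code generalizes this idea: with r parity frames it tolerates any r erased frames, at the cost of the higher decoding complexity the paper's proposed method is designed to avoid.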

  9. Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.

    2012-01-01

    This paper considers using BCH codes for distributed source coding using feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and a side information, Y, such that the marginal probability of each symbol, Xi in X, given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH achieves better performance than LDPCA (Low-Density Parity-Check Accumulate) codes for high correlation between the source symbols and the side information.
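The syndrome-based flavor of distributed source coding that such schemes build on can be illustrated with the Hamming(7,4) code, the simplest BCH code. This toy sketch (not the paper's rate-adaptive construction) transmits only the 3-bit syndrome of X; the decoder corrects the highly correlated side information Y, assumed to differ from X in at most one position.

```python
import numpy as np

# Parity-check matrix of the Hamming(7,4) code: column j is the binary
# representation of j+1, so a single-bit error's syndrome names its position.
H = np.array([[(j + 1) >> k & 1 for j in range(7)] for k in range(3)])

def syndrome(v):
    return tuple(H @ v % 2)

def dsc_decode(y, s_x):
    """Recover x from side information y and the 3-bit syndrome of x,
    assuming x and y differ in at most one position."""
    diff = np.array(s_x) ^ np.array(syndrome(y))   # = H @ (x XOR y)
    x = y.copy()
    if diff.any():
        pos = sum(b << k for k, b in enumerate(diff)) - 1   # error column index
        x[pos] ^= 1
    return x

x = np.array([1, 0, 1, 1, 0, 0, 1])
y = x.copy(); y[4] ^= 1            # highly correlated side information
assert (dsc_decode(y, syndrome(x)) == x).all()
```

Only 3 bits are sent instead of 7, which is the compression gain; a rate-adaptive scheme uses feedback to request more syndrome bits (a stronger BCH code) whenever decoding fails.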

  10. CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.

    Science.gov (United States)

    Saegusa, Jun

    2008-01-01

    The representative point method for the efficiency calibration of volume samples has been proposed previously. To streamline application of the method, a calculation code named CREPT-MCNP has been developed. The code estimates the position of the representative point, which is intrinsic to each shape of volume sample. It also provides self-absorption correction factors for correcting the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.

  11. High-efficient electron linacs

    International Nuclear Information System (INIS)

    Glavatskikh, K.V.; Zverev, B.V.; Kalyuzhnyj, V.E.; Morozov, V.L.; Nikolaev, S.V.; Plotnikov, S.N.; Sobenin, N.P.; Vovna, V.A.; Gryzlov, A.V.

    1993-01-01

    A comparative analysis of travelling-wave and standing-wave electron linacs (ELAs) designed for 10 MeV energy and high efficiency is carried out. It is shown that, from the point of view of dimensions, an ELA with a standing wave or of a combined type is preferable. From the point of view of impedance characteristics, in any variant using a magnetron as the HF generator, special requirements on the accelerating structure must be met if no ferrite isolation is provided in the HF channel. 3 refs., 4 figs., 1 tab.

  12. 77 FR 29322 - Updating State Residential Building Energy Efficiency Codes

    Science.gov (United States)

    2012-05-17

    ... Glass Company North America (AGC). However, DOE notes that PNA/AGC's comment was received late. Although... following. Steel-framed wall insulation Air barrier location Changes whose effect is unclear: Fenestration... code's primary regulation of a home's envelope thermal resistance, or the resistance of the ceilings...

  13. The development of efficient coding for an electronic mail system

    Science.gov (United States)

    Rice, R. F.

    1983-01-01

    Techniques for efficiently representing scanned electronic documents were investigated. Major results include the definition and preliminary performance results of a Universal System for Efficient Electronic Mail (USEEM), offering a potential order of magnitude improvement over standard facsimile techniques for representing textual material.

  14. Efficient Coding of Shape and Transparency for Video Objects

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2007-01-01

    A novel scheme for coding gray-level alpha planes in object-based video is presented. Gray-level alpha planes convey the shape and the transparency information, which are required for smooth composition of video objects. The proposed algorithm is based on the segmentation of the alpha plane into three layers: a binary shape layer, an opaque layer, and an intermediate layer. The latter two layers replace the single transparency layer of MPEG-4 Part 2, and different encoding schemes are specifically designed for each layer, utilizing cross-layer correlations to reduce the bit rate. Results demonstrate that the proposed techniques provide substantial bit rate savings for coding shape and transparency when compared to the tools adopted in MPEG-4 Part 2.
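The three-layer segmentation can be sketched directly from the layer definitions above; the array values and shapes here are illustrative, not the paper's exact procedure.

```python
import numpy as np

def split_alpha(alpha):
    """Segment a gray-level alpha plane into three layers:
    binary shape (support), opaque pixels, and intermediate transparency.
    alpha: uint8 array, 0 = fully transparent .. 255 = fully opaque."""
    shape_layer = alpha > 0                       # binary support of the object
    opaque_layer = alpha == 255                   # pixels needing no transparency value
    intermediate = np.where(shape_layer & ~opaque_layer, alpha, 0)
    return shape_layer, opaque_layer, intermediate

alpha = np.array([[0, 128, 255],
                  [0,   0, 200]], dtype=np.uint8)
s, o, mid = split_alpha(alpha)
assert s.sum() == 3 and o.sum() == 1 and (mid > 0).sum() == 2
```

The point of the split is that each layer has very different statistics: the first two are binary masks suited to bi-level shape coders, and only the (usually small) intermediate layer needs gray-level coding.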

  15. Designing an efficient LT-code with unequal error protection for image transmission

    Science.gov (United States)

    S. Marques, F.; Schwartz, C.; Pinho, M. S.; Finamore, W. A.

    2015-10-01

    The use of images from earth observation satellites spans applications such as car navigation systems and disaster monitoring. In general, those images are captured by on-board imaging devices and must be transmitted to the Earth using a communication system. Even though a high resolution image can produce a better quality of service, it leads to transmitters with high bit rates, which require a large bandwidth and expend a large amount of energy. Therefore, it is very important to design efficient communication systems. From communication theory, it is well known that a source encoder is crucial in an efficient system. In remote sensing satellite image transmission, this efficiency is achieved by using an image compressor to reduce the amount of data which must be transmitted. The Consultative Committee for Space Data Systems (CCSDS), a multinational forum for the development of communications and data system standards for space flight, has established a recommended standard for a data compression algorithm for images from space systems. Unfortunately, in the satellite communication channel, the transmitted signal is corrupted by the presence of noise, interference signals, etc., so the receiver of a digital communication system may fail to recover the transmitted bits. A channel code can be used to reduce the effect of such failures. In 2002, the Luby Transform code (LT-code) was introduced and shown to be very efficient under the binary erasure channel model. Since the effect of a bit recovery failure depends on the position of the bit in the compressed image stream, in the last decade many efforts have been made to develop LT-codes with unequal error protection. In 2012, Arslan et al. showed improvements when LT-codes with unequal error protection were used on images compressed by the SPIHT algorithm. The techniques presented by Arslan et al. can be adapted to work with the algorithm for image compression
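The basic LT-code machinery, encoding as XORs of random source blocks and decoding by peeling degree-1 symbols, can be sketched as follows. The degree distribution here is a crude stand-in for a soliton distribution, and no unequal error protection is modeled.

```python
import random

def lt_encode(blocks, n_symbols, seed=1):
    """Each coded symbol is the XOR of a random subset of source blocks."""
    rng = random.Random(seed)
    k = len(blocks)
    symbols = []
    for _ in range(n_symbols):
        degree = rng.choice([1, 1, 2, 2, 2, 3, 4])  # crude soliton stand-in
        idx = frozenset(rng.sample(range(k), min(degree, k)))
        value = 0
        for i in idx:
            value ^= blocks[i]
        symbols.append((idx, value))
    return symbols

def lt_decode(symbols, k):
    """Peeling decoder: resolve degree-1 symbols, subtract known blocks, repeat."""
    pending = [[set(idx), val] for idx, val in symbols]
    known, changed = {}, True
    while changed and len(known) < k:
        changed = False
        for sym in pending:
            idx, val = sym
            resolved = idx & known.keys()
            for i in resolved:
                val ^= known[i]           # subtract already-decoded blocks
            idx -= resolved
            sym[1] = val
            if len(idx) == 1:             # degree-1: block decoded directly
                (i,) = idx
                if i not in known:
                    known[i] = val
                    changed = True
    return [known.get(i) for i in range(k)]

blocks = [0x3A, 0x7F, 0x15, 0xC2]        # e.g. bytes of a compressed image
decoded = lt_decode(lt_encode(blocks, 12), 4)
# with enough coded symbols the peeling decoder typically recovers every block
```

Unequal error protection variants bias the subset selection so that "important" blocks (e.g. the early bits of a SPIHT stream) appear in more coded symbols and are therefore recovered with higher probability.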

  16. Evidence of translation efficiency adaptation of the coding regions of the bacteriophage lambda.

    Science.gov (United States)

    Goz, Eli; Mioduser, Oriah; Diament, Alon; Tuller, Tamir

    2017-08-01

    Deciphering the way gene expression regulatory aspects are encoded in viral genomes is a challenging mission with ramifications related to all biomedical disciplines. Here, we aimed to understand how evolution shapes the bacteriophage lambda genes by performing a high resolution analysis of ribosomal profiling data and gene expression related synonymous/silent information encoded in bacteriophage coding regions. We demonstrated evidence of selection for distinct compositions of synonymous codons in early and late viral genes related to the adaptation of translation efficiency to different bacteriophage developmental stages. Specifically, we showed that evolution of viral coding regions is driven, among others, by selection for codons with higher decoding rates; during the initial/progressive stages of infection the decoding rates in early/late genes were found to be superior to those in late/early genes, respectively. Moreover, we argued that selection for translation efficiency could be partially explained by adaptation to the Escherichia coli tRNA pool and the fact that it can change during the bacteriophage life cycle. An analysis of additional aspects related to the expression of viral genes, such as mRNA folding and more complex/longer regulatory signals in the coding regions, is also reported. The reported conclusions are likely to be relevant also to additional viruses. © The Author 2017. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.

  17. High-efficiency photovoltaic cells

    Science.gov (United States)

    Yang, H.T.; Zehr, S.W.

    1982-06-21

    High efficiency solar converters comprised of a two-cell, non-lattice-matched, monolithic stacked semiconductor configuration using optimum pairs of cells having bandgaps in the ranges 1.6 to 1.7 eV and 0.95 to 1.1 eV, and a method of fabrication thereof, are disclosed. The high bandgap subcells are fabricated using metal organic chemical vapor deposition (MOCVD), liquid phase epitaxy (LPE) or molecular beam epitaxy (MBE) to produce the required AlGaAs layers of optimized composition, thickness and doping to produce high performance, heteroface homojunction devices. The low bandgap subcells are similarly fabricated from AlGa(As)Sb compositions by LPE, MBE or MOCVD. These subcells are then coupled to form a monolithic structure by an appropriate bonding technique which also forms the required transparent intercell ohmic contact (IOC) between the two subcells. Improved ohmic contacts to the high bandgap semiconductor structure can be formed by vacuum evaporating two suitable metal or semiconductor materials which react during laser annealing to form a low bandgap semiconductor which provides a low contact resistance structure.

  18. Energy Efficiency Building Code for Commercial Buildings in Sri Lanka

    Energy Technology Data Exchange (ETDEWEB)

    Busch, John; Greenberg, Steve; Rubinstein, Francis; Denver, Andrea; Rawner, Esther; Franconi, Ellen; Huang, Joe; Neils, Danielle

    2000-09-30

    1.1.1 To encourage energy efficient design or retrofit of commercial buildings so that they may be constructed, operated, and maintained in a manner that reduces the use of energy without constraining the building function, the comfort, health, or the productivity of the occupants and with appropriate regard for economic considerations. 1.1.2 To provide criteria and minimum standards for energy efficiency in the design or retrofit of commercial buildings and provide methods for determining compliance with them. 1.1.3 To encourage energy efficient designs that exceed these criteria and minimum standards.

  19. Energy Efficiency Building Code for Commercial Buildings in Sri Lanka

    International Nuclear Information System (INIS)

    Busch, John; Greenberg, Steve; Rubinstein, Francis; Denver, Andrea; Rawner, Esther; Franconi, Ellen; Huang, Joe; Neils, Danielle

    2000-01-01

    1.1.1 To encourage energy efficient design or retrofit of commercial buildings so that they may be constructed, operated, and maintained in a manner that reduces the use of energy without constraining the building function, the comfort, health, or the productivity of the occupants and with appropriate regard for economic considerations. 1.1.2 To provide criteria and minimum standards for energy efficiency in the design or retrofit of commercial buildings and provide methods for determining compliance with them. 1.1.3 To encourage energy efficient designs that exceed these criteria and minimum standards

  20. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    Science.gov (United States)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N^2) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  1. Low Complexity Encoder of High Rate Irregular QC-LDPC Codes for Partial Response Channels

    Directory of Open Access Journals (Sweden)

    IMTAWIL, V.

    2011-11-01

    Full Text Available High rate irregular QC-LDPC codes based on circulant permutation matrices, designed for efficient encoder implementation, are proposed in this article. The structure of the code is an approximate lower triangular matrix. In addition, we present two novel efficient encoding techniques for generating the redundant bits. The complexity of the encoder implementation depends on the number of parity bits of the code for one-stage encoding and on the length of the code for two-stage encoding. The advantage of both encoding techniques is that few XOR gates are needed in the encoder implementation. Simulation results on partial response channels also show that the BER performance of the proposed code shows a gain over other QC-LDPC codes.
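The practical appeal of circulant permutation matrices is that multiplying by one is just a cyclic shift, so a QC-LDPC encoder needs only rotations and XORs rather than general matrix arithmetic. A minimal illustration of this primitive (not the proposed encoder itself):

```python
import numpy as np

def circ_shift_mult(vec, shift):
    """P^shift @ vec for a b×b circulant permutation matrix P, computed as a
    cyclic rotation: no matrix is ever stored."""
    return np.roll(vec, shift)

def qc_block_row_mult(blocks, shifts):
    """One block-row of a QC matrix times a vector: XOR of rotated sub-vectors.
    A shift of -1 marks an all-zero block, as in common QC-LDPC notation."""
    acc = np.zeros(len(blocks[0]), dtype=int)
    for blk, s in zip(blocks, shifts):
        if s >= 0:
            acc ^= np.roll(blk, s)
    return acc

# Sanity check against the explicit dense matrix (block size b = 4):
b = 4
P = np.roll(np.eye(b, dtype=int), -1, axis=1)   # P @ v == np.roll(v, 1)
v = np.array([1, 0, 1, 1])
assert (np.linalg.matrix_power(P, 3) @ v == circ_shift_mult(v, 3)).all()
assert (qc_block_row_mult([v, v], [2, -1]) == np.roll(v, 2)).all()
```

An encoder built this way stores only the table of shift values, which is what keeps the XOR-gate count low in hardware.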

  2. Current status of high energy nucleon-meson transport code

    Energy Technology Data Exchange (ETDEWEB)

    Takada, Hiroshi; Sasa, Toshinobu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    The current status of the accelerator design code NMTC/JAERI, an outline of its physical model, and an evaluation of the code's accuracy are reported. To evaluate the nuclear performance of an accelerator and an intense spallation neutron source, the nuclear reactions between high energy protons and target nuclides, and the behavior of the various produced particles, must be known. The nuclear design of the spallation neutron system used a calculation code system coupling the high energy nucleon-meson transport code with the neutron-photon transport code. NMTC/JAERI describes the intranuclear cascade and the subsequent particle evaporation process in competition with fission. Particle transport calculations are carried out for protons, neutrons, π-mesons and μ-mesons. To verify and improve the accuracy of the high energy nucleon-meson transport code, data on spallation and spallation neutron fragments from integral experiments were collected. (S.Y.)

  3. Efficient algorithms for maximum likelihood decoding in the surface code

    Science.gov (United States)

    Bravyi, Sergey; Suchara, Martin; Vargo, Alexander

    2014-09-01

    We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n^2), where n is the number of code qubits. Our implementation uses a reduction from MLD to simulation of matchgate quantum circuits. This reduction, however, requires a special noise model with independent bit-flip and phase-flip errors. Secondly, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(nχ^3), where χ is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization-group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with a bond dimension χ to approximate the sequence of tensors arising in the course of contraction. We benchmark the MPS-based decoder against the standard minimum weight matching decoder, observing a significant reduction of the logical error probability for χ ≥ 4.
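The workhorse of such MPS-based contraction is the truncated SVD that caps the bond dimension at χ, keeping only the χ largest singular values. A minimal numpy sketch of that single step (not the full decoder):

```python
import numpy as np

def truncate_bond(theta, chi):
    """Split a two-site tensor (a d1 x d2 matrix here) into two factors whose
    shared bond dimension is at most chi, via a truncated SVD."""
    u, s, vh = np.linalg.svd(theta, full_matrices=False)
    keep = min(chi, len(s))
    a = u[:, :keep] * s[:keep]       # absorb singular values into the left factor
    b = vh[:keep, :]
    return a, b, s[keep:]            # s[keep:] = discarded weight (truncation error)

rng = np.random.default_rng(0)
theta = rng.normal(size=(8, 8))
a, b, discarded = truncate_bond(theta, chi=4)
# By Eckart-Young, the Frobenius error of the rank-chi approximation equals
# the norm of the dropped singular values:
assert np.isclose(np.linalg.norm(theta - a @ b), np.linalg.norm(discarded))
```

Applying this after each column of the grid is contracted keeps every intermediate tensor at size O(χ), which is what yields the O(nχ³) running time quoted above.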

  4. High-Fidelity Coding with Correlated Neurons

    Science.gov (United States)

    da Silveira, Rava Azeredo; Berry, Michael J.

    2014-01-01

    Positive correlations in the activity of neurons are widely observed in the brain. Previous studies have shown these correlations to be detrimental to the fidelity of population codes, or at best marginally favorable compared to independent codes. Here, we show that positive correlations can enhance coding performance by astronomical factors. Specifically, the probability of discrimination error can be suppressed by many orders of magnitude. Likewise, the number of stimuli encoded—the capacity—can be enhanced more than tenfold. These effects do not necessitate unrealistic correlation values, and can occur for populations with a few tens of neurons. We further show that both effects benefit from heterogeneity commonly seen in population activity. Error suppression and capacity enhancement rest upon a pattern of correlation. Tuning of one or several effective parameters can yield a limit of perfect coding: the corresponding pattern of positive correlation leads to a ‘lock-in’ of response probabilities that eliminates variability in the subspace relevant for stimulus discrimination. We discuss the nature of this pattern and we suggest experimental tests to identify it. PMID:25412463

  5. Entropy Coding in HEVC

    OpenAIRE

    Sze, Vivienne; Marpe, Detlev

    2014-01-01

    Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the latest High Efficiency Video Coding (HEVC) standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both aspects of coding efficiency and throughput were considered. This chapter describes th...

  6. Energy-Efficient Cluster Based Routing Protocol in Mobile Ad Hoc Networks Using Network Coding

    OpenAIRE

    Srinivas Kanakala; Venugopal Reddy Ananthula; Prashanthi Vempaty

    2014-01-01

    In mobile ad hoc networks, all nodes are energy constrained. In such situations, it is important to reduce energy consumption. In this paper, we consider the issues of energy efficient communication in MANETs using network coding. Network coding is an effective method to improve the performance of wireless networks. The COPE protocol implements the network coding concept to reduce the number of transmissions by mixing packets at intermediate nodes. We incorporate COPE into cluster based routing proto...
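COPE's core idea, XOR-mixing packets at a relay so one broadcast replaces several unicasts, can be sketched in a few lines. Packet contents are illustrative; real COPE also handles headers, packet lengths and reception reports.

```python
def relay_mix(pkt_a: bytes, pkt_b: bytes) -> bytes:
    """COPE-style coding at a relay: one broadcast of a XOR b replaces two
    separate forwards (packets zero-padded to equal length)."""
    n = max(len(pkt_a), len(pkt_b))
    a, b = pkt_a.ljust(n, b"\x00"), pkt_b.ljust(n, b"\x00")
    return bytes(x ^ y for x, y in zip(a, b))

def neighbor_decode(mixed: bytes, own_pkt: bytes) -> bytes:
    """A neighbor that already overheard one native packet recovers the other.
    (Real COPE carries the payload length in a header; the rstrip is a toy.)"""
    return relay_mix(mixed, own_pkt).rstrip(b"\x00")

a, b = b"hello-from-A", b"reply-from-B"
coded = relay_mix(a, b)                  # one broadcast instead of two forwards
assert neighbor_decode(coded, a) == b    # the node that sent a recovers b
assert neighbor_decode(coded, b) == a    # and vice versa
```

In the classic two-flows-through-a-relay example this cuts four transmissions to three, and each saved transmission is saved energy, which is the link to the cluster-based routing goal above.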

  7. Towards highly efficient water photoelectrolysis

    Science.gov (United States)

    Elavambedu Prakasam, Haripriya

    ethylene glycol resulted in remarkable growth characteristics of titania nanotube arrays, hexagonally close-packed and up to 1 mm in length, with tube aspect ratios of approximately 10,000. For the first time, complete anodization of the starting titanium foil has been demonstrated, resulting in back-to-back nanotube array membranes ranging from 360 μm to 1 mm in length. The nanotubes exhibited growth rates of up to 15 μm/hr. A detailed study on the factors affecting the growth rate and nanotube dimensions is presented. It is suggested that faster high-field ionic conduction through a thinner barrier layer is responsible for the higher growth rates observed in electrolytes containing ethylene glycol. Methods to fabricate free-standing titania nanotube array membranes ranging in thickness from 50 μm to 1000 μm have also been an outcome of this dissertation. In an effort to combine the charge transport properties of titania with the light absorption properties of iron (III) oxide, films comprised of vertically oriented Ti-Fe-O nanotube arrays on FTO coated glass substrates have been successfully synthesized in ethylene glycol electrolytes. Depending upon the Fe content, the bandgap of the resulting films varied from about 3.26 to 2.17 eV. The Ti-Fe oxide nanotube array films demonstrated a photocurrent of 2 mA/cm² under global AM 1.5 illumination with a 1.2% (two-electrode) photoconversion efficiency, demonstrating a sustained, time-energy normalized hydrogen evolution rate by water splitting of 7.1 mL/W·hr in a 1 M KOH solution with a platinum counter electrode under an applied bias of 0.7 V. The Ti-Fe-O material architecture demonstrates properties useful for hydrogen generation by water photoelectrolysis and, more importantly, this dissertation demonstrates that the general nanotube-array synthesis technique can be extended to other ternary oxide compositions of interest for water photoelectrolysis.

  8. Coded aperture subreflector array for high resolution radar imaging

    Science.gov (United States)

    Lynch, Jonathan J.; Herrault, Florian; Kona, Keerti; Virbila, Gabriel; McGuire, Chuck; Wetzel, Mike; Fung, Helen; Prophet, Eric

    2017-05-01

    HRL Laboratories has been developing a new approach for high resolution radar imaging on stationary platforms. High angular resolution is achieved by operating at 235 GHz and using a scalable tile phased array architecture that has the potential to realize thousands of elements at an affordable cost. HRL utilizes aperture coding techniques to minimize the size and complexity of the RF electronics needed for beamforming, and wafer level fabrication and integration allow tiles containing 1024 elements to be manufactured with reasonable costs. This paper describes the results of an initial feasibility study for HRL's Coded Aperture Subreflector Array (CASA) approach for a 1024 element micromachined antenna array with integrated single-bit phase shifters. Two candidate electronic device technologies were evaluated over the 170 - 260 GHz range, GaN HEMT transistors and GaAs Schottky diodes. Array structures utilizing silicon micromachining and die bonding were evaluated for etch and alignment accuracy. Finally, the overall array efficiency was estimated to be about 37% (not including spillover losses) using full wave array simulations and measured device performance, which is a reasonable value at 235 GHz. Based on the measured data we selected GaN HEMT devices operated passively with 0V drain bias due to their extremely low DC power dissipation.

  9. Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

    Science.gov (United States)

    Młynarski, Wiktor

    2014-01-01

    To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains the formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment. PMID:24639644
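A minimal from-scratch ICA, whitening followed by the symmetric FastICA fixed-point iteration with a tanh nonlinearity, illustrates the kind of linear efficient coding transform involved. The two-channel mixture below is a synthetic stand-in for binaural spectrogram data, not the paper's stimuli.

```python
import numpy as np

def fastica_2(x, iters=300, seed=0):
    """Symmetric FastICA with a tanh nonlinearity for a 2-channel mixture.
    x: array of shape (2, n_samples); returns the unmixed signals."""
    x = x - x.mean(axis=1, keepdims=True)
    d, e = np.linalg.eigh(np.cov(x))              # whiten: decorrelate and
    z = e @ np.diag(d ** -0.5) @ e.T @ x          # scale to unit variance
    w = np.random.default_rng(seed).normal(size=(2, 2))
    for _ in range(iters):
        g = np.tanh(w @ z)                        # nonlinearity g; g' = 1 - g^2
        w = g @ z.T / z.shape[1] - (1 - g**2).mean(axis=1, keepdims=True) * w
        d2, e2 = np.linalg.eigh(w @ w.T)          # symmetric decorrelation:
        w = e2 @ np.diag(d2 ** -0.5) @ e2.T @ w   # W <- (W W^T)^(-1/2) W
    return w @ z

# Synthetic stand-in for binaural data: a sine and a sawtooth, linearly mixed
t = np.linspace(0, 8, 4000)
sources = np.vstack([np.sin(7 * t), ((2 * t) % 2) - 1])
mixed = np.array([[0.6, 0.4], [0.45, 0.55]]) @ sources
recovered = fastica_2(mixed)
# each recovered row should match one source up to sign and scale
corr = np.abs(np.corrcoef(np.vstack([recovered, sources]))[:2, 2:])
```

The learned rows of W play the role of the spectrogram features in the paper: maximally independent (hence efficiently coded) directions in the input, from which task-relevant quantities such as source position can then be decoded.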

  10. HIGH-EFFICIENCY INFRARED RECEIVER

    Directory of Open Access Journals (Sweden)

    A. K. Esman

    2016-01-01

    Full Text Available Recent research and development show promising use of high-performance solid-state receivers of electromagnetic radiation based on low-barrier Schottky diodes. The approach of designing receivers around delta-doped low-barrier Schottky diodes with beam leads and no bias is developing especially actively, because for uncooled receivers of microwave radiation these diodes have virtually no competition. The purpose of this work is to improve the main parameters and characteristics that determine the practical relevance of receivers of mid-infrared electromagnetic radiation at room temperature, by modifying the configuration of the diode electrodes and optimizing the distance between them. The proposed original design of an integrated mid-infrared receiver based on low-barrier Schottky diodes with beam leads allows its main parameters and characteristics to be tuned effectively. Simulation of the electromagnetic characteristics of the proposed receiver using the HFSS software package, whose finite element method computes the behavior of electromagnetic fields on an arbitrary geometry with predetermined material properties, has shown that when the inner parts of the electrodes of the low-barrier Schottky diode are given a concentric elliptical convex-concave shape, the reflection losses can be reduced to -57.75 dB and the standing wave ratio to 1.003, while the directivity increases up to 23 at a wavelength of 6.09 μm. In this case, the rounding radii of the inner parts of the anode and cathode electrodes are 212 nm and 318 nm, respectively, and the gap between them is 106 nm. These parameters will improve the efficiency of prospective infrared optical and electronic equipment for various purposes intended for operation in the mid-infrared wavelength range.

  11. High-efficiency wind turbine

    Science.gov (United States)

    Hein, L. A.; Myers, W. N.

    1980-01-01

    A vertical axis wind turbine incorporates several unique features to extract more energy from the wind, increasing efficiency 20% over conventional propeller driven units. The system also features devices that utilize solar energy or chimney effluents during periods of no wind.

  12. High efficiency, long life terrestrial solar panel

    Science.gov (United States)

    Chao, T.; Khemthong, S.; Ling, R.; Olah, S.

    1977-01-01

    The design of a high efficiency, long life terrestrial module was completed. It utilized 256 rectangular, high efficiency solar cells to achieve high packing density and electrical output. Tooling for the fabrication of solar cells was in house and evaluation of the cell performance was begun. Based on the power output analysis, the goal of a 13% efficiency module was achievable.

  13. High efficiency turbine blade coatings

    Energy Technology Data Exchange (ETDEWEB)

    Youchison, Dennis L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gallis, Michail A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-06-01

    The development of advanced thermal barrier coatings (TBCs) of yttria stabilized zirconia (YSZ) that exhibit lower thermal conductivity through better control of electron beam - physical vapor deposition (EB-PVD) processing is of prime interest to both the aerospace and power industries. This report summarizes the work performed under a two-year Lab-Directed Research and Development (LDRD) project (38664) to produce lower thermal conductivity, graded-layer thermal barrier coatings for turbine blades in an effort to increase the efficiency of high temperature gas turbines. This project was sponsored by the Nuclear Fuel Cycle Investment Area. Therefore, particular importance was given to the processing of the large blades required for industrial gas turbines proposed for use in the Brayton cycle of nuclear plants powered by high temperature gas-cooled reactors (HTGRs). During this modest (~1 full-time equivalent (FTE)) project, the processing technology was developed to create graded TBCs by coupling ion beam-assisted deposition (IBAD) with substrate pivoting in the alumina-YSZ system. The Electron Beam - 1200 kW (EB-1200) PVD system was used to deposit a variety of TBC coatings with micron layered microstructures and thermal conductivity reduced below 1.5 W/m·K. The use of IBAD produced fully stoichiometric coatings at a reduced substrate temperature of 600°C and a reduced oxygen background pressure of 0.1 Pa. IBAD was also used to successfully demonstrate the transitioning of amorphous PVD-deposited alumina to the α-phase alumina required as an oxygen diffusion barrier and for good adhesion to the substrate Ni2Al3 bondcoat. This process replaces the time-consuming thermally grown oxide formation required before the YSZ deposition. In addition to the process technology, Direct Simulation Monte Carlo plume modeling and spectroscopic characterization of the PVD plumes were performed. The project consisted of five tasks. These included the

  14. Aerosol Sampling and Transport Efficiency Calculation (ASTEC) and application to the Surtsey/DCH aerosol sampling system: Code version 1.0: Code description and user's manual

    International Nuclear Information System (INIS)

    Yamano, N.; Brockmann, J.E.

    1989-05-01

    This report describes the features and use of the Aerosol Sampling and Transport Efficiency Calculation (ASTEC) code. The ASTEC code has been developed to assess aerosol transport efficiency in source term experiments at Sandia National Laboratories. The code also has broad application to aerosol sampling and transport efficiency calculations in general, as well as to aerosol transport considerations in nuclear reactor safety issues. 32 refs., 31 figs., 7 tabs

  15. 76 FR 42688 - Updating State Residential Building Energy Efficiency Codes

    Science.gov (United States)

    2011-07-19

    ... the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) considers high... determination could potentially impact States. The American Chemistry Council (ACC) submitted a written comment...-Conditioning Engineers. [[Page 42692

  16. 76 FR 64924 - Updating State Residential Building Energy Efficiency Codes

    Science.gov (United States)

    2011-10-19

    ... required to be high- efficacy. Air distribution systems--leakage control requirements. Hot water pipe... would have had higher leakage rates as a result of leaks that would not be detected by visual inspection... the vast majority of the supply air downstream to the rest of the distribution system. Section R403.2...

  17. Simulative Investigation on Spectral Efficiency of Unipolar Codes based OCDMA System using Importance Sampling Technique

    Science.gov (United States)

    Farhat, A.; Menif, M.; Rezig, H.

    2013-09-01

    This paper analyses the spectral efficiency of an Optical Code Division Multiple Access (OCDMA) system using the Importance Sampling (IS) technique. We consider three configurations of the OCDMA system, namely Direct Sequence (DS), Spectral Amplitude Coding (SAC), and Fast Frequency Hopping (FFH), that exploit Fiber Bragg Grating (FBG) based encoders/decoders. We evaluate the spectral efficiency of the considered system by taking into consideration the effect of different families of unipolar codes for both coherent and incoherent sources. The results show that the spectral efficiency of the OCDMA system with a coherent source is higher than in the incoherent case. We also demonstrate that DS-OCDMA outperforms the other two configurations in terms of spectral efficiency under all conditions.

  18. High-Speed Turbo-TCM-Coded Orthogonal Frequency-Division Multiplexing Ultra-Wideband Systems

    Directory of Open Access Journals (Sweden)

    Wang Yanxia

    2006-01-01

    One of the UWB proposals in the IEEE P802.15 WPAN project is to use a multiband orthogonal frequency-division multiplexing (OFDM) system and punctured convolutional codes for UWB channels supporting data rates up to 480 Mbps. In this paper, we improve the proposed system using turbo TCM with a QAM constellation for higher data rate transmission. We construct a punctured parity-concatenated trellis code, in which a TCM code is used as the inner code and a simple parity-check code is employed as the outer code. The results show that the system can offer a much higher spectral efficiency, for example, 1.2 Gbps, which is 2.5 times higher than the proposed system. We identify several essential requirements to achieve the high rate transmission, for example, frequency and time diversity and multilevel error protection. Results are confirmed by density evolution.

  19. Applications of the Los Alamos High Energy Transport code

    International Nuclear Information System (INIS)

    Waters, L.; Gavron, A.; Prael, R.E.

    1992-01-01

    Simulation codes reliable over a large range of energies are essential to analyze the environment of vehicles and habitats proposed for space exploration. The LAHET Monte Carlo code has recently been expanded to track high energy hadrons with FLUKA, while retaining the original Los Alamos version of HETC at lower energies. Electrons and photons are transported with EGS4, and an interface to the MCNP Monte Carlo code is provided to analyze neutrons with kinetic energies below 20 MeV. These codes are benchmarked by comparison of LAHET/MCNP calculations to data from the Brookhaven experiment E814 participant calorimeter

  20. High energy particle transport code NMTC/JAM

    International Nuclear Information System (INIS)

    Niita, K.; Takada, H.; Meigo, S.; Ikeda, Y.

    2001-01-01

    We have developed a high energy particle transport code, NMTC/JAM, which is an upgraded version of NMTC/JAERI97. The available energy range of NMTC/JAM is, in principle, extended to 200 GeV for nucleons and mesons by including the high energy nuclear reaction code JAM for the intra-nuclear cascade part. We compare calculations by the NMTC/JAM code with experimental data from thin and thick targets for proton-induced reactions up to several tens of GeV. The results of the NMTC/JAM code show excellent agreement with the experimental data. From this code validation, it is concluded that NMTC/JAM is reliable for neutronics optimization studies of high-intensity spallation neutron utilization facilities. (author)

  1. Energy-Efficient Cluster Based Routing Protocol in Mobile Ad Hoc Networks Using Network Coding

    Directory of Open Access Journals (Sweden)

    Srinivas Kanakala

    2014-01-01

    In mobile ad hoc networks, all nodes are energy-constrained. In such situations, it is important to reduce energy consumption. In this paper, we consider the issues of energy-efficient communication in MANETs using network coding. Network coding is an effective method to improve the performance of wireless networks. The COPE protocol implements the network coding concept to reduce the number of transmissions by mixing packets at intermediate nodes. We incorporate COPE into a cluster-based routing protocol to further reduce energy consumption. The proposed energy-efficient coding-aware cluster-based routing protocol (ECCRP) scheme applies network coding at cluster heads to reduce the number of transmissions. We also modify the queue management procedure of the COPE protocol to further improve coding opportunities, and we use an energy-efficient scheme when selecting the cluster head, which helps to increase the lifetime of the network. We evaluate the performance of the proposed energy-efficient cluster-based protocol using simulation. Simulation results show that the proposed ECCRP algorithm reduces energy consumption and increases the lifetime of the network.
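
    The transmission saving that COPE exploits can be illustrated with a toy relay exchange. This is only a sketch of the XOR-mixing idea, not the ECCRP protocol; the packet contents and node names are hypothetical:

    ```python
    # Sketch of COPE-style network coding at a relay. Nodes A and B exchange
    # packets via relay R. Instead of forwarding each packet separately
    # (2 transmissions), R broadcasts the XOR of the two packets
    # (1 transmission); each node decodes using the packet it already holds.

    def xor_bytes(p, q):
        """XOR two equal-length byte strings."""
        return bytes(a ^ b for a, b in zip(p, q))

    pkt_a = b"hello-from-A-000"   # packet A -> B, stored at the relay
    pkt_b = b"hello-from-B-111"   # packet B -> A, stored at the relay

    coded = xor_bytes(pkt_a, pkt_b)          # single broadcast from the relay

    recovered_at_b = xor_bytes(coded, pkt_b)  # B holds pkt_b, recovers pkt_a
    recovered_at_a = xor_bytes(coded, pkt_a)  # A holds pkt_a, recovers pkt_b

    assert recovered_at_b == pkt_a and recovered_at_a == pkt_b
    ```

    Every coded broadcast that replaces two unicast forwards halves the relay's transmission count for that packet pair, which is the energy saving the cluster heads exploit.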

  2. An Efficient SF-ISF Approach for the Slepian-Wolf Source Coding Problem

    Directory of Open Access Journals (Sweden)

    Tu Zhenyu

    2005-01-01

    A simple but powerful scheme exploiting the binning concept for asymmetric lossless distributed source coding is proposed. The novelty in the proposed scheme is the introduction of a syndrome former (SF) in the source encoder and an inverse syndrome former (ISF) in the source decoder to efficiently exploit an existing linear channel code without the need to modify the code structure or the decoding strategy. For most channel codes, the construction of SF-ISF pairs is a light task. For parallel and serially concatenated codes, and particularly parallel and serial turbo codes, where this appears less obvious, an efficient way of constructing linear-complexity SF-ISF pairs is demonstrated. It is shown that the proposed SF-ISF approach is simple, provably optimal, and generally applicable to any linear channel code. Simulation using conventional and asymmetric turbo codes demonstrates a compression rate that is only 0.06 bit/symbol from the theoretical limit, which is among the best results reported so far.
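
    The SF/ISF mechanism can be sketched with a small linear code. The example below uses a (7,4) Hamming code as an assumed stand-in for the paper's turbo codes (chosen because its syndrome decoding is trivial): the encoder transmits only the 3-bit syndrome of x, and the decoder combines it with correlated side information y:

    ```python
    import numpy as np

    # Parity-check matrix of the (7,4) Hamming code: column j is the binary
    # representation of j+1, so any single-bit error maps to a unique syndrome.
    H = np.array([[(j >> i) & 1 for j in range(1, 8)] for i in range(3)])

    def syndrome(v):
        return tuple(H.dot(v) % 2)

    # Correlated sources: y differs from x in at most one position.
    x = np.array([1, 0, 1, 1, 0, 0, 1])
    y = x.copy(); y[4] ^= 1           # side information at the decoder

    s = syndrome(x)                    # SF: transmit 3 bits instead of 7

    # ISF at the decoder: the difference syndrome equals H*(x XOR y), which
    # identifies the single position where x and y disagree.
    diff = tuple((a + b) % 2 for a, b in zip(s, syndrome(y)))
    e = np.zeros(7, dtype=int)
    if any(diff):
        pos = sum(b << i for i, b in enumerate(diff)) - 1  # column index in H
        e[pos] = 1
    x_hat = (y + e) % 2
    assert (x_hat == x).all()          # lossless recovery from 3 bits + y
    ```

    The 3/7 rate here plays the role of the near-limit compression rates the paper reports for turbo codes; the binning structure is the same, only the code is weaker.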

  3. High efficiency motor selection handbook

    Science.gov (United States)

    McCoy, Gilbert A.; Litman, Todd; Douglass, John G.

    1990-10-01

    Substantial reductions in energy and operational costs can be achieved through the use of energy-efficient electric motors. A handbook was compiled to help industry identify opportunities for cost-effective application of these motors. It covers the economic and operational factors to be considered when motor purchase decisions are being made. Its audience includes plant managers, plant engineers, and others interested in energy management or preventative maintenance programs.

  4. Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

    Directory of Open Access Journals (Sweden)

    Wiktor eMlynarski

    2014-03-01

    To date, a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing the coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient-coding hypothesis explains the formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient-coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds, extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment.

  5. Dynamic Allocation and Efficient Distribution of Data Among Multiple Clouds Using Network Coding

    DEFF Research Database (Denmark)

    Sipos, Marton A.; Fitzek, Frank; Roetter, Daniel Enrique Lucani

    2014-01-01

    Distributed storage has attracted large interest lately from both industry and researchers as a flexible, cost-efficient, high-performance, and potentially secure solution for geographically distributed data centers, edge caching, or sharing storage among users. This paper studies the benefits … of random linear network coding to exploit multiple commercially available cloud storage providers simultaneously, with the possibility to constantly adapt to changing cloud performance in order to optimize data retrieval times. The main contribution of this paper is a new data distribution mechanism … that cleverly stores and moves data among different clouds in order to optimize performance. Furthermore, we investigate the trade-offs among storage space, reliability, and data retrieval speed for our proposed scheme. By means of real-world implementation and measurements using well-known and publicly…

  6. High-fidelity plasma codes for burn physics

    Energy Technology Data Exchange (ETDEWEB)

    Cooley, James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Graziani, Frank [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Marinak, Marty [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Murillo, Michael [Michigan State Univ., East Lansing, MI (United States)

    2016-10-19

    Accurate predictions of equation of state (EOS) and ionic and electronic transport properties are of critical importance for high-energy-density plasma science. Transport coefficients inform radiation-hydrodynamic codes and impact diagnostic interpretation, which in turn impacts our understanding of the development of instabilities, the overall energy balance of burning plasmas, and the efficacy of self-heating from charged-particle stopping. Important processes include thermal and electrical conduction, electron-ion coupling, inter-diffusion, ion viscosity, and charged-particle stopping. However, uncertainties in these coefficients are not well established. Fundamental plasma science codes, also called high-fidelity plasma codes (HFPCs), are a relatively recent computational tool that augments both experimental data and the theoretical foundations of transport coefficients. This paper addresses the current status of HFPC codes and their future development, and the role they can play in improving the predictive capability of the multi-physics hydrodynamic codes used in HED design.

  7. Jet formation and equatorial superrotation in Jupiter's atmosphere: Numerical modelling using a new efficient parallel code

    Science.gov (United States)

    Rivier, Leonard Gilles

    Using an efficient parallel code solving the primitive equations of atmospheric dynamics, the jet structure of a Jupiter-like atmosphere is modeled. In the first part of this thesis, a parallel spectral code solving both the shallow water equations and the multi-level primitive equations of atmospheric dynamics is built. The implementation of this code, called BOB, is done so that it runs effectively on an inexpensive cluster of workstations. A one-dimensional decomposition and transposition method ensuring load balancing among processes is used. The Legendre transform is cache-blocked. Computing the Legendre polynomials used in the spectral method on the fly produces a lower memory footprint and enables high resolution runs on machines with relatively small memory. Performance studies are done using a cluster of workstations located at the National Center for Atmospheric Research (NCAR). The performance of BOB is compared to the parallel benchmark code PSTSWM and the dynamical core of NCAR's CCM3.6.6. In both cases, the comparison favors BOB. In the second part of this thesis, the primitive equation version of the code described in part I is used to study the formation of organized zonal jets and equatorial superrotation in a planetary atmosphere whose parameters are chosen to best model the upper atmosphere of Jupiter. Two levels are used in the vertical and only large-scale forcing is present. The model is forced towards a baroclinically unstable flow, so that eddies are generated by baroclinic instability. We consider several types of forcing, acting on either the temperature or the momentum field. We show that only under very specific parametric conditions do zonally elongated structures form and persist, resembling the jet structure observed near the cloud-level top (1 bar) on Jupiter. 
We also study the effect of an equatorial heat source, meant to be a crude representation of the effect of the deep convective planetary interior onto the outer atmospheric layer. We

  8. Test of Effective Solid Angle code for the efficiency calculation of volume source

    Energy Technology Data Exchange (ETDEWEB)

    Kang, M. Y.; Kim, J. H.; Choi, H. D. [Seoul National Univ., Seoul (Korea, Republic of); Sun, G. M. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    It is hard to determine a full-energy (FE) absorption peak efficiency curve for an arbitrary volume source by experiment. For this reason, simulation and semi-empirical methods have been preferred so far, and many works have progressed along various lines. Moens et al. introduced the concept of the effective solid angle by considering the attenuation effect of γ-rays in the source, the media, and the detector; this concept underlies a semi-empirical method. An Effective Solid Angle code (ESA code) has been developed over several years by the Applied Nuclear Physics Group at Seoul National University. The ESA code converts an experimental FE efficiency curve determined using a standard point source to that for a volume source. To test the performance of the ESA code, we measured point standard sources and voluminous certified reference material (CRM) sources of γ-rays, and compared the results with efficiency curves obtained in this study. The 200-1500 keV energy region is fitted well. NIST X-ray mass attenuation coefficient data are currently used to check for the effect of linear attenuation only. We will use the interaction cross-section data obtained from the XCOM code to check each contributing factor, such as the photoelectric effect, incoherent scattering, and coherent scattering, in the future. To minimize the calculation time and simplify the code, optimization of the algorithm is needed.
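
    The effective-solid-angle idea can be illustrated with a short numerical integration. The geometry and coefficients below are assumed for illustration and are not taken from the ESA code: an on-axis cylindrical source above a circular detector face, with each source slice weighted by its solid angle and by self-attenuation in the source material.

    ```python
    import math

    def solid_angle(d, R):
        """Solid angle of a disc of radius R seen on-axis from distance d."""
        return 2 * math.pi * (1 - d / math.hypot(d, R))

    R = 2.5     # detector radius, cm (assumed)
    d0 = 1.0    # gap between source bottom and detector, cm (assumed)
    h = 3.0     # source height, cm (assumed)
    mu = 0.15   # linear attenuation coefficient of the source, 1/cm (assumed)

    # Volume-averaged weight: integrate solid angle times self-attenuation
    # over the source height (on-axis path approximation).
    n = 1000
    dz = h / n
    num = sum(solid_angle(d0 + (i + 0.5) * dz, R)
              * math.exp(-mu * (i + 0.5) * dz) * dz for i in range(n))
    weight_volume = num / h
    weight_point = solid_angle(d0, R)   # reference on-axis point source

    # The conversion factor rescales a measured point-source FE efficiency
    # to an estimate for the volume source.
    factor = weight_volume / weight_point
    print(f"conversion factor = {factor:.3f}")
    ```

    Because every slice sits farther from the detector than the reference point and is additionally attenuated, the factor is always between 0 and 1 in this geometry; the real ESA code performs the analogous integral over the full 3-D source with energy-dependent attenuation data.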

  9. High data rate coding for the space station telemetry links.

    Science.gov (United States)

    Lumb, D. R.; Viterbi, A. J.

    1971-01-01

    Coding systems for high data rates were examined from the standpoint of potential application in space-station telemetry links. Approaches considered included convolutional codes with sequential, Viterbi, and cascaded-Viterbi decoding. It was concluded that a high-speed (40 Mbps) sequential decoding system best satisfies the requirements for the assumed growth potential and specified constraints. Trade-off studies leading to this conclusion are reviewed, and some sequential (Fano) algorithm improvements are discussed, together with real-time simulation results.

  10. High efficiency focus neutron generator

    Science.gov (United States)

    Sadeghi, H.; Amrollahi, R.; Zare, M.; Fazelpour, S.

    2017-12-01

    In the present paper, a new idea to increase the neutron yield of plasma focus devices is investigated and the results are presented. Many studies have shown that more than 90% of the neutrons in plasma focus devices are produced by beam-target interactions and only 10% are due to thermonuclear reactions. With the proposed idea, the number of collisions between deuteron ions and deuterium gas atoms is increased remarkably. COMSOL Multiphysics 5.2 was used to study the idea in 28 known plasma focus devices, and the neutron yield of each system was obtained and reported. It was found that in the ENEA device with a 1 Hz working frequency, 1.1 × 10⁹ and 1.1 × 10¹¹ neutrons per second were produced by D-D and D-T reactions, respectively. In addition, in the NX2 device with a 16 Hz working frequency, 1.34 × 10¹⁰ and 1.34 × 10¹² neutrons per second were produced by D-D and D-T reactions, respectively. The results show that, given the sizes and energies of these devices, they can be used as efficient neutron generators.

  11. Highly parallel line-based image coding for many cores.

    Science.gov (United States)

    Peng, Xiulian; Xu, Jizheng; Zhou, You; Wu, Feng

    2012-01-01

    Computers are developing along a new trend from dual-core and quad-core processors to ones with tens or even hundreds of cores. Multimedia, one of the most important applications on computers, has an urgent need for parallel coding algorithms for compression. Taking intraframe/image coding as a starting point, this paper proposes a pure line-by-line coding scheme (LBLC) to meet this need. In LBLC, an input image is processed line by line sequentially, and each line is divided into small fixed-length segments. The compression of all segments, from prediction to entropy coding, is completely independent and concurrent across many cores. Results on a general-purpose computer show that our scheme achieves a 13.9-times speedup with 15 cores at the encoder and a 10.3-times speedup at the decoder. Ideally, such a near-linear speedup relation with the number of cores can be kept for more than 100 cores. In addition to the high parallelism, the proposed scheme performs comparably to or even better than the H.264 high profile above middle bit rates. At near-lossless coding, it outperforms H.264 by more than 10 dB. At lossless coding, up to 14% bit-rate reduction is observed compared with H.264 lossless coding at the high 4:4:4 profile.
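
    The essence of LBLC's parallelism can be sketched as follows. The left-neighbor delta predictor and zlib are assumed stand-ins for the paper's actual prediction and entropy coding; what matters is that each fixed-length segment is coded with no dependence on any other segment:

    ```python
    import zlib
    from concurrent.futures import ThreadPoolExecutor

    SEG = 64  # fixed segment length in pixels (assumed)

    def encode_segment(seg):
        # Left-neighbor prediction inside the segment, then zlib as a
        # stand-in for the entropy coder. No state crosses segment borders.
        residual = bytes((seg[i] - (seg[i - 1] if i else 0)) & 0xFF
                         for i in range(len(seg)))
        return zlib.compress(residual)

    def decode_segment(blob):
        residual = zlib.decompress(blob)
        out, prev = [], 0
        for r in residual:
            prev = (prev + r) & 0xFF
            out.append(prev)
        return bytes(out)

    # One synthetic 512-pixel scan line split into independent segments.
    line = bytes(40 + (i % 7) for i in range(512))
    segments = [line[i:i + SEG] for i in range(0, len(line), SEG)]

    # Segments are independent, so they can be mapped across workers/cores.
    with ThreadPoolExecutor() as pool:
        encoded = list(pool.map(encode_segment, segments))
        decoded = list(pool.map(decode_segment, encoded))

    assert b"".join(decoded) == line   # lossless round trip
    ```

    Cutting the prediction context at segment boundaries is exactly the trade the paper makes: a small compression loss buys full concurrency.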

  12. Efficient DS-UWB MUD Algorithm Using Code Mapping and RVM

    Directory of Open Access Journals (Sweden)

    Pingyan Shi

    2016-01-01

    A hybrid multiuser detection (MUD) scheme using code mapping and wrong-code recognition based on a relevance vector machine (RVM) is developed for direct-sequence ultra-wideband (DS-UWB) systems to cope with multiple access interference (MAI) while retaining computational efficiency. A new MAI suppression mechanism is studied in the following steps. Firstly, code mapping, an optimal decision function, is constructed, and the output candidate code of the matched filter is mapped to a feature space by this function. In the feature space, simulation results show that the error codes caused by MAI and the single-user mapped codes can be classified by a threshold which is related to the SNR of the receiver. Then, on the basis of code mapping, the RVM is used to distinguish the wrong codes from the right ones and finally correct them. Compared with traditional MUD approaches, the proposed method can considerably improve the bit error ratio (BER) performance due to its special MAI suppression mechanism. Simulation results also show that the proposed method can approximately achieve the BER performance of optimal multiuser detection (OMD), while its computational complexity approximately equals that of the matched filter. Moreover, the proposed method is less sensitive to the number of users.

  13. An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, Allison, E-mail: lewis.allison10@gmail.com [Department of Mathematics, North Carolina State University, Raleigh, NC 27695 (United States); Smith, Ralph [Department of Mathematics, North Carolina State University, Raleigh, NC 27695 (United States); Williams, Brian [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Figueroa, Victor [Sandia National Laboratories, Albuquerque, NM 87185 (United States)

    2016-11-01

    For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
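
    The sequential design loop can be sketched under strong simplifying assumptions that are not the paper's: a linear low-fidelity model with Gaussian noise, for which the information gain from a candidate design point has a closed form. The basis functions, noise level, and candidate grid below are all illustrative:

    ```python
    import numpy as np

    # Greedy sequential design: at each step, pick the candidate design
    # condition whose (hypothetical) high-fidelity evaluation most shrinks
    # the posterior uncertainty of the low-fidelity parameters, measured by
    # the log-determinant of the posterior covariance (D-optimality).

    cands = np.linspace(0.0, 1.0, 21)        # candidate design conditions

    def features(x):
        # assumed quadratic low-fidelity model: y = a + b*x + c*x^2 + noise
        return np.array([1.0, x, x * x])

    sigma2 = 0.05 ** 2                        # observation noise (assumed)
    cov = np.eye(3)                           # prior covariance on (a, b, c)
    chosen = []
    for _ in range(4):                        # select 4 design points
        best, best_gain, best_post = None, -np.inf, None
        for x in cands:
            f = features(x)[:, None]
            # Posterior covariance after observing at x (rank-1 update);
            # for linear-Gaussian models it does not depend on the data.
            post = cov - cov @ f @ f.T @ cov / (sigma2 + float(f.T @ cov @ f))
            gain = np.linalg.slogdet(cov)[1] - np.linalg.slogdet(post)[1]
            if gain > best_gain:
                best, best_gain, best_post = x, gain, post
        chosen.append(best)
        cov = best_post
    print("selected design conditions:", chosen)
    ```

    In the paper's setting the gain must instead be estimated from the nonlinear low-fidelity model, but the loop structure (score every candidate, evaluate the high-fidelity code at the winner, update, repeat) is the same.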

  14. An Efficient VQ Codebook Search Algorithm Applied to AMR-WB Speech Coding

    Directory of Open Access Journals (Sweden)

    Cheng-Yu Yeh

    2017-04-01

    The adaptive multi-rate wideband (AMR-WB) speech codec is widely used in modern mobile communication systems for high speech quality in handheld devices. Nonetheless, a major disadvantage is that vector quantization (VQ) of immittance spectral frequency (ISF) coefficients takes a considerable computational load in AMR-WB coding. Accordingly, a binary search space-structured VQ (BSS-VQ) algorithm is adopted to efficiently reduce the complexity of ISF quantization in AMR-WB. This search algorithm uses a fast locating technique combined with lookup tables, such that an input vector is efficiently assigned to a subspace where relatively few codeword searches need to be executed. In terms of overall search performance, this work is experimentally validated as superior to multiple triangular inequality elimination (MTIE), TIE with dynamic and intersection mechanisms (DI-TIE), and the equal-average equal-variance equal-norm nearest neighbor search (EEENNS) approach. With a full search algorithm as the benchmark for overall search load, this work provides an 87% search load reduction at a quantization accuracy threshold of 0.96, far beyond the 55% of MTIE, 76% of the EEENNS approach, and 83% of DI-TIE.
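
    The subspace-search idea can be sketched as follows. The sign-pattern binning below is a simplified, assumed stand-in for the BSS-VQ lookup tables, not the codec's actual structure:

    ```python
    import numpy as np

    # Codewords are pre-binned offline by the sign pattern of their first few
    # components; at search time an input vector is hashed to the same bin
    # and only that bin's codewords are compared, with a full search as a
    # fallback for empty bins. This trades a small accuracy risk for a large
    # reduction in distance computations.

    rng = np.random.default_rng(1)
    codebook = rng.standard_normal((256, 16))   # 256 codewords, dim 16 (toy)

    KEY_DIMS = 3                                 # bits used for the binary key

    def key(v):
        return tuple((v[:KEY_DIMS] > 0).astype(int))

    bins = {}
    for idx, c in enumerate(codebook):           # offline table construction
        bins.setdefault(key(c), []).append(idx)

    def search(v):
        cand = list(bins.get(key(v)) or range(len(codebook)))
        d = ((codebook[cand] - v) ** 2).sum(axis=1)
        return cand[int(np.argmin(d))], len(cand)

    v = rng.standard_normal(16)
    best, examined = search(v)
    print(f"examined {examined}/256 codewords, best index {best}")
    ```

    With 3 key bits the expected bin holds roughly 1/8 of the codebook, mirroring the order-of-magnitude search-load reductions reported above; the real BSS-VQ tables are designed so the quantization accuracy loss stays tightly bounded.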

  15. Green Mobile Clouds: Network Coding and User Cooperation for Improved Energy Efficiency

    DEFF Research Database (Denmark)

    Heide, Janus; Fitzek, Frank; Pedersen, Morten Videbæk

    2012-01-01

    This paper highlights the benefits of user cooperation and network coding for energy saving in cellular networks. It is shown that these techniques allow for reliable and efficient multicast services from both a user and network perspective. The working principles and advantages in terms of energy...

  16. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate the benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers to entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  17. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    Science.gov (United States)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being carried out to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open-domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  18. Coding efficiency of AVS 2.0 for CBAC and CABAC engines

    Science.gov (United States)

    Cui, Jing; Choi, Youngkyu; Chae, Soo-Ik

    2015-12-01

    In this paper we compare the coding efficiency of two entropy-coding engines: the Context-based Binary Arithmetic Coding (CBAC)[2] engine in AVS 2.0[1] and the Context-Adaptive Binary Arithmetic Coding (CABAC)[3] engine in HEVC[4]. For a fair comparison, the CABAC is embedded in the AVS reference code RD10.1, just as the CBAC was embedded in the HEVC in our previous work[5]. In the RD code, the rate estimation table is employed only for RDOQ; to reduce the computational complexity of the video encoder, we therefore modified the RD code so that the rate estimation table is employed for all RDO decisions. Furthermore, we also simplify the rate estimation table by reducing the bit depth of its fractional part from 8 to 2. The simulation results show that the CABAC has a BD-rate loss of about 0.7% compared to the CBAC. It seems that the CBAC is a little more efficient than the CABAC in AVS 2.0.

  19. High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution

    Science.gov (United States)

    Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin

    2016-01-01

    Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycles. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding, with an efficiency of 93.7%.
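
    The quasi-cyclic extension step can be illustrated with a toy lifted parity-check matrix. The base matrix and lifting factor below are assumed for illustration, not the paper's PEG-optimized design:

    ```python
    import numpy as np

    # Quasi-cyclic expansion: each entry of a small base matrix is replaced
    # by a Z x Z circulant, an identity matrix cyclically shifted by the
    # entry's value (-1 denotes the all-zero block). This extends a short
    # protograph to a long block length while keeping H highly structured.

    Z = 8                                    # lifting (expansion) factor
    base = np.array([[0, 2, -1, 1],          # shift values; -1 = zero block
                     [3, -1, 5, 0]])

    def circulant(shift):
        if shift < 0:
            return np.zeros((Z, Z), dtype=int)
        return np.roll(np.eye(Z, dtype=int), shift, axis=1)

    H = np.block([[circulant(s) for s in row] for row in base])
    print("H shape:", H.shape)               # (2*Z, 4*Z) = (16, 32)

    # Every non-zero block contributes exactly one 1 per row and per column.
    assert H.sum() == Z * (base >= 0).sum()
    ```

    Choosing the shift values carefully (as the PEG/quasi-cyclic combination does) is what removes the short cycles that would otherwise hurt iterative decoding.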

  20. Statistical physics inspired energy-efficient coded-modulation for optical communications.

    Science.gov (United States)

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2012-04-15

    Because Shannon's entropy can be obtained by Stirling's approximation of thermodynamics entropy, the statistical physics energy minimization methods are directly applicable to the signal constellation design. We demonstrate that statistical physics inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose the discrete-time implementation of D-dimensional transceiver and corresponding EE polarization-division multiplexed system. © 2012 Optical Society of America
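
    The statistical-physics flavor of such constellation design can be sketched with a toy particle-repulsion loop. This is an illustrative analogy under assumed parameters, not the authors' algorithm: the M constellation points are treated as mutually repelling particles, and the set is renormalized to unit average power after every step, so the points spread apart at fixed mean energy.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    M = 8
    pts = rng.standard_normal((M, 2))            # random 2-D start
    pts -= pts.mean(axis=0)
    pts /= np.sqrt((pts ** 2).sum(axis=1).mean())  # unit average power

    for _ in range(500):
        diff = pts[:, None, :] - pts[None, :, :]             # pairwise vectors
        d2 = (diff ** 2).sum(-1) + np.eye(M)                 # avoid div-by-zero
        force = (diff / d2[..., None] ** 1.5).sum(axis=1)    # 1/d^2 repulsion
        pts += 0.01 * force                                  # "energy" descent
        pts -= pts.mean(axis=0)                              # zero-mean signal
        pts /= np.sqrt((pts ** 2).sum(axis=1).mean())        # unit avg power

    dmin = min(np.linalg.norm(pts[i] - pts[j])
               for i in range(M) for j in range(i + 1, M))
    print(f"minimum distance after optimization: {dmin:.3f}")
    ```

    Maximizing pairwise separation at fixed average power is the geometric core of energy-efficient constellation design; the paper's method additionally shapes the point probabilities via the thermodynamic analogy and pairs the result with LDPC coding.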

  1. Novel secure and bandwidth efficient optical code division multiplexed system for future access networks

    Science.gov (United States)

    Singh, Simranjit

    2016-12-01

    In this paper, a spectrally coded optical code division multiple access (OCDMA) system using a hybrid modulation scheme has been investigated. The idea is to propose an effective approach for simultaneously improving system capacity and security. The data formats NRZ (non-return to zero), DQPSK (differential quadrature phase shift keying), and PolSK (polarisation shift keying) are used to obtain the orthogonally modulated signal. It is observed that the proposed hybrid modulation provides efficient utilisation of bandwidth, increases the data capacity and enhances the data confidentiality over existing OCDMA systems. Further, the proposed system's performance is compared with current state-of-the-art OCDMA schemes.

  2. Critical study of high efficiency deep grinding

    OpenAIRE

    Johnstone, Iain

    2002-01-01

    In recent years, the aerospace industry in particular has embraced and actively pursued the development of stronger high performance materials, namely nickel based superalloys and hardwearing steels. This has resulted in a need for a more efficient method of machining, and this need was answered with the advent of High Efficiency Deep Grinding (HEDG). This relatively new process using Cubic Boron Nitride (CBN) electroplated grinding wheels has been investigated through experim...

  3. AREVA Advanced Fuel Design and Codes and Methods - Increasing Reliability, Operating Margin and Efficiency in Operation

    Energy Technology Data Exchange (ETDEWEB)

    Frichet, A.; Mollard, P.; Gentet, G.; Lippert, H. J.; Curva-Tivig, F.; Cole, S.; Garner, N.

    2014-07-01

    For three decades, AREVA has been incrementally implementing upgrades in BWR and PWR fuel design and codes and methods, leading to ever greater fuel efficiency and easier licensing. For PWRs, AREVA is implementing upgraded versions of its HTP{sup T}M and AFA 3G technologies called HTP{sup T}M-I and AFA3G-I. These fuel assemblies feature improved robustness and dimensional stability through the ultimate optimization of their hold-down system, the use of Q12 (the AREVA advanced quaternary alloy) for the guide tubes, the increase in their wall thickness, and the stiffening of the spacer to guide tube connection. But an even bigger step forward has been achieved as AREVA has successfully developed and introduced to the market the GAIA product, which maintains the resistance to grid-to-rod fretting (GTRF) of the HTP{sup T}M product while providing additional thermal-hydraulic margin and high resistance to fuel assembly bow. (Author)

  4. Extending DIRAC File Management with Erasure-Coding for efficient storage

    CERN Document Server

    Skipsey, Samuel Cadellin; Britton, David; Crooks, David; Roy, Gareth

    2015-01-01

    The state of the art in Grid style data management is to achieve increased resilience of data via multiple complete replicas of data files across multiple storage endpoints. While this is effective, it is not the most space-efficient approach to resilience, especially when the reliability of individual storage endpoints is sufficiently high that only a few will be inactive at any point in time. We report on work performed as part of GridPP, extending the Dirac File Catalogue and file management interface to allow the placement of erasure-coded files: each file distributed as N identically-sized chunks of data striped across a vector of storage endpoints, encoded such that any M chunks can be lost and the original file can be reconstructed. The tools developed are transparent to the user, and, as well as allowing uploading and downloading of data to Grid storage, also provide the possibility of parallelising access across all of the distributed chunks at once, improving data transfer and IO performance. ...

  5. Extending DIRAC File Management with Erasure-Coding for efficient storage.

    Science.gov (United States)

    Cadellin Skipsey, Samuel; Todev, Paulin; Britton, David; Crooks, David; Roy, Gareth

    2015-12-01

    The state of the art in Grid style data management is to achieve increased resilience of data via multiple complete replicas of data files across multiple storage endpoints. While this is effective, it is not the most space-efficient approach to resilience, especially when the reliability of individual storage endpoints is sufficiently high that only a few will be inactive at any point in time. We report on work performed as part of GridPP[1], extending the Dirac File Catalogue and file management interface to allow the placement of erasure-coded files: each file distributed as N identically-sized chunks of data striped across a vector of storage endpoints, encoded such that any M chunks can be lost and the original file can be reconstructed. The tools developed are transparent to the user, and, as well as allowing uploading and downloading of data to Grid storage, also provide the possibility of parallelising access across all of the distributed chunks at once, improving data transfer and IO performance. We expect this approach to be of most interest to smaller VOs, who have tighter bounds on the storage available to them, but larger (WLCG) VOs may be interested as their total data increases during Run 2. We provide an analysis of the costs and benefits of the approach, along with future development and implementation plans in this area. In general, overheads for multiple file transfers provide the largest issue for competitiveness of this approach at present.
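
    For intuition on the space saving, compare raw-storage overheads at equal loss tolerance (a simple sketch; the chunk counts are illustrative, not the paper's configuration):

```python
def storage_overhead(n_chunks, m_tolerated):
    """A file striped over N chunks that survives any M losses carries
    N - M chunks' worth of payload, so raw bytes per payload byte is
    N / (N - M)."""
    return n_chunks / (n_chunks - m_tolerated)

# both schemes below tolerate the loss of any 2 storage endpoints
replication = storage_overhead(3, 2)    # three full replicas
erasure = storage_overhead(10, 2)       # 10 chunks, any 8 reconstruct
print(replication, erasure)             # 3.0 vs 1.25
```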

  6. Perceptual scale expansion: an efficient angular coding strategy for locomotor space.

    Science.gov (United States)

    Durgin, Frank H; Li, Zhi

    2011-08-01

    Whereas most sensory information is coded on a logarithmic scale, linear expansion of a limited range may provide a more efficient coding for the angular variables important to precise motor control. In four experiments, we show that the perceived declination of gaze, like the perceived orientation of surfaces, is coded on a distorted scale. The distortion seems to arise from a nearly linear expansion of the angular range close to horizontal/straight ahead and is evident in explicit verbal and nonverbal measures (Experiments 1 and 2), as well as in implicit measures of perceived gaze direction (Experiment 4). The theory is advanced that this scale expansion (by a factor of about 1.5) may serve a functional goal of coding efficiency for angular perceptual variables. The scale expansion of perceived gaze declination is accompanied by a corresponding expansion of perceived optical slants in the same range (Experiments 3 and 4). These dual distortions can account for the explicit misperception of distance typically obtained by direct report and exocentric matching, while allowing for accurate spatial action to be understood as the result of calibration.
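
    A numeric sketch of the distance misperception the abstract mentions: the 1.5 gain is the paper's figure, while the 1.6 m eye height and geometric read-out below are assumed for illustration. If gaze declination is perceived with a 1.5 gain, a ground target's distance judged from gaze direction comes out too short:

```python
import math

def perceived_distance(eye_height, gaze_declination_deg, gain=1.5):
    """If gaze declination gamma is perceived as gain * gamma, a ground
    target actually at eye_height / tan(gamma) is judged to lie at
    eye_height / tan(gain * gamma), i.e. nearer than it is."""
    g = math.radians(gaze_declination_deg)
    return eye_height / math.tan(gain * g)

h = 1.6                                    # assumed eye height (m)
gamma = math.degrees(math.atan2(h, 10.0))  # gaze at a target 10 m away
print(f"{perceived_distance(h, gamma):.1f} m")  # well short of 10 m
```

    The paper's point is that the matching expansion of perceived optical slant compensates this distortion, so visually guided action stays accurate.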

  7. Property-based Code Slicing for Efficient Verification of OSEK/VDX Operating Systems

    Directory of Open Access Journals (Sweden)

    Mingyu Park

    2012-12-01

    Full Text Available Testing is a de-facto verification technique in industry, but it is insufficient for identifying subtle issues due to its optimistic incompleteness. On the other hand, model checking is a powerful technique that supports comprehensiveness, and is thus suitable for the verification of safety-critical systems. However, it generally requires more knowledge and costs more than testing. This work attempts to take advantage of both techniques to achieve integrated and efficient verification of OSEK/VDX-based automotive operating systems. We propose property-based environment generation and model extraction techniques using static code analysis, which can be applied to both model checking and testing. The technique is automated and applied to an OSEK/VDX-based automotive operating system, Trampoline. Comparative experiments using random testing and model checking for the verification of assertions in the Trampoline kernel code show how our environment generation and abstraction approach can be utilized for efficient fault detection.

  8. Channel coding and data compression system considerations for efficient communication of planetary imaging data

    Science.gov (United States)

    Rice, R. F.

    1974-01-01

    End-to-end system considerations involving channel coding and data compression are reported which could drastically improve the efficiency in communicating pictorial information from future planetary spacecraft. In addition to presenting new and potentially significant system considerations, this report attempts to fill a need for a comprehensive tutorial which makes much of this very subject accessible to readers whose disciplines lie outside of communication theory.

  9. The efficiency and fidelity of the in-core nuclear fuel management code FORMOSA-P

    International Nuclear Information System (INIS)

    Kropaczek, D.J.; Turinsky, P.J.

    1994-01-01

    The second-order generalized perturbation theory (GPT) nodal neutronic model utilized within the nuclear fuel management optimization code FORMOSA-P is presented in the context of prediction fidelity and computational efficiency relative to the forward solution. Key features of the GPT neutronics model as implemented within the simulated annealing optimization adaptive control algorithm are discussed. Supporting results are then presented demonstrating the superior consistency of adaptive control for both global and local optimization searches. (authors). 15 refs., 1 fig., 4 tabs

  10. BBU code development for high-power microwave generators

    International Nuclear Information System (INIS)

    Houck, T.L.; Westenskow, G.A.; Yu, S.S.

    1992-01-01

    We are developing a two-dimensional, time-dependent computer code for the simulation of transverse instabilities in support of relativistic klystron-two beam accelerator research at LLNL. The code addresses transient effects as well as both cumulative and regenerative beam breakup modes. Although designed specifically for the transport of high current (kA) beams through traveling-wave structures, it is applicable to devices consisting of multiple combinations of standing-wave, traveling-wave, and induction accelerator structures. In this paper we compare code simulations to analytical solutions for the case where there is no rf coupling between cavities, to theoretical scaling parameters for coupled cavity structures, and to experimental data involving beam breakup in the two traveling-wave output structure of our microwave generator. (Author) 4 figs., tab., 5 refs

  11. UNIPIC code for simulations of high power microwave devices

    International Nuclear Information System (INIS)

    Wang Jianguo; Zhang Dianhui; Wang Yue; Qiao Hailiang; Li Xiaoze; Liu Chunliang; Li Yongdong; Wang Hongguang

    2009-01-01

    In this paper, UNIPIC code, a new member in the family of fully electromagnetic particle-in-cell (PIC) codes for simulations of high power microwave (HPM) generation, is introduced. In the UNIPIC code, the electromagnetic fields are updated using the second-order, finite-difference time-domain (FDTD) method, and the particles are moved using the relativistic Newton-Lorentz force equation. The convolutional perfectly matched layer method is used to truncate the open boundaries of HPM devices. To model curved surfaces and avoid the time step reduction in the conformal-path FDTD method, the CP weakly conditional-stable FDTD (WCS FDTD) method, which combines the WCS FDTD and CP-FDTD methods, is implemented. UNIPIC is two-and-a-half dimensional, is written in the object-oriented C++ language, and can be run on a variety of platforms including WINDOWS, LINUX, and UNIX. Users can employ the graphical user interface to create the geometric structures of the simulated HPM devices, or load structures created previously. Numerical experiments on some typical HPM devices by using the UNIPIC code are given. The results are compared to those obtained from some well-known PIC codes, which agree well with each other.

  12. UNIPIC code for simulations of high power microwave devices

    Science.gov (United States)

    Wang, Jianguo; Zhang, Dianhui; Liu, Chunliang; Li, Yongdong; Wang, Yue; Wang, Hongguang; Qiao, Hailiang; Li, Xiaoze

    2009-03-01

    In this paper, UNIPIC code, a new member in the family of fully electromagnetic particle-in-cell (PIC) codes for simulations of high power microwave (HPM) generation, is introduced. In the UNIPIC code, the electromagnetic fields are updated using the second-order, finite-difference time-domain (FDTD) method, and the particles are moved using the relativistic Newton-Lorentz force equation. The convolutional perfectly matched layer method is used to truncate the open boundaries of HPM devices. To model curved surfaces and avoid the time step reduction in the conformal-path FDTD method, the CP weakly conditional-stable FDTD (WCS FDTD) method, which combines the WCS FDTD and CP-FDTD methods, is implemented. UNIPIC is two-and-a-half dimensional, is written in the object-oriented C++ language, and can be run on a variety of platforms including WINDOWS, LINUX, and UNIX. Users can employ the graphical user interface to create the geometric structures of the simulated HPM devices, or load structures created previously. Numerical experiments on some typical HPM devices by using the UNIPIC code are given. The results are compared to those obtained from some well-known PIC codes, which agree well with each other.

  13. Comparison study of inelastic analysis codes for high temperature structure

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Bum; Lee, H. Y.; Park, C. K.; Geon, G. P.; Lee, J. H

    2004-02-01

    LMR high temperature structures subjected to operating and transient loadings may exhibit very complex deformation behaviors due to the use of ductile materials such as 316SS, so systematic analysis technology for high temperature structures is essential for reliable safety assessment. In this project, a comparative study of the developed inelastic analysis program NONSTA and existing analysis codes was performed, applying various types of loading including non-proportional loading. The performance of NONSTA was confirmed and the effect of the inelastic constants on the analysis results was analyzed. The applicability of inelastic analysis was also broadened by applying both the developed program and the existing codes to analyses of the enhanced creep behavior and the elastic follow-up behavior of high temperature structures, and the items needing improvement were identified. Further work on improving the NONSTA program and on determining proper values of the inelastic constants is necessary.

  14. Using lattice tools and unfolding methods for hpge detector efficiency simulation with the Monte Carlo code MCNP5

    International Nuclear Information System (INIS)

    Querol, A.; Gallardo, S.; Ródenas, J.; Verdú, G.

    2015-01-01

    In environmental radioactivity measurements, High Purity Germanium (HPGe) detectors are commonly used due to their excellent resolution. Efficiency calibration of detectors is essential to determine the activity of radionuclides. The Monte Carlo method has proved to be a powerful tool to complement efficiency calculations. In aged detectors, efficiency is partially deteriorated due to the growth of the dead layer and the consequent shrinkage of the active volume. The characterization of the radiation transport in the dead layer is essential for a realistic HPGe simulation. In this work, the MCNP5 code is used to calculate the detector efficiency. The F4MESH tally is used to determine the photon and electron fluence in the dead layer and the active volume. The energy deposited in the Ge has been analyzed using the *F8 tally. The F8 tally is used to obtain spectra and to calculate the detector efficiency. When the photon fluence and the energy deposition in the crystal are known, some unfolding methods can be used to estimate the activity of a given source. In this way, the efficiency is obtained and serves to verify the value obtained by other methods. - Highlights: • The MCNP5 code is used to estimate the dead layer thickness of an HPGe detector. • The F4MESH tally is applied to verify where interactions occur within the Ge crystal. • PHD and the energy deposited are obtained with the F8 and *F8 tallies, respectively. • An average dead layer between 70 and 80 µm is obtained for the HPGe studied. • The efficiency is calculated by applying the TSVD method to the response matrix.
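
    The TSVD step can be sketched as follows (with a toy response matrix, not the detector's): truncating the SVD inverts only the well-conditioned part of the response when solving for the source activity, suppressing noise amplification from small singular values.

```python
import numpy as np

def tsvd_unfold(response, measured, k):
    """Truncated-SVD solution of response @ activity = measured:
    invert only the k largest singular values, zeroing the rest."""
    u, s, vt = np.linalg.svd(response, full_matrices=False)
    inv_s = np.where(np.arange(s.size) < k, 1.0 / s, 0.0)
    return vt.T @ (inv_s * (u.T @ measured))

# toy 4x3 response (counts per unit activity), 3 hypothetical sources
R = np.array([[0.30, 0.05, 0.01],
              [0.04, 0.25, 0.03],
              [0.01, 0.05, 0.20],
              [0.02, 0.02, 0.05]])
activity_true = np.array([100.0, 50.0, 80.0])
measured = R @ activity_true          # noiseless synthetic spectrum
activity_est = tsvd_unfold(R, measured, k=3)
print(np.round(activity_est, 1))
```

    With noisy data one would choose k below full rank, trading a small bias for stability.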

  15. Progress of OLED devices with high efficiency at high luminance

    Science.gov (United States)

    Nguyen, Carmen; Ingram, Grayson; Lu, Zhenghong

    2014-03-01

    Organic light emitting diodes (OLEDs) have progressed significantly over the last two decades. For years, OLEDs have been promoted as the next generation technology for flat panel displays and solid-state lighting due to their potential for high energy efficiency and dynamic range of colors. Although high efficiency can readily be obtained at low brightness levels, a significant decline at high brightness is commonly observed. In this report, we will review various strategies for achieving highly efficient phosphorescent OLED devices at high luminance. Specifically, we will provide details regarding the performance and general working principles behind each strategy. We will conclude by looking at how some of these strategies can be combined to produce high efficiency white OLEDs at high brightness.

  16. Measure Guideline. High Efficiency Natural Gas Furnaces

    Energy Technology Data Exchange (ETDEWEB)

    Brand, L. [Partnership for Advanced Residential Retrofit (PARR), Des Plaines, IL (United States); Rose, W. [Partnership for Advanced Residential Retrofit (PARR), Des Plaines, IL (United States)

    2012-10-01

    This measure guideline covers installation of high-efficiency gas furnaces, including: when to install a high-efficiency gas furnace as a retrofit measure; how to identify and address risks; and the steps to be used in the selection and installation process. The guideline is written for Building America practitioners and HVAC contractors and installers. It includes a compilation of information provided by manufacturers, researchers, and the Department of Energy as well as recent research results from the Partnership for Advanced Residential Retrofit (PARR) Building America team.

  17. Measure Guideline: High Efficiency Natural Gas Furnaces

    Energy Technology Data Exchange (ETDEWEB)

    Brand, L.; Rose, W.

    2012-10-01

    This Measure Guideline covers installation of high-efficiency gas furnaces. Topics covered include when to install a high-efficiency gas furnace as a retrofit measure, how to identify and address risks, and the steps to be used in the selection and installation process. The guideline is written for Building America practitioners and HVAC contractors and installers. It includes a compilation of information provided by manufacturers, researchers, and the Department of Energy as well as recent research results from the Partnership for Advanced Residential Retrofit (PARR) Building America team.

  18. Experiments on high efficiency aerosol filtration

    International Nuclear Information System (INIS)

    Mazzini, M.; Cuccuru, A.; Kunz, P.

    1977-01-01

    Research on high efficiency aerosol filtration by the Nuclear Engineering Institute of Pisa University and by CAMEN in collaboration with CNEN is outlined. HEPA filter efficiency was studied as a function of the type and size of the test aerosol, and as a function of flowrate (±50% of the nominal value), air temperature (up to 70 °C), relative humidity (up to 100%), and durability in a corrosive atmosphere (up to 140 hours in NaCl mist). In the selected experimental conditions these influences were appreciable but not large enough to be significant in industrial HEPA filter applications. Planned future research is outlined: measurement of the efficiency of two HEPA filters in series using a fixed particle size; dependence of the efficiency on air temperatures up to 300-500 °C; performance when subject to smoke from burning organic materials (natural rubber, neoprene, miscellaneous plastics). Such studies are relevant to possible accidental fires in a plutonium laboratory

  19. An Efficient Code-Timing Estimator for DS-CDMA Systems over Resolvable Multipath Channels

    Directory of Open Access Journals (Sweden)

    Jian Li

    2005-04-01

    Full Text Available We consider the problem of training-based code-timing estimation for the asynchronous direct-sequence code-division multiple-access (DS-CDMA system. We propose a modified large-sample maximum-likelihood (MLSML estimator that can be used for the code-timing estimation for the DS-CDMA systems over the resolvable multipath channels in closed form. Simulation results show that MLSML can be used to provide a high correct acquisition probability and a high estimation accuracy. Simulation results also show that MLSML can have very good near-far resistant capability due to employing a data model similar to that for adaptive array processing where strong interferences can be suppressed.

  20. Building Energy Efficiency in India: Compliance Evaluation of Energy Conservation Building Code

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Sha; Evans, Meredydd; Delgado, Alison

    2014-03-26

    India is experiencing an unprecedented construction boom. The country doubled its floorspace between 2001 and 2005 and is expected to add 35 billion m2 of new buildings by 2050. Buildings account for 35% of total final energy consumption in India today, and building energy use is growing at 8% annually. Studies have shown that carbon policies will have little effect on reducing building energy demand. Chaturvedi et al. predicted that, if there are no specific sectoral policies to curb building energy use, final energy demand of the Indian building sector will grow over five times by the end of this century, driven by rapid income and population growth. The growing energy demand in buildings is accompanied by a transition from traditional biomass to commercial fuels, particularly an increase in electricity use. This also leads to a rapid increase in carbon emissions and aggravates power shortages in India. Growth in building energy use poses challenges to the Indian government. To curb energy consumption in buildings, the Indian government issued the Energy Conservation Building Code (ECBC) in 2007, which applies to commercial buildings with a connected load of 100 kW or 120 kVA. It is predicted that the implementation of ECBC can help save 25-40% of energy, compared to reference buildings without energy-efficiency measures. However, the impact of ECBC depends on the effectiveness of its enforcement and compliance. Currently, the majority of buildings in India are not ECBC-compliant. The United Nations Development Programme projected that code compliance in India would reach 35% by 2015 and 64% by 2017. Whether the projected targets can be achieved depends on how the code enforcement system is designed and implemented. Although the development of ECBC lies in the hands of the national government (the Bureau of Energy Efficiency under the Ministry of Power), the adoption and implementation of ECBC largely rely on state and local governments. Six years after ECBC

  1. High efficiency, variable geometry, centrifugal cryogenic pump

    International Nuclear Information System (INIS)

    Forsha, M.D.; Nichols, K.E.; Beale, C.A.

    1994-01-01

    A centrifugal cryogenic pump has been developed which has a basic design that is rugged and reliable with variable speed and variable geometry features that achieve high pump efficiency over a wide range of head-flow conditions. The pump uses a sealless design and rolling element bearings to achieve high reliability and the ruggedness to withstand liquid-vapor slugging. The pump can meet a wide range of variable head, off-design flow requirements and maintain design point efficiency by adjusting the pump speed. The pump also has features that allow the impeller and diffuser blade heights to be adjusted. The adjustable height blades were intended to enhance the pump efficiency when it is operating at constant head, off-design flow rates. For small pumps, the adjustable height blades are not recommended. For larger pumps, they could provide off-design efficiency improvements. This pump was developed for supercritical helium service, but the design is well suited to any cryogenic application where high efficiency is required over a wide range of head-flow conditions
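
    The speed adjustment relies on the standard pump affinity laws (flow scales with speed, head with speed squared), which the abstract invokes for maintaining design-point efficiency. A brief sketch with illustrative numbers:

```python
def speed_ratio_for(target_q, target_h, design_q, design_h):
    """Affinity laws: Q scales with speed N and H with N**2, so a
    target duty lies on the design point's similarity parabola
    H = H_d * (Q / Q_d)**2 exactly when both ratios give the same N."""
    n_from_flow = target_q / design_q
    n_from_head = (target_h / design_h) ** 0.5
    return n_from_flow, n_from_head

# hypothetical design point 10 L/s at 50 m head; target 7 L/s, 24.5 m
n_q, n_h = speed_ratio_for(7.0, 24.5, 10.0, 50.0)
print(round(n_q, 3), round(n_h, 3))  # equal ratios: run at 70% speed
```

    When the two ratios disagree, the duty is off the similarity parabola and speed alone cannot hold design-point efficiency, which is where the adjustable blade heights would come in.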

  2. AREVA Developments for an Efficient and Reliable use of Monte Carlo codes for Radiation Transport Applications

    Science.gov (United States)

    Chapoutier, Nicolas; Mollier, François; Nolin, Guillaume; Culioli, Matthieu; Mace, Jean-Reynald

    2017-09-01

    In the context of the rise of Monte Carlo transport calculations for all kinds of applications, AREVA recently improved its suite of engineering tools in order to produce an efficient Monte Carlo workflow. Monte Carlo codes, such as MCNP or TRIPOLI, are recognized as reference codes for a large range of radiation transport problems. However, the inherent drawbacks of these codes - laborious input file creation and long computation times - contrast with the maturity of their treatment of the physical phenomena. The goal of the recent AREVA developments was to reach an efficiency similar to that of other mature engineering disciplines such as finite element analysis (e.g. structural or fluid dynamics). Among the main objectives, the creation of a graphical user interface offering CAD tools for geometry creation and other graphical features dedicated to the radiation field (source definition, tally definition) has been achieved. Computation times are drastically reduced compared to a few years ago thanks to massive parallel runs and, above all, the implementation of hybrid variance reduction techniques. Engineering teams are now able to deliver much prompter support to nuclear projects dealing with reactors or fuel cycle facilities, from conceptual phase to decommissioning.

  3. AREVA Developments for an Efficient and Reliable use of Monte Carlo codes for Radiation Transport Applications

    Directory of Open Access Journals (Sweden)

    Chapoutier Nicolas

    2017-01-01

    Full Text Available In the context of the rise of Monte Carlo transport calculations for all kinds of applications, AREVA recently improved its suite of engineering tools in order to produce an efficient Monte Carlo workflow. Monte Carlo codes, such as MCNP or TRIPOLI, are recognized as reference codes for a large range of radiation transport problems. However, the inherent drawbacks of these codes - laborious input file creation and long computation times - contrast with the maturity of their treatment of the physical phenomena. The goal of the recent AREVA developments was to reach an efficiency similar to that of other mature engineering disciplines such as finite element analysis (e.g. structural or fluid dynamics). Among the main objectives, the creation of a graphical user interface offering CAD tools for geometry creation and other graphical features dedicated to the radiation field (source definition, tally definition) has been achieved. Computation times are drastically reduced compared to a few years ago thanks to massive parallel runs and, above all, the implementation of hybrid variance reduction techniques. Engineering teams are now able to deliver much prompter support to nuclear projects dealing with reactors or fuel cycle facilities, from conceptual phase to decommissioning.

  4. High-Speed Soft-Decision Decoding of Two Reed-Muller Codes

    Science.gov (United States)

    Lin, Shu; Uehara, Gregory T.

    1996-01-01

    In this research, we have proposed the (64, 40, 8) subcode of the third-order Reed-Muller (RM) code to NASA for high-speed satellite communications. This RM subcode can be used either alone or as an inner code of a concatenated coding system with the NASA standard (255, 223, 33) Reed-Solomon (RS) code as the outer code to achieve high performance (or low bit-error rate) with reduced decoding complexity. It can also be used as a component code in a multilevel bandwidth efficient coded modulation system to achieve reliable bandwidth efficient data transmission. This report will summarize the key progress we have made toward achieving our eventual goal of implementing a decoder system based upon this code. In the first phase of study, we investigated the complexities of various sectionalized trellis diagrams for the proposed (64, 40, 8) RM subcode. We found a specific 8-section trellis diagram for this code which requires the least decoding complexity with a high possibility of achieving a decoding speed of 600 M bits per second (Mbps). The combination of a large number of states and a high data rate will be made possible due to the utilization of a high degree of parallelism throughout the architecture. This trellis diagram will be presented and briefly described. In the second phase of study which was carried out through the past year, we investigated circuit architectures to determine the feasibility of VLSI implementation of a high-speed Viterbi decoder based on this 8-section trellis diagram. We began to examine specific design and implementation approaches to implement a fully custom integrated circuit (IC) which will be a key building block for a decoder system implementation. The key results will be presented in this report. This report will be divided into three primary sections. First, we will briefly describe the system block diagram in which the proposed decoder is assumed to be operating and present some of the key architectural approaches being used to

  5. Adaptive under relaxation factor of MATRA code for the efficient whole core analysis

    International Nuclear Information System (INIS)

    Kwon, Hyuk; Kim, S. J.; Seo, K. W.; Hwang, D. H.

    2013-01-01

    Such nonlinearities are handled in the MATRA code using an outer iteration with a Picard scheme, which involves successive updating of the coefficient matrix based on previously calculated values. The scheme is simple and effective for nonlinear problems, but its effectiveness depends greatly on the under-relaxation capability. Accuracy and speed of calculation are very sensitive to the under-relaxation factor used in the outer iteration that updates the axial mass flow from the continuity equation. In MATRA this factor is generally a fixed, empirically determined value. Adapting the under-relaxation factor during the outer iteration is expected to improve the calculation effectiveness of the MATRA code compared with a fixed factor. The present study describes the implementation of adaptive under-relaxation within the subchannel code MATRA. Picard iterations with adaptive under-relaxation can accelerate the convergence of mass conservation in the subchannel code MATRA. The most efficient approach to adaptive under-relaxation, however, appears to be strongly problem dependent.
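
    The idea can be sketched on a scalar fixed-point problem (an illustrative stand-in, not MATRA's equations): here |g'| = 2 at the root, so plain Picard (omega = 1) oscillates and diverges, while a residual-driven under-relaxation factor keeps the iteration stable.

```python
def picard_adaptive(g, x0, omega=0.5, tol=1e-10, max_iter=200):
    """Picard fixed-point iteration x <- (1-w)*x + w*g(x) with an
    adaptive under-relaxation factor: grow w while the residual keeps
    shrinking, halve it as soon as an iterate overshoots."""
    x, prev_res = x0, float("inf")
    for it in range(max_iter):
        gx = g(x)
        res = abs(gx - x)
        if res < tol:
            return x, it
        if res < prev_res:
            omega = min(1.0, omega * 1.1)   # converging: relax less
        else:
            omega = max(0.05, omega * 0.5)  # overshoot: damp harder
        x = (1.0 - omega) * x + omega * gx
        prev_res = res
    return x, max_iter

# scalar stand-in with g'(2) = -2 at the root x = 2
g = lambda x: 8.0 / (x * x)
root, iters = picard_adaptive(g, 0.2)
print(round(root, 6), iters)
```

    A fixed factor must be tuned per problem (here any fixed omega above 2/3 diverges); the adaptive controller finds a workable damping on its own, which is the benefit claimed for the subchannel solver.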

  6. BINGO: a code for the efficient computation of the scalar bi-spectrum

    Science.gov (United States)

    Hazra, Dhiraj Kumar; Sriramkumar, L.; Martin, Jérôme

    2013-05-01

    We present a new and accurate Fortran code, the BI-spectra and Non-Gaussianity Operator (BINGO), for the efficient numerical computation of the scalar bi-spectrum and the non-Gaussianity parameter fNL in single field inflationary models involving the canonical scalar field. The code can calculate all the different contributions to the bi-spectrum and the parameter fNL for an arbitrary triangular configuration of the wavevectors. Focusing firstly on the equilateral limit, we illustrate the accuracy of BINGO by comparing the results from the code with the spectral dependence of the bi-spectrum expected in power law inflation. Then, considering an arbitrary triangular configuration, we contrast the numerical results with the analytical expression available in the slow roll limit, for, say, the case of the conventional quadratic potential. Considering a non-trivial scenario involving deviations from slow roll, we compare the results from the code with the analytical results that have recently been obtained in the case of the Starobinsky model in the equilateral limit. As an immediate application, we utilize BINGO to examine the power of the non-Gaussianity parameter fNL to discriminate between various inflationary models that admit departures from slow roll and lead to similar features in the scalar power spectrum. We close with a summary and discussion of the implications of the results we obtain.

  7. Saving energy via high-efficiency fans.

    Science.gov (United States)

    Heine, Thomas

    2016-08-01

    Thomas Heine, sales and market manager for EC Upgrades, the retrofit arm of global provider of air movement solutions, ebm-papst A&NZ, discusses the retrofitting of high-efficiency fans to existing HVAC equipment to 'drastically reduce energy consumption'.

  8. The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    Science.gov (United States)

    Młynarski, Wiktor

    2015-01-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude; both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. The spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373

  9. Imitation Learning Based on an Intrinsic Motivation Mechanism for Efficient Coding

    Directory of Open Access Journals (Sweden)

    Jochen eTriesch

    2013-11-01

    Full Text Available A hypothesis regarding the development of imitation learning is presented that is rooted in intrinsic motivations. It is derived from a recently proposed form of intrinsically motivated learning (IML) for efficient coding in active perception, wherein an agent learns to perform actions with its sense organs to facilitate efficient encoding of the sensory data. To this end, actions of the sense organs that improve the encoding of the sensory data trigger an internally generated reinforcement signal. Here it is argued that the same IML mechanism might also support the development of imitation when general actions beyond those of the sense organs are considered: The learner first observes a tutor performing a behavior and learns a model of the behavior's sensory consequences. The learner then acts itself and receives an internally generated reinforcement signal reflecting how well the sensory consequences of its own behavior are encoded by the sensory model. Actions that are more similar to those of the tutor will lead to sensory signals that are easier to encode and produce a higher reinforcement signal. Through this, the learner's behavior is progressively tuned to make the sensory consequences of its actions match the learned sensory model. I discuss this mechanism in the context of human language acquisition and bird song learning, where similar ideas have been proposed. The suggested mechanism also offers an account for the development of mirror neurons and makes a number of predictions. Overall, it establishes a connection between principles of efficient coding, intrinsic motivations, and imitation.

  10. High efficiency inductive output tubes with intense annular electron beams

    Science.gov (United States)

    Appanam Karakkad, J.; Matthew, D.; Ray, R.; Beaudoin, B. L.; Narayan, A.; Nusinovich, G. S.; Ting, A.; Antonsen, T. M.

    2017-10-01

    For mobile ionospheric heaters, it is necessary to develop highly efficient RF sources capable of delivering radiation in the frequency range from 3 to 10 MHz with an average power at the megawatt level. A promising source capable of offering these parameters is a grid-less version of the inductive output tube (IOT), also known as a klystrode. In this paper, studies analyzing the efficiency of grid-less IOTs are described. The basic trade-offs needed to reach high efficiency are investigated; in particular, the trade-off between the peak current and the duration of the current micro-pulse is analyzed. A particle-in-cell code is used to self-consistently calculate the distribution in axial and transverse momentum and in total electron energy from the cathode to the collector. The efficiency of IOTs with collectors of various configurations is examined. It is shown that the efficiency of IOTs can be in the 90% range even without using depressed collectors.

  11. XGC developments for a more efficient XGC-GENE code coupling

    Science.gov (United States)

    Dominski, Julien; Hager, Robert; Ku, Seung-Hoe; Chang, Cs

    2017-10-01

    In the Exascale Computing Program, the High-Fidelity Whole Device Modeling project initially aims at delivering a tightly-coupled simulation of plasma neoclassical and turbulence dynamics from the core to the edge of the tokamak. To permit such simulations, the gyrokinetic codes GENE and XGC will be coupled together. Numerical efforts are being made to improve the agreement of the numerical schemes in the coupling region. One of the difficulties of coupling these codes is the incompatibility of their grids: GENE is a continuum grid-based code, whereas XGC is a Particle-In-Cell code using an unstructured triangular mesh. A field-aligned filter has therefore been implemented in XGC. Although XGC already uses an approximately field-following mesh, this filter brings its perturbation discretization closer to the one solved in the field-aligned code GENE. Additionally, new XGC gyro-averaging matrices are implemented on a velocity grid adapted to the plasma properties, ensuring the same accuracy from the core to the edge regions.

  12. photon-plasma: A modern high-order particle-in-cell code

    International Nuclear Information System (INIS)

    Haugbølle, Troels; Frederiksen, Jacob Trier; Nordlund, Åke

    2013-01-01

    We present the photon-plasma code, a modern high-order charge-conserving particle-in-cell code for simulating relativistic plasmas. The code uses a high-order implicit field solver and a novel high-order charge-conserving interpolation scheme for particle-to-cell interpolation and charge deposition. It includes powerful diagnostics tools with on-the-fly particle tracking, synthetic spectra integration, 2D volume slicing, and a new method to correctly account for radiative cooling in the simulations. A robust technique for imposing (time-dependent) particle and field fluxes on the boundaries is also presented. Using a hybrid OpenMP and MPI approach, the code scales efficiently from 8 to more than 250,000 cores with almost linear weak scaling on a range of architectures. The code is tested with the classical benchmarks of particle heating, cold-beam instability, and two-stream instability. We also present particle-in-cell simulations of the Kelvin-Helmholtz instability, and new results on radiative collisionless shocks.

  13. High Efficiency Reversible Fuel Cell Power Converter

    DEFF Research Database (Denmark)

    Pittini, Riccardo

    as well as different dc-ac and dc-dc converter topologies are presented and analyzed. A new ac-dc topology for high efficiency data center applications is proposed and an efficiency characterization based on the fuel cell stack I-V characteristic curve is presented. The second part discusses the main...... converter components. Wide bandgap power semiconductors are introduced due to their superior performance in comparison to traditional silicon power devices. The analysis presents a study based on switching loss measurements performed on Si IGBTs, SiC JFETs, SiC MOSFETs and their respective gate drivers...

  14. High Efficiency Power Converter for Low Voltage High Power Applications

    DEFF Research Database (Denmark)

    Nymand, Morten

    The topic of this thesis is the design of high efficiency power electronic dc-to-dc converters for high-power, low-input-voltage to high-output-voltage applications. These converters are increasingly required for emerging sustainable energy systems such as fuel cell, battery or photo voltaic based......, and remote power generation for light towers, camper vans, boats, beacons, and buoys etc. A review of current state-of-the-art is presented. The best performing converters achieve moderately high peak efficiencies at high input voltage and medium power level. However, system dimensioning and cost are often...

  15. High Efficiency Centrifugal Compressor for Rotorcraft Applications

    Science.gov (United States)

    Medic, Gorazd; Sharma, Om P.; Jongwook, Joo; Hardin, Larry W.; McCormick, Duane C.; Cousins, William T.; Lurie, Elizabeth A.; Shabbir, Aamir; Holley, Brian M.; Van Slooten, Paul R.

    2017-01-01

    A centrifugal compressor research effort conducted by United Technologies Research Center under NASA Research Announcement NNC08CB03C is documented. The objectives were to identify key technical barriers to advancing the aerodynamic performance of high-efficiency, high-work-factor, compact centrifugal compressor aft-stages for turboshaft engines; to acquire measurements needed to overcome the technical barriers and inform future designs; and to design, fabricate, and test a new research compressor in which to acquire the requisite flow field data. A new High-Efficiency Centrifugal Compressor stage -- splittered impeller, splittered diffuser, 90 degree bend, and exit guide vanes -- with aerodynamically aggressive performance and configuration (compactness) goals was designed, fabricated, and subsequently tested at the NASA Glenn Research Center.

  16. High-Temperature High-Efficiency Solar Thermoelectric Generators

    Energy Technology Data Exchange (ETDEWEB)

    Baranowski, LL; Warren, EL; Toberer, ES

    2014-03-01

    Inspired by recent high-efficiency thermoelectric modules, we consider thermoelectrics for terrestrial applications in concentrated solar thermoelectric generators (STEGs). The STEG is modeled as two subsystems: a TEG, and a solar absorber that efficiently captures the concentrated sunlight and limits radiative losses from the system. The TEG subsystem is modeled using thermoelectric compatibility theory; this model does not constrain the material properties to be constant with temperature. Considering a three-stage TEG based on current record modules, this model suggests that 18% efficiency could be experimentally expected with a temperature gradient of 1000 °C to 100 °C. Achieving 15% overall STEG efficiency thus requires an absorber efficiency above 85%, and we consider two methods to achieve this: solar-selective absorbers and thermally insulating cavities. When the TEG and absorber subsystem models are combined, we expect that the STEG modeled here could achieve 15% efficiency with optical concentration between 250 and 300 suns.
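The series combination of the two subsystems can be checked with one line of arithmetic: the overall efficiency is the product of the absorber and TEG efficiencies. A trivial sketch (the function name is ours, not from the paper):

```python
def steg_efficiency(eta_absorber: float, eta_teg: float) -> float:
    """Overall STEG efficiency: the absorber delivers a fraction
    eta_absorber of the concentrated sunlight as heat to the TEG hot
    side, and the TEG converts a fraction eta_teg of that heat."""
    return eta_absorber * eta_teg

# Numbers from the abstract: an ~18%-efficient three-stage TEG needs an
# absorber efficiency above ~85% to reach ~15% overall.
print(round(steg_efficiency(0.85, 0.18), 3))  # → 0.153
```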

  17. High energy particle transport code NMTC/JAM

    International Nuclear Information System (INIS)

    Niita, Koji; Meigo, Shin-ichiro; Takada, Hiroshi; Ikeda, Yujiro

    2001-03-01

    We have developed a high energy particle transport code NMTC/JAM, which is an upgraded version of NMTC/JAERI97. The applicable energy range of NMTC/JAM is extended in principle up to 200 GeV for nucleons and mesons by introducing the high energy nuclear reaction code JAM for the intra-nuclear cascade part. For the evaporation and fission process, we have also implemented a new model, GEM, by which light nucleus production from the excited residual nucleus can be described. In accordance with the extension of the applicable energy range, we have upgraded the nucleon-nucleus non-elastic, elastic and differential elastic cross section data by employing new systematics. In addition, particle transport in a magnetic field has been implemented for beam transport calculations. In this upgrade, some new tally functions are added and the input data format has been made much more user-friendly. With these new calculation functions and utilities, NMTC/JAM enables reliable neutronics studies of large-scale target systems with complex geometry to be carried out more accurately and easily than before. This report serves as a user manual of the code. (author)

  18. Bandwidth Efficient Overlapped FSK Coded Secure Command Transmission for Medical Implant Communication Systems

    Directory of Open Access Journals (Sweden)

    Selman KULAÇ

    2018-06-01

    Full Text Available Nowadays, wireless communication systems are exploited in most health care systems. Implantable Medical Systems (IMS) also have wireless communication capability. It is very important that this wireless communication be secure, in terms of both patient rights and patient health; the wireless transmission systems of IMS should therefore be robust against eavesdroppers and adversaries. In this study, a specific overlapped and coded frequency shift keying (FSK) modulation technique is developed, and the proposed technique provides security with low complexity. The developed method is suitable for wireless implantable medical systems since it provides low complexity and security as well as bandwidth efficiency.

  19. High efficiency and broadband acoustic diodes

    Science.gov (United States)

    Fu, Congyi; Wang, Bohan; Zhao, Tianfei; Chen, C. Q.

    2018-01-01

    Energy transmission efficiency and working bandwidth are the two major factors limiting the application of current acoustic diodes (ADs). This letter presents a design of high efficiency and broadband acoustic diodes composed of a nonlinear frequency converter and a linear wave filter. The converter consists of two masses connected by a bilinear spring with asymmetric tension and compression stiffness. The wave filter is a linear mass-spring lattice (sonic crystal). Both numerical simulation and experiment show that the energy transmission efficiency of the acoustic diode can be improved by as much as two orders of magnitude, reaching about 61%. Moreover, the primary working bandwidth of the AD is about twice the cut-off frequency of the sonic crystal filter. The cut-off-frequency-dependent working band of the AD implies that the developed AD can be scaled up or down from macro-scale to micro- and nano-scale.

  20. High Efficiency, Low Emission Refrigeration System

    Energy Technology Data Exchange (ETDEWEB)

    Fricke, Brian A [ORNL; Sharma, Vishaldeep [ORNL

    2016-08-01

    Supermarket refrigeration systems account for approximately 50% of supermarket energy use, placing this class of equipment among the highest energy consumers in the commercial building domain. In addition, the commonly used refrigeration system in supermarket applications is the multiplex direct expansion (DX) system, which is prone to refrigerant leaks due to its long lengths of refrigerant piping. This leakage reduces the efficiency of the system and increases the impact of the system on the environment. The high Global Warming Potential (GWP) of the hydrofluorocarbon (HFC) refrigerants commonly used in these systems, coupled with the large refrigerant charge and the high refrigerant leakage rates leads to significant direct emissions of greenhouse gases into the atmosphere. Methods for reducing refrigerant leakage and energy consumption are available, but underutilized. Further work needs to be done to reduce costs of advanced system designs to improve market utilization. In addition, refrigeration system retrofits that result in reduced energy consumption are needed since the majority of applications address retrofits rather than new stores. The retrofit market is also of most concern since it involves large-volume refrigerant systems with high leak rates. Finally, alternative refrigerants for new and retrofit applications are needed to reduce emissions and reduce the impact on the environment. The objective of this Collaborative Research and Development Agreement (CRADA) between the Oak Ridge National Laboratory and Hill Phoenix is to develop a supermarket refrigeration system that reduces greenhouse gas emissions and has 25 to 30 percent lower energy consumption than existing systems. The outcomes of this project will include the design of a low emission, high efficiency commercial refrigeration system suitable for use in current U.S. supermarkets. In addition, a prototype low emission, high efficiency supermarket refrigeration system will be produced for

  1. High explosive programmed burn in the FLAG code

    Energy Technology Data Exchange (ETDEWEB)

    Mandell, D.; Burton, D.; Lund, C.

    1998-02-01

    The models used to calculate the programmed-burn high-explosive lighting times in two and three dimensions in the FLAG code are described. FLAG uses an unstructured polyhedral grid. The calculations were compared to exact solutions for a square in two dimensions and for a cube in three dimensions. The maximum error was 3.95 percent in two dimensions and 4.84 percent in three dimensions. The high-explosive lighting time model described has the advantage that only one cell at a time needs to be considered.
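For the simplest case, a single detonation point in a homogeneous explosive, a programmed-burn model lights each cell at t = (distance from the detonator) / (detonation velocity). The sketch below illustrates only this textbook rule, not FLAG's unstructured-polyhedra algorithm:

```python
import numpy as np

def lighting_times(cell_centers, det_point, det_velocity):
    """Programmed-burn lighting times for one detonation point in a
    homogeneous explosive: t = |x_cell - x_det| / D."""
    centers = np.atleast_2d(np.asarray(cell_centers, dtype=float))
    dist = np.linalg.norm(centers - np.asarray(det_point, dtype=float), axis=1)
    return dist / det_velocity

# Three cell centers on a unit square, detonator at the origin, and a
# detonation velocity D = 8.8 mm/us (a representative value, not from the paper)
t = lighting_times([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]], [0.0, 0.0], 8.8)
```

Cells closest to the detonator light first; each cell's time depends only on its own position, which mirrors the one-cell-at-a-time advantage noted in the abstract.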

  2. High efficiency novel window air conditioner

    International Nuclear Information System (INIS)

    Bansal, Pradeep

    2015-01-01

    Highlights: • Use of a novel refrigerant mixture of R32/R125 (85/15% molar concentration) to reduce global warming impact and improve energy efficiency. • Use of novel features such as an electronically commutated (EC) fan motor, slinger and submerged sub-cooler. • Energy savings of up to 0.1 Quads per year in the USA and much more in Asia/the Middle East where WACs are used in large numbers. • Payback period of only 1.4 years for the novel efficient WAC. - Abstract: This paper presents the results of an experimental and analytical evaluation of measures to raise the efficiency of window air conditioners (WAC). In order to achieve a higher energy efficiency ratio (EER), the original capacity of a baseline R410A unit was reduced by replacing the original compressor with a lower capacity but higher EER compressor, while all heat exchangers and the chassis from the original unit were retained. Subsequent major modifications included replacing the alternating-current fan motor with a brushless, high efficiency electronically commutated motor (ECM), replacing the capillary tube with a needle valve to better control the refrigerant flow and refrigerant set points, and replacing R410A with a ‘drop-in’ lower global warming potential (GWP) binary mixture of R32/R125 (85/15% molar concentration). All these modifications resulted in significant enhancement of the EER of the baseline WAC. Further, an economic analysis of the new WAC revealed an encouraging payback period of only 1.4 years.

  3. High efficiency carbon nanotube thread antennas

    Science.gov (United States)

    Amram Bengio, E.; Senic, Damir; Taylor, Lauren W.; Tsentalovich, Dmitri E.; Chen, Peiyu; Holloway, Christopher L.; Babakhani, Aydin; Long, Christian J.; Novotny, David R.; Booth, James C.; Orloff, Nathan D.; Pasquali, Matteo

    2017-10-01

    Although previous research has explored the underlying theory of high-frequency behavior of carbon nanotubes (CNTs) and CNT bundles for antennas, there is a gap in the literature for direct experimental measurements of radiation efficiency. These measurements are crucial for any practical application of CNT materials in wireless communication. In this letter, we report a measurement technique to accurately characterize the radiation efficiency of λ/4 monopole antennas made from the CNT thread. We measure the highest absolute values of radiation efficiency for CNT antennas of any type, matching that of copper wire. To capture the weight savings, we propose a specific radiation efficiency metric and show that these CNT antennas exceed copper's performance by over an order of magnitude at 1 GHz and 2.4 GHz. We also report direct experimental observation that, contrary to metals, the radiation efficiency of the CNT thread improves significantly at higher frequencies. These results pave the way for practical applications of CNT thread antennas, particularly in the aerospace and wearable electronics industries where weight saving is a priority.

  4. High Efficiency Power Converter for Low Voltage High Power Applications

    DEFF Research Database (Denmark)

    Nymand, Morten

    The topic of this thesis is the design of high efficiency power electronic dc-to-dc converters for high-power, low-input-voltage to high-output-voltage applications. These converters are increasingly required for emerging sustainable energy systems such as fuel cell, battery or photo voltaic based...

  5. An Efficient Code-Based Threshold Ring Signature Scheme with a Leader-Participant Model

    Directory of Open Access Journals (Sweden)

    Guomin Zhou

    2017-01-01

    Full Text Available Digital signature schemes with additional properties have broad applications, such as protecting the identity of signers by allowing a signer to anonymously sign a message within a group of signers (also known as a ring). Most such schemes rest on number-theoretic hardness assumptions; while these number-theoretic problems are still secure at the time of this research, the situation could change with advances in quantum computing. There is therefore a pressing need to design PKC schemes that are secure against quantum attacks. In this paper, we propose a novel code-based threshold ring signature scheme with a leader-participant model. A leader is appointed who chooses some shared parameters for the other signers to participate in the signing process. This leader-participant model enhances performance because every participant, including the leader, can execute the decoding algorithm (as part of the signing process) upon receiving the shared parameters from the leader. The time complexity of our scheme is close to that of Courtois et al.'s (2001) scheme, which is often used as a basis to construct other types of code-based signature schemes. Moreover, as a threshold ring signature scheme, our scheme is as efficient as a normal code-based ring signature.

  6. Efficiency of rate and latency coding with respect to metabolic cost and time.

    Science.gov (United States)

    Levakova, Marie

    2017-11-01

    Recent studies on the theoretical performance of latency and rate codes in single neurons have revealed that the ultimate accuracy is affected in a nontrivial way by aspects such as the level of spontaneous activity of presynaptic neurons, the amount of neuronal noise, and the duration of the time window used to determine the firing rate. This study explores how the optimal decoding performance and the corresponding conditions change when the energy a neuron expends to spike and to maintain its resting membrane potential is accounted for. It is shown that a nonzero amount of spontaneous activity remains essential for both latency and rate coding. Moreover, the optimal level of spontaneous activity changes little with the intensity of the applied stimulus. Furthermore, the efficiencies of the temporal and the rate code converge to an identical finite value if the neuronal activity is observed for an unlimited period of time. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. High efficiency lithium-thionyl chloride cell

    Science.gov (United States)

    Doddapaneni, N.

    1982-08-01

    The polarization characteristics and the specific cathode capacity of Teflon-bonded carbon electrodes in the Li/SOCl2 system have been evaluated. Doping with electrocatalysts such as cobalt and iron phthalocyanine complexes improved both cell voltage and cell rate capability. High efficiency Li/SOCl2 cells were thus achieved with catalyzed cathodes. The electrochemical reduction of SOCl2 appears to be modified at the catalyzed cathode: for example, the reduction of SOCl2 at a FePc-catalyzed cathode involves 2-1/2 e-/mole of SOCl2. Furthermore, the reduction mechanism is simplified and unwanted chemical species are eliminated by the catalyst. Thus a potentially safer, high efficiency Li/SOCl2 cell can be anticipated.

  8. Representing high-dimensional data to intelligent prostheses and other wearable assistive robots: A first comparison of tile coding and selective Kanerva coding.

    Science.gov (United States)

    Travnik, Jaden B; Pilarski, Patrick M

    2017-07-01

    Prosthetic devices have advanced in their capabilities and in the number and type of sensors included in their design. As the space of sensorimotor data available to a conventional or machine learning prosthetic control system increases in dimensionality and complexity, it becomes increasingly important that this data be represented in a useful and computationally efficient way. Well-structured sensory data allows prosthetic control systems to make informed, appropriate control decisions. In this study, we explore the impact that increased sensorimotor information has on current machine learning prosthetic control approaches. Specifically, we examine the effect that high-dimensional sensory data has on the computation time and prediction performance of a true-online temporal-difference learning prediction method as embedded within a resource-limited upper-limb prosthesis control system. We present results comparing tile coding, the dominant linear representation for real-time prosthetic machine learning, with a newly proposed modification to Kanerva coding that we call selective Kanerva coding. In addition to showing promising results for selective Kanerva coding, our results confirm potential limitations of tile coding as the number of sensory input dimensions increases. To our knowledge, this study is the first to explicitly examine representations for real-time machine learning prosthetic devices in general terms. This work therefore provides an important step towards forming an efficient prosthesis-eye view of the world, wherein prompt and accurate representations of high-dimensional data may be provided to machine learning control systems within artificial limbs and other assistive rehabilitation technologies.
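For readers unfamiliar with the representations being compared, a minimal 1-D tile coder can be sketched as follows (the tile counts and offsets are hypothetical illustration choices; real prosthetic state spaces are multi-dimensional):

```python
import numpy as np

def tile_code(x, lo, hi, n_tilings=8, n_tiles=10):
    """Minimal 1-D tile coder: one active binary feature per tiling.

    The tilings are staggered by a fraction of a tile width, so nearby
    inputs share many active tiles, giving cheap local generalization.
    """
    features = np.zeros(n_tilings * n_tiles, dtype=np.int8)
    tile_w = (hi - lo) / n_tiles
    for t in range(n_tilings):
        offset = t * tile_w / n_tilings          # stagger each tiling
        idx = int((x - lo + offset) / tile_w)
        idx = min(idx, n_tiles - 1)              # clip at the upper edge
        features[t * n_tiles + idx] = 1
    return features

phi = tile_code(0.37, lo=0.0, hi=1.0)   # 80 features, exactly 8 active
```

The catch that motivates alternatives such as selective Kanerva coding is dimensionality: a d-dimensional tile coder needs on the order of n_tiles**d tiles per tiling, so memory grows exponentially with the number of sensors.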

  9. Bioblendstocks that Enable High Efficiency Engine Designs

    Energy Technology Data Exchange (ETDEWEB)

    McCormick, Robert L.; Fioroni, Gina M.; Ratcliff, Matthew A.; Zigler, Bradley T.; Farrell, John

    2016-11-03

    The past decade has seen a high level of innovation in production of biofuels from sugar, lipid, and lignocellulose feedstocks. As discussed in several talks at this workshop, ethanol blends in the E25 to E50 range could enable more highly efficient spark-ignited (SI) engines. This is because of their knock resistance properties that include not only high research octane number (RON), but also charge cooling from high heat of vaporization, and high flame speed. Emerging alcohol fuels such as isobutanol or mixed alcohols have desirable properties such as reduced gasoline blend vapor pressure, but also have lower RON than ethanol. These fuels may be able to achieve the same knock resistance benefits, but likely will require higher blend levels or higher RON hydrocarbon blendstocks. A group of very high RON (>150) oxygenates such as dimethyl furan, methyl anisole, and related compounds are also produced from biomass. While providing no increase in charge cooling, their very high octane numbers may provide adequate knock resistance for future highly efficient SI engines. Given this range of options for highly knock resistant fuels there appears to be a critical need for a fuel knock resistance metric that includes effects of octane number, heat of vaporization, and potentially flame speed. Emerging diesel fuels include highly branched long-chain alkanes from hydroprocessing of fats and oils, as well as sugar-derived terpenoids. These have relatively high cetane number (CN), which may have some benefits in designing more efficient CI engines. Fast pyrolysis of biomass can produce diesel boiling range streams that are high in aromatic, oxygen and acid contents. Hydroprocessing can be applied to remove oxygen and consequently reduce acidity, however there are strong economic incentives to leave up to 2 wt% oxygen in the product. This oxygen will primarily be present as low CN alkyl phenols and aryl ethers. While these have high heating value, their presence in diesel fuel

  10. Evaluating performance of high efficiency mist eliminators

    Energy Technology Data Exchange (ETDEWEB)

    Waggoner, Charles A.; Parsons, Michael S.; Giffin, Paxton K. [Mississippi State University, Institute for Clean Energy Technology, 205 Research Blvd, Starkville, MS (United States)

    2013-07-01

    Processing liquid wastes frequently generates off-gas streams with high humidity and liquid aerosols. Droplet-laden air streams can be produced by tank mixing or sparging and by processes such as reforming or evaporative volume reduction. Unfortunately, these wet air streams represent a genuine threat to HEPA filters. High efficiency mist eliminators (HEME) are one option for removal of liquid aerosols with high dissolved or suspended solids content. HEMEs have been used extensively in industrial applications; however, they have not seen widespread use in the nuclear industry. Filtering efficiency data along with loading curves are not readily available for these units, and the data that exist are not easily translated to operational parameters in liquid waste treatment plants. A specialized test stand has been developed to evaluate the performance of HEME elements under use conditions of a US DOE facility. HEME elements were tested at three volumetric flow rates using aerosols produced from an iron-rich waste surrogate. The challenge aerosol included submicron particles produced from Laskin nozzles and super-micron particles produced from a hollow cone spray nozzle. Test conditions included ambient temperature and relative humidities greater than 95%. Data collected during testing of HEME elements from three different manufacturers included volumetric flow rate, differential temperature across the filter housing, downstream relative humidity, and differential pressure (dP) across the filter element. Filter challenge was discontinued at three intermediate dPs to allow filter efficiency to be determined, first using dioctyl phthalate and then dry surrogate aerosols. Filtering efficiencies of the clean HEME, the clean HEME loaded with water, and the HEME at maximum dP were also collected using the two test aerosols. Results of the testing included differential pressure vs. time loading curves for the nine elements tested along with the mass of moisture and solid

  11. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).

    Science.gov (United States)

    Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling

    2018-04-17

    Aimed at the low energy consumption required by the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by a Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have low computational complexity, so that they only consume a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity when compared with traditional methods, but also provides good objective and subjective reconstruction qualities. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
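The encode/decode pipeline described above, random linear measurements plus a learned linear (MMSE-style) reconstruction, can be sketched in a few lines. The block sizes, the surrogate training signals, and the least-squares fit below are our illustrative stand-ins for the paper's adaptive measurement scheme and MMSE-learned projection matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 16                                   # 8x8 block, 4x compression
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix

# Surrogate "training" blocks: random walks stand in for smooth image patches
X = rng.standard_normal((n, 1000)).cumsum(axis=0)
Y = Phi @ X                                     # CS measurements of each block

# Learn the linear decoder P minimizing ||X - P @ Y||^2 over the training data
P = X @ Y.T @ np.linalg.inv(Y @ Y.T)

# Encoding is one matrix multiply; decoding is one more (hence "real-time")
x = rng.standard_normal(n).cumsum()             # a new, unseen block
x_hat = P @ (Phi @ x)
```

Because decoding is a single matrix-vector product rather than an iterative CS solver, the decoder's compute (and energy) cost stays small, which is the point of the scheme.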

  12. Efficient MPEG-2 to H.264/AVC Transcoding of Intra-Coded Video

    Directory of Open Access Journals (Sweden)

    Vetro Anthony

    2007-01-01

    Full Text Available This paper presents an efficient transform-domain architecture and corresponding mode decision algorithms for transcoding intra-coded video from MPEG-2 to H.264/AVC. Low complexity is achieved in several ways. First, our architecture employs direct conversion of the transform coefficients, which eliminates the need for the inverse discrete cosine transform (DCT) and forward H.264/AVC transform. Then, within this transform-domain architecture, we perform macroblock-based mode decisions based on H.264/AVC transform coefficients, which is possible using a novel method of calculating distortion in the transform domain. The proposed method for distortion calculation could be used to make rate-distortion optimized mode decisions with lower complexity. Compared to the pixel-domain architecture with rate-distortion optimized mode decision, simulation results show that there is a negligible loss in quality incurred by the direct conversion of transform coefficients and the proposed transform-domain mode decision algorithms, while complexity is significantly reduced. To further reduce the complexity, we also propose two fast mode decision algorithms. The first algorithm ranks modes based on a simple cost function in the transform domain, then computes the rate-distortion optimal mode from a reduced set of ranked modes. The second algorithm exploits temporal correlations in the mode decision between temporally adjacent frames. Simulation results show that these algorithms provide additional computational savings over the proposed transform-domain architecture while maintaining virtually the same coding efficiency.
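
    The key fact that makes transform-domain distortion calculation possible is that an orthonormal transform preserves the sum of squared errors (Parseval's relation), so distortion measured on coefficients equals distortion measured on pixels. A minimal sketch with an orthonormal 4x4 DCT-II; note this is a simplification, since H.264/AVC's integer transform is only orthogonal up to per-coefficient scaling that is folded into quantization.

```python
import math

N = 4
# Orthonormal DCT-II basis matrix C[k][n].
C = [[(math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N))
      * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
      for n in range(N)] for k in range(N)]

def dct2(block):
    """2-D transform of an N x N block: C * block * C^T."""
    tmp = [[sum(C[k][n] * block[n][j] for n in range(N)) for j in range(N)]
           for k in range(N)]
    return [[sum(tmp[k][n] * C[l][n] for n in range(N)) for l in range(N)]
            for k in range(N)]

def sse(a, b):
    """Sum of squared errors between two N x N blocks."""
    return sum((a[i][j] - b[i][j]) ** 2 for i in range(N) for j in range(N))
```

    Because the transform is linear and orthonormal, sse(a, b) computed on pixel blocks matches sse(dct2(a), dct2(b)) computed on coefficients, which is why mode decisions can be made without returning to the pixel domain.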

  13. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT)

    Directory of Open Access Journals (Sweden)

    Ran Li

    2018-04-01

    Full Text Available Aimed at a low-energy consumption of Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides compressive encoder and real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so that they only consume a small amount of energy. Experimental results show that the proposed scheme not only has a low encoding and decoding complexity when compared with traditional methods, but it also provides good objective and subjective reconstruction qualities. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.

  14. High efficiency inverter and ballast circuits

    International Nuclear Information System (INIS)

    Nilssen, O.K.

    1984-01-01

    A high efficiency push-pull inverter circuit employing a pair of relatively high power switching transistors is described. The switching on and off of the transistors is precisely controlled to minimize power losses due to common-mode conduction or due to transient conditions that occur in the process of turning a transistor on or off. Two current feedback transformers are employed in the transistor base drives; one is saturable, providing a positive feedback, and the other is non-saturable, providing a subtractive feedback.

  15. Optimization of a high efficiency FEL amplifier

    International Nuclear Information System (INIS)

    Schneidmiller, E.A.; Yurkov, M.V.

    2014-10-01

    The problem of increasing the efficiency of an FEL amplifier is now of great practical importance. The technique of undulator tapering in the post-saturation regime is used at the existing X-ray FELs LCLS and SACLA, and is planned for use at the European XFEL, SwissFEL, and PAL XFEL. There are also discussions of future high peak and average power FELs for scientific and industrial applications. In this paper we perform a detailed analysis of tapering strategies for high power seeded FEL amplifiers. Application of similarity techniques allows us to derive a universal law of undulator tapering.

  16. Highly efficient fully transparent inverted OLEDs

    Science.gov (United States)

    Meyer, J.; Winkler, T.; Hamwi, S.; Schmale, S.; Kröger, M.; Görrn, P.; Johannes, H.-H.; Riedl, T.; Lang, E.; Becker, D.; Dobbertin, T.; Kowalsky, W.

    2007-09-01

    One of the unique selling propositions of OLEDs is their potential to realize devices that are highly transparent over the visible spectrum. This is because organic semiconductors provide a large Stokes shift and low intrinsic absorption losses. Hence, new areas of application for displays and ambient lighting become accessible, for instance, the integration of OLEDs into the windshield or the ceiling of automobiles. The main challenge in the realization of fully transparent devices is the deposition of the top electrode. ITO is commonly used as the transparent bottom anode in a conventional OLED. To obtain uniform light emission over the entire viewing angle and a low series resistance, a TCO such as ITO is desirable as the top contact as well. However, sputter deposition of ITO on top of organic layers causes damage induced by high-energy particles and UV radiation. We have found an efficient process to protect the organic layers during the rf magnetron sputter deposition of ITO for an inverted OLED (IOLED). The inverted structure allows the integration of OLEDs with the more powerful n-channel transistors used in active-matrix backplanes. Employing the green electrophosphorescent material Ir(ppy)3 led to IOLEDs with a current efficiency of 50 cd/A and a power efficiency of 24 lm/W at 100 cd/m2. The average transmittance exceeds 80% in the visible region. The onset voltage for light emission is lower than 3 V. In addition, by vertical stacking we achieved a very high current efficiency of more than 70 cd/A for transparent IOLEDs.

  17. A parallelization study of the general purpose Monte Carlo code MCNP4 on a distributed memory highly parallel computer

    International Nuclear Information System (INIS)

    Yamazaki, Takao; Fujisaki, Masahide; Okuda, Motoi; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka

    1993-01-01

    The general purpose Monte Carlo code MCNP4 has been implemented on the Fujitsu AP1000 distributed-memory highly parallel computer. The parallelization techniques developed and studied are reported. A shielding analysis function of the MCNP4 code is parallelized in this study. A technique that maps histories to processors dynamically and maps the control process to a dedicated processor was applied. The efficiency of the parallelized code reaches up to 80% for a typical practical problem with 512 processors. These results demonstrate the advantages of a highly parallel computer over conventional computers in the field of shielding analysis by the Monte Carlo method. (orig.)
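
    The dynamic history-to-processor mapping exploits the fact that Monte Carlo histories are independent: idle workers simply pull the next history from a shared queue. A stdlib sketch of that scheduling pattern, with a dummy random-walk tally standing in for MCNP's particle transport (the physics here is purely illustrative):

```python
import concurrent.futures
import random

def run_history(seed):
    """Dummy particle history: a short 1-D random walk returning a tally
    contribution. A per-history seed keeps every history reproducible
    regardless of which worker runs it."""
    rng = random.Random(seed)
    x, tally = 0.0, 0.0
    for _ in range(100):
        x += rng.uniform(-1, 1)
        if abs(x) > 5:          # particle "leaked" out of the slab
            break
        tally += 0.01           # e.g. track length scored in a cell
    return tally

def parallel_tally(n_histories, n_workers=4):
    # Histories are handed out dynamically: each idle worker takes the
    # next one, mimicking the dynamic mapping described above.
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(run_history, range(n_histories)))
```

    Because each history is seeded independently, the parallel tally matches a serial run exactly, which is the property that makes such parallelizations verifiable.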

  18. High-Penetration Photovoltaics Standards and Codes Workshop, Denver, Colorado, May 20, 2010: Workshop Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Coddington, M.; Kroposki, B.; Basso, T.; Lynn, K.; Herig, C.; Bower, W.

    2010-09-01

    Effectively interconnecting high penetration levels of photovoltaic (PV) systems requires careful technical attention to ensuring compatibility with electric power systems. Standards, codes, and implementation have been cited as major impediments to widespread use of PV within electric power systems. On May 20, 2010, in Denver, Colorado, the National Renewable Energy Laboratory, in conjunction with the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), held a workshop to examine the key technical issues and barriers associated with high PV penetration levels, with an emphasis on codes and standards. The workshop built upon the results of the High Penetration of Photovoltaic (PV) Systems into the Distribution Grid workshop held in Ontario, California, on February 24-25, 2009, and upon presentations from a diverse set of stakeholders.

  19. High Efficiency Colloidal Quantum Dot Phosphors

    Energy Technology Data Exchange (ETDEWEB)

    Kahen, Keith

    2013-12-31

    The project showed that non-Cd-containing, InP-based nanocrystals (semiconductor materials with dimensions of ~6 nm) have high potential for enabling next-generation, nanocrystal-based, on-chip phosphors for solid state lighting. Typical nanocrystals fall short of the requirements for on-chip phosphors due to their loss of quantum efficiency under the operating conditions of LEDs, such as high temperature (up to 150 °C) and high optical flux (up to 200 W/cm2). The InP-based nanocrystals invented during this project maintain high quantum efficiency (>80%) in polymer-based films under these operating conditions for emission wavelengths ranging from ~530 to 620 nm. These nanocrystals also show other desirable attributes, such as a lack of blinking (a common problem with nanocrystals that limits their performance) and no increase in the emission spectral width from room temperature to 150 °C (emitters with narrower spectral widths enable higher efficiency LEDs). Prior to these nanocrystals, no nanocrystal system (regardless of nanocrystal type) showed this collection of properties; in fact, other nanocrystal systems are typically limited to showing only one desirable trait (such as high temperature stability) while being deficient in other properties (such as high flux stability). The project showed that one can reproducibly obtain these properties by generating a novel compositional structure inside the nanomaterials; in addition, the project formulated an initial theoretical framework linking the compositional structure to the list of high performance optical properties. Over the course of the project, the synthetic methodology for producing the novel composition was evolved to enable the synthesis of these nanomaterials at a cost approximately equal to that required for forming typical conventional nanocrystals. Given the above results, the last major remaining step prior to scale up of the nanomaterials is to limit the oxidation of these materials during the tens of

  20. Highly Efficient Estimators of Multivariate Location with High Breakdown Point

    NARCIS (Netherlands)

    Lopuhaa, H.P.

    1991-01-01

    We propose an affine equivariant estimator of multivariate location that combines a high breakdown point and a bounded influence function with high asymptotic efficiency. This proposal is basically a location $M$-estimator based on the observations obtained after scaling with an affine equivariant

  1. Towards high dynamic range extensions of HEVC: subjective evaluation of potential coding technologies

    Science.gov (United States)

    Hanhart, Philippe; Řeřábek, Martin; Ebrahimi, Touradj

    2015-09-01

    This paper reports the details and results of the subjective evaluations conducted at EPFL to evaluate the responses to the Call for Evidence (CfE) for High Dynamic Range (HDR) and Wide Color Gamut (WCG) Video Coding issued by the Moving Picture Experts Group (MPEG). The CfE on HDR/WCG Video Coding aims to explore whether the coding efficiency and/or the functionality of the current version of the HEVC standard can be significantly improved for HDR and WCG content. In total, nine submissions, five for Category 1 and four for Category 3a, were compared to the HEVC Main 10 Profile based Anchor. More particularly, five HDR video contents, compressed at four bit rates by each proponent responding to the CfE, were used in the subjective evaluations. Further, a side-by-side presentation methodology was used in the subjective experiment to discriminate small differences between the Anchor and the proponents. Subjective results show that the proposals provide evidence that the coding efficiency can be improved in a statistically noticeable way over the MPEG CfE Anchors in terms of perceived quality within the investigated content. The paper further benchmarks the selected objective metrics based on their correlations with the subjective ratings. It is shown that PSNR-DE1000, HDR-VDP-2, and PSNR-Lx can reliably detect visible differences between the proposed encoding solutions and the current HEVC standard.

  2. High efficiency motors; Motores de alta eficiencia

    Energy Technology Data Exchange (ETDEWEB)

    Uranga Favela, Ivan Jaime [Energia Controlada de Mexico, S. A. de C. V., Mexico, D. F. (Mexico)

    1993-12-31

    This paper is a technical-financial study of high efficiency and super-premium motors. As is widely known, more than 60% of the electrical energy generated in the country is used to drive motors, in industry as well as in commerce. Hence the importance of motors for the efficient use of energy. [Espanol] El presente trabajo es un estudio tecnico-financiero de los motores de alta eficiencia y los motores super premium. Como es ampliamente conocido, mas del 60% de la energia electrica generada en el pais, es utilizada para accionar motores, dentro de la industria y el comercio. De alli la importancia que los motores tienen en el uso eficiente de la energia.

  3. High-efficiency organic glass scintillators

    Science.gov (United States)

    Feng, Patrick L.; Carlson, Joseph S.

    2017-12-19

    A new family of neutron/gamma discriminating scintillators is disclosed that comprises stable organic glasses that may be melt-cast into transparent monoliths. These materials have been shown to provide light yields greater than those of solution-grown trans-stilbene crystals and efficient PSD capabilities when combined with 0.01 to 0.05% by weight of the total composition of a wavelength-shifting fluorophore. Photoluminescence measurements reveal fluorescence quantum yields that are 2 to 5 times greater than those of conventional plastic or liquid scintillator matrices, which accounts for the superior light yield of these glasses. The unique combination of high scintillation light yields, efficient neutron/gamma PSD, and straightforward scale-up via melt-casting distinguishes the developed organic glasses from existing scintillators.

  4. High efficiency motors; Motores de alta eficiencia

    Energy Technology Data Exchange (ETDEWEB)

    Uranga Favela, Ivan Jaime [Energia Controlada de Mexico, S. A. de C. V., Mexico, D. F. (Mexico)

    1992-12-31

    This paper is a technical-financial study of high efficiency and super-premium motors. As is widely known, more than 60% of the electrical energy generated in the country is used to drive motors, in industry as well as in commerce. Hence the importance of motors for the efficient use of energy. [Espanol] El presente trabajo es un estudio tecnico-financiero de los motores de alta eficiencia y los motores super premium. Como es ampliamente conocido, mas del 60% de la energia electrica generada en el pais, es utilizada para accionar motores, dentro de la industria y el comercio. De alli la importancia que los motores tienen en el uso eficiente de la energia.

  5. High-speed architecture for the decoding of trellis-coded modulation

    Science.gov (United States)

    Osborne, William P.

    1992-01-01

    Since 1971, when the Viterbi Algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher level modulation (a non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been interest in increasing the speed of the Viterbi Algorithm by improving the decoder architecture, or by reducing the algorithm itself. Designs employing new architectural techniques are now in existence; however, these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.
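
    For readers unfamiliar with the algorithm the report's architectures accelerate, here is a minimal hard-decision Viterbi decoder for a binary rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 octal). This illustrates the survivor-path recursion only; the report's TCM decoder additionally handles non-binary signal sets and set partitioning.

```python
def conv_encode(bits, K=3, polys=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder; `state` holds the K-1 most recent bits."""
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state            # [newest bit | memory]
        for g in polys:
            out.append(bin(reg & g).count("1") % 2)
        state = reg >> 1                        # shift out the oldest bit
    return out

def viterbi_decode(rx, K=3, polys=(0b111, 0b101)):
    """Hard-decision Viterbi decoding of the code produced by conv_encode."""
    n_states, n_out = 1 << (K - 1), len(polys)
    INF = float("inf")
    metrics = [0.0] + [INF] * (n_states - 1)    # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for t in range(0, len(rx), n_out):
        sym = rx[t:t + n_out]
        new_metrics = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metrics[s] == INF:
                continue
            for b in (0, 1):                    # hypothesize the input bit
                reg = (b << (K - 1)) | s
                expected = [bin(reg & g).count("1") % 2 for g in polys]
                branch = sum(e != r for e, r in zip(expected, sym))
                ns = reg >> 1
                if metrics[s] + branch < new_metrics[ns]:
                    new_metrics[ns] = metrics[s] + branch   # keep survivor
                    new_paths[ns] = paths[s] + [b]
        metrics, paths = new_metrics, new_paths
    best = min(range(n_states), key=metrics.__getitem__)
    return paths[best]
```

    The add-compare-select step inside the state loop is exactly the operation that the high-speed architectures discussed in the report parallelize in hardware.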

  6. High efficiency beam splitting for H- accelerators

    International Nuclear Information System (INIS)

    Kramer, S.L.; Stipp, V.; Krieger, C.; Madsen, J.

    1985-01-01

    Beam splitting for high energy accelerators has typically involved a significant loss of beam and radiation. This paper reports on a new method of splitting beams for H- accelerators. This technique uses a high intensity flash of light to strip a fraction of the H- beam to H0 atoms, which are then easily separated by a small bending magnet. A system using a 900-watt (average electrical power) flashlamp and a highly efficient collector will provide 10^-3 to 10^-2 splitting of a 50 MeV H- beam. Results on the operation and comparisons with stripping cross sections are presented. Also discussed is the possibility of developing this system to yield a higher stripping fraction.

  7. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code

    Directory of Open Access Journals (Sweden)

    Susanne Kunkel

    2017-06-01

    Full Text Available NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to the code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it were part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
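
    The core idea of a dry run, one process that behaves as if it were rank r of P ranks while skipping communication, can be sketched in a toy class. This is not NEST's API; the round-robin neuron distribution, class, and method names are illustrative assumptions.

```python
class DryRunSim:
    """Single process pretends to be rank `rank` of `num_ranks` (no MPI needed)."""

    def __init__(self, n_neurons, num_ranks, rank, dry_run=True):
        self.num_ranks, self.rank, self.dry_run = num_ranks, rank, dry_run
        # Round-robin distribution: this rank owns every num_ranks-th neuron,
        # exactly as it would in a real parallel run.
        self.local_neurons = list(range(rank, n_neurons, num_ranks))
        # Communication buffers are sized for the full parallel run, so the
        # measured memory footprint matches a real distributed simulation.
        self.send_buffers = {r: [] for r in range(num_ranks) if r != rank}

    def exchange_spikes(self, spikes):
        """Fill send buffers as usual, but skip the actual data exchange."""
        for nrn in spikes:
            for r in self.send_buffers:
                self.send_buffers[r].append(nrn)
        if self.dry_run:
            return []    # pretend no remote spikes arrived
        raise NotImplementedError("a real run would perform MPI exchange here")
```

    Because the per-rank data structures and buffers are identical to the parallel case, memory and (non-communication) runtime measurements of the single process track the distributed run, which is the property the paper validates.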

  8. High Quantum Efficiency OLED Lighting Systems

    Energy Technology Data Exchange (ETDEWEB)

    Shiang, Joseph [General Electric (GE) Global Research, Fairfield, CT (United States)

    2011-09-30

    The overall goal of the program was to apply improvements in light outcoupling technology to a practical large area plastic luminaire, and thus enable the product vision of an extremely thin form factor, high efficiency, large area light source. The target substrate was plastic, and the baseline device was operating at 35 LPW at the start of the program. The program targeted a >2x improvement in LPW efficacy, and the overall amount of light to be delivered was relatively high, 900 lumens. Despite the extremely difficult challenges associated with scaling up a wet solution process on plastic substrates, the program was able to make substantial progress. A small molecule wet solution process was successfully implemented on plastic substrates with almost no loss in efficiency in transitioning from laboratory scale glass to large area plastic substrates. By transitioning to a small molecule based process, the LPW entitlement increased from 35 LPW to 60 LPW. A further 10% improvement in outcoupling efficiency was demonstrated via the use of a highly reflecting cathode, which reduced absorptive loss in the OLED device. The calculated potential improvement in some cases is even larger, ~30%, and thus there is considerable room for optimism in improving the net light coupling efficacy, provided absorptive loss mechanisms are eliminated. Further improvements are possible if scattering schemes such as the silver nanowire based hard coat structure are fully developed. The wet coating processes were successfully scaled to large area plastic substrates and resulted in the construction of a 900 lumen luminaire device.

  9. High-efficiency concentrator silicon solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Sinton, R.A.; Cuevas, A.; King, R.R.; Swanson, R.M. (Stanford Univ., CA (USA). Solid-State Electronics Lab.)

    1990-11-01

    This report presents results from extensive process development in high-efficiency Si solar cells. An advanced design for a 1.56-cm2 cell with front grids achieved 26% efficiency at 90 suns. This is especially significant since this cell does not require a prismatic cover glass. New designs for simplified backside-contact solar cells were advanced from a status of near-nonfunctionality to a demonstrated 21-22% for one-sun cells in sizes up to 37.5 cm2. An efficiency of 26% was achieved for similar 0.64-cm2 concentrator cells at 150 suns. More fundamental work on dopant-diffused regions is also presented here. Recombination vs. various process and physical parameters was studied in detail for boron and phosphorus diffusions. Emitter-design studies based solidly upon these new data indicate the performance vs. design parameters for a variety of the cases of most interest to solar cell designers. Extractions of p-type bandgap narrowing and the surface recombination for p- and n-type regions from these studies have a generality that extends beyond solar cells into basic device modeling. 68 refs., 50 figs.

  10. Nanooptics for highly efficient photon management

    Science.gov (United States)

    Wyrowski, Frank; Schimmel, Hagen

    2005-09-01

    Optical systems for photon management, that is, the generation of tailored electromagnetic fields, constitute one of the keys to innovation through photonics. An important subfield of photon management deals with the transformation of an incident light field into a field with a specified intensity distribution. In this paper we consider some basic aspects of the nature of systems for those light transformations. It turns out that the transversal redistribution of energy (TRE) is of central concern for achieving systems with high transformation efficiency. Besides established techniques, nanostructured optical elements (NOEs) are needed to implement transversal energy redistribution. This builds a bridge between the needs of photon management, optical engineering, and nanooptics.

  11. Construction of Short-Length High-Rate LDPC Codes Using Difference Families

    OpenAIRE

    Deny Hamdani; Ery Safrianti

    2007-01-01

    A low-density parity-check (LDPC) code is a linear block error-correcting code defined by a sparse parity-check matrix. It is decoded using the message-passing algorithm and, in many cases, is capable of outperforming turbo codes. This paper presents a class of low-density parity-check (LDPC) codes showing good performance with low encoding complexity. The code is constructed using difference families from combinatorial design. The resulting code, which is designed to have short code length and high code r...
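
    The appeal of difference families for LDPC design is that a family with pairwise-difference multiplicity λ = 1 yields a parity-check matrix in which any two columns overlap in at most one row, i.e. the Tanner graph has no 4-cycles. The paper's exact construction is not reproduced here; the sketch below uses the perfect difference set {0, 1, 3, 9} mod 13 as an assumed example base block.

```python
def circulant_ldpc(base_block, v):
    """Build a v x v circulant parity-check matrix: column j has ones at
    rows (d + j) mod v for each element d of the base block."""
    H = [[0] * v for _ in range(v)]
    for j in range(v):
        for d in base_block:
            H[(d + j) % v][j] = 1
    return H

def has_4_cycle(H):
    """A 4-cycle exists iff some pair of columns shares ones in >= 2 rows."""
    n = len(H[0])
    cols = [{i for i in range(len(H)) if H[i][j]} for j in range(n)]
    return any(len(cols[a] & cols[b]) >= 2
               for a in range(n) for b in range(a + 1, n))
```

    Since every nonzero difference mod 13 occurs exactly once among the elements of {0, 1, 3, 9}, two columns can satisfy d + j = d' + j' for at most one pair (d, d'), which is precisely the no-4-cycle condition. The circulant structure is also what keeps encoding complexity low.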

  12. Harmonic Enhancement in Low Bitrate Audio Coding Using an Efficient Long-Term Predictor

    Directory of Open Access Journals (Sweden)

    Song Jeongook

    2010-01-01

    Full Text Available This paper proposes audio coding using an efficient long-term prediction method to enhance the perceptual quality of audio codecs for speech input signals at low bit rates. MPEG-4 AAC-LTP exploited a similar concept, but its improvement was not significant because of the small prediction gain due to long prediction lags and aliased components caused by transformation with the time-domain aliasing cancellation (TDAC) technique. The proposed algorithm increases the prediction gain by employing a de-harmonizing predictor and a long-term compensation filter. The look-back memory elements are first constructed by applying the de-harmonizing predictor to the input signal; then the prediction residual is encoded and decoded by transform audio coding. Finally, the long-term compensation filter is applied to the updated look-back memory of the decoded prediction residual to obtain the synthesized signal. Experimental results show that the proposed algorithm has much lower spectral distortion and higher perceptual quality than conventional approaches, especially for harmonic signals such as voiced speech.

  13. Efficient coding explains the universal law of generalization in human perception.

    Science.gov (United States)

    Sims, Chris R

    2018-05-11

    Perceptual generalization and discrimination are fundamental cognitive abilities. For example, if a bird eats a poisonous butterfly, it will learn to avoid preying on that species again by generalizing its past experience to new perceptual stimuli. In cognitive science, the "universal law of generalization" seeks to explain this ability and states that generalization between stimuli will follow an exponential function of their distance in "psychological space." Here, I challenge existing theoretical explanations for the universal law and offer an alternative account based on the principle of efficient coding. I show that the universal law emerges inevitably from any information processing system (whether biological or artificial) that minimizes the cost of perceptual error subject to constraints on the ability to process or transmit information. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  14. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    OpenAIRE

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In High Efficiency Video Coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree also brings extremely high computational complexity. Innovative work on improving the coding tree to further reduce encoding time is presented in this paper. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content ...
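
    The coding-tree complexity problem comes from recursively testing every CU split down to the minimum size. A toy sketch of the early-termination idea: stop splitting when a block looks homogeneous. Real HEVC encoders compare rate-distortion costs; the variance threshold here is an assumed stand-in used only to illustrate the quadtree recursion.

```python
def variance(block):
    """Sample variance of a 2-D block of pixel values."""
    vals = [v for row in block for v in row]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def sub_block(block, y, x, size):
    return [row[x:x + size] for row in block[y:y + size]]

def split_cu(block, size, min_size=8, thresh=25.0):
    """Return a quadtree: a leaf (its CU size) or a list of 4 child subtrees.
    Homogeneous CUs terminate early instead of testing every depth,
    which is where the encoding-time saving comes from."""
    if size <= min_size or variance(block) < thresh:
        return size
    half = size // 2
    return [split_cu(sub_block(block, y, x, half), half, min_size, thresh)
            for y in (0, half) for x in (0, half)]
```

    A flat region is kept as one large CU while a textured region is recursively quartered, mirroring how fast CU decision methods prune the exhaustive depth search.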

  15. The WARP Code: Modeling High Intensity Ion Beams

    International Nuclear Information System (INIS)

    Grote, David P.; Friedman, Alex; Vay, Jean-Luc; Haber, Irving

    2005-01-01

    The Warp code, developed for heavy-ion driven inertial fusion energy studies, is used to model high intensity ion (and electron) beams. Significant capability has been incorporated in Warp, allowing nearly all sections of an accelerator to be modeled, beginning with the source. Warp has as its core an explicit, three-dimensional, particle-in-cell model. Alongside this is a rich set of tools for describing the applied fields of the accelerator lattice, and embedded conducting surfaces (which are captured at sub-grid resolution). Also incorporated are models with reduced dimensionality: an axisymmetric model and a transverse ''slice'' model. The code takes advantage of modern programming techniques, including object orientation, parallelism, and scripting (via Python). It is at the forefront in the use of the computational technique of adaptive mesh refinement, which has been particularly successful in the area of diode and injector modeling, both steady-state and time-dependent. In the presentation, some of the major aspects of Warp will be overviewed, especially those that could be useful in modeling ECR sources. Warp has been benchmarked against both theory and experiment. Recent results will be presented showing good agreement of Warp with experimental results from the STS500 injector test stand

  16. The WARP Code: Modeling High Intensity Ion Beams

    International Nuclear Information System (INIS)

    Grote, D P; Friedman, A; Vay, J L; Haber, I

    2004-01-01

    The Warp code, developed for heavy-ion driven inertial fusion energy studies, is used to model high intensity ion (and electron) beams. Significant capability has been incorporated in Warp, allowing nearly all sections of an accelerator to be modeled, beginning with the source. Warp has as its core an explicit, three-dimensional, particle-in-cell model. Alongside this is a rich set of tools for describing the applied fields of the accelerator lattice, and embedded conducting surfaces (which are captured at sub-grid resolution). Also incorporated are models with reduced dimensionality: an axisymmetric model and a transverse ''slice'' model. The code takes advantage of modern programming techniques, including object orientation, parallelism, and scripting (via Python). It is at the forefront in the use of the computational technique of adaptive mesh refinement, which has been particularly successful in the area of diode and injector modeling, both steady-state and time-dependent. In the presentation, some of the major aspects of Warp will be overviewed, especially those that could be useful in modeling ECR sources. Warp has been benchmarked against both theory and experiment. Recent results will be presented showing good agreement of Warp with experimental results from the STS500 injector test stand. Additional information can be found on the web page http://hif.lbl.gov/theory/WARP( ) summary.html

  17. High efficiency double sided solar cells

    International Nuclear Information System (INIS)

    Seddik, M.M.

    1990-06-01

    Silicon technology state of the art for single-crystalline cells was given as limited to less than 20% efficiency. A new form of high-current, high-efficiency photovoltaic solar cell with a double sided structure is proposed. The new forms could be n++pn++ or p++np++ double sided junctions. The idea of double sided devices can be understood as two solar cells connected back-to-back in a parallel electrical connection, in which the current is doubled if the cell is illuminated from both sides by a V-shaped reflector. The cell is mounted to the reflector such that each face is inclined at an angle of 45 deg to each side of the reflector. The advantages of the new structure are: a) high power devices; b) ease of fabrication; c) the cells are used vertically instead of the horizontal use of regular solar cells, which require a large area to install. This is very important in power stations and especially for satellite installation. If the proposal is made real and proved to be experimentally feasible, it would be a new era for photovoltaic solar cells, since the proposal has already been extended to even higher currents. The suggested structures could be stated as: n++pn++ V p++np++; n++pn++ V n++pn++; or p++np++ V p++np++. These types of structures are formed in a wedged shape to employ indirect illumination by parabolic, conic, or V-shaped reflectors. The advantages of these new forms are low cost, high power, smaller size and space, self-concentration, etc. These proposals, if they find their way to experimental realization, would offer a short path to the commercial market and could have an incredible impact on solar cell technology and applications. (author). 12 refs, 5 figs

  18. Simple Motor Control Concept Results in High Efficiency at High Velocities

    Science.gov (United States)

    Starin, Scott; Engel, Chris

    2013-09-01

    The need for high velocity motors in space applications for reaction wheels and detectors has stressed the limits of Brushless Permanent Magnet Motors (BPMM). Due to inherent hysteresis core losses, conventional BPMMs try to balance the need for torque versus hysteresis losses. Cog-less motors have significantly less hysteresis losses but suffer from lower efficiencies. Additionally, the inherent low inductance in cog-less motors results in high ripple currents or high switching frequencies, which lowers overall efficiency and increases performance demands on the control electronics. However, using a somewhat forgotten but fully qualified technology of Isotropic Magnet Motors (IMM), extremely high velocities may be achieved at low power input using conventional drive electronics. This paper will discuss the trade study efforts and empirical test data on a 34,000 RPM IMM.

  19. High Efficiency, Illumination Quality OLEDs for Lighting

    Energy Technology Data Exchange (ETDEWEB)

    Joseph Shiang; James Cella; Kelly Chichak; Anil Duggal; Kevin Janora; Chris Heller; Gautam Parthasarathy; Jeffery Youmans; Joseph Shiang

    2008-03-31

    The goal of the program was to demonstrate a 45 lumen per watt white light device based upon the use of multiple emission colors through the use of solution processing. This performance level is a dramatic extension of the team's previous 15 LPW large area illumination device. The fundamental material system was based upon commercial polymer materials. The team was largely able to achieve these goals, and delivered to DOE a 90 lumen illumination source that had an average performance of 34 LPW at 1000 cd/m{sup 2}, with peak performances near 40 LPW. The average color temperature is 3200 K and the calculated CRI is 85. The device operated at a brightness of approximately 1000 cd/m{sup 2}. The use of multiple emission colors, particularly red and blue, provided additional degrees of design flexibility in achieving white light, but also required the use of a multilayered structure to separate the different recombination zones and prevent interconversion of blue emission to red emission. The use of commercial materials had the advantage that improvements by the chemical manufacturers in charge transport efficiency, operating life and material purity could be rapidly incorporated without the expenditure of additional effort. The program was designed to take maximum advantage of the known characteristics of these materials and proceeded in seven steps: (1) identify the most promising materials, (2) assemble them into multi-layer structures to control excitation and transport within the OLED, (3) identify materials development needs that would optimize performance within multilayer structures, (4) build a prototype that demonstrates the potential entitlement of the novel multilayer OLED architecture, (5) integrate all of the developments to find the single best materials set to implement the novel multilayer architecture, (6) further optimize the best materials set, (7) make a large area high illumination quality white OLED. A photo of the final deliverable is shown

  20. Rate adaptive multilevel coded modulation with high coding gain in intensity modulation direct detection optical communication

    Science.gov (United States)

    Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao

    2018-02-01

    A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on fixed code length, together with a corresponding decoding scheme, is proposed. The RA-MLC scheme combines multilevel coded modulation with binary linear block codes at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves decoding accuracy by means of soft information passed between the different layers, which enhances performance. Simulations are carried out in an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduced the number of decoders by 72% and realized 22 rate adaptations without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER = 1E-3.
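The layered transmitter structure of multilevel coded modulation can be illustrated with a toy sketch. Note the assumptions: a trivial repetition code stands in for the binary linear block codes of the actual RA-MLC scheme, and all function names are illustrative.

```python
# Toy sketch of a multilevel coded modulation (MLC) transmitter/receiver.
# A repetition code stands in for the binary linear block codes of RA-MLC.

def repetition_encode(bits, n):
    # Each information bit is repeated n times (fixed code length per level).
    return [b for b in bits for _ in range(n)]

def repetition_decode(coded, n):
    # Majority vote over each group of n received bits.
    return [1 if 2 * sum(coded[i:i + n]) > n else 0
            for i in range(0, len(coded), n)]

def mlc_modulate(level0, level1):
    # Map two coded bit levels onto 4 intensity levels (set partitioning).
    return [2 * b1 + b0 for b0, b1 in zip(level0, level1)]

def mlc_demodulate(symbols):
    # Multistage decoding starts by splitting each symbol back into levels.
    level0 = [s & 1 for s in symbols]
    level1 = [(s >> 1) & 1 for s in symbols]
    return level0, level1
```

Rate adaptation with a fixed code length then amounts to changing the per-level code rates while keeping the codeword length constant.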

  1. Multiscale approaches to high efficiency photovoltaics

    Directory of Open Access Journals (Sweden)

    Connolly James Patrick

    2016-01-01

    Full Text Available While renewable energies are achieving parity around the globe, efforts to reach higher solar cell efficiencies become ever more difficult as they approach the limiting efficiency. The so-called third generation concepts attempt to break this limit through a combination of novel physical processes and new materials and concepts in organic and inorganic systems. Some examples of semi-empirical modelling in the field are reviewed, in particular for multispectral solar cells on silicon (French ANR project MultiSolSi). Their achievements are outlined, and the limits of these approaches shown. This introduces the main topic of this contribution: the use of multiscale experimental and theoretical techniques to go beyond the semi-empirical understanding of these systems. This approach has already led to great advances in modelling, embodied in widely known modelling software. Yet a survey of the topic reveals a fragmentation of efforts across disciplines, such as the organic and inorganic fields, but also between high-efficiency concepts such as hot carrier cells and intermediate band concepts. We show how this obstacle to the resolution of practical research problems may be lifted by interdisciplinary cooperation across length scales, across experimental and theoretical fields, and across materials systems. We present a European COST Action “MultiscaleSolar”, kicking off in early 2015, which brings together experimental and theoretical partners in order to develop multiscale research in organic and inorganic materials. The goal of this defragmentation and interdisciplinary collaboration is to develop understanding across length scales, which will enable the full potential of third generation concepts to be evaluated in practice, for societal and industrial applications.

  2. SASKTRAN: A spherical geometry radiative transfer code for efficient estimation of limb scattered sunlight

    International Nuclear Information System (INIS)

    Bourassa, A.E.; Degenstein, D.A.; Llewellyn, E.J.

    2008-01-01

    The inversion of satellite-based observations of limb-scattered sunlight for the retrieval of constituent species requires efficient and accurate modelling of the measurement. We present the development of the SASKTRAN radiative transfer model for the prediction of limb scatter measurements at optical wavelengths by the method of successive orders along rays traced in a spherical atmosphere. The component of the signal due to the first two scattering events of the solar beam is accounted for directly along rays traced in the three-dimensional geometry. Simplifying assumptions in successive scattering orders provide computational optimizations without severely compromising the accuracy of the solution. SASKTRAN is designed for the analysis of measurements from the OSIRIS instrument, and the implementation of the algorithm is efficient enough that the code is suitable for the inversion of OSIRIS profiles on desktop computers. SASKTRAN total limb radiance profiles generally compare better with Monte-Carlo reference models over a large range of solar conditions than the approximate spherical and plane-parallel models typically used for inversions.
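The successive-orders idea can be caricatured in a zero-dimensional sketch, where each scattering order contributes the previous order scaled by a single-scattering albedo. The function and parameter names are assumptions; the real SASKTRAN model evaluates each order along rays traced in spherical geometry.

```python
# Zero-dimensional caricature of the successive-orders-of-scattering method:
# each order contributes the previous one scaled by the single-scattering
# albedo, so the total converges to the full multiple-scattering sum.

def successive_orders(first_order, albedo, tol=1e-12, max_orders=10000):
    total, order = 0.0, first_order  # first order: singly scattered sunlight
    for _ in range(max_orders):
        total += order
        if order < tol:
            break
        order *= albedo  # next order scatters a fraction `albedo` again
    return total

# For albedo < 1 this approaches first_order / (1 - albedo), the closed-form
# geometric-series limit of the multiple-scattering contribution.
```

The design point the abstract makes is that the first two orders are computed exactly along 3-D rays, while this cheap iterative accumulation is applied only to the higher orders.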

  3. Efficient Coding and Statistically Optimal Weighting of Covariance among Acoustic Attributes in Novel Sounds

    Science.gov (United States)

    Stilp, Christian E.; Kluender, Keith R.

    2012-01-01

    To the extent that sensorineural systems are efficient, redundancy should be extracted to optimize transmission of information, but perceptual evidence for this has been limited. Stilp and colleagues recently reported efficient coding of robust correlation (r = .97) among complex acoustic attributes (attack/decay, spectral shape) in novel sounds. Discrimination of sounds orthogonal to the correlation was initially inferior but later comparable to that of sounds obeying the correlation. These effects were attenuated for less-correlated stimuli (r = .54) for reasons that are unclear. Here, statistical properties of correlation among acoustic attributes essential for perceptual organization are investigated. Overall, simple strength of the principal correlation is inadequate to predict listener performance. Initial superiority of discrimination for statistically consistent sound pairs was relatively insensitive to decreased physical acoustic/psychoacoustic range of evidence supporting the correlation, and to more frequent presentations of the same orthogonal test pairs. However, increased range supporting an orthogonal dimension has substantial effects upon perceptual organization. Connectionist simulations and eigenvalues from closed-form principal components analysis (PCA) calculations reveal that perceptual organization is near-optimally weighted to shared versus unshared covariance in experienced sound distributions. Implications of reduced perceptual dimensionality for speech perception and plausible neural substrates are discussed. PMID:22292057
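The shared-versus-unshared covariance weighting can be sketched numerically: for two unit-variance attributes with correlation r, the covariance eigenvalues are 1 + r (shared) and 1 - r (unshared), so the leading component strongly dominates for the r = .97 stimuli. This is an illustrative sketch, not the authors' analysis.

```python
import numpy as np

# Sketch: eigenvalues of the covariance of two correlated, unit-variance
# acoustic attributes. The leading eigenvalue (1 + r) measures the shared
# covariance; the trailing one (1 - r) the unshared, orthogonal variance.

def principal_eigenvalues(r, n=200000, seed=0):
    rng = np.random.default_rng(seed)
    cov = [[1.0, r], [r, 1.0]]
    samples = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    evals = np.linalg.eigvalsh(np.cov(samples.T))
    return evals[::-1]  # descending: shared first, unshared second
```

For r = .97 the leading component carries about 98.5% of the variance, versus about 77% for r = .54, which is one way to see why the two stimulus sets behave so differently.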

  4. High efficiency thin-film solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Schock, Hans-Werner [Helmholtz Zentrum Berlin (Germany). Solar Energy

    2012-11-01

    Production of photovoltaics is growing worldwide on a gigawatt scale. Among the thin film technologies, Cu(In,Ga)(S,Se){sub 2} (CIS or CIGS) based solar cells have been the focus of more and more attention. This paper aims to analyze the success of CIGS based solar cells and the potential of this technology for future photovoltaics large-scale production. Specific material properties make CIS unique and allow the preparation of the material with a wide range of processing options. The huge potential lies in the possibility to take advantage of modern thin film processing equipment and combine it with very high efficiencies beyond 20% already achieved on the laboratory scale. A sustainable development of this technology could be realized by modifying the materials and replacing indium by abundant elements. (orig.)

  5. Design of High Efficient MPPT Solar Inverter

    Directory of Open Access Journals (Sweden)

    Sunitha K. A.

    2017-01-01

    Full Text Available This work aims to design a high-efficiency Maximum Power Point Tracking (MPPT) solar inverter. A boost converter is designed in the system to boost the power from the photovoltaic panel. With this experimental setup, a room with a 500 Watt load (eight fluorescent tubes) is completely controlled, with the aim of decreasing the maintenance cost. A microcontroller implements the P&O (Perturb and Observe) algorithm used for tracking the maximum power point. The duty cycle for the operation of the boost converter is optimally adjusted by the MPPT controller. An MPPT charge controller charges the battery as well as feeding the inverter, which runs the load. Both the P&O scheme with a fixed variation for the reference current and the intelligent MPPT algorithm were able to identify the global maximum power point; however, the performance of the MPPT algorithm was better.
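A minimal sketch of the P&O idea: perturb the operating voltage by a fixed step and reverse direction whenever the measured power falls. The parabolic panel curve, the 17 V maximum-power voltage, and the step size are toy assumptions, not the paper's panel or hardware parameters.

```python
# Minimal Perturb-and-Observe (P&O) MPPT sketch. The parabolic panel curve,
# the 17 V maximum-power voltage, and the step size are toy assumptions.

def panel_power(v):
    # Toy PV power curve peaking at v = 17 V with P_max = 60 W.
    return max(0.0, 60.0 - 0.5 * (v - 17.0) ** 2)

def perturb_and_observe(v=12.0, step=0.2, iters=200):
    p_prev = panel_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step          # perturb the operating voltage
        p = panel_power(v)
        if p < p_prev:                 # observe: power fell, so reverse
            direction = -direction
        p_prev = p
    return v  # ends up oscillating within one step of the maximum
```

In the actual system the "voltage" perturbation is realized by adjusting the boost converter's duty cycle, and the steady-state oscillation around the maximum is the known cost of fixed-step P&O.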

  6. High Efficiency Ka-Band Spatial Combiner

    Directory of Open Access Journals (Sweden)

    D. Passi

    2014-12-01

    Full Text Available A Ka-band, high efficiency, small size Spatial Combiner (SPC) is proposed in this paper, which uses innovatively matched quadruple fin-line-to-microstrip (FLuS) transitions. At the date of this paper and to the authors' best knowledge, no such innovative FLuS transitions have been reported in the literature before. These transitions are inserted into a WR28 waveguide T-junction in order to allow the integration of 16 Monolithic Microwave Integrated Circuit (MMIC) Solid State Power Amplifiers (SSPAs). A computational electromagnetic model using the finite elements method has been implemented. A mean insertion loss of 2 dB is achieved, with a return loss better than 10 dB in the 31-37 GHz bandwidth.

  7. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen-Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2018-05-15

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provide global barrier and notification functions. Integrated in the node design is a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that improves the soft error rate while supporting DMA functionality, allowing for parallel message-passing processing.
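The routing metric a torus interconnect minimizes can be sketched as the sum of per-dimension wraparound distances; a generic sketch of torus hop counting, not the machine's actual routing logic.

```python
# Sketch: hop count between two nodes of a d-dimensional torus, taking the
# shorter way around each ring - the distance a torus interconnect minimizes.

def torus_hops(a, b, dims):
    # a, b: node coordinates; dims: ring length in each dimension.
    return sum(min((x - y) % s, (y - x) % s)
               for x, y, s in zip(a, b, dims))
```

The wraparound links are what keep the maximum hop count low: in each dimension no pair of nodes is ever more than half a ring apart.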

  8. The CRRES high efficiency solar panel

    International Nuclear Information System (INIS)

    Trumble, T.M.

    1991-01-01

    This paper reports on the High Efficiency Solar Panel (HESP) experiment, which is to provide both engineering and scientific information concerning the effects of space radiation on advanced gallium arsenide (GaAs) solar cells. The HESP experiment consists of an ambient panel, an annealing panel, and a programmable load. This experiment, in conjunction with the radiation measurement experiments aboard the CRRES, provides the first opportunity to simultaneously measure the trapped radiation belts and the results of radiation damage to solar cells. The engineering information will result in a design guide for selecting the optimum solar array characteristics for different orbits and different lifetimes. The scientific information will provide both correlation of laboratory damage effects to space damage effects and a better model for predicting effective solar cell panel lifetimes

  9. A Linear Algebra Framework for Static High Performance Fortran Code Distribution

    Directory of Open Access Journals (Sweden)

    Corinne Ancourt

    1997-01-01

    Full Text Available High Performance Fortran (HPF) was developed to support data parallel programming for single-instruction multiple-data (SIMD) and multiple-instruction multiple-data (MIMD) machines with distributed memory. The programmer is provided a familiar uniform logical address space and specifies the data distribution by directives. The compiler then exploits these directives to allocate arrays in the local memories, to assign computations to elementary processors, and to migrate data between processors when required. We show here that linear algebra is a powerful framework to encode HPF directives and to synthesize distributed code with space-efficient array allocation, tight loop bounds, and vectorized communications for INDEPENDENT loops. The generated code includes traditional optimizations such as guard elimination, message vectorization and aggregation, and overlap analysis. The systematic use of an affine framework makes it possible to prove the compilation scheme correct.
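The affine view of an HPF BLOCK distribution reduces, in the one-dimensional case, to a simple owner-computes mapping; a minimal sketch, with the function name an assumption rather than anything from the paper:

```python
# One-dimensional sketch of the affine owner-computes mapping behind an HPF
# BLOCK distribution: element i of an n-element array on p processors lives
# on processor i // b with block size b = ceil(n / p), at local offset i % b.

def block_owner(i, n, p):
    b = -(-n // p)            # ceil(n / p) without floating point
    return i // b, i % b      # (owning processor, local index)
```

The compiler's job, in this view, is to invert such mappings symbolically: given the set of elements a loop touches, derive each processor's local loop bounds and the messages needed for the elements it does not own.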

  10. Particle In Cell Codes on Highly Parallel Architectures

    Science.gov (United States)

    Tableman, Adam

    2014-10-01

    We describe strategies and examples of Particle-In-Cell codes running on Nvidia GPU and Intel Phi architectures. This includes basic implementations in skeleton codes and full-scale development versions (encompassing 1D, 2D, and 3D codes) in Osiris. Both the similarities and differences between Intel's and Nvidia's hardware will be examined. Work supported by grants NSF ACI 1339893, DOE DE SC 000849, DOE DE SC 0008316, DOE DE NA 0001833, and DOE DE FC02 04ER 54780.
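The kernel such implementations parallelize over particles is the leapfrog push; a deliberately minimal 1D sketch in normalized units, not Osiris code, with a uniform field standing in for the field gathered from the mesh:

```python
# Deliberately minimal 1D leapfrog particle push in normalized units: the
# per-particle kernel that GPU and Xeon Phi PIC implementations parallelize.
# A uniform field replaces the field gathered from the mesh in a real code.

def push_particles(xs, vs, e_field, dt, qm=1.0):
    # Leapfrog update: accelerate velocities, then advance positions with
    # the new (half-step-offset) velocities.
    vs = [v + qm * e_field * dt for v in vs]
    xs = [x + v * dt for x, v in zip(xs, vs)]
    return xs, vs
```

Because each particle updates independently, this loop maps naturally onto one-thread-per-particle execution; the harder part on GPUs is the charge deposition step, where particles scatter contributions back onto shared mesh cells.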

  11. Zerodur polishing process for high surface quality and high efficiency

    International Nuclear Information System (INIS)

    Tesar, A.; Fuchs, B.

    1992-08-01

    Zerodur is a glass-ceramic composite of importance in applications where temperature instabilities influence optical and mechanical performance, such as in earthbound and spaceborne telescope mirror substrates. Polished Zerodur surfaces of high quality have been required for laser gyro mirrors. The polished surface quality of substrates affects the performance of high reflection coatings. Thus, the interest in improving Zerodur polished surface quality has become more general. Beyond eliminating subsurface damage, high quality surfaces are produced by reducing the amount of hydrated material redeposited on the surface during polishing. With the proper control of polishing parameters, such surfaces exhibit roughnesses of < 1 Angstrom rms. Zerodur polishing was studied to recommend a high surface quality polishing process which could be easily adapted to standard planetary continuous polishing machines and spindles. This summary contains information on a polishing process developed at LLNL which reproducibly provides high quality polished Zerodur surfaces at very high polishing efficiencies

  12. Shielding analysis of high level waste water storage facilities using MCNP code

    Energy Technology Data Exchange (ETDEWEB)

    Yabuta, Naohiro [Mitsubishi Research Inst., Inc., Tokyo (Japan)

    2001-01-01

    A neutron and gamma-ray transport analysis was made for a reprocessing facility with large buildings having thick shielding. The radiation shielding analysis consists of a deep-penetration calculation for the concrete walls and a skyshine calculation for the space outside the buildings. An efficient analysis with a short running time and high accuracy needs a variance reduction technique suitable for all the calculation regions and structures. In this report, the shielding analysis using MCNP and a discrete ordinates transport code is explained, and the procedure for choosing the variance reduction parameters is described. (J.P.N.)

  13. Highly efficient red electrophosphorescent devices at high current densities

    International Nuclear Information System (INIS)

    Wu Youzhi; Zhu Wenqing; Zheng Xinyou; Sun, Runguang; Jiang Xueyin; Zhang Zhilin; Xu Shaohong

    2007-01-01

    The efficiency decrease at high current densities in red electrophosphorescent devices is drastically restrained, compared with conventional electrophosphorescent devices, by using bis(2-methyl-8-quinolinato)4-phenylphenolate aluminum (BAlq) as a hole and exciton blocker. The Ir complex bis(2-(2'-benzo[4,5-α]thienyl)pyridinato-N,C3') iridium (acetylacetonate) is used as the emitter; a maximum external quantum efficiency (QE) of 7.0% and a luminance of 10000 cd/m2 are obtained. The QE is still as high as 4.1% at the higher current density J = 100 mA/cm2. The CIE-1931 co-ordinates are (0.672, 0.321). A carrier trapping mechanism is revealed to dominate the process of electroluminescence

  14. White LED with High Package Extraction Efficiency

    International Nuclear Information System (INIS)

    Yi Zheng; Stough, Matthew

    2008-01-01

    The goal of this project is to develop a high efficiency phosphor converting (white) Light Emitting Diode (pcLED) 1-Watt package through an increase in package extraction efficiency. A transparent/translucent monolithic phosphor is proposed to replace the powdered phosphor to reduce the scattering caused by phosphor particles. Additionally, a multi-layer thin film selectively reflecting filter is proposed between blue LED die and phosphor layer to recover inward yellow emission. At the end of the project we expect to recycle approximately 50% of the unrecovered backward light in current package construction, and develop a pcLED device with 80 lm/W e using our technology improvements and commercially available chip/package source. The success of the project will benefit luminous efficacy of white LEDs by increasing package extraction efficiency. In most phosphor-converting white LEDs, the white color is obtained by combining a blue LED die (or chip) with a powdered phosphor layer. The phosphor partially absorbs the blue light from the LED die and converts it into a broad green-yellow emission. The mixture of the transmitted blue light and green-yellow light emerging gives white light. There are two major drawbacks for current pcLEDs in terms of package extraction efficiency. The first is light scattering caused by phosphor particles. When the blue photons from the chip strike the phosphor particles, some blue light will be scattered by phosphor particles. Converted yellow emission photons are also scattered. A portion of scattered light is in the backward direction toward the die. The amount of this backward light varies and depends in part on the particle size of phosphors. The other drawback is that yellow emission from phosphor powders is isotropic. Although some backward light can be recovered by the reflector in current LED packages, there is still a portion of backward light that will be absorbed inside the package and further converted to heat. 
Heat generated

  15. Tailored Materials for High Efficiency CIDI Engines

    Energy Technology Data Exchange (ETDEWEB)

    Grant, G.J.; Jana, S.

    2012-03-30

    The overall goal of the project, Tailored Materials for High Efficiency Compression Ignition Direct Injection (CIDI) Engines, is to enable the implementation of new combustion strategies, such as homogeneous charge compression ignition (HCCI), that have the potential to significantly increase the energy efficiency of current diesel engines and decrease fuel consumption and environmental emissions. These strategies, however, are increasing the demands on conventional engine materials, either from increases in peak cylinder pressure (PCP) or from increases in the temperature of operation. The specific objective of this project is to investigate the application of a new material processing technology, friction stir processing (FSP), to improve the thermal and mechanical properties of engine components. The concept is to modify the surfaces of conventional, low-cost engine materials. The project focused primarily on FSP in aluminum materials that are compositional analogs to the typical piston and head alloys seen in small- to mid-sized CIDI engines. Investigations have been primarily of two types over the duration of this project: (1) FSP of a cast hypoeutectic Al-Si-Mg (A356/357) alloy with no introduction of any new components, and (2) FSP of Al-Cu-Ni alloys (Alloy 339) by physically stirring-in various quantities of carbon nanotubes/nanofibers or carbon fibers. Experimental work to date on aluminum systems has shown significant increases in fatigue lifetime and stress-level performance in aluminum-silicon alloys using friction processing alone, but work to demonstrate the addition of carbon nanotubes and fibers into aluminum substrates has shown mixed results due primarily to the difficulty in achieving porosity-free, homogeneous distributions of the particulate. A limited effort to understand the effects of FSP on steel materials was also undertaken during the course of this project. 
Processed regions were created in high-strength, low-alloyed steels up to 0.5 in

  16. High efficiency diffusion molecular retention tumor targeting.

    Directory of Open Access Journals (Sweden)

    Yanyan Guo

    Full Text Available Here we introduce diffusion molecular retention (DMR) tumor targeting, a technique that employs PEG-fluorochrome shielded probes that, after a peritumoral (PT) injection, undergo slow vascular uptake and extensive interstitial diffusion, with tumor retention only through integrin molecular recognition. To demonstrate DMR, RGD (integrin binding) and RAD (control) probes were synthesized bearing DOTA (for (111)In(3+)), a NIR fluorochrome, and 5 kDa PEG that endows probes with a protein-like volume of 25 kDa and decreases non-specific interactions. With a GFP-BT-20 breast carcinoma model, tumor targeting by the DMR or i.v. methods was assessed by surface fluorescence, biodistribution of [(111)In]RGD and [(111)In]RAD probes, and whole animal SPECT. After a PT injection, both probes rapidly diffused through the normal and tumor interstitium, with retention of the RGD probe due to integrin interactions. With PT injection and the [(111)In]RGD probe, SPECT indicated a highly tumor specific uptake at 24 h post injection, with 352%ID/g tumor obtained by DMR (vs 4.14%ID/g by i.v.). The high efficiency molecular targeting of DMR employed low probe doses (e.g. 25 ng as RGD peptide), which minimizes toxicity risks and facilitates clinical translation. DMR applications include the delivery of fluorochromes for intraoperative tumor margin delineation, the delivery of radioisotopes (e.g. toxic, short-range alpha emitters) for radiotherapy, or the delivery of photosensitizers to tumors accessible to light.

  17. High collection efficiency CVD diamond alpha detectors

    International Nuclear Information System (INIS)

    Bergonzo, P.; Foulon, F.; Marshall, R.D.; Jany, C.; Brambilla, A.; McKeag, R.D.; Jackman, R.B.

    1998-01-01

    Advances in Chemical Vapor Deposited (CVD) diamond have enabled the routine use of this material for sensor device fabrication, allowing exploitation of its unique combination of physical properties (low temperature susceptibility (> 500 C), high resistance to radiation damage (> 100 Mrad) and to corrosive media). A consequence of CVD diamond growth on silicon is the formation of polycrystalline films which has a profound influence on the physical and electronic properties with respect to those measured on monocrystalline diamond. The authors report the optimization of physical and geometrical device parameters for radiation detection in the counting mode. Sandwich and co-planar electrode geometries are tested and their performances evaluated with regard to the nature of the field profile and drift distances inherent in such devices. The carrier drift length before trapping was measured under alpha particles and values as high as 40% of the overall film thickness are reported. Further, by optimizing the device geometry, they show that a gain in collection efficiency, defined as the induced charge divided by the deposited charge within the material, can be achieved even though lower bias values are used

  18. High-Efficient Parallel CAVLC Encoders on Heterogeneous Multicore Architectures

    Directory of Open Access Journals (Sweden)

    H. Y. Su

    2012-04-01

    Full Text Available This article presents two highly efficient parallel realizations of context-based adaptive variable-length coding (CAVLC) on heterogeneous multicore processors. By optimizing the architecture of the CAVLC encoder, three kinds of dependences are eliminated or weakened: the context-based data dependence, the memory-access dependence, and the control dependence. The CAVLC pipeline is divided into three stages (two scans, coding, and lag packing) and is implemented on two typical heterogeneous multicore architectures. One is a block-based SIMD parallel CAVLC encoder on the multicore stream processor STORM. The other is a component-oriented SIMT parallel encoder on a massively parallel GPU architecture. Both exploit rich data-level parallelism. Experimental results show that, compared with the CPU version, speedups of more than 70 times can be obtained on STORM and over 50 times on the GPU. The STORM implementation achieves real-time processing for 1080p @ 30 fps, and the GPU-based version satisfies the requirements for 720p real-time encoding. The throughput of the presented CAVLC encoders is more than 10 times higher than that of published software encoders on DSP and multicore platforms.
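The bit-serial nature of variable-length entropy coding, which the scan/code/pack pipelining above works around, can be illustrated with unsigned Exp-Golomb codes (used alongside CAVLC in H.264); a toy sketch, not the parallel encoder itself:

```python
# Toy unsigned Exp-Golomb coder (the VLC family used alongside CAVLC in
# H.264/AVC), illustrating why entropy coding is inherently bit-serial:
# each codeword's length is only known once the previous one is parsed.

def exp_golomb_encode(n):
    code = bin(n + 1)[2:]                  # binary representation of n + 1
    return "0" * (len(code) - 1) + code    # leading zeros signal the length

def exp_golomb_decode(bits):
    zeros = bits.index("1")                # count the leading-zero prefix
    value = int(bits[zeros:2 * zeros + 1], 2) - 1
    return value, bits[2 * zeros + 1:]     # decoded value, remaining bits
```

Because codeword boundaries are data-dependent, parallel encoders typically code chunks independently and then concatenate ("lag pack") the resulting bitstreams, much as the three-stage pipeline described above does.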

  19. High precision efficiency calibration of a HPGe detector

    International Nuclear Information System (INIS)

    Nica, N.; Hardy, J.C.; Iacob, V.E.; Helmer, R.G.

    2003-01-01

    Many experiments involving measurements of γ rays require a very precise efficiency calibration. Since γ-ray detection and identification also require good energy resolution, the most commonly used detectors are of the coaxial HPGe type. We have calibrated our 70% HPGe to ∼ 0.2% precision, motivated by the measurement of precise branching ratios (BR) in superallowed 0+ → 0+ β decays. These BRs are essential ingredients in extracting the ft-values needed to test the Standard Model via the unitarity of the Cabibbo-Kobayashi-Maskawa matrix, a test that it currently fails by more than two standard deviations. To achieve the required high precision in our efficiency calibration, we measured 17 radioactive sources at a source-detector distance of 15 cm. Some of these were commercial 'standard' sources, but we achieved the highest relative precision with 'home-made' sources selected because they have simple decay schemes with negligible side feeding, thus providing exactly matched γ-ray intensities. These latter sources were produced by us at Texas A and M by n-activation or by nuclear reactions. Another critical source among the 17 was a 60Co source produced by the Physikalisch-Technische Bundesanstalt, Braunschweig, Germany: its absolute activity was quoted to better than 0.06%. We used it to establish our absolute efficiency, while all the other sources were used to determine relative efficiencies, extending our calibration over a large energy range (40-3500 keV). Efficiencies were also determined with Monte Carlo calculations performed with the CYLTRAN code. The physical parameters of the Ge crystal were independently determined and only two (unmeasurable) dead-layers were adjusted, within physically reasonable limits, to achieve precise absolute agreement with our measured efficiencies. The combination of measured efficiencies at more than 60 individual energies and Monte Carlo calculations to interpolate between them allows us to quote the efficiency of our
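A common way to interpolate a measured HPGe efficiency curve between calibration points is a low-order polynomial fit of ln(efficiency) versus ln(energy); a sketch with made-up sample data, not the measured Texas A&M calibration (which instead interpolates with CYLTRAN Monte Carlo calculations):

```python
import numpy as np

# Sketch of a standard HPGe efficiency-curve interpolation: fit a low-order
# polynomial to ln(efficiency) versus ln(energy), then evaluate it between
# calibration points. Sample data used with it are made-up placeholders.

def fit_efficiency(energies_kev, efficiencies, degree=3):
    coeffs = np.polyfit(np.log(energies_kev), np.log(efficiencies), degree)
    return lambda e: np.exp(np.polyval(coeffs, np.log(e)))
```

In a real calibration the curve's absolute scale would be anchored to the 0.06%-quoted 60Co activity, with the other sources fixing only the relative shape.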

  20. High bandgap III-V alloys for high efficiency optoelectronics

    Energy Technology Data Exchange (ETDEWEB)

    Alberi, Kirstin; Mascarenhas, Angelo; Wanlass, Mark

    2017-01-10

    High bandgap alloys for high efficiency optoelectronics are disclosed. An exemplary optoelectronic device may include a substrate, at least one Al{sub 1-x}In{sub x}P layer, and a step-grade buffer between the substrate and the at least one Al{sub 1-x}In{sub x}P layer. The buffer may begin with a layer that is substantially lattice matched to GaAs, and may then incrementally increase the lattice constant in each sequential layer until a predetermined lattice constant of Al{sub 1-x}In{sub x}P is reached.

  1. The KFA-Version of the high-energy transport code HETC and the generalized evaluation code SIMPEL

    International Nuclear Information System (INIS)

    Cloth, P.; Filges, D.; Sterzenbach, G.; Armstrong, T.W.; Colborn, B.L.

    1983-03-01

    This document describes the updates that have been made to the high-energy transport code HETC for use in the German spallation-neutron source project SNQ. The performance and purpose of the subsidiary code SIMPEL, written for general analysis of the HETC output, are also described, as are the means of coupling to low-energy transport programs such as the Monte Carlo code MORSE. As complete input descriptions for HETC and SIMPEL are given together with a sample problem, this document can serve as a user's manual for these two codes. The document is also an answer to the demand, issued by a greater community of HETC users at the ICANS-IV meeting (Oct 20-24, 1980, Tsukuba-gun, Japan), for a complete description of at least one version of HETC among the many different versions that exist. (orig.)

  2. Performance of a high efficiency high power UHF klystron

    International Nuclear Information System (INIS)

    Konrad, G.T.

    1977-03-01

    A 500 kW c-w klystron was designed for the PEP storage ring at SLAC. The tube operates at 353.2 MHz, 62 kV, a microperveance of 0.75, and a gain of approximately 50 dB. Stable operation is required for a VSWR as high as 2:1 at any phase angle. The design efficiency is 70%. To obtain this value of efficiency, a second-harmonic cavity is used in order to produce a very tightly bunched beam in the output gap. At present it is planned to install 12 such klystrons in PEP. A tube with a reduced-size collector was operated at 4% duty at 500 kW; an efficiency of 63% was observed. The same tube was operated up to 200 kW c-w for PEP accelerator cavity tests. A full-scale c-w tube reached 500 kW at 65 kV with an efficiency of 55%. In addition to power and phase measurements into a matched load, some data at various load mismatches are presented.

  3. Final technical position on documentation of computer codes for high-level waste management

    International Nuclear Information System (INIS)

    Silling, S.A.

    1983-06-01

    Guidance is given for the content of documentation of computer codes which are used in support of a license application for high-level waste disposal. The guidelines cover the theoretical basis, programming, and instructions for use of the code.

  4. Series-Tuned High Efficiency RF-Power Amplifiers

    DEFF Research Database (Denmark)

    Vidkjær, Jens

    2008-01-01

    An approach to high efficiency RF-power amplifier design is presented. It simultaneously addresses efficiency optimization and peak voltage limitations when transistors are pushed towards their power limits.

  5. Progress of High Efficiency Centrifugal Compressor Simulations Using TURBO

    Science.gov (United States)

    Kulkarni, Sameer; Beach, Timothy A.

    2017-01-01

    Three-dimensional, time-accurate, and phase-lagged computational fluid dynamics (CFD) simulations of the High Efficiency Centrifugal Compressor (HECC) stage were generated using the TURBO solver. Changes to the TURBO Parallel Version 4 source code were made in order to properly model the no-slip boundary condition along the spinning hub region for centrifugal impellers. A startup procedure was developed to generate a converged flow field in TURBO. This procedure initialized computations on a coarsened mesh generated by the Turbomachinery Gridding System (TGS) and relied on a method of systematically increasing wheel speed and backpressure. Baseline design-speed TURBO results generally overpredicted total pressure ratio, adiabatic efficiency, and the choking flow rate of the HECC stage as compared with the design-intent CFD results of Code Leo. Including diffuser fillet geometry in the TURBO computation resulted in a 0.6 percent reduction in the choking flow rate and led to a better match with design-intent CFD. Diffuser fillets reduced annulus cross-sectional area but also reduced corner separation, and thus blockage, in the diffuser passage. It was found that the TURBO computations are somewhat insensitive to inlet total pressure changing from the TURBO default inlet pressure of 14.7 pounds per square inch (101.35 kilopascals) down to 11.0 pounds per square inch (75.83 kilopascals), the inlet pressure of the component test. Off-design tip clearance was modeled in TURBO in two computations: one in which the blade tip geometry was trimmed by 12 mils (0.3048 millimeters), and another in which the hub flow path was moved to reflect a 12-mil axial shift in the impeller hub, creating a step at the hub. The one-dimensional results of these two computations indicate non-negligible differences between the two modeling approaches.

  6. Efficient olfactory coding in the pheromone receptor neuron of a moth.

    Directory of Open Access Journals (Sweden)

    Lubomir Kostal

    2008-04-01

    The concept of coding efficiency holds that sensory neurons are adapted, through both evolutionary and developmental processes, to the statistical characteristics of their natural stimulus. Encouraged by the successful invocation of this principle to predict how neurons encode natural auditory and visual stimuli, we attempted its application to olfactory neurons. The pheromone receptor neuron of the male moth Antheraea polyphemus, for which quantitative properties of both the natural stimulus and the reception processes are available, was selected. We predicted several characteristics that the pheromone plume should possess under the hypothesis that the receptors perform optimally, i.e., transfer as much information on the stimulus per unit time as possible. Our results demonstrate that the statistical characteristics of the predicted stimulus, e.g., the probability distribution function of the stimulus concentration, the spectral density function of the stimulation course, and the intermittency, are in good agreement with those measured experimentally in the field. These results should stimulate further quantitative studies on the evolutionary adaptation of olfactory nervous systems to odorant plumes and on the plume characteristics that are most informative for the 'sniffer'. Both aspects are relevant to the design of olfactory sensors for odour-tracking robots.

  7. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources and enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.

  8. Energy Efficient Graphene Based High Performance Capacitors.

    Science.gov (United States)

    Bae, Joonwon; Kwon, Oh Seok; Lee, Chang-Soo

    2017-07-10

    Graphene (GRP) is an interesting class of nano-structured electronic materials for various cutting-edge applications. To date, extensive research activities have been performed on the investigation of diverse properties of GRP. The incorporation of this elegant material can be very lucrative in terms of practical applications in energy storage/conversion systems. Among those systems, high-performance electrochemical capacitors (ECs) have become popular due to the recent need for energy-efficient and portable devices. Therefore, in this article, the application of GRP for capacitors is described succinctly. In particular, a concise summary of previous research activities on GRP-based capacitors is provided. It was revealed that many secondary materials, such as polymers and metal oxides, have been introduced to improve performance. Diverse devices have also been combined with capacitors for better use. More importantly, recent patents related to the preparation and application of GRP-based capacitors are introduced briefly. This article can provide essential information for future study.

  9. Validation of SCALE code package on high performance neutron shields

    International Nuclear Information System (INIS)

    Bace, M.; Jecmenica, R.; Smuc, T.

    1999-01-01

    The shielding ability and other properties of new high-performance neutron shielding materials from the KRAFTON series have recently been published. We compared the published experimental and MCNP results for two materials of the KRAFTON series with our own calculations. Two control modules of the SCALE-4.4 code system were used, one based on one-dimensional radiation transport analysis (SAS1) and the other based on the three-dimensional Monte Carlo method (SAS3). The comparison of the calculated neutron dose equivalent rates shows good agreement between experimental and calculated results for the KRAFTON-N2 material. Our results indicate that the N2-M-N2 sandwich type is approximately 10% inferior as a neutron shield to the KRAFTON-N2 material. All values of neutron dose equivalent obtained by SAS1 are approximately 25% lower than the SAS3 results, which indicates the size of the discrepancies introduced by the one-dimensional geometry approximation. (author)

  10. Highly efficient silicon light emitting diode

    NARCIS (Netherlands)

    Le Minh, P.; Holleman, J.; Wallinga, Hans

    2002-01-01

    In this paper, we describe the fabrication, using standard silicon processing techniques, of silicon light-emitting diodes (LEDs) that efficiently emit photons with energy around the silicon bandgap. The improved efficiency has been explained by the spatial confinement of charge carriers due to a

  11. A novel high efficiency solar photovoltalic pump

    NARCIS (Netherlands)

    Diepens, J.F.L.; Smulders, P.T.; Vries, de D.A.

    1993-01-01

    The daily average overall efficiency of a solar pump system is not only influenced by the maximum efficiency of the components of the system, but just as much by the correct matching of the components to the local irradiation pattern. Normally centrifugal pumps are used in solar pump systems. The

  12. Simulation for photon detection in spectrometric system of high purity (HPGe) using MCNPX code

    International Nuclear Information System (INIS)

    Correa, Guilherme Jorge de Souza

    2013-01-01

    The Brazilian National Commission of Nuclear Energy defines parameters for the classification and management of radioactive waste in accordance with the activity of the materials. The efficiency of a detection system is crucial to determine the real activity of a radioactive source. Where possible, the system's calibration should be performed using a standard source. Unfortunately, there are only a few cases in which it can be done this way, considering the difficulty of obtaining appropriate standard sources for each type of measurement. Computer simulations can therefore be performed to assist in calculating the efficiency of the system and, consequently, to aid the classification of radioactive waste. This study aims to model a high-purity germanium (HPGe) detector with the MCNPX code, matching the computationally obtained spectral values to those obtained experimentally for the photopeak of 137Cs. The approach will be made through changes in the outer dead layer of the modeled germanium crystal. (author)

  13. High Order Tensor Formulation for Convolutional Sparse Coding

    KAUST Repository

    Bibi, Adel Aamer; Ghanem, Bernard

    2017-01-01

    Convolutional sparse coding (CSC) has gained attention for its successful role as a reconstruction and a classification tool in the computer vision and machine learning community. Current CSC methods can only reconstruct single-feature 2D images

  14. High efficiency targets for high gain inertial confinement fusion

    International Nuclear Information System (INIS)

    Gardner, J.H.; Bodner, S.E.

    1986-01-01

    Rocket efficiencies as high as 15% are possible using short wavelength lasers and moderately high aspect ratio pellet designs. These designs are made possible by two recent breakthroughs in physics constraints. First is the development of the Induced Spatial Incoherence (ISI) technique, which allows uniform illumination of the pellet and relaxes the constraint of thermal smoothing, permitting the use of short wavelength laser light. Second is the discovery that the Rayleigh-Taylor growth rate is considerably reduced at short laser wavelengths. By taking advantage of the reduced constraints imposed by nonuniform laser illumination and Rayleigh-Taylor instability, pellets using 1/4 micron laser light and initial aspect ratios of about 10 (with in-flight aspect ratios of about 150 to 200) may produce energy gains as high as 200 to 250.

  15. High power klystrons for efficient reliable high power amplifiers

    Science.gov (United States)

    Levin, M.

    1980-11-01

    This report covers the design of reliable high efficiency, high power klystrons which may be used in both existing and proposed troposcatter radio systems. High Power (10 kW) klystron designs were generated in C-band (4.4 GHz to 5.0 GHz), S-band (2.5 GHz to 2.7 GHz), and L-band or UHF frequencies (755 MHz to 985 MHz). The tubes were designed for power supply compatibility and use with a vapor/liquid phase heat exchanger. Four (4) S-band tubes were developed in the course of this program along with two (2) matching focusing solenoids and two (2) heat exchangers. These tubes use five (5) tuners with counters which are attached to the focusing solenoids. A reliability mathematical model of the tube and heat exchanger system was also generated.

  16. High efficiency quasi-monochromatic infrared emitter

    Energy Technology Data Exchange (ETDEWEB)

    Brucoli, Giovanni; Besbes, Mondher; Benisty, Henri, E-mail: henri.benisty@institutoptique.fr; Greffet, Jean-Jacques [Laboratoire Charles Fabry, UMR 8501, Institut d’Optique, CNRS, Université Paris-Sud 11, 2, Avenue Augustin Fresnel, 91127 Palaiseau Cedex (France); Bouchon, Patrick; Haïdar, Riad [Office National d’Études et de Recherches Aérospatiales, Chemin de la Hunière, 91761 Palaiseau (France)

    2014-02-24

    Incandescent radiation sources are widely used as mid-infrared emitters owing to the lack of alternative for compact and low cost sources. A drawback of miniature hot systems such as membranes is their low efficiency, e.g., for battery powered systems. For targeted narrow-band applications such as gas spectroscopy, the efficiency is even lower. In this paper, we introduce design rules valid for very generic membranes demonstrating that their energy efficiency for use as incandescent infrared sources can be increased by two orders of magnitude.

  17. Construction of Short-Length High-Rates LDPC Codes Using Difference Families

    Directory of Open Access Journals (Sweden)

    Deny Hamdani

    2010-10-01

    Low-density parity-check (LDPC) codes are linear block error-correcting codes defined by a sparse parity-check matrix. They are decoded using the message-passing algorithm and, in many cases, are capable of outperforming turbo codes. This paper presents a class of LDPC codes showing good performance with low encoding complexity. The codes are constructed using difference families from combinatorial design. The resulting code, which is designed to have short code length and high code rate, can be encoded with low complexity due to its quasi-cyclic structure, and performs well when it is iteratively decoded with the sum-product algorithm. These properties of LDPC codes are quite suitable for applications in future wireless local area networks.
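
    The quasi-cyclic construction mentioned in the abstract can be illustrated by assembling a parity-check matrix from circulant blocks whose first rows are given by the base blocks of a difference family. The (13, 3, 1) difference family below is a small textbook-style example chosen for illustration, not one from the paper:

    ```python
    import numpy as np

    def circulant(first_row):
        """Build a binary circulant matrix from its first row."""
        n = len(first_row)
        return np.array([np.roll(first_row, i) for i in range(n)])

    # The two base blocks {0, 1, 4} and {0, 2, 7} form a (13, 3, 1)
    # difference family over Z_13: their +/- differences cover every
    # nonzero residue exactly once.
    v = 13
    base_blocks = [{0, 1, 4}, {0, 2, 7}]

    # Each base block yields one circulant; concatenating them gives a
    # quasi-cyclic parity-check matrix H of size v x (v * number of blocks).
    circs = []
    for block in base_blocks:
        row = np.zeros(v, dtype=int)
        for pos in block:
            row[pos] = 1
        circs.append(circulant(row))
    H = np.hstack(circs)

    # Column weight equals the block size (3); row weight is 3 * 2 = 6.
    ```

    The circulant structure is what makes encoding cheap: shift registers suffice, with no dense matrix multiplication.
    
    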

  18. Energy efficiency of high-rise buildings

    Science.gov (United States)

    Zhigulina, Anna Yu.; Ponomarenko, Alla M.

    2018-03-01

    The article is devoted to the analysis of tendencies and advanced technologies in the field of energy supply and energy efficiency of tall buildings, and to the history of the emergence of the concept of "efficiency" and its current interpretation. The article also shows the difference in evaluation criteria between the leading rating systems LEED and BREEAM. The authors reviewed the latest technologies applied in the construction of energy-efficient buildings. A methodological approach to the design of tall buildings that takes energy efficiency into account needs to include primary energy saving; to seek the possibility of producing and accumulating alternative electric energy by converting energy from the sun and wind with the help of special technical devices; and to apply regenerative technologies.

  19. High Efficiency Refrigeration Process, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — It has been proposed by NASA JSC studies, that the most mass efficient (non-nuclear) method of Lunar habitat cooling is via photovoltaic (PV) direct vapor...

  20. Potential impacts of climate change on the built environment: ASHRAE climate zones, building codes and national energy efficiency

    Energy Technology Data Exchange (ETDEWEB)

    New, Joshua Ryan [ORNL; Kumar, Jitendra [ORNL; Hoffman, Forrest M. [ORNL

    2017-10-01

    Statement of the Problem: ASHRAE releases updates to 90.1, "Energy Standard for Buildings except Low-Rise Residential Buildings," every three years, resulting in a 3.7%-17.3% increase in energy efficiency for buildings with each release. This standard is adopted by or informs building codes in nations across the globe and is the national standard for the US, with individual states electing which release year of the standard they will enforce. These codes are built upon Standard 169, "Climatic Data for Building Design Standards," the latest 2017 release of which defines climate zones based on 8,118 weather stations throughout the world and data from the past 8-25 years. This data may not be indicative of the weather that new buildings built today will see during their upcoming 30-120 year lifespan. Methodology & Theoretical Orientation: Using more modern, high-resolution datasets from climate satellites, IPCC climate models (PCM and HadGCM), high performance computing resources (Titan), and new capabilities for clustering and optimization, the authors briefly analyzed different methods for redefining climate zones, using bottom-up analysis of multiple meteorological variables that subject-matter experts selected as being important to energy consumption, rather than the heating/cooling degree days currently used. Findings: We analyzed the accuracy of redefined climate zones compared to current climate zones, examined how the climate zones moved under different climate change scenarios, and quantified the accuracy of these methods at a local level and at a national scale for the US. Conclusion & Significance: There are likely to be significant annual, national energy and cost (billions USD) savings that could be realized by adjusting climate zones to take into account anticipated trends or scenarios in regional weather patterns.
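
    The bottom-up re-clustering described above can be sketched in miniature: group weather stations by meteorological features with k-means. The data here are synthetic and the toy loop stands in for the HPC-scale clustering the study actually performed:

    ```python
    import numpy as np

    # Synthetic per-station climate features (e.g. mean temperature,
    # humidity, solar radiation), standardized; three artificial regimes.
    rng = np.random.default_rng(0)
    stations = np.vstack([rng.normal(c, 0.3, size=(40, 3))
                          for c in (-1.0, 0.0, 1.0)])

    def kmeans(X, k, iters=50):
        """Minimal k-means: assign stations to zones, update centroids."""
        centroids = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            # Distance from every station to every candidate zone center.
            d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
            labels = d.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centroids[j] = X[labels == j].mean(axis=0)
        return labels, centroids

    labels, centroids = kmeans(stations, k=3)
    ```

    The interesting design question, as in the study, is which features go into the vectors: degree days alone, or the richer set of variables experts consider predictive of building energy use.
    
    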

  1. CANDU fuel cycle economic efficiency assessments using the IAEA-MESSAGE-V code

    International Nuclear Information System (INIS)

    Prodea, Iosif; Margeanu, Cristina Alice; Aioanei, Corina; Prisecaru, Ilie; Danila, Nicolae

    2007-01-01

    The main goal of the paper is to evaluate the electricity generation costs in a CANDU Nuclear Power Plant (NPP) using different nuclear fuel cycles. The IAEA MESSAGE code (Model for Energy Supply Strategy Alternatives and their General Environmental Impacts) is used to accomplish these assessments. This complex tool was supplied by the International Atomic Energy Agency (IAEA) in 2002 at the 'IAEA Regional Training Course on Development and Evaluation of Alternative Energy Strategies in Support of Sustainable Development' held at the Institute for Nuclear Research Pitesti. It is worth recalling that sustainable development requires satisfying the energy demand of present generations without compromising the ability of future generations to meet their own needs. Based on the latest public information, in the next 10-15 years four CANDU-6 based NPP units could be in operation in Romania. Two of them will have some enhancements not yet clearly specified. We therefore consider it necessary to investigate the possibility of enhancing the economic efficiency of the existing in-service CANDU-6 power reactors. The MESSAGE program can satisfy these requirements if appropriate input models are built. As mentioned in the dedicated literature, a major inherent feature of CANDU is its fuel cycle flexibility. With this in mind, several proposed CANDU fuel cycles are analyzed in the paper: Natural Uranium (NU), Slightly Enriched Uranium (SEU), and Recovered Uranium (RU) with and without reprocessing. Finally, based on optimization of the MESSAGE objective function, an economic hierarchy of CANDU fuel cycles is proposed. The authors used mainly public information on the different costs required by the analysis. (authors)

  2. Computationally Efficient Blind Code Synchronization for Asynchronous DS-CDMA Systems with Adaptive Antenna Arrays

    Directory of Open Access Journals (Sweden)

    Chia-Chang Hu

    2005-04-01

    A novel space-time adaptive near-far robust code-synchronization array detector for asynchronous DS-CDMA systems is developed in this paper. It has the same basic requirements as the conventional matched filter of an asynchronous DS-CDMA system. For real-time applicability, a computationally efficient architecture of the proposed detector is developed based on the concept of the multistage Wiener filter (MWF) of Goldstein and Reed. This multistage technique results in a self-synchronizing detection criterion that requires no inversion or eigendecomposition of a covariance matrix. As a consequence, this detector achieves a complexity that is only a linear function of the size of the antenna array (J), the rank of the MWF (M), the system processing gain (N), and the number of samples in a chip interval (S), that is, 𝒪(JMNS). The complexity of the equivalent detector based on the minimum mean-squared error (MMSE) criterion or on subspace-based eigenstructure analysis is of order 𝒪((JNS)³). Moreover, this multistage scheme provides rapid adaptive convergence under limited observation-data support. Simulations are conducted to evaluate the performance and convergence behavior of the proposed detector with the size J of the antenna array, the amount L of sample support, and the rank M of the MWF. The performance advantage of the proposed detector over other DS-CDMA detectors is investigated as well.
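
    The complexity gap the abstract reports can be made concrete with a back-of-envelope operation count. The parameter values below are illustrative choices, not taken from the paper:

    ```python
    # Compare the reported complexity orders: the MWF-based detector scales
    # as O(J*M*N*S), while an MMSE/eigendecomposition detector scales as
    # O((J*N*S)**3). Values: antennas J, MWF rank M, processing gain N,
    # samples per chip S (all illustrative).
    J, M, N, S = 4, 8, 31, 2

    mwf_ops = J * M * N * S          # linear in every parameter
    mmse_ops = (J * N * S) ** 3      # cubic in the joint dimension

    ratio = mmse_ops / mwf_ops       # speedup order implied by the paper
    ```

    Even at these modest sizes the cubic term dominates by several orders of magnitude, which is the point of the multistage architecture.
    
    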

  3. High Power High Efficiency Diode Laser Stack for Processing

    Science.gov (United States)

    Gu, Yuanyuan; Lu, Hui; Fu, Yueming; Cui, Yan

    2018-03-01

    High-power diode lasers based on GaAs semiconductor bars are well established as reliable and highly efficient laser sources. As the diode laser is simple in structure and small in size, with a long life expectancy and the advantage of low price, it is widely used in industrial processing, such as heat treating, welding, hardening, cladding and so on. Diode lasers also make practical applications possible because their rectangular beam patterns are suitable for making a fine bead with less power. At this power level, they have many important applications, such as surgery, welding of polymers, soldering, coatings and surface treatment of metals. But there are some applications which require much higher power and brightness, e.g. hardening, keyhole welding, cutting and metal welding, where the use of diode lasers has so far been limited, mainly due to their low performance. In addition, high-power diode lasers also have important applications in the military field, so all developed countries have attached great importance to high-power diode laser systems and their applications. In this paper we introduce the structure and the principle of the high-power diode stack.

  4. Determination of the dead layer and full-energy peak efficiency of an HPGe detector using the MCNP code and experimental results

    Directory of Open Access Journals (Sweden)

    M Moeinifar

    2017-02-01

    One important factor in using a high-purity germanium (HPGe) detector is its efficiency, which depends strongly on the geometry and absorption factors, so that when the configuration of the source-detector geometry is changed, the detector efficiency must be re-measured. The best way of determining the efficiency of a detector is by measuring standard sources. But considering that standard sources are hardly available and time-consuming to obtain, determining the efficiency by simulation, which gives sufficient accuracy in less time, is important. In this study, the dead layer thickness and the full-energy peak efficiency of an HPGe detector were obtained by Monte Carlo simulation using the MCNPX code. For this, we first measured gamma-ray spectra for different sources placed at various distances from the detector and stored the measured spectra. The measurements were then simulated under similar conditions. At first, the whole volume of germanium was regarded as active, and the calculated spectra were compared with the corresponding experimental spectra; the comparison showed considerable differences. By making small variations in the dead layer thickness of the detector (about a few hundredths of a millimeter) in the simulation program, we removed these differences, and in this way a dead layer of 0.57 mm was obtained for the detector. By incorporating this value for the dead layer in the simulation program, the full-energy peak efficiency of the detector was then obtained both by experiment and by simulation, for various sources at various distances from the detector, and the two methods showed good agreement. Using the MCNP code and modeling the exact measurement system, one can thus calculate the efficiency of an HPGe detector for various source-detector geometries with rather good accuracy by simulation.
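
    The experimental side of such a comparison rests on the standard full-energy peak efficiency formula: net peak counts divided by the number of photons the source emitted during the live time. The sketch below uses hypothetical measured values; the 137Cs emission probability of 0.851 for the 662 keV line is standard nuclear data, everything else is made up:

    ```python
    # Full-energy peak efficiency from a measured spectrum (hypothetical data).
    net_counts = 125_000       # background-subtracted counts in the photopeak
    live_time_s = 3600.0       # spectrum live time in seconds
    activity_bq = 50_000.0     # source activity, assumed known
    gamma_yield = 0.851        # emission probability of the 662 keV line of 137Cs

    # Photons emitted at this energy during the measurement...
    emitted = activity_bq * gamma_yield * live_time_s
    # ...of which the fraction detected in the peak is the efficiency.
    efficiency = net_counts / emitted
    ```

    The simulated counterpart of `efficiency` comes from the Monte Carlo pulse-height tally; tuning the modeled dead layer until the two agree is the procedure the abstract describes.
    
    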

  5. An efficient genetic algorithm for structural RNA pairwise alignment and its application to non-coding RNA discovery in yeast

    Directory of Open Access Journals (Sweden)

    Taneda Akito

    2008-12-01

    Background: Aligning RNA sequences with low sequence identity has been a challenging problem, since such a computation essentially needs an algorithm of high complexity to take structural conservation into account. Although many sophisticated algorithms for this purpose have been proposed to date, further improvement in efficiency is necessary to accelerate large-scale applications, including non-coding RNA (ncRNA) discovery. Results: We developed a new genetic algorithm, Cofolga2, for simultaneously computing pairwise RNA sequence alignment and consensus folding, and benchmarked it using BRAliBase 2.1. The benchmark results showed that our new algorithm is accurate and efficient in both time and memory usage. Then, combining it with an originally trained SVM, we applied the new algorithm to novel ncRNA discovery, comparing the S. cerevisiae genome with six related genomes in a pairwise manner. By focusing our search on the relatively short regions (50 bp to 2,000 bp) sandwiched by conserved sequences, we successfully predicted 714 intergenic and 1,311 sense or antisense ncRNA candidates, which were found in pairwise alignments with stable consensus secondary structure and low sequence identity (≤ 50%). By comparing with previous predictions, we found that > 92% of the candidates are novel. The estimated rate of false positives among the predicted candidates is 51%. Twenty-five percent of the intergenic candidates have support for expression in the cell, i.e., their genomic positions overlap those of experimentally determined transcripts in the literature. By manual inspection of the results, moreover, we obtained four multiple alignments with low sequence identity which reveal consensus structures shared by three species/sequences. Conclusion: The present method provides an efficient tool complementary to sequence-alignment-based ncRNA finders.

  6. High performance computer code for molecular dynamics simulations

    International Nuclear Information System (INIS)

    Levay, I.; Toekesi, K.

    2007-01-01

    Molecular Dynamics (MD) simulation is a widely used technique for modeling complicated physical phenomena. Since 2005 we have been developing an MD simulation code for PC computers, written in the C++ object-oriented programming language. The aim of our work is twofold: a) to develop a fast computer code for the study of the random walk of guest atoms in a Be crystal, and b) three-dimensional (3D) visualization of the particles' motion. We mimic both the motion of the guest atoms in the crystal (diffusion-type motion) and the motion of atoms in the crystal lattice (crystal deformation). Nowadays it is common to use graphics devices for intensive computational problems. There are several ways to exploit this extreme processing performance, but never before has it been so easy to program these devices. The CUDA (Compute Unified Device Architecture) introduced by the nVidia Corporation in 2007 is very useful for every processor-hungry application. A unified-architecture GPU includes 96-128 or more stream processors, so the raw calculation performance is 576(!) GFLOPS, ten times faster than the fastest dual-core CPU [Fig. 1]. Our improved MD simulation software uses this new technology, which speeds it up so that the code runs 10 times faster in the critical calculation segment. Although the GPU is a very powerful tool, it has a strongly parallel structure, which means that we have to create an algorithm that works on several processors without deadlock. Our code currently uses 256 threads, and shared and constant on-chip memory instead of global memory, which is 100 times slower. It is possible to implement the total algorithm on the GPU, so we do not need to download and upload the data in every iteration. For maximal throughput, every thread runs with the same instructions.
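
    The critical calculation segment of an MD code is the pairwise force loop, which is what gets offloaded to the GPU's stream processors. As a stand-in for such a kernel, the sketch below shows a vectorized Lennard-Jones force evaluation in NumPy, where array broadcasting plays the role of the parallel threads. This is an illustrative sketch, not the authors' code, and the potential parameters are arbitrary:

    ```python
    import numpy as np

    def lj_forces(pos, epsilon=1.0, sigma=1.0):
        """Lennard-Jones forces on every particle, computed pairwise."""
        diff = pos[:, None, :] - pos[None, :, :]   # pairwise displacements r_i - r_j
        r2 = (diff ** 2).sum(-1)                   # squared distances
        np.fill_diagonal(r2, np.inf)               # suppress self-interaction
        inv6 = (sigma ** 2 / r2) ** 3              # (sigma/r)^6
        # F_ij = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6) / r^2 * (r_i - r_j)
        fmag = 24.0 * epsilon * (2.0 * inv6 ** 2 - inv6) / r2
        return (fmag[:, :, None] * diff).sum(axis=1)

    # Two particles beyond the potential minimum: the force is attractive.
    pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
    forces = lj_forces(pos)
    ```

    Note that the forces sum to zero by Newton's third law, a cheap sanity check for any force kernel; in a real GPU version the pairwise terms would be accumulated per thread block in shared memory, as the abstract describes.
    
    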

  7. High Efficiency and Low Cost Thermal Energy Storage System

    Energy Technology Data Exchange (ETDEWEB)

    Sienicki, James J. [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Lv, Qiuping [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Moisseytsev, Anton [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Bucknor, Matthew [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division

    2017-09-29

    BgtL, LLC (BgtL) is focused on developing and commercializing its proprietary compact technology for processes in the energy sector. One such application is a compact high efficiency Thermal Energy Storage (TES) system that utilizes the heat of fusion through phase change between solid and liquid to store and release energy at high temperatures and incorporates state-of-the-art insulation to minimize heat dissipation. BgtL’s TES system would greatly improve the economics of existing nuclear and coal-fired power plants by allowing the power plant to store energy when power prices are low and sell power into the grid when prices are high. Compared to existing battery storage technology, BgtL’s novel thermal energy storage solution can be significantly less costly to acquire and maintain, does not produce any waste or environmental emissions, and does not deteriorate over time; it maintains constant efficiency and operates cleanly and safely. BgtL’s engineers are experienced in this field and are able to design and engineer such a system to a specific power plant’s requirements. BgtL also has a strong manufacturing partner to fabricate the system such that it qualifies for an ASME code stamp. BgtL’s vision is to be the leading provider of compact systems for various applications including energy storage. BgtL requests that all technical information about the TES designs be protected as proprietary information. To honor that request, only non-proprietary summaries are included in this report.

  8. Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †

    Directory of Open Access Journals (Sweden)

    Muhammad Harist Murdani

    2018-03-01

    Full Text Available In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes (Ad-Hoc) and neighborhood proximity (Top-K). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space.
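
    The paper's exact metric is not given in the record; the sketch below only illustrates the general idea of a weighted sum combining a centroid-distance term with an inverted road-connectivity term, then using it for Top-K proximity. The ZIP codes, distances, link counts, and weights are all made up for the example:

    ```python
    # Hypothetical per-pair data: (centroid distance in km, shared road links)
    pairs = {
        ("60601", "60602"): (1.2, 8),
        ("60601", "60614"): (6.5, 3),
        ("60601", "60290"): (25.0, 0),
    }

    def combined_distance(centroid_km, road_links, w=0.7, d_max=30.0, l_max=10):
        """Weighted sum of a normalized centroid distance and an inverted
        road-connectivity term (more shared road links -> smaller distance)."""
        d_norm = min(centroid_km, d_max) / d_max
        road_term = 1.0 - min(road_links, l_max) / l_max
        return w * d_norm + (1.0 - w) * road_term

    def top_k(source, pairs, k=2):
        """Top-K proximity: the k ZIP codes judged closest to `source`."""
        scored = [(combined_distance(*v), a if b == source else b)
                  for (a, b), v in pairs.items() if source in (a, b)]
        return [z for _, z in sorted(scored)[:k]]
    ```

    With this toy data, a nearby ZIP code with many connecting roads ranks closer than a geographically similar one with few connections, which is the intuition behind combining the two terms.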

  9. Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †.

    Science.gov (United States)

    Murdani, Muhammad Harist; Kwon, Joonho; Choi, Yoon-Ho; Hong, Bonghee

    2018-03-24

    In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes (Ad-Hoc) and neighborhood proximity (Top-K). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space.

  10. Designing high efficient solar powered lighting systems

    DEFF Research Database (Denmark)

    Poulsen, Peter Behrensdorff; Thorsteinsson, Sune; Lindén, Johannes

    2016-01-01

    Some major challenges in the development of L2L products are the lack of efficient converter electronics, modelling tools for dimensioning, and characterization facilities to support the successful development of the products. We report the development of 2 Three-Port-Converters respec...

  11. A structural modification of the two dimensional fuel behaviour analysis code FEMAXI-III with high-speed vectorized operation

    International Nuclear Information System (INIS)

    Yanagisawa, Kazuaki; Ishiguro, Misako; Yamazaki, Takashi; Tokunaga, Yasuo.

    1985-02-01

    Although the two-dimensional fuel behaviour analysis code FEMAXI-III was developed by JAERI in the form of an optimized scalar computer code, the call for more efficient code usage arising from recent trends such as high burn-up and load-follow operation has pushed the code into a further modification stage. A principal aim of the modification is to transform the already implemented scalar-type subroutines into vectorized forms so that the programme structure runs efficiently on high-speed vector computers. This structural modification has been completed successfully. The two benchmark tests subsequently performed to examine the effect of the modification lead to the following concluding remarks: (1) In the first benchmark test, three comparatively high-burnup fuel rods irradiated under HBWR, BWR, and PWR conditions were prepared. In all cases, the net computing time consumed by the vectorized FEMAXI is approximately 50% less than that consumed by the original code. (2) In the second benchmark test, a total of 26 PWR fuel rods irradiated in the burn-up range of 13-30 MWd/kgU and subsequently power-ramped in the R2 reactor, Sweden, were prepared. In this case the code was used to construct an envelope of the PCI-failure threshold through 26 code runs. In reaching the same conclusion, the vectorized FEMAXI-III consumed a net computing time of 18 min, while the original FEMAXI-III consumed 36 min. (3) The gains from this structural modification are found to be attributable chiefly to savings in the net computing time of the mechanical calculation in the vectorized FEMAXI-III code. (author)

  12. Efficient preparation of large-block-code ancilla states for fault-tolerant quantum computation

    Science.gov (United States)

    Zheng, Yi-Cong; Lai, Ching-Yi; Brun, Todd A.

    2018-03-01

    Fault-tolerant quantum computation (FTQC) schemes that use multiqubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement for a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data-code blocks, which are generally difficult to prepare if the code size is large. Previously, we proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes by classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; however, in reality, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be boosted from O(t^{-2}) to O(1) in practice for an [[n, k, d = 2t+1

  13. High performance reconciliation for continuous-variable quantum key distribution with LDPC code

    Science.gov (United States)

    Lin, Dakai; Huang, Duan; Huang, Peng; Peng, Jinye; Zeng, Guihua

    2015-03-01

    Reconciliation is a significant procedure in a continuous-variable quantum key distribution (CV-QKD) system. It is employed to extract a secure secret key from the string resulting from transmission through the quantum channel between two users. However, the efficiency and speed of previous reconciliation algorithms are low. These problems limit the secure communication distance and the secret key rate of CV-QKD systems. In this paper, we propose a high-speed reconciliation algorithm employing a well-structured decoding scheme based on low-density parity-check (LDPC) codes. The complexity of the proposed algorithm is considerably reduced. By using a graphics processing unit (GPU), our method can reach a reconciliation speed of 25 Mb/s for a CV-QKD system, which is currently the highest level and paves the way to high-speed CV-QKD.
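
    The paper's decoder is not reproduced in the record. As a toy illustration of syndrome-based reconciliation, the sketch below runs a hard-decision bit-flipping decoder over a hand-made parity-check matrix: one party publishes the syndrome of her bits, and the other flips the bit involved in the most unsatisfied checks until his syndrome matches. Real CV-QKD reconciliation uses soft-decision belief propagation on far larger LDPC codes; everything here is an assumption for demonstration:

    ```python
    import numpy as np

    # Toy parity-check matrix (3 checks over 6 bits)
    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])

    def reconcile(bob_bits, alice_syndrome, max_iters=20):
        """Hard-decision bit flipping: repeatedly flip the bit involved in
        the most unsatisfied checks until the syndromes agree."""
        bits = bob_bits.copy()
        for _ in range(max_iters):
            mismatch = (H @ bits % 2) ^ alice_syndrome  # unsatisfied checks
            if not mismatch.any():
                return bits
            votes = H.T @ mismatch                      # per-bit unsatisfied count
            bits[np.argmax(votes)] ^= 1                 # flip the worst offender
        return bits
    ```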

  14. A good performance watermarking LDPC code used in high-speed optical fiber communication system

    Science.gov (United States)

    Zhang, Wenbo; Li, Chao; Zhang, Xiaoguang; Xi, Lixia; Tang, Xianfeng; He, Wenxue

    2015-07-01

    A watermarking LDPC code, a strategy designed to improve the performance of the traditional LDPC code, is introduced. By inserting pre-defined watermarking bits into the original LDPC code, we can obtain a more accurate estimate of the noise level in the fiber channel. We then use this estimate to modify the probability distribution function (PDF) used in the initialization of the belief propagation (BP) decoding algorithm. The algorithm was tested in a 128 Gb/s PDM-DQPSK optical communication system, and the results showed that the watermarking LDPC code has better tolerance to polarization mode dispersion (PMD) and nonlinearity than the traditional LDPC code. Moreover, at the cost of about 2.4% additional redundancy for the watermarking bits, the decoding efficiency of the watermarking LDPC code is about twice that of the traditional one.
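
    The key idea, using known watermark bits to estimate channel noise and so sharpen the initial BP probabilities, can be illustrated with a minimal sketch. The BPSK mapping, Gaussian-channel LLR formula, and function names are assumptions for the example, not details from the paper:

    ```python
    def estimate_noise(received, watermark_bits):
        """Estimate channel noise variance from received samples at known
        watermark positions (assumed BPSK mapping: bit 0 -> +1, bit 1 -> -1)."""
        expected = [1.0 if b == 0 else -1.0 for b in watermark_bits]
        errs = [r - e for r, e in zip(received, expected)]
        return sum(e * e for e in errs) / len(errs)

    def initial_llr(sample, sigma2):
        """Channel log-likelihood ratio used to initialize BP decoding
        for a Gaussian channel with variance sigma2."""
        return 2.0 * sample / sigma2
    ```

    A better variance estimate makes the initial LLR magnitudes match the true channel, which is what improves the decoder's tolerance to impairments.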

  15. Energy efficiency indicators for high electric-load buildings

    Energy Technology Data Exchange (ETDEWEB)

    Aebischer, Bernard; Balmer, Markus A.; Kinney, Satkartar; Le Strat, Pascale; Shibata, Yoshiaki; Varone, Frederic

    2003-06-01

    Energy per unit of floor area is not an adequate indicator for energy efficiency in high electric-load buildings. For two activities, restaurants and computer centres, alternative indicators for energy efficiency are discussed.

  16. Efficient and Highly Aldehyde Selective Wacker Oxidation

    KAUST Repository

    Teo, Peili; Wickens, Zachary K.; Dong, Guangbin; Grubbs, Robert H.

    2012-01-01

    A method for efficient and aldehyde-selective Wacker oxidation of aryl-substituted olefins using PdCl2(MeCN)2, 1,4-benzoquinone, and t-BuOH in air is described. Up to a 96% yield of aldehyde can be obtained, and up to 99% selectivity can be achieved with styrene-related substrates. © 2012 American Chemical Society.

  17. Efficient and Highly Aldehyde Selective Wacker Oxidation

    KAUST Repository

    Teo, Peili

    2012-07-06

    A method for efficient and aldehyde-selective Wacker oxidation of aryl-substituted olefins using PdCl2(MeCN)2, 1,4-benzoquinone, and t-BuOH in air is described. Up to a 96% yield of aldehyde can be obtained, and up to 99% selectivity can be achieved with styrene-related substrates. © 2012 American Chemical Society.

  18. An Efficient Platform for the Automatic Extraction of Patterns in Native Code

    Directory of Open Access Journals (Sweden)

    Javier Escalada

    2017-01-01

    Full Text Available Different software tools, such as decompilers, code quality analyzers, recognizers of packed executable files, authorship analyzers, and malware detectors, search for patterns in binary code. The use of machine learning algorithms, trained with programs taken from the huge number of applications in the existing open source code repositories, allows finding patterns not detected with the manual approach. To this end, we have created a versatile platform for the automatic extraction of patterns from native code, capable of processing big binary files. Its implementation has been parallelized, providing important runtime performance benefits for multicore architectures. Compared to single-processor execution, the average performance improvement obtained with the best configuration is a factor of 3.5, against a maximum theoretical gain of a factor of 4.
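
    The platform itself is not shown in the record; the sketch below only illustrates the general parallelization pattern it describes: split a large binary into chunks, count candidate byte patterns in each chunk in a separate process, and merge the counts. The 2-byte pattern choice, chunking scheme, and names are assumptions (real pattern extraction would operate on disassembled instructions):

    ```python
    import multiprocessing as mp
    from collections import Counter

    def count_pairs(chunk):
        """Count 2-byte patterns in one slice of a binary."""
        return Counter(bytes(chunk[i:i + 2]) for i in range(len(chunk) - 1))

    def extract_patterns(blob, workers=4):
        """Split a binary into chunks (1-byte overlap, so no pair is lost at
        a boundary) and merge per-chunk counts across worker processes."""
        n = max(1, len(blob) // workers)
        chunks = [blob[i:i + n + 1] for i in range(0, len(blob), n)]
        total = Counter()
        with mp.get_context("fork").Pool(workers) as pool:
            for part in pool.map(count_pairs, chunks):
                total.update(part)
        return total
    ```

    The one-byte overlap between chunks is the detail that keeps the parallel counts identical to a sequential scan.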

  19. Evaluation of Extended CCSDS Reed-Solomon Codes for Bandwidth efficiency

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Justesen, Jørn; Larsen, Knud J.

    1999-01-01

    The present CCSDS recommendation for Telemetry Channel Coding was originally written around twenty years ago. The appearance of the Turbo coding scheme has made the inclusion of this powerful scheme desirable, and thus it becomes natural also to perform a major rewriting of the other part of the r....... Finally, we present advantages and disadvantages of placing the frame synchronizer before and after the Viterbi decoder, and we suggest an option where the attached sync marker is not convolutionally encoded....

  20. Efficiency of rate and latency coding with respect to metabolic cost and time

    Czech Academy of Sciences Publication Activity Database

    Leváková, Marie

    2017-01-01

    Roč. 161, Nov 2017 (2017), s. 31-40 ISSN 0303-2647 R&D Projects: GA ČR(CZ) GA15-08066S Institutional support: RVO:67985823 Keywords : rate coding * temporal coding * metabolic cost * Fisher information Subject RIV: BD - Theory of Information OBOR OECD: Biology (theoretical, mathematical, thermal, cryobiology, biological rhythm), Evolutionary biology Impact factor: 1.652, year: 2016

  1. Adaptive variable-length coding for efficient compression of spacecraft television data.

    Science.gov (United States)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
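
    The adaptation step described above, picking one of three codes for each block of 21 pixels by whichever encodes the block most cheaply, can be sketched with Rice codes standing in for the system's concatenated codes. The specific code family, parameters, and 2-bit code identifier below are illustrative assumptions, not the paper's actual code set:

    ```python
    def rice_len(value, k):
        """Bit length of a non-negative value under a Rice code with
        parameter k: unary quotient + stop bit + k remainder bits."""
        return (value >> k) + 1 + k

    def encode_block(samples, ks=(0, 1, 2), id_bits=2):
        """Pick the cheapest of three candidate codes for one block,
        the way the Basic Compressor selects a code per 21-pixel block.
        Returns (chosen parameter, total bits including the code id)."""
        costs = {k: sum(rice_len(s, k) for s in samples) for k in ks}
        best = min(costs, key=costs.get)
        return best, id_bits + costs[best]
    ```

    Because the selection is recomputed per block, the coder tracks rapid changes in source statistics at the cost of only the small per-block identifier.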

  2. High Order Tensor Formulation for Convolutional Sparse Coding

    KAUST Repository

    Bibi, Adel Aamer

    2017-12-25

    Convolutional sparse coding (CSC) has gained attention for its successful role as a reconstruction and a classification tool in the computer vision and machine learning community. Current CSC methods can only reconstruct single-feature 2D images independently. However, learning multidimensional dictionaries and sparse codes for the reconstruction of multi-dimensional data is very important, as it examines correlations among all the data jointly. This provides more capacity for the learned dictionaries to better reconstruct data. In this paper, we propose a generic and novel formulation for the CSC problem that can handle an arbitrary order tensor of data. Backed by experimental results, our proposed formulation can not only tackle applications that are not possible with standard CSC solvers, including colored video reconstruction (5D tensors), but it also performs favorably in reconstruction with much fewer parameters as compared to naive extensions of standard CSC to multiple features/channels.

  3. MICROX-2: an improved two-region flux spectrum code for the efficient calculation of group cross sections

    International Nuclear Information System (INIS)

    Mathews, D.; Koch, P.

    1979-12-01

    The MICROX-2 code is an improved version of the MICROX code. The improvements allow MICROX-2 to be used for the efficient and rigorous preparation of broad group neutron cross sections for poorly moderated systems such as fast breeder reactors in addition to the well moderated thermal reactors for which MICROX was designed. MICROX-2 is an integral transport theory code which solves the neutron slowing down and thermalization equations on a detailed energy grid for two-region lattice cells. The fluxes in the two regions are coupled by transport corrected collision probabilities. The inner region may include two different types of grains (particles). Neutron leakage effects are treated by performing B1 slowing-down and P0 plus DB2 thermalization calculations in each region. Cell-averaged diffusion coefficients are prepared with the Benoist cell homogenization prescription.

  4. High efficiency nebulization for helium inductively coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Jorabchi, Kaveh; McCormick, Ryan; Levine, Jonathan A.; Liu Huiying; Nam, S.-H.; Montaser, Akbar

    2006-01-01

    A pneumatically-driven, high efficiency nebulizer is explored for helium inductively coupled plasma mass spectrometry. The aerosol characteristics and analyte transport efficiencies of the high efficiency nebulizer for nebulization with helium are measured and compared to the results obtained with argon. Analytical performance indices of the helium inductively coupled plasma mass spectrometry are evaluated in terms of detection limits and precision. The helium inductively coupled plasma mass spectrometry detection limits obtained with the high efficiency nebulizer at 200 μL/min are higher than those achieved with the ultrasonic nebulizer consuming 2 mL/min solution; however, precision is generally better with the high efficiency nebulizer (1-4% vs. 3-8% with the ultrasonic nebulizer). Detection limits with the high efficiency nebulizer at a 200 μL/min solution uptake rate approach those of the ultrasonic nebulizer upon efficient desolvation with a heated spray chamber followed by a Peltier-cooled multipass condenser.

  5. Recent Advances in High Efficiency Solar Cells

    Institute of Scientific and Technical Information of China (English)

    Yoshio Ohshita; Hidetoshi Suzuki; Kenichi Nishimura; Masafumi Yamaguchi

    2007-01-01

    1 Results The conversion efficiency of sunlight to electricity is limited to around 25% when single-junction solar cells are used. In single-junction cells, the major energy losses arise from spectrum mismatch. When photons excite carriers with energy well in excess of the bandgap, the excess energy is converted to heat by rapid thermalization. On the other hand, light with energy lower than the bandgap cannot be absorbed by the semiconductor, resulting in losses. One way...

  6. High efficiency cyclotron trap assisted positron moderator

    OpenAIRE

    Gerchow, L.; Cooke, D. A.; Braccini, S.; Döbeli, M.; Kirch, K.; Köster, U.; Müller, A.; Van Der Meulen, N. P.; Vermeulen, C.; Rubbia, A.; Crivelli, P.

    2017-01-01

    We report the realisation of a cyclotron-trap-assisted positron tungsten moderator for the conversion of positrons with a broad keV to few-MeV energy spectrum into a mono-energetic eV beam with an efficiency of 1.8(2)%, defined as the ratio of the slow positrons to the $\beta^+$ activity of the radioactive source. This is an improvement of almost two orders of magnitude over the state of the art of tungsten moderators. The simulation validated with this measurement suggests that usi...

  7. DCHAIN-SP 2001: High energy particle induced radioactivity calculation code

    Energy Technology Data Exchange (ETDEWEB)

    Kai, Tetsuya; Maekawa, Fujio; Kasugai, Yoshimi; Takada, Hiroshi; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kosako, Kazuaki [Sumitomo Atomic Energy Industries, Ltd., Tokyo (Japan)

    2001-03-01

    To contribute to safety design calculations for induced radioactivities in the JAERI/KEK high-intensity proton accelerator project facilities, DCHAIN-SP, which calculates high-energy-particle-induced radioactivity, has been updated to DCHAIN-SP 2001. The following three items were improved: (1) Fission yield data are included to apply the code to experimental facility design for nuclear transmutation of long-lived radioactive waste where fissionable materials are treated. (2) Activation cross section data below 20 MeV are revised. In particular, attention is paid to cross section data of materials that have a close relation to the facilities, i.e., mercury, lead and bismuth, and to tritium production cross sections, which are important in terms of facility safety. (3) The user interface for input/output data has been refined to perform calculations more efficiently than in the previous version. Information needed for use of the code is attached in Appendices: the DCHAIN-SP 2001 manual, the procedures for installation and execution of DCHAIN-SP, and sample problems. (author)

  8. A high-efficiency electromechanical battery

    Science.gov (United States)

    Post, Richard F.; Fowler, T. K.; Post, Stephen F.

    1993-03-01

    In our society there is a growing need for efficient cost-effective means for storing electrical energy. The electric auto is a prime example. Storage systems for the electric utilities, and for wind or solar power, are other examples. While electrochemical cells could in principle supply these needs, the existing E-C batteries have well-known limitations. This article addresses an alternative, the electromechanical battery (EMB). An EMB is a modular unit consisting of an evacuated housing containing a fiber-composite rotor. The rotor is supported by magnetic bearings and contains an integrally mounted permanent magnet array. This article addresses design issues for EMBs with rotors made up of nested cylinders. Issues addressed include rotational stability, stress distributions, generator/motor power and efficiency, power conversion, and cost. It is concluded that the use of EMBs in electric autos could result in a fivefold reduction (relative to the IC engine) in the primary energy input required for urban driving, with a concomitant major positive impact on our economy and on air pollution.

  9. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Science.gov (United States)

    Moon, Hongsik

    changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.

  10. A Novel High Efficiency Fractal Multiview Video Codec

    Directory of Open Access Journals (Sweden)

    Shiping Zhu

    2015-01-01

    Full Text Available Multiview video, one of the main types of three-dimensional (3D) video signals, captured by a set of video cameras from various viewpoints, has attracted much interest recently. Data compression for multiview video has become a major issue. In this paper, a novel high efficiency fractal multiview video codec is proposed. First, an intraframe algorithm based on the H.264/AVC intraprediction modes and a combining fractal and motion compensation (CFMC) algorithm, in which range blocks are predicted from domain blocks in the previously decoded frame using translational motion with a gray-value transformation, is proposed for compressing the anchor-viewpoint video. A temporal-spatial prediction structure and a fast disparity estimation algorithm exploiting parallax distribution constraints are then designed to compress the multiview video data. The proposed fractal multiview video codec can adequately exploit temporal and spatial correlations. Experimental results show that it obtains about a 0.36 dB increase in decoding quality and a 36.21% decrease in encoding bitrate compared with JMVC8.5, while saving 95.71% of the encoding time. Rate-distortion comparisons with other multiview video coding methods also demonstrate the superiority of the proposed scheme.

  11. Increasing the efficiency of the TOUGH code for running large-scale problems in nuclear waste isolation

    International Nuclear Information System (INIS)

    Nitao, J.J.

    1990-08-01

    The TOUGH code developed at Lawrence Berkeley Laboratory (LBL) is being extensively used to numerically simulate the thermal and hydrologic environment around nuclear waste packages in the unsaturated zone for the Yucca Mountain Project. At the Lawrence Livermore National Laboratory (LLNL) we have rewritten approximately 80 percent of the TOUGH code to increase its speed and incorporate new options. The geometry of many problems requires large numbers of computational elements in order to realistically model detailed physical phenomena, and, as a result, large amounts of computer time are needed. In order to increase the speed of the code we have incorporated fast linear equation solvers, vectorization of substantial portions of the code, improved automatic time stepping, and implementation of table look-up for the steam table properties. These enhancements have increased the speed of the code for typical problems by a factor of 20 on the Cray 2 computer. In addition to the increase in computational efficiency we have added several options: vapor pressure lowering; equivalent continuum treatment of fractures; energy and material volumetric, mass and flux accounting; and Stefan-Boltzmann radiative heat transfer. 5 refs
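
    One of the speedups listed, table look-up for steam properties, trades a costly correlation evaluation for a precomputed grid plus cheap linear interpolation. The sketch below illustrates the pattern only; the saturation-pressure curve, grid, and names are invented stand-ins for real steam tables, not anything from the TOUGH code:

    ```python
    import numpy as np

    # Hypothetical saturation-pressure curve standing in for real steam tables.
    T_grid = np.linspace(300.0, 600.0, 64)             # temperature grid, K
    P_grid = 1e5 * np.exp(0.015 * (T_grid - 373.0))    # illustrative pressures, Pa

    def psat_lookup(T):
        """Table look-up with linear interpolation: built once, then far
        cheaper per call than re-evaluating the full correlation."""
        return np.interp(T, T_grid, P_grid)
    ```

    The accuracy/speed trade-off is controlled by the grid density: a finer grid shrinks interpolation error at the cost of a one-time setup and slightly more memory.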

  12. Non-tables look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    Science.gov (United States)

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

    In general, context-based adaptive variable length coding (CAVLC) decoding in H.264/AVC standard requires frequent access to the unstructured variable length coding tables (VLCTs) and significant memory accesses are consumed. Heavy memory accesses will cause high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding by using a program instead of all the VLCTs. The decoded codeword from VLCTs can be obtained without any table look-up and memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm shows a better performance compared with conventional CAVLC decoding, such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
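
    The proposed program-in-place-of-tables method is not detailed in the record. As a familiar illustration of decoding a variable-length code arithmetically rather than by table look-up, the sketch below decodes H.264's unsigned Exp-Golomb code from a leading-zero count (CAVLC's actual VLCTs are different and context-dependent; this example only shows the table-free principle):

    ```python
    def decode_ue(bits, pos=0):
        """Decode one unsigned Exp-Golomb codeword from a bit list without
        any look-up table: count leading zeros, then read that many more
        bits plus one as a binary number and subtract 1.
        Returns (value, next bit position)."""
        zeros = 0
        while bits[pos + zeros] == 0:
            zeros += 1
        value = 0
        for b in bits[pos + zeros: pos + 2 * zeros + 1]:
            value = (value << 1) | b
        return value - 1, pos + 2 * zeros + 1
    ```

    Since every step is pure arithmetic on the bitstream, no table memory is touched, which is the source of the memory-access savings the abstract reports.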

  13. Development of high efficiency neutron detectors

    International Nuclear Information System (INIS)

    Pickrell, M.M.; Menlove, H.O.

    1993-01-01

    The authors have designed a novel neutron detector system using conventional 3 He detector tubes and composites of polyethylene and graphite. At this time the design consists entirely of MCNP simulations of different detector configurations and materials. These detectors are applicable to low-level passive and active neutron assay systems such as the passive add-a-source and the 252 Cf shuffler. Monte Carlo simulations of these neutron detector designs achieved efficiencies of over 35% for assay chambers that can accommodate 55-gal. drums. Only slight increases in the number of detector tubes and helium pressure are required. The detectors also have reduced die-away times. Potential applications are coincident and multiplicity neutron counting for waste disposal and safeguards. The authors will present the general design philosophy, underlying physics, calculation mechanics, and results

  14. High efficient white organic light emitting diodes

    Energy Technology Data Exchange (ETDEWEB)

    Seidel, Stefan; Krause, Ralf [Department of Materials Science VI, University of Erlangen-Nuremberg (Germany); Siemens AG, CT MM 1, Erlangen (Germany); Kozlowski, Fryderyk; Schmid, Guenter; Hunze, Arvid [Siemens AG, CT MM 1, Erlangen (Germany); Winnacker, Albrecht [Department of Materials Science VI, University of Erlangen-Nuremberg (Germany)

    2007-07-01

    Due to the rapid progress of recent years, the performance of organic light emitting diodes (OLEDs) has reached a level where general lighting presents a most interesting application target. We demonstrate how the color coordinates of the emission spectrum can be adjusted using a combinatorial evaporation tool to lie on the desired black body curve, representing cold and warm white respectively. The evaluation includes phosphorescent and fluorescent dye approaches to optimize lifetime and efficiency simultaneously. Detailed results are presented with respect to variation of layer thicknesses and dopant concentrations of each layer within the OLED stack. The most promising approach combines phosphorescent red and green dyes with a fluorescent blue one, as blue phosphorescent dopants are not yet stable enough to achieve long lifetimes.

  15. High-Efficient Circuits for Ternary Addition

    Directory of Open Access Journals (Sweden)

    Reza Faghih Mirzaee

    2014-01-01

    Full Text Available New ternary adders, the fundamental components of ternary addition, are presented in this paper. They are based on a logic style that mostly generates binary signals, so static power dissipation reaches its minimum extent. Extensive analyses are carried out to examine how efficient the new designs are. For instance, the ternary ripple adder constructed from the proposed ternary half and full adders consumes 2.33 μW less power than one implemented with the previous adder cells, and it is almost twice as fast. Owing to their unique characteristics for ternary circuitry, carbon nanotube field-effect transistors are used to build the novel circuits, which are entirely suitable for practical applications.
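
    The circuits themselves are transistor-level designs, but the arithmetic they implement is easy to state. The sketch below is a behavioral model of a ternary half adder and a ripple chain over base-3 digits, useful as a reference for checking such circuits (the model is my own, not from the paper):

    ```python
    def ternary_half_adder(a, b):
        """Sum and carry of two ternary digits (each 0, 1, or 2)."""
        s = a + b
        return s % 3, s // 3

    def ternary_ripple_add(x, y):
        """Ripple addition of two equal-length little-endian ternary
        digit lists, propagating the carry digit by digit."""
        out, carry = [], 0
        for a, b in zip(x, y):
            s = a + b + carry
            out.append(s % 3)
            carry = s // 3
        if carry:
            out.append(carry)
        return out
    ```

    For example, 2 + 2 in one trit position gives sum digit 1 with carry 1, since 4 = 1·3 + 1.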

  16. A high-efficiency superconductor distributed amplifier

    Energy Technology Data Exchange (ETDEWEB)

    Herr, Q P, E-mail: quentin.herr@ngc.co [Northrop Grumman Corporation, 7323 Aviation Boulevard, Baltimore, MD 21240 (United States)

    2010-02-15

    A superconductor output amplifier that converts single-flux-quantum signals to a non-return-to-zero pattern is reported using a twelve-stage distributed amplifier configuration. The output amplitude is measured to be 1.75 mV over a wide bias current range of ±12%. The bit error rate is measured using a Delta-Sigma data pattern to be less than 1×10⁻⁹ at 10 Gb s⁻¹ per channel. Analysis of the eye diagram suggests that the actual bit error rate may be much lower. The amplifier has power efficiency of 12% neglecting the termination resistor, which may be eliminated from the circuit with a small modification. (rapid communication)

  17. Wide-Range Motion Estimation Architecture with Dual Search Windows for High Resolution Video Coding

    Science.gov (United States)

    Dung, Lan-Rong; Lin, Meng-Chun

    This paper presents a memory-efficient motion estimation (ME) technique for high-resolution video compression. The main objective is to reduce external memory access, especially when local memory resources are limited; reducing memory accesses in turn saves significant power. The key to reducing memory accesses is a center-biased algorithm that performs the motion vector (MV) search with the minimum amount of search data. To improve data reusability, the proposed dual-search-windowing (DSW) approach uses a secondary search window as an option, per searching necessity. By doing so, the loading of search windows is alleviated, and hence the required external memory bandwidth is reduced. The proposed techniques can save up to 81% of external memory bandwidth and require only 135 MBytes/s, while the quality degradation is less than 0.2 dB for 720p HDTV clips coded at 8 Mbits/s.
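As a rough illustration of the center-biased idea, the sketch below implements a small-diamond search that starts from a predicted center and only examines neighbouring candidates until no improvement is found. The function names and the SAD cost are generic assumptions, not taken from the paper:

```python
import numpy as np

def sad(block, ref, y, x):
    """Sum of absolute differences between a block and a reference window."""
    h, w = block.shape
    return np.abs(block - ref[y:y + h, x:x + w]).sum()

def small_diamond_search(block, ref, y0, x0, max_steps=16):
    """Center-biased search: start at the predicted center and greedily move
    to the best small-diamond neighbour until the center itself wins."""
    best, best_cost = (y0, x0), sad(block, ref, y0, x0)
    for _ in range(max_steps):
        moved = False
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            y, x = best[0] + dy, best[1] + dx
            if (0 <= y <= ref.shape[0] - block.shape[0]
                    and 0 <= x <= ref.shape[1] - block.shape[1]):
                cost = sad(block, ref, y, x)
                if cost < best_cost:
                    best_cost, best, moved = cost, (y, x), True
        if not moved:  # center is the local minimum: stop early
            break
    return best, best_cost
```

Because most real motion vectors cluster near the predictor, such a search touches far fewer reference pixels than an exhaustive full search, which is exactly what cuts the external memory traffic.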

  18. The Simple Video Coder: A free tool for efficiently coding social video data.

    Science.gov (United States)

    Barto, Daniel; Bird, Clark W; Hamilton, Derek A; Fink, Brandi C

    2017-08-01

    Videotaping of experimental sessions is a common practice across many disciplines of psychology, ranging from clinical therapy, to developmental science, to animal research. Audio-visual data are a rich source of information that can be easily recorded; however, analysis of the recordings presents a major obstacle to project completion. Coding behavior is time-consuming and often requires ad-hoc training of a student coder. In addition, existing software is either prohibitively expensive or cumbersome, which leaves researchers with inadequate tools to quickly process video data. We offer the Simple Video Coder: free, open-source software for behavior coding that is flexible in accommodating different experimental designs, is intuitive for students to use, and produces outcome measures of event timing, frequency, and duration. Finally, the software also offers extraction tools to splice video into coded segments suitable for training future human coders or for use as input for pattern classification algorithms.

  19. An efficient simulation method of a cyclotron sector-focusing magnet using 2D Poisson code

    Energy Technology Data Exchange (ETDEWEB)

    Gad Elmowla, Khaled Mohamed M; Chai, Jong Seo, E-mail: jschai@skku.edu; Yeon, Yeong H; Kim, Sangbum; Ghergherehchi, Mitra

    2016-10-01

    In this paper we discuss design simulations of a spiral magnet using a 2D Poisson code. The Independent Layers Method (ILM) is a new technique developed to enable the use of two-dimensional simulation code to calculate a non-symmetric three-dimensional magnetic field. In ILM, the magnet pole is divided into successive independent layers, and the hill-and-valley shape around the azimuthal direction is implemented using a reference magnet. Normalizing the magnetic field in the reference magnet produces a profile that can be multiplied by the maximum magnetic field in the hill magnet, a dipole magnet made of the hills at the same radius. Both magnets are calculated using the 2D Poisson SUPERFISH code. A fully three-dimensional magnetic field is then produced using TOSCA for the original spiral magnet, and the comparison of the 2D and 3D results shows good agreement between them.
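The core ILM scaling step, normalizing the reference magnet's azimuthal profile and multiplying it by the hill magnet's radial maximum field, can be sketched as follows (the array names are illustrative assumptions, not identifiers from the paper):

```python
import numpy as np

def ilm_field(b_ref_azimuthal, b_hill_max_radial):
    """ILM sketch: scale the reference magnet's normalized azimuthal profile
    by the hill dipole's maximum field at each radius.

    b_ref_azimuthal:   field of the reference magnet vs. azimuth, shape (n_theta,)
    b_hill_max_radial: maximum hill-magnet field vs. radius, shape (n_r,)
    returns:           approximate field map, shape (n_r, n_theta)
    """
    profile = b_ref_azimuthal / b_ref_azimuthal.max()  # dimensionless profile
    return np.outer(b_hill_max_radial, profile)
```

Each layer's 2D solutions are thus combined into an approximate 3D field map without ever running a 3D solver.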

  20. High Efficiency Regenerative Helium Compressor, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Helium plays several critical roles in spacecraft propulsion. High pressure helium is commonly used to pressurize propellant fuel tanks. Helium cryocoolers can be...

  1. Highly Efficient, Durable Regenerative Solid Oxide Stack, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Precision Combustion, Inc. (PCI) proposes to develop a highly efficient regenerative solid oxide stack design. Novel structural elements allow direct internal...

  2. Computationally Efficient Amplitude Modulated Sinusoidal Audio Coding using Frequency-Domain Linear Prediction

    DEFF Research Database (Denmark)

    Christensen, M. G.; Jensen, Søren Holdt

    2006-01-01

    A method for amplitude modulated sinusoidal audio coding is presented that has low complexity and low delay. It is based on a subband processing system where, in each subband, the signal is modeled as an amplitude modulated sum of sinusoids. The envelopes are estimated using frequency-domain linear prediction, and the prediction coefficients are quantized. As a proof of concept, we evaluate different configurations in a subjective listening test, which shows that the proposed method offers significant improvements in sinusoidal coding. Furthermore, the properties of the frequency...

  3. Efficient data management techniques implemented in the Karlsruhe Monte Carlo code KAMCCO

    International Nuclear Information System (INIS)

    Arnecke, G.; Borgwaldt, H.; Brandl, V.; Lalovic, M.

    1974-01-01

    The Karlsruhe Monte Carlo Code KAMCCO is a forward neutron transport code with an eigenfunction and a fixed source option, including time-dependence. A continuous energy model is combined with a detailed representation of neutron cross sections, based on linear interpolation, Breit-Wigner resonances and probability tables. All input is processed into densely packed, dynamically addressed parameter fields and networks of pointers (addresses). Estimation routines are decoupled from random walk and analyze a storage region with sample records. This technique leads to fast execution with moderate storage requirements and without any I/O-operations except in the input and output stages. 7 references. (U.S.)

  4. Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding

    Science.gov (United States)

    Oh, Kwan-Jung; Oh, Byung Tae

    2015-04-01

    We present an intracoding method that is applicable to depth map coding in multiview plus depth systems. Our approach combines skip prediction and plane segmentation-based prediction. The proposed depth intraskip prediction uses the direction estimated at both the encoder and the decoder, and does not need to encode residual data. Our plane segmentation-based intraprediction divides the current block into two regions and applies a different prediction scheme to each segmented region. This method avoids incorrect estimations across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/advanced video coding intraprediction and is able to improve the subjective rendering quality.
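A toy version of the two-region split behind plane-segmentation prediction might look like the sketch below. The real codec predicts each region from neighbouring reconstructed samples; this simplified stand-in only illustrates why segmenting a depth block before prediction helps, since depth maps are largely piecewise constant:

```python
import numpy as np

def biregion_predict(block):
    """Split a depth block into two regions by thresholding at the mean depth
    and represent each region by its own mean value (a toy two-plane model)."""
    mask = block >= block.mean()           # foreground/background split
    pred = np.empty_like(block, dtype=float)
    if mask.any():
        pred[mask] = block[mask].mean()    # one predictor per region
    if (~mask).any():
        pred[~mask] = block[~mask].mean()
    return pred, mask
```

On a block containing a sharp depth edge, this two-region predictor is exact, whereas any single-plane predictor must smear the edge.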

  5. Performance, Accuracy and Efficiency Evaluation of a Three-Dimensional Whole-Core Neutron Transport Code AGENT

    International Nuclear Information System (INIS)

    Jevremovic, Tatjana; Hursin, Mathieu; Satvat, Nader; Hopkins, John; Xiao, Shanjie; Gert, Godfree

    2006-01-01

    The AGENT (Arbitrary Geometry Neutron Transport) code, an open-architecture reactor modeling tool, is a deterministic neutron transport code for two- or three-dimensional heterogeneous neutronic design and analysis of whole reactor cores, regardless of geometry type and material configuration. The AGENT neutron transport methodology is applicable to all generations of nuclear power and research reactors. It combines three theories: (1) the theory of R-functions, used to generate real three-dimensional whole cores of square, hexagonal or triangular cross section, (2) the planar method of characteristics, used to solve isotropic neutron transport in non-homogenized two-dimensional (2D) reactor slices, and (3) one-dimensional diffusion theory, used to couple the planar and axial neutron tracks through the transverse leakage and angular mesh-wise flux values. The R-function geometrical module allows sequential building of the layers of geometry and automatic sub-meshing based on the network of domain functions. The simplicity of geometry description and the selection of parameters for accurate treatment of neutron propagation are achieved by hierarchically organizing simple primitives into complex domains through Boolean algebra (both being represented by corresponding domain functions). The accuracy is comparable to Monte Carlo codes and is obtained by following neutron propagation through real geometrical domains, which requires no homogenization or simplification. The efficiency is maintained through a set of acceleration techniques introduced at all important calculation levels. The flux solution incorporates power iteration with two different acceleration techniques: Coarse Mesh Re-balancing (CMR) and Coarse Mesh Finite Difference (CMFD). The stand-alone, originally developed graphical user interface of the AGENT code design environment allows the user to view and verify input data by displaying the geometry and material distribution.
The user can also view the output data such
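The outer eigenvalue solve mentioned above, power iteration for the dominant mode, can be sketched in a few lines. This is the unaccelerated textbook scheme that techniques such as CMR and CMFD accelerate, not AGENT's actual implementation:

```python
import numpy as np

def power_iteration(A, tol=1e-10, max_iter=1000):
    """Unaccelerated power iteration for the dominant eigenpair of A.
    In reactor terms, A plays the role of the fission-source operator and the
    dominant eigenvalue corresponds to the multiplication factor k-eff."""
    x = np.ones(A.shape[0])
    k = 1.0
    for _ in range(max_iter):
        y = A @ x
        k_new = np.linalg.norm(y) / np.linalg.norm(x)  # eigenvalue estimate
        y /= np.linalg.norm(y)                          # renormalize the mode
        if abs(k_new - k) < tol:
            return k_new, y
        k, x = k_new, y
    return k, x
```

Convergence is governed by the dominance ratio (second-to-first eigenvalue ratio), which is close to 1 for large cores; that slow convergence is precisely why coarse-mesh acceleration is needed.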

  6. Radioactivities evaluation code system for high temperature gas cooled reactors during normal operation

    International Nuclear Information System (INIS)

    Ogura, Kenji; Morimoto, Toshio; Suzuki, Katsuo.

    1979-01-01

    A radioactivity evaluation code system for high temperature gas-cooled reactors during normal operation was developed to study the behavior of fission products (FPs) in the plants. The system consists of a code for the calculation of the diffusion of FPs in fuel (FIPERX), a code for the deposition of FPs in the primary cooling system (PLATO), a code for the transfer and emission of FPs in nuclear power plants (FIPPI-2), and a code for the exposure dose due to emitted FPs (FEDOSE). The FIPERX code can calculate the time evolution of the FP concentration distribution, the FP flow distribution, the FP partial pressure distribution, and the FP emission rate into the coolant. The amount of FPs deposited and their distribution in the primary cooling system can be evaluated by the PLATO code. The FIPPI-2 code can be used to estimate the amount of FPs in nuclear power plants and the amount of FPs emitted from the plants. The exposure dose of residents around nuclear power plants during plant operation is calculated by the FEDOSE code. This code evaluates the dose due to external exposure in normal operation and in accidents, and the internal dose from inhalation of the radioactive plume and from foods. Further studies of this code system, by comparison with experimental data, are under consideration. (Kato, T.)

  7. High-efficiency airfoil rudders applied to submarines

    Directory of Open Access Journals (Sweden)

    ZHOU Yimei

    2017-03-01

    Full Text Available Modern submarine design places ever higher requirements on control surfaces, which requires designers to constantly innovate new types of rudder so as to improve control-surface efficiency. Adopting a high-efficiency airfoil rudder is one of the most effective measures for improving the efficiency of control surfaces. In this paper, we put forward an optimization method for a high-efficiency airfoil rudder on the basis of a comparative analysis of the strengths and weaknesses of various airfoils, and a numerical calculation method is adopted to comparatively analyze the influence of the high-efficiency airfoil rudder and a conventional NACA rudder on the hydrodynamic characteristics and wake field. At the same time, a model load test in a towing tank was carried out, and the test results and simulation calculations showed good consistency: the error between them was less than 10%. The experimental results show that the steerage of the high-efficiency airfoil rudder is increased by more than 40% compared with the conventional rudder, while the total resistance is similar: the difference is no more than 4%. Adopting a high-efficiency airfoil rudder therefore gains much more in lift efficiency than it costs in total boat resistance. The results show that the high-efficiency airfoil rudder has obvious advantages for improving control efficiency, giving it good application prospects.

  8. Efficient Quantification of Uncertainties in Complex Computer Code Results, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This proposal addresses methods for efficient quantification of margins and uncertainties (QMU) for models that couple multiple, large-scale commercial or...

  9. Thermal-hydraulic code selection for modular high temperature gas-cooled reactors

    Energy Technology Data Exchange (ETDEWEB)

    Komen, E M.J.; Bogaard, J.P.A. van den

    1995-06-01

    In order to study the transient thermal-hydraulic system behaviour of modular high temperature gas-cooled reactors, the thermal-hydraulic computer codes RELAP5, MELCOR, THATCH, MORECA, and VSOP are considered at the Netherlands Energy Research Foundation ECN. This report presents the selection of the most appropriate codes. To cover the range of relevant accidents, a suite of three codes is recommended for analyses of HTR-M and MHTGR reactors. (orig.).

  10. Adaptive colour contrast coding in the salamander retina efficiently matches natural scene statistics.

    Directory of Open Access Journals (Sweden)

    Genadiy Vasserman

    Full Text Available The visual system continually adjusts its sensitivity to the statistical properties of the environment through an adaptation process that starts in the retina. Colour perception and processing are commonly thought to occur mainly in higher visual areas, and indeed most evidence for chromatic colour contrast adaptation comes from cortical studies. We show that colour contrast adaptation starts in the retina, where ganglion cells adjust their responses to the spectral properties of the environment. We demonstrate that the ganglion cells match their responses to red-blue stimulus combinations according to the relative contrast of each of the input channels by rotating their functional response properties in colour space. Using measurements of the chromatic statistics of natural environments, we show that the retina balances inputs from the two stimulated colour channels (red and blue), as would be expected from theoretically optimal behaviour. Our results suggest that colour is encoded in the retina based on the efficient processing of spectral information that matches spectral combinations in natural scenes at the colour processing level.
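The reported "rotation in colour space" is conceptually similar to aligning response axes with the principal axes of the stimulus distribution, which decorrelates the two channels. A toy sketch of that idea (not the authors' analysis code; the function name is an assumption):

```python
import numpy as np

def adapt_colour_axes(samples):
    """Rotate two-channel (e.g. red/blue) responses onto the principal axes
    of the stimulus distribution, decorrelating the channels: a toy model of
    the adaptive rotation of response properties in colour space."""
    cov = np.cov(samples.T)               # 2x2 channel covariance
    _, vecs = np.linalg.eigh(cov)         # principal axes of the stimuli
    return samples @ vecs                 # responses along the rotated axes
```

After the rotation, the sample covariance between the two output channels is (numerically) zero: the code has matched its axes to the statistics of the input.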

  11. High Efficiency, High Density Terrestrial Panel. [for solar cell modules

    Science.gov (United States)

    Wohlgemuth, J.; Wihl, M.; Rosenfield, T.

    1979-01-01

    Terrestrial panels were fabricated using rectangular cells. Packing densities in excess of 90% with panel conversion efficiencies greater than 13% were obtained. Higher density panels can be produced on a cost competitive basis with the standard salami panels.

  12. Parallelization of MCNP 4, a Monte Carlo neutron and photon transport code system, in highly parallel distributed memory type computer

    International Nuclear Information System (INIS)

    Masukawa, Fumihiro; Takano, Makoto; Naito, Yoshitaka; Yamazaki, Takao; Fujisaki, Masahide; Suzuki, Koichiro; Okuda, Motoi.

    1993-11-01

    In order to improve the accuracy and calculation speed of shielding analyses, MCNP 4, a Monte Carlo neutron and photon transport code system, has been parallelized and its efficiency measured on the highly parallel distributed-memory computer AP1000. The code was analyzed statically and dynamically, and a suitable parallelization algorithm was determined for the shielding analysis functions of MCNP 4. This includes a strategy in which a new history is assigned dynamically to an idling processor element during execution. Furthermore, to avoid congestion in communication processing, a batch concept, processing multiple histories as a unit, has been introduced. By analyzing a sample cask problem with 2,000,000 histories on the AP1000 with 512 processor elements, a parallelization efficiency of 82% was achieved, and the calculation speed was estimated to be around 50 times that of the FACOM M-780. (author)
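The batch-plus-dynamic-assignment strategy can be sketched with a worker pool that pulls a new batch of histories as soon as it goes idle. The sketch below uses a thread pool and a toy tally (estimating π by rejection sampling) purely for illustration; MCNP's actual scheme dispatches batches to distributed processor elements:

```python
from multiprocessing.dummy import Pool  # thread pool standing in for processor elements
import random

def run_batch(args):
    """Run one batch of histories as a unit; here a toy tally estimating pi."""
    seed, n_histories = args
    rng = random.Random(seed)  # independent stream per batch
    return sum(1 for _ in range(n_histories)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def parallel_tally(n_batches=32, batch_size=5000, workers=4):
    """Dynamically hand batches to whichever worker goes idle first."""
    batches = [(seed, batch_size) for seed in range(n_batches)]
    with Pool(workers) as pool:
        # imap_unordered gives the next batch to the first idle worker,
        # mirroring the dynamic history assignment described above
        hits = sum(pool.imap_unordered(run_batch, batches))
    return 4.0 * hits / (n_batches * batch_size)
```

Batching amortizes the per-assignment communication cost over many histories, which is exactly the congestion-avoidance idea in the abstract.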

  13. Development of Safety Analysis Codes and Experimental Validation for a Very High Temperature Gas-Cooled Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Chang, H. Oh, PhD; Cliff Davis; Richard Moore

    2004-11-01

    The very high temperature gas-cooled reactors (VHTGRs) are those concepts that have average coolant temperatures above 900 degrees C or operational fuel temperatures above 1250 degrees C. These concepts provide the potential for increased energy conversion efficiency and for high-temperature process heat applications, in addition to power generation and nuclear hydrogen generation. While all the High Temperature Gas Cooled Reactor (HTGR) concepts have sufficiently high temperatures to support process heat applications, such as desalination and cogeneration, the VHTGR's higher temperatures are suitable for particular applications such as thermochemical hydrogen production. However, the high temperature operation can be detrimental to safety following a loss-of-coolant accident (LOCA) initiated by pipe breaks caused by seismic or other events. Following the loss of coolant through the break and coolant depressurization, air from the containment will enter the core by molecular diffusion and ultimately by natural convection, leading to oxidation of the in-core graphite structures and fuel. The oxidation will release heat and accelerate the heatup of the reactor core. Thus, without any effective countermeasures, a pipe break may lead to significant fuel damage and fission product release. The Idaho National Engineering and Environmental Laboratory (INEEL) has investigated this event for the past three years for the HTGR. However, neither the computer codes used, nor in fact any of the world's computer codes, have been sufficiently developed and validated to reliably predict this event. New code development, improvement of the existing codes, and experimental validation are imperative to narrow the uncertainty in the predictions of this type of accident. The objectives of this Korean/United States collaboration are to develop advanced computational methods for VHTGR safety analysis codes and to validate these computer codes.

  14. 40 CFR 761.71 - High efficiency boilers.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false High efficiency boilers. 761.71... PROHIBITIONS Storage and Disposal § 761.71 High efficiency boilers. (a) To burn mineral oil dielectric fluid containing a PCB concentration of ≥50 ppm, but <500 ppm, the boiler shall comply with the following...

  15. HIGH EFFICIENCY DESULFURIZATION OF SYNTHESIS GAS

    Energy Technology Data Exchange (ETDEWEB)

    Anirban Mukherjee; Kwang-Bok Yi; Elizabeth J. Podlaha; Douglas P. Harrison

    2001-11-01

    Mixed metal oxides containing CeO₂ and ZrO₂ are being studied as high temperature desulfurization sorbents capable of achieving the DOE Vision 21 target of 1 ppmv or less H₂S. The research is justified by recent results in this laboratory which showed that reduced CeO₂, designated CeOₙ (1.5 < n < 2.0), is capable of achieving the 1 ppmv target in highly reducing gas atmospheres. The addition of ZrO₂ has improved the performance of oxidation catalysts and three-way automotive catalysts containing CeO₂, and should have similar beneficial effects on CeO₂ desulfurization sorbents. An electrochemical method for synthesizing CeO₂-ZrO₂ was developed and the products were characterized by XRD and TEM during year 01. Nanocrystalline particles having a diameter of about 5 nm and containing from approximately 10 mol% to 80 mol% ZrO₂ have been prepared. XRD showed the product to be a solid solution at low ZrO₂ contents, with a separate ZrO₂ phase emerging at higher ZrO₂ levels. Phase separation did not occur when the solid solutions were heat treated at 700 °C. A flow reactor system constructed of quartz and Teflon has been built, and a gas chromatograph equipped with a pulsed flame photometric detector (PFPD) suitable for measuring sub-ppmv levels of H₂S has been purchased with LSU matching funds. Preliminary desulfurization tests using commercial CeO₂ and CeO₂-ZrO₂ in highly reducing gas compositions have confirmed that CeO₂-ZrO₂ is more effective than CeO₂ in removing H₂S. At 700 °C the product H₂S concentration using the CeO₂-ZrO₂ sorbent was near the 0.1 ppmv PFPD detection limit during the prebreakthrough period.

  16. Investigation of the efficiency and qualification of computer codes for PSA

    International Nuclear Information System (INIS)

    Andernacht, M.; Dinsmore, S.

    1992-01-01

    An international selection of computer codes for the quantification of level 1 PSA models was evaluated with respect to the consistency of results across different benchmark exercises. The exercises in this project are based on those developed during the first benchmark project (Phase I). Because of several large differences in the results during Phase I of the benchmark, the exercises in Phase II were defined more precisely. Owing to this improved definition, the results delivered by the different computer codes for Phase II are much more consistent. In general, the results of Benchmark II show that the exercises were defined well enough for consistent results to be generated. Thus, the exercises can also be used to support the evaluation of additional PSA programs. (orig.) [de
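Level 1 PSA quantification codes of the kind benchmarked here ultimately evaluate a top-event probability from minimal cut sets. A minimal sketch of the two standard approximations (the event names and probabilities are illustrative, not benchmark data):

```python
from math import prod

def cutset_prob(cut, p):
    """Probability of one minimal cut set, assuming independent basic events."""
    return prod(p[event] for event in cut)

def rare_event_approx(cutsets, p):
    """First-order (rare-event) approximation: sum of cut set probabilities."""
    return sum(cutset_prob(c, p) for c in cutsets)

def min_cut_upper_bound(cutsets, p):
    """Min-cut upper bound: 1 - prod_i (1 - P(cut_i))."""
    q = 1.0
    for cut in cutsets:
        q *= 1.0 - cutset_prob(cut, p)
    return 1.0 - q
```

Differences between codes in how they truncate cut sets and which of these approximations they apply are a typical source of the result spread such benchmarks are designed to expose.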

  17. Highly efficient induction of chirality in intramolecular

    Science.gov (United States)

    Cossio; Arrieta; Lecea; Alajarin; Vidal; Tovar

    2000-06-16

    Highly stereocontrolled, intramolecular [2 + 2] cycloadditions between ketenimines and imines leading to 1,2-dihydroazeto[2,1-b]quinazolines have been achieved. The source of stereocontrol is a chiral carbon atom adjacent either to the iminic carbon or to the nitrogen atom. In the first case, the stereocontrol stems from the preference for the axial conformer in the first transition structure. In the second case, the origin of the stereocontrol lies in the two-electron stabilizing interaction between the C-C bond being formed and the sigma orbital corresponding to the polar C-X bond, X being an electronegative atom. These models can be extended to other related systems for predicting the stereochemical outcome of this intramolecular reaction.

  18. High Efficiency, Low Cost Scintillators for PET

    International Nuclear Information System (INIS)

    Kanai Shah

    2007-01-01

    Inorganic scintillation detectors coupled to PMTs are an important element of medical imaging applications such as positron emission tomography (PET). Performance as well as cost of these systems is limited by the properties of the scintillation detectors available at present. The Phase I project was aimed at demonstrating the feasibility of producing high performance scintillators using a low cost fabrication approach. Samples of these scintillators were produced and their performance was evaluated. Overall, the Phase I effort was very successful. The Phase II project will be aimed at advancing the new scintillation technology for PET. Large samples of the new scintillators will be produced and their performance will be evaluated. PET modules based on the new scintillators will also be built and characterized

  19. Compact and highly efficient laser pump cavity

    Science.gov (United States)

    Chang, Jim J.; Bass, Isaac L.; Zapata, Luis E.

    1999-01-01

    A new, compact, side-pumped laser pump cavity design which uses non-conventional optics for injection of laser-diode light into a laser pump chamber includes a plurality of elongated light concentration channels. In one embodiment, the light concentration channels are compound parabolic concentrators (CPC) which have very small exit apertures so that light will not escape from the pumping chamber and will be multiply reflected through the laser rod. This new design effectively traps the pump radiation inside the pump chamber that encloses the laser rod. It enables more uniform laser pumping and highly effective recycle of pump radiation, leading to significantly improved laser performance. This new design also effectively widens the acceptable radiation wavelength of the diodes, resulting in a more reliable laser performance with lower cost.

  20. Capturing Energy-Saving Opportunities: Improving Building Efficiency in Rajasthan through Energy Code Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Tan, Qing [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Yu, Sha [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Evans, Meredydd [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Mathur, Jyotirmay [Malaviya National Institute of Technology, Jaipur (India); Vu, Linh D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-05-01

    India adopted the Energy Conservation Building Code (ECBC) in 2007. Rajasthan is the first state to make ECBC mandatory at the state level. In collaboration with Malaviya National Institute of Technology (MNIT) Jaipur, Pacific Northwest National Laboratory (PNNL) has been working with Rajasthan to facilitate the implementation of ECBC. This report summarizes milestones made in Rajasthan and PNNL's contribution in institutional set-ups, capacity building, compliance enforcement and pilot building construction.

  1. High fidelity analysis of BWR fuel assembly with COBRA-TF/PARCS and trace codes

    International Nuclear Information System (INIS)

    Abarca, A.; Miro, R.; Barrachina, T.; Verdu, G.; Soler, A.

    2013-01-01

    The growing importance of detailed reactor core and fuel assembly descriptions for light water reactors (LWRs), as well as of sub-channel safety analysis, requires high fidelity models and coupled neutronic/thermal-hydraulic codes. Hand in hand with advances in computer technology, nuclear safety analysis is beginning to use more detailed thermal hydraulics and neutronics. Previously, a PWR core and a 16 by 16 fuel assembly model were developed to test and validate our COBRA-TF/PARCS v2.7 (CTF/PARCS) coupled code. In this work, a comparison of the modeling and simulation advantages and disadvantages of a modern 10 by 10 BWR fuel assembly with the CTF/PARCS and TRACE codes has been carried out. The objective of the comparison is to highlight the main advantages of using sub-channel codes to perform high resolution nuclear safety analysis. Sub-channel codes such as CTF permit accurate predictions, in two-phase flow regimes, of the thermal-hydraulic parameters important to safety, with high local resolution. The modeled BWR fuel assembly has 91 fuel rods (81 full length and 10 partial length fuel rods) and a large square central water rod. This assembly has been modeled in high detail with the CTF code, using the BWR modeling parameters provided by TRACE. The same PARCS neutronic model has been used for the simulation with both codes. To compare the codes, a coupled steady state calculation has been performed. (author)

  2. Quantum secure direct communication with high-dimension quantum superdense coding

    International Nuclear Information System (INIS)

    Wang Chuan; Li Yansong; Liu Xiaoshu; Deng Fuguo; Long Guilu

    2005-01-01

    A protocol for quantum secure direct communication with quantum superdense coding is proposed. It combines the ideas of block transmission, the ping-pong quantum secure direct communication protocol, and quantum superdense coding. It has the advantage of being secure and of high source capacity
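The superdense coding primitive itself is easy to sketch numerically: the sender applies one of four Pauli operations to her half of a shared Bell pair, and the receiver's Bell-basis measurement recovers two classical bits. This is a generic textbook sketch, not the protocol's full block-transmission machinery:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
# Pauli operation the sender applies for each two-bit message
ENCODE = {(0, 0): I2, (0, 1): X, (1, 0): Z, (1, 1): X @ Z}

def superdense(bits):
    """Encode two classical bits in one transmitted qubit of a Bell pair."""
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00>+|11>)/sqrt(2)
    state = np.kron(ENCODE[bits], I2) @ bell  # sender acts on her qubit only
    # Receiver projects onto the four Bell states
    bell_basis = {
        (0, 0): np.array([1, 0, 0, 1]) / np.sqrt(2),
        (0, 1): np.array([0, 1, 1, 0]) / np.sqrt(2),
        (1, 0): np.array([1, 0, 0, -1]) / np.sqrt(2),
        (1, 1): np.array([0, 1, -1, 0]) / np.sqrt(2),
    }
    probs = {b: abs(v.conj() @ state) ** 2 for b, v in bell_basis.items()}
    return max(probs, key=probs.get)
```

Because the four encoded states are exactly the four orthogonal Bell states, the measurement recovers the message with probability 1, which is the "high source capacity" the abstract refers to.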

  3. Preliminary application of the draft code case for alloy 617 for a high temperature component

    International Nuclear Information System (INIS)

    Lee, Hyeong Yeon; Kim, Yong Wan; Song, Kee Nam

    2008-01-01

    The ASME draft Code Case for Alloy 617 was developed in the late 1980s for the design of very-high-temperature gas cooled reactors. The draft Code Case was patterned after ASME Code Section III Subsection NH and was intended to cover Ni-Cr-Co-Mo Alloy 617 up to 982 °C (1800 °F). However, the draft Code Case is still incomplete, lacking necessary material properties and design data. In this study, a preliminary evaluation of the creep-fatigue damage for a high temperature hot duct pipe structure has been carried out according to the draft Code Case. The evaluation procedures and results according to the draft Code Case for Alloy 617 were compared with those of ASME Subsection NH and RCC-MR for Alloy 800H. It was shown that many data, including material properties and fatigue and creep data, need to be supplemented for the draft Code Case. However, when the creep-fatigue damage evaluation results according to the draft Code Case, ASME-NH and RCC-MR were compared on the basis of this preliminary evaluation, the Alloy 617 results from the draft Code Case tended to be more resistant to creep damage, but less resistant to fatigue damage, than those from ASME-NH and RCC-MR

  4. HIGH EFFICIENCY DESULFURIZATION OF SYNTHESIS GAS

    Energy Technology Data Exchange (ETDEWEB)

    Kwang-Bok Yi; Anirban Mukherjee; Elizabeth J. Podlaha; Douglas P. Harrison

    2004-03-01

    Mixed metal oxides containing ceria and zirconia have been studied as high temperature desulfurization sorbents with the objective of achieving the DOE Vision 21 target of 1 ppmv or less H₂S in the product gas. The research was justified by recent results in this laboratory which showed that reduced CeO₂, designated CeOₙ (1.5 < n < 2.0), is capable of achieving the 1 ppmv target in highly reducing gas atmospheres. The addition of ZrO₂ has improved the performance of oxidation catalysts and three-way automotive catalysts containing CeO₂, and was postulated to have similar beneficial effects on CeO₂ desulfurization sorbents. An electrochemical method for synthesizing CeO₂-ZrO₂ mixtures was developed and the products were characterized by XRD and TEM during year 01. Nanocrystalline particles having a diameter of about 5 nm and containing from approximately 10 mol% to 80 mol% ZrO₂ were prepared. XRD analysis showed the product to be a solid solution at low ZrO₂ contents, with a separate ZrO₂ phase emerging at higher ZrO₂ levels. Unfortunately, the quantity of CeO₂-ZrO₂ that could be prepared electrochemically was too small to permit desulfurization testing. Also during year 01, a laboratory-scale fixed-bed reactor was constructed for desulfurization testing. All components of the reactor and analytical systems that were exposed to low concentrations of H₂S were constructed of quartz, Teflon, or Silcosteel. Reactor product gas composition as a function of time was determined using a Varian 3800 gas chromatograph equipped with a pulsed flame photometric detector (PFPD) for measuring low H₂S concentrations from approximately 0.1 to 10 ppmv, and a thermal conductivity detector (TCD) for higher concentrations of H₂S.
Larger quantities of CeO{sub 2}-ZrO{sub 2} mixtures from other sources, including mixtures prepared in this laboratory using a coprecipitation procedure, were obtained

  5. CIPHER: coded imager and polarimeter for high-energy radiation

    CERN Document Server

    Caroli, E; Dusi, W; Bertuccio, G; Sampietro, M

    2000-01-01

    The CIPHER instrument is a hard X- and soft gamma-ray spectroscopic and polarimetric coded mask imager based on an array of cadmium telluride micro-spectrometers. The position-sensitive detector (PSD) will be arranged in 4 modules of 32x32 crystals, each of 2x2 mm{sup 2} cross section and 10 mm thickness giving a total active area of about 160 cm{sup 2}. The micro-spectrometer characteristics allow a wide operating range from approx 10 keV to 1 MeV, while the PSD is actively shielded by CsI crystals on the bottom in order to reduce background. The mask, based on a modified uniformly redundant array (MURA) pattern, is four times the area of the PSD and is situated at about 100 cm from the CdTe array top surface. The CIPHER instrument is proposed for a balloon experiment, both in order to assess the performance of such an instrumental concept for a small/medium-size satellite survey mission and to perform an innovative measurement of the Crab polarisation level. The CIPHER's field of view allows the instrument to...
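The MURA mask mentioned in the abstract has a standard quadratic-residue construction for prime order. The sketch below is a generic textbook version of that recipe, not CIPHER's actual flight pattern (conventions for the corner element and row/column roles vary between references):

```python
import numpy as np

def mura(p):
    """Generate a p x p MURA mask (p prime): 1 = open, 0 = opaque.
    Standard quadratic-residue construction (Gottesman-Fenimore style)."""
    qr = {(x * x) % p for x in range(1, p)}           # quadratic residues mod p
    C = [1 if i in qr else -1 for i in range(p)]
    A = np.zeros((p, p), dtype=int)
    for i in range(1, p):                              # row i = 0 stays opaque
        A[i, 0] = 1                                    # first column open
        for j in range(1, p):
            A[i, j] = 1 if C[i] * C[j] == 1 else 0
    return A

mask = mura(5)   # open fraction is (p*p - 1) / (2*p*p), approaching 1/2
```

For any prime p this yields exactly (p² - 1)/2 open elements, the near-half-open property that gives URA-family masks their flat sidelobes after correlation decoding.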

  6. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens

    International Nuclear Information System (INIS)

    Oelerich, Jan Oliver; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D.; Volz, Kerstin

    2017-01-01

    Highlights: • We present STEMsalabim, a modern implementation of the multislice algorithm for simulation of STEM images. • Our package is highly parallelizable on high-performance computing clusters, combining shared and distributed memory architectures. • With STEMsalabim, computationally and memory expensive STEM image simulations can be carried out within reasonable time. - Abstract: We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space.
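The multislice algorithm named in the abstract alternates transmission through thin projected-potential slices with Fresnel propagation between them. A minimal, illustrative numpy sketch follows (not STEMsalabim's actual implementation; the slice phases, sampling, and parameter values below are hypothetical):

```python
import numpy as np

def multislice(psi, slice_phases, wavelength, dz, pixel_size):
    """Propagate a wavefunction through a stack of slices.
    slice_phases: list of 2D phase shifts (radians) from each slice's
    projected potential (frozen-lattice snapshots in a real simulation)."""
    n = psi.shape[0]
    k = np.fft.fftfreq(n, d=pixel_size)                 # spatial frequencies
    kx, ky = np.meshgrid(k, k, indexing="ij")
    # Fresnel free-space propagator over one slice thickness dz
    propagator = np.exp(-1j * np.pi * wavelength * dz * (kx**2 + ky**2))
    for phase in slice_phases:
        psi = psi * np.exp(1j * phase)                  # transmit through slice
        psi = np.fft.ifft2(np.fft.fft2(psi) * propagator)  # propagate to next slice
    return psi

# Hypothetical demo: 64x64 grid, ~200 kV wavelength, 2 A slices, 0.1 A pixels
n = 64
psi0 = np.ones((n, n), dtype=complex) / n
rng = np.random.default_rng(0)
phases = [0.2 * rng.standard_normal((n, n)) for _ in range(3)]
out = multislice(psi0, phases, 2.5e-12, 2e-10, 1e-11)
```

Both steps are unitary (unit-modulus transmission and propagator), so the wavefunction norm is conserved, which is a convenient correctness check for any multislice implementation.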

  7. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens

    Energy Technology Data Exchange (ETDEWEB)

    Oelerich, Jan Oliver, E-mail: jan.oliver.oelerich@physik.uni-marburg.de; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D.; Volz, Kerstin

    2017-06-15

    Highlights: • We present STEMsalabim, a modern implementation of the multislice algorithm for simulation of STEM images. • Our package is highly parallelizable on high-performance computing clusters, combining shared and distributed memory architectures. • With STEMsalabim, computationally and memory expensive STEM image simulations can be carried out within reasonable time. - Abstract: We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space.

  8. All passive architecture for high efficiency cascaded Raman conversion

    Science.gov (United States)

    Balaswamy, V.; Arun, S.; Chayran, G.; Supradeepa, V. R.

    2018-02-01

    Cascaded Raman fiber lasers offer a convenient method to obtain scalable, high-power sources at various wavelength regions inaccessible with rare-earth doped fiber lasers. A previous limitation was the reduced efficiency of these lasers. Recently, new architectures have been proposed to enhance efficiency, but at the cost of added complexity, requiring an additional low-power, cascaded Raman laser. In this work, we overcome this with a new, all-passive architecture for high-efficiency cascaded Raman conversion. We demonstrate our architecture with a fifth-order cascaded Raman converter from 1117 nm to 1480 nm with an output power of ~64 W and an efficiency of 60%.

  9. Edge-preserving Intra mode for efficient depth map coding based on H.264/AVC

    DEFF Research Database (Denmark)

    Zamarin, Marco; Forchhammer, Søren

    2014-01-01

    Depth-image-based-rendering (DIBR) algorithms for 3D video communication systems based on the “multi-view video plus depth” format are very sensitive to the accuracy of depth information. Specifically, edge regions in the depth data should be preserved in the coding/decoding process to ensure good...... view synthesis performance, which directly affects the overall system performance. This paper proposes a novel scheme for edge-aware Intra depth compression based on the H.264/AVC framework enabled on both Intra (I) and Inter (P) slices. The proposed scheme includes a new Intra mode specifically...

  10. Adaptive Network Coded Clouds: High Speed Downloads and Cost-Effective Version Control

    DEFF Research Database (Denmark)

    Sipos, Marton A.; Heide, Janus; Roetter, Daniel Enrique Lucani

    2018-01-01

    Although cloud systems provide a reliable and flexible storage solution, the use of a single cloud service constitutes a single point of failure, which can compromise data availability, download speed, and security. To address these challenges, we advocate for the use of multiple cloud storage...... providers simultaneously using network coding as the key enabling technology. Our goal is to study two challenges of network coded storage systems. First, the efficient update of the number of coded fragments per cloud in a system aggregating multiple clouds in order to boost the download speed of files. We...... developed a novel scheme using recoding with limited packets to trade-off storage space, reliability, and data retrieval speed. Implementation and measurements with commercial cloud providers show that up to 9x less network use is needed compared to other network coding schemes, while maintaining similar...
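To illustrate the idea of storing coded fragments across clouds, here is a toy random-linear-style network code over GF(2) with XOR combining and Gaussian-elimination decoding. This is a generic sketch for intuition only; the paper's actual scheme, field size, and recoding strategy are not reproduced here:

```python
import numpy as np

def encode(fragments, coeffs):
    """Each coded fragment is the GF(2) (XOR) combination of the originals
    selected by one row of the binary coefficient matrix."""
    out = []
    for row in coeffs:
        acc = np.zeros_like(fragments[0])
        for frag, c in zip(fragments, row):
            if c:
                acc = np.bitwise_xor(acc, frag)
        out.append(acc)
    return out

def decode(coeffs, coded):
    """Recover the k originals by Gaussian elimination over GF(2).
    Raises StopIteration if the received coefficients are rank deficient."""
    A = np.array(coeffs, dtype=np.uint8)
    B = np.array(coded, dtype=np.uint8)
    k = A.shape[1]
    row = 0
    for col in range(k):
        piv = next(r for r in range(row, len(A)) if A[r, col])
        A[[row, piv]] = A[[piv, row]]
        B[[row, piv]] = B[[piv, row]]
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                B[r] ^= B[row]
        row += 1
    return [B[i] for i in range(k)]

# Hypothetical toy data: three 4-byte fragments, a full-rank coefficient matrix
frags = [np.array([1, 2, 3, 4], dtype=np.uint8),
         np.array([5, 6, 7, 8], dtype=np.uint8),
         np.array([9, 10, 11, 12], dtype=np.uint8)]
coeffs = [[1, 0, 0], [1, 1, 0], [1, 1, 1]]
recovered = decode(coeffs, encode(frags, coeffs))
```

The key property exploited by multi-cloud storage is that any k linearly independent coded fragments suffice for decoding, regardless of which cloud they came from.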

  11. Stabilization void-fill encapsulation high-efficiency particulate filters

    International Nuclear Information System (INIS)

    Alexander, R.G.; Stewart, W.E.; Phillips, S.J.; Serkowski, M.M.; England, J.L.; Boynton, H.C.

    1994-05-01

    This report discusses high-efficiency particulate air (HEPA) filter systems that become contaminated with radionuclides as part of the nuclear fuel processing operations conducted by the US Department of Energy (DOE), and that require replacement and safe, efficient disposal for plant safety. Two K-3 HEPA filters were removed from service, placed in burial boxes, buried, and stabilized safely and efficiently by remote means, which reduced radiation exposure to personnel and the environment

  12. Design of High Efficiency Illumination for LED Lighting

    OpenAIRE

    Chang, Yong-Nong; Cheng, Hung-Liang; Kuo, Chih-Ming

    2013-01-01

    A high efficiency illumination for LED street lighting is proposed. For energy saving, this paper uses a Class-E resonant inverter as the main electric circuit to improve efficiency. In addition, single dimming control has the best efficiency, the simplest control scheme, and the lowest circuit cost among the various dimming techniques. Multiple serially-connected transformers are used to drive the LED strings, as they can provide galvanic isolation and have the advantage of good current distribution against de...

  13. Codes of Ethics and the High School Newspaper: Part One.

    Science.gov (United States)

    Hager, Marilyn

    1978-01-01

    Deals with two types of ethical problems encountered by journalists, including high school journalists: deciding whether to accept gifts and favors from advertisers and news sources, and deciding what types of language would be offensive to readers. (GT)

  14. DELTA : a computer code for determination of efficiency of particulate matter and aerosol transport

    International Nuclear Information System (INIS)

    Picini, P.; Caropreso, G.; Antonini, A.; Galuppi, G.; Sbrana, M.; Bardone, G.; Malvestuto, V.; Ricotta, A.

    1996-04-01

    In Part I of this paper, a mathematical model is presented to calculate the sampling and transport efficiencies (in both laminar and turbulent conditions) of any sampling and transport system decomposable into several cylindrical elemental components. In Part II, an experimental facility built in the ENEA Casaccia laboratory is described, and the measurements carried out to validate the model are reported
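The decomposition idea behind such models is that the overall transport efficiency of a sampling line is the product of the efficiencies of its elemental components. A trivial sketch (the element values below are hypothetical, and real models compute each element's efficiency from deposition mechanisms):

```python
import math

def line_efficiency(element_efficiencies):
    """Overall efficiency of a sampling/transport line modeled as the
    product of the efficiencies (0..1) of its elemental components."""
    return math.prod(element_efficiencies)

# Hypothetical elements: inlet, bend, and two straight tube sections
eff = line_efficiency([0.98, 0.95, 0.99, 0.97])   # ≈ 0.894
```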

  15. Consequences of Converting Graded to Action Potentials upon Neural Information Coding and Energy Efficiency

    Science.gov (United States)

    Sengupta, Biswa; Laughlin, Simon Barry; Niven, Jeremy Edward

    2014-01-01

    Information is encoded in neural circuits using both graded and action potentials, converting between them within single neurons and successive processing layers. This conversion is accompanied by information loss and a drop in energy efficiency. We investigate the biophysical causes of this loss of information and efficiency by comparing spiking neuron models, containing stochastic voltage-gated Na+ and K+ channels, with generator potential and graded potential models lacking voltage-gated Na+ channels. We identify three causes of information loss in the generator potential that are the by-product of action potential generation: (1) the voltage-gated Na+ channels necessary for action potential generation increase intrinsic noise and (2) introduce non-linearities, and (3) the finite duration of the action potential creates a ‘footprint’ in the generator potential that obscures incoming signals. These three processes reduce information rates by ∼50% in generator potentials, to ∼3 times that of spike trains. Both generator potentials and graded potentials consume almost an order of magnitude less energy per second than spike trains. Because of the lower information rates of generator potentials they are substantially less energy efficient than graded potentials. However, both are an order of magnitude more efficient than spike trains due to the higher energy costs and low information content of spikes, emphasizing that there is a two-fold cost of converting analogue to digital; information loss and cost inflation. PMID:24465197

  16. The Energy Efficiency of High Intensity Proton Driver Concepts

    Energy Technology Data Exchange (ETDEWEB)

    Yakovlev, Vyacheslav [Fermilab; Grillenberger, Joachim [PSI, Villigen; Kim, Sang-Ho [ORNL, Oak Ridge (main); Seidel, Mike [PSI, Villigen; Yoshii, Masahito [JAEA, Ibaraki

    2017-05-01

    For MW class proton driver accelerators the energy efficiency is an important aspect; the talk reviews the efficiency of different accelerator concepts including s.c./n.c. linac, rapid cycling synchrotron, cyclotron; the potential of these concepts for very high beam power is discussed.

  17. Development of three-dimensional neoclassical transport simulation code with high performance Fortran on a vector-parallel computer

    International Nuclear Information System (INIS)

    Satake, Shinsuke; Okamoto, Masao; Nakajima, Noriyoshi; Takamaru, Hisanori

    2005-11-01

    A neoclassical transport simulation code (FORTEC-3D) applicable to three-dimensional configurations has been developed using High Performance Fortran (HPF). Adoption of parallelization techniques and of a hybrid simulation model in the δf Monte Carlo transport simulation, including non-local transport effects in three-dimensional configurations, makes it possible to simulate the dynamics of global, non-local transport phenomena with a self-consistent radial electric field within a reasonable computation time. In this paper, development of the transport code using HPF is reported. Optimization techniques for achieving both high vectorization and parallelization efficiency, the adoption of a parallel random number generator, and benchmark results are presented. (author)
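The parallel random number generator mentioned above addresses a real pitfall of parallel Monte Carlo: every worker needs a statistically independent, non-overlapping stream. A generic illustration using numpy's SeedSequence spawning (a toy pi estimate, not FORTEC-3D's δf method):

```python
import numpy as np

def parallel_pi(n_workers=4, samples_per_worker=100_000, seed=12345):
    """Monte Carlo estimate of pi with one independent RNG stream per
    (simulated) worker, derived via SeedSequence.spawn."""
    children = np.random.SeedSequence(seed).spawn(n_workers)
    hits = 0
    for child in children:
        rng = np.random.default_rng(child)   # independent stream per worker
        x = rng.random(samples_per_worker)
        y = rng.random(samples_per_worker)
        hits += int(np.count_nonzero(x * x + y * y < 1.0))
    return 4.0 * hits / (n_workers * samples_per_worker)

estimate = parallel_pi()
```

Spawned child sequences are guaranteed independent by construction, unlike the naive approach of seeding each worker with `seed + rank`, which can produce correlated streams for some generators.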

  18. A simulation of driven reconnection by a high precision MHD code

    International Nuclear Information System (INIS)

    Kusano, Kanya; Ouchi, Yasuo; Hayashi, Takaya; Horiuchi, Ritoku; Watanabe, Kunihiko; Sato, Tetsuya.

    1988-01-01

    A high precision MHD code, which has fourth-order accuracy in both the spatial and time steps, is developed and applied to simulation studies of two-dimensional driven reconnection. It is confirmed that the numerical dissipation of this new scheme is much less than that of the two-step Lax-Wendroff scheme. The effect of plasma compressibility on the reconnection dynamics is investigated by means of this high precision code. (author)
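The low numerical dissipation of a fourth-order scheme can be illustrated on the 1D linear advection equation u_t + a u_x = 0, using fourth-order central differences in space and classical RK4 in time. This is an illustrative toy on a model equation, not the paper's MHD code:

```python
import numpy as np

def advect_rk4(u, a, dx, dt, steps):
    """Advance u_t + a u_x = 0 on a periodic grid with 4th-order central
    differences in space and classical 4th-order Runge-Kutta in time."""
    def rhs(v):
        # 4th-order central first derivative:
        # (-v[i+2] + 8 v[i+1] - 8 v[i-1] + v[i-2]) / (12 dx)
        return -a * (-np.roll(v, -2) + 8 * np.roll(v, -1)
                     - 8 * np.roll(v, 1) + np.roll(v, 2)) / (12 * dx)
    for _ in range(steps):
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2)
        k4 = rhs(u + dt * k3)
        u = u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return u

# Advect a sine wave through one full period (CFL = 0.5): the profile
# returns almost unchanged, since the central scheme is non-dissipative.
N = 64
dx = 2 * np.pi / N
u0 = np.sin(np.arange(N) * dx)
uT = advect_rk4(u0, 1.0, dx, 0.5 * dx, 128)   # 128 steps * 0.5 dx = one period
```

A dissipative scheme such as two-step Lax-Wendroff would visibly damp the wave amplitude over the same interval; the central/RK4 combination leaves only tiny dispersion error.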

  19. High-speed low-complexity video coding with EDiCTius: a DCT coding proposal for JPEG XS

    Science.gov (United States)

    Richter, Thomas; Fößel, Siegfried; Keinert, Joachim; Scherl, Christian

    2017-09-01

    In its 71st meeting, the JPEG committee issued a call for low-complexity, high-speed image coding, designed to address the needs of low-cost video-over-IP applications. As an answer to this call, Fraunhofer IIS and the Computing Center of the University of Stuttgart jointly developed an embedded DCT image codec requiring only minimal resources while maximizing throughput on FPGA and GPU implementations. Objective and subjective tests performed for the 73rd meeting confirmed its excellent performance and suitability for its purpose, and it was selected as one of the two key contributions for the development of a joint test model. In this paper, its authors describe the design principles of the codec, provide a high-level overview of the encoder and decoder chain, and provide evaluation results on the test corpus selected by the JPEG committee.
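At the heart of any DCT codec is the blockwise transform. An orthonormal 8×8 DCT-II and its inverse can be sketched in a few lines of numpy (illustrative only; EDiCTius's actual transform, precision, and quantization choices are described in the paper):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix: entry (k, m) for frequency k, sample m."""
    D = np.cos(np.pi * (2 * np.arange(n) + 1)[None, :]
               * np.arange(n)[:, None] / (2 * n))
    D *= np.sqrt(2.0 / n)
    D[0] /= np.sqrt(2.0)        # DC row normalization
    return D

D = dct_matrix(8)
block = np.arange(64, dtype=float).reshape(8, 8)  # hypothetical pixel block
coeffs = D @ block @ D.T                          # forward separable 2D DCT
recon = D.T @ coeffs @ D                          # inverse: D is orthogonal
```

Because D is orthogonal, the inverse transform is just the transpose, and energy is compacted into the low-frequency coefficients, which is what makes subsequent quantization and entropy coding effective.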

  20. Determination of the detection efficiency of a HPGe detector by means of the MCNP 4A simulation code

    International Nuclear Information System (INIS)

    Leal, B.

    2004-01-01

    In most laboratories, the efficiency calibration of the detector is carried out by measuring standard sources of gamma photons of known activity, or matrices containing a variety of radionuclides spanning the energy range of interest. Given the experimental importance of determining efficiency curves for establishing quantitative results, the response function of the detector used at the Regional Center of Nuclear Studies was simulated with the Monte Carlo code MCNP-4A over the energy range of 80 keV to 1400 keV, varying the density of the matrix. The fit obtained shows an acceptable level of agreement in the range of 100 to 600 keV, with a percentage discrepancy smaller than 5%. (Author)
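A common way to represent such an efficiency curve, whether the points come from standard sources or from a Monte Carlo simulation, is a low-order polynomial in ln(E) fitted to ln(ε). A generic numpy sketch with synthetic points generated from an assumed smooth model (all coefficients and energies below are made up, not the paper's data):

```python
import numpy as np

# Synthetic calibration points from an assumed smooth log-log model
true_coeffs = [0.0, -0.02, -0.4, -1.5]           # cubic in ln(E), made up
energies = np.array([80., 150., 300., 600., 900., 1200., 1400.])  # keV
efficiencies = np.exp(np.polyval(true_coeffs, np.log(energies)))

# Fit ln(efficiency) as a cubic polynomial in ln(E)
fit = np.polyfit(np.log(energies), np.log(efficiencies), deg=3)

def efficiency(E_keV):
    """Efficiency interpolated from the fitted log-log curve."""
    return np.exp(np.polyval(fit, np.log(E_keV)))
```

The log-log form captures the typical power-law fall-off of HPGe full-energy-peak efficiency above ~100 keV with very few parameters.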

  1. Image sensor system with bio-inspired efficient coding and adaptation.

    Science.gov (United States)

    Okuno, Hirotsugu; Yagi, Tetsuya

    2012-08-01

    We designed and implemented an image sensor system equipped with three bio-inspired coding and adaptation strategies: logarithmic transform, local average subtraction, and feedback gain control. The system comprises a field-programmable gate array (FPGA), a resistive network, and active pixel sensors (APS), whose light intensity-voltage characteristics are controllable. The system employs multiple time-varying reset voltage signals for APS in order to realize multiple logarithmic intensity-voltage characteristics, which are controlled so that the entropy of the output image is maximized. The system also employs local average subtraction and gain control in order to obtain images with an appropriate contrast. The local average is calculated by the resistive network instantaneously. The designed system was successfully used to obtain appropriate images of objects that were subjected to large changes in illumination.
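The three strategies named in the abstract compose naturally as image operations: compress dynamic range with a logarithm, subtract a local average to extract contrast, and apply a gain. A software sketch of that pipeline (illustrative only; the actual system implements these in FPGA, APS characteristics, and a resistive network):

```python
import numpy as np

def box3(img):
    """3x3 local average with periodic boundaries, a simple stand-in for
    the resistive network's instantaneous spatial smoothing."""
    acc = np.zeros_like(img)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            acc += np.roll(np.roll(img, di, axis=0), dj, axis=1)
    return acc / 9.0

def bio_inspired(img, gain=2.0):
    """Logarithmic transform, local-average subtraction, fixed gain.
    (A real system adapts the gain to keep output contrast appropriate.)"""
    log_img = np.log1p(img)
    return gain * (log_img - box3(log_img))

# A uniform scene produces zero response: only local contrast survives
flat = bio_inspired(np.full((8, 8), 7.0))
```

The local-average subtraction makes the output invariant to uniform illumination level, which is exactly why such retina-inspired front ends cope well with large illumination changes.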

  2. Face Image Retrieval of Efficient Sparse Code words and Multiple Attribute in Binning Image

    Directory of Open Access Journals (Sweden)

    Suchitra S

    2017-08-01

    Full Text Available In photography, face recognition and face retrieval play an important role in many applications such as security, criminology and image forensics. Advancements in face recognition make identity matching of an individual with attributes easier. The latest developments in computer vision technologies enable us to extract facial attributes from the input image and provide similar image results. In this paper, we propose a novel LOP and sparse codewords method to provide similar matching results with respect to the input query image. To improve the accuracy of image results with respect to the input image and dynamic facial attributes, the Local Octal Pattern algorithm [LOP] and sparse codewords are applied offline and online. The offline and online procedures of the face image binning technique are applied with sparse codes. Experimental results with the PubFig dataset show that the proposed LOP along with sparse codewords is able to provide matching results with an increased accuracy of 90%.

  3. Analysis of functioning and efficiency of a code blue system in a tertiary care hospital.

    Science.gov (United States)

    Monangi, Srinivas; Setlur, Rangraj; Ramanathan, Ramprasad; Bhasin, Sidharth; Dhar, Mridul

    2018-01-01

    "Code blue" (CB) is a popular hospital emergency code, which is used by hospitals to alert their emergency response team of any cardiorespiratory arrest. The factors affecting the outcomes of emergencies are related to both the patient and the nature of the event. The primary objective was to analyze the survival rate and factors associated with survival and also practical problems related to functioning of a CB system (CBS). After the approval of hospital ethics committee, an analysis and audit was conducted of all patients on whom a CB had been called in our tertiary care hospital over 24 months. Data collected were demographic data, diagnosis, time of cardiac arrest and activation of CBS, time taken by CBS to reach the patient, presenting rhythm on arrival of CB team, details of cardiopulmonary resuscitation (CPR) such as duration and drugs given, and finally, events and outcomes. Chi-square test and logistic regression analysis were used to analyze the data. A total of 720 CB calls were initiated during the period. After excluding 24 patients, 694 calls were studied and analyzed. Six hundred and twenty were true calls and 74 were falls calls. Of the 620, 422 were cardiac arrests and 198 were medical emergencies. Overall survival was 26%. Survival in patients with cardiac arrests was 11.13%. Factors such as age, presenting rhythm, and duration of CPR were found to have a significant effect on survival. Problems encountered were personnel and equipment related. A CBS is effective in improving the resuscitation efforts and survival rates after inhospital cardiac arrests. Age, presenting rhythm at the time of arrest, and duration of CPR have significant effect on survival of the patient after a cardiac arrest. Technical and staff-related problems need to be considered and improved upon.

  4. Efficient high-precision matrix algebra on parallel architectures for nonlinear combinatorial optimization

    KAUST Repository

    Gunnels, John; Lee, Jon; Margulies, Susan

    2010-01-01

    We provide a first demonstration of the idea that matrix-based algorithms for nonlinear combinatorial optimization problems can be efficiently implemented. Such algorithms were mainly conceived by theoretical computer scientists for proving efficiency. We are able to demonstrate the practicality of our approach by developing an implementation on a massively parallel architecture, and exploiting scalable and efficient parallel implementations of algorithms for ultra high-precision linear algebra. Additionally, we have delineated and implemented the necessary algorithmic and coding changes required in order to address problems several orders of magnitude larger, dealing with the limits of scalability from memory footprint, computational efficiency, reliability, and interconnect perspectives. © Springer and Mathematical Programming Society 2010.
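The flavor of ultra high-precision linear algebra can be conveyed with exact rational arithmetic from Python's standard library. This is a toy single-threaded sketch for intuition; the authors' massively parallel implementation is far more sophisticated:

```python
from fractions import Fraction

def solve_exact(A, b):
    """Solve A x = b exactly over the rationals by Gauss-Jordan
    elimination with nonzero pivoting (no rounding error at all)."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(bi)]
         for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        inv = 1 / M[col][col]
        M[col] = [v * inv for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [v - factor * p for v, p in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

x = solve_exact([[1, 2], [3, 4]], [5, 6])   # exact solution: [-4, 9/2]
```

Exact (or very high precision) arithmetic is what lets matrix-based certificates for combinatorial optimization be trusted, at the cost of coefficient growth that the paper's scalability work has to manage.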

  5. Efficient high-precision matrix algebra on parallel architectures for nonlinear combinatorial optimization

    KAUST Repository

    Gunnels, John

    2010-06-01

    We provide a first demonstration of the idea that matrix-based algorithms for nonlinear combinatorial optimization problems can be efficiently implemented. Such algorithms were mainly conceived by theoretical computer scientists for proving efficiency. We are able to demonstrate the practicality of our approach by developing an implementation on a massively parallel architecture, and exploiting scalable and efficient parallel implementations of algorithms for ultra high-precision linear algebra. Additionally, we have delineated and implemented the necessary algorithmic and coding changes required in order to address problems several orders of magnitude larger, dealing with the limits of scalability from memory footprint, computational efficiency, reliability, and interconnect perspectives. © Springer and Mathematical Programming Society 2010.

  6. High-Performance Java Codes for Computational Fluid Dynamics

    Science.gov (United States)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale computationally-intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  7. LABAN-PEL: a two-dimensional, multigroup diffusion, high-order response matrix code

    International Nuclear Information System (INIS)

    Mueller, E.Z.

    1991-06-01

    The capabilities of LABAN-PEL are described. LABAN-PEL is a modified version of the two-dimensional, high-order response matrix code, LABAN, written by Lindahl. The new version extends the capabilities of the original code with regard to the treatment of neutron migration by including an option to utilize full group-to-group diffusion coefficient matrices. In addition, the code has been converted from single to double precision and the necessary routines added to activate its multigroup capability. The coding has also been converted to standard FORTRAN-77 to enhance the portability of the code. Details regarding the input data requirements and calculational options of LABAN-PEL are provided. 13 refs

  8. High Efficiency Lighting with Integrated Adaptive Control (HELIAC), Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed project is the continued development of the High Efficiency Lighting with Integrated Adaptive Control (HELIAC) system. Solar radiation is not a viable...

  9. High Efficiency Lighting with Integrated Adaptive Control (HELIAC), Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation of the proposed project is the development of High Efficiency Lighting with Integrated Adaptive Control (HELIAC) systems to drive plant growth. Solar...

  10. Efficiency of poly-generating high temperature fuel cells

    Energy Technology Data Exchange (ETDEWEB)

    Margalef, Pere; Brown, Tim; Brouwer, Jacob; Samuelsen, Scott [National Fuel Cell Research Center (NFCRC), University of California, Irvine, CA 92697-3550 (United States)

    2011-02-15

    High temperature fuel cells can be designed and operated to poly-generate electricity, heat, and useful chemicals (e.g., hydrogen) in a variety of configurations. The highly integrated and synergistic nature of poly-generating high temperature fuel cells, however, precludes a simple definition of efficiency for analysis and comparison of performance to traditional methods. There is a need to develop and define a methodology to calculate each of the co-product efficiencies that is useful for comparative analyses. Methodologies for calculating poly-generation efficiencies are defined and discussed. The methodologies are applied to analysis of a Hydrogen Energy Station (H{sub 2}ES) showing that high conversion efficiency can be achieved for poly-generation of electricity and hydrogen. (author)
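One natural overall figure of merit divides all useful outputs by the fuel input on a consistent (e.g., LHV) basis. A sketch with made-up operating numbers; the paper defines and compares several such methodologies, and this single ratio is only one of them:

```python
def polygeneration_efficiency(p_electric_kw, h2_kg_per_h, heat_kw,
                              fuel_kg_per_h, lhv_fuel_kj_per_kg,
                              lhv_h2_kj_per_kg=120_000.0):
    """Fraction of fuel energy (LHV basis) recovered as electricity,
    hydrogen, and useful heat. All operating values are illustrative."""
    fuel_kw = fuel_kg_per_h * lhv_fuel_kj_per_kg / 3600.0
    h2_kw = h2_kg_per_h * lhv_h2_kj_per_kg / 3600.0
    return (p_electric_kw + h2_kw + heat_kw) / fuel_kw

# Hypothetical operating point: 250 kW electric, 10 kg/h H2, 50 kW heat,
# from 60 kg/h natural gas (LHV ~ 47,000 kJ/kg)
eta = polygeneration_efficiency(250.0, 10.0, 50.0, 60.0, 47_000.0)
```

Lumping all products into one ratio hides how the co-products interact, which is precisely why the paper argues for per-product efficiency definitions for comparative analyses.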

  11. An Improved, Highly Efficient Method for the Synthesis of Bisphenols

    Directory of Open Access Journals (Sweden)

    L. S. Patil

    2011-01-01

    Full Text Available An efficient synthesis of bisphenols by condensation of substituted phenols with the corresponding cyclic ketones, in the presence of cetyltrimethylammonium chloride and 3-mercaptopropionic acid as catalyst, is described; the products are obtained in extremely high purity and yield.

  12. High Efficiency S-Band 20 Watt Amplifier

    Data.gov (United States)

    National Aeronautics and Space Administration — This project includes the design and build of a prototype 20 W, high efficiency, S-Band amplifier.   The design will incorporate the latest semiconductor technology,...

  13. Process development for high-efficiency silicon solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Gee, J.M.; Basore, P.A.; Buck, M.E.; Ruby, D.S.; Schubert, W.K.; Silva, B.L.; Tingley, J.W.

    1991-12-31

    Fabrication of high-efficiency silicon solar cells in an industrial environment requires a different optimization than in a laboratory environment. Strategies are presented for process development of high-efficiency silicon solar cells, with a goal of simplifying technology transfer into an industrial setting. The strategies emphasize the use of statistical experimental design for process optimization, and the use of baseline processes and cells for process monitoring and quality control. 8 refs.

  14. Highly efficient procedure for the transesterification of vegetable oil

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Xuezheng; Gao, Shan; He, Mingyuan [Shanghai Key Laboratory of Green Chemistry and Chemical Process, Department of Chemistry, East China Normal University, Shanghai 200062 (China); Yang, Jianguo [Shanghai Key Laboratory of Green Chemistry and Chemical Process, Department of Chemistry, East China Normal University, Shanghai 200062 (China); Energy Institute, Department of Materials Science and Engineering, Pennsylvania State University, University Park, PA 16802 (United States)

    2009-10-15

    A highly efficient procedure has been developed for the synthesis of biodiesel from vegetable oil and methanol. KF/MgO was selected as the most efficient catalyst for the reaction, with a yield of 99.3%. Operational simplicity, no need for purification of the raw vegetable oil, the low cost of the catalyst, high activity, absence of saponification, and reusability are the key features of this methodology. (author)

  15. The photonic nanowire: A highly efficient single-photon source

    DEFF Research Database (Denmark)

    Gregersen, Niels

    2014-01-01

    The photonic nanowire represents an attractive platform for a quantum light emitter. However, careful optical engineering using the modal method, which elegantly allows access to all relevant physical parameters, is crucial to ensure high efficiency.

  16. Highly Efficient Spontaneous Emission from Self-Assembled Quantum Dots

    DEFF Research Database (Denmark)

    Johansen, Jeppe; Lund-Hansen, Toke; Hvam, Jørn Märcher

    2006-01-01

    We present time-resolved measurements of spontaneous emission (SE) from InAs/GaAs quantum dots (QDs). The measurements are interpreted using Fermi's Golden Rule, and from this analysis we establish the parameters for high quantum efficiency.

  17. Global climate change: Mitigation opportunities high efficiency large chiller technology

    Energy Technology Data Exchange (ETDEWEB)

    Stanga, M.V.

    1997-12-31

    This paper, comprised of presentation viewgraphs, examines the impact of high efficiency large chiller technology on world electricity consumption and carbon dioxide emissions. Background data are summarized, and sample calculations are presented. Calculations show that presently available high energy efficiency chiller technology has the ability to substantially reduce energy consumption from large chillers. If this technology is widely implemented on a global basis, it could reduce carbon dioxide emissions by 65 million tons by 2010.

  18. An environment for high energy physics code development

    International Nuclear Information System (INIS)

    Wisinski, D.E.

    1987-01-01

    As the size and complexity of high energy experiments increase there will be a greater need for better software tools and new programming environments. If these are not commercially available, then we must build them ourselves. This paper describes a prototype programming environment featuring a new type of file system, a ''smart'' editor, and integrated file management tools. This environment was constructed under the IBM VM/SP operating system. It uses the system interpreter, the system editor and the NOMAD2 relational database management system to create a software ''shell'' for the programmer. Some extensions to this environment are explored. (orig.)

  19. Quo vadis code optimization in high energy physics

    International Nuclear Information System (INIS)

    Jarp, S.

    1994-01-01

    Although performance tuning and optimization can be considered less critical than in the past, there are still many High Energy Physics (HEP) applications and application domains that can profit from such an undertaking. In CERN's CORE (Centrally Operated RISC Environment) where all major RISC vendors are present, this implies an understanding of the various computer architectures, instruction sets and performance analysis tools from each of these vendors. This paper discusses some initial observations after having evaluated the situation and makes some recommendations for further progress

  20. PIXIE3D: An efficient, fully implicit, parallel, 3D extended MHD code for fusion plasma modeling

    International Nuclear Information System (INIS)

    Chacon, L.

    2007-01-01

    PIXIE3D is a modern, parallel, state-of-the-art extended MHD code that employs fully implicit methods for efficiency and accuracy. It features a general geometry formulation, and is therefore suitable for the study of many magnetic fusion configurations of interest. PIXIE3D advances the state of the art in extended MHD modeling in two fundamental ways. Firstly, it employs a novel conservative finite volume scheme which is remarkably robust and stable, and demands very small physical and/or numerical dissipation. This is a fundamental requirement when one wants to study fusion plasmas with realistic conductivities. Secondly, PIXIE3D features fully-implicit time stepping, employing Newton-Krylov methods for inverting the associated nonlinear systems. These methods have been shown to be scalable and efficient when preconditioned properly. Novel preconditioner ideas (so-called physics-based), which were prototyped in the context of reduced MHD, have been adapted for 3D primitive-variable resistive MHD in PIXIE3D, and are currently being extended to Hall MHD. PIXIE3D is fully parallel, employing PETSc for parallelism. PIXIE3D has been thoroughly benchmarked against linear theory and against other available extended MHD codes on nonlinear test problems (such as the GEM reconnection challenge). We are currently in the process of extending such comparisons to fusion-relevant problems in realistic geometries. In this talk, we will describe both the spatial discretization approach and the preconditioning strategy employed for extended MHD in PIXIE3D. We will report on recent benchmarking studies between PIXIE3D and other 3D extended MHD codes, and will demonstrate its usefulness in a variety of fusion-relevant configurations such as Tokamaks and Reversed Field Pinches. (Author)

  1. High efficiency USC power plant - present status and future potential

    Energy Technology Data Exchange (ETDEWEB)

    Blum, R. [Faelleskemikerne I/S Fynsvaerket (Denmark); Hald, J. [Elsam/Elkraft/TU Denmark (Denmark)

    1998-12-31

    Increasing demand for energy production with low impact on the environment and minimised fuel consumption can be met by highly efficient coal-fired power plants with advanced steam parameters. An important key to this improvement is the development of high temperature materials with optimised mechanical strength. Based on the results of more than ten years of development a coal fired power plant with an efficiency above 50 % can now be realised. Future developments focus on materials which enable an efficiency of 52-55 %. (orig.) 25 refs.

  3. FPGA-Based Channel Coding Architectures for 5G Wireless Using High-Level Synthesis

    Directory of Open Access Journals (Sweden)

    Swapnil Mhaske

    2017-01-01

    Full Text Available We propose strategies to achieve a high-throughput FPGA architecture for quasi-cyclic low-density parity-check codes based on circulant-1 identity matrix construction. By splitting the node processing operation in the min-sum approximation algorithm, we achieve pipelining in the layered decoding schedule without utilizing additional hardware resources. High-level synthesis compilation is used to design and develop the architecture on the FPGA hardware platform. To validate this architecture, an IEEE 802.11n compliant 608 Mb/s decoder is implemented on the Xilinx Kintex-7 FPGA using the LabVIEW FPGA Compiler in the LabVIEW Communication System Design Suite. Architecture scalability was leveraged to accomplish a 2.48 Gb/s decoder on a single Xilinx Kintex-7 FPGA. Further, we present rapidly prototyped experimentation of an IEEE 802.16 compliant hybrid automatic repeat request system based on the efficient decoder architecture developed. In spite of the mixed nature of data processing—digital signal processing and finite-state machines—LabVIEW FPGA Compiler significantly reduced time to explore the system parameter space and to optimize in terms of error performance and resource utilization. A 4x improvement in the system throughput, relative to a CPU-based implementation, was achieved to measure the error-rate performance of the system over large, realistic data sets using accelerated, in-hardware simulation.
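    The layered schedule above builds on the standard min-sum check-node update, which can be sketched in a few lines of software (an illustrative sketch with assumed names, not the paper's pipelined FPGA implementation):

    ```python
    def min_sum_check_update(llrs, scale=0.75):
        """Scaled min-sum check-node update: each outgoing edge gets the
        product of the signs of all *other* incoming LLRs times the
        minimum of their magnitudes.  Illustrative software sketch only;
        names and the scaling factor are assumptions."""
        signs = [1 if v >= 0 else -1 for v in llrs]
        mags = [abs(v) for v in llrs]
        total_sign = 1
        for s in signs:
            total_sign *= s
        # tracking the two smallest magnitudes lets each edge be
        # excluded in O(1), which is what makes hardware pipelining cheap
        order = sorted(range(len(mags)), key=lambda i: mags[i])
        m1, m2 = mags[order[0]], mags[order[1]]
        out = []
        for i in range(len(llrs)):
            m = m2 if i == order[0] else m1
            out.append(scale * total_sign * signs[i] * m)
        return out
    ```

    The two-minima trick shown in the comment is the standard reason min-sum maps well onto fixed hardware resources.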

  4. Fast Binary Coding for the Scene Classification of High-Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Fan Hu

    2016-06-01

    Full Text Available Scene classification of high-resolution remote sensing (HRRS imagery is an important task in the intelligent processing of remote sensing images and has attracted much attention in recent years. Although the existing scene classification methods, e.g., the bag-of-words (BOW model and its variants, can achieve acceptable performance, these approaches strongly rely on the extraction of local features and the complicated coding strategy, which are usually time consuming and demand much expert effort. In this paper, we propose a fast binary coding (FBC method, to effectively generate efficient discriminative scene representations of HRRS images. The main idea is inspired by the unsupervised feature learning technique and the binary feature descriptions. More precisely, equipped with the unsupervised feature learning technique, we first learn a set of optimal “filters” from large quantities of randomly-sampled image patches and then obtain feature maps by convolving the image scene with the learned filters. After binarizing the feature maps, we perform a simple hashing step to convert the binary-valued feature map to the integer-valued feature map. Finally, statistical histograms computed on the integer-valued feature map are used as global feature representations of the scenes of HRRS images, similar to the conventional BOW model. The analysis of the algorithm complexity and experiments on HRRS image datasets demonstrate that, in contrast with existing scene classification approaches, the proposed FBC has much faster computational speed and achieves comparable classification performance. In addition, we also propose two extensions to FBC, i.e., the spatial co-occurrence matrix and different visual saliency maps, for further improving its final classification accuracy.
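    The binarize-hash-histogram stages of FBC can be sketched as follows (a hypothetical helper operating on already-binarized feature maps; the filter learning and convolution stages are omitted, and all names are assumptions):

    ```python
    def binary_maps_to_histogram(binary_maps):
        """Hash B binary feature maps into one integer-valued map by
        treating the B bits at each pixel as a B-bit code, then return
        the normalized histogram used as the global scene descriptor.
        Simplified sketch of the FBC pipeline's final stages."""
        n_maps = len(binary_maps)
        h, w = len(binary_maps[0]), len(binary_maps[0][0])
        hist = [0] * (1 << n_maps)
        for y in range(h):
            for x in range(w):
                code = 0
                for b in range(n_maps):
                    code = (code << 1) | binary_maps[b][y][x]
                hist[code] += 1
        total = h * w
        return [c / total for c in hist]
    ```

    With B filters the descriptor has 2^B bins, so B is kept small in practice, mirroring the cardinality trade-off in the BOW codebook it replaces.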

  5. Efficient File Sharing by Multicast - P2P Protocol Using Network Coding and Rank Based Peer Selection

    Science.gov (United States)

    Stoenescu, Tudor M.; Woo, Simon S.

    2009-01-01

    In this work, we consider information dissemination and sharing in a distributed, highly dynamic peer-to-peer (P2P) communication network. In particular, we explore a network coding technique for transmission and a rank-based peer selection method for network formation. The combined approach has been shown to improve information sharing and delivery to all users under the challenges imposed by space network environments.

  6. Status of design code work for metallic high temperature components

    International Nuclear Information System (INIS)

    Bieniussa, K.; Seehafer, H.J.; Over, H.H.; Hughes, P.

    1984-01-01

    The mechanical components of high temperature gas-cooled reactors (HTGR) are exposed to temperatures up to about 1000 deg. C in a more or less corrosive gas environment. Under these conditions metallic structural materials show a time-dependent structural behavior; furthermore, changes in the structure of the material and loss of material at the surface can result. The structural material of the components is stressed by load-controlled quantities, for example pressure or dead weight, and/or deformation-controlled quantities, for example thermal expansion or temperature distribution, and thus it can suffer growing permanent strains and deformations and an exhaustion of the material (damage), both followed by failure. To avoid a failure of the components the design requires the consideration of the following structural failure modes: ductile rupture due to short-term loadings; creep rupture due to long-term loadings; creep-fatigue failure due to cyclic loadings; excessive strains due to incremental deformation or creep ratcheting; loss of function due to excessive deformations; loss of stability due to short-term loadings; loss of stability due to long-term loadings; environmentally caused material failure (excessive corrosion); fast fracture due to unstable crack growth

  7. Charge transport in highly efficient iridium cored electrophosphorescent dendrimers

    Science.gov (United States)

    Markham, Jonathan P. J.; Samuel, Ifor D. W.; Lo, Shih-Chun; Burn, Paul L.; Weiter, Martin; Bässler, Heinz

    2004-01-01

    Electrophosphorescent dendrimers are promising materials for highly efficient light-emitting diodes. They consist of a phosphorescent core onto which dendritic groups are attached. Here, we present an investigation into the optical and electronic properties of highly efficient phosphorescent dendrimers. The effect of dendrimer structure on charge transport and optical properties is studied using temperature-dependent charge-generation-layer time-of-flight measurements and current voltage (I-V) analysis. A model is used to explain trends seen in the I-V characteristics. We demonstrate that fine tuning the mobility by chemical structure is possible in these dendrimers and show that this can lead to highly efficient bilayer dendrimer light-emitting diodes with neat emissive layers. Power efficiencies of 20 lm/W were measured for devices containing a second-generation (G2) Ir(ppy)3 dendrimer with a 1,3,5-tris(2-N-phenylbenzimidazolyl)benzene electron transport layer.

  8. Very-High Efficiency, High Power Laser Diodes, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — AdTech Photonics, in collaboration with the Center for Advanced Studies in Photonics Research (CASPR) at UMBC, is pleased to submit this proposal entitled "Very-High...

  9. An efficient explicit numerical scheme for diffusion-type equations with a highly inhomogeneous and highly anisotropic diffusion tensor

    International Nuclear Information System (INIS)

    Larroche, O.

    2007-01-01

    A locally split-step explicit (LSSE) algorithm was developed for efficiently solving a multi-dimensional advection-diffusion type equation involving a highly inhomogeneous and highly anisotropic diffusion tensor, which makes the problem very ill-conditioned for standard implicit methods involving the iterative solution of large linear systems. The need for such an optimized algorithm arises, in particular, in the frame of thermonuclear fusion applications, for the purpose of simulating fast charged-particle slowing-down with an ion Fokker-Planck code. The LSSE algorithm is presented in this paper along with the results of a model slowing-down problem to which it has been applied
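    The idea of taking explicit steps at the stability limit can be illustrated with a much-simplified, globally sub-cycled 1D analogue (the actual LSSE algorithm applies the splitting locally and handles a full anisotropic tensor; names and the model equation here are assumptions):

    ```python
    import math

    def explicit_diffusion_step(u, d, dx, dt_target):
        """Advance du/dt = d(x) * d2u/dx2 by dt_target with an explicit
        scheme, sub-cycling at the stability limit dt <= dx^2/(2*max d)
        so that stiff (high-diffusivity) problems stay stable without a
        linear solve.  Boundary values are held fixed."""
        dt_stable = dx * dx / (2.0 * max(d))
        n_sub = max(1, math.ceil(dt_target / dt_stable))
        dt = dt_target / n_sub
        u = list(u)
        for _ in range(n_sub):
            new = u[:]
            for i in range(1, len(u) - 1):
                new[i] = u[i] + dt * d[i] * (u[i-1] - 2*u[i] + u[i+1]) / dx**2
            u = new
        return u
    ```

    The point of the local variant in the paper is that only the ill-conditioned cells pay for the extra sub-steps, whereas the global sub-cycling above pays everywhere.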

  10. High-Capacity Quantum Secure Direct Communication Based on Quantum Hyperdense Coding with Hyperentanglement

    International Nuclear Information System (INIS)

    Wang Tie-Jun; Li Tao; Du Fang-Fang; Deng Fu-Guo

    2011-01-01

    We present a quantum hyperdense coding protocol with hyperentanglement in polarization and spatial-mode degrees of freedom of photons first and then give the details for a quantum secure direct communication (QSDC) protocol based on this quantum hyperdense coding protocol. This QSDC protocol has the advantage of having a higher capacity than the quantum communication protocols with a qubit system. Compared with the QSDC protocol based on superdense coding with d-dimensional systems, this QSDC protocol is more feasible as the preparation of a high-dimension quantum system is more difficult than that of a two-level quantum system at present. (general)

  11. HETFIS: High-Energy Nucleon-Meson Transport Code with Fission

    International Nuclear Information System (INIS)

    Barish, J.; Gabriel, T.A.; Alsmiller, F.S.; Alsmiller, R.G. Jr.

    1981-07-01

    A model that includes fission for predicting particle production spectra from medium-energy nucleon and pion collisions with nuclei (Z greater than or equal to 91) has been incorporated into the nucleon-meson transport code, HETC. This report is primarily concerned with the programming aspects of HETFIS (High-Energy Nucleon-Meson Transport Code with Fission). A description of the program data and instructions for operating the code are given. HETFIS is written in FORTRAN IV for the IBM computers and is readily adaptable to other systems

  12. High-radix transforms for Reed-Solomon codes over Fermat primes

    Science.gov (United States)

    Liu, K. Y.; Reed, I. S.; Truong, T. K.

    1977-01-01

    A method is proposed to streamline the transform decoding algorithm for Reed-Solomon (RS) codes of length equal to 2 raised to the power 2n. It is shown that a high-radix fast Fourier transform (FFT) type algorithm with generator equal to 3 on GF(F sub n), where F sub n is a Fermat prime, can be used to decode RS codes of this length. For a 256-symbol RS code, a radix 4 and radix 16 FFT over GF(F sub 3) require, respectively, 30 and 70% fewer modulo F sub n multiplications than the usual radix 2 FFT.
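    The underlying transform is a number-theoretic FFT over GF(F_3) with F_3 = 2^8 + 1 = 257, where 3 is a primitive root. A naive O(n^2) sketch (not the radix-4/16 factorization the paper advocates) shows the transform/inverse roundtrip that transform decoding relies on:

    ```python
    def ntt(a, p=257, g=3):
        """Forward number-theoretic transform over GF(257); the length n
        must divide 256 so that g**(256//n) is an n-th root of unity.
        Naive evaluation for clarity only."""
        n = len(a)
        assert 256 % n == 0
        w = pow(g, 256 // n, p)  # n-th root of unity mod 257
        return [sum(a[j] * pow(w, i * j, p) for j in range(n)) % p
                for i in range(n)]

    def intt(A, p=257, g=3):
        """Inverse transform: conjugate root and divide by n (mod p)."""
        n = len(A)
        w_inv = pow(pow(g, 256 // n, p), p - 2, p)
        n_inv = pow(n, p - 2, p)
        return [n_inv * sum(A[j] * pow(w_inv, i * j, p) for j in range(n)) % p
                for i in range(n)]
    ```

    Because 257 is a Fermat prime, all roots of unity are powers of small integers and the radix-4/16 factorizations counted in the abstract replace most multiplications with cheap shifts.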

  13. [Characteristics of phosphorus uptake and use efficiency of rice with high yield and high phosphorus use efficiency].

    Science.gov (United States)

    Li, Li; Zhang, Xi-Zhou; Li, Ting-Xuan; Yu, Hai-Ying; Ji, Lin; Chen, Guang-Deng

    2014-07-01

    A total of twenty seven middle maturing rice varieties as parent materials were divided into four types based on P use efficiency for grain yield in 2011 by field experiment with normal phosphorus (P) application. The rice variety with high yield and high P efficiency was identified by pot experiment with normal and low P applications, and the contribution rates of various P efficiencies to yield were investigated in 2012. There were significant genotype differences in yield and P efficiency of the test materials. GRLu17/AiTTP//Lu17_2 (QR20) was identified as a variety with high yield and high P efficiency, and its yields at the low and normal rates of P application were 1.96 and 1.92 times of that of Yuxiang B, respectively. The contribution rate of P accumulation to yield was greater than that of P grain production efficiency and P harvest index across field and pot experiments. The contribution rates of P accumulation and P grain production efficiency to yield were not significantly different under the normal P condition, whereas obvious differences were observed under the low P condition (66.5% and 26.6%). The minimal contribution to yield was P harvest index (11.8%). Under the normal P condition, the contribution rates of P accumulation to yield and P harvest index were the highest at the jointing-heading stage, which were 93.4% and 85.7%, respectively. In addition, the contribution rate of P accumulation to grain production efficiency was 41.8%. Under the low P condition, the maximal contribution rates of P accumulation to yield and grain production efficiency were observed at the tillering-jointing stage, which were 56.9% and 20.1% respectively. Furthermore, the contribution rate of P accumulation to P harvest index was 16.0%. The yield, P accumulation, and P harvest index of QR20 significantly increased under the normal P condition by 20.6%, 18.1% and 18.2% respectively compared with that in the low P condition. The rank of the contribution rates of P

  14. High spectral efficiency optical CDMA system based on guard-time and optical hard-limiting (OHL)

    Energy Technology Data Exchange (ETDEWEB)

    Gagliardi, R M; Bennett, C V; Mendez, A J; Hernandez, V J; Lennon, W J

    2003-12-02

    Optical code-division multiple access (OCDMA) is an interesting subject of research because of its potential to support asynchronous, bursty communications. OCDMA has been investigated for local area networks, access networks, and, more recently, as a packet label for emerging networks. Two-dimensional (2-D) OCDMA codes are preferred in current research because of the flexibility of designing the codes and their higher cardinality and spectral efficiency (SE) compared with direct sequence codes based on on-off keying and intensity modulation/direct detection, and because they lend themselves to being implemented with devices developed for wavelength-division-multiplexed (WDM) transmission (the 2-D codes typically combine wavelength and time as the two dimensions of the codes). This paper shows rigorously that 2-D wavelength/time codes have better SE than one-dimensional (1-D) CDMA/WDM combinations (of the same cardinality). Then, the paper describes a specific set of wavelength/time (W/T) codes and their implementation. These 2-D codes are high performance because they simultaneously have high cardinality (≫10), per-user high bandwidth (>1 Gb/s), and high SE (>0.10 b/s/Hz). The physical implementation of these W/T codes is described and their performance evaluated by system simulations and measurements on an OCDMA technology demonstrator. This research shows that OCDMA implementation complexity (e.g., incorporating double hard-limiting and interference estimation) can be avoided by using a guard time in the codes and an optical hard limiter in the receiver.

  15. High Efficiency of Two Efficient QSDC with Authentication Is at the Cost of Their Security

    International Nuclear Information System (INIS)

    Su-Juan, Qin; Qiao-Yan, Wen; Luo-Ming, Meng; Fu-Chen, Zhu

    2009-01-01

    Two efficient protocols of quantum secure direct communication with authentication [Chin. Phys. Lett. 25 (2008) 2354] were recently proposed by Liu et al. to improve the efficiency of two protocols presented in [Phys. Rev. A 75 (2007) 026301] by four Pauli operations. We show that the high efficiency of the two protocols comes at the expense of their security. In the first protocol, the authenticator Trent can obtain half of the secret by a particular attack strategy. In the second protocol, not only Trent but also an outside eavesdropper can elicit half of the information about the secret from the public declaration

  16. Detonation of high explosives in Lagrangian hydrodynamic codes using the programmed burn technique

    International Nuclear Information System (INIS)

    Berger, M.E.

    1975-09-01

    Two initiation methods were developed for improving the programmed burn technique for detonation of high explosives in smeared-shock Lagrangian hydrodynamic codes. The methods are verified by comparing the improved programmed burn with existing solutions in one-dimensional plane, converging, and diverging geometries. Deficiencies in the standard programmed burn are described. One of the initiation methods has been determined to be better for inclusion in production hydrodynamic codes
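    The standard programmed-burn technique referred to above assigns each cell a burn time proportional to its distance from the detonation point and releases the energy over a fixed burn width. A minimal sketch in its textbook form (parameter names are assumptions; the report's improvements concern the initiation treatment, which is not shown):

    ```python
    import math

    def burn_fraction(cell_pos, det_point, det_time, t, det_vel, burn_width):
        """Programmed-burn fraction for one cell: the cell ignites at
        t_burn = det_time + distance/D and its energy-release fraction
        ramps linearly from 0 to 1 over a time burn_width/D."""
        dist = math.dist(cell_pos, det_point)
        t_burn = det_time + dist / det_vel
        f = (t - t_burn) * det_vel / burn_width
        return min(1.0, max(0.0, f))
    ```

    The burn width is typically a few zone widths, which is what ties the technique to the smeared-shock treatment of the hydrodynamic code.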

  17. Renewable Energy and Energy Efficiency Technologies in Residential Building Codes: June 15, 1998 to September 15, 1998

    Energy Technology Data Exchange (ETDEWEB)

    Wortman, D.; Echo-Hawk, L.

    2005-02-01

    This report is an attempt to describe the building code requirements and impediments to the application of EE and RE technologies in residential buildings. Several modern model building codes were reviewed. These are representative of the codes that will be adopted by most locations in the coming years. The codes reviewed for this report include: International Residential Code, First Draft, April 1998; International Energy Conservation Code, 1998; International Mechanical Code, 1998; International Plumbing Code, 1997; International Fuel Gas Code, 1997; National Electrical Code, 1996. These codes were reviewed as to their application to (1) PV systems in buildings and building-integrated PV systems and (2) active solar domestic hot water and space-heating systems. A discussion of general code issues that impact these technologies is also included. Examples of this are solar access and sustainability.

  18. High quality, high efficiency welding technology for nuclear power plants

    International Nuclear Information System (INIS)

    Aoki, Shigeyuki; Nagura, Yasumi

    1996-01-01

    Nuclear power plants are required to ensure safety with high reliability and to attain a high rate of operation. In the manufacture and installation of machinery and equipment, the underlying welding techniques exert a large influence on both. For the purpose of improving joint performance and excluding human error, welding heat input and the number of passes have been reduced, the automation of welding has been advanced, and at present, narrow-gap arc welding and high-energy-density welding such as electron beam welding and laser welding have been put to practical use. Automatic gas metal arc welding is also employed for pipings. The welding of main machinery and equipment comprises the welding of joints that constitute pressure boundaries, the build-up welding on the internal surfaces of pressure vessels for separating primary water from them, and the sealing welding of heating tubes and tube plates in steam generators. These weldings are explained, and the welding of pipings and the state of development and application of new welding methods are reported. (K.I.)

  19. High-concentration planar microtracking photovoltaic system exceeding 30% efficiency

    Science.gov (United States)

    Price, Jared S.; Grede, Alex J.; Wang, Baomin; Lipski, Michael V.; Fisher, Brent; Lee, Kyu-Tae; He, Junwen; Brulo, Gregory S.; Ma, Xiaokun; Burroughs, Scott; Rahn, Christopher D.; Nuzzo, Ralph G.; Rogers, John A.; Giebink, Noel C.

    2017-08-01

    Prospects for concentrating photovoltaic (CPV) power are growing as the market increasingly values high power conversion efficiency to leverage now-dominant balance-of-system and soft costs. This trend is particularly acute for rooftop photovoltaic power, where delivering the high efficiency of traditional CPV in the form factor of a standard rooftop photovoltaic panel could be transformative. Here, we demonstrate a fully automated planar microtracking CPV system operating at a 660× concentration ratio over a 140° full field of view. In outdoor testing over the course of two sunny days, the system operates automatically from sunrise to sunset, outperforming a 17%-efficient commercial silicon solar cell by generating >50% more energy per unit area per day in a direct head-to-head competition. These results support the technical feasibility of planar microtracking CPV to deliver a step change in the efficiency of rooftop solar panels at a commercially relevant concentration ratio.

  20. Development of high-efficiency solar cells on silicon web

    Science.gov (United States)

    Meier, D. L.; Greggi, J.; Okeeffe, T. W.; Rai-Choudhury, P.

    1986-01-01

    Work was performed to improve web base material with a goal of obtaining solar cell efficiencies in excess of 18% (AM1). Efforts in this program are directed toward identifying carrier loss mechanisms in web silicon, eliminating or reducing these mechanisms, designing a high efficiency cell structure with the aid of numerical models, and fabricating high efficiency web solar cells. Fabrication techniques must preserve or enhance carrier lifetime in the bulk of the cell and minimize recombination of carriers at the external surfaces. Three completed cells were viewed by cross-sectional transmission electron microscopy (TEM) in order to investigate further the relation between structural defects and electrical performance of web cells. Consistent with past TEM examinations, the cell with the highest efficiency (15.0%) had no dislocations but did have 11 twin planes.

  1. Efficient Unsteady Flow Visualization with High-Order Access Dependencies

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru

    2016-04-19

    We present a novel model based on high-order access dependencies for efficient pathline computation in unsteady flow visualization. By taking longer access sequences into account to model more sophisticated data access patterns in particle tracing, our method greatly improves the accuracy and reliability of data access prediction. In our work, high-order access dependencies are calculated by tracing uniformly seeded pathlines in both forward and backward directions in a preprocessing stage. The effectiveness of our proposed approach is demonstrated through a parallel particle tracing framework with high-order data prefetching. Results show that our method achieves higher data locality and hence improves the efficiency of pathline computation.
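    An order-k access-dependency model of this kind can be sketched as a table of k-gram transitions over a block-access trace (a simplified illustration with assumed names; the paper builds its table from forward and backward pathline traces):

    ```python
    from collections import defaultdict

    def build_order_k_model(access_trace, k):
        """Count order-k access dependencies: map each length-k window of
        block accesses to the frequencies of the blocks that follow it,
        so a prefetcher can fetch the most likely successor."""
        model = defaultdict(lambda: defaultdict(int))
        for i in range(len(access_trace) - k):
            history = tuple(access_trace[i:i + k])
            model[history][access_trace[i + k]] += 1
        return model

    def predict_next(model, recent_k):
        """Return the most frequent successor of the last k accesses,
        or None if this history was never observed."""
        successors = model.get(tuple(recent_k))
        if not successors:
            return None
        return max(successors, key=successors.get)
    ```

    Larger k sharpens the prediction at the cost of a bigger table and more unseen histories, which is the accuracy/coverage trade-off the abstract alludes to.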

  2. On RELAP5-simulated High Flux Isotope Reactor reactivity transients: Code change and application

    International Nuclear Information System (INIS)

    Freels, J.D.

    1993-01-01

    This paper presents a new and innovative application for the RELAP5 code (hereafter referred to as ''the code''). The code has been used to simulate several transients associated with the (presently) draft version of the High-Flux Isotope Reactor (HFIR) updated safety analysis report (SAR). This paper investigates those thermal-hydraulic transients induced by nuclear reactivity changes. A major goal of the work was to use an existing RELAP5 HFIR model for consistency with other thermal-hydraulic transient analyses of the SAR. To achieve this goal, it was necessary to incorporate a new self-contained point kinetics solver into the code because of a deficiency in the point-kinetics reactivity model of the Mod 2.5 version of the code. The model was benchmarked against previously analyzed (known) transients. Given this new code, four event categories defined by the HFIR probabilistic risk assessment (PRA) were analyzed: (in ascending order of severity) a cold-loop pump start; run-away shim-regulating control cylinder and safety plate withdrawal; control cylinder ejection; and generation of an optimum void in the target region. All transients are discussed. Results of the bounding incredible event transient, the target region optimum void, are shown. Future plans for RELAP5 HFIR applications and recommendations for code improvements are also discussed

  3. Trends in Data Centre Energy Consumption under the European Code of Conduct for Data Centre Energy Efficiency

    Directory of Open Access Journals (Sweden)

    Maria Avgerinou

    2017-09-01

    Full Text Available Climate change is recognised as one of the key challenges humankind is facing. The Information and Communication Technology (ICT) sector, including data centres, generates up to 2% of global CO2 emissions, a number on par with the aviation sector's contribution, and data centres are estimated to have the fastest growing carbon footprint across the whole ICT sector, mainly due to technological advances such as cloud computing and the rapid growth of the use of Internet services. There are no recent estimations of the total energy consumption of European data centres or of their energy efficiency. The aim of this paper is to evaluate, analyse and present the current trends in energy consumption and efficiency in data centres in the European Union, using the data submitted by companies participating in the European Code of Conduct for Data Centre Energy Efficiency programme, a voluntary initiative created in 2008 in response to the increasing energy consumption in data centres and the need to reduce the related environmental, economic and energy supply security impacts. The analysis shows that the average Power Usage Effectiveness (PUE) of the facilities participating in the programme is declining year after year. This confirms that voluntary approaches can be effective in addressing climate and energy issues.
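    The PUE metric tracked by the Code of Conduct is simply the ratio of total facility energy to IT equipment energy; for reference:

    ```python
    def pue(total_facility_kwh, it_equipment_kwh):
        """Power Usage Effectiveness: total facility energy divided by
        IT equipment energy over the same period.  1.0 is the ideal
        lower bound (all energy reaches the IT load)."""
        if it_equipment_kwh <= 0:
            raise ValueError("IT equipment energy must be positive")
        return total_facility_kwh / it_equipment_kwh
    ```

    A declining average PUE, as reported in the paper, means a shrinking share of energy going to cooling, power distribution and other overheads.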

  4. Evaluation of the photon transmission efficiency of light guides used in scintillation detectors using LightTools code

    International Nuclear Information System (INIS)

    Park, Hye Min; Joo, Koan Sik; Kim, Jeong Ho; Kim, Dong Sung; Park, Ki Hyun; Park, Chan Jong; Han, Woo Jin

    2016-01-01

    To optimize the photon transmission efficiency of light guides used in scintillation detectors, the LightTools code, which can model and ray-trace optical systems, was used to analyze photon transmission efficiency with respect to light-guide thickness. This analysis was carried out using a commercial light guide, N-BK7 Optical Glass by SCHOTT, as a model for this study. The luminous exitance characteristic of the LYSO scintillator was used to analyze the photon transmission efficiency according to the thickness of the light guide. The simulations showed that the photon transmission efficiency as a function of light-guide thickness ranged from 13.38% to 33.57%, and was highest for light guides of 4 mm thickness and a receiving angle of 49°. These simulations confirm that photon transmission efficiency depends on light-guide thickness and the resulting changes in the internal angle of reflection. The aim is to produce an actual light guide based on these results and to evaluate its performance
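    The geometric effect the simulations quantify can be illustrated with two back-of-the-envelope quantities, assuming the nominal N-BK7 refractive index of about 1.5168 at 587 nm (a simple geometric sketch, not a LightTools simulation):

    ```python
    import math

    def tir_critical_angle_deg(n_guide, n_outside=1.0):
        """Critical angle for total internal reflection at the guide
        wall; rays striking the wall beyond this angle (measured from
        the normal) are trapped inside the guide."""
        return math.degrees(math.asin(n_outside / n_guide))

    def wall_bounces(length_mm, thickness_mm, ray_angle_deg):
        """Number of wall reflections for a meridional ray traversing a
        plane-parallel guide: thicker guides mean fewer bounces per
        traversal, hence less accumulated loss."""
        lateral = length_mm * math.tan(math.radians(ray_angle_deg))
        return int(lateral // thickness_mm)
    ```

    For N-BK7 the critical angle comes out near 41°, and doubling the guide thickness roughly halves the bounce count, which is the thickness dependence the abstract describes.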

  5. Efficient estimation for ergodic diffusions sampled at high frequency

    DEFF Research Database (Denmark)

    Sørensen, Michael

    A general theory of efficient estimation for ergodic diffusions sampled at high frequency is presented. High frequency sampling is now possible in many applications, in particular in finance. The theory is formulated in terms of approximate martingale estimating functions and covers a large class...

  6. High-Efficiency Klystron Design for the CLIC Project

    CERN Document Server

    Mollard, Antoine; Peauger, Franck; Plouin, Juliette; Beunas, Armel; Marchesin, Rodolphe

    2017-01-01

    The CLIC project requires a new type of RF source for the high-power conditioning of the accelerating cavities. We are working on the development of a new kind of high-efficiency klystron to fulfill this need. This work is performed under the EuCARD-2 European program and involves theoretical and experimental study of a brand new klystron concept.

  7. Efficient estimation for high similarities using odd sketches

    DEFF Research Database (Denmark)

    Mitzenmacher, Michael; Pagh, Rasmus; Pham, Ninh Dang

    2014-01-01

    This means that Odd Sketches provide a highly space-efficient estimator for sets of high similarity, which is relevant in applications such as web duplicate detection, collaborative filtering, and association rule learning. The method extends to weighted Jaccard similarity, relevant e.g. for TF-IDF vector... and web duplicate detection tasks...
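    Odd Sketches compress min-wise hashing samples; the underlying min-wise Jaccard estimator they build on can be sketched as follows (this shows the classic primitive only, not the paper's odd-parity encoding, and all names are assumptions):

    ```python
    import hashlib

    def minhash_signature(items, num_hashes=64):
        """k independent min-wise hashes of a set: for each seed, take
        the minimum 64-bit hash over the items."""
        sig = []
        for seed in range(num_hashes):
            sig.append(min(
                int.from_bytes(
                    hashlib.sha1(f"{seed}:{x}".encode()).digest()[:8], "big")
                for x in items))
        return sig

    def jaccard_estimate(sig_a, sig_b):
        """The fraction of slots where the minima agree is an unbiased
        estimate of the Jaccard similarity |A∩B| / |A∪B|."""
        matches = sum(a == b for a, b in zip(sig_a, sig_b))
        return matches / len(sig_a)
    ```

    The Odd Sketch of the paper further hashes these samples into a short bit vector with odd-parity counting, which is where the space advantage for highly similar sets comes from.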

  8. Design of High Efficiency Illumination for LED Lighting

    Directory of Open Access Journals (Sweden)

    Yong-Nong Chang

    2013-01-01

    Full Text Available A high-efficiency illumination system for LED street lighting is proposed. For energy saving, a Class-E resonant inverter is used as the main electric circuit to improve efficiency. In addition, single dimming control offers the best efficiency, the simplest control scheme and the lowest circuit cost among dimming techniques. Multiple serially connected transformers are used to drive the LED strings, as they provide galvanic isolation and have the advantage of good current distribution against device differences. Finally, a prototype circuit driving 112 W of LEDs in total was built and tested to verify the theoretical analysis.

  9. High-Efficient Low-Cost Photovoltaics Recent Developments

    CERN Document Server

    Petrova-Koch, Vesselinka; Goetzberger, Adolf

    2009-01-01

    A bird's-eye view of the development and problems of recent photovoltaic cells and systems, and of prospects for Si feedstock, is presented. High-efficiency, low-cost PV modules, making use of novel efficient solar cells (based on c-Si or III-V materials) and low-cost solar concentrators, are the focus of this book. Recent developments in organic photovoltaics, which are expected to overcome their difficulties and enter the market soon, are also included.

  10. High Performance Healthcare Buildings: A Roadmap to Improved Energy Efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Singer, Brett C.; Tschudi, William F.

    2009-09-08

    This document presents a roadmap for improving the energy efficiency of hospitals and other healthcare facilities. The report compiles input from a broad array of experts in healthcare facility design and operations. The initial section lists challenges and barriers to efficiency improvements in healthcare. Opportunities are organized around the following ten themes: understanding and benchmarking energy use; best practices and training; codes and standards; improved utilization of existing HVAC designs and technology; innovation in HVAC design and technology; electrical system design; lighting; medical equipment and process loads; economic and organizational issues; and the design of next-generation sustainable hospitals. Achieving energy efficiency will require a broad set of activities including research, development, deployment, demonstration, and training, organized around 48 specific objectives. Specific activities are prioritized in consideration of potential impact, likelihood of near- or mid-term feasibility, and anticipated cost-effectiveness. This document is intended to be broad in scope, though not exhaustive. Opportunities and needs are identified and described with the goal of focusing efforts and resources.

  11. An Efficient Network Coding-Based Fault-Tolerant Mechanism in WBAN for Smart Healthcare Monitoring Systems

    Directory of Open Access Journals (Sweden)

    Yuhuai Peng

    2017-08-01

    As a key technology in smart healthcare monitoring systems, wireless body area networks (WBANs) can pre-embed sensors and sinks on the body surface or inside the body to collect different vital sign parameters, such as the human Electrocardiograph (ECG), Electroencephalograph (EEG), Electromyogram (EMG), body temperature, blood pressure, blood sugar, and blood oxygen. Using real-time online healthcare, patients can be tracked and monitored in normal or emergency conditions at their homes, in hospital rooms, and in Intensive Care Units (ICUs). In particular, the reliability and efficiency of packet transmission directly affect the timely rescue of critically ill patients with life-threatening injuries. However, traditional fault-tolerant schemes either underutilise resources or react too slowly to failures. In future healthcare systems, the medical Internet of Things (IoT) for real-time monitoring can integrate sensor networks, cloud computing, and big data techniques to address these problems. It can collect and send a patient's vital parameter signals and safety monitoring information to intelligent terminals and enhance transmission reliability and efficiency. Therefore, this paper presents a proactive, reliable data transmission mechanism with resilience requirements in a many-to-one stream model for healthcare monitoring systems. This Network Coding-based Fault-tolerant Mechanism (NCFM) first proposes a greedy grouping algorithm to divide the topology into small logical units; it then constructs a spanning tree based on random linear network coding to generate linearly independent coding combinations. Numerical results indicate that this transmission scheme outperforms traditional methods in reducing the probability of packet loss, the resource redundancy rate, and average delay, and can increase the effective throughput rate.

  12. A Character Segmentation Proposal for High-Speed Visual Monitoring of Expiration Codes on Beverage Cans

    Directory of Open Access Journals (Sweden)

    José C. Rodríguez-Rodríguez

    2016-04-01

    Expiration date labels are ubiquitous in the food industry. With the passage of time, almost any food becomes unhealthy, even when well preserved. The expiration date is estimated based on the type and manufacture/packaging time of that particular food unit. This date is then printed on the container so it is available to the end user at the time of consumption. MONICOD (MONItoring of CODes), an industrial validator of expiration codes, allows the expiration code printed on a drink can to be read. This verification occurs immediately after printing. MONICOD faces difficulties due to the high printing rate (35 cans per second) and problematic lighting caused by the metallic surface on which the code is printed. This article describes a solution that allows MONICOD to extract shapes and presents quantitative results for its speed and quality.

  13. An Efficient Audio Coding Scheme for Quantitative and Qualitative Large Scale Acoustic Monitoring Using the Sensor Grid Approach

    Directory of Open Access Journals (Sweden)

    Félix Gontier

    2017-11-01

    The spreading of urban areas and the growth of the human population worldwide raise societal and environmental concerns. To better address these concerns, monitoring the acoustic environment in urban as well as rural or wilderness areas is an important matter. Building on the recent development of low-cost hardware acoustic sensors, we propose in this paper a sensor grid approach to tackle this issue. In this kind of approach, the crucial question is the nature of the data that are transmitted from the sensors to the processing and archival servers. To this end, we propose an efficient audio coding scheme based on a third-octave band spectral representation that allows: (1) the estimation of standard acoustic indicators; and (2) the recognition of acoustic events at a state-of-the-art performance rate. The former is useful to provide quantitative information about the acoustic environment, while the latter is useful to gather qualitative information and build perceptually motivated indicators using, for example, the emergence of a given sound source. The coding scheme is also demonstrated to transmit spectrally encoded data that, reverted to the time domain using state-of-the-art techniques, are not intelligible, thus protecting the privacy of citizens.
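
    As a concrete illustration of the third-octave band representation mentioned above, the short Python sketch below generates exact base-2 band centre frequencies and edges around the 1 kHz reference (the standard nominal values are rounded versions of these; the function name and band range are our own choices, not the paper's):

```python
def third_octave_bands(k_lo=-17, k_hi=13, f_ref=1000.0):
    """Exact base-2 third-octave bands: centre f_ref * 2**(k/3), edges at fc * 2**(+-1/6)."""
    bands = []
    for k in range(k_lo, k_hi + 1):
        fc = f_ref * 2 ** (k / 3)
        bands.append((fc * 2 ** (-1 / 6), fc, fc * 2 ** (1 / 6)))
    return bands

bands = third_octave_bands()      # 31 bands covering roughly 20 Hz to 20 kHz
lo, fc, hi = bands[17]            # k = 0: the 1 kHz reference band
print(f"{lo:.1f} Hz .. {fc:.1f} Hz .. {hi:.1f} Hz")   # 890.9 Hz .. 1000.0 Hz .. 1122.5 Hz
```

    Transmitting one level per band and frame, instead of raw audio, is what keeps the bit rate low while still supporting indicator estimation.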

  14. Efficient diagonalization of the sparse matrices produced within the framework of the UK R-matrix molecular codes

    Science.gov (United States)

    Galiatsatos, P. G.; Tennyson, J.

    2012-11-01

    The most time-consuming step within the framework of the UK R-matrix molecular codes is the diagonalization of the inner-region Hamiltonian matrix (IRHM). Here we present the method that we follow to speed up this step. We use shared-memory machines (SMM), distributed-memory machines (DMM), the OpenMP directive-based parallel language, the MPI function-based parallel language, the sparse matrix diagonalizers ARPACK and PARPACK, a variation for real symmetric matrices of the official coordinate sparse matrix format, and finally a parallel sparse matrix-vector product (PSMV). The efficient application of these techniques relies on two important facts: the sparsity of the matrix is large enough (more than 98%), and only a small part of the matrix spectrum is needed to obtain converged results.
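
    The two storage ideas mentioned here, a coordinate (COO) sparse format adapted to real symmetric matrices and a sparse matrix-vector product, can be sketched in a few lines of Python. This is our own illustration, not the UK R-matrix implementation (which uses ARPACK/PARPACK in parallel); storing only the upper triangle is an assumption about how the symmetric variation works:

```python
def sym_coo_matvec(n, entries, x):
    """y = A x for a symmetric matrix stored as upper-triangle COO triples (i, j, v)."""
    y = [0.0] * n
    for i, j, v in entries:
        y[i] += v * x[j]
        if i != j:            # mirror the entry into the lower triangle
            y[j] += v * x[i]
    return y

def power_iteration(n, entries, iters=200):
    """Dominant eigenvalue via repeated sparse matvecs (the kernel ARPACK also relies on)."""
    x = [1.0] * n
    for _ in range(iters):
        y = sym_coo_matvec(n, entries, x)
        norm = max(abs(v) for v in y)
        x = [v / norm for v in y]
    return norm

# A = [[2, 1], [1, 2]], stored as its upper triangle only
entries = [(0, 0, 2.0), (0, 1, 1.0), (1, 1, 2.0)]
print(power_iteration(2, entries))   # -> 3.0, the dominant eigenvalue
```

    Storing half of a symmetric matrix roughly halves memory traffic, which matters when the matrix is large but more than 98% sparse.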

  15. A high-efficiency neutron coincidence counter for small samples

    International Nuclear Information System (INIS)

    Miller, M.C.; Menlove, H.O.; Russo, P.A.

    1991-01-01

    The inventory sample coincidence counter (INVS) has been modified to enhance its performance. The new design is suitable for use with a glove-box sample well (in-line application) as well as for use in the standard at-line mode. The counter has been redesigned to count more efficiently and to be less sensitive to variations in sample position. These factors lead to a higher degree of precision and accuracy in a given counting period and allow for the practical use of the INVS counter with gamma-ray isotopics to obtain a plutonium assay independent of operator declarations and time-consuming chemical analysis. A calculational study was performed using the Los Alamos transport code MCNP to optimize the design parameters. 5 refs., 7 figs., 8 tabs

  16. High efficiency heat transport and power conversion system for cascade

    International Nuclear Information System (INIS)

    Maya, I.; Bourque, R.F.; Creedon, R.L.; Schultz, K.R.

    1985-02-01

    The Cascade ICF reactor features a flowing blanket of solid BeO and LiAlO2 granules with very high temperature capability (up to approx. 2300 K). The authors present here the design of a high-temperature granule transport and heat exchange system, and two options for high-efficiency power conversion. The centrifugal-throw transport system uses the peripheral speed imparted to the granules by the rotating chamber to effect granule transport and requires no additional equipment. The heat exchanger design is a vacuum heat transfer concept utilizing gravity-induced flow of the granules over ceramic heat exchange surfaces. A reference Brayton power cycle is presented which achieves 55% net efficiency with a 1300 K peak helium temperature. A modified Field steam cycle (a hybrid Rankine/Brayton cycle) is presented as an alternative which achieves 56% net efficiency.

  17. Highly Efficient Coherent Optical Memory Based on Electromagnetically Induced Transparency

    Science.gov (United States)

    Hsiao, Ya-Fen; Tsai, Pin-Ju; Chen, Hung-Shiue; Lin, Sheng-Xiang; Hung, Chih-Chiao; Lee, Chih-Hsi; Chen, Yi-Hsin; Chen, Yong-Fan; Yu, Ite A.; Chen, Ying-Cheng

    2018-05-01

    Quantum memory is an important component of long-distance quantum communication based on the quantum repeater protocol. To outperform the direct transmission of photons with quantum repeaters, it is crucial to develop quantum memories with high fidelity, high efficiency and a long storage time. Here, we achieve a storage efficiency of 92.0 (1.5)% for a coherent optical memory based on the electromagnetically induced transparency scheme in optically dense cold atomic media. We also obtain a useful time-bandwidth product of 1200, considering only storage where the retrieval efficiency remains above 50%. Both are the best records to date among all schemes for the realization of optical memory. Our work significantly advances the pursuit of a high-performance optical memory and should have important applications in quantum information science.

  18. High-efficiency white OLEDs based on small molecules

    Science.gov (United States)

    Hatwar, Tukaram K.; Spindler, Jeffrey P.; Ricks, M. L.; Young, Ralph H.; Hamada, Yuuhiko; Saito, N.; Mameno, Kazunobu; Nishikawa, Ryuji; Takahashi, Hisakazu; Rajeswaran, G.

    2004-02-01

    Eastman Kodak Company and SANYO Electric Co., Ltd. recently demonstrated a 15" full-color, organic light-emitting diode display (OLED) using a high-efficiency white emitter combined with a color-filter array. Although useful for display applications, white emission from organic structures is also under consideration for other applications, such as solid-state lighting, where high efficiency and good color rendition are important. By incorporating adjacent blue and orange emitting layers in a multi-layer structure, highly efficient, stable white emission has been attained. With suitable host and dopant combinations, a luminance yield of 20 cd/A and efficiency of 8 lm/W have been achieved at a drive voltage of less than 8 volts and luminance level of 1000 cd/m2. The estimated external efficiency of this device is 6.3% and a high level of operational stability is observed. To our knowledge, this is the highest performance reported so far for white organic electroluminescent devices. We will review white OLED technology and discuss the fabrication and operating characteristics of these devices.

  19. Low Cost, High Efficiency, High Pressure Hydrogen Storage

    Energy Technology Data Exchange (ETDEWEB)

    Mark Leavitt

    2010-03-31

    A technical and design evaluation was carried out to meet DOE hydrogen fuel targets for 2010. These targets consisted of a system gravimetric capacity of 2.0 kWh/kg, a system volumetric capacity of 1.5 kWh/L and a system cost of $4/kWh. In compressed hydrogen storage systems, the vast majority of the weight and volume is associated with the hydrogen storage tank. In order to meet gravimetric targets for compressed hydrogen tanks, 10,000 psi carbon resin composites were used to provide the required high strength at low weight. For the 10,000 psi tanks, carbon fiber is the largest portion of their cost. Quantum Technologies is a tier-one hydrogen system supplier for automotive companies around the world. Over the course of the program, Quantum focused on developing technology to allow the compressed hydrogen storage tank to meet DOE goals. At the start of the program in 2004, Quantum was supplying systems with a specific energy of 1.1-1.6 kWh/kg, a volumetric capacity of 1.3 kWh/L and a cost of $73/kWh. Based on the gap between the DOE targets and Quantum’s then-current capabilities, focus was placed first on cost reduction and second on weight reduction, both to be accomplished without reducing the fuel system’s performance or reliability. Three distinct areas were investigated: optimization of composite structures; development of “smart tanks” that could monitor the health of the tank, allowing a lower design safety factor; and development of “Cool Fuel” technology to allow higher-density gas to be stored, permitting smaller/lower-pressure tanks that would hold the required fuel supply. The second phase of the project deals with three additional distinct tasks focusing on composite structure optimization, liner optimization, and metal.

  20. Developing a Coding Scheme to Analyse Creativity in Highly-constrained Design Activities

    DEFF Research Database (Denmark)

    Dekoninck, Elies; Yue, Huang; Howard, Thomas J.

    2010-01-01

    This work is part of a larger project which aims to investigate the nature of creativity and the effectiveness of creativity tools in highly-constrained design tasks. This paper presents the research where a coding scheme was developed and tested with a designer-researcher who conducted two rounds… of design and analysis on a highly constrained design task. This paper shows how design changes can be coded using a scheme based on creative ‘modes of change’. The coding scheme can show the way a designer moves around the design space, and particularly the strategies that are used by a creative designer… A larger study with more designers working on different types of highly-constrained design task is needed in order to draw conclusions on the modes of change and their relationship to creativity…

  1. High Efficient Bidirectional Battery Converter for residential PV Systems

    DEFF Research Database (Denmark)

    Pham, Cam; Kerekes, Tamas; Teodorescu, Remus

    2012-01-01

    Photovoltaic (PV) installation is suited to the residential environment, and the generation pattern follows the distribution of residential power consumption in daylight hours. In cases of unbalance between generation and demand, the Smart PV with its battery storage can absorb or inject… the power to balance it. A highly efficient bidirectional converter for the battery storage is required because of high system cost and because the power is processed twice. A 1.5 kW prototype was designed and built with CoolMOS and SiC diodes; >95% efficiency has been obtained with 200 kHz hard switching.

  2. Innovative-Simplified Nuclear Power Plant Efficiency Evaluation with High-Efficiency Steam Injector System

    International Nuclear Information System (INIS)

    Shoji, Goto; Shuichi, Ohmori; Michitsugu, Mori

    2006-01-01

    It is possible to establish a simplified system with reduced space and total equipment weight by using high-efficiency Steam Injectors (SI) instead of low-pressure feedwater heaters in a Nuclear Power Plant (NPP). The SI works as a heat exchanger through direct contact between feedwater from the condensers and steam extracted from the turbines, and can achieve a pressure higher than that of the supplied steam. Maintenance requirements and reliability are also better than those of feedwater heaters because the SI has no moving parts. This paper describes the analysis of the heat balance, plant efficiency and operation of this Innovative-Simplified NPP with high-efficiency SI. The plant efficiency and operation are compared between an 1100 MWe-class BWR system and the Innovative-Simplified BWR system with SI. The SI model is incorporated into the heat balance simulator as a simplified model. The results show that plant efficiencies of the Innovative-Simplified BWR system are almost equal to those of the original BWR. The present research is one of the projects carried out by Tokyo Electric Power Company, Toshiba Corporation, and six universities in Japan, funded by the Institute of Applied Energy (IAE) of Japan as a national public research-funded program. (authors)

  3. A metamaterial electromagnetic energy rectifying surface with high harvesting efficiency

    Directory of Open Access Journals (Sweden)

    Xin Duan

    2016-12-01

    A novel metamaterial rectifying surface (MRS) for electromagnetic energy capture and rectification with high harvesting efficiency is presented. It is fabricated on a three-layer printed circuit board, which comprises an array of periodic metamaterial particles in the shape of mirrored split rings, a metal ground, and integrated rectifiers employing Schottky diodes. Perfect impedance matching is engineered at two interfaces, i.e. one between free space and the surface, and the other between the metamaterial particles and the rectifiers, which are connected through optimally positioned vias. Therefore, the incident electromagnetic power is captured with almost no reflection by the metamaterial particles, then channeled maximally to the rectifiers, and finally converted to direct current efficiently. Moreover, the rectifiers are behind the metal ground, avoiding the disturbance of high power incident electromagnetic waves. Such a MRS working at 2.45 GHz is designed, manufactured and measured, achieving a harvesting efficiency up to 66.9% under an incident power density of 5 mW/cm2, compared with a simulated efficiency of 72.9%. This high harvesting efficiency makes the proposed MRS an effective receiving device in practical microwave power transmission applications.

  4. A metamaterial electromagnetic energy rectifying surface with high harvesting efficiency

    Science.gov (United States)

    Duan, Xin; Chen, Xing; Zhou, Lin

    2016-12-01

    A novel metamaterial rectifying surface (MRS) for electromagnetic energy capture and rectification with high harvesting efficiency is presented. It is fabricated on a three-layer printed circuit board, which comprises an array of periodic metamaterial particles in the shape of mirrored split rings, a metal ground, and integrated rectifiers employing Schottky diodes. Perfect impedance matching is engineered at two interfaces, i.e. one between free space and the surface, and the other between the metamaterial particles and the rectifiers, which are connected through optimally positioned vias. Therefore, the incident electromagnetic power is captured with almost no reflection by the metamaterial particles, then channeled maximally to the rectifiers, and finally converted to direct current efficiently. Moreover, the rectifiers are behind the metal ground, avoiding the disturbance of high power incident electromagnetic waves. Such a MRS working at 2.45 GHz is designed, manufactured and measured, achieving a harvesting efficiency up to 66.9% under an incident power density of 5 mW/cm2, compared with a simulated efficiency of 72.9%. This high harvesting efficiency makes the proposed MRS an effective receiving device in practical microwave power transmission applications.

  5. Updated tokamak systems code and applications to high-field ignition devices

    International Nuclear Information System (INIS)

    Reid, R.L.; Galambos, J.D.; Peng, Y-K.M.; Strickler, D.J.; Selcow, E.C.

    1985-01-01

    This paper describes revisions made to the Tokamak Systems Code to more accurately model high-field copper ignition devices. The major areas of revision were the plasma physics model, the toroidal field (TF) coil model, and the poloidal field (PF) coil/MHD model. Also included in this paper are results obtained from applying the revised code to a study of a high-field copper ignition device, to determine the impact of the on-axis magnetic field (at the major radius) on performance and cost.

  6. The fast decoding of Reed-Solomon codes using high-radix fermat theoretic transforms

    Science.gov (United States)

    Liu, K. Y.; Reed, I. S.; Truong, T. K.

    1976-01-01

    Fourier-like transforms over GF(F_n), where F_n = 2^(2^n) + 1 is a Fermat prime, are applied in decoding Reed-Solomon codes. It is shown that such transforms can be computed using high-radix fast Fourier transform (FFT) algorithms requiring considerably fewer multiplications than the more usual radix-2 FFT algorithm. A special 256-symbol, 16-symbol-error-correcting Reed-Solomon (RS) code for space communication-link applications can be encoded and decoded using this high-radix FFT algorithm over GF(F_3).
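
    The transform in question can be demonstrated concretely over GF(F_3) = GF(257). The minimal Python sketch below uses a naive O(n²) transform rather than the paper's high-radix FFT; the length n = 8 and the generator 3 (any quadratic nonresidue modulo a Fermat prime is a primitive root) are illustrative choices:

```python
P = 257                                 # the Fermat prime F_3 = 2**(2**3) + 1

def ntt(a, w):
    """Naive Fourier-like transform over GF(P): A[k] = sum_j a[j] * w**(j*k) mod P."""
    n = len(a)
    return [sum(a[j] * pow(w, j * k, P) for j in range(n)) % P for k in range(n)]

n = 8
w = pow(3, (P - 1) // n, P)             # element of order n (3 is a primitive root mod 257)
a = [5, 0, 31, 7, 0, 0, 2, 1]
A = ntt(a, w)
# The inverse is the same transform with w**-1, scaled by n**-1 modulo P.
a_back = [x * pow(n, -1, P) % P for x in ntt(A, pow(w, -1, P))]
assert a_back == a                      # exact round trip, unlike a floating-point FFT
```

    Because all arithmetic is exact modular arithmetic, transform-based RS decoding avoids the rounding issues of complex-valued FFTs.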

  7. Highly efficient light management for perovskite solar cells.

    Science.gov (United States)

    Wang, Dong-Lin; Cui, Hui-Juan; Hou, Guo-Jiao; Zhu, Zhen-Gang; Yan, Qing-Bo; Su, Gang

    2016-01-06

    Organic-inorganic halide perovskite solar cells have enormous potential to impact the existing photovoltaic industry. As realizing a higher conversion efficiency of the solar cell is still the most crucial task, a great number of schemes have been proposed to minimize the carrier loss by optimizing the electrical properties of perovskite solar cells. Here, we focus on another significant aspect: minimizing the light loss by optimizing the light management to gain a high efficiency for perovskite solar cells. In our scheme, slotted and inverted-prism-structured SiO2 layers are adopted to trap more light in the solar cells, and a better transparent conducting oxide layer is employed to reduce parasitic absorption. With such an implementation, the efficiency and the serviceable angle of the perovskite solar cell can be improved impressively. This proposal should shed new light on developing high-performance perovskite solar cells.

  8. A Low VSWR and High Efficiency Waveguide Feed Antenna Array

    Directory of Open Access Journals (Sweden)

    Zhao Xiao-Fang

    2018-01-01

    A low-VSWR and high-efficiency antenna array operating in the Ku band for satellite communications is presented in this paper. To achieve high radiation efficiency and sufficient bandwidth, all-metal radiation elements and a full-corporate waveguide feeding network are employed. As the general milling method is used to fabricate the multilayer antenna array, an E-plane waveguide feeding network is adopted to suppress the wave leakage caused by imperfect connectivity between adjacent layers. A 4 × 8-element array prototype was fabricated and tested for verification. The measured results of the proposed antenna array show a bandwidth of 6.9% (13.9–14.8 GHz) for VSWR < 1.5. Furthermore, an antenna gain higher than 22.2 dBi and an efficiency above 80% are also exhibited.
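
    For readers less familiar with the VSWR figure quoted above, the relation between VSWR and reflected power is a generic textbook one-liner (not specific to this antenna):

```python
import math

def vswr_to_gamma(s):
    """Reflection-coefficient magnitude: |Gamma| = (S - 1) / (S + 1)."""
    return (s - 1) / (s + 1)

def return_loss_db(s):
    """Return loss in dB: RL = -20 * log10(|Gamma|)."""
    return -20 * math.log10(vswr_to_gamma(s))

g = vswr_to_gamma(1.5)
print(round(g, 3), round(return_loss_db(1.5), 1))   # 0.2 14.0
```

    So the quoted VSWR < 1.5 corresponds to |Γ| ≤ 0.2, i.e. at most 4% of the incident power reflected, or about 14 dB of return loss across the band.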

  9. Potential high efficiency solar cells: Applications from space photovoltaic research

    Science.gov (United States)

    Flood, D. J.

    1986-01-01

    NASA involvement in photovoltaic energy conversion research development and applications spans over two decades of continuous progress. Solar cell research and development programs conducted by the Lewis Research Center's Photovoltaic Branch have produced a sound technology base not only for the space program, but for terrestrial applications as well. The fundamental goals which have guided the NASA photovoltaic program are to improve the efficiency and lifetime, and to reduce the mass and cost of photovoltaic energy conversion devices and arrays for use in space. The major efforts in the current Lewis program are on high efficiency, single crystal GaAs planar and concentrator cells, radiation hard InP cells, and superlattice solar cells. A brief historical perspective of accomplishments in high efficiency space solar cells will be given, and current work in all of the above categories will be described. The applicability of space cell research and technology to terrestrial photovoltaics will be discussed.

  10. The thermodynamic characteristics of high efficiency, internal-combustion engines

    International Nuclear Information System (INIS)

    Caton, Jerald A.

    2012-01-01

    Highlights: The thermodynamics of an automotive engine are determined using a cycle simulation. The net indicated thermal efficiency increased from 37.0% to 53.9%. High compression ratio, lean mixtures and high EGR were the important features. Efficiency increased due to lower heat losses and increased work conversion. The nitric oxides were essentially zero due to the low combustion temperatures. Abstract: Recent advancements have demonstrated new combustion modes for internal combustion engines that exhibit low nitric oxide emissions and high thermal efficiencies. These new combustion modes involve various combinations of stratification, lean mixtures, high levels of EGR, multiple injections, variable valve timings, two fuels, and other such features. Although the exact combination of these features that provides the best design is not yet clear, the results (low emissions with high efficiencies) are of major interest. The current work is directed at determining some of the fundamental thermodynamic reasons for the relatively high efficiencies and at quantifying these factors. Both the first and second laws are used in this assessment. An automotive engine (5.7 l) which included some of the features mentioned above (e.g., high compression ratio, lean mixtures, and high EGR) was evaluated using a thermodynamic cycle simulation. These features were examined for a moderate-load (bmep = 900 kPa), moderate-speed (2000 rpm) condition. By the use of lean operation, high EGR levels, high compression ratio and other features, the net indicated thermal efficiency increased from 37.0% to 53.9%. These increases are explained in a step-by-step fashion. The major reasons for these improvements include the higher compression ratio and the dilute charge (lean mixture, high EGR). The dilute charge resulted in lower temperatures, which in turn resulted in lower heat loss. In addition, the lower temperatures resulted in higher ratios of the specific heats, which…
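
    The roles that the abstract assigns to compression ratio and charge dilution can be illustrated with the ideal air-standard Otto-cycle relation η = 1 − r^(1−γ): raising r raises efficiency, and a dilute (lean or high-EGR) charge keeps the specific-heat ratio γ higher. The numbers below are illustrative textbook values, not results from the paper's simulation:

```python
def otto_efficiency(r, gamma):
    """Ideal air-standard Otto-cycle thermal efficiency: eta = 1 - r**(1 - gamma)."""
    return 1.0 - r ** (1.0 - gamma)

# Conventional stoichiometric case: moderate compression ratio, hot burned gas lowers gamma.
base = otto_efficiency(10, 1.30)
# High compression ratio with a dilute charge: lower temperatures keep gamma higher.
dilute = otto_efficiency(16, 1.35)
print(f"{base:.3f} -> {dilute:.3f}")   # 0.499 -> 0.621
```

    The same qualitative trend, a roughly 15-point efficiency gain from higher r and higher γ together, is what the full cycle simulation quantifies in detail.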

  11. Quasi-optical converters for high-power gyrotrons: a brief review of physical models, numerical methods and computer codes

    International Nuclear Information System (INIS)

    Sabchevski, S; Zhelyazkov, I; Benova, E; Atanassov, V; Dankov, P; Thumm, M; Arnold, A; Jin, J; Rzesnicki, T

    2006-01-01

    Quasi-optical (QO) mode converters are used to transform electromagnetic waves of complex structure and polarization generated in gyrotron cavities into a linearly polarized, Gaussian-like beam suitable for transmission. The efficiency of this conversion, as well as the maintenance of a low level of diffraction losses, is crucial for the implementation of powerful gyrotrons as radiation sources for electron-cyclotron-resonance heating of fusion plasmas. The use of adequate physical models, efficient numerical schemes and up-to-date computer codes can provide the high accuracy necessary for the design and analysis of these devices. In this review, we briefly sketch the most commonly used QO converters, the mathematical basis on which they are treated, and the basic features of the numerical schemes used. We then discuss the applicability of several commercially available and free software packages for solving QO-related problems, along with their advantages and drawbacks.

  12. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    Science.gov (United States)

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as a header has been the most important challenge of RNC. Moreover, due to the Gauss-Jordan elimination method employed, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method that guarantees no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one instead of n coefficients entries into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations. Peers therefore incur very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC with the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, End-to-End delay and Initial Startup delay.
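
    The decoding step criticized here, Gauss-Jordan elimination on the received coefficient vectors, can be made concrete with a toy example over GF(2). Practical RNC, including MATIN, typically works over GF(2^8); all names below are our own illustrative choices:

```python
import random

def decode_gf2(coeffs, encoded):
    """Recover source packets by Gauss-Jordan elimination over GF(2).

    coeffs:  n x n matrix of 0/1 mixing coefficients (row i produced encoded packet i)
    encoded: the n received packets, modelled as integers (XOR = GF(2) addition)
    """
    n = len(coeffs)
    A = [row[:] for row in coeffs]
    y = encoded[:]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col])   # StopIteration if singular
        A[col], A[piv] = A[piv], A[col]
        y[col], y[piv] = y[piv], y[col]
        for r in range(n):
            if r != col and A[r][col]:
                A[r] = [x ^ p for x, p in zip(A[r], A[col])]
                y[r] ^= y[col]
    return y

rng = random.Random(7)
source = [0xCAFE, 0xBEEF, 0x1234, 0x5678]            # four source "packets"
while True:
    C = [[rng.randint(0, 1) for _ in source] for _ in source]
    enc = [0] * len(source)
    for i, row in enumerate(C):
        for j, bit in enumerate(row):
            if bit:
                enc[i] ^= source[j]                   # random linear combination over GF(2)
    try:
        recovered = decode_gf2(C, enc)
        break
    except StopIteration:
        pass                                          # singular draw: redraw coefficients
assert recovered == source
```

    The per-generation cost of this elimination, plus shipping the rows of C in packet headers, is exactly the overhead that MATIN's coefficient-matrix construction is designed to avoid.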

  13. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    Directory of Open Access Journals (Sweden)

    Behrang Barekatain

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as a header has been the most important challenge of RNC. Moreover, due to the Gauss-Jordan elimination method employed, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method that guarantees no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one instead of n coefficients entries into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations. Peers therefore incur very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC with the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, End-to-End delay and Initial Startup delay.

  14. A nuclear standard high-efficiency adsorber for iodine

    International Nuclear Information System (INIS)

    Wang Jianmin; Qian Yinge

    1988-08-01

    The structure, adsorbent and performance of a nuclear standard high-efficiency iodine adsorber are introduced. The performance and structure were compared with similar products from other manufacturers. The results show that the leakage rate is less than 0.005%.

  15. Efficiency criteria for high reliability measured system structures

    International Nuclear Information System (INIS)

    Sal'nikov, N.L.

    2012-01-01

    Structural redundancy procedures are usually used to develop high-reliability measurement systems. To estimate the efficiency of such structures, criteria for comparing different systems have been developed. Using the developed criteria, a more exact system can be designed by inspecting the stochastic characteristics of the redundant system's data units [ru

  16. Optimization of high-efficiency components; Optimieren auf hohem Niveau

    Energy Technology Data Exchange (ETDEWEB)

    Neumann, Eva

    2009-07-01

    High efficiency is a common feature of modern inverters and is no longer a unique selling proposition. Other factors that influence the buyer's decision are cost reduction, reliability and service, optimum grid integration, and the challenges of the competing thin-film technology. (orig.)

  17. Orion, a high efficiency 4π neutron detector

    International Nuclear Information System (INIS)

    Crema, E.; Piasecki, E.; Wang, X.M.; Doubre, H.; Galin, J.; Guerreau, D.; Pouthas, J.; Saint-Laurent, F.

    1990-01-01

    In intermediate-energy heavy ion collisions, the multiplicity of emitted neutrons is strongly connected to energy dissipation and to impact parameter. We present the 4π detector ORION, a high-efficiency liquid scintillator detector which provides event-wise information on the multiplicity of the emitted neutrons and on their spatial distribution [fr

  18. High efficiency hydrodynamic DNA fragmentation in a bubbling system

    NARCIS (Netherlands)

    Li, Lanhui; Jin, Mingliang; Sun, Chenglong; Wang, Xiaoxue; Xie, Shuting; Zhou, Guofu; Van Den Berg, Albert; Eijkel, Jan C.T.; Shui, Lingling

    2017-01-01

    DNA fragmentation down to a precise fragment size is important for biomedical applications, disease determination, gene therapy and shotgun sequencing. In this work, a cheap, easy to operate and high efficiency DNA fragmentation method is demonstrated based on hydrodynamic shearing in a bubbling system.

  19. High efficiency confinement mode by electron cyclotron heating

    International Nuclear Information System (INIS)

    Funahashi, Akimasa

    1987-01-01

    In the medium-size nuclear fusion experiment facility JFT-2M at the Japan Atomic Energy Research Institute, research on the high efficiency plasma confinement mode has been advanced, and in an experiment in June 1987, the formation of a high efficiency confinement mode was successfully controlled by electron cyclotron heating for the first time in the world. This result further advanced the control of the formation of a high efficiency plasma confinement mode and the elucidation of the physical mechanism of that mode, and promoted the research and development of plasma heating by electron cyclotron heating. In this paper, the recent results of the research on the high efficiency confinement mode at the JFT-2M are reported, and the role of the JFT-2M and the experiments on the improvement of core plasma performance are outlined. Plasma temperatures exceeding 100 million deg C have now been attained in large tokamaks; the role of medium-size facilities is to put forward various measures for improving confinement performance and to elucidate their scientific basis in support of the large facilities. The JFT-2M started operation in April 1983 and has accumulated results smoothly since then. (Kako, I.)

  20. Super Boiler: First Generation, Ultra-High Efficiency Firetube Boiler

    Energy Technology Data Exchange (ETDEWEB)

    None

    2006-06-01

    This factsheet describes a research project whose goal is to develop and demonstrate a first-generation ultra-high-efficiency, ultra-low emissions, compact gas-fired package boiler (Super Boiler), and formulate a long-range RD&D plan for advanced boiler technology out to the year 2020.

  1. High-efficient solar cells with porous silicon

    International Nuclear Information System (INIS)

    Migunova, A.A.

    2002-01-01

    It has been shown that porous silicon is a multifunctional high-efficiency coating on silicon solar cells: it modifies the surface and combines in itself antireflection and passivation properties. The different optoelectronic effects in solar cells with porous silicon were considered. Comparative parameters of uncovered photodetectors as well as of solar cells with porous silicon and other coatings are presented. (author)

  2. Benefits of high aerodynamic efficiency to orbital transfer vehicles

    Science.gov (United States)

    Andrews, D. G.; Norris, R. B.; Paris, S. W.

    1984-01-01

    The benefits and costs of high aerodynamic efficiency on aeroassisted orbital transfer vehicles (AOTV) are analyzed. Results show that a high lift to drag (L/D) AOTV can achieve significant velocity savings relative to low L/D aerobraked OTVs when traveling round trip between low Earth orbits (LEO) and alternate orbits as high as geosynchronous Earth orbit (GEO). Trajectory analysis is used to show the impact of thermal protection system technology and the importance of lift loading coefficient on vehicle performance. The possible improvements in AOTV subsystem technologies are assessed and their impact on vehicle inert weight and performance noted. Finally, the performance of high L/D AOTV concepts is compared with the performances of low L/D aeroassisted and all-propulsive OTV concepts to assess the benefits of aerodynamic efficiency on this class of vehicle.

  3. Code Cactus; Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady state or transient behavior; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flowrate in parallel channels, coupled or not by conduction across the plates, is computed for imposed pressure drops or flowrates, variable or not with respect to time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code, and has as its complement FLID, a one-channel, two-dimensional code. (authors) [French] Ce code permet de traiter les problemes ci-dessous: 1. Depouillement d'essais thermiques sur boucle a eau, haute ou basse pression, en regime permanent ou transitoire; 2. Etudes thermiques et hydrauliques de reacteurs a eau, a plaques, a haute ou basse pression, ebullition permise: - repartition entre canaux paralleles, couples on non par conduction a travers plaques, pour des conditions de debit ou de pertes de charge imposees, variables ou non dans le temps; - la puissance peut etre couplee a la neutronique et une representation schematique des actions de securite est prevue. Ce code (Cactus) a une dimension d'espace et plusieurs canaux, a pour complement Flid qui traite l'etude d'un seul canal a deux dimensions. (auteurs)
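    The parallel-channel flowrate repartition that codes of this kind compute can be illustrated with a minimal standalone sketch. This is not the Cactus code itself; it assumes a simple quadratic friction law dp = k_i * q_i**2 in each channel (the coefficients below are hypothetical) and finds the common pressure drop that distributes a fixed total flowrate:

```python
import math

def split_flow(q_total, k):
    """Distribute a total flowrate among parallel channels that share one
    pressure drop, assuming dp = k_i * q_i**2 in each channel i.
    Solving sum_i sqrt(dp / k_i) = q_total gives dp in closed form."""
    s = sum(1.0 / math.sqrt(ki) for ki in k)
    dp = (q_total / s) ** 2
    return dp, [math.sqrt(dp / ki) for ki in k]

dp, q = split_flow(10.0, [1.0, 4.0])
# the channel with 4x the friction coefficient carries half the flow
```

    Real thermal-hydraulic codes iterate a similar balance together with heat transfer and, as here, a kinetics-supplied power, but the equal-pressure-drop constraint across parallel channels is the core idea.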

  4. How high are option values in energy-efficiency investments?

    International Nuclear Information System (INIS)

    Sanstad, A.H.; Blumstein, C.; Stoft, S.E.; California Univ., Berkeley, CA,

    1995-01-01

    High implicit discount rates in consumers' energy-efficiency investments have long been a source of controversy. In several recent papers, Hassett and Metcalf argue that the uncertainty and irreversibility attendant to such investments, and the resulting option value, account for this anomalously high implicit discounting. Using their model and data, we show that, to the contrary, their analysis falls well short of providing an explanation of this pattern. (author)

  5. High-frequency combination coding-based steady-state visual evoked potential for brain computer interface

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Feng; Zhang, Xin; Xie, Jun; Li, Yeping; Han, Chengcheng; Lili, Li; Wang, Jing [School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049 (China); Xu, Guang-Hua [School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049 (China); State Key Laboratory for Manufacturing Systems Engineering, Xi’an Jiaotong University, Xi’an 710054 (China)

    2015-03-10

    This study presents a new steady-state visual evoked potential (SSVEP) paradigm for brain computer interface (BCI) systems. The goal of this study is to increase the number of targets using fewer high stimulation frequencies, while diminishing the subject's fatigue and reducing the risk of photosensitive epileptic seizures. The new paradigm is High-Frequency Combination Coding-Based High-Frequency Steady-State Visual Evoked Potential (HFCC-SSVEP). Firstly, we studied the high-frequency (beyond 25 Hz) SSVEP response, with the paradigm presented on an LED. The SNR (Signal to Noise Ratio) of the high-frequency (beyond 40 Hz) response is very low and cannot be distinguished by traditional analysis methods. Secondly, we investigated the HFCC-SSVEP response (beyond 25 Hz) for 3 frequencies (25 Hz, 33.33 Hz, and 40 Hz); HFCC-SSVEP produces n{sup n} targets with n high stimulation frequencies through frequency combination coding. Further, an improved Hilbert-Huang transform (IHHT)-based variable-frequency EEG feature extraction method and a local spectrum extreme target identification algorithm are adopted to extract the time-frequency features of the proposed HFCC-SSVEP response. Linear prediction and fixed sifting (iterating 10 times) are used to overcome the shortcomings of the end effect and the stopping criterion; generalized zero-crossing (GZC) is used to compute the instantaneous frequency of the SSVEP response signals. The improved HHT-based feature extraction method for the proposed SSVEP paradigm increases recognition efficiency, thereby improving the ITR and increasing the stability of the BCI system. Moreover, SSVEPs evoked by high-frequency stimuli (beyond 25 Hz) minimally fatigue the subject and prevent safety hazards linked to photo-induced epileptic seizures, ensuring that the system is both efficient and safe. This study tests three subjects in order to verify the feasibility of the proposed method.

  6. High-frequency combination coding-based steady-state visual evoked potential for brain computer interface

    International Nuclear Information System (INIS)

    Zhang, Feng; Zhang, Xin; Xie, Jun; Li, Yeping; Han, Chengcheng; Lili, Li; Wang, Jing; Xu, Guang-Hua

    2015-01-01

    This study presents a new steady-state visual evoked potential (SSVEP) paradigm for brain computer interface (BCI) systems. The goal of this study is to increase the number of targets using fewer high stimulation frequencies, while diminishing the subject's fatigue and reducing the risk of photosensitive epileptic seizures. The new paradigm is High-Frequency Combination Coding-Based High-Frequency Steady-State Visual Evoked Potential (HFCC-SSVEP). Firstly, we studied the high-frequency (beyond 25 Hz) SSVEP response, with the paradigm presented on an LED. The SNR (Signal to Noise Ratio) of the high-frequency (beyond 40 Hz) response is very low and cannot be distinguished by traditional analysis methods. Secondly, we investigated the HFCC-SSVEP response (beyond 25 Hz) for 3 frequencies (25 Hz, 33.33 Hz, and 40 Hz); HFCC-SSVEP produces n^n targets with n high stimulation frequencies through frequency combination coding. Further, an improved Hilbert-Huang transform (IHHT)-based variable-frequency EEG feature extraction method and a local spectrum extreme target identification algorithm are adopted to extract the time-frequency features of the proposed HFCC-SSVEP response. Linear prediction and fixed sifting (iterating 10 times) are used to overcome the shortcomings of the end effect and the stopping criterion; generalized zero-crossing (GZC) is used to compute the instantaneous frequency of the SSVEP response signals. The improved HHT-based feature extraction method for the proposed SSVEP paradigm increases recognition efficiency, thereby improving the ITR and increasing the stability of the BCI system. Moreover, SSVEPs evoked by high-frequency stimuli (beyond 25 Hz) minimally fatigue the subject and prevent safety hazards linked to photo-induced epileptic seizures, ensuring that the system is both efficient and safe. This study tests three subjects in order to verify the feasibility of the proposed method
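    The frequency combination coding idea, n^n targets from only n base frequencies, can be made concrete with a small enumeration. The sequence length and representation below are illustrative assumptions, not the authors' exact stimulation protocol:

```python
from itertools import product

BASE_FREQS = [25.0, 33.33, 40.0]  # Hz; the three frequencies used in the study

def combination_codes(freqs):
    """Assign each target a distinct length-n sequence drawn from the
    n base frequencies, giving n**n distinguishable targets."""
    n = len(freqs)
    return list(product(freqs, repeat=n))

codes = combination_codes(BASE_FREQS)
# 3 base frequencies combined over 3 time slots -> 3**3 = 27 candidate targets
```

    Decoding then reduces to identifying, slot by slot, which base frequency the measured instantaneous frequency matches, which is where the IHHT-based feature extraction comes in.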

  7. Efficiency and Loading Evaluation of High Efficiency Mist Eliminators (HEME) - 12003

    Energy Technology Data Exchange (ETDEWEB)

    Giffin, Paxton K.; Parsons, Michael S.; Waggoner, Charles A. [Institute for Clean Energy Technology, Mississippi State University, 205 Research Blvd Starkville, MS 39759 (United States)

    2012-07-01

    High efficiency mist eliminators (HEME) are filters primarily used to remove moisture and/or liquid aerosols from an air stream. HEME elements are designed to reduce aerosol and particulate load on primary High Efficiency Particulate Air (HEPA) filters and to have a liquid particle removal efficiency of approximately 99.5% for aerosols down to sub-micron size particulates. The investigation presented here evaluates the loading capacity of the element in the absence of a water spray cleaning system. The theory is that without the cleaning system, the HEME element will suffer rapid buildup of solid aerosols, greatly reducing the particle loading capacity. Evaluation consists of challenging the element with a waste surrogate dry aerosol and di-octyl phthalate (DOP) at varying intervals of differential pressure to examine the filtering efficiency of three different element designs at three different media velocities. Also, the elements are challenged with a liquid waste surrogate using Laskin nozzles and large dispersion nozzles. These tests allow the loading capacity of the unit to be determined and the effectiveness of washing down the interior of the elements to be evaluated. (authors)

  8. Heat pumps; Synergy of high efficiency and low carbon electricity

    Energy Technology Data Exchange (ETDEWEB)

    Koike, Akio

    2010-09-15

    Heat pump is attracting wide attention for its high efficiency to utilize inexhaustible and renewable ambient heat in the environment. With its rapid innovation and efficiency improvement, this technology has a huge potential to reduce CO2 emissions by replacing currently widespread fossil fuel combustion systems to meet various heat demands from the residential, commercial and industrial sectors. Barriers to deployment such as low public awareness and a relatively long pay-back period do exist, so it is strongly recommended that each country implement policies to promote heat pumps as a renewable energy option and an effective method to combat global warming.

  9. Development of large area, high efficiency amorphous silicon solar cell

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, K.S.; Kim, S.; Kim, D.W. [Yu Kong Taedok Institute of Technology (Korea, Republic of)

    1996-02-01

    The objective of the research is to develop mass-production technologies for high efficiency amorphous silicon solar cells in order to reduce the cost of solar cells and promote their dissemination. The amorphous silicon solar cell is the most promising thin film solar cell option, as thin film cells are relatively easy to make cost-competitive. The final goal of the research is to develop amorphous silicon solar cells having an efficiency of 10% and a light-induced degradation ratio of 15% over an area of 1200 cm{sup 2}, and to test the cells in the form of a 2 kW grid-connected photovoltaic system. (author) 35 refs., 8 tabs., 67 figs.

  10. Iodine laser of high efficiency and fast repetition rate

    Energy Technology Data Exchange (ETDEWEB)

    Hohla, K; Witte, K J

    1976-07-01

    The scaling laws of an iodine laser of high efficiency and fast repetition rate are reported. The laser is pumped with a new kind of low-pressure Hg UV lamps which convert 32% of the electrical input into UV light within the absorption band of the iodine laser and which can be fired at up to 100 Hz. Details of a 10 kJ/1 nsec system, such as dimensions, energy density, repetition rate, flow velocity, gas composition and gas pressure, are given, along with the overall efficiency, which is expected to be about 2%.

  11. The problems of high efficient extraction from the isochronous cyclotron

    International Nuclear Information System (INIS)

    Schwabe, J.

    1994-06-01

    The problem of highly efficient extraction (η ≥ 50%) from isochronous cyclotrons (with the exception of the stripping method) has not been completely solved to this day. This problem is especially important because these cyclotrons are also applied in the production of medical radioisotopes and labeled pharmaceuticals, as well as in neutron therapy (oncology), machine industry, agriculture (plant mutagenesis), etc. The aim of the proposed topic is to solve this problem on the AIC-144 isochronous cyclotron at the INP (Institute of Nuclear Physics). Recently, a beam of 20 MeV deuterons was extracted from this cyclotron with an efficiency of ca. 15%. (author). 25 refs, 14 figs

  12. Highly Flexible and Efficient Solar Steam Generation Device.

    Science.gov (United States)

    Chen, Chaoji; Li, Yiju; Song, Jianwei; Yang, Zhi; Kuang, Yudi; Hitz, Emily; Jia, Chao; Gong, Amy; Jiang, Feng; Zhu, J Y; Yang, Bao; Xie, Jia; Hu, Liangbing

    2017-08-01

    Solar steam generation with subsequent steam recondensation has been regarded as one of the most promising techniques to utilize the abundant solar energy and sea water or other unpurified water through water purification, desalination, and distillation. Although tremendous efforts have been dedicated to developing high-efficiency solar steam generation devices, challenges remain in terms of relatively low efficiency, complicated fabrication, high cost, and inability to scale up. Here, inspired by the water transpiration behavior of trees, the use of a carbon nanotube (CNT)-modified flexible wood membrane (F-Wood/CNTs) is demonstrated as a flexible, portable, recyclable, and efficient solar steam generation device for low-cost and scalable solar steam generation applications. Benefitting from the unique structural merits of the F-Wood/CNTs membrane (a black CNT-coated hair-like surface with excellent light absorbability, a wood matrix with low thermal conductivity, and hierarchical micro- and nanochannels for water pumping and escaping), a solar steam generation device based on the F-Wood/CNTs membrane demonstrates a high efficiency of 81% at 10 kW m⁻², representing one of the highest values ever reported. The nature-inspired design concept in this study is straightforward and easily scalable, representing one of the most promising solutions for renewable and portable solar energy generation and other related phase-change applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
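    An efficiency figure like this can be sanity-checked with the standard latent-heat balance for solar steam generators: efficiency equals the evaporation rate times the latent heat of vaporization, divided by the incident flux. The evaporation rate below is a hypothetical round number chosen for illustration, not a value taken from the paper:

```python
H_FG = 2.26e6  # J/kg, approximate latent heat of vaporization of water

def steam_efficiency(evap_rate_kg_per_m2_h, flux_w_per_m2):
    """Solar-to-steam efficiency: latent heat carried off by the vapor
    divided by the incident solar flux."""
    m_dot = evap_rate_kg_per_m2_h / 3600.0  # convert to kg per m^2 per second
    return m_dot * H_FG / flux_w_per_m2

eta = steam_efficiency(12.9, 10_000.0)  # assumed ~12.9 kg/m^2/h under 10 kW/m^2
```

    Under these assumed numbers the balance gives an efficiency of roughly 0.81, consistent in magnitude with the reported value.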

  13. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC): gap analysis for high fidelity and performance assessment code development

    International Nuclear Information System (INIS)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-01-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  14. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  15. Development of Multi-Scale Finite Element Analysis Codes for High Formability Sheet Metal Generation

    International Nuclear Information System (INIS)

    Nnakamachi, Eiji; Kuramae, Hiroyuki; Ngoc Tam, Nguyen; Nakamura, Yasunori; Sakamoto, Hidetoshi; Morimoto, Hideo

    2007-01-01

    In this study, dynamic- and static-explicit multi-scale finite element (F.E.) codes are developed by employing the homogenization method, the crystal plasticity constitutive equation, and an SEM-EBSD-measurement-based polycrystal model. These can predict the crystal morphological change and the hardening evolution at the micro level, as well as the macroscopic plastic anisotropy evolution. The codes are applied to analyze the asymmetric rolling process, which is introduced to control the crystal texture of the sheet metal in order to generate a high formability sheet metal. The codes can predict the yield surface and the sheet formability by analyzing strain-path-dependent yield and simple sheet forming processes, such as the limit dome height test and cylindrical deep drawing problems. It is shown that shear-dominant rolling processes, such as asymmetric rolling, generate ''high formability'' textures and eventually a high formability sheet. The texture evolution and the high formability of the newly generated sheet metal were experimentally confirmed by SEM-EBSD measurement and the LDH test. It is concluded that these explicit-type crystallographic homogenized multi-scale F.E. codes could be a comprehensive tool to predict the rolling-induced texture evolution, anisotropy and formability through rolling process and limit dome height test analyses

  16. Simple processing of high efficiency silicon solar cells

    International Nuclear Information System (INIS)

    Hamammu, I.M.; Ibrahim, K.

    2006-01-01

    Cost-effective photovoltaic devices have been an area of research since the development of the first solar cells, as cost is the major factor in their usage. Silicon solar cells have the biggest share of the photovoltaic market, though silicon is not the optimal material for solar cells. This work introduces a simplified approach to high efficiency silicon solar cell processing, minimizing the processing steps and thereby reducing cost. The suggested procedure might also allow the use of lower-quality materials than those used today. The main features of the present work are: simplifying the diffusion process, edge shunt isolation, and using acidic texturing instead of the standard alkaline processing. Solar cells of 17% efficiency have been produced using this procedure. Investigations into the possibility of improving the efficiency and using lower-quality material are still underway

  17. High efficiency graphene coated copper based thermocells connected in series

    Science.gov (United States)

    Sindhuja, Mani; Indubala, Emayavaramban; Sudha, Venkatachalam; Harinipriya, Seshadri

    2018-04-01

    Conversion of low-grade waste heat into electricity has so far been studied employing single thermocells or flow cells. Thermocells based on graphene-coated copper electrodes connected in series displayed relatively high thermal energy harvesting efficiency. A maximum power output of 49.2 W/m² for normalized cross-sectional electrode area is obtained at an inter-electrode temperature difference of 60 °C. A relative Carnot efficiency of 20.2% is obtained from the device. The importance of reducing the mass transfer and ion transfer resistance to improve the efficiency of the device is demonstrated. Degradation studies confirmed mild oxidation of the copper foil due to corrosion caused by the electrolyte.

  18. High Efficiency Graphene Coated Copper Based Thermocells Connected in Series

    Directory of Open Access Journals (Sweden)

    Mani Sindhuja

    2018-04-01

    Full Text Available Conversion of low-grade waste heat into electricity has so far been studied employing single thermocells or flow cells. Thermocells based on graphene-coated copper electrodes connected in series displayed relatively high thermal energy harvesting efficiency. A maximum power output of 49.2 W/m² for normalized cross-sectional electrode area is obtained at an inter-electrode temperature difference of 60 °C. A relative Carnot efficiency of 20.2% is obtained from the device. The importance of reducing the mass transfer and ion transfer resistance to improve the efficiency of the device is demonstrated. Degradation studies confirmed mild oxidation of the copper foil due to corrosion caused by the electrolyte.

  19. Rigid-beam model of a high-efficiency magnicon

    International Nuclear Information System (INIS)

    Rees, D.E.; Tallerico, P.J.; Humphries, S.J. Jr.

    1993-01-01

    The magnicon is a new type of high-efficiency deflection-modulated amplifier developed at the Institute of Nuclear Physics in Novosibirsk, Russia. The prototype pulsed magnicon achieved an output power of 2.4 MW and an efficiency of 73% at 915 MHz. This paper presents the results of a rigid-beam model for a 700-MHz, 2.5-MW, 82%-efficient magnicon. The rigid-beam model allows the beam dynamics to be characterized by tracking only a single electron. The magnicon design presented consists of a drive cavity; passive cavities; a pi-mode, coupled-deflection cavity; and an output cavity. It represents an optimized design. The model is fully self-consistent, and this paper presents the details of the model and the calculated performance of a 2.5-MW magnicon

  20. HIGH JET EFFICIENCY AND SIMULATIONS OF BLACK HOLE MAGNETOSPHERES

    International Nuclear Information System (INIS)

    Punsly, Brian

    2011-01-01

    This Letter reports on a growing body of observational evidence that many powerful lobe-dominated (FR II) radio sources likely have jets with high efficiency. This study extends the maximum efficiency line (jet power ∼25 times the thermal luminosity) defined in Fernandes et al. so as to span four decades of jet power. The fact that this line extends over the full span of FR II radio power is a strong indication that this is a fundamental property of jet production that is independent of accretion power. This is a valuable constraint for theorists. For example, the currently popular 'no-net-flux' numerical models of black hole accretion produce jets that are two to three orders of magnitude too weak to be consistent with sources near maximum efficiency.

  1. High efficiency particulate removal with sintered metal filters

    International Nuclear Information System (INIS)

    Kirstein, B.E.; Paplawsky, W.J.; Pence, D.T.; Hedahl, T.G.

    1981-01-01

    Because of their particle removal efficiencies and durability, sintered metal filters have been chosen for high efficiency particulate air (HEPA) filter protection in the off-gas treatment system for the proposed Idaho National Engineering Laboratory Transuranic Waste Treatment Facility. Process evaluation of sintered metal filters indicated a lack of sufficient process design data to ensure trouble-free operation. Subsequent pilot-scale testing was performed with flyash as the test particulate. The test results showed that the sintered metal filters can have an efficiency greater than 0.9999999 for the specific test conditions used. Stable pressure drop characteristics were observed in pulsed and reversed flow blowback modes of operation. Over 4900 hours of operation were obtained, with operating conditions ranging up to approximately 90 °C and 24 vol % water vapor in the gas stream
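    A removal efficiency quoted as 0.9999999 is often easier to read as a decontamination factor (upstream concentration over downstream concentration). The one-line conversion below is the generic definition, not a calculation specific to these tests:

```python
def decontamination_factor(efficiency):
    """DF = upstream / downstream concentration = 1 / penetration."""
    return 1.0 / (1.0 - efficiency)

df = decontamination_factor(0.9999999)
# an efficiency of 0.9999999 corresponds to a DF on the order of 1e7
```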

  2. 3rd symposium on high-efficiency boiler technology: potential, performance, shortcomings of natural gas fuelled high-efficiency boilers

    International Nuclear Information System (INIS)

    1993-01-01

    The brochure contains abstracts of the papers presented at the symposium. The potential, performance and marketing problems of natural gas high-efficiency boiler systems are outlined, and new ideas are presented for gas utilities, producers of appliances, fitters, and chimneysweeps. 13 papers are available as separate records in this database. (HW) [de

  3. Two dimensional code for modeling of high ion cyclotron harmonic fast wave heating and current drive

    International Nuclear Information System (INIS)

    Grekov, D.; Kasilov, S.; Kernbichler, W.

    2016-01-01

    A two dimensional numerical code for computation of the electromagnetic field of a fast magnetosonic wave in a tokamak at high harmonics of the ion cyclotron frequency has been developed. The code computes the finite difference solution of the Maxwell equations for separate toroidal harmonics, making use of the toroidal symmetry of tokamak plasmas. Proper boundary conditions are prescribed on a realistic tokamak vessel. The currents in the RF antenna are specified externally and then used in Ampère's law. The main poloidal tokamak magnetic field and the ''kinetic'' part of the dielectric permeability tensor are treated iteratively. The code has been verified against known analytical solutions, and first calculations of current drive in a spherical torus are presented.
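    The per-harmonic frequency-domain approach described above amounts, in its simplest form, to solving a Helmholtz-type boundary value problem for each harmonic. The sketch below is not the authors' code; it is a hypothetical, drastically simplified 1-D analogue (a scalar Helmholtz equation with perfectly conducting walls and a uniform source standing in for the antenna current) that shows the finite-difference structure of such a solve:

```python
import numpy as np

def helmholtz_1d(k, n=200, length=1.0):
    """Solve u'' + k^2 u = f on (0, L) with u(0) = u(L) = 0 by central
    finite differences -- a toy 1-D analogue of a per-harmonic
    frequency-domain wave solve with a prescribed source term."""
    h = length / (n + 1)
    x = np.linspace(h, length - h, n)      # interior grid points
    f = np.ones(n)                          # unit source everywhere
    # Tridiagonal second-difference operator plus the k^2 identity term.
    A = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2 + (k**2) * np.eye(n)
    return x, np.linalg.solve(A, f)
```

    For this toy problem the exact solution is known in closed form, which makes the second-order accuracy of the scheme easy to verify.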

  4. Modification in the FUDA computer code to predict fuel performance at high burnup

    Energy Technology Data Exchange (ETDEWEB)

    Das, M; Arunakumar, B V; Prasad, P N [Nuclear Power Corp., Mumbai (India)

    1997-08-01

    The computer code FUDA (FUel Design Analysis) participated in the blind exercises organized by the IAEA CRP (Co-ordinated Research Programme) on FUMEX (Fuel Modelling at Extended Burnup). While the code predictions compared well with the experiments at Halden under various parametric and operating conditions, the fission gas release and fission gas pressure were found to be slightly over-predicted, particularly at high burnups. In view of the results of the 6 FUMEX cases, the main models and submodels of the code were reviewed and necessary improvements were made. The new version of the code, FUDA MOD 2, is now able to predict fuel performance parameters for burnups up to 50000 MWD/TeU. The validation field of the code has been extended to prediction of thorium oxide fuel performance. An analysis of local deformations at pellet interfaces and near the end caps is carried out considering the hourglassing of the pellet by the finite element technique. (author). 15 refs, 1 fig.

  5. High performance mixed optical CDMA system using ZCC code and multiband OFDM

    Science.gov (United States)

    Nawawi, N. M.; Anuar, M. S.; Junita, M. N.; Rashidi, C. B. M.

    2017-11-01

    In this paper, we have proposed a high performance network design based on a mixed optical Code Division Multiple Access (CDMA) system using a Zero Cross Correlation (ZCC) code and multiband Orthogonal Frequency Division Multiplexing (OFDM), called catenated OFDM. We also investigate the relevant design parameters: effective power, number of users, number of bands, code length and code weight. We then analyze the system performance comprehensively for up to five OFDM bands. The feasibility of the proposed system architecture is verified via numerical analysis. The results demonstrate that the proposed modulation scheme significantly increases the total number of users, by up to 80% for five catenated bands compared to a traditional optical CDMA system, with a code length of 80 at 622 Mbps. It is also demonstrated that the BER performance depends strongly on the code weight, especially for small numbers of users: as the weight increases, the BER performance improves.

  6. Analysis and application of ratcheting evaluation procedure of Japanese high temperature design code DDS

    International Nuclear Information System (INIS)

    Lee, H. Y.; Kim, J. B.; Lee, J. H.

    2002-01-01

    In this study, the evaluation procedure of the Japanese DDS code, recently developed to assess the progressive inelastic deformation occurring under repeated secondary stresses, was analyzed, and the evaluation results according to DDS were compared with those of the thermal ratchet structural test carried out by KAERI in order to assess the conservativeness of the code. The existing high temperature codes, US ASME-NH and French RCC-MR, provide ratcheting procedures only for load cases of cyclic secondary stresses under primary stresses. They are therefore ill-suited to the actual ratcheting problem, which can occur under cyclic secondary membrane stresses due to the movement of the hot free surface in a pool-type LMR. DDS explicitly provides an analysis procedure for ratcheting due to moving thermal gradients near the hot free surface. A comparison between the results obtained with the DDS design code and with the structural test showed that the DDS evaluation results were in good agreement with those of the structural test

  7. High performance mixed optical CDMA system using ZCC code and multiband OFDM

    Directory of Open Access Journals (Sweden)

    Nawawi N. M.

    2017-01-01

    Full Text Available In this paper, we have proposed a high performance network design based on a mixed optical Code Division Multiple Access (CDMA) system using a Zero Cross Correlation (ZCC) code and multiband Orthogonal Frequency Division Multiplexing (OFDM), called catenated OFDM. We also investigate the relevant design parameters: effective power, number of users, number of bands, code length and code weight. We then analyze the system performance comprehensively for up to five OFDM bands. The feasibility of the proposed system architecture is verified via numerical analysis. The results demonstrate that the proposed modulation scheme significantly increases the total number of users, by up to 80% for five catenated bands compared to a traditional optical CDMA system, with a code length of 80 at 622 Mbps. It is also demonstrated that the BER performance depends strongly on the code weight, especially for small numbers of users: as the weight increases, the BER performance improves.

  8. Modification in the FUDA computer code to predict fuel performance at high burnup

    International Nuclear Information System (INIS)

    Das, M.; Arunakumar, B.V.; Prasad, P.N.

    1997-01-01

    The computer code FUDA (FUel Design Analysis) participated in the blind exercises organized by the IAEA CRP (Co-ordinated Research Programme) on FUMEX (Fuel Modelling at Extended Burnup). While the code predictions compared well with the experiments at Halden under various parametric and operating conditions, the fission gas release and fission gas pressure were found to be slightly over-predicted, particularly at high burnups. In view of the results of the 6 FUMEX cases, the main models and submodels of the code were reviewed and necessary improvements were made. The new version of the code, FUDA MOD 2, is now able to predict fuel performance parameters for burnups up to 50000 MWD/TeU. The validation field of the code has been extended to prediction of thorium oxide fuel performance. An analysis of local deformations at pellet interfaces and near the end caps is carried out considering the hourglassing of the pellet by the finite element technique. (author). 15 refs, 1 fig

  9. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

    Science.gov (United States)

    Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

    2015-10-01

    In this paper, we propose a high performance optical encryption (OE) scheme based on computational ghost imaging (GI) with a QR code and the compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, form a secret key that is shared with the authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. The measurement results from the GI optical system's bucket detector constitute the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image using the GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can completely recover the original image, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. For the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
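    As background to the scheme above, here is a minimal numerical sketch of the computational-ghost-imaging step: random patterns (standing in for the speckle produced by the shared phase screens) act as the key, the bucket values are the ciphertext, and a simple correlation reconstruction recovers the image. This is an illustrative toy, not the authors' QR-CGI-OE implementation; it omits the QR coding and compressive-sensing stages entirely:

```python
import numpy as np

rng = np.random.default_rng(0)

def gi_encrypt(image, patterns):
    # Each "bucket" value is the total intensity after the object masks one
    # illumination pattern: y_i = sum_j P_ij * x_j. The sequence y is the
    # ciphertext; without the patterns it carries no spatial information.
    return patterns.reshape(len(patterns), -1) @ image.ravel()

def gi_decrypt(bucket, patterns):
    # Differential ghost-imaging reconstruction:
    # x_hat_j ~ mean_i (y_i - <y>) * (P_ij - <P_j>)
    P = patterns.reshape(len(patterns), -1)
    return (bucket - bucket.mean()) @ (P - P.mean(axis=0)) / len(bucket)

# Toy example: an 8x8 binary object and 4000 random key patterns.
obj = (rng.random((8, 8)) > 0.5).astype(float)
key = rng.random((4000, 8, 8))
cipher = gi_encrypt(obj, key)
recon = gi_decrypt(cipher, key)
```

    With the key patterns, the correlation estimate converges to the object as the number of patterns grows; an eavesdropper holding only the bucket sequence has nothing spatial to correlate against.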

  10. Dynamic Shannon Coding

    OpenAIRE

    Gagie, Travis

    2005-01-01

    We present a new algorithm for dynamic prefix-free coding, based on Shannon coding. We give a simple analysis and prove a better upper bound on the length of the encoding produced than the corresponding bound for dynamic Huffman coding. We show how our algorithm can be modified for efficient length-restricted coding, alphabetic coding and coding with unequal letter costs.
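    For readers unfamiliar with the static code that the dynamic algorithm builds on: Shannon coding assigns symbol i (symbols sorted by decreasing probability) a codeword of ⌈log2(1/p_i)⌉ bits taken from the binary expansion of the cumulative probability of its predecessors, which guarantees a prefix-free code with expected length within one bit of the entropy. A small illustrative implementation of the static version (not the paper's dynamic variant):

```python
import math

def shannon_code(probs):
    """Static Shannon code: symbol i (sorted by decreasing probability)
    gets the first ceil(log2(1/p_i)) bits of the binary expansion of the
    cumulative probability of the symbols preceding it."""
    items = sorted(probs.items(), key=lambda kv: -kv[1])
    code, cum = {}, 0.0
    for sym, p in items:
        length = max(1, math.ceil(-math.log2(p)))
        bits, frac = [], cum
        for _ in range(length):
            frac *= 2
            bit, frac = int(frac), frac - int(frac)
            bits.append(str(bit))
        code[sym] = "".join(bits)
        cum += p
    return code
```

    For p = (1/2, 1/4, 1/8, 1/8) this yields the codewords 0, 10, 110, 111, matching the entropy exactly; the paper's dynamic algorithm maintains such a code as the observed symbol frequencies evolve.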

  11. Lightweight High Efficiency Electric Motors for Space Applications

    Science.gov (United States)

    Robertson, Glen A.; Tyler, Tony R.; Piper, P. J.

    2011-01-01

    Lightweight high efficiency electric motors are needed across a wide range of space applications: thrust vector actuator control for launch and flight; general vehicle, base camp habitat and experiment control for various mechanisms; and robotics for stationary and mobile space exploration missions. QM Power's Parallel Path Magnetic Technology Motors have steadily proven themselves a leading motor technology in this area, winning a NASA Phase II for "Lightweight High Efficiency Electric Motors and Actuators for Low Temperature Mobility and Robotics Applications", a US Army Phase II SBIR for "Improved Robot Actuator Motors for Medical Applications", an NSF Phase II SBIR for "Novel Low-Cost Electric Motors for Variable Speed Applications" and a DOE SBIR Phase I for "High Efficiency Commercial Refrigeration Motors". Parallel Path Magnetic Technology obtains the benefits of permanent magnets while minimizing the historical trade-offs and limitations of conventional permanent magnet designs. The resulting devices are smaller, lighter, lower cost and more efficient than competitive permanent magnet and non-permanent-magnet designs. QM Power's motors have been extensively tested and successfully validated by multiple commercial and aerospace customers and partners, such as Boeing Research and Technology. Prototypes have been built between 0.1 and 10 HP, and motors are being scaled to over 100 kW with development partners. In this paper, Parallel Path Magnetic Technology Motors are discussed, specifically addressing their higher efficiency, higher power density, lighter weight, smaller physical size, higher low-end torque, wider power zone, cooler temperatures, and greater reliability, with lower cost and significant environmental benefit for the same peak output power compared to typical motors. A further discussion of the inherent redundancy of these motors for space applications is also provided.

  12. Load balancing in highly parallel processing of Monte Carlo code for particle transport

    International Nuclear Information System (INIS)

    Higuchi, Kenji; Takemiya, Hiroshi; Kawasaki, Takuji

    1998-01-01

    In parallel processing of Monte Carlo (MC) codes for neutron, photon and electron transport problems, particle histories are assigned to processors, making use of the independence of the calculation for each particle. Although the main part of an MC code is easily parallelized this way, optimizing the code for load balance is necessary, and in practice difficult, if a high speedup ratio is to be attained in highly parallel processing. In fact, the speedup ratio with 128 processors remained at nearly one hundred on the test bed used for performance evaluation. Through parallel processing of the MCNP code, which is widely used in the nuclear field, it is shown that static load balancing has difficulty attaining high performance, especially in neutron transport problems, and that a load balancing method which dynamically changes the number of assigned particles, minimizing the sum of the computational and communication costs, overcomes the difficulty, yielding nearly a fifteen percent reduction in execution time. (author)
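    A minimal sketch of the underlying idea: give each processor a particle share proportional to its measured processing rate rather than an equal share, so the slowest processor no longer dictates the wall time. This illustrates only rate-proportional rebalancing, not the paper's dynamic scheme (which also folds communication cost into the objective):

```python
def proportional_split(total, rates):
    """Split `total` particle histories in proportion to per-processor
    throughput `rates` (histories per unit time)."""
    shares = [int(total * r / sum(rates)) for r in rates]
    shares[0] += total - sum(shares)  # hand any rounding remainder to one rank
    return shares

def makespan(shares, rates):
    """Wall time of the slowest processor for a given assignment."""
    return max(s / r for s, r in zip(shares, rates))
```

    With rates (1, 2, 1) and 400 histories, the proportional split (100, 200, 100) finishes in 100 time units, whereas an equal split of roughly 133 histories per rank is held back by the slowest processor at about 134 units.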

  13. Test Program for High Efficiency Gas Turbine Exhaust Diffuser

    Energy Technology Data Exchange (ETDEWEB)

    Norris, Thomas R.

    2009-12-31

    This research relates to improving the efficiency of flow in a turbine exhaust, and thus that of the turbine and power plant. The Phase I SBIR project demonstrated the technical viability of “strutlets” to control stalls on a model diffuser strut. Strutlets are a novel flow-improving vane concept intended to improve the efficiency of flow in turbine exhausts. Strutlets can help reduce turbine back pressure and incrementally improve turbine efficiency, increase power, and reduce greenhouse gas emissions. The long-term goal is a 0.5 percent improvement in each item, averaged over the US gas turbine fleet. The strutlets were tested in a physical scale model of a gas turbine exhaust diffuser. The test flow passage is a straight, annular diffuser with three sets of struts. At the end of Phase I, the ability of strutlets to keep flow attached to struts was demonstrated, but the strutlet drag was too high for a net efficiency advantage. An independently sponsored follow-up project did develop a highly modified low-drag strutlet. In combination with other flow-improving vanes, compliance with the stated goals was demonstrated for simple-cycle power plants, and with most of the goals for combined-cycle power plants using this particular exhaust geometry. Importantly, low-frequency diffuser noise was reduced by 5 dB or more compared to the baseline. Applicability to other diffuser geometries is yet to be demonstrated.

  14. Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

    Science.gov (United States)

    Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph

    2017-10-01

    In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation and performance. It is customary to use an ℓ1 penalty to enforce sparsity in such scenarios. Sparsity-enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimension. For efficiency, they rely on tuning a parameter that trades data fitting against sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter and the noise level. This has been considered under several names in the literature: Scaled Lasso, Square-root Lasso and Concomitant Lasso estimation, for instance, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties with the Concomitant Lasso formulation, we propose a modification, coined the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver with a computational cost no higher than that of the Lasso. We build on standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm, combined with safe screening rules that eliminate irrelevant features early to achieve speed.
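    The coordinate-descent ingredient mentioned above can be illustrated on the plain Lasso. The sketch below is a generic cyclic coordinate-descent solver with soft-thresholding updates (no safe screening and no concomitant noise estimation), shown only to make the algorithmic building block concrete:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * |.|: shrink z toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/2)||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)   # per-column squared norms
    r = y - X @ b                   # residual, updated incrementally
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual correlation for coordinate j.
            rho = X[:, j] @ r + col_sq[j] * b[j]
            new = soft_threshold(rho, lam) / col_sq[j]
            r += X[:, j] * (b[j] - new)
            b[j] = new
    return b
```

    When the columns of X are orthonormal, the iteration converges in one sweep to the closed-form solution soft_threshold(Xᵀy, λ), which is a convenient correctness check.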

  15. High Efficiency, High Temperature Foam Core Heat Exchanger for Fission Surface Power Systems, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Fission-based power systems with power levels of 30 to ≥100 kWe will be needed for planetary surface bases. Development of high temperature, high efficiency heat...

  16. Combustion phasing for maximum efficiency for conventional and high efficiency engines

    International Nuclear Information System (INIS)

    Caton, Jerald A.

    2014-01-01

    Highlights: • Combustion phasing for maximum efficiency is a function of engine parameters. • Combustion phasing is most affected by heat transfer, compression ratio and burn duration. • Combustion phasing is less affected by speed, load, equivalence ratio and EGR. • Combustion phasing for a high efficiency engine was more advanced. • Exergy destruction during combustion as a function of combustion phasing is reported. - Abstract: The importance of the phasing of the combustion event for internal-combustion engines is well appreciated, but quantitative details are sparse. The objective of the current work was to examine the optimum combustion phasing (based on maximum bmep) as a function of engine design and operating variables. A thermodynamic engine cycle simulation was used to complete this assessment. As metrics for the combustion phasing, both the crank angle for 50% fuel mass burned (CA50) and the crank angle for peak pressure (CApp) are reported as functions of the engine variables. In contrast to common statements in the literature, the optimum CA50 and CApp vary depending on the design and operating variables. Optimum, as used in this paper, refers to the combustion timing that provides the maximum bmep and brake thermal efficiency (MBT timing). For this work, the variables with the greatest influence on the optimum CA50 and CApp were the heat transfer level, the burn duration and the compression ratio. Other variables such as equivalence ratio, EGR level, engine speed and engine load had a much smaller impact on the optimum CA50 and CApp. For the conventional engine, for the conditions examined, the optimum CA50 varied between about 5 and 11° aTDC, and the optimum CApp varied between about 9 and 16° aTDC. For a high efficiency engine (high dilution, high compression ratio), the optimum CA50 was 2.5° aTDC, and the optimum CApp was 7.8° aTDC.

  17. SimProp: a simulation code for ultra high energy cosmic ray propagation

    International Nuclear Information System (INIS)

    Aloisio, R.; Grillo, A.F.; Boncioli, D.; Petrera, S.; Salamida, F.

    2012-01-01

    A new Monte Carlo simulation code for the propagation of Ultra High Energy Cosmic Rays is presented. The results of this simulation scheme are tested by comparison with results of another Monte Carlo computation as well as with the results obtained by directly solving the kinetic equation for the propagation of Ultra High Energy Cosmic Rays. A short comparison with the latest flux published by the Pierre Auger collaboration is also presented

  18. High resolution PET breast imager with improved detection efficiency

    Science.gov (United States)

    Majewski, Stanislaw

    2010-06-08

    A highly efficient PET breast imager for detecting lesions in the entire breast, including those located close to the patient's chest wall. The breast imager includes a ring of imaging modules surrounding the imaged breast. Each imaging module includes a slant imaging light guide inserted between a gamma radiation sensor and a photodetector. The slant light guide permits the gamma radiation sensors to be placed in close proximity to the skin of the chest wall, thereby extending the sensitive region of the imager to the base of the breast. Several types of photodetectors are proposed for use in the detector modules, with compact silicon photomultipliers as the preferred choice due to their compactness. The geometry of the detector heads and the arrangement of the detector ring significantly reduce dead regions, thereby improving detection efficiency for lesions located close to the chest wall.

  19. The high efficiency steel filters for nuclear air cleaning

    International Nuclear Information System (INIS)

    Bergman, W.; Larsen, G.; Lopez, R.; Williams, K.; Violet, C.

    1990-08-01

    We have, in cooperation with industry, developed high-efficiency filters made from sintered stainless-steel fibers for use in several air-cleaning applications in the nuclear industry. These filters were developed to overcome the failure modes in present high-efficiency particulate air (HEPA) filters. HEPA filters are made from glass paper and glue, and they may fail when they get hot or wet and when they are overpressured. In developing our steel filters, we first evaluated the commercially available stainless-steel filter media made from sintered powder and sintered fiber. The sintered-fiber media performed much better than sintered-powder media, and the best media had the smallest fiber diameter. Using the best media, we then built prototype filters for venting compressed gases and evaluated them in our automated filter tester. 12 refs., 20 figs

  20. High efficiency steel filters for nuclear air cleaning

    International Nuclear Information System (INIS)

    Bergman, W.; Conner, J.; Larsen, G.; Lopez, R.; Turner, C.; Vahla, G.; Violet, C.; Williams, K.

    1991-01-01

    The authors have, in cooperation with industry, developed high-efficiency filters made from sintered stainless-steel fibers for use in several air-cleaning applications in the nuclear industry. These filters were developed to overcome the failure modes in present high-efficiency particulate air (HEPA) filters. HEPA filters are made from glass paper and glue, and they may fail when they get hot or wet and when they are overpressured. In developing the steel filters, the authors first evaluated the commercially available stainless-steel filter media made from sintered powder and sintered fiber. The sintered-fiber media performed much better than sintered-powder media, and the best media had the smallest fiber diameter. Using the best media, prototype filters were then built for venting compressed gases and evaluated in their automated filter tester

  1. Blanket options for high-efficiency fusion power

    International Nuclear Information System (INIS)

    Usher, J.L.; Lazareth, O.W.; Fillo, J.A.; Horn, F.L.; Powell, J.R.

    1980-01-01

    The efficiencies of blankets for fusion reactors are usually in the range of 30 to 40%, limited by the operating temperatures (500 °C) of conventional structural materials such as stainless steels. In this project two-zone blankets are proposed; these blankets consist of a low-temperature shell surrounding a high-temperature interior zone. A survey of nucleonics and thermal hydraulic parameters has led to a reference blanket design consisting of a water-cooled stainless steel shell around a BeO, ZrO2 interior (cooled by argon) utilizing Li2O for tritium breeding. In this design, approximately 60% of the fusion energy is deposited in the high-temperature interior. The maximum argon temperature is 2230 °C, leading to an overall efficiency estimate of 55 to 60% for this reference case

  2. Fusion blankets for high-efficiency power cycles

    International Nuclear Information System (INIS)

    Usher, J.L.; Lazareth, O.W.; Fillo, J.A.; Horn, F.L.; Powell, J.R.

    1980-01-01

    The efficiencies of blankets for fusion reactors are usually in the range of 30 to 40%, limited by the operating temperatures (500 °C) of conventional structural materials such as stainless steels. In this project two-zone blankets are proposed; these blankets consist of a low-temperature shell surrounding a high-temperature interior zone. A survey of nucleonics and thermal hydraulic parameters has led to a reference blanket design consisting of a water-cooled stainless steel shell around a BeO, ZrO2 interior (cooled by argon) utilizing Li2O for tritium breeding. In this design, approximately 60% of the fusion energy is deposited in the high-temperature interior. The maximum argon temperature is 2230 °C, leading to an overall efficiency estimate of 55 to 60% for this reference case

  3. Fusion blanket for high-efficiency power cycles

    International Nuclear Information System (INIS)

    Usher, J.L.; Powell, J.R.; Fillo, J.A.; Horn, F.L.; Lazareth, O.W.; Taussig, R.

    1980-01-01

    The efficiencies of blankets for fusion reactors are usually in the range of 30 to 40%, limited by the operating temperature (500 °C) of conventional structural materials such as stainless steels. In this project two-zone blankets are proposed; these blankets consist of a low-temperature shell surrounding a high-temperature interior zone. A survey of nucleonics and thermal hydraulic parameters has led to a reference blanket design consisting of a water-cooled stainless steel shell around a BeO, ZrO2 interior (cooled by Ar) utilizing Li2O for tritium breeding. In this design, approx. 60% of the fusion energy is deposited in the high-temperature interior. The maximum Ar temperature is 2230 °C, leading to an overall efficiency estimate of 55 to 60% for this reference case

  4. Fusion blankets for high-efficiency power cycles

    International Nuclear Information System (INIS)

    Usher, J.L.; Lazareth, O.W.; Fillo, J.A.; Horn, F.L.; Powell, J.R.

    1981-01-01

    The efficiencies of blankets for fusion reactors are usually in the range of 30 to 40%, limited by the operating temperatures (500 deg C) of conventional structural materials such as stainless steels. In this project 'two-zone' blankets are proposed; these blankets consist of a low-temperature shell surrounding a high-temperature interior zone. A survey of nucleonics and thermal hydraulic parameters has led to a reference blanket design consisting of a water-cooled stainless steel shell around a BeO, ZrO2 interior (cooled by argon) utilizing Li2O for tritium breeding. In this design, approximately 60% of the fusion energy is deposited in the high-temperature interior. The maximum argon temperature is 2230 deg C, leading to an overall efficiency estimate of 55 to 60% for this reference case. (author)

  5. Irradiation effects on high efficiency Si solar cells

    International Nuclear Information System (INIS)

    Nguyen Duy, T.; Amingual, D.; Colardelle, P.; Bernard, J.

    1974-01-01

    By optimizing the diffusion parameters, high efficiency cells are obtained with 2 Ω·cm (13.5% AM0) and 10 Ω·cm (12.5% AM0) silicon material. These new cells have been submitted to radiation tests under 1 MeV and 2 MeV electrons and 2.5 MeV protons. Their behavior under irradiation is found to depend only on the bulk material: using the same resistivity silicon, the rate of degradation is exactly the same as that of conventional cells. The power increase, due to a better superficial response of the cell, is maintained after irradiation. These results show that the new high efficiency cells offer an end-of-life (E.O.L.) power higher than conventional cells [fr

  6. Enhancement of the Open National Combustion Code (OpenNCC) and Initial Simulation of Energy Efficient Engine Combustor

    Science.gov (United States)

    Miki, Kenji; Moder, Jeff; Liou, Meng-Sing

    2016-01-01

    In this paper, we present recent enhancements of the Open National Combustion Code (OpenNCC) and apply the OpenNCC to model a realistic combustor configuration, the Energy Efficient Engine (E3). First, we perform a series of validation tests for the newly implemented advection upstream splitting method (AUSM) and the extended version of the AUSM-family schemes (AUSM+-up), achieving good agreement with the analytical/experimental data of the validation tests. In the steady-state E3 cold flow results using the Reynolds-averaged Navier-Stokes (RANS) equations, we find a noticeable difference between the flow fields calculated by the two numerical schemes, the standard Jameson-Schmidt-Turkel (JST) scheme and the AUSM scheme. The main differences are that the AUSM scheme is less numerically dissipative and predicts much stronger reverse flow in the recirculation zone. This study indicates that the two schemes could show different flame-holding predictions and overall flame structures.

  7. Highly efficient electron vortex beams generated by nanofabricated phase holograms

    Energy Technology Data Exchange (ETDEWEB)

    Grillo, Vincenzo, E-mail: vincenzo.grillo@nano.cnr.it [CNR-Istituto Nanoscienze, Centro S3, Via G Campi 213/a, I-41125 Modena (Italy); CNR-IMEM Parco Area delle Scienze 37/A, I-43124 Parma (Italy); Carlo Gazzadi, Gian [CNR-Istituto Nanoscienze, Centro S3, Via G Campi 213/a, I-41125 Modena (Italy); Karimi, Ebrahim [CNR-Istituto Nanoscienze, Centro S3, Via G Campi 213/a, I-41125 Modena (Italy); Department of Physics, University of Ottawa, 150 Louis Pasteur, Ottawa, Ontario K1N 6N5 (Canada); Mafakheri, Erfan [Dipartimento di Fisica Informatica e Matematica, Università di Modena e Reggio Emilia, via G Campi 213/a, I-41125 Modena (Italy); Boyd, Robert W. [Department of Physics, University of Ottawa, 150 Louis Pasteur, Ottawa, Ontario K1N 6N5 (Canada); Frabboni, Stefano [CNR-Istituto Nanoscienze, Centro S3, Via G Campi 213/a, I-41125 Modena (Italy); Dipartimento di Fisica Informatica e Matematica, Università di Modena e Reggio Emilia, via G Campi 213/a, I-41125 Modena (Italy)

    2014-01-27

    We propose an improved type of holographic-plate suitable for the shaping of electron beams. The plate is fabricated by a focused ion beam on a silicon nitride membrane and introduces a controllable phase shift to the electron wavefunction. We adopted the optimal blazed-profile design for the phase hologram, which results in the generation of highly efficient (25%) electron vortex beams. This approach paves the route towards applications in nano-scale imaging and materials science.
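    The benefit of the blazed (sawtooth) phase profile adopted above can be checked with a toy scalar-diffraction calculation: a full 2π linear phase ramp per period steers essentially all transmitted amplitude into a single diffraction order, while a shallower profile spreads power across several orders. The snippet below is an illustrative thin-grating model, not a simulation of the authors' electron-optical plate (whose measured 25% efficiency also reflects absorption and fabrication limits):

```python
import numpy as np

def blaze_efficiency(depth, samples_per_period=64, periods=32):
    """First-diffraction-order efficiency of a linear (sawtooth) phase
    grating with peak phase `depth` (radians), computed by taking the FFT
    of the transmission function t(x) = exp(i * phase(x))."""
    n = samples_per_period * periods
    x = np.arange(n) / samples_per_period       # position in units of periods
    phase = depth * (x % 1.0)                   # sawtooth blaze profile
    spectrum = np.fft.fft(np.exp(1j * phase)) / n
    power = np.abs(spectrum) ** 2
    return power[periods] / power.sum()         # order +1 sits at bin `periods`
```

    For depth = 2π the first-order efficiency approaches 100%, while for depth = π it drops to roughly 4/π² ≈ 40%, the textbook sinc² factor for an under-driven blaze.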

  8. Holography as a highly efficient RG flow I: Rephrasing gravity

    OpenAIRE

    Behr, Nicolas; Kuperstein, Stanislav; Mukhopadhyay, Ayan

    2015-01-01

    We investigate how the holographic correspondence can be reformulated as a generalisation of Wilsonian RG flow in a strongly interacting large $N$ quantum field theory. We firstly define a \\textit{highly efficient RG flow} as one in which the Ward identities related to local conservation of energy, momentum and charges preserve the same form at each scale -- to achieve this it is necessary to redefine the background metric and external sources at each scale as functionals of the effective sin...

  9. Sm , Bi phosphors with high efficiency white-light-emittin

    Indian Academy of Sciences (India)

    2017-08-29

    Aug 29, 2017 ... Therefore, research on high efficiency red phosphors is very important. So far ..... ing concentration and reached a maximum at y = 8 mol%. A .... [10] Xue L P, Wang Y J, Lv P W, Chen D G, Lin Z, Liang J K et al. 2009 Crystal ... [28] Liu J, Xu B, Song C, Luo H, Zou X, Han L et al 2012 CrystEngComm.

  10. High-efficiency pumps drastically reduce energy consumption

    Energy Technology Data Exchange (ETDEWEB)

    Anon

    2002-05-01

    Wilo's Stratos pumps for air conditioning and other domestic heating applications combine the advantages of wet-runner technology with an innovative electronically commutated motor. The energy consumption of these high-efficiency pumps is halved compared with similar wet-runner designs. With vast numbers of pumps used in buildings across Europe alone, the adoption of this technology potentially offers significant energy savings. (Author)

  11. Wavy channel transistor for area efficient high performance operation

    KAUST Repository

    Fahad, Hossain M.

    2013-04-05

    We report a wavy channel FinFET-like transistor where the channel is wavy to increase its width without any area penalty, thereby increasing its drive current. Through simulation and experiments, we show that this device architecture is capable of high performance operation compared to conventional FinFETs, with comparatively higher area efficiency, lower chip latency and lower power consumption.

  12. Highly efficient electron vortex beams generated by nanofabricated phase holograms

    International Nuclear Information System (INIS)

    Grillo, Vincenzo; Gazzadi, Gian Carlo; Karimi, Ebrahim; Mafakheri, Erfan; Boyd, Robert W.; Frabboni, Stefano

    2014-01-01

    We propose an improved type of holographic-plate suitable for the shaping of electron beams. The plate is fabricated by a focused ion beam on a silicon nitride membrane and introduces a controllable phase shift to the electron wavefunction. We adopted the optimal blazed-profile design for the phase hologram, which results in the generation of highly efficient (25%) electron vortex beams. This approach paves the route towards applications in nano-scale imaging and materials science

  13. High voltage generator circuit with low power and high efficiency applied in EEPROM

    International Nuclear Information System (INIS)

    Liu Yan; Zhang Shilin; Zhao Yiqiang

    2012-01-01

    This paper presents a low power and high efficiency high voltage generator circuit embedded in electrically erasable programmable read-only memory (EEPROM). Power consumption is minimized by a capacitance divider circuit and a regulator circuit using a controlling clock switch technique. The high efficiency depends on zero threshold voltage (Vth) MOSFETs and a charge transfer switch (CTS) charge pump. The proposed high voltage generator circuit has been implemented in a 0.35 μm EEPROM CMOS process. Measured results show that the proposed high voltage generator circuit has a low power consumption of about 150.48 μW and a higher pumping efficiency (83.3%) than previously reported circuits. This high voltage generator circuit can also be widely used in low-power flash devices due to its high efficiency and low power dissipation. (semiconductor integrated circuits)
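
The abstract does not reproduce the pump equations, so the sketch below uses the textbook Dickson charge-pump output estimate (the paper's CTS pump differs in detail, and all component values here are assumed) to show why zero-threshold transfer devices raise the attainable output and pumping efficiency:

```python
# Illustrative Dickson charge-pump output estimate (values assumed, not from
# the paper). Each stage ideally adds Vdd, degraded by the stray-capacitance
# divider, the transfer device's threshold drop, and load-induced droop.

def dickson_vout(vdd, n_stages, vth, f_clk, c_pump, c_stray, i_load):
    """Classic Dickson charge-pump output-voltage estimate."""
    gain = c_pump / (c_pump + c_stray)              # per-stage coupling gain
    droop = i_load / (f_clk * (c_pump + c_stray))   # per-stage load droop
    return vdd + n_stages * (vdd * gain - vth - droop)

# Conventional NMOS transfer switches (Vth ~ 0.7 V) vs. zero-Vth devices:
v_conv = dickson_vout(3.3, 4, 0.7, 20e6, 10e-12, 1e-12, 10e-6)  # ~12.3 V
v_zero = dickson_vout(3.3, 4, 0.0, 20e6, 10e-12, 1e-12, 10e-6)  # ~15.1 V
```

With every other parameter fixed, removing the threshold drop recovers Vth per stage, which is the mechanism the abstract credits for the high pumping efficiency.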

  14. Development and evaluation of a cleanable high efficiency steel filter

    International Nuclear Information System (INIS)

    Bergman, W.; Larsen, G.; Weber, F.; Wilson, P.; Lopez, R.; Valha, G.; Conner, J.; Garr, J.; Williams, K.; Biermann, A.; Wilson, K.; Moore, P.; Gellner, C.; Rapchun, D.; Simon, K.; Turley, J.; Frye, L.; Monroe, D.

    1993-01-01

    We have developed a high efficiency steel filter that can be cleaned in-situ by reverse air pulses. The filter consists of 64 pleated cylindrical filter elements packaged into a 610 x 610 x 292 mm aluminum frame and has 13.5 m² of filter area. The filter media consists of a sintered steel fiber mat using 2 μm diameter fibers. We conducted an optimization study for filter efficiency and pressure drop to determine the filter design parameters of pleat width, pleat depth, outside diameter of the cylinder, and the total number of cylinders. Several prototype cylinders were then built and evaluated in terms of filter cleaning by reverse air pulses. The results of these studies were used to build the high efficiency steel filter. We evaluated the prototype filter for efficiency and cleanability. The DOP filter certification test showed the filter has a passing efficiency of 99.99% but a failing pressure drop of 0.80 kPa at 1,700 m³/hr. Since we were not able to achieve a pressure drop less than 0.25 kPa, the steel filter does not meet all the criteria for a HEPA filter. Filter loading and cleaning tests using AC Fine dust showed the filter could be repeatedly cleaned by reverse air pulses. The next phase of the prototype evaluation consisted of installing the unit and support housing in the exhaust duct work of a uranium grit blaster for a field evaluation at the Y-12 Plant in Oak Ridge, TN. The grit blaster is used to clean the surface of uranium parts and generates a cloud of UO₂ aerosols. We used a 1,700 m³/hr slip stream from the 10,200 m³/hr exhaust system

  15. Telescoping Solar Array Concept for Achieving High Packaging Efficiency

    Science.gov (United States)

    Mikulas, Martin; Pappa, Richard; Warren, Jay; Rose, Geoff

    2015-01-01

    Lightweight, high-efficiency solar arrays are required for future deep space missions using high-power Solar Electric Propulsion (SEP). Structural performance metrics for state-of-the art 30-50 kW flexible blanket arrays recently demonstrated in ground tests are approximately 40 kW/cu m packaging efficiency, 150 W/kg specific power, 0.1 Hz deployed stiffness, and 0.2 g deployed strength. Much larger arrays with up to a megawatt or more of power and improved packaging and specific power are of interest to mission planners for minimizing launch and life cycle costs of Mars exploration. A new concept referred to as the Compact Telescoping Array (CTA) with 60 kW/cu m packaging efficiency at 1 MW of power is described herein. Performance metrics as a function of array size and corresponding power level are derived analytically and validated by finite element analysis. Feasible CTA packaging and deployment approaches are also described. The CTA was developed, in part, to serve as a NASA reference solar array concept against which other proposed designs of 50-1000 kW arrays for future high-power SEP missions could be compared.

  16. Predicting multiprocessing efficiency on the Cray multiprocessors in a (CTSS) time-sharing environment/application to a 3-D magnetohydrodynamics code

    International Nuclear Information System (INIS)

    Mirin, A.A.

    1988-01-01

    A formula is derived for predicting multiprocessing efficiency on Cray supercomputers equipped with the Cray Time-Sharing System (CTSS). The model is applicable to an intensive time-sharing environment. The actual efficiency estimate depends on three factors: the code size, task length, and job mix. The implementation of multitasking in a three-dimensional plasma magnetohydrodynamics (MHD) code, TEMCO, is discussed. TEMCO solves the primitive one-fluid compressible MHD equations and includes resistive and Hall effects in Ohm's law. Virtually all segments of the main time-integration loop are multitasked. The multiprocessing efficiency model is applied to TEMCO. Excellent agreement is obtained between the actual multiprocessing efficiency and the theoretical prediction
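
The abstract names the three factors in the derived CTSS formula but does not reproduce the formula itself. As a hedged stand-in, a minimal Amdahl-style estimate shows how the multitasked fraction of a code and the processor count bound the achievable multiprocessing efficiency:

```python
# Illustrative Amdahl-style estimate (NOT the paper's CTSS-specific formula):
# only a fraction p of the runtime is multitasked across n processors.

def amdahl_speedup(p, n):
    """Speedup when a fraction p of the work parallelizes over n CPUs."""
    return 1.0 / ((1.0 - p) + p / n)

def efficiency(p, n):
    """Multiprocessing efficiency = speedup / processor count."""
    return amdahl_speedup(p, n) / n

# Assumed example: 95% of a time-integration loop multitasked on 4 CPUs.
eff = efficiency(0.95, 4)   # ~0.87
```

The paper's actual estimate additionally folds in code size, task length, and the time-sharing job mix, which lower this idealized figure.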

  17. ORBIT: A CODE FOR COLLECTIVE BEAM DYNAMICS IN HIGH INTENSITY RINGS

    International Nuclear Information System (INIS)

    HOLMES, J.A.; DANILOV, V.; GALAMBOS, J.; SHISHLO, A.; COUSINEAU, S.; CHOU, W.; MICHELOTTI, L.; OSTIGUY, J.F.; WEI, J.

    2002-01-01

    We are developing a computer code, ORBIT, specifically for beam dynamics calculations in high-intensity rings. Our approach allows detailed simulation of realistic accelerator problems. ORBIT is a particle-in-cell tracking code that transports bunches of interacting particles through a series of nodes representing elements, effects, or diagnostics that occur in the accelerator lattice. At present, ORBIT contains detailed models for strip-foil injection, including painting and foil scattering; rf focusing and acceleration; transport through various magnetic elements; longitudinal and transverse impedances; longitudinal, transverse, and three-dimensional space charge forces; collimation and limiting apertures; and the calculation of many useful diagnostic quantities. ORBIT is an object-oriented code, written in C++ and utilizing a scripting interface for the convenience of the user. Ongoing improvements include the addition of a library of accelerator maps, BEAMLINE/MXYZPTLK; the introduction of a treatment of magnet errors and fringe fields; the conversion of the scripting interface to the standard scripting language, Python; and the parallelization of the computations using MPI. The ORBIT code is an open source, powerful, and convenient tool for studying beam dynamics in high-intensity rings

  18. ORBIT: A Code for Collective Beam Dynamics in High-Intensity Rings

    Science.gov (United States)

    Holmes, J. A.; Danilov, V.; Galambos, J.; Shishlo, A.; Cousineau, S.; Chou, W.; Michelotti, L.; Ostiguy, J.-F.; Wei, J.

    2002-12-01

    We are developing a computer code, ORBIT, specifically for beam dynamics calculations in high-intensity rings. Our approach allows detailed simulation of realistic accelerator problems. ORBIT is a particle-in-cell tracking code that transports bunches of interacting particles through a series of nodes representing elements, effects, or diagnostics that occur in the accelerator lattice. At present, ORBIT contains detailed models for strip-foil injection, including painting and foil scattering; rf focusing and acceleration; transport through various magnetic elements; longitudinal and transverse impedances; longitudinal, transverse, and three-dimensional space charge forces; collimation and limiting apertures; and the calculation of many useful diagnostic quantities. ORBIT is an object-oriented code, written in C++ and utilizing a scripting interface for the convenience of the user. Ongoing improvements include the addition of a library of accelerator maps, BEAMLINE/MXYZPTLK; the introduction of a treatment of magnet errors and fringe fields; the conversion of the scripting interface to the standard scripting language, Python; and the parallelization of the computations using MPI. The ORBIT code is an open source, powerful, and convenient tool for studying beam dynamics in high-intensity rings.

  19. ORBIT: A code for collective beam dynamics in high-intensity rings

    International Nuclear Information System (INIS)

    Holmes, J.A.; Danilov, V.; Galambos, J.; Shishlo, A.; Cousineau, S.; Chou, W.; Michelotti, L.; Ostiguy, J.-F.; Wei, J.

    2002-01-01

    We are developing a computer code, ORBIT, specifically for beam dynamics calculations in high-intensity rings. Our approach allows detailed simulation of realistic accelerator problems. ORBIT is a particle-in-cell tracking code that transports bunches of interacting particles through a series of nodes representing elements, effects, or diagnostics that occur in the accelerator lattice. At present, ORBIT contains detailed models for strip-foil injection, including painting and foil scattering; rf focusing and acceleration; transport through various magnetic elements; longitudinal and transverse impedances; longitudinal, transverse, and three-dimensional space charge forces; collimation and limiting apertures; and the calculation of many useful diagnostic quantities. ORBIT is an object-oriented code, written in C++ and utilizing a scripting interface for the convenience of the user. Ongoing improvements include the addition of a library of accelerator maps, BEAMLINE/MXYZPTLK; the introduction of a treatment of magnet errors and fringe fields; the conversion of the scripting interface to the standard scripting language, Python; and the parallelization of the computations using MPI. The ORBIT code is an open source, powerful, and convenient tool for studying beam dynamics in high-intensity rings

  20. Coding For Compression Of Low-Entropy Data

    Science.gov (United States)

    Yeh, Pen-Shu

    1994-01-01

    Improved method of encoding digital data provides for efficient lossless compression of partially or even mostly redundant data from low-information-content source. Method of coding implemented in relatively simple, high-speed arithmetic and logic circuits. Also increases coding efficiency beyond that of established Huffman coding method in that average number of bits per code symbol can be less than 1, which is the lower bound for Huffman code.
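
The sub-1-bit-per-symbol point can be made concrete with a toy run-length scheme. The record does not specify the code actually used, so the following sketch is illustrative only: it run-length codes a mostly-zero binary source and charges each run an Elias-gamma bit cost, landing well under Huffman's one-bit-per-symbol floor.

```python
# Illustrative only: one way to beat Huffman's 1-bit-per-symbol floor on
# low-entropy binary data is to code run lengths of the common symbol.

def run_lengths(bits, rare=1):
    """Lengths of runs of the common symbol, each terminated by the rare one."""
    runs, count = [], 0
    for b in bits:
        if b == rare:
            runs.append(count)
            count = 0
        else:
            count += 1
    return runs

def elias_gamma_bits(n):
    """Bit cost of Elias-gamma coding run length n (shifted to n+1 >= 1)."""
    return 2 * (n + 1).bit_length() - 1

# Mostly-zero source: 1000 symbols containing ten 1s.
data = ([0] * 99 + [1]) * 10
cost = sum(elias_gamma_bits(r) for r in run_lengths(data))
bits_per_symbol = cost / len(data)   # 0.13, far below 1 bit/symbol
```

A symbol-by-symbol Huffman code cannot go below 1 bit per input symbol, whereas any run-length or arithmetic scheme spreads a short codeword over a long run, which is the efficiency gain the record describes.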

  1. MOSRA-Light; high speed three-dimensional nodal diffusion code for vector computers

    Energy Technology Data Exchange (ETDEWEB)

    Okumura, Keisuke [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-10-01

    MOSRA-Light is a three-dimensional neutron diffusion calculation code for X-Y-Z geometry. It is based on the 4th order polynomial nodal expansion method (NEM). As the 4th order NEM is not sensitive to mesh sizes, accurate calculation is possible by the use of coarse meshes of about 20 cm. The drastic decrease in the number of unknowns in a 3-dimensional problem results in very fast computation. Furthermore, it employs a newly developed computation algorithm, the 'boundary separated checkerboard sweep method', appropriate to vector computers. This method is very efficient because the speedup factor by vectorization increases as the scale of the problem becomes larger. The speed-up factor compared to the scalar calculation is from 20 to 40 in the case of PWR core calculations. Considering both the effect of vectorization and that of the coarse mesh method, the total speedup factor is more than 1000 as compared with a conventional scalar code using the finite difference method. MOSRA-Light is available on most vector or scalar computers with UNIX or similar operating systems (e.g. free ones like Linux). Users can easily install it with the help of the conversational-style installer. This report contains the general theory of NEM, the fast computation algorithm, benchmark calculation results and detailed information for usage of this code including input data instructions and sample input data. (author)
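
As a quick sanity check on the claimed total gain, the two independent speedups multiply. The factors below are assumptions consistent with the abstract's ranges, not measured values from the report:

```python
# Back-of-envelope composition of independent speedups (assumed factors):
# vectorization gain times the gain of coarse-mesh NEM over fine-mesh FDM.
vector_speedup = 30.0   # within the reported 20-40x for PWR core calculations
mesh_speedup = 40.0     # assumed gain of ~20 cm nodal meshes vs. fine FDM
total = vector_speedup * mesh_speedup   # 1200 > 1000, matching the claim
```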

  2. MOSRA-Light; high speed three-dimensional nodal diffusion code for vector computers

    International Nuclear Information System (INIS)

    Okumura, Keisuke

    1998-10-01

    MOSRA-Light is a three-dimensional neutron diffusion calculation code for X-Y-Z geometry. It is based on the 4th order polynomial nodal expansion method (NEM). As the 4th order NEM is not sensitive to mesh sizes, accurate calculation is possible by the use of coarse meshes of about 20 cm. The drastic decrease in the number of unknowns in a 3-dimensional problem results in very fast computation. Furthermore, it employs a newly developed computation algorithm, the 'boundary separated checkerboard sweep method', appropriate to vector computers. This method is very efficient because the speedup factor by vectorization increases as the scale of the problem becomes larger. The speed-up factor compared to the scalar calculation is from 20 to 40 in the case of PWR core calculations. Considering both the effect of vectorization and that of the coarse mesh method, the total speedup factor is more than 1000 as compared with a conventional scalar code using the finite difference method. MOSRA-Light is available on most vector or scalar computers with UNIX or similar operating systems (e.g. free ones like Linux). Users can easily install it with the help of the conversational-style installer. This report contains the general theory of NEM, the fast computation algorithm, benchmark calculation results and detailed information for usage of this code including input data instructions and sample input data. (author)

  3. Highly conserved non-coding sequences are associated with vertebrate development.

    Directory of Open Access Journals (Sweden)

    Adam Woolfe

    2005-01-01

    Full Text Available In addition to protein coding sequence, the human genome contains a significant amount of regulatory DNA, the identification of which is proving somewhat recalcitrant to both in silico and functional methods. An approach that has been used with some success is comparative sequence analysis, whereby equivalent genomic regions from different organisms are compared in order to identify both similarities and differences. In general, similarities in sequence between highly divergent organisms imply functional constraint. We have used a whole-genome comparison between humans and the pufferfish, Fugu rubripes, to identify nearly 1,400 highly conserved non-coding sequences. Given the evolutionary divergence between these species, it is likely that these sequences are found in, and furthermore are essential to, all vertebrates. Most, and possibly all, of these sequences are located in and around genes that act as developmental regulators. Some of these sequences are over 90% identical across more than 500 bases, being more highly conserved than coding sequence between these two species. Despite this, we cannot find any similar sequences in invertebrate genomes. In order to begin to functionally test this set of sequences, we have used a rapid in vivo assay system using zebrafish embryos that allows tissue-specific enhancer activity to be identified. Functional data is presented for highly conserved non-coding sequences associated with four unrelated developmental regulators (SOX21, PAX6, HLXB9, and SHH), in order to demonstrate the suitability of this screen to a wide range of genes and expression patterns. Of 25 sequence elements tested around these four genes, 23 show significant enhancer activity in one or more tissues. We have identified a set of non-coding sequences that are highly conserved throughout vertebrates. They are found in clusters across the human genome, principally around genes that are implicated in the regulation of development.

  4. CFD application to advanced design for high efficiency spacer grid

    Energy Technology Data Exchange (ETDEWEB)

    Ikeda, Kazuo, E-mail: kazuo3_ikeda@ndc.mhi.co.jp

    2014-11-15

    Highlights: • A new LDV was developed to investigate the local velocity in a rod bundle and inside a spacer grid. • Design information useful for a high efficiency spacer grid has been obtained. • CFD methodology that predicts the flow field in a PWR fuel has been developed. • The high efficiency spacer grid was designed using the CFD methodology. - Abstract: Pressurized water reactor (PWR) fuels have been developed to meet the needs of the market. A spacer grid is a key component to improve the thermal hydraulic performance of a PWR fuel assembly. Mixing structures (vanes) of a spacer grid promote coolant mixing and enhance heat removal from fuel rods. A larger mixing vane would improve the mixing effect, which would increase the departure from nucleate boiling (DNB) benefit for fuel. However, the increased pressure loss at large mixing vanes would reduce the coolant flow at the mixed fuel core, which would reduce the DNB margin. The solution is to develop a spacer grid whose pressure loss is equal to or less than that of the current spacer grid and that has higher critical heat flux (CHF) performance. For this reason, the need for a design tool that predicts the pressure loss and CHF performance of spacer grids has increased. The author and co-workers have worked on the development of high efficiency spacer grids using Computational Fluid Dynamics (CFD) for nearly 20 years. A new laser Doppler velocimetry (LDV), which is miniaturized with fiber optics embedded in a fuel cladding, was developed to investigate the local velocity profile in a rod bundle and inside a spacer grid. The rod-embedded fiber LDV (rod LDV) can be inserted in an arbitrary grid cell instead of a fuel rod, and has the advantage of not disturbing the flow field since it is the same shape as a fuel rod. The probe volume of the rod LDV is small enough to measure the spatial velocity profile in a rod gap and inside a spacer grid. According to benchmark experiments such as flow velocity

  5. CFD application to advanced design for high efficiency spacer grid

    International Nuclear Information System (INIS)

    Ikeda, Kazuo

    2014-01-01

    Highlights: • A new LDV was developed to investigate the local velocity in a rod bundle and inside a spacer grid. • Design information useful for a high efficiency spacer grid has been obtained. • CFD methodology that predicts the flow field in a PWR fuel has been developed. • The high efficiency spacer grid was designed using the CFD methodology. - Abstract: Pressurized water reactor (PWR) fuels have been developed to meet the needs of the market. A spacer grid is a key component to improve the thermal hydraulic performance of a PWR fuel assembly. Mixing structures (vanes) of a spacer grid promote coolant mixing and enhance heat removal from fuel rods. A larger mixing vane would improve the mixing effect, which would increase the departure from nucleate boiling (DNB) benefit for fuel. However, the increased pressure loss at large mixing vanes would reduce the coolant flow at the mixed fuel core, which would reduce the DNB margin. The solution is to develop a spacer grid whose pressure loss is equal to or less than that of the current spacer grid and that has higher critical heat flux (CHF) performance. For this reason, the need for a design tool that predicts the pressure loss and CHF performance of spacer grids has increased. The author and co-workers have worked on the development of high efficiency spacer grids using Computational Fluid Dynamics (CFD) for nearly 20 years. A new laser Doppler velocimetry (LDV), which is miniaturized with fiber optics embedded in a fuel cladding, was developed to investigate the local velocity profile in a rod bundle and inside a spacer grid. The rod-embedded fiber LDV (rod LDV) can be inserted in an arbitrary grid cell instead of a fuel rod, and has the advantage of not disturbing the flow field since it is the same shape as a fuel rod. The probe volume of the rod LDV is small enough to measure the spatial velocity profile in a rod gap and inside a spacer grid. According to benchmark experiments such as flow velocity

  6. Metamaterial Receivers for High Efficiency Concentrated Solar Energy Conversion

    Energy Technology Data Exchange (ETDEWEB)

    Yellowhair, Julius E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Concentrating Solar Technologies Dept.; Kwon, Hoyeong [Univ. of Texas, Austin, TX (United States). Dept. of Electrical and Computer Engineering; Alu, Andrea [Univ. of Texas, Austin, TX (United States). Dept. of Electrical and Computer Engineering; Jarecki, Robert L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Concentrating Solar Technologies Dept.; Shinde, Subhash L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Concentrating Solar Technologies Dept.

    2016-09-01

    Operation of concentrated solar power receivers at higher temperatures (>700°C) would enable supercritical carbon dioxide (sCO2) power cycles for improved power cycle efficiencies (>50%) and cost-effective solar thermal power. Unfortunately, radiative losses at higher temperatures in conventional receivers can negatively impact the system efficiency gains. One approach to improve receiver thermal efficiency is to utilize selective coatings that enhance absorption across the visible solar spectrum while minimizing emission in the infrared to reduce radiative losses. Existing coatings, however, tend to degrade rapidly at elevated temperatures. In this report, we describe the initial designs and fabrication of spectrally selective metamaterial-based absorbers for high-temperature, high-thermal-flux environments important for solarized sCO2 power cycles. Metamaterials are structured media whose optical properties are determined by sub-wavelength structural features instead of bulk material properties, providing unique solutions by decoupling the optical absorption spectrum from thermal stability requirements. The key enabling concept proposed is the use of structured surfaces with spectral responses that can be tailored to optimize the absorption and retention of solar energy for a given temperature range. In this initial study, through the Academic Alliance partnership with the University of Texas at Austin, we use tungsten for its stability in the expected harsh environments, its compatibility with microfabrication techniques, and its required optical performance. Our goal is to tailor the optical properties for high (near unity) absorptivity across the majority of the solar spectrum and over a broad range of incidence angles, and at the same time achieve negligible absorptivity in the near infrared to optimize the energy absorbed and retained. Toward this goal, we apply the recently developed concept of plasmonic Brewster angle to suitably designed

  7. Highly Efficient Thermoresponsive Nanocomposite for Controlled Release Applications

    KAUST Repository

    Yassine, Omar

    2016-06-23

    Highly efficient magnetic release from nanocomposite microparticles is shown, which are made of Poly (N-isopropylacrylamide) hydrogel with embedded iron nanowires. A simple microfluidic technique was adopted to fabricate the microparticles with a high control of the nanowire concentration and in a relatively short time compared to chemical synthesis methods. The thermoresponsive microparticles were used for the remotely triggered release of Rhodamine (B). With a magnetic field of only 1 mT and 20 kHz a drug release of 6.5% and 70% was achieved in the continuous and pulsatile modes, respectively. Those release values are similar to the ones commonly obtained using superparamagnetic beads but accomplished with a magnetic field of five orders of magnitude lower power. The high efficiency is a result of the high remanent magnetization of the nanowires, which produce a large torque when exposed to a magnetic field. This causes the nanowires to vibrate, resulting in friction losses and heating. For comparison, microparticles with superparamagnetic beads were also fabricated and tested; while those worked at 73 mT and 600 kHz, no release was observed at the low field conditions. Cytotoxicity assays showed similar and high cell viability for microparticles with nanowires and beads.

  8. Highly Efficient Thermoresponsive Nanocomposite for Controlled Release Applications

    KAUST Repository

    Yassine, Omar; Zaher, Amir; Li, Erqiang; Alfadhel, Ahmed; Perez, Jose E.; Kavaldzhiev, Mincho; Contreras, Maria F.; Thoroddsen, Sigurdur T.; Khashab, Niveen M.; Kosel, Jürgen

    2016-01-01

    Highly efficient magnetic release from nanocomposite microparticles is shown, which are made of Poly (N-isopropylacrylamide) hydrogel with embedded iron nanowires. A simple microfluidic technique was adopted to fabricate the microparticles with a high control of the nanowire concentration and in a relatively short time compared to chemical synthesis methods. The thermoresponsive microparticles were used for the remotely triggered release of Rhodamine (B). With a magnetic field of only 1 mT and 20 kHz a drug release of 6.5% and 70% was achieved in the continuous and pulsatile modes, respectively. Those release values are similar to the ones commonly obtained using superparamagnetic beads but accomplished with a magnetic field of five orders of magnitude lower power. The high efficiency is a result of the high remanent magnetization of the nanowires, which produce a large torque when exposed to a magnetic field. This causes the nanowires to vibrate, resulting in friction losses and heating. For comparison, microparticles with superparamagnetic beads were also fabricated and tested; while those worked at 73 mT and 600 kHz, no release was observed at the low field conditions. Cytotoxicity assays showed similar and high cell viability for microparticles with nanowires and beads.

  9. High Efficiency Heat Exchanger for High Temperature and High Pressure Applications

    Energy Technology Data Exchange (ETDEWEB)

    Sienicki, James J. [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Lv, Qiuping [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Moisseytsev, Anton [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division

    2017-09-29

    CompRex, LLC (CompRex) specializes in the design and manufacture of compact heat exchangers and heat exchange reactors for high temperature and high pressure applications. CompRex’s proprietary compact technology not only increases heat exchange efficiency by at least 25 % but also reduces footprint by at least a factor of ten compared to traditional shell-and-tube solutions of the same capacity and by 15 to 20 % compared to other currently available Printed Circuit Heat Exchanger (PCHE) solutions. As a result, CompRex’s solution is especially suitable for Brayton cycle supercritical carbon dioxide (sCO2) systems given its high efficiency and significantly lower capital and operating expenses. CompRex has already successfully demonstrated its technology and ability to deliver with a pilot-scale compact heat exchanger that was under contract by the Naval Nuclear Laboratory for sCO2 power cycle development. The performance tested unit met or exceeded the thermal and hydraulic specifications with measured heat transfer between 95 to 98 % of maximum heat transfer and temperature and pressure drop values all consistent with the modeled values. CompRex’s vision is to commercialize its compact technology and become the leading provider for compact heat exchangers and heat exchange reactors for various applications including Brayton cycle sCO2 systems. One of the limitations of the sCO2 Brayton power cycle is the design and manufacturing of efficient heat exchangers at extreme operating conditions. Current diffusion-bonded heat exchangers have limitations on the channel size through which the fluid travels, resulting in excessive solid material per heat exchanger volume. CompRex’s design allows for more open area and shorter fluid proximity for increased heat transfer efficiency while sustaining the structural integrity needed for the application. CompRex is developing a novel improvement to its current heat exchanger design where fluids are directed to alternating

  10. A high efficiency hybrid stirling-pulse tube cryocooler

    Directory of Open Access Journals (Sweden)

    Xiaotao Wang

    2015-03-01

    Full Text Available This article presents a hybrid cryocooler which combines room temperature displacers and a pulse tube in one system. Compared with a traditional pulse tube cryocooler, the system uses a rod-less ambient displacer to recover the expansion work from the pulse tube cold end to improve efficiency, while retaining the pulse tube cryocooler's advantage of having no moving parts in the cold region. In the meantime, dual-opposed configurations for both the compression pistons and the displacers reduce the cooler vibration to a very low level. In the experiments, a lowest no-load temperature of 38.5 K was obtained, and the cooling power at 80 K was 26.4 W with an input electric power of 290 W. This leads to an efficiency of 24.2% of Carnot, marginally higher than that of an ordinary pulse tube cryocooler. The hybrid configuration herein provides a very competitive option when a high-efficiency, high-reliability and robust cryocooler is desired.
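
The percent-of-Carnot figure can be checked from the quoted numbers. The abstract does not state the warm-end temperature, so the calculation below assumes 300 K ambient; everything else is taken from the record:

```python
# Percent-of-Carnot for a cryocooler: measured COP over the Carnot COP
# between cold end and warm end. Warm-end temperature of 300 K is ASSUMED.

def carnot_cop(t_cold, t_hot):
    """Ideal refrigeration coefficient of performance."""
    return t_cold / (t_hot - t_cold)

def fraction_of_carnot(q_cold, p_input, t_cold, t_hot):
    """Measured COP (cooling power / input power) as a fraction of Carnot."""
    return (q_cold / p_input) / carnot_cop(t_cold, t_hot)

# 26.4 W of cooling at 80 K for 290 W electric input (from the abstract):
frac = fraction_of_carnot(26.4, 290.0, 80.0, 300.0)   # ~0.25
```

Under this assumption the result lands near 25%, consistent with the reported 24.2% of Carnot; the small gap is plausibly the actual warm-end temperature differing from 300 K.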

  11. High-efficiency ballistic electrostatic generator using microdroplets

    Science.gov (United States)

    Xie, Yanbo; Bos, Diederik; de Vreede, Lennart J.; de Boer, Hans L.; van der Meulen, Mark-Jan; Versluis, Michel; Sprenkels, Ad J.; van den Berg, Albert; Eijkel, Jan C. T.

    2014-04-01

    The strong demand for renewable energy promotes research on novel methods and technologies for energy conversion. Microfluidic systems for energy conversion by streaming current are less known to the public, and the relatively low efficiencies previously obtained seemed to limit the further applications of such systems. Here we report a microdroplet-based electrostatic generator operating by an acceleration-deceleration cycle (‘ballistic’ conversion), and show that this principle enables both high efficiency and compact simple design. Water is accelerated by pumping it through a micropore to form a microjet breaking up into fast-moving charged droplets. Droplet kinetic energy is converted to electrical energy when the charged droplets decelerate in the electrical field that forms between membrane and target. We demonstrate conversion efficiencies of up to 48%, a power density of 160 kW m-2 and both high- (20 kV) and low- (500 V) voltage operation. Besides offering striking new insights, the device potentially opens up new perspectives for low-cost and robust renewable energy conversion.
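
The conversion efficiency the abstract quotes is, for a streaming/ballistic generator, the electrical output power over the hydraulic power driving the microjet. The sketch below illustrates the energy balance with an entirely assumed operating point (none of these numbers are measurements from the article):

```python
# Illustrative energy balance for a ballistic electrostatic generator.
# All operating-point numbers below are ASSUMED for illustration.

def hydraulic_power(delta_p, flow_rate):
    """Pumping power (W) = pressure drop (Pa) x volumetric flow (m^3/s)."""
    return delta_p * flow_rate

def conversion_efficiency(voltage, current, delta_p, flow_rate):
    """Electrical output V*I divided by the hydraulic input power."""
    return (voltage * current) / hydraulic_power(delta_p, flow_rate)

# Assumed: 4 bar drive pressure, 2 uL/s jet, 20 kV at 19 nA harvested.
eta = conversion_efficiency(20e3, 19e-9, 4e5, 2e-9)   # ~0.48
```

The ~48% figure in the abstract corresponds to this ratio at the device's actual operating point; the high-voltage (20 kV) mode maximizes the energy each charged droplet gives up while decelerating.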

  12. Radiation hardened high efficiency silicon space solar cell

    International Nuclear Information System (INIS)

    Garboushian, V.; Yoon, S.; Turner, J.

    1993-01-01

    A silicon solar cell with 19% AM0 Beginning-of-Life (BOL) efficiency is reported. The cell has demonstrated equal or better radiation resistance compared with conventional silicon space solar cells, whose performance is generally ∼14% at BOL. The Radiation Hardened High Efficiency Silicon (RHHES) cell is thinned for high specific power (watts/kilogram) and is compatible with automatic surface-mount technology. The cells can easily be combined to provide the desired power levels and voltages. The RHHES cell is also more resistant to mechanical damage from micrometeorites: a micrometeorite impinging on a conventional cell can crack it and, in turn, cause string failure, whereas an RHHES cell operating in the same environment can continue to function with a similar crack. The RHHES cell allows very efficient thermal management, which is essential for space cells generating higher specific power levels, and it eliminates the need for the electrical insulation layers that would otherwise increase the thermal resistance of conventional space panels. The RHHES cell can be applied to a space concentrator panel system without abandoning any of the attributes discussed. Its power handling capability is approximately five times that of conventional space concentrator solar cells.

  13. Highly Efficient Spectrally Stable Red Perovskite Light-Emitting Diodes.

    Science.gov (United States)

    Tian, Yu; Zhou, Chenkun; Worku, Michael; Wang, Xi; Ling, Yichuan; Gao, Hanwei; Zhou, Yan; Miao, Yu; Guan, Jingjiao; Ma, Biwu

    2018-05-01

    Perovskite light-emitting diodes (LEDs) have recently attracted great research interest for their narrow emissions and solution processability. Remarkable progress has been achieved in green perovskite LEDs in recent years, but less so in blue or red ones. Here, highly efficient and spectrally stable red perovskite LEDs with quasi-2D perovskite/poly(ethylene oxide) (PEO) composite thin films as the light-emitting layer are reported. By controlling the molar ratios of organic salt (benzylammonium iodide) to inorganic salts (cesium iodide and lead iodide), luminescent quasi-2D perovskite thin films are obtained with tunable emission colors from red to deep red. The perovskite/polymer composite approach enables quasi-2D perovskite/PEO composite thin films to possess much higher photoluminescence quantum efficiencies and smoothness than their neat quasi-2D perovskite counterparts. Electrically driven LEDs with emissions peaked at 638, 664, 680, and 690 nm have been fabricated, exhibiting high brightness and external quantum efficiencies (EQEs). For instance, the perovskite LED with an emission peaked at 680 nm exhibits a brightness of 1392 cd m⁻² and an EQE of 6.23%. Moreover, exceptional electroluminescence spectral stability under continuous device operation has been achieved for these red perovskite LEDs. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Design Strategies for Ultra-high Efficiency Photovoltaics

    Science.gov (United States)

    Warmann, Emily Cathryn

    While concentrator photovoltaic cells have shown significant improvements in efficiency in the past ten years, once these cells are integrated into concentrating optics, connected to a power conditioning system and deployed in the field, the overall module efficiency drops to only 34 to 36%. This efficiency is impressive compared to conventional flat plate modules, but it is far short of the theoretical limits for solar energy conversion. Ultra-high efficiency of 50% or greater cannot be achieved by refinement and iteration of current design approaches. This thesis takes a systems approach to designing a photovoltaic system capable of 50% efficient performance using conventional diode-based solar cells. The effort began with an exploration of the limiting efficiency of spectrum splitting ensembles with 2 to 20 sub-cells in different electrical configurations. Incorporating realistic non-ideal performance with the computationally simple detailed balance approach resulted in practical limits that are useful for identifying specific cell performance requirements. This effort quantified the relative benefit of additional cells and concentration for system efficiency, which will help in designing practical optical systems. Efforts to improve the quality of the solar cells themselves focused on the development of tunable lattice constant epitaxial templates. Initially intended to enable lattice-matched multijunction solar cells, these templates would enable increased flexibility in band gap selection for spectrum splitting ensembles and enhanced radiative quality relative to metamorphic growth. The III-V material family is commonly used for multijunction solar cells both for its high radiative quality and for the ease of integrating multiple band gaps into one monolithic growth. The band gap flexibility is limited by the lattice constant of available growth templates. 
The virtual substrate consists of a thin III-V film with the desired

  15. Efficient Four-Parametric with-and-without-Memory Iterative Methods Possessing High Efficiency Indices

    Directory of Open Access Journals (Sweden)

    Alicia Cordero

    2018-01-01

    We construct a family of derivative-free optimal iterative methods without memory to approximate a simple zero of a nonlinear function. Error analysis demonstrates that the without-memory class has eighth-order convergence and is extendable to a with-memory class. The extension of the new family to the with-memory case is also presented; it attains convergence order 15.5156 and a very high efficiency index of 15.5156^(1/4) ≈ 1.9847. Some particular schemes of the with-memory family are also described. Numerical examples and some dynamical aspects of the new schemes are given to support the theoretical results.
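
    The efficiency index quoted in the abstract follows the standard Ostrowski definition E = p^(1/d), where p is the convergence order and d the number of function evaluations per iteration. A minimal sketch restating the abstract's figures:

    ```python
    # Efficiency index E = p**(1/d) for iterative root-finding methods:
    # p = convergence order, d = function evaluations per step.
    def efficiency_index(order: float, evals: int) -> float:
        return order ** (1.0 / evals)

    without_memory = efficiency_index(8, 4)        # optimal eighth-order, 4 evaluations
    with_memory = efficiency_index(15.5156, 4)     # with-memory extension, same 4 evaluations
    print(f"{without_memory:.4f}  {with_memory:.4f}")  # → 1.6818  1.9847
    ```

    The with-memory extension raises the order from 8 to 15.5156 at no extra function-evaluation cost, which is exactly why its index approaches the theoretical ceiling of 2.
    
    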

  16. Microbial electrolytic disinfection process for highly efficient Escherichia coli inactivation

    DEFF Research Database (Denmark)

    Zhou, Shaofeng; Huang, Shaobin; Li, Xiaohu

    2018-01-01

    Water quality deterioration caused by a wide variety of recalcitrant organics and pathogenic microorganisms has become a serious concern worldwide. Bio-electro-Fenton systems have been considered as a cost-effective and highly efficient water treatment platform technology. While they have been extensively studied for recalcitrant organics removal, their application potential towards water disinfection (e.g., inactivation of pathogens) is still unknown. This study investigated the inactivation of Escherichia coli in a microbial electrolysis cell based bio-electro-Fenton system (renamed as microbial electrolytic disinfection process). •OH was identified as one potential mechanism for disinfection. This study successfully demonstrated the feasibility of the bio-electro-Fenton process for pathogen inactivation, which offers insight for the future development of sustainable, efficient, and cost-effective biological water treatment technology.

  17. Study on a Novel High-Efficiency Bridgeless PFC Converter

    Directory of Open Access Journals (Sweden)

    Cao Taiqiang

    2014-01-01

    In order to implement a high-efficiency bridgeless power-factor-correction (PFC) converter, a new topology is presented, and its operation principles in continuous conduction mode (CCM) and its DC steady-state characteristics are analyzed. The analysis shows that the converter not only has a bipolar-gain characteristic but also shares the voltage-transfer characteristic of the traditional boost converter, with a voltage transfer ratio independent of the resonant-branch parameters and the switching frequency. Based on this topology, a novel bridgeless bipolar-gain pseudo-boost PFC converter is proposed. With this converter, the diode rectifier bridge of the traditional AC-DC converter is eliminated, and zero-current switching of the fast-recovery diode is achieved, improving efficiency. We also propose a one-cycle control strategy for this converter. Finally, experiments verify the accuracy and feasibility of the proposed converter.
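
    The boost-like transfer characteristic the abstract refers to can be sketched directly. This is the ideal CCM gain of a conventional boost stage, M = 1/(1 − D), which depends only on the duty cycle D; per the abstract, the pseudo-boost PFC stage shares this characteristic.

    ```python
    # Ideal CCM voltage gain of a boost-type converter: Vout/Vin = 1/(1 - D).
    # Independent of resonant-branch parameters and switching frequency.
    def boost_gain(duty: float) -> float:
        if not 0.0 <= duty < 1.0:
            raise ValueError("duty cycle must be in [0, 1)")
        return 1.0 / (1.0 - duty)

    for d in (0.2, 0.5, 0.8):
        print(f"D = {d:.1f} -> Vout/Vin = {boost_gain(d):.2f}")
    # D = 0.2 -> 1.25, D = 0.5 -> 2.00, D = 0.8 -> 5.00
    ```
    
    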

  18. Highly Efficient and Reliable Transparent Electromagnetic Interference Shielding Film.

    Science.gov (United States)

    Jia, Li-Chuan; Yan, Ding-Xiang; Liu, Xiaofeng; Ma, Rujun; Wu, Hong-Yuan; Li, Zhong-Ming

    2018-04-11

    Electromagnetic protection in optoelectronic instruments such as optical windows and electronic displays is challenging because of the essential requirements of a high optical transmittance and an electromagnetic interference (EMI) shielding effectiveness (SE). Herein, we demonstrate the creation of an efficient transparent EMI shielding film that is composed of calcium alginate (CA), silver nanowires (AgNWs), and polyurethane (PU), via a facile and low-cost Mayer-rod coating method. The CA/AgNW/PU film with a high optical transmittance of 92% achieves an EMI SE of 20.7 dB, which meets the requirements for commercial shielding applications. A superior EMI SE of 31.3 dB could be achieved while the film still maintains a transmittance of 81%. The integrated efficient EMI SE and high transmittance are superior to those of most previously reported transparent EMI shielding materials. Moreover, our transparent films exhibit a highly reliable shielding ability in a complex service environment, with 98 and 96% EMI SE retentions even after 30 min of ultrasound treatment and 5000 bending cycles (1.5 mm radius), respectively. The comprehensive performance that is associated with the facile fabrication strategy imparts the CA/AgNW/PU film with great potential as an optimized EMI shielding material in emerging optoelectronic devices, such as flexible solar cells, displays, and touch panels.
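
    Shielding effectiveness in decibels maps directly to the fraction of incident power blocked, SE = 10·log10(P_in/P_out). A short sketch converting the film's two reported operating points:

    ```python
    # Convert EMI shielding effectiveness (dB) to the fraction of incident
    # electromagnetic power blocked: SE = 10*log10(P_in/P_out).
    def blocked_fraction(se_db: float) -> float:
        return 1.0 - 10.0 ** (-se_db / 10.0)

    for se in (20.7, 31.3):   # the film's two reported SE values
        print(f"SE {se} dB blocks {blocked_fraction(se):.2%} of incident power")
    # SE 20.7 dB blocks 99.15%; SE 31.3 dB blocks 99.93%
    ```

    This is why 20 dB is commonly cited as the commercial threshold: it already corresponds to blocking 99% of incident power.
    
    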

  19. Intercomparison of three microwave/infrared high resolution line-by-line radiative transfer codes

    Science.gov (United States)

    Schreier, Franz; Milz, Mathias; Buehler, Stefan A.; von Clarmann, Thomas

    2018-05-01

    An intercomparison of three line-by-line (lbl) codes developed independently for atmospheric radiative transfer and remote sensing - ARTS, GARLIC, and KOPRA - has been performed for a thermal infrared nadir sounding application assuming a HIRS-like (High resolution Infrared Radiation Sounder) setup. Radiances for the 19 HIRS infrared channels and a set of 42 atmospheric profiles from the "Garand dataset" have been computed. The mutual differences of the equivalent brightness temperatures are presented and possible causes of disagreement are discussed. In particular, the impact of path integration schemes and atmospheric layer discretization is assessed. When the continuum absorption contribution is ignored because of the different implementations, residuals are generally in the sub-Kelvin range and smaller than 0.1 K for some window channels (and all atmospheric models and lbl codes). None of the three codes turned out to be perfect for all channels and atmospheres. Remaining discrepancies are attributed to different lbl optimization techniques. Lbl codes seem to have reached such a maturity in the implementation of radiative transfer that the choice of the underlying physical models (line shape models, continua, etc.) becomes increasingly relevant.
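
    The "equivalent brightness temperature" used to compare the codes is obtained by inverting the Planck function at the channel wavenumber. A minimal round-trip sketch (CODATA constants; the 900 cm⁻¹ window-channel wavenumber and 280 K temperature are illustrative, not from the paper):

    ```python
    # Planck radiance in wavenumber space and its inversion to brightness
    # temperature, as used to compare channel radiances between lbl codes.
    import math

    H = 6.62607015e-34   # J s, Planck constant
    C = 2.99792458e8     # m/s, speed of light
    KB = 1.380649e-23    # J/K, Boltzmann constant

    def planck_radiance(temp, wavenumber_cm):
        """Planck radiance B(T) in W m^-2 sr^-1 (m^-1)^-1 at wavenumber (cm^-1)."""
        nu = wavenumber_cm * 100.0            # cm^-1 -> m^-1
        return 2.0 * H * C**2 * nu**3 / math.expm1(H * C * nu / (KB * temp))

    def brightness_temperature(radiance, wavenumber_cm):
        """Invert Planck's law: the temperature of a blackbody with this radiance."""
        nu = wavenumber_cm * 100.0
        return H * C * nu / KB / math.log(1.0 + 2.0 * H * C**2 * nu**3 / radiance)

    # Round trip at an illustrative 900 cm^-1 window channel:
    print(f"{brightness_temperature(planck_radiance(280.0, 900.0), 900.0):.3f} K")
    # → 280.000 K
    ```
    
    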

  20. Quasi Cyclic Low Density Parity Check Code for High SNR Data Transfer

    Directory of Open Access Journals (Sweden)

    M. R. Islam

    2010-06-01

    An improved Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) code is proposed to reduce the complexity of the Low-Density Parity-Check (LDPC) code while obtaining similar performance. The proposed QC-LDPC presents an improved construction at high SNR with circulant sub-matrices. The proposed construction yields a performance gain of about 1 dB at a bit error rate (BER) of 0.0003, and it is tested with 4 different decoding algorithms. The proposed QC-LDPC is compared with the existing QC-LDPC, and simulation results show that the proposed approach outperforms the existing one at high SNR. Simulations are also performed varying the number of horizontal sub-matrices, and the results show that a parity-check matrix with smaller horizontal concatenation performs better.
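
    The circulant sub-matrix structure that makes QC-LDPC codes low-complexity can be sketched compactly: the parity-check matrix H is tiled from cyclically shifted identity blocks, so the whole matrix is described by a small table of shift values. The shift table below is arbitrary, for illustration only; it is not the paper's construction.

    ```python
    # Generic QC-LDPC parity-check assembly from circulant (shifted-identity)
    # sub-matrices. Shift value -1 denotes an all-zero block.
    def circulant(size, shift):
        """size x size identity matrix with columns cyclically shifted right by `shift`."""
        return [[1 if (c - shift) % size == r else 0 for c in range(size)]
                for r in range(size)]

    def qc_ldpc_h(shifts, size):
        """Assemble H from a 2-D table of circulant shifts."""
        h = []
        for shift_row in shifts:
            blocks = [[[0] * size for _ in range(size)] if s < 0 else circulant(size, s)
                      for s in shift_row]
            for r in range(size):
                h.append([bit for block in blocks for bit in block[r]])
        return h

    # Illustrative 2 x 4 block layout with 4 x 4 circulants -> an 8 x 16 matrix.
    H = qc_ldpc_h([[0, 1, -1, 2],
                   [2, -1, 0, 1]], size=4)
    print(len(H), len(H[0]), sum(H[0]))   # → 8 16 3  (row weight = non-zero blocks per row)
    ```

    Because each block row is fully determined by its shifts, storage and encoding scale with the shift table rather than with the full matrix, which is the complexity advantage the abstract exploits.
    
    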