WorldWideScience

Sample records for linear block codes

  1. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    Science.gov (United States)

    Lin, Shu

    1998-01-01

A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The best-known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Research on trellis representations of block codes, by contrast, remained inactive for a long time. There are two major reasons for this inactive period. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes, and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, many coding theorists believed that algebraic decoding was the only way to decode these codes. These two beliefs seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications, and led to a general view that block codes were inferior to convolutional codes and hence not useful. Chapter 2 gives a brief review of linear block codes, providing the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and
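
    The trellis-based MLD idea above can be made concrete with a minimal hard-decision sketch (my own illustration, not the book's code): Viterbi decoding of the [7,4] Hamming code on its syndrome (Wolf) trellis, where the states at each depth are partial syndromes and only paths ending in the zero syndrome are codewords.

```python
# Minimal sketch: hard-decision Viterbi decoding of the [7,4] Hamming code
# on its syndrome trellis. States are partial syndromes; a path is a
# codeword exactly when it ends in the all-zero syndrome.

H = [  # parity-check matrix; column j is the binary expansion of j+1
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
n = 7
cols = [tuple(H[r][j] for r in range(3)) for j in range(n)]

def viterbi_decode(received):
    # survivors: state -> (best metric so far, best path so far)
    survivors = {(0, 0, 0): (0, [])}
    for j in range(n):
        nxt = {}
        for state, (metric, path) in survivors.items():
            for bit in (0, 1):
                new_state = tuple(s ^ (bit & c) for s, c in zip(state, cols[j]))
                new_metric = metric + (bit != received[j])
                if new_state not in nxt or new_metric < nxt[new_state][0]:
                    nxt[new_state] = (new_metric, path + [bit])
        survivors = nxt
    return survivors[(0, 0, 0)][1]  # best path ending in the zero syndrome

r = [1, 1, 1, 0, 1, 0, 0]   # codeword 1110000 with bit 4 flipped
print(viterbi_decode(r))    # -> [1, 1, 1, 0, 0, 0, 0]
```

    Since the code has minimum distance 3, the single error is corrected; the same search generalizes to soft-decision metrics by replacing the Hamming mismatch count.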

  2. Protograph based LDPC codes with minimum distance linearly growing with block size

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance linearly increasing in block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
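
    The protograph idea can be illustrated with a toy lifting (the base matrix and shift values below are my own illustrative choices, not the authors' constructions): each edge of the small base graph is replaced by a Z x Z circulant permutation, producing a larger parity-check matrix that inherits the protograph's node degrees.

```python
# Illustrative sketch: lifting a protograph base matrix into a larger LDPC
# parity-check matrix by replacing each edge with a Z x Z circulant
# permutation. The shift values are arbitrary choices for the example.

def circulant(Z, shift):
    return [[1 if (c - r) % Z == shift else 0 for c in range(Z)] for r in range(Z)]

def zeros(Z):
    return [[0] * Z for _ in range(Z)]

def lift(base, shifts, Z):
    """base[i][j] in {0,1}; shifts[i][j] gives the circulant shift for that edge."""
    rows = []
    for i, brow in enumerate(base):
        blocks = [circulant(Z, shifts[i][j]) if b else zeros(Z)
                  for j, b in enumerate(brow)]
        for r in range(Z):
            rows.append([blk[r][c] for blk in blocks for c in range(Z)])
    return rows

base = [[1, 1, 1, 0],
        [0, 1, 1, 1]]      # toy protograph with two degree-2 variable nodes
shifts = [[1, 0, 2, 0],
          [0, 2, 1, 0]]
H = lift(base, shifts, Z=4)
print([sum(row) for row in H])   # -> [3, 3, 3, 3, 3, 3, 3, 3]
```

    Every lifted check keeps the degree of its protograph check (weight 3 here), which is what lets ensemble properties such as weight enumerators be analyzed on the small base graph.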

  3. STACK DECODING OF LINEAR BLOCK CODES FOR DISCRETE MEMORYLESS CHANNEL USING TREE DIAGRAM

    Directory of Open Access Journals (Sweden)

    H. Prashantha Kumar

    2012-03-01

The boundaries between block and convolutional codes have become diffused after recent advances in the understanding of the trellis structure of block codes and the tail-biting structure of some convolutional codes. Therefore, decoding algorithms traditionally proposed for decoding convolutional codes have been applied for decoding certain classes of block codes. This paper presents the decoding of block codes using a tree structure. Many good block codes are presently known. Several of them have been used in applications ranging from deep space communication to error control in storage systems. But the primary difficulty with applying Viterbi or BCJR algorithms to the decoding of block codes is that, even though they are optimum decoding methods, the promised bit error rates are not achieved in practice at data rates close to capacity. This is because the decoding effort grows rapidly with block length, and thus only short block length codes can be used. Therefore, an important practical question is whether a suboptimal realizable soft decision decoding method can be found for block codes. A noteworthy result which provides a partial answer to this question is described in the following sections. This result of near optimum decoding will be used as motivation for the investigation of different soft decision decoding methods for linear block codes which can lead to the development of efficient decoding algorithms. The code tree can be treated as an expanded version of the trellis, where every path is totally distinct from every other path. We have derived the tree structure for the (8, 4) and (16, 11) extended Hamming codes and have succeeded in implementing the soft decision stack algorithm to decode them. For the discrete memoryless channel, gains in excess of 1.5 dB at a bit error rate of 10^-5 with respect to conventional hard decision decoding are demonstrated for these codes.
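
    The best-first tree search described above can be sketched for the (8, 4) extended Hamming code. This is a hard-decision simplification of the paper's soft-decision stack algorithm, and the systematic generator matrix is one standard choice, not necessarily the authors': the first k tree levels branch on information bits, after which the parity bits are forced.

```python
import heapq

# Sketch of stack (best-first code-tree) decoding for the (8,4) extended
# Hamming code with Hamming distance as the path metric. Because the metric
# never decreases along a path, the first full-length node popped is optimal.

G = [[1, 0, 0, 0, 0, 1, 1, 1],
     [0, 1, 0, 0, 1, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 0, 1],
     [0, 0, 0, 1, 1, 1, 1, 0]]
n, k = 8, 4

def encode(u):
    return [sum(u[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]

def stack_decode(r):
    stack = [(0, ())]                     # (partial distance, info prefix)
    while stack:
        dist, u = heapq.heappop(stack)
        if len(u) == k:
            return encode(list(u))        # best complete path found
        for bit in (0, 1):
            v = u + (bit,)
            if len(v) < k:                # still branching on info bits
                heapq.heappush(stack, (dist + (bit != r[len(u)]), v))
            else:                         # parity section is forced: finish
                c = encode(list(v))
                d = sum(ci != ri for ci, ri in zip(c, r))
                heapq.heappush(stack, (d, v))

c = encode([1, 1, 0, 0])                  # 11001100
r = c[:]; r[0] ^= 1                       # one hard-decision error
print(stack_decode(r) == c)               # -> True
```

    With soft decisions, the Hamming mismatch count would be replaced by a correlation or Fano-type metric, which is where the stack algorithm's near-ML gains come from.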

  4. Non-Linear Detection for Joint Space-Frequency Block Coding and Spatial Multiplexing in OFDM-MIMO Systems

    DEFF Research Database (Denmark)

    Rahman, Imadur Mohamed; Marchetti, Nicola; Fitzek, Frank

    2005-01-01

In this work, we have analyzed a joint spatial diversity and multiplexing transmission structure for the MIMO-OFDM system, where Orthogonal Space-Frequency Block Coding (OSFBC) is used across all spatial multiplexing branches. We have derived a BLAST-like non-linear Successive Interference Cancellation (SIC) receiver where the detection is done on a subcarrier-by-subcarrier basis, based on both the Zero Forcing (ZF) and Minimum Mean Square Error (MMSE) nulling criteria. In terms of Frame Error Rate (FER), the MMSE-based SIC receiver performs better than all other receivers compared in this paper. We have found that a linear two-stage receiver for the proposed system [1] performs very close to the non-linear receiver studied in this work. Finally, we compared the system performance in a spatially correlated scenario. It is found that a higher amount of spatial correlation at the transmitter ...

  5. An efficient chaotic source coding scheme with variable-length blocks

    International Nuclear Information System (INIS)

    Lin Qiu-Zhen; Wong Kwok-Wo; Chen Jian-Yong

    2011-01-01

An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. For infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In finite-precision implementation, it can be realized by encoding variable-length blocks using a piecewise linear chaotic map within the precision of register length. In the decoding process, the bit shift in the register can track the synchronization of the initial value and the corresponding block. Therefore, all the variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs well with high efficiency and minor compression loss when compared with traditional entropy coding.

  6. Construction of Protograph LDPC Codes with Linear Minimum Distance

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.

  7. Efficient Dual Domain Decoding of Linear Block Codes Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Ahmed Azouaoui

    2012-01-01

A computationally efficient algorithm for decoding block codes is developed using a genetic algorithm (GA). The proposed algorithm uses the dual code, in contrast to the existing genetic decoders in the literature, which use the code itself. Hence, this new approach reduces the complexity of decoding high-rate codes. We simulated our algorithm over various transmission channels. The performance of this algorithm is investigated and compared with competing decoding algorithms, including those of Maini and Shakeel. The results show that the proposed algorithm gives large gains over the Chase-2 decoding algorithm and reaches the performance of OSD-3 for some quadratic residue (QR) codes. Further, we define a new crossover operator that exploits domain-specific information and compare it with uniform and two-point crossover. The complexity of this algorithm is also discussed and compared to that of other algorithms.

  8. LDPC Codes with Minimum Distance Proportional to Block Size

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. 
Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low

  9. An Optimal Linear Coding for Index Coding Problem

    OpenAIRE

    Pezeshkpour, Pouya

    2015-01-01

An optimal linear coding solution for the index coding problem is established. Instead of the network coding approach, which focuses on graph-theoretic and algebraic methods, a linear coding program for solving both the unicast and groupcast index coding problems is presented. The coding is proved to be optimal from the linear perspective and can easily be utilized for any number of messages. The importance of this work lies mostly in the usage of the presented coding in the groupcast index coding ...

  10. Isometries and binary images of linear block codes over ℤ4 + uℤ4 and ℤ8 + uℤ8

    Science.gov (United States)

    Sison, Virgilio; Remillion, Monica

    2017-10-01

Let F_2 be the binary field and ℤ_{2^r} the residue class ring of integers modulo 2^r, where r is a positive integer. For the finite 16-element commutative local Frobenius non-chain ring ℤ4 + uℤ4, where u is nilpotent of index 2, two weight functions are considered, namely the Lee weight and the homogeneous weight. With the appropriate application of these weights, isometric maps from ℤ4 + uℤ4 to the binary spaces F_2^4 and F_2^8, respectively, are established via the composition of other weight-based isometries. The classical Hamming weight is used on the binary space. The resulting isometries are then applied to linear block codes over ℤ4 + uℤ4 whose images are binary codes of predicted length, which may or may not be linear. Certain lower and upper bounds on the minimum distances of the binary images are also derived in terms of the parameters of the ℤ4 + uℤ4 codes. Several new codes and their images are constructed as illustrative examples. An analogous procedure is performed successfully on the ring ℤ8 + uℤ8, where u^2 = 0, which is a commutative local Frobenius non-chain ring of order 64. It turns out that the method is possible in general for the class of rings ℤ_{2^r} + uℤ_{2^r}, where u^2 = 0, for any positive integer r, using the generalized Gray map from ℤ_{2^r} to F_2^{2^{r-1}}.
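
    A building block of the isometries discussed above is the classical Gray map from ℤ4 to F_2^2, which carries the Lee weight on ℤ4 to the Hamming weight on the binary image. A small sketch (notation mine, not the paper's):

```python
# The classical Gray map Z4 -> F2^2 and the Lee weight on Z4.
# Extended coordinatewise, it sends Z4^n isometrically into F2^(2n):
# Lee weight in, Hamming weight out.

GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}
LEE  = {0: 0, 1: 1, 2: 2, 3: 1}

def gray_map(word):           # Z4^n -> F2^(2n)
    return tuple(b for x in word for b in GRAY[x])

def lee_weight(word):
    return sum(LEE[x] for x in word)

word = (2, 3, 0, 1)
image = gray_map(word)                    # (1, 1, 1, 0, 0, 0, 0, 1)
print(lee_weight(word) == sum(image))     # -> True: the map is an isometry
```

    The maps for ℤ4 + uℤ4 in the paper are built by composing isometries of this kind, with the Hamming weight used on the final binary space.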

  11. Development of flow network analysis code for block type VHTR core by linear theory method

    International Nuclear Information System (INIS)

    Lee, J. H.; Yoon, S. J.; Park, J. W.; Park, G. C.

    2012-01-01

VHTR (Very High Temperature Reactor) is a high-efficiency nuclear reactor capable of generating hydrogen owing to the high temperature of its coolant. A PMR (Prismatic Modular Reactor) type reactor consists of hexagonal prismatic fuel blocks and reflector blocks. The flow paths in the prismatic VHTR core consist of coolant holes, bypass gaps and cross gaps. Complicated flow paths are formed in the core since the coolant holes and bypass gaps are connected by the cross gaps. The distributed coolant is mixed in the core through the cross gaps, so the flow characteristics cannot be modeled as a simple parallel pipe system. Analyzing the core flow with CFD requires a great deal of effort and time. Hence, it is important to develop a code for the VHTR core flow which can predict the core flow distribution quickly and accurately. In this study, a steady-state flow network analysis code is developed using a flow network algorithm. The developed flow network analysis code was named FLASH, and it was validated with experimental data and CFD simulation results.
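
    The linear theory method behind such flow network codes can be sketched on a toy two-path network (this is my own illustration of the general technique, not the FLASH code): the quadratic pressure loss dP = K*Q^2 in each path is linearized as (K*|Q_prev|)*Q, the resulting linear system is solved, and successive iterates are averaged to damp oscillation.

```python
# Toy sketch of the linear theory method: split a total flow between two
# parallel paths with loss coefficients K1, K2 so both paths see the same
# pressure drop K*Q**2. Each sweep solves the linearized system, then
# averages with the previous iterate (standard damping for this method).

def two_path_split(K1, K2, Q_total, iters=60):
    Q1 = Q_total / 2.0
    for _ in range(iters):
        Q2 = Q_total - Q1
        g1, g2 = K1 * abs(Q1), K2 * abs(Q2)   # linearized conductances
        Q1_new = g2 * Q_total / (g1 + g2)     # equal-pressure-drop solve
        Q1 = 0.5 * (Q1 + Q1_new)              # damping avoids oscillation
    return Q1

Q1 = two_path_split(K1=1.0, K2=4.0, Q_total=3.0)
print(abs(Q1 - 2.0) < 1e-9)   # -> True (analytic split: Q1/Q2 = sqrt(K2/K1))
```

    A real core model repeats the same linearize-solve-average loop on a full network matrix covering coolant holes, bypass gaps and cross gaps.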

  12. Random linear codes in steganography

    Directory of Open Access Journals (Sweden)

    Kamil Kaczyński

    2016-12-01

Syndrome coding using linear codes is a technique that improves the parameters of steganographic algorithms. The use of random linear codes gives great flexibility in choosing the parameters of the linear code, while offering easy generation of the parity-check matrix. In this paper, a modification of the LSB algorithm is presented. A random linear [8, 2] code was used as the basis for the modification. The proposed algorithm was implemented, and its parameters were evaluated in practice on a set of test images. Keywords: steganography, random linear codes, RLC, LSB
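
    The syndrome-coding step can be sketched as follows. For brevity this uses a hypothetical 4x6 parity-check matrix (of a small [6, 2] binary code) rather than the paper's [8, 2] code: the message is embedded as the syndrome of the cover bits, changing as few cover bits as possible via a brute-force coset-leader search.

```python
from itertools import product

# Matrix embedding (syndrome coding) sketch: given parity-check matrix H,
# flip a minimum-weight pattern e in the cover bits so that the stego
# bits x' = cover ^ e satisfy H * x' = message.

H = [[1, 0, 0, 0, 1, 1],   # 4x6: hides 4 message bits in 6 cover bits
     [0, 1, 0, 0, 1, 0],
     [0, 0, 1, 0, 0, 1],
     [0, 0, 0, 1, 1, 1]]

def syndrome(x):
    return tuple(sum(h * xi for h, xi in zip(row, x)) % 2 for row in H)

def embed(cover, message):
    target = tuple(m ^ s for m, s in zip(message, syndrome(cover)))
    best = None
    for e in product((0, 1), repeat=len(cover)):   # coset-leader search
        if syndrome(e) == target and (best is None or sum(e) < sum(best)):
            best = e
    return [c ^ ei for c, ei in zip(cover, best)]

cover, message = [1, 0, 1, 1, 0, 1], (1, 0, 1, 1)
stego = embed(cover, message)
print(syndrome(stego) == message)   # -> True: receiver extracts H * stego
```

    With a good code, the expected number of changed LSBs per embedded bit drops well below the 1/2 of plain LSB replacement; the brute-force search here is what efficient decoders replace for larger codes.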

  13. On the linear programming bound for linear Lee codes.

    Science.gov (United States)

    Astola, Helena; Tabus, Ioan

    2016-01-01

Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced to the linear programming problem of linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to fast execution, which allows the bounds for large parameter values of the linear codes to be computed efficiently.

  14. Dynamic code block size for JPEG 2000

    Science.gov (United States)

    Tsai, Ping-Sing; LeCornec, Yann

    2008-02-01

    Since the standardization of the JPEG 2000, it has found its way into many different applications such as DICOM (digital imaging and communication in medicine), satellite photography, military surveillance, digital cinema initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical high quality real-time compression possible even in video mode, i.e. motion JPEG 2000. In this paper, we present a study of the compression impact using dynamic code block size instead of fixed code block size as specified in the JPEG 2000 standard. The simulation results show that there is no significant impact on compression if dynamic code block sizes are used. In this study, we also unveil the advantages of using dynamic code block sizes.

  15. Some new ternary linear codes

    Directory of Open Access Journals (Sweden)

    Rumen Daskalov

    2017-07-01

Let an $[n,k,d]_q$ code be a linear code of length $n$, dimension $k$ and minimum Hamming distance $d$ over $GF(q)$. One of the most important problems in coding theory is to construct codes with optimal minimum distances. In this paper 22 new ternary linear codes are presented. Two of them are optimal. All new codes improve the respective lower bounds in [11].
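
    For small parameters, the minimum distance of a ternary linear code can be checked by enumerating all $q^k$ codewords. A quick sketch (the example is the well-known $[4,2,3]_3$ tetracode, not one of the paper's 22 new codes):

```python
from itertools import product

# Brute-force minimum distance of a q-ary linear code from its generator
# matrix: enumerate all nonzero information vectors, encode, and take the
# minimum Hamming weight.

def min_distance(G, q=3):
    k, n = len(G), len(G[0])
    best = n
    for coeffs in product(range(q), repeat=k):
        if any(coeffs):
            word = [sum(c * G[i][j] for i, c in enumerate(coeffs)) % q
                    for j in range(n)]
            best = min(best, sum(x != 0 for x in word))
    return best

G_tetra = [[1, 0, 1, 1],
           [0, 1, 1, 2]]      # generator of the [4,2,3]_3 tetracode
print(min_distance(G_tetra))  # -> 3 (meets the Singleton bound: n - k + 1 = 3)
```

    Enumeration scales as $q^k$, so searches for genuinely new codes, as in the paper, rely on structured families such as quasi-twisted codes rather than exhaustion.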

  16. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

    Science.gov (United States)

    Hu, J H; Wang, Y; Cahill, P T

    1997-01-01

This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further specified using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.

  17. Short-Block Protograph-Based LDPC Codes

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher

    2010-01-01

Short-block low-density parity-check (LDPC) codes of a special type are intended to be especially well suited for potential applications that include transmission of command and control data, cellular telephony, data communications in wireless local area networks, and satellite data communications. [In general, LDPC codes belong to a class of error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.] The codes of the present special type exhibit low error floors, low bit and frame error rates, and low latency (in comparison with related prior codes). These codes also achieve a low maximum rate of undetected errors over all signal-to-noise ratios, without requiring the use of cyclic redundancy checks, which would significantly increase the overhead for short blocks. These codes have protograph representations; this is advantageous in that, for reasons that exceed the scope of this article, the applicability of protograph representations makes it possible to design high-speed iterative decoders that utilize belief-propagation algorithms.

  18. Rate-Compatible LDPC Codes with Linear Minimum Distance

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel

    2009-01-01

A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either fixed input block size or fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first-mentioned submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes. These are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division multiplexing.

  19. Encoders for block-circulant LDPC codes

    Science.gov (United States)

    Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)

    2009-01-01

    Methods and apparatus to encode message input symbols in accordance with an accumulate-repeat-accumulate code with repetition three or four are disclosed. Block circulant matrices are used. A first method and apparatus make use of the block-circulant structure of the parity check matrix. A second method and apparatus use block-circulant generator matrices.
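
    The practical appeal of block-circulant matrices can be sketched briefly (toy parameters of my own, not the patented AR4A constructions): a circulant is fully described by its first row, so a matrix-vector product over GF(2) reduces to cyclic shifts and XORs instead of storing and multiplying a full matrix.

```python
# Sketch: multiplying by a binary circulant using only its first row.
# Row r of the circulant C is the first row cyclically shifted right by r,
# so y[r] is the inner product of the first row with v shifted left by r.

def circulant_times_vector(first_row, v):
    m = len(first_row)
    return [sum(first_row[j] & v[(j + r) % m] for j in range(m)) % 2
            for r in range(m)]

first_row = [1, 1, 0, 1, 0]        # weight-3 circulant of size 5
v = [1, 0, 1, 1, 0]
y = circulant_times_vector(first_row, v)

# Cross-check against the explicit matrix product:
C = [[first_row[(c - r) % 5] for c in range(5)] for r in range(5)]
y_ref = [sum(C[r][c] & v[c] for c in range(5)) % 2 for r in range(5)]
print(y == y_ref)   # -> True
```

    Encoders built from block-circulant generator or parity-check matrices exploit exactly this shift-and-XOR structure for high-speed hardware implementation.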

  20. Responsive linear-dendritic block copolymers.

    Science.gov (United States)

    Blasco, Eva; Piñol, Milagros; Oriol, Luis

    2014-06-01

The combination of dendritic and linear polymeric structures in the same macromolecule opens up new possibilities for the design of block copolymers and for applications of functional polymers that have self-assembly properties. There are three main strategies for the synthesis of linear-dendritic block copolymers (LDBCs) and, in particular, the emergence of click chemistry has made the coupling of preformed blocks one of the most efficient ways of obtaining libraries of LDBCs. In these materials, the periphery of the dendron can be precisely functionalised to obtain functional LDBCs with self-assembly properties of interest in different technological areas. The incorporation of stimuli-responsive moieties gives rise to smart materials that are generally processed as self-assemblies of amphiphilic LDBCs with a morphology that can be controlled by an external stimulus. Particular emphasis is placed on light-responsive LDBCs. Furthermore, a brief review of the biomedical or materials science applications of LDBCs is presented.

  1. Squares of Random Linear Codes

    DEFF Research Database (Denmark)

    Cascudo Pueyo, Ignacio; Cramer, Ronald; Mirandola, Diego

    2015-01-01

Given a linear code $C$, one can define the $d$-th power of $C$ as the span of all componentwise products of $d$ elements of $C$. A power of $C$ may quickly fill the whole space. Our purpose is to answer the following question: does the square of a code ``typically'' fill the whole space? We give a positive answer for codes of dimension $k$ and length roughly $\frac{1}{2}k^2$ or smaller. Moreover, the convergence speed is exponential if the difference $k(k+1)/2 - n$ is at least linear in $k$. The proof uses random coding and combinatorial arguments, together with algebraic tools involving the precise ...
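
    The object under study is easy to compute for small examples (the code below is my own illustration): the square of a binary code is the span of all componentwise products of pairs of codewords, and its dimension can be found by Gaussian elimination over GF(2).

```python
from itertools import combinations_with_replacement

# Compute dim(C^2) for a binary code C given by a basis: take all
# componentwise products of basis pairs and row-reduce over GF(2).

def rank_gf2(vectors):
    pivots = {}                    # leading-bit position -> pivot row
    for v in vectors:
        x = int("".join(map(str, v)), 2)
        while x:
            top = x.bit_length() - 1
            if top in pivots:
                x ^= pivots[top]   # eliminate the leading bit
            else:
                pivots[top] = x
                break
    return len(pivots)

def square_dimension(basis_vectors):
    prods = [[a & b for a, b in zip(u, w)]   # componentwise product in GF(2)
             for u, w in combinations_with_replacement(basis_vectors, 2)]
    return rank_gf2(prods)

C = [[1, 1, 0, 0], [0, 1, 1, 0], [1, 0, 1, 1]]   # dim-3 code of length 4
print(rank_gf2(C), square_dimension(C))          # -> 3 4
```

    Here a dimension-3 code of length 4 already has a square filling the whole space $F_2^4$, in line with the paper's regime $n \lesssim \frac{1}{2}k^2$.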

  2. Differential Space-Time Block Code Modulation for DS-CDMA Systems

    Directory of Open Access Journals (Sweden)

    Liu Jianhua

    2002-01-01

A differential space-time block code (DSTBC) modulation scheme is used to improve the performance of DS-CDMA systems in fast time-dispersive fading channels. The resulting scheme is referred to as differential space-time block code modulation for DS-CDMA (DSTBC-CDMA) systems. The new modulation and demodulation schemes are especially studied for the downlink transmission of DS-CDMA systems. We present three demodulation schemes, referred to as the differential space-time block code Rake (D-Rake) receiver, differential space-time block code deterministic (D-Det) receiver, and differential space-time block code deterministic de-prefix (D-Det-DP) receiver, respectively. The D-Det receiver exploits the known information of the spreading sequences and their delayed paths deterministically besides the Rake type combination; consequently, it can outperform the D-Rake receiver, which employs the Rake type combination only. The D-Det-DP receiver avoids the effect of intersymbol interference and hence can offer better performance than the D-Det receiver.

  3. Deep Learning Methods for Improved Decoding of Linear Codes

    Science.gov (United States)

    Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair

    2018-02-01

The problem of low complexity, close to optimal, channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.
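
    The baseline these neural decoders start from is plain iterative message passing; in the learned versions, multiplicative weights on the messages become trainable parameters. A minimal unweighted min-sum sketch for the [7,4] Hamming code (H and the LLR values are my own toy choices, not the paper's setup):

```python
# Plain (unlearned) min-sum decoding sketch for the [7,4] Hamming code.
# Check-to-variable messages take the sign product and minimum magnitude
# of the other incoming messages; variable nodes add channel LLRs and
# extrinsic check messages.

H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def min_sum_decode(llr, iters=5):
    n, m = len(llr), len(H)
    V = [[llr[j] for j in range(n)] for _ in range(m)]   # variable->check
    for _ in range(iters):
        C = [[0.0] * n for _ in range(m)]                # check->variable
        for i in range(m):
            idx = [j for j in range(n) if H[i][j]]
            for j in idx:
                others = [V[i][k] for k in idx if k != j]
                sign = 1.0
                for o in others:
                    sign *= 1.0 if o >= 0 else -1.0
                C[i][j] = sign * min(abs(o) for o in others)
        for i in range(m):
            for j in range(n):
                if H[i][j]:
                    V[i][j] = llr[j] + sum(C[k][j] for k in range(m)
                                           if k != i and H[k][j])
    total = [llr[j] + sum(C[i][j] for i in range(m) if H[i][j])
             for j in range(n)]
    return [1 if t < 0 else 0 for t in total]

llr = [-1.0, 4.0, 4.0, 4.0, 4.0, 4.0, 4.0]   # all-zero codeword, bit 0 weak
print(min_sum_decode(llr))                    # -> [0, 0, 0, 0, 0, 0, 0]
```

    In the neural variants, each message is scaled by a learned weight (possibly tied across iterations to form a recurrent architecture), which is what compensates for the short cycles in graphs like this one.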

  4. Some new quasi-twisted ternary linear codes

    Directory of Open Access Journals (Sweden)

    Rumen Daskalov

    2015-09-01

Let an [n, k, d]_q code be a linear code of length n, dimension k and minimum Hamming distance d over GF(q). One of the basic and most important problems in coding theory is to construct codes with the best possible minimum distances. In this paper seven quasi-twisted ternary linear codes are constructed. These codes are new and improve the best known lower bounds on the minimum distance in [6].

  5. The linear programming bound for binary linear codes

    NARCIS (Netherlands)

    Brouwer, A.E.

    1993-01-01

Combining Delsarte's (1973) linear programming bound with the information that certain weights cannot occur, new upper bounds for d_min(n, k), the maximum possible minimum distance of a binary linear code with given word length n and dimension k, are derived.
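
    The machinery behind the LP bound can be sketched briefly (the example weights are those of the standard [7,4] Hamming code, chosen by me for illustration): the Krawtchouk transform of a code's weight distribution must be nonnegative, and these nonnegativity conditions are exactly the constraints Delsarte's linear program imposes.

```python
from math import comb

# Krawtchouk polynomials and the MacWilliams transform of a binary code's
# weight distribution. For a linear code the transform yields the dual
# code's weight distribution, hence is nonnegative -- the LP constraint.

def krawtchouk(n, k, x):
    return sum((-1) ** j * comb(x, j) * comb(n - x, k - j)
               for j in range(k + 1))

n = 7
A = {0: 1, 3: 7, 4: 7, 7: 1}     # [7,4] Hamming weight distribution, |C| = 16

dual = [sum(a * krawtchouk(n, k, i) for i, a in A.items()) // 16
        for k in range(n + 1)]
print(dual)   # -> [1, 0, 0, 0, 7, 0, 0, 0]  (the [7,3] simplex code's weights)
```

    The LP bound turns this around: maximize the code size subject to these transform constraints (plus any excluded weights), yielding upper bounds on d_min(n, k).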

  6. FBCOT: a fast block coding option for JPEG 2000

    Science.gov (United States)

    Taubman, David; Naman, Aous; Mathew, Reji

    2017-09-01

Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high-performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, termed FBCOT (Fast Block Coding with Optimized Truncation), achieving much higher encoding and decoding throughputs with only a modest loss in coding efficiency.

  7. Design of Packet-Based Block Codes with Shift Operators

    Directory of Open Access Journals (Sweden)

    Ilow Jacek

    2010-01-01

This paper introduces packet-oriented block codes for the recovery of lost packets and the correction of a single erroneous packet. Specifically, a family of systematic codes is proposed, based on a Vandermonde matrix applied to a group of k information packets to construct r redundant packets, where the elements of the Vandermonde matrix are bit-level right arithmetic shift operators. The code design is applicable to packets of any size, provided that the packets within a block of k information packets are of uniform length. In order to decrease the overhead associated with packet padding using shift operators, non-Vandermonde matrices are also proposed for designing packet-oriented block codes. An efficient matrix inversion procedure for the off-line design of the decoding algorithm is presented to recover lost packets. The error correction capability of the design is investigated as well. The decoding algorithm, based on syndrome decoding, to correct a single erroneous packet in a group of n = k + r received packets is presented. The paper is equipped with examples of codes using different parameters. The code designs and their performance are tested using Monte Carlo simulations; the results obtained exhibit good agreement with the corresponding theoretical results.
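
    The erasure-recovery idea can be shown in its simplest degenerate case (my own simplification, not the paper's construction): with a single redundant packet, the Vandermonde row reduces to identity operators, the parity packet is the XOR of the information packets, and any one lost packet is recovered by XOR-ing the survivors. The shift-operator machinery of the paper generalizes this to multiple redundant packets.

```python
# Single-parity packet recovery: the r = 1 special case of packet-oriented
# block codes. Packets within a block must be of uniform length.

def xor_packets(packets):
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

info = [b"block-one", b"block-two", b"block-3!!"]   # uniform-length packets
parity = xor_packets(info)

received = [info[0], None, info[2]]                 # packet 1 lost in transit
recovered = xor_packets([p for p in received if p is not None] + [parity])
print(recovered == info[1])   # -> True
```

    With r > 1 redundant packets built from distinct shift operators, the same principle recovers multiple erasures by inverting the corresponding submatrix, which is the paper's off-line matrix inversion step.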

  8. On some properties of the block linear multi-step methods | Chollom ...

    African Journals Online (AJOL)

The convergence, stability and order of block linear multistep methods have been determined in the past based on individual members of the block. In this paper, methods are proposed to examine the properties of the entire block. Some block linear multistep methods have been considered, their convergence, stability and ...

  9. Warped Discrete Cosine Transform-Based Low Bit-Rate Block Coding Using Image Downsampling

    Directory of Open Access Journals (Sweden)

    Ertürk Sarp

    2007-01-01

    Full Text Available This paper presents warped discrete cosine transform (WDCT)-based low bit-rate block coding using image downsampling. While the WDCT aims to improve the performance of the conventional DCT by frequency warping, it has only been applicable to high bit-rate coding applications because of the overhead required to define the parameters of the warping filter. Recently, low bit-rate block coding based on image downsampling prior to block coding, followed by upsampling after decoding, has been proposed to improve the compression performance of low bit-rate block coders. This paper demonstrates that superior performance can be achieved if the WDCT is used in conjunction with image downsampling-based block coding for low bit-rate applications.

  10. Design of Packet-Based Block Codes with Shift Operators

    Directory of Open Access Journals (Sweden)

    Jacek Ilow

    2010-01-01

    Full Text Available This paper introduces packet-oriented block codes for the recovery of lost packets and the correction of an erroneous single packet. Specifically, a family of systematic codes is proposed, based on a Vandermonde matrix applied to a group of k information packets to construct r redundant packets, where the elements of the Vandermonde matrix are bit-level right arithmetic shift operators. The code design is applicable to packets of any size, provided that the packets within a block of k information packets are of uniform length. In order to decrease the overhead associated with packet padding using shift operators, non-Vandermonde matrices are also proposed for designing packet-oriented block codes. An efficient matrix inversion procedure for the off-line design of the decoding algorithm is presented to recover lost packets. The error correction capability of the design is investigated as well. The decoding algorithm, based on syndrome decoding, to correct a single erroneous packet in a group of n=k+r received packets is presented. The paper is equipped with examples of codes using different parameters. The code designs and their performance are tested using Monte Carlo simulations; the results obtained exhibit good agreement with the corresponding theoretical results.
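
    The encoding step described in the record above can be sketched with packets modeled as fixed-width integers, XOR as packet addition, and the Vandermonde entry s^(i*j) realized as a right shift by i*j bits. This is an illustrative simplification of the construction, not the paper's exact code; the function name and the `width` parameter are our assumptions.

```python
from functools import reduce
from operator import xor

def encode_redundant(packets, r, width=32):
    """Form r redundant packets from k information packets.

    Packets are modeled as `width`-bit integers, packet addition is XOR,
    and the Vandermonde entry s^(i*j) is realized as a right shift by
    i*j bits (a simplified sketch of the paper's construction).
    """
    mask = (1 << width) - 1
    redundant = []
    for j in range(r):
        acc = 0
        for i, p in enumerate(packets):
            acc ^= (p >> (i * j)) & mask
        redundant.append(acc)
    return redundant

# The j = 0 row of the Vandermonde matrix applies no shifts, so the first
# redundant packet is the plain XOR parity of the information packets.
info = [0b1010, 0b0110, 0b0011]
parity = reduce(xor, info)   # parity == 0b1111
```

    A single lost information packet can then be recovered from the parity packet alone, while the shifted rows supply the additional equations needed for multiple losses.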

  11. Convolutional cylinder-type block-circulant cycle codes

    Directory of Open Access Journals (Sweden)

    Mohammad Gholami

    2013-06-01

    Full Text Available In this paper, we consider a class of column-weight-two quasi-cyclic low-density parity-check codes whose girth can be made arbitrarily large as a multiple of 8. We then derive a convolutional form of these codes, such that their generator matrix can be obtained by elementary row and column operations on the parity-check matrix. Finally, we show that the free distance of the convolutional codes is equal to the minimum distance of their block counterparts.

  12. Riemann-Roch Spaces and Linear Network Codes

    DEFF Research Database (Denmark)

    Hansen, Johan P.

    We construct linear network codes utilizing algebraic curves over finite fields and certain associated Riemann-Roch spaces, and present methods to obtain their parameters. In particular we treat the Hermitian curve and the curves associated with the Suzuki and Ree groups, all having the maximal number of points for curves of their respective genera. Linear network coding transmits information in terms of a basis of a vector space and the information is received as a basis of a possibly altered vector space. Ralf Koetter and Frank R. Kschischang introduced a metric on the set of vector spaces; the constructed spaces are equidistant in the above metric, making them suitable for linear network coding.
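
    The metric referred to above is the subspace distance d(U, V) = dim U + dim V - 2 dim(U ∩ V) introduced by Koetter and Kschischang. A minimal sketch over GF(2), with basis vectors packed into integer bitmasks (the function names are ours):

```python
def gf2_rank(rows):
    """Rank over GF(2); each row is an integer bitmask of coordinates."""
    pivots = []
    for row in rows:
        for p in pivots:
            row = min(row, row ^ p)   # cancel the pivot's leading bit if set
        if row:
            pivots.append(row)
    return len(pivots)

def subspace_distance(U, V):
    """Koetter-Kschischang distance d(U,V) = dim U + dim V - 2 dim(U n V),
    computed via dim(U n V) = dim U + dim V - dim(U + V)."""
    dU, dV = gf2_rank(U), gf2_rank(V)
    return 2 * gf2_rank(list(U) + list(V)) - dU - dV
```

    For example, the planes spanned by {e1, e2} and by {e1, e3} intersect in a line, giving distance 2.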

  13. Probabilistic Decision Based Block Partitioning for Future Video Coding

    KAUST Repository

    Wang, Zhao

    2017-11-29

    In the latest Joint Video Exploration Team development, the quadtree plus binary tree (QTBT) block partitioning structure has been proposed for future video coding. Compared to the traditional quadtree structure of the High Efficiency Video Coding (HEVC) standard, QTBT provides more flexible patterns for splitting the blocks, which results in dramatically more combinations of block partitions and high computational complexity. In view of this, a confidence interval based early termination (CIET) scheme is proposed for QTBT to identify unnecessary partition modes in the sense of rate-distortion (RD) optimization. In particular, an RD model is established to predict the RD cost of each partition pattern without the full encoding process. Subsequently, the mode decision problem is cast into a probabilistic framework to select the final partition based on the confidence interval decision strategy. Experimental results show that the proposed CIET algorithm can speed up the QTBT block partitioning structure, reducing encoding time by 54.7% with only a 1.12% increase in bit rate. Moreover, the proposed scheme performs consistently well for high resolution sequences, for which video coding efficiency is crucial in real applications.

  14. Adaptive bit plane quadtree-based block truncation coding for image compression

    Science.gov (United States)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit-rate compression, at the cost of lower quality of the decoded images, especially for images with rich texture. To solve this problem, this paper proposes a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission. First, the direction of the edge in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is utilized to optimize the BTC, depending on the MSE loss when encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared to other state-of-the-art BTC variants, making it desirable for real-time image compression applications.
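
    For reference, the AMBTC step mentioned above quantizes each block to two levels, the mean of the pixels at or above the block mean and the mean of those below it, plus a one-bit-per-pixel bitmap. A minimal sketch (helper names are ours, not from the paper):

```python
def ambtc_encode(block):
    """AMBTC for one block (a flat list of pixel values): keep the
    low-group mean, the high-group mean, and a bitmap marking which
    pixels sit at or above the block mean."""
    m = sum(block) / len(block)
    highs = [x for x in block if x >= m]
    lows = [x for x in block if x < m]
    bitmap = [1 if x >= m else 0 for x in block]
    high = round(sum(highs) / len(highs))
    low = round(sum(lows) / len(lows)) if lows else high
    return low, high, bitmap

def ambtc_decode(low, high, bitmap):
    """Reconstruct the block from its two levels and the bitmap."""
    return [high if b else low for b in bitmap]
```

    A two-valued block is reconstructed exactly, which is why BTC performs well on flat regions and loses quality on rich texture.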

  15. Multi-stage decoding for multi-level block modulation codes

    Science.gov (United States)

    Lin, Shu

    1991-01-01

    In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. Error performance of codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.

  16. A Simple Differential Modulation Scheme for Quasi-Orthogonal Space-Time Block Codes with Partial Transmit Diversity

    Directory of Open Access Journals (Sweden)

    Lingyang Song

    2007-04-01

    Full Text Available We report a simple differential modulation scheme for quasi-orthogonal space-time block codes. A new class of quasi-orthogonal coding structures that can provide partial transmit diversity is presented for various numbers of transmit antennas. Differential encoding and decoding can be simplified for differential Alamouti-like codes by grouping the signals in the transmitted matrix and decoupling the detection of data symbols, respectively. The new scheme can achieve constant amplitude of transmitted signals, and avoid signal constellation expansion; in addition it has a linear signal detector with very low complexity. Simulation results show that these partial-diversity codes can provide very useful results at low SNR for current communication systems. Extension to more than four transmit antennas is also considered.

  17. Linear network error correction coding

    CERN Document Server

    Guang, Xuan

    2014-01-01

    There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank metric codes and represents messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an

  18. Storage and Retrieval of Encrypted Data Blocks with In-Line Message Authentication Codes

    NARCIS (Netherlands)

    Bosch, H.G.P.; McLellan Jr, Hubert Rae; Mullender, Sape J.

    2007-01-01

    Techniques are disclosed for in-line storage of message authentication codes with respective encrypted data blocks. In one aspect, a given data block is encrypted and a message authentication code is generated for the encrypted data block. A target address is determined for storage of the encrypted

  19. Error Concealment using Neural Networks for Block-Based Image Coding

    Directory of Open Access Journals (Sweden)

    M. Mokos

    2006-06-01

    Full Text Available In this paper, a novel adaptive error concealment (EC) algorithm, which lowers the requirements on channel coding, is proposed. It conceals errors in block-based image coding systems by using neural networks. In the proposed algorithm, only intra-frame information is used to reconstruct an image with separate damaged blocks. The information of the pixels surrounding a damaged block is used to recover the errors with the neural network models. Computer simulation results show that the visual quality and the MSE of a reconstructed image are significantly improved using the proposed EC algorithm. We also propose a simple non-neural approach for comparison.

  20. Ensemble Weight Enumerators for Protograph LDPC Codes

    Science.gov (United States)

    Divsalar, Dariush

    2006-01-01

    Recently, LDPC codes with projected graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. The derived ensemble weight enumerators show that the linear minimum distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.

  1. Capacitor blocks for linear transformer driver stages.

    Science.gov (United States)

    Kovalchuk, B M; Kharlov, A V; Kumpyak, E V; Smorudov, G V; Zherlitsyn, A A

    2014-01-01

    In the Linear Transformer Driver (LTD) technology, the low-inductance energy storage components and switches are directly incorporated into individual cavities (named stages) to generate a fast output voltage pulse, which is added along a vacuum coaxial line as in an inductive voltage adder. LTD stages with air insulation were recently developed, where air is used both as insulation on the primary side of the stages and as the working gas in the LTD spark gap switches. A custom-designed unit, referred to as a capacitor block, was developed for use as the main structural element of the transformer stages. The capacitor block incorporates two GA 35426 capacitors (40 nF, 100 kV) and a multichannel multigap gas switch. Several modifications of the capacitor blocks were developed and tested for lifetime and self-breakdown probability. Blocks were tested both as separate units and in an assembly of a capacitive module consisting of five capacitor blocks. This paper presents the detailed design of the capacitor blocks, a description of the operating regimes, numerical simulation of the electric field in the switches, and test results.

  2. Linear codes associated to determinantal varieties

    DEFF Research Database (Denmark)

    Beelen, Peter; Ghorpade, Sudhir R.; Hasan, Sartaj Ul

    2015-01-01

    We consider a class of linear codes associated to projective algebraic varieties defined by the vanishing of minors of a fixed size of a generic matrix. It is seen that the resulting code has only a small number of distinct weights. The case of varieties defined by the vanishing of 2×2 minors is ...

  3. Efficient preparation of large-block-code ancilla states for fault-tolerant quantum computation

    Science.gov (United States)

    Zheng, Yi-Cong; Lai, Ching-Yi; Brun, Todd A.

    2018-03-01

    Fault-tolerant quantum computation (FTQC) schemes that use multiqubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement for a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data-code blocks, which are generally difficult to prepare if the code size is large. Previously, we have proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes using classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; however, in reality, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be boosted from O(t^-2) to O(1) in practice for an [[n, k, d = 2t+1]] code.

  4. Solving block linear systems with low-rank off-diagonal blocks is easily parallelizable

    Energy Technology Data Exchange (ETDEWEB)

    Menkov, V. [Indiana Univ., Bloomington, IN (United States)

    1996-12-31

    An easily and efficiently parallelizable direct method is given for solving a block linear system Bx = y, where B = D + Q is the sum of a non-singular block diagonal matrix D and a matrix Q with low-rank blocks. This implicitly defines a new preconditioning method with an operation count close to the cost of calculating a matrix-vector product Qw for some w, plus at most twice the cost of calculating Qw for some w. When implemented on a parallel machine the processor utilization can be as good as that of those operations. Order estimates are given for the general case, and an implementation is compared to block SSOR preconditioning.
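
    One standard way to realize such a solver is the Sherman-Morrison-Woodbury identity with the low-rank part written as Q = U V^T: every application of D^{-1} decomposes into independent per-block solves, which is what makes the method easy to parallelize. The paper's exact scheme may differ; this NumPy sketch is illustrative only:

```python
import numpy as np

def solve_block_diag_plus_lowrank(D_blocks, U, V, y):
    """Solve (D + U V^T) x = y, with D block diagonal and U, V of low
    rank p, via the Sherman-Morrison-Woodbury identity.  Each call to
    solve_D splits into independent per-block solves, so the dominant
    work parallelizes across blocks."""
    def solve_D(b):
        out = np.empty_like(b)
        i = 0
        for Di in D_blocks:
            n = Di.shape[0]
            out[i:i + n] = np.linalg.solve(Di, b[i:i + n])
            i += n
        return out

    Dy = solve_D(y)                                   # D^{-1} y
    DU = np.column_stack([solve_D(U[:, j])            # D^{-1} U, column-wise
                          for j in range(U.shape[1])])
    small = np.eye(U.shape[1]) + V.T @ DU             # p x p system
    return Dy - DU @ np.linalg.solve(small, V.T @ Dy)
```

    The only coupled work is the small p x p solve, consistent with an operation count close to a few applications of Q plus the block-diagonal solves.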

  5. A Chip-Level BSOR-Based Linear GSIC Multiuser Detector for Long-Code CDMA Systems

    Directory of Open Access Journals (Sweden)

    Benyoucef M

    2007-01-01

    Full Text Available We introduce a chip-level linear group-wise successive interference cancellation (GSIC) multiuser structure that is asymptotically equivalent to block successive over-relaxation (BSOR) iteration, which is known to outperform the conventional block Gauss-Seidel iteration by an order of magnitude in terms of convergence speed. The main advantage of the proposed scheme is that it uses the spreading codes directly instead of the cross-correlation matrix and thus does not require the calculation of the cross-correlation matrix (which requires 2NK^2 floating point operations (flops), where N is the processing gain and K is the number of users), significantly reducing the overall computational complexity. It is therefore suitable for long-code CDMA systems such as IS-95 and UMTS, where the cross-correlation matrix changes every symbol. We study the convergence behavior of the proposed scheme using two approaches and prove that it converges to the decorrelator detector if the over-relaxation factor is in the interval ]0, 2[. Simulation results are in excellent agreement with theory.

  6. A Chip-Level BSOR-Based Linear GSIC Multiuser Detector for Long-Code CDMA Systems

    Directory of Open Access Journals (Sweden)

    M. Benyoucef

    2008-01-01

    Full Text Available We introduce a chip-level linear group-wise successive interference cancellation (GSIC) multiuser structure that is asymptotically equivalent to block successive over-relaxation (BSOR) iteration, which is known to outperform the conventional block Gauss-Seidel iteration by an order of magnitude in terms of convergence speed. The main advantage of the proposed scheme is that it uses the spreading codes directly instead of the cross-correlation matrix and thus does not require the calculation of the cross-correlation matrix (which requires 2NK^2 floating point operations (flops), where N is the processing gain and K is the number of users), significantly reducing the overall computational complexity. It is therefore suitable for long-code CDMA systems such as IS-95 and UMTS, where the cross-correlation matrix changes every symbol. We study the convergence behavior of the proposed scheme using two approaches and prove that it converges to the decorrelator detector if the over-relaxation factor is in the interval ]0, 2[. Simulation results are in excellent agreement with theory.
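
    The SOR iteration underlying the detector can be illustrated with plain (group size one) successive over-relaxation applied to the decorrelator system R d = y; as the record states, it converges for over-relaxation factors in ]0, 2[ when R is symmetric positive definite. This sketch forms the cross-correlation matrix R explicitly, which the paper's chip-level structure deliberately avoids:

```python
import numpy as np

def sor_solve(R, y, omega=1.2, sweeps=200):
    """Successive over-relaxation for R d = y with R symmetric positive
    definite (e.g. a cross-correlation matrix).  By the Ostrowski-Reich
    theorem, the iterates converge to the decorrelating solution
    R^{-1} y whenever 0 < omega < 2."""
    d = np.zeros_like(y, dtype=float)
    n = len(y)
    for _ in range(sweeps):
        for i in range(n):
            # Cancel interference from all other users' current estimates.
            sigma = R[i, :i] @ d[:i] + R[i, i + 1:] @ d[i + 1:]
            d[i] = (1 - omega) * d[i] + omega * (y[i] - sigma) / R[i, i]
    return d
```

    The group-wise (block) variant updates a group of users jointly per step instead of one component at a time, which is what accelerates convergence over Gauss-Seidel.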

  7. A symmetric Roos bound for linear codes

    NARCIS (Netherlands)

    Duursma, I.M.; Pellikaan, G.R.

    2006-01-01

    The van Lint–Wilson AB-method yields a short proof of the Roos bound for the minimum distance of a cyclic code. We use the AB-method to obtain a different bound for the weights of a linear code. In contrast to the Roos bound, the role of the codes A and B in our bound is symmetric. We use the bound

  8. Forms and Linear Network Codes

    DEFF Research Database (Denmark)

    Hansen, Johan P.

    We present a general theory to obtain linear network codes utilizing forms, and obtain explicit families of equidimensional vector spaces in which any pair of distinct vector spaces intersect in the same small dimension. The theory is inspired by the methods of the author utilizing the osculating spaces of Veronese varieties. Linear network coding transmits information in terms of a basis of a vector space and the information is received as a basis of a possibly altered vector space. Ralf Koetter and Frank R. Kschischang introduced a metric on the set of vector spaces and showed that a minimal distance decoder for this metric achieves correct decoding if the dimension of the intersection of the transmitted and received vector space is sufficiently large. The vector spaces in our construction are equidistant in the above metric and the distance between any pair of vector spaces is large, making...

  9. Development of non-linear vibration analysis code for CANDU fuelling machine

    International Nuclear Information System (INIS)

    Murakami, Hajime; Hirai, Takeshi; Horikoshi, Kiyomi; Mizukoshi, Kaoru; Takenaka, Yasuo; Suzuki, Norio.

    1988-01-01

    This paper describes the development of a non-linear, dynamic analysis code for the CANDU 600 fuelling machine (F-M), which includes a number of non-linearities such as gap with or without Coulomb friction, special multi-linear spring connections, etc. The capabilities and features of the code and the mathematical treatment for the non-linearities are explained. The modeling and numerical methodology for the non-linearities employed in the code are verified experimentally. Finally, the simulation analyses for the full-scale F-M vibration testing are carried out, and the applicability of the code to such multi-degree of freedom systems as F-M is demonstrated. (author)

  10. Adaptive Multi-Layered Space-Time Block Coded Systems in Wireless Environments

    KAUST Repository

    Al-Ghadhban, Samir

    2014-01-01

    Multi-layered space-time block coded systems (MLSTBC) strike a balance between spatial multiplexing and transmit diversity. In this paper, we analyze the block error rate performance of MLSTBC

  11. Osculating Spaces of Varieties and Linear Network Codes

    DEFF Research Database (Denmark)

    Hansen, Johan P.

    2013-01-01

    We present a general theory to obtain good linear network codes utilizing the osculating nature of algebraic varieties. In particular, we obtain from the osculating spaces of Veronese varieties explicit families of equidimensional vector spaces in which any pair of distinct vector spaces intersects in the same dimension. Linear network coding transmits information in terms of a basis of a vector space and the information is received as a basis of a possibly altered vector space. Ralf Koetter and Frank R. Kschischang introduced a metric on the set of vector spaces and showed that a minimal distance decoder for this metric achieves correct decoding if the dimension of the intersection of the transmitted and received vector space is sufficiently large. The obtained osculating spaces of Veronese varieties are equidistant in the above metric. The parameters of the resulting linear network codes...

  12. Osculating Spaces of Varieties and Linear Network Codes

    DEFF Research Database (Denmark)

    Hansen, Johan P.

    We present a general theory to obtain good linear network codes utilizing the osculating nature of algebraic varieties. In particular, we obtain from the osculating spaces of Veronese varieties explicit families of equidimensional vector spaces in which any pair of distinct vector spaces intersects in the same dimension. Linear network coding transmits information in terms of a basis of a vector space and the information is received as a basis of a possibly altered vector space. Ralf Koetter and Frank R. Kschischang introduced a metric on the set of vector spaces and showed that a minimal distance decoder for this metric achieves correct decoding if the dimension of the intersection of the transmitted and received vector space is sufficiently large. The obtained osculating spaces of Veronese varieties are equidistant in the above metric. The parameters of the resulting linear network codes...

  13. Spatial Block Codes Based on Unitary Transformations Derived from Orthonormal Polynomial Sets

    Directory of Open Access Journals (Sweden)

    Mandyam Giridhar D

    2002-01-01

    Full Text Available Recent work in the development of diversity transformations for wireless systems has produced a theoretical framework for space-time block codes. Such codes are beneficial in that they may be easily concatenated with interleaved trellis codes and yet still may be decoded separately. In this paper, a theoretical framework is provided for the generation of spatial block codes of arbitrary dimensionality through the use of orthonormal polynomial sets. While these codes cannot maximize theoretical diversity performance for given dimensionality, they still provide performance improvements over the single-antenna case. In particular, their application to closed-loop transmit diversity systems is proposed, as the bandwidth necessary for feedback using these types of codes is fixed regardless of the number of antennas used. Simulation data is provided demonstrating these types of codes' performance under this implementation as compared not only to the single-antenna case but also to the two-antenna code derived from the Radon-Hurwitz construction.

  14. Linear Shrinkage Behaviour of Compacted Loam Masonry Blocks

    Directory of Open Access Journals (Sweden)

    NAWAB ALI LAKHO

    2017-04-01

    Full Text Available Walls of wet loam, used in earthen houses, generally experience significant shrinkage, which results in cracks and reduced compressive strength. This paper presents a technique for producing loam masonry blocks that are compacted in a drained state during the casting process in order to minimize shrinkage. For this purpose, loam masonry blocks were cast, compacted at a pressure of 6 MPa, and then dried in shade under plastic sheet. The results show that a linear shrinkage of 2% occurred, which is smaller than that of un-compacted wet loam walls. This implies that loam masonry blocks compacted in a drained state are expected to perform better than un-compacted wet loam walls.
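
    For reference, linear shrinkage is simply the drying length change expressed as a percentage of the initial length (the standard definition, not a formula quoted from the paper):

```python
def linear_shrinkage_pct(initial_length, dried_length):
    """Linear drying shrinkage as a percentage of the initial length."""
    return 100.0 * (initial_length - dried_length) / initial_length

# A specimen cast at 300 mm that dries to 294 mm shrank by 2%.
shrinkage = linear_shrinkage_pct(300.0, 294.0)   # 2.0
```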

  15. New binary linear codes which are dual transforms of good codes

    NARCIS (Netherlands)

    Jaffe, D.B.; Simonis, J.

    1999-01-01

    If C is a binary linear code, one may choose a subset S of C, and form a new code CST which is the row space of the matrix having the elements of S as its columns. One way of picking S is to choose a subgroup H of Aut(C) and let S be some H-stable subset of C. Using (primarily) this method for
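
    The dual-transform construction just described can be made concrete for a tiny code: collect the chosen codewords S as the columns of a matrix and take its row space over GF(2). A brute-force sketch (the function name is ours):

```python
from itertools import product

def dual_transform(S):
    """Row space of the matrix whose columns are the codewords in S.

    Codewords are tuples over GF(2).  The row space is enumerated by
    brute force, so this is only suitable for tiny examples."""
    n = len(S)          # length of the new code = |S|
    k = len(S[0])       # number of rows = length of the original code
    rows = [tuple(S[j][i] for j in range(n)) for i in range(k)]
    span = set()
    for coeffs in product((0, 1), repeat=k):
        span.add(tuple(sum(c * r[j] for c, r in zip(coeffs, rows)) % 2
                       for j in range(n)))
    return span
```

    For example, taking S = {(1,0), (0,1), (1,1)} inside the full length-2 code yields the [3, 2] even-weight code.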

  16. Infinity-Norm Permutation Covering Codes from Cyclic Groups

    OpenAIRE

    Karni, Ronen; Schwartz, Moshe

    2017-01-01

    We study covering codes of permutations with the $\ell_\infty$-metric. We provide a general code construction, which uses smaller building-block codes. We study cyclic transitive groups as building blocks, determining their exact covering radius, and showing linear-time algorithms for finding a covering codeword. We also bound the covering radius of relabeled cyclic transitive groups under conjugation.
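
    For very small n, the covering radius of the cyclic group generated by the n-cycle can be checked by brute force, which makes the definitions above concrete (this exhaustive search is our illustration; the paper derives the radius analytically):

```python
from itertools import permutations

def linf_distance(p, q):
    """l-infinity distance between two permutations of {0, ..., n-1}."""
    return max(abs(a - b) for a, b in zip(p, q))

def cyclic_covering_radius(n):
    """Brute-force covering radius of the cyclic group generated by the
    n-cycle, viewed as a code inside S_n (feasible for small n only)."""
    code = [tuple((i + s) % n for i in range(n)) for s in range(n)]
    return max(min(linf_distance(p, c) for c in code)
               for p in permutations(range(n)))
```

    For n = 2 the group is all of S_2, so the radius is 0; for n = 3 every permutation lies within distance 1 of some cyclic shift.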

  17. Computer codes for designing proton linear accelerators

    International Nuclear Information System (INIS)

    Kato, Takao

    1992-01-01

    Computer codes for designing proton linear accelerators are discussed from the viewpoint not only of design but also of construction and operation of the linac. The codes are divided into three categories according to their purposes: 1) design codes, 2) generation and simulation codes, and 3) electric and magnetic field calculation codes. The role of each category is discussed on the basis of experience at KEK (the design of the 40-MeV proton linac and its construction and operation, and the design of the 1-GeV proton linac). We introduce our recent work on three-dimensional and supercomputer calculations: 1) tuning of MAFIA (a three-dimensional electric and magnetic field calculation code) for supercomputers, 2) examples of three-dimensional calculations of accelerating structures with MAFIA, and 3) development of a beam transport code including space-charge effects. (author)

  18. Block-based wavelet transform coding of mammograms with region-adaptive quantization

    Science.gov (United States)

    Moon, Nam Su; Song, Jun S.; Kwon, Musik; Kim, JongHyo; Lee, ChoongWoong

    1998-06-01

    To achieve both a high compression ratio and information preservation, it is efficient to combine segmentation with a lossy compression scheme. Microcalcification in mammograms is one of the most significant signs of early-stage breast cancer, so detecting and segmenting microcalcifications before coding enables us to preserve them well by allocating more bits to them than to other regions. Segmentation of microcalcifications is performed both in the spatial domain and in the wavelet transform domain. A peak-error-controllable quantization step, designed off-line, is suitable for medical image compression. For region-adaptive quantization, block-based wavelet transform coding is adopted and different peak-error-constrained quantizers are applied to blocks according to the segmentation result. In view of the preservation of microcalcifications, the proposed coding scheme shows better performance than JPEG.
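
    The region-adaptive quantization idea reduces to choosing a finer uniform step inside the segmented region, which bounds the peak reconstruction error there by half the step. A toy sketch with illustrative step sizes (not the paper's designed quantizers):

```python
def quantize_region_adaptive(coeffs, roi_mask, step_roi=2.0, step_bg=16.0):
    """Uniform quantization with a finer step inside the region of
    interest; rounding bounds the peak error in each region by half its
    step.  Step sizes here are illustrative only."""
    return [round(c / (step_roi if in_roi else step_bg))
            for c, in_roi in zip(coeffs, roi_mask)]

def dequantize_region_adaptive(indices, roi_mask, step_roi=2.0, step_bg=16.0):
    """Invert the quantizer by scaling each index back by its step."""
    return [q * (step_roi if in_roi else step_bg)
            for q, in_roi in zip(indices, roi_mask)]
```

    A coefficient inside the region of interest is reproduced to within step_roi/2, while background coefficients trade accuracy for rate.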

  19. Nonlinear to Linear Elastic Code Coupling in 2-D Axisymmetric Media.

    Energy Technology Data Exchange (ETDEWEB)

    Preston, Leiph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-08-01

    Explosions within the earth nonlinearly deform the local media, but at typical seismological observation distances, the seismic waves can be considered linear. Although nonlinear algorithms can simulate explosions in the very near field well, these codes are computationally expensive and inaccurate at propagating these signals to great distances. A linearized wave propagation code, coupled to a nonlinear code, provides an efficient mechanism to both accurately simulate the explosion itself and to propagate these signals to distant receivers. To this end we have coupled Sandia's nonlinear simulation algorithm CTH to a linearized elastic wave propagation code for 2-D axisymmetric media (axiElasti) by passing information from the nonlinear to the linear code via time-varying boundary conditions. In this report, we first develop the 2-D axisymmetric elastic wave equations in cylindrical coordinates. Next we show how we design the time-varying boundary conditions passing information from CTH to axiElasti, and finally we demonstrate the coupling code via a simple study of the elastic radius.

  20. Linear and nonlinear verification of gyrokinetic microstability codes

    Science.gov (United States)

    Bravenec, R. V.; Candy, J.; Barnes, M.; Holland, C.

    2011-12-01

    Verification of nonlinear microstability codes is a necessary step before comparisons or predictions of turbulent transport in toroidal devices can be justified. By verification we mean demonstrating that a code correctly solves the mathematical model upon which it is based. Some degree of verification can be accomplished indirectly from analytical instability threshold conditions, nonlinear saturation estimates, etc., for relatively simple plasmas. However, verification for experimentally relevant plasma conditions and physics is beyond the realm of analytical treatment and must rely on code-to-code comparisons, i.e., benchmarking. The premise is that the codes are verified for a given problem or set of parameters if they all agree within a specified tolerance. True verification requires comparisons for a number of plasma conditions, e.g., different devices, discharges, times, and radii. Running the codes and keeping track of linear and nonlinear inputs and results for all conditions could be prohibitive unless there was some degree of automation. We have written software to do just this and have formulated a metric for assessing agreement of nonlinear simulations. We present comparisons, both linear and nonlinear, between the gyrokinetic codes GYRO [J. Candy and R. E. Waltz, J. Comput. Phys. 186, 545 (2003)] and GS2 [W. Dorland, F. Jenko, M. Kotschenreuther, and B. N. Rogers, Phys. Rev. Lett. 85, 5579 (2000)]. We do so at the mid-radius for the same discharge as in earlier work [C. Holland, A. E. White, G. R. McKee, M. W. Shafer, J. Candy, R. E. Waltz, L. Schmitz, and G. R. Tynan, Phys. Plasmas 16, 052301 (2009)]. The comparisons include electromagnetic fluctuations, passing and trapped electrons, plasma shaping, one kinetic impurity, and finite Debye-length effects. Results neglecting and including electron collisions (Lorentz model) are presented. We find that the linear frequencies with or without collisions agree well between codes, as do the time averages of

  1. Decoding Algorithms for Random Linear Network Codes

    DEFF Research Database (Denmark)

    Heide, Janus; Pedersen, Morten Videbæk; Fitzek, Frank

    2011-01-01

    We consider the problem of efficient decoding of a random linear code over a finite field. In particular, we are interested in the case where the code is random and relatively sparse, and we use the binary finite field as an example. The goal is to decode the data using fewer operations to potentially...... achieve a high coding throughput, and reduce energy consumption. We use an on-the-fly version of the Gauss-Jordan algorithm as a baseline, and provide several simple improvements to reduce the number of operations needed to perform decoding. Our tests show that the improvements can reduce the number...
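The on-the-fly Gauss-Jordan baseline described above can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: the packet representation (GF(2) coding coefficients packed into an integer bitmask, payload as an XOR of source symbols) and the function name are assumptions.

```python
def decode_gf2(packets, n):
    """Decode a random linear code over GF(2) by on-the-fly Gauss-Jordan
    elimination.  Each packet is (coef, data): coef is an n-bit int of
    coding coefficients, data the XOR of the selected source symbols.
    Returns the n source symbols, or None if not yet decodable."""
    pivot = {}                                  # leading bit -> (coef, data)
    for coef, data in packets:
        # forward reduction: clear known leading bits, highest first
        for b in sorted(pivot, reverse=True):
            if coef >> b & 1:
                pc, pd = pivot[b]
                coef ^= pc
                data ^= pd
        if coef == 0:
            continue                            # linearly dependent packet
        pivot[coef.bit_length() - 1] = (coef, data)
    if len(pivot) < n:
        return None                             # rank-deficient so far
    # backward substitution: remove each pivot bit from the rows above it
    for b in sorted(pivot):
        pc, pd = pivot[b]
        for b2 in pivot:
            if b2 > b and pivot[b2][0] >> b & 1:
                c2, d2 = pivot[b2]
                pivot[b2] = (c2 ^ pc, d2 ^ pd)
    return [pivot[b][1] for b in range(n)]
```

Processing packets as they arrive (rather than batching the whole matrix) is what makes the elimination "on-the-fly": dependent packets are discarded immediately, so decoding completes as soon as n independent packets have been absorbed.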

  2. Equidistant Linear Network Codes with maximal Error-protection from Veronese Varieties

    DEFF Research Database (Denmark)

    Hansen, Johan P.

    2012-01-01

    Linear network coding transmits information in terms of a basis of a vector space, and the information is received as a basis of a possibly altered vector space. Ralf Koetter and Frank R. Kschischang in Coding for errors and erasures in random network coding (IEEE Transactions on Information Theory...... construct explicit families of vector spaces of constant dimension where any two distinct vector spaces are equidistant in the above metric. The parameters of the resulting linear network codes, which have maximal error-protection, are determined....

  3. A database of linear codes over F_13 with minimum distance bounds and new quasi-twisted codes from a heuristic search algorithm

    Directory of Open Access Journals (Sweden)

    Eric Z. Chen

    2015-01-01

    Error control codes have been widely used in data communications and storage systems. One central problem in coding theory is to optimize the parameters of a linear code and construct codes with the best possible parameters. There are tables of best-known linear codes over finite fields of sizes up to 9. Recently, there has been a growing interest in codes over $\mathbb{F}_{13}$ and other fields of size greater than 9. The main purpose of this work is to present a database of best-known linear codes over the field $\mathbb{F}_{13}$ together with upper bounds on the minimum distances. To find good linear codes that establish lower bounds on minimum distances, an iterative heuristic computer search algorithm is employed to construct quasi-twisted (QT) codes over the field $\mathbb{F}_{13}$ with high minimum distances. A large number of new linear codes have been found, improving previously best-known results. Tables of $[pm, m]$ QT codes over $\mathbb{F}_{13}$ with best-known minimum distances, as well as a table of lower and upper bounds on the minimum distances for linear codes of length up to 150 and dimension up to 6, are presented.
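For illustration of the quasi-twisted structure: the generator matrix of a QT code is built from m x m "twistulant" blocks, in which each row is a constacyclic shift of the previous one. A minimal sketch over F_13 follows; the twist constant ALPHA = 2 is a hypothetical choice for demonstration, not one taken from the paper.

```python
P = 13       # field size F_13
ALPHA = 2    # twist constant (hypothetical illustrative choice)

def twistulant(first_row):
    """Build an m x m twistulant block over F_P: each row is the previous
    row shifted right by one position, with the wrapped-around entry
    multiplied by the twist constant ALPHA (mod P)."""
    m = len(first_row)
    rows = [list(first_row)]
    for _ in range(m - 1):
        prev = rows[-1]
        rows.append([(ALPHA * prev[-1]) % P] + prev[:-1])
    return rows
```

A generator matrix of a [pm, m] QT code is then a row of such blocks, each defined by its first row; the heuristic search in the paper amounts to searching over these defining rows.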

  4. Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video

    Science.gov (United States)

    Li, Honggui

    2017-09-01

    This paper proposes a unified one-dimensional (1-D) coding framework for image and video, which depends on a deep learning neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain the compact inputs of a deep artificial neural network. Second, for the purpose of best reconstructing the original image patches, a deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. Under 1-D representation, the DLA is capable of attaining zero reconstruction error, which is impossible for classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating the different categories of video into the inputs of the patch clustering algorithm. Finally, simulation experiments show that the proposed methods simultaneously achieve a higher compression ratio and peak signal-to-noise ratio than state-of-the-art methods at low bitrates.

  5. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    Science.gov (United States)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized variable-block-size transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The coders used to code any given image region are selected via a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  6. System Performance of Concatenated STBC and Block Turbo Codes in Dispersive Fading Channels

    Directory of Open Access Journals (Sweden)

    Kam Tai Chan

    2005-05-01

    A new scheme concatenating the block turbo code (BTC) with the space-time block code (STBC) for an OFDM system in dispersive fading channels is investigated in this paper. The good error-correcting capability of BTC and the large diversity gain of STBC can be achieved simultaneously. The resulting receiver outperforms the iterative convolutional turbo receiver with the maximum-a-posteriori-probability expectation-maximization (MAP-EM) algorithm. Because of its ability to perform the encoding and decoding processes in parallel, the proposed system is easy to implement in real time.

  7. Simulations of linear and Hamming codes using SageMath

    Science.gov (United States)

    Timur, Tahta D.; Adzkiya, Dieky; Soleha

    2018-03-01

    Digital data transmission over a noisy channel can distort the message being transmitted. The goal of coding theory is to ensure data integrity, that is, to find out if and where noise has distorted the message and what the original message was. Data transmission consists of three stages: encoding, transmission, and decoding. Linear and Hamming codes are the codes discussed in this work; the encoding algorithms use parity checks and the generator matrix, and the decoding algorithms are nearest-neighbor and syndrome decoding. We aim to show that these processes can be simulated using SageMath software, which has built-in classes for coding theory in general and linear codes in particular. First, we consider the message as a binary vector of size k. This message is then encoded into a vector of size n using the given algorithms. A noisy channel with a particular error probability is then created, over which the transmission takes place. The last task is decoding, which corrects and reverts the received message back to the original whenever possible, that is, if the number of errors that occurred is smaller than or equal to the correcting radius of the code. In this paper we use two types of data for the simulations, namely vector and text data.
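SageMath exposes these stages through its built-in codes module; as a plain-Python sketch of the same pipeline (generator-matrix encoding, a single-bit channel error, syndrome decoding) for the Hamming(7,4) code:

```python
# Hamming(7,4): systematic generator G = [I | P] and parity-check
# matrix H = [P^T | I] over GF(2)
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def encode(msg):
    """Codeword = msg x G (mod 2), for a 4-bit message."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def decode(recv):
    """Syndrome decoding: a nonzero syndrome equals the column of H at
    the (single) error position; flip that bit, then strip the parity."""
    s = [sum(h * r for h, r in zip(row, recv)) % 2 for row in H]
    if any(s):
        for j in range(7):
            if [H[i][j] for i in range(3)] == s:
                recv = recv[:]      # copy before correcting
                recv[j] ^= 1
                break
    return recv[:4]                 # systematic: message is the first 4 bits
```

The correcting radius of Hamming(7,4) is 1, so any single flipped bit is recovered exactly, matching the decoding stage described in the abstract.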

  8. Performance Comparison of Assorted Color Spaces for Multilevel Block Truncation Coding based Face Recognition

    OpenAIRE

    H.B. Kekre; Sudeep Thepade; Karan Dhamejani; Sanchit Khandelwal; Adnan Azmi

    2012-01-01

    The paper presents a performance analysis of Multilevel Block Truncation Coding based Face Recognition among widely used color spaces. In [1], Multilevel Block Truncation Coding was applied to the RGB color space up to four levels for face recognition. Better results were obtained when the proposed technique was implemented using Kekre’s LUV (K’LUV) color space [25]. This was the motivation to test the proposed technique using assorted color spaces. For experimental analysis, two face databas...

  9. Multispectral data compression through transform coding and block quantization

    Science.gov (United States)

    Ready, P. J.; Wintz, P. A.

    1972-01-01

    Transform coding and block quantization techniques are applied to multispectral aircraft scanner data, and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single sample PCM encoder.
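The transform-coding-plus-block-quantization pipeline can be illustrated with a 4x4 Hadamard transform, one of the three encoders the abstract names (the Karhunen-Loeve and Fourier encoders follow the same pattern with different basis matrices). The quantizer step size below is an arbitrary illustrative choice.

```python
# 4x4 Hadamard matrix (natural order); H is symmetric and H.H = 4I,
# so the normalized 2-D transform below is its own inverse.
H = [[1,  1,  1,  1],
     [1, -1,  1, -1],
     [1,  1, -1, -1],
     [1, -1, -1,  1]]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def hadamard2d(X):
    """2-D Hadamard transform of a 4x4 block: C = H X H / 4."""
    C = matmul(matmul(H, X), H)
    return [[c / 4 for c in row] for row in C]

def quantize(C, step):
    """Uniform block quantization of the transform coefficients."""
    return [[round(c / step) * step for c in row] for row in C]
```

Because the normalized transform is involutory, reconstruction is just a second application of `hadamard2d`; the only loss is the quantization error, which is bounded by half the step size per coefficient.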

  10. Protograph LDPC Codes with Node Degrees at Least 3

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher

    2006-01-01

    In this paper we present protograph codes with a small number of degree-3 nodes and one high-degree node. The iterative decoding thresholds for the proposed rate-1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with degree at least 3. The main motivation is to attain linear minimum distance growth, and thereby a low error floor, and to construct rate-compatible protograph-based LDPC codes of fixed block length that simultaneously achieve a low iterative decoding threshold and linear minimum distance. We start with a rate-1/2 protograph LDPC code with degree-3 nodes and one high-degree node. Higher-rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The case where all constraints are combined corresponds to the highest-rate code. This constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance. Thus, having node degree at least 3 at rate 1/2 guarantees that the linear minimum distance property is preserved at higher rates. Through examples we show that an iterative decoding threshold as low as 0.544 dB can be achieved for small protographs with node degrees at least three. A family of low- to high-rate codes with minimum distance increasing linearly in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.

  11. Design of convolutional tornado code

    Science.gov (United States)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environments, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which improves burst-erasure protection by applying the convolution property to the tTN code, and reduces computational complexity by abrogating the multi-level structure. Simulation results show that the cTN code provides better packet-loss protection with lower computational complexity than the tTN code.

  12. Optimal and efficient decoding of concatenated quantum block codes

    International Nuclear Information System (INIS)

    Poulin, David

    2006-01-01

    We consider the problem of optimally decoding a quantum error correction code, that is, finding the optimal recovery procedure given the outcomes of partial ''check'' measurements on the system. In general, this problem is NP-hard. However, we demonstrate that for concatenated block codes, the optimal decoding can be efficiently computed using a message-passing algorithm. We compare the performance of the message-passing algorithm to that of the widespread blockwise hard decoding technique. Our Monte Carlo results using the five-qubit code and Steane's code on a depolarizing channel demonstrate significant advantages of the message-passing algorithm in two respects: (i) optimal decoding increases by as much as 94% the error threshold below which the error correction procedure can be used to reliably send information over a noisy channel; and (ii) for noise levels below these thresholds, the probability of error after optimal decoding is suppressed at a significantly higher rate, leading to a substantial reduction of the error correction overhead

  13. Adaptive Multi-Layered Space-Time Block Coded Systems in Wireless Environments

    KAUST Repository

    Al-Ghadhban, Samir

    2014-12-23

    Multi-layered space-time block coded systems (MLSTBC) strike a balance between spatial multiplexing and transmit diversity. In this paper, we analyze the block error rate performance of MLSTBC. In addition, we propose adaptive MLSTBC schemes that are capable of accommodating the channel signal-to-noise ratio variations of wireless systems by near-instantaneously adapting the uplink transmission configuration. The main results demonstrate that significant effective throughput improvements can be achieved while maintaining a certain target bit error rate.

  14. Two "dual" families of Nearly-Linear Codes over ℤp, p odd

    NARCIS (Netherlands)

    Asch, van A.G.; Tilborg, van H.C.A.

    2001-01-01

    Since the paper by Hammons et al. [1], various authors have shown an enormous interest in linear codes over the ring Z4. A special weight function on Z4 was introduced, and by means of the so-called Gray map φ : Z4 → Z2^2 a relation was established between linear codes over Z4 and certain interesting

  15. Sparsity in Linear Predictive Coding of Speech

    DEFF Research Database (Denmark)

    Giacobello, Daniele

    of the effectiveness of their application in audio processing. The second part of the thesis deals with introducing sparsity directly in the linear prediction analysis-by-synthesis (LPAS) speech coding paradigm. We first propose a novel near-optimal method to look for a sparse approximate excitation using a compressed...... one with direct applications to coding but also consistent with the speech production model of voiced speech, where the excitation of the all-pole filter can be modeled as an impulse train, i.e., a sparse sequence. Introducing sparsity in the LP framework will also bring to develop the concept...... sensing formulation. Furthermore, we define a novel re-estimation procedure to adapt the predictor coefficients to the given sparse excitation, balancing the two representations in the context of speech coding. Finally, the advantages of the compact parametric representation of a segment of speech, given...

  16. Performance analysis of linear codes under maximum-likelihood decoding: a tutorial

    National Research Council Canada - National Science Library

    Sason, Igal; Shamai, Shlomo

    2006-01-01

    ..., upper and lower bounds on the error probability of linear codes under ML decoding are surveyed and applied to codes and ensembles of codes on graphs. For upper bounds, we discuss various bounds where focus is put on Gallager bounding techniques and their relation to a variety of other reported bounds. Within the class of lower bounds, we ad...

  17. Comparison of the nuclear code systems LINEAR-RECENT-NJOY and NJOY

    International Nuclear Information System (INIS)

    Seehusen, J.

    1983-07-01

    The reconstructed cross sections of the code systems LINEAR-RECENT-GROUPIE (Version 1982) and NJOY (Version 1982) have been compared for several materials. Some fuel cycle isotopes and structural materials of the ENDF/B-4 general purpose and ENDF/B-5 dosimetry files were chosen. The reconstructed total, capture, and fission cross sections calculated by LINEAR-RECENT and NJOY have been analyzed. The two sets of pointwise cross sections differ significantly. Another disagreement was found in the transformation of ENDF/B-4 and -5 files into data with a linear interpolation scheme. Unshielded multigroup constants at 0 K (620 groups, SANDII) have been calculated by the three code systems LINEAR-RECENT-GROUPIE, NJOY, and RESEND5-INTEND. The code system RESEND5-INTEND calculates wrong group constants and should not be used any more. The two sets of group constants obtained from ENDF/B-4 data using GROUPIE and NJOY differ for some group constants by more than 2%. Some disagreements at low energies (10^-3 eV) of the total cross section of Na-23 and Al-27 are difficult to understand. For ENDF/B-5 dosimetry data the capture group constants differ significantly. (Author) [pt

  18. A Review on Block Matching Motion Estimation and Automata Theory based Approaches for Fractal Coding

    Directory of Open Access Journals (Sweden)

    Shailesh Kamble

    2016-12-01

    Fractal compression is a lossy compression technique in the field of gray/color image and video compression. It gives a high compression ratio and better image quality with fast decoding time, but improvement in encoding time remains a challenge. This review article presents an analysis of the most significant existing approaches in the field of fractal-based gray/color image and video compression: different block-matching motion estimation approaches for finding the motion vectors in a frame, based on inter-frame coding and intra-frame coding (i.e., individual frame coding), and automata theory based coding approaches to represent an image or a sequence of images. Though different review papers exist related to fractal coding, this paper differs in many respects. One can develop a new shape pattern for motion estimation and modify the existing block-matching motion estimation with automata coding to explore the fractal compression technique, with specific focus on reducing the encoding time and achieving better image/video reconstruction quality. This paper is useful for beginners in the domain of video compression.
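The block-matching motion estimation surveyed here can be illustrated with its simplest variant, an exhaustive full search that minimizes the sum of absolute differences (SAD); the function name and list-of-lists frame representation are illustrative assumptions.

```python
def full_search(ref, cur, bx, by, B, R):
    """Exhaustive block matching: find the motion vector (dx, dy) that
    minimizes the SAD between the B x B block of `cur` at (bx, by) and
    candidate blocks of `ref` within a +/-R search range."""
    best = (None, float('inf'))
    H, W = len(ref), len(ref[0])
    for dy in range(-R, R + 1):
        for dx in range(-R, R + 1):
            x, y = bx + dx, by + dy
            if not (0 <= x <= W - B and 0 <= y <= H - B):
                continue            # candidate block falls outside the frame
            sad = sum(abs(cur[by + i][bx + j] - ref[y + i][x + j])
                      for i in range(B) for j in range(B))
            if sad < best[1]:
                best = ((dx, dy), sad)
    return best[0]
```

Fast search patterns (diamond, three-step, and the shape patterns the review discusses) prune this candidate set; the full search is the accuracy baseline they are measured against.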

  19. Comparative evaluation of structural integrity for ITER blanket shield block based on SDC-IC and ASME code

    Energy Technology Data Exchange (ETDEWEB)

    Shim, Hee-Jin [ITER Korea, National Fusion Research Institute, 169-148 Gwahak-Ro, Yuseong-Gu, Daejeon (Korea, Republic of); Ha, Min-Su, E-mail: msha12@nfri.re.kr [ITER Korea, National Fusion Research Institute, 169-148 Gwahak-Ro, Yuseong-Gu, Daejeon (Korea, Republic of); Kim, Sa-Woong; Jung, Hun-Chea [ITER Korea, National Fusion Research Institute, 169-148 Gwahak-Ro, Yuseong-Gu, Daejeon (Korea, Republic of); Kim, Duck-Hoi [ITER Organization, Route de Vinon sur Verdon - CS 90046, 13067 Sant Paul Lez Durance (France)

    2016-11-01

    Highlights: • The procedure of structural integrity and fatigue assessment is described. • Case studies were performed according to both the SDC-IC and ASME Sec. III codes. • The conservatism of the ASME code is demonstrated. • The study only covers the specifically comparable case of the fatigue usage factor. - Abstract: The ITER blanket Shield Block is a bulk structure that absorbs radiation and provides thermal shielding to the vacuum vessel and external vessel components; the most significant load for the Shield Block is therefore the thermal load. In a previous study, the thermo-mechanical analysis was performed under the inductive operation as the representative loading condition, and fatigue evaluations were conducted to assure the structural integrity of the Shield Block according to the Structural Design Criteria for In-vessel Components (SDC-IC) provided by the ITER Organization (IO) based on the RCC-MR code. Generally, the ASME code (especially B&PV Sec. III) is widely applied for the design of nuclear components and is usually known to be more conservative than other specific codes. From the viewpoint of fatigue assessment, the ASME code is very conservative compared with SDC-IC in terms of the reflected K_e factor, design fatigue curve, and other factors. Therefore, an accurate fatigue assessment comparison is needed to measure the conservatism. The purpose of this study is to compare the fatigue usage resulting from the specified operating conditions, evaluated for the Shield Block based on both SDC-IC and the ASME code, and to discuss the conservatism of the results.

  20. Comparative evaluation of structural integrity for ITER blanket shield block based on SDC-IC and ASME code

    International Nuclear Information System (INIS)

    Shim, Hee-Jin; Ha, Min-Su; Kim, Sa-Woong; Jung, Hun-Chea; Kim, Duck-Hoi

    2016-01-01

    Highlights: • The procedure of structural integrity and fatigue assessment is described. • Case studies were performed according to both the SDC-IC and ASME Sec. III codes. • The conservatism of the ASME code is demonstrated. • The study only covers the specifically comparable case of the fatigue usage factor. - Abstract: The ITER blanket Shield Block is a bulk structure that absorbs radiation and provides thermal shielding to the vacuum vessel and external vessel components; the most significant load for the Shield Block is therefore the thermal load. In a previous study, the thermo-mechanical analysis was performed under the inductive operation as the representative loading condition, and fatigue evaluations were conducted to assure the structural integrity of the Shield Block according to the Structural Design Criteria for In-vessel Components (SDC-IC) provided by the ITER Organization (IO) based on the RCC-MR code. Generally, the ASME code (especially B&PV Sec. III) is widely applied for the design of nuclear components and is usually known to be more conservative than other specific codes. From the viewpoint of fatigue assessment, the ASME code is very conservative compared with SDC-IC in terms of the reflected K_e factor, design fatigue curve, and other factors. Therefore, an accurate fatigue assessment comparison is needed to measure the conservatism. The purpose of this study is to compare the fatigue usage resulting from the specified operating conditions, evaluated for the Shield Block based on both SDC-IC and the ASME code, and to discuss the conservatism of the results.

  1. Linear calculations of edge current driven kink modes with BOUT++ code

    Energy Technology Data Exchange (ETDEWEB)

    Li, G. Q., E-mail: ligq@ipp.ac.cn; Xia, T. Y. [Institute of Plasma Physics, CAS, Hefei, Anhui 230031 (China); Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Xu, X. Q. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Snyder, P. B.; Turnbull, A. D. [General Atomics, San Diego, California 92186 (United States); Ma, C. H.; Xi, P. W. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); FSC, School of Physics, Peking University, Beijing 100871 (China)

    2014-10-15

    This work extends previous BOUT++ work to systematically study the impact of edge current density on edge localized modes, and to benchmark with the GATO and ELITE codes. Using the CORSICA code, a set of equilibria was generated with different edge current densities by keeping total current and pressure profile fixed. Based on these equilibria, the effects of the edge current density on the MHD instabilities were studied with the 3-field BOUT++ code. For the linear calculations, with increasing edge current density, the dominant modes are changed from intermediate-n and high-n ballooning modes to low-n kink modes, and the linear growth rate becomes smaller. The edge current provides stabilizing effects on ballooning modes due to the increase of local shear at the outer mid-plane with the edge current. For edge kink modes, however, the edge current does not always provide a destabilizing effect; with increasing edge current, the linear growth rate first increases, and then decreases. In benchmark calculations for BOUT++ against the linear results with the GATO and ELITE codes, the vacuum model has important effects on the edge kink mode calculations. By setting a realistic density profile and Spitzer resistivity profile in the vacuum region, the resistivity was found to have a destabilizing effect on both the kink mode and on the ballooning mode. With diamagnetic effects included, the intermediate-n and high-n ballooning modes can be totally stabilized for finite edge current density.

  2. Linear calculations of edge current driven kink modes with BOUT++ code

    International Nuclear Information System (INIS)

    Li, G. Q.; Xia, T. Y.; Xu, X. Q.; Snyder, P. B.; Turnbull, A. D.; Ma, C. H.; Xi, P. W.

    2014-01-01

    This work extends previous BOUT++ work to systematically study the impact of edge current density on edge localized modes, and to benchmark with the GATO and ELITE codes. Using the CORSICA code, a set of equilibria was generated with different edge current densities by keeping total current and pressure profile fixed. Based on these equilibria, the effects of the edge current density on the MHD instabilities were studied with the 3-field BOUT++ code. For the linear calculations, with increasing edge current density, the dominant modes are changed from intermediate-n and high-n ballooning modes to low-n kink modes, and the linear growth rate becomes smaller. The edge current provides stabilizing effects on ballooning modes due to the increase of local shear at the outer mid-plane with the edge current. For edge kink modes, however, the edge current does not always provide a destabilizing effect; with increasing edge current, the linear growth rate first increases, and then decreases. In benchmark calculations for BOUT++ against the linear results with the GATO and ELITE codes, the vacuum model has important effects on the edge kink mode calculations. By setting a realistic density profile and Spitzer resistivity profile in the vacuum region, the resistivity was found to have a destabilizing effect on both the kink mode and on the ballooning mode. With diamagnetic effects included, the intermediate-n and high-n ballooning modes can be totally stabilized for finite edge current density

  3. Formation of nanophases in epoxy thermosets containing amphiphilic block copolymers with linear and star-like topologies.

    Science.gov (United States)

    Wang, Lei; Zhang, Chongyin; Cong, Houluo; Li, Lei; Zheng, Sixun; Li, Xiuhong; Wang, Jie

    2013-07-11

    In this work, we investigated the effect of the topological structure of block copolymers on the formation of nanophases in epoxy thermosets containing amphiphilic block copolymers. Two block copolymers composed of poly(ε-caprolactone) (PCL) and poly(2,2,2-trifluoroethyl acrylate) (PTFEA) blocks were synthesized with linear and star-shaped topologies. The star-shaped block copolymer was composed of a polyhedral oligomeric silsesquioxane (POSS) core and eight poly(ε-caprolactone)-block-poly(2,2,2-trifluoroethyl acrylate) (PCL-b-PTFEA) diblock copolymer arms. Both block copolymers were synthesized via a combination of ring-opening polymerization and the reversible addition-fragmentation chain transfer/macromolecular design via the interchange of xanthate (RAFT/MADIX) process; they were controlled to have identical copolymerization compositions and block lengths. Upon incorporating either block copolymer into epoxy thermosets, spherical PTFEA nanophases were formed in all cases. However, the sizes of the PTFEA nanophases from the star-like block copolymer were significantly smaller than those from the linear diblock copolymer. The difference in nanostructure gave rise to different glass transition behavior of the nanostructured thermosets. The dependence of the PTFEA nanophases on the topology of the block copolymers is interpreted in terms of the conformation of the miscible subchain (viz., PCL) at the surface of the PTFEA microdomains and the restriction of the POSS cages on the demixing of the thermoset-philic block (viz., PCL).

  4. Proceedings of the conference on computer codes and the linear accelerator community

    International Nuclear Information System (INIS)

    Cooper, R.K.

    1990-07-01

    The conference whose proceedings you are reading was envisioned as the second in a series, the first having been held in San Diego in January 1988. The intended participants were those people who are actively involved in writing and applying computer codes for the solution of problems related to the design and construction of linear accelerators. The first conference reviewed many of the codes both extant and under development. This second conference provided an opportunity to update the status of those codes, and to provide a forum in which emerging new 3D codes could be described and discussed. The afternoon poster session on the second day of the conference provided an opportunity for extended discussion. All in all, this conference was felt to be quite a useful interchange of ideas and developments in the field of 3D calculations, parallel computation, higher-order optics calculations, and code documentation and maintenance for the linear accelerator community. A third conference is planned

  5. Proceedings of the conference on computer codes and the linear accelerator community

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, R.K. (comp.)

    1990-07-01

    The conference whose proceedings you are reading was envisioned as the second in a series, the first having been held in San Diego in January 1988. The intended participants were those people who are actively involved in writing and applying computer codes for the solution of problems related to the design and construction of linear accelerators. The first conference reviewed many of the codes both extant and under development. This second conference provided an opportunity to update the status of those codes, and to provide a forum in which emerging new 3D codes could be described and discussed. The afternoon poster session on the second day of the conference provided an opportunity for extended discussion. All in all, this conference was felt to be quite a useful interchange of ideas and developments in the field of 3D calculations, parallel computation, higher-order optics calculations, and code documentation and maintenance for the linear accelerator community. A third conference is planned.

  6. Verification of thermal-irradiation stress analytical code VIENUS of graphite block

    International Nuclear Information System (INIS)

    Iyoku, Tatsuo; Ishihara, Masahiro; Shiozawa, Shusaku; Shirai, Hiroshi; Minato, Kazuo.

    1992-02-01

    The core graphite components of the High Temperature Engineering Test Reactor (HTTR) show both dimensional change (irradiation shrinkage) and creep behavior due to fast neutron irradiation under the temperature and fast neutron irradiation conditions of the HTTR. Therefore, the thermal/irradiation stress analytical code VIENUS, which treats this graphite irradiation behavior, is to be employed in order to design the core components, such as the fuel blocks, of the HTTR. VIENUS is a two-dimensional finite element viscoelastic stress analytical code that takes account of changes in mechanical properties, thermal strain, irradiation-induced dimensional change, and creep in the fast neutron irradiation environment. Verification analyses were carried out in order to prove the validity of this code based on the irradiation tests of the 8th OGL-1 fuel assembly and the fuel element of the Peach Bottom reactor. This report describes the outline of the VIENUS code and its verification analyses. (author)

  7. Decoding error-correcting codes with Gröbner bases

    NARCIS (Netherlands)

    Bulygin, S.; Pellikaan, G.R.; Veldhuis, R.; Cronie, H.; Hoeksema, H.

    2007-01-01

    The decoding of arbitrary linear block codes is accomplished by solving a system of quadratic equations by means of Buchberger’s algorithm for finding a Gröbner basis. This generalizes the algorithm of Berlekamp-Massey for decoding Reed Solomon, Goppa and cyclic codes up to half the true minimum

  8. Construction of Short-length High-rates Ldpc Codes Using Difference Families

    OpenAIRE

    Deny Hamdani; Ery Safrianti

    2007-01-01

Low-density parity-check (LDPC) code is a linear-block error-correcting code defined by a sparse parity-check matrix. It is decoded using the message-passing algorithm and, in many cases, is capable of outperforming turbo codes. This paper presents a class of low-density parity-check (LDPC) codes showing good performance with low encoding complexity. The code is constructed using difference families from combinatorial design. The resulting code, which is designed to have short code length and high code r...

  9. Power Allocation Optimization: Linear Precoding Adapted to NB-LDPC Coded MIMO Transmission

    Directory of Open Access Journals (Sweden)

    Tarek Chehade

    2015-01-01

Full Text Available In multiple-input multiple-output (MIMO) transmission systems, the channel state information (CSI) at the transmitter can be used to add linear precoding to the transmitted signals in order to improve the performance and reliability of the transmission system. This paper investigates how to properly combine precoded closed-loop MIMO systems and nonbinary low-density parity-check (NB-LDPC) codes. The q elements in the Galois field GF(q) are directly mapped to q transmit symbol vectors. This allows NB-LDPC codes to fit perfectly with a MIMO precoding scheme, unlike binary LDPC codes. The new transmission model is detailed and studied for several linear precoders and various designed LDPC codes. We show that NB-LDPC codes are particularly well suited to being jointly used with precoding schemes based on the maximization of the minimum Euclidean distance (max-dmin) criterion. These results are theoretically supported by extrinsic information transfer (EXIT) analysis and are confirmed by numerical simulations.

  10. Space-Time Chip Equalization for Maximum Diversity Space-Time Block Coded DS-CDMA Downlink Transmission

    Directory of Open Access Journals (Sweden)

    Petré Frederik

    2004-01-01

Full Text Available In the downlink of DS-CDMA, frequency-selectivity destroys the orthogonality of the user signals and introduces multiuser interference (MUI). Space-time chip equalization is an efficient tool to restore the orthogonality of the user signals and suppress the MUI. Furthermore, multiple-input multiple-output (MIMO) communication techniques can result in a significant increase in capacity. This paper focuses on space-time block coding (STBC) techniques, and aims at combining STBC techniques with the original single-antenna DS-CDMA downlink scheme. This results in the so-called space-time block coded DS-CDMA downlink schemes, many of which have been presented in the past. We focus on a new scheme that enables both the maximum multiantenna diversity and the maximum multipath diversity. Although this maximum diversity can only be collected by maximum likelihood (ML) detection, we pursue suboptimal detection by means of space-time chip equalization, which lowers the computational complexity significantly. To design the space-time chip equalizers, we also propose efficient pilot-based methods. Simulation results show improved performance over the space-time RAKE receiver for the space-time block coded DS-CDMA downlink schemes that have been proposed for the UMTS and IS-2000 W-CDMA standards.

  11. Development of a 3D non-linear implicit MHD code

    International Nuclear Information System (INIS)

    Nicolas, T.; Ichiguchi, K.

    2016-06-01

This paper details the ongoing development of a 3D non-linear implicit MHD code, which aims at making possible large-scale simulations of the non-linear phase of the interchange mode. The goal of the paper is to explain the rationale behind the choices made during the development, and the technical difficulties encountered. At the present stage, the development of the code has not yet been completed. Most of the discussion is concerned with the first approach, which utilizes Cartesian coordinates in the poloidal plane. This approach shows serious difficulties in writing the preconditioner, closely related to the choice of coordinates. A second approach, based on curvilinear coordinates, also faced significant difficulties, which are detailed. The third and last approach explored involves unstructured tetrahedral grids, and indicates the possibility of solving the problem. The issue of domain meshing is addressed. (author)

  12. Object-Oriented Parallel Particle-in-Cell Code for Beam Dynamics Simulation in Linear Accelerators

    International Nuclear Information System (INIS)

    Qiang, J.; Ryne, R.D.; Habib, S.; Decky, V.

    1999-01-01

In this paper, we present an object-oriented three-dimensional parallel particle-in-cell code for beam dynamics simulation in linear accelerators. A two-dimensional parallel domain decomposition approach is employed within a message-passing programming paradigm, along with dynamic load balancing. Implementing an object-oriented software design provides the code with better maintainability, reusability, and extensibility compared with conventional structure-based code, and also helps to encapsulate the details of the communication syntax. Performance tests on SGI/Cray T3E-900 and SGI Origin 2000 machines show good scalability of the object-oriented code. Other important features of this code include symplectic integration with linear maps of external focusing elements and the use of z as the independent variable, as is typical in accelerators. A successful application was the simulation of beam transport through three superconducting sections in the APT linac design

  13. Linear algebra and matrices topics for a second course

    CERN Document Server

    Shapiro, Helene

    2015-01-01

    Linear algebra and matrix theory are fundamental tools for almost every area of mathematics, both pure and applied. This book combines coverage of core topics with an introduction to some areas in which linear algebra plays a key role, for example, block designs, directed graphs, error correcting codes, and linear dynamical systems. Notable features include a discussion of the Weyr characteristic and Weyr canonical forms, and their relationship to the better-known Jordan canonical form; the use of block cyclic matrices and directed graphs to prove Frobenius's theorem on the structure of the eigenvalues of a nonnegative, irreducible matrix; and the inclusion of such combinatorial topics as BIBDs, Hadamard matrices, and strongly regular graphs. Also included are McCoy's theorem about matrices with property P, the Bruck-Ryser-Chowla theorem on the existence of block designs, and an introduction to Markov chains. This book is intended for those who are familiar with the linear algebra covered in a typical first c...

  14. Random linear network coding for streams with unequally sized packets

    DEFF Research Database (Denmark)

    Taghouti, Maroua; Roetter, Daniel Enrique Lucani; Pedersen, Morten Videbæk

    2016-01-01

    State of the art Random Linear Network Coding (RLNC) schemes assume that data streams generate packets with equal sizes. This is an assumption that results in the highest efficiency gains for RLNC. A typical solution for managing unequal packet sizes is to zero-pad the smallest packets. However, ...
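
For readers unfamiliar with the zero-padding approach mentioned in the abstract, a minimal sketch follows. It assumes a binary field GF(2), so coding coefficients are bits and a coded packet is the XOR of a subset of source packets; real RLNC deployments typically work over GF(2^8), and the function name and interface here are illustrative, not a library API.

```python
import random

def rlnc_encode(packets, num_coded, seed=0):
    """Pad packets to a common length and emit random GF(2) combinations.

    Illustrative sketch: the field is GF(2), so a coded packet is the
    XOR of a random subset of the zero-padded source packets.
    """
    rng = random.Random(seed)
    size = max(len(p) for p in packets)           # zero-pad to the largest packet
    padded = [p + bytes(size - len(p)) for p in packets]
    coded = []
    for _ in range(num_coded):
        coeffs = [rng.randint(0, 1) for _ in padded]
        if not any(coeffs):                        # avoid the useless all-zero vector
            coeffs[rng.randrange(len(coeffs))] = 1
        pkt = bytes(size)
        for c, p in zip(coeffs, padded):
            if c:
                pkt = bytes(a ^ b for a, b in zip(pkt, p))
        coded.append((coeffs, pkt))
    return coded
```

The padding overhead this incurs for streams with very unequal packet sizes is exactly the inefficiency the paper addresses.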

  15. Solving linear systems in FLICA-4, thermohydraulic code for 3-D transient computations

    International Nuclear Information System (INIS)

    Allaire, G.

    1995-01-01

FLICA-4 is a computer code, developed at the CEA (France), devoted to steady state and transient thermal-hydraulic analysis of nuclear reactor cores, for small size problems (around 100 mesh cells) as well as for large ones (more than 100000), on either standard workstations or vector super-computers. As in other time-implicit codes, the most time- and memory-consuming part of FLICA-4 is the routine dedicated to solving the linear system (the size of which is of the order of the number of cells). Therefore, the efficiency of the code is crucially influenced by the optimization of the algorithms used in assembling and solving linear systems: direct methods such as the Gauss (or LU) decomposition for moderate size problems, and iterative methods such as the preconditioned conjugate gradient for large problems. 6 figs., 13 refs
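
The solver-selection strategy described (direct LU for moderate sizes, preconditioned conjugate gradient for large ones) can be sketched as follows. The size threshold, the Jacobi (diagonal) preconditioner, and the tolerances are illustrative assumptions, not FLICA-4's actual settings, and the CG branch assumes a symmetric positive definite matrix.

```python
import numpy as np

def solve_system(A, b, large_threshold=1000, tol=1e-10, max_iter=500):
    """Choose a solver by problem size: dense LU for moderate systems,
    Jacobi-preconditioned conjugate gradient for large SPD ones.
    Threshold and tolerances are illustrative, not FLICA-4's settings."""
    n = len(b)
    if n < large_threshold:
        return np.linalg.solve(A, b)               # direct (LU) solve
    # Preconditioned CG with a diagonal (Jacobi) preconditioner
    x = np.zeros(n)
    r = b - A @ x
    Minv = 1.0 / np.diag(A)
    z = Minv * r
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = Minv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x
```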

  16. Rate-compatible protograph LDPC code families with linear minimum distance

    Science.gov (United States)

    Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J. (Inventor); Jones, Christopher R. (Inventor)

    2012-01-01

    Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds.
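
The protograph-lifting step underlying such codes can be illustrated with a small sketch: each nonzero entry of the base (protograph) matrix is replaced by a Z×Z circulant permutation, each zero entry by a Z×Z zero block. The random choice of circulant shifts here is a simplification; the constructions described above select structure (and shifts) deliberately, e.g. to control minimum distance and avoid short cycles.

```python
import numpy as np

def lift_protograph(base, Z, shifts=None, seed=0):
    """Lift a protograph base matrix to a quasi-cyclic parity-check matrix
    by replacing each 1 with a ZxZ circulant permutation and each 0 with
    a ZxZ zero block. Shifts are random here for illustration only."""
    rng = np.random.default_rng(seed)
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    I = np.eye(Z, dtype=int)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                s = rng.integers(Z) if shifts is None else shifts[i][j]
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, s, axis=1)
    return H
```

Each variable node of the protograph becomes Z columns of H, so node degrees (including the low-degree variable nodes the first method allows) carry over directly to the lifted code.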

  17. Selecting Optimal Parameters of Random Linear Network Coding for Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Heide, J; Zhang, Qi; Fitzek, F H P

    2013-01-01

This work studies how to select optimal code parameters of Random Linear Network Coding (RLNC) in Wireless Sensor Networks (WSNs). With Rateless Deluge [1] the authors proposed to apply Network Coding (NC) for Over-the-Air Programming (OAP) in WSNs, and demonstrated that with NC a significant reduction in the number of transmitted packets can be achieved. However, NC introduces additional computations and potentially a non-negligible transmission overhead, both of which depend on the chosen coding parameters. Therefore it is necessary to consider the trade-off that these coding parameters present in order to obtain the lowest energy consumption per transmitted bit. This problem is analyzed and suitable coding parameters are determined for the popular Tmote Sky platform. Compared to the use of traditional RLNC, these parameters enable a reduction in the energy spent per bit which grows...

  18. An efficient, block-by-block algorithm for inverting a block tridiagonal, nearly block Toeplitz matrix

    International Nuclear Information System (INIS)

    Reuter, Matthew G; Hill, Judith C

    2012-01-01

    We present an algorithm for computing any block of the inverse of a block tridiagonal, nearly block Toeplitz matrix (defined as a block tridiagonal matrix with a small number of deviations from the purely block Toeplitz structure). By exploiting both the block tridiagonal and the nearly block Toeplitz structures, this method scales independently of the total number of blocks in the matrix and linearly with the number of deviations. Numerical studies demonstrate this scaling and the advantages of our method over alternatives.
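
The block-tridiagonal part of such algorithms rests on a standard left/right Schur-complement recursion for the diagonal blocks of the inverse. A dense-block sketch is given below; it deliberately omits the nearly-Toeplitz shortcut (reusing repeated Schur complements), which is the paper's contribution, so the function name and interface are illustrative only.

```python
import numpy as np

def inverse_diag_blocks(diag, sub, sup):
    """Diagonal blocks of the inverse of a block tridiagonal matrix via
    left/right Schur-complement sweeps (O(N) block operations).
    diag[i] is block (i,i); sub[i] is block (i+1,i); sup[i] is block (i,i+1).
    G_ii = (L_i + R_i - A_ii)^{-1}, with L/R the one-sided Schur complements."""
    N = len(diag)
    L = [None] * N
    R = [None] * N
    L[0] = diag[0]
    for i in range(1, N):
        L[i] = diag[i] - sub[i-1] @ np.linalg.solve(L[i-1], sup[i-1])
    R[N-1] = diag[N-1]
    for i in range(N-2, -1, -1):
        R[i] = diag[i] - sup[i] @ np.linalg.solve(R[i+1], sub[i])
    return [np.linalg.inv(L[i] + R[i] - diag[i]) for i in range(N)]
```

When the blocks repeat (the nearly Toeplitz case), the L and R recursions converge, which is what lets the paper's method scale independently of the total number of blocks.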

  19. Fractal image coding by an approximation of the collage error

    Science.gov (United States)

    Salih, Ismail; Smith, Stanley H.

    1998-12-01

    In fractal image compression an image is coded as a set of contractive transformations, and is guaranteed to generate an approximation to the original image when iteratively applied to any initial image. In this paper we present a method for mapping similar regions within an image by an approximation of the collage error; that is, range blocks can be approximated by a linear combination of domain blocks.
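
The mapping of a range block to a linear combination of domain blocks can be posed as a small least-squares problem. The sketch below illustrates that general idea, not the authors' specific approximation of the collage error; the function name and interface are assumptions.

```python
import numpy as np

def best_collage_fit(range_block, domain_blocks):
    """Approximate a range block by a linear combination of domain blocks
    (least squares), returning the coefficients and the collage error."""
    r = range_block.ravel().astype(float)
    # One column per (flattened) domain block
    D = np.column_stack([d.ravel().astype(float) for d in domain_blocks])
    coeffs, *_ = np.linalg.lstsq(D, r, rcond=None)
    error = np.linalg.norm(D @ coeffs - r)        # residual = collage error
    return coeffs, error
```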

  20. Area, speed and power measurements of FPGA-based complex orthogonal space-time block code channel encoders

    Science.gov (United States)

    Passas, Georgios; Freear, Steven; Fawcett, Darren

    2010-01-01

Space-time coding (STC) is an important milestone in modern wireless communications. In this technique, multiple copies of the same signal are transmitted through different antennas (space) and in different symbol periods (time), to improve the robustness of a wireless system by increasing its diversity gain. STCs are channel coding algorithms that can be readily implemented on a field programmable gate array (FPGA) device. This work provides figures for the amount of FPGA hardware resources required, the speed at which the algorithms can operate, and the power consumption requirements of a space-time block code (STBC) encoder. Seven encoder very high-speed integrated circuit hardware description language (VHDL) designs have been coded, synthesised and tested. Each design realises a complex orthogonal space-time block code with a different transmission matrix. All VHDL designs are parameterisable in terms of sample precision. Precisions ranging from 4 bits to 32 bits have been synthesised. Alamouti's STBC encoder design [Alamouti, S.M. (1998), 'A Simple Transmit Diversity Technique for Wireless Communications', IEEE Journal on Selected Areas in Communications, 16:55-108.] proved to be the best trade-off, since it is on average 3.2 times smaller, 1.5 times faster and requires slightly less power than the next best trade-off in the comparison, which is a 3/4-rate full-diversity 3Tx-antenna STBC.
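
As a reference point for the encoders compared above, Alamouti's code can be written in a few lines. This sketch is a floating-point model of the encoding rule only (rows are symbol periods, columns are transmit antennas), not the fixed-point VHDL design.

```python
import numpy as np

def alamouti_encode(symbols):
    """Alamouti STBC encoding: each pair (s1, s2) is sent over two symbol
    periods on two antennas as [[s1, s2], [-conj(s2), conj(s1)]].
    Rate 1, and the 2x2 block is orthogonal: X^H X = (|s1|^2 + |s2|^2) I."""
    s = np.asarray(symbols, dtype=complex)
    assert len(s) % 2 == 0, "symbols are consumed in pairs"
    out = []
    for s1, s2 in s.reshape(-1, 2):
        out.append([s1, s2])                       # period 1: antennas 1, 2
        out.append([-np.conj(s2), np.conj(s1)])    # period 2: antennas 1, 2
    return np.array(out)
```

The column orthogonality of each 2×2 block is what allows the simple linear-combining receiver and is the property the complex orthogonal designs above generalise to more antennas.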

  1. Cooperative Orthogonal Space-Time-Frequency Block Codes over a MIMO-OFDM Frequency Selective Channel

    Directory of Open Access Journals (Sweden)

    M. Rezaei

    2016-03-01

Full Text Available In this paper, a cooperative algorithm to improve the orthogonal space-time-frequency block codes (OSTFBC) in frequency selective channels for 2*1, 2*2, 4*1 and 4*2 MIMO-OFDM systems is presented. The algorithm is formed of three nodes, a source node, a relay node and a destination node, and is implemented in two stages. During the first stage, the destination and relay antennas receive the symbols sent by the source antennas. The destination node and the relay node obtain the decision variables by employing a space-time-frequency decoding process on the received signals. During the second stage, the relay node transmits its decision variables to the destination node. Due to the increased diversity in the proposed algorithm, the decision variables at the destination node are improved, enhancing system performance. The bit error rate of the proposed algorithm at high SNR is estimated by considering BPSK modulation. The simulation results show that cooperative orthogonal space-time-frequency block coding improves system performance and reduces the BER in a frequency selective channel.

  2. MULTISTAGE BITRATE REDUCTION IN ABSOLUTE MOMENT BLOCK TRUNCATION CODING FOR IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    S. Vimala

    2012-05-01

Full Text Available Absolute Moment Block Truncation Coding (AMBTC) is one of the lossy image compression techniques. The computational complexity involved is low and the quality of the reconstructed images is appreciable. The normal AMBTC method requires 2 bits per pixel (bpp). In this paper, two novel ideas have been incorporated as part of the AMBTC method to improve the coding efficiency. Generally, the quality degrades with a reduction in the bit-rate, but in the proposed method the quality of the reconstructed image increases with the decrease in the bit-rate. The proposed method has been tested with standard images like Lena, Barbara, Bridge, Boats and Cameraman. The results obtained are better than those of the existing AMBTC method in terms of bit-rate and the quality of the reconstructed images.
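
The baseline 2 bpp AMBTC encoding of a single block (before the bit-rate reductions proposed in the paper) can be sketched as follows; the function name is illustrative.

```python
import numpy as np

def ambtc_block(block):
    """Standard AMBTC for one image block: keep a bitmap (pixel >= block
    mean) plus two quantization levels, and reconstruct each pixel from
    the high mean (pixels at or above the mean) or the low mean (pixels
    below it). For a 4x4 block this is 16 bitmap bits + two means = 2 bpp."""
    x = block.astype(float)
    mean = x.mean()
    bitmap = x >= mean
    q = bitmap.sum()
    recon = np.empty_like(x)
    if 0 < q < x.size:
        high = x[bitmap].mean()    # mean of pixels >= block mean
        low = x[~bitmap].mean()    # mean of pixels < block mean
    else:
        high = low = mean          # flat block: a single level suffices
    recon[bitmap] = high
    recon[~bitmap] = low
    return bitmap, recon
```

The two means preserve the block mean and first absolute central moment, which is where the method gets its name.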

  3. Linear-Time Non-Malleable Codes in the Bit-Wise Independent Tampering Model

    DEFF Research Database (Denmark)

    Cramer, Ronald; Damgård, Ivan Bjerre; Döttling, Nico

Non-malleable codes were introduced by Dziembowski et al. (ICS 2010) as coding schemes that protect a message against tampering attacks. Roughly speaking, a code is non-malleable if decoding an adversarially tampered encoding of a message m produces the original message m or a value m' (eventually abort) completely unrelated with m. It is known that non-malleability is possible only for restricted classes of tampering functions. Since their introduction, a long line of works has established feasibility results of non-malleable codes against different families of tampering functions. However... The construction matches the non-malleable codes of Agrawal et al. (TCC 2015) and of Cheraghchi and Guruswami (TCC 2014) and improves the previous result in the bit-wise tampering model: it builds the first non-malleable codes with linear-time complexity and optimal rate (i.e. rate 1 - o(1)).

  4. Elements of algebraic coding systems

    CERN Document Server

    Cardoso da Rocha, Jr, Valdemar

    2014-01-01

Elements of Algebraic Coding Systems is an introductory text to algebraic coding theory. In the first chapter, you'll gain inside knowledge of coding fundamentals, which is essential for a deeper understanding of state-of-the-art coding systems. This book is a quick reference for those who are unfamiliar with this topic, as well as for use with specific applications such as cryptography and communication. Linear error-correcting block codes through elementary principles span eleven chapters of the text. Cyclic codes, some finite field algebra, Goppa codes, algebraic decoding algorithms, and applications in public-key cryptography and secret-key cryptography are discussed, including problems and solutions at the end of each chapter. Three appendices cover the Gilbert bound and some related derivations, a derivation of the MacWilliams' identities based on the probability of undetected error, and two important tools for algebraic decoding, namely, the finite field Fourier transform and the Euclidean algorithm f...

  5. Low-Complexity Multiple Description Coding of Video Based on 3D Block Transforms

    Directory of Open Access Journals (Sweden)

    Andrey Norkin

    2007-02-01

Full Text Available The paper presents a multiple description (MD) video coder based on three-dimensional (3D) transforms. Two balanced descriptions are created from a video sequence. In the encoder, the video sequence is represented in the form of a coarse sequence approximation (shaper), included in both descriptions, and a residual sequence (details), which is split between the two descriptions. The shaper is obtained by block-wise pruned 3D-DCT. The residual sequence is coded by a 3D-DCT or a hybrid LOT+DCT 3D transform. The coding scheme is targeted at mobile devices. It has low computational complexity and improved robustness of transmission over unreliable networks. The coder is able to work at very low redundancies. The coding scheme is simple, yet it outperforms some MD coders based on motion-compensated prediction, especially in the low-redundancy region. The margin is up to 3 dB for reconstruction from one description.

  6. Cross-code gyrokinetic verification and benchmark on the linear collisionless dynamics of the geodesic acoustic mode

    Science.gov (United States)

    Biancalani, A.; Bottino, A.; Ehrlacher, C.; Grandgirard, V.; Merlo, G.; Novikau, I.; Qiu, Z.; Sonnendrücker, E.; Garbet, X.; Görler, T.; Leerink, S.; Palermo, F.; Zarzoso, D.

    2017-06-01

The linear properties of the geodesic acoustic modes (GAMs) in tokamaks are investigated by means of a comparison of analytical theory and gyrokinetic numerical simulations. The dependence on the value of the safety factor, the finite orbit width of the ions in relation to the radial mode width, magnetic-flux-surface shaping, and the electron/ion mass ratio are considered. Nonuniformities in the plasma profiles (such as density, temperature, and safety factor), electromagnetic effects, collisions, and the presence of minority species are neglected. Also, only linear simulations are considered, focusing on the local dynamics. We use three different gyrokinetic codes: the Lagrangian (particle-in-cell) code ORB5, the Eulerian code GENE, and the semi-Lagrangian code GYSELA. One of the main aims of this paper is to provide a detailed comparison of the numerical results and analytical theory in the regimes where this is possible. This helps to better understand the behavior of the linear GAM dynamics in these different regimes, the behavior of the codes (which is crucial in view of future work where more physics is present), and the regimes of validity of each specific analytical dispersion relation.

  7. Performance of Turbo Interference Cancellation Receivers in Space-Time Block Coded DS-CDMA Systems

    Directory of Open Access Journals (Sweden)

    Emmanuel Oluremi Bejide

    2008-07-01

Full Text Available We investigate the performance of turbo interference cancellation receivers in the space-time block coded (STBC) direct-sequence code division multiple access (DS-CDMA) system. Depending on the concatenation scheme used, we divide these receivers into the partitioned approach (PA) and the iterative approach (IA) receivers. The performance of both the PA and IA receivers is evaluated in Rayleigh fading channels for the uplink scenario. Numerical results show that the MMSE front-end turbo space-time iterative approach (IA) receiver effectively combats the mixture of MAI and intersymbol interference (ISI). To further investigate the achievable data rates of the turbo interference cancellation receivers, we introduce puncturing of the turbo code through the use of rate compatible punctured turbo codes (RCPTCs). Simulation results suggest that combining interference cancellation, turbo decoding, STBC, and RCPTC can significantly improve the achievable data rates for a synchronous DS-CDMA system on the uplink in Rayleigh flat fading channels.

  8. A convergence analysis for a sweeping preconditioner for block tridiagonal systems of linear equations

    KAUST Repository

    Bagci, Hakan; Pasciak, Joseph E.; Sirenko, Kostyantyn

    2014-01-01

    We study sweeping preconditioners for symmetric and positive definite block tridiagonal systems of linear equations. The algorithm provides an approximate inverse that can be used directly or in a preconditioned iterative scheme. These algorithms are based on replacing the Schur complements appearing in a block Gaussian elimination direct solve by hierarchical matrix approximations with reduced off-diagonal ranks. This involves developing low rank hierarchical approximations to inverses. We first provide a convergence analysis for the algorithm for reduced rank hierarchical inverse approximation. These results are then used to prove convergence and preconditioning estimates for the resulting sweeping preconditioner.
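
The block Gaussian elimination underlying the sweeping preconditioner can be sketched with dense blocks. In the paper the Schur complements are compressed as hierarchical matrices with reduced off-diagonal ranks; here they are kept dense, so this is the exact factorization rather than the approximate preconditioner, and the function names are illustrative.

```python
import numpy as np

def sweep_factor(diag, sub, sup):
    """Forward sweep of block Gaussian elimination on a block tridiagonal
    matrix: S[0] = A_00, S[i] = A_ii - A_{i,i-1} S[i-1]^{-1} A_{i-1,i}."""
    S = [diag[0]]
    for i in range(1, len(diag)):
        S.append(diag[i] - sub[i-1] @ np.linalg.solve(S[i-1], sup[i-1]))
    return S

def sweep_solve(S, sub, sup, b):
    """Apply the sweep: block forward substitution, then block back
    substitution, using the stored Schur complements S."""
    N = len(S)
    y = [None] * N
    y[0] = b[0]
    for i in range(1, N):
        y[i] = b[i] - sub[i-1] @ np.linalg.solve(S[i-1], y[i-1])
    x = [None] * N
    x[N-1] = np.linalg.solve(S[N-1], y[N-1])
    for i in range(N-2, -1, -1):
        x[i] = np.linalg.solve(S[i], y[i] - sup[i] @ x[i+1])
    return x
```

Replacing the dense S[i] by low-rank hierarchical approximations turns this exact solve into the approximate inverse used directly or inside a preconditioned iteration.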

  9. A convergence analysis for a sweeping preconditioner for block tridiagonal systems of linear equations

    KAUST Repository

    Bagci, Hakan

    2014-11-11

    We study sweeping preconditioners for symmetric and positive definite block tridiagonal systems of linear equations. The algorithm provides an approximate inverse that can be used directly or in a preconditioned iterative scheme. These algorithms are based on replacing the Schur complements appearing in a block Gaussian elimination direct solve by hierarchical matrix approximations with reduced off-diagonal ranks. This involves developing low rank hierarchical approximations to inverses. We first provide a convergence analysis for the algorithm for reduced rank hierarchical inverse approximation. These results are then used to prove convergence and preconditioning estimates for the resulting sweeping preconditioner.

  10. New approach to derive linear power/burnup history input for CANDU fuel codes

    International Nuclear Information System (INIS)

    Lac Tang, T.; Richards, M.; Parent, G.

    2003-01-01

    The fuel element linear power / burnup history is a required input for the ELESTRES code in order to simulate CANDU fuel behavior during normal operating conditions and also to provide input for the accident analysis codes ELOCA and SOURCE. The purpose of this paper is to present a new approach to derive 'true', or at least more realistic linear power / burnup histories. Such an approach can be used to recreate any typical bundle power history if only a single pair of instantaneous values of bundle power and burnup, together with the position in the channel, are known. The histories obtained could be useful to perform more realistic simulations for safety analyses for cases where the reference (overpower) history is not appropriate. (author)

  11. Generalized Block Failure

    DEFF Research Database (Denmark)

    Jönsson, Jeppe

    2015-01-01

Block tearing is considered in several codes as a pure block tension or a pure block shear failure mechanism. However, in many situations the load acts eccentrically and involves the transfer of a substantial moment in combination with the shear force and perhaps a normal force. A literature study shows that no readily available tests with a well-defined substantial eccentricity have been performed. This paper presents theoretical and experimental work leading towards generalized block failure capacity methods. Simple combination of normal force, shear force and moment stress distributions along yield lines around the block leads to simple interaction formulas similar to other interaction formulas in the codes.

  12. Cognitive radio networks with orthogonal space-time block coding and multiuser diversity

    KAUST Repository

    Yang, Liang; Qaraqe, Khalid A.; Serpedin, Erchin; Alouini, Mohamed-Slim; Liu, Weiping

    2013-01-01

This paper considers a multiuser spectrum sharing (SS) system operating in a Rayleigh fading environment in which every node is equipped with multiple antennas. The system employs orthogonal space-time block coding at the secondary users. Under such a framework, the average capacity and error performance under a peak interference constraint are first analyzed. For comparison purposes, an analysis of the transmit antenna selection scheme is also presented. Finally, some selected numerical results are presented to corroborate the proposed analysis. © 1997-2012 IEEE.

  13. Cognitive radio networks with orthogonal space-time block coding and multiuser diversity

    KAUST Repository

    Yang, Liang

    2013-04-01

This paper considers a multiuser spectrum sharing (SS) system operating in a Rayleigh fading environment in which every node is equipped with multiple antennas. The system employs orthogonal space-time block coding at the secondary users. Under such a framework, the average capacity and error performance under a peak interference constraint are first analyzed. For comparison purposes, an analysis of the transmit antenna selection scheme is also presented. Finally, some selected numerical results are presented to corroborate the proposed analysis. © 1997-2012 IEEE.

  14. ANALYSIS OF EXISTING AND PROSPECTIVE TECHNICAL CONTROL SYSTEMS OF NUMERIC CODES AUTOMATIC BLOCKING

    Directory of Open Access Journals (Sweden)

    A. M. Beznarytnyy

    2013-09-01

Full Text Available Purpose. To identify the characteristic features of the engineering control systems for numeric-code automatic blocking, to identify their advantages and disadvantages, and to analyze the possibility of their use for diagnosing the status of automatic blocking devices, setting targets for the development of new diagnostic systems. Methodology. In order to achieve these targets, the theoretical-analytical method and the method of functional analysis have been used. Findings. The analysis of existing and future facilities for the remote control and diagnostics of automatic blocking devices has shown that the existing diagnostic systems are not sufficiently informative, being designed primarily to monitor discrete parameters, which in turn does not allow a decision-support subsystem to be constructed on top of them. In developing new systems of technical diagnostics, it is proposed to use the principle of centralized distributed processing of diagnostic data and to include a decision-support subsystem in the diagnostic system; this will reduce the amount of work needed to maintain the blocking devices and reduce recovery time after the occurrence of damage. Originality. The currently existing engineering control facilities of automatic blocking cannot provide a full assessment of the state of block-section signalling and interlocking. Criteria for the development of new systems of technical diagnostics, with increased amounts of diagnostic information and its automatic analysis, are proposed. Practical value. These results of the analysis can be used in practice to select the technical control of automatic blocking devices, as well as for the further development of diagnostic systems for automatic blocking that allow a gradual transition from a planned preventive-maintenance service model to one based on the actual state of the monitored devices.

  15. Time-varying block codes for synchronisation errors: maximum a posteriori decoder and practical issues

    Directory of Open Access Journals (Sweden)

    Johann A. Briffa

    2014-06-01

    Full Text Available In this study, the authors consider time-varying block (TVB codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.
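
The expected drift distribution mentioned above can be approximated with a simple random-walk model, which is the kind of computation used to choose state-space limits for the decoder. The sketch below assumes per-symbol insertion (+1) and deletion (-1) probabilities and independence across symbols; the paper's exact channel model may differ, and the function name is illustrative.

```python
import numpy as np

def drift_distribution(n, p_ins, p_del):
    """Distribution of transmitter/receiver drift after n symbols under a
    toy model: each symbol causes an insertion (+1) with probability p_ins,
    a deletion (-1) with probability p_del, otherwise no drift change.
    Returns (offsets, probabilities); the distribution is the n-fold
    convolution of the single-step distribution."""
    step = np.array([p_del, 1.0 - p_ins - p_del, p_ins])  # drift -1, 0, +1
    dist = np.array([1.0])
    for _ in range(n):
        dist = np.convolve(dist, step)
    offsets = np.arange(-n, n + 1)
    return offsets, dist
```

Truncating this distribution at a chosen tail probability gives candidate state-space limits of the kind discussed in the abstract.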

  16. Computationally Efficient Amplitude Modulated Sinusoidal Audio Coding using Frequency-Domain Linear Prediction

    DEFF Research Database (Denmark)

    Christensen, M. G.; Jensen, Søren Holdt

    2006-01-01

A method for amplitude modulated sinusoidal audio coding is presented that has low complexity and low delay. This is based on a subband processing system, where, in each subband, the signal is modeled as an amplitude modulated sum of sinusoids. The envelopes are estimated using frequency-domain linear prediction and the prediction coefficients are quantized. As a proof of concept, we evaluate different configurations in a subjective listening test, and this shows that the proposed method offers significant improvements in sinusoidal coding. Furthermore, the properties of the frequency...
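
Frequency-domain linear prediction, as used here for envelope estimation, amounts to fitting linear-prediction coefficients to spectral (e.g. DCT) coefficients rather than to time samples; prediction across frequency models the temporal envelope. A single-band sketch without quantization follows; the function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def fdlp_coefficients(signal, order=8):
    """Sketch of frequency-domain linear prediction: transform the signal
    with a (non-normalized) DCT-II, then fit LP coefficients to the
    spectral coefficients via the Yule-Walker normal equations."""
    x = np.asarray(signal, float)
    n = len(x)
    t = np.arange(n)
    # DCT-II matrix: C[m, t] = cos(pi * m * (2t + 1) / (2n))
    C = np.cos(np.pi * np.outer(np.arange(n), 2 * t + 1) / (2 * n))
    c = C @ x                                   # frequency-domain coefficients
    # Autocorrelation of the spectral coefficients, lags 0..order
    r = np.array([c[:n - i] @ c[i:] for i in range(order + 1)])
    # Yule-Walker normal equations (Toeplitz system)
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])   # prediction coefficients
```

In a coder these coefficients would be quantized per subband, which is the step the listening test above evaluates.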

  17. Multicarrier Block-Spread CDMA for Broadband Cellular Downlink

    Directory of Open Access Journals (Sweden)

    Leus Geert

    2004-01-01

    Effective suppression of multiuser interference (MUI) and mitigation of frequency-selective fading effects within the complexity constraints of the mobile constitute major challenges for broadband cellular downlink transceiver design. Existing wideband direct-sequence (DS) code division multiple access (CDMA) transceivers suppress MUI statistically by restoring the orthogonality among users at the receiver. However, they call for receive diversity and multichannel equalization to mitigate the fading effects caused by deep channel fades. Relying on redundant block spreading and linear precoding, we design a so-called multicarrier block-spread (MCBS) CDMA transceiver that preserves the orthogonality among users and guarantees symbol detection, regardless of the underlying frequency-selective fading channels. These properties allow for deterministic MUI elimination through low-complexity block despreading and enable full diversity gains, irrespective of the system load. Different options to perform equalization and decoding, either jointly or separately, strike the trade-off between performance and complexity. To improve the performance over multi-input multi-output (MIMO) multipath fading channels, our MCBS-CDMA transceiver combines well with space-time block-coding (STBC) techniques, to exploit both multiantenna and multipath diversity gains, irrespective of the system load. Simulation results demonstrate the superior performance of MCBS-CDMA compared to competing alternatives.

  18. Linear dispersion codes in space-frequency domain for SCFDE

    DEFF Research Database (Denmark)

    Marchetti, Nicola; Cianca, Ernestina; Prasad, Ramjee

    2007-01-01

    This paper presents a general framework for applying the Linear Dispersion Codes (LDC) in the space and frequency domains to Single Carrier - Frequency Domain Equalization (SCFDE) systems. Space-Frequency (SF) LDC are more suitable than Space-Time (ST) LDC in high-mobility environments. However, the application of LDC in the space-frequency domain in SCFDE systems is not as straightforward as in Orthogonal Frequency Division Multiplexing (OFDM), since there is no direct access to the subcarriers at the transmitter. This paper describes how to build the space-time dispersion matrices to be used...

  19. Rate adaptive multilevel coded modulation with high coding gain in intensity modulation direct detection optical communication

    Science.gov (United States)

    Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao

    2018-02-01

    A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on fixed code length, together with a corresponding decoding scheme, is proposed. The RA-MLC scheme combines multilevel coded modulation with a binary linear block code at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves decoding accuracy by passing soft information between the different layers, which enhances the performance. Simulations were carried out in an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduced the number of decoders by 72% and realized 22 rate adaptations without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER = 1E-3.

  20. Typological analysis of social linear blocks: Spain 1950-1983. The case study of western Andalusia

    Directory of Open Access Journals (Sweden)

    A. Guajardo

    2017-04-01

    A main challenge that cities will need to face in the next few years is the regeneration of the social housing estates built during the 1950s, 1960s and 1970s. One of the causes of their obsolescence is the mismatch between their housing typologies and contemporary needs. The main target of this study is to take a step forward in the understanding of these typologies so as to be able to intervene in them efficiently. With this purpose, a study of 42 linear blocks built in western Andalusia, Spain, between 1950 and 1983 has been carried out. The analysis includes three stages: (1) classification of the houses into recognizable groups; (2) identification of the most used spatial configurations; and (3) definition of their programmatic and size characteristics. As a result, a characterization of linear blocks is proposed as a reference model for future regenerative interventions.

  1. A two-dimensional linear elasticity problem for anisotropic materials, solved with a parallelization code

    Directory of Open Access Journals (Sweden)

    Mihai-Victor PRICOP

    2010-09-01

    The present paper introduces a numerical approach to the static linear elasticity equations for anisotropic materials. The domain and boundary conditions are simple, to enable an easy implementation of the finite difference scheme. SOR and gradient methods are used to solve the resulting linear system. The simplicity of the geometry is also useful for MPI parallelization of the code.
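    The SOR iteration mentioned above can be sketched in a few lines. This is a minimal illustration on a generic tridiagonal stand-in system, not the paper's actual finite-difference elasticity matrices; the relaxation factor and tolerance are assumptions.

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation for A x = b (A square, nonzero diagonal)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # use already-updated entries x[:i] and old entries x_old[i+1:]
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Example: 1D Laplacian, a stand-in for a finite-difference operator
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = sor_solve(A, b)
print(np.allclose(A @ x, b, atol=1e-6))  # True
```

For 0 < omega < 2 the iteration converges on symmetric positive-definite systems such as this one; omega near the optimum can be markedly faster than Gauss-Seidel (omega = 1).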

  2. A three-dimensional magnetostatics computer code for insertion devices

    International Nuclear Information System (INIS)

    Chubar, O.; Elleaume, P.; Chavanne, J.

    1998-01-01

    RADIA is a three-dimensional magnetostatics computer code optimized for the design of undulators and wigglers. It solves boundary magnetostatics problems with magnetized and current-carrying volumes using the boundary integral approach. The magnetized volumes can be arbitrary polyhedrons with non-linear (iron) or linear anisotropic (permanent magnet) characteristics. The current-carrying elements can be straight or curved blocks with rectangular cross sections. Boundary conditions are simulated by the technique of mirroring. Analytical formulae used for the computation of the field produced by a magnetized volume of a polyhedron shape are detailed. The RADIA code is written in object-oriented C++ and interfaced to Mathematica (Mathematica is a registered trademark of Wolfram Research, Inc.). The code outperforms currently available finite-element packages with respect to the CPU time of the solver and accuracy of the field integral estimations. An application of the code to the case of a wedge-pole undulator is presented

  3. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric quantum codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes.
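    The CSS construction referenced here requires a classical linear code that contains its dual. A minimal GF(2) check of that property, using the [7,4] Hamming code as an illustrative stand-in (not the toric codes of the abstract):

```python
import numpy as np

# Generator matrix of the [7,4] Hamming code over GF(2), systematic form
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])

# Parity-check matrix: its rows form a basis of the dual code
H = np.array([[0, 1, 1, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

def encode(msg):
    """Systematic encoding: the message occupies the first 4 bits."""
    return msg @ G % 2

# Every codeword is orthogonal to every parity check
print((G @ H.T % 2 == 0).all())  # True

# CSS applies when the code contains its dual: each row of H is a codeword
dual_contained = all((encode(h[:4]) == h).all() for h in H)
print(dual_contained)  # True
```

Here the dual is the [7,3] simplex code, which indeed lies inside the Hamming code, so the CSS recipe yields a valid quantum stabilizer code from this pair.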

  4. Construction of Short-Length High-Rates LDPC Codes Using Difference Families

    Directory of Open Access Journals (Sweden)

    Deny Hamdani

    2010-10-01

    A low-density parity-check (LDPC) code is a linear block error-correcting code defined by a sparse parity-check matrix. It is decoded using the message-passing algorithm and, in many cases, is capable of outperforming turbo codes. This paper presents a class of LDPC codes showing good performance with low encoding complexity. The code is constructed using difference families from combinatorial design. The resulting code, which is designed to have short code length and high code rate, can be encoded with low complexity due to its quasi-cyclic structure, and performs well when it is iteratively decoded with the sum-product algorithm. These properties of the LDPC code are quite suitable for applications in future wireless local area networks.
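    The circulant construction from a difference set can be sketched as follows. The parameters below (a (7,3,1) difference set) are a hypothetical toy choice for illustration; the paper's actual difference families and code sizes differ.

```python
import numpy as np

def circulant_from_block(block, v):
    """v x v circulant whose first column has ones at the positions in `block`."""
    C = np.zeros((v, v), dtype=int)
    for s in block:
        for j in range(v):
            C[(s + j) % v, j] = 1
    return C

# (7,3,1) difference set in Z_7: every nonzero difference occurs exactly once
v, block = 7, [0, 1, 3]
H = circulant_from_block(block, v)

# Regular row and column weight 3, as expected of a structured LDPC matrix
print(H.sum(axis=0).tolist(), H.sum(axis=1).tolist())

# lambda = 1 means any two columns share at most one row: no 4-cycles
overlap = H.T @ H
print((overlap - np.diag(np.diag(overlap))).max() <= 1)  # True
```

The absence of 4-cycles (girth at least 6) is what makes such matrices behave well under sum-product decoding, and the circulant structure is what keeps encoding complexity low.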

  5. Mixture block coding with progressive transmission in packet video. Appendix 1: Item 2. M.S. Thesis

    Science.gov (United States)

    Chen, Yun-Chung

    1989-01-01

    Video transmission will become an important part of future multimedia communication because of dramatically increasing user demand for video and the rapid evolution of coding algorithms and VLSI technology. Video transmission will be part of the broadband integrated services digital network (B-ISDN). Asynchronous transfer mode (ATM) is a viable candidate for implementation of B-ISDN due to its inherent flexibility, service independency, and high performance. According to the characteristics of ATM, the information has to be coded into discrete cells which travel independently in the packet switching network. A practical realization of an ATM video codec called Mixture Block Coding with Progressive Transmission (MBCPT) is presented. This variable bit rate coding algorithm shows how a constant quality performance can be obtained according to user demand. Interactions between codec and network are emphasized, including packetization, service synchronization, flow control, and error recovery. Finally, some simulation results based on MBCPT coding with error recovery are presented.

  6. Reliability of Broadcast Communications Under Sparse Random Linear Network Coding

    OpenAIRE

    Brown, Suzie; Johnson, Oliver; Tassi, Andrea

    2018-01-01

    Ultra-reliable Point-to-Multipoint (PtM) communications are expected to become pivotal in networks offering future dependable services for smart cities. In this regard, sparse Random Linear Network Coding (RLNC) techniques have been widely employed to provide an efficient way to improve the reliability of broadcast and multicast data streams. This paper addresses the pressing concern of providing a tight approximation to the probability of a user recovering a data stream protected by this kin...

  7. Effects of Grafting Density on Block Polymer Self-Assembly: From Linear to Bottlebrush.

    Science.gov (United States)

    Lin, Tzu-Pin; Chang, Alice B; Luo, Shao-Xiong; Chen, Hsiang-Yun; Lee, Byeongdu; Grubbs, Robert H

    2017-11-28

    Grafting density is an important structural parameter that exerts significant influences over the physical properties of architecturally complex polymers. In this report, the physical consequences of varying the grafting density (z) were studied in the context of block polymer self-assembly. Well-defined block polymers spanning the linear, comb, and bottlebrush regimes (0 ≤ z ≤ 1) were prepared via grafting-through ring-opening-metathesis polymerization. ω-Norbornenyl poly(d,l-lactide) and polystyrene macromonomers were copolymerized with discrete comonomers in different feed ratios, enabling precise control over both the grafting density and molecular weight. Small-angle X-ray scattering experiments demonstrate that these graft block polymers self-assemble into long-range-ordered lamellar structures. For 17 series of block polymers with variable z, the scaling of the lamellar period with the total backbone degree of polymerization (d* ∼ N_bb^α) was studied. The scaling exponent α monotonically decreases with decreasing z and exhibits an apparent transition at z ≈ 0.2, suggesting significant changes in the chain conformations. Comparison of two block polymer systems, one that is strongly segregated for all z (System I) and one that experiences weak segregation at low z (System II), indicates that the observed trends are primarily caused by the polymer architectures, not segregation effects. A model is proposed in which the characteristic ratio (C_∞), a proxy for the backbone stiffness, scales with N_bb as a function of the grafting density: C_∞ ∼ N_bb^f(z). The scaling behavior disclosed herein provides valuable insights into conformational changes with grafting density, thus introducing opportunities for block polymer and material design.

  8. Generalized concatenated quantum codes

    International Nuclear Information System (INIS)

    Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei

    2009-01-01

    We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematic way of constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.

  9. Optimization of an Electromagnetics Code with Multicore Wavefront Diamond Blocking and Multi-dimensional Intra-Tile Parallelization

    KAUST Repository

    Malas, Tareq M.

    2016-07-21

    Understanding and optimizing the properties of solar cells is becoming a key issue in the search for alternatives to nuclear and fossil energy sources. A theoretical analysis via numerical simulations involves solving Maxwell's equations in discretized form and typically requires substantial computing effort. We start from a hybrid-parallel (MPI+OpenMP) production code that implements the Time Harmonic Inverse Iteration Method (THIIM) with Finite-Difference Frequency Domain (FDFD) discretization. Although this algorithm has the characteristics of a strongly bandwidth-bound stencil update scheme, it is significantly different from the popular stencil types that have been exhaustively studied in the high performance computing literature to date. We apply a recently developed stencil optimization technique, multicore wavefront diamond tiling with multi-dimensional cache block sharing, and describe in detail the peculiarities that need to be considered due to the special stencil structure. Concurrency in updating the components of the electric and magnetic fields provides an additional level of parallelism. The dependence of the cache size requirement of the optimized code on the blocking parameters is modeled accurately, and an auto-tuner searches for optimal configurations in the remaining parameter space. We were able to completely decouple the execution from the memory bandwidth bottleneck, accelerating the implementation by a factor of three to four compared to an optimal implementation with pure spatial blocking on an 18-core Intel Haswell CPU.

  10. Throughput vs. Delay in Lossy Wireless Mesh Networks with Random Linear Network Coding

    DEFF Research Database (Denmark)

    Hundebøll, Martin; Pahlevani, Peyman; Roetter, Daniel Enrique Lucani

    2014-01-01

    This work proposes a new protocol applying on-the-fly random linear network coding in wireless mesh networks. The protocol provides increased reliability, low delay, and high throughput to the upper layers, while being oblivious to their specific requirements. These seemingly conflicting goals ...

  11. Locally orderless registration code

    DEFF Research Database (Denmark)

    2012-01-01

    This is code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks to be installed and is provided for 64 bit on Mac, Linux and Windows.

  12. LINEAR2007, Linear-Linear Interpolation of ENDF Format Cross-Sections

    International Nuclear Information System (INIS)

    2007-01-01

    1 - Description of program or function: LINEAR converts evaluated cross sections in the ENDF/B format into a tabular form that is subject to linear-linear interpolation in energy and cross section. The code also thins tables of cross sections already in that form. Codes used subsequently thus need to consider only linear-linear data. IAEA1311/15: This version includes the updates up to January 30, 2007. Changes in ENDF/B-VII format and procedures, as well as the evaluations themselves, make it impossible for versions of the ENDF/B pre-processing codes earlier than PREPRO 2007 (2007 version) to accurately process current ENDF/B-VII evaluations. The present code can handle all existing ENDF/B-VI evaluations through release 8, which will be the last release of ENDF/B-VI. Modifications from previous versions: - Linear VERS. 2007-1 (JAN. 2007): checked against all ENDF/B-VII; increased page size from 60,000 to 600,000 points. 2 - Method of solution: Each section of data is considered separately. Each section of File 3, 23, and 27 data consists of a table of cross section versus energy with any of five interpolation laws. LINEAR will replace each section with a new table of energy versus cross section data in which the interpolation law is always linear in energy and cross section. The histogram (constant cross section between two energies) interpolation law is converted to linear-linear by substituting two points for each initial point. Linear-linear data are not altered. For the log-linear, linear-log and log-log laws, the cross section data are converted to linear by an interval halving algorithm. Each interval is divided in half until the value at the middle of the interval can be approximated by linear-linear interpolation to within a given accuracy. The LINEAR program uses a multipoint fractional error thinning algorithm to minimize the size of each cross section table.
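    The interval-halving idea described above can be sketched as follows. The tolerance, test function and stopping rule are illustrative assumptions, not LINEAR's actual thinning logic.

```python
def linearize(f, x0, x1, rel_tol=1e-3):
    """Tabulate f on [x0, x1] so that linear-linear interpolation of the
    table reproduces f to within rel_tol, by recursive interval halving."""
    pts = [(x0, f(x0))]

    def refine(a, fa, b, fb):
        m = 0.5 * (a + b)
        fm, lin = f(m), 0.5 * (fa + fb)
        if abs(lin - fm) > rel_tol * abs(fm):
            refine(a, fa, m, fm)   # midpoint not linear enough: split again
            refine(m, fm, b, fb)
        else:
            pts.append((b, fb))    # interval accepted; record right endpoint

    refine(x0, f(x0), x1, f(x1))
    return pts

# Example: a power-law cross section (log-log shape), sigma(E) = E**-0.5
table = linearize(lambda e: e ** -0.5, 1.0, 100.0)
print(len(table))
```

For smooth functions the midpoint is where the linear-interpolation error peaks, so checking the midpoint against the chord is a reasonable acceptance criterion for each halved interval.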

  13. FEAST: a two-dimensional non-linear finite element code for calculating stresses

    International Nuclear Information System (INIS)

    Tayal, M.

    1986-06-01

    The computer code FEAST calculates stresses, strains, and displacements. The code is two-dimensional. That is, either plane or axisymmetric calculations can be done. The code models elastic, plastic, creep, and thermal strains and stresses. Cracking can also be simulated. The finite element method is used to solve equations describing the following fundamental laws of mechanics: equilibrium; compatibility; constitutive relations; yield criterion; and flow rule. FEAST combines several unique features that permit large time-steps in even severely non-linear situations. The features include a special formulation for permitting many finite elements to simultaneously cross the boundary from elastic to plastic behaviour; accommodation of large drops in yield-strength due to changes in local temperature; and a three-step predictor-corrector method for plastic analyses. These features reduce computing costs. Comparisons against twenty analytical solutions and against experimental measurements show that predictions of FEAST are generally accurate to ± 5%

  14. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (energy release rate G, release rate in thermo-elasto-plasticity, 3D local energy release rate, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  15. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; The MAP and Related Decoding Algorithms

    Science.gov (United States)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability. Therefore, MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes and multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
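    For a toy-sized code, the bitwise MAP rule can be computed exactly by summing posteriors over all codewords. The sketch below uses the [7,4] Hamming code on a BPSK/AWGN channel as an illustrative assumption; a practical decoder would of course use the trellis rather than brute-force enumeration.

```python
import numpy as np
from itertools import product

# (7,4) Hamming generator matrix over GF(2), used as a small example code
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])

codebook = np.array([(np.array(m) @ G) % 2 for m in product([0, 1], repeat=4)])

def map_bit_decode(r, sigma2=0.5):
    """Exact bitwise MAP: P(c_i = 1 | r) by summing over all 16 codewords.
    r is the received BPSK sequence (bit 0 -> +1, bit 1 -> -1) plus noise."""
    s = 1 - 2 * codebook                       # BPSK mapping per codeword
    logp = -((r - s) ** 2).sum(axis=1) / (2 * sigma2)
    p = np.exp(logp - logp.max())              # unnormalised posteriors
    post1 = (p[:, None] * codebook).sum(axis=0) / p.sum()
    return (post1 > 0.5).astype(int), post1    # hard decisions + soft output

tx = codebook[5]
rng = np.random.default_rng(0)
r = (1 - 2 * tx) + 0.3 * rng.standard_normal(7)
hard, post = map_bit_decode(r)
print((hard == tx).all())  # True
```

Unlike MLD, the per-bit posteriors `post` are exactly the soft information that concatenated and iterative schemes consume; thresholding them minimizes the bit error probability rather than the word error probability.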

  16. Bounded distance decoding of linear error-correcting codes with Gröbner bases

    NARCIS (Netherlands)

    Bulygin, S.; Pellikaan, G.R.

    2009-01-01

    The problem of bounded distance decoding of arbitrary linear codes using Gröbner bases is addressed. A new method is proposed, which is based on reducing an initial decoding problem to solving a certain system of polynomial equations over a finite field. The peculiarity of this system is that, when

  17. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).

    Science.gov (United States)

    Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling

    2018-04-17

    Aimed at the low energy consumption of the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so that they only consume a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity when compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for the Green IoT.
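    The encoder/decoder split can be sketched as follows. The random measurement matrix and the pseudo-inverse decoder are illustrative stand-ins (the paper learns its projection by the MMSE criterion); the point is that both sides reduce to a single matrix multiply per block.

```python
import numpy as np

rng = np.random.default_rng(1)
B, m = 8, 32                       # 8x8 pixel blocks, 32 measurements (rate 0.5)
n = B * B

Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # per-block measurement matrix
P = np.linalg.pinv(Phi)            # linear decoder (stand-in for learned matrix)

y = Phi @ rng.standard_normal(n)   # encoder: measure one flattened image block
x_hat = P @ y                      # decoder: one matmul -> real-time operation

# The linear decoder is consistent with the measurements it was given
print(np.allclose(Phi @ x_hat, y))  # True
```

Note that with m < n a fixed linear decoder cannot recover arbitrary blocks exactly; it returns the minimum-norm solution consistent with the measurements. Good reconstruction quality relies on block sparsity and on learning the projection, which is exactly what the paper's MMSE training supplies.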

  18. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT)

    Directory of Open Access Journals (Sweden)

    Ran Li

    2018-04-01

    Aimed at the low energy consumption of the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so that they only consume a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity when compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for the Green IoT.

  19. Distributed Space-Time Block Coded Transmission with Imperfect Channel Estimation: Achievable Rate and Power Allocation

    Directory of Open Access Journals (Sweden)

    Sonia Aïssa

    2008-05-01

    This paper investigates the effects of channel estimation error at the receiver on the achievable rate of distributed space-time block coded transmission. We consider that multiple transmitters cooperate to send the signal to the receiver and derive lower and upper bounds on the mutual information of distributed space-time block codes (D-STBCs) when the channel gains and channel estimation error variances pertaining to different transmitter-receiver links are unequal. Then, assessing the gap between these two bounds, we provide a limiting value that upper bounds the latter at any input transmit powers, and also show that the gap is minimum if the receiver can estimate the channels of different transmitters with the same accuracy. We further investigate positioning the receiving node such that the mutual information bounds of D-STBCs and their robustness to the variations of the subchannel gains are maximum, as long as the summation of these gains is constant. Furthermore, we derive the optimum power transmission strategy to achieve the outage capacity lower bound of D-STBCs under arbitrary numbers of transmit and receive antennas, and provide closed-form expressions for this capacity metric. Numerical simulations are conducted to corroborate our analysis and quantify the effects of imperfect channel estimation.
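    As background, the classical 2x1 Alamouti space-time block code that D-STBC distributes across cooperating transmitters can be verified in a few lines. The sketch assumes a noise-free channel and perfect channel knowledge; the paper's subject is precisely how performance degrades when the channel estimates are imperfect.

```python
import numpy as np

rng = np.random.default_rng(2)
h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)  # 2 tx paths
s = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)                             # 2 QPSK symbols

# Alamouti transmission over two symbol periods:
# period 1 sends (s0, s1); period 2 sends (-conj(s1), conj(s0))
r1 = h[0] * s[0] + h[1] * s[1]
r2 = -h[0] * np.conj(s[1]) + h[1] * np.conj(s[0])

# Linear combining at the receiver recovers both symbols exactly
g = np.abs(h[0]) ** 2 + np.abs(h[1]) ** 2
s0_hat = (np.conj(h[0]) * r1 + h[1] * np.conj(r2)) / g
s1_hat = (np.conj(h[1]) * r1 - h[0] * np.conj(r2)) / g
print(np.allclose([s0_hat, s1_hat], s))  # True
```

The orthogonality of the code is what makes the receiver linear: each symbol estimate is scaled by the total channel gain g, yielding full transmit diversity. With estimated (rather than exact) h, the combining above leaves residual interference, which is the effect the paper bounds.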

  20. Monte Carlo simulation of medical linear accelerator using PRIMO code

    International Nuclear Information System (INIS)

    Omer, Mohamed Osman Mohamed Elhasan

    2014-12-01

    The use of Monte Carlo simulation has become very important in the medical field, especially for calculations in radiotherapy. Various Monte Carlo codes have been developed to simulate the interactions of particles and photons with matter. One of these codes is PRIMO, which performs simulation of radiation transport from the primary electron source of a linac to estimate the absorbed dose in a water phantom or computerized tomography (CT) volume. PRIMO is based on the PENELOPE Monte Carlo code. Measurements of the 6 MV photon beam PDD and profile were made for the Elekta Precise linear accelerator at the Radiation and Isotopes Centre Khartoum using a computerized Blue water phantom and a CC13 ionization chamber. Accept software was used to control the phantom to measure and verify the dose distribution. An Elekta linac from the list of available linacs in PRIMO was tuned to model the Elekta Precise linear accelerator. Beam parameters of 6.0 MeV initial electron energy, 0.20 MeV FWHM, and 0.20 cm focal spot FWHM were used, and an error of 4% between calculated and measured curves was found. The buildup depth Z_max was 1.40 cm, and homogeneous profiles in crossline and inline were acquired. A number of studies were done to verify the model's usability; one of them examined the effect of the number of histories on the accuracy of the simulation and the resulting profile for the same beam parameters. The effect was noticeable, and inaccuracies in the profile were reduced by increasing the number of histories. Another study examined the effect of side-step errors on the calculated dose, which was compared with the measured dose for the same setting. It was in the range of 2% for a 5 cm shift, but higher in the calculated dose because of the small difference between the tuned model and the measured dose curves. Future developments include simulating asymmetrical fields, calculating the dose distribution in a computerized tomography (CT) volume, and studying the effect of beam modifiers on the beam profile for both electron and photon beams. (Author)

  1. Structured Low-Density Parity-Check Codes with Bandwidth Efficient Modulation

    Science.gov (United States)

    Cheng, Michael K.; Divsalar, Dariush; Duy, Stephanie

    2009-01-01

    In this work, we study the performance of structured Low-Density Parity-Check (LDPC) codes together with bandwidth-efficient modulations. We consider protograph-based LDPC codes that facilitate high-speed hardware implementations and have minimum distances that grow linearly with block sizes. We cover various higher-order modulations such as 8-PSK, 16-APSK, and 16-QAM. During demodulation, a demapper transforms the received in-phase and quadrature samples into reliability information that feeds the binary LDPC decoder. We will compare various low-complexity demappers and provide simulation results for assorted coded-modulation combinations on the additive white Gaussian noise and independent Rayleigh fading channels.
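    A common low-complexity demapper is the max-log approximation, which converts each received symbol into per-bit log-likelihood ratios for the binary LDPC decoder. The sketch below uses an assumed Gray mapping for 8-PSK, not necessarily the mapping of the paper.

```python
import numpy as np

# Gray-mapped 8-PSK: consecutive constellation points differ in one bit
# (this particular label order is an illustrative assumption)
gray = [0, 1, 3, 2, 6, 7, 5, 4]
const = {g: np.exp(2j * np.pi * k / 8) for k, g in enumerate(gray)}

def maxlog_llrs(r, sigma2):
    """Per-bit LLRs for one received 8-PSK symbol (max-log approximation)."""
    llrs = []
    for bit in range(3):
        # squared distance to the nearest point with this bit = 0, resp. 1
        d0 = min(abs(r - p) ** 2 for lbl, p in const.items() if not (lbl >> bit) & 1)
        d1 = min(abs(r - p) ** 2 for lbl, p in const.items() if (lbl >> bit) & 1)
        llrs.append((d1 - d0) / (2 * sigma2))  # > 0 favours bit = 0
    return llrs

r = const[0] + 0.05 * (1 + 1j)     # label 0 transmitted, slight noise
llrs = maxlog_llrs(r, sigma2=0.1)
print(all(l > 0 for l in llrs))    # all three bits favour 0 -> True
```

Replacing the exact log-sum of exponentials by a minimum over distances is what makes this demapper cheap; the small loss it incurs relative to the exact MAP demapper is one of the trade-offs such comparisons quantify.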

  2. Modelling a multi-crystal detector block for PET

    International Nuclear Information System (INIS)

    Carroll, L.R.; Nutt, R.; Casey, M.

    1985-01-01

    A simple mathematical model describes the performance of a modular detector ''block'' which is a key component in an advanced, high-resolution PET scanner. Each block contains 32 small bismuth germanate (BGO) crystals coupled to four photomultiplier tubes (PMTs) through a coded light pipe. At each PMT cathode, the charge released for 511 keV coincidence events may be characterized as Poisson random variables in which the variance grows as the mean of the observed current. Given the light from BGO, one must: arrange the best coding, i.e. the distribution of light to the four PMTs; specify an optimum decoding scheme for choosing the correct crystal location from a noisy ensemble of PMT currents; and estimate the average probability of error. The statistical fluctuation or ''noise'' becomes decoupled from the ''signal'' and can be regarded as independent, additive components with zero mean and unit variance. Moreover, the envelope of the transformed noise distribution approximates very closely a normal (Gaussian) distribution with variance = 1. Specifying the coding and decoding strategy becomes a problem of signalling through a channel corrupted by additive, white, Gaussian noise; a classic problem long since solved within the context of Communication Engineering using geometry, i.e. distance, volume, angle, inner product, etc., in a linear space of higher dimension.
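    The decoding step can be sketched as a minimum-distance decision in the 4-dimensional PMT-current space. The light-sharing signatures below are random placeholders, not a real block's light-pipe coding.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical signatures: the fraction of light each of the 4 PMTs
# receives for each of the 32 crystals (each row sums to 1)
S = rng.random((32, 4))
S /= S.sum(axis=1, keepdims=True)

def identify_crystal(currents):
    """Minimum-distance decision: the nearest signature wins."""
    return int(np.argmin(((S - currents) ** 2).sum(axis=1)))

obs = S[17] + 0.005 * rng.standard_normal(4)  # crystal 17 fires, plus noise
print(identify_crystal(obs))
```

In the Gaussian-channel view of the abstract, the error probability is governed by the pairwise distances between signatures, so a good light-pipe code spreads the 32 signatures as far apart as possible in the PMT space.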

  3. A Linear Algebra Framework for Static High Performance Fortran Code Distribution

    Directory of Open Access Journals (Sweden)

    Corinne Ancourt

    1997-01-01

    High Performance Fortran (HPF) was developed to support data parallel programming for single-instruction multiple-data (SIMD) and multiple-instruction multiple-data (MIMD) machines with distributed memory. The programmer is provided a familiar uniform logical address space and specifies the data distribution by directives. The compiler then exploits these directives to allocate arrays in the local memories, to assign computations to elementary processors, and to migrate data between processors when required. We show here that linear algebra is a powerful framework to encode HPF directives and to synthesize distributed code with space-efficient array allocation, tight loop bounds, and vectorized communications for INDEPENDENT loops. The generated code includes traditional optimizations such as guard elimination, message vectorization and aggregation, and overlap analysis. The systematic use of an affine framework makes it possible to prove the compilation scheme correct.

  4. Implementation and Performance Evaluation of Distributed Cloud Storage Solutions using Random Linear Network Coding

    DEFF Research Database (Denmark)

    Fitzek, Frank; Toth, Tamas; Szabados, Áron

    2014-01-01

    This paper advocates the use of random linear network coding for storage in distributed clouds in order to reduce storage and traffic costs in dynamic settings, i.e. when adding and removing numerous storage devices/clouds on the fly and when the number of reachable clouds is limited. We introduce various network coding approaches that trade off reliability, storage and traffic costs, and system complexity, relying on probabilistic recoding for cloud regeneration. We compare these approaches with others based on data replication and Reed-Solomon codes. A simulator has been developed to carry out a thorough performance evaluation of the various approaches under different system settings, e.g. finite fields, and network/storage conditions, e.g. storage space used per cloud, limited network use, and limited recoding capabilities. In contrast to standard coding approaches, our …
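The encode/decode cycle of random linear network coding can be sketched over GF(2), where mixing is plain XOR (the paper's evaluation also covers larger finite fields). This toy version is an illustration, not the simulator described above:

```python
import random

def rlnc_encode(packets, num_coded, rng=None):
    """Random linear network coding over GF(2): each coded packet is the
    XOR of a random nonzero subset of the k equal-length source packets,
    tagged with its coefficient vector."""
    rng = rng or random.Random(1)
    k = len(packets)
    coded = []
    for _ in range(num_coded):
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        if not any(coeffs):
            coeffs[rng.randrange(k)] = 1  # avoid the useless all-zero packet
        payload = bytes(len(packets[0]))
        for c, p in zip(coeffs, packets):
            if c:
                payload = bytes(a ^ b for a, b in zip(payload, p))
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, k):
    """Gauss-Jordan elimination over GF(2): returns the k source packets
    once the received coefficient vectors reach full rank, else None."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    used, pivot_of = set(), {}
    for col in range(k):
        pivot = next((i for i, (c, _) in enumerate(rows)
                      if i not in used and c[col] == 1), None)
        if pivot is None:
            return None  # rank deficient: more coded packets needed
        used.add(pivot)
        pivot_of[col] = pivot
        pc, pp = rows[pivot]
        for i, (c, p) in enumerate(rows):
            if i != pivot and c[col] == 1:
                for j in range(k):
                    c[j] ^= pc[j]
                for j in range(len(p)):
                    p[j] ^= pp[j]
    return [bytes(rows[pivot_of[col]][1]) for col in range(k)]
```

Recoding, which the paper relies on for cloud regeneration, amounts to taking further random GF(2) combinations of already-coded packets without decoding first.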

  5. Block diagonalization for algebra's associated with block codes

    NARCIS (Netherlands)

    D. Gijswijt (Dion)

    2009-01-01

    For a matrix *-algebra B, consider the matrix *-algebra A consisting of the symmetric tensors in the n-fold tensor product of B. Examples of such algebras in coding theory include the Bose-Mesner algebra and Terwilliger algebra of the (non)binary Hamming cube, and algebras arising in …

  6. Further development of the V-code for recirculating linear accelerator simulations

    Energy Technology Data Exchange (ETDEWEB)

    Franke, Sylvain; Ackermann, Wolfgang; Weiland, Thomas [Institut fuer Theorie Elektromagnetischer Felder, Technische Universitaet Darmstadt (Germany); Eichhorn, Ralf; Hug, Florian; Kleinmann, Michaela; Platz, Markus [Institut fuer Kernphysik, Technische Universitaet Darmstadt (Germany)

    2011-07-01

    The Superconducting Darmstaedter LINear Accelerator (S-DALINAC), installed at the Institute of Nuclear Physics (IKP) at TU Darmstadt, is designed as a recirculating linear accelerator. The beam is first accelerated up to 10 MeV in the injector beam line. It is then deflected by 180 degrees into the main linac. The linac section, with eight superconducting cavities, is passed up to three times, providing a maximal energy gain of 40 MeV on each passage. Due to this recirculating layout it is complicated to find an accurate setup for the various beam line elements. Fast online beam dynamics simulations can assist the operators because they provide a more detailed insight into the actual machine status. In this contribution, further developments of the moment-based simulation tool V-code, which enable the simulation of recirculating machines, are presented together with simulation results.

  7. Simulating the performance of a distance-3 surface code in a linear ion trap

    Science.gov (United States)

    Trout, Colin J.; Li, Muyuan; Gutiérrez, Mauricio; Wu, Yukai; Wang, Sheng-Tao; Duan, Luming; Brown, Kenneth R.

    2018-04-01

    We explore the feasibility of implementing a small surface code with 9 data qubits and 8 ancilla qubits, commonly referred to as surface-17, using a linear chain of 171Yb+ ions. Two-qubit gates can be performed between any two ions in the chain with gate time increasing linearly with ion distance. Measurement of the ion state by fluorescence requires that the ancilla qubits be physically separated from the data qubits to avoid errors on the data due to scattered photons. We minimize the time required to measure one round of stabilizers by optimizing the mapping of the two-dimensional surface code to the linear chain of ions. We develop a physically motivated Pauli error model that allows for fast simulation and captures the key sources of noise in an ion trap quantum computer including gate imperfections and ion heating. Our simulations show a consistent requirement of a two-qubit gate fidelity of ≥99.9% for the logical memory to have a better fidelity than physical two-qubit operations. Finally, we perform an analysis of the error subsets from the importance sampling method used to bound the logical error rates to gain insight into which error sources are particularly detrimental to error correction.

  8. On the Combination of Multi-Layer Source Coding and Network Coding for Wireless Networks

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Fitzek, Frank; Pedersen, Morten Videbæk

    2013-01-01

    … quality is developed. A linear coding structure designed to gracefully encapsulate layered source coding provides both low complexity of the utilised linear coding and robust erasure correction in the form of fountain coding capabilities. The proposed linear coding structure advocates efficient …

  9. Computer codes for three dimensional mass transport with non-linear sorption

    International Nuclear Information System (INIS)

    Noy, D.J.

    1985-03-01

    The report describes the mathematical background and data input to finite element programs for three dimensional mass transport in a porous medium. The transport equations are developed and sorption processes are included in a general way so that non-linear equilibrium relations can be introduced. The programs are described and a guide given to the construction of the required input data sets. Concluding remarks indicate that the calculations require substantial computer resources and suggest that comprehensive preliminary analysis with lower dimensional codes would be important in the assessment of field data. (author)

  10. Variable Rate, Adaptive Transform Tree Coding Of Images

    Science.gov (United States)

    Pearlman, William A.

    1988-10-01

    A tree code, asymptotically optimal for stationary Gaussian sources and squared error distortion [2], is used to encode transforms of image sub-blocks. The variance spectrum of each sub-block is estimated and specified uniquely by a set of one-dimensional auto-regressive parameters. The expected distortion is set to a constant for each block and the rate is allowed to vary to meet the given level of distortion. Since the spectrum and rate are different for every block, the code tree differs for every block. Coding simulations for target block distortion of 15 and average block rate of 0.99 bits per pel (bpp) show that very good results can be obtained at high search intensities at the expense of high computational complexity. The results at the higher search intensities outperform a parallel simulation with quantization replacing tree coding. Comparative coding simulations also show that the reproduced image with variable block rate and average rate of 0.99 bpp has 2.5 dB less distortion than a similarly reproduced image with a constant block rate equal to 1.0 bpp.

  11. A ligand exchange strategy for one-pot sequential synthesis of (hyperbranched polyethylene)-b-(linear polyketone) block polymers.

    Science.gov (United States)

    Zhang, Zhichao; Ye, Zhibin

    2012-08-18

    Upon the addition of an equimolar amount of 2,2'-bipyridine, a cationic Pd-diimine complex capable of facilitating "living" ethylene polymerization is switched to catalyze "living" alternating copolymerization of 4-tert-butylstyrene and CO. This unique chemistry is thus employed to synthesize a range of well-defined treelike (hyperbranched polyethylene)-b-(linear polyketone) block polymers.

  12. INTRANS. A computer code for the non-linear structural response analysis of reactor internals under transient loads

    International Nuclear Information System (INIS)

    Ramani, D.T.

    1977-01-01

    The 'INTRANS' system is a general-purpose computer code designed to perform linear and non-linear structural stress and deflection analysis of impacting or non-impacting nuclear reactor internals components coupled with the reactor vessel, shield building, and external as well as internal gapped-spring support systems. This paper describes a computational procedure for evaluating the dynamic response of reactor internals, discretised as a beam and lumped-mass structural system and subjected to external transient loads such as seismic and LOCA time-history forces. The procedure, implemented in the INTRANS code, computes component flexibilities of a discrete lumped-mass planar model of the reactor internals by idealising an assemblage of finite elements consisting of linear elastic beams with bending, torsional and shear stiffnesses, interacting with an external or internal, linear as well as non-linear, multi-gapped spring support system. The method of analysis is based on the displacement method, and the code uses the fourth-order Runge-Kutta numerical integration technique as the basis for solving the dynamic equilibrium equations of motion for the system. During the computing process, the dynamic response of each lumped mass is calculated at a specific instant of time using a step-by-step procedure. At any instant of time, the transient dynamic motions of the system are held stationary, based on the predicted motions and internal forces of the previous instant, from which the complete response at any time step of interest may then be computed. Using this iterative process, the relationship between motions and internal forces is satisfied step by step throughout the time interval.

  13. Coherent Synchrotron Radiation: A Simulation Code Based on the Non-Linear Extension of the Operator Splitting Method

    CERN Document Server

    Dattoli, Giuseppe

    2005-01-01

    Coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. A code devoted to the analysis of this type of problem should be fast and reliable: conditions that are usually hard to achieve at the same time. In the past, codes based on Lie algebraic techniques have been very efficient for treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treat CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of non-linear contributions due to wake field effects. The proposed solution method exploits an algebraic technique, using exponential operators implemented numerically in C++. We show that the integration procedure is capable of reproducing the onset of an instability and effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, parametric studies a…

  14. Edge localized linear ideal magnetohydrodynamic instability studies in an extended-magnetohydrodynamic code

    International Nuclear Information System (INIS)

    Burke, B. J.; Kruger, S. E.; Hegna, C. C.; Zhu, P.; Snyder, P. B.; Sovinec, C. R.; Howell, E. C.

    2010-01-01

    A linear benchmark between the linear ideal MHD stability codes ELITE [H. R. Wilson et al., Phys. Plasmas 9, 1277 (2002)], GATO [L. Bernard et al., Comput. Phys. Commun. 24, 377 (1981)], and the extended nonlinear magnetohydrodynamic (MHD) code NIMROD [C. R. Sovinec et al., J. Comput. Phys. 195, 355 (2004)] is undertaken for edge-localized MHD instabilities. Two ballooning-unstable, shifted-circle tokamak equilibria are compared, where the stability characteristics are varied by changing the equilibrium plasma profiles. The equilibria model an H-mode plasma with a pedestal pressure profile and parallel edge currents. For both equilibria, NIMROD accurately reproduces the transition to instability (the marginally unstable mode), as well as the ideal growth spectrum for a large range of toroidal modes (n=1-20). The results use the compressible MHD model and depend on a precise representation of 'ideal-like' and 'vacuumlike' or 'halo' regions within the code. The halo region is modeled by the introduction of a Lundquist-value profile that transitions from a large to a small value at a flux-surface location outside of the pedestal region. To model an ideal-like MHD response in the core and a vacuumlike response outside the transition, separate criteria on the plasma and halo Lundquist values are required. For the benchmarked equilibria the critical Lundquist values are 10^8 and 10^3 for the ideal-like and halo regions, respectively. Notably, this gives a ratio on the order of 10^5, which is much larger than experimentally measured values using T_e values associated with the top of the pedestal and separatrix. Excellent agreement with ELITE and GATO calculations is obtained when sharp boundary transitions in the resistivity are used and a small amount of physical dissipation is added for conditions very near and below marginal ideal stability.

  15. Reduction of Under-Determined Linear Systems by Sparse Block Matrix Technique

    DEFF Research Database (Denmark)

    Tarp-Johansen, Niels Jacob; Poulsen, Peter Noe; Damkilde, Lars

    1996-01-01

    Under-determined linear equation systems occur in different engineering applications. In structural engineering they typically appear when applying the force method; as an example one could mention limit load analysis based on the Lower Bound Theorem, where a set of under-determined equilibrium equations acts as restrictions in an LP problem. A significant reduction of the computer time spent on solving the LP problem is achieved if the equilibrium equations are reduced before entering the optimization procedure. Experience has shown that for some structures one must apply full pivoting to ensure numerical stability of this reduction. Moreover, the coefficient matrix for the equilibrium equations is typically very sparse. The objective is therefore to deal efficiently with the full-pivoting reduction of sparse rectangular matrices using a dynamic storage scheme based on the block matrix concept.
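A minimal dense sketch of elimination with full (complete) pivoting on a rectangular system; the paper's contribution is performing this on a sparse block-matrix storage scheme, which is not reproduced here:

```python
def full_pivot_rank(A, tol=1e-12):
    """Gaussian elimination with full pivoting on a rectangular matrix,
    as used to reduce under-determined equilibrium systems.
    Returns the numerical rank; works on a copy of A (list of row lists)."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    rank = 0
    while rank < min(m, n):
        # full pivoting: search the whole remaining submatrix for the
        # largest entry, for numerical stability
        piv, pi, pj = 0.0, -1, -1
        for i in range(rank, m):
            for j in range(rank, n):
                if abs(A[i][j]) > piv:
                    piv, pi, pj = abs(A[i][j]), i, j
        if piv <= tol:
            break  # remaining submatrix is (numerically) zero
        A[rank], A[pi] = A[pi], A[rank]          # row swap
        for row in A:                            # column swap
            row[rank], row[pj] = row[pj], row[rank]
        for i in range(rank + 1, m):             # eliminate below the pivot
            f = A[i][rank] / A[rank][rank]
            for j in range(rank, n):
                A[i][j] -= f * A[rank][j]
        rank += 1
    return rank
```

After the reduction, the pivot columns identify dependent variables and the remaining columns parameterize the solution space handed to the LP solver.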

  16. Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.

    Science.gov (United States)

    Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo

    2015-08-01

    Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.
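Of the SC algorithms compared, orthogonal matching pursuit is the simplest to sketch. A minimal version for sparse linear regression (illustrative only, not the authors' implementation):

```python
import numpy as np

def omp(X, y, n_nonzero):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then refit all selected coefficients
    by least squares before computing the next residual."""
    n, d = X.shape
    support = []
    coef = np.zeros(d)
    residual = y.copy()
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(X.T @ residual)))  # best-matching atom
        if j not in support:
            support.append(j)
        w, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ w
    coef[support] = w
    return coef
```

The orthogonal refit at each step is what distinguishes OMP from plain matching pursuit and underlies the speed advantage reported over the Newton linear programming l1-norm SVR solver.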

  17. Comparison of computer codes for evaluation of double-supply-frequency pulsations in linear induction pumps

    International Nuclear Information System (INIS)

    Kirillov, Igor R.; Obukhov, Denis M.; Ogorodnikov, Anatoly P.; Araseki, Hideo

    2004-01-01

    The paper describes and compares three computer codes that can estimate the double-supply-frequency (DSF) pulsations in annular linear induction pumps (ALIPs). The DSF pulsations are the result of the interaction between the magnetic field and the currents induced in the liquid metal, both varying at the supply frequency. They may be of some concern for the operation of electromagnetic pumps (EMPs) and need to be evaluated at the design stage. The results of the computer simulation are compared with experimental ones for the annular linear induction pump ALIP-1.

  18. Throughput vs. Delay in Lossy Wireless Mesh Networks with Random Linear Network Coding

    OpenAIRE

    Hundebøll, Martin; Pahlevani, Peyman; Roetter, Daniel Enrique Lucani; Fitzek, Frank

    2014-01-01

    This work proposes a new protocol applying on-the-fly random linear network coding in wireless mesh networks. The protocol provides increased reliability, low delay, and high throughput to the upper layers, while being oblivious to their specific requirements. These seemingly conflicting goals are achieved by design, using an on-the-fly network coding strategy. Our protocol also exploits relay nodes to increase the overall performance of individual links. Since our protocol naturally masks random p…

  19. Analysis and Optimization of Sparse Random Linear Network Coding for Reliable Multicast Services

    DEFF Research Database (Denmark)

    Tassi, Andrea; Chatzigeorgiou, Ioannis; Roetter, Daniel Enrique Lucani

    2016-01-01

    Point-to-multipoint communications are expected to play a pivotal role in next-generation networks. This paper refers to a cellular system transmitting layered multicast services to a multicast group of users. Reliability of communications is ensured via different random linear network coding (RLNC) techniques. We deal with a fundamental problem: the computational complexity of the RLNC decoder. The higher the number of decoding operations is, the more the user's computational overhead grows and, consequently, the faster the battery of mobile devices drains. By referring to several sparse RLNC techniques, and without any assumption on the implementation of the RLNC decoder in use, we provide an efficient way to characterize the performance of users targeted by ultra-reliable layered multicast services. The proposed modeling allows the efficient derivation of the average number of coded packet…

  20. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  1. Improved Motion Estimation Using Early Zero-Block Detection

    Directory of Open Access Journals (Sweden)

    Y. Lin

    2008-07-01

    Full Text Available We incorporate the early zero-block detection technique into the UMHexagonS algorithm, which has already been adopted in H.264/AVC JM reference software, to speed up the motion estimation process. A nearly sufficient condition is derived for early zero-block detection. Although the conventional early zero-block detection method can achieve significant improvement in computation reduction, the PSNR loss, to whatever extent, is not negligible especially for high quantization parameter (QP or low bit-rate coding. This paper modifies the UMHexagonS algorithm with the early zero-block detection technique to improve its coding performance. The experimental results reveal that the improved UMHexagonS algorithm greatly reduces computation while maintaining very high coding efficiency.

  2. Protograph LDPC Codes Over Burst Erasure Channels

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    In this paper we design high-rate protograph-based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high-data-rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes, with an iterative decoding threshold that approaches the capacity of binary erasure channels. The other class is designed for short block sizes based on maximizing the minimum stopping-set size. For high code rates and short blocks the second class outperforms the first.
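On a pure erasure channel, iterative LDPC decoding reduces to the "peeling" rule: any parity check with exactly one erased bit determines that bit. A generic sketch of this rule (the protograph construction itself is not reproduced); decoding fails exactly when the remaining erasures contain a stopping set, which is why the short-block design above maximizes the minimum stopping-set size:

```python
def peel_decode(checks, bits):
    """Iterative (peeling) erasure decoding. `bits` holds known 0/1 values
    or None for erasures; `checks` lists the bit indices of each parity
    check (the XOR of the bits in a check must be 0). Repeatedly solve any
    check with exactly one erased bit until no progress is possible."""
    progress = True
    while progress:
        progress = False
        for chk in checks:
            erased = [i for i in chk if bits[i] is None]
            if len(erased) == 1:
                known_xor = 0
                for i in chk:
                    if bits[i] is not None:
                        known_xor ^= bits[i]
                bits[erased[0]] = known_xor  # parity forces this value
                progress = True
    return bits
```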

  3. Coded diffraction system in X-ray crystallography using a boolean phase coded aperture approximation

    Science.gov (United States)

    Pinilla, Samuel; Poveda, Juan; Arguello, Henry

    2018-03-01

    Phase retrieval is a problem present in many applications such as optics, astronomical imaging, computational biology and X-ray crystallography. Recent work has shown that the phase can be better recovered when the acquisition architecture includes a coded aperture, which modulates the signal before diffraction, such that the underlying signal is recovered from coded diffraction patterns. Moreover, this type of modulation effect, before the diffraction operation, can be obtained using a phase coded aperture placed just after the sample under study. However, a practical implementation of a phase coded aperture in an X-ray application is not feasible, because it is computationally modeled as a matrix with complex entries, which requires changing the phase of the diffracted beams. In fact, changing the phase implies finding a material that allows deviating the direction of an X-ray beam, which can considerably increase the implementation costs. Hence, this paper describes a low-cost coded X-ray diffraction system based on block-unblock coded apertures that enables phase reconstruction. The proposed system approximates the phase coded aperture with a block-unblock coded aperture by using the detour-phase method. Moreover, the SAXS/WAXS X-ray crystallography software was used to simulate the diffraction patterns of a real crystal structure called Rhombic Dodecahedron. Additionally, several simulations were carried out to analyze the performance of block-unblock approximations in recovering the phase, using the simulated diffraction patterns. Furthermore, the quality of the reconstructions was measured in terms of the Peak Signal-to-Noise Ratio (PSNR). Results show that the performance of the block-unblock coded-aperture approximation decreases by at most 12.5% compared with the phase coded apertures. Moreover, the quality of the reconstructions using the Boolean approximations is at most 2.5 dB of PSNR lower than that of the phase coded aperture reconstructions.

  4. Decoding linear error-correcting codes up to half the minimum distance with Gröbner bases

    NARCIS (Netherlands)

    Bulygin, S.; Pellikaan, G.R.; Sala, M.; Mora, T.; Perret, L.; Sakata, S.; Traverso, C.

    2009-01-01

    In this short note we show how one can decode linear error-correcting codes up to half the minimum distance via solving a system of polynomial equations over a finite field. We also explicitly present the reduced Gröbner basis for the system considered.
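For intuition, the classical baseline that such algebraic decoders target is correcting up to t = ⌊(d−1)/2⌋ errors; for the [7,4,3] Hamming code (t = 1), syndrome decoding takes a few lines. This is an illustration of that baseline, not the Gröbner-basis method of the paper:

```python
def hamming74_decode(r):
    """Syndrome decoding of the [7,4,3] Hamming code, which corrects
    t = (3-1)//2 = 1 error. Column j of the parity-check matrix H is the
    binary expansion of j+1, so the syndrome, read as an integer, is the
    1-based position of a single error (0 means no error detected)."""
    s = 0
    for j, bit in enumerate(r):
        if bit:
            s ^= j + 1          # XOR of 1-based positions of ones
    if s:
        r = r[:]
        r[s - 1] ^= 1           # flip the indicated bit
    return r
```

The paper's contribution is to replace the syndrome lookup with the reduced Gröbner basis of a polynomial system, which scales to codes where table-based decoding is infeasible.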

  5. New quantum codes constructed from quaternary BCH codes

    Science.gov (United States)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes is determined to be much larger than the result given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Secondly, we apply a combinatorial construction to the imprimitive BCH codes with their corresponding primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  6. Approximate design theory for a simple block design with random block effects

    OpenAIRE

    Christof, Karin

    1985-01-01

    Approximate design theory for a simple block design with random block effects / K. Christof ; F. Pukelsheim. - In: Linear statistical inference / ed. by T. Calinski ... - Berlin u. a. : Springer, 1985. - S. 20-28. - (Lecture notes in statistics ; 35)

  7. Modified linear predictive coding approach for moving target tracking by Doppler radar

    Science.gov (United States)

    Ding, Yipeng; Lin, Xiaoyi; Sun, Ke-Hui; Xu, Xue-Mei; Liu, Xi-Yao

    2016-07-01

    Doppler radar is a cost-effective tool for moving target tracking, which can support a large range of civilian and military applications. A modified linear predictive coding (LPC) approach is proposed to increase the target localization accuracy of the Doppler radar. Based on the time-frequency analysis of the received echo, the proposed approach first estimates the noise statistical parameters in real time and constructs an adaptive filter to intelligently suppress the noise interference. Then, a linear predictive model is applied to extend the available data, which helps improve the resolution of the target localization result. Compared with the traditional LPC method, which decides the extension data length empirically, the proposed approach develops an error array to evaluate the prediction accuracy and thus adjusts the optimum extension data length intelligently. Finally, the prediction error array is superimposed on the predictor output to correct the prediction error. A series of experiments is conducted to illustrate the validity and performance of the proposed techniques.
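The "extend the available data" step is ordinary autoregressive (linear predictive) extrapolation. A least-squares sketch of that step alone (the paper's adaptive filtering and error-array correction are omitted):

```python
import numpy as np

def lpc_extend(x, order, n_extend):
    """Fit an AR (linear predictive) model to signal x by least squares
    and extrapolate n_extend future samples, lengthening the available
    record before spectral analysis."""
    x = np.asarray(x, dtype=float)
    # design matrix: each row holds `order` consecutive past samples
    rows = np.array([x[i:i + order] for i in range(len(x) - order)])
    targets = x[order:]
    a, *_ = np.linalg.lstsq(rows, targets, rcond=None)
    out = list(x)
    for _ in range(n_extend):
        out.append(float(np.dot(a, out[-order:])))  # predict next sample
    return np.array(out)
```

A noiseless sinusoid obeys an exact second-order recurrence, so an order-2 model extrapolates it perfectly; real echoes require the noise suppression described above first.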

  8. Iterative linear solvers in a 2D radiation-hydrodynamics code: Methods and performance

    International Nuclear Information System (INIS)

    Baldwin, C.; Brown, P.N.; Falgout, R.; Graziani, F.; Jones, J.

    1999-01-01

    Computer codes containing both hydrodynamics and radiation play a central role in simulating both astrophysical and inertial confinement fusion (ICF) phenomena. A crucial aspect of these codes is that they require an implicit solution of the radiation diffusion equations. The authors present in this paper the results of a comparison of five different linear solvers on a range of complex radiation and radiation-hydrodynamics problems. The linear solvers used are diagonally scaled conjugate gradient, GMRES with incomplete LU preconditioning, conjugate gradient with incomplete Cholesky preconditioning, multigrid, and multigrid-preconditioned conjugate gradient. These problems involve shock propagation, opacities varying over 5-6 orders of magnitude, tabular equations of state, and dynamic ALE (Arbitrary Lagrangian Eulerian) meshes. They perform a problem-size scalability study by comparing linear solver performance over a wide range of problem sizes from 1,000 to 100,000 zones. The fundamental question they address in this paper is: is it more efficient to invert the matrix in many inexpensive steps (like diagonally scaled conjugate gradient) or in fewer expensive steps (like multigrid)? In addition, what is the answer to this question as a function of problem size, and is the answer problem dependent? They find that the diagonally scaled conjugate gradient method performs poorly with the growth of problem size, increasing in both iteration count and overall CPU time with the size of the problem and also increasing for larger time steps. For all problems considered, the multigrid algorithms scale almost perfectly (i.e., the iteration count is approximately independent of problem size and problem time step). For pure radiation flow problems (i.e., no hydrodynamics), they see speedups in CPU time of factors of ∼15-30 for the largest problems when comparing the multigrid solvers to diagonally scaled conjugate gradient.
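The cheapest solver in the comparison, diagonally scaled (Jacobi-preconditioned) conjugate gradient, fits in a few lines. A generic sketch for a symmetric positive-definite system (not the authors' production code):

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=500):
    """Conjugate gradient with diagonal (Jacobi) scaling: each iteration
    costs one matrix-vector product plus a trivial diagonal solve.
    Returns (x, number of iterations used)."""
    Dinv = 1.0 / np.diag(A)          # the "preconditioner" is just 1/diag
    x = np.zeros_like(b)
    r = b - A @ x
    z = Dinv * r
    p = z.copy()
    rz = r @ z
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k
        z = Dinv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter
```

Each step is cheap, but the iteration count grows with problem size; multigrid inverts the trade-off with expensive, nearly size-independent iterations, which is the paper's central comparison.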

  9. Fast Maximum-Likelihood Decoder for Quasi-Orthogonal Space-Time Block Code

    Directory of Open Access Journals (Sweden)

    Adel Ahmadi

    2015-01-01

    Full Text Available Motivated by the decompositions of sphere and QR-based methods, in this paper we present an extremely fast maximum-likelihood (ML) detection approach for quasi-orthogonal space-time block codes (QOSTBCs). The proposed algorithm, with a relatively simple design, exploits the structure of quadrature amplitude modulation (QAM) constellations to achieve its goal and can be extended to any arbitrary constellation. Our decoder utilizes a new decomposition technique for the ML metric which divides the metric into independent positive parts and a positive interference part. The search spaces of the symbols are substantially reduced by employing the independent parts and the statistics of the noise. Symbols within the search spaces are successively evaluated until the metric is minimized. Simulation results confirm that the proposed decoder is superior to many recently published state-of-the-art solutions in terms of complexity. More specifically, applying the new algorithm with 1024-QAM decreases the computational complexity compared to state-of-the-art solutions with 16-QAM.

  10. Application of 3D coupled code ATHLET-QUABOX/CUBBOX for RBMK-1000 transients after graphite block modernization

    Energy Technology Data Exchange (ETDEWEB)

    Samokhin, Aleksei [Scientific and Engineering Centre for Nuclear and Radiation Safety (SEC NRS), Moscow (Russian Federation); Zilly, Matias [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH, Garching (Germany)

    2016-11-15

    This work describes the application and the results of transient calculations for the RBMK-1000 with the coupled code system ATHLET 2.2A-QUABOX/CUBBOX, which was developed at GRS. Within these studies, the planned modernization of the graphite blocks of the RBMK-1000 reactor is taken into account. During the long-term operation of the uranium-graphite RBMK-1000 reactors, a change of the physical and mechanical properties of the reactor graphite blocks is observed due to the impact of radiation and temperature effects. These effects have led to a deformation of the reactor graphite columns and, as a result, a deformation of the control and protection system (CPS) channels and of the fuel channels. Potentially, this deformation can lead to problems affecting the smooth movement of the control rods in the CPS channels and problems during the loading and unloading of fuel assemblies. The present paper analyzes two reactivity-insertion transients, each taking into account three graphite removal scenarios. The presented work is directly connected with the modernization program of the RBMK-1000 reactors and makes an important contribution to the assessment of the safety-relevant parameters after the modification of the core graphite blocks.

  11. Blind and semi-blind ML detection for space-time block-coded OFDM wireless systems

    KAUST Repository

    Zaib, Alam; Al-Naffouri, Tareq Y.

    2014-01-01

    This paper investigates the joint maximum likelihood (ML) data detection and channel estimation problem for Alamouti space-time block-coded (STBC) orthogonal frequency-division multiplexing (OFDM) wireless systems. The joint ML estimation and data detection is generally considered a hard combinatorial optimization problem. We propose an efficient low-complexity algorithm based on a branch-estimate-bound strategy that renders the exact joint ML solution. However, the computational complexity of the blind algorithm becomes critical at low signal-to-noise ratio (SNR) as the number of OFDM carriers and the constellation size are increased, especially in multiple-antenna systems. To overcome this problem, a semi-blind algorithm based on a new framework for reducing the complexity is proposed, relying on subcarrier reordering and decoding the carriers with different levels of confidence using a suitable reliability criterion. In addition, it is shown that by utilizing the inherent structure of Alamouti coding, improvements in estimation performance or reductions in complexity can be achieved. The proposed algorithms can reliably track the wireless Rayleigh fading channel without requiring any channel statistics. Simulation results, presented against perfect coherent detection, demonstrate the effectiveness of the blind and semi-blind algorithms over frequency-selective channels with different fading characteristics.
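The "inherent structure of Alamouti coding" is its orthogonality: with channel knowledge, linear combining decouples the two symbols. A minimal sketch of the coherent combiner (not the blind/semi-blind algorithms proposed in the paper):

```python
def alamouti_combine(r1, r2, h1, h2):
    """Alamouti STBC receive combining with known channel gains h1, h2.
    Slot 1 transmits (s1, s2); slot 2 transmits (-conj(s2), conj(s1)), so
    r1 = h1*s1 + h2*s2 and r2 = -h1*conj(s2) + h2*conj(s1) (plus noise).
    The combiner returns (|h1|^2 + |h2|^2) * s_i plus noise for each i."""
    s1_hat = h1.conjugate() * r1 + h2 * r2.conjugate()
    s2_hat = h2.conjugate() * r1 - h1 * r2.conjugate()
    return s1_hat, s2_hat
```

Because the symbols decouple, ML detection after combining is per-symbol slicing rather than a joint search, which is the structural advantage the semi-blind algorithm exploits.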

  12. Multicompartment micellar aggregates of linear ABC amphiphiles in solvents selective for the C block: A Monte Carlo simulation

    KAUST Repository

    Zhu, Yutian

    2012-01-01

    In the current study, we applied the Monte Carlo method to study the self-assembly of linear ABC amphiphiles composed of two solvophobic A and B blocks and a solvophilic C block. A great number of multicompartment micelles are discovered in the simulations, and detailed phase diagrams for ABC amphiphiles with different block lengths are obtained. The simulation results reveal that the micellar structure is largely controlled by block length, solvent quality, and the incompatibility between the different block types. When the B block is longer than or the same length as the terminal A block, a rich variety of micellar structures can be formed from ABC amphiphiles. By adjusting the solvent quality or the incompatibility between the different block types, multiple morphological transitions are observed. These morphological sequences are well explained and consistent with previous experimental and theoretical studies. Despite the complexity of the micellar structures and morphological transitions observed for the self-assembly of ABC amphiphiles, two important common features of the phase behavior are obtained. In general, the micellar structures obtained in the current study can be divided into zero-dimensional (sphere-like structures, including bumpy-surfaced spheres and sphere-on-sphere structures), one-dimensional (cylinder-like structures, including rod and ring structures), two-dimensional (layer-like structures, including disk, lamella, worm-like and hamburger structures) and three-dimensional (vesicle) structures. It is found that the micellar structures transform from low- to high-dimensional structures when the solvent quality for the solvophobic blocks is decreased. In contrast, the micellar structures transform from high- to low-dimensional structures as the incompatibility between different block types increases.
Furthermore, several novel micellar structures, such as the CBABC five-layer vesicle, hamburger, CBA three-layer ring, wormlike shape with

  13. Turbo coding, turbo equalisation and space-time coding for transmission over fading channels

    CERN Document Server

    Hanzo, L; Yeap, B

    2002-01-01

    Against the backdrop of the emerging 3G wireless personal communications standards and broadband access network standard proposals, this volume covers a range of coding and transmission aspects for transmission over fading wireless channels. It presents the most important classic channel coding issues as well as the exciting advances of the last decade, such as turbo coding, turbo equalisation and space-time coding, and it endeavours to be the first book with explicit emphasis on channel coding for transmission over wireless channels. It is divided into 4 parts. Part 1 explains the necessary background for novices; it aims to be both an easy-reading textbook and a deep research monograph. Part 2 provides detailed coverage of turbo convolutional and turbo block coding, considering the known decoding algorithms and their performance over Gaussian as well as narrowband and wideband fading channels. Part 3 comprehensively discusses both space-time block and space-time trellis coding for the first time in the literature. Par...

  14. Evaluation of linear heat rates for the power-to-melt tests on 'JOYO' using the Monte-Carlo code 'MVP'

    International Nuclear Information System (INIS)

    Yokoyama, Kenji; Ishikawa, Makoto

    2000-04-01

    The linear heat rates of the power-to-melt (PTM) tests, performed with the B5D-1 and B5D-2 subassemblies in the Experimental Fast Reactor 'JOYO', are evaluated with the continuous-energy Monte-Carlo code MVP. A whole-core model can be applied with MVP, but the calculation takes a very long time. Therefore, judging from the structure of the B5D subassembly, we used the MVP code to calculate the radial distribution of the linear heat rate and a deterministic method to calculate the axial distribution, and we derived the formulas for this method. Furthermore, we evaluated the error of the linear heat rate by accounting for the experimental error of the reactor power, the statistical error of the Monte-Carlo method, the calculational model error of the deterministic method, and so on. We also evaluated the burnup rate of the B5D assembly and compared it with the value measured in the post-irradiation test. The main results are as follows: B5D-1 (B5101, F613632, core center): linear heat rate 600 W/cm±2.2%, burnup rate 0.977. B5D-2 (B5214, G80124, core center): linear heat rate 641 W/cm±2.2%, burnup rate 0.886. (author)

  15. Preliminary results in implementing a model of the world economy on the CYBER 205: A case of large sparse nonsymmetric linear equations

    Science.gov (United States)

    Szyld, D. B.

    1984-01-01

    A brief description of the Model of the World Economy implemented at the Institute for Economic Analysis is presented, together with our experience in converting the software to vector code. For each time period, the model is reduced to a linear system of over 2000 variables. The matrix of coefficients has a bordered block diagonal structure, and we show how some of the matrix operations can be carried out on all diagonal blocks at once.
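    The "all diagonal blocks at once" remark can be illustrated with a sketch. The code below is an invented toy, not the Institute's software: it solves only the block-diagonal part of a bordered block diagonal system, where each diagonal block is an independent small dense system, so the per-block solves can proceed in lockstep on a vector machine.

```python
def gauss_solve(a, b):
    """Dense Gaussian elimination with partial pivoting (small systems)."""
    n = len(a)
    m = [row[:] + [bv] for row, bv in zip(a, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(m[i][k]))
        m[k], m[p] = m[p], m[k]
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            for j in range(k, n + 1):
                m[i][j] -= f * m[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def solve_block_diagonal(blocks, rhs_parts):
    # Each diagonal block is independent of the others, so on a vector
    # machine like the CYBER 205 the solves can run on all blocks at once.
    return [gauss_solve(a, b) for a, b in zip(blocks, rhs_parts)]

# Two independent 2x2 diagonal blocks with their right-hand sides.
blocks = [[[2.0, 1.0], [1.0, 3.0]], [[4.0, 0.0], [0.0, 5.0]]]
rhs = [[3.0, 4.0], [8.0, 10.0]]
sols = solve_block_diagonal(blocks, rhs)
```

    Handling the border coupling the blocks would add a Schur-complement step on top of these independent solves.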

  16. Random Linear Network Coding for 5G Mobile Video Delivery

    Directory of Open Access Journals (Sweden)

    Dejan Vukobratovic

    2018-03-01

    Full Text Available An exponential increase in mobile video delivery will continue with the demand for higher-resolution, multi-view and large-scale multicast video services. The novel fifth generation (5G) 3GPP New Radio (NR) standard will bring a number of new opportunities for optimizing video delivery across both the 5G core and radio access networks. One of the promising approaches for video quality adaptation, throughput enhancement and erasure protection is the use of packet-level random linear network coding (RLNC). In this review paper, we discuss the integration of RLNC into the 5G NR standard, building upon the ideas and opportunities identified in 4G LTE. We explicitly identify and discuss in detail novel 5G NR features that provide support for RLNC-based video delivery in 5G, thus pointing out promising avenues for future research.
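    Packet-level RLNC itself is simple to sketch. The toy below works over GF(2), with coefficient vectors stored as bitmasks and packets as integers treated as bit vectors; practical deployments typically use larger fields such as GF(2^8), and the 5G integration details are out of scope here.

```python
import random

def rlnc_encode(packets, rng):
    """One coded packet: a random nonzero GF(2) coefficient vector
    (stored as a bitmask) and the XOR of the selected source packets."""
    k = len(packets)
    coeffs = 0
    while coeffs == 0:
        coeffs = rng.getrandbits(k)
    payload = 0
    for i in range(k):
        if coeffs >> i & 1:
            payload ^= packets[i]
    return coeffs, payload

def rlnc_decode(coded, k):
    """Gaussian elimination over GF(2); None until k innovative packets."""
    pivot = {}                       # leading-bit index -> (coeffs, payload)
    for c, p in coded:
        for bit in sorted(pivot, reverse=True):   # reduce by known pivots
            if c >> bit & 1:
                pc, pp = pivot[bit]
                c ^= pc
                p ^= pp
        if c:                        # innovative: store as a new pivot row
            pivot[c.bit_length() - 1] = (c, p)
    if len(pivot) < k:
        return None
    for bit in sorted(pivot):        # back-substitute, lowest pivot first
        c, p = pivot[bit]
        for b2 in range(bit):
            if c >> b2 & 1:
                pc, pp = pivot[b2]
                c ^= pc
                p ^= pp
        pivot[bit] = (c, p)
    return [pivot[i][1] for i in range(k)]

# Demo: keep sending coded packets until the receiver can decode.
rng = random.Random(7)
source = [0b1010, 0b0111, 0b1100, 0b0001]
coded, recovered = [], None
while recovered is None:
    coded.append(rlnc_encode(source, rng))
    recovered = rlnc_decode(coded, len(source))
```

    The erasure-protection property is visible in the loop: any set of coded packets whose coefficient vectors reach full rank suffices, regardless of which particular packets were lost.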

  17. An efficient adaptive arithmetic coding image compression technology

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei

    2011-01-01

    This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding algorithm. Combining an adaptive probability model with predictive coding, the scheme increases the coding compression rate while ensuring the quality of the decoded image. An adaptive model for each encoded image block dynamically estimates the probabilities of that block's symbols, and the decoder can accurately recover the encoded image from the code book information. The results show that it is an effective compression technology.
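    The adaptive-model idea can be shown without the arithmetic coder itself: probabilities are re-estimated from the symbols already coded, so skewed blocks cost fewer bits than a fixed code. The sketch below (with an assumed add-one prior, not the paper's model) computes only the ideal code length of such an adaptive order-0 model.

```python
import math

def adaptive_code_length(symbols, alphabet_size):
    """Ideal code length (bits) under an adaptive order-0 model with
    add-one smoothing: each symbol is coded with the probability
    estimated from the counts seen so far, then the count is updated."""
    counts = [1] * alphabet_size          # Laplace prior
    total = alphabet_size
    bits = 0.0
    for s in symbols:
        bits += -math.log2(counts[s] / total)
        counts[s] += 1                    # adapt the model
        total += 1
    return bits

data = [0] * 90 + [1] * 10                # skewed binary "image block"
bits = adaptive_code_length(data, 2)      # well under the 100 bits of a fixed code
```

    An arithmetic coder driven by the same evolving probabilities approaches this length to within a few bits, which is why the adaptive model alone captures most of the compression gain.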

  18. Comparative Study on Code-based Linear Evaluation of an Existing RC Building Damaged during 1998 Adana-Ceyhan Earthquake

    Science.gov (United States)

    Toprak, A. Emre; Gülay, F. Gülten; Ruge, Peter

    2008-07-01

    Determination of the seismic performance of existing buildings has become one of the key concepts in structural analysis after recent earthquakes (the Izmit and Duzce earthquakes in 1999, the Kobe earthquake in 1995 and the Northridge earthquake in 1994). Considering the need for precise assessment tools to determine seismic performance levels, most earthquake-prone countries try to include performance-based assessment in their seismic codes. Recently, the Turkish Earthquake Code 2007 (TEC'07), which was put into effect in March 2007, also introduced linear and non-linear assessment procedures to be applied prior to building retrofitting. In this paper, a comparative study of the code-based seismic assessment of RC buildings with linear static methods of analysis is performed for an existing RC building. The basic principles of the seismic performance evaluation procedures for existing RC buildings according to Eurocode 8 and TEC'07 are outlined and compared. The procedure is then applied to a real case-study building exposed to the 1998 Adana-Ceyhan earthquake in Turkey, a seismic action of Ms = 6.3 with a maximum ground acceleration of 0.28 g. It is a six-storey RC residential building with a total height of 14.65 m, composed of orthogonal frames, symmetrical in the y direction, without any significant structural irregularities. The rectangular plan measures 16.40 m×7.80 m = 127.90 m2, with five spans in the x and two spans in the y direction. The building was reported to have been moderately damaged during the 1998 earthquake, and the authorities suggested retrofitting by adding shear walls to the system. The computations show that linear methods of analysis using either Eurocode 8 or TEC'07 independently produce similar performance levels of collapse for the critical storey of the structure. The computed base shear value according to Eurocode is much higher

  19. Comparative Study on Code-based Linear Evaluation of an Existing RC Building Damaged during 1998 Adana-Ceyhan Earthquake

    International Nuclear Information System (INIS)

    Toprak, A. Emre; Guelay, F. Guelten; Ruge, Peter

    2008-01-01

    Determination of the seismic performance of existing buildings has become one of the key concepts in structural analysis after recent earthquakes (the Izmit and Duzce earthquakes in 1999, the Kobe earthquake in 1995 and the Northridge earthquake in 1994). Considering the need for precise assessment tools to determine seismic performance levels, most earthquake-prone countries try to include performance-based assessment in their seismic codes. Recently, the Turkish Earthquake Code 2007 (TEC'07), which was put into effect in March 2007, also introduced linear and non-linear assessment procedures to be applied prior to building retrofitting. In this paper, a comparative study of the code-based seismic assessment of RC buildings with linear static methods of analysis is performed for an existing RC building. The basic principles of the seismic performance evaluation procedures for existing RC buildings according to Eurocode 8 and TEC'07 are outlined and compared. The procedure is then applied to a real case-study building exposed to the 1998 Adana-Ceyhan earthquake in Turkey, a seismic action of Ms = 6.3 with a maximum ground acceleration of 0.28 g. It is a six-storey RC residential building with a total height of 14.65 m, composed of orthogonal frames, symmetrical in the y direction, without any significant structural irregularities. The rectangular plan measures 16.40 m×7.80 m = 127.90 m2, with five spans in the x and two spans in the y direction. The building was reported to have been moderately damaged during the 1998 earthquake, and the authorities suggested retrofitting by adding shear walls to the system. The computations show that linear methods of analysis using either Eurocode 8 or TEC'07 independently produce similar performance levels of collapse for the critical storey of the structure. The computed base shear value according to Eurocode is much higher

  20. The Aster code

    International Nuclear Information System (INIS)

    Delbecq, J.M.

    1999-01-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D division of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (material behaviour, large deformations, specific loads, unloading and loss-of-load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and the metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  1. Speech coding code- excited linear prediction

    CERN Document Server

    Bäckström, Tom

    2017-01-01

    This book provides a scientific understanding of the most central techniques used in speech coding, both for advanced students and for professionals with a background in speech, audio and/or digital signal processing. It provides a clear connection between the whys, hows and whats, thus enabling a clear view of the necessity, purpose and solutions provided by various tools, as well as their strengths and weaknesses in each respect. Equivalently, this book sheds light on the following perspectives for each technology presented. Objective: what do we want to achieve, and especially why is this goal important? Resource/Information: what information is available, and how can it be useful? Resource/Platform: what kinds of platforms are we working with, and what are their capabilities and restrictions? This includes computational, memory and acoustic properties, and the transmission capacity of the devices used. The book goes on to address Solutions: which solutions have been proposed, and how can they be used to reach the stated goals, and ...

  2. What Information is Stored in DNA: Does it Contain Digital Error Correcting Codes?

    Science.gov (United States)

    Liebovitch, Larry

    1998-03-01

    The longest-term correlations in living systems are the information stored in DNA, which reflects the evolutionary history of an organism. The 4 bases (A,T,G,C) encode sequences of amino acids as well as locations of binding sites for proteins that regulate DNA. The fidelity of this important information is maintained by ANALOG error check mechanisms. When a single strand of DNA is replicated, the complementary base is inserted in the new strand. Sometimes the wrong base is inserted and sticks out, disrupting the phosphate backbone. The new base is not yet methylated, so repair enzymes, which slide along the DNA, can tear out the wrong base and replace it with the right one. The bases in DNA form a sequence of 4 different symbols, and so the information is encoded in a DIGITAL form. All the digital codes in our society (ISBN book numbers, UPC product codes, bank account numbers, airline ticket numbers) use error checking codes, where some digits are functions of other digits, to maintain the fidelity of transmitted information. Does DNA also utilize a DIGITAL error checking code to maintain the fidelity of its information and increase the accuracy of replication? That is, are some bases in DNA functions of other bases upstream or downstream? This raises an interesting mathematical problem: how does one determine whether some symbols in a sequence of symbols are a function of other symbols? It also bears on the issue of determining algorithmic complexity: what is the function that generates the shortest algorithm for reproducing the symbol sequence? The error checking codes most used in our technology are linear block codes. We developed an efficient method to test for the presence of such codes in DNA. We coded the 4 bases as (0,1,2,3) and used Gaussian elimination, modified for modulus 4, to test if some bases are linear combinations of other bases. We used this method to analyze the base sequences in the genes from the lac operon and cytochrome C. We did not find
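    The test in question — are some bases a fixed Z4-linear function of others? — can be illustrated with a brute-force search. The authors used Gaussian elimination adapted to modulus 4; the exhaustive check below is equivalent only for a handful of columns, and the "sequences" are invented toy data, not DNA.

```python
from itertools import product

def is_linear_combination_mod4(rows, target_col, other_cols):
    """Search for coefficients c_j in Z_4 such that, in every row,
    x[target_col] = sum_j c_j * x[j] (mod 4). Returns the coefficient
    tuple, or None if no Z_4-linear relation holds."""
    for coeffs in product(range(4), repeat=len(other_cols)):
        if all(sum(c * row[j] for c, j in zip(coeffs, other_cols)) % 4
               == row[target_col] % 4 for row in rows):
            return coeffs
    return None

# Toy data: in every row the third symbol is (first + 2*second) mod 4.
rows = [(0, 1, 2), (1, 1, 3), (3, 2, 3), (2, 3, 0)]
relation = is_linear_combination_mod4(rows, 2, [0, 1])
```

    Note that Z_4 is a ring, not a field (2 has no inverse), which is exactly why ordinary Gaussian elimination must be modified for modulus 4.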

  3. Program LINEAR (version 79-1): linearize data in the evaluated nuclear data file/version B (ENDF/B) format

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1979-01-01

    Program LINEAR converts evaluated cross sections in the ENDF/B format into a tabular form that is subject to linear-linear interpolation in energy and cross section. The code also thins tables of cross sections already in that form (i.e., removes points not needed for linear interpolability). The main advantage of the code is that it allows subsequent codes to consider only linear-linear data. A listing of the source deck is available on request
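    The thinning step has a simple core: drop interior points that linear interpolation between the retained neighbours already reproduces within a tolerance. The greedy one-pass sketch below illustrates the idea only; it is not Cullen's actual algorithm, and the table values are invented.

```python
def thin_linear(xs, ys, tol=1e-6):
    """Greedy thinning of a linear-linear table: an interior point is
    dropped if interpolating between the last kept point and the next
    point reproduces it to within tol."""
    keep = [0]
    for i in range(1, len(xs) - 1):
        x0, y0 = xs[keep[-1]], ys[keep[-1]]
        x1, y1 = xs[i + 1], ys[i + 1]
        y_lin = y0 + (y1 - y0) * (xs[i] - x0) / (x1 - x0)
        if abs(y_lin - ys[i]) > tol:
            keep.append(i)          # point carries real shape: keep it
    keep.append(len(xs) - 1)
    return [xs[i] for i in keep], [ys[i] for i in keep]

# The straight run 0..2 collapses; the kink at x=2 survives.
xs, ys = thin_linear([0.0, 1.0, 2.0, 3.0, 4.0], [0.0, 1.0, 2.0, 5.0, 8.0])
```

    After thinning, any downstream code can evaluate the cross section with plain linear interpolation between consecutive retained points, which is the whole point of the linear-linear form.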

  4. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...

  5. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate (LDPCA) codes in a DSC scheme with feedback. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  6. Fokker-Planck code for the quasi-linear absorption of electron cyclotron waves in a tokamak plasma

    International Nuclear Information System (INIS)

    Meyer, R.L.; Giruzzi, G.; Krivenski, V.

    1986-01-01

    We present the solution of the kinetic equation describing the quasi-linear evolution of the electron momentum distribution function under the influence of electron cyclotron wave absorption, Coulomb collisions and the dc electric field in a tokamak plasma. The solution of the quasi-linear equation is obtained numerically using a two-dimensional initial-value code following an ADI scheme. Most emphasis is given to the fully non-linear and self-consistent problem, namely, the wave amplitude is evaluated at any instant and any point in space according to the actual damping. This is necessary since wave damping is a very sensitive function of the slope of the local momentum distribution function, because the resonance condition relates the electron momentum to the location of wave energy deposition. (orig.)

  7. Fixed capacity and variable member grouping assignment of orthogonal variable spreading factor code tree for code division multiple access networks

    Directory of Open Access Journals (Sweden)

    Vipin Balyan

    2014-08-01

    Full Text Available Orthogonal variable spreading factor (OVSF) codes are used in the downlink to maintain orthogonality between different channels and to handle new calls arriving in the system. A period of operation leads to fragmentation of vacant codes, which in turn leads to the code blocking problem. The assignment scheme proposed in this paper is not affected by fragmentation, as the fragmentation is generated by the scheme itself. In this scheme, the code tree is divided into groups whose capacity is fixed and whose number of members (codes) is variable. A group with the maximum number of busy members is used for assignment; this leads to fragmentation of busy groups around the code tree and compactness within a group. The proposed scheme is evaluated and compared with other schemes using parameters such as code blocking probability and call establishment delay. Simulations demonstrate that the proposed scheme not only adequately reduces the code blocking probability, but also requires significantly less time to locate a vacant code for assignment, which makes it suitable for real-time calls.
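    The orthogonality constraint behind code blocking is concrete: an OVSF code is assignable only if no ancestor and no descendant in the code tree is busy. The minimal sketch below shows that rule on a heap-indexed tree; the paper's fixed-capacity grouping strategy is not reproduced, and the tree depth is an arbitrary choice for the demo.

```python
class OVSFTree:
    """OVSF code tree as a heap-indexed binary tree (root = index 1).
    A code is assignable iff neither an ancestor nor a descendant is
    busy, which keeps all assigned codes mutually orthogonal."""

    def __init__(self, depth):
        self.busy = [False] * (2 ** (depth + 1))

    def _blocked(self, i):
        a = i
        while a >= 1:                 # the code itself and its ancestors
            if self.busy[a]:
                return True
            a //= 2
        stack = [2 * i, 2 * i + 1]
        while stack:                  # all descendants
            d = stack.pop()
            if d < len(self.busy):
                if self.busy[d]:
                    return True
                stack += [2 * d, 2 * d + 1]
        return False

    def assign(self, sf):
        """First vacant code at spreading factor sf (indices sf..2*sf-1)."""
        for i in range(sf, 2 * sf):
            if not self._blocked(i):
                self.busy[i] = True
                return i
        return None                   # code blocking: no orthogonal code free

tree = OVSFTree(3)                    # spreading factors up to 8
sf2 = tree.assign(2)                  # one SF-2 code claims half the tree
sf4 = tree.assign(4)                  # SF-4 call forced into the other half
```

    Fragmentation shows up when the busy codes are scattered: capacity may remain in aggregate, yet every candidate code has a busy ancestor or descendant and `assign` returns None.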

  8. Polynomial theory of error correcting codes

    CERN Document Server

    Cancellieri, Giovanni

    2015-01-01

    The book offers an original view on channel coding, based on a unitary approach to block and convolutional codes for error correction. It presents both new concepts and new families of codes. For example, lengthened and modified lengthened cyclic codes are introduced as a bridge towards time-invariant convolutional codes and their extension to time-varying versions. The novel families of codes include turbo codes and low-density parity check (LDPC) codes, the features of which are justified from the structural properties of the component codes. Design procedures for regular LDPC codes are proposed, supported by the presented theory. Quasi-cyclic LDPC codes, in block or convolutional form, represent one of the most original contributions of the book. The use of more than 100 examples allows the reader gradually to gain an understanding of the theory, and the provision of a list of more than 150 definitions, indexed at the end of the book, permits rapid location of sought information.

  9. Decoding Xing-Ling codes

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Refslund

    2002-01-01

    This paper describes an efficient decoding method for a recent construction of good linear codes as well as an extension to the construction. Furthermore, asymptotic properties and list decoding of the codes are discussed.

  10. Coded communications with nonideal interleaving

    Science.gov (United States)

    Laufer, Shaul

    1991-02-01

    Burst error channels - a type of block interference channel - feature increasing capacity but decreasing cutoff rate as the memory length increases. Despite the large capacity, there is degradation in the performance of practical coding schemes when the memory length is excessive. A short-coding error parameter (SCEP) is introduced, which expresses a bound on the average decoding-error probability for codes shorter than the block interference length. The performance of a coded slow frequency-hopping communication channel is analyzed for worst-case partial-band jamming and nonideal interleaving, by deriving expressions for the capacity and cutoff rate. The capacity and cutoff rate, respectively, are shown to approach and depart from those of a memoryless channel corresponding to the transmission of a single code letter per hop. For multiaccess communications over a slot-synchronized collision channel without feedback, the channel is considered as a block interference channel with memory length equal to the number of letters transmitted in each slot. The effects of asymmetrical background noise and a reduced collision error rate are studied as aspects of real communications. The performance of specific convolutional and Reed-Solomon codes is examined for slow frequency-hopping systems with nonideal interleaving. An upper bound is presented for the performance of a Viterbi decoder for a convolutional code with nonideal interleaving, and a soft-decision diversity combining technique is introduced.

  11. Codes Over Hyperfields

    Directory of Open Access Journals (Sweden)

    Atamewoue Surdive

    2017-12-01

    Full Text Available In this paper, we define linear codes and cyclic codes over a finite Krasner hyperfield and characterize these codes by their generator matrices and parity check matrices. We also demonstrate that codes over finite Krasner hyperfields are more interesting for coding theory than codes over classical finite fields.

  12. A Simple Scheme for Belief Propagation Decoding of BCH and RS Codes in Multimedia Transmissions

    Directory of Open Access Journals (Sweden)

    Marco Baldi

    2008-01-01

    Full Text Available Classic linear block codes, like Bose-Chaudhuri-Hocquenghem (BCH) and Reed-Solomon (RS) codes, are widely used in multimedia transmissions, but their soft-decision decoding still represents an open issue. Among the several approaches proposed for this purpose, an important role is played by the iterative belief propagation principle, whose application to low-density parity-check (LDPC) codes makes it possible to approach the channel capacity. In this paper, we elaborate a new technique for decoding classic binary and non-binary codes through the belief propagation algorithm. We focus on RS codes included in the recent CDMA2000 standard and compare the proposed technique with the adaptive belief propagation approach, which is able to ensure very good performance but at higher complexity. Moreover, we consider the case of the long BCH codes included in the DVB-S2 standard, for which we show that the usage of "pure" LDPC codes would provide better performance.

  13. List Decoding of Matrix-Product Codes from nested codes: an application to Quasi-Cyclic codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Høholdt, Tom; Ruano, Diego

    2012-01-01

    A list decoding algorithm for matrix-product codes is provided when $C_1,..., C_s$ are nested linear codes and $A$ is a non-singular by columns matrix. We estimate the probability of getting more than one codeword as output when the constituent codes are Reed-Solomon codes. We extend this list decoding algorithm to matrix-product codes with polynomial units, which are quasi-cyclic codes. Furthermore, it allows us to consider unique decoding for matrix-product codes with polynomial units.
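    The smallest nontrivial matrix-product construction, with $A = [[1,1],[0,1]]$, is the classical Plotkin $(u, u+v)$ sum. The toy GF(2) enumeration below shows the construction only; the constituent codes are invented examples, and the nesting and list-decoding machinery of the paper are not reproduced.

```python
def matrix_product_uuv(c1, c2):
    """Matrix-product code [C1 C2]*A with A = [[1,1],[0,1]] over GF(2):
    codewords are (u, u + v) with u in C1 and v in C2 (Plotkin sum)."""
    return {tuple(u) + tuple(a ^ b for a, b in zip(u, v))
            for u in c1 for v in c2}

# Toy constituents: C1 = length-2 repetition code, C2 = all of GF(2)^2.
c1 = [(0, 0), (1, 1)]
c2 = [(0, 0), (0, 1), (1, 0), (1, 1)]
code = matrix_product_uuv(c1, c2)
```

    The resulting code has length 4 and |C1|·|C2| = 8 codewords, and it inherits linearity from its constituents, which is what lets the paper's decoder reduce decoding of the product to decoding of C1 and C2.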

  14. A fast direct method for block triangular Toeplitz-like with tri-diagonal block systems from time-fractional partial differential equations

    Science.gov (United States)

    Ke, Rihuan; Ng, Michael K.; Sun, Hai-Wei

    2015-12-01

    In this paper, we study the block lower triangular Toeplitz-like with tri-diagonal blocks system which arises from the time-fractional partial differential equation. Existing fast numerical solvers (e.g., the fast approximate inversion method) cannot handle such linear systems, as the main diagonal blocks are different. The main contribution of this paper is to propose a fast direct method for solving this linear system, and to illustrate that the proposed method is much faster than the classical block forward substitution method. Our idea is based on a divide-and-conquer strategy together with fast Fourier transforms for calculating Toeplitz matrix-vector multiplications. The method needs O(MN log^2 M) arithmetic operations, where M is the number of blocks (the number of time steps) in the system and N is the size (number of spatial grid points) of each block. Numerical examples from the finite difference discretization of time-fractional partial differential equations are given to demonstrate the efficiency of the proposed method.
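    The classical baseline the paper accelerates is block forward substitution, which costs O(M^2) block operations. A sketch, with the blocks reduced to scalars for brevity (in the PDE setting each block is an N x N tridiagonal matrix and the division becomes a tridiagonal solve); the numbers are invented demo data.

```python
def block_forward_substitution(t, b):
    """Solve L x = b where L is block lower triangular Toeplitz:
    block (i, j) is t[i - j] for j <= i. Blocks are scalars here; the
    real method replaces '/' by a solve with the diagonal block t[0]."""
    x = []
    for i, bi in enumerate(b):
        s = bi - sum(t[i - j] * x[j] for j in range(i))   # O(i) blocks
        x.append(s / t[0])
    return x

# Toeplitz blocks t[0], t[1], t[2] and a right-hand side of 3 blocks.
x = block_forward_substitution([2.0, 1.0, 1.0], [2.0, 3.0, 5.0])
```

    The double loop over earlier blocks is where the O(M^2) cost comes from; the paper's divide-and-conquer method replaces those accumulations with FFT-based Toeplitz matrix-vector products to reach O(MN log^2 M).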

  15. User's manual for seismic analysis code 'SONATINA-2V'

    International Nuclear Information System (INIS)

    Hanawa, Satoshi; Iyoku, Tatsuo

    2001-08-01

    The seismic analysis code SONATINA-2V has been developed to analyze the behavior of the HTTR core graphite components under seismic excitation. The SONATINA-2V code is a two-dimensional computer program capable of analyzing the vertical arrangement of the HTTR graphite components, such as fuel blocks, replaceable reflector blocks and permanent reflector blocks, as well as their restraint structures. In the analytical model, each block is treated as a rigid body and is restrained by dowel pins which restrict relative horizontal movement but allow vertical and rocking motions between upper and lower blocks. Moreover, the SONATINA-2V code is capable of analyzing the core vibration behavior under simultaneous excitation in the vertical and horizontal directions. The SONATINA-2V code is composed of the main program, a pre-processor for preparing the input data to SONATINA-2V, and a post-processor for data processing and producing graphics from the analytical results. The SONATINA-2V code was developed to run on the MSP computer system of the Japan Atomic Energy Research Institute (JAERI), but that system was retired as computer technology advanced. Therefore, the analysis code was improved so that it can be operated on JAERI's UNIX machine, the SR8000 computer system. The user's manual for the seismic analysis code SONATINA-2V, including the pre- and post-processors, is given in the present report. (author)

  16. Finite Macro-Element Mesh Deformation in a Structured Multi-Block Navier-Stokes Code

    Science.gov (United States)

    Bartels, Robert E.

    2005-01-01

    A two-step mesh deformation scheme is developed for a structured multi-block Navier-Stokes code. The first step is a finite element solution of either user-defined or automatically generated macro-elements. Macro-elements are hexahedral finite elements created from a subset of points from the full mesh. When assembled, the finite element system spans the complete flow domain. Macro-element moduli vary according to the distance to the nearest surface, resulting in extremely stiff elements near a moving surface and very pliable elements away from boundaries. Solution of the finite element system for the imposed boundary deflections generally produces smoothly varying nodal deflections. The manner in which the distance to the nearest surface is computed has been found to critically influence the quality of the element deformation. The second step is a transfinite interpolation which distributes the macro-element nodal deflections to the remaining fluid mesh points. The scheme is demonstrated for several two-dimensional applications.
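    Along a single grid line, the second step reduces to blending the two end deflections by normalized arclength. The sketch below is an illustrative one-dimensional transfinite interpolation, not the code from the report, and the mesh points and deflections are invented.

```python
def tfi_line(points, d_start, d_end):
    """Blend two end-point deflection vectors along a polyline of mesh
    points, weighted by normalised cumulative arclength."""
    arc = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        arc.append(arc[-1] + ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5)
    total = arc[-1]
    return [tuple((1 - s / total) * a + (s / total) * b
                  for a, b in zip(d_start, d_end)) for s in arc]

# End deflections (0, 1) and (0, 0) spread over an unevenly spaced line.
defl = tfi_line([(0.0, 0.0), (1.0, 0.0), (3.0, 0.0)], (0.0, 1.0), (0.0, 0.0))
```

    Weighting by arclength rather than by index keeps the distributed deflections smooth on stretched meshes, mirroring the role transfinite interpolation plays after the macro-element solve.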

  17. Real time implementation of a linear predictive coding algorithm on digital signal processor DSP32C

    International Nuclear Information System (INIS)

    Sheikh, N.M.; Usman, S.R.; Fatima, S.

    2002-01-01

    Pulse Code Modulation (PCM) has been widely used in speech coding. However, due to its high bit rate, PCM has severe limitations in applications where high spectral efficiency is desired, for example, in mobile communication and CD-quality broadcasting systems. These limitations have motivated research in bit-rate reduction techniques. Linear predictive coding (LPC) is one of the most powerful, though complex, techniques for bit-rate reduction. With the introduction of powerful digital signal processors (DSPs) it is possible to implement the complex LPC algorithm in real time. In this paper we present a real-time implementation of the LPC algorithm on AT&T's DSP32C at a sampling frequency of 8192 Hz. Application of the LPC algorithm to two speech signals is discussed. Using this implementation, a bit-rate reduction of 1:3 is achieved for better than toll-quality speech, while a reduction of 1:16 is possible for the speech quality required in military applications. (author)
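    The analysis stage of LPC can be sketched in a few lines. The code below is a generic autocorrelation-method implementation with the Levinson-Durbin recursion, a hedged stand-in for the paper's DSP32C implementation (which is not reproduced here); frame length and order are illustrative:

```python
import numpy as np

def autocorr(x, order):
    """Autocorrelation r[0..order] of a speech frame."""
    return np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])

def levinson_durbin(r, order):
    """Solve the LPC normal equations by the Levinson-Durbin recursion;
    returns the prediction filter [1, a1, ..., ap] and the residual energy."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                      # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        err *= (1.0 - k * k)
    return a, err
```

    The quantized filter coefficients plus the residual parameters are what get transmitted instead of the PCM samples, which is where the bit-rate reduction comes from.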

  18. Zero-block mode decision algorithm for H.264/AVC.

    Science.gov (United States)

    Lee, Yu-Ming; Lin, Yinyi

    2009-03-01

    In a previous paper, we proposed a zero-block intermode decision algorithm for H.264 video coding based upon the number of zero-blocks of 4 x 4 DCT coefficients between the current macroblock and the co-located macroblock. That algorithm achieves a significant reduction in computation, but its benefit is limited for high bit-rate coding. To improve computation efficiency, in this paper we suggest an enhanced zero-block decision algorithm, which uses an early zero-block detection method to compute the number of zero-blocks instead of direct DCT and quantization (DCT/Q) calculation, and which incorporates two decision methods suited to the semi-stationary and nonstationary regions of a video sequence. In addition, the zero-block decision algorithm is also applied to intramode prediction in P frames. The enhanced zero-block decision algorithm reduces total encoding time by an average of 27% compared to the original zero-block decision algorithm.
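    The bookkeeping behind a zero-block count can be sketched as follows. This is an illustrative reconstruction, not the authors' algorithm: a floating-point DCT (via SciPy) stands in for H.264's integer transform, and a simple dead-zone threshold stands in for the real quantizer:

```python
import numpy as np
from scipy.fft import dctn

def count_zero_blocks(residual, qstep):
    """Count the 4x4 blocks of a residual whose transform coefficients
    all quantize to zero under a dead-zone quantizer of step qstep."""
    h, w = residual.shape
    zeros = 0
    for i in range(0, h - h % 4, 4):
        for j in range(0, w - w % 4, 4):
            coeff = dctn(residual[i:i + 4, j:j + 4], norm='ortho')
            if np.all(np.abs(coeff) < qstep):   # every coefficient -> 0
                zeros += 1
    return zeros
```

    A macroblock whose 4 x 4 blocks all quantize to zero can skip most of the mode-decision work, which is the saving the paper exploits.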

  19. Ultrahigh Molecular Weight Linear Block Copolymers: Rapid Access by Reversible-Deactivation Radical Polymerization and Self- Assembly into Large Domain Nanostructures

    Energy Technology Data Exchange (ETDEWEB)

    Mapas, Jose Kenneth D.; Thomay, Tim; Cartwright, Alexander N.; Ilavsky, Jan; Rzayev, Javid

    2016-05-05

    Block copolymer (BCP) derived periodic nanostructures with domain sizes larger than 150 nm present a versatile platform for the fabrication of photonic materials. So far, the access to such materials has been limited to highly synthetically involved protocols. Herein, we report a simple, “user-friendly” method for the preparation of ultrahigh molecular weight linear poly(solketal methacrylate-b-styrene) block copolymers by a combination of Cu-wire-mediated ATRP and RAFT polymerizations. The synthesized copolymers with molecular weights up to 1.6 million g/mol and moderate dispersities readily assemble into highly ordered cylindrical or lamellar microstructures with domain sizes as large as 292 nm, as determined by ultra-small-angle X-ray scattering and scanning electron microscopy analyses. Solvent cast films of the synthesized block copolymers exhibit stop bands in the visible spectrum correlated to their domain spacings. The described method opens new avenues for facilitated fabrication and the advancement of fundamental understanding of BCP-derived photonic nanomaterials for a variety of applications.

  20. Testing .NET application blocks version 1.0

    CERN Document Server

    Microsoft. Redmond

    2005-01-01

    Complex software environments require more in-depth testing. This book delivers the detailed guidance you need to plan and execute testing for the solutions you develop with Microsoft PATTERNS & PRACTICES application blocks. Whether you're customizing the application blocks or integrating them into existing applications, you'll understand the key considerations for verifying that your code meets its requirements for performance, availability, scalability, compatibility, globalization, and security features. You'll find code examples, sample test cases, and checklists that demonstrate how to p

  1. Joint Network Coding and Opportunistic Scheduling for the Bidirectional Relay Channel

    KAUST Repository

    Shaqfeh, Mohammad

    2013-05-27

    In this paper, we consider a two-way communication system in which two users communicate with each other through an intermediate relay over block-fading channels. We investigate the optimal opportunistic scheduling scheme in order to maximize the long-term average transmission rate in the system assuming symmetric information flow between the two users. Based on the channel state information, the scheduler decides that either one of the users transmits to the relay, or the relay transmits to a single user or broadcasts to both users a combined version of the two users’ transmitted information by using linear network coding. We obtain the optimal scheduling scheme by using the Lagrangian dual problem. Furthermore, in order to characterize the gains of network coding and opportunistic scheduling, we compare the achievable rate of the system versus suboptimal schemes in which the gains of network coding and opportunistic scheduling are partially exploited.
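    The linear network coding step in the broadcast phase reduces, in its simplest binary form, to a bitwise XOR at the relay. A minimal sketch (equal-length packets assumed; the scheduling and Lagrangian optimization of the paper are not shown):

```python
def relay_broadcast(msg_a: bytes, msg_b: bytes) -> bytes:
    """Relay combines the two users' packets with a bitwise XOR,
    the simplest linear network code over GF(2)."""
    return bytes(x ^ y for x, y in zip(msg_a, msg_b))

def user_decode(broadcast: bytes, own_msg: bytes) -> bytes:
    """Each user cancels its own packet to recover the other's."""
    return bytes(x ^ y for x, y in zip(broadcast, own_msg))
```

    Each user already knows its own packet, so a single broadcast serves both directions, which is where the network coding gain comes from.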

  2. Joint Network Coding and Opportunistic Scheduling for the Bidirectional Relay Channel

    KAUST Repository

    Shaqfeh, Mohammad; Alnuweiri, Hussein; Alouini, Mohamed-Slim; Zafar, Ammar

    2013-01-01

    In this paper, we consider a two-way communication system in which two users communicate with each other through an intermediate relay over block-fading channels. We investigate the optimal opportunistic scheduling scheme in order to maximize the long-term average transmission rate in the system assuming symmetric information flow between the two users. Based on the channel state information, the scheduler decides that either one of the users transmits to the relay, or the relay transmits to a single user or broadcasts to both users a combined version of the two users’ transmitted information by using linear network coding. We obtain the optimal scheduling scheme by using the Lagrangian dual problem. Furthermore, in order to characterize the gains of network coding and opportunistic scheduling, we compare the achievable rate of the system versus suboptimal schemes in which the gains of network coding and opportunistic scheduling are partially exploited.

  3. Further Generalisations of Twisted Gabidulin Codes

    DEFF Research Database (Denmark)

    Puchinger, Sven; Rosenkilde, Johan Sebastian Heesemann; Sheekey, John

    2017-01-01

    We present a new family of maximum rank distance (MRD) codes. The new class contains codes that are neither equivalent to a generalised Gabidulin nor to a twisted Gabidulin code, the only two known general constructions of linear MRD codes.

  4. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

    We propose a coding scheme based on the use of systematic linear codes with low-density generator matrices (LDGM codes) for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.
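    A systematic LDGM encoder is easy to sketch: the generator is G = [I | P] with a sparse parity part P, so encoding is a sparse matrix-vector product over GF(2). The snippet below is a toy construction (dimensions and row weight are illustrative, not the paper's design):

```python
import numpy as np

def make_ldgm_parity(k, m, row_weight, seed=0):
    """Random sparse parity part P of a systematic LDGM code
    G = [I | P]; each row of P carries `row_weight` ones."""
    rng = np.random.default_rng(seed)
    P = np.zeros((k, m), dtype=np.uint8)
    for i in range(k):
        P[i, rng.choice(m, size=row_weight, replace=False)] = 1
    return P

def ldgm_encode(u, P):
    """Systematic encoding: codeword = [u | u.P mod 2]."""
    return np.concatenate([u, u @ P % 2])
```

    The low density of P is what keeps both encoding and message-passing decoding cheap.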

  5. Development of seismic analysis model for HTGR core on commercial FEM code

    International Nuclear Information System (INIS)

    Tsuji, Nobumasa; Ohashi, Kazutaka

    2015-01-01

    The aftermath of the Great East Japan Earthquake has prompted a severe revision of the design basis earthquake intensity. In the aseismic design of a block-type HTGR, ensuring the structural integrity of the core blocks and other structures made of graphite becomes more important. For this purpose, it is necessary to predict the motion of core blocks that collide with adjacent blocks. Some seismic analysis codes were developed in the 1970s, but they are special-purpose codes that interoperate poorly with other structural analysis codes. We develop a two-dimensional vertical-slice analytical model on a multi-purpose commercial FEM code, which takes into account the multiple impacts and friction between block interfaces and the rocking motion on contact with the dowel pins of the HTGR core by using contact elements. This model is verified by comparison with the experimental results of a 12-column vertical-slice vibration test. (author)

  6. Joint opportunistic scheduling and network coding for bidirectional relay channel

    KAUST Repository

    Shaqfeh, Mohammad

    2013-07-01

    In this paper, we consider a two-way communication system in which two users communicate with each other through an intermediate relay over block-fading channels. We investigate the optimal opportunistic scheduling scheme in order to maximize the long-term average transmission rate in the system assuming symmetric information flow between the two users. Based on the channel state information, the scheduler decides that either one of the users transmits to the relay, or the relay transmits to a single user or broadcasts to both users a combined version of the two users' transmitted information by using linear network coding. We obtain the optimal scheduling scheme by using the Lagrangian dual problem. Furthermore, in order to characterize the gains of network coding and opportunistic scheduling, we compare the achievable rate of the system versus suboptimal schemes in which the gains of network coding and opportunistic scheduling are partially exploited. © 2013 IEEE.

  7. Efficient block preconditioned eigensolvers for linear response time-dependent density functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Vecharynski, Eugene [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Brabec, Jiri [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Shao, Meiyue [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Govind, Niranjan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States). Environmental Molecular Sciences Lab.; Yang, Chao [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division

    2017-12-01

    We present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from time-dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into a product eigenvalue problem that is self-adjoint with respect to a K-inner product. This product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. The solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem; the other component can be easily recovered in a postprocessing procedure. Therefore, the algorithms we present here are more efficient than existing algorithms that try to approximate both components of the eigenvectors simultaneously. The efficiency of the new algorithms is demonstrated by numerical examples.
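    The structure-exploiting transformation can be checked on a toy dense problem. With A and B symmetric and A ± B positive definite (the standard assumptions), the eigenvalues of the 2n × 2n linear response matrix come in ± pairs, and the positive half is obtained from an n × n product eigenvalue problem. The sketch below verifies this numerically; it does not reproduce the paper's Davidson/LOBPCG solvers:

```python
import numpy as np

# Toy linear response matrix H = [[A, B], [-B, -A]] with A, B
# symmetric and A +/- B positive definite (illustrative data).
rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
C = rng.standard_normal((n, n))
B = 0.1 * (C + C.T)

H = np.block([[A, B], [-B, -A]])
omega_full = np.sort(np.linalg.eigvals(H).real)        # +/- pairs

# Product problem: (A - B)(A + B) u = omega^2 u, with u = x + y,
# at half the dimension of the original problem.
omega2 = np.linalg.eigvals((A - B) @ (A + B)).real
omega_half = np.sort(np.sqrt(np.abs(omega2)))
```

    The derivation is two lines: writing u = x + y and v = x − y, the coupled equations give (A + B)u = ωv and (A − B)v = ωu, hence (A − B)(A + B)u = ω²u.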

  8. An algorithm for the construction of substitution box for block ciphers based on projective general linear group

    Directory of Open Access Journals (Sweden)

    Anas Altaleb

    2017-03-01

    The aim of this work is to synthesize 8×8 substitution boxes (S-boxes) for block ciphers. The confusion-creating potential of an S-box depends on its construction technique. In the first step, we apply the algebraic action of the projective general linear group PGL(2, GF(2^8)) on the Galois field GF(2^8). In the second step we use the permutations of the symmetric group S256 to construct a new kind of S-box. To explain the proposed extension scheme, we give an example and construct one new S-box. The strength of the extended S-box is computed, and an insight is given into calculating the confusion-creating potency. To analyze the security of the S-box, some popular algebraic and statistical attacks are performed as well. The proposed S-box is analyzed by the bit independence criterion, linear approximation probability test, nonlinearity test, strict avalanche criterion, differential approximation probability test, and majority logic criterion. A comparison of the proposed S-box with existing S-boxes shows that the analyses of the extended S-box are comparatively better.
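    The PGL(2, GF(2^8)) action of the first step is the fractional linear (Möbius) map x → (ax + b)/(cx + d) with ad ≠ bc. The sketch below builds one such S-box; the field polynomial (the AES polynomial 0x11B) and the group element (a, b, c, d) are illustrative choices, not the paper's:

```python
def gf_mul(a, b, poly=0x11B):
    """Multiply in GF(2^8) modulo an (assumed) irreducible polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_inv(a):
    """Inverse by exhaustive search (fine for a 256-element field)."""
    if a == 0:
        return 0
    return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

def mobius_sbox(a=0x03, b=0x05, c=0x01, d=0x02):
    """S-box from the PGL(2, GF(2^8)) action x -> (a*x + b)/(c*x + d)."""
    assert gf_mul(a, d) != gf_mul(b, c)      # ad != bc: element of PGL
    sbox = []
    for x in range(256):
        num = gf_mul(a, x) ^ b
        den = gf_mul(c, x) ^ d
        if den == 0:                          # the pole d/c (c != 0 here):
            sbox.append(gf_mul(a, gf_inv(c)))  # fold infinity back to a/c
        else:
            sbox.append(gf_mul(num, gf_inv(den)))
    return sbox
```

    Sending the pole d/c to a/c (the image of the point at infinity) keeps the map a bijection on the 256 byte values, which any S-box must be.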

  9. Vectorization of nuclear codes 90-1

    International Nuclear Information System (INIS)

    Nonomiya, Iwao; Nemoto, Toshiyuki; Ishiguro, Misako; Harada, Hiroo; Hori, Takeo.

    1990-09-01

    The vectorization has been made for four codes: the SONATINA-2V HTTR version, TRIDOSE, VIENUS, and SCRYU. The SONATINA-2V HTTR version is a code for analyzing the dynamic behavior of fuel blocks in the vertical slice of the HTGR (High Temperature Gas-cooled Reactor) core under seismic perturbation, TRIDOSE is a code for calculating environmental tritium concentration and dose, VIENUS is a code for analyzing the viscoelastic stress of the fuel block of the HTTR (High Temperature gas-cooled Test Reactor), and SCRYU is a thermal-hydraulics code with a boundary-fitted coordinate system. The total speedup ratio of the vectorized versions to the original scalar ones is 5.2 for the SONATINA-2V HTTR version, 5.9 ∼ 6.9 for TRIDOSE, 6.7 for VIENUS, and 7.6 for SCRYU. In this report, we describe the outline of the codes, the techniques used for the vectorization, verification of the computed results, and the speedup effect on the vectorized codes. (author)

  10. Kneser-Hecke-operators in coding theory

    OpenAIRE

    Nebe, Gabriele

    2005-01-01

    The Kneser-Hecke-operator is a linear operator defined on the complex vector space spanned by the equivalence classes of a family of self-dual codes of fixed length. It maps a linear self-dual code $C$ over a finite field to the formal sum of the equivalence classes of those self-dual codes that intersect $C$ in a codimension 1 subspace. The eigenspaces of this self-adjoint linear operator may be described in terms of a coding-theory analogue of the Siegel $\Phi$-operator.

  11. Design of Rate-Compatible Parallel Concatenated Punctured Polar Codes for IR-HARQ Transmission Schemes

    Directory of Open Access Journals (Sweden)

    Jian Jiao

    2017-11-01

    In this paper, we propose rate-compatible (RC) parallel concatenated punctured polar (PCPP) codes for incremental redundancy hybrid automatic repeat request (IR-HARQ) transmission schemes, which can transmit multiple data blocks over a time-varying channel. The PCPP coding scheme can provide RC polar coding blocks in order to adapt to channel variations. First, we investigate an improved random puncturing (IRP) pattern for the PCPP coding scheme, motivated by the code-rate and block-length limitations of conventional polar codes. The proposed IRP algorithm selects puncturing bits only from the frozen-bit set and keeps the information bits unchanged during puncturing, which improves decoding performance by 0.2–1 dB over the existing random puncturing (RP) algorithm. Then, we develop an RC IR-HARQ transmission scheme based on PCPP codes. By analyzing the overhead of the previously successfully decoded PCPP coding block in our IR-HARQ scheme, the optimal initial code rate can be determined for each new PCPP coding block over time-varying channels. Simulation results show that the average number of transmissions is about 1.8 for each PCPP coding block in our RC IR-HARQ scheme with a 2-level PCPP encoding construction, roughly halving the average number of transmissions compared with existing RC polar coding schemes.

  12. Numerical method improvement for a subchannel code

    Energy Technology Data Exchange (ETDEWEB)

    Ding, W.J.; Gou, J.L.; Shan, J.Q. [Xi' an Jiaotong Univ., Shaanxi (China). School of Nuclear Science and Technology

    2016-07-15

    Previous studies showed that subchannel codes spend most of their CPU time solving the matrix formed by the conservation equations. Traditional matrix solving methods such as the Gaussian elimination method and the Gauss-Seidel iteration method cannot meet the requirement of computational efficiency. Therefore, a new algorithm for solving the block penta-diagonal matrix is designed based on Stone's incomplete LU (ILU) decomposition method. In the new algorithm, the original block penta-diagonal matrix is decomposed into a block upper triangular matrix and a block lower triangular matrix as well as a nonzero small matrix. After that, the LU algorithm is applied to solve the matrix until convergence. In order to compare the computational efficiency, the newly designed algorithm is applied to the ATHAS code in this paper. The calculation results show that more than 80 % of the total CPU time can be saved with the new ILU algorithm for a 324-channel PWR assembly problem, compared with the original ATHAS code.
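    The idea of ILU-accelerated iteration on a penta-diagonal system can be sketched with library tools. The snippet below uses SciPy's general-purpose spilu (standing in for the paper's Stone-type block factorization, which is not reproduced) to precondition a BiCGStab solve of a toy penta-diagonal matrix; the coefficients are invented, not ATHAS data:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spilu, LinearOperator, bicgstab

# Toy penta-diagonal system standing in for the subchannel
# conservation-equation matrix (diagonally dominant, so it is solvable).
n = 200
A = diags([-1.0, -2.0, 8.0, -2.0, -1.0], offsets=[-5, -1, 0, 1, 5],
          shape=(n, n), format='csc')
b = np.ones(n)

ilu = spilu(A, drop_tol=1e-4)              # incomplete LU factorization
M = LinearOperator((n, n), ilu.solve)      # ...used as a preconditioner
x, info = bicgstab(A, b, M=M)
```

    The incomplete factors are cheap to apply at every iteration, which is why an ILU preconditioner can cut the bulk of the linear-solve time the paper measures.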

  13. Selective encryption for H.264/AVC video coding

    Science.gov (United States)

    Shi, Tuo; King, Brian; Salama, Paul

    2006-02-01

    Due to the ease with which digital data can be manipulated and due to the ongoing advancements that have brought us closer to pervasive computing, the secure delivery of video and images has become a challenging problem. Despite the advantages and opportunities that digital video provides, illegal copying and distribution as well as plagiarism of digital audio, images, and video is still ongoing. In this paper we describe two techniques for securing H.264 coded video streams. The first technique, SEH264Algorithm1, groups the data into the following blocks: (1) a block that contains the sequence parameter set and the picture parameter set, (2) a block containing a compressed intra coded frame, (3) a block containing the slice header of a P slice, all the macroblock headers within the same P slice, and all the luma and chroma DC coefficients belonging to all the macroblocks within the same slice, (4) a block containing all the AC coefficients, and (5) a block containing all the motion vectors. The first three are encrypted whereas the last two are not. The second method, SEH264Algorithm2, relies on the use of multiple slices per coded frame. The algorithm searches the compressed video sequence for start codes (0x000001) and then encrypts the next N bits of data.
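    The start-code scan of SEH264Algorithm2 can be sketched as follows. This toy version encrypts whole bytes rather than N bits, and a repeating-key XOR stands in for a real cipher (the paper does not specify one here; something like AES in CTR mode would be the realistic choice):

```python
START_CODE = b'\x00\x00\x01'

def selectively_encrypt(stream: bytes, n: int, keystream: bytes) -> bytes:
    """Find each 0x000001 start code and XOR the next n bytes with a
    repeating keystream; everything else passes through unchanged."""
    out = bytearray(stream)
    i = 0
    while True:
        i = stream.find(START_CODE, i)
        if i == -1:
            break
        start = i + len(START_CODE)
        for j in range(start, min(start + n, len(stream))):
            out[j] ^= keystream[j % len(keystream)]
        i = start
    return bytes(out)
```

    Because XOR is an involution, running the same routine again with the same keystream decrypts the stream, provided the encrypted payload does not itself happen to contain a 0x000001 pattern.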

  14. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    Science.gov (United States)

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.
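    A much-reduced version of such an ILP can be written down directly. The model below (all numbers invented for illustration, not the paper's) picks at most one MCS per SVC layer to maximize a utility, subject to an airtime budget and the SVC dependency that an enhancement layer is useful only if the layer below it is also sent; scipy.optimize.milp (SciPy ≥ 1.9) solves it:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Binary variables x[layer*2 + mcs]: 3 layers, 2 MCSs (robust/fast).
t = np.array([1.0, 0.5, 1.0, 0.5, 1.0, 0.5])   # airtime per (layer, MCS)
u = np.array([30, 12, 20, 8, 10, 4])           # utility per (layer, MCS)
A = np.array([
    t,                           # total airtime <= 2.0
    [1, 1, 0, 0, 0, 0],          # layer 0: at most one MCS
    [0, 0, 1, 1, 0, 0],          # layer 1: at most one MCS
    [0, 0, 0, 0, 1, 1],          # layer 2: at most one MCS
    [-1, -1, 1, 1, 0, 0],        # layer 1 requires layer 0
    [0, 0, -1, -1, 1, 1],        # layer 2 requires layer 1
])
ub = np.array([2.0, 1, 1, 1, 0, 0])
res = milp(c=-u,                 # milp minimizes, so negate the utility
           constraints=LinearConstraint(A, -np.inf, ub),
           integrality=np.ones(6), bounds=Bounds(0, 1))
```

    With these numbers the solver sends layers 0 and 1 at the robust MCS and drops layer 2; the real formulation in the paper is larger but has the same shape.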

  15. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Dongyul Lee

    2014-01-01

    Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.

  16. Decoding LDPC Convolutional Codes on Markov Channels

    Directory of Open Access Journals (Sweden)

    Kashyap Manohar

    2008-01-01

    This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.
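    The Gilbert-Elliott model used in the example designs is a two-state Markov chain with a different bit-error probability in each state. A minimal simulator (parameter values are illustrative, not the paper's):

```python
import numpy as np

def gilbert_elliott(n, p_gb=0.01, p_bg=0.1, e_good=1e-3, e_bad=0.2, seed=0):
    """Simulate a Gilbert-Elliott channel: returns the hidden state
    sequence (0 = good, 1 = bad) and the bit-error pattern."""
    rng = np.random.default_rng(seed)
    states = np.empty(n, dtype=np.uint8)
    s = 0
    for i in range(n):
        states[i] = s
        if s == 0 and rng.random() < p_gb:
            s = 1                      # good -> bad transition
        elif s == 1 and rng.random() < p_bg:
            s = 0                      # bad -> good transition
    err_prob = np.where(states == 0, e_good, e_bad)
    errors = rng.random(n) < err_prob
    return states, errors
```

    The bursty error pattern this produces is what makes joint decoding and state estimation pay off compared with treating the channel as memoryless.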

  17. Decoding LDPC Convolutional Codes on Markov Channels

    Directory of Open Access Journals (Sweden)

    Chris Winstead

    2008-04-01

    This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.

  18. Fast linear solver for radiative transport equation with multiple right hand sides in diffuse optical tomography

    International Nuclear Information System (INIS)

    Jia, Jingfei; Kim, Hyun K.; Hielscher, Andreas H.

    2015-01-01

    It is well known that the radiative transfer equation (RTE) provides more accurate tomographic results than its diffusion approximation (DA). However, RTE-based tomographic reconstruction codes have limited applicability in practice due to their high computational cost. In this article, we propose a new efficient method for solving the RTE forward problem with multiple light sources in an all-at-once manner instead of solving it for each source separately. To this end, we introduce a novel linear solver called the block biconjugate gradient stabilized method (block BiCGStab) that makes full use of the information shared between different right-hand sides to accelerate solution convergence. Two parallelized block BiCGStab methods are proposed for additional acceleration when the number of threads is limited. We evaluate the performance of this algorithm with numerical simulation studies involving the Delta–Eddington approximation to the scattering phase function. The results show that the single-threaded block RTE solver proposed here reduces computation time by a factor of 1.5–3 compared to the traditional sequential solution method, and that the parallel block solver is faster by a factor of 1.5 compared to the traditional parallel sequential method. This block linear solver is, moreover, independent of the discretization schemes and preconditioners used; thus further acceleration and higher accuracy can be expected when it is combined with other existing discretization schemes or preconditioners. - Highlights: • We solve the multiple-right-hand-side problem in DOT with a block BiCGStab method. • We examine the CPU times of the block solver and the traditional sequential solver. • The block solver is faster than the sequential solver by a factor of 1.5–3.0. • Multi-threading block solvers give additional speedup when the number of threads is limited.

  19. Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.

    2012-01-01

    This paper considers using BCH codes for distributed source coding using feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and a side information, Y, such that the marginal probability of each symbol, Xi in X, given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH coding achieves better performance than LDPCA (Low-Density Parity-Check Accumulate) codes for high correlation between the source symbols and the side information.
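    The Slepian-Wolf mechanics can be shown with the simplest BCH code, the (7,4) Hamming code: the encoder sends only the 3-bit syndrome of X, and the decoder combines it with the side information Y. This is a fixed-rate sketch; the paper's rate adaptation via feedback is not modeled:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is the
# binary expansion of j+1, so the error syndrome reads off the
# position of a single flipped bit directly.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

def sw_encode(x):
    """Slepian-Wolf 'encoder': transmit only the 3-bit syndrome of x."""
    return H @ x % 2

def sw_decode(syndrome, y):
    """Combine the received syndrome with side information y; recovers
    x exactly when x and y differ in at most one position."""
    e = (syndrome + H @ y) % 2          # syndrome of the difference x - y
    pos = e[0] * 4 + e[1] * 2 + e[2]    # 1-based bit position, 0 = no error
    x_hat = y.copy()
    if pos:
        x_hat[pos - 1] ^= 1
    return x_hat
```

    Three bits replace seven whenever X and Y disagree in at most one position, which is exactly the highly correlated, skewed regime the paper targets.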

  20. Calculation of power spectra for block coded signals

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2001-01-01

    We present some improvements in the procedure for calculating power spectra of signals based on finite-state descriptions and constant block size. In addition to simplified calculations, our results provide some insight into the form of the closed expressions and into the relation between the spectrum…

  1. VENTURE: a code block for solving multigroup neutronics problems applying the finite-difference diffusion-theory approximation to neutron transport

    International Nuclear Information System (INIS)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.

    1975-10-01

    The computer code block VENTURE, designed to solve multigroup neutronics problems with application of the finite-difference diffusion-theory approximation to neutron transport (or alternatively simple P1) in up to three-dimensional geometry, is described. A variety of types of problems may be solved: the usual eigenvalue problem, a direct criticality search on the buckling, on a reciprocal velocity absorber (prompt mode), or on nuclide concentrations, or an indirect criticality search on nuclide concentrations or on dimensions. First-order perturbation analysis capability is available at the macroscopic cross section level.
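    The eigenvalue problem such codes solve can be illustrated by a miniature one-group, one-dimensional analogue: finite-difference diffusion with a power iteration for the multiplication factor k (cross sections, slab width, and mesh are invented for illustration; this is not VENTURE's solver):

```python
import numpy as np

# One-group 1-D slab: -D phi'' + sig_a phi = (1/k) nu_sig_f phi,
# zero-flux boundary conditions, illustrative cross sections.
D, sig_a, nu_sig_f, L, n = 1.2, 0.06, 0.08, 100.0, 400
h = L / (n + 1)

# Finite-difference loss operator A = -D d^2/dx^2 + sig_a
A = (np.diag(np.full(n, 2 * D / h**2 + sig_a))
     + np.diag(np.full(n - 1, -D / h**2), 1)
     + np.diag(np.full(n - 1, -D / h**2), -1))

# Power iteration: k is the dominant eigenvalue of A^-1 * nu_sig_f
phi = np.ones(n)
k = 1.0
for _ in range(200):
    phi = np.linalg.solve(A, nu_sig_f * phi)
    k = np.linalg.norm(phi)
    phi /= k

# Analytic bare-slab value for comparison: k = nu_sig_f / (sig_a + D*B^2)
k_analytic = nu_sig_f / (sig_a + D * (np.pi / L)**2)
```

    On a fine mesh the finite-difference k agrees with the analytic buckling result to several digits, which is the basic consistency check for any diffusion-theory eigenvalue solver.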

  2. Rateless feedback codes

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip

    2012-01-01

    This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback into LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.

  3. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan; Gao, Xin

    2014-01-01

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.

  4. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.

  5. Fast QC-LDPC code for free space optical communication

    Science.gov (United States)

    Wang, Jin; Zhang, Qi; Udeh, Chinonso Paschal; Wu, Rangzhong

    2017-02-01

    Free Space Optical (FSO) communication systems use the atmosphere as a propagation medium, so atmospheric turbulence leads to multiplicative noise related to the signal intensity. In order to suppress the signal fading induced by multiplicative noise, we propose a fast Quasi-Cyclic (QC) Low-Density Parity-Check (LDPC) code for FSO communication systems. As a linear block code based on a sparse matrix, QC-LDPC performs extremely close to the Shannon limit. Previous studies of LDPC codes in FSO communications have mainly focused on the Gaussian channel and the Rayleigh channel; the LDPC code designed in this study over the atmospheric turbulence channel, which is neither Gaussian nor Rayleigh, is closer to the practical situation. Based on the characteristics of the atmospheric channel, modeled by the logarithmic-normal and K distributions, we design a special QC-LDPC code and derive the log-likelihood ratio (LLR). An irregular QC-LDPC code for fast coding, with variable rates, is proposed in this paper. The proposed code achieves the excellent performance of LDPC codes: high efficiency at low rates, stability at high rates, and a small number of iterations. Belief propagation (BP) decoding shows that the bit error rate (BER) is clearly reduced as the Signal-to-Noise Ratio (SNR) increases, and the BER after decoding keeps decreasing as the SNR grows, with no error-floor phenomenon. Therefore, LDPC channel coding can effectively improve the performance of FSO systems.
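    The quasi-cyclic structure means the parity-check matrix is assembled from circulant permutation blocks, which is what makes encoding fast. A small constructor (the base matrix of shift exponents is illustrative, not the paper's design):

```python
import numpy as np

def circulant(n, shift):
    """n x n circulant permutation matrix: the identity shifted by `shift`."""
    return np.roll(np.eye(n, dtype=np.uint8), shift, axis=1)

def qc_ldpc_H(exponents, n):
    """Assemble a QC-LDPC parity-check matrix from a base matrix of
    circulant shift exponents; -1 marks an all-zero block."""
    blocks = [[np.zeros((n, n), dtype=np.uint8) if e < 0 else circulant(n, e)
               for e in row] for row in exponents]
    return np.block(blocks)
```

    Because every block is a cyclic shift, the encoder can be built from shift registers, and the whole H is specified by the small exponent table.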

  6. QR code-based non-linear image encryption using Shearlet transform and spiral phase transform

    Science.gov (United States)

    Kumar, Ravi; Bhaduri, Basanta; Hennelly, Bryan

    2018-02-01

    In this paper, we propose a new quick response (QR) code-based non-linear technique for image encryption using Shearlet transform (ST) and spiral phase transform. The input image is first converted into a QR code and then scrambled using the Arnold transform. The scrambled image is then decomposed into five coefficients using the ST, and the first Shearlet coefficient, C1, is interchanged with a security key before performing the inverse ST. The output after inverse ST is then modulated with a random phase mask and further spiral phase transformed to get the final encrypted image. The first coefficient, C1, is used as a private key for decryption. The sensitivity of the security keys is analysed in terms of correlation coefficient and peak signal-to-noise ratio. The robustness of the scheme is also checked against various attacks such as noise, occlusion and special attacks. Numerical simulation results are shown in support of the proposed technique and an optoelectronic set-up for encryption is also proposed.
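
    The Arnold-transform scrambling step is standard and easy to sketch. The following code (an illustrative implementation, not the authors') applies the cat map (x, y) → (x + y, x + 2y) mod N to a square image and its exact inverse:

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    """Arnold cat map scrambling of a square N x N image:
    pixel (x, y) moves to ((x + y) mod N, (x + 2y) mod N)."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def arnold_unscramble(img, iterations=1):
    # The map matrix [[1,1],[1,2]] has determinant 1; its inverse
    # [[2,-1],[-1,1]] gives the exact inverse map (2x - y, y - x) mod N.
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(2 * x - y) % n, (y - x) % n] = out[x, y]
        out = nxt
    return out

img = np.arange(64).reshape(8, 8)
scrambled = arnold_scramble(img, 3)
assert np.array_equal(arnold_unscramble(scrambled, 3), img)  # perfect inverse
```

    Because the map is a bijection on the pixel grid, scrambling is lossless; the iteration count acts as part of the key.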

  7. Real-time validation of receiver state information in optical space-time block code systems.

    Science.gov (United States)

    Alamia, John; Kurzweg, Timothy

    2014-06-15

    Free space optical interconnect (FSOI) systems are a promising solution to interconnect bottlenecks in high-speed systems. To overcome some sources of diminished FSOI performance caused by close proximity of multiple optical channels, multiple-input multiple-output (MIMO) systems implementing encoding schemes such as space-time block coding (STBC) have been developed. These schemes utilize information pertaining to the optical channel to reconstruct transmitted data. The STBC system is dependent on accurate channel state information (CSI) for optimal system performance. As a result of dynamic changes in optical channels, a system in operation will need to have updated CSI. Therefore, validation of the CSI during operation is a necessary tool to ensure FSOI systems operate efficiently. In this Letter, we demonstrate a method of validating CSI, in real time, through the use of moving averages of the maximum likelihood decoder data, and its capacity to predict the bit error rate (BER) of the system.
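
    A moving-average check of the kind described can be sketched as follows; the window size, threshold, and metric semantics are hypothetical placeholders, not the authors' parameters:

```python
from collections import deque

class CSIMonitor:
    """Running mean of the ML decoder's decision metric. A sustained rise in
    the average suggests the stored channel state information (CSI) is stale
    and should be re-estimated (illustrative threshold logic only)."""
    def __init__(self, window=4, threshold=1.0):
        self.buf = deque(maxlen=window)
        self.threshold = threshold

    def update(self, metric):
        self.buf.append(metric)
        avg = sum(self.buf) / len(self.buf)
        return avg > self.threshold   # True -> request CSI re-estimation

mon = CSIMonitor(window=4, threshold=1.0)
metrics = [0.2, 0.3, 0.4, 0.5, 2.0, 2.5, 2.5, 2.5]  # channel drifts midway
flags = [mon.update(m) for m in metrics]
print(flags)  # stays False while CSI matches, flips True as metrics grow
```

    The averaging window trades detection latency against robustness to isolated noisy decisions, which is the same trade-off the Letter evaluates against predicted BER.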

  8. SPORTS - a simple non-linear thermalhydraulic stability code

    International Nuclear Information System (INIS)

    Chatoorgoon, V.

    1986-01-01

    A simple code, called SPORTS, has been developed for two-phase stability studies. A novel method of solution of the finite difference equations was devised and incorporated, and many of the approximations that are common in other stability codes are avoided. SPORTS is believed to be accurate and efficient, as small and large time-steps are permitted, and hence suitable for micro-computers. (orig.)

  9. Various semiclassical limits of torus conformal blocks

    Energy Technology Data Exchange (ETDEWEB)

    Alkalaev, Konstantin [I.E. Tamm Department of Theoretical Physics, P.N. Lebedev Physical Institute,Leninsky ave. 53, Moscow, 119991 (Russian Federation); Department of General and Applied Physics, Moscow Institute of Physics and Technology,Institutskiy per. 7, Dolgoprudnyi, Moscow region, 141700 (Russian Federation); Geiko, Roman [Mathematics Department, National Research University Higher School of Economics,Usacheva str. 6, Moscow, 119048 (Russian Federation); Rappoport, Vladimir [I.E. Tamm Department of Theoretical Physics, P.N. Lebedev Physical Institute,Leninsky ave. 53, Moscow, 119991 (Russian Federation); Department of Quantum Physics, Institute for Information Transmission Problems,Bolshoy Karetny per. 19, Moscow, 127994 (Russian Federation)

    2017-04-12

    We study four types of one-point torus blocks arising in the large central charge regime. There are the global block, the light block, the heavy-light block, and the linearized classical block, according to different regimes of conformal dimensions. It is shown that the blocks are not independent, being connected to each other by various links. We find that the global, light, and heavy-light blocks correspond to three different contractions of the Virasoro algebra. Also, we formulate the c-recursive representation of the one-point torus blocks, which is relevant in the semiclassical approximation.

  10. Entanglement-assisted quantum MDS codes from negacyclic codes

    Science.gov (United States)

    Lu, Liangdong; Li, Ruihu; Guo, Luobin; Ma, Yuena; Liu, Yang

    2018-03-01

    The entanglement-assisted formalism generalizes the standard stabilizer formalism, which can transform arbitrary classical linear codes into entanglement-assisted quantum error-correcting codes (EAQECCs) by using pre-shared entanglement between the sender and the receiver. In this work, we construct six classes of q-ary entanglement-assisted quantum MDS (EAQMDS) codes based on classical negacyclic MDS codes by exploiting two or more pre-shared maximally entangled states. We show that two of these six classes of q-ary EAQMDS codes have minimum distance larger than q+1. Most of these q-ary EAQMDS codes are new in the sense that their parameters are not covered by the codes available in the literature.

  11. Exploring the Effects of Congruence and Holland's Personality Codes on Job Satisfaction: An Application of Hierarchical Linear Modeling Techniques

    Science.gov (United States)

    Ishitani, Terry T.

    2010-01-01

    This study applied hierarchical linear modeling to investigate the effect of congruence on intrinsic and extrinsic aspects of job satisfaction. Particular focus was given to differences in job satisfaction by gender and by Holland's first-letter codes. The study sample included a nationally representative sample of 1462 female and 1280 male college graduates who…

  12. Diagonal Eigenvalue Unity (DEU) code for spectral amplitude coding-optical code division multiple access

    Science.gov (United States)

    Ahmed, Hassan Yousif; Nisar, K. S.

    2013-08-01

    Codes with ideal in-phase cross correlation (CC) and practical code length to support a high number of users are required in spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. SAC systems are becoming more attractive in the field of OCDMA because of their ability to eliminate the influence of multiple access interference (MAI) and to suppress the effect of phase-induced intensity noise (PIIN). In this paper, we propose new Diagonal Eigenvalue Unity (DEU) code families with ideal in-phase CC, based on the Jordan block matrix and constructed by simple algebraic means. Four sets of DEU code families based on the code weight W and number of users N for the combinations (even, even), (even, odd), (odd, odd) and (odd, even) are constructed. This gives the DEU code more flexibility in the selection of code weight and number of users, making it a compelling candidate for future optical communication systems. Numerical results show that the proposed DEU system outperforms previously reported codes. In addition, simulation results taken from a commercial optical systems simulator, Virtual Photonic Instrument (VPI™), show that, using point-to-multipoint transmission in a passive optical network (PON), DEU has better performance and can support long spans at high data rates.

  13. Thin Films of Novel Linear-Dendritic Diblock Copolymers

    Science.gov (United States)

    Iyer, Jyotsna; Hammond, Paula

    1998-03-01

    A series of diblock copolymers with one linear block and one dendrimeric block have been synthesized with the objective of forming ultrathin film nanoporous membranes. Polyethyleneoxide serves as the linear hydrophilic portion of the diblock copolymer. The hyperbranched dendrimeric block consists of polyamidoamine with functional end groups. Thin films of these materials made by spin casting and the Langmuir-Blodgett techniques are being studied. The effect of the polyethylene oxide block size and the number and chemical nature of the dendrimer end groups on the nature and stability of the films formed will be discussed.

  14. Counting equations in algebraic attacks on block ciphers

    DEFF Research Database (Denmark)

    Knudsen, Lars Ramkilde; Miolane, Charlotte Vikkelsø

    2010-01-01

    This paper is about counting linearly independent equations for so-called algebraic attacks on block ciphers. The basic idea behind many of these approaches, e.g., XL, is to generate a large set of equations from an initial set of equations by multiplication of existing equations by the variables...... in the system. One of the most difficult tasks is to determine the exact number of linearly independent equations one obtain in the attacks. In this paper, it is shown that by splitting the equations defined over a block cipher (an SP-network) into two sets, one can determine the exact number of linearly...... independent equations which can be generated in algebraic attacks within each of these sets of a certain degree. While this does not give us a direct formula for the success of algebraic attacks on block ciphers, it gives some interesting bounds on the number of equations one can obtain from a given block...

  15. Least reliable bits coding (LRBC) for high data rate satellite communications

    Science.gov (United States)

    Vanderaar, Mark; Budinger, James; Wagner, Paul

    1992-01-01

    LRBC, a bandwidth efficient multilevel/multistage block-coded modulation technique, is analyzed. LRBC uses simple multilevel component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Soft-decision multistage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Analytical expressions and tight performance bounds are used to show that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of BPSK. The relative simplicity of Galois field algebra vs the Viterbi algorithm and the availability of high-speed commercial VLSI for block codes indicates that LRBC using block codes is a desirable method for high data rate implementations.

  16. Particle-in-Cell Code BEAMPATH for Beam Dynamics Simulations in Linear Accelerators and Beamlines

    International Nuclear Information System (INIS)

    Batygin, Y.

    2004-01-01

    A code library BEAMPATH for 2 - dimensional and 3 - dimensional space charge dominated beam dynamics study in linear particle accelerators and beam transport lines is developed. The program is used for particle-in-cell simulation of axial-symmetric, quadrupole-symmetric and z-uniform beams in a channel containing RF gaps, radio-frequency quadrupoles, multipole lenses, solenoids and bending magnets. The programming method includes hierarchical program design using program-independent modules and a flexible combination of modules to provide the most effective version of the structure for every specific case of simulation. Numerical techniques as well as the results of beam dynamics studies are presented

  17. Particle-in-Cell Code BEAMPATH for Beam Dynamics Simulations in Linear Accelerators and Beamlines

    Energy Technology Data Exchange (ETDEWEB)

    Batygin, Y.

    2004-10-28

    A code library BEAMPATH for 2 - dimensional and 3 - dimensional space charge dominated beam dynamics study in linear particle accelerators and beam transport lines is developed. The program is used for particle-in-cell simulation of axial-symmetric, quadrupole-symmetric and z-uniform beams in a channel containing RF gaps, radio-frequency quadrupoles, multipole lenses, solenoids and bending magnets. The programming method includes hierarchical program design using program-independent modules and a flexible combination of modules to provide the most effective version of the structure for every specific case of simulation. Numerical techniques as well as the results of beam dynamics studies are presented.

  18. Distributed Video Coding: Iterative Improvements

    DEFF Research Database (Denmark)

    Luong, Huynh Van

    Nowadays, emerging applications such as wireless visual sensor networks and wireless video surveillance are requiring lightweight video encoding with high coding efficiency and error-resilience. Distributed Video Coding (DVC) is a new coding paradigm which exploits the source statistics...... and noise modeling and also learn from the previous decoded Wyner-Ziv (WZ) frames, side information and noise learning (SING) is proposed. The SING scheme introduces an optical flow technique to compensate the weaknesses of the block based SI generation and also utilizes clustering of DCT blocks to capture...... cross band correlation and increase local adaptivity in noise modeling. During decoding, the updated information is used to iteratively reestimate the motion and reconstruction in the proposed motion and reconstruction reestimation (MORE) scheme. The MORE scheme not only reestimates the motion vectors...

  19. Quasi-cyclic unit memory convolutional codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Paaske, Erik; Ballan, Mark

    1990-01-01

    Unit memory convolutional codes with generator matrices, which are composed of circulant submatrices, are introduced. This structure facilitates the analysis of efficient search for good codes. Equivalences among such codes and some of the basic structural properties are discussed. In particular......, catastrophic encoders and minimal encoders are characterized and dual codes treated. Further, various distance measures are discussed, and a number of good codes, some of which result from efficient computer search and some of which result from known block codes, are presented...

  20. Interleaver Design for Turbo Coding

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Zyablov, Viktor

    1997-01-01

    By a combination of construction and random search based on a careful analysis of the low weight words and the distance properties of the component codes, it is possible to find interleavers for turbo coding with a high minimum distance. We have designed a block interleaver with permutations...

  1. Monomial codes seen as invariant subspaces

    Directory of Open Access Journals (Sweden)

    García-Planas María Isabel

    2017-08-01

    Full Text Available It is well known that cyclic codes are very useful because of their applications, since they are not computationally expensive and encoding can be easily implemented. The relationship between cyclic codes and invariant subspaces is also well known. In this paper a generalization of this relationship is presented between monomial codes over a finite field and hyperinvariant subspaces of n under an appropriate linear transformation. Using techniques of Linear Algebra it is possible to deduce certain properties for this particular type of codes, generalizing known results on cyclic codes.

  2. Feedback equivalence of convolutional codes over finite rings

    Directory of Open Access Journals (Sweden)

    DeCastro-García Noemí

    2017-12-01

    Full Text Available The approach to convolutional codes from the linear systems point of view provides us with effective tools in order to construct convolutional codes with adequate properties that let us use them in many applications. In this work, we have generalized feedback equivalence between families of convolutional codes and linear systems over certain rings, and we show that every locally Brunovsky linear system may be considered as a representation of a code under feedback convolutional equivalence.

  3. On decoding of multi-level MPSK modulation codes

    Science.gov (United States)

    Lin, Shu; Gupta, Alok Kumar

    1990-01-01

    The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch metric and path metric, using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. The simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that the soft-decision MSD reduces the decoding complexity drastically and is suboptimum. The hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multistage decoding.

  4. ComboCoding: Combined intra-/inter-flow network coding for TCP over disruptive MANETs

    Directory of Open Access Journals (Sweden)

    Chien-Chia Chen

    2011-07-01

    Full Text Available TCP over wireless networks is challenging due to random losses and ACK interference. Although network coding schemes have been proposed to improve TCP robustness against extreme random losses, a critical problem still remains of DATA–ACK interference. To address this issue, we use inter-flow coding between DATA and ACK to reduce the number of transmissions among nodes. In addition, we also utilize a “pipeline” random linear coding scheme with adaptive redundancy to overcome high packet loss over unreliable links. The resulting coding scheme, ComboCoding, combines intra-flow and inter-flow coding to provide robust TCP transmission in disruptive wireless networks. The main contributions of our scheme are twofold: the efficient combination of random linear coding and XOR coding on bi-directional streams (DATA and ACK), and the novel redundancy control scheme that adapts to time-varying and space-varying link loss. The adaptive ComboCoding was tested on a variable hop string topology with unstable links and on a multipath MANET with dynamic topology. Simulation results show that TCP with ComboCoding delivers higher throughput than with other coding options in high loss and mobile scenarios, while introducing minimal overhead in normal operation.
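
    The inter-flow (XOR) half of the scheme can be illustrated with a minimal sketch: a relay XORs a DATA packet of one flow with an ACK of the reverse flow into a single transmission, and each endpoint cancels the packet it already knows to recover the other. Packet contents here are made up for illustration:

```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    # Zero-pad the shorter packet, then XOR byte-by-byte.
    n = max(len(a), len(b))
    a = a.ljust(n, b"\x00")
    b = b.ljust(n, b"\x00")
    return bytes(x ^ y for x, y in zip(a, b))

# A relay holding a DATA packet for one direction and an ACK for the other
# broadcasts their XOR in one transmission instead of two.
data = b"payload-0001"
ack = b"ACK-17"
coded = xor_packets(data, ack)

# Each endpoint XORs out the packet it originated to recover the other one.
assert xor_packets(coded, ack) == data
assert xor_packets(coded, data).rstrip(b"\x00") == ack
```

    The intra-flow half (random linear coding with adaptive redundancy) additionally mixes packets within one flow to survive losses; that part is omitted here.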

  5. Probabilistic Decision Based Block Partitioning for Future Video Coding

    KAUST Repository

    Wang, Zhao; Wang, Shiqi; Zhang, Jian; Wang, Shanshe; Ma, Siwei

    2017-01-01

    The mode decision problem is cast into a probabilistic framework to select the final partition based on the confidence interval decision strategy. Experimental results show that the proposed CIET algorithm can speed up QTBT block partitioning structure

  6. 3D Scan-Based Wavelet Transform and Quality Control for Video Coding

    Directory of Open Access Journals (Sweden)

    Parisot Christophe

    2003-01-01

    Full Text Available Wavelet coding has been shown to achieve better compression than DCT coding and moreover allows scalability. The 2D DWT can easily be extended to 3D and thus applied to video coding. However, 3D subband coding of video suffers from two drawbacks: the amount of memory required for coding large 3D blocks, and the lack of temporal quality due to temporal splitting of the sequence. In fact, 3D block-based video coders produce jerks, which appear at temporal block borders during video playback. In this paper, we propose a new temporal scan-based wavelet transform method for video coding that combines the advantages of wavelet coding (performance, scalability) with acceptably reduced memory requirements, no additional CPU complexity, and no jerks. We also propose an efficient quality allocation procedure to ensure constant quality over time.
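
    The temporal wavelet step can be sketched with the simplest case, a Haar lifting transform over frame pairs; this illustrates the principle (perfect reconstruction from a low band of averages and a high band of differences), not the authors' specific filter bank:

```python
import numpy as np

def haar_analysis(frames):
    """One temporal Haar lifting step over pairs of frames:
    high band = frame difference (predict), low band = pair average (update)."""
    even, odd = frames[0::2], frames[1::2]
    high = odd - even                 # predict step
    low = even + high / 2.0           # update step -> (even + odd) / 2
    return low, high

def haar_synthesis(low, high):
    # Undo the lifting steps in reverse order: exact reconstruction.
    even = low - high / 2.0
    odd = even + high
    frames = np.empty((len(low) + len(high),) + low.shape[1:])
    frames[0::2], frames[1::2] = even, odd
    return frames

video = np.random.default_rng(2).random((8, 4, 4))   # 8 frames of 4x4 "video"
low, high = haar_analysis(video)
assert np.allclose(haar_synthesis(low, high), video)  # perfect reconstruction
```

    A scan-based coder applies such a step to frame pairs as they arrive, instead of buffering a whole 3D block, which is what keeps memory low and avoids jerks at block borders.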

  7. Reactivity effect of poisoned beryllium block shuffling in the MARIA reactor

    International Nuclear Information System (INIS)

    Andrzejewski, K.; Kulikowska, T.

    2000-01-01

    The paper is a continuation of the analysis of beryllium blocks poisoning by Li-6 and He-3 in the MARIA reactor, presented at the 22 RERTR Meeting in Budapest. A new computational tool, the REBUS-3 code, has been used for predicting the amount of poison. The code has been put into operation on a HP computer and the beryllium transmutation chains have been activated with assistance of the ANL RERTR staff. The horizontal and vertical poison distribution within beryllium blocks has been studied. A simple shuffling of beryllium blocks has been simulated to check the effect of exchanging a block with high poison concentration, adjacent to fuel elements, with a peripheral one with a low poison concentration

  8. Linear-Algebra Programs

    Science.gov (United States)

    Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.

    1982-01-01

    The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100 and CDC 6000 series computers.

  9. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

    Shannon limit of the channel. Among the earliest discovered codes that approach the. Shannon limit were the low density parity check (LDPC) codes. The term low density arises from the property of the parity check matrix defining the code. We will now define this matrix and the role that it plays in decoding. 2. Linear Codes.
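
    The parity check matrix mentioned in the snippet above is easy to illustrate: a binary matrix H defines a linear code, and a received word c is a codeword exactly when its syndrome Hc^T vanishes over GF(2). The sketch below uses a small (7,4) parity-check matrix for illustration (real LDPC matrices are much larger and sparser):

```python
import numpy as np

# Parity-check matrix H of a (7,4) linear code in the form [A | I].
# c is a codeword iff H c^T = 0 over GF(2).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def syndrome(c):
    # Matrix-vector product reduced mod 2.
    return H @ np.asarray(c) % 2

c = [1, 0, 1, 0, 1, 0, 1]   # message 1010 plus its three parity bits
print(syndrome(c))          # [0 0 0]: valid codeword
c[2] ^= 1                   # flip one bit
print(syndrome(c))          # nonzero syndrome equals column 2 of H
```

    Iterative decoders such as belief propagation exploit the sparsity of H: each parity check involves only a few bits, so messages can be passed cheaply along the edges of the corresponding graph.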

  10. Space-Frequency Block Code with Matched Rotation for MIMO-OFDM System with Limited Feedback

    Directory of Open Access Journals (Sweden)

    Thushara D. Abhayapala

    2009-01-01

    Full Text Available This paper presents a novel matched rotation precoding (MRP) scheme to design a rate one space-frequency block code (SFBC) and a multirate SFBC for MIMO-OFDM systems with limited feedback. The proposed rate one MRP and multirate MRP can always achieve full transmit diversity and optimal system performance for arbitrary numbers of antennas, subcarrier intervals, and subcarrier groupings, with limited channel knowledge required by the transmit antennas. The optimization process of the rate one MRP is simple and easily visualized, so that the optimal rotation angle can be derived explicitly, or even intuitively for some cases. The multirate MRP has a more complex optimization process, but it has a better spectral efficiency and provides a relatively smooth balance between system performance and transmission rate. Simulations show that the proposed SFBC with MRP can overcome the diversity loss for specific propagation scenarios, always improve the system performance, and demonstrate flexible performance with large performance gains. The proposed SFBCs with MRP therefore demonstrate flexibility and feasibility, making them more suitable for a practical MIMO-OFDM system with dynamic parameters.

  11. Improved Intra-coding Methods for H.264/AVC

    Directory of Open Access Journals (Sweden)

    Li Song

    2009-01-01

    Full Text Available The H.264/AVC design adopts a multidirectional spatial prediction model to reduce spatial redundancy, where neighboring pixels are used as a prediction for the samples in a data block to be encoded. In this paper, a recursive prediction scheme and an enhanced block-matching algorithm (BMA) prediction scheme are designed and integrated into the state-of-the-art H.264/AVC framework to provide a new intra coding model. Extensive experiments demonstrate that the coding efficiency can be increased by 0.27 dB on average in comparison with the performance of the conventional H.264 coding model.

  12. Super-linear Precision in Simple Neural Population Codes

    Science.gov (United States)

    Schwab, David; Fiete, Ila

    2015-03-01

    A widely used tool for quantifying the precision with which a population of noisy sensory neurons encodes the value of an external stimulus is the Fisher Information (FI). Maximizing the FI is also a commonly used objective for constructing optimal neural codes. The primary utility and importance of the FI arises because it gives, through the Cramer-Rao bound, the smallest mean-squared error achievable by any unbiased stimulus estimator. However, it is well-known that when neural firing is sparse, optimizing the FI can result in codes that perform very poorly when considering the resulting mean-squared error, a measure with direct biological relevance. Here we construct optimal population codes by minimizing mean-squared error directly and study the scaling properties of the resulting network, focusing on the optimal tuning curve width. We then extend our results to continuous attractor networks that maintain short-term memory of external stimuli in their dynamics. Here we find similar scaling properties in the structure of the interactions that minimize diffusive information loss.
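
    The population Fisher information referred to here is FI(s) = Σᵢ fᵢ′(s)²/fᵢ(s) for independent Poisson neurons with tuning curves fᵢ. The sketch below evaluates it for a population of Gaussian tuning curves and shows the familiar 1D scaling in which narrower tuning increases FI (all parameters are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def population_fisher_info(s, centers, width, peak_rate=10.0):
    """Fisher information at stimulus s for independent Poisson neurons with
    Gaussian tuning curves: FI(s) = sum_i f_i'(s)^2 / f_i(s)."""
    f = peak_rate * np.exp(-(s - centers) ** 2 / (2 * width ** 2))
    # f'(s) = f(s) * (c - s) / w^2, hence f'^2 / f = f * ((c - s) / w^2)^2.
    return np.sum(f * ((centers - s) / width ** 2) ** 2)

centers = np.linspace(-10, 10, 201)   # dense, evenly tiled tuning curves
fi_narrow = population_fisher_info(0.0, centers, width=0.5)
fi_wide = population_fisher_info(0.0, centers, width=1.0)

# Cramer-Rao: any unbiased estimator has mean-squared error >= 1 / FI,
# so larger FI means (at best) finer achievable precision.
print(fi_narrow > fi_wide)   # narrower tuning raises FI in one dimension
```

    The paper's point is precisely that this FI-based ranking can disagree with the actual mean-squared error when firing is sparse, which is why the authors optimize MSE directly instead.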

  13. Improved lossless intra coding for H.264/MPEG-4 AVC.

    Science.gov (United States)

    Lee, Yung-Lyul; Han, Ki-Hun; Sullivan, Gary J

    2006-09-01

    A new lossless intra coding method based on sample-by-sample differential pulse code modulation (DPCM) is presented as an enhancement of the H.264/MPEG-4 AVC standard. The H.264/AVC design includes a multidirectional spatial prediction method to reduce spatial redundancy by using neighboring samples as a prediction for the samples in a block of data to be encoded. In the new lossless intra coding method, the spatial prediction is performed based on samplewise DPCM instead of in the block-based manner used in the current H.264/AVC standard, while the block structure is retained for the residual difference entropy coding process. We show that the new method, based on samplewise DPCM, does not have a major complexity penalty, despite its apparent pipeline dependencies. Experiments show that the new lossless intra coding method reduces the bit rate by approximately 12% in comparison with the lossless intra coding method previously included in the H.264/AVC standard. As a result, the new method is currently being adopted into the H.264/AVC standard in a new enhancement project.
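
    A minimal sketch of samplewise DPCM, here with a plain previous-sample (horizontal) predictor rather than the standard's directional predictors, shows why the scheme is lossless: differencing is exactly inverted by a cumulative sum, and only the small residuals need to be entropy-coded:

```python
import numpy as np

def dpcm_encode(samples):
    # Sample-by-sample DPCM: transmit the first sample, then differences
    # between each sample and its predecessor (the prediction).
    s = np.asarray(samples, dtype=np.int64)
    residuals = np.empty_like(s)
    residuals[0] = s[0]
    residuals[1:] = s[1:] - s[:-1]
    return residuals

def dpcm_decode(residuals):
    # Cumulative sum exactly inverts the differencing (lossless).
    return np.cumsum(residuals)

row = [118, 120, 121, 121, 119, 115, 116]   # one row of pixel samples
res = dpcm_encode(row)
assert list(dpcm_decode(res)) == row        # perfect reconstruction
print(list(res))  # small residuals compress better than the raw samples
```

    The standard's method keeps the block structure for residual entropy coding while predicting sample-by-sample inside the block, which is where the reported ~12% bit-rate reduction comes from.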

  14. 2010 Census Blocks with Geographic Codes Southwestern PA

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — This file can be used as a tool to append geographic codes to geocoded point data. The file was developed by Pitt's Center for Social and Urban Research and...

  15. Linear tree codes and the problem of explicit constructions

    Czech Academy of Sciences Publication Activity Database

    Pudlák, Pavel

    2016-01-01

    Roč. 490, February 1 (2016), s. 124-144 ISSN 0024-3795 R&D Projects: GA ČR GBP202/12/G061 Institutional support: RVO:67985840 Keywords : tree code * error correcting code * triangular totally nonsingular matrix Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016 http://www.sciencedirect.com/science/article/pii/S002437951500645X

  16. Linear Predictive Coding (LPC) method for Hidden Markov Model (HMM) classification of Arabic words spoken by Indonesian speakers

    Directory of Open Access Journals (Sweden)

    Ririn Kusumawati

    2016-05-01

    In the classification stage, using a Hidden Markov Model, the voice signal is analyzed to find the most likely value that can be recognized. The parameters obtained from the modeling are compared with the speech of Arabic speakers. The classification tests with Hidden Markov Models and Linear Predictive Coding feature extraction give an average accuracy of 78.6% for test data sampled at 8000 Hz, 80.2% for test data sampled at 22050 Hz, and 79% for test data sampled at 44100 Hz.
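
    A minimal sketch of the LPC feature extraction step (autocorrelation method, solving the normal equations directly rather than with the classical Levinson-Durbin recursion; the signal and all parameters are synthetic assumptions):

```python
import numpy as np

def lpc_coefficients(signal, order):
    """Estimate LPC coefficients a_k for the predictor
    s[n] ~ sum_k a_k * s[n-k], via the autocorrelation normal equations."""
    s = np.asarray(signal, dtype=float)
    r = np.array([np.dot(s[:len(s) - k], s[k:]) for k in range(order + 1)])
    # Toeplitz system R a = r[1:]; Levinson-Durbin would solve it in O(p^2).
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

# Synthetic "speech-like" AR(2) signal with known coefficients.
rng = np.random.default_rng(0)
true_a = [1.3, -0.6]
s = np.zeros(5000)
e = rng.standard_normal(5000)
for n in range(2, 5000):
    s[n] = true_a[0] * s[n - 1] + true_a[1] * s[n - 2] + 0.1 * e[n]

a = lpc_coefficients(s, 2)
print(a)   # close to the true coefficients [1.3, -0.6]
```

    In a speech recognizer these per-frame coefficient vectors (or features derived from them) become the observation sequence that the HMM classifies.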

  17. A Non-Linear Digital Computer Model Requiring Short Computation Time for Studies Concerning the Hydrodynamics of the BWR

    Energy Technology Data Exchange (ETDEWEB)

    Reisch, F; Vayssier, G

    1969-05-15

    This non-linear model serves as one of the blocks in a series of codes to study the transient behaviour of BWR or PWR type reactors. This program is intended to be the hydrodynamic part of the BWR core representation or the hydrodynamic part of the PWR heat exchanger secondary side representation. The equations have been prepared for the CSMP digital simulation language. By using the most suitable integration routine available, the ratio of simulation time to real time is about one on an IBM 360/75 digital computer. Use of the slightly different language DSL/40 on an IBM 7044 computer takes about four times longer. The code has been tested against the Eindhoven loop with satisfactory agreement.

  18. Vectorized Matlab Codes for Linear Two-Dimensional Elasticity

    Directory of Open Access Journals (Sweden)

    Jonas Koko

    2007-01-01

    Full Text Available A vectorized Matlab implementation for the linear finite element is provided for the two-dimensional linear elasticity with mixed boundary conditions. Vectorization means that there is no loop over triangles. Numerical experiments show that our implementation is more efficient than the standard implementation with a loop over all triangles.
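
    The vectorization idea, operating on all triangles at once rather than looping over them, can be sketched in NumPy (the paper's code is Matlab; this is an illustrative translation of the principle, here computing all element areas in one shot):

```python
import numpy as np

# Two triangles covering the unit square: node coordinates and connectivity.
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
tris = np.array([[0, 1, 2], [0, 2, 3]])

# Gather the three vertices of every triangle at once (no loop over elements).
p1, p2, p3 = nodes[tris[:, 0]], nodes[tris[:, 1]], nodes[tris[:, 2]]

# Signed-area formula evaluated for all triangles simultaneously.
areas = 0.5 * np.abs((p2[:, 0] - p1[:, 0]) * (p3[:, 1] - p1[:, 1])
                     - (p3[:, 0] - p1[:, 0]) * (p2[:, 1] - p1[:, 1]))
print(areas)   # each triangle has area 0.5
```

    Element stiffness contributions are assembled the same way: all per-element quantities are computed as array operations, which is what makes the Matlab implementation faster than a per-triangle loop.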

  19. User's manual for seismic analysis code 'SONATINA-2V'

    Energy Technology Data Exchange (ETDEWEB)

    Hanawa, Satoshi; Iyoku, Tatsuo [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment

    2001-08-01

    The seismic analysis code SONATINA-2V has been developed to analyze the behavior of the HTTR core graphite components under seismic excitation. SONATINA-2V is a two-dimensional computer program capable of analyzing the vertical arrangement of the HTTR graphite components, such as fuel blocks, replaceable reflector blocks and permanent reflector blocks, as well as their restraint structures. In the analytical model, each block is treated as a rigid body restrained by dowel pins, which restrict relative horizontal movement but allow vertical and rocking motions between upper and lower blocks. Moreover, SONATINA-2V is capable of analyzing the core vibration behavior under simultaneous excitation in the vertical and horizontal directions. The code is composed of the main program, a pre-processor for preparing the input data to SONATINA-2V, and a post-processor for data processing and producing graphics from the analytical results. Although SONATINA-2V was developed to run on the MSP computer system of the Japan Atomic Energy Research Institute (JAERI), that system was retired as computer technology advanced, so the analysis code was adapted to operate under the UNIX-based SR8000 computer system of JAERI. The user's manual for the seismic analysis code SONATINA-2V, including the pre- and post-processors, is given in the present report. (author)

  20. Block level energy planning for domestic lighting - a multi-objective fuzzy linear programming approach

    Energy Technology Data Exchange (ETDEWEB)

    Jana, C. [Indian Inst. of Social Welfare and Business Management, Kolkata (India); Chattopadhyay, R.N. [Indian Inst. of Technology, Kharagpur (India). Rural Development Centre

    2004-09-01

    Creating provisions for domestic lighting is important for rural development. Its significance in rural economy is unquestionable since some activities, like literacy, education and manufacture of craft items and other cottage products are largely dependent on domestic lighting facilities for their progress and prosperity. Thus, in rural energy planning, domestic lighting remains a key sector for allocation of investments. For rational allocation, decision makers need alternative strategies for identifying adequate and proper investment structure corresponding to appropriate sources and precise devices. The present study aims at designing a model of energy utilisation by developing a decision support frame for an optimised solution to the problem, taking into consideration four sources and six devices suitable for the study area, namely Narayangarh Block of Midnapore District in India. Since the data available from rural and unorganised sectors are often ill-defined and subjective in nature, many coefficients are fuzzy numbers, and hence several constraints appear to be fuzzy expressions. In this study, the energy allocation model is initiated with three separate objectives for optimisation, namely minimising the total cost, minimising the use of non-local sources of energy and maximising the overall efficiency of the system. Since each of the above objective-based solutions has relevance to the needs of the society and economy, it is necessary to build a model that makes a compromise among the three individual solutions. This multi-objective fuzzy linear programming (MOFLP) model, solved in a compromising decision support frame, seems to be a more rational alternative than single objective linear programming model in rural energy planning. (author)

  1. Coding and decoding for code division multiple user communication systems

    Science.gov (United States)

    Healy, T. J.

    1985-01-01

    A new algorithm is introduced which decodes code division multiple user communication signals. The algorithm makes use of the distinctive form or pattern of each signal to separate it from the composite signal created by the multiple users. Although the algorithm is presented in terms of frequency-hopped signals, the actual transmitter modulator can use any of the existing digital modulation techniques. The algorithm is applicable to error-free codes or to codes where controlled interference is permitted. It can be used when block synchronization is assumed, and in some cases when it is not. The paper also discusses briefly some of the codes which can be used in connection with the algorithm, and relates the algorithm to past studies which use other approaches to the same problem.

  2. Verification and Validation of Heat Transfer Model of AGREE Code

    Energy Technology Data Exchange (ETDEWEB)

    Tak, N. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Seker, V.; Drzewiecki, T. J.; Downar, T. J. [Department of Nuclear Engineering and Radiological Sciences, Univ. of Michigan, Michigan (United States); Kelly, J. M. [US Nuclear Regulatory Commission, Washington (United States)

    2013-05-15

The AGREE code was originally developed as a multi-physics simulation code to perform design and safety analysis of Pebble Bed Reactors (PBR). Currently, additional capability for the analysis of a Prismatic Modular Reactor (PMR) core is under development. The newly implemented fluid model for a PMR core is based on a subchannel approach, which has been widely used in the analysis of light water reactor (LWR) cores. A hexagonal fuel (or graphite) block is discretized into triangular prism nodes having effective conductivities. Then, a meso-scale heat transfer model is applied to the unit cell geometry of a prismatic fuel block. Both unit cell geometries of multi-hole and pin-in-hole types of prismatic fuel blocks are considered in AGREE. The main objective of this work is to verify and validate the heat transfer model newly implemented for a PMR core in the AGREE code. The measured data from the HENDEL experiment were used for the validation of the heat transfer model for a pin-in-hole fuel block. However, the HENDEL tests were limited to steady-state conditions of pin-in-hole fuel blocks, and no experimental data are available regarding heat transfer in multi-hole fuel blocks. Therefore, numerical benchmarks using conceptual problems are considered to verify the heat transfer model of AGREE for multi-hole fuel blocks as well as for transient conditions. The CORONA and GAMMA+ codes were used to compare the numerical results. In this work, a verification and validation study was performed for the heat transfer model of the AGREE code using the HENDEL experiment and the numerical benchmarks of selected conceptual problems. The results of the present work show that the heat transfer model of AGREE is accurate and reliable for prismatic fuel blocks. Further validation of AGREE is in progress for a whole-reactor problem using HTTR safety test data, such as control rod withdrawal tests and loss-of-forced-convection tests.

  3. Block-Parallel Data Analysis with DIY2

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, Dmitriy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Peterka, Tom [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-08-30

    DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.
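The decompose / foreach / exchange pattern described above can be sketched in miniature. The following is an illustrative Python analogue, not the actual DIY2 C++ API; the names `Block`, `decompose`, `foreach` and `exchange` are stand-ins:

```python
# Toy sketch of block-structured data parallelism in the spirit of DIY2
# (NOT the actual DIY2 C++ API; all names here are illustrative).
from concurrent.futures import ThreadPoolExecutor

class Block:
    def __init__(self, gid, data):
        self.gid = gid          # global block id
        self.data = data        # this block's slice of the global dataset
        self.incoming = {}      # messages received from neighbour blocks

def decompose(data, nblocks):
    """Partition a 1-D dataset into contiguous blocks."""
    n = len(data) // nblocks
    return [Block(g, data[g * n:(g + 1) * n]) for g in range(nblocks)]

def foreach(blocks, func, nthreads=4):
    """Apply a compute callback to every block, possibly in parallel."""
    with ThreadPoolExecutor(max_workers=nthreads) as pool:
        list(pool.map(func, blocks))

def exchange(blocks):
    """Neighbour communication: each block sends its partial sum left/right."""
    for b in blocks:
        for nbr in (b.gid - 1, b.gid + 1):
            if 0 <= nbr < len(blocks):
                blocks[nbr].incoming[b.gid] = b.partial

data = list(range(16))
blocks = decompose(data, 4)
foreach(blocks, lambda b: setattr(b, "partial", sum(b.data)))  # local reduce
exchange(blocks)                                               # communicate
# each block now also knows its neighbours' partial sums
print(blocks[1].partial, sorted(blocks[1].incoming.values()))  # → 22 [6, 38]
```

Because computation is expressed only as callbacks over blocks, the runtime is free to decide where and when each block executes, which is the property DIY2 exploits for in-core/out-of-core and serial/parallel execution.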

  4. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    Science.gov (United States)

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards, which were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., the relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely the background reference prediction (BRP), which uses the background modeled from the original input frames as the long-term reference, and the background difference prediction (BDP), which predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency by using the higher-quality background as the reference, whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that BMAP can achieve at least twice the compression ratio on surveillance videos of AVC (MPEG-4 Advanced Video Coding) high profile, yet with only a slight increase in encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.

  5. Constructing LDPC Codes from Loop-Free Encoding Modules

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth

    2009-01-01

A method of constructing certain low-density parity-check (LDPC) codes by use of relatively simple loop-free coding modules has been developed. The subclasses of LDPC codes to which the method applies include accumulate-repeat-accumulate (ARA) codes, accumulate-repeat-check-accumulate codes, and the codes described in Accumulate-Repeat-Accumulate-Accumulate Codes (NPO-41305), NASA Tech Briefs, Vol. 31, No. 9 (September 2007), page 90. All of the affected codes can be characterized as serial/parallel (hybrid) concatenations of such relatively simple modules as accumulators, repetition codes, differentiators, and punctured single-parity-check codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. These codes can also be characterized as hybrid turbolike codes that have projected graph or protograph representations (for example, see figure); these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The present method comprises two related submethods for constructing LDPC codes from simple loop-free modules with circulant permutations. The first submethod is an iterative encoding method based on the erasure-decoding algorithm. The computations required by this method are well organized because they involve a parity-check matrix having a block-circulant structure. The second submethod involves the use of block-circulant generator matrices. The encoders of this method are very similar to those of recursive convolutional codes. Some encoders according to this second submethod have been implemented in a small field-programmable gate array that operates at a speed of 100 megasymbols per second. By use of density evolution (a computational-simulation technique for analyzing the performance of LDPC codes), it has been shown through some examples that, as the block size goes to infinity, low iterative decoding thresholds close to
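The block-circulant structure underlying both submethods can be illustrated by "lifting" a small protograph: each entry of a base matrix is replaced by an m x m circulant permutation matrix (or an all-zero block). The base matrix and lift size below are illustrative choices, not the article's codes:

```python
import numpy as np

def circulant_perm(m, shift):
    """m x m circulant permutation: the identity with columns cycled by `shift`."""
    return np.roll(np.eye(m, dtype=int), shift, axis=1)

def lift(base, m):
    """Lift a protograph base matrix of shift values into a block-circulant
    parity-check matrix; an entry of -1 denotes an all-zero m x m block."""
    zero = np.zeros((m, m), dtype=int)
    return np.vstack([np.hstack([zero if s < 0 else circulant_perm(m, s)
                                 for s in row]) for row in base])

base = [[0, 1, -1],          # illustrative shift values, not from the article
        [2, -1, 0]]
H = lift(base, m=4)
print(H.shape, int(H.sum(axis=0).max()))   # → (8, 12) 2  (columns stay sparse)
```

Because every nonzero block is a permutation, row and column weights of H are just the row and column weights of the base matrix, which is what keeps the lifted code low-density and the encoder hardware-friendly.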

  6. Binary Linear-Time Erasure Decoding for Non-Binary LDPC codes

    OpenAIRE

    Savin, Valentin

    2009-01-01

    In this paper, we first introduce the extended binary representation of non-binary codes, which corresponds to a covering graph of the bipartite graph associated with the non-binary code. Then we show that non-binary codewords correspond to binary codewords of the extended representation that further satisfy some simplex-constraint: that is, bits lying over the same symbol-node of the non-binary graph must form a codeword of a simplex code. Applied to the binary erasure channel, this descript...

  7. Multicore Performance of Block Algebraic Iterative Reconstruction Methods

    DEFF Research Database (Denmark)

    Sørensen, Hans Henrik B.; Hansen, Per Christian

    2014-01-01

Algebraic iterative methods are routinely used for solving the ill-posed sparse linear systems arising in tomographic image reconstruction. Here we consider the algebraic reconstruction technique (ART) and the simultaneous iterative reconstruction techniques (SIRT), both of which rely on semiconvergence. Block versions of these methods, based on a partitioning of the linear system, are able to combine the fast semiconvergence of ART with the better multicore properties of SIRT. These block methods separate into two classes: those that, in each iteration, access the blocks in a sequential manner ... a fixed relaxation parameter in each method, namely, the one that leads to the fastest semiconvergence. Computational results show that for multicore computers, the sequential approach is preferable.
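The sequential block class can be sketched as a block-ART sweep that projects the iterate onto the solution set of one block of equations at a time. This is a minimal illustration assuming a consistent, noise-free system and exact block projections via least squares, not the relaxed, semiconvergent iterations studied in the paper:

```python
import numpy as np

def block_art(A, b, blocks, sweeps=500):
    """Sequential block method (ART-like): project the iterate onto the
    solution set of each block of equations in turn."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for idx in blocks:
            Ab, bb = A[idx], b[idx]
            # minimum-norm correction = orthogonal projection onto the
            # affine solution set of this block
            x += np.linalg.lstsq(Ab, bb - Ab @ x, rcond=None)[0]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((9, 4))
x_true = rng.standard_normal(4)
b = A @ x_true                                 # consistent (noise-free) system
blocks = [range(0, 3), range(3, 6), range(6, 9)]
x = block_art(A, b, blocks)
err = np.linalg.norm(x - x_true)
print(err)
```

A SIRT-style counterpart would process all blocks simultaneously and average the corrections; the sequential variant shown here is the class the paper finds preferable on multicore machines.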

  8. Cytokinesis-block micronucleus method in micro-blood cultures

    International Nuclear Information System (INIS)

    Liu Jinwen; Wang Lianzhi; Yang Cangzhen; Yao Yanyu

    1991-01-01

This paper reports the cytokinesis-block (CB) micronucleus method in micro-blood cultures. Micronuclei induced by different doses of 60Co γ-ray irradiation and spontaneous micronuclei at different ages were detected with the CB method in comparison with the conventional micronucleus (CM) method. The results showed that cytokinesis-block micronuclei are also obtained with direct peripheral micro-blood cultures. Using the CB method, the micronucleus frequency showed a linear relationship, Y = 1.62 + 0.74 D; the spontaneous micronucleus frequency at different ages was 4.14%; the induced micronucleus frequency also showed a linear relationship, Y = 6.01 + 0.692 D. Using the CM method, the induced micronucleus frequency showed a linear relationship, Y = 0.486 D − 1.968, but there was no significant difference among the micronucleus frequencies of different ages. Comparison with the CM and direct blood smear methods confirmed that the cytokinesis-block method in micro-blood cultures is more sensitive and precise
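Dose-response lines such as Y = 1.62 + 0.74 D are ordinary least-squares fits of frequency against dose. A minimal sketch with illustrative, noise-free points (not the paper's data):

```python
import numpy as np

# Illustrative dose-response points (NOT the paper's measurements): the
# frequencies are generated from a line resembling the reported fit.
dose = np.array([0.0, 1.0, 2.0, 3.0, 4.0])          # dose D
freq = 1.62 + 0.74 * dose                            # micronucleus frequency Y

slope, intercept = np.polyfit(dose, freq, 1)         # degree-1 least squares
print(round(intercept, 2), round(slope, 2))          # → 1.62 0.74
```

With real, noisy counts the fitted coefficients would of course deviate from the generating line, and a goodness-of-fit measure would accompany the regression.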

  9. Modelling of multi-block data

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar; Svinning, K.

    2006-01-01

Here is presented a unified approach to modelling multi-block regression data. The starting point is a partition of the data X into L data blocks, X = (X-1, X-2, ..., X-L), and the data Y into M data blocks, Y = (Y-1, Y-2, ..., Y-M). The methods of linear regression, X -> Y, are extended to the case of a linear relationship between each X-i and Y-j, X-i -> Y-j. A modelling strategy is used to decide if the residual X-i should take part in the modelling of one or more Y-js. At each step, the procedure of finding score vectors is based on well-defined optimisation procedures. The principle of optimisation is that the score vectors should make the sizes of the resulting Y-j loading vectors as large as possible. The partitions of X and Y are independent of each other. The choice of Y-j can be X-j, Y-i = X-i, thus including the possibility of modelling X -> X-i, i = 1, ..., L. It is shown how...

  10. Construction and decoding of matrix-product codes from nested codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Lally, Kristine; Ruano, Diego

    2009-01-01

We consider matrix-product codes [C1 ... Cs] · A, where C1, ..., Cs are nested linear codes and matrix A has full rank. We compute their minimum distance and provide a decoding algorithm when A is a non-singular by columns matrix. The decoding algorithm decodes up to half of the minimum distance.
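A tiny instance of the construction can be checked by brute force. The matrix A = [[1,1],[0,1]] below is the classic Plotkin (u|u+v) case of a non-singular by columns matrix, and the nested binary codes are illustrative choices, not taken from the paper:

```python
import numpy as np
from itertools import product

G1 = np.array([[1, 1, 0], [0, 1, 1]])   # C1 = [3,2,2] parity-check code
G2 = np.array([[1, 0, 1]])              # C2 = [3,1,2], a subcode of C1 (nested)

# Matrix-product code [C1 C2]·A with A = [[1,1],[0,1]]: codewords (c1 | c1+c2).
def encode(u1, u2):
    c1 = np.dot(u1, G1) % 2
    c2 = np.dot(u2, G2) % 2
    return np.concatenate([c1, (c1 + c2) % 2])

weights = [int(encode(np.array(u1), np.array(u2)).sum())
           for u1 in product([0, 1], repeat=2)
           for u2 in product([0, 1], repeat=1)
           if any(u1) or any(u2)]
print(min(weights))   # minimum distance min(2*d1, d2) = min(4, 2) → 2
```

The exhaustive minimum weight agrees with the closed-form distance min(2·d1, d2) that holds for this Plotkin-type choice of A.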

  11. Fractal Image Coding Based on a Fitting Surface

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2014-01-01

Full Text Available A no-search fractal image coding method based on a fitting surface is proposed. In our research, an improved gray-level transform with a fitting surface is introduced. One advantage of this method is that the fitting surface is used for both the range and domain blocks, so one set of parameters can be saved. Another advantage is that the fitting surface can approximate the range and domain blocks better than the previous fitting planes; this results in smaller block matching errors and better decoded image quality. Since the no-search and quadtree techniques are adopted, smaller matching errors also imply fewer block matches, which results in a faster encoding process. Moreover, by combining all the fitting surfaces, a fitting surface image (FSI) is also proposed to speed up the fractal decoding. Experiments show that our proposed method can yield superior performance over the other three methods. Relative to the range-averaged image, the FSI provides a faster fractal decoding process. Finally, by combining the proposed fractal coding method with JPEG, a hybrid coding method is designed which can provide higher PSNR than JPEG while maintaining the same bpp.

  12. Enabling Cognitive Load-Aware AR with Rateless Coding on a Wearable Network

    Directory of Open Access Journals (Sweden)

    R. Razavi

    2008-01-01

    Full Text Available Augmented reality (AR on a head-mounted display is conveniently supported by a wearable wireless network. If, in addition, the AR display is moderated to take account of the cognitive load of the wearer, then additional biosensors form part of the network. In this paper, the impact of these additional traffic sources is assessed. Rateless coding is proposed to not only protect the fragile encoded video stream from wireless noise and interference but also to reduce coding overhead. The paper proposes a block-based form of rateless channel coding in which the unit of coding is a block within a packet. The contribution of this paper is that it minimizes energy consumption by reducing the overhead from forward error correction (FEC, while error correction properties are conserved. Compared to simple packet-based rateless coding, with this form of block-based coding, data loss is reduced and energy efficiency is improved. Cross-layer organization of piggy-backed response blocks must take place in response to feedback, as detailed in the paper. Compared also to variants of its default FEC scheme, results from a Bluetooth (IEEE 802.15.1 wireless network show a consistent improvement in energy consumption, packet arrival latency, and video quality at the AR display.

  13. Random Linear Network Coding is Key to Data Survival in Highly Dynamic Distributed Storage

    DEFF Research Database (Denmark)

    Sipos, Marton A.; Fitzek, Frank; Roetter, Daniel Enrique Lucani

    2015-01-01

Distributed storage solutions have become widespread due to their ability to store large amounts of data reliably across a network of unreliable nodes, by employing repair mechanisms to prevent data loss. Conventional systems rely on static designs with a central control entity to oversee and control the repair process. Given the large costs for maintaining and cooling large data centers, our work proposes and studies the feasibility of a fully decentralized system that can store data even on unreliable and, sometimes, unavailable mobile devices. This imposes new challenges on the design, as the number of available nodes varies greatly over time and keeping track of the system's state becomes unfeasible. As a consequence, conventional erasure correction approaches are ill-suited for maintaining data integrity. In this highly dynamic context, random linear network coding (RLNC) provides...
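The RLNC idea the paper builds on can be sketched at packet level: every coded packet is a random linear combination of the source packets and carries its coefficient vector, so any collection of packets whose coefficients reach full rank can be decoded by Gaussian elimination, regardless of which nodes survived. The toy below works over GF(2) with bit-vector packets (real systems typically use a larger field):

```python
import numpy as np

rng = np.random.default_rng(7)

def rlnc_encode(packets, n_coded):
    """Each coded packet is a random GF(2) linear combination of the source
    packets; the coefficient vector travels with the payload."""
    k = len(packets)
    coeffs = rng.integers(0, 2, size=(n_coded, k))
    payloads = coeffs @ np.array(packets) % 2
    return coeffs, payloads

def rlnc_decode(coeffs, payloads, k):
    """Gaussian elimination over GF(2) on the augmented matrix [coeffs | payloads]."""
    M = np.hstack([coeffs, payloads]) % 2
    row = 0
    for col in range(k):
        piv = next((r for r in range(row, len(M)) if M[r, col]), None)
        if piv is None:
            return None                    # rank deficient: need more packets
        M[[row, piv]] = M[[piv, row]]
        for r in range(len(M)):
            if r != row and M[r, col]:
                M[r] = (M[r] + M[row]) % 2
        row += 1
    return M[:k, k:]                       # coeff part is now the identity

data = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]   # k = 3 source packets
decoded = None
while decoded is None:                     # retry if the random combos were singular
    coeffs, coded = rlnc_encode(data, n_coded=4)
    decoded = rlnc_decode(coeffs, coded, k=3)
print((decoded == np.array(data)).all())   # → True
```

Over GF(2) a noticeable fraction of random coefficient sets is singular, hence the retry; over a larger field (e.g. GF(256), common in practice) k received packets decode with high probability.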

  14. Z₂-double cyclic codes

    OpenAIRE

    Borges, J.

    2014-01-01

A binary linear code C is a Z2-double cyclic code if the set of coordinates can be partitioned into two subsets such that any cyclic shift of the coordinates of both subsets leaves the code invariant. These codes can be identified as submodules of the Z2[x]-module Z2[x]/(x^r − 1) × Z2[x]/(x^s − 1). We determine the structure of Z2-double cyclic codes, giving the generator polynomials of these codes. The related polynomial representation of Z2-double cyclic codes and their duals, and the relation...
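The defining property can be checked mechanically on a toy code: shift the first r coordinates and the last s coordinates cyclically and test whether the code is mapped to itself. The codes below are illustrative examples, not from the paper:

```python
def shift(v):
    """One cyclic shift of a tuple of coordinates."""
    return v[-1:] + v[:-1]

def is_double_cyclic(code, r):
    """Check that cyclically shifting the first r coordinates and the
    remaining s coordinates maps the code onto itself."""
    return all((shift(c[:r]) + shift(c[r:])) in code for c in code)

# Small illustrative code with r = 2, s = 3: both halves are repetitions,
# so every shift fixes every codeword.
code = {(a, a, b, b, b) for a in (0, 1) for b in (0, 1)}
print(is_double_cyclic(code, r=2))    # → True

# A set of words that is not closed under the double shift:
bad = {(0, 0, 0, 0, 0), (1, 0, 1, 0, 0)}
print(is_double_cyclic(bad, r=2))     # → False
```

In the paper's algebraic language, the first r coordinates live in Z2[x]/(x^r − 1) and the last s in Z2[x]/(x^s − 1), and the double shift is multiplication by x in both factors.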

  15. Space-Time Code Designs for Broadband Wireless Communications

    National Research Council Canada - National Science Library

    Xia, Xiang-Gen

    2005-01-01

The goal of this research is to design new space-time codes, such as complex orthogonal space-time block codes with rate above 1/2, from complex orthogonal designs for QAM, PSK, and CPM signals...

  16. VENTURE: a code block for solving multigroup neutronics problems applying the finite-difference diffusion-theory approximation to neutron transport, version II

    International Nuclear Information System (INIS)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.

    1977-11-01

The report documents the computer code block VENTURE, designed to solve multigroup neutronics problems by applying the finite-difference diffusion-theory approximation to neutron transport (or, alternatively, simple P1) in up to three-dimensional geometry. It uses and generates interface data files adopted in the cooperative effort sponsored by the Reactor Physics Branch of the Division of Reactor Research and Development of the Energy Research and Development Administration. Several different data handling procedures have been incorporated to provide considerable flexibility; it is possible to solve a wide variety of problems on a variety of computer configurations relatively efficiently

  17. Modified Three-Step Search Block Matching Motion Estimation and Weighted Finite Automata based Fractal Video Compression

    Directory of Open Access Journals (Sweden)

    Shailesh Kamble

    2017-08-01

Full Text Available The major challenge with the fractal image/video coding technique is that it requires a long encoding time; how to reduce the encoding time therefore remains the central research problem in fractal coding. Block matching motion estimation algorithms are used to reduce the computations performed in the process of encoding. The objective of the proposed work is to develop an approach for video coding using a modified three-step search (MTSS) block matching algorithm and weighted finite automata (WFA) coding, with a specific focus on reducing the encoding time. The MTSS block matching algorithm is used for computing motion vectors between two frames, i.e. the displacement of pixels, and WFA is used for the coding, as it behaves like fractal coding (FC). WFA represents an image (frame) or motion-compensated prediction error based on the fractal idea that the image has self-similarity in itself. The self-similarity is sought from the symmetry of an image, so the encoding algorithm divides an image into multiple levels of quad-tree segmentation and creates an automaton from the sub-images. The proposed MTSS block matching algorithm is based on a combination of rectangular and hexagonal search patterns and is compared with the existing new three-step search (NTSS), three-step search (TSS), and efficient three-step search (ETSS) block matching estimation algorithms. The performance of the proposed MTSS block matching algorithm is evaluated on the basis of performance evaluation parameters, i.e. the mean absolute difference (MAD) and the average number of search points required per frame, with MAD used as the block distortion measure (BDM). Finally, the developed approaches, namely MTSS and WFA, MTSS and FC, and plane FC (applied on every frame), are compared with each other. The experiments are carried out on standard uncompressed video databases, namely akiyo, bus, mobile, suzie, traffic, football, soccer, ice etc. Developed
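The classic three-step search that MTSS modifies can be sketched as follows. The frame, block size, and smooth test pattern are illustrative choices, not the paper's setup; TSS recovers the true displacement here because the synthetic SAD surface is well-behaved:

```python
import numpy as np

def sad(block, frame, y, x):
    """Sum of absolute differences; inf if the candidate leaves the frame."""
    h, w = block.shape
    if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
        return float("inf")
    return int(np.abs(frame[y:y+h, x:x+w] - block).sum())

def three_step_search(block, ref, y0, x0, step=4):
    """Classic TSS: evaluate 9 candidates around the centre, recentre on the
    best one, halve the step; three rounds cover a +/-7 search range."""
    cy, cx = y0, x0
    while step >= 1:
        _, cy, cx = min((sad(block, ref, cy + dy, cx + dx), cy + dy, cx + dx)
                        for dy in (-step, 0, step) for dx in (-step, 0, step))
        step //= 2
    return cy - y0, cx - x0              # estimated motion vector (dy, dx)

# Reference frame: a smooth bowl whose SAD surface guides the search.
ys, xs = np.mgrid[0:24, 0:24]
ref = (ys - 12) ** 2 + (xs - 10) ** 2
block = ref[12:16, 10:14]                # 4x4 block that "moved" by (4, 2)
print(three_step_search(block, ref, y0=8, x0=8))   # → (4, 2)
```

TSS tests at most 25 positions instead of the 225 of a full ±7 exhaustive search, which is exactly the computation-saving that motivates the block matching step in the paper.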

  18. Properties of a class of block-iterative methods

    International Nuclear Information System (INIS)

    Elfving, Tommy; Nikazad, Touraj

    2009-01-01

We study a class of block-iterative (BI) methods proposed in image reconstruction for solving linear systems. A subclass, symmetric block-iteration (SBI), is derived such that both the semi-convergence analysis and the stopping rules developed for fully simultaneous iteration apply to it. Results on asymptotic convergence are also given; e.g., BI exhibits cyclic convergence irrespective of the consistency of the linear system. Further, it is shown that the limit points of SBI satisfy a weighted least-squares problem. We also present numerical results obtained using a trained stopping rule on SBI

  19. Incomplete block factorization preconditioning for indefinite elliptic problems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Chun-Hua [Univ. of Calgary, Alberta (Canada)

    1996-12-31

    The application of the finite difference method to approximate the solution of an indefinite elliptic problem produces a linear system whose coefficient matrix is block tridiagonal and symmetric indefinite. Such a linear system can be solved efficiently by a conjugate residual method, particularly when combined with a good preconditioner. We show that specific incomplete block factorization exists for the indefinite matrix if the mesh size is reasonably small. And this factorization can serve as an efficient preconditioner. Some efforts are made to estimate the eigenvalues of the preconditioned matrix. Numerical results are also given.

  20. Coding and decoding libraries of sequence-defined functional copolymers synthesized via photoligation.

    Science.gov (United States)

    Zydziak, Nicolas; Konrad, Waldemar; Feist, Florian; Afonin, Sergii; Weidner, Steffen; Barner-Kowollik, Christopher

    2016-11-30

    Designing artificial macromolecules with absolute sequence order represents a considerable challenge. Here we report an advanced light-induced avenue to monodisperse sequence-defined functional linear macromolecules up to decamers via a unique photochemical approach. The versatility of the synthetic strategy-combining sequential and modular concepts-enables the synthesis of perfect macromolecules varying in chemical constitution and topology. Specific functions are placed at arbitrary positions along the chain via the successive addition of monomer units and blocks, leading to a library of functional homopolymers, alternating copolymers and block copolymers. The in-depth characterization of each sequence-defined chain confirms the precision nature of the macromolecules. Decoding of the functional information contained in the molecular structure is achieved via tandem mass spectrometry without recourse to their synthetic history, showing that the sequence information can be read. We submit that the presented photochemical strategy is a viable and advanced concept for coding individual monomer units along a macromolecular chain.

  1. Algebraic and stochastic coding theory

    CERN Document Server

    Kythe, Dave K

    2012-01-01

    Using a simple yet rigorous approach, Algebraic and Stochastic Coding Theory makes the subject of coding theory easy to understand for readers with a thorough knowledge of digital arithmetic, Boolean and modern algebra, and probability theory. It explains the underlying principles of coding theory and offers a clear, detailed description of each code. More advanced readers will appreciate its coverage of recent developments in coding theory and stochastic processes. After a brief review of coding history and Boolean algebra, the book introduces linear codes, including Hamming and Golay codes.

  2. CURRENT STATE ANALYSIS OF AUTOMATIC BLOCK SYSTEM DEVICES, METHODS OF ITS SERVICE AND MONITORING

    Directory of Open Access Journals (Sweden)

    A. M. Beznarytnyy

    2014-01-01

Full Text Available Purpose. To develop a formalized description of the numerical-code automatic block system, based on an analysis of its characteristic failures and of its maintenance procedures. Methodology. Theoretical and analytical methods were used in this research. Findings. Typical failures of automatic block systems were analyzed, and the basic causes of failures were identified. It was determined that the majority of failures occur due to defects in the maintenance system. The advantages and disadvantages of the current service technology for the automatic block system were analyzed, and the tasks that can be automated by means of technical diagnostics were identified. A formal description of the numerical-code automatic block system as a graph in the state space of the system was carried out. Originality. A state graph of the numerical-code automatic block system is proposed that takes into account the gradual transition from the serviceable condition to the loss of efficiency. It allows diagnostic information to be selected according to attributes and increases the effectiveness of recovery operations in the case of a malfunction. Practical value. The obtained results of the analysis and the proposed state graph can be used as the basis for the development of new diagnostic devices for the automatic block system, which in turn will improve the efficiency and servicing of automatic block system devices in general.

  3. Minimum BER Receiver Filters with Block Memory for Uplink DS-CDMA Systems

    Directory of Open Access Journals (Sweden)

    Debbah Mérouane

    2008-01-01

Full Text Available The problem of synchronous multiuser receiver design for the uplink of direct-sequence single-antenna code division multiple access (DS-CDMA) networks is studied over frequency-selective fading channels. An exact expression for the bit error rate (BER) is derived for BPSK signaling. Moreover, an algorithm is proposed for finding the finite impulse response (FIR) receiver filters with block memory such that the exact BER of the active users is minimized. Several properties of the minimum-BER FIR filters with block memory are identified. The algorithm performance is found for scenarios with different channel qualities, spreading code lengths, receiver block memory sizes, near-far effects, and channel mismatch. For the BPSK constellation, the proposed FIR receiver structure with block memory has significantly better BER and near-far resistance than the corresponding minimum mean square error (MMSE) filters with block memory.

  4. Fractal Image Coding with Digital Watermarks

    Directory of Open Access Journals (Sweden)

    Z. Klenovicova

    2000-12-01

Full Text Available In this paper, some results of implementing digital watermarking methods into image coding based on fractal principles are presented. The paper focuses on two possible approaches to embedding digital watermarks into the fractal code of images: embedding digital watermarks into the position parameters of similar blocks, and embedding them into the coefficients of block similarity. Both algorithms were analyzed and verified on grayscale static images.

  5. Efficient Eulerian gyrokinetic simulations with block-structured grids

    International Nuclear Information System (INIS)

    Jarema, Denis

    2017-01-01

Gaining a deep understanding of plasma microturbulence is of paramount importance for the development of future nuclear fusion reactors, because it causes a strong outward transport of heat and particles. Gyrokinetics has proven itself as a valid mathematical model to simulate such plasma microturbulence effects. In spite of the advantages of this model, nonlinear radially extended (or global) gyrokinetic simulations are still extremely computationally expensive, involving a very large number of computational grid points. Hence, methods that reduce the number of grid points without a significant loss of accuracy are a prerequisite for running high-fidelity simulations. At the level of the mathematical model, the gyrokinetic approach achieves a reduction from six to five coordinates in comparison with fully kinetic models. This reduction leads to an important decrease in the total number of computational grid points. However, the velocity space mixed with the radial direction still requires a very fine resolution in grid-based codes, due to the disparities in the thermal speed caused by a strong temperature variation along the radial direction. An attempt to address this problem by modifying the underlying gyrokinetic set of equations leads to additional nonlinear terms, which are the most expensive parts to simulate. Furthermore, because of these modifications, well-established and computationally efficient implementations developed for the original set of equations can no longer be used. To tackle such issues, in this thesis we introduce an alternative approach of block-structured grids. This approach reduces the number of grid points significantly, but without changing the underlying mathematical model. Furthermore, our technique is minimally invasive and allows the reuse of a large amount of already existing code using rectilinear grids, modifications being necessary only on the block boundaries. Moreover, the block-structured grid can be

  6. Efficient Eulerian gyrokinetic simulations with block-structured grids

    Energy Technology Data Exchange (ETDEWEB)

    Jarema, Denis

    2017-01-20

Gaining a deep understanding of plasma microturbulence is of paramount importance for the development of future nuclear fusion reactors, because it causes a strong outward transport of heat and particles. Gyrokinetics has proven itself as a valid mathematical model to simulate such plasma microturbulence effects. In spite of the advantages of this model, nonlinear radially extended (or global) gyrokinetic simulations are still extremely computationally expensive, involving a very large number of computational grid points. Hence, methods that reduce the number of grid points without a significant loss of accuracy are a prerequisite for running high-fidelity simulations. At the level of the mathematical model, the gyrokinetic approach achieves a reduction from six to five coordinates in comparison with fully kinetic models. This reduction leads to an important decrease in the total number of computational grid points. However, the velocity space mixed with the radial direction still requires a very fine resolution in grid-based codes, due to the disparities in the thermal speed caused by a strong temperature variation along the radial direction. An attempt to address this problem by modifying the underlying gyrokinetic set of equations leads to additional nonlinear terms, which are the most expensive parts to simulate. Furthermore, because of these modifications, well-established and computationally efficient implementations developed for the original set of equations can no longer be used. To tackle such issues, in this thesis we introduce an alternative approach of block-structured grids. This approach reduces the number of grid points significantly, but without changing the underlying mathematical model. Furthermore, our technique is minimally invasive and allows the reuse of a large amount of already existing code using rectilinear grids, modifications being necessary only on the block boundaries. Moreover, the block-structured grid can be

  7. Head simulation of linear accelerators and spectra considerations using EGS4 Monte Carlo code in a PC

    Energy Technology Data Exchange (ETDEWEB)

    Malatara, G; Kappas, K [Medical Physics Department, Faculty of Medicine, University of Patras, 265 00 Patras (Greece)]; Sphiris, N [Ethnodata S.A., Athens (Greece)]

    1994-12-31

    In this work, the Monte Carlo code EGS4 was used to simulate radiation transport through linear accelerators to produce and score energy spectra and angular distributions of 6, 12, 15 and 25 MeV bremsstrahlung photons exiting from different accelerator treatment heads. The energy spectra were used as input for a convolution method program to calculate the tissue-maximum ratio in water. 100,000 histories were recorded in the scoring plane for each simulation. The validity of the Monte Carlo simulation and the precision of the calculated spectra were verified experimentally and found to be in good agreement. We believe that accurate simulation of the different components of the linear accelerator head is very important for the precision of the results. The results of the Monte Carlo and convolution methods can be compared with experimental data for verification, and they are powerful and practical tools for generating accurate spectra and dosimetric data. (authors). 10 refs, 5 figs, 2 tabs.

  8. Head simulation of linear accelerators and spectra considerations using EGS4 Monte Carlo code in a PC

    International Nuclear Information System (INIS)

    Malatara, G.; Kappas, K.; Sphiris, N.

    1994-01-01

    In this work, the Monte Carlo code EGS4 was used to simulate radiation transport through linear accelerators to produce and score energy spectra and angular distributions of 6, 12, 15 and 25 MeV bremsstrahlung photons exiting from different accelerator treatment heads. The energy spectra were used as input for a convolution method program to calculate the tissue-maximum ratio in water. 100,000 histories were recorded in the scoring plane for each simulation. The validity of the Monte Carlo simulation and the precision of the calculated spectra were verified experimentally and found to be in good agreement. We believe that accurate simulation of the different components of the linear accelerator head is very important for the precision of the results. The results of the Monte Carlo and convolution methods can be compared with experimental data for verification, and they are powerful and practical tools for generating accurate spectra and dosimetric data. (authors)

  9. Error-Detecting Identification Codes for Algebra Students.

    Science.gov (United States)

    Sutherland, David C.

    1990-01-01

    Discusses common error-detecting identification codes using linear algebra terminology to provide an interesting application of algebra. Presents examples from the International Standard Book Number, the Universal Product Code, bank identification numbers, and the ZIP code bar code. (YP)
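
    The check-digit schemes mentioned above are easy to state in code. The following sketch verifies an ISBN-10 (the weighted digit sum must be divisible by 11) and a UPC-A (three times the odd-position digits plus the even-position digits must be divisible by 10); both schemes therefore detect any single-digit error.

```python
def isbn10_valid(isbn):
    """ISBN-10 check: 1*d1 + 2*d2 + ... + 10*d10 must be divisible
    by 11 ('X' stands for the value 10 in the last position)."""
    digits = [10 if c == 'X' else int(c) for c in isbn if c not in '- ']
    return len(digits) == 10 and sum(i * d for i, d in enumerate(digits, 1)) % 11 == 0

def upc_valid(upc):
    """UPC-A check: 3*(sum of odd-position digits) + (sum of
    even-position digits) must be divisible by 10."""
    d = [int(c) for c in upc]
    return len(d) == 12 and (3 * sum(d[0::2]) + sum(d[1::2])) % 10 == 0

print(isbn10_valid("0-306-40615-2"))   # a valid ISBN-10
print(upc_valid("036000291452"))       # a valid UPC-A
```

    Changing any single digit of either number makes the corresponding check fail, which is exactly the error-detecting property the article discusses.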

  10. Error-correction coding for digital communications

    Science.gov (United States)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
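
    As a concrete instance of the syndrome decoding the book covers, here is a minimal [7,4] Hamming code sketch: with parity bits at positions 1, 2 and 4, the syndrome of a received word directly names the position of a single bit error.

```python
# Encode 4 data bits into a [7,4] Hamming codeword (positions 1..7,
# parity at 1, 2, 4), then correct a single bit error via the syndrome.
def hamming74_encode(d):              # d = [d1, d2, d3, d4]
    c = [0] * 8                       # index 0 unused
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    return c[1:]

def hamming74_correct(word):          # word = 7 bits, at most one flipped
    c = [0] + list(word)
    s = (c[1] ^ c[3] ^ c[5] ^ c[7]) \
      + ((c[2] ^ c[3] ^ c[6] ^ c[7]) << 1) \
      + ((c[4] ^ c[5] ^ c[6] ^ c[7]) << 2)
    if s:                             # nonzero syndrome = error position
        c[s] ^= 1
    return c[1:]

code = hamming74_encode([1, 0, 1, 1])
received = code[:]
received[4] ^= 1                      # corrupt position 5
corrected = hamming74_correct(received)
print(corrected == code)
```

    This tiny group code illustrates the generalized parity-check and syndrome-decoding machinery that the book develops for much larger block codes.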

  11. Fabrication of long linear arrays of plastic optical fibers with squared ends for the use of code mark printing lithography

    Science.gov (United States)

    Horiuchi, Toshiyuki; Watanabe, Jun; Suzuki, Yuta; Iwasaki, Jun-ya

    2017-05-01

    Two-dimensional code marks are often used for production management. In particular, in the production lines of liquid-crystal-display panels and similar devices, data on fabrication processes such as production number and process conditions are written on each substrate or device in detail and used for quality management. For this reason, lithography systems specialized in code mark printing have been developed. However, conventional systems using lamp projection exposure or laser scan exposure are very expensive. Therefore, the development of a low-cost exposure system using light-emitting diodes (LEDs) and optical fibers with squared ends arrayed in a matrix is strongly desired. In past research, the feasibility of such a new exposure system was demonstrated using a handmade system equipped with 100 LEDs with a central wavelength of 405 nm, a 10×10 matrix of optical fibers with 1 mm square ends, and a 10X projection lens. Building on this progress, a new method for fabricating large-scale arrays of finer fibers with squared ends was developed in this paper. Up to 40 plastic optical fibers were arranged in a linear gap of an arraying instrument and simultaneously squared by heating them on a hotplate at 120°C for 7 min. Fiber sizes were homogeneous within 496±4 μm. In addition, the average light leak was reduced from 34.4% to 21.3% by adopting the new method in place of the conventional one-by-one squaring method. Square matrix arrays suitable for printing code marks will be obtained by stacking the newly fabricated linear arrays.

  12. Reactivity-induced time-dependencies of EBR-II linear and non-linear feedbacks

    International Nuclear Information System (INIS)

    Grimm, K.N.; Meneghetti, D.

    1988-01-01

    Time-dependent linear feedback reactivities are calculated for stereotypical subassemblies in the EBR-II reactor. These quantities are calculated from nodal reactivities obtained from a kinetic-code analysis of an experiment in which the change in power resulted from the dropping of a control rod. Shown with these linear reactivities are the reactivity associated with the control-rod shaft contraction and also the time-dependent non-linear (mainly bowing) component deduced from the inverse kinetics of the experimentally measured fission power and the calculated linear reactivities. (author)

  13. Power calculation of linear and angular incremental encoders

    Science.gov (United States)

    Prokofev, Aleksandr V.; Timofeev, Aleksandr N.; Mednikov, Sergey V.; Sycheva, Elena A.

    2016-04-01

    Automation technology is constantly expanding its role in improving the efficiency of manufacturing and testing processes in all branches of industry. More than ever before, the mechanical movements of linear slides, rotary tables, robot arms, actuators, etc. are numerically controlled. Linear and angular incremental photoelectric encoders measure mechanical motion and transmit the measured values back to the control unit. The capabilities of these systems are undergoing continual development in terms of their resolution, accuracy and reliability, their measuring ranges, and maximum speeds. This article discusses a method for the power calculation of linear and angular incremental photoelectric encoders, used to find the optimum parameters for their components, such as light emitters, photo-detectors, linear and angular scales, and other optical components. It analyzes methods and devices that permit high resolutions on the order of 0.001 mm or 0.001°, as well as large measuring lengths of over 100 mm. In linear and angular incremental photoelectric encoders, the optical beam, usually formed by a condenser lens, passes through the measuring unit and is modulated according to the movement of the scanning head or measuring raster. The transmitted light beam is converted into an electrical signal by the photo-detector block and processed in the electronic block. The starting point of the power calculation is therefore the required value of the optical signal at the input of the photo-detector block that can be reliably recorded and processed in the electronic unit.
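
    Downstream of the optics, an incremental encoder's counting principle is simple to sketch: the two photo-detector channels A and B are in quadrature, and the order in which their states change gives the direction of motion. The sample sequences below are invented for illustration.

```python
# Count signed increments from quadrature channels A/B of an incremental
# encoder. Each (A, B) sample is one of four states; a step to the next
# state in the cycle 00 -> 01 -> 11 -> 10 counts +1, the reverse -1.
SEQ = [(0, 0), (0, 1), (1, 1), (1, 0)]

def count_quadrature(samples):
    pos = 0
    prev = SEQ.index(samples[0])
    for ab in samples[1:]:
        cur = SEQ.index(ab)
        step = (cur - prev) % 4
        if step == 1:
            pos += 1
        elif step == 3:
            pos -= 1
        prev = cur
    return pos

forward = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]   # one cycle forward
print(count_quadrature(forward))
print(count_quadrature(forward[::-1]))
```

    Each full cycle yields four counts, which is why quadrature evaluation multiplies the basic grating resolution by four.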

  14. The BL-QMR algorithm for non-Hermitian linear systems with multiple right-hand sides

    Energy Technology Data Exchange (ETDEWEB)

    Freund, R.W. [AT&T Bell Labs., Murray Hill, NJ (United States)]

    1996-12-31

    Many applications require the solution of multiple linear systems that have the same coefficient matrix, but differ in their right-hand sides. Instead of applying an iterative method to each of these systems individually, it is potentially much more efficient to employ a block version of the method that generates iterates for all the systems simultaneously. However, it is quite intricate to develop robust and efficient block iterative methods. In particular, a key issue in the design of block iterative methods is the need for deflation. The iterates for the different systems that are produced by a block method will, in general, converge at different stages of the block iteration. An efficient and robust block method needs to be able to detect and then deflate converged systems. Each such deflation reduces the block size, and thus the block method needs to be able to handle varying block sizes. For block Krylov-subspace methods, deflation is also crucial in order to delete linearly and almost linearly dependent vectors in the underlying block Krylov sequences. An added difficulty arises for Lanczos-type block methods for non-Hermitian systems, since they involve two different block Krylov sequences. In these methods, deflation can now occur independently in both sequences, and consequently, the block sizes in the two sequences may become different in the course of the iteration, even though they were identical at the beginning. We present a block version of Freund and Nachtigal's quasi-minimal residual method for the solution of non-Hermitian linear systems with single right-hand sides.
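
    The deflation idea can be illustrated with a deliberately simple toy (a damped Richardson iteration on a diagonal system, not BL-QMR itself): all right-hand sides are iterated as one block, and a column is removed from the block as soon as its residual drops below tolerance.

```python
# Toy illustration of convergence-based deflation in a block iteration:
# iterate several right-hand sides at once and "deflate" (drop) each
# column when its residual is small enough, shrinking the block.
def block_richardson(a_diag, rhs_cols, omega=0.4, tol=1e-8, max_it=500):
    n = len(a_diag)
    active = {j: [0.0] * n for j in range(len(rhs_cols))}
    solutions, iters_used = {}, {}
    for it in range(1, max_it + 1):
        for j in list(active):
            x = active[j]
            r = [rhs_cols[j][i] - a_diag[i] * x[i] for i in range(n)]
            if max(abs(v) for v in r) < tol:      # converged: deflate
                solutions[j], iters_used[j] = x, it
                del active[j]
                continue
            for i in range(n):                    # damped Richardson step
                x[i] += omega * r[i]
        if not active:
            break
    return solutions, iters_used

a = [1.0, 2.0, 4.0]                 # diagonal SPD "matrix"
b_cols = [[1.0, 2.0, 4.0], [2.0, 0.0, 0.0]]
sols, iters = block_richardson(a, b_cols)
print(sols, iters)
```

    In a real block Krylov method the deflation test is subtler (near-dependence of basis vectors, not just small residuals), but the bookkeeping of a shrinking block is the same.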

  15. Code Samples Used for Complexity and Control

    Science.gov (United States)

    Ivancevic, Vladimir G.; Reid, Darryn J.

    2015-11-01

    The following sections are included: * MathematicaⓇ Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * 2C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents

  16. A non-linear, finite element, heat conduction code to calculate temperatures in solids of arbitrary geometry

    International Nuclear Information System (INIS)

    Tayal, M.

    1987-01-01

    Structures often operate at elevated temperatures. Temperature calculations are needed so that the design can accommodate thermally induced stresses and material changes. A finite element computer code called FEAT has been developed to calculate temperatures in solids of arbitrary shapes. FEAT solves the classical equation for steady-state conduction of heat. The solution is obtained for two-dimensional (plane or axisymmetric) or three-dimensional problems. Gap elements are used to simulate interfaces between neighbouring surfaces. The code can model: conduction; internal generation of heat; prescribed convection to a heat sink; prescribed temperatures at boundaries; prescribed heat fluxes on some surfaces; and temperature dependence of material properties such as thermal conductivity. The user has the option of specifying the detailed variation of thermal conductivity with temperature. For the convenience of the nuclear fuel industry, the user can also opt for pre-coded values of thermal conductivity, obtained from the MATPRO data base (sponsored by the U.S. Nuclear Regulatory Commission). The finite element method makes FEAT versatile and enables it to accommodate complex geometries accurately. The optional link to MATPRO makes FEAT convenient for the nuclear fuel industry to use, without loss of generality. Special numerical techniques make the code inexpensive to run for the type of material non-linearities often encountered in the analysis of nuclear fuel. The code, however, is general, and can be used for other components of the reactor, or even for non-nuclear systems. The predictions of FEAT have been compared against several analytical solutions; the agreement is usually better than 5%. Thermocouple measurements show that the FEAT predictions are consistent with measured changes in temperatures in simulated pressure tubes. FEAT was also found to predict well the axial variations in temperatures in the end-pellets (UO2) of two fuel elements irradiated
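
    The kind of material non-linearity such a code handles can be sketched in one dimension: steady conduction with a conductivity that grows with temperature, solved by Picard iteration (recompute k from the current temperatures, then relax). The conductivity law and grid below are invented for illustration, not taken from FEAT or MATPRO.

```python
# Minimal 1-D sketch (not FEAT itself): steady conduction with a
# temperature-dependent conductivity k(T) = 1 + 0.5*T and fixed end
# temperatures, solved by Picard iteration on a finite-difference grid.
def solve_bar(n=9, t_left=0.0, t_right=1.0, sweeps=500):
    T = [t_left + (t_right - t_left) * i / (n - 1) for i in range(n)]
    for _ in range(sweeps):
        k = [1.0 + 0.5 * t for t in T]        # conductivity at nodes
        new = T[:]
        for i in range(1, n - 1):
            kw = 0.5 * (k[i - 1] + k[i])      # face conductivities
            ke = 0.5 * (k[i] + k[i + 1])
            new[i] = (kw * T[i - 1] + ke * T[i + 1]) / (kw + ke)
        T = new
    return T

T = solve_bar()
print([round(t, 3) for t in T])
```

    Because k rises with T, the hot end conducts better and the profile bows above the straight line a constant-k bar would give; the midpoint temperature exceeds the linear value of 0.5.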

  17. Study of coherent Synchrotron Radiation effects by means of a new simulation code based on the non-linear extension of the operator splitting method

    International Nuclear Information System (INIS)

    Dattoli, G.; Schiavi, A.; Migliorati, M.

    2006-03-01

    The coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. The complexity of the physical mechanisms underlying the onset of instabilities due to CSR demands accurate descriptions, capable of including the large number of features of an actual accelerating device. A code devoted to the analysis of this type of problem should be fast and reliable, conditions that are rarely achieved at the same time. In the past, codes based on Lie algebraic techniques have been very efficient in treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treating CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of the non-linear contribution due to wake field effects. The proposed solution method exploits an algebraic technique using exponential operators. We show that the integration procedure is capable of reproducing the onset of an instability and the effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, considerations on the threshold of the instability are also developed

  18. Study of coherent synchrotron radiation effects by means of a new simulation code based on the non-linear extension of the operator splitting method

    International Nuclear Information System (INIS)

    Dattoli, G.; Migliorati, M.; Schiavi, A.

    2007-01-01

    The coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. The complexity of the physical mechanisms underlying the onset of instabilities due to CSR demands accurate descriptions, capable of including the large number of features of an actual accelerating device. A code devoted to the analysis of these types of problems should be fast and reliable, conditions that are rarely achieved at the same time. In the past, codes based on Lie algebraic techniques have been very efficient in treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treating CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of the non-linear contribution due to wake field effects. The proposed solution method exploits an algebraic technique that uses exponential operators. We show that the integration procedure is capable of reproducing the onset of instability and the effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, considerations on the threshold of the instability are also developed
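
    The exponential-operator splitting idea behind such codes can be demonstrated on a scalar toy problem with a linear part and a nonlinear part, each advanced by its exact flow in a second-order Strang arrangement. The equation and coefficients below are invented for illustration, not taken from the paper.

```python
import math

# Strang (2nd-order) operator splitting for the toy problem
# dy/dt = a*y + b*y**2: the linear and nonlinear parts are each
# advanced with their exact (exponential-type) flows.
def lin_flow(y, a, t):                # exact flow of y' = a*y
    return y * math.exp(a * t)

def nonlin_flow(y, b, t):             # exact flow of y' = b*y**2
    return y / (1.0 - b * y * t)

def strang_step(y, a, b, dt):
    y = nonlin_flow(y, b, dt / 2)
    y = lin_flow(y, a, dt)
    return nonlin_flow(y, b, dt / 2)

def integrate(y0, a, b, t_end, n):
    dt, y = t_end / n, y0
    for _ in range(n):
        y = strang_step(y, a, b, dt)
    return y

a, b, y0 = -1.0, 0.2, 1.0
coarse = integrate(y0, a, b, 1.0, 10)       # 10 coarse steps
fine = integrate(y0, a, b, 1.0, 10000)      # near-exact reference
print(coarse, fine)
```

    For this Bernoulli-type equation the exact value at t = 1 is 1/(0.2 + 0.8e) ≈ 0.4211, and even ten coarse Strang steps land close to it, which is the appeal of splitting with exact sub-flows.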

  19. Economic analysis of sectional concrete blocks uses in biological shieldings

    International Nuclear Information System (INIS)

    Ivanov, V.N.

    1977-01-01

    The relative economy of different structural embodiments of the biological protection of a research reactor has been evaluated. The alternatives include cast in-situ concrete and prefabricated blocks with different linear dimension tolerances (±2, ±5 and ±7 mm). The cost-benefit estimates have been made according to the reduced cost calculated for the final products, the erected structures. It has been found that the optimum tolerances for 6-meter-long blocks are not less than ±5 mm for the other linear dimensions. The optimum concrete block volume is 1 to 1.5 m³ for dismountable structures and more than 4 m³ for prefabricated protection structures

  20. The Langley Stability and Transition Analysis Code (LASTRAC) : LST, Linear and Nonlinear PSE for 2-D, Axisymmetric, and Infinite Swept Wing Boundary Layers

    Science.gov (United States)

    Chang, Chau-Lyan

    2003-01-01

    During the past two decades, our understanding of laminar-turbulent transition flow physics has advanced significantly owing, in large part, to NASA program support such as the National Aerospace Plane (NASP), High-Speed Civil Transport (HSCT), and Advanced Subsonic Technology (AST) programs. Experimental, theoretical, and computational efforts on issues such as receptivity and the linear and nonlinear evolution of instability waves have broadened our knowledge base for this intricate flow phenomenon. Despite these advances, transition prediction remains a nontrivial task for engineers due to the lack of a widely available, robust, and efficient prediction tool. The design and development of the LASTRAC code is aimed at providing one such engineering tool that is easy to use and yet capable of dealing with a broad range of transition-related issues. LASTRAC was written from scratch based on state-of-the-art numerical methods for stability analysis and modern software technologies. At low fidelity, it allows users to perform linear stability analysis and N-factor transition correlation for a broad range of flow regimes and configurations by using either the linear stability theory (LST) or the linear parabolized stability equations (LPSE) method. At high fidelity, users may use nonlinear PSE to track finite-amplitude disturbances until the skin friction rises. Coupled with the built-in receptivity model that is currently under development, the nonlinear PSE method offers a synergistic approach to predicting transition onset for a given disturbance environment based on first principles. This paper describes the governing equations, numerical methods, code development, and case studies for the current release of LASTRAC. Practical applications of LASTRAC are demonstrated for linear stability calculations, N-factor transition correlation, non-linear breakdown simulations, and control of stationary crossflow instability in supersonic swept wing boundary
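
    The N-factor correlation mentioned above amounts to integrating the local spatial growth rate along the surface; transition is then correlated with the station where N reaches an empirical threshold (often around 9). A sketch with a hypothetical growth-rate curve:

```python
# N-factor accumulation: N(x) = integral of the local spatial growth
# rate (-alpha_i) over the streamwise coordinate, via the trapezoid rule.
def n_factor(xs, growth):
    n, ns = 0.0, [0.0]
    for i in range(1, len(xs)):
        n += 0.5 * (growth[i - 1] + growth[i]) * (xs[i] - xs[i - 1])
        ns.append(n)
    return ns

xs = [0.1 * i for i in range(11)]                      # streamwise stations
growth = [max(0.0, 20.0 * x * (1.0 - x)) for x in xs]  # hypothetical -alpha_i
ns = n_factor(xs, growth)
print(ns[-1])
```

    Here the growth-rate curve is invented; in practice it comes from LST or PSE solutions at each station, and the envelope over frequencies/wavenumbers is used.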

  1. On entanglement-assisted quantum codes achieving the entanglement-assisted Griesmer bound

    Science.gov (United States)

    Li, Ruihu; Li, Xueliang; Guo, Luobin

    2015-12-01

    The theory of entanglement-assisted quantum error-correcting codes (EAQECCs) is a generalization of the standard stabilizer formalism. Any quaternary (or binary) linear code can be used to construct EAQECCs under the entanglement-assisted (EA) formalism. We derive an EA-Griesmer bound for linear EAQECCs, which is a quantum analog of the Griesmer bound for classical codes. This EA-Griesmer bound is tighter than known bounds for EAQECCs in the literature. For a given quaternary linear code {C}, we show that the parameters of the EAQECC that is EA-stabilized by the dual of {C} can be determined by a zero-radical quaternary code induced from {C}, and a necessary condition under which a linear EAQECC may achieve the EA-Griesmer bound is also presented. We construct four families of optimal EAQECCs and then show that the necessary condition for the existence of EAQECCs is also sufficient for some low-dimensional linear EAQECCs. The four families of optimal EAQECCs are degenerate codes and go beyond earlier constructions. What is more, except four codes, our [[n,k,d_{ea};c

  2. Toric Codes, Multiplicative Structure and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    2017-01-01

    Long linear codes constructed from toric varieties over finite fields, their multiplicative structure and decoding. The main theme is the inherent multiplicative structure on toric codes. The multiplicative structure allows for \\emph{decoding}, resembling the decoding of Reed-Solomon codes and al...

  3. LOLA SYSTEM: A code block for nodal PWR simulation. Part. II - MELON-3, CONCON and CONAXI Codes

    International Nuclear Information System (INIS)

    Aragones, J. M.; Ahnert, C.; Gomez Santamaria, J.; Rodriguez Olabarria, I.

    1985-01-01

    Description of the theory and users manual of the MELON-3, CONCON and CONAXI codes, which are part of LOLA SYSTEM, the core calculation system based on one-group nodal theory. These auxiliary codes provide some of the input data for the main module SIMULA-3: the reactivity correlation constants, the albedos and the transport factors. (Author) 7 refs

  4. LOLA SYSTEM: A code block for nodal PWR simulation. Part. II - MELON-3, CONCON and CONAXI Codes

    Energy Technology Data Exchange (ETDEWEB)

    Aragones, J M; Ahnert, C; Gomez Santamaria, J; Rodriguez Olabarria, I

    1985-07-01

    Description of the theory and users manual of the MELON-3, CONCON and CONAXI codes, which are part of LOLA SYSTEM, the core calculation system based on one-group nodal theory. These auxiliary codes provide some of the input data for the main module SIMULA-3: the reactivity correlation constants, the albedos and the transport factors. (Author) 7 refs.

  5. Minimum BER Receiver Filters with Block Memory for Uplink DS-CDMA Systems

    Directory of Open Access Journals (Sweden)

    Mérouane Debbah

    2008-05-01

    Full Text Available The problem of synchronous multiuser receiver design in the case of direct-sequence single-antenna code division multiple access (DS-CDMA) uplink networks is studied over frequency-selective fading channels. An exact expression for the bit error rate (BER) is derived in the case of BPSK signaling. Moreover, an algorithm is proposed for finding the finite impulse response (FIR) receiver filters with block memory such that the exact BER of the active users is minimized. Several properties of the minimum-BER FIR filters with block memory are identified. The algorithm's performance is evaluated for scenarios with different channel qualities, spreading code lengths, receiver block memory sizes, near-far effects, and channel mismatch. For the BPSK constellation, the proposed FIR receiver structure with block memory has significantly better BER versus Eb/N0 and better near-far resistance than the corresponding minimum mean square error (MMSE) filters with block memory.
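
    For reference, the single-user BPSK error rate over an AWGN channel, against which such multiuser receivers are commonly benchmarked, is Q(sqrt(2*Eb/N0)). A small sketch using the complementary error function:

```python
import math

# Single-user BPSK over AWGN: BER = Q(sqrt(2*Eb/N0)),
# with Q(x) = 0.5 * erfc(x / sqrt(2)).
def q_func(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_ber(ebn0_db):
    ebn0 = 10.0 ** (ebn0_db / 10.0)          # dB -> linear
    return q_func(math.sqrt(2.0 * ebn0))

for db in (0, 4, 8):
    print(db, bpsk_ber(db))
```

    At 0 dB this gives roughly 7.9e-2; multiuser interference and fading push the exact BER expressions of the paper above this single-user floor.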

  6. Micro- and nanophase separations in hierarchical self-assembly of strongly amphiphilic block copolymer-based ionic supramolecules

    DEFF Research Database (Denmark)

    Ayoubi, Mehran Asad; Zhu, Kaizheng; Nyström, Bo

    2013-01-01

    block), a class of ionic supramolecules are successfully synthesized whose molecular architecture consists of a poly(styrene) PS block (Linear block) covalently connected to a strongly amphiphilic comb-like block (AmphComb block), i.e. Linear-b-AmphComb. In the melt state, these ionic supramolecules can.......20 (SLL/C and SBCC/C) and ∼0.28 (C/L). Finally, the specific influences of the strongly amphiphilic nature of the AmphComb blocks on the observed morphological and hierarchical behaviours of our system are discussed. For reference, stoichiometric strongly amphiphilic comb-like (AmphComb) ionic...

  7. Annotating non-coding regions of the genome.

    Science.gov (United States)

    Alexander, Roger P; Fang, Gang; Rozowsky, Joel; Snyder, Michael; Gerstein, Mark B

    2010-08-01

    Most of the human genome consists of non-protein-coding DNA. Recently, progress has been made in annotating these non-coding regions through the interpretation of functional genomics experiments and comparative sequence analysis. One can conceptualize functional genomics analysis as involving a sequence of steps: turning the output of an experiment into a 'signal' at each base pair of the genome; smoothing this signal and segmenting it into small blocks of initial annotation; and then clustering these small blocks into larger derived annotations and networks. Finally, one can relate functional genomics annotations to conserved units and measures of conservation derived from comparative sequence analysis.
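
    The smoothing-and-segmentation step described above can be sketched directly: smooth a per-base signal with a moving average, then turn runs above a threshold into small annotation blocks. The signal values below are invented for illustration.

```python
# Smooth a per-base signal, then segment runs above a threshold into
# half-open annotation blocks [start, end).
def smooth(signal, w=3):
    half = w // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def segment(signal, thresh):
    blocks, start = [], None
    for i, v in enumerate(signal + [float("-inf")]):   # sentinel closes last run
        if v >= thresh and start is None:
            start = i
        elif v < thresh and start is not None:
            blocks.append((start, i))
            start = None
    return blocks

raw = [0, 0, 5, 6, 5, 0, 0, 0, 4, 5, 0]
blocks = segment(smooth(raw), thresh=2.0)
print(blocks)
```

    Real pipelines use more sophisticated segmentation (HMMs, change-point methods), but the signal-to-blocks structure is the same; clustering these blocks yields the larger derived annotations the review describes.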

  8. Influence of Compacting Rate on the Properties of Compressed Earth Blocks

    Directory of Open Access Journals (Sweden)

    Humphrey Danso

    2016-01-01

    Full Text Available Compaction contributes significantly to the strength properties of compressed earth blocks. This paper investigates the influence of compacting rate on the properties of compressed earth blocks. Experiments were conducted to determine the density, compressive strength, splitting tensile strength, and erosion properties of compressed earth blocks produced at different compacting speeds. The study concludes that although the low compaction rate achieved slightly better performance characteristics, there is no statistically significant difference between soil blocks produced at low and high compacting rates, demonstrating that the compacting rate has little influence on the properties of the blocks. It was further found that there are strong linear correlations between compressive strength and density, and between density and erosion. However, weak linear correlations were found between tensile strength and compressive strength, and between tensile strength and density.
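
    The correlation analysis reported above is a standard Pearson computation; the sketch below applies it to invented density/strength pairs (not the paper's data) purely to show the mechanics.

```python
import math

# Pearson correlation coefficient r = cov(x, y) / (s_x * s_y).
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

density = [1850, 1870, 1905, 1930, 1960]     # kg/m^3 (hypothetical)
strength = [2.1, 2.3, 2.6, 2.8, 3.1]         # MPa (hypothetical)
r = pearson(density, strength)
print(round(r, 3))
```

    An r close to 1, as for these invented pairs, is what "strong linear correlation" means in the study's analysis.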

  9. Round-Robin Streaming with Generations

    DEFF Research Database (Denmark)

    Li, Yao; Vingelmann, Peter; Pedersen, Morten Videbæk

    2012-01-01

    We consider three types of application layer coding for streaming over lossy links: random linear coding, systematic random linear coding, and structured coding. The file being streamed is divided into sub-blocks (generations). Code symbols are formed by combining data belonging to the same...
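
    A minimal sketch of generation-based random linear coding over GF(2): each coded symbol carries a coefficient vector and the XOR of the selected packets, and the receiver recovers the generation by Gaussian elimination once it holds g linearly independent symbols. The packets and coefficient vectors below are fixed by hand (and chosen independent) for illustration; real senders draw the coefficients at random.

```python
# Generation-based linear coding over GF(2). Packets are small integers
# standing in for packet bit-strings; coefficient vectors are bitmasks.
def encode(packets, mask):
    out = 0
    for i, p in enumerate(packets):
        if (mask >> i) & 1:
            out ^= p
    return out

def decode(symbols, g):
    """symbols: [coeff_mask, payload] pairs, assumed rank g.
    Gauss-Jordan elimination over GF(2) recovers the packets."""
    rows = [list(s) for s in symbols]
    done = []
    for col in range(g):
        pivot = next(r for r in rows if (r[0] >> col) & 1)
        rows.remove(pivot)
        for r in rows + done:
            if (r[0] >> col) & 1:
                r[0] ^= pivot[0]
                r[1] ^= pivot[1]
        done.append(pivot)
    out = [0] * g                      # each row now has one coefficient bit
    for mask, payload in done:
        out[mask.bit_length() - 1] = payload
    return out

generation = [0x3A, 0x7F, 0x12, 0xC4]            # g = 4 packets
masks = [0b1011, 0b0110, 0b1100, 0b0101]         # independent coefficients
received = [[m, encode(generation, m)] for m in masks]
recovered = decode(received, 4)
print(recovered == generation)
```

    Systematic variants send the g packets uncoded first and use coded symbols only for repair, which reduces decoding work when losses are light.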

  10. Codes and curves

    CERN Document Server

    Walker, Judy L

    2000-01-01

    When information is transmitted, errors are likely to occur. Coding theory examines efficient ways of packaging data so that these errors can be detected, or even corrected. The traditional tools of coding theory have come from combinatorics and group theory. Lately, however, coding theorists have added techniques from algebraic geometry to their toolboxes. In particular, by re-interpreting the Reed-Solomon codes, one can see how to define new codes based on divisors on algebraic curves. For instance, using modular curves over finite fields, Tsfasman, Vladut, and Zink showed that one can define a sequence of codes with asymptotically better parameters than any previously known codes. This monograph is based on a series of lectures the author gave as part of the IAS/PCMI program on arithmetic algebraic geometry. Here, the reader is introduced to the exciting field of algebraic geometric coding theory. Presenting the material in the same conversational tone of the lectures, the author covers linear codes, inclu...

  11. Dynamic analysis of aircraft impact using the linear elastic finite element codes FINEL, SAP and STARDYNE

    International Nuclear Information System (INIS)

    Lundsager, P.; Krenk, S.

    1975-08-01

    The static and dynamic response of a cylindrical/spherical containment to a Boeing 720 impact is computed using three different linear elastic computer codes: FINEL, SAP and STARDYNE. Stress and displacement fields are shown together with time histories for a point in the impact zone. The main conclusions from this study are:
    - In this case the maximum dynamic load factors for stress and displacements were close to 1, but a static analysis alone is not fully sufficient.
    - More realistic load time histories should be considered.
    - The main effects seem to be local; the present study does not indicate general collapse from elastic stresses alone.
    - Further study of material properties at high rates is needed. (author)

  12. Gravity inversion code

    International Nuclear Information System (INIS)

    Burkhard, N.R.

    1979-01-01

    The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
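
    Stabilized linear inversion of the kind described is commonly done with Tikhonov (damped least-squares) regularization, x = (AᵀA + λI)⁻¹Aᵀb. The two-parameter sketch below uses invented, nearly collinear data (not Nevada Test Site data) to show the form.

```python
# Tikhonov-regularized least squares for a 2-parameter problem:
# solve (A^T A + lam*I) x = A^T b directly via Cramer's rule.
def solve2(m, v):
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(v[0] * m[1][1] - v[1] * m[0][1]) / det,
            (m[0][0] * v[1] - m[1][0] * v[0]) / det]

def tikhonov(A, b, lam):
    ata = [[sum(A[k][i] * A[k][j] for k in range(len(A)))
            + (lam if i == j else 0.0) for j in range(2)] for i in range(2)]
    atb = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(2)]
    return solve2(ata, atb)

A = [[1.0, 1.0], [1.0, 1.001], [1.0, 0.999]]   # nearly collinear columns
b = [2.0, 2.001, 1.999]                        # consistent with x = [1, 1]
x = tikhonov(A, b, lam=1e-6)
print(x)
```

    The damping term λ keeps the near-singular normal equations well conditioned; iterating such a stabilized solve against a forward model is the TREND/INVERT-style loop the abstract describes.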

  13. VENTURE: a code block for solving multigroup neutronics problems applying the finite-difference diffusion-theory approximation to neutron transport, version II. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.

    1977-11-01

    The report documents the computer code block VENTURE, designed to solve multigroup neutronics problems by applying the finite-difference diffusion-theory approximation to neutron transport (or, alternatively, simple P₁) in up to three-dimensional geometry. It uses and generates interface data files adopted in the cooperative effort sponsored by the Reactor Physics Branch of the Division of Reactor Research and Development of the Energy Research and Development Administration. Several different data handling procedures have been incorporated to provide considerable flexibility; it is possible to solve a wide variety of problems on a variety of computer configurations relatively efficiently.
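
    The finite-difference diffusion approximation is easy to show in its simplest setting: one group, one dimension, a uniform source, and zero-flux boundaries, giving a tridiagonal system solvable with the Thomas algorithm. This is a sketch of the discretization idea only, not VENTURE's implementation.

```python
# One-group, 1-D finite-difference diffusion: -D*phi'' + siga*phi = S
# on (0, L) with phi = 0 at both ends, n interior nodes, Thomas solve.
def diffusion_1d(D, siga, S, L, n):
    h = L / (n + 1)
    sub = [-D / h**2] * n                 # sub-diagonal
    diag = [2 * D / h**2 + siga] * n      # main diagonal
    sup = [-D / h**2] * n                 # super-diagonal
    rhs = [S] * n
    for i in range(1, n):                 # forward elimination
        w = sub[i] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    phi = [0.0] * n                       # back substitution
    phi[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        phi[i] = (rhs[i] - sup[i] * phi[i + 1]) / diag[i]
    return phi

phi = diffusion_1d(D=1.0, siga=0.5, S=1.0, L=10.0, n=99)
print(round(phi[49], 4))                  # flux at the slab centre
```

    Far from the boundaries the flux approaches S/Σa = 2, dipping near the edges over a diffusion length sqrt(D/Σa); the analytic centre value here is about 1.884, which the discrete solution matches closely.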

  14. Cheat-sensitive commitment of a classical bit coded in a block of mxn round-trip qubits

    International Nuclear Information System (INIS)

    Shimizu, Kaoru; Fukasaka, Hiroyuki; Tamaki, Kiyoshi; Imoto, Nobuyuki

    2011-01-01

    This paper proposes a quantum protocol for a cheat-sensitive commitment of a classical bit. Alice, the receiver of the bit, can examine dishonest Bob, who changes or postpones his choice. Bob, the sender of the bit, can examine dishonest Alice, who violates concealment. For each round-trip case, Alice sends one of two spin states |S±> by choosing basis S at random from two conjugate bases X and Y. Bob chooses basis C is an element of {X,Y} to perform a measurement and returns a resultant state |C±>. Alice then performs a measurement with the other basis R (≠S) and obtains an outcome |R±>. In the opening phase, she can discover dishonest Bob, who unveils a wrong basis with a faked spin state, or Bob can discover dishonest Alice, who infers basis C but destroys |C±> by setting R to be identical to S in the commitment phase. If a classical bit is coded in a block of mxn qubit particles, impartial examinations and probabilistic security criteria can be achieved.

  15. Cheat-sensitive commitment of a classical bit coded in a block of m×n round-trip qubits

    Energy Technology Data Exchange (ETDEWEB)

    Shimizu, Kaoru; Fukasaka, Hiroyuki [NTT Basic Research Laboratories, NTT Corporation, 3-1 Morinosato-Wakamiya, Atsugi, Kanagawa 243-0198 (Japan); Tamaki, Kiyoshi [NTT Basic Research Laboratories, NTT Corporation, 3-1 Morinosato-Wakamiya, Atsugi, Kanagawa 243-0198 (Japan); National Institute of Information and Communications Technology (NICT), 4-2-1 Nukui-kitamachi, Koganei, Tokyo 184-8795 (Japan); Imoto, Nobuyuki [Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyama-cho, Toyonaka, Osaka 560-8531 (Japan)

    2011-08-15

    This paper proposes a quantum protocol for a cheat-sensitive commitment of a classical bit. Alice, the receiver of the bit, can examine dishonest Bob, who changes or postpones his choice; Bob, the sender of the bit, can examine dishonest Alice, who violates concealment. For each round trip, Alice sends one of two spin states |S±> by choosing basis S at random from two conjugate bases X and Y. Bob chooses a basis C ∈ {X,Y} to perform a measurement and returns the resultant state |C±>. Alice then performs a measurement with the other basis R (≠S) and obtains an outcome |R±>. In the opening phase, she can discover dishonest Bob, who unveils a wrong basis with a faked spin state, or Bob can discover dishonest Alice, who infers basis C but destroys |C±> by setting R identical to S in the commitment phase. If a classical bit is coded in a block of m×n qubit particles, impartial examinations and probabilistic security criteria can be achieved.

  16. Scaling Optimization of the SIESTA MHD Code

    Science.gov (United States)

    Seal, Sudip; Hirshman, Steven; Perumalla, Kalyan

    2013-10-01

    SIESTA is a parallel three-dimensional plasma equilibrium code capable of resolving magnetic islands at high spatial resolutions for toroidal plasmas. Originally designed to exploit small-scale parallelism, SIESTA has now been scaled to execute efficiently over several thousands of processors P. This scaling improvement was accomplished with minimal intrusion to the execution flow of the original version. First, the efficiency of the iterative solutions was improved by integrating the parallel tridiagonal block solver code BCYCLIC. Krylov-space generation in GMRES was then accelerated using a customized parallel matrix-vector multiplication algorithm. Novel parallel Hessian generation algorithms were integrated and memory access latencies were dramatically reduced through loop nest optimizations and data layout rearrangement. These optimizations sped up equilibria calculations by factors of 30-50. It is possible to compute solutions with granularity N/P near unity on extremely fine radial meshes (N > 1024 points). Grid separation in SIESTA, which manifests itself primarily in the resonant components of the pressure far from rational surfaces, is strongly suppressed by finer meshes. Large problem sizes of up to 300 K simultaneous non-linear coupled equations have been solved on the NERSC supercomputers. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.

  17. Generic programming for deterministic neutron transport codes

    International Nuclear Information System (INIS)

    Plagne, L.; Poncot, A.

    2005-01-01

    This paper discusses the implementation of neutron transport codes via generic programming techniques. Two different Boltzmann equation approximations have been implemented, namely the Sn and SPn methods. This implementation experiment shows that generic programming allows us to improve maintainability and readability of source codes with no performance penalties compared to classical approaches. In the present implementation, matrices and vectors as well as linear algebra algorithms are treated separately from the rest of source code and gathered in a tool library called 'Generic Linear Algebra Solver System' (GLASS). Such a code architecture, based on a linear algebra library, allows us to separate the three different scientific fields involved in transport codes design: numerical analysis, reactor physics and computer science. Our library handles matrices with optional storage policies and thus applies both to Sn code, where the matrix elements are computed on the fly, and to SPn code where stored matrices are used. Thus, using GLASS allows us to share a large fraction of source code between Sn and SPn implementations. Moreover, the GLASS high level of abstraction allows the writing of numerical algorithms in a form which is very close to their textbook descriptions. Hence the GLASS algorithms collection, disconnected from computer science considerations (e.g. storage policy), is very easy to read, to maintain and to extend. (authors)
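The storage-policy idea behind GLASS can be sketched outside C++ as well. The Python below is an illustration of the concept only (not the GLASS API): a conjugate-gradient solver is written once against an abstract "apply" operation, so a stored matrix and a matrix-free operator computed on the fly are interchangeable, as in the Sn/SPn case described above:

```python
import numpy as np

# Conjugate gradient written against an abstract operator application,
# independent of how the matrix is stored (illustrative sketch).
def conjugate_gradient(apply_A, b, tol=1e-10, max_iter=200):
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n = 20
A = np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1) \
    + np.diag(np.full(n - 1, -1.0), -1)   # explicitly stored SPD matrix
b = np.ones(n)

stored = lambda v: A @ v                  # "stored matrix" policy

def on_the_fly(v):                        # "computed on the fly" policy
    w = 2.0 * v
    w[:-1] -= v[1:]
    w[1:] -= v[:-1]
    return w

x1 = conjugate_gradient(stored, b)
x2 = conjugate_gradient(on_the_fly, b)
print(np.allclose(x1, x2))
```

Both policies feed the same solver, mirroring how a single algorithm collection can serve both the Sn code (matrix elements computed on the fly) and the SPn code (stored matrices).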

  18. GYSELA, a full-f global gyrokinetic Semi-Lagrangian code for ITG turbulence simulations

    International Nuclear Information System (INIS)

    Grandgirard, V.; Sarazin, Y.; Garbet, X.; Dif-Pradalier, G.; Ghendrih, Ph.; Crouseilles, N.; Latu, G.; Sonnendruecker, E.; Besse, N.; Bertrand, P.

    2006-01-01

    This work addresses non-linear global gyrokinetic simulations of ion temperature gradient (ITG) driven turbulence with the GYSELA code. The particularity of the GYSELA code is to use a fixed grid with a Semi-Lagrangian (SL) scheme, applied to the entire distribution function. The 4D non-linear drift-kinetic version of the code already showed the interest of such an SL method, which exhibits good energy-conservation properties in the non-linear regime as well as an accurate description of fine spatial scales. The code has been upgraded to run 5D simulations of toroidal ITG turbulence. Linear benchmarks and first non-linear results prove that semi-Lagrangian codes can be a credible alternative for gyrokinetic simulations.
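The fixed-grid semi-Lagrangian idea reduces, in its simplest form, to tracing each grid point back along the characteristics and interpolating the previous solution there. The 1-D advection sketch below is a minimal illustration under that assumption (it is not taken from the GYSELA paper, which works in 4D/5D phase space with higher-order interpolation):

```python
import numpy as np

# 1-D semi-Lagrangian advection of u_t + a*u_x = 0 on a periodic grid:
# each grid point is traced back to its departure foot and the old
# solution is linearly interpolated there (illustrative sketch).
N, a, dt, steps = 128, 1.0, 0.05, 40
x = np.linspace(0.0, 1.0, N, endpoint=False)
u = np.exp(-200 * (x - 0.3) ** 2)      # initial Gaussian pulse
mass0 = u.sum()

for _ in range(steps):
    feet = (x - a * dt) % 1.0          # departure points (periodic domain)
    j = np.floor(feet * N).astype(int) # left neighbour index
    w = feet * N - j                   # linear-interpolation weight
    u = (1 - w) * u[j] + w * u[(j + 1) % N]

# After steps*dt = 2.0 (two full periods) the pulse is back near x = 0.3,
# slightly smeared by the interpolation but with its mass conserved.
print(round(float(x[np.argmax(u)]), 3))
```

Note the scheme has no CFL stability restriction on dt, one of the practical attractions of semi-Lagrangian methods on fixed grids.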

  19. Induced radioactivity in Bevatron concrete radiation shielding blocks

    International Nuclear Information System (INIS)

    Moeller, G.C.; Donahue, R.J.

    1994-07-01

    The Bevatron accelerated protons up to 6.2 GeV and heavy ions up to 2.1 GeV/amu. It operated from 1954 to 1993. Radioactivity was induced in some concrete radiation shielding blocks by prompt radiation, primarily neutrons and protons generated by the Bevatron's primary beam interactions with targets and other materials. The goal was to identify the gamma-ray emitting nuclides (t1/2 > 0.5 yr) that could be present in the concrete blocks and estimate the depth at which the maximum radioactivity presently occurs. It is shown that the majority of radioactivity was produced via thermal neutron capture by trace elements present in concrete. The depth of maximum thermal neutron flux, in theory, corresponds with the depth of maximum induced activity. To estimate the depth at which maximum activity occurs in the concrete blocks, the LAHET Code System was used to calculate the depth of maximum thermal neutron flux. The primary beam interactions that generate the neutrons are also modeled by the LAHET Code System.

  20. H-Seda: Partial Packet Recovery with Heterogeneous Block Sizes for Wireless Sensor Networks

    KAUST Repository

    Meer, Ammar M.

    2012-01-01

    Maximizing bandwidth utilization while maintaining low frame error rate has been an interesting problem. Frame fragmentation into small blocks with dedicated error detection codes per block can reduce the unnecessary retransmission of the correctly received

  1. LOLA SYSTEM: A code block for nodal PWR simulation. Part I - Simula-3 Code

    Energy Technology Data Exchange (ETDEWEB)

    Aragones, J M; Ahnert, C; Gomez Santamaria, J; Rodriguez Olabarria, I

    1985-07-01

    Description of the theory and users manual of the SIMULA-3 code, which is part of LOLA SYSTEM, a core calculation system based on one-group nodal theory. SIMULA-3, the main module of the system, uses a modified nodal theory with interface leakages equivalent to diffusion theory. (Author) 4 refs.

  2. LOLA SYSTEM: A code block for nodal PWR simulation. Part I - Simula-3 Code

    International Nuclear Information System (INIS)

    Aragones, J. M.; Ahnert, C.; Gomez Santamaria, J.; Rodriguez Olabarria, I.

    1985-01-01

    Description of the theory and users manual of the SIMULA-3 code, which is part of LOLA SYSTEM, a core calculation system based on one-group nodal theory. SIMULA-3, the main module of the system, uses a modified nodal theory with interface leakages equivalent to diffusion theory. (Author) 4 refs.

  3. Advanced linear algebra for engineers with Matlab

    CERN Document Server

    Dianat, Sohail A

    2009-01-01

    Matrices, Matrix Algebra, and Elementary Matrix Operations; Basic Concepts and Notation; Matrix Algebra; Elementary Row Operations; Solution of Systems of Linear Equations; Matrix Partitions; Block Multiplication; Inner, Outer, and Kronecker Products; Determinants, Matrix Inversion, and Solutions to Systems of Linear Equations; Determinant of a Matrix; Matrix Inversion; Solution of Simultaneous Linear Equations; Applications: Circuit Analysis; Homogeneous Coordinates System; Rank, Nu

  4. KTOE, KEDAK to ENDF/B Format Conversion with Linear Linear Interpolation

    International Nuclear Information System (INIS)

    Panini, Gian Carlo

    1985-01-01

    1 - Nature of physical problem solved: This code performs a fully automated translation from KEDAK into ENDF-4 or -5 format. Output is on tape in card-image format. 2 - Method of solution: Before translation the reactions are sorted into ENDF format order. The linear-linear interpolation rule is preserved. The resonance parameters, both resolved and unresolved, can also be translated, and a background cross section is formed as the difference between the contribution calculated from the parameters and the point-wise data given in the original file. Elastic angular distributions originally given in tabulated form are converted into Legendre polynomial coefficients. Energy distributions are calculated using a simple evaporation model with the temperature expressed as a function of the incident mass. 3 - Restrictions on the complexity of the problem: The existing restrictions of both KEDAK and ENDF have been applied to the array sizes used in the code, except for the number of points in a section, which in the ENDF format is limited to 5000 points. The code translates only one material at a time.
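The linear-linear rule the code preserves is the simplest ENDF interpolation law: between two tabulated points, the cross section varies linearly in both energy and value. A minimal sketch (illustrative values, not KEDAK data):

```python
# Linear-linear interpolation between two tabulated cross-section points
# (e1, s1) and (e2, s2) at energy e (the ENDF "lin-lin" law, INT = 2).
def linlin(e, e1, s1, e2, s2):
    """Interpolate the cross section linearly in both energy and value."""
    return s1 + (s2 - s1) * (e - e1) / (e2 - e1)

# Midpoint of a linear segment is the arithmetic mean of the endpoints.
print(linlin(1.5, 1.0, 10.0, 2.0, 20.0))   # 15.0
```

Other ENDF laws (log-lin, lin-log, log-log) differ only in which variables are transformed before the same linear formula is applied.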

  5. Hybrid MPI-OpenMP Parallelism in the ONETEP Linear-Scaling Electronic Structure Code: Application to the Delamination of Cellulose Nanofibrils.

    Science.gov (United States)

    Wilkinson, Karl A; Hine, Nicholas D M; Skylaris, Chris-Kriton

    2014-11-11

    We present a hybrid MPI-OpenMP implementation of Linear-Scaling Density Functional Theory within the ONETEP code. We illustrate its performance on a range of high performance computing (HPC) platforms comprising shared-memory nodes with fast interconnect. Our work has focused on applying OpenMP parallelism to the routines which dominate the computational load, attempting where possible to parallelize different loops from those already parallelized within MPI. This includes 3D FFT box operations, sparse matrix algebra operations, calculation of integrals, and Ewald summation. While the underlying numerical methods are unchanged, these developments represent significant changes to the algorithms used within ONETEP to distribute the workload across CPU cores. The new hybrid code exhibits much-improved strong scaling relative to the MPI-only code and permits calculations with a much higher ratio of cores to atoms. These developments result in a significantly shorter time to solution than was possible using MPI alone and facilitate the application of the ONETEP code to systems larger than previously feasible. We illustrate this with benchmark calculations from an amyloid fibril trimer containing 41,907 atoms. We use the code to study the mechanism of delamination of cellulose nanofibrils when undergoing sonication, a process which is controlled by a large number of interactions that collectively determine the structural properties of the fibrils. Many energy evaluations were needed for these simulations, and as these systems comprise up to 21,276 atoms this would not have been feasible without the developments described here.

  6. Linear-time non-malleable codes in the bit-wise independent tampering model

    NARCIS (Netherlands)

    R.J.F. Cramer (Ronald); I.B. Damgård (Ivan); N.M. Döttling (Nico); I. Giacomelli (Irene); C. Xing (Chaoping)

    2017-01-01

    textabstractNon-malleable codes were introduced by Dziembowski et al. (ICS 2010) as coding schemes that protect a message against tampering attacks. Roughly speaking, a code is non-malleable if decoding an adversarially tampered encoding of a message m produces the original message m or a value m′

  7. Efficient decoding of random errors for quantum expander codes

    OpenAIRE

    Fawzi , Omar; Grospellier , Antoine; Leverrier , Anthony

    2017-01-01

    We show that quantum expander codes, a constant-rate family of quantum LDPC codes, with the quasi-linear time decoding algorithm of Leverrier, Tillich and Zémor can correct a constant fraction of random errors with very high probability. This is the first construction of a constant-rate quantum LDPC code with an efficient decoding algorithm that can correct a linear number of random errors with a negligible failure probability. Finding codes with these properties is also motivated by Gottes...

  8. Introduction to coding and information theory

    CERN Document Server

    Roman, Steven

    1997-01-01

    This book is intended to introduce coding theory and information theory to undergraduate students of mathematics and computer science. It begins with a review of probability theory as applied to finite sample spaces and a general introduction to the nature and types of codes. The two subsequent chapters discuss information theory: efficiency of codes, the entropy of information sources, and Shannon's Noiseless Coding Theorem. The remaining three chapters deal with coding theory: communication channels, decoding in the presence of errors, the general theory of linear codes, and such specific codes as Hamming codes, the simplex codes, and many others.
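One of the specific codes mentioned, the (7,4) Hamming code, makes a compact worked example of the linear-code machinery. The sketch below uses one standard choice of generator and parity-check matrices (textbook matrices, not taken from this particular book): encode four data bits, then correct a single channel bit flip from the syndrome:

```python
import numpy as np

# (7,4) Hamming code in systematic form: G = [I | A], H = [A^T | I].
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

msg = np.array([1, 0, 1, 1])
cw = msg @ G % 2                 # encode: codeword = message * G (mod 2)

rx = cw.copy()
rx[2] ^= 1                       # channel flips one bit
syn = H @ rx % 2                 # nonzero syndrome flags the error
err = int(np.where((H.T == syn).all(axis=1))[0][0])  # matching column of H
rx[err] ^= 1                     # flip it back
print((rx == cw).all())
```

Because every column of H is distinct and nonzero, the syndrome identifies any single-bit error uniquely, which is exactly the minimum-distance-3 property of the code.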

  9. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
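The syndrome-based encoding described above can be illustrated with a deliberately tiny toy (a Hamming syndrome instead of the paper's LDPC codes and sum-product decoding, and no doping bits): the encoder sends only the syndrome of x, and the decoder combines it with correlated side information y that differs from x in at most one position:

```python
import numpy as np

# Toy Slepian-Wolf style sketch: transmit 3 syndrome bits instead of the
# 7 source bits; the decoder locates the single x-vs-y discrepancy from
# the syndrome of (x XOR y) using the Hamming parity-check matrix.
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

x = np.array([1,0,1,1,0,1,0])    # source sequence at the encoder
y = x.copy()
y[4] ^= 1                        # correlated side information at the decoder

s_x = H @ x % 2                  # only these 3 bits are transmitted
s_y = H @ y % 2
diff = (s_x + s_y) % 2           # equals the syndrome of x XOR y
x_hat = y.copy()
if diff.any():                   # locate and flip the single discrepancy
    pos = int(np.where((H.T == diff).all(axis=1))[0][0])
    x_hat[pos] ^= 1
print((x_hat == x).all())
```

The compression comes from the correlation model: 3 bits suffice because x is known to lie within Hamming distance 1 of y, the same principle the paper scales up with LDPC syndromes and soft decoding.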

  10. CRUCIB: an axisymmetric convection code

    International Nuclear Information System (INIS)

    Bertram, L.A.

    1975-03-01

    The CRUCIB code was written in support of an experimental program aimed at measurement of thermal diffusivities of refractory liquids. Precise values of diffusivity are necessary to realistic analysis of reactor safety problems, nuclear waste disposal procedures, and fundamental metal forming processes. The code calculates the axisymmetric transient convective motions produced in a right circular cylindrical crucible, which is surface heated by an annular heat pulse. Emphasis of this report is placed on the input-output options of the CRUCIB code, which are tailored to assess the importance of the convective heat transfer in determining the surface temperature distribution. Use is limited to Prandtl numbers less than unity; larger values can be accommodated by replacement of a single block of the code, if desired. (U.S.)

  11. Thermal stress analysis of HTGR fuel and control rod fuel blocks in the HTGR in-block carbonization and annealing furnace

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; McAfee, W.J.

    1977-01-01

    A new approach that utilizes the equivalent solid plate method has been applied to the thermal stress analysis of HTGR fuel and control rod fuel blocks. Cases were considered where these blocks, loaded with reprocessed HTGR fuel pellets, were being cured at temperatures up to 1800 0 C. A two-dimensional segment of a fuel block cross section including fuel, coolant holes, and graphite matrix was analyzed using the ORNL HEATING3 heat transfer code to determine the temperature-dependent effective thermal conductivity for the perforated region of the block. Using this equivalent conductivity to calculate the temperature distributions through different cross sections of the blocks, two-dimensional thermal-stress analyses were performed through application of the equivalent solid plate method. In this approach, the perforated material is replaced by solid homogeneous material of the same external dimensions but whose material properties have been modified to account for the perforations

  12. Power Optimization of Wireless Media Systems With Space-Time Block Codes

    OpenAIRE

    Yousefi'zadeh, Homayoun; Jafarkhani, Hamid; Moshfeghi, Mehran

    2004-01-01

    We present analytical and numerical solutions to the problem of power control in wireless media systems with multiple antennas. We formulate a set of optimization problems aimed at minimizing total power consumption of wireless media systems subject to a given level of QoS and an available bit rate. Our formulation takes in to consideration the power consumption related to source coding, channel coding, and transmission of multiple-transmit antennas. In our study, we consider Gauss-Markov and...

  13. Unitals and ovals of symmetric block designs in LDPC and space-time coding

    Science.gov (United States)

    Andriamanalimanana, Bruno R.

    2004-08-01

    An approach to the design of LDPC (low density parity check) error-correction and space-time modulation codes involves starting with known mathematical and combinatorial structures, and deriving code properties from structure properties. This paper reports on an investigation of unital and oval configurations within generic symmetric combinatorial designs, not just classical projective planes, as the underlying structure for classes of space-time LDPC outer codes. Of particular interest are the encoding and iterative (sum-product) decoding gains that these codes may provide. Various small-length cases have been numerically implemented in Java and Matlab for a number of channel models.

  14. Adaptive Noise Model for Transform Domain Wyner-Ziv Video using Clustering of DCT Blocks

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Huang, Xin; Forchhammer, Søren

    2011-01-01

    The noise model is one of the most important aspects influencing the coding performance of Distributed Video Coding. This paper proposes a novel noise model for Transform Domain Wyner-Ziv (TDWZ) video coding by using clustering of DCT blocks. The clustering algorithm takes advantage of the residual information of all frequency bands, iteratively classifies blocks into different categories, and estimates the noise parameter in each category. The experimental results show that the coding performance of the proposed cluster level noise model is competitive with state-of-the-art coefficient level noise modelling. Furthermore, the proposed cluster level noise model is adaptively combined with a coefficient level noise model to robustly improve the coding performance of the TDWZ video codec by up to 1.24 dB (by the Bjøntegaard metric) compared to the DISCOVER TDWZ video codec.

  15. Implementing a modular system of computer codes

    International Nuclear Information System (INIS)

    Vondy, D.R.; Fowler, T.B.

    1983-07-01

    A modular computation system has been developed for nuclear reactor core analysis. The codes can be applied repeatedly in blocks without extensive user input data, as needed for reactor history calculations. The primary control options over the calculational paths and task assignments within the codes are blocked separately from other instructions, admitting ready access by user input instruction or directions from automated procedures and promoting flexible and diverse applications at minimum application cost. Data interfacing is done under formal specifications with data files manipulated by an informed manager. This report emphasizes the system aspects and the development of useful capability, hopefully informative and useful to anyone developing a modular code system of much sophistication. Overall, this report in a general way summarizes the many factors and difficulties that are faced in making reactor core calculations, based on the experience of the authors. It provides the background on which work on HTGR reactor physics is being carried out

  16. Optimal Codes for the Burst Erasure Channel

    Science.gov (United States)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. 
The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure

  17. Adaptive Space–Time Coding Using ARQ

    KAUST Repository

    Makki, Behrooz; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim

    2015-01-01

    We study the energy-limited outage probability of the block space-time coding (STC)-based systems utilizing automatic repeat request (ARQ) feedback and adaptive power allocation. Taking the ARQ feedback costs into account, we derive closed

  18. Hermitian self-dual quasi-abelian codes

    Directory of Open Access Journals (Sweden)

    Herbert S. Palines

    2017-12-01

    Full Text Available Quasi-abelian codes constitute an important class of linear codes containing theoretically and practically interesting codes such as quasi-cyclic codes, abelian codes, and cyclic codes. In particular, the sub-class consisting of 1-generator quasi-abelian codes contains large families of good codes. Based on the well-known decomposition of quasi-abelian codes, the characterization and enumeration of Hermitian self-dual quasi-abelian codes are given. In the case of 1-generator quasi-abelian codes, we offer necessary and sufficient conditions for such codes to be Hermitian self-dual and give a formula for the number of these codes. In the case where the underlying groups are some $p$-groups, the actual number of resulting Hermitian self-dual quasi-abelian codes is determined.

  19. Modified BTC Algorithm for Audio Signal Coding

    Directory of Open Access Journals (Sweden)

    TOMIC, S.

    2016-11-01

    Full Text Available This paper describes modification of a well-known image coding algorithm, named Block Truncation Coding (BTC and its application in audio signal coding. BTC algorithm was originally designed for black and white image coding. Since black and white images and audio signals have different statistical characteristics, the application of this image coding algorithm to audio signal presents a novelty and a challenge. Several implementation modifications are described in this paper, while the original idea of the algorithm is preserved. The main modifications are performed in the area of signal quantization, by designing more adequate quantizers for audio signal processing. The result is a novel audio coding algorithm, whose performance is presented and analyzed in this research. The performance analysis indicates that this novel algorithm can be successfully applied in audio signal coding.
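The image-domain baseline that the paper adapts is the classic BTC step: each block is reduced to a 1-bit mask plus two reconstruction levels a and b chosen to preserve the block's mean and standard deviation. A minimal sketch of that baseline (illustrative sample values, not the paper's modified audio quantizers):

```python
import numpy as np

# Classic Block Truncation Coding of one block: threshold at the mean,
# then pick levels a, b that preserve the block mean and std.
def btc_block(block):
    m, s = block.mean(), block.std()
    mask = block >= m                  # 1 bit per sample
    q, n = int(mask.sum()), block.size # q samples map to the high level
    if q in (0, n):                    # flat block: a single level suffices
        return mask, m, m
    a = m - s * np.sqrt(q / (n - q))   # low reconstruction level
    b = m + s * np.sqrt((n - q) / q)   # high reconstruction level
    return mask, a, b

block = np.array([2.0, 3.0, 7.0, 8.0])
mask, a, b = btc_block(block)
rec = np.where(mask, b, a)
# First two moments of the block are preserved by construction.
print(round(float(rec.mean()), 6), round(float(rec.std()), 6))
```

The paper's modifications replace this moment-preserving quantizer with quantizers better matched to audio statistics, but the block/mask/levels structure is the same.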

  20. Thermal, Catalytic Conversion of Alkanes to Linear Aldehydes and Linear Amines.

    Science.gov (United States)

    Tang, Xinxin; Jia, Xiangqing; Huang, Zheng

    2018-03-21

    Alkanes, the main constituents of petroleum, are attractive feedstocks for producing value-added chemicals. Linear aldehydes and amines are two of the most important building blocks in the chemical industry. To date, there have been no effective methods for directly converting n-alkanes to linear aldehydes and linear amines. Here, we report a molecular dual-catalyst system for production of linear aldehydes via regioselective carbonylation of n-alkanes. The system is comprised of a pincer iridium catalyst for transfer-dehydrogenation of the alkane using t-butylethylene or ethylene as a hydrogen acceptor working sequentially with a rhodium catalyst for olefin isomerization-hydroformylation with syngas. The system exhibits high regioselectivity for linear aldehydes and gives high catalytic turnover numbers when using ethylene as the acceptor. In addition, the direct conversion of light alkanes, n-pentane and n-hexane, to siloxy-terminated alkyl aldehydes through a sequence of Ir/Fe-catalyzed alkane silylation and Ir/Rh-catalyzed alkane carbonylation, is described. Finally, the Ir/Rh dual-catalyst strategy has been successfully applied to regioselective alkane aminomethylation to form linear alkyl amines.

  1. A seismic data compression system using subband coding

    Science.gov (United States)

    Kiely, A. B.; Pollara, F.

    1995-01-01

    This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.

  2. Three-dimensional computer code for the nonlinear dynamic response of an HTGR core

    International Nuclear Information System (INIS)

    Subudhi, M.; Lasker, L.; Koplik, B.; Curreri, J.; Goradia, H.

    1979-01-01

    A three-dimensional dynamic code has been developed to determine the nonlinear response of an HTGR core. The HTGR core consists of several thousand hexagonal core blocks, arranged in layers stacked together. Each layer contains many core blocks surrounded on their outer periphery by reflector blocks. The entire assembly is contained within a prestressed concrete reactor vessel. Gaps exist between adjacent blocks in any horizontal plane. Each core block in a given layer is connected to the blocks directly above and below it via three dowel pins. The present analytical study is directed towards an investigation of the nonlinear response of the reactor core blocks in the event of a seismic occurrence. The computer code is developed for a specific mathematical model which represents a vertical arrangement of layers of blocks. This comprises a block module of core elements which would be obtained by cutting a cylindrical portion consisting of seven fuel blocks per layer. It is anticipated that a number of such modules, properly arranged, could represent the entire core. Hence, the predicted response of this module would exhibit the response characteristics of the core.

  3. Explicit MDS Codes with Complementary Duals

    DEFF Research Database (Denmark)

    Beelen, Peter; Jin, Lingfei

    2018-01-01

    In 1964, Massey introduced a class of codes with complementary duals, called Linear Complementary Dual (LCD for short) codes. He showed that LCD codes have applications in communication systems, side-channel attack (SCA) resistance, and so on. LCD codes have been extensively studied in the literature. On the other hand, MDS codes form an optimal family of classical codes with wide applications in both theory and practice. The main purpose of this paper is to give an explicit construction of several classes of LCD MDS codes, using tools from algebraic function fields. We exemplify this construction...

  4. Method and device for decoding coded digital video signals

    NARCIS (Netherlands)

    2000-01-01

    The invention relates to a video coding method and system including a quantization and coding sub-assembly (38) in which a quantization parameter is controlled by another parameter defined as being in direct relation with the dynamic range value of the data contained in given blocks of pixels.

  5. Analysis of Iterated Hard Decision Decoding of Product Codes with Reed-Solomon Component Codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2007-01-01

    Products of Reed-Solomon codes are important in applications because they offer a combination of large blocks, low decoding complexity, and good performance. A recent result on random graphs can be used to show that with high probability a large number of errors can be corrected by iterating minimum distance decoding. We present an analysis related to density evolution which gives the exact asymptotic value of the decoding threshold and also provides a closed-form approximation to the distribution of errors in each step of the decoding of finite-length codes.

  6. Block recursive LU preconditioners for the thermally coupled incompressible inductionless MHD problem

    Science.gov (United States)

    Badia, Santiago; Martín, Alberto F.; Planas, Ramon

    2014-10-01

    The thermally coupled incompressible inductionless magnetohydrodynamics (MHD) problem models the flow of an electrically charged fluid under the influence of an external electromagnetic field with thermal coupling. This system of partial differential equations is strongly coupled and highly nonlinear for real cases of interest. Therefore, fully implicit time integration schemes are very desirable in order to capture the different physical scales of the problem at hand. However, solving the multiphysics linear systems of equations resulting from such algorithms is a very challenging task which requires efficient and scalable preconditioners. In this work, a new family of recursive block LU preconditioners is designed and tested for solving the thermally coupled inductionless MHD equations. These preconditioners are obtained after splitting the fully coupled matrix into one-physics problems for every variable (velocity, pressure, current density, electric potential and temperature) that can be optimally solved, e.g., using preconditioned domain decomposition algorithms. The main idea is to arrange the original matrix into an (arbitrary) 2 × 2 block matrix, and consider an LU preconditioner obtained by approximating the corresponding Schur complement. For every one of the diagonal blocks in the LU preconditioner, if it involves more than one type of unknowns, we proceed the same way in a recursive fashion. This approach is stated in an abstract way, and can be straightforwardly applied to other multiphysics problems. Further, we precisely explain a flexible and general software design for the code implementation of this type of preconditioners.
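
    The key algebraic fact behind such block LU preconditioners can be checked on a toy dense system: when the Schur complement in the upper-triangular factor is exact, the right-preconditioned operator is unit lower triangular, so its minimal polynomial is (x-1)² and GMRES converges in at most two iterations. A small numpy sketch, with random matrices standing in for the discretized one-physics blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Toy 2x2 block system [[F, Bt], [B, C]], standing in for one splitting of
# the coupled multiphysics matrix into two groups of unknowns.
F = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C = 0.1 * rng.standard_normal((n, n))
A = np.block([[F, B.T], [B, C]])

# Exact block LU: the upper factor U carries the Schur complement S.
S = C - B @ np.linalg.solve(F, B.T)
U = np.block([[F, B.T], [np.zeros((n, n)), S]])

# Right-preconditioning with U leaves A @ U^-1 unit lower triangular, so
# (M - I)^2 = 0 and GMRES would converge in at most two iterations.  In
# practice S (and F) are only approximated, trading iterations for cost.
M = A @ np.linalg.inv(U)
I = np.eye(2 * n)
print(np.linalg.norm((M - I) @ (M - I)) < 1e-8)
```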

  7. LDPC concatenated space-time block coded system in multipath fading environment: Analysis and evaluation

    Directory of Open Access Journals (Sweden)

    Surbhi Sharma

    2011-06-01

    Full Text Available Irregular low-density parity-check (LDPC) codes have been found to show exceptionally good performance for single antenna systems over a wide class of channels. In this paper, the performance of LDPC codes with multiple antenna systems is investigated in flat Rayleigh and Rician fading channels for different modulation schemes. The focus of attention is mainly on the concatenation of irregular LDPC codes with complex orthogonal space-time codes. Iterative decoding is carried out with a density evolution method that sets a threshold above which the code performs well. For the proposed concatenated system, the simulation results show that the QAM technique achieves a higher coding gain of 8.8 dB and 3.2 dB over the QPSK technique in Rician (LOS) and Rayleigh (NLOS) faded environments, respectively.

  8. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes

    Science.gov (United States)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Manabu Hagiwara et al. (2007) presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
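
    A quasi-cyclic parity-check matrix is built by replacing each entry of a small base matrix with a p×p circulant permutation (or zero) block; the code rate can then be changed by deleting block rows, as in the scheme above. The sketch below shows only the generic expansion; the base-matrix shifts are hypothetical and do not reproduce Hagiwara's high-girth or CSS-orthogonality conditions.

```python
import numpy as np

def circulant_shift(p, s):
    # p x p identity cyclically shifted by s columns; s < 0 means a zero block.
    if s < 0:
        return np.zeros((p, p), dtype=int)
    return np.roll(np.eye(p, dtype=int), s, axis=1)

def qc_parity_check(base, p):
    # Expand a base matrix of shift exponents into a quasi-cyclic H.
    return np.block([[circulant_shift(p, s) for s in row] for row in base])

base = [[0, 1, 2], [0, 2, 4]]      # hypothetical shift exponents
H = qc_parity_check(base, p=5)
print(H.shape)                     # (10, 15); rate is adjusted by deleting rows
```

Every row of this H has weight 3 and every column weight 2, inherited directly from the base matrix dimensions.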

  9. Ocean circulation code on the Connection Machine

    International Nuclear Information System (INIS)

    Vitart, F.

    1993-01-01

    This work is part of the development of a global climate model based on a coupling between an ocean model and an atmosphere model. The objective was to develop this global model on a massively parallel machine (CM2). The author presents the OPA7 code (equations, boundary conditions, equation system resolution) and its parallelization on the CM2 machine. The CM2 data structure is briefly described, and two tests are reported (on a flat-bottom basin, and on a topography with eight islands). The author then gives an overview of studies aimed at improving the ocean circulation code: use of a new state equation, use of a formulation of surface pressure, use of a new mesh. He reports the study of the use of multi-block domains on the CM2 through advection tests, and two-block tests

  10. An implicit Smooth Particle Hydrodynamic code

    Energy Technology Data Exchange (ETDEWEB)

    Knapp, Charles E. [Univ. of New Mexico, Albuquerque, NM (United States)

    2000-05-01

    An implicit version of the Smooth Particle Hydrodynamic (SPH) code SPHINX has been written and is working. In conjunction with the SPHINX code the new implicit code models fluids and solids under a wide range of conditions. SPH codes are Lagrangian, meshless and use particles to model the fluids and solids. The implicit code makes use of Krylov iterative techniques for solving large linear systems and a Newton-Raphson method for non-linear corrections. It uses numerical derivatives to construct the Jacobian matrix. It uses sparse techniques to save on memory storage and to reduce the amount of computation. It is believed that this is the first implicit SPH code to use Newton-Krylov techniques, and also the first implicit SPH code to model solids. A description of SPH and the techniques used in the implicit code are presented. Then, the results of a number of test cases are discussed, which include a shock tube problem, a Rayleigh-Taylor problem, a breaking dam problem, and a single jet of gas problem. The results are shown to be in very good agreement with analytic solutions, experimental results, and the explicit SPHINX code. In the case of the single jet of gas, it has been demonstrated that the implicit code can do the problem in much shorter time than the explicit code. The problem was, however, very unphysical, but it does demonstrate the potential of the implicit code. It is a first step toward a useful implicit SPH code.
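
    The solution strategy described above (a Newton-Raphson outer iteration with a numerically differenced Jacobian) can be sketched on a toy nonlinear system. In the sketch the Jacobian solve is a direct dense solve; the actual code hands this step to Krylov iterative methods on sparse matrices.

```python
import numpy as np

def numerical_jacobian(f, x, h=1e-7):
    # Forward-difference Jacobian, mirroring the code's use of numerical
    # derivatives instead of analytic ones.
    n = len(x)
    J = np.zeros((n, n))
    fx = f(x)
    for j in range(n):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (f(xp) - fx) / h
    return J

def newton(f, x0, tol=1e-10, max_iter=50):
    x = x0.astype(float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        # A production code would hand this solve to a Krylov method
        # (e.g. GMRES) on a sparse Jacobian; a dense solve suffices here.
        x -= np.linalg.solve(numerical_jacobian(f, x), fx)
    return x

# Toy nonlinear system: x^2 + y^2 = 4 and x*y = 1.
f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
root = newton(f, np.array([2.0, 0.5]))
print(np.allclose(f(root), 0, atol=1e-8))
```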

  11. Machine-Checked Sequencer for Critical Embedded Code Generator

    Science.gov (United States)

    Izerrouken, Nassima; Pantel, Marc; Thirioux, Xavier

    This paper presents the development of a correct-by-construction block sequencer for GeneAuto, a qualifiable (according to the DO-178B/ED-12B recommendation) automatic code generator. It transforms Simulink models to MISRA C code for safety-critical systems. Our approach, which combines a classical development process with formal specification and verification using proof assistants, led to preliminary fruitful exchanges with certification authorities. We present parts of the classical user and tool requirements and the derived formal specifications, implementation and verification for the correctness and termination of the block sequencer. This sequencer has been successfully applied to real-size industrial use cases from various transportation domain partners and led to requirement error detection and a correct-by-construction implementation.

  12. Novel Intermode Prediction Algorithm for High Efficiency Video Coding Encoder

    Directory of Open Access Journals (Sweden)

    Chan-seob Park

    2014-01-01

    Full Text Available The joint collaborative team on video coding (JCT-VC) is developing the next-generation video coding standard, called high efficiency video coding (HEVC). In HEVC, there are three units in the block structure: coding unit (CU), prediction unit (PU), and transform unit (TU). The CU is the basic unit of region splitting, like the macroblock (MB). Each CU performs recursive splitting into four blocks of equal size, starting from the tree block. In this paper, we propose a fast CU depth decision algorithm for HEVC to reduce its computational complexity. For the 2N×2N PU, the proposed method compares the rate-distortion (RD) cost and determines the depth using the compared information. Moreover, in order to speed up the encoding time, an efficient merge SKIP detection method is additionally developed based on the contextual mode information of neighboring CUs. Experimental results show that the proposed algorithm achieves an average time saving of 44.84% in the random access (RA) at Main profile configuration with the HEVC test model (HM 10.0) reference software. Compared to the HM 10.0 encoder, a small BD-bitrate loss of 0.17% is also observed without significant loss of image quality.
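
    The recursive CU splitting that such fast algorithms prune can be sketched as a quadtree decision driven by a cost comparison. Here block variance stands in for the true RD cost and the overhead constant is made up; the point is only the structure of the depth decision.

```python
import numpy as np

def rd_cost(block):
    # Toy proxy for rate-distortion cost: total squared deviation from the
    # block mean (a real HEVC encoder uses D + lambda*R from mode decisions).
    return float(np.var(block)) * block.size

def split_cu(block, min_size=8, split_overhead=10.0):
    # Recursive quadtree depth decision: split a CU into four sub-CUs only
    # when the summed sub-costs (plus signaling overhead) beat the whole.
    n = block.shape[0]
    whole = rd_cost(block)
    if n <= min_size:
        return [(n, whole)]
    h = n // 2
    quads = [block[:h, :h], block[:h, h:], block[h:, :h], block[h:, h:]]
    split = sum(rd_cost(q) for q in quads) + split_overhead
    if split >= whole:
        return [(n, whole)]
    leaves = []
    for q in quads:
        leaves += split_cu(q, min_size, split_overhead)
    return leaves

flat = np.zeros((64, 64))                       # uniform: keep one 64x64 CU
quarters = [np.full((32, 32), v) for v in (0.0, 100.0, 200.0, 50.0)]
busy = np.block([quarters[:2], quarters[2:]])   # four distinct regions: split
print(len(split_cu(flat)), len(split_cu(busy)))
```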

  13. A 3D heat conduction model for block-type high temperature reactors and its implementation into the code DYN3D

    International Nuclear Information System (INIS)

    Baier, Silvio; Kliem, Soeren; Rohde, Ulrich

    2011-01-01

    The gas-cooled high temperature reactor is a concept to produce energy at high temperatures with a high level of inherent safety. It attracts special interest due to, e.g., its high thermal efficiency and the possibility of hydrogen production. In addition to the PBMR (Pebble Bed Modular Reactor), the (V)HTR (Very High Temperature Reactor) concept has been established. The basic design of a prismatic HTR consists of the following elements. The fuel is coated with four layers of isotropic materials. These so-called TRISO particles are dispersed into compacts which are placed in a graphite block matrix. The graphite matrix additionally contains holes for the coolant gas. A one-dimensional model is sufficient to describe the (radial) heat transfer in LWRs, but temperature gradients in a prismatic HTR can occur in the axial as well as the radial direction, since regions with different heat source release and with different coolant temperature heat-up are coupled through the graphite matrix elements. Furthermore, heat transfer into reflector elements is possible. DYN3D is a code system for coupled neutronics and thermal-hydraulics core calculations developed at the Helmholtz-Zentrum Dresden-Rossendorf. Concerning neutronics, DYN3D provides two-group and multi-group diffusion approaches based on nodal expansion methods. Furthermore, a 1D thermal-hydraulics model for parallel coolant flow channels is included. The DYN3D code was extensively verified and validated via numerous numerical and experimental benchmark problems. That includes the NEA CRP benchmarks for PWR and BWR, the Three Mile Island-1 main steam line break and the Peach Bottom Turbine Trip benchmarks, as well as measurements carried out in an original-size VVER-1000 mock-up. An overview of the verification and validation activities can be found. Presently a DYN3D-HTR version is under development. 
It involves a 3D heat conduction model to deal with higher-(than one)-dimensional effects of heat transfer and heat conduction in

  14. Least-Square Prediction for Backward Adaptive Video Coding

    Directory of Open Access Journals (Sweden)

    Li Xin

    2006-01-01

    Full Text Available Almost all existing approaches towards video coding exploit the temporal redundancy by block-matching-based motion estimation and compensation. Regardless of its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward adaptive approach, named "least-square prediction" (LSP), and demonstrate its potential in video coding. Motivated by the duality between edge contours in images and motion trajectories in video, we propose to derive the best prediction of the current frame from its causal past using the least-square method. It is demonstrated that LSP is particularly effective for modeling video material with slow motion and can be extended to handle fast motion by temporal warping and forward adaptation. For typical QCIF test sequences, LSP often achieves smaller MSE than the full-search, quarter-pel block matching algorithm (BMA) without the need of transmitting any overhead.
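
    The core of least-square prediction is ordinary least squares over a causal window: the weights that best predict each training pixel from its already-decoded neighbors are reused to predict the current pixel, so no weights need to be transmitted. A purely spatial toy version (the paper applies the idea along temporal/motion trajectories):

```python
import numpy as np

def lsp_predict(frame, i, j, win=4):
    # Fit weights over a causal training window by least squares, then
    # predict frame[i, j] from its (left, up, up-left) neighbors.
    X, y = [], []
    for r in range(i - win, i + 1):
        for s in range(j - win, j + 1):
            if (r, s) == (i, j) or r < 1 or s < 1:
                continue
            X.append([frame[r, s - 1], frame[r - 1, s], frame[r - 1, s - 1]])
            y.append(frame[r, s])
    w, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    return w @ [frame[i, j - 1], frame[i - 1, j], frame[i - 1, j - 1]]

# A linear ramp is exactly predictable from its causal neighbors
# (left + up - upleft reproduces any plane), so the residual vanishes.
ii, jj = np.mgrid[0:16, 0:16]
frame = 3.0 * ii + 2.0 * jj + 5.0
print(abs(lsp_predict(frame, 10, 10) - frame[10, 10]) < 1e-8)
```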

  15. Bit-wise arithmetic coding for data compression

    Science.gov (United States)

    Kiely, A. B.

    1994-01-01

    This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
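
    The bit-level mechanics can be sketched with a floating-point binary arithmetic coder that, as in the article, treats the codeword bits as independent with a fixed zero-probability p0. Real coders use fixed-precision integer arithmetic with renormalization; the float version below only survives short sequences.

```python
def arith_encode(bits, p0):
    # Shrink [lo, hi) once per bit: the left sub-interval (width p0) codes
    # a 0, the right one a 1. Any point in the final interval is the code.
    lo, hi = 0.0, 1.0
    for b in bits:
        mid = lo + (hi - lo) * p0
        lo, hi = (lo, mid) if b == 0 else (mid, hi)
    return (lo + hi) / 2

def arith_decode(x, n, p0):
    # Replay the same subdivisions, reading off which side x falls on.
    out = []
    lo, hi = 0.0, 1.0
    for _ in range(n):
        mid = lo + (hi - lo) * p0
        if x < mid:
            out.append(0)
            hi = mid
        else:
            out.append(1)
            lo = mid
    return out

bits = [0, 1, 0, 0, 0, 1, 0, 0]        # source biased toward zeros
x = arith_encode(bits, p0=0.8)
print(arith_decode(x, len(bits), p0=0.8) == bits)
```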

  16. Construction of Capacity Achieving Lattice Gaussian Codes

    KAUST Repository

    Alghamdi, Wael

    2016-04-01

    We propose a new approach to proving results regarding channel coding schemes based on construction-A lattices for the Additive White Gaussian Noise (AWGN) channel that yields new characterizations of the code construction parameters, i.e., the primes and dimensions of the codes, as functions of the block-length. The approach we take introduces an averaging argument that explicitly involves the considered parameters. This averaging argument is applied to a generalized Loeliger ensemble [1] to provide a more practical proof of the existence of AWGN-good lattices, and to characterize suitable parameters for the lattice Gaussian coding scheme proposed by Ling and Belfiore [3].

  17. Quantum secure direct communication with high-dimension quantum superdense coding

    International Nuclear Information System (INIS)

    Wang Chuan; Li Yansong; Liu Xiaoshu; Deng Fuguo; Long Guilu

    2005-01-01

    A protocol for quantum secure direct communication with quantum superdense coding is proposed. It combines the ideas of block transmission, the ping-pong quantum secure direct communication protocol, and quantum superdense coding. It has the advantage of being secure and of high source capacity
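
    Superdense coding itself is easy to simulate with 4-dimensional state vectors: Alice applies one of four Paulis to her half of a shared Bell pair, sends that single qubit, and Bob's Bell-basis measurement recovers two classical bits. A numpy sketch of just this ingredient (not the protocol's block-transmission or security machinery):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
bell = np.array([1., 0., 0., 1.]) / np.sqrt(2)     # (|00> + |11>) / sqrt(2)

# Alice's local Pauli encodes two classical bits on her half of the pair.
encode_op = {(0, 0): I2, (0, 1): X, (1, 0): Z, (1, 1): X @ Z}
# Bob measures both qubits in the Bell basis (columns: Phi+, Psi+, Phi-, Psi-).
bell_basis = np.column_stack([
    [1, 0, 0, 1], [0, 1, 1, 0], [1, 0, 0, -1], [0, 1, -1, 0],
]) / np.sqrt(2)
outcome_to_bits = {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}

def transmit(bits):
    state = np.kron(encode_op[bits], I2) @ bell    # Alice sends one qubit
    probs = np.abs(bell_basis.T @ state) ** 2      # Bob's Bell measurement
    return outcome_to_bits[int(np.argmax(probs))]

print(all(transmit(b) == b for b in encode_op))
```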

  18. PREREM: an interactive data preprocessing code for INREM II. Part I: user's manual. Part II: code structure

    Energy Technology Data Exchange (ETDEWEB)

    Ryan, M.T.; Fields, D.E.

    1981-05-01

    PREREM is an interactive computer code developed as a data preprocessor for the INREM-II (Killough, Dunning, and Pleasant, 1978a) internal dose program. PREREM is intended to provide easy access to current and self-consistent nuclear decay and radionuclide-specific metabolic data sets. Provision is made for revision of metabolic data, and the code is intended for both production and research applications. Documentation for the code is in two parts. Part I is a user's manual which emphasizes interpretation of program prompts and choice of user input. Part II stresses internal structure and flow of program control and is intended to assist the researcher who wishes to revise or modify the code or add to its capabilities. PREREM is written for execution on a Digital Equipment Corporation PDP-10 System and much of the code will require revision before it can be run on other machines. The source program length is 950 lines (116 blocks) and computer core required for execution is 212 K bytes. The user must also have sufficient file space for metabolic and S-factor data sets. Further, 64 100 K byte blocks of computer storage space are required for the nuclear decay data file. Computer storage space must also be available for any output files produced during the PREREM execution. 9 refs., 8 tabs.

  19. Construction of self-dual codes in the Rosenbloom-Tsfasman metric

    Science.gov (United States)

    Krisnawati, Vira Hari; Nisa, Anzi Lina Ukhtin

    2017-12-01

    A linear code is a very basic code and very useful in coding theory. Generally, a linear code is a code over a finite field in the Hamming metric. Among the most interesting families of codes, the family of self-dual codes is a very important one, because it contains some of the best known error-correcting codes. The concept of the Hamming metric has been developed into the Rosenbloom-Tsfasman metric (RT-metric). The inner product in the RT-metric is different from the Euclidean inner product that is used to define duality in the Hamming metric. Most of the codes which are self-dual in the Hamming metric are not so in the RT-metric. Moreover, the generator matrix is very important to construct a code because it contains a basis of the code. Therefore in this paper, we give some theorems and methods to construct self-dual codes in the RT-metric by considering properties of the inner product and the generator matrix. We also illustrate examples for every kind of construction.
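
    For the familiar Hamming-metric (Euclidean) case, self-duality is a quick computation: C = C⊥ iff k = n/2 and G·Gᵀ ≡ 0 (mod 2) with G of full rank. The sketch below checks the extended [8,4,4] Hamming code; note that the RT-metric constructions of this paper use a different inner product, so this test does not carry over to them directly.

```python
import numpy as np

def is_self_dual(G):
    # Over GF(2) with the Euclidean inner product, C equals its dual iff
    # k = n/2 and G @ G.T == 0 (mod 2), with G of full rank.
    k, n = G.shape
    return n == 2 * k and not ((G @ G.T) % 2).any()

# Extended [8,4,4] Hamming code, a classical binary self-dual code.
G = np.array([
    [1, 0, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
])
print(is_self_dual(G))     # the identity block guarantees full rank here
```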

  20. PCX, Interior-Point Linear Programming Solver

    International Nuclear Information System (INIS)

    Czyzyk, J.

    2004-01-01

    1 - Description of program or function: PCX solves linear programming problems using the Mehrotra predictor-corrector interior-point algorithm. PCX can be called as a subroutine or used in stand-alone mode, with data supplied from an MPS file. The software incorporates modules that can be used separately from the linear programming solver, including a pre-solve routine and data structure definitions. 2 - Methods: The Mehrotra predictor-corrector method is a primal-dual interior-point method for linear programming. The starting point is determined from a modified least squares heuristic. Linear systems of equations are solved at each interior-point iteration via a sparse Cholesky algorithm native to the code. A pre-solver is incorporated in the code to eliminate inefficiencies in the user's formulation of the problem. 3 - Restriction on the complexity of the problem: There are no size limitations built into the program. The size of problem solved is limited by RAM and swap space on the user's computer

  1. Noncoherent Spectral Optical CDMA System Using 1D Active Weight Two-Code Keying Codes

    Directory of Open Access Journals (Sweden)

    Bih-Chyun Yeh

    2016-01-01

    Full Text Available We propose a new family of one-dimensional (1D) active weight two-code keying (TCK) codes in spectral amplitude coding (SAC) optical code division multiple access (OCDMA) networks. We use encoding and decoding transfer functions to operate the 1D active weight TCK. The proposed structure includes an optical line terminal (OLT) and optical network units (ONUs) to produce the encoding and decoding codes of the proposed OLT and ONUs, respectively. The proposed ONU uses the modified cross-correlation to remove interference from other simultaneous users, that is, the multiuser interference (MUI). When the phase-induced intensity noise (PIIN) is the most important noise, the modified cross-correlation suppresses the PIIN. In the numerical results, we find that the bit error rate (BER) for the proposed system using the 1D active weight TCK codes outperforms that for two other systems using the 1D M-Seq codes and 1D balanced incomplete block design (BIBD) codes. The effective source power for the proposed system can achieve −10 dBm, which is lower than that required by the other systems.

  2. Acoustic emission linear pulse holography

    International Nuclear Information System (INIS)

    Collins, H.D.; Busse, L.J.; Lemon, D.K.

    1983-01-01

    This paper describes acoustic emission (AE) linear pulse holography, which produces a chronological linear holographic image of a flaw by utilizing the acoustic energy emitted during crack growth. A thirty-two point sampling array is used to construct phase-only linear holograms of simulated acoustic emission sources on large metal plates. The concept behind AE linear pulse holography is illustrated, and a block diagram of a data acquisition system to implement the concept is given. Array element spacing, synthetic frequency criteria, and lateral depth resolution are specified. A reference timing transducer positioned between the array and the inspection zone, which initiates the time-of-flight measurements, is described. The results graphically illustrate the technique using a one-dimensional FFT computer algorithm (i.e., linear backward wave) for an AE image reconstruction

  3. Fulcrum Network Codes

    DEFF Research Database (Denmark)

    2015-01-01

    Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity...... in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase...... the number of dimensions seen by the network using a linear mapping. Receivers can tradeoff computational effort with network delay, decoding in the high field size, the low field size, or a combination thereof....

  4. A three-dimensional computer code for the nonlinear dynamic response of an HTGR core

    International Nuclear Information System (INIS)

    Subudhi, M.; Lasker, L.; Koplik, B.; Curreri, J.; Goradia, H.

    1979-01-01

    A three-dimensional dynamic code has been developed to determine the nonlinear response of an HTGR core. The HTGR core consists of several thousands of hexagonal core blocks. These are arranged in layers stacked together. Each layer contains many core blocks surrounded on their outer periphery by reflector blocks. The entire assembly is contained within a prestressed concrete reactor vessel. Gaps exist between adjacent blocks in any horizontal plane. Each core block in a given layer is connected to the blocks directly above and below it via three dowel pins. The present analytical study is directed towards an investigation of the nonlinear response of the reactor core blocks in the event of a seismic occurrence. The computer code is developed for a specific mathematical model which represents a vertical arrangement of layers of blocks. This comprises a 'block module' of core elements which would be obtained by cutting a cylindrical portion consisting of seven fuel blocks per layer. It is anticipated that a number of such modules properly arranged could represent the entire core. Hence, the predicted response of this module would exhibit the response characteristics of the core. (orig.)

  5. Parallel linear solvers for simulations of reactor thermal hydraulics

    International Nuclear Information System (INIS)

    Yan, Y.; Antal, S.P.; Edge, B.; Keyes, D.E.; Shaver, D.; Bolotnov, I.A.; Podowski, M.Z.

    2011-01-01

    The state-of-the-art multiphase fluid dynamics code, NPHASE-CMFD, performs multiphase flow simulations in complex domains using implicit nonlinear treatment of the governing equations and in parallel, which is a very challenging environment for the linear solver. The present work illustrates how the Portable, Extensible Toolkit for Scientific Computation (PETSc) and scalable Algebraic Multigrid (AMG) preconditioner from Hypre can be utilized to construct robust and scalable linear solvers for the Newton correction equation obtained from the discretized system of governing conservation equations in NPHASE-CMFD. The overall long-term objective of this work is to extend the NPHASE-CMFD code into a fully-scalable solver of multiphase flow and heat transfer problems, applicable to both steady-state and stiff time-dependent phenomena in complete fuel assemblies of nuclear reactors and, eventually, the entire reactor core (such as the Virtual Reactor concept envisioned by CASL). This campaign appropriately begins with the linear algebraic equation solver, which is traditionally a bottleneck to scalability in PDE-based codes. The computational complexity of the solver is usually superlinear in problem size, whereas the rest of the code, the “physics” portion, usually has its complexity linear in the problem size. (author)

  6. Evaluation of three coding schemes designed for improved data communication

    Science.gov (United States)

    Snelsire, R. W.

    1974-01-01

    Three coding schemes designed for improved data communication are evaluated. Four block codes are evaluated relative to a quality function, which is a function of both the amount of data rejected and the error rate. The Viterbi maximum likelihood decoding algorithm as a decoding procedure is reviewed. This evaluation is obtained by simulating the system on a digital computer. Short constraint length rate 1/2 quick-look codes are studied, and their performance is compared to general nonsystematic codes.

  7. Self-assembling block copolymer systems involving competing length scales : A route toward responsive materials

    NARCIS (Netherlands)

    Nap, R; Erukhimovich, [No Value; ten Brinke, G; Erukhimovich, Igor

    2004-01-01

    The phase behavior of block copolymer melts involving competing length scales, i.e., able to microphase separate on two different length scales, is theoretically investigated using a self-consistent field approach. The specific block copolymers studied consist of a linear A-block linked to an

  8. An Efficient SF-ISF Approach for the Slepian-Wolf Source Coding Problem

    Directory of Open Access Journals (Sweden)

    Tu Zhenyu

    2005-01-01

    Full Text Available A simple but powerful scheme exploiting the binning concept for asymmetric lossless distributed source coding is proposed. The novelty in the proposed scheme is the introduction of a syndrome former (SF) in the source encoder and an inverse syndrome former (ISF) in the source decoder to efficiently exploit an existing linear channel code without the need to modify the code structure or the decoding strategy. For most channel codes, the construction of SF-ISF pairs is a light task. For parallel and serially concatenated codes, and particularly parallel and serial turbo codes, where this appears less obvious, an efficient way of constructing linear-complexity SF-ISF pairs is demonstrated. It is shown that the proposed SF-ISF approach is simple, provably optimal, and generally applicable to any linear channel code. Simulation using conventional and asymmetric turbo codes demonstrates a compression rate that is only 0.06 bit/symbol from the theoretical limit, which is among the best results reported so far.
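
    The SF-ISF idea can be seen with an off-the-shelf algebraic code instead of a turbo code: the syndrome former compresses a 7-bit source word to its 3-bit Hamming syndrome, and the decoder combines that syndrome with correlated side information to recover the word exactly. The sketch assumes the side information differs from the source in at most one bit.

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code (column i is i+1 in binary).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def sf_encode(x):
    # Syndrome former: compress the 7-bit word to its 3-bit syndrome.
    return (H @ x) % 2

# Inverse syndrome former support: syndrome -> lowest-weight error pattern.
table = {(0, 0, 0): np.zeros(7, dtype=int)}
for i in range(7):
    e = np.zeros(7, dtype=int)
    e[i] = 1
    table[tuple((H @ e) % 2)] = e

def isf_decode(s, y):
    # Side information y differs from x in at most one position, so the
    # difference pattern has syndrome (H @ y) XOR s and can be looked up.
    diff = tuple((H @ y + s) % 2)
    return (y + table[diff]) % 2

x = np.array([1, 0, 1, 1, 0, 0, 1])
y = x.copy()
y[4] ^= 1                     # correlated side information at the decoder
print((isf_decode(sf_encode(x), y) == x).all())
```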

  9. Low-Rank Sparse Coding for Image Classification

    KAUST Repository

    Zhang, Tianzhu; Ghanem, Bernard; Liu, Si; Xu, Changsheng; Ahuja, Narendra

    2013-01-01

    In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.

  10. THE McELIECE CRYPTOSYSTEM WITH ARRAY CODES

    Directory of Open Access Journals (Sweden)

    Vedat Şiap

    2011-12-01

    Full Text Available Public-key cryptosystems form an important part of cryptography. In these systems, every user has a public and a private key. The public key allows other users to encrypt messages, which can only be decoded using the secret private key. In that way, public-key cryptosystems allow easy and secure communication between all users without the need to actually meet and exchange keys. One such system is the McEliece public-key cryptosystem, sometimes also called the McEliece scheme. As we live in the information age, coding is used to protect and correct messages during transmission and storage, so linear codes are important in both settings. Due to the richness of their structure, array codes, which are linear, are also an important class of codes. The information can then be transferred more securely by increasing the error-correction capability with array codes. In this paper, we combine two interesting topics, the McEliece cryptosystem and array codes.
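
    The McEliece construction can be sketched end-to-end with a toy [7,4] Hamming code standing in for the paper's array codes. Real deployments use much larger codes; the scrambler S and the permutation here are illustrative, and a weight-1 error plays the role of the deliberate error vector.

```python
import numpy as np

# Systematic [7,4] Hamming generator and matching parity-check matrix.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def gf2_inv(M):
    # Gauss-Jordan inversion over GF(2).
    n = M.shape[0]
    A = np.concatenate([M % 2, np.eye(n, dtype=int)], axis=1)
    for c in range(n):
        pivot = next(r for r in range(c, n) if A[r, c])
        A[[c, pivot]] = A[[pivot, c]]
        for r in range(n):
            if r != c and A[r, c]:
                A[r] = (A[r] + A[c]) % 2
    return A[:, n:]

def hamming_correct(v):
    # Correct at most one flipped bit via syndrome lookup.
    s = tuple((H @ v) % 2)
    for i in range(7):
        e = np.zeros(7, dtype=int)
        e[i] = 1
        if tuple((H @ e) % 2) == s:
            return (v + e) % 2
    return v % 2

rng = np.random.default_rng(3)
S = np.array([[1, 1, 0, 1], [1, 0, 0, 1], [0, 1, 1, 1], [1, 1, 0, 0]])
P = np.eye(7, dtype=int)[rng.permutation(7)]    # secret permutation
G_pub = (S @ G @ P) % 2                         # public key hides the code

def encrypt(m):
    e = np.zeros(7, dtype=int)
    e[rng.integers(7)] = 1                      # one deliberate error
    return (m @ G_pub + e) % 2

def decrypt(c):
    cw = hamming_correct((c @ P.T) % 2)         # undo permutation, fix error
    return (cw[:4] @ gf2_inv(S)) % 2            # systematic part, unscramble

m = np.array([1, 0, 1, 1])
print((decrypt(encrypt(m)) == m).all())
```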

  12. NOLB: Nonlinear Rigid Block Normal Mode Analysis Method

    OpenAIRE

    Hoffmann, Alexandre; Grudinin, Sergei

    2017-01-01

    International audience; We present a new conceptually simple and computationally efficient method for nonlinear normal mode analysis called NOLB. It relies on the rotations-translations of blocks (RTB) theoretical basis developed by Y.-H. Sanejouand and colleagues. We demonstrate how to physically interpret the eigenvalues computed in the RTB basis in terms of angular and linear velocities applied to the rigid blocks and how to construct a nonlinear extrapolation of motion out of these veloci...

  13. On Field Size and Success Probability in Network Coding

    DEFF Research Database (Denmark)

    Geil, Hans Olav; Matsumoto, Ryutaroh; Thomsen, Casper

    2008-01-01

    Using tools from algebraic geometry and Gröbner basis theory we solve two problems in network coding. First we present a method to determine the smallest field size for which linear network coding is feasible. Second we derive improved estimates on the success probability of random linear network...... coding. These estimates take into account which monomials occur in the support of the determinant of the product of Edmonds matrices. Therefore we finally investigate which monomials can occur in the determinant of the Edmonds matrix....

  14. Analytical study of stress and deformation of HTR fuel blocks

    International Nuclear Information System (INIS)

    Tanaka, M.

    1982-01-01

    A two-dimensional finite element computer code named HANS-GR has been developed to predict the mechanical behaviour of graphite fuel blocks with realistic material properties and core environment. When graphite is exposed to high temperature and a high fast-neutron flux, strains arise due to thermal expansion, irradiation-induced shrinkage and creep. Stresses and distortions are thus induced in the fuel block wherever these strains vary spatially. The analytical method used in the program to predict these induced stresses and distortions by the finite element method is discussed. To illustrate the versatility of the computer code, numerical results of two example analyses of the multi-hole type fuel elements in the VHTR reactor are given: one concerning the stresses in fuel blocks with control rod holes, and one concerning the distortions of fuel blocks at the periphery of the reactor core. These phenomena should be carefully examined when multi-hole type fuel elements are applied to the VHTR. The predicted mechanical behaviour of the graphite components depends strongly on the material properties used, so obtaining reliable material property data is essential for a reliable analytical prediction

  15. Final Report for 'Implementation and Evaluation of Multigrid Linear Solvers into Extended Magnetohydrodynamic Codes for Petascale Computing'

    International Nuclear Information System (INIS)

    Vadlamani, Srinath; Kruger, Scott; Austin, Travis

    2008-01-01

    Extended magnetohydrodynamic (MHD) codes are used to model the large, slow-growing instabilities that are projected to limit the performance of the International Thermonuclear Experimental Reactor (ITER). The multiscale nature of the extended MHD equations requires an implicit approach. The current linear solvers needed for the implicit algorithm scale poorly because the resultant matrices are so ill-conditioned. A new solver is needed, especially one that scales to the petascale. The most successful scalable parallel-processor solvers to date are multigrid solvers. Applying multigrid techniques to a set of equations whose fundamental modes are dispersive waves is a promising solution to CEMM problems. For Phase 1, we implemented multigrid preconditioners from the HYPRE project of the Center for Applied Scientific Computing at LLNL, via PETSc of the DOE SciDAC TOPS, for the real matrix systems of the extended MHD code NIMROD, which is one of the primary modeling codes of the OFES-funded Center for Extended Magnetohydrodynamic Modeling (CEMM) SciDAC. We successfully implemented the multigrid solvers on a fusion test problem that allows for real matrix systems, and in the process learned about the details of NIMROD data structures and the difficulties of inverting NIMROD operators. The further success of this project will allow for efficient usage of future petascale computers at the National Leadership Facilities: Oak Ridge National Laboratory, Argonne National Laboratory, and the National Energy Research Scientific Computing Center. The project will be a collaborative effort between computational plasma physicists and applied mathematicians at Tech-X Corporation, applied mathematicians at Front Range Scientific Computations, Inc. (who are collaborators on the HYPRE project), and other computational plasma physicists involved with the CEMM project.

  16. A time-domain decomposition iterative method for the solution of distributed linear quadratic optimal control problems

    Science.gov (United States)

    Heinkenschloss, Matthias

    2005-01-01

    We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block-tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block-tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
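The forward block Gauss-Seidel sweep described above can be sketched in a few lines. This is an illustrative dense-block version on a small, diagonally dominant test system of my own choosing; it is not code from the paper.

```python
import numpy as np

def block_gauss_seidel(D, L, U, b, x0, sweeps=100):
    """Forward block Gauss-Seidel sweeps on a block-tridiagonal system.
    D[i]: diagonal blocks; L[i]: sub-diagonal block coupling i to i-1
    (L[0] unused); U[i]: super-diagonal block coupling i to i+1
    (U[-1] unused); b: right-hand-side blocks; x0: initial guess blocks."""
    n = len(D)
    x = [xi.copy() for xi in x0]
    for _ in range(sweeps):
        for i in range(n):
            r = b[i].copy()
            if i > 0:
                r -= L[i] @ x[i - 1]          # already updated this sweep
            if i < n - 1:
                r -= U[i] @ x[i + 1]          # value from the previous sweep
            x[i] = np.linalg.solve(D[i], r)   # invert the diagonal block
    return x

# small diagonally dominant test system (illustrative sizes, not the paper's)
k, nb = 2, 4
I = np.eye(k)
D = [4.0 * I] * nb
L = [None] + [-I] * (nb - 1)
U = [-I] * (nb - 1) + [None]
x_true = [np.arange(1.0, k + 1) + i for i in range(nb)]
b = []
for i in range(nb):
    bi = D[i] @ x_true[i]
    if i > 0:
        bi = bi + L[i] @ x_true[i - 1]
    if i < nb - 1:
        bi = bi + U[i] @ x_true[i + 1]
    b.append(bi)
x = block_gauss_seidel(D, L, U, b, [np.zeros(k)] * nb)
```

On this diagonally dominant system the sweep converges on its own; for the indefinite optimality systems of the abstract, one such sweep would instead serve as a preconditioner inside a Krylov method.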

  17. Rate-Compatible Protograph LDPC Codes

    Science.gov (United States)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods resulting in rate-compatible low density parity-check (LDPC) codes built from protographs. Described digital coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum distance growth property for the protograph. All possible edges in the graph are searched for the minimum iterative decoding threshold and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode and forward relay channels.

  18. Upper bounds on the number of errors corrected by a convolutional code

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2004-01-01

    We derive upper bounds on the weights of error patterns that can be corrected by a convolutional code with given parameters, or equivalently we give bounds on the code rate for a given set of error patterns. The bounds parallel the Hamming bound for block codes by relating the number of error...

  19. Molecular Mobility in Phase Segregated Bottlebrush Block Copolymer Melts

    Science.gov (United States)

    Yavitt, Benjamin; Gai, Yue; Song, Dongpo; Winter, H. Henning; Watkins, James

    We investigate the linear viscoelastic behavior of poly(styrene)-block-poly(ethylene oxide) (PS-b-PEO) brush block copolymer (BBCP) materials over a range of vol. fractions and with side chain lengths below the entanglement molecular weights. The high chain mobility of the brush architecture results in rapid micro-phase segregation of the brush copolymer segments, which occurs during thermal annealing at mild temperatures. Master curves of the dynamic moduli were obtained by time-temperature superposition. The reduced degree of chain entanglements leads to a unique liquid-like rheology similar to that of bottlebrush homopolymers, even in the phase segregated state. We also explore the alignment of phase segregated domains at exceptionally low strain amplitudes (γ = 0.01) and mild processing temperatures using small angle X-ray scattering (SAXS). Domain orientation occurred readily at strains within the linear viscoelastic regime without noticeable effect on the moduli. This interplay of high molecular mobility and rapid phase segregation that are exhibited simultaneously in BBCPs is in contrast to the behavior of conventional linear block copolymer (LBCP) analogs and opens up new possibilities for processing BBCP materials for a wide range of nanotechnology applications. NSF Center for Hierarchical Manufacturing at the University of Massachusetts, Amherst (CMMI-1025020).

  20. Spatially coded backscatter radiography

    International Nuclear Information System (INIS)

    Thangavelu, S.; Hussein, E.M.A.

    2007-01-01

    Conventional radiography requires access to two opposite sides of an object, which makes it unsuitable for the inspection of extended and/or thick structures (airframes, bridges, floors etc.). Backscatter imaging can overcome this problem, but the indications obtained are difficult to interpret. This paper applies the coded aperture technique to gamma-ray backscatter-radiography in order to enhance the detectability of flaws. This spatial coding method involves the positioning of a mask with closed and open holes to selectively permit or block the passage of radiation. The obtained coded-aperture indications are then mathematically decoded to detect the presence of anomalies. Indications obtained from Monte Carlo calculations were utilized in this work to simulate radiation scattering measurements. These simulated measurements were used to investigate the applicability of this technique to the detection of flaws by backscatter radiography
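The encode/decode idea behind coded apertures can be illustrated with a one-dimensional cyclic model. The mask pattern and the Fourier-domain decoding below are illustrative assumptions of mine (chosen so the mask's transfer function has no zeros), not the decoding procedure used in the paper.

```python
import numpy as np

# binary aperture mask: 1 = open hole, 0 = closed
# (this pattern's DFT has no zeros, so it is exactly invertible)
MASK = np.array([1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0])

def encode(scene, mask=MASK):
    # coded measurement: cyclic convolution of the scene with the mask
    return np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(mask)))

def decode(measurement, mask=MASK):
    # mathematical decoding: invert the mask's transfer function in Fourier space
    return np.real(np.fft.ifft(np.fft.fft(measurement) / np.fft.fft(mask)))

scene = np.array([0.0, 0.0, 5.0, 1.0, 0.0, 0.0, 2.0])
recovered = decode(encode(scene))
```

Each open hole superimposes a shifted copy of the scene on the detector; the decoding step disentangles the copies, which is the sense in which the coded indications are "mathematically decoded".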

  1. Protograph LDPC Codes for the Erasure Channel

    Science.gov (United States)

    Pollara, Fabrizio; Dolinar, Samuel J.; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews the use of protograph Low Density Parity Check (LDPC) codes for erasure channels. A protograph is a Tanner graph with a relatively small number of nodes. A "copy-and-permute" operation can be applied to the protograph to obtain larger derived graphs of various sizes. For very high code rates and short block sizes, a low asymptotic threshold criterion is not the best approach to designing LDPC codes; simple protographs with much regularity and low maximum node degrees appear to be the best choices. Quantized-rateless protograph LDPC codes can be built by careful design of the protograph such that multiple puncturing patterns will still permit message-passing decoding to proceed
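The "copy-and-permute" operation can be sketched as lifting a 0/1 base matrix by circulant permutations: each 1 in the protograph's base matrix becomes a cyclic-shift permutation block, each 0 a zero block. The base matrix and shift choices here are illustrative, not taken from the presentation.

```python
import numpy as np

def lift_protograph(B, Z, shifts=None):
    """Copy-and-permute lifting of a 0/1 protograph base matrix B by factor Z:
    each 1 becomes a ZxZ cyclic-shift permutation, each 0 a ZxZ zero block."""
    m, n = B.shape
    if shifts is None:
        shifts = np.random.default_rng(0).integers(0, Z, size=(m, n))
    H = np.zeros((m * Z, n * Z), dtype=int)
    I = np.eye(Z, dtype=int)
    for i in range(m):
        for j in range(n):
            if B[i, j]:
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shifts[i, j], axis=1)
    return H

# hypothetical 2x4 protograph (design rate 1/2), lifted by Z copies
B = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]])
Z = 5
H = lift_protograph(B, Z)
```

The lifted parity-check matrix keeps the protograph's design rate and node degrees: every row of H has the weight of its protograph row, every column the weight of its protograph column.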

  2. Evaluation of ETOG-3Q/ETOG-3, FLANGE-II, XLACS, NJOY and linear/recent/groupie codes for calculations of resonance and reference cross sections

    International Nuclear Information System (INIS)

    Anaf, J.; Chalhoub, E.S.

    1991-01-01

    The NJOY and LINEAR/RECENT/GROUPIE calculational procedures for the resolved and unresolved resonance contributions and background cross sections are evaluated. Elastic scattering, fission and capture multigroup cross sections generated by these codes and by the previously validated ETOG-3Q, ETOG-3, FLANGE-II and XLACS are compared. A constant weighting function and a temperature of zero Kelvin are assumed. Discrepancies are presented and analyzed. (author)

  3. Self-orthogonal codes from some bush-type Hadamard matrices ...

    African Journals Online (AJOL)

    By means of a construction method outlined by Harada and Tonchev, we determine some non-binary self-orthogonal codes obtained from the row span of orbit matrices of Bush-type Hadamard matrices that admit a fixed-point-free and fixed-block-free automorphism of prime order. We show that the code [20; 15; 4]5 obtained ...

  4. Core seismic behaviour: linear and non-linear models

    International Nuclear Information System (INIS)

    Bernard, M.; Van Dorsselaere, M.; Gauvain, M.; Jenapierre-Gantenbein, M.

    1981-08-01

    The usual methodology for core seismic behaviour analysis leads to a double, complementary approach: defining a core model to be included in the reactor-block seismic response analysis, simple enough but representative of the basic movements (diagrid or slab), and defining a finer core model, with basic data issued from the first model. This paper presents the history of the different models of both kinds. The inert mass model (IMM) yielded a first rough diagrid movement. The direct linear model (DLM), without shocks and with sodium as an added mass, led to two variants: DLM 1, with independent movements of the fuel and radial blanket subassemblies, and DLM 2, with a combined core movement. The non-linear model (NLM) ''CORALIE'' uses the same basic modelization (finite element beams) but accounts for shocks. It studies the response of a diameter on flats and takes into account the fluid coupling and the wrapper tube flexibility at the pad level. Damping consists of a modal part of 2% and a part due to shocks. Finally, ''CORALIE'' yields the time history of the displacements and forces on the supports, but the damping (probably greater than 2%) and the fluid-structure interaction remain to be specified more precisely. The validation experiments were performed on a full-scale RAPSODIE core mock-up, at 1/3 similitude with respect to SPX 1. The equivalent linear model (ELM) was developed for the SPX 1 reactor-block response analysis and a specified seismic level (SB or SM). It is composed of several oscillators fixed to the diagrid and yields the same maximum displacements and forces as the NLM. The SPX 1 core seismic analysis, with a diagrid input spectrum corresponding to a 0.1 g group acceleration, has been carried out with these models; some aspects of these calculations are presented here

  5. Deciphering the genetic regulatory code using an inverse error control coding framework.

    Energy Technology Data Exchange (ETDEWEB)

    Rintoul, Mark Daniel; May, Elebeoba Eni; Brown, William Michael; Johnston, Anna Marie; Watson, Jean-Paul

    2005-03-01

    We have found that developing a computational framework for reconstructing error control codes for engineered data, and ultimately for deciphering genetic regulatory coding sequences, is a challenging and uncharted area that will require advances in computational technology for exact solutions. Although exact solutions are desired, computational approaches that yield plausible solutions would be considered sufficient as a proof of concept for the feasibility of reverse engineering error control codes and the possibility of developing a quantitative model for understanding and engineering genetic regulation. Such evidence would help move the idea of reconstructing error control codes for engineered and biological systems from the high-risk, high-payoff realm into the highly probable, high-payoff domain. Additionally, this work will impact biological sensor development and the ability to model, and ultimately develop defense mechanisms against, bioagents that can be engineered to cause catastrophic damage. Understanding how biological organisms are able to communicate their genetic message efficiently in the presence of noise can improve our current communication protocols, a continuing research interest. Towards this end, project goals include: (1) Develop parameter estimation methods for n for block codes and for n, k, and m for convolutional codes. Use these methods to determine error control (EC) code parameters for gene regulatory sequences. (2) Develop an evolutionary computing framework for near-optimal solutions to the algebraic code reconstruction problem. The method will be tested on engineered and biological sequences.

  6. Applications of Coding in Network Communications

    Science.gov (United States)

    Chang, Christopher SungWook

    2012-01-01

    This thesis uses the tool of network coding to investigate fast peer-to-peer file distribution, anonymous communication, robust network construction under uncertainty, and prioritized transmission. In a peer-to-peer file distribution system, we use a linear optimization approach to show that the network coding framework significantly simplifies…

  7. Joint shape segmentation with linear programming

    KAUST Repository

    Huang, Qixing; Koltun, Vladlen; Guibas, Leonidas

    2011-01-01

    program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape

  8. ARKAS: A three-dimensional finite element code for the analysis of core distortions and mechanical behaviour

    International Nuclear Information System (INIS)

    Nakagawa, M.

    1984-01-01

    The computer program ARKAS has been developed to predict core distortions and mechanical behaviour in a cluster of subassemblies under steady-state conditions in LMFBR cores. This report describes the analytical models and numerical procedures employed in the code, together with some typical results of analyses made on large LMFBR cores. ARKAS is programmed in FORTRAN-IV and is capable of treating up to 260 assemblies in a cluster, with flexible boundary conditions including mirror and rotational symmetry. The nonlinearity of the problem due to contact and separation is solved by a step-iterative procedure based on the Newton-Raphson method. In each iteration, the linear matrix equation must be reconstructed and then solved directly. To save computer time and memory, the substructure method is adopted when reconstructing the linear matrix equation, and the block successive over-relaxation method is adopted when solving it. At every time step, ARKAS computes the three-dimensional displacements and rotations of the subassemblies in the core and the inter-duct forces, including those at the nozzle tips and nozzle bases, with friction effects. The code can also deal with the refueling and shuffling of subassemblies and calculate withdrawal forces. For qualitative validation of the code, sample calculations were performed on several bundle arrays. These calculations analysed contact and separation processes under the influence of friction forces, off-center loading, duct rotation and torsion, thermal expansion, and irradiation-induced swelling and creep. The results are quite reasonable in the light of the expected behaviour. This work was performed under the sponsorship of Toshiba Corporation

  9. International linear collider simulations using BDSIM

    Indian Academy of Sciences (India)

    BDSIM is a Geant4 [1] extension toolkit for the simulation of particle transport in accelerator beamlines. It is a code that combines accelerator-style particle tracking with traditional Geant-style tracking based on Runge–Kutta techniques. A more detailed description of the code can be found in [2]. In an e+e− linear collider ...

  10. Cement Stabilized Soil Blocks Admixed with Sugarcane Bagasse Ash

    Directory of Open Access Journals (Sweden)

    Jijo James

    2016-01-01

    The study investigated the performance of ordinary Portland cement (OPC) stabilized soil blocks amended with sugarcane bagasse ash (SBA). Locally available soil was tested for its properties and characterized as clay of medium plasticity. This soil was stabilized using 4% and 10% OPC for the manufacture of blocks of size 19 cm × 9 cm × 9 cm. The blocks were admixed with 4%, 6%, and 8% SBA by weight of dry soil during casting, with plain OPC-stabilized blocks acting as control. All blocks were cast to one target density and water content, followed by moist curing for a period of 28 days. They were then subjected to compressive strength, water absorption, and efflorescence tests in accordance with Bureau of Indian Standards (BIS) specifications. The results indicated that OPC stabilization produced blocks that met the BIS specifications. Addition of SBA increased the compressive strength of the blocks and slightly increased the water absorption, which still met the standard requirement of the BIS code. It is concluded that adding SBA to OPC in stabilized block manufacture can produce stabilized blocks at reduced OPC content that meet the minimum required standards.

  11. High-dynamic range compressive spectral imaging by grayscale coded aperture adaptive filtering

    Directory of Open Access Journals (Sweden)

    Nelson Eduardo Diaz

    2015-09-01

    The coded aperture snapshot spectral imaging system (CASSI) is an imaging architecture which senses the three-dimensional information of a scene with two-dimensional (2D) focal plane array (FPA) coded projection measurements. A reconstruction algorithm takes advantage of the sparsity of the compressive measurements to recover the underlying 3D data cube. Traditionally, CASSI uses block-unblock coded apertures (BCA) to spatially modulate the light. In CASSI the quality of the reconstructed images depends on the design of these coded apertures and the FPA dynamic range. This work presents a new CASSI architecture based on grayscale coded apertures (GCA) which reduce the FPA saturation and increase the dynamic range of the reconstructed images. The set of GCA is calculated in a real-time adaptive manner, exploiting the information from the FPA compressive measurements. Extensive simulations show the improvement attained in the quality of the reconstructed images when GCA are employed. In addition, a comparison between traditional coded apertures and GCA is carried out with respect to noise tolerance.

  12. Introduction to generalized linear models

    CERN Document Server

    Dobson, Annette J

    2008-01-01

    Introduction Background Scope Notation Distributions Related to the Normal Distribution Quadratic Forms Estimation Model Fitting Introduction Examples Some Principles of Statistical Modeling Notation and Coding for Explanatory Variables Exponential Family and Generalized Linear Models Introduction Exponential Family of Distributions Properties of Distributions in the Exponential Family Generalized Linear Models Examples Estimation Introduction Example: Failure Times for Pressure Vessels Maximum Likelihood Estimation Poisson Regression Example Inference Introduction Sampling Distribution for Score Statistics Taylor Series Approximations Sampling Distribution for MLEs Log-Likelihood Ratio Statistic Sampling Distribution for the Deviance Hypothesis Testing Normal Linear Models Introduction Basic Results Multiple Linear Regression Analysis of Variance Analysis of Covariance General Linear Models Binary Variables and Logistic Regression Probability Distributions ...

  13. Cheat-sensitive commitment of a classical bit coded in a block of m × n round-trip qubits

    Science.gov (United States)

    Shimizu, Kaoru; Fukasaka, Hiroyuki; Tamaki, Kiyoshi; Imoto, Nobuyuki

    2011-08-01

    This paper proposes a quantum protocol for a cheat-sensitive commitment of a classical bit. Alice, the receiver of the bit, can examine dishonest Bob, who changes or postpones his choice. Bob, the sender of the bit, can examine dishonest Alice, who violates concealment. For each round-trip case, Alice sends one of two spin states |S±⟩ by choosing basis S at random from two conjugate bases X and Y. Bob chooses basis C ∈ {X,Y} to perform a measurement and returns a resultant state |C±⟩. Alice then performs a measurement with the other basis R (≠S) and obtains an outcome |R±⟩. In the opening phase, she can discover dishonest Bob, who unveils a wrong basis with a faked spin state, or Bob can discover dishonest Alice, who infers basis C but destroys |C±⟩ by setting R to be identical to S in the commitment phase. If a classical bit is coded in a block of m × n qubit particles, impartial examinations and probabilistic security criteria can be achieved.
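The conjugate-basis statistics underlying the protocol can be checked directly from the Born rule. This sketch only computes single-qubit overlap probabilities for the X and Y bases mentioned in the abstract; it does not model the full commitment protocol.

```python
import numpy as np

# spin states of the two conjugate bases X and Y (Bloch-equator qubits)
X = {+1: np.array([1,  1]) / np.sqrt(2), -1: np.array([1, -1]) / np.sqrt(2)}
Y = {+1: np.array([1, 1j]) / np.sqrt(2), -1: np.array([1, -1j]) / np.sqrt(2)}

def prob(outcome, state):
    # Born rule: probability of obtaining `outcome` when measuring `state`
    return abs(np.vdot(outcome, state)) ** 2
```

Measuring in the same basis reproduces the prepared state with certainty, while measuring in the conjugate basis gives a uniformly random outcome; this is why Alice's measurement in basis R ≠ S reveals nothing about Bob's choice unless she cheats by setting R = S.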

  14. Improved Linear Cryptanalysis of Reduced-Round SIMON-32 and SIMON-48

    DEFF Research Database (Denmark)

    Abdelraheem, Mohamed Ahmed; Alizadeh, Javad; Alkhzaimi, Hoda A.

    2015-01-01

    In this paper we analyse two variants of the SIMON family of lightweight block ciphers against variants of linear cryptanalysis and present the best linear cryptanalytic results on these variants of reduced-round SIMON to date. We propose a time-memory trade-off method that finds differential/linear...

  15. Polynomial weights and code constructions

    DEFF Research Database (Denmark)

    Massey, J; Costello, D; Justesen, Jørn

    1973-01-01

    For any nonzero element c of a general finite field GF(q), it is shown that the polynomials (x - c)^i, i = 0, 1, 2, ..., have the "weight-retaining" property that any linear combination of these polynomials with coefficients in GF(q) has Hamming weight at least as great as that of the minimum-degree polynomial included. This fundamental property is then used as the key to a variety of code constructions including 1) a simplified derivation of the binary Reed-Muller codes and, for any prime p greater than 2, a new extensive class of p-ary "Reed-Muller codes," 2) a new class of "repeated-root" cyclic codes, ... of long constraint length binary convolutional codes derived from 2^r-ary Reed-Solomon codes, and 6) a new class of q-ary "repeated-root" constacyclic codes with an algebraic decoding algorithm.
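The weight-retaining property can be verified exhaustively for small degrees over GF(2) (taking c = 1, so the polynomials are (x + 1)^i); this check is my own illustration, not code from the paper.

```python
import numpy as np
from itertools import combinations

def x_plus_1_pow(i, n=8):
    # coefficient vector of (x + 1)^i over GF(2), lowest degree first
    c = np.zeros(n, dtype=int)
    c[0] = 1
    for _ in range(i):
        c = (c + np.roll(c, 1)) % 2   # multiply by (x + 1) mod 2
    return c

# exhaustive check for degrees 0..5: every nonzero GF(2) combination has
# Hamming weight >= weight of the minimum-degree polynomial included
for r in range(1, 7):
    for S in combinations(range(6), r):
        combo = np.zeros(8, dtype=int)
        for i in S:
            combo = (combo + x_plus_1_pow(i)) % 2
        assert combo.sum() >= x_plus_1_pow(min(S)).sum()
```

For example, (x + 1)^2 + (x + 1)^3 = x + x^3 over GF(2), whose weight 2 equals the weight of (x + 1)^2 = 1 + x^2, the minimum-degree term.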

  16. Reference manual for the KfK code PCROSS

    International Nuclear Information System (INIS)

    Ravndal, S.; Oblozinsky, P.; Kelzenberg, S.; Cierjacks, S.

    1991-12-01

    The PCROSS code calculates the so-called 'pseudo' cross sections for sequential (x,n) reactions and merges them, together with the 'effective' cross sections for neutron-induced reactions, into one file of 'collapsed' cross sections. The file is tailored to provide an input for the FISPACT inventory code, which calculates the activation and related radiological quantities of material irradiated in given neutron fields. The report summarizes the calculational procedure and provides the reader with the essential technical details of the code PCROSS (version 1.0), such as descriptions of parameters, common blocks and subroutines. (orig.)

  17. Cipher block based authentication module: A hardware design perspective

    NARCIS (Netherlands)

    Michail, H.E.; Schinianakis, D.; Goutis, C.E.; Kakarountas, A.P.; Selimis, G.

    2011-01-01

    Message Authentication Codes (MACs) are widely used to authenticate data packets transmitted through networks. Typically MACs are implemented using modules like hash functions in conjunction with encryption algorithms (like block ciphers), which are used to encrypt the
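As a minimal software illustration of a hash-based MAC (the paper itself concerns a hardware design), Python's standard library provides HMAC; the key and packet names are placeholders.

```python
import hmac
import hashlib

def make_tag(key: bytes, packet: bytes) -> bytes:
    # HMAC-SHA256: a hash-based message authentication code for the packet
    return hmac.new(key, packet, hashlib.sha256).digest()

def verify(key: bytes, packet: bytes, tag: bytes) -> bool:
    # constant-time comparison guards against timing side channels
    return hmac.compare_digest(make_tag(key, packet), tag)

tag = make_tag(b"secret-key", b"payload")
```

The receiver recomputes the tag with the shared key and rejects any packet whose tag does not match, which detects both tampering and forgery without knowledge of the key.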

  18. A Navier-Stokes Chimera Code on the Connection Machine CM-5: Design and Performance

    Science.gov (United States)

    Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)

    1994-01-01

    We have implemented a three-dimensional compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the 'chimera' approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. A parallel machine like the CM-5 is well-suited for finite-difference methods on structured grids. The regular pattern of connections of a structured mesh maps well onto the architecture of the machine. So the first design choice, finite differences on a structured mesh, is natural. We use centered differences in space, with added artificial dissipation terms. When numerically solving the Navier-Stokes equations, there are liable to be some mesh cells near a solid body that are small in at least one direction. This mesh cell geometry can impose a very severe CFL (Courant-Friedrichs-Lewy) condition on the time step for explicit time-stepping methods. Thus, though explicit time-stepping is well-suited to the architecture of the machine, we have adopted implicit time-stepping. We have further taken the approximate factorization approach. This creates the need to solve large banded linear systems and creates the first possible barrier to an efficient algorithm. To overcome this first possible barrier we have considered two options. The first is just to solve the banded linear systems with data spread over the whole machine, using whatever fast method is available. This option is adequate for solving scalar tridiagonal systems, but for scalar pentadiagonal or block tridiagonal systems it is somewhat slower than desired. The second option is to 'transpose' the flow and geometry variables as part of the time-stepping process: Start with x-lines of data in-processor. 
Form explicit terms in x, then transpose so y-lines of data are
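For reference, the scalar tridiagonal systems mentioned above are classically solved with the serial Thomas algorithm. This sketch is the textbook serial method, not the CM-5 data-parallel solver the abstract describes.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: sub-diagonal a (a[0] unused), diagonal b,
    super-diagonal c (c[-1] unused), right-hand side d.  Serial Thomas
    algorithm: forward elimination followed by back substitution."""
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]          # eliminate the sub-diagonal
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 4x4 test system with diagonal 2 and off-diagonals -1; exact solution [1,2,3,4]
x = thomas([0.0, -1.0, -1.0, -1.0],
           [2.0, 2.0, 2.0, 2.0],
           [-1.0, -1.0, -1.0, 0.0],
           [0.0, 0.0, 0.0, 5.0])
```

The forward sweep carries a strict data dependence from row to row, which is exactly why, as the abstract notes, solving such systems efficiently on a data-parallel machine requires either spreading the data or transposing it so each system lies within a processor.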

  19. The Preliminary GAMMA Code Thermal hydraulic Analysis for the Steady State of HTR-10 Initial Core

    Energy Technology Data Exchange (ETDEWEB)

    Jun, Ji Su; Lim, Hong Sik; Lee, Won Jae

    2006-07-15

    This report describes a preliminary thermal-hydraulic analysis of the HTR-10 steady-state full-power initial core, to provide a benchmark calculation for GAMMA (GAs Multicomponent Mixture Analysis), a safety analysis code for Very High-Temperature Gas-Cooled Reactors (VHTGR). The GAMMA input data are produced for the fluid block, wall block and radiation heat transfer models and for the material properties of each component in the HTR-10 reactor. The temperature and flow distributions of the HTR-10 steady-state 10 MW(th) full-power initial core are calculated by the GAMMA code with boundary conditions of a total reactor inlet flow rate of 4.32 kg/s, an inlet temperature of 250 °C, an inlet pressure of 3 MPa, an outlet pressure of 2.992 MPa and a fixed temperature of 50 °C at the RCCS water cooling tube. The calculation results are compared with the solid material temperatures measured at 22 fixed instrumentation positions in HTR-10. The wall temperature distribution in the pebble-bed core shows that the minimum temperature of 358 °C is located in the upper core, a zone hotter than 829 °C is located in the inner region of 0.45 m radius at the bottom of the core centre, and the maximum wall temperature is 897 °C. The wall temperatures decrease linearly with radial and axial distance from the bottom of the core centre. The maximum temperature of the RPV is 230 °C, and the maximum values of the fuel average temperature and the TRISO centreline temperature are 907 °C and 929 °C, respectively, much lower than the fuel temperature limit of 1230 °C. The comparison between the GAMMA code predictions and the measured temperature data shows that the calculated results are very close to the measured values in the top and side reflector regions, but a great difference appears in the bottom reflector region. Some measured data are abnormally high in the bottom reflector region, so the data need confirmation in the future. Fifteen of twenty-two data have a

  20. Joint Coding/Decoding for Multi-message HARQ

    OpenAIRE

    Benyouss , Abdellatif; Jabi , Mohammed; Le Treust , Maël; Szczecinski , Leszek

    2016-01-01

    International audience; In this work, we propose and investigate a new coding strategy devised to increase the throughput of hybrid ARQ (HARQ) transmission over a block-fading channel. In our scheme, the transmitter jointly encodes a variable number of bits for each round of HARQ. The parameters (rates) of this joint coding can vary and may be based on the negative acknowledgment (NACK) signals provided by the receiver or on past (outdated) information about the channel states. The re...

  1. Entanglement-assisted quantum low-density parity-check codes

    International Nuclear Information System (INIS)

    Fujiwara, Yuichiro; Clark, David; Tonchev, Vladimir D.; Vandendriessche, Peter; De Boeck, Maarten

    2010-01-01

    This article develops a general method for constructing entanglement-assisted quantum low-density parity-check (LDPC) codes, which is based on combinatorial design theory. Explicit constructions are given for entanglement-assisted quantum error-correcting codes with many desirable properties. These properties include the requirement of only one initial entanglement bit, high error-correction performance, high rates, and low decoding complexity. The proposed method produces several infinite families of codes with a wide variety of parameters and entanglement requirements. Our framework encompasses the previously known entanglement-assisted quantum LDPC codes having the best error-correction performance and many other codes with better block error rates in simulations over the depolarizing channel. We also determine important parameters of several well-known classes of quantum and classical LDPC codes for previously unsettled cases.

  2. Observations on the SIMON Block Cipher Family

    DEFF Research Database (Denmark)

    Kölbl, Stefan; Leander, Gregor; Tiessen, Tyge

    2015-01-01

    In this paper we analyse the general class of functions underlying the Simon block cipher. In particular, we derive efficiently computable and easily implementable expressions for the exact differential and linear behaviour of Simon-like round functions. Following up on this, we use those...
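
    The Simon round function has a simple closed form, f(x) = (S^1(x) & S^8(x)) XOR S^2(x), with S^j a left rotation by j bits; a minimal sketch (the word size 16 and the zero-key usage are illustrative choices, not parameters from the paper):

```python
def rotl(x, r, n=16):
    """Left-rotate an n-bit word by r positions."""
    mask = (1 << n) - 1
    return ((x << r) | (x >> (n - r))) & mask

def simon_round_function(x, n=16):
    """The Simon round function: f(x) = (S^1(x) & S^8(x)) ^ S^2(x),
    built only from rotations, AND and XOR (the class analysed here)."""
    return (rotl(x, 1, n) & rotl(x, 8, n)) ^ rotl(x, 2, n)

def feistel_round(left, right, round_key, n=16):
    """One Feistel step of Simon: left' = right ^ f(left) ^ k, right' = left."""
    return right ^ simon_round_function(left, n) ^ round_key, left
```

    The paper's differential and linear analysis exploits exactly this structure: the only nonlinear operation is the single bitwise AND.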

  3. Optimal block-tridiagonalization of matrices for coherent charge transport

    International Nuclear Information System (INIS)

    Wimmer, Michael; Richter, Klaus

    2009-01-01

    Numerical quantum transport calculations are commonly based on a tight-binding formulation. A wide class of quantum transport algorithms require the tight-binding Hamiltonian to be in the form of a block-tridiagonal matrix. Here, we develop a matrix reordering algorithm based on graph partitioning techniques that yields the optimal block-tridiagonal form for quantum transport. The reordered Hamiltonian can lead to significant performance gains in transport calculations, and makes it possible to apply conventional two-terminal algorithms to arbitrarily complex geometries, including multi-terminal structures. The block-tridiagonalization algorithm can thus be the foundation for a generic quantum transport code, applicable to arbitrary tight-binding systems. We demonstrate the power of this approach by applying the block-tridiagonalization algorithm together with the recursive Green's function algorithm to various examples of mesoscopic transport in two-dimensional electron gases in semiconductors and graphene.
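
    Once the Hamiltonian is in block-tridiagonal form it can be solved by blockwise elimination; the following dense NumPy sketch of the block Thomas algorithm illustrates the structure that such a reordering enables (transport codes use the related recursive Green's function algorithm rather than this toy solver):

```python
import numpy as np

def solve_block_tridiagonal(A_diag, A_lower, A_upper, b):
    """Block Thomas algorithm for a block-tridiagonal system: A_diag[i] are
    the diagonal blocks, A_lower[i] couples block i+1 to block i, A_upper[i]
    couples block i to block i+1, and b is a list of right-hand-side blocks."""
    n = len(A_diag)
    D = [None] * n   # modified diagonal blocks
    y = [None] * n   # modified right-hand sides
    D[0], y[0] = A_diag[0], b[0]
    for i in range(1, n):
        # Forward elimination: remove the lower block with the previous pivot.
        L = A_lower[i - 1] @ np.linalg.inv(D[i - 1])
        D[i] = A_diag[i] - L @ A_upper[i - 1]
        y[i] = b[i] - L @ y[i - 1]
    x = [None] * n
    x[-1] = np.linalg.solve(D[-1], y[-1])
    for i in range(n - 2, -1, -1):
        # Back substitution through the super-diagonal blocks.
        x[i] = np.linalg.solve(D[i], y[i] - A_upper[i] @ x[i + 1])
    return x

# Two coupled 1x1 blocks: the full matrix is [[2, 1], [1, 2]], rhs [3, 3].
x = solve_block_tridiagonal(
    [np.array([[2.0]]), np.array([[2.0]])],   # diagonal blocks
    [np.array([[1.0]])],                      # sub-diagonal block
    [np.array([[1.0]])],                      # super-diagonal block
    [np.array([3.0]), np.array([3.0])],
)
```

    The cost scales linearly with the number of blocks but cubically with the block size, which is why minimizing the block width via reordering pays off.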

  4. Evaluation of left atrial linear ablation using contiguous and optimized radiofrequency lesions: the ALINE study.

    Science.gov (United States)

    Wolf, Michael; El Haddad, Milad; Fedida, Joël; Taghji, Philippe; Van Beeumen, Katarina; Strisciuglio, Teresa; De Pooter, Jan; Lepièce, Caroline; Vandekerckhove, Yves; Tavernier, René; Duytschaever, Mattias; Knecht, Sébastien

    2018-01-08

    Achieving block across linear lesions is challenging. We prospectively evaluated radiofrequency (RF) linear ablation at the roof and mitral isthmus (MI) using point-by-point contiguous and optimized RF lesions. Forty-one consecutive patients with symptomatic persistent AF underwent stepwise contact force (CF)-guided catheter ablation during ongoing AF. A single linear set of RF lesions was delivered at the roof and posterior MI according to the 'Atrial LINEar' (ALINE) criteria, i.e. point-by-point RF delivery (up to 35 W) respecting strict criteria of contiguity (inter-lesion distance ≤ 6 mm) and indirect lesion depth assessment (ablation index ≥550). We assessed the incidence of bidirectional block across both lines only after restoration of sinus rhythm. After a median RF time of 7 min [interquartile range (IQR) 5-9], first-pass block across roof lines was observed in 38 of 41 (93%) patients. Final bidirectional roof block was achieved in 40 of 41 (98%) patients. First-pass block was observed in 8 of 35 (23%) MI lines, after a median RF time of 8 min (IQR 7-12). Additional endo- and epicardial (54% of patients) RF applications resulted in final bidirectional MI block in 28 of 35 (80%) patients. During a median follow-up of 396 (IQR 310-442) days, 12 patients underwent repeat procedures, with conduction recovery in 4 of 12 and 5 of 10 previously blocked roof lines and MI lines, respectively. No complications occurred. Anatomical linear ablation using contiguous and optimized RF lesions results in a high rate of first-pass block at the roof but not at the MI. Due to its complex 3D architecture, the MI frequently requires additional endo- and epicardial RF lesions to be blocked. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author(s) 2018. For permissions, please email: journals.permissions@oup.com.

  5. Benchmark studies of the gyro-Landau-fluid code and gyro-kinetic codes on kinetic ballooning modes

    Energy Technology Data Exchange (ETDEWEB)

    Tang, T. F. [Dalian University of Technology, Dalian 116024 (China); Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Xu, X. Q. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Ma, C. H. [Fusion Simulation Center, School of Physics, Peking University, Beijing (China); Bass, E. M.; Candy, J. [General Atomics, P.O. Box 85608, San Diego, California 92186-5608 (United States); Holland, C. [University of California San Diego, La Jolla, California 92093-0429 (United States)

    2016-03-15

    A Gyro-Landau-Fluid (GLF) 3 + 1 model has recently been implemented in the BOUT++ framework, which contains full Finite-Larmor-Radius effects, Landau damping, and toroidal resonance [Ma et al., Phys. Plasmas 22, 055903 (2015)]. A linear global beta scan has been conducted using the JET-like circular equilibria (cbm18 series), showing that the unstable modes are kinetic ballooning modes (KBMs). In this work, we use the GYRO code, a gyrokinetic continuum code widely used for simulation of plasma microturbulence, to benchmark the GLF 3 + 1 code on KBMs. To verify our code on the KBM case, we first perform the beta scan based on the “Cyclone base case parameter set.” We find that the growth rate is almost the same for the two codes, and that the KBM mode is further destabilized as beta increases. For the JET-like global circular equilibria, as the modes localize in the peak pressure gradient region, a linear local beta scan using the same set of equilibria has been performed at this position for comparison. With the drift kinetic electron module in the GYRO code, including a small electron-electron collisionality to damp electron modes, the GYRO-generated mode structures and parity suggest that they are kinetic ballooning modes, and the growth rate is comparable to the GLF results. However, a radial scan of the pedestal for a particular set of cbm18 equilibria, using the GYRO code, shows different trends for the low-n and high-n modes. For the low-n modes, the linear growth rate peaks at the peak pressure gradient position, as in the GLF results. However, for the high-n modes, the growth rate of the most unstable mode shifts outward to the bottom of the pedestal, and the real frequency of what were originally KBMs in the ion diamagnetic drift direction steadily approaches and crosses over to the electron diamagnetic drift direction.

  6. Parallel beam dynamics simulation of linear accelerators

    International Nuclear Information System (INIS)

    Qiang, Ji; Ryne, Robert D.

    2002-01-01

    In this paper we describe parallel particle-in-cell methods for the large scale simulation of beam dynamics in linear accelerators. These techniques have been implemented in the IMPACT (Integrated Map and Particle Accelerator Tracking) code. IMPACT is being used to study the behavior of intense charged particle beams and as a tool for the design of next-generation linear accelerators. As examples, we present applications of the code to the study of emittance exchange in high intensity beams and to the study of beam transport in a proposed accelerator for the development of accelerator-driven waste transmutation technologies
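
    A core kernel of any particle-in-cell code is depositing particle charge onto the grid before the field solve. A minimal one-dimensional cloud-in-cell sketch (the grid size and particle data are illustrative, not taken from IMPACT):

```python
import numpy as np

def deposit_charge(positions, weights, n_cells, length):
    """Cloud-in-cell (linear) charge deposition: each particle's charge is
    shared between its two nearest grid points in proportion to proximity,
    giving a charge density rho on the n_cells+1 grid points."""
    dx = length / n_cells
    rho = np.zeros(n_cells + 1)
    for x, w in zip(positions, weights):
        cell = int(x / dx)          # left grid point of the particle's cell
        frac = x / dx - cell        # fractional position within the cell
        rho[cell] += w * (1 - frac) / dx
        rho[cell + 1] += w * frac / dx
    return rho

# A unit-charge particle exactly mid-cell splits equally between neighbours.
rho = deposit_charge([2.5], [1.0], n_cells=10, length=10.0)
```

    In a parallel code like those described above, this loop runs over each processor's local particles, and the partial densities are summed across processors before solving for the self-consistent space-charge field.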

  7. Iterative solution of linear equations in ODE codes. [Krylov subspaces

    Energy Technology Data Exchange (ETDEWEB)

    Gear, C. W.; Saad, Y.

    1981-01-01

    Each integration step of a stiff equation involves the solution of a nonlinear equation, usually by a quasi-Newton method that leads to a set of linear problems. Iterative methods for these linear equations are studied. Of particular interest are methods that do not require an explicit Jacobian, but can work directly with differences of function values, using Jv ≈ [f(x + δv) − f(x)]/δ. Some numerical experiments using a modification of LSODE are reported. 1 figure, 2 tables.
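
    The Jacobian-free matrix-vector product at the heart of such Krylov methods can be sketched directly (the test function and step-size heuristic are illustrative):

```python
import numpy as np

def jacobian_free_matvec(f, x, v, eps=1e-7):
    """Approximate J(x) @ v without ever forming the Jacobian, via the
    forward difference Jv ~= [f(x + delta*v) - f(x)] / delta."""
    delta = eps / max(np.linalg.norm(v), 1e-12)   # scale step to |v|
    return (f(x + delta * v) - f(x)) / delta

# f(x) = (x0^2, x0*x1) has Jacobian [[2*x0, 0], [x1, x0]].
f = lambda x: np.array([x[0] ** 2, x[0] * x[1]])
x = np.array([1.0, 2.0])
v = np.array([1.0, 0.0])
approx = jacobian_free_matvec(f, x, v)   # should be close to J(x) @ v = [2, 2]
```

    A Krylov solver such as GMRES only ever needs this product, which is what makes matrix-free stiff ODE integration possible.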

  8. Jackknife Variance Estimator for Two Sample Linear Rank Statistics

    Science.gov (United States)

    1988-11-01

    Keywords: strong consistency; linear rank test; influence function.

  9. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    Directory of Open Access Journals (Sweden)

    Marinkovic Slavica

    2006-01-01

    Full Text Available Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
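
    The pseudoinverse receiver for an oversampled expansion can be sketched in a few lines; the frame, signal and erasure pattern below are illustrative, and a real OFB decoder would first run the syndrome-based error localization the abstract describes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Oversampled frame: 6 analysis vectors for a 3-dimensional signal block,
# i.e. a rate-2 expansion whose redundancy can absorb lost coefficients.
F = rng.standard_normal((6, 3))
x = np.array([1.0, -2.0, 0.5])
y = F @ x                                  # redundant transmitted coefficients

# Erasure channel: coefficient 2 is lost, so drop that row of the expansion.
kept = [0, 1, 3, 4, 5]
x_hat = np.linalg.pinv(F[kept]) @ y[kept]  # pseudoinverse (least-squares) receiver
```

    As long as the surviving rows still span the signal space, the least-squares reconstruction is exact for noiseless coefficients; quantization noise is additionally averaged down by the redundancy.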

  10. Duals of Affine Grassmann Codes and Their Relatives

    DEFF Research Database (Denmark)

    Beelen, P.; Ghorpade, S. R.; Hoholdt, T.

    2012-01-01

    Affine Grassmann codes are a variant of generalized Reed-Muller codes and are closely related to Grassmann codes. These codes were introduced in a recent work by Beelen et al. Here, we consider, more generally, affine Grassmann codes of a given level. We explicitly determine the dual of an affine Grassmann code of any level and compute its minimum distance. Further, we ameliorate the results by Beelen et al. concerning the automorphism group of affine Grassmann codes. Finally, we prove that affine Grassmann codes and their duals have the property that they are linear codes generated by their minimum-weight codewords. This provides a clean analogue of a corresponding result for generalized Reed-Muller codes.
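
    The dual of a linear block code can be computed mechanically as the null space of its generator matrix over GF(2); a sketch using the [7,4] Hamming code as an illustrative example (the affine Grassmann results above are of course obtained structurally, not numerically):

```python
import numpy as np

def gf2_nullspace(G):
    """Basis of the dual code: the null space of a generator matrix over GF(2).
    Row-reduce G to RREF, then read off the standard null-space vectors."""
    G = G.copy() % 2
    rows, cols = G.shape
    pivots, r = [], 0
    for c in range(cols):
        pivot_rows = [i for i in range(r, rows) if G[i, c]]
        if not pivot_rows:
            continue
        G[[r, pivot_rows[0]]] = G[[pivot_rows[0], r]]   # swap pivot into place
        for i in range(rows):
            if i != r and G[i, c]:
                G[i] ^= G[r]                            # eliminate over GF(2)
        pivots.append(c)
        r += 1
    free = [c for c in range(cols) if c not in pivots]
    basis = []
    for fc in free:
        v = np.zeros(cols, dtype=int)
        v[fc] = 1
        for i, pc in enumerate(pivots):
            v[pc] = G[i, fc]        # over GF(2), -G[i, fc] == G[i, fc]
        basis.append(v)
    return np.array(basis)

# Generator matrix of the [7,4] Hamming code in standard form [I | P].
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)
H = gf2_nullspace(G)   # generator matrix of the dual [7,3] code
```

    Every row of H is orthogonal to every row of G over GF(2), which is exactly the duality relation the paper analyses for affine Grassmann codes.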

  11. Bridging Inter-flow and Intra-flow Network Coding for Video Applications

    DEFF Research Database (Denmark)

    Hansen, Jonas; Krigslund, Jeppe; Roetter, Daniel Enrique Lucani

    2013-01-01

    transmission approach to decide how much and when to send redundancy in the network, and a minimalistic feedback mechanism to guarantee delivery of generations of the different flows. Given the delay constraints of video applications, we proposed a simple yet effective coding mechanism, Block Coding On The Fly...

  12. Error bounds on block Gauss-Seidel solutions of coupled multiphysics problems

    KAUST Repository

    Whiteley, J. P.

    2011-05-09

    Mathematical models in many fields often consist of coupled sub-models, each of which describes a different physical process. For many applications, the quantity of interest from these models may be written as a linear functional of the solution to the governing equations. Mature numerical solution techniques for the individual sub-models often exist. Rather than derive a numerical solution technique for the full coupled model, it is therefore natural to investigate whether these techniques may be used by coupling in a block Gauss-Seidel fashion. In this study, we derive two a posteriori bounds for such linear functionals. These bounds may be used on each Gauss-Seidel iteration to estimate the error in the linear functional computed using the single physics solvers, without actually solving the full, coupled problem. We demonstrate the use of the bound first by using a model problem from linear algebra, and then a linear ordinary differential equation example. We then investigate the effectiveness of the bound using a non-linear coupled fluid-temperature problem. One of the bounds derived is very sharp for most linear functionals considered, allowing us to predict very accurately when to terminate our block Gauss-Seidel iteration. © 2011 John Wiley & Sons, Ltd.
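
    The block Gauss-Seidel coupling itself is compact: each sweep calls only the single-physics solvers in turn. A minimal linear sketch (the matrices form an illustrative weakly coupled test problem, not one from the paper):

```python
import numpy as np

def block_gauss_seidel(A11, A12, A21, A22, b1, b2, iters=50):
    """Block Gauss-Seidel for the coupled 2x2 block system
        [A11 A12] [x1]   [b1]
        [A21 A22] [x2] = [b2],
    using only the two single-physics solves per sweep."""
    x1, x2 = np.zeros_like(b1), np.zeros_like(b2)
    for _ in range(iters):
        x1 = np.linalg.solve(A11, b1 - A12 @ x2)   # sub-model 1, frozen x2
        x2 = np.linalg.solve(A22, b2 - A21 @ x1)   # sub-model 2, fresh x1
    return x1, x2

# Weakly coupled, diagonally dominant test problem (so the sweep converges).
A11 = np.array([[4.0, 1.0], [1.0, 3.0]])
A22 = np.array([[5.0, 0.0], [0.0, 4.0]])
A12 = 0.1 * np.ones((2, 2))
A21 = 0.1 * np.ones((2, 2))
b1 = np.array([1.0, 2.0])
b2 = np.array([3.0, 4.0])
x1, x2 = block_gauss_seidel(A11, A12, A21, A22, b1, b2)
```

    The a posteriori bounds derived in the paper estimate, on each such sweep, how far a linear functional of (x1, x2) is from its fully coupled value, so the loop can be terminated as early as the bound allows.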

  13. Error bounds on block Gauss-Seidel solutions of coupled multiphysics problems

    KAUST Repository

    Whiteley, J. P.; Gillow, K.; Tavener, S. J.; Walter, A. C.

    2011-01-01

    Mathematical models in many fields often consist of coupled sub-models, each of which describes a different physical process. For many applications, the quantity of interest from these models may be written as a linear functional of the solution to the governing equations. Mature numerical solution techniques for the individual sub-models often exist. Rather than derive a numerical solution technique for the full coupled model, it is therefore natural to investigate whether these techniques may be used by coupling in a block Gauss-Seidel fashion. In this study, we derive two a posteriori bounds for such linear functionals. These bounds may be used on each Gauss-Seidel iteration to estimate the error in the linear functional computed using the single physics solvers, without actually solving the full, coupled problem. We demonstrate the use of the bound first by using a model problem from linear algebra, and then a linear ordinary differential equation example. We then investigate the effectiveness of the bound using a non-linear coupled fluid-temperature problem. One of the bounds derived is very sharp for most linear functionals considered, allowing us to predict very accurately when to terminate our block Gauss-Seidel iteration. © 2011 John Wiley & Sons, Ltd.

  14. Software Design Document for the AMP Nuclear Fuel Performance Code

    International Nuclear Information System (INIS)

    Philip, Bobby; Clarno, Kevin T.; Cochran, Bill

    2010-01-01

    The purpose of this document is to describe the design of the AMP nuclear fuel performance code. It provides an overview of the decomposition into separable components, an overview of what those components will do, and the strategic basis for the design. The primary components of a computational physics code include a user interface, physics packages, material properties, mathematics solvers, and computational infrastructure. Some capability from established off-the-shelf (OTS) packages will be leveraged in the development of AMP, but the primary physics components will be entirely new. The material properties required by these physics operators include many highly non-linear properties, which will be replicated from FRAPCON and LIFE where applicable, as well as some computationally-intensive operations, such as gap conductance, which depends upon the plenum pressure. Because there is extensive capability in off-the-shelf leadership class computational solvers, AMP will leverage the Trilinos, PETSc, and SUNDIALS packages. The computational infrastructure includes a build system, mesh database, and other building blocks of a computational physics package. The user interface will be developed through a collaborative effort with the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Capability Transfer program element as much as possible and will be discussed in detail in a future document.

  15. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    Science.gov (United States)

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as a header has been the most important challenge of RNC. Moreover, because the Gauss-Jordan elimination method is employed, considerable computational complexity can be imposed on peers when decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method that guarantees no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations, so peers incur very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC with the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay.
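
    The Gauss-Jordan baseline that MATIN improves upon can be sketched as follows (block sizes, generation size and the GF(2) field are illustrative simplifications; practical RNC usually works over larger fields such as GF(2^8)):

```python
import numpy as np

rng = np.random.default_rng(1)

def rnc_encode(blocks, n_coded):
    """Random linear network coding over GF(2): every coded packet is a
    random XOR combination of the source blocks, carried together with
    its coefficients vector (the header overhead discussed above)."""
    n = len(blocks)
    coeffs = rng.integers(0, 2, size=(n_coded, n))
    return coeffs, coeffs @ blocks % 2

def rnc_decode(coeffs, coded):
    """Baseline Gauss-Jordan decoder over GF(2): recovers the source blocks
    once the received coefficient matrix has full rank, else returns None."""
    n = coeffs.shape[1]
    M = np.hstack([coeffs, coded]) % 2
    r = 0
    for c in range(n):
        rows = [i for i in range(r, M.shape[0]) if M[i, c]]
        if not rows:
            return None            # rank-deficient: need more packets
        M[[r, rows[0]]] = M[[rows[0], r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        r += 1
    return M[:n, n:]

blocks = rng.integers(0, 2, size=(4, 8))   # 4 source blocks of 8 bits each
decoded = None
while decoded is None:                      # collect packets until decodable
    coeffs, coded = rnc_encode(blocks, 6)
    decoded = rnc_decode(coeffs, coded)
```

    The per-packet coefficient vectors and the cubic-cost elimination here are exactly the overheads that MATIN's structured coefficient matrices are designed to avoid.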

  16. A zero-dimensional EXTRAP computer code

    International Nuclear Information System (INIS)

    Karlsson, P.

    1982-10-01

    A zero-dimensional computer code has been designed for the EXTRAP experiment to predict the density and the temperature and their dependence upon parameters such as the plasma current and the filling pressure of neutral gas. EXTRAP is a Z-pinch immersed in a vacuum octupole field and can be either linear or toroidal. In this code the density and temperature are assumed to be constant from the axis up to a breaking point, beyond which they decrease linearly in the radial direction out to the plasma radius. All quantities, however, are averaged over the plasma volume, thus giving the code its zero-dimensional character. The particle, momentum and energy one-fluid equations are solved, including the effects of the surrounding neutral gas and oxygen impurities. The code shows that the temperature and density are very sensitive to the shape of the plasma profiles, flatter profiles giving higher temperatures and densities. The temperature, however, is not strongly affected for oxygen concentrations less than 2% and is well above the radiation barrier even for higher concentrations. (Author)

  17. An Automatic Instruction-Level Parallelization of Machine Code

    Directory of Open Access Journals (Sweden)

    MARINKOVIC, V.

    2018-02-01

    Full Text Available Prevailing multicores and novel manycores have made parallelization of embedded software, which is still largely written as sequential code, a great challenge of the modern day. In this paper, automatic code parallelization is considered, focusing on developing a parallelization tool at the binary level as well as on the validation of this approach. A novel instruction-level parallelization algorithm for assembly code is developed, which uses the register names after SSA to find independent blocks of code and then schedules the independent blocks using METIS to achieve good load balance. The sequential consistency is verified and the validation is done by measuring the program execution time on the target architecture. Great speedup, taken as the performance measure in the validation process, and optimal load balancing are achieved for multicore RISC processors with 2 to 16 cores (e.g., MIPS, MicroBlaze, etc.). In particular, for 16 cores, the average speedup is 7.92x, while in some cases it reaches 14x. The approach to automatic parallelization provided by this paper is useful to researchers and developers in the area of parallelization as the basis for further optimizations, as the back-end of a compiler, or as the code parallelization tool for an embedded system.
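
    The core idea, grouping instructions into independent blocks via register def-use chains and then load-balancing the blocks across cores, can be sketched as follows (the instruction format is hypothetical, and a greedy balancer stands in for METIS):

```python
# Hypothetical instruction stream: (name, destination register, source registers).
instructions = [
    ("i1", "r1", []),
    ("i2", "r2", ["r1"]),
    ("i3", "r3", []),
    ("i4", "r4", ["r3"]),
    ("i5", "r5", ["r2"]),
]

def independent_blocks(instrs):
    """Group instructions into independent blocks: two instructions share a
    block iff they are connected through register def-use chains. After SSA
    each register has a single definition, so a union-find suffices."""
    parent = {}
    def find(x):
        while parent.setdefault(x, x) != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    defs = {}
    for name, dst, srcs in instrs:
        defs[dst] = name
        for s in srcs:
            union(name, defs[s])   # depend on the defining instruction
    blocks = {}
    for name, _, _ in instrs:
        blocks.setdefault(find(name), []).append(name)
    return list(blocks.values())

def schedule(blocks, n_cores):
    """Greedy load balancing (a stand-in for METIS in the paper):
    always give the next largest block to the least loaded core."""
    cores = [[] for _ in range(n_cores)]
    for block in sorted(blocks, key=len, reverse=True):
        min(cores, key=lambda c: sum(len(b) for b in c)).append(block)
    return cores
```

    On the sample stream, i1-i2-i5 and i3-i4 form two independent chains that can run on separate cores without violating sequential consistency.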

  18. Verification of combined thermal-hydraulic and heat conduction analysis code FLOWNET/TRUMP

    International Nuclear Information System (INIS)

    Maruyama, Soh; Fujimoto, Nozomu; Sudo, Yukio; Kiso, Yoshihiro; Murakami, Tomoyuki.

    1988-09-01

    This report presents the verification results of the combined thermal-hydraulic and heat conduction analysis code FLOWNET/TRUMP, which has been utilized in the core thermal-hydraulic design of the High Temperature Engineering Test Reactor (HTTR), especially for the analysis of flow distribution among fuel block coolant channels, the determination of thermal boundary conditions for fuel block stress analysis, and the estimation of fuel temperature in the case of a fuel block coolant channel blockage accident. The Japan Atomic Energy Research Institute has been planning to construct the HTTR in order to establish basic technologies for future advanced very high temperature gas-cooled reactors and to serve as an irradiation test reactor for the promotion of innovative high-temperature new frontier technologies. The code was verified through comparison between its analytical results and experimental results from the Helium Engineering Demonstration Loop Multi-channel Test Section (HENDEL T{sub 1-M}) with simulated fuel rods and fuel blocks. (author)

  19. Verification of combined thermal-hydraulic and heat conduction analysis code FLOWNET/TRUMP

    Science.gov (United States)

    Maruyama, Soh; Fujimoto, Nozomu; Kiso, Yoshihiro; Murakami, Tomoyuki; Sudo, Yukio

    1988-09-01

    This report presents the verification results of the combined thermal-hydraulic and heat conduction analysis code FLOWNET/TRUMP, which has been utilized in the core thermal-hydraulic design of the High Temperature Engineering Test Reactor (HTTR), especially for the analysis of flow distribution among fuel block coolant channels, the determination of thermal boundary conditions for fuel block stress analysis, and the estimation of fuel temperature in the case of a fuel block coolant channel blockage accident. The Japan Atomic Energy Research Institute has been planning to construct the HTTR in order to establish basic technologies for future advanced very high temperature gas-cooled reactors and to serve as an irradiation test reactor for the promotion of innovative high-temperature new frontier technologies. The code was verified through comparison between its analytical results and experimental results from the Helium Engineering Demonstration Loop Multi-channel Test Section (HENDEL T(sub 1-M)) with simulated fuel rods and fuel blocks.

  20. Effect of the electron transport through thin slabs on the simulation of linear electron accelerators of use in therapy: A comparative study of various Monte Carlo codes

    Energy Technology Data Exchange (ETDEWEB)

    Vilches, M. [Servicio de Fisica y Proteccion Radiologica, Hospital Regional Universitario 'Virgen de las Nieves', Avda. de las Fuerzas Armadas, 2, E-18014 Granada (Spain)], E-mail: mvilches@ugr.es; Garcia-Pareja, S. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario 'Carlos Haya', Avda. Carlos Haya, s/n, E-29010 Malaga (Spain); Guerrero, R. [Servicio de Radiofisica, Hospital Universitario 'San Cecilio', Avda. Dr. Oloriz, 16, E-18012 Granada (Spain); Anguiano, M.; Lallena, A.M. [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)

    2007-09-21

    When a therapeutic electron linear accelerator is simulated using a Monte Carlo (MC) code, the tuning of the initial spectra and the renormalization of dose (e.g., to maximum axial dose) constitute a common practice. As a result, very similar depth dose curves are obtained for different MC codes. However, if renormalization is turned off, the results obtained with the various codes disagree noticeably. The aim of this work is to investigate in detail the reasons of this disagreement. We have found that the observed differences are due to non-negligible differences in the angular scattering of the electron beam in very thin slabs of dense material (primary foil) and thick slabs of very low density material (air). To gain insight, the effects of the angular scattering models considered in various MC codes on the dose distribution in a water phantom are discussed using very simple geometrical configurations for the LINAC. The MC codes PENELOPE 2003, PENELOPE 2005, GEANT4, GEANT3, EGSnrc and MCNPX have been used.

  1. Effect of the electron transport through thin slabs on the simulation of linear electron accelerators of use in therapy: A comparative study of various Monte Carlo codes

    International Nuclear Information System (INIS)

    Vilches, M.; Garcia-Pareja, S.; Guerrero, R.; Anguiano, M.; Lallena, A.M.

    2007-01-01

    When a therapeutic electron linear accelerator is simulated using a Monte Carlo (MC) code, the tuning of the initial spectra and the renormalization of dose (e.g., to maximum axial dose) constitute a common practice. As a result, very similar depth dose curves are obtained for different MC codes. However, if renormalization is turned off, the results obtained with the various codes disagree noticeably. The aim of this work is to investigate in detail the reasons of this disagreement. We have found that the observed differences are due to non-negligible differences in the angular scattering of the electron beam in very thin slabs of dense material (primary foil) and thick slabs of very low density material (air). To gain insight, the effects of the angular scattering models considered in various MC codes on the dose distribution in a water phantom are discussed using very simple geometrical configurations for the LINAC. The MC codes PENELOPE 2003, PENELOPE 2005, GEANT4, GEANT3, EGSnrc and MCNPX have been used

  2. The block Gauss-Seidel method in sound transmission problems

    OpenAIRE

    Poblet-Puig, Jordi; Rodríguez Ferran, Antonio

    2009-01-01

    Sound transmission through partitions can be modelled as an acoustic fluid-elastic structure interaction problem. The block Gauss-Seidel iterative method is used in order to solve the finite element linear system of equations. The blocks are defined in a natural way, respecting the fluid and structural domains. The convergence criterion (spectral radius of iteration matrix smaller than one) is analysed and interpreted in physical terms by means of simple one-dimensional problems. This anal...

  3. Block preconditioners for linear systems arising from multiscale collocation with compactly supported RBFs

    KAUST Repository

    Farrell, Patricio

    2015-04-30

    © 2015 John Wiley & Sons, Ltd. Symmetric collocation methods with RBFs allow approximation of the solution of a partial differential equation, even if the right-hand side is only known at scattered data points, without needing to generate a grid. However, the benefit of a guaranteed symmetric positive definite block system comes at a high computational cost. This cost can be alleviated somewhat by considering compactly supported RBFs and a multiscale technique. But the condition number and sparsity will still deteriorate with the number of data points. Therefore, we study certain block diagonal and triangular preconditioners. We investigate ideal preconditioners and determine the spectra of the preconditioned matrices before proposing more practical preconditioners based on a restricted additive Schwarz method with coarse grid correction. Numerical results verify the effectiveness of the preconditioners.

  4. An In vitro evaluation of the reliability of QR code denture labeling technique.

    Science.gov (United States)

    Poovannan, Sindhu; Jain, Ashish R; Krishnan, Cakku Jalliah Venkata; Chandran, Chitraa R

    2016-01-01

    Positive identification of the dead after accidents and disasters through labeled dentures plays a key role in the forensic scenario. A number of denture labeling methods are available, and studies evaluating their reliability under drastic conditions are vital. This in vitro study was conducted to evaluate the reliability of QR (Quick Response) Codes labeled at various depths in heat-cured acrylic blocks after acid treatment, heat treatment (burns), and fracture in forensics. The study included 160 specimens of heat-cured acrylic blocks (1.8 cm × 1.8 cm), divided into 4 groups (40 samples per group). QR Codes were incorporated in the samples using clear acrylic sheet, and the samples were assessed for reliability at various depths and after acid, heat, and fracture. Data were analyzed using the Chi-square test and test of proportion. The QR Code inclusion technique was reliable at various depths of acrylic sheet and after acid (sulfuric acid 99%, hydrochloric acid 40%) and heat (up to 370°C) treatment. Results were variable after fracture of the QR Code labeled acrylic blocks. Within the limitations of the study, the results clearly indicated that the QR Code technique was reliable at various depths of acrylic sheet and under acid and heat (370°C). Effectiveness varied with fracture and depended on the level of distortion. This study thus suggests that the QR Code is an effective and simple denture labeling method.

  5. Abstract feature codes: The building blocks of the implicit learning system.

    Science.gov (United States)

    Eberhardt, Katharina; Esser, Sarah; Haider, Hilde

    2017-07-01

    According to the Theory of Event Coding (TEC; Hommel, Müsseler, Aschersleben, & Prinz, 2001), action and perception are represented in a shared format in the cognitive system by means of feature codes. In implicit sequence learning research, it is still common to make a conceptual difference between independent motor and perceptual sequences. This supposedly independent learning takes place in encapsulated modules (Keele, Ivry, Mayr, Hazeltine, & Heuer, 2003) that process information along single dimensions. These dimensions have remained underspecified so far. In particular, it is not clear whether stimulus and response characteristics are processed in separate modules. Here, we suggest that feature dimensions as they are described in the TEC should be viewed as the basic content of the modules of implicit learning. This means that the modules process all stimulus and response information related to certain feature dimensions of the perceptual environment. In 3 experiments, we investigated the nature of the basic units of implicit learning by means of a serial reaction time task. As a test case, we used stimulus location sequence learning. The results show that a stimulus location sequence and a response location sequence cannot be learned without interference (Experiment 2) unless one of the sequences can be coded via an alternative, nonspatial dimension (Experiment 3). These results support the notion that spatial location is one module of the implicit learning system and, consequently, that there are no separate processing units for stimulus versus response locations. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Assessment of Neutron Contamination Originating from the Presence of Wedge and Block in Photon Beam Radiotherapy.

    Science.gov (United States)

    Bahreyni Toossi, M T; Khajetash, B; Ghorbani, M

    2018-03-01

One of the main causes of induction of secondary cancer in radiation therapy is the neutron contamination received by patients during treatment. In the present study, the impact of a wedge and a block on neutron contamination production is investigated. The evaluations are conducted for a 15 MV Siemens Primus linear accelerator. Simulations were performed using the MCNPX Monte Carlo code. 30˚, 45˚ and 60˚ wedges and a cerrobend block with dimensions of 1.5 × 1.5 × 7 cm³ were simulated. The investigations were performed in a 10 × 10 cm² field at a source-to-surface distance of 100 cm, at depths of 0.5, 2, 3 and 4 cm in a water phantom. Neutron dose was calculated using the F4 tally with flux-to-dose conversion factors and the F6 tally. The results showed that the presence of a wedge increases neutron contamination when the wedge factor is considered, with the 45˚ wedge producing the largest amount of neutron contamination. When the block is in the center of the field, it causes less neutron contamination than the open field, owing to absorption of neutrons and photon attenuation. Neutron contamination was found to be lower at greater depths, and the two tallies gave practically equivalent results. Since wedges cause neutron contamination, this should be taken into account in therapeutic protocols in which a wedge is used. In clinical terms, the results of this study show that superficial tissues such as skin receive more neutron contamination than deep tissues.

  7. The Gift Code User Manual. Volume I. Introduction and Input Requirements

    Science.gov (United States)

    1975-07-01

The GIFT code is a FORTRAN computer program. The basic input to the GIFT code is data called ...

  8. Reforming residential electricity tariff in China: Block tariffs pricing approach

    International Nuclear Information System (INIS)

    Sun, Chuanwang; Lin, Boqiang

    2013-01-01

Chinese households, which make up approximately a quarter of the world's households, are facing a residential power tariff reform in which a rising block tariff structure will be implemented; this tariff mechanism is widely used around the world. The basic principle of the structure is to assign a higher price to higher-income consumers, whose power demand has a low price elasticity. To capture the non-linear effects of price and income on elasticities, we set up a translog demand model. The empirical findings indicate that higher-income consumers are less sensitive to price changes than those with lower income. We further put forward three proposals for Chinese residential electricity tariffs. Compared to a flat tariff, a reasonable block tariff structure generates a more efficient allocation of cross-subsidies and better incentives for raising the efficiency of electricity usage and reducing emissions from power generation, while also supporting the living standards of low-income households. - Highlights: • We design a rising block tariff structure for residential electricity in China. • We set up a translog demand model to capture the non-linear effects on elasticities. • Higher-income groups are less sensitive to price changes. • The block tariff structure generates a more efficient allocation of cross-subsidies. • The block tariff structure supports the living standards of low-income households

  9. GEOMECHANICAL OBSERVATIONS DURING THE LARGE BLOCK TEST

    International Nuclear Information System (INIS)

    STEPHEN C. BLAIR AND STEPHANIE A. WOOD

    1998-01-01

    This paper presents an overview of the geomechanical studies conducted at the Large Block Test at Fran Ridge, near Yucca Mountain, Nevada. The 3-dimensional geomechanical response of the rock to heating is being monitored using instrumentation mounted in boreholes and on the surface of the block. Results show that thermal expansion of the block began a few hours after the start of heating, and is closely correlated with the thermal history. Horizontal expansion increases as a linear function of height. Comparison of observed deformations with continuum simulations shows that below the heater plane deformation is smaller than predicted, while above the heater plane, observed deformation is larger than predicted, and is consistent with opening of vertical fractures. Fracture monitors indicate that movement on a large horizontal fracture is associated with hydrothermal behavior

  10. Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.

    Science.gov (United States)

    Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang

    2017-11-01

Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features into binary codes, the original Euclidean distance is approximated via Hamming distance. More recently, it has been advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, it remains an open problem to preserve the manifold structure directly by hashing. In particular, one first needs to build the locally linear embedding in the original feature space and then quantize such an embedding to binary codes. Such two-step coding is problematic and suboptimal. Besides, the off-line learning is extremely time and memory consuming, since it needs to calculate the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locally linear embedding hashing (DLLH), which addresses the above challenges. DLLH directly reconstructs the manifold structure in the Hamming space, learning optimal hash codes that maintain the locally linear relationship of data points. To learn discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter-updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, show the superior performance of the proposed DLLH over state-of-the-art approaches.

  11. SIGMA1-2007, Doppler Broadening ENDF Format Linear-Linear. Interpolated Point Cross Section

    International Nuclear Information System (INIS)

    2007-01-01

1 - Description of problem or function: SIGMA-1 Doppler broadens evaluated cross sections given in the linear-linear interpolation form of the ENDF/B Format to one final temperature. The data are Doppler broadened, thinned, and output in the ENDF/B Format. IAEA0854/15: This version includes the updates up to January 30, 2007. Changes in the ENDF/B-VII Format and procedures, as well as the evaluations themselves, make it impossible for versions of the ENDF/B pre-processing codes earlier than PREPRO 2007 (2007 version) to accurately process current ENDF/B-VII evaluations. The present code can handle all existing ENDF/B-VI evaluations through release 8, which will be the last release of ENDF/B-VI. 2 - Modifications from previous versions: SIGMA-1 vers. 2007-1 (Jan. 2007): checked against all of ENDF/B-VII; increased page size from 60,000 to 360,000 energy points. 3 - Method of solution: The energy grid is selected to ensure that the broadened data are linear-linear interpolable. SIGMA-1 starts from the free-atom Doppler broadening equations and adds the assumptions of linear data within the table and constant data outside the range of the table. If the original data are not at zero Kelvin, the data are broadened by the effective temperature difference to the final temperature. If the data are already at a temperature higher than the final temperature, Doppler broadening is not performed. 4 - Restrictions on the complexity of the problem: The input to SIGMA-1 must be data which vary linearly in energy and cross section between tabulated points; the LINEAR program provides such data. LINEAR uses only the ENDF/B BCD Format tape and copies all sections except File 3 as read. Since File 3 data are in an identical Format for ENDF/B Versions I through VI, the program can be used with all these versions. The present version Doppler broadens only to one final temperature.
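
    The linearization requirement above (every tabulated cross section must be reproducible by linear-linear interpolation, as guaranteed by the LINEAR pre-processor) can be sketched as a simple grid-refinement loop. This is an illustrative stand-in, not PREPRO code; the function name, the 1/v-like test cross section, and the tolerance are assumptions.

```python
import numpy as np

def linearize(f, lo, hi, tol=1e-3):
    """Refine an energy grid until linear-linear interpolation
    reproduces f to within a relative tolerance, the property a
    linearizing pre-processor must guarantee for its output table."""
    xs = [lo, hi]
    i = 0
    while i < len(xs) - 1:
        a, b = xs[i], xs[i + 1]
        mid = 0.5 * (a + b)
        approx = 0.5 * (f(a) + f(b))  # linear-linear midpoint estimate
        if abs(approx - f(mid)) > tol * max(abs(f(mid)), 1e-30):
            xs.insert(i + 1, mid)     # midpoint not reproduced: subdivide
        else:
            i += 1                    # interval acceptable: move on
    return np.array(xs)

# A 1/v-like cross section, typical of thermal-energy behaviour
grid = linearize(lambda e: 1.0 / np.sqrt(e), 1e-2, 1.0)
```

    Note that the grid comes out dense where the curvature is largest (low energy) and sparse elsewhere, which is also why thinning after broadening is worthwhile.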

  12. Simplified Linear Equation Solvers users manual

    Energy Technology Data Exchange (ETDEWEB)

    Gropp, W. [Argonne National Lab., IL (United States); Smith, B. [California Univ., Los Angeles, CA (United States)

    1993-02-01

    The solution of large sparse systems of linear equations is at the heart of many algorithms in scientific computing. The SLES package is a set of easy-to-use yet powerful and extensible routines for solving large sparse linear systems. The design of the package allows new techniques to be used in existing applications without any source code changes in the applications.
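
    The design point above, that new solution techniques can be adopted without source changes in the application, usually rests on the application supplying the matrix only through an apply-operator routine. A minimal sketch of that idea with a hand-rolled conjugate gradient follows; this is not the actual SLES API (whose routine names the record does not give), just an illustration of the matrix-free interface.

```python
import numpy as np

def cg(matvec, b, tol=1e-10, maxiter=1000):
    """Minimal conjugate gradient for SPD systems. The matrix enters
    only through matvec, so storage formats and preconditioners can be
    swapped without touching application code."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# 1-D Laplacian (tridiagonal, SPD) applied matrix-free
n = 50
def lap(v):
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

b = np.ones(n)
x = cg(lap, b)
```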

  13. Language Recognition via Sparse Coding

    Science.gov (United States)

    2016-09-08

explanation is that sparse coding can achieve a near-optimal approximation of a much more complicated nonlinear relationship through local and piecewise-linear ... training examples, where x(i) ∈ R^N is the i-th example in the batch. Optionally, X can be normalized and whitened before sparse coding for better results ... the normalized input vectors are then ZCA-whitened [20]. Empirically, we choose ZCA-whitening over PCA-whitening, and there is no dimensionality reduction
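
    The ZCA whitening step mentioned in the snippet can be sketched in a few lines. The epsilon regularizer, the mixing matrix used to create correlated test data, and the function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA whitening: decorrelate the features to unit variance, then
    rotate back so the result stays close to the original data (unlike
    PCA whitening, which leaves the data in PCA coordinates)."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / Xc.shape[0]
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T  # ZCA transform
    return Xc @ W

rng = np.random.default_rng(0)
M = np.array([[2., 1, 0, 0], [1, 2, 1, 0], [0, 1, 2, 1], [0, 0, 1, 2]])
X = rng.normal(size=(500, 4)) @ M   # strongly correlated features
Xw = zca_whiten(X)                  # empirical covariance is now ~identity
```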

  14. Point source reconstruction principle of linear inverse problems

    International Nuclear Information System (INIS)

    Terazono, Yasushi; Matani, Ayumu; Fujimaki, Norio; Murata, Tsutomu

    2010-01-01

Exact point source reconstruction for underdetermined linear inverse problems with a block-wise structure was studied. In a block-wise problem, the elements of a source vector are partitioned into blocks. Accordingly, a leadfield matrix, which represents the forward observation process, is also partitioned into blocks. A point source is a source having only one nonzero block. An example of such a problem is current distribution estimation in electroencephalography and magnetoencephalography, where a source vector represents a vector field and a point source represents a single current dipole. In this study, the block-wise norm, a block-wise extension of the ℓp-norm, was defined as the family of cost functions of the inverse method. The main result is that a set of three conditions was found to be necessary and sufficient for block-wise norm minimization to ensure exact point source reconstruction for any leadfield matrix that admits such reconstruction. The block-wise norm that satisfies the conditions is the sum of the costs of all the observations of the source blocks, or in other words, the block-wisely extended, leadfield-weighted ℓ1-norm. Additional results are that minimization of such a norm always provides block-wisely sparse solutions and that its solutions form cones in source space
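
    The setting above (one nonzero block exactly reproducing the observation) can be illustrated with a brute-force sketch: try each block's least-squares fit and keep the one that matches the data. This enumeration is a stand-in for the paper's block-norm minimization, and the random leadfield below is an invented example.

```python
import numpy as np

def point_source_fit(A, y, block_size):
    """Recover a point source (a single nonzero block) by testing each
    block's least-squares fit for an exact reproduction of y. Feasible
    only for small problems; shown here purely for illustration."""
    n_blocks = A.shape[1] // block_size
    best = None
    for b in range(n_blocks):
        cols = slice(b * block_size, (b + 1) * block_size)
        xb, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        if np.allclose(A[:, cols] @ xb, y, atol=1e-8):  # exact fit?
            x = np.zeros(A.shape[1])
            x[cols] = xb
            if best is None or np.linalg.norm(x) < np.linalg.norm(best):
                best = x
    return best

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 12))      # leadfield: 8 sensors, 4 blocks of size 3
x_true = np.zeros(12)
x_true[3:6] = [1.0, -2.0, 0.5]    # one nonzero block: a "current dipole"
x_hat = point_source_fit(A, A @ x_true, block_size=3)
```

    For a generic leadfield only the true block fits the observation exactly, so the point source is recovered.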

  15. Development of a detailed core flow analysis code for prismatic fuel reactors

    International Nuclear Information System (INIS)

    Bennett, R.G.

    1990-01-01

The detailed analysis of the core flow distribution in prismatic fuel reactors is of interest for modular high-temperature gas-cooled reactor (MHTGR) design and safety analyses. Such analyses involve the steady-state flow of helium through highly cross-connected flow paths in and around the prismatic fuel elements. Several computer codes have been developed for this purpose; however, since they are proprietary, they are not generally available for independent MHTGR design confirmation. The previously developed codes also do not consider the exchange or diversion of flow between individual bypass gaps in much detail. Such a capability could be important in the analysis of potential fuel block motion, such as occurred in the Fort St. Vrain reactor, or for the analysis of the conditions around a flow blockage or a misloaded fuel block. This work develops a computer code with fairly general-purpose capabilities for modeling the flow in regions of prismatic fuel cores. The code, called BYPASS, solves a finite-difference control-volume formulation of the compressible, steady-state fluid flow in highly cross-connected flow paths typical of the MHTGR

  16. Two-step quantum direct communication protocol using the Einstein- Podolsky-Rosen pair block

    CERN Document Server

    Fu Guo Deng; Xiao Shu Liu; 10.1103/PhysRevA.68.042317

    2003-01-01

A protocol for quantum secure direct communication using blocks of Einstein-Podolsky-Rosen (EPR) pairs is proposed. A set of ordered N EPR pairs is used as a data block for sending a secret message directly. The ordered N EPR set is divided into two particle sequences, a checking sequence and a message-coding sequence. After transmitting the checking sequence, the two parties of the communication check for eavesdropping by measuring a fraction of particles chosen at random, with a random choice between two sets of measuring bases. After ensuring the security of the quantum channel, the sender Alice encodes the secret message directly on the message-coding sequence and sends it to Bob. By combining the checking and message-coding sequences, Bob is able to read out the encoded messages directly. The scheme is secure because an eavesdropper cannot get both sequences simultaneously. We also discuss the issues arising in a noisy channel. (30 refs).

  17. Verification of gyrokinetic microstability codes with an LHD configuration

    Energy Technology Data Exchange (ETDEWEB)

    Mikkelsen, D. R. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Nunami, M. [National Inst. for Fusion Science (Japan); Watanabe, T. -H. [Nagoya Univ. (Japan); Sugama, H. [National Inst. for Fusion Science (Japan); Tanaka, K. [National Inst. for Fusion Science (Japan)

    2014-11-01

    We extend previous benchmarks of the GS2 and GKV-X codes to verify their algorithms for solving the gyrokinetic Vlasov-Poisson equations for plasma microturbulence. Code benchmarks are the most complete way of verifying the correctness of implementations for the solution of mathematical models for complex physical processes such as those studied here. The linear stability calculations reported here are based on the plasma conditions of an ion-ITB plasma in the LHD configuration. The plasma parameters and the magnetic geometry differ from previous benchmarks involving these codes. We find excellent agreement between the independently written pre-processors that calculate the geometrical coefficients used in the gyrokinetic equations. Grid convergence tests are used to establish the resolution and domain size needed to obtain converged linear stability results. The agreement of the frequencies, growth rates and eigenfunctions in the benchmarks reported here provides additional verification that the algorithms used by the GS2 and GKV-X codes are correctly finding the linear eigenvalues and eigenfunctions of the gyrokinetic Vlasov-Poisson equations.

  18. Block preconditioners for linear systems arising from multiscale collocation with compactly supported RBFs

    KAUST Repository

    Farrell, Patricio; Pestana, Jennifer

    2015-01-01

... However, the benefit of a guaranteed symmetric positive definite block system comes at a high computational cost. This cost can be alleviated somewhat by considering compactly supported RBFs and a multiscale technique. But the condition number and sparsity ...

  19. Methodology for bus layout for topological quantum error correcting codes

    Energy Technology Data Exchange (ETDEWEB)

    Wosnitzka, Martin; Pedrocchi, Fabio L.; DiVincenzo, David P. [RWTH Aachen University, JARA Institute for Quantum Information, Aachen (Germany)

    2016-12-15

Most quantum computing architectures can be realized as two-dimensional lattices of qubits that interact with each other. We take transmon qubits and transmission line resonators as promising candidates for qubits and couplers; we use them as basic building elements of a quantum code. We then propose a simple framework to determine the optimal experimental layout to realize quantum codes. We show that this engineering optimization problem can be reduced to the solution of standard binary linear programs. While solving such programs is an NP-hard problem, we propose a way to find scalable optimal architectures that requires solving the linear program for only a restricted number of qubits and couplers. We apply our methods to two celebrated quantum codes, namely the surface code and the Fibonacci code. (orig.)

  20. On the construction of capacity-achieving lattice Gaussian codes

    KAUST Repository

    Alghamdi, Wael Mohammed Abdullah

    2016-08-15

    In this paper, we propose a new approach to proving results regarding channel coding schemes based on construction-A lattices for the Additive White Gaussian Noise (AWGN) channel that yields new characterizations of the code construction parameters, i.e., the primes and dimensions of the codes, as functions of the block-length. The approach we take introduces an averaging argument that explicitly involves the considered parameters. This averaging argument is applied to a generalized Loeliger ensemble [1] to provide a more practical proof of the existence of AWGN-good lattices, and to characterize suitable parameters for the lattice Gaussian coding scheme proposed by Ling and Belfiore [3]. © 2016 IEEE.

  1. On the construction of capacity-achieving lattice Gaussian codes

    KAUST Repository

    Alghamdi, Wael; Abediseid, Walid; Alouini, Mohamed-Slim

    2016-01-01

    In this paper, we propose a new approach to proving results regarding channel coding schemes based on construction-A lattices for the Additive White Gaussian Noise (AWGN) channel that yields new characterizations of the code construction parameters, i.e., the primes and dimensions of the codes, as functions of the block-length. The approach we take introduces an averaging argument that explicitly involves the considered parameters. This averaging argument is applied to a generalized Loeliger ensemble [1] to provide a more practical proof of the existence of AWGN-good lattices, and to characterize suitable parameters for the lattice Gaussian coding scheme proposed by Ling and Belfiore [3]. © 2016 IEEE.

  2. Non-linear M -sequences Generation Method

    Directory of Open Access Journals (Sweden)

    Z. R. Garifullina

    2011-06-01

The article deals with a new method for modeling a pseudorandom number generator based on R-blocks. The gist of the method is the replacement of a multi-digit XOR element by a stochastic adder in a parallel binary linear feedback shift register scheme.
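
    For reference, the conventional scheme that the proposed method modifies, a binary linear feedback shift register whose feedback is the XOR of the tapped bits, can be sketched as follows. The tap set below is an illustrative maximal-length choice for a 4-bit register, not one taken from the article.

```python
def lfsr(taps, state, n):
    """Fibonacci LFSR: output the last bit, shift, and feed back the
    XOR of the tapped bits. With a primitive feedback polynomial the
    output is an m-sequence of period 2**len(state) - 1. (The record's
    method replaces this XOR with a stochastic adder; plain XOR here.)"""
    state = list(state)
    out = []
    for _ in range(n):
        fb = 0
        for t in taps:
            fb ^= state[t]          # multi-input XOR feedback
        out.append(state[-1])       # output bit
        state = [fb] + state[:-1]   # shift right, insert feedback
    return out

# Taps [2, 3] give a maximal-length 4-bit register: period 2**4 - 1 = 15
seq = lfsr(taps=[2, 3], state=[1, 0, 0, 0], n=30)
```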

  3. Efficient convolutional sparse coding

    Science.gov (United States)

    Wohlberg, Brendt

    2017-06-20

Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M³N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
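
    The key step described above, solving the main linear system in the frequency domain, relies on circular convolution diagonalizing under the DFT, so the solve reduces to elementwise operations at O(N log N) cost. A minimal single-filter sketch follows; the ADMM splitting and dictionary update are omitted, and `rho` is an assumed damping parameter standing in for the penalty term.

```python
import numpy as np

def circ_deconv(d, y, rho=0.0):
    """Solve the circular system d * x = y in the Fourier domain:
    convolution becomes pointwise multiplication under the DFT, so the
    (regularized) solve is an elementwise division. This is the trick
    that makes FFT-based steps cheap in convolutional sparse coding."""
    D = np.fft.fft(d, n=len(y))               # zero-pad filter to signal length
    Y = np.fft.fft(y)
    X = np.conj(D) * Y / (np.abs(D) ** 2 + rho)
    return np.real(np.fft.ifft(X))

rng = np.random.default_rng(2)
d = np.array([1.0, 0.5, 0.25])                # a dictionary filter
x = rng.normal(size=32)
y = np.real(np.fft.ifft(np.fft.fft(d, 32) * np.fft.fft(x)))  # d * x (circular)
x_rec = circ_deconv(d, y)                     # exact recovery when rho = 0
```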

  4. Screening synteny blocks in pairwise genome comparisons through integer programming.

    Science.gov (United States)

    Tang, Haibao; Lyons, Eric; Pedersen, Brent; Schnable, James C; Paterson, Andrew H; Freeling, Michael

    2011-04-18

It is difficult to accurately interpret chromosomal correspondences such as true orthology and paralogy due to significant divergence of genomes from a common ancestor. Analyses are particularly problematic among lineages that have repeatedly experienced whole genome duplication (WGD) events. To compare multiple "subgenomes" derived from genome duplications, we need to relax the traditional requirement of "one-to-one" syntenic matchings of genomic regions in order to reflect "one-to-many" or, more generally, "many-to-many" matchings. However, this relaxation may result in the identification of synteny blocks that are derived from ancient shared WGDs that are not of interest. For many downstream analyses, we need to eliminate weak, low-scoring alignments from pairwise genome comparisons. Our goal is to objectively select a subset of synteny blocks whose total score is maximized while respecting the duplication history of the genomes in comparison. We call this "quota-based" screening of synteny blocks, in order to appropriately fill a quota of syntenic relationships within one genome or between two genomes having WGD events. We have formulated the synteny block screening as an optimization problem known as "Binary Integer Programming" (BIP), which is solved using existing linear programming solvers. The computer program QUOTA-ALIGN performs this task by creating a clear objective function that maximizes the compatible set of synteny blocks under given constraints on overlaps and depths (corresponding to the duplication history in the respective genomes). Such a procedure is useful for any pairwise synteny alignment, but is most useful in lineages affected by multiple WGDs, like plant or fish lineages. For example, there should be a 1:2 ploidy relationship between genome A and B if genome B had an independent WGD subsequent to the divergence of the two genomes. We show through simulations and real examples using plant genomes in the rosid superorder that the quota ...
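
    The quota idea above can be illustrated with a toy brute-force version of the binary integer program: choose the score-maximal subset of blocks without covering any region more often than the quota allows. QUOTA-ALIGN hands the real BIP to a linear programming solver; the block data below, and the restriction to a quota on one genome only, are invented for illustration.

```python
from itertools import combinations

def quota_screen(blocks, quota):
    """Brute-force analogue of quota-based screening: maximize total
    alignment score subject to no region on genome A appearing in more
    than `quota` selected blocks. Exponential; an ILP solver replaces
    this enumeration in practice."""
    best, best_score = [], 0.0
    n = len(blocks)
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            cover = {}
            ok = True
            for i in subset:
                region, _, _ = blocks[i]
                cover[region] = cover.get(region, 0) + 1
                if cover[region] > quota:   # quota violated on genome A
                    ok = False
                    break
            if ok:
                score = sum(blocks[i][2] for i in subset)
                if score > best_score:
                    best, best_score = list(subset), score
    return best, best_score

# (region on A, region on B, alignment score); quota 2 models a 1:2 ploidy
blocks = [("a1", "b1", 50), ("a1", "b2", 40), ("a1", "b3", 30), ("a2", "b4", 20)]
chosen, score = quota_screen(blocks, quota=2)  # keeps the two best a1 blocks
```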

  5. Dynamic photoinduced realignment processes in photoresponsive block copolymer films: effects of the chain length and block copolymer architecture.

    Science.gov (United States)

    Sano, Masami; Shan, Feng; Hara, Mitsuo; Nagano, Shusaku; Shinohara, Yuya; Amemiya, Yoshiyuki; Seki, Takahiro

    2015-08-07

    A series of block copolymers composed of an amorphous poly(butyl methacrylate) (PBMA) block connected with an azobenzene (Az)-containing liquid crystalline (PAz) block were synthesized by changing the chain length and polymer architecture. With these block copolymer films, the dynamic realignment process of microphase separated (MPS) cylinder arrays of PBMA in the PAz matrix induced by irradiation with linearly polarized light was studied by UV-visible absorption spectroscopy, and time-resolved grazing incidence small angle X-ray scattering (GI-SAXS) measurements using a synchrotron beam. Unexpectedly, the change in the chain length hardly affected the realignment rate. In contrast, the architecture of the AB-type diblock or the ABA-type triblock essentially altered the realignment feature. The strongly cooperative motion with an induction period before realignment was characteristic only for the diblock copolymer series, and the LPL-induced alignment change immediately started for triblock copolymers and the PAz homopolymer. Additionally, a marked acceleration in the photoinduced dynamic motions was unveiled in comparison with a thermal randomization process.

  6. Improved decoding for a concatenated coding system

    DEFF Research Database (Denmark)

    Paaske, Erik

    1990-01-01

The concatenated coding system recommended by the CCSDS (Consultative Committee for Space Data Systems) uses an outer (255,223) Reed-Solomon (RS) code based on 8-bit symbols, followed by a block interleaver and an inner rate-1/2 convolutional code with memory 6. Viterbi decoding is assumed. Two new decoding procedures based on repeated decoding trials and exchange of information between the two decoders and the deinterleaver are proposed. In the first one, where the improvement is 0.3-0.4 dB, only the RS decoder performs repeated trials. In the second one, where the improvement is 0.5-0.6 dB, both decoders perform repeated decoding trials and decoding information is exchanged between them.
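
    The block interleaver in such a concatenated system can be sketched as a depth × length array written row-wise and read column-wise, so that a burst of channel errors is spread across several RS codewords. The depth and length below are illustrative toy values, not the CCSDS parameters.

```python
def interleave(symbols, depth, length):
    """Block interleaver: write `depth` codewords of `length` symbols
    row-wise, read column-wise. Consecutive channel symbols then come
    from different codewords, so a burst error is shared among them."""
    assert len(symbols) == depth * length
    rows = [symbols[i * length:(i + 1) * length] for i in range(depth)]
    return [rows[r][c] for c in range(length) for r in range(depth)]

def deinterleave(symbols, depth, length):
    """Inverse permutation: read the columns back into rows."""
    cols = [symbols[c * depth:(c + 1) * depth] for c in range(length)]
    return [cols[c][r] for r in range(depth) for c in range(length)]

data = list(range(20))                        # 4 codewords of 5 symbols
assert deinterleave(interleave(data, 4, 5), 4, 5) == data
```

    With depth 4, a burst of 4 consecutive channel symbols corrupts only one symbol in each codeword, which the RS code can then correct.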

  7. A linear programming model for preserving privacy when disclosing patient spatial information for secondary purposes.

    Science.gov (United States)

    Jung, Ho-Won; El Emam, Khaled

    2014-05-29

A linear programming (LP) model was proposed to create de-identified data sets that maximally include spatial detail (e.g., geocodes such as ZIP or postal codes, census blocks, and locations on maps) while complying with the HIPAA Privacy Rule's Expert Determination method, i.e., ensuring that the risk of re-identification is very small. The LP model determines the transition probability from a patient's original location to a new, randomized location. However, it has a limitation in areas with small populations (e.g., a median of 10 people in a ZIP code). We extend the previous LP model to accommodate locations with smaller populations while creating de-identified patient spatial data sets that ensure the risk of re-identification is very small. Our LP model was applied to a data set of 11,740 postal codes in the City of Ottawa, Canada. On this data set we demonstrated the limitations of the previous LP model, in that it produces improbable results, and showed how our extensions to deal with small areas allow the de-identification of the whole data set. The LP model described in this study can be used to de-identify geospatial information for areas with small populations with minimal distortion to postal codes. Our LP model can be extended to include other information, such as age and gender.
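
    Applying a location-transition matrix of the kind the LP model produces can be sketched as a row-stochastic lookup followed by sampling. The matrix below is invented for illustration (it is not an LP solution), and the idea that a small-population area gets heavier mixing is shown only schematically.

```python
import numpy as np

def randomize_locations(locs, P, rng):
    """Report a randomized location for each patient: row i of P gives
    the probability of reporting location j for a patient truly at i."""
    assert np.allclose(P.sum(axis=1), 1.0)   # rows must be distributions
    return [rng.choice(len(P), p=P[i]) for i in locs]

# 3 postal codes; the small-population area (row 2) is mixed more heavily
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4]])
rng = np.random.default_rng(3)
masked = randomize_locations([0, 1, 2, 2, 1], P, rng)
```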

  8. Rupture of steam lines between blocks D and G

    International Nuclear Information System (INIS)

    1999-01-01

An analysis of steam line ruptures between blocks D and G of the Ignalina NPP was performed. A model for the evaluation of thermo-hydrodynamic parameters was developed, and a structural analysis of the shaft building was carried out as well. State-of-the-art codes such as RELAP5, ALGOR, and NEPTUNE were used in these calculations

  9. Quantum Kronecker sum-product low-density parity-check codes with finite rate

    Science.gov (United States)

    Kovalev, Alexey A.; Pryadko, Leonid P.

    2013-07-01

    We introduce an ansatz for quantum codes which gives the hypergraph-product (generalized toric) codes by Tillich and Zémor and generalized bicycle codes by MacKay as limiting cases. The construction allows for both the lower and the upper bounds on the minimum distance; they scale as a square root of the block length. Many thus defined codes have a finite rate and limited-weight stabilizer generators, an analog of classical low-density parity-check (LDPC) codes. Compared to the hypergraph-product codes, hyperbicycle codes generally have a wider range of parameters; in particular, they can have a higher rate while preserving the estimated error threshold.

  10. Development of the vacuum system pressure responce analysis code PRAC

    International Nuclear Information System (INIS)

    Horie, Tomoyoshi; Kawasaki, Kouzou; Noshiroya, Shyoji; Koizumi, Jun-ichi.

    1985-03-01

In this report, we present the method and numerical results of the vacuum system pressure response analysis code. Since a fusion apparatus is made up of many vacuum components, the pressure response at arbitrary points of the system must be analyzed when the vacuum system is designed or evaluated. For that purpose, evaluation by theoretical solutions alone is insufficient, and a numerical procedure such as the finite difference method is useful. In the PRAC code (Pressure Response Analysis Code), the pressure response is obtained by solving differential equations which are derived from the equilibrium relation of throughputs and contain the time derivative of pressure. Because the code considers both molecular and viscous flows, the coefficients of the equations depend on the pressure and the equations become non-linear. This non-linearity is treated as piecewise linear within each time step. Verification of the code was performed on simple problems, and the agreement between numerical and theoretical solutions is good. For comparison with measured results, a complicated model of a gas puffing system was analyzed; the agreement is good enough for practical use. This code will be a useful analytical tool for designing and evaluating vacuum systems such as fusion apparatus. (author)
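
    The piecewise-linear treatment described above (pressure-dependent coefficients frozen within each time step) can be sketched for a single pumped volume. The pumping-speed model below is an invented stand-in for PRAC's molecular/viscous conductance formulas, and all parameter values are illustrative.

```python
import numpy as np

def pump_down(p0, volume, speed, dt, steps):
    """Integrate V dp/dt = -S(p) p by freezing the pressure-dependent
    pumping speed S over each time step, i.e. treating the non-linear
    equation as piecewise linear per step (the PRAC strategy, applied
    here to a toy single-volume model)."""
    p = p0
    hist = [p]
    for _ in range(steps):
        s = speed(p)                       # coefficient frozen over the step
        p = p * np.exp(-s / volume * dt)   # exact solution of the frozen linear ODE
        hist.append(p)
    return np.array(hist)

# Illustrative speed: larger in the (high-pressure) viscous regime
speed = lambda p: 0.1 + 0.4 * p / (p + 1.0)
hist = pump_down(p0=100.0, volume=1.0, speed=speed, dt=0.5, steps=200)
```

    Because the frozen-coefficient step is solved exactly, the scheme stays stable even for large time steps, at the cost of lagging the non-linearity by one step.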

  11. Development of a relativistic Particle In Cell code PARTDYN for linear accelerator beam transport

    Energy Technology Data Exchange (ETDEWEB)

    Phadte, D., E-mail: deepraj@rrcat.gov.in [LPD, Raja Ramanna Centre for Advanced Technology, Indore 452013 (India); Patidar, C.B.; Pal, M.K. [MAASD, Raja Ramanna Centre for Advanced Technology, Indore (India)

    2017-04-11

A relativistic Particle In Cell (PIC) code, PARTDYN, has been developed for the beam dynamics simulation of z-continuous and bunched beams. The code is implemented in MATLAB using its MEX functionality, which allows both ease of development and performance comparable to a compiled language like C. The beam dynamics calculations carried out by the code are compared with analytical results and with other well-developed codes such as PARMELA and BEAMPATH. The effect of a finite number of simulation particles on the emittance growth of intense beams has been studied. Corrections to the RF cavity field expressions were incorporated in the code so that the fields could be calculated correctly. The deviations in the beam dynamics results between PARTDYN and BEAMPATH for a cavity driven in zero-mode are discussed. Beam dynamics studies of the Low Energy Beam Transport (LEBT) line using PARTDYN are presented.

  12. The monomeric, tetrameric, and fibrillar organization of Fib: the dynamic building block of the bacterial linear motor of Spiroplasma melliferum BC3.

    Science.gov (United States)

    Cohen-Krausz, Sara; Cabahug, Pamela C; Trachtenberg, Shlomo

    2011-07-08

    Spiroplasmas belong to the class Mollicutes, representing the minimal, free-living, and self-replicating forms of life. Spiroplasmas are helical wall-less bacteria and the only ones known to swim by means of a linear motor (rather than the near-universal rotary bacterial motor). The linear motor follows the shortest path along the cell's helical membranal tube. The motor is composed of a flat monolayered ribbon of seven parallel fibrils and is believed to function in controlling cell helicity and motility through dynamic, coordinated, differential length changes in the fibrils. The latter cause local perturbations of helical symmetry, which are essential for net directional displacement in environments with a low Reynolds number. The underlying fibrils' core building block is a circular tetramer of the 59-kDa protein Fib. The fibrils' differential length changes are believed to be driven by molecular switching of Fib, leading consequently to axial ratio and length changes in tetrameric rings. Using cryo electron microscopy, diffractometry, single-particle analysis of isolated ribbons, and sequence analyses of Fib, we determined the overall molecular organization of the Fib monomer, tetramer, fibril, and linear motor of Spiroplasma melliferum BC3 that underlies cell geometry and motility. Fib appears to be a bidomained molecule, of which the N-terminal half is apparently a globular phosphorylase. By a combination of reversible rotation and diagonal shift of Fib monomers, the tetramer adopts either a cross-like nonhanded conformation or a ring-like handed conformation. The sense of Fib rotation may determine the handedness of the linear motor and, eventually, of the cell. A further change in the axial ratio of the ring-like tetramers controls fibril lengths and the consequent helical geometry. Analysis of tetramer quadrants from adjacent fibrils clearly demonstrates local differential fibril lengths. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Non-binary unitary error bases and quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E.

    1996-06-01

Error operator bases for systems of any dimension are defined and natural generalizations of the bit-flip/sign-change error basis for qubits are given. These bases allow generalizing the construction of quantum codes based on eigenspaces of Abelian groups. As a consequence, quantum codes can be constructed from linear codes over {ital Z}{sub {ital n}} for any {ital n}. The generalization of the punctured code construction leads to many codes which permit transversal (i.e., fault-tolerant) implementations of certain operations compatible with the error basis.
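The generalized bit-flip (shift) and sign-change (clock) operators referred to here have a standard matrix form. The sketch below is a generic illustration of such an error basis (not code from the report; the dimension n = 3 is an arbitrary choice) and checks the Weyl commutation relation that makes the n² products X^a Z^b an error-operator basis:

```python
import numpy as np

def shift_clock(n):
    """Generalized bit-flip (shift) and sign-change (clock) matrices on C^n."""
    omega = np.exp(2j * np.pi / n)
    X = np.roll(np.eye(n), 1, axis=0)   # shift: |j> -> |j+1 mod n>
    Z = np.diag(omega ** np.arange(n))  # clock: |j> -> omega^j |j>
    return X, Z, omega

n = 3
X, Z, omega = shift_clock(n)
# Weyl commutation relation: Z X = omega X Z
assert np.allclose(Z @ X, omega * (X @ Z))
# Products X^a Z^b are trace-orthogonal, hence form an operator basis
assert abs(np.trace(X @ Z)) < 1e-12
```

For n = 2 this construction reduces to the familiar Pauli X and Z matrices.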

  14. Using block diagonalization to determine dissociating autoionizing states: Application to N2H, and the outlook for SH

    Directory of Open Access Journals (Sweden)

    Kashinski D.O.

    2015-01-01

Full Text Available We describe our implementation of the block diagonalization method for calculating the potential surfaces necessary to treat dissociative recombination (DR) of electrons with N2H+. Using the methodology we have developed over the past few years, we performed multi-reference configuration interaction calculations for N2H+ and N2H with a large active space using the GAMESS electronic structure code. We treated both linear and bent geometries of the molecules, with N2 fixed at its equilibrium separation. Because of the strong Rydberg-valence coupling in N2H, it is essential to isolate the appropriate dissociating, autoionizing states. Our procedure requires only modest additional effort beyond the standard methodology. The results indicate that the crossing between the dissociating neutral curve and the initial ion potential is not favorably located for DR, even if the molecule bends. The present calculations thereby confirm our earlier results for linear N2H and reinforce the conclusion that the direct mechanism for DR is likely to be inefficient. We also describe interesting features of our preliminary calculations on SH.

  15. A block variant of the GMRES method on massively parallel processors

    Energy Technology Data Exchange (ETDEWEB)

    Li, Guangye [Cray Research, Inc., Eagan, MN (United States)

    1996-12-31

This paper presents a block variant of the GMRES method for solving general unsymmetric linear systems. This algorithm generates a transformed Hessenberg matrix by solely using block matrix operations and block data communications. It is shown that this algorithm with block size s, denoted by BVGMRES(s,m), is theoretically equivalent to the GMRES(s*m) method. The numerical results show that this algorithm can be more efficient than the standard GMRES method on a cache-based single-CPU computer with optimized BLAS kernels. Furthermore, the gain in efficiency is more significant on MPPs due to both efficient block operations and efficient block data communications. Our numerical results also show that in comparison to the standard GMRES method, the more PEs that are used on an MPP, the more efficient the BVGMRES(s,m) algorithm is.
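The kernel of any block GMRES variant is a block Arnoldi process that builds the block Hessenberg matrix purely from block matrix products. The sketch below is a generic textbook version in NumPy (not the paper's BVGMRES implementation) and verifies the block Arnoldi relation A V_m = V_{m+1} H for block size s = 2:

```python
import numpy as np

def block_arnoldi(A, R0, m):
    """Block Arnoldi: orthonormal basis of the block Krylov space
    span{R0, A R0, ..., A^{m-1} R0} plus the block Hessenberg matrix H."""
    n, s = R0.shape
    V = [np.linalg.qr(R0)[0]]                  # first orthonormal block
    H = np.zeros((s * (m + 1), s * m))
    for j in range(m):
        W = A @ V[j]
        for i in range(j + 1):                 # block modified Gram-Schmidt
            Hij = V[i].T @ W
            H[i*s:(i+1)*s, j*s:(j+1)*s] = Hij
            W = W - V[i] @ Hij
        Q, Rj = np.linalg.qr(W)
        H[(j+1)*s:(j+2)*s, j*s:(j+1)*s] = Rj
        V.append(Q)
    return np.hstack(V), H

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20))
R0 = rng.standard_normal((20, 2))              # block of s = 2 residuals
V, H = block_arnoldi(A, R0, m=4)
# Arnoldi relation: A V_m = V_{m+1} H (the basis of the block solver)
assert np.allclose(A @ V[:, :8], V @ H, atol=1e-8)
```

The least-squares solve over this small block Hessenberg matrix is what replaces the scalar GMRES recurrence.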

  16. 2D arc-PIC code description: methods and documentation

    CERN Document Server

    Timko, Helga

    2011-01-01

Vacuum discharges are one of the main limiting factors for future linear collider designs such as that of the Compact Linear Collider. To optimize machine efficiency, maintaining the highest feasible accelerating gradient below a certain breakdown rate is desirable; understanding breakdowns can therefore help us to achieve this goal. As part of ongoing theoretical research on vacuum discharges at the Helsinki Institute of Physics, the build-up of plasma can be investigated through the particle-in-cell method. For this purpose, we have developed the 2D Arc-PIC code introduced here. We present an exhaustive description of the 2D Arc-PIC code in two parts. In the first part, we introduce the particle-in-cell method in general and detail the techniques used in the code. In the second part, we provide a documentation and derivation of the key equations occurring in the code. The code is original work of the author, written in 2010, and is therefore under the copyright of the author. The development of the code h...
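For readers unfamiliar with the particle-in-cell method itself, a single step has four canonical stages: deposit charge onto a grid, solve the field equation, gather the field back to the particles, and push them. The sketch below shows these stages in a minimal 1D electrostatic setting with periodic boundaries; it illustrates the general method only, not the 2D Arc-PIC code, and all parameters are arbitrary:

```python
import numpy as np

def pic_step(x, v, q, m, grid_n, L, dt):
    """One 1D electrostatic PIC step: cloud-in-cell charge deposition,
    FFT Poisson solve, field gather, leapfrog push."""
    dx = L / grid_n
    xi = x / dx
    i0 = np.floor(xi).astype(int) % grid_n
    w1 = xi - np.floor(xi)                        # weight to the right node
    rho = np.zeros(grid_n)                        # charge deposition
    np.add.at(rho, i0, q * (1 - w1) / dx)
    np.add.at(rho, (i0 + 1) % grid_n, q * w1 / dx)
    # field solve: phi'' = -rho (epsilon_0 = 1), E = -phi'
    k = 2 * np.pi * np.fft.fftfreq(grid_n, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:]**2              # k = 0 mode dropped: this
    E = np.real(np.fft.ifft(-1j * k * phi_k))     # implies a neutralizing background
    # gather field to particles and advance (leapfrog)
    Ep = E[i0] * (1 - w1) + E[(i0 + 1) % grid_n] * w1
    v = v + (q / m) * Ep * dt
    x = (x + v * dt) % L
    return x, v, rho

# toy run: two cold counter-streaming particle groups
rng = np.random.default_rng(0)
L, N, G = 1.0, 1000, 64
x = rng.uniform(0, L, N)
v = np.where(np.arange(N) < N // 2, 0.2, -0.2)
q = np.full(N, 1.0 / N)
m = np.full(N, 1.0 / N)
x, v, rho = pic_step(x, v, q, m, G, L, dt=1e-3)
```

The cloud-in-cell weights make deposition exactly charge-conserving, which is one of the invariants a production PIC code must maintain.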

  17. Block Preconditioning to Enable Physics-Compatible Implicit Multifluid Plasma Simulations

    Science.gov (United States)

    Phillips, Edward; Shadid, John; Cyr, Eric; Miller, Sean

    2017-10-01

    Multifluid plasma simulations involve large systems of partial differential equations in which many time-scales ranging over many orders of magnitude arise. Since the fastest of these time-scales may set a restrictively small time-step limit for explicit methods, the use of implicit or implicit-explicit time integrators can be more tractable for obtaining dynamics at time-scales of interest. Furthermore, to enforce properties such as charge conservation and divergence-free magnetic field, mixed discretizations using volume, nodal, edge-based, and face-based degrees of freedom are often employed in some form. Together with the presence of stiff modes due to integrating over fast time-scales, the mixed discretization makes the required linear solves for implicit methods particularly difficult for black box and monolithic solvers. This work presents a block preconditioning strategy for multifluid plasma systems that segregates the linear system based on discretization type and approximates off-diagonal coupling in block diagonal Schur complement operators. By employing multilevel methods for the block diagonal subsolves, this strategy yields algorithmic and parallel scalability which we demonstrate on a range of problems.
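The segregation idea can be illustrated on a generic 2x2 block system: with an exact Schur complement, the block upper-triangular preconditioner clusters every eigenvalue of the preconditioned operator at one, which is why a good Schur-complement approximation pays off. The NumPy sketch below uses arbitrary shifted random matrices, not the paper's multifluid operators:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
# generic 2x2 block system K = [[A, B], [C, D]] (think: two segregated
# discretization types); the diagonal shifts keep A and D well conditioned
A = rng.standard_normal((n, n)) + 5 * np.eye(n)
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
D = rng.standard_normal((n, n)) + 5 * np.eye(n)
K = np.block([[A, B], [C, D]])

# block upper-triangular preconditioner with the exact Schur complement
S = D - C @ np.linalg.solve(A, B)
P = np.block([[A, B], [np.zeros((n, n)), S]])

# P^{-1} K is similar to identity plus a nilpotent part, so every
# eigenvalue equals 1 and a Krylov method converges in <= 2 iterations
eigs = np.linalg.eigvals(np.linalg.solve(P, K))
```

In practice A and S are never inverted exactly; the strategy described in the abstract replaces these exact subsolves with multilevel approximations while keeping the same block structure.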

  18. Best linear decoding of random mask images

    International Nuclear Information System (INIS)

    Woods, J.W.; Ekstrom, M.P.; Palmieri, T.M.; Twogood, R.E.

    1975-01-01

    In 1968 Dicke proposed coded imaging of x and γ rays via random pinholes. Since then, many authors have agreed with him that this technique can offer significant image improvement. A best linear decoding of the coded image is presented, and its superiority over the conventional matched filter decoding is shown. Experimental results in the visible light region are presented. (U.S.)
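The contrast between matched-filter correlation decoding and a best linear (Wiener/MMSE-style) decoder can be sketched in one dimension: both are linear in the coded image, but the Wiener filter also uses the signal and noise spectra. Everything below (mask, object, noise level) is an arbitrary toy setup, not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 128
mask = rng.integers(0, 2, n).astype(float)     # random pinhole pattern
obj = np.zeros(n); obj[40:60] = 1.0            # simple test object
Hf = np.fft.fft(mask)

# coded image: circular convolution of object with mask, plus noise
noisy = np.real(np.fft.ifft(np.fft.fft(obj) * Hf)) + 0.5 * rng.standard_normal(n)

# matched-filter decoding: correlate with the mask, then rescale
mf = np.real(np.fft.ifft(np.fft.fft(noisy) * np.conj(Hf))) / np.sum(mask**2)

# best linear (Wiener) decoding: regularized inverse using the spectra
snr = np.abs(np.fft.fft(obj))**2 / (0.25 * n)  # per-frequency signal/noise power
W = np.conj(Hf) / (np.abs(Hf)**2 + 1.0 / np.maximum(snr, 1e-12))
wiener = np.real(np.fft.ifft(np.fft.fft(noisy) * W))

mse_mf = np.mean((mf - obj)**2)
mse_w = np.mean((wiener - obj)**2)
```

The correlation decoder is badly biased by the mask's large DC component, while the Wiener filter suppresses frequencies where the mask or the signal carries little power; this is the kind of gap the abstract's "superiority over the conventional matched filter decoding" refers to.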

  19. Displacement measurement system for linear array detector

    International Nuclear Information System (INIS)

    Zhang Pengchong; Chen Ziyu; Shen Ji

    2011-01-01

This paper presents a linear displacement measurement system based on an encoder. The system includes a displacement encoder, optical lens and readout circuit. The displacement readout unit includes a linear CCD and its drive circuit, two amplifier circuits, a second-order Butterworth low-pass filter and a binarization circuit. The coding scheme is introduced, signal waveforms from the various parts of the experiment are given, and finally linearity test results are presented. The experimental results are satisfactory. (authors)

  20. The 1992 ENDF Pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1992-01-01

This document summarizes the 1992 version of the ENDF pre-processing codes which are required for processing evaluated nuclear data coded in the format ENDF-4, ENDF-5, or ENDF-6. Included are the codes CONVERT, MERGER, LINEAR, RECENT, SIGMA1, LEGEND, FIXUP, GROUPIE, DICTION, MIXER, VIRGIN, COMPLOT, EVALPLOT, RELABEL. Some of the functions of these codes are: to calculate cross-sections from resonance parameters; to calculate angular distributions, group averages, mixtures of cross-sections, etc.; and to produce graphical plots and data comparisons. The codes are designed to operate on virtually any type of computer, including PCs. They are available from the IAEA Nuclear Data Section, free of charge upon request, on magnetic tape or a set of HD diskettes. (author)

  1. Description and applicability of the BEFEM-CODE

    Energy Technology Data Exchange (ETDEWEB)

    Groth, T.

    1980-05-15

    The BEFEM-CODE, developed for rock mechanics problems in hard rock with joints, is a simple FEM code constructed using triangular and quadrilateral elements. As an option, a joint element of the Goodman type may be used. The Cook-Pian type quadrilateral stress hybrid element was introduced into the version of the code used for the Naesliden project, to replace the constant stress quadrilateral elements. This hybrid element, derived with assumed stress distributions, simplifies the excavation process for use in non-linear models. The shear behavior of the Goodman 1976 joint element has been replaced by Goodman's 1968 formulation. This element makes it possible to take dilation into account, but it was not considered necessary to use dilation to simulate proper joint behavior in the Naesliden project. The code uses Barton's shear strength criteria. Excessive nodal forces due to failure and non-linearities in the joint elements are redistributed with stress transfer iterations. Convergence can be speeded up by dividing each excavation sequence into several loadsteps in which the stiffness matrix is recalculated.

  2. Seed conformal blocks in 4D CFT

    Energy Technology Data Exchange (ETDEWEB)

    Echeverri, Alejandro Castedo; Elkhidir, Emtinan; Karateev, Denis [SISSA and INFN,Via Bonomea 265, I-34136 Trieste (Italy); Serone, Marco [SISSA and INFN,Via Bonomea 265, I-34136 Trieste (Italy); ICTP,Strada Costiera 11, I-34151 Trieste (Italy)

    2016-02-29

We compute in closed analytical form the minimal set of “seed” conformal blocks associated to the exchange of generic mixed symmetry spinor/tensor operators in an arbitrary representation (ℓ,ℓ̄) of the Lorentz group in four dimensional conformal field theories. These blocks arise from 4-point functions involving two scalars, one (0,|ℓ−ℓ̄|) and one (|ℓ−ℓ̄|,0) spinors or tensors. We directly solve the set of Casimir equations, that can elegantly be written in a compact form for any (ℓ,ℓ̄), by using an educated ansatz and reducing the problem to an algebraic linear system. Various details on the form of the ansatz have been deduced by using the so called shadow formalism. The complexity of the conformal blocks depends on the value of p=|ℓ−ℓ̄| and grows with p, in analogy to what happens to scalar conformal blocks in d even space-time dimensions as d increases. These results open the way to bootstrap 4-point functions involving arbitrary spinor/tensor operators in four dimensional conformal field theories.

  3. Op-Ug TD Optimizer Tool Based on Matlab Code to Find Transition Depth From Open Pit to Block Caving / Narzędzie Optymalizacyjne Oparte O Kod Matlab Wykorzystane Do Określania Głębokości Przejściowej Od Wydobycia Odkrywkowego Do Wybierania Komorami

    Science.gov (United States)

    Bakhtavar, E.

    2015-09-01

In this study, transition from open pit to block caving has been considered as a challenging problem. For this purpose, the linear integer programming code of Matlab was initially developed on the basis of the binary integer model proposed by Bakhtavar et al (2012). Then a program based on a graphical user interface (GUI) was set up and named "Op-Ug TD Optimizer". It is a beneficial tool for simple application of the model in all situations where open pit is considered together with the block caving method for mining an ore deposit. Finally, Op-Ug TD Optimizer has been explained step by step through solving the transition from open pit to block caving problem of a case ore deposit. (Translated Polish abstract) This paper considers the complex problem of the transition from open pit mining to block caving. For this purpose, a linear programming code was developed in the Matlab environment based on the binary integer model proposed by Bakhtavar (2012). A program with a graphical user interface, named the Op-Ug TD Optimizer, was then developed. It is a highly valuable tool enabling the application of the model under all conditions in situations where open pit mining and block caving are both considered for exploiting an ore deposit. The final part of the paper gives step-by-step instructions for using the Optimizer program on the presented example of a transition from open pit mining to block caving.
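The underlying decision in such transition-depth models is binary per level: mine it by open pit above the transition depth, or by caving below. For intuition only, the brute-force sketch below enumerates every transition depth; the per-level profits are made up for illustration and are not data or code from the paper:

```python
def best_transition(op, bc):
    """Brute-force the binary decision behind Bakhtavar-style models:
    levels 0..d-1 by open pit, levels d..n-1 by block caving; choose the
    transition depth d that maximizes total discounted profit."""
    n = len(op)
    profits = [sum(op[:d]) + sum(bc[d:]) for d in range(n + 1)]
    best_d = max(range(n + 1), key=lambda d: profits[d])
    return best_d, profits[best_d]

# hypothetical per-level profits: open pit is cheap near the surface,
# block caving wins at depth
op = [9, 7, 4, 1, -3, -8]
bc = [2, 3, 3, 4, 5, 6]
d, p = best_transition(op, bc)
# here d = 3: open pit for the three shallowest levels, caving below
```

A real model adds constraints (crown pillar, sequencing, capacities), which is why the paper formulates it as a binary integer program rather than a simple enumeration.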

  4. A Very Fast and Angular Momentum Conserving Tree Code

    International Nuclear Information System (INIS)

    Marcello, Dominic C.

    2017-01-01

    There are many methods used to compute the classical gravitational field in astrophysical simulation codes. With the exception of the typically impractical method of direct computation, none ensure conservation of angular momentum to machine precision. Under uniform time-stepping, the Cartesian fast multipole method of Dehnen (also known as the very fast tree code) conserves linear momentum to machine precision. We show that it is possible to modify this method in a way that conserves both angular and linear momenta.

  5. A Very Fast and Angular Momentum Conserving Tree Code

    Energy Technology Data Exchange (ETDEWEB)

    Marcello, Dominic C., E-mail: dmarce504@gmail.com [Department of Physics and Astronomy, and Center for Computation and Technology Louisiana State University, Baton Rouge, LA 70803 (United States)

    2017-09-01

    There are many methods used to compute the classical gravitational field in astrophysical simulation codes. With the exception of the typically impractical method of direct computation, none ensure conservation of angular momentum to machine precision. Under uniform time-stepping, the Cartesian fast multipole method of Dehnen (also known as the very fast tree code) conserves linear momentum to machine precision. We show that it is possible to modify this method in a way that conserves both angular and linear momenta.
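The conservation property at stake is easy to verify for direct summation, where forces come in equal and opposite central pairs; tree codes lose exact angular momentum precisely because multipole-approximated forces are no longer pairwise central. A small NumPy check with arbitrary particles (G = 1, leapfrog time-stepping):

```python
import numpy as np

def accelerations(pos, mass):
    """Direct-sum Newtonian gravity: equal and opposite central forces."""
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            f = d / np.linalg.norm(d)**3       # G = 1
            acc[i] += mass[j] * f
            acc[j] -= mass[i] * f
    return acc

rng = np.random.default_rng(3)
pos = rng.standard_normal((5, 3))
vel = rng.standard_normal((5, 3))
mass = rng.uniform(1, 2, 5)
P0 = (mass[:, None] * vel).sum(axis=0)                 # total linear momentum
L0 = (mass[:, None] * np.cross(pos, vel)).sum(axis=0)  # total angular momentum
dt = 1e-3
for _ in range(100):                                   # leapfrog (kick-drift-kick)
    vel += 0.5 * dt * accelerations(pos, mass)
    pos += dt * vel
    vel += 0.5 * dt * accelerations(pos, mass)
P1 = (mass[:, None] * vel).sum(axis=0)
L1 = (mass[:, None] * np.cross(pos, vel)).sum(axis=0)
```

Both momenta stay at their initial values to rounding error here; the article's contribution is recovering the same property at fast-multipole cost.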

  6. An overset grid approach to linear wave-structure interaction

    DEFF Research Database (Denmark)

    Read, Robert; Bingham, Harry B.

    2012-01-01

A finite-difference based approach to wave-structure interaction is reported that employs the overset approach to grid generation. A two-dimensional code that utilizes the Overture C++ library has been developed to solve the linear radiation problem for a floating body of arbitrary form.

  7. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    Directory of Open Access Journals (Sweden)

    Behrang Barekatain

Full Text Available In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet. This is largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as a header has been the most important challenge of RNC. Moreover, because the Gauss-Jordan elimination method is employed, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method so that there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n entries into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations. As a result, peers incur very low computational complexity, and MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC based on the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay.
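The Gauss-Jordan baseline that MATIN improves upon can be sketched compactly. The toy below does classic random linear network coding over the prime field GF(257), chosen so that modular inverses are a one-liner; practical systems usually work in GF(2^8). A couple of spare packets guard against the occasional linearly dependent coefficient vector:

```python
import random

P = 257  # prime, so every nonzero coefficient is invertible mod P

def rnc_encode(blocks, k):
    """Mix k source blocks with random coefficients; a packet carries
    the coefficients vector (the overhead MATIN shrinks) plus the payload."""
    coeffs = [random.randrange(P) for _ in range(k)]
    coded = [sum(c * b for c, b in zip(coeffs, col)) % P
             for col in zip(*blocks)]
    return coeffs, coded

def rnc_decode(packets, k):
    """Gauss-Jordan elimination mod P on the augmented matrix [coeffs | payload]."""
    M = [list(c) + list(d) for c, d in packets]
    for col in range(k):
        piv = next(r for r in range(col, len(M)) if M[r][col] % P)
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], P - 2, P)       # Fermat inverse
        M[col] = [x * inv % P for x in M[col]]
        for r in range(len(M)):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(a - f * b) % P for a, b in zip(M[r], M[col])]
    return [row[k:] for row in M[:k]]

random.seed(7)
blocks = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]     # three source blocks
packets = [rnc_encode(blocks, 3) for _ in range(5)]
recovered = rnc_decode(packets, 3)
```

The per-packet coefficients vector and the cubic-cost elimination shown here are exactly the two overheads the abstract targets.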

  8. Linear-time general decoding algorithm for the surface code

    Science.gov (United States)

    Darmawan, Andrew S.; Poulin, David

    2018-05-01

    A quantum error correcting protocol can be substantially improved by taking into account features of the physical noise process. We present an efficient decoder for the surface code which can account for general noise features, including coherences and correlations. We demonstrate that the decoder significantly outperforms the conventional matching algorithm on a variety of noise models, including non-Pauli noise and spatially correlated noise. The algorithm is based on an approximate calculation of the logical channel using a tensor-network description of the noisy state.

  9. A solution for automatic parallelization of sequential assembly code

    Directory of Open Access Journals (Sweden)

    Kovačević Đorđe

    2013-01-01

Full Text Available Since modern multicore processors can execute existing sequential programs only on a single core, there is a strong need for automatic parallelization of program code. Relying on existing algorithms, this paper describes a new software tool for parallelization of sequential assembly code. The main goal of this paper is to develop a parallelizer which reads sequential assembler code and at the output provides parallelized code for a MIPS processor with multiple cores. The idea is the following: the parser translates the assembler input file to program objects suitable for further processing. After that, static single assignment is done. Based on the data-flow graph, the parallelization algorithm separates instructions onto different cores. Once the sequential code is parallelized by the parallelization algorithm, registers are allocated with the linear allocation algorithm, and the end result is distributed assembler code on each of the cores. In the paper we evaluate the speedup on the matrix multiplication example, which was processed by the assembly-code parallelizer. The result is almost linear speedup of code execution, which increases with the number of cores. The speedup on two cores is 1.99, while on 16 cores it is 13.88.
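The core step, separating instructions onto cores based on the data-flow graph, can be sketched with a greedy list scheduler. The toy below assumes unit-latency instructions and is an illustration of the general technique, not the paper's tool; it reproduces the near-linear speedup behaviour on independent dependency chains:

```python
from collections import defaultdict

def schedule(instrs, deps, n_cores):
    """Greedy list scheduling: place each instruction on the earliest free
    core once all of its data-flow predecessors have completed."""
    done_at = {}
    core_free = [0] * n_cores
    for i, _ in enumerate(instrs):          # instrs assumed topologically ordered
        ready = max((done_at[p] for p in deps[i]), default=0)
        c = min(range(n_cores), key=lambda k: core_free[k])
        start = max(ready, core_free[c])
        done_at[i] = start + 1              # unit latency per instruction
        core_free[c] = done_at[i]
    return max(done_at.values())

# toy program: 8 independent two-instruction dependency chains
deps = defaultdict(list, {i + 8: [i] for i in range(8)})
instrs = ["op"] * 16
t1 = schedule(instrs, deps, 1)              # 16 cycles on one core
t4 = schedule(instrs, deps, 4)              # 4 cycles on four cores
```

Real parallelizers must also account for inter-core communication and register pressure, which is why measured speedups (13.88 on 16 cores here) fall short of the ideal.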

  10. Multi-rate control over AWGN channels via analog joint source-channel coding

    KAUST Repository

    Khina, Anatoly; Pettersson, Gustav M.; Kostina, Victoria; Hassibi, Babak

    2017-01-01

    We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.
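A 1:2 Shannon-Kotel'nikov map of the Archimedean-spiral type can be sketched as follows. The parameterization, the arm-spacing value, and the grid-search decoder are generic illustrations, not the authors' exact construction:

```python
import numpy as np

def spiral_encode(s, delta):
    """Map a scalar s in [-1, 1] to a point on a double Archimedean spiral
    (one arm per sign of s): a 1:2 bandwidth-expanding analog map."""
    theta = np.pi * abs(s) / delta
    arm = 1.0 if s >= 0 else -1.0
    return (delta / np.pi) * theta * np.array([np.cos(theta),
                                               arm * np.sin(theta)])

def spiral_decode(y, delta, grid=4001):
    """Approximate ML decoding: nearest point on the curve, by grid search."""
    cand = np.linspace(-1.0, 1.0, grid)
    pts = np.array([spiral_encode(s, delta) for s in cand])
    return cand[np.argmin(np.sum((pts - y) ** 2, axis=1))]

delta = 0.2              # arm spacing parameter (arbitrary here)
s = 0.613
y = spiral_encode(s, delta)          # received point (noiseless for the check)
s_hat = spiral_decode(y, delta)
```

Stretching the source interval onto a long curve is what buys noise robustness: moderate channel noise moves the received point only slightly along the curve, while the spiral arms stay roughly delta apart, so decoding rarely jumps to the wrong arm.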

  11. Multi-rate control over AWGN channels via analog joint source-channel coding

    KAUST Repository

    Khina, Anatoly

    2017-01-05

We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.

  12. A new block cipher based on chaotic map and group theory

    International Nuclear Information System (INIS)

    Yang Huaqian; Liao Xiaofeng; Wong Kwokwo; Zhang Wei; Wei Pengcheng

    2009-01-01

Based on the study of some existing chaotic encryption algorithms, a new block cipher is proposed. In the proposed cipher, two sequences of decimal numbers individually generated by two chaotic piecewise linear maps are used to determine the noise vectors by comparing the elements of the two sequences. Then a sequence of decimal numbers is used to define a bijection map. The modular multiplication operation in the group Z*_(2^8+1) and permutations are alternately applied on plaintext with a block length of multiples of 64 bits to produce ciphertext blocks of the same length. Analyses show that the proposed block cipher does not suffer from the flaws of pure chaotic cryptosystems.
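The overall structure, chaotic-keystream modular multiplication in the group Z*_257 alternated with a permutation layer, can be sketched as below. The skew-tent map parameter, the round count, and the one-step rotation are arbitrary simplifications for illustration; they are not the cipher's actual noise vectors or bijection map:

```python
def pwl_map(x, p=0.37):
    """Piecewise linear chaotic (skew tent) map on (0, 1)."""
    return x / p if x < p else (1 - x) / (1 - p)

def keystream(x0, n):
    """Units of Z*_257 (values in 1..255 here) driven by the chaotic map."""
    ks, x = [], x0
    for _ in range(n):
        x = pwl_map(x)
        ks.append(1 + int(x * 255))
    return ks

def encrypt_block(block, x0, rounds=4):
    """Toy substitution-permutation sketch: multiply by keystream units
    mod 257, then permute (here: rotate). Plaintext values must be 1..256."""
    n = len(block)
    ks = keystream(x0, rounds * n)
    state = list(block)
    for r in range(rounds):
        state = [(s * k) % 257 for s, k in zip(state, ks[r*n:(r+1)*n])]
        state = state[1:] + state[:1]      # permutation layer
    return state

def decrypt_block(cipher, x0, rounds=4):
    n = len(cipher)
    ks = keystream(x0, rounds * n)
    state = list(cipher)
    for r in reversed(range(rounds)):
        state = state[-1:] + state[:-1]    # undo the rotation
        state = [(s * pow(k, 255, 257)) % 257   # k^255 = k^-1 mod 257
                 for s, k in zip(state, ks[r*n:(r+1)*n])]
    return state

ct = encrypt_block([12, 34, 56, 78], 0.6)
```

Because 257 is prime, every keystream value is invertible, so the receiver simply regenerates the same chaotic keystream and applies the inverses in reverse order.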

  13. Recovery from distal ulnar motor conduction block injury: serial EMG studies.

    Science.gov (United States)

    Montoya, Liliana; Felice, Kevin J

    2002-07-01

    Acute conduction block injuries often result from nerve compression or trauma. The temporal pattern of clinical, electrophysiologic, and histopathologic changes following these injuries has been extensively studied in experimental animal models but not in humans. Our recent evaluation of a young man with an injury to the deep motor branch of the ulnar nerve following nerve compression from weightlifting exercises provided the opportunity to follow the course and recovery of a severe conduction block injury with sequential nerve conduction studies. The conduction block slowly and completely resolved, as did the clinical deficit, over a 14-week period. The reduction in conduction block occurred at a linear rate of -6.1% per week. Copyright 2002 Wiley Periodicals, Inc.

  14. Evaluation of ETOG-3Q, ETOG-3, FLANGE-II, XLACS, NJOY and LINEAR/RECENT/GROUPIE computer codes concerning to the resonance contribution and background cross sections

    International Nuclear Information System (INIS)

    Anaf, J.; Chalhoub, E.S.

    1988-12-01

    The NJOY and LINEAR/RECENT/GROUPIE calculational procedures for the resolved and unresolved resonance contributions and background cross sections are evaluated. Elastic scattering, fission and capture multigroup cross sections generated by these codes and the previously validated ETOG-3Q, ETOG-3, FLANGE-II and XLACS are compared. Constant weighting function and zero Kelvin temperature are considered. Discrepancies are presented and analysed. (author) [pt

  15. About Block Dynamic Model of Earthquake Source.

    Science.gov (United States)

    Gusev, G. A.; Gufeld, I. L.

One may note the lack of progress in earthquake prediction research. Short-term prediction (on a diurnal scale, with the localisation also predicted) has practical meaning. Failure is due to the absence of adequate notions about the geological medium, particularly its block structure, especially in the faults. Geological and geophysical monitoring gives the basis for the notion of the geological medium as an open block dissipative system with limit energy saturation. The variations of the volume stressed state close to critical states are associated with the interaction of the inhomogeneous ascending stream of light gases (helium and hydrogen) with the solid phase, which is more pronounced in the faults. In the background state, small blocks of the fault medium produce the sliding of great blocks in the faults, but under considerable variations of the ascending gas streams the formation of bound chains of small blocks is possible, so that a bound state of great blocks may result (an earthquake source). Recently, using these notions, we proposed a dynamical earthquake source model based on a generalized chain of non-linear bound oscillators of Fermi-Pasta-Ulam (FPU) type. The generalization concerns its inhomogeneity and different external actions imitating physical processes in the real source. Earlier, a weakly inhomogeneous approximation without dissipation was considered, which permitted study of the FPU return (return to the initial state). Probabilistic properties of the quasi-periodic movement were found. The problem of chain decay due to non-linearity and external perturbations was posed; the thresholds and the dependence of the lifetime of the chain were studied, and great fluctuations of lifetimes were discovered. In the present paper a rigorous treatment of the inhomogeneous chain, including dissipation, is given. For the strong dissipation case, when the oscillation movements are suppressed, specific effects are discovered. For noise action and constantly arising
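The Fermi-Pasta-Ulam chain invoked here is concrete and easy to simulate. The dissipation-free, homogeneous toy below (an FPU-alpha chain with arbitrary parameters, not the authors' generalized model) integrates the chain with a leapfrog scheme and checks energy conservation, the prerequisite for observing the FPU return:

```python
import numpy as np

def fpu_accel(q, alpha=0.25):
    """FPU-alpha chain with fixed ends: forces from nonlinear springs
    with potential V(dl) = dl^2/2 + alpha*dl^3/3."""
    dl = np.diff(np.concatenate(([0.0], q, [0.0])))   # spring elongations
    f = dl + alpha * dl**2                            # V'(dl)
    return f[1:] - f[:-1]

def energy(q, p, alpha=0.25):
    dl = np.diff(np.concatenate(([0.0], q, [0.0])))
    return 0.5 * p @ p + np.sum(0.5 * dl**2 + (alpha / 3) * dl**3)

n, dt = 16, 0.01
q = np.sin(np.pi * np.arange(1, n + 1) / (n + 1))     # start in the lowest mode
p = np.zeros(n)
E0 = energy(q, p)
for _ in range(5000):                                 # leapfrog (kick-drift-kick)
    p += 0.5 * dt * fpu_accel(q)
    q += dt * p
    p += 0.5 * dt * fpu_accel(q)
```

The nonlinear term slowly leaks energy between normal modes, and over long times the chain nearly returns to its initial state; the model described in the abstract adds inhomogeneity, external forcing, and dissipation on top of this skeleton.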

  16. Striatal dopamine release codes uncertainty in pathological gambling

    DEFF Research Database (Denmark)

    Linnet, Jakob; Mouridsen, Kim; Peterson, Ericka

    2012-01-01

    Two mechanisms of midbrain and striatal dopaminergic projections may be involved in pathological gambling: hypersensitivity to reward and sustained activation toward uncertainty. The midbrain—striatal dopamine system distinctly codes reward and uncertainty, where dopaminergic activation is a linear...... function of expected reward and an inverse U-shaped function of uncertainty. In this study, we investigated the dopaminergic coding of reward and uncertainty in 18 pathological gambling sufferers and 16 healthy controls. We used positron emission tomography (PET) with the tracer [11C]raclopride to measure...... dopamine release, and we used performance on the Iowa Gambling Task (IGT) to determine overall reward and uncertainty. We hypothesized that we would find a linear function between dopamine release and IGT performance, if dopamine release coded reward in pathological gambling. If, on the other hand...

  17. Striatal dopamine release codes uncertainty in pathological gambling

    DEFF Research Database (Denmark)

    Linnet, Jakob; Mouridsen, Kim; Peterson, Ericka

    2012-01-01

    Two mechanisms of midbrain and striatal dopaminergic projections may be involved in pathological gambling: hypersensitivity to reward and sustained activation toward uncertainty. The midbrain-striatal dopamine system distinctly codes reward and uncertainty, where dopaminergic activation is a linear...... function of expected reward and an inverse U-shaped function of uncertainty. In this study, we investigated the dopaminergic coding of reward and uncertainty in 18 pathological gambling sufferers and 16 healthy controls. We used positron emission tomography (PET) with the tracer [(11)C......]raclopride to measure dopamine release, and we used performance on the Iowa Gambling Task (IGT) to determine overall reward and uncertainty. We hypothesized that we would find a linear function between dopamine release and IGT performance, if dopamine release coded reward in pathological gambling. If, on the other hand...

  18. Adaptive discrete cosine transform coding algorithm for digital mammography

    Science.gov (United States)

    Baskurt, Atilla M.; Magnin, Isabelle E.; Goutte, Robert

    1992-09-01

    The need for storage, transmission, and archiving of medical images has led researchers to develop adaptive and efficient data compression techniques. Among medical images, x-ray radiographs of the breast are especially difficult to process because of their particularly low contrast and very fine structures. A block adaptive coding algorithm based on the discrete cosine transform to compress digitized mammograms is described. A homogeneous repartition of the degradation in the decoded images is obtained using a spatially adaptive threshold. This threshold depends on the coding error associated with each block of the image. The proposed method is tested on a limited number of pathological mammograms including opacities and microcalcifications. A comparative visual analysis is performed between the original and the decoded images. Finally, it is shown that data compression with rather high compression rates (11 to 26) is possible in the mammography field.
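A minimal version of block-adaptive DCT coding can be sketched as follows: transform each 8x8 block, discard coefficients below a threshold tied to that block's own coefficient range, and invert. The synthetic image, the threshold rule, and all parameters are arbitrary stand-ins, not the authors' mammography-tuned scheme:

```python
import numpy as np

def dct_matrix(N=8):
    """Orthonormal DCT-II matrix (C @ C.T == I)."""
    k, i = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * i + 1) * k / (2 * N))
    C[0, :] /= np.sqrt(2.0)
    return C

def code_blocks(img, thresh_frac=0.02):
    """Per 8x8 block: forward 2-D DCT, zero coefficients below a
    block-adaptive threshold, inverse DCT; count surviving coefficients."""
    C, out, kept = dct_matrix(), np.zeros_like(img), 0
    for r in range(0, img.shape[0], 8):
        for c in range(0, img.shape[1], 8):
            D = C @ img[r:r+8, c:c+8] @ C.T      # forward 2-D DCT
            t = thresh_frac * np.abs(D).max()    # block-adaptive threshold
            D[np.abs(D) < t] = 0.0
            kept += np.count_nonzero(D)
            out[r:r+8, c:c+8] = C.T @ D @ C      # inverse 2-D DCT
    return out, kept

rng = np.random.default_rng(5)
img = rng.uniform(0, 1, (32, 32)).cumsum(axis=0).cumsum(axis=1)  # smooth test image
rec, kept = code_blocks(img)
mse = np.mean((rec - img)**2)
```

Tying the threshold to each block's own statistics is what spreads the degradation evenly: busy blocks keep more coefficients, flat blocks keep fewer, instead of one global quality cliff.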

  19. Energy-Efficient Channel Coding Strategy for Underwater Acoustic Networks

    Directory of Open Access Journals (Sweden)

    Grasielli Barreto

    2017-03-01

Full Text Available Underwater acoustic networks (UAN) allow for efficiently exploiting and monitoring the sub-aquatic environment. These networks are characterized by long propagation delays, error-prone channels and half-duplex communication. In this paper, we address the problem of energy-efficient communication through the use of optimized channel coding parameters. We consider a two-layer encoding scheme employing forward error correction (FEC) codes and fountain codes (FC) for UAN scenarios without feedback channels. We model and evaluate the energy consumption of different channel coding schemes for a K-distributed multipath channel. The parameters of the FEC encoding layer are optimized by selecting the optimal error correction capability and the code block size. The results show the best parameter choice as a function of the link distance and received signal-to-noise ratio.

  20. Implementation of LT codes based on chaos

    International Nuclear Information System (INIS)

    Zhou Qian; Li Liang; Chen Zengqiang; Zhao Jiaxiang

    2008-01-01

Fountain codes provide an efficient way to transfer information over erasure channels like the Internet. LT codes are the first codes fully realizing the digital fountain concept. They are asymptotically optimal rateless erasure codes with highly efficient encoding and decoding algorithms. In theory, for each encoding symbol of LT codes, its degree is randomly chosen according to a predetermined degree distribution, and the neighbours used to generate that encoding symbol are chosen uniformly at random. Practical implementations of LT codes usually realize the randomness through pseudo-random number generators such as the linear congruential method. This paper applies the pseudo-randomness of chaotic sequences in the implementation of LT codes. Two Kent chaotic maps are used to determine the degree and neighbour(s) of each encoding symbol. It is shown that the implemented LT codes based on chaos perform better than LT codes implemented with a traditional pseudo-random number generator. (general)
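The construction can be sketched end to end: one chaotic sequence drives the degree choice, a second drives the neighbour choice, and decoding is the usual peeling process. The Kent-map parameter, the ad-hoc degree rule, and the overhead below are illustrative assumptions, not the paper's degree distribution:

```python
def kent_map(x, m=0.7):
    """Kent (skew tent) chaotic map on (0, 1)."""
    return x / m if x < m else (1 - x) / (1 - m)

def lt_encode(data, n_packets, x0=0.321, y0=0.654):
    """LT-style encoding: chaotic sequence x picks each packet's degree,
    chaotic sequence y picks its neighbours; the payload is their XOR."""
    k = len(data)
    x, y, packets = x0, y0, []
    for _ in range(n_packets):
        x = kent_map(x)
        d = min(1 + int(x * x * k), k)      # ad-hoc rule biased to low degrees
        nbrs = set()
        while len(nbrs) < d:
            y = kent_map(y)
            nbrs.add(int(y * k) % k)
        val = 0
        for i in nbrs:
            val ^= data[i]
        packets.append((frozenset(nbrs), val))
    return packets

def lt_decode(packets, k):
    """Peeling decoder: substitute known symbols into every packet, then
    harvest any packet whose unresolved neighbour set shrinks to one."""
    work = [[set(s), v] for s, v in packets]
    known = {}
    changed = True
    while changed and len(known) < k:
        changed = False
        for pkt in work:
            s = pkt[0]
            for i in [i for i in s if i in known]:
                s.discard(i)
                pkt[1] ^= known[i]
            if len(s) == 1:
                (i,) = s
                if i not in known:
                    known[i] = pkt[1]
                    changed = True
    return [known.get(i) for i in range(k)]

data = [3, 1, 4, 1, 5, 9, 2, 6]
packets = lt_encode(data, n_packets=40)
decoded = lt_decode(packets, len(data))
```

The receiver only needs the map parameters and seeds to regenerate each packet's degree and neighbour set, which is what makes a deterministic chaotic generator a drop-in replacement for the usual pseudo-random one.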

  1. Interior point decoding for linear vector channels

    International Nuclear Information System (INIS)

    Wadayama, T

    2008-01-01

    In this paper, a novel decoding algorithm for low-density parity-check (LDPC) codes based on convex optimization is presented. The decoding algorithm, called interior point decoding, is designed for linear vector channels. The linear vector channels include many practically important channels such as inter-symbol interference channels and partial response channels. It is shown that the maximum likelihood decoding (MLD) rule for a linear vector channel can be relaxed to a convex optimization problem, which is called a relaxed MLD problem.

  2. Interior point decoding for linear vector channels

    Energy Technology Data Exchange (ETDEWEB)

    Wadayama, T [Nagoya Institute of Technology, Gokiso, Showa-ku, Nagoya, Aichi, 466-8555 (Japan)], E-mail: wadayama@nitech.ac.jp

    2008-01-15

    In this paper, a novel decoding algorithm for low-density parity-check (LDPC) codes based on convex optimization is presented. The decoding algorithm, called interior point decoding, is designed for linear vector channels. The linear vector channels include many practically important channels such as inter-symbol interference channels and partial response channels. It is shown that the maximum likelihood decoding (MLD) rule for a linear vector channel can be relaxed to a convex optimization problem, which is called a relaxed MLD problem.
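The relaxation idea can be illustrated on the channel part of the problem. The sketch below drops the paper's parity-check (code) constraints and its interior-point machinery: it simply relaxes discrete ML detection of x in {-1,+1}^n for y = Hx + noise to the box [-1,1]^n, a convex problem solved here by projected gradient descent.

```python
import numpy as np

def relaxed_mld(H, y, iters=200, lr=None):
    """Box relaxation of maximum-likelihood detection for y = H x + noise,
    x in {-1,+1}^n: minimize ||y - Hx||^2 over the hypercube [-1,1]^n by
    projected gradient descent (a simpler stand-in for interior-point)."""
    n = H.shape[1]
    if lr is None:
        lr = 1.0 / np.linalg.norm(H, 2) ** 2   # step size below 1/L
    x = np.zeros(n)
    for _ in range(iters):
        grad = H.T @ (H @ x - y)
        x = np.clip(x - lr * grad, -1.0, 1.0)  # project back onto the box
    return np.sign(x)                          # final hard decision

rng = np.random.default_rng(0)
H = rng.normal(size=(8, 4))        # toy linear vector channel (e.g. ISI mixing)
x_true = rng.choice([-1.0, 1.0], size=4)
y = H @ x_true + 0.05 * rng.normal(size=8)
print(relaxed_mld(H, y))
```

At low noise the relaxed solution sits near a vertex of the hypercube, so the final sign decision recovers the transmitted symbols; the paper's contribution is keeping such a relaxation tight while also enforcing the LDPC constraints.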

  3. The solution of linear systems of equations with a structural analysis code on the NAS CRAY-2

    Science.gov (United States)

    Poole, Eugene L.; Overman, Andrea L.

    1988-01-01

    Two methods for solving linear systems of equations on the NAS Cray-2 are described. One is a direct method; the other is an iterative method. Both methods exploit the architecture of the Cray-2, particularly the vectorization, and are aimed at structural analysis applications. To demonstrate and evaluate the methods, they were installed in a finite element structural analysis code denoted the Computational Structural Mechanics (CSM) Testbed. A description of the techniques used to integrate the two solvers into the Testbed is given. Storage schemes, memory requirements, operation counts, and reformatting procedures are discussed. Finally, results from the new methods are compared with results from the initial Testbed sparse Choleski equation solver for three structural analysis problems. The new direct solvers described achieve the highest computational rates of the methods compared. The new iterative methods are not able to achieve as high computation rates as the vectorized direct solvers but are best for well conditioned problems which require fewer iterations to converge to the solution.
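The direct-versus-iterative comparison can be reproduced in miniature. The sketch below uses plain NumPy (no Cray-2 vectorization) and a plain conjugate-gradient solver rather than the Testbed's sparse Choleski, solving one small well-conditioned SPD system both ways.

```python
import numpy as np

def conjugate_gradient(K, f, tol=1e-10, max_iter=500):
    """Plain CG for a symmetric positive-definite system K u = f."""
    u = np.zeros_like(f)
    r = f - K @ u
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Kp = K @ p
        alpha = rs / (p @ Kp)
        u += alpha * p
        r -= alpha * Kp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return u

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 50))
K = A @ A.T + 50 * np.eye(50)           # SPD and well conditioned
f = rng.normal(size=50)

L = np.linalg.cholesky(K)               # direct: factor K = L L^T
u_direct = np.linalg.solve(L.T, np.linalg.solve(L, f))
u_iter = conjugate_gradient(K, f)
print(np.allclose(u_direct, u_iter, atol=1e-8))
```

For a well-conditioned system like this one, CG converges in few iterations, matching the record's observation that the iterative methods are best for well-conditioned problems.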

  4. On the Application of Time-Reversed Space-Time Block Code to Aeronautical Telemetry

    Science.gov (United States)

    2014-06-01

    longest channel impulse response must be inserted between the two intervals. Here, such an interval is assumed, although we won't complicate the notation... linear or non-linear, with or without noise whitening) with the usual performance-complexity tradeoffs. Here, we apply the approximate MMSE

  5. A Method of Calculating Motion Error in a Linear Motion Bearing Stage

    Directory of Open Access Journals (Sweden)

    Gyungho Khim

    2015-01-01

    Full Text Available We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement.

  6. A Method of Calculating Motion Error in a Linear Motion Bearing Stage

    Science.gov (United States)

    Khim, Gyungho; Park, Chun Hong; Oh, Jeong Seok

    2015-01-01

    We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement. PMID:25705715
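The averaging effect the method relies on can be sketched with a rigid-carriage model: the carriage height error at each travel position follows from a least-squares line fit through the rail straightness error sampled under the bearing blocks. This is a simplification of the transfer function method described in the record, and the rail profile and bearing geometry below are made up for illustration.

```python
import numpy as np

def carriage_motion_error(rail_error, bearing_offsets, positions):
    """Rigid-carriage averaging model: at each travel position, the vertical
    motion error is the intercept of a least-squares line fitted through the
    rail straightness error sampled under each bearing block."""
    z_err = []
    for s in positions:
        xs = s + bearing_offsets                 # bearing locations on the rail
        zs = np.interp(xs, np.arange(len(rail_error)), rail_error)
        centre = s + bearing_offsets.mean()
        # fit z = c0 + c1*(x - centre); c0 is the carriage height error
        A = np.vstack([np.ones_like(xs), xs - centre]).T
        c, *_ = np.linalg.lstsq(A, zs, rcond=None)
        z_err.append(c[0])
    return np.array(z_err)

rail = 1e-6 * np.sin(np.linspace(0, 6 * np.pi, 400))   # 1 um straightness error
offsets = np.array([0.0, 60.0, 120.0, 180.0])          # four bearing blocks
pos = np.arange(0, 200, 5.0)
err = carriage_motion_error(rail, offsets, pos)
print(err.max() - err.min())
```

Because each bearing samples a different phase of the rail form error, the fitted carriage error has a smaller peak-to-valley value than the rail itself; that attenuation is the averaging effect of a multi-bearing stage.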

  7. Linear theory radial and nonradial pulsations of DA dwarf stars

    International Nuclear Information System (INIS)

    Starrfield, S.; Cox, A.N.; Hodson, S.; Pesnell, W.D.

    1982-01-01

    The Los Alamos stellar envelope and radial linear non-adiabatic computer code, along with a new Los Alamos non-radial code, are used to investigate the total hydrogen mass necessary to produce the non-radial instability of DA dwarfs.

  8. LFSC - Linac Feedback Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, Valentin; /Fermilab

    2008-05-01

    The computer program LFSC is a numerical tool for simulating beam-based feedback in high-performance linacs. The code LFSC is based on an earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was successively used in simulations of the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback, on timescales corresponding to 5-100 Hz, and slower feedbacks operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab for the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR ('Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations. It uses the Guinea Pig code to simulate the luminosity performance. The set of input files includes the lattice description (XSIF format) and plain text files with numerical parameters, wake fields, ground motion data, etc. The Matlab environment provides a flexible system for graphical output.
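The pulse-to-pulse feedback that such a code simulates can be caricatured in a few lines. The sketch below is a generic integral-type orbit feedback against drifting ground motion, not LFSC's actual algorithms; all numbers are illustrative.

```python
import numpy as np

def simulate_p2p_feedback(gain, n_pulses=300, drift=0.02, noise=0.05, seed=2):
    """Toy pulse-to-pulse (5-100 Hz class) feedback: each pulse, the measured
    beam offset is corrected by `gain` times the running estimate, while slow
    ground motion and pulse jitter perturb the true offset."""
    rng = np.random.default_rng(seed)
    offset, correction = 0.0, 0.0
    history = []
    for _ in range(n_pulses):
        offset += drift + rng.normal(0.0, noise)   # ground motion + jitter
        measured = offset - correction              # BPM reading after the kick
        correction += gain * measured               # integral-type update
        history.append(measured)
    return np.array(history)

open_loop = simulate_p2p_feedback(gain=0.0)
closed_loop = simulate_p2p_feedback(gain=0.5)
print(abs(open_loop[-50:]).mean(), abs(closed_loop[-50:]).mean())
```

With the loop closed, the residual offset settles near drift/gain per pulse instead of accumulating, which is the basic behaviour a feedback simulation code verifies before layering on realistic lattice, wakefield and noise models.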

  9. On Decoding Interleaved Chinese Remainder Codes

    DEFF Research Database (Denmark)

    Li, Wenhui; Sidorenko, Vladimir; Nielsen, Johan Sebastian Rosenkilde

    2013-01-01

    We model the decoding of Interleaved Chinese Remainder codes as that of finding a short vector in a Z-lattice. Using the LLL algorithm, we obtain an efficient decoding algorithm, correcting errors beyond the unique decoding bound and having nearly linear complexity. The algorithm can fail with a probability dependent on the number of errors, and we give an upper bound for this. Simulation results indicate that the bound is close to the truth. We apply the proposed decoding algorithm for decoding a single CR code using the idea of “Power” decoding, suggested for Reed-Solomon codes. A combination of these two methods can be used to decode low-rate Interleaved Chinese Remainder codes.
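The underlying Chinese Remainder code is easy to sketch (the LLL-based error correction itself is beyond a few lines): a message smaller than the product of the first k moduli is encoded as its residue vector modulo n pairwise-coprime moduli, and any k residues already recover it by Chinese remaindering. The moduli and message below are illustrative.

```python
from math import prod

def crt(residues, moduli):
    """Chinese remainder reconstruction of m modulo prod(moduli)."""
    M = prod(moduli)
    m = 0
    for r, p in zip(residues, moduli):
        Mi = M // p
        m += r * Mi * pow(Mi, -1, p)   # pow(..., -1, p) is the modular inverse
    return m % M

def cr_encode(m, moduli):
    """Chinese Remainder code: the codeword is the residue vector of m."""
    return [m % p for p in moduli]

moduli = [101, 103, 107, 109, 113]   # pairwise coprime
m = 9999                              # message below 101*103, the 'rate' part
codeword = cr_encode(m, moduli)
# erasure-style decoding: any 2 of the 5 residues determine m since m < 101*103
print(crt(codeword[:2], moduli[:2]))  # -> 9999
```

The redundancy (here 5 residues where 2 suffice) is what the lattice-based decoder exploits to also correct *erroneous* residues, not just erased ones.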

  10. When sparse coding meets ranking: a joint framework for learning sparse codes and ranking scores

    KAUST Repository

    Wang, Jim Jing-Yan

    2017-06-28

    Sparse coding, which represents a data point as a sparse reconstruction code with regard to a dictionary, has been a popular data representation method. Meanwhile, in database retrieval problems, learning the ranking scores from data points plays an important role. Up to now, these two problems have always been considered separately, assuming that data coding and ranking are two independent and irrelevant problems. However, is there any internal relationship between sparse coding and ranking score learning? If yes, how to explore and make use of this internal relationship? In this paper, we try to answer these questions by developing the first joint sparse coding and ranking score learning algorithm. To explore the local distribution in the sparse code space, and also to bridge coding and ranking problems, we assume that in the neighborhood of each data point, the ranking scores can be approximated from the corresponding sparse codes by a local linear function. By considering the local approximation error of ranking scores, the reconstruction error and sparsity of sparse coding, and the query information provided by the user, we construct a unified objective function for learning of sparse codes, the dictionary and ranking scores. We further develop an iterative algorithm to solve this optimization problem.
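A two-stage stand-in for the joint algorithm can be sketched: sparse codes via ISTA for a fixed dictionary, then ranking scores fit as a linear function of the codes. The paper optimizes codes, dictionary and scores jointly with *local* linear functions per neighborhood; the fixed dictionary, global linear fit and toy relevance labels below are simplifying assumptions.

```python
import numpy as np

def ista(D, x, lam=0.1, iters=100):
    """Sparse code for one data point: min ||x - D s||^2 + lam*||s||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    s = np.zeros(D.shape[1])
    for _ in range(iters):
        g = s - (D.T @ (D @ s - x)) / L
        s = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return s

rng = np.random.default_rng(3)
D = rng.normal(size=(20, 40))
D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary atoms
X = rng.normal(size=(20, 30))                # 30 data points
S = np.stack([ista(D, X[:, i]) for i in range(30)], axis=1)

# ranking stage: fit scores as a (global, for simplicity) linear function of
# the sparse codes, supervised by a toy query-relevance signal
relevance = (X[0] > 0).astype(float)
w, *_ = np.linalg.lstsq(S.T, relevance, rcond=None)
scores = S.T @ w
print(scores.shape)
```

The paper's point is that alternating between these two stages inside one objective, with locality-preserving terms, outperforms running them independently as done here.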

  11. LiTrack A Fast longitudinal phase space tracking code with graphical user interface

    CERN Document Server

    Emma, Paul

    2005-01-01

    Many linear accelerators, such as linac-based light sources and linear colliders, apply longitudinal phase space manipulations in their design, including electron bunch compression and wakefield-induced energy spread control. Several computer codes handle such issues, but most require detailed information on the transverse focusing lattice. In fact, in most linear accelerators the transverse distributions do not significantly affect the longitudinal ones and can be ignored initially. This allows the use of a fast 2D code to study longitudinal aspects without time-consuming consideration of the transverse focusing. LiTrack is based on a 15-year-old code (same name) originally written by one of us (KB), and is now a MATLAB-based code with additional features, such as a graphical user interface and output plotting. The single-bunch tracking includes RF acceleration, bunch compression to 3rd order, geometric and resistive wakefields, aperture limits, synchrotron radiation, and flexible output plotting. The code w...

  12. Goya - an MHD equilibrium code for toroidal plasmas

    International Nuclear Information System (INIS)

    Scheffel, J.

    1984-09-01

    A description of the GOYA free-boundary equilibrium code is given. The non-linear Grad-Shafranov equation of ideal MHD is solved in a toroidal geometry for plasmas with purely poloidal magnetic fields. The code is based on a field line-tracing procedure, making storage of a large amount of information on a grid unnecessary. Usage of the code is demonstrated by computations of equilibria for the EXTRAP-T1 device. (Author)
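The field line-tracing primitive such a code builds on can be sketched generically (this is not GOYA's actual implementation): integrate dx/ds = B(x)/|B(x)| with classical RK4 and check that the traced line stays on its flux surface. The toy field below, whose field lines are circles, is an assumption for illustration.

```python
import numpy as np

def trace_field_line(B, x0, step=0.01, n_steps=2000):
    """Integrate dx/ds = B(x)/|B(x)| with classical RK4: the field-line
    tracing primitive that grid-free equilibrium solvers build on."""
    def f(x):
        b = B(x)
        return b / np.linalg.norm(b)
    xs = [np.array(x0, dtype=float)]
    for _ in range(n_steps):
        x = xs[-1]
        k1 = f(x)
        k2 = f(x + 0.5 * step * k1)
        k3 = f(x + 0.5 * step * k2)
        k4 = f(x + step * k3)
        xs.append(x + (step / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(xs)

def B_circular(x):
    """Toy purely poloidal field whose field lines are circles about the axis."""
    return np.array([-x[1], x[0]])

line = trace_field_line(B_circular, [1.0, 0.0])
radii = np.linalg.norm(line, axis=1)
print(radii.min(), radii.max())   # stays close to the r = 1 flux surface
```

Because only the current field-line state is integrated, no large grid of field values needs to be stored, which is the storage advantage the record mentions.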

  13. Block ground interaction of rockfalls

    Science.gov (United States)

    Volkwein, Axel; Gerber, Werner; Kummer, Peter

    2016-04-01

    During a rockfall, the interaction of the falling block with the ground is one of the most important factors that define the evolution of a rockfall trajectory. It steers the rebound, the rotational movement, possible braking effects, friction losses and damping effects. Therefore, if a most reliable rockfall/trajectory simulation software is sought, a good understanding of the block-ground interaction is necessary. Today's rockfall codes enable the simulation of a fully 3D-modelled block within a full 3D surface. However, the details during the contact, i.e. the contact duration, the penetration depth or the dimension of the marks in the ground, are usually not part of the simulation. Recent field tests with rocks between 20 and 80 kg were conducted on a grassy slope in 2014 [1]. A special rockfall sensor [2] within the blocks measured the rotational velocity and the acting accelerations during the tests. External video records and a so-called Local Positioning System deliver information on the travel velocity. With these data, not only the flight phases of the trajectories but also the contacts with the ground can be analysed. For the single jumps of a block, the flight time, jump length, velocity, and rotation are known. For the single impacts, their duration and the acting accelerations are visible. Further, the changes of rotational and translational velocity influence the next jump of the block. The change of the rotational velocity over the whole trajectory nicely visualizes the different phases of a rockfall regarding general acceleration and deceleration with respect to the inclination and the topography of the field. References: [1] Volkwein A, Krummenacher B, Gerber W, Lardon J, Gees F, Brügger L, Ott T (2015) Repeated controlled rockfall trajectory testing. [Abstract] Geophys. Res. Abstr. 17: EGU2015-9779. [2] Volkwein A, Klette J (2014) Semi-Automatic Determination of Rockfall Trajectories. Sensors 14: 18187-18210.

  14. An Improved EMD-Based Dissimilarity Metric for Unsupervised Linear Subspace Learning

    Directory of Open Access Journals (Sweden)

    Xiangchun Yu

    2018-01-01

    Full Text Available We investigate a novel approach to robust face image feature extraction by adopting methods based on Unsupervised Linear Subspace Learning to extract a small number of good features. Firstly, the face image is divided into blocks of a specified size, and we propose and extract a pooled Histogram of Oriented Gradients (pHOG) over each block. Secondly, an improved Earth Mover’s Distance (EMD) metric is adopted to measure the dissimilarity between blocks of one face image and the corresponding blocks from the rest of the face images. Thirdly, considering the limitations of the original Locality Preserving Projections (LPP), we propose Block Structure LPP (BSLPP), which effectively preserves the structural information of face images. Finally, an adjacency graph is constructed and a small number of good features of a face image are obtained by methods based on Unsupervised Linear Subspace Learning. A series of experiments has been conducted on several well-known face databases to evaluate the effectiveness of the proposed algorithm. In addition, we construct noise-corrupted, geometrically distorted, slightly translated, and slightly rotated versions of the AR and Extended Yale B face databases, and we verify the robustness of the proposed algorithm when faced with a certain degree of these disturbances.
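The block-dissimilarity step can be sketched with a crude orientation histogram standing in for the proposed pHOG descriptor, together with the 1-D special case of EMD, which reduces to the L1 distance between cumulative histograms. Bin count and block size below are illustrative.

```python
import numpy as np

def emd_1d(h1, h2):
    """Earth Mover's Distance between two normalized 1-D histograms:
    for 1-D distributions it equals the L1 distance between their CDFs."""
    c1, c2 = np.cumsum(h1), np.cumsum(h2)
    return np.abs(c1 - c2).sum()

def block_orientation_hist(block, bins=9):
    """Crude stand-in for a pooled HOG descriptor: a normalized histogram
    of gradient orientations over one image block."""
    gy, gx = np.gradient(block.astype(float))
    angles = np.arctan2(gy, gx)            # orientation in [-pi, pi]
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)

rng = np.random.default_rng(4)
block_a = rng.random((16, 16))
block_b = rng.random((16, 16))
d_self = emd_1d(block_orientation_hist(block_a), block_orientation_hist(block_a))
d_cross = emd_1d(block_orientation_hist(block_a), block_orientation_hist(block_b))
print(d_self, d_cross)
```

Unlike bin-wise distances, EMD accounts for how far mass must move between orientation bins, which is why it tolerates the slight rotations and translations the record tests against.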

  15. Isomonodromic tau-functions from Liouville conformal blocks

    International Nuclear Information System (INIS)

    Iorgov, N.; Lisovyy, O.

    2014-01-01

    The goal of this note is to show that the Riemann-Hilbert problem to find multivalued analytic functions with SL(2,C)-valued monodromy on Riemann surfaces of genus zero with n punctures can be solved by taking suitable linear combinations of the conformal blocks of Liouville theory at c=1. This implies a similar representation for the isomonodromic tau-function. In the case n=4 we thereby get a proof of the relation between tau-functions and conformal blocks discovered in O. Gamayun, N. Iorgov, and O. Lisovyy (2012). We briefly discuss a possible application of our results to the study of relations between certain N=2 supersymmetric gauge theories and conformal field theory.

  16. Multicompartment micellar aggregates of linear ABC amphiphiles in solvents selective for the C block: A Monte Carlo simulation

    KAUST Repository

    Zhu, Yutian; Yu, Haizhou; Wang, Yongmei; Cui, Jie; Kong, Weixin; Jiang, Wei

    2012-01-01

    …the simulations and the detailed phase diagrams for the ABC amphiphiles with different block lengths are obtained. The simulation results reveal that the micellar structure is largely controlled by block length, solvent quality, and incompatibility between…

  17. JPEG2000 COMPRESSION CODING USING HUMAN VISUAL SYSTEM MODEL

    Institute of Scientific and Technical Information of China (English)

    Xiao Jiang; Wu Chengke

    2005-01-01

    In order to apply the Human Visual System (HVS) model to the JPEG2000 standard, several implementation alternatives are discussed and a new scheme of visual optimization is introduced that modifies the slope of the rate-distortion curve. The novelty is that visual weighting is applied not by lifting the coefficients in the wavelet domain but through code stream organization. The scheme retains all the features of Embedded Block Coding with Optimized Truncation (EBCOT), such as resolution-progressive coding, good robustness against error bit spread, and compatibility with lossless compression. Performing better than other methods, it keeps the shortest standard codestream and decompression time and supports VIsual Progressive (VIP) coding.
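The rate-distortion-slope idea can be illustrated as follows: instead of scaling wavelet coefficients, each code block's distortion reductions are multiplied by a contrast-sensitivity weight for its subband before the optimal truncation search ranks the slopes. The weight values and R-D points below are illustrative, not from the paper or the JPEG2000 standard.

```python
# Illustrative contrast-sensitivity weights per wavelet subband (made up):
CSF_WEIGHTS = {"LL3": 1.00, "HL3": 0.82, "LH3": 0.82, "HH3": 0.62,
               "HL2": 0.56, "LH2": 0.56, "HH2": 0.35}

def truncation_slopes(rates, distortions, weight):
    """Visually weighted R-D slopes for one code block's candidate
    truncation points: slope = weight * (distortion drop) / (rate cost)."""
    slopes = []
    for i in range(1, len(rates)):
        d_drop = weight * (distortions[i - 1] - distortions[i])
        r_cost = rates[i] - rates[i - 1]
        slopes.append(d_drop / r_cost)
    return slopes

rates = [0, 100, 250, 500]                # cumulative bits per truncation point
dists = [900.0, 400.0, 150.0, 40.0]       # residual distortion at each point
print(truncation_slopes(rates, dists, CSF_WEIGHTS["LL3"]))
print(truncation_slopes(rates, dists, CSF_WEIGHTS["HH2"]))
```

A visually insignificant subband thus yields uniformly smaller slopes, so the rate allocator truncates its code blocks earlier, without any change to the wavelet coefficients themselves.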

  18. A 'quick-look' report on the THETIS 80% blocked cluster forced reflood experiments

    International Nuclear Information System (INIS)

    Cooper, C.A.; Pearson, K.G.

    1984-01-01

    A brief selection of results of forced reflooding experiments with the THETIS 80 percent blocked cluster is presented. A description of the THETIS blocked cluster test assemblies, and details of the test conditions, are given. The two forced reflooding experiments have been the subject of a blind calculation exercise with the BART code, and the results of these experiments are compared with the results from corresponding experiments with the 90 percent blocked cluster test assembly. Some general observations are made, arising from the comparison of these two series of experiments, and a qualitative explanation for the relatively complex variation of the heat transfer within the THETIS blockages is advanced. A full report on the 80 percent blocked cluster forced reflooding experiments will be available later. (U.K.)

  19. Non-linear model supported predicted strategy of regulation for the block regulation of a membrane based oxyfuel power plant process; Nichtlineare modellgestuetzte praediktive Regelungsstrategie fuer Blockregelung eines membranbasierten Oxyfuel-Kraftwerksprozesses

    Energy Technology Data Exchange (ETDEWEB)

    Hoelemann, Sebastian

    2011-07-01

    As part of the OXYCOAL AC project, a concept for a fossil-fired power plant without CO{sub 2} emissions is being developed, in which the recirculated flue gas is enriched with oxygen in a high-temperature ceramic membrane for the combustion of coal. This enables separation of CO{sub 2} at relatively low efficiency losses. The contribution under consideration deals with the design of a block control strategy for this dynamically extremely demanding process. A cascaded control structure with two non-linear model-based predictive controllers is implemented. An essential component of the cascade structure is a specially developed adaptation algorithm, with which the inner controller determines the values of the restrictions that apply in the outer controller's setpoint determination. The block control approach is examined using a simulation model.

  20. LFSC - Linac Feedback Simulation Code

    International Nuclear Information System (INIS)

    Ivanov, Valentin; Fermilab

    2008-01-01

    The computer program LFSC is a numerical tool for simulating beam-based feedback in high-performance linacs. The code LFSC is based on an earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was successively used in simulations of the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback, on timescales corresponding to 5-100 Hz, and slower feedbacks operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab for the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR ('Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations. It uses the Guinea Pig code to simulate the luminosity performance. The set of input files includes the lattice description (XSIF format) and plain text files with numerical parameters, wake fields, ground motion data, etc. The Matlab environment provides a flexible system for graphical output.