WorldWideScience

Sample records for protograph based ldpc

  1. Non-Binary Protograph-Based LDPC Codes: Analysis, Enumerators and Designs

    OpenAIRE

    Sun, Yizeng

    2013-01-01

    Non-binary LDPC codes can outperform binary LDPC codes under sum-product decoding, at the cost of higher computational complexity. Non-binary LDPC codes based on protographs have the additional advantage of a simple hardware architecture. In the first part of this thesis, we use EXIT chart analysis to compute the thresholds of different protographs over GF(q). Based on the threshold computation, several non-binary protograph-based LDPC codes are designed and their frame error rates are compared with those of binary LDPC codes. ...

  2. Short-Block Protograph-Based LDPC Codes

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher

    2010-01-01

    Short-block low-density parity-check (LDPC) codes of a special type are intended to be especially well suited for potential applications that include transmission of command and control data, cellular telephony, data communications in wireless local area networks, and satellite data communications. [In general, LDPC codes belong to a class of error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.] The codes of the present special type exhibit low error floors, low bit and frame error rates, and low latency (in comparison with related prior codes). These codes also achieve a low maximum rate of undetected errors over all signal-to-noise ratios, without requiring the use of cyclic redundancy checks, which would significantly increase the overhead for short blocks. These codes have protograph representations; this is advantageous in that, for reasons that exceed the scope of this article, the applicability of protograph representations makes it possible to design high-speed iterative decoders that utilize belief-propagation algorithms.

  3. Protograph LDPC Codes Over Burst Erasure Channels

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    In this paper we design high-rate protograph-based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high-data-rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes with an iterative decoding threshold that approaches the capacity of binary erasure channels. The other class is designed for short block sizes based on maximizing the minimum stopping set size. For high code rates and short blocks the second class outperforms the first class.

  4. Protograph LDPC Codes for the Erasure Channel

    Science.gov (United States)

    Pollara, Fabrizio; Dolinar, Samuel J.; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews the use of protograph Low Density Parity Check (LDPC) codes for erasure channels. A protograph is a Tanner graph with a relatively small number of nodes. A "copy-and-permute" operation can be applied to the protograph to obtain larger derived graphs of various sizes. For very high code rates and short block sizes, a low asymptotic threshold criterion is not the best approach to designing LDPC codes. Simple protographs with much regularity and low maximum node degrees appear to be the best choices. Quantized-rateless protograph LDPC codes can be built by careful design of the protograph such that multiple puncturing patterns still permit message-passing decoding to proceed.
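
    The "copy-and-permute" operation is easy to sketch in code: every base-matrix entry becomes a Z x Z block, with each protograph edge replaced by a permutation matrix (parallel edges by sums of disjoint permutations). The following NumPy sketch uses a made-up toy protograph and random permutations; practical designs choose the permutations (often circulants) much more carefully.

      import numpy as np

      def lift_protograph(base, Z, seed=0):
          """Copy-and-permute: expand a protograph base matrix by factor Z."""
          rng = np.random.default_rng(seed)
          m, n = base.shape
          H = np.zeros((m * Z, n * Z), dtype=np.uint8)
          for i in range(m):
              for j in range(n):
                  block = np.zeros((Z, Z), dtype=np.uint8)
                  placed = 0
                  while placed < base[i, j]:
                      P = np.eye(Z, dtype=np.uint8)[rng.permutation(Z)]
                      if not np.any(block & P):   # parallel edges stay disjoint
                          block |= P
                          placed += 1
                  H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = block
          return H

      # toy protograph: 2 check nodes, 4 variable nodes, one parallel edge
      base = np.array([[1, 2, 0, 1],
                       [1, 1, 1, 1]])
      H = lift_protograph(base, Z=8)
      print(H.shape)   # (16, 32): the derived graph is Z times larger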

  5. Ensemble Weight Enumerators for Protograph LDPC Codes

    Science.gov (United States)

    Divsalar, Dariush

    2006-01-01

    Recently, LDPC codes with projected graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. The derived ensemble weight enumerators show that the linear-minimum-distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.

  6. Rate-Compatible Protograph LDPC Codes

    Science.gov (United States)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods resulting in rate-compatible low-density parity-check (LDPC) codes built from protographs. The described digital coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum distance growth property for the protograph. All possible edges in the graph are searched for the minimum iterative decoding threshold, and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode-and-forward relay channels.

  7. Protograph based LDPC codes with minimum distance linearly growing with block size

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly in block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.

  8. Construction of Protograph LDPC Codes with Linear Minimum Distance

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
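
    The check-splitting step described above can be sketched as a base-matrix transformation; the starting protograph below is an invented example (one check, four degree-3 variables), not a code from the paper.

      import numpy as np

      def split_check(base, row, left_cols):
          """Split check `row` into two checks joined by a new degree-2
          variable node; `left_cols` keep their edges on the first half."""
          m, n = base.shape
          new = np.zeros((m + 1, n + 1), dtype=int)
          new[:m, :n] = base
          moved = [j for j in range(n) if j not in left_cols]
          new[m, moved] = base[row, moved]   # edges moved to the second half
          new[row, moved] = 0                # first half keeps `left_cols`
          new[row, n] = new[m, n] = 1        # degree-2 node ties the halves
          return new

      # invented high-rate protograph: one check, four degree-3 variables
      base = np.array([[3, 3, 3, 3]])        # rate (4 - 1)/4 = 3/4
      print(split_check(base, row=0, left_cols=[0, 1]))
      # [[3 3 0 0 1]
      #  [0 0 3 3 1]]   -> rate (5 - 2)/5 = 3/5, all old degrees preserved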

  9. A rate-compatible family of protograph-based LDPC codes built by expurgation and lengthening

    Science.gov (United States)

    Dolinar, Sam

    2005-01-01

    We construct a protograph-based rate-compatible family of low-density parity-check codes that cover a very wide range of rates from 1/2 to 16/17, perform within about 0.5 dB of their capacity limits for all rates, and can be decoded conveniently and efficiently with a common hardware implementation.

  10. Protograph LDPC Codes with Node Degrees at Least 3

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher

    2006-01-01

    In this paper we present protograph codes with a small number of degree-3 nodes and one high-degree node. The iterative decoding thresholds of the proposed rate-1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with degrees at least 3. The main motivation is to gain linear minimum distance for a low error floor, and to construct rate-compatible protograph-based LDPC codes of fixed block length that simultaneously achieve low iterative decoding thresholds and linear minimum distance. We start with a rate-1/2 protograph LDPC code with degree-3 nodes and one high-degree node. Higher-rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The case where all constraints are combined corresponds to the highest-rate code. This constraint must be connected to nodes of degree at least 3 for the graph to have linear minimum distance. Thus having node degrees at least 3 at rate 1/2 guarantees that the linear minimum distance property is preserved for higher rates. Through examples we show that iterative decoding thresholds as low as 0.544 dB can be achieved for small protographs with node degrees at least 3. A family of low- to high-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.

  11. Rate-compatible protograph LDPC code families with linear minimum distance

    Science.gov (United States)

    Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J. (Inventor); Jones, Christopher R. (Inventor)

    2012-01-01

    Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds.

  12. High Order Modulation Protograph Codes

    Science.gov (United States)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods for designing protograph-based bit-interleaved coded modulation that is general and applies to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.

  13. QC-LDPC code-based cryptography

    CERN Document Server

    Baldi, Marco

    2014-01-01

    This book describes the fundamentals of cryptographic primitives based on quasi-cyclic low-density parity-check (QC-LDPC) codes, with a special focus on the use of these codes in public-key cryptosystems derived from the McEliece and Niederreiter schemes. In the first part of the book, the main characteristics of QC-LDPC codes are reviewed, and several techniques for their design are presented, while tools for assessing the error correction performance of these codes are also described. Some families of QC-LDPC codes that are best suited for use in cryptography are also presented. The second part of the book focuses on the McEliece and Niederreiter cryptosystems, both in their original forms and in some subsequent variants. The applicability of QC-LDPC codes in these frameworks is investigated by means of theoretical analyses and numerical tools, in order to assess their benefits and drawbacks in terms of system efficiency and security. Several examples of QC-LDPC code-based public key cryptosystems are presented. ...

  14. High-throughput GPU-based LDPC decoding

    Science.gov (United States)

    Chang, Yang-Lang; Chang, Cheng-Chun; Huang, Min-Yu; Huang, Bormin

    2010-08-01

    Low-density parity-check (LDPC) codes are linear block codes known to approach the Shannon limit under the iterative sum-product algorithm. LDPC codes have been adopted in most current communication systems such as DVB-S2, WiMAX, Wi-Fi and 10GBASE-T. The need for reliable and flexible communication links across this wide variety of communication standards and configurations has inspired demand for high-performance, flexible computing. Accordingly, finding a fast and reconfigurable development platform for designing high-throughput LDPC decoders has become important, especially for rapidly changing communication standards and configurations. In this paper, a new graphics-processing-unit (GPU) LDPC decoding platform with asynchronous data transfer is proposed to realize this practical implementation. Experimental results show that the proposed GPU-based decoder achieves a 271x speedup compared with its CPU-based counterpart. It can serve as a high-throughput LDPC decoder.
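
    The kind of parallelism such a GPU decoder exploits can be imitated on the CPU with a batched flooding min-sum decoder: all received words are processed at once by array operations, the role a GPU kernel would play across its threads. A minimal NumPy sketch, using a (7,4) Hamming parity-check matrix as a small stand-in for a real LDPC matrix:

      import numpy as np

      def min_sum_decode(H, llr, iters=20):
          """Batch flooding min-sum decoding; llr has shape (batch, n),
          so many received words are decoded simultaneously."""
          rows, cols = np.nonzero(H)          # edge list of the Tanner graph
          v2c = llr[:, cols].copy()           # variable-to-check messages
          c2v = np.zeros_like(v2c)
          for _ in range(iters):
              for c in range(H.shape[0]):     # check-node (min-sum) update
                  e = np.flatnonzero(rows == c)
                  msg = v2c[:, e]
                  mags = np.abs(msg)
                  sgn = np.where(msg < 0, -1.0, 1.0)
                  tot_sign = sgn.prod(axis=1, keepdims=True)
                  m1 = mags.min(axis=1, keepdims=True)
                  m2 = np.partition(mags, 1, axis=1)[:, 1:2]  # 2nd smallest
                  extr = np.where(mags == m1, m2, m1)         # exclude self
                  c2v[:, e] = tot_sign * sgn * extr
              post = llr.copy()
              for k, j in enumerate(cols):    # accumulate into posteriors
                  post[:, j] += c2v[:, k]
              v2c = post[:, cols] - c2v       # extrinsic variable update
          return (post < 0).astype(np.uint8)  # hard decisions

      # (7,4) Hamming matrix as a tiny stand-in for an LDPC matrix
      H = np.array([[1, 1, 0, 1, 1, 0, 0],
                    [1, 0, 1, 1, 0, 1, 0],
                    [0, 1, 1, 1, 0, 0, 1]])
      rng = np.random.default_rng(1)
      llr = 3.0 + rng.normal(0.0, 1.0, size=(3, 7))  # all-zero words sent
      print(min_sum_decode(H, llr))   # rows of zeros: codewords recovered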

  15. Construction of Quasi-Cyclic LDPC Codes Based on Fundamental Theorem of Arithmetic

    Directory of Open Access Journals (Sweden)

    Hai Zhu

    2018-01-01

    Quasi-cyclic (QC) LDPC codes play an important role in 5G communications and have been chosen as the standard codes for the 5G enhanced mobile broadband (eMBB) data channel. In this paper, we study the construction of QC LDPC codes based on an arbitrary given expansion factor (or lifting degree). First, we analyze the cycle structure of QC LDPC codes and give the necessary and sufficient condition for the existence of short cycles. Based on the fundamental theorem of arithmetic in number theory, we divide the integer factorization into three cases and present three classes of QC LDPC codes accordingly. Furthermore, a general construction method of QC LDPC codes with girth of at least 6 is proposed. Numerical results show that the constructed QC LDPC codes perform well over the AWGN channel when decoded with iterative algorithms.
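
    The necessary and sufficient condition for length-4 cycles in a QC LDPC code is standard and easy to state: for lifting degree Z and an exponent matrix P of circulant shift values (with -1 marking an all-zero block), a 4-cycle exists exactly when P[i1][j1] - P[i1][j2] + P[i2][j2] - P[i2][j1] = 0 (mod Z) for some pair of rows and columns. A brute-force checker under that formulation:

      from itertools import combinations

      def has_four_cycle(P, Z):
          """True if the lifted Tanner graph contains a length-4 cycle.
          P holds circulant shift values; -1 marks an all-zero block."""
          m, n = len(P), len(P[0])
          for i1, i2 in combinations(range(m), 2):
              for j1, j2 in combinations(range(n), 2):
                  e = (P[i1][j1], P[i1][j2], P[i2][j2], P[i2][j1])
                  if -1 in e:
                      continue                    # a cycle needs 4 blocks
                  if (e[0] - e[1] + e[2] - e[3]) % Z == 0:
                      return True
          return False

      Z = 7
      print(has_four_cycle([[0, 0], [0, 0]], Z))  # True: identical shifts
      print(has_four_cycle([[0, 0], [0, 1]], Z))  # False: sum is 1 mod 7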

  16. Design of ACM system based on non-greedy punctured LDPC codes

    Science.gov (United States)

    Lu, Zijun; Jiang, Zihong; Zhou, Lin; He, Yucheng

    2017-08-01

    In this paper, an adaptive coded modulation (ACM) scheme based on rate-compatible LDPC (RC-LDPC) codes is designed. The RC-LDPC codes are constructed by a non-greedy puncturing method which shows good performance in the high-code-rate region. Moreover, an incremental redundancy scheme for the LDPC-based ACM system over the AWGN channel is proposed. Under this scheme, code rates vary from 2/3 to 5/6 and the complexity of the ACM system is lowered. Simulations show that the proposed ACM system obtains increasingly significant coding gains as its throughput grows.
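
    The rate bookkeeping behind such a punctured rate-compatible family is simple: the mother code stays fixed and transmitted bits are removed, so the effective rate is k/(n - p). The block lengths below are invented, chosen only to reproduce the 2/3 to 5/6 range quoted above.

      def punctured_rate(k, n, punctured):
          """Effective rate of a mother (n, k) code after removing
          `punctured` of its n coded bits from transmission."""
          return k / (n - punctured)

      k, n = 800, 1200                      # invented rate-2/3 mother code
      for p in (0, 120, 240):               # puncture 0%, 10%, 20% of bits
          print(round(punctured_rate(k, n, p), 3))   # 0.667, 0.741, 0.833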

  17. LDPC and SHA based iris recognition for image authentication

    Directory of Open Access Journals (Sweden)

    K. Seetharaman

    2012-11-01

    We introduce a novel way to authenticate an image using a Low Density Parity Check (LDPC) and Secure Hash Algorithm (SHA) based iris recognition method with a reversible watermarking scheme, which is based on the Integer Wavelet Transform (IWT) and a threshold embedding technique. The parity checks and parity matrix of LDPC encoding and cancellable biometrics, i.e., the hash string of the unique iris code from SHA-512, are embedded into an image for authentication purposes using the reversible watermarking scheme based on IWT and threshold embedding. Simply by reversing the embedding process, the original image, parity checks, parity matrix and SHA-512 hash are extracted back from the watermarked image. For authentication, the new hash string produced by employing SHA-512 on the error-corrected iris code from a live person is compared with the hash string extracted from the watermarked image. The LDPC code reduces the Hamming distance for genuine comparisons by a larger amount than for impostor comparisons. This results in better separation between genuine and impostor users, which improves the authentication performance. The security of this scheme is very high due to the security complexity of SHA-512, which is 2^256 under a birthday attack. Experimental results show that this approach can ensure more accurate authentication with low false rejection and false acceptance rates, and outperforms the prior arts in terms of PSNR.

  18. A novel construction method of QC-LDPC codes based on CRT for optical communications

    Science.gov (United States)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-05-01

    A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct a high-rate code. The simulation results show that at the bit error rate (BER) of 10^-7, the net coding gain (NCG) of the regular QC-LDPC(4 851, 4 546) code is respectively 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB more than those of the classic RS(255, 239) code in ITU-T G.975, the LDPC(32 640, 30 592) code in ITU-T G.975.1, the QC-LDPC(3 664, 3 436) code constructed by the improved combining construction method based on the CRT and the irregular QC-LDPC(3 843, 3 603) code constructed by the construction method based on the Galois field (GF(q)) multiplicative group. Furthermore, all these five codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4 851, 4 546) code constructed by the proposed construction method has excellent error-correction performance, and is more suitable for optical transmission systems.

  19. A novel QC-LDPC code based on the finite field multiplicative group for optical communications

    Science.gov (United States)

    Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen

    2013-09-01

    A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the finite field multiplicative group, which offers easier construction, more flexible code-length and code-rate adjustment, and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code achieves good error-correction performance over the additive white Gaussian noise (AWGN) channel with iterative sum-product algorithm (SPA) decoding. At the bit error rate (BER) of 10^-6, the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code is 1.8 dB, 0.9 dB and 0.2 dB more than those of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1 and the SCG-LDPC(3969,3720) code constructed by the random method, respectively. It is therefore more suitable for optical communication systems.
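
    The paper's exact construction is not reproduced here, but the classical array-code construction illustrates how finite-field arithmetic yields girth-6 QC-LDPC codes: with a prime lifting degree Z, the shift values P[i][j] = i*j mod Z never close a length-4 cycle, since the alternating sum around any 2 x 2 sub-block equals (i1-i2)(j1-j2), which is nonzero mod a prime. A sketch:

      import numpy as np

      def array_exponents(rows, cols, Z):
          """Exponent matrix P[i][j] = (i*j) mod Z; for prime Z the lifted
          Tanner graph is free of 4-cycles (girth at least 6)."""
          return np.fromfunction(lambda i, j: (i * j) % Z, (rows, cols),
                                 dtype=int)

      def expand(P, Z):
          """Replace shift s by the s-step cyclically shifted identity."""
          I = np.eye(Z, dtype=np.uint8)
          return np.block([[np.roll(I, s, axis=1) for s in row] for row in P])

      Z = 13                        # prime lifting degree
      H = expand(array_exponents(3, 12, Z), Z)
      print(H.shape, H.sum(axis=0).min(), H.sum(axis=1).min())  # (39, 156) 3 12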

  20. A novel construction scheme of QC-LDPC codes based on the RU algorithm for optical transmission systems

    Science.gov (United States)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-03-01

    A novel lower-complexity construction scheme of quasi-cyclic low-density parity-check (QC-LDPC) codes for optical transmission systems is proposed based on the structure of the parity-check matrix for the Richardson-Urbanke (RU) algorithm. Furthermore, a novel irregular QC-LDPC(4 288, 4 020) code with a high code rate of 0.937 is constructed by this scheme. The simulation analyses show that the net coding gain (NCG) of the novel irregular QC-LDPC(4 288, 4 020) code is respectively 2.08 dB, 1.25 dB and 0.29 dB more than those of the classic RS(255, 239) code, the LDPC(32 640, 30 592) code and the irregular QC-LDPC(3 843, 3 603) code at the bit error rate (BER) of 10^-6. The irregular QC-LDPC(4 288, 4 020) code has lower encoding/decoding complexity compared with the LDPC(32 640, 30 592) code and the irregular QC-LDPC(3 843, 3 603) code. The proposed QC-LDPC(4 288, 4 020) code can thus better meet the increasing development requirements of high-speed optical transmission systems.

  1. Weighted-Bit-Flipping-Based Sequential Scheduling Decoding Algorithms for LDPC Codes

    Directory of Open Access Journals (Sweden)

    Qing Zhu

    2013-01-01

    Low-density parity-check (LDPC) codes can be applied in many different scenarios such as video broadcasting and satellite communications. LDPC codes are commonly decoded by an iterative algorithm called belief propagation (BP) over the corresponding Tanner graph. The original BP updates all the variable nodes simultaneously, followed by all the check nodes simultaneously as well. We propose a sequential scheduling algorithm based on the weighted bit-flipping (WBF) algorithm for the sake of improving the convergence speed. Notably, WBF is a simple, low-complexity algorithm. We combine it with BP to obtain the advantages of both algorithms. The flipping function used in WBF is borrowed to determine the scheduling priority. Simulation results show that the proposed algorithm provides a good tradeoff between FER performance and computational complexity for short-length LDPC codes.
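
    A minimal version of the flipping function makes the scheduling idea concrete: each check is weighted by the reliability of its least reliable participant, and unsatisfied checks vote for flipping. The parity-check matrix and received vector below are toy examples, and the combination with BP scheduling from the paper is not shown.

      import numpy as np

      def wbf_decode(H, y, max_iters=50):
          """Weighted bit-flipping: flip the bit whose checks complain most."""
          z = (y < 0).astype(int)                    # hard decisions (BPSK)
          w = np.array([np.abs(y[row == 1]).min() for row in H])
          for _ in range(max_iters):
              s = H.dot(z) % 2                       # syndrome
              if not s.any():
                  break                              # valid codeword found
              # unsatisfied checks (s=1) vote +w for flipping, satisfied -w
              E = ((2 * s - 1) * w) @ H
              z[np.argmax(E)] ^= 1                   # flip the worst bit
          return z

      H = np.array([[1, 1, 0, 1, 1, 0, 0],
                    [1, 0, 1, 1, 0, 1, 0],
                    [0, 1, 1, 1, 0, 0, 1]])
      y = np.array([0.9, 1.1, -0.2, 1.0, 0.8, 1.2, 0.7])  # bit 2 weakly flipped
      print(wbf_decode(H, y))          # -> [0 0 0 0 0 0 0], error corrected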

  2. A Scalable Architecture of a Structured LDPC Decoder

    Science.gov (United States)

    Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon

    2004-01-01

    We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2 dB implementation loss relative to a floating point decoder.

  3. Constructing LDPC Codes from Loop-Free Encoding Modules

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth

    2009-01-01

    A method of constructing certain low-density parity-check (LDPC) codes by use of relatively simple loop-free coding modules has been developed. The subclasses of LDPC codes to which the method applies includes accumulate-repeat-accumulate (ARA) codes, accumulate-repeat-check-accumulate codes, and the codes described in Accumulate-Repeat-Accumulate-Accumulate Codes (NPO-41305), NASA Tech Briefs, Vol. 31, No. 9 (September 2007), page 90. All of the affected codes can be characterized as serial/parallel (hybrid) concatenations of such relatively simple modules as accumulators, repetition codes, differentiators, and punctured single-parity check codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. These codes can also be characterized as hybrid turbolike codes that have projected graph or protograph representations (for example see figure); these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The present method comprises two related submethods for constructing LDPC codes from simple loop-free modules with circulant permutations. The first submethod is an iterative encoding method based on the erasure-decoding algorithm. The computations required by this method are well organized because they involve a parity-check matrix having a block-circulant structure. The second submethod involves the use of block-circulant generator matrices. The encoders of this method are very similar to those of recursive convolutional codes. Some encoders according to this second submethod have been implemented in a small field-programmable gate array that operates at a speed of 100 megasymbols per second. By use of density evolution (a computational-simulation technique for analyzing the performance of LDPC codes), it has been shown through some examples that as the block size goes to infinity, low iterative decoding thresholds close to ...

  4. Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique

    Science.gov (United States)

    Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi

    Reducing power dissipation in LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low-power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) the intermediate message compression technique enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated shift-register-based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of these two techniques enables the decoder to reduce power dissipation while maintaining decoding throughput. Simulation results show that the proposed architecture improves power efficiency by up to 52% and 18% compared with decoders based on the overlapped schedule and the rapid convergence schedule, respectively, without the proposed techniques.

  5. Low Complexity Approach for High Throughput Belief-Propagation based Decoding of LDPC Codes

    Directory of Open Access Journals (Sweden)

    BOT, A.

    2013-11-01

    The paper proposes a low-complexity belief propagation (BP) based decoding algorithm for LDPC codes. In spite of the iterative nature of the decoding process, the proposed algorithm provides both reduced complexity and improved BER performance compared with the classic min-sum (MS) algorithm generally used for hardware implementations. Linear approximations of the check-node update function are used in order to reduce the complexity of the BP algorithm. Considering this decoding approach, an FPGA-based hardware architecture is proposed for implementing the decoding algorithm, aiming to increase the decoder throughput. FPGA technology was chosen for the LDPC decoder implementation due to its parallel computation and reconfiguration capabilities. The obtained results show improvements in decoding throughput and BER performance compared with state-of-the-art approaches.
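
    One standard way to linearize the check-node update is to keep the min-sum term and replace the exact Jacobian correction log(1 + e^-x) with a piecewise-linear fit such as its tangent at zero, max(0, ln 2 - x/2). The constants used in the paper may differ; this sketch only illustrates the approach:

      import numpy as np

      def box_plus(a, b):
          """Exact pairwise check-node update (Jacobian form)."""
          return (np.sign(a) * np.sign(b) * min(abs(a), abs(b))
                  + np.log1p(np.exp(-abs(a + b)))
                  - np.log1p(np.exp(-abs(a - b))))

      def box_plus_linear(a, b):
          """Min-sum plus a piecewise-linear correction: the tangent of
          log(1 + exp(-x)) at x = 0, i.e. max(0, ln2 - x/2)."""
          corr = lambda x: max(0.0, np.log(2.0) - 0.5 * x)
          return (np.sign(a) * np.sign(b) * min(abs(a), abs(b))
                  + corr(abs(a + b)) - corr(abs(a - b)))

      a, b = 1.5, -2.0
      print(box_plus(a, b))          # exact:   about -1.056
      print(box_plus_linear(a, b))   # linear:  about -1.057
      print(np.sign(a) * np.sign(b) * min(abs(a), abs(b)))  # min-sum: -1.5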

  6. A PEG Construction of LDPC Codes Based on the Betweenness Centrality Metric

    Directory of Open Access Journals (Sweden)

    BHURTAH-SEEWOOSUNGKUR, I.

    2016-05-01

    Progressive Edge Growth (PEG) constructions are usually based on optimizing the distance metric by various methods. In this work, however, the distance metric is replaced by a different one, namely the betweenness centrality metric, which was shown to enhance routing performance in wireless mesh networks. A new type of PEG construction for Low-Density Parity-Check (LDPC) codes is introduced based on the betweenness centrality metric borrowed from social networks terminology, given that the bipartite graph describing the LDPC code is analogous to a network of nodes. The algorithm is very efficient in filling edges on the bipartite graph, adding connections edge by edge. The smallest graph size the new code could construct surpasses those obtained from a modified PEG algorithm, the RandPEG algorithm. To the best of the authors' knowledge, this paper produces the best regular column-weight-two LDPC graphs. In addition, the technique proves to be competitive in terms of error-correcting performance. When compared to MacKay, PEG and other recent modified-PEG codes, the algorithm gives better performance at high SNR due to its particular edge and local graph properties.

  7. Construction method of QC-LDPC codes based on multiplicative group of finite field in optical communication

    Science.gov (United States)

    Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui

    2016-09-01

    In order to meet the needs of the high-speed development of optical communication systems, a construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity-check matrix of the code constructed by this method has no cycles of length 4, which ensures that the obtained code has good distance properties. Simulation results show that when the bit error rate (BER) is 10^-6, in the same simulation environment, the net coding gain (NCG) of the proposed QC-LDPC(3 780, 3 540) code with a code rate of 93.7% is improved by 2.18 dB and 1.6 dB respectively compared with those of the RS(255, 239) code in ITU-T G.975 and the LDPC(32 640, 30 592) code in ITU-T G.975.1. In addition, the NCG of the proposed QC-LDPC(3 780, 3 540) code is respectively 0.2 dB and 0.4 dB higher compared with those of the SG-QC-LDPC(3 780, 3 540) code based on two different subgroups in a finite field and the AS-QC-LDPC(3 780, 3 540) code based on two arbitrary sets of a finite field. Thus, the proposed QC-LDPC(3 780, 3 540) code can be well applied in optical communication systems.

  8. Rate-Compatible LDPC Codes with Linear Minimum Distance

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel

    2009-01-01

    A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either fixed input block size or fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first-mentioned submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes. These are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division modulation.

  9. Bilayer Protograph Codes for Half-Duplex Relay Channels

    Science.gov (United States)

    Divsalar, Dariush; VanNguyen, Thuy; Nosratinia, Aria

    2013-01-01

    Direct-to-Earth return links are limited by the size and power of lander devices. A standard alternative is provided by a two-hop return link: a proximity link (from lander to orbiter relay) and a deep-space link (from orbiter relay to Earth). Although direct-to-Earth return links are limited by the size and power of lander devices, using an additional link and a proposed coding for relay channels, one can obtain a more reliable signal. Although significant progress has been made in the relay coding problem, existing codes must be painstakingly optimized to match a single set of channel conditions, many of them do not offer easy encoding, and most of them do not have structured designs. A high-performing LDPC (low-density parity-check) code for the relay channel addresses two important issues simultaneously: a code structure that allows low encoding complexity, and a flexible rate-compatible code that allows matching to various channel conditions. Most of the previous high-performance LDPC codes for the relay channel are tightly optimized for a given channel quality, and are not easily adapted without extensive re-optimization for various channel conditions. This code for the relay channel combines structured design and easy encoding with rate compatibility to allow adaptation to the three links involved in the relay channel, and furthermore offers very good performance. The proposed code is constructed by synthesizing a bilayer structure with a protograph. In addition to the contribution to relay encoding, an improved family of protograph codes was produced for the point-to-point AWGN (additive white Gaussian noise) channel whose high-rate members enjoy thresholds that are within 0.07 dB of capacity. These LDPC relay codes address three important issues in an integrative manner: low encoding complexity, modular structure allowing for easy design, and rate compatibility so that the code can be easily matched to a variety of channel conditions without extensive re-optimization.

  10. Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets

    Science.gov (United States)

    Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua

    2017-09-01

    In view of the problems that the encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes is high and the minimum distance is not large enough, which leads to degradation of the error-correction performance, new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity-check matrices of these type-II QC-LDPC codes consist of zero matrices with weight 0, circulant permutation matrices (CPMs) with weight 1 and circulant matrices with weight 2 (W2CMs). The introduction of W2CMs in the parity-check matrices makes it possible to achieve a larger minimum distance, which can improve the error-correction performance of the codes. The Tanner graphs of these codes have no girth-4 cycles, thus they have excellent decoding convergence characteristics. In addition, because the parity-check matrices have a quasi-dual-diagonal structure, the fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes can achieve more excellent error-correction performance and have no error floor phenomenon over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.

  11. LDPC Codes with Minimum Distance Proportional to Block Size

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    ... error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. Any size LDPC code can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) to achieve zero error rates as the code block size goes to infinity for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.

  12. LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor

    Science.gov (United States)

    Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram

    2007-09-01

    Implementing the sum-product algorithm in an FPGA with an embedded processor invites us to consider a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that will perform product computations. Our FPGA-based coprocessor design performs computer algebra with significantly less precision than the standard (e.g. integer, floating-point) operations of general-purpose processors. Using synthesis, targeting a 3,168-LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is impacted both positively and negatively by a reduction in the precision of the computation. Reducing precision reduces the coding gain, but the limited-precision computation can operate faster. A proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain. Synthesis results help estimate the full capacity and performance of an FPGA-based coprocessor.

  13. High Girth Column-Weight-Two LDPC Codes Based on Distance Graphs

    Directory of Open Access Journals (Sweden)

    Gabofetswe Malema

    2007-01-01

    LDPC codes with column weight two are constructed from minimal distance graphs, or cages. Distance graphs are used to represent LDPC code matrices such that graph vertices represent rows and edges represent columns. The conversion of a distance graph into matrix form produces an incidence matrix with column weight two and girth double that of the graph. The number of 1's in each row (the row weight) is equal to the degree of the corresponding vertex. By constructing graphs with different vertex degrees, we can vary the rate of the corresponding LDPC code matrices. Cage graphs are used as examples of distance graphs to design codes with different girths and rates. The performance of the obtained codes depends on the girth and structure of the corresponding distance graphs.
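
    As a concrete instance of the vertex-to-row, edge-to-column mapping, the Petersen graph, the (3,5)-cage, produces a small column-weight-two code whose Tanner graph has girth 2 x 5 = 10:

      import numpy as np

      # Petersen graph: 10 vertices of degree 3, girth 5 (the (3,5)-cage)
      edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),    # outer cycle
               (0, 5), (1, 6), (2, 7), (3, 8), (4, 9),    # spokes
               (5, 7), (7, 9), (9, 6), (6, 8), (8, 5)]    # inner pentagram

      # vertices become rows (checks); edges become columns (variables)
      H = np.zeros((10, len(edges)), dtype=np.uint8)
      for col, (u, v) in enumerate(edges):
          H[u, col] = H[v, col] = 1

      print(H.sum(axis=0))   # column weight 2 everywhere
      print(H.sum(axis=1))   # row weight 3 = vertex degree (sets the rate)
      # the code's Tanner graph girth is twice the graph girth: 2 * 5 = 10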

  14. Interleaved Product LDPC Codes

    OpenAIRE

    Baldi, Marco; Cancellieri, Giovanni; Chiaraluce, Franco

    2011-01-01

    Product LDPC codes take advantage of LDPC decoding algorithms and the high minimum distance of product codes. We propose to add suitable interleavers to improve the waterfall performance of LDPC decoding. Interleaving also reduces the number of low-weight codewords, which gives a further advantage in the error floor region.

  15. Instanton-based techniques for analysis and reduction of error floors of LDPC codes

    International Nuclear Information System (INIS)

    Chertkov, Michael; Chilappagari, Shashi K.; Stepanov, Mikhail G.; Vasic, Bane

    2008-01-01

    We describe a family of instanton-based optimization methods developed recently for the analysis of the error floors of low-density parity-check (LDPC) codes. Instantons are the most probable configurations of the channel noise which result in decoding failures. We show that the general idea and the respective optimization technique are applicable broadly to a variety of channels, discrete or continuous, and a variety of sub-optimal decoders. Specifically, we consider: iterative belief propagation (BP) decoders, Gallager-type decoders, and linear programming (LP) decoders performing over the additive white Gaussian noise channel (AWGNC) and the binary symmetric channel (BSC). The instanton analysis suggests that the underlying topological structures of the most probable instantons of the same code under different channels and decoders are related to each other. Armed with this understanding of the graphical structure of the instanton and its relation to decoding failures, we suggest a method to construct codes whose Tanner graphs are free of these structures, and thus have less significant error floors.

  16. Instanton-based techniques for analysis and reduction of error floor of LDPC codes

    Energy Technology Data Exchange (ETDEWEB)

    Chertkov, Michael [Los Alamos National Laboratory]; Chilappagari, Shashi K. [Los Alamos National Laboratory]; Stepanov, Mikhail G. [Los Alamos National Laboratory]; Vasic, Bane

    2008-01-01

    We describe a family of instanton-based optimization methods developed recently for the analysis of the error floors of low-density parity-check (LDPC) codes. Instantons are the most probable configurations of the channel noise which result in decoding failures. We show that the general idea and the respective optimization technique are applicable broadly to a variety of channels, discrete or continuous, and a variety of sub-optimal decoders. Specifically, we consider: iterative belief propagation (BP) decoders, Gallager-type decoders, and linear programming (LP) decoders performing over the additive white Gaussian noise channel (AWGNC) and the binary symmetric channel (BSC). The instanton analysis suggests that the underlying topological structures of the most probable instantons of the same code under different channels and decoders are related to each other. Armed with this understanding of the graphical structure of the instanton and its relation to decoding failures, we suggest a method to construct codes whose Tanner graphs are free of these structures, and thus have less significant error floors.

  17. A novel construction method of QC-LDPC codes based on the subgroup of the finite field multiplicative group for optical transmission systems

    Science.gov (United States)

    Yuan, Jian-guo; Zhou, Guang-xiang; Gao, Wen-chun; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-01-01

    According to the requirements of the increasing development of optical transmission systems, a novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes based on a subgroup of the finite field multiplicative group is proposed. This construction method can effectively avoid girth-4 phenomena and has advantages such as simpler construction, easier implementation, lower encoding/decoding complexity, better girth properties and more flexible adjustment of the code length and code rate. The simulation results show that the error-correction performance of the QC-LDPC(3 780,3 540) code with a code rate of 93.7% constructed by the proposed method is excellent: its net coding gain is respectively 0.3 dB, 0.55 dB, 1.4 dB and 1.98 dB higher than those of the QC-LDPC(5 334,4 962) code constructed by the method based on the inverse element characteristics in the finite field multiplicative group, the SCG-LDPC(3 969,3 720) code constructed by the systematically constructed Gallager (SCG) random construction method, the LDPC(32 640,30 592) code in ITU-T G.975.1 and the classic RS(255,239) code widely used in optical transmission systems in ITU-T G.975, at the bit error rate (BER) of 10^-7. Therefore, the constructed QC-LDPC(3 780,3 540) code is more suitable for optical transmission systems.

  18. A Golay complementary TS-based symbol synchronization scheme in variable rate LDPC-coded MB-OFDM UWBoF system

    Science.gov (United States)

    He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin

    2015-09-01

    In this paper, a Golay complementary training sequence (TS)-based symbol synchronization scheme is proposed and experimentally demonstrated in a multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system with a variable-rate low-density parity-check (LDPC) code. Meanwhile, the coding gain and spectral efficiency in the variable-rate LDPC-coded MB-OFDM UWBoF system are investigated. By utilizing the non-periodic auto-correlation property of the Golay complementary pair, the start point of the LDPC-coded MB-OFDM UWB signal can be estimated accurately. After 100 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1×10^-3, the experimental results show that short-block-length 64QAM-LDPC coding provides a coding gain of 4.5 dB, 3.8 dB and 2.9 dB for code rates of 62.5%, 75% and 87.5%, respectively.
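
    The synchronization metric relies on the defining property of a Golay complementary pair: the aperiodic autocorrelations of the two sequences sum to a single clean peak. A sketch of the standard recursive construction and a check of that property (the actual training-sequence layout from the paper is not reproduced):

      import numpy as np

      def golay_pair(k):
          """Length-2^k Golay pair via the recursion a -> [a b], b -> [a -b]."""
          a = np.array([1.0])
          b = np.array([1.0])
          for _ in range(k):
              a, b = np.concatenate([a, b]), np.concatenate([a, -b])
          return a, b

      def acorr(x):
          """Aperiodic autocorrelation at non-negative lags."""
          n = len(x)
          return np.array([np.dot(x[:n - l], x[l:]) for l in range(n)])

      a, b = golay_pair(5)            # a length-32 pair
      print(acorr(a) + acorr(b))      # [64, 0, 0, ..., 0]: one clean peak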

  19. LDPC-based iterative joint source-channel decoding for JPEG2000.

    Science.gov (United States)

    Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane

    2007-02-01

    A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.

  20. Spatially coupled LDPC coding in cooperative wireless networks

    NARCIS (Netherlands)

    Jayakody, D.N.K.; Skachek, V.; Chen, B.

    2016-01-01

    This paper proposes a novel spatially coupled low-density parity-check (SC-LDPC) code-based soft-forwarding relaying scheme for a two-way relay system. We introduce array-based optimized SC-LDPC codes in relay channels. A more precise model is proposed to characterize the residual ...

  1. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    Science.gov (United States)

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes such as convolutional and spatially-coupled codes can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.

  2. Adaptive transmission based on multi-relay selection and rate-compatible LDPC codes

    Science.gov (United States)

    Su, Hualing; He, Yucheng; Zhou, Lin

    2017-08-01

    In order to adapt to dynamically changing channel conditions and improve the transmission reliability of the system, a cooperative system of rate-compatible low-density parity-check (RC-LDPC) codes combined with a multi-relay selection protocol is proposed. In the traditional relay selection protocol, only the channel state information (CSI) of the source-relay links and the CSI of the relay-destination links are considered. The multi-relay selection protocol proposed in this paper additionally takes the CSI between relays into account in order to obtain more opportunities for collaboration. Additionally, the ideas of hybrid automatic repeat request (HARQ) and rate compatibility are introduced. Simulation results show that the transmission reliability of the system can be significantly improved by the proposed protocol.

  3. CONSTRUCTION OF REGULAR LDPC LIKE CODES BASED ON FULL RANK CODES AND THEIR ITERATIVE DECODING USING A PARITY CHECK TREE

    Directory of Open Access Journals (Sweden)

    H. Prashantha Kumar

    2011-09-01

    Low density parity check (LDPC) codes are capacity-approaching codes, which means that practical constructions exist that allow the noise threshold to be set very close to the theoretical Shannon limit for a memoryless channel. LDPC codes are finding increasing use in applications like LTE networks, digital television, high-density data storage systems, deep space communication systems, etc. Several algebraic and combinatorial methods are available for constructing LDPC codes. In this paper we discuss a novel low-complexity algebraic method for constructing regular LDPC-like codes derived from full rank codes. We demonstrate that by employing these codes over AWGN channels, coding gains in excess of 2 dB over uncoded systems can be realized when soft iterative decoding using a parity check tree is employed.

  4. LDPC Decoding on GPU for Mobile Device

    Directory of Open Access Journals (Sweden)

    Yiqin Lu

    2016-01-01

    A flexible software LDPC decoder that exploits data parallelism for simultaneous decoding of multiple codewords on a mobile device is proposed in this paper, supported by multithreading on OpenCL-based graphics processing units. By dividing the check matrix into several parts to make full use of both the local memory and private memory on the GPU, and by properly modifying the code capacity each time, our implementation on a mobile phone shows throughputs above 100 Mbps with a decoding delay of less than 1.6 milliseconds, which makes high-speed communication like video calling possible. To realize efficient software LDPC decoding on the mobile device, the LDPC decoding feature on the communication baseband chip should be replaced, to save cost and to make it easier to upgrade the decoder to be compatible with a variety of channel access schemes.

  5. A modified non-binary LDPC scheme based on watermark symbols in high speed optical transmission systems

    Science.gov (United States)

    Wang, Liming; Qiao, Yaojun; Yu, Qian; Zhang, Wenbo

    2016-04-01

    We introduce a watermark non-binary low-density parity-check (NB-LDPC) code scheme, which can estimate the time-varying noise variance by using prior information from watermark symbols, to improve the performance of NB-LDPC codes. Compared with the prior-art counterpart, the watermark scheme brings about a 0.25 dB improvement in net coding gain (NCG) at a bit error rate (BER) of 1e-6 and a 36.8-81% reduction in the number of iterations. The proposed scheme thus shows great potential in terms of error-correction performance and decoding efficiency.
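
    The variance-estimation idea generalizes beyond this system: because the watermark symbols are known a priori, the residual between the received samples and the known symbols is pure noise, and its sample variance can feed the decoder's LLRs. A toy BPSK sketch (the paper's setting is non-binary, and all parameters here are invented):

      import numpy as np

      rng = np.random.default_rng(7)
      watermark = rng.choice([-1.0, 1.0], size=256)  # known pilot symbols
      sigma = 0.8                                    # true channel noise
      received = watermark + rng.normal(0.0, sigma, watermark.size)

      # the receiver knows the watermark, so the residual is pure noise
      sigma_est = np.sqrt(np.mean((received - watermark) ** 2))
      print(sigma_est)                # close to 0.8

      # the tracked variance then scales the channel LLRs for the decoder
      llr = 2.0 * received / sigma_est ** 2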

  6. On the performance of 1-level LDPC lattices

    OpenAIRE

    Sadeghi, Mohammad-Reza; Sakzad, Amin

    2013-01-01

    Low-density parity-check (LDPC) lattices perform very well in high dimensions under the generalized min-sum iterative decoding algorithm. In this work we focus on 1-level LDPC lattices. We show that these lattices are the same as lattices constructed based on Construction A and low-density lattice-code (LDLC) lattices. In spite of having slightly lower coding gain, 1-level regular LDPC lattices have remarkable performance. The lower-complexity nature of the decoding algorithm for these type ...

  7. Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs

    Science.gov (United States)

    Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken

    2015-09-01

    To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), and is effective for long retention times. However, EP-LDPC is not as effective for triple-level-cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb errors. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.

  8. Multilevel LDPC Codes Design for Multimedia Communication CDMA System

    Directory of Open Access Journals (Sweden)

    Hou Jia

    2004-01-01

    We design multilevel coding (MLC) with a semi-bit-interleaved coded modulation (BICM) scheme based on low-density parity-check (LDPC) codes. Different from traditional designs, we join the MLC and BICM together by using Gray mapping, which is suitable for transmitting the data over several equivalent channels with different code rates. To perform well at signal-to-noise ratios (SNR) very close to the capacity of the additive white Gaussian noise (AWGN) channel, a random regular LDPC code and a simple semialgebraic LDPC (SA-LDPC) code are discussed in MLC with parallel independent decoding (PID). The numerical results demonstrate that the proposed scheme achieves both power and bandwidth efficiency.

  9. Experimental research and comparison of LDPC and RS channel coding in ultraviolet communication systems.

    Science.gov (United States)

    Wu, Menglong; Han, Dahai; Zhang, Xiang; Zhang, Feng; Zhang, Min; Yue, Guangxin

    2014-03-10

    We have implemented a modified Low-Density Parity-Check (LDPC) codec algorithm in an ultraviolet (UV) communication system. Simulations are conducted with measured parameters to evaluate the LDPC-based UV system performance. Moreover, LDPC (960, 480) and RS (18, 10) are implemented and tested via a non-line-of-sight (NLOS) UV test bed. The experimental results are in agreement with the simulation and suggest that, for the given power and a 10^-3 bit error rate (BER), in comparison with an uncoded system, the average communication distance increases 32% with the RS code and 78% with the LDPC code.

  10. Protograph-Based Raptor-Like Codes

    Science.gov (United States)

    Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.

    2014-01-01

    Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. Analytic and empirical results indicate that in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible punctured turbo (RCPT) codes did not outperform the convolutional codes in the short-blocklength regime. The reason is that convolutional codes with a small number of states can be decoded optimally using the Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, the strength of convolutional codes does not scale with the blocklength for a fixed number of states in the trellis.

  11. Fast QC-LDPC code for free space optical communication

    Science.gov (United States)

    Wang, Jin; Zhang, Qi; Udeh, Chinonso Paschal; Wu, Rangzhong

    2017-02-01

    Free Space Optical (FSO) communication systems use the atmosphere as a propagation medium. Hence the atmospheric turbulence effects lead to multiplicative noise related to the signal intensity. In order to suppress the signal fading induced by multiplicative noise, we propose a fast Quasi-Cyclic (QC) Low-Density Parity-Check (LDPC) code for FSO communication systems. As a linear block code based on a sparse matrix, the performance of QC-LDPC codes is extremely close to the Shannon limit. Current studies on LDPC codes in FSO communications mainly focus on the Gaussian channel and the Rayleigh channel. In this study, the LDPC code is designed over the atmospheric turbulence channel, which is neither a Gaussian channel nor a Rayleigh channel, and is thus closer to the practical situation. Based on the characteristics of the atmospheric channel, which is modeled as a logarithmic-normal distribution and a K-distribution, we design a special QC-LDPC code and deduce the log-likelihood ratio (LLR). An irregular QC-LDPC code for fast coding, with variable rates, is proposed in this paper. The proposed code achieves the excellent performance of LDPC codes and exhibits high efficiency at low rates, stability at high rates, and a small number of iterations. The results of belief propagation (BP) decoding show that the bit error rate (BER) is obviously reduced as the Signal-to-Noise Ratio (SNR) increases. Therefore, LDPC channel coding technology can effectively improve the performance of FSO systems. At the same time, the BER after decoding decreases as the SNR increases, without exhibiting an error floor.

  12. Resource Efficient LDPC Decoders for Multimedia Communication

    OpenAIRE

    Chandrasetty, Vikram Arkalgud; Aziz, Syed Mahfuzul

    2013-01-01

    Achieving high image quality is an important aspect in an increasing number of wireless multimedia applications. These applications require resource-efficient error correction hardware to detect and correct errors introduced by the communication channel. This paper presents an innovative flexible architecture for error correction using Low-Density Parity-Check (LDPC) codes. The proposed partially-parallel decoder architecture utilizes a novel code construction technique based on multi-level H...

  13. DNA Barcoding through Quaternary LDPC Codes.

    Science.gov (United States)

    Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar

    2015-01-01

    For many parallel applications of Next-Generation Sequencing (NGS) technologies, short barcodes able to accurately multiplex a large number of samples are demanded. To address these competing requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine scale with regard to barcode size (BCH) or have intrinsically poor error-correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10^-2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate on the order of 10^-9, at the expense of a read-loss rate just on the order of 10^-6.

  14. DNA Barcoding through Quaternary LDPC Codes.

    Directory of Open Access Journals (Sweden)

    Elizabeth Tapia

    Full Text Available For many parallel applications of Next-Generation Sequencing (NGS) technologies, short barcodes able to accurately multiplex a large number of samples are demanded. To address these competing requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine scale with regard to barcode size (BCH) or have intrinsically poor error-correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10^-2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate on the order of 10^-9, at the expense of a read-loss rate just on the order of 10^-6.

  15. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    Science.gov (United States)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

    In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles of both type I and type II are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines the cooperation gain and the channel coding gain, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than those of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
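
    The girth-4 cancellation described above can be verified with the standard circulant condition: in an exponent matrix, a length-4 cycle exists iff the alternating sum of four corner exponents vanishes modulo the circulant size. A hedged sketch follows; the toy matrix is illustrative, and the paper's joint type-I/type-II construction is not reproduced here:

        import itertools
        import numpy as np

        def has_girth4(E, L):
            # E: exponent matrix (entries mod L, -1 marks an all-zero block).
            # A 4-cycle exists iff E[i1,j1] - E[i1,j2] + E[i2,j2] - E[i2,j1]
            # is 0 (mod L) for some rows i1 != i2 and columns j1 != j2.
            m, n = E.shape
            for i1, i2 in itertools.combinations(range(m), 2):
                for j1, j2 in itertools.combinations(range(n), 2):
                    e = (E[i1, j1], E[i1, j2], E[i2, j2], E[i2, j1])
                    if -1 in e:          # a zero block cannot close a cycle
                        continue
                    if (e[0] - e[1] + e[2] - e[3]) % L == 0:
                        return True
            return False

        E = np.array([[0, 1, 2],
                      [0, 2, 4]])
        print(has_girth4(E, L=7))        # False: girth is at least 6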

  16. New Technique for Improving Performance of LDPC Codes in the Presence of Trapping Sets

    Directory of Open Access Journals (Sweden)

    Mohamed Adnan Landolsi

    2008-06-01

    Full Text Available Trapping sets are considered the primary factor degrading the performance of low-density parity-check (LDPC) codes in the error-floor region. The effect of trapping sets on the performance of an LDPC code becomes worse as the code size decreases. One approach to tackle this problem is to minimize trapping sets during LDPC code design. However, while trapping sets can be reduced, their complete elimination is infeasible due to the presence of cycles in the underlying LDPC code bipartite graph. In this work, we introduce a new technique based on trapping set neutralization to minimize the negative effect of trapping sets under belief propagation (BP) decoding. Simulation results for random, progressive edge growth (PEG) and MacKay LDPC codes demonstrate the effectiveness of the proposed technique. The hardware cost of the proposed technique is also shown to be minimal.

  17. Performance analysis of LDPC codes on OOK terahertz wireless channels

    International Nuclear Information System (INIS)

    Liu Chun; Wang Chang; Cao Jun-Cheng

    2016-01-01

    Atmospheric absorption, scattering, and scintillation are the major causes of degradation in the transmission quality of terahertz (THz) wireless communications. An error control coding scheme based on low density parity check (LDPC) codes with a soft-decision decoding algorithm is proposed to improve the bit-error-rate (BER) performance of an on-off keying (OOK) modulated THz signal through the atmospheric channel. The THz wave propagation characteristics and the atmospheric channel model are set up. Numerical simulations validate the strong performance of LDPC codes against atmospheric fading and demonstrate their potential in future ultra-high-speed (beyond Gb/s) THz communications.

  18. Enhancement of Unequal Error Protection Properties of LDPC Codes

    Directory of Open Access Journals (Sweden)

    Poulliat Charly

    2007-01-01

    Full Text Available It has been widely recognized in the literature that irregular low-density parity-check (LDPC) codes naturally exhibit an unequal error protection (UEP) behavior. In this paper, we propose a general method to emphasize and control the UEP properties of LDPC codes. The method is based on a hierarchical optimization of the bit-node irregularity profile for each sensitivity class within the codeword, maximizing the average bit-node degree while guaranteeing a minimum degree as high as possible. We show that this optimization strategy is efficient, since the codes that we optimize show better UEP capabilities than codes optimized for the additive white Gaussian noise channel.

  19. LDPC coding for QKD at higher photon flux levels based on spatial entanglement of twin beams in PDC

    International Nuclear Information System (INIS)

    Daneshgaran, Fred; Mondin, Marina; Bari, Inam

    2014-01-01

    Twin beams generated by Parametric Down Conversion (PDC) exhibit quantum correlations that have been effectively used as a tool for many applications, including the calibration of single-photon detectors. By now, detection of multi-mode spatial correlations is a mature field and, in principle, depends only on the transmission and detection efficiency of the devices and the channel. In [2, 4, 5], the authors utilized their know-how on almost perfect selection of modes of pairwise correlated entangled beams, and on the optimization of noise reduction to below the shot-noise level, for the absolute calibration of Charge Coupled Device (CCD) cameras. The same basic principle is currently being considered by the same authors for possible use in Quantum Key Distribution (QKD) [3, 1]. The main advantage of such an approach would be the ability to work with much higher photon fluxes than the single-photon regime theoretically required for discrete-variable QKD applications (in practice, very weak laser pulses with a mean photon count below one are used). The natural setup of quantizing the CCD detection area and measuring the correlation statistic needed to detect the presence of the eavesdropper Eve leads to a QKD channel model that is a Discrete Memoryless Channel (DMC) with a number of inputs and outputs that can be more than two (i.e., the channel is a multi-level DMC). This paper investigates the use of Low Density Parity Check (LDPC) codes for information reconciliation on the effective parallel channels associated with the multi-level DMC. The performance of such codes is shown to be close to the theoretical limits.

  20. Mutiple LDPC Decoding using Bitplane Correlation for Transform Domain Wyner-Ziv Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Huang, Xin; Forchhammer, Søren

    2011-01-01

    Distributed video coding (DVC) is an emerging video coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. This paper considers a Low Density Parity Check (LDPC) based Transform Domain Wyner-Ziv (TDWZ) video codec. To improve the LDPC coding performance in the context of TDWZ, this paper proposes a Wyner-Ziv video codec using bitplane correlation through multiple parallel LDPC decoding. The proposed scheme utilizes inter-bitplane correlation to enhance the bitplane decoding performance. Experimental results...

  1. FPGA implementation of high-performance QC-LDPC decoder for optical communications

    Science.gov (United States)

    Zou, Ding; Djordjevic, Ivan B.

    2015-01-01

    Forward error correction is one of the key technologies enabling next-generation high-speed fiber optical communications. Quasi-cyclic (QC) low-density parity-check (LDPC) codes have been considered one of the promising candidates due to their large coding gain and low implementation complexity. In this paper, we present our designed QC-LDPC code with girth 10 and 25% overhead based on pairwise balanced design. By FPGA-based emulation, we demonstrate that the 5-bit soft-decision LDPC decoder can achieve an 11.8 dB net coding gain with no error floor at a BER of 10^-15, without using any outer code or post-processing method. We believe that the proposed single QC-LDPC code is a promising solution for 400 Gb/s optical communication systems and beyond.

  2. Optical LDPC decoders for beyond 100 Gbits/s optical transmission.

    Science.gov (United States)

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2009-05-01

    We present an optical low-density parity-check (LDPC) decoder suitable for implementation above 100 Gbits/s, which provides large coding gains when based on large-girth LDPC codes. We show that a basic building block, the probabilities multiplier circuit, can be implemented using a Mach-Zehnder interferometer, and we propose a corresponding probabilistic-domain sum-product algorithm (SPA). We perform simulations of a fully parallel implementation employing girth-10 LDPC codes and the proposed SPA. The girth-10 LDPC(24015,19212) code of rate 0.8 outperforms the BCH(128,113)×BCH(256,239) turbo-product code of rate 0.82 by 0.91 dB (for binary phase-shift keying at 100 Gbits/s and a bit error rate of 10^-9), and provides a net effective coding gain of 10.09 dB.

  3. LDPC-coded orbital angular momentum (OAM) modulation for free-space optical communication.

    Science.gov (United States)

    Djordjevic, Ivan B; Arabaci, Murat

    2010-11-22

    An orbital angular momentum (OAM) based LDPC-coded modulation scheme suitable for use in FSO communication is proposed. We demonstrate that the proposed scheme can operate in the strong atmospheric turbulence regime and enable 100 Gb/s optical transmission while employing 10 Gb/s components. Both binary and nonbinary LDPC-coded OAM modulations are studied. In addition to providing better BER performance, the nonbinary LDPC-coded modulation reduces overall decoder complexity and latency. The nonbinary LDPC-coded OAM modulation provides a net coding gain of 9.3 dB at a BER of 10^-8. The maximum-ratio combining scheme outperforms the corresponding equal-gain combining scheme by almost 2.5 dB.

  4. Polarization-multiplexed rate-adaptive non-binary-quasi-cyclic-LDPC-coded multilevel modulation with coherent detection for optical transport networks.

    Science.gov (United States)

    Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M

    2010-02-01

    In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose using rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency due to symbol-level instead of bit-level processing, but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper shows, the proposed NB-LDPC-CM scheme thus better addresses the needs of future OTNs: achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN.

  5. Analysis of Non-binary Hybrid LDPC Codes

    OpenAIRE

    Sassatelli, Lucile; Declercq, David

    2008-01-01

    In this paper, we analyse asymptotically a new class of LDPC codes called Non-binary Hybrid LDPC codes, which has been recently introduced. We use density evolution techniques to derive a stability condition for hybrid LDPC codes, and prove their threshold behavior. We study this stability condition to draw conclusions on the asymptotic advantages of hybrid LDPC codes compared to their non-hybrid counterparts.

  6. Improved Design of Unequal Error Protection LDPC Codes

    Directory of Open Access Journals (Sweden)

    Sandberg Sara

    2010-01-01

    Full Text Available We propose an improved method for designing unequal error protection (UEP) low-density parity-check (LDPC) codes. The method is based on density evolution. The degree distribution with the best UEP properties is found under the constraint that the threshold should not exceed the threshold of a non-UEP code plus some threshold offset. For different codeword lengths and different construction algorithms, we search for good threshold offsets for the UEP code design. The choice of the threshold offset is based on the average a posteriori variable-node mutual information. Simulations reveal the counterintuitive result that the short-to-medium length codes designed with a suitable threshold offset all outperform the corresponding non-UEP codes in terms of average bit-error rate. The proposed codes are also compared to other UEP-LDPC codes found in the literature.

  7. Decoding LDPC Convolutional Codes on Markov Channels

    Directory of Open Access Journals (Sweden)

    Kashyap Manohar

    2008-01-01

    Full Text Available This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.

  8. Decoding LDPC Convolutional Codes on Markov Channels

    Directory of Open Access Journals (Sweden)

    Chris Winstead

    2008-04-01

    Full Text Available This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.
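
    For readers unfamiliar with the channel model used in both versions of this work, here is a minimal simulation of the Gilbert-Elliott channel; the transition and error probabilities are illustrative placeholders, not the paper's design values:

        import numpy as np

        rng = np.random.default_rng(42)

        def gilbert_elliott(n, p_gb=0.01, p_bg=0.1, e_good=1e-4, e_bad=0.1):
            # Two-state Markov channel: state 0 = good, 1 = bad.
            # p_gb / p_bg are the good->bad and bad->good transition
            # probabilities; e_good / e_bad are the per-bit error rates.
            state = np.empty(n, dtype=int)
            s = 0
            for t in range(n):
                state[t] = s
                s = int(rng.random() < p_gb) if s == 0 else int(rng.random() >= p_bg)
            errors = rng.random(n) < np.where(state == 0, e_good, e_bad)
            return errors, state

        errors, state = gilbert_elliott(100_000)
        print("overall BER:", errors.mean(), "time in bad state:", state.mean())

    A joint decoder treats the state sequence as hidden and exchanges state beliefs with the LDPC message-passing iterations.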

  9. Improving a Power Line Communications Standard with LDPC Codes

    Directory of Open Access Journals (Sweden)

    Hsu Christine

    2007-01-01

    Full Text Available We investigate a power line communications (PLC) scheme that could be used to enhance the HomePlug 1.0 standard, specifically its ROBO mode, which provides modest throughput for the worst-case PLC channel. The scheme is based on using a low-density parity-check (LDPC) code in lieu of the concatenated Reed-Solomon and convolutional codes in ROBO mode. The PLC channel is modeled with multipath fading and Middleton's class A noise. Clipping is introduced to mitigate the effect of impulsive noise. A simple and effective method is devised to estimate the variance of the clipped noise for LDPC decoding. Simulation results show that the proposed scheme outperforms the HomePlug 1.0 ROBO mode and has lower computational complexity. The proposed scheme also dispenses with the repetition of information bits in ROBO mode to gain time diversity, resulting in a 4-fold increase in physical-layer throughput.

  10. LDPC codes

    OpenAIRE

    Hrouza, Ondřej

    2012-01-01

    The thesis deals with LDPC codes. It describes methods for constructing the parity-check matrix, with emphasis on the structured construction of this matrix using finite geometries: Euclidean geometry and projective geometry. Another area covered is the decoding of LDPC codes. The thesis compares four decoding methods: the Hard-Decision algorithm, the Bit-Flipping algorithm, the Sum-Product algorithm, and the Log-Likelihood algorithm, with emphasis placed on iterative...

  11. Discussion on LDPC Codes and Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error-correcting codes for near-Earth, Lunar, and deep-space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts showing the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart comparing the performance of several frame synchronizer algorithms with that of some good codes, and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the pseudo-randomizer with LDPC decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  12. LDPC-PPM Coding Scheme for Optical Communication

    Science.gov (United States)

    Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael

    2009-01-01

    In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (â) of the coded data, which is then processed by a decoder to obtain an estimate (û) of the original data.

  13. Peeling Decoding of LDPC Codes with Applications in Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Weijun Zeng

    2016-01-01

    Full Text Available We present a new approach for the analysis of iterative peeling decoding recovery algorithms in the context of Low-Density Parity-Check (LDPC) codes and compressed sensing. The iterative recovery algorithm is particularly interesting for its low measurement cost and low computational complexity. The asymptotic analysis can track the evolution of the fraction of unrecovered signal elements in each iteration, which is similar to the well-known density evolution analysis in the context of LDPC decoding algorithms. Our analysis shows that there exists a threshold on the density factor: if the density factor is below this threshold, the recovery algorithm succeeds; otherwise it fails. Simulation results are also provided to verify the agreement between the proposed asymptotic analysis and the recovery algorithm. Compared with existing work on peeling decoding algorithms, which focuses on the failure probability of the recovery algorithm, our proposed approach gives an accurate evolution of performance for different parameters of the measurement matrices and is easy to implement. We also show that the peeling decoding algorithm performs better than other schemes based on LDPC codes.
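
    The peeling decoder itself is simple enough to state in a few lines. Below is a sketch for the erasure-channel case (the compressed-sensing variant analyzed in the paper replaces parities with measurements); the (7,4) Hamming matrix is only a toy stand-in for a sparse LDPC matrix:

        import numpy as np

        def peeling_decode(H, y):
            # H: (m, n) 0/1 parity-check matrix; y: received word, -1 = erasure.
            # While some check has exactly one erased neighbour, the parity of
            # its known neighbours determines that bit.
            y = y.copy()
            changed = True
            while changed and (y == -1).any():
                changed = False
                for row in H:
                    erased = np.flatnonzero((row == 1) & (y == -1))
                    if erased.size == 1:
                        known = (row == 1) & (y != -1)
                        y[erased[0]] = y[known].sum() % 2
                        changed = True
            return y   # any remaining -1 lies in a stopping set

        H = np.array([[1, 1, 0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0, 0, 1]])
        word = np.array([1, 0, 1, 0, 1, 0, 1])     # a valid codeword of H
        y = word.copy(); y[[0, 4]] = -1             # two erasures
        print(peeling_decode(H, y))                 # recovers the codeword

    The density-evolution-style analysis in the paper tracks the fraction of erased entries surviving each sweep.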

  14. An Area-Efficient Reconfigurable LDPC Decoder with Conflict Resolution

    Science.gov (United States)

    Zhou, Changsheng; Huang, Yuebin; Huang, Shuangqu; Chen, Yun; Zeng, Xiaoyang

    Based on the Turbo-Decoding Message-Passing (TDMP) and Normalized Min-Sum (NMS) algorithms, an area-efficient LDPC decoder that supports both structured and unstructured LDPC codes is proposed in this paper. We introduce a solution to the memory access conflict problem caused by the TDMP algorithm, and arrange the main timing schedule carefully to handle the operations of our solution while avoiding much additional hardware consumption. To reduce the memory bits needed, the extrinsic message storing strategy is optimized, and the extrinsic message recovery and accumulate operations are merged. To verify our architecture, an LDPC decoder that supports both the China Multimedia Mobile Broadcasting (CMMB) and Digital Terrestrial/Television Multimedia Broadcasting (DTMB) standards was developed using a SMIC 0.13 µm standard CMOS process. The core area is 4.75 mm^2 and the maximum operating clock frequency is 200 MHz. The estimated power consumption is 48.4 mW at 25 MHz for CMMB and 130.9 mW at 50 MHz for DTMB, with 5 iterations and a 1.2 V supply.

  15. Power Allocation Optimization: Linear Precoding Adapted to NB-LDPC Coded MIMO Transmission

    Directory of Open Access Journals (Sweden)

    Tarek Chehade

    2015-01-01

    Full Text Available In multiple-input multiple-output (MIMO) transmission systems, the channel state information (CSI) at the transmitter can be used to add linear precoding to the transmitted signals in order to improve the performance and reliability of the transmission system. This paper investigates how to properly combine precoded closed-loop MIMO systems and nonbinary low-density parity-check (NB-LDPC) codes. The q elements in the Galois field GF(q) are directly mapped to q transmit symbol vectors. This allows NB-LDPC codes to fit perfectly with a MIMO precoding scheme, unlike binary LDPC codes. The new transmission model is detailed and studied for several linear precoders and various designed LDPC codes. We show that NB-LDPC codes are particularly well suited to be used jointly with precoding schemes based on the maximization of the minimum Euclidean distance (max-dmin) criterion. These results are theoretically supported by extrinsic information transfer (EXIT) analysis and are confirmed by numerical simulations.

  16. Product code optimization for determinate state LDPC decoding in robust image transmission.

    Science.gov (United States)

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2006-08-01

    We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.

  17. PMD compensation in fiber-optic communication systems with direct detection using LDPC-coded OFDM.

    Science.gov (United States)

    Djordjevic, Ivan B

    2007-04-02

    The possibility of polarization-mode dispersion (PMD) compensation in fiber-optic communication systems with direct detection using a simple channel estimation technique and low-density parity-check (LDPC)-coded orthogonal frequency division multiplexing (OFDM) is demonstrated. It is shown that even for differential group delay (DGD) of 4/BW (BW is the OFDM signal bandwidth), the degradation due to the first-order PMD can be completely compensated for. Two classes of LDPC codes designed based on two different combinatorial objects (difference systems and product of combinatorial designs) suitable for use in PMD compensation are introduced.

  18. Simultaneous chromatic dispersion and PMD compensation by using coded-OFDM and girth-10 LDPC codes.

    Science.gov (United States)

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2008-07-07

    Low-density parity-check (LDPC)-coded orthogonal frequency division multiplexing (OFDM) is studied as an efficient coded modulation scheme suitable for simultaneous chromatic dispersion and polarization mode dispersion (PMD) compensation. We show that, for aggregate rate of 10 Gb/s, accumulated dispersion over 6500 km of SMF and differential group delay of 100 ps can be simultaneously compensated with penalty within 1.5 dB (with respect to the back-to-back configuration) when training sequence based channel estimation and girth-10 LDPC codes of rate 0.8 are employed.

  19. Construction of LDPC codes over GF(q) with modified progressive edge growth

    Institute of Scientific and Technical Information of China (English)

    CHEN Xin; MEN Ai-dong; YANG Bo; QUAN Zi-yi

    2009-01-01

    A parity-check matrix construction method for low-density parity-check (LDPC) codes over GF(q) (q>2), based on the modified progressive edge growth (PEG) algorithm, is introduced. First, the nonzero locations of the parity-check matrix are selected using the PEG algorithm. Then the nonzero elements are defined while avoiding the definition of a subcode. A proof is given to show the good minimum-distance property of the constructed GF(q)-LDPC codes. Simulations are also presented to illustrate the good error performance of the designed codes.
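
    As a rough illustration of the (binary) PEG step this construction starts from, the sketch below grows a Tanner graph edge by edge, connecting each new edge to a most-distant, lowest-degree check node. The GF(q) extension in the paper additionally assigns nonzero field elements to the chosen positions, which is omitted here; sizes and degrees are illustrative:

        import numpy as np
        from collections import deque

        def check_distances(v, var_nb, chk_nb, n_checks):
            # BFS distances (in edges) from variable v to every check node.
            dist = [float("inf")] * n_checks
            dq = deque()
            for c in var_nb[v]:
                dist[c] = 1
                dq.append(c)
            while dq:
                c = dq.popleft()
                for u in chk_nb[c]:
                    for c2 in var_nb[u]:
                        if dist[c2] == float("inf"):
                            dist[c2] = dist[c] + 2
                            dq.append(c2)
            return dist

        def peg(n_vars, n_checks, degrees, seed=0):
            rng = np.random.default_rng(seed)
            var_nb = [set() for _ in range(n_vars)]
            chk_nb = [set() for _ in range(n_checks)]
            for v in range(n_vars):
                for _ in range(degrees[v]):
                    dist = check_distances(v, var_nb, chk_nb, n_checks)
                    far = max(dist)          # unreached checks have inf distance
                    cands = [c for c in range(n_checks)
                             if dist[c] == far and c not in var_nb[v]]
                    c = min(cands, key=lambda cc: (len(chk_nb[cc]), rng.random()))
                    var_nb[v].add(c)
                    chk_nb[c].add(v)
            H = np.zeros((n_checks, n_vars), dtype=int)
            for c in range(n_checks):
                H[c, list(chk_nb[c])] = 1
            return H

        H = peg(n_vars=12, n_checks=6, degrees=[2] * 12)
        print(H.sum(axis=0), H.sum(axis=1))    # variable and check degrees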

  20. Bounded-Angle Iterative Decoding of LDPC Codes

    Science.gov (United States)

    Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2009-01-01

    Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).

  1. The serial message-passing schedule for LDPC decoding algorithms

    Science.gov (United States)

    Liu, Mingshan; Liu, Shanshan; Zhou, Yuan; Jiang, Xue

    2015-12-01

    The conventional message-passing schedule for LDPC decoding algorithms is the so-called flooding schedule. It has the disadvantage that updated messages cannot be used until the next iteration, which slows convergence. The Layered Belief Propagation (LBP) algorithm, based on a serial message-passing schedule, addresses this. In this paper, the decoding principle of the LBP algorithm is briefly introduced, and two improved algorithms are proposed: the grouped serial decoding algorithm (Grouped LBP) and the semi-serial decoding algorithm. They improve the decoding speed of LBP while maintaining good decoding performance.
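
    A compact sketch contrasting the serial schedule with flooding: posterior LLRs are updated row by row, so later layers within the same iteration already see refreshed messages. Normalized min-sum is assumed for the check update; the normalization factor and the toy code are illustrative, not from the paper:

        import numpy as np

        def layered_min_sum(H, llr, iters=20, alpha=0.8):
            m, n = H.shape
            rows = [np.flatnonzero(H[i]) for i in range(m)]
            msg = np.zeros((m, n))                  # check-to-variable messages
            post = llr.astype(float).copy()         # posterior LLRs
            for _ in range(iters):
                for i in range(m):                  # one row = one layer
                    cols = rows[i]
                    t = post[cols] - msg[i, cols]   # variable-to-check
                    sgn = np.prod(np.sign(t)) * np.sign(t)  # sign of the others
                    mag = np.abs(t)
                    order = np.argsort(mag)
                    m1, m2 = mag[order[0]], mag[order[1]]
                    other_min = np.where(np.arange(cols.size) == order[0], m2, m1)
                    new = alpha * sgn * other_min
                    post[cols] = t + new            # immediate update
                    msg[i, cols] = new
                hard = (post < 0).astype(int)
                if not ((H @ hard) % 2).any():      # valid codeword: stop early
                    break
            return hard

        H = np.array([[1, 1, 0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0, 0, 1]])
        llr = np.array([2.0, 3.1, -0.4, 2.5, 1.9, 2.2, 0.7])  # bit 2 unreliable
        print(layered_min_sum(H, llr))   # converges to the all-zero codeword

    The grouped and semi-serial variants of the paper sit between this row-at-a-time loop and the fully parallel flooding schedule.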

  2. Non-binary Hybrid LDPC Codes: Structure, Decoding and Optimization

    OpenAIRE

    Sassatelli, Lucile; Declercq, David

    2007-01-01

    In this paper, we propose to study and optimize a very general class of LDPC codes whose variable nodes belong to finite sets of different orders. We name this class of codes Hybrid LDPC codes. Although efficient optimization techniques exist for binary LDPC codes and, more recently, for non-binary LDPC codes, both exhibit drawbacks, for different reasons. Our goal is to capitalize on the advantages of both families by building codes with binary (or small finite set order) and non-bin...

  3. Performance analysis of LDPC codes on OOK terahertz wireless channels

    Science.gov (United States)

    Chun, Liu; Chang, Wang; Jun-Cheng, Cao

    2016-02-01

    Atmospheric absorption, scattering, and scintillation are the major causes of degradation in the transmission quality of terahertz (THz) wireless communications. An error control coding scheme based on low density parity check (LDPC) codes with a soft-decision decoding algorithm is proposed to improve the bit-error-rate (BER) performance of an on-off keying (OOK) modulated THz signal through the atmospheric channel. The THz wave propagation characteristics and the atmospheric channel model are set up. Numerical simulations validate the strong performance of LDPC codes against atmospheric fading and demonstrate their potential in future ultra-high-speed (beyond Gb/s) THz communications. Project supported by the National Key Basic Research Program of China (Grant No. 2014CB339803), the National High Technology Research and Development Program of China (Grant No. 2011AA010205), the National Natural Science Foundation of China (Grant Nos. 61131006, 61321492, and 61204135), the Major National Development Project of Scientific Instrument and Equipment (Grant No. 2011YQ150021), the National Science and Technology Major Project (Grant No. 2011ZX02707), the International Collaboration and Innovation Program on High Mobility Materials Engineering of the Chinese Academy of Sciences, and the Shanghai Municipal Commission of Science and Technology (Grant No. 14530711300).

  4. Codeword Structure Analysis for LDPC Convolutional Codes

    Directory of Open Access Journals (Sweden)

    Hua Zhou

    2015-12-01

    Full Text Available The codewords of a low-density parity-check (LDPC) convolutional code (LDPC-CC) are characterised as structured or non-structured. The number of structured codewords is dominated by the size of the polynomial syndrome former matrix H^T(D), while the number of non-structured ones depends on the particular monomials or polynomials in H^T(D). By evaluating the relationship of the codewords between the mother code and its super codes, the low-weight non-structured codewords in the super codes can be eliminated by appropriately choosing the monomials or polynomials in H^T(D), resulting in an improved distance spectrum for the mother code.

  5. Experimental demonstration of the transmission performance for LDPC-coded multiband OFDM ultra-wideband over fiber system

    Science.gov (United States)

    He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin; Su, Jinshu

    2015-01-01

    To improve the transmission performance of multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband (UWB) over optical fiber, a pre-coding scheme based on low-density parity-check (LDPC) codes is adopted and experimentally demonstrated in an intensity-modulation, direct-detection MB-OFDM UWB over fiber system. Meanwhile, a symbol synchronization and pilot-aided channel estimation scheme is implemented at the receiver. The experimental results show that the LDPC pre-coding scheme works effectively in the MB-OFDM UWB over fiber system. After 70 km of standard single-mode fiber (SSMF) transmission, at a bit error rate of 1 × 10^-3, the receiver sensitivity is improved by about 4 dB when the LDPC code rate is 75%.

  6. Pilotless Frame Synchronization Using LDPC Code Constraints

    Science.gov (United States)

    Jones, Christopher; Vissasenor, John

    2009-01-01

    A method of pilotless frame synchronization has been devised for low-density parity-check (LDPC) codes. In pilotless frame synchronization, there are no pilot symbols; instead, the offset is estimated by exploiting selected aspects of the structure of the code. The advantage of pilotless frame synchronization is that the bandwidth of the signal is reduced by an amount associated with elimination of the pilot symbols. The disadvantage is an increase in the amount of receiver data processing needed for frame synchronization.

  7. Memory-efficient decoding of LDPC codes

    Science.gov (United States)

    Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon

    2005-01-01

    We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, simulations show that an optimized 3-bit quantizer operates with a 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer operates with less than 0.1 dB quantization loss.
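
    The mutual-information criterion is easy to reproduce in miniature: fix BPSK over AWGN, quantize the LLRs, and pick the quantizer that maximizes the empirical I(X;Q). The sketch below only searches the step of a uniform 7-level (roughly 3-bit) quantizer, whereas the paper optimizes non-uniform thresholds; all values are illustrative:

        import numpy as np

        rng = np.random.default_rng(7)

        def mutual_info(x, q):
            # Empirical mutual information I(X;Q) in bits, for binary x.
            mi = 0.0
            for qv in np.unique(q):
                for xv in (0, 1):
                    p_xq = np.mean((q == qv) & (x == xv))
                    if p_xq > 0:
                        mi += p_xq * np.log2(
                            p_xq / (np.mean(q == qv) * np.mean(x == xv)))
            return mi

        sigma = 0.9
        x = rng.integers(0, 2, 200_000)
        y = (1 - 2 * x) + rng.normal(0, sigma, x.size)   # BPSK over AWGN
        llr = 2 * y / sigma**2
        for step in (0.5, 1.0, 2.0, 4.0):
            q = np.clip(np.round(llr / step), -3, 3)     # 7 quantizer levels
            print("step", step, "-> I(X;Q) =", round(mutual_info(x, q), 4))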

  8. Structured LDPC Codes over Integer Residue Rings

    Directory of Open Access Journals (Sweden)

    Marc A. Armand

    2008-07-01

    Full Text Available This paper presents a new class of low-density parity-check (LDPC) codes over ℤ_{2^a} represented by regular, structured Tanner graphs. These graphs are constructed using Latin squares defined over a multiplicative group of a Galois ring, rather than a finite field. Our approach yields codes for a wide range of code rates and, more importantly, codes whose minimum pseudocodeword weights equal their minimum Hamming distances. Simulation studies show that these structured codes, when transmitted using matched signal sets over an additive white Gaussian noise channel, can outperform their random counterparts of similar length and rate.

  9. Structured LDPC Codes over Integer Residue Rings

    Directory of Open Access Journals (Sweden)

    Mo Elisa

    2008-01-01

    Full Text Available This paper presents a new class of low-density parity-check (LDPC) codes over ℤ_{2^a} represented by regular, structured Tanner graphs. These graphs are constructed using Latin squares defined over a multiplicative group of a Galois ring, rather than a finite field. Our approach yields codes for a wide range of code rates and, more importantly, codes whose minimum pseudocodeword weights equal their minimum Hamming distances. Simulation studies show that these structured codes, when transmitted using matched signal sets over an additive white Gaussian noise channel, can outperform their random counterparts of similar length and rate.

  10. Low Complexity Encoder of High Rate Irregular QC-LDPC Codes for Partial Response Channels

    Directory of Open Access Journals (Sweden)

    IMTAWIL, V.

    2011-11-01

    Full Text Available High-rate irregular QC-LDPC codes based on circulant permutation matrices, allowing efficient encoder implementation, are proposed in this article. The structure of the code is an approximate lower triangular matrix. In addition, we present two novel efficient encoding techniques for generating the redundant bits. The complexity of the encoder implementation depends on the number of parity bits of the code for the one-stage encoding, and on the length of the code for the two-stage encoding. The advantage of both encoding techniques is that few XOR gates are used in the encoder implementation. Simulation results on partial response channels also show that the BER performance of the proposed code has a gain over other QC-LDPC codes.
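
    The benefit of a (approximately) lower triangular structure is that the parity bits follow by forward substitution over GF(2), so the encoder reduces to XOR chains. A minimal sketch with an exactly lower triangular parity part; matrix shapes and entries are illustrative, not the paper's construction:

        import numpy as np

        def encode_lt(A, T, s):
            # H = [A | T] over GF(2), T lower triangular with unit diagonal.
            # Solve A s + T p = 0 for the parity bits p by forward substitution.
            m = T.shape[0]
            b = (A @ s) % 2
            p = np.zeros(m, dtype=int)
            for i in range(m):
                p[i] = (b[i] + T[i, :i] @ p[:i]) % 2
            return np.concatenate([s, p])

        A = np.array([[1, 0, 1],
                      [1, 1, 0],
                      [0, 1, 1]])
        T = np.array([[1, 0, 0],
                      [1, 1, 0],
                      [0, 1, 1]])
        s = np.array([1, 0, 1])
        c = encode_lt(A, T, s)
        H = np.hstack([A, T])
        print(c, (H @ c) % 2)      # codeword and its all-zero syndrome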

  11. On Analyzing LDPC Codes over Multiantenna MC-CDMA System

    Directory of Open Access Journals (Sweden)

    S. Suresh Kumar

    2014-01-01

    Full Text Available The multiantenna multicarrier code-division multiple access (MC-CDMA) technique has been attracting much attention for the design of future broadband wireless systems. In addition, the low-density parity-check (LDPC) code, a promising near-optimal error correction code, is also being widely considered in next-generation communication systems. In this paper, we propose a simple method to construct a regular quasi-cyclic low-density parity-check (QC-LDPC) code to improve the transmission performance over a precoded MC-CDMA system with limited feedback. Simulation results show that the coding gain of the proposed QC-LDPC codes is larger than that of Reed-Solomon codes, and that the performance of the multiantenna MC-CDMA system can be greatly improved by these QC-LDPC codes when the data rate is high.

  12. LDPC coded OFDM over the atmospheric turbulence channel.

    Science.gov (United States)

    Djordjevic, Ivan B; Vasic, Bane; Neifeld, Mark A

    2007-05-14

    Low-density parity-check (LDPC) coded optical orthogonal frequency division multiplexing (OFDM) is shown to significantly outperform LDPC-coded on-off keying (OOK) over the atmospheric turbulence channel in terms of both coding gain and spectral efficiency. In the regime of strong turbulence, at a bit-error rate of 10^-5, the coding gain improvement of the LDPC-coded single-sideband unclipped-OFDM system with 64 subcarriers is larger than the coding gain of the LDPC-coded OOK system by 20.2 dB for quadrature phase-shift keying (QPSK) and by 23.4 dB for binary phase-shift keying (BPSK).

  13. Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.

    Science.gov (United States)

    Liu, Tao; Lin, Changyu; Djordjevic, Ivan B

    2016-06-27

    In this paper, we first describe a 9-symbol non-uniform signaling scheme based on a Huffman code, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme under the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization-multiplexed 9-QAM signaling scheme has the same spectral efficiency as conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC-coded 9-QAM scheme outperforms nonbinary LDPC-coded uniform 8-QAM by at least 0.8 dB.
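
    Why the non-uniform 9-QAM matches 8-QAM's spectral efficiency can be seen from the dyadic probabilities a prefix code induces: a symbol whose Huffman codeword has length l is sent with probability 2^-l, and the entropy then equals the mean codeword length. A worked toy example follows; the length assignment satisfies Kraft's inequality with equality but is assumed, not taken from the paper:

        import numpy as np

        # One length-2 codeword, four length-3, four length-4 (9 symbols).
        lengths = np.array([2, 3, 3, 3, 3, 4, 4, 4, 4])
        p = 2.0 ** -lengths
        assert np.isclose(p.sum(), 1.0)       # Kraft sum = 1: valid prefix code

        entropy = -(p * np.log2(p)).sum()     # equals sum(p * lengths) here
        print("symbol probabilities:", p)
        print("bits per 9-ary symbol:", entropy)   # 3.0, as for uniform 8-QAM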

  14. Weight Distribution for Non-binary Cluster LDPC Code Ensemble

    Science.gov (United States)

    Nozaki, Takayuki; Maehara, Masaki; Kasai, Kenta; Sakaniwa, Kohichi

    In this paper, we derive the average weight distributions for the irregular non-binary cluster low-density parity-check (LDPC) code ensembles. Moreover, we give the exponential growth rate of the average weight distribution in the limit of large code length. We show that there exist $(2,d_c)$-regular non-binary cluster LDPC code ensembles whose normalized typical minimum distances are strictly positive.

  15. Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes

    Directory of Open Access Journals (Sweden)

    Rovini Massimo

    2009-01-01

    Full Text Available The layered decoding algorithm has recently been proposed as an efficient means for decoding low-density parity-check (LDPC) codes, thanks to the remarkable improvement (2×) in the convergence speed of the decoding process. However, pipelined semi-parallel decoders suffer from violations or "hazards" between consecutive updates, which not only violate the layered principle but also reinforce the loops in the code, thus spoiling the error correction performance. This paper describes three different techniques to properly reschedule the decoding updates, based on the careful insertion of "idle" cycles, to prevent the hazards of the pipeline mechanism. Also, different semi-parallel architectures of a layered LDPC decoder suitable for use with such techniques are analyzed. Then, taking the LDPC codes for the wireless local area network (IEEE 802.11n) as a case study, a detailed analysis of the performance attained with the proposed techniques and architectures is reported, and results of logic synthesis on a 65 nm low-power CMOS technology are shown.

  16. Construction and Iterative Decoding of LDPC Codes Over Rings for Phase-Noisy Channels

    Directory of Open Access Journals (Sweden)

    William G. Cowley

    2008-04-01

    Full Text Available This paper presents the construction and iterative decoding of low-density parity-check (LDPC) codes for channels affected by phase noise. The LDPC code is based on integer rings and designed to converge under phase-noisy channels. We assume that phase variations are small over short blocks of adjacent symbols. Part of the constructed code is inherently built with this knowledge and is hence able to withstand phase rotations of 2π/M radians, where M is the number of phase symmetries in the signal set, occurring at different observation intervals. Another part of the code estimates the phase ambiguity present in every observation interval. The code makes use of simple blind or turbo phase estimators to provide phase estimates over every observation interval. We propose an iterative decoding schedule to apply the sum-product algorithm (SPA) on the factor graph of the code for its convergence. To illustrate the new method, we present the performance results of an LDPC code constructed over ℤ4 with quadrature phase-shift keying (QPSK) modulated signals transmitted over a static channel but affected by phase noise, which is modeled by the Wiener (random-walk) process. The results show that the code can withstand phase noise of 2° standard deviation per symbol with small loss.

  17. Construction and Iterative Decoding of LDPC Codes Over Rings for Phase-Noisy Channels

    Directory of Open Access Journals (Sweden)

    Karuppasami Sridhar

    2008-01-01

    Full Text Available This paper presents the construction and iterative decoding of low-density parity-check (LDPC) codes for channels affected by phase noise. The LDPC code is based on integer rings and designed to converge under phase-noisy channels. We assume that phase variations are small over short blocks of adjacent symbols. Part of the constructed code is inherently built with this knowledge and is hence able to withstand phase rotations of 2π/M radians, where M is the number of phase symmetries in the signal set, occurring at different observation intervals. Another part of the code estimates the phase ambiguity present in every observation interval. The code makes use of simple blind or turbo phase estimators to provide phase estimates over every observation interval. We propose an iterative decoding schedule to apply the sum-product algorithm (SPA) on the factor graph of the code for its convergence. To illustrate the new method, we present the performance results of an LDPC code constructed over ℤ4 with quadrature phase-shift keying (QPSK) modulated signals transmitted over a static channel but affected by phase noise, which is modeled by the Wiener (random-walk) process. The results show that the code can withstand phase noise of 2° standard deviation per symbol with small loss.

  18. FPGA implementation of low complexity LDPC iterative decoder

    Science.gov (United States)

    Verma, Shivani; Sharma, Sanjay

    2016-07-01

    Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes which can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained importance due to their capacity-achieving property and excellent performance in noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message-passing algorithm and a partially parallel decoder architecture. The simplified message-passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance; it greatly reduces the routing and check-node complexity of the decoder. The partially parallel decoder architecture offers high speed and reduced complexity. The improved design of the decoder achieves a maximum symbol throughput of 92.95 Mbps with a maximum of 18 decoding iterations. The article presents the implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on a Xilinx XC3SD3400A device from the Spartan-3A DSP family.

  19. Bilayer expurgated LDPC codes with uncoded relaying

    Directory of Open Access Journals (Sweden)

    Md. Noor-A-Rahim

    2017-08-01

    Full Text Available Bilayer low-density parity-check (LDPC) codes are an effective coding technique for decode-and-forward relaying, where the relay forwards extra parity bits to help the destination decode the source bits correctly. In the existing bilayer coding scheme, these parity bits are protected by an error-correcting code and assumed to be reliably available at the receiver. We propose an uncoded relaying scheme, where the extra parity bits are forwarded to the destination without any protection. Through density evolution analysis and simulation results, we show that our proposed scheme achieves better performance in terms of bit erasure probability than the existing relaying scheme. In addition, our proposed scheme results in lower complexity at the relay.

  20. Experimental demonstration of nonbinary LDPC convolutional codes for DP-64QAM/256QAM

    NARCIS (Netherlands)

    Koike-Akino, T.; Sugihara, K.; Millar, D.S.; Pajovic, M.; Matsumoto, W.; Alvarado, A.; Maher, R.; Lavery, D.; Paskov, M.; Kojima, K.; Parsons, K.; Thomsen, B.C.; Savory, S.J.; Bayvel, P.

    2016-01-01

    We show the great potential of nonbinary LDPC convolutional codes (NB-LDPC-CC) with low-latency windowed decoding. It is experimentally demonstrated that NB-LDPC-CC can offer a performance improvement of up to 5 dB compared with binary coding.

  1. On the photonic implementation of universal quantum gates, Bell states preparation circuit and quantum LDPC encoders and decoders based on directional couplers and HNLF.

    Science.gov (United States)

    Djordjevic, Ivan B

    2010-04-12

    The Bell states preparation circuit is a basic circuit required in quantum teleportation. We describe how to implement it in all-fiber technology. The basic building blocks for its implementation are directional couplers and highly nonlinear optical fiber (HNLF). Because quantum information processing is based on delicate superposition states, it is sensitive to quantum errors. In order to enable fault-tolerant quantum computing, the use of quantum error correction is unavoidable. We show how to implement encoders and decoders for sparse-graph quantum codes in all-fiber technology, and provide an illustrative example to demonstrate this implementation. We also show that an arbitrary set of universal quantum gates can be implemented based on directional couplers and HNLFs.

  2. Performance Analysis for Cooperative Communication System with QC-LDPC Codes Constructed with Integer Sequences

    Directory of Open Access Journals (Sweden)

    Yan Zhang

    2015-01-01

    Full Text Available This paper presents four different integer sequences for constructing quasi-cyclic low-density parity-check (QC-LDPC) codes with mathematical theory, and introduces the construction principle and the encoding procedure. The four integer-sequence QC-LDPC codes are compared with LDPC codes built using the PEG algorithm, array codes, and MacKay codes, respectively. Then, the integer-sequence QC-LDPC codes are used in coded cooperative communication. Simulation results show that the integer-sequence QC-LDPC codes are effective, and their overall performance in coded cooperative communication is better than that of the other types of LDPC codes, with the codes constructed from the Dayan integer sequence performing best.
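
    Whatever integer sequence supplies the shift values, the expansion from exponent matrix to binary parity-check matrix is the same. A minimal sketch; the shift values are illustrative, not taken from any of the four sequences:

        import numpy as np

        def expand_exponent_matrix(E, L):
            # Entry e >= 0 becomes the LxL identity cyclically shifted by e
            # columns; entry -1 becomes the LxL all-zero block.
            I = np.eye(L, dtype=int)
            Z = np.zeros((L, L), dtype=int)
            return np.block([[np.roll(I, e, axis=1) if e >= 0 else Z
                              for e in row] for row in E])

        E = np.array([[0, 1, 2],
                      [0, 2, 4]])
        H = expand_exponent_matrix(E, L=5)
        print(H.shape, H.sum(axis=0))   # (10, 15), every column has weight 2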

  3. Opportunistic Adaptive Transmission for Network Coding Using Nonbinary LDPC Codes

    Directory of Open Access Journals (Sweden)

    Cocco Giuseppe

    2010-01-01

    Full Text Available Network coding makes it possible to exploit the spatial diversity naturally present in mobile wireless networks and can be seen as an example of cooperative communication at the link layer and above. Such a promising technique needs to rely on a suitable physical layer in order to achieve its best performance. In this paper, we present an opportunistic packet scheduling method based on physical-layer considerations. We extend the channel adaptation proposed for the broadcast phase of asymmetric two-way bidirectional relaying to a generic number of sinks and apply it in a network context. The method consists of adapting the information rate for each receiving node according to its channel status, independently of the other nodes. In this way, a higher network throughput can be achieved at the expense of slightly higher complexity at the transmitter. This configuration allows rate adaptation to be performed while fully preserving the benefits of channel and network coding. We carry out an information-theoretic analysis of this approach and of the one typically used in network coding. Numerical results based on nonbinary LDPC codes confirm the effectiveness of our approach with respect to previously proposed opportunistic scheduling techniques.

  4. Analysis of Minimal LDPC Decoder System on a Chip Implementation

    Directory of Open Access Journals (Sweden)

    T. Palenik

    2015-09-01

    Full Text Available This paper presents a practical method of potentially replacing several different Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) codes with one, with the intention of saving as much as possible of the memory required to implement the LDPC encoder and decoder in a memory-constrained System on a Chip (SoC). The presented method requires only a very small modification of the existing encoder and decoder, making it suitable for use in a Software Defined Radio (SDR) platform. Besides analyzing the effects of the necessary variable-node value fixation during the Belief Propagation (BP) decoding algorithm, practical standard-defined code parameters are scrutinized in order to evaluate the feasibility of the proposed LDPC setup simplification. Finally, the error performance of the modified system structure is evaluated and compared with the original system structure by means of simulation.

  5. Transmission over UWB channels with OFDM system using LDPC coding

    Science.gov (United States)

    Dziwoki, Grzegorz; Kucharczyk, Marcin; Sulek, Wojciech

    2009-06-01

    A hostile wireless environment requires the use of sophisticated signal processing methods. This paper concerns Ultra Wideband (UWB) transmission over Personal Area Networks (PAN), including the MB-OFDM specification of the physical layer. In the presented work, the OFDM transmission system is combined with an LDPC encoder/decoder. Additionally, the frame and bit error rates (FER and BER) of the system are decreased by using results from the LDPC decoder in a kind of turbo-equalization algorithm for better channel estimation. A computational block using an evolutionary strategy, from the genetic algorithms family, is also used in the presented system; it is placed after the Sum-Product Algorithm (SPA) decoder and is conditionally activated during decoding. The result is increased effectiveness of the whole system, especially a lower FER. The system was tested with two types of LDPC codes, depending on the type of parity-check matrix: randomly generated, and deterministically constructed matrices optimized for a practical decoder architecture implemented in an FPGA device.

  6. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate (LDPCA) codes in a DSC scheme with feedback. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  7. The application of LDPC code in MIMO-OFDM system

    Science.gov (United States)

    Liu, Ruian; Zeng, Beibei; Chen, Tingting; Liu, Nan; Yin, Ninghao

    2018-03-01

    The combination of MIMO and OFDM technology has become one of the key technologies of fourth-generation mobile communication: it can overcome the frequency-selective fading of the wireless channel, increase the system capacity, and improve frequency utilization. Error-correcting coding introduced into the system can further improve its performance. The LDPC (low-density parity-check) code is an error-correcting code that can improve system reliability and anti-interference ability, and its decoding is simple and easy to implement. This paper mainly discusses the application of LDPC codes in a MIMO-OFDM system.

  8. LDPC-coded MIMO optical communication over the atmospheric turbulence channel using Q-ary pulse-position modulation.

    Science.gov (United States)

    Djordjevic, Ivan B

    2007-08-06

    We describe a coded power-efficient transmission scheme based on the repetition MIMO principle suitable for communication over the atmospheric turbulence channel, and determine its channel capacity. The proposed scheme employs Q-ary pulse-position modulation. We further study how to approach the channel capacity limits using low-density parity-check (LDPC) codes. Component LDPC codes are designed using the concept of pairwise balanced designs. Contrary to several recent publications, bit-error rates and channel capacities are reported assuming non-ideal photodetection. The atmospheric turbulence channel is modeled using the Gamma-Gamma distribution function due to Al-Habash et al. Excellent bit-error rate performance improvement over the uncoded case is found.

  9. On the reduced-complexity of LDPC decoders for beyond 400 Gb/s serial optical transmission

    Science.gov (United States)

    Djordjevic, Ivan B.; Xu, Lei; Wang, Ting

    2010-12-01

    Two reduced-complexity (RC) LDPC decoders are proposed, which can be used in combination with large-girth LDPC codes to enable beyond-400 Gb/s serial optical transmission. We show that the optimally attenuated RC min-sum algorithm performs only 0.45 dB worse than the conventional sum-product algorithm, while having lower storage memory requirements and much lower latency. We further evaluate the proposed algorithms for use in beyond-400 Gb/s serial optical transmission in combination with a PolMUX 32-IPQ-based signal constellation and show that low BERs can be achieved at medium optical SNRs, while achieving a net coding gain above 11.4 dB.
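
    For readers unfamiliar with the attenuated min-sum rule referenced here, the following is a minimal single-check sketch; the attenuation factor 0.8 is a placeholder, not the paper's optimized value:

```python
import numpy as np

def attenuated_min_sum_check(msgs, alpha=0.8):
    """One check-node update of the attenuated (normalized) min-sum rule.
    msgs: LLR messages arriving on the edges of one check node.
    Each outgoing message is alpha * (product of the other edges' signs)
    * (minimum of the other edges' magnitudes)."""
    msgs = np.asarray(msgs, dtype=float)
    signs = np.sign(msgs)
    mags = np.abs(msgs)
    total_sign = np.prod(signs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    out = np.empty_like(msgs)
    for i in range(len(msgs)):
        other_min = min2 if i == order[0] else min1   # exclude own edge
        out[i] = alpha * total_sign * signs[i] * other_min
    return out

print(attenuated_min_sum_check([2.0, -0.5, 1.2, -3.0]))
```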

  10. LDPC Codes--Structural Analysis and Decoding Techniques

    Science.gov (United States)

    Zhang, Xiaojie

    2012-01-01

    Low-density parity-check (LDPC) codes have been the focus of much research over the past decade thanks to their near Shannon limit performance and to their efficient message-passing (MP) decoding algorithms. However, the error floor phenomenon observed in MP decoding, which manifests itself as an abrupt change in the slope of the error-rate curve,…

  11. Hardware architecture for a universal LDPC decoder

    Directory of Open Access Journals (Sweden)

    C. Beuschel

    2009-05-01

    Full Text Available This paper presents a universal decoder architecture for a low-density parity-check (LDPC) code decoder. Unlike the architectures for structured codes frequently described in the literature, the architecture presented here is freely programmable, so that any arbitrary LDPC code can be decoded with the same hardware simply by changing the initialization of the memory holding the parity-check matrix. The greatest challenge in the design of partly parallel LDPC decoder architectures lies in the conflict-free exchange of data between several parallel memories and processing units, for which a mapping and scheduling algorithm is needed. The algorithm presented here relies on graph theory and finds, for any arbitrary LDPC code, a solution that is optimal for the architecture. Thus no wait cycles are necessary, and the parallelism of the architecture is fully exploited at all times.

  12. Optimization of irregular LDPC codes and decoding algorithms for q-ary LDPC codes

    OpenAIRE

    Cances , Jean-Pierre

    2013-01-01

    This technical note reviews the optimization principles used to obtain the degree profiles of high-performance irregular LDPC codes, and reviews the principles of the decoding algorithms used for spectrally efficient q-ary LDPC codes.

  13. A good performance watermarking LDPC code used in high-speed optical fiber communication system

    Science.gov (United States)

    Zhang, Wenbo; Li, Chao; Zhang, Xiaoguang; Xi, Lixia; Tang, Xianfeng; He, Wenxue

    2015-07-01

    A watermarking LDPC code, a strategy designed to improve the performance of the traditional LDPC code, is introduced. By inserting pre-defined watermarking bits into the original LDPC code, we can obtain a more accurate estimate of the noise level in the fiber channel, and then use it to modify the probability distribution function (PDF) used in the initialization of the belief propagation (BP) decoding algorithm. This algorithm was tested in a 128 Gb/s PDM-DQPSK optical communication system, and the results showed that the watermarking LDPC code had a better tolerance to polarization mode dispersion (PMD) and nonlinearity than the traditional LDPC code. Also, at the cost of about 2.4% extra redundancy for the watermarking bits, the decoding efficiency of the watermarking LDPC code is about twice that of the traditional one.
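
    A minimal sketch of the watermarking idea, assuming BPSK signalling over AWGN: the known watermark bits give a direct noise-level estimate, which then scales the initial LLRs fed to BP decoding. The helper below is hypothetical; the paper's exact PDF modification is not reproduced.

```python
import numpy as np

def watermark_llrs(received, wm_pos, wm_bits):
    """Estimate the noise level from pre-defined watermarking bits and use
    it to form the initial LLRs for BP decoding (BPSK/AWGN assumed)."""
    ref = 1.0 - 2.0 * np.asarray(wm_bits, dtype=float)  # bit -> +/-1 symbol
    noise = received[np.asarray(wm_pos)] - ref          # residual at known bits
    sigma2 = np.mean(noise ** 2)                        # noise-level estimate
    return 2.0 * received / sigma2                      # standard AWGN LLRs
```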

  14. Implementation of Layered Decoding Architecture for LDPC Code using Layered Min-Sum Algorithm

    OpenAIRE

    Sandeep Kakde; Atish Khobragade; Shrikant Ambatkar; Pranay Nandanwar

    2017-01-01

    For the binary field and long code lengths, Low Density Parity Check (LDPC) codes approach Shannon-limit performance. LDPC codes provide remarkable error-correction performance and therefore enlarge the design space for communication systems. In this paper, we compare different digital modulation techniques and find that BPSK is better than the other modulation techniques in terms of BER. The paper also gives the error performance of the LDPC decoder over an AWGN channel using the Min-Sum algori...

  15. Coding and efficiency of LDPC codes

    OpenAIRE

    Kozlík, Andrew

    2011-01-01

    Low-density parity-check (LDPC) codes are linear error correcting codes which are capable of performing near channel capacity. Furthermore, they admit efficient decoding algorithms that provide near optimum performance. Their main disadvantage is that most LDPC codes have relatively complex encoders. In this thesis, we begin by giving a detailed discussion of the sum-product decoding algorithm, we then study the performance of LDPC codes on the binary erasure channel under sum-product decodin...

  16. Using LDPC Code Constraints to Aid Recovery of Symbol Timing

    Science.gov (United States)

    Jones, Christopher; Villasenor, John; Lee, Dong-U; Valles, Esteban

    2008-01-01

    A method of utilizing information available in the constraints imposed by a low-density parity-check (LDPC) code has been proposed as a means of aiding the recovery of symbol timing in the reception of a binary-phase-shift-keying (BPSK) signal representing such a code in the presence of noise, timing error, and/or Doppler shift between the transmitter and the receiver. This method and the receiver architecture in which it would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. Acquisition and tracking of a signal of the type described above have traditionally been performed upstream of, and independently of, decoding and have typically involved utilization of a phase-locked loop (PLL). However, the LDPC decoding process, which is iterative, provides information that can be fed back to the timing-recovery receiver circuits to improve performance significantly over that attainable in the absence of such feedback. Prior methods of coupling LDPC decoding with timing recovery had focused on the use of output code words produced as the iterations progress. In contrast, the present method exploits the information available from the metrics computed for the constraint nodes of an LDPC code during the decoding process. In addition, the method involves the use of a waveform model that captures, better than do the waveform models of the prior methods, distortions introduced by receiver timing errors and transmitter/receiver motions. An LDPC code is commonly represented by use of a bipartite graph containing two sets of nodes. In the graph corresponding to an (n,k) code, the n variable nodes correspond to the code word symbols and the n-k constraint nodes represent the constraints that the code places on the variable nodes in order for them to form a valid code word. The decoding procedure involves iterative computation
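
    One way to picture the constraint-node information exploited here: each check node yields a soft-syndrome quantity whose average rises as the timing estimate improves, so a timing loop can score candidate offsets by it. The sketch below illustrates that idea only; it is not the article's exact metric.

```python
import numpy as np

def check_satisfaction_metric(llrs, checks):
    """Average soft-syndrome metric over the constraint nodes of an LDPC
    code: for each check, the product of tanh(L/2) over its variables is
    close to +1 when the check is reliably satisfied."""
    t = np.tanh(np.asarray(llrs, dtype=float) / 2.0)
    return np.mean([np.prod(t[list(c)]) for c in checks])

# Toy example: a single parity check over variables 0, 1, 2
print(check_satisfaction_metric([4.0, 3.5, 5.0], [(0, 1, 2)]))
```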

  17. Structured Low-Density Parity-Check Codes with Bandwidth Efficient Modulation

    Science.gov (United States)

    Cheng, Michael K.; Divsalar, Dariush; Duy, Stephanie

    2009-01-01

    In this work, we study the performance of structured Low-Density Parity-Check (LDPC) Codes together with bandwidth efficient modulations. We consider protograph-based LDPC codes that facilitate high-speed hardware implementations and have minimum distances that grow linearly with block sizes. We cover various higher-order modulations such as 8-PSK, 16-APSK, and 16-QAM. During demodulation, a demapper transforms the received in-phase and quadrature samples into reliability information that feeds the binary LDPC decoder. We will compare various low-complexity demappers and provide simulation results for assorted coded-modulation combinations on the additive white Gaussian noise and independent Rayleigh fading channels.
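
    The demapper step described above can be illustrated with the standard max-log approximation (a generic sketch, not necessarily one of the low-complexity demappers compared in the paper; the QPSK constellation and labels are illustrative):

```python
import numpy as np

def maxlog_demapper(r, constellation, labels, noise_var):
    """Max-log LLRs for one received complex sample r.
    constellation: complex symbol points; labels: integer bit label per
    point. With the convention LLR > 0 meaning bit = 0 is more likely:
    LLR(b_i) = (min_{b_i=1} |r-s|^2 - min_{b_i=0} |r-s|^2) / noise_var."""
    d2 = np.abs(r - constellation) ** 2
    m = int(np.max(labels)).bit_length()          # bits per symbol
    llrs = np.empty(m)
    for i in range(m):
        bit = (labels >> i) & 1
        llrs[i] = (d2[bit == 1].min() - d2[bit == 0].min()) / noise_var
    return llrs

# Example with a Gray-labelled QPSK constellation (illustrative labels)
pts = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
labs = np.array([0, 1, 3, 2])
print(maxlog_demapper(0.9 + 0.8j, pts, labs, noise_var=0.5))
```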

  18. Evaluation of four-dimensional nonbinary LDPC-coded modulation for next-generation long-haul optical transport networks.

    Science.gov (United States)

    Zhang, Yequn; Arabaci, Murat; Djordjevic, Ivan B

    2012-04-09

    Leveraging the advanced coherent optical communication technologies, this paper explores the feasibility of using four-dimensional (4D) nonbinary LDPC-coded modulation (4D-NB-LDPC-CM) schemes for long-haul transmission in future optical transport networks. In contrast to our previous works on 4D-NB-LDPC-CM which considered amplified spontaneous emission (ASE) noise as the dominant impairment, this paper undertakes transmission in a more realistic optical fiber transmission environment, taking into account impairments due to dispersion effects, nonlinear phase noise, Kerr nonlinearities, and stimulated Raman scattering in addition to ASE noise. We first reveal the advantages of using 4D modulation formats in LDPC-coded modulation instead of conventional two-dimensional (2D) modulation formats used with polarization-division multiplexing (PDM). Then we demonstrate that 4D LDPC-coded modulation schemes with nonbinary LDPC component codes significantly outperform not only their conventional PDM-2D counterparts but also the corresponding 4D bit-interleaved LDPC-coded modulation (4D-BI-LDPC-CM) schemes, which employ binary LDPC codes as component codes. We also show that the transmission reach improvement offered by the 4D-NB-LDPC-CM over 4D-BI-LDPC-CM increases as the underlying constellation size and hence the spectral efficiency of transmission increases. Our results suggest that 4D-NB-LDPC-CM can be an excellent candidate for long-haul transmission in next-generation optical networks.

  19. Cooperative optimization and their application in LDPC codes

    Science.gov (United States)

    Chen, Ke; Rong, Jian; Zhong, Xiaochun

    2008-10-01

    Cooperative optimization is a new way of finding global optima of complicated functions of many variables. The proposed algorithm belongs to the class of message-passing algorithms and has solid theoretical foundations. It can achieve good coding gains over the sum-product algorithm for LDPC codes. For the (6561, 4096) LDPC code, the proposed algorithm achieves a 2.0 dB gain over the sum-product algorithm at a BER of 4×10-7. The decoding complexity of the proposed algorithm is lower than that of the sum-product algorithm; furthermore, it can achieve a much lower error floor once Eb/No is higher than 1.8 dB.

  20. Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

    Science.gov (United States)

    Mahmoud, Saad; Hi, Jianjun

    2012-01-01

    The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is the ratio between the signal amplitude and the noise variance. Accurately estimating this ratio has shown as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a Simulation-Based Look-Up table. The Pilot-Guided estimation method has shown that the maximum-likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and that the signal variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. The Blind estimation method's maximum-likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulated results to determine the signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of
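
    The two closed-form estimators described above translate directly into code. The sketch below follows the abstract's formulas under a BPSK/AWGN assumption; function names and the binary-search details are illustrative.

```python
import numpy as np

def pilot_guided(received_asm, known_asm):
    """Pilot-guided estimate: amplitude is the mean inner product of the
    received sequence with the known ASM (+/-1 symbols); variance is the
    mean squared sample minus the squared amplitude."""
    a = np.mean(received_asm * known_asm)        # signal amplitude
    var = np.mean(received_asm ** 2) - a ** 2    # noise variance
    return a / var                               # combining ratio

def blind(received, iters=30):
    """Blind estimate: find the root of r = mean(y * tanh(r * y)) for the
    power-normalised sequence y by binary search on r in (0, 1)."""
    y = received / np.sqrt(np.mean(received ** 2))
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        r = 0.5 * (lo + hi)
        if np.mean(y * np.tanh(r * y)) > r:
            lo = r
        else:
            hi = r
    return r
```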

  1. High performance reconciliation for continuous-variable quantum key distribution with LDPC code

    Science.gov (United States)

    Lin, Dakai; Huang, Duan; Huang, Peng; Peng, Jinye; Zeng, Guihua

    2015-03-01

    Reconciliation is a significant procedure in a continuous-variable quantum key distribution (CV-QKD) system. It is employed to extract a secure secret key from the raw string shared through the quantum channel between two users. However, the efficiency and speed of previous reconciliation algorithms are low. These problems limit the secure communication distance and the secure key rate of CV-QKD systems. In this paper, we propose a high-speed reconciliation algorithm employing a well-structured decoding scheme based on low-density parity-check (LDPC) codes. The complexity of the proposed algorithm is significantly reduced. By using a graphics processing unit (GPU), our method can reach a reconciliation speed of 25 Mb/s for a CV-QKD system, which is currently the highest level and paves the way to high-speed CV-QKD.

  2. MIMO-OFDM System's Performance Using LDPC Codes for a Mobile Robot

    Science.gov (United States)

    Daoud, Omar; Alani, Omar

    This work deals with the performance of a Sniffer Mobile Robot (SNFRbot)-based spatially multiplexed wireless Orthogonal Frequency Division Multiplexing (OFDM) transmission technology. The use of Multi-Input Multi-Output (MIMO)-OFDM technology increases the wireless transmission rate without increasing transmission power or bandwidth. A generic multilayer architecture of the SNFRbot is proposed with low power and low cost. Experimental results show the efficiency of sniffing deadly gases, sensing high temperatures and sending live video of the monitored situation. Moreover, simulation results show the performance achieved by tackling the Peak-to-Average Power Ratio (PAPR) problem of the used technology with Low Density Parity Check (LDPC) codes, and the effect of combating the PAPR on the bit error rate (BER) and signal-to-noise ratio (SNR) over a Doppler-spread channel.

  3. An LDPC decoder architecture for wireless sensor network applications.

    Science.gov (United States)

    Biroli, Andrea Dario Giancarlo; Martina, Maurizio; Masera, Guido

    2012-01-01

    The pervasive use of wireless sensors in a growing spectrum of human activities reinforces the need for devices with low energy dissipation. In this work, coded communication between a pair of wireless sensor devices is considered as a method to reduce the dissipated energy per transmitted bit with respect to uncoded communication. Different Low Density Parity Check (LDPC) codes are considered for this purpose, and post-layout results are shown for a low-area, low-energy decoder, which offers energy savings with respect to the uncoded solution in the range of 40%-80%, depending on the considered environment, distance and bit error rate.

  4. An LDPC Decoder Architecture for Wireless Sensor Network Applications

    Science.gov (United States)

    Giancarlo Biroli, Andrea Dario; Martina, Maurizio; Masera, Guido

    2012-01-01

    The pervasive use of wireless sensors in a growing spectrum of human activities reinforces the need for devices with low energy dissipation. In this work, coded communication between a pair of wireless sensor devices is considered as a method to reduce the dissipated energy per transmitted bit with respect to uncoded communication. Different Low Density Parity Check (LDPC) codes are considered for this purpose, and post-layout results are shown for a low-area, low-energy decoder, which offers energy savings with respect to the uncoded solution in the range of 40%–80%, depending on the considered environment, distance and bit error rate. PMID:22438724

  5. Design LDPC Codes without Cycles of Length 4 and 6

    Directory of Open Access Journals (Sweden)

    Kiseon Kim

    2008-04-01

    Full Text Available We present an approach for constructing LDPC codes without cycles of length 4 and 6. Firstly, we design three submatrices with different shifting functions given by the proposed schemes; then we combine them into the matrix specified by the proposed approach; and finally we expand the matrix into the desired parity-check matrix using identity matrices and cyclic-shift matrices of the identity matrix. Simulation results on the AWGN channel verify that the BER of the proposed code is close to those of MacKay's random codes and Tanner's QC codes, and that the good BER performance of the proposed code is retained at high code rates.
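
    The final expansion step described above, replacing each base-matrix entry with either an all-zero block or a cyclically shifted identity matrix, is the standard quasi-cyclic lifting. A minimal sketch (the toy shift values are illustrative, not the paper's design):

```python
import numpy as np

def expand_base(shifts, z):
    """Expand a base matrix of circulant shift values into a binary
    parity-check matrix. Entry -1 means an all-zero z-by-z block; a value
    s >= 0 means the identity matrix cyclically shifted by s columns."""
    rows, cols = shifts.shape
    H = np.zeros((rows * z, cols * z), dtype=np.uint8)
    I = np.eye(z, dtype=np.uint8)
    for r in range(rows):
        for c in range(cols):
            s = shifts[r][c]
            if s >= 0:
                H[r*z:(r+1)*z, c*z:(c+1)*z] = np.roll(I, s, axis=1)
    return H

# Toy base matrix of shifts (illustrative only)
B = np.array([[0, 1, -1, 2],
              [2, -1, 0, 1]])
print(expand_base(B, z=4).shape)   # (8, 16)
```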

  6. Min-Max decoding for non binary LDPC codes

    OpenAIRE

    Savin, Valentin

    2008-01-01

    Iterative decoding of non-binary LDPC codes is currently performed using either the Sum-Product or the Min-Sum algorithms or slightly different versions of them. In this paper, several low-complexity quasi-optimal iterative algorithms are proposed for decoding non-binary codes. The Min-Max algorithm is one of them and it has the benefit of two possible LLR domain implementations: a standard implementation, whose complexity scales as the square of the Galois field's cardinality and a reduced c...

  7. Performance Analysis of Faulty Gallager-B Decoding of QC-LDPC Codes with Applications

    Directory of Open Access Journals (Sweden)

    O. Al Rasheed

    2014-06-01

    Full Text Available In this paper we evaluate the performance of the Gallager-B algorithm, used for decoding low-density parity-check (LDPC) codes, under unreliable message computation. Our analysis is restricted to LDPC codes constructed from circulant matrices (QC-LDPC codes). Using Monte Carlo simulation we investigate the effects of different code parameters on coding-system performance, under a binary symmetric communication channel and an independent transient-fault model. One possible application of the presented analysis to the design of memory architectures with unreliable components is considered.
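
    For reference, a minimal fault-free Gallager-B decoder over a binary symmetric channel might look as follows (a dense-matrix sketch; the default flipping threshold is an assumption, and the paper's transient faults in the message computations are not modelled here):

```python
import numpy as np

def gallager_b(H, y, max_iter=20, b=None):
    """Fault-free Gallager-B decoding of hard bits y over a BSC.
    H: binary parity-check matrix (m x n); b: per-variable flipping
    threshold (defaults to a strict majority of the other dv-1 checks)."""
    H = H.astype(np.uint8)
    y = np.asarray(y, dtype=np.uint8)
    dv = H.sum(axis=0)
    if b is None:
        b = (dv - 1) // 2 + 1
    v2c = H * y                                   # message on edge (check, var)
    hard = y.copy()
    for _ in range(max_iter):
        # Check -> variable: parity of the *other* edges on each check.
        c2v = H * (((v2c.sum(axis=1) % 2)[:, None] ^ v2c) & 1)
        # Count checks disagreeing with the channel bit, per variable.
        dis = (((c2v ^ y) & 1) * H).sum(axis=0)
        # Variable -> check: flip y when >= b of the *other* checks disagree.
        dis_other = dis - (((c2v ^ y) & 1) * H)   # exclude the target check
        v2c = H * np.where(dis_other >= b, 1 - y, y).astype(np.uint8)
        # Tentative decision: majority vote of all checks against y.
        hard = np.where(dis > dv / 2.0, 1 - y, y).astype(np.uint8)
        if not ((H.astype(int) @ hard) % 2).any():
            break
    return hard
```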

  8. A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications

    Directory of Open Access Journals (Sweden)

    M. Revathy

    2015-01-01

    Full Text Available Low-density parity-check (LDPC) codes have been implemented in the latest digital video broadcasting, broadband wireless access (WiMax), and fourth-generation wireless standards. In this paper, we propose a highly efficient low-density parity-check (LDPC) decoder architecture for low-power applications. This study also considers the design and analysis of the check-node and variable-node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture and can be incorporated between the check- and variable-node architecture. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using ModelSim, targeted to 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization compared with different conventional architectures.

  9. A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications.

    Science.gov (United States)

    Revathy, M; Saravanan, R

    2015-01-01

    Low-density parity-check (LDPC) codes have been implemented in the latest digital video broadcasting, broadband wireless access (WiMax), and fourth-generation wireless standards. In this paper, we propose a highly efficient low-density parity-check (LDPC) decoder architecture for low-power applications. This study also considers the design and analysis of the check-node and variable-node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture and can be incorporated between the check- and variable-node architecture. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using ModelSim, targeted to 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization compared with different conventional architectures.

  10. Encoding of QC-LDPC Codes of Rank Deficient Parity Matrix

    Directory of Open Access Journals (Sweden)

    Mohammed Kasim Mohammed Al-Haddad

    2016-05-01

    Full Text Available The encoding of long low-density parity-check (LDPC) codes presents a challenge compared to their decoding. Quasi-cyclic (QC) LDPC codes offer the advantage of reduced complexity for both encoding and decoding due to their QC structure. Most QC-LDPC codes have a rank-deficient parity matrix, which introduces extra complexity over codes with a full-rank parity matrix. In this paper an encoding scheme for QC-LDPC codes is presented that is suitable for codes with either a full-rank or a rank-deficient parity matrix. The extra effort required by codes with a rank-deficient parity matrix over codes with a full-rank parity matrix is investigated.

  11. LDPC code decoding adapted to the precoded partial response magnetic recording channels

    International Nuclear Information System (INIS)

    Lee, Jun; Kim, Kyuyong; Lee, Jaejin; Yang, Gijoo

    2004-01-01

    We propose a signal processing technique using an LDPC (low-density parity-check) code instead of a PRML (partial response maximum likelihood) system for the longitudinal magnetic recording channel. The scheme is designed with a precoder admitting level detection at the receiver end and by modifying the likelihood function for LDPC decoding. The scheme can be combined with other decoders in turbo-like systems. The proposed algorithm can help improve the performance of conventional turbo-like systems

  12. Ultra high speed optical transmission using subcarrier-multiplexed four-dimensional LDPC-coded modulation.

    Science.gov (United States)

    Batshon, Hussam G; Djordjevic, Ivan; Schmidt, Ted

    2010-09-13

    We propose a subcarrier-multiplexed four-dimensional LDPC bit-interleaved coded modulation scheme that is capable of achieving beyond 480 Gb/s single-channel transmission rate over optical channels. Subcarrier-multiplexed four-dimensional LDPC coded modulation scheme outperforms the corresponding dual polarization schemes by up to 4.6 dB in OSNR at BER 10(-8).

  13. LDPC code decoding adapted to the precoded partial response magnetic recording channels

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jun E-mail: leejun28@sait.samsung.co.kr; Kim, Kyuyong; Lee, Jaejin; Yang, Gijoo

    2004-05-01

    We propose a signal processing technique using an LDPC (low-density parity-check) code instead of a PRML (partial response maximum likelihood) system for the longitudinal magnetic recording channel. The scheme is designed with a precoder admitting level detection at the receiver end and by modifying the likelihood function for LDPC decoding. The scheme can be combined with other decoders in turbo-like systems. The proposed algorithm can help improve the performance of conventional turbo-like systems.

  14. Differentially Encoded LDPC Codes—Part II: General Case and Code Optimization

    Directory of Open Access Journals (Sweden)

    Jing Li (Tiffany)

    2008-01-01

    Full Text Available This two-part series of papers studies the theory and practice of differentially encoded low-density parity-check (DE-LDPC) codes, especially in the context of noncoherent detection. Part I showed that a special class of DE-LDPC codes, product accumulate codes, perform very well with both coherent and noncoherent detection. The analysis here reveals that a conventional LDPC code, however, is not well suited to differential coding and does not, in general, deliver a desirable performance when detected noncoherently. Through extrinsic information transfer (EXIT) analysis and a modified "convergence-constraint" density evolution (DE) method developed here, we provide a characterization of the type of LDPC degree profiles that work in harmony with differential detection (or a recursive inner code in general), and demonstrate how to optimize these LDPC codes. The convergence-constraint method provides a useful extension to the conventional "threshold-constraint" method, and can match an outer LDPC code to any given inner code with the imperfectness of the inner decoder taken into consideration.

  15. Joint Carrier-Phase Synchronization and LDPC Decoding

    Science.gov (United States)

    Simon, Marvin; Valles, Esteban

    2009-01-01

    A method has been proposed to increase the degree of synchronization of a radio receiver with the phase of a suppressed-carrier signal modulated with a binary-phase-shift-keying (BPSK) or quaternary-phase-shift-keying (QPSK) signal representing a low-density parity-check (LDPC) code. This method is an extended version of the method described in Using LDPC Code Constraints to Aid Recovery of Symbol Timing (NPO-43112), NASA Tech Briefs, Vol. 32, No. 10 (October 2008), page 54. Both methods and the receiver architectures in which they would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. The proposed method calls for the use of what is known in the art as soft-decision feedback to remove the modulation from a replica of the incoming signal prior to feeding this replica to a phase-locked loop (PLL) or other carrier-tracking stage in the receiver. Soft-decision feedback refers to suitably processed versions of intermediate results of the iterative computations involved in the LDPC decoding process. Unlike a related prior method in which hard-decision feedback (the final sequence of decoded symbols) is used to remove the modulation, the proposed method does not require estimation of the decoder error probability. In a basic digital implementation of the proposed method, the incoming signal (having carrier phase θc) plus noise would first be converted to in-phase (I) and quadrature (Q) baseband signals by mixing it with I and Q signals at the carrier frequency [ωc/(2π)] generated by a local oscillator. The resulting demodulated signals would be processed through one-symbol-period integrate-and-dump filters, the outputs of which would be sampled and held, then multiplied by a soft-decision version of the baseband modulated signal. The resulting I and Q products consist of terms proportional to the cosine
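
    The core of the soft-decision feedback idea, wiping the modulation off the matched-filter outputs with the decoder's soft symbol estimates before reading the phase error, can be sketched for BPSK as follows (an illustration of the principle only, not the proposed receiver; the LLR sign convention is assumed):

```python
import numpy as np

def phase_error_soft_feedback(I, Q, llrs):
    """Data-removed phase-error estimate for BPSK. The modulation is wiped
    off by multiplying the integrate-and-dump outputs with tanh(LLR/2),
    the decoder's soft estimate of the +/-1 symbol, and the residual
    carrier phase is read off the averaged I/Q products."""
    soft = np.tanh(np.asarray(llrs, dtype=float) / 2.0)   # E[a_k] from decoder
    return np.arctan2(np.mean(Q * soft), np.mean(I * soft))
```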

  16. Statistical mechanics analysis of LDPC coding in MIMO Gaussian channels

    Energy Technology Data Exchange (ETDEWEB)

    Alamino, Roberto C; Saad, David [Neural Computing Research Group, Aston University, Birmingham B4 7ET (United Kingdom)

    2007-10-12

    Using analytical methods of statistical mechanics, we analyse the typical behaviour of a multiple-input multiple-output (MIMO) Gaussian channel with binary inputs under low-density parity-check (LDPC) network coding and joint decoding. The saddle point equations for the replica symmetric solution are found in particular realizations of this channel, including a small and large number of transmitters and receivers. In particular, we examine the cases of a single transmitter, a single receiver and symmetric and asymmetric interference. Both dynamical and thermodynamical transitions from the ferromagnetic solution of perfect decoding to a non-ferromagnetic solution are identified for the cases considered, marking the practical and theoretical limits of the system under the current coding scheme. Numerical results are provided, showing the typical level of improvement/deterioration achieved with respect to the single transmitter/receiver result, for the various cases.

  17. Statistical mechanics analysis of LDPC coding in MIMO Gaussian channels

    International Nuclear Information System (INIS)

    Alamino, Roberto C; Saad, David

    2007-01-01

    Using analytical methods of statistical mechanics, we analyse the typical behaviour of a multiple-input multiple-output (MIMO) Gaussian channel with binary inputs under low-density parity-check (LDPC) network coding and joint decoding. The saddle point equations for the replica symmetric solution are found in particular realizations of this channel, including a small and large number of transmitters and receivers. In particular, we examine the cases of a single transmitter, a single receiver and symmetric and asymmetric interference. Both dynamical and thermodynamical transitions from the ferromagnetic solution of perfect decoding to a non-ferromagnetic solution are identified for the cases considered, marking the practical and theoretical limits of the system under the current coding scheme. Numerical results are provided, showing the typical level of improvement/deterioration achieved with respect to the single transmitter/receiver result, for the various cases

  18. Implementation of Layered Decoding Architecture for LDPC Code using Layered Min-Sum Algorithm

    Directory of Open Access Journals (Sweden)

    Sandeep Kakde

    2017-12-01

    Full Text Available For the binary field and long code lengths, Low Density Parity Check (LDPC) codes approach Shannon-limit performance. LDPC codes provide remarkable error-correction performance and therefore enlarge the design space for communication systems. In this paper, we compare different digital modulation techniques and find that BPSK is better than the other modulation techniques in terms of BER. The paper also gives the error performance of the LDPC decoder over an AWGN channel using the Min-Sum algorithm. A VLSI architecture is proposed which uses the value-reuse property of the min-sum algorithm and gives high throughput. The proposed work has been implemented and tested on a Xilinx Virtex 5 FPGA. In MATLAB, the LDPC decoder achieves a bit error rate in the range of 10-1 to 10-3.5 at SNR = 1 to 2 for 20 iterations, i.e., good bit-error-rate performance. The latency of the parallel design of the LDPC decoder has also been reduced. It achieves a maximum frequency of 141.22 MHz and a throughput of 2.02 Gbps while consuming less area.
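
    A minimal layered min-sum decoder, in the spirit of the architecture above, processes one parity check (layer) at a time and updates the a-posteriori LLRs immediately (a software sketch; the paper's value-reuse VLSI datapath is not modelled):

```python
import numpy as np

def layered_min_sum(H, llr_ch, max_iter=20):
    """Layered (row-by-row) min-sum decoding. Each row of H is one layer;
    updating the a-posteriori LLRs after every check typically converges
    in roughly half the iterations of flooding schedules."""
    m, n = H.shape
    rows = [np.flatnonzero(H[j]) for j in range(m)]
    L = llr_ch.astype(float).copy()            # a-posteriori LLRs
    c2v = [np.zeros(len(r)) for r in rows]     # stored check messages
    hard = (L < 0).astype(int)
    for _ in range(max_iter):
        for j, idx in enumerate(rows):
            t = L[idx] - c2v[j]                # variable-to-check messages
            sgn = np.prod(np.sign(t)) * np.sign(t)   # sign of the others
            mags = np.abs(t)
            order = np.argsort(mags)
            m1, m2 = mags[order[0]], mags[order[1]]
            new = np.where(np.arange(len(idx)) == order[0], m2, m1) * sgn
            c2v[j] = new
            L[idx] = t + new                   # immediate posterior update
        hard = (L < 0).astype(int)
        if not ((H @ hard) % 2).any():
            break
    return hard
```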

  19. Blind Estimation of the Phase and Carrier Frequency Offsets for LDPC-Coded Systems

    Directory of Open Access Journals (Sweden)

    Houcke Sebastien

    2010-01-01

    Full Text Available We consider in this paper the problem of phase offset and Carrier Frequency Offset (CFO) estimation for Low-Density Parity-Check (LDPC) coded systems. We propose new blind estimation techniques based on the calculation and minimization of functions of the Log-Likelihood Ratios (LLRs) of the syndrome elements obtained according to the parity-check matrix of the error-correcting code. In the first part of this paper, we consider phase offset estimation for a Binary Phase Shift Keying (BPSK) modulation and propose a novel estimation technique. Simulation results show that the proposed method is very effective and outperforms many existing algorithms. Then, we modify the estimation criterion so that it can work for higher-order modulations. One interesting feature of the proposed algorithm when applied to high-order modulations is that the phase offset of the channel can be blindly estimated without any ambiguity. In the second part of the paper, we consider the problem of CFO estimation and propose estimation techniques based on the same concept as the ones presented for the phase offset estimation. The Mean Squared Error (MSE) and Bit Error Rate (BER) curves show the efficiency of the proposed estimation techniques.
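
    The syndrome-based criterion can be illustrated with a simple grid search for the BPSK phase-offset case (a sketch capturing the spirit of the method, not its exact cost function; note that the inherent π ambiguity of BPSK restricts the search range):

```python
import numpy as np

def blind_phase_estimate(r, H, noise_var, trials=64):
    """Grid-search phase estimate for BPSK: for each candidate phase,
    derotate, form LLRs, and score how well the code's parity checks are
    satisfied via the soft syndrome sum over checks of prod tanh(L/2)."""
    best_phi, best_score = 0.0, -np.inf
    for phi in np.linspace(-np.pi / 2, np.pi / 2, trials, endpoint=False):
        llr = 2.0 * np.real(r * np.exp(-1j * phi)) / noise_var
        t = np.tanh(llr / 2.0)
        score = sum(np.prod(t[np.flatnonzero(row)]) for row in H)
        if score > best_score:
            best_phi, best_score = phi, score
    return best_phi
```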

  20. Experimental study of non-binary LDPC coding for long-haul coherent optical QPSK transmissions.

    Science.gov (United States)

    Zhang, Shaoliang; Arabaci, Murat; Yaman, Fatih; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Inada, Yoshihisa; Ogata, Takaaki; Aoki, Yasuhiro

    2011-09-26

    The performance of a rate-0.8 4-ary LDPC code has been studied in a 50 GHz-spaced 40 Gb/s DWDM system with PDM-QPSK modulation. A net effective coding gain of 10 dB is obtained at a BER of 10(-6). With the aid of time-interleaved polarization multiplexing and MAP detection, 10,560 km transmission over legacy dispersion-managed fiber is achieved without any countable errors. The proposed nonbinary quasi-cyclic LDPC code achieves an uncoded BER threshold at 4×10(-2). Potential issues like phase ambiguity and coding length are also discussed for implementing LDPC in current coherent optical systems. © 2011 Optical Society of America

  1. Characterization and Optimization of LDPC Codes for the 2-User Gaussian Multiple Access Channel

    Directory of Open Access Journals (Sweden)

    Declercq David

    2007-01-01

    Full Text Available We address the problem of designing good LDPC codes for the Gaussian multiple access channel (MAC). The framework we choose is to design multiuser LDPC codes with joint belief-propagation decoding on the joint graph of the 2-user case. Our main contribution compared to existing work is to express analytically the EXIT functions of the multiuser decoder with two different approximations of the density evolution. This allows us to propose a very simple linear-programming optimization for the complicated problem of LDPC code design with joint multiuser decoding. The stability condition for our case is derived and used in the optimization constraints. The codes that we obtain for the 2-user case are quite good for various rates, especially considering the very simple optimization procedure.

  2. Evaluation of large girth LDPC codes for PMD compensation by turbo equalization.

    Science.gov (United States)

    Minkov, Lyubomir L; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Kueppers, Franko

    2008-08-18

    Large-girth quasi-cyclic LDPC codes have been experimentally evaluated for use in PMD compensation by turbo equalization for a 10 Gb/s NRZ optical transmission system, observing one sample per bit. The net effective coding gain improvement of the girth-10, rate-0.906 code of length 11936 over a maximum a posteriori probability (MAP) detector, for a differential group delay of 125 ps, is 6.25 dB at a BER of 10(-6). The girth-10 LDPC code of rate 0.8 outperforms the girth-10 code of rate 0.906 by 2.75 dB, providing a net effective coding gain improvement of 9 dB at the same BER. It is experimentally determined that girth-10 LDPC codes of length around 15000 approach the channel capacity limit to within 1.25 dB.

  3. On the reduced-complexity of LDPC decoders for ultra-high-speed optical transmission.

    Science.gov (United States)

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2010-10-25

    We propose two reduced-complexity (RC) LDPC decoders, which can be used in combination with large-girth LDPC codes to enable ultra-high-speed serial optical transmission. We show that the optimally attenuated RC min-sum algorithm performs only 0.46 dB (at a BER of 10(-9)) worse than the conventional sum-product algorithm, while having lower storage memory requirements and much lower latency. We further study the use of RC LDPC decoding algorithms in multilevel coded modulation with coherent detection and show that with RC decoding algorithms we can achieve a net coding gain larger than 11 dB at BERs below 10(-9).

  4. Construction of Short-Length High-Rates LDPC Codes Using Difference Families

    Directory of Open Access Journals (Sweden)

    Deny Hamdani

    2010-10-01

    Full Text Available A low-density parity-check (LDPC) code is a linear-block error-correcting code defined by a sparse parity-check matrix. It is decoded using the message-passing algorithm and, in many cases, is capable of outperforming turbo codes. This paper presents a class of low-density parity-check (LDPC) codes showing good performance with low encoding complexity. The code is constructed using difference families from combinatorial design. The resulting code, which is designed to have short code length and high code rate, can be encoded with low complexity due to its quasi-cyclic structure, and performs well when it is iteratively decoded with the sum-product algorithm. These properties make the LDPC code quite suitable for applications in future wireless local area networks.

  5. LDPC concatenated space-time block coded system in multipath fading environment: Analysis and evaluation

    Directory of Open Access Journals (Sweden)

    Surbhi Sharma

    2011-06-01

    Full Text Available Irregular low-density parity-check (LDPC) codes have been found to show exceptionally good performance for single-antenna systems over a wide class of channels. In this paper, the performance of LDPC codes with multiple-antenna systems is investigated in flat Rayleigh and Rician fading channels for different modulation schemes. The focus of attention is mainly on the concatenation of irregular LDPC codes with complex orthogonal space-time codes. Iterative decoding is carried out with a density evolution method that sets a threshold above which the code performs well. For the proposed concatenated system, the simulation results show that the QAM technique achieves a higher coding gain of 8.8 dB and 3.2 dB over the QPSK technique in Rician (LOS) and Rayleigh (NLOS) fading environments, respectively.

  6. Unitals and ovals of symmetric block designs in LDPC and space-time coding

    Science.gov (United States)

    Andriamanalimanana, Bruno R.

    2004-08-01

    An approach to the design of LDPC (low density parity check) error-correction and space-time modulation codes involves starting with known mathematical and combinatorial structures, and deriving code properties from structure properties. This paper reports on an investigation of unital and oval configurations within generic symmetric combinatorial designs, not just classical projective planes, as the underlying structure for classes of space-time LDPC outer codes. Of particular interest are the encoding and iterative (sum-product) decoding gains that these codes may provide. Various small-length cases have been numerically implemented in Java and Matlab for a number of channel models.

  7. Threshold Multi Split-Row algorithm for decoding irregular LDPC codes

    Directory of Open Access Journals (Sweden)

    Chakir Aqil

    2017-12-01

    Full Text Available In this work, we propose a new threshold multi split-row algorithm in order to improve the multi split-row algorithm for decoding irregular LDPC codes. We give a complete description of our algorithm as well as its advantages for LDPC codes. The simulation results over an additive white Gaussian noise channel show an improvement in error performance of between 0.4 dB and 0.6 dB compared to the multi split-row algorithm.

  8. Performance Analysis of Iterative Decoding Algorithms for PEG LDPC Codes in Nakagami Fading Channels

    Directory of Open Access Journals (Sweden)

    O. Al Rasheed

    2013-11-01

    Full Text Available In this paper we give a comparative analysis of decoding algorithms for Low Density Parity Check (LDPC) codes in a channel with a Nakagami distribution of the fading envelope. We consider the Progressive Edge-Growth (PEG) method and the Improved PEG method for parity-check matrix construction, which can be used to avoid short cycles, small trapping sets and a high error floor. A comparative analysis of several classes of LDPC codes in various propagation conditions, decoded using different decoding algorithms, is also presented.

  9. Construction of Short-length High-rates Ldpc Codes Using Difference Families

    OpenAIRE

    Deny Hamdani; Ery Safrianti

    2007-01-01

    A low-density parity-check (LDPC) code is a linear-block error-correcting code defined by a sparse parity-check matrix. It is decoded using the message-passing algorithm and, in many cases, is capable of outperforming turbo codes. This paper presents a class of low-density parity-check (LDPC) codes showing good performance with low encoding complexity. The code is constructed using difference families from combinatorial design. The resulting code, which is designed to have short code length and high code r...

  10. Error floor behavior study of LDPC codes for concatenated codes design

    Science.gov (United States)

    Chen, Weigang; Yin, Liuguo; Lu, Jianhua

    2007-11-01

    Error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied, with experimental results obtained on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small using the quantized sum-product (SP) algorithm. Therefore, an LDPC code may serve as the inner code in a concatenated coding system with a high-code-rate outer code, and thus an ultra-low error floor can be achieved. This conclusion is also verified by the experimental results.

  11. Iterative decoding of SOVA and LDPC product code for bit-patterned media recording

    Science.gov (United States)

    Jeong, Seongkwon; Lee, Jaejin

    2018-05-01

    The demand for high-density storage systems has increased due to the exponential growth of data. Bit-patterned media recording (BPMR) is one of the promising technologies to achieve a density of 1 Tb/in2 and higher. To increase the areal density in BPMR, the spacing between islands needs to be reduced, yet this aggravates inter-symbol interference and inter-track interference and degrades the bit-error-rate performance. In this paper, we propose a decision feedback scheme using a low-density parity check (LDPC) product code for BPMR. This scheme can improve the decoding performance using an iterative approach that exchanges extrinsic information and log-likelihood ratio values between an iterative soft-output Viterbi algorithm and the LDPC product code. Simulation results show that the proposed LDPC product code can offer 1.8 dB and 2.3 dB gains over a single LDPC code at densities of 2.5 and 3 Tb/in2, respectively, at a bit error rate of 10-6.

  12. Design and Analysis of Adaptive Message Coding on LDPC Decoder with Faulty Storage

    Directory of Open Access Journals (Sweden)

    Guangjun Ge

    2018-01-01

    Full Text Available Unreliable message storage severely degrades the performance of LDPC decoders. This paper discusses the impact of message errors on LDPC decoders and schemes for improving their robustness. Firstly, we develop a discrete density-evolution analysis for faulty LDPC decoders, which indicates that protecting the sign bits of messages is effective enough for finite-precision LDPC decoders. Secondly, we analyze the effects of quantization precision loss for static sign-bit protection and propose an embedded dynamic coding scheme that adaptively employs the least significant bits (LSBs) to protect the sign bits. Thirdly, we give a construction of a Hamming product code for the adaptive coding and present low-complexity decoding algorithms. Theoretical analysis indicates that the proposed scheme outperforms the traditional triple modular redundancy (TMR) scheme in decoding both threshold and residual errors, while Monte Carlo simulations show that the performance loss is less than 0.2 dB when the storage error probability varies from 10-3 to 10-4.

  13. Reply to "Comments on Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes"

    Directory of Open Access Journals (Sweden)

    Rovini Massimo

    2009-01-01

    Full Text Available This is a reply to the comments by Gunnam et al. "Comments on 'Techniques and architectures for hazard-free semi-parallel decoding of LDPC codes'", EURASIP Journal on Embedded Systems, vol. 2009, Article ID 704174 on our recent work "Techniques and architectures for hazard-free semi-parallel decoding of LDPC codes", EURASIP Journal on Embedded Systems, vol. 2009, Article ID 723465.

  14. Differentially Encoded LDPC Codes—Part II: General Case and Code Optimization

    Directory of Open Access Journals (Sweden)

    Jing Li (Tiffany)

    2008-04-01

    Full Text Available This two-part series of papers studies the theory and practice of differentially encoded low-density parity-check (DE-LDPC) codes, especially in the context of noncoherent detection. Part I showed that a special class of DE-LDPC codes, product accumulate codes, perform very well with both coherent and noncoherent detection. The analysis here reveals that a conventional LDPC code, however, is not well suited to differential coding and does not, in general, deliver a desirable performance when detected noncoherently. Through extrinsic information transfer (EXIT) analysis and a modified “convergence-constraint” density evolution (DE) method developed here, we provide a characterization of the type of LDPC degree profiles that work in harmony with differential detection (or a recursive inner code in general), and demonstrate how to optimize these LDPC codes. The convergence-constraint method provides a useful extension to the conventional “threshold-constraint” method, and can match an outer LDPC code to any given inner code with the imperfectness of the inner decoder taken into consideration.

  15. Analysis and Construction of Full-Diversity Joint Network-LDPC Codes for Cooperative Communications

    Directory of Open Access Journals (Sweden)

    Capirone Daniele

    2010-01-01

    Full Text Available Transmit diversity is necessary in harsh environments to reduce the required transmit power for achieving a given error performance at a certain transmission rate. In networks, cooperative communication is a well-known technique to yield transmit diversity, and network coding can increase the spectral efficiency. These two techniques can be combined to achieve a double diversity order for a maximum coding rate on the Multiple-Access Relay Channel (MARC), where two sources share a common relay in their transmission to the destination. However, codes have to be carefully designed to obtain the intrinsic diversity offered by the MARC. This paper presents the principles for designing a family of full-diversity LDPC codes with maximum rate. Simulation of the word-error-rate performance of the newly proposed family of LDPC codes for the MARC confirms the full diversity.

  16. LDPC product coding scheme with extrinsic information for bit patterned media recording

    Directory of Open Access Journals (Sweden)

    Seongkwon Jeong

    2017-05-01

    Full Text Available Since the density limit of the current perpendicular magnetic storage system will soon be reached, bit patterned media recording (BPMR) is a promising candidate for the next generation storage system to achieve an areal density beyond 1 Tb/in2. Each recording bit is stored in a fabricated magnetic island and the space between the magnetic islands is nonmagnetic in BPMR. To approach recording densities of 1 Tb/in2, the spacing of the magnetic islands must be less than 25 nm. Consequently, severe inter-symbol interference (ISI) and inter-track interference (ITI) occur. ITI and ISI degrade the performance of BPMR. In this paper, we propose a low-density parity check (LDPC) product coding scheme that exploits extrinsic information for BPMR. This scheme shows an improved bit error rate performance compared to a scheme in which a single LDPC code is used.

  17. LDPC product coding scheme with extrinsic information for bit patterned media recording

    Science.gov (United States)

    Jeong, Seongkwon; Lee, Jaejin

    2017-05-01

    Since the density limit of the current perpendicular magnetic storage system will soon be reached, bit patterned media recording (BPMR) is a promising candidate for the next generation storage system to achieve an areal density beyond 1 Tb/in2. Each recording bit is stored in a fabricated magnetic island and the space between the magnetic islands is nonmagnetic in BPMR. To approach recording densities of 1 Tb/in2, the spacing of the magnetic islands must be less than 25 nm. Consequently, severe inter-symbol interference (ISI) and inter-track interference (ITI) occur. ITI and ISI degrade the performance of BPMR. In this paper, we propose a low-density parity check (LDPC) product coding scheme that exploits extrinsic information for BPMR. This scheme shows an improved bit error rate performance compared to a scheme in which a single LDPC code is used.

  18. Performance analysis of WS-EWC coded optical CDMA networks with/without LDPC codes

    Science.gov (United States)

    Huang, Chun-Ming; Huang, Jen-Fa; Yang, Chao-Chin

    2010-10-01

    One extended Welch-Costas (EWC) code family for wavelength-division-multiplexing/spectral-amplitude-coding (WDM/SAC; WS) optical code-division multiple-access (OCDMA) networks is proposed. This system has superior performance compared to the previous modified quadratic congruence (MQC) coded OCDMA networks. However, since the performance of such a network is unsatisfactory at higher data bit rates, a class of quasi-cyclic low-density parity-check (QC-LDPC) codes is adopted to improve it. Simulation results show that the performance of the high-speed WS-EWC coded OCDMA network can be greatly improved by using LDPC codes.

  19. Simulation of Low Density Parity Check (LDPC) coding under the DVB-T2 standard

    OpenAIRE

    Kurniawan, Yusuf; Hafizh, Idham

    2014-01-01

    This article presents a simulation implementation of encoding-decoding performed on a sample of random binary data according to the standard used in Digital Video Broadcasting – Terrestrial 2nd Generation (DVB-T2), using MATLAB. Low Density Parity Check (LDPC) coding is used in the encoding-decoding process as a feature for error correction during data transmission. The modulation used in the simulation is BPSK with an AWGN channel model. In the simulation, a comparison...

  20. Construction of Rate-Compatible LDPC Codes Utilizing Information Shortening and Parity Puncturing

    Directory of Open Access Journals (Sweden)

    Jones Christopher R

    2005-01-01

    Full Text Available This paper proposes a method for constructing rate-compatible low-density parity-check (LDPC) codes. The construction considers the problem of optimizing a family of rate-compatible degree distributions as well as the placement of bipartite graph edges. A hybrid approach that combines information shortening and parity puncturing is proposed. Local graph conditioning techniques for the suppression of error floors are also included in the construction methodology.
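
    The rate bookkeeping behind such a hybrid family is simple: shortening removes information bits (fixed to zero and not transmitted), while puncturing removes transmitted parity bits. A small sketch (the mother-code size is hypothetical):

```python
def rc_rates(n, k, shorten, puncture):
    """Effective rate of an (n, k) mother LDPC code after shortening s
    information bits and puncturing p transmitted bits:
        R = (k - s) / (n - s - p)."""
    return [(k - s) / (n - s - p) for s, p in zip(shorten, puncture)]

# Example family derived from a hypothetical (2048, 1024) mother code
print(rc_rates(2048, 1024, shorten=[0, 256, 0], puncture=[0, 0, 512]))
# -> [0.5, 0.4285..., 0.6666...]
```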

  1. Co-operation of digital nonlinear equalizers and soft-decision LDPC FEC in nonlinear transmission.

    Science.gov (United States)

    Tanimura, Takahito; Oda, Shoichiro; Hoshida, Takeshi; Aoki, Yasuhiko; Tao, Zhenning; Rasmussen, Jens C

    2013-12-30

    We experimentally and numerically investigated the characteristics of 128 Gb/s dual-polarization quadrature phase shift keying signals received with two types of nonlinear equalizers (NLEs) followed by soft-decision (SD) low-density parity-check (LDPC) forward error correction (FEC). Successful co-operation between SD-FEC and NLEs over various nonlinear transmission scenarios was demonstrated by optimizing the NLE parameters.

  2. Study regarding the density evolution of messages and the characteristic functions associated with an LDPC code

    Science.gov (United States)

    Drăghici, S.; Proştean, O.; Răduca, E.; Haţiegan, C.; Hălălae, I.; Pădureanu, I.; Nedeloni, M.; (Barboni Haţiegan, L.

    2017-01-01

    In this paper we present a method that associates a set of characteristic functions with an LDPC code, together with functions that represent the density evolution of the messages passed along the edges of its Tanner graph. Graphic representations of the density evolution are shown, and the likelihood threshold that marks the asymptotic boundary between decodable and undecodable codes is studied and simulated using MathCad V14 software.
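
    Density evolution in its simplest setting, a regular ensemble on the binary erasure channel, reduces to a one-dimensional recursion whose decodability threshold can be found by bisection. The sketch below illustrates the kind of threshold computation described above (the record itself works with general message densities in MathCad):

```python
def bec_threshold(dv, dc, tol=1e-6):
    """Density-evolution threshold of a regular (dv, dc) LDPC ensemble on
    the binary erasure channel. The erasure fraction evolves as
        x_{t+1} = eps * (1 - (1 - x_t)**(dc-1))**(dv-1),
    and the threshold is the largest eps for which x_t -> 0."""
    def converges(eps, iters=2000):
        x = eps
        for _ in range(iters):
            x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
            if x < 1e-12:
                return True
        return False
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if converges(mid) else (lo, mid)
    return lo

print(round(bec_threshold(3, 6), 4))   # about 0.4294 for the (3,6) ensemble
```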

  3. Optimized Fast Walsh–Hadamard Transform on GPUs for non-binary LDPC decoding

    OpenAIRE

    Andrade, Joao; Falcao, Gabriel; Silva, Vitor

    2014-01-01

    The Fourier Transform Sum-Product Algorithm (FT-SPA) used in non-binary Low-Density Parity-Check (LDPC) decoding makes extensive use of the Walsh–Hadamard Transform (WHT). We have developed a massively parallel Fast Walsh–Hadamard Transform (FWHT) which exploits the Graphics Processing Unit (GPU) pipeline and memory hierarchy, thereby minimizing the level of memory bank conflicts and maximizing the number of returned instructions per clock cycle for different generations of graphics processor...
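
    The transform at the heart of this work has a simple radix-2 butterfly structure, shown below in plain NumPy (a CPU reference only; the paper's contribution is the GPU mapping, which is not reproduced here):

```python
import numpy as np

def fwht(a):
    """Iterative Fast Walsh-Hadamard Transform (butterfly form), the
    radix-2 structure that maps naturally onto GPU thread blocks.
    Length must be a power of two; the transform is unnormalised."""
    a = np.array(a, dtype=float)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
        h *= 2
    return a

print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))
```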

  4. A Low-Complexity Joint Detection-Decoding Algorithm for Nonbinary LDPC-Coded Modulation Systems

    OpenAIRE

    Wang, Xuepeng; Bai, Baoming; Ma, Xiao

    2010-01-01

    In this paper, we present a low-complexity joint detection-decoding algorithm for nonbinary LDPC coded-modulation systems. The algorithm combines hard-decision decoding using the message-passing strategy with the signal detector in an iterative manner. It requires low computational complexity, offers good system performance and has a fast rate of decoding convergence. Compared to the q-ary sum-product algorithm (QSPA), it provides an attractive candidate for practical applications of q-ary LDP...

  5. Differentially Encoded LDPC Codes—Part I: Special Case of Product Accumulate Codes

    Directory of Open Access Journals (Sweden)

    Jing Li (Tiffany)

    2008-01-01

    Full Text Available Part I of a two-part series investigates product accumulate codes, a special class of differentially encoded low-density parity-check (DE-LDPC) codes with high performance and low complexity, on flat Rayleigh fading channels. In the coherent detection case, Divsalar's simple bounds and iterative thresholds using density evolution are computed to quantify the code performance at finite and infinite lengths, respectively. In the noncoherent detection case, a simple iterative differential detection and decoding (IDDD) receiver is proposed and shown to be robust for different Doppler shifts. Extrinsic information transfer (EXIT) charts reveal that, with pilot-symbol-assisted differential detection, the widespread practice of inserting pilot symbols to terminate the trellis actually incurs a loss in capacity, and a more efficient way is to separate the pilots from the trellis. Through analysis and simulations, it is shown that PA codes perform very well with both coherent and noncoherent detection. The more general case of DE-LDPC codes, where the LDPC part may take arbitrary degree profiles, is studied in Part II (Li, 2008).

  6. 45 Gb/s low complexity optical front-end for soft-decision LDPC decoders.

    Science.gov (United States)

    Sakib, Meer Nazmus; Moayedi, Monireh; Gross, Warren J; Liboiron-Ladouceur, Odile

    2012-07-30

    In this paper a low-complexity and energy-efficient 45 Gb/s soft-decision optical front-end to be used with soft-decision low-density parity-check (LDPC) decoders is demonstrated. The results show that the optical front-end exhibits net coding gains of 7.06 and 9.62 dB at post-forward-error-correction bit error rates of 10(-7) and 10(-12) for the long-block-length LDPC(32768,26803) code. The gain over a hard-decision front-end is 1.9 dB for this code. It is shown that the soft-decision circuit can also be used as a 2-bit flash-type analog-to-digital converter (ADC) in conjunction with equalization schemes. At a bit rate of 15 Gb/s, using RS(255,239), LDPC(672,336), (672, 504), (672, 588), and (1440, 1344) with a 6-tap finite impulse response (FIR) equalizer results in optical power savings of 3, 5, 7, 9.5 and 10.5 dB, respectively. The 2-bit flash ADC consumes only 2.71 W at 32 GSamples/s. At 45 GSamples/s the power consumption is estimated to be 4.95 W.

  7. Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.

    Science.gov (United States)

    Häger, Christian; Amat, Alexandre Graell I; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik

    2014-06-16

    Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.

  8. Efficacy analysis of LDPC coded APSK modulated differential space-time-frequency coded for wireless body area network using MB-pulsed OFDM UWB technology.

    Science.gov (United States)

    Manimegalai, C T; Gauni, Sabitha; Kalimuthu, K

    2017-12-04

    Wireless body area network (WBAN) is a breakthrough technology in healthcare areas such as hospitals and telemedicine. The human body is a complex mixture of different tissues, and the nature of electromagnetic signal propagation is expected to be distinct in each of them. This forms the basis for the WBAN, which differs from other environments. In this paper, knowledge of the Ultra Wide Band (UWB) channel is explored in the WBAN (IEEE 802.15.6) system. Measurements of parameters in the 3.1-10.6 GHz frequency range are taken. The proposed system transmits data at up to 480 Mbps by using LDPC-coded APSK-modulated differential space-time-frequency coded MB-OFDM to increase throughput and power efficiency.

  9. Design of high-performance decoders for LDPC codes

    OpenAIRE

    Angarita Preciado, Fabian Enrique

    2013-01-01

    In this thesis, decoding algorithms for low-density parity-check (LDPC) codes and architectures for their hardware implementation have been investigated. The work focuses on message-passing algorithms for structured codes, which are included in several communication standards. Initially, the performance of the existing Sum-Product and Min-Sum algorithms and the main variants of...

  10. A new LDPC decoding scheme for PDM-8QAM BICM coherent optical communication system

    Science.gov (United States)

    Liu, Yi; Zhang, Wen-bo; Xi, Li-xia; Tang, Xian-feng; Zhang, Xiao-guang

    2015-11-01

    A new log-likelihood ratio (LLR) message estimation method is proposed for the polarization-division multiplexing eight quadrature amplitude modulation (PDM-8QAM) bit-interleaved coded modulation (BICM) optical communication system. The formulation of the posterior probability is theoretically analyzed, and a way to reduce the pre-decoding bit error rate (BER) of the low-density parity-check (LDPC) decoder for PDM-8QAM constellations is presented. Simulation results show that the proposed scheme outperforms the traditional one: the post-decoding BER is reduced to 50% of that of the traditional post-decoding algorithm.
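
    For orientation, the sketch below shows the generic max-log LLR demapper that feeds an LDPC decoder in a BICM receiver; it is the textbook approximation, not the refined posterior-probability estimate proposed in the paper, and the Gray-labelled QPSK example is a stand-in for PDM-8QAM.

        import numpy as np

        def max_log_llrs(r, constellation, bit_labels, noise_var):
            # LLR_i = (min over symbols with bit i = 1 of |r-s|^2
            #          - min over symbols with bit i = 0 of |r-s|^2) / noise_var,
            # with the convention that LLR > 0 favours bit = 0.
            d2 = np.abs(r - constellation) ** 2
            llrs = np.empty(bit_labels.shape[1])
            for i in range(bit_labels.shape[1]):
                d0 = d2[bit_labels[:, i] == 0].min()
                d1 = d2[bit_labels[:, i] == 1].min()
                llrs[i] = (d1 - d0) / noise_var
            return llrs

        # Example with Gray-labelled QPSK as a stand-in constellation.
        qpsk = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
        labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])
        print(max_log_llrs(0.9 + 0.8j, qpsk, labels, noise_var=0.1))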

  11. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes

    Science.gov (United States)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity-check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.

  12. Percolation bounds for decoding thresholds with correlated erasures in quantum LDPC codes

    Science.gov (United States)

    Hamilton, Kathleen; Pryadko, Leonid

    Correlations between errors can dramatically affect decoding thresholds, in some cases eliminating the threshold altogether. We analyze the existence of a threshold for quantum low-density parity-check (LDPC) codes in the case of correlated erasures. When erasures are positively correlated, the corresponding multi-variate Bernoulli distribution can be modeled in terms of cluster errors, where qubits in clusters of various size can be marked all at once. In a code family with distance scaling as a power law of the code length, erasures can be always corrected below percolation on a qubit adjacency graph associated with the code. We bound this correlated percolation transition by weighted (uncorrelated) percolation on a specially constructed cluster connectivity graph, and apply our recent results to construct several bounds for the latter. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-14-1-0272.

  13. Recursive construction of (J,L) QC LDPC codes with girth 6

    Directory of Open Access Journals (Sweden)

    Mohammad Gholami

    2016-06-01

    Full Text Available In this paper, a recursive algorithm is presented to generate some exponent matrices which correspond to Tanner graphs with girth at least 6. For a J×L exponent matrix E, the lower bound Q(E) is obtained explicitly such that (J,L) QC LDPC codes with girth at least 6 exist for any circulant permutation matrix (CPM) size m ≥ Q(E). The results show that the exponent matrices constructed with our recursive algorithm have a smaller lower bound than the ones proposed recently with girth 6.
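
    The role of the CPM size m can be made concrete with the standard 4-cycle condition for QC LDPC codes (due to Fossorier): the lifted Tanner graph has girth at least 6 iff no pair of rows (j1, j2) and columns (l1, l2) satisfies E[j1,l1] - E[j1,l2] + E[j2,l2] - E[j2,l1] ≡ 0 (mod m). The sketch below checks this condition and finds the smallest admissible m by brute force, a stand-in for the paper's explicit bound Q(E); function names are illustrative.

        import itertools
        import numpy as np

        def girth_at_least_6(E, m):
            # Fossorier's 4-cycle condition for the lifted Tanner graph.
            E = np.asarray(E)
            J, L = E.shape
            for j1, j2 in itertools.combinations(range(J), 2):
                for l1, l2 in itertools.combinations(range(L), 2):
                    if (E[j1, l1] - E[j1, l2] + E[j2, l2] - E[j2, l1]) % m == 0:
                        return False
            return True

        def smallest_cpm_size(E, m_max=100000):
            # Brute-force stand-in for the explicit lower bound Q(E):
            # the smallest CPM size m that yields girth >= 6.
            return next(m for m in range(2, m_max) if girth_at_least_6(E, m))

        print(smallest_cpm_size([[0, 0, 0], [0, 1, 2]]))  # -> 3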

  14. LDPC Code Design for Nonuniform Power-Line Channels

    Directory of Open Access Journals (Sweden)

    Sanaei Ali

    2007-01-01

    Full Text Available We investigate low-density parity-check code design for discrete multitone channels over power lines. Discrete multitone channels are well modeled as nonuniform channels, that is, different bits experience different channel parameters. We propose a coding system for discrete multitone channels that allows a single code to be used over a nonuniform channel. The number of code parameters for the proposed system is much greater than in a conventional channel code, so search-based optimization methods are impractical. We first formulate the problem of optimizing the rate of an irregular low-density parity-check code, with guaranteed convergence over a general nonuniform channel, as an iterative linear program, which is significantly more efficient than search-based methods. Then we use this technique for a typical power-line channel. The methodology of this paper is directly applicable to all decoding algorithms for which a density evolution analysis is possible.

  15. A Novel Modified Algorithm with Reduced Complexity LDPC Code Decoder

    Directory of Open Access Journals (Sweden)

    Song Yang

    2014-10-01

    Full Text Available A novel, efficient decoding algorithm that reduces the complexity of the sum-product algorithm (SPA) for LDPC codes is proposed. Based on the hyperbolic tangent rule, the check-node update is modified into two horizontal processes with similar calculations. Motivated by the observation that the min-sum (MS) algorithm reduces complexity by reducing the approximation error in the horizontal process, the low-weight part of the information is simplified. Compared with existing approximations, the proposed method has lower computational complexity than the SPA. Simulation results show that the proposed algorithm achieves performance very close to the SPA.
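
    To make the contrast with the sum-product check-node update concrete, the following hedged sketch places the exact tanh rule next to the normalized min-sum approximation it simplifies; the scaling factor 0.75 is a typical textbook choice, not a value taken from this paper.

        import numpy as np

        def check_update_spa(llrs):
            # Exact tanh-rule update: the extrinsic LLR on each edge comes from
            # the product of tanh(L/2) over all the other edges.
            t = np.tanh(np.asarray(llrs, dtype=float) / 2.0)
            out = np.empty_like(t)
            for i in range(t.size):
                p = np.clip(np.prod(np.delete(t, i)), -0.999999, 0.999999)
                out[i] = 2.0 * np.arctanh(p)
            return out

        def check_update_min_sum(llrs, scale=0.75):
            # Min-sum approximation: magnitude = min of the other magnitudes,
            # sign = product of the other signs; the normalization factor
            # partially compensates the overestimation of the tanh rule.
            llrs = np.asarray(llrs, dtype=float)
            out = np.empty_like(llrs)
            for i in range(llrs.size):
                rest = np.delete(llrs, i)
                out[i] = scale * np.prod(np.sign(rest)) * np.min(np.abs(rest))
            return out

        print(check_update_spa([1.2, -0.4, 2.5]))      # exact
        print(check_update_min_sum([1.2, -0.4, 2.5]))  # approximation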

  16. Comments on “Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes”

    Directory of Open Access Journals (Sweden)

    Mark B. Yeary

    2009-01-01

    Full Text Available This is a comment article on the publication “Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes” by Rovini et al. (2009). We note that similar work has been reported in the literature before, and that this previous work has not been cited correctly, for example Gunnam et al. (2006, 2007). This brief note serves to clarify these issues.

  17. BER EVALUATION OF LDPC CODES WITH GMSK IN NAKAGAMI FADING CHANNEL

    Directory of Open Access Journals (Sweden)

    Surbhi Sharma

    2010-06-01

    Full Text Available LDPC codes (Low Density Parity Check Codes) have already proved their efficacy, showing performance near the Shannon limit. Channel coding schemes can be spectrally inefficient, as using an unfiltered binary data stream to modulate an RF carrier produces an RF spectrum of considerable bandwidth. Techniques have been developed to improve this spectral efficiency and to ease detection. GMSK, or Gaussian-filtered Minimum Shift Keying, uses a Gaussian filter of an appropriate bandwidth so as to make the system spectrally efficient. A Nakagami model provides a better explanation of both less and more severe conditions than the Rayleigh and Rician models, and provides a better fit to mobile communication channel data. In this paper we demonstrate the performance of Low Density Parity Check codes with the GMSK modulation technique (BT product = 0.25) in a Nakagami fading channel. The results show that the average bit error rate decreases as the 'm' parameter increases (less fading).

  18. Analysis of error floor of LDPC codes under LP decoding over the BSC

    Energy Technology Data Exchange (ETDEWEB)

    Chertkov, Michael [Los Alamos National Laboratory]; Chilappagari, Shashi [UNIV OF AZ]; Vasic, Bane [UNIV OF AZ]; Stepanov, Mikhail [UNIV OF AZ]

    2009-01-01

    We consider linear programming (LP) decoding of a fixed low-density parity-check (LDPC) code over the binary symmetric channel (BSC). The LP decoder fails when it outputs a pseudo-codeword which is not a codeword. We propose an efficient algorithm termed the instanton search algorithm (ISA) which, given a random input, generates a set of flips called the BSC-instanton, and we prove that: (a) the LP decoder fails for any set of flips with support vector including an instanton; (b) for any input, the algorithm outputs an instanton in a number of steps upper-bounded by twice the number of flips in the input. We obtain the number of unique instantons of different sizes by running the ISA a sufficient number of times. We then use the instanton statistics to predict the performance of LP decoding over the BSC in the error floor region. We also propose an efficient semi-analytical method to predict the performance of LP decoding over a large range of transition probabilities of the BSC.

  19. A Low-Complexity and High-Performance 2D Look-Up Table for LDPC Hardware Implementation

    Science.gov (United States)

    Chen, Jung-Chieh; Yang, Po-Hui; Lain, Jenn-Kaie; Chung, Tzu-Wen

    In this paper, we propose a low-complexity, high-efficiency two-dimensional look-up table (2D LUT) for carrying out the sum-product algorithm in the decoding of low-density parity-check (LDPC) codes. Instead of employing adders for the core operation when updating check node messages, in the proposed scheme, the main term and correction factor of the core operation are successfully merged into a compact 2D LUT. Simulation results indicate that the proposed 2D LUT not only attains close-to-optimal bit error rate performance but also enjoys a low complexity advantage that is suitable for hardware implementation.
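
    The core operation being tabulated is the pairwise box-plus of two LLRs: a min term plus a Jacobian correction. A minimal sketch of merging both into a single 2D table over quantized magnitudes is shown below; the quantizer step and table size are invented placeholders.

        import numpy as np

        def boxplus_exact(a, b):
            # Exact pairwise check-node core operation ("box-plus"):
            # min term plus two Jacobian correction factors.
            return (np.sign(a) * np.sign(b) * np.minimum(abs(a), abs(b))
                    + np.log1p(np.exp(-abs(a + b)))
                    - np.log1p(np.exp(-abs(a - b))))

        # Build a 2D LUT over quantized magnitudes, merging the main term and
        # the correction factor into a single table lookup.
        STEP, LEVELS = 0.25, 64                 # hypothetical quantizer
        mags = np.arange(LEVELS) * STEP
        LUT = boxplus_exact(mags[:, None], mags[None, :])   # |a| x |b| table

        def boxplus_lut(a, b):
            # Sign is restored outside the table; the LUT stores the
            # magnitude part, which is symmetric in |a| and |b|.
            ia = min(int(round(abs(a) / STEP)), LEVELS - 1)
            ib = min(int(round(abs(b) / STEP)), LEVELS - 1)
            return np.sign(a) * np.sign(b) * LUT[ia, ib]

        print(boxplus_exact(1.3, -2.1), boxplus_lut(1.3, -2.1))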

  20. Design and performance investigation of LDPC-coded upstream transmission systems in IM/DD OFDM-PONs

    Science.gov (United States)

    Gong, Xiaoxue; Guo, Lei; Wu, Jingjing; Ning, Zhaolong

    2016-12-01

    In Intensity-Modulation Direct-Detection (IM/DD) Orthogonal Frequency Division Multiplexing Passive Optical Networks (OFDM-PONs), aside from the Subcarrier-to-Subcarrier Intermixing Interferences (SSII) induced by square-law detection, the use of the same laser frequency for data sent from Optical Network Units (ONUs) results in ONU-to-ONU Beating Interferences (OOBI) at the receiver. To mitigate these interferences, we design a Low-Density Parity-Check (LDPC)-coded and spectrum-efficient upstream transmission system. A theoretical channel model is also derived, in order to analyze the detrimental factors influencing system performance. Simulation results demonstrate that the receiver sensitivity is improved by 3.4 dB and 2.5 dB under QPSK and 8QAM, respectively, after 100 km Standard Single-Mode Fiber (SSMF) transmission. Furthermore, the spectrum efficiency can be improved by about 50%.

  1. Opportunistic error correction for OFDM-based DVB systems

    NARCIS (Netherlands)

    Shao, X.; Slump, Cornelis H.

    2013-01-01

    DVB-T2 (second-generation terrestrial digital video broadcasting) employs LDPC (Low Density Parity Check) codes combined with BCH (Bose-Chaudhuri-Hocquenghem) codes, which have better performance in comparison to the convolutional and Reed-Solomon codes used in other OFDM-based DVB systems. However,

  2. A software reconfigurable optical multiband UWB system utilizing a bit-loading combined with adaptive LDPC code rate scheme

    Science.gov (United States)

    He, Jing; Dai, Min; Chen, Qinghui; Deng, Rui; Xiang, Changqing; Chen, Lin

    2017-07-01

    In this paper, an effective bit-loading combined with adaptive LDPC code rate (ALCR) algorithm is proposed and investigated in a software-reconfigurable multiband UWB-over-fiber system. To compensate for the power fading and chromatic dispersion of the high-frequency multiband OFDM UWB signal transmitted over standard single-mode fiber (SSMF), a Mach-Zehnder modulator (MZM) with negative chirp parameter is utilized. A negative power penalty of -1 dB for the 128 QAM multiband OFDM UWB signal is measured at the hard-decision forward error correction (HD-FEC) limit of 3.8 × 10^-3 after 50 km SSMF transmission. The experimental results show that, compared to a fixed coding scheme with a code rate of 75%, the signal-to-noise ratio (SNR) is improved by 2.79 dB for the 128 QAM multiband OFDM UWB system after 100 km SSMF transmission using the ALCR algorithm. Moreover, by employing bit-loading combined with the ALCR algorithm, the bit error rate (BER) performance of the system can be further improved. The simulation results show that, at the HD-FEC limit, the Q factor is improved by 3.93 dB at an SNR of 19.5 dB over 100 km SSMF transmission, compared to fixed modulation with an uncoded scheme at the same spectral efficiency (SE).
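
    As a rough illustration of how bit-loading interacts with adaptive code-rate selection, the sketch below assigns each subcarrier the largest constellation whose SNR threshold it clears and then picks an LDPC code rate from the remaining margin. All thresholds and rate cut-offs are invented placeholders, not measured values from this system.

        # Hypothetical per-QAM SNR thresholds (dB) and LDPC code rates.
        QAM_THRESHOLDS = [(24.0, 128), (21.0, 64), (18.0, 32),
                          (15.0, 16), (12.0, 8), (9.0, 4)]
        CODE_RATES = [(6.0, 0.875), (4.0, 0.75), (2.0, 0.625), (0.0, 0.5)]

        def load_subcarrier(snr_db):
            # Return (QAM order, code rate) for one subcarrier, or (0, 0.0)
            # if the subcarrier is too weak to carry data.
            for thresh, order in QAM_THRESHOLDS:
                if snr_db >= thresh:
                    margin = snr_db - thresh
                    rate = next(r for cut, r in CODE_RATES if margin >= cut)
                    return order, rate
            return 0, 0.0

        # Example: a fading SNR profile across four subcarriers.
        for snr in [26.5, 19.2, 13.0, 7.5]:
            print(snr, "->", load_subcarrier(snr))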

  3. 428-Gb/s single-channel coherent optical OFDM transmission over 960-km SSMF with constellation expansion and LDPC coding.

    Science.gov (United States)

    Yang, Qi; Al Amin, Abdullah; Chen, Xi; Ma, Yiran; Chen, Simin; Shieh, William

    2010-08-02

    High-order modulation formats and advanced error correcting codes (ECC) are two promising techniques for improving the performance of ultrahigh-speed optical transport networks. In this paper, we present record receiver sensitivity for 107 Gb/s CO-OFDM transmission via constellation expansion to 16-QAM and rate-1/2 LDPC coding. We also show the single-channel transmission of a 428-Gb/s CO-OFDM signal over 960-km standard-single-mode-fiber (SSMF) without Raman amplification.

  4. A study on the construction, performance and VHDL implementation of binary, irregular and structured LDPC codes for application in optical communications

    OpenAIRE

    Antônio Unias de Lucena

    2015-01-01

    Abstract: The use of LDPC codes in optical communications has received special attention in recent years due to their high error-correction capability, which enables longer links with higher transmission capacity. This dissertation presents a study of binary, irregular and structured LDPC codes (IE-LDPC), as well as a performance comparison of two algorithms commonly used for decoding LDPC codes: the sum-pro...

  5. Progressive transmission of images over fading channels using rate-compatible LDPC codes.

    Science.gov (United States)

    Pan, Xiang; Banihashemi, Amir H; Cuhadar, Aysegul

    2006-12-01

    In this paper, we propose a combined source/channel coding scheme for transmission of images over fading channels. The proposed scheme employs rate-compatible low-density parity-check codes along with embedded image coders such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). The assignment of channel coding rates to source packets is performed by a fast trellis-based algorithm. We examine the performance of the proposed scheme over correlated and uncorrelated Rayleigh flat-fading channels with and without side information. Simulation results for the expected peak signal-to-noise ratio of reconstructed images, which are within 1 dB of the capacity upper bound over a wide range of channel signal-to-noise ratios, show considerable improvement compared to existing results under similar conditions. We also study the sensitivity of the proposed scheme in the presence of channel estimation error at the transmitter and demonstrate that under most conditions our scheme is more robust compared to existing schemes.

  6. Verification-Based Interval-Passing Algorithm for Compressed Sensing

    OpenAIRE

    Wu, Xiaofu; Yang, Zhen

    2013-01-01

    We propose a verification-based Interval-Passing (IP) algorithm for the iterative reconstruction of nonnegative sparse signals using parity-check matrices of low-density parity-check (LDPC) codes as measurement matrices. The proposed algorithm can be considered an improved IP algorithm that further incorporates the mechanism of the verification algorithm. It is proved that the proposed algorithm always performs better than either the IP algorithm or the verification algorithm. Simulation resul...

  7. Quality-driven model-based design of multi-processor accelerators : an application to LDPC decoders

    NARCIS (Netherlands)

    Jan, Y.

    2012-01-01

    The recent spectacular progress in nano-electronic technology has enabled the implementation of very complex multi-processor systems on single chips (MPSoCs). However in parallel, new highly demanding complex embedded applications are emerging, in fields like communication and networking,

  8. Code-Hopping Based Transmission Scheme for Wireless Physical-Layer Security

    Directory of Open Access Journals (Sweden)

    Liuguo Yin

    2018-01-01

    Full Text Available Due to the broadcast and time-varying natures of wireless channels, traditional communication systems that provide data encryption at the application layer suffer many challenges such as error diffusion. In this paper, we propose a code-hopping based secrecy transmission scheme that uses dynamic nonsystematic low-density parity-check (LDPC) codes and an automatic repeat-request (ARQ) mechanism to jointly encode and encrypt source messages at the physical layer. In this scheme, secret keys at the transmitter and the legitimate receiver are generated dynamically upon the source messages that have been transmitted successfully. During the transmission, each source message is jointly encoded and encrypted by a parity-check matrix, which is dynamically selected from a set of LDPC matrices based on the shared dynamic secret key. As for the eavesdropper (Eve), the uncorrectable decoding errors prevent her from generating the same secret key as the legitimate parties. Thus she cannot select the correct LDPC matrix to recover the source message. We demonstrate that our scheme can be compatible with traditional cryptosystems and enhance the security without sacrificing the error-correction performance. Numerical results show that the bit error rate (BER) of Eve approaches 0.5 as the number of transmitted source messages increases and the security gap of the system is small.
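
    The synchronization idea, both ends deriving the next parity-check matrix index from the history of successfully delivered messages, can be sketched as follows; the SHA-256 key-derivation step and the state update rule are assumptions of this sketch, not the paper's key schedule.

        import hashlib

        def next_matrix_index(shared_state, n_matrices):
            # Both ends hash the same history of successfully delivered
            # messages, so their matrix selections stay synchronized, while an
            # eavesdropper who missed a message cannot reproduce the index.
            digest = hashlib.sha256(shared_state).digest()
            return int.from_bytes(digest[:4], "big") % n_matrices

        # Toy usage: the dynamic state evolves with each acknowledged message.
        state = b"initial-secret"
        for msg in [b"frame0", b"frame1", b"frame2"]:
            idx = next_matrix_index(state, n_matrices=16)
            print("encode/encrypt frame with H[%d]" % idx)
            state = state + msg  # updated only after ARQ confirms delivery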

  9. Two-stage cross-talk mitigation in an orbital-angular-momentum-based free-space optical communication system.

    Science.gov (United States)

    Qu, Zhen; Djordjevic, Ivan B

    2017-08-15

    We propose and experimentally demonstrate a two-stage cross-talk mitigation method in an orbital-angular-momentum (OAM)-based free-space optical communication system, which is enabled by combining spatial offset and low-density parity-check (LDPC) coded nonuniform signaling. Different from traditional OAM multiplexing, where the OAM modes are centrally aligned for copropagation, the adjacent OAM modes (OAM states 2 and -6 and OAM states -2 and 6) in our proposed scheme are spatially offset to mitigate the mode cross talk. Different from traditional rectangular modulation formats, which transmit equidistant signal points with uniform probability, the 5-quadrature amplitude modulation (5-QAM) and 9-QAM are introduced to relieve cross-talk-induced performance degradation. The 5-QAM and 9-QAM formats are based on the Huffman coding technique, which can potentially achieve great cross-talk tolerance by combining them with corresponding nonbinary LDPC codes. We demonstrate that cross talk can be reduced by 1.6 dB and 1 dB via spatial offset for OAM states ±2 and ±6, respectively. Compared to quadrature phase shift keying and 8-QAM formats, the LDPC-coded 5-QAM and 9-QAM are able to bring 1.1 dB and 5.4 dB performance improvements in the presence of atmospheric turbulence, respectively.

  10. Robotic Mobile System's Performance-Based MIMO-OFDM Technology

    Directory of Open Access Journals (Sweden)

    Omar Alani

    2009-10-01

    Full Text Available In this paper, a predistortion neural network (PDNN) architecture has been applied to the Sniffer Mobile Robot (SNFRbot), which is based on spatially multiplexed wireless Orthogonal Frequency Division Multiplexing (OFDM) transmission technology. The proposal improves system performance by combating one of the main drawbacks of OFDM technology: the high Peak-to-Average Power Ratio (PAPR). Simulation results show that using the PDNN results in better PAPR performance than previously published work based on linear coding, such as Low Density Parity Check (LDPC) codes and turbo encoding, whether using a flat fading channel or a Doppler spread channel.

  11. Parallel Subspace Subcodes of Reed-Solomon Codes for Magnetic Recording Channels

    Science.gov (United States)

    Wang, Han

    2010-01-01

    Read channel architectures based on a single low-density parity-check (LDPC) code are being considered for the next generation of hard disk drives. However, LDPC-only solutions suffer from the error floor problem, which may compromise reliability, if not handled properly. Concatenated architectures using an LDPC code plus a Reed-Solomon (RS) code…

  12. Improved Iterative Hard- and Soft-Reliability Based Majority-Logic Decoding Algorithms for Non-Binary Low-Density Parity-Check Codes

    Science.gov (United States)

    Xiong, Chenrong; Yan, Zhiyuan

    2014-10-01

    Non-binary low-density parity-check (LDPC) codes have some advantages over their binary counterparts, but unfortunately their decoding complexity is a significant challenge. The iterative hard- and soft-reliability based majority-logic decoding algorithms are attractive for non-binary LDPC codes, since they involve only finite field additions and multiplications as well as integer operations and hence have significantly lower complexity than other algorithms. In this paper, we propose two improvements to the majority-logic decoding algorithms. Instead of the accumulation of reliability information in the existing majority-logic decoding algorithms, our first improvement is a new reliability information update. The new update not only results in better error performance and fewer iterations on average, but also further reduces computational complexity. Since existing majority-logic decoding algorithms tend to have a high error floor for codes whose parity check matrices have low column weights, our second improvement is a re-selection scheme, which leads to much lower error floors, at the expense of more finite field operations and integer operations, by identifying periodic points, re-selecting intermediate hard decisions, and changing reliability information.

  13. Encoders for block-circulant LDPC codes

    Science.gov (United States)

    Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)

    2009-01-01

    Methods and apparatus to encode message input symbols in accordance with an accumulate-repeat-accumulate code with repetition three or four are disclosed. Block circulant matrices are used. A first method and apparatus make use of the block-circulant structure of the parity check matrix. A second method and apparatus use block-circulant generator matrices.
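
    The hardware appeal of block-circulant matrices is that each m × m circulant is fully described by its first row. A generic GF(2) circulant-times-vector sketch (not the patented encoder itself) illustrates this:

        import numpy as np

        def circulant_mul_gf2(first_row, v):
            # Row i of the circulant is first_row cyclically shifted right by
            # i, so the product reduces to XOR-ing cyclic shifts of v: only the
            # first row needs to be stored, which is what makes block-circulant
            # encoders hardware-friendly.
            out = np.zeros(len(first_row), dtype=np.uint8)
            for shift, bit in enumerate(first_row):
                if bit:
                    out ^= np.roll(v, -shift)
            return out

        # Example: 5x5 circulant with first row [1,0,1,0,0].
        print(circulant_mul_gf2(np.array([1, 0, 1, 0, 0], dtype=np.uint8),
                                np.array([1, 1, 0, 0, 1], dtype=np.uint8)))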

  14. Architecture and VLSI implementation of encoders for LDPC codes

    OpenAIRE

    Mahdi, Ahmed

    2010-01-01

    Error correction with LDPC codes is of great interest in important new telecommunication applications, such as satellite Digital Video Broadcast (DVB) DVB-S2, IEEE 802.3an (10GBASE-T) and IEEE 802.16 (WiMAX). LDPC codes belong to the category of linear block codes. They are codes for the detection and correction of transmission errors, whose main characteristic is their low-density parity-check matrix, from which they take their name...

  15. Design and implementation of an LDPC decoder for DVB-S2 systems

    OpenAIRE

    Κορδώνη, Μαρίνα

    2009-01-01

    Modern telecommunication systems have adopted error-correcting codes in order to increase their reliability when transmitting information. LDPC (Low-Density Parity-Check) codes are a class of codes that have recently attracted the attention of the scientific community because of their excellent performance. They are linear block codes with performance very close to the Shannon limit. Moreover, the easy parallelization of the decoding process...

  16. PPLN-waveguide-based polarization entangled QKD simulator

    Science.gov (United States)

    Gariano, John; Djordjevic, Ivan B.

    2017-08-01

    We have developed a comprehensive simulator to study the polarization-entangled quantum key distribution (QKD) system, which takes various imperfections into account. We assume that a type-II SPDC source using a PPLN-based nonlinear optical waveguide is used to generate entangled photon pairs and implements the BB84 protocol, using two mutually unbiased bases with two orthogonal polarizations in each basis. The entangled photon pairs are then simulated to be transmitted to both parties, Alice and Bob, through the optical channel, imperfect optical elements and onto the imperfect detector. It is assumed that Eve has no control over the detectors, and can only gain information from the public channel and the intercept-resend attack. The secure key rate (SKR) is calculated using an upper bound and by using actual code rates of LDPC codes implementable in FPGA hardware. After verifying the simulation results available in the literature for the ideal scenario, such as the pair generation rate and the number of errors due to multiple pairs, we introduce various imperfections. The results are then compared to previously reported experimental results where a BBO nonlinear crystal is used, and the improvements in SKRs are determined for the case when a PPLN waveguide is used instead.

  17. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    Directory of Open Access Journals (Sweden)

    Jin Li

    2014-01-01

    Full Text Available A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually works on board a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the DSC strategy of Slepian-Wolf (SW) based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with the CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches.

  18. Multispectral image compression based on DSC combined with CCSDS-IDC.

    Science.gov (United States)

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually works on board a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the DSC strategy of Slepian-Wolf (SW) based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with the CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches.

  19. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection

    Science.gov (United States)

    Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang

    2018-01-01

    In order to improve the performance of the hard-decision decoding algorithm for non-binary low-density parity-check (LDPC) codes and to reduce decoding complexity, a sum-of-the-magnitude hard-decision decoding algorithm based on loop update detection is proposed; this also serves the reliability, stability and high transmission rates required by 5G mobile communication. The algorithm builds on the hard-decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitudes is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and a loop update detection algorithm is introduced. The bits corresponding to the erroneous code word are flipped multiple times and searched in the order of most likely error probability until the correct code word is found. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced. PMID:29342963
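
    For orientation, a heavily simplified binary weighted-flipping decoder is sketched below; the paper's algorithm is a non-binary refinement with loop-update detection, which this sketch does not attempt to reproduce. The flipping metric used here, unsatisfied-check count minus channel reliability, is one common textbook choice.

        import numpy as np

        def weighted_bit_flip(H, llr, max_iter=50):
            # H: binary parity-check matrix (m x n, 0/1 ints); llr: channel LLRs.
            hard = (np.asarray(llr) < 0).astype(int)
            rel = np.abs(llr)
            for _ in range(max_iter):
                syndrome = (H @ hard) % 2
                if not syndrome.any():
                    return hard, True          # all checks satisfied
                unsat = H.T @ syndrome         # unsatisfied checks per bit
                metric = unsat - rel           # large = most suspicious bit
                hard[np.argmax(metric)] ^= 1   # flip the worst bit
            return hard, False

        H = np.array([[1, 1, 0, 1, 0, 0],
                      [0, 1, 1, 0, 1, 0],
                      [1, 0, 1, 0, 0, 1]])
        print(weighted_bit_flip(H, np.array([1.5, -0.2, 2.0, 0.8, -1.0, 0.3])))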

  20. Adaptive Channel Estimation based on Soft Information Processing in Broadband Spatial Multiplexing Receivers

    Directory of Open Access Journals (Sweden)

    P. Beinschob

    2010-11-01

    Full Text Available In this paper we present a novel approach to Multiple-Input Multiple-Output (MIMO) Orthogonal Frequency Division Multiplexing (OFDM) channel estimation, based on a Decision-Directed Recursive Least Squares (RLS) algorithm in which no pilot symbols need to be integrated in the data after a short initial preamble. The novelty and key concept of the proposed technique is the block-wise causal and anti-causal RLS processing that yields two independent RLS processings along with the associated decisions. Due to the usage of a low-density parity-check (LDPC) channel code, the receiver operates with soft information, which enables us to introduce a new modification of the Turbo principle as well as a simple information combining approach based on approximated a posteriori log-likelihood ratios (LLRs). Although the computational complexity is increased by both of our approaches, the latter is relatively less complex than the former. Simulation results show that these implementations outperform the simple RLS-DDCE algorithm and yield lower bit error rates (BER) and more accurate channel estimates.
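
    The decision-directed RLS core that the scheme builds on is compact enough to sketch. One complex-valued update step under standard textbook conventions is shown below, with decided symbols standing in for pilots; variable names and the initialization are assumptions of this sketch.

        import numpy as np

        def rls_update(w, P, x, d, lam=0.99):
            # One decision-directed RLS step: x = vector of (decided) symbols,
            # d = received sample, w = current channel estimate, P = inverse
            # input correlation matrix, lam = forgetting factor.
            x = x.reshape(-1, 1)
            k = (P @ x) / (lam + (x.conj().T @ P @ x).item())  # gain vector
            e = d - (w.conj().T @ x).item()                    # a priori error
            w = w + k * np.conj(e)
            P = (P - k @ (x.conj().T @ P)) / lam
            return w, P

        # Initialization: zero estimate, large multiple of the identity.
        n = 4
        w, P = np.zeros((n, 1), dtype=complex), 1e3 * np.eye(n, dtype=complex)
        w, P = rls_update(w, P, np.ones(n, dtype=complex), 1.0 + 0.2j)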

  1. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.

    Science.gov (United States)

    Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang

    2018-01-15

    In order to improve the performance of the hard-decision decoding algorithm for non-binary low-density parity-check (LDPC) codes and to reduce decoding complexity, a sum-of-the-magnitude hard-decision decoding algorithm based on loop update detection is proposed; this also serves the reliability, stability and high transmission rates required by 5G mobile communication. The algorithm builds on the hard-decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitudes is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and a loop update detection algorithm is introduced. The bits corresponding to the erroneous code word are flipped multiple times and searched in the order of most likely error probability until the correct code word is found. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.

  2. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection

    Directory of Open Access Journals (Sweden)

    Jiahui Meng

    2018-01-01

    Full Text Available In order to improve the performance of the hard-decision decoding algorithm for non-binary low-density parity-check (LDPC) codes and to reduce decoding complexity, a sum-of-the-magnitude hard-decision decoding algorithm based on loop update detection is proposed; this also serves the reliability, stability and high transmission rates required by 5G mobile communication. The algorithm builds on the hard-decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitudes is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and a loop update detection algorithm is introduced. The bits corresponding to the erroneous code word are flipped multiple times and searched in the order of most likely error probability until the correct code word is found. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.

  3. Design of a VLSI Decoder for Partially Structured LDPC Codes

    Directory of Open Access Journals (Sweden)

    Fabrizio Vacca

    2008-01-01

    Full Text Available Partially structured LDPC codes are codes in which the entries of their parity matrix can be partitioned into two disjoint sets, namely, the structured and the random ones. For the proposed class of codes a constructive design method is provided. To assess the value of this method, the performance of the constructed codes is presented. From these results, a novel decoding method called split decoding is introduced. Finally, to prove the effectiveness of the proposed approach, a whole VLSI decoder is designed and characterized.

  4. A Novel Multiple-Bits Collision Attack Based on Double Detection with Error-Tolerant Mechanism

    Directory of Open Access Journals (Sweden)

    Ye Yuan

    2018-01-01

    Full Text Available Side-channel collision attacks are more powerful than traditional side-channel attacks because they require neither knowledge of the leakage model nor its estimation. Most attack strategies proposed previously need large numbers of power traces, have high computational complexity, and are sensitive to mistakes, which seriously restricts attack efficiency. In this paper, we propose a multiple-bits side-channel collision attack based on double distance voting detection (DDVD), and also an improved version involving an error-tolerant mechanism, which can find all 120 relations among 16 key bytes when applied to the AES (Advanced Encryption Standard) algorithm. In addition, we compare our collision detection method, DDVD, with the Euclidean distance and the correlation-enhanced collision methods under different intensities of noise, which indicates that our detection technique performs better in noisy circumstances. Furthermore, the 4-bit model of our collision detection method is proven to be optimal in theory and in practice. The corresponding practical attack experiments are also performed successfully on a hardware implementation of AES-128 on an FPGA board. Results show that our strategy needs less computation time but more traces than the LDPC method, and the online time for our strategy is about 90% less than CECA and 96% less than BCA with a 90% success rate.

  5. Base

    DEFF Research Database (Denmark)

    Hjulmand, Lise-Lotte; Johansson, Christer

    2004-01-01

    BASE - Engelsk basisgrammatik is the result of Lise-Lotte Hjulmand's thorough adaptation and extensive revision of Christer Johansson's Engelska basgrammatik. The grammar differs from its Swedish model on a number of points. Among other things, it has been adapted for a Danish audience and the Danish...

  6. Crosstalk eliminating and low-density parity-check codes for photochromic dual-wavelength storage

    Science.gov (United States)

    Wang, Meicong; Xiong, Jianping; Jian, Jiqi; Jia, Huibo

    2005-01-01

    Multi-wavelength storage is an approach to increasing memory density, with the problem of crosstalk to be dealt with. We apply Low Density Parity Check (LDPC) codes as error-correcting codes in photochromic dual-wavelength optical storage, based on an investigation of LDPC codes in optical data storage. A proper method is applied to reduce the crosstalk, and simulation results show that this operation improves the Bit Error Rate (BER) performance. At the same time, we conclude that LDPC codes outperform RS codes on the crosstalk channel.

  7. Optimized Irregular Low-Density Parity-Check Codes for Multicarrier Modulations over Frequency-Selective Channels

    Directory of Open Access Journals (Sweden)

    Valérian Mannoni

    2004-09-01

    Full Text Available This paper deals with optimized channel coding for OFDM transmissions (COFDM) over frequency-selective channels using irregular low-density parity-check (LDPC) codes. Firstly, we introduce a new characterization of the LDPC code irregularity called “irregularity profile.” Then, using this parameterization, we derive a new criterion based on the minimization of the transmission bit error probability to design an irregular LDPC code suited to the frequency selectivity of the channel. The optimization of this criterion is done using the Gaussian approximation technique. Simulations illustrate the good performance of our approach for different transmission channels.

  8. An FPGA Implementation of a (3,6)-Regular Low-Density Parity-Check Code Decoder

    Directory of Open Access Journals (Sweden)

    Tong Zhang

    2003-05-01

    Full Text Available Because of their excellent error-correcting performance, low-density parity-check (LDPC) codes have recently attracted a lot of attention. In this paper, we are interested in practical LDPC decoder hardware implementations. A direct fully parallel decoder implementation usually incurs too high a hardware complexity for many real applications; thus partly parallel decoder design approaches that achieve appropriate trade-offs between hardware complexity and decoding throughput are highly desirable. Applying a joint code and decoder design methodology, we develop a high-speed (3,k)-regular LDPC code partly parallel decoder architecture, based on which we implement a 9216-bit, rate-1/2, (3,6)-regular LDPC code decoder on a Xilinx FPGA device. This partly parallel decoder supports a maximum symbol throughput of 54 Mbps and achieves a BER of 10^-6 at 2 dB over the AWGN channel while performing a maximum of 18 decoding iterations.

  9. Trellis and turbo coding iterative and graph-based error control coding

    CERN Document Server

    Schlegel, Christian B

    2015-01-01

    This new edition has been extensively revised to reflect the progress in error control coding over the past few years. Over 60% of the material has been completely reworked, and 30% of the material is original. Convolutional, turbo, and low density parity-check (LDPC) coding and polar codes in a unified framework. Advanced research-related developments such as spatial coupling. A focus on algorithmic and implementation aspects of error control coding.

  10. High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution

    Science.gov (United States)

    Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin

    2016-01-01

    Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding, with an efficiency of 93.7%.
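
    The quasi-cyclic extension step is mechanical: every nonzero entry of the short (e.g., PEG-designed) base matrix is replaced by a cyclically shifted m × m identity, multiplying the block length by m while preserving the local graph structure. A minimal sketch, with the shift exponents taken as inputs:

        import numpy as np

        def qc_lift(base, exponents, m):
            # Replace each nonzero entry of the J x L base matrix by the m x m
            # identity cyclically shifted by the corresponding exponent; zeros
            # become all-zero blocks. Block length grows from L to L*m.
            J, L = np.asarray(base).shape
            H = np.zeros((J * m, L * m), dtype=np.uint8)
            I = np.eye(m, dtype=np.uint8)
            for j in range(J):
                for l in range(L):
                    if base[j][l]:
                        H[j*m:(j+1)*m, l*m:(l+1)*m] = np.roll(I, exponents[j][l], axis=1)
            return H

        # Example: lift a tiny 2x3 base matrix by CPM size m = 4.
        print(qc_lift([[1, 1, 1], [1, 0, 1]], [[0, 1, 2], [3, 0, 1]], 4).shape)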

  11. PERFORMANCE EVOLUTION OF PAPR REDUCTION IN OFDM WITH AND WITHOUT LDPC TECHNIQUE

    OpenAIRE

    Punit Upmanyu; Prof. Saurabh Gaur

    2016-01-01

    OFDM is one of the proven multicarrier modulation techniques, providing high spectral efficiency, low implementation complexity, and less vulnerability to echoes and non-linear distortion. Owing to these advantages, this technique is presently used by almost all wireless standards. The one major shortcoming in the implementation of this system is its high PAPR (peak-to-average power ratio). In this paper, an irregular Low-Density Parity-Check encoder is used ef...

  12. Binary Linear-Time Erasure Decoding for Non-Binary LDPC codes

    OpenAIRE

    Savin, Valentin

    2009-01-01

    In this paper, we first introduce the extended binary representation of non-binary codes, which corresponds to a covering graph of the bipartite graph associated with the non-binary code. Then we show that non-binary codewords correspond to binary codewords of the extended representation that further satisfy some simplex-constraint: that is, bits lying over the same symbol-node of the non-binary graph must form a codeword of a simplex code. Applied to the binary erasure channel, this descript...

  13. On the Performance of a Multi-Edge Type LDPC Code for Coded Modulation

    NARCIS (Netherlands)

    Cronie, H.S.

    2005-01-01

    We present a method to combine error-correction coding and spectral-efficient modulation for transmission over the Additive White Gaussian Noise (AWGN) channel. The code employs signal shaping which can provide a so-called shaping gain. The code belongs to the family of sparse graph codes for which

  14. Spherical reconciliation for a continuous-variable quantum key distribution

    International Nuclear Information System (INIS)

    Lu Zhao; Shi Jian-Hong; Li Feng-Guang

    2017-01-01

    Information reconciliation is a significant step for a continuous-variable quantum key distribution (CV-QKD) system. We propose a reconciliation method that allows two authorized parties to extract a consistent and secure binary key in a CV-QKD protocol based on Gaussian-modulated coherent states and homodyne detection. This method, named spherical reconciliation, is based on spherical quantization and non-binary low-density parity-check (LDPC) codes. With a suitable signal-to-noise ratio (SNR) and code rate of the non-binary LDPC codes, the spherical reconciliation algorithm has high efficiency and can extend the transmission distance of CV-QKD. (paper)

  15. Entanglement-assisted quantum low-density parity-check codes

    International Nuclear Information System (INIS)

    Fujiwara, Yuichiro; Clark, David; Tonchev, Vladimir D.; Vandendriessche, Peter; De Boeck, Maarten

    2010-01-01

    This article develops a general method for constructing entanglement-assisted quantum low-density parity-check (LDPC) codes, which is based on combinatorial design theory. Explicit constructions are given for entanglement-assisted quantum error-correcting codes with many desirable properties. These properties include the requirement of only one initial entanglement bit, high error-correction performance, high rates, and low decoding complexity. The proposed method produces several infinite families of codes with a wide variety of parameters and entanglement requirements. Our framework encompasses the previously known entanglement-assisted quantum LDPC codes having the best error-correction performance and many other codes with better block error rates in simulations over the depolarizing channel. We also determine important parameters of several well-known classes of quantum and classical LDPC codes for previously unsettled cases.

  16. Joint Schemes for Physical Layer Security and Error Correction

    Science.gov (United States)

    Adamo, Oluwayomi

    2011-01-01

    The major challenges facing resource constraint wireless devices are error resilience, security and speed. Three joint schemes are presented in this research which could be broadly divided into error correction based and cipher based. The error correction based ciphers take advantage of the properties of LDPC codes and Nordstrom Robinson code. A…

  17. Photonic entanglement-assisted quantum low-density parity-check encoders and decoders.

    Science.gov (United States)

    Djordjevic, Ivan B

    2010-05-01

    I propose encoder and decoder architectures for entanglement-assisted (EA) quantum low-density parity-check (LDPC) codes suitable for all-optical implementation. I show that two basic gates needed for EA quantum error correction, namely, controlled-NOT (CNOT) and Hadamard gates can be implemented based on Mach-Zehnder interferometer. In addition, I show that EA quantum LDPC codes from balanced incomplete block designs of unitary index require only one entanglement qubit to be shared between source and destination.

  18. An experimental comparison of coded modulation strategies for 100 Gb/s transceivers

    NARCIS (Netherlands)

    Sillekens, E.; Alvarado, A.; Okonkwo, C.; Thomsen, B.C.

    2016-01-01

    Coded modulation is a key technique to increase the spectral efficiency of coherent optical communication systems. Two popular strategies for coded modulation are turbo trellis-coded modulation (TTCM) and bit-interleaved coded modulation (BICM) based on low-density parity-check (LDPC) codes.

  19. Experimental demonstration of multi-pilot aided carrier phase estimation for DP-64QAM and DP-256QAM

    NARCIS (Netherlands)

    Pajovic, M.; Millar, D.S.; Koike-Akino, T.; Maher, R.; Lavery, D.; Alvarado, A.; Paskov, M.; Kojima, K.; Parsons, K.; Thomsen, B.C.; Savory, S.J.; Bayvel, P.

    2015-01-01

    We present a statistical inference based multi-pilot aided CPE algorithm and analyze its performance via simulations. We experimentally verify LDPC coded back-to-back performance using 10 GBd DP-64QAM and DP-256QAM modulation, with transmitter and receiver linewidths of 100 kHz.

  20. Iterative decoding algorithms for LDPC codes and a study of the effect of quantization error on the performance of the Log Sum-Product algorithm

    OpenAIRE

    Κάνιστρας, Νικόλαος

    2008-01-01

    LDPC codes belong to the category of block codes. They are codes for the control of transmission errors and, more specifically, error-correcting codes. Although their invention (by Gallager) dates back to the early 1960s, only in recent years have they managed to attract the intense interest of the scientific research community for their remarkable performance. They are parity-check codes whose main characteristic is their low-density...

  1. On the equivalence of Ising models on ‘small-world’ networks and LDPC codes on channels with memory

    International Nuclear Information System (INIS)

    Neri, Izaak; Skantzos, Nikos S

    2014-01-01

    We demonstrate the equivalence between thermodynamic observables of Ising spin-glass models on small-world lattices and the decoding properties of error-correcting low-density parity-check codes on channels with memory. In particular, the self-consistent equations for the effective field distributions in the spin-glass model within the replica symmetric ansatz are equivalent to the density evolution equations for Gilbert–Elliott channels. This relationship allows us to present a belief-propagation decoding algorithm for finite-state Markov channels and to compute its performance at infinite block lengths from the density evolution equations. We show that loss of reliable communication corresponds to a first-order phase transition from a ferromagnetic phase to a paramagnetic phase in the spin-glass model. The critical noise levels derived for Gilbert–Elliott channels are in very good agreement with existing results in coding theory. Furthermore, we use our analysis to derive critical noise levels for channels with both memory and asymmetry in the noise. The resulting phase diagram shows that the combination of asymmetry and memory in the channel allows for high critical noise levels: in particular, we show that successful decoding is possible at any noise level of the bad channel when the good channel is good enough. Theoretical results at infinite block lengths using density evolution equations are compared with average error probabilities calculated from a practical implementation of the corresponding decoding algorithms at finite block lengths. (paper)
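
    The density-evolution fixed point that the paper generalizes to channels with memory is easiest to see in its memoryless special case. As an illustrative sketch (not the Gilbert–Elliott recursion of the paper), the erasure-channel threshold of a regular (dv, dc) LDPC ensemble follows from bisection on the scalar recursion x ← ε(1 - (1 - x)^(dc-1))^(dv-1):

        def bec_threshold(dv, dc, tol=1e-6):
            # Largest erasure probability eps for which the erased-message
            # fraction x driven by the recursion converges to zero.
            def converges(eps):
                x = eps
                for _ in range(100000):
                    x_new = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
                    if x_new < 1e-12:
                        return True          # messages died out: decodable
                    if abs(x_new - x) < 1e-15:
                        return False         # stuck at a nonzero fixed point
                    x = x_new
                return False
            lo, hi = 0.0, 1.0
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if converges(mid) else (lo, mid)
            return lo

        print(bec_threshold(3, 6))   # ~0.429, the classic (3,6) threshold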

  2. Optimal Bipartite Ramanujan Graphs from Balanced Incomplete Block Designs: Their Characterization and Applications to Expander/LDPC Codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Janwa, Heeralal

    2009-01-01

    We characterize optimal bipartite expander graphs and give necessary and sufficient conditions for optimality. We determine the expansion parameters of the BIBD graphs and show that they yield optimal expander graphs and also bipartite Ramanujan graphs. In particular, we show that the bipartite...

  3. Processor architecture exploration and synthesis of massively parallel multi-processor accelerators in application to LDPC decoding

    NARCIS (Netherlands)

    Jan, Y.; Jóźwiak, Lech

    Numerous modern applications in various fields, such as communication and networking, multimedia, encryption, etc., impose extremely high demands regarding performance while at the same time requiring low energy consumption, low cost, and short design time. Often these very high demands cannot be

  4. The Manifestation of Stopping Sets and Absorbing Sets as Deviations on the Computation Trees of LDPC Codes

    Directory of Open Access Journals (Sweden)

    Eric Psota

    2010-01-01

    Full Text Available The error mechanisms of iterative message-passing decoders for low-density parity-check codes are studied. A tutorial review is given of the various graphical structures, including trapping sets, stopping sets, and absorbing sets that are frequently used to characterize the errors observed in simulations of iterative decoding of low-density parity-check codes. The connections between trapping sets and deviations on computation trees are explored in depth using the notion of problematic trapping sets in order to bridge the experimental and analytic approaches to these error mechanisms. A new iterative algorithm for finding low-weight problematic trapping sets is presented and shown to be capable of identifying many trapping sets that are frequently observed during iterative decoding of low-density parity-check codes on the additive white Gaussian noise channel. Finally, a new method is given for characterizing the weight of deviations that result from problematic trapping sets.

  5. PMD compensation in multilevel coded-modulation schemes with coherent detection using BLAST algorithm and iterative polarization cancellation.

    Science.gov (United States)

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2008-09-15

    We present two PMD compensation schemes suitable for use in multilevel (M ≥ 2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded-OFDM and coherent detection. When used in combination with girth-10 LDPC codes, these schemes outperform polarization-time-coding based OFDM by 1 dB at a BER of 10^-9, and provide two times higher spectral efficiency. The proposed schemes perform comparably and are able to compensate even 1200 ps of differential group delay with negligible penalty.

  6. Channel coding for underwater acoustic single-carrier CDMA communication system

    Science.gov (United States)

    Liu, Lanjun; Zhang, Yonglei; Zhang, Pengcheng; Zhou, Lin; Niu, Jiong

    2017-01-01

    CDMA is an effective multiple access protocol for underwater acoustic networks, and channel coding can effectively reduce the bit error rate (BER) of an underwater acoustic communication system. To meet the requirements of underwater acoustic mobile networks based on CDMA, an underwater acoustic single-carrier CDMA communication system (UWA/SCCDMA) based on the direct-sequence spread spectrum is proposed, and its channel coding scheme is studied based on convolutional, RA, Turbo and LDPC coding, respectively. The implementation steps of the Viterbi algorithm for convolutional coding, the BP and minimum-sum algorithms for RA coding, the Log-MAP and SOVA algorithms for Turbo coding, and the sum-product algorithm for LDPC coding are given. A UWA/SCCDMA simulation system based on Matlab is designed. Simulation results show that the UWA/SCCDMA systems based on RA, Turbo and LDPC coding have good performance, with a communication BER below 10^-6 in the underwater acoustic channel at low signal-to-noise ratios (SNR) from -12 dB to -10 dB, which is about 2 orders of magnitude lower than that of convolutional coding. The system based on Turbo coding with the Log-MAP algorithm has the best performance.

  7. 16QAM transmission with 5.2 bits/s/Hz spectral efficiency over transoceanic distance.

    Science.gov (United States)

    Zhang, H; Cai, J-X; Batshon, H G; Davidson, C R; Sun, Y; Mazurczyk, M; Foursa, D G; Pilipetskii, A; Mohs, G; Bergano, Neal S

    2012-05-21

    We transmit 160 x 100 G PDM RZ 16 QAM channels with 5.2 bits/s/Hz spectral efficiency over 6,860 km. More than 3 billion 16 QAM symbols, i.e., 12 billion bits, are processed in total. Using coded modulation and iterative decoding between a MAP decoder and an LDPC-based FEC, all channels are decoded with no remaining errors.

  8. Comparison of soft-input-soft-output detection methods for dual-polarized quadrature duobinary system

    Science.gov (United States)

    Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan

    2018-02-01

    Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, all of which can be combined with SISO decoding, are presented. The three detection methods are investigated by simulation at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density-parity-check (LDPC) coded DP-QDB systems. Max-log-MAP-based detection needs the returning-to-initial-states (RTIS) process despite having the best performance. When an LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the proposed SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at BER = 10^-5) by 2.56 dB relative to the latter. Compared with the other two SISO methods, the application of the SISO iterative detection in LDPC-coded DP-QDB systems makes a good trade-off among transmission efficiency, OSNR requirement, and transmission distance.

  9. Double-Layer Low-Density Parity-Check Codes over Multiple-Input Multiple-Output Channels

    Directory of Open Access Journals (Sweden)

    Yun Mao

    2012-01-01

    Full Text Available We introduce a double-layer code based on the combination of a low-density parity-check (LDPC) code with the multiple-input multiple-output (MIMO) system, where the decoding can be done in both inner-iteration and outer-iteration manners. The present code, called low-density MIMO code (LDMC), has a double-layer structure: one layer defines subcodes that are embedded in each transmission vector, and another glues these subcodes together. It supports inner iterations inside the LDPC decoder and outer iterations between detectors and decoders simultaneously. It can also achieve the desired design rates due to the full rank of the deployed parity-check matrix. Simulations show that the LDMC performs favorably over MIMO systems.

  10. FPGA implementation of advanced FEC schemes for intelligent aggregation networks

    Science.gov (United States)

    Zou, Ding; Djordjevic, Ivan B.

    2016-02-01

    In state-of-the-art fiber-optics communication systems, fixed forward error correction (FEC) and constellation size are employed. While it is important to closely approach the Shannon limit by using turbo product codes (TPC) and low-density parity-check (LDPC) codes with soft-decision decoding (SDD), rate-adaptive techniques, which enable increased information rates over short links and reliable transmission over long links, are likely to become more important with ever-increasing network traffic demands. In this invited paper, we describe a rate-adaptive non-binary LDPC coding technique and demonstrate its flexibility and good performance, exhibiting no error floor at BERs down to 10^-15 over the entire code-rate range, by FPGA-based emulation, making it a viable solution for next-generation high-speed intelligent aggregation networks.

  11. The Analysis and the Performance Simulation of the Capacity of Bit-interleaved Coded Modulation System

    Directory of Open Access Journals (Sweden)

    Hongwei ZHAO

    2014-09-01

    Full Text Available In this paper, the capacity of the BICM system over AWGN channels is first analyzed; the curves of BICM capacity versus SNR are also obtained by Monte-Carlo simulations and compared with the curves of the CM capacity. Based on the analysis results, we simulate the error performance of the BICM system with LDPC codes. Simulation results show that the capacity of the BICM system with LDPC codes is strongly influenced by the mapping method. Given a certain modulation method, the BICM system can obtain about 2-3 dB of gain with Gray mapping compared with non-Gray mapping. Meanwhile, the simulation results also demonstrate the correctness of the theoretical analysis.
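
    To make the Monte-Carlo capacity estimation concrete, the following sketch estimates the BICM capacity of a Gray-mapped 4-PAM constellation over AWGN (an illustrative stand-in: the constellation, labeling, and sample count are assumptions, not taken from the paper):

    import numpy as np

    def bicm_capacity_mc(snr_db, n_samples=200_000, seed=0):
        # Monte-Carlo estimate of the BICM capacity (bits/symbol) of
        # Gray-mapped 4-PAM over AWGN: C = sum over bit levels of I(b_i; Y).
        rng = np.random.default_rng(seed)
        const = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(5.0)  # unit energy
        labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])      # Gray labels
        m = labels.shape[1]
        sigma2 = 10.0 ** (-snr_db / 10.0)

        sym = rng.integers(0, 4, n_samples)
        y = const[sym] + rng.normal(scale=np.sqrt(sigma2), size=n_samples)

        # Likelihoods p(y|x) for every constellation point, one row per sample.
        pyx = np.exp(-(y[:, None] - const[None, :]) ** 2 / (2.0 * sigma2))

        cap = float(m)
        for i in range(m):
            b = labels[sym, i]                            # transmitted bit i
            num = pyx.sum(axis=1)                         # sum over all x
            match = labels[:, i][None, :] == b[:, None]   # x agreeing with b
            den = np.where(match, pyx, 0.0).sum(axis=1)
            cap -= np.mean(np.log2(num / den))
        return cap

    print(bicm_capacity_mc(10.0))   # about 1.7 bits/symbol at 10 dB

    Swapping the Gray labels for a non-Gray labeling in the same routine exposes the mapping-dependent capacity gap that the simulations above attribute to the choice of mapping.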

  12. Statistical mechanics of low-density parity-check codes

    Energy Technology Data Exchange (ETDEWEB)

    Kabashima, Yoshiyuki [Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama 2268502 (Japan); Saad, David [Neural Computing Research Group, Aston University, Birmingham B4 7ET (United Kingdom)

    2004-02-13

    We review recent theoretical progress on the statistical mechanics of error correcting codes, focusing on low-density parity-check (LDPC) codes in general, and on Gallager and MacKay-Neal codes in particular. By exploiting the relation between LDPC codes and Ising spin systems with multi-spin interactions, one can carry out a statistical mechanics based analysis that determines the practical and theoretical limitations of various code constructions, corresponding to dynamical and thermodynamical transitions, respectively, as well as the behaviour of error-exponents averaged over the corresponding code ensemble as a function of channel noise. We also contrast the results obtained using methods of statistical mechanics with those derived in the information theory literature, and show how these methods can be generalized to include other channel types and related communication problems. (topical review)

  13. Statistical mechanics of low-density parity-check codes

    International Nuclear Information System (INIS)

    Kabashima, Yoshiyuki; Saad, David

    2004-01-01

    We review recent theoretical progress on the statistical mechanics of error correcting codes, focusing on low-density parity-check (LDPC) codes in general, and on Gallager and MacKay-Neal codes in particular. By exploiting the relation between LDPC codes and Ising spin systems with multi-spin interactions, one can carry out a statistical mechanics based analysis that determines the practical and theoretical limitations of various code constructions, corresponding to dynamical and thermodynamical transitions, respectively, as well as the behaviour of error-exponents averaged over the corresponding code ensemble as a function of channel noise. We also contrast the results obtained using methods of statistical mechanics with those derived in the information theory literature, and show how these methods can be generalized to include other channel types and related communication problems. (topical review)

  14. An Efficient Downlink Scheduling Strategy Using Normal Graphs for Multiuser MIMO Wireless Systems

    Science.gov (United States)

    Chen, Jung-Chieh; Wu, Cheng-Hsuan; Lee, Yao-Nan; Wen, Chao-Kai

    Inspired by the success of the low-density parity-check (LDPC) codes in the field of error-control coding, in this paper we propose transforming the downlink multiuser multiple-input multiple-output scheduling problem into an LDPC-like problem using the normal graph. Based on the normal graph framework, soft information, which indicates the probability that each user will be scheduled to transmit packets at the access point through a specified angle-frequency sub-channel, is exchanged among the local processors to iteratively optimize the multiuser transmission schedule. Computer simulations show that the proposed algorithm can efficiently schedule simultaneous multiuser transmission which then increases the overall channel utilization and reduces the average packet delay.

  15. Joint nonbinary low-density parity-check codes and modulation diversity over fading channels

    Science.gov (United States)

    Shi, Zhiping; Li, Tiffany Jing; Zhang, Zhongpei

    2010-09-01

    A joint exploitation of coding and diversity techniques to achieve efficient, reliable wireless transmission is considered. The system comprises a powerful non-binary low-density parity-check (LDPC) code that is soft-decoded to supply strong error protection, a quadrature amplitude modulation (QAM) mapper that directly takes in the non-binary LDPC symbols, and a modulation diversity operator that provides power- and bandwidth-efficient diversity gain. By relaxing the rate of the modulation diversity rotation matrices to below 1, we show that a better rate allocation can be arranged between the LDPC codes and the modulation diversity, which brings significant performance gain over previous systems. To facilitate the design and evaluation of the relaxed modulation diversity rotation matrices, three practical design methods based on a set of criteria are given and their pairwise error rates are analyzed. With EXIT charts, we investigate the convergence between the demodulator and the decoder, and a rate-matching method based on the EXIT analysis is presented. Through analysis and simulations, we show that our strategies are very effective in combating random fading and strong noise on fading channels.

  16. Lotus Base

    DEFF Research Database (Denmark)

    Mun, Terry; Bachmann, Asger; Gupta, Vikas

    2016-01-01

    exploration of Lotus genomic and transcriptomic data. Equally important are user-friendly in-browser tools designed for data visualization and interpretation. Here, we present Lotus Base, which opens to the research community a large, established LORE1 insertion mutant population containing an excess of 120...... such data, allowing users to construct, visualize, and annotate co-expression gene networks. Lotus Base takes advantage of modern advances in browser technology to deliver powerful data interpretation for biologists. Its modular construction and publicly available application programming interface enable...... developers to tap into the wealth of integrated Lotus data. Lotus Base is freely accessible at: https://lotus.au.dk....

  17. Nonlinear demodulation and channel coding in EBPSK scheme.

    Science.gov (United States)

    Chen, Xianqing; Wu, Lenan

    2012-01-01

    The extended binary phase shift keying (EBPSK) is an efficient modulation technique, and a special impacting filter (SIF) is used in its demodulator to improve the bit error rate (BER) performance. However, the conventional threshold decision cannot achieve the optimum performance, and the SIF makes it more difficult to obtain the posterior probability needed for LDPC decoding. In this paper, we concentrate not only on reducing the BER of demodulation, but also on providing accurate posterior probability estimates (PPEs). A new approach to nonlinear demodulation based on the support vector machine (SVM) classifier is introduced. The SVM method, which selects only a few sampling points from the filter output, is used to obtain PPEs. The simulation results show that accurate posterior probabilities can be obtained with this method and that the BER performance can be improved significantly by applying LDPC codes. Moreover, we analyzed the effect of obtaining the posterior probability with different methods and at different sampling rates. We show that the SVM method has more advantages under bad conditions and is less sensitive to the sampling rate than other methods. Thus, SVM is an effective method for EBPSK demodulation and for obtaining the posterior probabilities for LDPC decoding.
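
    A minimal sketch of the posterior-probability step described above, using scikit-learn's SVC with Platt scaling to turn classifier outputs into probabilities and then into LLRs (the two-dimensional Gaussian training data stands in for sampled filter outputs and is purely illustrative; the paper's SIF-based features differ):

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Hypothetical two-feature samples standing in for filter outputs.
    X = np.vstack([rng.normal(-1.0, 0.6, (200, 2)),
                   rng.normal(+1.0, 0.6, (200, 2))])
    y = np.array([0] * 200 + [1] * 200)

    # probability=True enables Platt scaling, yielding calibrated
    # posterior probability estimates instead of hard decisions.
    clf = SVC(kernel="rbf", probability=True).fit(X, y)

    p1 = clf.predict_proba(rng.normal(0.8, 0.6, (5, 2)))[:, 1]
    llr = np.log((1.0 - p1) / p1)   # LLRs an LDPC decoder can consume
    print(llr)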

  18. Polynomial theory of error correcting codes

    CERN Document Server

    Cancellieri, Giovanni

    2015-01-01

    The book offers an original view on channel coding, based on a unitary approach to block and convolutional codes for error correction. It presents both new concepts and new families of codes. For example, lengthened and modified lengthened cyclic codes are introduced as a bridge towards time-invariant convolutional codes and their extension to time-varying versions. The novel families of codes include turbo codes and low-density parity check (LDPC) codes, the features of which are justified from the structural properties of the component codes. Design procedures for regular LDPC codes are proposed, supported by the presented theory. Quasi-cyclic LDPC codes, in block or convolutional form, represent one of the most original contributions of the book. The use of more than 100 examples allows the reader gradually to gain an understanding of the theory, and the provision of a list of more than 150 definitions, indexed at the end of the book, permits rapid location of sought information.

  19. Touch BASE

    CERN Multimedia

    Antonella Del Rosso

    2015-01-01

    In a recent Nature article (see here), the BASE collaboration reported the most precise comparison of the charge-to-mass ratio of the proton to its antimatter equivalent, the antiproton. This result is just the beginning and many more challenges lie ahead.   CERN's AD Hall, where the BASE experiment is set-up. The Baryon Antibaryon Symmetry Experiment (BASE) was approved in June 2013 and was ready to take data in August 2014. During these 14 months, the BASE collaboration worked hard to set up its four cryogenic Penning traps, which are the heart of the whole experiment. As their name indicates, these magnetic devices are used to trap antiparticles – antiprotons coming from the Antiproton Decelerator – and particles of matter – negative hydrogen ions produced in the system by interaction with a degrader that slows the antiprotons down, allowing scientists to perform their measurements. “We had very little time to set up the wh...

  20. Fragmentation based

    Directory of Open Access Journals (Sweden)

    Shashank Srivastava

    2014-01-01

    Building on an understanding of mobile agent architecture and its security concerns, in this paper we propose a security protocol that addresses security with mitigated computational cost. The protocol is a combination of self-decryption, co-operation and obfuscation techniques. To circumvent the risk of malicious code execution in an attacking environment, we propose a fragmentation-based encryption technique. Our encryption technique suits the general mobile agent size and provides strong obfuscation, increasing the attacker's challenge, while providing better performance with respect to computational cost compared to existing AES encryption.

  1. Modified hybrid subcarrier/amplitude/ phase/polarization LDPC-coded modulation for 400 Gb/s optical transmission and beyond.

    Science.gov (United States)

    Batshon, Hussam G; Djordjevic, Ivan; Xu, Lei; Wang, Ting

    2010-06-21

    In this paper, we present a modified coded hybrid subcarrier/amplitude/phase/polarization (H-SAPP) modulation scheme as a technique capable of achieving beyond-400 Gb/s single-channel transmission over optical channels. The modified H-SAPP scheme profits from the available resources in addition to geometry to increase the bandwidth efficiency of the transmission system, and so increases the aggregate rate of the system. In this report we present the modified H-SAPP scheme and focus on an example that allows 11 bits/symbol and can achieve 440 Gb/s transmission using components operating at 50 Giga symbols/s (GS/s).

  2. Fuzzy knowledge bases integration based on ontology

    OpenAIRE

    Ternovoy, Maksym; Shtogrina, Olena

    2012-01-01

    The paper describes an approach to fuzzy knowledge base integration using an ontology. The approach is based on the use of a metadata base for the integration of different knowledge bases with a common ontology. The design process of the metadata base is described.

  3. Foundation: Transforming data bases into knowledge bases

    Science.gov (United States)

    Purves, R. B.; Carnes, James R.; Cutts, Dannie E.

    1987-01-01

    One approach to transforming information stored in relational data bases into knowledge based representations and back again is described. This system, called Foundation, allows knowledge bases to take advantage of vast amounts of pre-existing data. A benefit of this approach is inspection, and even population, of data bases through an intelligent knowledge-based front-end.

  4. Progressive Image Transmission Based on Joint Source-Channel Decoding Using Adaptive Sum-Product Algorithm

    Directory of Open Access Journals (Sweden)

    David G. Daut

    2007-03-01

    Full Text Available A joint source-channel decoding method is designed to accelerate the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, making it possible to provide useful source-decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel-decoded bits are then sent to the JPEG2000 decoder. The positions of bits belonging to error-free coding passes are then fed back to the channel decoder. The log-likelihood ratios (LLRs) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. Results show that the proposed joint decoding methods can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source-controlled decoding method by up to 3 dB in terms of PSNR.
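
    The LLR reinforcement step described above can be sketched as follows (a simplified illustration with hypothetical array names; in the paper the weighting factor is a function of the channel condition rather than a constant):

    import numpy as np

    def reinforce_llrs(llr, error_free_mask, weight):
        # Scale the LLRs of bits that the JPEG2000 decoder placed in
        # error-free coding passes, boosting their reliability before
        # the next sum-product iteration.
        out = llr.copy()
        out[error_free_mask] *= weight
        return out

    llr = np.array([0.8, -2.1, 0.1, -0.4])
    mask = np.array([True, True, False, False])   # fed back by the source decoder
    print(reinforce_llrs(llr, mask, 3.0))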

  5. Progressive Image Transmission Based on Joint Source-Channel Decoding Using Adaptive Sum-Product Algorithm

    Directory of Open Access Journals (Sweden)

    Liu Weiliang

    2007-01-01

    Full Text Available A joint source-channel decoding method is designed to accelerate the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, making it possible to provide useful source-decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel-decoded bits are then sent to the JPEG2000 decoder. The positions of bits belonging to error-free coding passes are then fed back to the channel decoder. The log-likelihood ratios (LLRs) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. Results show that the proposed joint decoding methods can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source-controlled decoding method by up to 3 dB in terms of PSNR.

  6. Paper based electronics platform

    KAUST Repository

    Nassar, Joanna Mohammad; Sevilla, Galo Andres Torres; Hussain, Muhammad Mustafa

    2017-01-01

    A flexible and non-functionalized, low-cost, paper-based electronic system platform fabricated from common paper, such as paper-based sensors, methods of producing paper-based sensors, and methods of sensing using the paper-based sensors.

  7. Interior point decoding for linear vector channels

    International Nuclear Information System (INIS)

    Wadayama, T

    2008-01-01

    In this paper, a novel decoding algorithm for low-density parity-check (LDPC) codes based on convex optimization is presented. The decoding algorithm, called interior point decoding, is designed for linear vector channels. The linear vector channels include many practically important channels such as inter-symbol interference channels and partial response channels. It is shown that the maximum likelihood decoding (MLD) rule for a linear vector channel can be relaxed to a convex optimization problem, which is called a relaxed MLD problem
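
    The interior-point solver itself is beyond a short sketch, but the key idea of relaxing maximum likelihood decoding to a convex program can be illustrated with the closely related Feldman-style linear-programming relaxation for a memoryless channel (a named substitute, not the paper's algorithm; SciPy and a small parity-check matrix are assumed, and the odd-subset constraint generation grows exponentially with check degree, so it only suits low-degree toy codes):

    import itertools
    import numpy as np
    from scipy.optimize import linprog

    def lp_decode(H, llr):
        # Minimize sum_i llr_i * x_i over the fundamental polytope:
        # for every check j and every odd-sized subset V of its
        # neighborhood N(j),
        #   sum_{i in V} x_i - sum_{i in N(j)\V} x_i <= |V| - 1.
        m, n = H.shape
        A, b = [], []
        for j in range(m):
            nb = list(np.flatnonzero(H[j]))
            for r in range(1, len(nb) + 1, 2):          # odd |V| only
                for V in itertools.combinations(nb, r):
                    row = np.zeros(n)
                    row[list(V)] = 1.0
                    row[[i for i in nb if i not in V]] = -1.0
                    A.append(row)
                    b.append(r - 1)
        res = linprog(llr, A_ub=np.array(A), b_ub=np.array(b),
                      bounds=[(0.0, 1.0)] * n, method="highs")
        return res.x

    H = np.array([[1, 1, 0, 1, 0],
                  [0, 1, 1, 0, 1],
                  [1, 0, 1, 1, 1]])
    llr = np.array([1.5, -0.2, 0.8, 2.0, -0.1])   # hypothetical channel LLRs
    print(lp_decode(H, llr))

    When the LP optimum is integral it is provably the maximum likelihood codeword; fractional optima are pseudocodewords, and it is in solving such relaxations efficiently that interior-point methods like the paper's become attractive.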

  8. Low-complexity video encoding method for wireless image transmission in capsule endoscope.

    Science.gov (United States)

    Takizawa, Kenichi; Hamaguchi, Kiyoshi

    2010-01-01

    This paper presents a low-complexity video encoding method applicable for wireless image transmission in capsule endoscopes. This encoding method is based on Wyner-Ziv theory, in which side information available at a transmitter is treated as side information at its receiver. Therefore complex processes in video encoding, such as estimation of the motion vector, are moved to the receiver side, which has a larger-capacity battery. As a result, the encoding process is only to decimate coded original data through channel coding. We provide a performance evaluation for a low-density parity check (LDPC) coding method in the AWGN channel.

  9. 25 Tb/s transmission over 5,530 km using 16QAM at 5.2 b/s/Hz spectral efficiency.

    Science.gov (United States)

    Cai, J-X; Batshon, H G; Zhang, H; Davidson, C R; Sun, Y; Mazurczyk, M; Foursa, D G; Sinkin, O; Pilipetskii, A; Mohs, G; Bergano, Neal S

    2013-01-28

    We transmit 250 x 100 G PDM RZ-16QAM channels with 5.2 b/s/Hz spectral efficiency over 5,530 km using single-stage C-band EDFAs equalized to 40 nm. We use single-parity-check coded modulation, and all channels are decoded with no errors after iterative decoding between a MAP decoder and an LDPC-based FEC algorithm. We also observe that the optimum power spectral density is nearly independent of SE, signal baud rate or modulation format in a dispersion-uncompensated system.

  10. Interior point decoding for linear vector channels

    Energy Technology Data Exchange (ETDEWEB)

    Wadayama, T [Nagoya Institute of Technology, Gokiso, Showa-ku, Nagoya, Aichi, 466-8555 (Japan)], E-mail: wadayama@nitech.ac.jp

    2008-01-15

    In this paper, a novel decoding algorithm for low-density parity-check (LDPC) codes based on convex optimization is presented. The decoding algorithm, called interior point decoding, is designed for linear vector channels. The linear vector channels include many practically important channels such as inter-symbol interference channels and partial response channels. It is shown that the maximum likelihood decoding (MLD) rule for a linear vector channel can be relaxed to a convex optimization problem, which is called a relaxed MLD problem.

  11. NASA Tech Briefs, August 2013

    Science.gov (United States)

    2013-01-01

    Topics covered include: Radial Internal Material Handling System (RIMS) for Circular Habitat Volumes; Conical Seat Shut-Off Valve; Impact-Actuated Digging Tool for Lunar Excavation; Flexible Mechanical Conveyors for Regolith Extraction and Transport; Remote Memory Access Protocol Target Node Intellectual Property; Soft Decision Analyzer; Distributed Prognostics and Health Management with a Wireless Network Architecture; Minimal Power Latch for Single-Slope ADCs; Bismuth Passivation Technique for High-Resolution X-Ray Detectors; High-Strength, Super-elastic Compounds; Cu-Cr-Nb-Zr Alloy for Rocket Engines and Other High-Heat- Flux Applications; Microgravity Storage Vessels and Conveying-Line Feeders for Cohesive Regolith; CRUQS: A Miniature Fine Sun Sensor for Nanosatellites; On-Chip Microfluidic Components for In Situ Analysis, Separation, and Detection of Amino Acids; Spectroscopic Determination of Trace Contaminants in High-Purity Oxygen; Method of Separating Oxygen From Spacecraft Cabin Air to Enable Extravehicular Activities; Atomic Force Microscope Mediated Chromatography; Sample Analysis at Mars Instrument Simulator; Access Control of Web- and Java-Based Applications; Tool for Automated Retrieval of Generic Event Tracks (TARGET); Bilayer Protograph Codes for Half-Duplex Relay Channels; Influence of Computational Drop Representation in LES of a Droplet-Laden Mixing Layer.

  12. The physics data base

    International Nuclear Information System (INIS)

    Gault, F.D.

    1984-01-01

    The physics data base is introduced along with its associated data base management system. The emphasis is on data and their use and a classification of data and of data bases is developed to distinguish compilation organizations. The characteristics of these organizations are examined briefly and the long term consequences of the physics data base discussed. (orig.)

  13. Solid Base Catalysis

    CERN Document Server

    Ono, Yoshio

    2011-01-01

    The importance of solid base catalysts has come to be recognized for their environmentally benign qualities, and much significant progress has been made over the past two decades in catalytic materials and solid base-catalyzed reactions. The book is focused on the solid base. Because of the advantages over liquid bases, the use of solid base catalysts in organic synthesis is expanding. Solid bases are easier to dispose than liquid bases, separation and recovery of products, catalysts and solvents are less difficult, and they are non-corrosive. Furthermore, base-catalyzed reactions can be performed without using solvents and even in the gas phase, opening up more possibilities for discovering novel reaction systems. Using numerous examples, the present volume describes the remarkable role solid base catalysis can play, given the ever increasing worldwide importance of "green" chemistry. The reader will obtain an overall view of solid base catalysis and gain insight into the versatility of the reactions to whic...

  14. Spatially coupled low-density parity-check error correction for holographic data storage

    Science.gov (United States)

    Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro

    2017-09-01

    The spatially coupled low-density parity-check (SC-LDPC) code was considered for holographic data storage. The superiority of SC-LDPC was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number, and when the lifting number is over 100, SC-LDPC shows better error correctability than irregular LDPC. SC-LDPC is applied to the 5:9 modulation code, which is one of the differential codes. In simulation, the error-free point is near 2.8 dB and error rates over 10^-1 can be corrected. From these simulation results, this error correction code can be applied to actual holographic data storage test equipment. Results showed that an error rate of 8 × 10^-2 can be corrected; furthermore, it works effectively and shows good error correctability.
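
    The lifting number mentioned above refers to the copy-and-permute expansion of a base graph. A hedged sketch of circulant lifting for a 0/1 base matrix follows (the base matrix, lifting size and shift values are arbitrary illustrations; protograph entries greater than one would need sums of distinct circulants):

    import numpy as np

    def lift_protograph(B, Z, shifts):
        # Expand a 0/1 base (protograph) matrix B into a QC-LDPC
        # parity-check matrix: each 1 becomes a Z x Z cyclically
        # shifted identity, each 0 a Z x Z all-zero block.
        m, n = B.shape
        H = np.zeros((m * Z, n * Z), dtype=np.uint8)
        I = np.eye(Z, dtype=np.uint8)
        for r in range(m):
            for c in range(n):
                if B[r, c]:
                    H[r*Z:(r+1)*Z, c*Z:(c+1)*Z] = np.roll(I, shifts[r, c], axis=1)
        return H

    B = np.array([[1, 1, 1, 0],
                  [0, 1, 1, 1]])
    shifts = np.array([[0, 1, 2, 0],
                       [0, 2, 0, 1]])
    print(lift_protograph(B, 4, shifts).shape)   # (8, 16)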

  15. Beyond Zero Based Budgeting.

    Science.gov (United States)

    Ogden, Daniel M., Jr.

    1978-01-01

    Suggests that the most practical budgeting system for most managers is a formalized combination of incremental and zero-based analysis because little can be learned about most programs from an annual zero-based budget. (Author/IRT)

  16. VectorBase

    Data.gov (United States)

    U.S. Department of Health & Human Services — VectorBase is a Bioinformatics Resource Center for invertebrate vectors. It is one of four Bioinformatics Resource Centers funded by NIAID to provide web-based...

  17. Mobile Inquiry Based Learning

    NARCIS (Netherlands)

    Specht, Marcus

    2012-01-01

    Specht, M. (2012, 8 November). Mobile Inquiry Based Learning. Presentation given at the Workshop "Mobile inquiry-based learning" at the Mobile Learning Day 2012 at the Fernuniversität Hagen, Hagen, Germany.

  18. Microbead agglutination based assays

    KAUST Repository

    Kodzius, Rimantas; Castro, David; Foulds, Ian G.; Parameswaran, Ash M.; Sumanpreet, K. Chhina

    2013-01-01

    We report a simple and rapid room temperature assay for point-of-care (POC) testing that is based on specific agglutination. Agglutination tests are based on aggregation of microbeads in the presence of a specific analyte thus enabling

  19. Carbon Based Nanotechnology: Review

    Science.gov (United States)

    Srivastava, Deepak; Saini, Subhash (Technical Monitor)

    1999-01-01

    This presentation reviews publicly available information related to carbon based nanotechnology. Topics covered include nanomechanics, carbon based electronics, nanodevice/materials applications, nanotube motors, nano-lithography and H2O storage in nanotubes.

  20. The ground based plan

    International Nuclear Information System (INIS)

    1989-01-01

    The paper presents a report on 'The Ground Based Plan' of the United Kingdom Science and Engineering Research Council. The Ground Based Plan is a plan for research in astronomy and planetary science by ground based techniques. The report contains a description of: the scientific objectives and technical requirements (the basis for the Plan); the present organisation and funding for the ground based programme; the Plan itself; its main scientific features; and the further objectives of the Plan. (U.K.)

  1. Stolen Base Physics

    Science.gov (United States)

    Kagan, David

    2013-01-01

    Few plays in baseball are as consistently close and exciting as the stolen base. While there are several studies of sprinting, the art of base stealing is much more nuanced. This article describes the motion of the base-stealing runner using a very basic kinematic model. The model will be compared to some data from a Major League game. The…

  2. Convergent Filter Bases

    Directory of Open Access Journals (Sweden)

    Coghetto Roland

    2015-09-01

    Full Text Available We are inspired by the work of Henri Cartan [16], Bourbaki [10] (TG. I Filtres) and Claude Wagschal [34]. We define the base of a filter, the image filter, convergent filter bases, the limit filter and the filter base of tails (fr: filtre des sections).

  3. Cholinesterase-based biosensors.

    Science.gov (United States)

    Štěpánková, Šárka; Vorčáková, Katarína

    2016-01-01

    Recently, cholinesterase-based biosensors are widely used for assaying anticholinergic compounds. Primarily biosensors based on enzyme inhibition are useful analytical tools for fast screening of inhibitors, such as organophosphates and carbamates. The present review is aimed at compilation of the most important facts about cholinesterase based biosensors, types of physico-chemical transduction, immobilization strategies and practical applications.

  4. Near-Capacity Coding for Discrete Multitone Systems with Impulse Noise

    Directory of Open Access Journals (Sweden)

    Kschischang Frank R

    2006-01-01

    Full Text Available We consider the design of near-capacity-achieving error-correcting codes for a discrete multitone (DMT) system in the presence of both additive white Gaussian noise and impulse noise. Impulse noise is one of the main channel impairments for digital subscriber lines (DSL). One way to combat impulse noise is to detect the presence of the impulses and to declare an erasure when an impulse occurs. In this paper, we propose a coding system based on low-density parity-check (LDPC) codes and bit-interleaved coded modulation that is capable of taking advantage of the knowledge of erasures. We show that by carefully choosing the degree distribution of an irregular LDPC code, both the additive noise and the erasures can be handled by a single code, thus eliminating the need for an outer code. Such a system can perform close to the capacity of the channel and, for the same redundancy, is significantly more immune to the impulse noise than existing methods based on an outer Reed-Solomon (RS) code. The proposed method has a lower implementation complexity than the concatenated coding approach.
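
    The erasure strategy described above translates directly into decoder inputs: positions flagged as impulse-corrupted get LLR 0, so the LDPC decoder treats them as fully unknown. A minimal BPSK/AWGN sketch (array names illustrative):

    import numpy as np

    def channel_llrs(y, sigma2, impulse_flags):
        # Usual AWGN LLR 2y/sigma^2 for clean samples; declared
        # erasures get LLR 0 so the decoder treats them as unknown.
        llr = 2.0 * y / sigma2
        llr[impulse_flags] = 0.0
        return llr

    y = np.array([0.9, -1.1, 25.0, 0.3])            # third sample hit by impulse
    flags = np.array([False, False, True, False])   # impulse detector output
    print(channel_llrs(y, 0.5, flags))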

  5. Research on formation of microsatellite communication with genetic algorithm.

    Science.gov (United States)

    Wu, Guoqiang; Bai, Yuguang; Sun, Zhaowei

    2013-01-01

    For the formation of three microsatellites that fly in the same orbit and perform three-dimensional solid mapping of the Earth, this paper proposes an optimizing design method for the space circular formation order based on an improved genetic algorithm and provides an intersatellite direct spread spectrum communication system. The calculation of LEO formation-flying satellite intersatellite links is guided by the special requirements of formation-flying microsatellite intersatellite links, and the transmitter power is also determined through simulation. The method of space circular formation order optimization based on the improved genetic algorithm is given, and it can keep the formation order steady for a long time under various disturbing forces. The intersatellite direct spread spectrum communication system is also provided. It can be found that, when the distance is 1 km and the data rate is 1 Mbps, the input waveform matches the output waveform well. LDPC coding can improve the communication performance: the error-correcting capability of the (512, 256) LDPC code is distinctly better than that of the (2, 1, 7) convolutional code. The designed system can satisfy the communication requirements of microsatellites, so the presented method provides a significant theoretical foundation for formation flying and intersatellite communication.

  6. A Novel Strategy Using Factor Graphs and the Sum-Product Algorithm for Satellite Broadcast Scheduling Problems

    Science.gov (United States)

    Chen, Jung-Chieh

    This paper presents a low-complexity algorithmic framework for finding a broadcasting schedule in a low-altitude satellite system, i.e., the satellite broadcast scheduling (SBS) problem, based on the recent modeling and computational methodology of factor graphs. Inspired by the huge success of low-density parity-check (LDPC) codes in the field of error-control coding, in this paper we transform the SBS problem into an LDPC-like problem through a factor graph instead of using the conventional neural network approaches to solve the SBS problem. Based on the factor graph framework, the soft information, describing the probability that each satellite will broadcast information to a terminal at a specific time slot, is exchanged among the local processors in the proposed framework via the sum-product algorithm to iteratively optimize the satellite broadcasting schedule. Numerical results show that the proposed approach not only obtains optimal solutions but also enjoys a low complexity suitable for integrated-circuit implementation.

  7. Channel coding techniques for wireless communications

    CERN Document Server

    Deergha Rao, K

    2015-01-01

    The book discusses modern channel coding techniques for wireless communications such as turbo codes, low-density parity check (LDPC) codes, space–time (ST) coding, RS (or Reed–Solomon) codes and convolutional codes. Many illustrative examples are included in each chapter for easy understanding of the coding techniques. The text is integrated with MATLAB-based programs to enhance the understanding of the subject’s underlying theories. It includes current topics of increasing importance such as turbo codes, LDPC codes, Luby transform (LT) codes, Raptor codes, and ST coding in detail, in addition to the traditional codes such as cyclic codes, BCH (or Bose–Chaudhuri–Hocquenghem) and RS codes and convolutional codes. Multiple-input and multiple-output (MIMO) communications is a multiple antenna technology, which is an effective method for high-speed or high-reliability wireless communications. PC-based MATLAB m-files for the illustrative examples are provided on the book page on Springer.com for free dow...

  8. Photonic circuits for iterative decoding of a class of low-density parity-check codes

    International Nuclear Information System (INIS)

    Pavlichin, Dmitri S; Mabuchi, Hideo

    2014-01-01

    Photonic circuits in which stateful components are coupled via guided electromagnetic fields are natural candidates for resource-efficient implementation of iterative stochastic algorithms based on propagation of information around a graph. Conversely, such message-passing algorithms suggest novel circuit architectures for signal processing and computation that are well matched to nanophotonic device physics. Here, we construct and analyze a quantum optical model of a photonic circuit for iterative decoding of a class of low-density parity-check (LDPC) codes called expander codes. Our circuit can be understood as an open quantum system whose autonomous dynamics map straightforwardly onto the subroutines of an LDPC decoding scheme, with several attractive features: it can operate in the ultra-low power regime of photonics in which quantum fluctuations become significant, it is robust to noise and component imperfections, it achieves comparable performance to known iterative algorithms for this class of codes, and it provides an instructive example of how nanophotonic cavity quantum electrodynamic components can enable useful new information technology even if the solid-state qubits on which they are based are heavily dephased and cannot support large-scale entanglement. (paper)

  9. ARAC terrain data base

    International Nuclear Information System (INIS)

    Walker, H.

    1982-11-01

    A terrain data base covering the continental United States at 500-meter resolution has been generated. Its function is to provide terrain data for input to mesoscale atmospheric models that are used as part of the Atmospheric Release Advisory Capability at Lawrence Livermore Laboratory (LLNL). The structure of the data base as it exists on the LLNL computer system is described. The data base has been written to tapes for transfer to other systems and the format of these tapes is also described

  10. Base Station Performance Model

    OpenAIRE

    Walsh, Barbara; Farrell, Ronan

    2005-01-01

    At present the testing of power amplifiers within base station transmitters is limited to testing at component level as opposed to testing at the system level. While the detection of catastrophic failure is possible, that of performance degradation is not. This paper proposes a base station model with respect to transmitter output power with the aim of introducing system level monitoring of the power amplifier behaviour within the base station. Our model reflects the expe...

  11. Value-based pricing

    OpenAIRE

    Netseva-Porcheva Tatyana

    2010-01-01

    The main aim of the paper is to present value-based pricing. Therefore, a comparison between two approaches to pricing is made: cost-based pricing and value-based pricing. The 'price sensitivity meter' is presented. The other topic of the paper is perceived value - the meaning of perceived value, the components of perceived value, the determination of perceived value and the increasing of perceived value. In addition, the best company strategies in the 'value-cost' matrix are outlined.

  12. Network-Based Effectiveness

    National Research Council Canada - National Science Library

    Friman, Henrik

    2006-01-01

    ...) to increase competitive advantage, innovation, and mission effectiveness. Network-based effectiveness occurs due to the influence of various factors such as people, procedures, technology, and organizations...

  13. Case-based reasoning

    CERN Document Server

    Kolodner, Janet

    1993-01-01

    Case-based reasoning is one of the fastest growing areas in the field of knowledge-based systems and this book, authored by a leader in the field, is the first comprehensive text on the subject. Case-based reasoning systems are systems that store information about situations in their memory. As new problems arise, similar situations are searched out to help solve these problems. Problems are understood and inferences are made by finding the closest cases in memory, comparing and contrasting the problem with those cases, making inferences based on those comparisons, and asking questions whe

  14. Strengths-based Learning

    DEFF Research Database (Denmark)

    Ledertoug, Mette Marie

    -being. The Ph.D.-project in Strength-based learning took place in a Danish school with 750 pupils age 6-16 and a similar school was functioning as a control group. The presentation will focus on both the aware-explore-apply processes and the practical implications for the schools involved, and on measurable......'Strength-based learning - Children's Character Strengths as Means to their Learning Potential' is a Ph.D.-project aiming to create a strength-based mindset in school settings and at the same time introducing strength-based interventions as specific tools to improve both learning and well...

  15. Monitoring Knowledge Base (MKB)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Monitoring Knowledge Base (MKB) is a compilation of emissions measurement and monitoring techniques associated with air pollution control devices, industrial...

  16. Imagery Data Base Facility

    Data.gov (United States)

    Federal Laboratory Consortium — The Imagery Data Base Facility supports AFRL and other government organizations by providing imagery interpretation and analysis to users for data selection, imagery...

  17. Game-Based Teaching

    DEFF Research Database (Denmark)

    Hanghøj, Thorkild

    2013-01-01

    This chapter outlines theoretical and empirical perspectives on how Game-Based Teaching can be integrated within the context of formal schooling. Initially, this is done by describing game scenarios as models for possible actions that need to be translated into curricular knowledge practices...... approaches to game-based teaching, which may or may not correspond with the pedagogical models of particular games....

  18. Secure base stations

    NARCIS (Netherlands)

    Bosch, Peter; Brusilovsky, Alec; McLellan, Rae; Mullender, Sape J.; Polakos, Paul

    2009-01-01

    With the introduction of the third generation (3G) Universal Mobile Telecommunications System (UMTS) base station router (BSR) and fourth generation (4G) base stations, such as the 3rd Generation Partnership Project (3GPP) Long Term Evolution (LTE) Evolved Node B (eNB), it has become important to

  19. Hydrogel based occlusion systems

    NARCIS (Netherlands)

    Stam, F.A.; Jackson, N.; Dubruel, P.; Adesanya, K.; Embrechts, A.; Mendes, E.; Neves, H.P.; Herijgers, P.; Verbrugghe, Y.; Shacham, Y.; Engel, L.; Krylov, V.

    2013-01-01

    A hydrogel based occlusion system, a method for occluding vessels, appendages or aneurysms, and a method for hydrogel synthesis are disclosed. The hydrogel based occlusion system includes a hydrogel having a shrunken and a swollen state and a delivery tool configured to deliver the hydrogel to a

  20. Diffusion Based Photon Mapping

    DEFF Research Database (Denmark)

    Schjøth, Lars; Fogh Olsen, Ole; Sporring, Jon

    2007-01-01

    . To address this problem we introduce a novel photon mapping algorithm based on nonlinear anisotropic diffusion. Our algorithm adapts according to the structure of the photon map such that smoothing occurs along edges and structures and not across. In this way we preserve the important illumination features......, while eliminating noise. We call our method diffusion based photon mapping....

  1. Diffusion Based Photon Mapping

    DEFF Research Database (Denmark)

    Schjøth, Lars; Olsen, Ole Fogh; Sporring, Jon

    2006-01-01

    . To address this problem we introduce a novel photon mapping algorithm based on nonlinear anisotropic diffusion. Our algorithm adapts according to the structure of the photon map such that smoothing occurs along edges and structures and not across. In this way we preserve the important illumination features......, while eliminating noise. We call our method diffusion based photon mapping....

  2. Zero-Based Budgeting.

    Science.gov (United States)

    Wichowski, Chester

    1979-01-01

    The zero-based budgeting approach is designed to achieve the greatest benefit with the fewest undesirable consequences. Seven basic steps make up the zero-based decision-making process: (1) identifying program goals, (2) classifying goals, (3) identifying resources, (4) reviewing consequences, (5) developing decision packages, (6) implementing a…

  3. Office-based anaesthesia

    African Journals Online (AJOL)

    infection, and consistency in nursing personnel. In the USA 17 - 24% of all elective ambulatory surgery is ... knowledge base or personality to deal with the OBA environment. Compared with hospitals, office-based facilities currently ... disease or major cardiovascular risk factors). Intravenous access via a flexible cannula is.

  4. Knowledge base mechanisms

    Energy Technology Data Exchange (ETDEWEB)

    Suwa, M; Furukawa, K; Makinouchi, A; Mizoguchi, T; Mizoguchi, F; Yamasaki, H

    1982-01-01

    One of the principal goals of the Fifth Generation Computer System Project for the coming decade is to develop a methodology for building knowledge information processing systems which will provide people with intelligent agents. The key notion of the fifth generation computer system is knowledge used for problem solving. In this paper the authors describe the plan of R&D on knowledge base mechanisms. A knowledge representation system is to be designed to support knowledge acquisition for the knowledge information processing systems. The system will include a knowledge representation language, a knowledge base editor and a debugger. It is also expected to perform as a kind of meta-inference system. In order to develop large-scale knowledge base systems, a knowledge base mechanism based on the relational model is to be studied in the earlier stage of the project. Distributed problem solving is also one of the main issues of the project. 19 references.

  5. Skull base tumours

    Energy Technology Data Exchange (ETDEWEB)

    Borges, Alexandra [Instituto Portugues de Oncologia Francisco Gentil, Servico de Radiologia, Rua Professor Lima Basto, 1093 Lisboa Codex (Portugal)], E-mail: borgesalexandra@clix.pt

    2008-06-15

    With the advances of cross-sectional imaging, radiologists have gained an increasing responsibility in the management of patients with skull base pathology. As this anatomic area is hidden from clinical examination, surgeons and radiation oncologists have to rely on imaging studies to plan the most adequate treatment. To fulfil this endeavour, radiologists need to be knowledgeable about skull base anatomy and about the main treatment options available, their indications and contra-indications, and need to be aware of the wide gamut of pathologies seen in this anatomic region. This article will provide a radiologist-friendly approach to the central skull base and will review the most common central skull base tumours and tumours intrinsic to the bony skull base.

  6. Evidence-based radiography

    International Nuclear Information System (INIS)

    Hafslund, Bjorg; Clare, Judith; Graverholt, Birgitte; Wammen Nortvedt, Monica

    2008-01-01

    Evidence-based practice (EBP) offers the integration of the best research evidence with clinical knowledge and expertise and patient values. EBP is a well known term in health care. This paper discusses the implementation of EBP into radiography and introduces the term evidence-based radiography. Evidence-based radiography is radiography informed and based on the combination of clinical expertise and the best available research-based evidence, patient preferences and resources available. In Norway, EBP in radiography is being debated and radiographers are discussing the challenges of implementing EBP in both academic and clinical practice. This discussion paper explains why EBP needs to be a basis for a radiography curriculum and a part of radiographers' practice. We argue that Norwegian radiographers must increase participation in research and developing practice within their specific radiographic domain

  7. Skull base tumours

    International Nuclear Information System (INIS)

    Borges, Alexandra

    2008-01-01

    With the advances of cross-sectional imaging, radiologists have gained an increasing responsibility in the management of patients with skull base pathology. As this anatomic area is hidden from clinical examination, surgeons and radiation oncologists have to rely on imaging studies to plan the most adequate treatment. To fulfil this endeavour, radiologists need to be knowledgeable about skull base anatomy and about the main treatment options available, their indications and contra-indications, and need to be aware of the wide gamut of pathologies seen in this anatomic region. This article will provide a radiologist-friendly approach to the central skull base and will review the most common central skull base tumours and tumours intrinsic to the bony skull base

  8. Iterative Sparse Channel Estimation and Decoding for Underwater MIMO-OFDM

    Directory of Open Access Journals (Sweden)

    Berger Christian R

    2010-01-01

    Full Text Available We propose a block-by-block iterative receiver for underwater MIMO-OFDM that couples channel estimation with multiple-input multiple-output (MIMO) detection and low-density parity-check (LDPC) channel decoding. In particular, the channel estimator is based on a compressive sensing technique to exploit the channel sparsity, the MIMO detector consists of a hybrid use of successive interference cancellation and soft minimum mean-square error (MMSE) equalization, and channel coding uses nonbinary LDPC codes. Various feedback strategies from the channel decoder to the channel estimator are studied, including full feedback of hard or soft symbol decisions, as well as their threshold-controlled versions. We study the receiver performance using numerical simulation and experimental data collected from the RACE08 and SPACE08 experiments. We find that iterative receiver processing including sparse channel estimation leads to impressive performance gains. These gains are more pronounced when the number of available pilots to estimate the channel is decreased, for example, when a fixed number of pilots is split between an increasing number of parallel data streams in MIMO transmission. For the various feedback strategies for iterative channel estimation, we observe that soft decision feedback slightly outperforms hard decision feedback.
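
    As a hedged illustration of the compressive-sensing channel-estimation step (a real-valued stand-in: actual underwater acoustic channels are complex baseband and the paper's pilot design differs), orthogonal matching pursuit recovers a K-sparse impulse response from fewer pilot measurements than channel taps:

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(1)

    # Sparse channel: K nonzero taps in a long delay spread (assumption).
    L, K, n_pilots = 128, 4, 48
    h = np.zeros(L)
    h[rng.choice(L, K, replace=False)] = rng.normal(size=K)

    # Pilot measurement matrix (random BPSK pilots as a stand-in).
    A = rng.choice([-1.0, 1.0], size=(n_pilots, L))
    y = A @ h + 0.05 * rng.normal(size=n_pilots)

    # OMP exploits sparsity to estimate the channel from few pilots.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=K,
                                    fit_intercept=False).fit(A, y)
    print(np.linalg.norm(omp.coef_ - h) / np.linalg.norm(h))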

  9. Spinstand demonstration of areal density enhancement using two-dimensional magnetic recording (invited)

    Science.gov (United States)

    Lippman, Thomas; Brockie, Richard; Coker, Jon; Contreras, John; Galbraith, Rick; Garzon, Samir; Hanson, Weldon; Leong, Tom; Marley, Arley; Wood, Roger; Zakai, Rehan; Zolla, Howard; Duquette, Paul; Petrizzi, Joe

    2015-05-01

    Exponential growth of the areal density has driven the magnetic recording industry for almost sixty years. But now areal density growth is slowing down, suggesting that current technologies are reaching their fundamental limit. The next generation of recording technologies, namely, energy-assisted writing and bit-patterned media, remains just over the horizon. Two-Dimensional Magnetic Recording (TDMR) is a promising new approach, enabling continued areal density growth with only modest changes to the heads and recording electronics. We demonstrate a first generation implementation of TDMR by using a dual-element read sensor to improve the recovery of data encoded by a conventional low-density parity-check (LDPC) channel. The signals are combined with a 2D equalizer into a single modified waveform that is decoded by a standard LDPC channel. Our detection hardware can perform simultaneous measurement of the pre- and post-combined error rate information, allowing one set of measurements to assess the absolute areal density capability of the TDMR system as well as the gain over a conventional shingled magnetic recording system with identical components. We discuss areal density measurements using this hardware and demonstrate gains exceeding five percent based on experimental dual reader components.

  10. Spinstand demonstration of areal density enhancement using two-dimensional magnetic recording (invited)

    International Nuclear Information System (INIS)

    Lippman, Thomas; Brockie, Richard; Contreras, John; Garzon, Samir; Leong, Tom; Marley, Arley; Wood, Roger; Zakai, Rehan; Zolla, Howard; Coker, Jon; Galbraith, Rick; Hanson, Weldon; Duquette, Paul; Petrizzi, Joe

    2015-01-01

    Exponential growth of the areal density has driven the magnetic recording industry for almost sixty years. But now areal density growth is slowing down, suggesting that current technologies are reaching their fundamental limit. The next generation of recording technologies, namely, energy-assisted writing and bit-patterned media, remains just over the horizon. Two-Dimensional Magnetic Recording (TDMR) is a promising new approach, enabling continued areal density growth with only modest changes to the heads and recording electronics. We demonstrate a first generation implementation of TDMR by using a dual-element read sensor to improve the recovery of data encoded by a conventional low-density parity-check (LDPC) channel. The signals are combined with a 2D equalizer into a single modified waveform that is decoded by a standard LDPC channel. Our detection hardware can perform simultaneous measurement of the pre- and post-combined error rate information, allowing one set of measurements to assess the absolute areal density capability of the TDMR system as well as the gain over a conventional shingled magnetic recording system with identical components. We discuss areal density measurements using this hardware and demonstrate gains exceeding five percent based on experimental dual reader components

  11. Spinstand demonstration of areal density enhancement using two-dimensional magnetic recording (invited)

    Energy Technology Data Exchange (ETDEWEB)

    Lippman, Thomas, E-mail: Thomas.Lippman@hgst.com; Brockie, Richard; Contreras, John; Garzon, Samir; Leong, Tom; Marley, Arley; Wood, Roger; Zakai, Rehan; Zolla, Howard [HGST, a Western Digital Company, San Jose, California 95119 (United States); Coker, Jon; Galbraith, Rick; Hanson, Weldon [HGST, a Western Digital Company, Rochester, Minnesota 55901 (United States); Duquette, Paul; Petrizzi, Joe [Avago Technologies, San Jose, California 95131 (United States)

    2015-05-07

    Exponential growth of the areal density has driven the magnetic recording industry for almost sixty years. But now areal density growth is slowing down, suggesting that current technologies are reaching their fundamental limit. The next generation of recording technologies, namely, energy-assisted writing and bit-patterned media, remains just over the horizon. Two-Dimensional Magnetic Recording (TDMR) is a promising new approach, enabling continued areal density growth with only modest changes to the heads and recording electronics. We demonstrate a first generation implementation of TDMR by using a dual-element read sensor to improve the recovery of data encoded by a conventional low-density parity-check (LDPC) channel. The signals are combined with a 2D equalizer into a single modified waveform that is decoded by a standard LDPC channel. Our detection hardware can perform simultaneous measurement of the pre- and post-combined error rate information, allowing one set of measurements to assess the absolute areal density capability of the TDMR system as well as the gain over a conventional shingled magnetic recording system with identical components. We discuss areal density measurements using this hardware and demonstrate gains exceeding five percent based on experimental dual reader components.

  12. Quick-low-density parity check and dynamic threshold voltage optimization in 1X nm triple-level cell NAND flash memory with comprehensive analysis of endurance, retention-time, and temperature variation

    Science.gov (United States)

    Doi, Masafumi; Tokutomi, Tsukasa; Hachiya, Shogo; Kobayashi, Atsuro; Tanakamaru, Shuhei; Ning, Sheyang; Ogura Iwasaki, Tomoko; Takeuchi, Ken

    2016-08-01

    NAND flash memory's reliability degrades with increasing endurance, retention-time and/or temperature. After a comprehensive evaluation of 1X nm triple-level cell (TLC) NAND flash, two highly reliable techniques are proposed. The first proposal, quick low-density parity check (Quick-LDPC), requires only one cell read in order to accurately estimate a bit-error rate (BER) that includes the effects of temperature, write and erase (W/E) cycles and retention-time. As a result, an 83% read latency reduction is achieved compared to conventional AEP-LDPC. Also, W/E cycling is extended by 100% compared with conventional Bose-Chaudhuri-Hocquenghem (BCH) error-correcting code (ECC). The second proposal, dynamic threshold voltage optimization (DVO), has two parts: adaptive V_Ref shift (AVS) and V_TH space control (VSC). AVS reduces read errors and latency by adaptively optimizing the reference voltage (V_Ref) based on temperature, W/E cycles and retention-time. AVS stores the optimal V_Ref values in a table in order to enable one cell read. VSC further improves AVS by optimizing the voltage margins between V_TH states. DVO reduces BER by 80%.

  13. Log-Likelihood Ratio Calculation for Iterative Decoding on Rayleigh Fading Channels Using Padé Approximation

    Directory of Open Access Journals (Sweden)

    Gou Hosoya

    2013-01-01

    Full Text Available Approximate calculation of the channel log-likelihood ratio (LLR) for wireless channels using Padé approximation is presented. The LLR is used as an input to iterative decoding of powerful error-correcting codes such as low-density parity-check (LDPC) codes or turbo codes. Due to the lack of knowledge of the channel state information of a wireless fading channel, such as an uncorrelated flat Rayleigh fading channel, exact LLR calculation for these channels is too complicated for practical implementation. The previous approach, an LLR calculation using the Taylor approximation, quickly becomes inaccurate as the channel output moves away from the expansion point; this becomes a serious problem when higher-order modulation schemes are employed. To overcome this problem, a new LLR approximation using Padé approximation, which expresses the original function as a ratio of two polynomials with the same total number of coefficients as the Taylor series and can accelerate the Taylor approximation, is devised. By applying the proposed approximation to iterative decoding of LDPC codes with several modulation schemes, we show the effectiveness of the proposed methods through simulation results and analysis based on density evolution.
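
    As a rough illustration of why a Padé approximant stays usable where a Taylor polynomial of the same total order diverges, the sketch below approximates tanh(x), used here only as a generic stand-in for an LLR-style nonlinearity (the paper's actual channel functions are not reproduced); scipy.interpolate.pade builds the rational form directly from the Taylor coefficients:

        # Hedged sketch: Taylor vs. Pade approximation away from the expansion point.
        import numpy as np
        from scipy.interpolate import pade

        # Taylor coefficients of tanh(x) about 0, in increasing order of power
        taylor = [0.0, 1.0, 0.0, -1.0/3.0, 0.0, 2.0/15.0, 0.0, -17.0/315.0]

        # Pade approximant with the same total number of coefficients:
        # degree-4 numerator over degree-3 denominator
        p, q = pade(taylor, 3)
        taylor_poly = np.poly1d(taylor[::-1])   # poly1d wants decreasing powers

        for xi in np.linspace(0.0, 3.0, 7):
            exact = np.tanh(xi)
            print(f"x={xi:4.1f}  exact={exact:+.4f}  "
                  f"taylor={taylor_poly(xi):+.4f}  pade={p(xi)/q(xi):+.4f}")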

  14. Design-Based Research

    DEFF Research Database (Denmark)

    Gynther, Karsten; Christensen, Ove; Petersen, Trine Brun

    2012-01-01

    This article introduces Design Based Research for the first time in Danish in a scientific journal. It presents the basic assumptions underlying the Design Based Research tradition and discusses the principles that govern the conduct of a DBR research project. Taking the research and development project ELYK (E-learning, Peripheral Areas and Cluster Formation) as its point of departure, the article presents the innovation model that the project has developed on the basis of the Design Based Research tradition. The ELYK DBR innovation model has proven effective with respect to...

  15. Nature-based integration

    DEFF Research Database (Denmark)

    Pitkänen, Kati; Oratuomi, Joose; Hellgren, Daniela

    Increased attention to, and careful planning of, the integration of migrants into Nordic societies is ever more important. Nature-based integration is a new solution to respond to this need. This report presents the results of a Nordic survey and workshop and illustrates current practices of nature-based integration through case study descriptions from Denmark, Sweden, Norway and Finland. Across the Nordic countries several practical projects and initiatives have been launched to promote the benefits of nature in integration, and there is also growing academic interest in the topic. The Nordic countries have the potential of becoming real forerunners in nature-based integration, even at the global scale.

  16. Data base management study

    Science.gov (United States)

    1976-01-01

    Data base management techniques and applicable equipment are described. Recommendations are presented to assist potential NASA data users in selecting and using appropriate data base management tools and techniques. Classes of currently available data processing equipment, ranging from basic terminals to large minicomputer systems, were surveyed as they apply to the needs of potential SEASAT data users, and cost and capability projections for this equipment through 1985 are presented. A test of a typical data base management system is described, together with its results and recommendations to assist potential users in determining when such a system is appropriate for their needs. The representative system tested was UNIVAC's DMS 1100.

  17. Value-based pricing

    Directory of Open Access Journals (Sweden)

    Netseva-Porcheva Tatyana

    2010-01-01

    Full Text Available The main aim of the paper is to present value-based pricing. A comparison is therefore made between two pricing approaches: cost-based pricing and value-based pricing. The 'price sensitivity meter' is presented. The other topic of the paper is perceived value: the meaning of perceived value, its components, its determination, and ways of increasing it. In addition, the best company strategies in the 'value-cost' matrix are outlined.

  18. QuickBase

    CERN Document Server

    Conner, Nancy

    2007-01-01

    Ready to put Intuit's QuickBase to work? Our new Missing Manual shows you how to capture, modify, share, and manage data and documents with this web-based data-sharing program quickly and easily. No longer do you have to coordinate your team through a blizzard of emails or play frustrating games of "guess which document is the right one." QuickBase saves your organization time and money, letting you manage and share the information that makes your business tick: sales figures, project timelines, drafts of documents, purchase or work requests--whatever information you need to keep business flowing.

  19. Cheboygan Vessel Base

    Data.gov (United States)

    Federal Laboratory Consortium — Cheboygan Vessel Base (CVB), located in Cheboygan, Michigan, is a field station of the USGS Great Lakes Science Center (GLSC). CVB was established by congressional...

  20. Hanscom Air Force Base

    Data.gov (United States)

    Federal Laboratory Consortium — MIT Lincoln Laboratory occupies 75 acres (20 acres of which are MIT property) on the eastern perimeter of Hanscom Air Force Base, which is at the nexus of Lexington,...

  1. Network-Based Effectiveness

    National Research Council Canada - National Science Library

    Friman, Henrik

    2006-01-01

    ... (extended from Leavitt, 1965). This text identifies aspects of network-based effectiveness that can benefit from a better understanding of leadership and management development of people, procedures, technology, and organizations...

  2. WormBase

    Data.gov (United States)

    U.S. Department of Health & Human Services — WormBase is an international consortium of biologists and computer scientists dedicated to providing the research community with accurate, current, accessible...

  3. Award for Kelomees at the Basel festival

    Index Scriptorium Estoniae

    2000-01-01

    At the Basel festival "VIPER - International Festival for Film Video and New Media", the prize for best CD-ROM went to Gustav Deutsch and Anna Schimek's "Odysee today", the prize for best net project to the Italian "01.ORG", and an honourable mention to Raivo Kelomees' "Videoweaver".

  4. Risk based modelling

    International Nuclear Information System (INIS)

    Chapman, O.J.V.; Baker, A.E.

    1993-01-01

    Risk based analysis is a tool becoming available to both engineers and managers to aid decision making concerning plant matters such as In-Service Inspection (ISI). In order to develop a risk based method, some form of Structural Reliability Risk Assessment (SRRA) needs to be performed to provide a probability-of-failure ranking for all sites around the plant. A Probabilistic Risk Assessment (PRA) can then be carried out to combine these possible events with the capability of plant safety systems and procedures, to establish the consequences of failure for the sites. In this way the probabilities of failure are converted into a risk based ranking which can be used to assist the process of deciding which sites should be included in an ISI programme. This paper reviews the technique and typical results of a risk based ranking assessment carried out for nuclear power plant pipework. (author)

  5. Problem Based Learning

    DEFF Research Database (Denmark)

    de Graaff, Erik; Guerra, Aida

    Problem-Based Learning (PBL) is an innovative method of organizing the learning process in such a way that the students actively engage in finding answers by themselves. During the past 40 years PBL has evolved and diversified, resulting in a multitude of variations in models and practices. However, the key principles remain the same everywhere. Graaff & Kolmos (2003) identify the main PBL principles as follows: 1. Problem orientation 2. Project organization through teams or group work 3. Participant-directed 4. Experiential learning 5. Activity-based learning 6. Interdisciplinary learning and 7. ... model and in general problem based and project based learning. We apply the principle of teach as you preach. The poster aims to outline the visitors' workshop programme showing the results of some recent evaluations.

  6. Biomimetics: nature based innovation

    National Research Council Canada - National Science Library

    Bar-Cohen, Yoseph

    2012-01-01

    "Based on the concept that nature offers numerous sources of inspiration for inventions related to mechanisms, materials, processes, and algorithms, this book covers the topic of biomimetics and the inspired innovation...

  7. BaseMap

    Data.gov (United States)

    California Natural Resource Agency — The goal of this project is to provide a convenient base map that can be used as a starting point for CA projects. It's simple, but designed to work at a number of...

  8. PHENANTHROLINE TEMPLATED SCHIFF BASE

    African Journals Online (AJOL)

    DNA in intercalative mode and in the development of unique chemotherapeutics where they impact on the ... between base pairs of DNA. .... h, i, j, k belong to fragmentation products of impap. ..... Sm(III) complex and herring sperm DNA. Bull.

  9. Lunar resource base

    Science.gov (United States)

    Pulley, John; Wise, Todd K.; Roy, Claude; Richter, Phil

    A lunar base that exploits local resources to enhance the productivity of a total SEI scenario is discussed. The goals were to emphasize lunar science and to land men on Mars in 2016 using significant amounts of lunar resources. It was assumed that propulsion was chemical and the surface power was non-nuclear. Three phases of the base build-up are outlined, the robotic emplacement of the first elements is detailed and a discussion of future options is included.

  10. Participatory design based research

    DEFF Research Database (Denmark)

    Dau, Susanne; Bach Jensen, Louise; Falk, Lars

    This poster reveals how participatory design-based research, through a CoED-inspired creative process, can be used to design solutions to problems regarding students' study activities outside campus.

  11. Maintaining Relationship Based Procurement

    OpenAIRE

    Davis, Peter

    2012-01-01

    Alliance and relationship projects are increasing in number and represent a large pool of work. To be successful, relationship style contracts depend on soft-dollar factors, particularly the participants' ability to work together within an agreed framework; generally they are not based on low-bid tendering. Participants should be prepared to do business in an open environment based on trust and mutually agreed governance. The research evaluates relationship maintenance in the implementation phase of con...

  12. Game-based telerehabilitation.

    Science.gov (United States)

    Lange, B; Flynn, Sheryl M; Rizzo, A A

    2009-03-01

    This article summarizes the recent accomplishments and current challenges facing game-based virtual reality (VR) telerehabilitation. Specifically this article addresses accomplishments relative to realistic practice scenarios, part to whole practice, objective measurement of performance and progress, motivation, low cost, interaction devices and game design. Furthermore, a description of the current challenges facing game based telerehabilitation including the packaging, internet capabilities and access, data management, technical support, privacy protection, seizures, distance trials, scientific scrutiny and support from insurance companies.

  13. REST based mobile applications

    Science.gov (United States)

    Rambow, Mark; Preuss, Thomas; Berdux, Jörg; Conrad, Marc

    2008-02-01

    Simplicity is the major advantage of REST based web services. Whereas SOAP is widespread in complex, security-sensitive business-to-business applications, REST is widely used for mashups and end-user centric applications. In that context we give an overview of REST and compare it to SOAP. Furthermore we apply the GeoDrawing application as an example for REST based mobile applications and emphasize the pros and cons of using REST in mobile application scenarios.

  14. Swarm-based medicine.

    Science.gov (United States)

    Putora, Paul Martin; Oldenburg, Jan

    2013-09-19

    Occasionally, medical decisions have to be taken in the absence of evidence-based guidelines. Other sources can be drawn upon to fill in the gaps, including experience and intuition. Authorities or experts, with their knowledge and experience, may provide further input--known as "eminence-based medicine". Due to the Internet and digital media, interactions among physicians now take place at a higher rate than ever before. With the rising number of interconnected individuals and their communication capabilities, the medical community is obtaining the properties of a swarm. The way individual physicians act depends on other physicians; medical societies act based on their members. Swarm behavior might facilitate the generation and distribution of knowledge as an unconscious process. As such, "swarm-based medicine" may add a further source of information to the classical approaches of evidence- and eminence-based medicine. How to integrate swarm-based medicine into practice is left to the individual physician, but even this decision will be influenced by the swarm.

  15. Evidence-Based Toxicology.

    Science.gov (United States)

    Hoffmann, Sebastian; Hartung, Thomas; Stephens, Martin

    Evidence-based toxicology (EBT) was introduced independently by two groups in 2005, in the context of toxicological risk assessment and causation as well as based on parallels between the evaluation of test methods in toxicology and evidence-based assessment of diagnostic tests in medicine. The role model of evidence-based medicine (EBM) motivated both proposals and guided the evolution of EBT, where especially systematic reviews and evidence quality assessment attract considerable attention in toxicology. Regarding test assessment, in the search for solutions to various problems related to validation, such as the imperfectness of the reference standard or the challenge of comprehensively evaluating tests, the field of Diagnostic Test Assessment (DTA) was identified as a potential resource. DTA being an EBM discipline, test method assessment/validation therefore became one of the main drivers spurring the development of EBT. In the context of pathway-based toxicology, EBT approaches, given their objectivity, transparency and consistency, have been proposed for carrying out a (retrospective) mechanistic validation. In summary, implementation of more evidence-based approaches may provide the tools necessary to adapt the assessment/validation of toxicological test methods and testing strategies to face the challenges of toxicology in the twenty-first century.

  16. LDEF materials data bases

    Science.gov (United States)

    Funk, Joan G.; Strickland, John W.; Davis, John M.

    1993-01-01

    The Long Duration Exposure Facility (LDEF) and the accompanying experiments were composed of and contained a wide variety of materials representing the largest collection of materials flown in low Earth orbit (LEO) and retrieved for ground based analysis to date. The results and implications of the mechanical, thermal, optical, and electrical data from these materials are the foundation on which future LEO space missions will be built. The LDEF Materials Special Investigation Group (MSIG) has been charged with establishing and developing data bases to document these materials and their performance to assure not only that the data are archived for future generations but also that the data are available to the spacecraft user community in an easily accessed, user-friendly form. This paper discusses the format and content of the three data bases developed or being developed to accomplish this task. The hardware and software requirements for each of these three data bases are discussed along with current availability of the data bases. This paper also serves as a user's guide to the MAPTIS LDEF Materials Data Base.

  17. Paper based electronics platform

    KAUST Repository

    Nassar, Joanna Mohammad

    2017-07-20

    A flexible and non-functionalized low cost paper-based electronic system platform fabricated from common paper, such as paper based sensors, and methods of producing paper based sensors, and methods of sensing using the paper based sensors are provided. A method of producing a paper based sensor can include the steps of: a) providing a conventional paper product to serve as a substrate for the sensor or as an active material for the sensor or both, the paper product not further treated or functionalized; and b) applying a sensing element to the paper substrate, the sensing element selected from the group consisting of a conductive material, the conductive material providing contacts and interconnects, a sensitive material film that exhibits sensitivity to pH levels, a compressible and/or porous material disposed between a pair of opposed conductive elements, or a combination of two or more of said sensing elements. The method of sensing can further include measuring, using the sensing element, a change in resistance, a change in voltage, a change in current, a change in capacitance, or a combination of any two or more thereof.

  18. Gossip-Based Dissemination

    Science.gov (United States)

    Friedman, Roy; Kermarrec, Anne-Marie; Miranda, Hugo; Rodrigues, Luís

    Gossip-based networking has emerged as a viable approach to disseminate information reliably and efficiently in large-scale systems. Initially introduced for database replication [222], the applicability of the approach extends much further now. For example, it has been applied for data aggregation [415], peer sampling [416] and publish/subscribe systems [845]. Gossip-based protocols rely on a periodic peer-wise exchange of information in wired systems. By changing the way each peer is selected for the gossip communication, and which data are exchanged and processed [451], gossip systems can be used to perform different distributed tasks, such as, among others: overlay maintenance, distributed computation, and information dissemination (a collection of papers on gossip can be found in [451]). In a wired setting, the peer sampling service, allowing for a random or specific peer selection, is often provided as an independent service, able to operate independently from other gossip-based services [416].
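
    As a minimal illustration of the periodic peer-wise exchange described above (the node count, rumor representation, and push-pull merge rule are invented for this sketch; real systems layer peer sampling, fan-out limits, and termination heuristics on top):

        # Hedged sketch of one gossip dissemination round: every node picks a
        # random peer and the pair merge their rumor sets (push-pull exchange).
        import random

        def gossip_round(state):
            """state: dict mapping node id -> set of known rumors."""
            nodes = list(state)
            for node in nodes:
                peer = random.choice([n for n in nodes if n != node])
                merged = state[node] | state[peer]
                state[node], state[peer] = merged, set(merged)

        # One rumor starts at node 0 and spreads in O(log n) expected rounds.
        state = {i: set() for i in range(16)}
        state[0].add("update-42")
        rounds = 0
        while any("update-42" not in s for s in state.values()):
            gossip_round(state)
            rounds += 1
        print(f"rumor reached all 16 nodes after {rounds} rounds")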

  19. Iron-based superconductivity

    CERN Document Server

    Johnson, Peter D; Yin, Wei-Guo

    2015-01-01

    This volume presents an in-depth review of experimental and theoretical studies on the newly discovered Fe-based superconductors. Following the Introduction, which places iron-based superconductors in the context of other unconventional superconductors, the book is divided into three sections covering sample growth, experimental characterization, and theoretical understanding. To understand the complex structure-property relationships of these materials, results from a wide range of experimental techniques and theoretical approaches are described that probe the electronic and magnetic properties

  20. Evidence-Based Development

    DEFF Research Database (Denmark)

    Hertzum, Morten; Simonsen, Jesper

    2004-01-01

    Systems development is replete with projects that represent substantial resource investments but result in systems that fail to meet users' needs. Evidence-based development is an emerging idea intended to provide means for managing customer-vendor relationships and working systematically toward meeting customer needs. We are suggesting that the effects of the use of a system should play a prominent role in the contractual definition of IT projects and that contract fulfilment should be determined on the basis of evidence of these effects. Based on two ongoing studies of home-care management...

  1. Video-based rendering

    CERN Document Server

    Magnor, Marcus A

    2005-01-01

    Driven by consumer-market applications that enjoy steadily increasing economic importance, graphics hardware and rendering algorithms are a central focus of computer graphics research. Video-based rendering is an approach that aims to overcome the current bottleneck in the time-consuming modeling process and has applications in areas such as computer games, special effects, and interactive TV. This book offers an in-depth introduction to video-based rendering, a rapidly developing new interdisciplinary topic employing techniques from computer graphics, computer vision, and telecommunication engineering.

  2. Process-based costing.

    Science.gov (United States)

    Lee, Robert H; Bott, Marjorie J; Forbes, Sarah; Redford, Linda; Swagerty, Daniel L; Taunton, Roma Lee

    2003-01-01

    Understanding how quality improvement affects costs is important. Unfortunately, low-cost, reliable ways of measuring direct costs are scarce. This article builds on the principles of process improvement to develop a costing strategy that meets both criteria. Process-based costing has 4 steps: developing a flowchart, estimating resource use, valuing resources, and calculating direct costs. To illustrate the technique, this article uses it to cost the care planning process in 3 long-term care facilities. We conclude that process-based costing is easy to implement; generates reliable, valid data; and allows nursing managers to assess the costs of new or modified processes.
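
    Steps 2-4 reduce to a small computation once the flowchart (step 1) has enumerated the activities; the resource names, minutes, and rates below are hypothetical, not the study's data:

        # Toy sketch of the four-step costing logic: estimate resource use,
        # value the resources, and calculate the direct cost of one process.
        care_planning = [                        # (resource, minutes per episode)
            ("RN assessment", 45),
            ("care-plan conference", 30),
            ("documentation", 20),
        ]
        hourly_rate = {                          # step 3: valuing resources ($/hour)
            "RN assessment": 38.0,
            "care-plan conference": 32.0,
            "documentation": 28.0,
        }

        direct_cost = sum(minutes / 60 * hourly_rate[res]
                          for res, minutes in care_planning)
        print(f"direct cost per care-planning episode: ${direct_cost:.2f}")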

  3. Inkjet-based micromanufacturing

    CERN Document Server

    Korvink, Jan G; Shin, Dong-Youn; Brand, Oliver; Fedder, Gary K; Hierold, Christofer; Tabata, Osamu

    2012-01-01

    Inkjet-based Micromanufacturing Inkjet technology goes way beyond putting ink on paper: it enables simpler, faster and more reliable manufacturing processes in the fields of micro- and nanotechnology. Modern inkjet heads are per se precision instruments that deposit droplets of fluids on a variety of surfaces in programmable, repeating patterns, allowing, after suitable modifications and adaptations, the manufacturing of devices such as thin-film transistors, polymer-based displays and photovoltaic elements. Moreover, inkjet technology facilitates the large-scale production of flexible RFID tr

  4. On multivariate Wilson bases

    DEFF Research Database (Denmark)

    Bownik, Marcin; Jakobsen, Mads Sielemann; Lemvig, Jakob

    2017-01-01

    A Wilson system is a collection of finite linear combinations of time frequency shifts of a square integrable function. In this paper we give an account of the construction of bimodular Wilson bases in higher dimensions from Gabor frames of redundancy two.

  5. Supramolecular fluorene based materials

    NARCIS (Netherlands)

    Abbel, R.J.

    2008-01-01

    This thesis describes the use of noncovalent interactions in order to manipulate and control the self-assembly and morphology of electroactive fluorene-based materials. The supramolecular arrangement of p-conjugated polymers and oligomers can strongly influence their electronic and photophysical properties.

  6. EPICS based DAQ system

    International Nuclear Information System (INIS)

    Cheng Weixing; Chen Yongzhong; Zhou Weimin; Ye Kairong; Liu Dekang

    2002-01-01

    EPICS is the most popular development platform for building control systems and beam diagnostic systems in modern physics experiment facilities. An EPICS based data acquisition system was built on the Red Hat 6.2 operating system. The system is successfully used in beam position monitor mapping, where it considerably improves the mapping process.

  7. Scenario-based strategizing

    DEFF Research Database (Denmark)

    Lehr, Thomas; Lorenz, Ullrich; Willert, Markus

    2017-01-01

    -based efficacy and robustness. To facilitate the collaborative strategizing in teams, we propose a matrix with robustness and efficacy as the two axes, which we call the Parmenides Matrix. We assess the impact of the novel approach by applying it in two cases, at a governmental agency (German Environmental...

  8. Dictionary Based Image Segmentation

    DEFF Research Database (Denmark)

    Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2015-01-01

    We propose a method for weakly supervised segmentation of natural images, which may contain both textured or non-textured regions. Our texture representation is based on a dictionary of image patches. To divide an image into separated regions with similar texture we use an implicit level sets...

  9. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.; Sarfraz, M.

    2004-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  10. Surfel Based Geometry Reconstruction

    DEFF Research Database (Denmark)

    Andersen, Vedrana; Aanæs, Henrik; Bærentzen, Jakob Andreas

    2010-01-01

    We propose a method for retrieving a piecewise smooth surface from noisy data. In data acquired by a scanning process sampled points are almost never on the discontinuities making reconstruction of surfaces with sharp features difficult. Our method is based on a Markov Random Field (MRF) formulat...

  11. Diffusion Based Photon Mapping

    DEFF Research Database (Denmark)

    Schjøth, Lars; Sporring, Jon; Fogh Olsen, Ole

    2008-01-01

    . To address this problem, we introduce a photon mapping algorithm based on nonlinear anisotropic diffusion. Our algorithm adapts according to the structure of the photon map such that smoothing occurs along edges and structures and not across. In this way, we preserve important illumination features, while...

  12. Evidence-based policy

    DEFF Research Database (Denmark)

    Vohnsen, Nina Holm

    2013-01-01

    -makers and the research community (e.g. Boden & Epstein 2006; House of Commons 2006; Cartwright et al 2009; Rod 2010; Vohnsen 2011). This article intends to draw out some general pitfalls in the curious meeting of science and politics by focusing on a particular attempt to make evidence-based legislation in Denmark (for...

  13. Project-Based Science

    Science.gov (United States)

    Krajcik, Joe

    2015-01-01

    Project-based science is an exciting way to teach science that aligns with the "Next Generation Science Standards" ("NGSS"). By focusing on core ideas along with practices and crosscutting concepts, classrooms become learning environments where teachers and students engage in science by designing and carrying out…

  14. Financing Competency Based Programs.

    Science.gov (United States)

    Daniel, Annette

    Literature on the background, causes, and current prevalence of competency based programs is synthesized in this report. According to one analysis of the actual and probable costs of minimum competency testing, estimated costs for test development, test administration, bureaucratic structures, and remedial programs for students who cannot pass the…

  15. Computer Based Expert Systems.

    Science.gov (United States)

    Parry, James D.; Ferrara, Joseph M.

    1985-01-01

    Claims knowledge-based expert computer systems can meet needs of rural schools for affordable expert advice and support and will play an important role in the future of rural education. Describes potential applications in prediction, interpretation, diagnosis, remediation, planning, monitoring, and instruction. (NEC)

  16. Community-Based Care

    Science.gov (United States)

    A variety of healthcare options ... day care centers are either in churches or community centers. Adult day care is commonly used to care for people who ...

  17. Polymer based tunneling sensor

    Science.gov (United States)

    Cui, Tianhong (Inventor); Wang, Jing (Inventor); Zhao, Yongjun (Inventor)

    2006-01-01

    A process for fabricating a polymer based circuit by the following steps. A mold of a design is formed through a lithography process. The design is transferred to a polymer substrate through a hot embossing process. A metal layer is then deposited over at least part of said design and at least one electrical lead is connected to said metal layer.

  18. Evidence-based guidelines

    DEFF Research Database (Denmark)

    Rovira, Àlex; Wattjes, Mike P; Tintoré, Mar

    2015-01-01

    diagnosis in patients with MS. The aim of this article is to provide guidelines for the implementation of MRI of the brain and spinal cord in the diagnosis of patients who are suspected of having MS. These guidelines are based on an extensive review of the recent literature, as well as on the personal...

  19. School Based Health Centers

    Science.gov (United States)

    Children's Aid Society, 2012

    2012-01-01

    School Based Health Centers (SBHC) are considered by experts as one of the most effective and efficient ways to provide preventive health care to children. Few programs are as successful in delivering health care to children at no cost to the patient, and where they are: in school. For many underserved children, The Children's Aid Society's…

  20. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and thereby to explore...
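
    The nonparametric (DEA) part can be sketched as one small linear program per unit: the efficiency score is the largest proportional input shrinkage under which a convex combination of peer units still dominates the unit. The sketch below uses toy single-input/single-output data (invented numbers); a score of 1.0 puts a unit on the efficiency frontier, and 1 minus the score is its improvement potential.

        # Hedged sketch: input-oriented CCR DEA efficiency via linear programming.
        import numpy as np
        from scipy.optimize import linprog

        X = np.array([[2.0], [4.0], [3.0], [5.0]])   # inputs (one row per unit)
        Y = np.array([[1.0], [2.0], [2.4], [2.0]])   # outputs

        def dea_efficiency(k):
            """Minimise theta s.t. a peer combination uses at most theta * x_k
            input and produces at least y_k output."""
            n = len(X)
            c = np.r_[1.0, np.zeros(n)]                     # variables: [theta, lambda_1..n]
            A_in = np.c_[-X[k].reshape(-1, 1), X.T]         # sum(lam*x) - theta*x_k <= 0
            A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]  # -sum(lam*y) <= -y_k
            b = np.r_[np.zeros(X.shape[1]), -Y[k]]
            return linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b).fun

        for k in range(len(X)):
            print(f"unit {k}: efficiency = {dea_efficiency(k):.2f}")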

  1. Problem-based learning

    NARCIS (Netherlands)

    Loyens, Sofie; Kirschner, Paul A.; Paas, Fred

    2010-01-01

    Loyens, S. M. M., Kirschner, P. A., & Paas, F. (2011). Problem-based learning. In S. Graham (Editor-in-Chief), A. Bus, S. Major, & L. Swanson (Associate Editors), APA educational psychology handbook: Vol. 3. Application to learning and teaching (pp. 403-425). Washington, DC: American Psychological Association.

  2. Base tree property

    Czech Academy of Sciences Publication Activity Database

    Balcar, B.; Doucha, Michal; Hrušák, M.

    2015-01-01

    Roč. 32, č. 1 (2015), s. 69-81 ISSN 0167-8094 R&D Projects: GA AV ČR IAA100190902 Institutional support: RVO:67985840 Keywords : forcing * Boolean algebras * base tree Subject RIV: BA - General Mathematics Impact factor: 0.614, year: 2015 http://link.springer.com/article/10.1007/s11083-013-9316-2

  3. unsymmetrical Schiff base complexes

    Indian Academy of Sciences (India)

    the effect of the substitutional groups of the Schiff base on the oxidation and reduction potentials, we used ... Electrochemistry of these complexes showed that the presence of electron .... a solution of the ligand (1 mmol) in methanol (15 mL).

  4. Home-based care

    African Journals Online (AJOL)

    Mrs. Patience Edoho Samson-Akpan

    study was to ascertain the relationship between home-based care and quality of life of PLWHA in support groups in. Calabar South Local Government Area. A correlational design was utilized and a purposive sample of 74 PLWHA participated in the study. A self developed and well validated questionnaire was used for data ...

  5. Mutually unbiased bases

    Indian Academy of Sciences (India)

    Mutually unbiased bases play an important role in quantum cryptography [2] and in the optimal determination of the density operator of an ensemble [3,4]. A density operator ρ in N dimensions depends on N^2 - 1 real quantities. With the help of MUBs, any such density operator can be encoded, in an optimal way, in terms of ...
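
    For N = 2 the property is easy to verify numerically: the eigenbases of the Pauli Z, X and Y operators are pairwise mutually unbiased, i.e. every cross-basis overlap obeys |<e_i|f_j>|^2 = 1/N. A small verification sketch (not taken from the article itself):

        # Hedged sketch: check mutual unbiasedness of the three Pauli eigenbases.
        import itertools
        import numpy as np

        s = 1 / np.sqrt(2)
        bases = {
            "Z": [np.array([1, 0], complex), np.array([0, 1], complex)],
            "X": [np.array([s, s], complex), np.array([s, -s], complex)],
            "Y": [np.array([s, 1j * s], complex), np.array([s, -1j * s], complex)],
        }

        for (a, A), (b, B) in itertools.combinations(bases.items(), 2):
            overlaps = [abs(np.vdot(u, v)) ** 2 for u in A for v in B]
            print(a, b, np.allclose(overlaps, 0.5))   # True: every overlap equals 1/N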

  6. Scenario-based strategizing

    DEFF Research Database (Denmark)

    Lehr, Thomas; Lorenz, Ullrich; Willert, Markus

    2017-01-01

    For over 40 years, scenarios have been promoted as a key technique for forming strategies in uncertain environments. However, many challenges remain. In this article, we discuss a novel approach designed to increase the applicability of scenario-based strategizing in top management teams. Drawi... Ministry) and a firm affected by disruptive change (Bosch, leading global supplier of technology and solutions).

  7. 80537 based distance relay

    DEFF Research Database (Denmark)

    Pedersen, Knud Ole Helgesen

    1999-01-01

    A method for implementing a digital distance relay in the power system is described. Instructions are given on how to program this relay on an 80537 based microcomputer system. The problem is used as a practical case study in the course 53113: Microcomputer applications in the power system. The relay...

  8. Mojave Base Station Implementation

    Science.gov (United States)

    Koscielski, C. G.

    1984-01-01

    A 12.2 meter diameter X-Y mount antenna was reconditioned for use by the Crustal Dynamics Project as a fixed base station. System capabilities and characteristics and key performance parameters for subsystems are presented. The implementation is complete.

  9. Model-based consensus

    NARCIS (Netherlands)

    Boumans, M.; Martini, C.; Boumans, M.

    2014-01-01

    The aim of the rational-consensus method is to produce "rational consensus", that is, "mathematical aggregation", by weighing the performance of each expert on the basis of his or her knowledge and ability to judge relevant uncertainties. The measurement of the performance of the experts is based on

  10. Model-based consensus

    NARCIS (Netherlands)

    Boumans, Marcel

    2014-01-01

    The aim of the rational-consensus method is to produce “rational consensus”, that is, “mathematical aggregation”, by weighing the performance of each expert on the basis of his or her knowledge and ability to judge relevant uncertainties. The measurement of the performance of the experts is based on

  11. Animation-based Sketching

    DEFF Research Database (Denmark)

    Vistisen, Peter

    This thesis is based on the results of a three-year PhD study at the Department of Communication and Psychology at Aalborg University. The thesis consists of five original papers, a book manuscript, as well as a linking text with the thesis' research questions, research design, and summary...

  12. Refractive index based measurements

    DEFF Research Database (Denmark)

    2014-01-01

    In a method for performing a refractive index based measurement of a property of a fluid such as chemical composition or temperature, a chirp in the local spatial frequency of interference fringes of an interference pattern is reduced by mathematical manipulation of the recorded light intensity...

  13. Performance based fault diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2002-01-01

    Different aspects of fault detection and fault isolation in closed-loop systems are considered. It is shown that using the standard setup known from feedback control, it is possible to formulate fault diagnosis problems based on a performance index in this general standard setup. It is also shown...

  14. Schiff base ligand

    Indian Academy of Sciences (India)

    Unknown

    Low-temperature stoichiometric Schiff base reaction in air in 3 : 1 mole ratio between benzaldehyde and triethylenetetramine (trien) in methanol yields a novel tetraaza µ-bis(bidentate) acyclic ligand L. It was .... electrochemical work was performed as reported in ..... change in ligand shape through change in oxidation.

  15. ISFET based enzyme sensors

    NARCIS (Netherlands)

    van der Schoot, Bart H.; Bergveld, Piet

    1987-01-01

    This paper reviews the results that have been reported on ISFET based enzyme sensors. The most important improvement that results from the application of ISFETs instead of glass membrane electrodes is in the method of fabrication. Problems with regard to the pH dependence of the response and the

  16. Microcontroller base process emulator

    OpenAIRE

    Jovrea Titus Claudiu

    2009-01-01

    This paper describes the design of a microcontroller-based emulator for a conventional industrial process. The emulator is built around a microcontroller and is used for testing and evaluating the performance of industrial regulators. The parameters of the emulated process are fully customizable online and downloadable through a serial communication link from a personal computer.

  17. REST based service composition

    DEFF Research Database (Denmark)

    Grönvall, Erik; Ingstrup, Mads; Pløger, Morten

    2011-01-01

    This paper presents ongoing work developing and testing a Service Composition framework based upon the REST architecture, named SECREST. A minimalistic approach has been favored instead of creating a complete infrastructure. One focus has been on the system's interaction model. Indeed, an aim...

  18. Convolution based profile fitting

    International Nuclear Information System (INIS)

    Kern, A.; Coelho, A.A.; Cheary, R.W.

    2002-01-01

    Full text: In convolution based profile fitting, profiles are generated by convoluting functions together to form the observed profile shape. For a convolution of n functions this process can be written as Y(2θ) = F_1(2θ) ⊗ F_2(2θ) ⊗ ... ⊗ F_i(2θ) ⊗ ... ⊗ F_n(2θ), where ⊗ denotes convolution. In powder diffractometry the functions F_i(2θ) can be interpreted as the aberration functions of the diffractometer, but in general any combination of appropriate functions for F_i(2θ) may be used in this context. Most direct convolution fitting methods are restricted to combinations of F_i(2θ) that can be convoluted analytically (e.g. GSAS), such as Lorentzians, Gaussians, the hat (impulse) function and the exponential function. However, software such as TOPAS is now available that can accurately convolute and refine a wide variety of profile shapes numerically, including user-defined profiles, without the need to convolute analytically. Some of the most important advantages of modern convolution based profile fitting are: 1) virtually any peak shape and angle dependence can normally be described using minimal profile parameters in laboratory and synchrotron X-ray data as well as in CW and TOF neutron data; this is possible because numerical convolution and numerical differentiation are used within the refinement procedure, so that a wide range of functions can easily be incorporated into the convolution equation; 2) it can use physically based diffractometer models by convoluting the instrument aberration functions. This can be done for most laboratory-based X-ray powder diffractometer configurations, including conventional divergent beam instruments, parallel beam instruments, and diffractometers used for asymmetric diffraction. It can also accommodate various optical elements (e.g. multilayers and monochromators) and detector systems (e.g. point and position sensitive detectors) and has already been applied to neutron powder diffraction systems (e.g. ANSTO) as well as synchrotron based
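
    The numerical route mentioned above can be illustrated in a few lines: convoluting a Gaussian (standing in for an instrument aberration) with a Lorentzian (sample broadening) on a discrete 2θ grid yields the observed profile. The widths and grid below are arbitrary illustrative choices, not TOPAS internals:

        # Hedged sketch: numerical convolution of two profile functions on a
        # 2-theta grid, in the spirit of Y(2θ) = F_1(2θ) ⊗ F_2(2θ) ⊗ ... ⊗ F_n(2θ).
        import numpy as np

        two_theta = np.arange(-5.0, 5.0, 0.01)           # degrees, centred on the peak
        gauss = np.exp(-0.5 * (two_theta / 0.10) ** 2)    # instrument contribution
        lorentz = 1.0 / (1.0 + (two_theta / 0.05) ** 2)   # sample broadening

        # Discrete convolution; 'same' keeps the grid, the step renormalises the area
        profile = np.convolve(gauss, lorentz, mode="same") * 0.01
        profile /= profile.max()

        above_half = two_theta[profile >= 0.5]
        fwhm = above_half.max() - above_half.min()
        print(f"convolved FWHM of the observed profile: {fwhm:.2f} deg 2-theta")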

  19. Integrated Case Based and Rule Based Reasoning for Decision Support

    OpenAIRE

    Eshete, Azeb Bekele

    2009-01-01

    This project is a continuation of my specialization project, which focused on studying theoretical concepts related to the case based reasoning method, the rule based reasoning method, and their integration. The integration of rule-based and case-based reasoning methods has shown a substantial improvement in performance over the individual methods. Verdande Technology AS wants to try integrating the rule based reasoning method with an existing case based system. This project focu...

  20. Rock properties data base

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, R.; Gorski, B.; Gyenge, M.

    1991-03-01

    As mining companies proceed deeper and into areas whose stability is threatened by high and complex stress fields, the science of rock mechanics becomes invaluable in designing underground mine strata control programs. CANMET's Mining Research Laboratories division has compiled a summary of pre- and post-failure mechanical properties of rock types which were tested to provide design data. The 'Rock Properties Data Base' presents the results of these tests, and includes many rock types typical of Canadian mine environments. The data base also contains 'm' and 's' values determined using Hoek and Brown's failure criteria for both pre- and post-failure conditions. 7 refs., 3 tabs., 9 figs., 1 append.
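
    For context, the 'm' and 's' constants appear in the original Hoek and Brown failure criterion, sigma_1 = sigma_3 + sqrt(m * sigma_c * sigma_3 + s * sigma_c^2), which relates the principal stresses at failure to the uniaxial compressive strength sigma_c. A one-function sketch with illustrative numbers (not values from the CANMET data base):

        # Hedged sketch: original Hoek-Brown criterion using 'm' and 's' values.
        import math

        def hoek_brown_sigma1(sigma3, sigma_c, m, s):
            """Peak major principal stress (MPa) at failure."""
            return sigma3 + math.sqrt(m * sigma_c * sigma3 + s * sigma_c ** 2)

        # Illustrative inputs only: intact, granite-like rock at 10 MPa confinement
        print(f"{hoek_brown_sigma1(sigma3=10.0, sigma_c=150.0, m=25.0, s=1.0):.1f} MPa")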

  1. Sparse approximation with bases

    CERN Document Server

    2015-01-01

    This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications.  The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...

  2. Problem Based Game Design

    DEFF Research Database (Denmark)

    Reng, Lars; Schoenau-Fog, Henrik

    2011-01-01

    At Aalborg University's department of Medialogy, we are utilizing the Problem Based Learning method to encourage students to solve game design problems by pushing the boundaries and designing innovative games. This paper is concerned with describing this method, how students employ it in various projects, and how they learn to analyse, design, and develop for innovation by using it. We will present various cases to exemplify the approach and focus on how the method engages students and aspires for innovation in digital entertainment and games.

  3. Technology based Education System

    DEFF Research Database (Denmark)

    Kant Hiran, Kamal; Doshi, Ruchi; Henten, Anders

    2016-01-01

    Abstract - Education plays a very important role in the development of a country. Education has multiple dimensions, from schooling to higher education and research. In all these domains, technology based teaching and learning tools are in high demand in academic institutions. Thus, there is a need for a comprehensive technology support system to cater to the demands of all educational actors. Cloud Computing is one such comprehensive and user-friendly technology support environment, and it is the need of the hour. Cloud computing is the emerging technology that has...

  4. Knowledge based maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Sturm, A [Hamburgische Electacitaets-Werke AG Hamburg (Germany)

    1998-12-31

    The establishment of maintenance strategies is of crucial significance for the reliability of a plant and the economic efficiency of maintenance measures. Knowledge about the condition of components and plants, from both the technical and the business management point of view, therefore becomes one of the fundamental questions and the key to efficient management and maintenance. A new way to determine the maintenance strategy can be called Knowledge Based Maintenance. A simple method for determining strategies, one which takes the technical condition of the components of the production process into account to the greatest possible degree, can be shown. Software with an algorithm for Knowledge Based Maintenance leads the user through this complex work to the determination of maintenance strategies for complex plant components. (orig.)

  5. Knowledge based maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Sturm, A. [Hamburgische Electacitaets-Werke AG Hamburg (Germany)

    1997-12-31

    The establishment of maintenance strategies is of crucial significance for the reliability of a plant and the economic efficiency of maintenance measures. Knowledge about the condition of components and plants, from both the technical and the business management point of view, therefore becomes one of the fundamental questions and the key to efficient management and maintenance. A new way to determine the maintenance strategy can be called Knowledge Based Maintenance. A simple method for determining strategies, one which takes the technical condition of the components of the production process into account to the greatest possible degree, can be shown. Software with an algorithm for Knowledge Based Maintenance leads the user through this complex work to the determination of maintenance strategies for complex plant components. (orig.)

  6. Conducting Polymer Based Nanobiosensors

    Directory of Open Access Journals (Sweden)

    Chul Soon Park

    2016-06-01

    Full Text Available In recent years, conducting polymer (CP) nanomaterials have been used in a variety of fields, such as in energy, environmental, and biomedical applications, owing to their outstanding chemical and physical properties compared to conventional metal materials. In particular, nanobiosensors based on CP nanomaterials exhibit excellent performance in sensing target molecules. The performance of CP nanobiosensors varies with their size, shape, conductivity, and morphology, among other characteristics. Therefore, in this review, we provide an overview of the techniques commonly used to fabricate novel CP nanomaterials and their biosensor applications, including aptasensors, field-effect transistor (FET) biosensors, human-sense-mimicking biosensors, and immunoassays. We also discuss prospects for state-of-the-art nanobiosensors using CP nanomaterials by focusing on strategies to overcome the current limitations.

  7. Fusion safety data base

    International Nuclear Information System (INIS)

    Laats, E.T.; Hardy, H.A.

    1983-01-01

    The purpose of this Fusion Safety Data Base Program is to provide a repository of data for the design and development of safe commercial fusion reactors. The program is sponsored by the United States Department of Energy (DOE), Office of Fusion Energy. The function of the program is to collect, examine, permanently store, and make available the safety data to the entire US magnetic-fusion energy community. The sources of data will include domestic and foreign fusion reactor safety-related research programs. Any participant in the DOE Program may use the Data Base Program from his terminal through user friendly dialog and can view the contents in the form of text, tables, graphs, or system diagrams

  8. Maintaining Relationship Based Procurement

    Directory of Open Access Journals (Sweden)

    Peter Davis

    2012-11-01

    Full Text Available Alliance and relationship projects are increasing in number and represent a large pool of work. To be successful, relationship style contracts depend on soft-dollar factors, particularly the participants' ability to work together within an agreed framework; generally they are not based on low-bid tendering. Participants should be prepared to do business in an open environment based on trust and mutually agreed governance. The research evaluates relationship maintenance in the implementation phase of construction alliances - a particular derivative of relationship style contracts. To determine the factors that contribute to relationship maintenance, forty-nine experienced Australian alliance project managers were interviewed. The main findings were: the development of relationships early in the project forms the building blocks of success from which relationships are maintained and project value added; quality facilitation plays an important part in relationship maintenance; and a hybrid organisation created as a result of alliance development overcomes destructive organisational boundaries. Relationship maintenance is integral to alliance project control, and failure to formalise it and pay attention to process and past outcomes will undermine an alliance project's potential for success.

  9. Location-based Scheduling

    DEFF Research Database (Denmark)

    Andersson, Niclas; Christensen, Knud

    on the market. However, CPM is primarily an activity based method that takes the activity as the unit of focus, and criticism has been raised, specifically in the case of construction projects, that the method manages construction work and the continuous flow of resources poorly. To seek solutions to the identified limitations of the CPM method, an alternative planning and scheduling methodology that includes locations is tested. Location-based Scheduling (LBS) implies a shift in focus, from primarily the activities to the flow of work through the various locations of the project, i.e. the building. LBS uses the graphical presentation technique of Line-of-balance, which is adapted for planning and management of work-flows and facilitates resources performing their work without interruptions caused by other resources working on other activities in the same location. As such, LBS and Lean Construction share

  10. Trajectory Based Traffic Analysis

    DEFF Research Database (Denmark)

    Krogh, Benjamin Bjerre; Andersen, Ove; Lewis-Kelham, Edwin

    2013-01-01

    We present the INTRA system for interactive path-based traffic analysis. The analyses are developed in collaboration with traffic researchers and provide novel insights into conditions such as congestion, travel-time, choice of route, and traffic-flow. INTRA supports interactive point-and-click analysis, due to a novel and efficient indexing structure. With the web-site daisy.aau.dk/its/spqdemo/ we will demonstrate several analyses, using a very large real-world data set consisting of 1.9 billion GPS records (1.5 million trajectories) recorded from more than 13000 vehicles, and touching most...

  11. Carbon Nanotube based Nanotechnolgy

    Science.gov (United States)

    Meyyappan, M.

    2000-10-01

    Carbon nanotube (CNT) was discovered in the early 1990s and is an offspring of C60 (the fullerene or buckyball). CNT, depending on chirality and diameter, can be metallic or semiconducting and thus allows formation of metal-semiconductor and semiconductor-semiconductor junctions. CNT exhibits extraordinary electrical and mechanical properties and offers remarkable potential for revolutionary applications in electronic devices, computing and data storage technology, sensors, composites, storage of hydrogen or lithium for battery development, nanoelectromechanical systems (NEMS), and as tips in scanning probe microscopy (SPM) for imaging and nanolithography. Thus CNT synthesis, characterization and applications touch upon all disciplines of science and engineering. A common growth method now is based on CVD, though surface catalysis is key to synthesis, in contrast to many CVD applications common in microelectronics. A plasma based variation is gaining some attention. This talk will provide an overview of CNT properties, growth methods, applications, and the research challenges and opportunities ahead.

  12. Spintronics-based computing

    CERN Document Server

    Prenat, Guillaume

    2015-01-01

    This book provides a comprehensive introduction to spintronics-based computing for the next generation of ultra-low power/highly reliable logic, which is widely considered a promising candidate to replace conventional, pure CMOS-based logic. It will cover aspects from device to system-level, including magnetic memory cells, device modeling, hybrid circuit structure, design methodology, CAD tools, and technological integration methods. This book is accessible to a variety of readers, and little or no background in magnetism and spin electronics is required to understand its content. The multidisciplinary team of expert authors from circuits, devices, computer architecture, CAD and system design reveal to readers the potential of spintronics nanodevices to reduce power consumption, improve reliability and enable new functionality.

  13. Knowledge Based Economy Assessment

    OpenAIRE

    Madalina Cristina Tocan

    2012-01-01

    The importance of the knowledge-based economy (KBE) in the XXI century is evident. In the article the reflection of knowledge on the economy is analyzed. The main focus is the analysis of the characteristics of knowledge expression in the economy and the construction of a structure of KBE expression. This allows understanding the mechanism of functioning of the knowledge economy. The authors highlight the possibility to assess the penetration level of KBE, which could manifest itself through the exist...

  14. Luxury-based Growth

    OpenAIRE

    Shiro Kuwahara

    2006-01-01

    Assuming that there exists a preference for luxury goods and a knowledge spillover from luxury goods production to goods production, this paper constructs an endogenous economic growth model. The model predicts two steady states: one is a steady positive growth state with regard to luxury goods production, and the other is a zero growth state in the absence of luxury goods production. Thus, this study examines the polarization of economies based on luxury goods consumption

  15. Base Stability of Aminocyclopropeniums

    Science.gov (United States)

    2017-11-01

    ...fuel cells to test their utility in anion exchange membranes. While the aminocyclopropeniums showed poor base stability, the cyclopropenium cation...

  16. Granular loess classification based

    International Nuclear Information System (INIS)

    Browzin, B.S.

    1985-01-01

    This paper discusses how loess might be identified by two index properties: the granulometric composition and the dry unit weight. These two indices are necessary but not always sufficient for identification of loess. On the basis of analyses of samples from three continents, it was concluded that the 0.01-0.5-mm fraction deserves the name loessial fraction. Based on the loessial fraction concept, a granulometric classification of loess is proposed. A triangular chart is used to classify loess

  17. Educational Process Material Base

    OpenAIRE

    Olga Ozerova; Irina Zabaturina; Vera Kuznetsova; Galina Kovaleva

    2012-01-01

    Based on data obtained by the Institute for Statistical Studies and the Economics of Knowledge, National Research University - Higher School of Economics. Olga Ozerova - Head of the Department for Statistics of Education, Institute for Statistical Studies and the Economics of Knowledge, National Research University - Higher School of Economics, Moscow, Russian Federation. Address: 18 Myasnitskaya St., Moscow, 101000, Russian Federation. Irina Zabaturina - senior resea...

  18. Supramolecular fluorene based materials

    OpenAIRE

    Abbel, R.J.

    2008-01-01

    This thesis describes the use of noncovalent interactions to manipulate and control the self-assembly and morphology of electroactive fluorene-based materials. The supramolecular arrangement of π-conjugated polymers and oligomers can strongly influence their electronic and photophysical properties. Therefore, a detailed understanding of such organisation processes is essential for optimising the performance of these materials as applied in optoelectronic devices. In order to...

  19. Graphene-based Nanoelectronics

    Science.gov (United States)

    2013-02-01

    Electrodes were fabricated by drop casting solutions containing the graphene oxide (GO)/CNT/MnAc materials onto titanium (Ti) or stainless steel current... silicon carbide (SiC) substrate can induce a splitting of up to 0.3 eV between the maximum of the valence and minimum of the conduction bands at the... simultaneously hinders the formation of multilayer graphene domains. These results are based on a diffusion-segregation model for carbon precipitation on a Ni

  20. Spiritual-based Leadership

    DEFF Research Database (Denmark)

    Pruzan, Peter

    2015-01-01

    Although far from mainstream, the concept of spiritual-based leadership is emerging as an inclusive and yet highly personal approach to leadership that integrates a leader’s inner perspectives on identity, purpose, responsibility and success with her or his decisions and actions in the outer world... of business—and therefore it is also emerging as a significant framework for understanding, practicing, communicating and teaching the art and profession of leadership.

  1. Arduino based laser control

    OpenAIRE

    Bernal Muñoz, Ferran

    2015-01-01

    Arduino is a very useful platform for prototypes. In this project Arduino is used for controlling a semiconductor tunable laser. [ENGLISH] Diode laser for optical communications, controlled via an Arduino board. Temperature control implementation. Software and hardware protection for the laser. [SPANISH, translated] Control of an optical communications laser from a computer using an Arduino board. Implementation of temperature control and software and hardware protection...

  2. Design bases - Concrete structures

    International Nuclear Information System (INIS)

    Diaz-Llanos Ros, M.

    1993-01-01

    The most suitable title for Section 2 is 'Design Bases', which covers not only calculation but also the following areas: - Structural design concepts. - Project criteria. - Material specifications. These concepts are developed in more detail in the following sections. The numbering in this document is neither complete nor hierarchical since, for easier cross referencing, it corresponds to the paragraphs of Eurocode 2 Part 1 (hereinafter 'EUR-2') which are commented on. (author)

  3. Biosphere data base revision

    International Nuclear Information System (INIS)

    Bergstroem, U.; Andersson, K.; Sundblad, B.

    1985-12-01

    The turnover of long-lived radionuclides in the biosphere was modelled some time ago and the exposure to man was calculated. The nuclides were long-lived actinides and fission products leaking from a simulated deep rock repository for spent nuclear fuel. The data base for these calculations has been updated in the present work, and in addition a number of nuclides that were not included in the earlier work have been treated. (G.B.)

  4. Air Force Smart Bases

    Science.gov (United States)

    2017-10-19

    ...initiates notification to all personnel on the base, the giant voice announces a lockdown, everyone's smart device shows an alarm requesting... location of the detected sound, they easily find a hunter and send his picture back to the IOC, where the hunter's identity is verified through facial... computer goes into sleep mode, the thermostat goes back to unoccupied mode and his door locks as he walks through. Meanwhile over in the IOC

  5. Polypeptide based hydrogels

    OpenAIRE

    Hanay, Saltuk

    2018-01-01

    There is a need for biocompatible, biodegradable, 3D-printable and stable hydrogels, especially in the areas of tissue engineering, drug delivery, bio-sensing technologies and antimicrobial coatings. The main aim of this Ph.D. work was to fabricate polypeptide-based hydrogels which may find potential applications in those fields. Focusing on tyrosine- or tryptophan-containing copolypeptides prepared by N-carboxyanhydride (NCA) polymerizations, three different crosslinking strategies have been t...

  6. Knowledge-based utility

    International Nuclear Information System (INIS)

    Chwalowski, M.

    1997-01-01

    This presentation provides industry examples of successful marketing practices by companies facing deregulation and competition. The common thread through the examples is that the long-term survival of today's utility structure depends on the strategic role of knowledge. As opposed to regulated monopolies, which usually own huge physical assets and have very little intelligence about their customers, unregulated enterprises tend to be knowledge-based, characterized by a higher market value than book value. A knowledge-based enterprise gathers data, creates information and develops knowledge, leveraging it as a competitive weapon. It institutionalizes human knowledge as a corporate asset for repeated use through databases, computer networks, patents, billing, collection and customer services (BCCS), branded interfaces and management capabilities. Activities to become knowledge-based, such as replacing inventory/fixed assets with information about material usage to reduce expenditure and achieve more efficient operations, and focusing on integration and value-adding delivery capabilities, were reviewed.

  7. Microbead agglutination based assays

    KAUST Repository

    Kodzius, Rimantas

    2013-01-21

    We report a simple and rapid room-temperature assay for point-of-care (POC) testing that is based on specific agglutination. Agglutination tests are based on the aggregation of microbeads in the presence of a specific analyte, thus enabling macroscopic observation. Such tests are most often used to explore antibody-antigen reactions. Agglutination has been used for protein assays using a biotin/streptavidin system as well as for a hybridization-based assay. Agglutination systems are prone to self-termination of the linking analyte, to active-site saturation, and to loss of agglomeration at high analyte concentrations. We investigated the molecular target/ligand interaction, explaining the common agglutination problems related to analyte self-termination and linkage of the analyte to the same bead instead of different microbeads. We classified the agglutination process into three kinds of assays: a two-component assay, a three-component assay and a stepped three-component assay. Although we compared these three kinds of assays for recognizing DNA and protein molecules, the assay can be used for virtually any molecule, including ions and metabolites. In total, the optimized assay permits detecting analytes with high sensitivity in a short time, 5 min, at room temperature. Such a system is appropriate for POC testing.

  8. As bases do petismo

    Directory of Open Access Journals (Sweden)

    David Samuels

    2004-10-01

    Full Text Available Based on data from the 2002 Brazilian Electoral Study (ESEB), the author analyzes the electoral bases of the Workers' Party (PT) and hypotheses about the nature of "petismo". Using multivariate statistical techniques, the relationships between "petismo" and demographic, socioeconomic and issue-related political variables are tested. The results indicate that only the level of education has a specific association with "petismo", with clear implications for its social and political behavior.

  9. Value-based genomics.

    Science.gov (United States)

    Gong, Jun; Pan, Kathy; Fakih, Marwan; Pal, Sumanta; Salgia, Ravi

    2018-03-20

    Advancements in next-generation sequencing have greatly enhanced the development of biomarker-driven cancer therapies. The affordability and availability of next-generation sequencers have allowed for the commercialization of next-generation sequencing platforms that have found widespread use for clinical-decision making and research purposes. Despite the greater availability of tumor molecular profiling by next-generation sequencing at our doorsteps, the achievement of value-based care, or improving patient outcomes while reducing overall costs or risks, in the era of precision oncology remains a looming challenge. In this review, we highlight available data through a pre-established and conceptualized framework for evaluating value-based medicine to assess the cost (efficiency), clinical benefit (effectiveness), and toxicity (safety) of genomic profiling in cancer care. We also provide perspectives on future directions of next-generation sequencing from targeted panels to whole-exome or whole-genome sequencing and describe potential strategies needed to attain value-based genomics.

  10. Nanoplatform-based molecular imaging

    National Research Council Canada - National Science Library

    Chen, Xiaoyuan

    2011-01-01

    "Nanoplathform-Based Molecular Imaging provides rationale for using nanoparticle-based probes for molecular imaging, then discusses general strategies for this underutilized, yet promising, technology...

  11. Error correction using RS-LDPC codes [Διόρθωση λαθών με τη χρήση κωδίκων RS-LDPC]

    OpenAIRE

    Γκίκα, Ζαχαρούλα

    2012-01-01

    Today, almost all telecommunication systems intended for high-rate data transmission have adopted error-correcting codes to increase their reliability and reduce their required transmission power. These codes make it possible to detect and correct the errors that the transmission medium (the channel) may introduce into a piece of information carried over the telecommunication network. One category of such codes, in fact ...

  12. Polyolefin-Based Aerogels

    Science.gov (United States)

    Lee, Je Kyun; Gould, George

    2012-01-01

    An organic polybutadiene (PB) rubber-based aerogel insulation material was developed that provides superior thermal insulation and inherent radiation protection, exhibiting the flexibility, resiliency, toughness, and durability typical of the parent polymer, yet with the low density and superior insulation properties associated with aerogels. The rubbery behavior of the PB rubber-based aerogels overcomes the weak and brittle nature of conventional inorganic and organic aerogel insulation materials. Additionally, with a higher hydrogen content in their structure, the PB rubber aerogels also provide inherently better radiation protection than inorganic and carbon aerogels. Since PB rubber aerogels also exhibit good hydrophobicity due to their hydrocarbon molecular structure, they offer better performance reliability and durability, as well as simpler, more economical, and environmentally friendly production than conventional silica or other inorganic-based aerogels, which require chemical treatment to make them hydrophobic. Inorganic aerogels such as silica aerogels demonstrate many unusual and useful properties. There are several strategies for overcoming the drawbacks associated with the weakness and brittleness of silica aerogels. Development of the flexible fiber-reinforced silica aerogel composite blanket has proven one promising approach, providing a conveniently fielded form factor that is relatively robust toward handling in industrial environments compared to silica aerogel monoliths. However, the flexible silica aerogel composites still have a brittle, dusty character that may be undesirable, or even intolerable, in certain applications. Although cross-linked organic aerogels such as resorcinol-formaldehyde (RF), polyisocyanurate, and cellulose aerogels show very high impact strength, they are also very brittle with little elongation (i.e., less rubbery). Also, silica and carbon aerogels are less efficient

  13. Characteristics Data Base

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, E.D.; Moore, R.S. (Automated Sciences Group, Inc., Oak Ridge, TN (USA))

    1990-08-01

    The LWR Serial Numbers Database System (SNDB) contains detailed data about individual, historically discharged LWR spent fuel assemblies. This data includes the reactor where used, the year the assemblies were discharged, the pool where they are currently stored, assembly type, burnup, weight, enrichment, and an estimate of their radiological properties. This information is distributed on floppy disks to users in the nuclear industry to assist in planning for the permanent nuclear waste repository. This document describes the design and development of the SNDB. It provides a complete description of the file structures and an outline of the major code modules. It serves as a reference for a programmer maintaining the system, or for others interested in the technical detail of this database. This is the initial version of the SNDB. It contains historical data through December 31, 1987, obtained from the Energy Information Administration (EIA). EIA obtains the data from the utility companies via the RW-859 Survey Form. It evaluates and standardizes the data and distributes the resulting batch level database as a large file on magnetic tape. The Characteristics Data Base obtains this database for use in the LWR Quantities Data Base. Additionally, the CDB obtains the individual assembly level detail from EIA for use in the SNDB. While the Quantities Data Base retains only the level of detail necessary for its reporting, the SNDB does retain and use the batch level data to assist in the identification of a particular assembly serial number. We expect to update the SNDB on an annual basis, as new historical data becomes available.

  14. Vision-based interaction

    CERN Document Server

    Turk, Matthew

    2013-01-01

    In its early years, the field of computer vision was largely motivated by researchers seeking computational models of biological vision and solutions to practical problems in manufacturing, defense, and medicine. For the past two decades or so, there has been an increasing interest in computer vision as an input modality in the context of human-computer interaction. Such vision-based interaction can endow interactive systems with visual capabilities similar to those important to human-human interaction, in order to perceive non-verbal cues and incorporate this information in applications such

  15. Knowledge based Entrepreneurship

    DEFF Research Database (Denmark)

    Heebøll, John

    This book is dedicated to enterprising people with a technical or scientific background who consider commercializing ideas and inventions within their field of expertise via a new business activity or a new company. It aims at distilling experiences from many successful and not so successful start-up ventures from the Technical University of Denmark, 1988-2008, into practical, portable knowledge that can be used by future knowledge-based entrepreneurs to set up new companies efficiently or to stay away from doing so; to do what's needed and avoid the pitfalls.

  16. Polymerization Using Phosphazene Bases

    KAUST Repository

    Zhao, Junpeng

    2015-09-01

    In the recent rise of metal-free polymerization techniques, organic phosphazene superbases have shown their remarkable strength as promoters/catalysts for the anionic polymerization of various types of monomers. Generally, the complexation of a phosphazene base with the counterion (proton or lithium cation) significantly improves the nucleophilicity of the initiator/chain end, resulting in highly enhanced polymerization rates compared with conventional metal-based initiating systems. In this chapter, the general features of phosphazene-promoted/catalyzed polymerizations and their applications in macromolecular engineering (synthesis of functionalized polymers, block copolymers, and macromolecular architectures) are discussed, with challenges and perspectives pointed out.

  17. Web Based Customized Design

    OpenAIRE

    Moi, Morten Benestad

    2013-01-01

    This thesis studies the methods needed to create a web-based application to remotely customize a CAD model. This includes customizing a CAD model by using a graphical user interface to remotely control the inputs to and outputs from the model in NX, and to get the result sent back to the user. Using CAD systems such as NX requires intensive training, is often a slow process and leaves a lot of room for errors. An intuitive, simple user interface will eliminate the need for CAD trai...

  18. Location-based games

    DEFF Research Database (Denmark)

    Ejsing-Duun, Stine

    In this dissertation, it is explored which prerequisites are necessary in location-based games (LBGs) to make the meeting between players and spatiality meaningful, with an emphasis on physical locations. Throughout the dissertation, it has been shown that LBGs affect players' perception of and be... possible. The practical contribution is my creation of the LBG Visions of Sara. People continue to play this game in Odense more than two years after its launch, and DJEEO uses it as a showcase, enabling the company to sell similar LBGs.

  19. LIGHTWEIGHT CONCRETE BASED ON GRANSHLAK

    Directory of Open Access Journals (Sweden)

    NETESA M. I.

    2016-02-01

    Full Text Available Problem statement. It is advisable to produce low-strength concrete from local secondary industrial resources, both to recycle them and to reduce the burden on the environment. It is important, however, to design such concrete compositions with a reduced cement consumption. It is known that the coefficient of efficiency of cement use in heavy concrete of class B10 and below is about 0.5, almost two times smaller than in concrete of class B15 and above; in low-strength lightweight concrete, the coefficient of efficiency of cement is even lower. It is therefore important to find the patterns determining the composition of lightweight concretes, based on local industrial by-products, that use cement more efficiently. Purpose. Based on the analysis of earlier research results, including those obtained with methods of mathematical planning of experiments, to determine concrete compositions that meet the requirements for the underlying layers of floors, whose compressive strength should correspond to class B5. It is important to provide the required strength at the minimum consumption of cement, which is the most expensive and energy-intensive component of concrete. Conclusion. Analysis of the test results of control concrete samples at 28 days revealed the following patterns. The required compressive strength of 7.0 MPa can be obtained, within the tested range, when either fly ash from the Dnieper hydroelectric power station or iron-ore tailings from the Krivoy Rog YuGOK is used as filler. To provide the required characteristic strength of the concrete in the underlying layers of floors, it is advisable to use a nominal composition per cubic meter of concrete of: 160 kg of cement, 675 kg of granshlak (granulated slag) from the Petrovsky Plant, 390 kg of fly ash from the Dnieper HPP, 400 kg of sand, and 230 liters of water. Thus, by ensuring a rational grain-size composition of the components, the desired strength of lightweight concrete based on granshlak from the Petrovsky Plant can be obtained, using as fillers

  20. Accelerator-based BNCT.

    Science.gov (United States)

    Kreiner, A J; Baldo, M; Bergueiro, J R; Cartelli, D; Castell, W; Thatar Vento, V; Gomez Asoia, J; Mercuri, D; Padulo, J; Suarez Sandin, J C; Erhardt, J; Kesque, J M; Valda, A A; Debray, M E; Somacal, H R; Igarzabal, M; Minsky, D M; Herrera, M S; Capoulat, M E; Gonzalez, S J; del Grosso, M F; Gagetti, L; Suarez Anzorena, M; Gun, M; Carranza, O

    2014-06-01

    The activity in accelerator development for accelerator-based BNCT (AB-BNCT), both worldwide and in Argentina, is described. Projects in Russia, UK, Italy, Japan, Israel, and Argentina to develop AB-BNCT around different types of accelerators are briefly presented. In particular, the present status and recent progress of the Argentine project is reviewed. The topics cover: intense ion sources, accelerator tubes, transport of intense beams, beam diagnostics, the (9)Be(d,n) reaction as a possible neutron source, Beam Shaping Assemblies (BSA), a treatment room, and treatment planning in realistic cases.

  1. Cellular based cancer vaccines

    DEFF Research Database (Denmark)

    Hansen, M; Met, Ö; Svane, I M

    2012-01-01

    Cancer vaccines designed to re-calibrate the existing host-tumour interaction, tipping the balance from tumor acceptance towards tumor control, hold huge potential to complement traditional cancer therapies. In general, limited success has been achieved with vaccines composed of tumor... to transiently affect in vitro migration via autocrine receptor-mediated endocytosis of CCR7. In the current review, we discuss the optimal design of DC maturation, focusing on pre-clinical as well as clinical results from standard and polarized dendritic cell based cancer vaccines.

  2. WAP - based telemedicine applications

    International Nuclear Information System (INIS)

    Hung, K.; Zhang, Y.T.

    2001-01-01

    Telemedicine refers to the utilization of telecommunication technology for medical diagnosis, treatment, and patient care. Its aim is to provide expert-based health care to remote sites through telecommunication and information technologies. Significant advances in technology have enabled the introduction of a broad range of telemedicine applications, which are supported by computer networks, wireless communication, and the information superhighway. For example, some hospitals are using tele-radiology for remote consultation. Such a system includes medical imaging devices networked with computers and databases. Another growing area is patient monitoring, in which sensors are used to acquire biomedical signals, such as the electrocardiogram (ECG), blood pressure, and body temperature, from a remote patient, who could be in bed or moving freely. The signals are then relayed to remote systems for viewing and analysis. Telemedicine can be divided into two basic modes of operation: real-time mode, in which the patient data can be accessed remotely in real time, and store-and-forward mode, in which the acquired data does not have to be accessed immediately. In recent years, many parties have demonstrated various telemedicine applications based on the Internet and cellular phones, as these two fields have been developing rapidly. A current, recognizable trend in telecommunication is the convergence of wireless communication and computer network technologies. This has been reflected in recently developed telemedicine systems. For example, in 1998 J. Reponen et al. demonstrated transmission and display of computerized tomography (CT) examinations using a remote portable computer wirelessly connected to a computer network through TCP/IP on a GSM cellular phone. Two years later, they carried out the same tests with a GSM-based wireless personal digital assistant (PDA). The WAP (Wireless Application Protocol) Forum was founded in 1997 to create a global protocol

  3. Mars base buildup scenarios

    International Nuclear Information System (INIS)

    Blacic, J.D.

    1985-01-01

    Two surface-base build-up scenarios are presented in order to help visualize the mission and to serve as a basis for trade studies. In the first scenario, direct manned landings on the Martian surface occur early in the missions, and scientific investigation is the main driver and rationale. In the second scenario, early development of an infrastructure to exploit the volatile resources of the Martian moons for economic purposes is emphasized. Scientific exploration of the surface is delayed at first but, once begun, develops rapidly, aided by the presence of a permanently manned orbital station

  4. Agent-Based Optimization

    CERN Document Server

    Jędrzejowicz, Piotr; Kacprzyk, Janusz

    2013-01-01

    This volume presents a collection of original research works by leading specialists focusing on novel and promising approaches in which the multi-agent system paradigm is used to support, enhance or replace traditional approaches to solving difficult optimization problems. The editors have invited several well-known specialists to present their solutions, tools, and models falling under the common denominator of the agent-based optimization. The book consists of eight chapters covering examples of application of the multi-agent paradigm and respective customized tools to solve  difficult optimization problems arising in different areas such as machine learning, scheduling, transportation and, more generally, distributed and cooperative problem solving.

  5. Refractive index based measurements

    DEFF Research Database (Denmark)

    2014-01-01

    In a method for performing a refractive-index-based measurement of a property of a fluid, such as chemical composition or temperature, by observing an apparent angular shift in an interference fringe pattern produced by back- or forward-scattering interferometry, ambiguities in the measurement, caused by the apparent shift being consistent with one of a number of numerical possibilities for the real shift which differ by 2π, are resolved by combining measurements performed on the same sample using light paths therethrough of differing lengths.
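
    The disambiguation idea reads naturally as a small search problem. The sketch below is a minimal illustration in Python, assuming the ambiguity is a 2π fringe-order ambiguity in phase and that the true phase scales linearly with path length; the wavelength, path lengths and tolerance are invented for the example and are not from the patent text.

```python
import numpy as np

# Minimal sketch: resolve the 2*pi fringe-order ambiguity by combining
# two light paths of different lengths through the same sample.

WAVELENGTH = 633e-9                 # probe wavelength (m), illustrative
K = 2 * np.pi / WAVELENGTH          # wavenumber

def wrapped_phase(delta_n, L):
    """The fringe pattern only reveals the phase modulo 2*pi."""
    return np.mod(delta_n * K * L, 2 * np.pi)

def resolve(phi1, phi2, L1, L2, max_order=20, tol=1e-6):
    """Pick the fringe orders that make both paths imply the same delta_n."""
    best_gap, best_dn = None, None
    for m1 in range(max_order):
        dn1 = (phi1 + 2 * np.pi * m1) / (K * L1)
        for m2 in range(max_order):
            dn2 = (phi2 + 2 * np.pi * m2) / (K * L2)
            gap = abs(dn1 - dn2)
            if gap < tol and (best_gap is None or gap < best_gap):
                best_gap, best_dn = gap, dn1
    return best_dn

# An index change large enough to wrap several fringes on both paths.
delta_n, L1, L2 = 3.7e-3, 1.0e-3, 1.7e-3
dn = resolve(wrapped_phase(delta_n, L1), wrapped_phase(delta_n, L2), L1, L2)
print(dn)   # ~3.7e-3, recovered although each path wraps several times
```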

  6. Sustainability Base Construction Update

    Science.gov (United States)

    Mewhinney, Michael

    2012-01-01

    Construction of the new Sustainability Base collaborative support facility, expected to become the highest-performing building in the federal government, continues at NASA's Ames Research Center, Moffett Field, Calif. The new building is designed to achieve a platinum rating under the Leadership in Energy and Environmental Design (LEED) new-construction standards for environmentally sustainable construction developed by the U.S. Green Building Council, Washington, D.C. When completed by the end of 2011, the $20.6 million building will feature near-zero net energy consumption, use 90 percent less potable water than conventionally built buildings of equivalent size, and will result in reduced building maintenance costs.

  7. Chitosan-based nanocomposites

    CSIR Research Space (South Africa)

    Kesavan Pillai, Sreejarani

    2012-08-01

    Full Text Available ...and hygiene devices. They thus represent a strong and emerging answer for improved and eco-friendly materials. This chapter reviews the recent developments in the area of chitosan-based nanocomposites, with a special emphasis on clay-containing nanocomposites... -sized mineral fillers like silica, talc, and clay are added to reduce the cost and improve chitosan's performance in some way. However, mechanical properties such as elongation at break and tensile strength of these composites decrease with the incorporation...

  8. Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach

    Science.gov (United States)

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor networks (WSN) explores energy-efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy-efficient cooperative MIMO (C-MIMO) technique is proposed in which a low-density parity-check (LDPC) code is used as the error-correcting code. The rate of the LDPC code is varied by varying the lengths of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC coding. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also, a lower encoding rate for the LDPC code offers better error characteristics. PMID:22163732
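
    The BER behaviour under Nakagami fading mentioned above is easy to probe in outline with a Monte Carlo experiment. The sketch below is a minimal illustration in Python for uncoded BPSK (the paper's LDPC-coded C-MIMO system is far more elaborate); the parameters are arbitrary, and m = 1 reduces to Rayleigh fading.

```python
import numpy as np

rng = np.random.default_rng(0)

def nakagami_amplitude(m, omega, size):
    """Nakagami-m fading amplitude: square root of a Gamma(m, omega/m) variate."""
    return np.sqrt(rng.gamma(shape=m, scale=omega / m, size=size))

def ber_bpsk_nakagami(snr_db, m, n_bits=200_000):
    """Monte Carlo BER of uncoded BPSK over flat Nakagami-m fading."""
    snr = 10.0 ** (snr_db / 10.0)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1 - 2 * bits                          # BPSK: 0 -> +1, 1 -> -1
    h = nakagami_amplitude(m, 1.0, n_bits)          # real fading gain
    noise = rng.normal(scale=np.sqrt(1.0 / (2.0 * snr)), size=n_bits)
    errors = ((h * symbols + noise) < 0) != (bits == 1)   # coherent detection
    return errors.mean()

for m in (0.5, 1.0, 3.0):   # m = 1 is Rayleigh; larger m means milder fading
    print(f"m = {m}: BER at 10 dB ~ {ber_bpsk_nakagami(10, m):.4f}")
```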

  9. Quasi Cyclic Low Density Parity Check Code for High SNR Data Transfer

    Directory of Open Access Journals (Sweden)

    M. R. Islam

    2010-06-01

    Full Text Available An improved quasi-cyclic low-density parity-check code (QC-LDPC) is proposed to reduce the complexity of the low-density parity-check code (LDPC) while obtaining similar performance. The proposed QC-LDPC presents an improved construction at high SNR with circulant sub-matrices. The proposed construction yields a performance gain of about 1 dB at a 0.0003 bit error rate (BER), and it is tested on 4 different decoding algorithms. The proposed QC-LDPC is compared with the existing QC-LDPC, and the simulation results show that the proposed approach outperforms the existing one at high SNR. Simulations are also performed varying the number of horizontal sub-matrices, and the results show that a parity-check matrix with smaller horizontal concatenation shows better performance.
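
    The circulant structure that gives QC-LDPC codes their low implementation complexity is easy to sketch. The snippet below is a minimal illustration in Python of assembling a parity-check matrix H from circulant permutation blocks; the base matrix, shift values and lifting factor are invented for the example and are not the construction proposed in the paper.

```python
import numpy as np

def circulant(shift, z):
    """z x z circulant permutation matrix: the identity, cyclically shifted."""
    return np.roll(np.eye(z, dtype=int), shift, axis=1)

def qc_ldpc_H(shifts, z):
    """Build H from a base matrix of shift values (-1 marks an all-zero block)."""
    blocks = [[circulant(s, z) if s >= 0 else np.zeros((z, z), dtype=int)
               for s in row] for row in shifts]
    return np.block(blocks)

# Illustrative 2 x 4 base matrix with lifting factor z = 5
# (design rate 1 - 2/4 = 1/2); the shift values are arbitrary examples.
shifts = [[0, 1, 2, -1],
          [3, -1, 4, 0]]
H = qc_ldpc_H(shifts, z=5)
print(H.shape)          # (10, 20): 10 checks on 20 code bits
print(H.sum(axis=0))    # column weights mirror the base-matrix structure
```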

  10. Droplet based microfluidics

    International Nuclear Information System (INIS)

    Seemann, Ralf; Brinkmann, Martin; Pfohl, Thomas; Herminghaus, Stephan

    2012-01-01

    Droplet based microfluidics is a rapidly growing interdisciplinary field of research combining soft matter physics, biochemistry and microsystems engineering. Its applications range from fast analytical systems or the synthesis of advanced materials to protein crystallization and biological assays for living cells. Precise control of droplet volumes and reliable manipulation of individual droplets such as coalescence, mixing of their contents, and sorting in combination with fast analysis tools allow us to perform chemical reactions inside the droplets under defined conditions. In this paper, we will review available drop generation and manipulation techniques. The main focus of this review is not to be comprehensive and explain all techniques in great detail but to identify and shed light on similarities and underlying physical principles. Since geometry and wetting properties of the microfluidic channels are crucial factors for droplet generation, we also briefly describe typical device fabrication methods in droplet based microfluidics. Examples of applications and reaction schemes which rely on the discussed manipulation techniques are also presented, such as the fabrication of special materials and biophysical experiments.

  11. DVD Based Electronic Pulser

    International Nuclear Information System (INIS)

    Morris, Scott J.; Pratt, Rick M.; Hughes, Michael A.; Kouzes, Richard T.; Pitts, W K.; Robinson, Eric E.

    2005-01-01

    This paper describes the design, construction, and testing of a DVD based electronic pulser system (DVDEPS). Such a device is used to generate pulse streams for simulation of both gamma and neutron detector systems. The DVDEPS reproduces a random pulse stream of a full HPGe spectrum as well as a digital pulse stream representing the output of a neutron multiplicity detector. The exchangeable DVD media contains over an hour of data for both detector systems and can contain an arbitrary gamma spectrum and neutron pulse stream. The data is written to the DVD using a desktop computer program from either actual or simulated spectra. The targeted use of the DVDEPS is authentication or validation of monitoring equipment for non-proliferation purposes, but it is also of general use whenever a complex data stream is required. The DVD based pulser combines the storage capacity and simplicity of DVD technology with commonly available electronic components to build a relatively inexpensive yet highly capable testing instrument

  12. Evidence-based surgery

    Directory of Open Access Journals (Sweden)

    Miran Rems

    2007-04-01

    Full Text Available Background: Surgery is moving onto new ground under the reign of evidence brought about by Evidence-Based Medicine (EBM). While experience and expert opinion count the least under the principles of EBM, randomized controlled trials (RCTs) and other comparative studies have gained importance. Recommendations included in guidelines represent a demanding shift in surgeons' professional thinking. Their thinking and classical education have not yet been completely based on the results of such studies and are still very much master-pupil centred. Assessment of one's own experience suffers from a lack of objectivity, as negative experiences are recorded in deeper memory. Randomized studies and meta-analyses do appear in surgery as well; however, they demand extra knowledge of critical assessment. Conclusions: Putting the patient in the foreground brings the surgeon's decisions into the field of EBM. The process has already begun and cannot be avoided. The decision hierarchy is moving from the territory of experience to the territory of evidence, though to a lesser extent than in the rest of medicine. There are objective constraints on the adoption of the new paradigm; however, these should not stop the process of EBM implementation. Finally, there is an ethical issue to consider: if activity in research, education and critical assessment is too slow, the surgeon may end up in a position where a well-informed patient loses his or her trust.

  13. Microlaser-based displays

    Science.gov (United States)

    Bergstedt, Robert; Fink, Charles G.; Flint, Graham W.; Hargis, David E.; Peppler, Philipp W.

    1997-07-01

    Laser Power Corporation has developed a new type of projection display, based upon microlaser technology and a novel scan architecture, which provides the foundation for bright, extremely high resolution images. A review of projection technologies is presented, along with the limitations of each and the difficulties they experience in trying to generate high-resolution imagery. The design of the microlaser-based projector is discussed along with the advantages of this technology. High-power red, green, and blue microlasers have been designed and developed specifically for use in projection displays. These sources, in combination with a high-resolution, high-contrast modulator, produce a 24-bit color gamut capable of supporting the full range of real-world colors. The new scan architecture, which reduces the modulation rate and scan speeds required, is described. This scan architecture, along with the inherent brightness of the laser, provides the fundamentals necessary to produce a 5120-by-4096-resolution display. The brightness and color uniformity of the display are excellent, allowing for tiling of the displays with far fewer artifacts than in a traditionally tiled display. Applications for the display include simulators, command and control centers, and electronic cinema.

  14. SPACE BASED INTERCEPTOR SCALING

    Energy Technology Data Exchange (ETDEWEB)

    G. CANAVAN

    2001-02-01

    Space Based Interceptors (SBI) have ranges that are adequate to address rogue ICBMs. They are not overly sensitive to 30-60 s delay times. Current technologies would support boost-phase intercept with about 150 interceptors. Higher acceleration and velocity could reduce that number by about a factor of 3, at the cost of heavier and more expensive Kinetic Kill Vehicles (KKVs). 6g SBI would reduce optimal constellation costs by about 35%; 8g SBI would reduce them another 20%. Interceptor ranges fall rapidly with theater missile range. Constellations increase significantly for ranges under 3,000 km, even with advanced interceptor technology. For distributed launches, these estimates recover earlier strategic scalings, which demonstrate the improved absentee ratio for larger or multiple launch areas. Constellations increase with the number of missiles and the number of interceptors launched at each. The economic estimates above suggest that two SBI per missile with a modest midcourse underlay is appropriate. The SBI KKV technology would appear to be common to space- and surface-based boost-phase systems, and could have synergisms with improved midcourse intercept and discrimination systems. While advanced technology could be helpful in reducing costs, particularly for short-range theater missiles, current technology appears adequate for pressing rogue ICBM, accidental, and unauthorized launches.

  15. Challenge Based Innovation gala

    CERN Multimedia

    CERN. Geneva; Utriainen, Tuuli Maria; Toivonen, Harri; Nordberg, Markus

    2014-01-01

    Challenge Based Innovation gala   There’s a new experiment starting in CERN called IdeaLab where we work together with detector R&D researchers to help them to bridge their knowledge into a more human, societally oriented context. Currently we are located in B153, but will move our activities to a new facility next to the Globe in May 2014. One of our first pilot projects is a 5 month course CBI (Challenge Based Innovation) where two multidisciplinary student teams join forces with Edusafe & TALENT projects at CERN. Their goal is to discover what kind of tools for learning could be created in collaboration with the two groups. After months of user interviews and low resolution prototyping they are ready to share the results with us in the form of an afternoon gala. We warmly welcome you to join us to see the students' results and experience the prototypes they have conceived. The event is in three parts, you are welcome to visit all of them,...

  16. [Competence based medical education].

    Science.gov (United States)

    Bernabó, Jorge G; Buraschi, Jorge; Olcese, Juan; Buraschi, María; Duro, Eduardo

    2007-01-01

    The strategy of curriculum planning in the majority of medical schools has shifted in the past years from curriculum models based on contents to outcome-oriented curricula. Coincidentally, interest in defining and evaluating the clinical competences that a graduate must have has grown. In our country, and particularly in the Associated Hospitals belonging to the Unidad Regional de Enseñanza IV of the UBA School of Medicine, evidence has been gathered showing that the acquisition of clinical competences during undergraduate training is in general insufficient. The foundations and characteristics of PREM (Programa de Requisitos Esenciales Mínimos) are described. PREM is a tool to promote the learning of the abilities and skills necessary for the practice of medicine. The objective of the program is to promote the learning of a well-defined list of core competences considered indispensable for a general practitioner. An outcome-oriented curriculum with a clear definition of the expected knowledge, skills and attitudes of a graduate of the programme, the promotion of learning experiences centered on practice, and evaluation tools based on direct observation of the student's performance should contribute to closing the gap between what medical schools traditionally teach and evaluate, and what the doctor needs to know and do to practice the profession correctly.

  17. Constraint-based reachability

    Directory of Open Access Journals (Sweden)

    Arnaud Gotlieb

    2013-02-01

    Full Text Available Iterative imperative programs can be considered as infinite-state systems computing over possibly unbounded domains. Studying reachability in these systems is challenging, as it requires dealing with an infinite number of states using standard backward or forward exploration strategies. An approach that we call constraint-based reachability is proposed to address reachability problems by exploring program states using a constraint model of the whole program. The key point of the approach is to interpret imperative constructions such as conditionals, loops, and array and memory manipulations with the fundamental notion of a constraint over a computational domain. By combining constraint filtering and abstraction techniques, constraint-based reachability is able to solve reachability problems which are usually outside the scope of backward or forward exploration strategies. This paper proposes an interpretation of classical filtering consistencies used in constraint programming as abstract domain computations, and shows how this approach can be used to produce a constraint solver that efficiently generates solutions for reachability problems that are unsolvable by other approaches.
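
    As a toy illustration of posing reachability as constraint solving, the sketch below encodes the question "can x ever equal 7?" for a loop that repeatedly adds 2, using the z3 SMT solver (pip install z3-solver) rather than the CP-style filtering consistencies discussed in the paper; the loop summary x == 2k is supplied by hand, whereas the paper derives the constraint model from the program itself.

```python
# Toy constraint-based reachability with the z3 SMT solver.
#
#   x = 0
#   while x < 100:
#       x = x + 2        # after k iterations: x == 2 * k
#
# Is the state x == 7 reachable?

from z3 import Int, Solver, sat

x, k = Int("x"), Int("k")

s = Solver()
s.add(k >= 0, x == 2 * k)    # hand-written constraint summary of the loop
s.add(x == 7)                # the reachability query
print(s.check())             # unsat: x == 7 is unreachable (x stays even)

s2 = Solver()
s2.add(k >= 0, x == 2 * k, x == 8)
res = s2.check()
print(res)                   # sat
if res == sat:
    print(s2.model())        # a witness, e.g. [k = 4, x = 8]
```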

  18. Metasurface-Based Polarimeters

    Directory of Open Access Journals (Sweden)

    Fei Ding

    2018-04-01

    Full Text Available The state of polarization (SOP) is an inherent property of light that can be used to gain crucial information about the composition and structure of materials interrogated with light. However, the SOP is difficult to determine experimentally, since it involves phase information between orthogonal polarization states and is uncorrelated with the light intensity and frequency, which can be easily determined with photodetectors and spectrometers. Rapid progress on optical gradient metasurfaces has resulted in the development of conceptually new approaches to SOP characterization. In this paper, we review the fundamentals of and recent developments within metasurface-based polarimeters. Starting by introducing the concepts of the generalized Snell's law and the Stokes parameters, we explain the Pancharatnam-Berry phase (PB phase), which is instrumental for differentiating between orthogonal circular polarizations. Then we review the recent progress in metasurface-based polarimeters, including polarimeters, spectropolarimeters, orbital angular momentum (OAM) spectropolarimeters, and photodetector-integrated polarimeters. The review ends with a short conclusion and perspectives for future developments.
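
    For reference, the Stokes parameters mentioned above have the standard intensity-based definitions (textbook form, not specific to this review), where each I is the intensity transmitted through the corresponding polarizer (horizontal, vertical, ±45°, right/left circular):

```latex
S_0 = I_x + I_y, \qquad
S_1 = I_x - I_y, \qquad
S_2 = I_{+45^\circ} - I_{-45^\circ}, \qquad
S_3 = I_{\mathrm{RCP}} - I_{\mathrm{LCP}}.
```

    This is why a polarimeter must combine several intensity measurements behind different analyzers: no single photodetector reading determines the SOP.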

  19. Risk-based safety indicators

    International Nuclear Information System (INIS)

    Szikszai, T.

    1997-01-01

    The presentation discusses the following issues: the objectives of the risk-based indicator programme; the characteristics of risk-based indicators; the objectives of risk-based safety indicators in monitoring safety and in PSA applications; which indicators to use; how to produce the risk-based indicators; and PSA requirements

  20. MS Based Metabonomics

    Energy Technology Data Exchange (ETDEWEB)

    Want, Elizabeth J.; Metz, Thomas O.

    2010-03-01

    Metabonomics is the latest and least mature of the systems biology triad, which also includes genomics and proteomics, and has its origins in the early orthomolecular medicine work pioneered by Linus Pauling and Arthur Robinson. It was defined by Nicholson and colleagues in 1999 as the quantitative measurement of perturbations in the metabolite complement of an integrated biological system in response to internal or external stimuli, and is often used today to describe many non-global types of metabolite analyses. Applications of metabonomics are extensive and include toxicology, nutrition, pharmaceutical research and development, physiological monitoring and disease diagnosis. For example, blood samples from millions of neonates are tested routinely by mass spectrometry (MS) as a diagnostic tool for inborn errors of metabolism. The metabonome encompasses a wide range of structurally diverse metabolites; therefore, no single analytical platform will be sufficient. Specialized sample preparation and detection techniques are required, and advances in NMR and MS technologies have led to enhanced metabonome coverage, which in turn demands improved data analysis approaches. The role of MS in metabonomics is still evolving as instrumentation and software becomes more sophisticated and as researchers realize the strengths and limitations of current technology. MS offers a wide dynamic range, high sensitivity, and reproducible, quantitative analysis. These attributes are essential for addressing the challenges of metabonomics, as the range of metabolite concentrations easily exceeds nine orders of magnitude in biofluids, and the diversity of molecular species ranges from simple amino and organic acids to lipids and complex carbohydrates. Additional challenges arise in generating a comprehensive metabolite profile, downstream data processing and analysis, and structural characterization of important metabolites. A typical workflow of MS-based metabonomics is shown in Figure

  1. Skull base tumor model.

    Science.gov (United States)

    Gragnaniello, Cristian; Nader, Remi; van Doormaal, Tristan; Kamel, Mahmoud; Voormolen, Eduard H J; Lasio, Giovanni; Aboud, Emad; Regli, Luca; Tulleken, Cornelius A F; Al-Mefty, Ossama

    2010-11-01

    Resident duty-hours restrictions have now been instituted in many countries worldwide. Shortened training times and increased public scrutiny of surgical competency have led to a move away from the traditional apprenticeship model of training. The development of educational models for brain anatomy is a fascinating innovation allowing neurosurgeons to train without the need to practice on real patients and it may be a solution to achieve competency within a shortened training period. The authors describe the use of Stratathane resin ST-504 polymer (SRSP), which is inserted at different intracranial locations to closely mimic meningiomas and other pathological entities of the skull base, in a cadaveric model, for use in neurosurgical training. Silicone-injected and pressurized cadaveric heads were used for studying the SRSP model. The SRSP presents unique intrinsic metamorphic characteristics: liquid at first, it expands and foams when injected into the desired area of the brain, forming a solid tumorlike structure. The authors injected SRSP via different passages that did not influence routes used for the surgical approach for resection of the simulated lesion. For example, SRSP injection routes included endonasal transsphenoidal or transoral approaches if lesions were to be removed through standard skull base approach, or, alternatively, SRSP was injected via a cranial approach if the removal was planned to be via the transsphenoidal or transoral route. The model was set in place in 3 countries (US, Italy, and The Netherlands), and a pool of 13 physicians from 4 different institutions (all surgeons and surgeons in training) participated in evaluating it and provided feedback. All 13 evaluating physicians had overall positive impressions of the model. The overall score on 9 components evaluated--including comparison between the tumor model and real tumor cases, perioperative requirements, general impression, and applicability--was 88% (100% being the best possible

  2. Interference Coordination for E-MBMS Transmissions in LTE-Advanced

    Directory of Open Access Journals (Sweden)

    Alberto A. Lopes

    2010-01-01

    Full Text Available Interference coordination methods for the Evolved Multimedia Broadcast/Multicast Service (E-MBMS) in Long-Term Evolution Advanced (LTE-A) are presented. In addition, we consider signal space diversity based on rotation matrices (RM), known to provide good performance gains over uncorrelated Rayleigh fading channels. OFDM/OFDMA systems can make the use of RM very attractive for both single- and multiple-antenna transmissions. In this paper, OFDM/OFDMA signals based on LTE parameters are combined with RM, MIMO, and Turbo or LDPC codes. We have considered different types of receivers, namely an MMSE (Minimum Mean Squared Error) equalizer and a Maximum Likelihood Soft Output (MLSO) criterion. Frequency, signal, and space diversity gains are evaluated for different spatial channel models (SCM) based on ITU multipath propagation channels. Different adaptive frequency reuse schemes and schedulers are considered to evaluate the E-MBMS spectral efficiency at the cell borders.
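
    Signal space diversity via rotation matrices, as used above, amounts to rotating pairs of symbols before they are mapped onto independently fading coordinates (standard form; the particular rotation angle used in the paper is not reproduced here):

```latex
\begin{pmatrix} x_1' \\ x_2' \end{pmatrix} =
\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.
```

    Because each rotated component carries information about both original symbols, a deep fade on one coordinate need not erase either symbol, which is the source of the diversity gain.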

  3. Plasma based accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Caldwell, Allen [Max-Planck-Institut fuer Physik, Muenchen (Germany)

    2015-05-01

    The concept of laser-induced plasma wakefields as a technique to accelerate charged particles was introduced 35 years ago as a means to go beyond the accelerating gradients possible with metallic cavities supporting radio frequency electromagnetic fields. Significant developments in laser technology have made possible the pulse intensity needed to realize this concept, and rapid progress is now underway in the realization of laser-driven plasma wakefield acceleration. It has also been realized that similar accelerating gradients can be produced by particle beams propagating in plasmas, and experimental programs have also been undertaken to study this possibility. Positive results have been achieved with electron-driven plasma wakefields, and a demonstration experiment with proton-driven wakefields is under construction at CERN. The concepts behind these different schemes and their pros and cons are described, as well as the experimental results achieved. An outlook for future practical uses of plasma based accelerators will also be given.
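
    The gradients at stake can be estimated from the cold, non-relativistic wave-breaking field of a plasma (a textbook expression, not taken from the talk itself):

```latex
E_0 = \frac{m_e c\,\omega_p}{e}, \qquad
\omega_p = \sqrt{\frac{n_0 e^2}{\varepsilon_0 m_e}}, \qquad
E_0\,[\mathrm{V/m}] \approx 96\,\sqrt{n_0\,[\mathrm{cm^{-3}}]}.
```

    For a typical plasma density of n_0 = 10^18 cm^-3 this gives roughly 100 GV/m, three to four orders of magnitude above the tens of MV/m sustained by RF cavities.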

  4. Gossip-Based Broadcast

    Science.gov (United States)

    Leitão, João; Pereira, José; Rodrigues, Luís

    Gossip, or epidemic, protocols have emerged as a powerful strategy to implement highly scalable and resilient reliable broadcast primitives on large scale peer-to-peer networks. Epidemic protocols are scalable because they distribute the load among all nodes in the system and resilient because they have an intrinsic level of redundancy that masks node and network failures. This chapter provides an introduction to gossip-based broadcast on large-scale unstructured peer-to-peer overlay networks: it surveys the main results in the field, discusses techniques to build and maintain the overlays that support efficient dissemination strategies, and provides an in-depth discussion and experimental evaluation of two concrete protocols, named HyParView and Plumtree.
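
    The core push-gossip dynamic is captured by a few lines of simulation. The sketch below is a minimal illustration in Python of synchronous-round epidemic dissemination (not the HyParView or Plumtree protocols themselves); the node count and fanout are arbitrary.

```python
import random

def gossip_broadcast(n_nodes=1000, fanout=4, seed=1):
    """Synchronous push gossip: every newly infected node relays the message
    to `fanout` peers chosen uniformly at random in the next round."""
    rng = random.Random(seed)
    delivered = {0}          # node 0 is the broadcast source
    frontier = {0}           # nodes that received the message last round
    rounds = 0
    while frontier:
        fresh = set()
        for _ in frontier:
            # Peers chosen uniformly; self-sends are wasted but harmless.
            for peer in rng.sample(range(n_nodes), fanout):
                if peer not in delivered:
                    delivered.add(peer)
                    fresh.add(peer)
        frontier = fresh
        rounds += 1
    return rounds, len(delivered)

rounds, reached = gossip_broadcast()
print(f"reached {reached}/1000 nodes in {rounds} rounds")
# With fanout on the order of log(n), almost all nodes are reached in
# O(log n) rounds; the redundancy is what masks node and link failures.
```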

  5. Sensory bases of navigation.

    Science.gov (United States)

    Gould, J L

    1998-10-08

    Navigating animals need to know both the bearing of their goal (the 'map' step) and how to determine that direction (the 'compass' step). Compasses are typically arranged in hierarchies, with magnetic backup as a last resort when celestial information is unavailable. Magnetic information is often essential to calibrating celestial cues, though, and repeated recalibration between celestial and magnetic compasses is important in many species. Most magnetic compasses are based on magnetite crystals, but others make use of induction or paramagnetic interactions between short-wavelength light and visual pigments. Though odors may be used in some cases, most if not all long-range maps probably depend on magnetite. Magnetite-based map senses are used to measure only latitude in some species, but provide the distance and direction of the goal in others.

  6. Graphene based biosensors

    Energy Technology Data Exchange (ETDEWEB)

    Gürel, Hikmet Hakan, E-mail: hhakan.gurel@kocaeli.edu.tr [Kocaeli University, Kocaeli (Turkey); Salmankurt, Bahadır [Sakarya University, Sakarya (Turkey)

    2016-03-25

    Nanometer-sized graphene, as a 2D material, has unique chemical and electronic properties. These unique physical, chemical, and electronic properties, together with its interesting shape and size, make it a promising nanomaterial for many biological applications. It is expected that biomaterials incorporating graphene will be developed for graphene-based drug delivery systems and biomedical devices. The interactions between biomolecules and graphene are long-ranged and very weak, so the development of new techniques is very desirable for the design of bioelectronic sensors and devices. In this work, we present first-principles calculations within density functional theory of the effects of charging on nucleobases on graphene. It is shown how charging modifies the structural and electronic properties of nucleobases on graphene.

  7. Integrated data base program

    International Nuclear Information System (INIS)

    Notz, K.J.

    1981-01-01

    The IDB Program provides direct support to the DOE Nuclear Waste Management and Fuel Cycle Programs and their lead sites and support contractors by providing and maintaining a current, integrated data base of spent fuel and radioactive waste inventories and projections. All major waste types (HLW, TRU, and LLW) and sources (government, commercial fuel cycle, and I/I) are included. A major data compilation was issued in September 1981: Spent Fuel and Radioactive Waste Inventories and Projections as of December 31, 1980, DOE/NE-0017. This report includes chapters on Spent Fuel, HLW, TRU Waste, LLW, Remedial Action Waste, Active Uranium Mill Tailings, and Airborne Waste, plus appendices with more detailed data in selected areas such as isotopics, radioactivity, thermal power, projections, and land usage. The LLW sections include volumes, radioactivity, thermal power, current inventories, projected inventories and characteristics, source terms, land requirements, and a breakdown in terms of government/commercial and defense/fuel cycle/I/I

  8. Nickel base alloys

    International Nuclear Information System (INIS)

    Gibson, R.C.; Korenko, M.K.

    1980-01-01

    Nickel-based alloy, characterized mainly by the following composition in percentages by weight: 57-63 Ni, 7-18 Cr, 10-20 Fe, 4-6 Mo, 1-2 Nb, 0.2-0.8 Si, 0.01-0.05 Zr, 1.0-2.5 Ti, 1.0-2.5 Al, 0.02-0.06 C and 0.002-0.015 B. The aim is to create new nickel-chromium alloys, hardened in solid solution and by precipitation, that are stable, exhibit reduced swelling, and are resistant to plastic deformation inside the reactor. These alloys of the gamma-prime type have improved mechanical strength, swelling resistance, structural stability and welding properties compared with Inconel 625

  9. Rate based failure detection

    Science.gov (United States)

    Johnson, Brett Emery Trabun; Gamage, Thoshitha Thanushka; Bakken, David Edward

    2018-01-02

    This disclosure describes, in part, a system management component and a failure detection component for use in a power grid data network to identify anomalies within the network and systematically adjust the quality of service of data published by publishers and subscribed to by subscribers within the network. In one implementation, subscribers may identify a desired data rate, a minimum acceptable data rate, a desired latency, a minimum acceptable latency and a priority for each subscription. The failure detection component may identify an anomaly within the network and the source of the anomaly. Based on the identified anomaly, data rates and/or data paths may be adjusted in real-time to ensure that the power grid data network does not become overloaded and/or fail.
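
    The per-subscription QoS contract described above can be pictured as a small record plus a degradation rule. Everything in the sketch below (names, fields, and the shed-by-priority policy) is hypothetical, illustrating the idea rather than the disclosed system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Subscription:
    # QoS contract fields named in the abstract; values are illustrative.
    desired_rate_hz: float
    min_rate_hz: float
    desired_latency_ms: float
    max_latency_ms: float
    priority: int                 # lower number = more important

def shed_load(subs, capacity_hz):
    """Hypothetical policy: on detected overload, throttle low-priority
    subscriptions toward their minimum acceptable rates first."""
    granted = {s: s.desired_rate_hz for s in subs}
    overload = sum(granted.values()) - capacity_hz
    for s in sorted(subs, key=lambda s: s.priority, reverse=True):
        if overload <= 0:
            break
        give_back = min(overload, granted[s] - s.min_rate_hz)
        granted[s] -= give_back
        overload -= give_back
    return granted

subs = [Subscription(60, 10, 20, 100, priority=1),
        Subscription(60, 10, 20, 100, priority=3)]
print(shed_load(subs, capacity_hz=80))   # the priority-3 feed is throttled first
```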

  10. Behavior based safety

    International Nuclear Information System (INIS)

    Sudhikumaran, T.V.; Mehta, S.C.; Goyal, D.K.

    2009-01-01

    Behaviour Based Safety (popularly known as BBS) is a new methodology for achieving an injury-free workplace and a total safety culture. BBS is successfully being implemented and practiced as a work methodology for achieving a loss- and injury-free work environment and work practice. Through BBS, it was brought out that the root causes of all industrial accidents somehow originate in the 'at-risk' behaviour of some individual or group of individuals at some level. The policy of NPCIL is to excel in the field of industrial and fire safety in comparison to international standards. This article intends to bring out the various parameters that help in installing a BBS programme at any plant. (author)

  11. Fuel cycle based safeguards

    International Nuclear Information System (INIS)

    De Montmollin, J.M.; Higinbotham, W.A.; Gupta, D.

    1985-07-01

    In NPT safeguards, the same model approach and absolute-quantity inspection goals are at present applied to all similar facilities, irrespective of the State's fuel cycle. There is continuing interest and activity on the part of the IAEA in new NPT safeguards approaches that more directly address a State's nuclear activities as a whole. This fuel cycle based safeguards system is expected to (a) provide a statement of findings for the entire State rather than only for individual facilities; (b) allocate inspection efforts so as to reflect more realistically the different categories of nuclear materials in the different parts of the fuel cycle; and (c) provide more timely and better coordinated information on the inputs, outputs and inventories of nuclear materials in a State. (orig./RF) [de]

  12. Bases para proyectiles dirigidos

    Directory of Open Access Journals (Sweden)

    Editorial, Equipo

    1959-03-01

    Full Text Available Although no general set of methods or systems has yet been established to govern a characteristic type of launch ramp and the auxiliary services needed for launching guided missiles to great altitudes and distances, the experience gained in various trials, using different types of missiles and ballistic trajectories, has established a whole series of procedures, data and conclusions of great ballistic value. Even allowing for the continuing evolution of the missile, its shapes, fuels and ranges, the minimum conditions that a base dedicated to this type of launch must satisfy are already known with reasonable accuracy.

  13. Situation based housing

    DEFF Research Database (Denmark)

    Duelund Mortensen, Peder; Welling, Helen; Wiell Nordberg, Lene

    2007-01-01

    of the average family's lifestyle. These dwellings were ground-breaking when they were built, but today are clearly a product of their time. The reaction to functionalism and the postwar mass production gave rise to flexible dwellings with countless possibilities for room division. The housing of this period has... characteristics which in the long run have proven to be unfortunate both in terms of durability and architectural quality. Today there is a focus on the development of more open and functionally non-determined housing. A number of new housing schemes in and around Copenhagen reveal a variety... of approaches to these goals. This working paper reviews not only a selection of new housing types, but also dwellings from the past, which each contain an aspect of changeability. Our study is based on information from users in the selected housing schemes, gathered from questionnaires, information about...

  14. Liaison based assembly design

    Energy Technology Data Exchange (ETDEWEB)

    Ames, A.; Kholwadwala, D.; Wilson, R.H.

    1996-12-01

    Liaison Based Assembly Design extends the current information infrastructure to support design in terms of kinematic relationships between parts, or liaisons. These liaisons capture information regarding contact, degrees-of-freedom constraints and containment relationships between parts in an assembly. The project involved defining a useful collection of liaison representations, investigating their properties, and providing for maximum use of the data in downstream applications. We tested our ideas by implementing a prototype system involving extensions to Pro/Engineer and the Archimedes assembly planner. With an expanded product model, the design system is more able to capture design intent. When a product update is attempted, increased knowledge availability improves our ability to understand the effect of design changes. Manufacturing and analysis disciplines benefit from having liaison information available, so less time is wasted arguing over incomplete design specifications and our enterprise can be more completely integrated.
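
    To make the notion concrete, here is a minimal sketch (in Python, with hypothetical names; the project itself extended Pro/Engineer and the Archimedes planner) of the kind of record a liaison might hold: the two parts, the contact type, the unconstrained degrees of freedom, and a containment flag.

        from dataclasses import dataclass, field

        @dataclass
        class Liaison:
            """One kinematic relationship between two parts."""
            part_a: str
            part_b: str
            contact_type: str            # e.g. "planar", "cylindrical"
            free_dofs: set = field(default_factory=set)  # e.g. {"tz", "rz"}
            containment: bool = False    # True if part_a encloses part_b

        # A pin seated in a bracket hole: free to slide along and spin
        # about its axis; all other motions are constrained by the contact.
        pin_in_hole = Liaison("pin-12", "bracket-3", "cylindrical",
                              free_dofs={"tz", "rz"})
        print(pin_in_hole)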

  15. Touching base with OPERA

    CERN Multimedia

    CERN Bulletin

    2011-01-01

    Three seminars – at CERN, at Gran Sasso and in Japan – and an article calling for the scrutiny of the scientific community: the OPERA Collaboration opened its research publicly. In addition to huge press coverage, this triggered welcome reactions from colleagues around the world, many of whom will attempt to independently interpret and reproduce the measurement. OPERA’s Spokesperson touches base with the Bulletin.   The CERN Main Auditorium was crowded as OPERA Physics co-ordinator Dario Autiero presented the results of their research (23 September 2011). According to the OPERA strategy, the results of the measurements are in the hands of the scientific community and, as for any other scientific result, several months will be needed before other groups will be able to perform an independent measurement. In the meantime, the OPERA Collaboration is dealing with an avalanche of emails from the scientific community, members of the general public, and the press. ...

  16. Pathway-based analyses.

    Science.gov (United States)

    Kent, Jack W

    2016-02-03

    New technologies for acquisition of genomic data, while offering unprecedented opportunities for genetic discovery, also impose severe burdens of interpretation and penalties for multiple testing. The Pathway-based Analyses Group of the Genetic Analysis Workshop 19 (GAW19) sought reduction of multiple-testing burden through various approaches to aggregation of high-dimensional data in pathways informed by prior biological knowledge. Experimental methods tested included the use of "synthetic pathways" (random sets of genes) to estimate power and false-positive error rate of methods applied to simulated data; data reduction via independent components analysis, single-nucleotide polymorphism (SNP)-SNP interaction, and use of gene sets to estimate genetic similarity; and general assessment of the efficacy of prior biological knowledge to reduce the dimensionality of complex genomic data. The work of this group explored several promising approaches to managing high-dimensional data, with the caveat that these methods are necessarily constrained by the quality of external bioinformatic annotation.

  17. Refractive index based measurements

    DEFF Research Database (Denmark)

    2014-01-01

    A refractive index based measurement of a property of a fluid is measured in an apparatus comprising a variable wavelength coherent light source (16), a sample chamber (12), a wavelength controller (24), a light sensor (20), a data recorder (26) and a computation apparatus (28), by - directing... coherent light having a wavelength along an input light path, - producing scattering of said light from each of a plurality of interfaces within said apparatus including interfaces between said fluid and a surface bounding said fluid, said scattering producing an interference pattern formed by said... scattered light, - cyclically varying the wavelength of said light in said input light path over a 1 nm to 20 nm wide range of wavelengths at a rate of from 10 Hz to 50 kHz, - recording variation of intensity of the interfering light with change in wavelength of the light at an angle of observation...

  18. Transaction based approach

    Science.gov (United States)

    Hunka, Frantisek; Matula, Jiri

    2017-07-01

    Transaction based approach is utilized in some methodologies in business process modeling. Essential parts of these transactions are human beings, for whom the notion of agent or actor role is usually used. The paper, using a particular example, describes the possibilities of the Design Engineering Methodology for Organizations (DEMO) and the Resource-Event-Agent (REA) methodology. Whereas the DEMO methodology can be regarded as a generic methodology having its foundation in the theory of Enterprise Ontology, the REA methodology is regarded as a domain-specific methodology and has its origin in accountancy systems. The result of these approaches is that the DEMO methodology captures everything that happens in reality with good empirical evidence, whereas the REA methodology captures only changes connected with economic events. Economic events represent either a change of the property rights to an economic resource or the consumption or production of economic resources. This follows from the essence of economic events and their connection to economic resources.

  19. Pitch Based Sound Classification

    DEFF Research Database (Denmark)

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U

    2006-01-01

    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with soft-max output function. Both linear and quadratic inputs are used. The model is trained on 2 hours of sound and tested on publicly available data. A test classification error below 0.05 with 1 s classification windows is achieved. Furthermore, it is shown that linear input performs as well as a quadratic one, and that even though classification gets marginally better, not much is achieved by increasing the window size beyond 1 s.
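
    The harmonic product spectrum itself is compact enough to sketch. The fragment below (a simplified illustration, not the authors' code) multiplies the magnitude spectrum by its downsampled copies so that energy at f0, 2f0, 3f0, ... reinforces at f0:

        import numpy as np

        def hps_pitch(signal, fs, n_harmonics=4):
            """Estimate pitch with the harmonic product spectrum."""
            spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
            hps = spectrum.copy()
            for h in range(2, n_harmonics + 1):
                n = len(spectrum) // h
                hps[:n] *= spectrum[::h][:n]   # downsampling aligns h*f0 with f0
            search = hps[1:len(spectrum) // n_harmonics]  # skip DC, stay in range
            return (1 + np.argmax(search)) * fs / len(signal)

        # One second of a 440 Hz tone with three decaying harmonics.
        fs = 16000
        t = np.arange(fs) / fs
        tone = sum(a * np.sin(2 * np.pi * 440 * k * t)
                   for k, a in enumerate([1.0, 0.6, 0.4, 0.3], start=1))
        print(hps_pitch(tone, fs))             # ~440.0 Hz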

  20. Telephone-Based Coaching.

    Science.gov (United States)

    Boccio, Mindy; Sanna, Rashel S; Adams, Sara R; Goler, Nancy C; Brown, Susan D; Neugebauer, Romain S; Ferrara, Assiamira; Wiley, Deanne M; Bellamy, David J; Schmittdiel, Julie A

    2017-03-01

    Many Americans continue to smoke, increasing their risk of disease and premature death. Both telephone-based counseling and in-person tobacco cessation classes may improve access for smokers seeking convenient support to quit. Little research has assessed whether such programs are effective in real-world clinical populations. Retrospective cohort study comparing wellness coaching participants with two groups of controls. Kaiser Permanente Northern California, a large integrated health care delivery system. Two hundred forty-one patients who participated in telephonic tobacco cessation coaching from January 1, 2011, to March 31, 2012, and two control groups: propensity-score-matched controls, and controls who participated in a tobacco cessation class during the same period. Wellness coaching participants received an average of two motivational interviewing-based coaching sessions that engaged the patient, evoked their reason to consider quitting, and helped them establish a quit plan. Self-reported quitting of tobacco and fills of tobacco cessation medications within 12 months of follow-up. Logistic regressions adjusting for age, gender, race/ethnicity, and primary language. After adjusting for confounders, tobacco quit rates were higher among coaching participants vs. matched controls (31% vs. 23%). Coaching participants and class attendees filled tobacco-cessation prescriptions at a higher rate (47% for both) than matched controls (6%). Telephonic coaching was as effective as in-person classes and was associated with higher rates of quitting compared to no treatment. The telephonic modality may increase convenience and scalability for health care systems looking to reduce tobacco use and improve health.

  1. Model Based Temporal Reasoning

    Science.gov (United States)

    Rabin, Marla J.; Spinrad, Paul R.; Fall, Thomas C.

    1988-03-01

    Systems that assess the real world must cope with evidence that is uncertain, ambiguous, and spread over time. Typically, the most important function of an assessment system is to identify when activities are occurring that are unusual or unanticipated. Model based temporal reasoning addresses both of these requirements. The differences among temporal reasoning schemes lie in the methods used to avoid computational intractability. If we had n pieces of data and we wanted to examine how they were related, the worst case would be where we had to examine every subset of these points to see if that subset satisfied the relations. This would be 2^n subsets, which is intractable. Models compress this; if several data points are all compatible with a model, then that model represents all those data points. Data points are then considered related if they lie within the same model or if they lie in models that are related. Models thus address the intractability problem. They also address the problem of determining unusual activities: if the data do not agree with models that are indicated by earlier data, then something out of the norm is taking place. The models can summarize what we know up to that time, so when they are not predicting correctly, either something unusual is happening or we need to revise our models. The model based reasoner developed at Advanced Decision Systems is thus both intuitive and powerful. It is currently being used on one operational system and several prototype systems. It has enough power to be used in domains spanning the spectrum from manufacturing engineering and project management to low-intensity conflict and strategic assessment.

  2. Mainstreaming gesture based interfaces

    Directory of Open Access Journals (Sweden)

    David Procházka

    2013-01-01

    Full Text Available Gestures are a common way of interaction with mobile devices. They emerged especially with the iPhone production. Gestures in currently used devices are usually based on the original gestures presented by Apple in its iOS (iPhone Operating System). Therefore, there is a wide agreement on mobile gesture design. In recent years, it has been possible to see experiments with gesture usage also in other areas of consumer electronics and computers. Examples include televisions, large projections, etc. These gestures can be marked as spatial or 3D gestures. They are connected with a natural 3D environment rather than with a flat 2D screen. Nevertheless, it is hard to find a comparable design agreement within spatial gestures. Various projects are based on completely different gesture sets. This situation is confusing for their users and slows down spatial gesture adoption. This paper is focused on the standardization of spatial gestures. A review of projects focused on spatial gesture usage is provided in the first part. The main emphasis is placed on the usability point of view. On the basis of our analysis, we argue that usability is the key issue enabling wide adoption. Mobile gestures were able to emerge easily because the iPhone gestures were natural; therefore, it was not necessary to learn them. The design and implementation of our presentation software, which is controlled by gestures, is outlined in the second part of the paper. Furthermore, usability testing results are provided as well. We have tested our application on a group of users not instructed in the implemented gesture design. These results were compared with those obtained with our original implementation. The evaluation can be used as the basis for implementation of similar projects.

  3. DNA-based machines.

    Science.gov (United States)

    Wang, Fuan; Willner, Bilha; Willner, Itamar

    2014-01-01

    The base sequence in nucleic acids encodes substantial structural and functional information into the biopolymer. This encoded information provides the basis for the tailoring and assembly of DNA machines. A DNA machine is defined as a molecular device that exhibits the following fundamental features. (1) It performs a fuel-driven mechanical process that mimics macroscopic machines. (2) The mechanical process requires an energy input, "fuel." (3) The mechanical operation is accompanied by an energy consumption process that leads to "waste products." (4) The cyclic operation of the DNA devices involves the use of "fuel" and "anti-fuel" ingredients. A variety of DNA-based machines are described, including the construction of "tweezers," "walkers," "robots," "cranes," "transporters," "springs," "gears," and interlocked cyclic DNA structures acting as reconfigurable catenanes, rotaxanes, and rotors. Different "fuels", such as nucleic acid strands, pH (H⁺/OH⁻), metal ions, and light, are used to trigger the mechanical functions of the DNA devices. The operation of the devices in solution and on surfaces is described, and a variety of optical, electrical, and photoelectrochemical methods to follow the operations of the DNA machines are presented. We further address the possible applications of DNA machines and the future perspectives of molecular DNA devices. These include the application of DNA machines as functional structures for the construction of logic gates and computing, for the programmed organization of metallic nanoparticle structures and the control of plasmonic properties, and for controlling chemical transformations by DNA machines. We further discuss the future applications of DNA machines for intracellular sensing, controlling intracellular metabolic pathways, and the use of the functional nanostructures for drug delivery and medical applications.

  4. Base Research Program

    Energy Technology Data Exchange (ETDEWEB)

    Everett Sondreal; John Hendrikson

    2009-03-31

    In June 2009, the Energy & Environmental Research Center (EERC) completed 11 years of research under the U.S. Department of Energy (DOE) Base Cooperative Agreement No. DE-FC26-98FT40320 funded through the Office of Fossil Energy (OFE) and administered at the National Energy Technology Laboratory (NETL). A wide range of diverse research activities were performed under annual program plans approved by NETL in seven major task areas: (1) resource characterization and waste management, (2) air quality assessment and control, (3) advanced power systems, (4) advanced fuel forms, (5) value-added coproducts, (6) advanced materials, and (7) strategic studies. This report summarizes results of the 67 research subtasks and an additional 50 strategic studies. Selected highlights in the executive summary illustrate the contribution of the research to the energy industry in areas not adequately addressed by the private sector alone. During the period of performance of the agreement, concerns have mounted over the impact of carbon emissions on climate change, and new programs have been initiated by DOE to ensure that fossil fuel resources along with renewable resources can continue to supply the nation's transportation fuel and electric power. The agreement has addressed DOE goals for reductions in CO2 emissions through efficiency, capture, and sequestration while expanding the supply and use of domestic energy resources for energy security. It has further contributed to goals for near-zero emissions from highly efficient coal-fired power plants; environmental control capabilities for SO2, NOx, fine respirable particulate (PM2.5), and mercury; alternative transportation fuels including liquid synfuels and hydrogen; and synergistic integration of fossil and renewable resources (e.g., wind-, biomass-, and coal-based electrical generation).

  5. The office based CHIVA

    Directory of Open Access Journals (Sweden)

    Passariello F

    2013-09-01

    Full Text Available Fausto Passariello,1 Stefano Ermini,2 Massimo Cappelli,3 Roberto Delfrate,4 Claude Franceschi5 1Centro Diagnostico Aquarius, Napoli, Italy; 2Private Practice, Grassina, Italy; 3Private Practice, Firenze, Italy; 4Casa di Cure Figlie di Maria, Cremona, Italy; 5Hospital St Joseph, Service d'Explorations Vasculaires, Paris, France Abstract: The cure Conservatrice Hémodynamique de l'Insuffisance Veineuse en Ambulatoire (CHIVA) can be office based (OB). The OB-CHIVA protocol is aimed at transferring CHIVA procedures to specialists' rooms. The protocol will check the feasibility of OB-CHIVA, gather data pertaining to recurrence, and offer the opportunity to study saphenous femoral junction (SFJ) stump evolution, the role of the washing vessels and the arch recanalization rate, and gather new data about the effect of the length of the treated saphenous vein. A simplified diagnostic procedure will allow an essential ultrasound examination of the venous network while a schematic and easily readable algorithm guides therapeutic choices. The Riobamba draining crossotomy (RDC) tactic is composed of a set of OB procedures. While some of these procedures are, at the moment, only proposals, others are already applied. Devices generally used in ablative procedures, such as Light Amplification by Stimulated Emission of Radiation (LASER), radio frequency, steam, and mechanical devices, are used in this context to serve conservative interventions for CHIVA. New techniques have also been proposed for devalvulation and tributary disconnection. Detailed follow-up is necessary in order to determine the effects of therapy and possible disease evolution. Finally, information is added about the informed consent and the ethical considerations of OB-CHIVA research. Keywords: CHIVA, office based procedures, LASER, RF, steam

  6. Organic Biochar Based Fertilization

    Science.gov (United States)

    Schmidt, Hans-Peter; Pandit, Bishnu Hari; Cornelissen, Gerard; Kammann, Claudia

    2017-04-01

    Biochar produced in cost-efficient flame curtain kilns (Kon-Tiki) was nutrient enriched either with cow urine or with dissolved mineral (NPK) fertilizer to produce biochar-based fertilizers containing between 60-100 kg N, 5-60 kg P2O5 and 60-100 kg K2O, respectively, per ton of biochar. In 21 field trials nutrient-enriched biochars were applied at rates of 0.5 to 2 t ha^-1 into the root zone of 13 different annual and perennial crops. Treatments combining biochar, compost and organic or chemical fertilizer were evaluated; control treatments contained the same amounts of nutrients but without biochar. All nutrient-enriched biochar substrates improved yields compared to their respective no-biochar controls. Biochar enriched with dissolved NPK produced on average 20% ± 5.1% (N=4) higher yields than standard NPK fertilization without biochar. Cow urine-enriched biochar blended with compost resulted on average in 123% ± 76.7% (N=13) higher yields compared to the organic farmer practice with cow urine-blended compost and outcompeted NPK-enriched biochar (same nutrient dose) by 103% ± 12.4% (N=4) on average. The 21 field trials robustly revealed that low-dosage root zone application of organic biochar-based fertilizers caused substantial yield increases in rather fertile silt loam soils compared to traditional organic fertilization and to mineral NPK- or NPK-biochar fertilization. This can likely be explained by the nutrient carrier effect of biochar causing a slow nutrient release behavior, more balanced nutrient fluxes and reduced nutrient losses especially when liquid organic nutrients are used for the biochar enrichment. The results promise new pathways for optimizing organic farming and improving on-farm nutrient cycling.

  7. Loyalty-based management.

    Science.gov (United States)

    Reichheld, F F

    1993-01-01

    Despite a flurry of activities aimed at serving customers better, few companies have systematically revamped their operations with customer loyalty in mind. Instead, most have adopted improvement programs ad hoc, and paybacks haven't materialized. Building a highly loyal customer base must be integral to a company's basic business strategy. Loyalty leaders like MBNA credit cards are successful because they have designed their entire business systems around customer loyalty--a self-reinforcing system in which the company delivers superior value consistently and reinvents cash flows to find and keep high-quality customers and employees. The economic benefits of high customer loyalty are measurable. When a company consistently delivers superior value and wins customer loyalty, market share and revenues go up, and the cost of acquiring new customers goes down. The better economics mean the company can pay workers better, which sets off a whole chain of events. Increased pay boosts employee morale and commitment; as employees stay longer, their productivity goes up and training costs fall; employees' overall job satisfaction, combined with their experience, helps them serve customers better; and customers are then more inclined to stay loyal to the company. Finally, as the best customers and employees become part of the loyalty-based system, competitors are left to survive with less desirable customers and less talented employees. To compete on loyalty, a company must understand the relationships between customer retention and the other parts of the business--and be able to quantify the linkages between loyalty and profits. It involves rethinking and aligning four important aspects of the business: customers, product/service offering, employees, and measurement systems.

  8. Foundry based approach for InP based PIC development

    NARCIS (Netherlands)

    Smit, M.K.

    2014-01-01

    Europe is making significant investments in development of generic photonic foundry platform infrastructures for InP-based and Silicon Photonic ICs. Here we present the present status for the InP-based JePPIX platform.

  9. NMR studies concerning base-base interactions in oligonucleotides

    International Nuclear Information System (INIS)

    Hoogen, Y.T. van den.

    1988-01-01

    Two main subjects are treated in the present thesis. The first part principally deals with the base-base interactions in single-stranded oligoribonucleotides. The second part presents NMR and model-building studies of DNA and RNA duplexes containing an unpaired base. (author). 242 refs.; 26 figs.; 24 tabs

  10. Computer-based multi-channel analyzer based on internet

    International Nuclear Information System (INIS)

    Zhou Xinzhi; Ning Jiaoxian

    2001-01-01

    Combining Internet technology with the computer-based multi-channel analyzer, a new kind of browser-based multi-channel analyzer system is presented. Its framework and principle, as well as its implementation, are discussed.

  11. Base Camp Architecture

    Directory of Open Access Journals (Sweden)

    Warebi Gabriel Brisibe

    2016-03-01

    Full Text Available Longitudinal or time line studies of change in the architecture of a particular culture are common, but an area still open to further research is change across space or place. In particular, there is need for studies on architectural change of cultures stemming from the same ethnic source split between their homeland and other Diasporas. This change may range from minor deviations to drastic shifts away from an architectural norm and the accumulation of these shifts within a time frame constitutes variations. This article focuses on identifying variations in the architecture of the Ijo fishing group that migrates along the coastline of West Africa. It examines the causes of cross-cultural variation between base camp dwellings of Ijo migrant fishermen in the Bakassi Peninsula in Cameroon and Bayelsa State in Nigeria. The study draws on the idea of the inevitability of cultural and social change over time as proposed in the theories of cultural dynamism and evolution. It tests aspects of cultural transmission theory using the principal coordinates analysis to ascertain the possible causes of variation. From the findings, this research argues that migration has enhanced the forces of cultural dynamism, which have resulted in significant variations in the architecture of this fishing group.

  12. Evidence-based management.

    Science.gov (United States)

    Pfeffer, Jeffrey; Sutton, Robert I

    2006-01-01

    For the most part, managers looking to cure their organizational ills rely on obsolete knowledge they picked up in school, long-standing but never proven traditions, patterns gleaned from experience, methods they happen to be skilled in applying, and information from vendors. They could learn a thing or two from practitioners of evidence-based medicine, a movement that has taken the medical establishment by storm over the past decade. A growing number of physicians are eschewing the usual, flawed resources and are instead identifying, disseminating, and applying research that is soundly conducted and clinically relevant. It's time for managers to do the same. The challenge is, quite simply, to ground decisions in the latest and best knowledge of what actually works. In some ways, that's more difficult to do in business than in medicine. The evidence is weaker in business; almost anyone can (and many people do) claim to be a management expert; and a motley crew of sources--Shakespeare, Billy Graham, Jack Welch, Attila the Hun--are used to generate management advice. Still, it makes sense that when managers act on better logic and strong evidence, their companies will beat the competition. Like medicine, management is learned through practice and experience. Yet managers (like doctors) can practice their craft more effectively if they relentlessly seek new knowledge and insight, from both inside and outside their companies, so they can keep updating their assumptions, skills, and knowledge.

  13. Skull base tumors

    International Nuclear Information System (INIS)

    Kikinis, R.; Matsumae, M.; Jolesz, F.A.; Black, P.M.; Cline, H.E.; Lorenson, W.E.

    1991-01-01

    This paper reports on an image processing procedure for the planning of surgery of skull base tumors that can extract bone, vessels, tumor, and brain parenchyma and that permits resolution of cranial nerves. Three-dimensional (3D) reconstructions were generated from double-echo long TR interleaved conventional spin-echo and fast-spin-echo MR imaging data. Sixteen cases have been analyzed preoperatively. Image processing consisted of a multistep procedure combining a supervised multivariate analysis with neighborhood operations such as connectivity and erosion/dilation. 3D renderings of anatomic structures of interest were then generated. Cases were evaluated preoperatively and manipulated interactively with the computer-generated images by a team consisting of neuroradiologists, neurosurgeons, and craniofacial surgeons. The preparation of 3D reconstructions required only a few hours and was performed mostly by a research assistant. The preoperative analysis of the 3D reconstructions was found to be a valuable tool, providing information complementing the surgeon's understanding of a case as derived from conventional imaging. The interactive manipulation of data proved to be a powerful way to evaluate alternative surgical approaches

  14. Constraint-based scheduling

    Science.gov (United States)

    Zweben, Monte

    1993-01-01

    The GERRY scheduling system developed by NASA Ames with assistance from the Lockheed Space Operations Company, and the Lockheed Artificial Intelligence Center, uses a method called constraint-based iterative repair. Using this technique, one encodes both hard rules and preference criteria into data structures called constraints. GERRY repeatedly attempts to improve schedules by seeking repairs for violated constraints. The system provides a general scheduling framework which is being tested on two NASA applications. The larger of the two is the Space Shuttle Ground Processing problem which entails the scheduling of all the inspection, repair, and maintenance tasks required to prepare the orbiter for flight. The other application involves power allocation for the NASA Ames wind tunnels. Here the system will be used to schedule wind tunnel tests with the goal of minimizing power costs. In this paper, we describe the GERRY system and its application to the Space Shuttle problem. We also speculate as to how the system would be used for manufacturing, transportation, and military problems.
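
    In miniature, constraint-based iterative repair looks like the sketch below (a hypothetical toy problem, not GERRY's actual encoding): each constraint pairs a violation test with a repair move, and the loop keeps repairing violated constraints until the schedule is feasible.

        import random

        def iterative_repair(schedule, constraints, max_iters=1000):
            """Repeatedly pick a violated constraint and apply its repair.
            `constraints` is a list of (is_violated, repair) callables."""
            for _ in range(max_iters):
                violated = [c for c in constraints if c[0](schedule)]
                if not violated:
                    return schedule                  # feasible schedule found
                _, repair = random.choice(violated)  # repair one violation
                repair(schedule)
            raise RuntimeError("no feasible schedule found")

        # Toy problem: two 2-hour tasks sharing one crew must not overlap.
        schedule = {"inspect": 0, "repair": 0}       # start times in hours

        def overlap(s):
            return abs(s["inspect"] - s["repair"]) < 2

        def push_later(s):
            s["repair"] = s["inspect"] + 2           # move one task after the other

        print(iterative_repair(schedule, [(overlap, push_later)]))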

  15. Refractory metal based superalloys

    International Nuclear Information System (INIS)

    Alonso, Paula R.; Vicente, Eduardo E.; Rubiolo, Gerardo H.

    1999-01-01

    Refractory metals are looked upon as promising materials for primary circuits in fission reactors and even as fusion reactor components. Indeed, superalloys could be developed which take advantage of their high temperature properties together with the benefits of a two-phase (intermetallic compound-refractory metal matrix) coherent structure. In 1993, researchers of the Office National d'Etudes et de Recherches Aerospatiales of France reported the observation of such a coherent structure in the Ta-Ti-Zr-Al-Nb-Mo system, although the exact composition is not reported. The intermetallic compound would be Ti2AlMo based. However, the formation of this compound and its possible coexistence with a disordered bcc phase in the ternary system Ti-Al-Mo is a controversial subject in the related literature. In this work we develop a technique to obtain homogeneous alloy samples with 50 Ti-25 Al-25 Mo composition. The resulting specimens were characterized by optical and electronic metallography (SEM), microprobe composition measurements (EPMA) and X-ray diffraction (XRD) analyses. The results show evidence for a bcc (A2→B2) ordering reaction in the Ti-Al-Mo system at the 50 Ti-25 Al-25 Mo composition. (author)

  16. Fluorescence lifetime based bioassays

    Science.gov (United States)

    Meyer-Almes, Franz-Josef

    2017-12-01

    Fluorescence lifetime (FLT) is a robust intrinsic property and material constant of fluorescent matter. Measuring this important physical indicator has evolved from a laboratory curiosity to a powerful and established technique for a variety of applications in drug discovery, medical diagnostics and basic biological research. This distinct trend was mainly driven by improved and meanwhile affordable laser and detection instrumentation on the one hand, and the development of suitable FLT probes and biological assays on the other. In this process two essential working approaches emerged. The first one is primarily focused on high throughput applications employing biochemical in vitro assays with no requirement for high spatial resolution. The second even more dynamic trend is the significant expansion of assay methods combining highly time and spatially resolved fluorescence data by fluorescence lifetime imaging. The latter approach is currently pursued to enable not only the investigation of immortal tumor cell lines, but also specific tissues or even organs in living animals. This review tries to give an actual overview about the current status of FLT based bioassays and the wide range of application opportunities in biomedical and life science areas. In addition, future trends of FLT technologies will be discussed.

  17. Carbon nanotube based photocathodes

    International Nuclear Information System (INIS)

    Hudanski, Ludovic; Minoux, Eric; Schnell, Jean-Philippe; Xavier, Stephane; Pribat, Didier; Legagneux, Pierre; Gangloff, Laurent; Teo, Kenneth B K; Robertson, John; Milne, William I

    2008-01-01

    This paper describes a novel photocathode which is an array of vertically aligned multi-walled carbon nanotubes (MWCNTs), each MWCNT being associated with one p-i-n photodiode. Unlike conventional photocathodes, the functions of photon-electron conversion and subsequent electron emission are physically separated. Photon-electron conversion is achieved with p-i-n photodiodes and the electron emission occurs from the MWCNTs. The current modulation is highly efficient as it uses an optically controlled reconfiguration of the electric field at the MWCNT locations. Such devices are compatible with high frequency and very large bandwidth operation and could lead to their application in compact, light and efficient microwave amplifiers for satellite telecommunication. To demonstrate this new photocathode concept, we have fabricated the first carbon nanotube based photocathode using silicon p-i-n photodiodes and MWCNT bunches. Using a green laser, this photocathode delivers 0.5 mA with an internal quantum efficiency of 10% and an I_ON/I_OFF ratio of 30.

  18. Based on Channel Characteristics

    Directory of Open Access Journals (Sweden)

    Zhuo Hao

    2013-01-01

    Full Text Available A number of key agreement schemes based on wireless channel characteristics have been proposed recently. However, previous key agreement schemes require that two nodes which need to agree on a key are within the communication range of each other. Hence, they are not suitable for multihop wireless networks, in which nodes do not always have direct connections with each other. In this paper, we first propose a basic multihop key agreement scheme for wireless ad hoc networks. The proposed basic scheme is resistant to external eavesdroppers. Nevertheless, this basic scheme is not secure when there exist internal eavesdroppers or Man-in-the-Middle (MITM) adversaries. In order to cope with these adversaries, we propose an improved multihop key agreement scheme. We show that the improved scheme is secure against internal eavesdroppers and MITM adversaries in a single path. Both performance analysis and simulation results demonstrate that the improved scheme is efficient. Consequently, the improved key agreement scheme is suitable for multihop wireless ad hoc networks.

  19. Biosensors based on cantilevers.

    Science.gov (United States)

    Alvarez, Mar; Carrascosa, Laura G; Zinoviev, Kiril; Plaza, Jose A; Lechuga, Laura M

    2009-01-01

    Microcantilever-based biosensors are a new label-free technique that allows the direct detection of biomolecular interactions with great accuracy by translating the biointeraction into a nanomechanical motion. Low cost and reliable standard silicon technologies are widely used for the fabrication of cantilevers with well-controlled mechanical properties. Over the last years, the number of applications of these sensors has shown fast growth in diverse fields, such as genomics or proteomics, because of the biosensor flexibility, the low sample consumption, and the non-pretreated samples required. In this chapter, we report a dedicated design and a fabrication process for highly sensitive microcantilever silicon sensors. We will also describe an application of the device in the environmental field, showing the immunodetection of an organic toxic pesticide as an example. The cantilever biofunctionalization process and the subsequent pesticide determination are detected in real time by monitoring the nanometer-scale bending of the microcantilever due to a differential surface stress generated between both surfaces of the device.

  20. Flow-Based Provenance

    Directory of Open Access Journals (Sweden)

    Sabah Al-Fedaghi

    2017-02-01

    Full Text Available Aim/Purpose: With information almost effortlessly created and spontaneously available, current progress in Information and Communication Technology (ICT) has led to the complication that information must be scrutinized for trustworthiness and provenance. Information systems must become provenance-aware to be satisfactory in accountability, reproducibility, and trustworthiness of data. Background: Multiple models for abstract representation of provenance have been proposed to describe entities, people, and activities involved in producing a piece of data, including the Open Provenance Model (OPM) and that of the World Wide Web Consortium. These models lack certain concepts necessary for specifying workflows and encoding the provenance of data products used and generated. Methodology: Without loss of generality, the focus of this paper is on the OPM depiction of provenance in terms of a directed graph. We have redrawn several case studies in the framework of our proposed model in order to compare and evaluate it against OPM for representing these cases. Contribution: This paper offers an alternative flow-based diagrammatic language that can form a foundation for modeling of provenance. The model described here provides an (abstract) machine-like representation of provenance. Findings: The results suggest a viable alternative in the area of diagrammatic representation for provenance applications. Future Research: Future work will seek to achieve more accurate comparisons with current models in the field.

  1. Communication Base Station Log Analysis Based on Hierarchical Clustering

    Directory of Open Access Journals (Sweden)

    Zhang Shao-Hua

    2017-01-01

    Full Text Available Communication base stations generate massive data every day, and these base station logs have important value in the mining of business circles. This paper uses data mining technology and a hierarchical clustering algorithm to group base stations by business-circle scope from their recorded data. Through analyzing the data of different business circles based on feature extraction and comparing the characteristics of different business-circle categories, a suitable area can be chosen for operators' commercial marketing.
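
    A minimal version of the grouping step, assuming per-station features have already been extracted from the logs (the features shown here are invented for illustration), can be written with SciPy's agglomerative clustering routines:

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        # Hypothetical per-station features from daily logs:
        # [mean daytime traffic, mean evening traffic, weekend/weekday ratio]
        stations = np.array([
            [950, 200, 0.3],   # office district
            [900, 250, 0.4],
            [300, 880, 1.5],   # residential
            [280, 900, 1.6],
            [600, 650, 2.4],   # shopping / leisure
        ])

        # Agglomerative (Ward) clustering, then cut the dendrogram into 3 groups.
        Z = linkage(stations, method="ward")
        labels = fcluster(Z, t=3, criterion="maxclust")
        print(labels)   # stations sharing a label share a business-circle profile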

  2. Mindfulness-Based Stress Reduction

    Science.gov (United States)


  3. Risk-based configuration control

    International Nuclear Information System (INIS)

    Szikszai, T.

    1997-01-01

    The presentation discusses the following issues: configuration control; risk-based configuration control (during power operation and shutdown modes); PSA requirements; use of a risk-based configuration control system; and configuration management (basic elements, benefits, information requirements).

  4. OLBS: Offline location based services

    OpenAIRE

    Coelho, P; Ana Aguiar; João Correia Lopes

    2011-01-01

    Most existing location-based services rely on ubiquitous connectivity to deliver location-based contents to the users. However, connectivity is not available anywhere at any time, even in urban centres. Underground locations, indoors, remote areas, and foreign countries are example situations where users commonly do not have guaranteed connectivity but could profit from location-based contents. In this work, we propose an open platform for publishing, distributing and maintaining location-based contents...

  5. "Education-based Research"

    DEFF Research Database (Denmark)

    Degn Johansson, Troels

    This paper lays out a concept of education-based research (the production of research knowledge within the framework of tertiary design education) as an integration of problem-based learning and research-based education. This leads to a critique of reflective practice as the primary way to facilitate learning at this level, a discussion of the nature of design problems in the instrumentalist tradition, and some suggestions as to how design studies curricula may facilitate education-based research.

  6. Health Physics Positions Data Base

    International Nuclear Information System (INIS)

    Kerr, G.D.; Borges, T.; Stafford, R.S.; Lu, P.Y.; Carter, D.

    1992-05-01

    The Health Physics Positions (HPPOS) Data Base of the Nuclear Regulatory Commission (NRC) is a collection of summaries of NRC staff positions on a wide range of topics in radiation protection (health physics). The bases for the data base are 247 original documents in the form of letters, memoranda, and excerpts from technical reports. The HPPOS Data Base was developed by the NRC Headquarters and Regional Offices to help ensure uniformity in inspections, enforcement, and licensing actions

  7. Memory-Based Shallow Parsing

    OpenAIRE

    Sang, Erik F. Tjong Kim

    2002-01-01

    We present memory-based learning approaches to shallow parsing and apply these to five tasks: base noun phrase identification, arbitrary base phrase recognition, clause detection, noun phrase parsing and full parsing. We use feature selection techniques and system combination methods for improving the performance of the memory-based learner. Our approach is evaluated on standard data sets and the results are compared with that of other systems. This reveals that our approach works well for ba...
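
    Since memory-based learning is in essence k-nearest-neighbour classification over stored training instances, the flavour of the approach can be sketched as follows (toy features and tags, invented for illustration; not the authors' actual feature set or learner):

        from sklearn.feature_extraction import DictVectorizer
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline

        # Tiny base-NP chunking set: features are the word's POS tag and its
        # neighbours' POS tags; labels are IOB chunk tags.
        train = [({"pos": "DT", "prev": "BOS", "next": "NN"}, "B-NP"),
                 ({"pos": "NN", "prev": "DT", "next": "VBZ"}, "I-NP"),
                 ({"pos": "VBZ", "prev": "NN", "next": "DT"}, "O"),
                 ({"pos": "DT", "prev": "VBZ", "next": "NN"}, "B-NP"),
                 ({"pos": "NN", "prev": "DT", "next": "EOS"}, "I-NP")]

        X, y = zip(*train)
        model = make_pipeline(DictVectorizer(), KNeighborsClassifier(n_neighbors=1))
        model.fit(X, y)   # "training" just stores the instances

        print(model.predict([{"pos": "NN", "prev": "DT", "next": "VBZ"}]))  # ['I-NP']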

  8. Lunar Base Heat Pump

    Science.gov (United States)

    Walker, D.; Fischbach, D.; Tetreault, R.

    1996-01-01

    The objective of this project was to investigate the feasibility of constructing a heat pump suitable for use as a heat rejection device in applications such as a lunar base. In this situation, direct heat rejection through the use of radiators is not possible at a temperature suitable for life support systems. Initial analysis of a heat pump of this type called for a temperature lift of approximately 378 deg. K, which is considerably higher than is commonly called for in HVAC and refrigeration applications where heat pumps are most often employed. Also, because of the variation of the rejection temperature (from 100 to 381 deg. K), extreme flexibility in the configuration and operation of the heat pump is required. A three-stage compression cycle using a refrigerant such as CFC-11 or HCFC-123 was formulated, with operation possible with one, two or three stages of compression. Also, to meet the redundancy requirements, compression was divided up over multiple compressors in each stage. A control scheme was devised that allowed these multiple compressors to be operated as required so that the heat pump could perform with variable heat loads and rejection conditions. A prototype heat pump was designed and constructed to investigate the key elements of the high-lift heat pump concept. Control software was written and implemented in the prototype to allow fully automatic operation. The heat pump was capable of operation over a wide range of rejection temperatures and cooling loads, while maintaining cooling water temperature well within the required specification of 40 deg. C +/- 1.7 deg. C. This performance was verified through testing.

  9. Irreducible normalizer operators and thresholds for degenerate quantum codes with sublinear distances

    Science.gov (United States)

    Pryadko, Leonid P.; Dumer, Ilya; Kovalev, Alexey A.

    2015-03-01

    We construct a lower (existence) bound for the threshold of scalable quantum computation which is applicable to all stabilizer codes, including degenerate quantum codes with sublinear distance scaling. The threshold is based on enumerating irreducible operators in the normalizer of the code, i.e., those that cannot be decomposed into a product of two such operators with non-overlapping support. For quantum LDPC codes with logarithmic or power-law distances, we get threshold values which are parametrically better than the existing analytical bound based on percolation. The new bound also gives a finite threshold when applied to other families of degenerate quantum codes, e.g., the concatenated codes. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-11-1-0027.

  10. Performance-Based Funding Brief

    Science.gov (United States)

    Washington Higher Education Coordinating Board, 2011

    2011-01-01

    A number of states have made progress in implementing performance-based funding (PFB) and accountability. This policy brief summarizes main features of performance-based funding systems in three states: Tennessee, Ohio, and Indiana. The brief also identifies key issues that states considering performance-based funding must address, as well as…

  11. Benefit-based tree valuation

    Science.gov (United States)

    E.G. McPherson

    2007-01-01

    Benefit-based tree valuation provides alternative estimates of the fair and reasonable value of trees while illustrating the relative contribution of different benefit types. This study compared estimates of tree value obtained using cost- and benefit-based approaches. The cost-based approach used the Council of Landscape and Tree Appraisers trunk formula method, and...

  12. Different perspectives on economic base.

    Science.gov (United States)

    Lisa K. Crone; Richard W. Haynes; Nicholas E. Reyna

    1999-01-01

    Two general approaches for measuring the economic base are discussed. Each method is used to define the economic base for each of the counties included in the Interior Columbia Basin Ecosystem Management Project area. A more detailed look at four selected counties results in similar findings from different approaches. Limitations of economic base analysis also are...

  13. Memory-Based Shallow Parsing

    NARCIS (Netherlands)

    Tjong Kim Sang, E.F.

    2002-01-01

    We present memory-based learning approaches to shallow parsing and apply these to five tasks: base noun phrase identification, arbitrary base phrase recognition, clause detection, noun phrase parsing and full parsing. We use feature selection techniques and system combination methods for improving

  14. Condition based spare parts supply

    NARCIS (Netherlands)

    Lin, X.; Basten, Robertus Johannes Ida; Kranenburg, A.A.; van Houtum, Geert-Jan

    2012-01-01

    We consider a spare parts stock point that serves an installed base of machines. Each machine contains the same critical component, whose degradation behavior is described by a Markov process. We consider condition based spare parts supply, and show that an optimal, condition based inventory policy
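
    As a toy version of this setting (hypothetical numbers; the paper derives optimal policies rather than simulating), one can model the component as a small Markov chain and trigger a spare-part order when the observed degradation state reaches a threshold:

        import random

        # Hypothetical 4-state degradation chain: 0 = new, ..., 3 = failed.
        P = [[0.90, 0.08, 0.02, 0.00],
             [0.00, 0.85, 0.12, 0.03],
             [0.00, 0.00, 0.80, 0.20],
             [0.00, 0.00, 0.00, 1.00]]

        def run(order_at, lead_time=2, periods=2000):
            """Order one spare when the condition reaches `order_at`;
            count failures that occur before the spare has arrived."""
            state, arrival, stockouts = 0, None, 0
            for t in range(periods):
                state = random.choices(range(4), weights=P[state])[0]
                if arrival is None and order_at <= state < 3:
                    arrival = t + lead_time      # condition-triggered order
                if state == 3:                   # component failed
                    if arrival is None or t < arrival:
                        stockouts += 1           # spare not yet on hand
                    state, arrival = 0, None     # replace and start over
            return stockouts

        random.seed(0)
        print({thr: run(order_at=thr) for thr in (1, 2)})
        # ordering at an earlier degradation state should yield fewer stockouts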

  15. Evidence-based dentistry.

    Science.gov (United States)

    Chambers, David W

    2010-01-01

    Both panegyric and criticism of evidence-based dentistry tend to be clumsy because the concept is poorly defined. This analysis identifies several contributions to the profession that have been made under the EBD banner. Although the concept of clinicians integrating clinical epidemiology, the wisdom of their practices, and patients' values is powerful, its implementation has been distorted by a too heavy emphasis of computerized searches for research findings that meet the standards of academics. Although EBD advocates enjoy sharing anecdotal accounts of mistakes others have made, faulting others is not proof that one's own position is correct. There is no systematic, high-quality evidence that EBD is effective. The metaphor of a three-legged stool (evidence, experience, values, and integration) is used as an organizing principle. "Best evidence" has become a preoccupation among EBD enthusiasts. That overlong but thinly developed leg of the stool is critiqued from the perspectives of the criteria for evidence, the difference between internal and external validity, the relationship between evidence and decision making, the ambiguous meaning of "best," and the role of reasonable doubt. The strongest leg of the stool is clinical experience. Although bias exists in all observations (including searches for evidence), there are simple procedures that can be employed in practice to increase useful and objective evidence there, and there are dangers in delegating policy regarding allowable treatments to external groups. Patient and practitioner values are the shortest leg of the stool. As they are so little recognized, their integration in EBD is problematic and ethical tensions exist where paternalism privileges science over patient's self-determined best interests. Four potential approaches to integration are suggested, recognizing that there is virtually no literature on how the "seat" of the three-legged stool works or should work. It is likely that most dentists

  16. Barrier Data Base user's guide

    International Nuclear Information System (INIS)

    Worrell, R.B.; Gould, D.J.; Wall, D.W.

    1977-06-01

    A special purpose data base for physical security barriers has been developed. In addition to barriers, the entities accommodated by the Barrier Data Base (BDB) include threats and references. A threat is established as a configuration of people and equipment which has been employed to penetrate (or attempt to penetrate) a barrier. References are used to cite publications pertinent to the barriers and threats in the data base. Utilization and maintenance of the Barrier Data Base are achieved with LIST, QUERY, ENTER, DELETE, and CHANGE commands, which are used to manipulate the data base entities.

  17. From oil-based mud to water-based mud

    International Nuclear Information System (INIS)

    Christiansen, C.

    1991-01-01

    Maersk Olie og Gas AS has used low toxic oil-based muds extensively since 1982 for drilling development wells and later in the development of horizontal well drilling techniques. However, in view of the strong drive towards a reduction in the amount of oil discharged to the North Sea from the oil industry, Maersk Olie og Gas AS initiated trials with new or improved types of water-based mud, first in deviated wells (1989) and then in horizontal wells (1990). The paper reviews the experience of Maersk Olie og Gas AS with oil-based mud since the drilling of the first horizontal well in 1987, specifically with respect to cuttings washing equipment, oil retention on cuttings, and the procedure for monitoring of this parameter. It describes the circumstances leading to the decision to revert to water-based mud systems. Finally, it reviews the experience gained so far with the new improved types of water-based mud systems, mainly glycol and KCl/polymer mud systems. Comparison of operational data, such as rate of penetration, torque and drag, etc., is made between wells drilled with oil-based mud and water-based mud. The trials with the new improved types of water-based mud systems have been positive, i.e. horizontal wells can be drilled successfully with water-based mud. As a result, Maersk Olie og Gas AS has decided to discontinue the use of low toxic oil-based muds in the Danish sector of the North Sea.

  18. Workplace Based Assessment in Psychiatry

    Directory of Open Access Journals (Sweden)

    Ayse Devrim Basterzi

    2009-11-01

    Full Text Available Workplace based assessment refers to the assessment of working practices based on what doctors actually do in the workplace, and is predominantly carried out in the workplace itself. Assessment drives learning, and it is therefore essential that workplace based assessment focuses on important attributes rather than on what is easiest to assess. Workplace based assessment is usually competency based. Workplace based assessments may well facilitate and enhance various aspects of educational supervision, including its structure, frequency and duration, etc. The structure and content of workplace based assessments should be monitored to ensure that their benefits are maximised by remaining tailored to individual trainees' needs. Workplace based assessment should be used for formative and summative assessments. Several formative assessment methods have been developed for use in the workplace, such as the mini clinical evaluation exercise (mini-CEX), evidence based journal club assessment and case based discussion, multi source feedback, etc. This review discusses the need for workplace based assessments in psychiatry graduate education and introduces some of the workplace based assessment methods.

  19. Case-based reasoning: The marriage of knowledge base and data base

    Science.gov (United States)

    Pulaski, Kirt; Casadaban, Cyprian

    1988-01-01

    The coupling of data and knowledge has a synergistic effect when building an intelligent data base. The goal is to integrate the data and knowledge almost to the point of indistinguishability, permitting them to be used interchangeably. Examples given in this paper suggest that Case-Based Reasoning is a more integrated way to link data and knowledge than pure rule-based reasoning.
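
    The link the authors describe is easy to caricature in code: the case base is the data, and retrieval by similarity is the reasoning. A minimal sketch follows, with hypothetical cases and a deliberately naive similarity measure.

        def retrieve(case_base, query, similarity):
            """Retrieve the stored case most similar to the query problem."""
            return max(case_base, key=lambda case: similarity(case["problem"], query))

        def similarity(a, b):
            """Count the problem features on which two descriptions agree."""
            shared = set(a) & set(b)
            return sum(a[f] == b[f] for f in shared)

        # The case base couples data (problem features) with knowledge
        # (the solution that worked for that problem).
        case_base = [
            {"problem": {"symptom": "no-power", "stage": "launch"}, "solution": "check umbilical"},
            {"problem": {"symptom": "no-power", "stage": "orbit"},  "solution": "cycle battery bus"},
            {"problem": {"symptom": "overheat", "stage": "orbit"},  "solution": "shed payload load"},
        ]

        query = {"symptom": "no-power", "stage": "orbit"}
        print(retrieve(case_base, query, similarity)["solution"])  # 'cycle battery bus'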

  20. Knowledge base verification based on enhanced colored petri net

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Hyun; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-12-31

    Verification is a process aimed at demonstrating whether a system meets its specified requirements. As expert systems are used in various applications, the verification of their knowledge bases takes an important position. The conventional Petri net approach that has been studied recently in order to verify the knowledge base is found to be inadequate for verifying the knowledge base of a large and complex system, such as the alarm processing system of a nuclear power plant. Thus, we propose an improved method that models the knowledge base as an enhanced colored Petri net. In this study, we analyze the reachability and the error characteristics of the knowledge base and apply the method to verification of a simple knowledge base. 8 refs., 4 figs. (Author)

  1. Knowledge base verification based on enhanced colored petri net

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Hyun; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1998-12-31

    Verification is a process aimed at demonstrating whether a system meets its specified requirements. As expert systems are used in various applications, verification of their knowledge bases takes on an important role. The conventional Petri net approach, which has recently been studied for knowledge base verification, is found to be inadequate for the knowledge bases of large and complex systems, such as the alarm processing system of a nuclear power plant. Thus, we propose an improved method that models the knowledge base as an enhanced colored Petri net. In this study, we analyze the reachability and the error characteristics of the knowledge base and apply the method to the verification of a simple knowledge base. 8 refs., 4 figs. (Author)

  2. XML-Based SHINE Knowledge Base Interchange Language

    Science.gov (United States)

    James, Mark; Mackey, Ryan; Tikidjian, Raffi

    2008-01-01

    The SHINE Knowledge Base Interchange Language software has been designed to more efficiently send new knowledge bases to spacecraft that have been embedded with the Spacecraft Health Inference Engine (SHINE) tool. The intention of the behavioral model is to capture most of the information generally associated with a spacecraft functional model, while specifically addressing the needs of execution within SHINE and Livingstone. As such, it has some constructs that are based on one or the other.

  3. Managing the Gap between Curriculum Based and Problem Based Learning

    DEFF Research Database (Denmark)

    Bygholm, Ann; Buus, Lillian

    2009-01-01

    Traditionally there has been a clear distinction between curriculum-based and problem-based approaches to accomplish learning. Preferred approaches depend of course on conviction, culture, traditions and also on the specific learning situation. We will argue that it is not a question of either/or but rather both/and. In this paper we describe an approach to design and delivery of online courses in computer science which on the one hand is based on a specified curriculum and on the other hand gives room for different learning strategies, problem-based learning being one of them. We discuss...

  4. Acids and bases: solvent effects on acid-base strength

    CERN Document Server

    Cox, Brian G

    2013-01-01

    Acids and bases are ubiquitous in chemistry. Our understanding of them, however, is dominated by their behaviour in water. Transfer to non-aqueous solvents leads to profound changes in acid-base strengths and to the rates and equilibria of many processes: for example, synthetic reactions involving acids, bases and nucleophiles; isolation of pharmaceutical actives through salt formation; formation of zwitterions in amino acids; and chromatographic separation of substrates. This book seeks to enhance our understanding of acids and bases by reviewing and analysing their behaviour in non-aqueous solvents. The behaviour is related where possible to that in water, but correlations and contrasts between solvents are also presented.

  5. Computer-based and web-based radiation safety training

    Energy Technology Data Exchange (ETDEWEB)

    Owen, C., LLNL

    1998-03-01

    The traditional approach to delivering radiation safety training has been to provide a stand-up lecture on the topic, with the possible aid of video, and to repeat the same material periodically. New approaches to meeting training requirements are needed to address the advent of flexible work hours and telecommuting, and to better accommodate individuals learning at their own pace. Computer-based and web-based radiation safety training can provide this alternative. Computer-based and web-based training is an interactive form of learning that the student controls, resulting in enhanced and focused learning at a time most often chosen by the student.

  6. Concurrent array-based queue

    Science.gov (United States)

    Heidelberger, Philip; Steinmacher-Burow, Burkhard

    2015-01-06

    According to one embodiment, a method for implementing an array-based queue in memory of a memory system that includes a controller includes configuring, in the memory, metadata of the array-based queue. The configuring comprises defining, in metadata, an array start location in the memory for the array-based queue, defining, in the metadata, an array size for the array-based queue, defining, in the metadata, a queue top for the array-based queue and defining, in the metadata, a queue bottom for the array-based queue. The method also includes the controller serving a request for an operation on the queue, the request providing the location in the memory of the metadata of the queue.
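
    The metadata layout described in the abstract maps directly onto code. Below is a minimal single-threaded Python sketch with illustrative field names (array_start, array_size, queue_top and queue_bottom are assumptions, not the patent's identifiers); in the actual invention a memory-system controller serves these operations, which is what makes the queue safe under concurrency.

      # Minimal sketch of an array-based queue whose entire state lives in
      # explicit metadata, per the description above (names illustrative).
      class ArrayQueue:
          def __init__(self, start, size):
              self.meta = {"array_start": start, "array_size": size,
                           "queue_top": 0, "queue_bottom": 0}
              self.mem = [None] * size  # backing array in "memory"

          def enqueue(self, item):
              m = self.meta
              if m["queue_top"] - m["queue_bottom"] == m["array_size"]:
                  raise OverflowError("queue full")
              self.mem[m["queue_top"] % m["array_size"]] = item
              m["queue_top"] += 1  # the controller would update this atomically

          def dequeue(self):
              m = self.meta
              if m["queue_top"] == m["queue_bottom"]:
                  raise IndexError("queue empty")
              item = self.mem[m["queue_bottom"] % m["array_size"]]
              m["queue_bottom"] += 1
              return item

      q = ArrayQueue(start=0x1000, size=4)
      q.enqueue("req-1"); q.enqueue("req-2")
      assert q.dequeue() == "req-1"

    Routing every enqueue and dequeue through one controller, rather than letting clients touch the metadata directly, is what turns this simple structure into a concurrent queue.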

  7. The knowledge base of journalism

    DEFF Research Database (Denmark)

    Svith, Flemming

    In this paper I propose the knowledge base as a fruitful way to apprehend journalism. With the claim that the majority of practice is anchored in knowledge – understood as 9 categories of rationales, forms and levels – this knowledge base appears as a contextual look at journalists' knowledge..., and place. As an analytical framework, the knowledge base is limited to understanding the practice of newspaper journalists, but, conversely, it encompasses more general beginnings through the inclusion of overall structural relationships in the media and journalism and general theories... on practice and knowledge. As the result of an abductive reasoning is a theory proposal, there is a need for more deductive approaches to test the validity of this knowledge base claim. It is thus relevant to investigate which rationales are included in the knowledge base of journalism, as the dimension does...

  8. Aperiodic-metamaterial-based absorber

    Directory of Open Access Journals (Sweden)

    Quanlong Yang

    2017-09-01

    Full Text Available The periodic-metamaterial-based perfect absorber has been studied broadly. Conversely, if the unit cells in a metamaterial-based absorber are arranged aperiodically (an aperiodic-metamaterial-based absorber), how does it perform? Inspired by this question, here we present a systematic study of the aperiodic-metamaterial-based absorber. By investigating the response of metamaterial absorbers based on periodic, Fibonacci, Thue-Morse, and quasicrystal lattices, we found that aperiodic-metamaterial-based absorbers can, on the one hand, display absorption behaviors similar to the periodic one. On the other hand, their absorption behaviors show different tendencies depending on the thickness of the spacer. Further studies on the angle and polarization dependence of the absorption behavior are also presented.

  9. Estimating North Dakota's Economic Base

    OpenAIRE

    Coon, Randal C.; Leistritz, F. Larry

    2009-01-01

    North Dakota’s economic base is comprised of those activities producing a product paid for by nonresidents, or products exported from the state. North Dakota’s economic base activities include agriculture, mining, manufacturing, tourism, and federal government payments for construction and to individuals. Development of the North Dakota economic base data is important because it provides the information to quantify the state’s economic growth, and it creates the final demand sectors for the N...

  10. Principles of models based engineering

    Energy Technology Data Exchange (ETDEWEB)

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  11. Logically automorphically equivalent knowledge bases

    OpenAIRE

    Aladova, Elena; Plotkin, Tatjana

    2017-01-01

    Knowledge base theory provides an important example of a field where applications of universal algebra and algebraic logic look very natural, and where their interaction with practical problems arising in computer science can be very productive. In this paper we study the equivalence problem for knowledge bases. Our interest is to find out how informational equivalence is related to the logical description of knowledge. Studying various equivalences of knowledge bases allows us to compare d...

  12. Brain-Based Learning and Standards-Based Elementary Science.

    Science.gov (United States)

    Konecki, Loretta R.; Schiller, Ellen

    This paper explains how brain-based learning has become an area of interest to elementary school science teachers, focusing on the possible relationships between, and implications of, research on brain-based learning to the teaching of science education standards. After describing research on the brain, the paper looks at three implications from…

  13. Fatigue data bases in Europe

    International Nuclear Information System (INIS)

    Olivier, R.; Koettgen, V.B.; Seeger, T.; Boller, C.

    1988-01-01

    Based on an inquiry among well-known European fatigue institutions, the paper describes existing, more or less extensive, collections and data bases of S-N and crack growth data, mainly on unwelded and welded steel and aluminium, in the following organizations and countries: CEC, ESA, Federal Republic of Germany, GDR, Italy, Norway, Republic of Ireland, Switzerland, UK. This documentation of the present state is completed by a short survey of available European fatigue standards and design rules, serving as a data base for nominal materials data. Requirements for data base concepts, contents, user interfaces and data structures are presented in a short overview. (orig./HP)

  14. Capability-based computer systems

    CERN Document Server

    Levy, Henry M

    2014-01-01

    Capability-Based Computer Systems focuses on computer programs and their capabilities. The text first elaborates capability- and object-based system concepts, including capability-based systems, object-based approach, and summary. The book then describes early descriptor architectures and explains the Burroughs B5000, Rice University Computer, and Basic Language Machine. The text also focuses on early capability architectures. Dennis and Van Horn's Supervisor; CAL-TSS System; MIT PDP-1 Timesharing System; and Chicago Magic Number Machine are discussed. The book then describes Plessey System 25

  15. New Mexico Geothermal Data Base

    International Nuclear Information System (INIS)

    Witcher, J.C.; Whittier, J.; Morgan, R.

    1990-01-01

    This paper reports on the New Mexico Geothermal Data Base (NMGDB), a comprehensive public-domain data base of low-temperature geothermal resource information for New Mexico that is designed to assist researchers and developers. A broad range of geoscience, engineering, climatic, economic, and land status information is compiled in the dBASE III PLUS data base management system for use on an IBM or IBM-compatible personal computer. A user-friendly menu format with on-screen prompts allows easy and convenient use.

  16. Isochronous cyclotron data base description

    International Nuclear Information System (INIS)

    Kiyan, I.N.; Vorozhtsov, S.B.; Tarashkevich, R.

    2004-01-01

    The relational data base of the control parameters of the isochronous cyclotron, the Isochronous Cyclotron Data Base (ICDB), is described. The data base, written in Transact-SQL for MS SQL Server 2000 with the use of MS Enterprise Manager and MS Query Analyzer, was installed on the server of the AIC144 isochronous cyclotron in Krakow, which runs under the operating system MS Windows Server 2003 (Standard Edition). The interface to the data base is written in C++ with the use of Visual C++ .NET and is built into the Cyclotron Operator Help Program (COHP), which is used for modeling the operational modes of the isochronous cyclotron. Communication between the COHP and the relational data base is realised via the Open Data Base Connectivity protocol. The relational data base of the control parameters of the isochronous cyclotron is intended: firstly, for the systematization and automatic use of all measured and modelled magnetic field maps in the process of modeling the operational modes; secondly, for systematization of and convenient access to the stored operational modes; and thirdly, for simplifying the operator's work. The relational data base of the control parameters of the isochronous cyclotron reflects its physical structure and the logic of its operator's work. (author)
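
    As a hedged sketch of the access path described above (an ODBC connection between a client program and the ICDB), the Python fragment below queries a control-parameter table. The driver string, server, database, table and column names are all hypothetical; the record does not give the actual ICDB schema, and the real COHP client is written in C++.

      import pyodbc

      # Hypothetical query against the ICDB over ODBC (schema assumed).
      conn = pyodbc.connect(
          "DRIVER={ODBC Driver 17 for SQL Server};"
          "SERVER=aic144-server;DATABASE=ICDB;Trusted_Connection=yes;")
      cur = conn.cursor()
      cur.execute("SELECT mode_id, coil_current FROM operational_modes "
                  "WHERE beam_energy_mev = ?", 60)
      for mode_id, coil_current in cur.fetchall():
          print(mode_id, coil_current)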

  17. On locality of Generalized Reed-Muller codes over the broadcast erasure channel

    KAUST Repository

    Alloum, Amira

    2016-07-28

    One-to-many communications are expected to be among the killer applications for the currently discussed 5G standard. The usage of coding mechanisms impacts broadcasting standard quality, as coding is involved at several levels of the stack, and more specifically at the application layer, where Rateless, LDPC, Reed-Solomon codes and network coding schemes have been extensively studied, optimized and standardized in the past. Beyond reusing, extending or adapting existing application-layer packet coding mechanisms based on previous schemes and designed for the foregoing LTE or other broadcasting standards, our purpose is to investigate the use of Generalized Reed-Muller codes and the value of their locality property in their progressive decoding for broadcast/multicast communication schemes with real-time video delivery. Our results are meant to bring insight into the use of locally decodable codes in broadcasting. © 2016 IEEE.
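
    The abstract points to the locality of (Generalized) Reed-Muller codes without spelling it out. As a toy illustration only: for first-order binary RM(1, m), codewords are evaluations of affine maps x -> a.x + b over GF(2)^m, so an erased coordinate c(x) can be recovered locally as c(y) XOR c(x XOR y) XOR c(0) for any y outside {0, x}. The paper works with Generalized Reed-Muller codes over GF(q); the binary sketch below only demonstrates the locality idea.

      import itertools

      # RM(1, m) toy: codeword = evaluations of x -> a.x + b over GF(2)^m.
      m = 4
      points = list(itertools.product([0, 1], repeat=m))
      a, b = (1, 0, 1, 1), 1  # arbitrary message (affine map)
      code = {x: (sum(ai * xi for ai, xi in zip(a, x)) + b) % 2
              for x in points}

      x = (1, 1, 0, 0)                 # pretend c(x) was erased
      y = (0, 1, 0, 0)                 # any helper point outside {0, x}
      xy = tuple(xi ^ yi for xi, yi in zip(x, y))
      recovered = code[y] ^ code[xy] ^ code[(0,) * m]
      assert recovered == code[x]      # local repair from just 3 symbols

    This few-symbol repair is the kind of progressive, partial decoding that makes locally decodable codes attractive over a broadcast erasure channel.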

  18. NASA Tech Briefs, April 2009

    Science.gov (United States)

    2009-01-01

    Topics covered include: Direct-Solve Image-Based Wavefront Sensing; Use of UV Sources for Detection and Identification of Explosives; Using Fluorescent Viruses for Detecting Bacteria in Water; Gradiometer Using Middle Loops as Sensing Elements in a Low-Field SQUID MRI System; Volcano Monitor: Autonomous Triggering of In-Situ Sensors; Wireless Fluid-Level Sensors for Harsh Environments; Interference-Detection Module in a Digital Radar Receiver; Modal Vibration Analysis of Large Castings; Structural/Radiation-Shielding Epoxies; Integrated Multilayer Insulation; Apparatus for Screening Multiple Oxygen-Reduction Catalysts; Determining Aliasing in Isolated Signal Conditioning Modules; Composite Bipolar Plate for Unitized Fuel Cell/Electrolyzer Systems; Spectrum Analyzers Incorporating Tunable WGM Resonators; Quantum-Well Thermophotovoltaic Cells; Bounded-Angle Iterative Decoding of LDPC Codes; Conversion from Tree to Graph Representation of Requirements; Parallel Hybrid Vehicle Optimal Storage System; and Anaerobic Digestion in a Flooded Densified Leachbed.

  19. Implementation of continuous-variable quantum key distribution with discrete modulation

    Science.gov (United States)

    Hirano, Takuya; Ichikawa, Tsubasa; Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Namiki, Ryo; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro

    2017-06-01

    We have developed a continuous-variable quantum key distribution (CV-QKD) system that employs discrete quadrature-amplitude modulation and homodyne detection of coherent states of light. We experimentally demonstrated automated secure key generation at a rate of 50 kbps when the quantum channel is a 10 km optical fibre. The CV-QKD system utilises a four-state, post-selection protocol and generates a secure key against the entangling cloner attack. We used a pulsed light source of 1550 nm wavelength with a repetition rate of 10 MHz. A commercially available balanced receiver is used to realise shot-noise-limited pulsed homodyne detection. We used a non-binary LDPC code for error correction (reverse reconciliation) and Toeplitz matrix multiplication for privacy amplification. A graphics processing unit (GPU) card is used to accelerate the software-based post-processing.
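
    Of the post-processing steps named above, privacy amplification by Toeplitz matrix multiplication is easy to sketch. The fragment below compresses an n-bit reconciled key into m secret bits using a Toeplitz matrix defined by n + m - 1 random seed bits; all sizes are illustrative, and the paper offloads this multiplication to a GPU, whereas numpy stands in here.

      import numpy as np

      # Toeplitz-hashing privacy amplification (toy sizes). T[i][j] depends
      # only on i - j, so n + m - 1 seed bits define the whole m x n matrix.
      def toeplitz_hash(key_bits, seed_bits, m):
          n = len(key_bits)
          assert len(seed_bits) == n + m - 1
          T = np.array([[seed_bits[i - j + n - 1] for j in range(n)]
                        for i in range(m)])
          return T.dot(np.asarray(key_bits)) % 2  # GF(2) matrix-vector product

      rng = np.random.default_rng(0)
      key = rng.integers(0, 2, 1024)             # reconciled key bits
      seed = rng.integers(0, 2, 1024 + 256 - 1)  # public random seed
      secret = toeplitz_hash(key, seed, 256)     # 256 secret bits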

  20. Design and Implementation of Secure and Reliable Communication using Optical Wireless Communication

    Science.gov (United States)

    Saadi, Muhammad; Bajpai, Ambar; Zhao, Yan; Sangwongngam, Paramin; Wuttisittikulkij, Lunchakorn

    2014-11-01

    Wireless networking increases flexibility in the home and office environment by connecting to the internet without wires, but at the cost of risks associated with data theft or the threat of loading malicious code with the intention of harming the network. In this paper, we propose a novel method of establishing a secure and reliable communication link using optical wireless communication (OWC). For security, spatial-diversity-based transmission using two optical transmitters is used, and reliability in the link is achieved by a newly proposed method for the construction of a structured parity-check matrix for binary Low-Density Parity-Check (LDPC) codes. Experimental results show that a successful secure and reliable link between the transmitter and the receiver can be achieved using the proposed technique.
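
    The paper's specific construction is not given in the abstract. As a sketch of what "structured" typically means for binary LDPC codes, the fragment below expands a small base matrix of circulant shift values into a quasi-cyclic parity-check matrix H: entry -1 denotes an all-zero Z x Z block, and entry s >= 0 denotes the Z x Z identity cyclically shifted by s. The base matrix and lifting size are illustrative assumptions, not the paper's design.

      import numpy as np

      # Expand a base matrix of circulant shifts into a binary
      # quasi-cyclic LDPC parity-check matrix (base values illustrative).
      def expand(base, Z):
          I = np.eye(Z, dtype=np.uint8)
          rows = []
          for row in base:
              blocks = [np.zeros((Z, Z), dtype=np.uint8) if s < 0
                        else np.roll(I, s, axis=1) for s in row]
              rows.append(np.hstack(blocks))
          return np.vstack(rows)

      base = [[0, 3, -1, 1],
              [2, -1, 0, 0]]
      H = expand(base, Z=8)          # 16 x 32 parity-check matrix
      print(H.shape, H.sum(axis=1))  # row weights follow base-row degrees

    Because every block is a shifted identity, the encoder and decoder only need the small table of shifts, which is what makes such structured matrices hardware-friendly.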