Threshold Multi Split-Row algorithm for decoding irregular LDPC codes
Directory of Open Access Journals (Sweden)
Chakir Aqil
2017-12-01
Full Text Available In this work, we propose a new threshold multi split-row algorithm that improves on the multi split-row algorithm for decoding irregular LDPC codes. We give a complete description of our algorithm as well as its advantages for LDPC codes. Simulation results over an additive white Gaussian noise channel show an improvement in code error performance of between 0.4 dB and 0.6 dB compared to the multi split-row algorithm.
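The split-row family of algorithms partitions each check-node update so that the partitions exchange only minimal information. A minimal sketch of a Split-Row Threshold min-sum check-node update, where each partition shares only its sign product and a 1-bit "my minimum is below T" flag; the threshold T and the two-way split are illustrative assumptions, not the authors' exact design:

```python
import numpy as np

def split_row_threshold_check(llrs, T=0.2):
    # Sketch of a Split-Row Threshold min-sum check-node update.
    # The check node is split into two partitions that exchange only a
    # sign product and a 1-bit "my minimum is below T" flag, instead of
    # exact minimum magnitudes. T and the 2-way split are assumptions.
    half = len(llrs) // 2
    parts = [np.asarray(llrs[:half], float), np.asarray(llrs[half:], float)]
    sign_prod = [np.prod(np.sign(p)) for p in parts]
    thr_bit = [np.min(np.abs(p)) < T for p in parts]
    out = []
    for p in (0, 1):
        other = 1 - p
        for i in range(len(parts[p])):
            rest = np.delete(parts[p], i)       # other edges in this partition
            mag = np.min(np.abs(rest))
            if thr_bit[other]:                  # other side holds a small min
                mag = min(mag, T)
            sign = np.sign(np.prod(rest)) * sign_prod[other]
            out.append(sign * mag)
    return out
```

Compared to exact min-sum, this loses the cross-partition minimum but keeps the wiring between partitions down to two bits, which is the hardware motivation behind split-row decoders.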
Low Complexity Encoder of High Rate Irregular QC-LDPC Codes for Partial Response Channels
Directory of Open Access Journals (Sweden)
IMTAWIL, V.
2011-11-01
Full Text Available High rate irregular QC-LDPC codes based on circulant permutation matrices, for efficient encoder implementation, are proposed in this article. The structure of the code is an approximate lower triangular matrix. In addition, we present two novel efficient encoding techniques for generating redundant bits. The complexity of the encoder implementation depends on the number of parity bits of the code for the one-stage encoding, and on the length of the code for the two-stage encoding. The advantage of both encoding techniques is that few XOR gates are used in the encoder implementation. Simulation results on partial response channels also show that the BER performance of the proposed code has a gain over other QC-LDPC codes.
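The (approximate) lower triangular structure described above is what allows parity bits to be generated by simple XOR back-substitution. A minimal sketch of such one-stage encoding, assuming the simplified case where the parity part of H is exactly lower triangular with a unit diagonal:

```python
import numpy as np

def encode_lower_triangular(H, msg):
    # Sketch: systematic encoding when H = [H_msg | H_par] and H_par is
    # lower triangular with a unit diagonal, so each parity bit is the
    # XOR of already-known bits (back-substitution), all mod 2.
    m, n = H.shape
    k = n - m
    H_msg, H_par = H[:, :k], H[:, k:]
    parity = np.zeros(m, dtype=int)
    for i in range(m):
        # row i: H_msg[i].msg + sum_{j<i} H_par[i,j].p_j + p_i = 0 (mod 2)
        parity[i] = (H_msg[i] @ msg + H_par[i, :i] @ parity[:i]) % 2
    return np.concatenate([msg, parity])
```

Each parity bit costs only as many XORs as there are ones in its row, which is why sparse triangular structures give low-complexity encoders.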
Interleaved Product LDPC Codes
Baldi, Marco; Cancellieri, Giovanni; Chiaraluce, Franco
2011-01-01
Product LDPC codes take advantage of LDPC decoding algorithms and the high minimum distance of product codes. We propose to add suitable interleavers to improve the waterfall performance of LDPC decoding. Interleaving also reduces the number of low-weight codewords, which gives a further advantage in the error floor region.
Ensemble Weight Enumerators for Protograph LDPC Codes
Divsalar, Dariush
2006-01-01
Recently, LDPC codes with projected graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear-minimum-distance property is sensitive to the proportion of degree-2 variable nodes. The derived results on ensemble weight enumerators show that the linear-minimum-distance condition on the degree distribution of unstructured irregular LDPC codes is sufficient but not necessary for protograph LDPC codes.
Weight Distribution for Non-binary Cluster LDPC Code Ensemble
Nozaki, Takayuki; Maehara, Masaki; Kasai, Kenta; Sakaniwa, Kohichi
In this paper, we derive the average weight distributions for the irregular non-binary cluster low-density parity-check (LDPC) code ensembles. Moreover, we give the exponential growth rate of the average weight distribution in the limit of large code length. We show that there exist $(2,d_c)$-regular non-binary cluster LDPC code ensembles whose normalized typical minimum distances are strictly positive.
Discussion on LDPC Codes and Uplink Coding
Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio
2007-01-01
This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error-correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts showing the LDPC decoder's sensitivity to symbol scaling errors are reviewed, as well as a chart comparing the performance of several frame synchronizer algorithms with that of some good codes, and LDPC decoder tests at ESTL. Also reviewed are a study on Coding, Modulation, and Link Protocol (CMLP) and the recommended codes. A design for the pseudo-randomizer with LDPC decoder and CRC is also reviewed, and a chart summarizing the three proposed coding systems is presented.
Enhancement of Unequal Error Protection Properties of LDPC Codes
Directory of Open Access Journals (Sweden)
Poulliat Charly
2007-01-01
Full Text Available It has been widely recognized in the literature that irregular low-density parity-check (LDPC) codes exhibit naturally an unequal error protection (UEP) behavior. In this paper, we propose a general method to emphasize and control the UEP properties of LDPC codes. The method is based on a hierarchical optimization of the bit node irregularity profile for each sensitivity class within the codeword by maximizing the average bit node degree while guaranteeing a minimum degree as high as possible. We show that this optimization strategy is efficient, since the codes that we optimize show better UEP capabilities than the codes optimized for the additive white Gaussian noise channel.
Fast QC-LDPC code for free space optical communication
Wang, Jin; Zhang, Qi; Udeh, Chinonso Paschal; Wu, Rangzhong
2017-02-01
Free Space Optical (FSO) communication systems use the atmosphere as a propagation medium, so atmospheric turbulence leads to multiplicative noise related to the signal intensity. In order to suppress the signal fading induced by this multiplicative noise, we propose a fast Quasi-Cyclic (QC) Low-Density Parity-Check (LDPC) code for FSO communication systems. As linear block codes based on sparse matrices, QC-LDPC codes perform extremely close to the Shannon limit. Current studies of LDPC codes in FSO communications focus mainly on Gaussian and Rayleigh channels; in this study, the LDPC code is designed for the atmospheric turbulence channel, which is neither Gaussian nor Rayleigh and is closer to the practical situation. Based on the characteristics of the atmospheric channel, modeled by the log-normal distribution and the K-distribution, we design a special QC-LDPC code and derive the log-likelihood ratio (LLR). An irregular QC-LDPC code for fast coding, with variable rates, is proposed in this paper. The proposed code achieves the excellent performance of LDPC codes while being efficient at low rates, stable at high rates, and requiring few iterations. Belief propagation (BP) decoding results show that the bit error rate (BER) clearly decreases as the Signal-to-Noise Ratio (SNR) increases, so LDPC channel coding can effectively improve the performance of FSO. Moreover, the post-decoding BER continues to fall as the SNR increases, with no error floor.
Analysis of Non-binary Hybrid LDPC Codes
Sassatelli, Lucile; Declercq, David
2008-01-01
In this paper, we analyse asymptotically a new class of LDPC codes called Non-binary Hybrid LDPC codes, which has been recently introduced. We use density evolution techniques to derive a stability condition for hybrid LDPC codes, and prove their threshold behavior. We study this stability condition to conclude on asymptotic advantages of hybrid LDPC codes compared to their non-hybrid counterparts.
Decoding LDPC Convolutional Codes on Markov Channels
Directory of Open Access Journals (Sweden)
Kashyap Manohar
2008-01-01
Full Text Available Abstract This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.
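The Gilbert-Elliott model used above is a two-state Markov chain with a Good and a Bad state, each with its own bit-error rate. A small simulator sketch; the transition and error probabilities are illustrative, not the paper's design values:

```python
import numpy as np

def gilbert_elliott(n, p_gb=0.01, p_bg=0.1, e_good=1e-4, e_bad=0.1, seed=1):
    # Two-state Markov (Gilbert-Elliott) bit-error process: a Good state
    # with a low error rate and a Bad state with a high one. All
    # probabilities here are illustrative, not the paper's.
    rng = np.random.default_rng(seed)
    state, errors = 0, np.zeros(n, dtype=int)   # state 0 = Good, 1 = Bad
    for t in range(n):
        errors[t] = rng.random() < (e_good if state == 0 else e_bad)
        if rng.random() < (p_gb if state == 0 else p_bg):
            state = 1 - state                   # Markov transition
    return errors
```

Because errors arrive in bursts while the chain sits in the Bad state, joint decoding and state estimation can outperform decoders that assume a memoryless channel.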
Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-03-01
A novel lower-complexity construction scheme of quasi-cyclic low-density parity-check (QC-LDPC) codes for optical transmission systems is proposed based on the structure of the parity-check matrix for the Richardson-Urbanke (RU) algorithm. Furthermore, a novel irregular QC-LDPC(4 288, 4 020) code with a high code rate of 0.937 is constructed by this scheme. The simulation analyses show that the net coding gain (NCG) of the novel irregular QC-LDPC(4 288, 4 020) code is respectively 2.08 dB, 1.25 dB and 0.29 dB more than those of the classic RS(255, 239) code, the LDPC(32 640, 30 592) code and the irregular QC-LDPC(3 843, 3 603) code at a bit error rate (BER) of 10^(-6). The irregular QC-LDPC(4 288, 4 020) code also has lower encoding/decoding complexity than the LDPC(32 640, 30 592) code and the irregular QC-LDPC(3 843, 3 603) code, making it better suited to the increasing development requirements of high-speed optical transmission systems.
LDPC Codes with Minimum Distance Proportional to Block Size
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy
2009-01-01
Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code-block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors.
Protograph LDPC Codes Over Burst Erasure Channels
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
In this paper we design high-rate protograph-based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high-data-rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes with an iterative decoding threshold that approaches the capacity of binary erasure channels. The other class is designed for short block sizes based on maximizing the minimum stopping set size. For high code rates and short blocks, the second class outperforms the first.
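Erasure decoding of LDPC codes proceeds by "peeling": any check equation with exactly one erased participant recovers that bit by XOR, and decoding stalls precisely when the erased positions contain a stopping set, which is why the short-block class above maximizes the minimum stopping set size. A minimal sketch:

```python
import numpy as np

def peel_bec(H, y):
    # Iterative erasure ("peeling") decoding on the BEC: a check with
    # exactly one erased participant recovers it as the XOR of the known
    # bits. Decoding stalls exactly when a stopping set remains erased.
    H, y = np.asarray(H), np.array(y)           # y: -1 marks an erasure
    progress = True
    while progress and (y == -1).any():
        progress = False
        for row in H:
            erased = np.where((row == 1) & (y == -1))[0]
            if len(erased) == 1:
                known = (row == 1) & (y != -1)
                y[erased[0]] = y[known].sum() % 2
                progress = True
    return y
```

Any positions still marked -1 on return form (or contain) a stopping set for this parity-check matrix.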
QC-LDPC code-based cryptography
Baldi, Marco
2014-01-01
This book describes the fundamentals of cryptographic primitives based on quasi-cyclic low-density parity-check (QC-LDPC) codes, with a special focus on the use of these codes in public-key cryptosystems derived from the McEliece and Niederreiter schemes. In the first part of the book, the main characteristics of QC-LDPC codes are reviewed, and several techniques for their design are presented, while tools for assessing the error correction performance of these codes are also described. Some families of QC-LDPC codes that are best suited for use in cryptography are also presented. The second part of the book focuses on the McEliece and Niederreiter cryptosystems, both in their original forms and in some subsequent variants. The applicability of QC-LDPC codes in these frameworks is investigated by means of theoretical analyses and numerical tools, in order to assess their benefits and drawbacks in terms of system efficiency and security. Several examples of QC-LDPC code-based public-key cryptosystems are presented.
Protograph LDPC Codes for the Erasure Channel
Pollara, Fabrizio; Dolinar, Samuel J.; Divsalar, Dariush
2006-01-01
This viewgraph presentation reviews the use of protograph Low Density Parity Check (LDPC) codes for erasure channels. A protograph is a Tanner graph with a relatively small number of nodes. A "copy-and-permute" operation can be applied to the protograph to obtain larger derived graphs of various sizes. For very high code rates and short block sizes, a low asymptotic threshold criterion is not the best approach to designing LDPC codes; simple protographs with much regularity and low maximum node degrees appear to be the best choices. Quantized-rateless protograph LDPC codes can be built by careful design of the protograph such that multiple puncturing patterns will still permit message-passing decoding to proceed.
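The "copy-and-permute" operation can be sketched as follows, assuming a 0/1 base matrix (no parallel edges) lifted with Z x Z circulant permutation blocks; the random shifts are purely illustrative, since real designs choose them to maximize girth:

```python
import numpy as np

def lift_protograph(B, Z, seed=0):
    # "Copy-and-permute": every 1 in the base (protograph) matrix B is
    # replaced by a Z x Z circulant permutation (a shifted identity),
    # every 0 by a Z x Z zero block. Assumes a 0/1 base matrix.
    rng = np.random.default_rng(seed)
    m, n = B.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    for i in range(m):
        for j in range(n):
            if B[i, j]:
                shift = rng.integers(Z)          # illustrative random shift
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(
                    np.eye(Z, dtype=int), shift, axis=1)
    return H
```

The derived graph inherits the protograph's degree distribution: every lifted row and column has exactly as many ones as the corresponding base row or column.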
Constructing LDPC Codes from Loop-Free Encoding Modules
Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth
2009-01-01
channel capacity limits can be achieved for the codes of the type in question having low maximum variable node degrees. The decoding thresholds in these examples are lower than those of the best-known unstructured irregular LDPC codes constrained to have the same maximum node degrees. Furthermore, the present method enables the construction of codes of any desired rate with thresholds that stay uniformly close to their respective channel capacity thresholds.
Codeword Structure Analysis for LDPC Convolutional Codes
Directory of Open Access Journals (Sweden)
Hua Zhou
2015-12-01
Full Text Available The codewords of a low-density parity-check (LDPC) convolutional code (LDPC-CC) are characterised as structured or non-structured. The number of structured codewords is dominated by the size of the polynomial syndrome former matrix H^T(D), while the number of non-structured ones depends on the particular monomials or polynomials in H^T(D). By evaluating the relationship between the codewords of the mother code and those of its super codes, the low-weight non-structured codewords in the super codes can be eliminated by appropriately choosing the monomials or polynomials in H^T(D), resulting in an improved distance spectrum for the mother code.
A novel construction method of QC-LDPC codes based on CRT for optical communications
Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-05-01
A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct a high-rate code. The simulation results show that at a bit error rate (BER) of 10^(-7), the net coding gain (NCG) of the regular QC-LDPC(4 851, 4 546) code is respectively 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB more than those of the classic RS(255, 239) code in ITU-T G.975, the LDPC(32 640, 30 592) code in ITU-T G.975.1, the QC-LDPC(3 664, 3 436) code constructed by the improved combining construction method based on the CRT, and the irregular QC-LDPC(3 843, 3 603) code constructed by the construction method based on the Galois field (GF(q)) multiplicative group. Furthermore, all these five codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4 851, 4 546) code constructed by the proposed method has excellent error-correction performance, and is well suited to optical transmission systems.
DNA Barcoding through Quaternary LDPC Codes.
Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar
2015-01-01
For many parallel applications of Next-Generation Sequencing (NGS) technologies, short barcodes able to accurately multiplex a large number of samples are demanded. To address these competing requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide fine granularity in barcode size (BCH) or have intrinsically poor error-correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10^(-2) per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate on the order of 10^(-9), at the expense of a rate of read losses just on the order of 10^(-6).
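The demultiplexing step underlying such barcoding systems can be sketched as nearest-barcode assignment over the quaternary alphabet {A, C, G, T}, with ambiguous reads counted as read losses; the barcodes below are hypothetical toy examples, not the LDPC barcodes of the paper:

```python
def demultiplex(read, barcodes):
    # Minimum-distance demultiplexing over the quaternary alphabet:
    # assign the read to the nearest barcode; reject it on a tie
    # (counted as a read loss). Barcodes here are hypothetical.
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    ranked = sorted((hamming(read, bc), bc) for bc in barcodes)
    if len(ranked) > 1 and ranked[0][0] == ranked[1][0]:
        return None                              # ambiguous: read loss
    return ranked[0][1]
```

A misidentification occurs when mismatch errors carry a read closer to a wrong barcode than to its own, which is what a larger minimum distance between barcodes protects against.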
Non-binary Hybrid LDPC Codes: Structure, Decoding and Optimization
Sassatelli, Lucile; Declercq, David
2007-01-01
In this paper, we propose to study and optimize a very general class of LDPC codes whose variable nodes belong to finite sets with different orders. We name this class of codes Hybrid LDPC codes. Although efficient optimization techniques exist for binary LDPC codes and, more recently, for non-binary LDPC codes, both exhibit drawbacks for different reasons. Our goal is to capitalize on the advantages of both families by building codes with binary (or small finite set order) and non-binary parts.
Directory of Open Access Journals (Sweden)
Surbhi Sharma
2011-06-01
Full Text Available Irregular low-density parity-check (LDPC) codes have been found to show exceptionally good performance for single-antenna systems over a wide class of channels. In this paper, the performance of LDPC codes with multiple-antenna systems is investigated in flat Rayleigh and Rician fading channels for different modulation schemes. The focus of attention is mainly on the concatenation of irregular LDPC codes with complex orthogonal space-time codes. Iterative decoding is carried out with a density evolution method that sets a threshold above which the code performs well. For the proposed concatenated system, the simulation results show that the QAM technique achieves coding gains of 8.8 dB and 3.2 dB over the QPSK technique in Rician (LOS) and Rayleigh (NLOS) fading environments, respectively.
Structured LDPC Codes over Integer Residue Rings
Directory of Open Access Journals (Sweden)
Marc A. Armand
2008-07-01
Full Text Available This paper presents a new class of low-density parity-check (LDPC) codes over ℤ_{2^a} represented by regular, structured Tanner graphs. These graphs are constructed using Latin squares defined over a multiplicative group of a Galois ring, rather than a finite field. Our approach yields codes for a wide range of code rates and, more importantly, codes whose minimum pseudocodeword weights equal their minimum Hamming distances. Simulation studies show that these structured codes, when transmitted using matched signal sets over an additive white Gaussian noise channel, can outperform their random counterparts of similar length and rate.
Protograph LDPC Codes with Node Degrees at Least 3
Divsalar, Dariush; Jones, Christopher
2006-01-01
In this paper we present protograph codes with a small number of degree-3 nodes and one high-degree node. The iterative decoding thresholds for the proposed rate-1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with degree at least 3. The main motivation is to gain linear minimum distance to achieve a low error floor, and to construct rate-compatible protograph-based LDPC codes for fixed block length that simultaneously achieve a low iterative decoding threshold and linear minimum distance. We start with a rate-1/2 protograph LDPC code with degree-3 nodes and one high-degree node. Higher-rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The case where all constraints are combined corresponds to the highest-rate code. This constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance. Thus, having node degree at least 3 at rate 1/2 guarantees that the linear-minimum-distance property is preserved at higher rates. Through examples we show that iterative decoding thresholds as low as 0.544 dB can be achieved for small protographs with node degrees at least three. A family of low- to high-rate codes with minimum distance increasing linearly in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
Multiple LDPC decoding for distributed source coding and video coding
DEFF Research Database (Denmark)
Forchhammer, Søren; Luong, Huynh Van; Huang, Xin
2011-01-01
Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate (LDPCA) codes in a DSC scheme with feedback. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...
Rate-Compatible Protograph LDPC Codes
Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)
2014-01-01
Digital communication coding methods resulting in rate-compatible low density parity-check (LDPC) codes built from protographs. Described digital coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum distance growth property for the protograph. All possible edges in the graph are searched for the minimum iterative decoding threshold and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode and forward relay channels.
Pilotless Frame Synchronization Using LDPC Code Constraints
Jones, Christopher; Vissasenor, John
2009-01-01
A method of pilotless frame synchronization has been devised for low-density parity-check (LDPC) codes. In pilotless frame synchronization, there are no pilot symbols; instead, the offset is estimated by exploiting selected aspects of the structure of the code. The advantage of pilotless frame synchronization is that the bandwidth of the signal is reduced by an amount associated with elimination of the pilot symbols. The disadvantage is an increase in the amount of receiver data processing needed for frame synchronization.
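Exploiting the code structure for synchronization can be sketched as a syndrome test: slide a codeword-length window over the received hard decisions and pick the offset whose window violates the fewest parity checks. This is a simplified illustration, not the authors' exact estimator (which can also exploit soft information):

```python
import numpy as np

def estimate_offset(H, stream, n):
    # Pilotless frame sync sketch: slide a length-n window over the
    # hard-decision bit stream and choose the offset whose window
    # violates the fewest parity checks of H (zero for a valid codeword).
    best_off, best_viol = 0, None
    for off in range(len(stream) - n + 1):
        viol = int(((H @ stream[off:off + n]) % 2).sum())
        if best_viol is None or viol < best_viol:
            best_off, best_viol = off, viol
    return best_off
```

The extra receiver processing mentioned above shows up here as one syndrome computation per candidate offset.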
Spatially coupled LDPC coding in cooperative wireless networks
Jayakody, D.N.K.; Skachek, V.; Chen, B.
2016-01-01
This paper proposes a novel spatially coupled low-density parity-check (SC-LDPC) code-based soft-forwarding relaying scheme for a two-way relay system. We introduce array-based optimized SC-LDPC codes in relay channels. A more precise model is proposed to characterize the residual
Memory-efficient decoding of LDPC codes
Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon
2005-01-01
We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, simulations show that an optimized 3-bit quantizer operates with a 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer with less than 0.1 dB quantization loss.
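The idea of non-uniform message quantization can be sketched as follows; the thresholds and reconstruction levels below are illustrative placeholders, not the mutual-information-optimized values of the article:

```python
import numpy as np

def quantize_llr(llr, thresholds, levels):
    # Non-uniform LLR quantization sketch: `thresholds` partition the
    # real line into len(levels) cells, and each cell is represented by
    # one reconstruction level inside the fixed-point decoder.
    return levels[np.searchsorted(thresholds, llr)]

# An illustrative 3-bit (8-level) symmetric quantizer; a real design
# would place thresholds to maximize mutual information instead.
thresholds = np.array([-3.0, -1.5, -0.5, 0.0, 0.5, 1.5, 3.0])
levels = np.array([-4.0, -2.0, -1.0, -0.25, 0.25, 1.0, 2.0, 4.0])
```

Denser cells near zero matter because small-magnitude messages carry the most decision-relevant uncertainty during belief propagation.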
Bilayer expurgated LDPC codes with uncoded relaying
Directory of Open Access Journals (Sweden)
Md. Noor-A-Rahim
2017-08-01
Full Text Available Bilayer low-density parity-check (LDPC) codes are an effective coding technique for decode-and-forward relaying, where the relay forwards extra parity bits to help the destination decode the source bits correctly. In the existing bilayer coding scheme, these parity bits are protected by an error-correcting code and assumed to be reliably available at the receiver. We propose an uncoded relaying scheme, where the extra parity bits are forwarded to the destination without any protection. Through density evolution analysis and simulation results, we show that our proposed scheme achieves better performance in terms of bit erasure probability than the existing relaying scheme. In addition, our proposed scheme results in lower complexity at the relay.
On Analyzing LDPC Codes over Multiantenna MC-CDMA System
Directory of Open Access Journals (Sweden)
S. Suresh Kumar
2014-01-01
Full Text Available Multiantenna multicarrier code-division multiple access (MC-CDMA) technique has been attracting much attention for designing future broadband wireless systems. In addition, low-density parity-check (LDPC) code, a promising near-optimal error correction code, is also being widely considered in next generation communication systems. In this paper, we propose a simple method to construct a regular quasicyclic low-density parity-check (QC-LDPC) code to improve the transmission performance over the precoded MC-CDMA system with limited feedback. Simulation results show that the coding gain of the proposed QC-LDPC codes is larger than that of the Reed-Solomon codes, and the performance of the multiantenna MC-CDMA system can be greatly improved by these QC-LDPC codes when the data rate is high.
LDPC coded OFDM over the atmospheric turbulence channel.
Djordjevic, Ivan B; Vasic, Bane; Neifeld, Mark A
2007-05-14
Low-density parity-check (LDPC) coded optical orthogonal frequency division multiplexing (OFDM) is shown to significantly outperform LDPC coded on-off keying (OOK) over the atmospheric turbulence channel in terms of both coding gain and spectral efficiency. In the regime of strong turbulence at a bit-error rate of 10^(-5), the coding gain improvement of the LDPC coded single-side band unclipped-OFDM system with 64 sub-carriers is larger than the coding gain of the LDPC coded OOK system by 20.2 dB for quadrature-phase-shift keying (QPSK) and by 23.4 dB for binary-phase-shift keying (BPSK).
Multilevel LDPC Codes Design for Multimedia Communication CDMA System
Directory of Open Access Journals (Sweden)
Hou Jia
2004-01-01
Full Text Available We design multilevel coding (MLC) with a semi-bit interleaved coded modulation (BICM) scheme based on low-density parity-check (LDPC) codes. Different from traditional designs, we join MLC and BICM together by using Gray mapping, which is suitable for transmitting data over several equivalent channels with different code rates. To perform well at signal-to-noise ratios (SNRs) very close to the capacity of the additive white Gaussian noise (AWGN) channel, a random regular LDPC code and a simple semialgebraic LDPC (SA-LDPC) code are discussed in MLC with parallel independent decoding (PID). The numerical results demonstrate that the proposed scheme achieves both power and bandwidth efficiency.
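The Gray mapping that joins MLC and BICM above assigns labels so that adjacent constellation points differ in exactly one bit, which is what lets each bit level see a roughly independent equivalent channel. A minimal sketch using the binary-reflected Gray code:

```python
def gray_map(bits):
    # Binary-reflected Gray mapping: consecutive integers receive labels
    # that differ in exactly one bit, so a symbol error to a neighboring
    # constellation point corrupts only a single bit level.
    n = int(''.join(map(str, bits)), 2)
    return n ^ (n >> 1)
```

For two-bit labels, the natural order 00, 01, 10, 11 maps to the Gray labels 00, 01, 11, 10.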
Directory of Open Access Journals (Sweden)
Valérian Mannoni
2004-09-01
Full Text Available This paper deals with optimized channel coding for OFDM transmissions (COFDM) over frequency-selective channels using irregular low-density parity-check (LDPC) codes. Firstly, we introduce a new characterization of LDPC code irregularity called the "irregularity profile." Then, using this parameterization, we derive a new criterion based on the minimization of the transmission bit error probability to design an irregular LDPC code suited to the frequency selectivity of the channel. The optimization of this criterion is done using the Gaussian approximation technique. Simulations illustrate the good performance of our approach for different transmission channels.
The application of LDPC code in MIMO-OFDM system
Liu, Ruian; Zeng, Beibei; Chen, Tingting; Liu, Nan; Yin, Ninghao
2018-03-01
The combination of MIMO and OFDM technology has become one of the key technologies of fourth-generation mobile communication: it can overcome the frequency-selective fading of the wireless channel, increase the system capacity, and improve frequency utilization. Error-correcting coding introduced into the system can further improve its performance. The LDPC (low-density parity-check) code is an error-correcting code that can improve system reliability and anti-interference ability, and its decoding is simple and easy to implement. This paper mainly discusses the application of LDPC codes in the MIMO-OFDM system.
Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding
Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.
2016-03-01
In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles, including both type I and type II, are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines cooperation gain and channel coding gain, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than that of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
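Girth-4 cancellation in a QC-LDPC code rests on a standard algebraic condition on the exponent matrix: a 4-cycle survives lifting exactly when the alternating sum of the four CPM shifts around a 2x2 sub-block is zero modulo the lifting size. A minimal sketch of that test (hypothetical function name; this illustrates the condition, not the paper's joint-design procedure):

```python
from itertools import combinations

def has_girth4(E, L):
    """Return True if the exponent matrix E (entries are CPM shift
    values, None for an all-zero block) yields a length-4 cycle after
    lifting with L x L circulant permutation matrices."""
    m, n = len(E), len(E[0])
    for i1, i2 in combinations(range(m), 2):
        for j1, j2 in combinations(range(n), 2):
            es = (E[i1][j1], E[i1][j2], E[i2][j2], E[i2][j1])
            if None in es:
                continue  # a zero block breaks the candidate cycle
            if (es[0] - es[1] + es[2] - es[3]) % L == 0:
                return True
    return False

# A 2x2 all-zero-shift block always lifts to a 4-cycle:
print(has_girth4([[0, 0], [0, 0]], 8))   # True
# Shifts 0,1,2,0 give 0-1+2-0 = 1 (mod 8), so no 4-cycle here:
print(has_girth4([[0, 1], [0, 2]], 8))   # False
```

A joint design would run such a test on the equivalent parity-check matrix covering both the source and relay codes.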
Directory of Open Access Journals (Sweden)
Yan Zhang
2015-01-01
Full Text Available This paper presents four different integer sequences for constructing quasi-cyclic low-density parity-check (QC-LDPC) codes on a sound mathematical basis. The paper describes the underlying coding principle and the encoding procedure. The QC-LDPC codes constructed from the four integer sequences are compared with LDPC codes obtained using the PEG algorithm, array codes, and MacKay codes, respectively. Then, the integer-sequence QC-LDPC codes are used in coded cooperative communication. Simulation results show that the integer-sequence-constructed QC-LDPC codes are effective, and their overall performance is better than that of the other types of LDPC codes in coded cooperative communication. The QC-LDPC code constructed from the Dayan integer sequence performs best.
Performance analysis of LDPC codes on OOK terahertz wireless channels
International Nuclear Information System (INIS)
Liu Chun; Wang Chang; Cao Jun-Cheng
2016-01-01
Atmospheric absorption, scattering, and scintillation are the major causes of degradation in the transmission quality of terahertz (THz) wireless communications. An error control coding scheme based on low-density parity-check (LDPC) codes with a soft-decision decoding algorithm is proposed to improve the bit-error-rate (BER) performance of an on-off keying (OOK) modulated THz signal transmitted through the atmospheric channel. The THz wave propagation characteristics are analyzed and a channel model in the atmosphere is set up. Numerical simulations validate the strong performance of LDPC codes against atmospheric fading and demonstrate their potential in future ultra-high-speed (beyond Gb/s) THz communications.
Protograph based LDPC codes with minimum distance linearly growing with block size
Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance linearly increasing in block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
Experimental demonstration of nonbinary LDPC convolutional codes for DP-64QAM/256QAM
Koike-Akino, T.; Sugihara, K.; Millar, D.S.; Pajovic, M.; Matsumoto, W.; Alvarado, A.; Maher, R.; Lavery, D.; Paskov, M.; Kojima, K.; Parsons, K.; Thomsen, B.C.; Savory, S.J.; Bayvel, P.
2016-01-01
We show the great potential of nonbinary LDPC convolutional codes (NB-LDPC-CC) with low-latency windowed decoding. It is experimentally demonstrated that NB-LDPC-CC can offer a performance improvement of up to 5 dB compared with binary coding.
Non-Binary Protograph-Based LDPC Codes: Analysis,Enumerators and Designs
Sun, Yizeng
2013-01-01
Non-binary LDPC codes can outperform binary LDPC codes under sum-product decoding, at the cost of higher computational complexity. Non-binary LDPC codes based on protographs have the advantage of a simple hardware architecture. In the first part of this thesis, we use EXIT chart analysis to compute the thresholds of different protographs over GF(q). Based on threshold computation, some non-binary protograph-based LDPC codes are designed and their frame error rates are compared with binary LDPC codes. ...
Improved Design of Unequal Error Protection LDPC Codes
Directory of Open Access Journals (Sweden)
Sandberg Sara
2010-01-01
Full Text Available We propose an improved method for designing unequal error protection (UEP) low-density parity-check (LDPC) codes. The method is based on density evolution. The degree distribution with the best UEP properties is found, under the constraint that the threshold should not exceed the threshold of a non-UEP code plus some threshold offset. For different codeword lengths and different construction algorithms, we search for good threshold offsets for the UEP code design. The choice of the threshold offset is based on the average a posteriori variable node mutual information. Simulations reveal the counterintuitive result that the short-to-medium length codes designed with a suitable threshold offset all outperform the corresponding non-UEP codes in terms of average bit-error rate. The proposed codes are also compared to other UEP-LDPC codes found in the literature.
Transmission over UWB channels with OFDM system using LDPC coding
Dziwoki, Grzegorz; Kucharczyk, Marcin; Sulek, Wojciech
2009-06-01
A hostile wireless environment requires the use of sophisticated signal processing methods. The paper concerns Ultra-Wideband (UWB) transmission over Personal Area Networks (PAN), including the MB-OFDM specification of the physical layer. In the presented work, the OFDM transmission system was combined with an LDPC encoder/decoder. Additionally, the frame and bit error rates (FER and BER) of the system were decreased by using results from the LDPC decoder in a kind of turbo equalization algorithm for better channel estimation. A computational block using an evolutionary strategy, from the genetic algorithms family, was also used in the presented system; it is placed after the SPA (Sum-Product Algorithm) decoder and is conditionally turned on in the decoding process. The result is increased effectiveness of the whole system, especially a lower FER. The system was tested with two types of LDPC codes, differing in the type of parity-check matrix: randomly generated, and constructed deterministically with optimization for a practical decoder architecture implemented in an FPGA device.
LDPC Codes--Structural Analysis and Decoding Techniques
Zhang, Xiaojie
2012-01-01
Low-density parity-check (LDPC) codes have been the focus of much research over the past decade thanks to their near Shannon limit performance and to their efficient message-passing (MP) decoding algorithms. However, the error floor phenomenon observed in MP decoding, which manifests itself as an abrupt change in the slope of the error-rate curve,…
Construction of Protograph LDPC Codes with Linear Minimum Distance
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
Using LDPC Code Constraints to Aid Recovery of Symbol Timing
Jones, Christopher; Villasnor, John; Lee, Dong-U; Vales, Esteban
2008-01-01
A method of utilizing information available in the constraints imposed by a low-density parity-check (LDPC) code has been proposed as a means of aiding the recovery of symbol timing in the reception of a binary-phase-shift-keying (BPSK) signal representing such a code in the presence of noise, timing error, and/or Doppler shift between the transmitter and the receiver. This method and the receiver architecture in which it would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. Acquisition and tracking of a signal of the type described above have traditionally been performed upstream of, and independently of, decoding and have typically involved utilization of a phase-locked loop (PLL). However, the LDPC decoding process, which is iterative, provides information that can be fed back to the timing-recovery receiver circuits to improve performance significantly over that attainable in the absence of such feedback. Prior methods of coupling LDPC decoding with timing recovery had focused on the use of output code words produced as the iterations progress. In contrast, in the present method, one exploits the information available from the metrics computed for the constraint nodes of an LDPC code during the decoding process. In addition, the method involves the use of a waveform model that captures, better than do the waveform models of the prior methods, distortions introduced by receiver timing errors and transmitter/receiver motions. An LDPC code is commonly represented by use of a bipartite graph containing two sets of nodes. In the graph corresponding to an (n,k) code, the n variable nodes correspond to the code word symbols and the n-k constraint nodes represent the constraints that the code places on the variable nodes in order for them to form a valid code word. The decoding procedure involves iterative computation
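One common way to form a constraint-node reliability metric of this general kind (a sketch under that assumption, not necessarily the exact metric of the proposed method) is the tanh-product rule, which scores how reliably each parity check is satisfied by the current bit LLRs:

```python
import math

def check_node_metric(llrs):
    """Soft metric for a single parity constraint given the LLRs of its
    neighboring bits: the tanh-product rule yields a value in (-1, 1),
    where values near +1 mean the check is reliably satisfied and
    values near -1 mean it is reliably violated."""
    p = 1.0
    for l in llrs:
        p *= math.tanh(l / 2.0)
    return p
```

Feeding such per-check metrics back to the timing loop gives a soft indication of decoder confidence at each candidate timing offset.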
LDPC-PPM Coding Scheme for Optical Communication
Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael
2009-01-01
In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (â) of the coded data, which is then processed by a decoder to obtain an estimate (û) of the original data.
Opportunistic Adaptive Transmission for Network Coding Using Nonbinary LDPC Codes
Directory of Open Access Journals (Sweden)
Cocco Giuseppe
2010-01-01
Full Text Available Network coding makes it possible to exploit the spatial diversity naturally present in mobile wireless networks and can be seen as an example of cooperative communication at the link layer and above. Such a promising technique needs to rely on a suitable physical layer in order to achieve its best performance. In this paper, we present an opportunistic packet scheduling method based on physical layer considerations. We extend the channel adaptation proposed for the broadcast phase of asymmetric two-way bidirectional relaying to a generic number of sinks and apply it in a network context. The method consists of adapting the information rate for each receiving node according to its channel status, independently of the other nodes. In this way, a higher network throughput can be achieved at the expense of slightly higher complexity at the transmitter. This configuration makes it possible to perform rate adaptation while fully preserving the benefits of channel and network coding. We carry out an information-theoretic analysis of this approach and of that typically used in network coding. Numerical results based on nonbinary LDPC codes confirm the effectiveness of our approach with respect to previously proposed opportunistic scheduling techniques.
Optimization of irregular LDPC codes and decoding algorithms for q-ary LDPC codes
Cances, Jean-Pierre
2013-01-01
This technical note reviews the optimization principles used to obtain high-performance irregular LDPC code profiles, and recalls the principles of the decoding algorithms used for q-ary LDPC codes with high spectral efficiency.
Design of ACM system based on non-greedy punctured LDPC codes
Lu, Zijun; Jiang, Zihong; Zhou, Lin; He, Yucheng
2017-08-01
In this paper, an adaptive coded modulation (ACM) scheme based on rate-compatible LDPC (RC-LDPC) codes is designed. The RC-LDPC codes are constructed by a non-greedy puncturing method which shows good performance in the high code rate region. Moreover, an incremental redundancy scheme for the LDPC-based ACM system over the AWGN channel is proposed. Under this scheme, code rates vary from 2/3 to 5/6 and the complexity of the ACM system is lowered. Simulations show that increasingly significant coding gains are obtained by the proposed ACM system as the throughput rises.
Cooperative optimization and their application in LDPC codes
Chen, Ke; Rong, Jian; Zhong, Xiaochun
2008-10-01
Cooperative optimization is a new way of finding global optima of complicated functions of many variables. The proposed algorithm belongs to the class of message-passing algorithms and has solid theoretical foundations. It can achieve good coding gains over the sum-product algorithm for LDPC codes. For the (6561, 4096) LDPC code, the proposed algorithm achieves a 2.0 dB gain over the sum-product algorithm at a BER of 4×10^-7. The decoding complexity of the proposed algorithm is lower than that of the sum-product algorithm; furthermore, it achieves a much lower error floor once Eb/N0 is higher than 1.8 dB.
Construction of Quasi-Cyclic LDPC Codes Based on Fundamental Theorem of Arithmetic
Directory of Open Access Journals (Sweden)
Hai Zhu
2018-01-01
Full Text Available Quasi-cyclic (QC) LDPC codes play an important role in 5G communications and have been chosen as the standard codes for the 5G enhanced mobile broadband (eMBB) data channel. In this paper, we study the construction of QC LDPC codes based on an arbitrary given expansion factor (or lifting degree). First, we analyze the cycle structure of QC LDPC codes and give a necessary and sufficient condition for the existence of short cycles. Based on the fundamental theorem of arithmetic in number theory, we divide the integer factorization into three cases and present three classes of QC LDPC codes accordingly. Furthermore, a general construction method for QC LDPC codes with girth of at least 6 is proposed. Numerical results show that the constructed QC LDPC codes perform well over the AWGN channel when decoded with iterative algorithms.
Design LDPC Codes without Cycles of Length 4 and 6
Directory of Open Access Journals (Sweden)
Kiseon Kim
2008-04-01
Full Text Available We present an approach for constructing LDPC codes without cycles of length 4 and 6. Firstly, we design three submatrices with different shifting functions given by the proposed schemes; then we combine them into the matrix specified by the proposed approach; and finally we expand the matrix into the desired parity-check matrix using identity matrices and cyclic shifts of the identity matrices. Simulation results on the AWGN channel verify that the BER of the proposed codes is close to that of MacKay's random codes and Tanner's QC codes, and that the good BER performance is retained at high code rates.
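The final expansion step, replacing each base-matrix entry with a cyclically shifted identity matrix, can be sketched as follows (an illustrative helper, with `None` marking an all-zero block as an assumption of this sketch):

```python
def expand(E, L):
    """Expand an exponent matrix into a binary parity-check matrix:
    entry s becomes the L x L identity matrix cyclically shifted s
    places to the right, and None becomes the L x L all-zero block."""
    m, n = len(E), len(E[0])
    H = [[0] * (n * L) for _ in range(m * L)]
    for bi, row in enumerate(E):
        for bj, s in enumerate(row):
            if s is None:
                continue
            for r in range(L):
                # row r of a shift-by-s CPM has its 1 in column (r+s) mod L
                H[bi * L + r][bj * L + (r + s) % L] = 1
    return H

H = expand([[0, 1], [2, None]], 4)
print(len(H), len(H[0]))       # 8 8
print(sum(sum(r) for r in H))  # 12 ones: three CPM blocks of 4 each
```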
A good performance watermarking LDPC code used in high-speed optical fiber communication system
Zhang, Wenbo; Li, Chao; Zhang, Xiaoguang; Xi, Lixia; Tang, Xianfeng; He, Wenxue
2015-07-01
A watermarking LDPC code, a strategy designed to improve the performance of the traditional LDPC code, is introduced. By inserting pre-defined watermarking bits into the original LDPC code, we can obtain a more accurate estimate of the noise level in the fiber channel. We then use this estimate to modify the probability distribution function (PDF) used in the initialization of the belief propagation (BP) decoding algorithm. The algorithm was tested in a 128 Gb/s PDM-DQPSK optical communication system, and the results showed that the watermarking LDPC code has a better tolerance to polarization mode dispersion (PMD) and nonlinearity than the traditional LDPC code. Also, at the cost of about 2.4% extra redundancy for the watermarking bits, the decoding efficiency of the watermarking LDPC code is about twice that of the traditional one.
Improving a Power Line Communications Standard with LDPC Codes
Directory of Open Access Journals (Sweden)
Hsu Christine
2007-01-01
Full Text Available We investigate a power line communications (PLC) scheme that could be used to enhance the HomePlug 1.0 standard, specifically its ROBO mode, which provides modest throughput for the worst-case PLC channel. The scheme is based on using a low-density parity-check (LDPC) code in lieu of the concatenated Reed-Solomon and convolutional codes in ROBO mode. The PLC channel is modeled with multipath fading and Middleton's class A noise. Clipping is introduced to mitigate the effect of impulsive noise. A simple and effective method is devised to estimate the variance of the clipped noise for LDPC decoding. Simulation results show that the proposed scheme outperforms the HomePlug 1.0 ROBO mode and has lower computational complexity. The proposed scheme also dispenses with the repetition of information bits in ROBO mode to gain time diversity, resulting in a 4-fold increase in physical layer throughput.
Bounded-Angle Iterative Decoding of LDPC Codes
Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush
2009-01-01
Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).
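As a rough illustration of this vector-space view (with a conventional BPSK mapping of bits to ±1 assumed, since the abstract does not specify one), the decoder's output can be accepted only when its angle to the received word stays within a bound:

```python
import math

def angle_to_codeword(r, c):
    """Angle (radians) between a received word r and the BPSK image of
    a codeword c (bit 0 -> +1, bit 1 -> -1; this mapping is an
    assumption of the sketch), both viewed as vectors in R^n."""
    s = [1.0 if b == 0 else -1.0 for b in c]
    dot = sum(x * y for x, y in zip(r, s))
    nr = math.sqrt(sum(x * x for x in r))
    ns = math.sqrt(len(s))  # every BPSK vector has norm sqrt(n)
    return math.acos(max(-1.0, min(1.0, dot / (nr * ns))))

def bounded_angle_accept(r, c, theta_max):
    """Accept the decoder output only if it lies within the bounded
    angle of the received word; otherwise flag a detected failure."""
    return angle_to_codeword(r, c) <= theta_max
```

Rejecting decoded words outside the angle bound converts would-be undetected errors into detected failures, which is the stated goal of the modification.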
Implementation of Layered Decoding Architecture for LDPC Code using Layered Min-Sum Algorithm
Sandeep Kakde; Atish Khobragade; Shrikant Ambatkar; Pranay Nandanwar
2017-01-01
For the binary field and long code lengths, Low-Density Parity-Check (LDPC) codes approach Shannon limit performance. LDPC codes provide remarkable error correction performance and therefore enlarge the design space for communication systems. In this paper, we compare different digital modulation techniques and find that BPSK is better than the other modulation techniques in terms of BER. The paper also gives the error performance of the LDPC decoder over the AWGN channel using the Min-Sum algorithm.
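The Min-Sum check-node update at the heart of such decoders can be sketched as follows (a generic normalized min-sum rule, not the authors' exact layered implementation):

```python
def minsum_check_update(llrs, alpha=1.0):
    """Min-sum check-node update: for each edge, the outgoing magnitude
    is the minimum |LLR| over the *other* edges and the outgoing sign
    is the product of the other edges' signs; alpha < 1 gives the
    normalized (offset-free) variant."""
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = 1.0
        for v in others:
            if v < 0:
                sign = -sign
        out.append(alpha * sign * min(abs(v) for v in others))
    return out

print(minsum_check_update([2.0, -1.0, 3.0]))  # [-1.0, 2.0, -1.0]
```

In a layered schedule, this update is applied one row-group of the parity-check matrix at a time, letting refreshed variable messages propagate within a single iteration.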
Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets
Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua
2017-09-01
In view of the problems that the encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes is high and that an insufficient minimum distance degrades error-correction performance, new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity-check matrices of these type-II QC-LDPC codes consist of zero matrices with weight 0, circulant permutation matrices (CPMs) with weight 1, and circulant matrices with weight 2 (W2CMs). The introduction of W2CMs into the parity-check matrices makes it possible to achieve a larger minimum distance, which improves the error-correction performance of the codes. The Tanner graphs of these codes contain no 4-cycles, so they have excellent decoding convergence characteristics. In addition, because the parity-check matrices have a quasi-dual-diagonal structure, a fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes achieve excellent error-correction performance and exhibit no error floor over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.
Peeling Decoding of LDPC Codes with Applications in Compressed Sensing
Directory of Open Access Journals (Sweden)
Weijun Zeng
2016-01-01
Full Text Available We present a new approach for the analysis of iterative peeling decoding recovery algorithms in the context of low-density parity-check (LDPC) codes and compressed sensing. The iterative recovery algorithm is particularly interesting for its low measurement cost and low computational complexity. The asymptotic analysis tracks the evolution of the fraction of unrecovered signal elements in each iteration, similar to the well-known density evolution analysis in the context of LDPC decoding. Our analysis shows that there exists a threshold on the density factor: below this threshold the recovery algorithm succeeds; otherwise it fails. Simulation results are also provided, verifying the agreement between the proposed asymptotic analysis and the recovery algorithm. Compared with existing work on peeling decoding, which focuses on the failure probability of the recovery algorithm, our approach gives an accurate evolution of performance for different measurement matrix parameters and is easy to implement. We also show that the peeling decoding algorithm performs better than other schemes based on LDPC codes.
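A peeling decoder of the kind analyzed above can be sketched for the erasure setting: repeatedly find a check with exactly one unknown neighbor and solve for it (illustrative sparse representation and names, not the paper's implementation):

```python
def peel(H, y):
    """Peeling decoder over the binary erasure channel.
    H: parity checks as lists of column indices (sparse rows).
    y: received word with 0/1 values and None for erasures.
    Repeatedly find a check with exactly one erased bit and recover it
    as the XOR (mod-2 sum) of the check's known bits."""
    y = list(y)
    progress = True
    while progress:
        progress = False
        for row in H:
            erased = [j for j in row if y[j] is None]
            if len(erased) == 1:
                y[erased[0]] = sum(y[j] for j in row if y[j] is not None) % 2
                progress = True
    return y

# Two overlapping checks recover two erasures of the codeword 1101:
print(peel([[0, 1, 2], [1, 2, 3]], [None, 1, 0, None]))  # [1, 1, 0, 1]
```

The fraction of erasures still unresolved when no singly-connected check remains is exactly the quantity the density-evolution-style analysis tracks.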
Error floor behavior study of LDPC codes for concatenated codes design
Chen, Weigang; Yin, Liuguo; Lu, Jianhua
2007-11-01
The error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied with experimental results on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small under the quantized sum-product (SP) algorithm. Therefore, an LDPC code may serve as the inner code in a concatenated coding system with a high-rate outer code, and thus an ultra-low error floor can be achieved. This conclusion is also verified by the experimental results.
Short-Block Protograph-Based LDPC Codes
Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher
2010-01-01
Short-block low-density parity-check (LDPC) codes of a special type are intended to be especially well suited for potential applications that include transmission of command and control data, cellular telephony, data communications in wireless local area networks, and satellite data communications. [In general, LDPC codes belong to a class of error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.] The codes of the present special type exhibit low error floors, low bit and frame error rates, and low latency (in comparison with related prior codes). These codes also achieve a low maximum rate of undetected errors over all signal-to-noise ratios, without requiring the use of cyclic redundancy checks, which would significantly increase the overhead for short blocks. These codes have protograph representations; this is advantageous in that, for reasons that exceed the scope of this article, the applicability of protograph representations makes it possible to design high-speed iterative decoders that utilize belief-propagation algorithms.
Min-Max decoding for non binary LDPC codes
Savin, Valentin
2008-01-01
Iterative decoding of non-binary LDPC codes is currently performed using either the Sum-Product or the Min-Sum algorithm, or slightly different versions of them. In this paper, several low-complexity quasi-optimal iterative algorithms are proposed for decoding non-binary codes. The Min-Max algorithm is one of them, and it has the benefit of two possible LLR-domain implementations: a standard implementation, whose complexity scales as the square of the Galois field's cardinality, and a reduced-complexity implementation...
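The Min-Max check-node rule replaces the sum of incoming LLR-domain messages with their maximum, then minimizes over field-consistent symbol combinations. A sketch for a degree-3 check over GF(q) with q a power of two, so that field addition reduces to XOR (names are illustrative):

```python
def minmax_check(m1, m2, q):
    """Min-Max check-node update for one outgoing edge of a degree-3
    check over GF(q), q a power of two (field addition = XOR).
    Messages live in an LLR-like domain where 0 marks the most likely
    symbol; the output for symbol a minimizes, over all (b, c) with
    b XOR c == a, the max of the two incoming messages."""
    return [min(max(m1[b], m2[b ^ a]) for b in range(q)) for a in range(q)]

print(minmax_check([0, 1, 2, 3], [0, 2, 1, 3], 4))  # [0, 1, 1, 1]
```

The direct form above costs O(q^2) per output vector, matching the "standard implementation" complexity the abstract mentions.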
Wu, Menglong; Han, Dahai; Zhang, Xiang; Zhang, Feng; Zhang, Min; Yue, Guangxin
2014-03-10
We have implemented a modified Low-Density Parity-Check (LDPC) codec algorithm in an ultraviolet (UV) communication system. Simulations are conducted with measured parameters to evaluate the LDPC-based UV system performance. Moreover, LDPC (960, 480) and RS (18, 10) codes are implemented and tested via a non-line-of-sight (NLOS) UV test bed. The experimental results are in agreement with the simulation and suggest that, for a given power at a 10^-3 bit error rate (BER) and in comparison with an uncoded system, the average communication distance increases by 32% with the RS code and by 78% with the LDPC code.
New Technique for Improving Performance of LDPC Codes in the Presence of Trapping Sets
Directory of Open Access Journals (Sweden)
Mohamed Adnan Landolsi
2008-06-01
Full Text Available Trapping sets are considered the primary factor degrading the performance of low-density parity-check (LDPC) codes in the error-floor region. The effect of trapping sets on the performance of an LDPC code becomes worse as the code size decreases. One approach to tackle this problem is to minimize trapping sets during LDPC code design. However, while trapping sets can be reduced, their complete elimination is infeasible due to the presence of cycles in the underlying LDPC code bipartite graph. In this work, we introduce a new technique based on trapping set neutralization to minimize the negative effect of trapping sets under belief propagation (BP) decoding. Simulation results for random, progressive edge growth (PEG) and MacKay LDPC codes demonstrate the effectiveness of the proposed technique. The hardware cost of the proposed technique is also shown to be minimal.
Simultaneous chromatic dispersion and PMD compensation by using coded-OFDM and girth-10 LDPC codes.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2008-07-07
Low-density parity-check (LDPC)-coded orthogonal frequency division multiplexing (OFDM) is studied as an efficient coded modulation scheme suitable for simultaneous chromatic dispersion and polarization mode dispersion (PMD) compensation. We show that, for aggregate rate of 10 Gb/s, accumulated dispersion over 6500 km of SMF and differential group delay of 100 ps can be simultaneously compensated with penalty within 1.5 dB (with respect to the back-to-back configuration) when training sequence based channel estimation and girth-10 LDPC codes of rate 0.8 are employed.
Performance Analysis of Faulty Gallager-B Decoding of QC-LDPC Codes with Applications
Directory of Open Access Journals (Sweden)
O. Al Rasheed
2014-06-01
Full Text Available In this paper we evaluate the performance of the Gallager-B algorithm, used for decoding low-density parity-check (LDPC) codes, under unreliable message computation. Our analysis is restricted to LDPC codes constructed from circulant matrices (QC-LDPC codes). Using Monte Carlo simulation we investigate the effects of different code parameters on coding system performance under a binary symmetric communication channel and an independent transient-fault model. One possible application of the presented analysis to the design of memory architectures with unreliable components is considered.
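The Gallager-B family evaluated above is built on hard-decision message passing. As an illustrative sketch (not the authors' exact implementation), the following shows a closely related hard-decision bit-flipping decoder, run here on a toy (7,4) Hamming parity-check matrix standing in for a QC-LDPC matrix; the function and variable names are our own.

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=20):
    """Hard-decision bit-flipping decoding (a Gallager-style relative of
    Gallager-B): repeatedly flip the bits involved in the most
    unsatisfied checks until the syndrome vanishes."""
    c = y.copy()
    for _ in range(max_iter):
        syndrome = H @ c % 2                  # 1 = unsatisfied check
        if not syndrome.any():
            return c, True                    # valid codeword found
        counts = H.T @ syndrome               # unsatisfied checks per bit
        c = np.where(counts == counts.max(), c ^ 1, c)
    return c, False

# toy (7,4) Hamming parity-check matrix standing in for a QC-LDPC matrix
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.zeros(7, dtype=int)
received[2] ^= 1                              # one transient bit error
decoded, ok = bit_flip_decode(H, received)    # recovers the all-zero word
```

A transient fault in message computation, as studied in the paper, can be emulated by flipping entries of `counts` before the threshold comparison.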
Encoders for block-circulant LDPC codes
Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)
2009-01-01
Methods and apparatus to encode message input symbols in accordance with an accumulate-repeat-accumulate code with repetition three or four are disclosed. Block circulant matrices are used. A first method and apparatus make use of the block-circulant structure of the parity check matrix. A second method and apparatus use block-circulant generator matrices.
Zhang, Yequn; Arabaci, Murat; Djordjevic, Ivan B
2012-04-09
Leveraging advanced coherent optical communication technologies, this paper explores the feasibility of using four-dimensional (4D) nonbinary LDPC-coded modulation (4D-NB-LDPC-CM) schemes for long-haul transmission in future optical transport networks. In contrast to our previous works on 4D-NB-LDPC-CM, which considered amplified spontaneous emission (ASE) noise as the dominant impairment, this paper undertakes transmission in a more realistic optical fiber transmission environment, taking into account impairments due to dispersion effects, nonlinear phase noise, Kerr nonlinearities, and stimulated Raman scattering in addition to ASE noise. We first reveal the advantages of using 4D modulation formats in LDPC-coded modulation instead of conventional two-dimensional (2D) modulation formats used with polarization-division multiplexing (PDM). Then we demonstrate that 4D LDPC-coded modulation schemes with nonbinary LDPC component codes significantly outperform not only their conventional PDM-2D counterparts but also the corresponding 4D bit-interleaved LDPC-coded modulation (4D-BI-LDPC-CM) schemes, which employ binary LDPC codes as component codes. We also show that the transmission reach improvement offered by the 4D-NB-LDPC-CM over 4D-BI-LDPC-CM increases as the underlying constellation size, and hence the spectral efficiency of transmission, increases. Our results suggest that 4D-NB-LDPC-CM can be an excellent candidate for long-haul transmission in next-generation optical networks.
Encoding of QC-LDPC Codes of Rank Deficient Parity Matrix
Directory of Open Access Journals (Sweden)
Mohammed Kasim Mohammed Al-Haddad
2016-05-01
Full Text Available The encoding of long low-density parity-check (LDPC) codes presents a challenge compared to their decoding. Quasi-cyclic (QC) LDPC codes offer the advantage of reduced complexity for both encoding and decoding due to their QC structure. Most QC-LDPC codes have a rank-deficient parity matrix, which introduces extra complexity compared to codes with a full-rank parity matrix. In this paper an encoding scheme for QC-LDPC codes is presented that is suitable for codes with either a full-rank or a rank-deficient parity matrix. The extra effort required by codes with a rank-deficient parity matrix over codes with a full-rank parity matrix is investigated.
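Although the paper's specific scheme is not reproduced here, a generic GF(2) encoder that handles both full-rank and rank-deficient parity matrices can be sketched by row-reducing H, placing message bits on the non-pivot columns, and solving the pivot (parity) columns; the helper names below are hypothetical.

```python
import numpy as np

def gf2_row_reduce(H):
    """Reduced row echelon form of H over GF(2); returns the nonzero
    rows and the list of pivot columns."""
    H = H.copy() % 2
    m, n = H.shape
    pivots, r = [], 0
    for col in range(n):
        hits = np.nonzero(H[r:, col])[0]
        if hits.size == 0:
            continue
        H[[r, r + hits[0]]] = H[[r + hits[0], r]]   # move pivot row up
        for i in range(m):
            if i != r and H[i, col]:
                H[i] ^= H[r]                        # eliminate above and below
        pivots.append(col)
        r += 1
        if r == m:
            break
    return H[:r], pivots

def qc_encode(H, msg):
    """Place message bits on non-pivot columns and solve the pivot
    (parity) columns so that H @ c = 0 (mod 2). Works whether or not
    H has full rank: the code carries n - rank(H) message bits."""
    R, pivots = gf2_row_reduce(H)
    n = H.shape[1]
    free = [j for j in range(n) if j not in pivots]
    assert len(msg) == len(free)
    c = np.zeros(n, dtype=int)
    c[free] = msg
    for i, p in enumerate(pivots):       # each reduced row fixes one parity bit
        c[p] = (R[i] @ c) % 2
    return c

# rank-deficient example: row 3 is the sum of rows 1 and 2 (rank 2)
H = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 1, 1, 1]])
c = qc_encode(H, [1, 0])
```

The efficient one-stage and two-stage techniques in the paper avoid this generic elimination by exploiting the circulant block structure; the sketch only illustrates the rank-deficiency bookkeeping.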
Statistical mechanics analysis of LDPC coding in MIMO Gaussian channels
Energy Technology Data Exchange (ETDEWEB)
Alamino, Roberto C; Saad, David [Neural Computing Research Group, Aston University, Birmingham B4 7ET (United Kingdom)
2007-10-12
Using analytical methods of statistical mechanics, we analyse the typical behaviour of a multiple-input multiple-output (MIMO) Gaussian channel with binary inputs under low-density parity-check (LDPC) network coding and joint decoding. The saddle point equations for the replica symmetric solution are found in particular realizations of this channel, including a small and large number of transmitters and receivers. In particular, we examine the cases of a single transmitter, a single receiver and symmetric and asymmetric interference. Both dynamical and thermodynamical transitions from the ferromagnetic solution of perfect decoding to a non-ferromagnetic solution are identified for the cases considered, marking the practical and theoretical limits of the system under the current coding scheme. Numerical results are provided, showing the typical level of improvement/deterioration achieved with respect to the single transmitter/receiver result, for the various cases.
Statistical mechanics analysis of LDPC coding in MIMO Gaussian channels
International Nuclear Information System (INIS)
Alamino, Roberto C; Saad, David
2007-01-01
Using analytical methods of statistical mechanics, we analyse the typical behaviour of a multiple-input multiple-output (MIMO) Gaussian channel with binary inputs under low-density parity-check (LDPC) network coding and joint decoding. The saddle point equations for the replica symmetric solution are found in particular realizations of this channel, including a small and large number of transmitters and receivers. In particular, we examine the cases of a single transmitter, a single receiver and symmetric and asymmetric interference. Both dynamical and thermodynamical transitions from the ferromagnetic solution of perfect decoding to a non-ferromagnetic solution are identified for the cases considered, marking the practical and theoretical limits of the system under the current coding scheme. Numerical results are provided, showing the typical level of improvement/deterioration achieved with respect to the single transmitter/receiver result, for the various cases.
Mutiple LDPC Decoding using Bitplane Correlation for Transform Domain Wyner-Ziv Video Coding
DEFF Research Database (Denmark)
Luong, Huynh Van; Huang, Xin; Forchhammer, Søren
2011-01-01
Distributed video coding (DVC) is an emerging video coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. This paper considers a Low Density Parity Check (LDPC) based Transform Domain Wyner-Ziv (TDWZ) video codec. To improve the LDPC coding performance in the context of TDWZ, this paper proposes a Wyner-Ziv video codec using bitplane correlation through multiple parallel LDPC decoding. The proposed scheme utilizes inter-bitplane correlation to enhance the bitplane decoding performance. Experimental results...
Performance analysis of WS-EWC coded optical CDMA networks with/without LDPC codes
Huang, Chun-Ming; Huang, Jen-Fa; Yang, Chao-Chin
2010-10-01
One extended Welch-Costas (EWC) code family for wavelength-division-multiplexing/spectral-amplitude-coding (WDM/SAC; WS) optical code-division multiple-access (OCDMA) networks is proposed. This system offers superior performance compared to the previous modified quadratic congruence (MQC) coded OCDMA networks. However, since the performance of such a network is unsatisfactory at higher data bit rates, a class of quasi-cyclic low-density parity-check (QC-LDPC) codes is adopted to improve it. Simulation results show that the performance of the high-speed WS-EWC coded OCDMA network can be greatly improved by using the LDPC codes.
A novel QC-LDPC code based on the finite field multiplicative group for optical communications
Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen
2013-09-01
A novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the finite field multiplicative group, which offers simpler construction, more flexible adjustment of code length and code rate, and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code achieves good error-correction performance over the additive white Gaussian noise (AWGN) channel with iterative sum-product algorithm (SPA) decoding. At a bit error rate (BER) of 10-6, the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code exceeds that of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1, and the SCG-LDPC(3969,3720) code constructed by the random method by 1.8 dB, 0.9 dB and 0.2 dB, respectively. It is therefore well suited for optical communication systems.
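The paper's construction uses the multiplicative group of a finite field; as a simplified illustration of the same QC-LDPC idea, the sketch below tiles a parity-check matrix from circulant permutation matrices with shift values (i·j) mod p, a standard exponent choice shown purely for illustration, not necessarily the authors' rule.

```python
import numpy as np

def circulant_perm(p, shift):
    """p x p identity matrix with columns cyclically shifted by `shift`."""
    return np.roll(np.eye(p, dtype=int), shift, axis=1)

def qc_ldpc(rows, cols, p):
    """Parity-check matrix tiled from circulant permutation matrices with
    shift values (i * j) mod p; each block row/column contributes exactly
    one 1 per column/row, giving a regular matrix."""
    return np.block([[circulant_perm(p, (i * j) % p) for j in range(cols)]
                     for i in range(rows)])

H = qc_ldpc(3, 5, 7)   # (21, 35) regular matrix: column weight 3, row weight 5
```

Changing `rows`, `cols`, and `p` adjusts code rate and length independently, which is the flexibility the abstract highlights.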
Directory of Open Access Journals (Sweden)
H. Prashantha Kumar
2011-09-01
Full Text Available Low-density parity-check (LDPC) codes are capacity-approaching codes, which means that practical constructions exist that allow the noise threshold to be set very close to the theoretical Shannon limit for a memoryless channel. LDPC codes are finding increasing use in applications like LTE networks, digital television, high-density data storage systems, deep-space communication systems, etc. Several algebraic and combinatorial methods are available for constructing LDPC codes. In this paper we discuss a novel low-complexity algebraic method for constructing regular LDPC-like codes derived from full-rank codes. We demonstrate that by employing these codes over AWGN channels, coding gains in excess of 2 dB over uncoded systems can be realized when soft iterative decoding using a parity check tree is employed.
LDPC-coded orbital angular momentum (OAM) modulation for free-space optical communication.
Djordjevic, Ivan B; Arabaci, Murat
2010-11-22
An orbital angular momentum (OAM) based LDPC-coded modulation scheme suitable for use in FSO communication is proposed. We demonstrate that the proposed scheme can operate in the strong atmospheric turbulence regime and enable 100 Gb/s optical transmission while employing 10 Gb/s components. Both binary and nonbinary LDPC-coded OAM modulations are studied. In addition to providing better BER performance, the nonbinary LDPC-coded modulation reduces overall decoder complexity and latency. The nonbinary LDPC-coded OAM modulation provides a net coding gain of 9.3 dB at a BER of 10(-8). The maximum-ratio combining scheme outperforms the corresponding equal-gain combining scheme by almost 2.5 dB.
Differentially Encoded LDPC Codes—Part II: General Case and Code Optimization
Directory of Open Access Journals (Sweden)
Jing Li (Tiffany)
2008-01-01
Full Text Available This two-part series of papers studies the theory and practice of differentially encoded low-density parity-check (DE-LDPC) codes, especially in the context of noncoherent detection. Part I showed that a special class of DE-LDPC codes, product accumulate codes, perform very well with both coherent and noncoherent detection. The analysis here reveals that a conventional LDPC code, however, is not well suited to differential coding and does not, in general, deliver a desirable performance when detected noncoherently. Through extrinsic information transfer (EXIT) analysis and a modified "convergence-constraint" density evolution (DE) method developed here, we provide a characterization of the type of LDPC degree profiles that work in harmony with differential detection (or, more generally, a recursive inner code), and demonstrate how to optimize these LDPC codes. The convergence-constraint method provides a useful extension to the conventional "threshold-constraint" method, and can match an outer LDPC code to any given inner code with the imperfectness of the inner decoder taken into consideration.
Rate-Compatible LDPC Codes with Linear Minimum Distance
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel
2009-01-01
A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either a fixed input block size or a fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes. These are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division modulation.
Construction of Short-length High-rates Ldpc Codes Using Difference Families
Deny Hamdani; Ery Safrianti
2007-01-01
Low-density parity-check (LDPC) code is a linear-block error-correcting code defined by a sparse parity-check matrix. It is decoded using the message-passing algorithm, and in many cases is capable of outperforming turbo code. This paper presents a class of low-density parity-check (LDPC) codes showing good performance with low encoding complexity. The code is constructed using difference families from combinatorial design. The resulting code, which is designed to have short code length and high code rate...
LDPC code decoding adapted to the precoded partial response magnetic recording channels
International Nuclear Information System (INIS)
Lee, Jun; Kim, Kyuyong; Lee, Jaejin; Yang, Gijoo
2004-01-01
We propose a signal processing technique using LDPC (low-density parity-check) code instead of a PRML (partial response maximum likelihood) system for the longitudinal magnetic recording channel. The scheme is designed by introducing a precoder that admits level detection at the receiver end and by modifying the likelihood function for LDPC code decoding. The scheme can be combined with other decoders in turbo-like systems. The proposed algorithm can help improve the performance of conventional turbo-like systems.
Batshon, Hussam G; Djordjevic, Ivan; Schmidt, Ted
2010-09-13
We propose a subcarrier-multiplexed four-dimensional LDPC bit-interleaved coded modulation scheme that is capable of achieving beyond 480 Gb/s single-channel transmission rate over optical channels. The subcarrier-multiplexed four-dimensional LDPC coded modulation scheme outperforms the corresponding dual-polarization schemes by up to 4.6 dB in OSNR at a BER of 10(-8).
LDPC code decoding adapted to the precoded partial response magnetic recording channels
Energy Technology Data Exchange (ETDEWEB)
Lee, Jun E-mail: leejun28@sait.samsung.co.kr; Kim, Kyuyong; Lee, Jaejin; Yang, Gijoo
2004-05-01
We propose a signal processing technique using LDPC (low-density parity-check) code instead of a PRML (partial response maximum likelihood) system for the longitudinal magnetic recording channel. The scheme is designed by introducing a precoder that admits level detection at the receiver end and by modifying the likelihood function for LDPC code decoding. The scheme can be combined with other decoders in turbo-like systems. The proposed algorithm can help improve the performance of conventional turbo-like systems.
Construction of Short-Length High-Rates LDPC Codes Using Difference Families
Directory of Open Access Journals (Sweden)
Deny Hamdani
2010-10-01
Full Text Available Low-density parity-check (LDPC) code is a linear-block error-correcting code defined by a sparse parity-check matrix. It is decoded using the message-passing algorithm, and in many cases is capable of outperforming turbo code. This paper presents a class of low-density parity-check (LDPC) codes showing good performance with low encoding complexity. The code is constructed using difference families from combinatorial design. The resulting code, which is designed to have short code length and high code rate, can be encoded with low complexity due to its quasi-cyclic structure, and performs well when it is iteratively decoded with the sum-product algorithm. These properties of LDPC code are quite suitable for applications in future wireless local area networks.
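To illustrate the difference-family construction, the sketch below uses the (13, 3, 1) difference family with base blocks {0, 1, 4} and {0, 2, 7}: each base block yields a 13x13 circulant, and since every nonzero residue arises at most once as a difference, no two columns share more than one row, which rules out 4-cycles. The helper name is ours, and the specific family is an assumed example, not necessarily one from the paper.

```python
import numpy as np

def circulant_from_block(v, block):
    """v x v circulant whose first row has 1s at the base-block positions."""
    first = np.zeros(v, dtype=int)
    first[list(block)] = 1
    return np.stack([np.roll(first, s) for s in range(v)])

# (13, 3, 1) difference family: every nonzero residue mod 13 arises exactly
# once as a within-block difference, so columns overlap in at most one row.
base_blocks = [{0, 1, 4}, {0, 2, 7}]
H = np.hstack([circulant_from_block(13, b) for b in base_blocks])
# H is a (13, 26) column-weight-3, row-weight-6 quasi-cyclic matrix
```

The quasi-cyclic structure means a shift-register encoder suffices, which is the low-complexity property the abstract emphasizes.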
Multiple component codes based generalized LDPC codes for high-speed optical transport.
Djordjevic, Ivan B; Wang, Ting
2014-07-14
A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially-coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.
Differentially Encoded LDPC Codes—Part II: General Case and Code Optimization
Directory of Open Access Journals (Sweden)
Jing Li (Tiffany)
2008-04-01
Full Text Available This two-part series of papers studies the theory and practice of differentially encoded low-density parity-check (DE-LDPC) codes, especially in the context of noncoherent detection. Part I showed that a special class of DE-LDPC codes, product accumulate codes, perform very well with both coherent and noncoherent detection. The analysis here reveals that a conventional LDPC code, however, is not well suited to differential coding and does not, in general, deliver a desirable performance when detected noncoherently. Through extrinsic information transfer (EXIT) analysis and a modified "convergence-constraint" density evolution (DE) method developed here, we provide a characterization of the type of LDPC degree profiles that work in harmony with differential detection (or, more generally, a recursive inner code), and demonstrate how to optimize these LDPC codes. The convergence-constraint method provides a useful extension to the conventional "threshold-constraint" method, and can match an outer LDPC code to any given inner code with the imperfectness of the inner decoder taken into consideration.
Evaluation of large girth LDPC codes for PMD compensation by turbo equalization.
Minkov, Lyubomir L; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Kueppers, Franko
2008-08-18
Large-girth quasi-cyclic LDPC codes have been experimentally evaluated for use in PMD compensation by turbo equalization for a 10 Gb/s NRZ optical transmission system, observing one sample per bit. The net effective coding gain improvement for the girth-10, rate-0.906 code of length 11936 over a maximum a posteriori probability (MAP) detector, for a differential group delay of 125 ps, is 6.25 dB at a BER of 10(-6). The girth-10 LDPC code of rate 0.8 outperforms the girth-10 code of rate 0.906 by 2.75 dB, and provides a net effective coding gain improvement of 9 dB at the same BER. It is experimentally determined that girth-10 LDPC codes of length around 15000 approach the channel capacity limit to within 1.25 dB.
High Girth Column-Weight-Two LDPC Codes Based on Distance Graphs
Directory of Open Access Journals (Sweden)
Gabofetswe Malema
2007-01-01
Full Text Available LDPC codes of column weight two are constructed from minimal distance graphs, or cages. Distance graphs are used to represent LDPC code matrices such that graph vertices represent rows and graph edges represent columns. The conversion of a distance graph into matrix form produces an adjacency matrix with column weight two and girth double that of the graph. The number of 1's in each row (row weight) is equal to the degree of the corresponding vertex. By constructing graphs with different vertex degrees, we can vary the rate of the corresponding LDPC code matrices. Cage graphs are used as examples of distance graphs to design codes with different girths and rates. The performance of the obtained codes depends on the girth and structure of the corresponding distance graphs.
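The vertex-edge mapping described above amounts to building the incidence matrix of the graph. A minimal sketch using the Petersen graph (the 3-regular (3,5)-cage, graph girth 5) gives a 10x15 column-weight-two matrix whose Tanner graph has girth 10; the helper name is ours.

```python
import numpy as np

def incidence_matrix(num_vertices, edges):
    """Vertex-edge incidence matrix: rows are vertices (check nodes),
    columns are edges (variable nodes), so every column has weight two."""
    H = np.zeros((num_vertices, len(edges)), dtype=int)
    for col, (u, v) in enumerate(edges):
        H[u, col] = H[v, col] = 1
    return H

# Petersen graph: outer 5-cycle, inner pentagram, five spokes
edges = ([(i, (i + 1) % 5) for i in range(5)] +
         [(5 + i, 5 + (i + 2) % 5) for i in range(5)] +
         [(i, i + 5) for i in range(5)])
H = incidence_matrix(10, edges)   # column weight 2, row weight 3, girth 10
```

Using cages with higher vertex degree raises the row weight and hence the rate, exactly as the abstract describes.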
Unitals and ovals of symmetric block designs in LDPC and space-time coding
Andriamanalimanana, Bruno R.
2004-08-01
An approach to the design of LDPC (low density parity check) error-correction and space-time modulation codes involves starting with known mathematical and combinatorial structures, and deriving code properties from structure properties. This paper reports on an investigation of unital and oval configurations within generic symmetric combinatorial designs, not just classical projective planes, as the underlying structure for classes of space-time LDPC outer codes. Of particular interest are the encoding and iterative (sum-product) decoding gains that these codes may provide. Various small-length cases have been numerically implemented in Java and Matlab for a number of channel models.
Product code optimization for determinate state LDPC decoding in robust image transmission.
Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G
2006-08-01
We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.
LDPC Code Design for Nonuniform Power-Line Channels
Directory of Open Access Journals (Sweden)
Sanaei Ali
2007-01-01
Full Text Available We investigate low-density parity-check code design for discrete multitone channels over power lines. Discrete multitone channels are well modeled as nonuniform channels, that is, different bits experience different channel parameters. We propose a coding system for discrete multitone channels that allows for using a single code over a nonuniform channel. The number of code parameters for the proposed system is much greater than the number of code parameters in a conventional channel. Therefore, search-based optimization methods are impractical. We first formulate the problem of optimizing the rate of an irregular low-density parity-check code, with guaranteed convergence over a general nonuniform channel, as an iterative linear program, which is significantly more efficient than search-based methods. Then we apply this technique to a typical power-line channel. The methodology of this paper is directly applicable to all decoding algorithms for which a density evolution analysis is possible.
Experimental study of non-binary LDPC coding for long-haul coherent optical QPSK transmissions.
Zhang, Shaoliang; Arabaci, Murat; Yaman, Fatih; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Inada, Yoshihisa; Ogata, Takaaki; Aoki, Yasuhiro
2011-09-26
The performance of a rate-0.8 4-ary LDPC code has been studied in a 50 GHz-spaced 40 Gb/s DWDM system with PDM-QPSK modulation. A net effective coding gain of 10 dB is obtained at a BER of 10(-6). With the aid of time-interleaving polarization multiplexing and MAP detection, 10,560 km transmission over legacy dispersion-managed fiber is achieved without any countable errors. The proposed nonbinary quasi-cyclic LDPC code achieves an uncoded BER threshold at 4×10(-2). Potential issues such as phase ambiguity and coding length are also discussed for implementing LDPC in current coherent optical systems. © 2011 Optical Society of America
Characterization and Optimization of LDPC Codes for the 2-User Gaussian Multiple Access Channel
Directory of Open Access Journals (Sweden)
Declercq David
2007-01-01
Full Text Available We address the problem of designing good LDPC codes for the Gaussian multiple access channel (MAC). The framework we choose is to design multiuser LDPC codes with joint belief propagation decoding on the joint graph of the 2-user case. Our main contribution relative to existing work is to express analytically the EXIT functions of the multiuser decoder with two different approximations of the density evolution. This allows us to propose a very simple linear programming optimization for the complicated problem of LDPC code design with joint multiuser decoding. The stability condition for our case is derived and used in the optimization constraints. The codes that we obtain for the 2-user case are quite good for various rates, especially considering the very simple optimization procedure.
Construction of LDPC codes over GF(q) with modified progressive edge growth
Institute of Scientific and Technical Information of China (English)
CHEN Xin; MEN Ai-dong; YANG Bo; QUAN Zi-yi
2009-01-01
A parity-check matrix construction method for low-density parity-check (LDPC) codes over GF(q) (q>2) based on the modified progressive edge growth (PEG) algorithm is introduced. First, the nonzero locations of the parity-check matrix are selected using the PEG algorithm. Then the nonzero elements are chosen so as to avoid defining a subcode. A proof is given to show the good minimum-distance property of the constructed GF(q)-LDPC codes. Simulations are also presented to illustrate the good error performance of the designed codes.
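The modified PEG algorithm additionally assigns nonzero GF(q) entries; the binary skeleton it builds on can be sketched as follows, where each new edge connects a variable node to a check node that is farthest away (or unreachable) in the current Tanner graph, with a lowest-degree tie-break. Names and parameters are illustrative only.

```python
import numpy as np
from collections import deque

def peg_construct(n_checks, n_vars, var_degree):
    """Basic Progressive Edge-Growth: grow the Tanner graph one edge at a
    time, always attaching to a check node at maximal distance from the
    current variable node, which maximizes the local girth."""
    var_nbrs = [set() for _ in range(n_vars)]
    chk_nbrs = [set() for _ in range(n_checks)]
    for v in range(n_vars):
        for _ in range(var_degree):
            dist = {('v', v): 0}              # BFS over the bipartite graph
            q = deque([('v', v)])
            reached = set()
            while q:
                kind, x = q.popleft()
                nbrs = var_nbrs[x] if kind == 'v' else chk_nbrs[x]
                nxt = 'c' if kind == 'v' else 'v'
                for y in nbrs:
                    if (nxt, y) not in dist:
                        dist[(nxt, y)] = dist[(kind, x)] + 1
                        if nxt == 'c':
                            reached.add(y)
                        q.append((nxt, y))
            unreached = set(range(n_checks)) - reached
            if unreached:                      # unreachable check: no new cycle
                cands = unreached
            else:                              # else take the farthest checks
                avail = reached - var_nbrs[v]
                far = max(dist[('c', c)] for c in avail)
                cands = {c for c in avail if dist[('c', c)] == far}
            c = min(cands, key=lambda ch: (len(chk_nbrs[ch]), ch))
            var_nbrs[v].add(c)
            chk_nbrs[c].add(v)
    H = np.zeros((n_checks, n_vars), dtype=int)
    for v, cs in enumerate(var_nbrs):
        for ch in cs:
            H[ch, v] = 1
    return H

H = peg_construct(6, 12, 2)   # 12 degree-2 variable nodes over 6 checks
```

The GF(q) modification in the paper then replaces each 1 of H with a nonzero field element chosen to avoid creating a subcode.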
Power Allocation Optimization: Linear Precoding Adapted to NB-LDPC Coded MIMO Transmission
Directory of Open Access Journals (Sweden)
Tarek Chehade
2015-01-01
Full Text Available In multiple-input multiple-output (MIMO) transmission systems, the channel state information (CSI) at the transmitter can be used to add linear precoding to the transmitted signals in order to improve the performance and reliability of the transmission system. This paper investigates how to properly combine precoded closed-loop MIMO systems and nonbinary low-density parity-check (NB-LDPC) codes. The q elements in the Galois field GF(q) are directly mapped to q transmit symbol vectors. This allows NB-LDPC codes to fit perfectly with a MIMO precoding scheme, unlike binary LDPC codes. The new transmission model is detailed and studied for several linear precoders and various designed LDPC codes. We show that NB-LDPC codes are particularly well suited to be jointly used with precoding schemes based on the maximization of the minimum Euclidean distance (max-dmin) criterion. These results are theoretically supported by extrinsic information transfer (EXIT) analysis and are confirmed by numerical simulations.
Weighted-Bit-Flipping-Based Sequential Scheduling Decoding Algorithms for LDPC Codes
Directory of Open Access Journals (Sweden)
Qing Zhu
2013-01-01
Full Text Available Low-density parity-check (LDPC) codes can be applied in many different scenarios, such as video broadcasting and satellite communications. LDPC codes are commonly decoded by an iterative algorithm called belief propagation (BP) over the corresponding Tanner graph. The original BP updates all the variable nodes simultaneously, followed by all the check nodes simultaneously as well. We propose a sequential scheduling algorithm based on the weighted bit-flipping (WBF) algorithm to improve the convergence speed. Notably, WBF is a simple, low-complexity algorithm. We combine it with BP to obtain the advantages of both algorithms: the flipping function used in WBF is borrowed to determine the priority of scheduling. Simulation results show that the proposed algorithm provides a good tradeoff between FER performance and computational complexity for short-length LDPC codes.
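A minimal sketch of plain WBF decoding (without the proposed BP scheduling) for BPSK over AWGN: each check is weighted by its least reliable bit, and the bit with the largest weighted sum of unsatisfied checks is flipped each iteration. Names, the toy matrix, and the input are our own assumptions.

```python
import numpy as np

def wbf_decode(H, y, max_iter=50):
    """Weighted bit-flipping: weight each check by the smallest channel
    magnitude among its bits, then per iteration flip the one bit with
    the largest weighted sum of unsatisfied checks."""
    m, _ = H.shape
    hard = (y < 0).astype(int)                       # BPSK: +1 -> 0, -1 -> 1
    w = np.array([np.abs(y[H[j] == 1]).min() for j in range(m)])
    for _ in range(max_iter):
        s = H @ hard % 2
        if not s.any():
            return hard, True
        flip_metric = ((2 * s - 1) * w) @ H          # per-bit flipping function
        hard[np.argmax(flip_metric)] ^= 1
    return hard, False

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.array([1.0, 1.0, 1.0, -0.4, 1.0, 1.0, 1.0])  # all-zero word, bit 3 hit
decoded, ok = wbf_decode(H, y)
```

The paper reuses `flip_metric` not to flip bits but to decide which nodes BP should update first, which is the scheduling idea in the abstract.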
Implementation of Layered Decoding Architecture for LDPC Code using Layered Min-Sum Algorithm
Directory of Open Access Journals (Sweden)
Sandeep Kakde
2017-12-01
Full Text Available For binary fields and long code lengths, low-density parity-check (LDPC) codes approach Shannon-limit performance. LDPC codes provide remarkable error correction performance and therefore enlarge the design space for communication systems. In this paper, we compare different digital modulation techniques and find that BPSK outperforms the other techniques in terms of BER. We also give the error performance of an LDPC decoder over the AWGN channel using the min-sum algorithm. A VLSI architecture is proposed that exploits the value-reuse property of the min-sum algorithm to achieve high throughput. The proposed design has been implemented and tested on a Xilinx Virtex 5 FPGA. MATLAB results show that the LDPC decoder achieves bit error rates (BER) in the range of 10-1 to 10-3.5 at SNR = 1 to 2 for 20 iterations, demonstrating good BER performance. The latency of the parallel LDPC decoder design is also reduced. The design achieves a maximum frequency of 141.22 MHz and a throughput of 2.02 Gbps while consuming less area.
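The layered schedule in the paper updates groups of check nodes sequentially; for brevity, the sketch below implements the underlying min-sum rule with a simple flooding schedule (assumed names and toy inputs, not the paper's FPGA design): each check-to-variable message is the product of the other incoming signs times their minimum magnitude.

```python
import numpy as np

def min_sum_decode(H, llr, max_iter=20):
    """Flooding-schedule min-sum: check-to-variable messages use the
    minimum of the other incoming magnitudes and the product of their
    signs; variable nodes send back extrinsic totals."""
    rows, cols = np.nonzero(H)
    v2c = llr[cols].astype(float)          # one message per Tanner-graph edge
    c2v = np.zeros_like(v2c)
    for _ in range(max_iter):
        for j in range(H.shape[0]):
            e = np.nonzero(rows == j)[0]   # edges incident to check j
            mags, signs = np.abs(v2c[e]), np.sign(v2c[e])
            for k, ei in enumerate(e):
                others = np.delete(np.arange(e.size), k)
                c2v[ei] = np.prod(signs[others]) * mags[others].min()
        total = llr.astype(float)
        np.add.at(total, cols, c2v)        # posterior LLR per bit
        hard = (total < 0).astype(int)
        if not (H @ hard % 2).any():
            return hard, True
        v2c = total[cols] - c2v            # extrinsic variable-to-check update
    return hard, False

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([2.0, 2.0, 2.0, -0.5, 2.0, 2.0, 2.0])   # bit 3 received weakly
decoded, ok = min_sum_decode(H, llr)
```

A layered version processes one check row (or block row) at a time and refreshes `total` immediately, which is what halves the iteration count in hardware.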
PMD compensation in fiber-optic communication systems with direct detection using LDPC-coded OFDM.
Djordjevic, Ivan B
2007-04-02
The possibility of polarization-mode dispersion (PMD) compensation in fiber-optic communication systems with direct detection using a simple channel estimation technique and low-density parity-check (LDPC)-coded orthogonal frequency division multiplexing (OFDM) is demonstrated. It is shown that even for differential group delay (DGD) of 4/BW (BW is the OFDM signal bandwidth), the degradation due to the first-order PMD can be completely compensated for. Two classes of LDPC codes designed based on two different combinatorial objects (difference systems and product of combinatorial designs) suitable for use in PMD compensation are introduced.
Performance Analysis of Iterative Decoding Algorithms for PEG LDPC Codes in Nakagami Fading Channels
Directory of Open Access Journals (Sweden)
O. Al Rasheed
2013-11-01
Full Text Available In this paper we give a comparative analysis of decoding algorithms for low-density parity-check (LDPC) codes in a channel with a Nakagami distribution of the fading envelope. We consider the progressive edge-growth (PEG) method and the improved PEG method for parity-check matrix construction, which can be used to avoid short cycles, small trapping sets and a high error floor. A comparative analysis of several classes of LDPC codes in various propagation conditions, decoded using different decoding algorithms, is also presented.
Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding
Mahmoud, Saad; Hi, Jianjun
2012-01-01
Low-density parity-check (LDPC) code decoding algorithms make use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is the ratio between the signal amplitude and the noise variance. Accurately estimating this ratio has shown as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a pilot-guided estimation method, a blind estimation method, and a simulation-based look-up table. In the pilot-guided estimation method, the maximum-likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the signal variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be collected. The blind estimation method's maximum-likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulation results to determine signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of
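The pilot-guided estimator described above can be sketched directly: the amplitude is the mean inner product with the known ASM symbols, and the variance is the mean squared sample minus the squared amplitude. The names and the synthetic numbers below are our own, purely for illustration.

```python
import numpy as np

def pilot_guided_ratio(received, asm_bpsk):
    """Combining ratio from the pilot-guided estimates described above:
    amplitude = mean inner product with the known ASM symbols,
    variance  = mean squared sample minus squared amplitude."""
    amp = np.mean(received * asm_bpsk)
    var = np.mean(received ** 2) - amp ** 2
    return amp / var

# synthetic check: amplitude 1.5, noise sigma 0.5 -> true ratio 1.5/0.25 = 6
rng = np.random.default_rng(0)
asm = rng.choice([-1.0, 1.0], size=20000)
rx = 1.5 * asm + rng.normal(0.0, 0.5, size=asm.size)
ratio = pilot_guided_ratio(rx, asm)
```

In practice the ASM samples would be accumulated over several frames, which is the latency cost the text points out.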
Iterative decoding of SOVA and LDPC product code for bit-patterned media recording
Jeong, Seongkwon; Lee, Jaejin
2018-05-01
The demand for high-density storage systems has increased due to the exponential growth of data. Bit-patterned media recording (BPMR) is one of the promising technologies to achieve an areal density of 1 Tbit/in2 and higher. To increase the areal density in BPMR, the spacing between islands needs to be reduced, yet this aggravates inter-symbol interference and inter-track interference and degrades the bit error rate performance. In this paper, we propose a decision feedback scheme using a low-density parity check (LDPC) product code for BPMR. This scheme can improve the decoding performance using an iterative approach with extrinsic information and log-likelihood ratio values exchanged between an iterative soft output Viterbi algorithm and the LDPC product code. Simulation results show that the proposed LDPC product code can offer 1.8 dB and 2.3 dB gains over a single LDPC code at densities of 2.5 and 3 Tb/in2, respectively, at a bit error rate of 10^-6.
Design and Analysis of Adaptive Message Coding on LDPC Decoder with Faulty Storage
Directory of Open Access Journals (Sweden)
Guangjun Ge
2018-01-01
Full Text Available Unreliable message storage severely degrades the performance of LDPC decoders. This paper discusses the impact of message errors on LDPC decoders and schemes for improving their robustness. Firstly, we develop a discrete density evolution analysis for faulty LDPC decoders, which indicates that protecting the sign bits of messages is sufficient for finite-precision LDPC decoders. Secondly, we analyze the effects of quantization precision loss for static sign-bit protection and propose an embedded dynamic coding scheme that adaptively employs the least significant bits (LSBs) to protect the sign bits. Thirdly, we give a construction of a Hamming product code for the adaptive coding and present low-complexity decoding algorithms. Theoretical analysis indicates that the proposed scheme outperforms the traditional triple modular redundancy (TMR) scheme in both decoding threshold and residual errors, while Monte Carlo simulations show that the performance loss is less than 0.2 dB when the storage error probability varies from 10^-3 to 10^-4.
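As a point of reference for the TMR baseline the paper compares against, a minimal Monte Carlo sketch of majority-voted storage is shown below, assuming independent bit flips with probability p (a simplification of the paper's storage-error model):

```python
import numpy as np

rng = np.random.default_rng(1)

def tmr_read_error_rate(p, trials=200_000):
    """Triple modular redundancy: each bit is stored in 3 copies, each copy
    flips independently with probability p; a read fails when 2+ copies flip."""
    flips = rng.random((trials, 3)) < p
    return float(np.mean(flips.sum(axis=1) >= 2))

p = 0.05
raw = p                                   # unprotected storage error rate
tmr = tmr_read_error_rate(p)
analytic = 3 * p**2 * (1 - p) + p**3      # majority vote fails on >= 2 flips
```

For small p the TMR failure rate scales as roughly 3p², which is why TMR helps against storage errors but costs a 3x memory overhead; the paper's sign-bit coding targets the same protection at lower cost.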
Analysis and Construction of Full-Diversity Joint Network-LDPC Codes for Cooperative Communications
Directory of Open Access Journals (Sweden)
Capirone Daniele
2010-01-01
Full Text Available Transmit diversity is necessary in harsh environments to reduce the required transmit power for achieving a given error performance at a certain transmission rate. In networks, cooperative communication is a well-known technique to yield transmit diversity, and network coding can increase the spectral efficiency. These two techniques can be combined to achieve a double diversity order for a maximum coding rate on the Multiple-Access Relay Channel (MARC), where two sources share a common relay in their transmission to the destination. However, codes have to be carefully designed to obtain the intrinsic diversity offered by the MARC. This paper presents the principles for designing a family of full-diversity LDPC codes with maximum rate. Simulation of the word error rate performance of the newly proposed family of LDPC codes for the MARC confirms the full diversity.
Construction and Iterative Decoding of LDPC Codes Over Rings for Phase-Noisy Channels
Directory of Open Access Journals (Sweden)
William G. Cowley
2008-04-01
Full Text Available This paper presents the construction and iterative decoding of low-density parity-check (LDPC) codes for channels affected by phase noise. The LDPC code is based on integer rings and designed to converge under phase-noisy channels. We assume that phase variations are small over short blocks of adjacent symbols. A part of the constructed code is inherently built with this knowledge and hence able to withstand a phase rotation of 2π/M radians, where "M" is the number of phase symmetries in the signal set, that occur at different observation intervals. Another part of the code estimates the phase ambiguity present in every observation interval. The code makes use of simple blind or turbo phase estimators to provide phase estimates over every observation interval. We propose an iterative decoding schedule to apply the sum-product algorithm (SPA) on the factor graph of the code for its convergence. To illustrate the new method, we present the performance results of an LDPC code constructed over ℤ4 with quadrature phase shift keying (QPSK) modulated signals transmitted over a static channel, but affected by phase noise, which is modeled by the Wiener (random-walk) process. The results show that the code can withstand phase noise of 2° standard deviation per symbol with small loss.
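The Wiener phase-noise model and the M-fold ambiguity mentioned above are easy to illustrate. The sketch below, with assumed parameters, generates a random-walk phase and verifies that a 2π/M rotation maps the QPSK constellation onto itself, which is exactly the ambiguity the ring-based code must resolve:

```python
import numpy as np

rng = np.random.default_rng(2)

M = 4                          # phase symmetries of QPSK
sigma_phi = np.deg2rad(2.0)    # 2 degrees of phase noise per symbol, as in the paper
N = 1000

# Wiener (random-walk) phase noise: theta[k] = theta[k-1] + Gaussian increment
theta = np.cumsum(sigma_phi * rng.normal(size=N))

# QPSK constellation and a random symbol stream
const = np.exp(1j * (2 * np.pi * np.arange(M) / M + np.pi / M))
syms = const[rng.integers(0, M, size=N)]

# Rotating by 2*pi/M maps the constellation onto itself: this is the M-fold
# ambiguity the code has to resolve in every observation interval
rotated = syms * np.exp(1j * 2 * np.pi / M)
ambiguity = np.abs(rotated[:, None] - const[None, :]).min(axis=1)
```

Because the rotated symbols are indistinguishable from valid transmissions, a blind estimator alone can only recover the phase modulo 2π/M; the code structure supplies the rest.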
Construction and Iterative Decoding of LDPC Codes Over Rings for Phase-Noisy Channels
Directory of Open Access Journals (Sweden)
Karuppasami Sridhar
2008-01-01
Full Text Available This paper presents the construction and iterative decoding of low-density parity-check (LDPC) codes for channels affected by phase noise. The LDPC code is based on integer rings and designed to converge under phase-noisy channels. We assume that phase variations are small over short blocks of adjacent symbols. A part of the constructed code is inherently built with this knowledge and hence able to withstand a phase rotation of 2π/M radians, where "M" is the number of phase symmetries in the signal set, that occur at different observation intervals. Another part of the code estimates the phase ambiguity present in every observation interval. The code makes use of simple blind or turbo phase estimators to provide phase estimates over every observation interval. We propose an iterative decoding schedule to apply the sum-product algorithm (SPA) on the factor graph of the code for its convergence. To illustrate the new method, we present the performance results of an LDPC code constructed over ℤ4 with quadrature phase shift keying (QPSK) modulated signals transmitted over a static channel, but affected by phase noise, which is modeled by the Wiener (random-walk) process. The results show that the code can withstand phase noise of 2° standard deviation per symbol with small loss.
Drăghici, S.; Proştean, O.; Răduca, E.; Haţiegan, C.; Hălălae, I.; Pădureanu, I.; Nedeloni, M.; Barboni Haţiegan, L.
2017-01-01
In this paper, a method is shown with which a set of characteristic functions is associated with an LDPC code, together with functions that represent the density evolution of the messages passed along the edges of a Tanner graph. Graphical representations of the density evolution are shown, and the study and simulation of the likelihood threshold that yields the asymptotic boundaries between which decodable codes exist were carried out using the MathCad V14 software.
Directory of Open Access Journals (Sweden)
Rovini Massimo
2009-01-01
Full Text Available This is a reply to the comments by Gunnam et al. "Comments on 'Techniques and architectures for hazard-free semi-parallel decoding of LDPC codes'", EURASIP Journal on Embedded Systems, vol. 2009, Article ID 704174 on our recent work "Techniques and architectures for hazard-free semi-parallel decoding of LDPC codes", EURASIP Journal on Embedded Systems, vol. 2009, Article ID 723465.
Rate-compatible protograph LDPC code families with linear minimum distance
Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J. (Inventor); Jones, Christopher R. (Inventor)
2012-01-01
Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds.
Differentially Encoded LDPC Codes—Part I: Special Case of Product Accumulate Codes
Directory of Open Access Journals (Sweden)
(Tiffany) Jing Li
2008-01-01
Full Text Available Part I of a two-part series investigates product accumulate codes, a special class of differentially-encoded low density parity check (DE-LDPC) codes with high performance and low complexity, on flat Rayleigh fading channels. In the coherent detection case, Divsalar's simple bounds and iterative thresholds using density evolution are computed to quantify the code performance at finite and infinite lengths, respectively. In the noncoherent detection case, a simple iterative differential detection and decoding (IDDD) receiver is proposed and shown to be robust for different Doppler shifts. Extrinsic information transfer (EXIT) charts reveal that, with pilot symbol assisted differential detection, the widespread practice of inserting pilot symbols to terminate the trellis actually incurs a loss in capacity, and a more efficient way is to separate pilots from the trellis. Through analysis and simulations, it is shown that PA codes perform very well with both coherent and noncoherent detections. The more general case of DE-LDPC codes, where the LDPC part may take arbitrary degree profiles, is studied in Part II (Li, 2008).
Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui
2016-09-01
In order to meet the needs of the high-speed development of optical communication systems, a construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity check matrix of the code constructed by this method has no cycle of length 4, which ensures that the obtained code has a good distance property. Simulation results show that at a bit error rate (BER) of 10^-6, in the same simulation environment, the net coding gain (NCG) of the proposed rate-93.7% QC-LDPC(3780, 3540) code is improved by 2.18 dB and 1.6 dB compared with those of the RS(255, 239) code in ITU-T G.975 and the LDPC(32640, 30592) code in ITU-T G.975.1, respectively. In addition, the NCG of the proposed QC-LDPC(3780, 3540) code is 0.2 dB and 0.4 dB higher than those of the SG-QC-LDPC(3780, 3540) code based on two different subgroups of a finite field and the AS-QC-LDPC(3780, 3540) code based on two arbitrary sets of a finite field, respectively. Thus, the proposed QC-LDPC(3780, 3540) code can be well applied in optical communication systems.
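The 4-cycle-free property rests on a standard algebraic condition on the exponent matrix of the circulant permutation matrices (CPMs). The sketch below uses a simple array-type exponent matrix e(i, j) = i·j mod m as a stand-in for the paper's multiplicative-group construction, which is not reproduced here:

```python
import numpy as np
from itertools import combinations

def cpm(shift, m):
    """m x m circulant permutation matrix (identity cyclically shifted)."""
    return np.roll(np.eye(m, dtype=int), shift, axis=1)

def exponent_matrix(J, L, m):
    """Array-type exponents e(i, j) = i*j mod m (illustrative, not the paper's)."""
    return np.array([[(i * j) % m for j in range(L)] for i in range(J)])

def has_four_cycle(E, m):
    """A length-4 cycle exists iff e(i1,j1)-e(i1,j2)+e(i2,j2)-e(i2,j1) = 0 mod m."""
    J, L = E.shape
    return any(
        (E[i1, j1] - E[i1, j2] + E[i2, j2] - E[i2, j1]) % m == 0
        for i1, i2 in combinations(range(J), 2)
        for j1, j2 in combinations(range(L), 2)
    )

m = 7                                    # prime CPM size
E = exponent_matrix(3, 5, m)
# expand the exponent matrix into the full parity-check matrix of CPM blocks
H = np.block([[cpm(E[i, j], m) for j in range(E.shape[1])] for i in range(E.shape[0])])
```

Any CPM-based QC-LDPC construction, the paper's included, is girth-6 exactly when `has_four_cycle` is false for its exponent matrix.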
LDPC product coding scheme with extrinsic information for bit patterned media recording
Directory of Open Access Journals (Sweden)
Seongkwon Jeong
2017-05-01
Full Text Available Since the density limit of the current perpendicular magnetic storage system will soon be reached, bit patterned media recording (BPMR) is a promising candidate for the next generation storage system to achieve an areal density beyond 1 Tb/in2. Each recording bit is stored in a fabricated magnetic island and the space between the magnetic islands is nonmagnetic in BPMR. To approach recording densities of 1 Tb/in2, the spacing of the magnetic islands must be less than 25 nm. Consequently, severe inter-symbol interference (ISI) and inter-track interference (ITI) occur. ITI and ISI degrade the performance of BPMR. In this paper, we propose a low-density parity check (LDPC) product coding scheme that exploits extrinsic information for BPMR. This scheme shows an improved bit error rate performance compared to that in which one LDPC code is used.
LDPC product coding scheme with extrinsic information for bit patterned media recording
Jeong, Seongkwon; Lee, Jaejin
2017-05-01
Since the density limit of the current perpendicular magnetic storage system will soon be reached, bit patterned media recording (BPMR) is a promising candidate for the next generation storage system to achieve an areal density beyond 1 Tb/in2. Each recording bit is stored in a fabricated magnetic island and the space between the magnetic islands is nonmagnetic in BPMR. To approach recording densities of 1 Tb/in2, the spacing of the magnetic islands must be less than 25 nm. Consequently, severe inter-symbol interference (ISI) and inter-track interference (ITI) occur. ITI and ISI degrade the performance of BPMR. In this paper, we propose a low-density parity check (LDPC) product coding scheme that exploits extrinsic information for BPMR. This scheme shows an improved bit error rate performance compared to that in which one LDPC code is used.
Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique
Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi
Reducing the power dissipation of LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) an intermediate message compression technique enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated shift-register-based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of these two techniques enables the decoder to reduce the power dissipation while maintaining the decoding throughput. Simulation results show that the proposed architecture improves the power efficiency by up to 52% and 18% compared to decoders based on the overlapped schedule and the rapid convergence schedule without the proposed techniques, respectively.
Construction of Rate-Compatible LDPC Codes Utilizing Information Shortening and Parity Puncturing
Directory of Open Access Journals (Sweden)
Jones Christopher R
2005-01-01
Full Text Available This paper proposes a method for constructing rate-compatible low-density parity-check (LDPC) codes. The construction considers the problem of optimizing a family of rate-compatible degree distributions as well as the placement of bipartite graph edges. A hybrid approach that combines information shortening and parity puncturing is proposed. Local graph conditioning techniques for the suppression of error floors are also included in the construction methodology.
Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M
2010-02-01
In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose using a rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency due to symbol-level instead of bit-level processing but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper shows, compared to its prior-art binary counterpart, the proposed NB-LDPC-CM scheme better addresses the needs of future OTNs, namely achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN.
Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes
Directory of Open Access Journals (Sweden)
Rovini Massimo
2009-01-01
Full Text Available The layered decoding algorithm has recently been proposed as an efficient means for the decoding of low-density parity-check (LDPC) codes, thanks to the remarkable improvement (2x) in the convergence speed of the decoding process. However, pipelined semi-parallel decoders suffer from violations or "hazards" between consecutive updates, which not only violate the layered principle but also reinforce the loops in the code, thus spoiling the error correction performance. This paper describes three different techniques to properly reschedule the decoding updates, based on the careful insertion of "idle" cycles, to prevent the hazards of the pipeline mechanism. Also, different semi-parallel architectures of a layered LDPC decoder suitable for use with such techniques are analyzed. Then, taking the LDPC codes for the wireless local area network (IEEE 802.11n) as a case study, a detailed analysis of the performance attained with the proposed techniques and architectures is reported, and results of the logic synthesis on a 65 nm low-power CMOS technology are shown.
A PEG Construction of LDPC Codes Based on the Betweenness Centrality Metric
Directory of Open Access Journals (Sweden)
BHURTAH-SEEWOOSUNGKUR, I.
2016-05-01
Full Text Available Progressive Edge Growth (PEG) constructions are usually based on optimizing the distance metric by various methods. In this work, however, the distance metric is replaced by a different one, namely the betweenness centrality metric, which was shown to enhance routing performance in wireless mesh networks. A new type of PEG construction for Low-Density Parity-Check (LDPC) codes is introduced based on the betweenness centrality metric borrowed from social networks terminology, given that the bipartite graph describing the LDPC code is analogous to a network of nodes. The algorithm is very efficient in filling edges on the bipartite graph by adding its connections in an edge-by-edge manner. The smallest graph size the new code could construct surpasses those obtained from a modified PEG algorithm, the RandPEG algorithm. To the best of the authors' knowledge, this paper produces the best regular column-weight-two LDPC graphs. In addition, the technique proves to be competitive in terms of error-correcting performance. When compared to MacKay, PEG and other recent modified-PEG codes, the algorithm gives better performance at high SNR due to its particular edge and local graph properties.
Low Complexity Approach for High Throughput Belief-Propagation based Decoding of LDPC Codes
Directory of Open Access Journals (Sweden)
BOT, A.
2013-11-01
Full Text Available The paper proposes a low complexity belief propagation (BP)-based decoding algorithm for LDPC codes. In spite of the iterative nature of the decoding process, the proposed algorithm provides both reduced complexity and improved BER performance compared with the classic min-sum (MS) algorithm generally used in hardware implementations. Linear approximations of the check-node update function are used in order to reduce the complexity of the BP algorithm. Considering this decoding approach, an FPGA-based hardware architecture is proposed for implementing the decoding algorithm, aiming to increase the decoder throughput. FPGA technology was chosen for the LDPC decoder implementation due to its parallel computation and reconfiguration capabilities. The obtained results show improvements regarding decoding throughput and BER performance compared with state-of-the-art approaches.
Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes
Jing, Lin; Brun, Todd; Quantum Research Team
Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Manabu Hagiwara et al. (2007) presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights, we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
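The requirement that Hc and Hd be orthogonal is the usual CSS condition Hc·Hd^T = 0 over GF(2), which guarantees that the X-type and Z-type stabilizers commute; it survives row deletion, which is what allows the code rate to be altered. A minimal check, using the [7,4] Hamming parity-check matrix as a small self-orthogonal stand-in for the paper's Hc, Hd pair:

```python
import numpy as np

def css_compatible(Hc, Hd):
    """CSS condition: the row spaces are orthogonal over GF(2),
    i.e. every entry of Hc @ Hd.T is even."""
    return not np.any((Hc @ Hd.T) % 2)

# Parity-check matrix of the [7,4] Hamming code; its row space is
# self-orthogonal over GF(2), so it can serve as both Hc and Hd here
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

# Deleting rows (as done in the paper to alter the code rate)
# cannot break orthogonality, only shrink the row space
H_sub = H[:2, :]
```

Using Hc = Hd = H above yields the Steane code; the paper's quasi-cyclic matrices satisfy the same condition at much larger block lengths.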
He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin; Su, Jinshu
2015-01-01
To improve the transmission performance of multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband (UWB) over optical fiber, a pre-coding scheme based on low-density parity-check (LDPC) codes is adopted and experimentally demonstrated in an intensity-modulation and direct-detection MB-OFDM UWB over fiber system. Meanwhile, a symbol synchronization and pilot-aided channel estimation scheme is implemented at the receiver of the MB-OFDM UWB over fiber system. The experimental results show that the LDPC pre-coding scheme can work effectively in the MB-OFDM UWB over fiber system. After 70 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1 × 10^-3, the receiver sensitivities are improved by about 4 dB when the LDPC code rate is 75%.
Design of a VLSI Decoder for Partially Structured LDPC Codes
Directory of Open Access Journals (Sweden)
Fabrizio Vacca
2008-01-01
Partially structured LDPC codes are codes in which the edges of their parity matrix can be partitioned into two disjoint sets, namely, the structured and the random ones. For the proposed class of codes a constructive design method is provided. To assess the value of this method, the performance of the constructed codes is presented. From these results, a novel decoding method called split decoding is introduced. Finally, to prove the effectiveness of the proposed approach, a whole VLSI decoder is designed and characterized.
Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.
Liu, Tao; Lin, Changyu; Djordjevic, Ivan B
2016-06-27
In this paper, we first describe a 9-symbol non-uniform signaling scheme based on Huffman coding, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme with the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization-multiplexed 9-QAM signaling scheme has the same spectral efficiency as conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC coded 9-QAM scheme outperforms nonbinary LDPC coded uniform 8-QAM by at least 0.8 dB.
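The Huffman procedure referred to above assigns shorter codewords to more probable symbols. The sketch below uses a hypothetical dyadic 9-point distribution chosen so the entropy is 3 bits/symbol, matching the spectral efficiency of uniform 8-QAM claimed in the abstract; the paper's actual 9-QAM probabilities are not reproduced here:

```python
import heapq
from math import log2

def huffman_lengths(probs):
    """Codeword lengths from the Huffman procedure: repeatedly merge the two
    least probable groups; each merge adds one bit to every symbol in them."""
    heap = [(p, i, (i,)) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    tiebreak = len(probs)   # unique key so equal probabilities never compare tuples
    while len(heap) > 1:
        p1, _, g1 = heapq.heappop(heap)
        p2, _, g2 = heapq.heappop(heap)
        for sym in g1 + g2:
            lengths[sym] += 1
        heapq.heappush(heap, (p1 + p2, tiebreak, g1 + g2))
        tiebreak += 1
    return lengths

# hypothetical dyadic distribution over 9 constellation points, entropy = 3 bits
probs = [1/4] + [1/8] * 4 + [1/16] * 4
lengths = huffman_lengths(probs)
avg_len = sum(p * l for p, l in zip(probs, lengths))
entropy = -sum(p * log2(p) for p in probs)
```

For a dyadic distribution the Huffman lengths equal -log2(p) exactly, so the average length meets the entropy, 3 bits/symbol, with no loss.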
Percolation bounds for decoding thresholds with correlated erasures in quantum LDPC codes
Hamilton, Kathleen; Pryadko, Leonid
Correlations between errors can dramatically affect decoding thresholds, in some cases eliminating the threshold altogether. We analyze the existence of a threshold for quantum low-density parity-check (LDPC) codes in the case of correlated erasures. When erasures are positively correlated, the corresponding multi-variate Bernoulli distribution can be modeled in terms of cluster errors, where qubits in clusters of various size can be marked all at once. In a code family with distance scaling as a power law of the code length, erasures can be always corrected below percolation on a qubit adjacency graph associated with the code. We bound this correlated percolation transition by weighted (uncorrelated) percolation on a specially constructed cluster connectivity graph, and apply our recent results to construct several bounds for the latter. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-14-1-0272.
A Novel Modified Algorithm with Reduced Complexity LDPC Code Decoder
Directory of Open Access Journals (Sweden)
Song Yang
2014-10-01
Full Text Available A novel efficient decoding algorithm that reduces the complexity of the sum-product algorithm (SPA) for LDPC codes is proposed. Based on the hyperbolic tangent rule, the check-node update is modified into two horizontal processes with similar calculations. Motivated by the finding that the min-sum (MS) algorithm reduces complexity, the approximation error in the horizontal process is reduced by simplifying only the part carrying small information weight. Compared with existing approximations, the proposed method has lower computational complexity than the SPA. Simulation results show that the proposed algorithm can achieve performance very close to the SPA.
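The hyperbolic tangent rule and its min-sum approximation mentioned above can be compared directly at a single check node. This is the textbook form of the two updates, not the paper's specific two-process modification:

```python
import numpy as np

def spa_check_update(llrs):
    """Exact check-node update via the hyperbolic tangent rule:
    L_out[k] = 2 * atanh( prod_{j != k} tanh(L_in[j] / 2) )."""
    out = np.empty_like(llrs)
    for k in range(llrs.size):
        others = np.delete(llrs, k)
        prod = np.clip(np.prod(np.tanh(others / 2.0)), -0.999999, 0.999999)
        out[k] = 2.0 * np.arctanh(prod)
    return out

def min_sum_check_update(llrs):
    """Min-sum approximation: magnitude = min |L| among the others,
    sign = product of their signs. Cheap, but overestimates reliability."""
    out = np.empty_like(llrs)
    for k in range(llrs.size):
        others = np.delete(llrs, k)
        out[k] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

llrs = np.array([1.2, -0.4, 2.5, 0.8])
exact = spa_check_update(llrs)
approx = min_sum_check_update(llrs)
```

Min-sum always agrees with the SPA on the sign but overshoots the magnitude, which is the approximation error that corrected variants, such as the scheme above, try to reduce.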
High performance reconciliation for continuous-variable quantum key distribution with LDPC code
Lin, Dakai; Huang, Duan; Huang, Peng; Peng, Jinye; Zeng, Guihua
2015-03-01
Reconciliation is a significant procedure in a continuous-variable quantum key distribution (CV-QKD) system. It is employed to extract a secure secret key from the string resulting from the quantum channel between two users. However, the efficiency and speed of previous reconciliation algorithms are low. These problems limit the secure communication distance and the secure key rate of CV-QKD systems. In this paper, we propose a high-speed reconciliation algorithm employing a well-structured decoding scheme based on a low density parity-check (LDPC) code. The complexity of the proposed algorithm is reduced considerably. By using a graphics processing unit (GPU), our method can reach a reconciliation speed of 25 Mb/s for a CV-QKD system, which is currently the highest level and paves the way to high-speed CV-QKD.
Adaptive transmission based on multi-relay selection and rate-compatible LDPC codes
Su, Hualing; He, Yucheng; Zhou, Lin
2017-08-01
In order to adapt to dynamically changing channel conditions and improve the transmission reliability of the system, a cooperative system combining rate-compatible low density parity check (RC-LDPC) codes with a multi-relay selection protocol is proposed. Traditional relay selection protocols consider only the channel state information (CSI) of the source-relay and relay-destination links. The multi-relay selection protocol proposed in this paper additionally takes the CSI between relays into account in order to obtain more opportunities for collaboration. Additionally, the ideas of hybrid automatic repeat request (HARQ) and rate compatibility are introduced. Simulation results show that the transmission reliability of the system can be significantly improved by the proposed protocol.
Recursive construction of (J,L) QC LDPC codes with girth 6
Directory of Open Access Journals (Sweden)
Mohammad Gholami
2016-06-01
Full Text Available In this paper, a recursive algorithm is presented to generate some exponent matrices which correspond to Tanner graphs with girth at least 6. For a J×L exponent matrix E, the lower bound Q(E) is obtained explicitly such that (J,L) QC LDPC codes with girth at least 6 exist for any circulant permutation matrix (CPM) size m ≥ Q(E). The results show that the exponent matrices constructed with our recursive algorithm have a smaller lower bound than the ones proposed recently with girth 6.
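The lower bound Q(E) can be computed by brute force from the standard girth-6 condition on exponent matrices. A sketch, using an array-code exponent matrix as an illustrative input (not one produced by the paper's recursive algorithm):

```python
from itertools import combinations

def min_cpm_size(E, m_max=10_000):
    """Smallest CPM size m for which the QC-LDPC code defined by exponent
    matrix E has girth >= 6, i.e. no four-cycle sum
    e(i1,j1) - e(i1,j2) + e(i2,j2) - e(i2,j1) is divisible by m."""
    J, L = len(E), len(E[0])
    sums = {
        E[i1][j1] - E[i1][j2] + E[i2][j2] - E[i2][j1]
        for i1, i2 in combinations(range(J), 2)
        for j1, j2 in combinations(range(L), 2)
    }
    for m in range(2, m_max):
        if all(s % m != 0 for s in sums):
            return m
    return None

E = [[0, 0, 0, 0],
     [0, 1, 2, 3],
     [0, 2, 4, 6]]   # array-code exponents e(i, j) = i*j, J = 3, L = 4
```

For this 3×4 array code the cycle sums are (i1-i2)(j1-j2), so every m up to 4 divides one of them and the smallest admissible CPM size is 5; explicit bounds like the paper's Q(E) avoid this search.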
MIMO-OFDM System's Performance Using LDPC Codes for a Mobile Robot
Daoud, Omar; Alani, Omar
This work deals with the performance of a Sniffer Mobile Robot (SNFRbot)-based spatially multiplexed wireless Orthogonal Frequency Division Multiplexing (OFDM) transmission technology. The use of Multi-Input Multi-Output (MIMO)-OFDM technology increases the wireless transmission rate without increasing transmission power or bandwidth. A generic multilayer architecture of the SNFRbot is proposed with low power and low cost. Some experimental results are presented and show the efficiency of sniffing deadly gases, sensing high temperatures and sending live videos of the monitored situation. Moreover, simulation results show the performance achieved by tackling the Peak-to-Average Power Ratio (PAPR) problem of the used technology with Low Density Parity Check (LDPC) codes, and the effect of combating the PAPR on the bit error rate (BER) and the signal-to-noise ratio (SNR) over a Doppler spread channel.
Djordjevic, Ivan B
2007-08-06
We describe a coded power-efficient transmission scheme based on repetition MIMO principle suitable for communication over the atmospheric turbulence channel, and determine its channel capacity. The proposed scheme employs the Q-ary pulse-position modulation. We further study how to approach the channel capacity limits using low-density parity-check (LDPC) codes. Component LDPC codes are designed using the concept of pairwise-balanced designs. Contrary to the several recent publications, bit-error rates and channel capacities are reported assuming non-ideal photodetection. The atmospheric turbulence channel is modeled using the Gamma-Gamma distribution function due to Al-Habash et al. Excellent bit-error rate performance improvement, over uncoded case, is found.
BER EVALUATION OF LDPC CODES WITH GMSK IN NAKAGAMI FADING CHANNEL
Directory of Open Access Journals (Sweden)
Surbhi Sharma
2010-06-01
Full Text Available LDPC codes (Low Density Parity Check Codes) have already proved their efficacy, showing performance near the Shannon limit. Channel coding schemes can be spectrally inefficient, as using an unfiltered binary data stream to modulate an RF carrier produces an RF spectrum of considerable bandwidth. Techniques have been developed to improve this spectral efficiency and ease detection. GMSK, or Gaussian-filtered Minimum Shift Keying, uses a Gaussian filter of an appropriate bandwidth to make the system spectrally efficient. The Nakagami model provides a better explanation of both less and more severe fading conditions than the Rayleigh and Rician models, and provides a better fit to mobile communication channel data. In this paper we demonstrate the performance of Low Density Parity Check codes with GMSK modulation (BT product = 0.25) in a Nakagami fading channel. The results show that the average bit error rate decreases as the 'm' parameter increases (less fading).
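Nakagami-m fading is convenient to simulate because the instantaneous power is Gamma-distributed. A sketch of why a larger m means less severe fading, with assumed unit mean power:

```python
import numpy as np

rng = np.random.default_rng(3)

def nakagami_envelope(m, omega, size):
    """Nakagami-m fading envelope: R = sqrt(G) with G ~ Gamma(shape=m,
    scale=omega/m), so the mean power E[R^2] equals omega."""
    return np.sqrt(rng.gamma(shape=m, scale=omega / m, size=size))

n = 200_000
r_rayleigh = nakagami_envelope(1.0, 1.0, n)   # m = 1 is exactly Rayleigh fading
r_mild = nakagami_envelope(4.0, 1.0, n)       # larger m means less severe fading

# power fluctuation shrinks as m grows: Var[R^2] = omega^2 / m
spread_1 = float(np.var(r_rayleigh ** 2))
spread_4 = float(np.var(r_mild ** 2))
```

Since deep fades become rarer as m grows, the average BER of any fixed modulation and coding scheme, including the LDPC/GMSK combination studied here, improves with m.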
Instanton-based techniques for analysis and reduction of error floors of LDPC codes
International Nuclear Information System (INIS)
Chertkov, Michael; Chilappagari, Shashi K.; Stepanov, Mikhail G.; Vasic, Bane
2008-01-01
We describe a family of instanton-based optimization methods developed recently for the analysis of the error floors of low-density parity-check (LDPC) codes. Instantons are the most probable configurations of the channel noise which result in decoding failures. We show that the general idea and the respective optimization technique are applicable broadly to a variety of channels, discrete or continuous, and variety of sub-optimal decoders. Specifically, we consider: iterative belief propagation (BP) decoders, Gallager type decoders, and linear programming (LP) decoders performing over the additive white Gaussian noise channel (AWGNC) and the binary symmetric channel (BSC). The instanton analysis suggests that the underlying topological structures of the most probable instanton of the same code but different channels and decoders are related to each other. Armed with this understanding of the graphical structure of the instanton and its relation to the decoding failures, we suggest a method to construct codes whose Tanner graphs are free of these structures, and thus have less significant error floors.
Instanton-based techniques for analysis and reduction of error floor of LDPC codes
Energy Technology Data Exchange (ETDEWEB)
Chertkov, Michael [Los Alamos National Laboratory; Chilappagari, Shashi K [Los Alamos National Laboratory; Stepanov, Mikhail G [Los Alamos National Laboratory; Vasic, Bane [SENIOR MEMBER, IEEE
2008-01-01
Directory of Open Access Journals (Sweden)
Washington Fernández R
2009-04-01
Full Text Available In this work, the irregular Low-Density Parity-Check (LDPC) code is analyzed and studied over a low-voltage powerline channel. Existing noise models for low-voltage power lines are reviewed. The performance of the irregular LDPC code is evaluated over a low-voltage powerline channel at transmission rates of 3, 10, 15 and 30 Mbps, using BER versus SNR as the performance metric.
Blind Estimation of the Phase and Carrier Frequency Offsets for LDPC-Coded Systems
Directory of Open Access Journals (Sweden)
Houcke Sebastien
2010-01-01
Full Text Available Abstract We consider in this paper the problem of phase offset and Carrier Frequency Offset (CFO estimation for Low-Density Parity-Check (LDPC coded systems. We propose new blind estimation techniques based on the calculation and minimization of functions of the Log-Likelihood Ratios (LLR of the syndrome elements obtained according to the parity check matrix of the error-correcting code. In the first part of this paper, we consider phase offset estimation for a Binary Phase Shift Keying (BPSK modulation and propose a novel estimation technique. Simulation results show that the proposed method is very effective and outperforms many existing algorithms. Then, we modify the estimation criterion so that it can work for higher-order modulations. One interesting feature of the proposed algorithm when applied to high-order modulations is that the phase offset of the channel can be blindly estimated without any ambiguity. In the second part of the paper, we consider the problem of CFO estimation and propose estimation techniques that are based on the same concept as the ones presented for the phase offset estimation. The Mean Squared Error (MSE and Bit Error Rate (BER curves show the efficiency of the proposed estimation techniques.
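A toy version of the syndrome-LLR idea for BPSK phase recovery can be sketched as follows (hedged: the (7,4) Hamming parity-check matrix stands in for an LDPC code, and the cost function, the negated sum of check reliabilities computed with the tanh rule, is just one simple instance of the class of criteria the paper describes):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy parity-check matrix: the (7,4) Hamming code stands in for an LDPC code
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
A = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])  # systematic encoder for H

def syndrome_cost(R, phi, sigma2):
    """Negated sum of syndrome reliabilities (tanh rule); minimized near the true phase."""
    llr = 4 * np.real(R * np.exp(-1j * phi)) / sigma2
    t = np.tanh(llr / 2)
    return -sum(np.prod(t[:, row], axis=1).sum() for row in H.astype(bool))

# Transmit BPSK codewords with a common unknown phase offset
theta, sigma2, n_words = 0.6, 0.5, 400
U = rng.integers(0, 2, (n_words, 4))
C = np.hstack([U, U @ A % 2])
S = 1.0 - 2.0 * C
noise = (rng.normal(0, np.sqrt(sigma2 / 2), S.shape)
         + 1j * rng.normal(0, np.sqrt(sigma2 / 2), S.shape))
R = S * np.exp(1j * theta) + noise

# Blind grid search over the unambiguous range
phis = np.linspace(-np.pi / 2, np.pi / 2, 181)
phi_hat = phis[int(np.argmin([syndrome_cost(R, p, sigma2) for p in phis]))]
print(phi_hat)
```

Because every check here has even weight, the criterion is pi-periodic, so the sweep is restricted to (-pi/2, pi/2]; the paper's extension to higher-order modulations is precisely what removes such ambiguities.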
Analysis of error floor of LDPC codes under LP decoding over the BSC
Energy Technology Data Exchange (ETDEWEB)
Chertkov, Michael [Los Alamos National Laboratory; Chilappagari, Shashi [UNIV OF AZ; Vasic, Bane [UNIV OF AZ; Stepanov, Mikhail [UNIV OF AZ
2009-01-01
We consider linear programming (LP) decoding of a fixed low-density parity-check (LDPC) code over the binary symmetric channel (BSC). The LP decoder fails when it outputs a pseudo-codeword which is not a codeword. We propose an efficient algorithm, termed the instanton search algorithm (ISA), which, given a random input, generates a set of flips called the BSC-instanton. We prove that (a) the LP decoder fails for any set of flips whose support vector includes an instanton, and (b) for any input, the algorithm outputs an instanton in a number of steps upper-bounded by twice the number of flips in the input. We obtain the number of unique instantons of different sizes by running the ISA a sufficient number of times. We then use the instanton statistics to predict the performance of LP decoding over the BSC in the error-floor region. We also propose an efficient semi-analytical method to predict the performance of LP decoding over a large range of transition probabilities of the BSC.
Yuan, Jian-guo; Zhou, Guang-xiang; Gao, Wen-chun; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-01-01
According to the requirements of the increasing development of optical transmission systems, a novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes based on a subgroup of the finite-field multiplicative group is proposed. This construction method effectively avoids girth-4 phenomena and has advantages such as simpler construction, easier implementation, lower encoding/decoding complexity, better girth properties and more flexible adjustment of code length and code rate. The simulation results show that the error-correction performance of the QC-LDPC(3780,3540) code with code rate 93.7% constructed by the proposed method is excellent: its net coding gain is respectively 0.3 dB, 0.55 dB, 1.4 dB and 1.98 dB higher than those of the QC-LDPC(5334,4962) code constructed by the method based on the inverse-element characteristics of the finite-field multiplicative group, the SCG-LDPC(3969,3720) code constructed by the systematically constructed Gallager (SCG) random construction method, the LDPC(32640,30592) code in ITU-T G.975.1 and the classic RS(255,239) code widely used in optical transmission systems in ITU-T G.975, at a bit error rate (BER) of 10^-7. Therefore, the constructed QC-LDPC(3780,3540) code is well suited for optical transmission systems.
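The girth-4 avoidance such constructions target reduces to a simple algebraic condition on the circulant-exponent matrix. A hedged sketch (using a generic array-code-style exponent matrix, not the paper's subgroup-based one, with assumed dimensions):

```python
import numpy as np
from itertools import combinations

p = 31                      # circulant size (prime, as in array-type QC-LDPC codes)
rows, cols = 3, 5
E = np.array([[(i * j) % p for j in range(1, cols + 1)] for i in range(1, rows + 1)])

def has_girth4(E, p):
    """Standard QC-LDPC condition: a 4-cycle exists iff
    E[i1,j1] - E[i1,j2] + E[i2,j2] - E[i2,j1] == 0 (mod p) for some i1<i2, j1<j2."""
    r, c = E.shape
    for (i1, i2) in combinations(range(r), 2):
        for (j1, j2) in combinations(range(c), 2):
            if (E[i1, j1] - E[i1, j2] + E[i2, j2] - E[i2, j1]) % p == 0:
                return True
    return False

def expand(E, p):
    """Replace each exponent e by the p x p identity cyclically shifted by e."""
    I = np.eye(p, dtype=int)
    return np.block([[np.roll(I, e, axis=1) for e in row] for row in E])

H = expand(E, p)
print(H.shape, has_girth4(E, p))
```

For this array-type exponent matrix the 4-cycle sum collapses to (i1 - i2)(j1 - j2) mod p, which is never zero for a prime p larger than the matrix dimensions, so the expanded code is girth-4 free by construction.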
On the Performance of a Multi-Edge Type LDPC Code for Coded Modulation
Cronie, H.S.
2005-01-01
We present a method to combine error-correction coding and spectral-efficient modulation for transmission over the Additive White Gaussian Noise (AWGN) channel. The code employs signal shaping which can provide a so-called shaping gain. The code belongs to the family of sparse graph codes for which
He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin
2015-09-01
In this paper, a Golay complementary training sequence (TS)-based symbol synchronization scheme is proposed and experimentally demonstrated in a multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system with a variable-rate low-density parity-check (LDPC) code. Meanwhile, the coding gain and spectral efficiency of the variable-rate LDPC-coded MB-OFDM UWBoF system are investigated. By utilizing the non-periodic auto-correlation property of the Golay complementary pair, the start point of the LDPC-coded MB-OFDM UWB signal can be estimated accurately. After 100 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1×10^-3, the experimental results show that short-block-length 64QAM-LDPC coding provides a coding gain of 4.5 dB, 3.8 dB and 2.9 dB for code rates of 62.5%, 75% and 87.5%, respectively.
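The Golay-pair property the synchronizer relies on, aperiodic autocorrelations summing to a delta, is easy to demonstrate (hedged sketch with an assumed frame layout [TS_a | TS_b | payload]; the actual MB-OFDM framing in the paper differs):

```python
import numpy as np

rng = np.random.default_rng(2)

def golay_pair(n):
    """Length-2^n Golay complementary pair via the standard recursion."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(6)            # length-64 pair
N = len(a)

# Complementary property: the aperiodic autocorrelations sum to 2N * delta
rsum = np.correlate(a, a, 'full') + np.correlate(b, b, 'full')

# Symbol sync: locate an assumed frame [a | b | payload] in a noisy stream
start = 37
stream = np.concatenate([rng.choice([-1.0, 1.0], start), a, b,
                         rng.choice([-1.0, 1.0], 200)])
stream = stream + rng.normal(0, 0.3, len(stream))

ca = np.correlate(stream, a, 'valid')
cb = np.correlate(stream, b, 'valid')
metric = ca[:-N] + cb[N:]       # the two correlators peak one TS length apart
start_hat = int(np.argmax(metric))
print(start_hat)
```

Summing the two correlator outputs cancels the autocorrelation sidelobes exactly, which is why the combined metric gives a single sharp peak at the frame start even in noise.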
A Low-Complexity Joint Detection-Decoding Algorithm for Nonbinary LDPC-Coded Modulation Systems
Wang, Xuepeng; Bai, Baoming; Ma, Xiao
2010-01-01
In this paper, we present a low-complexity joint detection-decoding algorithm for nonbinary LDPC coded-modulation systems. The algorithm combines hard-decision decoding using the message-passing strategy with the signal detector in an iterative manner. It requires low computational complexity, offers good system performance and has a fast rate of decoding convergence. Compared to the q-ary sum-product algorithm (QSPA), it provides an attractive candidate for practical applications of q-ary LDP...
Design and performance investigation of LDPC-coded upstream transmission systems in IM/DD OFDM-PONs
Gong, Xiaoxue; Guo, Lei; Wu, Jingjing; Ning, Zhaolong
2016-12-01
In Intensity-Modulation Direct-Detection (IM/DD) Orthogonal Frequency Division Multiplexing Passive Optical Networks (OFDM-PONs), aside from the Subcarrier-to-Subcarrier Intermixing Interference (SSII) induced by square-law detection, the use of the same laser frequency for data sent from Optical Network Units (ONUs) results in ONU-to-ONU Beating Interference (OOBI) at the receiver. To mitigate these interferences, we design a Low-Density Parity-Check (LDPC)-coded and spectrum-efficient upstream transmission system. A theoretical channel model is also derived in order to analyze the detrimental factors influencing system performance. Simulation results demonstrate that the receiver sensitivity is improved by 3.4 dB and 2.5 dB under QPSK and 8QAM, respectively, after 100 km Standard Single-Mode Fiber (SSMF) transmission. Furthermore, the spectral efficiency can be improved by about 50%.
Antônio Unias de Lucena
2015-01-01
Abstract: The use of LDPC codes in optical communications has received special attention in recent years due to their high error-correction capability, which enables longer links with higher transmission capacity. This dissertation presents a study of binary, irregular, structured LDPC (IE-LDPC) codes, as well as a performance comparison of two algorithms commonly used for decoding LDPC codes: the sum-pro...
He, Jing; Dai, Min; Chen, Qinghui; Deng, Rui; Xiang, Changqing; Chen, Lin
2017-07-01
In this paper, an effective bit-loading algorithm combined with an adaptive LDPC code rate (ALCR) algorithm is proposed and investigated in a software-reconfigurable multiband UWB over fiber system. To compensate for the power fading and chromatic dispersion affecting the high-frequency multiband OFDM UWB signal transmitted over standard single-mode fiber (SSMF), a Mach-Zehnder modulator (MZM) with negative chirp parameter is utilized. In addition, a negative power penalty of -1 dB for the 128-QAM multiband OFDM UWB signal is measured at the hard-decision forward error correction (HD-FEC) limit of 3.8 × 10^-3 after 50 km SSMF transmission. The experimental results show that, compared to a fixed coding scheme with a code rate of 75%, the signal-to-noise ratio (SNR) is improved by 2.79 dB for the 128-QAM multiband OFDM UWB system after 100 km SSMF transmission using the ALCR algorithm. Moreover, by employing bit-loading combined with the ALCR algorithm, the bit error rate (BER) performance of the system can be further improved. The simulation results show that, at the HD-FEC limit, the Q factor is improved by 3.93 dB at an SNR of 19.5 dB over 100 km SSMF transmission, compared to fixed modulation with an uncoded scheme at the same spectral efficiency (SE).
Manimegalai, C T; Gauni, Sabitha; Kalimuthu, K
2017-12-04
Wireless body area network (WBAN) is a breakthrough technology in healthcare areas such as hospitals and telemedicine. The human body is a complex mixture of different tissues, and the propagation of electromagnetic signals is expected to differ in each of them. This forms the basis of WBAN, which differs from other environments. In this paper, the Ultra Wide Band (UWB) channel is explored in the WBAN (IEEE 802.15.6) system. Measurements of channel parameters are taken over the 3.1-10.6 GHz frequency range. The proposed system transmits data at up to 480 Mbps using LDPC-coded APSK-modulated differential space-time-frequency coded MB-OFDM to increase throughput and power efficiency.
Yang, Qi; Al Amin, Abdullah; Chen, Xi; Ma, Yiran; Chen, Simin; Shieh, William
2010-08-02
High-order modulation formats and advanced error correcting codes (ECC) are two promising techniques for improving the performance of ultrahigh-speed optical transport networks. In this paper, we present record receiver sensitivity for 107 Gb/s CO-OFDM transmission via constellation expansion to 16-QAM and rate-1/2 LDPC coding. We also show the single-channel transmission of a 428-Gb/s CO-OFDM signal over 960-km standard-single-mode-fiber (SSMF) without Raman amplification.
LDPC coding for QKD at higher photon flux levels based on spatial entanglement of twin beams in PDC
International Nuclear Information System (INIS)
Daneshgaran, Fred; Mondin, Marina; Bari, Inam
2014-01-01
Twin beams generated by Parametric Down Conversion (PDC) exhibit quantum correlations that have been effectively used as a tool for many applications, including the calibration of single-photon detectors. By now, detection of multi-mode spatial correlations is a mature field and, in principle, depends only on the transmission and detection efficiency of the devices and the channel. In [2, 4, 5], the authors utilized their know-how in the almost perfect selection of modes of pairwise-correlated entangled beams, and the optimization of noise reduction to below the shot-noise level, for the absolute calibration of Charge Coupled Device (CCD) cameras. The same basic principle is currently being considered by the same authors for possible use in Quantum Key Distribution (QKD) [3, 1]. The main advantage of such an approach would be the ability to work with much higher photon fluxes than the single-photon regime theoretically required for discrete-variable QKD applications (in practice, very weak laser pulses with a mean photon count below one are used). The natural setup of quantization of the CCD detection area, and the subsequent measurement of the correlation statistic needed to detect the presence of the eavesdropper Eve, leads to a QKD channel model that is a Discrete Memoryless Channel (DMC) with a number of inputs and outputs that can be more than two (i.e., the channel is a multi-level DMC). This paper investigates the use of Low Density Parity Check (LDPC) codes for information reconciliation on the effective parallel channels associated with the multi-level DMC. The performance of such codes is shown to be close to the theoretical limits.
Binary Linear-Time Erasure Decoding for Non-Binary LDPC codes
Savin, Valentin
2009-01-01
In this paper, we first introduce the extended binary representation of non-binary codes, which corresponds to a covering graph of the bipartite graph associated with the non-binary code. Then we show that non-binary codewords correspond to binary codewords of the extended representation that further satisfy some simplex-constraint: that is, bits lying over the same symbol-node of the non-binary graph must form a codeword of a simplex code. Applied to the binary erasure channel, this descript...
A rate-compatible family of protograph-based LDPC codes built by expurgation and lengthening
Dolinar, Sam
2005-01-01
We construct a protograph-based rate-compatible family of low-density parity-check codes that cover a very wide range of rates from 1/2 to 16/17, perform within about 0.5 dB of their capacity limits for all rates, and can be decoded conveniently and efficiently with a common hardware implementation.
Progressive transmission of images over fading channels using rate-compatible LDPC codes.
Pan, Xiang; Banihashemi, Amir H; Cuhadar, Aysegul
2006-12-01
In this paper, we propose a combined source/channel coding scheme for transmission of images over fading channels. The proposed scheme employs rate-compatible low-density parity-check codes along with embedded image coders such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). The assignment of channel coding rates to source packets is performed by a fast trellis-based algorithm. We examine the performance of the proposed scheme over correlated and uncorrelated Rayleigh flat-fading channels with and without side information. Simulation results for the expected peak signal-to-noise ratio of reconstructed images, which are within 1 dB of the capacity upper bound over a wide range of channel signal-to-noise ratios, show considerable improvement compared to existing results under similar conditions. We also study the sensitivity of the proposed scheme in the presence of channel estimation error at the transmitter and demonstrate that under most conditions our scheme is more robust compared to existing schemes.
Kódování a efektivita LDPC kódů (Coding and efficiency of LDPC codes)
Kozlík, Andrew
2011-01-01
Low-density parity-check (LDPC) codes are linear error correcting codes which are capable of performing near channel capacity. Furthermore, they admit efficient decoding algorithms that provide near optimum performance. Their main disadvantage is that most LDPC codes have relatively complex encoders. In this thesis, we begin by giving a detailed discussion of the sum-product decoding algorithm, we then study the performance of LDPC codes on the binary erasure channel under sum-product decodin...
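On the binary erasure channel, sum-product decoding reduces to the peeling decoder: repeatedly solve any check equation that contains exactly one erased bit. A minimal sketch with an assumed toy parity-check matrix:

```python
import numpy as np

# Assumed toy parity-check matrix (rows = checks, columns = bits)
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def peel_decode(y):
    """BEC peeling: y entries are 0, 1, or None (erased).
    Equivalent to sum-product decoding on the erasure channel."""
    y = list(y)
    progress = True
    while progress:
        progress = False
        for row in H:
            erased = [j for j in np.flatnonzero(row) if y[j] is None]
            if len(erased) == 1:
                known = sum(y[j] for j in np.flatnonzero(row) if y[j] is not None)
                y[erased[0]] = known % 2
                progress = True
    return y

# [1,1,0,0,1,1] is a codeword of H; erase two of its bits and recover them
print(peel_decode([None, 1, 0, 0, None, 1]))      # -> [1, 1, 0, 0, 1, 1]
# Erasing bits {0,1,2} hits a stopping set: every check sees >= 2 erasures
print(peel_decode([None, None, None, 0, 1, 1]))
```

The second call stalls with the three erasures intact: the erased positions form a stopping set, which is exactly the failure object that governs sum-product performance on the BEC.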
On the performance of 1-level LDPC lattices
Sadeghi, Mohammad-Reza; Sakzad, Amin
2013-01-01
The low-density parity-check (LDPC) lattices perform very well in high dimensions under generalized min-sum iterative decoding algorithm. In this work we focus on 1-level LDPC lattices. We show that these lattices are the same as lattices constructed based on Construction A and low-density lattice-code (LDLC) lattices. In spite of having slightly lower coding gain, 1-level regular LDPC lattices have remarkable performances. The lower complexity nature of the decoding algorithm for these type ...
LDPC Decoding on GPU for Mobile Device
Directory of Open Access Journals (Sweden)
Yiqin Lu
2016-01-01
Full Text Available A flexible software LDPC decoder that exploits data parallelism for simultaneous multi-codeword decoding on a mobile device is proposed in this paper, supported by multithreading on OpenCL-based graphics processing units. By dividing the check matrix into several parts, to make full use of both the local and private memory on the GPU, and by properly adjusting the code capacity each time, our implementation on a mobile phone achieves throughputs above 100 Mbps with a decoding delay of less than 1.6 ms, which makes high-speed communication such as video calling possible. To realize efficient software LDPC decoding on a mobile device, the LDPC decoding feature on the communication baseband chip could be replaced, saving cost and making it easier to upgrade the decoder to be compatible with a variety of channel access schemes.
FPGA implementation of low complexity LDPC iterative decoder
Verma, Shivani; Sharma, Sanjay
2016-07-01
Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes which can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained much importance due to their capacity-achieving property and excellent performance in noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message-passing algorithm and a partially parallel decoder architecture. The simplified message-passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance; it greatly reduces the routing and check-node complexity of the decoder. The partially parallel decoder architecture offers high speed and reduced complexity. The improved design of the decoder achieves a maximum symbol throughput of 92.95 Mbps with a maximum of 18 decoding iterations. The article presents the implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on the Xilinx XC3D3400A device from the Spartan-3A DSP family.
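A minimal min-sum decoder, the simplified message-passing flavor such designs build on, can be sketched in a few lines (hedged: a dense toy parity-check matrix and a flooding schedule, not the article's partially parallel hardware architecture):

```python
import numpy as np

# Toy parity-check matrix (rows = checks, columns = bits)
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def min_sum_decode(llr, max_iter=20):
    """Min-sum approximation of belief propagation: the check-to-variable
    message is the product of signs times the minimum magnitude of the
    other incoming variable-to-check messages."""
    m, n = H.shape
    M = H * llr                  # variable-to-check messages, init to channel LLRs
    for _ in range(max_iter):
        E = np.zeros((m, n))     # check-to-variable messages
        for i in range(m):
            cols = np.flatnonzero(H[i])
            msgs = M[i, cols]
            for k, j in enumerate(cols):
                others = np.delete(msgs, k)
                E[i, j] = np.prod(np.sign(others)) * np.min(np.abs(others))
        L = llr + E.sum(axis=0)  # posterior LLRs
        hard = (L < 0).astype(int)
        if not (H @ hard % 2).any():
            return hard
        M = H * (L - E)          # extrinsic: total minus own incoming message
    return hard

# Channel LLRs for codeword [1,1,0,0,1,1] with the first bit received in error
decoded = min_sum_decode(np.array([1.5, -2.0, 1.8, 2.2, -1.7, -2.1]))
print(decoded)
```

Min-sum replaces the tanh-domain check update of full BP with a sign-min rule, which is why it maps so well to hardware; normalized or offset variants recover most of the resulting performance loss.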
Resource Efficient LDPC Decoders for Multimedia Communication
Chandrasetty, Vikram Arkalgud; Aziz, Syed Mahfuzul
2013-01-01
Achieving high image quality is an important aspect in an increasing number of wireless multimedia applications. These applications require resource efficient error correction hardware to detect and correct errors introduced by the communication channel. This paper presents an innovative flexible architecture for error correction using Low-Density Parity-Check (LDPC) codes. The proposed partially-parallel decoder architecture utilizes a novel code construction technique based on multi-level H...
High-throughput GPU-based LDPC decoding
Chang, Yang-Lang; Chang, Cheng-Chun; Huang, Min-Yu; Huang, Bormin
2010-08-01
Low-density parity-check (LDPC) code is a linear block code known to approach the Shannon limit via the iterative sum-product algorithm. LDPC codes have been adopted in most current communication systems such as DVB-S2, WiMAX, Wi-Fi and 10GBASE-T. The need for reliable and flexible communication links across a wide variety of communication standards and configurations has inspired demand for high-performance, flexible computing. Accordingly, finding a fast and reconfigurable development platform for designing high-throughput LDPC decoders has become important, especially for rapidly changing communication standards and configurations. In this paper, a new graphics-processing-unit (GPU) LDPC decoding platform with asynchronous data transfer is proposed to realize this practical implementation. Experimental results show that the proposed GPU-based decoder achieved a 271x speedup compared to its CPU-based counterpart. It can serve as a high-throughput LDPC decoder.
Directory of Open Access Journals (Sweden)
Eric Psota
2010-01-01
Full Text Available The error mechanisms of iterative message-passing decoders for low-density parity-check codes are studied. A tutorial review is given of the various graphical structures, including trapping sets, stopping sets, and absorbing sets that are frequently used to characterize the errors observed in simulations of iterative decoding of low-density parity-check codes. The connections between trapping sets and deviations on computation trees are explored in depth using the notion of problematic trapping sets in order to bridge the experimental and analytic approaches to these error mechanisms. A new iterative algorithm for finding low-weight problematic trapping sets is presented and shown to be capable of identifying many trapping sets that are frequently observed during iterative decoding of low-density parity-check codes on the additive white Gaussian noise channel. Finally, a new method is given for characterizing the weight of deviations that result from problematic trapping sets.
A Scalable Architecture of a Structured LDPC Decoder
Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon
2004-01-01
We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2 dB implementation loss relative to a floating point decoder.
On the equivalence of Ising models on ‘small-world’ networks and LDPC codes on channels with memory
International Nuclear Information System (INIS)
Neri, Izaak; Skantzos, Nikos S
2014-01-01
We demonstrate the equivalence between thermodynamic observables of Ising spin-glass models on small-world lattices and the decoding properties of error-correcting low-density parity-check codes on channels with memory. In particular, the self-consistent equations for the effective field distributions in the spin-glass model within the replica symmetric ansatz are equivalent to the density evolution equations for Gilbert–Elliott channels. This relationship allows us to present a belief-propagation decoding algorithm for finite-state Markov channels and to compute its performance at infinite block lengths from the density evolution equations. We show that loss of reliable communication corresponds to a first order phase transition from a ferromagnetic phase to a paramagnetic phase in the spin glass model. The critical noise levels derived for Gilbert–Elliott channels are in very good agreement with existing results in coding theory. Furthermore, we use our analysis to derive critical noise levels for channels with both memory and asymmetry in the noise. The resulting phase diagram shows that the combination of asymmetry and memory in the channel allows for high critical noise levels: in particular, we show that successful decoding is possible at any noise level of the bad channel when the good channel is good enough. Theoretical results at infinite block lengths using density evolution equations are compared with average error probabilities calculated from a practical implementation of the corresponding decoding algorithms at finite block lengths. (paper)
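The Gilbert–Elliott channel the analysis targets is a two-state Markov error model; a small simulation sketch (all parameter values assumed for illustration) reproduces its stationary error rate, pi_b * eps_b + (1 - pi_b) * eps_g with pi_b = p_gb / (p_gb + p_bg):

```python
import numpy as np

rng = np.random.default_rng(3)

def gilbert_elliott(n, p_gb, p_bg, eps_g, eps_b):
    """Two-state Markov (Good/Bad) bit-error process: errors occur with
    probability eps_g in the Good state and eps_b in the Bad state."""
    state = 0                         # start in the Good state
    errors = np.zeros(n, dtype=int)
    for t in range(n):
        errors[t] = rng.random() < (eps_b if state else eps_g)
        if state == 0 and rng.random() < p_gb:
            state = 1
        elif state == 1 and rng.random() < p_bg:
            state = 0
    return errors

e = gilbert_elliott(200_000, p_gb=0.01, p_bg=0.1, eps_g=0.001, eps_b=0.2)
pi_b = 0.01 / (0.01 + 0.1)                    # stationary probability of the Bad state
expected = pi_b * 0.2 + (1 - pi_b) * 0.001    # stationary error rate
print(e.mean(), expected)
```

The memory shows up as bursts: errors cluster inside Bad-state sojourns of mean length 1/p_bg, which is precisely the structure the paper maps onto the small-world spin-glass model.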
UEP Concepts in Modulation and Coding
Directory of Open Access Journals (Sweden)
Werner Henkel
2010-01-01
Full Text Available The first unequal error protection (UEP) proposals date back to the 1960s (Masnick and Wolf, 1967), but with the introduction of scalable video, UEP has developed into a key concept for the transport of multimedia data. The paper presents an overview of some new approaches realizing UEP properties in physical transport, especially multicarrier modulation, or with LDPC and Turbo codes. For multicarrier modulation, UEP bit-loading together with hierarchical modulation is described, allowing for an arbitrary number of classes, arbitrary SNR margins between the classes, and an arbitrary number of bits per class. In Turbo coding, pruning, as a counterpart of puncturing, is presented for flexible bit-rate adaptations, including tables with optimized pruning patterns. Bit- and/or check-irregular LDPC codes may be designed to provide UEP to their code bits. However, irregular degree distributions alone do not ensure UEP; other necessary properties of the parity-check matrix for providing UEP are also pointed out. Pruning is also the means for constructing variable-rate LDPC codes for UEP, especially for controlling the check-node profile.
FPGA implementation of high-performance QC-LDPC decoder for optical communications
Zou, Ding; Djordjevic, Ivan B.
2015-01-01
Forward error correction is one of the key technologies enabling next-generation high-speed fiber-optic communications. Quasi-cyclic (QC) low-density parity-check (LDPC) codes have been considered one of the most promising candidates due to their large coding gain and low implementation complexity. In this paper, we present our designed QC-LDPC code with girth 10 and 25% overhead based on pairwise balanced design. By FPGA-based emulation, we demonstrate that the 5-bit soft-decision LDPC decoder can achieve an 11.8 dB net coding gain with no error floor at a BER of 10^-15, without using any outer code or post-processing method. We believe that the proposed single QC-LDPC code is a promising solution for 400 Gb/s optical communication systems and beyond.
Optical LDPC decoders for beyond 100 Gbits/s optical transmission.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2009-05-01
We present an optical low-density parity-check (LDPC) decoder suitable for implementation above 100 Gbits/s, which provides large coding gains when based on large-girth LDPC codes. We show that a basic building block, the probabilities multiplier circuit, can be implemented using a Mach-Zehnder interferometer, and we propose a corresponding probabilistic-domain sum-product algorithm (SPA). We perform simulations of a fully parallel implementation employing girth-10 LDPC codes and the proposed SPA. The girth-10 LDPC(24015,19212) code of rate 0.8 outperforms the BCH(128,113)×BCH(256,239) turbo-product code of rate 0.82 by 0.91 dB (for binary phase-shift keying at 100 Gbits/s and a bit error rate of 10^-9), and provides a net effective coding gain of 10.09 dB.
Analysis of Minimal LDPC Decoder System on a Chip Implementation
Directory of Open Access Journals (Sweden)
T. Palenik
2015-09-01
Full Text Available This paper presents a practical method of potential replacement of several different Quasi-Cyclic Low-Density Parity-Check (QC-LDPC codes with one, with the intention of saving as much memory as required to implement the LDPC encoder and decoder in a memory-constrained System on a Chip (SoC. The presented method requires only a very small modification of the existing encoder and decoder, making it suitable for utilization in a Software Defined Radio (SDR platform. Besides the analysis of the effects of necessary variable-node value fixation during the Belief Propagation (BP decoding algorithm, practical standard-defined code parameters are scrutinized in order to evaluate the feasibility of the proposed LDPC setup simplification. Finally, the error performance of the modified system structure is evaluated and compared with the original system structure by means of simulation.
Optimal Codes for the Burst Erasure Channel
Hamkins, Jon
2010-01-01
Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure
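The block-interleaving argument can be made concrete: SPC codewords interleaved to depth D survive any erasure burst of length up to D, because the burst leaves at most one erasure per codeword (a hedged sketch with assumed parameters; NaN marks an erased channel symbol):

```python
import numpy as np

rng = np.random.default_rng(4)
k, D = 7, 16                     # SPC(8,7) codewords, interleaving depth 16

data = rng.integers(0, 2, (D, k))
codewords = np.hstack([data, data.sum(axis=1, keepdims=True) % 2])  # append parity

# Block interleaving: transmit symbol 0 of every codeword, then symbol 1, ...
tx = codewords.T.flatten().astype(float)

# Erase a burst of D consecutive channel symbols
burst_start = 23
rx = tx.copy()
rx[burst_start:burst_start + D] = np.nan

# De-interleave; each codeword now has at most one erasure, which the
# single parity check fills in
rxw = rx.reshape(k + 1, D).T
for w in rxw:
    miss = np.flatnonzero(np.isnan(w))
    if miss.size == 1:
        w[miss[0]] = int(np.nansum(w)) % 2

recovered = rxw.astype(int)
print(np.array_equal(recovered, codewords))
```

Any burst of length D hits each of the D interleaved codewords at most once, so the single parity check per codeword suffices; this is the near-optimality argument sketched above, and an RS code in place of the SPC extends it to multiple erasures per codeword.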
Hardwarearchitektur für einen universellen LDPC Decoder (Hardware architecture for a universal LDPC decoder)
Directory of Open Access Journals (Sweden)
C. Beuschel
2009-05-01
Full Text Available This paper presents a universal decoder architecture for a Low-Density Parity-Check (LDPC) code decoder. Unlike the architectures for structured codes frequently described in the literature, the architecture presented here is freely programmable, so that any arbitrary LDPC code can be decoded with the same hardware simply by changing the initialization of the parity-check matrix memory. The greatest challenge in the design of partly parallel LDPC decoder architectures lies in conflict-free data exchange between several parallel memories and processing units, which requires a mapping and scheduling algorithm. The algorithm presented here is based on graph theory and finds, for any arbitrary LDPC code, a solution that is optimal for the architecture. Thus no wait cycles are necessary and the parallelism of the architecture is fully exploited at all times.
A Low-Complexity Euclidean Orthogonal LDPC Architecture for Low Power Applications
Directory of Open Access Journals (Sweden)
M. Revathy
2015-01-01
Full Text Available Low-density parity-check (LDPC) codes have been implemented in the latest digital video broadcasting, broadband wireless access (WiMax), and fourth-generation wireless standards. In this paper, we propose a highly efficient low-density parity-check (LDPC) decoder architecture for low-power applications. This study also considers the design and analysis of the check node and variable node units and the Euclidean orthogonal generator in the LDPC decoder architecture. The Euclidean orthogonal generator is used to reduce the error rate of the proposed LDPC architecture and can be incorporated between the check and variable node architectures. The proposed decoder design is synthesized on the Xilinx 9.2i platform and simulated using ModelSim, targeting 45 nm devices. The synthesis report shows that the proposed architecture greatly reduces power consumption and hardware utilization compared with various conventional architectures.
A new Monte Carlo code for simulation of the effect of irregular surfaces on X-ray spectra
Energy Technology Data Exchange (ETDEWEB)
Brunetti, Antonio, E-mail: brunetti@uniss.it; Golosio, Bruno
2014-04-01
Generally, quantitative X-ray fluorescence (XRF) analysis estimates the content of chemical elements in a sample from the areas of the fluorescence peaks in the energy spectrum. Besides the concentration of the elements, the peak areas depend also on the geometrical conditions. In fact, the estimate of the peak areas is simple if the sample surface is smooth and if the spectrum shows good statistics (large-area peaks). For this reason the sample is often prepared as a pellet. However, this approach is not always feasible, for instance when cultural heritage or valuable samples must be analyzed. In this case, the sample surface cannot be smoothed. In order to address this problem, several works have been reported in the literature, based on experimental measurements on a few sets of specific samples or on Monte Carlo simulations. The results obtained with the first approach are limited by the specific class of samples analyzed, while the second approach cannot be applied to arbitrarily irregular surfaces. The present work describes a more general analysis tool based on a new fast Monte Carlo algorithm, which is virtually able to simulate any kind of surface. To the best of our knowledge, it is the first Monte Carlo code with this option. A study of the influence of surface irregularities on the measured spectrum is performed and some results are reported. - Highlights: • We present a fast Monte Carlo code with the possibility to simulate any irregularly rough surface. • We show applications to multilayer measurements. • Real-time simulations are available.
LDPC and SHA based iris recognition for image authentication
Directory of Open Access Journals (Sweden)
K. Seetharaman
2012-11-01
Full Text Available We introduce a novel way to authenticate an image using a Low Density Parity Check (LDPC) and Secure Hash Algorithm (SHA) based iris recognition method with a reversible watermarking scheme, which is based on the Integer Wavelet Transform (IWT) and a threshold embedding technique. The parity checks and parity matrix of the LDPC encoding and the cancellable biometrics, i.e., the hash string of the unique iris code from SHA-512, are embedded into an image for authentication purposes using the reversible watermarking scheme based on IWT and threshold embedding. Simply by reversing the embedding process, the original image, parity checks, parity matrix and SHA-512 hash are extracted back from the watermarked image. For authentication, the new hash string produced by employing SHA-512 on the error-corrected iris code from a live person is compared with the hash string extracted from the watermarked image. The LDPC code reduces the Hamming distance for genuine comparisons by a larger amount than for impostor comparisons. This results in better separation between genuine and impostor users, which improves the authentication performance. The security of this scheme is very high due to the security complexity of SHA-512, which is 2^256 under a birthday attack. Experimental results show that this approach can assure more accurate authentication with a low false rejection or false acceptance rate and outperforms the prior arts in terms of PSNR.
Parallel Subspace Subcodes of Reed-Solomon Codes for Magnetic Recording Channels
Wang, Han
2010-01-01
Read channel architectures based on a single low-density parity-check (LDPC) code are being considered for the next generation of hard disk drives. However, LDPC-only solutions suffer from the error floor problem, which may compromise reliability, if not handled properly. Concatenated architectures using an LDPC code plus a Reed-Solomon (RS) code…
On the reduced-complexity of LDPC decoders for ultra-high-speed optical transmission.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2010-10-25
We propose two reduced-complexity (RC) LDPC decoders, which can be used in combination with large-girth LDPC codes to enable ultra-high-speed serial optical transmission. We show that the optimally attenuated RC min-sum algorithm performs only 0.46 dB (at a BER of 10^-9) worse than the conventional sum-product algorithm, while having lower storage memory requirements and much lower latency. We further study the use of RC LDPC decoding algorithms in multilevel coded modulation with coherent detection and show that with RC decoding algorithms we can achieve a net coding gain larger than 11 dB at BERs below 10^-9.
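The attenuation idea above can be sketched as follows (Python; the factor α = 0.8 is illustrative, not the paper's optimized value). The min-sum check-node update replaces the sum-product tanh rule with a sign-and-minimum rule, whose magnitude overestimate is compensated by scaling with α < 1:

```python
import math

def checknode_spa(llrs):
    """Exact sum-product check-node update: output LLR toward each edge."""
    out = []
    for i in range(len(llrs)):
        t = 1.0
        for j, l in enumerate(llrs):
            if j != i:
                t *= math.tanh(l / 2.0)
        t = max(min(t, 1 - 1e-12), -1 + 1e-12)  # guard atanh domain
        out.append(2.0 * math.atanh(t))
    return out

def checknode_min_sum(llrs, alpha=0.8):
    """Attenuated (normalized) min-sum: product of signs times the
    attenuated minimum magnitude of the other incoming LLRs."""
    out = []
    for i in range(len(llrs)):
        others = [l for j, l in enumerate(llrs) if j != i]
        sign = 1.0
        for l in others:
            sign = -sign if l < 0 else sign
        out.append(alpha * sign * min(abs(l) for l in others))
    return out
```

With α = 1 the min-sum magnitude always upper-bounds the sum-product magnitude, which is why a fixed attenuation narrows the performance gap at much lower hardware cost.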
Batshon, Hussam G; Djordjevic, Ivan; Xu, Lei; Wang, Ting
2010-06-21
In this paper, we present a modified coded hybrid subcarrier/amplitude/phase/polarization (H-SAPP) modulation scheme as a technique capable of achieving beyond 400 Gb/s single-channel transmission over optical channels. The modified H-SAPP scheme profits from the available resources in addition to geometry to increase the bandwidth efficiency of the transmission system, and so increases the aggregate rate of the system. In this report we present the modified H-SAPP scheme and focus on an example that allows 11 bits/symbol and can achieve 440 Gb/s transmission using components of 50 Giga symbols/s (GS/s).
An Area-Efficient Reconfigurable LDPC Decoder with Conflict Resolution
Zhou, Changsheng; Huang, Yuebin; Huang, Shuangqu; Chen, Yun; Zeng, Xiaoyang
Based on the Turbo-Decoding Message-Passing (TDMP) and Normalized Min-Sum (NMS) algorithms, an area-efficient LDPC decoder that supports both structured and unstructured LDPC codes is proposed in this paper. We introduce a solution to the memory access conflict problem caused by the TDMP algorithm, and we arrange the main timing schedule carefully to handle the operations of our solution while avoiding much additional hardware consumption. To reduce the number of memory bits needed, the extrinsic message storing strategy is also optimized. Besides, the extrinsic message recovery and accumulate operations are merged together. To verify our architecture, an LDPC decoder that supports both the China Multimedia Mobile Broadcasting (CMMB) and Digital Terrestrial/Television Multimedia Broadcasting (DTMB) standards was developed using a SMIC 0.13 µm standard CMOS process. The core area is 4.75 mm² and the maximum operating clock frequency is 200 MHz. The estimated power consumption is 48.4 mW at 25 MHz for CMMB and 130.9 mW at 50 MHz for DTMB with 5 iterations and a 1.2 V supply.
An LDPC decoder architecture for wireless sensor network applications.
Biroli, Andrea Dario Giancarlo; Martina, Maurizio; Masera, Guido
2012-01-01
The pervasive use of wireless sensors in a growing spectrum of human activities reinforces the need for devices with low energy dissipation. In this work, coded communication between a couple of wireless sensor devices is considered as a method to reduce the dissipated energy per transmitted bit with respect to uncoded communication. Different Low Density Parity Check (LDPC) codes are considered to this purpose and post layout results are shown for a low-area low-energy decoder, which offers percentage energy savings with respect to the uncoded solution in the range of 40%-80%, depending on considered environment, distance and bit error rate.
Fundamentals of convolutional coding
Johannesson, Rolf
2015-01-01
Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field * Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding * Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes * Distance properties of convolutional codes * Includes a downloadable solutions manual
Hrouza, Ondřej
2012-01-01
The thesis deals with LDPC codes. It describes methods for constructing the parity-check matrix, with particular emphasis on structured construction of this matrix using finite geometries: Euclidean geometry and projective geometry. Another area the thesis addresses is the decoding of LDPC codes. The thesis compares four decoding methods: the Hard-Decision algorithm, the Bit-Flipping algorithm, the Sum-Product algorithm and the Log-Likelihood algorithm, with particular emphasis on iterativ...
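Of the decoders compared in the thesis, the bit-flipping algorithm is the simplest to sketch. Below is a minimal hard-decision variant (Python) that flips the single bit participating in the most unsatisfied checks each iteration; the parity-check matrix H is the small illustrative Hamming(7,4) matrix, not one of the thesis's finite-geometry constructions:

```python
# Illustrative parity-check matrix (Hamming(7,4)), not an LDPC matrix
# from the thesis; real LDPC matrices are much larger and sparser.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def bit_flip_decode(word, h=H, max_iters=10):
    """Hard-decision bit flipping: recompute the syndrome each iteration
    and flip the bit involved in the most unsatisfied checks."""
    word = list(word)
    for _ in range(max_iters):
        syndrome = [sum(h[c][v] * word[v] for v in range(len(word))) % 2
                    for c in range(len(h))]
        if not any(syndrome):
            return word  # all parity checks satisfied
        fails = [sum(syndrome[c] for c in range(len(h)) if h[c][v])
                 for v in range(len(word))]
        word[fails.index(max(fails))] ^= 1
    return word
```

This one-bit-per-iteration variant is a common textbook simplification of Gallager's bit flipping, which flips all bits whose failure count exceeds a threshold.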
Wang, Liming; Qiao, Yaojun; Yu, Qian; Zhang, Wenbo
2016-04-01
We introduce a watermark non-binary low-density parity-check (NB-LDPC) code scheme, which can estimate the time-varying noise variance by using prior information from the watermark symbols, to improve the performance of NB-LDPC codes. Compared with the prior-art counterpart, the watermark scheme brings about a 0.25 dB improvement in net coding gain (NCG) at a bit error rate (BER) of 10^-6 and a 36.8%-81% reduction in the number of iterations. The proposed scheme thus shows great potential in terms of error correction performance and decoding efficiency.
Near-Capacity Coding for Discrete Multitone Systems with Impulse Noise
Directory of Open Access Journals (Sweden)
Kschischang Frank R
2006-01-01
Full Text Available We consider the design of near-capacity-achieving error-correcting codes for a discrete multitone (DMT) system in the presence of both additive white Gaussian noise and impulse noise. Impulse noise is one of the main channel impairments for digital subscriber lines (DSL). One way to combat impulse noise is to detect the presence of the impulses and to declare an erasure when an impulse occurs. In this paper, we propose a coding system based on low-density parity-check (LDPC) codes and bit-interleaved coded modulation that is capable of taking advantage of the knowledge of erasures. We show that by carefully choosing the degree distribution of an irregular LDPC code, both the additive noise and the erasures can be handled by a single code, thus eliminating the need for an outer code. Such a system can perform close to the capacity of the channel and, for the same redundancy, is significantly more immune to impulse noise than existing methods based on an outer Reed-Solomon (RS) code. The proposed method has a lower implementation complexity than the concatenated coding approach.
Indian Academy of Sciences (India)
Shannon limit of the channel. Among the earliest discovered codes that approach the. Shannon limit were the low density parity check (LDPC) codes. The term low density arises from the property of the parity check matrix defining the code. We will now define this matrix and the role that it plays in decoding. 2. Linear Codes.
Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio
2007-01-01
This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.
45 Gb/s low complexity optical front-end for soft-decision LDPC decoders.
Sakib, Meer Nazmus; Moayedi, Monireh; Gross, Warren J; Liboiron-Ladouceur, Odile
2012-07-30
In this paper, a low-complexity and energy-efficient 45 Gb/s soft-decision optical front-end to be used with soft-decision low-density parity-check (LDPC) decoders is demonstrated. The results show that the optical front-end exhibits net coding gains of 7.06 dB and 9.62 dB at post-forward-error-correction bit error rates of 10^-7 and 10^-12 for the long-block-length LDPC(32768,26803) code. The gain over a hard-decision front-end is 1.9 dB for this code. It is shown that the soft-decision circuit can also be used as a 2-bit flash-type analog-to-digital converter (ADC) in conjunction with equalization schemes. At a bit rate of 15 Gb/s, using RS(255,239), LDPC(672,336), (672,504), (672,588), and (1440,1344) codes with a 6-tap finite impulse response (FIR) equalizer results in optical power savings of 3, 5, 7, 9.5 and 10.5 dB, respectively. The 2-bit flash ADC consumes only 2.71 W at 32 GSamples/s. At 45 GSamples/s the power consumption is estimated to be 4.95 W.
Joint Carrier-Phase Synchronization and LDPC Decoding
Simon, Marvin; Valles, Esteban
2009-01-01
A method has been proposed to increase the degree of synchronization of a radio receiver with the phase of a suppressed carrier signal modulated with a binary-phase-shift-keying (BPSK) or quaternary-phase-shift-keying (QPSK) signal representing a low-density parity-check (LDPC) code. This method is an extended version of the method described in Using LDPC Code Constraints to Aid Recovery of Symbol Timing (NPO-43112), NASA Tech Briefs, Vol. 32, No. 10 (October 2008), page 54. Both methods and the receiver architectures in which they would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless, in that they do not require transmission and reception of pilot signals. The proposed method calls for the use of what is known in the art as soft decision feedback to remove the modulation from a replica of the incoming signal prior to feeding this replica to a phase-locked loop (PLL) or other carrier-tracking stage in the receiver. Soft decision feedback refers to suitably processed versions of intermediate results of the iterative computations involved in the LDPC decoding process. Unlike a related prior method in which hard decision feedback (the final sequence of decoded symbols) is used to remove the modulation, the proposed method does not require estimation of the decoder error probability. In a basic digital implementation of the proposed method, the incoming signal (having carrier phase θ_c) plus noise would first be converted to in-phase (I) and quadrature (Q) baseband signals by mixing it with I and Q signals at the carrier frequency [ω_c/(2π)] generated by a local oscillator. The resulting demodulated signals would be processed through one-symbol-period integrate-and-dump filters, the outputs of which would be sampled and held, then multiplied by a soft-decision version of the baseband modulated signal. The resulting I and Q products consist of terms proportional to the cosine
Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor networks (WSN) explores energy-efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy-efficient cooperative MIMO (C-MIMO) technique is proposed in which a low-density parity-check (LDPC) code is used as the error-correcting code. The rate of the LDPC code is varied by varying the lengths of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of the LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis, and the BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error p_b. It is observed that C-MIMO performs more efficiently when the targeted p_b is smaller. Also, a lower encoding rate for the LDPC code offers better error characteristics. PMID:22163732
LDPC-based iterative joint source-channel decoding for JPEG2000.
Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane
2007-02-01
A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.
Crosstalk eliminating and low-density parity-check codes for photochromic dual-wavelength storage
Wang, Meicong; Xiong, Jianping; Jian, Jiqi; Jia, Huibo
2005-01-01
Multi-wavelength storage is an approach to increase memory density, with the problem of crosstalk to be dealt with. We apply Low Density Parity Check (LDPC) codes as error-correcting codes in photochromic dual-wavelength optical storage, based on an investigation of LDPC codes in optical data storage. A proper method is applied to reduce the crosstalk, and simulation results show that this operation is useful for improving the Bit Error Rate (BER) performance. At the same time, we can conclude that LDPC codes outperform RS codes in the crosstalk channel.
Optimized Min-Sum Decoding Algorithm for Low Density Parity Check Codes
Mohammad Rakibul Islam; Dewan Siam Shafiullah; Muhammad Mostafa Amir Faisal; Imran Rahman
2011-01-01
Low Density Parity Check (LDPC) codes approach Shannon-limit performance for the binary field and long code lengths. However, the performance of binary LDPC codes is degraded when the codeword length is small. An optimized min-sum algorithm for LDPC codes is proposed in this paper. In this algorithm, unlike other decoding methods, an optimization factor is introduced in both the check node and bit node updates of the min-sum algorithm. The optimization factor is obtained before the decoding program, and the sam...
On the reduced-complexity of LDPC decoders for beyond 400 Gb/s serial optical transmission
Djordjevic, Ivan B.; Xu, Lei; Wang, Ting
2010-12-01
Two reduced-complexity (RC) LDPC decoders are proposed, which can be used in combination with large-girth LDPC codes to enable beyond-400 Gb/s serial optical transmission. We show that the optimally attenuated RC min-sum algorithm performs only 0.45 dB worse than the conventional sum-product algorithm, while having lower storage memory requirements and much lower latency. We further evaluate the proposed algorithms for use in beyond-400 Gb/s serial optical transmission in combination with a PolMUX 32-IPQ-based signal constellation and show that low BERs can be achieved at medium optical SNRs, while achieving a net coding gain above 11.4 dB.
A new LDPC decoding scheme for PDM-8QAM BICM coherent optical communication system
Liu, Yi; Zhang, Wen-bo; Xi, Li-xia; Tang, Xian-feng; Zhang, Xiao-guang
2015-11-01
A new log-likelihood ratio (LLR) message estimation method is proposed for a polarization-division multiplexing eight-quadrature-amplitude-modulation (PDM-8QAM) bit-interleaved coded modulation (BICM) optical communication system. The formulation of the posterior probability is theoretically analyzed, and a way to reduce the pre-decoding bit error rate (BER) of the low-density parity-check (LDPC) decoder for PDM-8QAM constellations is presented. Simulation results show that it outperforms the traditional scheme: the new post-decoding BER is reduced to 50% of that of the traditional post-decoding algorithm.
On locality of Generalized Reed-Muller codes over the broadcast erasure channel
Alloum, Amira; Lin, Sian Jheng; Al-Naffouri, Tareq Y.
2016-01-01
, and more specifically at the application layer where Rateless, LDPC, Reed-Solomon codes and network coding schemes have been extensively studied, optimized and standardized in the past. Beyond reusing, extending or adapting existing application layer packet
Quasi Cyclic Low Density Parity Check Code for High SNR Data Transfer
Directory of Open Access Journals (Sweden)
M. R. Islam
2010-06-01
Full Text Available An improved Quasi-Cyclic Low-Density Parity-Check code (QC-LDPC) is proposed to reduce the complexity of the Low-Density Parity-Check code (LDPC) while obtaining similar performance. The proposed QC-LDPC presents an improved construction at high SNR with circulant sub-matrices. The proposed construction yields a performance gain of about 1 dB at a 0.0003 bit error rate (BER), and it is tested with 4 different decoding algorithms. The proposed QC-LDPC is compared with the existing QC-LDPC, and the simulation results show that the proposed approach outperforms the existing one at high SNR. Simulations are also performed varying the number of horizontal sub-matrices, and the results show that the parity-check matrix with smaller horizontal concatenation shows better performance.
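The circulant structure mentioned above can be sketched as follows (Python; the base-matrix shift exponents and circulant size are illustrative, not the proposed code's). A QC-LDPC parity-check matrix is tiled from p × p circulant permutation matrices, each a cyclically shifted identity:

```python
def circulant(p, shift):
    """p x p identity matrix cyclically shifted right by `shift` columns."""
    return [[1 if (r + shift) % p == c else 0 for c in range(p)]
            for r in range(p)]

def qc_ldpc_h(exponents, p):
    """Tile circulant permutation blocks according to a matrix of shift
    exponents, yielding a (len(exponents)*p) x (len(exponents[0])*p) matrix."""
    rows = []
    for exp_row in exponents:
        blocks = [circulant(p, e) for e in exp_row]
        for r in range(p):
            rows.append([blocks[b][r][c] for b in range(len(blocks))
                         for c in range(p)])
    return rows

# Illustrative 2 x 4 base matrix of shifts with p = 5: a 10 x 20 H whose
# rows all have weight 4 and columns all have weight 2.
H = qc_ldpc_h([[0, 1, 2, 3], [1, 3, 0, 2]], p=5)
```

Because each block contributes exactly one 1 per row and per column, the row and column weights of H equal the base-matrix dimensions, which is what keeps encoding and storage complexity low.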
Structured Low-Density Parity-Check Codes with Bandwidth Efficient Modulation
Cheng, Michael K.; Divsalar, Dariush; Duy, Stephanie
2009-01-01
In this work, we study the performance of structured Low-Density Parity-Check (LDPC) codes together with bandwidth-efficient modulations. We consider protograph-based LDPC codes that facilitate high-speed hardware implementations and have minimum distances that grow linearly with block sizes. We cover various higher-order modulations such as 8-PSK, 16-APSK, and 16-QAM. During demodulation, a demapper transforms the received in-phase and quadrature samples into reliability information that feeds the binary LDPC decoder. We compare various low-complexity demappers and provide simulation results for assorted coded-modulation combinations on the additive white Gaussian noise and independent Rayleigh fading channels.
Entanglement-assisted quantum quasicyclic low-density parity-check codes
Hsieh, Min-Hsiu; Brun, Todd A.; Devetak, Igor
2009-03-01
We investigate the construction of quantum low-density parity-check (LDPC) codes from classical quasicyclic (QC) LDPC codes with girth greater than or equal to 6. We show that the classical codes in the generalized Calderbank-Shor-Steane construction do not need to satisfy the dual-containing property as long as preshared entanglement is available to both sender and receiver. We can use this to avoid the many four-cycles which typically arise in dual-containing LDPC codes. The advantage of such quantum codes comes from the use of efficient decoding algorithms such as the sum-product algorithm (SPA). It is well known that in the SPA, cycles of length 4 make successive decoding iterations highly correlated and hence limit the decoding performance. We show the principle of constructing quantum QC-LDPC codes which require only small amounts of initial shared entanglement.
Quantum quasi-cyclic low-density parity-check error-correcting codes
International Nuclear Information System (INIS)
Yuan, Li; Gui-Hua, Zeng; Lee, Moon Ho
2009-01-01
In this paper, we propose the approach of employing circulant permutation matrices to construct quantum quasicyclic (QC) low-density parity-check (LDPC) codes. Using the proposed approach one may construct new quantum codes with various lengths and rates and with no cycles of length 4 in their Tanner graphs. In addition, these constructed codes have the advantages of simple implementation and low-complexity encoding. Finally, the decoding approach for the proposed quantum QC-LDPC codes is investigated. (general)
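The absence of length-4 cycles that both quantum QC-LDPC constructions above aim for has a simple matrix test: the Tanner graph of H is 4-cycle-free exactly when no two columns of H share 1s in two or more rows. A minimal sketch of the check (Python):

```python
from itertools import combinations

def has_four_cycle(h):
    """True if some pair of columns of the parity-check matrix h shares
    1s in at least two rows, i.e. the Tanner graph has a length-4 cycle."""
    ncols = len(h[0])
    supports = [{r for r in range(len(h)) if h[r][c]} for c in range(ncols)]
    return any(len(a & b) >= 2 for a, b in combinations(supports, 2))
```

In the sum-product algorithm, such cycles make messages on the two shared edges correlate after only two iterations, which is why girth at least 6 is a standard design target.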
An FPGA Implementation of a (3,6)-Regular Low-Density Parity-Check Code Decoder
Directory of Open Access Journals (Sweden)
Tong Zhang
2003-05-01
Full Text Available Because of their excellent error-correcting performance, low-density parity-check (LDPC) codes have recently attracted a lot of attention. In this paper, we are interested in practical LDPC code decoder hardware implementations. The direct fully parallel decoder implementation usually incurs too high a hardware complexity for many real applications, thus partly parallel decoder design approaches that can achieve appropriate trade-offs between hardware complexity and decoding throughput are highly desirable. Applying a joint code and decoder design methodology, we develop a high-speed (3,k)-regular LDPC code partly parallel decoder architecture, based on which we implement a 9216-bit, rate-1/2 (3,6)-regular LDPC code decoder on a Xilinx FPGA device. This partly parallel decoder supports a maximum symbol throughput of 54 Mbps and achieves a BER of 10^-6 at 2 dB over the AWGN channel while performing a maximum of 18 decoding iterations.
Irregular Applications: Architectures & Algorithms
Energy Technology Data Exchange (ETDEWEB)
Feo, John T.; Villa, Oreste; Tumeo, Antonino; Secchi, Simone
2012-02-06
Irregular applications are characterized by irregular data structures and irregular control and communication patterns. Novel irregular high-performance applications that deal with large data sets have recently appeared. Unfortunately, current high-performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, area specialists and computer scientists that consider both the architecture and the software stack may be able to provide solutions to the challenges of modern irregular applications.
Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs
Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken
2015-09-01
To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.
Coded Cooperation for Multiway Relaying in Wireless Sensor Networks †
Si, Zhongwei; Ma, Junyang; Thobaben, Ragnar
2015-01-01
Wireless sensor networks have been considered as an enabling technology for constructing smart cities. One important feature of wireless sensor networks is that the sensor nodes collaborate in some manner for communications. In this manuscript, we focus on the model of multiway relaying with full data exchange where each user wants to transmit and receive data to and from all other users in the network. We derive the capacity region for this specific model and propose a coding strategy through coset encoding. To obtain good performance with practical codes, we choose spatially-coupled LDPC (SC-LDPC) codes for the coded cooperation. In particular, for the message broadcasting from the relay, we construct multi-edge-type (MET) SC-LDPC codes by repeatedly applying coset encoding. Due to the capacity-achieving property of the SC-LDPC codes, we prove that the capacity region can theoretically be achieved by the proposed MET SC-LDPC codes. Numerical results with finite node degrees are provided, which show that the achievable rates approach the boundary of the capacity region in both binary erasure channels and additive white Gaussian channels. PMID:26131675
A Low-Complexity and High-Performance 2D Look-Up Table for LDPC Hardware Implementation
Chen, Jung-Chieh; Yang, Po-Hui; Lain, Jenn-Kaie; Chung, Tzu-Wen
In this paper, we propose a low-complexity, high-efficiency two-dimensional look-up table (2D LUT) for carrying out the sum-product algorithm in the decoding of low-density parity-check (LDPC) codes. Instead of employing adders for the core operation when updating check node messages, in the proposed scheme, the main term and correction factor of the core operation are successfully merged into a compact 2D LUT. Simulation results indicate that the proposed 2D LUT not only attains close-to-optimal bit error rate performance but also enjoys a low complexity advantage that is suitable for hardware implementation.
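The merged main-term-plus-correction idea can be sketched in software. This is not the paper's hardware design; it is a floating-point Python sketch of how a 2D LUT can replace the exact pairwise check-node ("box-plus") computation, with the quantizer resolution (64 levels, LLR magnitudes clipped at 8) chosen arbitrarily for illustration:

```python
import math

def boxplus_exact(a, b):
    # Exact pairwise check-node update: min term plus correction factor.
    return (math.copysign(1, a) * math.copysign(1, b) * min(abs(a), abs(b))
            + math.log1p(math.exp(-abs(a + b)))
            - math.log1p(math.exp(-abs(a - b))))

# Hypothetical quantizer parameters (not from the paper): 64 levels,
# LLR magnitudes clipped to [0, 8).
LEVELS, LLR_MAX = 64, 8.0
STEP = LLR_MAX / LEVELS

def _q(x):
    # Map a magnitude to its quantization bin index.
    return min(int(abs(x) / STEP), LEVELS - 1)

# Precompute the 2D LUT: main term (min) and correction factor merged
# into a single table indexed by the two quantized input magnitudes.
LUT = [[boxplus_exact((i + 0.5) * STEP, (j + 0.5) * STEP)
        for j in range(LEVELS)] for i in range(LEVELS)]

def boxplus_lut(a, b):
    # Sign handled separately; magnitude comes from one table lookup,
    # so no adder is needed for the correction term.
    s = math.copysign(1, a) * math.copysign(1, b)
    return s * LUT[_q(a)][_q(b)]
```

A hardware version would store fixed-point table entries; the structure (sign logic plus a single two-input table access per update) is the point of the sketch.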
Efficient decoding of random errors for quantum expander codes
Fawzi, Omar; Grospellier, Antoine; Leverrier, Anthony
2017-01-01
We show that quantum expander codes, a constant-rate family of quantum LDPC codes, with the quasi-linear time decoding algorithm of Leverrier, Tillich and Zémor can correct a constant fraction of random errors with very high probability. This is the first construction of a constant-rate quantum LDPC code with an efficient decoding algorithm that can correct a linear number of random errors with a negligible failure probability. Finding codes with these properties is also motivated by Gottes...
An experimental comparison of coded modulation strategies for 100 Gb/s transceivers
Sillekens, E.; Alvarado, A.; Okonkwo, C.; Thomsen, B.C.
2016-01-01
Coded modulation is a key technique to increase the spectral efficiency of coherent optical communication systems. Two popular strategies for coded modulation are turbo trellis-coded modulation (TTCM) and bit-interleaved coded modulation (BICM) based on low-density parity-check (LDPC) codes.
Polynomial theory of error correcting codes
Cancellieri, Giovanni
2015-01-01
The book offers an original view on channel coding, based on a unitary approach to block and convolutional codes for error correction. It presents both new concepts and new families of codes. For example, lengthened and modified lengthened cyclic codes are introduced as a bridge towards time-invariant convolutional codes and their extension to time-varying versions. The novel families of codes include turbo codes and low-density parity check (LDPC) codes, the features of which are justified from the structural properties of the component codes. Design procedures for regular LDPC codes are proposed, supported by the presented theory. Quasi-cyclic LDPC codes, in block or convolutional form, represent one of the most original contributions of the book. The use of more than 100 examples allows the reader gradually to gain an understanding of the theory, and the provision of a list of more than 150 definitions, indexed at the end of the book, permits rapid location of sought information.
Trellis and turbo coding iterative and graph-based error control coding
Schlegel, Christian B
2015-01-01
This new edition has been extensively revised to reflect the progress in error control coding over the past few years. Over 60% of the material has been completely reworked, and 30% of the material is original. Convolutional, turbo, and low density parity-check (LDPC) coding and polar codes in a unified framework. Advanced research-related developments such as spatial coupling. A focus on algorithmic and implementation aspects of error control coding.
Entanglement-assisted quantum low-density parity-check codes
International Nuclear Information System (INIS)
Fujiwara, Yuichiro; Clark, David; Tonchev, Vladimir D.; Vandendriessche, Peter; De Boeck, Maarten
2010-01-01
This article develops a general method for constructing entanglement-assisted quantum low-density parity-check (LDPC) codes, which is based on combinatorial design theory. Explicit constructions are given for entanglement-assisted quantum error-correcting codes with many desirable properties. These properties include the requirement of only one initial entanglement bit, high error-correction performance, high rates, and low decoding complexity. The proposed method produces several infinite families of codes with a wide variety of parameters and entanglement requirements. Our framework encompasses the previously known entanglement-assisted quantum LDPC codes having the best error-correction performance and many other codes with better block error rates in simulations over the depolarizing channel. We also determine important parameters of several well-known classes of quantum and classical LDPC codes for previously unsettled cases.
LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor
Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram
2007-09-01
Implementing the sum-product algorithm, in an FPGA with an embedded processor, invites us to consider a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that will perform product computations. Our FPGA-based coprocessor design performs computer algebra with significantly less precision than the standard (e.g. integer, floating-point) operations of general purpose processors. Using synthesis, targeting a 3,168 LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is impacted both positively and negatively by a reduction in the precision of the computation. Reducing precision reduces the coding gain, but the limited-precision computation can operate faster. A proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain. Synthesis results help estimate the full capacity and performance of an FPGA-based coprocessor.
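As a rough software stand-in for such a limited-precision unit (the actual FPGA datapath is not described in this abstract), reduced-mantissa floating-point products can be emulated by masking low-order mantissa bits of IEEE-754 singles; the 10-bit mantissa width below is an arbitrary illustrative choice:

```python
import struct

def truncate_float(x, mantissa_bits):
    # Emulate limited precision by zeroing the low mantissa bits of an
    # IEEE-754 single (a rough stand-in for a custom FPGA datapath).
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    mask = 0xFFFFFFFF << (23 - mantissa_bits)
    return struct.unpack('>f', struct.pack('>I', bits & mask))[0]

def product_limited(vals, mantissa_bits=10):
    # Product computation (as in a belief-propagation variable update),
    # truncating each intermediate result to the reduced precision.
    acc = 1.0
    for v in vals:
        acc = truncate_float(acc * v, mantissa_bits)
    return acc
```

Sweeping `mantissa_bits` and measuring the resulting coding gain is the kind of precision/speed tradeoff the abstract describes.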
Cooperative MIMO communication at wireless sensor network: an error correcting code approach.
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also the lower encoding rate for LDPC code offers better error characteristics.
The serial message-passing schedule for LDPC decoding algorithms
Liu, Mingshan; Liu, Shanshan; Zhou, Yuan; Jiang, Xue
2015-12-01
The conventional message-passing schedule for LDPC decoding algorithms is the so-called flooding schedule. Its disadvantage is that updated messages cannot be used until the next iteration, which reduces the convergence speed. To address this, the layered decoding algorithm (LBP), based on a serial message-passing schedule, has been proposed. In this paper the decoding principle of the LBP algorithm is briefly introduced, and two improved algorithms are then proposed: the grouped serial decoding algorithm (Grouped LBP) and the semi-serial decoding algorithm. They improve the LBP algorithm's decoding speed while maintaining good decoding performance.
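The difference between the flooding and serial (layered) schedules comes down to when freshly computed check-to-variable messages become visible. Below is a minimal layered decoder sketch (using min-sum rather than full BP, on a toy parity-check matrix; this is not the paper's Grouped LBP or semi-serial variant) in which each check immediately refreshes the posterior LLRs it touches, so later checks in the same iteration already see the new information:

```python
def layered_min_sum(H, llr, iters=5):
    # Layered (serial) min-sum decoding: checks are processed one by
    # one, unlike the flooding schedule where all updates only become
    # visible in the next iteration.
    m, n = len(H), len(llr)
    post = list(llr)                      # posterior LLRs, refreshed in place
    c2v = [[0.0] * n for _ in range(m)]   # stored check-to-variable messages
    for _ in range(iters):
        for c in range(m):                # the serial schedule over checks
            idx = [j for j in range(n) if H[c][j]]
            # variable-to-check messages, using the freshest posteriors
            v2c = {j: post[j] - c2v[c][j] for j in idx}
            for j in idx:
                others = [v2c[k] for k in idx if k != j]
                sign = -1.0 if sum(x < 0 for x in others) % 2 else 1.0
                new = sign * min(abs(x) for x in others)
                post[j] = v2c[j] + new    # posterior updated immediately
                c2v[c][j] = new
    return [0 if x >= 0 else 1 for x in post]
```

A grouped variant would partition the checks into groups and process groups serially but checks within a group in parallel, trading some convergence speed for hardware parallelism.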
Study of bifurcation behavior of two-dimensional turbo product code decoders
International Nuclear Information System (INIS)
He Yejun; Lau, Francis C.M.; Tse, Chi K.
2008-01-01
Turbo codes, low-density parity-check (LDPC) codes and turbo product codes (TPCs) are high performance error-correction codes which employ iterative algorithms for decoding. Under different conditions, the behaviors of the decoders are different. While the nonlinear dynamical behaviors of turbo code decoders and LDPC decoders have been reported in the literature, the dynamical behavior of TPC decoders is relatively unexplored. In this paper, we investigate the behavior of the iterative algorithm of a two-dimensional TPC decoder when the input signal-to-noise ratio (SNR) varies. The quantity to be measured is the mean square value of the posterior probabilities of the information bits. Unlike turbo decoders or LDPC decoders, TPC decoders do not produce a clear 'waterfall region'. This is mainly because the TPC decoding algorithm does not converge to 'indecisive' fixed points even at very low SNR values.
Star Formation in Irregular Galaxies.
Hunter, Deidre; Wolff, Sidney
1985-01-01
Examines mechanisms of how stars are formed in irregular galaxies. Formation in giant irregular galaxies, formation in dwarf irregular galaxies, and comparisons with larger star-forming regions found in spiral galaxies are considered separately. (JN)
Ni, Jianjun David
2011-01-01
This presentation briefly discusses a research effort on mitigation techniques of pulsed radio frequency interference (RFI) on a Low-Density-Parity-Check (LDPC) code. This problem is of considerable interest in the context of providing reliable communications to a space vehicle which might suffer severe degradation due to pulsed RFI sources such as large radars. The LDPC code is one of the modern forward-error-correction (FEC) codes whose decoding performance approaches the Shannon limit. The LDPC code studied here is the AR4JA (2048, 1024) code recommended by the Consultative Committee for Space Data Systems (CCSDS), and it has been chosen for some spacecraft designs. Even though this code is designed as a powerful FEC code in the additive white Gaussian noise channel, simulation data and test results show that the performance of this LDPC decoder is severely degraded when exposed to the pulsed RFI specified in the spacecraft's transponder specifications. An analysis effort (through modeling and simulation) has been conducted to evaluate the impact of the pulsed RFI, and a few implementation techniques have been investigated to mitigate the pulsed RFI impact by reshuffling the soft-decision data available at the input of the LDPC decoder. The simulation results show that the LDPC decoding performance in terms of codeword error rate (CWER) under pulsed RFI can be improved by up to four orders of magnitude through a simple soft-decision-data reshuffle scheme. This study reveals that an error floor of LDPC decoding performance appears around CWER=1E-4 when the proposed technique is applied to mitigate the pulsed RFI impact. The mechanism causing this error floor remains unknown; further investigation is necessary.
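The reshuffle scheme itself is not spelled out in this abstract. As a simple illustrative stand-in for the general idea, soft-decision values known to be hit by an RFI pulse can be erased or down-weighted before decoding, so the decoder treats them as unreliable rather than confidently wrong:

```python
def mitigate_pulsed_rfi(llrs, pulse_mask, scale=0.0):
    # Down-weight (or, with scale=0, fully erase) the soft-decision
    # values that coincide with a known RFI pulse; unaffected samples
    # pass through unchanged. This is an illustrative stand-in, not the
    # paper's reshuffle scheme.
    return [x * scale if hit else x for x, hit in zip(llrs, pulse_mask)]
```

The decoder then relies on the code's redundancy to recover the erased positions, much as it would for channel erasures.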
Channel coding techniques for wireless communications
Deergha Rao, K
2015-01-01
The book discusses modern channel coding techniques for wireless communications such as turbo codes, low-density parity check (LDPC) codes, space–time (ST) coding, RS (or Reed–Solomon) codes and convolutional codes. Many illustrative examples are included in each chapter for easy understanding of the coding techniques. The text is integrated with MATLAB-based programs to enhance the understanding of the subject’s underlying theories. It includes current topics of increasing importance such as turbo codes, LDPC codes, Luby transform (LT) codes, Raptor codes, and ST coding in detail, in addition to the traditional codes such as cyclic codes, BCH (or Bose–Chaudhuri–Hocquenghem) and RS codes and convolutional codes. Multiple-input and multiple-output (MIMO) communications is a multiple antenna technology, which is an effective method for high-speed or high-reliability wireless communications. PC-based MATLAB m-files for the illustrative examples are provided on the book page on Springer.com for free dow...
Statistical mechanics of low-density parity-check codes
Energy Technology Data Exchange (ETDEWEB)
Kabashima, Yoshiyuki [Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama 2268502 (Japan); Saad, David [Neural Computing Research Group, Aston University, Birmingham B4 7ET (United Kingdom)
2004-02-13
We review recent theoretical progress on the statistical mechanics of error correcting codes, focusing on low-density parity-check (LDPC) codes in general, and on Gallager and MacKay-Neal codes in particular. By exploiting the relation between LDPC codes and Ising spin systems with multi-spin interactions, one can carry out a statistical mechanics based analysis that determines the practical and theoretical limitations of various code constructions, corresponding to dynamical and thermodynamical transitions, respectively, as well as the behaviour of error-exponents averaged over the corresponding code ensemble as a function of channel noise. We also contrast the results obtained using methods of statistical mechanics with those derived in the information theory literature, and show how these methods can be generalized to include other channel types and related communication problems. (topical review)
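The LDPC-to-Ising relation used in this line of work can be stated explicitly. In the standard mapping (sketched here from the general statistical-mechanics literature, not from this review's specific notation), bits become spins and each parity check becomes a multi-spin interaction:

```latex
% Bits x_i \in \{0,1\} are mapped to Ising spins \sigma_i = (-1)^{x_i},
% so each parity check \mu becomes the multi-spin constraint
% \prod_{i \in \mathcal{L}(\mu)} \sigma_i = 1 over its variable set
% \mathcal{L}(\mu). Decoding then corresponds to the ground state of
H(\boldsymbol{\sigma}) = -\sum_{\mu} J_{\mu} \prod_{i \in \mathcal{L}(\mu)} \sigma_i
                         - \sum_{i} h_i \sigma_i ,
% where J_{\mu} \to \infty enforces the parity checks and the local
% fields h_i carry the channel evidence (log-likelihood ratios).
```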
Statistical physics inspired energy-efficient coded-modulation for optical communications.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2012-04-15
Because Shannon's entropy can be obtained by Stirling's approximation of thermodynamics entropy, the statistical physics energy minimization methods are directly applicable to the signal constellation design. We demonstrate that statistical physics inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose the discrete-time implementation of D-dimensional transceiver and corresponding EE polarization-division multiplexed system. © 2012 Optical Society of America
Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei
2009-03-01
Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.
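The Slepian-Wolf step can be illustrated with syndrome-based coset encoding: the encoder transmits only the syndrome of its sequence, and the decoder resolves the coset using correlated side information. The sketch below uses a toy parity-check matrix and brute-force minimum-distance decoding in place of a real LDPC code and belief-propagation decoder:

```python
from itertools import product

H = [[1, 1, 0, 1],   # toy parity-check matrix (illustrative only;
     [0, 1, 1, 1]]   # a real system would use a long LDPC code)

def syndrome(H, x):
    # Slepian-Wolf encoder: transmit only the syndrome s = H x (mod 2),
    # i.e. the index of the coset containing x.
    return [sum(h * b for h, b in zip(row, x)) % 2 for row in H]

def decode(H, s, side_info):
    # Decoder: among all sequences with the required syndrome, pick the
    # one closest (in Hamming distance) to the correlated side info.
    best = None
    for x in product([0, 1], repeat=len(side_info)):
        if syndrome(H, list(x)) == s:
            d = sum(a != b for a, b in zip(x, side_info))
            if best is None or d < best[0]:
                best = (d, list(x))
    return best[1]
```

With 2 syndrome bits for 4 source bits, the rate is halved relative to sending the sequence itself, at the cost of relying on the side information's quality.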
Denk, Tilmann; Mottola, S.
2012-10-01
Ymir (diameter 18 km), Saturn's second largest retrograde outer or irregular moon, has been observed six times by the Cassini narrow-angle camera (NAC) during the first 7 months in 2012. The observations span phase angles from 2° up to 102° and were taken at ranges between 15 and 18 million kilometers. From such a distance, Ymir is smaller than a pixel in the Cassini NAC. The data reveal a sidereal rotation period of 11.93 hrs, which is 1.6x longer than the previously reported value (Denk et al. 2011, EPSC/DPS #1452). The reason for this discrepancy is that the rotational light curve shows a rather uncommon 3-maxima and 3-minima shape at least in the phase angle range 50° to 100°, which was not recognizable in earlier data. The data cover several rotations from different viewing and illumination geometries and allow for a convex shape inversion with possibly a unique solution for the pole direction. The model reproduces the observed light curves to a very good accuracy without requiring albedo variegation, thereby suggesting that the light curve is dominated by the shape of Ymir. Among Saturn's irregular moons, the phenomenon of more than two maxima and minima at moderate to high phase angles is not unique to Ymir. At least Siarnaq and Paaliaq also show light curves with a strong deviation from a double-sine curve. Their rotation periods, however, remain unknown until more data can be taken. The light curve of Phoebe is fundamentally different from Ymir's because it is mainly determined by local albedo differences and not by shape. Other reliable rotation periods of irregular satellites measured by Cassini include: Mundilfari 6.74 h; Kari 7.70 h; Albiorix 13.32 h; Kiviuq 21.82 h. More uncertain values are: Skathi 12 h; Bebhionn 16 h; Thrymr 27 h; Erriapus 28 h.
A mean field theory of coded CDMA systems
International Nuclear Information System (INIS)
Yano, Toru; Tanaka, Toshiyuki; Saad, David
2008-01-01
We present a mean field theory of code-division multiple-access (CDMA) systems with error-control coding. On the basis of the relation between the free energy and mutual information, we obtain an analytical expression of the maximum spectral efficiency of the coded CDMA system, from which a mean-field description of the coded CDMA system is provided in terms of a bank of scalar Gaussian channels whose variances in general vary at different code symbol positions. Regular low-density parity-check (LDPC)-coded CDMA systems are also discussed as an example of the coded CDMA systems.
Irregular Migrants and the Law
Kassim, Azizah; Mat Zin, Ragayah Hj.
2013-01-01
This paper examines Malaysia's policy on irregular migrants and its implementation, and discusses its impact. A survey and interviews covering 404 respondents were conducted between July 2010 and June 2011 to ascertain the real situation surrounding irregular migrants in Malaysia, which is one of the major host countries of international migrants from developing nations. The policy on foreign workers was formulated in the mid-1980s to deal with the large number of irregular migrants and their ...
A Simple Scheme for Belief Propagation Decoding of BCH and RS Codes in Multimedia Transmissions
Directory of Open Access Journals (Sweden)
Marco Baldi
2008-01-01
Full Text Available Classic linear block codes, like Bose-Chaudhuri-Hocquenghem (BCH and Reed-Solomon (RS codes, are widely used in multimedia transmissions, but their soft-decision decoding still represents an open issue. Among the several approaches proposed for this purpose, an important role is played by the iterative belief propagation principle, whose application to low-density parity-check (LDPC codes makes it possible to approach the channel capacity. In this paper, we develop a new technique for decoding classic binary and nonbinary codes through the belief propagation algorithm. We focus on RS codes included in the recent CDMA2000 standard, and compare the proposed technique with the adaptive belief propagation approach, which ensures very good performance but at higher complexity. Moreover, we consider the case of long BCH codes included in the DVB-S2 standard, for which we show that the usage of “pure” LDPC codes would provide better performance.
Strategic Analysis of Irregular Warfare
2010-03-01
the same mathematical equations used by Lanchester. Irregular Warfare Theory and Doctrine: It is time to develop new analytical methods and models ... basis on which to build, similar to what Lanchester provided almost 100 years ago. Figure 9 portrays both Lanchester's approach and an irregular ...
Non-binary Entanglement-assisted Stabilizer Quantum Codes
Riguang, Leng; Zhi, Ma
2011-01-01
In this paper, we show how to construct non-binary entanglement-assisted stabilizer quantum codes by using pre-shared entanglement between the sender and receiver. We also give an algorithm to determine the circuit for non-binary entanglement-assisted stabilizer quantum codes and some illustrated examples. The codes we constructed do not require the dual-containing constraint, and many non-binary classical codes, like non-binary LDPC codes, which do not satisfy the condition, can be used to c...
Analysis of an Irregular RC Multi-storeyed Building Subjected to Dynamic Loading
Raut, Akash; Pachpor, Prabodh; Dautkhani, Sanket
2018-03-01
Many buildings in the present scenario have irregular configurations in both plan and elevation, and may in the future be subjected to devastating earthquakes, so it is necessary to analyze such structures. The present paper studies three types of irregularity, viz. vertical, mass and plan irregularity, as per clause 7.1 of IS 1893 (Part 1): 2002. The paper discusses the analysis of RC (Reinforced Concrete) buildings with vertical irregularity. The study as a whole makes an effort to evaluate the effect of vertical irregularity on RC buildings, for which three parameters, namely shear force, bending moment and deflection, are compared.
Channel coding for underwater acoustic single-carrier CDMA communication system
Liu, Lanjun; Zhang, Yonglei; Zhang, Pengcheng; Zhou, Lin; Niu, Jiong
2017-01-01
CDMA is an effective multiple access protocol for underwater acoustic networks, and channel coding can effectively reduce the bit error rate (BER) of an underwater acoustic communication system. To meet the requirements of underwater acoustic mobile networks based on CDMA, an underwater acoustic single-carrier CDMA communication system (UWA/SCCDMA) based on direct-sequence spread spectrum is proposed, and its channel coding scheme is studied based on convolutional, RA, Turbo and LDPC coding respectively. The implementation steps of the Viterbi algorithm for convolutional coding, the BP and minimum-sum algorithms for RA coding, the Log-MAP and SOVA algorithms for Turbo coding, and the sum-product algorithm for LDPC coding are given. A Matlab-based UWA/SCCDMA simulation system is designed. Simulation results show that the UWA/SCCDMA systems based on RA, Turbo and LDPC coding perform well, with a communication BER below 10^-6 in the underwater acoustic channel at low signal-to-noise ratios (SNR) from -12 dB to -10 dB, which is about 2 orders of magnitude lower than that of convolutional coding. The system based on Turbo coding with the Log-MAP algorithm has the best performance.
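The direct-sequence spread-spectrum component of such a system can be sketched independently of the channel coding wrapped around it. The ±1 spreading code below is an arbitrary example, not the sequence used in the paper:

```python
def spread(bits, code):
    # Direct-sequence spreading: each data bit (mapped to ±1) multiplies
    # every chip of the spreading code.
    return [(1 - 2 * b) * c for b in bits for c in code]

def despread(chips, code):
    # Correlate received chips against the code, one bit period at a
    # time, and hard-decide the sign of each correlation.
    n = len(code)
    out = []
    for i in range(0, len(chips), n):
        out.append(sum(x * c for x, c in zip(chips[i:i + n], code)))
    return [0 if v > 0 else 1 for v in out]
```

In the full system the despreader's soft correlation values, not the hard decisions, would feed the channel decoder.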
Irregular Dwarf Galaxy IC 1613
2005-01-01
Ultraviolet image (left) and visual image (right) of the irregular dwarf galaxy IC 1613. Low surface brightness galaxies, such as IC 1613, are more easily detected in the ultraviolet because of the low background levels compared to visual wavelengths.
Error-correction coding and decoding bounds, codes, decoders, analysis and applications
Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak
2017-01-01
This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...
Directory of Open Access Journals (Sweden)
Du Bing
2010-01-01
Full Text Available A recently developed theory suggests that network coding is a generalization of source coding and channel coding and thus yields a significant performance improvement in terms of throughput and spatial diversity. This paper proposes a cooperative design of a parity-check network coding scheme in the context of a two-source multiple access relay channel (MARC) model, a common compact model in hierarchical wireless sensor networks (WSNs). The scheme uses Low-Density Parity-Check (LDPC) as the surrogate to build up a layered structure which encapsulates the multiple constituent LDPC codes in the source and relay nodes. Specifically, the relay node decodes the messages from the two sources, which are used to generate extra parity-check bits by a random network coding procedure to fill up the rate gap between the Source-Relay and Source-Destination transmissions. We then derive the key algebraic relationships among multidimensional LDPC constituent codes as one of the constraints for code profile optimization. These extra check bits are sent to the destination to realize a cooperative diversity as well as to approach the MARC decode-and-forward (DF) capacity.
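The relay's random network coding step can be sketched as forming extra GF(2) parity combinations over the two decoded source messages. The uniform subset-selection rule below is an illustrative assumption, not the optimized code profile derived in the paper:

```python
import random

def relay_extra_parity(msg_a, msg_b, n_extra, seed=1):
    # Relay-side sketch: after decoding both sources' messages, form
    # extra parity bits as random GF(2) (XOR) combinations over all
    # their bits, to be forwarded to the destination. The destination
    # would regenerate the same combinations from the shared seed.
    rng = random.Random(seed)
    bits = msg_a + msg_b
    parity = []
    for _ in range(n_extra):
        subset = [i for i in range(len(bits)) if rng.random() < 0.5]
        parity.append(sum(bits[i] for i in subset) % 2)
    return parity
```

Choosing `n_extra` to cover the rate gap between the Source-Relay and Source-Destination links is the design freedom the abstract refers to.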
Low Density Parity Check (LDPC) Simulation with the DVB-T2 Standard
Kurniawan, Yusuf; Hafizh, Idham
2014-01-01
This article presents a simulation of the encoding-decoding performed on a sample of random binary data according to the standard used in Digital Video Broadcasting – Terrestrial 2nd Generation (DVB-T2), implemented in MATLAB. Low Density Parity Check (LDPC) coding is used in the encoding-decoding process as a feature for correcting errors during data transmission. The modulation used in the simulation is BPSK with an AWGN channel model. In that simulation, ...
Co-operation of digital nonlinear equalizers and soft-decision LDPC FEC in nonlinear transmission.
Tanimura, Takahito; Oda, Shoichiro; Hoshida, Takeshi; Aoki, Yasuhiko; Tao, Zhenning; Rasmussen, Jens C
2013-12-30
We experimentally and numerically investigated the characteristics of 128 Gb/s dual-polarization quadrature phase shift keying signals received with two types of nonlinear equalizers (NLEs) followed by soft-decision (SD) low-density parity-check (LDPC) forward error correction (FEC). Successful co-operation between SD-FEC and the NLEs over various nonlinear transmission scenarios was demonstrated by optimizing the NLE parameters.
Optimized Fast Walsh–Hadamard Transform on GPUs for non-binary LDPC decoding
Andrade, Joao; Falcao, Gabriel; Silva, Vitor
2014-01-01
The Fourier Transform Sum-Product Algorithm (FT-SPA) used in non-binary Low-Density Parity-Check (LDPC) decoding makes extensive use of the Walsh–Hadamard Transform (WHT). We have developed a massively parallel Fast Walsh–Hadamard Transform (FWHT) which exploits the Graphics Processing Unit (GPU) pipeline and memory hierarchy, thereby minimizing the level of memory bank conflicts and maximizing the number of returned instructions per clock cycle for different generations of graphics processor...
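The Walsh–Hadamard transform at the core of this kernel has a simple O(n log n) butterfly structure. As a point of reference, a minimal sequential version is sketched below in plain Python/NumPy; this is an illustration of the transform itself, not the massively parallel GPU implementation described in the abstract:

```python
import numpy as np

def fwht(a):
    """Unnormalized fast Walsh-Hadamard transform of a length-2^k vector.

    Iterative butterfly: at stage h, elements h apart are combined as
    (x + y, x - y). Runs in O(n log n) operations.
    """
    a = np.asarray(a, dtype=float).copy()
    n = len(a)
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a
```

The GPU version instead parallelizes each butterfly stage across threads; since the unnormalized transform satisfies H·H = nI, applying `fwht` twice returns the input scaled by n.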
Performance optimization of PM-16QAM transmission system enabled by real-time self-adaptive coding.
Qu, Zhen; Li, Yao; Mo, Weiyang; Yang, Mingwei; Zhu, Shengxiang; Kilper, Daniel C; Djordjevic, Ivan B
2017-10-15
We experimentally demonstrate self-adaptive coded 5×100 Gb/s WDM polarization multiplexed 16 quadrature amplitude modulation transmission over a 100 km fiber link, which is enabled by a real-time control plane. The real-time optical signal-to-noise ratio (OSNR) is measured using an optical performance monitoring device. The OSNR measurement is processed and fed back using control plane logic and messaging to the transmitter side for code adaptation, where the binary data are adaptively encoded with three types of low-density parity-check (LDPC) codes with code rates of 0.8, 0.75, and 0.7 of large girth. The total code-adaptation latency is measured to be 2273 ms. Compared with transmission without adaptation, average net capacity improvements of 102%, 36%, and 7.5% are obtained, respectively, by adaptive LDPC coding.
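The control-plane adaptation described above reduces to picking the highest-rate code whose measured OSNR is sufficient. A minimal sketch of such a threshold rule follows; the OSNR breakpoints here are hypothetical placeholders, since the abstract gives only the three code rates (0.8, 0.75, 0.7), not the actual switching thresholds:

```python
# Hypothetical (OSNR_dB threshold, LDPC code rate) pairs, highest rate
# first. Real thresholds would come from the codes' measured BER curves.
RATE_TABLE = [(18.0, 0.80), (15.0, 0.75), (0.0, 0.70)]

def select_code_rate(osnr_db, table=RATE_TABLE):
    """Return the highest code rate whose OSNR threshold is met;
    fall back to the most robust (lowest-rate) code otherwise."""
    for threshold, rate in table:
        if osnr_db >= threshold:
            return rate
    return table[-1][1]
```

In the experiment, the monitored OSNR is fed back to the transmitter, which would then re-encode with the selected rate (the measured adaptation latency being 2273 ms).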
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2008-09-15
We present two PMD compensation schemes suitable for use in multilevel (M ≥ 2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded-OFDM and coherent detection. When used in combination with girth-10 LDPC codes, those schemes outperform polarization-time-coding-based OFDM by 1 dB at a BER of 10⁻⁹, and provide two times higher spectral efficiency. The proposed schemes perform comparably and are able to compensate even 1200 ps of differential group delay with negligible penalty.
Typical performance of regular low-density parity-check codes over general symmetric channels
International Nuclear Information System (INIS)
Tanaka, Toshiyuki; Saad, David
2003-01-01
Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The relationship between the free energy in the statistical-mechanics approach and the mutual information used in the information-theory literature is established within a general framework; Gallager and MacKay-Neal codes are studied as specific examples of LDPC codes. It is shown that basic properties of these codes known for particular channels, including their potential to saturate Shannon's bound, hold for general symmetric channels. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel models.
Typical performance of regular low-density parity-check codes over general symmetric channels
Energy Technology Data Exchange (ETDEWEB)
Tanaka, Toshiyuki [Department of Electronics and Information Engineering, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji-shi, Tokyo 192-0397 (Japan); Saad, David [Neural Computing Research Group, Aston University, Aston Triangle, Birmingham B4 7ET (United Kingdom)
2003-10-31
GARCH and Irregularly Spaced Data
Meddahi, N.; Renault, E.; Werker, B.J.M.
2003-01-01
An exact discretization of continuous time stochastic volatility processes observed at irregularly spaced times is used to give insights on how a coherent GARCH model can be specified for such data. The relation of our approach with those in the existing literature is studied.
Neural network decoder for quantum error correcting codes
Krastanov, Stefan; Jiang, Liang
Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.
Advanced hardware design for error correcting codes
Coussy, Philippe
2015-01-01
This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include the evolution of error correction techniques, industrial user needs, and architectures and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc.). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs and advanced error correcting techniques.
On the Automatic Parallelization of Sparse and Irregular Fortran Programs
Directory of Open Access Journals (Sweden)
Yuan Lin
1999-01-01
Full Text Available Automatic parallelization is usually believed to be less effective at exploiting implicit parallelism in sparse/irregular programs than in their dense/regular counterparts. However, not much is really known because there have been few research reports on this topic. In this work, we have studied the possibility of using an automatic parallelizing compiler to detect the parallelism in sparse/irregular programs. The study with a collection of sparse/irregular programs led us to some common loop patterns. Based on these patterns new techniques were derived that produced good speedups when manually applied to our benchmark codes. More importantly, these parallelization methods can be implemented in a parallelizing compiler and can be applied automatically.
Spatially coupled low-density parity-check error correction for holographic data storage
Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro
2017-09-01
The spatially coupled low-density parity-check (SC-LDPC) code was considered for holographic data storage. The superiority of SC-LDPC was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number; when the lifting number is over 100, SC-LDPC shows better error correctability than irregular LDPC. SC-LDPC is applied to the 5:9 modulation code, which is one of the differential codes. In simulation, the error-free point is near 2.8 dB and bit error rates above 10⁻¹ can be corrected. From these simulation results, this error correction code can be applied to actual holographic data storage test equipment. Results showed that a bit error rate of 8 × 10⁻² can be corrected; furthermore, the code works effectively and shows good error correctability.
Double-Layer Low-Density Parity-Check Codes over Multiple-Input Multiple-Output Channels
Directory of Open Access Journals (Sweden)
Yun Mao
2012-01-01
Full Text Available We introduce a double-layer code based on the combination of a low-density parity-check (LDPC) code with the multiple-input multiple-output (MIMO) system, where the decoding can be done in both inner-iteration and outer-iteration manners. The present code, called the low-density MIMO code (LDMC), has a double-layer structure: one layer defines subcodes that are embedded in each transmission vector, and another glues these subcodes together. It supports inner iterations inside the LDPC decoder and outer iterations between the detectors and decoders simultaneously. It can also achieve the desired design rates due to the full rank of the deployed parity-check matrix. Simulations show that the LDMC performs favorably over MIMO systems.
Design of high-performance decoders for LDPC codes
Angarita Preciado, Fabian Enrique
2013-01-01
This thesis investigates decoding algorithms for low-density parity-check (LDPC) codes and architectures for their hardware implementation. The work focuses on message-passing algorithms for structured codes, which are included in several communication standards. First, the performance of the existing Sum-Product and Min-Sum algorithms and the main variants of...
Coded Modulation in C and MATLAB
Hamkins, Jon; Andrews, Kenneth S.
2011-01-01
This software, written separately in C and MATLAB as stand-alone packages with equivalent functionality, implements encoders and decoders for a set of nine error-correcting codes and modulators and demodulators for five modulation types. The software can be used as a single program to simulate the performance of such coded modulation. The error-correcting codes implemented are the nine accumulate repeat-4 jagged accumulate (AR4JA) low-density parity-check (LDPC) codes, which have been approved for international standardization by the Consultative Committee for Space Data Systems, and which are scheduled to fly on a series of NASA missions in the Constellation Program. The software implements the encoder and decoder functions, and contains compressed versions of generator and parity-check matrices used in these operations.
Nonlinear demodulation and channel coding in EBPSK scheme.
Chen, Xianqing; Wu, Lenan
2012-01-01
The extended binary phase shift keying (EBPSK) is an efficient modulation technique, and a special impacting filter (SIF) is used in its demodulator to improve the bit error rate (BER) performance. However, the conventional threshold decision cannot achieve the optimum performance, and the SIF makes it more difficult to obtain the posterior probability for LDPC decoding. In this paper, we concentrate not only on reducing the BER of demodulation, but also on providing accurate posterior probability estimates (PPEs). A new approach to nonlinear demodulation based on the support vector machine (SVM) classifier is introduced. The SVM method, which selects only a few sampling points from the filter output, is used to obtain the PPEs. The simulation results show that an accurate posterior probability can be obtained with this method and that the BER performance can be improved significantly by applying LDPC codes. Moreover, we analyze the effect of obtaining the posterior probability with different methods and different sampling rates. We show that the SVM method has more advantages under bad conditions and is less sensitive to the sampling rate than other methods. Thus, SVM is an effective method for EBPSK demodulation and for obtaining the posterior probability for LDPC decoding.
Directory of Open Access Journals (Sweden)
Hongwei ZHAO
2014-09-01
Full Text Available In this paper, the capacity of the BICM system over AWGN channels is first analyzed; the curves of BICM capacity versus SNR are also obtained by Monte-Carlo simulations and compared with the curves of the CM capacity. Based on the analysis results, we simulate the error performance of the BICM system with LDPC codes. Simulation results show that the capacity of the BICM system with LDPC codes is strongly influenced by the mapping method. Given a certain modulation method, the BICM system can obtain about 2-3 dB of gain with Gray mapping compared with non-Gray mapping. Meanwhile, the simulation results also demonstrate the correctness of the theoretical analysis.
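The Monte-Carlo capacity estimation mentioned in this abstract is straightforward in the binary-input case. The sketch below estimates the capacity of BPSK over AWGN (for which the BICM and CM capacities coincide); it is an illustrative reconstruction and does not reproduce the paper's higher-order mappings:

```python
import numpy as np

def bpsk_awgn_capacity(snr_db, n=200_000, seed=0):
    """Monte-Carlo estimate of I(X;Y) for equiprobable BPSK over AWGN.

    Uses I = 1 - E[log2(1 + exp(-x * L))], where L = 2y / sigma^2 is
    the exact channel LLR; logaddexp is used for numerical stability.
    """
    rng = np.random.default_rng(seed)
    sigma2 = 10.0 ** (-snr_db / 10.0)            # noise variance, Es = 1
    x = rng.choice([-1.0, 1.0], size=n)          # equiprobable symbols
    y = x + rng.normal(scale=np.sqrt(sigma2), size=n)
    llr = 2.0 * y / sigma2
    return 1.0 - np.mean(np.logaddexp(0.0, -x * llr)) / np.log(2.0)
```

Plotting this estimate against SNR gives the kind of capacity-versus-SNR curve the paper compares for BICM and CM.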
Irregular Migration in Jordan, 1995-2007
AROURI, Fathi A.
2008-01-01
Euro-Mediterranean Consortium for Applied Research on International Migration (CARIM) This paper tackles the question of irregular migration in Jordan through its four main aspects. The first concerns irregular labour migrants and has been approached by using figures showing the socio-economic profile of non-Jordanians working in Jordan and, additionally, unemployment in Jordan. This is done by assuming close similarities between legal and irregular labour migrants. The second is an attemp...
Joint nonbinary low-density parity-check codes and modulation diversity over fading channels
Shi, Zhiping; Li, Tiffany Jing; Zhang, Zhongpei
2010-09-01
A joint exploitation of coding and diversity techniques to achieve efficient, reliable wireless transmission is considered. The system comprises a powerful non-binary low-density parity-check (LDPC) code that is soft-decoded to supply strong error protection, a quadrature amplitude modulator (QAM) that directly takes in the non-binary LDPC symbols, and a modulation diversity operator that provides power- and bandwidth-efficient diversity gain. By relaxing the rate of the modulation diversity rotation matrices to below 1, we show that a better rate allocation can be arranged between the LDPC codes and the modulation diversity, which brings significant performance gain over previous systems. To facilitate the design and evaluation of the relaxed modulation diversity rotation matrices, three practical design methods based on a set of criteria are given and their pairwise error rates are analyzed. With EXIT charts, we investigate the convergence between the demodulator and the decoder. A rate-matching method is presented based on EXIT analysis. Through analysis and simulations, we show that our strategies are very effective in combating random fading and strong noise on fading channels.
Design strategies for irregularly adapting parallel applications
International Nuclear Information System (INIS)
Oliker, Leonid; Biswas, Rupak; Shan, Hongzhang; Singh, Jaswinder Pal
2000-01-01
Achieving scalable performance for dynamic irregular applications is eminently challenging. Traditional message-passing approaches have been making steady progress towards this goal; however, they suffer from complex implementation requirements. The use of a global address space greatly simplifies the programming task, but can degrade the performance of dynamically adapting computations. In this work, we examine two major classes of adaptive applications, under five competing programming methodologies and four leading parallel architectures. Results indicate that it is possible to achieve message-passing performance using shared-memory programming techniques by carefully following the same high level strategies. Adaptive applications have computational work loads and communication patterns which change unpredictably at runtime, requiring dynamic load balancing to achieve scalable performance on parallel machines. Efficient parallel implementations of such adaptive applications are therefore a challenging task. This work examines the implementation of two typical adaptive applications, Dynamic Remeshing and N-Body, across various programming paradigms and architectural platforms. We compare several critical factors of the parallel code development, including performance, programmability, scalability, algorithmic development, and portability
Ethical issues in irregular migration research
Duvell, F.; Triandafyllidou, A.; Vollmer, B.
2008-01-01
This paper is concerned with the ethical issues arising for researchers engaged in the study of irregular migration. Irregular migration is by definition an elusive phenomenon as it takes place in violation of the law and at the margins of society. This very nature of the phenomenon raises important
Bilayer Protograph Codes for Half-Duplex Relay Channels
Divsalar, Dariush; VanNguyen, Thuy; Nosratinia, Aria
2013-01-01
Direct to Earth return links are limited by the size and power of lander devices. A standard alternative is provided by a two-hop return link: a proximity link (from lander to orbiter relay) and a deep-space link (from orbiter relay to Earth). By using this additional link and a proposed coding for relay channels, one can obtain a more reliable signal. Although significant progress has been made in the relay coding problem, existing codes must be painstakingly optimized to match a single set of channel conditions, many of them do not offer easy encoding, and most of them do not have a structured design. A high-performing LDPC (low-density parity-check) code for the relay channel addresses two important issues simultaneously: a code structure that allows low encoding complexity, and a flexible rate-compatible code that allows matching to various channel conditions. Most of the previous high-performance LDPC codes for the relay channel are tightly optimized for a given channel quality and are not easily adapted without extensive re-optimization for various channel conditions. This code for the relay channel combines structured design and easy encoding with rate compatibility to allow adaptation to the three links involved in the relay channel, and furthermore offers very good performance. The proposed code is constructed by synthesizing a bilayer structure with a protograph. In addition to the contribution to relay encoding, an improved family of protograph codes was produced for the point-to-point AWGN (additive white Gaussian noise) channel, whose high-rate members enjoy thresholds that are within 0.07 dB of capacity. These LDPC relay codes address three important issues in an integrative manner: low encoding complexity, modular structure allowing for easy design, and rate compatibility so that the code can be easily matched to a variety of channel conditions without extensive
Comments on “Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes”
Directory of Open Access Journals (Sweden)
Mark B. Yeary
2009-01-01
Full Text Available This is a comment article on the publication “Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes” by Rovini et al. (2009). We note that similar work has been reported in the literature before, and that the previous work has not been cited correctly, for example Gunnam et al. (2006, 2007). This brief note serves to clarify these issues.
Code-Hopping Based Transmission Scheme for Wireless Physical-Layer Security
Directory of Open Access Journals (Sweden)
Liuguo Yin
2018-01-01
Full Text Available Due to the broadcast and time-varying natures of wireless channels, traditional communication systems that provide data encryption at the application layer suffer many challenges such as error diffusion. In this paper, we propose a code-hopping based secrecy transmission scheme that uses dynamic nonsystematic low-density parity-check (LDPC codes and automatic repeat-request (ARQ mechanism to jointly encode and encrypt source messages at the physical layer. In this scheme, secret keys at the transmitter and the legitimate receiver are generated dynamically upon the source messages that have been transmitted successfully. During the transmission, each source message is jointly encoded and encrypted by a parity-check matrix, which is dynamically selected from a set of LDPC matrices based on the shared dynamic secret key. As for the eavesdropper (Eve, the uncorrectable decoding errors prevent her from generating the same secret key as the legitimate parties. Thus she cannot select the correct LDPC matrix to recover the source message. We demonstrate that our scheme can be compatible with traditional cryptosystems and enhance the security without sacrificing the error-correction performance. Numerical results show that the bit error rate (BER of Eve approaches 0.5 as the number of transmitted source messages increases and the security gap of the system is small.
Performance of Low-Density Parity-Check Coded Modulation
Hamkins, Jon
2010-01-01
This paper reports the simulated performance of each of the nine accumulate-repeat-4-jagged-accumulate (AR4JA) low-density parity-check (LDPC) codes [3] when used in conjunction with binary phase-shift keying (BPSK), quadrature PSK (QPSK), 8-PSK, 16-ary amplitude PSK (16-APSK), and 32-APSK. We also report the performance under various mappings of bits to modulation symbols, 16-APSK and 32-APSK ring scalings, log-likelihood ratio (LLR) approximations, and decoder variations. One of the simple and well-performing LLR approximations can be expressed in a general equation that applies to all of the modulation types.
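A widely used simple LLR approximation of the kind alluded to in this abstract is the max-log rule, which replaces the log-sum over constellation points with the nearest-point distances. The sketch below is a generic illustration of that rule for any labeled constellation; it is not necessarily the specific approximation adopted in the paper:

```python
import numpy as np

def max_log_llr(y, constellation, bit_labels, sigma2):
    """Max-log LLR for one received complex sample y over AWGN.

    constellation: array of complex symbols; bit_labels: one bit-tuple
    per symbol; sigma2: total complex noise variance. Returns one LLR
    per bit position; a positive value favours bit 0.
    """
    d2 = np.abs(y - np.asarray(constellation)) ** 2
    nbits = len(bit_labels[0])
    llrs = []
    for i in range(nbits):
        d0 = min(d2[k] for k, b in enumerate(bit_labels) if b[i] == 0)
        d1 = min(d2[k] for k, b in enumerate(bit_labels) if b[i] == 1)
        llrs.append((d1 - d0) / sigma2)
    return llrs
```

For Gray-labelled QPSK, a sample received exactly on a constellation point yields equal-magnitude LLRs on both bits, with signs matching the transmitted label.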
Capture of irregular satellites at Jupiter
International Nuclear Information System (INIS)
Nesvorný, David; Vokrouhlický, David; Deienno, Rogerio
2014-01-01
The irregular satellites of outer planets are thought to have been captured from heliocentric orbits. The exact nature of the capture process, however, remains uncertain. We examine the possibility that irregular satellites were captured from the planetesimal disk during the early solar system instability when encounters between the outer planets occurred. Nesvorný et al. already showed that the irregular satellites of Saturn, Uranus, and Neptune were plausibly captured during planetary encounters. Here we find that the current instability models present favorable conditions for capture of irregular satellites at Jupiter as well, mainly because Jupiter undergoes a phase of close encounters with an ice giant. We show that the orbital distribution of bodies captured during planetary encounters provides a good match to the observed distribution of irregular satellites at Jupiter. The capture efficiency for each particle in the original transplanetary disk is found to be (1.3–3.6) × 10⁻⁸. This is roughly enough to explain the observed population of jovian irregular moons. We also confirm Nesvorný et al.'s results for the irregular satellites of Saturn, Uranus, and Neptune.
Examining U.S. Irregular Warfare Doctrine
National Research Council Canada - National Science Library
Kimbrough, IV, James M
2008-01-01
... of insurgency and terrorism. In response to the associated strategic challenges, a growing debate occurred among military historians, strategists, and leaders about the proper principles necessary for contemporary irregular...
Locating irregularly shaped clusters of infection intensity
Directory of Open Access Journals (Sweden)
Niko Yiannakoulias
2010-05-01
Full Text Available Patterns of disease may take on irregular geographic shapes, especially when features of the physical environment influence risk. Identifying these patterns can be important for planning, and also identifying new environmental or social factors associated with high or low risk of illness. Until recently, cluster detection methods were limited in their ability to detect irregular spatial patterns, and limited to finding clusters that were roughly circular in shape. This approach has less power to detect irregularly-shaped, yet important spatial anomalies, particularly at high spatial resolutions. We employ a new method of finding irregularly-shaped spatial clusters at micro-geographical scales using both simulated and real data on Schistosoma mansoni and hookworm infection intensities. This method, which we refer to as the “greedy growth scan”, is a modification of the spatial scan method for cluster detection. Real data are based on samples of hookworm and S. mansoni from Kitengei, Makueni district, Kenya. Our analysis of simulated data shows how methods able to find irregular shapes are more likely to identify clusters along rivers than methods constrained to fixed geometries. Our analysis of infection intensity identifies two small areas within the study region in which infection intensity is elevated, possibly due to local features of the physical or social environment. Collectively, our results show that the “greedy growth scan” is a suitable method for exploratory geographical analysis of infection intensity data when irregular shapes are suspected, especially at micro-geographical scales.
Joint beam design and user selection over non-binary coded MIMO interference channel
Li, Haitao; Yuan, Haiying
2013-03-01
In this paper, we discuss the problem of sum-rate improvement for a coded MIMO interference system, and propose joint beam design and user selection over the interference channel. First, we formulate a non-binary LDPC coded MIMO interference network model. Then, the least-squares beam design for the MIMO interference system is derived, and a low-complexity user selection is presented. Simulation results confirm that the sum rate can be improved by joint user selection and beam design compared with a single interference-aligning beamformer.
On locality of Generalized Reed-Muller codes over the broadcast erasure channel
Alloum, Amira
2016-07-28
One to Many communications are expected to be among the killer applications for the currently discussed 5G standard. The usage of coding mechanisms impacts broadcasting standard quality, as coding is involved at several levels of the stack, and more specifically at the application layer, where Rateless, LDPC, Reed-Solomon codes and network coding schemes have been extensively studied, optimized and standardized in the past. Beyond reusing, extending or adapting existing application-layer packet coding mechanisms based on previous schemes and designed for the foregoing LTE or other broadcasting standards, our purpose is to investigate the use of Generalized Reed-Muller codes and the value of their locality property in their progressive decoding for broadcast/multicast communication schemes with real-time video delivery. Our results are meant to bring insight into the use of locally decodable codes in broadcasting. © 2016 IEEE.
Quantum Kronecker sum-product low-density parity-check codes with finite rate
Kovalev, Alexey A.; Pryadko, Leonid P.
2013-07-01
We introduce an ansatz for quantum codes which gives the hypergraph-product (generalized toric) codes by Tillich and Zémor and generalized bicycle codes by MacKay as limiting cases. The construction allows for both the lower and the upper bounds on the minimum distance; they scale as a square root of the block length. Many thus defined codes have a finite rate and limited-weight stabilizer generators, an analog of classical low-density parity-check (LDPC) codes. Compared to the hypergraph-product codes, hyperbicycle codes generally have a wider range of parameters; in particular, they can have a higher rate while preserving the estimated error threshold.
Detecting chaos in irregularly sampled time series.
Kulp, C W
2013-09-01
Recently, Wiebe and Virgin [Chaos 22, 013136 (2012)] developed an algorithm which detects chaos by analyzing a time series' power spectrum, computed using the Discrete Fourier Transform (DFT). Their algorithm, like other time-series characterization algorithms, requires that the time series be regularly sampled. Real-world data, however, are often irregularly sampled, thus making the detection of chaotic behavior difficult or impossible with those methods. In this paper, a characterization algorithm is presented which effectively detects chaos in irregularly sampled time series. The work presented here is a modification of Wiebe and Virgin's algorithm and uses the Lomb-Scargle Periodogram (LSP) instead of the DFT to compute a series' power spectrum: the DFT is not appropriate for irregularly sampled time series, whereas the LSP is capable of computing the frequency content of irregularly sampled data. Furthermore, a new method of analyzing the power spectrum is developed, which can be useful for differentiating between chaotic and non-chaotic behavior. The new characterization algorithm is successfully applied to irregularly sampled data generated by a model as well as data consisting of observations of variable stars.
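The LSP step described above is available directly in common numerical libraries. A minimal sketch using SciPy's `lombscargle` (which takes angular frequencies) shows it recovering the dominant frequency of an irregularly sampled sinusoid:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
# 400 observation times drawn irregularly over 20 s, sampling a 1.5 Hz tone
t = np.sort(rng.uniform(0.0, 20.0, size=400))
x = np.sin(2 * np.pi * 1.5 * t)

freqs = np.linspace(0.1, 5.0, 500)              # trial frequencies, Hz
power = lombscargle(t, x, 2 * np.pi * freqs)    # lombscargle wants rad/s

peak_hz = freqs[np.argmax(power)]               # dominant frequency
```

The chaos-detection algorithm in the paper then analyzes the overall shape of this spectrum rather than a single peak, but the periodogram computation itself is exactly this call.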
Characterizing neural activities evoked by manual acupuncture through spiking irregularity measures
International Nuclear Information System (INIS)
Xue Ming; Wang Jiang; Deng Bin; Wei Xi-Le; Yu Hai-Tao; Chen Ying-Yuan
2013-01-01
The neural system characterizes information in external stimulations by different spiking patterns. In order to examine how neural spiking patterns are related to acupuncture manipulations, experiments are designed in such a way that different types of manual acupuncture (MA) manipulation are applied at the ‘Zusanli’ point of experimental rats, and the induced electrical signals in the spinal dorsal root ganglion are detected and recorded. The interspike interval (ISI) statistical histogram is fitted by the gamma distribution, which has two parameters: one is the time-dependent firing rate and the other is a shape parameter characterizing the spiking irregularity. The shape parameter is a measure of spiking irregularity and can be used to identify the type of MA manipulation. The coefficient of variation is most often used to measure spike-time irregularity, but it overestimates the irregularity in the case of pronounced firing-rate changes. However, experiments show that each acupuncture manipulation leads to changes in the firing rate. So we combine four relatively rate-independent measures to study the irregularity of spike trains evoked by different types of MA manipulations. Results suggest that the MA manipulations possess unique spiking statistics and characteristics and can be distinguished according to the spiking irregularity measures. These studies have offered new insights into the coding processes and information transfer of acupuncture. (interdisciplinary physics and related areas of science and technology)
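As a concrete illustration of the two measures contrasted above, the gamma shape parameter can be estimated from an ISI sample by moment matching (shape = mean²/variance) and compared with the coefficient of variation. This is a simple sketch, not the histogram-fitting procedure used in the experiments:

```python
import numpy as np

def gamma_shape(isi):
    """Moment-matched gamma shape parameter of an ISI sample.

    For a gamma(k, theta) ISI distribution, mean = k*theta and
    var = k*theta**2, so mean**2 / var recovers k. A shape near 1
    indicates Poisson-like (irregular) firing; a large shape indicates
    regular firing.
    """
    isi = np.asarray(isi, dtype=float)
    return isi.mean() ** 2 / isi.var()

def cv(isi):
    """Coefficient of variation (std/mean): a rate-sensitive measure."""
    isi = np.asarray(isi, dtype=float)
    return isi.std() / isi.mean()
```

An exponential ISI distribution (Poisson spiking) gives shape ≈ 1 and CV ≈ 1, while a tightly clustered ISI distribution gives a large shape and a small CV, which is the distinction the irregularity measures exploit.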
Irregular menstruation according to occupational status.
Kwak, Yeunhee; Kim, Yoonjung
2017-07-06
This cross-sectional study explored associations of irregular menstruation with occupational characteristics, using secondary analyses of data from 4,731 women aged 19-54 years, collected from a nationally representative sample, the Korea National Health and Nutrition Examination Survey-V, during 2010-2012. The associations between irregular menstruation and occupation were explored using multiple logistic regression. Compared to non-manual workers, service/sales workers had greater odds of irregular menstruation (adjusted odds ratio [aOR]: 1.44; 95% confidence interval [CI]: 1.04-1.99), as did manual workers and unemployed women (aOR: 1.56; 95% CI: 1.10-2.22 and aOR: 1.46; 95% CI: 1.14-1.89, respectively). Compared to regular workers, temporary workers and unemployed women had aORs of 1.52 (95% CI: 1.08-2.13) and 1.33 (95% CI: 1.05-1.69), respectively. Also, when compared to full-time workers, part-time workers and unemployed women had greater odds of irregular menstruation (aOR: 1.41; 95% CI: 1.00-2.00 and aOR: 1.29; 95% CI: 1.03-1.63, respectively). Furthermore, compared to daytime workers, shift workers and unemployed women had greater odds of irregular menstruation (aOR: 1.39; 95% CI: 1.03-1.88 and aOR: 1.28; 95% CI: 1.04-1.59, respectively). Women with these occupational characteristics should be screened for early diagnosis and intervention for irregular menstruation.
Advances in electron dosimetry of irregular fields
International Nuclear Information System (INIS)
Mendez V, J.
1998-01-01
This work presents an advance in electron dosimetry of irregular fields for beams emitted by linear accelerators. Diverse methods are currently applied in radiotherapy centers. A method for irregular-field dosimetry is proposed that allows calculation of the absorbed dose rate required to evaluate the treatment time for cancer patients. Using the results obtained with the dosimetric system, it was possible to demonstrate the validity of the described method for 12 MeV energy and a 7.5 x 7.5 cm² square field, with a percentile error of less than 1%. (Author)
New Model for Ionospheric Irregularities at Mars
Keskinen, M. J.
2018-03-01
A new model for ionospheric irregularities at Mars is presented. It is shown that wind-driven currents in the dynamo region of the Martian ionosphere can be unstable to the electromagnetic gradient drift instability. This plasma instability can generate ionospheric density and magnetic field irregularities with scale sizes of approximately 15-20 km down to a few kilometers. We show that the instability-driven magnetic field fluctuation amplitudes relative to background are correlated with the ionospheric density fluctuation amplitudes relative to background. Our results can explain recent observations made by the Mars Atmosphere and Volatile EvolutioN spacecraft in the Martian ionosphere dynamo region.
High energy model for irregular absorbing particles
International Nuclear Information System (INIS)
Chiappetta, Pierre.
1979-05-01
In the framework of a high-energy formulation of relativistic quantum scattering, a model is presented which describes the scattering functions and polarization of irregular absorbing particles whose dimensions are greater than the incident wavelength. More precisely, in the forward direction an amplitude parametrization of eikonal type is defined which generalizes the usual diffraction theory, and in the backward direction a reflective model including a shadow function is used. The model predictions are in good agreement with the scattering measurements off irregular compact and fluffy particles performed by Zerull, Giese and Weiss (1977).
Pryadko, Leonid P.; Dumer, Ilya; Kovalev, Alexey A.
2015-03-01
We construct a lower (existence) bound for the threshold of scalable quantum computation which is applicable to all stabilizer codes, including degenerate quantum codes with sublinear distance scaling. The threshold is based on enumerating irreducible operators in the normalizer of the code, i.e., those that cannot be decomposed into a product of two such operators with non-overlapping support. For quantum LDPC codes with logarithmic or power-law distances, we get threshold values which are parametrically better than the existing analytical bound based on percolation. The new bound also gives a finite threshold when applied to other families of degenerate quantum codes, e.g., the concatenated codes. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-11-1-0027.
Synchronizing data from irregularly sampled sensors
Uluyol, Onder
2017-07-11
A system and method include receiving a set of sampled measurements for each of multiple sensors, wherein the sampled measurements are at irregular intervals or different rates, re-sampling the sampled measurements of each of the multiple sensors at a higher rate than one of the sensor's set of sampled measurements, and synchronizing the sampled measurements of each of the multiple sensors.
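A minimal sketch of the idea (linear interpolation onto a common uniform grid finer than either sensor's rate; this is an illustration of the concept, not the patented implementation):

```python
import numpy as np

# Two sensors sampled at irregular times (seconds) with different rates.
t_a = np.array([0.00, 0.13, 0.31, 0.55, 0.98])
x_a = np.array([1.0, 1.2, 1.1, 1.4, 1.3])
t_b = np.array([0.05, 0.50, 1.00])
x_b = np.array([10.0, 12.0, 11.0])

# Re-sample both onto a common uniform grid at a higher rate.
t_sync = np.arange(0.0, 1.0, 0.05)
a_sync = np.interp(t_sync, t_a, x_a)   # holds endpoint values outside range
b_sync = np.interp(t_sync, t_b, x_b)

# The two streams are now synchronized sample-for-sample on t_sync.
print(a_sync[0], b_sync[0])
```

Higher-order interpolation or filtering could replace `np.interp` when the signals are smooth or noisy, respectively.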
Natural convection inside an irregular porous cavity
International Nuclear Information System (INIS)
Beltran, Jorge I. LLagostera; Trevisan, Osvair Vidal
1990-01-01
Natural convection flow induced by heating from below in an irregular porous cavity is investigated numerically. The influence of the modified Rayleigh number and geometric ratios on heat transfer and fluid flow is studied. Global and local Nusselt numbers are reported for Rayleigh numbers covering the range 0 - 1600 and for several geometric ratios. The fluid flow and the temperature field are illustrated by contour maps. (author)
DEFF Research Database (Denmark)
Høholdt, Tom; Janwa, Heeralal
2009-01-01
We characterize optimal bipartite expander graphs and give necessary and sufficient conditions for optimality. We determine the expansion parameters of the BIBD graphs and show that they yield optimal expander graphs and also bipartite Ramanujan graphs. In particular, we show that the bipartit...
Schneider, Barry I.; Segura, Javier; Gil, Amparo; Guan, Xiaoxu; Bartschat, Klaus
2018-04-01
This is a revised and updated version of a modern Fortran 90 code to compute the regular Plm (x) and irregular Qlm (x) associated Legendre functions for all x ∈(- 1 , + 1) (on the cut) and | x | > 1 and integer degree (l) and order (m). The necessity to revise the code comes as a consequence of comments by Prof. James Bremer of the UC Davis Mathematics Department, who discovered errors in the code for large integer degree and order for the normalized regular Legendre functions on the cut.
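The revised Fortran code itself is not reproduced here; as an independent cross-check on the cut, SciPy's `lpmv` (a standard implementation unrelated to the paper's code; it does not cover |x| > 1) evaluates the regular functions P_l^m(x) for -1 < x < 1:

```python
import numpy as np
from scipy.special import lpmv

# lpmv(m, v, x): order m, degree v, on the cut -1 < x < 1.
x = 0.5
print(lpmv(0, 1, x))   # P_1^0(x) = x
print(lpmv(1, 1, x))   # P_1^1(x) = -sqrt(1 - x^2)  (Condon-Shortley phase)
print(lpmv(0, 2, x))   # P_2^0(x) = (3x^2 - 1)/2
```

Note the argument order (order before degree) and the Condon-Shortley phase convention, both frequent sources of discrepancies between implementations.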
Directory of Open Access Journals (Sweden)
Guevara-Palma Luis
2015-01-01
Full Text Available The nesting problem of irregular shapes within irregular areas has been studied from several approaches due to its application in different industries. The particular case of cutting leather involves several restrictions that add complexity to this problem, since it is necessary to generate products that comply with the quality required by customers. This paper presents a methodology for the accommodation of irregular shapes in an irregular area (leather), considering the constraints set by the footwear industry, and the results of this methodology when applied by a computer system. The scope of the system is to develop a working prototype that operates under the guidelines of a commercial production line of a sponsor company. Preliminary results show a 70% reduction in processing time and a 5% to 7% improvement in area usage compared with manual accommodation.
Generating Performance Models for Irregular Applications
Energy Technology Data Exchange (ETDEWEB)
Friese, Ryan D.; Tallent, Nathan R.; Vishnu, Abhinav; Kerbyson, Darren J.; Hoisie, Adolfy
2017-05-30
Many applications have irregular behavior --- non-uniform input data, input-dependent solvers, irregular memory accesses, unbiased branches --- that cannot be captured using today's automated performance modeling techniques. We describe new hierarchical critical path analyses for the Palm model generation tool. To create a model's structure, we capture tasks along representative MPI critical paths. We create a histogram of critical tasks with parameterized task arguments and instance counts. To model each task, we identify hot instruction-level sub-paths and model each sub-path based on data flow, instruction scheduling, and data locality. We describe application models that generate accurate predictions for strong scaling when varying CPU speed, cache speed, memory speed, and architecture. We present results for the Sweep3D neutron transport benchmark; Page Rank on multiple graphs; Support Vector Machine with pruning; and PFLOTRAN's reactive flow/transport solver with domain-induced load imbalance.
Long wavelength irregularities in the equatorial electrojet
Kudeki, E.; Farley, D. T.; Fejer, Bela G.
1982-01-01
We have used the radar interferometer technique at Jicamarca to study in detail irregularities with wavelengths of a few kilometers generated in the unstable equatorial electrojet plasma during strong type 1 conditions. In-situ rocket observations of the same instability process are discussed in a companion paper. These large scale primary waves travel essentially horizontally and have large amplitudes. The vertical electron drift velocities driven by the horizontal wave electric fields reach...
Improving Transactional Memory Performance for Irregular Applications
Pedrero, Manuel; Gutiérrez, Eladio; Romero, Sergio; Plata, Óscar
2015-01-01
Transactional memory (TM) offers optimistic concurrency support in modern multicore architectures, helping the programmers to extract parallelism in irregular applications when data dependence information is not available before runtime. In fact, recent research focuses on exploiting thread-level parallelism using TM approaches. However, the proposed techniques are of general use, valid for any type of application. This work presents ReduxSTM, a software TM system specially d...
Star formation histories of irregular galaxies
International Nuclear Information System (INIS)
Gallagher, J.S. III; Hunter, D.A.; Tutukov, A.V.
1984-01-01
We explore the star formation histories of a selection of irregular and spiral galaxies by using three parameters that sample the star formation rate (SFR) at different epochs: (1) the mass of a galaxy in the form of stars measures the SFR integrated over the galaxy's lifetime; (2) the blue luminosity is dominated primarily by stars formed over the past few billion years; and (3) Lyman continuum photon fluxes derived from Hα luminosities give the current (∼10^8 yr) SFR.
Parallel Computing Strategies for Irregular Algorithms
Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)
2002-01-01
Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.
Sotnikov, V. I.; Kim, T. C.; Mishin, E. V.; Kil, H.; Kwak, Y. S.; Paraschiv, I.
2017-12-01
Ionospheric irregularities cause scintillations of electromagnetic signals that can severely affect navigation and transionospheric communication, in particular during space storms. At mid-latitudes the source of F-region Field Aligned Irregularities (FAI) is yet to be determined. They can be created in enhanced subauroral flow channels (SAI/SUBS), where strong gradients of electric field, density and plasma temperature are present. Another important source of FAI is connected with medium-scale travelling ionospheric disturbances (MSTIDs). Related shear flows and plasma density troughs point to interchange and Kelvin-Helmholtz type instabilities as a possible source of plasma irregularities. A model of nonlinear development of these instabilities, based on a two-fluid hydrodynamic description including finite Larmor radius effects, will be presented. This approach makes it possible to resolve density irregularities on the meter scale. A numerical code in the C language was developed to solve the derived nonlinear equations for the analysis of interchange and flow-velocity-shear instabilities in the ionosphere. This code will be used to analyze the competition between interchange and Kelvin-Helmholtz instabilities in the mid-latitude region. The high-resolution simulations, with continuous density and velocity profiles, will be driven by ambient conditions corresponding to the in situ data obtained during the 2016 Daejeon (Korea) and MU (Japan) radar campaign and to data collected simultaneously by the Swarm satellites passing over Korea and Japan. PA approved #: 88ABW-2017-3641
The Impact of Irregular Warfare on the US Army
National Research Council Canada - National Science Library
McDonald, III, Roger L
2006-01-01
Although the U.S. Army has yet to clearly define irregular warfare, it is imperative that the Army take near-term action to enhance the ability of Soldiers and units to operate effectively in an irregular warfare environment...
State reconstruction and irregular wavefunctions for the hydrogen atom
Krähmer, D. S.; Leonhardt, U.
1997-07-01
Inspired by a recently proposed procedure by Leonhardt and Raymer for wavepacket reconstruction, we calculate the irregular wavefunctions for the bound states of the Coulomb potential. We select the irregular solutions which have the simplest semiclassical limit.
On irregularity strength of disjoint union of friendship graphs
Directory of Open Access Journals (Sweden)
Ali Ahmad
2013-11-01
Full Text Available We investigate the vertex total and edge total modification of the well-known irregularity strength of graphs. We have determined the exact values of the total vertex irregularity strength and the total edge irregularity strength of a disjoint union of friendship graphs.
16 CFR 501.6 - Cellulose sponges, irregular dimensions.
2010-01-01
16 Commercial Practices, Requirements and Prohibitions under Part 500, § 501.6 Cellulose sponges, irregular dimensions: Variety packages of cellulose sponges of irregular dimensions are exempted from the requirements of § 500.25 of this...
PERFORMANCE EVOLUTION OF PAPR REDUCTION IN OFDM WITH AND WITHOUT LDPC TECHNIQUE
Punit Upmanyu*; Prof. Saurabh Gaur
2016-01-01
OFDM is one of the proven multicarrier modulation techniques, providing high spectral efficiency, low implementation complexity, and low vulnerability to echoes and non-linear distortion. Owing to these advantages, the technique is currently used by almost all wireless standards. The one major shortcoming in the implementation of this system is its high PAPR (peak-to-average power ratio). In this paper, an irregular Low-Density Parity-Check encoder is used ef...
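PAPR is the ratio of peak to mean instantaneous power of the time-domain (IFFT) OFDM signal; a minimal illustration with random QPSK subcarriers (the subcarrier count and modulation are illustrative, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# One OFDM symbol: N QPSK subcarriers mapped to time domain via IFFT.
N = 256
bits = rng.integers(0, 2, size=(N, 2))
symbols = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)
x = np.fft.ifft(symbols) * np.sqrt(N)   # unitary scaling, mean power ~ 1

# PAPR: peak instantaneous power over mean power, in dB.
papr_db = 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))
print(papr_db)
```

For a few hundred subcarriers a random symbol typically lands around 8-12 dB, which is why PAPR reduction (clipping, selective mapping, coding) matters for power-amplifier efficiency.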
Finite Length Analysis of Irregular Repetition Slotted ALOHA in the Waterfall Region
Amat, Alexandre Graell i; Liva, Gianluigi
2018-01-01
A finite length analysis is introduced for irregular repetition slotted ALOHA (IRSA) that enables accurate estimation of its performance in the moderate-to-high packet loss probability regime, i.e., in the so-called waterfall region. The analysis is tailored to the collision channel model, which enables mapping the description of the successive interference cancellation process onto the iterative erasure decoding of low-density parity-check codes. The analysis provides accurate estimates of t...
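A minimal simulation of the process described above (collision channel, iterative successive interference cancellation) can be sketched as follows; the frame size, load, and degree distribution are illustrative, not those of the paper:

```python
import random

def irsa_frame(n_users, n_slots, degrees=(2, 3), probs=(0.5, 0.5), rng=None):
    """One IRSA frame: each user transmits d replicas in d distinct random slots."""
    rng = rng or random.Random(0)
    slots = [set() for _ in range(n_slots)]
    for user in range(n_users):
        d = rng.choices(degrees, probs)[0]
        for s in rng.sample(range(n_slots), d):
            slots[s].add(user)
    return slots

def sic_decode(slots):
    """Iterative SIC: a singleton slot decodes its user; that user's
    replicas are then cancelled from every other slot, possibly exposing
    new singletons (mirrors iterative erasure decoding of LDPC codes)."""
    resolved, progress = set(), True
    while progress:
        progress = False
        for s in slots:
            remaining = s - resolved
            if len(remaining) == 1:
                resolved |= remaining
                progress = True
    return resolved

slots = irsa_frame(n_users=40, n_slots=100)
plr = 1 - len(sic_decode(slots)) / 40    # packet loss rate for this frame
print(plr)
```

At this low load (G = 0.4) essentially every user is resolved; pushing the load toward the decoding threshold produces the waterfall behavior the finite-length analysis quantifies.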
Measurements of linear attenuation coefficients of irregular shaped samples by two media method
International Nuclear Information System (INIS)
Singh, Sukhpal; Kumar, Ashok; Thind, Kulwant Singh; Mudahar, Gurmel S.
2008-01-01
The linear attenuation coefficient values of regular and irregularly shaped flyash materials have been measured without knowing the sample thickness, using a new technique, namely the 'two media method'. These values have also been measured with a standard gamma-ray transmission method and obtained theoretically with the WinXCom computer code. The comparison shows that the two media method gives accurate attenuation coefficients for flyash materials.
Energy Technology Data Exchange (ETDEWEB)
Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)
1966-09-01
This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient conditions; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flow rate in parallel channels, coupled or not by conduction across the plates, is computed for imposed pressure drops or flow rates, variable or not in time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code, and has as its complement (FLID) a one-channel, two-dimensional code. (authors)
Locating irregularly shaped clusters of infection intensity
DEFF Research Database (Denmark)
Yiannakoulias, Niko; Wilson, Shona; Kariuki, H. Curtis
2010-01-01
Real data are based on samples of hookworm and S. mansoni from Kitengei, Makueni district, Kenya. Our analysis of simulated data shows how methods able to find irregular shapes are more likely to identify clusters along rivers than methods constrained to fixed geometries. Our analysis of infection intensity identifies two small areas within the study region in which infection intensity is elevated, possibly due to local features of the physical or social environment. Collectively, our results show that the "greedy growth scan" is a suitable method for exploratory geographical analysis for cluster detection...
Equatorial Ionospheric Irregularities Study from ROCSAT Data
2017-10-20
Distribution: unlimited public release. Abstract: Ionospheric irregularity/scintillation occurrences can be caused by external driving ...
PI: Academia Sinica, Taipei, Taiwan (chliu2@gate.sinica.edu.tw)
CoPI: Shin-Yi Su, National Central University, Chung-Li, Taiwan (sysu@csrsr.ncu.edu.tw)
CoPI: Lung-Chi Tsai, National Central University, Chung-Li
Artificial periodic irregularities in the auroral ionosphere
Directory of Open Access Journals (Sweden)
M. T. Rietveld
1996-12-01
Full Text Available Artificial periodic irregularities (API are produced in the ionospheric plasma by a powerful standing electromagnetic wave reflected off the F region. The resulting electron-density irregularities can scatter other high-frequency waves if the Bragg scattering condition is met. Such measurements have been performed at mid-latitudes for two decades and have been developed into a useful ionospheric diagnostic technique. We report here the first measurements from a high-latitude station, using the EISCAT heating facility near Tromsø, Norway. Both F-region and lower-altitude ionospheric echoes have been obtained, but the bulk of the data has been in the E and D regions with echoes extending down to 52-km altitude. Examples of API are shown, mainly from the D region, together with simultaneous VHF incoherent-scatter-radar (ISR data. Vertical velocities derived from the rate of phase change during the irregularity decay are shown and compared with velocities derived from the ISR. Some of the API-derived velocities in the 75–115-km height range appear consistent with vertical neutral winds as shown by their magnitudes and by evidence of gravity waves, while other data in the 50–70-km range show an unrealistically large bias. For a comparison with ISR data it has proved difficult to get good quality data sets overlapping in height and time. The initial comparisons show some agreement, but discrepancies of several metres per second do not yet allow us to conclude that the two techniques are measuring the same quantity. The irregularity decay time-constants between about 53 and 70 km are compared with the results of an advanced ion-chemistry model, and height profiles of recorded signal power are compared with model estimates in the same altitude range. The calculated amplitude shows good agreement with the data in that the maximum occurs at about the same height as that of the measured amplitude. The calculated time-constant agrees very well with the
International Nuclear Information System (INIS)
Tabuchi, M.; Tatsumi, M.; Ohoka, Y.; Nagano, H.; Ishizaki, K.
2017-01-01
This paper describes an overview of the AEGIS/SCOPE2 system, an advanced in-core fuel management system for pressurized water reactors, and presents validation results from actual core-follow calculations including irregular operational conditions. The AEGIS and SCOPE2 codes adopt more detailed and accurate calculation models than the current core design codes, while computational cost is minimized with various numerical and computational algorithm techniques. Verification and validation of AEGIS/SCOPE2 have been performed intensively to confirm the validity of the system. As part of the validation, core-follow calculations had been carried out mainly for typical operational conditions. After the Fukushima Daiichi nuclear power plant accident, however, all nuclear reactors in Japan underwent long suspensions and irregular operational conditions. In such situations, data measured during the restart and operation of the reactors provide good tests for validation of the codes. Therefore, core-follow calculations were carried out with AEGIS/SCOPE2 for various cases, including zero-power reactor physics tests under irregular operational conditions. Comparisons between measured data and AEGIS/SCOPE2 predictions revealed the validity and robustness of the system. (author)
Photonic circuits for iterative decoding of a class of low-density parity-check codes
International Nuclear Information System (INIS)
Pavlichin, Dmitri S; Mabuchi, Hideo
2014-01-01
Photonic circuits in which stateful components are coupled via guided electromagnetic fields are natural candidates for resource-efficient implementation of iterative stochastic algorithms based on propagation of information around a graph. Conversely, such message-passing algorithms suggest novel circuit architectures for signal processing and computation that are well matched to nanophotonic device physics. Here, we construct and analyze a quantum optical model of a photonic circuit for iterative decoding of a class of low-density parity-check (LDPC) codes called expander codes. Our circuit can be understood as an open quantum system whose autonomous dynamics map straightforwardly onto the subroutines of an LDPC decoding scheme, with several attractive features: it can operate in the ultra-low power regime of photonics in which quantum fluctuations become significant, it is robust to noise and component imperfections, it achieves comparable performance to known iterative algorithms for this class of codes, and it provides an instructive example of how nanophotonic cavity quantum electrodynamic components can enable useful new information technology even if the solid-state qubits on which they are based are heavily dephased and cannot support large-scale entanglement. (paper)
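The paper's photonic subroutines are not reproduced here; a minimal classical analogue of the iterative decoding it implements is the parallel bit-flipping decoder commonly associated with expander codes, sketched below on a toy (7,4) Hamming parity-check matrix (illustrative only, not an expander code):

```python
import numpy as np

# Toy parity-check matrix: the (7,4) Hamming code written LDPC-style.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(H, y, max_iter=20):
    """Parallel bit-flipping: flip the bits involved in the most
    unsatisfied parity checks until the syndrome is zero."""
    y = y.copy()
    for _ in range(max_iter):
        syndrome = (H @ y) % 2
        if not syndrome.any():
            break                       # all parity checks satisfied
        unsat = H.T @ syndrome          # per-bit count of unsatisfied checks
        y[unsat == unsat.max()] ^= 1    # flip the worst offenders
    return y

codeword = np.zeros(7, dtype=int)       # all-zero codeword
received = codeword.copy()
received[2] ^= 1                        # inject a single bit error
decoded = bit_flip_decode(H, received)
print(decoded)
```

For genuine expander codes this decoder corrects a constant fraction of errors in a logarithmic number of rounds, which is what makes the mapping onto autonomous circuit dynamics attractive.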
Legal aspects of the EU policy on irregular immigration
Directory of Open Access Journals (Sweden)
Voinikov Vadim
2015-12-01
Full Text Available This article addresses issues pertaining to the adoption and development of legislation on irregular migration in the context of uncontrolled growth in the number of immigrants from North Africa and the Middle East to the EU. The article studies the EU legislation on irregular migration, classifies it, and analyses the prospects of EU migration legislation in the light of the increase in irregular immigration into the EU. The author systematises and classifies the current EU legislation on irregular immigration and analyses the conditions in which this legislation was developed. Using the legislation analysis method, the author proposes the following system of EU legislation on irregular immigration: rules preventing assistance to irregular immigration, rules preventing employment of irregular immigrants, rules on the return of irregular migrants and readmission, rules on border control, and rules on collaboration with third countries. The author pays special attention to analysing the current state of irregular immigration to the EU, which has been dubbed the ‘greatest migration crisis in Europe’. The conclusion is that the European Union has succeeded in developing pioneering legislation on irregular immigration, which can serve as the basis for reception by other states. However, changes in the political and economic situation in the EU’s southern borderlands have made the current legal mechanisms incapable of withstanding new threats, necessitating a radical reform of the legislation on irregular immigration.
Directory of Open Access Journals (Sweden)
Ahmed Abdulkadhim Hamad
2017-08-01
Full Text Available In this paper, different techniques are used to improve the turbo decoding of regular repeat-accumulate (RA) and irregular repeat-accumulate (IRA) codes. Adaptive scaling of the a-posteriori information produced by the soft-output Viterbi algorithm (SOVA) decoder is proposed. Encoded pilots are another scheme, applied to short-length RA codes. This work also suggests a simple and fast method to generate a random interleaver whose Tanner graph is free of 4-cycles. The progressive edge growth (PEG) algorithm is also studied and simulated to create Tanner graphs with large girth.
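A 4-cycle in a Tanner graph corresponds to two parity checks sharing two or more variable nodes, so 4-cycle-freeness can be tested directly from the parity-check matrix H; a sketch with hypothetical small matrices (not the paper's interleaver construction):

```python
import numpy as np

def has_4_cycle(H):
    """A Tanner-graph 4-cycle exists iff two rows of H share >= 2 columns,
    i.e. some off-diagonal entry of H @ H.T is >= 2."""
    overlap = H @ H.T
    np.fill_diagonal(overlap, 0)
    return bool((overlap >= 2).any())

# Rows 0 and 1 share columns 0 and 1 -> a length-4 cycle.
H_bad = np.array([[1, 1, 0, 0],
                  [1, 1, 1, 0]])
# No pair of rows shares more than one column -> 4-cycle-free.
H_good = np.array([[1, 1, 0, 0],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1]])
print(has_4_cycle(H_bad), has_4_cycle(H_good))
```

An interleaver generator can use such a check as an accept/reject test, while PEG instead avoids short cycles constructively edge by edge.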
Computing proton dose to irregularly moving targets
International Nuclear Information System (INIS)
Phillips, Justin; Gueorguiev, Gueorgui; Grassberger, Clemens; Dowdell, Stephen; Paganetti, Harald; Sharp, Gregory C; Shackleford, James A
2014-01-01
Purpose: While four-dimensional computed tomography (4DCT) and deformable registration can be used to assess the dose delivered to regularly moving targets, there are few methods available for irregularly moving targets. 4DCT captures an idealized waveform, but human respiration during treatment is characterized by gradual baseline shifts and other deviations from a periodic signal. This paper describes a method for computing the dose delivered to irregularly moving targets based on 1D or 3D waveforms captured at the time of delivery. Methods: The procedure uses CT or 4DCT images for dose calculation, and 1D or 3D respiratory waveforms of the target position at time of delivery. Dose volumes are converted from their Cartesian geometry into a beam-specific radiological depth space, parameterized in 2D by the beam aperture, and longitudinally by the radiological depth. In this new frame of reference, the proton doses are translated according to the motion found in the 1D or 3D trajectory. These translated dose volumes are weighted and summed, then transformed back into Cartesian space, yielding an estimate of the dose that includes the effect of the measured breathing motion. The method was validated using a synthetic lung phantom and a single representative patient CT. Simulated 4DCT was generated for the phantom with 2 cm peak-to-peak motion. Results: A passively-scattered proton treatment plan was generated using 6 mm and 5 mm smearing for the phantom and patient plans, respectively. The method was tested without motion, and with two simulated breathing signals: a 2 cm amplitude sinusoid, and a 2 cm amplitude sinusoid with 3 cm linear drift in the phantom. The tumor positions were equally weighted for the patient calculation. Motion-corrected dose was computed based on the mid-ventilation CT image in the phantom and the peak exhale position in the patient. Gamma evaluation was 97.8% without motion, 95.7% for 2 cm sinusoidal motion, 95.7% with 3 cm drift in
Two media method for linear attenuation coefficient determination of irregular soil samples
International Nuclear Information System (INIS)
Vici, Carlos Henrique Georges
2004-01-01
In several nuclear applications, such as soil physics and geology, knowledge of the gamma-ray linear attenuation coefficient of irregular samples is necessary. This work presents the validation of a methodology for determining the linear attenuation coefficient (μ) of irregularly shaped samples that does not require knowledge of the sample thickness. With this methodology, irregular soil samples (undeformed field samples) from the Londrina region, north of Parana, were studied. The two media method was employed for the μ determination. It consists of determining μ through the measurement of the attenuation of a gamma-ray beam by the sample sequentially immersed in two different media with known and appropriately chosen attenuation coefficients. For comparison, the theoretical value of μ was calculated as the product of the mass attenuation coefficient, obtained with the WinXCom code, and the measured sample density. This software employs the chemical composition of the samples and supplies a table of mass attenuation coefficients versus photon energy. To verify the validity of the two media method against the simple gamma-ray transmission method, regular pumice stone samples were used. With these results for the attenuation coefficients and their respective deviations, it was possible to compare the two methods. We conclude that the two media method is a good tool for determining the linear attenuation coefficient of irregular materials, particularly in the study of soil samples. (author)
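The algebra behind the two media method can be sketched as follows: writing Beer-Lambert attenuation for the sample of unknown thickness t inside a container of length L filled with each medium gives two equations in the two unknowns (t, μs). All symbols and numbers below are assumptions for illustration, not values from the paper:

```python
import numpy as np

# ln(I0/Ii) = mu_s * t + mu_i * (L - t),  i = 1, 2
# Subtracting the two equations isolates (L - t), then mu_s follows.
def two_media_mu(I0, I1, I2, mu1, mu2, L):
    gap = np.log(I2 / I1) / (mu1 - mu2)   # L - t, the medium-filled gap
    t = L - gap                            # recovered sample thickness
    mu_s = (np.log(I0 / I1) - mu1 * gap) / t
    return mu_s, t

# Synthetic self-consistency check (units: cm and cm^-1, made up).
mu_s_true, t_true, L, I0 = 0.20, 2.0, 5.0, 1000.0
mu1, mu2 = 0.05, 0.12
I1 = I0 * np.exp(-(mu_s_true * t_true + mu1 * (L - t_true)))
I2 = I0 * np.exp(-(mu_s_true * t_true + mu2 * (L - t_true)))
mu_s, t = two_media_mu(I0, I1, I2, mu1, mu2, L)
print(mu_s, t)
```

Note that the thickness drops out of the measurement procedure entirely: it is recovered as a by-product, which is exactly why the method suits irregular samples.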
Long wavelength irregularities in the equatorial electrojet
International Nuclear Information System (INIS)
Kudeki, E.; Farley, D.T.; Fejer, B.G.
1982-01-01
We have used the radar interferometer technique at Jicamarca to study in detail irregularities with wavelengths of a few kilometers generated in the unstable equatorial electrojet plasma during strong type 1 conditions. In-situ rocket observations of the same instability process are discussed in a companion paper. These large scale primary waves travel essentially horizontally and have large amplitudes. The vertical electron drift velocities driven by the horizontal wave electric fields reach or exceed the ion-acoustic velocity even though the horizontal phase velocity of the wave is considerably smaller. A straightforward extension to the long wavelength regime of the usual linear theory of the electrojet instability explains this and several other observed features of these dominant primary waves
Irregular employment amongst migrants in Spanish cities.
Sole, C; Ribas, N; Bergalli, V; Parella, S
1998-04-01
This article presents the irregular employment situation of non-European Union immigrants in Spanish cities. Foreign labor is remarkable for its heterogeneity in terms of country of origin, demographic characteristics, and the different ways in which immigrants have entered the job market. Legal immigrants tend to concentrate in five branches of activity: domestic service (mostly women), the hotel and restaurant industry, agriculture, building, and retail trade. Migrants who work in agriculture suffer worse labor conditions than all other migrants. However, all migrants experience difficulty in obtaining residency and labor permits. Four integration strategies among Moroccan immigrants in Catalonia are discussed; these can be viewed as support networks of the immigrants.
Irregular activity arises as a natural consequence of synaptic inhibition
International Nuclear Information System (INIS)
Terman, D.; Rubin, J. E.; Diekman, C. O.
2013-01-01
Irregular neuronal activity is observed in a variety of brain regions and states. This work illustrates a novel mechanism by which irregular activity naturally emerges in two-cell neuronal networks featuring coupling by synaptic inhibition. We introduce a one-dimensional map that captures the irregular activity occurring in our simulations of conductance-based differential equations and mathematically analyze the instability of fixed points corresponding to synchronous and antiphase spiking for this map. We find that the irregular solutions that arise exhibit expansion, contraction, and folding in phase space, as expected in chaotic dynamics. Our analysis shows that these features are produced from the interplay of synaptic inhibition with sodium, potassium, and leak currents in a conductance-based framework and provides precise conditions on parameters that ensure that irregular activity will occur. In particular, the temporal details of spiking dynamics must be present for a model to exhibit this irregularity mechanism and must be considered analytically to capture these effects.
Irregular activity arises as a natural consequence of synaptic inhibition
Energy Technology Data Exchange (ETDEWEB)
Terman, D., E-mail: terman@math.ohio-state.edu [Department of Mathematics, The Ohio State University, Columbus, Ohio 43210 (United States); Rubin, J. E., E-mail: jonrubin@pitt.edu [Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania 15260 (United States); Diekman, C. O., E-mail: diekman@njit.edu [Department of Mathematical Sciences, New Jersey Institute of Technology, Newark, New Jersey 07102 (United States)
2013-12-15
Irregular neuronal activity is observed in a variety of brain regions and states. This work illustrates a novel mechanism by which irregular activity naturally emerges in two-cell neuronal networks featuring coupling by synaptic inhibition. We introduce a one-dimensional map that captures the irregular activity occurring in our simulations of conductance-based differential equations and mathematically analyze the instability of fixed points corresponding to synchronous and antiphase spiking for this map. We find that the irregular solutions that arise exhibit expansion, contraction, and folding in phase space, as expected in chaotic dynamics. Our analysis shows that these features are produced from the interplay of synaptic inhibition with sodium, potassium, and leak currents in a conductance-based framework and provides precise conditions on parameters that ensure that irregular activity will occur. In particular, the temporal details of spiking dynamics must be present for a model to exhibit this irregularity mechanism and must be considered analytically to capture these effects.
Decomposing Oriented Graphs into Six Locally Irregular Oriented Graphs
DEFF Research Database (Denmark)
Bensmail, Julien; Renault, Gabriel
2016-01-01
An undirected graph G is locally irregular if every two of its adjacent vertices have distinct degrees. We say that G is decomposable into k locally irregular graphs if there exists a partition E1∪E2∪⋯∪Ek of the edge set E(G) such that each Ei induces a locally irregular graph. It was recently co...
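Local irregularity as defined above is straightforward to verify programmatically; a minimal sketch (adjacency-set graph representation assumed):

```python
def is_locally_irregular(adj):
    """True iff every pair of adjacent vertices has distinct degrees.
    adj: dict mapping vertex -> set of neighbours (simple undirected graph)."""
    deg = {v: len(ns) for v, ns in adj.items()}
    return all(deg[u] != deg[v] for u in adj for v in adj[u])

# The star K_{1,3} is locally irregular (centre has degree 3, leaves degree 1),
# while a single edge K_2 is not: both endpoints have degree 1.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
edge = {0: {1}, 1: {0}}
print(is_locally_irregular(star), is_locally_irregular(edge))
```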
A novel method of the image processing on irregular triangular meshes
Vishnyakov, Sergey; Pekhterev, Vitaliy; Sokolova, Elizaveta
2018-04-01
The paper describes a novel method of image processing based on irregular triangular meshes. The triangular mesh is adaptive to the image content, and least-mean-square linear approximation is proposed for the basic interpolation within each triangle. It is proposed to use triangular numbers to simplify the use of local (barycentric) coordinates for further analysis: a triangular element of the initial irregular mesh is represented through a set of four equilateral triangles. This allows fast and simple pixel indexing in local coordinates, e.g. with "for" or "while" loops for access to the pixels. Moreover, the proposed representation allows a discrete cosine transform of the simple "rectangular" symmetric form without additional pixel reordering (as is used for shape-adaptive DCT forms). Furthermore, this approach leads to a simple form of the wavelet transform on a triangular mesh. The results of applying the method are presented. It is shown that the advantage of the proposed method is the combination of the flexibility of image-adaptive irregular meshes with simple pixel indexing in local triangular coordinates and the use of common forms of discrete transforms on triangular meshes. The method is proposed for image compression, pattern recognition, image quality improvement, and image search and indexing. It may also be used as part of video coding (intra-frame or inter-frame coding, motion detection).
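The triangular-number indexing mentioned above can be illustrated in isolation. The sketch below shows only the row-wise indexing idea with plain `for` loops; it does not reproduce the paper's four-equilateral-triangle representation:

```python
def tri_index(r, c):
    """Linear index of local pixel (row r, column c), 0 <= c <= r.
    Rows 0..r-1 hold the triangular number r*(r+1)//2 of pixels,
    so the index is that offset plus the column."""
    return r * (r + 1) // 2 + c

def triangle_pixels(n_rows):
    """Enumerate local (row, col) pixel coordinates of a discrete
    triangular element with simple nested loops."""
    return [(r, c) for r in range(n_rows) for c in range(r + 1)]

pix = triangle_pixels(4)
# The element holds the 4th triangular number of pixels (10), and the
# triangular index is a bijection onto 0..len(pix)-1.
print(len(pix), [tri_index(r, c) for r, c in pix])
```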
Effects of Irregular Bridge Columns and Feasibility of Seismic Regularity
Thomas, Abey E.
2018-05-01
Bridges with unequal column heights are one of the main irregularities in bridge design, particularly while negotiating steep valleys, making such bridges vulnerable to seismic action. The desirable behaviour of bridge columns under seismic loading is that they perform in a regular fashion, i.e. the capacity of each column is utilized evenly. But this type of behaviour is often missing when the column heights are unequal along the length of the bridge, leaving short columns to bear the maximum lateral load. In the present study, the effects of unequal column height on the global seismic performance of bridges are studied using pushover analysis. Codes such as CalTrans (Engineering service center, earthquake engineering branch, 2013) and EC-8 (EN 1998-2: design of structures for earthquake resistance. Part 2: bridges, European Committee for Standardization, Brussels, 2005) suggest seismic regularity criteria for achieving a regular seismic performance level at all the bridge columns. The feasibility of adopting these seismic regularity criteria, along with those mentioned in the literature, is assessed for bridges designed as per the Indian Standards.
Characteristics of ionospheric irregularities causing scintillations at VHF/UHF
International Nuclear Information System (INIS)
Vats, H.O.; Deshpande, M.R.; Rastogi, R.G.
1978-01-01
Some properties of ionization irregularities have been investigated using amplitude scintillation records of radio beacons from ATS-6 (phase II) at Ootacamund, India. For the estimation of the scale size and strength of the irregularities, a simple diffraction model has been used, which explains only weak and moderate equatorial scintillation observations. It was found that the scale sizes of daytime E-region irregularities are smaller than those in the F-region during night time. In addition, irregularities are generated initially at large scale sizes, which later break up into smaller scale sizes.
Energy Technology Data Exchange (ETDEWEB)
Mendez V, J. [Departamento de Radioterapia, Instituto de Enfermedades Neoplasicas, Avenida Angamos Este 2520, Lima 34 (Peru)
1998-12-31
This work presents an advance in electron dosimetry of irregular fields for beams emitted by linear accelerators. At present, diverse methods exist that are being applied in radiotherapy centers. In this work, a method for irregular-field dosimetry is proposed. It allows calculation of the absorbed dose rate required for evaluating the treatment time of cancer patients. Using the results obtained with the dosimetric system, it has been possible to prove the validity of the method described for 12 MeV energy and for a square field of 7.5 x 7.5 cm{sup 2}, with a percentage error of less than 1%. (Author)
Potts glass reflection of the decoding threshold for qudit quantum error correcting codes
Jiang, Yi; Kovalev, Alexey A.; Pryadko, Leonid P.
We map the maximum likelihood decoding threshold for qudit quantum error correcting codes to the multicritical point in generalized Potts gauge glass models, extending the map constructed previously for qubit codes. An n-qudit quantum LDPC code, where a qudit can be involved in up to m stabilizer generators, corresponds to a ℤ_d Potts model with n interaction terms which can couple up to m spins each. We analyze general properties of the phase diagram of the constructed model, give several bounds on the location of the transitions, bounds on the energy density of extended defects (non-local analogs of domain walls), and discuss the correlation functions which can be used to distinguish different phases in the original and the dual models. This research was supported in part by the Grants: NSF PHY-1415600 (AAK), NSF PHY-1416578 (LPP), and ARO W911NF-14-1-0272 (LPP).
Irregular radiation response of a chondrosarcoma
International Nuclear Information System (INIS)
Marsden, J.J.; Kember, N.F.; Shaw, J.E.H.
1980-01-01
The DC II mouse chondrosarcoma was shown to be a potentially valuable radiobiological tumour system since it recovered from radiation injury by regrowth from clones that could be counted in histological sections. Unfortunately, the normal growth of this tumour following s.c. implantation in the thigh was irregular both in the time before growth became evident and in the rate of growth. The response to radiation was also unreliable since tumours irradiated with the same dose (e.g. 30 Gy) showed a range of responses from shrinkage to no detectable change in growth rate. The delay in normal growth can be attributed largely to delays in vascularization while changes in growth rate may be explained by differences in tumour architecture. Radiation response may depend on variations in hypoxic fraction and in relative cellularity. Tumours having the same external dimensions may differ by a factor of 80 in the numbers of tumour cells they contain. This chondrosarcoma may prove a closer model to some human tumours than many transplantable tumours that display regular growth patterns. (author)
Regularities and irregularities in order flow data
Theissen, Martin; Krause, Sebastian M.; Guhr, Thomas
2017-11-01
We identify and analyze statistical regularities and irregularities in the recent order flow of different NASDAQ stocks, focusing on the positions where orders are placed in the order book. This includes limit orders being placed outside of the spread, inside the spread and (effective) market orders. Based on the pairwise comparison of the order flow of different stocks, we perform a clustering of stocks into groups with similar behavior. This is useful to assess systemic aspects of stock price dynamics. We find that limit order placement inside the spread is strongly determined by the dynamics of the spread size. Most orders, however, arrive outside of the spread. While for some stocks order placement on or next to the quotes is dominating, deeper price levels are more important for other stocks. As market orders are usually adjusted to the quote volume, the impact of market orders depends on the order book structure, which we find to be quite diverse among the analyzed stocks as a result of the way limit order placement takes place.
Evaporation From Soil Containers With Irregular Shapes
Assouline, Shmuel; Narkis, Kfir
2017-11-01
Evaporation from bare soils under laboratory conditions is generally studied using containers of regular shapes where the vertical edges are parallel to the flow lines in the drying domain. The main objective of this study was to investigate the impact of irregular container shapes, for which the flow lines either converge or diverge toward the surface. Evaporation from initially saturated sand and sandy loam soils packed in cones and inverted cones was compared to evaporation from corresponding cylindrical columns. The initial evaporation rate was higher in the cones, and close to potential evaporation. At the end of the experiment, the cumulative evaporation depth in the sand cone was equal to that in the column but higher than in the inverted cone, while in the sandy loam, the order was cone > column > inverted cone. By comparison to the column, stage 1 evaporation was longer in the cones, and practically similar in the inverted cones. Stage 2 evaporation rate decreased with the increase of the evaporating surface area. These results were more pronounced in the sandy loam. For the sand column, the transition between stage 1 and stage 2 evaporation occurred when the depth of the saturation front was approximately equal to the characteristic length of the soil. However, for the cone and the inverted cone, it occurred for a shallower depth of the saturation front. It seems therefore that the concept of the characteristic length derived from the soil hydraulic properties is related to drying systems of regular shapes.
Multiresolution Analysis Adapted to Irregularly Spaced Data
Directory of Open Access Journals (Sweden)
Anissa Mokraoui
2009-01-01
Full Text Available This paper investigates the mathematical background of multiresolution analysis in the specific context where the signal is represented by irregularly sampled data at known locations. The study is related to the construction of nested piecewise polynomial multiresolution spaces represented by their corresponding orthonormal bases. Using simple spline basis orthonormalization procedures involves the construction of a large family of orthonormal spline scaling bases defined on consecutive bounded intervals. However, if no more additional conditions than those coming from multiresolution are imposed on each bounded interval, the orthonormal basis is represented by a set of discontinuous scaling functions. The spline wavelet basis also has the same problem. Moreover, the dimension of the corresponding wavelet basis increases with the spline degree. An appropriate orthonormalization procedure of the basic spline space basis, whatever the degree of the spline, allows us to (i) provide continuous scaling and wavelet functions, (ii) reduce the number of wavelets to only one, and (iii) reduce the complexity of the filter bank. Examples of the multiresolution implementations illustrate that the main important features of the traditional multiresolution are also satisfied.
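The interval-by-interval orthonormalization of a spline scaling basis on irregular samples can be sketched for the degree-1 (hat function) case. The knots, sample locations, and the discrete inner product below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

# Irregularly spaced sample locations and spline knots (illustrative).
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1.0, 200))              # irregular samples
knots = np.array([0.0, 0.15, 0.4, 0.55, 0.9, 1.0])   # consecutive intervals

# Degree-1 spline scaling basis: the j-th hat function interpolates the
# j-th unit vector on the knots.
B = np.column_stack([np.interp(t, knots, np.eye(len(knots))[j])
                     for j in range(len(knots))])

# Orthonormalize w.r.t. the discrete inner product <f, g> = sum_i f(t_i) g(t_i)
# via a Cholesky factor of the Gram matrix: since G = L @ L.T, the columns
# of B @ inv(L).T are orthonormal.
G = B.T @ B
L = np.linalg.cholesky(G)
Q = B @ np.linalg.inv(L).T

print(np.allclose(Q.T @ Q, np.eye(len(knots)), atol=1e-8))
```

The resulting functions are continuous because each is a fixed linear combination of continuous hats, which mirrors feature (i) claimed above.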
Resolution optimization with irregularly sampled Fourier data
International Nuclear Information System (INIS)
Ferrara, Matthew; Parker, Jason T; Cheney, Margaret
2013-01-01
Image acquisition systems such as synthetic aperture radar (SAR) and magnetic resonance imaging often measure irregularly spaced Fourier samples of the desired image. In this paper we show the relationship between sample locations, their associated backprojection weights, and image resolution as characterized by the resulting point spread function (PSF). Two new methods for computing data weights, based on different optimization criteria, are proposed. The first method, which solves a maximal-eigenvector problem, optimizes a PSF-derived resolution metric which is shown to be equivalent to the volume of the Cramer–Rao (positional) error ellipsoid in the uniform-weight case. The second approach utilizes as its performance metric the Frobenius error between the PSF operator and the ideal delta function, and is an extension of a previously reported algorithm. Our proposed extension appropriately regularizes the weight estimates in the presence of noisy data and eliminates the superfluous issue of image discretization in the choice of data weights. The Frobenius-error approach results in a Tikhonov-regularized inverse problem whose Tikhonov weights are dependent on the locations of the Fourier data as well as the noise variance. The two new methods are compared against several state-of-the-art weighting strategies for synthetic multistatic point-scatterer data, as well as an ‘interrupted SAR’ dataset representative of in-band interference commonly encountered in very high frequency radar applications. (paper)
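The Frobenius-error weight design reduces to a Tikhonov-regularized least-squares problem; a small 1-D sketch (sample locations, grid, and regularization weight are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)
freqs = np.sort(rng.uniform(-8.0, 8.0, 40))   # irregular Fourier sample locations
x = np.linspace(-0.5, 0.5, 101)               # image-domain evaluation grid

# PSF of weighted backprojection: psf(x) = sum_j w_j * exp(2*pi*i*f_j*x).
A = np.exp(2j * np.pi * np.outer(x, freqs))
delta = np.zeros(len(x))
delta[len(x) // 2] = len(freqs)               # ideal (scaled) delta-function PSF

# Tikhonov-regularized Frobenius-error weight design:
#   minimize ||A w - delta||^2 + lam * ||w||^2,
# solved through the regularized normal equations (here lam is a fixed
# stand-in for the noise-dependent Tikhonov weights).
lam = 1e-2
w = np.linalg.solve(A.conj().T @ A + lam * np.eye(len(freqs)),
                    A.conj().T @ delta)

# The optimized weights should concentrate the PSF better than uniform ones.
uniform = np.full(len(freqs), 1.0 + 0j)
psf_error = lambda v: np.linalg.norm(A @ v - delta)
print(psf_error(w) < psf_error(uniform))
```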
Irregular Homogeneity Domains in Ternary Intermetallic Systems
Directory of Open Access Journals (Sweden)
Jean-Marc Joubert
2015-12-01
Full Text Available Ternary intermetallic A–B–C systems sometimes have unexpected behaviors. The present paper examines situations in which there is a tendency to simultaneously form the compounds ABx, ACx and BCx with the same crystal structure. This causes irregular shapes of the phase homogeneity domains and, from a structural point of view, a complete reversal of site occupancies for the B atom when crossing the homogeneity domain. This work reviews previous studies done in the systems Fe–Nb–Zr, Hf–Mo–Re, Hf–Re–W, Mo–Re–Zr, Re–W–Zr, Cr–Mn–Si, Cr–Mo–Re, and Mo–Ni–Re, and involving the topologically close-packed Laves, χ and σ phases. These systems have been studied using ternary isothermal section determination, DFT calculations, site occupancy measurement using joint X-ray, and neutron diffraction Rietveld refinement. Conclusions are drawn concerning this phenomenon. The paper also reports new experimental or calculated data on Co–Cr–Re and Fe–Nb–Zr systems.
Fehenberger, Tobias
2018-02-01
This paper studies probabilistic shaping in a multi-span wavelength-division multiplexing optical fiber system with 64-ary quadrature amplitude modulation (QAM) input. In split-step fiber simulations and via an enhanced Gaussian noise model, three figures of merit are investigated, which are signal-to-noise ratio (SNR), achievable information rate (AIR) for capacity-achieving forward error correction (FEC) with bit-metric decoding, and the information rate achieved with low-density parity-check (LDPC) FEC. For the considered system parameters and different shaped input distributions, shaping is found to decrease the SNR by 0.3 dB yet simultaneously increases the AIR by up to 0.4 bit per 4D-symbol. The information rates of LDPC-coded modulation with shaped 64QAM input are improved by up to 0.74 bit per 4D-symbol, which is larger than the shaping gain when considering AIRs. This increase is attributed to the reduced coding gap of the higher-rate code that is used for decoding the nonuniform QAM input.
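Shaped input distributions in such studies are typically drawn from the Maxwell-Boltzmann family; the sketch below illustrates the basic rate-versus-energy trade-off on the 64QAM amplitudes (an assumption for illustration, not the paper's exact distributions):

```python
import math

def mb_distribution(amplitudes, lam):
    """Maxwell-Boltzmann distribution p(a) proportional to exp(-lam * a^2),
    the standard family used for probabilistic amplitude shaping."""
    w = [math.exp(-lam * a * a) for a in amplitudes]
    Z = sum(w)
    return [x / Z for x in w]

def entropy(p):
    """Entropy in bits of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# 1-D PAM amplitudes of 64QAM (per real dimension): {-7, ..., +7}.
amps = [-7, -5, -3, -1, 1, 3, 5, 7]
p_uniform = mb_distribution(amps, 0.0)    # lam = 0 recovers the uniform input
p_shaped = mb_distribution(amps, 0.05)    # illustrative shaping parameter

# Shaping trades entropy (rate) for reduced average symbol energy,
# which is where the SNR/AIR trade-off discussed above originates.
E = lambda p: sum(pi * a * a for pi, a in zip(p, amps))
print(abs(entropy(p_uniform) - 3.0) < 1e-12, E(p_shaped) < E(p_uniform))
```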
14 CFR 135.65 - Reporting mechanical irregularities.
2010-01-01
... irregularities and their correction. (b) The pilot in command shall enter or have entered in the aircraft maintenance log each mechanical irregularity that comes to the pilot's attention during flight time. Before each flight, the pilot in command shall, if the pilot does not already know, determine the status of...
Uniform irradiation of irregularly shaped cavities for photodynamic therapy
Rem, A. I.; van Gemert, M. J.; van der Meulen, F. W.; Gijsbers, G. H.; Beek, J. F.
1997-01-01
It is difficult to achieve a uniform light distribution in irregularly shaped cavities. We have conducted a study on the use of hollow 'integrating' moulds for more uniform light delivery of photodynamic therapy in irregularly shaped cavities such as the oral cavity. Simple geometries such as a
Software support for irregular and loosely synchronous problems
Choudhary, A.; Fox, G.; Hiranandani, S.; Kennedy, K.; Koelbel, C.; Ranka, S.; Saltz, J.
1992-01-01
A large class of scientific and engineering applications may be classified as irregular and loosely synchronous from the perspective of parallel processing. We present a partial classification of such problems. This classification has motivated us to enhance FORTRAN D to provide language support for irregular, loosely synchronous problems. We present techniques for parallelization of such problems in the context of FORTRAN D.
Regularisation of irregular verbs in child English second language ...
African Journals Online (AJOL)
Data was collected from the language of English medium preschool children. The study concludes that when the Blocking Principle interferes, children resort to a novel interlanguage rule that regularises irregular verbs. This interlanguage rule applies in a similar way to all irregular verbs, thus children produce utterances ...
Irregular conformal block, spectral curve and flow equations
International Nuclear Information System (INIS)
Choi, Sang Kwan; Rim, Chaiho; Zhang, Hong
2016-01-01
Irregular conformal block is motivated by the Argyres-Douglas type of N=2 super conformal gauge theory. We investigate the classical/NS limit of the irregular conformal block using the spectral curve on a Riemann surface with irregular punctures, which is equivalent to the loop equation of the irregular matrix model. The spectral curve is reduced to the second order (Virasoro symmetry, SU(2) for the gauge theory) and third order (W_3 symmetry, SU(3)) differential equations of a polynomial with finite degree. The conformal and W symmetry generate the flow equations in the spectral curve and determine the irregular conformal block, hence the partition function of the Argyres-Douglas theory, à la the AGT conjecture.
Star Formation Histories of Dwarf Irregular Galaxies
Skillman, Evan
1995-07-01
We propose to obtain deep WFPC2 `BVI' color-magnitude diagrams {CMDs} for the dwarf irregular {dI} Local Group galaxies GR 8, Leo A, Pegasus, and Sextans A. In addition to resolved stars, we will use star clusters, and especially any globulars, to probe the history of intense star formation. These data will allow us to map the Pop I and Pop II stellar components, and thereby construct the first detailed star formation histories for non-interacting dI galaxies. Our results will bear on a variety of astrophysical problems, including the evolution of small galaxies, distances in the Local Group, age-metallicity distributions in small galaxies, ages of dIs, and the physics of star formation. The four target galaxies are typical dI systems in terms of luminosity, gas content, and H II region abundance, and represent a range in current star forming activity. They are sufficiently near to allow us to reach stars at M_V = 0, and have 0.1 of the luminosity of the SMC and 0.25 of its oxygen abundance. Unlike the SMC, these dIs are not near giant galaxies. This project will allow the extension of our knowledge of stellar populations in star forming galaxies from the spirals in the Local Group down to its smallest members. We plan to take maximum advantage of the unique data which this project will provide. Our investigator team brings extensive and varied experience in studies of dwarf galaxies, stellar populations, imaging photometry, and stellar evolution to this project.
General Lines of Disregard for the Legal Personality on Irregular Dissolution the Company
Directory of Open Access Journals (Sweden)
Fábio Augusto Barcelos Moreira Corrêa
2016-12-01
Full Text Available This article analyzes the institute of disregard of the legal personality in situations involving the irregular dissolution of a limited liability company, in light of the jurisprudence of the Superior Court of Justice. We highlight the impact that the new Code of Civil Procedure will have on safeguarding the autonomy of the assets of the legal person, as well as on the guarantees of due process and ample defense, directly impacting business law. The analysis aims to contribute to the understanding of the institute and its procedure. The methodology adopted is dialectical and critical.
High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution
Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin
2016-01-01
Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding, with an efficiency of 93.7%.
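The PEG idea (place each new Tanner-graph edge so that it closes no short cycle) can be sketched in a simplified regular-degree form. The greedy rule below (prefer checks not yet reachable from the variable node, break ties by lowest check degree) is a common simplification, not the paper's exact construction:

```python
import random
from collections import deque

def peg_construct(n_vars, n_checks, var_degree, seed=0):
    """Simplified PEG: each new edge from a variable node goes to a
    check node not yet reachable from it (so the edge closes no cycle)
    when one exists, otherwise to the lowest-degree check that is not
    already a neighbour. A regular-variable-degree sketch of the idea."""
    rnd = random.Random(seed)
    var_adj = [set() for _ in range(n_vars)]
    chk_adj = [set() for _ in range(n_checks)]

    def reachable_checks(v):
        # BFS over the bipartite Tanner graph from variable node v.
        seen_v, seen_c = {v}, set()
        frontier = deque([(True, v)])        # (is_variable, node)
        while frontier:
            is_var, node = frontier.popleft()
            if is_var:
                for c in var_adj[node] - seen_c:
                    seen_c.add(c)
                    frontier.append((False, c))
            else:
                for u in chk_adj[node] - seen_v:
                    seen_v.add(u)
                    frontier.append((True, u))
        return seen_c

    for v in range(n_vars):
        for _ in range(var_degree):
            candidates = set(range(n_checks)) - reachable_checks(v)
            if not candidates:               # every check already reachable:
                candidates = set(range(n_checks)) - var_adj[v]
            best = min(candidates, key=lambda c: (len(chk_adj[c]), rnd.random()))
            var_adj[v].add(best)
            chk_adj[best].add(v)
    return var_adj

adj = peg_construct(n_vars=20, n_checks=10, var_degree=3)
print(all(len(a) == 3 for a in adj), sum(len(a) for a in adj))
```

A full PEG implementation would instead pick the check at maximal BFS depth; the unreachable/lowest-degree rule above keeps the sketch short while preserving the cycle-avoidance spirit.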
Directory of Open Access Journals (Sweden)
Fabio Burderi
2007-05-01
Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of a coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover the "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to introducing the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
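For finite codes, unique decipherability itself, the property the coding partition generalizes, is decidable by the classical Sardinas-Patterson procedure; a minimal sketch:

```python
def is_uniquely_decipherable(code):
    """Sardinas-Patterson test: a finite code is UD iff no dangling
    suffix produced by the iteration is itself a codeword."""
    code = set(code)

    def suffixes(A, B):
        # Dangling suffixes: s such that a = b + s or b = a + s, s nonempty.
        out = set()
        for a in A:
            for b in B:
                if a != b:
                    if a.startswith(b):
                        out.add(a[len(b):])
                    if b.startswith(a):
                        out.add(b[len(a):])
        return out

    current, seen = suffixes(code, code), set()
    while current:
        if current & code:          # a dangling suffix is a codeword: ambiguous
            return False
        seen |= current
        current = suffixes(code, current) - seen
    return True

# {0, 01, 11} is UD; {1, 011, 01110, 1110, 10011} is the classical
# ambiguous example (the word 011101110011 has two factorizations).
print(is_uniquely_decipherable({"0", "01", "11"}),
      is_uniquely_decipherable({"1", "011", "01110", "1110", "10011"}))
```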
Traffic dispersion through a series of signals with irregular split
Nagatani, Takashi
2016-01-01
We study the traffic behavior of a group of vehicles moving through a sequence of signals with irregular splits on a roadway. We present a stochastic model of vehicular traffic controlled by signals. The dynamic behavior of vehicular traffic is clarified by analyzing the traffic pattern and travel time numerically. The group of vehicles breaks up more and more because of the irregularity of the signals' splits. The traffic dispersion is induced by the irregular split. We show that the traffic dispersion depends strongly on the cycle time and the strength of the split's irregularity. We also study the traffic behavior through the series of signals under the green-wave strategy. The dependence of the travel time on offset time is derived for various values of cycle time. The region map of the traffic dispersion is shown in (cycle time, offset time)-space.
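A toy version of such a model, one vehicle, fixed signal spacing, and a green window of length (split + noise) x cycle per signal, reproduces the green-wave behavior; all parameter names and the exact update rule are illustrative assumptions, not the paper's formulation:

```python
import random

def travel_time(n_signals, spacing, v, cycle, split, irregularity, offset, seed=0):
    """Travel time of one vehicle through a series of signals.
    Signal i sits at x = (i + 1) * spacing; within its cycle (shifted
    by i * offset) it is green for the first (split + noise) * cycle
    seconds, where the noise term models the irregular split."""
    rnd = random.Random(seed)
    t = 0.0
    for i in range(n_signals):
        t += spacing / v                         # free flow to signal i
        green = (split + rnd.uniform(-irregularity, irregularity)) * cycle
        phase = (t - i * offset) % cycle         # position in signal i's cycle
        if phase > green:                        # arrived during red:
            t += cycle - phase                   # wait for the next green
    return t

# With no irregularity and the offset matched to the free-flow time between
# signals (green-wave strategy), the vehicle never stops; irregular splits
# can only add waiting time.
base = travel_time(10, 100.0, 10.0, 60.0, 0.5, 0.0, offset=10.0)
noisy = travel_time(10, 100.0, 10.0, 60.0, 0.5, 0.3, offset=10.0, seed=1)
print(base == 100.0, noisy >= base)
```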
Irregular Warfare: Impact on Future Professional Military Education
National Research Council Canada - National Science Library
Paschal, David G
2006-01-01
... to operate effectively in an irregular warfare environment. The utility of a decisive war between nation states continues to decline and will eventually reach critical mass based upon the extreme imbalance of military power and a U.S. monopoly...
Irregular Warfare: Special Operations Joint Professional Military Education Transformation
National Research Council Canada - National Science Library
Cannady, Bryan H
2008-01-01
... on today's battlefront in Afghanistan and Iraq and in the Global War on Terrorism (GWOT). At the forefront of the GWOT and irregular warfare are the United States Special Operations Command (USSOCOM...
Drug Intoxicated Irregular Fighters: Complications, Dangers, and Responses
National Research Council Canada - National Science Library
Kan, Paul R
2008-01-01
.... Drug consumption in contemporary wars has coincided with the use of child soldiers, has led to increased unpredictability among irregular fighters, provided the conditions for the breakdown of social...
Justice: A Problem for Military Ethics during Irregular War
National Research Council Canada - National Science Library
Bauer, John W
2008-01-01
... is?" or "Justice according to whom?" The relative nature of the term "justice" creates a problem for military ethics, particularly when soldiers try to determine what actions are morally acceptable when they are engaged in irregular warfare...
Irregular Warfare: New Challenges for Civil-Military Relations
National Research Council Canada - National Science Library
Cronin, Patrick M
2008-01-01
.... Irregular warfare introduces new complications to what Eliot Cohen has called an unequal dialogue between civilian and military leaders in which civilian leaders hold the true power but must modulate...
Role of parametric decay instabilities in generating ionospheric irregularities
International Nuclear Information System (INIS)
Kuo, S.P.; Cheo, B.R.; Lee, M.C.
1983-01-01
We show that purely growing instabilities driven by the saturation spectrum of parametric decay instabilities can produce a broad spectrum of ionospheric irregularities. The threshold field |E_th| of the instabilities decreases with the scale length λ of the ionospheric irregularities as |E_th| ∝ λ^-2 in the small-scale range, and increases again for scale lengths larger than a few kilometers. The excitation of kilometer-scale irregularities is strictly restricted by the instabilities themselves and by the spatial inhomogeneity of the medium. These results are drawn from the analyses of four-wave interaction. Ion-neutral collisions impose no net effect on the instabilities when the excited ionospheric irregularities have a field-aligned nature.
Edge irregular total labellings for graphs of linear size
DEFF Research Database (Denmark)
Brandt, Stephan; Rautenbach, D.; Miškuf, J.
2009-01-01
As an edge variant of the well-known irregularity strength of a graph G = (V, E) we investigate edge irregular total labellings, i.e. functions f : V ∪ E → {1, 2, ..., k} such that f(u) + f(uv) + f(v) ≠ f(u') + f(u'v') + f(v') for every pair of different edges uv, u'v' ∈ E. The smallest possi...
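Verifying that a given total labelling is edge irregular is a direct check of the weight condition; a minimal sketch:

```python
def is_edge_irregular_total(vertices, edges, f, k):
    """Check that f : V ∪ E -> {1, ..., k} is an edge irregular total
    labelling: every label lies in {1, ..., k} and the edge weights
    wt(uv) = f(u) + f(uv) + f(v) are pairwise distinct."""
    if any(not 1 <= f[item] <= k for item in list(vertices) + list(edges)):
        return False
    weights = [f[u] + f[(u, v)] + f[v] for (u, v) in edges]
    return len(weights) == len(set(weights))

# A labelling of the path P3 (vertices a-b-c) with k = 2:
# wt(ab) = 1 + 1 + 1 = 3 and wt(bc) = 1 + 1 + 2 = 4 are distinct.
f = {'a': 1, 'b': 1, 'c': 2, ('a', 'b'): 1, ('b', 'c'): 1}
print(is_edge_irregular_total(['a', 'b', 'c'], [('a', 'b'), ('b', 'c')], f, 2))
```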
Design Optimization of Irregular Cellular Structure for Additive Manufacturing
Song, Guo-Hua; Jing, Shi-Kai; Zhao, Fang-Lei; Wang, Ye-Dong; Xing, Hao; Zhou, Jing-Tao
2017-09-01
Irregular cellular structure has great potential to be considered in the light-weight design field. However, research on optimizing irregular cellular structures has not yet been reported due to the difficulties in their modeling technology. Based on variable density topology optimization theory, an efficient method for optimizing the topology of irregular cellular structures fabricated through additive manufacturing processes is proposed. The proposed method utilizes tangent circles to automatically generate the main outline of the irregular cellular structure. The topological layout of each cell structure is optimized using the relative density information obtained from the proposed modified SIMP method. A mapping relationship between cell structure and relative density element is built to determine the diameter of each cell structure. The results show that the irregular cellular structure can be optimized with the proposed method. The results of the simulation and experimental tests are similar for the irregular cellular structure, and indicate that the maximum deformation value obtained using the modified Solid Isotropic Microstructures with Penalization (SIMP) approach is 5.4×10^-5 mm lower than that using the standard SIMP approach under the same external load. The proposed research provides instruction for designing other irregular cellular structures.
Ionospheric Irregularities at Mars Probed by MARSIS Topside Sounding
Harada, Y.; Gurnett, D. A.; Kopf, A. J.; Halekas, J. S.; Ruhunusiri, S.
2018-01-01
The upper ionosphere of Mars contains a variety of perturbations driven by solar wind forcing from above and upward propagating atmospheric waves from below. Here we explore the global distribution and variability of ionospheric irregularities around the exobase at Mars by analyzing topside sounding data from the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS) instrument on board Mars Express. As irregular structure gives rise to off-vertical echoes with excess propagation time, the diffuseness of ionospheric echo traces can be used as a diagnostic tool for perturbed reflection surfaces. The observed properties of diffuse echoes above unmagnetized regions suggest that ionospheric irregularities with horizontal wavelengths of tens to hundreds of kilometers are particularly enhanced in the winter hemisphere and at high solar zenith angles. Given the known inverse dependence of neutral gravity wave amplitudes on the background atmospheric temperature, the ionospheric irregularities probed by MARSIS are most likely associated with plasma perturbations driven by atmospheric gravity waves. Though extreme events with unusually diffuse echoes are more frequently observed for high solar wind dynamic pressures during some time intervals, the vast majority of the diffuse echo events are unaffected by varying solar wind conditions, implying limited influence of solar wind forcing on the generation of ionospheric irregularities. Combination of remote and in situ measurements of ionospheric irregularities would offer the opportunity for a better understanding of the ionospheric dynamics at Mars.
Combined Source-Channel Coding of Images under Power and Bandwidth Constraints
Directory of Open Access Journals (Sweden)
Marc Fossorier
2007-01-01
Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M=2k. Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM. An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.
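As a side illustration of the constant-envelope signaling used above, the sketch below generates a unit-energy M-PSK constellation with M = 2^k points; it is a generic textbook construction, not the paper's transmission chain.

```python
import numpy as np

# Generic sketch (not the paper's code): a constant-envelope M-PSK
# constellation with M = 2**k unit-energy points on the unit circle.

def mpsk_constellation(M):
    m = np.arange(M)
    return np.exp(2j * np.pi * m / M)

qpsk = mpsk_constellation(4)            # M = 2**2, the QPSK case
print(np.allclose(np.abs(qpsk), 1.0))   # constant envelope
print(round(2 * np.sin(np.pi / 4), 3))  # minimum distance 2*sin(pi/M)
```

The shrinking minimum distance 2·sin(π/M) as M grows is what makes larger constellations more sensitive to noise, consistent with the constellation-size-dependent gains reported above.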
Development of a calculation methodology for potential flow over irregular topographies
International Nuclear Information System (INIS)
Del Carmen, Alejandra F.; Ferreri, Juan C.; Boutet, Luis I.
2003-01-01
Full text: Computer codes for the calculation of potential flow fields over surfaces with irregular topographies have been developed. The flows past multiple simple obstacles and past the region neighboring the Embalse Nuclear Power Station have been considered. The codes developed allow the calculation of velocities very near the surface, which in turn required the development of high-accuracy techniques. The Boundary Element Method, using a linear approximation on triangular plane elements and an analytical integration methodology, has been applied. A particular and quite efficient technique for the calculation of the solid angle at each vertex node was also considered. The results so obtained will be applied to predict the dispersion of passive pollutants coming from discontinuous emissions. (authors)
Susuki, I.
1981-11-01
The results of an analysis of the irregularity factors of stationary Gaussian random processes generated by filtering the output of a pure or a band-limited white noise are presented. An ideal band-pass filter, a trapezoidal filter, and a Butterworth-type band-pass filter were examined. It was found that the values of the irregularity factors were approximately equal among these filters provided the end-slopes had the same rates. As the bandwidth of the filters increases, the irregularity factors increase monotonically and approach constant values that depend on the end-slopes. This implies that the noise characteristics relevant to fatigue damage, such as the statistics of the height of the rise and fall or the distribution of the peak values, do not change for a broad-band random time history. It was also found that the effect of band-limiting the input white noise on the irregularity factors is negligibly small.
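The irregularity factor itself has a standard spectral-moment definition, which the following sketch evaluates for ideal band-pass filtered white noise; the band edges are arbitrary example values, not those of the study.

```python
import numpy as np

# Sketch (standard definition, not the paper's code): the irregularity
# factor of a stationary Gaussian process from the spectral moments
# m_n = integral of w**n * S(w) dw of its one-sided PSD S(w).
# For an ideal band-pass filter the PSD is flat between w1 and w2.

def spectral_moment(w, S, n):
    y = w**n * S
    return float(np.sum((y[1:] + y[:-1]) * np.diff(w)) / 2.0)  # trapezoid rule

def irregularity_factor(w, S):
    m0 = spectral_moment(w, S, 0)
    m2 = spectral_moment(w, S, 2)
    m4 = spectral_moment(w, S, 4)
    return m2 / np.sqrt(m0 * m4)   # -> 1 for a narrow band, smaller when broad

# Example: flat PSD between w1 = 1 and w2 = 2 (arbitrary units):
w = np.linspace(1.0, 2.0, 2001)
S = np.ones_like(w)
print(round(irregularity_factor(w, S), 4))
```

Narrowing the band toward a single frequency drives the factor toward 1, matching the monotonic bandwidth dependence described above.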
Stability analysis by ERATO code
International Nuclear Information System (INIS)
Tsunematsu, Toshihide; Takeda, Tatsuoki; Matsuura, Toshihiko; Azumi, Masafumi; Kurita, Gen-ichi
1979-12-01
Problems in MHD stability calculations with the ERATO code are described, concerning the convergence of results, equilibrium codes, and machine optimization of the ERATO code. It is concluded that irregularity in a convergence curve is not due to a fault of the ERATO code itself but to an inappropriate choice of the equilibrium calculation meshes. Also described are a code to calculate an equilibrium as a quasi-inverse problem and a code to calculate an equilibrium as the result of a transport process. Optimization of the code with respect to I/O operations reduced both CPU time and I/O time considerably. With the FACOM230-75 APU/CPU multiprocessor system, the performance is about 6 times as high as with the FACOM230-75 CPU alone, showing the effectiveness of a vector-processing computer for this kind of MHD computation. This report is a summary of the material presented at the ERATO workshop 1979 (ORNL), supplemented with some details. (author)
Orbital and Collisional Evolution of the Irregular Satellites
Nesvorný, David; Alvarellos, Jose L. A.; Dones, Luke; Levison, Harold F.
2003-07-01
The irregular moons of the Jovian planets are a puzzling part of the solar system inventory. Unlike regular satellites, the irregular moons revolve around planets at large distances in tilted and eccentric orbits. Their origin, which is intimately linked with the origin of the planets themselves, is yet to be explained. Here we report a study of the orbital and collisional evolution of the irregular satellites from times after their formation to the present epoch. The purpose of this study is to find out the features of the observed irregular moons that can be attributed to this evolution and separate them from signatures of the formation process. We numerically integrated ~60,000 test satellite orbits to map orbital locations that are stable on long time intervals. We found that the orbits highly inclined to the ecliptic are unstable due to the effect of the Kozai resonance, which radially stretches them so that satellites either escape from the Hill sphere, collide with massive inner moons, or impact the parent planet. We also found that prograde satellite orbits with large semimajor axes are unstable due to the effect of the evection resonance, which locks the orbit's apocenter to the apparent motion of the Sun around the parent planet. In such a resonance, the effect of solar tides on a resonant moon accumulates at each apocenter passage of the moon, which causes a radially outward drift of its orbital apocenter; once close to the Hill sphere, the moon escapes. By contrast, retrograde moons with large orbital semimajor axes are long-lived. We have developed an analytic model of the distant satellite orbits and used it to explain the results of our numerical experiments. In particular, we analytically studied the effect of the Kozai resonance. We numerically integrated the orbits of the 50 irregular moons (known by 2002 August 16) for 10⁸ yr. All orbits were stable on this time interval and did not show any macroscopic variations that would indicate
Xiong, Chenrong; Yan, Zhiyuan
2014-10-01
Non-binary low-density parity-check (LDPC) codes have some advantages over their binary counterparts, but unfortunately their decoding complexity is a significant challenge. The iterative hard- and soft-reliability based majority-logic decoding algorithms are attractive for non-binary LDPC codes, since they involve only finite field additions and multiplications as well as integer operations and hence have significantly lower complexity than other algorithms. In this paper, we propose two improvements to the majority-logic decoding algorithms. Instead of the accumulation of reliability information in the existing majority-logic decoding algorithms, our first improvement is a new reliability information update. The new update not only results in better error performance and fewer iterations on average, but also further reduces computational complexity. Since existing majority-logic decoding algorithms tend to have a high error floor for codes whose parity check matrices have low column weights, our second improvement is a re-selection scheme, which leads to much lower error floors, at the expense of more finite field operations and integer operations, by identifying periodic points, re-selecting intermediate hard decisions, and changing reliability information.
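For a flavor of hard-decision majority-logic style decoding, the sketch below implements a simple syndrome-based majority-vote bit-flipping decoder for a binary code; it is a simplified binary analogue of the non-binary algorithms discussed above, with a toy (7,4) Hamming parity-check matrix standing in for a real LDPC matrix.

```python
import numpy as np

# Hedged sketch: hard-decision majority-vote bit flipping for a binary
# code, a much-simplified analogue of the non-binary majority-logic
# decoders discussed above.  H and the channel output are toy examples.

def bit_flip_decode(H, y, max_iter=10):
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x                  # all parity checks satisfied
        votes = H.T @ syndrome        # failed checks touching each bit
        flip = votes == votes.max()   # flip the most-suspect bits
        x = (x + flip) % 2
    return x

# (7,4) Hamming parity-check matrix as a toy stand-in for an LDPC matrix:
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.zeros(7, dtype=int)
received[2] = 1                       # all-zero codeword with one bit error
print(bit_flip_decode(H, received))   # decodes back to the all-zero codeword
```

The finite-field-only arithmetic (here just mod-2 additions and integer counts) is what gives this decoder family its low complexity relative to soft message-passing.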
Energy Technology Data Exchange (ETDEWEB)
Vici, Carlos Henrique Georges
2004-07-01
In several nuclear applications, such as soil physics and geology, knowledge of the gamma-ray linear attenuation coefficient of irregular samples is necessary. This work presents the validation of a methodology for determining the linear attenuation coefficient (μ) of irregularly shaped samples that does not require knowing the thickness of the sample. With this methodology, irregular soil samples (undeformed field samples) from the Londrina region, north of Paraná, were studied. The two-media method was employed for the μ determination. It consists of determining μ through measurement of the attenuation of a gamma-ray beam by the sample immersed sequentially in two different media with known and appropriately chosen attenuation coefficients. For comparison, the theoretical value of μ was calculated as the product of the mass attenuation coefficient, obtained with the WinXcom code, and the measured sample density. This software uses the chemical composition of the samples and supplies a table of mass attenuation coefficients versus photon energy. To verify the validity of the two-media method against the simple gamma-ray transmission method, regular pumice stone samples were used. With these results for the attenuation coefficients and their respective deviations, it was possible to compare the two methods. We conclude that the two-media method is a good tool for determining the linear attenuation coefficient of irregular materials, particularly in the study of soil samples. (author)
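The algebra behind the two-media method can be sketched as follows; the geometry (a sample of unknown thickness x inside a container of known width L) and all numerical values are illustrative assumptions, not the paper's measurements.

```python
# Hedged sketch of the two-media method.  The sample (unknown thickness
# x, unknown coefficient mu_s) sits in a container of known width L,
# filled in turn with two media of known coefficients mu1 and mu2.
# From the two measured attenuations A_i = ln(I0_i / I_i), both x and
# mu_s follow, so the sample thickness never has to be measured.

def two_media_mu(A1, A2, mu1, mu2, L):
    Lx = (A1 - A2) / (mu1 - mu2)   # L - x: path length through the medium
    x = L - Lx                     # recovered sample thickness
    mu_s = (A1 - mu1 * Lx) / x
    return mu_s, x

# Forward-simulate a synthetic measurement, then invert it:
mu_s_true, x_true, mu1, mu2, L = 0.20, 3.0, 0.05, 0.12, 10.0
A1 = mu1 * (L - x_true) + mu_s_true * x_true
A2 = mu2 * (L - x_true) + mu_s_true * x_true
mu_s, x = two_media_mu(A1, A2, mu1, mu2, L)
print(round(mu_s, 6), round(x, 6))  # recovers the true values 0.2 and 3.0
```

The key step is the subtraction A1 − A2, which eliminates the μ_s·x term and isolates the medium path length.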
Geostatistical regularization operators for geophysical inverse problems on irregular meshes
Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. O. A.
2018-05-01
Irregular meshes allow complicated subsurface structures to be included in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial, and different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints, which are defined using only the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around each cell. A correlation model defines the extent of the neighbourhood and allows information about geological structures to be incorporated. We propose an approach to calculating geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against anisotropic smoothing in inversions of 2-D synthetic surface electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results than the anisotropic smoothness constraints.
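A minimal sketch of the covariance-based construction, under assumed details (an exponential correlation model and random cell centres standing in for an irregular mesh):

```python
import numpy as np

# Illustrative sketch (assumed details, not the paper's code): build a
# geostatistical regularization operator for an irregular mesh from an
# exponential correlation model.  The eigendecomposition gives an
# operator W acting as C^{-1/2}, so W.T @ W approximates C^{-1} and
# penalizes models deviating from the assumed spatial correlation.

rng = np.random.default_rng(0)
cells = rng.uniform(0, 100, size=(50, 2))     # irregular cell centres

def covariance(cells, corr_len=20.0, sill=1.0):
    d = np.linalg.norm(cells[:, None, :] - cells[None, :, :], axis=-1)
    return sill * np.exp(-d / corr_len)        # exponential correlation model

C = covariance(cells)
w, V = np.linalg.eigh(C)                       # C is symmetric positive definite
W = V @ np.diag(1.0 / np.sqrt(np.clip(w, 1e-10, None))) @ V.T
print(np.allclose(W.T @ W @ C, np.eye(len(C)), atol=1e-6))
```

Anisotropy or layering could be encoded by making the correlation length direction-dependent inside `covariance`, which is the lever the abstract refers to for incorporating geological structure.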
Total edge irregularity strength of (n,t)-kite graph
Winarsih, Tri; Indriati, Diari
2018-04-01
Let G(V, E) be a simple, connected, undirected graph with vertex set V and edge set E. A total k-labeling is a map that carries the vertices and edges of a graph G into a set of positive integer labels {1, 2, …, k}. An edge irregular total k-labeling λ: V(G) ∪ E(G) → {1, 2, …, k} of a graph G is a labeling of the vertices and edges of G such that for any two different edges e and f, the weights wt(e) and wt(f) are distinct. The weight wt(e) of an edge e = xy is the sum of the labels of the vertices x and y and the label of the edge e. The total edge irregularity strength of G, tes(G), is defined as the minimum k for which a graph G has an edge irregular total k-labeling. An (n, t)-kite graph consists of a cycle of length n with a t-edge path (the tail) attached to one vertex of the cycle. In this paper, we investigate the total edge irregularity strength of the (n, t)-kite graph, with n > 3 and t > 1. We obtain tes((n, t)-kite) = ⌈(n+t+2)/3⌉.
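The value above coincides with the well-known general lower bound tes(G) ≥ ⌈(|E|+2)/3⌉, since the (n, t)-kite graph has n + t edges (n cycle edges plus t tail edges); a one-line check:

```python
import math

# Sketch: the (n,t)-kite graph has n + t edges, so the general lower
# bound tes(G) >= ceil((|E| + 2) / 3) already equals the value proved
# in the paper.

def tes_kite(n, t):
    assert n > 3 and t > 1
    edges = n + t
    return math.ceil((edges + 2) / 3)

print(tes_kite(5, 2))  # -> 3
```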
Measurement of Dynamic Friction Coefficient on the Irregular Free Surface
International Nuclear Information System (INIS)
Yeom, S. H.; Seo, K. S.; Lee, J. H.; Lee, K. H.
2007-01-01
A spent fuel storage cask must be evaluated for structural integrity under earthquake conditions because it stands freely on the ground surface without restraint. Usually the integrity assessment for a seismic load is performed by FEM analysis, and the friction coefficient of the standing surface is an important parameter in seismic analysis when sliding occurs. When a storage cask is placed on an irregular ground surface, measuring the friction coefficient of the irregular surface is very difficult because the friction coefficient is affected by the surface condition. In this research, dynamic friction coefficients on irregular surfaces between a concrete cylinder block and a flat concrete slab are measured with two methods using a one-direction actuator
NEOWISE: OBSERVATIONS OF THE IRREGULAR SATELLITES OF JUPITER AND SATURN
Energy Technology Data Exchange (ETDEWEB)
Grav, T. [Planetary Science Institute, Tucson, AZ 85719 (United States); Bauer, J. M.; Mainzer, A. K.; Masiero, J. R.; Sonnett, S.; Kramer, E. [Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109 (United States); Nugent, C. R.; Cutri, R. M., E-mail: tgrav@psi.edu [Infrared Processing and Analysis Center, California Institute of Technology, Pasadena, CA 91125 (United States)
2015-08-10
We present thermal model fits for 11 Jovian and 3 Saturnian irregular satellites based on measurements from the WISE/NEOWISE data set. Our fits confirm spacecraft-measured diameters for the objects with in situ observations (Himalia and Phoebe) and provide diameters and albedos for 12 previously unmeasured objects, 10 Jovian and 2 Saturnian irregular satellites. The best-fit thermal model beaming parameters are comparable to what is observed for other small bodies in the outer solar system, while the visible, W1, and W2 albedos trace the taxonomic classifications previously established in the literature. Reflectance properties of the irregular satellites measured are similar to those of the Jovian Trojan and Hilda populations, implying common origins.
Bottomside sinusoidal irregularities in the equatorial F region
Valladares, C. E.; Hanson, W. B.; Mcclure, J. P.; Cragin, B. L.
1983-01-01
Using the Ogo 6 satellite, McClure and Hanson (1973) discovered sinusoidal irregularities in the equatorial F region ion number density. In the present investigation, a description is provided of the properties of a distinct category of sinusoidal irregularities found in equatorial data from the AE-C and AE-E satellites. The observed scale sizes vary from about 300 m to 3 km in the direction perpendicular to B, overlapping with and extending the range observed with Ogo 6. Attention is given to low- and high-resolution data, a comparison with Huancayo ionograms, the confinement of 'bottomside sinusoidal' (BSS) irregularities essentially to the bottomside of the F layer, spectral characteristics, and BSS, scintillation, and ionosonde observations.
Irregular flowering patterns in terrestrial orchids: theories vs. empirical data
Directory of Open Access Journals (Sweden)
P. Kindlmann
2001-11-01
Full Text Available Empirical data on many species of terrestrial orchids suggest that their between-year flowering pattern is extremely irregular and unpredictable. A long search for the reason has hitherto proved inconclusive. Here we summarise and critically review the hypotheses that were put forward as explanations of this phenomenon: irregular flowering was attributed to costs associated with sexual reproduction, to herbivory, or to the chaotic behaviour of the system represented by difference equations describing growth of the vegetative and reproductive organs. None of these seems to explain fully the events of a transition from flowering one year to sterility or absence the next year. Data on the seasonal growth of leaves and inflorescence of two terrestrial orchid species, Epipactis albensis and Dactylorhiza fuchsii and our previous results are then used here to fill gaps in what has been published until now and to test alternative explanations of the irregular flowering patterns of orchids.
Track Irregularity Time Series Analysis and Trend Forecasting
Directory of Open Access Journals (Sweden)
Jia Chaolong
2012-01-01
Full Text Available The combination of linear and nonlinear methods is widely used in the prediction of time series data. This paper analyzes track irregularity time series data using grey incidence degree models and data transformation methods, trying to find the connotative relationship within the time series data. GM(1,1), which is based on a first-order, single-variable linear differential equation, is used, after an adaptive improvement and error correction, to predict the long-term changing trend of track irregularity at a fixed measuring point; the stochastic linear AR model, Kalman filtering model, and artificial neural network model are applied to predict the short-term changing trend of track irregularity over a unit section. Both the long-term and short-term predictions prove that the model is effective and can achieve the expected accuracy.
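A basic GM(1,1) model (without the paper's adaptive improvement and error correction) can be sketched as follows; the input series is synthetic, not track measurement data.

```python
import numpy as np

# Hedged sketch of the textbook GM(1,1) grey model: accumulate the
# series, fit dx1/dt + a*x1 = b by least squares against the background
# values z1, then forecast by differencing the fitted exponential.

def gm11_forecast(x0, steps=1):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # accumulated (1-AGO) series
    z1 = 0.5 * (x1[1:] + x1[:-1])               # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat)                    # back to the original series
    return x0_hat[len(x0) - 1:]                 # the `steps` forecasts

series = [2.87, 3.28, 3.34, 3.61, 3.79, 3.97]  # synthetic measurements
print(gm11_forecast(series, steps=2))
```

GM(1,1) suits short, quasi-exponential trends like the long-term drift at a fixed measuring point; the short-term AR/Kalman/neural models mentioned above address what this exponential form cannot capture.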
Exploring Manycore Multinode Systems for Irregular Applications with FPGA Prototyping
Energy Technology Data Exchange (ETDEWEB)
Ceriani, Marco; Palermo, Gianluca; Secchi, Simone; Tumeo, Antonino; Villa, Oreste
2013-04-29
We present a prototype of a multi-core architecture implemented on FPGA, designed to enable efficient execution of irregular applications on distributed shared-memory machines while maintaining high performance on regular workloads. The architecture is composed of off-the-shelf soft cores, local interconnection and a memory interface, integrated with custom components that optimize it for irregular applications. It relies on three key elements: a global address space, multithreading, and fine-grained synchronization. Global addresses are scrambled to reduce the formation of network hot-spots, while the latency of transactions is covered by integrating a hardware scheduler within the custom load/store buffers to exploit the availability of multiple execution threads, increasing efficiency in a way that is transparent to the application. We evaluated a dual-node system on irregular kernels, showing scalability in the number of cores and threads.
DEFF Research Database (Denmark)
Cox, Geoff
Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...
Criticality predicts maximum irregularity in recurrent networks of excitatory nodes.
Directory of Open Access Journals (Sweden)
Yahya Karimipanah
Full Text Available A rigorous understanding of brain dynamics and function requires a conceptual bridge between multiple levels of organization, including neural spiking and network-level population activity. Mounting evidence suggests that neural networks of the cerebral cortex operate at a critical regime, defined as a transition point between two phases of short-lasting and chaotic activity. However, despite the fact that criticality brings about certain functional advantages for information processing, its supporting evidence is still far from conclusive, as it has been based mostly on power-law scaling of the sizes and durations of cascades of activity. Moreover, to what degree such a hypothesis could explain some fundamental features of neural activity is still largely unknown. One of the most prevalent features of cortical activity in vivo is the irregularity of spike trains, measured in terms of a coefficient of variation (CV) larger than one. Here, using a minimal computational model of excitatory nodes, we show that irregular spiking (CV > 1) naturally emerges in a recurrent network operating at criticality. More importantly, we show that even in the presence of other sources of spike irregularity, being at criticality maximizes the mean coefficient of variation of neurons, thereby maximizing their spike irregularity. Furthermore, we show that such maximized irregularity results in maximum correlation between neuronal firing rates and their corresponding spike irregularity (measured in terms of CV). On the one hand, using a model in the universality class of directed percolation, we propose new hallmarks of criticality at the single-unit level, which could be applicable to any network of excitable nodes. On the other hand, given the controversy over the neural criticality hypothesis, we discuss the limitations of this approach to neural systems and to what degree these results support the criticality hypothesis in real neural networks. Finally
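The CV measure referred to above has a standard definition over inter-spike intervals; the sketch below computes it for a synthetic Poisson train (CV ≈ 1) and a perfectly regular train (CV = 0), neither taken from the paper's model.

```python
import numpy as np

# Sketch (standard definition): the coefficient of variation (CV) of a
# spike train's inter-spike intervals (ISIs).  CV = 1 for a Poisson
# process; CV > 1 marks the irregular spiking discussed above.

def cv_of_spike_times(spike_times):
    isi = np.diff(np.sort(spike_times))
    return isi.std() / isi.mean()

rng = np.random.default_rng(1)
poisson_train = np.cumsum(rng.exponential(1.0, size=20000))  # Poisson spikes
regular_train = np.arange(0.0, 100.0, 1.0)                   # clock-like spikes
print(round(cv_of_spike_times(poisson_train), 2))  # ~1.0
print(cv_of_spike_times(regular_train))            # 0.0, perfectly regular
```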
Propagation and scattering of electromagnetic waves by the ionospheric irregularities
International Nuclear Information System (INIS)
Ho, A.Y.; Kuo, S.P.; Lee, M.C.
1993-01-01
The problem of wave propagation and scattering in the ionosphere is particularly important in the areas of communications, remote sensing and detection. The ionosphere is often perturbed with coherently structured (quasiperiodic) density irregularities. Experimental observations suggest that these irregularities could give rise to significant ionospheric effects on wave propagation, such as causing spread-F of probing HF sounding signals and scintillation of beacon satellite signals. The latter showed a scintillation index S4 ∼ 0.5, which may be as high as 0.8. In this work a quasi-particle theory is developed to study the scintillation phenomenon. A Wigner distribution function for the wave intensity in (k,r) space is introduced and its governing equation is derived, with an effective collision term giving rise to the attenuation and scattering of the wave. This kinetic equation leads to a hierarchy of moment equations in r space. This system of equations is then truncated at the second moment, which is equivalent to assuming a cold quasi-particle distribution. In this analysis, the irregularities are modeled as a two-dimensional density modulation on a uniform background plasma. The analysis shows that this two-dimensional density grating effectively modulates the intensity of the beacon satellite signals. This spatial modulation of the wave intensity is converted into time modulation by the drift of the ionospheric irregularities, which then contributes to the scintillation of the beacon satellite signals. Using proper plasma parameters and equatorially measured irregularity data, it is shown that the scintillation index defined by S4² = (⟨I²⟩ − ⟨I⟩²)/⟨I⟩², where I is the wave intensity and ⟨·⟩ stands for spatial average over an irregularity wavelength, falls in the range of the experimentally detected values
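The S4 index used above has a standard form; the sketch below evaluates it for synthetic intensity records (a constant intensity, and exponentially distributed intensity as in saturated Rayleigh fading), not for measured signals.

```python
import numpy as np

# Sketch (standard definition): the S4 scintillation index from
# intensity samples I of a transionospheric signal,
#   S4 = sqrt((<I^2> - <I>^2) / <I>^2).

def s4_index(I):
    I = np.asarray(I, dtype=float)
    return np.sqrt((np.mean(I**2) - np.mean(I)**2) / np.mean(I)**2)

rng = np.random.default_rng(2)
steady = np.ones(1000)                              # unperturbed signal
scintillating = rng.exponential(1.0, size=100000)   # Rayleigh-fading intensity
print(s4_index(steady))                    # 0.0: no scintillation
print(round(s4_index(scintillating), 1))   # ~1.0: saturated scintillation
```

Values of S4 ∼ 0.5–0.8, as quoted above, sit between these two limiting cases.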
Irregular Shaped Building Design Optimization with Building Information Modelling
Directory of Open Access Journals (Sweden)
Lee Xia Sheng
2016-01-01
Full Text Available This research recognises the function of Building Information Modelling (BIM) in design optimization for irregular shaped buildings. The study focuses on a conceptual irregular shaped "twisted" building design similar to some existing sculpture-like architectures. Form and function are the two most important aspects of new buildings, which are becoming more sophisticated as parts of the equally sophisticated "systems" we live in. Nowadays, it is common to have irregular shaped or sculpture-like buildings, which are very different from regular buildings. Construction industry stakeholders face stiff challenges in many aspects, such as buildability, cost effectiveness, delivery time and facility management, when dealing with irregular shaped building projects. Building Information Modelling (BIM) is utilized to enable architects, engineers and constructors to gain improved visualization of irregular shaped buildings, with the purpose of identifying critical issues before initiating physical construction work. In this study, three design options differing in rotation angle (30, 60 and 90 degrees) are created to conduct quantifiable comparisons. Discussions focus on three major aspects: structural planning, usable building space, and structural constructability. This research concludes that Building Information Modelling is instrumental in facilitating design optimization for irregular shaped buildings. In the process of comparing different design variations, instead of just giving a "yes or no" type of response, stakeholders can now easily visualize, evaluate and decide to achieve the right balance based on their own criteria. Therefore, construction project stakeholders are empowered with superior evaluation and decision-making capability.
Low frequency sound reproduction in irregular rooms using CABS (Control Acoustic Bass System)
DEFF Research Database (Denmark)
Celestinos, Adrian; Nielsen, Sofus Birkedal
2011-01-01
of an irregular room model using the FDTD (Finite Difference Time Domain) method has been presented. CABS has been simulated in the irregular room model. Measurements of CABS in a real irregular room have been performed. The performance of CABS was affected by the irregular shape of the room due to the corner...
Characterizing spontaneous irregular behavior in coupled map lattices
Energy Technology Data Exchange (ETDEWEB)
Dobyns, York [PEAR, Princeton University Princeton, NJ 08544-5263 (United States); Atmanspacher, Harald [Institut fuer Grenzgebiete der Psychologie und Psychohygiene Wilhelmstrasse 3a, Freiburg 79098 (Germany)]. E-mail: haa@igpp.de
2005-04-01
Two-dimensional coupled map lattices display, in a specific parameter range, a stable phase (quasi-) periodic in both space and time. With small changes to the model parameters, this stable phase develops spontaneous eruptions of non-periodic behavior. Although this behavior itself appears irregular, it can be characterized in a systematic fashion. In particular, parameter-independent features of the spontaneous eruptions may allow useful empirical characterizations of other phenomena that are intrinsically hard to predict and reproduce. Specific features of the distributions of lifetimes and emergence rates of irregular states display such parameter-independent properties.
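A generic two-dimensional coupled map lattice of this kind (diffusively coupled logistic maps with periodic boundaries; the parameters are illustrative, not those of the study) can be simulated in a few lines:

```python
import numpy as np

# Hedged sketch (generic model, illustrative parameters): a 2-D
# diffusively coupled logistic map lattice of the kind whose stable
# and irregular phases are discussed above.

def cml_step(x, eps=0.1, r=3.9):
    f = r * x * (1.0 - x)                      # local logistic map
    neigh = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
             np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
    return (1.0 - eps) * f + eps * neigh       # diffusive nearest-neighbour coupling

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0, size=(32, 32))
for _ in range(100):
    x = cml_step(x)
print(x.shape)
```

Sweeping the coupling strength `eps` and the map parameter `r` is the kind of small parameter change that moves such a lattice between the stable periodic phase and the regime with spontaneous irregular eruptions.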
[Artificial cycle therapy of acupuncture and moxibustion for irregular menstruation].
Wu, Jie; Yang, Lijie; Chen, Yajie; Li, Qing; Chen, Lin
2015-03-01
Through a discussion of the TCM physiological characteristics of females in the follicular, ovulatory, luteal and menstrual phases and the corresponding treatment principles, the clinical application of artificial cycle therapy of acupuncture and moxibustion for irregular menstruation is introduced, with typical cases attached. It is suggested that the menstrual cycle follows the growth-consumption rule of yin, yang, qi and blood. The corresponding treatment principles should be applied in accordance with the change rule of the menstrual cycle. Hence, artificial cycle therapy of acupuncture and moxibustion is worth adopting for irregular menstruation in clinical application.
Uniform irradiation of irregularly shaped cavities for photodynamic therapy.
Rem, A I; van Gemert, M J; van der Meulen, F W; Gijsbers, G H; Beek, J F
1997-03-01
It is difficult to achieve a uniform light distribution in irregularly shaped cavities. We have conducted a study on the use of hollow 'integrating' moulds for more uniform light delivery of photodynamic therapy in irregularly shaped cavities such as the oral cavity. Simple geometries such as a cubical box, a sphere, a cylinder and a 'bottle-neck' geometry have been investigated experimentally and the results have been compared with computed light distributions obtained using the 'radiosity method'. A high reflection coefficient of the mould and the best uniform direct irradiance possible on the inside of the mould were found to be important determinants for achieving a uniform light distribution.
Performance of sparse graph codes on a four-dimensional CDMA System in AWGN and multipath fading
CSIR Research Space (South Africa)
Vlok, JD
2007-09-01
Full Text Available (bit) = 1×10⁻⁵. Index Terms—block turbo codes (BTC), complex spreading sequences (CSS), channel modelling, log-likelihood ratio (LLR), low-density parity-check (LDPC) codes, multi-layered modulation (MLM), multi-dimensional (MD), repeat... [Fig. 4: three-dimensional block turbo decoder structure.] The output of SISO module m is a 3D cube Λ_E,m, m = 1, 2, 3, containing the extrinsic log-likelihood ratio (LLR) of each data bit x_k...
Constellation labeling optimization for bit-interleaved coded APSK
Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe
2016-05-01
This paper investigates constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in the Digital Video Broadcasting Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its power and spectral efficiency together with its robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and a modified version of it are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining it with DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulation results validate the proposed constellation labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation defined in the DVB-S2 standard.
Runtime buffer management to improve the performance in irregular ...
Indian Academy of Sciences (India)
Sādhanā Vol. 40, Part 4, June 2015, pp. 1117–1137. © Indian Academy of Sciences. Runtime buffer management to improve the performance in irregular Network-on-Chip architecture. UMAMAHESWARI S1,∗, MEGANATHAN D2 and RAJA PAUL PERINBAM J3. 1Department of Information Technology, Anna University, ...
Structure Irregularity Impedes Drop Roll-Off at Superhydrophobic Surfaces
DEFF Research Database (Denmark)
Larsen, Simon Tylsgaard; Andersen, Nis Korsgaard; Søgaard, Emil
2014-01-01
-off angles is found to be caused by a decrease of the receding contact angle, which in turn is caused by an increase of the triple phase contact line of the drops for those more irregular surfaces. To understand the observation, we propose to treat the microdrops as rigid bodies and apply a torque balance...
The regularized monotonicity method: detecting irregular indefinite inclusions
DEFF Research Database (Denmark)
Garde, Henrik; Staboulis, Stratos
2018-01-01
inclusions, where the conductivity distribution has both more and less conductive parts relative to the background conductivity; one such method is the monotonicity method of Harrach, Seo, and Ullrich. We formulate the method for irregular indefinite inclusions, meaning that we make no regularity assumptions...
Size and Albedo of Irregular Saturnian Satellites from Spitzer Observations
Mueller, Michael; Grav, T.; Trilling, D.; Stansberry, J.; Sykes, M.
2008-01-01
Using MIPS onboard the Spitzer Space Telescope, we observed the thermal emission (24 and, for some targets, 70 um) of eight irregular satellites of Saturn: Albiorix, Siarnaq, Paaliaq, Kiviuq, Ijiraq, Tarvos, Erriapus, and Ymir. We determined the size and albedo of all targets. An analysis of
Interagency Cooperation for Irregular Warfare at the Combatant Command
2009-01-01
enemy’s command capability. Salamoni argued that the term “irregular warfare” belies an ethnocentric perspective of conflict that will limit military...duty military staffing to form the nucleus of the organization, which would receive augmentation from additional assigned reservists and interagency
Convection-diffusion lattice Boltzmann scheme for irregular lattices
Sman, van der R.G.M.; Ernst, M.H.
2000-01-01
In this paper, a lattice Boltzmann (LB) scheme for convection diffusion on irregular lattices is presented, which is free of any interpolation or coarse graining step. The scheme is derived using the axiom that the velocity moments of the equilibrium distribution equal those of the
Swiveling Lathe Jaw Concept for Holding Irregular Pieces
David, J.
1966-01-01
Clamp holds irregularly shaped pieces in lathe chuck without damage and eliminates excessive time in selecting optimum mounting. Interchangeable jaws ride in standard jaw slots but swivel so that the jaw face bears evenly against the workpiece regardless of contour. The jaws can be used on both engine and turret lathes.
Why type 2 supernovae do not explode in irregular galaxies
International Nuclear Information System (INIS)
Shklovskij, I.S.
1984-01-01
The conclusion is drawn that the reason for the absence of type 2 supernova explosions in irregular galaxies is their peculiar chemical composition. The observed lack of stellar wind from massive hot giants is due to the relatively low heavy element abundance. For this reason, evolving massive stars do not form the extended dense envelopes that are a necessary condition for the type 2 supernova phenomenon
First stellar abundances in the dwarf irregular galaxy Sextans A
Kaufer, A; Venn, KA; Tolstoy, E; Pinte, C; Kudritzki, RP
We present the abundance analyses of three isolated A-type supergiant stars in the dwarf irregular galaxy Sextans A (= DDO 75) from high-resolution spectra obtained with the Ultraviolet-Visual Echelle Spectrograph (UVES) on the Kueyen telescope (UT2) of the ESO Very Large Telescope (VLT). Detailed
Spectral element method for wave propagation on irregular domains
Indian Academy of Sciences (India)
Yan Hui Geng
2018-03-14
A spectral element approximation of acoustic propagation problems combined with a new mapping method on irregular domains is proposed. Following this method, the Gauss–Lobatto–Chebyshev nodes in the standard space are applied to the spectral element method (SEM). The nodes in the ...
On the Total Edge Irregularity Strength of Generalized Butterfly Graph
Dwi Wahyuna, Hafidhyah; Indriati, Diari
2018-04-01
Let G(V, E) be a connected, simple, and undirected graph with vertex set V and edge set E. A total k-labeling is a map that carries the vertices and edges of a graph G into a set of positive integer labels {1, 2, …, k}. An edge irregular total k-labeling λ: V(G) ∪ E(G) → {1, 2, …, k} of a graph G is a total k-labeling such that the weights calculated for all edges are distinct. The weight of an edge uv in G, denoted by wt(uv), is defined as the sum of the label of u, the label of v, and the label of uv. The total edge irregularity strength of G, denoted by tes(G), is the minimum value of the largest label k over all such edge irregular total k-labelings. A generalized butterfly graph, BFn, is obtained by inserting vertices into every wing, with the assumption that the number of inserted vertices is the same for every wing; it then has 2n + 1 vertices and 4n − 2 edges. In this paper, we investigate the total edge irregularity strength of the generalized butterfly graph BFn for n > 2. The result is tes(BFn) = ⌈4n/3⌉.
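The reported value coincides with the well-known general lower bound tes(G) ≥ ⌈(|E| + 2)/3⌉ applied to the edge count of BFn; a minimal sketch checking this (function names are mine, not from the paper):

```python
import math

def tes_lower_bound_edges(num_edges):
    # General lower bound tes(G) >= ceil((|E| + 2) / 3) for any graph G
    return math.ceil((num_edges + 2) / 3)

def tes_butterfly(n):
    # Value reported in the paper for the generalized butterfly graph BF_n, n > 2
    return math.ceil(4 * n / 3)

# BF_n has 4n - 2 edges, so the general lower bound already equals ceil(4n/3),
# i.e. the reported labelings attain the smallest k the bound allows:
for n in range(3, 20):
    assert tes_lower_bound_edges(4 * n - 2) == tes_butterfly(n)
```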
On a new process for cusp irregularity production
Directory of Open Access Journals (Sweden)
H. C. Carlson
2008-09-01
Full Text Available Two plasma instability mechanisms were thought until 2007 to dominate the formation of plasma irregularities in the F region high latitude and polar ionosphere; the gradient-drift driven instability, and the velocity-shear driven instability. The former mechanism was accepted as accounting for plasma structuring in polar cap patches, the latter for plasma structuring in polar cap sun aligned arcs. Recent work has established the need to replace this view of the past two decades with a new patch plasma structuring process (not a new mechanism, whereby shear-driven instabilities first rapidly structure the entering plasma, after which gradient drift instabilities build on these large "seed" irregularities. Correct modeling of cusp and early polar cap patch structuring will not be accomplished without allowing for this compound process. This compound process explains several previously unexplained characteristics of cusp and early polar cap patch irregularities. Here we introduce additional data, coincident in time and space, to extend that work to smaller irregularity scale sizes and relate it to the structured cusp current system.
Irregular ionization and scintillation of the ionosphere in equator region
International Nuclear Information System (INIS)
Shinno, Kenji
1974-01-01
The latest studies on scintillation in satellite communication and the related irregularities of the ionosphere are reviewed. They were clarified by means of spread-F, direct measurement with scientific satellites, VHF radar observation, and radio wave propagation in the equatorial region. The fundamental occurrence mechanism may be plasma instability caused by the interaction of the movement of the neutral atmosphere and the magnetic field. A comparison of the main characteristics of scintillation, namely the dependence on region, solar activity, season, local time, geomagnetic activity, movement in the ionosphere, scattering source, frequency and transmission mode, was made, and the correlation among spread-F, TEP and scintillation was summarized. The latest principal studies were the observations made by Intelsat and by ATS. Scintillation of Syncom-3 and Intelsat-II-F2 and spread-F by ionosphere observation were compared by Huang. It is reasonable to consider that the occurrence of scintillation is caused by irregularities in the ionosphere which are peculiar to the equatorial region, because of the similar characteristics of spread-F and VHF propagation in the equatorial region. These three phenomena may occur in relation to the irregularities of the ionosphere. An interpretation of spread-F and of the abnormal propagation of waves across the equator is given. The study using VHF radar and the movement of irregular ionization observed directly with artificial satellites are reviewed. (Iwakiri, K.)
Third-order theory for multi-directional irregular waves
DEFF Research Database (Denmark)
Madsen, Per A.; Fuhrman, David R.
2012-01-01
A new third-order solution for multi-directional irregular water waves in finite water depth is presented. The solution includes explicit expressions for the surface elevation, the amplitude dispersion and the vertical variation of the velocity potential. Expressions for the velocity potential at...
Irregular menses: an independent risk factor for gestational diabetes mellitus.
Haver, Mary Claire; Locksmith, Gregory J; Emmet, Emily
2003-05-01
Our purpose was to determine whether a history of irregular menses predicts gestational diabetes mellitus independently of traditional risk factors. We analyzed demographic characteristics, body mass index, and menstrual history of 85 pregnant women with gestational diabetes mellitus and compared them with 85 systematically selected control subjects who were matched for age, race, and delivery year. Subjects with pregestational diabetes mellitus, previous gestational diabetes mellitus, family history of diabetes mellitus, weight >200 pounds, previous macrosomic infants, or previous stillbirth were excluded. Demographic characteristics between case and control groups were similar. Mean body mass index was higher among cases (26.5 kg/m²) versus control subjects (24.5 kg/m², P =.004). Irregular cycles were more prevalent in the cases (24% vs 7%, P =.006). With the use of body mass index as a stratification factor, menstrual irregularity maintained a strong association with gestational diabetes mellitus (P =.014). A history of irregular menstrual cycles was a significant independent predictor of gestational diabetes mellitus. If selective screening is implemented for gestational diabetes mellitus, such history should be considered in the decision of whom to test.
Spectral element method for wave propagation on irregular domains
Indian Academy of Sciences (India)
A spectral element approximation of acoustic propagation problems combined with a new mapping method on irregular domains is proposed. Following this method, the Gauss–Lobatto–Chebyshev nodes in the standard space are applied to the spectral element method (SEM). The nodes in the physical space are ...
Characteristics of low latitude ionospheric E-region irregularities ...
Indian Academy of Sciences (India)
154°E, dip angle = 37.3°, sub-ionospheric dip = 34°) have been analyzed to study the behaviour of ionospheric E-region irregularities during the active solar and magnetic periods. The autocorrelation functions, power spectral densities, signal de-correlation times are computed to study the temporal features of ionospheric ...
Directory of Open Access Journals (Sweden)
Anthony McCosker
2014-03-01
Full Text Available As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.
Classical limit of irregular blocks and Mathieu functions
International Nuclear Information System (INIS)
Piątek, Marcin; Pietrykowski, Artur R.
2016-01-01
The Nekrasov-Shatashvili limit of the N = 2 SU(2) pure gauge (Ω-deformed) super Yang-Mills theory encodes the information about the spectrum of the Mathieu operator. On the other hand, the Mathieu equation emerges entirely within the frame of two-dimensional conformal field theory (2d CFT) as the classical limit of the null vector decoupling equation for some degenerate irregular block. Therefore, it seems to be possible to investigate the spectrum of the Mathieu operator employing the techniques of 2d CFT. To exploit this strategy, a full correspondence between the Mathieu equation and its realization within 2d CFT has to be established. In our previous paper http://dx.doi.org/10.1007/JHEP12(2014)032, we have found that the expression of the Mathieu eigenvalue given in terms of the classical irregular block exactly coincides with the well known weak coupling expansion of this eigenvalue in the case in which the auxiliary parameter is the noninteger Floquet exponent. In the present work we verify that the formula for the corresponding eigenfunction obtained from the irregular block reproduces the so-called Mathieu exponent from which the noninteger order elliptic cosine and sine functions may be constructed. The derivation of the Mathieu equation within the formalism of 2d CFT is based on conjectures concerning the asymptotic behaviour of irregular blocks in the classical limit. A proof of these hypotheses is sketched. Finally, we speculate on how it could be possible to use the methods of 2d CFT in order to get from the irregular block the eigenvalues of the Mathieu operator in other regions of the coupling constant.
Comparison of correlation analysis techniques for irregularly sampled time series
Directory of Open Access Journals (Sweden)
K. Rehfeld
2011-06-01
Full Text Available Geoscientific measurements often provide time series with irregular time sampling, requiring either data reconstruction (interpolation) or sophisticated methods to handle irregular sampling. We compare the linear interpolation technique and different approaches for analyzing the correlation functions and persistence of irregularly sampled time series, such as Lomb-Scargle Fourier transformation and kernel-based methods. In a thorough benchmark test we investigate the performance of these techniques.
All methods have comparable root mean square errors (RMSEs) for low skewness of the inter-observation time distribution. For high skewness, i.e. very irregular data, interpolation bias and RMSE increase strongly. We find a 40 % lower RMSE for the lag-1 autocorrelation function (ACF) for the Gaussian kernel method vs. the linear interpolation scheme, in the analysis of highly irregular time series. For the cross correlation function (CCF) the RMSE is then lower by 60 %. The application of the Lomb-Scargle technique gave results comparable to the kernel methods for the univariate, but poorer results in the bivariate case. Especially the high-frequency components of the signal, where classical methods show a strong bias in ACF and CCF magnitude, are preserved when using the kernel methods.
We illustrate the performance of the interpolation vs. the Gaussian kernel method by applying both to paleo-data from four locations, reflecting late Holocene Asian monsoon variability as derived from speleothem δ18O measurements. Cross correlation results are similar for both methods, which we attribute to the long time scales of the common variability. The persistence time (memory) is strongly overestimated when using the standard, interpolation-based approach. Hence, the Gaussian kernel is a reliable and more robust estimator with significant advantages compared to other techniques and is suitable for large-scale application to paleo-data.
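The kernel idea can be sketched as follows: every sample pair contributes to the correlation estimate at a given lag, weighted by a Gaussian in the mismatch between the pair's time separation and that lag. This is a minimal illustration in the spirit of the method described above; the function name, test signal, and bandwidth choice are mine, not from the paper:

```python
import math

def kernel_corr(tx, x, ty, y, lag, h):
    """Correlation estimate at a given lag for irregularly sampled series:
    each sample pair (i, j) contributes the product x_i * y_j, weighted by
    a Gaussian kernel (bandwidth h) in the mismatch between ty[j] - tx[i]
    and the requested lag."""
    def standardize(v):
        m = sum(v) / len(v)
        s = math.sqrt(sum((a - m) ** 2 for a in v) / len(v))
        return [(a - m) / s for a in v]
    xs, ys = standardize(x), standardize(y)
    num = den = 0.0
    for i, ti in enumerate(tx):
        for j, tj in enumerate(ty):
            w = math.exp(-((tj - ti) - lag) ** 2 / (2.0 * h * h))
            num += w * xs[i] * ys[j]
            den += w
    return num / den

# Irregular sampling times and a pair of sinusoids where y lags x by 3 time units
t = [i + 0.4 * math.sin(1.7 * i) for i in range(60)]
x = [math.sin(0.4 * ti) for ti in t]
y = [math.sin(0.4 * (ti - 3.0)) for ti in t]
c3 = kernel_corr(t, x, t, y, lag=3.0, h=0.5)  # estimate near the true lag
c0 = kernel_corr(t, x, t, y, lag=0.0, h=0.5)  # estimate at zero lag
```

No interpolation step is needed: the kernel weighting handles the irregular spacing directly, which is the property the benchmark above credits for the lower RMSE.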
Numerical simulations of type II gradient drift irregularities in the equatorial electrojet
International Nuclear Information System (INIS)
Ferch, R.L.; Sudan, R.N.
1977-01-01
Two-dimensional numerical studies of the development of type II irregularities in the equatorial electrojet have been carried out using a method similar to that of McDonald et al. (1974), except that ion inertia has been neglected. This simplification is shown to be a valid approximation whenever the electron drift velocity is small in comparison with the ion acoustic velocity and the values of the other parameters are those appropriate for the equatorial E layer. This code enables us to follow the development of quasi-steady state turbulence from appropriate initial perturbations. The two-dimensional turbulent spectrum of excited electron density perturbations is studied both for the case of development from initial perturbations and for the case of a continuously pumped single driving wave
Energy Technology Data Exchange (ETDEWEB)
Ravishankar, C., Hughes Network Systems, Germantown, MD
1998-05-08
Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. From a transmission point of view, digital transmission has therefore been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques and that is often used interchangeably with speech coding is the term voice coding. This term is more generic in the sense that the
Optimal codes as Tanner codes with cyclic component codes
DEFF Research Database (Denmark)
Høholdt, Tom; Pinero, Fernando; Zeng, Peng
2014-01-01
In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...
International Nuclear Information System (INIS)
Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.
2017-09-01
This paper presents the Aztheca code, which is formed by the mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and the Aztheca model. The results show that both RELAP-5 and the Aztheca code have the ability to adequately predict the behavior of the reactor. (Author)
DEFF Research Database (Denmark)
Soon, Winnie; Cox, Geoff
2018-01-01
a computational and poetic composition for two screens: on one of these, texts and voices are repeated and disrupted by mathematical chaos, together exploring the performativity of code and language; on the other, is a mix of a computer programming syntax and human language. In this sense queer code can...... be understood as both an object and subject of study that intervenes in the world’s ‘becoming' and how material bodies are produced via human and nonhuman practices. Through mixing the natural and computer language, this article presents a script in six parts from a performative lecture for two persons...
International Nuclear Information System (INIS)
Rattan, D.S.
1993-11-01
NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, the governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault and their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a User's manual, describing simulation procedures, input data preparation, output and example test cases
Wettability measurements of irregular shapes with Wilhelmy plate method
Park, Jaehyung; Pasaogullari, Ugur; Bonville, Leonard
2018-01-01
One of the most accurate methods for measuring the dynamic contact angle of liquids on solid surfaces is the Wilhelmy plate method. This method generally requires the use of rectangular samples having a constant perimeter in the liquid during advancing and receding cycles. A new formulation based on the Wilhelmy force balance equation has been developed to determine the contact angle for plate samples with irregular shapes. This method employs a profile plot obtained from an optical image to determine the perimeter (i.e. wetted length) of the sample as a function of the immersion depth. The raw force data measured by the force tensiometer is combined with the profile plot and the Wilhelmy equation to determine the wetting force and consequently the advancing and receding contact angles. The method is verified with both triangular and irregular PTFE samples in water, and the measured contact angles are in good agreement with results from conventional regular-shaped samples with a constant perimeter.
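The underlying balance can be sketched with a depth-dependent wetted perimeter taken from a profile plot. This is a minimal illustration of the idea only; the interpolation helper, function names, and sample numbers are mine, not the paper's formulation:

```python
import math

def perimeter_at_depth(profile, d):
    """Linearly interpolate the wetted perimeter at immersion depth d from
    (depth, perimeter) points extracted from the sample's profile plot."""
    for (d0, p0), (d1, p1) in zip(profile, profile[1:]):
        if d0 <= d <= d1:
            return p0 + (p1 - p0) * (d - d0) / (d1 - d0)
    raise ValueError("depth outside profile range")

def contact_angle_deg(force, perimeter, gamma):
    """Wilhelmy balance F = P * gamma * cos(theta), solved for theta (degrees).
    force: wetting force in N (buoyancy already subtracted from the raw data);
    gamma: liquid surface tension in N/m (~0.072 for water at room temperature)."""
    c = force / (perimeter * gamma)
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# Illustrative use: a tapered sample whose perimeter grows with depth
profile = [(0.0, 0.020), (0.005, 0.030)]   # depth (m) -> perimeter (m)
P = perimeter_at_depth(profile, 0.0025)    # wetted length halfway down
theta = contact_angle_deg(0.025 * 0.072 * math.cos(math.radians(60.0)), P, 0.072)
```

For a rectangular plate the perimeter lookup collapses to a constant, recovering the conventional Wilhelmy analysis.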
Rocket measurements of electron density irregularities during MAC/SINE
Ulwick, J. C.
1989-01-01
Four Super Arcas rockets were launched at the Andoya Rocket Range, Norway, as part of the MAC/SINE campaign to measure electron density irregularities with high spatial resolution in the cold summer polar mesosphere. They were launched as part of two salvos: the turbulent/gravity wave salvo (3 rockets) and the EISCAT/SOUSY radar salvo (one rocket). In both salvos meteorological rockets, measuring temperature and winds, were also launched, and the SOUSY radar, located near the launch site, measured mesospheric turbulence. Electron density irregularities and strong gradients were measured by the rocket probes in the region of most intense backscatter observed by the radar. The electron density profiles (8 in total: 4 on ascent and 4 on descent) show very different characteristics in the peak scattering region and show marked spatial and temporal variability. These data are intercompared and discussed.
The scholarly rebellion of the early Baker Street Irregulars
Directory of Open Access Journals (Sweden)
George Mills
2017-03-01
Full Text Available This work provides and analyzes an early institutional history of the pioneering Sherlock Holmes American fan club, the Baker Street Irregulars (BSI. Using the publications and records of these devoted Sherlockians, I track the BSI's development from a speakeasy gathering in 1934 to a national organization by the mid-1940s. This growth was built on a foundation of Victorian nostalgia and playful humor. Yet at the same time the members of the Irregulars took their fandom seriously, producing Sherlockian scholarship and creating an infrastructure of journals, conferences, and credentialing that directly mimicked the academy. They positioned themselves in contrast to prevailing scholarly practices of the period, such as New Criticism. I trace both how their fan practices developed over time and how this conflict with the academy led to many of the BSI's defining characteristics.
Constructing C1 Continuous Surface on Irregular Quad Meshes
Institute of Scientific and Technical Information of China (English)
HE Jun; GUO Qiang
2013-01-01
A new method is proposed for surface construction on irregular quad meshes as an extension of uniform B-spline surfaces. Given a number of control points, which form a regular or irregular quad mesh, a weight function is constructed for each control point. The weight function is defined on a local domain and is C1 continuous. The whole surface is then constructed by the weighted combination of all the control points. The property of the new method is that the surface is defined by a piecewise C1 bi-cubic rational parametric polynomial within each quad face. It is an extension of uniform B-spline surfaces in the sense that its definition is an analogy of the B-spline surface, and it produces a uniform bi-cubic B-spline surface if the control mesh is a regular quad mesh. Examples produced by the new method are also included.
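The overall construction (a locally supported weight function per control point, combined by normalized weighting) can be sketched generically. The weight functions below are hypothetical constant stand-ins, not the C1 bi-cubic rational weights derived in the paper:

```python
def surface_point(u, v, control_points, weight_fns):
    """Evaluate S(u, v) as the normalized weighted combination of control
    points, each paired with a weight function w_i(u, v)."""
    num = [0.0, 0.0, 0.0]
    den = 0.0
    for (x, y, z), w in zip(control_points, weight_fns):
        wi = w(u, v)
        den += wi
        num[0] += wi * x
        num[1] += wi * y
        num[2] += wi * z
    return tuple(c / den for c in num)

# With equal constant weights the surface point is the centroid of the
# control points; the normalization by the weight sum is what makes the
# scheme affine-invariant regardless of the particular weight functions.
pts = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 2.0)]
ws = [lambda u, v: 1.0] * 3
centroid = surface_point(0.5, 0.5, pts, ws)
```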
Conflict Without Casualties: Non-Lethal Weapons in Irregular Warfare
2007-09-01
the body,” and the Geneva Protocol of 1925 bans the use of chemical and biological weapons. On 8 April 1975, President Ford issued Executive...(E Funding – PE 63851M) (accessed 15 December 2006). The American Journal of Bioethics, “Medical Ethics and Non-Lethal Weapons,” Bioethics.net...CASUALTIES: NON-LETHAL WEAPONS IN IRREGULAR WARFARE by Richard L. Scott, September 2007. Thesis Advisor: Robert McNab; Second Reader
Active Absorption of Irregular Gravity Waves in BEM-Models
DEFF Research Database (Denmark)
Brorsen, Michael; Frigaard, Peter
1992-01-01
The boundary element method is applied to the computation of irregular gravity waves. The boundary conditions at the open boundaries are obtained by a digital filtering technique, where the surface elevations in front of the open boundary are filtered numerically, yielding the velocity to be prescribed at the boundary. By numerical examples it is shown that well designed filters can reduce the wave reflection to a few per cent over a frequency range corresponding to a Jonswap spectrum.
Using Little's Irregularity Index in orthodontics: outdated and inaccurate?
LENUS (Irish Health Repository)
Macauley, Donal
2012-12-01
Little\\'s Irregularity Index (LII) was devised to objectively score mandibular incisor alignment for epidemiological studies but has been extended to assess the relative performance of orthodontic brackets, retainer or treatment modalities. Our aim was to examine the repeatability and precision of LII measurements of four independent examiners on the maxillary arch of orthodontic patients. The hypothesis was that the reproducibility of individual contact point displacement measurements, used to calculate the LII score, are inappropriate.
Computing Homology Group Generators of Images Using Irregular Graph Pyramids
Peltier , Samuel; Ion , Adrian; Haxhimusa , Yll; Kropatsch , Walter; Damiand , Guillaume
2007-01-01
We introduce a method for computing the homology groups and their generators of a 2D image, using a hierarchical structure, i.e. an irregular graph pyramid. Starting from an image, a hierarchy of the image is built by two operations that preserve the homology of each region. Instead of computing homology generators in the base, where the number of entities (cells) is large, we first reduce the number of cells by a graph pyramid. Then homology generators are computed efficiently on...
Analysis of irregular opacities of silicosis using computed tomography
International Nuclear Information System (INIS)
Maeda, Atsushi; Shida, Hisao; Chiyotani, Keizo; Saito, Kenichi; Mishina, Michihito
1983-01-01
Classification is used to codify chest CT images of abnormalities of the lung in a simple reproducible manner. Symbols to record CT features of importance are listed. We applied CT to 92 cases of silicosis and roentgenological analysis was performed. Bullae, honeycombing, cavities, emphysema, pleural thickening and calcification were more clearly demonstrated in CT images than in routine chest roentgenograms. Irregular opacities were considered to be a combined profusion of small round and streak or strand opacities. (author)
Evaluation of irregular menses in perimenarcheal girls: a pilot study.
Browner-Elhanan, Karen J; Epstein, Jonathan; Alderman, Elizabeth M
2003-12-01
Acyclic vaginal bleeding in girls within three years of menarche is most commonly attributed to an immature hypothalamic-pituitary-ovarian axis. Assuming this diagnosis may preclude the practitioner from performing more definitive studies and thereby diagnosing other, treatable causes of menstrual irregularities. A retrospective chart review of 178 girls presenting to an inner-city hospital-based adolescent clinic within three years of menarche was performed. Personal and family medical and menarcheal history was assessed, and findings on the physical and laboratory examinations performed were evaluated. Of the 178 girls still perimenarcheal at presentation, 47 were the focus of this study. Of these, 39 had no significant findings on physical examination, while 3 had signs of functional ovarian hyperandrogenism (FOH) including obesity, hirsutism, and moderate acne with corresponding LH/FSH ratios > 3, although pelvic ultrasound examination revealed normal ovaries. Four of the 39 patients with normal physical exams had LH/FSH testing done, and 1 of the 4 had an abnormal LH/FSH ratio, indicating possible FOH. Two of the 47 patients were pregnant. Other laboratory abnormalities included microcytic, hypochromic anemia in patients, and an elevated erythrocyte sedimentation rate in a patient later diagnosed with a rheumatologic disorder. Those perimenarcheal girls presenting with irregular menses and findings including obesity, acne, or pallor were likely to have treatable causes of menstrual irregularities. In one of the four girls with a normal physical examination, hormonal testing indicated possible FOH, thus suggesting that hormonal evaluation of perimenarcheal girls with menstrual irregularities may be justified, as it may reveal previously unsuspected pathology.
Energy Technology Data Exchange (ETDEWEB)
Delbecq, J.M
1999-07-01
The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and random dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)
Backscatter measurements of 11-cm equatorial spread-F irregularities
International Nuclear Information System (INIS)
Tsunoda, R.T.
1980-01-01
In the equatorial F-region ionosphere, a turbulent cascade process has been found to exist that extends from irregularity spatial wavelengths longer than tens of kilometers down to wavelengths as short as 36 cm. To investigate the small-scale regime of wavelengths less than 36 cm, an equatorial radar experiment was conducted using a frequency of 1320 MHz that corresponds to an irregularity wavelength of 11 cm. The first observations of radar backscatter from 11-cm field-aligned irregularities (FAI) are described. These measurements extend the spatial wavelength regime of F-region FAI to lengths that approach both the electron gyroradius and the Debye length. Agreement of these results with the theory of high-frequency drift waves suggests that these observations may be unique to the equatorial ionosphere. That is, the requirement of low electron densities that the theory calls for may preclude the existence of 11-cm FAI elsewhere in the F-region ionosphere, except in equatorial plasma bubbles
[Comparison of Different Methods of Area Measurement in Irregular Scar].
Ran, D; Li, W J; Sun, Q G; Li, J Q; Xia, Q
2016-10-01
To determine a measurement standard for irregular scar area by comparing the advantages and disadvantages of different methods of measuring the same irregular scar area. Irregular scar area was scanned digitally and measured by the coordinate reading method, the AutoCAD pixel method, the Photoshop lasso pixel method, the Photoshop magic wand filled pixel method, and Foxit PDF reading software; aspects of these methods such as measurement time, repeatability, whether results could be recorded, and whether they could be traced were compared and analyzed. There was no significant difference in the scar areas obtained by the measurement methods above. However, there were statistical differences in measurement time and in repeatability between single and multiple operators, and only Foxit PDF reading software allowed results to be traced back. The methods above can all be used for measuring scar area, but each has its advantages and disadvantages. It is necessary to develop new measurement software for forensic identification. Copyright© by the Editorial Department of Journal of Forensic Medicine
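All of the pixel-based methods compared above reduce to the same computation: count the segmented scar pixels and scale by the physical area of one pixel. A minimal sketch (the binary `mask` and the scan resolution are hypothetical values, not data from the study):

```python
def scar_area_mm2(mask, mm_per_pixel):
    """Area of a segmented scar: count foreground pixels (1s) in a
    binary mask and scale by the physical area of one pixel."""
    n_pixels = sum(sum(row) for row in mask)
    return n_pixels * mm_per_pixel ** 2

# toy 4x4 segmentation with 5 scar pixels, scanned at 0.5 mm/pixel
mask = [[0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 0]]
area = scar_area_mm2(mask, 0.5)  # 5 pixels * 0.25 mm^2 = 1.25 mm^2
```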
Seismic performance for vertical geometric irregularity frame structures
Ismail, R.; Mahmud, N. A.; Ishak, I. S.
2018-04-01
This research highlights results for vertical geometric irregularity frame structures. Finite element analysis software (LUSAS) was used to analyse seismic performance, focusing on a type of irregular frame with differences in floor height continued in the middle of the building. Building structures in Malaysia have been affected by earthquakes in neighbouring countries such as Indonesia (Sumatra). In Malaysia, concrete, which has limited tensile resistance, is widely used in building construction. Structural behaviour under horizontal and vertical static loads is commonly analysed using plane frame analysis. The aim of this case study is to determine the stress and displacement in the seismic response of this type of irregular frame structure. The study is based on the seven-storey building of the Clinical Training Centre located in Sungai Buloh, Selayang, Selangor. Records from the large earthquake that occurred in Aceh, Indonesia on December 26, 2004 were used in conducting this research. The stress and displacement results from IMPlus seismic analysis in LUSAS Modeller software, for the seismic response of a formwork frame system, indicate that the building can safely withstand the ground motion and remains in good condition under the variations of seismic performance considered.
DEFF Research Database (Denmark)
Ejsing-Duun, Stine; Hansbøl, Mikala
This report contains the evaluation and documentation of the Coding Class project. The Coding Class project was launched in the 2016/2017 school year by IT-Branchen in collaboration with a number of member companies, the City of Copenhagen, Vejle Municipality, the Danish Agency for IT and Learning (STIL) and the volunteer association Coding Pirates. The report was written by Mikala Hansbøl, Docent in digital learning resources and research coordinator of the research and development environment Digitalisering i Skolen (DiS) at the Institute for School and Learning, Professionshøjskolen Metropol; and Stine Ejsing-Duun, Senior Lecturer in learning technology, interaction design, design thinking and design pedagogy at Forskningslab: It og Læringsdesign (ILD-LAB), Department of Communication and Psychology, Aalborg University in Copenhagen. We followed and carried out the evaluation and documentation of the Coding Class project from November 2016 to May 2017.
International Nuclear Information System (INIS)
Lindemuth, I.R.
1979-01-01
This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code. ANIMAL's physical model is also presented. Temporal and spatial finite-difference equations are formulated in a manner that facilitates implementation of the algorithm. The functions of the algorithm's FORTRAN subroutines and variables are outlined
Indian Academy of Sciences (India)
Resonance – Journal of Science Education. Network Coding. K V Rashmi, Nihar B Shah, P Vijay Kumar. General Article, Volume 15, Issue 7, July 2010, pp 604-621. Permanent link: https://www.ias.ac.in/article/fulltext/reso/015/07/0604-0621
International Nuclear Information System (INIS)
Cramer, S.N.
1984-01-01
The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States which is available in the public domain. The present code is the direct descendant of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each stage of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids
Indian Academy of Sciences (India)
Resonance – Journal of Science Education. Expander Codes – The Sipser–Spielman Construction. Priti Shankar. General Article, Volume 10, Issue 1. Author Affiliations: Priti Shankar, Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India.
A novel chaotic encryption scheme based on arithmetic coding
International Nuclear Information System (INIS)
Mi Bo; Liao Xiaofeng; Chen Yong
2008-01-01
In this paper, under the combination of arithmetic coding and the logistic map, a novel chaotic encryption scheme is presented. The plaintexts are encrypted and compressed by using an arithmetic coder whose mapping intervals are changed irregularly according to a keystream derived from a chaotic map and the plaintext. Performance and security of the scheme are also studied experimentally and theoretically in detail
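As a rough illustration of the idea (not the authors' exact scheme), the sketch below drives a toy binary arithmetic coder with a logistic-map keystream: each keystream bit swaps which half-interval encodes the current plaintext bit, so decoding requires the same key (the map's initial condition). The uniform symbol model, float-based coder, and parameter values are illustrative assumptions only.

```python
def logistic_keystream(x0, n, r=3.99):
    """Bit keystream from the logistic map x -> r*x*(1-x)."""
    x, bits = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

def encode(bits, key_x0):
    """Toy binary arithmetic coder (uniform model): each keystream bit
    irregularly swaps which half-interval encodes the plaintext bit."""
    low, high = 0.0, 1.0
    for b, k in zip(bits, logistic_keystream(key_x0, len(bits))):
        mid = (low + high) / 2
        if b ^ k == 0:
            high = mid          # symbol mapped to the lower half
        else:
            low = mid           # symbol mapped to the upper half
    return (low + high) / 2     # any number inside the final interval

def decode(code, n, key_x0):
    """Invert encode(); requires the same key (initial condition x0)."""
    low, high, out = 0.0, 1.0, []
    for k in logistic_keystream(key_x0, n):
        mid = (low + high) / 2
        half = 0 if code < mid else 1
        out.append(half ^ k)
        low, high = (low, mid) if half == 0 else (mid, high)
    return out

msg = [1, 0, 1, 1, 0, 0, 1, 0]
cipher = encode(msg, key_x0=0.3141)
```

Decoding with a different initial condition produces a different keystream and hence a wrong plaintext, which is the security intuition behind keying the coder's interval mapping.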
Irregular analytical errors in diagnostic testing - a novel concept.
Vogeser, Michael; Seger, Christoph
2018-02-23
In laboratory medicine, routine periodic analyses for internal and external quality control measurements interpreted by statistical methods are mandatory for batch clearance. Data analysis of these process-oriented measurements allows for insight into random analytical variation and systematic calibration bias over time. However, in such a setting, any individual sample is not under individual quality control. The quality control measurements act only at the batch level. Quantitative or qualitative data derived for many effects and interferences associated with an individual diagnostic sample can compromise any analyte. It is obvious that a process for a quality-control-sample-based approach of quality assurance is not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term called the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample an irregular analytical error is defined as an inaccuracy (which is the deviation from a reference measurement procedure result) of a test result that is so high it cannot be explained by measurement uncertainty of the utilized routine assay operating within the accepted limitations of the associated process quality control measurements. The deviation can be defined as the linear combination of the process measurement uncertainty and the method bias for the reference measurement system. Such errors should be coined irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or an individual single sample associated processing error in the analytical process. Currently, the availability of reference measurement procedures is still highly limited, but LC
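The definition above amounts to a simple check: flag a result as an irregular analytical error when its deviation from the reference procedure exceeds what the routine assay's process uncertainty and known method bias can explain. The coverage factor `k` and the specific linear combination rule below are illustrative assumptions, not values prescribed by the authors.

```python
def is_irregular_analytical_error(result, reference, u_process, bias, k=3.0):
    """Flag an individual result whose deviation from the reference
    measurement procedure exceeds the expanded process uncertainty
    plus the known method bias (all quantities in the same unit)."""
    allowed = k * u_process + abs(bias)
    return abs(result - reference) > allowed

# deviation of 5 units vs. an allowance of 3*1.0 + 1.0 = 4 -> flagged
flagged = is_irregular_analytical_error(105.0, 100.0, u_process=1.0, bias=1.0)
```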
Stromal haze, myofibroblasts, and surface irregularity after PRK.
Netto, Marcelo V; Mohan, Rajiv R; Sinha, Sunilima; Sharma, Ajay; Dupps, William; Wilson, Steven E
2006-05-01
The aim of this study was to investigate the relationship between the level of stromal surface irregularity after photorefractive keratectomy (PRK) and myofibroblast generation along with the development of corneal haze. Variable levels of stromal surface irregularity were generated in rabbit corneas by positioning a fine mesh screen in the path of excimer laser during ablation for a variable percentage of the terminal pulses of the treatment for myopia that does not otherwise generate significant opacity. Ninety-six rabbits were divided into eight groups: [see table in text]. Slit lamp analysis and haze grading were performed in all groups. Rabbits were sacrificed at 4 hr or 4 weeks after surgery and histochemical analysis was performed on corneas for apoptosis (TUNEL assay), myofibroblast marker alpha-smooth muscle actin (SMA), and integrin alpha4 to delineate the epithelial basement membrane. Slit-lamp grading revealed severe haze formation in corneas in groups IV and VI, with significantly less haze in groups II, III, and VII and insignificant haze compared with the unwounded control in groups I and V. Analysis of SMA staining at 4 weeks after surgery, the approximate peak of haze formation in rabbits, revealed low myofibroblast formation in group I (1.2+/-0.2 cells/400x field) and group V (1.8+/-0.4), with significantly more in groups II (3.5+/-1.8), III (6.8+/-1.6), VII (7.9+/-3.8), IV (12.4+/-4.2) and VI (14.6+/-5.1). The screened groups were significantly different from each other (p PRK groups. The -9.0 diopter PRK group VI had significantly more myofibroblast generation than the -9.0 diopter PRK with PTK-smoothing group VII (p PRK and the level of stromal surface irregularity. PTK-smoothing with methylcellulose was an effective method to reduce stromal surface irregularity and decreased both haze and associated myofibroblast density. We hypothesize that stromal surface irregularity after PRK for high myopia results in defective basement membrane
DEFF Research Database (Denmark)
Bonnesen, Barbara; Oddgeirsdóttir, Hanna L; Naver, Klara Vinsand
2016-01-01
INTRODUCTION: Very few studies describe the obstetric and neonatal outcome of spontaneous pregnancies in women with irregular menstrual cycles. However, menstrual cycle irregularities are common and may be associated with increased risk, and women who develop pregnancy complications more frequent...
Study of Track Irregularity Time Series Calibration and Variation Pattern at Unit Section
Directory of Open Access Journals (Sweden)
Chaolong Jia
2014-01-01
Full Text Available Focusing on problems existing in track irregularity time series data quality, this paper first presents algorithms for abnormal data identification, data offset correction, local outlier data identification, and noise cancellation. It then proposes track irregularity time series decomposition and reconstruction through a wavelet decomposition and reconstruction approach. Finally, the patterns and features of the track irregularity standard deviation data sequence in unit sections are studied, and the changing trend of the track irregularity time series is discovered and described.
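The wavelet step can be illustrated with a single-level Haar transform: decompose the series, suppress small detail coefficients (treated as noise), and reconstruct. This pure-Python sketch stands in for the paper's (unspecified) wavelet basis and thresholding rule.

```python
def haar_dwt(x):
    """One level of the Haar wavelet transform (len(x) must be even)."""
    s = 2 ** -0.5
    approx = [(a + b) * s for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) * s for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt: perfect reconstruction."""
    s = 2 ** -0.5
    x = []
    for a, d in zip(approx, detail):
        x.extend([(a + d) * s, (a - d) * s])
    return x

def denoise(x, thresh):
    """Zero out small detail coefficients (noise), then reconstruct."""
    approx, detail = haar_dwt(x)
    detail = [0.0 if abs(d) < thresh else d for d in detail]
    return haar_idwt(approx, detail)

track = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]  # toy irregularity series
```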
International Nuclear Information System (INIS)
Altomare, S.; Minton, G.
1975-02-01
PANDA is a new two-group one-dimensional (slab/cylinder) neutron diffusion code designed to replace and extend the FAB series. PANDA allows for the nonlinear effects of xenon, enthalpy and Doppler. Fuel depletion is allowed. PANDA has a completely general search facility which will seek criticality, maximize reactivity, or minimize peaking. Any single parameter may be varied in a search. PANDA is written in FORTRAN IV, and as such is nearly machine independent. However, PANDA has been written with the present limitations of the Westinghouse CDC-6600 system in mind. Most computation loops are very short, and the code is less than half the useful 6600 memory size so that two jobs can reside in the core at once. (auth)
International Nuclear Information System (INIS)
Gara, P.; Martin, E.
1983-01-01
The CANAL code presented here optimizes a realistic iron free extraction channel which has to provide a given transversal magnetic field law in the median plane: the current bars may be curved, have finite lengths and cooling ducts and move in a restricted transversal area; terminal connectors may be added, images of the bars in pole pieces may be included. A special option optimizes a real set of circular coils [fr
Directory of Open Access Journals (Sweden)
Valenzise G
2009-01-01
Full Text Available In the past few years, a large number of techniques have been proposed to identify whether a multimedia content has been illegally tampered with or not. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, especially due to the large amount of data required for this task. We propose a novel hashing scheme which exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges from 20% to 70%.
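At its core, such a hash is a set of pseudo-random projections of the audio features whose signs yield compact bits; matching content then gives a small Hamming distance. The sketch below shows only the projection-and-sign step (the LDPC syndrome coding and the time-frequency feature extraction are omitted, and the seed-based projections are an assumption standing in for a shared key).

```python
import random

def projection_hash(features, n_bits, seed):
    """Sign bits of Gaussian random projections of a feature vector."""
    rng = random.Random(seed)  # shared seed plays the role of a common key
    bits = []
    for _ in range(n_bits):
        proj = sum(rng.gauss(0.0, 1.0) * f for f in features)
        bits.append(1 if proj >= 0 else 0)
    return bits

def hamming(a, b):
    """Number of differing bit positions between two hashes."""
    return sum(x != y for x, y in zip(a, b))

original = [0.8, -0.1, 0.4, 0.3, -0.6, 0.2]   # toy feature vector
tampered = [-f for f in original]             # gross tampering
h_orig = projection_hash(original, 64, seed=7)
h_tamp = projection_hash(tampered, 64, seed=7)
```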
Performance and Complexity Evaluation of Iterative Receiver for Coded MIMO-OFDM Systems
Directory of Open Access Journals (Sweden)
Rida El Chall
2016-01-01
Full Text Available Multiple-input multiple-output (MIMO technology in combination with channel coding technique is a promising solution for reliable high data rate transmission in future wireless communication systems. However, these technologies pose significant challenges for the design of an iterative receiver. In this paper, an efficient receiver combining soft-input soft-output (SISO detection based on low-complexity K-Best (LC-K-Best decoder with various forward error correction codes, namely, LTE turbo decoder and LDPC decoder, is investigated. We first investigate the convergence behaviors of the iterative MIMO receivers to determine the required inner and outer iterations. Consequently, the performance of LC-K-Best based receiver is evaluated in various LTE channel environments and compared with other MIMO detection schemes. Moreover, the computational complexity of the iterative receiver with different channel coding techniques is evaluated and compared with different modulation orders and coding rates. Simulation results show that LC-K-Best based receiver achieves satisfactory performance-complexity trade-offs.
Matsui, Chihiro; Kinoshita, Reika; Takeuchi, Ken
2018-04-01
A hybrid of storage class memory (SCM) and NAND flash is a promising technology for high-performance storage. Error correction is inevitable on SCM and NAND flash because their bit error rate (BER) increases with write/erase (W/E) cycles, data retention, and program/read disturb. In addition, scaling and multi-level cell technologies increase BER. However, error-correcting code (ECC) degrades storage performance because of extra memory reading and encoding/decoding time. Therefore, the applicable ECC strength of SCM and NAND flash is evaluated independently by fixing the ECC strength of one memory in the hybrid storage. As a result, weak BCH ECC with a small number of correctable bits is recommended for the hybrid storage with large SCM capacity because SCM is accessed frequently. In contrast, strong, long-latency LDPC ECC can be applied to NAND flash in the hybrid storage with large SCM capacity because the large-capacity SCM improves the storage performance.
Kilometer-Spaced GNSS Array for Ionospheric Irregularity Monitoring
Su, Yang
This dissertation presents automated, systematic data collection, processing, and analysis methods for studying the spatial-temporal properties of Global Navigation Satellite Systems (GNSS) scintillations produced by ionospheric irregularities at high latitudes using a closely spaced multi-receiver array deployed in the northern auroral zone. The main contributions include 1) automated scintillation monitoring, 2) estimation of drift and anisotropy of the irregularities, 3) error analysis of the drift estimates, and 4) multi-instrument study of the ionosphere. A radio wave propagating through the ionosphere, consisting of ionized plasma, may suffer from rapid signal amplitude and/or phase fluctuations known as scintillation. Caused by non-uniform structures in the ionosphere, intense scintillation can lead to GNSS navigation and high-frequency (HF) communication failures. With specialized GNSS receivers, scintillation can be studied to better understand the structure and dynamics of the ionospheric irregularities, which can be parameterized by altitude, drift motion, anisotropy of the shape, horizontal spatial extent and their time evolution. To study the structuring and motion of ionospheric irregularities at the sub-kilometer scale sizes that produce L-band scintillations, a closely-spaced GNSS array has been established in the auroral zone at Poker Flat Research Range, Alaska to investigate high latitude scintillation and irregularities. Routinely collecting low-rate scintillation statistics, the array database also provides 100 Hz power and phase data for each channel at L1/L2C frequency. In this work, a survey of seasonal and hourly dependence of L1 scintillation events over the course of a year is discussed. To efficiently and systematically study scintillation events, an automated low-rate scintillation detection routine is established and performed for each day by screening the phase scintillation index. The spaced-receiver technique is applied to cross
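The screening step described above reduces, in essence, to computing a phase scintillation index (the standard deviation of detrended carrier phase over a window) and flagging windows that exceed a threshold. The window length, the crude mean-removal detrend, and the 0.2 rad threshold below are illustrative assumptions, not the dissertation's exact routine.

```python
import math

def sigma_phi(phase, window):
    """Phase scintillation index per window: standard deviation of the
    carrier phase after removing the window mean (a crude detrend)."""
    out = []
    for i in range(0, len(phase) - window + 1, window):
        seg = phase[i:i + window]
        mean = sum(seg) / window
        out.append(math.sqrt(sum((p - mean) ** 2 for p in seg) / window))
    return out

def detect_events(indices, threshold=0.2):
    """Indices of windows whose sigma_phi exceeds the screening threshold."""
    return [i for i, s in enumerate(indices) if s > threshold]

# 10 quiet samples followed by 10 strongly fluctuating samples (radians)
phase = [0.0] * 10 + [0.5, -0.5] * 5
idx = sigma_phi(phase, window=10)
```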
Similarity estimators for irregular and age uncertain time series
Rehfeld, K.; Kurths, J.
2013-09-01
Paleoclimate time series are often irregularly sampled and age uncertain, which is an important technical challenge to overcome for successful reconstruction of past climate variability and dynamics. Visual comparison and interpolation-based linear correlation approaches have been used to infer dependencies from such proxy time series. While the first is subjective, not measurable and not suitable for the comparison of many datasets at a time, the latter introduces interpolation bias, and both face difficulties if the underlying dependencies are nonlinear. In this paper we investigate similarity estimators that could be suitable for the quantitative investigation of dependencies in irregular and age uncertain time series. We compare the Gaussian-kernel based cross correlation (gXCF, Rehfeld et al., 2011) and mutual information (gMI, Rehfeld et al., 2013) against their interpolation-based counterparts and the new event synchronization function (ESF). We test the efficiency of the methods in estimating coupling strength and coupling lag numerically, using ensembles of synthetic stalagmites with short, autocorrelated, linear and nonlinearly coupled proxy time series, and in the application to real stalagmite time series. In the linear test case coupling strength increases are identified consistently for all estimators, while in the nonlinear test case the correlation-based approaches fail. The lag at which the time series are coupled is identified correctly as the maximum of the similarity functions in around 60-55% (in the linear case) to 53-42% (for the nonlinear processes) of the cases when the dating of the synthetic stalagmite is perfectly precise. If the age uncertainty increases beyond 5% of the time series length, however, the true coupling lag is not identified more often than the others for which the similarity function was estimated. Age uncertainty contributes up to half of the uncertainty in the similarity estimation process. Time series irregularity
Similarity estimators for irregular and age-uncertain time series
Rehfeld, K.; Kurths, J.
2014-01-01
Paleoclimate time series are often irregularly sampled and age uncertain, which is an important technical challenge to overcome for successful reconstruction of past climate variability and dynamics. Visual comparison and interpolation-based linear correlation approaches have been used to infer dependencies from such proxy time series. While the first is subjective, not measurable and not suitable for the comparison of many data sets at a time, the latter introduces interpolation bias, and both face difficulties if the underlying dependencies are nonlinear. In this paper we investigate similarity estimators that could be suitable for the quantitative investigation of dependencies in irregular and age-uncertain time series. We compare the Gaussian-kernel-based cross-correlation (gXCF, Rehfeld et al., 2011) and mutual information (gMI, Rehfeld et al., 2013) against their interpolation-based counterparts and the new event synchronization function (ESF). We test the efficiency of the methods in estimating coupling strength and coupling lag numerically, using ensembles of synthetic stalagmites with short, autocorrelated, linear and nonlinearly coupled proxy time series, and in the application to real stalagmite time series. In the linear test case, coupling strength increases are identified consistently for all estimators, while in the nonlinear test case the correlation-based approaches fail. The lag at which the time series are coupled is identified correctly as the maximum of the similarity functions in around 60-55% (in the linear case) to 53-42% (for the nonlinear processes) of the cases when the dating of the synthetic stalagmite is perfectly precise. If the age uncertainty increases beyond 5% of the time series length, however, the true coupling lag is not identified more often than the others for which the similarity function was estimated. Age uncertainty contributes up to half of the uncertainty in the similarity estimation process. Time series irregularity
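The Gaussian-kernel cross-correlation avoids interpolation by weighting every pair of observations with a kernel in the lag mismatch: pairs whose time difference is close to the requested lag dominate the estimate. A minimal sketch (both series are assumed already centred and scaled to unit variance; the kernel width `h` is a free parameter):

```python
import math

def gxcf(tx, x, ty, y, lag, h):
    """Gaussian-kernel cross-correlation at a given lag for two
    irregularly sampled series (centred, unit variance): each
    observation pair is weighted by how well its time difference
    matches the requested lag."""
    num = den = 0.0
    for ti, xi in zip(tx, x):
        for tj, yj in zip(ty, y):
            w = math.exp(-((tj - ti - lag) ** 2) / (2.0 * h * h))
            num += w * xi * yj
            den += w
    return num / den if den else 0.0

t = [0.0, 1.0, 2.0, 3.0]
x = [1.0, -1.0, 1.0, -1.0]
```

Scanning `lag` over a grid and taking the maximum of the resulting similarity function is how the coupling lag is identified in the comparison above.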
Ionospheric wave and irregularity measurements using passive radio astronomy techniques
International Nuclear Information System (INIS)
Erickson, W.C.; Mahoney, M.J.; Jacobson, A.R.; Knowles, S.H.
1988-01-01
The observation of midlatitude structures using passive radio astronomy techniques is discussed, with particular attention being given to the low-frequency radio telescope at the Clark Lake Radio Observatory. The present telescope operates in the 10-125-MHz frequency range. Observations of the ionosphere at separations of a few kilometers to a few hundreds of kilometers by the lines of sight to sources are possible, allowing the determination of the amplitude, wavelength, direction of propagation, and propagation speed of ionospheric waves. Data are considered on large-scale ionospheric gradients and the two-dimensional shapes and sizes of ionospheric irregularities. 10 references
From concatenated codes to graph codes
DEFF Research Database (Denmark)
Justesen, Jørn; Høholdt, Tom
2004-01-01
We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...
The Biosynthetic Origin of Irregular Monoterpenes in Lavandula
Demissie, Zerihun A.; Erland, Lauren A. E.; Rheault, Mark R.; Mahmoud, Soheil S.
2013-01-01
Lavender essential oils are constituted predominantly of regular monoterpenes, for example linalool, 1,8-cineole, and camphor. However, they also contain irregular monoterpenes including lavandulol and lavandulyl acetate. Although the majority of genes responsible for the production of regular monoterpenes in lavenders are now known, enzymes (including lavandulyl diphosphate synthase (LPPS)) catalyzing the biosynthesis of irregular monoterpenes in these plants have not been described. Here, we report the isolation and functional characterization of a novel cis-prenyl diphosphate synthase cDNA, termed Lavandula x intermedia lavandulyl diphosphate synthase (LiLPPS), through a homology-based cloning strategy. The LiLPPS ORF, encoding a 305-amino acid protein, was expressed in Escherichia coli, and the recombinant protein was purified by nickel-nitrilotriacetic acid affinity chromatography. The approximately 34.5-kDa bacterially produced protein specifically catalyzed the head-to-middle condensation of two dimethylallyl diphosphate units to LPP in vitro with apparent Km and kcat values of 208 ± 12 μM and 0.1 s−1, respectively. LiLPPS is a homodimeric enzyme with a sigmoidal saturation curve and a Hill coefficient of 2.7, suggesting a positive cooperative interaction among its catalytic sites. LiLPPS could be used to modulate the production of lavandulol and its derivatives in plants through metabolic engineering. PMID:23306202
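The sigmoidal saturation reported above is conventionally modelled with the Hill equation, under which the rate reaches half its maximum exactly at the half-saturation concentration. A quick numerical check using the abstract's values (the Hill-equation form itself is a standard assumption for positive cooperativity, not stated by the authors):

```python
def hill_rate(s, vmax, k_half, n):
    """Hill equation: reaction rate at substrate concentration s."""
    return vmax * s ** n / (k_half ** n + s ** n)

# LiLPPS-like parameters from the abstract: maximal turnover 0.1 s^-1,
# half-saturation at 208 uM, Hill coefficient 2.7
v_half = hill_rate(208.0, vmax=0.1, k_half=208.0, n=2.7)  # ~0.05 s^-1
```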
Size and Albedo of Irregular Saturnian Satellites from Spitzer Observations
Mueller, Michael; Grav, T.; Trilling, D.; Stansberry, J.; Sykes, M.
2008-09-01
Using MIPS onboard the Spitzer Space Telescope, we observed the thermal emission (24 and, for some targets, 70 um) of eight irregular satellites of Saturn: Albiorix, Siarnaq, Paaliaq, Kiviuq, Ijiraq, Tarvos, Erriapus, and Ymir. We determined the size and albedo of all targets. An analysis of archived MIPS observations of Phoebe reproduces Cassini results very accurately, thereby validating our method. For all targets, the geometric albedo is found to be low, probably below 10% and clearly below 15%. Irregular satellites are much darker than the large regular satellites. Their albedo is, however, quite similar to that of small bodies in the outer Solar System (such as cometary nuclei, Jupiter Trojans, or TNOs). This is consistent with color measurements as well as dynamical considerations which suggest a common origin of the said populations. There appear to be significant object-to-object albedo differences. Similar albedos found for some members of dynamical clusters support the idea that they may have originated in the breakup of a parent body. For three satellites, thermal data at two wavelengths are available, enabling us to constrain their thermal properties. Sub-solar temperatures are similar to that found from Cassini's Phoebe fly-by. This suggests a rather low thermal inertia, as expected for regolith-covered objects. This work is based on observations made with the Spitzer Space Telescope, which is operated by JPL under a contract with NASA. Support for this work was provided by NASA.
Non-storm irregular variation of the Dst index
Directory of Open Access Journals (Sweden)
S. Nakano
2012-01-01
Full Text Available The Dst index has a long-term variation that is not associated with magnetic storms. We estimated the long-term non-storm component of the Dst variation by removing the short-term variation related to magnetic storms. The results indicate that the variation of the non-storm component includes not only a seasonal variation but also an irregular variation. The irregular long-term variation is likely to be due to an anti-correlation with the long-term variation of solar-wind activity. In particular, a clear anti-correlation is observed between the non-storm component of Dst and the long-term variation of the solar-wind dynamic pressure. This means that in the long term, the Dst index tends to increase when the solar-wind dynamic pressure decreases. We interpret this anti-correlation as an indication that the long-term non-storm variation of Dst is influenced by the tail current variation. The long-term variation of the solar-wind dynamic pressure controls the plasma sheet thermal pressure, and the change of the plasma sheet thermal pressure would cause the non-storm tail current variation, resulting in the non-storm variation of Dst.
Irregular working hours and fatigue of cabin crew.
Castro, Marta; Carvalhais, José; Teles, Júlia
2015-01-01
Beyond workload and specific environmental factors, flight attendants can be exposed to irregular working hours, conflicting with their circadian rhythms and having a negative impact on sleep, fatigue, health, social and family life, and performance, which is critical to both safety and security in flight operations. This study focuses on the irregular schedules of cabin crew as a trigger of fatigue symptoms in a wet-lease Portuguese airline. The aim was to analyze: the requirements of cabin crew work; whether the observed schedules and effective resting timeouts are triggering factors of fatigue; and the existence of fatigue symptoms in the cabin crew. A questionnaire was adapted and applied to a sample of 73 cabin crew members (representing 61.9% of the population), 39 females and 34 males, with an average age of 27.68 ± 4.27 years. Our data indicate the presence of fatigue and corresponding health symptoms among the airline cabin crew, despite the favourable characteristics of the sample. Senior workers and women are more affected. Countermeasures are required. Recommendations can be made regarding fatigue risk management, including work organization, education and awareness training programmes, and specific countermeasures.
Model tracking dual stochastic controller design under irregular internal noises
International Nuclear Information System (INIS)
Lee, Jong Bok; Heo, Hoon; Cho, Yun Hyun; Ji, Tae Young
2006-01-01
Although many methods for the control of irregular external noise have been introduced and implemented, it is still necessary to design more effective and efficient methods of excluding various noises. Accumulation of errors due to model tracking, internal noises (thermal noise, shot noise and 1/f noise) originating in elements such as resistors, diodes and transistors in the circuit system, and numerical errors due to digital processing often destabilize the system and reduce system performance. A new stochastic controller is adopted to remove those noises while a conventional controller is used simultaneously. A design method for a model tracking dual controller is proposed to improve the stability of the system while removing external and internal noises. In this study, the design process of the model tracking dual stochastic controller is introduced; it improves system performance and guarantees robustness under irregular internal noises which can be created internally. The model tracking dual stochastic controller utilizing the F-P-K stochastic control technique developed earlier is implemented to reveal its performance via simulation
Using Radio Irregularity for Increasing Residential Energy Awareness
Directory of Open Access Journals (Sweden)
A. Miljković
2012-06-01
Full Text Available The radio irregularity phenomenon is often considered a shortcoming of wireless networks. In this paper, a method of using radio irregularity as an efficient human presence detection sensor in smart homes is presented. The method is based on monitoring variations of the received signal strength indicator (RSSI) within the messages used for communication between wireless smart power outlets. The radio signals used for inter-outlet communication can be absorbed, diffracted or reflected by objects in their propagation paths. When a human enters the existing radio communication field, the variation of the signal strength at the receiver becomes even more pronounced. Based on the detected changes, compared to the initial thresholds set during the initialization phase, the system detects human presence. The proposed solution increases user awareness and automates power control in households, with the primary goal of contributing to residential energy savings. Compared to conventional sensor networks, this approach preserves sensorial intelligence, simplicity and low installation costs, without the need for additional sensor integration.
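The detection scheme the abstract describes, calibrating a threshold from idle RSSI variation and then flagging larger swings as presence, can be sketched as follows. This is a hypothetical illustration; the function names, the k-sigma threshold rule, and the RSSI values are assumptions, not details from the paper.

```python
import statistics

# Minimal sketch of RSSI-based presence detection: an initialization phase
# sets a threshold from the idle signal's spread; later windows with larger
# deviations from the baseline mean are flagged as human presence.

def calibrate(baseline_rssi, k=3.0):
    """Threshold from the idle-phase spread (assumed k-sigma rule)."""
    return statistics.pstdev(baseline_rssi) * k

def presence(window_rssi, threshold, baseline_mean):
    """Flag presence when any sample deviates beyond the threshold."""
    dev = max(abs(r - baseline_r) for r in window_rssi
              for baseline_r in [baseline_mean])
    return dev > threshold

baseline = [-61, -60, -61, -62, -60, -61]   # idle RSSI samples (dBm)
thr = calibrate(baseline)
m = statistics.mean(baseline)
print(presence([-60, -55, -48, -52], thr, m))   # large swing -> True
```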
Colombia: la guerra irregular en el fin de siglo
Directory of Open Access Journals (Sweden)
Alfredo RANGEL SUÁREZ
2009-11-01
Full Text Available ABSTRACT: This article examines the transformations of the irregular war of the Colombian guerrillas over recent decades. It studies the changes in several factors, such as the strategic goals, the financial and military resources, the relationships maintained with the traditional political parties at the local level, the evolution of the ideological parameters, and the social origin of the members. From this perspective, the author analyses the temporal and political calculation that the guerrilla makes at this juncture, and its consequences for the peace process.
Nonadiabatic two-electron transfer mediated by an irregular bridge
International Nuclear Information System (INIS)
Petrov, E.G.; Shevchenko, Ye.V.; May, V.
2004-01-01
Nonadiabatic two-electron transfer (TET) mediated by a linear molecular bridge is studied theoretically. Special attention is paid to the case of an irregular distribution of bridge site energies as well as to the inter-site Coulomb interaction. Based on the unified description of electron transfer reactions [J. Chem. Phys. 115 (2001) 7107], a closed set of kinetic equations describing the TET process is derived. A reduction of this set to a single-exponential donor-acceptor (D-A) TET is performed, together with a derivation of an overall D-A TET rate. The latter contains contributions from the stepwise as well as the concerted route of D-A TET. The stepwise contribution is determined by two single-electron steps, each associated with a sequential and a superexchange pathway. A two-electron unistep superexchange transition between the D and A forms the concerted contribution to the overall rate. Both contributions are analyzed in their dependence on the bridge length. The irregular distribution of the bridge site energies as well as the influence of the Coulomb interaction facilitates the D-A TET via a modification of the stepwise and the concerted parts of the overall rate. At low temperatures and for short bridges with one or two units, the concerted contribution exceeds the stepwise contribution; if the bridge contains more than two units, the stepwise contribution dominates the overall rate.
Designing Next Generation Massively Multithreaded Architectures for Irregular Applications
Energy Technology Data Exchange (ETDEWEB)
Tumeo, Antonino; Secchi, Simone; Villa, Oreste
2012-08-31
Irregular applications, such as data mining or graph-based computations, show unpredictable memory/network access patterns and control structures. Massively multi-threaded architectures with large node count, like the Cray XMT, have been shown to address their requirements better than commodity clusters. In this paper we present the approaches that we are currently pursuing to design future generations of these architectures. First, we introduce the Cray XMT and compare it to other multithreaded architectures. We then propose an evolution of the architecture, integrating multiple cores per node and next generation network interconnect. We advocate the use of hardware support for remote memory reference aggregation to optimize network utilization. For this evaluation we developed a highly parallel, custom simulation infrastructure for multi-threaded systems. Our simulator executes unmodified XMT binaries with very large datasets, capturing effects due to contention and hot-spotting, while predicting execution times with greater than 90% accuracy. We also discuss the FPGA prototyping approach that we are employing to study efficient support for irregular applications in next generation manycore processors.
Christ, Jacob P; Falcone, Tommaso
2018-03-02
To characterize the impact of bariatric surgery on reproductive and metabolic features common to polycystic ovary syndrome (PCOS) and to assess the relevance of preoperative evaluations in predicting the likelihood of benefit from surgery. A retrospective chart review of records from 930 women who had undergone bariatric surgery at the Cleveland Clinic Foundation from 2009 to 2014 was completed. Cases of PCOS were identified from ICD coding, and healthy women with pelvic ultrasound evaluations were identified using Healthcare Common Procedure Coding System coding. Pre- and postoperative anthropometric evaluations, menstrual cyclicity, ovarian volume (OV), as well as markers of hyperandrogenism, dyslipidemia, and dysglycemia were evaluated. Forty-four women with PCOS and 65 controls were evaluated. Both PCOS and non-PCOS groups had significant reductions in body mass index (BMI) and markers of dyslipidemia postoperatively; women with PCOS additionally had significant reductions in androgen levels and in irregular menses. In PCOS, independent of preoperative BMI and age, preoperative OV was associated with change in hemoglobin A1c (β (95% confidence interval) 0.202 (0.011-0.393), p = 0.04) and change in triglycerides (6.681 (1.028-12.334), p = 0.03), and preoperative free testosterone was associated with change in total cholesterol (3.744 (0.906-6.583), p = 0.02) and change in non-HDL-C (3.125 (0.453-5.796), p = 0.03). Bariatric surgery improves key diagnostic features seen in women with PCOS, and ovarian volume and free testosterone may have utility in predicting the likelihood of metabolic benefit from surgery.
Compensation for unfavorable characteristics of irregular individual shift rotas.
Knauth, Peter; Jung, Detlev; Bopp, Winfried; Gauderer, Patric C; Gissel, Andreas
2006-01-01
Some employees of TV companies, such as those who produce remote TV programs, have to cope with very irregular rotas and many short-term schedule deviations. Many of these employees complain about the negative effects of such schedules on their well-being and private life. Therefore, a working group of employers, council representatives, and researchers developed a so-called bonus system. Based on the criteria of the BESIAK system, the following list of criteria for the ergonomic assessment of irregular shift systems was developed: proportion of night hours worked between 22:00 and 01:00 h and between 06:00 and 07:00 h, proportion of night hours worked between 01:00 and 06:00 h, number of successive night shifts, number of successive working days, number of shifts longer than 9 h, proportion of phase advances, off hours on weekends, work hours between 17:00 and 23:00 h from Monday to Friday, number of working days with leisure time at remote places, and sudden deviations from the planned shift rota. Each individual rota was evaluated in retrospect. If pre-defined thresholds of criteria were surpassed, bonus points were added to the worker's account. In general, more bonus points add up to more free time; only in particular cases was monetary compensation possible for some criteria. The bonus point system, implemented in 2002 for about 850 employees of the TV company, has the advantages of more transparency concerning the unfavorable characteristics of working-time arrangements, an incentive for superiors to design "good" rosters that avoid the bonus point thresholds (to reduce costs), positive short-term effects on the employees' social life, and expected positive long-term effects on the employees' health. In general, the most promising approach to coping with the problems of shift workers in irregular and flexible shift systems seems to be to increase their influence on the arrangement of working times; where this is not possible, bonus point systems offer an alternative.
Airway surface irregularities promote particle diffusion in the human lung
International Nuclear Information System (INIS)
Martonen, T.; North Carolina Univ., Chapel Hill, NC; Zhang, Z.; Yang, Y.; Bottei, G.
1995-01-01
Current NCRP and ICRP particle deposition models employed in risk assessment analyses treat the airways of the human lung as smooth-walled tubes. However, the upper airways of the tracheobronchial (TB) tree are lined with cartilaginous rings. Recent supercomputer simulations of in vivo conditions (cited herein), where cartilaginous ring morphologies were based upon fibre-optic bronchoscope examinations, have clearly demonstrated their profound effects upon fluid dynamics. A physiologically based analytical model of fluid dynamics is presented, focusing upon applications to particle diffusion within the TB tree. The new model is the first to describe particle motion while simultaneously simulating the effects of wall irregularities, entrance conditions and tube curvatures. This study may explain the enhanced deposition by particle diffusion detected in replica cast experiments and has salient implications for the clinically observed preferential distributions of bronchogenic carcinomas associated with inhaled radionuclides. (author)
Massive stars in the Sagittarius Dwarf Irregular Galaxy
Garcia, Miriam
2018-02-01
Low metallicity massive stars hold the key to interpret numerous processes in the past Universe including re-ionization, starburst galaxies, high-redshift supernovae, and γ-ray bursts. The Sagittarius Dwarf Irregular Galaxy [SagDIG, 12+log(O/H) = 7.37] represents an important landmark in the quest for analogues accessible with 10-m class telescopes. This Letter presents low-resolution spectroscopy executed with the Gran Telescopio Canarias that confirms that SagDIG hosts massive stars. The observations unveiled three OBA-type stars and one red supergiant candidate. Pending confirmation from high-resolution follow-up studies, these could be the most metal-poor massive stars of the Local Group.
Evaluation of Surface Slope Irregularity in Linear Parabolic Solar Collectors
Directory of Open Access Journals (Sweden)
F. Francini
2012-01-01
Full Text Available The paper describes a methodology, very simple in its application, for measuring surface irregularities of linear parabolic collectors. This technique was principally developed to be applied in cases where it is difficult to use cumbersome instruments and to facilitate logistic management. The instruments to be employed are a digital camera and a grating. If the reflector surface is defective, the image of the grating, reflected on the solar collector, appears distorted. Analyzing the reflected image, we can obtain the local slope of the defective surface. These profilometric tests are useful to identify and monitor the mirror portions under mechanical stress and to estimate the losses caused by the light rays deflected outside the absorber.
Constructing a logical, regular axis topology from an irregular topology
Faraj, Daniel A.
2014-07-01
Constructing a logical regular topology from an irregular topology including, for each axial dimension and recursively, for each compute node in a subcommunicator until returning to a first node: adding to a logical line of the axial dimension a neighbor specified in a nearest neighbor list; calling the added compute node; determining, by the called node, whether any neighbor in the node's nearest neighbor list is available to add to the logical line; if a neighbor in the called compute node's nearest neighbor list is available to add to the logical line, adding, by the called compute node to the logical line, any neighbor in the called compute node's nearest neighbor list for the axial dimension not already added to the logical line; and, if no neighbor in the called compute node's nearest neighbor list is available to add to the logical line, returning to the calling compute node.
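The recursive line-building procedure this record describes can be sketched in a few lines. This is a hedged illustration only; the function name, the dictionary representation of nearest-neighbor lists, and the sample topology are invented for the sketch and are not the patented implementation.

```python
# Hypothetical sketch of the line-building step for one axial dimension.
# `neighbors` maps each compute node to its nearest-neighbor list.

def build_logical_line(start, neighbors):
    """Grow a logical line: each added node is 'called' and contributes any
    neighbor from its own list not yet in the line; when none is available,
    control returns to the calling node."""
    line = [start]

    def extend(node):
        for nb in neighbors.get(node, []):
            if nb not in line:
                line.append(nb)     # add the available neighbor
                extend(nb)          # call the added node recursively
        # no neighbor available -> return to the caller

    extend(start)
    return line

topology = {0: [3], 3: [1], 1: [2], 2: []}   # an irregular neighbor map
print(build_logical_line(0, topology))        # -> [0, 3, 1, 2]
```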
Irregularities of ionospheric VTEC during lightning activity over Antarctic Peninsula
International Nuclear Information System (INIS)
Suparta, W; Wan Mohd Nor, W N A
2017-01-01
This paper investigates the irregularities of vertical total electron content (VTEC) during lightning activity and geomagnetically quiet days over the Antarctic Peninsula in 2014. During a lightning event, the ionosphere may be disturbed, which may cause disruption of radio signals. Thus, it is important to understand the influence of lightning on VTEC in the study of upper-lower atmosphere interaction. The lightning data were obtained from the World Wide Lightning Location Network (WWLLN), and the VTEC data were derived from Global Positioning System (GPS) observations for O’Higgins (OHI3), Palmer (PALV), and Rothera (ROTH). The results demonstrate a VTEC variation of ∼0.2 TECU during low lightning activity, which could be caused by energy dissipation through lightning discharges from the troposphere into the thermosphere. (paper)
Global scale ionospheric irregularities associated with thunderstorm activity
International Nuclear Information System (INIS)
Pulinets, Sergey A.; Depuev, Victor H.
2003-01-01
A potential difference of nearly 280 kV exists between the ground and the ionosphere. This potential difference is generated by thunderstorm discharges all over the world, and the return current closes the circuit in areas of fair weather (the so-called fair-weather current). Model calculations and experimental measurements clearly demonstrate a non-uniform latitude-longitude distribution of the electric field within the atmosphere. Recent calculations show that the strong large-scale vertical atmospheric electric field can penetrate into the ionosphere and create large-scale irregularities of the electron concentration. To check this, the global distributions of thunderstorm activity obtained with satellite monitoring for different seasons were compared with the global distributions of the ionospheric critical frequency (equivalent to peak electron concentration) obtained with the help of satellite topside sounding. The similarity of the obtained global distributions clearly demonstrates the effects of thunderstorm electric fields on the Earth's ionosphere. (author)
Asymmetry and irregularity border as discrimination factor between melanocytic lesions
Sbrissa, David; Pratavieira, Sebastião.; Salvio, Ana Gabriela; Kurachi, Cristina; Bagnato, Vanderlei Salvadori; Costa, Luciano Da Fontoura; Travieso, Gonzalo
2015-06-01
Image processing tools have been widely used in systems supporting medical diagnosis. The use of mobile devices for the diagnosis of melanoma can assist doctors and improve the diagnosis of melanocytic lesions. This study proposes an image analysis method for discriminating melanoma from other types of melanocytic lesions, such as regular and atypical nevi. The process is based on extracting features related to asymmetry and border irregularity. A total of 104 images, collected over two years from a medical database, were used. The images were obtained with standard digital cameras without lighting or scale control. Metrics relating to shape, asymmetry and contour curvature were extracted from the segmented images. Linear Discriminant Analysis was performed for dimensionality reduction and data visualization. Segmentation showed good efficiency, with approximately 88.5% accuracy. Validation results present sensitivity and specificity of 85% and 70% for melanoma detection, respectively.
Manufacturing of Cast Metal Foams with Irregular Cell Structure
Directory of Open Access Journals (Sweden)
Kroupová I.
2015-06-01
Full Text Available Metallic foams are materials whose research is still ongoing, with broad applicability in many different areas (e.g. the automotive industry, the building industry, medicine, etc.). These metallic materials have specific properties, such as high rigidity at low density, high thermal conductivity, and the capability to absorb energy. The work is focused on the preparation of these materials using conventional casting technology (the infiltration method), which provides a rapid and economically feasible route to shaped components. In the experimental part we studied the casting conditions of metallic foams with open pores and an irregular cell structure, made of ferrous and nonferrous alloys, using various types of filler material (precursors).
Method for hot pressing irregularly shaped refractory articles
Steinkamp, William E.; Ballard, Ambrose H.
1982-01-01
The present invention is directed to a method for hot pressing irregularly shaped refractory articles of varying thickness so as to provide them with high, uniform density and dimensional accuracy. Two partially pressed compacts of the refractory material are placed in a die cavity between displaceable die punches having compact-contacting surfaces of the desired article configuration. A floating, rotatable block is disposed between the compacts. The displacement of the die punches towards one another causes the block to rotate about an axis normal to the direction of movement of the die punches to uniformly distribute the pressure loading upon the compacts, maintaining substantially equal volume displacement of the powder material during the hot pressing operation.
Irregular Migration - between legal status and social practices
DEFF Research Database (Denmark)
Lund Thomsen, Trine
2012-01-01
Irregular Migration – Between Legal Status and Social Practices. Narratives of Polish migrants are connected to the specific area of activity and to the accumulated capital of the individual. The aim is to identify how opportunity structures affect the migration process and how migrants react to them depending on the available capital and biographical knowledge and experiences.
New Computational Approach to Electron Transport in Irregular Graphene Nanostructures
Mason, Douglas; Heller, Eric; Prendergast, David; Neaton, Jeffrey
2009-03-01
For novel graphene devices of nanoscale-to-macroscopic scale, many aspects of their transport properties are not easily understood due to difficulties in fabricating devices with regular edges. Here we develop a framework to efficiently calculate and potentially screen electronic transport properties of arbitrary nanoscale graphene device structures. A generalization of the established recursive Green's function method is presented, providing access to arbitrary device and lead geometries with substantial computer-time savings. Using single-orbital nearest-neighbor tight-binding models and the Green's function-Landauer scattering formalism, we will explore the transmission function of irregular two-dimensional graphene-based nanostructures with arbitrary lead orientation. Prepared by LBNL under contract DE-AC02-05CH11231 and supported by the U.S. Dept. of Energy Computer Science Graduate Fellowship under grant DE-FG02-97ER25308.
Kriging for interpolation of sparse and irregularly distributed geologic data
Energy Technology Data Exchange (ETDEWEB)
Campbell, K.
1986-12-31
For many geologic problems, subsurface observations are available only from a small number of irregularly distributed locations, for example from a handful of drill holes in the region of interest. These observations will be interpolated one way or another, for example by hand-drawn stratigraphic cross-sections, by trend-fitting techniques, or by simple averaging which ignores spatial correlation. In this paper we consider an interpolation technique for such situations which provides, in addition to point estimates, the error estimates which are lacking from other ad hoc methods. The proposed estimator is like a kriging estimator in form, but because direct estimation of the spatial covariance function is not possible, the parameters of the estimator are selected by cross-validation. Its use in estimating subsurface stratigraphy at a candidate site for a geologic waste repository provides an example.
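The core idea of this record, a kriging-like weighted estimator whose parameter is picked by cross-validation rather than from a directly estimated covariance, can be sketched as follows. The exponential weight model, the range-parameter grid, and the sample data are illustrative assumptions, not the paper's actual estimator.

```python
import math

# Sketch: leave-one-out cross-validation selects the range parameter r of a
# kriging-like weighted estimator when the covariance cannot be estimated
# from a handful of irregularly located observations.

def estimate(x0, pts, r):
    """Weighted average of observations (x, z), assumed exponential weights."""
    ws = [math.exp(-abs(x - x0) / r) for x, _ in pts]
    return sum(w * z for w, (_, z) in zip(ws, pts)) / sum(ws)

def loo_cv_error(pts, r):
    """Mean squared leave-one-out prediction error for range parameter r."""
    errs = [(z - estimate(x, pts[:i] + pts[i + 1:], r)) ** 2
            for i, (x, z) in enumerate(pts)]
    return sum(errs) / len(errs)

# Invented drill-hole locations and values along one coordinate:
holes = [(0.0, 10.0), (1.0, 12.0), (2.5, 11.0), (4.0, 15.0), (7.0, 14.0)]
best_r = min([0.5, 1.0, 2.0, 4.0], key=lambda r: loo_cv_error(holes, r))
```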
Towards intrinsic magnetism of graphene sheets with irregular zigzag edges.
Chen, Lianlian; Guo, Liwei; Li, Zhilin; Zhang, Han; Lin, Jingjing; Huang, Jiao; Jin, Shifeng; Chen, Xiaolong
2013-01-01
The magnetism of graphene has remained divergent and controversial due to the absence of reliable experimental results. Here we show the intrinsic magnetism of graphene edge states, revealed using unidirectionally aligned graphene sheets derived from completely carbonized SiC crystals. It is found that ferromagnetism, antiferromagnetism and diamagnetism, along with a probable superconductivity, exist in graphene with irregular zigzag edges. A phase diagram is constructed to show the evolution of the magnetism. The ferromagnetic-ordering Curie temperature of the fundamental magnetic order unit (FMOU) is 820 ± 80 K. The antiferromagnetic-ordering Néel temperature of the FMOUs belonging to different sublattices is about 54 ± 2 K. The diamagnetism is similar to that of graphite and can be well described by Kotosonov's equation. Our experimental results provide new evidence to clarify the controversial experimental phenomena observed in graphene and contribute to a deeper insight into the nature of magnetism in graphene-based systems.
Combined radar observations of equatorial electrojet irregularities at Jicamarca
Directory of Open Access Journals (Sweden)
D. L. Hysell
2007-03-01
Full Text Available Daytime equatorial electrojet plasma irregularities were investigated using five distinct radar diagnostics at Jicamarca including range-time-intensity (RTI mapping, Faraday rotation, radar imaging, oblique scattering, and multiple-frequency scattering using the new AMISR prototype UHF radar. Data suggest the existence of plasma density striations separated by 3–5 km and propagating slowly downward. The striations may be caused by neutral atmospheric turbulence, and a possible scenario for their formation is discussed. The Doppler shifts of type 1 echoes observed at VHF and UHF frequencies are compared and interpreted in light of a model of Farley Buneman waves based on kinetic ions and fluid electrons with thermal effects included. Finally, the up-down and east-west asymmetries evident in the radar observations are described and quantified.
Comparison of different dose calculation methods for irregular photon fields
International Nuclear Information System (INIS)
Zakaria, G.A.; Schuette, W.
2000-01-01
In this work, four calculation methods (the Wrede method, the Clarkson method of sector integration, the beam-zone method of Quast, and the pencil-beam method of Ahnesjoe) are introduced to calculate point doses in different irregular photon fields. The calculations cover a typical mantle field, an inverted Y-field and different blocked fields for 4 and 10 MV photon energies. The results are compared to measurements in a water phantom. The Clarkson and pencil-beam methods proved to be of comparable accuracy; both are distinguished by minimal deviations and are applied in our clinical routine work. The Wrede and beam-zone methods deliver useful results on the central axis but show larger deviations for points off the central axis. (orig.) [de
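Of the four methods, Clarkson sector integration lends itself to a compact sketch: the irregular field boundary around the calculation point is divided into equal angular sectors, and each sector contributes the scatter of a circular field of its own radius. The scatter-air-ratio function below is an invented smooth placeholder standing in for measured tabulated data; it is for illustration only.

```python
import math

# Hedged sketch of Clarkson sector integration for an irregular field.

def sar(r_cm):
    """Placeholder scatter-air ratio vs. circular-field radius; real
    implementations interpolate measured SAR tables."""
    return 0.3 * (1.0 - math.exp(-r_cm / 10.0))

def clarkson_scatter(sector_radii_cm):
    """Average the circular-field SAR over equal angular sectors, each with
    the radius from the calculation point to the irregular field edge."""
    return sum(sar(r) for r in sector_radii_cm) / len(sector_radii_cm)

# Invented edge distances (cm) for 8 sectors of an irregular field:
radii = [4.0, 5.0, 6.0, 6.0, 5.0, 4.0, 3.5, 3.5]
avg_sar = clarkson_scatter(radii)
```

The averaged scatter term is then combined with the primary (zero-area) dose component to give the point dose; that step is omitted here.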
Experimental Study of Irregular Waves on a Gravel Beach
Hu, Nai-Ren; Wu, Yun-Ta; Hwung, Hwung-Hweng; Yang, Ray-Yeng
2017-04-01
On the east coast of Taiwan, the sediment grain size corresponds more to cobble or gravel, physically distinct from the sandy beaches of the west coast. Although gravel beaches can dissipate more wave energy, they have been eroded and coastal roads damaged, especially during typhoons. The purpose of this study is to investigate the geomorphological response of a gravel beach to irregular waves. The experiment was carried out in a 21 m long, 50 cm wide, 70 cm high wave tank at the Tainan Hydraulics Laboratory, National Cheng-Kung University, Taiwan. To simulate the geometry of the east coast of Taiwan, a physical model at 1/36 scale was used, in which a 10 cm seawall was built upon a 1:10 slope and gravel grains with D50 of 3.87 mm were nourished in front of the seawall. For typhoon-scale wave conditions, irregular waves with scaled-down parameters were generated for 600 s in each scenario, and three different water levels with respect to the gravel beach were designed. The resulting morphological change of the gravel beach is measured using an integrated laser and image processing tool to produce 3D topographic maps, from which erosion and accretion zones are identified. The study is expected to improve understanding of the conditions under which gravel coasts suffer the least damage; in particular, the relation between the erosion rate of the gravel beach, the angle of the gravel slope, and the length of the plane on the gravel slope will be examined.
Tamoxifen treatment of bleeding irregularities associated with Norplant use.
Abdel-Aleem, Hany; Shaaban, Omar M; Amin, Ahmed F; Abdel-Aleem, Aly M
2005-12-01
To evaluate the possible role of tamoxifen (a selective estrogen receptor modulator, SERM) in treating bleeding irregularities associated with Norplant contraceptive use. Randomized clinical trial including 100 Norplant users complaining of vaginal bleeding irregularities. The trial was conducted in the Family Planning Clinic of Assiut University Hospital. Women were assigned at random to receive tamoxifen tablets (10 mg) twice daily for 10 days or a similar placebo, and were followed up for 3 months. The end points were the percentage of women who stopped bleeding during treatment, bleeding/spotting days during the follow-up period, effect of treatment on lifestyle, and side effects and discontinuation of contraception. There was good compliance with treatment. At the end of treatment, a significantly higher percentage of tamoxifen users had stopped bleeding compared with the control group (88% vs. 68%; p=.016). Women who used tamoxifen had significantly fewer bleeding and/or spotting days than women who used placebo during the first and second months; during the third month, there were no significant differences between the two groups. Women who used tamoxifen reported improvement in performing household activities, religious duties and sexual life during the first 2 months; in the third month, there were no differences between the two groups. There were no significant differences between the tamoxifen and placebo groups in reported side effects. In the tamoxifen group, two women discontinued Norplant use because of bleeding, versus nine women in the placebo group. Tamoxifen at a dose of 10 mg twice daily orally, for 10 days, has a beneficial effect on vaginal bleeding associated with Norplant use, and the bleeding pattern remained better in the 2 months after treatment. However, these results have to be confirmed in a larger trial before advocating this line of treatment.
ESA's novel gravitational modeling of irregular planetary bodies
Ortega, Guillermo
A detailed understanding and modeling of the gravitational field is required for realistic investigation of the dynamics of orbits close to irregularly shaped bodies. Gravity field modelling up to a certain maximum spherical harmonic degree N involves N^2 unknown spherical harmonic coefficients or complex harmonics, and the corresponding number of matrix entries reaches N^4. For missions like CHAMP, GRACE or GOCE, the maximum degree of resolution is 75, 150 and 300, respectively; therefore, the number of unknowns for a satellite like GOCE will be around 100,000. Since these missions usually fly for several years, the number of observations is huge, and gravity field recovery from these missions is a highly demanding task. Classical approaches like spherical harmonic expansion of the potential generally lead to a high number of coefficients, which reduces the computational efficiency of orbit propagation software and which mostly have limited physical meaning. One of the main targets of the activity is the modelling of asteroids, small moons, and cometary bodies. All celestial bodies are irregular by definition; however, the scope of the activity is broad enough for the models and software to be used for quasi-regular bodies as well. Therefore the models and tools could be used for bodies such as the Moon, Mars, Venus, Deimos, Europa, Eros, Mathilde, and Churyumov-Gerasimenko, these applications being relevant for scientific (Rosetta, BepiColombo), exploration (ExoMars), NEO mitigation (Don Quijote) and Earth observation (GOCE) missions of ESA.
Volume determination of irregularly-shaped quasi-spherical nanoparticles.
Attota, Ravi Kiran; Liu, Eileen Cherry
2016-11-01
Nanoparticles (NPs) are widely used in diverse application areas, such as medicine, engineering, and cosmetics. The size (or volume) of NPs is one of the most important parameters for their successful application. It is relatively straightforward to determine the volume of regular NPs, such as spheres and cubes, from a one-dimensional or two-dimensional measurement. However, due to the three-dimensional nature of NPs, it is challenging to determine the proper physical size of many types of regularly and irregularly-shaped quasi-spherical NPs at high throughput using a single tool. Here, we present a relatively simple method that determines a better volume estimate of NPs by combining measurements of their top-down projection areas and peak heights using two tools. The proposed method is significantly faster and more economical than the electron tomography method. We demonstrate the improved accuracy of the combined method over scanning electron microscopy (SEM) or atomic force microscopy (AFM) alone by using modeling, simulations, and measurements. This study also exposes the existence of inherent measurement biases for both SEM and AFM, which usually produce larger measured diameters with SEM than with AFM. However, in some cases SEM-measured diameters appear to have less error than AFM-measured diameters, especially for widely used IS-NPs such as those of gold and silver. The method provides a much-needed high-throughput volumetric measurement method useful for many applications. Graphical Abstract: The combined method for volume determination of irregularly-shaped quasi-spherical nanoparticles.
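One plausible way to combine a top-down projection area (e.g. from SEM) with a peak height (e.g. from AFM) is to model the quasi-spherical particle as a spheroid; the spheroid model and function names below are assumptions for illustration, not the paper's actual estimator.

```python
import math

# Hedged sketch: equatorial radius from the projected area, polar semi-axis
# from the measured peak height; volume of the resulting spheroid.

def spheroid_volume(projected_area_nm2, peak_height_nm):
    a = math.sqrt(projected_area_nm2 / math.pi)  # equatorial radius from area
    c = peak_height_nm / 2.0                     # polar semi-axis from height
    return 4.0 / 3.0 * math.pi * a * a * c

def sphere_volume_from_area(projected_area_nm2):
    """Single-tool comparison: assume a sphere from the SEM area alone."""
    r = math.sqrt(projected_area_nm2 / math.pi)
    return 4.0 / 3.0 * math.pi * r ** 3
```

For a true sphere the two estimates agree; for a flattened particle the spheroid estimate uses the independently measured height and avoids the sphere assumption's overestimate.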
Reproducibility of irregular radiation fields for malignant lymphoma
International Nuclear Information System (INIS)
Mock, U.; Dieckmann, K.; Poetter, R.; Molitor, A.M.; Haverkamp, U.
1998-01-01
Purpose: Radiation treatment for malignant lymphoma requires large-field irradiation with irregular blocks according to the individual anatomy and tumor configuration. For determination of safety margins (PTV) we quantitatively analysed the accuracy of field and block placement with regard to different anatomical regions. Patients and Methods: Forty patients with malignant lymphoma were irradiated using the classical supra-/infradiaphragmatic field arrangements. Treatment was performed with 10-MeV photons and irregularly shaped, large opposing fields. We evaluated the accuracy of field and block placements during the treatment courses by comparing the regularly performed verification films with the simulation films. Deviations were determined with respect to the field edges and the central axis, along the x- and z-axis. Results: With regard to the field edges, mean deviations of 2.0 mm and 3.4 mm were found along the x- and z-axis. The corresponding standard deviations were 3.4 mm and 5.5 mm, respectively. With regard to the shielding blocks, mean displacement along the x- and z-axis was 2.2 mm and 3.8 mm. In addition, overall standard deviations of 5.7 mm (x-axis) and 7.1 mm (z-axis) were determined. During the course of time an improved accuracy of block placement was noted. Conclusion: Systematic analysis of port films provides information for better defining safety margins in external radiotherapy. Evaluation of verification films on a regular basis improves set-up accuracy by reducing displacements. (orig.)
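As an illustration only (not the margin recipe used by the authors), one simple way to turn such placement statistics into a safety margin is to add the systematic offset to a multiple of the random spread:

```python
def simple_margin(mean_dev_mm, sd_mm, k=2.0):
    """Illustrative margin: systematic offset plus k standard deviations.
    This is a generic textbook-style recipe, not the study's prescription."""
    return abs(mean_dev_mm) + k * sd_mm

# Field-edge deviations reported above, (mean, SD) per axis
print(simple_margin(2.0, 3.4))   # x-axis
print(simple_margin(3.4, 5.5))   # z-axis
```

With k = 2 this yields 8.8 mm along x and 14.4 mm along z, illustrating why the larger z-axis spread drives the margin.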
Automatic coding method of the ACR Code
International Nuclear Information System (INIS)
Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi
1993-01-01
The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary files consisted of 11 files, one for the organ code and the others for the pathology code. The organ code was obtained by typing the organ name or the code number itself, with the upper- and lower-level codes of the selected entry displayed simultaneously on the screen. According to the first number of the selected organ code, the corresponding pathology code file was chosen automatically. By a similar fashion of organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by the same program, and incorporation of this program into another data-processing program was possible. This program had the merits of simple operation, accurate and detailed coding, and easy adjustment for another program. Therefore, this program can be used for automation of routine work in the department of radiology.
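The two-step lookup described above can be sketched as follows; the dictionary entries here are invented placeholders (only the example code '131.3661' appears in the original record), not real ACR dictionary content.

```python
# Hypothetical organ dictionary and per-first-digit pathology dictionaries.
organ_codes = {"example organ": "131"}
pathology_files = {"1": {"example pathology": "3661"}}  # selected by first digit

def acr_code(organ, pathology):
    """Compose an ACR code as 'organ.pathology' via the two-step lookup."""
    organ_code = organ_codes[organ]
    pathology_file = pathology_files[organ_code[0]]  # first number picks the file
    return organ_code + "." + pathology_file[pathology]

print(acr_code("example organ", "example pathology"))  # -> 131.3661
```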
Hinds, Erold W. (Principal Investigator)
1996-01-01
This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
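For reference, the basic parameters of the RS(255,223) outer code mentioned above follow directly from its definition: a Reed-Solomon code with n = 255 and k = 223 corrects up to (n - k)/2 = 16 symbol errors at a code rate of about 0.875.

```python
def rs_params(n, k):
    """Error-correction capability and rate of an RS(n, k) code."""
    t = (n - k) // 2      # correctable symbol errors
    rate = k / n          # code rate
    return t, rate

t, rate = rs_params(255, 223)
print(t, round(rate, 3))  # -> 16 0.875
```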
Measurements of electron density irregularities in the ionosphere of Jupiter by Pioneer 10
International Nuclear Information System (INIS)
Woo, R.; Yang, F.
1976-01-01
In this paper we demonstrate that when the frequency spectrum of the log amplitude fluctuations is used, the radio occultation experiment is a powerful tool for detecting, identifying, and studying ionospheric irregularities. Analysis of the Pioneer 10 radio occultation measurements reveals that the Jovian ionosphere possesses electron density irregularities which are very similar to those found in the earth's ionosphere. This is the first time such irregularities have been found in a planetary ionosphere other than that of earth. The Pioneer 10 results indicate that the spatial wave number spectrum of the electron density irregularities is close to the Kolmogorov spectrum and that the outer scale size is greater than the Fresnel size (6.15 km). This type of spectrum suggests that the irregularities are probably produced by the turbulent dissipation of irregularities larger than the outer scale size
Gagie, Travis
2005-01-01
We present a new algorithm for dynamic prefix-free coding, based on Shannon coding. We give a simple analysis and prove a better upper bound on the length of the encoding produced than the corresponding bound for dynamic Huffman coding. We show how our algorithm can be modified for efficient length-restricted coding, alphabetic coding and coding with unequal letter costs.
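For context, the static Shannon code on which the dynamic variant builds can be sketched as follows: symbols are sorted by decreasing probability, each symbol gets length l_i = ceil(-log2 p_i), and its codeword is the first l_i bits of the binary expansion of the cumulative probability of the preceding symbols. (The paper's algorithm maintains such a code dynamically; this sketch is the static construction only.)

```python
import math

def shannon_code(probs):
    """Static Shannon code: codeword i is the first ceil(-log2 p_i) bits
    of the binary expansion of the cumulative probability F_i."""
    items = sorted(probs.items(), key=lambda kv: -kv[1])  # decreasing probability
    code, cumulative = {}, 0.0
    for symbol, p in items:
        length = max(1, math.ceil(-math.log2(p)))
        frac, bits = cumulative, []
        for _ in range(length):        # expand F_i in binary, bit by bit
            frac *= 2
            bit = int(frac)
            bits.append(str(bit))
            frac -= bit
        code[symbol] = "".join(bits)
        cumulative += p
    return code

print(shannon_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
```

For the dyadic distribution above the construction yields the optimal prefix-free code {a: 0, b: 10, c: 110, d: 111}; in general Shannon coding is within one bit per symbol of the entropy.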
Directory of Open Access Journals (Sweden)
Atamewoue Surdive
2017-12-01
Full Text Available In this paper, we define linear codes and cyclic codes over a finite Krasner hyperfield and we characterize these codes by their generator matrices and parity check matrices. We also demonstrate that codes over finite Krasner hyperfields are more interesting for coding theory than codes over classical finite fields.
Crime among irregular immigrants and the influence of internal border control
Leerkes, Arjen; Engbersen, Godfried; Leun, Joanne
2012-01-01
Both the number of crime suspects without legal status and the number of irregular or undocumented immigrants held in detention facilities increased substantially in the Netherlands between 1997 and 2003. In this period, the Dutch state increasingly attempted to exclude irregular immigrants from the formal labour market and public provisions. At the same time the registered crime among irregular migrants rose. The 'marginalisation thesis' asserts that a larger number of migrants hav...
Scattering Properties of Large Irregular Cosmic Dust Particles at Visible Wavelengths
International Nuclear Information System (INIS)
Escobar-Cerezo, J.; Palmer, C.; Muñoz, O.; Moreno, F.; Penttilä, A.; Muinonen, K.
2017-01-01
The effect of internal inhomogeneities and surface roughness on the scattering behavior of large cosmic dust particles is studied by comparing model simulations with laboratory measurements. The present work shows the results of an attempt to model a dust sample measured in the laboratory with simulations performed by a ray-optics model code. We consider this dust sample as a good analogue for interplanetary and interstellar dust as it shares its refractive index with known materials in these media. Several sensitivity tests have been performed for both structural cases (internal inclusions and surface roughness). Three different samples have been selected to mimic inclusion/coating inhomogeneities: two measured scattering matrices of hematite and white clay, and a simulated matrix for water ice. These three matrices are selected to cover a wide range of imaginary refractive indices. The selection of these materials also seeks to study astrophysical environments of interest such as Mars, where hematite and clays have been detected, and comets. Based on the results of the sensitivity tests shown in this work, we perform calculations for a size distribution of a silicate-type host particle model with inclusions and surface roughness to reproduce the experimental measurements of a dust sample. The model fits the measurements quite well, proving that surface roughness and internal structure play a role in the scattering pattern of irregular cosmic dust particles.
Effects of magnetic storm phases on F layer irregularities below the auroral oval
International Nuclear Information System (INIS)
Aarons, J.; Gurgiolo, C.; Rodger, A.S.
1988-01-01
Observations of F-layer irregularity development and intensity were obtained between September and October 1981, primarily over subauroral latitudes in the area of the plasmapause. The results reveal the descent of the auroral irregularity region to include subauroral latitudes in the general area of the plasmapause during the main phases of a series of magnetic storms. Irregularities were found primarily at lower latitudes during the subauroral or plasmapause storm. A model for the subauroral irregularities in recovery phases of magnetic storms is proposed in which energy stored in the ring current is slowly released. 27 references
Dependence on zenith angle of the strength of 3-meter equatorial electrojet irregularities
International Nuclear Information System (INIS)
Ierkic, H.M.; Fejer, B.G.; Farley, D.T.
1980-01-01
Radar measurements in Peru were used to deduce the zenith angle dependence of the scattering cross section of plasma irregularities generated by instabilities in the equatorial electrojet. The irregularities probed by the 50 MHz Jicamarca radar had a wavelength of 3 m. The cross section for the type 2 irregularities was isotropic in the plane perpendicular to the magnetic field, while the cross section for the stronger type 1 irregularities varied with zenith angle at a rate of approximately 0.3 dB/degree; the horizontally traveling waves were more than 100 times stronger than those traveling vertically.
Spatial irregularities in Jupiter's upper ionosphere observed by Voyager radio occultations
Hinson, D. P.; Tyler, G. L.
1982-01-01
Radio scintillations (at 3.6 and 13 cm) produced by scattering from ionospheric irregularities during the Voyager occultations are interpreted using a weak-scattering theory. Least squares solutions for ionospheric parameters derived from the observed fluctuation spectra yield estimates of (1) the axial ratio, (2) angular orientation of the anisotropic irregularities, (3) the power law exponent of the spatial spectrum of irregularities, and (4) the magnitude of the spatial variations in electron density. It is shown that the measured angular orientation of the anisotropic irregularities indicates magnetic field direction and may provide a basis for refining Jovian magnetic field models.
Energy Technology Data Exchange (ETDEWEB)
Panyala, Ajay; Chavarría-Miranda, Daniel; Manzano, Joseph B.; Tumeo, Antonino; Halappanavar, Mahantesh
2017-06-01
High performance, parallel applications with irregular data accesses are becoming a critical workload class for modern systems. In particular, the execution of such workloads on emerging many-core systems is expected to be a significant component of applications in data mining, machine learning, scientific computing and graph analytics. However, power and energy constraints limit the capabilities of individual cores, memory hierarchy and on-chip interconnect of such systems, thus leading to architectural and software trade-offs that must be understood in the context of the intended application’s behavior. Irregular applications are notoriously hard to optimize given their data-dependent access patterns, lack of structured locality and complex data structures and code patterns. We have ported two irregular applications, graph community detection using the Louvain method (Grappolo) and high-performance conjugate gradient (HPCCG), to the Tilera many-core system and have conducted a detailed study of platform-independent and platform-specific optimizations that improve their performance as well as reduce their overall energy consumption. To conduct this study, we employ an auto-tuning based approach that explores the optimization design space along three dimensions - memory layout schemes, GCC compiler flag choices and OpenMP loop scheduling options. We leverage MIT’s OpenTuner auto-tuning framework to explore and recommend energy-optimal choices for different combinations of parameters. We then conduct an in-depth architectural characterization to understand the memory behavior of the selected workloads. Finally, we perform a correlation study to demonstrate the interplay between the hardware behavior and application characteristics. Using auto-tuning, we demonstrate whole-node energy savings and performance improvements of up to 49.6% and 60% relative to a baseline instantiation, and up to 31% and 45.4% relative to manually optimized variants.
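The three-dimensional design-space sweep described above can be sketched as an exhaustive grid search; the measure() function below is a stand-in with invented costs for illustration only (the actual study runs the workloads on the Tilera platform and drives the search with OpenTuner rather than enumerating the full grid):

```python
import itertools

# Hypothetical instances of the three optimization dimensions.
layouts = ["row-major", "blocked"]
flags = ["-O2", "-O3"]
schedules = ["static", "dynamic", "guided"]

def measure(layout, flag, schedule):
    """Placeholder: would run the workload and report (time_s, energy_J).
    The multipliers below are invented, not measured values."""
    cost = {"row-major": 1.0, "blocked": 0.8}[layout]
    cost *= {"-O2": 1.0, "-O3": 0.9}[flag]
    cost *= {"static": 1.0, "dynamic": 0.95, "guided": 0.9}[schedule]
    return cost, 50.0 * cost

# Pick the configuration with the lowest (mock) energy.
best = min(itertools.product(layouts, flags, schedules),
           key=lambda cfg: measure(*cfg)[1])
print(best)
```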
Fast algorithm for two-dimensional data table use in hydrodynamic and radiative-transfer codes
International Nuclear Information System (INIS)
Slattery, W.L.; Spangenberg, W.H.
1982-01-01
A fast algorithm for finding interpolated atomic data in irregular two-dimensional tables with differing materials is described. The algorithm is tested in a hydrodynamic/radiative transfer code and shown to be of comparable speed to interpolation in regularly spaced tables, which require no table search. The concepts presented are expected to have application in any situation with irregular vector lengths. Also, the procedures that were rejected either because they were too slow or because they involved too much assembly coding are described
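A minimal sketch of the kind of lookup the paper describes, assuming bilinear interpolation on a table whose axes are irregularly spaced (the bisection search replaces the direct index arithmetic available for regular grids):

```python
import bisect

def interp2(xs, ys, table, x, y):
    """Bilinear interpolation in a table with irregularly spaced, sorted axes.
    table[i][j] holds the value at (xs[i], ys[j])."""
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect.bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])   # fractional position in cell
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i][j] + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1] + tx * ty * table[i + 1][j + 1])

xs = [0.0, 1.0, 4.0]                          # irregular spacing
ys = [0.0, 2.0, 3.0]
table = [[x + y for y in ys] for x in xs]     # f(x, y) = x + y is bilinear
print(interp2(xs, ys, table, 2.0, 2.5))       # -> 4.5
```

For regularly spaced axes the two bisections reduce to a division, which is why regular tables need no search.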
Extended Schmidt law holds for faint dwarf irregular galaxies
Roychowdhury, Sambit; Chengalur, Jayaram N.; Shi, Yong
2017-12-01
Context. The extended Schmidt law (ESL) is a variant of the Schmidt law, which relates the surface densities of gas and star formation, with the surface density of stellar mass added as an extra parameter. Although the ESL has been shown to be valid for a wide range of galaxy properties, its validity in low-metallicity galaxies has not been comprehensively tested. This is important because metallicity affects the crucial atomic-to-molecular transition step in the process of conversion of gas to stars. Aims: We empirically investigate for the first time whether low-metallicity faint dwarf irregular galaxies (dIrrs) from the local universe follow the ESL. Here we consider the "global" law where surface densities are averaged over the galactic discs. dIrrs are unique not only because they are at the lowest end of mass and star formation scales for galaxies, but also because they are metal-poor compared to the general population of galaxies. Methods: Our sample is drawn from the Faint Irregular Galaxy GMRT Survey (FIGGS), which is the largest survey of atomic hydrogen in such galaxies. The gas surface densities are determined using their atomic hydrogen content. The star formation rates are calculated using GALEX far-ultraviolet fluxes after correcting for dust extinction, whereas the stellar surface densities are calculated using Spitzer 3.6 μm fluxes. The surface densities are calculated over the stellar discs defined by the 3.6 μm images. Results: We find dIrrs indeed follow the ESL. The mean deviation of the FIGGS galaxies from the relation is 0.01 dex, with a scatter around the relation of less than half that seen in the original relation. In comparison, we also show that the FIGGS galaxies are much more deviant when compared to the "canonical" Kennicutt-Schmidt relation. Conclusions: Our results help strengthen the universality of the ESL, especially for galaxies with low metallicities. We suggest that models of star formation in which feedback from previous generations
Irregular Saturnian Moon Lightcurves from Cassini-ISS Observations: Update
Denk, Tilmann; Mottola, S.
2013-10-01
Cassini ISS-NAC observations of the irregular moons of Saturn revealed various physical information on these objects. 16 synodic rotational periods: Hati (S43): 5.45 h; Mundilfari (S25): 6.74 h; Suttungr (S23): ~7.4 h; Kari (S45): 7.70 h; Siarnaq (S29): 10.14 h; Tarvos (S21): 10.66 h; Ymir (S19, sidereal period): 11.92220 h ± 0.1 s; Skathi (S27): ~12 h; Hyrrokkin (S44): 12.76 h; Ijiraq (S22): 13.03 h; Albiorix (S26): 13.32 h; Bestla (S39): 14.64 h; Bebhionn (S37): ~15.8 h; Kiviuq (S24): 21.82 h; Thrymr (S30): ~27 h; Erriapus (S28): ~28 h. The average period for the prograde-orbiting moons is ~16 h, for the retrograde moons ~11½ h (includes Phoebe's 9.2735 h from Bauer et al., AJ, 2004). Phase-angle dependent behavior of lightcurves: The phase angles of the observations range from 2° to 105°. The lightcurves which were obtained at low phase (<40°) show the 2-maxima/2-minima pattern expected for this kind of objects. At higher phases, more complicated lightcurves emerge, giving rough indications on shapes. Ymir pole and shape: For satellite Ymir, a convex-hull shape model and the pole-axis orientation have been derived. Ymir's north pole points toward λ = 230°±180°, β = -85°±10°, or RA = 100°±20°, Dec = -70°±10°. This is anti-parallel to the rotation axes of the major planets, indicating that Ymir not just orbits, but also rotates in a retrograde sense. The shape of Ymir resembles a triangular prism with edge lengths of ~20, ~24, and ~25 km. The ratio between the longest (~25 km) and shortest axis (pole axis, ~15 km) is ~1.7. Erriapus seasons: The pole direction of object Erriapus has probably a low ecliptic latitude. This gives this moon seasons similar to the Uranian regular moons with periods where the sun stands very high in the sky over many years, and with years-long periods of permanent night. Hati density: The rotational frequency of the fastest rotator (Hati) is close to the frequency where the object would lose material from the surface if
Vector Network Coding Algorithms
Ebrahimi, Javad; Fragouli, Christina
2010-01-01
We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role to coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...
Energy Technology Data Exchange (ETDEWEB)
Anderson, Jonas T., E-mail: jonastyleranderson@gmail.com
2013-03-15
In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. - Highlights: • We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. • We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. • We find and classify all 2D homological stabilizer codes. • We find optimal codes among the homological stabilizer codes.
Worst configurations (instantons) for compressed sensing over reals: a channel coding approach
International Nuclear Information System (INIS)
Chertkov, Michael; Chilappagari, Shashi K.; Vasic, Bane
2010-01-01
We consider Linear Programming (LP) solution of a Compressed Sensing (CS) problem over reals, also known as the Basis Pursuit (BasP) algorithm. The BasP allows interpretation as a channel-coding problem, and it guarantees error-free reconstruction over reals for a properly chosen measurement matrix and sufficiently sparse error vectors. In this manuscript, we examine how the BasP performs on a given measurement matrix and develop a technique to discover sparse vectors for which the BasP fails. The resulting algorithm is a generalization of our previous results on finding the most probable error-patterns, so called instantons, degrading performance of a finite size Low-Density Parity-Check (LDPC) code in the error-floor regime. The BasP fails when its output is different from the actual error-pattern. We design a CS-Instanton Search Algorithm (ISA) generating a sparse vector, called a CS-instanton, such that the BasP fails on the instanton, while its action on any modification of the CS-instanton decreasing a properly defined norm is successful. We also prove that, given a sufficiently dense random input for the error-vector, the CS-ISA converges to an instanton in a small finite number of steps. Performance of the CS-ISA is tested on the example of a randomly generated 512 × 120 matrix, which outputs the shortest instanton (error vector) pattern of length 11.
SimCommSys: taking the errors out of error-correcting code simulations
Directory of Open Access Journals (Sweden)
Johann A. Briffa
2014-06-01
Full Text Available In this study, we present SimCommSys, a simulator of communication systems that we are releasing under an open source license. The core of the project is a set of C++ libraries defining communication system components and a distributed Monte Carlo simulator. Of principal interest is the error-control coding component, where various kinds of binary and non-binary codes are implemented, including turbo, LDPC, repeat-accumulate and Reed–Solomon. The project also contains a number of ready-to-build binaries implementing various stages of the communication system (such as the encoder and decoder), a complete simulator and a system benchmark. Finally, SimCommSys also provides a number of shell and Python scripts to encapsulate routine use cases. As long as the required components are already available in SimCommSys, the user may simulate complete communication systems of their own design without any additional programming. The strict separation of development (needed only to implement new components) and use (to simulate specific constructions) encourages reproducibility of experimental work and reduces the likelihood of error. Following an overview of the framework, we provide some examples of how to use the framework, including the implementation of a simple codec, the specification of communication systems and their simulation.
ON THE STAR FORMATION LAW FOR SPIRAL AND IRREGULAR GALAXIES
Energy Technology Data Exchange (ETDEWEB)
Elmegreen, Bruce G., E-mail: bge@us.ibm.com [IBM Research Division, T.J. Watson Research Center, 1101 Kitchawan Road, Yorktown Heights, NY 10598 (United States)
2015-12-01
A dynamical model for star formation on a galactic scale is proposed in which the interstellar medium is constantly condensing to star-forming clouds on the dynamical time of the average midplane density, and the clouds are constantly being disrupted on the dynamical timescale appropriate for their higher density. In this model, the areal star formation rate scales with the 1.5 power of the total gas column density throughout the main regions of spiral galaxies, and with a steeper power, 2, in the far outer regions and in dwarf irregular galaxies because of the flaring disks. At the same time, there is a molecular star formation law that is linear in the main and outer parts of disks and in dIrrs because the duration of individual structures in the molecular phase is also the dynamical timescale, canceling the additional 0.5 power of surface density. The total gas consumption time scales directly with the midplane dynamical time, quenching star formation in the inner regions if there is no accretion, and sustaining star formation for ∼100 Gyr or more in the outer regions with no qualitative change in gas stability or molecular cloud properties. The ULIRG track follows from high densities in galaxy collisions.
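The 1.5-power scaling quoted above follows from Σ_SFR ∝ Σ_gas / t_dyn with t_dyn ∝ ρ^(-1/2) and ρ ∝ Σ_gas / H at fixed scale height H. A quick numerical check of the resulting log-log slope (constants dropped, units arbitrary):

```python
import math

H = 1.0  # fixed scale height (arbitrary units)

def sfr_surface_density(sigma_gas):
    rho = sigma_gas / H        # midplane density at fixed scale height
    t_dyn = rho ** -0.5        # dynamical time ~ (G*rho)^(-1/2), constants dropped
    return sigma_gas / t_dyn   # proportional to sigma_gas ** 1.5

s1, s2 = 1.0, 10.0
slope = ((math.log10(sfr_surface_density(s2)) - math.log10(sfr_surface_density(s1)))
         / (math.log10(s2) - math.log10(s1)))
print(slope)  # -> 1.5
```

In the flaring outer disks described above, H grows with radius, which steepens the effective dependence on Σ_gas toward the quoted power of 2.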
Influence of Ionospheric Irregularities on GNSS Remote Sensing
Directory of Open Access Journals (Sweden)
M. V. Tinin
2015-01-01
Full Text Available We have used numerical simulation to study the effects of ionospheric irregularities on the accuracy of global navigation satellite system (GNSS) measurements, using ionosphere-free (in atmospheric research) and geometry-free (in ionospheric research) dual-frequency phase combinations. It is known that elimination of these effects from multifrequency GNSS measurements is handicapped by diffraction effects during signal propagation through turbulent ionospheric plasma with the inner scale being smaller than the Fresnel radius. We demonstrated the possibility of reducing the residual ionospheric error in dual-frequency GNSS remote sensing in the ionosphere-free combination by Fresnel inversion. The inversion parameter, the distance to the virtual screen, may be selected from the minimum of amplitude fluctuations. This suggests the possibility of improving the accuracy of GNSS remote sensing in meteorology. In the study of ionospheric disturbances with the aid of the geometry-free combination, the Fresnel inversion eliminates only the third-order error. To eliminate the random TEC component which, like the measured average TEC, is a first-order correction, we should use temporal filtering (averaging).
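For scale, the Fresnel radius referred to above can be estimated as sqrt(λz) (one common convention; definitions differ by factors of order unity). For the GPS L1 wavelength and an assumed distance of 350 km to the scattering layer this gives a few hundred metres:

```python
import math

c = 299_792_458.0        # speed of light, m/s
f_L1 = 1.57542e9         # GPS L1 carrier frequency, Hz

def fresnel_radius(wavelength, distance):
    """First Fresnel radius sqrt(lambda * z) for a screen at distance z."""
    return math.sqrt(wavelength * distance)

lam = c / f_L1           # ~0.19 m
z = 350e3                # assumed distance to the ionospheric layer, m
print(fresnel_radius(lam, z))
```

Irregularities with inner scales below this size produce the diffraction effects that simple dual-frequency combinations cannot remove.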
Timing irregularities of PSR J1705-1906
Liu, Y. L.; Yuan, J. P.; Wang, J. B.; Liu, X. W.; Wang, N.; Yuen, R.
2018-05-01
Timing analysis of PSR J1705-1906 using data from the Nanshan 25-m and Parkes 64-m radio telescopes, spanning over fourteen years, shows that the pulsar exhibits significant proper motion and rotational instability. We updated the astrometric parameters and the spin parameters of the pulsar. In order to minimize the effect of timing irregularities on measuring its position, we employ the Cholesky method to analyse the timing noise. We obtain a proper motion of -77(3) mas yr^{-1} in right ascension and -38(29) mas yr^{-1} in declination. The power spectrum of the timing noise is analysed for the first time, which gives the spectral exponent α = -5.2 for the power-law model, indicating that the fluctuations in spin frequency and spin-down rate dominate the red noise. We detect two small glitches from this pulsar with fractional jumps in spin frequency of Δν/ν ~ 2.9 × 10^{-10} around MJD 55199 and Δν/ν ~ 2.7 × 10^{-10} around MJD 55953. Investigations of the pulse profile at different time segments suggest no significant changes in the pulse profile around the two glitches.
Distributed sensing of ionospheric irregularities with a GNSS receiver array
Su, Yang; Datta-Barua, Seebany; Bust, Gary S.; Deshpande, Kshitija B.
2017-08-01
We present analysis methods for studying the structuring and motion of ionospheric irregularities at the subkilometer scale sizes that produce L band scintillations. Spaced-receiver methods are used for Global Navigation Satellite System (GNSS) receivers' phase measurements over approximately subkilometer to kilometer length baselines for the first time. The quantities estimated by these techniques are plasma drift velocity, diffraction anisotropy magnitude and orientation, and characteristic velocity. Uncertainties are quantified by ensemble simulation of noise on the phase signals carried through to the observations of the spaced-receiver linear system. These covariances are then propagated through to uncertainties on drifts through linearization about the estimated values of the state. Five receivers of SAGA, the Scintillation Auroral Global Positioning System (GPS) Array, provide 100 Hz power and phase data for each channel at L1 frequency. The array is sited in the auroral zone at Poker Flat Research Range, Alaska. A case study of a single scintillating satellite observed by the array is used to demonstrate the spaced-receiver and uncertainty estimation process. A second case study estimates drifts as measured by multiple scintillating channels. These scintillations are correlated with auroral activity, based on all-sky camera images. Measurements and uncertainty estimates made over a 30 min period are compared to a collocated incoherent scatter radar and show good agreement in horizontal drift speed and direction during periods of scintillation for which the characteristic velocity is less than the drift velocity.
Metallicity of Young and Old Stars in Irregular Galaxies
Tikhonov, N. A.
2018-01-01
Based on archived images obtained with the Hubble Space Telescope, stellar photometry for 105 irregular galaxies has been conducted. We have shown the red supergiant and giant branches in the obtained Hertzsprung-Russell diagrams. Using the TRGB method, distances to the galaxies and metallicities of red giants have been determined. The color index (V - I) of the supergiant branch at the luminosity level M_I = -7 was chosen as the metallicity index of red supergiants. For the galaxies under study, diagrams have been built in which a correlation can be seen between the luminosity of galaxies (M_B) and the metallicity of red giants and supergiants. The main source of variance of the results in the obtained diagrams is, in our opinion, uncertainty in measurements of galaxy luminosities and star-forming outbursts. The relation between the metallicity of young and old stars shows that the main enrichment of galaxies with metals took place in the remote past. Deviations of some galaxies in the obtained relation can possibly be explained by the fall of intergalactic gas onto them, although this affects the metallicities of the stellar content only insignificantly.
Climatic irregular staircases: generalized acceleration of global warming.
De Saedeleer, Bernard
2016-01-27
Global warming rates mentioned in the literature are often restricted to a couple of arbitrary periods of time, or to isolated values of the starting year, lacking a global view. In this study, we instead perform an exhaustive parametric analysis of the NASA GISS LOTI data, and also of the HadCRUT4 data. The starting year systematically varies between 1880 and 2002, and the averaging period from 5 to 30 yr - not only decades; the ending year also varies. In this way, we uncover a whole unexplored space of values for the global warming rate, and access the full picture. Additionally, stairstep averaging and linear least-squares fitting to determine climatic trends have so far been exclusive. We propose here an original hybrid method which combines both approaches in order to derive a new type of climatic trend. We find that there is an overall acceleration of the global warming whatever the value of the averaging period, and that 99.9% of the 3029 Earth's climatic irregular staircases are rising. Graphical evidence is also given that choosing an El Niño year as the starting year gives lower global warming rates - except if there is a volcanic cooling in parallel. Our rates agree with and generalize several results mentioned in the literature.
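The parametric sweep over starting years described above can be sketched as follows, using a synthetic accelerating anomaly series (the real analysis uses GISS LOTI and HadCRUT4 data and also varies the averaging period and ending year):

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(1880, 2021))
# Synthetic accelerating warming: linear term plus a small quadratic term.
anomaly = [0.005 * (y - 1880) + 0.00005 * (y - 1880) ** 2 for y in years]

rates = {}
for start in (1880, 1950, 1990):
    idx = years.index(start)
    rates[start] = ols_slope(years[idx:], anomaly[idx:])
    print(start, round(100 * rates[start], 2), "degC/century")
```

Because the series accelerates, later starting years yield systematically steeper trends, which is the kind of structure the exhaustive sweep is designed to expose.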
Propagating star formation and irregular structure in spiral galaxies
International Nuclear Information System (INIS)
Mueller, M.W.; Arnett, W.D.
1976-01-01
A simple model is proposed which describes the irregular optical appearance often seen in late-type spiral galaxies. If high-mass stars produce spherical shock waves which induce star formation, new high-mass stars will be born which, in turn, produce new shock waves. When this process operates in a differentially rotating disk, our numerical model shows that large-scale spiral-shaped regions of star formation are built up. The structure is seen to be most sensitive to a parameter which governs how often a region of the interstellar medium can undergo star formation. For a proper choice of this parameter, large-scale features disappear before differential rotation winds them up. New spiral features continuously form, so some spiral structure is seen indefinitely. The structure is not the classical two-armed symmetric spiral pattern which the density-wave theory attempts to explain, but it is asymmetric and disorderly. The mechanism of propagating star formation used in our model is consistent with observations which connect young OB associations with expanding shells of gas. We discuss the possible interaction of this mechanism with density waves.
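The essential mechanism, star formation triggering neighbours subject to a refractory time, can be reduced to a toy cellular automaton. The sketch below is our own drastic simplification (one-dimensional ring, no differential rotation, hypothetical parameter names); it only illustrates how the refractory parameter `t_ref` plays the role of the quantity the abstract says the structure is most sensitive to.

```python
import numpy as np

rng = np.random.default_rng(7)

def step(active, refractory, t_ref, p_spread):
    """One step of a toy 1-D propagating-star-formation automaton.
    An active cell triggers star formation in each neighbour with
    probability p_spread, unless that neighbour is still refractory;
    newly active cells then become refractory for t_ref steps,
    limiting how often a region can undergo star formation again."""
    n = len(active)
    new_active = np.zeros(n, dtype=bool)
    for i in np.flatnonzero(active):
        for j in ((i - 1) % n, (i + 1) % n):
            if refractory[j] == 0 and rng.random() < p_spread:
                new_active[j] = True
    refractory[:] = np.maximum(refractory - 1, 0)  # clocks tick down
    refractory[new_active] = t_ref                 # new bursts reset clocks
    return new_active
```

In the full model each cell would additionally be sheared by differential rotation between steps, which is what stretches the bursts into spiral-shaped features.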
Weighted statistical parameters for irregularly sampled time series
Rimoldini, Lorenzo
2014-01-01
Unevenly spaced time series are common in astronomy because of the day-night cycle, weather conditions, dependence on the source position in the sky, allocated telescope time and corrupt measurements, for example, or inherent to the scanning law of satellites like Hipparcos and the forthcoming Gaia. Irregular sampling often causes clumps of measurements and gaps with no data which can severely disrupt the values of estimators. This paper aims at improving the accuracy of common statistical parameters when linear interpolation (in time or phase) can be considered an acceptable approximation of a deterministic signal. A pragmatic solution is formulated in terms of a simple weighting scheme, adapting to the sampling density and noise level, applicable to large data volumes at minimal computational cost. Tests on time series from the Hipparcos periodic catalogue led to significant improvements in the overall accuracy and precision of the estimators with respect to the unweighted counterparts and those weighted by inverse-squared uncertainties. Automated classification procedures employing statistical parameters weighted by the suggested scheme confirmed the benefits of the improved input attributes. The classification of eclipsing binaries, Mira, RR Lyrae, Delta Cephei and Alpha2 Canum Venaticorum stars employing exclusively weighted descriptive statistics achieved an overall accuracy of 92 per cent, about 6 per cent higher than with unweighted estimators.
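A minimal sketch of the idea of sampling-adaptive weights: under linear interpolation, each measurement "covers" the time interval given by the trapezoidal rule, so clumped points receive small weights and isolated points large ones. This is a simplified stand-in for the scheme in the paper, which also folds in the noise level; the function names are ours.

```python
import numpy as np

def interval_weights(t):
    """Per-point weights proportional to the time span each measurement
    covers under linear interpolation (trapezoidal rule), normalized to
    sum to one. Clumps of points share weight; gaps do not inflate it."""
    t = np.asarray(t, dtype=float)
    dt = np.diff(t)
    w = np.empty_like(t)
    w[0] = dt[0] / 2
    w[-1] = dt[-1] / 2
    w[1:-1] = (dt[:-1] + dt[1:]) / 2
    return w / w.sum()

def weighted_mean(t, x):
    """Sampling-weighted mean of an irregularly sampled series."""
    return np.sum(interval_weights(t) * np.asarray(x, dtype=float))
```

For example, three clumped points followed by one isolated point are no longer outvoted three-to-one: the isolated point's weight reflects the long interval it represents.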
Irregular persistent activity induced by synaptic excitatory feedback
Directory of Open Access Journals (Sweden)
Francesca Barbieri
2007-11-01
Full Text Available Neurophysiological experiments on monkeys have reported highly irregular persistent activity during the performance of an oculomotor delayed-response task. These experiments show that during the delay period the coefficient of variation (CV) of interspike intervals (ISI) of prefrontal neurons is above 1, on average, and larger than during the fixation period. In the present paper, we show that this feature can be reproduced in a network in which persistent activity is induced by excitatory feedback, provided that (i) the post-spike reset is close enough to threshold, and (ii) synaptic efficacies are a non-linear function of the pre-synaptic firing rate. Non-linearity between presynaptic rate and effective synaptic strength is implemented by a standard short-term depression mechanism (STD). First, we consider the simplest possible network with excitatory feedback: a fully connected homogeneous network of excitatory leaky integrate-and-fire neurons, using both numerical simulations and analytical techniques. The results are then confirmed in a network with selective excitatory neurons and inhibition. In both cases there is a large range of values of the synaptic efficacies for which the firing statistics of single cells are similar to the experimental data.
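The non-linearity in point (ii) has a standard closed form: for a Tsodyks-Markram-type depressing synapse, the steady-state fraction of available resources falls off with the presynaptic rate. The sketch below shows that form; the parameter values are illustrative defaults of ours, not taken from the paper.

```python
def std_efficacy(rate, U=0.2, tau_rec=0.5):
    """Steady-state short-term-depression factor of a Tsodyks-Markram
    synapse: effective synaptic strength scales as
    1 / (1 + U * tau_rec * rate), a decreasing, saturating (hence
    non-linear) function of the presynaptic firing rate in Hz.
    U is the release fraction, tau_rec the recovery time in seconds."""
    return 1.0 / (1.0 + U * tau_rec * rate)
```

Because the effective coupling weakens as the rate grows, the excitatory feedback self-limits, which is what allows persistent states at moderate rates where single-cell firing stays irregular.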
Diagnostic Coding for Epilepsy.
Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R
2016-02-01
Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.
Coding of Neuroinfectious Diseases.
Barkley, Gregory L
2015-12-01
Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.
Marfleet, Philip; Blustein, David L.
2011-01-01
Using an integrative perspective drawn from vocational psychology and migration studies, this article explores the lives of irregular migrants, which represents a unique aspect of work-based migration. Irregular migrants are those individuals who travel from regions without much work to states that offer some means of employment, without formal…
Bloemsaat, J.G.; Galen, G.P. van; Meulenbroek, R.G.J.
2003-01-01
This study investigated the combined effects of orthographical irregularity and auditory memory load on the kinematics of finger movements in a transcription-typewriting task. Eight right-handed touch-typists were asked to type 80 strings of ten seven-letter words. In half the trials an irregularly
The Relationships among Cognitive Correlates and Irregular Word, Non-Word, and Word Reading
Abu-Hamour, Bashir (Mu'tah University); Urso, Annmarie; Mather, Nancy
2012-01-01
This study explored four hypotheses: (a) the relationships among rapid automatized naming (RAN) and processing speed (PS) to irregular word, non-word, and word reading; (b) the predictive power of various RAN and PS measures, (c) the cognitive correlates that best predicted irregular word, non-word, and word reading, and (d) reading performance of…
Influence of initial stress, irregularity and heterogeneity on Love-type ...
Indian Academy of Sciences (India)
The present paper deals with the propagation of Love-type waves in an initially stressed irregular vertically heterogeneous layer lying over an initially stressed isotropic layer and an initially stressed isotropic half-space. Two different types of irregularities, viz., rectangular and parabolic, are considered at the interface.
Crime among irregular immigrants and the influence of internal border control
Leerkes, A.S.; Engbersen, G.; Leun, van der J.P.
2012-01-01
Abstract Both the number of crime suspects without legal status and the number of irregular or undocumented immigrants held in detention facilities increased substantially in the Netherlands between 1997 and 2003. In this period, the Dutch state increasingly attempted to exclude irregular immigrants
Crime among irregular immigrants and the influence of internal border control
A.S. Leerkes (Arjen); G.B.M. Engbersen (Godfried); J.P. van der Leun (Joanne)
2012-01-01
textabstractBoth the number of crime suspects without legal status and the number of irregular or undocumented immigrants held in detention facilities increased substantially in the Netherlands between 1997 and 2003. In this period, the Dutch state increasingly attempted to exclude irregular
Breaking Down Anonymity: Digital surveillance on irregular migrants in Germany and the Netherlands
D.W.J. Broeders (Dennis)
2009-01-01
textabstractThe presence of irregular migrants poses a tough problem for policy makers. Political and popular aversion against the presence of irregular migrants has mounted in most West European societies for years, yet their presence remains. Their exact numbers are obviously unknown - only
Dynamics of long-period irregular pulsations in high latitudes during strong magnetic storms
International Nuclear Information System (INIS)
Kurazhkovskaya, N.A.; Klajn, B.I.
1995-01-01
The effects of strong magnetic storms on np-type high-latitude long-period irregular pulsations at Mirny were studied using data obtained at an observatory in the southern hemisphere of the magnetosphere. The amplitude of the long-period irregular pulsations is shown to depend essentially on the duration of the storm's initial phase and on the nature of the solar-wind inhomogeneity that enables the growth of the strong storm. 14 refs
Spectral classification of medium-scale high-latitude F region plasma density irregularities
International Nuclear Information System (INIS)
Singh, M.; Rodriguez, P.; Szuszczewicz, E.P.; Sachs Freeman Associates, Bowie, MD)
1985-01-01
The high-latitude ionosphere represents a highly structured plasma. Rodriguez and Szuszczewicz (1984) reported a wide range of plasma density irregularities (150 km to 75 m) at high latitudes near 200 km. They have shown that the small-scale irregularities (7.5 km to 75 m) populated the dayside oval more often than the other phenomenological regions. It was suggested that in the lower F region the chemical recombination is fast enough to remove small-scale irregularities before convection can transport them large distances, leaving structured particle precipitation as the dominant source term for irregularities. The present paper provides the results of spectral analyses of pulsed plasma probe data collected in situ aboard the STP/S3-4 satellite during the period March-September 1978. A quantitative description of irregularity spectra in the high-latitude lower F region plasma density is given. 22 references
Significance of scatter radar studies of E and F region irregularities at high latitudes
International Nuclear Information System (INIS)
Greenwald, R.A.
1983-01-01
This chapter considers the mechanisms by which electron density irregularities may be generated in the high-latitude ionosphere and the techniques through which they are observed with ground-based radars. The capabilities of radars used for studying these irregularities are compared with the capabilities of radars used for incoherent scatter measurements. The use of irregularity scatter techniques for dynamic studies of larger-scale structured phenomena is discussed. Topics considered include E-region irregularities, observations with auroral radars, plasma drifts associated with a westward travelling surge, and ionospheric plasma motions associated with resonant waves. It is shown why high-latitude F-region irregularity studies must be made in the HF frequency band (3-30 MHz). The joint use of the European Incoherent Scatter Association (EISCAT), STARE and SAFARI facilities is examined, and it is concluded that the various techniques will enhance each other and provide a better understanding of the various processes being studied
New Opportunities for Remote Sensing Ionospheric Irregularities by Fitting Scintillation Spectra
Carrano, C. S.; Rino, C. L.; Groves, K. M.
2017-12-01
In a recent paper, we presented a phase screen theory for the spectrum of intensity scintillations when the refractive index irregularities follow a two-component power law [Carrano and Rino, DOI: 10.1002/2015RS005903]. More recently we have investigated the inverse problem, whereby phase screen parameters are inferred from scintillation time series. This is accomplished by fitting the spectrum of intensity fluctuations with a parametrized theoretical model using Maximum Likelihood (ML) methods. The Markov-Chain Monte-Carlo technique provides a posteriori errors and confidence intervals. The Akaike Information Criterion (AIC) provides justification for the use of one- or two-component irregularity models. We refer to this fitting as Irregularity Parameter Estimation (IPE) since it provides a statistical description of the irregularities from the scintillations they produce. In this talk, we explore some new opportunities for remote sensing ionospheric irregularities afforded by IPE. Statistical characterization of irregularities and the plasma bubbles in which they are embedded provides insight into the development of the underlying instability. In a companion paper by Rino et al., IPE is used to interpret scintillation due to simulated EPB structure. IPE can be used to reconcile multi-frequency scintillation observations and to construct high fidelity scintillation simulation tools. In space-to-ground propagation scenarios, for which an estimate of the distance to the scattering region is available a priori, IPE enables retrieval of zonal irregularity drift. In radio occultation scenarios, the distance to the irregularities is generally unknown but IPE enables retrieval of Fresnel frequency. A geometric model for the effective scan velocity maps Fresnel frequency to Fresnel scale, yielding the distance to the irregularities. We demonstrate this approach by geolocating irregularities observed by the CORISS instrument onboard the C/NOFS satellite.
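The two ingredients named above, a two-component power-law spectral model and an AIC score to decide whether the second component is justified, can be sketched as follows. The functional form and parameter names here are our illustrative assumptions (a broken power law continuous at a break frequency), not the exact parametrization of the cited paper.

```python
import numpy as np

def two_component_psd(f, C, p1, p2, fb):
    """Illustrative two-component power-law spectrum: slope -p1 below
    the break frequency fb and slope -p2 above it, continuous at fb
    where the value is C. A one-component model is the special case
    p1 == p2."""
    f = np.asarray(f, dtype=float)
    return np.where(f <= fb,
                    C * (f / fb) ** (-p1),
                    C * (f / fb) ** (-p2))

def aic(n, rss, k):
    """Akaike Information Criterion for a least-squares fit with k free
    parameters, n data points and residual sum of squares rss; lower
    is better, and the 2k term penalizes the extra break parameters."""
    return n * np.log(rss / n) + 2 * k
```

Model selection then amounts to fitting both the one- and two-component forms and keeping the one with the lower AIC, so the second component is retained only when it reduces the residuals enough to pay for its extra parameters.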
Irregular Morphing for Real-Time Rendering of Large Terrain
Directory of Open Access Journals (Sweden)
S. Kalem
2016-06-01
Full Text Available The following paper proposes an alternative approach to the real-time adaptive triangulation problem. A new region-based multi-resolution approach for terrain rendering is described which improves on-the-fly the distribution of the density of triangles inside the tile after selecting an appropriate Level-Of-Detail by adaptive sampling. This proposed approach organizes the heightmap into a QuadTree of tiles that are processed independently. This technique combines the benefits of both the Triangular Irregular Network approach and the region-based multi-resolution approach by improving the distribution of the density of triangles inside the tile. Our technique morphs the initial regular grid of the tile to a deformed grid in order to minimize the approximation error. The proposed technique strives to combine large tile size and real-time processing while guaranteeing an upper bound on the screen-space error. Thus, this approach adapts the terrain rendering process to local surface characteristics and enables on-the-fly handling of large amounts of terrain data. Morphing is based on multi-resolution wavelet analysis. The use of the D2WT multi-resolution analysis of the terrain heightmap speeds up processing and permits interactive terrain rendering. Tests and experiments demonstrate that the Haar B-Spline wavelet, well known for its properties of localization and its compact support, is suitable for fast and accurate redistribution. Such a technique could be exploited in a client-server architecture to support interactive high-quality remote visualization of very large terrains.
Feedback Limiting the Coastal Response to Irregularities in Shelf Bathymetry
List, J. H.; Benedet, L.
2007-12-01
Observations and engineering studies have shown that non-uniform inner shelf bathymetry can influence longshore sediment transport gradients and create patterns of shoreline change. One classic example is from Grand Isle, Louisiana, where two offshore borrow pits caused two zones of shoreline accretion landward of the pits. In addition to anthropogenic cases, many natural situations exist in which irregularities in coastal planform are thought to result from offshore shoals or depressions. Recent studies using the hydrodynamic model Delft3D have successfully simulated the observed nearshore erosion and accretion patterns landward of an inner shelf borrow pit. An analysis of the momentum balance in a steady-state simulation has demonstrated that both alongshore pressure gradients (due to alongshore variations in wave setup) and radiation stress gradients (terms relevant to alongshore forcing) are important for forcing the initial pattern of nearshore sedimentation in response to the borrow pit. The response of the coast to non-uniform inner shelf bathymetry appears to be limited, however, because observed shoreline undulations are often rather subtle. (An exception may exist in the case of a very high angle wave climate.) Therefore, feedbacks in processes must exist such that growth of the shoreline salient itself modifies the transport processes in a way that limits further growth (assuming the perturbation in inner shelf bathymetry itself remains unchanged). Examination of the Delft3D momentum balance for an inner shelf pit test case demonstrates that after a certain degree of morphologic development the forcing associated with the well-known shoreline smoothing process (a.k.a., diffusion) counteracts the forcing associated with the inner shelf pit, producing a negative feedback which arrests further growth of the shoreline salient. These results provide insights into the physical processes that control shoreline changes behind inner shelf bathymetric anomalies (i
TINITALY/01: a new Triangular Irregular Network of Italy
Directory of Open Access Journals (Sweden)
M. T. Pareschi
2007-06-01
Full Text Available A new Digital Elevation Model (DEM) of the natural landforms of Italy is presented. A methodology is discussed to build a DEM over wide areas where elevation data from non-homogeneous (in density and accuracy) input sources are available. The input elevation data include contour lines and spot heights derived from the Italian Regional topographic maps, satellite-based global positioning system points, ground-based and radar altimetry data. Owing to the great heterogeneity of the input data density, the DEM format that best preserves the original accuracy is a Triangular Irregular Network (TIN). A Delaunay-based TIN structure is improved by using the DEST algorithm, which enhances input data by evaluating inferred break-lines. According to this approach, biased distributions in slopes and elevations are absent. To prevent discontinuities at the boundary between regions characterized by data with different resolution, a cubic Hermite blending weight S-shaped function is adopted. The TIN of Italy consists of 1.39×10⁹ triangles. The average triangle area ranges from 12 to about 13 000 m², according to the different morphologies and sources. About 50% of the model has a local average triangle area <500 m². The vertical accuracy of the obtained DEM is evaluated with more than 200 000 sparse control points. The overall Root Mean Square Error (RMSE) is less than 3.5 m. The obtained national-scale DEM constitutes a useful support for carrying out accurate geomorphological and geological investigations over large areas. The problem of choosing the best step size in deriving a grid from a TIN is then discussed, and a method to quantify the loss of vertical information is presented as a function of the grid step. Some examples of DEM application are outlined. Upon request, a high-resolution stereo image database of the whole Italian territory (derived from the presented DEM) is available to browse via the Internet.
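The cubic Hermite S-shaped blending weight mentioned above has a well-known minimal form, the smoothstep polynomial: zero value and zero slope at one end, unit value and zero slope at the other, so blended elevations have no slope discontinuity at either boundary of the transition zone. The sketch below shows a plausible form of that function; the exact weight used for TINITALY/01 may differ.

```python
def hermite_blend(t):
    """Cubic Hermite S-shaped weight w(t) = 3t^2 - 2t^3 on [0, 1]:
    w(0) = 0 and w(1) = 1 with zero derivative at both ends, so a
    blend driven by w joins its endpoints smoothly."""
    t = min(max(t, 0.0), 1.0)  # clamp outside the transition zone
    return t * t * (3.0 - 2.0 * t)

def blend_elevation(z_coarse, z_fine, t):
    """Blend two elevation sources across a transition zone
    parameterized by t in [0, 1] (0 = coarse side, 1 = fine side)."""
    w = hermite_blend(t)
    return (1.0 - w) * z_coarse + w * z_fine
```

Because the weight's derivative vanishes at t = 0 and t = 1, the blended surface inherits the slope of whichever source dominates at each boundary, which is exactly what suppresses visible seams between data of different resolution.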
Periods, poles, and shapes of Saturn's irregular moons
Denk, Tilmann; Mottola, Stefano
2016-10-01
We report rotational-lightcurve observations of irregular moons of Saturn based on disk-integrated observations with the Narrow-Angle Camera of the Cassini spacecraft. From 24 measured rotation periods, 20 are now known with an accuracy of ~2% or better. The numbers are as follows (in hours; an '*' marks the less reliable periods): Hati 5.42; Mundilfari 6.74; Loge 6.94*; Skoll 7.26; Kari 7.70; Suttungr 7.82*; Bergelmir 8.13; Phoebe 9.274; Siarnaq 10.188; Narvi 10.21; Tarvos 10.69; Skathi 11.30; Ymir 11.922; Hyrrokkin 12.76; Greip 12.79*; Ijiraq 13.03; Albiorix 13.32; Bestla 14.624; Bebhionn 16.40; Paaliaq 18.75; Kiviuq 21.96; Erriapus 28.15; Thrymr 35 or >45*; Tarqeq 76.8. More recent data strengthen the notion that objects in orbits with an inclination supplemental angle i' > 27° have significantly slower spin rates than those at i' < 27°. The only fast rotator at i' > 27°, Siarnaq, stands opposed to at least eight objects with faster spins and i' < 27°. The i' > 27° bin contains all nine known prograde moons and four retrograde objects. A total of 25 out of 38 known outer moons have been observed with Cassini, and there is no chance to observe the 13 missing objects until end-of-mission. However, all unobserved objects are part of the i' > 27° population; among the remaining objects with i' > 27° whose spin rates are known, none is a fast rotator, with no exception. Several objects were observed repeatedly to determine pole directions, sidereal periods, and convex shapes. A few lightcurves have been observed to show three maxima and three minima even at low phase angles, suggesting objects with a triangular equatorial cross-section. Some objects with 2 maxima/2 minima are probably quite elongated. One moon even shows lightcurves with 4 maxima/4 minima.
THE MEASUREMENT METHODOLOGY IMPROVEMENT OF THE HORIZONTAL IRREGULARITIES IN PLAN
Directory of Open Access Journals (Sweden)
O. M. Patlasov
2015-08-01
Full Text Available Purpose. Within the track superstructure (TSS) there are structures for which the standard approach to deciding on their future operation is not entirely correct or acceptable. In particular, this concerns track sections that change their geometric parameters quite quickly: the radius of curvature, the angle of rotation, and the like. As an example, such portions of the TSS include crossovers, part of which lies within the so-called connecting part, which substantially changes curvature over a rather short length. Estimating the position of such a structure in plan on the basis of the existing technique (by the differences of adjacent versines) is virtually impossible. It is therefore proposed to supplement and improve the methodology for assessing the position of a curve in plan based on the differences of adjacent versines. Methodology. The possible options for measuring horizontal curves in plan were analyzed. The most adequate method, which does not contradict the existing standards as regards their applicability, was determined. Ease of measurement and calculation was taken into account. Findings. Qualitative and quantitative verification of the proposed and existing methods showed very good agreement of the measurement results. This gives grounds to assert that the methodology can be recommended to track-maintenance workers for the assessment of horizontal irregularities in plan, not only in curves but also within the connecting part of crossovers. Originality. The existing method for evaluating the geometric position of curves in plan was improved. It does not create new regulations, and all results are evaluated by existing norms. Practical value. The proposed technique makes it possible, without creating a new regulatory framework, to remain attached to the existing one while expanding the boundaries of its application. This method can be used not only for ordinary curves
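The versine (mid-chord offset) measurement that underlies the methodology has a simple geometric core: on a circular curve of radius R, a chord of length c has versine f = c²/(8R), and the differences of adjacent versines vanish, so non-zero differences signal a horizontal irregularity or a change of curvature. The sketch below shows only this textbook relation, not the paper's improved assessment procedure, and the function names are ours.

```python
def radius_from_versine(chord, versine):
    """Local curve radius from a mid-chord versine measurement using
    the standard approximation R = c^2 / (8 f), valid for f << c
    (units: same length unit for chord and versine)."""
    return chord ** 2 / (8.0 * versine)

def versine_irregularity(versines):
    """Differences of adjacent versines measured with a constant chord.
    On an ideal circular curve these differences are zero; deviations
    are the quantity evaluated against the track-geometry norms."""
    return [b - a for a, b in zip(versines, versines[1:])]
```

Within a crossover's connecting part the radius changes over a short length, so adjacent versines legitimately differ even on a fault-free track, which is why the standard constant-difference criterion breaks down there.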
Contribution of tropical instability waves to ENSO irregularity
Holmes, Ryan M.; McGregor, Shayne; Santoso, Agus; England, Matthew H.
2018-05-01
Tropical instability waves (TIWs) are a major source of internally-generated oceanic variability in the equatorial Pacific Ocean. These non-linear phenomena play an important role in the sea surface temperature (SST) budget in a region critical for low-frequency modes of variability such as the El Niño-Southern Oscillation (ENSO). However, the direct contribution of TIW-driven stochastic variability to ENSO has received little attention. Here, we investigate the influence of TIWs on ENSO using a 1/4° ocean model coupled to a simple atmosphere. The use of a simple atmosphere removes complex intrinsic atmospheric variability while allowing the dominant mode of air-sea coupling to be represented as a statistical relationship between SST and wind stress anomalies. Using this hybrid coupled model, we perform a suite of coupled ensemble forecast experiments initiated with westerly wind bursts (WWBs) in the western Pacific, where individual ensemble members differ only due to internal oceanic variability. We find that TIWs can induce a spread in the forecast amplitude of the Niño 3 SST anomaly 6 months after a given sequence of WWBs of approximately ±45% of the size of the ensemble mean anomaly. Further, when various estimates of stochastic atmospheric forcing are added, oceanic internal variability is found to contribute between about 20% and 70% of the ensemble forecast spread, with the remainder attributable to the atmospheric variability. While the oceanic contribution to ENSO stochastic forcing requires further quantification beyond the idealized approach used here, our results nevertheless suggest that TIWs may impact ENSO irregularity and predictability. This has implications for ENSO representation in low-resolution coupled models.
METAL ABUNDANCES OF 12 DWARF IRREGULARS FROM THE ADBS SURVEY
Energy Technology Data Exchange (ETDEWEB)
Haurberg, Nathalie C.; Salzer, John J. [Department of Astronomy, Indiana University, 727 E. Third St., Bloomington, IN 47405 (United States); Rosenberg, Jessica, E-mail: nhaurber@astro.indiana.edu, E-mail: slaz@astro.indiana.edu, E-mail: jrosenb4@gmu.edu [School of Physics, Astronomy and Computational Science, George Mason University, MS 3F3, Fairfax, VA 22030 (United States)
2013-03-01
We have analyzed long-slit spectra of 12 dwarf irregular galaxies from the Arecibo Dual Beam Survey (ADBS). These galaxies represent a heterogeneous sample of objects detected by ADBS, but on average they are relatively gas-rich, low-surface-brightness, and low-mass, and thus represent a region of the galaxian population that is not commonly included in optical surveys. The metallicity-luminosity relationship for these galaxies is analyzed; the galaxies discussed in this paper appear to be under-abundant at a given luminosity when compared to a sample from the literature. We attempt to identify a 'second parameter' responsible for the intrinsic scatter and apparent under-abundance of our galaxies. We do not find a definitive second parameter but note the possible indication that infall or mixing of pristine gas may be responsible. We derive oxygen abundances for multiple H II regions in many of our galaxies but do not find any strong indications of metallicity variation within the galaxies, except in one case where we see variation between an isolated H II region and the rest of the galaxy. Our data set includes the galaxy with the largest known H I-to-optical size ratio, ADBS 113845+2008. Our abundance analysis of this galaxy reveals that it is strongly over-enriched compared to galaxies of similar luminosity, indicating it is not a young object and confirming the result from Cannon et al. that this galaxy appears to be intrinsically rare in the local universe.
Ionospheric irregularities at low latitudes in the American sector
International Nuclear Information System (INIS)
Nakamura, Y.
1981-10-01
A detailed analysis of the atomic oxygen airglow emission at the wavelength of 6300 Å observed at Cachoeira Paulista (22°41'S, 45°00'W) shows that intensity perturbations frequently occur and propagate from north to south and from west to east. Such irregularities originate in the ionospheric F region and occur essentially during the premidnight period. These perturbations have a high frequency of occurrence during spring and summer and are rare during winter and fall. The disturbances are correlated with range-type spread F detected over Cachoeira Paulista, and have characteristics similar to equatorial ionospheric plasma bubbles (i.e., similar seasonal variation, time of occurrence, ionogram signatures, direction and speed of propagation, etc.). A numerical simulation is carried out for the generation and evolution of ionospheric bubbles based on the theory of the collisional Rayleigh-Taylor instability for the equatorial and Cachoeira Paulista regions. A study was also made of the evolution of the bubble as a function of the electron density profile and of the amplitude of the initial density perturbation. Assuming the electron density profile perturbed by the bubble, the [OI] 6300 Å intensity was calculated for various latitudes arbitrarily taken within the photometer scanning range. The bubble was assumed to be aligned with the Earth's magnetic field, extending from higher altitudes in the equatorial region down to an arbitrary height of 150 km at which a negligible conductivity is assumed. It was also assumed that the bubble was moving upwards with a velocity of 120 m/s, which in turn was estimated from initial numerical simulation results. The airglow calculation results show that as the bubble rises, the disturbances in the airglow intensity propagate from north to south, in accord with the observed experimental results. (Author) [pt